Google's AI models have a secret ingredient that's giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and the company has only just scratched the surface of how it can use your information to "personalize" Gemini's responses.
Earlier this year, Google started letting users opt in to its "Gemini with personalization" feature, which lets the AI model tap into your search history "to provide responses that are uniquely insightful and directly address your needs." But now, Google is taking things a step further by unlocking access to even more of your information - all in the name of providing you with more personalized, AI-generated responses.
During Google I/O on Tuesday, Google introduced something called "personal context," which will allow Gemini models to pull relevant information from across Google's apps, as long as it has your permission. One way Google is doing this is through Gmail's personalized smart replies - the AI-generated messages that you can use to quickly reply to emails.
To make these AI responses sound "authentically like you," Gemini will pore over your previous emails and even your Google Drive files to cr …
Colorado-based Sierra Space is getting ready to launch its reusable space plane, Dream Chaser. | Image: Sierra Space
In The Andromeda Strain, Michael Crichton wrote about killer alien space crystals that are (spoiler alert) ultimately stymied by Earth's breadth of pH values. In reality, crystals grown in space could be key to a new generation of cancer-fighting treatments that save lives, not threaten them.
Colorado-based startup Sierra Space is nearly ready to launch its reusable space plane, Dream Chaser. It's set to carry into orbit a 3D-printed module designed by engineers at pharma giant Merck. If the test goes well, and if Dream Chaser's gentle reentry process keeps that sensitive cargo safe, this could be the start of something big - despite those crystals being microscopic.
A brief history of space crystals
Space crystals sound like something an astrology guru would hang over their bed to help them sleep, but there's real science here. According to the ISS National Lab, crystals grown in space are simply better: "Scientists hypothesize that these observed benefits result from a slower, more uniform movement of molecules into a crystalline lattice in microgravity."
Research into monoclonal antibodies points towards crystallization as being key for developing more stable, subcutaneous …
AMD has announced its latest Zen 5-based Ryzen Threadripper 9000 Series of CPUs at Computex today. The 9000 Series and 9000 WX-Series are built for the demanding workstation market, and the top Threadripper Pro 9995WX will ship with 96 cores and 192 threads.
This flagship Threadripper chip is designed for professionals who are working on visual effects, simulations, and AI model development. The Threadripper Pro 9995WX also has up to 384MB of L3 cache and 128 lanes of PCIe Gen 5, making it ideal to pair with multiple GPUs.
AMD claims that the Threadripper Pro 9995WX is 2.2x faster than Intel’s 60-core Xeon W9-3595X processor in Cinebench 2024 multi-threaded rendering.
If you don’t need a 96-core CPU, AMD’s Threadripper 9000 Series is also targeted at enthusiasts and creators who want workstation-like performance. The Ryzen Threadripper 9980X has 64 cores and 128 threads, a base frequency of 3.2GHz, and 320MB of L3 cache.
All of these new Threadripper chips, pro or not, will run at a thermal design power (TDP) of 350 watts and will work (after a BIOS update) on existing motherboards that support the sTR5 socket.
Both the Threadripper 9000 Series and the Pro WX-Series processors will be available from retailers in July, but AMD isn’t announcing pricing just yet. Given that its high-end Threadripper 7980X retailed at $4,999 in 2023, it’s fair to expect these next-gen equivalents to land around that price.
Google I/O was, as predicted, an AI show. But now that the keynote is over, we can see that the company's vision is to use AI to eventually do a lot of Googling for you.
A lot of that vision rests on AI Mode in Google Search, which Google is starting to roll out to everyone in the US. AI Mode offers a more chatbot-like interface right inside Search, and behind the scenes, Google is doing a lot of work to pull in information instead of making you scroll through a list of blue links.
Onstage, Google presented an example of someone asking for things to do in Nashville over a weekend with friends who like food, music, and "exploring off the beaten path." AI Mode hopped into action, creating Google-curated lists of "restaurants good for foodies," recommending places with a "chill bar atmosphere with live music," highlighting "places off the beaten path," and suggesting websites featuring good things to do in Nashville. It even created a custom map recommending places to go. (If you're doing some shopping, AI Mode can show you a personalized batch of listings, too.)
This is essentially Google doing your planning work for you. The service generated a whole bunch of related search quer …
Microsoft is working on a new “Cross Device Resume” feature for Windows 11 that works similarly to Apple’s Handoff feature in macOS. The feature was spotted in a Microsoft Build 2025 session, before Windows Central noticed Microsoft editing out the demo that showed a mobile Spotify session resuming on a PC.
“When you open the app on your mobile device or tablet, Windows can show a subtle badge right on your app’s taskbar icon,” explains Aakash Varshney, a senior product manager for cross devices and experiences at Microsoft, in a “Create Seamless Cross-Device Experiences with Windows for your app” Build session for developers. “It’s a visual nudge that when clicked launches your app directly into the task, delivering a smooth intuitive handoff from PC to phone.”
Varshney’s now-deleted demo shows a Spotify app icon with a badge on it in the taskbar, and a message when you hover over the badge that says “resume, recently opened on your mobile device.” It’s designed to let you resume the Spotify app on PC right from where you left off on mobile. “Spotify launches and I’m instantly back in the same song, now playing on my PC,” says Varshney. “No need to search or start over, it’s a smooth one-click transition that keeps the music and user experience uninterrupted.”
Microsoft first started testing an app handoff feature in Windows 10, back in 2016. Codenamed Project Rome, the cross-device experience for apps was designed for developers to write apps that can “run on multiple devices and travel with the user as they switch between devices.” We’ve not seen much adoption of Project Rome in reality though, so hopefully this new Cross Device Resume is more widely adopted.
A year ago, Xbox president Sarah Bond revealed that Microsoft was planning to launch a new Xbox mobile web store in July 2024. That never happened. I’ve been wondering what the holdup has been over the past year, and it seems we might have an answer: Apple.
Microsoft filed an amicus brief late on Tuesday in support of Epic Games’ ongoing fight against Apple’s control over the App Store. The brief takes issue with Apple’s attempt to overturn the injunction that allows Epic and other developers to freely advertise alternative payment methods in their apps without paying Apple additional fees for purchases made outside of apps.
It’s a key ruling that has already allowed Fortnite to return to the App Store in the US, complete with the ability for Epic Games to link out to its own payment system inside the game. Microsoft has wanted to offer a similar experience for its Xbox mobile store prior to the ruling, but it says its solution “has been stymied by Apple.” Here’s how Microsoft explains it:
The district court’s injunction allows Apple to maintain its in-app exclusivity but at least should have enabled Microsoft to offer consumers a workable solution by launching its own online store — accessible via link-out — for in-app items to be purchased off-app and used in games or other apps. And that is what Microsoft wants to do. But even this solution has been stymied by Apple. Prior to the district court’s most recent order, Microsoft had been unable to implement linked-out payments (or even inform customers that alternative purchase methods exist) because of Apple’s new anti-steering policies that restrict Microsoft’s communication to users and impose an even higher economic cost to Microsoft than before the injunction.
The court ruling makes it possible for Microsoft to now launch its Xbox mobile store, but it’s clear that the software giant also wants to ensure Apple’s appeal against the ruling isn’t successful. If Microsoft did launch its Xbox mobile store and then Apple won a temporary stay, it may have to pull that store pending the appeal process.
Microsoft even notes in its filing that “Apple makes no argument that the technical or policy changes cannot be undone,” so it’s urging the ruling to be enforced pending Apple’s appeal. “Microsoft’s own experience managing app stores confirms that Apple’s policies could be restored if Apple ultimately prevails on appeal.”
The court ruling also impacts Microsoft’s main Xbox mobile app. “Similarly, Microsoft has long sought to enable Xbox app users on iOS to both buy and stream games in the app from the cloud or their other devices,” says Microsoft in its filing. “Apple’s policies have restricted Microsoft’s ability to offer these functionalities together; the injunction allows Microsoft to explore this possibility.”
Microsoft started rolling out the ability to purchase games and DLC inside the Xbox mobile app last month, but it had to remove the remote play option to adhere to Apple’s App Store policies. You can’t currently buy an Xbox game in the Xbox mobile app on iOS and then stream it inside that same app. You have to manually navigate to the Xbox Cloud Gaming mobile website on a browser to get access to cloud gaming.
Sarah Bond also announced plans to let players purchase and play games within the Xbox app on Android in October, just days after a court ruled that Google must crack open Android to third-party app stores. The feature was supposed to arrive in November, but Bond then blamed a “temporary administrative stay” for holding it back.
The original Wheel vertical turntable was a bust, the Wheel 2 a redemption, and now the small mom-and-pop team at Miniot is back for a victory lap with the Wheel 3. It can play your record collection upright on its stand, laid flat on a table, or hung on a wall.
Inside the handmade Wheel 3 turntable you’ll find a new optical stylus that the company says “blows away anything you’ve heard before!” It also features a new “bespoke, high-end” amplifier and a redesigned linear tonearm that moves up and down the spinning record instead of side to side like the Wheel 2 I reviewed back in 2023.
The volume and playback slider is still embedded in the rim at the top near the dimmable display, giving you precise digital control over your analog music track selection.
The new slimmed-down design also looks more cohesive, if colder. It’s made from two parts: a solid composite on the back married to a polished aluminum unibody front. A Wheel 3 Special Edition will feature a solid wooden back for fans of Miniot’s organic roots.
Amazon is issuing refunds to customers who’d returned products but never received their money back, in some cases from as long ago as 2018.
“Following a recent internal review, we identified a very small subset of returns where we issued a refund without the payment completing, or where we could not verify that the correct item had been sent back to us so no refund was issued,” Amazon spokesperson Maxine Tagay told The Verge. “There is no action required from customers to receive the refunds, and we have fixed the payment issue and made process changes to more promptly contact customers about unresolved returns going forward.”
Amazon emailed a similar message to affected customers, according to Bloomberg, while acknowledging the delay in processing the payments. “We could have notified these customers more clearly (and earlier) to better understand the status and help us resolve the return,” the email states. “Given the time elapsed, we’ve decided to err on the side of customers and just complete refunds for these returns.”
Amazon had hinted refunds were coming during its most recent earnings call on May 1st. CFO Brian Olsavsky confirmed that the company was reporting a one-time charge of $1.1 billion, partly attributable to “some historical customer returns,” along with the costs of stockpiling inventory in preparation for Trump’s tariffs.
There have been reports of the belated refunds from Amazon customers on Reddit, X, and LinkedIn for over a week, with one poster claiming to have received almost $1,800 this week for a TV returned to the retailer in 2018. Others claim to have received money from Amazon with no explanation why, and some report receiving refunds for products they never returned in the first place.
The company is currently facing a potential class action lawsuit that alleges it systematically failed to issue refunds to customers, or reversed refunds that had been issued. The suit was filed in 2023, but this April a judge rejected Amazon’s move to have it dismissed. It’s currently awaiting certification, a necessary step before other Amazon customers can join the class action.
Update, May 21st: Added an official statement from Amazon.
Microsoft’s head of security for AI, Neta Haiby, accidentally revealed confidential messages about Walmart’s use of Microsoft’s AI tools during a Build talk that was disrupted by protesters.
The Build livestream was muted and the camera pointed down, but the session resumed moments later after the protesters were escorted out. In the aftermath, Haiby then accidentally switched to Microsoft Teams while sharing her screen, revealing confidential internal messages about Walmart’s upcoming use of Microsoft’s Entra and AI gateway services.
Haiby was co-hosting a Build session on best security practices for AI, alongside Sarah Bird, Microsoft’s head of responsible AI, when two former Microsoft employees disrupted the talk to protest against the company’s cloud contracts with the Israeli government.
“Sarah, you are whitewashing the crimes of Microsoft in Palestine, how dare you talk about responsible AI when Microsoft is fueling the genocide in Palestine,” shouted Hossam Nasr, an organizer with the protest group No Azure for Apartheid, and a former Microsoft employee who was fired for holding a vigil outside Microsoft’s headquarters for Palestinians killed in Gaza.
Walmart is one of Microsoft’s biggest corporate customers, and already uses the company’s Azure OpenAI service for some of its AI work. “Walmart is ready to rock and roll with Entra Web and AI Gateway,” says one of Microsoft’s cloud solution architects in the Teams messages. The chat session also quoted a Walmart AI engineer, saying: “Microsoft is WAY ahead of Google with AI security. We are excited to go down this path with you.”
We asked Microsoft to comment on this protest and the Teams messages, but the company did not respond in time for publication.
Both of the protesters involved in this latest Microsoft Build disruption were former Microsoft employees, with Vaniya Agrawal appearing alongside Nasr. Agrawal interrupted Microsoft co-founder Bill Gates, former CEO Steve Ballmer, and CEO Satya Nadella during the company’s 50th anniversary event last month. Agrawal was dismissed shortly after putting in her two weeks’ notice at Microsoft before the protest, according to an email seen by The Verge.
This latest protest comes days after Microsoft announced last week that it had conducted an internal review and used an unnamed external firm to assess how its technology is used in the war in Gaza. Microsoft says that its relationship with Israel’s Ministry of Defense (IMOD) is “structured as a standard commercial relationship” and that it has “found no evidence that Microsoft’s Azure and AI technologies, or any of our other software, have been used to harm people or that IMOD has failed to comply with our terms of service or our AI Code of Conduct.”
At yesterday’s I/O conference, Google announced plans to start putting its AI chatbot, Gemini, in a variety of different places, including cars. Today, Volvo said it was shoving its way to the front of the line to be the first to receive the new tech.
Volvo said it was expanding its preexisting partnership with Google — the Swedish automaker was one of the first to adopt the built-in Android Automotive operating system for its vehicles — to include the integration of Gemini across its model lineup. Drivers will be able to have more “natural” conversations with their car, including language translation, navigational help, and finding specific locations. They’ll even be able to ask the AI assistant to answer questions about their vehicle’s user manual. Volvo framed it as easing the driver’s “cognitive load” so they can keep their eyes on the road.
Earlier this month, Google said that it would make Gemini available to cars that support Android Auto, the company’s popular phone mirroring program. But whereas Android Auto users will get access to Gemini in the coming weeks, vehicles with Android Automotive — marketed as Google built-in — won’t get access until later this year.
Drivers and passengers will be able to use Gemini to send texts, get directions, play music, and do basically all the things that Google Assistant has been able to do. The main difference is that users won’t have to use stilted, robotic commands, instead relying on Gemini’s natural language capabilities.
Volvo will also now serve as one of Google’s reference hardware platforms for the development and testing of future automotive technology. That means Volvo’s vehicles will get “new features and updates” from Google before they are added to the main Android codebase.
“Through this partnership with Google, we are able to bring the very latest features and capabilities from the leading consumer ecosystem into our products first,” Alwin Bakkenes, head of global software engineering at Volvo Cars, said in a statement. “With our expanding partnership, we’re collaborating on cutting-edge solutions that shape the future of connected cars.”
Google’s annual developer conference, held at the Shoreline Amphitheater in Mountain View, California, was all about Gemini this year. And the car is quickly emerging as an important platform for the chatbot, especially given its myriad challenges — namely, steering a 2-ton metal box through a complex environment riddled with pitfalls.
Google is positioning Gemini as a team player with the other core parts of the vehicle’s operating system. “Navigation apps can integrate with Gemini using three core intent formats, allowing you to start navigation, display relevant search results, and execute custom actions, such as enabling users to report incidents like traffic congestion using their voice,” Ben Sagmoe, developer relations engineer, wrote on the Android Developers Blog.
AMD is officially announcing its Radeon RX 9060 XT GPU at Computex today. As the number implies, this graphics card will challenge Nvidia’s recently released RTX 5060 and RTX 5060 Ti, with AMD offering models with 8GB or 16GB of VRAM. AMD is launching both models on June 5th, with the 8GB variant priced at $299 and the 16GB version at $349.
AMD is following Nvidia’s controversial choice to ship a modern GPU with just 8GB of VRAM in the year 2025. The 8GB VRAM debate has been raging for months now, particularly because the latest games can be very demanding on the memory side. It’ll be interesting to see what reviewers make of both cards in this important part of the market.
The RX 9060 XT will ship with 32 RDNA 4 compute units, a boost clock of 3.13GHz, and support for DisplayPort 2.1a and HDMI 2.1b. The total board power is between 150 watts and 182 watts, depending on the model. AMD claims its 16GB version of the RX 9060 XT will be around 6 percent faster than Nvidia’s RTX 5060 Ti at 1440p resolution, based on 40 games that AMD has tested itself.
We’re still waiting to hear how the RTX 5060 stacks up, because oddly, Nvidia launched its latest 50-series GPU yesterday without any reviews available. The GPU maker had reportedly prevented reviewers from obtaining the necessary driver to test the RTX 5060 ahead of the release date, presumably because it’s worried about the paltry 8GB of VRAM spec.
While the 8GB of VRAM choice for both Nvidia and AMD is controversial, Nvidia has managed to spark a further wave of outrage from PC gaming YouTubers over comments it has made to Gamers Nexus. In a 22-minute video, Gamers Nexus discusses the pressure from Nvidia to include Multi Frame Generation (MFG) in benchmarks against competitor cards that don’t have a similar feature. Gamers Nexus (GN) alleges that Nvidia has even implied that it would revoke access to interview Nvidia engineers unless the channel discussed MFG more.
Update, May 21st: Article updated with pricing and release date information that AMD didn’t share with The Verge ahead of its press conference.
Fortnite is once again available on the iOS App Store in the US, according to Epic Games. You can get it from the App Store here. Epic says it has returned to the Epic Games Store and AltStore as well.
Apple kicked Fortnite off the App Store nearly five years ago after Epic Games added its own in-app payment system to the game, which violated Apple’s rules. But after a major court ruling in Epic Games v. Apple that forced Apple to not take fees from purchases made outside of apps, the game is available to play on US iPhones once again.
Shortly after the big ruling hit, CEO Tim Sweeney said that Epic planned to bring back Fortnite to iOS in the US. He also made a “peace proposal”: “If Apple extends the court’s friction-free, Apple-tax-free framework worldwide, we’ll return Fortnite to the App Store worldwide and drop current and future litigation on the topic.”
Late last week, Epic said that Apple had blocked Fortnite’s return to the App Store, and the game also became unavailable on other alternative app stores in the EU. However, Apple said that it had “asked that Epic Sweden resubmit the app update without including the US storefront of the App Store so as not to impact Fortnite in other geographies” and that “we did not take any action to remove the live version of Fortnite from alternative distribution marketplaces.”
Epic asked the judge in the Epic v. Apple case to order Apple to review its Fortnite submission on May 16th. Yesterday, the judge said in a filing that Apple is “fully capable of resolving this issue without further briefing or a hearing,” and that if a resolution wasn’t reached, the Apple official who “is personally responsible for ensuring compliance” would have to appear at a hearing next Tuesday.
However, shortly after Fortnite returned to the App Store on Tuesday, Epic and Apple filed a joint notice saying that they have “resolved all issues” from Epic’s May 16th filing. Apple didn’t immediately reply to a request for comment.
Epic also recently rolled out a new promotion to encourage players to use its payment systems: if you use Epic’s system in Fortnite, Rocket League, or Fall Guys on PC, iOS, Android, and the web, the company will give you 20 percent back in Epic Rewards that can be used for other purchases in its games or on the Epic Games Store.
In the iOS version of Fortnite that was released on Tuesday, the app shows you that 20 percent bonus when you pick which payment system you want to use to buy V-Bucks.
If you get the app from the App Store, it will be a small initial download, and after you actually open the app, it will download the rest of the game. For a colleague, that additional download was 12.95GB.
Update, May 20th: Added details of Epic and Apple’s joint notice.
Here in sunny Mountain View, California, I am sequestered in a teeny-tiny box. Outside, there's a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google's Android XR smart glasses prototypes. (The Project Mariner booth is maybe 10 feet away and remarkably empty.)
While nothing was going to steal AI's spotlight at this year's keynote - 95 mentions! - Android XR has been generating a lot of buzz on the ground. But the demos we got to see here were notably shorter, with more guardrails, than what I got to see back in December. Probably because, unlike a few months ago, there are cameras everywhere and these are "risky" demos.
First up is Project Moohan. Not much has changed since I first slipped on the headset. It's still an Android-flavored Apple Vision Pro, albeit much lighter and more comfortable to wear. Like Oculus headsets, there's a dial in the back that lets you adjust the fit. If you press the top button, it brings up Gemini. You can ask Gemini to do things, because that is what AI assistants are here for. Specifically, I ask it to take me to my old college stomping grounds in Tokyo in Google Maps without having to open the G …
Google has announced that it’s rolling out the colorful new Android 16 interface for beta testers, as reported by 9to5Google. The QPR1 beta includes the company’s Material 3 Expressive design language, officially revealed last week, with new visuals for the launcher, notifications, lock screen, and a very Apple-inspired quick settings page.
QPRs, or quarterly platform releases, are generally more feature-rich updates for Android compared to the monthly security update patches. Android 16 is expected to launch to everyone soon, and it will be followed in the fall by this QPR1 update that adds the new visual touches.
Users with eligible Pixel devices, from the Pixel 6 up through the 9a, that are registered in the Beta program can get access to the new release as soon as it’s ready. However, if you’re already beta testing Android 16 but you’d rather wait to get the new design, you can opt out of this release on the Android Beta website (note: don’t install the system update afterward, as that will wipe your device — just wait for Android 16’s official launch).
If you want to try the redesigned Android 16 but are not currently enrolled in the beta, Google posted instructions on Reddit on how to get started:
You can get started with Android 16 QPR1 Beta 1 today by enrolling your Pixel device. Eligible devices include Pixel 6, 6 Pro, 6a, 7, 7 Pro, 7a, 8, 8 Pro, 8a, 9, 9 Pro, 9a, and Pixel Tablet series devices. Once enrolled, eligible devices will receive an over-the-air (OTA) update to the latest Beta versions. If you were previously enrolled in Android 16 Beta (and have not opted out), you will automatically receive QPR1 Beta 1 and any future Beta updates.
Editions is the name of a new game publishing label launched by Lost in Cult, the same company known for making gorgeous books about video games, like Outer Wilds: Design Works. The new label’s aim is to preserve indie games, including some that haven’t been released on physical media before, and to celebrate their artistic contributions to the medium by including plenty of extra goodies. Notably, Lost in Cult is working with DoesItPlay? to validate its titles before they’re released. The group specializes in game preservation, ensuring that games can be run from the physical media they’re stored on without the need for a download or an internet connection.
The focus on elegantly preserving these titles is similar to what we’ve seen from Limited Run Games, while Editions’ focus on indie games reminds me of the Criterion Collection’s approach. Each game included in the Editions lineup will come with a fold-out poster, a sticker, a numbered authenticity card, a 40-page essay and developer interview, and gorgeous cover art, along with the game itself. The first three games to launch under the label include Immortality, The Excavation of Hob’s Barrow, and Thank Goodness You’re Here. Editions plans to announce a new game every month, starting in July.
Each of the three games is available to preorder through the Lost in Cult site starting at £59.99, with the option to choose between Nintendo Switch and PlayStation 5 editions when applicable. PS5 owners can opt to buy the entire first run of Editions games at a discounted price, containing Immortality and Thank Goodness You’re Here, and they’ll get a third (as-yet-unannounced) Editions title when it launches in July. Lost in Cult is asking for patience with shipments, which may take up to six months. But if they’re as good as the books, the wait will be worth it.
The Trump administration is working to limit access to covid booster shots by creating more regulatory hoops for companies developing vaccines for “healthy persons.” The Food and Drug Administration (FDA) says it’s only prioritizing covid vaccine approvals for adults older than 65 and others over the age of 6 months who have at least one “risk factor” for a severe case of covid-19.
“The FDA will approve vaccines for high-risk persons and, at the same time, demand robust, gold-standard data on persons at low risk,” FDA officials write in commentary laying out their plans in the New England Journal of Medicine (NEJM).
“This is overly restrictive and will deny many people who want to be vaccinated a vaccine,” Anna Durbin, director of the Center for Immunization Research at Johns Hopkins University, said in an email to the New York Times.
“The only thing that can come of this will make vaccines less insurable and less available,” Paul Offit, a vaccine scientist, virologist, and professor of pediatrics at the Children’s Hospital of Philadelphia, told The Associated Press.
The FDA says it will require more data from additional clinical trials before approvals can be granted for covid-19 vaccines being developed for people not considered to be at heightened risk of severe illness. It says 100 to 200 million Americans will still have annual access to covid vaccines after its policy change. That would be less than 60 percent of the US population.
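As a rough sanity check on that math (assuming a US population of roughly 340 million, an approximation not taken from the article), even the FDA's upper estimate of 200 million people works out to under 60 percent:

```python
# Back-of-the-envelope check of the FDA's coverage estimate.
# The US population figure is an approximation (assumption), not from the article.
us_population = 340_000_000
covered_low, covered_high = 100_000_000, 200_000_000

low_share = covered_low / us_population
high_share = covered_high / us_population
print(f"{low_share:.1%} to {high_share:.1%}")  # prints "29.4% to 58.8%"
```

Even the upper bound falls short of 60 percent, consistent with the article's framing.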
“We simply don’t know whether a healthy 52-year-old woman with a normal BMI who has had Covid-19 three times and has received six previous doses of a Covid-19 vaccine will benefit from the seventh dose,” the NEJM commentary says.
But previous CDC studies have shown that getting a booster can help prevent mild to moderate cases of covid up to six months after getting the shot regardless of whether a person is at higher risk or not, Offit tells The Associated Press. And even if someone does get sick, being vaccinated can make the illness shorter and less severe and reduce the risk of developing long covid, according to the Centers for Disease Control and Prevention.
The rate of covid-19-associated hospitalizations was 71.2 per 100,000 people during the 2024–25 season, according to the CDC — although hospitals haven’t been required to report covid-related hospital admissions to HHS since May of last year. Vaccines are an important safeguard for people with a weakened immune system. The FDA’s new directive raises questions about whether people considered healthy will be able to get vaccinated if they want to protect someone close to them who’s at greater risk.
In the NEJM article, the FDA notes that covid booster uptake has been low in the US, with less than a quarter of people getting the shot each year. “There may even be a ripple effect: public trust in vaccination in general has declined,” it says.
“It has become clear that truth and transparency are not desired by the Secretary, but rather he wishes subservient confirmation of his misinformation and lies,” Peter Marks, former director of the FDA’s Center for Biologics Evaluation and Research (CBER) that regulates vaccines, wrote in a resignation letter in March.
Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.
But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.
Google has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.
Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”
Project Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.
Companies like Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.
Google has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.
In addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.
Gemini 2.5 Pro adds an “enhanced” reasoning mode
The experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will be available to trusted testers first.
Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.
Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field of view, along with what appear to be built-in cameras and microphones.
Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.
Google is building its AI assistant into Chrome. Starting on May 21st, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on their behalf. Google plans on letting Gemini work across multiple tabs at once later this year.
Google is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.
Speaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing it what your camera sees.
After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that iOS users will be able to access it for free, as well.
Google has revealed Stitch, a new AI-powered tool that can generate interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.
Google Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.
Gmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to prewrite responses that sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.
Gmail’s upgraded smart replies will be available in English on the web, iOS, and Android when it launches through Google Labs in July.
Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that “understands the human body and nuances of clothing.”
Google will also soon let you shop in AI Mode, as well as use an “agentic checkout” feature that can purchase products on your behalf.
If Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.
They look like normal sunglasses in the render, with what appear to be cameras in the hinges and nose bridge. | Image: Xreal
The Google smart glasses era is back, sort of. Today, Google and Xreal announced a strategic partnership for a new Android XR device called Project Aura at the Google I/O developer conference.
This is officially the second Android XR device since the platform was launched last December. The first is Samsung’s Project Moohan, but that’s an XR headset more in the vein of the Apple Vision Pro. Project Aura, however, is firmly in the camp of Xreal’s other gadgets. The technically accurate term would be “optical see-through XR” device. More colloquially, it’s a pair of immersive smart glasses.
Xreal’s glasses, like the Xreal One, are like embedding two mini TVs into what looks like a regular — if a bit chunky — pair of sunglasses. Xreal’s previous gadgets let you plug into a phone or laptop and view whatever’s on the screen, be it a show or a confidential document you want to edit on a plane. The benefit is that you can change the opacity to view (or block out) the world around you. That’s the vibe Project Aura’s giving off, too.
Details are sparse — Xreal spokesperson Ralph Jodice told me we’ll learn a bit more at Augmented World Expo next month. But we know it’ll have Gemini built in, as well as a large field of view. In the product render, you can also see what look like cameras in the hinges and nose bridge, plus microphones and buttons in the temples.
That hints at a hardware evolution compared to Xreal’s current devices. Project Aura will run a Qualcomm chipset optimized for XR, though we don’t know exactly which one. Like Project Moohan, Project Aura is counting on developers to start building apps and use cases now, ahead of an actual consumer product launch. Speaking of, Google and Xreal said in a press release that Android XR apps developed for headsets can be easily brought over to a different form factor like Project Aura.
Back when I first demoed Android XR, I was told that while Google had built prototype glasses, the plan was to work with other partners to produce a viable product. That demo also made it abundantly clear that it viewed XR devices as a key vehicle for Gemini. So far, everything we know about Project Aura is aligned with that strategy. Meaning, Google’s approach to this next era of smart glasses is similar to how it first tackled Wear OS — Google provides the platform, while third parties handle the hardware. (At least, until Google feels like it’s ready to jump into the fray itself.) That makes a ton of sense given Google’s fraught history with smart glasses hardware. But given the momentum we’ve seen through Project Astra and now, Android XR making it into the main Google I/O keynote? “Google” smart glasses are back on the menu.
Google’s prototype glasses’ demos included integration with its AI assistant. | Image: Google
Google’s second era of smart glasses is off to a chic start. At its I/O developer conference today, Google announced that it’ll be partnering with Samsung, Gentle Monster, and Warby Parker to create smart glasses that people will actually want to wear.
The partnership hints that Google is taking style a lot more seriously this time around. Warby Parker is well known as a direct-to-consumer eyewear brand that makes it easy to get trendy glasses at a relatively accessible price. Meanwhile, Gentle Monster is currently one of the buzziest eyewear brands that isn’t owned by EssilorLuxottica. The Korean brand is popular among Gen Z, thanks in part to its edgy silhouettes and the fact that it’s favored by fashion-forward celebrities like Kendrick Lamar, Beyoncé, Rihanna, Gigi Hadid, and Billie Eilish. Partnering with both brands suggests that Android XR is aimed at versatile, everyday glasses and bolder, trendsetting options alike.
The other thing to note is that Google seems to be leaning on Samsung for XR glasses hardware, too. In a Keyword blog post, Google’s VP of XR, Shahram Izadi, noted that the company is “advancing its partnership with Samsung to go beyond headsets” and into glasses. Google also announced at I/O today that the first pair of Android XR-enabled glasses will be made by Xreal under the name Project Aura.
As for what these XR glasses will be able to do, Google was keen to emphasize that they’re a great vehicle for using Gemini. So far, Google’s prototype glasses have had cameras, microphones, and speakers so that its AI assistant can help you interpret the world around you. That included demos of taking photos, getting turn-by-turn directions, and live language translation. That pretty much lines up with what I saw at my Android XR hands-on in December, but Google has slowly been rolling out these demos more publicly over the past few months.
Altogether, it seems like Google is taking a page directly out of Meta’s smart glasses playbook — a clear nod to the success Meta has had with its Ray-Ban smart glasses. Meta revealed in February that it has already sold 2 million pairs and has been vocally positioning them as the ideal hardware for AI assistants.
The latter remains to be seen, but one thing the Ray-Ban Meta glasses have convincingly argued is that for smart glasses to go mainstream, they need to look cool. Not only do Meta’s glasses look like an ordinary pair of Ray-Bans, but Ray-Ban itself is an iconic brand known for its Wayfarer shape. In other words, they’re glasses the average person wouldn’t feel quite so put off wearing. Since launching its second-gen smart glasses in late 2023, Meta has also put out a few limited-edition versions, playing into the same fashion strategy as sneakers. Meta is also rumored to be releasing Oakley-branded versions of its smart glasses for athletes.
Google wants to make it easier to create AI-generated videos, and it has a new tool to do it. It’s called Flow, and Google is announcing it alongside its new Veo 3 video generation model, more controls for its Veo 2 model, and a new image generation model, Imagen 4.
With Flow, you can use things like text-to-video prompts and ingredients-to-video prompts (basically, sharing a few images that Flow can use alongside a prompt to help inform the model what you’re looking for) to build eight-second AI-generated clips. Then, you can use Flow’s scene-builder tools to stitch multiple clips together.
Flow seems kind of like a film editing app, but for building AI-generated videos, and while I’m not a filmmaker, I can see how it might be a useful tool based on a demo I saw. In a briefing, Thomas Iljic, a product manager at Google Labs, showed me a few examples of Flow in action.
In one demo, we watched an animated-style video; the “camera” zoomed out to reveal that the video was playing on a TV; the video zoomed out again to show the room the TV was in. Then, the “camera” slowly flew through a window and watched a truck pass by.
It all looked pretty seamless, though I only saw the video briefly in a tiny Google Meet window, so I can’t speak to any AI strangeness that might be visible if you look closely. But Flow isn’t so much about creating long videos; it’s more about helping filmmakers quickly get their ideas “on paper,” says Iljic.
As for Google’s new models announced at I/O, Veo 3 offers better quality, is easier to prompt, and can generate video and sound together (including dialogue), Matthieu Lorrain, creative lead at Google DeepMind, tells The Verge. It’s also better at understanding longer prompts and correctly handling a succession of events described in your prompt.
Veo 2 will offer tools like camera controls and object removal. And Google’s new image generation model, Imagen 4, has improved quality, can export in more formats, and is apparently better at writing real text instead of the AI garble that often appears in these images.
Flow is launching today in the US for people who subscribe to Google’s new Google AI Pro and Google AI Ultra plans. “Google AI Pro gives you the key Flow features and 100 generations per month, and Google AI Ultra gives you the highest usage limits and early access to Veo 3 with native audio generation,” according to a blog post.