Today — 20 May 2025 — The Verge News

We tried on Google’s prototype AI smart glasses

20 May 2025 at 14:27

Here in sunny Mountain View, California, I am sequestered in a teeny tiny box. Outside, there's a long line of tech journalists, and we are all here for one thing: to try out Project Moohan and Google's Android XR smart glasses prototypes. (The Project Mariner booth is maybe 10 feet away and remarkably empty.)

While nothing was going to steal AI's spotlight at this year's keynote — 95 mentions! — Android XR has been generating a lot of buzz on the ground. But the demos we got to see here were notably shorter, with more guardrails, than what I saw back in December. That's probably because, unlike a few months ago, there are cameras everywhere and these are "risky" demos.

First up is Project Moohan. Not much has changed since I first slipped on the headset. It's still an Android-flavored Apple Vision Pro, albeit much lighter and more comfortable to wear. Like Oculus headsets, there's a dial in the back that lets you adjust the fit. If you press the top button, it brings up Gemini. You can ask Gemini to do things, because that is what AI assistants are here for. Specifically, I ask it to take me to my old college stomping grounds in Tokyo in Google Maps without having to open the Go …

Read the full story at The Verge.

Google starts beta testing Android 16’s youthful new look

20 May 2025 at 14:05

Google has announced it’s rolling out the colorful new Android 16 interface for beta testers, as reported by 9to5Google. The QPR1 beta includes the company’s Material 3 Expressive design language, officially revealed last week, with new visuals for the launcher, notifications, lock screen, and a very Apple-inspired quick settings page.

QPRs, or quarterly platform releases, are generally more feature-rich updates for Android compared with the monthly security patches. Android 16 itself is expected to launch for everyone soon; this QPR1 update, which adds the new visual touches, will follow in the fall.

Users with eligible Pixel devices — including ones as old as the Pixel 6 and up to the 9a — that are registered in the Beta program can get access to the new release as soon as it’s ready. However, if you’re already beta testing Android 16 but you’d rather wait to get the new design, you can opt out of this release on the Android Beta website (note: don’t install the system update afterward, as that will wipe your device — just wait for Android 16’s official launch).

If you want to try the redesigned Android 16 but are not currently enrolled in the beta, Google posted instructions on Reddit on how to get started:

You can get started with Android 16 QPR1 Beta 1 today by enrolling your Pixel device. Eligible devices include Pixel 6, 6 Pro, 6a, 7, 7 Pro, 7a, 8, 8 Pro, 8a, 9, 9 Pro, 9a, and Pixel Tablet series devices. Once enrolled, eligible devices will receive an over-the-air (OTA) update to the latest Beta versions. If you were previously enrolled in Android 16 Beta (and have not opted out), you will automatically receive QPR1 Beta 1 and any future Beta updates.

Lost in Cult’s new Editions publishing label focuses on art and indie games preservation

20 May 2025 at 12:58
An image showing Lost in Cult’s Editions releases for Thank Goodness You’re Here, Immortality, and The Excavation of Hob’s Barrow.

Editions is the name of a new game publishing label launched by Lost in Cult, the same company known for making gorgeous books about video games, like Outer Wilds: Design Works. The new label’s aim is to preserve indie games, including some that haven’t been released on physical media before, and to celebrate their artistic contributions to the medium by including plenty of extra goodies. Notably, Lost in Cult is working with DoesItPlay? to validate its titles before they’re released. The group specializes in game preservation, ensuring that games can be run from the physical media they’re stored on without the need for a download or an internet connection.

The focus on elegantly preserving these titles is similar to what we’ve seen from Limited Run Games, while Editions’ focus on indie games reminds me of the Criterion Collection’s approach. Each game included in the Editions lineup will come with a fold-out poster, a sticker, a numbered authenticity card, a 40-page essay and developer interview, and gorgeous cover art, along with the game itself. The first three games to launch under the label include Immortality, The Excavation of Hob’s Barrow, and Thank Goodness You’re Here. Editions plans to announce a new game every month, starting in July.

Each of the three games is available to preorder through the Lost in Cult site starting at £59.99, with the option to choose between Nintendo Switch and PlayStation 5 editions when applicable. PS5 owners can opt to buy the entire first run of Editions games at a discounted price, containing Immortality and Thank Goodness You’re Here, and they’ll get a third (as yet unannounced) Editions title when it launches in July. Lost in Cult is asking for patience with shipments, which may take up to six months. But if they’re as good as the books, the wait will be worth it.

The FDA is making it more difficult for Americans to get vaccinated for covid 

20 May 2025 at 12:32

The Trump administration is working to limit access to covid booster shots by creating more regulatory hoops for companies developing vaccines for “healthy persons.” The Food and Drug Administration (FDA) says it’s only prioritizing covid vaccine approvals for adults older than 65 and others over the age of 6 months who have at least one “risk factor” for a severe case of covid-19. 

“The FDA will approve vaccines for high-risk persons and, at the same time, demand robust, gold-standard data on persons at low risk,” FDA officials write in commentary laying out their plans in the New England Journal of Medicine (NEJM).

The move comes as notorious antivax crusader Robert F. Kennedy Jr. reshapes the US Department of Health and Human Services, recently pushing out the FDA’s top vaccine official and thousands of other federal health workers. Some public health experts are already voicing skepticism over whether the FDA’s new guidance for covid boosters will reap any benefits.

“This is overly restrictive and will deny many people who want to be vaccinated a vaccine,” Anna Durbin, director of the Center for Immunization Research at Johns Hopkins University, said in an email to the New York Times.

“The only thing that can come of this will make vaccines less insurable and less available,” Paul Offit, a vaccine scientist, virologist, and professor of pediatrics at the Children’s Hospital of Philadelphia, told The Associated Press.

The FDA says it will require more data from additional clinical trials before approvals can be granted for covid-19 vaccines being developed for people not considered to be at heightened risk of severe illness. It says 100 to 200 million Americans will still have annual access to covid vaccines after its policy change. That would be less than 60 percent of the US population.

Last week, the agency approved the Novavax covid-19 vaccine only for older adults and people at higher risk from the disease.

“We simply don’t know whether a healthy 52-year-old woman with a normal BMI who has had Covid-19 three times and has received six previous doses of a Covid-19 vaccine will benefit from the seventh dose,” the NEJM commentary says. 

But previous CDC studies have shown that getting a booster can help prevent mild to moderate cases of covid up to six months after getting the shot regardless of whether a person is at higher risk or not, Offit tells The Associated Press. And even if someone does get sick, being vaccinated can make the illness shorter and less severe and reduce the risk of developing long covid, according to the Centers for Disease Control and Prevention.

The rate of covid-19-associated hospitalizations was 71.2 per 100,000 people during the 2024–25 season, according to the CDC — although hospitals haven’t been required to report covid-related hospital admissions to HHS since May of last year. Vaccines are an important safeguard for people with a weakened immune system. The FDA’s new directive raises questions about whether people considered healthy will be able to get vaccinated if they want to protect someone close to them who’s at greater risk.

In the NEJM article, the FDA notes that covid booster uptake has been low in the US, with less than a quarter of people getting the shot each year. “There may even be a ripple effect: public trust in vaccination in general has declined,” it says.

Kennedy, meanwhile, has a long history of spreading disinformation about vaccines, advocacy for which he has been paid hundreds of thousands of dollars and from which he profited during the covid pandemic.

“It has become clear that truth and transparency are not desired by the Secretary, but rather he wishes subservient confirmation of his misinformation and lies,” Peter Marks, former director of the FDA’s Center for Biologics Evaluation and Research (CBER) that regulates vaccines, wrote in a resignation letter in March.

The 15 biggest announcements at Google I/O 2025

By: Emma Roth
20 May 2025 at 11:57
An image showing Sundar Pichai at Google I/O.

Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.

But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.

Google’s AI Mode for Search is coming to everyone

Google has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.

Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”

Project Starline is now Google Beam

Project Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.

Companies like Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.

Imagen and Veo are getting some big upgrades

Google has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.

Google launches an AI filmmaking app

In addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.

Gemini 2.5 Pro adds an “enhanced” reasoning mode

The experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will be available only to trusted testers at first.

Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.

Xreal shows off its Project Aura prototype

Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field of view, along with what appear to be built-in cameras and microphones.

Google is also partnering with Samsung, Gentle Monster, and Warby Parker to create other Android XR smart glasses.

Google’s experimental AI assistant is getting more proactive

Last year we unveiled Project Astra on the #GoogleIO stage. See how it’s evolved since then — and what might be possible in the future.

— Google (@Google) May 20, 2025

Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.

Gemini is coming to Chrome

Google is building its AI assistant into Chrome. Starting on May 21st, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on their behalf. The feature can work with up to two tabs for now, but Google plans on adding support for more later this year.

Google’s new AI Ultra plan costs $250 per month

Google is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.

Search Live will let you discuss what’s on your camera in real-time

Speaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing what’s on your camera.

After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that iOS users will be able to access it for free, as well.

Google’s new tool uses AI to create app interfaces

Google has revealed Stitch, a new AI-powered tool that can generate interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.

Google Meet adds AI speech translation

Google Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.

Gmail’s smart replies will soon pull info from your inbox

Gmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to prewrite responses that sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.

Gmail’s upgraded smart replies will be available in English on the web, iOS, and Android when it launches through Google Labs in July.

Google is going big on AI shopping

Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that “understands the human body and nuances of clothing.”

Google will also soon let you shop in AI Mode, as well as use an “agentic checkout” feature that can purchase products on your behalf.

Google Chrome will soon help you update compromised passwords

If Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.

Xreal teases Project Aura smart glasses for Android XR

20 May 2025 at 10:45
Render of a pair of sunglasses with a camera in the hinge. The words Project Aura and Android XR are at the bottom.
They look like normal sunglasses in the render, with what appear to be cameras in the hinges and nose bridge. | Image: Xreal

The Google smart glasses era is back, sort of. Today, Google and Xreal announced a strategic partnership for a new Android XR device called Project Aura at the Google I/O developer conference.

This is officially the second Android XR device since the platform was launched last December. The first is Samsung’s Project Moohan, but that’s an XR headset more in the vein of the Apple Vision Pro. Project Aura, however, is firmly in the camp of Xreal’s other gadgets. The technically accurate term would be “optical see-through XR” device. More colloquially, it’s a pair of immersive smart glasses.

Xreal’s glasses, like the Xreal One, are like embedding two mini TVs into what looks like a regular — if a bit chunky — pair of sunglasses. Xreal’s previous gadgets let you plug into a phone or laptop and view whatever’s on the screen, be it a show or a confidential document you want to edit on a plane. The benefit is that you can change the opacity to view (or block out) the world around you. That’s the vibe Project Aura’s giving off, too.

Details are sparse — Xreal spokesperson Ralph Jodice told me we’ll learn a bit more at Augmented World Expo next month. But we know it’ll have Gemini built in, as well as a large field of view. In the product render, you can also see what look like cameras in the hinges and nose bridge, plus microphones and buttons in the temples.

That hints at a hardware evolution compared to Xreal’s current devices. Project Aura will run a Qualcomm chipset optimized for XR, though we don’t know exactly which one. Like Project Moohan, Project Aura is counting on developers to start building apps and use cases now, ahead of an actual consumer product launch. Speaking of, Google and Xreal said in a press release that Android XR apps developed for headsets can be easily brought over to a different form factor like Project Aura.

Back when I first demoed Android XR, I was told that while Google had built prototype glasses, the plan was to work with other partners to produce a viable product. That demo also made it abundantly clear that it viewed XR devices as a key vehicle for Gemini. So far, everything we know about Project Aura is aligned with that strategy. Meaning, Google’s approach to this next era of smart glasses is similar to how it first tackled Wear OS — Google provides the platform, while third parties handle the hardware. (At least, until Google feels like it’s ready to jump into the fray itself.) That makes a ton of sense given Google’s fraught history with smart glasses hardware. But given the momentum we’ve seen through Project Astra and now, Android XR making it into the main Google I/O keynote? “Google” smart glasses are back on the menu.

Android XR is getting stylish partners in Warby Parker and Gentle Monster

20 May 2025 at 10:45
UX interior view of Android XR glasses
Google’s prototype glasses’ demos included integration with its AI assistant. | Image: Google

Google’s second era of smart glasses is off to a chic start. At its I/O developer conference today, Google announced that it’ll be partnering with Samsung, Gentle Monster, and Warby Parker to create smart glasses that people will actually want to wear.

The partnership hints that Google is taking style a lot more seriously this time around. Warby Parker is well known as a direct-to-consumer eyewear brand that makes it easy to get trendy glasses at a relatively accessible price. Meanwhile, Gentle Monster is currently one of the buzziest eyewear brands that isn’t owned by EssilorLuxottica. The Korean brand is popular among Gen Z, thanks in part to its edgy silhouettes and the fact that Gentle Monster is favored by fashion-forward celebrities like Kendrick Lamar, Beyoncé, Rihanna, Gigi Hadid, and Billie Eilish. Partnering with both brands seems to hint that Android XR is aimed at both versatile, everyday glasses as well as bolder, trendsetting options.

The other thing to note is that Google seems to be leaning on Samsung for XR glasses hardware, too. In a post on its Keyword blog, Google’s VP of XR, Shahram Izadi, noted that the company is “advancing its partnership with Samsung to go beyond headsets” and into glasses. Google also noted at I/O today that the first pair of Android XR-enabled glasses will be made by Xreal under the name Project Aura.

As for what these XR glasses will be able to do, Google was keen to emphasize that they’re a great vehicle for using Gemini. So far, Google’s prototype glasses have had cameras, microphones, and speakers so that its AI assistant can help you interpret the world around you. That included demos of taking photos, getting turn-by-turn directions, and live language translation. That pretty much lines up with what I saw at my Android XR hands-on in December, but Google has slowly been rolling out these demos more publicly over the past few months.

Altogether, it seems like Google is taking a page straight out of Meta’s smart glasses playbook. That makes sense, given the success Meta has had with its Ray-Ban smart glasses: the company revealed in February that it has already sold 2 million pairs and has been vocally positioning them as the ideal hardware for AI assistants.

The latter remains to be seen, but one thing the Ray-Ban Meta glasses have convincingly argued is that for smart glasses to go mainstream, they need to look cool. Not only do Meta’s glasses look like an ordinary pair of Ray-Bans, Ray-Ban itself is an iconic brand known for its Wayfarer shape. In other words, they’re glasses the average person wouldn’t feel quite so put off wearing. Since launching its second-gen smart glasses in late 2023, Meta has also put out a few limited edition versions, playing into the same fashion strategy as sneakers. Meta is also rumored to be releasing Oakley-branded versions of its smart glasses for athletes.

Google has a new tool just for making AI videos

20 May 2025 at 10:45


Google wants to make it easier to create AI-generated videos, and it has a new tool to do it. It’s called Flow, and Google is announcing it alongside its new Veo 3 video generation model, more controls for its Veo 2 model, and a new image generation model, Imagen 4. 

With Flow, you can use things like text-to-video prompts and ingredients-to-video prompts (basically, sharing a few images that Flow can use alongside a prompt to help inform the model what you’re looking for) to build eight-second AI-generated clips. Then, you can use Flow’s scene-builder tools to stitch multiple clips together.

Flow seems kind of like a film editing app, but for building AI-generated videos, and while I’m not a filmmaker, I can see how it might be a useful tool based on a demo I saw. In a briefing, Thomas Iljic, a product manager at Google Labs, showed me a few examples of Flow in action.

In one demo, we watched an animated-style video; the “camera” zoomed out to reveal that the video was playing on a TV; the video zoomed out again to show the room the TV was in. Then, the “camera” slowly flew through a window and watched a truck pass by.

It all looked pretty seamless, though I was only briefly seeing the video in a tiny Google Meet window, so I can’t speak to any AI strangeness that might be visible if you look closely. But the idea for Flow isn’t so much about creating long videos. Instead, it’s more about helping filmmakers quickly get their ideas “on paper,” says Iljic.

As for Google’s new models announced at I/O, Veo 3 will have better quality, is easier to prompt, and can generate video and sound together (including dialogue), Matthieu Lorrain, creative lead at Google DeepMind, tells The Verge. It’s also better at understanding longer prompts and correctly handling a succession of events in your prompt.

Veo 2 will offer tools like camera controls and object removal. And Google’s new image generation model, Imagen 4, has improved quality, can export in more formats, and is apparently better at writing real text instead of the AI garble that often appears in these images.

Flow is launching today in the US for people who subscribe to Google’s new Google AI Pro and Google AI Ultra plans. “Google AI Pro gives you the key Flow features and 100 generations per month, and Google AI Ultra gives you the highest usage limits and early access to Veo 3 with native audio generation,” according to a blog post.

Google’s Gemini AI is coming to Chrome

20 May 2025 at 10:45

Google is adding its Gemini AI assistant to Chrome, the company announced at Google I/O on Tuesday. 

Initially, Gemini will be able to “clarify complex information on any webpage you’re reading or summarize information,” according to a blog post from Google Labs and Gemini VP Josh Woodward. Google envisions that Gemini in Chrome will later “work across multiple tabs and navigate websites on your behalf.”

I saw a demo during a briefing ahead of Tuesday’s announcement. In Chrome, you’ll see a little sparkle icon in the top right corner. Click that and a Gemini chatbot window will open — it’s a floating UI that you can move and resize. From there, you can ask questions about the website.

In the demo, Charmaine D’Silva, a director of product management on the Chrome team, opened a page for a sleeping bag at REI and clicked on a suggested Gemini prompt to list the bag’s key features. Gemini read the entire page and listed a quick summary of the bag. D’Silva then asked if the sleeping bag was a good option for camping in Maine, and Gemini in Chrome responded by pulling information from the REI page and the web.

After that, D’Silva went to a shopping page on another retailer’s website for a different sleeping bag and asked Gemini to compare the two sleeping bags. Gemini did that and included a comparison table.

The tool initially only works across two tabs. But “later in the year,” Gemini in Chrome will be able to work across multiple tabs. 

D’Silva also showed a demo of a feature that will be available in the future: using Gemini to navigate websites. In the demo, D’Silva pulled up Gemini Live in Chrome to help navigate a recipe site. D’Silva asked Gemini to scroll to the ingredients, and the AI zipped to that part of the page. It also responded when D’Silva asked for help converting the required amount of sugar from cups to grams.

In Google’s selected demos, Gemini in Chrome seems like it could occasionally be useful, especially with comparison tables or in-the-moment ingredient conversions. Still, I’d rather just read the website or do my own research than rely on Gemini’s AI summaries, especially since AI can hallucinate incorrect information.

Gemini in Chrome is launching on Wednesday. It will initially launch on Windows and macOS in early access to users 18 or older who use English as their language. It will be available to people who subscribe to Google’s AI Pro and Ultra subscriptions or users of Chrome’s beta, canary, and dev channels, Parisa Tabriz, Google’s VP and GM of Chrome, said in the briefing.

As for bringing Gemini to mobile Chrome, “it’s an area that we’ll think about,” Tabriz says, but right now, the company is “very focused on desktop.”

Google Chrome will be able to automatically change your bad passwords

20 May 2025 at 10:45

Google is going to let Chrome’s password manager automatically change your password when it detects one that is weak, the company announced at its Google I/O conference.

“When Chrome detects a compromised password during sign-in, Google Password Manager prompts the user with an option to fix it automatically,” according to a blog post. “On supported websites, Chrome can generate a strong replacement and update the password for the user automatically.”

Google is announcing the feature at Google I/O so that developers can start to prepare their websites and apps for the change ahead of when it launches later this year.

Chrome’s password manager can already tell you if you have an unsafe password. “But if we tell you your password is weak, it’s really annoying to actually have to change your password,” Parisa Tabriz, VP and GM of Chrome, said in a briefing ahead of the event. “And we know that if something is annoying, people are not going to actually do it. So we see automatic password change as a win for safety, as well as usability. Overall, that’s a win-win for users.”

I asked if Chrome might automatically change passwords on a regular basis so they’re never outdated, but Tabriz says that Chrome won’t change a bad or compromised password without user consent. “We’re very much focused on keeping the user in control of changing their password.” 

Google will let you ‘try on’ clothes with AI

By: Emma Roth
20 May 2025 at 10:45

Google is taking its virtual try-on feature to a new level. Instead of seeing what a piece of clothing might look like on a wide range of models, it’s now testing a feature that lets you upload a photo of yourself to see how it might look on you.

The new feature is rolling out in Search Labs in the US today. Once you opt into the experiment, you can check it out by selecting the “try it on” button next to pants, shirts, dresses, and skirts that appear in Google’s search results. Google will then ask for a full-length photo, which the company will use to generate an image of you wearing the piece of clothing you’re shopping for. You can save and share the images.

Google says the feature uses an AI model that “understands the human body and nuances of clothing — like how different materials fold, stretch and drape on different bodies.” 

Google will also soon allow you to shop in AI Mode — the new, Gemini-powered search experience it began rolling out to more users in March. If you tell AI Mode that you’re looking for a new travel bag, for example, it will automatically show a personalized panel of images and product listings.

You can narrow the selection by providing more details about your needs, like noting you need a bag for a May trip to Portland, Oregon. Google says AI Mode will conduct multiple searches simultaneously, allowing it to determine what features are suitable for rainy weather and then surface results that meet those needs, such as having waterproof fabric and more pockets.

There’s also a new “agentic” checkout feature rolling out to Google users in the US in the coming months. Right now, if you tap “track price” on a product listing, select a size, color, and the amount you want to spend, Google will automatically notify you when it drops to your preferred price. A new feature lets you confirm your purchase details and then select “buy for me.” Google will then check out on the merchant’s website and “securely complete the checkout on your behalf” using Google Pay.

Google says its new image AI can actually spell

20 May 2025 at 10:45
An image of an egg carton created by Imagen 4.

Google is launching a new version of its image generation model, called Imagen 4, and the company says that it offers “stunning quality” and “superior typography.”

“Our latest Imagen model combines speed with precision to create stunning images,” Eli Collins, VP of product at Google DeepMind, says in a blog post. “Imagen 4 has remarkable clarity in fine details like intricate fabrics, water droplets, and animal fur, and excels in both photorealistic and abstract styles.” Sample images from Google do show some impressive, realistic detail, like one showing a whale jumping out of the water and another of a chameleon.

The AI model is also “significantly better at spelling and typography,” which Collins says makes it easier to create greeting cards, posters, and comics. (When OpenAI recently added image generation to ChatGPT, the company also touted its text rendering improvements, but it’s still susceptible to typos.) 

In some images provided by Google, the text does look good — it’s perfectly legible in a short comic, for example, and even a tiny font in a mock stamp is readable. But we’ll have to see how the model’s text rendering capabilities hold up in the hands of regular users.

Imagen 4 will be available on May 20th in the Gemini app, Whisk, and Vertex AI, as well as in Slides, Vids, Docs, “and more in Workspace,” Collins says. Also, Google plans to launch a “fast variant” of Imagen 4 sometime “soon,” which it says is “up to 10x faster than Imagen 3.”

Google’s ‘universal AI assistant’ prototype can now do stuff for you — and you don’t even have to ask

20 May 2025 at 10:45

Since its original launch at Google I/O 2024, Project Astra has become a testing ground for Google's AI assistant ambitions. The multimodal, all-seeing bot is not a consumer product, really, and it won't soon be available to anyone outside of a small group of testers. What Astra represents instead is a collection of Google's biggest, wildest, most ambitious dreams about what AI might be able to do for people in the future. Greg Wayne, a research director at Google DeepMind, says he sees Astra as "kind of the concept car of a universal AI assistant."

Eventually, the stuff that works in Astra ships to Gemini and other apps. Already that has included some of the team's work on voice output, memory, and some basic computer-use features. As those features go mainstream, the Astra team finds something new to work on.

This year, at its I/O developer conference, Google announced some new Astra features that signal how the company has come to view its assistant - and just how smart it thinks that assistant can be. In addition to answering questions, and using your phone's camera to remember where you left your glasses, Astra can now accomplish tasks on your behalf. And it can do it with …

Read the full story at The Verge.

AI Mode is obviously the future of Google Search

20 May 2025 at 10:45
So far, AI Mode is a tab in Search. But it’s also beginning to overtake Search.

There's a new tab in Google Search. You might have seen it recently. It's called AI Mode, and it brings a Gemini- or ChatGPT-style chatbot right into your web search experience. You can use it to find links, but also to quickly surface information, ask follow-up questions, or ask Google's AI models to synthesize things in ways you'd never find on a typical webpage.

For now, AI Mode is just an option inside of Google Search. But that might not last. At its I/O developer conference on May 20th, Google announced that it is rolling AI Mode out to all Google users in the US, as well as adding several new features to the platform. In an interview ahead of the conference, the folks in charge of Search at Google made it very clear that if you want to see the future of the internet's most important search engine, then all you need to do is tab over to AI Mode.

Google likes to remind people that much of the core technology underpinning the AI revolution was actually created at Google. The "T" in ChatGPT stands for "transformer," a concept developed by a bunch of Googlers in 2017 and presented in a now-iconic paper called Attention Is All You Need. The industry continues to look for every …

Read the full story at The Verge.

Google Meet can translate what you say into other languages

By: Emma Roth
20 May 2025 at 10:13
Meet will translate what you say in near real-time.

Google is bringing speech translation to Meet. During I/O on Tuesday, Google revealed a new Gemini-powered feature that can translate what you say into your conversation partner’s preferred language.

Google says the AI-generated translation will preserve the sound of your voice, tone, and expression. The feature is rolling out now to subscribers, and Google will also bring it to enterprises later this year.

In a demo shown by Google, an English speaker joins a call with a colleague who speaks Spanish. Once their colleague turns on Gemini’s speech translation, Meet begins dubbing over what they’re saying with an AI-generated English translation that includes all their vocal inflections — and vice versa.

Microsoft Teams similarly launched an AI translation feature in a preview earlier this year.

For now, Meet can only translate between English and Spanish, but Google plans on adding support for Italian, German, and Portuguese in the “coming weeks.”

The most powerful laser in the US recently produced 2 quadrillion watts of power

20 May 2025 at 09:47
Two researchers in protective suits working on the ZEUS laser.
ZEUS is now the most powerful laser in the US, and will be getting even more powerful later this year. | Photo: Marcin Szczepanski / Michigan Engineering

The University of Michigan has announced that its Zettawatt-Equivalent Ultrashort pulse laser System (ZEUS) produced 2 petawatts, or 2 quadrillion watts, of power during its first experiment. That’s more than “100 times the global electricity power output,” according to the university, but don’t expect it to be harnessed to recreate the Death Star. Those intensely powerful blasts last just 25 quintillionths of a second, and will be used for experiments in various areas of research including medicine, quantum physics, and materials science.
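A quick back-of-envelope calculation using the article’s own figures shows why a 2-petawatt burst is no Death Star: the peak power is staggering, but the pulse is so short that the total energy delivered is tiny. (The roughly 3 terawatt figure for average global electricity generation is our assumption for comparison, not a number from the university.)

```python
# Back-of-envelope check using the figures quoted in the story.
peak_power_w = 2e15   # 2 petawatts (2 quadrillion watts)
pulse_s = 25e-18      # 25 quintillionths of a second

# Energy per pulse is just power multiplied by duration.
pulse_energy_j = peak_power_w * pulse_s
print(f"Energy per pulse: {pulse_energy_j:.3f} J")  # prints "Energy per pulse: 0.050 J"

# Compare peak power against average global electricity output,
# assumed here to be roughly 3 terawatts.
global_electricity_w = 3e12
print(f"Peak power vs. global electricity: {peak_power_w / global_electricity_w:.0f}x")
```

So each blast carries only about a twentieth of a joule, far less energy than a falling apple, while its instantaneous power comfortably clears the “more than 100 times” global-electricity comparison.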

Funded by the US National Science Foundation, ZEUS cost $16 million to build and includes components like 7-inch sapphire crystals infused with titanium atoms that took over four years to manufacture. “The size of the titanium sapphire crystal we have, there are only a few in the world,” says ZEUS’ project manager, Franko Bayer.

Operating ZEUS isn’t as easy as pressing a button on a handheld laser pointer. An initial infrared pulse is amplified through four successive rounds of pump lasers that feed energy into it, but to ensure the pulse “doesn’t get so intense that it starts tearing the air apart,” it passes through optical devices called diffraction gratings that stretch it out.

The pulse ends up being 12 inches across and a few feet long, but eventually it enters vacuum chambers where additional gratings flatten it down to just 0.8 microns wide so its maximum power intensity can be delivered to experiments.
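To get a sense of how much that final compression matters, here is a rough sketch of how the beam’s cross-sectional area, and therefore its intensity, changes when it is squeezed down. This is our own arithmetic, treating the article’s 12-inch and 0.8-micron widths as circular beam diameters and ignoring the temporal recompression entirely:

```python
import math

# Rough sketch using the article's figures as circular beam diameters.
stretched_d_m = 12 * 0.0254   # 12 inches, converted to meters
focused_d_m = 0.8e-6          # 0.8 microns

def area(diameter: float) -> float:
    """Cross-sectional area of a circular beam of the given diameter."""
    return math.pi * (diameter / 2) ** 2

# Squeezing the same power into a far smaller spot multiplies the
# intensity by the ratio of the two cross-sectional areas.
concentration = area(stretched_d_m) / area(focused_d_m)
print(f"Intensity gain from focusing: {concentration:.1e}x")
```

Under these assumptions the spot shrinks by a factor of more than a hundred billion in area, which is the whole point of delivering “maximum power intensity” to the experiments.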

The first experiment, conducted by Franklin Dollar, a professor of physics and astronomy at the University of California, Irvine, targeted the laser pulse at a cell containing helium. The collision “produces plasma, ripping electrons off the atoms so that the gas becomes a soup of free electrons and positively charged ions. Those electrons get accelerated behind the laser pulse like wakesurfers close behind a speedboat, a phenomenon called wakefield acceleration.”

The experiment is designed to eventually produce electron beams that are as powerful as those created by particle accelerators but without the need for expensive hardware installations that are often hundreds of meters in length.

Housed in a facility the size of a school gymnasium at the university’s Gérard Mourou Center for Ultrafast Optical Science, ZEUS is the successor to the center’s HERCULES laser, which reached a maximum power output of 300 terawatts in 2007. First announced in 2022, ZEUS currently delivers roughly double the peak power of any other laser in the US, and it’s designed to eventually reach up to 3 petawatts.

But while ZEUS is the most powerful laser in the US, it’s still less powerful than the laser at the European ELI-NP laboratory in Măgurele, Romania, which peaks at 10 petawatts.

Google I/O 2025 live blog: Gemini takes center stage

We’re back at the Shoreline Amphitheater in Mountain View, California, for Google I/O. This year, we’re not expecting much hardware, but we’re naturally expecting a lot of AI news. The pressure is on to prove that ChatGPT won’t make Google Search obsolete, and that Google has what it takes for Gemini to become a household name.

Since last year’s I/O, Google has shipped many AI models, including the latest Gemini 2.5 release that is widely seen as industry-leading. The company has announced its largest acquisition ever, teased plans for AR glasses, and seen early traction with its self-driving cars. It’s also fighting not to be broken up by the US government, after courts ruled that Google holds illegal monopolies in search and in advertising technology.

Going into this year’s I/O, Google somehow feels bigger and more vulnerable than ever. The stakes are high, and we’re here with real-time updates from where it’s all happening.


Fender’s free new recording app lets you simulate its iconic amps and pedals

By: Wes Davis
20 May 2025 at 09:22

Fender has released a free new recording app called Fender Studio that seems pretty powerful. The app, available on iOS, Android, macOS, Windows, and Linux, supports multitrack recording and offers a host of effects that emulate guitar pedals and several of the company’s iconic amplifiers over the years. 

Some of the simulated amps include a 1965 Fender Twin Reverb and a 1959 Fender Bassman, but you can get more, like a Fender Super-Sonic or Tube Preamp, if you connect the app to a free Fender Connect account. It’s the same story with pedals: you get options like Overdrive and Small Hall Reverb to start, but you can unlock others like a Stereo Tape Delay and Vintage Tremolo by registering.

In addition to the simulated amps and pedals, the app has a variety of basic effects like reverb, delay, compression, and a vocoder. You can use up to eight tracks for multitrack recording, but registering the app gets you up to 16 tracks.

The app also comes with a few pre-recorded tracks to play around with. You can’t use them commercially — the app makes you agree to a license agreement that says so right at the top — but they’re a good way to familiarize yourself with what it’s capable of. 

GIF of Fender Studio in action on an iPhone.

I played around a little bit with Fender Studio on my iPhone before writing this story. At first glance, it’s similar to Apple’s GarageBand app. Both let you use effects that mimic classic amplifiers and pedals, have things like built-in tuners and metronomes, and include a number of different effects and EQ options.

But Fender’s app feels more intuitive and makes better use of the cramped space of a smartphone screen, with support for both landscape and portrait orientation (GarageBand only works in landscape). I find that I can never remember how to do things in GarageBand, and trying to figure it out each time is frustrating enough that I never bothered to develop a workflow for recording with it. 

I’ve also always been a sucker for simulated amplifiers in recording apps, one area of software that never really gave up on skeuomorphism. I haven’t used a Twin Reverb in many years and haven’t played through a Bassman, but just recording with a hollow-body guitar through my iPhone’s built-in mic — if you have a USB-C DAC, you can use that to record with a more professional mic, too — the effect was decent enough that I could see using it to conceptualize a recording or even make a quick backing track for a video on TikTok.

Google I/O 2025: All the news and announcements

20 May 2025 at 09:00

Google I/O starts today, and would you believe it? They’re going to talk about AI.

After getting everything Android out of the way in last week’s dedicated Android Show, we’re expecting today’s I/O developer conference keynote to be one big AI show. Gemini, Project Astra, and everything in between are likely to be the focus as Google tries to prove it’s the biggest player in the industry.

Android XR is also set to get a mention, with the promise of new demos and more details. We’re hoping that includes closer looks at Google’s prototype smart glasses and Samsung’s Project Moohan, and maybe even new hardware reveals.

It all starts at 10AM PT / 1PM ET, and you can also follow along with our event liveblog right here.

Read below for all of the news and updates from Google I/O 2025.

Chicago Sun-Times publishes made-up books and fake experts in AI debacle

By: Mia Sato
20 May 2025 at 08:38

The May 18th issue of the Chicago Sun-Times features dozens of pages of recommended summer activities: new trends, outdoor activities, and books to read. But some of the recommendations point to fake, AI-generated books, and other articles quote and cite people that don’t appear to exist.

Alongside actual books like Call Me By Your Name by André Aciman, a summer reading list features fake titles by real authors. Min Jin Lee is a real, lauded novelist — but “Nightshade Market,” “a riveting tale set in Seoul’s underground economy,” isn’t one of her works. Rebecca Makkai, a Chicago local, is credited for a fake book called “Boiling Point” that the article claims is about a climate scientist whose teenage daughter turns on her.

In a post on Bluesky, the Sun-Times said it was “looking into how this made it into print,” noting that it wasn’t editorial content and wasn’t created or approved by the newsroom. Victor Lim, senior director of audience development, added in an email to The Verge that “it is unacceptable for any content we provide to our readers to be inaccurate,” saying more information will be provided soon. It’s not clear if the content is sponsored — the cover page for the section bears the Sun-Times logo and simply calls it “Your guide to the best of summer.”

We are looking into how this made it into print as we speak. It is not editorial content and was not created by, or approved by, the Sun-Times newsroom. We value your trust in our reporting and take this very seriously. More info will be provided soon.

Chicago Sun-Times (@chicago.suntimes.com) 2025-05-20T14:19:10.366Z

The book list appears without a byline, but a writer named Marco Buscaglia is credited for other pieces in the summer guide. Buscaglia’s byline appears on a story about hammock culture in the US that quotes several experts and publications, some of whom do not appear to be real. It references a 2023 Outside magazine article by Brianna Madia, a real author and blogger, that I was unable to find. The piece also cites an “outdoor industry market analysis” by Eagles Nest Outfitters that I was unable to find online. Also quoted is “Dr. Jennifer Campos, professor of leisure studies at the University of Colorado,” who does not appear to exist. Buscaglia did not immediately respond to a request for comment but admitted to 404 Media that he uses AI “for background at times” and always checks the material. 

“This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses,” he told 404. “On me 100 percent and I’m completely embarrassed.”

Another uncredited article titled “Summer food trends” features similar seemingly nonexistent experts, including a “Dr. Catherine Furst, food anthropologist at Cornell University.” The piece also attributes a quote to Padma Lakshmi that she doesn’t appear to have said.

News outlets have repeatedly run AI-generated content next to their actual journalism, often blaming the issue on third-party content creators. High-profile incidents of AI-generated content at Gannett and Sports Illustrated raised questions about the editorial process, and in both cases, a third-party marketing firm was behind the AI sludge. Newsrooms’ defense is typically that they had nothing to do with the content — but the appearance of AI-generated work alongside real reporting and writing by human staffers damages trust all the same. 
