
The Vergecast at CES 2025: the biggest stories and best gadgets

A photo of a CES logo sign, on top of a Vergecast illustration.
Image: Alex Parkin / The Verge

CES is a TV show. And a car show. And a wearables show. And this year, oddly, kind of a pool-vacuum show? It is the biggest, most elaborate, most bizarre tech show of the year, during which practically the whole industry flies to Las Vegas to show off new stuff and make big deals.

On this episode of The Vergecast, a special live edition of the show from the Brooklyn Bowl in Las Vegas, we talk through as much of it as we can. (Thanks to everyone who came out, by the way! So much fun to get to see and hang out with all of you.) We actually begin the show with a story that didn’t start at CES but took over the week anyway: Meta’s about-face on fact-checking and content moderation. After that, we get into Samsung’s new Frame Pro TV, the end of Dell’s XPS brand, Sony’s bizarrely expensive Afeela car, and more.

After that, The Verge’s Allison Johnson, Jennifer Pattison Tuohy, and Victoria Song join us onstage to talk about what they saw at the show. We talk about phone toasters, robot vacuums, smart locks, smart glasses, Max Ink Mode, and lots more. Will anything we saw this week ever ship, and will any of it be any good? Who knows! But that’s the fun of CES. It’s a fever dream, a...

Read the full story at The Verge.

Omi is another AI companion wearable — but this one’s trying to read your mind

A man in a chair with a glowing wearable on his temple.
Seriously: would you wear something like this on your face if it could really read your mind? | Image: Omi

Nik Shevchenko closes his eyes and starts to focus intently. He’s spent the last half hour or so telling me about his new product, an $89 wearable called Omi that can listen to, summarize, and get information out of your conversations. Now he wants to show me the future. So his eyes are closed, and he’s focusing all his attention on the round white puck stuck to his left temple with medical tape. (Did I mention he’s had this thing on his face the whole time? It’s very distracting.)

“Hey, what do you think about The Verge, like as a news media website?” Shevchenko asks, to no one in particular. Then he waits. Fifteen or so seconds later, a notification pops up on his phone, with some AI-generated information about how reputable and terrific a news source The Verge is. Shevchenko is thrilled, and maybe a little relieved. The device read his brain waves to understand he was talking to it, and not to me, and answered his question without any prompting or switching.

So far, that’s all the brain-computer-interface stuff Omi can do. And it seems pretty fragile. “It just understands one channel,” he says. “It’s one electrode.” What he’s trying to build is a device that understands when you’re talking to it and when you’re not. And then eventually understands and saves your thoughts, which Shevchenko both waves off as total science fiction and says will probably be possible in two years. Whenever it happens, he thinks it might change the way you use your AI devices.

A woman with a glowing wearable around her neck.
This is the (more normal) way most people will wear devices like Omi. | Image: Omi

For now, Omi’s actual purpose is much simpler: it’s an always-listening device (the battery apparently lasts three days on a charge), worn on a lanyard around your neck, that helps you make sense of your day-to-day life. There’s no wake word, but you can still talk to it directly because it’s always on. Think of it as 80 percent companion and 20 percent Alexa assistant.

Omi can summarize a meeting or conversation and give you action items. It can give you information — Shevchenko offhandedly wondered about the price of Bitcoin during our conversation and got a notification from the Omi companion app a few seconds later with the answer. There’s also an Omi app store, which developers are already using to plug the audio input into things like Zapier and Google Drive.
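Omi’s code is open source (more on that below), but I haven’t dug into its plugin API, so treat this as a purely hypothetical sketch of what a store app consuming that always-on audio might look like: a tiny webhook service that receives transcript segments and forwards anything that sounds like an action item to a Zapier hook. Every name here — the endpoint, the segment shape, ZAPIER_HOOK_URL — is an assumption, not Omi’s documented interface.

```typescript
// Hypothetical sketch of an Omi-style app-store integration.
// These names and shapes are illustrative guesses, not Omi's real API.
import express from "express";

interface TranscriptSegment {
  text: string;
  speaker: string;   // "user" or another detected speaker
  startedAt: string; // ISO timestamp
}

const app = express();
app.use(express.json());

// The wearable's companion service would POST transcript segments here;
// the app decides what to do with them (e.g., forward action items).
app.post("/omi/transcript", async (req, res) => {
  const segments: TranscriptSegment[] = req.body.segments ?? [];
  const actionItems = segments.filter((s) => /remind me|todo/i.test(s.text));
  if (actionItems.length > 0) {
    // ZAPIER_HOOK_URL is a placeholder for a real Zapier catch hook.
    await fetch(process.env.ZAPIER_HOOK_URL!, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ items: actionItems.map((s) => s.text) }),
    });
  }
  res.sendStatus(200);
});

app.listen(3000);
```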

For Shevchenko himself, though, Omi is a personal mentor above all else. “I was born in the middle of nowhere on an island near Japan,” he tells me, and always wanted access to the tech visionaries he grew up admiring. For years, he says he cold-emailed people like Mark Zuckerberg and Elon Musk asking for advice and mentorship on how to make it in tech but never got much response. With no real-life options, Shevchenko decided to build his own.

Omi already has a product called “Personas,” which allows you to plug in anyone’s X handle and create a bot that assumes their social network persona. When Shevchenko shares his screen with me, it shows he’s been chatting with an AI Elon Musk for a long time. “It helps me to understand what I should be working on tomorrow,” Shevchenko says. “Or when I’m talking to someone and I don’t know an answer to the question, it will give me a small nudge — it sometimes tells me I’m wrong!” His wearable heard him say he was sick a few days ago and has been reminding him ever since to get more rest. He asks it every month to give him feedback and tell him how to do better.

He gets a lot of notifications from the Omi app, including during our call, and not all of them make much sense — one was just a transcription of a sentence he’d said a minute earlier. Shevchenko acknowledges it’s early, but he doesn’t seem bothered by the system’s misses. The communication works for him.

Different colors and materials of the Omi device.
Omi’s tech is actually pretty simple — it’s mostly just a microphone. The AI is the trick. | Image: Omi

Most people won’t use Omi this way, though. The product will ship widely in the second quarter of this year, but Shevchenko says the 5,000 people with an early version of the device are using it to help remember things, look up information, and perform other tasks common to AI assistants.

In that sense, Omi has a lot in common with devices like the Limitless Pendant and bears a striking resemblance to another wearable called Friend. When Friend launched last year, Shevchenko claimed Friend CEO Avi Schiffmann was stealing his work, and the subsequent beef included everything from sniping on X to a freestyle rap diss track. Omi was actually called Friend for a while, and Shevchenko says he changed the name both to avoid confusion and because Schiffmann dropped $1.8 million on Friend.com and subsequently dominated search results.

Shevchenko is confident that Omi can improve on those other devices. All of Omi’s code is open source, and there are already 250 apps in the store. Omi’s plan is to be a big, broad platform, rather than a specific device or app — the device itself is only one piece of the puzzle. The company is using models from OpenAI and Meta to power Omi, so it can iterate more quickly on the product itself.

For all the issues and concerns surrounding them, it’s clear that AI models are already good enough to feel like a true companion to millions of people. You can feel about that however you’d like, but from Omi and Friend to Character.AI and Replika, bot friends are quickly becoming real friends. What they need, then, is both more information about you and more ways to help you. Omi thinks the first answer is an always-on microphone, and the second is an app store. Then, I guess, comes the brain.

Withings is making a cardiologist checkup part of its health subscription

A man sitting at a table measuring his blood pressure.
The BPM Vision is another way to track your heart health — and there’s a new way to make sense of the data, too. | Image: Withings

If you’re a Withings device owner and a Withings Plus subscriber, there’s a new feature coming to your health tracking system. It’s a telemedicine service called Cardio Check-Up, designed to make it easy to check in on your heart health with a professional.

Any Withings device that collects electrocardiogram data (which is most of them at this point) can be used in Cardio Check-Up. The Withings Plus subscription, which costs $99.95 per year, will now include four checkups annually, though they’re not live appointments — a cardiologist will instead review your data and deliver you a heart health report. It works through a provider called Heartbeat Health, which has been working with Withings on EKG features for the last few years.

Cardio Check-Up gives Withings an answer to one of the most pressing challenges facing any health wearable, which is how to help users make sense of this mountain of complex data they’re suddenly collecting. Companies like Oura and Whoop are working on ways to collate your data into actual, actionable feedback, so you can know what’s going on and how to do better without needing a medical degree of your own.

Withings is doing lots of that automated...

Read the full story at The Verge.

There’s a better way to type on TVs, and it’s based on old-school phones

A photo of the Google TV Streamer’s remote in a person’s hand.
When this is all you have to type with, you need new keyboard ideas. | Photo by Jennifer Pattison Tuohy / The Verge

Typing on a TV sucks. Those long and / or scrambled on-screen keyboards are both a nuisance to use and a real problem for anyone wanting to make stuff for your TV.

At CES 2025, I was just introduced to a better way. It’s made by a company called Direction9, which has been working on the system for about a year, and it starts with a very old way of typing: T9. T9 was created by necessity, back in the days when cellphones’ only buttons were the number keys. (Here’s a demo for the uninitiated.) TVs are similarly constrained by their directional pad — on most set-top boxes and smart TVs there’s no other way to type.

The Direction9 system works like this: all the letters are arrayed in a three-by-three number grid, with multiple letters assigned to each number, just like T9. When you open the keyboard, your cursor defaults to the middle, and you click around to the letter you’re looking for. Every time you click the middle button to select a letter, the cursor jumps back to the center, which means you’re always only a click or two from the letter you’re looking for.
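If you want a feel for why the return-to-center trick works, here’s a minimal sketch of the idea in TypeScript. Direction9 hasn’t published its code, and the letter-to-key mapping below is my guess, not theirs — the point is just the geometry: from the center of a 3x3 grid, every edge cell is one d-pad press away and every corner is two.

```typescript
// Illustrative sketch only — Direction9's actual code and layout aren't public.
type Cell = { letters: string };

// 3x3 grid with multiple letters per key, T9-style (mapping is a guess).
const GRID: Cell[][] = [
  [{ letters: "abc" }, { letters: "def" }, { letters: "ghi" }],
  [{ letters: "jkl" }, { letters: "mno" }, { letters: "pqr" }],
  [{ letters: "stu" }, { letters: "vwx" }, { letters: "yz" }],
];

const CENTER = { row: 1, col: 1 };
let cursor = { ...CENTER };

// A d-pad press moves one cell, clamped to the grid. From the center,
// edges are one press away and corners are two — which is why snapping
// back to the center keeps every key "a click or two" away.
function move(dir: "up" | "down" | "left" | "right"): void {
  if (dir === "up") cursor.row = Math.max(0, cursor.row - 1);
  if (dir === "down") cursor.row = Math.min(2, cursor.row + 1);
  if (dir === "left") cursor.col = Math.max(0, cursor.col - 1);
  if (dir === "right") cursor.col = Math.min(2, cursor.col + 1);
}

// Selecting returns the key's letter group (a predictive engine would
// disambiguate, as T9 did) and snaps the cursor back to the center.
function select(): string {
  const letters = GRID[cursor.row][cursor.col].letters;
  cursor = { ...CENTER };
  return letters;
}
```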

You can use the keyboard in a “smart” mode, which tries to predict which word you’re looking for — click...

Read the full story at The Verge.

A new and better way to control your smart home

Hi, friends! Welcome to Installer No. 65, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, get ready to take up all your phone’s storage space, and also you can read all the old editions at the Installer homepage.)

This is the last Installer of the year! I’m taking a couple of weeks off for the holidays, and I hope you’re getting some relaxation in too. Thank you so much to everyone who has subscribed to this newsletter, emailed me your recommendations, told me I’m a lunatic about to-do lists, and generally been part of the Installerverse this year. Making this newsletter is so much fun, and I’m so thrilled to get to do it with you. Bigger and better next year!

This week, I’ve been reading about Spotify’s ghost artists and Formula 1 and Mufasa and the deeply silly New York Jets, watching Hot Frosty (you can judge me, it’s fine) and re-watching 30 Rock, beating Balatro for the very first time, and trying to convince my toddler that it’s actually not fun and cool and great to wake up at 4am every day.

I also have for you a nifty new smart home controller, a new app for the future of social networks, the next Sonic movie, and much more. Plus,...

Read the full story at The Verge.

The Vergecast Matter Holiday Spec-tacular

A photo of the Matter logo, over a holiday Vergecast illustration.
Image: Alex Parkin / The Verge

Happy holidays! ‘Tis the season for trimming trees, hanging lights, baking cookies... and spending two weeks at home trying to figure out why you can’t get the lights to automatically come on at night, and which of those stupid bulbs is causing all the rest to not work. Truly the most wonderful time of the year.

Every year on The Vergecast, we like to get into the holiday spirit by getting deep into the weeds on one of the most important specs, protocols, or systems that we all encounter every day. This year, for our annual Holiday Spec-tacular, we’re taking on everyone’s favorite kinda-sorta functional smart home protocol: Matter.

Matter is supposed to be the thing that makes the smart home work, that allows everything from your lights to your fridge to your vacuum cleaner to seamlessly connect. In reality, it is, well, not that. But it might be on its way! We begin the show with Nilay, David, and The Verge’s Jennifer Pattison Tuohy talking about the state of Matter, and where the smart home has made strides — and made mistakes — this year. We also talk about Thread. A lot. More than we expected.

After that, the trio competes in a game to see who understands the complicated, overlapping jargon of the Matter universe best. (It’s a tight race, but the right person wins in the end.) And finally, Paulus Schoutsen, the creator of Home Assistant and president of the Open Home Foundation, joins the show to talk about what it’s like to work with Matter and whether we’re ever going to get the smart home of our dreams.

This is our last episode of the year — we’ll be back with a live episode at CES, and if you’re going to be in Vegas we hope you’ll come join us! In the meantime, have a wonderful holiday, and may all your smart lights always be the right color.

If you want to know more about everything we discuss in this episode, here are some links to get you started:

See you in 2025!

Flipboard’s Surf app is a big new idea about the future of social

A screenshot of the Surf app running on an iPhone.
Surf’s homepage is just feeds. It’s feeds all the way down. | Image: David Pierce / Flipboard

Mike McCue, the CEO of Flipboard and an internet entrepreneur since the Netscape days, is a true believer in the fediverse. He doesn’t love the word: he’d much rather call it “the social web.” But whatever you want to call the open, decentralized, interconnected social networking experience that apps like Mastodon and Bluesky promise, McCue is absolutely convinced it’s the future.

For the last year or so, McCue and his team have been completely overhauling the Flipboard platform to make it a part of the social web. Once the change is done, Flipboard will be a fully decentralized way to discover and read stuff from across the internet. The process seems to be going fine, though it doesn’t seem poised to take over the fediverse the way Threads could if it fully opened up.

At the same time, though, the Flipboard team has been working on something even bigger. That something is an app called Surf (not to be confused with the other recently launched Surf), which McCue called “the world’s first browser for the social web.” He first said that to me a little over a year ago, when Surf was mostly just a bunch of mock-ups and a slide deck. Now, the app has been in beta for the last few months — I’ve been using it most of that time — and a public beta is launching today. Not everyone can get in; McCue says he wants to bring in some curators and creators first, in order for there to be lots of stuff in Surf when everyone else gets access. And he promises that’s coming soon.

But wait, sorry, back to the whole “browser for the social web” thing. McCue’s best explanation of Surf’s big theory is this: in a decentralized social world, the internet will be less about websites and more about feeds. “You won’t put in, like, theverge.com and go to the website for The Verge, but you can put in ‘the verge’ and go to the ActivityPub feed for The Verge.” Your Threads timeline is a feed; every Bluesky Starter Pack is a feed; every creator you follow is just producing a feed of content.

Surf’s job, in that world, is to help you discover and explore all those feeds. The app can see three kinds of feeds: anything from ActivityPub, which means things like Mastodon and Threads and Pixelfed; anything from AT Protocol, which means Bluesky; and any RSS feed. You can search for feeds by topic, publisher, or creator; you can curate your own feeds by combining other feeds. And then you can share those feeds, which other people can combine and recombine. It’s all a little confusing. Just imagine a nicely designed, vertically scrolling feed, somewhere between a Twitter timeline and the Apple News homepage.
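Surf’s internals aren’t public, but the “three kinds of feeds” model maps naturally onto a normalization layer: pull items from each protocol, convert them to one common shape, then merge, sort, and filter. Here’s a minimal sketch of that idea — every name in it is my illustration, not Surf’s actual code.

```typescript
// Illustrative sketch only — not Surf's actual implementation.
// Normalize items from all three protocols into one shape, so a
// combined feed is just a sorted array and a filter is a predicate.
type Source = "activitypub" | "atproto" | "rss";
type Kind = "post" | "article" | "video" | "podcast";

interface FeedItem {
  source: Source;
  kind: Kind;
  author: string;
  publishedAt: Date;
  text: string;
  mediaUrl?: string;
}

// Each adapter would wrap a real client (a Mastodon/Threads API client,
// an AT Protocol client for Bluesky, an RSS parser). Stubbed here.
async function fetchActivityPub(actor: string): Promise<FeedItem[]> { return []; }
async function fetchATProto(handle: string): Promise<FeedItem[]> { return []; }
async function fetchRSS(url: string): Promise<FeedItem[]> { return []; }

// A custom feed combines other feeds: fetch all, flatten, sort newest-first.
async function buildFeed(sources: Promise<FeedItem[]>[]): Promise<FeedItem[]> {
  const items = (await Promise.all(sources)).flat();
  return items.sort((a, b) => b.publishedAt.getTime() - a.publishedAt.getTime());
}

// Filtering the merged feed — say, just the videos, or a podcast queue
// like the "Listen" view described below — is then a one-liner.
const onlyVideos = (feed: FeedItem[]) => feed.filter((i) => i.kind === "video");
```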

Three screenshots of different content types in Surf.
You can have any kind of content in Surf — which means the app has to be good at absolutely everything. | Image: David Pierce / Surf

A feed can be made up of almost any kind of content, which presents a tricky design problem for Surf. It has to be equally adept as a social network, a news app, a video platform, and a podcast player. Combining all that stuff into one place isn’t just the goal; it’s the whole point. And it’s very hard to do all of those things well.

Personally, the most eye-opening moment in my time testing Surf has been the way the app lets you automatically filter a feed. I set up a feed that’s just all my favorite stuff: my go-to podcasts, must-read blogs, a couple of can’t-miss YouTube channels, and my favorite folks on Bluesky. I can open that feed and see everything, in order, no matter what it is or who it came from. But I can also filter it to just show all the videos in the feed or tap on “Listen” to turn it into a podcast queue.

Surf isn’t yet a full-featured app for any of these uses, much less all of them, but it’s already a pretty useful app for all kinds of media. It presents videos like an endlessly scrolling TikTok feed, which is actually a pretty fun way to flip through a YouTube channel. Posts with links are formatted like news stories, with big images and headlines. It’s not a particularly dense timeline-scrolling experience, either — the whole thing is more like Flipboard’s flippy magazines than the For You pages we’re used to.

Because it’s trying to compile a bunch of disparate platforms into one, search can be messy — I found five profiles with my name and picture, for instance, and it’s not obvious which one is the right one. Surf is also designed to be interactive, but right now, that pretty much only works if you’re a Mastodon user liking Mastodon posts. For most other things, it’s either kind of broken or entirely broken. For now, and probably for a while, Surf is going to be much better as a consumption tool than a social one.

McCue sees the social web as the beginning of an entirely new internet. He even uses old-web metaphors to explain these early products: the current era we’re in is like AOL back in the day, “a walled garden that contained all the innovation in the walled garden”; Surf is like old-school Yahoo, “a collection of feeds that other people have made.” He wants to enable paid feeds, so publishers, creators, and curators can make money on the platform. He has big ideas about custom designs for feeds, so they can look more like homepages.

There’s an awful lot left to build — not to mention a lot of protocols and tools left to convince all the internet’s platforms and publishers to work with. But I’ve been talking to McCue about this for two years now, and his conviction and optimism haven’t wavered a bit. When I tell him that I definitely wavered — that I was once all in on ActivityPub as the future but have grown worried watching Bluesky take off on another protocol and hearing about the issues Threads and others are having with ActivityPub — he just laughs. One, he says, that’s how it always goes in these early phases. Two, that’s what Surf is meant to fix.

To prove his point, McCue opens up a feed full of basketball content, created by David Rushing. Rushing was a big figure in early NBA Threads, a community that has splintered thanks to some of Threads’ moderation and community policies. Now, people are posting with #nbathreads on Bluesky and elsewhere, too. It’s messy. But Surf, McCue says, can bring it back together. He starts scrolling Rushing’s custom feed: “You’re seeing Bluesky posts, Mastodon posts, Threads posts, Flipboard posts, anything with the hashtag #nbathreads across the whole social web. If you post a podcast, if you post a YouTube video, anything with the hashtag #nbathreads, it’ll show up in this feed.” Rushing can add or remove individual posts or even use Flipboard’s filtering systems to get rid of anything that feels political, mentions gambling, or whatever else he wants to do.

McCue is practically giddy as he scrolls through all this basketball content. This is the whole thing, right here. “Ultimately,” he says, “you’re just not going to care whether something is on Threads — I don’t write you a separate kind of email because you’re on Gmail, right?” People will use lots of apps, there will be lots of communities, and that’s good. “There are nerds on Bluesky, there are nerds on Threads. How can all the nerds gather together?” That’s the question for the fediverse — sorry, the social web — and Surf looks like it might be the best answer anyone’s come up with so far.

Gemini, GTA, and the search for the next big thing

Google announced a bunch of new stuff last week, from Gemini 2.0 and Project Mariner to Android XR and Project Moohan. As ever with Google, it feels like a lot of stuff without necessarily a coherent plan behind it. But if you look closely enough, and start to put some of the pieces together, the combination of those announcements might add up to something like Google’s vision for the future of everything.

On this episode of The Vergecast (our last Tuesday episode of the year), The Verge’s Kylie Robison and Victoria Song join the show to do some Google puzzling. They describe their experiences with all of Google’s new projects and experiments, and explain why Google thinks XR could be the killer app for AI — and vice versa.

After that, Chris Grant, group publisher for The Verge and Polygon, joins to talk about the two biggest 2025 stories in gaming: the impending launches of Grand Theft Auto VI and the Nintendo Switch 2. He explains why GTA is such an important and enduring gaming franchise, why he’s confident the Switch 2 won’t be like the Wii U, and why the whole gaming world is waiting for these two things so intently.

Finally, we answer a question from the Vergecast Hotline (call 866-VERGE11 or email [email protected]!), with some help from The Verge’s publisher Helen Havlak. Helen mentioned last week that she uses Figma to manage her garden, and let’s just say you all had some follow-up questions about that. So Helen came back to explain her whole system.

Oh, and did we mention we’re doing a live Vergecast at CES? We’re doing a live Vergecast at CES! Wednesday, January 8th, 5PM, at the Brooklyn Bowl at the Linq Promenade. It’s free, it’s going to be great, come join us!

If you want to know more about everything we discuss in this episode, here are some links to get you started, beginning with Google:

And in games:

Also, here is some screenshot inspo of Helen’s Figma garden in all its to-scale glory.

2025 in tech: who’s in and who’s out

An illustration including Sam Altman, Tim Cook, Nvidia, and the White House.
Image: Alex Parkin / The Verge

Hello! I’m here from the future. And I have some news. Twelve months from now, all the Big Tech CEOs are still in their jobs, everybody’s using folding phones, Apple made a TV, and Nvidia is the most valuable company in the history of the universe. Wild year, huh? Or maybe not? It’s hard to remember. Time travel messes with your memory a little.

On this episode of The Vergecast, the second installment of our two-part 2025 preview, we debate some seriously iffy storylines from the end of 2025. David, our resident time traveler, brings us some big stories that either did or didn’t happen in the year to come, and Nilay Patel and Wall Street Journal columnist Joanna Stern have to help figure out what’s real and what isn’t.

Will someone really buy Snap? Is GTA VI going to be the biggest game ever? Will Bluesky continue to ascend and leave Threads in its wake? Nobody knows yet, not even the time traveler, but we have some thoughts and ideas.

As was the case with last week’s episode, we’re keeping score. Here’s how it works: each host has to decide, for each 2025 news story, whether it’ll be real or not by the end of the year. Every correct guess earns you a point; every incorrect guess...

Read the full story at The Verge.

A worthy update to my favorite mobile game ever

Hi, friends! Welcome to Installer No. 64, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, get ready for some weird documentaries, and also you can read all the old editions at the Installer homepage.)

This week, I’ve been reading about raw milk and $HAWK and WhatsApp, watching A Man on the Inside and the new Ken Burns da Vinci doc, finally getting caught up on The Great British Bake Off (about which I have SO MANY FEELINGS), storing all my loyalty numbers and Airbnb codes in Cheatsheet, and doing a genuinely upsetting amount of research on pizza stones.

I also have for you a delightful new mobile game, an E Ink tablet worth a look, a gorgeous new to-dos app, and much more. It’s a strangely Netflix-centric week, which is odd for mid-December? But so it goes. Let’s dive in.

(As always, the best part of Installer is your ideas and tips. What are you watching / reading / playing / baking / listening to / soldering this week? Tell me everything: [email protected]. And if you know someone else who might enjoy Installer, forward it to them and tell them to subscribe here.)


The Drop

Read the full story at The Verge.

Searching for the first great AI app

Photos of Google’s Project Astra, on a Vergecast background.
Image: Alex Parkin / The Verge

ChatGPT launched roughly two years and two weeks ago. Now, as we near the end of 2024, the AI race is... well, where is it, exactly? It’s more competitive than ever, there’s more money being poured into new models and products than ever, and it’s not at all clear when or even whether we’re going to get products that make it all worthwhile.

On this episode of The Vergecast, we talk about a lot of different AI news, all along a single trend line: the tech industry trying desperately to build a killer app for AI. (Ideally, for them, also one that makes money.) The Verge’s Richard Lawler joins us as we discuss Google Gemini 2.0, Project Astra and Project Mariner, and everything else Google is doing to put AI in the products you already use every day. We also talk through the new Android XR announcement, and Google’s renewed commitment to making headsets and smart glasses that work. It’s all an AI story, no matter how you look at it.

After that... more AI! We talk about the launch and near-immediate disappearance of OpenAI’s Sora, what’s new in iOS 18.2, Reddit’s clever-but-primitive new Answers feature, and more.

Finally, in the lightning round, it’s a smorgasbord of tech news. YouTube is big on TVs; Instagram is testing a way for you to test your posts; the TikTok ban is coming, but a sale sounds like the answer; Sonos once again made a great soundbar; and what the heck happened to Cruise? The year’s almost over, but the news keeps coming.

If you want to know more about everything we discuss in this episode, here are some links to get you started, beginning with Google:

And in other AI news:

And in the lightning round:

Google announces Android XR, a new OS for headsets and smart glasses

The Android XR logo over several app screenshots.
XR stands for “extended reality,” which you should get used to explaining to lots of people. | Image: Google

Google is taking another run at making headsets work. The company just announced Android XR, a new operating system designed specifically for what Google calls “extended reality” devices like headsets and glasses. It’s working with Samsung and lots of other hardware manufacturers to develop those headsets and glasses, is making the new version of Android available to developers now, and hopes to start shipping XR stuff next year.

We don’t yet have a ton of details on exactly how Android XR will work or how it might differ from the Android on your phone. (The Verge’s Victoria Song got to try a few demos and prototypes — make sure you read her story.) Google is making immersive XR versions of apps like Maps, Photos, and YouTube and says it’s developing a version of Chrome that lets you do multiwindow multitasking in your browser. It will also support existing phone and tablet apps from the Play Store, much in the same way Apple supports iPad apps in the Vision Pro.

Google’s Gemini AI, of course, is at the very center of the whole experience. Google has been trying to crack headsets for more than a decade — there was Glass and Cardboard and Daydream, all of which had good ideas but none of which turned into much — and the company thinks AI is the key to making the user experience work. “We believe a digital assistant integrated with your XR experience is the killer app for the form factor, like what email or texting was for the smartphone,” said Sameer Samat, who oversees the Android ecosystem at Google, in a press briefing ahead of the launch. As Gemini becomes more multimodal, too, able to both capture and create audio and video, glasses and headsets suddenly make much more sense.

A photo of a text message displayed in AR over a real-world street.
This is the kind of AR interface you’ll get with Android XR. | Image: Google

The choice of the term “XR” for the OS is maybe the most interesting part. There are a million terms and acronyms for this space: there’s virtual reality, augmented reality, mixed reality, extended reality, and others, all of which mean different but overlapping things. XR is probably the broadest of the terms, which seems to be why Google picked it. “When we say extended reality or XR,” Samat said, “we’re really talking about a whole spectrum of experiences, from virtual reality to augmented reality and everything in between.”

Google imagines headsets that can seamlessly transition from virtual worlds to real ones — again like the Vision Pro — and smart glasses that are more of an always-on companion. It’s also interested in audio-only devices like the Ray-Ban Meta Smart Glasses. Some things might be standalone; others might be more like an accessory to your phone. We’ll see if Google ends up building its own XR hardware, but it’s clearly trying to support a huge spectrum of devices.

Android XR is still in its early stages, and most developers are only now going to start getting the software and hardware they need to build for the new OS. But Google’s trying to move quickly: a device it’s building with Samsung, codenamed Moohan, is apparently slated to ship next year. Android XR is, in some ways, a culmination of bets Google has been making in AI, the broader Android ecosystem, and the wearable future of technology. All of those bets are about to get the real test: whether anyone actually puts them on.

It sure sounds like Google is planning to actually launch some smart glasses

A man in a bike helmet, wearing glasses.
Here’s what Google’s latest smart glasses prototype looks like. | Image: Google

Google is working on a lot of AI stuff — like, a lot of AI stuff — but if you want to really understand the company’s vision for virtual assistants, take a look at Project Astra. Google first showed a demo of its all-encompassing, multimodal virtual assistant at Google I/O this spring and clearly imagines Astra as an always-on helper in your life. In reality, the tech is somewhere between “neat concept video” and “early prototype,” but it represents the most ambitious version of Google’s AI work.

And there’s one thing that keeps popping up in Astra demos: glasses. Google has been working on smart facewear of one kind or another for years, from Glass to Cardboard to the Project Iris translator glasses it showed off two years ago. Earlier this year, all Google spokesperson Jane Park would tell us was that the glasses were “a functional research prototype.”

Now, they appear to be something at least a little more than that. During a press briefing ahead of the launch of Gemini 2.0, Bibo Xu, a product manager on the Google DeepMind team, said that “a small group will be testing Project Astra on prototype glasses, which we believe is one of the most powerful and intuitive form factors to experience this kind of AI.” That group will be part of Google’s Trusted Tester program, which often gets access to these early prototypes, many of which don’t ever ship publicly. Some testers will use Astra on an Android phone; others through the glasses.

Later in the briefing, in response to a question about the glasses, Xu said that “for the glasses product itself, we’ll have more news coming shortly.” Is that definitive proof that Google Smart Glasses are coming to a store near you sometime soon? Of course not! But it certainly indicates that Google has some hardware plans for Project Astra.

Smart glasses make perfect sense for what Google is trying to do with Astra. There’s simply no better way to combine audio, video, and a display than on a device on your face — especially if you’re hoping for something like an always-on experience. In a new video showing Astra’s capabilities with Gemini 2.0, a tester uses Astra to remember security codes at an apartment building, check the weather, and much more. At one point, he sees a bus flying past and asks Astra if “that bus will take me anywhere near Chinatown.” It’s all the sort of thing you can do with a phone, but nearly all of it feels far more natural through a wearable.

Right now, smart glasses like these — and like Meta’s Orion — are mostly vaporware. When they’ll ship, whether they’ll ship, and whether they’ll be any good all remain up in the air. But Google is dead serious about making smart glasses work. And it seems just as serious about building the smart glasses itself.

Google launched Gemini 2.0, its new AI model for practically everything

Vector illustration of the Google Gemini logo.
Illustration: The Verge

Google’s latest AI model has a lot of work to do. Like every other company in the AI race, Google is frantically building AI into practically every product it owns, trying to build products other developers want to use, and racing to set up all the infrastructure to make those things possible without being so expensive it runs the company out of business. Meanwhile, Amazon, Microsoft, Anthropic, and OpenAI are pouring their own billions into pretty much the exact same set of problems.

That may explain why Demis Hassabis, the CEO of Google DeepMind and the head of all the company’s AI efforts, is so excited about how all-encompassing the new Gemini 2.0 model is. Google is releasing Gemini 2.0 on Wednesday, about 10 months after the company first launched 1.5. It’s still in what Google calls an “experimental preview,” and only one version of the model — the smaller, lower-end 2.0 Flash — is being released. But Hassabis says it’s still a big day.

“Effectively,” Hassabis says, “it’s as good as the current Pro model is. So you can think of it as one whole tier better, for the same cost efficiency and performance efficiency and speed. We’re really happy with that.” And not only is it better at the things Gemini could already do, but it can also do new things. Gemini 2.0 can now natively generate audio and images, and it brings new multimodal capabilities that Hassabis says lay the groundwork for the next big thing in AI: agents.

Agentic AI, as everyone calls it, refers to AI bots that can actually go off and accomplish things on your behalf. Google has been demoing one, Project Astra, since this spring — it’s a visual system that can identify objects, help you navigate the world, and tell you where you left your glasses. Gemini 2.0 represents a huge improvement for Astra, Hassabis says.

Google is also launching Project Mariner, an experimental new Chrome extension that can quite literally use your web browser for you. There’s also Jules, an agent specifically for helping developers find and fix bad code, and a new Gemini 2.0-based agent that can look at your screen and help you better play video games. Hassabis calls the game agent “an Easter egg” but also points to it as the sort of thing a truly multimodal, built-in model can do for you.

“We really see 2025 as the true start of the agent-based era,” Hassabis says, “and Gemini 2.0 is the foundation of that.” He’s careful to note that the performance isn’t the only upgrade here; as talk of an industrywide slowdown in model improvements continues, he says Google is still seeing gains as it trains new models, but he’s just as excited about the efficiency and speed improvements.

This won’t shock you, but Google’s plan for Gemini 2.0 is to use it absolutely everywhere. It will power AI Overviews in Google Search, which Google says now reach 1 billion people and which the company says will now be more nuanced and complex thanks to Gemini 2.0. It’ll be in the Gemini bot and app, of course, and will eventually power the AI features in Workspace and elsewhere at Google. Google has worked to bring as many features as possible into the model itself, rather than run a bunch of individual and siloed products, in order to be able to do more with Gemini in more places. The multimodality, the different kinds of outputs, the features — the goal is to get all of it into the foundational Gemini model. “We’re trying to build the most general model possible,” Hassabis says.

As the agentic era of AI begins, Hassabis says there are both new and old problems to solve. The old ones are eternal, about performance and efficiency and inference cost. The new ones are in many ways unknown. Just to name one: what safety risks will these agents pose out in the world operating of their own accord? Google is taking some precautions with Mariner and Astra, but Hassabis says there’s more research to be done. “We’re going to need new safety solutions,” he says, “like testing in hardened sandboxes. I think that’s going to be quite important for testing agents, rather than out in the wild… they’ll be more useful, but there will also be more risks.”

Gemini 2.0 may be in an experimental stage for now, but you can already use it by choosing the new model in the Gemini web app. (No word yet on when you’ll get to try the non-Flash models.) And early next year, Hassabis says, it’s coming for other Gemini platforms, everything else Google makes, and the whole internet.

Why every company wants a podcast now

Collage of podcasters
Image: The Verge. Photos: Getty

Hello, and welcome to Decoder! I’m David Pierce, editor-at-large of The Verge. As you may have noticed, we’re dropping some extra episodes in the feed this week. You’ll have Nilay back on Friday and for next week, as we run toward the end of the year.

But I’m really excited to be here with you all today because I’m getting to talk about one of my favorite things: podcasts. There’s something strange happening these days in the podcast world — well, actually, there are kind of a lot of things happening. It’s been a wild year.

One thing I’ve noticed recently is the way companies that deal in money have been using podcasts not just as an entertainment medium but also as a weird hybrid of marketing, thought leadership, and networking. It’s something we’ve seen for a few years now with venture capital firms, for example: not only do most of the top-level VC firms have their own podcasts, but people who make podcasts about venture capital often end up going into the industry after meeting and talking to all these folks.

It’s kind of a weird, complicated web that goes both ways, and it’s not getting any less weird or less complicated once you add stuff like crypto and politics to the mix. So I...

Read the full story at The Verge.

YouTube is a hit on TVs — and is starting to act like it

YouTube’s logo with geometric design in the background
Illustration by Alex Castro / The Verge

YouTube just released some new stats that show how the service is being consumed on televisions, and the numbers are enormous. Watch time on TV for sports content was up 30 percent year over year; viewers watched more than 400 million hours of podcasts on their TVs every month.

This is YouTube we’re talking about, though, so of course the numbers are huge. The living room has been YouTube’s fastest-growing platform for years — Alphabet’s chief business officer, Philipp Schindler, said on the company’s most recent earnings call that watch time is growing across YouTube “with particular strength in Shorts and in the living room.” Even as YouTube continues to dominate basically all facets of the entertainment business, the arrow on your TV still points up.

The trend hasn’t changed in forever, but YouTube has spent the last couple of years finally doing something about it. It launched a way to sync your phone and your TV, so you can watch a video on the big screen and interact with it on the small one. Earlier this year, the company redesigned the TV interface to make it easier to find comments, links, and channel pages while you’re watching a video. It redesigned those channel...

Read the full story at The Verge.

The Vergecast Vergecast, part two

A Vergecast logo, on a Vergecast illustration background.
Image: Alex Parkin / The Verge

Every once in a while, we turn The Vergecast inward. Not only are we constantly covering and discussing what’s happening in the tech and media worlds, we’re also living it ourselves. We’re trying to navigate changes in platforms and economies, and figure out the best ways to do our work and share it with you. And it’s complicated! So, once a year or so, we get into the weeds of how it all works for us here at The Verge.

On this episode of The Vergecast, it’s all inside baseball, all the time. Helen Havlak, our publisher (and everybody’s boss), joins the show to talk about our new subscription: why we’re doing it now, how we decided on a price, why we’re not getting rid of all the ads, and what we’re thinking about going forward.

After that, Nilay Patel, our editor-in-chief and your Vergecast co-host and friend, joins to talk about everything else you’ve been wondering. We talk about host-read ads, what we do during ad breaks, how The Verge is like a Montessori, what he thinks about our redesign two years later, and more. We got so many good questions — thanks to everyone who called and emailed! — and we couldn’t get to them all, but we tried to answer as many as we could.

If you want to know more about what The Verge is up to, and where we think (gestures widely) all this is headed, here are a few links to get you started:

Keep sending us questions, too! Call the Vergecast Hotline at 866-VERGE11, or email [email protected], and ask us anything. Our goal is always to be as transparent and accountable as we can — disclosure is our brand, after all — and we’d love to know everything else that’s on your mind.

Our hottest and coldest 2025 takes

There’s a lot we don’t know about what’s coming for tech in 2025. AI could save the world, or ruin it, or do pretty much nothing interesting at all. A new US president could change priorities and policies on antitrust fights and privacy rules. Will TikTok get banned? Will the fediverse take off? Will it be the year of Matter? Will Grand Theft Auto VI change society forever? Will the next Mission: Impossible movie be awesome? Look, I didn’t say the stakes were always super high. But there are lots of questions.

On this episode of The Vergecast, we try and offer some answers — with absolutely no evidence. Nilay, David, and Wall Street Journal columnist (and forever friend of The Verge) Joanna Stern take turns offering their predictions for the year to come.

We start with our mildest, most milquetoast takes on tech in 2025, before ramping up to our biggest, hottest, spiciest thoughts. And because we need to be held accountable for our actions, we make a game out of it.

Here’s how the game works: Each host offers a prediction, and the other two get a chance to either agree with or reject the prediction. At the end of the year, whoever is right about that prediction gets a point....

Read the full story at The Verge.
