In New York court on May 20th, lawyers for nonprofit Everytown for Gun Safety argued that Meta, Amazon, Discord, Snap, 4chan, and other social media companies all bear responsibility for radicalizing a mass shooter. The companies defended themselves against claims that their respective design features - including recommendation algorithms - promoted racist content to a man who killed 10 people in 2022, then facilitated his deadly plan. It's a particularly grim test of a popular legal theory: that social networks are products that can be found legally defective when something goes wrong. Whether this works may rely on how courts interpret Section 230, a foundational piece of internet law.
In 2022, Payton Gendron drove several hours to the Tops supermarket in Buffalo, New York, where he opened fire on shoppers, killing 10 people and injuring three others. Gendron claimed to have been inspired by previous racially motivated attacks. He livestreamed the attack on Twitch and, in a lengthy manifesto and a private diary he kept on Discord, said he had been radicalized in part by racist memes and intentionally targeted a majority-Black community.
More upgraded blue slimes are on the way: Square Enix just announced that remakes of Dragon Quest 1 and its sequel are launching on October 30th. The two games will be bundled together under the name Dragon Quest I & II HD-2D Remake and will be available on the PS5, Xbox, PC, and both the Nintendo Switch and Switch 2.
This is good news if you’re a fan of the series, given that the remake of DQIII turned out excellent, marrying the classic RPG gameplay with incredible visuals and sound, along with some welcome quality-of-life tweaks. For the new remakes, Square Enix says that “in addition to beautiful HD-2D graphics, a refined battle system, numerous quality-of-life updates, and major story additions, both remakes will feature a variety of new content.” As for what that new content is, details “will be revealed at a later date.”
And if you’re wondering why this remake series kicked off with the third entry in the franchise, it’s because that’s how the storylines play out chronologically. DQIII is set years before its predecessors, and in DQI and II players take control of the descendants of the hero from the third game. It’s a little complicated, but in October newcomers will finally be able to complete the story.
To that end, Square Enix will also be launching a digital-only collection with all three games, called Dragon Quest HD-2D Erdrick Trilogy Collection. It’ll be available for $99.99.
It is as inevitable as a sunrise that in the hours after the Switch 2 launch next week, third-party sites like eBay will be filled with listings for the new console. But in Japan, Nintendo has announced a new partnership with Japanese third-party retail sites to combat fraudulent Switch 2 listings.
Mercari, Yahoo Auctions and Yahoo Flea Market, and Rakuten Rakuma are the participating websites that will, according to a machine translation of Nintendo’s announcement, “proactively remove listings” and establish “a collaborative system for sharing information.” Nintendo says only “fraudulent listings” that violate the sites’ terms of service would be targeted for removal.
In a translated statement on Yahoo Japan’s website, the retailer said there will be a probationary period for the Switch 2 starting June 5th, during which Yahoo Japan will remove listings for the console or suspend the accounts selling them. According to Yahoo Japan, it will revise the probationary period as necessary.
Scalpers are the bane of any console launch. Bots flood online buying queues to snap up units before real humans can click “Add to Cart.” Bot users then take advantage of consumer FOMO by listing consoles for exorbitant prices. Meanwhile, other sellers use deception to trick people into buying what they think is a console but is actually a picture of a box or some other worthless item. (Anti-scalpers would later use a version of this practice for the Switch 2.)
Hopefully, Nintendo’s plan with third-party sites in Japan works well enough to expand it to other regions.
The price of the excellent 20th anniversary collection of Scott Pilgrim graphic novels is only going in one direction: down — way down. That’s good news for anyone who’s looking for an affordable way to get every installment, whether you first got into the series through the 2010 live-action movie, the stellar animated series on Netflix, or thanks to the Streets of Rage-inspired video game.
The graphic novels come in a box that’s doused in PlayStation Portable aesthetic, and they’re available in color or black and white for roughly the same price at Amazon. Normally $249.99, either set costs about $98. It’ll make a great gift for yourself or a pal, and with Father’s Day coming up, some dads might enjoy it, too.
Other notable deals and discounts
How much should a company charge for a last-gen product compared to the current generation? If your answer was “less than half,” then the current price for the Google Pixel Watch 2 at Amazon will seem just about perfect. You can snag a Wi-Fi model for $149.99 in the champagne gold color scheme (which looks less gold than you might imagine). This price is less than half the cost of the Pixel Watch 3, making it a smart gadget to buy if you’re dipping a toe into the world of smartwatches. While it lacks the brighter screen, better battery life, and more exhaustive list of workout features in the Pixel Watch 3, the second iteration is still a great gadget. Read our review of the Pixel Watch 2.
If you’re strongly considering a PS5 or PS5 Pro in light of Sony’s recent comments that it may raise prices due to tariffs, do not buy anything today. Wait just one day, as starting on May 28th, you’ll be able to save on PS5 console bundles and get the first-ever deal on the PS5 Pro ($50 off). Not only that, there will be a nice price drop on the already-discounted PSVR 2 headset and on other hardware. You can read more details regarding the Days of Play deals, which will last through June 11th, here.
HBO’s upcoming Harry Potter series is about to introduce the world to a new group of kids tasked with saving the world.
Today, HBO announced that newcomers Dominic McLaughlin (Grow), Arabella Stanton, and Alastair Stout have been cast in the new JK Rowling-produced Harry Potter series. McLaughlin will play Harry, Stanton will portray Hermione Granger, and Stout is the show’s Ron Weasley. In a statement about the casting, Harry Potter showrunner Francesca Gardiner and director / executive producer Mark Mylod said that they were delighted to have finally found their leads after “an extraordinary search” that saw “tens of thousands of children” audition for the roles.
“The talent of these three unique actors is wonderful to behold, and we cannot wait for the world to witness their magic together onscreen,” Gardiner and Mylod said. “It’s been a real pleasure to discover the plethora of young talent out there.”
Previously, HBO announced that John Lithgow, Janet McTeer, Paapa Essiedu, Nick Frost, Luke Thallon, and Paul Whitehouse had signed onto the new Harry Potter to portray Albus Dumbledore, Minerva McGonagall, Severus Snape, Rubeus Hagrid, Quirinus Quirrell, and Argus Filch. There are still a few more major characters from the series who have yet to be cast, but with the core trio now locked in, we’re one step closer to HBO’s plan for another decade of Harry Potter hype becoming reality.
The Browser Company has said repeatedly that it’s not getting rid of the Arc browser as it moves on to its new AI-centric Dia browser. But what the company is also not going to do is develop new features for it. A new blog post from CEO Josh Miller explains why, and what happens next.
The Arc browser was a big rethink of what browsers should be like, and it has dedicated users, including yours truly. But a lot of the reasons for ceasing Arc’s development that Miller gives in the blog — like that it’s too complicated to go mainstream, that it was slow and unstable at times (true!), or that The Browser Company wants to recenter the experience on AI — he also gave back in October.
Why not just roll Dia into Arc? One big thing Miller mentions is security. Arc has had at least one big security issue: a security researcher discovered a vulnerability last year that The Browser Company quickly patched, but which let attackers insert arbitrary code into a user’s browser session just by knowing their user ID. According to Miller, The Browser Company has now grown its security engineering team from one person to five. This focus is particularly important, he writes, as AI agents — AI systems that carry out tasks autonomously — become more prevalent.
As for what this all means for Arc and its users, Miller still insists that the browser won’t go away. Arc will still get security and bug fixes, and will be tweaked as the Chromium code it’s based on is updated. But he also says The Browser Company isn’t going to open-source or sell Arc, because in addition to Chromium, it’s built on a custom infrastructure that also underpins Dia. He says the company would like to open the browser up someday, but not until “it no longer puts our team or shareholders at risk.”
The Browser Company didn’t immediately respond when The Verge asked whether that same bigger security team is also working to shore up the security of Arc itself. We will update as we learn more.
NPR sued President Donald Trump over his executive order cutting federal funding for NPR and PBS.
The suit, filed in a Washington, DC, federal court by NPR and public radio stations in Colorado, claims that Trump’s effort to slash the broadcasters’ congressionally granted funding is unconstitutional. It also alleges that Trump violated the First Amendment by characterizing NPR and PBS as “biased media” and rescinding their federal funding as a result.
“It is not always obvious when the government has acted with a retaliatory purpose in violation of the First Amendment,” the complaint reads. “‘But this wolf comes as a wolf.’” Trump has accused NPR and PBS of having content that is not “fair, accurate, or unbiased,” the complaint claims, and his and other administration officials’ comments about public broadcasters “only drive home” the executive order’s “retaliatory purpose.”
In an April 1st post on Truth Social, for example, Trump described NPR and PBS as “RADICAL LEFT ‘MONSTERS’ THAT SO BADLY HURT OUR COUNTRY.”
The complaint notes that the Supreme Court recently ruled that “it is no job for government to decide what counts as the right balance of private expression — to ‘un-bias’ what it thinks is biased, rather than to leave such judgments to speakers and their audiences.” The lawsuit alleges that Trump’s executive order “expressly aims to punish and control” NPR’s and PBS’s “news coverage and other speech that the administration deems ‘biased.’”
Beyond the First Amendment issues, the suit claims that Trump is violating a basic tenet of the separation of powers: Congress’ ability to determine how federal funds are spent.
“The president has no authority under the Constitution to take such actions,” the complaint reads. “On the contrary, the power of the purse is reserved to Congress.”
Congress doesn’t fund NPR or PBS directly, instead allocating money to the Corporation for Public Broadcasting (CPB), which then distributes funds to public broadcasters. CPB — a private corporation authorized by congressional statute — receives funding two years in advance.
NPR receives about 1 percent of its annual revenue from CPB. Local stations are more dependent on it, receiving 8 to 10 percent of their annual revenues from the corporation. PBS receives roughly 15 percent of its revenue from CPB.
In a statement to NPR, CPB chief Patricia Harrison said Trump doesn’t have authority over CPB. “Congress directly authorized and funded CPB to be a private nonprofit corporation wholly independent of the federal government,” said Harrison, a former co-chair of the Republican National Committee. Harrison added that when creating CPB, Congress “expressly forbade ‘any department, agency, officer, or employee of the United States to exercise any direction, supervision, or control over educational television or radio broadcasting, or over [CPB] or any of its grantees or contractors.’”
Trump’s executive order is one of several administration efforts to strip public broadcasting of its federal funding. In January, Federal Communications Commission (FCC) chair Brendan Carr launched an investigation into whether NPR and PBS violated FCC guidelines by airing commercials.
“To the extent that taxpayer dollars are being used to support a for profit endeavor or an entity that is airing commercial advertisements,” Carr wrote in a letter to the heads of NPR and PBS, “then that would further undermine any case for continuing to fund NPR and PBS with taxpayer dollars.”
In the midst of tariff concerns and a cost-of-living crisis, Sony has decided to throw a party. Starting May 28th and lasting till June 11th, Sony’s putting on its annual Days of Play summer event in which PS5s, PS5 accessories, and some of the console’s most popular games will go on sale. The sale is Sony’s first time discounting the PS5 Pro, which launched last year.
You can check out the details of the sale on PlayStation’s website, but here’s a short list of what you can get.
In the United States and Canada, you can get a PlayStation 5 console plus Call of Duty: Black Ops 6 Bundle (Digital and Standard) starting at $399.99 USD / $509.99 CAD, a savings of up to $119.99 USD / $159.99 CAD compared to buying each separately.
In select regions such as Europe and Asia, the PlayStation 5 console (Digital and Standard) will be on sale starting at €399.99 / £339.99 / ¥65,980.
Additionally, with single-digit days left until the Switch 2 launches, the PS5’s $399.99 sale price, which includes one of the console’s most popular and most played games, is a cheaper and better deal than the $450 Nintendo wants from you for a Switch 2 (or the $500 it wants for the Mario Kart World bundle). With the announcement of this sale event and with Summer Game Fest looming in the distance, we can probably expect Sony to announce a PlayStation State of Play event sometime soon.
President Donald Trump’s media company could soon own $2.5 billion in Bitcoin. On Tuesday, Trump Media announced that it’s working with “approximately 50 institutional investors” to sell and issue $1.5 billion in stock and $1 billion in convertible notes. The company, which operates Truth Social among other services, will use the proceeds to establish a large holding of Bitcoin.
Trump Media says Crypto.com and the crypto banking platform Anchorage Digital will provide custody services for the company’s Bitcoin treasury.
“We view Bitcoin as an apex instrument of financial freedom, and now Trump Media will hold cryptocurrency as a crucial part of our assets,” Trump Media CEO Devin Nunes said in the press release. “Our first acquisition of a crown jewel asset, this investment will help defend our Company against harassment and discrimination by financial institutions.”
Trump Media announced a fintech subsidiary named Truth.fi earlier this year, and the Trump family is behind crypto startup World Liberty Financial, which has collected millions in deals The New York Times concluded “eviscerated the boundary between private enterprise and government policy.”
The Realme GT 7 and more affordable 7T have launched in Europe and India. | Image: Realme
After launching its GT 7 smartphone exclusively in China last month, Realme has finally announced two global versions of the phone that feature a massive 7,000mAh battery: the GT 7 and the slightly cheaper GT 7T.
The GT 7 comes in black and blue color options and starts at €749.99 for 12GB of RAM and 256GB of storage. There’s also a limited Dream Edition of the GT 7, created in collaboration with the Aston Martin Formula One Team, configured with 16GB/512GB for €899.99. The GT 7T, available in an additional yellow color option, starts at €649.99 for a 12GB/256GB configuration. Both versions are now available for preorder through Realme’s website for the European and Indian markets.
Despite all that battery power, the GT 7 is just 8.3mm thick (the GT 7T is 8.25mm thick). For comparison, the iPhone 16 Pro is 8.25mm thick with a 3,582mAh battery, while the Samsung Galaxy S25 Ultra measures in at 8.2mm with only a 5,000mAh battery.
The Realme GT 7 is powered by a MediaTek Dimensity 9400e processor and features a 6.78-inch AMOLED display with a 2780 x 1264 pixel resolution, 120Hz refresh rate, and 6,000 nits of peak brightness. The phone’s 7,000mAh battery will keep it running for up to three days of light usage, claims Realme, or up to seven hours with intense gaming. There’s no wireless charging, but with its included 120W charger and a USB-C cable the GT 7 can be charged to 50 percent capacity in 14 minutes, while a full charge takes 40 minutes.
On the back of the GT 7 you’ll find a 50MP telephoto camera with 2x optical zoom capabilities, another 50MP camera with optical image stabilization, and an 8MP ultrawide shooter. On the front is a 32MP selfie camera, and an optical fingerprint scanner hidden beneath its display. The GT 7 offers IP69 dust and water resistance; supports Wi-Fi 7, NFC, and Bluetooth 5.4; and launches with Android 15. Realme promises four additional OS updates and six years of Android security updates.
The cheaper GT 7T is instead powered by the MediaTek Dimensity 8400-Max processor, and although it has a slightly larger 6.8-inch screen with a small bump in resolution, the display’s max brightness is limited to just 1,800 nits. The 7T variant also only features two cameras on the back, ditching the GT 7’s telephoto offering for just a 50MP main camera and a second 8MP option with a wide-angle lens.
WhatsApp for iPad is available now on the App Store.
Meta now has a dedicated iPad app for WhatsApp, more than 15 years after the messaging service and the first iPad launched (2009 and 2010, respectively). Available to download today via the App Store, WhatsApp for iPad supports many of the same features as its iPhone counterpart, allowing users to join audio and video calls with up to 32 people, use both the rear and front device cameras, and share their screen with other call participants.
WhatsApp for iPad works with iPadOS features like Stage Manager, Split View, and Slide Over, enabling it to run alongside other applications. That means users can view their messages in a split-screen view while browsing the web or watching videos, making the larger screen more practical for multitasking compared to constantly switching away from WhatsApp on smaller mobile devices.
If you wanted to use WhatsApp on a larger screen before the iPad app, you had to either run the web version in your iPad’s browser or use the desktop apps for Mac or PC. In 2022, WhatsApp head Will Cathcart said that Meta would “love” to develop a native iPadOS WhatsApp experience, noting that “people have wanted an iPad app for a long time.”
The WhatsApp account on X teased on Monday that the app was coming via a not-so-subtle eyes emoji, but there was no indication that it would drop this soon. Meta is also rumored to be developing an Instagram app for iPad that’s optimized for the larger display, but the company hasn’t dropped any hints about that in the way it did for WhatsApp.
Today, Epic announced the impending arrival of “Dance with Sabrina,” a new rhythm-focused gaming mode within Fortnite Festival that will run from May 30th to June 16th. Described as a “reimagined interactive music experience,” Dance with Sabrina will challenge players within the Festival Jam Stage to fill up their heart meters by matching the beat of Sabrina Carpenter songs like “Espresso” and “Bed Chem” as her avatar performs them on stage.
By racking up the most hearts, Festival players will be able to “contribute to the show” during the next song in one of three roles: Dance Leader, Special Effects Pro, or Video Artist. Players who match the most beats throughout the entire event will be pictured along with Carpenter’s avatar in the Dance With Sabrina finale snapshot.
Some Just Dance-style dancing would have been very cool to see in Fortnite, but Dance with Sabrina sounds a bit more like timed button mashing with some emoting thrown in for pizazz. That doesn’t exactly seem like the most entertaining prospect, but this is part of how Epic figures out what works, and what doesn’t.
Today, I’m talking with Alphabet and Google CEO Sundar Pichai. We recorded this conversation in person after the Google I/O developer conference last week in what’s becoming a bit of a Decoder tradition. This is the third year that we’ve done Decoder after I/O, and this one felt really different. Google is in a very confident place right now, and you can really feel that in this conversation.
If you caught any of the news from I/O, you know that it was all about AI — particularly a huge new set of AI products, not just models and capabilities. Sundar told me that these products represent a new phase of the AI platform shift, and we talked about that at length: how that shift is playing out, what the markers of these phases are, and whether any of these products can actually deliver a return on the huge investments Google has made in AI over the years.
Listen to Decoder, a show hosted by The Verge’s Nilay Patel about big ideas — and other problems. Subscribe here!
This year’s I/O also marked the beginning of what appears to be a new era for search and the web. Google’s new vision for search goes well beyond links to webpages to something that feels a lot more like custom app development. When you search for something, Google’s new AI Mode will build you a custom search results page, including interactive charts and potentially other kinds of apps, in real time.
You can see that vision in the new AI Mode, which is now available to all US users. Google’s plan is to “graduate” features from AI Mode into the main search experience over time. I wanted to know how Sundar was thinking about that graduation process and how he thinks that will affect the web itself, which is shaped more than anything by the incentives of Google Search and SEO.
You’ll hear Sundar say in several different ways that the web is still getting bigger and Google is sending more traffic to more websites than ever before, but the specifics of that are hotly contested. Just before we talked, the News Media Alliance — the trade group that represents publishers like Conde Nast, The New York Times, and The Verge’s parent company Vox Media — issued what can only be described as a furious statement, calling AI Mode “theft.” So we talked about that, too, and about what happens to the web when AI tools and eventually agents do most of the browsing for us.
What does it mean for the web when AI tools see it not as a series of websites for people to use but as a series of databases? Why would companies like Uber, DoorDash, or Airbnb allow their businesses to get commoditized in that way? If you’ve been listening to the show, you know I’ve been talking about this idea a lot, so Sundar and I spent some time on this idea; it was a real Decoder conversation.
Of course, we also talked about the smart glasses that Google announced at I/O and when the next era of AI hardware might arrive — including what Sundar thinks of the big OpenAI and Jony Ive deal that was announced just before we spoke. And I couldn’t let this go without asking about the major antitrust trials that Google is involved in, including the government’s demand that Google sell Chrome, and what the negotiations with the Justice Department have involved. President Donald Trump has long complained about his search results being too negative, but Sundar told me that he will not change search rankings in response to political pressure, calling search “sacrosanct.”
There’s a lot in this one — I’m eager to hear what you all think of it.
Okay: Alphabet and Google CEO Sundar Pichai. Here we go.
This transcript has been lightly edited for length and clarity.
Sundar Pichai, you’re the CEO of Alphabet and Google. Welcome back to Decoder.
Nilay, good to be back. Feels like a nice tradition post-I/O to be talking to you. So good to be back.
I think this is the third year we’ve done this after I/O. I’m excited. Thank you for keeping the tradition alive. Lots to talk about. You announced a lot of things yesterday during the keynote. There’s AI Mode rolling out for US users, and big updates to Gemini. There’s Veo 3 and Imagen, the generators that you solved Pokémon with, which is very exciting.
My takeaway from yesterday was that Google feels very confident now. There’s a real confidence about the technology coming to life and the products. A lot of things are shipping imminently. What’s the one piece that gave you that confidence? Is it just the volume of things that are shipping? Is it one technology that clicked into being ready for consumers? Where is it coming from?
I think it comes from the depth and breadth of the AI frontier we are pushing in a more fundamental and foundational way. We spoke a lot about this theme called research becomes reality, but it is… We’ve always felt we are a deep computer science company, and we’ve been AI-first for a while. So putting all that together and bringing it to our products at the depth and breadth is what I think is really pleasing to see. For example, people may not have noticed it much. It was so quick. We spoke about text diffusion models in the middle of the whole thing, but we are pushing the frontier on all dimensions, right? And Demis [Hassabis] spoke about world models. So I think that’s the exciting part, like how deep we are pushing this frontier and then bringing it to users, and maybe that’s what makes it feel that way.
You mentioned research into reality several times. Obviously, a lot of these projects have been cooking in the labs for a long time. You’ve said many times over the past many, many years that you and I have talked about it that you think AI will be as profound as electricity.
But you said something yesterday that I think adds to that, which is that we are in a new phase of the platform shift. People have talked about AI being a platform shift for quite a while, but that always has meant to me that there’s a user interface platform shift coming, right? We’re going to interact with computers in natural language in more natural ways, they’ll interact with us back in that same way, and everything will change. Is that the platform shift?
Yeah, you are right. Each of these platform shifts changes many things on the I/O front. Nothing to do with Google I/O, just I/O in the traditional computer science sense. You could feel it. Yesterday, when I watched the Android XR… I’ve used them and played around with them, but watching it, with two people talking in different languages, you can envision the future one day where it’ll actually be seamless. In a way, you couldn’t have done it with phones, you couldn’t have done it without AI because there’s nothing in your way. You’re looking at the other person and talking, right? And that is an element of platform shift, but there are many more elements.
This is the only platform where I think the actual platform is, over time, capable of creating, self-improving, and so on. In a way, we could have never talked about any other platform before, so that’s why I think it’s much more profound than the other platform shifts. It’ll allow people to create new things because, at each layer of the stack, there’s going to be profound improvements. And so I think that virtuous cycle you get in terms of how you can unleash this creative power to all of society, be it software engineers, be it creators — I think that is going to happen in a much more multiplicative way. So when I say it’s a next phase, I’m talking about that part too.
Let me just make that more concrete for people. I think the last platform shift we all understand is the shift to mobile.
That’s right.
And that was, we’re going to have multi-touch, we’re going to have faster cell connections, we’re going to increase processing power that can go with you everywhere. And then there was a layer of applications that was enabled by all of those things. You can push a button, and a Toyota Camry will show up wherever you are in the world. It’s like a very powerful thing that requires all of those ideas. How would you describe the phase we’re in now compared to that? The phase of this, that first phase of AI was that the transformers work and the models work, and we can all see this capability. The second phase, what is it to you?
Just imagine when the internet came, blogging became a thing. Pre-internet, very few people had a means by which they could put their thoughts out to the world. With the internet came a new medium, which allowed people to create and express themselves in a new way. With mobile came cameras, and you could shoot and you could create videos. Look at what’s happened with YouTube. For me, a similar part of this is that we are all talking about things like vibe coding. Yesterday, you saw Veo 3, so we are now in that phase. I think people are going to be able to create AI applications, you can call it, vibe coding, there are many names for it. But that power is yet to be unleashed. We are barely scratching the surface, and these models aren’t quite there. You can kind of do one-shot coding, but you really need to be a programmer to iterate and create something with polish, right? But that frontier is evolving pretty rapidly.
So I think you’re going to see a new wave of things, just like mobile did. Just like the internet did. I came to Google at the time when AJAX was the revolution, the fact that the web became dynamic. You had things like Google Maps, Flickr, and Gmail, that all suddenly came into existence. But I think AI is going to turbocharge in a way we haven’t seen before.
It feels like what you’re describing is that we’re in the phase where the products are developed, right? The capabilities were the first phase, and now we’re going to make some actual products.
And more people can build products than ever before. That’s the multiplicative part I’m talking about. Not just this platform that helps you create more products. The process of creating, developing, etc., is going to be accessible to a much wider swath of humanity than ever before.
I’m wondering, when you look at the landscape of products that exist now, most people experience AI in Gemini or ChatGPT as a chatbot. It’s a general-purpose interface to a bunch of knowledge that will talk to you. What products do you see that will have the same kind of impact as the Web 2.0 products you were talking about, besides the general-purpose chatbot?
Well, obviously, you’ve seen a wave with coding IDEs, like that entire landscape is… I can’t even keep track of how many new companies have come into it, and people are using a lot of it. And yesterday we showed a bunch of partners with whom we are working. So that’s an area where AI is making the most progress. You’re seeing the application layer, at least in terms of code editors, really come into vogue. We’ve had success with NotebookLM. We launched Flow yesterday. Flow is a new [AI video] product that allows you to create and imagine.
So those are all the applications we are doing. I think others are beginning to do it. People are working on legal assistance, and there are all kinds of startups. I was recently in a doctor’s office, and they have an AI that transcribes the whole thing, puts it all in reports, and so on. That’s an enterprise application layer. It kind of works completely differently from when I went on a visit two years ago. So all that change is happening across the board, but I think we are just in the early stages. You will see it play out over the next three to five years in a big way.
Did you ask your doctor what model their transcription software is running on?
[Laughs] No, I didn’t. No, I didn’t.
One of the reasons I’m asking this, and I’m pushing on this, is that the huge investment in the capability from Google and others has to pay off in some products that return on that investment. NotebookLM is great. I don’t think it’s going to fully return on Google’s data center investment, let alone the investment in pure AI research. Do you see a product that can return on that investment at scale?
Do you think in 2004, if you had looked at Gmail, which was a 20% project that people were using internally as an email service, you could have predicted that it would lead us to do Workspace, or get into the enterprise? I made a big bet on Google Cloud, which is tens of billions of dollars in revenue today. And so my point is that things build out over time. Think about the journey we have been on with Waymo. I think one of the mistakes people often make in a period of rapid innovation is thinking about the next big business versus looking at the underlying innovation and asking, "Can you build something and put out something which people love and use?" And out of that, you do the next thing, and create value from it.
So when I look at it, AI is such a horizontal piece of technology across our entire business. It’s why it impacts not just Google search, but YouTube, Cloud, and all of Android. You saw XR, etc., Google Play, things like Waymo, and Isomorphic Labs, which is based on AlphaFold. So I’ve never seen one piece of technology that can impact and help us create so many businesses. AI is going to be so useful as an assistant. I think that people will be willing to pay for it, too. We are introducing subscription plans, and so there’s a lot of headroom ahead, I think. And obviously, that’s why we are investing, because we see that opportunity. Some of it will take time, and it may not always be immediately obvious.
I gave the Waymo example. The sentiment on Waymo was quite negative three years ago. But actually, as a company, we increased our investment in Waymo at that time, right? Because you’re betting on the underlying technology and you’re seeing the progress of where it’s going. But these are good questions. In some ways, if you don’t realize the opportunities, that may constrain the pace of investment in this area, but I’m optimistic we’ll be able to unlock new opportunities.
One reason I wanted to start here as the foundation of the conversation is that you showed off Android XR yesterday. You showed off some prototype glasses, and you have some partners making glasses. A lot of people think augmented reality glasses powered by AI will be the realization of the full platform shift, right?
You will have an always-on assistant that can look at the world around you. You showed some of those demos yesterday. The form factor will change, and the interface will change. The market will be as big as smartphones were. How close do you think we are to that as a mainstream product?
It was a nice reflective moment all the way back from Google Glass. Wearing the product, I think there’s a difference between goggles and glasses. Everyone at The Verge understands as well, but obviously, we are also shipping goggles. We have announced products with Samsung to come later this year.
On the XR side, I’m excited about our partnership with Gentle Monster and Warby Parker. We’ll have products in the hands of developers this year, but I think those products will be pretty close to what people eventually see as final products. I’m excited. I think the pace is actually pretty palpable. I’d be shocked if you and I were sitting here next year and I wasn’t wearing one of those while I’m doing this.
Do you think that will be a mainstream iPhone-level replacement product? Because there’s a lot of hardware that needs to be developed along the way to pull that off.
You’re wearing something on your face. I have a prescription. The bar is higher in terms of making the experience seamless enough that you’re willing to wear it on your face and enjoy it. I don’t necessarily think next year it will be as mainstream as what you’re talking about, but would millions of people be trying it? I think so, yeah. Both are true.
I have to ask you… Just before we sat down, OpenAI announced that it was acquiring io, a company Jony Ive had started, and that Ive and his design consultancy LoveFrom would take over design. They didn’t announce a product, but they said it’s the future of computing and it’s coming next year. Do you anticipate more of that competition, that your competitors who don’t have a smartphone operating system will go even harder in this direction?
I’m looking forward to an OpenAI announcement ahead of Google I/O, the night before. First of all, look, stepping back, I mean Jony Ive is one of a kind. You look at his track record over the years, I’ve met him only once or twice, but I’ve admired his work, obviously like so many of us. I think it’s exciting. This is why I feel like there’s so much innovation ahead, and I think people tend to underestimate this moment. In some ways, I always like to point out that when the internet happened, Google didn’t even exist.
I think AI is going to be bigger than the internet. There are going to be companies, products, and categories created that we aren’t aware of today. I think the future looks exciting. I think there’s a lot of opportunity to innovate around hardware form factors at this moment with this platform shift. I’m looking forward to seeing what they do. We are going to be doing a lot as well. I think it’s an exciting time to be a consumer, it’s an exciting time to be a developer. I’m looking forward to it.
Ive, in that video, described the phone and the laptop as legacy platforms, which is very interesting considering his own history. Are you all the way there that the phone and the laptop are legacy platforms?
I think these things, if anything, I’ve found through this AI moment that I’m using the web a lot more, because it’s easier to create a Veo 3 video in my browser on a big screen, right? And so I think the way I’ve internalized this, computing will be available, and you don’t have to make these hard choices. Computing will become so essential to you. You’re going to have it in multiple ways around you when you need it, right? I use a phone, a tablet, a laptop, and I have my workstation. And so I have the breadth of it, but over time… It makes sense to me that at some point in the future, consuming content by pulling out this black glass display rectangle in front of you and looking at it is not the most intuitive way to do it, but I think it’s going to take some time.
I feel like we could do a full hour just on Android tablets and where they could go. We’re going to come back for that.

A big part of what you’re describing implicates search in really big ways, right? We’re going to be surrounded by information. Search, Gemini, or some future Google product will organize that information, take action for you across the web in some way, and you will have a companion. Maybe you only pull out your tablet to watch a video or something. A lot of what’s going on with search has downstream effects on the web, and downstream effects on information providers broadly. Last year, we spent a lot of time talking about those effects. Are you seeing that play out the way that you thought it would?
It depends. I think people are consuming a lot more information, and the web is one specific format. We should talk about the web, but zooming back out, there are new platforms like YouTube and others. I think people are just consuming a lot more information, right? It feels like an expansionary moment.
I think there are more creators, and people are putting out more content. And so people are generally doing a lot more. Maybe people have a little extra time on their hands, and so it’s a combination of all that. On the web, look, things that have been interesting and… We’ve had these conversations for a while. Obviously, in 2015, there was this famous meme, “The web is dead.” I always have it somewhere around, and I look at it once in a while. Predictions… It has existed for a while. I think the web is evolving pretty profoundly. I think that is true. When we crawl and look at the number of web pages available to us, that number has gone up by 45% in the last two years alone, right? That’s a staggering thing to think of.
Can you detect if that volume increase is because more pages are generated by AI or not? This is the thing I may be worried about the most, right?
It’s a good question. We generally have many techniques in search to try and understand the quality of a page, including whether it was machine-generated, etc. That doesn’t explain the trend we are seeing.
Generally, there are more web pages, right? At an underlying level, I think that’s an interesting phenomenon. I think everybody who is a creator, like you are at The Verge, has to do everything in a cross-platform, cross-format way. I look at the quality of video content you put out; it’s very sophisticated and very different from how The Verge used to be, maybe five to 10 years ago, right? It has profoundly changed. I think things are becoming more dynamic and cross-format. I think another thing people are underestimating with AI is that AI will make it zero friction to move from one format to another, right? Because our models are natively multimodal. We tease people’s imagination with audio overviews in NotebookLM, right? The fact that you can throw a bunch of documents at it, you have a podcast, and you can join and learn from it.
I think this notion, the static moment of producing content by format, whereas… I think machines can help translate it. It’s almost like different languages and they can go seamlessly between. I think it’s one of the incredible opportunities to be unlocked right ahead. But maybe, I didn’t want to drift from the question we were having. Look, I think people are producing a lot of content, and I see consumers consuming a lot of content. And we see it in our products; others are seeing it too. That’s how I would probably answer at the highest level.
The way I see it currently is that the web is at an all-time high as an application platform, right? The fact that Figma exists and is as successful as it is, and its primary interface as a web app is, I think, remarkable. A lot of the products you are talking about are expressed as web apps. Even some of the most interesting search results you showed yesterday are how Google would generate a custom web app for you and display it in a search result to do some data visualization. I think that’s all looking incredible.
I think the web as a transaction platform is reaching new highs, especially with rulings that mean smartphone makers have to let people push transactions to the web. There’s something very interesting happening there. As a media platform, it feels like it’s at an all-time low, right?
Do you mean the web as a media platform?
The web as a media platform, as an information platform. If I were starting The Verge today with 11 of my co-founders and friends, we would start a TikTok channel, and we might start a YouTube channel. We would definitely not start a website with the dependencies we have as a website today. And that’s the dynamic that it feels like AI is pushing on even harder.
I’m not fully sure I agree, right? I think if you were to go and restart The Verge again, I bet you would have an extraordinary web presence.
At best, no, I’ve thought about this a lot. I think at best our web presence would look like a Substack or a Ghost or something, right?
Maybe. I’m not fully sold on that, but you know the space; I acknowledge that you know that space better than I do. I don’t have the intuition that you do here. But look, you say the web application platform is at an all-time high, and I was vibe coding with Replit a few weeks ago. The power of what you’re going to be able to create on the web is something we haven’t given to developers in 25 years.
That is going to come ahead. It’s not exactly clear to me. Maybe today you’re looking at it and say, I wouldn’t put all the investment in because it looks like a lot of investment to do that. But that may not be true two years out, right? If you feel like you would create a TikTok channel, then maybe with 2% extra effort you could have a robust web presence. Why wouldn’t you, right? And so I’m not fully sold on it, but I think it’s a good question to ask. But you have to somehow reconcile that with the fact that overall, web traffic seems to be growing. We see more web pages. Somewhere, we have to explain all of that, too.
The publishers, as they often do, responded to the Google I/O announcements. The News Media Alliance, after AI mode was announced yesterday, is, I would say, very upset. Here’s a statement from the president of the News Media Alliance: “Links were the last redeeming quality of search that gave publishers traffic and revenue. Now, Google takes content by force and uses it with no return, no economic return. That’s the definition of theft.” They go on to say the DOJ and lawsuits must address it. That’s pretty furious. That’s not a negotiation, right? That’s a “We just want this to stop.” How do you respond to that very loud set of people who say, “Yeah, okay, maybe it’s growing somewhere, but for us, it’s crushing our businesses”?
First of all, through all the products, AI mode is going to have sources. And we are very committed as a direction, as a product direction to make… I think part of why people come to Google is to experience that breadth of the web and go in the direction they want to, right? So I view us as giving more context. Yes, there are certain questions that may get answers, but overall… And that’s the pattern we see today, right? And, if anything over the last year, it’s clear to us that the breadth of area we are sending people to is increasing. And so I expect that to be true with AI mode as well.
But if it was increasing, wouldn’t they be less angry with you?
You’re always going to have areas where people are robustly debating value exchanges, etc., like app developers and platforms. That’s not on the web, etc. There’s always going to be — when you’re running a platform — these debates. I would challenge, I think more than any other company, we prioritize sending traffic to the web. No one sends traffic to the web in the way we do. I look at other companies, newer emerging companies, where they openly talk about it as something they’re not going to do. We are the only ones who make it a high priority, agonize, and so on. We’ll engage, and we’ve always engaged.
There are going to be debates through it all, but we are committed to, I’ve said this before, everything we do across all… You will see us five years from now sending a lot of traffic out to the web. I think that’s the product direction we are committed to. I think it’s what users look for when they come to Google, and the nature of it will evolve. But I am confident that that’s the direction we’ll continue taking.
Is there public data that shows that AI overviews and AI mode actually send more traffic out than the previous search engine results page?
The way we look at it is… I mean, obviously, we take a lot of… We are definitely sending traffic to a wider range of sources and publishers. And just like we’ve done over 25 years, we’ve been through the same with featured snippets. It’s higher-quality referral traffic, too. And we can observe it, because the time that people spend is one metric, and there are other ways by which we measure the quality of our outbound traffic, and it’s also increasing. And overall, through this transition, I think AI is also growing, and the growth compounds over time. So whenever we have worked through these transitions, it ends up positive. That’s how Google has worked for 25 years, and we end up sending more traffic over time. So that’s how I would expect all this to play out.
So why do you think that there is so much general economic turmoil on that side of the house? If you’re sending more traffic and the goal over time is to make sure that that works… We’re a year into it, and it doesn’t seem to have gotten better over there.
No, look, we are sending traffic to a broader source of people. People may be surfacing more content, looking at more content, so someone may individually see less. There are all kinds of… At the end of the day, we are reflecting what users want. If you do the wrong thing, users won’t use our product and go somewhere else. And so you have all these dynamics underway, and I think we have genuinely… We took a long time designing AI overviews, and we are constantly iterating in a way that we prioritize this, sources, and sending traffic to the web.
I mean, my criticism of this industry, just to be clear, is that everyone’s addicted to Google, and it would be better if they weren’t. But they’re addicted to Google, right? And they’re feeling it. And then on top of that, you see… You’ve mentioned several times that overall queries are increasing on Google surfaces, but they’re changing. They’re getting longer, they’re getting more complicated. AI mode might walk you through several steps. Maybe some people are searching on TikTok now.
Eddy Cue on the stand in the trial the other day said that search in Safari for the last month dropped for the first time in 22 years. That’s a big stat. Obviously, your stock price was affected by it. There was a statement. Is that trend bearing out that the standard Google search is dropping from devices, and different kinds of searches are increasing?
No. We’ve been very clear. We’re seeing overall query growth in search.
But have you actually seen the drop in Safari?
We have a comprehensive view of how we look at data across the board. There can be a lot of noise in search data, but everything we see tells us that we are seeing query growth, including across Apple’s devices and platforms. Specifically, I think we quantified the query growth from AI overviews. And what’s healthy is that the query growth is continuing to grow over time.
So to step back, and this is what I’ve said before, it feels very far from a zero-sum game to me. I said this last year. I think people are… It’s interesting we spoke about TikTok, right? Think about how profound of a new product TikTok was. How has YouTube done since TikTok has come, right? You could ask all these questions there. Why is it that TikTok arrives and YouTube has grown? And so I think what we always underestimate in these moments is that people are engaging more, doing more with it. We are improving our products. And so that’s how I would think about these moments.
Let me just broaden that out to agents. I watched Demis Hassabis yesterday. He was on stage with Sergey Brin and Alex Kantrowitz asked him, “What does the web look like in 10 years?” And Demis said, “I don’t know that an agent-first web looks anything like the web that we have today. I don’t know that we have to render web pages for agents the way that we have to see them.”
That kind of implies that the web will turn into a series of databases for various agents to go and ask questions to, and then return those answers. And I’ve been thinking about this in the context of services like Uber, DoorDash, and Airbnb. Why would they want to participate in that and be abstracted away for agents to use the services they’ve spent a lot of time and money building?
Two things, though. First, there’s a lot to unpack; it’s a fascinating topic. The web is a series of databases, etc. We build a UI on top of it for all of us to consume.
This is exactly what I wanted, “the web is a series of databases.”
It is. But I listened to the Demis and Sergey conversation yesterday, and I enjoyed it. I think he’s saying that for an agent-first web, for a web that is interacting with agents, you would think about how to make that process more efficient. Today, you’re running a restaurant: people are coming in, dining and eating, and people are ordering takeout and delivery. Obviously, to serve the takeout, you would think about it differently than all the tables, the tablecloths, and the furniture. But both are important to you.
You could be a restaurant that decides not to participate in the takeout business. I’m only going to focus on the dining experience. You’re going to have some people that are vice versa. I’m going to say, I’m going to go all in on this and run a different experience. So, to your question on agents… I think of agents as a new powerful format. I do think it’ll happen in enterprises faster than in consumer. In the context of an enterprise, you have a CIO who’s able to go and say, “I really don’t know why these two things don’t talk to each other. I am not going to buy more of this unless you interoperate with this.” I think it’s part of why you see, on the enterprise side, a lot more agentic experiences. On the consumer side, I think what you’re saying is a good point. People have to think about and say, “What is the value for me to participate in this world?” And it could come in many ways. It could be because I participated in it, and overall, my business grows.
Some people may feel that it’s disintermediating, and doesn’t make sense. I think all of that can happen, but users may work with their feet. You may find some people are supporting the agent experience, and your life is better because of it. And so you’re going to have all these dynamics, and I think they’re going to try and find an equilibrium somewhere. That’s how everything evolves.
I mean, I think the idea that the web is a series of databases and we change the interface… First of all, this is the most Decoder conversation that we’ve ever had. I’m very happy with it. But I had Dara [Khosrowshahi] from Uber on the show. I asked him this question from his perspective, and his answer tracks with yours broadly. He said, first, we’ll do it because it’s cool and we’ll see if there’s value there. And if there is, he’s going to charge a big fee for the agent to come and use Uber.
Because losing the customer for him, or losing the ability to upsell or sell a subscription, none of that is great. The same is true for Airbnb. I keep calling it the DoorDash problem. DoorDash should not be a dumb pipe for sandwiches. They’re actually trying to run a business, and they want the customer relationship. And so if the agents are going across the web and they’re looking at all these databases and saying, okay, this is where I get food from, and this is where I get cars from, and this is where I book… I think the demo was booking a vacation home in Spanish, and I’m going to connect you to that travel agent.
Is it just going to be tolls that everyone pays to let the agents work? The CIO gets to just spend money to solve the problem. He says, “I want this capability from you. I’m just going to pay you to do it.” The market, the consumer market, doesn’t have that capability, right?
Well, look, all kinds of models may emerge, right? I can literally envision 20 different ways this could work. Consumers could pay a subscription for agents, and the agents could revenue share back. So that is the CIO-like use case you are talking about, that’s possible. We can’t rule that out. I don’t think we should underestimate… People may actually see more value in participating in it. I think this is… It’s tough to predict, but I do think that over time, if you’re removing friction and improving user experience, it’s tough to bet against those in the long run. And so I think if you are lowering friction for it and then people are enjoying using it, somebody’s going to want to participate in it and grow their business. And would brands want to be in retailers? Why don’t they sell directly today? Why won’t they do that?
Because retailers provide value in the middle. And why do merchants take credit cards? Why… I’m just saying. So there are many parts, and you find equilibrium because merchants take credit cards so they see more business as part of taking credit cards than not, which justifies the increased cost of taking credit cards. It may not be the perfect analogy, but I think there are all these kinds of effects going around, and so… But what you’re saying is true. Some of this will slow progress in agents just because we all are excited about Agent2Agent (A2A) and Model Context Protocol (MCP)… And we think no, some of it will slow progress, but I think it’ll be very dynamic. Yeah.
Yeah. There are other pressures on Google. There are antitrust pressures. The government would like you to sell Chrome. Can you do all the things you want to do if you’re made to sell Chrome?
I don’t want to comment on… We are in a legal process. I look at having been directly involved in building Chrome. I look at… I think there are very few companies that would’ve… We not only improved our product, but we also improved the state of the web by building Chrome. We open-sourced it. We provided Chromium. Everyone else has access to the browser. So I think the amount of R&D, the amount of innovation we put into it, the investments in security, etc., we do, so I think we-
But if you’re made to sell it, can you do all the things that you want to do?
I don’t think that’s the scenario we’re looking at, but stepping back… Look, I think as a company, we think of ourselves as a deeply foundational technology company, which then translates into products that touch billions of people. So we do it across many, many things. And so, of course, I think, look, as a company, we’re going to continue investing and doing our best to innovate and build a successful business in all scenarios. So this is how I would answer it.
The Trump administration is extremely transactional, I would say. The tech industry has a new relationship with Trump in his second term. You were at the inauguration. Have you had conversations about what a settlement might look like and what the Trump administration might demand to make these problems go away?
We’ve engaged with the DOJ, like we did over the years, in the context of all the cases we have. So that’s how we normally do these conversations.
Trump has very publicly said he doesn’t like his search ranking, and he wants it changed in some way. Would you ever adjust the search ranking for Donald Trump?
No. Look, we have a… I can’t… Today, the way Google Search works is that I cannot… No person at Google can influence the ranking algorithm.
AI mode is different, right? We’ve seen system prompts adjusted in very chaotic ways from some of your competitors. Is that something that you would be open to in a world where you’re serving the full answer? Would you adjust the AI mode responses in response to political pressure?
No.
Because we’ve certainly seen in Grok and others, the system prompts change the answers in dramatic ways.
The way we do ranking is sacrosanct to us. We’ve done it for over 25 years. We make a lot of… There are a lot of ranking signals we take into account and stuff. And if there’s broad feedback from people that something isn’t working, we will look at it systematically and try and make changes, but we don’t look at individual cases and change the rankings.
When you think about those sources of information, one of the things that I have been thinking about a lot is, I don’t know, how the CDC web pages have changed a lot recently. Diversity, equity, and inclusion language has been removed from pages across the government. Those used to be very high-ranking sources in Google Search. We just implicitly trusted the CDC’s web pages in some ways. Are you re-evaluating that? How there might be misinformation on some of these pages that then gets synthesized into AI results?
Oh, it’s a misunderstanding of how search works. We don’t individually evaluate the authoritativeness of a page right then. It’s what our signals do. Obviously, our signals are multiple orders of magnitude more complicated than PageRank today. But to use PageRank as an example, we weren’t the ones determining how authoritative a page is. It’s how other pages were linking to it, like an academic citation, etc. So we are not making those decisions today. And so I don’t see that changing.
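The PageRank idea Pichai invokes, that a page's authority comes from how other pages link to it, like academic citations, can be sketched as a simple power iteration. This is an illustrative toy of the classic published algorithm, not Google's production ranking; the example graph, damping factor, and function name are all made up for the demonstration.

```python
# Toy PageRank: a page's score is earned from inbound links, not assigned
# by hand. The damping factor models a reader who occasionally jumps to a
# random page instead of following links.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform scores

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Each page splits its current rank among the pages it cites.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank


# Hypothetical three-page web: two sites cite each other, a blog cites both.
graph = {
    "cdc.gov": ["who.int"],
    "who.int": ["cdc.gov"],
    "blog": ["cdc.gov", "who.int"],
}
ranks = pagerank(graph)
```

Because "blog" receives no inbound links, it ends up with the lowest score, while the two mutually cited pages rank higher: the algorithm itself never decides which page is authoritative, only the link structure does, which is the point Pichai is making.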
As you synthesize more of the answers, do you think you’re going to have to take more responsibility for the results?
We are giving context around it, but we’re still anchoring it in the sources we find. But we’ve always felt a high bar at Google. I mean, last year when we launched AI Overviews, I think people were adversarially querying to find errors, and the error rate was one in 7 million for adversarial queries, and so… But that’s the bar we’ve always operated at as a company. And so I think to me, nothing has changed. Google operates at a very high bar. That’s the bar we strive to meet, and our search page results are there for everyone to see. With that comes natural accountability, and we have to constantly earn people’s trust. So that’s how I would approach it.
What do you think the marker is for the next phase of the platform shift after this one? We opened by talking about how we’re in a second phase. What’s the marker for the final phase, or the third phase?
Of the platform shift, do you mean?
Yeah.
Of the AI platform?
What are you looking for as the next marker?
I think the real thing about AI, which I think is why I’ve always called it more profound, is self-improving technology. Having watched AlphaGo start from scratch, not knowing how to play Go, and within a couple of hours or four hours, be better than top-level human players, and in eight hours, no human can ever aspire to play against it. And that’s the essence of the technology, obviously in a deep way.
I think there’s so much ahead on the opportunity side. I’m blown away by the ability to discover new drugs, completely change how we treat diseases like cancer over time, etc. The opportunity is there. The creative power, which I talked about, which we’re putting in everyone’s hands, the next generation of kids, everyone can program and will… If you think of something, you can create it. I don’t think we have comprehended what that means, but that’s going to be true. The part where the next phase of the shift is going to be really meaningful is when this translates into the physical world through robotics.
So that aha moment of robotics, I think, when it happens, that’s going to be the next big thing we will all grapple with. Today they’re all online and you’re doing things with it, but on one hand… Today, I think of Waymo as a robot. So we are running around driving a robot, but I’m talking about a more general-purpose robot. And when AI creates that magical moment with robotics, I think that’ll be a big platform shift as well.
Yeah. I’m looking forward to it. Next year we’re going to do this with glasses and robots. It’s going to be great.
[Laughs] We’ll give it a shot.
Thank you so much, Sundar.
All right. Thanks, Nilay. I appreciate it.
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!
Up to four Z100 speakers (pictured) can be wirelessly connected to supporting TVs.
A Dolby Atmos feature that makes it easier to create a spatial audio setup is coming to select TCL TVs and speakers. TCL is adding support for Dolby Atmos FlexConnect to QD-Mini LED TVs in its 2025 Precise Dimming Series, including the QM8K, QM7K, and QM6K, as well as launching its new Z100 smart speaker to pair with the feature.
FlexConnect allows users to connect their TV with specialized external speakers that can be placed anywhere in a room, providing Atmos-enabled audio without requiring a symmetrical setup. The feature aims to provide an alternative solution to installing a traditional surround-sound system, which can be restricted by room size, outlet locations, and mounting or cabling requirements.
The TCL TVs and wireless speaker are the first devices with FlexConnect support to be announced for the US, with the Z100 having already launched in China for CN¥1,499 (about $208). US pricing has not been confirmed yet. TCL says that up to four Z100 speakers can be connected to supporting FlexConnect TVs in the US “starting this summer.”
The concept is similar to other wireless Dolby Atmos products like the $2,500 Sony Bravia Theater Quad, but it should be considerably more affordable. Also, while the Theater Quad is only available in gray, the Z100 comes in gray, teal, and beige finishes, which may be easier to coordinate with your home decor.
The Nintendo Switch is on the cusp of becoming Nintendo's bestselling hardware ever. In retrospect, it's easy to see why: it's a device that seamlessly transitions from a home console to a handheld, erasing the distinction between the two. It's been so successful that Nintendo isn't changing all that much with the Switch 2. But both consoles are well-executed versions of ideas Nintendo has been working on since the failed Wii U - and maybe even earlier.
Purely by sales numbers, the Wii U was a flop. The Switch has sold more than 150 million units in its eight-year lifetime. The Wii U, by comparison, sold 13.56 million units - less than a 10th of what the Switch did - making it Nintendo's worst-selling home console.
As a result, it had a much shorter lifespan, launching in late 2012 before being superseded by the Switch about four and a half years later. But many of its ideas and its games were not only excellent, they were also well ahead of their time, and, in some ways, predicted several modern trends in gaming.
The Wii U, with its tablet controller, also worked as a device that could play games on a TV or in handheld mode. The ability to see a game on a big screen or curl up …
The European Union has warned Shein that several of its practices violate EU consumer protection law, including by offering “fake discounts” and pressuring customers into completing purchases with phony deadlines. This week, the EU’s Consumer Protection Cooperation (CPC) Network said the fast-fashion retailer needs to bring its practices in line with the law, or else face a fine.
In addition to showing price reductions that don’t accurately reflect a product’s previous price, the investigation by the CPC found that Shein uses “deceptive” product labels to trick users into thinking an item comes with special characteristics when “the relevant feature is required by law.”
The investigation also found that Shein shows “false or deceptive” sustainability claims, as well as “incomplete and incorrect” information about a customer’s rights to return goods and get refunds. The EU says Shein hides its contact information, making it difficult for customers to get in touch with the company.
Last year, the EU designated Shein a “very large online platform,” which means it’s subject to the rules under the Digital Services Act (DSA). The law requires sites like Shein, Amazon, AliExpress, Meta, and TikTok to remove illegal goods, services, or content from their platforms. Companies that break the rules could face fines of up to 6 percent of their global turnover.
The EU also asked for more information from Shein on how it’s complying with laws against misleading product rankings, reviews, and ratings. Shein has one month to comply with the EU’s request.
Meta has a well-earned reputation as the fastest follower in tech. Did your startup launch a cool feature that people like? Before you can say "Series B," Meta will have built something eerily similar, embedded it into its vastly more popular platforms, and eaten your lunch. This probably isn't how CEO Mark Zuckerberg would like to see his empire, but it's genuinely an asset; few companies have done as good a job of identifying and jumping on trends.
Sometimes, though, jumping on trends means spending billions to acquire Instagram and WhatsApp. And sometimes that lands you in an antitrust trial against the FTC. Over the last several weeks, in a courtroom in DC, executives and experts have been asked whether Meta bought those companies and helped them achieve greatness, or bought them to prevent them from doing so. Which argument Judge James Boasberg ultimately believes will have huge ramifications for the whole industry.
On this episode of The Vergecast, The Verge's Lauren Feiner takes us through what we've learned so far in the trial. (Lauren has been in the courthouse for virtually every day of testimony - we finally caught her on a day off.) She explains why WhatsApp is s …
The India-only OnePlus 13S (pictured) comes with some goodies that global OnePlus users will eventually benefit from.
While the latest phone in the OnePlus 13 lineup is exclusive to India, the AI features that it comes with will be making their way to US users. The most notable is Plus Mind, which allows users to easily save information to a dedicated “Mind Space” instead of manually logging important dates or appointments.
Schedules, reservations, event information, and other data can be extracted from images or text and automatically added to a user’s calendar, then quickly retrieved later using OnePlus’ AI search feature. OnePlus says a new Plus Mind feature rolling out “later this year” will automatically categorize saved information to help improve organization.
Plus Mind will initially be available on the compact OnePlus 13S that’s launching in India on June 5th, a variant of the 13T that was recently released in China. Both phones, which are unavailable in the US and Europe, have the new customizable Plus Key button, which replaces OnePlus’ signature Alert Slider. Like the Alert Slider, Plus Key can be used to switch between normal, mute, and vibrate-only modes, alongside other actions like launching the camera, recording audio, or translation.
Plus Mind will be rolled out to the rest of the OnePlus 13 series via a future software update, and the company says that more devices will follow. Plus Mind can be activated with the new Plus Key button or an upward three-finger swipe on older phones that lack the Plus Key, such as the OnePlus 13. Every OnePlus phone launching this year is expected to feature the new Plus Key.
Alongside Plus Mind, OnePlus is developing a suite of AI tools, including an AI translation app that combines text, live voice, camera-based, and screen translation capabilities, and an image-reframing tool that analyzes photos to suggest the best cropping options. Another AI editing tool that’s rolling out “this summer” will be able to automatically correct facial expressions and blinking.
Two features that are only launching in India are AI VoiceScribe, which OnePlus says can “record, summarize, and translate calls and meetings” directly within messaging and meeting apps, and AI Call Assist, which provides automatic summaries and real-time translation during calls. OnePlus global PR manager James Paterson told The Verge that “OnePlus is working to bring these features to other regions in the future, subject to local AI regulations.”
Update, May 27th: added availability information from OnePlus.
There are multiple ways to unlock the Tapo DL100, but it lacks a fingerprint or palm reader. | Image: TP-Link
TP-Link has announced a new budget-friendly electronic door lock joining its Tapo smart home line that’s launching today for $69.99. Despite the entry-level pricing, the new Tapo DL100 Smart Deadbolt Door Lock offers multiple unlocking methods, including a keypad, physical key, the Tapo mobile app, and voice commands through smart assistants like Amazon’s Alexa, Google Assistant, and Samsung SmartThings.
The Tapo DL100 is about $10 cheaper than the Wyze Lock Bolt, which we selected as the best budget smart lock in our buyer’s guide. The DL100 doesn’t offer the convenience of a fingerprint scanner like the Lock Bolt, but it does come with Bluetooth and Wi-Fi. Wyze’s budget option is Bluetooth-only unless you pair it with the company’s video doorbell.
With Wi-Fi turned on, TP-Link estimates that the DL100 will run for up to seven months on a set of four AA batteries, but that can be extended to up to 10 months when only using Bluetooth. Should the batteries die unexpectedly and you’ve lost your key, the smart lock can be temporarily powered using a USB-C port.
For enhanced security, the DL100 supports a feature called pin masking, which lets you enter your access code as part of a longer string of numbers to help obscure your PIN. You can also set up one-time codes for visiting guests, or codes for service workers like repair technicians that only work during specific times of the day.
The DL100 is also compatible with Tapo’s smart doorbell, giving you the option to unlock the door while seeing a live preview of who’s standing on your porch. The mobile app also sends notifications when the door has been locked and unlocked, and it keeps detailed logs — including per-user entries — based on which codes were used. The Tapo DL100 also comes with an IP54 weather-resistance rating, so it can survive getting wet during a rainstorm or an accidental blast from a hose.
A hinged design lets PopSockets’ new grip double as a more versatile phone stand. | Image: PopSockets
PopSockets is launching a new version of its MagSafe grip that can be used to prop a phone up vertically for scrolling TikTok or video calls. The PopSockets Kick-Out Grip and Stand is less than a millimeter thicker than the company’s current lineup of MagSafe PopGrips, according to the company’s founder, David Barnett, but introduces a hinge so the pop-up grip can now fold out and double as a support stand.
The Kick-Out Grip and Stand is available starting today through the PopSockets online store for $39.99 in colors that include black, dusk, putty, and latte. It’s also available through Best Buy’s website and retail locations, which offer two exclusive color options: French navy and silver, both featuring a shiny metal finish. If those colors don’t work for you, as with previous iterations of the PopSockets, the collapsible grip on the new Kick-Out can be removed with a twist and replaced with alternates in different colors or designs.
It’s compatible with iPhones and cases that support Apple’s MagSafe feature, as well as Android devices and accessories that support the Qi2 standard’s Magnetic Power Profile. For older devices lacking wireless charging, or those using protective cases that are too thick for MagSafe to work effectively, an included adhesive adapter makes the Kick-Out Grip and Stand compatible with nearly any smartphone.
PopSockets’ new grip is thicker than competitors’ products like the OhSnap Snap 4, and it needs to be completely removed if you want to use a wireless charger. But the Kick-Out is more comfortable to hold than the Snap 4, can be personalized, and offers more functionality.
Although the Kick-Out Grip and Stand is the first modern version of the PopSockets grip that can be used to prop a phone up vertically, it’s technically not the very first. After gluing a couple of buttons to the back of his phone so he could wrap and store his wired earbuds, Barnett went on to design and successfully crowdfund an iPhone 4 case featuring two of the accordion-style pop-up grips that PopSockets still uses today. It could be used to store headphones, but also to prop an iPhone up horizontally or vertically.
Wireless earbuds like Apple’s AirPods eventually made PopSockets’ original dual-grip design obsolete, but that wasn’t the end of the product. “A friend of mine calls me the luckiest man on earth because phones grew into my invention,” says Barnett. “I invented PopSockets to solve the problem of headset tangle, and turns out they served as a great grip.”
PopSockets eventually simplified the design of its products as consumers embraced them as a more secure way to hold their increasingly large and heavy phones with one hand. To date, the company has sold over 285 million of the grips, and while their success may seem like a fortunate accident, the latest version brings a useful improvement that PopSockets has spent several years perfecting. “Our products are all designed to bring joy to daily phone life by eliminating pain points. I think this one hits the mark,” says Barnett.