
Self-driving cars: Google and others map the road to automated vehicles


Google and a number of automakers are spearheading the movement to get automated vehicles on America’s roads. Self-driving cars are street legal in three states, and Google’s fleet has collectively logged over 300,000 miles of time on the road. However, there are several obstacles in the path of widespread adoption, with legal and moral opposition to the concept coming from all corners. Follow this StoryStream to track the technology’s progress as it transitions from experimental testing to a consumer reality that could save thousands of lives.

Google Play is going to start highlighting apps with widgets

Google is making some changes to Google Play on Android devices to better highlight apps that include widgets, according to a blog post. The changes include a new search filter for widgets, widget badges on app detail pages, and a curated editorial page dedicated to widgets.

“Historically, one of the challenges with investing in widget development has been discoverability and user understanding,” product manager Yinka Taiwo-Peters says in the post. “You’ve asked for better ways for users to find and utilize your widgets, and we’re delivering.” Taiwo-Peters also acknowledges that “we understand that the effort required to build and maintain widgets needs to be justified by user adoption.”

With the search filter, users will be able to more easily search for apps with widgets. The badge “eliminates guesswork for users and highlights your widget offerings, encouraging them to explore and utilize this capability,” Taiwo-Peters says. And the curated editorial page will show off “collections of excellent widgets.”

The updated widget discoverability tools will be “coming soon,” Taiwo-Peters says.

Logitech’s best gaming mouse has fallen to its lowest price ever

A hand holding up the black Logitech G Pro X Superlight 2 mouse.
The Logitech G Pro X Superlight 2 is down to a new low, but only in black.

If you’re looking for a top-notch gaming mouse you can use for both work and play, Logitech’s G Pro X Superlight 2 is a great option that’s currently on sale for its best price to date. Right now, you can buy it at Walmart in select colors starting at $113.99 ($45 off) or at Best Buy starting at $114.99 ($44 off); it’s also on sale at Amazon starting at $119.99 (about $39 off).

The wireless gaming mouse is Logitech’s fastest, with a Hero 2 sensor that tracks at 500 inches per second (and up to 32,000 DPI) and a 4kHz polling rate. At just 60 grams (2.1 ounces), it’s also Logitech’s lightest, making it easy to move quickly and accurately. It also boasts an attractive matte plastic design, meaning it won’t look out of place at the office, even if it is perfectly suitable for tournament-level play. 

Despite the lightweight build, the Superlight 2 features clicky hybrid optical / mechanical switches and five programmable buttons, which can be customized via Logitech’s companion software. The mouse also offers up to 95 hours of battery life, charges via USB-C, and supports both the original Logitech Powerplay mousepad and the upcoming Powerplay 2, which keeps your mouse continuously charged and costs $20 less than the original pad.

Read our hands-on impressions.

Some more ways to save

  • From now until March 9th, Anker’s 2-in-1 USB-C to USB-C Cable is on sale for $17.99 ($8 off) at Amazon and directly from Anker (with promo code WS7DV2FVJTOY). The four-foot cable comes with a pair of USB-C attachments, allowing you to charge two devices simultaneously. It offers 140W of total output and can intelligently share power, automatically adjusting distribution based on the connected gadget.
  • Apple will likely introduce a new iPad this week, but if you’re looking for something smaller, the latest iPad Mini is on sale at Amazon, Best Buy, and Walmart with Wi-Fi and 128GB of storage starting at $399 ($100 off), matching its best price to date. The 8.3-inch Mini is the smallest tablet in Apple’s lineup, rendering it the ideal form factor if you use your tablet to read a lot. Its A17 Pro chip isn’t as powerful as the processors in the M4-powered iPad Pro or M2-powered Air, but it’s faster than that of the base iPad; it also supports Apple Intelligence, unlike the entry-level tablet. Read our review.
  • Amazon, Best Buy, and B&H Photo are selling Amazon’s latest Fire TV Cube for $109.99 ($30 off), which is one of its better prices to date. The third-gen Cube is Amazon’s highest-end streaming device, one that comes with Wi-Fi 6E support and an HDMI input for connecting both cable boxes and gaming consoles. It also offers support for 4K streaming in multiple HDR formats — including Dolby Vision — while continuing to double as an Alexa-powered smart speaker you can use to play music, control smart home devices, and carry out other tasks. Read our review.

The FTC is being hit by terminations

At least a dozen probationary staffers at the Federal Trade Commission were terminated last week, The Verge has learned.

The terminations took place across the agency, according to two sources familiar with the matter, one of whom said that included both the Bureau of Consumer Protection and Bureau of Competition. The sources did not definitively link the terminations to actions by Elon Musk’s Department of Government Efficiency (DOGE), but the move followed a familiar DOGE playbook: apparently indiscriminate cuts targeting probationary employees, who may be new to the agency or a specific role. The FTC did not immediately respond to a request for comment.

This appears to be one of the first times during the current Trump administration that the FTC has been impacted by staffing cuts of this scope, even as DOGE has driven mass firings across the federal government. Republican FTC Chair Andrew Ferguson has largely aligned himself with Trump, but the cuts indicate that the agency still may not be safe from broader changes to government staffing, though it’s still unclear how far those changes will go.

Are you a current or former US federal government worker? Reach out securely with tips from a non-work device to Lauren Feiner via Signal at laurenfeiner.64.

In an email obtained by The Verge, the FTC chapter of the National Treasury Employees Union acknowledged to members last week that it was aware of terminations at the agency, but did not yet know their scope. The email also included information about ongoing legal efforts by the national organization, including the filing of claims at the Office of Special Counsel and the Merit Systems Protection Board, as well as institutional grievances at each agency where probationary employees have been terminated. The FTC union website now hosts a similar notice, with next steps for terminated employees.

The Washington Post reported last week that the FTC’s internal equal opportunity office had been cut from six to three staffers. Last month, Ferguson declared that “DEI is Over at the FTC” and said he had “Terminated the Diversity Council” at the agency. The scope of the recent probationary terminations, however, appears distinct from these directives.

Also last week, FTC staff in Washington were told by DOGE that they’d soon be moving into the building being vacated by the US Agency for International Development (USAID), an agency DOGE had effectively dismantled, according to Bloomberg.

SwitchBot launches cut-your-own smart shades

The new DIY smart shades from SwitchBot can be made to measure in your own home. | Image: SwitchBot

SwitchBot’s inexpensive, adjustable smart roller shades are finally here. First announced in September, the SwitchBot Roller Shades are now available at switch-bot.com, claiming to be the world’s first adjustable smart shades. This means you don’t need to worry about getting your measurements right before ordering, as you can adjust the width of both the shade’s headrail and the fabric to fit your window frame after buying them. Starting at $199, they’re cheap for smart shades.

Smart motorized shades are a great convenience. They can be programmed to automatically open and close at set times of day or based on sunrise and sunset, and with an additional sensor, they can adjust based on temperature and light in the room. SwitchBot’s shades work over Bluetooth and are compatible with Matter, meaning they will work with Apple Home, Amazon Alexa, Google Home, and other smart home ecosystems through a SwitchBot hub.

SwitchBot’s smart shades come in two colors, white and grey, and four sizes: small (22.8 to 31.5 inches) for $199.99, medium (31.5 to 43.3 inches) for $219.99, large (43.3 to 55.1 inches) for $259.99, and extra large (55.1 to 72.8 inches) for $309.99. Pick the size that’s closest to your window and do the fine-tuning once you have them in hand.

That pricing is very competitive. Most motorized shades start around $400 for a small window, and while Ikea used to have an inexpensive option with its Fyrtur smart roller shades, those appear to have been discontinued. Lutron recently launched a lower-cost option to its excellent Serena shades, but those still start at $399.

SwitchBot’s approach is to let you do more of the work at home, providing an adjustable headrail and a specialized cutting tool.

Smartwings shades are comparable in price, at around $180, and they offer more fabric and light filtering options than SwitchBot. (SwitchBot says you can swap out the fabric, but it doesn’t sell different options on its site, so I guess that’s on you if you want a different look.) Plus, Smartwings has the option of Matter over Thread connectivity, so no need for a proprietary hub for smart home control. But you do need to make sure you get your measurements correct, as they’re not adjustable, meaning you could make a fairly expensive mistake if you’re not great with a tape measure.

SwitchBot’s approach is to let you do more of the work at home, providing an adjustable headrail and a specialized cutting tool. Once you’ve got your headrail set to the correct size, you attach the cutting tool to the end of the shade and roll it until the entire length has been cut. Judging by SwitchBot’s installation video, the process looks pretty straightforward but fairly time-intensive. Personally, I’d rather have a pro do that part for me, but if you’re really worried about your measuring skills, it’s nice to have this as an option.

The SwitchBot Roller Shade is battery-powered, and the company claims eight months of use per charge. It can be recharged via a USB-C cable or with a separate solar panel that costs $30. The latter also adds a light sensor, allowing for automation based on light levels.

The shades can connect to the SwitchBot Hub 2 for Matter compatibility, which is also a temperature sensor, allowing for automating the shades based on the room’s climate. A wired remote control connected by a USB-C cable is included, or you can purchase a $20 SwitchBot wireless remote.

The iPhone 15 Pro will get Visual Intelligence with iOS 18.4

An image showing someone using Apple’s Visual Intelligence feature on an iPhone.

Apple’s latest iOS 18.4 developer beta adds the Visual Intelligence feature, the company’s Google Lens-like tool, to the iPhone 15 Pro and iPhone 15 Pro Max, as reported by 9to5Mac.

Apple had told Daring Fireball that the feature would be coming to those iPhones in a future software update but didn’t say which. Barring any last-minute changes, it now seems like it will be available widely with the official rollout of iOS 18.4, which is expected in April.

Visual Intelligence launched as a feature accessible from the Camera Control button for the iPhone 16 lineup that debuted in September. Because the iPhone 15 Pro and Pro Max don’t have the Camera Control button, you’ll instead be able to use Visual Intelligence through the Action Button or via Control Center, similar to the newly launched iPhone 16E.

According to 9to5Mac, today’s iOS 18.4 developer beta 2 update also brings the Action Button and Control Center options for Visual Intelligence to the rest of the iPhone 16 lineup.

Microsoft’s new Dragon Copilot is an AI assistant for healthcare

Microsoft has announced Microsoft Dragon Copilot, an AI system for healthcare that can, among other things, listen to and create notes based on clinical visits. The system combines voice dictation and ambient listening tech created by AI voice company Nuance, which Microsoft bought in 2021.

According to Microsoft’s announcement, the new system can help its users streamline their documentation through features like “multilanguage ambient note creation” and natural language dictation. Its AI assistant offers “general-purpose medical information searches from trusted content sources,” as well as the ability to automate tasks like “conversational orders, note and clinical evidence summaries, referral letters, and after visit summaries.”

The goal of all of this is to “free clinicians from much of the administrative burden of healthcare” so they can focus on patient care, according to Joe Petro, VP of Microsoft Health and Life Sciences Solutions and Platforms. Microsoft says its own surveys found that clinicians who have used the Nuance tech that makes up Dragon Copilot suffered less burnout and that 93 percent of their patients reported a “better overall experience.”

Microsoft is one of many companies offering such AI tools for healthcare settings. A Google Cloud blog published today highlights how healthcare firms are using Google’s medical AI offerings, such as creating medical assistant AI agents that identify patient health risks; they’re also using the new multimodal image-searching features Google debuted for its Vertex AI Search for healthcare product.

The FDA published considerations for generative AI devices in healthcare last year, in which it noted many potential benefits of the tech but also the risk of models making things up. In a study last year, researchers found that was an issue at times with Nabla’s OpenAI Whisper-powered medical transcription software. Microsoft says it is “committed to developing responsible AI by design,” and that Dragon Copilot’s “capabilities are built on a secure data estate and incorporate healthcare-specific clinical, chat and compliance safeguards for accurate and safe AI outputs.”

YouTube is stepping up its efforts to sell you other streaming services

YouTube is preparing a big redesign of its TV app that will make it look more like Netflix, according to a report from The Information. The new design, which is expected to launch “in the next few months,” will reportedly display paid content from various streaming services on the homepage.

YouTube currently lets you explore shows and movies from services like Paramount Plus, Max, and Crunchyroll as part of the Primetime Channels feature it rolled out in 2022. Similar to the setup on Amazon’s Prime Video, you can select and subscribe to these third-party services through YouTube, all while the platform gets a cut of the payment.

However, you can only access this content through the Movies and TV tab of the YouTube app, which The Information says makes it harder for users to find. YouTube also reportedly stopped adding new services to its Primetime offering as it struggled with integrating paid content into its homepage.

But now, the company aims to solve this by putting paid subscriptions on the app’s homepage. It will also give creators the ability to display their videos in seasons – something it announced it would start letting creators do on their channels – as well as play previews of shows automatically, The Information reports.

“The vision is that when you come to our [TV] app and you’re looking for a show, it’ll just blend away whether that show is from a Primetime Channel or that show is from a creator,” Kurt Wilms, YouTube’s senior director of product management, told The Information. The company has made some notable design tweaks to its TV app over the past year, and announced last month that TVs have become the “primary device” people use to watch YouTube.

All the news on Microsoft’s latest Copilot and Windows AI features

At its New York City event today, Microsoft is announcing an all-new Copilot experience. The new Copilot design includes a card-based look across mobile, web, and Windows. Copilot is getting more personalized with features like Copilot Vision, which adds the ability to see what you’re looking at; an OpenAI-like natural voice conversation mode; and a virtual news presenter mode that can read the headlines to you.

Windows 11 is getting new features like Phone Link status in the start menu that can show notifications and your phone’s battery life. And both Paint and Photos are getting fun new features like Generative Fill and Erase. Copilot Plus PCs are getting a revamped AI-powered Windows Search that includes a Google Circle to Search-like “Click to Do” feature and the ability to search for a photo using just a text description.

Earlier this year, an exec reorganization put Pavan Davuluri in charge of Windows and Surface and made Mustafa Suleyman the new CEO of Microsoft AI. Now, a full year after Panos Panay’s abrupt departure, we will find out more about where Microsoft’s “AI PC” push is headed.

You can read all the updates from the event below.

T-Mobile’s parent company is making an ‘AI Phone’ with Perplexity Assistant

Two phones side by side showing an AI assist screen.

Deutsche Telekom is building a new Perplexity chatbot-powered “AI Phone,” the companies announced at Mobile World Congress (MWC) in Barcelona today. The new device will be revealed later this year and run “Magenta AI,” which gives users access to Perplexity Assistant, Google Cloud AI, ElevenLabs, Picsart, and a suite of AI tools.

The AI phone concept was first revealed at MWC 2024 by Deutsche Telekom (T-Mobile’s parent company) as an “app-less” device primarily controlled by voice that can do things like book flights and make restaurant reservations. The capabilities are like those promised by “large action model” products, including the Rabbit R1. In a press release, Deutsche Telekom board member Claudia Nemat says the forthcoming AI phone with Perplexity Assistant can book a taxi and do your shopping without having to switch apps on your phone.

Some of the AI tools, including Perplexity’s AI-powered search engine, will also be available on other devices using the MeinMagenta app (Perplexity also recently launched an Android app for its assistant). Deutsche Telekom hasn’t said much else about the AI phone hardware, but the images show something of a budget to midrange Android device with thick forehead and chin bezels, as well as a pink sleep / wake button.

Deutsche Telekom plans to launch its AI phone in the second half of this year. And this summer, the company will add some of the AI features from Google Cloud AI, ElevenLabs, and Picsart into the MeinMagenta app, which works with other smartphones.

Google brings Gemini widgets to iPhone as it looks to seize on Siri’s weaknesses

Google’s Gemini app for iOS and iPadOS now includes a handful of lockscreen widgets that make it easier (and faster) to access the AI assistant’s various features. As noted by 9to5Google, the app’s latest update adds six widgets in total, with each of them letting you hop right into a particular Gemini function.

The company seems to be making a very intentional effort to lure iPhone and iPad users away from Siri — or at least get people using Gemini instead of OpenAI’s ChatGPT whenever they want to interact with a large language model. Google has also recently been advertising Gemini on Apple-centric tech podcasts.

A screenshot of Google Gemini’s new widgets on an iPhone lockscreen.

This all comes as Apple reportedly finds itself making slow progress at developing a smarter, more capable Siri that can compete with today’s leading AI platforms. Over the weekend, Bloomberg’s Mark Gurman reported that a truly overhauled Siri is likely still several years away. So if you’re Google, this full-court press to establish familiarity with Gemini in the meantime makes a lot of sense. Apple itself may offer Gemini as yet another resource for Apple Intelligence in the coming months.

As for the widgets, here’s what Google says each one does:

  • Type prompt: Stuck on a question? Type anything right away.
  • Talk Live: Talk things through, or brainstorm aloud with Gemini.
  • Open mic: Quickly open your mic to set reminders, create calendar events, and more.
  • Use camera: Take a photo of what’s in front of you, and ask Gemini questions all about it.
  • Share image: Choose an image to get more info, create new art, or start a chat.
  • Share file: Use a file to share the information or inspiration behind your question.

Google released a standalone Gemini app for iOS back in November, splitting it off from the main Google search app.

The US faces ‘devastating’ losses for weather forecasts, federal workers say

Art depicts a home floating in floodwaters.

The federal agency that produces weather forecasts and leads research on climate and the oceans has canceled leases for research centers and slashed its staff to “devastating” effect, current and former employees tell The Verge.

Last week, the National Oceanic and Atmospheric Administration (NOAA) laid off hundreds of probationary employees, who make up roughly 10 percent of its workforce. The agency has plans to lay off around 50 percent of its staff in total, according to Andrew Rosenberg, a former deputy director at NOAA and co-editor of the SciLight newsletter.

“If these cuts continue, you will feel them personally”

The agency has also canceled a lease for a building housing the National Centers for Environmental Prediction in Maryland, which produces information for the National Weather Service, Air Force, Navy, and Federal Aviation Administration, Rosenberg tells The Verge. He also says NOAA has canceled a lease for a radar development lab in Oklahoma. NOAA and the universities housing those facilities did not immediately confirm those cancellations to The Verge. Axios reported separately last Friday that NOAA had canceled leases for office space, without specifying the locations.

NOAA staffers are demonstrating today outside of the agency’s headquarters in Silver Spring, Maryland, calling attention to risks they see ahead as President Donald Trump and the Elon Musk-led Department of Government Efficiency (DOGE) upend an agency that Americans rely on to stay safe from extreme weather events and flooding.

“I can tell you the losses will be and already are devastating,” a NOAA employee, who was granted anonymity because of the risk of retaliation, tells The Verge. “Believe me when I say, you may not know all the work that goes on behind the scenes, but if these cuts continue, you will feel them personally at some point when that work is gone.” 

The employee says that with the types of cuts they’ve heard proposed, some National Weather Service offices won’t be able to staff their operations desks full time. “I’d challenge anyone to work seven to 10 8-hour shifts in a row, with the stress of knowing your job could be cut arbitrarily at any time, and not make any mistakes through all of that,” they say. “Cutting NOAA staff will invariably cost not only lives, but millions — if not billions — of dollars.”

Weather balloon launches, which are needed to collect data for forecasts, were already suspended in western Alaska last week because of a lack of staffing after layoffs. People who lost their jobs at NOAA were given very little time to exit their offices, let alone prepare for a transition in workloads. They were notified by email last Thursday and given around an hour and a half to leave the office, according to Rosenberg and a NOAA employee who lost their job last week and was also granted anonymity.

Both Rosenberg and the other employees are anticipating further risks to NOAA’s work. After storming NOAA headquarters in February, DOGE reportedly intends to cut NOAA’s budget by 30 percent. 

Project 2025, the right-wing manifesto for the second Trump administration, said NOAA “should be dismantled and many of its functions eliminated, sent to other agencies, privatized, or placed under the control of states and territories.” It also proposed leasing out more government-allocated radio frequency spectrum for private use, which could lead to more interference with frequencies used for weather forecasting.

Privatizing weather forecasts — essentially turning a free service into something people have to pay for — could put more Americans at risk during extreme weather events, sources tell The Verge. So, too, could auctioning off too much radio frequency spectrum, if that interferes with the collection of data needed to make forecasts.

“Gutting NOAA puts Americans in danger,” says the former employee. “Trump and DOGE aren’t ‘trimming the fat,’ they are hobbling the services that all of us rely on every day to stay safe, to do business, and to live our lives peacefully.”

Amazon’s Panos Panay on the long road to Alexa’s AI overhaul

Today, I’m talking with Panos Panay, who’s in charge of devices and services at Amazon. That includes Alexa, Ring security cameras, Eero Wi-Fi routers, and the Project Kuiper satellite internet service that’s meant to compete with Starlink.

Panos and I talked the day after he announced Alexa Plus, the new AI-powered version of Amazon’s famous voice assistant, and this episode gets pretty deep into the weeds of how all this works and how Panay thinks about running his teams to make it happen. 

This is actually another one of those full circle Decoder episodes — I talked to Panay’s predecessor, Dave Limp, on the show in 2021.  If you’re following executive shuffles, you know that Limp left Amazon to go work for Jeff Bezos as CEO of Blue Origin in 2023. Panay was hired as his replacement from Microsoft, where he was running Surface and Windows. It’s safe to say that the two have very different approaches to running this team and its products, so I was excited to dig into what changes Panay had made in order to make the new Alexa Plus happen.

Listen to Decoder, a show hosted by The Verge’s Nilay Patel about big ideas — and other problems. Subscribe here!

Now, I’ve known Panay for a long time — if you’re a tech fan, you know that he was the Microsoft exec who really brought the Windows hardware market back to life by introducing the Surface line of tablets and laptops, and he eventually ended up overseeing Windows itself. You’ll hear Panay say that the idea of infusing Alexa with AI really drew him to Amazon — like so many folks in tech, he sees AI as a platform shift that will change the way we use computers, and Amazon has a big advantage with the enormous number of Alexa devices that are already being used globally. Just making them a bit smarter and more capable with AI sounds easy, but actually doing it is fairly hard, and we sat in the weeds of the execution for a while. 

There’s a lot here, and a lot of different parts of Amazon that needed to work together in new ways — that’s pure Decoder bait, and Panay was game to really get into it. It even got a little emotional there.

One note before we start: Panay talks about “experts” a lot, and in this context he means the individual services that power different parts of the Alexa Plus experience, kind of like apps on a smartphone. You’ll hear what I mean, but if it gets confusing, just think “app” and it’ll click into place.

Okay: Panos Panay, head of devices and services at Amazon. Here we go.

This interview has been lightly edited for length and clarity.

Panos Panay, you told me that you don’t care about your title, but technically it’s SVP of devices and services at Amazon. Welcome to Decoder.

Good to see you, man. I love being here.

I’m really excited to talk to you. I was sitting in the audience yesterday as you were announcing Alexa Plus. I have a lot of questions about how it works, the feature set, where do you think it’s going. But it occurred to me, as I was sitting there watching you present it, and then later as I was watching some of the demos of it working, that to make it happen had to have required some big structure and culture rethinks inside of Amazon itself.

You joined about a year and a half ago. Decoder is all about structure and culture rethinks. So there’s a lot here. There’s a product to talk about, but then there’s the path of getting to that product. Is that how you see it? That you had to reset some parts of Amazon to get to Alexa Plus?

I don’t think resetting Amazon; Amazon’s incredibly ambitious in so many ways. Always learning, changing. I mean it’s pretty powerful. I think resetting the devices team a little bit, yeah. First off, we hadn’t really had a large-scale event, as I understand it — obviously, I wasn’t there — since pre-pandemic.

The events under your predecessor, Dave Limp, they were entertaining in a way. It was, here’s a firehose of stuff with Alexa in it. Microwave, a coffee maker. We would count, maybe, like, they announced 45 products.

Yesterday, you announced one new product, Alexa Plus and no new hardware, and that’s a pretty big difference.

I think that was important. So yeah, I guess that’s a change for sure from what it’s been. What we did yesterday as a team, it was a little bit of a reset. The team was pumped to do it, excited. We were never going to announce hardware. It wasn’t a goal. We need to reset Alexa for the world, and bring Alexa Plus forward. That is a bit of a cultural shift. We’re just going to focus on the service and what it’s going to be.

Great products are coming. We already have great products in market. We launched stuff at the holiday. And the team, they rallied. The company rallied. It’s pretty awesome. Having [Amazon CEO] Andy [Jassy] there is fantastic. And you can feel a vibe in that room for sure. I hope you did. I mean, you made your snarky comment about the music when you got in there. Man, we check every detail. I think I missed, I may have missed, I don’t know…

The chiptune rave music? It was pretty good. I always wonder who sets the playlist, ’cause you can do a lot with music in the pre-show.

Every single part of that show after the moment the mic starts has been very, very well thought through. Yesterday’s event was the highest risk event I’ve ever done. I mean, bar none.

I mean, I watched you reintroduce laptops at Microsoft in competition with your partners.

It doesn’t compare.

Really? Why so risky?

Because when you’re basically doing hardware, you have fallbacks. The demos aren’t, they’re not not live, but you always can just go to the hardware. When you’re reinventing or re-architecting an entire service, there’s no backup. It was the product. I think the only product video we had, like actual video, was the kids’ portion. Because, honestly, there are no kids in the audience. So sharing a kids feature without some emotion is a waste of time. It’s like, here’s a kid feature, please write about it. So putting a little bit of emotion and storytelling in it.

Those were all real demos. That all really happened. That was one of the principles of the event. It wasn’t like, let’s go make up a fake story and we’ll just put film. That was the one area where it was just, it wasn’t a vision piece, it was the product, but it was the only area that wasn’t live. And so there was a lot of trepidation. This was the hardest kind of event we’ve put together, risk profile-wise.

Let’s talk about Alexa Plus for just one second and get a sense of it, and then I want to talk about how you made it happen.

So I think there’s a part that seems very obvious to people. You see an LLM, you see it interact with you. You’re like, this thing is great at natural language input and output, maybe it’s going to lead us to AGI and maybe it’s not, whatever, but the core piece of it is, the computer can talk to you in a non-deterministic way. Everyone saw that and said, okay, Siri should work like this. Alexa should work like this. Google Assistant should work like this. And then the actual implementation of it has taken everybody a really long time.

What’s the gap there?

It’s not just an LLM. I think that it seems easy. Put a voice to the LLM, let the LLM talk, or [text-to-speech], bring it out, bring out the voice. Or if it’s speech-to-speech, it doesn’t matter which tech, but if you want the elements of connecting to thousands of — I’m speaking for Alexa. You asked a broader question, but let me just talk about Alexa.

You want the element of connecting to thousands and thousands of APIs, partners that have been connected to Alexa forever. You’re trying to manage hundreds of millions of customers who already have the product. You want to update as many of those devices as you possibly can, meaning you don’t want to leave a customer behind. And there will be some devices that are eight, nine years old that won’t work. But everything else, most things will, relative to what’s used in the market today.

So you’ve got to carry forward all that history because people still love Alexa. We’re still growing. We still have usage that’s higher than you would expect, and we can’t leave those customers behind. That’s the worst thing. We focus on not doing that. So there’s that element. Sitting on top of an LLM, you’re now going, okay, just talking is just not that interesting. Although, awesome. Like having ambient conversation, I think it’s a superpower moving forward for Alexa. It’s different today on Alexa. It’s like point, shoot, ask the question. Hope to get the answer.

Yeah. You guys call it Alexa Speak.

Yeah. I do. Like with my team a year ago, we’d be in meetings and product meetings and we’d be talking and people would say, “Let me show you the new Alexa with a demo.” And they would Alexa Speak to it. And it was like, nope. Speak normally. Go to natural conversation. Don’t adjust your speech for Alexa. That’s exactly what you don’t want if you want natural conversation.

It’s hard, though. You’ve been training people, we’ve been training ourselves for 10 years. Setting a timer today is, “Can you set a timer for eight minutes?” Setting a timer on the new Alexa is, “I’m making a ramen egg.” “Gotcha. I’ll set a timer for eight minutes,” where she just proactively comes back and sets it. I didn’t demo that yesterday because I didn’t want the timer headline, but it’s a really badass experience. It’s really cool. And so there’s a level of that transformation where — I’m off-topic, let me go back.

At the end of the day, the LLM needs to be able to, now it’s the base layer. Then you’ve got the next layer, which is just a series of different models. Picking the right model to do the job. And then that model is basically picking the right expert. And so the LLM plays a role, especially in the natural side of it, but as it makes it through the stack, it narrows down for accuracy. It narrows down for speed. It then narrows down for holding memory and personalizing it. And now you just have a series of experts basically sitting on top and one of them is conversational.

And so, that’s not just an LLM, that’s a series of… if you look at one of these other products, they’re not just LLMs. They’re basically, and I don’t mean this to be rude, chatbots. And they’re pretty good. They’re damn good. And then when you start typing long form and rewriting and dropping in summaries, that’s very powerful. Creating videos, creating photos, isolated but powerful. But the idea that these experts all sit on top of the stack, and there’s a runtime that orchestrates and says, okay, call these experts, these two experts have to work together, got it, and then it operates. That’s just not simple. And the first thing I was asked when I got there was, I don’t know, I actually don’t…

It’s like 18? Something like that?

Yeah. I don’t know. It doesn’t matter. But it doesn’t feel like anything short. That’s for sure. Hey, why don’t you just change the brain with an LLM and everything will be fine?

Yeah, I think I probably asked that question the first time when we first spoke.

You might have. Yeah, I mean it’s the first question. And I’m like, well, which one? And it won’t work. All you’ll do is talk, and it’ll be super verbose, and it’ll sound like you’re talking to the internet, and it’s just not that. It doesn’t work. And then everything else breaks. Which is the hardest thing. I don’t think anyone else is doing what we’re doing. We’ve got thousands of APIs now that we’re able to call. You’re able to get these, if you will, experts or agents, whatever you want to call them (it’s not a formal term), to talk to each other at the right time. And then something gets invoked, and now the LLM at the bottom is arbitrating like, oh, what’s she trying to do? What’s he trying to do? Got it.

Route it to the right model. Route it to the right expert. Got it. This expert needs to talk to that expert. I’ll give you an example if you want it. But that level of complication — there’s nothing simple about it. It’s why you haven’t seen it. It’s why it doesn’t exist outside of videos. So the biggest thing I needed was to not do a demo, but to use the product live. Meaning you can code a demo just to be a demo. It’s code. But the principle was very, very clear. And this hasn’t changed at Amazon, to be clear. The team’s all in like we are going to show the product. And that’s what you saw.

One of the questions I have is just about that orchestration layer. We’ve seen other companies try to build it. Even when Microsoft launched Bing with ChatGPT several years ago, they were talking about orchestration at that time. Is that something that’s evolving in the same way in different places? Do you have a unique approach?

Yeah, I think we do.

Is that competitive?

I think it is. I think it’s hugely competitive. It’s pretty easy to invoke a single API off — I mean not easy, I don’t want to discount anything but, orchestrate to a grounding, let’s say the expert is a grounding expert. I’m going to ground the local info. We’re in New York. I know everything about New York. I’m going to make sure this conversation stays within New York. Calling one API, make sure you’re grounded to that local info.

Is “expert” a term of art within Amazon?

It’s just my term. As a team, we talk this way. I don’t want to overstate it. I think some people call them agents, some people call them APIs, some people call them, I don’t know, grounding to a certain experience, maybe? Our challenge was, that’s not enough. We already have that. I mean it’s deterministic today with Alexa, but we already have it. And so, meaning you can call a single API at a time, but then you get frustrated ’cause you’re like, I needed more than that. 

Let me give you an example. It’s a simple one. Let’s call “photos” an agent or an expert or just an app. I mean app’s a bad word, ’cause you’re not opening an app. But let’s just say the photos expert, and the music expert are both very important to this next example. The other day, I’m leaving the house. And I go, I have Alexa Plus, obviously. And I go, Alexa, do me a favor. Find all the photos of Mary’s… Start a slideshow and put music behind it.

Okay. I just did a search command. I did a photos expert command. And they have to talk to each other. He’s looking for Mary, slideshow, got it. And then that expert has to call the music expert and basically say, play the music. All right. It does a phenomenal job. It does it in under two seconds, and I get a slideshow. It’s pretty cool. Music’s playing. I’m about to leave the house. It automatically chose music and some playlist. And then, in turn, without reinvoking Alexa, which I think you saw yesterday if you were watching, it’s very subtle, I just said, put something on that Mary would like. And it switched it, and I’m perfect. And I just walked out the door. Okay, that’s an emotional moment. It’s one of my favorite parts of the product. If you said, P, what’s one of the things? I’m like, that’s it.

You’re pulling emotion out of the things that matter most to you. Mary wakes up, she comes in the kitchen, there’s a slideshow playing and it’s got music. She texts me, do you know Alexa’s on right now? I don’t know what’s happening. And I’m like, well, do you like it? She’s like, it’s fun. I’m not turning it off. I’m like, well, I left it. It was a message I left for you. Now the next step of that is to, Alexa, leave a message for Mary when you see her. And she will. But these are all, they’re multi-turn conversations, but they’re also “and” statements. So when you have, basically these conjunctions coming together, the continuation of a statement, ’cause I just want to talk in natural language. To invoke all of that in one place is, I think it’s beyond, it’s incredible what Alexa can do. I don’t see that anywhere else. It’s quite powerful.

So even in that example, and this is what I was saying at the top — it’s complicated.

It is super complicated, but you’re like, a slideshow, what’s the big deal, P? I’m like, well, I’ll be clear, on that screen, it’s emotional, it’s ambient. It was natural. Yeah. But it is somewhat simple in the way you talk about it.

Well, right, the outcome is simple. This is a thing that I want. But I’m looking at, okay, to make that actually happen, my photos need to be in Amazon’s photo service.

Correct.

I need to be in Amazon’s music service.

Correct. Well, no, Spotify would’ve worked there too. But yes, you need to have a music service.

That’s compatible.

But I would like it to be Amazon.

Yeah. Those divisions inside of Amazon all need to talk to each other in a common framework that Alexa can address.

Correct. Yeah. I happen to be responsible for photo service, so I’ve got that. It’s a blessing.

But I look at Amazon, I look at Amazon’s structure. Again, a lot of Decoder is like: you can describe Amazon in a way you can’t always describe other companies. Okay, then. 

Amazon specifically has a language, how it describes how it’s organized. So famously, it’s single-threaded owners, right, like single-threaded leaders?

When you came in, obviously from a different management culture at Microsoft, how did you say, “Okay, I need everybody to participate,” because that seems like the thing in particular that Amazon has not been great at? And to make Alexa work the way you want it to, Amazon has to be great at it.

I think it’s a good question. At the end of the day, first off, all of Amazon’s rallying around Alexa. It’s crazy. It’s so cool. It comes down to a few things.

Actually, can I ask about even that, is that instinctual? Is that you got them to do it? Is it, Andy Jassy sent an email that said get on board?

Yeah, I think Andy’s been a huge part of it. I have a role. I mean, I came in with a vision that I think Alexa is a thing that we can anchor and change the world with.

Is that what drew you, this is one of my other questions, is that what drew you from Microsoft to Amazon is Alexa Plus?

Yeah, of course. Yeah, 100%. I don’t know if it was Alexa Plus, I’m not going to say that. It was the advent of where we can take AI and, yeah, I’ve got two questions in my head now, man. I need to compartmentalize both, but I’ll go there. You can see the turning point, I was there, I was in the middle of it, and it is just awesome moments, what Amazon brings relative to just even what I’m responsible for and how they can all connect magically through AI.

I fully believe this transformation’s happening, and Amazon’s the leader in ambient AI, period, end of story, and in the home, if we can connect all these things. A year and a half ago, when I was talking to Andy about joining Amazon, he was just so ambitious about it. He’s like, “Look, come in and do it. Let’s do it. Let’s go. We can change the world.” And so that is the tipping point. There’s a lot of nuance in that, but that was the tipping point. You can think of the scale, the relative level of investment, the ambition, the patience that Amazon brings, but happy to talk about it.

But yeah, the answer to the first question is sure, I come in, lay down a vision, kind of re-architect the team a little bit, get the explicit focus on, first thing we’ve got to do is get Alexa right. Once we do that, we’ll bring the hardware together. And to get Alexa right, it takes music, photos, shopping, and these are — you know, photos, of course, is under me, but you have across the company, you have music, video, shopping. We’ll just use those three as huge tenets for the product, and those leaders are exceptional. There’s no “we’re not going to work together,” it’s the opposite.

At Amazon, we set goals, and they are cross-company goals. And so the goals are set out from Amazon Nova, which is one of the anchoring points of the product, to what music needs to be on the product. Sure, the expert is kind of a joint thing, the music expert, but ultimately that music service has to be perfect and the music team’s killing it right now. Shopping, all in, how to make it great. We didn’t do a lot of shopping yesterday just because it would’ve been like a meme, you know, of course shopping, like oh, yeah, it’s going to be amazing. And then video, same, and there’s other areas, but we align and we go.

But it does start with a commitment from me for sure, you know, I’m in, I’m all in, I’m going to re-architect it. It’s not going to be easy. It’s going to take time. Andy’s patience, I would say the company’s patience to get it right for the customer is extraordinary, like extraordinary. I mean, Andy was pushing me. He wants urgency, of course, like you would expect from an Andy Jassy, but he also wants the right thing for the customer. And when you talk about customer obsession, let’s get it right. Let’s do it right and get it right. And we didn’t move slow. Even though you asked what’s taken so long, I don’t see that, you know what I mean, from where I’m sitting. I know it feels late because there’s been a lot of announcements, but I think we’re here at the right time.

You have a big team, and you talked about re-architecting. I think this brings me to the Decoder question. You oversee everything from Ring and Blink to the photo service to the satellite service, Project Kuiper.

You took over what, October of ’23? November of ’23, you cut some folks. How have you restructured your group?

We refocused on Alexa, we really did. It was in a lot of different places, and so we just made it super clear. I had an Alexa platform team and an Alexa product team. It’s not a platform team, maybe that’s not the right way to say it, but just an engineering team going across and then a product team vertically, is the way I look at it, and that AI stack going across. And so once you get that focus and that clear ownership, that leadership, you quickly see speed change.

That was the biggest shift, I think. Also I made some shifts as a team where a lot of the core horizontal functions are, if you think about the lowest level of the OS or the stack as a horizontal or hardware or supply chain, we’re kind of intermixed with the product verticals. So I’ve shifted that around too, just to get more product focus. One of the number one tenets is we’re going to make great products. I’d like to just start there.

I heard a rumor that at one of your first meetings you said that there were not great aspirational products, and that’s what you needed to do. Is that true?

Yeah, I mean, look, I don’t know exactly what was said, but at the end of the day, I immediately started pushing the team to have amazing pride in their products. We have to, because that pride shows up for our customers, and yeah, we want to push for it. That is a little bit of a, it’s just, let’s be super clear, these products have to be great. We’re not making tradeoffs if they’re not.

One of the things about Alexa is, again, in a previous administration, we would see Alexa coffee makers and microwaves, and the idea was we would just push microphones and speakers out everywhere and you would build this ambient platform, everything is sort of listening, everything is sort of aware of you. That was the big dream of ambient computing, that the computer would vanish into many different devices. You’re laying out something a little bit different, right, that there’s going to be a focal point in a piece of hardware. Yesterday was a lot about screens.

It was.

There’s a lot of multimodal interaction where you’re talking and touching a screen at the same time. That’s different, right, to say, okay, there’s going to be a place where you interact with Alexa?

Yeah.

That implies you’re going to cut down this giant ecosystem of ambient devices. How are you seeing that roadmap?

I think you’ve got to focus the roadmap. I think there’s no doubt. What you need is products that people want in their home, but also need, so I don’t think that history is broken. Obviously, the more endpoints the better, but they’ve got to be the right ones and they’ve got to be the ones that people want to use.

At one point, I think I saw a smoke detector with an Alexa microphone in it. I was like, we’re getting a little far afield here.

Here’s what I will say. The go-forward is: focus on making great products and the right ones. I don’t think you’re going to see thousands of products a year coming out, that’s not the goal at all. What I want is some attention to detail, making sure the right products for the customers are there, the things that fit into your home, the things that fit on your eyes, things that fit in your ears, so you can take Alexa with you, and just narrow the experiences that are great that way. And I have to tell you, the focal point, yeah, it is a screen on an Echo device in the home that can run your home. You don’t need it. With Alexa Plus, you actually don’t need it, it’s just a better experience.

And so when I’m asked, because there is a little, I mean, I’m treading a little bit here on some hallowed ground, like there’s a little bit of… Look, we’re going to light up all your Echo devices, but it’s just going to be awesome if you have a screen. And so when somebody says, “So, do you recommend a screen?” My answer is, “Yeah.” Do you have to have a screen? No? Well, you’re still going to have a great experience. Remember, you have a screen in your pocket. It’s called a phone. That phone has an incredible new Alexa Plus app on it, and so you have a screen, but you don’t need it to operate it. But let’s say you start a conversation with your voice and you just want to remember what that conversation was, you’re going to go to your phone to just capture it or you can send something to your phone. I think we cut it just for time in the demo yesterday, but anything you’re doing, you can send to phone because it’s like a longer form I want on my phone.

We’re also launching Alexa.com, so you’re going to use it on your PC, so it’ll be in the right places. But at the end of the day, if the focal point is to control your home, which by the way, for hundreds of millions of customers, that’s really the focal point today, you put a screen there, it’s emotional, it’s informative, it’s useful, and it’ll make a difference. It’ll make a difference.

So you come in, you restructure, you obviously want to get more focus on the products. All of that feels like we’re trying to change the culture, right? The structure is really a proxy for culture, in many ways.

Yeah.

That brings me to the other big Decoder question. Amazon has a famous decision-making culture, one-way doors, two-way doors. You can write books about it. You’re writing the press release before you write the product. You have a long history at Microsoft, you’re obviously trying to change some of that culture, how are you making decisions there? What’s your framework? Are you inheriting all the Amazon approaches or are you bringing your own riff to it?

I often get accused of making the final decision only when I have to. It doesn’t mean I’m not making decisions. When I was studying up to come to Amazon and making that decision, that was a life decision for me, it was a big one, and I was so inspired talking to Jeff, talking to Andy, just inspired, no doubt. I also love Microsoft, so I’m inspired where I was sitting, so there’s all these conflicts. Those are personal to me, oh my gosh. But when I started reading, let’s go to decision-making, and then I just watched a few stories that Jeff had told, talked to Andy about it, it basically, from a leadership principle standpoint and from some of the things you hear about on decision-making principles, like one-way, two-way doors, it’s hard to explain this, but it’s so aligned to the way I was running my team. That’s how I’d operate. It was weird. I was just reading the LPs and I’m like, I used to have a culture box.

The LPs are leadership principles?

Oh, sorry, right, leadership principles at Amazon. You should check them out. You can go to Amazon and find them. They’re rad. They’re inspiring, and they’re almost, sometimes they’re just obvious, not all, not all of them. And they’re hard to believe, like big bets, is that real? I’m like, yeah, it’s pretty damn real, it’s pretty incredible. Leaders, they do, they dive deep. Yeah, they do, they get into everything. And I think those are real, but in the spirit of when I started reading them and then the way I made decisions, Nilay, they were aligned. I mean, I’m not, no BS. They were just, it felt right. I had a culture box when I was running my team at Surface and Windows, and that culture box had five cultural principles. They were basically five of the LPs, but that’s how I ran it, and so it was so connected.

And when I got to Amazon, it was almost — what a team! I found this team that was not only hungry, but unbelievably talented, massive and capable, knows how to ship, knows how to invent, and it just needed a little bit of direction, that’s all. My job’s to give that direction, making sure I lay out the vision, making sure everyone knows where we’re going and what the highest priorities are. But when it came to decision-making, to answer your question, I fully operate in those values: all right, let’s make this call today. And I think one of the strongest points of a leader, without any doubt, and I learned this from a colleague I worked for in the past, he used to teach me. He’d go, “Hey, Pete, when you’ve made a decision, the best leaders in the world are willing to be wrong. Now, you’ve got to be right a lot, but you’re willing to be wrong.”

This is simple to say, but it’s a powerful concept. What does willing to be wrong mean? It means you’ve got to put your ego aside, you’ve got to be vulnerable. Do you know how hard that is, in front of a team of thousands of people? Just, “Yep, I was wrong.” What does being wrong mean? It’s not like this dramatic, “I’m wrong, I’m sorry.” That’s not it. Being wrong isn’t necessarily the wrong statement; it’s that you got new information a week later. Then use the information. And if it was a two-way door decision, guess what? Make the right decision. But if you’re not a great leader, you don’t change that decision because you’re like, “I already made the call, sorry,” but you knew it wasn’t right for the customer or for the business or whatever the reason. It’s just a fail.

And this was very early in my career. It’s very similar to the two-way door, one-way door. Once you’ve made the hard call and you’re past the point of return, that’s it, you made the call. And you have to make decisions sometimes, man, and those are hard. You lose sleep over it. When I made the decision to have the event, “We’re doing it.” And they’re like, “Well, the product’s not 100% done.” I go, “It doesn’t matter. I’m at 90% usage. We’re going.” And everyone’s like, “You realize that,” and that was a two-way door decision until I sent out the invites. And so we checked the information a day before we sent out the invites, and like, “We’re going.” The minute you send the invites, that’s a one-way door decision. There’s no pulling back. It didn’t matter how sick I was, it didn’t matter who couldn’t make it, none of it mattered.

And then we’re lining it up, we had the venue booked, and we’re like, okay, that wasn’t a one-way door decision, you can always cancel the venue, not cool, but if you had to. And you kind of go through it, and then you get to that point, you know, that’s it. There’s no new information that was going to change it. And so great leaders, they’ll make those decisions, but they’ll always be willing, they’ll always be willing to check themselves, and not just check themselves, but be willing then, when they have new information, if the right decision is in front of them, you’ve got to change it, and I always live by it.

And so when you come to this world of, when you say this culture, Nilay, the Amazon culture is incredible. You have no idea how empowering that is. It’s a two-way door decision, all right, let’s make the call. If we’re wrong, let’s deal with it, but then we move, and we move. And I get accused a lot of, you know, like to make a call and like, “Do you have all the info?” “Probably not, but we’re moving.”

Yeah, we’ve got to try something.

Yeah, and it’s been pretty fun that way.

Let’s put this into practice. I want to talk about Alexa Plus in great detail now. I think I have a sense of how you got the team to get the product so you could have an event. The big announce, the last thing you announced, was the pricing. You started with a big reveal, well done, well played. You said it’s $20 a month, and-

Credit to Andy. That’s Andy, that wasn’t me.

And it’s free with Prime. This is a big decision, right? Pricing is maybe the most important decision.

Yeah. I think the exact words were, “$19.99 per month, but free with Prime.”

I will note that Prime itself costs $15 a month. You’re pricing the service $5 more than Prime. Are you subsidizing Alexa Plus with Prime?

I don’t think I understand.

Does it cost you more to run than you’re getting inside of that membership?

I want customers to understand that the service is better with Prime. At the end of the day, if you have Prime Video, Prime Shopping, Amazon Prime, you fundamentally get the best music experience. You get photos, unlimited photos. That just makes the Alexa experience better. You don’t need to have it, it’s a great experience without it, but it’s just better. And so we talked about it, we want people on Prime. If you’re on all those services, it comes together and as a collection on your product, it just makes the personalization so much stronger, it makes the invocation of services so much easier.

Was this an obvious decision, from day one this is going to be part of Prime?

No.

How’d you make that call?

Just a series of events. I think back to two-way door decisions, that definitely, I don’t think it was the first decision, there were different ways to think about it. It costs more to run the service, that’s all there is to it. You’re going to invoke an LLM, you have many models working, there’s a lot of inference, that’s true. Then you heard Andy talk about how much cost is coming down with Trainium2, and you just see the efficiencies, if you will, that are coming through, those are plumbed through the plan. We have an incredible opportunity in front of us. And so it wasn’t about how much you’re spending, how much you’re making, it is about making a great product. And once we were like, we want to make sure people have the best product possible, that is the anchor. And so we’re like, all right, it’s got to be with Prime, that’s the best way to get customers there. And that’s it.

I think people want it to be more complicated, because I’ve been asked this question a bunch of times. I generally haven’t answered it. I’d be like, oh, you have a choice. You can pay 15 or 20, it’s your choice. Just choose. But not to be, I’m not trying to be pompous or whatever. I think if you’re on Prime, you’re going to love it, so I inverted the equation.

The other piece of that I see, the other way to think about it that I was curious about, you mentioned this: Alexa has distribution, you have a huge installed base of devices. This is, I think, the first at-scale non-phone AI product. I can’t think of any others.

Yeah, it might be. I have to think about it.

There’s Google Assistant, but they haven’t launched the way that you’ve launched this product yet. Gemini isn’t doing all this stuff yet. There’s the HomePod, but Siri doesn’t do it yet. I don’t think the Humane Pin was keeping you up at night, and now it’s gone.

Well, it’s not gone, I think. Went into HP, right?

No, they shut it down.

Oh, they did?

Yeah, it’s gone. They won’t work anymore in a couple of weeks. It’s a real thing, we’ve been breaking news to you here on the show.

Wow, that’s huge. You’re so informed.

Sadly, that’s my only job, is to be informed. Make no decisions, just know everything.

[Laughs] I don’t see it that way. 

But that is the scale. If it’s not a phone, you need something else. There’s been a lot of excitement about what something else could be because you have a new user interface paradigm with voice, with natural language. But you already have it, you have the installed base.

And saying it’s going to be with Prime means you’re just going to deploy it to that installed base, because I’m guessing the overlap between people with Alexa and people with Prime is pretty massive.

Yeah, there is.

So you’re just going to launch it to that whole service. Is that going to be a flywheel? Because the promise of Alexa 10 years ago was this will compete with your phone. I don’t think that actually happened. Do you think that this will help you compete with the phone in that way?

I think it’s more of a complement now than it’s ever been. You need the phone, we send things to the phone, we want you on it as well. I want you on the Alexa app on your phone, it’s an awesome experience. We can play with it if you want after, but I think it’s a complement to the phone, and I think it does replace a lot of things. I’ll tell you this, I say it to my team all the time: look, our customers are going to find the easiest path to something. They just will, it’s innate. It saves time, it’s about speed, it’s about efficiency. The only time that’s not true is when you’re getting more joy, and a lot of times joy comes from speed or happiness comes from being able to complete a task quicker. And so let me go back to the point of ambient. One of the core tenets when we started Alexa Plus and the vision for it was we have the largest install base in homes on the planet.

I think that’s a pretty definitive statement, I think it’s true. I probably have to check with the lawyers to say something like that, so maybe I’m wrong, so let me qualify it. We might have the largest install base on the planet, and it’s incredible. The way Alexa Plus is designed is it’s meant to be ambient, it’s meant to be a conversation, and it will replace tasks you do on your phone. It’s going to happen. And so does it replace the phone? Absolutely not. But does it replace certain things? I think I told you the story before, let me tell you again. When I was building laptops 12 years ago, when I’d first started on Surface, people came to me and said, there were a few people that were like, “You’ve lost the plot, P. You’re going after this thing and the laptop is dead.” Why? Because phones are replacing the laptop, and I mean you’re using a laptop 12 years later and it’s pretty important to you.

Probably more important now than it was 12 years ago. So what had happened was jobs moved to the phone that were really important: shopping, social media, your photos, communication, pick one. But the things that didn’t move to the phone only got stronger on the PC over that time, and so they essentially became complements to one another. If you’re going to sit down and write a long story, you’re going to do it with a keyboard. If you want snackable information, you’re going to pick up your phone. And then one got better at one of them and the other got better at the other, and incredibly so. It actually strengthened them both.

I see this as very similar. I think as Alexa Plus comes into market, I think it’s going to be better at a lot of things and it’s going to move jobs to it. I believe that. I think there’ll be more emotion to be pulled out of something that’s conversational, knows you well, is personal to you. You can have a conversation, it knows your calendar, it can get some stuff done in a simple way. You might not always do [the task] on it. I don’t know, it doesn’t matter to me where you do it. I just want to give you the shot, and if it’s the easiest way to do it. Can I give you just a fun example? I was sitting on the couch last week with Costas, my son. He’s 24. I don’t know, he’s 24 ish.

Those are pretty fuzzy ages.

I think maybe 24. He was born in… Yeah, 24. And so we were hanging out and we were talking about the Clippers and he had asked me a few questions, and I’ve been a fan of the Clippers since I was growing up, and then of course since Steve [Ballmer, former Microsoft CEO] bought them, I just love the team. And I asked, “Costas, did the Clippers win last night?” He goes, “Is Kawhi even playing?” This is, I think, a week and a half ago. I don’t remember the day. And now we have Alexa Plus in the house everywhere, and my son works on AI now, he’s blown away by it. He had to sign an NDA that he can’t talk about what he sees. And I realized right at that moment — Nilay, I was going to lose him, because you know what happens? You pick up your phone, you open it, now you see your notifications, you know that feeling, and you’re like, oh, I’m going to check my notifications, or I’m going to jump on TikTok, or whatever it is that you love about your phone.

He’s going to go get the information, answer it, and I’m going to lose my kid to his phone. And now all of a sudden we went from this moment hanging out to him on the phone, it happens all the time, and it blew my mind. He goes, “I don’t know. Alexa, did the Clippers win last night?” And Alexa goes, “The Clippers did win last night.” And then the score and blah blah, Kawhi Leonard scored so many. And he’s like, “Is Kawhi playing?” “Yeah, Kawhi’s been back for several weeks.” And he now started having a conversation, the three of us are having a conversation, the job moved. He would’ve never done that.

So this was the promise of the original Alexa, right? There’s celebrity ads during the Super Bowl, people are just hanging out with their Alexas.

It was a great ad by the way.

It was a great ad.

Oh my gosh, what a great ad.

But it couldn’t do it. A decade later we have trained a generation of consumers to believe that these products are limited and that we should use them to play music and set timers. How are you going to teach everybody that it can — actually, a more important question: can it do it?

It can do it. I think we’re resetting the next 10 years right now.

Are LLMs durable enough as a technology to build all the things you want them to do?

Not just the LLM. It’s not just the LLM.

I understand that it’s not just the LLM, but it is the enabling technology that’s making all this go.

They’re durable, but they’re going to continue to evolve at a rapid pace, and they have to. They are. But you have to be smart about how you build on top of it. I mean, obviously everyone’s doing a great job, I’m sure. I think the promise is there. I’m not going to understate it, I won’t overstate it, I can’t, I believe the promise is there.

I’m here at Amazon because I believe it’s going to change the world how people engage AI, and it’s going to be easier because your device is there and ready for you, and we’re going to make beautiful devices. And so all this will come together in a way where there’s a team that’s going to connect all these experiences. You saw a little bit of Fire TV and Ring, that all of a sudden these natural moments are going to happen and you’re not going to have to guess, you’re not going to wonder.

If it can do it, because it’s not deterministic, you’re not issuing these Boolean commands.

Correct. Exactly, right. And so hopefully everyone understands that concept, but since it’s not deterministic and now you’re going to ask a question, even if Alexa doesn’t do it, she’s going to talk about what you’re trying to solve and you’re going to actually get to an answer. As opposed to, “I don’t know.”

One of the things that I think is really interesting about the product, you talked about the kid’s demo where it was telling a story to a kid. I’ve had my kid talk to ChatGPT in that way, I think it’s fascinating to see that interaction develop. Then there’s simple stuff. Yesterday I sat in one of the smart home demos and they turned the lights from blue and green to a warm yellow and I was like, that’s a lot of data center to turn a light from one color to another. So you can see inside of the orchestration you’re describing, there’s the most expensive thing, to have this real time creative story. Then there’s “turn the light off,” which should be simpler and cheaper. I’m assuming the orchestration is picking what model to use when.

That’s exactly right. And some will do it on the edge too. You don’t have to do it all. If it’s a point and shoot command, we’ll do it in a simpler way.

But then I ran into Mike Krieger from Anthropic, who was at the event. Anthropic is one of your models, and he said the most interesting thing to me that I heard yesterday. He said, “Sometimes when I talk to Alexa, I can tell when it’s Anthropic because I know our model so well.” And he’s like, “No one else will be able to tell.” But he was like, “Sometimes I talk to it and I say, oh, that’s my boy,” which was incredible.

A product person knows their product and maybe they’re seeing ghosts in the machine, but it was just incredible. How are you picking between Nova and Anthropic? How are you picking the cost of these different models that you have to invoke? What are they better at? How are you making that determination?

Actually, the orchestrator picks the model that’s right for the job. The how, I won’t get into the details, but there’s some awesomeness here. One of the things that inspired most people is that we’re using a multi-model approach, which I think is a little bit novel. But at the end of the day, it depends on what the task is, it depends on what’s being asked for. I think right now you’re seeing 70% of the utterances running through Amazon Nova, 30% running through Anthropic, something at that rate. It changes, it just depends on how you use the product and what you’re using it for. It is also non-deterministic. Basically, there’s a model that’s like, what’s the best model to pick? And then you’re looking for accuracy and speed. First understanding, then accuracy, then speed, and you target. Then you move it, you pick the right model and then you fire to the expert, and there’s a small model and the expert if you will sometimes, and then those all orchestrate together and that’s how it works.
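The orchestration Panay describes (first understand and classify the request, then dispatch it to whichever model best balances accuracy and speed for that task) can be sketched roughly like this. All model names, categories, and routing rules below are hypothetical illustrations of the concept, not Amazon’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    accuracy: float  # how reliably it handles this class of task (illustrative)
    latency_ms: int  # typical response time (illustrative)

# Hypothetical model pool: an on-device path for simple smart home commands,
# a cheap fast model for point-and-shoot requests, and a stronger
# conversational model for open-ended questions.
MODELS = {
    "smart_home": Model("on-device", accuracy=0.99, latency_ms=20),
    "simple_command": Model("nova-lite", accuracy=0.95, latency_ms=80),
    "conversation": Model("claude", accuracy=0.97, latency_ms=600),
}

def classify(utterance: str) -> str:
    """Toy intent classifier; a real orchestrator would use a small model here."""
    text = utterance.lower()
    if any(kw in text for kw in ("turn on", "turn off", "lights", "thermostat")):
        return "smart_home"
    if len(text.split()) <= 4:  # very short utterances: treat as point-and-shoot
        return "simple_command"
    return "conversation"

def route(utterance: str) -> Model:
    # Understanding comes first (classification); the accuracy/speed tradeoff
    # is then baked into which model owns each category.
    return MODELS[classify(utterance)]
```

In this sketch, asking to turn off the lights never touches an expensive frontier model, while a multi-sentence sports question does, which mirrors the earlier point about routing “turn the light off” more cheaply than a real-time creative story.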

Inside of that is the way that you talk to your partners.

Slightly different than all of that.

I think you just did an API-driven one where you asked for an Uber, and Uber’s got a bunch of APIs and you just talk to them.

Uber’s been awesome. Uber, OpenTable, Grubhub, these things that you use every day, they’re just in-depth connected. That’s like opening an app on your phone, at the end of the day.

We understand how computers work. You call an API, it delivers a result. You call another API, great, the Uber’s booked. Then there’s the more agentic stuff that you were showing off. It wasn’t quite ready yet, but a lot of people have this idea. I believe the example was we’re going to book a stove repair, and it was a Miele stove.

He was going to choose last minute depending on how the demos went. I think he did, did he do a Miele dishwasher?

I know it was Miele because I was like, oh, those are expensive to fix. That’s what I knew in my head.

[Laughs] That’s what he said. It was pretty funny.

And then he went on to Thumbtack, which is a partner, so he had permission, but what it was doing was it was looking at the Thumbtack website and clicking around and reading that back to you. And even with permission, I think of that as why wouldn’t you just get an API? If you have the permission, why not do it deterministically?

Yeah, then the partner just has to do the work.

Right, so this is basically cutting down the amount of work a partner is doing.

Yeah, you don’t want to do the work, no problem. It’s just a couple of different ways to engage it. From an SDK perspective, this is just basically permissions, and we have to work on authorization and payment at the end of that, which is the trickiest part. I’m not going to get into how, but that’s the trickiest part. And so completing the task is the trick, getting almost there, it’s not that hard, but completing it. And so that’s where you need the partner to be like, yeah, sure, we want this traffic and we’re going to go create the service and send it through. Great. If you don’t, no problem.

But the answer on why not do the API is just these relationships are different, partners want to work in different ways. One of the things we are trying to do, and I’m really re-engaging Alexa, is we want to open SDKs. Basically, we want to open the product up for developers to come in and do what they want, come make it great. And if somebody asks to fix something in their house, we got it, we have a way to get you there.

So that implies a lot of things. Having tried to get a Miele dishwasher fixed in my life, it is expensive. 

The repair person has to actually be on Thumbtack, they have to actually be using that service to book their appointments and take payments. That is not necessarily true, they might just be marketing there, but there are a lot of things you have to know that you’re depending on that ecosystem to provide to make Alexa just book a repair professional for you. That’s the part where every time I talk to anybody about agentic systems I’m like, oh, this is where it falls apart, and payment is the other one. And the thing I’ve been calling it is just the DoorDash problem.

If you say, “order me some food,” and it goes and uses DoorDash for you or GrubHub or whatever, you’ve commoditized those service providers and you’ve started to crush their margins. And after a while, you might not want to be… Because they can’t upsell you anymore, they can’t sell you their subscription credits or whatever else they want to do. They can’t put advertising in front of you because the robot’s looking at their website, not a person. And I don’t know why they would participate in that unless you have actually solved this payment problem, to make that valuable to them.

I think the partnerships are unique for sure. I think it’s quite different. Remember, you always go back to your phone, the information’s there, it’s in the app. It’s not like we’re doing something on the side and doing it anonymously and you don’t have the customer info, I think is one thing. The second thing is when you have those challenging… Let’s use a Thumbtack example, let’s stick there for a minute. If you don’t have a Thumbtack account, the first time you do it’ll just pop a QR code and say, here, connect, authorize, go. And then forever then you’re going to fix things and Thumbtack’s going to push you through it. There are just some simple things that you can do that make the customer journey simple and gets you to those connection points. And once you do that, which is everything, God, you understand this, setup is everything, removing that barrier to entry. To make Alexa Plus great, you’ve got to share your contacts, you’re going to want to add your photos.

I think you’ve got to share your contacts. You’re going to want to add your photos. You’re going to want to connect your service providers. It’s a one-time kind of low barrier to entry go, and then you’re all in. And the partners, we don’t talk about the deals with the partners or anything like that, but there’s benefit on both sides. But at the end of the day, it’s the right thing for the customer. And I think there’s a lot of partners out there that believe in that same philosophy. Let’s get our customer to the endgame.

But if you run one — say you run food delivery service A. I won’t name names to keep them out of it. But if I run food delivery service A and I have a deal with you, and food delivery service B shows up and signs a deal with you, and I just ask Alexa to order some food, suddenly Alexa is in control of a lot of revenue.

Yeah. But you have preferences, customers have preferences, they know what to say.

Why would they have a preference over where the sandwich comes from, like what intermediary brings you the sandwich?

That’s their choice. You can’t speak for that. You can’t speak for it for the customer, but I would say they just have a choice, and they’ll get a choice.

And you’re going to express that choice on a screen?

I’m going to keep partners out of it for this, so I won’t give you the examples, but there will be simple ways to make it clear to the customer what they want.

The other part of this, which is equally complicated is partnerships, and that’s agentic stuff. And usually when I talk to people at agentic services, it’s to open the ecosystem to say, “Okay, we can browse the web for you. Now we have access to everything.” You are doing that in a much tighter way. You’re saying, “This is how we’re going to bring partners in.”

Why make that decision? Why not say, “We can just go browse the web and do whatever”?

I just think it’s right. It’s their business. And so, we’re seeing a lot of participation. There’s a lot of partners.

They’re excited, from what I see. Not all; I can’t speak for all of them. I’m not trying to talk in absolutes. But you have this moment where you’re like — the promise of Alexa is here. Ambient is here forever. They’ve all made skills in the past or they’ve done something that they didn’t get invoked. And it’s hard because the customer had to point and shoot as opposed to just speak in natural language; they had to know exactly what they were asking for. But at the end of the day, now you have a truth in: just speak, and something comes up. And now partners are like, “Well, if they’re looking for something from me, I’m in.” But I think it’s right to be partnering and not doing it another way.

Which I’m pumped about. We have a great biz dev team, it’s what they do.

So that’s asking Alexa to do something, and it goes off and does something in the world, right? It schedules a person or orders some food, it books a flight, great. Then there’s the stuff in your home, which Alexa has historically been very good at. 

Turn the lights on and off, make a routine. I’m very intrigued by the idea of automating routine creation with natural language. Right? Make a bedtime routine for me. That is as messy as it gets, right?

No.

That’s not even partnerships. That’s Matter and Z-Wave and all.

We do it all before then. This one’s different. We already have partners that work with Alexa. If you already work with Alexa, you get the magic.

That’s it. It’s awesome. You saw it yesterday. There was no new code written on the partner side.

Really?

Nothing. I have my Govee lights at home right now that I put on the house. I’m just talking to them to change the color. That’s it. I would’ve never opened the app to change the color on my lights.

It just seems like the promise of the smart home forever, and this is what you’re describing, is that it will get more invisible.

This is what’s awesome, dude. Right?

It’s going to get more invisible.

You have to understand this is freaking awesome. 

But I’m looking at the last five years, like, “Oh, this is more visible than ever.”

You have no idea how badass my team is. This team, now I’m talking Eero, Ring, Blink, Fire TV. This team, including Alexa, Kuiper, they’re incredible, man. They’re so damn capable. I’ve not seen invention like this. Now how we get it to the customer, we refine a little bit of that. But I’ve got to tell you, and this is a great example, because this works with the Alexa program and the thousands and thousands and thousands, dare I say, hundreds of thousands of things that work with Alexa. That is one of the largest connective tissues on the planet. It’s crazy. And they’ve set it up so well that now when Alexa Plus shows up, your routines are by voice done, like 100%, Nilay.

It’s so damn cool. The other day Mary was so frustrated with me, and I don’t have a smart home at my house in the Seattle area, but I use it in another area. And she was so frustrated with me. She’s like, “The lights are on all the time.” I just grabbed my app. I’m like, “Alexa, every night just turn off the lights outside at 10:00 PM and don’t turn them on again until 7:00 PM the next day.” That was it.

The promise of some of the smart home standards that have made this messier, like Matter or Thread, is that you will be able to control these devices device-agnostic, right?

Yep. We’ll take advantage of those as well. Yep.

For example, everyone talks about the smart home only in the context of their own lived experiences.

Well, how do you not? What are you going to do?

It’s hard to be on track.

What story are you going to tell? I’ve got plenty of customer stories.

But my joke is that if a thing doesn’t show up in control center on my wife’s iPhone, it doesn’t exist. She’s not going to open an app. She’s going to swipe down and see that panel and that’s how we’re doing it. So you’ve got to bridge into that. The promise of something like Matter is, we’re going to see it across all of these surfaces. It’s all going to work together. Are you thinking that far ahead? Because where does the logic of my smart home live?

Especially if you’re talking about putting hardware with a screen centrally in your home. Okay, now you’ve got a little computer running your house. And everything should talk to that, and that’s where the logic should live.

In theory, but we also have the cloud to arbitrate. We have so many different methods in. You can use Matter, you can use Bluetooth LE sometimes. You can use Zigbee, but you can also —

Ring famously runs on Z-Wave all the time.

You can use Z-Wave. You can fundamentally use Works with Alexa, just plug them right in. There’s no limitation for us to connect these things, because basically we can orchestrate to it. The team has thought through it from every way to Sunday, but they’ve also been working on it for 10 years.

It’s phenomenal. It’s probably one of the things I’m most excited about, because you basically democratize the smart home, a hundred percent. Yes. Before, it wouldn’t work unless you gave someone a button on their phone, but we just talked about this. You know where the job’s better? Just say what you want.

It’s a much better job to be done. I tried to do it with the music demo yesterday. I’m not sure it landed this point, which is like, just plug them in. The speakers were there. I’m going to move music to the speakers. I’m going to do it nuanced. I think one time I said “Move. I want to move the music. I want to hear the music. I want you to bring the music here.” I used different language so it wasn’t continuous. That was all real working. Probably those little nuances get lost on the natural language as if I had a direct command. I didn’t. It could have been any of those. Or play, which I try to stay away from. And so, it’s the same concept. You just think it and say it, think it and say it. It’s very powerful. And on smart home, it comes to life amazingly. And this is credit to an incredible team. They’ve thought it through.

Do you think that we’ll see more of an explosion of the consumer smart home? There are big investments people have to make.

I think so. I think this is the tipping point.

You’ve got to put a bunch of light switches in or buy all new light bulbs.

I think so. Tipping point, because you don’t have to be an expert. Just plug it in, that’s it, and then say something.

I want to believe you, but I’ve been burned so many times.

I don’t care if you believe me or not at this point.

I’m just saying.

When you get after it, man.

I’m ready to get the products. I’m ready to try.

You go get after it. It’s pretty fascinating. This is what an LLM is great at. And then there’s the expert that we have to go rationalize, and so it doesn’t have to be deterministic. And so, it’s pretty interesting.

By the way, it has to learn as well, so if you go, “Turn on that light.” “Which light?” “That one over there.” “Oh, you mean the one in the living room?” “Yeah.” “Okay.” Now that’s not a good example, because you’re up against a switch, where it’s just as easy to go touch the switch. But the system learns so fast that it’ll never happen again. It’ll be like, “Oh, I know what he needs. He’s asking on this device and I got it. I know I’m turning on the light.”

What’s one thing you want Alexa Plus to do that it can’t do today?

I’ve shown you everything, but I’ll tell you, and if I can touch back to my Mary example, I want these moments to connect not only the home but the family. And it’s got some pretty amazing attributes. The idea that I can leave the house and leave a message and walk out the door, and then when Anastasia shows up downstairs, she gets the message, and it’s a lovely note from her dad with maybe a direction of what to do. The fact that it’s this totally natural language moment feels magical. Alexa is being proactive on your command, not intrusive, but you are asking her to be. When you start seeing those things, that’s the thing. That’s the thing I want it to be. Because you’re just going to connect deeper into people’s lives in a way that makes it better, that you know me well enough. I want you to use these products and tell me your life is better.

But there’s not a specific thing where you’re like, “I need the next turn of capability here.”

Look, I have a vision for where this thing goes. I can’t take you there. We’ve already revealed everything, and we’re going to preview in a month. And it’s like, I’m sure we tipped over a few carts yesterday, and so I’ve just got to be careful how far I take it. There’s so much for the future. But I showed you a few of my favorites. And that’s what we did. We narrowed it down. There’s thousands of things it does now.

We tried to narrow it down to the ones that both told the story but are also most emotional to me, because that matters, what I’m presenting and I think sharing. And the biggest thing, you want the team to have pride in the best stuff they’ve created. And those moments are pride moments for the team.

There’s something I’m really curious about. I’ve asked basically everybody who has had something to do with Alexa about this for a decade. Amazon always calls Alexa she. For some reason this robot has a gender, and it’s a she, and it’s always a she. Why is Alexa gendered in this way?

There’s eight voices with Alexa Plus. I don’t think we talked about it yesterday. It’s in the blog post that we wrote. Not the blog post, the About Amazon post. I’m told I’m a dork when I say blog.

I run a blog. You can say blog.

Okay.

It just depends what you’re using. Pick your voice. But the default, the default, I use the default voice. I love the new voice.

You can use the old voice. I love the new voice. It’s the default. And then, you can pick a male voice or another voice, and you can call it what you want.

I just wondered. For a decade, you gendered this robot pretty real, honestly.

Yeah. It is. This voice, the more we’re using, I called her she yesterday. I understood that. I had a couple of people ask me, I’m like, “Well, that’s kind of how I was thinking about it.”

I don’t think it’s more complicated than that.

That makes sense. I don’t even mean to… I understand it’s a loaded time in American history we’re asking this question, but I actually don’t even mean it in that context. I just mean it’s a robot. It doesn’t actually have one of those. It’s only what we assign to it.

Yeah. I think look, it is, but it is getting more personal. It’s going to be more meaningful in your life.

For sure. Do you want people to think about it as a person in that way?

You don’t want to go all the way there, but yeah, I think it’s okay that you think you have another set of ears when you want them, another set of thinking if you need it. I think it’s quite powerful.

All right, last question. This is rolling out soon to some devices. I think it’s the screens, the Echo Show 15 and 21.

Yeah.

When is it going to hit everywhere?

Actually, the 8, 10, 15 and 21.

Eight, 10, 15, so the screens.

It’s rolling out next month starting with those devices, and it’ll be a gradual rollout. And then, it’ll roll out to all devices. If you want to be in first, my push is I want people using screen devices, for sure. We’re rolling it out there first because it’s such a great experience. You go get a device and you’re on the list, you’ll be first to get it. That’s basically it. If you already have a 10, a 15, a 21 and you subscribe, then we’ll get it out to you as well. That’s where we’re starting.

And it’ll light up the whole house, by the way.

Oh, the other Alexas?

Yeah. 

So you have a screen, and it comes to your screen.

Yeah. Let’s say you have five Echoes at home right now, and you just go get a screen and it’ll light up your whole house.

Do you think you’ll drive a hardware cycle of people trying to buy screens to get Alexa?

I hope so. I think they should. And not because I want to sell another device, but I want people to have that experience. I think it’s a miss not to have it. It’s a miss.

I want to drive a cycle in the spirit of not trying to be sales, not my thing. But I will say if you want the best experience, go get a screen device. We’re pleased already. I didn’t expect… Pleased just seeing the reaction from yesterday. It’s nice to see. But I think in a month, people will get it in their hands, we’ll start the preview. Most features will be done, most. There’ll be a few that are coming later, for sure. And then, we’ll roll it out to everybody when it’s the right time.

Last question, you’ve said you’ve got a vision.

This is your third last question.

I know, but that’s how I do it.

I love this.

This is why I’m good at this. It’s tricky. Really, last question: You’ve laid out a vision for where you want to go. You’ve talked about the big opportunity here. I’ve asked you if you think LLMs are durable enough to pull all this off. You said they are.

Do you see this as a platform shift the way that other people have talked about it as a platform shift? Do you think we’re going to actually reconsider how we interact with computers at the biggest level, the way that touch screens did it, the way that mice and keyboards did it?

Not to be too cliche, I think 10 years ago was a magnificent moment when Alexa launched, 10 years and a couple months, but what a moment. It really was a reset. I think right now, 10 years later, I actually do think this is that next moment. But this one is, to your point, that promise. I think this is the shift. I think this is that time. It’s going to take years. This is not like, don’t worry, you’re not going to miss out. Somebody’s like, “Well, why are you so late?” I’m like, “Late? Do you know we’re just at the beginning?”

And by the way, our roadmap is awesome. And I believe in this team, in their invention, and the company’s patience for invention, and its ability to make the big bet and stick with it. It not only creates an incredible future opportunity, but with that opportunity and bet and invention, you also have the moment right now, and it’s just starting. It’s literally just starting, dude. It’s just starting.

It’s a great future. It’s fantastic. And I think the home transforms forever starting now. But it takes time. It takes time. And I would say patience is one of the strongest qualities of Amazon. I once heard this, famously, from a great leader, and I don’t know the exact quote, but our best overnight invention took seven years. It takes time. But right now, we’re here. 10 years later, here we are. And it’s the beginning of that next gen. I think it is a shift. Right this moment.

All right. No better place to end it, Panos. Thank you so much for being on Decoder.

Great to see you, Nilay.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

Google’s Pixel 7 Pro is on sale for just $199.99 (update: sold out)

Google’s Pixel 7 Pro offers a spacious screen and all-day battery life.

Update, March 3rd: The Google Pixel 7 Pro has sold out at Best Buy.

It may be a few years old at this point, but the Google Pixel 7 Pro is still a good investment given it currently costs less than some of the best budget phones on the market. Right now, you can pick up an unlocked 7 Pro for $199.99 with 12GB of RAM and 256GB of storage at Best Buy. That’s the lowest price we’ve seen on the last-gen smartphone, which initially launched at $999.

The Pixel 9 Pro is a better phone, but Best Buy’s latest Google promo is still an excellent deal on a smartphone with flagship specs that typically costs considerably more. For just $200, you get a 6.7-inch screen with a speedy 120Hz refresh rate and an impressive triple camera array, one that includes a 48-megapixel telephoto lens. The last-gen smartphone also offers all-day battery life and a robust IP68 rating for water and dust resistance, along with a handful of Pixel-exclusive features, including spam call blocking and the transcribing Recorder app.

There are some caveats to bear in mind, though. The Pixel 7 Pro doesn’t offer some of the AI camera tricks found on Google’s newer phones, and it runs on Google’s slower (y …

Read the full story at The Verge.

BYD cars now have an on-vehicle DJI drone launch platform

EV maker BYD unveiled “Lingyuan,” a vehicle-mounted drone launching system developed in collaboration with DJI that is available for all of the company’s vehicles, reports Chinese state media outlet Xinhua. The system is only available in China, like BYD’s vehicles, and costs 16,000 yuan (or about $2,197).

BYD’s video below, reposted by YouTube channel ShanghaiEye, has some real science fiction vibes: the driver taps a button on their vehicle’s touch screen, and doors slide open on the top of the car, revealing a rising landing platform with a drone on it. The drone is shown lifting off while the EV is in motion in some shots, then following the car down the road.

A CnEVPost story says the drones can take off and land “at speeds up to 25 km/h” and that they can return automatically as long as they’re within two kilometers of the vehicle. The story also says the drone can follow at up to 54 kilometers per hour. The hangar charges the drone when it’s docked. 

The Lingyuan purchase price includes a DJI drone — reports don’t seem to say which, but the video shows a DJI Air 3S — as well as the roof-mounted drone hangar, and apps that work with the system, according to South China Morning Post. The apps include one for video editing (possibly one of DJI’s existing apps) as well as one for “AI recognition,” whose function isn’t specified by SCMP. Google-translated text from BYD’s demo video revealing the system mentions “AI Posture recognition, Lingyuan takes photos around the car.” It’s not clear whether BYD owners can install the system on a vehicle themselves.

BYD and DJI collaborated in “developing a fully integrated system from the ground up,” BYD chairman Wang Chuanfu said at a launch event in Chinese tech hub Shenzhen on Sunday, writes SCMP. The automaker also reportedly introduced a version of its Bao 8 SUV that comes “pre-equipped with the Lingyuan system.”

It’s not BYD’s first venture into drone launchers. Its luxury brand, Yangwang, launched an off-road variant of its U8 SUV with one attached to its roof rails, but that looks much larger than the Lingyuan featured in BYD’s video.

Lingyuan sounds a lot like DJI’s Dock 3 drone-in-a-box solution meant for use in tasks like power line inspections or emergency response. The Dock 3 was the first version of the DJI Dock to let you launch the drones from a moving vehicle, but it costs quite a bit more than the Lingyuan system: it starts at $21,059, a price that doesn’t include installation, for a bundle with a DJI Matrice 4D drone.

Tim Cook teases a new Apple ‘Air’

Apple CEO Tim Cook just teased the launch of a new device coming to its Air lineup this week. On X, Cook shared a short video with the text, “There’s something in the air,” alongside a caption reading, “This week.”

Though Cook doesn’t specify which device Apple plans to launch, it’s likely a new MacBook Air. Bloomberg’s Mark Gurman reported that the company is getting ready to reveal 13- and 15-inch MacBook Air models with an upgraded M4 chip sometime this month. Apple also revealed its M3 MacBook Air around the same time last year.

This week. pic.twitter.com/uXqQaGNkSk

— Tim Cook (@tim_cook) March 3, 2025

But there are other Air-branded devices that Apple is rumored to be working on, too. Gurman previously said that Apple planned to release iPad Air models and keyboards close to the reveal of the iPhone 16E, which just launched last month. Over the weekend, Gurman noted that Apple is “beginning to wind down” iPad Air inventory, suggesting a new product launch is coming soon.

There have been rumors about a new iPhone “Air” model as well, although it’s probably a little early for that to get announced.

Doctor Who is ready to take the long way around again in new season 2 trailer

A little doom and dread always helps Doctor Who hit that much harder, and the show’s new season 2 trailer makes it seem like it’s going to come out swinging when it returns to Disney Plus in a few weeks.

Though many of the Doctor’s (Ncuti Gatwa) companions have been amazed by their adventures to alien planets, newcomer Belinda Chandra (Varada Sethu) seems none too pleased in the new trailer as she finds herself whisked off into space. As a nurse, Belinda seems used to dealing with headstrong doctors who don’t always take kindly to having their decisions questioned. But she has no frame of reference for the cosmic danger that the Doctor is dead set on protecting their reality from. 

Like Ruby Sunday (Millie Gibson), who also makes a brief appearance in the trailer, the Doctor appears to have a mysterious connection to Belinda, suggesting that she might also be more than an ordinary human. That probably has something to do with the way Belinda looks exactly like Mundy Flynn (also Sethu) from last season’s episode “Boom,” but we won’t know for sure until Doctor Who returns on April 12th.

Google’s Chrome extension cull hits more uBlock Origin users

Google is disabling the original uBlock Origin ad blocker for more Chrome browser users, alongside other extensions that are no longer supported as the browser migrates to its new extension specification, Manifest V3. According to Google, the new standard aims to improve privacy and security, but it also removes a feature that some ad-blocking extensions relied on to work, a compromise that Mozilla is unwilling to make for its own Firefox browser.

Users online are reporting on Reddit and X that Chrome is removing outdated extensions. In Chrome, a notification window appears beneath the extensions icon in the browser toolbar with a message encouraging users to remove the affected add-on, saying it has been turned off and is “no longer supported.” Two buttons allow users to either quickly delete or manage their extensions.

Google’s uBlock Origin phaseout on Chromium-based browsers began in October, but started to have a wider impact in recent weeks. Bleeping Computer has also reported that extensions on staffers’ devices are being turned off, and Verge staffers have seen similar updates on our own machines.

These changes come as Google migrates Chrome away from the now-defunct Manifest V2 specification. Support is being killed not just for uBlock Origin, but for any extension that hasn’t updated (or is unable to update) to Manifest V3. uBlock Origin users can switch to uBlock Origin Lite, which has more limited filtering capabilities than its predecessor due to Manifest V3’s ad-blocking restrictions.
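For context, the feature at issue is the blocking version of the webRequest API, which let extensions like uBlock Origin inspect and cancel network requests in code. Manifest V3 replaces it with declarativeNetRequest, where an extension instead declares static filter rules up front, roughly like this sketch (the URL filter here is a hypothetical example, not an actual uBlock rule):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Because Chrome caps how many of these rules an extension can register and evaluates them itself rather than handing control to the extension, filter lists can’t be applied as flexibly, which is why uBlock Origin Lite filters less aggressively than the original.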

Chrome won’t be the only service affected by the Manifest V3 rollout. Other Chromium-based web browsers like Microsoft Edge are also losing V2 support, and Brave says it can only offer “limited” support once all Manifest V2 items are removed from the Chrome Web Store. Mozilla says that Firefox will continue offering both extension specifications, however, potentially giving uBlock Origin users a new browser to relocate to.

All the news about Netflix’s gaming efforts

Red Netflix “N” logo surrounded by other logos outlined in white against a black background

Netflix is making a big push into video games. The company first dabbled in games with interactive titles like Black Mirror: Bandersnatch and a Carmen Sandiego game. But starting in 2021, it made gaming a much bigger priority — Netflix hired a former EA exec and let people play a selection of mobile games for free as part of their subscriptions.

Since then, Netflix’s gaming arm has launched player gamertags and started hiring to develop a “AAA PC game,” while bringing on big names like former Halo lead Joseph Staten. Ubisoft has even announced several games in development for Netflix, including an Assassin’s Creed title to go with a new live-action series the two companies are working on. Netflix also rolled out more big-name titles like Monument Valley and a Tomb Raider roguelike. Most Netflix subscribers haven’t tried its games yet, but that might change soon.

In August 2023, it launched the first tests for its cloud-streamed games that let you play its titles on a TV or on the web, which could help Netflix more easily compete with other non-mobile gaming platforms.

We’ll be watching Netflix’s gaming efforts closely, and you can read our coverage right here.

Blue Ghost private lander reaches the Moon intact

This photograph snapped by Blue Ghost shows the Moon's surface and Earth on the horizon.

The Blue Ghost spacecraft has landed on the Moon, making history as the first private lander to “successfully” achieve the feat, according to its creator, Firefly Aerospace. The Texas-based commercial aerospace firm announced on Sunday that its lander had “softly touched down on the Moon’s surface in an upright, stable configuration.”

This is technically the second private spacecraft to land on the lunar surface. Intuitive Machines’ Odysseus lander, which touched down last February, was the first US spacecraft to land on the Moon since the 1972 Apollo 17 mission, but its mission was cut short after it toppled over.

Having nailed its own landing, Blue Ghost will now spend a full lunar day (about two weeks) performing research operations, including “lunar subsurface drilling, sample collection, X-ray imaging, and dust mitigation experiments.” These experiments aim to provide environmental data and test technologies that will help NASA land crewed missions on the Moon again. Blue Ghost is also expected to capture high-definition images of a total eclipse on March 14th, which will see Earth blocking the sun when viewed from the Moon.

Blue Ghost’s shadow seen on the Moon’s surface! We’ll continue to share images and updates throughout our surface operations. #BGM1 pic.twitter.com/iP7fWOSths

— Firefly Aerospace (@Firefly_Space) March 2, 2025

“With the hardest part behind us, Firefly looks forward to completing more than 14 days of surface operations, again raising the bar for commercial cislunar capabilities,” said Firefly Aerospace’s Chief Technology Officer Shea Ferring. “We want to thank NASA for entrusting in the Firefly team, and we look forward to delivering even more science data that supports future human missions to the Moon and Mars.”

Blue Ghost was launched from Cape Canaveral aboard a SpaceX Falcon 9 rocket on January 15th, alongside Resilience — another private lunar lander built by Japan’s iSpace aerospace firm. Resilience is taking a longer, more energy-efficient route to reach the Moon that’s aided by gravity propulsion, and isn’t expected to arrive until early May.

Blue Ghost arrived at its lunar destination after traveling 2.8 million miles over 45 days while downlinking more than 27GB of data. It then spent two weeks orbiting the Moon prior to landing on March 2nd, touching down just 100 meters away from its target location within the 300-mile-wide Mare Crisium.

“This incredible achievement demonstrates how NASA and American companies are leading the way in space exploration for the benefit of all,” acting NASA Administrator Janet Petro said. “We have already learned many lessons – and the technological and science demonstrations onboard Firefly’s Blue Ghost Mission 1 will improve our ability to not only discover more science, but to ensure the safety of our spacecraft instruments for future human exploration – both in the short term and long term.”
