Honor has announced the 400 and 400 Pro, two midrange phones that launch today in the UK and Europe. They’re capable-looking handsets in plenty of respects, but they stand out mostly for the guarantee of six years of software support, bested at this price only by Google’s Pixel 9A.
In fairness, at £699.99 / €799 (around $900), the 400 Pro is really a flagship in its own right. It’s powered by 2023’s Qualcomm Snapdragon 8 Gen 3 chipset, and boasts a 6.7-inch AMOLED screen, IP68 and IP69 ratings, and a sizable 5,300mAh battery (with an even larger 6,000mAh cell outside Europe). The triple camera is impressive too, with a 200-megapixel main shooter, plus an ultrawide and telephoto.
I’m a bigger fan of the regular 400 though, and not just because at £399.99 / €499 (around $560) it’s substantially cheaper. It has straight sides, rather than the curved edges of the Pro model, and combined with a smaller 6.5-inch display it’s much more comfortable to use.
The 400 ships with the same main and ultrawide cameras as the Pro, only giving up on the telephoto. Its IP65 water-resistance is a little less comprehensive, and the Snapdragon 7 Gen 3 won’t offer quite as much power, but the combination of the same large battery and a smaller screen should give this great endurance. The main downside for me would be giving up the Pro’s wireless charging.
Importantly, Honor’s commitment to six years of OS version updates and six years of security patches is the same for both phones, and should see them through to Android 21 in 2031. That matches Samsung’s promise for its Galaxy A56, and falls just one year short of the seven years guaranteed for the Pixel 9A. Both 400 phones arrive running Honor’s MagicOS and Android 15, and include a unique AI image-to-video feature powered by Google’s Veo 2 model, currently not available on any other phones.
While I’m not entirely sold on the 400 Pro’s near-flagship price tag, the base 400 looks like a compelling alternative to the Pixel 9A and Galaxy A56. It’s cheaper than either, should last for about as long, and bests both on quite a few specs.
Whether you’re hunting for a last-minute graduation gift or an early Father’s Day present, Grid Studio’s deconstructed gadgets are worth a look. The company transforms old-school electronics into collages in shadowbox-style frames, which make for truly memorable gifts. And now through May 27th, many are steeply discounted in honor of Father’s Day.
From vintage smartphones to classic gaming consoles, there are plenty of nostalgic gadgets on sale. Apple fans can snag classics like the iPhone 4 for $99 ($70 off), matching its all-time low, or go even further back with the first-generation iPhone 2G for $299 ($400 off). For gamers, the Game Boy Color is on sale for $149 ($100 off) and the PlayStation Portable 1000 (PSP) for $99, down from $229.
Verizon has asked the Federal Communications Commission to get rid of the rule requiring it to unlock phones after 60 days. In a letter to the FCC spotted by LightReading, Verizon claims the current unlocking requirement “benefits bad actors and fraudsters.”
The FCC first imposed an unlocking requirement following Verizon’s purchase of C-Block spectrum in 2008. It forced Verizon to allow customers to change to a new cellular carrier after purchasing a phone from the company, making it easier to switch away from Verizon than from other providers.
But now, Verizon wants to be able to lock phones for even longer than 60 days, calling the FCC’s current requirement “outdated regulation that has become both burdensome and harmful.” The company also says eliminating the rule aligns with the FCC’s recent initiative to get rid of “unnecessary” regulations.
It adds that “recent industry experience shows that even a lock of 60 days does not deter device fraud,” which is why the “industry standard” for providers who don’t have to abide by the 60-day unlocking rule is a minimum of six months.
“Waiving this rule will benefit consumers because it will allow Verizon to continue offering subsidies and other mechanisms to make phones more affordable,” Verizon says. “Waiving the rule also will benefit competition because it will eliminate the distorted playing field that currently exists.”
Xiaomi has unveiled its first in-house flagship chipset, the Xring O1, and it’s got enough power to go head-to-head with Qualcomm’s Snapdragon 8 Elite. The company also unveiled a 15S Pro phone and Pad 7 Ultra tablet that the new chip will power, plus a new version of the Watch S4 powered by another Xiaomi chip.
The Xring O1 isn’t Xiaomi’s first phone chipset, but it’s the first since 2017’s midrange Surge S1, and is far more powerful than that. Developed using a second-generation 3nm process, this is a chip intended to rival the 8 Elite, MediaTek’s Dimensity 9400, and Apple’s A18 series. On paper, it looks up to the task.
Xiaomi has opted for a 10-core CPU, more than any of the competition. Two Arm Cortex-X925 prime cores are clocked at 3.9GHz, with four more cores at 3.4GHz, two at 1.9GHz, and another two at 1.8GHz. The 16-core Immortalis-G925 GPU is also top-spec, matching the graphics horsepower of MediaTek’s Dimensity flagship.
Chip architecture has started to vary enough between the major players that clock speeds and core counts aren’t a great guide to performance. Neither are lab benchmarks, though Xiaomi’s claimed AnTuTu score of over three million puts this up there with the best, and it’s bullish about the chip’s power-efficiency too.
What this tells us, though, is that Xiaomi is serious about the Xring O1 holding its own as a true flagship: it should be in the same ballpark as Android alternatives from Qualcomm and MediaTek, and far ahead of the most powerful chips from Samsung’s Exynos team.
To hammer the point home, Xiaomi is launching the Xring O1 inside the 15S Pro, which is essentially a rerelease of last year’s 15 Pro, but with the Snapdragon 8 Elite swapped out for Xiaomi’s own chip. It also comes in a rather sleek carbon fiber design. It’s joined by the Pad 7 Ultra, a premium tablet also running the O1, with a 14-inch OLED screen, a large 12,000mAh battery, and a 5.1mm profile that makes it one of the thinnest tablets on the market.
It’s clear that Xiaomi’s ambitions go beyond a single chip, and even beyond phones and tablets. To emphasize that, it’s also launched the Xring T1, a flagship chipset designed for smartwatches. Details are light, but it includes a 4G modem, and Xiaomi has used it to power an eSIM version of the Watch S4.
All of this sounds like bad news for Qualcomm, which has long counted Xiaomi as a major customer. It’s often the first to announce a phone running the latest Qualcomm flagship each year, and as the third-biggest smartphone manufacturer in the world, Xiaomi is big business for Qualcomm. It will take some comfort in the fact that just this week the two companies signed a multi-year agreement for Xiaomi phones to keep using Qualcomm’s flagship Snapdragon 8-series chips, but there can’t be much doubt that Xiaomi’s long-term plan is to go it alone. After all, if Apple can, why can’t Xiaomi?
Today, I’m talking with Uber CEO Dara Khosrowshahi. We recorded this conversation the day Uber announced a big set of product updates, including new options for shared rides and some features to make commuting easier and more predictable. Dara was in New York for all that, so he came to our studio, and we did this one together, which always makes for a great episode.
As it happens, the traffic in New York that day was truly terrible — so we started by talking about how often Dara actually takes an Uber, what that’s like for him, and what it’s like when he goes and serves as an Uber driver, something he does regularly. There’s a lot of security around an Uber Eats delivery when Dara’s behind the wheel, it turns out.
Uber might be the single best example of the major service apps that boomed into existence in the early part of the smartphone era. In the simplest terms, I think of it as the button that can make a Toyota Camry appear nearly anywhere in the world. But underneath that simple idea is a complicated dance with tons of dependencies. Since he took over for founder Travis Kalanick almost a decade ago, it’s been Dara’s job to really operationalize and smooth the company out into a stable, profitable organization.
That’s Decoder bait through and through, so we spent a lot of time deep in how Dara thinks about Uber’s various businesses, how they’re split between product functions and the regional teams in all the countries that Uber operates in, and how Uber plans to grow as things like Waymo and other autonomous vehicle companies enter the picture.
Like so many mature service companies, a big part of the answer is to try and become a consistent part of people’s lives that drives recurring revenue – that’s things like Uber One subscriptions, but also bulk trip discounts for regular commuters. And Uber’s biggest news last week was something it called Route Share: predetermined spots where an Uber is guaranteed to come every 20 minutes and pick you up.
Yes, I asked Dara if Uber had just reinvented the bus. His answer is that Uber should be a complement to great public transit, and we talked a little about congestion pricing in New York and how that’s playing out in the data that Uber can see.
If you’ve been listening to Decoder recently, you know that I’m really curious about how service apps like Uber will handle things like AI agents, which promise to let you book a car simply by asking an assistant like Alexa or Siri. What’s in it for Uber to have its service commoditized away behind someone else’s interface, especially as it’s trying to grow new lines of business?
Dara had a lot of thoughts here, and it’s clear that the business side of AI agents has just as long a way to go as the actual tech. There’s a lot in this one, and Dara didn’t hold back. I think you’re going to like it.
Okay: Uber CEO Dara Khosrowshahi. Here we go.
This interview has been lightly edited for length and clarity.
Dara Khosrowshahi, you’re the CEO of Uber. Welcome to Decoder.
Thank you for having me.
Thank you for being here in person. I love doing these conversations in person in our studio here in New York. It’s a different energy, so thank you for coming in.
In-person is the new thing.
I have to ask you, did you take an Uber in New York City to our office?
I did not take an Uber. The reality in my life is that I need security and all that stuff, so sometimes I take Uber and then the security vehicle follows me. Today we had a lot of traffic, so I couldn’t do it.
Yeah, there was a lot of traffic today.
I always wonder how often you get to dogfood your own products.
You’ve got to dogfood your own products. Absolutely.
Tim Cook said in an interview recently that he uses every single Apple product every single day, which mathematically seems very challenging. Do you take them all? Do you take UberX and Uber Black?
Totally. You can’t do it every single day, but I use Uber. I use Uber Eats. Actually, one of the really important moves I made was starting to deliver. Most Uber employees use Uber as consumers, but not as many use Uber as earners, as drivers, or as couriers. Early on when I joined, we were building more for the rider or the eater than the earner.
So when I was in San Francisco during COVID, I was going crazy at home. So I got my e-bike and I started delivering food. Then, I got a Tesla and I started driving folks around. I really do think that it’s important to dogfood. You can’t do it every single day, because you have a day job. But for one, you learn about your product. Just as importantly, you’re setting an example for your employees.
When you were driving people around for Uber, was security in the car with you? How does this work?
They were tracking me, and then they were following me to make sure everything was okay.
This is the most intense Uber ride of all time.
It was really cool. By the way, it’s a lot harder than it looks. When I first started driving, I was so nervous. I didn’t want to screw up. I didn’t want to take the wrong route. It’s actually a lot more challenging than you think it is.
There’s a lot to talk about today. You have some news from your Go-Get announcements. You have ways for people to use Uber more consistently, which I think is very interesting. There’s a partnership with VW to launch autonomous rides in Los Angeles next year. That’s on top of all your other autonomous partnerships. I’m obviously very interested in that.
I want to start with Uber just conceptually right now. If I think of Uber in the most reductive way possible, it’s an app where I open it, I push a button, and a Toyota Highlander will show up anywhere I am in the world.
Pretty much, or another car. For you, a Toyota Highlander.
It’s almost always a Toyota Highlander.
That’s awesome. I love it.
There’s a whole lot to that. But in the most reductive sense, I think that’s how people perceive Uber. It’s a button that makes a car show up.
I think so, yes.
Is that the foundation of everything else, or are there more foundations?
Ultimately, we want to be your everyday app, kind of like your iOS for everyday living. We started with rideshare, and that was the core of our business, but then we’ve expanded into other categories. Obviously there’s Uber Eats and grocery. And to some extent, I think we’re building this real-time logistics network.
It started with moving people around. Now we’re moving things and food around, and we’re expanding into many, many more categories. But to the consumer, we want to be known as the app that makes your day a little bit easier, that helps you go where you want or get anything that you want.
The news includes a bunch of things like Route Share. There are more ways to use the subscription more consistently, more ways to plan commutes to work. So there’s that core function — that there’s a supply of people with cars in the world, Uber can aggregate the demand, and you push the button and the cars will come find you.
That’s the thing that appears to be changing in your news. It’s not “push the button and the car comes to you.” Instead, for your commute, Uber will just manage the logistics for you, and you will subscribe to it, and that will be a recurring fee. Is that the change here?
To some extent, what we’re doing is saving you time with someone else doing the driving instead so you can do whatever you want. Now, there’s a trade-off as it relates to reliability or time and price. The faster you want your ride or the more reliable you want that ride to be, the more it’ll cost us.
If you say, “You know what? I may not need my ride in the next four minutes. I can wait 15 minutes. Or instead of having the car come all the way to me, I can walk two blocks if that makes the network more affordable or I can save money,” then that’s a trade-off you can make. Human beings are always trading off. Essentially you’re paying for your time. With our network, there’s always a trade-off between reliability / time and price.
So, what we’re trying to do is find products that allow people to self-identify where they want that trade-off. For example, you have a really important trip and are going to take a flight. You need to absolutely know that the Uber is going to be there. You can pick a reserve. You pay a premium for that reserve. The driver shows up early. The reliability is like over 99 percent. That’s one version.
The other version is for the commuter. This is someone who is going to work every single day using Uber. The price of your commute every day can get up there. So, what we wanted to do is allow somebody to trade off a little bit of reliability for a better price. For example, we have one product called Route Share, which has set routes coming every 20 minutes, so there’s predictability there. You will share your ride with up to two other people, but you are trading off a little bit. You can’t quite get your ride on demand because there’s a schedule there, and you may share with somebody. You’re trading off a little bit of that personal reliability or time in order to get about 50 percent off.
So, we’re constantly making trade-offs as it relates to our services. Originally, when we were thinking about these trade-offs, we thought it was about the demo. There are some people who are less time sensitive and more price sensitive or the other way around. It turns out that that is true, but there are also occasions. An occasion might be travel, where you’re going to pay more to be more reliable. There may be another occasion where if you know the 20-minute increments, you can adjust your life to get that 50 percent off on your commute. So it’s been a really interesting evolution in how we think about the trade-offs and the efficiencies of the network.
Is convenience on that list of priorities for you? You talked about time and reliability, but it seems like if I open the app, it being inexpensive and right there is as important as all the other stuff.
I think your definition of convenience may be different. Something that’s cheaper may be more convenient for a person. Something that’s more timely can be more convenient for another. Ultimately, I think convenience is an amalgamation of all of that and it depends on what you consider to be convenient.
One of the reasons I asked that question is because in various other parts of the tech industry that we cover, consumers will pick convenience over quality almost every time. The example I use, which is not a one-to-one to Uber, is music. Consumers will pick just the worst quality streaming if it’s available. They’ll watch bootleg YouTube videos of the songs they want to hear. They often will not pay for the high-quality lossless elsewhere. And you can do price discrimination based on that.
I challenge you a little bit on that because in the early days, bootleg music, bootleg movies, etc. were the biggest use case. But actually as streaming made it more convenient, people were willing to pay. So I do think there’s a price / convenience trade-off and it’s personal. I think habits change as well. We are just trying to figure out what’s the most optimal price / convenience trade-off that we can make to aggregate the most demand and the most supply.
One of the reasons I asked about convenience is that you framed Route Share or walking a little bit to get to the spot as reliability. To me it’s actually convenience. I’m going to trade a little bit of convenience and maybe pay a lot less.
You can totally go there.
I read this press release announcing Route Share, and I had this very mid-2010s reaction, which was what if Uber just invented a bus. Did you just invent a bus?
I think to some extent it’s inspired by the bus. If you step back a little bit, a part of us looking to expand and grow is about making Uber more affordable to more people. I think one of the things that makes tech companies different from most companies out there is that our goal is to lower prices. If we lower the price, then we can extend the audience.
There are two ways of lowering price as it relates to Route Share. One is you get more than one person to share a car because cars cost money, drivers’ time costs money, etc., or you reduce the size or price of the vehicle. And we’re doing that actively. For example, with two-wheelers and three-wheelers in a lot of countries. We’ve been going after this shared concept, which is a bus, for many, many years. We started with UberX Share, for example, which is on-demand sharing.
But this concept takes it to the next level. If you schedule and create consistency among routes, then I think we can up the matching quotient, so to speak, and then essentially pass the savings on to the consumer. So, call it a next-gen bus, but the goal is just to reduce prices to the consumer and then help with congestion and the environment. That’s all good as well.
Congestion is interesting. We’re talking in New York City. We’re a few months into the congestion pricing program, which was designed to get people out of their cars and onto the bus, mass transit, even into Ubers and taxis. Have you seen any statistics since congestion pricing has been in effect that says Uber usage has gone up?
So Uber usage has gone up, but we don’t have the counterfactual. Would it have gone up if there were no congestion pricing? But what we’re seeing is very similar to some of the public data out there, which says traffic is flowing faster. I think The New York Times said traffic speeds were up like 6 or 7 percent. Uber has very similar numbers as well.
So, Ubers are getting there faster, but I don’t have a counterfactual. Was it good or bad for our business? We supported congestion pricing. The way I think about it is that it’s actually taking cars or vehicles that have the lowest utility off the road. It’s a little bit like surge pricing if you think about it, where during peak times, we want to get more drivers out there.
At the same time, we want to encourage people who may not need to travel during peak times. We actually want to dampen demand and move it to non-peak times. So, congestion pricing to some extent is a “city surge.” They’re increasing prices for a certain time, and it removes demand for vehicles that have the lowest utilization. It just so happens that Ubers and taxis are very high-utility vehicles on the road.
You’re closing in on a decade as the CEO of Uber. The way you talk is very operational. You’re maximizing the utility of these assets on the road, whether or not your drivers own them (we should talk about the autonomous fleets that seem like they’re going to come into existence). Then, there’s maximizing utility, value, and all this stuff. You took over from a founder who had a much grander vision, which was that no one should own a car and that all transportation should be run by Uber.
Well, that’s still a vision. Absolutely.
The way you’re talking about it, like Route Share competing with the bus, is it still the vision to take over from those things? Do you perceive yourself to be in competition with public transport?
No. Actually, I view that we are in competition with personal car ownership. That was Travis [Kalanick]’s vision as well. Public transport is a teammate. Uber consistently does really well in cities where there’s significant public transport because — and it goes to what I was talking about earlier — there are just different use cases. You might take public transport for your downtown commute, but if you’re going to dinner with your friends, then you might take an Uber.
So, Uber and public transport compete for a particular trip. That’s absolutely true. You have to decide, “Am I going to take the bus, am I going to take the subway, or am I going to take an Uber?” As it relates to your life, we’re very much complements.
The big kahuna that we’re going after is car ownership. That ultimately is a vision. It remains a vision. It’s hard to do because the car is a really flexible product with mass manufacturing and it’s really cost-effective. So, it’s going to take a lot of work to get there. We’re less than 3 percent of miles traveled on the road even though we’re a very big company. We’ve got a long way to go, but the vision definitely hasn’t changed.
Are there some markets where you think you’ve made a bigger dent in car ownership than others?
I don’t know if we’ve made a bigger dent, but there are definitely some markets where our penetration of miles traveled is higher. For example, in Latin America, our penetration is very high. Penetration in New York or other big US cities where there is mass transit is higher as well. I haven’t looked at whether we’ve made a dent in ownership.
Another way to think about it is there’s some percentage of people who would’ve gotten a car but Uber exists in their life. Do you track that? Do you measure that?
We don’t track that. But I can tell you — this drives me crazy. My son is over 18. I don’t know about you but did you get a license the minute you could drive?
Oh yeah. At 15-and-a-half, in Wisconsin.
Exactly! It was just such a thing. It was a goal in life. It represented freedom. I’m still trying to get my son to get his driver’s license, but Uber’s freed him up. If you look at the percentage of 16 or 18 year olds who are getting their license, that percentage is coming down significantly. I think it used to be two-thirds, but now it’s probably in the 50 percent range. I could be wrong. So, it is absolutely having an effect on car ownership.
Do you think that’s a leading indicator for Uber success, or is that a lagging indicator of Fortnite success?
[Laughs] Probably the two are mixed. I also think it’s an indication of the urbanization of our populations, but I haven’t actually looked into whether heavy Fortnite players are Uber users as well. I suspect they may be. Certainly Uber Eats users.
I’m going to leave that one alone. There’s another hour of PhD-level sociology in that comment alone.
Correlation, causation, who knows.
Let’s talk about Uber itself. You took over in 2017. You took over a company led by a very strong founder who had a vision of how the company should be run. How is Uber structured at this point? This is a classic Decoder question.
We have a combination matrix and line of business structure, so we have two global leads: one for our mobility business and one for our delivery business. Then, most of the other functions — marketing and PR, government relations, product, finance, etc. — are matrixed. I would say that as the company has matured, we have matrixed more, which I think happens to a lot of companies that mature. Their practices become more predictable and they’re trying to bring a higher degree of expertise into those practices. So, it’s a combination. We constantly have this creative battle between the lines of businesses and the matrices. Ultimately, I think it works.
I mean, that’s the tension. I think the listeners know that the secret of the show is I can just guess at 80 percent of the problems if you tell me the structure of the company.
Listen, I think that the conflict is constructive. This is a terrible metaphor, but if you look at purebred dogs, they tend to get sick more, like bulldogs that have been bred year after year after year, versus hybrids that aren’t pure breeds. To some extent, I think that in companies that have a structure that’s one note — there’s a line of business (LOB) structure or a matrix-only structure — there’s a lack of conflict that sometimes allows the weaknesses of either structure to go unchecked. Matrix structures are usually associated with more efficiency, but slower decision making. LOB structures are sometimes much faster but have difficulty scaling.
I think my directs and some of the teams get frustrated with conflict. Some of these decisions come up to me. But I think the conflict is actually really healthy. The conflict is what allows us to not fall prey to that, because these two teams are constantly keeping each other in check. If you’ve got a matrix function, it may not realize that, let’s say, the marketing in Egypt is not sharp enough, but the Egypt GM is living and dying by that marketing. So the two keep each other in balance. That Egypt GM might not know what an absolutely first-rate marketer is. That conflict keeps those two in check and ultimately gets a better outcome. Conflict can be unpleasant, but I think the fight makes us better.
As a guy who runs a single vertical line of business in a matrix company, I understand what you’re saying.
There’s no perfect way.
Put that into practice. You have a hypothetical there. What’s an example where you have had to mediate that conflict?
I think marketing is one, since we just talked about that. I’ve had to mediate as it relates to, let’s say, the allocation of capital. Every single line of business and GM is going to want some budget for branding because they’re fighting it out on the ground. Then, if you look on a global basis, how do you allocate that marketing spend? The answer to the global solution may be different than the answer to the request of that Egypt GM. Those answers during planning may be different.
Early on as we were building this matrix structure, there was a lot of conflict. Now, the teams have a real understanding. They’re getting into a rhythm, and the conflicts get to me much less often. Same thing with product. Product prioritization is very similar, with a constant push and pull.
As a global company, you want to build globalized products that are as similar to each other as possible. At the same time, there are local competitors that may have some product twists and turns that may be only applicable to that market and may not make sense for you to build on a global basis. That push and pull is a tough one sometimes.
When I say 80 percent of the problems, that’s the one I can usually guess at with 100 percent accuracy. How do you solve that? Do you have local product teams that are allowed to just make Uber whatever they want?
We have the GMs on the ground who really keep the product teams accountable, but our product planning is global by nature and it is not perfect. Sometimes the GMs who yell the loudest get their way. We want to make sure that our product people travel to the market so they feel the pain of the market, so to speak. Honestly, it’s something we have to keep working on. We don’t have it perfect by any means.
There’s a difference there from the founder-led company that is totally functional at the beginning with gobs of VC money coming in. I mean, that’s the early Uber, right? We have a bunch of zero-interest-rate VC money and we’re going to win by just buying all the drivers in a market, setting the prices to zero, and we’ll figure it out in the end. And that was totally functional. You’ve created it in a different structure. I’m just wondering, how did you make that transition?
Let me correct you there. It wasn’t functional. It was GM-led.
Even back in the day?
The GMs were mini CEOs of every single country. Travis would hire a GM and parachute them into a market. You’re right that we did go out and buy supply. As a company, we really are supply-led, where first you build liquid supply and then you invest in demand. In the early days, you essentially had to price under market and lose a bunch of money.
As liquidity increases, the matches increase, and the efficiency of the market increases, you can start pulling more profits from a marketplace. That was essentially the formula. It might’ve seemed crazy at the time, but I think that Travis and the founding team got it right. Creating liquidity in the marketplace is exactly what allows you to get to profitability. Whoever created liquidity in supply and demand fastest was the one who ultimately won.
So in the early days, it was all GMs spending a ton of money on liquidity. When I came in, I started to globalize some of those matrix functions. It took time. It was painful at times, but I think we’re at the right balance now.
What’s the hardest choice you had to make along that way?
The hardest choice was actually on product. Specifically, after I had been around for a while, our Rides and Eats businesses were complete verticals. So you had different ops teams, different marketing teams, and different product teams. Eats was competing against pure plays: DoorDash, Just Eat, Deliveroo, etc. Eats was 10 or 15 percent of the business.
So if you decide to matrix that function, for example, is your head of product going to spend time on the 85 percent for Rides or are they going to spend time on the 15 percent for Eats? And no matter how much I said, “Hey, spend time on Eats,” they were getting paid for Rides’ success. So, we went a few years with Rides and Eats essentially being totally separate.
Then, there was a big decision that we made to combine the product teams and the product and tech teams. That’s what makes the company go. There’s a lot more that happens around it, but if you don’t have great product or you’re not strong technically, you’re going to suffer. That decision was probably the hardest decision. It came with some bumps and bruises, but I think it was a really, really good decision. For one, Eats got big enough for everybody in the company to care about it, once it got past that 15 percent.
Second, we were late to the game with Eats. We were years behind the other pure players. So if you ask yourself, “Why is Eats going to win?,” we obviously had great talent, but you have to assume your competitors have great talent as well. The answer was the platform. Uber as a brand was an everyday use case, and you could actually introduce your audience who are using Rides on a daily basis to Eats. We then launched a membership program that worked for both.
The platform and the fact that we have greater scale than anyone else allow us to invest more and then cross-sell our riders into eaters. Now it’s more eaters into riders. Then, we encourage drivers to move people but also move things every once in a while.
I want to ask you about cross-selling on the platform in a very specific way, but you’ve walked right into the other big Decoder question I ask everybody, which is about decisions. You just described a big decision. What’s your framework for making decisions?
One framework that I use is, “is it a one-way door or a two-way door?” That’s classic Amazon. I think Jeff Bezos introduced it. It’s a great framework.
One day we’re going to make a supercut of people saying that on the show.
So many people have been impacted by him, and deservedly so. He’s one of the great CEOs of the world. If it’s a two-way door, that’s easy: just make the decision, use your instinct. But that was an example of a one-way door.
You mean merging the product teams?
Yeah, merging the product teams. You’ve got to go for it. I can’t tell you that there was a perfect construct one way or the other. At some point, you have to make some of these decisions based on instinct. One of the constructs is what do you have to believe so that the outcome of what you’re doing today is going to change? So, if I keep doing the same thing but I do it an inch and a half better, am I going to get to the Promised Land? Is that just going to take way too much time, or is there uncertainty in getting there where I need a shock to the system?
As it relates to Eats, we did unbelievably well in the early days, but there was a point where we were hitting a ceiling. It was then that I decided that to get to that next level, we actually needed a shock to the system, and we changed our organizational structure.
In hindsight, I think it was a really good decision, not necessarily because of the product side. We still have dedicated Rides and Eats product leaders. It was actually more on the engineering side, on the code base. It allowed our engineers to generalize a code base to take some of, let’s say, our marketplace talent from Rides, who were second to none, and move them over to Eats.
It’ll take a little while, but I’ll give you a quick example of some of these unanticipated benefits. When you order an Uber ride, we essentially have to scan the market for all the cars out there and where they’re going, and are they available or not available. We have to match you to a car, and we’ve got to price that ride in less than 30 seconds. We want the ETA to be four or five minutes, and so we have very, very little time to make that decision.
So, the algorithms that are running have to work very quickly. They’re making tens of millions of predictions per second. Make that match and go. That takes certain architecture and a bunch of compute. With Eats, the average time to make the food is like 10 to 12 minutes. So, you have a lot more time to make that match.
But we had to do all the same calculus. Where is the courier? Which courier should I match you with? How do I price you? The time in which we had to make the match on the courier side was much longer. So we would recalc the marketplace in two or three minutes. With Eats, we essentially make a bid to a courier who can accept the delivery or not. Let’s say for a delivery that I’ve got three bids I can make before the food gets cold. My first bid might be $6. If they said no, the second bid may be $7. If they said no again and I really have to get you the food, the third bid might be $10. Once we switched the teams over, the Rides team said, “Oh no, we can make this recalc in 15 seconds.”
Now, we’ve got the ability to make 10 bids. So instead of going from six to seven to 10, you go from $6 to $6.50, $6.75, $7. So the ultimate cost per transaction is lower just because of compute, speed of algorithms, and decision-making. I didn’t anticipate that when I made the switch, but those kinds of benefits — in terms of back end code, engineering, just scale — were actually the biggest we saw.
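To make the mechanic concrete, here is a minimal sketch of the bid escalation Dara describes, assuming a fixed delivery window and evenly spaced recalc cycles. The function, step sizes, and dollar figures below are hypothetical illustrations, not Uber’s actual dispatch or pricing logic.

```python
# Hypothetical sketch: escalate a courier offer across recalc cycles.
# A shorter recalc interval fits more, smaller bid steps into the same
# delivery window, so the accepted price can land closer to the minimum
# a courier will take. All numbers and names are made up for illustration.

def escalate_bids(start, ceiling, window_s, recalc_s, courier_min):
    """Step evenly from start toward ceiling; return the first accepted bid."""
    rounds = max(1, window_s // recalc_s)           # offers that fit in the window
    step = (ceiling - start) / max(1, rounds - 1)   # even escalation per round
    for i in range(rounds):
        bid = round(start + i * step, 2)
        if bid >= courier_min:                      # courier accepts a sufficient offer
            return bid
    return ceiling                                  # last-resort offer

# Three bids in a ~9-minute window (recalc every 3 minutes) vs. ten bids (recalc every 54 seconds):
print(escalate_bids(6.00, 10.00, 540, 180, courier_min=6.40))  # coarse steps -> 8.0
print(escalate_bids(6.00, 10.00, 540, 54, courier_min=6.40))   # fine steps -> 6.44
```

The point of the sketch is just the mechanism: faster marketplace recalcs fit more, finer-grained offers into the same window, so the accepted price can land much closer to the lowest amount a courier will take.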
I understand why that’s great for me as a customer because cost comes down. I understand why it’s great for Uber because the margin goes up. Is that good for the courier?
Well, the couriers get more business. For example, our ability to make more bids has allowed us to increase the number of batches. So now the courier is carrying two goods and gets tipped twice close to 50 percent of the time. That was a side benefit for the courier as well, which we think ultimately helps earnings.
The way you’re describing Uber is as a platform. A lot of the announcements are about using it more as a platform: “Engage with us more often in your life.”
It’s a kind of cold platform, but it’s an everyday use case.
The reason I ask about this is because maybe 60 percent of the interviews I’ve done on Decoder this year are people telling me they’re going to build AI agents. You’re just going to talk to the computer and the Toyota Camry is going to show up.
AI is a cool thing.
Everyone is telling me this is going to happen in ways big and small, whether we’re building Model Context Protocol applications at the bleeding edge of tech standards or whether we’re just going to click around on your website using a testing tool, which is a real product that exists in this world. And Uber has been clicked around by these products.
Those all disintermediate your platform. You have to do the hard work of creating liquidity of supply and making sure the Toyota Camry appears, and they own the customer relationship. Why would you ever participate in this? I ask them all this question. They’re like, “I don’t know. The market will figure it out.” I’m like, that’s you! You’re the market. But you’re the other side of the market. Would you participate?
It’s a really good question. We talk about it all the time, and time will tell as to what the right decision is. I believe in running open platforms. I don’t believe in companies that try to fight the course that technology is taking. They always get left behind. I think we should go to where consumers want us to go. I do think that’s all nicey-nicey, but at the same time, one of the advantages that we have is our unique inventory.
Essentially, we’ve got 8.5 million drivers and couriers and over 1.2 million merchants out there. It’s very difficult for a company that has an agent to disintermediate us and go direct to the almost 10 million pieces of inventory, if you want to be impersonal. These are people, these are businesses. It’s very difficult for them to draw that inventory. While you may make your inventory available to these agents, you can charge a toll if you have unique inventory. So, if the agents bring us more demand, which means more orders for our restaurant partners or more rides for our drivers, that’s worth something for us.
At the same time, because we’re becoming more and more of an everyday use case — the frequency of our average user is six times a month and it continues to increase — we can create our own local agents. We know it’s your commute time, so why don’t we get you a car? Maybe we can actually make a dispatch to the car before you actually push the button to get it because we can predict that you’re going to do it.
So I think, as it relates to AI and these agents, we want to work with these players. We come from a place of strength because of the unique inventory and the fragmentation of these markets that we’re organizing. We’re going to build our own agents as well. As long as we’re thinking about the consumer, our earners, and our merchants, I think we’ll be okay.
Let’s say I start Nilay’s Model Company with Nilay’s Voice Assistant, and I want to get people sandwiches. I want you to be able to say, “I need to go to the airport, get me a car. Go read my email, figure out when my flight is, schedule a car.” This is the dream. For some reason, we’ve got to throw giant data centers at this problem. This is what they pitch me.
So, I come to you and I say, “I want to be able to get cars from Uber.” What is the percentage toll? What is the extra margin you would have to charge me, in dollars and cents, to make it worth it for me to take the customer away from you in that way? Because when the customer opens your app, you get to cross-sell them into Uber One or ask if they would like some food when they arrive. There’s all these other incremental opportunities that you would forego if I take that customer.
So I have a weird philosophy on this. Initially, I charge you zero. I think that companies sometimes–
Alarm bells just went off at every level.
No, listen! People spend so much time trying to figure out what the economics might be when the first thing is to try it out. Is it going to be a good experience or not? Is your scheduling actually going to work, or is it going to be off and the driver has to wait for 10 minutes, which is terrible? Let’s just figure it out. Then, once you optimize the experience, we can measure. Are you an incremental consumer for Uber or are you totally cannibalistic?
If it’s cannibalistic, then I’m going to charge a lot of money. You can’t have any of the money because you’re getting the benefit of my content. You’re not bringing me any business at all. If it is incremental, then I would pay some take rate. Is it a 5 percent, 10 percent, 20 percent take rate? It depends on the incrementality.
But I think so much innovation has slowed down because companies try to figure out the economics first. Figure out the experience first and then the economics. Listen, if I do a bad deal for a year, who cares? I’m going to renegotiate with you. I’m building stuff for the next 10 years. Success or failure isn’t going to be determined by my take rate being 5 or 20 percent in year one. It can set precedent, and precedents are dangerous. That’s why I would say to charge zero. Let’s try it out. Let’s see what the experience is. Let’s try to measure out what the value add is, and then the economics will take care of themselves.
You said you’re having these conversations a lot internally at Uber. What is the shape of those conversations? Are there partners you’re excited to work with? Are there agents you’ve seen that are promising?
We’re working with a number of players. I’d say OpenAI is probably the top player we’re working with as it relates to agents. It’s imperfect. It’s really early. The volume is tiny right now. The work is just to make the experience really great. It’s way too early for me to tell you if it’s going to work or not. Eventually it will, but I don’t know the shape right now, so I want to experiment.
When you think about that particular future, everybody wants this product. Again, 60 percent of my conversations are people just dreaming of a future where you talk to the robots and the robots go do stuff for you.
Not a bad future.
That implies a lot of things. At the very beginning, it implies that Uber exists so that there’s even an API for you to go ask for a car from. At the base layer of that, it implies that Toyota exists, and that there are literally vehicles with an effectively standardized service, support, and maintenance network around the world. The Toyota Camry is an atomic unit of transportation across the entire world, and that’s an amazing thing that you depend on so that people can talk to the robot and the car will show up.
That’s going to change the dynamics of those cars completely. You won’t have independent contractors bringing their own Toyota Camry, the atomic unit you don’t have to worry about. Where do we get wheels for a Toyota Camry? It’s just not a problem that Uber has to solve. Cars start driving themselves and suddenly, a lot of people have to worry about different things. How do you think that’s going to change while the agents are coming for you?
Oh my god, now you’ve added agents to the mix.
At the end of the day, I think what you really want to do is say, “I need a car,” and then there’s just a network of autonomous vehicles waiting to accept bids.
I think you’re correlating two things that I would separate. I was trained as an engineer, and you want to separate and simplify the two problems, although they may interact in certain ways. We talked about agents. If I bring value in by creating inventory for that agent, the value’s going to be determined based on whether that demand for the agent is incremental or not. If it’s not incremental, the value’s really small, which means the content provider should get the vast majority of the economics. If it is incremental, then we’ll talk about the value of that incrementality.
Now, when you look at autonomous vehicles, you’re essentially replacing the driver with a machine. Our everyday drivers don’t only drive. Driving is their main job, but they also usually own the car, refuel the car, clean the car, etc. So you have to replace that work set. First is the software layer. Waymo is working on it, there’s VW. There are tons of different companies, like Pony and WeRide, that are working on that. We want to work with all the software providers, make sure they’re safe and affordable, and bring them to market.
Second is the car itself. The car can’t just be a good, old-fashioned car. It has to be a super car. It’s wheels but also a sensor kit. It needs much, much more compute to work, and the cost of that hardware right now is inordinately expensive. But it will come down after a couple of generations. Then, you have fleet management, which we just talked about and involves cleaning cars, etc., and then fleet ownership as well. The manager may own the car or ultimately… actually I think you’re going to have car ownership by the Blackstones of the world. Right now, Marriott doesn’t own any of its hotels. There are these entities called REITs that own the hotels. There will be fleets, with these financial companies and retirement pension funds owning big fleets of cars.
Our job is going to be bringing demand to these really expensive cars. These AVs are so expensive, you’re not going to have that many of them on the road in the early days. So, there will be a hybrid model like we have in Austin right now. If a Waymo happens to be close to you, we’ll hail a Waymo. If the Waymo is 10 minutes away and there are a bunch of drivers who are really close, we’ll get you a driver.
The percentage of AVs over a period of time is going to increase, and our job is going to be to manage that shift smoothly and make sure that the utilization, whether it’s a Waymo or VW, is really high. Right now, what we’re seeing in Austin is that the average Waymo is 99 percent more productive in terms of trips per day than the average driver. So we can really drive utilization.
Ultimately, if the demand for that Waymo comes through the Uber app the old-fashioned way, through an Uber agent, or through an OpenAI or another agent, I don’t think it really matters in terms of the autonomous transition. That’s going to happen separately. We want to manage it. Where that demand comes from, whether agent, app, or something that we haven’t imagined, that’s a separate issue. So we just have to manage both transitions. I don’t think the two are really going to conflict with each other.
Let me try to make them conflict.
All right. I like it.
You have some big partners. I think a lot of people assumed Waymo would be a competitor, but you found ways to partner with them.
I think they’re a competitor and a partner. I’ll give you an example. Domino’s is a competitor to Uber Eats because sometimes people go directly to the Domino’s app, and it’s a partner at the same time. I think people look for the drama, like if you are with me or are you against me. The fact is that they work with us — coopetition or whatever you want to call it. Waymo’s going to work with us in certain circumstances, and it’s going to be looking to bring customers directly in others.
Right, because you control a whole bunch of demand for their product.
Yes.
The one competitor I don’t see making as pragmatic a decision as that is Tesla. We will see how [the Robotaxi] actually exists, but the way it has been described is it’s going to take everyone’s individually owned Teslas, flip on some software, and while you’re sleeping, they’re going to go off and drive at night.
I have a lot of questions generally about where the demand in the middle of the night comes from, but so it goes. But that’s a system where you suddenly increase a bunch of autonomous car inventory that is just floating around the city, and you can get demand by just saying, “I need a car.” Then, you have an individual unit of supply that’s like, “Here I am.” You can build an entirely other system that does not require paying you a transaction fee.
That does feel like how you can combine these two ideas because the individual operator of the Tesla Robotaxi might have a different approach. How do you reconcile that? Do you think Tesla’s going to make that work? Do you think that all those people are going to end up turning over their houses to Airbnb management companies like Airbnb owners do?
I think it remains to be seen. We have to take anything that Tesla does seriously. Elon Musk’s an unbelievable entrepreneur. He’s built so many companies. My thinking is that the individual owner of the Tesla is going to get a subset of demand if that demand can only come from Tesla. Or he or she could get Tesla demand plus Uber demand.
So say, 10 years from now, every single Toyota comes with self-driving software. I think that is relatively likely. You can buy a Toyota and when you’re asleep put that Toyota on Uber, which has unbelievable amounts of demand, and make X dollars per day. Or you can buy a Tesla and put it on the Tesla app, which I think is going to have less demand. Ultimately, if you’re looking out for the owner and you want to allow them to monetize as much as possible, that owner’s going to look for the most demand, which is actually going to drive them to multi-source.
This is why five years ago, Domino’s was on its earnings calls saying, “I’m never going to use third-party apps,” and it is now. Ultimately, it’s a restaurant, and you want to monetize that restaurant as best you can, and having multiple platforms feeding that restaurant, so to speak, is mathematically the best way. So, I’m hoping that Tesla eventually decides, “I can build my direct channel, but I can also work for Uber.” That’s ultimately best for Tesla owners.
Have you talked to Tesla about collaborating in any way?
We talk to them all the time. I think there are 150,000 Teslas on the network. We’d love to collaborate with them, and if they want to take us up on that, we’re game.
Do you think they can actually pull off the Robotaxi fleet as they’ve described it?
I don’t know. From my standpoint — and this is judgment — you really need superhuman safety. Superhuman, to me, doesn’t mean better than a human. It means five times better than a human. I think the data suggests that Waymo is around that level. It’s not 100 percent clear to me whether camera-only can get there. Most of the players who are building this technology from the ground up are going with cameras and LiDAR to have redundancy as it relates to perception. If Tesla can do it, more power to them. We’ll see. I think time will tell.
The thing that I think about is what I already mentioned, which is that I know what kind of riders exist in the middle of the night. I was one of those riders in my 20s, and it just seems like there’s going to be an awful lot of puke in an awful lot of Teslas.
There’s a lot that drivers do that we take for granted. There’s a lot of work.
How do you manage that with your autonomous fleet now? Somebody gets sick in the car, somebody makes a mess in the car, and you don’t know about it until the next rider shows up.
That’s something that you have to handle through rider complaints, or you might have cameras inside the car, for example. The newer cars have that.
Do you make an AI puke detector?
I guess we’re going to have to build that. That’s going to go into our product roadmap.
Smell-O-Vision in the car to make sure.
That’s an interesting one, but that essentially is going to be the job of the fleet operator.
How does that cut work with the fleet operator? I took an Uber to work today. I figured I should use your app before I talked to you. I asked the driver and he said, “I want my rates to go up.” Maybe every driver I’ve ever talked to wants that.
Everyone wants to make more money. Absolutely.
That’s the driver’s perspective. I’m curious how you’re going to make the rates go up with big fleet operators who now have giant fixed costs, are happy to have 24/7 utilization, and maybe take a lower rate for greater use while the drivers just want high rates. How do you square that circle? It feels like the driver rates are going to get pushed down as autonomy comes into the mix.
Our job is going to be managing the balance between demand and supply. So, I think autonomy is actually going to drive more demand. Even though autonomous inventory is going to be an ever-increasing percentage of our overall inventory, the number of drivers that we need on our network over the next 5-10 years is going to increase because demand increases more, so to speak.
Twenty years from now, you might be right. From that standpoint, we just have to manage our inventory. There’s a turnover of drivers all the time, and we want to make sure that we give drivers the right earnings expectations. In a city where demand isn’t growing because autonomous supply is coming in, maybe we just have to slow down driver recruitment and not make them promises as it relates to earnings.
So, I think communication, being straight with people, is the ultimate answer. But I don’t take it for granted. This is going to be a transition that we have to manage very, very carefully and make sure that our driver base is taken care of.
What do you think the deals with the fleet operators will look like? Maybe the entire thesis of The Verge is that adding computers to things makes them very complicated, and we should pay attention to that complexity. I’ll use the Toyota Camry again, but with the Honda Accord or the Chevy Malibu my parents had in the ’80s when I was a kid, you didn’t have to update the software. They didn’t have buffer overflows. The infotainment deck did not restart in the middle of the day.
They might have cranky mechanical issues, but again, there’s an ecosystem to help fix that very quickly. If you get a flat tire in New York, you can just push the car around the corner and there’s a taxi garage waiting for you. That’s incredible. You’ve got a bunch of cars that have tons and tons of software at the bleeding edge of technology, yet cameras can get dirty or broken or whatever. That’s a different set of costs on top of the existing mechanical costs. You’re not going to repair the robotic cars. The fleet managers are going to.
We actually work with fleet managers today, and the reason is that a bunch of drivers can’t afford vehicles or can’t get loans for vehicles. Companies can. Also, we’re actively trying to electrify our vehicle base. Close to 10 percent of our vehicles are now electric, with more than 20 percent in certain cities in Europe. So, we have experience with fleet operators. About 15 percent of our volume comes from fleet operators. Many of those fleet operators own the cars. Not all fleet operators own the cars. I think if they own the cars, then they’re just looking for a return on invested capital, and the return is going to depend on the risk. The more volume we bring them, then the cheaper the cars are going to be because utilization is going to be really, really high.
Then, there are going to be some pure play fleet operators where the financial players are going to own the cars. That’s going to look more like cost plus. So if it costs you $10 to operate the fleet, we’ll pay you $11 or $12. Our philosophy right now, as I said earlier, is to make the damn thing work, make it safe, and then worry about the economics. AV loses money for us right now, but we know it’s going to be huge in the future. It’s going to ultimately bring the cost of our service down, and it’s going to make the street safer. For us now, it’s about making the system work, and then the economics are going to fall into place.
But I think fleet operations will be a part of the ecosystem and they won’t be the majority. What we will bring to these autonomous players is an end-to-end solution. So, they have to work on what they’re really good at, which is self-driving algorithms, and then leave the rest up to us.
But it’s that middle part that I’m just focused on.
Well, we work with these fleet operators. For example, one of our fleet partners is operating the Waymo fleet in Austin right now. Certain players may want to do the fleet operation themselves, but I think that our local on-the-ground expertise and the fact that we are running fleets with millions of cars today will be another benefit. I don’t know if VW in LA is going to want to operate its own fleet. We will be able to bring that capability. If they want to do it themselves, that’s totally fine as well.
I think I’m just stuck on the idea that the fleet operators are going to have a bunch of really highly paid IT people sitting around rebooting the fiber optic networks of these cars.
If it’s just the reboot, that’ll be easy. Again, these are complex vehicles. The cost of the vehicles is really high right now, and the care for those vehicles is going to be nonzero.
When you think about the safety side, which you’ve talked about several times now, the question of autonomous liability exists in the world. I book a Waymo through Uber. The Waymo gets in an accident. Who accepts that liability today?
We think early on, most of the self-driving companies will self-insure. They’re big companies. They can afford to self-insure. But ultimately, whether they insure, we insure, or the owners insure, it’ll work itself out.
What is the mechanism for working it out?
Looking at the incident rate and looking at the cost per incident. I think incident rates are going to come way down, especially if these autonomous vehicles are superhuman safe. Cost per incident is going to go up, which is because the cars are much more expensive. But I think that the overall cost of insurance as a percentage of your ride is going to come way down because of the safety of these cars.
We’re almost out of time here. I’ve got to ask you the last question in honor of my Uber driver today. If you were to tell Uber drivers today, “Here’s how I will make you make more money,” what would you say?
I would say we’re going to bring you more rides. The income for Uber drivers in New York, for example, is more than $50 per utilized hour. New York’s an expensive place, but we think that’s a pretty good trade-off for a job that brings lots and lots of flexibility. But it’s a hard job, traffic is tough. So, we’ll keep working on it. Ultimately, the way to offer more earnings to drivers is to bring more demand, and every single day we’re working on that.
Dara, this is great. Thank you so much for being on Decoder.
Thank you. I really appreciate it.
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!
If you’ve been wanting to grab a subscription to Nvidia’s GeForce Now cloud game streaming service, now’s a good time to commit. That’s because six-month subs to its mid-tier Performance subscription — which offers ad-free 1440p gameplay, short queue times, plus ray-tracing on supported games — are 40 percent off. Instead of $49.99, you’ll pay $29.99, a reasonable price if you don’t have a gaming PC, or just want a reliable way to play many (but maybe not all) of your Steam, Epic Games Store, and other titles away from your PC.
Each tier of GeForce Now offers different session lengths; the Performance tier caps you at six hours of playtime before freeing up your virtual machine for another player. However, you can simply start a new session immediately after. This deal will be active through July 6th, and after your six months are up, you’ll be charged the full price unless you cancel beforehand.
This discount comes at an especially opportune time for Steam Deck owners, as the official GeForce Now app should be launching soon on Valve’s handheld, although an exact date hasn’t been publicly confirmed. Now, it might seem kind of redundant to play Steam games via the cloud when you can simply load them onto a Steam Deck’s storage, but Nvidia’s virtual machines are more powerful. You’ll gain visual fidelity not possible on the Deck, but add some input latency — a given with any and all cloud game streaming services.
Humane’s AI Pin. | Photo by Amelia Holoway Krales / The Verge
More details are trickling out about Jony Ive and Sam Altman’s new AI device. In a post on Thursday, Apple analyst Ming-Chi Kuo says his research indicates that the device could be larger than Humane’s AI pin, but with a “form factor as compact and elegant as an iPod Shuffle.”
Kuo adds that “one of the intended use cases” is wearing the device around your neck. It also may not come with a display, Kuo says, featuring just built-in cameras and microphones for “environmental detection.” The device could also connect to smartphones and PCs to use their computing and display capabilities.
Kuo’s post reads: “My industry research indicates the following regarding the new AI hardware device from Jony Ive's collaboration with OpenAI: 1. Mass production is expected to start in 2027. 2. Assembly and shipping will occur outside China to reduce geopolitical risks, with Vietnam currently the…”
This latest leak aligns with a report from The Wall Street Journal, which says the device will be aware of a user’s life and surroundings, but probably won’t be a pair of glasses. On Wednesday, Altman revealed that OpenAI is buying Ive’s AI hardware company, io, for $6.5 billion, which will “take over design for all of OpenAI, including its software.” Ive and Altman are aiming to launch their first devices in 2026.
Tamagotchi Paradise introduces a new dial control letting you zoom in on characters to the cellular level. | Screenshot: YouTube (https://www.youtube.com/watch?v=dgWn6l16bjY)
There are still plenty of Tamagotchi devices that have you focusing on raising just a single digital character, but the new Tamagotchi Paradise puts you in charge of a planet full of needy dependents. To make it a little easier to navigate its digital world, the device includes a full color screen and complements its three button controls with a dial on the side for zooming from a view of the entire planet right down to the cellular level of your virtual pets.
Launching in Japan first on July 12th, 2025, for $45, the Tamagotchi Paradise experience starts with you hatching an entire planet in an event called the Egg Bang. From there you can zoom in on your world to one of three different unique ecosystems called Tama Fields that include land, water, and sky. Which ecosystem you start with is dependent on which of three colored versions of the Tamagotchi Paradise you buy, but you’ll eventually be able to unlock all three Tama Fields as you play.
Each Tama Field will affect how the Tamagotchi characters grow and evolve through unique activities and foods. The game includes over 50 different Tamagotchis separated into 12 different species that can be raised from eggs, but Tamagotchi Paradise expands that with a breeding mechanic so two compatible characters can create new Tamagotchi, with 50,000 possible combinations available. Two Tamagotchi Paradise devices can also be physically connected, letting characters from each interact to potentially start a family or become foes that prank each other and even get in fights.
If caring for a planet full of virtual pets isn’t hard enough, the device’s dial even lets you zoom in on Tamagotchis to the cellular level, which is how you care for and cure them when they get sick. It sounds exhausting, but unlike kids, dogs, or cats, if the responsibilities become overwhelming, you can simply pop out the device’s AAA batteries for some respite.
Produced by Story Syndicate (Britney v Spears) and directed by Mark Monroe (Jim Henson: Idea Man), Titan: The OceanGate Disaster is both a look back at everything that went wrong with the Titan project and a deep dive into the mind of OceanGate CEO Stockton Rush, one of the five people who died when the submersible ultimately imploded at the bottom of the sea. In the doc’s first trailer, Rush seems to be the only person at OceanGate who refused to see the multitude of ways in which the failed expedition was going to put people’s lives in danger.
The trailer makes it seem like there was no one at the company who could convince their boss that asking an accountant to pilot an underwater vehicle or trying to hook a video game controller up to control a submersible were blatant signs of poor judgment. But these are the sorts of perils that come with working for CEOs who simply can’t fathom that they might be wrong.
Following its premiere at this year’s Tribeca Festival on June 6th, Titan: The OceanGate Disaster will make its Netflix debut on June 11th.
Video game consultants like Laura Kate Dale came into 2023 with a lot of hope. Since 2020, accessibility in games had become a mainstream discussion, bolstered by high-profile releases like The Last of Us Part II, and it appeared things could only get better. Yet, as the year drew on, she says, "there started to be signs that, behind the scenes, accessibility advancement was slowing down."
Now, that momentum has come to a relative standstill. Consultants speaking to The Verge paint a picture of repetitive conversations, fighting to maintain basics that should already be established, and a sense that the broader industry has taken its foot off the gas after the early months of the incipient covid-19 pandemic provided a real sense of hope that accessibility was here to stay.
"The gaming culture of that time is a reflection of catering to the disabled experience, because accessibility was sorely needed by everyone," says Kaemsi, an online broadcaster. "The rise of accessibility back in 2020 was almost a promise that, when we started recovering from the lockdowns, the world would start considering everyone in all facets of living, and all we needed to do was give people a chance to …
SwitchBot claims its lock is the first retrofit smart lock with 3D facial recognition. As a retrofit lock, it can be installed without replacing your existing lock. Instead, the Ultra attaches to the rear of your lock and controls it using a mechanical motor, leaving the front unchanged, so you can still use your keys. The Vision keypad is mounted outside of your door and connects to the lock via Bluetooth.
Unlike most retrofit locks, SwitchBot’s Lock Ultra and its predecessors, the SwitchBot Lock Pro and SwitchBot Lock, are compatible with nearly all existing locks in Europe and America. In addition to facial recognition, the lock can be controlled with a fingerprint, a keycode, and an NFC card when connected to the keypad. Without the keypad, it works with a traditional key, app control, and auto-unlocking via geofencing (which unlocks the door when your smartphone arrives home).
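As a rough illustration of how geofenced auto-unlock generally works (a generic sketch, not SwitchBot’s code; the home coordinates, radius, and function names are made up), the companion app watches the phone’s location and triggers an unlock only when it crosses from outside a radius around home to inside it, so the door isn’t repeatedly re-unlocked while you’re already at home:

```typescript
// Generic illustration of geofence-based auto-unlock (not SwitchBot's code).
// HOME, RADIUS_METERS, and the unlock() callback are hypothetical placeholders.
const HOME = { lat: 51.5074, lon: -0.1278 };
const RADIUS_METERS = 100;

// Great-circle distance between two points, in meters (haversine formula).
function distanceMeters(
  a: { lat: number; lon: number },
  b: { lat: number; lon: number }
): number {
  const R = 6371e3; // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

let wasInside = false;

// Call on every location fix; unlocks exactly once per arrival, when the
// phone crosses from outside the geofence to inside it.
function onLocationUpdate(lat: number, lon: number, unlock: () => void): void {
  const inside = distanceMeters({ lat, lon }, HOME) <= RADIUS_METERS;
  if (inside && !wasInside) {
    unlock();
  }
  wasInside = inside;
}
```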
The company says the Ultra can get up to nine months of battery life on the included rechargeable battery and also has a backup battery that lasts for up to five years. It works over Bluetooth and requires one of SwitchBot’s gateways to connect to Wi-Fi and integrate with smart home systems, including Apple Home (through Matter), Amazon Alexa, and Google Home.
The lock can also work with the new SwitchBot Hub 3 ($119.99), the company’s latest smart home hub designed to control and integrate SwitchBot’s Bluetooth products. The Hub 3 can bridge up to 30 SwitchBot devices to Matter ecosystems such as Apple Home.
The new Hub is a complete revamp from the second-gen version. Larger and with a sleek, all-black design, it comes with an integrated stand and features a display, a dial, and four customizable buttons.
The hub also functions as a temperature and humidity sensor, as well as a light and motion sensor. Its display can wake up on motion and show information such as indoor temperature and door lock/unlock status, and the buttons can control smart home scenes.
The dial can connect to compatible devices to control them; for example, it can adjust the brightness of lights, the volume on connected TVs, and the temperature of a thermostat. SwitchBot says the hub can control Matter devices that are integrated with Apple Home, and it will work with Home Assistant.
The Hub is also an IR controller and can integrate with over 100,000 IR device codes.
The new products are now available to preorder from SwitchBot’s website, with shipping slated for June.
When you select “hear the highlights,” a playback bar will appear at the bottom of the screen. | Image: Amazon
Amazon is testing new AI-generated audio summaries that will let you listen to two AI “hosts” chat about a product’s features. Along with product details, the AI audio clips also draw on user reviews and information from the web.
The audio summaries open with a “friendly reminder” that you’re listening to an AI-generated clip, followed by an introduction to an “expert” AI host who’s supposed to give you a rundown of a product’s features. It’s similar to Google’s AI-generated audio overviews, which have two AI hosts discuss your research, documents, or slides in a podcast-like format.
In the clip for the SHOKZ OpenRun Pro, the AI host introduces us to “Max,” who says the key difference about these headphones is that they “conduct sound through your cheekbones instead of going into your ears.” The AI host then follows up with questions about the headphones, like who would benefit from the design and whether the sound quality is up to par.
“While the microphone gets praise for noise cancellation, some users find they’re not loud enough for an immersive music experience,” the AI “expert” Max says. “But customers do mention they’re better than earpods in certain situations.”
Amazon’s AI-generated audio summaries are currently only available to some customers in the US, but the company plans on bringing them to more products and customers in the “coming months.”
At last month's rapturously received Slate debut, it took an executive's quip that "Slate" and "Tesla" use the same five letters to shift my brain into high gear. I've covered the EV world for 15-plus years, and I virtually never spend time on counterfactuals. There's quite enough to cover in the real world.
But … I'm of the opinion Tesla could, and should, have launched a small, simple, cheap compact pickup truck-in other words, what Slate debuted-rather than the pickup it did produce, the Cybertruck. That expensive and polarizing vehicle has been, to put it bluntly, a sales disaster. Over 18 months, Tesla has sold only about 50,000, versus projections of many times that volume. Worse, while EV crossover utilities sell tens of thousands a month, the more expensive EV pickup trucks to date have not.
The path not taken
The company that led the world in EV production for more than a decade could have launched an inexpensive small pickup that would have democratized EVs to a whole new class of buyers. Tesla likely could have offered more range at the same price due to its in-house battery cell production. And it would have been a global product, likely to be sold in Europe and Ch …
Apple hasn’t announced when the Mr. Scorsese series will be released.
After directing dozens of documentaries over his 60-year career, legendary filmmaker Martin Scorsese will now have his own life chronicled for Apple TV Plus. In its announcement, Apple says the five-part Mr. Scorsese documentary series will explore how themes like “the place of good and evil in the fundamental nature of humankind” have shaped Scorsese’s filmography as far back as his student work at New York University.
Apple hasn’t mentioned a release date for the docuseries, which is being directed by Rebecca Miller (She Came to Me, Personal Velocity). Mr. Scorsese will benefit from “exclusive, unrestricted access to Scorsese’s private archives,” according to Apple, alongside extensive interviews with Scorsese himself that dive into how his own life experiences have influenced his work.
The series also includes “never-before-seen interviews” with the filmmaker’s friends, family, and creative collaborators, including Robert De Niro, Leonardo DiCaprio, Mick Jagger, Robbie Robertson, Steven Spielberg, and Miller’s husband, Daniel Day-Lewis.
“[Scorsese’s] work and life are so vast and so compelling that the piece evolved from one to five parts over a five-year period,” said Miller. “Crafting this documentary alongside my longtime collaborators has been one of the defining experiences of my life as a filmmaker.”
Altman suggested that the acquisition could increase OpenAI’s value by $1 trillion, and envisioned a “family of devices” being born from the partnership. Information about the first device, which Altman is aiming to release by late 2026, has been kept tightly under wraps since its development was confirmed last year over concerns that competitors will set about trying to copy the product before it’s launched to the public.
Altman dropped some hints during the call that shape our expectations, however, including that it will be unobtrusive, fully aware of a user’s life and surroundings, and will serve as a “third core device” a person would put on a desk after a MacBook Pro and an iPhone. OpenAI is already predicting that the device will be popular, with Altman saying that it will ship “faster than any company has ever shipped 100 million of something new before.”
Altman told OpenAI employees on the call that they have “the chance to do the biggest thing we’ve ever done as a company here.” The Journal reports that Ive referred to the project as “a new design movement,” and harkened back to his Apple career that saw him work closely with Steve Jobs before his passing in 2011. Now teamed up with Altman, Ive said, “the way that we clicked, and the way that we’ve been able to work together, has been profound for me.”
Fujifilm has a new pint-size addition to its X-series cameras coming in late June: the X Half. It’s an 18-megapixel “half-frame” camera with a portrait-oriented sensor and viewfinder and a fixed 32mm-equivalent f/2.8 lens.
Despite being digital, the X Half is all about the vintage film aesthetic. The $849.99 camera is so dedicated to an analog-like lifestyle that it’s got an entire secondary screen just for picking one of its 13 film simulations, and it doesn’t shoot RAW photos at all — just JPGs, for a more what-you-see-is-what-you-get experience.
Fujifilm’s definition of a half-frame is a bit different from the traditional one. Usually, a half-frame film camera like the Pentax 17 captures images measuring 18mm x 24mm (around half the size of full-frame / 35mm format). But the X Half uses a 1-inch-type sensor measuring 8.8mm x 13.3mm, which is about half the dimensions of the APS-C sensors in other Fujifilm cameras like the X100VI and X-T5. So I guess it counts on a technicality.
But like the Pentax 17 and other actual half-frame cameras, the X Half is all about taking casual, fun snapshots and is meant to go with you everywhere. It weighs just 8.5 ounces / 240 grams and is small enough to fit in most small bags or even some oversized pockets. The X Half is close in size to a traditional disposable camera, but unlike a one-time-use film camera it has a proper glass autofocusing lens with aspherical corrections, and it even shoots some basic 1080 x 1440 video. (Though, in my briefing on the camera, Justin Stailey of Fujifilm North America described the lens as having “some character,” which is often a colorful way of saying it isn’t the sharpest.)
Once you take some shots via the X Half’s traditional optical viewfinder (that’s right, there’s no EVF or hybrid finder here) or its portrait-orientation 2.4-inch touchscreen, you can connect to a dedicated smartphone app (launching slightly after the camera) for extra functions. You can create your own two-up diptychs like a traditional half-frame camera, though here you can pick out the two side-by-side pictures, or you can opt for two videos or one picture and one video.
Fujifilm has baked other analog-inspired features into the X Half app, like a Film Camera Mode that collects your next 36, 54, or 72 images and arranges them into a contact sheet. But the film nerdiness goes deeper than that, as the digital film strip will be branded with the film simulation you used. There’s even a faux film advance lever for making diptychs, and Film Camera Mode forces you to use it between shots.
You can lean further into the film kitsch by adding filters, like a light leak effect, an expired film look, or a ’90s-era time and date stamp in the corner. Of course, since the camera doesn’t shoot RAW, your chosen filter and film simulation are fully baked into the JPG file. You can’t undo or change them later in post-processing like you normally could with a RAW.
Fujifilm is certainly taking a unique approach with the X Half, trying to capture the interest of younger photo enthusiasts who in recent years have been drawn to the imperfections and vibes of vintage film and aging point-and-shoot digital cameras. I don’t know how many of them will be jumping at the opportunity to scratch that creative itch with an $850 camera compared to alternatives costing a fraction of that — like a $70 Camp Snap for digital or any 35mm disposable film camera for $10 to $20 — but even if it’s half the fun I had with the Pentax 17 it should prove a good time.
Dyson’s new PencilVac uses green LEDs on both sides to illuminate dust and dirt. | Image: Dyson
Dyson has announced what it’s claiming is the “world’s slimmest vacuum cleaner.” At first glance, its new PencilVac looks like a broom rather than a vacuum because the battery, motor, and electronics are all integrated into a thin handle that’s just 38mm in diameter — the same thickness as Dyson’s Supersonic r hair dryer. It weighs in at just under four pounds and is powered by the company’s smallest and fastest vacuum motor yet.
The PencilVac is designed to be a replacement for the slim Dyson Omni-glide, which launched in 2021 with a cleaning head that used two spinning brushes so it could suck up dust and dirt in multiple directions. The new PencilVac is not only slimmer and lighter than the Omni-glide, it uses four spinning brush bars that Dyson calls Fluffycones.
As the name implies, the Fluffycones each feature a conical design that causes long hairs to slide down to the narrow end of each brush and fall off so they can be sucked up instead of getting tangled up around the brushes. The Fluffycones slightly protrude at the sides for better edge cleaning, and are paired with green LED lights (instead of the lasers that Dyson’s other vacuums use) that illuminate dust and debris so you can see when floors have been properly cleaned.
Other innovations Dyson is introducing with the PencilVac include a motor that’s just 28mm in diameter but spins at 140,000 RPM to generate 55AW of suction, and a new two-stage dust filtration system that prevents clogging and performance loss as the vac fills up. Given its size, the PencilVac has a smaller dust bin than Dyson’s other cleaners, but it uses a new design that compresses dust as it’s removed from the airflow to help maximize how much dirt the bin can hold.
The PencilVac magnetically connects to a floor dock for charging and storage, and features a small LCD screen that shows the cleaning mode and an estimate of how long before the battery dies. It’s also Dyson’s first vacuum to connect to the MyDyson mobile app, which offers access to additional settings, alerts for when the filter needs to be cleaned, and step-by-step maintenance instructions.
The vacuum’s slim design does come with some trade-offs when compared to the company’s larger models. Its cleaning head is designed for use on hard floors, not carpeting, and while it can be swapped with alternate attachments like a furniture and crevice tool, it doesn’t convert to a shorter handheld vac. Runtime is also limited to just 30 minutes of cleaning at its lowest power setting, but its battery is swappable and Dyson will sell additional ones to extend how long you can clean.
Dyson hasn’t revealed pricing details yet, and while the PencilVac will launch in Japan later this year, it won’t be available in the US until 2026.
Microsoft employees have discovered that any emails they send with the terms "Palestine" or "Gaza" are getting temporarily blocked from being sent to recipients inside and outside the company. The No Azure for Apartheid (NOAA) protest group reports that "dozens of Microsoft workers" have been unable to send emails with the words "Palestine," "Gaza," and "Genocide" in email subject lines or in the body of a message.
"Words like 'Israel' or 'P4lestine' do not trigger such a block," says NOAA organizer Hossam Nasr. "NOAA believes this is an attempt by Microsoft to silence worker free speech and is a censorship enacted by Microsoft leadership to discriminate against Palestinian workers and their allies."
Microsoft confirmed to The Verge that it has implemented some form of email changes to reduce "politically focused emails" inside the company.
"Emailing large numbers of employees about any topic not related to work is not appropriate. We have an established forum for employees who have opted in to political issues," says Microsoft spokesperson Frank Shaw in a statement to The Verge. "Over the past couple of days, a number of politically focused emails have been sent to tens of tho …
A lawsuit against Google and companion chatbot service Character AI — which is accused of contributing to the death of a teenager — can move forward, ruled a Florida judge. In a decision filed today, Judge Anne Conway said that an attempted First Amendment defense wasn’t enough to get the lawsuit thrown out. Conway determined that, despite some similarities to videogames and other expressive mediums, she is “not prepared to hold that Character AI’s output is speech.”
The ruling is a relatively early indicator of the kinds of treatment that AI language models could receive in court. It stems from a suit filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after allegedly becoming obsessed with a chatbot that encouraged his suicidal ideation. Character AI and Google (which is closely tied to the chatbot company) argued that the service is akin to talking with a video game non-player character or joining a social network, something that would grant it the expansive legal protections that the First Amendment offers and likely dramatically lower a liability lawsuit’s chances of success. Conway, however, was skeptical.
While the companies “rest their conclusion primarily on analogy” with those examples, they “do not meaningfully advance their analogies,” the judge said. The court’s decision “does not turn on whether Character AI is similar to other mediums that have received First Amendment protections; rather, the decision turns on how Character AI is similar to the other mediums” — in other words whether Character AI is similar to things like video games because it, too, communicates ideas that would count as speech. Those similarities will be debated as the case proceeds.
While Google doesn’t own Character AI, it will remain a defendant in the suit thanks to its links with the company and product; Character AI’s founders Noam Shazeer and Daniel De Freitas, who are separately named in the suit, worked on the platform as Google employees before leaving to launch it and were later rehired by Google. Character AI is also facing a separate lawsuit alleging it harmed another young user’s mental health, and a handful of state lawmakers have pushed regulation for “companion chatbots” that simulate relationships with users — including one bill, the LEAD Act, that would prohibit their use by children in California. If passed, the rules are likely to be fought in court at least partially based on companion chatbots’ First Amendment status.
This case’s outcome will depend largely on whether Character AI is legally a “product” that is harmfully defective. The ruling notes that “courts generally do not categorize ideas, images, information, words, expressions, or concepts as products,” including many conventional video games — it cites, for instance, a ruling that found Mortal Kombat’s producers couldn’t be held liable for “addicting” players and inspiring them to kill. (The Character AI suit also accuses the platform of addictive design.) Systems like Character AI, however, aren’t authored as directly as most videogame character dialogue; instead, they produce automated text that’s determined heavily by reacting to and mirroring user inputs.
Conway also noted that the plaintiffs took Character AI to task for failing to confirm users’ ages and not letting users meaningfully “exclude indecent content,” among other allegedly defective features that go beyond direct interactions with the chatbots themselves.
Beyond discussing the platform’s First Amendment protections, the judge allowed Setzer’s family to proceed with claims of deceptive trade practices, including that the company “misled users to believe Character AI Characters were real persons, some of which were licensed mental health professionals” and that Setzer was “aggrieved by [Character AI’s] anthropomorphic design decisions.” (Character AI bots will often describe themselves as real people in text, despite a warning to the contrary in its interface, and therapy bots are common on the platform.)
She also allowed a claim that Character AI negligently violated a rule meant to prevent adults from communicating sexually with minors online, saying the complaint “highlights several interactions of a sexual nature between Sewell and Character AI Characters.” Character AI has said it’s implemented additional safeguards since Setzer’s death, including a more heavily guardrailed model for teens.
Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project, called the judge’s First Amendment analysis “pretty thin” — though, since it’s a very preliminary decision, there’s lots of room for future debate. “If we’re thinking about the whole realm of things that could be output by AI, those types of chatbot outputs are themselves quite expressive, [and] also reflect the editorial discretion and protected expression of the model designer,” Branum told The Verge. But “in everyone’s defense, this stuff is really novel,” she added. “These are genuinely tough issues and new ones that courts are going to have to deal with.”
Signal is taking proactive steps to ensure Microsoft’s Recall feature can’t screen capture your secured chats, by rolling out a new version of the Signal for Windows 11 client that enables screen security by default. This is the same DRM that blocks users from easily screenshotting a Netflix show on their computer or phone, and using it here could cause problems for people who use accessibility features like screen readers.
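Signal hasn’t spelled out the exact mechanism in this announcement, but Signal Desktop is built on Electron, and Electron exposes capture blocking through BrowserWindow.setContentProtection, which on Windows maps to the SetWindowDisplayAffinity API so captured frames come back blank. A minimal sketch of that approach (illustrative only, not Signal’s actual code; the window setup and preference flag are placeholders):

```typescript
// Illustrative sketch (not Signal's actual implementation).
// Electron's setContentProtection() uses SetWindowDisplayAffinity with
// WDA_EXCLUDEFROMCAPTURE on Windows, so screenshots and tools like
// Recall see a blank window instead of the chat contents.
import { app, BrowserWindow } from 'electron';

// Hypothetical user preference; Signal exposes a similar toggle under
// Settings > Privacy > Screen Security.
const screenSecurityEnabled = true;

app.whenReady().then(() => {
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile('index.html');

  // When enabled, the OS excludes this window from screen capture.
  win.setContentProtection(screenSecurityEnabled);
});
```

Because the same flag is what DRM-protected video players rely on, anything that captures the screen, including screenshot-based accessibility tools, gets the same blank output, which is the trade-off Signal is accepting here.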
While Signal says it’s made the feature easy to disable, under Signal Settings > Privacy > Screen Security, it never should’ve come to this. Developer Joshua Lund writes that operating system vendors like Microsoft “need to ensure that the developers of apps like Signal always have the necessary tools and options at their disposal to reject granting OS-level AI systems access to any sensitive information within their apps.”
Although Microsoft delayed Recall twice before finally launching it last month, the “photographic memory” feature still doesn’t have an API that lets app developers opt their users’ sensitive content out of its AI-powered archives. Recall could be useful for finding emails or chats (including ones in Signal) using whatever you can remember, like a description of a picture you’ve received or a broad conversation topic, but it could also be a massive security and privacy problem.
Lund notes that Microsoft already filters out private or incognito browser window activity by default, and users who have a Copilot Plus PC with Recall can filter out certain apps under the settings, but only if they know how to do that. For now, Lund says that “Signal is using the tools that are available to us even though we recognize that there are many legitimate use cases where someone might need to take a screenshot.”