
Apple designer Susan Kare made 32 new, Mac-inspired physical icons

Legendary Apple artist Susan Kare has released 32 new retro-inspired icons that are designed to live outside of your computer screen. Instead, the “Esc Keys” collection Kare created in collaboration with Asprey Studio consists of mechanical keyboard keycaps and wearable pendants, each featuring an 8-bit pixel art illustration like a dog, a plant, mail, and coffee.

While these icon designs are brand new, the style will be recognizable to anyone familiar with Kare’s work. She’s responsible for creating much of the iconography on the first Macintosh personal computer operating system, including the “Happy Mac” boot-up icon and the original floppy disk file save symbol.

Kare and Asprey Studio founder Alastair Walker told Fast Company that “there’s hidden meaning” to each of the new designs that represent things that people can enjoy doing away from their keyboards. Each piece in the Esc Keys collection is crafted in silver or gold-coated silver and is limited edition — from 30 to 120 pieces depending on the icon — so purchasing something won’t come cheap.

12 icons designed by Susan Kare, including a coffee cup, watering can, plants, and dogs.

Prices start at $650 for silver computer keys, ranging up to $2,064 for necklace pendants in gold vermeil (a method that coats solid silver in a layer of gold). Solid gold options are “available upon request” apparently, and each piece comes with its own blockchain-inscribed digital artwork to verify “ownership and provenance.”

Kare has paid similar homage to Apple’s computing history in her other art collections. There are prints of her original Macintosh designs available on her website, and in 2014, she sold hand-painted Jolly Roger pirate flags, inspired by the one she painted to fly above the Apple campus office in 1983, for up to $2,500.

Vimeo CEO Philip Moyer is betting on the human touch — and AI

Today, I’m talking with Vimeo CEO Philip Moyer. You probably know Vimeo from its beginnings as an artsier, more creative competitor to YouTube. But over the last few years, and especially after it went public in 2021, Vimeo has really turned itself into an enterprise software company, selling video hosting services to companies of all sizes.

And this episode is a particularly fun full-circle Decoder moment — I interviewed Philip’s predecessor, Anjali Sud, both when she was the CEO of Vimeo and again more recently in her new gig as CEO of Tubi. So it was fascinating to close the loop and see how Philip is changing Vimeo after taking over, especially as the entire ecosystem of online video is shifting so rapidly. 

Philip is pushing to make Vimeo a different kind of YouTube competitor, one that can support everything from independent creators to huge corporations. It’s a shift from the strategy Anjali used to reset the company and take it public, and there’s a lot of interesting nuance to it. Everyone wants to put videos on the internet, it turns out, but only some of those people want them to be ingested by YouTube’s advertising and recommendation systems. 

Philip himself has a ton of big tech experience: he’s worked at Amazon, Google, and Microsoft, and he’s deep in the weeds on both the tech and the business. You’ll hear us talk about Google’s business and YouTube in particular quite a bit in this one, but we also get into TikTok and what it means to have the incentives of algorithmic video platforms drastically influence both creators worldwide and the culture we all consume.

Of course, we talked about AI, too, and how it’s upending every platform in different ways. Vimeo has been marketing itself as an “AI-powered video platform” lately, so I wanted to know what Philip is thinking about Vimeo’s creator-focused mission colliding with the pitfalls of AI-generated video. 

We also spent some time on the simple supply-and-demand math problem that seems like it will change the creator economy drastically in the years to come: that is, if the amount of video on the internet explodes because of AI while the total amount of time we can spend watching video remains relatively fixed, how is anyone going to make any money at all?

This is a fun one; I think you can really tell that Philip and I could have kept talking for a very long time.

Okay, Vimeo CEO Philip Moyer. Here we go.

This transcript has been lightly edited for length and clarity. 

Philip Moyer, you are the CEO of Vimeo. Welcome to Decoder.

Thank you, Nilay. It’s great to be here.

I am very excited to talk to you. I have got to tell you, just structurally, this is one of the very first truly full-circle Decoder episodes. Because we had your predecessor Anjali on as CEO of Vimeo, and then we had her on in her new job as CEO of Tubi, and now we have you on as her replacement.

Wow, it’s full circle.

It’s full circle for a show that’s about structure, decision-making, and organizational culture — this is as good as it gets for me. So thank you so much for joining us.

That’s fantastic. I have a lot of respect for Anjali and have to say a big thank you to her for all she built here, so it’s wonderful to hear.

I’m very curious. Anjali was in the middle of executing a big pivot when we talked about Vimeo. People know Vimeo as what it started as — a consumer video service, kind of the art-ier competitor to YouTube. She pivoted it to a [software-as-a-service] business. She was very open: “This is now a SaaS company, we’re doing a lot of enterprise work, we’re a video hosting provider to a lot of people, we are a video creation tool for small businesses.” That was several years ago. I talked to her three years ago at Vimeo. You’re the new CEO. What do you think of Vimeo today?

I think in a lot of ways, I mean a little bit of what she started, and a lot of what we’re continuing here, is going back to the roots of what Vimeo is known for. People came to us because, at the time, 20 years ago, it was hard to upload video. I kind of call that… I’ll say the second or third epoch of video was right around that time period when video files were getting bigger. It was hard to stream video. They were no longer just a single file.

We were starting to get multiple formats. People came to us because of the quality of our transcoding and then, ultimately, our ability to serve a video directly to the most important person they wanted to reach with it. And a lot of what we’re doing right now, taking what Anjali started and extending it pretty significantly, is… It turns out that lots and lots of organizations and individuals around the world want to be able to keep a video private, or they want to serve it only to an intended audience.

They don’t want to have someone collect data, they don’t want to have an algorithm that tries to send them down a rabbit hole. Instead, they want to just literally be able to provide that video. And so we’re seeing that video makes up about 82 percent of the world’s internet today and we’re seeing that same concept arrive with private individuals, doctors, educators, and large organizations. And so yes, I would say that what we’re really becoming now is… Our goal is to become the largest private video distribution platform in the world. We think that there’s an increasing demand for video that isn’t public or algorithm-driven, but instead, it could be very personal and delivered at the right moment, at the right time, to the right individual.

Let me try to just understand that in context. There was the beginning of [shared video], right? You’re saying there are several epochs of [online] video. There’s the beginning, which was on iOS as a “send to YouTube” button at the operating system level because no one thought that public videos were a big deal. You just needed someplace to watch videos and YouTube was that. 

And then YouTube grew into the social media juggernaut it is now. TikTok exists, all that’s there, and the initial reaction was that it’s very hard to compete with that. “We’ll be for enterprises, business customers have video needs, we’ll service them.” Are you saying there’s some kind of middle ground, in the middle of the spectrum between big algorithmic consumer video platforms and enterprise, where just a lot of regular people want private video sharing?

We have hundreds of thousands of school districts that use us. Teachers want students to be able to upload video assignments. Especially in a world of ChatGPT and AI, teachers want to see that the student is actually doing the assignment. We’ve got tens of thousands of medical professionals who want to be able to send a video to a patient without having some algorithm capture their disease state or the question that they have. We’ve got lots and lots of marketing organizations that want to be able to serve their video over to an individual inside of a big company, or to their client, and they don’t want it to be public. And then what we’re also finding is, on the YouTube front, organizations that were hosting entire YouTube sections on their website… YouTube is redirecting its customers over to some other place.

I was meeting with a really large financial services company’s CEO and the chief economist. I took them to the website and I said, “Let me show you your chief economist speaking.” I clicked on the video, and before the video played, we had to watch some advertising on some Bitcoin or something. And on the right-hand side, it said, “Here’s how the top three credit rating organizations are trying to control the world. This is the video you should watch next.” And we’re just trying to watch the economist’s video. So organizations are getting tired of being redirected or having their data captured. 

A lot of organizations are starting to bypass what I call the big walled gardens, and in a lot of ways we’re back in the MSN and AOL era for websites. That’s where we are with video right now and people want to be able to go directly to their consumers. They want to be able to serve the message in an unfettered way, and then ultimately, they just want to be able to ensure the highest quality and the most personalized experience. We’re seeing tremendous demand for those kinds of scenarios. Again, it’s everything from the school teacher, the fitness instructor, and in some cases, the faith institution, all the way up to some of the biggest companies in the world — and I do mean that literally the biggest companies in the world are becoming customers.

I think the thing that I’m keying on there is… there’s the consumer-facing video platforms and then you have a bunch of enterprise video needs and you’re in some way describing a bunch of core enterprise customers, right? Large school districts, and companies big enough to have a chief economist. I think those are classic enterprise customers and then somewhere in the middle, those companies actually want to reach consumers without participating in algorithmic media, and that consumer surface is what Vimeo has gotten away from. Are you saying you’re pushing back towards it?

We get about 100 billion views of video a year on Vimeo, and only 20 percent of it is on Vimeo.com. We show up on e-commerce platforms. I was meeting with a physician, who is an independent practitioner, a fertility doctor, and he records a video before his patients come in and he sends that video over to his patients in the fertility clinic. So we have individual proprietors, people who want to be able to share a family video or something that they learned if they had been at a doctor’s visit or otherwise — so we have lots and lots of people that are using [the platform as] individuals. 

Now on the filmmaker side… What is most amazing to me right now is the sheer number of filmmakers that I have coming to me, and this has really started kicking up since I’ve become CEO. I would tell you it’s been a trend for about the past six months. I have so many filmmakers that are coming to me saying, “We don’t like the deal… We don’t like the deal that we have with the big studios. We don’t like the fact that if we go to YouTube, as an example, they take 45 cents of every advertising dollar. Or if we want to go onto one of the big platforms, they’ll take as much as 50 cents of every dollar. Is there a way for me to be able to sell tickets to my audience?”

In some cases, some of these filmmakers that come to us have audiences that are bigger than they might get on one of those platforms and so we’re finding people who want to go direct. Our streaming business is like that. When you post on some of these big platforms… I really encourage people to look at the terms of service of the major consumer-based video platforms. It says in their terms of service that they’re able to monetize your content any way they want to. They can reuse your content, they can serve the content, and quite frankly, they’re capturing 45 cents of every dollar in that process. So a lot of these organizations want to be able to bypass those kinds of economics. 

So is that you building a consumer interface? You’re saying it’s only a small percentage that’s coming to Vimeo.com. Where are they finding an audience? Is it all on their own websites? Is it on other people’s platforms? Where is that audience actually going?

It’s been interesting because there is this trend among streamers in particular where they’ll go to the large platforms, they’ll get some following, and then, when they want to serve premium content, that’s when they’ll come over and say, “We want to do SVOD. We want to be able to put a gate in front of this content.”

Or they may want to go live, they may want to go asynchronously, and so they’ll come to us and say, “Look, we want to be able to have a common library so that people can see past live events.” Plus, they want to be able to serve new content. In some cases, we’re getting organizations that want to do more interactive content — so like clickable videos, as an example. It’s a whole variety of creators that are saying, “Look, we might not want to adhere to the… let’s call it compliance requirements, the economic requirements, or the IP requirements of the big platforms. Give us an environment that we can control ourselves.”

And that environment lives on their websites?

Exactly. Sidemen is a great example of a group. They have one of the biggest followerships in the UK. They sold out Wembley Stadium in two hours, and they famously played a soccer match where, when a yellow card was shown, one of the Sidemen held up an Uno reverse e-card to the ref and it blew up the internet over in the UK. They serve on Vimeo. They have content that they put on YouTube or Instagram, but some of the more extended content they actually put on Vimeo, and then that library lives on us as well for a lot of things. Dropout is another great example of that. Try Guys is a great example of that. Zeus Networks, Martha Stewart — they want a little bit more control over the content, and they want more control over the monetization than what the traditional platforms give you.

I’m going to ask you a question, and you’re just going to have to bear with me on the mathematical nature of this question. Hopefully, it makes sense. I have a lot of CEOs of web hosting companies on the show because I’m very curious about the web in the age of platforms and where the audience comes from. The last great referrer of web traffic is, as everyone knows, Google Search; Google Search is undergoing some sort of gigantic AI-powered identity crisis. Who knows what’s going over there, but it’s changing. 

So I have the CEO of Squarespace or the CEO of Wix, or whatever other hosting providers, and I say, “Why does anyone build a website? Why would you do that instead of starting a TikTok channel now?” And they all say, “Well, it’s to do e-commerce, right?” Embedded in that is some sense that, “Okay, you’ve built a following on some platform, now you want to sell something to your audience. And you have to sell the spoons somewhere, so you’re going to start Spoons.com and that’s going to be hosted on Squarespace, and that’s the way it goes.”

You’re describing the content itself as being valuable, and being more valuable when it’s hosted on Vimeo. Maybe you’re selling it, maybe you’re doing subscriptions, whatever you’re doing. But that’s happening on a website because you can’t transact that way on YouTube or TikTok; you can’t make the content valuable there. You’re still stuck, though, with how you get the audience to come to the website, which is still just some fraction of a search audience or some fraction of conversions from one of the social video platforms. And that — this is the math — seems like the upper bound of your growth.

Because some number of people have to come to the website, some number of people have to choose to transact on a video from the Try Guys, and that can only grow insofar as all of those individual customers can get people to come to the websites. Do you see that the web is the limiter in that way?

No, not at all. I’ve worked for Google, Amazon, and Microsoft in my life. Most recently at Google, I worked in all manner of businesses and data problems, and I have this foundational philosophy that there’s actually more data behind firewalls and paywalls than there is in front. There’s more information behind those firewalls and paywalls than in front. And when I take a look at the enterprise market for video… In the past, you’d have a marketing video and it was hard [to make], or you might have a couple of product videos for e-commerce. Or you might have, for example, the CEO’s message. Video is coming to literally every single element of business. In the same way that it’s 82 percent of the internet, it’s coming in [to business]. And so whether or not it’s that e-newsletter, whether or not it’s for sales… You look at an organization like Seismic or Gong, which records sales calls and then helps to coach individuals.

If you watch a video, you’re 67 percent more likely to buy a product. And so we’ve got very large e-commerce customers where they now have millions of videos on us that are serving to every single product page on their website. So what I’m seeing is quite frankly that there is an explosion of video. It’s such an engaging medium. When you watch a video, you have 91 percent better retention than when you read something. A lot of the stuff that’s behind the firewall and behind the paywall is now getting video enabled, and it’s going across every single division inside of an organization, and it actually dwarfs what I’ll call a lot of the content.

We’re going to see video show up in so many different ways and in so many different businesses. People are starting to use video to be able to determine efficiencies inside quick-service restaurants. They’re starting to use video to be able to evaluate what’s on a shelf and whether or not there’s stock on the shelf. So when I think about this, I don’t think about it just in terms of one segment of our organization.

Actually, the beauty of Vimeo is that we’re able to live inside and outside the firewall, and YouTube does not live inside the firewall. We’re able to hook in and sign a [business associate agreement] to do HIPAA for a doctor. YouTube’s not going to do that. You think about all the interactions in the healthcare industry that actually can be video-enabled… And so our upper bound of growth is a larger opportunity than what YouTube is focused on right now.

YouTube, right now, is focused on video podcasts. They picked their shot and they’re going to take it. By the way, every time anyone says that stat about video retention, I feel like a dinosaur because I need to read. For as much video podcasting as I literally do, I am a reader. 

I’m an underliner. I have to underline things.

I learned to highlight with five colors in law school. It’s still where I’m at. Maybe the future is video and that’s why we do a video podcast, but I’m still a reader to my core. 

There are three parts of the video business. We’ve talked a lot about distribution, and where you might distribute that video. It sounds like that’s where you think there’s a lot of growth across organizations, even to consumer in some new way. Then there’s monetization, which I want to go to, but the first part is the hardest and I think undergoing the most change in terms of what we expect videos to be, and that’s obviously creation. You need to make a video, you need to distribute it, you need to monetize it. 

The creation of video right now is super interesting because you have not just the young generation, but everybody learning to speak the language of TikTok. TikTok, I think, is most importantly expressed to people as a video editor, not just a scrolling video tool, and it’s a very powerful video editor that you can also use in CapCut. 

Then there’s AI, which is making it a lot easier to make all kinds of videos in all kinds of ways, and then there’s the smaller AI components, like it’s going to write a script for you that you can read and maybe that’s good. Maybe that’s bad, but it’s all just in the mix. Everyone is expecting the tools to guide them. You can see in particular how TikTok tools, challenges, filters, and templates create a kind of culture that builds upon itself. Are you thinking of that component of it? Like “We need to build an enterprise TikTok editor for people just to bring them into the pipeline?”

I think there are a couple of dynamics that are happening right now. This is what gets me so excited about this… One of the biggest things that brought me here is that the barriers to video creation are dropping so dramatically, which leads to that mass proliferation of video, and then the difficulty in being able to manage at that scale. That’s just, foundationally, the market forces that are behind us. 

I always pause for a second and tell people, “I’ll be able to talk to you for a long time about artificial intelligence in a couple of seconds, but let me talk to you about what’s happening in video formats.” You’re one hundred percent right. Right now, we’re in that era when mobile video is becoming much easier. People are becoming more comfortable. Covid really helped us get comfortable with dogs barking in the background, babies being inserted into frames, and basically, I’ll just call it more casualness, in video.

Before it was highly scripted, if you recall. Highly scripted. And so culturally, people are getting much more comfortable shooting video. The proliferation of tools has been extraordinary. Now, we did make some acquisitions in the past. Magisto was an example of this because we really felt that we had to make it easier and easier to be able to create video. Well, I was thrilled with the proliferation of tools. 

We shot a video for our Reframe conference. I shouldn’t say shot a video — we actually created a video using 16 AI tools that didn’t exist 18 months ago. Over $15 billion of venture capital has gone into creating those tools and that’s just one small set. But you’re absolutely right… You’ve got all these tools that are being created and so we are thrilled about that, but simultaneously, the format of video is also proliferating, and so you’ve got traditional 1080p, and you’ve got 4K that’s starting to become more commonplace.

8K is arriving. When you do 8K, it’s roughly about six times the size of a 4K video. Well, 16K and 32K televisions are on the horizon right now. You’ve got widescreen formats, square formats for podcasting, rectangular formats, and then we just recently released Apple Vision Pro support to be able to stream on an Apple Vision Pro — which is 8K per eye, 36 frames per second. Simultaneously, while the tools are proliferating, the format types are also proliferating. So, your ability to both accept video from any format… In some cases, you accept something that’s an old, old format that you have to get improved, or it’s a super high quality, giant widescreen format that needs to be cut up for all the different areas that you’re going to serve that video.

What I would tell you is that it’s becoming more complex for the creator to choose which tool to use and when, and then how to ensure that the right format gets served at the right moment. So the two simultaneous things that are happening in our business are creative tools and formats, and they are exponentially growing right now. They’re exponentially growing the amount of video that a creator has to deal with.

What’s growing the fastest?

It’s really interesting. As you can imagine, I think the square format is popping up a lot. We are seeing a lot of demand for 4K — 4K in live formats and in serving formats. I think people are starting to demand that format more, which is obviously for us… We have to move more bits, we have to store more bits, we’ve got to transcode more bits, and so I would tell you that’s probably the thing that we’re seeing spike the most in terms of consumption. The traditional mobile stuff is going to be there and it’s going to be constant. I think it’s almost growing at the speed of so-called mobile phones, but I’m actually surprised about how many people are coming to us asking us for 4K.

Why do you think that is? I look at the broader industry and you see the big streamers are pushing everybody to 1080p with ads. That’s sort of the default for Max or Netflix, and then you have to pay extra for 4K. Are you seeing that demand in the same way they do, which is that people will pay extra for it? Or are you seeing that demand as this is now the understood industry norm?

It may be the place that we sit in the industry. As I mentioned at the top of our talk, people have always come to us for quality. So it may just be that because we’ve been known for quality — we’ve been known for the quality of our transcoding, our stream, the service that we do — that people are not finding that kind of support elsewhere and coming to us for it. I think that people are experimenting with those formats. 

I’ve been pleasantly surprised with the sheer number of people who have come to us since we launched Apple Vision Pro support. They are coming to us with really interesting film projects to do 8K per eye, stitching all the camera work together. You’ll see us talk a lot about this at SXSW, about what we think. I’m seeing some good excitement in those 8K formats as well, that’s all I’ll say, but it might just be the position that we sit in the industry.

It’s so interesting to see the rest of the industry basically insist that consumers don’t care about 4K. You and I are talking two days before the Super Bowl. For all of Fox’s talk about 4K, they’re still producing that in 1080p and then upscaling it. It’s fascinating to see the consumer side of the market land at one standard quality level while you’re saying the enterprise side of the market — the more discerning part of the market — is now not only assuming that 4K will exist, but that you will support 8K per eye, 36 frames per second on the Vision Pro.

And when 16K TVs are out, I think people will be buying them.

That’s a lot of cost, right? I mean you’re talking about moving an enormous amount of data. Are you just getting ahead of it because that’s what the customers expect? You have a background in cloud services and big data. Is that something where you say, “Okay, this is just scalable? We can solve this problem with the tools we have”? Or do you have to build new systems?

It’s a little bit like… I’ll use the corollary of what’s going on with token size inside of AI models,  where everybody knows that the very first version of ChatGPT was maybe, I don’t know, a hundred million tokens and then it popped over to a billion tokens and will be up to a trillion tokens. So the cost to be able to deliver all of that will come down over time.

The cost of storage comes down, the cost of bandwidth comes down, and then even the innovations that are in the televisions, those costs will be coming down. When you think about the quality of the TVs we have now versus even just 10 years ago, it’s so discernibly different and I think that as those costs come down, somebody has to serve that content and our infrastructure… We have the infrastructure to be able to do it, and so for some of us, we have to stay slightly ahead so that we are that place that’s always viewed as quality. So yeah, I guess it has to be part of our DNA that we’re always going to support those cutting-edge formats.

We’ve opened the door to AI and I definitely want to talk about that and in particular, using AI as a creative tool, and how your customers might be thinking about that and their relationship to it. But first, I just want to get to the Decoder questions. We’ve talked a lot about how you’re thinking about growing Vimeo’s business. What would you say is the most tangibly different thing you’re doing compared to your predecessor Anjali?

There are a couple of things. I think Anjali was supporting a lot of different businesses. I’ll say as you went through covid and as you went through when video was hard… I would say it wasn’t as culturally ingrained as it is right now. She had to make a lot of decisions around the business. When I got here, a lot of people asked me this question: “Well, do we serve the consumer or do we serve the enterprise? Do we serve the filmmaker or do we serve the physician?” When I really spent time with who our customer was, I really had to get deep, deep, deep down inside and go, who really uses us? Show me the type of company, show me the names of the companies, and the industries that we’re in. 

It was very clear to me that we serve the creator who is professional, somebody who is using video for their business professionally. It’s not a hobby. It’s actually to get a job done. Being able to build consensus around that creative pro, not trying to go and create a YouTube competitor or a low-cost tool for the hobbyist, but truly that we serve that professional creator, and then being able to describe very succinctly the fact that all of that comes together. Sometimes a professional creator wants to serve a single video. Sometimes they need to be able to manage thousands of videos. Sometimes they want to be able to go live, sometimes they need high quality, and sometimes they need help to divide it, to cut it up into rectangular or square formats.

I think one of the core things that I did when I got here was really obsess about the customer. Every meeting we start, we start by telling a customer’s story. I learned a lot of this, I would tell you, between Google and Amazon. Amazon is, I would say, renowned for this, but we really tell the stories of who our creators are and then really build the ability to move quickly and listen to those enterprise requirements. When you start a company, and… I started at Microsoft in the very early days of Microsoft. It wasn’t an enterprise company then, believe it or not. Amazon, when I got there, didn’t have a lot of financial services companies using the cloud. I went through that transition. At Google, it wasn’t really known as an enterprise-grade cloud when I first got there. 

So I’ve been through this transition, and when you start a product that is going to serve the enterprise, what I always tell people is that it’s easier to get more complex, but it’s really hard to be complex and get easier. And so we were starting from these roots as a consumer and filmmaker’s product, and a lot of what I’ve focused on is really listening quickly to not just our individual customers, but the entire spectrum, and being able to say yes to those requirements.

I also come with a tremendous amount of experience as you can imagine. You could look at my background — it’s years of enterprise experience, and so, I know what’s required to be able to do HIPAA. I understand how to do General Data Protection Regulation (GDPR). I understand compliance requirements. When we go into things like artificial intelligence, or we go into storage and distribution, I have a lot of instincts around that. Spending the time to explain where we’re going as a company, to be able to serve both inside and outside the firewall, and the requirements that we’re going to have architecturally, and just explaining that to the organization and weaving together… Hey, that filmmaker wants their content protected the same way that the largest retailer in the world or the CEO wants their content protected, then weaving those two messages together and building a product roadmap that’s going to serve both.

I feel like that’s probably the biggest thing that I’ve done since getting here: unifying the vision into a single cohesive vision. And then the second thing is really making sure that all of us are telling the stories. “Hey, did you know that this fertility clinic is using us in this way? Did you know that this school teacher is using us in this way? Did you know that this faith organization uses us in this way? Do you know that the largest retailer uses us this way?” Getting people to tell the stories internally, I think, was important. 

The last thing I’ll tell you is that there was a little bit of a shine that came off the company after the IPO. So part of it was giving the company confidence and saying, “Let me tell you who’s actually using us.” I don’t think that a lot of people really realized, internally even, the broad array of customers we have. I put this in our shareholder letter: we have eight out of the top eight big box retailers. We have eight out of the top eight media companies that are using us internally. We have huge numbers of insurance and financial services companies, companies that could use anybody, and I said they’re using us for a reason. So getting some confidence back into the company that we actually are an incredibly valuable tool in a world of increasing complexity — I think that gives the company a lot more confidence to be even bolder as we go forward.

You’re talking a lot about culture change, and renewed focus. The thesis of this show is that that comes out of structure. I’m just looking at our notes here from my producers. In the past few months, you’ve named a new chief marketing officer, a new chief information security officer, a new people officer, and a new chief revenue officer. You’re obviously making some changes in the organization. How was Vimeo structured and how are you restructuring?

I think that, in some cases, our technology organizations were incredibly siloed. How we did trust and safety, as an example, in some cases how we did data, and who owned which parts of engineering. They were actually broken up in a lot of ways. I think oftentimes our marketing organization had one mandate while the sales organization might’ve had another mandate. A lot of what I would tell you was really important to me… One of my first hires was a chief technology and product officer, Bob Petrocelli. I brought in an individual who unifies product and engineering and is able to have single-threaded leadership over top of parts of the business that are that important. They have to be working together well.

Some of the things that we’re doing in trust and safety, we actually think we can turn around and expose that to our enterprise customers. It turns out they probably don’t want things on their platform the same way that we don’t want certain things on our platform. So getting any internal function to become an external function and getting that kind of view that we all serve the customer in some way, shape, or form is super important inside of our product and technology organizations. One of the most important things I had to do was bring in our Chief Marketing Officer, Charlie Ungashick, who has really extensive experience marketing to individuals and enterprise.

We are going out and talking about how we can protect videos, serve videos, and provide AI to that entire audience. So we had to get an individual who was able to oversee both parts of the business. I’ve also done some recent restructuring where we put an individual completely in charge of what we call our self-service business. To be able to move even faster in that part of the business and obsess on everything from the top of the funnel all the way through when a subscriber comes in, right down to “what are we actually using in the product”… And this is a big element as well.

I will tell you one of the other changes I didn’t mention is that we’re obsessing on use right down to the feature level. I look at those reports on a weekly basis, like how many people are using our edit feature, and how many people are using our live feature. How many people applied permissions to a video? Did that increase week over week? What did we do? And so that self-service leader is really now a single-threaded leader, and we also had a single-threaded leader around our streaming business. We really started seeing some of the results from that inside the company. Giving single-threaded leadership, I will tell you it is talked about a lot, but oh my God, it’s beautiful to actually be able to call up somebody that owns the number, that owns the resources, that worries about it as much as you do every single day.

Single-threaded leadership is an Amazon concept. You’ve worked at all the companies, so I can pick out where the concepts come from. It’s pretty fun for me. That’s a classic Amazon concept.

It is.

You need to have a pretty small team that owns the thing and there’s a leader who is responsible for the whole stack. That is how you get silos. You can look at Amazon’s product and say, “Oh, there’s a bunch of single-threaded leaders here.” This is not necessarily cohesive. Everything is running as fast as it can, but the holistic vision of the Amazon product suffers for it at scale, right? You started out talking about having too many silos and we’re talking about single-threaded leaders. How are you managing that tension?

I was asked one time to give a talk on what it was like to work at Microsoft, Amazon, and Google. I got an opportunity to work directly with Gates, Ballmer, Jassy, and then certainly with Thomas Kurian and Sundar Pichai. One of the most important lessons I learned very early on at Microsoft was about really establishing a strong single-sentence vision for the entire company about what we’re trying to do in the future. And you wake up every single morning and I mean… I was there in the early days when the vision was a PC on every desktop in every home. That was extraordinary at the time. Now we have a PC in every pocket, but we all knew that what we were trying to do was unlock information for the world by putting this powerful computing device in someone’s hands.

And so regardless of the divisions or otherwise, it all feathered into a common vision. It was a lot of what I had to do when I got here. I owed the company a strong agreement among everybody in the company about what we’re trying to build. Are we trying to build the best livestream product? Are we trying to build the best marketing platform? Are we trying to build a product for filmmakers? We settled on this common vision and then we’re able to say, “Okay, this is the individual that owns this part of the business.” There’s a huge portion of our individual business where people swipe a credit card and start using us or register for free. A huge number of those customers actually end up as enterprise customers. I called up one of the top retailers and started talking to him about Vimeo and he said, “Well, first of all,” he goes, “You don’t have to tell me who you are.”

He goes, “My son is on Vimeo every weekend. He’s an independent filmmaker.” He goes, “So I know who you are, but why are you calling me?” And I said, “Well, we have 2,600 accounts, self-service accounts that are on [our platform]. We should do an enterprise agreement.” So being able to explain to the organization how the two sides work together and being able to make decisions in a room about where we’re applying more features, maybe in one part of the business or another… And how those features actually feather, how we might start them for an individual, but they have to grow to work for an entire enterprise — is all really, really important.

Starting with a strong vision that everybody buys into, that they understand their piece of it, is really critical. And then for each one of those leaders, I expect them to have a strong vision for how they’re going to contribute to the overall vision. That’s another important thing. You can’t let their vision exist in the absence of the rest of the company’s vision, so you have to actively stitch those visions together.

You brought up decisions. That’s the other classic Decoder question. I will warn you: this is a honeypot for former Amazon executives. When you ask Amazon executives how they make decisions, everybody sings chapter and verse, but you’ve worked at a bunch of places. You are now the CEO of this company. How do you make decisions? What’s your framework?

The other company I didn’t talk about that I learned a lot from was Google. One of the things that I would tell you Google gave me was that they managed something like 10 out of the top 11 billion-user products in the world and were really thinking big. Actually giving an organization incredibly lofty goals, and sometimes you only reach 80–90 percent of them. One of the things that I do first and foremost is — that I really am a believer in — that you’ve got to set these very high goals. You need to have this vision and you need to be willing to put yourself out there to set extremely high goals. And then back into that from a decision-making process, I would tell you that we’ve made a number of decisions around which products we focus on, which areas we deprecate.

I come back to the customer. One of the things I really try to hold people accountable to, and I think it’s really important, I learned a lot of this both at Google and at Amazon, but actually explaining the customer problem that we’re trying to solve. And there are all kinds of studies that you can do. There are user studies, data studies, and so forth, but truly being able to assess what that workflow looks like. What are we trying to solve? What is the most challenging thing for the customer? What is actually frustrating the customer most? And really having a strong sense for your customer and the customer anecdotes as well as what we call… At Google, we called it customer empathy, actually putting yourself in the shoes of the customer. One of the things that we ask everyone inside of Vimeo to do is be a user of the product.

So the things that are frustrating us, we are elevating those into our decision-making process. We both bring the voice of the customer in and we bring our own voice in, and then we also are saying, “Okay, well what’s going to help us grow? What’s going to grow the next million users for us or what’s going to grow us to 10X?” So I can’t just tell you it’s one thing. It’s a little bit of a framework of the customer, making sure we’re tethered to big ideas and really making sure that we’re being innovative enough in how we push the team.

Well, I applaud you for being the only former Amazon executive to not talk about one-way and two-way doors when I ask that. You’ve done it, you’ve achieved escape velocity. I mean, I appreciate the one-way and two-way doors. I’m just saying.

No, I get it. I don’t know if I’d buy that. The thing I do love, I mean, at Google, I think they’ve proven that there’s not a lot of one-way doors.

Fair enough. Google’s interesting. We’ve talked a lot about YouTube during this conversation. Vimeo has come out against YouTube. You have entire blog posts about how your search capabilities are better than YouTube’s search capabilities or how Vimeo is a better platform. You’ve talked, even in this episode, about wanting things to be private, not being part of the algorithmic ecosystem or the advertising ecosystem on YouTube. 

Google is big. They think so big that sometimes they let opportunities just slide away because they think at such a massive scale. How are you thinking about competing with YouTube at that scale when they seem to own so much of the attention, space, and video?

I don’t think that a lot of product companies love the fact that you have to go to YouTube to get some customer support for one of their products. Meanwhile, one of their competitors could be rolling right next to you. And I think that when I look at YouTube… I was at Google and spent a lot of time with customers, and I really foundationally believe… I love YouTube, I’ll watch YouTube as much as the next person. I think that what they’re doing for, I’ll call it the attention economy, for what they’re doing around content, for democratizing access to more and more content — I think it’s absolutely wonderful. And quite frankly, as I said, a lot of our customers are great YouTube customers as well. People will house their videos on Vimeo and post on YouTube in a lot of ways. But I really do think that there is… In the same way that you don’t do a lot of your business on Facebook or you don’t do it on LinkedIn, you kind of do it behind closed walls.

I think that a lot of the economy runs behind firewalls and paywalls. So I think that we can go directly at that. The other thing I’m going to say is… Think about what happened in content and why some of these platforms rose. Think about, and again, I’m old enough to remember MSN and AOL — the reason why we went there was because news had to be consolidated. It was hard to create websites. It was difficult to find information, it had to be curated. Well, Netflix and YouTube were born in an era where it was really hard to categorize content to say, “Hey, this video is about a cat,” or “This video is about how to plug an HDMI cord into the back of your LG TV.” And so there was categorization that had to take place. There was standardization of the data, the metadata, and then recommendation engines. I don’t know if you remember, but Netflix famously paid a million dollars to be able to write their recommendation engine.

They went out and said, “Whoever builds the best recommendation engine will win a million dollars.” Well, with an AI model, I can categorize content in seconds now. With a recommendation engine, I can buy recommendation engines off the shelf. And quite frankly, the metadata that we can produce now out of a video is extraordinarily more detailed than a human being can even write. I can tell you precisely when the purse left the beach, who was carrying the purse, and what brand of shoes the individual was wearing. All stuff that may be missed when a human being has to enter all that metadata. And so what I see is that there is going to be a democratization of content classification, content recommendation, and context discoverability. I think that there are searches… There is a single search place that you go to be able to get your content, videos, and maps, and pick your favorite things.

But I think that there are billions of dollars going into discovering other ways to find and interact with information. And so I think that Vimeo can serve that kind of information outside of a traditional Google search in a lot of ways, whether or not you’re on an intranet inside of a company, whether or not you’re inside an AI model and you don’t want to leave the ecosystem of the AI model. We can provide an answer to a question as well. I think YouTube is fantastic, I love it, but I think that for the discoverability, accessibility, indexing, and recommendations, there’s a whole new era coming and we intend to be a part of it.

There’s another piece of that dynamic that I’m wondering if you are considering, which is that Google is a huge company that is under an enormous amount of pressure right now. Maybe it’s so much pressure that it will be hard for the company to execute. There’s an antitrust trial in this country that resulted in the government suggesting Google break itself up and sell Chrome. There’s Donald Trump in the mix, who may or may not make some sort of deal, and who’s done a tariffs regime with China that resulted in a Chinese antitrust investigation of Google — which is amazing because Google doesn’t really operate in that country.

Europe exists, much to the chagrin of many of our tech companies. There’s just a lot going on. There’s a lot of pressure on Google to not flex that dominance. And then there’s competitive pressure from the AI products, like ChatGPT, SearchGPT, and Bing — to whatever extent that Microsoft believes that Bing is a real competitor to Google. Does that create an opening for you? Do you see that as a real opening or is that just well, if those doors open, you’ll be ready?

I have a lot of friends at Google. I really enjoyed my time there. I don’t wish them ill in any way, and I really hope that they sail through this era of challenge for them in a really great way. Clearly, I came here from Google because I saw the opportunity. I really did see the opportunity that… We’re about to go through a seismic shift in the accessibility of information with new ways to go and access it. Whether or not you’re using ChatGPT, Anthropic, or Mistral, there are so many different ways to be able to discover information. The notion of the common crawl on the web, the ability to be able to crawl the whole web, index it, and then to be able to ingest it into these models shows that it’s democratizing access to that information discovery. 

Video is a very important element of that information, and I think you’d agree. You can’t just imagine that only one platform is going to serve all the video answers in the world, and so that’s where I see such a super opportunity for us at Vimeo.

Let’s talk about AI. I want to start by asking a mathematical question. One of my theses for the year is that the creator economy is under an enormous amount of pressure. Not just from AI, but also from what you’re describing: this huge shift to video. You can see that there’s just an exponential increase in the supply of video on all these platforms. More and more kids are making videos. More and more people are choosing to communicate in video-first ways. More enterprises are doing it. And then you have AI, which is just making it easier and easier to produce a massive amount of video. So the platforms are getting flooded with supply. There’s not as much ad revenue as there is an increase in video supply, so you do the division and you’re like, well, the ad rates are going to go down, and then attention is sort of fixed. There are only so many people with so many hours in a day, and presumably, people do have to eat and do productive work. 

So attention is kind of fixed, right? It’s just like a fixed number that you can capture. That all just seems like it’s a bubble that’s going to pop. You flood an ecosystem that has been pretty stable for a few years with an enormous amount of supply, the ad rates go down, and attention stays fixed. Something happens in there, and it seems like, to me, AI is the most important component of that because it’s the thing that can change the economics of the supply the fastest. 

You just say, “Make 50 videos about my product,” and now we have 50 videos about the product on whatever platform. Is that an opportunity for you? That this whole creator economy, or the video creator economy as we know it, seems like it’s going to have a pretty basic shift in its economics?

That’s a huge question. 

To me, it is the question of 2025 — if you asked me, “What is The Verge doing in 2025?” There’s Elon Musk and DOGE, and then there’s what happens to the creator economy. 

I think with the creator economy, we are reaching saturation. I mean, think of your own experience. I don’t know when we’re going to get to the post-mobile phone era, but this is not a way we’re going to live for the rest of eternity as human beings. And so on the creator’s side, yes, I think that there is a saturation point, but I also think that people are looking for a little bit of a higher quality experience. 

I think people are getting tired of the doom scrolling. I think the mere fact that we name it, the fact that we are now acknowledging that we get sent down rabbit holes… I do think people will like storytelling. And I do think there’s going to be really different opportunities. I get asked all the time, “When will AI be able to take my favorite book and turn it into a movie?” And now, think about that. Think about how wonderful that would be. Think about being able to take your child’s favorite book and turn it into a video for them that has an extended storyline. I think storytelling is as old as humanity and it’s going to continue forward and so I do —

Can I just stop you there for one second? I know the Decoder audience fairly well. A lot of people just started screaming at you in their car because they think that’s a bad outcome.

Do you think that’s a bad outcome?

I have a young child. The idea that we’re going to read The Wild Robot and then some AI tool is going to make the movie The Wild Robot instead of the beautiful actual movie made by people, The Wild Robot — I would argue that that’s a bad outcome.

I think it is for a class. Here’s what bugs me the most right now. Last year the big six studios only put out 88 movies. 88.

Right, because the economics of video have collapsed on them. They don’t have a distribution monopoly.

Exactly, and I think there are so many stories to be told. If the creator of The Wild Robot gets paid for having a movie and is able to be monetized in some shape or form, in a really beautiful way, I actually think we’re supporting storytellers in a foundational way. I think that that’s a decade away. I would say maybe five to seven years away. And so first and foremost, I do think that AI is going to help people create more stories. I think they are. I think they’re going to be able to illustrate more stories, let’s put it that way. 

I talk to a lot of creative types who tell me, “Look, AI is fairly disjointed right now. It’s indeterministic. I don’t know what I’m going to get out of it.” Human curation of AI creation is going to be a necessity, in the same way that shooting on a green screen and then being able to put in a background for a movie is indeterministic until the human being decides what’s on that green screen. What I’m saying is that I do think that longer-form stories are going to be more compelling. I think people are going to want to stay inside of a story a little bit longer. That doesn’t mean that the creator is going away. It doesn’t mean human curation is going away. I just think that we’re going to be able to tell more beautiful stories in more ways. So I’ll park there because we are pro-creator, we are pro-filmmaker, and we serve a lot of them. They’re not going away. We’re going to uplift them and make them faster.

I think this is a real tension, and I see it expressed all the time. I’ve heard it from your peers on the show when they tell me about the tools that people use in Photoshop, right? Generative fill in Photoshop, according to Adobe CEO Shantanu Narayen, has something like a hundred percent usage rate. But then, everyone yells about generative fill existing, and there’s a real mismatch between consumer expectations and how people feel about AI, and the fact that creatives are actually using the tools at high rates. I get all that. I also think there’s a mismatch between you saying you’re for filmmakers and how marketers want to use AI.

I think we’re careening towards a world of basically custom creative being shown to individual users. [A world] where some brand uploads their assets to a video platform and ads get assembled for you by AI. For you, a specific user, [to get] ads targeted to your interests. We’re headed there and the big platforms are already talking about it. But those really commercial uses of AI — “we’re going to make a whole bunch of ads” — and the most creative filmmaking that exists don’t seem like they’re happening at the same rate or with the same level of acceptance, or even like they should happen with the same tools, and you have every piece of the puzzle in front of you. Where do you see the biggest growth and where do you see the biggest pushback?

One of our creators, Jake Oleson, recently shot Currents for Apple Vision Pro. And when you shoot for an Apple Vision Pro, you have to maintain perfect stillness in the camera. You’ll shoot four to six cameras and by hand, you have to stitch all these things together. If you get an opportunity to watch Currents, it’s absolutely stunning. You kind of look at it and go, “Oh my God, I’ve seen the future of filmmaking. I truly have.” And I do think that this is where I say that I think we’re going to get into a post-mobile phone era for watching content and creativity. And I think that we’ll experience film in new ways. We’ll experience stories in new ways. And I think that I’m seeing the best creators blend together the content and AI usage with traditional techniques to make something incredible.

Most filmmakers that I talk to start with something they’ve shot and then enhance it with AI. One of the things that is most interesting to me is that, in the marketing world, the thing that’s taking off the most right now is not avatars (and avatars, I like to call them… like nobody wants to talk to a robot). It’s actually people sitting there going, “Hey, I just bought this piece of furniture. This looks really cool. Let me show you what it looks like inside of my house.” Authenticity in a world of robots… I’m already seeing it. We offered to a number of our customers, “Hey, would you like us to do some avatars?” And then we also offered, “Hey, we have this super down-and-dirty create tool where you can record, and we can put a teleprompter up in front of you so you can do your own script. And we can either make the avatar look perfect, or you can be sitting in your living room and do this quick thing about your product or your service.” And inevitably, all of them go to the real human being doing this.

I’m just going to tell you point blank: I’m not seeing the robots take off. I’m not seeing it. And we’ve tried to serve both. I think that humans have always risen above. They’ve always brought authenticity. They’ve always brought through how you kind of know when you’re getting something and when you’re not. Even in the chatbot world, how many people get frustrated when they’re talking to a chatbot online? They quickly want to talk to a human being. I don’t know how to say it to you, but we sense that there are no ghosts in the machine. So I don’t know how to say it to you any other way. I studied artificial intelligence for a long time and I’m very confident in the beauty of the human soul in the context of creativity.

I feel like I’m more cynical than you, but I spend more time on social platforms, it feels like. And the problem, generally, is you can sense it. Some people can feel it, and a lot of people cannot, right? Or they just let it wash over them, and then you end up in sort of interminable fights about metadata or labeling. Google just rolled out SynthID for images that you edit in Google Photos.

None of that stuff seems to have landed. It has certainly not landed in a universal way. Vimeo has some labeling features. You have some ideas about how you might show people AI-generated content or expose that metadata to people. Do you think that’s working? Is that something you’re going to continue? Is that something that you think needs to be expanded?

When I was at Google, there were about 42 different regulatory bodies working on AI legislation. Last we checked, there are over one thousand worldwide. And I’m raising that to say that in certain states inside the United States, like Illinois or Texas, when you do translations you can’t actually modify people’s lips and put words in their mouths; that’s just one of the regulations. Over in Europe, you actually do have to identify that something’s been AI-enhanced or modified. So I do think that we as humanity are wrestling with when we want to know that something’s not real.

The mere fact that that’s coming from all over the world, that you’re seeing the desire to know when something’s not real, I can’t say whether that’s good or bad, but I can tell you that it’s a human desire. And so, for getting something done like changing the credit card on my telephone bill, I’ll deal with a bot. But if I’m really having a problem, or my elderly father-in-law is having a problem, I actually do want a human being to pick up the phone and just talk him through it. I guess what I’d say to you is… I think in filmmaking as well, we’ve always used tools to tell our stories. So many tools have been invented to help us tell stories. AI is just another one. I would certainly like to know when characters aren’t real.

I guess this is the hardest question. This is an existentially philosophical question, but: Where do you draw the line? Where do you personally think the line should be for when you have to put the label on something?

I’ll give you two examples. I think when I stabilize a video with AI, that does not require it. I think if I’m talking to a deepfake avatar, or a photorealistic creator that is just lying to me, that situation probably requires it, right? If I’ve replaced products in this movie with other products, that may require it. Where do you think the line is?

I think that I haven’t been asked this question before, but as I reflect on it, I would like to know that a particular character, animal, or something that’s in the film is actually not real — that it’s completely made up. Now you can tell that in animation, but in a real film, if I know that a certain character or a certain scene is actually completely fabricated with a single individual in it, I probably would like to know that. Or when there’s dialogue involved where something’s talking back to me that’s not for real, I probably want to know about it. When I look at some of the Marvel movies, clearly you start crossing the line. Well, does the fox in Guardians of the Galaxy do it? We all know the fox is-

The latest Marvel movie they just announced is Fantastic Four, and they made the poster with AI, and there’s been fan backlash to it. So I’m just wondering if you see the norms moving faster than the technology, or slower? You are shipping these products and you have such a direct line to creators, so it feels like you’re caught right in the middle of the question of where we put the labels.

The thing that really does bother me right now is the influencers that don’t even exist. If I’m looking at something that’s clearly animated, I’m okay with it. I would love to know that somebody’s voice was actually used for real by that individual. So it’s going to be complex and I wonder out loud, will we stop caring, and at what point? Will we become comfortable that basically the whole thing is simply animated because that’s really what we’re talking about? We’re just creating animation that’s higher and higher fidelity in a lot of ways. 

But I think it should probably be noted that, at some point, the human doesn’t exist. I feel like that’s probably where I’d cross the line, or that dog doesn’t actually exist, especially if the dog’s a main character. So you might end up doing it based on the classification of the importance of the character and whether or not there’s actual, true existence there. And how much modification was done to the individual based on the class of character in the story?

I asked that question three different ways and pushed on it again because it feels like the pressure on the creator economy and the social platforms is just going to go up. With Meta, Mark Zuckerberg is out there openly saying, “We will have AI content in people’s social feeds on Facebook and Instagram.” How they choose to label it, whether or not the mean dogs have labels — I don’t know. I don’t know what Mark Zuckerberg is going to do. No one can see into his soul, but it’s pretty obvious that if he could get a bunch more cute cat videos on Facebook using AI, he’s going to do it. It’s obvious why he would do that. 

Does that create an opportunity for you to say Vimeo is for real content or real people? Is that something you would lean into? Because you do have the AI tools. When you open the website today, it says you’re an AI-powered video platform. There’s a lot of conflation between “we can do better classification and better recommendations” and “we can do better marketing videos, and we’re going to steal everyone’s data and make cat videos for Facebook attention spans.”

I think as I sit here and think about protecting the Vimeo brand, I aspire to actually be the place that people trust. When I first got here, we were approached about letting our content be crawled, as many companies were… We talked to the creators and the creators said, “Listen, don’t replace us. Just uplift us and protect us. Make sure that Vimeo is always a place where we’re protected, and that’s why we come to you and that’s why we want to continue to come to you.”

So we made a decision not to allow that crawling, and then shortly thereafter we had to say, “Anytime you use AI on Vimeo, we’re going to actually guarantee that none of the AI models will improve based on your usage of it.” Unless you say, “Improve based on my usage, like understanding my storyline, my filters, my dialogue, or my style of dialogue,” we’ll do that for you as an individual creator. We’ll create your own private AI.

We were actually approached as well to say, “Listen, we need you.” A number of the creators said, “We want you to help us identify when content has been generated by AI.” We do a lot of business over in the EU, and so we said, “Yes, we’re going to do that as well.” The thing that I describe about AI… Back in the era of the production line, humans stopped being able to keep up in a lot of ways, and so we started creating robots. We started creating machines that turned screws and so forth. And next year at this time, more information will have been created in that year than in the entire history of mankind up until this point. Humans are struggling to keep up with a production line of information. So we’ve invented these machines.

I also think AI is going to help us identify these things. It will help us filter and help us be able to say, “This is AI generated,” or “This content cannot be verified from the source.” We have to do some work around what we call KYC or know your creator. In some jurisdictions that we operate in, we have to actually say, “Yes, this is a human being that created this, this is the company that created this.” I actually think we have an opportunity to serve as that — like “Yes, this was created by a real human,” which actually stands out in a world of robots. So I think there are a lot of opportunities to protect the viewer and the creator, as well as serve them in helping produce stories faster.

I want to ask one more foundational question about AI. There’s a lot of talk about cost right now in the AI world. There’s DeepSeek, which might have brought down the cost of training. There’s an argument about whether the cost of inference will drop. At the same time, Sam Altman is saying he’s going to build $500 billion worth of data centers all around the world with SoftBank. 

You use these tools, right? You’re deploying these tools against some large data sets in video, which is where the costs tend to go up the fastest. Where do you think that is? Is that working out for you? Are you making more money on the use of AI than you’re spending on it right now?

The short answer is yes. I would tell you that I expect the cost of inference to drop dramatically. We were experimenting with some of the exact same things that DeepSeek claimed to do, using really low-cost chips for inference. And I do expect the cost of inference to go through the floor. I used to joke, “If I need to order a Frosty and a double burger at Wendy’s, I don’t need to wade through all of Taylor Swift’s boyfriends and songs to get through that.”

So you distill models down to serve exactly what’s needed at that moment, whatever the language or the function is. I think distillation will help us with this. The most recent Blackwell chips from Nvidia were about four times more efficient. Lighter-weight models are super important to us. So we’re going to solve a lot of the inference problems and the cost associated with inference. I’m seeing it drop dramatically for what we do, and so it’ll be very manageable over time. I think the real big cost for a lot of these companies is the training of some of this stuff, and that is going to come in line as well. We’ll get to a point of diminishing returns: do you really need to go to 10 trillion-parameter models, or do you need something lighter weight to do chemistry, biology, security, or colorization, as an example? I think, right now, we’re in the era of big models, and I don’t think that’ll last.

Do you think we need to spend $500 billion on data centers all around the world?

We need to lay down a new infrastructure of silicon. The silicon that’s deployed around the world right now is highly optimized for general compute, and this is a new mathematical model that has to be supported in silicon. Without specialized chips that run this math, we’d actually consume more power. So all that’s happening right now is that, yes, we need to run our current compute, and now we have a new algorithm we have to run. We’re going to need optimized silicon to run that extra algorithm. So the short answer is that we actually do have to duplicate the silicon around the world.

Is that something you can drive at Vimeo? We talked about needing to serve 8K, and how the price of storage and compute for that falls on a pretty predictable basis. But inventing new silicon to support AI workloads is a whole industry effort, right? And the pressure is all on maybe a handful of companies and one foundry to pull that off. How do you make those bets?

I would tell you that we’re doing a lot in evaluating quality across a whole spectrum. One of the things that, as I said to you at the very start, we’re obsessing about is how our creators really want to use AI. They don’t want to be replaced and all this. And so what we’re doing is we’re picking each one of these areas and actually establishing quality frameworks inside of Vimeo — I almost said Google — inside of Vimeo where we’re saying, “Hey, this is high-quality translation,” or, “This is what we need to do to be able to support understanding what changed frame to frame.” And so a lot of what we’re doing is that we’re saying, “Okay, what’s the best model for the job that our creators are going to need to get done?” And then under the surface, we’re stitching all that together so the creator doesn’t even know there might be multiple models that are supporting them.

We’ve established quality and then also handoffs for that creator because, as I said to you, we’re creating AI that’s going to be unique to that creator. And so we’re going to remember, whether it’s over here in the translation world, over in asking a question, indexing, or otherwise. We stitch it all together for the creator so they don’t even know that we might be using multiple AI models, but it’s about establishing quality bars for each one of those things. And then also economics — making sure we get the best economics and best performance, like queries per second, from the model providers for that area so they can serve our massive minutes of video and number of creators. We’re managing performance, cost, and quality on behalf of the creator across multiple models.

Well, Philip, as you can probably tell, I could talk to you forever, but we are out of time. What’s next for Vimeo? Give people a preview of what’s coming up next and we’ll let you get out of here.

I’m probably most excited right now, as I mentioned to you, about the massive formats that are coming at the creator. I’m super excited about what can be done with immersive formats. I’m also starting to see a lot of people who want to go back into these sphere-like experiences. I do think that that’s going to be exciting and you’ll see us continue to push the edge there. You’re going to see us invest more in the filmmaking community. 

Literally on Monday, I’m headed over to the Berlin Film Festival, after my Philadelphia Eagles hopefully do well in the Super Bowl. So you’re going to see us do even more around Staff Picks and celebrating filmmakers in every geography we serve around the world. I mean, it’s been fairly US-centric, and you’re going to see us get a lot more global in supporting filmmakers.

Also, I would tell you, as we look at our enterprise customers, we think we can support them across their customer journey. There’s a mass proliferation of video across every part of the organization in the service of customers. We’re going to do really well at just-in-time video: serving just the right video to just the right person at just the right moment of the customer interaction. So you’ll see us really come out with some exciting things there, between the formats and the AI, things we can do to transform storytelling.

Amazing. Well, we’ll have to have you back when we do just-in-time immersive video, AKA to both eyes at the same time. Phil, thanks.

That’d be amazing. Thank you.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

Monster Hunter Wilds thrives on spectacle

In my first few hours with Monster Hunter Wilds, I fought against a gigantic spider in waters that were as red as blood and dodged the onslaught of a twisting dragon amid a desert storm, complete with powerful strikes of lightning terrorizing the ground around me. I went up against sandworms and giant apes and a poisonous bug that could inflate itself while causing a cascade of explosions in the ground below. Each battle felt monumental — and the scale only increased the more I played.

This focus on spectacle is what sets Wilds apart. The game is the follow-up to Monster Hunter World, which shook up the series with a larger, more open world and catapulted it from Japanese phenomenon to global blockbuster. World went on to become Capcom’s bestselling game ever, no small feat for the company behind Resident Evil and Street Fighter. Instead of changing things up once again, Wilds ups the ante and makes its monster battles feel bigger than ever.

The MonHun games are all built around a specific gameplay cycle. You prepare for a hunt by getting your gear in order, you head out and study your target, and then you spend a long time slowly whittling down the health of a giant monster …

Read the full story at The Verge.

Sigma’s BF is a minimalist full-frame camera with no memory card slot

The Sigma BF camera pictured in two color options with a lens attached.
The Sigma BF features a minimalist design and controls simplified to four buttons and a dial. | Image: Sigma

Sigma has announced a new compact 24.6-megapixel full-frame camera called the BF with a clean, minimalist design featuring just four button controls and a dial. The BF’s body is milled from a single block of aluminum, its user interface has been redesigned with a “completely new information structure compared to conventional digital cameras,” according to a release from the company, and it trades a memory card slot for a built-in SSD.

The Sigma BF will be available in black or silver finishes for $1,999 and is expected to ship sometime in April 2025. That pricing doesn’t include a lens. The BF is compatible with the L-Mount lens standard initially developed by Leica but now used by Panasonic and Sigma as well. Sigma will be updating its I Series collection of prime lenses with a new silver color option to match the BF.

The back of the Sigma BF camera in silver.

The BF’s minimalist design is most apparent on the back of the camera. Next to a 3.2-inch touchscreen display that doesn’t feature any articulation are three touch controls with haptic feedback, so it feels like you’re pressing real buttons. Above them sit a dial for navigating menus, with an additional haptic button in its center, and a smaller status screen that shows settings options so the camera’s main display doesn’t get overly cluttered with information. You’ll find the shutter button on top of the camera, next to a couple of small microphone holes.

The streamlined user interface on the BF surrounds the live preview with shooting-related settings, including shutter speed, aperture, ISO, and EV compensation. Secondary settings are hidden in an optional menu, while camera management functions are buried in a system menu.

As is becoming more common with digital cameras, Sigma has included 13 different color modes with the BF, allowing photographers to achieve a specific look in-camera without the need for post-processing. These include options like standard and rich, as well as more creative modes like forest green, sunset red, warm gold, cinema, and monochrome.

You won’t find a memory card slot on the Sigma BF, but a USB-C port for charging and transferring files is included. Inside is a 230GB SSD which the company says is enough to store 14,000 JPEGs or 4,300 uncompressed RAW files. The camera can also capture 6K video at up to 29.97 frames per second, and can store up to 2.5 hours of video at its highest quality setting.

At full resolution, the Sigma BF can capture images at up to eight frames per second and it relies on a hybrid autofocus system combining phase and contrast detection. Its AF system also uses what the company describes as “state of the art algorithms” to detect and quickly focus on specific subjects including people, dogs, and cats.

How to set up crash detection on your Android phone

Hand holding an Android phone with a variety of small graphics in the background.

Your phone comes with a number of useful features that we hope you’ll never have to use — and crash detection falls into that category. Movement sensors detect when you’re driving — and when you come to a sudden and abrupt stop. Your phone or watch can then alert emergency services, along with family and friends, even if you’re incapacitated. 

The feature is available on Pixel phones starting with the Google Pixel 4A, which launched in 2020. (It has been rumored that crash detection is built into the Galaxy S25 series phones as well, but if it is, Samsung hasn’t enabled the feature yet.) It is also an option on the Pixel Watch 2 and 3.

When a crash is detected, your phone and watch both sound an alarm and ask if something has happened. If you’re using a phone and a wearable together, you’ll see an alert on both screens, and you can reply on either device.

The slider options you get are I’m OK (to cancel the alarm) or Call 911 & notify contacts (to go ahead and notify emergency services and your contacts). If you haven’t responded after 60 seconds, the alerts get sent out automatically, with an audio message to explain that an accident has happened and where you are.

Left: graphic showing blue car hitting gray car, above a toggle titled Car Crash Detection and a description of how it works.

Set …

Read the full story at The Verge.

Apple’s iPhone 17 lineup is looking a little Pixelated

In order: The iPhone 17 “Air,” iPhone 17, iPhone 17 Pro Max, and iPhone 17 Pro. | Image: Majin Bu

Apple is several months away from launching the iPhone 17 series but a significant camera redesign may be on the horizon. Leaker Majin Bu has shared CAD renders of what are purported to be the iPhone 17, 17 Pro, 17 Pro Max, and the rumored iPhone 17 Air — with the latter three all featuring Pixel-like rectangular camera bars.

The new CAD renders show the rear camera bars on the iPhone 17 Pro and Pro Max models stretched to extend their currently square design, now reaching across the entire upper body. They still retain the rounder edges seen on the current models. The 17 “Air” features a similar design, albeit with only a single rear camera lens. According to these renders, the camera module on the standard iPhone 17 model will be largely unchanged, differentiating it from the premium models.

Majin Bu is an established leaker but, as MacRumors notes, the information he’s shared hasn’t always been correct. Still, these renders are just the latest of several similar leaks about the iPhone 17’s rectangular camera bar over the last few months, with concept designs also shared by Front Page Tech host Jon Prosser and other leakers including Ice Universe, Fixed Focus Digital, and Digital Chat Station.

In January, Majin Bu also posted an image of iPhone bodies alleged to be “part of” the iPhone 17 family. The design looks similar to the iPhone “Air” render he shared yesterday, and resembles the pill-shaped camera bar sported by Google’s Pixel 9 lineup. Rumors about the so-called iPhone 17 Air have been floating around for months describing a new, slimmer model that’s expected to join the upcoming iPhone lineup, similar to Samsung’s upcoming Galaxy S25 Edge.

iPhone 17, the design seems confirmed. pic.twitter.com/5Wh6alUiMr

— Majin Bu (@MajinBuOfficial) January 21, 2025

It’s early days and these leaks shouldn’t be taken as gospel. We won’t know the official iPhone 17 design until Apple reveals it later this year, with an announcement expected sometime in September.

What’s the deal with all these airplane crashes?

First, let’s lay out the facts.

Four commercial jet crashes have occurred in the last 10 weeks: Azerbaijan Airlines Flight 8243 on Christmas Day; Jeju Air Flight 7C2216 on December 29th; American Airlines Flight 5342 on January 29th; and Delta Connection Flight 4819 on February 17th.

There have been several private airplane crashes in the news recently, too, from the air ambulance crash in Philadelphia, Pennsylvania, just before the Super Bowl to the mid-air collision in Scottsdale, Arizona, only last week. In fact, data from the National Transportation Safety Board (NTSB) shows that there have been 13 fatal airplane crashes in the United States alone since the beginning of the year, including both private and commercial aviation. 

That’s just what is happening in the sky. On the ground, things appear just as chaotic.


The Federal Aviation Administration announced that it was laying off around 400 employees starting on Valentine’s Day, just two weeks after the mid-air collision above Ronald Reagan National Airport. In a combative post on X, Secretary of Transportation Sean Duffy said that all laid-off workers were “probati…

Read the full story at The Verge.

Apple responds to tariff threat with a $500 billion US investment plan

US President Donald Trump speaks with Tim Cook in 2019.

Apple has announced plans to invest more than $500 billion in the US over the next four years, including hiring 20,000 new employees and launching a new server factory in Texas. The announcement was teased after a meeting last week between CEO Tim Cook and President Donald Trump, and comes as the company tries to mitigate the business impact of Trump’s trade tariffs, with a 10 percent tariff already in effect on goods imported from China, and a 25 percent tariff threatened for chips.

The announcement echoes one Apple made in early 2018, during the first Trump administration. At that point Apple also promised 20,000 new jobs as part of a $350 billion spend in the US, alongside a new campus in Austin, which is still under construction. The company successfully appealed for tariff exemptions for some of its products, and a new US investment may be a way to secure further protection from Trump’s new charges. Apple has not confirmed how many of the new investments were already planned before Trump took office.

The company announced a few concrete elements of the increased US spend. The most significant is a new factory in Houston, set to open next year, which will produce servers to power Apple Intelligence, the company’s suite of AI features. Apple says that this factory alone will “create thousands of jobs.”

In addition, Apple is doubling its $5 billion US Advanced Manufacturing Fund to $10 billion. Launched in 2017, the fund is intended to “support world-class innovation and high-skilled manufacturing jobs across America.” In this case, it means Apple making a multibillion-dollar order for chips from a TSMC factory in Arizona.

More generally, Apple says that over the term of the Trump administration it will hire 20,000 new employees, with the majority focused on “R&D, silicon engineering, software development, and AI and machine learning.” It will also open an Apple Manufacturing Academy in Detroit in which Apple engineers and other experts will offer consultations to local businesses on “implementing AI and smart manufacturing techniques,” along with free classes for workers.

“We are bullish on the future of American innovation, and we’re proud to build on our long-standing U.S. investments with this $500 billion commitment to our country’s future,” said Cook in a statement. “From doubling our Advanced Manufacturing Fund, to building advanced technology in Texas, we’re thrilled to expand our support for American manufacturing. And we’ll keep working with people and companies across this country to help write an extraordinary new chapter in the history of American innovation.”

Apple’s most recent announcement on US investment was a 2021 promise to spend $430 billion over the following five years, including a 3,000-employee campus in North Carolina, though development on that project has since paused.

Grok blocked results saying Musk and Trump ‘spread misinformation’

Grok, Elon Musk’s ChatGPT competitor, was temporarily instructed to ignore “sources that mention Elon Musk/Donald Trump spread misinformation,” according to xAI’s head of engineering, Igor Babuschkin. After Grok users noticed that the chatbot had been told to block those results, Babuschkin blamed an unnamed ex-OpenAI employee at xAI for updating Grok’s system prompt without approval.

In response to questions on X, Babuschkin said that Grok’s system prompt (the internal rules that govern how an AI responds to queries) is publicly visible “because we believe users should be able to see what it is we’re asking Grok.” He said “an employee pushed the change” to the system prompt “because they thought it would help, but this is obviously not in line with our values.”

Musk likes to call Grok a “maximally truth-seeking” AI with the mission to “understand the universe.” Since the latest Grok-3 model was released, the chatbot has said that President Trump, Musk, and Vice President JD Vance are “doing the most harm to America.” Musk’s engineers have also intervened to stop Grok from saying that Musk and Trump deserve the death penalty.

"Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."

This is part of the Grok prompt that returns search results.https://t.co/OLiEhV7njs pic.twitter.com/d1NJbs7C2B

— Wyatt walls (@lefthanddraft) February 23, 2025

The Space Force shares a photo of Earth taken by the X-37B space plane

On Friday, the Space Force published a picture taken last year from a camera mounted on the secretive X-37B space plane while high above the Earth. Space.com notes that the “one other glimpse” of the plane in space was while it was “deploying from Falcon Heavy’s upper stage” during its December 2023 launch.

The Space Force says it snapped the photo during experimental “first-of-kind” aerobraking maneuvers “to safely change its orbit using minimal fuel.” The Air Force said in October this would involve “a series of passes using the drag of Earth’s atmosphere,” and that once complete, it would resume its other experiments before de-orbiting.

An X-37B onboard camera, used to ensure the health and safety of the vehicle, captures an image of Earth while conducting experiments in HEO in 2024.The X-37B executed a series of first-of-kind maneuvers, called aerobraking, to safely change its orbit using minimal fuel. pic.twitter.com/ccisgl493P

— United States Space Force (@SpaceForceDoD) February 21, 2025

This is the X-37B’s seventh mission; its sixth, which concluded in November 2022, lasted about two-and-a-half years (or 908 days) and was its longest mission to date. Prior to its launch, the Space Force described mission goals that included “operating in new orbital regimes” and testing “future space domain awareness technologies.” It also mentioned an onboard NASA experiment involving plant seeds’ radiation exposure during long spaceflight missions.

Elon Musk claims federal employees have 48 hours to explain recent work or resign

Elon Musk tweeted Saturday that federal workers would soon get an email “requesting to understand what they got done last week.” According to the New York Times, the email from the Office of Personnel Management went to agencies across the federal government that afternoon, including the FBI, State Department, and others, with a deadline for response by 11:59PM ET on Monday. 

However, the message lacked a detail from Musk’s tweet, according to the Times, where he said, “Failure to respond will be taken as a resignation,” which a number of lawyers have said would be illegal. The Washington Post reports that experts said it “may be asking some recipients to violate federal laws,” and Sam Bagenstos, a University of Michigan law professor quoted by the Times, said, “There is zero basis in the civil service system for this.” 

House minority leader Hakeem Jeffries said in a statement Sunday that “Elon Musk is traumatizing hardworking federal employees, their children and families. He has no legal authority to make his latest demands.”

The stunt is another echo of Musk’s approach after he took over Twitter, with requests to review engineers’ code and warnings that failing to respond to an email would be regarded as a resignation. Across hundreds of tweets posted on Saturday and early Sunday, Musk — who may or may not run the “Department of Government Efficiency” (DOGE), in addition to his various companies — claimed, without presenting evidence, to be rooting out fraud and employees who don’t do any work.

Leaders of at least some of the departments, like the FBI and State Department, reportedly told their workers to await guidance before responding, while the Post reports that acting Cybersecurity and Infrastructure Security Agency director Bridget Bean told staff to comply with the “valid request.”

Unions like the American Federation of Government Employees and the National Treasury Employees Union told employees “not to respond, either just yet or at all,” Axios writes. CNN reporter Pete Muntean said the National Air Traffic Controllers Association called the email “an unnecessary distraction to a fragile system.”

Hades II just keeps getting better

Hades II just received its second major update as part of its early access development, which was a great excuse for me to jump back in. Since its initial release, I’ve logged more than 30 hours and actually held myself back from playing much more – I don’t want to get tired of the game before it hits 1.0 – but with the new update, I wanted to see what’s new and try to beat the new final boss on my very first run.

Sadly, I haven’t even been able to see what the boss is yet. I did make it to the update’s new region, but I got destroyed by a dangerous miniboss. Still, I’ve been really impressed with what Supergiant Games has added since May to make what’s already a very good game even better.

The big additions are impressive. Hades II initially launched with six regions — four for an Underworld route and two for a “surface” route — and with each major update, Supergiant has added a new region with new enemies, characters, and music to round out that surface route.

The first major update, which came out in October, added the game’s first new region, Mount Olympus, and it feels as epic as Mount Olympus should. It has grand architecture, fearsome e…

Read the full story at The Verge.

Apple’s M4 MacBook Air bump may be just around the corner

Last year’s M3 MacBook Airs.

Apple is readying its MacBook Air line for an update to M4 chips in March, according to Bloomberg’s Mark Gurman in today’s Power On newsletter. With the slim laptops’ spec bump, the MacBook line’s M4 transition will be complete.

Gurman didn’t provide timing beyond that the laptops are coming next month, but as usual before it launches a product, Apple is “preparing its marketing, sales and retail teams for the debut” and letting its retail stock of the laptops clear out. Both the 13-inch and 15-inch models are expected to come at the same time, like last year.

Since the Apple Silicon transition, the MacBook Airs have largely shared specs with the low-end MacBook Pro, just packed into a slimmer laptop with omissions like fewer ports and no cooling fan. The base model 14-inch Pro starts with a 10-core CPU and 10-core GPU and features 16GB of RAM — you can get a sense of that configuration’s performance from our review of the base M4 MacBook Pro. Ideally, the new Air models will also get the Pro’s key upgrade of being able to simultaneously connect to two external displays with the lid open.

That leaves only the Mac Studio and Mac Pro, which are still M2-generation machines, without M4 chips. Gurman has pegged the Mac Studio’s M4 bump for “between March and June” and the Mac Pro’s anywhere from June to this fall.

Our favorite apps for listening to music

Hi, friends! Welcome to Installer No. 72, your guide to the best and Verge-iest stuff in the world. (If you’re new here, welcome, hope you like gadgets, and also you can read all the old editions at the Installer homepage.) 

This week, I’ve been reading about Hasan Piker and calculator apps and car thieves and the real economics of YouTuber life, using my month of Paramount Plus to watch Sonic the Hedgehog 3 and Yellowjackets, replacing my big podcast headphones with the Shure SE215 in-ear headphones, switching all my reading out of the Kindle ecosystem for increasingly obvious reasons, and taking copious notes on Kevin Kelly’s 50 years of travel tips.

I also have for you Apple’s slightly confusing latest smartphone, a couple of new things to watch this weekend, the best new Xbox game in a while, and much more. Also, the first part of our group project on all the ways we listen to music. Let’s do this.

(As always, the best part of Installer is your ideas and tips. What are you watching / reading / playing / listening to / hot-gluing this week? Tell me everything: [email protected]. And if you know someone else who might enjoy Installer, forward it to them and tel…

Read the full story at The Verge.

The iOS 18.4 beta brings Matter robot vacuum support

The Switchbot S10 on its dock.
The Switchbot S10 is one robot vacuum with Matter support.

Apple released the first developer beta of iOS 18.4 yesterday, which users have since discovered contains support for robot vacuums in the Apple Home app through Matter.

As spotted by 9to5Mac, Smart Home Centre confirmed the functionality using a Switchbot S10, which offers its own beta support for Matter. (Switchbot first added Matter robot vacuum support last year, but it required a hub and was kind of a hack.) Apple Home screenshots shared in the story show the robot vacuum’s Home widget (complete with a little robot vacuum glyph) along with a control screen featuring a start / stop button, options for choosing between “Vacuum” and “Vacuum and Mop,” and selections for operating modes like “Quiet” or “Deep Clean.” There’s also a “Send to Dock” option, although Smart Home Centre notes that this only paused the S10.

Robot vacuums in the new iOS beta can also be added to automations and scenes. You can see how all of it works in the outlet’s video below.

Apple was expected to add Matter support for robot vacuum cleaners last year, but that didn’t materialize. Few robot vacuum companies offer Matter support at the moment, and some of those are still waiting on a firmware update to enable it. Robot vacuum makers have confirmed to us that these models will support Matter:

Some of the other changes users have spotted in the first developer beta for iOS 18.4 include the addition of an ambient music Control Center option, a new “sketch” style option in Image Playground, Apple Intelligence-powered Priority Notifications, and the ability to set a default translation app. More changes could be coming, as this is only the first beta for a release that had been expected to begin Siri’s big upgrade, a shift that may still be more than a month away.

The long wait for a glimpse of Luigi

There are so many people here that nobody can tell where the end of the line is. New people arrive, ask if there’s a line, shuffle into a blob of bodies idling and waiting for someone to give them instructions. The hallway is horribly warm — unclear if it’s from the bodies or the heat — and it’s a little smelly, which could just be me but I don’t think it is. I estimate between 100 and 150 people are hanging around, waiting for 2:15PM to roll around, their anticipation building. This is not a club with a strict bouncer, though it feels like it. This is the Luigi Mangione hearing.

The hearing is a relatively minor pre-trial status update, but for the people most tapped in, there is a lot riding on it — the Luigi info-drip has been a bit dry lately. Court dates for the 26-year-old accused of murdering UnitedHealthcare CEO Brian Thompson in December keep getting pushed back. Mangione, who is currently being held in federal custody in a Brooklyn jail, has not made a public appearance since before Christmas. (Mangione is accused of gunning down Thompson in December outside a Midtown Manhattan hotel, and has pleaded not guilty.) On TikTok, commenters regularly complain th…

Read the full story at The Verge.

Spotify HiFi was announced four years ago, and it’s almost here — maybe

I’m hard-pressed to find another example of a tech company announcing something and then waiting over four years to actually ship it, but that’s exactly the situation we’ve reached with Spotify and its long-delayed HiFi feature. The latest reports indicate it’s finally coming in a matter of months as part of a Music Pro package that Spotify hopes will ensure the service’s continued profitability.

But this has become quite the saga.

First introduced on February 22nd, 2021, Spotify HiFi was to roll out later that year — or such was the original plan, anyway. In that story, I wrote “your turn, Apple Music,” which is funny in retrospect since Apple Music managed to successfully deliver lossless and high-resolution audio just a few months later (and at no added cost for subscribers). Amazon stopped charging extra for lossless music at around the same time.

A photo of Spotify CEO Daniel Ek on a stage.

By all accounts, this aggressive approach from both companies totally derailed Spotify HiFi, which was always going to demand an upcharge over the service’s regular Premium subscription. The company went radio silent on the feature, and Spotify spokespeople never provided any meaningful updates on its status.

T…

Read the full story at The Verge.

AT&T will let you split your bill with people on your plan

AT&T has introduced SplitPay, a new payment option that lets those sharing a phone plan with others split their payment line-by-line, so no one person has to pay the entire bill. The company says the program is available for “select postpaid wireless plans,” and that those using SplitPay can still get multi-line discounts.

It sounds like a nice idea, especially if you’ve ever had the experience of bothering people you’re sharing a plan with for their part of a bill that you pay. As for what happens if not everyone pays up, AT&T says the account holder is still responsible for the bill, and late payments could still result in extra fees or suspended service. The company writes that it will text each payer a payment link and what they owe when a billing cycle begins, and says it will notify the primary payer about any outstanding payments prior to the bill’s due date.

To set up SplitPay, you can head to AT&T’s SplitPay page, select the account holder, and then pick the individual lines and devices, like smartwatches or tablets, you want to assign to each payer, according to a help page on the program.

Asus is making a ‘Fragrance Mouse,’ and it’s coming to the US

If you were paying attention to CES this year, you may have come across the Asus Adol 14 Air Fragrance Edition’s curious gimmick: a magnetically attached oil diffuser in the lid that emits the aroma of essential oils once the laptop heats up. Asus has now announced details about a “Fragrance Mouse” to go with it. Mentioned alongside the company’s Copilot Plus PCs at CES 2025, it’s coming to the US “around late April, early May,” company spokesperson Anthony Spence told The Verge in an email.

The Fragrance Mouse has a light-duty mousing layout of two buttons and a scroll wheel. Its trick is on the underside, where a small compartment holds a refillable vial you can load with essential oils of your choosing. It’s an otherwise standard affair — the mouse connects wirelessly over Bluetooth or a 2.4GHz wireless USB dongle, offers adjustable DPI (1200dpi, 1600dpi, and 2400dpi), and is powered by a single AA battery. Asus says it’s “available in distinctive Iridescent White or Rose Clay finishes.”

Underside of the Fragrance Mouse.

You may not be able to get a complete stinky laptop and mouse set, since the Adol 14 Air Fragrance Edition has only been released in China since being introduced in July 2024, as Ars Technica notes. Spence was unable to confirm pricing details for the Fragrance Mouse in his email to The Verge.

Update February 22nd: Added that Asus had previously mentioned the Fragrance Mouse in January.

Lost Records: Bloom & Rage blends its teen drama with a heavy dose of ’90s nostalgia

The fuzz of the cathode-ray tube (CRT) monitor, alongside static grains and flickering scanlines, is a touchstone for ’90s-era nostalgia. It’s shorthand for those halcyon days when technology was predominantly analog and millennial kids spent their summers shoving bulky tapes into VHS players, recording favorite bits of their after-school television shows, and making their own home videos with camcorders. It’s this vignette that developer Don’t Nod Montréal leans heavily into in Lost Records: Bloom & Rage. The game follows a blossoming friendship — and apparent falling-out — of four teenagers over an unforgettable summer. And it all starts with a good dose of that nostalgia: the ubiquitously blue anti-drug message that precedes the title screen, complete with the telltale flicker of a CRT monitor.

Such adolescent longing is all par for the course for Don’t Nod. Alongside Telltale, the studio popularized the choose-your-own-adventure style of narrative games with Life is Strange, while foregrounding the outsized pain and tribulations of teenhood. But more than just coating teenage drama in a layer of dreamy nostalgia, Bloom & Rage is also an opportunity for Don’t…

Read the full story at The Verge.
