
Microsoft adds over 50 ‘Retro Classics’ to Game Pass

Microsoft has announced that a new “Retro Classics” collection is now available to Game Pass subscribers. Reminiscent of the Nintendo Switch Online classic games library, the collection includes Pitfall, Grand Prix, and more than 50 other Activision titles from the 1980s and 1990s. It’s not as many titles as the 1,300 retro games that Antstream, Microsoft’s partner in the offering, has available on its streaming service, but it won’t cost Game Pass subscribers any extra.

Retro Classics, which Microsoft writes is part of its “commitment to game preservation and backwards compatibility,” is available on Xbox consoles, PC, and via Xbox cloud gaming on compatible devices like some LG and Samsung smart TVs and the Meta Quest headset. Other titles in the collection include Cosmic Ark, MechWarrior 2: 31st Century Combat, and Atlantis.

Based on screenshots, it looks like the collection will include titles from the original PlayStation, the SNES, MS-DOS, and more. Players will be able to collect achievements and participate in events like tournaments and community challenges as well.

Microsoft says this is only the start; the collection will expand to include more than 100 games from Activision and Blizzard eventually. Like Nintendo’s retro collection, you’re out of luck if you don’t have Microsoft’s gaming subscription, as the titles in the collection aren’t available for sale separately.

Antstream Arcade separately announced a temporary deal for Game Pass subscribers. Until June 4th, members can sign up for a year of access to Antstream’s library, which includes more than 1,300 games, for $9.99 via the Microsoft Store.

Here is the full collection of launch titles for the Retro Classics collection, which Activision’s Dustin Blackwell sent to The Verge:

  • Activision prototype #1
  • Atlantis
  • Atlantis II
  • Barnstorming
  • Baseball
  • Beamrider
  • Bloody Human Freeway
  • Boxing
  • Bridge
  • Caesar II
  • Checkers
  • Chopper Command
  • Commando
  • Conquests of the Longbow: The Legend of Robin Hood
  • Cosmic Ark
  • Crackpots
  • Decathlon
  • Demon Attack
  • Dolphin
  • Dragster
  • Enduro
  • Fathom
  • Fire Fighter
  • Fishing Derby
  • Freddy Pharkas: Frontier Pharmacist
  • Freeway
  • Frostbite
  • Grand Prix
  • H.E.R.O.
  • Kaboom!
  • Laser Blast
  • MechWarrior
  • MechWarrior 2: 31st Century Combat
  • Megamania
  • Pitfall II: Lost Caverns
  • Pitfall!
  • Police Quest 1
  • Pressure Cooker
  • Quest for Glory 1
  • Riddles of the Sphinx
  • River Raid
  • River Raid II
  • Robot Tank
  • Sky Jinks
  • Space Quest 2
  • Space Quest 6
  • Space Treat Deluxe
  • Spider Fighter
  • Star Voyager
  • Tennis
  • The Adventures of Willy Beamish
  • The Dagger of Amon Ra
  • Thwocker
  • Title Match Pro Wrestling
  • Torin’s Passage
  • Trick Shot
  • Vault Assault
  • Venetian Blinds
  • Zork I
  • Zork Zero

Update, May 21st: Added list of Retro Classics launch titles.

Android 16 adds AI-powered weather effects that can make it rain on your photos

Sorry, Mario.

Google’s latest Android 16 beta adds a bunch of new wallpaper and lock screen options for Pixel phones, including live-updating weather animations and a feature that automatically frames subjects of photos within a variety of bubbly shapes.

When you select an image to use as a wallpaper in the beta, you can tap the sparkly collection of starbursts that has become the de facto symbol for AI features to access the new effects. One of them, “Shape,” washes your screen in a solid color, with a punchout frame in the middle centered on the subject of your photo, be it a person, animal, or object. You can choose from five shape options: a slanted oval, a rounded rectangle, an arched opening, a flowery shape, and a hexagon. It’s a little like the iOS “Depth Effect” feature that partially obscures the clock on your lock screen with a person’s head.

Screenshots of a cat in a “shapes” frame (left) versus two cats in the original image (right)

Right now, your phone picks what part of the image should be the subject, with no option to resize or reposition it. In a picture of two cats that my colleague Dominic Preston tried, the phone automatically centered the frame on one of the cats, with no option to use the other instead.

A new “Weather” option interacts with the subject of your photo, like by pelting them with raindrops or wrapping them in fog. The default choice, “Local,” changes the effect depending on nearby weather conditions, but you can pick fog, rain, snow, or sun if you’d rather use one persistent effect. These options join the previous “Cinematic” wallpaper mode that automatically creates a parallax effect, moving your subject around the background of the image when you tilt your phone. That feature is now activated with a toggle labeled “Add 3D motion to this photo” and produced slightly different results when I tried it out with the same image.

Google is also testing updates to the lock screen, including offering more control over what notifications appear there. For instance, the beta now has a toggle for “Show seen notifications” that, when turned off, will hide notifications you’ve already seen.

Finally, 9to5Google spotted that a blog post from Google’s I/O conference offers a look at its “Live Updates” feature, which, like iOS’s Live Activities, presents live-updating lock screen elements showing you when, say, your Uber driver is arriving. A GIF in the post shows what aspects like its progress bar and time estimates will look like.

Fender’s free new recording app lets you simulate its iconic amps and pedals

Fender has released a free new recording app called Fender Studio that seems pretty powerful. The app, available on iOS, Android, macOS, Windows, and Linux, supports multitrack recording and offers a host of effects that emulate guitar pedals and several of the company’s iconic amplifiers over the years. 

Some of the simulated amps include a 1965 Fender Twin Reverb and a 1959 Fender Bassman, but you can get more, like a Fender Super-Sonic or Tube Preamp, if you connect the app to a free Fender Connect account. It’s the same story with pedals: you get options like Overdrive and Small Hall Reverb to start, but you can unlock others like a Stereo Tape Delay and Vintage Tremolo by registering.

In addition to the simulated amps and pedals, the app has a variety of basic effects like reverb, delay, compression, and vocoder. You can use up to eight tracks for multitrack recording, but registering the app gets you up to 16 tracks. 

The app also comes with a few pre-recorded tracks to play around with. You can’t use them commercially — the app makes you agree to a license agreement that says so right at the top — but they’re a good way to familiarize yourself with what it’s capable of. 

GIF of Fender Studio in action on an iPhone.

I played around a little bit with Fender Studio on my iPhone before writing this story. At first glance, it’s similar to Apple’s GarageBand app. Both let you use effects that mimic classic amplifiers and pedals, have things like built-in tuners and metronomes, and include a number of different effects and EQ options.

But Fender’s app feels more intuitive and makes better use of the cramped space of a smartphone screen, with support for both landscape and portrait orientation (GarageBand only works in landscape). I find that I can never remember how to do things in GarageBand, and trying to figure it out each time is frustrating enough that I never bothered to develop a workflow for recording with it. 

I’ve also always been a sucker for simulated amplifiers in recording apps, one area of software that never really gave up on skeuomorphism. I haven’t used a Twin Reverb in many years and haven’t played through a Bassman, but just recording with a hollow-body guitar through my iPhone’s built-in mic — if you have a USB-C DAC, you can use that to record with a more professional mic, too — the effect was decent enough that I could see using it to conceptualize a recording or even make a quick backing track for a video on TikTok.

iPhones are on the menu for Amazon drone delivery

The Federal Aviation Administration (FAA) has given Amazon’s Prime Air drones the go-ahead to deliver new categories of devices, including products with lithium ion batteries like iPhones, AirPods, and more, Amazon has announced. The company says those product categories can be shipped to your door within 60 minutes — if you’re in one of the eligible delivery areas in Arizona or Texas, that is.

Amazon writes that it recently streamlined its drone deliveries. Under the new process, Amazon gives you a delivery time with a five-minute window on either side, and customers no longer have to go outside and put a QR code on the ground.

Screenshot of drone delivery area confirmation.

The first time you order one of these drone drops, you’ll pick one of the predetermined eligible delivery zones shown on an aerial picture of your house. Amazon will use the same spot from then on, assuming it’s clear, until you change it. The drones drop packages from about 13 feet in the air, so it’s a good idea to keep your pets or kids inside during the delivery window. At the moment, drone deliveries are only available in College Station, Texas, and in the West Valley part of the Phoenix, Arizona metro area, and only when the weather is favorable.

Exciting update in drone delivery from Amazon: Prime Air is now expanding its selection to include popular electronics with lithium-ion batteries, like phones, AirTags, and even grilling thermometers.

Customers who are in eligible areas for drone delivery in Texas and Arizona… pic.twitter.com/wQSpUTE4tu

— Amazon (@amazon) May 20, 2025

The deliveries come via Amazon’s new MK30 drones, a key part of the drone delivery program the company has been working to get off the ground for over a decade. MK30s are limited to 5-pound packages, but they can fly farther than the drones it used previously, and can even handle light rain. Last year, Amazon managed to get FAA approval to fly its drones beyond the visual line of sight of its operators, greatly expanding where it can actually make its deliveries.

Bluesky is testing a new ‘live’ indicator, starting with the NBA

Bluesky is making it easier to know when an NBA game is happening with a new test that adds a red border to the NBA’s profile picture, along with a “live” callout below it. When you click the profile picture, you’ll be taken out of Bluesky and to whatever live event the organization is promoting, Bluesky COO Rose Wang announced yesterday.

“We aren’t trapping you in Bluesky,” Wang writes in her post. “We want you to use Bluesky to discover what’s happening.” 

In the announcement, Wang quote-posted an NBA promotional post about two games that were set to take place last night, indicating that the badge would have shown up during them. Bluesky didn’t immediately respond to The Verge’s email asking for a screenshot of the new indicator and whether it plans to extend the test to other sports or non-sports organizations. As TechCrunch points out, Wang confirmed that the feature will appear for WNBA games as well.

Though Wang doesn’t say it, her post feels like a dig at the various deals Twitter made with sports organizations like the NFL, MLB, NHL, and NBA to stream their content on its platform, rather than linking out to their streams elsewhere. In an interview with SportsPro last month, Wang said Bluesky doesn’t have the means or desire to take on partnerships like those, but the new live badge testing shows it’s certainly not above doing what it can to nurture its burgeoning “Sports Bluesky.”

Microsoft is opening its on-device AI models up to web apps in Edge

Web developers will be able to start leveraging on-device AI in Microsoft’s Edge browser soon, using new APIs that can give their web apps access to Microsoft’s Phi-4-mini model, the company announced at its Build conference today. And Microsoft says the APIs will be cross-platform, so it sounds like they will work with the Edge browser on macOS as well.

The 3.8-billion-parameter Phi-4-mini is Microsoft’s latest small, on-device model, rolled out in February alongside the company’s larger Phi-4. With the new APIs, web developers will be able to add prompt boxes and offer writing assistance tools for text generation, summarizing, and editing. And within the next couple of months, Microsoft says it will also release a text translation API. 

Microsoft is putting these “experimental” APIs forth as potential web standards, and in addition to being cross-platform, it says they’ll also work with other AI models. Developers can start trialing them in the Edge Canary and Dev channels now, the company says. 

Google offers similar APIs for its Chrome browser. With them, developers can use Chrome’s built-in models to offer things like text translation, prompt boxes for text and image generation, and calendar event creation based on webpage content.

Apple is trying to get ‘LLM Siri’ back on track

Apple Intelligence has been a wreck since its first features rolled out last year, and a big new report from Bloomberg’s Mark Gurman details why — and how Apple is trying to piece things back together. And much of its effort hinges on rebuilding Siri from the ground up.

Gurman has reported in the past that Apple is working on what it’s internally calling “LLM Siri” — a reworked, generative AI version of the company’s digital assistant. Apple’s previous approach of merging the assistant with the existing Siri hasn’t been working. Gurman describes in great detail a number of reasons why, but here’s a quick summary:

  • Apple software chief Craig Federighi was “reluctant to make large investments in AI.” The company doesn’t like to invest in a goal without a clear endpoint, Gurman writes, but where AI is concerned, one unnamed Apple executive told him “…you really don’t know what the product is until you’ve done the investment.” That would have meant expensive GPUs, which the company didn’t rush to buy and later didn’t have enough of to keep up with competitors.
  • Apple started late. Another executive told Gurman that Apple Intelligence “wasn’t even an idea” before ChatGPT launched in late 2022.
  • Apple AI chief John Giannandrea thought people didn’t want AI chatbots. He told employees that customers commonly want to be able to disable tools like ChatGPT.
  • Old Siri didn’t work with new Siri. Apple apparently saw bolting generative AI features onto the old Siri as the fastest way to catch Apple up in AI, but it wasn’t working. “It’s whack-a-mole. You fix one issue, and three more crop up,” an employee told Gurman.
  • Giannandrea didn’t “fit in” with Apple’s inner circle. Giannandrea was a rare outside executive hire when he came on in 2018, and he didn’t have the same “forceful” personality as others in company leadership. He didn’t fight hard enough to get big funding amounts, the report says. Apple employees told Gurman that Giannandrea didn’t push his workers hard enough, and that he doesn’t see big AI companies like OpenAI or Google as an urgent threat to Apple.
  • Marketing got out over its skis. The company’s AI marketing focused heavily on promised features like an improved Siri or Apple Intelligence being able to take context from apps across your system before they were ready — features that it has since been forced to delay.

Now the company is trying to rejigger its approach. Part of that is a total overhaul of Siri, rather than just trying to make generative AI work in concert with the old Siri. According to Gurman, Apple has its AI team in Zurich working on a new architecture that will “entirely build on an LLM-based engine.” Gurman reported in November last year that the company was working on this, and the idea is that it will make the assistant “more believably conversational and better at synthesizing information.”

Another part of the solution is leveraging iPhones and differential privacy to improve Apple’s synthesized data — comparing fake training data with language from iPhone users’ emails, but doing so on-device and sending only the synthesized data back to Apple for AI training. And one way the company is discussing improving Siri is letting the LLM version loose on the web to “grab and synthesize data from multiple sources.” Basically, Siri as an AI web search tool not unlike Perplexity, which is one of the companies Apple has approached about partnering for AI search in Safari.

Whatever the outcome, apparently Giannandrea won’t be a direct part of it, having been taken off of product development, Siri, and robotics projects in the spring. According to Gurman, Apple execs have talked about putting him “on a path to retirement,” but are concerned that some of the research and engineering folks he brought with him would leave with him, too. Whatever the case, Gurman says Giannandrea plans to stick around, “relieved Siri is now someone else’s problem.”

China begins assembling its supercomputer in space

China’s Long March 2D rocket.

China has launched the first 12 satellites of a planned 2,800-strong orbital supercomputer satellite network, reports Space News. The satellites, created by ADA Space, Zhijiang Laboratory, and Neijang High-Tech Zone, will be able to process the data they collect onboard, rather than relying on terrestrial stations to do it for them, according to ADA Space’s announcement (machine-translated).

The satellites are part of ADA Space’s “Star Compute” program and the first of what it calls the “Three-Body Computing Constellation,” the company writes. Each of the 12 satellites has an onboard eight-billion-parameter AI model and is capable of 744 tera operations per second (TOPS) — a measure of their AI processing grunt — and, collectively, ADA Space says they can manage five peta operations per second, or POPS. That’s quite a bit more than, say, the 40 TOPS required for a Microsoft Copilot PC. The eventual goal is to have a network of thousands of satellites that achieve 1,000 POPS, according to the Chinese government.
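To put those figures in perspective, here is a quick back-of-envelope sketch using only the numbers cited above and the standard unit conversion (1 POPS = 1,000 TOPS); the variable names are illustrative, not from any source:

```python
# Rough sanity check of the compute figures cited in the article.
# Unit conversion: 1 POPS (peta-operations/sec) = 1,000 TOPS (tera-operations/sec).
TOPS_PER_POPS = 1_000

copilot_pc_tops = 40       # TOPS Microsoft requires for a Copilot PC
first_batch_pops = 5       # combined throughput of the 12 launched satellites
network_goal_pops = 1_000  # eventual constellation target

# The first 12 satellites alone match 125 Copilot-class machines.
print(first_batch_pops * TOPS_PER_POPS / copilot_pc_tops)  # 125.0

# The planned network would be 200x the initial batch.
print(network_goal_pops / first_batch_pops)  # 200.0
```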

The satellites communicate with each other at up to 100Gbps using lasers, and share 30 terabytes of storage between them, according to Space News. The 12 launched last week carry scientific payloads, including an X-ray polarization detector for picking up brief cosmic phenomena such as gamma-ray bursts. The satellites also have the capability to create 3D digital twin data that can be used for purposes like emergency response, gaming, and tourism, ADA Space says in its announcement.

The benefits of having a space-based supercomputer go beyond saving communications time, according to South China Morning Post. The outlet notes that traditional satellite transmissions are slow, and that “less than 10 per cent” of satellite data makes it to Earth, due to things like limited bandwidth and ground station availability. And Jonathan McDowell, a space historian and astronomer at Harvard University, told the outlet, “Orbital data centres can use solar power and radiate their heat to space, reducing the energy needs and carbon footprint.” He said both the US and Europe could carry out similar projects in the future, writes SCMP.

Amazon claims it’s ‘constantly inviting’ new customers to Alexa Plus

Yesterday, Reuters ran a story with the headline “Weeks after Amazon’s Alexa+ AI launch, a mystery: where are the users?,” in which it detailed its difficulty locating first-hand accounts of the AI-upgraded assistant’s use online. The Verge asked Amazon about the story, and the company has responded to say that the idea that Alexa Plus isn’t available is “simply wrong.”

Here’s the company’s full — and rather strongly worded! — statement on the matter, provided by Amazon spokesperson Eric Sveum via email to The Verge:

It’s simply wrong to say that Alexa+ isn’t available to customers—that assertion is false. Hundreds of thousands of customers have access to Alexa+ and we’re constantly inviting more customers that have requested Early Access.

Sveum also shared the below screenshot of what the email invite should look like.

Screenshot of an Alexa Plus invite email.

Alexa Plus is Amazon’s generative AI-updated version of Alexa, which it announced in February is free to Amazon Prime subscribers or $19.99 a month otherwise.

While Reuters doesn’t say Alexa Plus isn’t available to customers yet, it does quote an analyst who said, “There seems to be no one who actually has it.”

The outlet also reported that its efforts to find any real-world Alexa Plus users came up empty, writing that it had “searched dozens of news sites, YouTube, TikTok, X, BlueSky and Meta’s Instagram and Facebook, as well as Amazon’s Twitch and reviews of Echo voice-assistant devices on Amazon.com.” It added that it spoke with two people who’d posted on Reddit claiming to have used Alexa Plus, but that they “did not provide Reuters with hard evidence and their identities could not be corroborated.”

Still, Engadget reported today that a wave of emails had gone out on Friday, inviting Amazon Alexa users to try out Alexa Plus. The outlet also reported that an Amazon spokesperson had told it “hundreds of thousands” of customers have tried the assistant.

Amazon started rolling out its early access program to a few customers at the end of March. At the time, it was missing features like the ability to order takeout from Grubhub using conversational context, or identify family members and remind them about chores. A page on Amazon’s website notes that some features are still “coming soon,” like being able to access Alexa Plus in a web browser or on a Fire TV or Amazon tablet. The company has said it’s prioritizing those who own certain Echo Show devices above others.

How a DoorDash driver scammed the company out of $2.5 million

A former DoorDash delivery driver pleaded guilty this week to a wire fraud conspiracy that scammed the company out of over $2.5 million, the US Attorney’s Office in California’s Northern District announced on Tuesday. He and others pulled it off over a period of months using fake customer accounts, fraudulent driver accounts, deliveries that never happened, and access to DoorDash employee credentials.

Here’s how the US Attorney’s Office describes the scheme. The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office.

Devagiri faces up to 20 years in prison and a $250,000 fine, and is scheduled for a status hearing in September. He and four others were charged in August for their roles in the scheme, which prosecutors say was carried out between November 2020 and February 2021. The DoorDash employee whose insider credentials they used, Tyler Thomas Bottenhorn, was charged separately in 2022 and pleaded guilty the following year, the Attorney’s Office wrote in October.

Epic asks judge to make Apple let Fortnite back on the US App Store

Epic CEO Tim Sweeney. | Image: Cath Virginia / The Verge

In a new court filing, Epic is asking District Judge Yvonne Gonzalez Rogers to order Apple to review — and approve, if it complies with Apple’s guidelines — Epic’s submission of Fortnite to the US App Store. The company argues in the document that Apple is once again in contempt of the judge’s April order restricting it from rejecting apps over their use of outside payment links.

In a letter from Apple that Epic shared late Friday, Apple writes that it won’t “take action on the Fortnite app submission until after the Ninth Circuit rules on our pending request for a partial stay of the new injunction.” Epic claims the delay is retaliation for its legal fight with the company, and notes in its filing that Apple “expressly and repeatedly” told it and the court that it would approve Fortnite if the app complied with Apple’s guidelines, which it insists its current submission does.

Following Gonzalez Rogers’ decision in April, Epic said Fortnite would return to the US App Store. The company has since submitted the game twice, most recently to include content from an update to the EU version of the game. But instead of Apple approving Fortnite in the US, the game disappeared from the EU App Store.

Epic claimed that it can’t release the update in the EU because of Apple’s decision to block its US submission. Apple said it had merely asked Epic to resubmit the app without including the US storefront, “so as not to impact Fortnite in other geographies.” But in a post announcing its new filing, Epic claims that would mean it has to submit multiple versions of the app, which it says is against Apple’s guidelines.

Epic is asking that the court enforce its injunction, find Apple in contempt again, and require the company to “accept any compliant Epic app, including Fortnite, for distribution on the U.S. storefront of the App Store.”

The hitch here is that throughout this case, Judge Gonzalez Rogers hasn’t gone so far as to require Fortnite’s return to the store, finding in her 2021 ruling that Epic had still knowingly broken its developer agreement with Apple. 9to5Mac writes that the judge would likely need to agree that Apple is once again in contempt of court, as she did in her April 30th ruling. The difference between now and then — and what could work in Epic’s favor — is just how annoyed she seemed with Apple in the text of that ruling.

Apple did not immediately respond to The Verge’s request for comment.

SoundCloud changes its TOS again after an AI uproar

Music-sharing platform SoundCloud is saying it “has never used artist content to train AI models,” and that it’s “making a formal commitment that any use of AI on SoundCloud will be based on consent, transparency, and artist control.” The update comes several days after artists reported that changes made last year to its terms of use could mean it reserved the right to use their music and other content to train generative AI tools.

“The language in the Terms of Use was too broad and wasn’t clear enough. It created confusion, and that’s on us,” writes SoundCloud CEO Eliah Seton.

The terms that SoundCloud is currently using were updated in February last year with text including this passage:

In the absence of a separate agreement that states otherwise, You explicitly agree that your Content may be used to inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies or services as part of and for providing the services.

But Seton says that “in the coming weeks,” that line will be replaced with this:

We will not use Your Content to train generative AI models that aim to replicate or synthesize your voice, music, or likeness without your explicit consent, which must be affirmatively provided through an opt-in mechanism.

Seton reiterates that SoundCloud has never used member content to train AI, including large language models, for music creation or to mimic or replace members’ work. And, echoing what a SoundCloud spokesperson told The Verge in an email over the weekend, Seton says if the company does use generative AI, it “may make this opportunity available to our human artists with their explicit consent, via an opt-in mechanism.”

Ed Newton-Rex, the tech ethicist who first discovered the change, isn’t satisfied with the new language. In an X post, he says it could still allow for “models trained on your work that might not directly replicate your style but that still compete with you in the market.” According to Newton-Rex, “If they actually want to address concerns, the change required is simple. It should just read ‘We will not use Your Content to train generative AI models without your explicit consent.’”

SoundCloud did not immediately respond to The Verge’s request for comment.

Apple might let you scroll with your eyes in the Vision Pro

Apple is testing a feature that will let users scroll through Vision Pro apps using the headset’s eye-tracking capability, according to Bloomberg’s Mark Gurman.

Eye-based scrolling will apparently work across all of Apple’s built-in apps, and Gurman says the company is working on letting third-party developers use the feature, too. How it would actually function is a mystery, but I could see a system where you have to look at the edge of a page long enough to start scrolling, or focus on a UI element, then look above or below it to move the page.

You can do a version of eye-based scrolling with the Dwell Control accessibility feature, which lets you open menus or carry out actions by briefly resting your eyes on items in your view. To scroll, you can gaze at an icon until the page scrolls a set amount — it’s clunky, and I’d be surprised if what Apple is testing works the same way. 

Apart from accessibility alternatives, other ways to scroll include the default — pinching with your finger and thumb and raising or lowering your hand — connecting a Bluetooth mouse, or using the analog stick on a wireless game controller.

Over the weekend, Gurman wrote in the subscriber edition of his Power On newsletter for Bloomberg that Apple is planning a “pretty feature-packed release” for visionOS 3, so we may hear more about this new eye-tracked scrolling feature during its June WWDC show.

Max was an all-time bad rebrand

Max, formerly HBO Max, is now HBO Max again. Warner Bros. Discovery announced the change today, rolling back one of the clumsiest rebrands in history from a streaming service that's had more than its share of clumsy ideas.

Right from the start, it seemed like everyone outside of Warner Bros. Discovery knew the change was a bad one. The first reactions were brutal, like this take from Design Matters host Debbie Millman in a Fast Company story:

"I am completely bewildered by the HBO Max rebrand," says Debbie Millman, a designer, brand consultant, and host of the Design Matters podcast. "HBO took four decades of prestige and casually tossed it all into a dumpster, lit a match, and cheered as it burned."

Or this from Inc, a month after the rebrand, in a story about the Max app’s bad redesign:

According to the company, the problem with HBO Max was the HBO part. There is an important lesson here, which is that your brand is not what you call your streaming service. Your brand is the way your customers feel about whatever you make. In this case, the feelings are pretty clear: this is a bad idea.

Digs at the rebrand are littered all over the place. Like an Engadget story headline …

Read the full story at The Verge.

Google will let restaurants highlight specials on their search profiles

Google is preparing to add a new section to restaurant and bar search profiles. The new section will let owners include timely deals or events in search results in a way that they can control, either manually or by linking to a social media profile, the company says in a post spotted by Search Engine Land.

The promos will show up under the label “This week” in the block of information that you’ll get for most local businesses in a Google search. Google suggests that companies will use it for things like daily specials or to promote upcoming live music events. It’ll launch first in the US, UK, Canada, Australia, and New Zealand for “single Food and Drink businesses,” the company says, though it’s not clear when.

We're excited to announce a new way for restaurants and bars to highlight events, deals, and specials prominently at the top of your Google Business Profile. "What's Happening" puts your timely updates, like "Today's Special" or "Live Music on Saturday," front and center! pic.twitter.com/sRO6VmWnhY

— Google Business Profile (@GoogleMyBiz) May 13, 2025

Called “What’s Happening” on the business-facing side, the section can be updated by bars and restaurants through Posts on Google, which Google launched nearly a decade ago for brands and celebrities before expanding it to local businesses, letting them post content directly to a carousel in their Search profiles. Alternatively, companies can link their Instagram, Facebook, or X accounts for automatic cross-posting.

Paul McCartney and Dua Lipa call on the UK to pass AI copyright transparency law

Dua Lipa performs on May 12th in Spain. | Photo by Aldara Zarraoa/Redferns for ABA

Last week, Paul McCartney, Dua Lipa, Ian McKellen, Elton John, and hundreds of others in the UK creative industry signed an open letter backing an effort to force AI firms to reveal the copyrighted works used to train their models. They support an amendment to the UK’s Data (Use and Access) Bill, proposed by letter organizer Beeban Kidron, that would add the requirement — one the UK government has opposed.

The British House of Lords passed the amendment yesterday, 272 to 125, reports The Guardian, and now it’s going back to the House of Commons, where the amendment could be removed again. The British government says the fight over the amendment “is holding back both the creative and tech sectors and needs to be resolved by new legislation,” writes The Guardian. 

From the letter:

We will lose an immense growth opportunity if we give our work away at the behest of a handful of powerful overseas tech companies, and with it our future income, the UK’s position as a creative powerhouse, and any hope that the technology of daily life will embody the values and laws of the United Kingdom.

Also signed by many media companies, music publishers, and arts organizations, the letter insists that the amendments “will spur a dynamic licensing market that will enhance the role of human creativity in the UK, positioning us as a key player in the global AI supply chain.”

Companies like OpenAI and Meta have been accused in court of using copyrighted material without permission to train their models. Baroness Beeban Kidron, who tabled the amendment, writes that although the UK’s creative industries welcome creative advancements enabled by AI, “…how AI is developed and who it benefits are two of the most important questions of our time.” 

“My lords,” The Guardian quotes Kidron as saying yesterday, “it is an assault on the British economy and it is happening at scale to a sector worth £120bn to the UK, an industry that is central to the industrial strategy and of enormous cultural import.”

Airbnb’s new app has all of your vacation extras in one place

Airbnb has announced a major redesign of its app, expanding its offerings well beyond private home rentals to include new Airbnb Services alongside the company’s Airbnb Experiences. And you don’t have to book a stay to use the new offerings.

At the moment, these new services cover ten categories, including personal chefs, catering, massages, personal trainers, and spa treatments. The company says its providers “are vetted for quality through an evaluation of expertise and reputation,” and that they “have an average of 10 years of experience, have completed Airbnb’s identity verification process, and are required to submit relevant licenses and certifications.” Services range in cost, with some entry-level offerings priced under $50, according to the company.

The app redesign will also include “reimagined” Airbnb Experiences you can participate in that can be as simple as a host-led city tour or as wild as learning Mexican wrestling techniques in the ring with a luchador. The company is taking applications from those interested in offering either services or experiences.

As for the app’s redesign, Airbnb says it’s added a new Explore tab homepage that offers recommendations for homes, experiences, and services in specific destinations, a Trips tab for setting up your itinerary, and a redesigned messaging platform with photo and video sharing. The host side of the app is revamped, as well, the company says — that includes a new reservations management tab with details about upcoming reservations, an updated calendar with a daily view and hourly schedule, and new listing management tools.

PayPal launches iPhone NFC payments in Germany after EU forced Apple to open up

German iPhone users are starting to report that they’re now able to use PayPal’s tap-to-pay feature at in-store payment terminals, according to German tech site iPhone Ticker. The new capability, which PayPal announced earlier this month, is a result of the EU forcing Apple to open iPhone NFC chips up for third-party contactless payments under the Digital Markets Act.

PayPal’s contactless wallet works with terminals that support Mastercard payments and is iPhone-only for now, so it won’t work on an Apple Watch, iPhone Ticker reports (via a machine translation). In December, a Norwegian payment app called Vipps became the first to take advantage of Apple’s changing ecosystem, nearly a year after Apple first announced it was opening up its NFC hardware to third-party wallet apps for EU users.

Apple hasn’t opened iPhone tap-to-pay only in the EU. It announced in August that it would let developers offer in-app NFC-based payments in the US and other regions, too. The company also lets businesses use iPhone NFC readers to accept contactless payments in third-party apps, a capability PayPal started offering in Venmo and PayPal Zettle in March of last year.

Google Photos adds ‘Quick Edits’ when you share images

Google Photos on a Pixel 6.

Google has announced a new “Quick Edits” feature that lets you edit an image as you’re sharing it, without necessarily saving those changes to your library.

We’re not talking deep edits here: you’ll be able to crop or auto-enhance the image, then compare it to the unedited version before you let the photo fly. Whatever changes you make will be saved when sharing in Google Photos itself, creating a shared link, or adding the image to a shared Google Photos album. However, they won’t be saved if you’re sharing through another app like Google Messages or WhatsApp. 

The option to create these edits will appear automatically when you go to share an image. If having that extra step shoved into your sharing workflow isn’t your cup of tea, there’s an option to turn it off when you tap the Settings icon on that screen.

The feature only works for images that you haven’t already edited, and “typically doesn’t appear for photos that have been edited and sorted into document albums such as receipts or screenshots.” And it’s only available in Google Photos for Android 14 and up.

Apple will let the Vision Pro ‘see’ for you

Apple previewed new Vision Pro accessibility features today that could turn the headset into a proxy for eyesight when they launch in visionOS later this year. The update uses the headset’s main camera to magnify what a user sees or to enable live, machine-learning-powered descriptions of surroundings.

The new magnification feature works on virtual objects as well as real-world ones. An example from Apple’s announcement shows a first-person view as a Vision Pro wearer goes from reading a zoomed-in view of a real-world recipe book to the Reminders app, also blown up to be easier to read. It’s a compelling use for the Vision Pro, freeing up a hand for anyone who might’ve done the same thing with comparable smartphone features.

Animation showing Apple’s new magnification feature in the Vision Pro.

Also as part of the update, Apple’s VoiceOver accessibility feature will “describe surroundings, find objects, read documents, and more” in visionOS.

The company will release an API to give approved developers access to the Vision Pro’s camera for accessibility apps. That could be used for things like “live, person-to-person assistance for visual interpretation in apps like Be My Eyes, giving users more ways to understand their surroundings hands-free.” It may not seem like much now, given the Vision Pro’s reportedly meager sales, but the features could be useful in rumored future Apple wearables, like camera-equipped AirPods or even new Apple-branded, Meta Ray-Ban-like smart glasses.

Finally, Apple says it’s adding a new protocol in visionOS, iOS, and iPadOS that supports brain-computer interfaces (BCI) through its Switch Control accessibility feature, which provides alternate input methods such as controlling your phone with head movements captured by your iPhone’s camera.

A Wall Street Journal report today explains that Apple worked on this new BCI standard with Synchron, a brain implant company that lets users select icons on a screen by thinking about them. The report notes that Synchron’s tech doesn’t enable things like mouse movement, which Elon Musk’s Neuralink has accomplished.
