Elden Ring will soon be more than just a hit video game. On Thursday, Bandai Namco and A24 announced a live-action Elden Ring film directed by Alex Garland.
Japanese developer FromSoftware released Elden Ring across Xbox, PlayStation, and PC in 2022, and a version for the Nintendo Switch 2 is set to arrive this year. The action RPG became an instant hit; it puts you in the role of a Tarnished tasked with restoring the Elden Ring by defeating challenging bosses throughout the Lands Between. The Elden Ring spinoff Nightreign is coming out on May 30th, 2025.
Garland is a writer, director, and producer best known for films like Ex Machina, 28 Days Later, and Dredd.
FromSoftware first hinted at the potential to expand Elden Ring “beyond the realm of games” in 2022. Last year, George R.R. Martin — the A Song of Ice and Fire creator who helped write the game — also said, “There is some talk about making a movie out of Elden Ring.” A publication called Nexus Point News first reported on the adaptation with A24 earlier this month, but pulled its original article and didn’t explain why.
As YouTube prepares to air an exclusive NFL opening week game for free on September 5th, it’s hiring former Disney exec Justin Connolly. The move prompted Disney to sue both YouTube and Connolly, saying he was in the middle of leading the team negotiating Disney’s license renewal with YouTube.
He’d most recently been running the streaming services and linear media networks at Disney, and will take over as YouTube’s new global head of media and sports, as first reported by Bloomberg. After spending more than 20 years at Disney and ESPN, he’ll be managing YouTube’s relationship with the media companies that distribute content on YouTube TV, as well as leading its live sports coverage.
Bloomberg also first reported the lawsuit, filed yesterday in California Superior Court, in which Disney alleges that YouTube induced Connolly to breach a three-year employment contract that started in January and would’ve kept him at the company, with an early termination option on March 1st, 2027. Connolly left Disney last week, just months before the launch of its standalone ESPN streaming service this fall.
In its complaint (which you can read below), Disney’s lawyers write:
Critically, Connolly leads the Disney team negotiating a license renewal with YouTube. Connolly has intimate knowledge of Disney’s other distribution deals, the financial details concerning Disney’s content being licensed to YouTube, and Disney’s negotiation strategies, both in general and in particular with respect to YouTube. It would be extremely prejudicial to Disney for Connolly to breach the contract which he negotiated just a few months ago and switch teams when Disney is working on a new licensing deal with the company that is trying to poach him.
YouTube did not comment on the lawsuit.
YouTube has become a growing force in live sports, with its live TV streaming service amassing more than 8 million subscribers and adding the NFL Sunday Ticket package in 2023. Earlier this year, YouTube revealed that it has become more popular on TVs than on phones.
Other streaming companies have also increased their focus on sports recently, with Amazon preparing to broadcast NBA games and Inside the NBA next season, Apple’s close relationship with MLB and MLS, as well as Netflix’s broadcasts with the NFL and other events.
Update, May 22nd: Added details of Disney’s lawsuit.
A source tells Bloomberg that Apple’s glasses will be similar to Meta’s but “better made.”
Apple is planning to debut its first pair of smart glasses next year, according to a report from Bloomberg. The upcoming glasses will reportedly come with cameras, microphones, and speakers, “allowing them to analyze the external world and take requests via the Siri voice assistant,” Bloomberg says.
The glasses would also be capable of taking phone calls, controlling music playback, performing live translations, and offering directions. They’ll also reportedly feature an in-house chip, though plans for incorporating augmented reality still “remain years away.”
A source tells Bloomberg that Apple’s device will be similar to Meta’s Ray-Ban smart glasses, “but better made.” Meta sold more than 1 million pairs of its Ray-Ban smart glasses last year, while Google just announced that it’s working with Xreal, Warby Parker, Samsung, and Gentle Monster to create AI smart glasses on its Android XR platform.
In addition to ramping up work on smart glasses, Bloomberg reports that Apple has scrapped plans to create a smartwatch with cameras and AI features, like Visual Intelligence. Apple is still working on AirPods with cameras, Bloomberg says.
Mozilla is shutting down Pocket, the handy bookmarking tool used to save articles and webpages for later. The organization announced that Pocket will stop working on July 8th, 2025, as Mozilla begins concentrating its “resources into projects that better match their browsing habits and online needs.”
Following the shutdown, you’ll only be able to export saves until October 8th, 2025, which is when Mozilla will permanently delete user data. Mozilla says it will start automatically canceling subscriptions as well, and will issue prorated refunds to users subscribed to its annual plan on July 8th.
It has also taken down the Pocket web extension and app as of May 22nd, 2025, but users who have already installed the app will be able to re-download it until October 8th.
Pocket — originally called Read It Later — launched in 2007 and grew in popularity as people used it to keep track of the articles, recipes, videos, and more that they planned to revisit. In 2015, Mozilla added Pocket to Firefox as the browser’s default read-it-later app, and then acquired it two years later.
Mozilla says it’s shuttering Pocket because “the way people save and consume content on the web has evolved.” Pocket’s email newsletter, called Pocket Hits, will continue under a new name, “Ten Tabs,” but it will no longer have a weekend edition.
In addition to shutting down Pocket, Mozilla is also sunsetting its fake reviews detector, Fakespot. “We acquired Fakespot in 2023 to help people navigate unreliable product reviews using AI and privacy-first tech,” Mozilla says. “While the idea resonated, it didn’t fit a model we could sustain.” Review Checker, the Fakespot-powered tool built into Firefox, is shutting down on June 10th, 2025, too.
“This shift allows us to shape the next era of the internet — with tools like vertical tabs, smart search and more AI-powered features on the way,” Mozilla says. “We’ll continue to build a browser that works harder for you: more personal, more powerful and still proudly independent.”
If you’ve ever wondered how the Internet Archive uploads all the physical documents on its site, now you can get a behind-the-scenes look at the process. The Internet Archive launched a new YouTube livestream that shows the digitization of microfiche in real time — complete with some relaxing, lo-fi beats.
Microfiche is a sheet of film that contains multiple images of miniaturized documents. It’s an older medium for storing newspapers, court documents, government records, and other important materials. The Internet Archive uses these microfiche cards to digitize and upload documents to its online library.
“Operators feed microfiche cards beneath a high-resolution camera, which captures multiple detailed images of each sheet,” Chris Freeland, the Internet Archive’s director of library services, writes in a post on the site. “Software stitches these images together, after which other team members use automated tools to identify and crop up to 100 individual pages per card.”
From there, the Internet Archive processes the pages, makes them text-searchable, and then uploads them to its public collections.
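For a rough sense of what that cropping and text-extraction stage involves, here’s a minimal, hypothetical Python sketch. The 10-by-10 grid, the file name, and the helper functions are illustrative assumptions rather than the Internet Archive’s actual tooling, which stitches its own captures and crops pages with its own automated software.

```python
# Hypothetical sketch of the crop-and-OCR step described above.
# Assumes a stitched microfiche capture as input; the grid size, file name,
# and function names are illustrative, not the Internet Archive's software.
from dataclasses import dataclass

from PIL import Image      # pip install pillow
import pytesseract         # pip install pytesseract (requires the Tesseract OCR engine)


@dataclass
class Page:
    image: Image.Image
    text: str


def crop_pages(sheet: Image.Image, cols: int = 10, rows: int = 10) -> list[Image.Image]:
    """Split a stitched microfiche sheet into up to cols * rows page images."""
    width, height = sheet.size
    cell_w, cell_h = width // cols, height // rows
    pages = []
    for row in range(rows):
        for col in range(cols):
            box = (col * cell_w, row * cell_h, (col + 1) * cell_w, (row + 1) * cell_h)
            pages.append(sheet.crop(box))
    return pages


def process_sheet(path: str) -> list[Page]:
    """OCR each cropped page so the collection can be made text-searchable."""
    sheet = Image.open(path)
    return [Page(img, pytesseract.image_to_string(img)) for img in crop_pages(sheet)]


if __name__ == "__main__":
    pages = process_sheet("microfiche_sheet_0001.png")  # assumed stitched capture
    print(f"Extracted {len(pages)} pages; first page preview:")
    print(pages[0].text[:200])
```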
The livestream runs Monday through Friday, from 10:30AM to 6:30PM ET. “During the day, you’ll see scanners working on custom machines to digitize all the microfiche in the world,” Tung says. “During the off hours, you can also see everything else that the Archive has to offer, like silent films in the public domain or historical pictures from NASA.”
Verizon has asked the Federal Communications Commission to get rid of the rule requiring it to unlock phones after 60 days. In a letter to the FCC spotted by LightReading, Verizon claims the current unlocking requirement “benefits bad actors and fraudsters.”
The FCC first imposed an unlocking requirement following Verizon’s purchase of C-Block spectrum in 2008. It forced Verizon to allow customers to switch to a new cellular carrier after purchasing a phone from the company, making it easier to switch away from Verizon than from other providers.
But now, Verizon wants to keep phones locked for longer than 60 days, calling the FCC’s current requirement “outdated regulation that has become both burdensome and harmful.” The company also says eliminating the rule aligns with the FCC’s recent initiative to get rid of “unnecessary” regulations.
It adds that “recent industry experience shows that even a lock of 60 days does not deter device fraud,” which is why the “industry standard” for providers who don’t have to abide by the 60-day unlocking rule is a minimum of six months.
“Waiving this rule will benefit consumers because it will allow Verizon to continue offering subsidies and other mechanisms to make phones more affordable,” Verizon says. “Waiving the rule also will benefit competition because it will eliminate the distorted playing field that currently exists.”
Humane’s AI Pin. | Photo by Amelia Holowaty Krales / The Verge
More details are trickling out about Jony Ive and Sam Altman’s new AI device. In a post on Thursday, Apple analyst Ming-Chi Kuo says his research indicates that the device could be larger than Humane’s AI Pin, but with a “form factor as compact and elegant as an iPod Shuffle.”
Kuo adds that “one of the intended use cases” is wearing the device around your neck. It also may not come with a display, Kuo says, featuring just built-in cameras and microphones for “environmental detection.” The device could also connect to smartphones and PCs to use their computing and display capabilities.
My industry research indicates the following regarding the new AI hardware device from Jony Ive's collaboration with OpenAI: 1. Mass production is expected to start in 2027. 2. Assembly and shipping will occur outside China to reduce geopolitical risks, with Vietnam currently the…
This latest leak aligns with a report from The Wall Street Journal, which says the device will be aware of a user’s life and surroundings, but probably won’t be a pair of glasses. On Wednesday, Altman revealed that OpenAI is buying Ive’s AI hardware company, io, for $6.5 billion, and that Ive will “take over design for all of OpenAI, including its software.” Ive and Altman are aiming to launch their first devices in 2026.
When you select “hear the highlights,” a playback bar will appear at the bottom of the screen. | Image: Amazon
Amazon is testing new AI-generated audio summaries that will let you listen to two AI “hosts” chat about a product’s features. Along with product details, the AI audio clips also draw on user reviews and information from the web.
The audio summaries open with a “friendly reminder” that you’re listening to an AI-generated clip, followed by an introduction to an “expert” AI host who’s supposed to give you a rundown of a product’s features. It’s similar to Google’s AI-generated audio overviews, which have two AI hosts discuss your research, documents, or slides in a podcast-like format.
In the clip for the SHOKZ OpenRun Pro, the AI host introduces us to “Max,” who says the key difference about these headphones is that they “conduct sound through your cheekbones instead of going into your ears.” The AI host then follows up with questions about the headphones, like who would benefit from the design and whether the sound quality is up to par.
“While the microphone gets praise for noise cancellation, some users find they’re not loud enough for an immersive music experience,” the AI “expert” Max says. “But customers do mention they’re better than earpods in certain situations.”
Amazon’s AI-generated audio summaries are currently only available to some customers in the US, but the company plans on bringing them to more products and customers in the “coming months.”
The trade association backing some of the biggest news publishers in the US slammed Google’s newly expanded AI Mode, which trades traditional search results for an AI chatbot-like interface. In a statement on Wednesday, the News/Media Alliance said the new feature is “depriving” publishers of both traffic and revenue.
During Google I/O on Tuesday, the company announced that it’s expanding AI Mode, which appears in a new tab directly within Search, to all users in the US. When users enter a query, AI Mode serves up an AI-generated response alongside a list of relevant links.
“Links were the last redeeming quality of search that gave publishers traffic and revenue,” Danielle Coffey, the CEO and president of News/Media Alliance, said in the statement. “Now Google just takes content by force and uses it with no return, the definition of theft. The DOJ remedies must address this to prevent continued domination of the internet by one company.”
This week, an internal document disclosed as part of Google’s antitrust trial over its search dominance showed that the company decided against asking publishers for permission to have their work included in its AI search features, as reported by Bloomberg. Instead, publishers must opt out of search results completely if they don’t want their work included in AI features.
Google Search head Liz Reid said during her testimony that allowing publishers to opt out of individual features would add “enormous complexity,” according to Bloomberg. “By saying a publisher could be like, ‘I want to be in this feature but not that feature,’ it doesn’t work,” Reid said. “Because then we would essentially have to say, every single feature on the page needs a different model.”
Luminar, a company that develops lidar systems for autonomous vehicles, started laying off workers just one day after its founder and CEO, Austin Russell, abruptly resigned. In a regulatory filing spotted by TechCrunch, the company said it began carrying out restructuring efforts on May 15th, which include a “reduction in its workforce.”
The company doesn’t say how many employees are affected by this most recent round of layoffs, but it expects to spend around $4 million to $5 million on its restructuring plans. Luminar appointed Paul Ricci as CEO after Russell stepped down last week, but Russell will remain on Luminar’s board.
Google’s AI search results are about to get even more ads. On May 21st, the company announced that it’s going to start testing ads in AI Mode, the new AI-powered search feature that just rolled out to everyone in the US.
AI Mode is the new tab in Google Search that opens an AI chatbot-like interface, where you can get a rundown of what you’re searching for, along with links to relevant websites. But these answers could soon have product recommendations and other ads.
As an example, Google says if a user asks AI Mode for tips on how to build a website, the feature will surface a step-by-step guide on how to get started. It might even show a “helpful ad” for a website builder, which will have a “sponsored” label on it. Google says it’s testing search and shopping ads in AI Mode for users on desktop and mobile.
The search giant is also expanding its ads in AI Overviews — the AI-generated summaries that appear at the top of some search results — from mobile to desktop devices. Now, if you’re searching for advice on how to bring small dogs on flights, you might see a “sponsored” list of small dog carriers and links to buy them beneath the AI Overview.
Ads in AI Overviews on desktop are rolling out to everyone in the US starting today. Google also plans to bring ads to English-language AI Overviews in “select countries” later this year.
A 19-year-old college student will plead guilty to carrying out a massive hack against PowerSchool, a popular student information system used by schools around the country. On Tuesday, the Department of Justice said Matthew Lane of Massachusetts agreed to plead guilty to four counts, including cyber extortion, unauthorized access to protected computers, and aggravated identity theft.
Though the DOJ doesn’t identify PowerSchool by name, the details it outlines line up with the attack, such as the hacker’s threat to leak the names, email addresses, phone numbers, Social Security numbers, dates of birth, and medical information of tens of millions of students and teachers if the company didn’t pay a $2.85 million ransom. A source close to the situation also tells NBC News that the company in question is PowerSchool.
In January, PowerSchool said it became aware of a data breach involving the “unauthorized exfiltration of certain personal information” from its customer support portal, PowerSource. The company later revealed that it paid the ransom in an attempt to keep the attacker from making its information public.
However, PowerSchool customers later received additional threats to expose stolen data. “As is always the case with these situations, there was a risk that the bad actors would not delete the data they stole, despite assurances and evidence that were provided to us,” PowerSchool said.
The DOJ accuses Lane of breaking into PowerSchool using stolen login credentials and transferring the information of students and teachers to a computer server in Ukraine.
The agency also charged Lane with breaching and extorting another unnamed US-based telecom company.
“As alleged, this defendant stole private information about millions of children and teachers, imposed substantial financial costs on his victims, and instilled fear in parents that their kids’ information had been leaked into the hands of criminals — all to put a notch in his hacking belt,” US Attorney Leah Foley said in the press release.
Windows in Android’s desktop mode can stretch and move across your screen.
Google is working with Samsung to bring a desktop mode to Android. During Google I/O’s developer keynote, engineering manager Florina Muntenescu said the company is “building on the foundation” of Samsung’s DeX platform “to bring enhanced windowing capabilities in Android 16,” as spotted earlier by 9to5Google.
Samsung first launched DeX in 2017; the feature automatically adjusts your phone’s interface and apps when it’s connected to a larger display, allowing you to use your phone like a desktop device.
A demo during the presentation revealed a Samsung DeX-like layout, with apps like Gmail, Chrome, YouTube, and Google Photos centered in the taskbar at the bottom of the screen. It also showed how Android 16’s adaptive apps can move and stretch across the screen. The time sits at the top-left corner of the screen, with the Wi-Fi signal and battery on the right.
In March, Android Authority’s Mishaal Rahman reported on Google’s plans to create a desktop mode of its own, and later enabled an early version of the feature on a Pixel device. Google shared more details in a blog post about the update, saying Android 16’s emphasis on adaptiveness will also help apps work on more kinds of devices, like foldables, tablets, Chromebooks, mixed reality wearables, and even cars.
Shahram Izadi, Google’s head of Android XR, talking about the advantage of using Gemini.
Google’s AI models have a secret ingredient that’s giving the company a leg up on competitors like OpenAI and Anthropic. That ingredient is your data, and Google has only just scratched the surface of how it can use your information to “personalize” Gemini’s responses.
Earlier this year, Google started letting users opt in to its “Gemini with personalization” feature, which lets the AI model tap into your search history “to provide responses that are uniquely insightful and directly address your needs.” But now, Google is taking things a step further by unlocking access to even more of your information, all in the name of providing you with more personalized, AI-generated responses.
During Google I/O on Tuesday, Google introduced something called “personal context,” which will allow Gemini models to pull relevant information from across Google’s apps, as long as it has your permission. One way Google is doing this is through Gmail’s personalized smart replies: the AI-generated messages that you can use to quickly reply to emails.
To make these AI responses sound “authentically like you,” Gemini will pore over your previous emails and even your Google Drive files to craft your replies.
Google just wrapped up its big keynote at I/O 2025. As expected, it was full of AI-related announcements, ranging from updates across Google’s image and video generation models to new features in Search and Gmail.
But there were some surprises, too, like a new AI filmmaking app and an update to Project Starline. If you didn’t catch the event live, you can check out everything you missed in the roundup below.
Google has announced that it’s rolling out AI Mode, a new tab that lets you search the web using the company’s Gemini AI chatbot, to all users in the US starting this week.
Google will test new features in AI Mode this summer, such as deep search and a way to generate charts for finance and sports queries. It’s also rolling out the ability to shop in AI Mode in the “coming months.”
Project Starline, which began as a 3D video chat booth, is taking a big step forward. It’s becoming Google Beam and will soon launch inside an HP-branded device with a light field display and six cameras to create a 3D image of the person you’re chatting with on a video call.
Companies like Deloitte, Duolingo, and Salesforce have already said that they will add HP’s Google Beam devices to their offices.
Google has announced Imagen 4, the latest version of its AI text-to-image generator, which the company says is better at generating text and offers the ability to export images in more formats, like square and landscape. Its next-gen AI video generator, Veo 3, will let you generate video and sound together, while Veo 2 now comes with tools like camera controls and object removal.
In addition to updating its AI models, Google is launching a new AI filmmaking app called Flow. The tool uses Veo, Imagen, and Gemini to create eight-second AI-generated video clips based on text prompts and / or images. It also comes with scene-builder tools to stitch clips together and create longer AI videos.
Gemini 2.5 Pro adds an “enhanced” reasoning mode
The experimental Deep Think mode is meant for complex queries related to math and coding. It’s capable of considering “multiple hypotheses before responding” and will only be available to trusted testers first.
Google has also made its Gemini 2.5 Flash model available to everyone on its Gemini app and is bringing improvements to the cost-efficient model in Google AI Studio ahead of a wider rollout.
Xreal and Google are teaming up on Project Aura, a new pair of smart glasses that use the Android XR platform for mixed-reality devices. We don’t know much about the glasses just yet, but they’ll come with Gemini integration and a large field of view, along with what appear to be built-in cameras and microphones.
Last year we unveiled Project Astra on the #GoogleIO stage. See how it’s evolved since then — and what might be possible in the future.
Project Astra could already use your phone’s camera to “see” the objects around you, but the latest prototype will let it complete tasks on your behalf, even if you don’t explicitly ask it to. The model can choose to speak based on what it’s seeing, such as pointing out a mistake on your homework.
Google is building its AI assistant into Chrome. Starting on May 21st, Google AI Pro and Ultra subscribers will be able to select the Gemini button in Chrome to clarify or summarize information across webpages and navigate sites on their behalf. Google plans on letting Gemini work across multiple tabs at once later this year.
Google is rolling out a new “AI Ultra” subscription that offers access to the company’s most advanced AI models and higher usage limits across apps like Gemini, NotebookLM, Flow, and more. The subscription also includes early access to Gemini in Chrome and Project Mariner, which can now complete up to 10 tasks at once.
Speaking of Project Astra, Google is launching Search Live, a feature that incorporates capabilities from the AI assistant. By selecting the new “Live” icon in AI Mode or Lens, you can talk back and forth with Search while showing what’s on your camera.
After making Gemini Live’s screensharing feature free for all Android users last month, Google has announced that iOS users will be able to access it for free, as well.
Google has revealed Stitch, a new AI-powered tool that can generate interfaces using selected themes and a description. You can also incorporate wireframes, rough sketches, and screenshots of other UI designs to guide Stitch’s output. The experiment is currently available on Google Labs.
Google Meet is launching a new feature that translates your speech into your conversation partner’s preferred language in near real-time. The feature only supports English and Spanish for now. It’s rolling out in beta to Google AI Pro and Ultra subscribers.
Gmail’s smart reply feature, which uses AI to suggest replies to your emails, will now use information from your inbox and Google Drive to prewrite responses that sound more like you. The feature will also take your recipient’s tone into account, allowing it to suggest more formal responses in a conversation with your boss, for example.
Gmail’s upgraded smart replies will be available in English on the web, iOS, and Android when it launches through Google Labs in July.
Google is testing a new feature that lets you upload a full-length photo of yourself to see how shirts, pants, dresses, or skirts might look on you. It uses an AI model that “understands the human body and nuances of clothing.”
Google will also soon let you shop in AI Mode, as well as use an “agentic checkout” feature that can purchase products on your behalf.
If Chrome detects that your password’s been compromised, Google says the browser will soon be able to “generate a strong replacement” and automatically update it on supported websites. The feature launches later this year, and Google says that it will always ask for consent before changing your passwords.
Google is taking its virtual try-on feature to a new level. Instead of seeing what a piece of clothing might look like on a wide range of models, it’s now testing a feature that lets you upload a photo of yourself to see how it might look on you.
The new feature is rolling out in Search Labs in the US today. Once you opt into the experiment, you can check it out by selecting the “try it on” button next to pants, shirts, dresses, and skirts that appear in Google’s search results. Google will then ask for a full-length photo, which the company will use to generate an image of you wearing the piece of clothing you’re shopping for. You can save and share the images.
Google says the feature uses an AI model that “understands the human body and nuances of clothing — like how different materials fold, stretch and drape on different bodies.”
Google will also soon allow you to shop in AI Mode — the new, Gemini-powered search experience it began rolling out to more users in March. If you tell AI Mode that you’re looking for a new travel bag, for example, it will automatically show a personalized panel of images and product listings.
You can narrow the selection by providing more details about your needs, like noting you need a bag for a May trip to Portland, Oregon. Google says AI Mode will conduct multiple searches simultaneously, allowing it to determine what features are suitable for rainy weather and then surface results that meet those needs, such as having waterproof fabric and more pockets.
There’s also a new “agentic” checkout feature rolling out to Google users in the US in the coming months. Right now, if you tap “track price” on a product listing, select a size, color, and the amount you want to spend, Google will automatically notify you when it drops to your preferred price. A new feature lets you confirm your purchase details and then select “buy for me.” Google will then check out on the merchant’s website and “securely complete the checkout on your behalf” using Google Pay.
Meet will translate what you say in near real-time.
Google is bringing speech translation to Meet. During I/O on Tuesday, Google revealed a new Gemini-powered feature that can translate what you say into your conversation partner’s preferred language.
Google says the AI-generated translation will preserve the sound of your voice, tone, and expression. The feature is rolling out now to subscribers, and Google will also bring it to enterprises later this year.
In a demo shown by Google, an English speaker joins a call with a colleague who speaks Spanish. Once their colleague turns on Gemini’s speech translation, Meet begins dubbing over what they’re saying with an AI-generated English translation that includes all their vocal inflections — and vice versa.
For now, Meet can only translate between English and Spanish, but Google plans on adding support for Italian, German, and Portuguese in the “coming weeks.”
MSI has revealed the latest iteration of its Claw PC gaming handheld — and this time, it’s powered by AMD. The company showed off the Claw A8 BZ2EM at Computex 2025; it comes with an AMD Ryzen Z2 Extreme chip along with up to 24GB of DDR5 memory.
That’s a bit less than the 32GB of memory that came with the Intel-equipped Claw 8 AI Plus released late last year, but it still has an 8-inch full HD display, a 120Hz refresh rate, and a 1TB M.2 SSD. The AMD-powered Claw A8 will also come in two colors: white and lime green.
There’s a new MSI Claw 8 AI Plus “Polar Tempest” edition, too, which features up to an Intel Core Ultra 7 258V processor and a 2TB NVMe SSD. It also has what MSI calls a “glittering” white coating. MSI didn’t reveal a release date or price for either model, but they will likely be in the same ballpark as the standard Intel-powered MSI Claw 8 AI Plus, which Best Buy lists at $999.99.
It’s finally possible to purchase an audiobook from Spotify’s iPhone app with just a few taps. On Monday, Spotify announced that Apple approved an update that allows users in the US to see audiobook pricing within the app and buy individual audiobooks outside the App Store.
The update also lets Spotify Premium subscribers purchase additional audiobook listening hours. This change follows last month’s Epic Games vs. Apple ruling, which upended the iPhone maker’s control over the App Store. Under the ruling, Apple can’t collect fees on purchases made outside the App Store, nor can it govern how developers point to external purchases.
Spotify submitted the update last week, but now it’s official. The music streaming service pulled audiobook purchases from its iOS app in 2022 after accusing Apple of “choking competition” with App Store rules that made it more difficult to purchase audiobooks. Spotify also started letting iPhone users purchase subscriptions outside the App Store earlier this month.
The iOS apps for Kindle, Patreon, and Delta’s emulator have also taken advantage of the court ruling, but Epic Games is still fighting to bring Fortnite back to the App Store. “This change lowers the barriers for more users to embrace their first — or tenth — audiobook, while allowing publishers and authors to reach fans and access new audiences seamlessly,” Spotify said in its announcement.
Huawei just launched a super sleek folding laptop that might be as thin as your phone. The MateBook Fold, which consists of a single OLED display, is just 7.3mm (~0.3 inches) thick when unfolded and 14.9mm (~0.6 inches) when closed, as spotted earlier by Android Headlines.
To compare, Lenovo’s ThinkPad X1 Fold measures 8.6mm (0.34 inches) thick unfolded and 17.4mm (0.68 inches) when folded. But unlike Lenovo’s device, the MateBook Fold is only available in China for now, with a price of around $3,300.
The MateBook Fold’s 18-inch display folds at a 90-degree angle to form a 13-inch upper screen, mimicking a traditional laptop with a digital keyboard instead of a physical one. The device weighs just 1.16kg (~2.6lbs), with its tandem OLED offering a 3.3K (3296 x 2472) resolution and a peak brightness of up to 1600 nits. The laptop also comes with up to 32GB of RAM and 2TB of storage.
This also marks the debut of Huawei’s in-house operating system, HarmonyOS 5, on PCs. Huawei first launched HarmonyOS on its phones and other devices, but it has since brought the operating system to PCs after losing access to Microsoft Windows in March due to US sanctions. In addition to the MateBook Fold, the system is also available on the new MateBook Pro.