Today — 25 February 2025

OpenAI expands Deep Research to all paying ChatGPT users

25 February 2025 at 12:00

When OpenAI announced Deep Research at the start of February, the company promised to bring the tool to Plus users "in about a month," and now it's doing exactly that. Starting today, the feature, which you can use to prompt ChatGPT to create in-depth reports on nearly any subject, is rolling out to Plus, Team, Edu and Enterprise users. Previously, you needed a $200 per month Pro plan to try out Deep Research.

For the time being, Plus users will get 10 Deep Research queries per month included with their plan. For Pro subscribers, OpenAI is increasing the monthly limit from 100 to 120. Additionally, the company has made a couple of improvements to how the tool works. ChatGPT will now embed images alongside citations to provide "richer insights." The system also has a better understanding of file types, which should translate to better document analysis.

A screenshot of a report generated by ChatGPT's Deep Research tool, with a sidebar showing the chatbot's citations.
OpenAI

If you want to give the new feature a try, write a prompt as you normally would, but then tap the Deep Research icon before sending your request through to OpenAI. Depending on the complexity of the question, it can take ChatGPT anywhere between five and 30 minutes to compile an answer. OpenAI has said Deep Research is currently "very compute intensive," so it may be a while before Free users get to try the capability out for themselves.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-expands-deep-research-to-all-paying-chatgpt-users-200045108.html?src=rss

© OpenAI

A mouse pointer hovers over the Deep Research button on ChatGPT.

Paramount+ adds 50 classic MTV Unplugged episodes

25 February 2025 at 07:50

If you're a music fan of a certain age, there's a good chance MTV Unplugged has a special place in your heart. Since its first episode aired in 1989, the series has produced some of the most memorable live performances in history. Who could forget Nirvana's set, recorded less than a year before Kurt Cobain would tragically take his own life in 1994, or Alice in Chains playing one of its final shows with late lead vocalist Layne Staley? There are too many memorable episodes to count, and now you can watch more than 50 of them, including the two I just mentioned, on Paramount+.

As Paramount notes, many of the episodes haven't been available to watch in more than 20 years. From that perspective, the most interesting release is Oasis' (in)famous 1996 set. For the uninitiated, it's an episode that almost didn't happen. In the days leading up to the performance, the story goes that lead singer Liam Gallagher complained of a sore throat. On the day the band was scheduled to tape the episode, he showed up an hour before "absolutely sh**faced," according to his brother Noel, who went on to sing the entire set on his own. Despite its place in music history, before today it was nearly impossible to find a high-quality video of the performance. On YouTube, for instance, you can find a bootleg recording or two, but as you can imagine, the fidelity isn't there.

This isn't the first time Paramount+ has dug into the MTV archives to expand its catalog. Earlier this year, the streamer built an entire special program around Eric Clapton's 1992 set. If you want to check out the performances for yourself, Paramount+ offers a seven-day free trial.

This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/paramount-adds-50-classic-mtv-unplugged-episodes-155004134.html?src=rss

© Frank Micelotta Archive via Getty Images

Kurt Cobain and Nirvana during the taping of MTV Unplugged at Sony Studios in New York City, November 18, 1993. Photo by Frank Micelotta.
Yesterday — 24 February 2025

Anthropic's new Claude model can think both fast and slow

24 February 2025 at 12:33

Another week, and there's another new AI model ready for public use. This time, it's Anthropic with the introduction of Claude 3.7 Sonnet. The company describes its latest release as the market's first "hybrid reasoning model," meaning the new version of Claude can either answer a question nearly instantaneously or take its time to work through it step by step. As the user, you decide which approach Claude takes, with a dropdown menu allowing you to select the "thinking mode" you want.

"We've developed Claude 3.7 Sonnet with a different philosophy from other reasoning models on the market. Just as humans use a single brain for both quick responses and deep reflection, we believe reasoning should be an integrated capability of frontier models rather than a separate model entirely," writes Anthropic. "This unified approach also creates a more seamless experience for users."

Anthropic doesn't name OpenAI explicitly, but the company is clearly taking a shot at its rival. Between GPT-4, o1, o1-mini and now o3-mini, OpenAI offers many different models, but unless you follow the company closely, the number of systems on offer can be overwhelming; in fact, Sam Altman recently admitted as much. "We hate the model picker as much as you do and want to return to magic unified intelligence," he posted on X earlier this month.

Anthropic says it also took a different approach to developing Claude's reasoning capabilities. "We've optimized somewhat less for math and computer science competition problems, and instead shifted focus towards real-world tasks that better reflect how businesses actually use LLMs," the company writes. To that point, current Claude users can look forward to "particularly strong improvements in coding and front-end web development."

Claude 3.7 Sonnet is available to use starting today across all Claude plans, including Anthropic's free tier. Developers, meanwhile, can access the new model through the company's API, Amazon Bedrock and Google Cloud's Vertex AI.

Speaking of developers, Anthropic is also introducing Claude Code, a new "agentic" tool that allows you to delegate coding tasks to Claude directly from a terminal interface. Available currently as a limited research preview, Anthropic says Claude Code can read code, edit files, write and run tests, and even push commits to GitHub.
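For developers curious what the fast/slow toggle looks like over the API, the sketch below shows how such a request might be assembled. To be clear, the model identifier, the `thinking` parameter shape and the token budget here are illustrative assumptions, not confirmed details from the article; consult Anthropic's API documentation for the real names.

```python
def build_request(prompt: str, think: bool) -> dict:
    """Assemble a hypothetical Messages API payload for a hybrid-reasoning call.

    When `think` is True, an extended-thinking budget is attached so the model
    works through the problem step by step; otherwise it answers quickly.
    """
    payload = {
        "model": "claude-3-7-sonnet-latest",  # assumed identifier
        "max_tokens": 2048,
        "messages": [{"role": "user", "content": prompt}],
    }
    if think:
        # Reserve a token budget for the model's visible step-by-step reasoning.
        payload["thinking"] = {"type": "enabled", "budget_tokens": 1024}
    return payload

# Usage (requires the `anthropic` package and an API key; not run here):
#   import anthropic
#   client = anthropic.Anthropic()
#   response = client.messages.create(**build_request("Plan a migration", think=True))
```

The point of the design, as Anthropic describes it, is that both behaviors live in one model; the caller flips a switch rather than picking a different model.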

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropics-new-claude-model-can-think-both-fast-and-slow-203307140.html?src=rss

A screenshot of Claude's interface, showing the new model's hybrid reasoning modes.
Before yesterday

OpenAI bans Chinese accounts using ChatGPT to edit code for social media surveillance

21 February 2025 at 15:04

OpenAI has banned the accounts of a group of Chinese users who had attempted to use ChatGPT to debug and edit code for an AI social media surveillance tool, the company said Friday. The campaign, which OpenAI calls Peer Review, saw the group prompt ChatGPT to generate sales pitches for a program those documents suggest was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights violations in China, with the intent of sharing those insights with the country's authorities.

"This network consisted of ChatGPT accounts that operated in a time pattern consistent with mainland Chinese business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting, rather than automation," said OpenAI. "The operators used our models to proofread claims that their insights had been sent to Chinese embassies abroad, and to intelligence agents monitoring protests in countries including the United States, Germany and the United Kingdom."

According to Ben Nimmo, a principal investigator with OpenAI, this was the first time the company had uncovered an AI tool of this kind. "Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models," Nimmo told The New York Times.

Much of the code for the surveillance tool appears to have been based on an open-source version of one of Meta's Llama models. The group also appears to have used ChatGPT to generate an end-of-year performance review where it claims to have written phishing emails on behalf of clients in China.

"Assessing the impact of this activity would require inputs from multiple stakeholders, including operators of any open-source models who can shed a light on this activity," OpenAI said of the operation's efforts to use ChatGPT to edit code for the AI social media surveillance tool.

Separately, OpenAI said it recently banned an account that used ChatGPT to generate social media posts critical of Cai Xia, a Chinese political scientist and dissident who lives in the US in exile. The same group also used the chatbot to generate articles in Spanish critical of the US. These articles were published by "mainstream" news organizations in Latin America and often attributed to either an individual or a Chinese company.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-bans-chinese-accounts-using-chatgpt-to-edit-code-for-social-media-surveillance-230451036.html?src=rss

© Reuters

Pro-Chinese protesters, carrying Chinese flags and a Hong Kong flag, take part in a rally on Canton Road in Hong Kong's Tsim Sha Tsui shopping district, March 15, 2015. REUTERS/Bobby Yip

Apple disables iCloud's Advanced Data Protection feature in the UK

21 February 2025 at 09:30

Apple users in the UK can no longer access one of the company's most powerful data protection tools, as first reported by Bloomberg. The feature, Advanced Data Protection (ADP), allows iPhone users to add optional end-to-end encryption to a variety of iCloud data. The move comes amid an ongoing dispute between Apple and the UK over a government order that would require the company to build a backdoor to allow British security officials to access the encrypted data of users globally.

"ADP protects iCloud data with end-to-end encryption, which means the data can only be decrypted by the user who owns it, and only on their trusted devices," Apple told Engadget. "We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy."

The current ADP screen, which says the feature is no longer available in the UK.
Mathew Smith / Engadget

"Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users," a notification explains when users go to enable the feature on their iPhone, iPad or Mac following Apple's decision. If you live in the UK and have ADP enabled, you will need to manually disable the encryption to keep your iCloud account. Apple told Engadget it will provide customers with a grace period to comply, though the company has yet to say how long that period will be. The company added it would share additional guidance in the future. Due to the nature of end-to-end encryption, Apple cannot automatically disable ADP on behalf of its users.
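To see why Apple can't simply flip the switch itself, it helps to remember what end-to-end encryption means in practice: only the user's devices hold the key, so Apple's servers store ciphertext they cannot read, let alone re-encrypt into a weaker form. The toy sketch below illustrates the idea; it is a deliberately simplified stand-in using a throwaway XOR keystream, not Apple's actual scheme.

```python
# Toy illustration (NOT Apple's implementation, and not secure crypto):
# the device generates a key the "cloud" never sees, so the provider
# stores an opaque blob it cannot decrypt or downgrade on the user's behalf.
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext against the keystream.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

device_key = secrets.token_bytes(32)          # lives only on the device
cloud_copy = encrypt(device_key, b"photo library")  # all the server ever holds
assert decrypt(device_key, cloud_copy) == b"photo library"
```

Without `device_key`, the stored blob is meaningless to the provider, which is why disabling ADP has to happen on the user's own device.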

Apple's decision to disable ADP in the UK does not mean the company is removing end-to-end encryption from the many other services it offers in the country. iMessage, passwords, health data and more are still protected by end-to-end encryption by default.

"Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before," Apple said. "Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom. As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will."

News of the UK's backdoor request broke last week when The Washington Post reported that government officials issued a "technical capability notice" to the company under the country's Investigatory Powers Act. Last year, the UK government made changes to the law to "ensure the intelligence services and law enforcement have the powers they need to keep pace with a range of evolving threats from terrorists, hostile state actors, child abusers and criminal gangs." The order reportedly demands Apple give security officials the capability to view all of a user's fully encrypted material whenever the government wants and wherever the target is located.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/apple-disables-iclouds-advanced-data-protection-feature-in-the-uk-173016447.html?src=rss

© Reuters

Britain's King Charles and Apple CEO Tim Cook walk at Apple's UK head office at Battersea Power Station in London, December 12, 2024. REUTERS/Toby Melville

Why OpenAI is trying to untangle its 'bespoke' corporate structure

20 February 2025 at 08:00

On the Friday after Christmas, OpenAI published a blog post titled "Why OpenAI's structure must evolve to advance our mission." In it, the company detailed a plan to reorganize its for-profit arm into a public benefit corporation (PBC). In the weeks since that announcement, I've spoken to some of the country's leading corporate law experts to gain a better understanding of OpenAI's plan, and, more importantly, what it might mean for its mission to build safe artificial general intelligence (AGI).

What is a public benefit corporation?

"Public benefit corporations are a relatively recent addition to the universe of business entity types," says Jens Dammann, professor of corporate law at the University of Texas School of Law. Depending on who you ask, you may get a different history of PBCs, but in the dominant narrative, they came out of a certification program created by a nonprofit called B Lab. Companies that complete a self-assessment and pay an annual fee to B Lab can carry the B Lab logo on their products and websites and call themselves B-Corps. Critically, B Corp status isn't a designation with the weight of law, or even an industry-wide group, behind it β€” it's a stamp of approval from this specific nonprofit.

As a result, B Lab eventually felt the certification program "was not enough," says Professor Michael Dorff, executive director of the Lowell Milken Institute for Business Law and Policy at UCLA. "They wanted something more permanent and more rooted in the law." So the organization began working with legal experts to create a model statute for what would become the benefit corporation. B Lab lobbied state legislatures to pass laws recognizing benefit corporations as legal entities, and in 2010, Maryland became the first state to do so. In 2013, Delaware enacted its own version of the law. To make things somewhat confusing, the state went with a different name: the public benefit corporation.

Delaware is arguably the most important state for corporate law in the US, thanks to the Delaware Chancery Court and its body of business-friendly case law. As of 2022, 68.2 percent of all Fortune 500 companies, including many tech giants, are incorporated in the state despite largely operating elsewhere. Delaware is also the state where OpenAI plans to reincorporate its for-profit as a PBC.

The basic idea behind public benefit corporations is that they're business entities that impose a constraint on their board to balance profit maximization, a public benefit that's stated in the charter of the company, and the concerns of people impacted by its conduct.

"It's a bit of a paradigm shift," says Professor Dammann, but don't confuse a PBC with a nonprofit. "The key characteristic of a nonprofit is what we call a non-distribution constraint, meaning if a nonprofit makes a profit, they can't distribute it to their shareholders," Professor Dammann says. "If you form a public benefit corporation, there's no such non-distribution constraint. At its heart, a PBC is still a for-profit corporation."

Why is OpenAI pursuing a PBC structure?

First and foremost, a PBC structure — whether it stays private or sells shares on the open market — would get OpenAI out from under that non-distribution constraint. But there are likely some other considerations at play.

OpenAI hasn't publicly said this, but it appears some of its employees believe a PBC structure could protect the company from a hostile takeover if it were to go public. In a recent Financial Times report, a source within the company said a PBC structure would give OpenAI a "safe harbor" if a rival firm were to try to buy the company. It "gives you even more flexibility to say 'thanks for calling and have a nice day'," the person said.

The specific threat OpenAI likely wants safe harbor from is what's known as the Revlon doctrine, which is named after a 1986 Delaware Supreme Court case involving the cosmetics company Revlon Inc. and now defunct supermarket chain Pantry Pride, then led by CEO Ronald Perelman. "The Revlon doctrine holds that if you're a publicly traded corporation [incorporated in Delaware] and somebody stages a takeover attempt, then under certain conditions, you have to sell to the highest bidder," says Professor Dammann.

The underlying rationale behind Revlon is that a for-profit company's sole function is to generate profits, so the board is forced to make whatever choice will return the most money to shareholders.

"We don't know for sure, but we're fairly confident that the Revlon doctrine doesn't apply to public benefit corporations," says Professor Dammann. Theoretically, PBC boards may have the flexibility to reject a takeover bid if they believe a buyer won't adhere to the social values the company was founded on. However, because "none of this has been litigated," according to Professor Dorff, it remains a purely hypothetical defense.

Moreover, it's unclear if reorganizing as a PBC would offer OpenAI more protection against a hostile takeover attempt than what it already has as a nonprofit. "I don't think this has been tested with this particular kind of structure, but my sense is that the nonprofit would not be obligated to sell even in a Revlon moment," says Professor Dorff.

"We need to raise more capital"

A chart detailing OpenAI's corporate structure
OpenAI

Publicly, OpenAI has said it needs to secure more investment, and that its current structure is holding it back. "We once again need to raise more capital than we'd imagined," OpenAI wrote in December, two months after securing $6 billion in new venture funding. "Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness."

Unpacking what the company likely means by "structural bespokeness" requires a short history lesson. In 2019, when OpenAI originally created its for-profit arm, it organized the company using a unique "capped-profit" structure. The company said it would limit investor returns to 100x, with excess returns going to the nonprofit. "We expect this multiple to be lower for future rounds as we make further progress," OpenAI added.

It's fair to be critical of the company's claims. "You'd have to ask the investors, but I have to say that 100x is an exceptional rate of return, so the idea that you cannot get investment because of a 100x cap seems rich to me," says Professor Dorff. In fact, there are suggestions OpenAI was already making itself more attractive to investors before announcing its reorganization plan in December. In 2023, The Economist reported that the company changed its cap to increase (and not decrease, as OpenAI had originally said it would) by 20 percent per year starting in 2025. OpenAI, which racked up about $5 billion in losses last year, does not expect to be profitable until 2029.

"We want to increase our ability to raise capital while still serving our mission, and no pre-existing structure we know of strikes the right balance," OpenAI said in 2019. At that point, Delaware's PBC legislation had been law for nearly six years. However, the company is now arguing that a PBC structure would "enable us to raise the necessary capital with conventional terms like others in this space."

In OpenAI's defense, calling its current structure convoluted would be an understatement. As you can see from the company's own org chart, there are two other entities under the OpenAI umbrella, including a holding company that's an intermediary between the nonprofit and for-profit. Engadget was able to find at least 11 different Delaware companies registered to OpenAI. George R.R. Martin, Jodi Picoult and other members of the Authors Guild probably described it best in their copyright lawsuit against the company, calling OpenAI "a tangled thicket of interlocking entities that generally keep from the public what the precise relationships among them are and what function each entity serves within the larger corporate structure."

OpenAI did not respond to multiple requests for comment from Engadget.

"A stronger nonprofit supported by the for-profit's success"

Sam Altman, CEO of OpenAI, is seen through glass at Station F during an event on the sidelines of the Artificial Intelligence Action Summit in Paris, France, February 11, 2025. Aurelien Morissard/Pool via Reuters
Reuters

OpenAI's nonprofit arm does essentially two things: controls the for-profit side's business, and exists as a "vehicle" to develop "safe and broadly beneficial AGI" (artificial general intelligence).

According to the company, its current structure does not allow its nonprofit arm to "easily do more than control the for-profit." If it were freed of that responsibility β€” by say, handing it off to investors β€” OpenAI suggests its nonprofit could focus its resources on charitable initiatives, all while becoming "one of the best-resourced nonprofits in history."

To remedy the situation, OpenAI's board says the nonprofit should give up absolute control over the for-profit and take whatever degree of control comes with the amount of stock it's granted through the reorganization process. "The nonprofit's significant interest in the existing for-profit would take the form of shares in the PBC at a fair valuation determined by independent financial advisors," OpenAI says of this part of its plan.

Professor Dorff argues that who controls OpenAI is critical to the company's ability to maintain its mission. The move to reorganize the for-profit as a PBC is not itself controversial. "Companies do it all the time; there's a straightforward and clear process to do that," he tells me. "What is controversial is what they're trying to do to change the nature of the nonprofit's ownership interest in the for-profit."

At the risk of oversimplifying things, OpenAI's board of directors wants to divest the company's nonprofit of two of its most important assets: control of the for-profit and its rights to the profits from AGI. "You can't just do that," says Professor Dorff. "The assets of the nonprofit must remain dedicated to the purpose of the nonprofit." There are rules that allow nonprofits to modify their purpose if their original one is made defunct, but those won't apply to OpenAI since we're not living in a world with safe (or any) AGI.

Think of it this way: what is the value of artificial general intelligence? It's not a traditional asset like real estate or the EVs sold by Tesla. AGI, as defined by OpenAI, doesn't yet exist and may never. "One could imagine it's worth all the labor of the economy because it could eventually replace human labor," says Professor Dorff. Whatever the eventual value of the technology, Professor Dorff says he's unsure "any number would enable the nonprofit to do what it's supposed to do without control."

No matter how OpenAI spins it, any version of this plan would result in a massive loss of control for the current nonprofit entity and its board.

One more thing

Something the experts I spoke to agreed on was that the laws governing PBCs aren't very effective at ensuring companies stick to their social purpose. "The legal constraints aren't very strict," Professor Dammann says, adding, "the problem with a very broad public benefit is that it's not so constraining anymore. If you're dedicated to a very broad version of the public good, then you can always defend every decision, right?"

"The dual goal of profit and public purpose doesn't really tell you how a company is going to manage those objectives," says Jill Fisch, professor of Business Law at the University of Pennsylvania Law School. "To the extent that public purpose sacrifices profits, and it doesn't have to, but to the extent that it does, how much of a sacrifice is contemplated?"

"What matters a lot in PBC governance is what the private arrangements are," Professor Dorff adds. "That is, what do the documents say?" A company's certificate of incorporation, shareholder agreements and bylaws can provide "very robust" (or very few) mechanisms to ensure it sticks to its social purpose. As Professor Dorff points out, OpenAI's blog post said "nothing about those."

Contrast that with when OpenAI announced its "capped profit" plan. It gave us a glimpse of some of its paperwork, sharing a clause it said was at the start of all of its employee and investor agreements. That snippet made it clear OpenAI was under no obligation to generate a profit. Right now, there's a lot we don't know about its restructuring plan. If the company is still serious about its mission of "ensuring artificial general intelligence benefits all of humanity" it owes the public more transparency.

What happens next?

Elon Musk leaves after a meeting with Indian Prime Minister Narendra Modi at Blair House in Washington, D.C., February 13, 2025. REUTERS/Nathan Howard
Reuters

Elon Musk's recent $97.4 billion bid to buy the nonprofit's assets complicates OpenAI's plan. In this situation, the nonprofit isn't obligated to sell its assets to Musk under Revlon or anything else — the company simply is not for sale. However, as part of OpenAI's reorganization plan, the for-profit will need to compensate the nonprofit for its independence. Musk's bid is likely an attempt to inflate the price of that transaction beyond what Sam Altman and the rest of OpenAI's board of directors had in mind. To say Musk and Altman have had a contentious relationship since the former left OpenAI would be an understatement on a grand scale, and having an enemy who not only has the most money of any human on the planet, but also broad and largely unchecked influence over the United States' executive branch, may frustrate OpenAI's plans.

OpenAI also faces a ticking clock. According to documents seen by The New York Times, the company has, under the terms of its latest investment round, less than two years to free its for-profit from control of the nonprofit. If it fails to do so, the $6.6 billion it raised in new funding will become debt.

This article originally appeared on Engadget at https://www.engadget.com/ai/why-openai-is-trying-to-untangle-its-bespoke-corporate-structure-160028589.html?src=rss

© Igor Bonifacic / Engadget

The icon for ChatGPT on iOS

Great cameras, not Apple Intelligence, is what people want from an iPhone 16e

20 February 2025 at 05:00

After much anticipation, Apple finally announced the iPhone 16e yesterday. Looking at its position in the company's lineup, the 16e is a head-scratcher. My colleague Ian Carlos Campbell already wrote about how strange it is that the phone is missing MagSafe, a feature universally loved by Apple users. The omission that stands out most to me, however, is that the iPhone 16e comes with just a single rear camera (and no, 2x telephoto cropping doesn't count).

Sure, if you put the 16e against its predecessor, the 2022 iPhone SE, it's not a surprising omission. But when you consider today's broader smartphone market, it's a glaring weakness. At $599, the 16e is $100 more than the Pixel 8a, a device with two amazing rear cameras and an AI-capable processor (more on that in a moment). The 8a is also frequently on sale for as little as $399. Some people hate the Pixel comparison, so I'll give you another one. Last spring, Nothing released the $349 Phone 2a. Like the 8a, it has two rear cameras. Oh, and a fresh design that's not borrowed from 2020. At almost $200 more than the phone it replaces, the 16e is very much not a midrange device.

I know what you're thinking: what's wrong with one camera, as long as that camera is great? In the case of the 16e, I think the problem is that Apple is misreading the market and what people want from their next phone. All consumer devices involve compromises, and those compromises become more pronounced as you move down the market.

For most people, their phone is their primary camera and the way they document their lives and memories. Think about the first thing you tested when you upgraded to your current phone. I bet it was the cameras. In that context, more cameras are better, because they make it easier to capture the moments that are important to you.

For a device some outlets are describing as "low-end," the iPhone 16e features a state-of-the-art chip. It might be cut down with one fewer GPU core, but the 16e's A18 is still a 3nm chip, backed by 8GB of RAM. Apple clearly felt the A18 was necessary to get its AI suite running on the 16e, but that means the rest of the phone had to suffer as a result, starting with the camera package.

I don't know about you, but if I were in the market for a new phone, I would want the most bang for my buck. The SE line had its share of drawbacks, including a dated design and a lackluster screen, but at $429, it made sense. For all its faults, the SE still felt like a bargain in 2022 because you were getting a modern chip, access to iOS and all the great apps that come with it, and Apple's excellent track record of software support. With the iPhone 16e, you're not saving nearly as much off the price of a regular iPhone. Yes, everything I said about the SE's strengths is still true of the 16e, and it even builds on that phone with additions like a better battery and an OLED screen, but the smartphone market has evolved a great deal in the last three years.

Again, I know people hate the Pixel comparison, but the 8a makes far fewer compromises. Not only does it feature a more versatile camera system, but it also comes with a high refresh rate OLED. The 8a's Tensor G3 chip is also fully capable of running Google's latest AI features.

I know offering the best hardware features for the price has never been Apple's approach, but that approach only made sense when the company had the best software experience. We can all agree Apple Intelligence has not met its usual quality standards. Just look at notification summaries, one of the main selling points of Apple Intelligence. Apple recently paused all news and entertainment alerts generated by the system to address their poor quality.

Right now, Apple Intelligence is not a compelling reason to buy a new iPhone, and its inclusion on the 16e at the expense of other features feels, at best, like a cynical attempt to boost adoption numbers. If the 16e were $100 cheaper, maybe I would be less critical, but as it stands, Apple missed the mark.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/great-cameras-not-apple-intelligence-is-what-people-want-from-an-iphone-16e-130041307.html?src=rss

©

© Apple

A white iPhone 16e seen from the side, with its OLED display lit.

Google Lens for iPhone now lets you draw to do visual searches

19 February 2025 at 09:00

Google is introducing two small but meaningful enhancements to its Lens technology. To start, Chrome and Google app users on iPhone can now draw, highlight or tap on text and images to carry out a visual search of what they see in front of them. If this sounds familiar, it’s because Google is basically bringing to the iPhone an interface paradigm it debuted last year with Circle to Search on Android. While the implementation is different and more limited due to the constraints of iOS, the idea is the same: Google wants to save you the trouble of opening a new Chrome tab or saving a screenshot when you want to find more information about an image you see.

For now, Google says you can access the new feature, whether you’re using Chrome or the Google app, by opening the three-dot menu and selecting "Search Screen with Google Lens." In the future, the company will add a dedicated Lens shortcut to the address bar in Chrome.

Separately, the next time you use Lens, you’ll be more likely to encounter Google’s AI Overviews, particularly when you use the software to find information on more unique or novel images. In those instances, you won’t need to prompt Lens with a question about the image you just snapped for the software to try and offer a helpful explanation of what you’re seeing. Instead, it will do that automatically.

Ahead of today’s announcement, Harsh Kharbanda, director of product management for Google Lens, gave me a preview of the feature. Kharbanda used Lens to scan a photo of a car with an unusual surface on its hood. An AI Overview automatically popped up explaining that the car had a carbon vinyl wrap, which it further said people use both for protection and to give their rides a more sporty appearance. According to Kharbanda, Google will roll out this update to all English-language users in countries where AI Overviews are available, with the feature first appearing in the Google app for Android and iOS, and arriving soon on Chrome for desktop and mobile devices.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-lens-for-iphone-now-lets-you-draw-to-do-visual-searches-170055399.html?src=rss

©

© Google

Google Lens on iOS, found in the Google app and Chrome, now makes it easier to conduct visual searches.

Microsoft trained an AI model on a game no one played

19 February 2025 at 08:00

World models β€” AI algorithms capable of generating simulated environments β€” represent one forefront of machine learning. Today, Microsoft published new research in the journal Nature detailing Muse, a model capable of generating game visuals and controller inputs. Unexpectedly, it was born out of a training set Microsoft built from Bleeding Edge.

If, like me, you had completely erased that game from your memory (or never knew it existed in the first place), Bleeding Edge is a 4 vs. 4 brawler developed by Ninja Theory, the studio better known for its work on the Hellblade series. Ninja Theory stopped updating Bleeding Edge less than a year after release, but Microsoft included a clause in the game’s EULA that gave it permission to record games people played online. So if you were one of the few people who played Bleeding Edge, congratulations, I guess: you helped the company make something out of a commercial flop.

So what's Muse good for anyway? Say a game designer at Blizzard wants to test an idea for a new hero in Overwatch 2. Rather than recruiting a team of programmers and artists to create code and assets that the studio may eventually scrap, they could instead use Muse to do the prototyping. Iteration is often the most time-consuming (and expensive) part of making a video game, so it’s easy to see why Microsoft would be interested in using AI to augment the process; it offers a way for the company to control runaway development costs. That’s because, according to Microsoft, Muse excels at a capability of world models the company calls persistency.

"Persistency refers to a model’s ability to incorporate (or 'persist') user modifications into generated gameplay sequences, such as a character that is copy-pasted into a game visual," says Katya Hofmann, senior principal research manager at Microsoft Research. Put another way, Muse can quickly adapt to new gameplay elements as they’re introduced in real-time. In one of the examples Microsoft shared, you can see the "player" character immediately react as two power-ups are introduced next to them. The model seemingly knows that the pickups are valuable and something players would go out of their way to obtain. So the simulation reflects that, in the process creating a convincing facsimile of a real Bleeding Edge match.Β 

According to Fatima Kardar, corporate vice president of gaming AI at Microsoft, the company is already using Muse to create a "real-time playable AI model trained on other first-party games," and exploring how the technology might help it bring old games stuck on aging hardware to new audiences.Β 

Microsoft says Muse is a "first-of-its-kind" generative AI model, but that’s not quite right. World models aren’t new; in fact, Muse isn’t even the first one trained on a Microsoft game. In October, the company Decart debuted Oasis, which is capable of generating Minecraft levels. What Muse does show is how quickly these models are evolving.

That said, there's a long way for this technology to go, and Muse has some clear limitations. For one, the model generates visuals at a resolution of 300 x 180 pixels and about 10 frames per second. For now, the company is releasing Muse's weights and sample data, and a way for researchers to see what the system is capable of.

This article originally appeared on Engadget at https://www.engadget.com/ai/microsoft-trained-an-ai-model-on-a-game-no-one-played-160038242.html?src=rss

©

© Ninja Theory/Microsoft

Gizmo, one of the heroes of Bleeding Edge, points his weapon at an opponent out of frame.

The Guardian is the latest news organization to partner with OpenAI

14 February 2025 at 07:55

The Guardian Media Group, owner of The Guardian and The Observer newspapers, is partnering with OpenAI. The deal will see reporting from The Guardian appear as a news source within ChatGPT, alongside article extracts and short summaries. In return, OpenAI will provide the Guardian Media Group with access to ChatGPT Enterprise, which the company says it will use to develop new products, features and tools.Β Β Β Β Β Β 

"This new partnership with OpenAI reflects the intellectual property rights and value associated with our award-winning journalism, expanding our reach and impact to new audiences and innovative platform services," said Keith Underwood, chief financial and operating officer of the Guardian Media Group.

The Guardian Media Group joins a growing list of news publishers that are now working with OpenAI after an initial period of uncertainty over the company and its business model. What started as a trickle with The Associated Press in 2023 has since become a flood, with many of the English-speaking world's leading publishers inking deals with the AI startup.Β 

In some ways, The Guardian has been more proactive than others. In 2023, the newspaper published an article detailing its approach to generative AI. A year later, it announced a partnership with ProRata, a company that built a platform allowing AI companies to attribute search results and share revenue with content owners. Today's announcement also comes after a major coalition of publishers, including The Guardian, announced a lawsuit against Cohere, a Canadian startup they allege improperly used more than 4,000 copyrighted works to train its AI models.

This article originally appeared on Engadget at https://www.engadget.com/ai/the-guardian-is-the-latest-news-organization-to-partner-with-openai-155555243.html?src=rss

©

© Reuters

Copies of the Guardian newspaper are displayed at a news agent in London, August 21, 2013. REUTERS/Suzanne Plunkett

Gemini Advanced can now recall your past conversations to inform its responses

13 February 2025 at 14:24

Google is making Gemini just a bit better. Starting today, the company's chatbot will recall past conversations in an effort to provide more useful responses. "That means no more starting over from scratch or having to search for a previous conversation thread," Google explains. "Plus, you can build on top of previous conversations or projects you’ve already started."Β 

Google notes Gemini "may" indicate if it referenced a past conversation to formulate a response. If the idea of a chatbot recalling information about you makes you feel uncomfortable, Google says users can "easily review, delete or decide how long" Gemini retains their chat history. Additionally, it's possible to disable this feature altogether from the My Activity panel.

Gemini is not the first chatbot to include a memory feature. ChatGPT will "remember" things about you in certain contexts. For example, I recently asked OpenAI's chatbot a question about Jeff Buckley's vocal range, and it later asked me if I was a fan of his music. When I said yes, a notification appeared stating "memory updated."

More broadly, building chatbots with long, reliable memories is part of the "agentic" AI future many companies, including Google and OpenAI, are building towards. At I/O 2024, for instance, Google debuted Project Astra, which featured a built-in memory, though it was limited to a relatively short window of time and could "mis-remember" things.Β 

Gemini's new memory feature has begun rolling out in English to Gemini Advanced subscribers. It will become available in more languages over the coming weeks.Β Β 

This article originally appeared on Engadget at https://www.engadget.com/ai/gemini-advanced-can-now-recall-your-past-conversations-to-inform-its-responses-222407226.html?src=rss

©

© Google

Gemini's interface says "Hello, Igor," with the prompt window empty below.

What to expect at Mobile World Congress 2025: Nothing, Samsung, Xiaomi and more

25 February 2025 at 09:46

On March 3, Mobile World Congress will kick off in Barcelona, Spain. While it’s not the premier show it once was, many of the smartphone industry’s leading players still attend the conference and frequently launch new devices there. Typically, we hear from companies like Lenovo, Arm, Xiaomi, Dell and more at the conference, as well as standards organizations like the GSMA on developments in areas like 5G or SIM technology. And judging by the event agenda on the MWC website, Meta will have at least some kind of presence centered on WhatsApp, though it's unlikely anything major gets announced at that event. Below, you’ll find a list of the more notable devices we expect to be launched at MWC 2025.

Nothing Phone 3a series

At MWC 2022, Nothing’s Carl Pei showed off a prototype of what would become the company’s first handset, the Nothing Phone 1, behind closed doors, and at last year’s event, Nothing announced the Phone 2(a). This year, we’re definitely getting at least one new device from the company at MWC, with Nothing teasing the reveal of the 3a series for March 4, the second day of the show.Β 

Since Engadget first published this article, Nothing has gone on to reveal the design of one of the phones it plans to announce next week, the 3a Pro. On Monday, the company posted a nearly 11-minute-long video showcasing the design of the upcoming device. Notably, the phone features a prominent camera bump to accommodate a periscope telephoto lens. That's not something we see on many phones in the 3a's price range, so it will be interesting to see how it performs.

Phone (3a) Series. Power in perspective.

4 March 10 AM GMT. pic.twitter.com/auesJycJQy

β€” Nothing (@nothing) January 30, 2025

Xiaomi 15 Ultra

Xiaomi 14 Ultra
Xiaomi

In 2024, the Xiaomi 14 Ultra made its global debut ahead of MWC, and it’s looking like history will repeat itself. Before the start of this month, there was some evidence to suggest Xiaomi would launch its new flagship at MWC 2025, but more recent rumors suggest the company plans to announce the 15 Ultra on February 26. In any case, Xiaomi is listed as an exhibitor at MWC 2025, so if the phone does debut before the end of this month, there’s a good chance it will be on the show floor for people to try out. Like the 14 Ultra before it, it looks like the 15 Ultra will be a photography powerhouse, with the phone rumored to feature a 1-inch main sensor and 200-megapixel periscope telephoto lens.

HMD Global

Since 2017, HMD Global has been a mainstay at MWC. First, with its Nokia-branded phones, including retro throwbacks like the 8110 Reloaded, and now more recently with devices carrying its own name. Given that history, it seems a safe bet the company will have something to announce at the show. What that could be is more of a mystery, though it’s possible the sub-$100 HMD Key could get a global release.

Samsung Galaxy S25 Edge

A side view of a Galaxy S25 Edge cast in shadow.
Samsung

After Samsung teased the Galaxy S25 Edge at Unpacked last month, you might think it would be fitting for the company to launch the phone in Barcelona next month. After all, MWC was the venue where, up until the Galaxy S10 in 2019, Samsung announced every S series phone beginning with the S2 back in 2011, and the company’s presence at MWC was the highlight of the event. Unfortunately, it doesn’t seem Samsung is feeling nostalgic for the sunny boulevards of Barcelona, with little in the way of rumors suggesting we could see the S25 Edge at MWC 2025. Still, never discount the chance that Samsung has a surprise up its sleeve.

Redmagic

Chinese manufacturer Redmagic is not only attending MWC 2025 but the company has already provided a preview of what it plans to show off at the event. Expect two new versions of its 10 Pro phone, including a limited edition "Golden Saga" variant.Β Β 

Everything else

As Engadget’s resident AI reporter, I’m obligated to mention a lot of companies will probably have AI-related announcements to share at MWC 2025. Don’t expect anything from the big players like OpenAI β€” the company isn’t registered as an exhibitor β€” but with artificial intelligence being the trendy thing in the industry right now, everyone will be trying to cash in on the hype; in fact, β€œAI+” is one of the main themes of MWC 2025.

Update, February 21 2025, 1:45PM ET: This story has been updated to add more context in the intro, calling out a few more companies that typically present at MWC, as well as some additional info pulled from the MWC agenda page.

Update, February 25 2025, 12:45PM ET: This story has been updated to add more context about Nothing's MWC plans and to add mention of Redmagic.Β Β 

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/what-to-expect-at-mobile-world-congress-2025-nothing-samsung-xiaomi-and-more-140057306.html?src=rss

©

© Reuters

A man uses a mobile phone, as he attends the Mobile World Congress (MWC) in Barcelona, Spain, February 27, 2024. REUTERS/Bruna Casas

The best midrange smartphones for 2025

A great phone doesn’t need to cost a fortune. In 2025, features once exclusive to high-end devices – big batteries, multi-camera arrays, high refresh rate displays and more – have trickled down to more affordable models. Sure, you’ll still need to buy a flagship smartphone to get the best camera or fastest processor, but you don’t need to compromise nearly as much anymore if you’re looking for a great handset at a reasonable price. If you have less than $600 to spend, let us help you figure out what features to prioritize when trying to find the best midrange smartphone.

The best midrange phones for 2025

What is a midrange phone, anyway?

While the term frequently appears in articles and videos, there isn’t an agreed-upon definition for β€œmidrange” beyond a phone that isn’t a flagship or an entry-level option. Most of our recommendations cost between $400 and $600 β€” any less and you should expect significant compromises. If you have more to spend, you might as well consider flagships like the Apple iPhone 16 and the Samsung Galaxy S25.

What to consider before buying a midrange smartphone

Buying a new device can be intimidating, but a few questions can help guide you through the process. First: what platform do you want to use? If the answer is iOS, that narrows your options down to exactly one phone. (Thankfully, it’s great.) And if you’re an Android fan, there’s no shortage of compelling options. Both platforms have their strengths, so you shouldn’t rule either out.

Of course, also consider how much you’re comfortable spending. Even increasing your budget by $100 can get you a dramatically better product. Moreover, manufacturers tend to support their more expensive devices for longer. It’s worth buying something toward the top limit of what you can afford.

Having an idea of your priorities will help inform your budget. Do you want a long battery life or fast charging? Do you value speedy performance above all else? Or would you like the best possible cameras? While they continue to improve every year, even the best midrange smartphones still demand some compromises, and knowing what’s important to you will make choosing one easier.

What won’t you get from a midrange smartphone?

Every year, the line between midrange and flagship phones blurs as more upmarket features and specs trickle down to more affordable models. When Engadget first published this guide in 2020, it was tricky to find a $500 phone with waterproofing and 5G. In 2025, the biggest thing you might miss out on is wireless charging – and even then, that’s becoming less true.

One thing your new phone probably won’t come with is a power adapter; many companies have stopped including chargers with all of their smartphones. Performance has improved in recent years, but can still be hit or miss as most midrange phones use slower processors that can struggle with multitasking. Thankfully, their cameras have improved dramatically, and you can typically expect at least a dual-lens system on most midrange smartphones below $600.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/best-midrange-smartphone-183006463.html?src=rss

©

© Sam Rutherford for Engadget

The best midrange smartphones

OpenAI will offer free ChatGPT users unlimited access to GPT-5

12 February 2025 at 13:19

OpenAI's upcoming GPT-5 release will integrate its o3 reasoning model and be available to free users, CEO Sam Altman revealed in a roadmap he shared on X. He said the company is also working to simplify how users interact with ChatGPT.Β 

"We want AI to 'just work' for you; we realize how complicated our model and product offerings have gotten," Altman wrote. "We hate the model picker as much as you do and want to return to magic unified intelligence."Β 

In its current iteration, forcing ChatGPT to use a specific model, such as o3-mini, involves either tapping the "Reason" button in the prompt bar or one of the options present in the model picker, which appears after the chatbot answers a question. If you pay for ChatGPT Plus or Pro, that dropdown menu can get pretty long, with multiple models and intelligence settings to choose from.Β Β 

OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5:

We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings.

We want AI to β€œjust work” for you; we realize how complicated our model and product offerings have gotten.

We hate…

β€” Sam Altman (@sama) February 12, 2025

As for the company's roadmap, Altman says GPT-4.5 will be OpenAI's "last non-chain-of-thought model," meaning everything that comes after will be able to solve problems by breaking them down into a series of intermediate steps. Following the release of GPT-4.5, OpenAI's primary goal is "to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks."

Looking ahead to GPT-5, Altman says OpenAI will release the model "as a system that integrates a lot of our technology," including o3 and its recently released Deep Research feature. In a change of plans, OpenAI won't release o3 as a standalone model. Previously, Altman had said the new system would arrive "shortly after" o3-mini, which OpenAI made available for public use at the end of last month.Β 

Once GPT-5 arrives, OpenAI plans to offer free users unlimited access to the model, "subject to abuse thresholds," at "the standard intelligence setting." Plus users will get to run GPT-5 "at a higher level of intelligence," while Pro users will get to run the model at "an even higher level of intelligence."Β 

Altman did not provide an exact timeline for either GPT-4.5 or GPT-5, other than to say they could arrive within weeks or months.Β Β 

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-will-offer-free-chatgpt-users-unlimited-access-to-gpt-5-211935734.html?src=rss

©

© Reuters

OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration

Google I/O 2025 kicks off on May 20

11 February 2025 at 13:28

Google has set the date for its next I/O developer conference. This year, the annual event will take place over two days starting on May 20, the company announced on Tuesday. As in previous years, the conference will feature an in-person component at the Shoreline Amphitheatre right on the company's doorstep in Mountain View, California.Β 

"We’ll start day one with keynotes, followed by breakout sessions, workshops, demos, networking opportunities and more continuing on day two," Google said. In a separate email the company sent to Engadget, it promised to share updates on Gemini and Android, alongside new innovations related to web and cloud development. Last year's conference saw Google spend a lot of time talking about AI, including initiatives like Project Astra, and it's probably a safe bet to say I/O 2025 will be similar in that regard, with potential updates from DeepMind on Gemini 2.0, Project Mariner and more.Β 

Notably, this year I/O will overlap with Microsoft Build, which is set to run from May 19 to 22. Like I/O, Build is expected to include a major focus on AI.Β Β Β 

This article originally appeared on Engadget at https://www.engadget.com/ai/google-io-2025-kicks-off-on-may-20-212810869.html?src=rss

©

© Reuters

Google CEO Sundar Pichai speaks on stage during the annual Google I/O developers conference in Mountain View, California, May 8, 2018. REUTERS/Stephen Lam

OpenAI's board 'unanimously' rejects Elon Musk's $97.4 billion takeover bid

14 February 2025 at 13:39

Elon Musk launched a $97.4 billion bid to take control of OpenAI. The Wall Street Journal reported a group of investors led by Musk's xAI submitted an unsolicited offer to the company's board of directors on Monday. The group wants to buy the nonprofit that controls OpenAI's for-profit arm.

When asked for comment, an OpenAI spokesperson pointed Engadget to an X post from CEO Sam Altman. "No thank you but we will buy twitter for $9.74 billion if you want," Altman wrote on the social media platform Musk owns.Β 

On Friday, OpenAI's board of directors officially rejected Musk's bid. "OpenAI is not for sale, and the board has unanimously rejected Mr. Musk's latest attempt to disrupt his competition," the company said in a response attributed to Bret Taylor, the chair of OpenAI's board of directors. "Any potential reorganization of OpenAI will strengthen our nonprofit and its mission to ensure AGI benefits all of humanity."Β 

Taylor, incidentally, was the chairman of Twitter's board before Musk bought the social media platform for $44 billion in 2022.Β 

"OpenAI is not for sale, and the board has unanimously rejected Mr. Musk's latest attempt to disrupt his competition. Any potential reorganization of OpenAI will strengthen our nonprofit and its mission to ensure AGI benefits all of humanity."

β€”Bret Taylor, Chair, on behalf of…

β€” OpenAI Newsroom (@OpenAINewsroom) February 14, 2025

"It’s time for OpenAI to return to the open-source, safety-focused force for good it once was," Musk said in a statement his attorney shared with The Journal. "We will make sure that happens."

A flow chart detailing OpenAI's corporate structure
OpenAI

It's hard to say how serious this bid from Musk is and what chance, if any, it has of succeeding. OpenAI is not a traditional company, and the nonprofit structure Sam Altman and others at the company want it to get away from may in fact protect it from Musk's offer. Were OpenAI a for-profit company with publicly traded shares, Musk's bid would likely trigger what's known in corporate law as a Revlon moment, in which, under certain circumstances, the company's board of directors would be forced to sell to the highest bidder to maximize shareholder value.

Update 02/14 4:34PM ET: Added response from OpenAI's board of directors.Β 

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-board-unanimously-rejects-elon-musks-974-billion-takeover-bid-215221683.html?src=rss

©

© Reuters

Tesla and SpaceX CEO Elon Musk arrives at the inauguration of U.S. President-elect Donald Trump in the Rotunda of the U.S. Capitol on January 20, 2025 in Washington, DC. Chip Somodevilla/Pool via REUTERS

OpenAI co-founder John Schulman has left Anthropic after less than a year

6 February 2025 at 09:11

Less than a year into his tenure at the company, OpenAI co-founder John Schulman is leaving Anthropic. The startup confirmed Schulman’s departure after The Information, Reuters and other publications reported on the exit.

"We are sad to see John go but fully support his decision to pursue new opportunities and wish him all the very best,” said Jared Kaplan, Anthropic's chief science officer, in a statement the company shared with Engadget. Schulman left OpenAI last August alongside Peter Deng, the company’s former vice-president of consumer product. Schulman is considered one of the original architects of ChatGPT.

Following his departure from OpenAI, Schulman said he was joining Anthropic to focus on AI alignment β€” the process of making machine learning models safe to use β€” and a desire to return β€œto more hands-on technical work.” Schulman hasn’t publicly said why he decided to leave Anthropic, nor what he plans to do next. His X profile still says he β€œrecently joined” Anthropic.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-co-founder-john-schulman-has-left-anthropic-after-less-than-a-year-171124425.html?src=rss

©

© Anthropic

Anthropic's Claude AI chatbot.

DeepSeek limits model access due to overwhelming server demand

6 February 2025 at 07:13

DeepSeek's recent explosion in popularity continues to be a problem for the AI startup. In a notification spotted by Bloomberg, the company said it was temporarily limiting access to its application programming interface service in response to a shortage of server capacity.

"Due to current server resource constraints, we have temporarily suspended API service recharges to prevent any potential impact on your operations," DeepSeek said. "Existing balances can still be used for calls. We appreciate your understanding!" Separately, DeepSeek announced pricing for its chat model would increase to $0.27 per million input tokens and $1.10 per million output tokens starting February 8.

DeepSeek has been dealing with overwhelming demand for its services since the debut of its R1 model on January 20. The company's emergence as a premier AI provider, and the fact it was able to train R1 for a fraction of what it cost OpenAI to develop its o1 reasoning model, sent US investors into a panic. Major tech stocks, including NVIDIA, shed a combined $1 trillion in value the Monday after DeepSeek's chatbot hit the top of the App Store. Since then, OpenAI has released its o3-mini model and Deep Research feature for ChatGPT.

This article originally appeared on Engadget at https://www.engadget.com/ai/deepseek-limits-model-access-due-to-overwhelming-server-demand-151339342.html?src=rss

©

© Reuters

The DeepSeek app is seen in this illustration taken on January 29, 2025. REUTERS/Dado Ruvic/Illustration

Lyft uses Anthropic's Claude chatbot to handle customer complaints

6 February 2025 at 06:00

Lyft is partnering with Anthropic to bring the startup's AI tech to its platform. "Anthropic, known for its human-centric approach to AI, will work with Lyft to build smart, safe, and empathetic AI-powered products that put riders and drivers first," the two said in a joint press release.

If you're a frequent Lyft rider, you can see the early results of that collaboration when you go through the company's customer care AI assistant, which features integration with Anthropic's Claude chatbot. According to Lyft, the tool is already helping to resolve thousands of driver issues every day, and has reduced average resolution times by 87 percent. The company plans to make it available to riders soon.

Moving forward, Lyft plans to integrate Anthropic's tech across its business. As part of the partnership, Lyft will get early access to the startup’s products and models, and will in turn assist Anthropic with testing those capabilities. Lyft says this will allow it to integrate Anthropic's AI models in a way that aligns with the needs of its drivers and customers. Last but not least, Anthropic will provide training and education to Lyft’s software engineers.

"Lyft is using Claude to both reimagine the future of ridesharing, and at the same time deliver tangible benefits to their community today," said Michael Gerstenhaber, vice president of product management at Anthropic. "This approach, combined with their deep collaboration with our team of experts, creates a blueprint for how companies can successfully bring AI into their business."

Lyft is no stranger to working with other companies, particularly when it involves AI technology. At the end of last year, it partnered with three companies in the autonomous vehicle space β€” Mobileye, May Mobility and Nexar. Lyft plans to start introducing their technologies into its network starting this year.

This article originally appeared on Engadget at https://www.engadget.com/ai/lyft-uses-anthropics-claude-chatbot-to-handle-user-complaints-140026067.html?src=rss

©

© Lyft

Lyft's new AI support agent is powered by Anthropic's Claude chatbot.

ChatGPT Search no longer requires an OpenAI account to use

5 February 2025 at 12:55

OpenAI is showing no signs of slowing down its recent pace of updates. On Wednesday, the company announced the expanded availability of ChatGPT Search. OpenAI first rolled out the tool to paid subscribers last fall, then made it available to all logged-in free users at the end of 2024; now anyone can use ChatGPT Search with no account or sign-in necessary.

"Like the logged-in experience, ChatGPT can search the web and get you fast, timely answers with links to relevant web sources directly in ChatGPT," OpenAI said.Β 

In most cases, ChatGPT will automatically search the web to source the most up-to-date information related to your question. Users can also force the chatbot to scour the internet by tapping the "Search" button below the prompt bar.Β Β Β 

ChatGPT search is now available to everyone on https://t.co/nYW5KO1aIg β€” no sign up required. pic.twitter.com/VElT7cxxjZ

β€” OpenAI (@OpenAI) February 5, 2025

Effectively, today's announcement means OpenAI is ready to take on Google's dominance in search, though, if I had to guess, right now it's more concerned about staying ahead of upstarts like DeepSeek. In just the last week, the company announced the availability of its latest AI model, and a new ChatGPT feature called Deep Research. Oh, and it even showed off a new logo.Β 

This article originally appeared on Engadget at https://www.engadget.com/ai/chatgpt-search-no-longer-requires-an-openai-account-to-use-205538282.html?src=rss

©

© OpenAI

A mouse pointer hovers over the "Search" icon in ChatGPT's prompt bar.