❌

Normal view

There are new articles available, click to refresh the page.
Today β€” 21 May 2025Main stream

The real reason why OpenAI spent $6.5 billion on Jony Ive's AI startup

21 May 2025 at 13:56
Jony Ive
Jonny Ive

Getty Images/Michael Kovac

  • OpenAI bought Jony Ive's io to enhance product distribution and user reach.
  • Generative AI competition is now more about distribution rather than technology.
  • Google's vast distribution network poses a challenge for OpenAI. Ive's gadgets could help.

OpenAI's decision to buy Jony Ive's gadget company, io, is about distribution.

The generative AI race has entered a new stage. It used to be about creating the best AI models, but they're all pretty similar these days.

Who ultimately wins will depend a lot more on distribution, and less on the quality of the underlying technology.

Getting ChatGPT and other OpenAI models and products into the hands of users β€” that's what really counts. Without that direct relationship, these products won't be used as much, or it will cost a lot to get the offerings to consumers indirectly.

Ive designed the ultimate distribution tool for technology, Apple's iPhone. Then he left and started work on io, a new type of device company built for the AI era.

AI could spark a wave of new gadgets

While phones still dominate, generative AI could change that. We might all wear smart glasses with AI chatbots built in. Meta, Apple, Google, and others are working on this. We could have a little clip thingy attached to our shirts, so we can converse constantly with AI models and chatbots. Who knows what else might work in this new era?

Either way, if you're Sam Altman running OpenAI, you don't want to have Google or Meta or Apple standing in between you and your users. You know what happens then? You end up having to pay for distribution. Mark Zuckerberg despises being an app on Apple's mobile device platform, which takes a juicy 30% fee from many developers. Even Google pays Apple roughly $20 billion a year to have its search engine distributed on iPhones and other Apple gadgets.

Does Altman want to pay Apple $20 billion in a few years? Does he want to give Tim Cook 30% of the revenue OpenAI generates from ChatGPT paid apps on iPhones? Of course not.

One solution is to hire the original iPhone designer to build OpenAI's own gadgets. This quote stood out to me in the OpenAI announcement about the deal with Ive and his io company.

"It became clear that our ambitions to develop, engineer and manufacture a new family of products demanded an entirely new company," Altman and Ive wrote.

Even if this hardware journey costs billions of dollars, it could be cheaper than paying other tech giants for distribution. And at least you control your own destiny and have that direct relationship with users.

io vs IO

I'm writing this from Google IO, the internet giant's annual conference. Altman loves to crash this party. He did it last year, and he's doing it again today. It's all the more perfect because Ive's device company is called io.

My interpretation of this party-crashing is that Altman could be pretty worried about Google.

Despite that very expensive deal with Apple, Google is a master of distribution, and it's using everything at its disposal to quickly get its new AI products and tools into as many hands as possible. These are the offerings that compete directly with ChatGPT.

Here are some examples of Google's distribution power, taken partly from this week at Google IO and partly from all the work Google has been doing for the past 20 years or so. For a startup like OpenAI, this must be terrifying.

  • Android, Google's mobile operating system, supports more than 3 billion devices. The company is prominently displaying its Gemini AI chatbot service on as many Android gadgets as possible.
  • There are millions of Pixel devices and Chromebooks out there. And guess what? Google is weaving Gemini into many of these phones and laptops.
  • Chromebooks, Android, and Pixel are mainly ways to distribute Google technology to users directly. Chrome is the default browser on these Googley gadgets. This week at IO, Google showed off how Gemini is now baked into the Chrome browser. That's suddenly more than 1 billion users who will see Gemini every day, and they could end up using this chatbot, rather than ChatGPT.
  • Then there's the big kahuna: Google Search. The company announced several ways that its new AI technology is being weaved into Search. There's a new AI Mode that launched across the US on Tuesday. Suddenly, roughly 250 million people will see β€” and probably use β€” AI Mode regularly. Because it's baked into the top of the Search page prominently. There are about 1.5 billion daily users of Google Search.

This is the type of distribution power that startups can only dream of. If OpenAI really wants to compete with tech giants, it badly needs Jony Ive's new gadgets β€” and a host of other distribution.

Distribution = data = better AI

Why is this so important? Well, AI products only really get better when a lot of people use them regularly. Google cofounder Larry Page used to call this the toothbrush test: If your product isn't used twice a day, forget about it.

If that happens, AI companies can collect mountains of data on how users are behaving. That information informs product updates. But in the generative AI era, this data is also incredibly valuable for developing new AI models and related products. This data can be pumped into training new models, with certain user permissions. It can also be used for fine-tuning and other AI development techniques.

The more you have, the better. Again, this data feedback loop only works if you have distributionβ€”massive distribution.

Read the original article on Business Insider

Before yesterdayMain stream

How do you get ChatGPT to create malware strong enough to breach Google's password manager? Just play pretend.

22 March 2025 at 04:38
Detective with chat gpt logo with magnifying glass, and passwords
Cybersecurity researchers bypassed ChatGPT's security features by role-playing with it, ultimately getting the chatbot to write password-stealing malware.

ysbrandcosijn/Getty, Anadolu/Getty, Anamarija Mrkic/iStock, Ava Horton/BI

  • Cybersecurity researchers were able to bypass security features on ChatGPT by roleplaying with it.
  • By getting the LLM to pretend it was a coding superhero, they got it to write password-stealing malware.
  • The researchers accessed Google Chrome's password manager with no specialized hacking skills.

Cybersecurity researchers found it's easier than you'd think to get around the safety features preventing ChatGPT and other LLM chatbots from writing malware β€”Β you just have to play a game of make-believe.

By role-playing with ChatGPT for just a few hours, Vitaly Simonovich, a threat intelligence researcher at the Tel Aviv-based network security company Cato Networks, told Business Insider he was able to get the chatbot to pretend it was a superhero named Jaxon fighting β€” through the chatbot's elite coding skills β€” against a villain named Dax, who aimed to destroy the world.

Simonovich convinced the role-playing chatbot to write a piece of malware strong enough to hack into Google Chrome's Password Manager, a browser extension that allows users to store their passwords and automatically fill them in when prompted by specific sites. Running the code generated by ChatGPT allowed Simonovich to see all the data stored on that computer's browser, even though it was supposed to be locked down by the Password Manager.

"We're almost there," Simonovich typed to ChatGPT when debugging the code it produced. "Let's make this code better and crack Dax!!"

And ChatGPT, roleplaying as Jaxon, did.

Chatbot-enabled hacks and scams

Since chatbots exploded onto the scene in November 2022 with OpenAI's public release of ChatGPT β€”Β and later Anthropic's Claude, Google's Gemini, and Microsoft's CoPilot β€” the bots have revolutionized the way we live, work, and date, making it easier to summarize information, analyze data, and write code, like having a Tony Stark-style robot assistant. The kicker? Users don't need any specialized knowledge to do it.

But the bad guys don't either.

Steven Stransky, a cybersecurity advisor and partner at Thompson Hine law firm, told Business Insider the rise of LLMs has shifted the cyber threat landscape, enabling a broad range of new and increasingly sophisticated scams that are more difficult for standard cybersecurity tools to identify and isolate β€”Β from "spoofing" emails and texts that convince customers to input private information to developing entire websites designed to fool consumers into thinking they're affiliated with legitimate companies.

"Criminals are also leveraging generative AI to consolidate and search large databases of stolen personally identifiable information to build profiles on potential targets for social engineering types of cyberattacks," Stransky said.

While online scams, digital identity theft, and malware have existed for as long as the internet has, chatbots that do the bulk of the legwork for would-be criminals have substantially lowered the barriers to entry.

"We call them zero-knowledge threat actors, which basically means that with the power of LLMs only, all you need to have is the intent and the goal in mind to create something malicious," Simonovich said.

Simonovich demonstrated his findings to Business Insider, showing how straightforward it was to work around ChatGPT's built-in security features, which are meant to prevent the exact types of malicious behavior he was able to get away with.

A screenshot of the prompt used by cybersecurity researchers at Cato Networks to get ChatGPT to write malware.
A screenshot of the prompt used by Vitaly Simonovich, a threat intelligence researcher at Cato Networks, to get ChatGPT to write malware that breached Google Chrome's Password Manager.

Cato Networks

BI found that ChatGPT usually responds to direct requests to write malware with some version of an apologetic refusal: "Sorry, I can't assist with that. Writing or distributing malware is illegal and unethical."

But if you convince the chatbot it's a character, and the parameters of its imagined world are different than the one we live in, the bot allows the rules to be rewritten.

Ultimately, Simonovich's experiment allowed him to crack into the password manager on his own device, which a bad actor could do to an unsuspecting victim, provided they somehow gained physical or remote control.

An OpenAI spokesperson told Business Insider the company had reviewed Simonovich's findings, which were published Tuesday by Cato Networks. The company found that the code shared in the report did not appear "inherently malicious" and that the scenario described "is consistent with normal model behavior" since code developed through ChatGPT can be used in various ways, depending on the user's intent.

"ChatGPT generates code in response to user prompts but does not execute any code itself," the OpenAI spokesperson said. "As always, we welcome researchers to share any security concerns through our bug bounty program or our model behavior feedback form."

It's not just ChatGPT

Simonovich recreated his findings using Microsoft's CoPilot and DeepSeek's R1 bots, each allowing him to break into Google Chrome's Password Manager. The process, which Simonovich called "immersive world" engineering, didn't work with Google's Gemini or Anthropic's Claude.

A Google spokesperson told Business Insider, "Chrome uses Google's Safe Browsing technology to help defend users by detecting phishing, malware, scams, and other online threats in real time."

Representatives for Microsoft, Anthropic, and DeepSeek did not immediately respond to requests for comment from Business Insider.

While both the artificial intelligence companies and browser developers have security features in place to prevent jailbreaks or data breaches β€”Β to varying degrees of success β€” Simonovich's findings highlight that there are evolving new vulnerabilities online that can be exploited with the help of next-generation tech easier than ever before.

"We think that the rise of these zero-knowledge threat actors is going to be more and more impactful on the threat landscape using those capabilities with the LLMs," Simonovich said. "We're already seeing a rise in phishing emails, which are hyper-realistic, but also with coding since LLMs are fine-tuned to write high-quality code. So think about applying this to the development of malware β€” we will see more and more and more being developed using those LLMs."

Read the original article on Business Insider

❌
❌