
Suchir Balaji's mom talks about his life, death, and disillusionment with OpenAI: 'He felt AI is a harm to humanity'

26 December 2024 at 02:00
Suchir Balaji as a youngster.

Poornima Ramarao

  • Former OpenAI employee Suchir Balaji alleged the startup violates copyright laws.
  • His death in November reignited a debate about how top AI companies affect humanity.
  • In an interview with BI, Balaji's mom shared his initial hopes for AI, and why they were dashed.

In October, Suchir Balaji made waves when he spoke against OpenAI.

In an interview with The New York Times, he discussed how powerhouse AI companies might be breaking copyright laws.

OpenAI's models are trained on information from the internet. Balaji helped collect and organize that data, but he grew to feel the practice was unfair. He resigned in August. And in November, he was named by NYT lawyers as someone who might have "unique and relevant documents" for their copyright-infringement case against OpenAI.

"If you believe what I believe, you have to just leave," he told the Times.

On November 26, the young engineer was found dead in his apartment. The tragedy struck a chord, stoking conspiracy theories, grief, and debate. What do we lose when AI models gain?

In an exclusive interview with Business Insider, Balaji's mother, Poornima Ramarao, offered clues.

Balaji joined OpenAI because of AI's potential to do good, she said. Early on, he loved that the models were open-source, meaning freely available for others to use and study. As the company became more financially driven and ChatGPT launched, those hopes faded. Balaji went from believing in the mission to fearing its consequences for publishers and society as a whole, she told BI.

"He felt AI is a harm to humanity," Ramarao said.

An OpenAI spokesperson shared that Balaji was a valued member of the team, and that his passing deeply affected those who worked closely with him.

"We were devastated to learn of this tragic news and have been in touch with Suchir's family to offer our full support during this difficult time," the spokesperson wrote in a statement. "Our priority is to continue to do everything we can to assist them."

"We first became aware of his concerns when The New York Times published his comments and we have no record of any further interaction with him," OpenAI's spokesperson added. "We respect his, and others', right to share views freely. Our hearts go out to Suchir's loved ones, and we extend our deepest condolences to all who are mourning his loss."

Recruited by OpenAI

Growing up, Balaji's dad thought he was "more than average," Ramarao said. But she thought her son was a prodigy. By two years old, he could form complex sentences, she recalled.

"As a toddler, as a little 5-year-old, he never made mistakes. He was perfect," Ramarao said.

At age 11, he started learning to code using Scratch, a programming language geared toward kids. Soon, he was asking his mom, who's a software engineer, questions that went over her head. At 13, he built his own computer. At 14, he wrote a science paper about chip design.

"Dad would say, don't focus too much. Don't push him too much," Ramarao said.

Suchir Balaji and his mom, Poornima Ramarao.

Poornima Ramarao

They moved school districts to find him more challenges. His senior year, he was the US champion in a national programming contest for high-schoolers, which led Quora, the popular online knowledge-sharing forum, to recruit him at 17. His mom was against it, so he fibbed to her about applying. But he had to fess up by his first day on the job because he couldn't drive yet.

"I had to give him a ride to his office in Mountain View," Ramarao said.

She was worried about how he'd handle "so many adults," but he made friends to play poker with and enjoyed Quora's abundant cafeteria.

She viewed it as a lesson in learning to trust her son.

"Then I understood, okay, my son is really an advanced person. I cannot be a hindrance to him," Ramarao said.

After working for about a year, he went to UC Berkeley, where he soon won $100,000 in a TSA-sponsored challenge to improve its passenger-screening algorithms.

It was all enough to get him recruited by OpenAI. He interned with the company in 2018, per his LinkedIn, then joined full-time in 2021 after graduating.

An early standout

Suchir Balaji on vacation before his death.

Poornima Ramarao

Over his nearly four-year tenure at OpenAI, Balaji became a standout, eventually making significant contributions to ChatGPT's training methods and infrastructure, John Schulman, an OpenAI cofounder, wrote in a social media post about Balaji.

"He'd think through the details of things carefully and rigorously. And he also had a slight contrarian streak that made him allergic to 'groupthink' and eager to find where the consensus was wrong," Schulman said in the post. Schulman didn't reply to BI's requests for comment.

Balaji had joined the company at a critical juncture, though.

OpenAI started off as a non-profit in 2015 with the explicit mission of ensuring that AI benefited all of humanity. As the startup moved away from its open-source and non-profit roots, Balaji became more concerned, Ramarao said.

When it launched ChatGPT publicly in November 2022, he reconsidered the copyright implications, she said.

Earlier that year, a big part of Balaji's role was gathering digital data, from all corners of the English-speaking internet, for GPT-4, a model that would soon power ChatGPT, per the Times interview. Balaji thought of this like a research project.

Using other people's data for research was one thing, he wrote in a later essay. Using it to make a product that could take away from those creators' revenue or traffic was another.

OpenAI didn't comment on Balaji's concerns to Business Insider. In court, it has argued that the legal doctrine of "fair use" protects how its models ingest publicly available internet content.

"Too naive and too innocent"

By late 2023 and early 2024, Balaji's enthusiasm for OpenAI had fizzled out entirely, and he began to criticize CEO Sam Altman in conversations with friends and family, Ramarao said.

He used to tell his mom when he was working on "something cool," but more and more, he had nothing to say about his job, she told BI.

When he resigned in August, Ramarao didn't press the issue.

Come October, when she saw his bombshell interview with the Times, she unleashed a torrent of anxiety at Balaji. In shining a spotlight on what he thought was corporate wrongdoing, he was taking it all on his shoulders, she said.

"I literally blasted him," she said of their conversation. "'You should not go alone. Why did you give your picture? Why did you give your name? Why don't you stay anonymous? What's the need for you to give your picture?'"

"You have to go as a group. You have to go together with other people who are like-minded. Then he said, 'Yeah, yeah, yeah. I'm connecting with like-minded people. I'm building a team,'" she continued. "I think he was too naive and too innocent to understand this dirty corporate world."

Balaji's parents are calling for an investigation

When Balaji left OpenAI in August, he took a break.

"He said, 'I'm not taking up another job. Don't ask me,'" Ramarao said.

From Balaji's parents' vantage point, everything seemed fine with the young coder. He was financially stable, with enough OpenAI stock to buy a house one day, she said. He had plans to build a machine learning non-profit in the medical field.

"He wanted to do something for society," his mom said.

On November 21, a Thursday, Balaji celebrated his 26th birthday with friends while on vacation. The next day, he let his mom know when his flight home took off, and spoke with his dad on the phone before dinner. His dad wished him a happy birthday and said he was sending a gift.

Suchir Balaji with friends on vacation.

Poornima Ramarao

According to Ramarao, the medical examiner said that Balaji died that evening, or possibly the next morning.

"He was upbeat and happy," she said. "What can go wrong within a few hours that his life is lost?"

On Saturday and Sunday, Ramarao didn't hear from her son. She thought that maybe he'd lost his phone or gone for a hike. But on Monday, she went and knocked on his door. He didn't answer. She thought about filing a missing person complaint. But, knowing he'd have to go in person to remove it, she hesitated. "He'll get mad at me," she said of her thinking at the time.

The next morning, she called the San Francisco police. They found his body just after 1 p.m. PST, according to a spokesperson for the department. But Ramarao wasn't told or allowed inside, she said. As officers trickled in, she pleaded with them to check if his laptop and toothbrush were missing, she told BI; that way she'd know if he'd traveled.

"They didn't give the news to me," Ramarao said. "I'm still sitting there thinking, 'My son is traveling. He's gone somewhere.' It's such a pathetic moment."

Around 2 p.m., they told her to go home. She refused.

"I sat there firmly," Ramarao told BI.

Then, around 3:20 p.m., a long white van pulled up with the light on.

"I was waiting to see medical help or nurses or someone coming out of the van," she said. "But a stretcher came. A simple stretcher. I ran and asked the person. He said, 'We have a dead body in that apartment.'"

About an hour later, a medical examiner and police asked to speak with Ramarao one-on-one inside the apartment's office. They said that Balaji had died by suicide, and that from looking at CCTV footage, he was alone, according to Ramarao. There was no initial evidence of foul play, the department spokesperson told BI.

Balaji's parents aren't convinced. They arranged for a private autopsy, completed in early December. Ramarao said the results were atypical, but she declined to share any more details. BI has not seen a copy of the report.

Balaji's parents are working with an attorney to press the SF police to reopen the case and do a "proper investigation," Ramarao said.

Meanwhile, they and members of their community are trying to raise awareness of his case through social media and a Change.org petition. Besides seeking answers, they want to spark a broader discussion about whistleblowers' vulnerability and lack of protections, Ramarao and a family friend, who's helping organize an event about Balaji on December 27, told BI.

"We want to leave the question open," Ramarao said. "It doesn't look like a normal situation."

BI shared a detailed account of Ramarao's concerns and memory of November 26 with spokespeople for the SF police and the Office of the Chief Medical Examiner. These officials did not respond or offer comments.

Ramarao emphasized to BI that the family isn't pointing fingers at OpenAI.

Suchir Balaji with his parents.

Poornima Ramarao

'Yes, mom'

Ramarao said she shared a close bond with her son. He didn't eat enough fruit, so every time she visited, she'd arrange shipments to his apartment from Costco. He tended to skip breakfast, so she'd bring granola bars and cookies.

Balaji rarely expressed his emotions and always paid for everything. But on November 7, during their last meal together, something made Ramarao try extra hard to pay, give him a ride home, and seek reassurance. He still paid for the meal and called an Uber. But he did offer his mom two words of encouragement.

"I asked him, 'Suchir, this is the hardship. This is how I raised you, and if you were to choose parents now, would you choose me as mom?' He didn't think for a second," she said. "'Yes, mom.' And you know what? As a mother, that will keep me going as long as I'm alive."

Read the original article on Business Insider


Verily's plan for 2025: Raise money, pivot to AI, and break up with Google

18 December 2024 at 02:01
Verily CEO Stephen Gillett.

Business Wire

  • Verily, an Alphabet spinoff, plans to raise money and focus its strategy on healthcare AI in 2025.
  • It plans to sell tech tools that other companies can use to build AI models and apps.
  • The changes are underway as Verily separates itself from Alphabet and looks to mature as a company.

Verily Life Sciences plans to reorient its strategy around AI in 2025, just as it marks its 10th anniversary as a company under Alphabet.

The unit, which uses technology and data to improve healthcare, is looking to mature. As of January, it will have separated from many of Google's internal systems in an attempt to stand independently. Simultaneously, it's refocusing its strategy around AI, according to two employees with knowledge of the matter, who asked to remain anonymous to discuss confidential information.

The new strategy, the product of a multi-year effort across teams, would primarily involve other healthcare companies using Verily's tech infrastructure to develop AI models and apps. Verily ultimately aims to become companies' one-stop shop for tech needs like training AI for drug discovery and building apps for health coaching.

The unit is also looking to raise another round of capital in the next year, the two people familiar with the matter said. The company's last investment was a $1 billion round led by Alphabet in 2022. Alphabet will likely lead the round again, although leadership could also bid for outside capital as Verily tries to become "increasingly independent," one source said.

The question for next year is whether Verily can finally start turning long-gestating ideas into profits. One of the people said Verily still generates the most revenue selling stop-loss insurance to employers, which is a far cry from the higher-margin business it's aiming for. The Wall Street Journal reported last year that this business, called Granular Insurance, was Verily's most lucrative.

Verily has been criticized in the past for having a rudderless strategy. It's entertained bets on topics as diverse as editing mosquito populations and helping pharmaceutical companies run clinical trials.

In an email to Business Insider, a spokesperson for Verily declined to comment on non-public information. He confirmed the company's plans to provide tech infrastructure for third parties, designed to provide "precision health capabilities across research and care."

Verily's South San Francisco campus

Tada Images

The AI strategy's origin story

Verily's idea to become a tech provider for other healthcare companies grew out of its own internal needs a few years ago when it decided to "re-platform" its various bets on a shared infrastructure, a source familiar with the matter said.

The multi-year effort is now coming to fruition, and Verily plans to sell the core technology it uses to health plans, providers, digital health startups, and life sciences companies.

The platform will include data storage and AI training. Companies could also use Verily's tech tools to spin up apps without having to code as much. For example, a digital health startup could use Verily's tools to build a coaching app with AI insights on weight loss.

"Large pharma companies, for example, look at the work we do and recognize that the data science applications or clinical research tools that they need to build themselves could be better if they were built using our platform," said Verily CEO Stephen Gillett in an interview with Fortune in November.

In that interview, Gillett said Verily's tech tools would include sophisticated AI capabilities for healthcare, data aggregation, privacy, and consent. One source said the company plans to start rolling them out in 2025.

Myoung Cha, Verily's chief product officer, joined from startup Carbon Health.

Carbon Health

Even as the leading AI models learn from the entirety of the internet, healthcare data remains largely private. As a result, Verily is betting that there's a growing need to further specialize models for patient care and research. The company already does this work through its partnerships with clients like the National Institutes of Health. Through a business line called Workbench, Verily hosts massive datasets for the NIH, complete with analysis tools.

Verily hasn't dropped its ambitions to grow its own healthcare business. In 2026, it plans to relaunch a diabetes and hypertension-focused app, Lightpath, broadly for health plans and employers, this time with AI coaches supplementing human ones. Verily also intends to expand Lightpath to more health conditions.

Verily's reshuffling

Verily spun out of Google's moonshot group in 2015 and remained part of Alphabet's collection of long-term businesses, sometimes called "other bets." Under its then-CEO Andy Conrad, the unit explored a menagerie of ideas from surgical robots to wearables. Several of these projects, such as glucose-monitoring contact lenses, haven't panned out.

Shortly after Gillett replaced Conrad as CEO in 2023, he announced the company would lay off 15% of its workforce and "advance fewer initiatives with greater resources."

Since then, Verily has pruned projects and teams to save costs and sharpen its focus. Dr. Amy Abernethy, Verily's former chief medical officer who joined the company in 2021, focused on aiding clinical research before departing late last year.

Verily's shift to AI, meanwhile, seems to have coincided with the hiring of Myoung Cha and Bharat Rajagopal as the chief product and revenue officers, respectively, earlier this year.

Andy Conrad, Verily's former CEO.

Google

Cutting ties with Google

Executing the AI strategy isn't the only challenge Verily's leadership faces in 2025.

Since 2021, the life science unit has been reducing its dependency on Google's internal systems and technology through an internal program known as Flywheel. BI previously reported that it set a December 16, 2024, deadline to cut many of these ties.

The separation involves Verily employees losing many of their cushy Google benefits, which has been a point of consternation for the group, the two people said.

Gillett remarked in a town hall meeting earlier this year that some employees may feel Verily is no longer the place for them after the separation, according to a person who heard the remarks.

Read the original article on Business Insider

YouTube star Marques Brownlee has pointed questions for OpenAI after its Sora video model created a plant just like his

10 December 2024 at 11:23
Marques Brownlee reviewed OpenAI's Sora.

Marques Brownlee

  • On Monday, OpenAI released Sora, an AI video generator, in hopes of helping creators.
  • One such creative, Marques Brownlee, wants to know if his videos were used to train Sora.
  • "We don't know if it's too late to opt out," Brownlee said in his review of Sora.

On Monday, OpenAI released its Sora video generator to the public.

CEO Sam Altman showed off Sora's capabilities as part of "Shipmas," OpenAI's term for the 12 days of product launches and demos it's doing ahead of the holidays. The AI tool still has some quirks, but it can make videos of up to 20 seconds from a few words of instruction.

During the launch, Altman pitched Sora as an assistant for creators and said that helping them was important to OpenAI.

"There's a new kind of co-creative dynamic that we're seeing emerge between early testers that we think points to something interesting about AI creative tools and how people will use them," he said.

One such early tester was Marques Brownlee, whose tech reviews have garnered roughly 20 million subscribers on YouTube. One could say this is the kind of creator that OpenAI envisions "empowering," to borrow execs' term from the livestream.

But in his Sora review, posted on Monday, Brownlee didn't sugarcoat his skepticism, especially about how the model was trained. Were his own videos used without his knowledge?

This is a mystery, and a controversial one. OpenAI hasn't said much about how Sora is trained, though experts believe the startup downloaded vast quantities of YouTube videos as part of the model's training data. There's no legal precedent for this practice, but Brownlee said that to him, the lack of transparency was sketchy.

"We don't know if it's too late to opt out," Brownlee said.

In an email, an OpenAI spokesperson said Sora was trained using proprietary stock footage and videos available in the public domain, without commenting on Business Insider's specific questions.

In a blog post about some of Sora's technical development, OpenAI said the model was partly trained on "publicly available data, mostly collected from industry-standard machine learning datasets and web crawls."

Brownlee's big questions for OpenAI

Brownlee threw dozens of prompts at Sora, asking it to generate videos of pretty much anything he could think of, including a tech reviewer talking about a smartphone while sitting at a desk in front of two displays.

Sora's rendering was believable, down to the reviewer's gestures. But Brownlee noticed something curious: Sora added a small fake plant in the video that eerily matched Brownlee's own fake plant.

Sora included a fake plant in a video that was similar to Brownlee's own plant.

Marques Brownlee

The YouTuber showed all manner of "horrifying and inspiring" results from Sora, but this one seemed to stick with him. The plant looks generic, to be sure, but for Brownlee it's a reminder of the unknown behind these tools. The models don't create anything fundamentally novel; they're predicting frame after frame based on patterns they recognize from source material.

"Are my videos in that source material? Is this exact plant part of the source material? Is it just a coincidence?" Brownlee said. "I don't know." BI asked OpenAI about these specific questions, but the startup didn't address them.

Sora created a video of a tech reviewer with a phone.

Marques Brownlee

Brownlee discussed Sora's guardrails at some length. One feature, for example, can make videos from images that people upload, but it's pretty picky about weeding out copyrighted content.

A few commenters on Brownlee's video said they found it ironic that Sora was careful to steer clear of intellectual property, except for that of the people whose work was used to produce it.

"Somehow their rights dont matter one bit," one commenter said, "but uploading a Mickeymouse? You crook!"

In an email to BI, Brownlee said he was looking forward to seeing the conversation evolve.

Millions of people. All at once.

Overall, the YouTuber gave Sora a mixed review.

Outside of its inspiring features β€” it could help creatives find fresh starting points β€” Brownlee said he feared that Sora was a lot for humanity to digest right now.

Brownlee said the model did a good job of refusing to depict dangerous acts or use images of people without their consent. And though it's easy to crop out, it adds a watermark to the content it makes.

Sora's relative weaknesses might provide another layer of protection from misuse. In Brownlee's testing, the system struggled with object permanence and physics. Objects would pass through each other or disappear. Things might seem too slow, then suddenly too fast. Until the tech improves, at least, this could help people spot the difference between, for example, real and fake security footage.

But Brownlee said the videos would only get better.

"The craziest part of all of this is the fact that this tool, Sora, is going to be available to the public," he said, adding, "To millions of people. All at once."

He added, "It's still an extremely powerful tool that directly moves us further into the era of not being able to believe anything you see online."

Read the original article on Business Insider

OpenAI launches AI video generator Sora to the public

9 December 2024 at 11:17
OpenAI just launched its AI video generator, Sora, to the public.

screenshot/OpenAI

  • OpenAI publicly launched the AI video generator Sora, offering new creative tools.
  • Sora can create up to 20-second videos from text and modify existing videos by filling frames.
  • It's rolling out in the US and many other countries to paid ChatGPT Plus and Pro users.

As part of Shipmas Day 3, OpenAI just launched its AI video generator, Sora, to the public.

Sora can generate up to 20-second videos from written instructions. The tool can also complete a scene and extend existing videos by filling in missing frames.

Rohan Sahai, Sora's product lead, said a team of about five or six engineers built the video generator in months.

"Sora is a tool," Joey Flynn, Sora's product designer, said. "It allows you to be multiple places at once, try multiple ideas at once, try things that are entirely impossible before."

OpenAI showed off the new product and its various features during a livestream Monday with CEO Sam Altman.

A screenshot of Sora's "explore" page for browsing AI videos from the community.

OpenAI

Sora includes an "explore" page, a browsable feed of videos shared by the community. OpenAI also showed the various style presets available, such as pastel symmetry, film noir, and balloon world.

To customize videos further, there's also Storyboard, which lets users organize and edit sequences on a timeline. The feature helps pull together text prompts that Sora then builds into scenes.

The company showed off Sora's features, including Storyboard.

screenshot/OpenAI

In February, OpenAI made Sora available to a limited group of creators, including designers and filmmakers, to get feedback on the model.

The company said in a blog post at the time that the product "may struggle to simulate the physics of a complex scene" and may not understand cause and effect. It may also mix up left and right and struggle to depict events that happen over time, it added.

The tool has already made a strong impression on some in Hollywood. Tyler Perry previously put his plans for an $800 million studio expansion on hold after seeing Sora. The billionaire entertainer referred to Sora demonstrations as "shocking" and said AI would likely reduce the need for large sets and traveling to locations for shoots.

However, the tool's product designer said in the demonstration Monday that Sora wasn't going to create feature films at the click of a button. Flynn said the tool was more "an extension of the creator who's behind it."

"If you come into Sora with the expectation that you'll just be able to click a button and generate a feature film, I think you're coming in with the wrong expectation," Flynn added.

The team also briefly touched on safety issues. Sahai said during the presentation that OpenAI had a "big target" on its back and that the team wanted to prevent illegal activity while balancing creative expression with the new product.

"We're starting a little conservative, and so if our moderation doesn't quite get it right, just give us that feedback," Sahai said. "We'll be iterating."

OpenAI said Sora would roll out to the public in the US and many other countries on Monday. But Altman said it would be a while before the tool became available in the UK and most of Europe.

ChatGPT Plus subscribers, who pay $20 monthly, can get up to 50 generations a month of AI videos that are five seconds long and have a 720p resolution. ChatGPT Pro users, who pay $200 a month, get unlimited generations in the slow-queue mode and 500 faster generations, Altman said in the demo. Pro users can generate videos up to 20 seconds long at 1080p resolution, without watermarks.

While nonpaying users can't create Sora videos, they can browse Sora's explore feed, Altman said.

The prominent YouTuber Marques Brownlee published what he described as the first-ever Sora review on Monday, telling his nearly 20 million subscribers that the results were both "horrifying and inspiring."

After a brief overview of Sora's strengths and weaknesses (the YouTuber said that it could make provocative videos of cosmic events in deep space and other abstractions but that it struggled with realistic depictions of physics in day-to-day life, like a man running with a football), Brownlee was frank about his concerns.

Millions of people can now use Sora for basically whatever they want. And while the program has decent guardrails, one can be circumvented, he said. The little watermark that Sora adds to the bottom-right corner of its videos can be cropped out, Brownlee said.

"And it's still an extremely powerful tool that directly moves us further into the era of not being able to believe anything you see online," he said, adding: "This is a lot for humanity to digest right now."

Read the original article on Business Insider

ChatGPT's search market share jumped recently, while Google has slipped, new data shows

6 December 2024 at 10:52
Sam Altman with TIME logo behind him side-by-side Sundar Pichai adjusting ear piece

Mike Coppola/NurPhoto/Getty

  • Google's search share slipped from June to November, a new survey finds.
  • ChatGPT gained market share over the period, potentially challenging Google's dominance.
  • Still, generative AI features are benefiting Google, increasing user engagement.

ChatGPT is gaining on Google in the lucrative online search market, according to new data released this week.

In a recent survey of 1,000 people, OpenAI's chatbot was the top search provider for 5% of respondents, up from 1% in June, according to brokerage firm Evercore ISI.

Millennials drove the most adoption, the firm added in a research note sent to investors.

Google still dominates the search market, but its share slipped. According to the survey results, 78% of respondents said their first choice was Google, down from 80% in June.

It's a good business to be a gatekeeper

A few percentage points may not seem like much, but controlling how people access the world's online information is a big deal. It's what fuels Google's ads business, which produces the bulk of its revenue and huge profits. Microsoft Bing only has 4% of the search market, per the Evercore report, yet it generates billions of dollars in revenue each year.

ChatGPT's gains, however slight, are another sign that Google's status as the internet's gatekeeper may be under threat from generative AI. This new technology is changing how millions of people access digital information, sparking a rare debate about the sustainability of Google's search dominance.

OpenAI launched a full search feature for ChatGPT at the end of October. It's also got a deal with Apple this year that puts ChatGPT in a prominent position on many iPhones. Both moves are a direct challenge to Google. (Axel Springer, the owner of Business Insider, has a commercial relationship with OpenAI).

ChatGPT user satisfaction vs Google

When the Evercore analysts drilled down on the "usefulness" of Google's AI tools, ChatGPT, and Copilot, Microsoft's consumer AI helper, across 10 different scenarios, they found intriguing results.

There were a few situations where ChatGPT beat Google on satisfaction by a pretty wide margin: people learning specific skills or tasks, wanting help with writing and coding, and looking to be more productive at work.

It even had a 4% lead in a category that suggests Google shouldn't sleep too easy: people researching products and pricing online.

Google is benefiting from generative AI

Still, Google remains far ahead, and there were positive findings for the internet giant from Evercore's latest survey.

Earlier this year, Google released Gemini, a ChatGPT-like helper, and rolled out AI Overviews, a feature that uses generative AI to summarize many search results. In the Evercore survey, 71% of Google users said these tools were more effective than the previous search experience.

In another survey finding, among people using tools like ChatGPT and Gemini, 53% said they're searching more. That helps Google as well as OpenAI.

What's more, the tech giant's dominance hasn't dipped when it comes to commercial searches: people looking to buy stuff like iPhones and insurance. This suggests Google's market share slippage is probably more about queries for general information, meaning Google's revenue growth from search is probably safe for now.

So when it comes to gobbling up more search revenue, ChatGPT has its work cut out for it.

Evercore analyst Mark Mahaney told BI that even a 1% share of the search market is worth roughly $2 billion a year in revenue. But that only works if you can make money from search queries as well as Google does.

"That's 1% share of commercial searches and assuming you can monetize as well as Google β€” and the latter is highly unlikely in the near or medium term," he said.

Read the original article on Business Insider

OpenAI rolls out the full version of o1, its hot reasoning model

5 December 2024 at 11:23
Sam Altman presenting onstage with the OpenAI logo behind him.
OpenAI CEO Sam Altman.

Jason Redmond/AFP/Getty Images

  • OpenAI released the full version of its o1 reasoning model on Thursday.
  • It says the o1 model, initially previewed in September, is now multimodal, faster, and more precise.
  • It was released as part of OpenAI's 12-day product and demo launch, dubbed "shipmas."

On Thursday, OpenAI released the full version of its hot new reasoning model as part of the company's 12-day sprint of product launches and demos.

The model, known as o1, was released in preview in September. OpenAI CEO Sam Altman said on day one of the company's livestream that the latest version was more accurate, faster, and multimodal. Research scientists on the livestream said an internal evaluation indicated it made major mistakes about 34% less often than the preview version.

The model, which seems geared toward scientists, engineers, and coders, is designed to solve thorny problems. The researchers said it's the first model that OpenAI trained to "think" before it responds, meaning it tends to give more detailed and accurate responses than other AI helpers.

To demonstrate o1's multimodal abilities, the researchers uploaded a photo of a hand-drawn system for a data center in space and asked the model to estimate the cooling-panel area required to operate it. After about 10 seconds, o1 produced what looked, to a layperson, like a sophisticated essay full of equations, ending with what was apparently the right answer.

The researchers think o1 should be useful in daily life, too. Whereas the preview version might think for a while even if you merely said hi, the latest version is designed to respond faster to simpler queries. In Thursday's livestream, it was about 19 seconds faster than the old version at listing the Roman emperors.

All eyes are on OpenAI's releases over the next week or so, amid a debate about how much more dramatically models like o1 can improve. Tech leaders are divided on this issue; some, like Marc Andreessen, argue that AI models aren't getting noticeably better and are converging to perform at roughly similar levels.

With its 12-day deluge of product news, dubbed "shipmas," OpenAI may be looking to quiet some critics while spreading some holiday cheer.

"It'll be a way to show you what we've been working on and a little holiday present from us," Altman said on Thursday.

