Google says it could water down its search partnerships in antitrust proposal

Google on Friday proposed limiting its search partnerships as a possible remedy to resolve an antitrust case regarding its search business.

Google; Chelsea Jia Feng/BI

  • Google on Friday proposed possible remedies to resolve an antitrust case over its search business.
  • Last month, the DOJ suggested that the judge force Google to sell its Chrome browser.
  • Judge Amit Mehta is expected to rule on the final remedies by August 2025.

Google on Friday proposed limitations to its search partnerships as a potential remedy to resolve antitrust violations in its search business.

Unlike the Justice Department's proposal, Google's would allow it to continue partnering with third-party companies like Apple in revenue-sharing deals that make Google the default search engine on their devices. Those deals, however, would become non-exclusive, the company said in its filing.

"We don't propose these changes lightly," Google said in a blog post about the proposal. "They would come at a cost to our partners by regulating how they must go about picking the best search engine for their customers. And they would impose burdensome restrictions and oversight over contracts that have reduced prices for devices and supported innovation in rival browsers, both of which have been good for consumers."

Last month, the Justice Department and a group of states asked Judge Amit Mehta to force Google to sell its Chrome browser to resolve the case. They also asked that Google be barred from entering default search agreements with Apple and other companies and be required to open its search results to competitors.

Industry experts previously told Business Insider that selling Chrome off would open up the browser market and would likely be cheered on by search rivals and advertisers, though it remains unclear how a possible Chrome spinoff might work.

Both sides will present arguments for their proposals at a hearing scheduled for April. The judge is expected to rule on the final remedies by August.

Kent Walker, Google's president of global affairs, previously said the company intends to appeal the judge's ruling, potentially delaying a final decision by several years.

Representatives for the Justice Department's antitrust division did not immediately respond to a request for comment from Business Insider.

Read the original article on Business Insider

Google cut manager and VP roles by 10% in its efficiency push, CEO Sundar Pichai said in an internal meeting

Google CEO Sundar Pichai.

Justin Sullivan/Getty

  • Google's CEO said the company had cut manager, director, and VP roles by 10% as part of its efficiency drive.
  • The company has been boosting efficiency by reducing layers and reorganizing teams.
  • Google has been facing down threats from OpenAI and other AI rivals.

Google has cut the number of top management roles by 10% in its push for efficiency, CEO Sundar Pichai told employees in an all-hands meeting on Wednesday.

Pichai said that Google had made changes over the past couple of years to simplify the company and be more efficient, according to two people who heard the remarks, who asked to remain anonymous because they're not authorized to speak to the press.

Pichai said this had included a 10% reduction in managers, directors, and vice presidents, one source said.

A Google spokesperson said that some of the roles in that 10% figure were transitioned to individual contributor roles and that some were role eliminations.

The company has been on an efficiency drive for more than two years. In September 2022, Pichai said he wanted Google to be 20% more efficient, and the following January, the company had a historic round of layoffs that saw 12,000 roles eliminated.

The efficiency push has coincided with AI rivals such as OpenAI unleashing new products that threaten Google's search business.

Google has responded by injecting generative AI features into its core businesses and launching a flurry of new AI products, including an AI video generator that beat OpenAI's in early testing and a new set of Gemini models, among them a "reasoning" model that shows its thought process.

In Wednesday's all-hands meeting, Pichai also clarified the meaning of the word "Googleyness," telling staff that it needed updating for a modern Google.

Are you a current or former Google employee with something to share? You can reach the reporter Hugh Langley via the encrypted messaging app Signal (+1-628-228-1836) or email ([email protected]).

Google's CEO just clarified what 'Googleyness' means in 2024

Google CEO Sundar Pichai speaks at a Google I/O event in Mountain View, California.

Jeff Chiu / AP Photo

  • In an all-hands, Google CEO Sundar Pichai said the word 'Googleyness' had become too broad.
  • Pichai clarified what the word means for the company.
  • Now it's about being "Mission First" and being "Bold and Responsible."

"Googleyness" has long been a vague word for the search giant. Once used to determine if a candidate is a good fit for hiring, it has evolved in definition over the years.

Google CEO Sundar Pichai just attempted to clarify what the word means for Googlers now.

In a company all-hands meeting on Wednesday, Pichai told staff the definition of "Googleyness" had become too broad and that he felt obliged to clarify it, according to two employees who heard the remarks, who asked to remain anonymous because they're not authorized to speak to the press.

Pichai defined "Googleyness" as the following, per one of those sources:

"Mission First"

"Make Helpful Things"

"Be Bold & Responsible"

"Stay Scrappy"

"Hustle & Have Fun"

"Team Google"

A Google spokesperson declined to comment.

The term "Googleyness" has always been amorphous. In his 2015 book Work Rules, Google's former head of people operations, Laszlo Block, listed certain attributes that he considered "Googleyness," such as "intellectual humility," "enjoying fun," and "comfort with ambiguity."

The company previously changed its hiring guidelines to "avoid confusing Googleyness with culture fit," The Information reported in 2019. The change came after the company had been criticized for a lack of diversity in its workplace.

Google's AI video generator blows OpenAI's Sora out of the water. YouTube may be a big reason.

A video from Google's Veo 2

Google

  • Early testers of Google's new AI video generator compared it with OpenAI's Sora.
  • So far, Google's results are blowing OpenAI's out of the water.
  • Google has tapped YouTube to train its AI models but says other companies can't do the same.

Not to let OpenAI have all the fun with its 12 days of 'Shipmas,' Google on Monday revealed its new AI video generator, Veo 2. Early testers found it's blowing OpenAI's Sora out of the water.

OpenAI made its Sora AI video generator available for general use earlier this month, while Google's is still in early preview. Still, people are sharing comparisons of the two models running the same prompts, and so far, Veo has proved more impressive.

Why is Google's Veo 2 doing better than Sora so far? The answer may be YouTube, which Google owns and has used to train these models.

TED host and former Googler Bilawal Sidhu shared some comparisons of Google's Veo 2 and OpenAI's Sora on X.

He said he used the prompt, "Eating soup like they do in Europe, the old fashioned way," which generated a terrifying result in Sora and something more impressive in Google's Veo 2.

Veo 2 prompt: "Eating soup like they do in Europe, the old fashioned way" https://t.co/gX9gh1fFy6 pic.twitter.com/kgR7VP2URl

— Bilawal Sidhu (@bilawalsidhu) December 18, 2024

Here's another, which took a prompt that YouTube star Marques Brownlee had tried in a video reviewing Sora.

Google Veo 2 vs. OpenAI Sora

Tried MKBHD's prompt: "A side scrolling shot of a rhinoceros walking through a dry field of low grass plants"

Took the 1st video I got out of Veo, and it's not even close. Prompt adherence & physics modeling? Leagues apart. Sora nails the look, but… pic.twitter.com/mus9MdRsWo

— Bilawal Sidhu (@bilawalsidhu) December 17, 2024

This one from EasyGen founder Ruben Hassid included a prompt for someone cutting a tomato with a knife. In the video he shared, Veo 2's knife slices cleanly through the tomato and avoids the fingers, while Sora's knife cuts through the hand.

I tested Sora vs. the new Google Veo-2.

I feel like comparing a bike vs. a starship: pic.twitter.com/YcHsVcUyn2

— Ruben Hassid (@RubenHssd) December 17, 2024

Granted, these are cherry-picked examples, but the consensus among AI enthusiasts is that Google has outperformed.

Andreessen Horowitz partner Justine Moore wrote on X that she had spent a few hours testing Veo against Sora and found that Sora "biases towards more motion" while Veo focuses on accuracy and physics.

Sora vs. Veo 2:

I spent a few hours running prompts on both models and wanted to share some comparisons ⬇️.

IMO - Sora biases towards more motion, whereas Veo focuses more on accuracy / physics. And a larger % of clips from Veo are usable.

"man jumping over hurdles" pic.twitter.com/WI9zIaJA64

— Justine Moore (@venturetwins) December 18, 2024

Google has been open about tapping YouTube data for its AI, but it does not permit others to do the same. The New York Times previously reported that OpenAI had trained its models using some YouTube data anyway. YouTube CEO Neal Mohan said OpenAI doing this would violate Google's policies.

BI previously reported that Google's DeepMind also tapped YouTube's vast trove of content to build an impressive music generator that never saw the light of day.

Google did not immediately respond to a request for comment.

Verily's plan for 2025: Raise money, pivot to AI, and break up with Google

Verily CEO Stephen Gillett.

Business Wire

  • Verily, an Alphabet spinoff, plans to raise money and focus its strategy on healthcare AI in 2025.
  • It plans to sell tech tools that other companies can use to build AI models and apps.
  • The changes are underway as Verily separates itself from Alphabet and looks to mature as a company.

Verily Life Sciences plans to reorient its strategy around AI in 2025, just as it marks its 10th anniversary as a company under Alphabet.

The unit, which uses technology and data to improve healthcare, is looking to mature. As of January, it will have separated from many of Google's internal systems in an attempt to stand independently. Simultaneously, it's refocusing its strategy around AI, according to two employees with knowledge of the matter, who asked to remain anonymous to discuss confidential information.

This new strategy, the result of a multiyear effort across teams, would primarily involve other healthcare companies using Verily's tech infrastructure to develop AI models and apps. Verily ultimately aims to become companies' one-stop shop for tech needs like training AI for drug discovery and building apps for health coaching.

The unit is also looking to raise another round of capital in the next year, the two people familiar with the matter said. The company's last investment was a $1 billion round led by Alphabet in 2022. Alphabet will likely lead the new round as well, although leadership could also seek outside capital as Verily tries to become "increasingly independent," one source said.

The question for next year is whether Verily can finally start turning long-gestating ideas into profits. One of the people said Verily still generates the most revenue selling stop-loss insurance to employers, which is a far cry from the higher-margin business it's aiming for. The Wall Street Journal reported last year that this business, called Granular Insurance, was Verily's most lucrative.

Verily has been criticized in the past for having a rudderless strategy. It's entertained bets on topics as diverse as editing mosquito populations and helping pharmaceutical companies run clinical trials.

In an email to Business Insider, a spokesperson for Verily declined to comment on non-public information. He confirmed the company's plans to provide tech infrastructure for third parties, designed to provide "precision health capabilities across research and care."

Verily's South San Francisco campus

Tada Images

The AI strategy's origin story

Verily's idea to become a tech provider for other healthcare companies grew out of its own internal needs a few years ago when it decided to "re-platform" its various bets on a shared infrastructure, a source familiar with the matter said.

The multi-year effort is now coming to fruition, and Verily plans to sell the core technology it uses to health plans, providers, digital health startups, and life sciences companies.

The platform will include data storage and AI training. Companies could also use Verily's tech tools to spin up apps without having to code as much. For example, a digital health startup could use Verily's tools to build a coaching app with AI insights on weight loss.

"Large pharma companies, for example, look at the work we do and recognize that the data science applications or clinical research tools that they need to build themselves could be better if they were built using our platform," said Verily CEO Stephen Gillett in an interview with Fortune in November.

In that interview, Gillett said Verily's tech tools would include sophisticated AI capabilities for healthcare, data aggregation, privacy, and consent. One source said the company plans to start rolling them out in 2025.

Myoung Cha, Verily's chief product officer, joined from startup Carbon Health.

Carbon Health

Even as the leading AI models learn from the entirety of the internet, healthcare data remains largely private. As a result, Verily is betting that there's a growing need to further specialize models for patient care and research. The company already does this work through partnerships with clients like the National Institutes of Health. Through a business line called Workbench, Verily hosts massive datasets for the NIH, complete with analysis tools.

Verily hasn't dropped its ambitions to grow its own healthcare business. In 2026, it plans to relaunch a diabetes and hypertension-focused app, Lightpath, broadly for health plans and employers — this time with AI coaches supplementing human ones. Verily also intends to expand Lightpath to more health conditions.

Verily's reshuffling

Verily spun out of Google's moonshot group in 2015 and remained part of Alphabet's collection of long-term businesses, sometimes called "other bets." Under its then-CEO Andy Conrad, the unit explored a menagerie of ideas from surgical robots to wearables. Several of these projects — glucose-monitoring contact lenses, for instance — haven't panned out.

Shortly after Gillett replaced Conrad as CEO in 2023, he announced the company would lay off 15% of its workforce and "advance fewer initiatives with greater resources."

Since then, Verily has pruned projects and teams to save costs and sharpen its focus. Dr. Amy Abernethy, Verily's former chief medical officer who joined the company in 2021, focused on aiding clinical research before departing late last year.

Verily's shift to AI, meanwhile, seems to have coincided with the hiring of Myoung Cha and Bharat Rajagopal as the chief product and revenue officers, respectively, earlier this year.

Andy Conrad, Verily's former CEO.

Google

Cutting ties with Google

Executing the AI strategy isn't the only challenge Verily's leadership faces in 2025.

Since 2021, the life science unit has been reducing its dependency on Google's internal systems and technology through an internal program known as Flywheel. BI previously reported that it set a December 16, 2024, deadline to cut many of these ties.

The separation involves Verily employees losing many of their cushy Google benefits, which has been a point of consternation for the group, the two people said.

Gillett remarked in a town hall meeting earlier this year that some employees may feel Verily is no longer the place for them after the separation, according to a person who heard the remarks.

Google launches Gemini 2.0, its latest AI model that can 'think multiple steps ahead'

Google CEO Sundar Pichai speaks at a Google I/O event in Mountain View, California.

Jeff Chiu / AP Photo

  • Google unveiled Gemini 2.0, enhancing the AI product's capabilities.
  • Gemini 2.0 focuses on agentic AI, improving multi-step problem-solving.
  • Google's Project Astra and Mariner showcase advanced AI integration with popular services.

It's December, which apparently means it's time for all the AI companies to show off what they've been working on for the past year. Not to be left out, Google lifted the lid on its next-generation AI model, Gemini 2.0, which it promises is a big step up in smarts and capabilities.

If the theme of Gemini 1.0 was multimodality — an ability to combine and understand different types of information, such as text and images — Gemini 2.0 is all about agents, AI that can act more autonomously and solve multi-step problems with limited human input.

"Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision," said Google CEO Sundar Pichai in a blog post announcing Gemini 2.0 on Wednesday.

Users can test out some of Gemini 2.0's new abilities this week, including a new "Deep Research" feature that will have Gemini scour the web for information on a topic and prepare it in an easy-to-read report. Google said Deep Research, which will be available to Gemini Advanced subscribers, will perform like a human in the way it searches and locates relevant information on the web before starting a new search based on what it's learned.

Google plans to bring Gemini 2.0 to its AI Overviews feature in Search. The feature, which has dramatically transformed the way Google retrieves answers from the web, got off to a rocky start (pizza glue, anyone?). Google then scaled Overviews back and made various technical tweaks to improve performance.

With Gemini 2.0, Google says Overviews can tackle more complex searches, including multi-step questions and multimodal queries that use text and images. Google said it's already started testing the improved Overviews this week and will roll them out more broadly early next year.

This week, Google is also rolling out an experimental version of Gemini 2.0 Flash — a model designed for high-volume tasks at speed — that developers can play with. Anyone accessing the Gemini chatbot through the browser or the Gemini app will also be able to try the new model.

Google said Gemini 2.0 Flash will make Gemini faster, smarter, and more capable of reasoning. It's also now capable of generating images natively (previously, Google had stitched on a separate AI model to conjure up pictures within Gemini). Google said that should improve image generation, as the feature draws on Gemini 2.0's vast knowledge of the world.

Project Astra

Most of the other interesting new announcements Google teased won't be available for wider public consumption for a while.

One of these is Project Astra, which Google first previewed at I/O back in May. Google demoed a real-time AI assistant that could see the world around it and answer questions. Now, Google is showing an even better version of Astra built on Gemini 2.0, which the company said can draw on some of Google's most popular services, such as Search, Lens, and Maps.

In a new virtual demo, Google showed someone holding up their phone camera to a London bus and Astra answering a question on whether that bus could get them to Chinatown. The new and improved Astra can also converse in multiple (and mixed) languages, Google said.

Google will roll out Astra to a limited number of early testers, and it didn't say when more people will have access to it. Bibo Xu, the Astra product manager at Google DeepMind, told reporters on a call that Google expects these features to roll out through its apps over time, suggesting Astra may arrive incrementally rather than as one big product.

Google also teased Astra running on a pair of augmented reality glasses.

Project Mariner

Additionally, Google teased Project Mariner, a tool that lets AI take control of a browser and scour the web for information. It can recognize pixels, images, text, and code on a webpage and use them to navigate and find answers.

Google referred to Mariner as an early research prototype and said it's only letting a select group of early testers try it via a Chrome extension.

"We're early in our understanding of the full capabilities of AI agents for computer use and we understand the risks associated with AI models that can take actions on a user's behalf," said Google Labs product manager Jaclyn Konzelmann.

For example, Google said it would limit certain actions, such as by having Mariner ask for final confirmation before making an online purchase.

Google worried its Gemini Workspace product lagged rivals like Microsoft and OpenAI in key metrics, leaked documents show

Aparna Pappu, former head of Google Workspace, onstage at Google IO 2023

Google

  • Google found its AI Workspace product lagged rivals, internal documents show.
  • A study earlier this year found the tool trailed Microsoft and Apple in brand familiarity and usage.
  • Google hopes Workspace is one way it can turn AI into profit.

As Google pours untold amounts of cash into AI, it's banking on products such as Gemini for Google Workspace to turn that investment into revenue. An internal presentation reveals the company worried that Gemini lagged behind its rivals across key metrics.

Gemini for Google Workspace puts Google's AI features into a handful of the company's productivity tools, such as Gmail, Docs, and Google Meet. Users can have the AI model rewrite an email, whip up a presentation, or summarize documents filled with dense information. Google, which charges customers extra for these add-ons, claims the features will save users time and improve the quality of their work.

Gemini for Google Workspace trailed all its key rivals, including Microsoft, OpenAI, and even Apple, when it came to brand familiarity and usage, according to an internal market research presentation reviewed by Business Insider.

The data tracked Gemini's brand strength during the first half of 2024 and included data on what percentage of audiences use and pay for Gemini for Google Workspace in certain segments.

One document seen by BI said that Workspace's Gemini tools were "far behind the competition" but that a strong Q4 could help the company in 2025.

In a written statement, a spokesperson said the data came from a study tracking brand awareness during the brand transition from "Duet AI" to "Gemini" earlier this year and called the data "old and obsolete."

"In the time since, we've brought Gemini for Workspace to millions more customers and made significant, double-digit gains across key categories, including familiarity, future consideration, and usage. We're very pleased with our momentum and are encouraged by all the great feedback we are getting from our users," the spokesperson added.

The internal data tracked Gemini's brand strength across commercial, consumer, and executive groups. In the US commercial group, Gemini scored lower than Microsoft Copilot and ChatGPT across four categories: familiarity, consideration, usage, and paid usage, one slide showed. Paid usage was measured at 22%, 16 points lower than Copilot and ChatGPT.

Data for the UK in the commercial group also showed Gemini mostly behind its rivals, although it scored slightly higher than Copilot in paid usage. In Brazil and India, Gemini for Workspace fared better than Copilot across most categories but still fell below ChatGPT, the data showed.

"Gemini trails both Copilot and ChatGPT in established markets," the document said, adding that it "rises above Copilot across the funnel" in Brazil and India.

In another part of Google's internal presentation that focused on brand familiarity, Google's Gemini for Workspace came in last place in consumer, commercial, and executive categories, trailing ChatGPT, Copilot, Meta AI, and Apple AI.

Familiarity was particularly low for the US consumer category, with Gemini for Workspace scoring just 45%, while Copilot scored 49%, ChatGPT and Apple both scored 80%, and Meta scored 82%.

'We have the same problem as Microsoft'

Microsoft's Copilot, which performs similar tasks such as summarizing emails and meetings, has likewise struggled to live up to the hype; some dissatisfied customers and employees have said the company oversold the product's current capabilities, BI recently reported.

"We have the same problem as Microsoft," said a Google employee directly familiar with the Gemini for Workspace strategy. "Just with less market share." The person asked to remain anonymous because they were not permitted to speak to the press.

Google's data showed Apple and Meta's AI products have much bigger market recognition, which could benefit those companies as they roll out business products that compete with Google's.

Internally, the Workspace group has recently undergone a reshuffle. The head of Google Workspace, Aparna Pappu, announced internally in October that she was stepping down, BI previously reported. Bob Frati, vice president of Workspace sales, also left the company earlier this year. Jerry Dischler, a former ads exec who moved to the Cloud organization earlier this year, now leads the Workspace group.

Google CEO Sundar Pichai says AI progress will get harder in 2025 because 'the low-hanging fruit is gone'

Sundar Pichai said the "hill is steeper" for AI progress.

Justin Sullivan/Getty

  • Google CEO Sundar Pichai says AI progress will be more challenging in 2025.
  • Pichai said he doesn't buy into the idea of an AI performance wall but said, "The hill is steeper."
  • Industry insiders say they're looking for new breakthroughs to solve AI's current bottlenecks.

Google and Alphabet CEO Sundar Pichai said that while he doesn't believe progress in AI development is hitting a "wall," he does see it slowing down in the months ahead.

Speaking at The New York Times' DealBook Summit on Wednesday, Pichai said Google was preparing to launch its next generation of models but added that he expects progress next year to slow.

"I think the progress is going to get harder when I look at '25. The low-hanging fruit is gone," he said. "The hill is steeper."

Whether AI has hit a performance plateau is a major topic of debate within the industry right now. Some leaders, including OpenAI CEO Sam Altman, have pushed back on the idea that AI is hitting a wall.

However, some industry experts and company insiders told BI there are bottlenecks when it comes to feeding models new high-quality data. As such, companies are exploring new approaches, such as focusing on how models reason.

Former OpenAI chief scientist and Safe Superintelligence cofounder Ilya Sutskever told Reuters last month that results from scaling up when it comes to pre-training models had plateaued. "Everyone is looking for the next thing," he told the publication.

"I don't fully subscribe to the wall notion," Pichai said on Wednesday, adding that he has a lot of confidence that there will still be progress in 2025.

"When you start out quickly scaling up you can throw more compute and you can make a lot of progress, but you definitely are going to need deeper breakthroughs as we go to the next stage," he said. "So you can perceive it as there's a wall, or there's some small barriers."

Pichai added that the next wave of AI progress will require technical breakthroughs in reasoning and completing a sequence of actions "more reliably."

During the discussion, the Google chief also took a shot at Microsoft after Andrew Ross Sorkin quoted its CEO, Satya Nadella, criticizing Google for not being the obvious leader in AI today despite having such a head start over its rivals.

"I would love to do a side-by-side comparison of Microsoft's own models and our models any day, any time," Pichai retorted. "They're using someone else's models," he added, referring to how Microsoft uses models from OpenAI.

AI improvements are slowing down. Companies have a plan to break through the wall.

 The tech world has been debating if AI models are plateauing.

iStock; Rebecca Zisser/BI

  • The rate of AI model improvement appears to be slowing, but some tech leaders say there is no wall.
  • It's prompted a debate over how companies can overcome AI bottlenecks.
  • Business Insider spoke to 12 people at the forefront of the AI boom to find out the path forward.

Silicon Valley leaders all-in on the artificial intelligence boom have a message for critics: their technology has not hit a wall.

A fierce debate over whether improvements in AI models have hit their limit has taken hold in recent weeks, forcing several CEOs to respond. OpenAI boss Sam Altman was among the first to speak out, posting on X this month that "there is no wall."

Dario Amodei, CEO of rival firm Anthropic, and Jensen Huang, the CEO of Nvidia, have also disputed reports that AI progress has slowed. Others, including Marc Andreessen, say AI models aren't getting noticeably better and are all converging to perform at roughly similar levels.

This is a trillion-dollar question for the tech industry. If tried-and-tested AI model training methods are providing diminishing returns, it could undermine the core reason for an unprecedented investment cycle that's funding new startups, products, and data centers — and even rekindling idled nuclear power plants.

Business Insider spoke to 12 people at the forefront of the AI industry, including startup founders, investors, and current and former insiders at Google DeepMind and OpenAI, about the challenges and opportunities ahead in the quest for superintelligent AI.

Together, they said that tapping into new types of data, building reasoning into systems, and creating smaller but more specialized models are some of the ways to keep the wheels of AI progress turning.

The pre-training dilemma

Researchers point to two key bottlenecks that companies may encounter in an early phase of AI development, known as pre-training. The first is access to computing power. More specifically, this means getting hold of specialist chips called GPUs. It's a market dominated by Santa Clara-based chip giant Nvidia, which has battled supply constraints in the face of nonstop demand.

"If you have $50 million to spend on GPUs but you're on the bottom of Nvidia's list — we don't have enough kimchi to throw at this, and it will take time," said Henri Tilloy, partner at French VC firm Singular.

Jensen Huang with Nvidia hardware
Jensen Huang's Nvidia has become the world's most valuable company off the back of the AI boom.

Justin Sullivan/Getty

There is another supply problem, too: training data. AI companies have run into limits on the quantity of public data they can secure to feed into their large language models, or LLMs, in pre-training.

This phase involves training an LLM on a vast corpus of data, typically scraped from the internet, and then processed by GPUs. That information is then broken down into "tokens," which form the fundamental units of data processed by a model.
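As a rough illustration of that tokenization step, here is a toy greedy tokenizer with a made-up vocabulary. Real LLM pipelines use learned schemes such as byte-pair encoding, so treat this purely as a sketch of the idea of breaking text into known subword units:

```python
# Illustrative only: real tokenizers learn their vocabulary from data; this toy
# version greedily matches the longest known subword at each position.
def tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        # Greedily match the longest vocabulary entry starting at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

vocab = {"pre", "train", "ing", " ", "data"}
print(tokenize("pretraining data", vocab))  # ['pre', 'train', 'ing', ' ', 'data']
```

Production vocabularies contain tens of thousands of learned subword units; the greedy longest-match loop here only mimics the basic splitting behavior.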

While throwing more data and GPUs at a model has reliably produced smarter models year after year, companies have been exhausting the supply of publicly available data on the internet. Research firm Epoch AI predicts usable textual data could be squeezed dry by 2028.

"The internet is only so large," Matthew Zeiler, founder and CEO of Clarifai, told BI.

Multimodal and private data

Eric Landau, cofounder and CEO of data startup Encord, said that this is where other data sources will offer a path forward in the scramble to overcome the bottleneck in public data.

One example is multimodal data, which involves feeding AI systems visual and audio sources of information, such as photos or podcast recordings. "That's one part of the picture," Landau said. "Just adding more modalities of data." AI labs have already started using multimodal data as a tool, but Landau says it remains "very underutilized."

Sharon Zhou, cofounder and CEO of LLM platform Lamini, sees another vastly untapped area: private data. Companies have been securing licensing agreements with publishers to gain access to their vast troves of information. OpenAI, for instance, has struck partnerships with organizations such as Vox Media and Stack Overflow, a Q&A platform for developers, to bring copyrighted data into its models.

"We are not even close to using all of the private data in the world to supplement the data we need for pre-training," Zhou said. "From work with our enterprise and even startup customers, there's a lot more signal in that data that is very useful for these models to capture."

A data quality problem

A great deal of research effort is now focused on enhancing the quality of data that an LLM is trained on rather than just the quantity. Researchers could previously afford to be "pretty lazy about the data" in pre-training, Zhou said, by just chucking as much as possible at a model to see what stuck. "This isn't totally true anymore," she said.

One solution that companies are exploring is synthetic data, an artificial form of data generated by AI.

According to Daniele Panfilo, CEO of startup Aindo AI, synthetic data can be a "powerful tool to improve data quality," as it can "help researchers construct datasets that meet their exact information needs." This is particularly useful in a phase of AI development known as post-training, where techniques such as fine-tuning can be used to give a pre-trained model a smaller dataset that has been carefully crafted with specific domain expertise, such as law or medicine.

One former employee at Google DeepMind, the search giant's AI lab, told BI that "Gemini has shifted its strategy" from going bigger to more efficient. "I think they've realized that it is actually very expensive to serve such large models, and it is better to specialize them for various tasks through better post-training," the former employee said.

Google i/o event Sundar Pichai Gemini
Google launched Gemini, formerly known as Bard, in 2023.

Google

In theory, synthetic data offers a useful way to hone a model's knowledge and make it smaller and more efficient. In practice, there's no full consensus on how effective synthetic data can be in making models smarter.

"What we discovered this year with our synthetic data, called Cosmopedia, is that it can help for some things, but it's not the silver bullet that's going to solve our data problem," Thomas Wolf, cofounder and chief science officer at open-source platform Hugging Face, told BI.

Jonathan Frankle, the chief AI scientist at Databricks, said there's no "free lunch" when it comes to synthetic data and emphasized the need for human oversight. "If you don't have any human insight, and you don't have any process of filtering and choosing which synthetic data is most relevant, then all the model is doing is reproducing its own behavior because that's what the model is intended to do," he said.

Concerns around synthetic data came to a head after a paper published in July in the journal Nature said there was a risk of "model collapse" with "indiscriminate use" of synthetic data. The message was to tread carefully.

Building a reasoning machine

For some, simply focusing on the training portion won't cut it.

Former OpenAI chief scientist and Safe Superintelligence cofounder Ilya Sutskever told Reuters this month that results from scaling models in pre-training had plateaued and that "everyone is looking for the next thing."

That "next thing" looks to be reasoning. Industry attention has increasingly turned to an area of AI known as inference, which focuses on the ability of a trained model to respond to queries and information it might not have seen before with reasoning capabilities.

At Microsoft's Ignite event this month, the company's CEO Satya Nadella said that instead of seeing so-called AI scaling laws hit a wall, he was seeing the emergence of a new paradigm for "test-time compute," which is when a model has the ability to take longer to respond to more complex prompts from users. Nadella pointed to a new "think harder" feature for Copilot — Microsoft's AI agent — which boosts test time to "solve even harder problems."

Aymeric Zhuo, cofounder and CEO of AI startup Agemo, said that AI reasoning "has been an active area of research," particularly as "the industry faces a data wall." He told BI that improving reasoning requires increasing test-time or inference-time compute.

Typically, the longer a model spends working through a query, the more accurate the output it generates. Right now, models are being queried in milliseconds. "It doesn't quite make sense," Sivesh Sukumar, an investor at investment firm Balderton, told BI. "If you think about how the human brain works, even the smartest people take time to come up with solutions to problems."
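The intuition can be sketched with a best-of-n loop, one common form of test-time compute. The `generate` and `score` functions below are stand-ins for a real model and a verifier, so this is only an illustration of why spending more inference-time compute tends to yield better answers:

```python
# Minimal sketch of test-time compute: instead of making the model bigger,
# sample several candidate answers and keep the best-scoring one.
import random

def generate(rng):
    return rng.gauss(0.0, 1.0)  # stand-in for one sampled model answer

def score(answer):
    return -abs(answer)  # stand-in verifier: closer to the true answer (0) is better

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

# More samples (more inference-time compute) get closer to the ideal answer.
print(abs(best_of_n(1)), abs(best_of_n(64)))
```

Because the 64-sample run includes the single sample plus 63 more tries, its best candidate can only match or beat the one-shot answer — the same trade Nadella's "think harder" framing describes.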

In September, OpenAI released a new model, o1, which tries to "think" about an issue before responding. One OpenAI employee, who asked not to be named, told BI that "reasoning from first principles" is not the forte of LLMs as they work based on "a statistical probability of which words come next," but if we "want them to think and solve novel problem areas, they have to reason."

Noam Brown, a researcher at OpenAI, thinks the impact of a model with greater reasoning capabilities can be extraordinary. "It turned out that having a bot think for just 20 seconds in a hand of poker got the same performance boost as scaling up the model by 100,000x and training it for 100,000 times longer," he said during a talk at TED AI last month.

Google and OpenAI did not respond to a request for comment from Business Insider.

The AI boom meets its tipping point

These efforts give researchers reasons to remain hopeful, even if current signs point to a slower rate of performance leaps. As a separate former DeepMind employee who worked on Gemini told BI, people are constantly "trying to find all sorts of different kinds of improvements."

That said, the industry may need to adjust to a slower pace of improvement.

"I just think we went through this crazy period of the models getting better really fast, like, a year or two ago. It's never been like that before," the former DeepMind employee told BI. "I don't think the rate of improvement has been as fast this year, but I don't think that's like some slowdown."

Lamini's Zhou echoed this point. Scaling laws — an observation that AI models improve with size, more data, and greater computing power — work on a logarithmic scale rather than a linear one, she said. In other words, think of AI advances as a curve rather than a straight upward line on a graph. That makes development far more expensive "than we'd expect for the next substantive step in this technology," Zhou said.
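Zhou's point about logarithmic returns can be illustrated with a toy power-law curve. The constants here are invented, not taken from any published scaling law: the idea is that each tenfold increase in compute buys roughly the same proportional drop in loss, so progress looks steady on a log axis but flattens out on a linear one.

```python
# Illustrative power-law scaling curve (made-up constants): loss falls as a
# fixed power of compute, so each 10x of compute shrinks loss by the same ratio.
def loss(compute, a=10.0, alpha=0.3):
    return a * compute ** -alpha

for c in [1e3, 1e4, 1e5, 1e6]:
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

Each step in the loop multiplies compute by 10 but only shaves off a diminishing absolute amount of loss, which is why the "next substantive step" gets so much more expensive.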

She added: "That's why I think our expectations are just not going to be met at the timeline we want, but also why we'll be more surprised by capabilities when they do appear."

Amazon Web Services (AWS) CEO Adam Selipsky speaks with Anthropic CEO and co-founder Dario Amodei during a 2023 conference.
Amazon Web Services CEO Adam Selipsky speaks with Anthropic CEO Dario Amodei during a 2023 conference.

Noah Berger/Getty

Companies will also need to consider how much more expensive it will be to create the next versions of their highly prized models. According to Anthropic's Amodei, a training run could one day cost $100 billion. These costs include GPUs, energy needs, and data processing.

Whether investors and customers are willing to wait around longer for the superintelligence they've been promised remains to be seen. Issues with Microsoft's Copilot, for instance, are leading some customers to wonder if the much-hyped tool is worth the money.

For now, AI leaders maintain that there are plenty of levers to pull — from new data sources to a focus on inference — to ensure models continue improving. Investors and customers just might have to accept that gains will arrive more slowly than the breakneck pace OpenAI set when it launched ChatGPT two years ago.

Bigger problems lie ahead if they don't.

Read the original article on Business Insider

Google's DeepMind and YouTube built and shelved 'Orca,' a 'mind-blowing' music AI tool that hit a copyright snag

Demis Hassabis speaking.
Google DeepMind CEO Demis Hassabis.

Google

  • Google's DeepMind and YouTube previously built and shelved Orca, an AI music tool.
  • Orca could generate music mimicking artists. Google trained it on copyrighted YouTube music videos.
  • Google's AI strategy led to Orca's development. Legal risks halted its release.

Name your favorite artist, choose a genre, and feed in some lyrics, and AI will create a song that sounds completely authentic.

That was the vision of "Orca," a project Google's DeepMind and YouTube collaborated on and ultimately shelved last year after butting up against copyright issues, according to four people familiar with the matter, who asked to remain anonymous because they were not permitted to talk to the press.

The tool let anyone generate music with just a few simple prompts and was developed as Google scrambled to catch up with OpenAI.

Users could generate a new song by giving Orca prompts like a specific artist, lyrics, and musical genre, said one person familiar with the project. For example, they could use the tool to generate a hip-hop song with the voice of Taylor Swift, that person said, adding that it was "mind-blowing."

Google eventually approached some music labels about releasing the Orca tool to the public, offering a revenue-share agreement for the music and artists Orca was trained on. The labels demurred, forcing Google to put the brakes on the project, that person said, calling it a "huge legal risk."

Orca is yet another example of how tech companies have moved at breakneck speeds to get ahead in the AI race. It also demonstrated how tech companies were willing to ride roughshod over their own rules to compete.

Google had previously avoided using copyrighted videos for AI training. When OpenAI started scraping YouTube for its own models, Google leadership decided to be more aggressive and went back on that rule, said a person with direct knowledge of Orca.

Google has terms that allow it to scrape data from YouTube videos to improve its own service, although it's unclear if building an AI music generator would fall under this policy.

Developments on Orca throughout 2023 were so promising that at one point, some employees suggested that giving it a codename after a killer whale wasn't a good idea if DeepMind was about to destroy an entire music industry, one person involved recalled.

Some researchers inside Google had developed a similar model of their own, MusicLM, trained on "a large dataset of un-labeled music," as detailed in a paper published early last year.

In November 2023, DeepMind announced a music generation AI model named Lyria, which was a pared-down version of the Orca project. Users could ask Lyria to generate music using the voice and music style of some artists who had explicitly worked with Google on the project, such as John Legend — although it was far more limited in scope than Orca, three people familiar with the project said.

Some employees who worked on Lyria and Orca left to found a new startup named Udio, which makes an AI music creation app.

Google did not respond to a request for comment.

Are you a current or former DeepMind or YouTube employee? Got more insight to share? You can reach the reporter Hugh Langley via the encrypted messaging app Signal (+1 628-228-1836) or email ([email protected]).

Read the original article on Business Insider

The DOJ wants Google to sell its Chrome browser. Here are the winners and losers if that happens.

Chrome logo with DOJ logo
A judge ruled in August that Google maintains an illegal monopoly in the search and advertising markets.

Google; Getty Images; Chelsea Jia Feng/BI

  • The DOJ asked the judge in its antitrust case against Google to force the company to sell Chrome.
  • Chrome is a key distribution method for Search, which provides crucial data for Google's ads.
  • A breakup would be a blow to Google and likely create opportunities for competitors.

A possible breakup of Google just became slightly more likely.

The Justice Department on Wednesday asked the judge in its antitrust case against Google to force the company to sell its Chrome browser.

That follows Judge Mehta's ruling in August that Google maintains an illegal monopoly in search and advertising markets. Google will get to suggest its own remedies, likely in December, and the judge is expected to rule next year.

If Google ends up having to sell or spin off Chrome, it would be a blow to the company. Meanwhile, advertisers and search rivals would likely cheer the news, according to industry experts.

Separating Chrome from Google and preventing default search placement deals "would put Google Search into competition with other paths for advertisers to reach potential customers," said John Kwoka, a professor of economics at Northeastern University. "Advertisers would find competitors for their business, rather than needing to pay a dominant search engine."

Chrome is a hugely popular Google product that the company leans on to grow and maintain its search advertising empire. Chrome holds 61% of the US browser market share, according to StatCounter, while 20% of general search queries come through user-downloaded Chrome browsers, according to the August ruling from Judge Mehta.

Distribution and self-reinforcing data

Chrome is a valuable distribution mechanism for Google Search, and a portal into the searching habits of billions of users.

When you open Chrome and type something into the search bar at the top, those words are automatically turned into a Google Search. On other browsers and non-Google devices, that's not necessarily the case. On Windows devices, for instance, the default Edge browser uses Microsoft's Bing search engine. And where users do have a choice, Google pays partners billions of dollars to set its search engine as the default.

Chrome avoids all these complications and costs because Google controls it and sets its own search engine as the default for free.

Once this important distribution tool is in place, Google collects mountains of user data from the browser, and from searches within the browser. This information goes into creating higher-value targeted advertising.

There's an equally powerful benefit of Chrome: When people use it to search on the web, Google monitors what results they click on. It feeds these responses back into its Search engine and the product gets constantly better. For instance, if most people click on the third result, Google's Search engine will likely adjust and rank that result higher in the future.
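That feedback loop can be sketched in a few lines. This is a hypothetical toy, not Google's actual ranking system: results that users click more often get promoted the next time the list is ranked.

```python
# Toy sketch of click-feedback ranking: click counts feed back into ordering,
# so a frequently clicked result rises in future rankings.
from collections import Counter

clicks = Counter()

def record_click(result_id):
    clicks[result_id] += 1

def rerank(results):
    # Stable sort: more-clicked results move up; ties keep their original order.
    return sorted(results, key=lambda r: -clicks[r])

results = ["a", "b", "c"]
for _ in range(5):
    record_click("c")  # most users click the third result
print(rerank(results))  # "c" now ranks first
```

The self-reinforcing part is exactly this loop: better ranking attracts more users, whose clicks produce more feedback, which improves the ranking again — data a rival without Chrome's distribution never sees.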

This self-reinforcing system — supported by Chrome — is very hard to compete against. One of the few ways to compete is to get more distribution than Google. If Chrome were an independent product, rival search engines might be able to get a piece of this distribution magic.

In 2011, venture capitalist Bill Gurley called Chrome and Android "very expensive and very aggressive 'moats,' funded by the height and magnitude of Google's castle."

Google has also tapped Chrome as a way to reach users with new AI products, including Lens, its image-recognition search feature, as it tries to fend off emerging rivals such as OpenAI.

The lesson of Neeva

Many have tried to take on Google in the browser market, and many have failed. Take Neeva, a privacy-focused search engine launched by Google's former ads boss Sridhar Ramaswamy and other ex-Googlers.

Not only did the startup have to develop a search engine from the ground up, it also had to build its own web browser to compete with Chrome because this is such a major source of distribution in the search business.

Neeva lasted four years before closing its doors.

"People forget that Google's success was not a result of only having a better product," Ramaswamy once told The Verge. "There were an incredible number of shrewd distribution decisions made to make that happen."

A 'manageable inconvenience'

Teiffyon Parry, chief strategy officer of the adtech company Equativ, said that losing 3 billion monthly Chrome users would be "no small blow" to Google.

However, Google has many other ways of reaching users and scooping up data, including Gmail, YouTube, a host of physical devices, and the Play Store. The company also has a standalone app that functions as a web browser and has the potential to effectively replace Chrome.

"Chrome has served Google exceptionally well, but its loss would be a manageable inconvenience," said Parry.

Implications for the web

Lukasz Olejnik, an independent cybersecurity and privacy consultant, is concerned about what might happen to the broader web if Chrome is sold off.

"Chrome is adopting web innovations really fast," he said, giving Chrome's security features as an example of how Google has innovated.

Without Google's financial support, Chrome might struggle on its own, and it's possible that progress on the web slows, weakening the ecosystem, he explained.

"The worst case scenario is deterioration of security and privacy of billions of users, and the rise of cybercrime on unimaginable levels," he warned.

Would Chrome even survive on its own?

One of the biggest questions is how a Chrome spinoff might work. A Bloomberg analysis valued Chrome at $15 billion to $20 billion if it were to be sold or spun off. Would antitrust regulators allow a major rival to buy it?

It's "unlikely" that Meta would be allowed to acquire it, tech blogger Ben Thompson wrote on Wednesday. That would leave someone like OpenAI as a potential buyer, he said, adding that the "distribution Chrome brings would certainly be welcome, and perhaps Chrome could help bootstrap OpenAI's inevitable advertising business."

And if Google has to sell Chrome, will it also be banned from making distribution deals with whoever buys the browser?

"The only way [a spun-off Chrome] could make money is through an integrated search deal," said tech commentator John Gruber on a recent podcast.

There may be ways around it. Earlier this year, a group of researchers published a paper analyzing Google Chrome's role in the search market and within Google's business (it should be noted one of the authors works at rival DuckDuckGo).

"The precedent set by Mozilla's financial dependence on Google highlights potential challenges for Chrome in maintaining its operations without similar support," the researchers said, nodding to the fact Google pays Firefox a lot of money to be its default search engine, despite Firefox's dwindling user numbers.

The researchers proposed one way to divest Chrome without letting it die: allow Google to keep funding it if necessary, but bar Google from exclusive contracts that make Google Search the default. They also suggested web browsers could be reclassified as public utilities.

"Under such a classification, Chrome's agreements and decisions would be subject to heightened scrutiny, particularly to safeguard consumer welfare and prevent exclusionary practices," they wrote.

Google's response

Google plans to appeal any ruling, potentially delaying any final decision by several years. In a statement earlier this week, Lee-Anne Mulholland, Google's vice president of regulatory affairs, said the DOJ was pushing "a radical agenda that goes far beyond the legal issues in this case."

"The government putting its thumb on the scale in these ways would harm consumers, developers and American technological leadership at precisely the moment it is most needed," she added.

Are you a current or former Google employee? Got more insight to share? You can reach the reporter Hugh Langley via the encrypted messaging app Signal (+1 628-228-1836) or email ([email protected]).

Read the original article on Business Insider