
What Big Tech CEOs are saying about their massive AI spending plans

2 May 2025 at 10:59
Big tech earnings illustration of logos and dollar signs.

Business Insider

  • Another Big Tech earnings week has wrapped.
  • It gave us fresh signs of what major players in the AI space are willing to spend to get ahead.
  • Here's a look at which Big Tech companies look more cautious and which look more bullish on AI spending.

A slew of Big Tech companies reported quarterly earnings this week, and with those earnings, some progress reports on the tens of billions of dollars they're funneling into AI.

Some Big Tech companies are showing signs of reining in AI spending while others are plowing full steam ahead.

Cumulatively, Meta, Microsoft, Alphabet, and Amazon plan to spend more than $300 billion this year, much of which will be on AI. Apple has also said it plans to spend $500 billion over the next four years.

Here's a look at where some of the biggest players stand:

Google

Google parent company Alphabet estimates $75 billion in capital expenditures for 2025, largely for data centers and server capacity for AI. This is well above the consensus estimate of $57.9 billion and represents an increase of about 43% year-over-year.

"We are confident about the opportunities ahead, and to accelerate our progress, we expect to invest approximately $75 billion in capital expenditures in 2025," CEO Sundar Pichai said in a February earnings release.

"We expect to increase our investments in capital expenditure for technical infrastructure, primarily for servers, followed by data centers and networking," CFO Anat Ashkenazi said during the company's earnings call that month.

First-quarter capex was $17.2 billion for the company, which is breaking ground on several new data centers to support its AI work, including AI Overviews in Google Search and the Gemini chatbot.

Amazon

Amazon previously said it expects capital expenditures of around $100 billion this year, largely for AI, particularly for the company's AWS cloud computing division.

In Amazon's Q1 earnings Thursday, the company reported capex of $24.3 billion, up more than 70% year-over-year.

CEO Andy Jassy added that AWS has seen an "explosion of coding agents."

Jassy said on a third-quarter earnings call last year that the massive investment was justified because AI is "a really unusually large, maybe once-in-a-lifetime type of opportunity."

"I think that both our business, our customers and shareholders will be happy, medium to long-term, that we're pursuing the capital opportunity and the business opportunity in AI," he said.

Meta

In its recent Q1 earnings, Meta raised its full-year capex estimate from a range of $60 billion to $65 billion to a new range of $64 billion to $72 billion.

The change "reflects additional data center investments to support our artificial intelligence efforts as well as an increase in the expected cost of infrastructure hardware," the company said in its earnings report.

This is a marked increase from the company's 2024 capex of $39.23 billion.

CEO Mark Zuckerberg kicked off his remarks in Meta's Q1 earnings call by talking about AI being "the major theme" at Meta right now that's "transforming everything we do."

Zuckerberg noted these are "long-term investments that are downstream from us," but said the company "will be wildly happy with the investments that we are making."

Microsoft

Microsoft is showing signs it may not remain as bullish on AI spending as some of its peers.

CFO Amy Hood said on the Q1 earnings call Wednesday that the company expects capex to grow in the coming fiscal year, but noted, "it will grow at a lower rate than FY 2025 and will include a greater mix of short lived assets, which are more directly correlated to revenue than long lived assets."

The company has said before that it expects capex of $80 billion in fiscal year 2025 in order to "build out AI-enabled datacenters to train AI models and deploy AI and cloud-based applications around the world." More than half of the investment will be in the US.

Earlier this month, Noelle Walsh, the head of Microsoft cloud operations, said the company "may strategically pace our plans."

"In recent years, demand for our cloud and AI services grew more than we could have ever anticipated and to meet this opportunity, we began executing the largest and most ambitious infrastructure scaling project in our history," she wrote in a LinkedIn post.

"By nature, any significant new endeavor at this size and scale requires agility and refinement as we learn and grow with our customers. What this means is that we are slowing or pausing some early-stage projects," she continued.

Apple

In February, Apple announced the biggest spending commitment in its history: $500 billion in the US over four years toward AI initiatives, manufacturing, and silicon engineering, among other expenses.

"We're going to be expanding our teams and our facilities in several states, including Michigan, Texas, California, Arizona, Nevada, Iowa, Oregon, North Carolina, and Washington," CEO Tim Cook said in the company's Q1 earnings call Thursday. "And we're going to be opening a new factory for advanced server manufacturing in Texas."

Read the original article on Business Insider

AI's $3 trillion question: Will the Chinchilla live or die?

14 March 2025 at 02:01
A contestant holds a pair of chinchillas at the Fourth Annual Chinchilla Show in New York.

Getty Images

  • Chinchillas are cuddly and cute.
  • Chinchilla is also an established way to build huge AI models using mountains of data.
  • There's at least $3 trillion riding on whether this approach continues or not.

About five years ago, researchers at OpenAI discovered that combining more computing power and more data in ever-larger training runs produces better AI models.

A couple of years later, researchers at Google DeepMind found that scaling up training data alongside computing power produces even better results. They showed this by building a new AI model called Chinchilla.
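
Concretely, the Chinchilla paper (Hoffmann et al., 2022) fit training loss as a simple function of model size and data. The form below restates that published result, where L is the loss, N the parameter count, and D the number of training tokens:

```latex
% Chinchilla scaling law, as fit in Hoffmann et al. (2022):
% loss falls as both parameters (N) and training tokens (D) grow.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\quad A \approx 406.4,\quad B \approx 410.7,\quad
\alpha \approx 0.34,\quad \beta \approx 0.28
% For a fixed compute budget C \approx 6ND, the fitted optimum grows N and
% D together, which works out to roughly 20 training tokens per parameter.
```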

These revelations helped create large language models and other giant models, like GPT-4, that support powerful AI tools such as ChatGPT. Yet in the future, the "Chinchilla" strategy of smashing together oodles of computing and mountains of data into bigger and longer pre-training runs may not work as well.

So what if this process doesn't end up being how AI is made in the future? To put it another way: What if the Chinchilla dies?

Building these massive AI models has so far required huge upfront investments. Mountains of data are mashed together in an incredibly complex and compute-intensive process known as pre-training.

This has sparked the biggest wave of infrastructure upgrades in technology's history. Tech companies across the US and elsewhere are frantically erecting energy-sucking data centers packed with Nvidia GPUs.

The rise of new "reasoning" models has opened up a new potential future for the AI industry, where the amount of required infrastructure could be much less. We're talking trillions of dollars of capital expenditure that might not happen in coming years.

Recently, Ross Sandler, a top tech analyst at Barclays Capital, and his team estimated the different capex requirements of these two possible outcomes:

  • The "Chinchilla" future is where the established paradigm of huge computing and data-heavy pre-training runs continue.
  • The "Stall-Out" alternative is one in which new types of models and techniques require less computing gear to produce more powerful AI.

The difference between the two scenarios is stunning: $3 trillion or more in capex is on the line.

The reason is "reasoning"

"Reasoning" AI models are on the rise, such as OpenAI's o1 and o3 offerings, DeepSeek's R1, and Google's Gemini 2.0 Flash Thinking.

These new models use an approach called test-time or inference-time compute, which slices queries into smaller tasks, turning each into a new prompt that the model tackles.
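
As a rough illustration of that loop (a sketch, not any lab's actual implementation), the Python below decomposes a question, solves each piece with its own prompt, and then aggregates. The ask_model callable is a hypothetical stand-in for an LLM API call:

```python
# Sketch of test-time (inference-time) compute: split a query into
# sub-tasks, solve each with its own prompt, then aggregate the results.
# `ask_model` is a hypothetical stand-in for any LLM completion call.
from typing import Callable

def answer_with_reasoning(question: str, ask_model: Callable[[str], str]) -> str:
    # Step 1: have the model decompose the question into smaller tasks.
    plan = ask_model(f"List the sub-steps needed to answer:\n{question}")
    sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # Step 2: each sub-task becomes a fresh prompt, with earlier answers
    # carried forward as context (this is where the extra compute goes).
    notes: list[str] = []
    for task in sub_tasks:
        context = "\n".join(notes)
        notes.append(ask_model(f"Known so far:\n{context}\n\nSolve: {task}"))

    # Step 3: fold the intermediate work into a final answer.
    worked = "\n".join(notes)
    return ask_model(f"Using these worked steps:\n{worked}\n\nAnswer: {question}")
```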

Reasoning models often don't need massive, intense, long pre-training runs to be created. They may take longer to respond, but their outputs can be more accurate, and they can be cheaper to run, too, the Barclays analysts said.

The analysts said that DeepSeek's R1 has shown how open-source reasoning models can drive incredible performance improvements with far less training time, even if this AI lab may have overstated some of its efficiency gains.

"AI model providers are no longer going to need to solely spend 18-24 months pre-training their next expensive model to achieve step-function improvements in performance," the Barclays analysts wrote in a recent note to investors. "With test-time-compute, smaller base models can run repeated loops and get to a far more accurate response (compared to previous techniques)."

Mixture of Experts

Another photo of a chinchilla.

Thomson Reuters

When it comes to running new models, companies are embracing other techniques that will likely reduce the amount of computing infrastructure needed.

AI labs increasingly use an approach called mixture of experts, or MoE, in which smaller "expert" models are each trained on particular tasks and subject areas and work in tandem with an existing huge AI model to answer questions and complete tasks.

In practice, this often means only part of these AI models is used, which reduces the computing required, the Barclays analysts said.
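
A minimal NumPy sketch of the routing idea shows why: a router scores every expert, but only the top-k actually run, so per-token compute scales with k rather than with the total expert count. The sizes and random weights below are purely illustrative:

```python
# Sketch of mixture-of-experts routing with top-k gating. Real systems
# are far more involved; this only shows why a fraction of the model runs.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is a small feed-forward layer; here, one weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))   # learned in practice

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ router                    # one routing logit per expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()               # softmax over the chosen experts

    # Only top_k of n_experts execute, so compute scales with top_k.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)            # -> (16,)
```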

Where does this leave the poor Chinchilla?

Yet another photo of a chinchilla.

Shutterstock

The "Chinchilla" approach has worked for the past five years or more, and it's partly why the stock prices of many companies in the AI supply chain have soared.

The Barclays analysts question whether this paradigm can continue because the performance gains from this method may decline as the cost goes up.

"The idea of spending $10 billion on a pre-training run on the next base model, to achieve very little incremental performance, would likely change," they wrote.

Many in the industry also think data for training AI models is running out; there may not be enough quality information to keep feeding this ravenous chinchilla.

So, top AI companies might stop expanding this process when models reach a certain size. For instance, OpenAI could build its next huge model, GPT-5, but may not go beyond that, the analysts said.

A "synthetic" solution?

OK, the final picture of a chinchilla, I promise.

Itsuo Inouye/File/AP

The AI industry has started using "synthetic" training data, often generated by existing models. Some researchers think this feedback loop of models helping to create new, better models will take the technology to the next level.
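
Mechanically, that feedback loop might look something like the sketch below. Every function name here is a hypothetical placeholder; no lab's actual pipeline is being described:

```python
# Sketch of a synthetic-data self-improvement loop: the current model
# generates candidate examples, a quality filter keeps the good ones,
# and the next model is trained on the survivors. All callables are
# hypothetical placeholders, not a real training API.
from typing import Any, Callable

def self_improvement_loop(
    model: Any,
    generate: Callable[[Any, int], list],   # model, n -> synthetic examples
    keep: Callable[[Any], bool],            # example -> passes quality bar?
    train: Callable[[Any, list], Any],      # model, examples -> new model
    rounds: int = 3,
    n_samples: int = 10_000,
) -> Any:
    for _ in range(rounds):
        candidates = generate(model, n_samples)
        curated = [ex for ex in candidates if keep(ex)]  # filter weak data
        model = train(model, curated)   # next generation feeds on the last
    return model
```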

The Chinchillas could, essentially, feed on themselves to survive.

Kinda gross, but it would mean tech companies would still spend massively on AI in the coming years.

"If the AI industry were to see breakthroughs in synthetic data and recursive self-improvement, then we would hop back on the Chinchilla scaling path, and compute needs would continue to go up rapidly," Sandler and his colleagues wrote. "While not entirely clear right now, this is certainly a possibility we need to consider."

Read the original article on Business Insider

Mark Zuckerberg says Meta will have 1.3M GPUs for AI by year-end

24 January 2025 at 07:39

Meta CEO Mark Zuckerberg said that the company plans to significantly up its capital expenditures this year as it aims to keep pace with rivals in the cutthroat AI space. In a Facebook post Friday, Zuckerberg said that Meta expects to spend $60 billion-$80 billion on CapEx in 2025, primarily on data centers and growing […]

© 2024 TechCrunch. All rights reserved. For personal use only.
