
A chip company you've probably never heard of is suddenly worth $1 trillion. Here's why, and what it means for Nvidia.

18 December 2024 at 01:00
Broadcom CEO Hock Tan

Ying Tang/NurPhoto via Getty Images

  • Broadcom's stock surged in recent weeks, pushing the company's market value over $1 trillion.
  • Broadcom is crucial for companies seeking alternatives to Nvidia's AI chip dominance.
  • Custom AI chips are gaining traction, enhancing tech firms' bargaining power, analysts say.

The rise of AI, and the computing power it requires, is bringing all kinds of previously under-the-radar companies into the limelight. This week it's Broadcom.

Broadcom's stock has soared since late last week, catapulting the company into the $1 trillion market cap club. The boost came from a blockbuster earnings report in which custom AI chip revenue grew 220% compared to last year.

In addition to selling lots of parts and components for data centers, Broadcom designs and sells ASICs, or application-specific integrated circuits, the industry's term for custom chips.

Designers of custom AI chips, chief among them Broadcom and Marvell, are headed into a growth phase, according to Morgan Stanley.

Custom chips are picking up speed

The biggest players in AI buy a lot of chips from Nvidia, the $3 trillion giant with an estimated 90% share of the advanced AI chip market.

Heavily relying on one supplier isn't a comfortable position for any company, though, and many large Nvidia customers are also developing their own chips. Most tech companies don't have large teams of silicon and hardware experts in-house. Of the companies they might turn to for help designing a custom chip, Broadcom is the leader.

Though multipurpose chips like Nvidia's and AMD's graphics processing units are likely to keep the largest share of the AI chip market in the long term, custom chips are growing fast.

Morgan Stanley analysts this week forecast the market for ASICs to nearly double to $22 billion next year.

Much of that growth is attributable to Amazon Web Services' Trainium AI chip, according to Morgan Stanley analysts. Then there are Google's in-house AI chips, known as TPUs, which Broadcom helps make.

In terms of actual value of chips in use, Amazon and Google dominate. But OpenAI, Apple, and TikTok parent company ByteDance are all reportedly developing chips with Broadcom, too.

ASICs bring bargaining power

Custom chips can offer more value in terms of the performance you get for the cost, according to Morgan Stanley's research.

ASICs can also be designed to precisely match a tech company's unique internal workloads, according to the bank's analysts. The better these custom chips get, the more bargaining power they may provide when tech companies negotiate with Nvidia over buying GPUs. But this will take time, the analysts wrote.

In addition to Broadcom, Silicon Valley neighbor Marvell is making gains in the ASIC market, along with Asia-based players Alchip Technologies and MediaTek, they added in a note to investors.

Analysts don't expect custom chips to ever fully replace Nvidia GPUs, but without them, cloud service providers like AWS, Microsoft, and Google would have much less bargaining power against Nvidia.

"Over the long term, if they execute well, cloud service providers may enjoy greater bargaining power in AI semi procurement with their own custom silicon," the Morgan Stanley analysts explained.

Nvidia's big R&D budget

This may not be all bad news for Nvidia. A $22 billion ASIC market would still be smaller than Nvidia's revenue for a single quarter: the company booked $35.1 billion in its most recent one.

Nvidia's R&D budget is massive, and many analysts are confident in its ability to stay at the bleeding edge of AI computing.

And as Nvidia rolls out new, more advanced GPUs, its older offerings get cheaper and potentially more competitive with ASICs.

"We believe the cadence of ASICs needs to accelerate to stay competitive to GPUs," the Morgan Stanley analysts wrote.

Still, Broadcom and chip manufacturers on the supply chain rung beneath, such as TSMC, are likely to get a boost every time a giant cloud company orders up another custom AI chip.

Read the original article on Business Insider

Will the world's fastest supercomputer please stand up?

11 December 2024 at 06:57
TRITON Supercomputer at the University of Miami

T.J. Lievonen

  • Oracle and xAI love to flex the size of their GPU clusters.
  • It's getting hard to tell who has the most supercomputing power as more firms claim the top spot.
  • The real numbers are competitive intel, and cluster size isn't everything, experts told BI.

In high school, as in tech, superlatives are important. Or maybe they just feel important in the moment. With the breakneck pace of the AI computing infrastructure buildout, it's becoming increasingly difficult to keep track of who has the biggest, fastest, or most powerful supercomputer, especially when multiple companies claim the title at once.

"We delivered the world's largest and fastest AI supercomputer, scaling up to 65,000 Nvidia H200 GPUs," Oracle CEO Safra Catz and Chairman, CTO, echoed by Founder Larry Ellison on the company's Monday earnings call.

In late October, Nvidia proclaimed xAI's Colossus the "World's Largest AI Supercomputer" after Elon Musk's firm reportedly built a computing cluster with 100,000 Nvidia graphics processing units in a matter of weeks. The plan is to expand to 1 million GPUs next, according to the Greater Memphis Chamber of Commerce (where the supercomputer is located).

The good ole days of supercomputing are gone

It used to be simpler. "Supercomputers" were most commonly found in research settings, and naturally, there's an official list ranking them. The latest ranking crowned El Capitan the world's most powerful supercomputer: housed at the Lawrence Livermore National Laboratory in California, its 11 million CPU and GPU cores from Nvidia rival AMD add up to 1.742 exaflops of computing capacity. (One exaflop is one quintillion, or a billion billion, operations per second.)

"The biggest computers don't get put on the list," Dylan Patel, chief analyst at Semianalysis, told BI. "Your competitor shouldn't know exactly what you have," he continued. The 65,000-GPU supercluster Oracle executives were praising can reach up to 65 exaflops, according to the company.

It's safe to assume, Patel said, that Nvidia's largest customers, Meta, Microsoft, and xAI, also have the largest, most powerful clusters. On Nvidia's May earnings call, CFO Colette Kress said 200 fresh exaflops of Nvidia computing would be online across nine different supercomputers by the end of this year.

Going forward, it's going to be harder to determine whose clusters are the biggest at any given moment, and even harder to tell whose are the most powerful, no matter how much CEOs may brag.

It's not the size of the cluster, it's how you use it

On Monday's call, Ellison was asked whether the size of these gigantic clusters actually generates better model performance.

He said larger clusters and faster GPUs are elements that speed up model training. Another is networking it all together. "So the GPU clusters aren't sitting there waiting for the data," Ellison said Monday.

Thus, the number of GPUs in a cluster isn't the only factor in the computing-power calculation. Networking and programming matter too. "Exaflops" are a result of the whole package, so unless companies provide the figures, experts can only estimate them.
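
As a rough illustration of how such estimates work, here is a back-of-envelope sketch in Python. The per-GPU throughput and utilization figures are assumptions for illustration, not disclosed numbers; only the 65,000-GPU count and Oracle's 65-exaflop claim come from the article.

```python
# Back-of-envelope cluster throughput estimate (illustrative only).
# Per-GPU petaflops and utilization are ASSUMED values: they vary
# with chip generation, numeric precision, and networking overhead.

def cluster_exaflops(num_gpus: int, per_gpu_petaflops: float, utilization: float) -> float:
    """Estimate sustained cluster throughput in exaflops.

    One exaflop = 1e18 operations per second = 1,000 petaflops.
    """
    peak_petaflops = num_gpus * per_gpu_petaflops
    return peak_petaflops * utilization / 1_000

# Oracle's claim of 65,000 H200s reaching "up to" 65 exaflops implies
# roughly 1 petaflop per GPU at full utilization:
print(cluster_exaflops(65_000, per_gpu_petaflops=1.0, utilization=1.0))  # 65.0
# Sustained throughput is lower once networking and software overhead bite:
print(cluster_exaflops(65_000, per_gpu_petaflops=1.0, utilization=0.4))  # 26.0
```

The point of the exercise is that the headline GPU count pins down only the first factor; the other two are exactly what companies rarely disclose.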

What's certain is that more advanced models, the kind that consider their own thinking and check their work before answering queries, require more compute than earlier generations did. So training increasingly impressive models may indeed require an arms race of sorts.

But an enormous AI arsenal doesn't automatically lead to better or more useful tools.

Sri Ambati, the CEO of the open-source AI platform H2O.ai, said cloud providers may want to flex their cluster size for sales reasons, but given some (albeit slow) diversification of AI hardware and the rise of smaller, more efficient models, cluster size isn't the be-all and end-all.

Power efficiency, too, is a hugely important indicator for AI computing, since energy is an enormous operational expense in AI. But it gets lost in the measuring contest.

Nvidia declined to comment. Oracle did not respond to a request for comment in time for publication.

Have a tip or an insight to share? Contact Emma at [email protected] or use the secure messaging app Signal: 443-333-9088.

Read the original article on Business Insider

China opens antimonopoly probe into Nvidia, escalating the chip war with the US

9 December 2024 at 04:28
Nvidia CEO Jensen Huang.

Sam Yeh/AFP via Getty Images

  • China's top antimonopoly regulator is investigating Nvidia.
  • The investigation is related to the company's 2020 acquisition of an Israeli chip firm.
  • Nvidia's stock fell by 2.2% in premarket trading on Monday.

China's top antimonopoly regulator has launched an investigation into Nvidia, whose shares dropped by 2.2% in premarket trading on Monday following the latest escalation of chip tensions with the US.

The State Administration for Market Regulation said on Monday that it was investigating whether the chipmaking giant had violated antimonopoly regulations.

The probe is related to Nvidia's 2020 acquisition of Mellanox Technologies, an Israeli chip firm. China's competition authority approved the $7 billion takeover on the condition that rivals be notified about new products within 90 days of Nvidia gaining access to them.

The US-China chip war has been escalating. Last week, China's commerce ministry said it would halt shipments of key materials needed for chip production to the US. The ministry said the measures were in response to US chip export bans, also announced last week.

Nvidia, which is headquartered in Santa Clara, California, has also faced antitrust scrutiny in the US. The Department of Justice has been examining whether Nvidia might have abused its market dominance to make it difficult for buyers to change suppliers.

Nvidia did not immediately respond to a request for comment from Business Insider made outside normal working hours.

Read the original article on Business Insider

Amazon isn't seeing enough demand for AMD's AI chips to offer them via its cloud

6 December 2024 at 13:30
AWS logo at re:Invent 2024

Noah Berger/Getty Images for Amazon Web Services

  • AWS has not committed to offering cloud access to AMD's AI chips in part due to low customer demand.
  • AWS said it was considering offering AMD's new AI chips last year.
  • AMD recently increased the sales forecast for its AI chips.

Last year, Amazon Web Services said it was considering offering cloud access to AMD's latest AI chips.

Eighteen months on, the cloud giant still hasn't made any public commitment to AMD's MI300 series.

One reason: low demand.

AWS is not seeing the type of huge customer demand that would lead to selling AMD's AI chips via its cloud service, according to Gadi Hutt, senior director for customer and product engineering at Amazon's chip unit, Annapurna Labs.

"We follow customer demand. If customers have strong indications that those are needed, then there's no reason not to deploy," Hutt told Business Insider at AWS's re:Invent conference this week.

AWS is "not yet" seeing that high demand for AMD's AI chips, he added.

AMD shares dropped roughly 2% after this story first ran.

AMD's line of AI chips has grown since its launch last year. The company recently increased its GPU sales forecast, citing robust demand. However, the chip company is still a long way behind market leader Nvidia.

AWS provides cloud access to other AI chips, such as Nvidia's GPUs. At re:Invent, AWS announced the launch of P6 servers, which come with Nvidia's latest Blackwell GPUs.

AWS and AMD are still close partners, according to Hutt. AWS offers cloud access to AMD's CPU server chips, and AMD's AI chip product line is "always under consideration," he added.

Hutt discussed other topics during the interview, including AWS's relationships with Nvidia, Anthropic, and Intel.

An AMD spokesperson declined to comment.

Do you work at Amazon? Got a tip?

Contact the reporter, Eugene Kim, via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Business Insider's source guide for other tips on sharing information securely.

Editor's note: This story was first published on December 6, 2024, and was updated later that day to reflect developments in AMD's stock price.

Read the original article on Business Insider

Silicon and supercomputers will define the next AI era. AWS just made a big bet on both.

4 December 2024 at 07:07
AWS CEO Matt Garman onstage at re:Invent 2024.
Amazon is betting on its own chips and supercomputers to forge ahead with its AI ambitions.

Noah Berger/Getty Images for Amazon Web Services

  • AWS unveiled a new AI chip and a supercomputer at its re:Invent conference on Tuesday.
  • It's a sign that Amazon is ready to reduce its reliance on Nvidia for AI chips.
  • Amazon isn't alone: Google, Microsoft, and OpenAI are also designing their own AI chips.

Big Tech's next AI era will be all about companies controlling silicon and supercomputers of their own. Just ask Amazon.

At its re:Invent conference on Tuesday, the tech giant's cloud computing unit, Amazon Web Services, unveiled the next line of its AI chips, Trainium3, while announcing a new supercomputer that will be built with its own chips to serve its AI ambitions.

It marks a significant shift from the status quo that has defined the generative AI boom since OpenAI's release of ChatGPT, in which the tech world has relied on Nvidia to secure a supply of its industry-leading chips, known as GPUs, for training AI models in huge data centers.

While Nvidia has a formidable moat (experts say its hardware-software combination serves as a powerful vendor lock-in system), AWS' reveal shows companies are finding ways to take ownership of the tech shaping the next era of AI development.

Putting your own chips on the table

Amazon CEO Andy Jassy.
Amazon is pushing forward with its own brand of chips called Trainium.

Noah Berger/Getty Images for Amazon Web Services

On the chip side, Amazon shared that Trainium2, first unveiled at last year's re:Invent, was now generally available. Its big claim was that the chip offers "30-40% better price performance" than the current generation of servers with Nvidia GPUs.
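
As a quick sanity check on what a price-performance claim of that size implies, the arithmetic below simply inverts the ratio. This is an illustrative sketch, assuming "price performance" means training throughput per dollar, not AWS's own math.

```python
# Price performance = training throughput per dollar (assumed reading).
# A 1.3-1.4x improvement therefore cuts the cost of a fixed job by 1 - 1/gain.
for gain in (1.3, 1.4):
    savings = 1 - 1 / gain
    print(f"{gain:.1f}x price performance -> ~{savings:.0%} cheaper for the same work")
# 1.3x -> ~23% cheaper; 1.4x -> ~29% cheaper
```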

That would mark a big step up from its first series of chips, which analysts at SemiAnalysis described on Tuesday as "underwhelming" for generative AI training and used instead within Amazon for "training non-complex" workloads, such as credit card fraud detection.

"With the release of Trainium2, Amazon has made a significant course correction and is on a path to eventually providing a competitive custom silicon," the SemiAnalysis researchers wrote.

Trainium3, which AWS gave a preview of ahead of a late 2025 release, has been billed as a "next-generation AI training chip." Servers loaded with Trainium3 chips offer four times greater performance than those packed with Trainium2 chips, AWS said.

Matt Garman, the CEO of AWS, told The Wall Street Journal that some of the company's chip push is due to there being "really only one choice on the GPU side" at present, given Nvidia's dominant place in the market. "We think that customers would appreciate having multiple choices," he said.

It's an observation that others in the industry have noted and responded to. Google has been busy designing its own chips that reduce its dependence on Nvidia, while OpenAI is reported to be exploring custom, in-house chip designs of its own.

But having in-house silicon is just one part of this.

The supercomputer advantage

AWS acknowledged that as AI models trained on GPUs continue to get bigger, they are "pushing the limits of compute and networking infrastructure."

That means companies serious about building their own AI models, like Amazon in its partnership with Anthropic, the OpenAI rival that raised a total of $8 billion from the tech giant, will need access to highly specialized computing that can handle a new era of AI.

Adam Selipsky and Dario Amodei sitting onstage at a conference with the logos of Amazon and Anthropic behind them.
Amazon has a close partnership with OpenAI rival Anthropic.

Noah Berger/Getty

With this in mind, AWS shared that it was working with Anthropic to build an "UltraCluster" of servers that form the basis of a supercomputer it has named Project Rainier. According to Amazon, it will scale model training across "hundreds of thousands of Trainium2 chips."

"When completed, it is expected to be the world's largest AI compute cluster reported to date available for Anthropic to build and deploy their future models on," AWS said in a blog, adding that it will be "over five times the size" of the cluster used to build Anthropic's last model.

The supercomputer push follows similar moves elsewhere. The Information first reported earlier this year that OpenAI and Microsoft were working together to build a $100 billion AI supercomputer called Stargate.

Of course, Nvidia is also in the supercomputer business and aims to make supercomputers a big part of its allure to companies looking to use its next-generation AI chip, Blackwell.

Last month, for instance, Nvidia announced that SoftBank, the first customer to receive its new Blackwell-based servers, would use them to build a supercomputer for AI development. Elon Musk has also bragged about his company xAI building a supercomputer with 100,000 Nvidia GPUs in Memphis this year.

AWS made no secret that it remains tied to Nvidia for now. In an interview with The Wall Street Journal, Garman acknowledged that Nvidia is responsible for "99% of the workloads" for training AI models today and doesn't expect that to change anytime soon.

That said, Garman reckoned "Trainium can carve out a good niche" for itself. He'll be wise to recognize that everyone else is busy carving out a niche for themselves, too.

Read the original article on Business Insider

4 things we learned from Amazon's AWS conference, including about its planned supercomputer

3 December 2024 at 15:59
AI chips were the star of AWS CEO Matt Garman's re:Invent keynote.

Business Wire/BI

  • AWS announced plans for an AI supercomputer, UltraCluster, with Trainium 2 chips at re:Invent.
  • AWS may be able to reduce reliance on Nvidia by developing its own AI infrastructure.
  • Apple said it's using Trainium 2 chips for Apple Intelligence.

Matt Garman, the CEO of Amazon Web Services, made several significant new AWS announcements at the re:Invent conference on Tuesday.

His two-and-a-half-hour keynote delved into AWS's current software and hardware offerings and updates, with words from clients including Apple and JPMorgan. Graphics processing units (GPUs), supercomputers, and a surprise Apple cameo stood out among the slew of information.

AWS, the cloud computing arm of Amazon, has been developing its own semiconductors to train AI. On Tuesday, Garman said it's creating UltraServers, containing 64 of its Trainium 2 chips, so companies can scale up their GenAI workloads.

Moreover, it's also building an AI supercomputer, an UltraCluster made up of UltraServers, in partnership with AI startup Anthropic. Named Project Rainier, it will be "the world's largest AI compute cluster reported to date available for Anthropic to build and deploy its future models on" when completed, according to an Amazon blog post. Amazon has invested $8 billion in Anthropic.
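
For a sense of the scale those numbers imply, here is a short back-of-envelope sketch. Amazon has said only "hundreds of thousands" of chips, so the 200,000 total used below is a placeholder assumption; only the 64-chips-per-UltraServer figure comes from the keynote.

```python
# Rough scale of Project Rainier (illustrative; the 200,000-chip total
# is a placeholder -- Amazon has said only "hundreds of thousands").
CHIPS_PER_ULTRASERVER = 64        # stated at re:Invent
assumed_total_chips = 200_000     # placeholder assumption
print(f"~{assumed_total_chips / CHIPS_PER_ULTRASERVER:,.0f} UltraServers")
# -> ~3,125 UltraServers
```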

Such strides could push AWS further into competition with other tech firms in the ongoing AI arms race, including AI chip giant Nvidia.

Here are four takeaways from Garman's full keynote on Tuesday.

AWS' Trainium chips could compete with Nvidia.

Nvidia currently dominates the AI chip market with its sought-after and pricey GPUs, but Garman backed AWS's homegrown silicon during his keynote on Tuesday. His company's goal is to reduce the cost of AI, he said.

"Today, there's really only one choice on the GPU side, and it's just Nvidia. We think that customers would appreciate having multiple choices," Garman told the Wall Street Journal.

AI is growing rapidly, and the demand for chips that make the technology possible is poised to grow alongside it. Major tech companies, like Google and Microsoft, are venturing into chip creation as well to find an alternative to Nvidia.

However, Garman told The Journal he doesn't expect Trainium to dethrone Nvidia "for a long time."

"But, hopefully, Trainium can carve out a good niche where I actually think it's going to be a great option for many workloads β€” not all workloads," he said.

AWS also introduced Trainium3, its next-gen chip.

AWS' new supercomputer could go toe to toe with Elon Musk's xAI.

According to The Journal, the chip cluster known as Project Rainier is expected to be available in 2025. Once it is ready, Anthropic plans to use it to train AI models.

With "hundreds of thousands" of Trainium chips, it would challenge Elon Musk's xAI's Colossus β€” a supercomputer with 100,000 of Nvidia's Hopper chips.

Apple is considering Trainium 2 for Apple Intelligence training.

Garman said Apple is one of AWS's customers using its chips, like Amazon Graviton and Inferentia, for services including Siri.

Benoit Dupin, senior director of AI and machine learning at Apple, then took to the stage at the Las Vegas conference. He said the company worked with AWS for "virtually all phases" of its AI and machine learning life cycle.

"One of the unique elements of Apple business is the scale at which we operate and the speed with which we innovate," Dupin said.

He added, "AWS has been able to keep the pace, and we've been customers for more than a decade."

Now, Dupin said Apple is in the early stages of testing Trainium 2 chips to potentially help train Apple Intelligence.

The company introduced a new generation of foundation models, Amazon Nova.

Amazon announced some new kids on the GenAI block.

AWS customers will be able to use Amazon Nova-powered GenAI applications "to understand videos, charts, and documents, or generate videos and other multimedia content," Amazon said. There are a range of models available at different costs, it said.

"Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are at least 75% less expensive than the best-performing models in their respective intelligence classes in Amazon Bedrock," Amazon said.

Read the original article on Business Insider

Amazon makes massive down payment on dethroning Nvidia

22 November 2024 at 11:09
Dario Amodei, an OpenAI employee turned Anthropic CEO, at TechCrunch Disrupt 2023.

Kimberly White/Getty

  • Amazon on Friday announced another $4 billion investment in the AI startup Anthropic.
  • The deal includes an agreement for Anthropic to use Amazon's AI chips more.
  • The cloud giant is trying to challenge Nvidia and get developers to switch away from those GPUs.

Amazon's Trainium chips are about to get a lot busier. At least, that's what Amazon hopes will happen after it pumps another $4 billion into the AI startup Anthropic.

The companies announced a huge new deal on Friday that brings Amazon's total investment in Anthropic to $8 billion. The goal of all this money is mainly to get Amazon's AI chips to be used more often to train and run large language models.

Anthropic said that in return for this cash injection, it would use AWS as its "primary cloud and training partner." It said it would also help Amazon design future Trainium chips and contribute to building out an Amazon AI-model-development platform called AWS Neuron.

This is an all-out assault on Nvidia, which dominates the AI chip market with its GPUs, servers, and CUDA platform. Nvidia's stock dropped by more than 3% on Friday after the Amazon-Anthropic news broke.

The challenge will be getting Anthropic to actually use Trainium chips in big ways. Switching away from Nvidia GPUs is complicated, time-consuming, and risky for AI-model developers, and Amazon has struggled with this.

Earlier this week, Anthropic CEO Dario Amodei didn't sound like he was all in on Amazon's Trainium chips, despite another $4 billion coming his way.

"We use Nvidia, but we also use custom chips from both Google and Amazon," he said at the Cerebral Valley tech conference in San Francisco. "Different chips have different trade-offs. I think we're getting value from all of them."

In 2023, Amazon made its first investment in Anthropic, agreeing to put in $4 billion. That deal came with similar strings attached. At the time, Anthropic said that it would use Amazon's Trainium and Inferentia chips to build, train, and deploy future AI models and that the companies would collaborate on the development of chip technology.

It's unclear whether Anthropic followed through. The Information reported recently that Anthropic preferred to use Nvidia GPUs rather than Amazon AI chips. The publication said the talks about this latest investment focused on getting Anthropic more committed to using Amazon's offerings.

There are signs that Anthropic could be more committed now, after getting another $4 billion from Amazon.

In Friday's announcement, Anthropic said it was working with Amazon on its Neuron software, which offers the crucial connective tissue between the chip and the AI models. This competes with Nvidia's CUDA software stack, the real enabler of Nvidia's GPUs, which makes these components very hard to swap out for other chips. Nvidia has a decadelong head start on CUDA, and competitors have found that difficult to overcome.
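
To make the switching cost concrete, here is a minimal PyTorch training step of the kind developers write against Nvidia hardware. This is an illustrative sketch, not Anthropic's or Amazon's actual code; the point is how much machinery hides beneath one device string.

```python
# Minimal PyTorch training step targeting CUDA (illustrative sketch).
# The Python looks portable, but everything beneath .to(device) --
# kernels, memory allocation, collective communication -- is CUDA-specific.
# A rival stack like AWS Neuron has to reimplement that layer for every
# operation a model uses before `device` can simply point at other silicon.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(1024, 1024).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(32, 1024, device=device)
loss = model(x).pow(2).mean()
loss.backward()  # backward kernels dispatch to the device's backend
opt.step()
```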

Anthropic's "deep technical collaboration" suggests a new level of commitment to using and improving Amazon's Trainium chips.

Though several companies make chips that compete with or even beat Nvidia's in certain elements of computing performance, no other chip has touched the company in terms of market share or mind share.

Amazon's AI chip journey

Amazon is on a short list of cloud providers attempting to stock their data centers with their own AI chips and avoid spending heavily on Nvidia GPUs, which have profit margins that often exceed 70%.

Amazon debuted its Trainium and Inferentia chips, named after the training and inference tasks they're built for, in 2020.

The aim was to become less dependent on Nvidia and find a way to make cloud computing in the AI age cheaper.

"As customers approach higher scale in their implementations, they realize quickly that AI can get costly," Amazon CEO Andy Jassy said on the company's October earnings call. "It's why we've invested in our own custom silicon in Trainium for training and Inferentia for inference."

But like its many competitors, Amazon has found that breaking the industry's preference for Nvidia is difficult. Some say that's because of CUDA, which offers an abundant software stack with libraries, tools, and troubleshooting help galore. Others say it's simple habit or convention.

In May, the Bernstein analyst Stacy Rasgon told Business Insider he wasn't aware of any companies using Amazon AI chips at scale.

With Friday's announcement, that might change.

Jassy said in October that the next-generation Trainium 2 chip was ramping up. "We're seeing significant interest in these chips, and we've gone back to our manufacturing partners multiple times to produce much more than we'd originally planned," Jassy said.

Still, Anthropic's Amodei sounded this week like he was hedging his bets.

"We believe that our mission is best served by being an independent company," he said. "If you look at our position in the market and what we've been able to do, the independent partnerships we have Google, with Amazon, with others, I think this is very viable."

Read the original article on Business Insider

Nvidia's Blackwell era will test the world's most valuable company in a highwire act

21 November 2024 at 06:11
Nvidia CEO Jensen Huang holds two chip boards.
Nvidia's new AI chip, Blackwell, is key to the company's next growth phase.

Justin Sullivan/Getty Images

  • Nvidia has a lot riding on Blackwell, its new flagship AI chip.
  • Investors had a tepid response despite Nvidia reporting $35.1 billion in third-quarter revenue.
  • To deliver with Blackwell, Nvidia must juggle performance expectations and complex supply chains.

Nvidia looks set to end the year as confidently as it started. How next year plays out will significantly depend on the performance of Blackwell, its next-generation AI chip.

The Santa Clara-based chip giant reminded everyone why it has grown more than 200% this year to become the world's most valuable company. On Wednesday, it reported another quarter of blowout earnings: revenue hit $35.1 billion in the third quarter, up 94% from a year ago.

But despite the strong earnings, which Wedbush analyst Dan Ives said "should be framed and hung in the Louvre," investors remained cautious as they focused their attention on the highwire act Nvidia must pull off with Blackwell.

The new chip, known as a GPU, was first unveiled by CEO Jensen Huang at the company's GTC conference in March. It was revealed as a successor to the Hopper GPU that companies across Silicon Valley and beyond have used to build powerful AI models.

While Nvidia confirmed on Wednesday that Blackwell is now "in full production," with 13,000 samples shipped to customers last quarter, signs emerged to suggest that Nvidia faces a difficult path ahead as it prepares to scale up its new-era GPUs.

Nvidia must navigate complex supply chains

First, Blackwell is what Nvidia CFO Colette Kress called a "full-stack" system.

That makes it a beast of machinery that needs to be fit for an incredibly wide range of specific needs from a variety of customers. As she told investors on the earnings call on Wednesday, Blackwell is built with "customizable configurations" to address "a diverse and growing AI market." That includes everything from "x86 to Arm, training to inferencing GPUs, InfiniBand to Ethernet switches," Kress said.

Nvidia will also need incredible precision in its execution to satisfy its customers. As Kress said on the earnings call, the line for Blackwell is "staggering," with the company "racing to scale supply to meet the incredible demand customers are placing on us."

To achieve this, it'll need to focus on two areas. First, meeting demand for Blackwell will mean efficiently orchestrating an incredibly complex and widespread supply chain. In response to a question from Goldman Sachs analyst Toshiya Hari, Huang reeled off a near-endless list of suppliers contributing to Blackwell production.

Huang holds up two chips while speaking onstage at the GTC conference.
Huang holds up the Blackwell on the left and its predecessor, the H100, on the right.

Nvidia

There were Far East semiconductor firms TSMC, SK Hynix, and SPIL; Taiwanese electronics giant Foxconn; Amphenol, a Connecticut-based producer of fiber-optic connectors; cloud and data center specialists like Wiwynn and Vertiv; and several others.

"I'm sure I've missed partners that are involved in the ramping up of Blackwell, which I really appreciate," Huang said. He'll need each and every one of them to be in sync to help meet next quarter's guidance of $37.5 billion in revenue. There had been some recent suggestions that cooling issues were plaguing Blackwell, but Huang seemed to suggest they had been addressed.

Kress acknowledged that the costs of the Blackwell ramp-up will lead to gross margins dropping by a few percentage points but expects them to recover to their current level of roughly 75% once "fully ramped."

All eyes are on Blackwell's performance

The second area Nvidia will need to execute with absolute precision is performance. AI companies racing to build smarter models to keep their own backers on board will depend on Huang's promise that Blackwell is far superior in its capabilities to Hopper.

Reports so far suggest Blackwell is on track to deliver next-generation capabilities. Kress reassured investors on this, citing results from Blackwell's debut last week on the MLPerf Training benchmark, an industry test that measures "how fast systems can train models to a target quality metric." The Nvidia CFO said Blackwell delivered a "2.2 times leap in performance over Hopper" on the test.

Collectively, these performance leaps and supply-side pressures matter to Nvidia for a longer-term reason, too. Huang committed the company to a "one-year rhythm" of new chip releases earlier this year, a move that effectively requires the tech giant to showcase a vastly more powerful variety of GPUs each year while convincing customers that it can dole them out.

While the performance gains appear real, reports this year have suggested that production pain points have delayed Blackwell's rollout.

Nvidia remains ahead of rivals like AMD

For now, investors appear to be taking a wait-and-see approach to Blackwell, with Nvidia's share price down less than 1% in premarket trading. Hamish Low, a research analyst at Enders Analysis, told BI that "the reality is that Nvidia will dominate the AI accelerator market for the foreseeable future," particularly as "the wave of AI capex" expected from tech firms in 2025 will ensure it remains "the big winner" in the absence of strong competition.

"AMD is a ways behind and none of the hyperscaler chips are going to be matching that kind of volume, which gives Nvidia some breathing room in terms of market share," Low said.

As Low notes, however, there's another reality Nvidia must reckon with. "The challenge is the sheer weight of investor expectations due to the scale and premium that Nvidia has reached, where anything less than continually flying past every expectation is a disappointment," he said.

If Blackwell misses those expectations in any way, Nvidia may need to brace for a fall.

Read the original article on Business Insider

Nvidia's CEO defends his moat as AI labs change how they improve their AI models

20 November 2024 at 17:26

Nvidia raked in more than $19 billion in net income during the last quarter, the company reported on Wednesday, but that did little to assure investors that its rapid growth would continue. On its earnings call, analysts prodded CEO Jensen Huang about how Nvidia would fare if tech companies start using new methods to improve […]

© 2024 TechCrunch. All rights reserved. For personal use only.
