Broadcom's CEO says he's too busy riding the AI wave to consider a takeover of rival Intel.
In an interview with the Financial Times, Hock Tan said he had no interest in "hostile takeovers."
Broadcom has soared to a $1 trillion market capitalization for the first time thanks to the AI boom.
The chief executive of $1 trillion AI chip giant Broadcom has dismissed the prospect of a takeover bid for struggling rival Intel.
In an interview with the Financial Times, Hock Tan said that he has his "hands very full" from riding the AI boom, responding to rumors that his company could make a move for its Silicon Valley rival.
"That is driving a lot of my resources, a lot of my focus," Tan said, adding that he has "not been asked" to bid on Intel.
The Broadcom boss has also adopted a "no hostile takeovers" policy since Donald Trump blocked his company's bid for Qualcomm on national security grounds in 2018. Broadcom was incorporated in Singapore at the time.
"I can only make a deal if it's actionable," Tan told the FT. "Actionability means someone comes and asks me. Ever since Qualcomm, I learned one thing: no hostile offers."
Broadcom and Intel have experienced diverging fortunes since the start of the generative AI boom. Broadcom has more than doubled in value since the start of the year to hit a $1 trillion market capitalization for the first time, while Intel has collapsed by more than half to $82 billion.
Broadcom, which designs custom AI chips and components for data centers, hit a record milestone last week after reporting its fourth-quarter earnings. Revenues from its AI business jumped 220% year over year.
Intel, meanwhile, has had a much rougher year. Its CEO Pat Gelsinger, who first joined Intel at 18 and was brought back in 2021 after a stint at VMware, announced his shock retirement earlier this month after struggling to keep pace with rivals like Nvidia in the AI boom.
Gelsinger, who returned to revitalize Intel's manufacturing and design operations, faced several struggles, leading him to announce a head count reduction of roughly 15,000 in August and a suspension of Intel's dividend.
Its challenges have fueled rumors that Intel could be bought by a rival, a move that would mark a stunning end for the decades-old chip firm. Buyer interest remains uncertain, however: Bloomberg reported in November that Qualcomm's interest in an Intel takeover had cooled.
Broadcom did not immediately respond to BI's request for comment outside regular working hours.
Microsoft bought more than twice as many Nvidia Hopper chips this year as any of its biggest rivals. The tech giant bought 485,000 Nvidia Hopper chips across 2024, according to reporting from the Financial Times, which cited data from tech consultancy Omdia. For comparison, Meta bought 224,000 of the same flagship Nvidia chip this year. […]
Groq is taking a novel approach to competing with Nvidia's much-lauded CUDA software.
The chip startup is using a free inference tier to attract hundreds of thousands of AI developers.
Groq aims to capture market share with faster inference and global joint ventures.
There is an active debate about Nvidia's competitive moat. Some say it comes from the prevailing perception that Nvidia is the "safe" choice when investing billions in a technology whose return is still uncertain.
Many say it's Nvidia's software, particularly CUDA, which the company began developing well before the AI boom. CUDA allows users to get the most out of graphics processing units.
Competitors have attempted to build comparable systems, but without Nvidia's head start, it has been tough to get developers to learn, try, and ultimately improve them.
Groq, however, is an Nvidia competitor that focused early on a segment of AI computing that requires less direct programming of chips, and investors are intrigued. The 8-year-old AI chip startup was valued at $2.8 billion at its $640 million Series D round in August.
Though at least one investor has called companies like Groq "insane" for attempting to dent Nvidia's estimated 90% market share, the startup has been building its technology exactly for the opportunity coming in 2025, said Mark Heaps, Groq's "chief tech evangelist."
'Unleashing the beast'
"What we decided to do was take all of our compute, make it available via a cloud instance, and we gave it away to the world for free," Heaps said. Internally, the team called the strategy "unleashing the beast." Groq's free tier caps users at a ceiling set by requests per day or tokens per minute.
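A per-minute cap like the one Groq describes is commonly enforced with a token-bucket limiter. The sketch below is purely illustrative of that general pattern, not Groq's actual implementation; the class and parameter names are invented for the example.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows up to `rate` requests
    per second, with bursts up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum stored tokens (burst size)
        self.tokens = capacity    # start full
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=2)  # 2 requests/second, burst of 2
results = [bucket.allow() for _ in range(4)]
print(results)  # first two requests pass, the burst is then exhausted
```

The same shape works for a tokens-per-minute ceiling: count LLM tokens instead of requests and set `rate` to the per-minute budget divided by 60.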
Heaps; CEO and ex-Googler Jonathan Ross; and a relatively lean team spent 2023 and 2024 recruiting developers to try Groq's tech. Through hackathons and contests, the company makes a promise: try the hardware via Groq's cloud platform for free, and break through walls you've hit with others.
Groq offers some of the fastest inference available, according to rankings on Artificialanalysis.ai, which measures cost and latency for companies that sell access to specific models by the token, or output.
Inference is a type of computing that produces the answers to queries asked of large language models. Training, the more energy-intensive type of computing, is what gives the models the ability to answer. So far, the hardware used for those two tasks has been different.
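The training-versus-inference split above can be illustrated with a toy model. This is a deliberately simplified sketch, not anything resembling an LLM: training repeatedly adjusts weights against known answers, while inference is a single cheap forward pass with the weights frozen.

```python
# Toy linear model y = w * x, illustrating training vs. inference.

def train(pairs, lr=0.01, steps=1000):
    """Training: iteratively adjust the weight w by gradient descent
    on mean squared error over known (x, y) pairs."""
    w = 0.0
    for _ in range(steps):
        # Gradient of MSE with respect to w: mean of 2 * (w*x - y) * x
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad
    return w

def infer(w, x):
    """Inference: one forward pass with a fixed weight, no updates."""
    return w * x

pairs = [(1, 3), (2, 6), (3, 9)]  # data generated by y = 3x
w = train(pairs)                  # expensive, iterative
print(round(infer(w, 10), 1))     # cheap, one pass: ~30.0
```

The asymmetry is why the two workloads have historically run on different hardware: training is an iterative loop over the whole dataset, while serving answers is a stream of independent forward passes.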
After the inference service became available for free, developers came out of the woodwork, he said, with projects that couldn't succeed on slower chips. With more speed, developers can send one request through multiple models and use another model to choose the best response, all in the time it would usually take to fulfill just one request.
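The fan-out pattern Heaps describes can be sketched as follows. The stand-in "models" and the length-based judge are invented for the example; in practice each would be an API call to a different hosted model.

```python
import concurrent.futures

# Stand-in "models": each takes a prompt and returns a candidate answer.
def model_a(prompt): return prompt.split()[0]        # terse
def model_b(prompt): return prompt + ", certainly."  # verbose
def model_c(prompt): return prompt.upper()           # shouty

def judge(answer):
    """Toy judge model: score candidates (here, simply prefer
    the longest answer)."""
    return len(answer)

def best_of(prompt, models):
    # Fan the same request out to every model in parallel, then let
    # the judge pick the highest-scoring response.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda m: m(prompt), models))
    return max(answers, key=judge)

print(best_of("hello fast world", [model_a, model_b, model_c]))
```

With fast enough inference, the wall-clock cost of this fan-out is roughly one model call plus one judging call, which is the point Heaps is making.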
Roughly 652,000 developers are now using Groq API keys, Heaps said.
Heaps expects speed to hook developers on Groq. But its novel plan for programming its chips gives the company a unique approach to the most crucial element within Nvidia's "moat."
No need for CUDA libraries
"Everybody, once they deployed models, was gonna need faster inference at a lower cost, and so that's what we focused on," Heaps said.
So where's the CUDA equivalent? It's all in-house.
"We actually have more than 1,800 models built into our compiler. We use no kernels, and we don't need people to use CUDA libraries. So because of that, people can just start working with a model that's built in," Heaps said.
Training, he said, requires more customization at the chip level. In inference, Groq's task is to choose the right models to offer customers and ensure they run as fast as possible.
"What you're seeing with this massive swell of developers who are building AI applications β they don't want to program at the chip level," he added.
The strategy comes with some level of risk. Groq is unlikely to accumulate a stable of developers who continuously troubleshoot and improve its base software like CUDA has. Its offering may be more like a restaurant menu than a grocery store. But this also means the barrier to entry for Groq users is the same as any other cloud provider and potentially lower than that of other chips.
Though Groq started out as a company with a novel chip design, today, of the company's roughly 300 employees, 60% are software engineers, Heaps said.
"For us right now, there is a billions and billions of dollars industry emerging, that we can go capture a big share of market in, while at the same time, we continue to mature the compiler," he said.
Despite being realistic about the near term, Groq has lofty ambitions, which CEO Jonathan Ross has described as "providing half the world's inference." Ross also says the goal is to cast a net over the globe via joint ventures. A venture in Saudi Arabia is underway; Canada and Latin America are in the works.
Earlier this year, Ross told BI the company also aims to ship 108,000 of its language processing units, or LPUs, by the first quarter of next year, and 2 million chips by the end of 2025, most of which will be made available through its cloud.
Have a tip or an insight to share? Contact Emma at [email protected] or use the secure messaging app Signal: 443-333-9088
Broadcom's stock surged in recent weeks, pushing the company's market value over $1 trillion.
Broadcom is crucial for companies seeking alternatives to Nvidia's AI chip dominance.
Custom AI chips are gaining traction, enhancing tech firms' bargaining power, analysts say.
The rise of AI, and the computing power it requires, is bringing all kinds of previously under-the-radar companies into the limelight. This week it's Broadcom.
Broadcom's stock has soared since late last week, catapulting the company into the $1 trillion market cap club. The boost came from a blockbuster earnings report in which custom AI chip revenue grew 220% compared to last year.
In addition to selling lots of parts and components for data centers, Broadcom designs and sells ASICs, or application-specific integrated circuits, the industry term for custom chips.
Designers of custom AI chips, chief among them Broadcom and Marvell, are headed into a growth phase, according to Morgan Stanley.
Custom chips are picking up speed
The biggest players in AI buy a lot of chips from Nvidia, the $3 trillion giant with an estimated 90% of market share of advanced AI chips.
Heavily relying on one supplier isn't a comfortable position for any company, though, and many large Nvidia customers are also developing their own chips. Most tech companies don't have large teams of silicon and hardware experts in-house, and of the companies they might turn to for a custom chip design, Broadcom is the leader.
Though multipurpose chips like Nvidia's and AMD's graphics processing units are likely to maintain the largest share of the AI chip market over the long term, custom chips are growing fast.
Morgan Stanley analysts this week forecast the market for ASICs to nearly double to $22 billion next year.
Much of that growth is attributable to Amazon Web Services' Trainium AI chip, according to Morgan Stanley analysts. Then there are Google's in-house AI chips, known as TPUs, which Broadcom helps make.
In terms of actual value of chips in use, Amazon and Google dominate. But OpenAI, Apple, and TikTok parent company ByteDance are all reportedly developing chips with Broadcom, too.
ASICs bring bargaining power
Custom chips can offer more value, in terms of the performance you get for the cost, according to Morgan Stanley's research.
ASICs can also be designed to perfectly match tech companies' unique internal workloads, according to the bank's analysts. The better these custom chips get, the more bargaining power they may provide when tech companies are negotiating with Nvidia over buying GPUs. But this will take time, the analysts wrote.
In addition to Broadcom, Silicon Valley neighbor Marvell is making gains in the ASICs market, along with Asia-based players Alchip Technologies and Mediatek, they added in a note to investors.
Analysts don't expect custom chips to ever fully replace Nvidia GPUs, but without them, cloud service providers like AWS, Microsoft, and Google would have much less bargaining power against Nvidia.
"Over the long term, if they execute well, cloud service providers may enjoy greater bargaining power in AI semi procurement with their own custom silicon," the Morgan Stanley analysts explained.
Nvidia's big R&D budget
This may not be all bad news for Nvidia. A $22 billion ASICs market is smaller than Nvidia's revenue for just one quarter.
Nvidia's R&D budget is massive, and many analysts are confident in its ability to stay at the bleeding edge of AI computing.
And as Nvidia rolls out new, more advanced GPUs, its older offerings get cheaper and potentially more competitive with ASICs.
"We believe the cadence of ASICs needs to accelerate to stay competitive to GPUs," the Morgan Stanley analysts wrote.
Still, Broadcom and chip manufacturers on the supply chain rung beneath, such as TSMC, are likely to get a boost every time a giant cloud company orders up another custom AI chip.
Arm and Qualcomm are heading to trial this week in Delaware after two years of legal disputes.
The legal battle over a licensing agreement puts Arm in conflict with one of its largest customers.
The trial could have big implications for the entire chip industry, from M&A to IP.
A legal battle between two of the world's biggest chip companies, Arm and Qualcomm, is heading to trial this week, and its outcome could have wide-ranging consequences for the entire industry.
The jury trial in Delaware, starting Monday, is the result of a two-year fight between the two major chip companies. The dispute centers on a licensing arrangement connected to Qualcomm's $1.4 billion acquisition of chip startup Nuvia in 2021.
The fight has put Arm in conflict with one of its largest customers. Qualcomm pays Arm roughly $300 million a year in fees, Reuters reported, citing Stacy Rasgon, a senior analyst at Bernstein Research.
The trial is expected to last until Friday, with each side given 11 hours to present its case. It is set to include testimony from the CEO of Arm, Rene Haas, the chief executive of Qualcomm, Cristiano Amon, and the founder of Nuvia, Gerard Williams.
The legal battle
Arm first filed the lawsuit against Qualcomm in August 2022, alleging a breach of contract and trademark infringement.
The suit revolved around Qualcomm's 2021 acquisition of Nuvia, a chip design startup.
Nuvia had a license to use Arm's architecture to design server chips before Qualcomm acquired it. After the deal closed, Qualcomm reassigned Nuvia engineers to work on a laptop processor. Arm claims that Qualcomm failed to properly transfer the license after the acquisition.
Arm has argued Qualcomm should have renegotiated the licensing agreement because it had different financial terms with each company. Arm, which is majority-owned by SoftBank, has accused Qualcomm of continuing to use its intellectual property in products designed with Nuvia's technology despite not having the required licensing agreements.
In response, Qualcomm has said its existing license with Arm is sufficient and countersued the company, accusing Arm of overstepping its rights. Qualcomm has also said the lawsuit is harming its business and ability to innovate.
Haas addressed the case in a recent interview with The Verge's Alex Heath.
"I can appreciate, because we talk to investors and partners, that what they hate the most is uncertainty," the Arm CEO said. "But on the flip side, I would say the principles as to why we filed the claim are unchanged."
The company has previously said the lawsuit was a last-resort move to protect its intellectual property.
Arm is not seeking monetary damages from Qualcomm but is asking it to destroy any products built using Arm's IP without proper licensing.
Consequences for the chip industry
The trial could have ramifications for IP licensing agreements, mergers and acquisitions, and contract law in the tech industry, wrote Jim McGregor, a principal analyst and partner at TIRIAS Research, in an article for Forbes.
"In addition, it will have an impact on the entire electronics ecosystem, especially each party's supply chains and customer bases," he continued.
Arm and Qualcomm are longtime allies, and the trial is an unusual escalation for two companies so closely tied together.
"It's really not in either of their best interests to go nuclear," Rasgon told The Financial Times. "I think it would make sense to see a settlement β they need each other."
The case could also disrupt a wave of AI computers. Arm said in June that Qualcomm used designs based on Nuvia engineering to create new low-power AI PC chips, which launched earlier this year. Should Arm win the legal battle, it could halt shipments of laptops made by partners, including Microsoft, that contain disputed Qualcomm chips.
Representatives for Arm and Qualcomm did not immediately respond to a Business Insider request for comment.
Intel's co-CEOs discussed splitting the firm's manufacturing and products businesses Thursday.
A separation could address Intel's poor financial performance. It also has political implications.
Intel Foundry is forming a separate operational board in the meantime, executives said.
Intel's new co-CEOs said the company is creating more separation between its manufacturing and products businesses and the possibility of a formal split is still in play.
When asked if separating the two units was a possibility and if the success of the company's crucial, new "18A" process could influence the decision, CFO David Zinsner and CEO of Intel Products Michelle Johnston Holthaus, now interim co-CEOs, said preliminary moves are in progress.
"We really do already run the businesses fairly independently," Holthaus said at a Barclays tech conference Thursday. She added that severing the connection entirely does not make sense in her view. "But, you know, someone will decide that," she said.
"As far as does it ever fully separate? I think that's an open question for another day," Zinsner said.
Already in motion
Though the co-CEOs made it clear a final decision on a potential break-up has not been made, Zinsner outlined a series of moves already in progress that could make a split easier.
"We already run the businesses separately, but we are going down the path of creating a subsidiary for Intel Foundry as part of the overall Intel company," Zinsner said.
In addition, the company is forming a separate operational board for Intel Foundry and separating the operations and inventory management software for the two sides of the business.
Until a permanent CEO is appointed by the board, the co-CEOs will manage most areas of the company together, but Zinsner alone will manage the Foundry business. The foundry aims to build a contract manufacturing business for other chip designers. Because clients bring sensitive, competitive intellectual property into that business, separation is key.
"Obviously, they want firewalls. They want to protect their IPs, their product road maps, and so forth. So I will deal with that part of the foundry to separate that from the Intel Products business," Zinsner said.
Have a tip or an insight to share? Contact Emma at [email protected] or use the secure messaging app Signal: 443-333-9088.
Apple is working with semiconductor company Broadcom on its first server chip designed to handle AI applications, according to The Information, which cited three people with knowledge of the project. Apple is known for designing its own chips, called Apple Silicon and primarily manufactured by TSMC, for its devices. But those chips weren't […]
Micron Technology will receive more than $6.1 billion after the US Department of Commerce finalized one of the largest CHIPS Act awards ever to "the only US-based manufacturer of memory chips," Vice President Kamala Harris said in a press statement.
Micron will use the funding to construct "several state-of-the-art memory chips facilities" in New York and Idaho, Harris said. The chipmaker has committed to a "$125 billion investment over the next few decades" and promised to create "at least 20,000 jobs," Harris confirmed.
Additionally, Micron "agreed to preliminary terms for an additional investment of $275 million to expand" its facility in Manassas, Virginia, Harris confirmed. Those facilities will mostly be used to manufacture chips for automotive and defense industries, Harris noted.
China's top antimonopoly regulator is investigating Nvidia.
The investigation is related to the company's 2020 acquisition of an Israeli chip firm.
Nvidia's stock fell by 2.2% in premarket trading on Monday.
China's top antimonopoly regulator has launched an investigation into Nvidia, whose shares dropped by 2.2% in premarket trading on Monday following the latest escalation of chip tensions with the US.
The State Administration for Market Regulation said on Monday that it was investigating whether the chip giant violated antimonopoly regulations.
The probe is related to Nvidia's acquisition of Mellanox Technologies, an Israeli chip firm, in 2020. China's competition authority approved the $7 billion takeover in 2020 on the condition that rivals be notified of new products within 90 days of Nvidia gaining access to them.
The US-China chip war has been escalating. Last week, China's commerce ministry said it would halt shipments of key materials needed for chip production to the US. The ministry said the measures were in response to US chip export bans, also announced last week.
Nvidia, which is headquartered in Santa Clara, California, has also faced antitrust scrutiny in the US. The Department of Justice has been examining whether Nvidia might have abused its market dominance to make it difficult for buyers to change suppliers.
Nvidia did not immediately respond to a request for comment from Business Insider made outside normal working hours.
AWS's new AI chips aren't meant to go after Nvidia's lunch, said Gadi Hutt, a senior director of customer and product engineering at the company's chip-designing subsidiary, Annapurna Labs. The goal is to give customers a lower-cost option, as the market is big enough for multiple vendors, Hutt told Business Insider in an interview at AWS's re:Invent conference.
"It's not about unseating Nvidia," Hutt said, adding, "It's really about giving customers choices."
AWS has spent tens of billions of dollars on generative AI. This week the company unveiled its most advanced AI chip, called Trainium 2, which can cost roughly 40% less than Nvidia's GPUs, and a new supercomputer cluster using the chips, called Project Rainier. Earlier versions of AWS's AI chips had mixed results.
Hutt insists this isn't a competition but a joint effort to grow the overall size of the market. The customer profiles and AI workloads they target are also different. He added that Nvidia's GPUs would remain dominant for the foreseeable future.
In the interview, Hutt discussed AWS's partnership with Anthropic, which is set to be Project Rainier's first customer. The two companies have worked closely over the past year, and Amazon recently invested an additional $4 billion in the AI startup.
He also shared his thoughts on AWS's partnership with Intel, whose CEO, Pat Gelsinger, just retired. He said AWS would continue to work with the struggling chip giant because customer demand for Intel's server chips remained high.
Last year AWS said it was considering selling AMD's new AI chips. But Hutt said those chips still weren't available on AWS because customers hadn't shown strong demand.
This Q&A has been edited for clarity and length.
There have been a lot of headlines saying Amazon is out to get Nvidia with its new AI chips. Can you talk about that?
I usually look at these headlines, and I giggle a bit because, really, it's not about unseating Nvidia. Nvidia is a very important partner for us. It's really about giving customers choices.
We have a lot of work ahead of us to ensure that we continuously give more customers the ability to use these chips. And Nvidia is not going anywhere. They have a good solution and a solid road map. We just announced the P6 instances [AWS servers with Nvidia's latest Blackwell GPUs], so there's a continuous investment in the Nvidia product line as well. It's really to give customers options. Nothing more.
Nvidia is a great supplier of AWS, and our customers love Nvidia. I would not discount Nvidia in any way, shape, or form.
So you want to see Nvidia's use case increase on AWS?
If customers believe that's the way they need to go, then they'll do it. Of course, if it's good for customers, it's good for us.
The market is very big, so there's room for multiple vendors here. We're not forcing anybody to use those chips, but we're working very hard to ensure that our major tenets, which are high performance and lower cost, will materialize to benefit our customers.
Does it mean AWS is OK being in second place?
It's not a competition. There's no machine-learning award ceremony every year.
In the case of a customer like Anthropic, there's very clear scientific evidence that larger compute infrastructure allows you to build larger models with more data. And if you do that, you get higher accuracy and more performance.
Our ability to scale capacity to hundreds of thousands of Trainium 2 chips gives them the opportunity to innovate on something they couldn't have done before. They get a 5x boost in productivity.
Is being No. 1 important?
The market is big enough. No. 2 is a very good position to be in.
I'm not saying I'm No. 2 or No. 1, by the way. But it's really not something I'm even thinking about. We're so early in our journey here in machine learning in general, the industry in general, and also on the chips specifically, we're just heads down serving customers like Anthropic, Apple, and all the others.
We're not even doing competitive analysis with Nvidia. I'm not running benchmarks against Nvidia. I don't need to.
For example, there's MLPerf, an industry performance benchmark. Companies that participate in MLPerf have performance engineers working just to improve MLPerf numbers.
That's completely a distraction for us. We're not participating in that because we don't want to waste time on a benchmark that isn't customer-focused.
On the surface, it seems like helping companies grow on AWS isn't always beneficial for AWS's own products because you're competing with them.
We are the same company that is the best place Netflix is running on, and we also have Prime Video. It's part of our culture.
I will say that there are a lot of customers that are still on GPUs. A lot of customers love GPUs, and they have no intention to move to Trainium anytime soon. And that's fine, because, again, we're giving them the options and they decide what they want to do.
Do you see these AI tools becoming more commoditized in the future?
I really hope so.
When we started this in 2016, the problem was that there was no operating system for machine learning. So we really had to invent all the tools that go around these chips to make them work for our customers as seamlessly as possible.
If machine learning becomes commoditized on the software and hardware sides, it's a good thing for everybody. It means that it's easier to use those solutions. But running machine learning meaningfully is still an art.
What are some of the different types of workloads customers might want to run on GPUs versus Trainium?
GPUs are more of a general-purpose processor of machine learning. All the researchers and data scientists in the world know how to use Nvidia pretty well. If you invent something new, if you do that on GPU, then things will work.
If you invent something new on specialized chips, you'll have to either ensure compiler technology understands what you just built or create your own compute kernel for that workload. We're focused mainly on use cases where our customers tell us, "Hey, this is what we need." Usually the customers we get are the ones that are seeing increased costs as an issue and are trying to look for alternatives.
So the most advanced workloads are usually reserved for Nvidia chips?
Usually. If data-science folks need to continuously run experiments, they'll probably do that on a GPU cluster. When they know what they want to do, that's where they have more options. That's where Trainium really shines, because it gives high performance at a lower cost.
AWS CEO Matt Garman previously said the vast majority of workloads will continue to be on Nvidia.
It makes sense. We give value to customers who have a large spend and are trying to see how they can control the costs a bit better. When Matt says the majority of the workloads, it means medical imaging, speech recognition, weather forecasting, and all sorts of workloads that we're not really focused on right now because we have large customers who ask us to do bigger things. So that statement is 100% correct.
In a nutshell, we want to continue to be the best place for GPUs and, of course, Trainium when customers need it.
What has Anthropic done to help AWS in the AI space?
They have very strong opinions of what they need, and they come back to us and say, "Hey, can we add feature A to your future chip?" It's a dialogue. Some ideas they came up with weren't feasible to even implement in a piece of silicon. We actually implemented some ideas, and for others we came back with a better solution.
Because they're such experts in building foundation models, this really helps us home in on building chips that are really good at what they do.
We just announced Project Rainier together. This is someone who wants to use a lot of those chips as fast as possible. It's not an idea; we're actually building it.
Can you talk about Intel? AWS's Graviton chips are replacing a lot of Intel chips at AWS data centers.
I'll correct you here. Graviton is not replacing x86. It's not like we're yanking out x86 and putting Graviton in place. But again, following customer demand, more than 50% of our recent landings on CPUs were Graviton.
It means that the customer demand for Graviton is growing. But we're still selling a lot of x86 cores too for our customers, and we think we're the best place to do that. We're not competing with these companies, but we're treating them as good suppliers, and we have a lot of business to do together.
How important is Intel going forward?
They will for sure continue to be a great partner for AWS. There are a lot of use cases that run really well on Intel cores. We're still deploying them. There's no intention to stop. It's really following customer demand.
Is AWS still considering selling AMD's AI chips?
AMD is a great partner for AWS. We sell a lot of AMD CPUs to customers as instances.
The machine-learning product line is always under consideration. If customers strongly indicate that they need it, then there's no reason not to deploy it.
And you're not seeing that yet for AMD's AI chips?
Not yet.
How supportive are Amazon CEO Andy Jassy and Garman of the AI chip business?
They're very supportive. We meet them on a regular basis. There's a lot of focus across leadership in the company to make sure that the customers who need ML solutions get them.
There's also a lot of collaboration within the company with science and service teams that are building solutions on those chips. Other Amazon products, like Rufus, the AI assistant available to all Amazon customers, run entirely on Inferentia and Trainium chips.
Do you work at Amazon? Got a tip?
Contact the reporter, Eugene Kim, via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Business Insider's source guide for other tips on sharing information securely.
"We are very driven toward 'no wafer left behind,'" Naga Chandrasekaran, the chief global operations officer, said at the UBS Global Technology and AI Conference on Wednesday.
But Intel needs a "no capital left behind" mindset, he added.
Chandrasekaran, who joined Intel this year after two decades at Micron, said that Intel's strategy of producing excess wafers in the hope that there will be demand may have worked when it was closer to a monopoly.
Intel was Silicon Valley's dominant chipmaker in the 2000s. But it has lost ground to AI king Nvidia, Samsung, and several Taiwanese and American players over the years, missing out on skyrocketing artificial intelligence demand. Companies like Microsoft and Google have been designing their own chips, further limiting Intel's market.
Intel's share price has dropped almost 50% this year as it has faced multiple challenges, including billions in losses, sweeping layoffs, and buyouts.
Chandrasekaran and Intel's interim co-CEO David Zinsner, who also participated in Wednesday's fireside talk, said that the company needs to be more mindful of capital spending and operating expenses.
"We're going line by line through this stuff and he's challenging everything and we're picking off things," Zinsner said of Chandrasekaran's strategy. "You've got to absolutely think about every dollar going to capital and scrutinizing it for sure."
The company said in its most recent annual report that it expects continued high capital expenditures "for the next several years" amid an expansion. Intel spent $25.8 billion on capital expenditures last year, up from $18.7 billion two years ago.
On Wednesday, the execs also said that Intel would stick to its current financial forecast and that they were not worried about the impact of the incoming Trump administration.
The company is set to get a $7.9 billion CHIPS Act grant, which is mostly awarded in tax credits, as part of a government program to boost the American semiconductor industry. The Commerce Department told The New York Times that Intel was receiving less than the $8.5 billion originally promised because it also received a separate grant of $3 billion to produce chips for the military.
Trump's tariff threats are not publicly ruffling the Intel executives.
"We have good geographic dispersion of our factories. We can move things around based on what we need," Zinsner said.
Bloomberg and Reuters reported Wednesday that the chipmaker is considering at least two people to replace Gelsinger, who abruptly retired on Sunday after clashing with Intel's board over turnaround plans. Candidates include Lip-Bu Tan, a former Intel board member, and Matt Murphy, the CEO of Marvell Technology.
AWS unveiled a new AI chip and a supercomputer at its re:Invent conference on Tuesday.
It's a sign that Amazon is ready to reduce its reliance on Nvidia for AI chips.
Amazon isn't alone: Google, Microsoft, and OpenAI are also designing their own AI chips.
Big Tech's next AI era will be all about controlling silicon and supercomputers of its own. Just ask Amazon.
At its re:Invent conference on Tuesday, the tech giant's cloud computing unit, Amazon Web Services, unveiled the next line of its AI chips, Trainium3, while announcing a new supercomputer that will be built with its own chips to serve its AI ambitions.
It marks a significant shift from the status quo that has defined the generative AI boom since OpenAI's release of ChatGPT, in which the tech world has relied on Nvidia to secure a supply of its industry-leading chips, known as GPUs, for training AI models in huge data centers.
While Nvidia has a formidable moat (experts say its hardware-software combination serves as a powerful vendor lock-in system), AWS' reveal shows companies are finding ways to take ownership of the tech shaping the next era of AI development.
Putting your own chips on the table
On the chip side, Amazon shared that Trainium2, which was first unveiled at last year's re:Invent, was now generally available. Its big claim was that the chip offers "30-40% better price performance" than the current generation of servers with Nvidia GPUs.
That would mark a big step up from its first series of chips, which analysts at SemiAnalysis described on Tuesday as "underwhelming" for generative AI training and used instead for "training non-complex" workloads within Amazon, such as credit card fraud detection.
"With the release of Trainium2, Amazon has made a significant course correction and is on a path to eventually providing a competitive custom silicon," the SemiAnalysis researchers wrote.
Trainium3, which AWS gave a preview of ahead of a late 2025 release, has been billed as a "next-generation AI training chip." Servers loaded with Trainium3 chips offer four times greater performance than those packed with Trainium2 chips, AWS said.
Matt Garman, the CEO of AWS, told The Wall Street Journal that some of the company's chip push is due to there being "really only one choice on the GPU side" at present, given Nvidia's dominant place in the market. "We think that customers would appreciate having multiple choices," he said.
It's an observation that others in the industry have noted and responded to. Google has been busy designing its own chips that reduce its dependence on Nvidia, while OpenAI is reported to be exploring custom, in-house chip designs of its own.
But having in-house silicon is just one part of this.
The supercomputer advantage
AWS acknowledged that as AI models trained on GPUs continue to get bigger, they are "pushing the limits of compute and networking infrastructure."
With this in mind, AWS shared that it was working with Anthropic to build an "UltraCluster" of servers that form the basis of a supercomputer it has named Project Rainier. According to Amazon, it will scale model training across "hundreds of thousands of Trainium2 chips."
"When completed, it is expected to be the world's largest AI compute cluster reported to date available for Anthropic to build and deploy their future models on," AWS said in a blog, adding that it will be "over five times the size" of the cluster used to build Anthropic's last model.
The supercomputer push follows similar moves elsewhere. The Information first reported earlier this year that OpenAI and Microsoft were working together to build a $100 billion AI supercomputer called Stargate.
Of course, Nvidia is also in the supercomputer business and aims to make them a big part of its allure to companies looking to use its next-generation AI chips, Blackwell.
AWS made no secret that it remains tied to Nvidia for now. In an interview with The Wall Street Journal, Garman acknowledged that Nvidia is responsible for "99% of the workloads" for training AI models today and doesn't expect that to change anytime soon.
That said, Garman reckoned "Trainium can carve out a good niche" for itself. He'll be wise to recognize that everyone else is busy carving out a niche for themselves, too.
Intel is reportedly considering Lip-Bu Tan and Matt Murphy for CEO after Pat Gelsinger's exit.
Gelsinger's departure follows Intel's struggles in the global chip market and stock decline.
Murphy leads Marvell, while Tan is a former Intel board member.
The contest to become Intel's new CEO is on, and two possible candidates' names have already leaked.
The American chipmaker is considering at least two people from outside the company to replace former CEO Pat Gelsinger, who abruptly retired on Sunday. Candidates include former board member Lip-Bu Tan and Marvell Technology CEO Matt Murphy, Bloomberg and Reuters reported Wednesday, citing people familiar with the matter.
After a clash over Gelsinger's plan to gain ground against rival chipmaker Nvidia, Intel's board gave the CEO the option to retire or step down, Bloomberg reported. Gelsinger, who took the role three years ago, has been temporarily replaced by co-CEOs: David Zinsner, who has been Intel's chief financial officer for nearly three years, and Michelle Johnston Holthaus, the new CEO of Intel Products.
His departure follows Intel's yearlong struggle to keep up with the global chip race. Intel has seen its share price drop almost 50% this year as it has faced multiple challenges, including billions in losses, sweeping layoffs, and buyouts.
Gelsinger's plans to revitalize the company included ambitions to build more factories in the US and Europe to scale its production capacity. He also wanted the company to design its own line of AI chips, named Gaudi, to take on Nvidia.
However, these efforts have proven expensive and have produced poor results. Last month, Gelsinger said the company was set to miss its target of $500 million in 2024 sales for Gaudi 3 due to software-related issues.
One board member and one outsider
The two CEO contenders reported so far have strong chipmaking backgrounds.
Semiconductor veteran Tan served on Intel's board between 2022 and this year, where he was on the mergers and acquisitions committee. He left the board in August, citing "demands on his time."
Tan is currently the chairman of Walden International, a venture capital firm. His prior board seats include SoftBank Group and Hewlett Packard Enterprise.
Murphy, meanwhile, does not have a public prior connection to Intel. He is the CEO of Marvell, an American semiconductor manufacturer that produces chips for data centers and service providers. He worked in sales and marketing for circuits producer Analog Devices for over two decades before joining Marvell and has served on the boards of eBay and the Global Semiconductor Alliance.
Marvell gained over 10% in after-hours trading on Tuesday after forecasting fourth-quarter revenue above estimates as it benefits from strong artificial intelligence chip demand. Its stock is up 59% this year.
"As the chairman and CEO of this company, I'm 100% focused on Marvell," Murphy, who has been in the position for eight years, said on Tuesday in an earnings call, when asked about being offered other opportunities.
Marvell has an $83 billion market capitalization and about 6,500 employees, as of 2024. Intel has a $97 billion market cap and about 131,000 employees, according to its website.
Representatives of Intel, Murphy, and Tan did not respond to requests sent outside business hours.
Intel's CEO departure reignited debate on splitting its factories from the company.
Intel's fabs are costly, but they're also considered vital for US national security.
CHIPS Act funding requires Intel to maintain majority control of its foundry.
One central question has been hanging over Intel for months: Should the 56-year-old Silicon Valley legend separate its chip factories, or fabs, from the rest of the company?
Intel's departing CEO, Pat Gelsinger, has opposed that strategy. As a longtime champion of the company's chip manufacturing efforts, he was reluctant to split it.
The company has taken some steps to look into this strategy. Bloomberg reported in August that Intel had hired bankers to help consider several options, including splitting off the fabs from the rest of Intel. The company also announced in September that it would establish its Foundry business as a separate subsidiary within the company.
Gelsinger's departure from the company, announced Monday, has reopened the question, although the calculus is more complicated than simple dollars and cents.
Splitting the fabs from the rest of its business could help Intel improve its balance sheet. It likely won't be easy since Intel was awarded $7.9 billion in CHIPS and Science Act funding, and it's required to maintain majority control of its foundries.
Intel declined to comment for this story.
A breakup could make Intel more competitive
Politically, fabs are important to Intel's place in the American economy and allow the US to reduce dependence on foreign manufacturers. At the same time, they drag down the company's balance sheet. Intel's foundry, the line of business that manufactures chips, has posted losses for years.
Fabs are immensely hard work. They're expensive to build and operate, and they require a level of precision beyond most other types of manufacturing.
Intel could benefit from a split, and the company maintains meaningful market share in its computing and traditional (not AI) data center businesses. Amid the broader CEO search, Intel also elevated executive Michelle Johnston Holthaus to CEO of Intel Products and the company's co-CEO. Analysts said this could better set up a split.
Regardless, analysts said finding new leadership for the fabs will be challenging.
"The choice for any new CEO would seem to center on what to do with the fabs," Bernstein analysts wrote in a note to investors after the announcement of Gelsinger's departure.
On one hand, the fabs are "deadweight" for Intel, the Bernstein analysts wrote. On the other hand, "scrapping them would also be fraught with difficulties around the product road map, outsourcing strategy, CHIPS Act and political navigation, etc. There don't seem to be any easy answers here, so whoever winds up filling the slot looks in for a tough ride," the analysts continued.
Intel's competitors and contemporaries are avoiding the hassle of owning and operating a fab. The world's leading chip design firm, Nvidia, outsources all its manufacturing. Its runner-up, AMD, experienced similar woes when it owned fabs, eventually spinning them out in 2009.
Intel has also outsourced some chip manufacturing to rival TSMC in recent years, which sends a negative signal to the market about its own fabs.
Intel is getting CHIPS Act funding
Ownership of the fabs and CHIPS Act funding are highly intertwined. Intel must retain majority control of the foundry to continue receiving CHIPS Act funding and benefits, a November regulatory filing said.
Intel could separate its foundry business while maintaining majority control, said Dan Newman, CEO of The Futurum Group. Still, the CHIPS Act remains key to Intel's future.
"If you add it all up, it equates to roughly $40 billion in loans, tax exemptions, and grants β so quite significant," said Logan Purk, a senior research analyst at Edward Jones.
"Only a small slice of the commitment has come, though," he continued.
Intel's fabs need more customers
Intel is attempting to move beyond manufacturing its own chips to becoming a contract manufacturer. Amazon has already signed on as a customer. Though bringing in more manufacturing customers could mean more revenue, it first requires more investment.
There's a more ephemeral reason Intel might want separation between its Foundry and its chip design businesses, too. Foundries regularly deal with many competing clients.
"One of the big concerns for the fabless designers is any sort of information leakage," Newman said.
"The products department competes with many potential clients of the foundry. You want separation," he added.
It was once rumored that a third party might buy Intel. Analysts have balked at the prospect for political and financial reasons, particularly since running the fabs is a major challenge.
Pat Gelsinger's successor has big problems to pick up.
The departing Intel CEO has struggled with a turnaround and left the company behind on AI.
Intel also faces an uphill battle in its bid to outdo its industry rival TSMC.
Intel's announcement that Pat Gelsinger is retiring has thrown the storied chip firm's future into deep uncertainty. Its interim leaders and its next CEO must pick up the pieces of a turnaround plan designed to fix a business in turmoil, play catch-up in a lucrative AI race, and navigate Donald Trump's second term.
The mission assigned to Gelsinger when he took over as CEO in 2021 was to restore the then-52-year-old company to relevancy. Its business of designing and manufacturing chips β once industry-leading β was struggling. It had to contend with Big Tech customers pushing forward with their own designs, and with production setbacks from manufacturing woes.
Now the challenges are even greater. It has failed to capitalize on the generative-AI boom that has enriched rivals like Nvidia and TSMC. It is also struggling to make the case that it's a national champion of US industry as chipmaking becomes increasingly critical to the nation's future prosperity.
In October, Intel announced a net loss of $16.6 billion in its third quarter, adding to a loss in its second quarter. The company announced it would suspend its dividend and reduce its head count by 15%. The market has not reacted kindly. The chip giant has lost half its value since the start of the year, sinking to a market capitalization of about $103 billion.
After Gelsinger's exit, which Bloomberg reported followed a clash with the board, it's on Intel's interim co-CEOs, David Zinsner and Michelle Johnston Holthaus, and its future leader to overcome these problems.
Closing the AI gap
In 2006, Intel's CEO at the time, Paul Otellini, famously turned down an offer from Steve Jobs to make chips for the iPhone. Intel's decision to shun the smartphone market is now being echoed in the AI boom.
"It hung on to PCs for too long and ignored what Nvidia was doing," said Peter Cohan, an associate professor of management practice at Babson College. "By the time Intel began to work on AI chips, it was too late, and the company had basically no market share."
Though Intel has tried to play catch-up, it has struggled to deliver. Analysts and researchers pointed to a few reasons.
Hamish Low, a research analyst at Enders Analysis, said Intel had issues during Gelsinger's tenure getting operations ready for the AI boom while dealing with the internal challenges of separating its foundry division from its design business.
"This long, drawn-out corporate process of trying to get your own house in order, when that's your focus, clearly generative AI just skipped right by," Low told Business Insider.
He added that Intel was long known as "the x86 CPU company," referring to its architecture for more general computer chips. The AI world runs on chips known as GPUs loaded into servers, so trying to shift focus while restructuring the business proved tough.
"When it suddenly is GPUs and accelerated computers, it was always going to be a tough challenge to pivot into doing those kinds of server GPU chips," he said.
Intel has felt the pains of this pivot. Its line of AI chips, known as Gaudi, has paled in comparison with offerings from competitors like Nvidia. In an October earnings call, Gelsinger said the company would "not achieve our target of $500 million in revenue for Gaudi in 2024."
"They're in this awkward position where their server-side chips are just too subscale to ever really gain meaningful market share," Low said. "It's hard to see who the real customers for those would be."
Daniel Newman, the CEO of the research firm The Futurum Group, argued that Intel struggled because it didn't "count on the democratization of silicon." Companies like Microsoft and Google have been designing their own chips, further limiting Intel's market.
"They all went down the path of making their own investments and bets on silicon, so what was left was this second-tier enterprise market," Newman told BI. "If you look at the enterprise market, they're not buying a lot of AI silicon yet."
Making the case as a national champion
The other big challenge facing Intel's next leaders is proving that the company can be a national champion of US chipmaking. That won't be straightforward.
Doing so would require strengthening its fab-manufacturing business. "Standing up a successful fab is not an overnight thing," Newman said. "This is a multiyear process."
A lot of the challenge, he said, is that success would require "quite a bit of customer acquisition" from those who have seemed reluctant to shift high-volume production to Intel. As it stands, Intel is its own biggest customer for chip manufacturing.
That's because everyone else largely turns to the Taiwanese giant TSMC, whose market value has grown by more than 83% this year, hitting $1 trillion. Newman acknowledged that it's unclear whether this is because TSMC is technologically superior. He sees a new Intel process called 18A as being competitive, following news in September that Amazon Web Services was adopting it for a particular chip.
What's more likely, he said, is that TSMC's customers think, "TSMC's not broken, why fix it?" If Intel wants to get serious about building a leading manufacturing business, it'll need to find a way to take on TSMC.
The president-elect has emphasized a protectionist policy. Given the importance of chipmaking to US industry and national security, he could look to throw weight behind Intel.
How that might happen is unclear. Trump has criticized the CHIPS Act, which is set to give Intel $7.9 billion in grants to boost its domestic manufacturing capabilities. Trump has said he'd prefer tariffs as a tool to incentivize chip manufacturing on US soil.
In September, Intel announced plans to spin off its manufacturing unit into its own subsidiary. But not everyone is convinced these moves will be enough.
Cohan told BI he thinks "it is highly unlikely that Intel can be more successful than TSMC in making chips" without significant support from the US government.
"That company lacks the capital to do that on its own, and its knowledge of how to make Nvidia chips is way behind TSMC's," he said. "Why would Nvidia even choose to give up its relationship with TSMC for a less successful rival?"
At its re:Invent conference, AWS today announced the general availability of its Trainium2 (T2) chips for training and deploying large language models (LLMs). These chips, which AWS first announced a year ago, will be four times as fast as their predecessors, with a single Trainium2-powered EC2 instance with 16 T2 chips providing up to 20.8 […]
Tenstorrent just closed its latest funding round, valuing the company at about $2.6 billion.
The startup computing company aims to rival Nvidia with more affordable AI chips and processing.
The nearly $700 million round attracted investors Samsung, Bezos Expeditions, and LG Electronics.
In its latest funding round, Tenstorrent, a startup computing company that builds powerful AI hardware and software to compete with Nvidia, attracted big-name investors β including Jeff Bezos and Samsung.
A company statement released Monday said its Series D funding round raised $693 million, valuing the AI chip startup at about $2.6 billion, per Bloomberg. Samsung Securities and AFW Partners, a venture capital investment firm based in Seoul, led the round, along with Bezos Expeditions, LG Electronics, and Hyundai Motor Group, among other investors, Tenstorrent announced.
"We are excited by the breadth of investors that believe in our vision," Tenstorrent COO Keith Witek said in the statement. "If you look at this group, you see a balance of financial investors and strategic investors, as well as some notable individuals that have conviction in our plans for AI. They respect our team, our technology, and our vision. They see the ~$150M in deals closed as a strong signal of commercial traction and opportunity in the market."
Tenstorrent was founded in 2016 by Ljubisa Bajic, Ivan Hamer, and Milos Trajkovic. In 2020, Jim Keller, a prolific microprocessor engineer known for his work at Apple and Tesla, joined the company as its chief technology officer and became CEO in 2023. The operation, with 10 offices worldwide, builds AI hardware, offers open-source software for chip builders, and licenses products to clients who want to design their own silicon.
While still a fraction of the size of Nvidia, Tenstorrent aims to siphon off a portion of the chipmaking giant's massive market share by offering increased interoperability with other tech providers using an open-source approach that relies on more commonplace technology, Bloomberg reported.
Tenstorrent advocates the use of an open standard instruction set architecture called RISC-V. Designed by computer scientists at the University of California, Berkeley, RISC-V defines how software controls the CPU in a computer and is offered under royalty-free open-source licenses.
Nvidia's approach has instead focused more on the proprietary, from its chips to specific data center layouts, making it difficult for some of Nvidia's customers to switch to chips from competing companies without incurring tremendous costs.
"In the past, I worked with proprietary tech, and it was really tough," Keller told Bloomberg. "Open source helps you build a bigger platform. It attracts engineers. And yes, it's a little bit of a passion project."
A spokesperson for Nvidia declined to comment. Representatives for Tenstorrent, Samsung, and Bezos Expeditions did not immediately respond to requests for comment from Business Insider.
Joe Biden's final move to stop China from racing ahead of the US in AI may be too little too late, reports say.
On Monday, the Biden administration announced new export controls, perhaps most notably restricting exports to China of high-bandwidth memory (HBM) chips used in AI applications. According to Reuters, additional export curbs are designed to also impede China from accessing "24 additional chipmaking tools and three software tools," as well as "chipmaking equipment made in countries such as Singapore and Malaysia."
Nearly two dozen Chinese semiconductor companies will be added to the US entity list restricting their access to US technology, Reuters reported, alongside more than 100 chipmaking toolmakers and two investment companies. These include many companies that Huawei Technologies, one of the biggest targets of US export controls for years, depends on.
The US has introduced new export controls on China's semiconductor industry, targeting 140 firms.
They aim to curb China's AI and defense-tech growth, partly because of national security concerns.
Trump has called China the "main threat" to the US AI industry.
The US announced on Monday another round of export controls on China's semiconductor industry, multiple outlets reported, weeks ahead of Donald Trump's second term.
Washington plans to restrict exports to 140 Chinese companies, including the chip-equipment heavyweight Naura Technology Group, to curb China's growing capabilities in artificial intelligence and defense technology.
It is the third crackdown the US has initiated on China's chipmaking industry since October 2022. The move is set to also stop the export of advanced high-bandwidth memory, a key component in the development of AI chips in China.
The chip manufacturers Semiconductor Manufacturing International Corp. and Huawei are also on the list of 140 companies.
"They're the strongest controls ever enacted by the US to degrade the People's Republic of China's ability to make the most advanced chips that they're using in their military modernization," Gina Raimondo, the US's commerce secretary, told reporters from the Financial Times, The New York Times, and others on Sunday.
The US has long been embroiled in a technological race against China, which is developing AI and military tech at pace.
The Biden administration's latest sanctions are partly driven by national security concerns that China's access to high-quality chips could allow it to bolster its military applications, especially through the use of AI.
Last month, Reuters reported that researchers in China affiliated with the People's Liberation Army had used Meta's open-source AI model Llama to develop an AI tool that could be applied to military use cases.
While this latest wave of measures is from the Biden administration, Beijing reportedly anticipates further sanctions from Trump. China has been attempting to stockpile chips from the US in recent months, with purchases reaching $1.11 billion in October, an analysis of customs data from the South China Morning Post found.
Previous comments from Trump suggest he's aligned with the Biden administration when it comes to thwarting China's AI growth. In a June interview with the social-media personality Logan Paul, Trump billed China as the "main threat" to the US AI industry. "We have to be at the forefront," he said.
There are still some internal disagreements about the approach to restricting Huawei chip-production facilities, the Financial Times said. Some of the Chinese tech giant's chip-production plants were not included on the list, with one person close to the discussions telling the FT that they were not in operation, so it's not clear whether they would be used for the production of advanced chips.
Mao Ning, a spokesperson for China's foreign ministry, said at a press conference earlier this month, "China is firmly opposed to the US overstretching the concept of national security, abusing export control measures, and making malicious attempts to block and suppress China."
The US Department of Commerce didn't immediately respond to a request for comment from Business Insider made outside normal working hours.