Why the world's largest contract chipmaker isn't concerned about AI chip competition

- TSMC, the world's largest contract chipmaker, says it's benefiting from strong AI demand.
- While Nvidia dominates the AI space with its GPUs, there are competitors, including ASICs.
- TSMC's CEO, CC Wei, said the company makes both GPUs and ASICs.
As competition heats up in the AI chip race between graphics processing units and application-specific integrated circuits, Taiwan Semiconductor Manufacturing Company isn't bothered.
As the world's largest contract chipmaker, it makes them all, said CEO CC Wei.
"I look left and right, and no matter how I look at it, the orders are all from TSMC," Wei told local media on Tuesday after a shareholder meeting in the northern Taiwanese city of Hsinchu.
Last month, TSMC reported a 42% rise in first-quarter revenue from a year ago. It was the chip giant's fastest pace of growth since 2022.
Instead of competitors, it's the indirect impact of President Donald Trump's tariffs that concerns TSMC's CEO.
"Tariffs have some impact on TSMC, but not directly. That's because tariffs are imposed on importers, not exporters," Wei said.
However, tariffs can lead to slightly higher inflation, which may dampen end demand and affect TSMC.
"I am not afraid of anything. I am only afraid of the global economy slowing," he said.
Even so, demand for AI chips has been robust and outpacing supply, he said. In March, TSMC announced an additional $100 billion investment in the US to boost American chipmaking.
Wei said TSMC expects the company's revenue to grow in the mid-20% range this year, with a "record profit" even amid the tariff uncertainty and a recent sharp rise in the Taiwan dollar, which would weigh on its gross margin.
The tech sector's demand for AI has been strong, as evidenced by Nvidia's first-quarter earnings report last week. The tech titan's revenue beat Wall Street's expectations even amid US export restrictions on chip sales to China.
Nvidia's share price is up 5.2% so far this year after surging nearly 25% over the last month alone.
However, TSMC's share prices have been weighed down by macroeconomic and tariff concerns. The Taiwan-listed stock is down about 8% this year. TSMC American Depositary Receipts are little changed this year.
US imposes new rules to curb semiconductor design software sales to China
AMD strikes a deal to sell ZT Systems' server-manufacturing business for $3B
Trump can't keep China from getting AI chips, TSMC suggests
As the global artificial intelligence (AI) race presses on amid a US-China trade war, Taiwan Semiconductor Manufacturing Company (TSMC), a $514 billion titan that manufactures most of the world's AI chips, is warning that it may not be possible to keep its customers' most advanced technology out of China's hands.
US export controls require chipmakers to monitor shipments and know their customers to restrict China's access to AI chips. But in a recently published 2024 report, TSMC confirmed that its "role in the semiconductor supply chain inherently limits its visibility and information available to it regarding the downstream use or user of final products that incorporate semiconductors manufactured by it."
Essentially, TSMC says it sits too far upstream in the semiconductor supply chain to stop every possible unintended end use of the semiconductors it manufactures. Similarly, it appears impossible to track every third party determined to skirt sanctions. And if TSMC's hands are truly tied, that ultimately means the US can't effectively stop the latest AI tech from trickling into China.
Latest News
- Nvidia is the original hardcore tech company. Alumni say CEO Jensen Huang's demanding pace reigns.

- Nvidia has a reputation for a demanding work culture.
- Yet it has stayed relatively flexible as other tech firms return to the office and cut perks.
- Huang's fast pace, long days, and streamlined five-point emails help to maintain accountability.
Nvidia doesn't need a big cultural shift to get workers to be hardcore. They've been there for years.
Companies like Shopify, Microsoft, and Meta are ramping up the intensity for workers, citing the need to get ahead in AI and drive efficiency. The shift inside tech companies has led to a culling of "low performers," inflexible return-to-office mandates, and a reduction in perks.
Nvidia's staff has grown immensely in the past few years, and its market capitalization is on a wild roller coaster ride. But the tentpoles of the company's culture go back much further than the AI boom. The company and its CEO, Jensen Huang, are also the subject of two books released in the past four months, which corroborate what former Nvidians have told Business Insider.
Nvidia has a demanding work culture that trickles down from its famous CEO, providing a foil for the tech firms that aspire to be hardcore but do so by fiat.
"Basically, every single person in Nvidia is directly accountable to Jensen," said Stephen Witt, the author of "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip."
Nvidia declined a request for comment from BI.
'The mission is the boss'
Nvidia has an extremely horizontal structure, with dozens of people (about 60) reporting directly to Huang.
He sets the direction and the goal, but the Santa Clara, California-based company also has a defining mantra: "The mission is the boss," author Tae Kim wrote in his book "The Nvidia Way: Jensen Huang and the Making of a Tech Giant."
Nvidia shies away from short-term goals, Kim said. There is a central goal or mission, but planning and strategizing are constant processes that don't focus on management incentives or satisfying a hierarchy.
Project leaders may suddenly find themselves reporting directly to Huang. These newly anointed direct reports are dubbed "pilots in charge" and are subject to his wrath and carry his weight, Kim said.
According to a former Nvidia employee who asked to remain anonymous to discuss internal matters, everyone in the company must be prepared to answer Huang in detail.
"His ability to track small details across countless projects is incredible," a former director told BI.
This method of extreme accountability means Nvidia hasn't had to rein in employees as many other companies have post-pandemic. Nvidia is still remote-friendly, for example. But meetings are far from relaxing.
Huang is known to publicly discuss failures and disagreements to benefit the group rather than spare feelings. If he suspects someone isn't on top of their work, a public cross-examination may ensue. Perks are few, but that's always been the case, two former Nvidians told BI.
The "mission is boss" ethos helps Nvidia avoid the pitfalls of large firms, which often struggle to make quick decisions, let alone pivot when needed, Kim wrote.
"Jensen really doesn't tolerate bullshit," a former engineer from Nvidia's early days told BI. This intolerance makes playing politics nearly impossible, they said.
"It's not just, 'You did something wrong.' It's, 'You did something wrong that was self-serving'; that's the typical problem in big companies," they said.
The philosophy is that the mission can change, but as long as everyone serves it rather than their manager, the company should thrive. Nvidia's pivot to focus on machine learning was even communicated in a companywide Friday-night email in 2014. By Monday, Nvidia was an AI company, Witt wrote.
Email accountability
Huang is known to send more than 100 emails a day, which brings another Jensen-ism into play. (Kim's book has an entire appendix of "Jensen-isms.")
The 62-year-old CEO often refers to Nvidia's modus operandi as "speed of light." That's how fast Huang wants everything at Nvidia to progress. He's publicly used the phrase to refer to everything from hiring processes to fixing technical problems.
Witt thinks that Nvidia's email culture was possibly an inspiration for a memorable moment from the early days of DOGE. On a Saturday, Elon Musk requested that every federal government employee send a five-point email recounting what they had done that week. Jensen Huang has requested these emails from his staff since 2020.
According to Kim, at least 100 of Huang's daily diet of emails are "top five" emails.
'Nowhere to hide'
The irony of Nvidia's position among the Silicon Valley elite is that the "mission is boss" mentality and the constant email status updates give many Nvidians a flexibility that most of Big Tech, including Nvidia's largest customers, has abandoned in 2025.
The hours can be long at Nvidia, which also stems from Huang. Sixty-hour weeks are the norm, and 80-hour weeks are likely at crucial times, offering contrast to companies that feel the need to delineate exact office hours.
"I don't even know when Jensen sleeps," another former Nvidia director said.
Many Nvidians are still able to work from wherever they like. The reason is two-fold, Witt said.
"One of the reasons he's so big on work-from-home is because it gives women, and especially young mothers, the opportunity to continue their work without their careers getting interrupted," Witt said.
Inspired by his wife, Lori Huang, a brilliant electrical engineer who dropped out of the workforce after becoming a mother of two, Huang is aware that some valuable engineering brains find balancing work and family difficult.
"It works really well at Nvidia," Witt said. "You know if you're dropping the ball at Nvidia, the spotlight is turning directly at you, more or less instantly. There is nowhere to hide if you are shirking your work at Nvidia, and I think that makes work-from-home work better for them."
Nvidia for life
If there's one hallmark of the new era of hardcore tech culture, it's layoffs. Rolling layoffs are constantly whirring in the background of tech workplaces in 2025.
That's where Nvidia fully diverges from the pack.
The company hasn't had layoffs since 2008, and despite the hard-charging atmosphere rife with accountability, turnover at the company is minuscule: under 5% annually for the past two years.
Witt said that's in part due to a self-selection dynamic. Engineers who like a no-nonsense atmosphere where technological supremacy is the focus naturally gravitate toward the company.
"He can get these guys to work for Nvidia on little more than a dream, but those guys will do it because they know the circuits, they know the technology. And they know that Jensen's always at the cutting edge, even if it's not making money. They'll do anything to be at the cutting edge," Witt told BI.
But another reason many Nvidians spend decades with the company could come from Huang's competitive anxiety.
"When Nvidia is evaluating an engineer, they won't think just about what they're worth. They'll think about what it's worth to keep that person away from the competition," Witt said.
Huang, though, has offered a different explanation.
"I don't like giving up on people because I think that they could improve," Huang said at a Stripe event last year. "It's kind of tongue in cheek, but people know that I'd rather torture them into greatness."
- AMD's CTO says AI inference will move out of data centers and increasingly to phones and laptops

- AMD sees the AI inference shift as a chance to grab market share from Nvidia.
- AI inference will move from data centers to edge devices, like smartphones, AMD's CTO says.
- Mark Papermaster expects an AI 'killer app' in the next three to six years.
The shift of the lion's share of artificial intelligence workloads from training to inference is great news for AMD, its CTO said.
AI training workloads, the gargantuan task of building large language models and imbuing them with knowledge and a familiar writing or speaking style, used to make up most of what AI computing was used for. Inference is the computing process that happens when AI generates outputs, like answering questions or creating images.
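The two phases can be sketched in a few lines of code. This is a deliberately tiny illustration: a one-parameter linear model stands in for a real network, and every name and number below is invented for the example.

```python
# Toy illustration of training vs. inference using a one-parameter
# linear model (y = w * x). Real AI workloads run the same two phases
# at vastly larger scale; all names and numbers here are illustrative.

def train(data, lr=0.01, epochs=200):
    """Training: many passes over the data, updating the weight each step."""
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # gradient of squared error
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: a single cheap forward pass with frozen weights."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on y = 2x
w = train(data)       # expensive, done once up front
print(infer(w, 4.0))  # cheap, repeated for every request; close to 8.0
```

Training loops over the data many times and is compute-hungry; inference is one multiply per request, which is why it can migrate to smaller devices.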
It's hard to pin down exactly when the switch happened, probably some time last year. But inference is now, and will likely remain, the largest segment of accelerated computing. Since the shift, AMD executives have been hyping a window of opportunity to wrest market share from Nvidia.
"People like the work that we've done in inference," CEO Lisa Su said on the company's February earnings call.
AI at scale is all about inference.
If you ask Mark Papermaster, AMD's Chief Technology Officer, where it all goes from there, he'll tell you that as inference grows, it's headed for the edge.
"Edge devices" are the industry term for computers that live outside the data center. Our phones and laptops all qualify, but so could smart traffic lights or sensors in factories. Papermaster's job is to make sure AMD is headed in the right direction to meet the demand for AI computing across devices as it grows.
AMD has had to play catch-up in the data center, given Nvidia's 10-year head start. But at the edge? The field is more open.
Business Insider asked Papermaster what he thinks the future of handheld AI looks like.
This Q&A has been edited for clarity and length.
What's the most prominent use for AI computing in edge devices like laptops and phones?
The use case you're starting to see is local, immediate, low-latency content creation.
Why do we use PCs? We use them to communicate, and we use them to create content. As you and I are talking (this is a Microsoft Teams event), AI is running underneath this. I could have a correction on it such that if I look side to side, you just see me centered. That's an option. I can hit automatic translation: you could be in Saudi Arabia and not speak any English, and we could have simultaneous translation once these things become truly embedded and operational, which is imminent.
It's truly amazing what's coming because just locally on your PC, you'll be able to verbally describe: 'Hey, I'm building a PowerPoint. I need this. I need these features. I'm running Adobe. This is what I want.'
Today, I've got to go back to the cloud. I've got to run the big, heavy compute. It's more expensive and it takes more time.
That's the immediate example that's front and center, and this is why we've invested heavily in AI PCs. That's imminent from Microsoft and others in the next six months.
The other application that we're already seeing is autonomous anything. It starts with cars, but it's way beyond cars. It's the autonomous factory floor.
OK, say it's 2030: how much inference is done at the edge?
Over time, it'll be a majority. I can't say when the switchover is because it's driven by the applications: the development of the killer apps that can run on edge devices. We're just seeing the tip of the spear now, but I think this moves rapidly.
You might consider phones as an analogy. Phones were just a nice assist until the App Store came out and made it really easy to create a ton of applications on your phone.
Now, things that used to always be done with more performant computing could be done more locally. Things that were done in the cloud could be done locally. As we start to get killer applications, we're going to start to see that shift go very rapidly. So it's in the next three to six years, no doubt.
I keep running into examples that suggest the way models are getting better is to just keep piling on more inference compute.
How do you know that three years from now, there's not going to be some breakthrough that makes all these devices being designed now completely out of date?
Everything you're describing is to gain even more capability and accuracy. It doesn't mean that what we have is not useful. It's just going to be constantly improving, and the improvement goes into two vectors.
One vector is becoming more accurate. It can do more things, and typically drives more compute. There's an equal vector that runs in parallel, saying, 'How could I be more optimized?'
I call it the DeepSeek moment. It sort of shook the world. Now you have everybody (Microsoft, Meta, Google) making their models more efficient. So you have both examples where it's taking more and more compute and examples where there's innovation driving more and more efficiency. That's not going to change.
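One concrete example of the efficiency vector Papermaster describes is weight quantization, storing model weights in fewer bits so they fit on edge devices. The sketch below is a toy version with invented values; it is not how any of the named companies actually implement it.

```python
# Toy sketch of one optimization lever behind the efficiency trend
# described above: quantizing float weights to 8-bit integers.
# Illustrative only; production systems use far more elaborate schemes.

def quantize(weights):
    """Map float weights to int8 values plus one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats from the int8 representation."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]  # pretend model weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Storage per weight drops from 4 bytes (float32) to 1 byte (int8),
# at the cost of a rounding error bounded by scale / 2.
error = max(abs(a - b) for a, b in zip(weights, restored))
print(q, error)
```

Shrinking each weight to a quarter of its size cuts memory and bandwidth needs roughly fourfold, which is exactly the kind of optimization that lets models run outside the data center.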
- Nvidia could be hit hard by the new chip export license. Analysts warn the big decision is still to come.

- Nvidia faces new export license rules for selling chips to China and other countries.
- The Trump administration's decision could impact Nvidia's revenue. Its stock sank Wednesday.
- Analysts predict this move by the Trump administration could bring better news in the near future.
Wall Street analysts had some choice words for the latest shake-up to Nvidia's regulatory landscape: "disruptive," "surprise," and "abrupt," just to name a few. Bernstein analysts went so far as to say "The Trump rug remains in full effect."
New rules regarding Nvidia's Chinese business surprised many company stakeholders this week. On Tuesday, after market close, the company announced that it had been informed that the Trump administration would require a new license for all accelerated chips shipping to China and a small group of other countries including Russia.
Nvidia said it would take a charge of up to $5.5 billion in inventory, purchase commitments, and reserves in the first quarter, which ends on April 27.
"Based on our discussions, this is effectively a ban," wrote UBS analysts in a note to investors Tuesday.
Even those analysts unwilling to read the disclosure as a full-on ban said any licensing process is likely to be lengthy, so revenue from Nvidia's H20 chip, the one the company designed specifically to meet Biden-era export restrictions, is expected to be minimal for the foreseeable future.
"This is not a ban; it's a licensing requirement, but again, the inventory write-down suggests that the company is not optimistic about being granted licenses," Morgan Stanley analysts wrote.
At the time of writing, the regulation has not appeared in the Federal Register or on the Department of Commerce website, so all analyst reactions are based on Nvidia's disclosure. The company's stock was down more than 7% from Tuesday to Wednesday's market close.
A spokesperson from Nvidia declined to comment.
China chips are big money for Nvidia
Nvidia priced the charges it will likely incur in the first quarter (ending April 27) at $5.5 billion. However, there was no warning about the company's first-quarter results, which will be announced on May 28. Though China sales will almost certainly be lower than expected, several analysts expect the company may still be able to meet revenue targets for the first quarter.
"Given the strong demand for H200 chips since DeepSeek's launch, we think NVDA could offset somewhat lost China H20 revenues," BNP Paribas analysts wrote. The same analysts estimated Nvidia's China data center business constitutes 10% to 12% of Nvidia's total revenue.
UBS suggested earnings per share would fall by 20 cents, and Morgan Stanley analysts expect 8% to 9% of data center revenues to disappear in the near term.
A decrease in ongoing sales to China was already expected.
Restrictions on what the company is allowed to sell to China are not new, so the company has attempted to reduce its reliance on that market over the past two years. Since the H20 is only relevant to the Chinese market, oversupply won't affect Nvidia's sales of any other chips, Morgan Stanley wrote.
Bigger decisions down the road
Because Nvidia's chips are the most expensive in existence and buyers keep lining up, tariffs are less of a concern than export restrictions, Morgan Stanley analysts said. Beyond the Chinese market, there are bigger potential impacts looming.
The Biden administration's AI diffusion rules are set to go into effect next month and could have an even more material impact on Nvidia if enacted as is, since they restrict exports to many more countries, such as Singapore, Mexico, Malaysia, the UAE, Israel, Saudi Arabia, and India.
Since the White House and Nvidia have demonstrated some cooperation this week, with the Trump administration celebrating the company's announcements around expanded US manufacturing, analysts have converged around a theory about what comes next.
"We are optimistic that the company's demonstrably good relationship with the government, as Trump tweeted yesterday, will mitigate these concerns," Morgan Stanley analysts wrote.
Latest Tech News from Ars Technica
- Trump threatens to spike chipmakers' costs by billions as China mulls exemptions
The semiconductor industry is bracing to potentially lose more than $1 billion once Donald Trump announces chip tariffs.
Two sources familiar with discussions between chipmakers and lawmakers last week told Reuters that Applied Materials, Lam Research, and KLA, three of the largest US chip equipment makers, could each lose about "$350 million over a year related to the tariffs." That adds up to more than $1 billion in losses among the three, and smaller firms will likely face similarly spiked costs, with losses estimated in the tens of millions.
Some chipmakers are already feeling the pain of Trump's trade war, despite a 90-day pause on reciprocal tariffs and a tenuous exception for semiconductors and other electronics.
Nvidia's H20 AI chips may be spared from export controls, for now
- Jensen Huang shot down comparisons to Elon Musk and yelled at his biographer. The author told BI what Huang is like.

- Nvidia CEO Jensen Huang likes to conduct intense, public examinations of his team's work.
- Stephen Witt's book about Huang and Nvidia debuted in the US on Tuesday.
- Witt experienced Huang's ire when he brought up the more sci-fi-adjacent potential for AI.
At Nvidia, getting a dressing down from CEO Jensen Huang is a rite of passage.
The CEO has confirmed this in interviews before, but the writer Stephen Witt can now speak from experience. Witt is the author of "The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip," which chronicles the CEO's life and career and Nvidia's historic rise from a background player to a star of the AI revolution.
Witt describes a fair bit of yelling throughout Nvidia's history.
The company's culture is demanding. Huang prefers to pick apart the team's work in large meetings so that the whole group can learn. Witt's book delves into not just what Nvidians have done but how they think, or don't think, about what their inventions will bring in the grander scheme of history.

In the final scene of the book, which was already available in Asia and was released in the US on Tuesday, Witt interviewed Huang last year in a room covered in whiteboards detailing Nvidia's past and future. Huang was visibly tired, having just wrapped up the company's nonstop annual conference. After a series of short, curt responses, Witt played a clip from 1964 of the science-fiction writer Arthur C. Clarke musing that machines will one day think faster than humans, and Huang's demeanor changed entirely.
Witt wrote that he felt like he had hit a "trip wire." Huang didn't want to talk about AI destroying jobs, continue the interview, or cooperate with the book.
Witt told Business Insider about that day and why Huang sees himself differently than other tech titans like Tesla's Elon Musk and OpenAI's Sam Altman. Nvidia declined to comment.
This Q&A has been edited for clarity and length.
At the end of the book, Huang mentions Elon Musk and the difference between them. You asked him to grapple with the future that he's building. And he said, "I feel like you're interviewing Elon right now, and not me." What does that mean?
I think what Jensen is saying is that Elon is a science-fiction guy. Almost everything he does kind of starts with some science-fiction vision or concept of the future, and then he works backward to the technology that he'll need to put in the air.
In the most concrete example, Elon wants to stand on the surface of Mars. That's kind of a science-fiction vision. Working backward, what does he have to do today to make that happen?
Jensen is exactly the opposite. His only ambition, honestly, is that Nvidia stays in business. And so he's going to take whatever is in front of him right now and build forward into the farthest that he can see from first principles and logic. But he does not have science-fiction visions, and he hates science fiction. That is actually why he yelled at me. He's never read a single Arthur C. Clarke book; he said so.
He's meeting Elon Musk, Sam Altman, and other entrepreneurs in the middle. They're coming from this beautiful AGI future. Jensen's like, "I'm going to just build the hardware these guys need and see where it's going." Look at Sam Altman's blog posts about the next five stages of AI. It's really compelling stuff. Jensen does not produce documents like that, and he refuses to.
So, for instance, last month, Musk had a livestreamed Tesla all-hands where he talked about the theory of abundance that could be achieved through AI.
Exactly. Jensen's not going to do that. He just doesn't speculate about the future in that way. Now, he does like to reason forward about what the future is going to look like, but he doesn't embrace science-fiction visions. Jensen's a complicated guy, and I'm still not completely sure why he yelled at me.
This is hard to believe, but I guarantee you it is true. He hates public speaking, he hates being interviewed, and he hates presenting onstage. He's not just saying that. He actually β which is weird, because he's super good at it β hates it, and he gets nervous when he has to do it. And so now that GTC has become this kind of atmosphere, it really stresses him out.

Earlier in the book, Huang flippantly told you that he hopes he dies before the book comes out. The comment made me think about who might succeed 62-year-old Huang. Did you run into any concrete conversations about a succession plan?
He can't do it forever, but he's in great shape. He's a bundle of energy. He's just bouncing around. For the next 10 years, at least, it's going to be all Jensen.
I asked them, and they said they have no succession plan. Jensen said: "I don't have a successor."
Jensen's org chart is him and then 60 people directly below him. I say this in the book: he doesn't have a second in command. I know the board has asked this question. They didn't give me any names.
You describe in the book how you were a gamer and used Nvidia graphics cards until you very consciously stopped playing out of worry you were addicted. Did Nvidia just fall off your radar for 10 to 15 years after that? How did you end up writing this book?
This is an interesting story. I should have put this in the book. I bought Nvidia stock in the early 2000s and then sold it out of frustration when it went up.
I basically mirrored [Nvidia cofounder] Curtis Priem's experience and sold it in 2005 or 2006, which looked like a great trade for seven years because it went all the way back down. I was like, "Oh, thank God I sold that," because it went down another 90% after that.
I probably broke even or lost a small amount of money. I have worked in finance, and one of the counterintuitive things people don't understand about finance is that the best thing you can do for your portfolio is sell your worst-performing stock, because you get tax advantages.
So I was aware of the sunk-cost fallacy, and it looked like a great trade. Then I paid no attention to the company for 17 years. It wasn't until ChatGPT came along that I even paid attention to them coming back. And I was like, wait: what's going on with Nvidia? Why is this gaming company up so much? I started researching, and I realized these guys had a monopoly on the back end of AI.
I was like, "Oh, I'll just take Jensen and pitch him to The New Yorker." I honestly thought the story would be relatively boring. I was shocked at what an interesting person Jensen is. I thought for sure when I first saw the stock go up, they must have some new CEO doing something interesting.
To my great surprise, I learned that Jensen was still in charge of the company and in fact, at that point, was the single longest-serving tech CEO in the S&P 500.
I was like, it's the same guy? Doing the same thing? And then Jensen was so much more compelling of a character than I ever could hope for.
- Building AI is about to get even more expensive — even with the semiconductor tariff exemption

- Most semiconductors are tariff-exempt, but full products with chips inside may face tariffs.
- Without supply chain shifts, Nvidia's non-exempt AI products could see cost impacts from tariffs.
- On-shoring assembly of chips may mitigate tariff effects, but increase costs.
Most semiconductors, the silicon microchips that run everything from TV remote controls to humanoid robots, are exempt from the slew of tariffs rolled out by the Trump administration last week. But that's not the end of the story for the industry, which also powers the immense, US-led shift in computing toward artificial intelligence that's already underway.
Roughly $45 billion worth of semiconductors (based on 2024 totals gathered by Bernstein) remain tariff-free, $12 billion of which comes from Taiwan, where AI chip leader Nvidia manufactures. But the AI ecosystem requires much more than chips alone.
Data centers, and the myriad materials and components required to generate depictions of everyone as an anime character, are not exempt. For instance, an imported remote-controlled toy car with chips in both the car and its controller would need an exception for toys to avoid fees.
"We still have more questions than answers about this," wrote Morgan Stanley analysts in a note sent to investors Thursday morning. "Semiconductors are exempt. But what about modules? Cards?"
As of Friday morning, analysts were still scratching their heads as to the impact, despite the exemption.
"We're not exactly sure what to do with all this," wrote Bernstein's analysts. "Most semiconductors enter the US inside other things for which tariffs are likely to have a much bigger influence, hence secondary effects are likely to be far more material."
AI needs lots of things that aren't exempt
Nvidia designs chips and software, but what it mainly sells are boards, according to Dylan Patel, chief analyst at Semianalysis. Boards contain multiple chips, as well as power-delivery controls and other components needed to make them work.
"On the surface, the exemption does not exempt Nvidia shipments as they ship GPU board assemblies," Patel told Business Insider. "If accelerator boards are excluded in addition to semiconductors, then the cost would not go up much," he continued.
These boards are just the beginning of what goes into the bumper crop of AI data centers in the works right now. Server racks, steel cabinets, and all the cabling, cooling gear, and switches that manage data flow and power are mostly imported.
A spokesperson for AMD, which, like Nvidia, produces its AI chips in Taiwan, told BI the company is closely monitoring the regulations.
"Although semiconductors are exempt from the reciprocal tariffs, we are assessing the details and any impacts on our broader customer and partner ecosystem," the spokesperson said in an email statement.
Nvidia declined to comment on the implications of the tariffs. But CEO Jensen Huang got the question from financial analysts last month at the company's annual GTC conference.
"We're preparing and we have been preparing to manufacture onshore," he said. Taiwanese manufacturer TSMC has invested $100 billion in a manufacturing facility in Arizona.
"We are now running production silicon in Arizona. And so, we will manufacture onshore. The rest of the systems, we'll manufacture as much onshore as we need to," Huang said. "We have a super agile supply chain, we're manufacturing in so many different places, we could shift things," he continued.
In addition to potentially producing chips in the US, it's plausible that companies, including Nvidia, could do more of their final assembly in the US, Chris Miller, the author of "Chip War" and a leading expert on the semiconductor industry told BI. Moving the later steps of the manufacturing process to un-tariffed shores, which right now include Canada and Mexico as well as the US, could theoretically allow these companies to import bare silicon chips and avoid levies. But that change would come with a cost as well, Miller said.
With retaliatory tariffs rolling in, US manufacturers could find tariffs weighing down demand in international markets too.
Supply chain shifts and knock-on effects
Semiconductor industry veteran Sachin Gandhi just brought his startup Retym out of stealth mode last week, with a semiconductor that helps data move between data centers faster. Technology like his has been most relevant to the telecom industry for decades and is now finding new markets in AI data centers.
Retym's finished product is exempt from tariffs when it enters the US, but the semiconductor supply chain is complex. Products often cross borders while being manufactured in multiple steps, packaged, tested, and validated, and then shipped to the final destination.
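As a rough illustration of how those cross-border hops can add up, here is a toy calculation. All rates and costs below are hypothetical, chosen only to show the mechanics; they are not drawn from the article:

```python
# Toy model (hypothetical numbers): how a component's landed cost can grow
# when tariffs apply at several border crossings during a multi-step
# manufacturing flow (fabricate -> package/test -> final assembly).

def landed_cost(base_cost: float, tariff_rates: list[float]) -> float:
    """Apply each crossing's tariff to the running value of the part."""
    cost = base_cost
    for rate in tariff_rates:
        cost *= 1 + rate
    return cost

# A $100 part crossing two tariffed borders at 10% each,
# then entering the US tariff-free under the exemption.
print(round(landed_cost(100.0, [0.10, 0.10, 0.0]), 2))  # 121.0
```

Because each tariff compounds on the value already accumulated, even a product that enters the US duty-free can carry tariff costs picked up earlier in the chain, which is the "one way or another" effect Gandhi describes.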
A global tariff-rich environment will probably raise his costs in one way or another, Gandhi told BI. End customers like hyperscalers, and the ecosystem of middlemen who bundle all these elements together and sell them, will figure out how to cover these costs without too much consternation, up to a point, he said.
"Demand is not particularly price sensitive," wrote Morgan Stanley analyst Joe Moore Thursday.
AI is already an area where companies appear willing to spend with abandon. But it's also maturing. Just as companies are working to put together realistic estimates for normal business metrics like return on investment, unit economics, and profitability, tariffs risk pushing those calculations down the road, potentially by years.
Have a tip or an insight to share? Contact Emma at [email protected] or use the secure messaging app Signal: 443-333-9088
Intel's new CEO acknowledges the company fell behind and he wants customers to be 'brutally honest'

Dibyanshu Sarkar for AFP via Getty Images
- Intel's new CEO, Lip-Bu Tan, said he wants brutal customer feedback to address recent shortcomings.
- Tan's appointment follows Pat Gelsinger's departure in December.
- Tan said that Intel plans to divest non-core businesses and may explore the robotics space.
Intel's new CEO said the company had fallen short of expectations recently and asked for customer feedback in his first public appearance since taking over the chip giant.
"We had been too slow to adapt and to meet your needs," Lip-Bu Tan said about customers on Monday at an Intel event in Las Vegas. "You deserve better, and we need to improve, and we will. Please be brutally honest with us."
The keynote was Tan's first since taking over as CEO on March 18. Tan walked the audience through his upbringing in Asia and career milestones and said he learned the importance of customer feedback in his previous CEO role.
Tan's appointment comes after former CEO Pat Gelsinger's sudden departure in December. Tan has over 16 years of experience leading the electronics and system design company Cadence and joined Intel's board in 2022, serving on the mergers and acquisitions committee. He left the board in August, citing "demands on his time."
Tan is the chairman of Walden International, a venture capital firm, and has served on the boards of SoftBank Group and Hewlett Packard Enterprise.
Turnaround plans
On Monday, Tan also said that Intel would spin off non-core businesses to improve its bandwidth. He did not specify which parts of the company may get divested. He also hinted at developments in the robotics space.
"I love this company. It was hard for me to watch it struggle," he said about joining Intel at this stage in his career. "I simply cannot stay on the sideline knowing that I could help turn things around."
Intel was Silicon Valley's dominant chipmaker in the 2000s. But it lost ground to Nvidia, Samsung, and several Taiwanese and American players, missing out on key tech developments like the rise of the iPhone and, more recently, skyrocketing artificial-intelligence demand. Companies such as Microsoft and Google have been designing their own chips, further limiting Intel's customer base.
Intel's share price dropped almost 50% in 2024 as the company faced challenges, including billions of dollars in losses. Gelsinger responded with sweeping layoffs and buyouts.
In December, the company's chief global operations officer outlined a similar strategic shift: moving from "no wafer left behind" to a "no capital left behind" mindset to minimize waste and prioritize efficiency.
Intel's stock is up 13% so far this year, partly because of Tan's appointment as CEO and because of reports that the company could be partnering with leading chipmakers Nvidia and Broadcom for manufacturing.
CoreWeave IPO debut: Pay attention to this potentially expensive hardware problem

Bruno de Carvalho/SOPA Images/LightRocket via Getty Images
- Rapid AI advancements may reduce the useful life of CoreWeave's Hopper-based Nvidia GPUs.
- CoreWeave has a lot of Hopper-based GPUs, which are becoming outdated due to the Blackwell rollout.
- Amazon recently cut the estimated useful life of its servers, citing AI advancements.
I recently wrote about Nvidia's latest AI chip-and-server package and how this new advancement may dent the value of the previous product.
Nvidia's new offering likely caused Amazon to reduce the useful life of its AI servers, which took a big chunk out of earnings.
Other Big Tech companies, such as Microsoft, Google, and Meta, may have to take similar tough medicine, according to analysts.
This issue might also impact Nvidia-backed CoreWeave, which is doing an IPO. Its shares listed on Friday under the ticker "CRWV." The company is a so-called neocloud, specializing in generative AI workloads that rely mostly on Nvidia GPUs and servers.
Like its bigger rivals, CoreWeave has been buying oodles of Nvidia GPUs and renting them out over the internet. The startup had deployed more than 250,000 GPUs by the end of 2024, per its filing to go public.
These are incredibly valuable assets. Tech companies and startups have been jostling for the right to buy Nvidia GPUs in recent years, so any company that has amassed a quarter of a million of these components has done very well.
There's a problem, though. AI technology is advancing so rapidly that it can make existing gear obsolete, or at least less useful, more quickly. This is happening now as Nvidia rolls out its latest AI chip-and-server package, Blackwell. It's notably better than the previous version, Hopper, which came out in 2022.
Veteran tech analyst Ross Sandler recently shared a chart showing that the cost of renting older Hopper-based GPUs has plummeted as the newer Blackwell GPUs become more available.

Ross Sandler/Barclays Capital
The majority of CoreWeave's deployed GPUs are based on the older Hopper architecture, according to its latest IPO filing from March 20.
Sometimes, in situations like this, companies have to adjust their financials to reflect the quickly changing product landscape. This is done by reducing the estimated useful life of the assets in question. Then, through depreciation, the value of the assets is written down over a shorter period to reflect things like wear and tear and, ultimately, obsolescence. The faster the depreciation, the bigger the hit to earnings.
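To make the mechanics concrete, here is a minimal straight-line depreciation sketch. The fleet cost is hypothetical; the six-to-five-year change mirrors the kind of adjustment AWS made, not any company's actual books:

```python
# Toy illustration (hypothetical numbers): straight-line depreciation of a
# GPU fleet, and how shortening the estimated useful life raises the
# annual expense that hits earnings.

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Straight-line depreciation: spread the asset's cost evenly
    over its estimated useful life."""
    return cost / useful_life_years

fleet_cost = 6_000_000_000  # hypothetical $6B of GPU servers

expense_6yr = annual_depreciation(fleet_cost, 6)  # $1.0B per year
expense_5yr = annual_depreciation(fleet_cost, 5)  # $1.2B per year

# Cutting the useful life from six years to five adds $200M of
# annual expense in this toy example.
print(f"Extra annual expense: ${expense_5yr - expense_6yr:,.0f}")
```

The same arithmetic run in reverse shows why lengthening a useful-life estimate, as CoreWeave did in 2023, reduces expenses and boosts reported earnings.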
Amazon's AI-powered depreciation
Amazon, the world's largest cloud provider, just did this. On a recent conference call with analysts, the company "observed an increased pace of technology development, particularly in the area of artificial intelligence and machine learning."
That caused Amazon Web Services to decrease the useful life of some of its servers and networking gear from six years to five years, beginning in January.
Sandler, the Barclays analyst, thinks other tech companies may have to do the same, which could cut operating income by billions of dollars.
Will CoreWeave have to do the same, just as it's trying to pull off one of the biggest tech IPOs in years?
I asked a CoreWeave spokeswoman about this, but she declined to comment. This is not unusual, because companies in the midst of IPOs have to follow strict rules that limit what they can say publicly.
CoreWeave's IPO risk factor
CoreWeave talks about this issue in its latest IPO filing, writing that the company is always upgrading its platform, which includes replacing old equipment.
"This requires us to make certain estimates with respect to the useful life of the components of our infrastructure and to maximize the value of the components of our infrastructure, including our GPUs, to the fullest extent possible."
The company warned those estimates could be inaccurate. CoreWeave said its calculations involve a host of assumptions that could change and infrastructure upgrades that might not go according to plan, all of which could affect the company, now and later.
This caution is normal because companies have to detail everything that could hit their bottom line, from pandemics to cybersecurity attacks.
As recently as January 2023, CoreWeave was taking the opposite approach to this situation, according to its IPO filing. The company increased the estimated useful life of its computing gear from five years to six years. That change reduced expenses by $20 million and boosted earnings by 10 cents a share for 2023.
If the company now follows AWS and reduces the useful life of its gear, that might dent earnings. Again, CoreWeave's spokeswoman declined to comment, citing IPO rules.
An important caveat: Just because one giant cloud provider made an adjustment like this doesn't mean others will have to do the same. CoreWeave might design its AI data centers differently, somehow making Nvidia GPU systems last longer or become obsolete less quickly, for instance.
It's also worth noting that other big cloud companies, including Google, Meta, and Microsoft, have increased the estimated useful life of their data center equipment in recent years.
Google and Microsoft's current estimates are six years, like CoreWeave's, while Meta's is 5.5 years.
However, Sandler, the Barclays analyst, thinks some of these big companies will follow AWS and shorten these estimates.
SoftBank is buying Ampere Computing for $6.5 billion

KAZUHIRO NOGI/AFP via Getty Images
- SoftBank said it's acquiring chip designer Ampere Computing for $6.5 billion in cash.
- Ampere will operate as a subsidiary, keeping its name and headquarters in Santa Clara.
- SoftBank's CEO said the deal was part of its commitment to AI innovation in the US.
SoftBank is acquiring Ampere Computing, a Silicon Valley-based chip design startup, for $6.5 billion in cash, the company said in a statement Wednesday.
SoftBank said that Ampere will operate as a subsidiary but keep its name and headquarters in Santa Clara, California.
"The future of Artificial Super Intelligence requires breakthrough computing power," Masayoshi Son, the CEO of SoftBank, said in a statement. "Ampere's expertise in semiconductors and high-performance computing will help accelerate this vision, and deepens our commitment to AI innovation in the United States."
The deal is expected to close in the second half of the year, the statement said.
Ampere did not respond to a request for comment from Business Insider, and SoftBank declined to provide additional comment.
SoftBank has been going all in on AI for several years now.
Son has a close relationship with AI chip giant Nvidia's CEO, Jensen Huang. Huang has said that SoftBank was once his company's biggest investor. SoftBank sold its stake in Nvidia in 2019 and Son said about his decision: "The fish that got away was big."
At a conference in November, Huang and Son announced that SoftBank would be the first customer to use Nvidia's Blackwell chips, which would be used to develop an artificial intelligence supercomputer in Japan.
SoftBank acquired European chipmaker Arm for $32 billion in 2016 and took it public in 2023. The firm is also an investor in OpenAI and put $500 million into the AI company in October.
SoftBank's most recent investments in AI include Stargate, a $500 billion joint venture with OpenAI, Oracle, and other investors. The project, announced in January, aims to invest in AI infrastructure in the US over the next four years.
Son appeared at the White House alongside OpenAI CEO Sam Altman, Oracle CTO Larry Ellison, and President Donald Trump when the project was announced.
Son was known for consumer-focused investments, including high-profile flops like WeWork, through his two Vision Funds. But he has since bet big on AI, seeing it as this century's transformational technology. The second Vision Fund is now backing AI startups like generative AI video editing startup OpusClip.
As Intel welcomes a new CEO, a look at where the company stands
Semiconductor giant Intel hired semiconductor veteran Lip-Bu Tan to be its new CEO. This news comes three months after Pat Gelsinger retired and stepped down from the company's board, with Intel CFO David Zinsner and executive vice president of client relations Michelle Johnston Holthaus stepping in as co-CEOs. Tan, who was most recently the CEO […]
© 2024 TechCrunch. All rights reserved. For personal use only.
'BS and hype': Amazon execs cast doubt on Microsoft's quantum-computing claims in private discussions

Amazon
- Amazon executives are skeptical about Microsoft's quantum computing breakthrough claims.
- One Amazon exec called it "next level (in BS and hype)."
- Recent quantum announcements by tech giants may be more hype than substance, other experts suggest.
Microsoft claimed a major quantum computing breakthrough last month. Amazon executives aren't buying it.
On February 19, Microsoft unveiled a new quantum processor called Majorana 1. The company said the chip uses a new type of architecture that could allow quantum computers to store way more data and perform much more complex calculations.
On the same day, Simone Severini, Amazon's head of quantum technologies, emailed CEO Andy Jassy casting doubt on Microsoft's claims, according to a copy of the email obtained by Business Insider.
Severini wrote that Microsoft's underlying scientific paper, released in Nature, "doesn't actually demonstrate" the claimed achievement and only showed that the new chip "could potentially enable future experiments."
He also noted that Microsoft has a checkered history of "several retracted papers due to scientific misconduct" in the quantum computing space, adding that some of the company's earlier research had to be withdrawn. The email was also shared with several other executives, including Amazon Web Services CEO Matt Garman and SVP James Hamilton.
"This seems to be a meaningful technical advancement, but it's far different from the breakthrough being portrayed in the media coverage," Severini wrote in the email.
It's also far from clear that Microsoft's architecture, which uses "topological qubits," will provide "any real performance benefit," he added.

Amazon
'Next level (in BS and hype)'
In internal Slack messages, seen by BI, Amazon execs and employees were more vocal about their frustration with Microsoft's claims.
Oskar Painter, Amazon's head of quantum hardware, stressed the need to "push back on BS statements like S. Nadella's," likely in reference to Microsoft CEO Satya Nadella's social media post proclaiming major advancements with the Majorana chip.
Painter, who also teaches at Caltech, said he has more positive views of Google and IBM's quantum-computing efforts. Microsoft, on the other hand, is "next level (in BS and hype)," he wrote in an internal Slack message seen by BI.
One Amazon employee joked about receiving texts from friends asking if this would "change the world," while another poked fun at tech companies using grandiose statements to promote their quantum efforts.
"Seems as if Google, IBM and Microsoft's marketing teams are making faster progress than their hardware R&D teams," this person wrote in Slack.
'Insignificant' compared to what is needed
Tech companies have been working on quantum computing for years. The hope is to one day create machines that enable significant strides in areas like drug discovery or chemical compound creation. In recent months, Amazon and Google have also unveiled new quantum chips.
But their efforts to outduel each other could generate more hype than substance, according to industry experts.
Arka Majumdar, a computer engineering professor at the University of Washington, told BI that Microsoft's technological achievements are impressive but "insignificant" compared to what is needed to create a useful quantum computer. He added Microsoft's claims appear "sensational" and "overhyped," given they haven't reached meaningful scale.
Scott Aaronson, a renowned quantum computing researcher and computer science professor at the University of Texas at Austin, pointed out in a blog post that Microsoft's claim to have created a topological qubit "has not yet been accepted by peer review."
The peer review file of Microsoft's Nature report states that the "results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices," and the work is intended to introduce an architecture that "might enable fusion experiments using future Majorana zero modes."
In an email to BI, a Microsoft spokesperson said the Nature paper was published a year after its submission, and the company has made "tremendous progress" in that time. Microsoft plans to share additional data "in the coming weeks and months," the spokesperson added.
"Discourse and skepticism are all part of the scientific process," Microsoft's spokesperson said. "That is why we are dedicated to the continued open publication of our research, so that everyone can build on what others have discovered and learned."
Quantum timelines
Amazon and Microsoft also have differing views on the expected timeline for practical quantum usage.
Microsoft's spokesperson told BI, "Utility-scale quantum computers are just years away, not decades." Amazon's spokesperson, however, expects another couple of decades before mainstream adoption.
"While quantum computers may not be commercially viable for 10-20 years, bringing quantum computing to fruition is going to take an extraordinary effort, including sustained interest and investment across the industry starting now," Amazon's spokesperson told BI.
Chris Ballance, CEO of quantum-computing startup Oxford Ionics, told BI that Amazon's recent quantum chip announcement was equally vague with little substance. Other industry experts previously told BI that they were unsure if the technology has advanced as far as these companies claim.
Still, Ballance said the recent array of quantum news is a "good sign" for the industry, which is still in its "very early days."
"It shows that people are waking up to the value of quantum computing and the need to address it in their roadmaps," Ballance said.
Have a tip?
Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.
TechCrunch News
Infineon teams up with Indiaβs CDIL to build chips for light EVs, energy storage solutions
Infineon Technologies has partnered with CDIL Semiconductors to tap into emerging business cases including light EVs and battery storage solutions in India.
Trump wants Republicans to 'get rid of' the CHIPS Act. They may just ignore him.

Win McNamee/Getty Images
- Trump called on Congress to scrap the "horrible" $52 billion CHIPS Act on Tuesday.
- It's not likely to happen. There's little GOP appetite to repeal it, and Democrats like the law.
- Plus, much of the 2022 bill has already been set in motion.
President Donald Trump casually told Congress on Tuesday to scrap a $52 billion bipartisan law designed to supercharge the American semiconductor manufacturing industry.
"Your CHIPS Act is a horrible, horrible thing," Trump said during a joint address to Congress on Tuesday night. "You should get rid of the CHIP Act, and whatever's left over, Mr. Speaker, you should use it to reduce debt, or any other reason you want to."
That landed with a thud among the Republicans who backed the sweeping legislation two and a half years ago.
"I have to admit, I was surprised," Sen. Todd Young of Indiana, the Republican co-author of the 2022 CHIPS and Science Act, told reporters on Wednesday morning. "It's been one of the greatest successes of our time."
The bill included $39 billion in subsidies for chip manufacturing in the US, plus $13.2 billion for semiconductor research and workforce development. The goal of the legislation was to make the US less reliant on chips manufactured in Taiwan, create US manufacturing jobs, and bolster competition with China β something that was especially important for national security-minded Republicans.
It passed both chambers on a bipartisan basis, garnering the enthusiastic support of then-Senate Minority Leader Mitch McConnell, plus the support of 16 other GOP senators and virtually all Democrats.
Trump specifically criticized the subsidies, saying that he prefers to bring chip manufacturing to America via tariffs. "They take our money and they don't spend it," the president said.
Just because Trump is asking Congress to repeal the law doesn't mean it will happen β even though Republicans control both chambers and broadly support the president's agenda.
'I'd like to see what he's going to replace it with'
It would take not just unanimous GOP support, but the support of several Democratic senators, to get a repeal bill through the Senate.
For one, Democrats are unlikely to go for it.
"People are already feeling the positive impacts and new economic energy in their towns in every corner of America, from Ohio to Arizona," Senate Minority Leader Chuck Schumer said in a statement. "I do not think the president will find much support in Congress for weakening this legislation."
"I don't know what the hell he's on," Democratic Sen. Ruben Gallego, whose home state of Arizona is home to a TSMC plant built with CHIPS Act funding, told BI of Trump.
Even among Republicans, there's a hesitancy to indulge Trump's latest demand.
Republican Sen. John Cornyn of Texas defended the bill, saying it "made it possible" for Trump to announce an additional investment by TSMC earlier this week.
Other GOP senators who voted for the CHIPS Act said they want to see more details from Trump before they back repealing the bill.
"I'd like to see what he's going to replace it with," Sen. Lindsey Graham of South Carolina told reporters. "I want to bring chip manufacturing here, but if he's got a different way to do it, I'm open-minded."
"If there's a consensus on addressing that problem in a better way, I'm open to suggestions," Sen. Roger Wicker of Mississippi told BI. "It was the first I'd heard of that proposal."
It's also the case that the part of the CHIPS Act that Trump opposes has already been set in motion over the last few years. Young told reporters that the "chips portion" of the bill "has mostly been implemented," and that he's reached out to the White House to get more clarity on Trump's position.
Before leaving office, the Biden administration allocated more than $33 billion in subsidies to 32 semiconductor manufacturers and suppliers.
To date, 20 companies have secured legally binding agreements, which collectively account for roughly 85% of the total allocated subsidies, Bloomberg reported. However, only about 11% of the allocated subsidies have been doled out so far: Funds are disbursed as companies hit certain benchmarks detailed in the agreements.
In November, a Commerce Department spokesperson told BI that the funding associated with a binding agreement could not be rescinded unless the company failed to comply with the terms of the deal. Absent that, they said rescinding funding would require an act of Congress.
"My expectation is that the administration will continue to support this supply chain resiliency and national security initiative," Young said. "If it needs to transform into a different model over a period of time, I'm certainly open to that."
Gallego argued that Trump's comments could still have an impact on the semiconductor industry, even if legislative repeal doesn't happen.
"If you are a business that's trying to relocate, or ramp up chip supplies in the United States, and you think that there's not going to be money for the CHIPS Act, you may not even start engaging, and you might start looking at other countries," the Arizona Democrat said.
Latest Tech News from Ars Technica
China aims to recruit top US scientists as Trump tries to kill the CHIPS Act
On Tuesday, Donald Trump finally made it clear to Congress that he wants to kill the CHIPS and Science Act, a $280 billion bipartisan law Joe Biden signed in 2022 to bring more semiconductor manufacturing into the US and put the country at the forefront of research and innovation.
Trump has long expressed frustration with the high cost of the CHIPS Act, telling Congress on Tuesday that it's a "horrible, horrible thing" to "give hundreds of billions of dollars" in subsidies to companies that he claimed "take our money" and "don't spend it," Reuters reported.
"You should get rid of the CHIPS Act, and whatever is left over, Mr. Speaker, you should use it to reduce debt," Trump said.
© klyaksun | iStock / Getty Images Plus