Sam Altman's OpenAI is tightening up developer access to its models.
JOEL SAGET/AFP via Getty Images
OpenAI now requires a government ID for developer access to advanced AI models.
Copyleaks research shows DeepSeek-R1 mimics OpenAI outputs, raising imitation concerns.
AI model fingerprinting could enforce licensing and protect intellectual property rights.
In a bid to protect its crown jewels, OpenAI is now requiring government ID verification for developers who want access to its most advanced AI models.
While the move is officially about curbing misuse, a deeper concern is emerging: that OpenAI's own outputs are being harvested to train competing AI systems.
A new research paper from Copyleaks, a company that specializes in AI content detection, offers evidence of why OpenAI may be acting now. Using a system that identifies the stylistic "fingerprints" of major AI models, Copyleaks found that 74% of the outputs from the rival Chinese model DeepSeek-R1 were classified as OpenAI-written.
This doesn't just suggest overlap; it implies imitation.
Copyleaks' classifier was also tested on other models, including Microsoft's Phi-4 and Elon Musk's Grok-1. These models scored almost zero similarity to OpenAI (99.3% and 100% "no-agreement," respectively), indicating independent training. Mistral's Mixtral model showed some similarities, but DeepSeek's numbers stood out starkly.
A chart showing stylistic "fingerprint" similarities to OpenAI models
Copyleaks research
The research underscores how even when models are prompted to write in different tones or formats, they still leave behind detectable stylistic signatures, like linguistic fingerprints. These fingerprints persist across tasks, topics, and prompts, and can now be traced back to their source with some accuracy. That has enormous implications for detecting unauthorized model use, enforcing licensing agreements, and protecting intellectual property.
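To make that idea concrete, here's a toy sketch of how stylistic attribution can work, using off-the-shelf scikit-learn. It's an illustration of the general technique, not Copyleaks' actual system, and the example texts and model names are invented.

```python
# A toy version of stylistic fingerprinting: classify which model wrote a
# text from character n-gram habits (punctuation, phrasing, formatting).
# This illustrates the general technique, not Copyleaks' classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled outputs from two models.
texts = [
    "Certainly! Here's a concise summary of the key points.",
    "Certainly! Let's break this down step by step.",
    "In short: yes, though a few caveats are worth noting.",
    "In short: the answer depends on several factors.",
]
labels = ["model_a", "model_a", "model_b", "model_b"]

# Character n-grams pick up stylistic tics that persist across topics.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["Certainly! Here's what that means in practice."]))
```

With enough labeled samples, a classifier like this picks up the stock phrases and formatting habits that give a model away, even when the topic changes.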
OpenAI didn't respond to requests for comment. But the company discussed some reasons why it introduced the new verification process. "Unfortunately, a small minority of developers intentionally use the OpenAI APIs in violation of our usage policies," it wrote when announcing the change recently.
OpenAI says DeepSeek might have 'inappropriately distilled' its models
Earlier this year, just after DeepSeek wowed the AI community with reasoning models that were similar in performance to OpenAI's offerings, the US startup was even clearer: "We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models."
Distillation is a process where developers train new models using the outputs of other existing models. While such a technique is common in AI research, doing so without permission could violate OpenAI's terms of service.
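For readers who want the mechanics, here's a minimal, generic sketch of distillation in PyTorch. The tiny models and random "prompts" are stand-ins, and this is not a description of DeepSeek's or OpenAI's actual pipelines.

```python
# Knowledge distillation in miniature: a small "student" learns to match a
# larger "teacher" model's output distribution instead of raw labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab = 100
teacher = nn.Sequential(nn.Embedding(vocab, 64), nn.Linear(64, vocab))  # stand-in "big" model
student = nn.Sequential(nn.Embedding(vocab, 16), nn.Linear(16, vocab))  # smaller model to train
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softer teacher distributions carry more signal

for step in range(200):
    tokens = torch.randint(0, vocab, (32,))  # stand-in for real prompt data
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(tokens) / T, dim=-1)
    student_logp = F.log_softmax(student(tokens) / T, dim=-1)
    # KL divergence pulls the student's predictions toward the teacher's
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean") * (T * T)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

When only text outputs are available, as with a public API, the student is typically fine-tuned on the teacher's sampled responses rather than its full probability distributions, but the goal is the same: transfer the teacher's behavior into the student.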
DeepSeek's research paper about its new R1 model describes using distillation with open-source models, but it doesn't mention OpenAI. I asked DeepSeek about these allegations of mimicry earlier this year and didn't get a response.
Critics point out that OpenAI itself built its early models by scraping the web, including content from news publishers, authors, and creators, often without consent. So is it hypocritical for OpenAI to complain when others use its outputs in a similar way?
"It really comes down to consent and transparency," said Alon Yamin, CEO of Copyleaks.
Training on copyrighted human content without permission is one kind of issue. But using the outputs of proprietary AI systems to train competing models is another; it's more like reverse-engineering someone else's product, he explained.
Yamin argues that while both practices are ethically fraught, training on OpenAI outputs raises competitive risks, as it essentially transfers hard-earned innovations without the original developer's knowledge or compensation.
As AI companies race to build ever-more capable models, this debate over who owns what β and who can train on whom β is intensifying. Tools like Copyleaks' digital fingerprinting system offer a potential way to trace and verify authorship at the model level. For OpenAI and its rivals, that may be both a blessing and a warning.
OpenAI CEO Sam Altman has some kind words for his OpenAI cofounder and current legal opponent, Elon Musk.
Getty Images
OpenAI plans to launch a social network, according to a media report on Tuesday.
A move like this would give OpenAI way more user attention and fresh data for AI model training.
Elon Musk transformed X and xAI by protecting his social network's data and using it to create new AI services.
OpenAI is reportedly planning to launch its own social network. To understand why, consider what Elon Musk has done with X in recent years.
After overpaying for Twitter, one of the first things he did was shut off bot access to this social network. There were howls of protest. Most explanations focused on the assumption that Musk was gutting Twitter for some sort of evil fun.
What he was actually doing was protecting Twitter's valuable data from being pillaged by other tech companies that wanted to use this information for free to create generative AI models.
Instead, Musk's new AI startup, xAI, got exclusive access to this data. This was used to train new AI models. The result is Grok, a series of models and chatbot services that compete pretty well with OpenAI's ChatGPT, while also fueling new forms of content creation and sharing on what is now known as the X social network.
Now, Sam Altman is trying to replicate this magic. The Verge reported on Tuesday that OpenAI is developing a social media platform that could integrate ChatGPT outputs, such as image generation, with a social feed. It cited sources familiar with the matter. OpenAI didn't comment to the publication.
Bill Gross, the founder of the tech incubator Idealab, told me there are a few reasons OpenAI would want its own social network. The two most interesting revolve around the two most important ingredients in today's AI-powered tech industry: attention and data.
"Altman needs to have more attention to justify OpenAI's valuation," Gross said. "He's already getting half a billion to a billion monthly unique visitors, which is incredible. He wants to get that higher so he can justify a trillion-dollar valuation."
Recall the title of Google's famous research paper that kicked off the generative AI boom: "Attention Is All You Need." To be a valuable tech giant, you need to control the online attention of billions of humans every day.
OpenAI was recently valued at about $300 billion. But Microsoft, Google, Amazon, and Meta are all in the trillion-dollar club. And these companies all have billions of regular users, Gross noted.
OpenAI needs the same reach, and a social network could do the trick, he said. "They just need more attention. So why not harvest the output of their models that users will share on a new social network, and this should attract even more users and even more attention," Gross said.
Human data is good. Labeled data is better.
What do you do with all this user attention and related activity once you get it? A decade ago, the Big Tech business model was running targeted online ads to generate billions of dollars. Now, it's siphoning off user data to train and build powerful AI models and chatbots, then charging a subscription for access to these tools.
Like other AI companies, OpenAI has spent recent years collecting vast amounts of data from across the internet and using it in clever ways to create AI models and chatbots, including ChatGPT, GPT-4, and the new 4o image-generation tool.
The appetite for more high-quality, human-generated data is insatiable. But the supply can't keep up, so AI companies often have to pay in various ways to generate new data.
What if, instead, you could just collect mountains of free human data from your very own social network?
"If users start typing words into this new OpenAI social network, the company can use that for all kinds of AI model training," Gross said.
Not only would the company have this new source of human vocabulary; users would also share images and videos and add commentary. This is essentially humans identifying and labeling content on a massive scale, Gross explained.
This is crucial for successful AI model development. Raw data is good, but when humans take the time to annotate and label the information, that's way more valuable.
"How else can OpenAI acquire new training data at scale going forward?" Gross said.
I asked OpenAI all about this on Tuesday and didn't get a response.
Former Microsoft CEOs Bill Gates, left, and Steve Ballmer, center, pose for photos with CEO Satya Nadella. The company is taking its foot off the AI accelerator.
Stephen Brashear/Getty Images
Microsoft recently said it may "strategically pace" its data center plans.
The change follows a shift in its OpenAI partnership and concern about potential oversupply.
Microsoft's pivot reflects a broader industry shift from AI training to more cost-effective inference.
In the high-stakes race to dominate AI infrastructure, a tech giant has subtly shifted gears.
Since ChatGPT burst on the scene in late 2022, there's been a mad dash to build as many AI data centers as possible. Big Tech is spending hundreds of billions of dollars on land, construction, and computing gear to support new generative AI workloads.
Microsoft has been at the forefront of this, mostly through its partnership with OpenAI, the creator of ChatGPT.
For two years, there's been almost zero doubt in the tech industry about this AI expansion. It's all been up and to the right.
Until recently, that is.
Pacing plans
Last Tuesday, Noelle Walsh, head of Microsoft Cloud Operations, said the company "may strategically pace our plans."
This is pretty shocking news for an AI industry that's been clamoring constantly for more cloud capacity and more Nvidia GPUs. So it's worth reading closely what Walsh wrote about how things have changed:
"In recent years, demand for our cloud and AI services grew more than we could have ever anticipated and to meet this opportunity, we began executing the largest and most ambitious infrastructure scaling project in our history," she wrote in a post on LinkedIn.
"By nature, any significant new endeavor at this size and scale requires agility and refinement as we learn and grow with our customers. What this means is that we are slowing or pausing some early-stage projects," Walsh added.
Microsoft has backed off a bit lately
She didn't share more details, but TD Cowen analyst Michael Elias has found several recent examples of what he said was Microsoft backing off.
Over the last six months, the tech giant has walked away from more than 2 gigawatts of AI cloud capacity in the US and Europe that was in the process of being leased, he said. In the past month or so, Microsoft has also deferred and canceled existing data center leases in the US and Europe, Elias wrote in a recent note to investors.
This pullback on new capacity leasing was largely driven by Microsoft's decision to not support incremental OpenAI training workloads, Elias said. A recent change to this crucial partnership allows OpenAI to work with other cloud providers beyond Microsoft.
"However, we continue to believe the lease cancellations and deferrals of capacity point to data center oversupply relative to its current demand forecast," Elias added.
This is worrying because trillions of dollars in current and planned investments are riding on the generative AI boom continuing at a rapid pace. With so much money on the line, any inkling that this rocket ship is not ascending at light speed is unnerving. (I asked a Microsoft spokesperson all about this twice, and didn't get a response.)
An AI recalibration, not a retreat
The reality is more nuanced than a simple pullback, though. What we're witnessing is a recalibration, not a retreat.
Barclays analyst Raimo Lenschow put the situation in context. The initial wave of this industry spending spree focused a lot on securing land and buildings to house all the chips and other computing gear needed to create and run AI models and services.
As part of this AI "land grab," it's common for large cloud companies to sign and negotiate leases that they end up walking away from later, Lenschow explained.
Now that Microsoft feels more comfortable with the amount of land it has on hand, the company is likely shifting some spending to the later stages that focus more on buying the GPUs and other computing gear that go inside these new data centers.
"In other words, over the past few quarters, Microsoft has 'overspent' on land and buildings, but is now going back to a more normal cadence," Lenschow wrote in a recent note to investors.
Microsoft still plans $80 billion in capital expenditures during its 2025 fiscal year and has guided for year-over-year growth in the next fiscal year. So, the company probably isn't backing away from AI much, but rather becoming more strategic about where and how it invests.
From AI training to inference
Part of the shift appears to be from AI training to inference. Pre-training is how new models are created, and this requires loads of closely connected GPUs, along with state-of-the-art networking. Expensive stuff! Inference is how existing models are run to support services such as AI agents and Copilots. Inference is less technically demanding but is expected to be the larger market.
With inference outpacing training, the focus is shifting toward scalable, cost-effective infrastructure that maximizes return on investment.
For instance, at a recent AI conference in New York, the discussion focused more on efficiency than on attaining AGI, or artificial general intelligence, a costly endeavor to build machines that outperform humans.
AI startup Cohere noted that its new Command A model only needs two GPUs to run. That's a heck of a lot fewer than most models have required in recent years.
Microsoft's AI chief weighs in
Mustafa Suleyman, CEO of Microsoft AI, echoed this in a recent podcast. While he acknowledged a slight slowdown in returns from massive pre-training runs, he emphasized that the company's compute consumption is still "unbelievable"; it's just shifting to different stages of the AI pipeline.
Suleyman also clarified that some of the canceled leases and projects were never finalized contracts, but rather exploratory discussions β part of standard operating procedure in hyperscale cloud planning.
This strategic pivot comes as OpenAI, Microsoft's close partner, has begun sourcing capacity from other cloud providers, and is even hinting at developing its own data centers. Microsoft, however, maintains a right of first refusal on new OpenAI capacity, signaling continued deep integration between the two companies.
What does this all mean?
First, don't mistake agility for weakness. Microsoft is likely adjusting to changing market dynamics, not scaling back ambition. Second, the hyperscaler space remains incredibly competitive.
According to Elias, when Microsoft walked away from capacity in overseas markets, Google stepped in to snap up the supply. Meanwhile, Meta backfilled the capacity that Microsoft left on the table in the US.
"Both of these hyperscalers are in the midst of a material year-over-year ramp in data center demand," Elias wrote, referring to Google and Meta.
So, Microsoft's pivot may be more a sign of maturity than retreat. As AI adoption enters its next phase, the winners won't necessarily be those who spend the most, but those who spend the smartest.
Thinking Machines Lab, Mira Murati's new AI startup, is only a few months old and employs just a handful of people. There are no products yet. So why would investors put in so much money and value such a young enterprise at around $10 billion?
I think there are two main explanations. Both are based on how venture capital works in the age of generative AI and Big Tech.
The Power Law
First up: This is a very expensive option on Murati's startup working out spectacularly well. If that happens, a $10 billion valuation could become $100 billion. Or $1 trillion. OpenAI, where Murati used to work, was recently valued at $300 billion. Google is worth almost $2 trillion.
If Thinking Machines Lab turns into a success like these two companies, then your venture capital fund is set for the decade. You made 10x or 100x your investment, so the rest of your portfolio of early startups doesn't really matter. Those businesses can all fail and you've still generated impressive returns.
This is known as the "Power Law," which states that VC funds cannot achieve success without at least one bet so extraordinary that its gains return the entire value of the fund to investors. One potential example that comes to mind is Accel Partners' early stake in Facebook.
"Each year brings a handful of outliers that hit the proverbial grand slam," wrote Sebastian Mallaby in his 2022 book on the subject. "The only thing that matters in venture is to own a piece of them."
The "Big Tech put"
Most startups don't work out, though. If Thinking Machines doesn't do amazingly well, there's another path where investors can still make money, even if they've put what seems like a ridiculously high valuation on this startup.
Sometimes, startups that struggle to sell products and generate revenue still have other valuable assets. Maybe they've invented a new tech thing and have patents to back it up. Or they employ talented technical people. Or both. (To be sure, Murati is incredibly talented and she's hired quite a few experts from her OpenAI days.)
In situations like this, big tech companies sometimes swoop in and acquire these startups. When these transactions are mostly for talent, these deals are called acqui-hires.
In the generative AI era, when tech companies are racing for domination, some of these deals have been huge.
Last year, Google agreed to pay $2.5 billion to license Character.AI's technology and hire the startup's two superstar cofounders, along with 20% of the other employees.
These types of transactions don't generate 10x or 100x returns for VCs. But they can get them out of struggling startup investments with little financial damage or sometimes even a relatively healthy gain.
For instance, the Character.AI deal netted investors a return of about 2.5x, BI's Ben Bergman wrote last year.
Let's call this the "Big Tech put."
A put option is a financial contract that gives the owner the right to sell an asset at a specific price by a specific date.
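A tiny worked example, with made-up numbers:

```python
# Payoff of a put at expiry: the right to sell at the strike is worth
# something only if the market price has fallen below it.
def put_payoff(strike: float, price: float) -> float:
    return max(strike - price, 0.0)

print(put_payoff(50, 30))  # 20.0: the asset tanked, the put pays off
print(put_payoff(50, 70))  # 0.0: the asset rose, the put expires worthless
```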
More generally, a put is the idea that if the price of something falls a lot, someone important will step in and support it. Or even more generally, it's the idea that if the sh*t hits the fan, higher powers will intervene to clean up the mess.
There's a "Fed put," which is the belief by investors that the Federal Reserve will step in to buoy markets if prices drop to a certain level.
And now there's even a "Trump put." This is the belief that US President Donald Trump will do what it takes to prop up the market. This theory did the rounds recently when he paused global tariffs after the bond market went berserk for a day or two.
If broader markets can have "put" theories, then why can't venture capitalists have their very own "Big Tech put"?
I just Googled this phrase, by the way, and couldn't find anyone using it. So I'm claiming it as my own. I came up with this, OK?
Redwood Materials has agreements worth billions of dollars with major automakers and EV battery manufacturers, including VW, Toyota, GM, and Panasonic. So when I sat down with Redwood Chief Commercial Officer Cal Lankton, I asked for his outlook on EV sales.
Lankton, though, was unequivocal during our interview:
"We've been very fortunate to have a strong set of partners β to be very tied into their demand plans and how they see the market evolving β and we have not seen softening," he said.
"In fact, EV demand is continuing to increase. 2024 was the largest year of EV shipments on record in North America, and I think 2025 will be even larger," Lankton added.
Concerns about a potential slowdown in EV adoption have been overblown and driven by "some OEMs in particular," Lankton explained, without naming names. OEM refers to "original equipment manufacturer," or a company that designs and makes its own cars.
"But the consumer is looking at EVsas compelling options," he said. "OEMs in North America are offering more and more compelling options and we feel very bullish about the long-term growth of electric vehicles."
It's fair to take Lankton's view with a pinch of salt. Redwood Materials is relying on EV demand staying strong to support its ambitious business plans.
However, the data also supports his view. According to Cox Automotive's Q1 2025 report, the US EV market continues to grow. And while certain brands have lost ground, others are leaning into long-term demand with confidence.
EV unit sales in the US
Cox Automotive estimates
In the first quarter, almost 300,000 EVs were sold in the US, up 11% from a year earlier, Cox estimated.
While Tesla saw a decline, legacy auto brands such as Chevrolet, VW, Toyota, and Honda saw massive growth, year over year. Porsche was a real standout. It sold more than 4,000 EVs in Q1, up 250% from a year earlier. Taycans aren't cheap either!
"Despite many obstacles β and what you may read elsewhere β electric-vehicle sales continue to grow at a healthy pace in the US," Cox wrote in its latest report.
But how much has Elon Musk's DOGE bender affected the Cybertruck?
Q1 estimates from Cox Automotive provide some clues.
We know that Elon Musk's DOGE bender has seriously hurt sales of Tesla cars. But what about the Cybertruck specifically?
Tesla doesn't break out quarterly delivery numbers for this divisive vehicle. But once a quarter, Cox Automotive estimates electric vehicle sales and includes a number for Cybertrucks.
This is one of the few sources where you can see how the Cybertruck performed in a given period. The angular, futuristic beast is either loved or despised. As Musk has waded deeper into DOGE, this vehicle has become even more polarizing, if that was possible. (I've even stopped mentioning that I once had a reservation to buy a Cybertruck. Please don't tell anyone.)
About a month ago, I was walking with a friend through my hometown and there was a Cybertruck parked near the sidewalk. It had a deep burgundy wrap and I remarked on that. My buddy started swearing and shouting about Elon and how despicable the vehicle was. I'd never seen him so angry (except when the Eagles lose).
With reactions like this, I've been wondering if anyone at all would buy a Cybertruck this year. And yet, I see them around Silicon Valley relatively often still. So, how did it do in Q1?
Cox estimates that Tesla sold 6,406 Cybertrucks in the first three months of 2025.
That compares with 12,991 units in the fourth quarter of 2024, before Musk went full DOGE. (This decline happened even as Tesla cut prices.)
A year ago, in the first quarter of 2024, Tesla delivered 2,803 Cybertrucks, according to Cox.
The Cybertruck had been outselling some other EV truck models. But this time, in Q1, Ford sold more F-150 Lightning trucks, according to Cox estimates.
At one point, in the third quarter of 2024, the Cybertruck was the third-most-popular EV in the US, outselling every other non-Tesla electric vehicle, according to Cox.
AI models fix app crashes better on iOS than Android, a new study finds.
Even Google's own Gemini model performed worse on Android.
Android's fragmented ecosystem and language variability may hinder AI model performance.
When your mobile app crashes, there's often a mad scramble to track down the software bug and fix it fast.
Now there's AI for that. But the technology works a lot better with Apple's iOS platform than Google's Android, according to a study released on Thursday.
A software company called Instabug built a tool called SmartResolve that uses leading AI models to automate the process of spotting app crashes, diagnosing what's wrong, and generating usable software code fixes.
Instabug tested models from OpenAI, Anthropic, Google, and Meta against a dataset of real-world app crashes. Each fix was scored on correctness, similarity to human fixes, depth of root-cause analysis, relevance, and overall coherence.
A big takeaway: AI models consistently perform better on iOS than on Android. Instabug found that on Apple's platform, crash fixes were more accurate, coherent, and well-structured across nearly every model tested.
Even Google's AI model did worse on Android
OpenAI's models, for example, delivered significantly better results on iOS. GPT-4o scored 60% on iOS versus 49% on Android. With OpenAI's o1 model, the difference was even more dramatic: It hit 62% on iOS but dropped to 26% on Android, often failing to respond entirely in Android tests.
Other models followed a similar pattern. Anthropic's Claude Sonnet 3.5 V1 scored 58% on iOS and 56% on Android, a smaller gap, but still an iOS lead.
Even Google's own Gemini 1.5 Pro performed worse on Android (51%) than on iOS (59%). Instabug found that it also faced more hallucination issues when using its larger context window.
Why does Android lag behind?
The discrepancy may come from Android's fragmented ecosystem. Compared to iOS, which offers a more uniform environment, Android's broader range of devices and crash types can make it harder for AI models to generalize fixes.
"The stronger performance on iOS is partially due to the structure of iOS native languages like Swift and Objective-C," Kenny Johnston, Instabug's chief product officer, said. "Their syntax is more predictable and strongly typed, which makes it easier for LLMs to generate accurate fixes."
Johnston said Android's languages, Java and Kotlin, plus crash-format variability, mean higher complexity for fixes.
Apple and Google did not respond to Business Insider's requests for comment.
A part of Redwood Materials' campus in the Nevada high desert east of Reno
Emily Najera for BI
In the high desert 25 miles east of Reno, Gooseberry Mine is etched into the dusty Nevada hills. Until 1990, gold was dug out of the ground here. Now it's mostly abandoned.
In late March, I stood a few hundred yards away in a makeshift chemistry lab few people have been allowed to enter. My tour guide, Adam Kirby, a project manager at Redwood Materials, held up a small capsule containing dark powder.
"Black gold," he said, smiling.
The substance is known as Cathode Active Material, or CAM, a combination of metals and minerals that makes up roughly 60% of the value of EV batteries and 15% of the entire price of an electric vehicle.
Oil is the original black gold, of course; it has powered automobiles for a century. Now, though, EVs are steadily replacing the internal combustion engine with what Tesla and the rest of the industry hope is a more sustainable alternative.
For this to really happen, we must recycle the big battery packs inside all these EVs. Redwood Materials, run by Tesla cofounder JB Straubel, is building North America's biggest battery recycling operation. And CAM is the company's next big bet.
Emily Najera for BI
Straubel has been hacking away at this problem for almost a decade. Since Redwood was founded in 2017, the startup has raised about $2 billion from big investors, including Fidelity, Goldman Sachs, Baillie Gifford, and Amazon.
Today, Redwood recycles more than 70% of all the lithium-ion batteries in North America. If you've ever handed old laptops or smartphones to a recycling center in the US or Canada, it's likely they ended up on a giant lot in front of Redwood Materials' 300-acre desert campus.
The company carefully heats these old batteries to just the right temperature to tease out the base ingredients, which include nickel, manganese, cobalt, and lithium. By themselves, these metals are valuable, and Redwood has generated hundreds of millions of dollars in revenue by selling this raw material into the EV supply chain.
The next step for the company is combining these materials into CAM, which is a lot more valuable. Hence the nickname: the new black gold. Instead of digging it up, Redwood conjures it into being through chemistry at industrial scale.
JB Straubel, CEO of Redwood Materials and cofounder of Tesla
John B. Carnett/Bonnier Corporation via Getty Images
It's an expensive and technically complex bet for Redwood, Straubel, and his investors. While driving down from the lab with Kirby, the project manager, we passed a building under construction that was so huge it made the nearby earth-moving trucks look like toys. This will house most of the full-scale CAM production process, featuring a heated conveyor belt more than 150 feet long.
"Things like Redwood are multibillion-dollar capex, ambitious projects. You must back people with experience and gravitas who can execute," said Chris Evdaimon, an investment manager at Baillie Gifford, who also visited the site recently.
"To raise equity and debt, while having the trust of governments," he added, "not many people can pull this off."
"Infinitely recyclable"
When I got back down the hill, I was ushered into the main Redwood office building on campus and met Chief Commercial Officer Cal Lankton in the "Nickel" office (next to the "Manganese" and "Lithium" rooms, of course).
He recounted how Redwood got started through a surprising revelation: The ingredients of lithium-ion batteries can be reused over and over. Unlike the original black gold, these materials don't go up in smoke or wear out.
Cathode Active Material, aka CAM
Redwood Materials
"That was the foresight on JB's part," Lankton said. "That's what differentiates this automotive transformation from previous ones β the batteries themselves are infinitely recyclable."
Redwood's processes can now recover about 98% of the critical minerals from batteries. The problem is that once this stuff has been salvaged, it must be shipped overseas, mainly to China, where it's refined and combined into useful products such as CAM.
By the time this material goes into a new battery cell in a US gigafactory, it has traveled roughly 50,000 miles from being shipped to Asia and then back again, according to Baillie Gifford's Evdaimon.
That's massively inefficient. If the US and Europe really want to meet lofty goals such as having half of cars being electric by 2030, a homegrown industry is needed. "Redwood Materials is essential for this," Evdaimon added.
A map of the battery materials supply chain
Redwood Materials
Redwood's circle
Redwood's solution is what Lankton calls a "circular battery value chain," and he says no one is doing this at scale yet in the Western world.
Redwood's circle begins with getting as many used batteries as possible. This includes production scrap from gigafactories, such as the ones Panasonic runs to supply Tesla, but also many other sources.
Redwood has massively expanded the number of ways it gets old batteries via contracts with companies such as Toyota, VW, Ford, GM, Amazon, and BMW. Redwood often pays to access this supply, betting it can make more money by turning this unwanted, hazardous scrap into the new black gold.
When you walk into Redwood Materials' high desert office in Nevada, there's even a black letterbox where visitors and staff can drop off old laptops and phones β or anything else that has a rechargeable battery.
The company now handles more than 20 gigawatt hours' worth of lithium-ion batteries a year, the equivalent of almost 1.6 billion cellphones.
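That equivalence roughly checks out with simple arithmetic (mine, not the company's):

```python
# 20 GWh spread across 1.6 billion phones is about 12.5 Wh per phone,
# in the range of a typical smartphone battery.
gwh = 20
phones = 1_600_000_000
print(gwh * 1_000_000_000 / phones, "Wh per phone")  # 12.5
```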
A dusty lot
The result is a dusty lot full of batteries in front of the Redwood office that stretches halfway to the horizon. Kirby drove me around to see the different types of batteries arranged in squares with gaps of a few yards to prevent runaway fires.
Some of these batteries still have a charge, so they must sit outside to shed this remaining electricity so they're safe to process.
Batteries waiting to be recycled at a Redwood Materials campus.
Emily Najera for BI
I wanted to snap photos, but Kirby said no up-close pics. Some partners don't want their old gear exposed to the public like this. I did, though, see a battery from a Hummer EV that was so big it took up a sizable portion of one storage square.
A Rube Goldberg machine
Then, we drove up the hill through Redwood's campus to near the top. There, batteries are carefully heated to release the minerals and metals.
Some volatile organic compounds are released by this part of the process. These and other unwanted by-products are processed through a winding maze of giant tubes and collected and cleaned with what are essentially huge HEPA filters, according to Kirby. It looked like a giant Rube Goldberg machine, sparkling in the desert sun.
Emily Najera for BI
Kirby said some other battery recycling companies don't spend much time or money on this painstaking step, making their facilities less environmentally friendly.
Next, a hydrometallurgy process refines and purifies the metals, minerals, and chemicals that are needed for CAM.
"Redwood's equipment was bought from other places, but they have made many adjustments. This is truly novel," said Baillie Gifford's Evdaimon. "There are a hundred small things they've done to tweak and upgrade, and collectively, this makes Redwood the most efficient and innovative recycler for this industry in the Western world."
1.3 million EVs a year
Once these ingredients have been isolated, they would typically be shipped to Asia. Starting in early 2026, Redwood will instead combine them into CAM inside the massive white campus building that's being finished as I write this. (There's enough space to build several more of these facilities on site, depending on how Redwood's big bet goes.)
Redwood Materials' new CAM production facility
Emily Najera for BI
Redwood estimates that North America will need 12 million tonnes of CAM by 2030, none of which is currently produced at commercial scale outside of China, Korea, and Japan. The company expects to be the first in North America.
In the coming years, Redwood aims to churn out 100 gigawatt hours' worth of CAM per year. That would enable production of enough batteries to power 1.3 million EVs annually.
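A quick back-of-envelope check on those targets (my arithmetic, not Redwood's):

```python
# 100 GWh of CAM across 1.3 million EVs implies an average battery pack
# of roughly 77 kWh, in line with today's midsize EVs.
cam_gwh = 100
evs_per_year = 1_300_000
print(round(cam_gwh * 1_000_000 / evs_per_year), "kWh per vehicle")  # 77
```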
"What's beautiful about this business: So many partners and plants will take anything that Redwood makes right now. These ingredients are already in demand," said Evdaimon. "And the more processes you add up the value chain, the more valuable your product gets and the better the profit margins are. There are a lot of processes that Redwood is adding."
Making this stuff well is hard
Reclaiming nickel, cobalt, and other battery ingredients, rather than digging up virgin material from dirty, dangerous mines, is a heck of a lot more efficient. According to Redwood Materials, recycling uses 80% less energy, generates 70% fewer CO2 emissions, and requires 80% less water.
Actually making good CAM that meets battery manufacturers' high specifications, though, is really hard.
You must get just the right combination of nickel, manganese, cobalt, and lithium, along with tiny amounts of rare-earth minerals and other ingredients. That forms a lattice structure that has to be strong but also allows the free movement of lithium ions back and forth between the battery's cathode and anode as it charges and discharges electricity.
"At the end, it literally looks like a black powder that we put in big plastic bags and ship to the customer," Lankton said. "But the work that goes into making this is incredibly technical."
Three stages
There are three stages of Redwood's CAM journey. The first is the lab that I was allowed into. This is where the core chemical processes are devised and tested on a small scale.
Nearby is a demonstration plant that is a scaled-down version of Redwood's ultimate facility. Here, the company operates the same equipment and processes, such as rotating kilns, to prove its CAM production methods work consistently.
The final stage is doing all this on a massive scale in the giant CAM building down the hill from the lab and demo plant.
Emily Najera for BI
"We're in that middle phase, but we've derisked a lot of that final step," Lankton told me. "That's why we have a high degree of confidence in our ability to achieve our targets."
So confident, in fact, that Redwood already has multibillion-dollar contractual offtake agreements with Toyota and Panasonic for its CAM.
Copper foil and Northvolt
This endeavor is not just technically complex, though. There are other risks when you're competing with China's manufacturing might.
Redwood used to make copper foil, too, which is the main ingredient for the anode in batteries. However, Chinese companies began churning this out in massive quantities, and prices plummeted.
Lankton said blood, sweat, and quite a bit of capital went into the copper foil business, and he credited Straubel with the ability to make the painful decision to pause this part of the business.
"ThatΒ is, perhaps, a point of pride to differentiate us from our peers," Lankton added. "We're not dogmatic; we're not going to take a business plan and just do it until we think it works and run ourselves into the ground."
In March, Northvolt, a European battery startup, went bankrupt after chewing through $15 billion in funding.
Lankton said he admired Northvolt's ambitious plans but suggested the company may have gone too far by trying to make full batteries itself rather than just supplying materials, as Redwood does.
Tariff timing
While Northvolt's demise paints a dour picture for the battery business, other factors may be playing into Redwood's hands.
The Inflation Reduction Act of 2022 includes big incentives for US battery production. And this year, Donald Trump's tariff barrage favors domestic manufacturing.
Being the first major CAM producer in the US looks like a well-timed move right now. But Lankton said Redwood isn't resting on its laurels.
"Why should someone want to buy Redwood's CAM?" he asked me, rhetorically, in the company's Nickel office.Β
First, Redwood has to be competitive on price globally, irrespective of incentives, tariffs, or other artificial factors.
Second, the company must have the technical prowess to pull this off. "There's hardcore particle engineering that goes into this CAM," Lankton explained. "It has to be consistent with anything else one of our customers can buy. So we have to technically meet or exceed those specifications."
Then, finally, he conceded that there is value in having a reliable, sustainable, secure US source of battery material as questions swirl about the future of global trade.
"But if number one and number two aren't true, no one's gonna buy Redwood's CAM," he warned. Β
Tariffs are an attempt by Trump to reorder the global economy away from Chinese manufacturing.
Taiwan and TSMC are at the heart of the system of interconnected, global supply chains.
Ben Thompson says a Chinese invasion of Taiwan would make last week look tame.
In a world already shaken by economic shocks, from the pandemic to sweeping tariffs, there's one scenario that would dwarf them all: a Chinese invasion of Taiwan.
This is the hypothetical the analyst Ben Thompson floats in the latest edition of Stratechery, a newsletter widely followed by tech industry insiders.
An invasion wouldn't just be a military conflict; it would be a rupture in the global economic order that underpins everything from iPhones to inflation forecasts.
Thompson reckons China would prevail militarily. The larger story isn't about who controls Taipei; it's about what happens when the world's most essential supply chains are shattered. Taiwan isn't just any island. It's home to TSMC, the company responsible for manufacturing the most advanced semiconductors in the world. Take that away, and the modern tech ecosystem buckles, taking with it the digital infrastructure of daily life.
This is so important that there's even a theory quietly discussed among global security experts that Taiwan could threaten to destroy its giant chip fabrication facilities to deter China from invading.
Thompson argues that regardless of who wins militarily, the economic result is the same: China, the global factory, is effectively cut off. Taiwan's chip output vanishes. Global trade slows sharply. Inflation surges. Markets tank.
Recall the 2020 COVID-19 supply chain chaos or last week's market turmoil from Trump's tariff barrage, then magnify that impact many times over.
"A war over Taiwan," Thompson wrote on Monday, "would put all of these to shame."
Is war the only answer to resetting the system?
Of course, it's not clear whether, or when, China would invade Taiwan. And even if it did invade, it would face challenges, the defense writer Michael Peck says.
But what's particularly striking is Thompson's framing: War might not just break the system; it might also be the only way to reset it. The post-World War II economic order, rebooted by Bretton Woods and supercharged by China's entry into global markets, is showing its age. The US traded industrial capability for cheap goods and ballooning deficits. American manufacturing jobs vanished, supply chains lengthened, and the economic resilience of the US heartland hollowed out. (These are the areas where Vice President JD Vance's anti-globalization message really resonates, by the way.)
Trump's tariffs, clumsy, politically divisive, and economically painful as they are, might still be preferable to waiting for a war to force change. As Thompson notes, the system can't be fixed without a stomach for hard trade-offs, and America seems unwilling to take the hit until it's forced to.
So we return to the Taiwan question: whether China invades, and how close we come to it. The prudent path, Thompson suggests, may be to pull China deeper into economic interdependence, betting that shared prosperity can delay or prevent disaster. But if that gamble fails, the cost won't just be measured in GDP; it'll be measured in decades of lost stability, broken industries, and a new global order forged not in peace but in fire.
In March, Best Buy's CEO said prices would likely go up because of tariffs.
Thomson Reuters
Trump's big new tariffs on imports may lead to higher consumer electronics prices.
US tariffs on China, the world's largest electronics manufacturing hub, hit 54%.
Should US consumers buy electronics now to avoid potential price hikes?
I've been thinking about getting a new TV for a while.
There's a strange bug with our Samsung TV that keeps the volume stuck on a really loud setting. It inexplicably defaults to Newsmax when turned on, so we have to fumble with the flawed remote while a pundit shouts about immigrants.
I was ready for a new one but hadn't gotten around to it. This week, Donald Trump gave me another reason to buy now.
The US president unveiled draconian tariffs on imports from most countries on Wednesday. All in, China will have 54% tariffs. This is where most consumer electronics are still made. Among other tech manufacturing hubs, Vietnam got 46%. Taiwan got 32%, while South Korea and Malaysia were 25% and 24%, respectively.
These are such huge trade levies that it's hard not to expect prices to increase. Best Buy's CEO, Corie Barry, said during the company's March earnings call that Trump's tariff plans were likely to increase prices. And that was before the president really went all in.
Another example is Apple, which still assembles most of its iPhones and other hardware in China. On Thursday, tech analyst Dan Ives put some rough numbers on what might happen.
He warned that for US consumers, the reality of a $1,000 iPhone "would disappear" if Apple was forced to make these devices in the US instead of China.
"If consumers want a $3,500 iPhone we should make them in New Jersey or Texas or another state," Ives wrote.Β "If they are produced in the US will be 2x-3x more expensive."
Being half Scottish, I dislike the idea of stuff being more expensive, so I headed to my Silicon Valley Best Buy on Thursday morning to see if I could lock in any pre-tariff deals. It was surprisingly empty. Maybe tariff-wary shoppers had better things to do; the tariff memes wouldn't create themselves.
I picked out a Roku TV that was a discounted floor model. It was just over $200 with tax. On the back, it read "Assembled in China," so you might have to add 54% to this price in a few months.
It's not as simple as that, of course. Trump might roll some or all of these tariffs back as part of a grand negotiating strategy. There could be carve-outs and exceptions. Tim Cook, aka Tim Apple, has a pretty good track record of getting Apple special treatment.
Given these wrinkles, I spoke on Thursday to Les Shu. He oversees Business Insider's Tech Reviews department, which includes consumer electronics reviews and explainers.
Here's his advice about what to (not) panic buy:
Q: Do you expect Trump's new tariffs to impact the prices of consumer electronics in general in the US?
"We think existing products on shelf have already established pricing, so we don't anticipate that to change from the manufacturer standpoint. We could see prices go up for future products like the next iPhone."
Q: Which tech products stand out as potentially seeing the most pricing increases?
"Accessories and smaller electronics, particularly those from Asia, could see more immediate increases, particularly if they haven't already been imported and warehoused in the US. Once we get a better picture of the scale and when semiconductors are taxed, we may see it affecting more categories like TVs and computers and graphics cards."
Q: Which consumer electronics companies stand out as being particularly exposed?
"We aren't sure, but it's likely all companies will feel the effects since the majority of goods are made abroad."
Q: Which consumer electronics products would you suggest BI readers try to buy now before prices might increase?
"We don't believe consumers should panic buy. If they have been debating whether to purchase something and had been holding off, perhaps now is a good time to buy. We also think older products from 2024 still on sale would be an even better deal as they're likely unaffected by tariffs, like last-generation TVs. But again, don't buy for the sake of buying, as it's too soon to tell."
Q: If BI readers want to follow my lead and go looking for electronics deals, is there anything they should snap up now? A new Roku TV? Or a MacBook Air? Or maybe a new smartphone or speaker?
"As long as you were planning to buy something, any of those would be fair game. Given how quickly tech becomes obsolete, we don't know if you're going to have an advantage in purchasing now as a way to protect against higher tariffs. People will always want the latest tech, so they may be willing to pay the extra price."
James Doohan as Lt. Commander Montgomery "Scotty" Scott on Star Trek
CBS via Getty Images
If AI agents catch on, there may not be enough computing capacity.
AI agents generate many more tokens than chatbots, increasing computational demands.
More AI chips may be needed if AI agents grow, Barclays analysts warned.
In Star Trek, the Starship Enterprise had a chief engineer, Montgomery "Scotty" Scott, who regularly had to explain to Captain Kirk that certain things were impossible to pull off, due to practicalities such as the laws of physics.
"The engines cannae take it, Captain!" is a famous quote that the actor may actually not have said on the TV show. But you get the idea.
We may be approaching such a moment in the tech industry right now, as the AI agent trend gathers momentum.
The field is beginning to shift from relatively simple chatbots to more capable AI agents that can autonomously complete complex tasks. Is there enough computing power to sustain this transformation?
According to a recent Barclays report, the AI industry will have enough capacity to support 1.5 billion to 22 billion AI agents.
This could be enough to revolutionize white-collar work, but additional computing power may be needed to run these agents while also satisfying consumer demand for chatbots, the Barclays analysts explained in a note to investors this week.
It's all about tokens
AI agents generate far more tokens per user query than traditional chatbots, making them more computationally expensive.
Tokens are the language of generative AI and are at the core of emerging pricing models in the industry. AI models break down words and other inputs into numerical tokens to make them easier to process and understand. One token is about three-quarters of a word.
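You can check that ratio yourself with tiktoken, OpenAI's open-source tokenizer (the sample sentence here is mine):

```python
# Count tokens with tiktoken (pip install tiktoken). cl100k_base is an
# encoding used by several recent OpenAI models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "AI agents generate far more tokens per query than chatbots do."
tokens = enc.encode(text)
print(len(text.split()), "words ->", len(tokens), "tokens")
```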
More powerful AI agents may rely on "reasoning" models, such as OpenAI's o1 and o3 and DeepSeek's R1, which break queries and tasks into more manageable chunks. Each step in these chains of thought creates more tokens, which must be processed by AI servers and chips.
"Agent products run on reasoning models for the most part, and generate about 25x more tokens per query compared to chatbot products," the Barclays analysts wrote.
"Super Agents"
OpenAI offers a ChatGPT Pro service that costs $200 monthly and taps into its latest reasoning models. The Barclays analysts estimated that if this service used the startup's o1 model, it would generate about 9.4 million tokens per year per subscriber.
There have been media reports recently that OpenAI could offer even more powerful AI agent services that cost $2,000 a month or even $20,000 a month.
The Barclays analysts referred to these as "super agents," and estimated that these services could generate 36 million to 356 million tokens per year, per user.
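For a rough sense of where numbers like these come from, here's an illustrative reconstruction. The per-query and usage figures are my assumptions; only the 25x multiplier comes from the Barclays note.

```python
# Back-of-envelope agent token demand.
chatbot_tokens_per_query = 1_000  # assumption: a typical chatbot response
agent_multiplier = 25             # Barclays: agents generate ~25x more tokens
queries_per_day = 1               # assumption: one agent task per day
tokens_per_year = chatbot_tokens_per_query * agent_multiplier * queries_per_day * 365
print(f"{tokens_per_year:,} tokens per user per year")  # 9,125,000
```

That lands in the same ballpark as the 9.4 million-token estimate above, which is the point: modest-sounding daily usage compounds into enormous annual token counts.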
More chips, Captain!
That's a mind-blowing number of tokens, and it would consume a mountain of computing power.
The AI industry is expected to have 16 million accelerators, a type of AI chip, online this year. Roughly 20% of that infrastructure may be dedicated to AI inference, essentially the computing power needed to run AI applications in real time.
If agentic products take off and are very useful to consumers and enterprise users, we will likely need "many more inference chips," the Barclays analysts warned.
The tech industry may even need to repurpose some chips that were previously used to train AI models and use those for inference, too, the analysts added.
They also predicted that cheaper, smaller, and more efficient models, like those developed by DeepSeek, will have to be used for AI agents, rather than pricier proprietary models.
An image generated by OpenAI's 4o tool showing an older artist being angry at a young tech executive
Pranav Dixit/OpenAI's 4o tool
OpenAI's new 4o tool generates Ghibli-style images on request, via the paid version of ChatGPT.
The free version of ChatGPT, which uses OpenAI's older DALL-E 3 tool, refuses to create such images.
The free ChatGPT said it can't do this because Ghibli "is a copyrighted animation studio, and its artistic style is protected."
When AI-generated Ghibli-style images started popping up on social media this week, I contacted OpenAI.
The startup had just launched a new image-generation tool called 4o, a powerful upgrade from its DALL-E 3 service. Users started asking ChatGPT for images in the style of the famed Japanese animation house Studio Ghibli. And the new 4o obliged.
I tried it myself, using the free version of ChatGPT, and got a much different response: "I wasn't able to generate the image in the style of Studio Ghibli due to content policy restrictions," OpenAI's chatbot told me.
Why was OpenAI letting 4o users do this while refusing my similar requests on the basis of "content policy"? I asked an OpenAI spokesperson. She responded on Wednesday with this explanation, which cited an update to OpenAI's system card, the document that lays out the details of new models and tools like 4o.
"We added a refusal which triggers when a user attempts to generate an image in the style of a living artist," the company said in this document. The OpenAI spokesperson added that the company continued to prevent "generations in the style of individual living artists" but did permit "broader studio styles."
Hayao Miyazaki, the artist who cofounded Studio Ghibli, is still alive. So using 4o to generate images in his style would seem to be off-limits. However, this is a big studio, so maybe these images fall under the "broader studio" policy that the OpenAI spokesperson described.
Either way, it's clear that OpenAI has made a major change in its approach to copyright and image generation lately.
On Thursday, my colleague Pranav Dixit and I tested this out to show how OpenAI's technology treats similar requests differently, depending on which image-generation tool you use.
Pranav used the paid ChatGPT service, which comes with the new 4o tool. He asked for images in the style of Studio Ghibli. The chatbot created several, including the one at the top of this story. It shows an older artist being angry at a younger tech executive who looks a bit like OpenAI CEO Sam Altman. Weird coincidence!
Pranav then went down the technology rabbit hole, which he enjoys doing. (A good trait for a tech reporter.) He got 4o to churn out several more images in the Ghibli style, like this one.
An older artist holds his head in his hands
Pranav Dixit/OpenAI's 4o tool
I tried similar Ghibli-style requests on Thursday using the free ChatGPT service, which comes with OpenAI's older DALL-E 3 image-generation tool.
The tool refused my requests, citing copyright rules. Here's what ChatGPT told me:
"I can't generate images in the style of Studio Ghibli because it is a copyrighted animation studio, and its artistic style is protected."
You can't be clearer than that. OpenAI won't do this because it would infringe Studio Ghibli's copyright.
And yet, another OpenAI tool is quite happy to generate these types of images. So what gives?
I asked OpenAI's spokesperson if this is a double standard. Or has the company changed its approach to copyright recently? Or maybe it has struck a content deal with Studio Ghibli?
OpenAI didn't respond to these questions on Thursday afternoon. Studio Ghibli, which is based in Tokyo, Japan, also didn't respond to a request for comment from Business Insider late on Wednesday, US time.
If we get any more answers on this confusing situation, we'll write another story.
Either way, this is probably a great way to get users to upgrade to the paid version of OpenAI's ChatGPT service. I'm still grumpy that Pranav can generate better images than me. Here's the one I managed to get out of the free version.
Elon Musk, CEO of Tesla, wielded a chainsaw at the Conservative Political Action Conference.
Andrew Harnik/Getty Images
Tesla is expected to release first-quarter vehicle production and delivery numbers on April 2.
The Wedbush analyst Dan Ives expects deliveries to drop 7% versus the same period a year earlier.
Ives said Tesla CEO Elon Musk's DOGE antics were partly to blame for the expected sales woes.
Elon Musk has been taking a chainsaw to government spending. Later on Wednesday, we'll get an idea of how much these antics have chopped Tesla sales.
The largest US electric vehicle company is expected to release first-quarter vehicle production and delivery numbers on April 2.
This is the first time we'll get a full, official look at Tesla sales since Musk went full DOGE when President Donald Trump took office in late January.
A well-known Tesla bull just shared his expectations for these numbers and estimated how much Musk's DOGE activity might have hurt sales.
"Musk leading DOGE has essentially taken on a life of its own as in the process Tesla has unfortunately become a political symbol globally," Dan Ives, an analyst at Wedbush Securities, wrote in a recent note to clients. He pointed to protests, demonstrations at Tesla dealerships, and keyed cars.
A 'brand tornado crisis moment'
He expects first-quarter Tesla deliveries of 355,000 to 360,000 vehicles, down about 7% from the same period a year earlier.
Just a few months ago, Wall Street expected more than 400,000 Teslas to be delivered in the first quarter, so some of the DOGE impact has already been discounted, Ives wrote.
Existing data suggests that Tesla's sales numbers in Europe have been under "major pressure," while there's also been "demand softness" in the US and China, the analyst wrote.
"This continues to be a moment of truth for Musk to navigate this brand tornado crisis moment and get onto the other side of this dark chapter for Tesla with much better days ahead we see for the story," Ives said.
How much is Musk's fault?
Ives attributed the sales woes to several issues that might be unrelated to Musk's DOGE exploits, such as consumers waiting for an updated Model Y and a lower-cost new car that may come later in 2025.
He still conceded that anti-Musk sentiment and "brand issues" were causing problems, calling them "a major factor in this weak 1Q delivery number."
He estimated that 30% of next week's expected soft Q1 delivery number would be related to "Musk/brand/DOGE," with the other 70% involving the timing of new or updated products and "non-brand headwind issues."
Michael Intrator, the CEO of CoreWeave, is on the cusp of a big initial public offering.
Bruno de Carvalho/SOPA Images/LightRocket via Getty Images
Rapid AI advancements may reduce the useful life of CoreWeave's Hopper-based Nvidia GPUs.
CoreWeave has a lot of Hopper-based GPUs, which are becoming outdated due to the Blackwell rollout.
Amazon recently cut the estimated useful life of its servers, citing AI advancements.
I recently wrote about Nvidia's latest AI chip-and-server package and how this new advancement may dent the value of the previous product.
Nvidia's new offering likely caused Amazon to reduce the useful life of its AI servers, which took a big chunk out of earnings.
Other Big Tech companies, such as Microsoft, Google, and Meta, may have to take similar tough medicine, according to analysts.
This issue might also impact Nvidia-backed CoreWeave, which just completed its IPO; its shares listed on Friday under the ticker "CRWV." The company is a so-called neocloud, specializing in generative AI workloads that rely mostly on Nvidia GPUs and servers.
Like its bigger rivals, CoreWeave has been buying oodles of Nvidia GPUs and renting them out over the internet. The startup had deployed more than 250,000 GPUs by the end of 2024, per its filing to go public.
These are incredibly valuable assets. Tech companies and startups have been jostling for the right to buy Nvidia GPUs in recent years, so any company that has amassed a quarter of a million of these components has done very well.
There's a problem, though. AI technology is advancing so rapidly that it can make existing gear obsolete, or at least less useful, more quickly. This is happening now as Nvidia rolls out its latest AI chip-and-server package, Blackwell. It's notably better than the previous version, Hopper, which came out in 2022.
Veteran tech analyst Ross Sandler recently shared a chart showing that the cost of renting older Hopper-based GPUs has plummeted as the newer Blackwell GPUs become more available.
A chart showing the cost of renting Nvidia H100 GPUs
Ross Sandler/Barclays Capital
The majority of CoreWeave's deployed GPUs are based on the older Hopper architecture, according to its latest IPO filing from March 20.
Sometimes, in situations like this, companies have to adjust their financials to reflect the quickly changing product landscape. This is done by reducing the estimated useful life of the assets in question. Then, through depreciation, the value of those assets is written down over a shorter period to reflect things like wear and tear and, ultimately, obsolescence. The faster the depreciation, the bigger the hit to earnings.
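To see how that math works, here's a minimal sketch with entirely hypothetical numbers. Real-world accounting involves salvage values, staggered purchase dates, and asset subsets, but the direction of the effect is the same.

```python
# Straight-line depreciation on a hypothetical GPU fleet.
# The $10 billion figure is invented for illustration; it is not CoreWeave's.
FLEET_COST = 10_000_000_000

def annual_depreciation(cost: float, useful_life_years: float) -> float:
    """Spread the asset's cost evenly over its estimated useful life."""
    return cost / useful_life_years

six_year_expense = annual_depreciation(FLEET_COST, 6)   # ~$1.67B per year
five_year_expense = annual_depreciation(FLEET_COST, 5)  # $2.00B per year

# Cutting the estimated life by one year adds ~$333M of expense every year,
# and that extra expense comes straight out of operating income.
print(f"Extra annual hit to earnings: ${five_year_expense - six_year_expense:,.0f}")
```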
Amazon's AI-powered depreciation
Amazon, the world's largest cloud provider, just did this. On a recent conference call with analysts, the company "observed an increased pace of technology development, particularly in the area of artificial intelligence and machine learning."
That caused Amazon Web Services to decrease the useful life of some of its servers and networking gear from six years to five years, beginning in January.
Sandler, the Barclays analyst, thinks other tech companies may have to do the same, which could cut operating income by billions of dollars.
Will CoreWeave have to do the same, just as it's trying to pull off one of the biggest tech IPOs in years?
I asked a CoreWeave spokeswoman about this, but she declined to comment. This is not unusual, because companies in the midst of IPOs have to follow strict rules that limit what they can say publicly.
CoreWeave's IPO risk factor
CoreWeave talks about this issue in its latest IPO filing, writing that the company is always upgrading its platform, which includes replacing old equipment.
"This requires us to make certain estimates with respect to the useful life of the components of our infrastructure and to maximize the value of the components of our infrastructure, including our GPUs, to the fullest extent possible."
The company warned those estimates could be inaccurate. CoreWeave said its calculations involve a host of assumptions that could change and infrastructure upgrades that might not go according to plan β all of which could affect the company, now and later.
This caution is normal because companies have to detail everything that could hit their bottom line, from pandemics to cybersecurity attacks.
As recently as January 2023, CoreWeave was taking the opposite approach, according to its IPO filing. The company increased the estimated useful life of its computing gear from five years to six years. That change reduced expenses by $20 million and boosted 2023 earnings by 10 cents a share.
If the company now follows AWS and reduces the useful life of its gear, that might dent earnings. Again, CoreWeave's spokeswoman declined to comment, citing IPO rules.
An important caveat: Just because one giant cloud provider made an adjustment like this, it doesn't mean others will have to do the same. CoreWeave might design its AI data centers differently, for instance, somehow making Nvidia GPU systems last longer or become obsolete less quickly.
It's also worth noting that other big cloud companies, including Google, Meta, and Microsoft, have increased the estimated useful life of their data center equipment in recent years.
Google and Microsoft's current estimates are six years, like CoreWeave's, while Meta's is 5.5 years.
However, Sandler, the Barclays analyst, thinks some of these big companies will follow AWS and shorten these estimates.
Nvidia's new Blackwell GPUs mean the older Hopper models are less useful, affecting cloud giants.
Rapid tech advancements may force cloud giants to adjust asset depreciation, denting earnings.
Amazon leads in adjusting server lifespan. Meta and Google could see profit hits.
Nvidia CEO Jensen Huang made a joke this week that his biggest customers probably won't find funny.
"I said before that when Blackwell starts shipping in volume, you couldn't give Hoppers away," he said at Nvidia's big AI conference Tuesday.
"There are circumstances where Hopper is fine," he added. "Not many."
He was talking about Nvidia's latest AI chip-and-server package, Blackwell. It's notably better than the previous version, Hopper, which came out in 2022.
Big cloud companies, such as Amazon, Microsoft, and Google, buy a ton of these GPU systems to train and run the giant models powering the generative AI revolution. Meta has also gone on a GPU spending spree in recent years.
These companies should be happy about an even more powerful GPU like Blackwell. It's generally great news for the AI community. But there's a problem, too.
AI obsolescence
When new technology like this improves at such a rapid pace, the previous versions become obsolete, or at least less useful, much faster.
This makes these assets less valuable, so the big cloud companies may have to adjust. This is done through depreciation, where the value of assets is reduced over time to reflect things like wear and tear and, ultimately, obsolescence. The faster the depreciation, the bigger the hit to earnings.
Ross Sandler, a top tech analyst at Barclays, warned investors on Friday that the big cloud companies and Meta will probably have to make these adjustments, which could significantly reduce profits.
"Hyperscalers are likely overstating earnings," he wrote.
Google and Meta did not respond to Business Insider's questions about this on Friday. Microsoft declined to comment.
Amazon takes the plunge first
Take the example of Amazon Web Services, the largest cloud provider. In February, it became the first to take the pain.
CFO Brian Olsavsky said on Amazon's earnings call last month that the company "observed an increased pace of technology development, particularly in the area of artificial intelligence and machine learning."
"As a result, we're decreasing the useful life for a subset of our servers and networking equipment from 6 years to 5 years, beginning in January 2025," Olsavsky said, adding that this will cut operating income this year by about $700 million.
Then, more bad news: Amazon "early-retired" some of its servers and network equipment, Olsavsky said, adding that this "accelerated depreciation" cost about $920 million and that the company expects it will decrease operating income in 2025 by about $600 million.
A much larger problem for others
Sandler, the Barclays analyst, included a striking chart in his research note on Friday. It showed the cost of renting H100 GPUs, which use Nvidia's older Hopper architecture. As you can see, the price has plummeted as the company's new, better Blackwell GPUs became more available.
A chart showing the cost of renting Nvidia H100 GPUs.
Ross Sandler/Barclays Capital
"This could be a much larger problem at Meta and Google and other high-margin software companies," Sandler wrote.
For Meta, he estimated that a one-year reduction in the useful life of the company's servers would increase depreciation in 2026 by more than $5 billion and chop operating income by a similar amount.
For Google, a similar change would knock operating profit by $3.5 billion, Sandler estimated.
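As a rough sanity check on estimates like these (my own arithmetic, not Sandler's model), you can back out the depreciable server base that such a number implies from the disclosed useful lives.

```python
# Back out the asset base implied by a depreciation estimate.
# This is illustrative arithmetic, not Barclays' actual methodology.

def implied_asset_base(extra_depreciation: float, old_life: float, new_life: float) -> float:
    """Solve C/new_life - C/old_life = extra_depreciation for C."""
    return extra_depreciation / (1 / new_life - 1 / old_life)

# Meta discloses a ~5.5-year server life; a one-year cut takes it to 4.5.
# Sandler's estimate: that change adds ~$5B of depreciation in 2026.
base = implied_asset_base(5_000_000_000, old_life=5.5, new_life=4.5)
print(f"Implied depreciable server base: ${base / 1e9:.0f}B")  # ~$124B
```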
An important caveat: Just because one giant cloud provider has already made an adjustment like this, it doesn't mean the others will have to do exactly the same thing. Some companies might design their AI data centers differently, somehow making Nvidia GPU systems last longer or become obsolete less quickly.
The time has come
When the generative AI boom was picking up steam in the summer of 2023, Bernstein analysts were already starting to worry about this depreciation issue.
"All those Nvidia GPUs have to be going somewhere. And just how quickly do these newer servers depreciate? We've heard some worrying timetables," they wrote in a note to investors at the time.
One Bernstein analyst, Mark Shmulik, discussed this with my colleague Eugene Kim.
"I'd imagine the tech companies are paying close attention to GPU useful life, but I wouldn't expect anyone to change their depreciation timetables just yet," he wrote in an email to BI at the time.
Nvidia CEO Jensen Huang delivers the keynote address during the GTC 2025 conference in San Jose.
Justin Sullivan/Getty Images
Nvidia CEO Jensen Huang predicts companies will become AI factories that generate tokens.
Tokens are numerical representations used by AI models to process and understand data.
Here's what the Nvidia CEO means by an "AI factory."
In his keynote at Nvidia's AI conference this week, CEO Jensen Huang predicted every company will become an "AI factory."
It's a big idea that could make businesses of all types successful in the future. So it's worth explaining.
I first heard about this last year when chatting with Guillermo Rauch, CEO of AI startup Vercel. He explained the role of tokens in artificial intelligence, and noted that Huang likes to say "every company will become a token factory."
'Everything's token'
If data is the raw material of generative AI, tokens are the language. AI models break down words and other inputs into numerical tokens to make them easier to process and understand. One token is about three-quarters of a word.
An example from Nvidia: The word "darkness" might be tokenized into the numbers 271 for "dark" and 655 for "ness." The opposite word, "brightness," would be represented by 491 and 655. This way, the AI model can spot the same 655 number twice and understand that these words are related.
Trillions of these numbers, or tokens, are used to train AI models like this, then fine-tune and run them. Instead of "everything's computer," you might say that "everything's token" in AI.
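You can watch this happen with OpenAI's open-source tiktoken library. Note that real token IDs differ from the illustrative numbers in Nvidia's example, and common words often survive as a single token.

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

for word in ["darkness", "brightness"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]  # map each ID back to its text chunk
    print(word, "->", ids, pieces)

# Rough check of the "one token is about three-quarters of a word" rule of thumb:
text = "Tokens are the raw language of generative AI models."
print(len(text.split()), "words ->", len(enc.encode(text)), "tokens")
```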
'One job and one job only'
Huang, Rauch, and other technologists think modern companies will succeed by generating the most tokens. They will be AI factories churning out tokens that will be used to improve and run AI systems that help businesses make better products and services.
"I call them AI factories," Huang said on Tuesday at the GTC conference. "They're AI factories because they have one job and one job only β generating these incredible tokens that we then reconstitute into music, into words, into videos, into research, into chemicals or proteins."
He said these token-generating facilities will sometimes sit alongside companies' more traditional operations.
"Every industry, every company that has factories will have two factories in the future," Huang predicted. "The factory for what they build and the factory for the mathematics, the factory for the AI."
He cited auto manufacturing as an example, describing "a factory for cars" and "a factory for AI for the cars."
He then put some meat on those theoretical bones by announcing a partnership with General Motors in which Nvidia will help that company use AI to manufacture cars while also making GM vehicles more autonomous.
Does Tesla make cars or token-generating machines?
Jason Liu, a machine-learning engineer and AI consultant, made a similar point about Tesla's electric vehicles and Elon Musk's goal of making them fully autonomous.
When a Tesla is driven around a city, it has sensors that collect mountains of information about its surroundings. That data is collected and turned into tokens that are used to improve Tesla's AI models. That, in theory, produces better self-driving software to guide the vehicles more accurately and safely.
"In an AI world, most companies' roles will be to generate more data," Liu said.
He argued that Tesla's approach of putting as many cars as possible on the road to scoop up as much data as possible has been better than Waymo's strategy, "where engineers sat in a cave for years working on this in relative isolation, not collecting as much data, or tokens."
Making better business decisions
Liu shared another example, this time theoretical: How can companies make better business decisions by looking back at their process and tokenizing it?
"For any major decision, there's probably six months of back and forth debate between employees on Slack chats, Zoom video meetings, board meetings, and data dashboards," he said.
Companies can now turn all that into tokens and use that to train an AI system to make better decisions in the future or to help human executives and employees make better decisions next time.
"The job of the company and the software is to pull all that out of the humans involved and turn that into tokens for AI training," he added.Β
Token factory examples
Rauch says Vercel is doing this with its v0 tool, which helps developers and non-technical people build websites and applications.
"v0 takes in user requirements in English and outputs an application," Rauch explained. "Those are our tokens."
He also cited a Vercel customer called OpenEvidence, which uses AI to synthesize mountains of medical research into digestible information for busy medical professionals.
"Their tokens are the research data that doctors need in order to make better decisions," Rauch said. "It's medical intelligence tokens."
Liu cited the example of Mercor, a startup that is hiring technical PhDs to harvest their specific knowledge and turn that into tokens that are used by AI labs to improve their models.
"The job of every company will become to produce intelligence, like a token factory," Rauch said. "Companies build up institutional knowledge over time, they accrue best practices, operational principles and procedures, training manuals, brand guidelines, and even taste. All of that will become part of the pre- and post-training of AI models and the data that gets inserted on top."
Apple is becoming a utility, which is hard for fanboys to accept, though it's not all bad.
The iPhone has become the standard tool for accessing online data and running our lives. Most owners don't care about cutting-edge AI or the latest speedy chip. They just need the device to keep working.
Apple fans have been aflutter lately over the company's decision to postpone new AI features, with some analysts predicting lower iPhone sales as a result.
That may be important for a tiny fraction of customers, such as Apple bloggers and the odd person who absolutely needs the latest and greatest iPhone. Everyone else doesn't much care. We mostly want the battery to last all day, our apps to run, our texts to go through, and a camera that just works. This can be done with any iPhone, and it doesn't need AI.
Similar to an electric utility, we rely on the iPhone a lot. Most of us don't think about it passionately until something goes wrong. That's usually the battery degrading. Or when Apple updates the operating system, and a few older iPhones no longer work. Or you damage it beyond repair, or it gets stolen.
This drives an iPhone upgrade cycle that has become incredibly powerful and has little to do with fancy new AI features. Eventually, every one of the roughly 1.5 billion iPhones out there will have to be replaced.
"300 million iPhones have not upgraded in over four years," Dan Ives, an analyst at Wedbush Securities, wrote recently in an email to Business Insider. "That's a lot of pent-up demand."
This is the main driver of iPhone sales now. It's not that sexy, and it's not high growth. Instead, it's big and steady, like a utility.
A chart showing iPhone unit sales.
Dan Morgan, Synovus, and data compiled by Bloomberg. Red = estimate.
The chart above, based on data compiled by Dan Morgan, a portfolio manager at Synovus, shows that iPhone sales have flatlined for a decade.
That seems bad. However, Apple shares have soared sevenfold in the last decade, massively outperforming the broader stock market. It's the most valuable public company in the world. By a lot.
During that same time, Siri β Apple's main AI offering β has sucked pretty hard. How much did that derail the iPhone upgrade train? Not much, judging by the data above.
Apple did not respond to a request for comment from BI.
Why Warren Buffett likes utilities and Apple
Apple CEO Tim Cook walks with Astrid Buffett, Warren Buffett's wife, during the Allen & Company Sun Valley Conference in Idaho.
Kevin Dietsch/Getty Images
Warren Buffett, a massive utility investor, remains one of Apple's largest shareholders. In his latest letter to Berkshire Hathaway shareholders, he listed Apple alongside other unsexy, non-AI companies, such as American Express, Coca-Cola, and Moody's.
He seems blissfully uninterested in the latest whizz-bang Apple technology and how that might help sell more iPhones in a given quarter.
"The idea that spending loads of time trying to guess how many iPhone X or whatever are going to be sold in a given three-month period, to me, it totally misses the point," Buffett said in 2018.
Buffett likes Apple for similar reasons he loves utilities and other vaguely boring, profitable companies. Many people need what these companies sell, which helps them generate large, reliable returns, including juicy dividends.
'Panic' on the streets of Silicon Valley
A panicked human. Where's his iPhone?
izusek/Getty Images
Electricity. Water. An iPhone. These are essential ingredients for modern life. Without them, things can get difficult quickly. Take my family's experience as an example.
My wife's iPhone was stolen last year. She only waited a few hours before driving to the nearest Apple store to hand over almost $1,000 for a replacement. She described the thought of existing without an iPhone as "panic."
First up: She used two-factor authentication via her iPhone to access sensitive work documents. She couldn't work without a new Apple device. (She's added a backup now. Guess what it is? An iPad!)
Then, a long list of other practical tasks revealed themselves to be difficult without an iPhone: Booking an exercise class, texting a group of friends, keeping streaks going on mobile games, and accessing the iPhone's digital wallet for tickets she needed soon.
My daughter broke her iPhone beyond repair last year. She experienced similar panic and bought a replacement from Apple within 24 hours.
She uses a mobile app on her iPhone to authenticate and access her college accounts for submitting homework and doing other study-related stuff. She also had a plane to catch and was worried about not having the digital ticket.
My wife and daughter both worried about the data on their iPhones and thought it would be safer and easier to get a new iPhone to reclaim this digital information.
This year, my iPhone SE's battery started running low, and I struggled to hear people well on phone calls. These problems were fixable and probably related to user error. I could have gotten a new battery from Apple, for instance, yet I just could not be bothered. So I bought a new iPhone 16e. It was so easy, and my life was uninterrupted. I did not think about AI once during this process.
Android switching doesn't really happen anymore
My wife, my daughter, and I did not ever once consider getting an Android phone instead. For many non-technical consumers, switching is too complex.
My wife knew her stolen iPhone was backed up in Apple's iCloud, so getting the same device again made the replacement process much quicker and smoother.
"I knew that if I bought another one, it would be as painless as possible to access my photos, wallet, apps, contacts, etc," she told me. "I was already pained by the stolen iPhone. Switching to Android at that moment made no sense. I'd have been breaking myself into jail."
A 2023 study by Consumer Intelligence Research Partners estimated that only about 4% of Android phone buyers switched from an iPhone. The authors wrote that a big reason was "fear of the complexity of switching."
"There's not a lot of iOS-to-Android switching," said Josh Lowitz, a partner at CIRP. "Switching between platforms is not that difficult, but the habits and iOS-specific apps and connections are hard to leave."
This is why a year or so of delay in AI features won't matter much. When everyone's iPhone stops working, they will buy another, with or without AI.
So here's something to watch: how many old iPhones will Apple stop supporting? The company does an amazing job keeping these devices running for years, but after a certain point, the hardware just can't keep up.
That's when some handsets are dropped from iOS. Later this year, probably in September, Apple will do this again, and millions of people will have to pay to upgrade, just like I have to keep paying PG&E to pump electricity and natural gas into my house.
All is not lost
This is a long way from the cutting-edge technology that Apple fanboys love to nerd out about. There hasn't been much to get excited about lately on that front. Apple scrapped its car project. The Vision Pro mixed-reality goggles have flopped so far. And now, these AI delays show how far Apple lags behind Google, Meta, and OpenAI.
All is not lost, though. If Apple were a true utility, regulators would cap what it charges customers, which would probably crush the stock.
Instead, Apple can charge what it wants for iPhones. Take the latest example. The cheapest version of the new iPhone 16e costs $599, which is $170 more than the $429 entry-level iPhone SE it replaced. That's 40% inflation in the base cost to access your digital life.
"Without a new iPhone SE, current iPhone SE owners will likely hold on to their phones a little longer," said CIRP's Lowitz. "When they need to get a new phone, they will likely buy the most affordable new iPhone."
What an incredible business. No wonder Apple has become the world's most valuable company β with or without AI.
Nvidia CEO Jensen Huang at Nvidia's GTC AI conference on March 18.
Emma Cosgrove/Business Insider
Nvidia's GTC AI event featured a pop-up Denny's, drawing lines of attendees.
Nvidia CEO Jensen Huang even wore a Denny's apron.
It's all about Silicon Valley's origin stories.
There's nothing better than a Lumberjack Slam with your GPUs in the morning.
Denny's offerings greeted attendees at Nvidia's big GTC AI event on Tuesday in San Jose, California.
A pop-up Denny's stood outside, with a line of people waiting to be served.
Conference-goers line up outside a Denny's pop-up restaurant outside Nvidia's GTC AI event.
Emma Cosgrove/Business Insider
There were also "Nvidia Breakfast Bytes," a play on the word for a unit of digital information.
A Denny's sign outside Nvidia's GTC AI conference
Emma Cosgrove/Business Insider
When he arrived, Nvidia CEO Jensen Huang even wore a kitchen apron emblazoned with Denny's corporate colors and logos.
The guy likes to cook and has broadcast announcements from his palatial kitchen in Silicon Valley before, complete with an overly large collection of colored spatulas.
But why Denny's? As a billionaire, he could dine at Nobu rather than a breakfast diner.
It's all about Silicon Valley's love of a gritty origin story. Founders of massive, wealthy tech giants love to hearken back to the good old days when their companies were scrappy startups. It keeps the troops humble, hungry, and focused on inventing new things rather than sitting back on their digital laurels.
Amazon founder Jeff Bezos has "Day 1." Google founders Larry Page and Sergey Brin like to recall starting the search giant in former YouTube CEO Susan Wojcicki's garage. Meta CEO Mark Zuckerberg started Facebook in a college dorm room.
Jensen has Denny's. He and the other Nvidia cofounders, Chris Malachowsky and Curtis Priem, came up with the idea for the company over Grand Slam breakfasts and too many cups of coffee in a Denny's in San Jose.
It's the 2484 Berryessa Road location in San Jose if you want to go sometime.
Generative AI is making it much easier to create online content.
There's so much more to consume and only so many hours in a day.
The final large chunk of time left is sleep.
There are only so many hours in the day, yet tech companies, video streamers, creators, and influencers are producing more and more online content for us to consume.
Generative AI is chipping away at the final barriers to content creation as I write this. At some point, a wall of infinite content will crash into the limits of time. Sleep has become the last major chunk of our day that remains (mostly) free of digital media. So far.
"While it appears that everyone has been cutting down on sleep and dedicating more time to their screens, there is a limit to how much time people can spend consuming digital content," the Bernstein internet analysts wrote in a recent note to investors.
In about 15 years, we've added about six hours a day to our digital consumption habit, according to EMARKETER, part of Business Insider.
Next year, Americans will spend more than eight hours a day with digital media of various kinds, according to EMARKETER estimates. This data excludes people younger than 18, so the total number could actually be higher.
The recommended amount of sleep is eight hours a night. So that's a total of more than 16 hours, leaving less than eight hours for other activities, such as eating, exercise, love-making, pooping, and talking to other humans IRL.
If this type of competitive pressure keeps building, sleep could be Big Tech's next frontier.
As Netflix cofounder Reed Hastings said back in 2017, "We're competing with sleep."
It's hard to know what form new nighttime digital media might take, although there's already sleep-tracking technology, plus apps that claim to help you fall asleep with meditation and special calming sounds.
Competing with sleep
An important caveat is that this data includes multitasking.
For instance, I could have been watching an online video of Lewis Hamilton, Ferrari's new Formula 1 driver, while sending Slack messages to my EMARKETER colleagues about that digital consumption data. If I spent an hour doing both these things, that would count as two hours. (I would never watch videos like this at work. Just a theoretical example!)
Still, the number of spare hours to consume even more digital media is shrinking. As content volume soars, we'll have to make more choices, which could mean some services are dropped more often.
For example, the Bernstein analysts wrote that increasing engagement on free social video apps could eventually cut into the audience numbers of paid video-streaming services.
"At some point, the pie stops growing, and the slice starts to shrink," the Bernstein analysts wrote.
A contestant holds a pair of chinchillas at the Fourth Annual Chinchilla Show in New York.
Getty Images
Chinchillas are cuddly and cute.
Chinchilla is also an established way to build huge AI models using mountains of data.
There's at least $3 trillion riding on whether this approach continues or not.
About five years ago, researchers at OpenAI discovered that combining more computing power and more data in ever-larger training runs produces better AI models.
A couple of years later, researchers at Google DeepMind found that, for the same computing budget, training a smaller model on much more data produces even better results. They showed this by building a new AI model called Chinchilla.
These revelations helped create large language models and other giant models, like GPT-4, that support powerful AI tools such as ChatGPT. Yet in the future, the "Chinchilla" strategy of smashing together oodles of computing and mountains of data into bigger and longer pre-training runs may not work as well.
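For reference, the Chinchilla result is usually boiled down to a rule of thumb, roughly 20 training tokens per model parameter, plus a fitted loss curve. The sketch below uses constants close to the paper's published fits (Hoffmann et al., 2022); treat the exact values as approximate.

```python
# Chinchilla-style scaling, sketched from the paper's fitted constants.
# Values are approximate and quoted from memory.
E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def loss(params: float, tokens: float) -> float:
    """Predicted pre-training loss for a model of `params` parameters
    trained on `tokens` tokens."""
    return E + A / params**alpha + B / tokens**beta

def chinchilla_optimal_tokens(params: float) -> float:
    """Rule of thumb from the paper: ~20 training tokens per parameter."""
    return 20 * params

n = 70e9  # Chinchilla itself: 70B parameters
print(f"Compute-optimal tokens: {chinchilla_optimal_tokens(n):.2e}")  # ~1.4e12
print(f"Predicted loss: {loss(n, chinchilla_optimal_tokens(n)):.3f}")
```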
So what if this process doesn't end up being how AI is made in the future? To put it another way: What if the Chinchilla dies?
Building these massive AI models has so far required huge upfront investments. Mountains of data are mashed together in an incredibly complex and compute-intensive process known as pre-training.
This has sparked the biggest wave of infrastructure upgrades in technology's history. Tech companies across the US and elsewhere are frantically erecting energy-sucking data centers packed with Nvidia GPUs.
The rise of new "reasoning" models has opened up a new potential future for the AI industry, where the amount of required infrastructure could be much less. We're talking trillions of dollars of capital expenditure that might not happen in coming years.
Recently, Ross Sandler, a top tech analyst at Barclays Capital, and his team estimated the different capex requirements of these two possible outcomes:
The "Chinchilla" future is where the established paradigm of huge computing and data-heavy pre-training runs continue.
The "Stall-Out" alternative is one in which new types of models and techniques require less computing gear to produce more powerful AI.
The difference is stunning in terms of how much money will or will not be spent. $3 trillion or more in capex is on the line here.
These new models use an approach called test-time or inference-time compute, which slices queries into smaller tasks, turning each into a new prompt that the model tackles.
Reasoning models often don't need massive, intense, long pre-training runs to be created. They may take longer to respond, but their outputs can be more accurate, and they can be cheaper to run, too, the Barclays analysts said.
The analysts said that DeepSeek's R1 has shown how open-source reasoning models can drive incredible performance improvements with far less training time, even if this AI lab may have overstated some of its efficiency gains.
"AI model providers are no longer going to need to solely spend 18-24 months pre-training their next expensive model to achieve step-function improvements in performance," the Barclays analysts wrote in a recent note to investors. "With test-time-compute, smaller base models can run repeated loops and get to a far more accurate response (compared to previous techniques)."
Mixture of Experts
Another photo of a chinchilla
Thomson Reuters
When it comes to running new models, companies are embracing other techniques that will likely reduce the amount of computing infrastructure needed.
AI labs increasingly use an approach called mixture of experts, or MoE, where smaller "expert" models are trained on their tasks and subject areas and work in tandem with an existing huge AI model to answer questions and complete tasks.
In practice, this often means only part of these AI models is used, which reduces the computing required, the Barclays analysts said.
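Here's a toy version of that routing idea, with made-up dimensions. A router scores all the experts for each input, but only the top-scoring few actually run, so most of the layer's weights sit idle on any given token.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2  # toy sizes, chosen for illustration

# Each "expert" is just a small linear layer here.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route the input to the top_k experts and mix their outputs.
    Only top_k of the n_experts weight matrices are touched per token."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]  # indices of the chosen experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (16,)
```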
Where does this leave the poor Chinchilla?
Yet another photo of a chinchilla.
Shutterstock
The "Chinchilla" approach has worked for the past five years or more, and it's partly why the stock prices of many companies in the AI supply chain have soared.
The Barclays analysts question whether this paradigm can continue because the performance gains from this method may decline as the cost goes up.
"The idea of spending $10 billion on a pre-training run on the next base model, to achieve very little incremental performance, would likely change," they wrote.
Many in the industry also think data for training AI models is running out: there may not be enough quality information to keep feeding this ravenous chinchilla.
So, top AI companies might stop expanding this process when models reach a certain size. For instance, OpenAI could build its next huge model, GPT-5, but may not go beyond that, the analysts said.
A "synthetic" solution?
OK, the final picture of a chinchilla, I promise.
Itsuo Inouye/File/AP
The AI industry has started using "synthetic" training data, often generated by existing models. Some researchers think this feedback loop of models helping to create new, better models will take the technology to the next level.
The Chinchillas could, essentially, feed on themselves to survive.
Kinda gross, though that would mean tech companies will still spend massively on AI in the coming years.
"If the AI industry were to see breakthroughs in synthetic data and recursive self-improvement, then we would hop back on the Chinchilla scaling path, and compute needs would continue to go up rapidly," Sandler and his colleagues wrote. "While not entirely clear right now, this is certainly a possibility we need to consider."