Broadcom's CEO says he's too busy riding the AI wave to consider a takeover of rival Intel.
In an interview with the Financial Times, Hock Tan said he had no interest in "hostile takeovers."
Broadcom has soared to a $1 trillion market capitalization for the first time thanks to the AI boom.
The chief executive of $1 trillion AI chip giant Broadcom has dismissed the prospect of a takeover bid for struggling rival Intel.
In an interview with the Financial Times, Hock Tan said that he has his "hands very full" from riding the AI boom, responding to rumors that his company could make a move for its Silicon Valley rival.
"That is driving a lot of my resources, a lot of my focus," Tan said, adding that he has "not been asked" to bid on Intel.
The Broadcom boss is also adopting a "no hostile takeovers" policy after Donald Trump blocked his company's offer for Qualcomm in 2018 on national security grounds. Broadcom was incorporated in Singapore at the time.
"I can only make a deal if it's actionable," Tan told the FT. "Actionability means someone comes and asks me. Ever since Qualcomm, I learned one thing: no hostile offers."
Broadcom and Intel have experienced diverging fortunes since the start of the generative AI boom. Broadcom has more than doubled in value since the start of the year to hit a $1 trillion market capitalization for the first time, while Intel has collapsed by more than half to $82 billion.
Broadcom, which designs custom AI chips and components for data centers, hit a record milestone last week after reporting its fourth-quarter earnings. Revenues from its AI business jumped 220% year over year.
Intel, meanwhile, has had a much rougher year. Its CEO Pat Gelsinger, who first joined Intel when he was 18 and was brought back in 2021 after a stint at VMware, announced his shock retirement earlier this month after struggling to keep pace with rivals like Nvidia in the AI boom.
Gelsinger, who returned to revitalize Intel's manufacturing and design operations, faced several struggles, leading him to announce a head count reduction of roughly 15,000 in August and a suspension of Intel's dividend.
Its challenges have fueled rumors that it could be bought by a rival, a move that would mark a stunning end to the decades-old chip firm's independence. Buyer interest remains uncertain, however. Bloomberg reported in November that Qualcomm's interest in an Intel takeover has cooled.
Broadcom did not immediately respond to BI's request for comment outside regular working hours.
AWS unveiled a new AI chip and a supercomputer at its re:Invent conference on Tuesday.
It's a sign that Amazon is ready to reduce its reliance on Nvidia for AI chips.
Amazon isn't alone: Google, Microsoft, and OpenAI are also designing their own AI chips.
Big Tech's next AI era will be all about controlling silicon and supercomputers of their own. Just ask Amazon.
At its re:Invent conference on Tuesday, the tech giant's cloud computing unit, Amazon Web Services, unveiled the next line of its AI chips, Trainium3, while announcing a new supercomputer that will be built with its own chips to serve its AI ambitions.
It marks a significant shift from the status quo that has defined the generative AI boom since OpenAI's release of ChatGPT, in which the tech world has relied on Nvidia to secure a supply of its industry-leading chips, known as GPUs, for training AI models in huge data centers.
While Nvidia has a formidable moat (experts say its hardware-software combination serves as a powerful vendor lock-in system), AWS' reveal shows companies are finding ways to take ownership of the tech shaping the next era of AI development.
Putting your own chips on the table
On the chip side, Amazon shared that Trainium2, which was first unveiled at last year's re:Invent, was now generally available. Its big claim was that the chip offers "30-40% better price performance" than the current generation of servers with Nvidia GPUs.
That would mark a big step up from its first series of chips, which analysts at SemiAnalysis described on Tuesday as "underwhelming" for generative AI training, saying they were instead used within Amazon for "training non-complex" workloads such as credit card fraud detection.
"With the release of Trainium2, Amazon has made a significant course correction and is on a path to eventually providing a competitive custom silicon," the SemiAnalysis researchers wrote.
Trainium3, which AWS gave a preview of ahead of a late 2025 release, has been billed as a "next-generation AI training chip." Servers loaded with Trainium3 chips offer four times greater performance than those packed with Trainium2 chips, AWS said.
Matt Garman, the CEO of AWS, told The Wall Street Journal that some of the company's chip push is due to there being "really only one choice on the GPU side" at present, given Nvidia's dominant place in the market. "We think that customers would appreciate having multiple choices," he said.
It's an observation that others in the industry have noted and responded to. Google has been busy designing its own chips that reduce its dependence on Nvidia, while OpenAI is reported to be exploring custom, in-house chip designs of its own.
But having in-house silicon is just one part of this.
The supercomputer advantage
AWS acknowledged that as AI models trained on GPUs continue to get bigger, they are "pushing the limits of compute and networking infrastructure."
With this in mind, AWS shared that it was working with Anthropic to build an "UltraCluster" of servers that form the basis of a supercomputer it has named Project Rainier. According to Amazon, it will scale model training across "hundreds of thousands of Trainium2 chips."
"When completed, it is expected to be the world's largest AI compute cluster reported to date available for Anthropic to build and deploy their future models on," AWS said in a blog, adding that it will be "over five times the size" of the cluster used to build Anthropic's last model.
The supercomputer push follows similar moves elsewhere. The Information first reported earlier this year that OpenAI and Microsoft were working together to build a $100 billion AI supercomputer called Stargate.
Of course, Nvidia is also in the supercomputer business and aims to make supercomputers a big part of the allure of its next-generation Blackwell AI chips.
AWS made no secret that it remains tied to Nvidia for now. In an interview with The Wall Street Journal, Garman acknowledged that Nvidia is responsible for "99% of the workloads" for training AI models today and doesn't expect that to change anytime soon.
That said, Garman reckoned "Trainium can carve out a good niche" for itself. He'd be wise to recognize that everyone else is busy carving out a niche for themselves, too.
Pat Gelsinger's successor has big problems to pick up.
The departing Intel CEO has struggled with a turnaround and left the company behind on AI.
Intel also faces an uphill battle in its bid to outdo its industry rival TSMC.
Intel's announcement that Pat Gelsinger is retiring has thrown the storied chip firm's future into deep uncertainty. Its interim leaders and its next CEO must pick up the pieces of a turnaround plan designed to fix a business in turmoil, play catch-up in a lucrative AI race, and navigate Donald Trump's second term.
The mission assigned to Gelsinger when he took over as CEO in 2021 was to restore the then-52-year-old company to relevancy. Its business of designing and manufacturing chips β once industry-leading β was struggling. It had to contend with Big Tech customers pushing forward with their own designs, and with production setbacks from manufacturing woes.
Now the challenges are even greater. It has failed to capitalize on the generative-AI boom that has enriched rivals like Nvidia and TSMC. It is also struggling to make the case that it's a national champion of US industry as chipmaking becomes increasingly critical to the nation's future prosperity.
In October, Intel announced a net loss of $16.6 billion in its third quarter, adding to a loss in its second quarter. The company announced it would suspend its dividend and reduce its head count by 15%. The market has not reacted kindly. The chip giant has lost half its value since the start of the year, sinking to a market capitalization of about $103 billion.
After Gelsinger's exit, which Bloomberg reported followed a clash with the board, it's on Intel's interim co-CEOs, David Zinsner and Michelle Johnston Holthaus, and its future leader to overcome these problems.
Closing the AI gap
In 2006, Intel's CEO at the time, Paul Otellini, famously turned down an offer from Steve Jobs to make chips for the iPhone. Intel's move to shun the smartphone market has been replayed in the AI boom.
"It hung on to PCs for too long and ignored what Nvidia was doing," said Peter Cohan, an associate professor of management practice at Babson College. "By the time Intel began to work on AI chips, it was too late, and the company had basically no market share."
Though Intel has tried to play catch-up, it has struggled to deliver. Analysts and researchers pointed to a few reasons.
Hamish Low, a research analyst at Enders Analysis, said Intel had issues during Gelsinger's tenure getting operations ready for the AI boom while dealing with the internal challenges of separating its foundry division from its design business.
"This long, drawn-out corporate process of trying to get your own house in order, when that's your focus, clearly generative AI just skipped right by," Low told Business Insider.
He added that Intel was long known as "the x86 CPU company," referring to its architecture for more general computer chips. The AI world runs on chips known as GPUs loaded into servers, so trying to shift focus while restructuring the business proved tough.
"When it suddenly is GPUs and accelerated computers, it was always going to be a tough challenge to pivot into doing those kinds of server GPU chips," he said.
Intel has felt the pains of this pivot. Its line of AI chips, known as Gaudi, has paled in comparison with offerings from competitors like Nvidia. In an October earnings call, Gelsinger said the company would "not achieve our target of $500 million in revenue for Gaudi in 2024."
"They're in this awkward position where their server-side chips are just too subscale to ever really gain meaningful market share," Low said. "It's hard to see who the real customers for those would be."
Daniel Newman, the CEO of the research firm The Futurum Group, argued that Intel struggled because it didn't "count on the democratization of silicon." Companies like Microsoft and Google have been designing their own chips, further limiting Intel's market.
"They all went down the path of making their own investments and bets on silicon, so what was left was this second-tier enterprise market," Newman told BI. "If you look at the enterprise market, they're not buying a lot of AI silicon yet."
Making the case as a national champion
The other big challenge facing Intel's next leaders is proving that the company can be a national champion of US chipmaking. That won't be straightforward.
Doing so would require strengthening its fab-manufacturing business. "Standing up a successful fab is not an overnight thing," Newman said. "This is a multiyear process."
A lot of the challenge, he said, is that success would require "quite a bit of customer acquisition" from those who have seemed reluctant to shift high-volume production to Intel. As it stands, Intel is its own biggest customer for chip manufacturing.
That's because everyone else largely turns to the Taiwanese giant TSMC, which has grown by more than 83% this year, hitting a market capitalization of $1 trillion. Newman acknowledged that it's unclear whether this is because TSMC is technologically superior. He sees a new Intel process called 18A as being competitive, following news in September that Amazon Web Services was adopting it for a particular chip.
What's more likely, he said, is that TSMC's customers think, "TSMC's not broken, why fix it?" If Intel wants to get serious about building a leading manufacturing business, it'll need to find a way to take on TSMC.
The president-elect has emphasized a protectionist policy. Given the importance of chipmaking to US industry and national security, he could look to throw weight behind Intel.
How that might happen is unclear. Trump has criticized the CHIPS Act, which is set to give Intel $7.9 billion in grants to boost its domestic manufacturing capabilities. Trump has said he'd prefer tariffs as a tool to incentivize chip manufacturing on US soil.
In September, Intel announced plans to spin off its manufacturing unit into its own subsidiary. But not everyone is convinced these moves will be enough.
Cohan told BI he thinks "it is highly unlikely that Intel can be more successful than TSMC in making chips" without significant support from the US government.
"That company lacks the capital to do that on its own, and its knowledge of how to make Nvidia chips is way behind TSMC's," he said. "Why would Nvidia even choose to give up its relationship with TSMC for a less successful rival?"
Intel CEO Pat Gelsinger stepped down on Sunday, the company said Monday.
The company has struggled in recent years to keep up with rivals like Nvidia in the chip race.
Intel's share price was up more than 3% at the market open after it announced Gelsinger's departure.
Intel CEO Pat Gelsinger has stepped down, the company said Monday in a statement, as the US chipmaker struggles to keep up in the global chip race.
Gelsinger leaves the chipmaker with immediate effect, vacating his roles as CEO and as a member of the board.
The 63-year-old executive's departure follows a clash with Intel's board of directors regarding his plan to gain ground against rival chipmaker Nvidia, Bloomberg reported, citing people familiar with the matter.
Gelsinger was reportedly offered the choice to step aside or be fired, the outlet said.
"Today is, of course, bittersweet as this company has been my life for the bulk of my working career," he said in a statement. "It has been a challenging year for all of us as we have made tough but necessary decisions to position Intel for the current market dynamics."
Two senior Intel executives, David Zinsner and Michelle Johnston Holthaus, will lead the company during the search for a new CEO.
Intel, once a giant of Silicon Valley, has seen its share price drop almost 50% this year as it has faced multiple challenges.
Gelsinger's plans to revitalize the company included ambitions to build more factories in the US and Europe to scale its production capacity, as well as designing its own line of AI chips, named Gaudi, to take on the industry heavyweight Nvidia.
Many of these efforts have struggled, however. Last month, Gelsinger said the company was set to miss its target of $500 million in 2024 sales for Gaudi 3, its latest series of AI chips, due to software-related issues.
Gelsinger rolled out a sweeping set of initiatives earlier this year to turn the company around. In August, Intel laid off 15,000 employees, said it would suspend its dividend starting in the fourth quarter, and cut its capital spending.
Intel's stock price rose more than 3% when markets opened on Monday.
Frank Yeary, Intel's chair, thanked Gelsinger and said the company needed to restore investor confidence.
"While we have made significant progress in regaining manufacturing competitiveness and building the capabilities to be a world-class foundry, we know that we have much more work to do at the company and are committed to restoring investor confidence," Yeary said.
Gelsinger was brought on in 2021 to lead the Santa Clara, California-headquartered company, with a remit to turn it into a powerhouse of the chip industry and close the gap with its Taiwanese rival TSMC.
He first joined Intel in 1979 and rose to become its chief technology officer in 2001. He left the company in 2009 to join the data-storage firm EMC, and in 2012 became the CEO of its cloud-computing subsidiary VMware, before returning to Intel as its CEO in 2021.
OpenAI is seeking to reach 1 billion users by next year, a new report said.
Its growth plan involves building new data centers, company executives told the Financial Times.
The lofty user target signifies the company's growth ambitions following a historic funding round.
OpenAI is seeking to amass 1 billion users over the next year and enter a new era of accelerated growth by betting on several high-stakes strategies such as building its own data centers, according to a new report.
In 2025, the startup behind ChatGPT hopes to reach user numbers surpassed only by a handful of technology platforms, such as TikTok and Instagram, by investing heavily in infrastructure that can improve its AI models, its chief financial officer Sarah Friar told the Financial Times.
"We're in a massive growth phase, it behooves us to keep investing. We need to be on the frontier on the model front. That is expensive," she said.
ChatGPT, the generative AI chatbot introduced two years ago by OpenAI boss Sam Altman, serves 250 million weekly active users, the report said.
ChatGPT has enjoyed rapid growth before. It reached 100 million users roughly two months after its initial release thanks to generative AI features that grabbed the attention of businesses and consumers. At the time, UBS analysts said they "cannot recall a faster ramp in a consumer internet app."
Data center demand
OpenAI will require additional computing power to accommodate a fourfold increase in users and to train and run smarter AI models.
Chris Lehane, vice president of global affairs at OpenAI, told the Financial Times that the nine-year-old startup was planning to invest in "clusters of data centers in parts of the US Midwest and southwest" to meet its target.
Increasing data center capacity has become a critical global talking point for AI companies. In September, OpenAI was reported to have pitched the White House on the need for a massive data center build-out, while highlighting the massive power demands that they'd come with.
Altman, who thinks his technology will one day herald an era of "superintelligence," has been reported to be in talks this year with several investors to raise trillions of dollars of capital to fund the build-out of critical infrastructure like data centers.
Friar also told the FT that OpenAI is open to exploring an advertising model.
"Our current business is experiencing rapid growth and we see significant opportunities within our existing business model," Friar told Business Insider. "While we're open to exploring other revenue streams in the future, we have no active plans to pursue advertising."
OpenAI said the capital would allow it to "double down" on its leadership in frontier AI research, as well as "increase compute capacity, and continue building tools that help people solve hard problems."
In June, the company also unveiled a strategic partnership with Apple as part of its bid to put ChatGPT in the hands of more users.
OpenAI did not immediately respond to BI's request for comment.
Elon Musk's $44 billion Twitter buyout was seen by many as overpriced.
However, the social media platform has helped give Musk close access to the Trump administration.
Twitter, now X, has also been a valuable data source for Musk's $50 billion startup xAI.
When Elon Musk bought Twitter for $44 billion, it was panned as one of the worst tech acquisitions in history. Two years, an election, and a generative AI boom later, it's starting to look like more of a bargain.
Shortly after the deal closed in October 2022, Wedbush Securities tech analyst Dan Ives said it would "go down as one of the most overpaid tech acquisitions in the history of M&A deals on the Street."
On paper, the $13 billion that Musk borrowed to buy Twitter, now X, has turned into the worst merger-finance deal for banks since the 2008 financial crisis.
Yet the deal has provided significant benefits for Musk. He now wields considerable influence in the incoming Trump administration after using X to throw his support behind the former president's reelection.
Not only has X served as Musk's political megaphone, but it's also been a lucrative source of training data for one of the billionaire's other ventures: xAI, the startup that's rocketed to a $50 billion valuation just 16 months after launch.
That fresh valuation means xAI has surpassed Musk's purchase price for X. It came with a $5 billion funding round, which The Wall Street Journal reported was backed by the Qatar Investment Authority and Sequoia Capital.
Musk launched xAI in July 2023 as a springboard into the AI race, after cofounding and later leaving ChatGPT maker OpenAI over differences with its CEO, Sam Altman.
The startup has made up significant ground on its rivals by using X as a source of third-party data, one of the key avenues for training large language models.
Ellen Keenan O'Malley, a senior associate at intellectual property law firm EIP, told Business Insider that xAI's access to "third-party information through X is the potential kryptonite to ChatGPT's edge" and a potential driver behind the rising valuation of Musk's startup.
"This is a level that neither OpenAI nor any other third party can access, or at least not as easily, which provides a huge competitive edge and therefore makes xAI a valuable company," added O'Malley.
Access to 0.3% of X's data costs around $500,000 annually, which prices many out, Wired previously reported.
"Clearly, X's or indeed any social media platform's data is valuable," Advika Jalan, head of research at MMC Ventures, told BI.
X marks the spot in the Musk-Trump alliance
Musk spent at least $119 million on a political action committee to support Trump's campaign.
X played a large role, too. Musk has long been an avid poster on X, but he ramped up the volume during the election cycle. Analysis by The Economist found that the share of Musk's posts that are political rose from less than 4% in 2016 to over 13% this year. Since endorsing Trump, he has posted more than 100 times on some days to his more than 200 million followers.
A study published by the Queensland University of Technology this month suggested that Musk may have tweaked X's algorithm to boost the reach of his and other Republican-leaning accounts.
Shmuel Chafets, cofounder of venture capital firm Target Global, told Business Insider that "X has become a powerful tool" in Musk's ecosystem, adding that it serves "as a platform for promotion and influence, similar to how Warren Buffett leverages the Berkshire Hathaway annual shareholders meeting and his shareholder letters."
X didn't always seem destined to attain such influence in Musk's hands.
In the months and years following Musk's takeover, an advertiser revolt ensued over content moderation concerns, the company laid off about 80% of staff, and service outages disrupted users.
Musk's co-investors have been writing down the value of their X stakes in the two years since. In September, Fidelity, one of its investors, slashed the value of its holding, giving X an implied valuation of $9.4 billion.
Yet Musk's support for Trump, which came after an assassination attempt against the president-elect at a rally in Pennsylvania in July, gives the tech billionaire political sway that is hard to put a price on.
Musk, who Trump said was a "super genius" in his victory speech at the Mar-a-Lago resort in Palm Beach, was selected by the president-elect to run a new Department of Government Efficiency alongside Vivek Ramaswamy, who ran in the 2024 Republican primary.
DOGE will be a "threat to bureaucracy," according to Musk, whose remit at the newly formed department will include driving $2 trillion in federal spending cuts and slashing regulations he deems superfluous and in the way of his corporate empire. As one SpaceX official told Reuters, Musk "sees the Trump administration as the vehicle for getting rid of as many regulations as he can, so he can do whatever he wants, as fast as he wants."
Since Trump's election win, the billionaire has been seen side by side with the president-elect at a UFC fight night while reportedly joining his calls with leaders like Volodymyr Zelensky and Google CEO Sundar Pichai.
X-odus
How long X maintains a Musk-Trump bromance and supports xAI's growth remains to be seen.
Musk, for instance, isn't always getting his preferred picks into Trump's cabinet: Trump passed over Musk's choice for Treasury secretary, Wall Street veteran Howard Lutnick, in favor of Scott Bessent, whom Musk had dismissed as a "business-as-usual choice."
X also faces legal challenges in which judges have expressed concerns over gatekeeping user data. In May, a federal judge in California dismissed a lawsuit filed by X against Israeli firm Bright Data. X claimed Bright Data was "using elaborate technical measures to evade X Corp.'s anti-scraping technology."
Earlier this month, X partially revived its suit against Bright Data. Should X be unsuccessful, it would raise questions about the value of the X-to-xAI data pipeline.
Elsewhere, renewed interest in X rivals like Bluesky and Threads risks costing Musk's platform users who are key both to advertising revenue and to supplying vital training data for future xAI models. X is now in a position where "lots of people hate it because they see it as being a weaponized instrument of MAGA," Calum Chace, cofounder of AI startup Conscium, told BI.
For now, though, Musk has a powerful tool in his hands with X.
"Critics may enjoy pointing out his missteps, but Musk's ability to leverage X for both personal and business purposes reinforces his reputation as a visionary entrepreneur who consistently thinks several steps ahead of his contemporaries," said Target Global's Chafets.
"Ultimately, this deal could prove highly lucrative if he decides to sell or take the company public in the future."
The rate of AI model improvement appears to be slowing, but some tech leaders say there is no wall.
It's prompted a debate over how companies can overcome AI bottlenecks.
Business Insider spoke to 12 people at the forefront of the AI boom to find out the path forward.
Silicon Valley leaders all-in on the artificial intelligence boom have a message for critics: their technology has not hit a wall.
A fierce debate over whether improvements in AI models have hit their limit has taken hold in recent weeks, forcing several CEOs to respond. OpenAI boss Sam Altman was among the first to speak out, posting on X this month that "there is no wall."
Dario Amodei, CEO of rival firm Anthropic, and Jensen Huang, the CEO of Nvidia, have also disputed reports that AI progress has slowed. Others, including Marc Andreessen, say AI models aren't getting noticeably better and are all converging to perform at roughly similar levels.
This is a trillion-dollar question for the tech industry. If tried-and-tested AI model training methods are providing diminishing returns, it could undermine the core rationale for an unprecedented investment cycle that's funding new startups, products, and data centers, and even reviving idled nuclear power plants.
Business Insider spoke to 12 people at the forefront of the AI industry, including startup founders, investors, and current and former insiders at Google DeepMind and OpenAI, about the challenges and opportunities ahead in the quest for superintelligent AI.
Together, they said that tapping into new types of data, building reasoning into systems, and creating smaller but more specialized models are some of the ways to keep the wheels of AI progress turning.
The pre-training dilemma
Researchers point to two key bottlenecks that companies may encounter in an early phase of AI development, known as pre-training. The first is access to computing power, and more specifically, getting hold of the specialist chips called GPUs. It's a market dominated by the Santa Clara-based chip giant Nvidia, which has battled supply constraints in the face of nonstop demand.
"If you have $50 million to spend on GPUs but you're on the bottom of Nvidia's list, we don't have enough kimchi to throw at this, and it will take time," said Henri Tilloy, partner at French VC firm Singular.
There is another supply problem, too: training data. AI companies have run into limits on the quantity of public data they can secure to feed into their large language models, or LLMs, in pre-training.
This phase involves training an LLM on a vast corpus of data, typically scraped from the internet, and then processed by GPUs. That information is then broken down into "tokens," which form the fundamental units of data processed by a model.
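The tokenization step described above can be sketched in a few lines. This is a toy illustration with a hypothetical hand-built vocabulary; production models instead use learned subword vocabularies (such as byte-pair encoding) with tens of thousands of entries.

```python
# Toy tokenizer: maps each word to an integer ID, falling back to an
# "unknown" token for words outside the vocabulary. The vocabulary here
# is invented for illustration, not taken from any real model.
vocab = {"the": 0, "internet": 1, "is": 2, "only": 3, "so": 4, "large": 5, "<unk>": 6}

def tokenize(text: str) -> list[int]:
    """Split on whitespace and look up each lowercase word's token ID."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The internet is only so large"))  # [0, 1, 2, 3, 4, 5]
```

Real tokenizers split rarer words into multiple subword pieces, which is why a model's training budget is usually quoted in tokens rather than words.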
While throwing more data and GPUs at a model has reliably produced smarter models year after year, companies have been exhausting the supply of publicly available data on the internet. Research firm Epoch AI predicts usable textual data could be squeezed dry by 2028.
"The internet is only so large," Matthew Zeiler, founder and CEO of Clarifai, told BI.
Multimodal and private data
Eric Landau, cofounder and CEO of data startup Encord, said that this is where other data sources will offer a path forward in the scramble to overcome the bottleneck in public data.
One example is multimodal data, which involves feeding AI systems visual and audio sources of information, such as photos or podcast recordings. "That's one part of the picture," Landau said. "Just adding more modalities of data." AI labs have already started using multimodal data as a tool, but Landau says it remains "very underutilized."
Sharon Zhou, cofounder and CEO of LLM platform Lamini, sees another vastly untapped area: private data. Companies have been securing licensing agreements with publishers to gain access to their vast troves of information. OpenAI, for instance, has struck partnerships with organizations such as Vox Media and Stack Overflow, a Q&A platform for developers, to bring copyrighted data into their models.
"We are not even close to using all of the private data in the world to supplement the data we need for pre-training," Zhou said. "From work with our enterprise and even startup customers, there's a lot more signal in that data that is very useful for these models to capture."
A data quality problem
A great deal of research effort is now focused on enhancing the quality of data that an LLM is trained on rather than just the quantity. Researchers could previously afford to be "pretty lazy about the data" in pre-training, Zhou said, by just chucking as much as possible at a model to see what stuck. "This isn't totally true anymore," she said.
One solution that companies are exploring is synthetic data, an artificial form of data generated by AI.
According to Daniele Panfilo, CEO of startup Aindo AI, synthetic data can be a "powerful tool to improve data quality," as it can "help researchers construct datasets that meet their exact information needs." This is particularly useful in a phase of AI development known as post-training, where techniques such as fine-tuning can be used to give a pre-trained model a smaller dataset that has been carefully crafted with specific domain expertise, such as law or medicine.
One former employee at Google DeepMind, the search giant's AI lab, told BI that "Gemini has shifted its strategy" from going bigger to more efficient. "I think they've realized that it is actually very expensive to serve such large models, and it is better to specialize them for various tasks through better post-training," the former employee said.
In theory, synthetic data offers a useful way to hone a model's knowledge and make it smaller and more efficient. In practice, there's no full consensus on how effective synthetic data can be in making models smarter.
"What we discovered this year with our synthetic data, called Cosmopedia, is that it can help for some things, but it's not the silver bullet that's going to solve our data problem," Thomas Wolf, cofounder and chief science officer at open-source platform Hugging Face, told BI.
Jonathan Frankle, the chief AI scientist at Databricks, said there's no "free lunch" when it comes to synthetic data and emphasized the need for human oversight. "If you don't have any human insight, and you don't have any process of filtering and choosing which synthetic data is most relevant, then all the model is doing is reproducing its own behavior because that's what the model is intended to do," he said.
Concerns around synthetic data came to a head after a paper published in July in the journal Nature said there was a risk of "model collapse" with "indiscriminate use" of synthetic data. The message was to tread carefully.
Building a reasoning machine
For some, simply focusing on the training portion won't cut it.
Former OpenAI chief scientist and Safe Superintelligence cofounder Ilya Sutskever told Reuters this month that results from scaling models in pre-training had plateaued and that "everyone is looking for the next thing."
That "next thing" looks to be reasoning. Industry attention has increasingly turned to an area of AI known as inference, which focuses on the ability of a trained model to respond to queries and information it might not have seen before with reasoning capabilities.
At Microsoft's Ignite event this month, the company's CEO Satya Nadella said that instead of seeing so-called AI scaling laws hit a wall, he was seeing the emergence of a new paradigm of "test-time compute," in which a model takes longer to respond to more complex prompts from users. Nadella pointed to a new "think harder" feature for Copilot, Microsoft's AI agent, which boosts test time to "solve even harder problems."
Aymeric Zhuo, cofounder and CEO of AI startup Agemo, said that AI reasoning "has been an active area of research," particularly as "the industry faces a data wall." He told BI that improving reasoning requires increasing test-time or inference-time compute.
Typically, the longer a model takes to process a dataset, the more accurate the outcomes it generates. Right now, models are being queried in milliseconds. "It doesn't quite make sense," Sivesh Sukumar, an investor at investment firm Balderton, told BI. "If you think about how the human brain works, even the smartest people take time to come up with solutions to problems."
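The intuition behind spending more time per query can be sketched with a toy best-of-n simulation. This is an illustrative assumption, not how any particular model such as o1 actually works: if each independent attempt at a hard question succeeds with probability p, sampling n attempts and keeping a correct one succeeds with probability 1 - (1 - p)^n, so extra inference-time compute buys accuracy.

```python
import random

random.seed(0)

# Toy model (hypothetical numbers): each attempt at a hard question
# succeeds with probability p; we estimate how often at least one of
# n sampled attempts succeeds.
def best_of_n_success_rate(p: float, n: int, trials: int = 100_000) -> float:
    hits = 0
    for _ in range(trials):
        if any(random.random() < p for _ in range(n)):
            hits += 1
    return hits / trials

# More attempts per query (more test-time compute) means higher accuracy.
for n in (1, 5, 20):
    print(f"n={n:2d} attempts -> success rate ~{best_of_n_success_rate(0.2, n):.2f}")
```

With p = 0.2, the analytic success rates are roughly 0.20, 0.67, and 0.99 for 1, 5, and 20 attempts, which is why "thinking longer" can stand in for a much larger model on some tasks.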
In September, OpenAI released a new model, o1, which tries to "think" about an issue before responding. One OpenAI employee, who asked not to be named, told BI that "reasoning from first principles" is not the forte of LLMs as they work based on "a statistical probability of which words come next," but if we "want them to think and solve novel problem areas, they have to reason."
Noam Brown, a researcher at OpenAI, thinks the impact of a model with greater reasoning capabilities can be extraordinary. "It turned out that having a bot think for just 20 seconds in a hand of poker got the same boost in performance as scaling up the model by 100,000x and training it for 100,000 times longer," he said during a talk at TED AI last month.
Google and OpenAI did not respond to a request for comment from Business Insider.
The AI boom meets its tipping point
These efforts give researchers reasons to remain hopeful, even if current signs point to a slower rate of performance leaps. As a separate former DeepMind employee who worked on Gemini told BI, people are constantly "trying to find all sorts of different kinds of improvements."
That said, the industry may need to adjust to a slower pace of improvement.
"I just think we went through this crazy period of the models getting better really fast, like, a year or two ago. It's never been like that before," the former DeepMind employee told BI. "I don't think the rate of improvement has been as fast this year, but I don't think that's like some slowdown."
Lamini's Zhou echoed this point. Scaling laws, the observation that AI models improve with size, more data, and greater computing power, work on a logarithmic scale rather than a linear one, she said. In other words, think of AI advances as a curve rather than a straight upward line on a graph. That makes development far more expensive "than we'd expect for the next substantive step in this technology," Zhou said.
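Zhou's point can be made concrete with a toy calculation. The numbers and the formula below are illustrative assumptions, not a real scaling law: suppose model quality grows with the logarithm of training compute. Then each equal step up in quality demands roughly ten times more compute than the last.

```python
import math

# Hypothetical constant for the toy quality curve: quality = K * log10(compute).
K = 10.0

def quality(compute_flops: float) -> float:
    """Toy logarithmic quality curve, not a published scaling law."""
    return K * math.log10(compute_flops)

# Each 10x increase in compute adds only a fixed increment of quality,
# so the cost of the "next substantive step" keeps multiplying.
for flops in (1e21, 1e22, 1e23):
    print(f"{flops:.0e} FLOPs -> quality {quality(flops):.0f}")
```

On this toy curve, going from quality 210 to 220 costs 9e21 extra FLOPs, but going from 220 to 230 costs 9e22, which is the "curve rather than a straight line" Zhou describes.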
She added: "That's why I think our expectations are just not going to be met at the timeline we want, but also why we'll be more surprised by capabilities when they do appear."
Companies will also need to consider how much more expensive it will be to create the next versions of their highly prized models. According to Anthropic's Amodei, a training run in the future could one day cost $100 billion. These costs include GPUs, energy needs, and data processing.
Whether investors and customers are willing to wait around longer for the superintelligence they've been promised remains to be seen. Issues with Microsoft's Copilot, for instance, are leading some customers to wonder if the much-hyped tool is worth the money.
For now, AI leaders maintain that there are plenty of levers to pull, from new data sources to a focus on inference, to ensure models continue improving. Investors and customers may just have to accept that gains will arrive more slowly than the breakneck pace set by OpenAI when it launched ChatGPT two years ago.
Huawei is set to launch its new line of Mate 70 phones on Tuesday.
Its software and hardware have been developed with domestic expertise.
It marks a new era of self-sufficiency at a moment of tech division between the US and China.
Look no further than Huawei to get a sense of just how far apart the US and China are heading into a second Donald Trump presidency.
On Tuesday, the Shenzhen-based tech giant is set to unveil a slate of new smartphones, the Mate 70 series, that are more free of Western software and hardware than any of its phones before them.
During his first term in the White House, the president-elect moved to block what he saw as a national security threat by wielding export controls and an executive order to cut the Chinese firm's ties to crucial US partners and suppliers.
President Joe Biden's outgoing administration continued this approach, which meant Huawei had to look closer to home for chips, operating systems, and apps.
This term, Trump will stare down a Huawei that's showing it's doing just fine without its US suppliers.
On the software side, all lingering remnants of Huawei's former dependence on Android look set to be excised from the Mate 70 devices, which launch with HarmonyOS Next, an operating system built to run apps developed specifically for Huawei's ecosystem.
Huawei first launched HarmonyOS in 2019 after being cut off from Google's powerful Android system. Early versions of the platform contained code from the Android Open Source Project, but HarmonyOS Next removes it all, making it a product solely of Huawei's own making.
Meanwhile, on the hardware side, Huawei is looking to raise the bar on performance by introducing a new made-in-China smartphone chip in some of the new Mate 70 models, according to the Wall Street Journal.
A performance leap with a domestic chip would be a big deal. The top-end version of the Mate 70's predecessor, the Mate 60, stunned policymakers last year: its launch showed off capabilities once thought possible only with US-sourced equipment.
The Mate 60 Pro was reported to carry an advanced chipset, the Kirin 9000s, designed by Shenzhen-based HiSilicon and manufactured by state-backed semiconductor firm SMIC. It gave the phone 5G-like cellular capabilities, per a teardown by Bloomberg.
Together, the software and hardware advances mark a symbolic moment, showing how little effect Washington's efforts have had in squeezing a company that Beijing's mandarins have dubbed a "national champion" since the 1990s.
Bad news for Apple
This growing self-sufficiency isn't going unnoticed.
Apple, which considers China its most important international market beyond the US, has seen iPhone sales suffer in the region as local consumers have gravitated toward handsets that are aggressively priced and give them a sense of national pride.
According to figures from research firm Counterpoint, Huawei held an 18% share of the Chinese smartphone market in the third quarter of this year, while Apple had a 14% share. Depending on the success of the Mate 70 phones, that gap could widen in the months ahead.
Apple CEO Tim Cook, for his part, wants to ensure that Chinese consumers remain dedicated to the iPhone maker, which has sold its smartphones there since 2009. This week, he is visiting the country for at least a third time this year to attend an industry conference.
During his trip, he will be acutely aware that iPhones face stiff competition in China. Back in 2009, no Chinese company had an answer to Steve Jobs' creation, and even if they did, they'd need to package it up with US technology. Huawei's Tuesday launch could well change that.
Nvidia has a lot riding on Blackwell, its new flagship AI chip.
Investors had a tepid response to earnings despite Nvidia reporting $35.1 billion in third-quarter revenue.
To deliver with Blackwell, Nvidia must juggle performance expectations and complex supply chains.
Nvidia looks set to end the year as confidently as it started. How next year plays out will significantly depend on the performance of Blackwell, its next-generation AI chip.
The Santa Clara-based chip giant reminded everyone why it has grown more than 200% this year to become the world's most valuable company. On Wednesday, it reported another blowout set of earnings. Revenue hit $35.1 billion in its third quarter, up 94% from a year ago.
But despite the strong earnings, which Wedbush analyst Dan Ives said "should be framed and hung in the Louvre," investors remained cautious as they focused their attention on the highwire act Nvidia must pull off with Blackwell.
The new chip, known as a GPU, was first unveiled by CEO Jensen Huang at the company's GTC conference in March. It was revealed as a successor to the Hopper GPU that companies across Silicon Valley and beyond have used to build powerful AI models.
While Nvidia confirmed on Wednesday that Blackwell is now "in full production," with 13,000 samples shipped to customers last quarter, signs emerged suggesting that Nvidia faces a difficult path ahead as it prepares to scale up its new-era GPUs.
Nvidia must navigate complex supply chains
First, Blackwell is what Nvidia CFO Colette Kress called a "full-stack" system.
That makes it a beast of a machine that must fit an incredibly wide range of specific needs from a variety of customers. As she told investors on the earnings call on Wednesday, Blackwell is built with "customizable configurations" to address "a diverse and growing AI market." That includes everything from "x86 to Arm, training to inferencing GPUs, InfiniBand to Ethernet switches," Kress said.
Nvidia will also need incredible precision in its execution to satisfy its customers. As Kress said on the earnings call, the line for Blackwell is "staggering," with the company "racing to scale supply to meet the incredible demand customers are placing on us."
To achieve this, it'll need to focus on two areas. First, meeting demand for Blackwell will mean efficiently orchestrating an incredibly complex and widespread supply chain. In response to a question from Goldman Sachs analyst Toshiya Hari, Huang reeled off a near-endless list of suppliers contributing to Blackwell production.
They included Far East semiconductor firms TSMC, SK Hynix, and SPIL; Taiwanese electronics giant Foxconn; Amphenol, a Connecticut-based producer of fiber-optic connectors; cloud and data center specialists like Wiwynn and Vertiv; and several others.
"I'm sure I've missed partners that are involved in the ramping up of Blackwell, which I really appreciate," Huang said. He'll need each and every one of them to be in sync to help meet next quarter's guidance of $37.5 billion in revenue. There had been some recent suggestions that cooling issues were plaguing Blackwell, but Huang seemed to suggest they had been addressed.
Kress acknowledged that the costs of the Blackwell ramp-up will lead to gross margins dropping by a few percentage points but expects them to recover to their current level of roughly 75% once "fully ramped."
All eyes are on Blackwell's performance
The second area Nvidia will need to execute with absolute precision is performance. AI companies racing to build smarter models to keep their own backers on board will depend on Huang's promise that Blackwell is far superior in its capabilities to Hopper.
Reports so far suggest Blackwell is on track to deliver next-generation capabilities. Kress reassured investors on this, citing results from Blackwell's debut last week on the MLPerf Training benchmark, an industry test that measures "how fast systems can train models to a target quality metric." The Nvidia CFO said Blackwell delivered a "2.2 times leap in performance over Hopper" on the test.
Collectively, these performance leaps and supply-side pressures matter to Nvidia for a longer-term reason, too. Huang committed the company to a "one-year rhythm" of new chip releases earlier this year, a move that effectively requires the tech giant to showcase a vastly more powerful variety of GPUs each year while convincing customers that it can dole them out.
While performance gains seem to be showing real improvements, reports this year have suggested that pain points have emerged in production that have added delays to the rollout of Blackwell.
Nvidia remains ahead of rivals like AMD
For now, investors appear to be taking a wait-and-see approach to Blackwell, with Nvidia's share price down less than 1% in premarket trading. Hamish Low, research analyst at Enders Analysis, told BI that "the reality is that Nvidia will dominate the AI accelerator market for the foreseeable future," particularly as "the wave of AI capex" expected from tech firms in 2025 will ensure it remains "the big winner" in the absence of strong competition.
"AMD is a ways behind and none of the hyperscaler chips are going to be matching that kind of volume, which gives Nvidia some breathing room in terms of market share," Low said.
As Low notes, however, there's another reality Nvidia must reckon with. "The challenge is the sheer weight of investor expectations due to the scale and premium that Nvidia has reached, where anything less than continually flying past every expectation is a disappointment," he said.
If Blackwell misses those expectations in any way, Nvidia may need to brace for a fall.