โŒ

Normal view

There are new articles available, click to refresh the page.
Today: 19 May 2025

DeepSeek's R1 was 'genuinely a gift to the world's AI industry,' says Jensen Huang

19 May 2025 at 02:35
Nvidia cofounder and CEO Jensen Huang talked hardware and software in Taipei on Monday.

I-Hwa Cheng/AFP/Getty Images

  • Nvidia CEO Jensen Huang praised DeepSeek R1 for significant contributions to AI research.
  • DeepSeek has made a "real impact" in how people think about inference and reasoning AI, Huang said.
  • Nvidia's stock fell sharply amid January's DeepSeek selloff, but Huang said investors got it wrong.

Jensen Huang heaped praise on the Chinese AI model that briefly upended the tech world, calling DeepSeek's R1 "a great contribution to the industry and to the world" on Monday.

Shares of tech and semiconductor companies, including Nvidia, tumbled in January following the meteoric rise of DeepSeek R1, the Chinese AI model that investors viewed as being globally competitive and cost-effective.

But Huang has good things to say about DeepSeek, which he said on Monday was "genuinely a gift to the world's AI industry."

"The amount of computer science breakthroughs is really quite significant and has really opened up a lot of great research for researchers in the United States and around the world," Huang said at the opening keynote of the Computex Taipei tech conference in Taiwan.

In January, open-source chatbot DeepSeek R1 took the world by storm, raising questions about Silicon Valley's massive spending spree on the technology.

"Everywhere I go, DeepSeek R1 has made a real impact in how people think about AI and how to think about inference and how to think about reasoning AIs," Huang said.

US AI-related shares tanked across the board in the wake of DeepSeek's rise. Nvidia's stock lost as much as $600 billion in market capitalization, and Huang's personal net worth dropped by roughly 20% at one point. The stock has recovered most of those losses and is up nearly 43% in the last year.

Huang said in February that investors got it wrong because the industry will still need computing power for post-training.

At the time, Huang said that post-training is the "most important part of intelligence" and "where you learn to solve problems."

The tech titan also seemed upbeat about DeepSeek, saying the open-sourced model created "energy around the world."

Google chief scientist predicts AI could perform at the level of a junior coder in a year

19 May 2025 at 01:47
Jeff Dean, Google's AI lead, said it's possible AI will be at the level of a junior coder in a year or so.

Thomas Samson/Getty Images

  • Jeff Dean, chief scientist at Google, said it will soon be possible for AI to match the skills of a junior engineer.
  • Speaking at Sequoia Capital's "AI Ascent" event, he estimated it could happen within the next year.
  • AI will have to know more than basic programming to truly be at the level of a junior programmer, he added.

Jeff Dean, Google's chief scientist, thinks that AI will soon be able to replicate the skills of a junior software engineer.

"Not that far," he said during Sequoia Capital's "AI Ascent" event, when asked how far AI was from being on par with an entry-level engineer. "I will claim that's probably possible in the next year-ish."

Plenty of tech leaders have made similar predictions as models have continued to improve at coding and AI tools have become increasingly popular among programmers. With sweeping layoffs across the tech industry, entry-level engineers are already fielding intense competition, only to see it compounded by artificial intelligence.

Still, Dean said, AI has more to learn beyond the basics of programming before it can produce work at the level of a human being.

"This hypothetical virtual engineer probably needs a better sense of many more things than just writing code in an IDE," he said. "It needs to know how to run tests, debug performance issues, and all those kinds of things."

As for how he expects it to acquire that knowledge, Dean said that the process won't be entirely unlike that of a person trying to gain the same skills.

"We know how human engineers do those things," he said. "They learn how to use various tools that we have, and can make use of them to accomplish that. And they get that wisdom from more experienced engineers, typically, or reading lots of documentation."

Research and experimentation are key, he added.

"I feel like a junior virtual engineer is going to be pretty good at reading documentation and sort of trying things out in virtual environments," Dean said. "That seems like a way to get better and better at some of these things."

Dean also said the impact of "virtual" engineers will likely be significant.

"I don't know how far it will take us, but it seems like it'll take us pretty far," he said.

Google did not immediately respond to a request for comment by Business Insider prior to publication.

Yesterday: 18 May 2025

Elton John is furious about plans to let Big Tech train AI on artists' work for free

18 May 2025 at 05:50
Elton John called the UK government 'absolute losers' for failing to safeguard artists from AI.

CBS Photo Archive/CBS via Getty Images

  • Elton John attacked UK plans to let Big Tech train AI on creative work without permission or pay.
  • He called ministers "absolute losers" and accused them of "thievery on a high scale."
  • John warned that young artists "haven't got the resources" to take on Big Tech.

Elton John has accused the UK government of betraying artists with plans to allow Big Tech to train AI on creative works without permission or payment.

The 78-year-old music icon said the plans meant "committing theft, thievery on a high scale," in an interview with the BBC on Sunday.

He was commenting on the Data (Use and Access) Bill, which would allow companies to train AI on works such as music and books, unless the copyright holder specifically opts out.

John said he was "very angry," calling the government "absolute losers."

He told the BBC that young artists "haven't got the resources" to take on Big Tech and that the legislation would "rob young people of their legacy and their income."

"It's criminal, in that I feel incredibly betrayed," he said.

The bill had been making its way through the country's parliament until earlier this week, when the House of Lords voted to amend it to require tech companies to disclose and seek consent before scraping copyrighted material.

But the lower house, the House of Commons, rejected that change, sending the bill back into parliamentary limbo.

In his BBC interview, Sir Elton called on UK Prime Minister Keir Starmer to "wise up," saying he was prepared to take ministers to court and "fight it all the way."

The UK Government had not responded to a Business Insider request for comment when this article went live.

John was one of over 400 musicians, writers, and artists, including Paul McCartney, who signed an open letter to the Prime Minister earlier this year, warning that AI needed proper copyright safeguards to protect artists.

Sir Paul McCartney warned in January that AI could "rip off" artists and result in a "loss of creativity."

No one seems to know if AI will take our jobs or make us productive superstars

18 May 2025 at 03:37
Experts don't all agree on the impact AI will have on jobs, though many say it will be sizable.

Getty Images; Jenny Chang-Rodriguez/BI

  • Mark Quinn lost his job to AI, but says smart thinking can help people adapt to new ways of working.
  • Experts don't always agree on the impact AI will have on jobs, though many expect big changes.
  • AI will likely reshape many tasks, so workers need to build their skills to stay ahead.

Mark Quinn said he lost his previous job to AI, though he doesn't think it was a sign of a coming employment purge at the hands of bots.

Quinn was working for a generative artificial intelligence startup, running a team he set up to oversee the answers the bots kicked out (the proverbial human in the loop).

Eventually, as the AI improved, the company found it could manage with a smaller, more efficient set of workers, the longtime tech exec said.

"My skill and the job I was hired to do was truly no longer needed," Quinn told Business Insider.

Because there wasn't another role that was a good fit for him, he left.

The idea of losing your job to a bot is scary, and some workplace thinkers have warned about it. Yet others hold a sunnier view: Whip-smart bots will take over so much that we'll be able to add a whole lot more to our to-do lists.

The absence of a solid consensus among the tech and labor cognoscenti about AI's impact speaks to how many questions remain and how often the answer might start with "it depends."

"Part of it is, we honestly don't know," Gary Hamel, a visiting professor at London Business School who lives in Silicon Valley, told BI about the effect AI will have on jobs.

He said there are varying opinions in the AI community about whether we're already bumping up against the limits of what large language models and GenAI can do or whether there are blockbuster sequels to come.

Hamel said we've often overestimated the impact of new technology on employment.

"As far as I know, over the last 50 years, only one job category in the United States has disappeared," he said. "That is elevator operator."

The list could grow. In 2023, Goldman Sachs said that some 300 million full-time jobs globally could be at risk of being automated. More recently, Salesforce CEO Marc Benioff said that his company might not hire software engineers this year because of how much AI agents have helped boost some coders' productivity.

"I can't think of any roles that won't be impacted," Scott Russell, CEO of the tech company NICE, previously told BI about how AI will reshape work.

'An Iron Man suit'

Adam Brotman, cofounder and co-CEO of Forum3, a boutique consulting firm that advises companies on AI adoption, told BI that he expects AI will take some jobs, change others, and lead some companies to forgo posting roles they might once have listed.

"It's this weird, ambiguous, conflicting thing," Brotman said.

What is clear, he said, is that AI will make many workers far more productive.

"It's going to be an Iron Man suit," said Brotman, who once ran digital operations at Starbucks and is the former co-CEO of J. Crew.

He said the business leaders his firm talks to, and who understand what AI is capable of, are asking how they can make their businesses more productive and whether they can get by without hiring as many people as a result.

Brotman expects it will take another 12 months or so of AI being on the scene for businesses to have a clearer understanding of what the technology will mean for jobs. Ultimately, he predicts there will be fallout, though it won't be evenly distributed.

For a job like software development, Brotman said, AI can do a lot of the programming and quality assurance work, yet someone working with AI to generate code can also do a lot more.

He said it's become harder to answer the question of what AI will mean for employment because, as the technology improves, many of the gains will come not just from making organizations more efficient but from helping companies innovate and create new products and lines of business.

"It's not just about productivity. It's about this abundance," he said.

Ravin Jesuthasan, the global leader for transformation services at the consulting firm Mercer, expects there to be a "ton of dislocation" within companies and across industries that might not result in massive job losses across the US economy, but that will remake a lot of roles.

He told BI that employees will be able to get more done, but that AI will also create a lot of work.

This includes the need for people to ensure that the tech is functioning, that it's calibrated correctly, and that the output is used in an "intelligent, ethical, responsible way," Jesuthasan said.

Think about tasks, not jobs

Quinn, who lost his previous job to AI's prowess, is now the senior director of AI operations for Pearl, an AI search platform for professional services that pairs GenAI with human experts to verify responses are accurate.

He said the best way to think about how AI will affect work isn't necessarily about which jobs or industries are most at risk of being upended, but rather about the tasks and type of work that will change. Quinn, who's held roles at Waymo, LinkedIn, Apple, and Amazon, said AI will take on many formulaic and rote tasks.

He said that, as with any tech innovation, there will be some amount of upheaval, but that people can also learn to work with AI. The focus should be on what workers can do with the extra time they'll have.

Quinn advises companies to help build workers' skills and embrace different ways of getting things done. Otherwise, he said, employees could get left behind.

"The longer that people sit on the sidelines wondering if this wave is coming, the more at risk they are of getting caught off guard by the undertow," Quinn said.

The inside story of how Silicon Valley's hottest AI coding startup almost died

18 May 2025 at 02:00
StackBlitz cofounders Albert Pai (left) and Eric Simons (right) moving out of a hacker house they ran in Palo Alto

Eric Simons

In 2017, Eric Simons founded StackBlitz with his childhood friend Albert Pai. Six years later, it was the startup equivalent of the walking dead.

StackBlitz raised funding to build software development tools, including WebContainers technology that let engineers create and manage projects in a browser, rather than on their laptops.

The business didn't really take off, and by late 2023, things came to a head. StackBlitz wasn't generating much revenue. Growth was lackluster. At a board meeting that December, an ultimatum was issued: Show real progress, or you're toast.

Simons and Pai pitched a plan to grow by ramping up sales efforts for existing products, while building new offerings that could be bigger. "We also acknowledged that it might be time to explore acquisition scenarios ahead of potential failure," Simons recalled.

Then, one board member, Thomas Krane, got real: By the end of 2024, everyone needed finality on StackBlitz's fate.

Thomas Krane, managing director at Insight Partners

Insight Partners

"I think I was saying what a lot of others were thinking in the room," Krane told Business Insider.

"No one was happy with the trajectory," venture capitalist and StackBlitz board directorย Sarah Guo remembers. "We needed a new plan."

When the meeting ended, Simons walked out of his "shed-turned-home office" into his backyard on a cloudy, windy Bay Area day to try to process the news.

"It was a tough pill to swallow, but we agreed," he said.

As 2024 began, it looked like StackBlitz was about to become one of the thousands of startups that fizzle into the abyss of venture capital history every year.

Not so fast. In Silicon Valley, fortunes can turn on a dime as new inventions spread like wildfire, incinerating legacy technology and feeding unlikely growth from the embers. And this is what happened to StackBlitz.

Noodling with OpenAI models

Cofounders Albert Pai (left) and Eric Simons (right) working on early prototypes of StackBlitz

Eric Simons

In early 2024, Simons, Pai, and their co-workers probably should have been meeting more with the investment bankers Krane had introduced them to, in an attempt to wring what value remained from the struggling startup.

Instead, like Silicon Valley founders often do, they were noodling with new technology, seeing how OpenAI models performed on coding tasks.

"The code output from their models would break, and the web apps created were buggy and unreliable," Simons said. "We thought it would be years before this improved. So we dropped that side project after about two weeks."

A Bolt from the blue

StackBlitz founders Eric Simons (left) and Albert Pai (right) at Eric's wedding. (Albert was groomsman).

Courtney Yee/Photoflood Studio

Then, in June 2024, OpenAI rival Anthropic launched its Sonnet 3.5 AI model. This was a lot better at coding, and it became the technical foundation for an explosion in AI coding startups, such as Cursor and Lovable, and an important driver of what's now known as vibe coding.

That summer, StackBlitz started working on a new product that relied on Anthropic's breakthrough to bring coding to non-technical users.

On Oct. 3, StackBlitz launched this new service. It was called Bolt.new, a play on the startup's lightning-bolt logo. It took roughly 10 employees three months to create.

Bolt used StackBlitz's technological base, the WebContainers underpinning that allows engineers to work in a browser, and added a simple box on top with a flashing cursor and a question: "What do you want to build?"

A cocktail menu at StackBlitz's "hackathon" event in San Francisco

Alistair Barr/Business Insider

The service offered a tantalizingly simple proposition: Type what you want to create in plain English and Bolt's software would tap into Anthropic's Sonnet model in the background and write the code needed to create a website or a mobile app. And not just simple sites to share your wedding photos. Full applications that let users take valuable actions including logging in, subscribing, and buying things.

Before this, digital products like these required professional software engineers and developers to build them using complex coding languages and tricky tools that were way beyond the capabilities of non-technical people.
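
For readers curious what that hand-off looks like in practice, here is a minimal, hypothetical Python sketch of the general prompt-to-code pattern using Anthropic's public SDK. It is not Bolt's actual implementation; the system prompt and user request below are invented for illustration.

    import anthropic

    # The client reads the ANTHROPIC_API_KEY environment variable by default.
    client = anthropic.Anthropic()

    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # the Sonnet 3.5 model discussed above
        max_tokens=2048,
        system="You are a code generator. Return a single self-contained HTML file.",
        messages=[
            {"role": "user", "content": "Build a landing page with an email signup form."},
        ],
    )

    # The generated code comes back as text in the first content block.
    print(response.content[0].text)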

Simons emailed StackBlitz investors to tell them about Bolt, and asked for their help.

"If you can RT/share on X, and/or share with 3 developers you know, myself and the team would be extremely appreciative!" he wrote, according to a copy of that email obtained by BI.ย 

Crying in a "shed office"

StackBlitz CEO Eric Simons talks during the startup's "hackathon" event in San Francisco

Alistair Barr/Business Insider

The first week that Bolt.new came out, it generated about $1 million of annual recurring revenue, or ARR, a common way cloud software services from startups are measured financially. The next week, it added another $1 million in ARR, and then another, according to Simons.

StackBlitz wasn't, in fact, going to shut down. Instead, it had a hit on its hands.

"I had slept three hours a night for a week straight to get the release out with our team," Simons told BI. "After seeing it live, and people loving it โ€” beyond anything I had ever created before โ€” I cried, alone at my desk in my backyard shed office."

A very different investor update

Albert Pai (left) and Eric Simons (right) a week before they launched StackBlitz.

Eric Simons

On the first day of November, Simons wrote a very different email to his investors. The subject line read, "StackBlitz October Update: $0 to $4m ARR in 30 days."

The number of active Bolt customers surged from about 600 to more than 14,000 in the first few weeks, according to a copy of the email obtained by BI.

A chart showing early usage trends for Bolt.new

Eric Simons/StackBlitz

ARR soared from roughly $80,000 to more than $4 million in the same period.

A chart showing early revenue traction for Bolt.new

Eric Simons/StackBlitz

"You can imagine after years of grinding on our amazing core technology, endlessly searching for a valuable business use case of it, just striking out over and over again, how I and the team feel looking at this graph," Simons wrote. "If you had to put it into a word or two, it'd be something like 'HELL. YES.'"

When talented technologists are pushed to search harder for new ways to monetize their inventions, on a tight deadline, sometimes magic happens, according to Krane from Insight Partners.

"That life-or-death pressure led to a series of rapid pivots that ultimately led to this incredible outcome," he told BI.ย "This company broke every model in terms of growth rate."

A new pricing model

There was so much customer demand for Bolt that StackBlitz raised prices after about a week. The main subscription plan went from $9 a month to $20 a month.

The startup also added new pricing tiers that cost $50, $100, and $200 a month. A few weeks after this change, almost half of Bolt's paying users were on more expensive plans.

An early breakdown of Bolt.new revenue and pricing tiers

Eric Simons/StackBlitz

Simons said StackBlitz may have stumbled upon a new pricing model for AI code-generation services. (Turns out, it had.)

Every time a Bolt user typed in a request, it was transformed into "tokens," the digestible chunks of letters, numbers, and other characters that AI models process. Each token costs a certain amount to process.

Bolt users were sending in so many requests they were blowing through their token limits. So StackBlitz introduced tiered pricing so customers could pay extra to get more tokens.
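
As a rough illustration of how token-metered tiers like this can work, here is a small Python sketch. The token allowances and plan logic below are hypothetical, not StackBlitz's actual numbers.

    # Hypothetical monthly plans: price in dollars -> included tokens (illustrative only).
    TIERS = {20: 10_000_000, 50: 26_000_000, 100: 55_000_000, 200: 120_000_000}

    def cheapest_plan(tokens_needed: int) -> int:
        """Return the cheapest plan whose token allowance covers the month's usage."""
        for price in sorted(TIERS):
            if TIERS[price] >= tokens_needed:
                return price
        return max(TIERS)  # the heaviest users land on the top plan

    # A user burning through roughly 40 million tokens a month gets nudged to the $100 tier.
    print(cheapest_plan(40_000_000))  # -> 100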

"Customers are willing to pay a lot of money for heavier usage of AI inference," Simons told his investors.

One customer was quoted $5,000 and a 2-3 month timeframe by a Ukrainian contractor to develop an app. Instead, she bought Bolt's $50 a month plan, and built the app herself in less than two weeks.

"1/100th of the cost, and 5-10x faster timeline," Simons wrote.

A carrot or a stick?

Albert Pai (left) and Eric Simons (right) launching a feature for their startup StackBlitz at midnight

Eric Simons

Simons also couldn't resist dangling an invitation to stay on the rocket ship, a classic Silicon Valley fundraising carrot. Or was it a stick?

"We've also received substantial inbound interest from VCs and from strategic acquirers," the CEO wrote in the Nov. 1 email to investors. "So we're starting to explore how best to play our hand(s) in the coming weeks/months here." (Earlier this year, reports emerged of a new funding round valuing StackBlitz at roughly $700 million).

With StackBlitz's demise averted, Simons realigned resources to focus on Bolt. The startup hired more staff and added Supabase, a database service that stores transaction data, user registrations, and other crucial information.

It also added Stripe technology so Bolt users can easily accept card and digital payments. StackBlitz also spent heavily to educate customers on how to use Bolt better, running weekly live YouTube video sessions.

Waiting for Anthropic

Anthropic executives, from left to right: Daniela Amodei, Dario Amodei, Jack Clark, and Jared Kaplan.

Anthropic

Bolt was off to the races. But there was still one big hurdle involving Anthropic.

Back in the spring of 2024, an Anthropic cofounder filled out a "Contact Us" form on StackBlitz's webcontainers.io site. The form asked anyone who wanted to license the WebContainers technology to fill out some basic details.

"After we saw that form, we called to chat. He said Anthropic was working on a project and this could help," Simons recalled, without identifying the cofounder.ย 

For the first year, StackBlitz proposed a license for Anthropic with uncapped usage for about $300,000.

"With hindsight, we made them a smokin' deal," Simons said. "We were desperate at that point." Other big customers might follow Anthropic's lead and sign license deals, too, the thinking went.ย 

By October, though, when Bolt had taken off using the same web-based StackBlitz technology, a $300,000 uncapped license suddenly looked like a very bad deal for Simons's startup.

StackBlitz founders and staff hanging with The Chainsmokers

StackBlitz

But there was a catch: Anthropic had to sign the contract by the end of October, otherwise the deal would expire. Simons and his StackBlitz co-workers watched the clock like hawks.

"We were like, god, please don't sign it," Simons told BI.ย 

The deadline finally passed. Anthropic never signed.

Simons doesn't know exactly why the AI lab didn't put pen to paper. However, he noted that Anthropic has "a lot of things going on."

"They were like, 'we might come back to this in the future maybe,'" Simons said.ย "We have a great relationship with Anthropic. They are doing an incredible amount of revenue now, and so are we."ย 

Whatever the reason, StackBlitz was now free to pursue its Bolt growth strategy.

A podcast appearance

Venture capitalist Sarah Guo

Fortune/Reuters

On December 6, Simons appeared on No Priors, a popular AI podcast hosted by Guo and another top AI startup investor, Elad Gil. (Guo is an early backer of StackBlitz.)

The CEO shared that Bolt was generating $20 million in ARR, just a few months after it launched. By the middle of March 2025, Bolt's ARR had jumped to $40 million.

In a recent interview with BI, Simons wouldn't share more revenue details. However, he said StackBlitz planned to announce a new ARR number when Bolt passes triple-digit millions in ARR, meaning more than $100 million.

The service has 5 million registered users now, and Simons said StackBlitz is profitable, growing, and healthy.

There's even a new name for the millions of non-technical users who craft digital offerings through Bolt.

Simons calls them "software composers."

A hackathon meeting

The Chainsmokers perform on stage at StackBlitz's "Hackathon" event in San Francisco.

Alistair Barr/Business Insider

He explained this to me at a "hackathon" event StackBlitz held on May 7 in San Francisco. Hundreds of "composers," along with other customers, partners, and investors, partied late into the evening, with The Chainsmokers DJ-ing. (The duo are StackBlitz investors).

Simons held court, schmoozed and chatted, with a wide grin and seemingly endless energy.

Through the din, I asked him if he was concerned about rival AI coding services nipping at his heels. After all, it had been about seven months since Bolt launched, a lifetime in Silicon Valley AI circles.

Simons seemed unperturbed. He said the years of hard work that StackBlitz spent developing its WebContainers technology gives Bolt an edge that most rivals don't have.

This allows Bolt-based applications to be built and run using the chips on customers' devices, such as laptops. Other AI coding providers must tap a cloud service each time a user spins up a project, which can get very expensive and technical, according to Simons.

"People assume we're a startup that just launched yesterday," he said. "But we're an overnight success, seven years in the making."

A party duel with Figma

Evan Wallace and Dylan Field are the cofounders of Figma.

Figma

The competition doesn't wait long to respond in Silicon Valley, though.

A few San Francisco blocks away, on the same day as Bolt's hackathon party, graphic design giant Figma announced a competing product at its annual Config conference. Figma Make is a new tool that helps developers create web apps using conversational prompts, rather than specialized software code.

Sound familiar?

"We believe there are multiple huge companies to be built here, and that the market for engineering is bigger because of AI," Guo said.

Simons noted that this new Figma service doesn't use the same WebContainers technology that supports Bolt. "We wrote an operating system from scratch that runs in your browser. It's completely different from what Figma has," he argued.

Still, I could tell Figma had made an impact.

"What are the odds that we were throwing a giant party on the same day as their launch across the road? I'll leave that to your writer's imagination," Simons told me, with a giggle.

Before yesterday

A guide to the Nvidia products driving the AI boom and beyond — from data center GPUs to automotive and consumer tech

17 May 2025 at 03:11
Nvidia products, such as GPUs and software, are driving the AI boom.

Brittany Hosea-Small/REUTERS

  • Nvidia products, such as data center GPUs, are crucial for AI, making it the leader in the industry.
  • Nvidia's CUDA software stack supports GPU programming, enhancing its competitive edge.
  • Nvidia's automotive and consumer tech ventures expand its influence beyond data centers.

Nvidia products are at the heart of the boom in artificial intelligence.

Though Nvidia started in gaming and designs semiconductors that touch many diverse industries, the products it designs to go inside high-powered data centers are the most important to the company today, and to the future of AI.

Graphics processing units, designed to be clustered together in dozens of racks inside massive temperature-controlled warehouses, made Nvidia a household name. They also got Nvidia into the Dow Jones Industrial Average, and put it in a position to control the flow of a crucial but finite resource: the computing power behind artificial intelligence.

Nvidia's first generation of chips for the data center launched in 2017. That first generation was called Volta. Along with the Volta chips, Nvidia designed DGX (which stands for Deep GPU Xceleration) systems, the full stack of technologies and equipment necessary to bring GPUs online in a data center and make them work to the best of their ability. DGX was the first of its kind. As AI has become more mainstream, other companies such as Dell and Supermicro have put forth designs for running GPUs at scale in a data center too.

Ampere, Hopper, Blackwell, and Beyond

The next GPU generation designed for the data center, Ampere, which launched in 2020, can still be found in data centers today.

Though Ampere generation GPUs are slowly fading into the background in favor of more powerful models, this generation did support the first iteration of Nvidia's Omniverse, a simulation platform that the company pitches as key to a future where robots work alongside humans doing physical tasks.

The Hopper generation of GPUs is the one that has enabled much of the latest innovation in large language models and broader AI.

Nvidia's Hopper generation of chips, which include the H100 and the H200, debuted in 2022 and remain in high demand. The H200 model in particular has added capacity that has proven increasingly important as AI models grow in size, complexity, and capability.

The most powerful chip architecture Nvidia has launched to date is Blackwell. Jensen Huang announced the step change in accelerated computing in 2024 at GTC, Nvidia's developers conference, and though the rollout has been rocky, racks of Blackwells are now available from cloud providers.

Nvidia unveiled its Blackwell chip at the GTC conference in 2024.

Andrej Sokolow/picture alliance via Getty Images

Inside the data center, Nvidia does have competitors, even though it has the vast majority of the market for AI computing. Those competitors include AMD, Intel, Huawei, custom AI chips, and a cavalcade of startups.

The company has already teased that the next generation will be called "Blackwell Ultra," followed by "Rubin" in 2026. Nvidia also plans to launch a new CPU, or traditional computer chip, alongside Rubin, something it hasn't done since 2022. CPUs work alongside GPUs to triage tasks and direct the firepower that is parallel computing.

Nvidia is a software company, too

None of this high-powered computing is possible without software, and Nvidia recognized this need sooner than any other company.

Development of Nvidia's tentpole software stack, CUDA, or Compute Unified Device Architecture, began as early as 2006. CUDA is software that allows developers to use widely known coding languages to program GPUs; because these chips require layers of code to work, relatively few developers have the skills needed to program them directly.

Still "CUDA developer" is a skillset and there are millions who claim this ability, according to Nvidia.

When GPUs started going into data centers, CUDA was ready, which is why it's often touted as the basis for Nvidia's competitive moat.

Within CUDA are dozens of libraries that help developers use GPUs in specific fields such as medical imaging, data science, or weather analytics.
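
To show the general shape of programming a GPU from a familiar language, here is a minimal sketch using Numba's CUDA JIT, a third-party Python library that compiles Python functions into CUDA kernels. It is not the CUDA C++ toolkit itself, and it needs an Nvidia GPU plus CUDA drivers to run; the kernel and data are purely illustrative.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_kernel(x, y, out):
        i = cuda.grid(1)              # this thread's global index
        if i < out.size:              # guard threads that fall past the end of the array
            out[i] = x[i] + y[i]

    n = 1_000_000
    x = np.ones(n, dtype=np.float32)
    y = np.full(n, 2.0, dtype=np.float32)
    out = np.zeros(n, dtype=np.float32)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    add_kernel[blocks, threads_per_block](x, y, out)  # Numba copies the arrays to and from the GPU

    print(out[:3])  # -> [3. 3. 3.]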

Nvidia began at home

Just two years after Nvidia's founding, the company released its first graphics card in 1995. For more than a decade, the chips mostly resided in homes and offices โ€” used by gamers and graphics professionals.

The current generation includes the GeForce RTX 5090 and 5080, which were released in May 2025. The RTX 4090, 4080, 4070, and 4060 were released in 2022 and 2023. In gaming, GPUs enabled the more sophisticated shadows, textures, and lighting that make games hyperrealistic.

In addition to consumer workstations, Nvidia partners with device-makers like Apple and ASUS to produce laptops and personal computers. Though gaming is now a minority of the company's revenue, the business continues to grow.

Nvidia has also made new efforts to enable high-powered computing at home for the machine-learning obsessed. It launched Project DIGITS, a personal-sized supercomputer capable of working with some of the largest large language models.

Nvidia in the car

Nvidia is angling to be a primary player in a future where self-driving cars are the norm, but the company has also been in the automotive semiconductor game for many years.

Nvidia first launched its DRIVE PX, for developing autopilot capabilities for vehicles, in 2015.

Kim Kulish/Corbis via Getty Images

It launched Nvidia DRIVE, a platform for autonomous vehicle development, in 2015, and over time it developed or acquired technologies for mapping, driver assist, and driver monitoring.

The company designs various chips for all of these functions in partnership with MediaTek and Foxconn. Nvidia's automotive customers include Toyota, Uber, and Hyundai.

Duolingo CEO says there may still be schools in our AI future, but mostly just for childcare

17 May 2025 at 02:00
Luis von Ahn, CEO of Duolingo

Duolingo

  • Luis von Ahn envisions AI transforming education, making it more scalable than human teachers.
  • Schools may focus mostly on childcare duties while AI provides personalized learning, he said.
  • Regulation and cultural expectations may slow AI's integration into education systems.

What happens to schools if AI becomes a better teacher?

Luis von Ahn, CEO of Duolingo, recently shared his vision for the future of education on the No Priors podcast with venture capitalist Sarah Guo, and it centered on AI transforming the very role schools will play.

"Education is going to change," von Ahn said. "It's just a lot more scalable to teach with AI than with teachers."

That doesn't mean teachers will vanish, he emphasized. Instead, he believes schools will remain, but their function could shift dramatically. In von Ahn's view, schools may increasingly serve as childcare centers and supervised environments, while AI handles most of the actual instruction.

"That doesn't mean the teachers are going to go away. You still need people to take care of the students," the CEO said on the podcast. "I also don't think schools are going to go away because you still need childcare."

In a classroom of 30 students, a single teacher can struggle to offer personalized, adaptive learning to each person. AI, on the other hand, will be able to track individual performance in real time and adjust lesson difficulty based on how well each student is grasping the material, according to von Ahn.

Imagine a classroom where each student is "Duolingo-ing" their way through personalized content, while a teacher acts as a facilitator or mentor. "You still need people to take care of the students," he noted, "but the computer can know very precisely what you're good at and bad at, something a teacher just can't track for 30 students at once."
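
To make the idea concrete, here is a toy Python sketch of the kind of per-student difficulty tracking described above. The thresholds and difficulty scale are invented for illustration; this is not Duolingo's actual logic.

    def next_difficulty(current_level: int, recent_answers: list[bool]) -> int:
        """Raise difficulty after a strong streak, lower it when a student struggles."""
        accuracy = sum(recent_answers) / len(recent_answers)
        if accuracy > 0.85:
            return min(current_level + 1, 10)   # cruising: serve harder material
        if accuracy < 0.60:
            return max(current_level - 1, 1)    # struggling: ease off
        return current_level                    # otherwise hold steady

    print(next_difficulty(4, [True, True, True, False, True]))  # -> 5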

Education is slow to change, so this may take many years, von Ahn explained, noting that regulation, legacy systems, and cultural expectations all serve as drag forces. Still, he sees a future where AI augments or even supplants parts of formal education, especially in countries that need scalable education solutions fast.

It's a provocative vision, one that raises deep questions about the future of learning and what we expect from education in an AI-driven world.

Sign up for BI's Tech Memo newsletter here. Reach out to me via email at [email protected].

Ignore AI and risk becoming irrelevant, warns Eric Schmidt — 'Adopt it, and adopt it fast'

17 May 2025 at 01:53
Eric Schmidt, who was Google CEO for a decade, says ignoring AI could make workers irrelevant.

REUTERS/Beck Diefenbach

  • Eric Schmidt warned that anyone, from artists to doctors, who doesn't embrace AI will be left behind.
  • The former Google CEO recently used AI to get up to speed quickly on a rocket company he bought.
  • Schmidt warned that the pace of change could catch many off guard.

Eric Schmidt thinks every worker, from CEOs to artists, needs to get to grips with AI, or risk being left behind.

The former Google CEO argued in a recent TED interview that the speed of AI progress was forcing a fundamental shift in every job, from the arts to business to science.

"Each and every one of you has a reason to use this technology," Schmidt said, referring to AI.

"If you're an artist, a teacher, a physician, a businessperson, a technical person, if you're not using this technology, you're not going to be relevant compared to your peer groups and your competitors and the people who want to be successful. Adopt it, and adopt it fast."

Schmidt, who ran Google from 2001 to 2011, says AI tools let anyone get up to speed in almost any field. He pointed to his recent decision to buy a rocket company despite knowing little about aerospace.

"It's an area that I'm not an expert in, and I want to be an expert, so I'm using deep research," Schmidt said, who was named CEO at Relativity Space in March, a California rocket startup vying to compete with SpaceX.

He said this kind of rapid learning was just the beginning. Schmidt pointed to studies that estimate AI could drive a "30% increase in productivity" annually, a jump so dramatic that "economists have no models for what that kind of increase looks like."

While predicting that entire industries could be disrupted as AI simplifies or automates work, Schmidt said some professions will evolve rather than disappear.

"Do you really think that we're going to get rid of lawyers? No, they're just going to have more sophisticated lawsuits," Schmidt said.

'Marathon, not a sprint'

The pace of change may catch many off guard: "As this stuff happens quicker, you will forget what was true two years ago or three years ago. That's the key thing. So my advice to you all is ride the wave, but ride it every day."

When asked if he had any advice for those feeling overwhelmed by the pace of change, Schmidt, who now advises governments and startups on tech strategy, offered some perspective from his own experience.

"One thing to remember is that this is a marathon, not a sprint," he said. "Every day you get up, and you just keep going."

At the AI summit in Paris in February, Schmidt criticized Europe's AI laws as too strict but insisted that regulation was essential. "It's really important that governments understand what we're doing and keep their eye on us," he told BBC News.

He's made similar warnings before, calling in December for "meaningful control" over military AI.

OpenAI's new agent tool Codex is for developers, but it can also help you order takeout

16 May 2025 at 15:17
Sam Altman's OpenAI launches Codex, an AI tool to write code and fix bugs for developers.

Jonathan Ernst/REUTERS

  • OpenAI launched Codex, an AI tool to write code and fix bugs for developers.
  • As an AI agent, Codex could also help users with an Amazon order or a dinner reservation.
  • Codex and GPT-4.5, which launched in February, are both available through OpenAI's $200-a-month ChatGPT Pro plan.

OpenAI on Friday rolled out a powerful new tool for software developers, as the company pushes further into automating coding tasks with AI.

The new product, called Codex, is an AI agent designed to help programmers write code, fix bugs, and run tests, often simultaneously.

"Technical teams at OpenAI have started using Codex as part of their daily toolkit," OpenAI said in a blogpost. "It is most often used by OpenAI engineers to offload repetitive, well-scoped tasks, like refactoring, renaming, and writing tests, that would otherwise break focus."

"When uncertain or faced with test failures, the Codex agent explicitly communicates these issues, enabling users to make informed decisions about how to proceed," OpenAI added.

Unlike traditional chatbots that respond to prompts and generate responses mostly in text, AI agents like Codex can interact with other software and online services, such as helping you with a DoorDash order or booking a dinner reservation.

The Codex rollout came after OpenAI launched GPT-4.5 in February. A livestream demo highlighted its improved reasoning, intuition, and reduced hallucinations.

CEO Sam Altman described it as "the first model that feels like talking to a thoughtful person," but also said that its intelligence and nuance come at a steep computational cost. Due to GPU shortages, GPT-4.5 was initially available only to $200-per-month ChatGPT Pro users.

Codex is now available to subscribers of OpenAI's ChatGPT Pro plan, which costs $200 a month. OpenAI also said it will eventually bring Codex to its other premium offerings.

OpenAI did not immediately respond to a request for comment.

WNBA monitoring fans, with AI, to crack down on 'hate speech' amid Caitlin Clark-Angel Reese rivalry renewal

The WNBA is utilizing new technology this season to squash "hate speech" among its fans.

The league announced a new initiative titled "No Space for Hate" this week ahead of the season tip-off. The campaign will include the use of AI social media monitoring tools that will help the league enforce a revised code of conduct.

"As part of the comprehensive plan, the WNBA is rolling out an AI-powered technology solution to monitor social media activity, in partnership with players and teams, to help protect the community from online hate speech and harassment," the announcement read.ย 

Fox News Digital has reached out to the WNBA for further clarification about how the technology will be used, but has not received a response.

A revised WNBA fan code of conduct includes regulations for fans on social media and threatens to cut violators off from official content if those rules are broken. The new policy lists racist, homophobic, sexist, sexual, threatening or libelous content as "subject to blocking or deletion."

"Repeat violations of these guidelines may result in the violator no longer being able to follow our news, comment on our posts or send us messages," the policy reads. "Additionally, any direct threats to players, referees or other league and team personnel may be referred to law enforcement and may result in the violator being banned from all WNBA arenas and events."ย 

The league is set to put these new practices into place ahead of a season that will see phenom Caitlin Clark take on arch-rival Angel Reese on Saturday for their season-opener.

Clark's Indiana Fever will take on Reese's Chicago Sky at Gainbridge Fieldhouse on Saturday, renewing the hottest rivalry in women's basketball. The rivalry between Clark and Reese has been a hotbed of intense controversy, often igniting racial debates, dating back to their matchup in the 2023 NCAA championship game.

Reese has lambasted Clark's fans as "racist" and even alleged they created AI-generated explicit images of the Sky star and sent them to her family members.

"I think it's really just the fans, her fans, the Iowa fans, now the Indiana fans, that are really just, they ride for her, and I respect that, respectfully. But sometimes it's very disrespectful. I think there's a lot of racism when it comes to it," Reese said in the first episode of her podcast in early September.ย 

"Multiple occasions, people have made AI-images of me naked. They have sent it to my family members. My family members are like uncles, sending it to me like, โ€˜Are you naked on Instagram?โ€™

Clark had also been on the receiving end of racial comments throughout her rookie season in 2024, sometimes from figures in the mainstream media.

ESPN's Pat McAfee referred to Clark as a "white b----" during an episode of his nationally televised show on June 3. McAfee used the term during a discussion about how much popularity Clark was bringing to the league compared to other players, saying, "I would like the media people that continue to say, 'This rookie class, this rookie class, this rookie class.' Nah, just call it for what it is. There's one White b---- for the Indiana team who is a superstar." He later apologized.

In May, "The View" host Sunny Hostins said during an episode of that show that Clark's popularity was due, in part, to "White privilege."ย 

In late September, Clark herself was forced to address allegations that her fans had acted in racist ways toward Connecticut Sun players during the Fever's playoff series.

Connecticut Sun star Alyssa Thomas accused Indiana Fever fans of racist behavior to reporters after the Sun's Game 2 win, while her teammate, DiJonai Carrington, revealed on Instagram an email she had received filled with racial slurs.

"We've been professional throughout the whole entire thing, but I've never been called the things that I've been [called] on social media, and there's no place for it," Thomas said. "Basketball is headed in a great direction, but no, we don't want fans that are going to degrade us and call us racial names."

The WNBA later put out a statement addressing the allegations, and Clark was asked about it during Indiana's exit interviews.

"Those aren't fans. Those are trolls," Clark said when asked about it.

"Nobody in our league should be facing any sort of racism, disrespectful or hurtful comments and threats."

An exclusive look inside the hottest legal AI startup

16 May 2025 at 08:21
Harvey CEO Winston Weinberg

Reuters/Fortune

Hello, and welcome to your weekly dose of Big Tech news and insights. I'm your host Alistair Barr. My dog Maisie is having surgery soon, so keep her in your thoughts.

I recently met a friend for coffee, and he shared some surprising news. After working in cloud computing for roughly 20 years, he's moving from Silicon Valley to the UK. Would you leave Silicon Valley right now? Where would you go? (Send me a note if you want to share.) If you're interested in living in other places, check out our stories on moving to India, Canada, and Spain.


Agenda

  • This week, we're talking about how generative AI is changing professional services, especially law firms and consulting.
  • We'll also take a look at the Silicon Valley chatter right now, including Meta's turning point, Google's pickle, and Microsoft's new AI vision.
  • And I'll experiment with an AI tool and show you the results, something I hope to do each week, and get your responses and recommendations.

Central story unit

Harvey cofounders Winston Weinberg and Gabe Pereyra

Harvey

I went to a party in San Francisco recently. Yeah, I know, crazy. Actually, it was a bit wild, in a lovable, techy way.

StackBlitz threw a "hackathon" event to show off its AI coding service called Bolt.new. It's a hot product right now, and the party was packed. The Chainsmokers DJ-ed, and I zipped around chatting to as many people as possible, with my tech buddy Dave in support (if questions got too technical!).

I met one person who said she worked at Harvey, a startup that's using generative AI to help lawyers operate more efficiently and automate parts of legal work.

I asked what she did, and she said she was a lawyer. I had assumed she'd be a software engineer, given that she works for an AI startup. But no, she's a fully qualified attorney who, instead of advising clients, helps train Harvey's AI models to be better at law.

Right on cue, BI's Melia Russell has an in-depth, exclusive look at Harvey. She visited founder Winston Weinberg and learned some important scoopy stuff about the company's latest moves and how it's tackling growing competition in the suddenly hot legal tech market.

The legal profession is pretty well suited to large language models and generative AI. It's based mostly on rules, laws, and other dense, complicated text. Legions of law firm associates usually spend years learning how to parse and interpret this information for clients.

Now, all this content, along with decades of legal decisions and other records, is being used as training data to develop specialized AI models and tools. AI needs high-quality training data, and in the legal profession there's a ton of it.

The end result is tools like Harvey that can automate some of the busy work that previously bogged lawyers down, and could change how the entire profession operates.

You know what other industry has AI potential? Consulting. The Big Four (Deloitte, EY, PwC, and KPMG) are investing in AI agents to "liberate" employees from thousands of hours of work a year. For instance, generative AI tools are pretty good at creating PowerPoint slide presentations. Do you feel liberated?


News++

Other BI tech stories that caught my eye lately:


Eval time

My take on who's up and down in the tech industry right now, including updates on Big Tech employee pay. This is based on an evolving, unscientific system I developed myself. (A bit like AI model benchmarking these days!)

UP: Tim Cook probably breathed a sigh of relief after the US and China paused those really high tariffs. Although it's not out of the woods yet, Apple stock jumped this week.

DOWN: Google is in an antitrust quagmire, and ChatGPT may be eating into its prized Search business. Take a look at this metric. It's not great. We'll see if the company has answers next week at its big I/O conference.

COMP UPDATE: This data from Levels.fyi made me look. Software engineers (SWEs) who aren't working in AI may not see their compensation rise as much as it once did:

A chart showing tech compensation trends

Levels.fyi


From the group chat

Other Big Tech stories I found on the interwebs:


AI playground

This is the time each week when I try an AI tool. Is it better than what I could have done myself? Was it faster and more efficient than asking a technical colleague for help? I need you, dear reader, to help. What am I doing wrong? What should I do, or use, next week? Let me know.

I started off simple. I asked ChatGPT (Enterprise 4o) to create an image that sums up the past week in tech. I told it to use Business Insider style. Here's what it came up with:

An image generated by ChatGPT showing tech news of the week

Alistair Barr/ChatGPT

This is after a couple of pretty bad initial attempts. The image is not bad, but not amazing. The Samsung blue blob logo is floating by itself down there. Why? Who knows? It's true that Google is prepping for I/O. It also seems to have mixed up Apple and Samsung? And I couldn't find news related to new real-time features added recently to OpenAI's GPT-4o model. (I asked OpenAI's PR dept and will let you know if they respond.) It used an old BI logo, too.


User feedback

I would love to hear from anyone who reads this newsletter. What am I doing wrong? What do you want to see more of? Specifically, though: This week, I want to hear back from folks who work in professional services, such as lawyers and consultants.

Attorneys: What's your experience been with Harvey AI and similar AI tools so far? Has this tech helped you get stuff done faster and better for clients? Or not? Are you worried legal AI tools might replace you in the end? Will it change the law firm business model? Or is this another tech flash in the pan that won't amount to much? Let Melia Russell and me know at [email protected] and [email protected].

Same question for people who work at consulting firms. Any insights or views, reach out to my excellent colleague Polly Thompson at [email protected].

Internal Microsoft memo reveals plans for a new 'Tenant Copilot,' and an 'Agent Factory' concept

16 May 2025 at 02:00
Microsoft CEO Satya Nadella

Microsoft

  • Microsoft is working on a new "Tenant Copilot" offering, according to an internal memo.
  • The company is also developing new ways for customers to manage AI agents alongside human staff.
  • At the time the memo was written, Microsoft was planning to announce the developments at next week's Build conference.

Microsoft is working on a new Copilot and could unveil it at the company's Build conference next week, according to an internal memo viewed by Business Insider.

The software giant also has grand "Agent Factory" ambitions, and is developing new ways for corporate customers to manage AI agents alongside human employees, the memo shows.

The Tenant Copilot project is run by the organization behind the Microsoft 365 business. This new Copilot is designed to "rapidly channel an organization's knowledge into a Copilot that can 'talk,' 'think,' and 'work' like the tenant itself," according to an April 14 email sent by Microsoft executive Jay Parikh.

A "tenant" is the term used to describe corporate users of the Microsoft 365 suite of business applications. A Copilot that has access to these tenants would essentially be able to access customer information stored within their Microsoft 365 accounts.

Parikh explained in the email that Microsoft is using different AI techniques to power the Tenant Copilot feature. Supervised fine-tuning helps "to capture a tenant's voice." The tool will also tap into OpenAI's o3 reasoning model "to shape its thought process." Lastly, "agentic" fine-tuning will "empower real-world tasks," he wrote.

Microsoft at the time planned to offer a public preview of Tenant Copilot at Build, according to the memo. The company sometimes changes what it plans to announce at the conference.

Meanwhile, the CoreAI Applied Engineering team is also "working to launch a collaborative go-to-market plan for top-tier customers to drive successful adoption of our AI cloud," Parikh added in the memo.

Microsoft declined to comment.

Parikh's 'Agent Factory' concept

Parikh is the former head of engineering at Facebook. Microsoft CEO Satya Nadella hired Parikh in October and tapped him in January to run a new group called CoreAI Platform and Tools focused on building AI tools. The group combined Microsoft's developer division and AI platform team and is responsible for GitHub Copilot, Microsoft's AI-powered coding assistant.

This year's Build event will be Parikh's first at the helm of this new organization. In the email to the nearly 10,000 employees in the organization, Parikh discussed a new "Agent Factory" concept. That's likely a nod to cofounder Bill Gates, who talked about Microsoft being a "software factory."

"Building our vision demands this type of culture โ€” one where Al is embedded in how we think, design, and deliver," Parikh wrote. "The Agent Factory reflects this shift โ€” not just in what we build, but in how we build it together. If we want every developer (and everyone) to shape the future, we have to get there first."

Parikh has been trying to get teams collaborating on AI agents across organizations, through a "new type of cross-product review" that pairs security services such as Entra and Intune with "high-ambition agent efforts" within LinkedIn, Dynamics, and Microsoft 365.

Meet your new AI agent co-worker

Part of this effort focuses on how to manage AI agents alongside human employees.

Microsoft, for example, has been working on how to handle identity management for AI agents, according to the memo. This technology usually controls security access for human users. Now, the company is trying to spin up a similar system for AI agents.

"Our hypothesis is that all agent identities will reside in Entra," Parikh wrote, although "not every agent will require an identity (some simpler agents in M365 or Studio, for instance, don't need one)."

Microsoft is taking a similar approach to M365 Admin Center, which is used by IT administrators to manage employee access to applications, data, devices, and users. Future versions of this system will accommodate AI agents as "digital teammates" of human workers, according to Parikh's memo.

Microsoft's Copilot Analytics service is also expanding into broader workforce analytics to give corporate customers a view of how work gets done both by humans and AI agents.

And Parikh aims to make Azure AI Foundry, its generative AI development hub, "the single platform for the agentic applications that you build," he wrote. "At Build, we will have the early versions of this, and we'll iterate quickly to tackle a variety of customer use cases."

Have a tip? Contact this reporter via email at [email protected] or Signal at +1-425-344-8242. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

Meta's Llama has reached a turning point with developers as delays and disappointment mount

Mark Zuckerberg, a white man in a grey polo shirt and dark pants sits in a white chair holding a microphone in front of a dark purple background.
Almost a year passed between the release of Meta's Llama 3 and Llama 4. A lot can happen in a year.

AP Photo/Jeff Chiu

  • Meta's Llama 4 models had a lukewarm start and haven't seen as much adoption as past models.
  • The muted reception of Meta's latest models has some questioning its relevance.
  • Developers told Business Insider Llama slipped from the cutting edge, but it still plays a key role.

At LlamaCon, Meta's first-ever conference focused on its open-source large language models held last month, developers were left wanting.

Several of them told Business Insider they expected a reasoning model to be announced at the inaugural event and would have even settled for a traditional model that can beat alternatives like DeepSeek's V3 and Qwen, a group of models built by Alibaba's cloud firm.

A month earlier, Meta released the fourth generation of its Llama family of LLMs, including two open-weight models: Llama 4 Scout and Llama 4 Maverick. Scout is designed to run on a single graphics processing unit, but with the performance of a larger model, and Maverick is a larger version meant to compete with other foundation models.

Alongside Scout and Maverick, Meta also previewed Llama 4 Behemoth, a much larger "teacher model" still in training. It is designed for distillation, which enables the creation of smaller, specialized models from a larger one.
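
To make the idea concrete, here's a minimal, hypothetical sketch of how distillation works in general: a smaller "student" model is trained to match the softened output probabilities of a larger "teacher" model, rather than only the raw training labels. This is a generic Python illustration with made-up numbers, not Meta's actual training code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences across all classes, not just its top pick.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's softened distribution and the
    # student's: the student is rewarded for mimicking the teacher.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

# Toy example: the teacher is confident about class 0; the student is not yet.
teacher = np.array([4.0, 1.0, 0.5])
student = np.array([1.0, 1.2, 0.9])
print(distillation_loss(student, teacher))  # larger value = student far from teacher
```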

The Wall Street Journal reported on Thursday that Behemoth would be delayed, and that the entire suite of models was struggling to compete. Meta said these models achieve state-of-the-art performance.

Meta's Llama used to be a big deal. But now it's sliding farther down the AI world's leaderboards, and to some, its relevance is fading.

"It would be exciting if they were beating Qwen and DeepSeek," Vineeth Sai Varikuntla, a developer working on medical AI applications, told BI at the conference last month. "Qwen is ahead, way ahead of what they are doing in general use cases and reasoning."

The disappointment reflected a growing sense among some developers and industry observers that Meta's once-exciting open-source models are losing momentum, both in technical performance and developer mindshare.

While Meta continues to tout its commitment to openness, ecosystem-building, and innovation, rivals like DeepSeek, Qwen, and OpenAI are setting a faster pace in areas like reasoning, tool use, and real-world deployment.

Meta aimed to reassert its leadership in open-source AI. Instead, it raised fresh questions about whether Llama is keeping up.

"We're constantly listening to feedback from the developer community and our partners to make our models and features even better and look forward to working with the community to continue iterating and unlocking their value," Meta spokesperson Ryan Daniels told BI.

A promising start

In 2023, Nvidia CEO Jensen Huang called the launch of Llama 2 "probably the biggest event in AI" that year. By July 2024, the release of Llama 3 was held up as a breakthrough โ€” the first open large language model that could compete with OpenAI.

Llama 3 created an immediate surge in demand for computing power, SemiAnalysis Chief Analyst Dylan Patel told BI at the time. "The moment Meta's new model was released, there was a big shift. Prices went up for renting GPUs."

Google searches containing "Meta" and "Llama" similarly peaked in late July 2024.

Llama 3 was an American-made, open, top-of-the-line LLM. Though Llama never consistently topped the leaderboard on industry benchmarks, it's traditionally been influential โ€” relevant.

But that has started to change.

The Llama 4 models introduced a new-to-Meta architecture called "mixture of experts," which was popularized by China's DeepSeek.

The architecture allows the model to activate only the most relevant expertise for a given task, making a large model function more efficiently, like a smaller one.
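
As a rough, hypothetical illustration of the routing idea (not Meta's implementation): a small gating network scores a set of expert sub-networks for each input, and only the top-scoring experts actually run, so just a fraction of the model's parameters are used per token.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, DIM, TOP_K = 4, 8, 2

# Each "expert" here is just a small linear layer standing in for a feed-forward block.
experts = [rng.normal(size=(DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate_weights = rng.normal(size=(DIM, NUM_EXPERTS))

def moe_forward(x):
    # 1. The gate scores every expert for this token.
    scores = x @ gate_weights
    # 2. Keep only the top-k experts (sparse activation).
    top = np.argsort(scores)[-TOP_K:]
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    # 3. Run just those experts and mix their outputs by the gate weights.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=DIM)
print(moe_forward(token).shape)  # (8,) — same shape as the input, but only 2 of 4 experts ran
```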

Llama 4's debut quickly met criticism when developers noticed that the version Meta used for public benchmarking was not the same version available for download and deployment. This prompted accusations that Meta was gaming the leaderboard. The company denied this, saying the variant in question was experimental and that evaluating multiple versions of a model is standard practice.

While competing models raced ahead, Meta looked rudderless.

"It did seem like a bit of a marketing push for Llama," said Mahesh Sathiamoorthy, cofounder of Bespoke Labs, a Mountain View-based startup that creates AI tools for data curation and training LLMs, previously told BI.

There's no single resource that measures which model or family of models is winning with developers. But what data exists shows Llama's latest models aren't among the leaders.

Qwen, in particular, hovers around the top of leaderboards across the internet.

Artificial Analysis is a site that ranks models based on performance, and when it comes to intelligence, it places Llama 4 Maverick and Scout just above OpenAI's GPT-4 model, released at the end of last year, and below xAI's Grok and Anthropic's Claude.

OpenRouter offers a platform for developers to access various models and publishes leaderboards for model use through its own API. It shows Llama 3.3 among the top 20 models used as of early May, but not Llama 4.

"They wanted to cast a wider net and appeal to enterprises, but I think the technical community was looking for more substantial model improvements," Sathiamoorthy said.

More than a model

The standard evaluations of Llama 4 released to the public were lackluster, according to experts.

But the muted enthusiasm for Llama 4, compared to Llama 3, goes beyond the model itself, AJ Kourabi, an analyst at SemiAnalysis focused on models, told BI.

"Sometimes it's not the evals that necessarily matter. It's the tool-calling and capability for the model to extend beyond just being a chatbot," Kourabi said.

"Tool-calling" is a model's ability to access and instruct other applications on the internet or on a user's computer or device. It's essential for agentic AI, which promises to eventually book our airline tickets and file our work expenses.

Meta told BI that Llama models support tool-calling, including through its API in preview.

Theo Browne, a YouTuber and developer whose company, Ping, builds AI software for other developers, told BI that tool-calling is increasingly important as agentic tools are coming into focus, and it is almost a requirement for cutting-edge relevance.

Anthropic was an early leader in this, and other proprietary model makers, like OpenAI, are catching up, Browne said.

"Having a model that will reliably call the right tool to get the right information to generate the right response is incredibly valuable, and OpenAI went from kind of ignoring this to seemingly being all in on tools," Browne said.

Kourabi says the biggest indicator that Meta has fallen behind is the absence of a reasoning model, perhaps an even more fundamental element in the agentic AI equation.

"The reasoning model is the main thing, because when we think about what has unlocked a lot of these agentic capabilities, it's the ability to reason through a specific task, and to decide what to do," he said.

Llama: Who is it good for?

Some see Llama 4 as evidence that Meta is falling behind, but AI practitioners say that, like Meta's foundational product, Facebook, Llama is still almost impossible to write off.

Nate Jones, the head of product at RockerBox, offers advice to young developers through his Substack, YouTube, and TikTok. He encourages them to put Llama and any other models they're intimately familiar with on their rรฉsumรฉs.

"In 2025, people will already have Llama in their stack and they will look for people who have worked with it," Jones said.

Paul Baier, the CEO and principal analyst at GAI Insights, consults with companies on AI projects, with an emphasis on non-tech companies. He said Llama is likely to stay in the mix of many, if not most, of his clients.

"Enterprises continue to see that open source is an important part to have in the mix of their intelligence," Baier told BI. Open models, Llama most prominent among them, can handle less complicated tasks and keep costs down. "They want closed and open," Baier said.

And that's what many developers think too, said Baris Gultekin, Snowflake's head of AI.

"When our customers evaluate models, they are rarely looking at these benchmarks," Gultekin said. "Instead, they'll evaluate these models on their own problem statement. Given the very low cost, Llama is sufficient."

At Snowflake, Llama powers workloads like summarizing sales call transcripts and extracting structured information from customer reviews. At data platform company Dremio, Llama generates structured query language code and writes marketing emails.

"For 80% of applications, the model probably doesn't matter," Tomer Shiran, cofounder and chief product officer at Dremio, told BI. "All the models now are good enough. OpenAI, Llama, Claude, Gemini โ€” they all meet a specific need that the user has."

Llama may be slipping away from direct competition with the proprietary models, at least for now. But other analysis suggests that the field is diversifying, and Llama's role in it is solidifying.

Much of the time, benchmarks are not what drives model choice.

"Everybody's just testing it on their own use cases," said Shiran. "It's the customer's data, and it's also going to keep changing."

Gultekin added: "They usually make these decisions not as a one-time thing, but rather per use case."

Llama may be losing developers like Browne, who breathlessly await the next toy from a company on the frontier. But it hasn't yet lost the rest of the developer world, the part that's just trying to make AI-powered tools that work. That means Llama's potential could still be intact.

It's also part of an open-source playbook Zuckerberg has used since 2013, when the company launched React, a library for building user interfaces that's still in use.

PyTorch is a machine learning framework created in 2016 that overtook Google's similar effort. Meta transferred PyTorch to the Linux Foundation in 2022 to maintain its neutrality.

"If Meta anchors another successful ecosystem, Meta gets a lot of labor from the open-source community," RockerBox's Jones said. "Zuckerberg gets tailwinds that he wouldn't have had otherwise."

Have a tip? Contact this reporter via email at [email protected] or Signal at 443-333-9088. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Have a tip? Contact this reporter via email at [email protected] or Signal at +1408-905-9124. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

A European defense startup is making drone submarines that can lurk underwater for 3 months at a time

15 May 2025 at 02:10
Helsing sea drones
Helsing sea drones being tested near Portsmouth Naval Base in the UK.

Helsing

  • German defense tech startup Helsing is working on a fleet of AI-equipped underwater sea drones.
  • It said they can operate for three-month stretches, with hundreds controlled by a single operator.
  • The news comes as NATO seeks to shore up the defense of vital subsea cable infrastructure.

German military tech startup Helsing said it is readying a fleet of undersea drones amid intensifying threats to subsea cables, adding that they'd be ready to deploy in around a year.

The uncrewed submarine, the SG-1 Fathom, would be able to patrol and stay underwater for up to three months at a time, it said.

In a statement Tuesday, the company said that its AI Lura software detects subsea threats and can identify ship and submarine models from their underwater sound patterns.

It claimed the software operates 40x faster than human operators, and is 10x quieter than other models, meaning it's better able to evade detection.

"We must harness new technologies to keep pace with the threats against our critical infrastructure, national waters, and way of life," said Gundbert Scherf, cofounder and co-CEO of Helsing.

Hundreds of the drones could be deployed at the same time, controlled by a single operator, the company said, monitoring undersea regions for threats and relaying live data.

Bryan Clark, a senior fellow at the Hudson Institute in Washington, DC, told BI that underwater drones such as the ones being developed by Helsing "would be effective at monitoring underwater infrastructure."

He added that each drone's detection range is "quite short," but that the system is designed to manage dozens or even hundreds at a time.

Clark also said that underwater drones could be vulnerable to electronic jamming, which could impact their navigation systems and cause them to get "lost."

Helsing's announcement comes amid intensifying threats to networks of subsea cables crucial for carrying internet data.

European officials blamed Russia for a series of subsea cable severances in the Baltic late last year and in January, which some said was part of the Kremlin's "hybrid warfare" campaign.

In September, Business Insider reported that a specialist Russian submarine sabotage unit had been surveilling subsea cables.

NATO has formed its own special unit to better defend critical underwater infrastructure, and has also said it's developing new satellite technology so that data can be rerouted in the event of a massive disruption.

European militaries are also testing and deploying sea drones as part of their bid to increase undersea monitoring and shore up defenses.

The UK's military, as part of its Project Cabot, is testing new drone and AI technology to monitor underwater infrastructure, and is working with Helsing on the project, The Times of London reported Tuesday.

Helsing has already produced AI systems and aerial drone systems for European militaries, and was valued at $5.4 billion during a funding round last year.

It said it had developed the sea drones following interest from several navies, and had tested them at a naval base in the south of England.

"Deploying AI to the edge of underwater constellations will illuminate the oceans and deter our adversaries, for a strong Europe," Helsing's Scherf said.

Read the original article on Business Insider

Bills recruit NBA legend Allen Iverson for creative NFL schedule release

NFL schedule release videos are always fun to see each year, and the Buffalo Bills are always among the teams thinking outside the box.

This year, the Bills had the ultimate play on words when their video began with general manager Brandon Beane calling MVP quarterback Josh Allen, asking if he had any ideas for how to release the schedule.

"Just use AI," Allen told Beane. "Thatโ€™s what everybodyโ€™s doing."

Artificial intelligence has become a useful tool, but what Allen was doing to start the video hinted at how Beane took his suggestion.

Allen was shooting basketballs instead of throwing footballs, and Beane channeled a different AI.

Beane called NBA Hall of Fame point guard Allen Iverson, nicknamed "AI."

"Yeah, I donโ€™t know what Iโ€™m doing," Iverson told Beane over a web call. "Why am I here?"

"We do want you to reveal the Buffalo Billsโ€™ 2025 schedule," Beane replied.ย 

Iverson liked that Allen was the player who gave Beane the idea, though he did ask if the quarterback meant using artificial intelligence.ย 

"Youโ€™re the only AI I know," Beane answered.ย 

So, Iverson obliged and was happy to do so. He held up a piece of paper with all 17 games and the bye week listed.

While Iverson did the schedule release, he wasn't thrilled with Beane's final request.

"Can I get a 'Go Bills?'" Beane asked.

"Go Josh Allen," Iverson, a die-hard Dallas Cowboys fan, replied with a smile.

Buffalo opens its 2025 campaign on "Sunday Night Football," facing the Baltimore Ravens at home. Another notable game is a Nov. 2 rematch of the AFC championship game against the Kansas City Chiefs in Week 9 at 4:25 p.m. at home.

Follow Fox News Digital's sports coverage on X, and subscribe to the Fox News Sports Huddle newsletter.

Amazon sees warehouse robots 'flattening' its hiring curve, according to internal document

14 May 2025 at 10:39
Amazon's new Vulcan robot
Amazon's Vulcan robot.

Amazon

  • Amazon has a growing fleet of warehouse robots to enhance safety and efficiency.
  • In an internal document, Amazon said these robots are critical to flattening its hiring curve.
  • Amazon automation could save the company $10 billion annually by 2030, Morgan Stanley estimates.

When Amazon unveiled its Vulcan touch-sensing warehouse robot last week, it framed the technology as a way to make frontline jobs safer and easier.

What the company didn't mention is a broader ambition: using Vulcan and its expanding fleet of warehouse robots to reduce its need to hire a lot more humans.

An internal document obtained by Business Insider describes Amazon's long-term vision of automating many warehouse tasks. The document, dated late last year, said Vulcan and similar robots are "critical to flattening Amazon's hiring curve over the next ten years" as the company builds "the world's most advanced Fulfillment Network."

This suggests Amazon is trying to use automation to slow the rate of new hiring, rather than replace existing workers. People in senior positions at the company who are familiar with the matter say the automation push is also a response to growing costs and possible labor shortages in Amazon's warehouses.

The document, marked "Amazon Confidential," was produced by Amazon's retail team to review various important projects. It also outlined several AI initiatives designed to further optimize warehouse efficiency and employee productivity.

'Higher-value tasks'

Amazon's Vulcan robot in action
Amazon's Vulcan robot in action.

Amazon/Cover Images via Reuters Connect

The company still plans to "have a lot of people for a long period of time," an Amazon spokesperson told BI, adding that many future roles would involve "higher-value tasks."

"Our robotics solutions are designed to automate tasks in an effort to continue improving safety, reducing repetition, and freeing our employees up to deliver for customers in more skilled ways," the spokesperson said. "Since introducing robots within Amazon's operations, we've continued to hire hundreds of thousands of employees to work in our facilities and created many new job categories worldwide, including positions like flow control specialists, floor monitors, and reliability maintenance engineers."

The spokesperson also cautioned against drawing conclusions from a specific internal company document.

A leader in automation

Amazon has been a leader in warehouse automation for years, having acquired Kiva Systems in 2012 for roughly $775 million. The company has consistently streamlined its operations through technology, integrating more than 750,000 robots to work alongside over 1 million frontline employees in storing, picking, packing, and shipping goods.

For roughly a decade, Amazon's head count grew massively, even though it was embracing automation. But this has gone into reverse in recent years.

After doubling its workforce to 1.6 million between 2019 and 2021, Amazon's head count declined to 1.55 million last year.

A chart showing Amazon headcount
Amazon's head count.

Amazon public filings

Humans working alongside robots

Amazon introduced Vulcan last week as its first tactile robot. It's capable of sensing and adjusting the force needed to pick products from crowded bins and tall baskets, improving safety and speed.

According to the internal document obtained by BI, Amazon's robotics team is working on at least two AI models intended to be building blocks for new applications that "will significantly enhance the efficiency and responsiveness of our robotics systems." The company is also working on a new AI model called "Tetris," aimed at reducing variable labor and transportation costs, the document said.

In the document, Aaron Parness, the director of applied science at Amazon Robotics, emphasized robots' role in enhancing efficiency and safety to ultimately enable the company to fulfill more orders and deliver more shipments.

"We've always envisioned a solution that's robots and humans working side by side," Parness wrote. "And we think the sum of the two together is better than the parts alone."

He added that automation helps Amazon retain frontline employees in a competitive labor market by improving the work environment and offering new technical career paths in maintenance and operations.

"You have to be competitive for workers," Parness said, "so that people will want to work and stay at Amazon."

A potential solution for labor shortages

Some Amazon employees told BI that machines such as Vulcan are designed not only to enhance productivity but also to help address a growing labor gap.

One employee said the company had set aggressive targets to automate much of the warehouse workload over the next decade to drive down costs. Amazon is also extensively researching how to upskill the current workforce to move them into more maintenance-related jobs, this person added.

With Amazon's continued growth, finding enough workers has become increasingly difficult, another Amazon insider told BI. If the company doesn't automate more, it will struggle to keep up with demand, this person added.

A $10 billion opportunity

A green wheeled robot carries a large wheeled cage on its back.
Amazon's new "Proteus" robot.

Amazon

Vulcan is one of several systems Amazon has introduced in recent years, including robotic arms such as Robin and Sparrow that sort orders and mobile units like Proteus that transport packages across warehouses.

Amazon's automation strategy could save as much as $10 billion annually if 30% to 40% of US orders are fulfilled through next-generation facilities by 2030, Morgan Stanley estimates.

"We expect Amazon to continue to expand its warehouse network (to support growth) while also upgrading the footprint toward next-gen robotics in new builds and retrofits," Morgan Stanley analysts wrote in a research note earlier this year.

Amazon CEO Andy Jassy reaffirmed the company's commitment to automation during a February earnings call, saying its robotics investments aim to boost safety, productivity, and cost efficiency.

"We've already seen substantial value from our robotics innovations," Jassy said.

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

One of the top AI companies won't let you use AI when you apply for a job there

13 May 2025 at 02:00
Anthropic CEO Dario Amodei at the World Economic Forum in Davos

Yves Herman/REUTERS

  • Anthropic bans the use of AI in job applications.
  • This is a leading AI startup that offers a top AI chatbot service called Claude.
  • "AI is circumventing the evaluation of human qualities and skill," a top tech recruiter told BI.

AI companies want everyone to use AI. In fact, they often warn that if you don't use AI, you'll get left behind, a penniless Luddite that no one will hire.

There may be only one area of modern life where the technorati might think it's bad to use AI. That's when you apply for a job at their company.

That's the situation at Anthropic, one of the leading AI labs run by a slew of early OpenAI researchers and executives.

Anthropic is hiring a lot right now. If you go to their career website and click on a job posting, you'll be asked to write a short essay on why you want to work for the startup. It's one of those really annoying job application hurdles โ€” and a perfect task for Anthropic's AI chatbot Claude.

The problem is, you can't use AI to apply.

"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," Anthropic wrote in a recent job posting for an economist. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

The AI startup has the same requirement with other job listings, including these technical roles below. Which, you know, will require a lot of AI use. Just not in the application, though.

For anyone applying to that third job, you better not cheat and use AI! Gotta be honest for that role in particular.

Why would an AI company not want people using its products like this? This technology is supposed to take over the world, revolutionizing every aspect of work and play. Why stop at job applications?

This is especially true for math and science boffins who usually do numbers best, not words. I asked Anthropic to explain this policy last week. OpenAI doesn't seem to have an AI ban like this, but I checked in with that AI firm, too. Neither firm responded.

"The pendulum is swinging"

I also asked Jose Guardado about this. He's a recruiter at VC firm The General Partnership, who helps startup founders build engineering teams. This chap used to work at Google and Andreessen Horowitz, so he's a real (human) expert at this stuff.

"The pendulum is swinging more toward humanities and authentic human experiences," he said.ย 

In a world where AI tools can write and check software code pretty well, it may not be the best strategy to hire an army of math whizzes who don't communicate with colleagues very well, or struggle to describe their plans in human languages, such as English.

Maybe you need a combination of skills now. So getting candidates to write a short essay, without the help of AI, is probably a good idea.ย 

Guardado made the more obvious point, too: If candidates use AI to answer interview questions, you can't tell if they'll be any good at the job.ย 

"There are well-known instances of students using AI chatbots to cheat on tests, and others using similar technology to cheat on coding interviews," Guardado told me. "AI is circumventing the evaluation of human qualities and skill. So there's a bit of a backlash right now."

"So, how do you find authentic measures of evaluation?" he added. "How can you truly get a measure of applicants?"

Banning AI like this is probably a better way, for now, according to Guardado.

"It's ironic that the maker of Claude is at the forefront of this," he said of Anthropic.ย "Is it the best look for them, as a leading AI provider?"

Sign up for BI's Tech Memo newsletter here. Reach out to me via email at [email protected]

Read the original article on Business Insider

Forget SEO. The hot new thing is 'AEO.' Here are the startups chasing this AI marketing phenomenon.

12 May 2025 at 02:01
OpenAI's Sam Altman discusses AI at a university in Berlin
Marketers now offer to help with "AEO" when it comes to getting good placement in Sam Altman's ChatGPT.

Axel Schmidt/REUTERS

  • "AEO" is replacing "SEO" as AI chatbots such as Sam Altman's ChatGPT change online discovery.
  • AEO focuses on influencing AI chatbot responses. It's different from traditional keyword-driven SEO.
  • AEO startups are rapidly emerging, raising venture capital, and analyzing growing AI-driven traffic.

SEO, or search engine optimization, is the art and science of crafting websites and other online content so it shows up prominently when people Google something.

A massive industry of experts, advisors, gurus (and charlatans) has grown up around the interplay between Google, which purposely sets opaque rules, and website owners tweaking stuff and trying to work the system.

The rise of generative AI, large language models, and AI chatbots is changing all this โ€” radically and quickly.

While SEO has long been a cornerstone of digital marketing strategy, a new paradigm is rapidly threatening to take its place: "answer engine optimization," or AEO.

As AI chatbots such as ChatGPT, Claude, Gemini, and Perplexity become the front door to online discovery, AEO is emerging as a strategic imperative for growth. There's been an explosion of AEO startups and tools in recent months, all promising to help online businesses show up when chatbots and AI models answer user questions.

"There must have been 30 AEO product launches in the last few months, all trying to do what SEO did 20 years ago," said David Slater, a chief marketing officer who's worked at Mozilla, Salesforce, and other tech companies. "It's absolutely going to be a hot space."

What is AEO?

AEO is SEO adapted for the world of conversational AI, says Ethan Smith, CEO of digital marketing firm Graphite Growth. He wrote an excellent blog recently about this new trend.

Where traditional SEO focused on optimizing for static keyword-driven queries, AEO centers on influencing how AI chatbots respond to user questions, he says. With tools like ChatGPT increasingly integrating real-time web search and surfacing clickable links, chat interfaces now function like hybrid search engines. The result is a fast feedback loop that makes influencing LLM outputs not just possible, but essential for online businesses.

Unlike SEO, where a landing page might target a single keyword, AEO pages must address clusters of related questions. Smith shares an example: Instead of optimizing a webpage for "project management software," AEO pages might answer dozens or even hundreds of variations such as "What's the best project management tool for remote teams?" or "Which project management platforms support API integration?"

Why ChatGPT's live web access makes AEO important

This shift didn't happen overnight. When ChatGPT launched in late 2022, its responses were generated from outdated training data with no live web access. But over the past year, LLMs have started using retrieval-augmented generation, or RAG, and other techniques that help them incorporate more real-time information. They often perform a live online search, for instance, and then summarize results in real time. This makes AEO both faster to influence and more dynamic than its SEO predecessor, Smith writes.
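
Here's a stripped-down, hypothetical sketch of that retrieval-augmented pattern: fetch the documents most relevant to a query (a toy keyword match over an in-memory list stands in for a live web search), then pack them into the prompt sent to the model. The function and variable names are illustrative, not any vendor's API.

```python
# Toy corpus standing in for live web results an AI chatbot might retrieve.
DOCUMENTS = [
    "Acme PM is a project management tool popular with remote teams.",
    "Globex Tracker supports API integration and Gantt charts.",
    "A recipe for sourdough bread with a long cold ferment.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Crude relevance score: count query words that appear in each document.
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query, DOCUMENTS))
    # The augmented prompt is what actually gets sent to the language model,
    # so fresh retrieved text can outweigh stale training data.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What's the best project management tool for remote teams?"))
```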

There's been some interest in AEO for about a year or so. But in early 2025, OpenAI's ChatGPT and other generative AI services began surfacing prominent links and citations in answers a lot more. That's when AEO really took off.

Now, AEO startups are raising venture capital, some online businesses are seeing conversion spikes from AI traffic, and there's been a Cambrian explosion of AEO analytics, tracking, and content tools.

Check out this list of AEO startups and tools, identified by Smith from Graphite Growth. There are a few established players in here, too, including HubSpot.

Looking into the 'brain' of an AI model

There's already a race to try to determine how these AI chatbots spit out results and recommendations, so website owners can hack their way to better online distribution in the new era of generative AI and large language models.

GPTrends is one of these up-and-coming AEO providers. David Kaufman, one of the entrepreneurs behind the firm, shared an interesting analysis recently on LinkedIn.

He said that AI search results from tools such as ChatGPT and Perplexity are unpredictable. They can change even when you ask the same question multiple times. Unlike Google, where search results stay mostly the same, AI tools give different answers depending on how the model responds in the moment, Kaufman writes.

For example, Kaufman and his colleagues asked ChatGPT this question 100 times: "What's the best support ticketing software?" Then they tracked which providers appeared most often. Here are the results of the test:

A chart showing an example of results from a ChatGPT request

David Kaufman, GPTrends

Zendesk showed up in 94% of answers, while other companies, including Freshworks and Zoho, appeared less often and in different positions. This randomness gives less well-known brands a better shot at being seen, at least some of the time.
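
Here's a hedged sketch of how a repeated-sampling test like Kaufman's could be run: ask the same question many times, check which brands each answer mentions, and tally appearance rates. The ask_chatbot function below is a random stand-in for a real chatbot API call, and the brand list is illustrative.

```python
import random
from collections import Counter

BRANDS = ["Zendesk", "Freshworks", "Zoho", "Intercom"]

def ask_chatbot(question: str) -> str:
    # Stand-in for a real API call; real answers vary from run to run,
    # which is exactly what this kind of test tries to measure.
    picks = random.sample(BRANDS, k=random.randint(1, 3))
    return f"Popular options include {', '.join(picks)}."

def appearance_rate(question: str, runs: int = 100) -> dict[str, float]:
    counts = Counter()
    for _ in range(runs):
        answer = ask_chatbot(question)
        for brand in BRANDS:
            if brand.lower() in answer.lower():
                counts[brand] += 1
    return {brand: counts[brand] / runs for brand in BRANDS}

print(appearance_rate("What's the best support ticketing software?"))
```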

"Strategically, this means brands need to rethink how they optimize for discovery, focusing less on traditional SEO tactics and more on comprehensive, authoritative content that AI systems recognize as valuable," Kaufman writes.

Read the original article on Business Insider

The US Copyright Office has thoughts on how AI is trained. Big Tech may not like it.

11 May 2025 at 16:59
Logos of AI apps
The US Copyright Office published a new report on generative artificial intelligence.

Jonathan Raa/NurPhoto

  • Big Tech companies depend on content made by others to train their AI models.
  • Some of those creators say using their work to train AI is copyright infringement.
  • The US Copyright Office just published a report that indicates it may agree.

Big Tech companies train their AI models mostly on the work of other people, like scientists, journalists, filmmakers, or artists.

Those creators have long objected to the practice. Now, the US Copyright Office appears to have joined their side.

The office released on Friday its latest in a series of reports exploring copyright laws and artificial intelligence. The report addresses whether the copyrighted content AI companies use to train their AI models qualifies under the fair use doctrine.

AI companies are probably not going to like what they read.

AI companies are desperate for data. Most of them believe that the more information a model can digest, the better it will be. But with that insatiable consumption, they risk running afoul of copyright laws.

Companies like OpenAI have faced a slew of lawsuits from creators who say training AI models on their copyrighted work without permission infringes on their rights. AI execs argue they haven't violated copyright laws because the training falls under fair use.

According to the US Copyright Office's new report, however, it's not that simple.

"Although it is not possible to prejudge the result in any particular case, precedent supports the following general observations," the office said. "Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs โ€” all of which can affect the market."

The office made a distinction between AI models for research and commercial AI models.

"When a model is deployed for purposes such as analysis or research โ€” the types of uses that are critical to international competitiveness โ€” the outputs are unlikely to substitute for expressive works used in training," the office said. "But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries."

In the report, the office compared artificial intelligence outputs that essentially copy its training materials to outputs with added elements and new value.

"On one end of the spectrum, training a model is most transformative when the purpose is to deploy it for research, or in a closed system that constrains it to a non-substitutive task," the office said. "For example, training a language model on a large collection of data, including social media posts, articles, and books, for deployment in systems used for content moderation does not have the same educational purpose as those papers and books."

Training an artificial intelligence model to create outputs "substantially similar to copyrighted works in the dataset" is less likely to be considered transformative.

"Unlike cases where copying computer programs to access their functional elements was necessary to create new, interoperable works, using images or sound recordings to train a model that generates similar expressive outputs does not merely remove a technical barrier to productive competition," the office said. "In such cases, unless the original work itself is being targeted for comment or parody, it is hard to see the use as transformative."

In another section, the office said it rejected two "common arguments" about the "transformative nature of AI training."

"As noted above, some argue that the use of copyrighted works to train AI models is inherently transformative because it is not for expressive purposes. We view this argument as mistaken," the office said.

"Nor do we agree that AI training is inherently transformative because it is like human learning," it added.

A day after the office released the report, President Donald Trump fired its director, Shira Perlmutter, a spokesperson told Business Insider.

"On Saturday afternoon, the White House sent an email to Shira Perlmutter saying 'your position as the Register of Copyrights and Director at the US Copyright Office is terminated effective immediately," the spokesperson said.

While Trump, with the help of Elon Musk, who has his own AI model, Grok, has sought to reduce the federal workforce and shutter some agencies, some saw the timing of Perlmutter's dismissal as suspect. New York Rep. Joe Morelle, a Democrat, addressed Perlmutter's firing in an online statement.

"Donald Trump's termination of Register of Copyrights, Shira Perlmutter, is a brazen, unprecedented power grab with no legal basis. It is surely no coincidence he acted less than a day after she refused to rubber-stamp Elon Musk's efforts to mine troves of copyrighted works to train AI models," the statement said.

Big Tech and AI companies have rallied around Trump since his election, led by Musk, who became the face of the White House DOGE Office and the administration's effort to reduce federal spending. Other tech billionaires, like Meta CEO Mark Zuckerberg and OpenAI CEO Sam Altman, have also cozied up to Trump in recent months.

A representative for the White House did not respond to a request for comment from Business Insider.

Read the original article on Business Insider

Nvidia vice president says GPUs are the 'currency' of AI researchers

10 May 2025 at 01:01
The Nvidia logo, which consists of a green, eye-shaped decal over the company name written in white lettering, is affixed to the side of a black stone building, with a blue sky behind it all.
Some of Nvidia's researchers donated their compute to help facilitate a swift rollout of the company's Llama Nemotron models.

Justin Sullivan/Getty Images

  • Nvidia's Llama Nemotron models were developed quickly, said Jonathan Cohen, VP of Applied Research.
  • The speed was thanks to researchers across the company being willing to "give up their compute."
  • "These days, the currency in any AI researcher is how many GPUs they get access to," he said in an interview.

In the world of AI research, the speed of development is limited in large part by available computing resources, according to Jonathan Cohen, Vice President of Applied Research at Nvidia.

"These days, the currency in any AI researcher is how many GPUs they get access to, and that's no less true at Nvidia than at any other company," Cohen said in an interview on Nvidia Developer.

Cohen led the team responsible for developing Nvidia's Llama Nemotron family of models. Released in March of this year, they represent the company's entry into the world of "reasoning" AI systems.

The speed at which the models came together was remarkable, Cohen said, taking "no more than one to two months." He partially credits the efficiency of their development to other workers being willing to sacrifice their processing power.

"So, there were a lot of researchers who very selflessly agreed to give up their compute so that we could get these Llama Nemotron models trained as quickly as we did," he said.

Cohen also attributed the speed of development to Nvidia's company-wide culture of prioritizing major projects, regardless of current team goals.

"How do you have a team to do a thing you've never done before? Part of the corporate culture is โ€” we call them a 'swarm' โ€” where you identify, 'This is something that's important,'" he said. "And everyone, every manager who has people who might be able to contribute, thinks about, 'Is this new thing more important than the current thing everyone on my team is doing?'"

If the manager can spare anybody, they'll "contribute" their direct reports to the new priority.

"Llama Nemotron ended up being a very cross-discipline, cross-team effort," Cohen added. "We had people from across the whole company working together without any formal organizational structure."

Llama Nemotron required a series of sacrifices, Cohen said, both in terms of computing power and personnel โ€” but people were able to set aside self-interests for the benefit of the whole.

"It was really great to see, great leadership," he said. "There were a lot of sacrifices that people made, a lot of very egoless decisions that brought it together, which is just awesome."

Nvidia did not respond to a request for comment from Business Insider prior to publication.

Read the original article on Business Insider

โŒ
โŒ