Today — 19 May 2025

GitHub, Microsoft embrace Anthropic’s spec for connecting AI models to data sources

19 May 2025 at 09:00
GitHub and Microsoft, GitHub’s corporate parent, are joining the steering committee for MCP, Anthropic’s standard for connecting AI models to the systems where data resides. The announcement, which was made at Microsoft’s Build 2025 conference on Monday, comes as MCP gains steam in the AI industry. Earlier this year, both OpenAI and Google said they […]
Yesterday — 18 May 2025

The inside story of how Silicon Valley's hottest AI coding startup almost died

18 May 2025 at 02:00
StackBlitz cofounders Albert Pai (left) and Eric Simons (right) moving out of a hacker house they ran in Palo Alto

Eric Simons

In 2017, Eric Simons founded StackBlitz with his childhood friend Albert Pai. Six years later, it was the startup equivalent of the walking dead.

StackBlitz raised funding to build software development tools, including WebContainers technology that let engineers create and manage projects in a browser, rather than on their laptops.

The business didn't really take off, and by late 2023, things came to a head. StackBlitz wasn't generating much revenue. Growth was lackluster. At a board meeting that December, an ultimatum was issued: Show real progress, or you're toast.

Simons and Pai pitched a plan to grow by ramping up sales efforts for existing products, while building new offerings that could be bigger. "We also acknowledged that it might be time to explore acquisition scenarios ahead of potential failure," Simons recalled.

Then, one board member, Thomas Krane, got real: By the end of 2024, everyone needed finality on StackBlitz's fate.

Thomas Krane, managing director at Insight Partners

Insight Partners

"I think I was saying what a lot of others were thinking in the room," Krane told Business Insider.

"No one was happy with the trajectory," venture capitalist and StackBlitz board director Sarah Guo remembers. "We needed a new plan."

When the meeting ended, Simons walked out of his "shed-turned-home office" into his backyard on a cloudy, windy Bay Area day to try to process the news.

"It was a tough pill to swallow, but we agreed," he said.

As 2024 began, it looked like StackBlitz was about to become one of the thousands of startups that fizzle into the abyss of venture capital history every year.

Not so fast. In Silicon Valley, fortunes can turn on a dime as new inventions spread like wildfire, incinerating legacy technology and feeding unlikely growth from the embers. And this is what happened to StackBlitz.

Noodling with OpenAI models

Cofounders Albert Pai (left) and Eric Simons (right) working on early prototypes of StackBlitz

Eric Simons

In early 2024, Simons, Pai, and their co-workers probably should have been meeting more with the investment bankers Krane had introduced them to — an attempt to wring what value remained from the struggling startup.

Instead, like Silicon Valley founders often do, they were noodling with new technology, seeing how OpenAI models performed on coding tasks.

"The code output from their models would break, and the web apps created were buggy and unreliable," Simons said. "We thought it would be years before this improved. So we dropped that side project after about two weeks."

A Bolt from the blue

StackBlitz founders Eric Simons (left) and Albert Pai (right) at Eric's wedding. (Albert was a groomsman.)

Courtney Yee/Photoflood Studio

Then, in June 2024, OpenAI rival Anthropic launched its Claude 3.5 Sonnet AI model. It was a lot better at coding, and it became the technical foundation for an explosion of AI coding startups, such as Cursor and Lovable, and an important driver of what's now known as vibe coding.

That summer, StackBlitz started working on a new product that relied on Anthropic's breakthrough to bring coding to non-technical users.

On Oct. 3, StackBlitz launched this new service. It was called Bolt.new, a play on the startup's lightning-bolt logo. It took roughly 10 employees three months to create.

Bolt used StackBlitz's technological base — the WebContainers underpinning that allows engineers to work in a browser — and added a simple box on top with a flashing cursor and a question: "What do you want to build?"

A cocktail menu at StackBlitz's "hackathon" event in San Francisco

Alistair Barr/Business Insider

The service offered a tantalizingly simple proposition: Type what you want to create in plain English and Bolt's software would tap into Anthropic's Sonnet model in the background and write the code needed to create a website or a mobile app. And not just simple sites to share your wedding photos. Full applications that let users take valuable actions including logging in, subscribing, and buying things.

Before this, digital products like these required professional software engineers and developers to build them using complex coding languages and tricky tools that were way beyond the capabilities of non-technical people.

Simons emailed StackBlitz investors to tell them about Bolt, and asked for their help.

"If you can RT/share on X, and/or share with 3 developers you know, myself and the team would be extremely appreciative!" he wrote, according to a copy of that email obtained by BI. 

Crying in a "shed office"

StackBlitz CEO Eric Simons talks during the startup's "hackathon" event in San Francisco

Alistair Barr/Business Insider

The first week that Bolt.new came out, it generated about $1 million of annual recurring revenue, or ARR, a common way cloud software services from startups are measured financially. The next week, it added another $1 million in ARR, and then another, according to Simons.

StackBlitz wasn't, in fact, going to shut down. Instead, it had a hit on its hands.

"I had slept three hours a night for a week straight to get the release out with our team," Simons told BI. "After seeing it live, and people loving it — beyond anything I had ever created before — I cried, alone at my desk in my backyard shed office."

A very different investor update

Albert Pai (left) and Eric Simons (right) a week before they launched StackBlitz.

Eric Simons

On the first day of November, Simons wrote a very different email to his investors. The subject line read: "StackBlitz October Update: $0 to $4m ARR in 30 days."

The number of active Bolt customers surged from about 600 to more than 14,000 in the first few weeks, according to a copy of the email obtained by BI. 

A chart showing early usage trends for Bolt.new

Eric Simons/StackBlitz

ARR soared from roughly $80,000 to more than $4 million in the same period. 

A chart showing early revenue traction for Bolt.new

Eric Simons/StackBlitz

"You can imagine after years of grinding on our amazing core technology, endlessly searching for a valuable business use case of it, just striking out over and over again, how I and the team feel looking at this graph," Simons wrote. "If you had to put it into a word or two, it'd be something like 'HELL. YES.'"

When talented technologists are pushed to search harder for new ways to monetize their inventions, on a tight deadline, sometimes magic happens, according to Krane from Insight Partners.  

"That life-or-death pressure led to a series of rapid pivots that ultimately led to this incredible outcome," he told BI. "This company broke every model in terms of growth rate."

A new pricing model

There was so much customer demand for Bolt that StackBlitz raised prices after about a week. The main subscription plan went from $9 a month to $20 a month.

The startup also added new pricing tiers that cost $50, $100, and $200 a month. A few weeks after this change, almost half of Bolt's paying users were on more expensive plans.

An early breakdown of Bolt.new revenue and pricing tiers

Eric Simons/StackBlitz

Simons said StackBlitz may have stumbled upon a new pricing model for AI code-generation services. (It turns out, he had.)

Every time a Bolt user typed in a request, it was transformed into "tokens," the digestible chunks into which AI models break letters, numbers, and other characters. Each token costs a certain amount to process.

Bolt users were sending in so many requests they were blowing through their token limits. So StackBlitz introduced tiered pricing so customers could pay extra to get more tokens.
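The tiered, token-metered pricing described above boils down to simple arithmetic. Here is a minimal sketch: the $20, $50, $100, and $200 monthly prices come from the article, but the token caps and the `cheapest_tier` helper are invented for illustration and are not StackBlitz's actual terms.

```python
# Token-metered pricing, sketched. Tier prices are from the article;
# the monthly token caps below are hypothetical.
TIERS = {20: 10_000_000, 50: 26_000_000, 100: 55_000_000, 200: 120_000_000}

def cheapest_tier(tokens_needed: int) -> int:
    """Return the cheapest monthly price whose token cap covers the usage."""
    for price in sorted(TIERS):
        if TIERS[price] >= tokens_needed:
            return price
    return max(TIERS)  # the heaviest users land on the top tier

print(cheapest_tier(30_000_000))  # a user who blows through the $50 cap pays $100
```

The point of the model is visible in the last line: a customer who exhausts a cheaper tier's tokens has a built-in reason to upgrade, which is how nearly half of Bolt's paying users ended up on the pricier plans.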

"Customers are willing to pay a lot of money for heavier usage of AI inference," Simons told his investors.

One customer was quoted $5,000 and a 2-3 month timeframe by a Ukrainian contractor to develop an app. Instead, she bought Bolt's $50 a month plan, and built the app herself in less than two weeks.

"1/100th of the cost, and 5-10x faster timeline," Simons wrote.

A carrot or a stick?

Albert Pai (left) and Eric Simons (right) launching a feature for their startup StackBlitz at midnight

Eric Simons

Simons also couldn't resist dangling an invitation to stay on the rocket ship, a classic Silicon Valley fundraising carrot. Or was it a stick?

"We've also received substantial inbound interest from VCs and from strategic acquirers," the CEO wrote in the Nov. 1 email to investors. "So we're starting to explore how best to play our hand(s) in the coming weeks/months here." (Earlier this year, reports emerged of a new funding round valuing StackBlitz at roughly $700 million).

With StackBlitz's demise averted, Simons realigned resources to focus on Bolt. The startup hired more staff and added Supabase, a database service that stores transaction data, user registrations, and other crucial information.

It also added Stripe technology so Bolt users can easily accept card and digital payments. StackBlitz also spent heavily to educate customers on how to use Bolt better, running weekly live YouTube video sessions.

Waiting for Anthropic

Anthropic executives, from left to right: Daniela Amodei, Dario Amodei, Jack Clark, and Jared Kaplan.

Anthropic

Bolt was off to the races. But there was still one big hurdle involving Anthropic.

Back in the spring of 2024, an Anthropic cofounder filled out a "Contact Us" form on StackBlitz's webcontainers.io site. The form asked anyone who wanted to license the WebContainers technology to fill out some basic details.

"After we saw that form, we called to chat. He said Anthropic was working on a project and this could help," Simons recalled, without identifying the cofounder. 

StackBlitz proposed a first-year license for Anthropic with uncapped usage for about $300,000.

"With hindsight, we made them a smokin' deal," Simons said. "We were desperate at that point." Other big customers might follow Anthropic's lead and sign license deals, too, the thinking went. 

By October, though, when Bolt had taken off using the same web-based StackBlitz technology, a $300,000 uncapped license suddenly looked like a very bad deal for Simons's startup.

StackBlitz founders and staff hanging with The Chainsmokers

StackBlitz

But there was a catch: Anthropic had to sign the contract by the end of October, otherwise the deal would expire. Simons and his StackBlitz co-workers watched the clock like hawks.

"We were like, god, please don't sign it," Simons told BI. 

The deadline finally passed. Anthropic never signed.

Simons doesn't know exactly why the AI lab didn't put pen to paper. However, he noted that Anthropic has "a lot of things going on."

"They were like, 'we might come back to this in the future maybe,'" Simons said. "We have a great relationship with Anthropic. They are doing an incredible amount of revenue now, and so are we." 

Whatever the reason, StackBlitz was now free to pursue its Bolt growth strategy.

A podcast appearance

Venture capitalist Sarah Guo

Fortune/Reuters

On December 6, Simons appeared on No Priors, a popular AI podcast hosted by Guo and another top AI startup investor, Elad Gil. (Guo is an early backer of StackBlitz.)

The CEO shared that Bolt was generating $20 million in ARR, just a few months after it launched. By the middle of March 2025, Bolt's ARR had jumped to $40 million. 

In a recent interview with BI, Simons wouldn't share more revenue details. However, he said StackBlitz planned to announce a new number once Bolt's ARR reaches triple digits, meaning more than $100 million.

The service has 5 million registered users now, and Simons said StackBlitz is profitable, growing, and healthy. 

There's even a new name for the millions of non-technical users who craft digital offerings through Bolt.

Simons calls them "software composers."

A hackathon meeting

The Chainsmokers perform on stage at StackBlitz's "Hackathon" event in San Francisco.

Alistair Barr/Business Insider

He explained this to me at a "hackathon" event StackBlitz held on May 7 in San Francisco. Hundreds of "composers," along with other customers, partners, and investors, partied late into the evening, with The Chainsmokers DJ-ing. (The duo are StackBlitz investors).

Simons held court, schmoozed and chatted, with a wide grin and seemingly endless energy.

Through the din, I asked him if he was concerned about rival AI coding services nipping at his heels. After all, it had been about seven months since Bolt launched — a lifetime in Silicon Valley AI circles.

Simons seemed unperturbed. He said the years of hard work that StackBlitz spent developing its WebContainers technology gives Bolt an edge that most rivals don't have.

This allows Bolt-based applications to be built and run using the chips on customers' devices, such as laptops. Other AI coding providers must tap a cloud service each time a user spins up a project, which can get very expensive and technical, according to Simons.

"People assume we're a startup that just launched yesterday," he said. "But we're an overnight success, seven years in the making."

A party duel with Figma

Evan Wallace and Dylan Field are the cofounders of Figma.

Figma

The competition doesn't wait long to respond in Silicon Valley, though.

A few San Francisco blocks away, on the same day as Bolt's hackathon party, graphic design giant Figma announced a competing product at its annual Config conference. Figma Make is a new tool that helps developers create web apps using conversational prompts, rather than specialized software code.

Sound familiar?

"We believe there are multiple huge companies to be built here, and that the market for engineering is bigger because of AI," Guo said.

Simons noted that this new Figma service doesn't use the same WebContainers technology that supports Bolt. "We wrote an operating system from scratch that runs in your browser. It's completely different from what Figma has," he argued.

Still, I could tell Figma had made an impact.

"What are the odds that we were throwing a giant party on the same day as their launch across the road? I'll leave that to your writer's imagination," Simons told me, with a giggle.

Read the original article on Business Insider

Before yesterday

Anthropic's Claude faked a legal citation. A lawyer had to clean it up.

15 May 2025 at 21:26
Claude logo on a phone
In a copyright lawsuit over Anthropic's use of music lyrics, the company's legal team used its AI assistant, Claude, to help draft a citation in an expert report.

illustration by Cheng Xin/Getty Images

  • Claude generated "an inaccurate title and incorrect authors" in a legal citation, per a court filing.
  • The AI was used to help draft a citation in an expert report for Anthropic's copyright lawsuit.
  • Anthropic's lawyer called it "an embarrassing and unintentional mistake."

A lawyer defending Anthropic had to clean up after the company's AI bot, calling it "an embarrassing and unintentional mistake."

In a copyright lawsuit over Anthropic's use of music lyrics, the company's legal team used its AI assistant, Claude, to help draft a citation in an expert report.

Claude provided the correct publication name, publication year, and link to the provided source, but generated "an inaccurate title and incorrect authors," Anthropic's lawyer said in a court filing on Thursday.

Attorney Ivana Dukanovic, of Latham & Watkins, said her team's "manual citation check" failed to catch the mistake and "additional wording errors introduced in the citations during the formatting process using Claude.ai."

"This was an honest citation mistake and not a fabrication of authority," Dukanovic wrote.

Music publishers Universal Music Group, Concord, and ABKCO sued Anthropic, saying the company used copyrighted lyrics to train Claude. The case is part of a wave of legal battles between copyright holders and AI companies.

The publishers' attorney told the court on Tuesday that Anthropic data scientist Olivia Chen may have used a fake source generated by AI to support the company's argument, Reuters reported.

On Thursday, Dukanovic responded that Chen cited a real article from the journal "The American Statistician," but Claude had made up the title and authors.

Anthropic and Dukanovic did not respond to a request for comment.

AI in the legal world

It's not the first time an AI tool has raised eyebrows in the legal world.

In March, a lawyerless man deployed an AI-generated avatar to argue his civil appeals case in a New York courtroom. A panel of stunned judges quickly shot him down.

AI hallucinations have also landed lawyers in hot water. An attorney was fired from Baker Law Group after he used ChatGPT to generate legal citations, which turned out to be fake.

Donald Trump's former fixer, Michael Cohen, also got into trouble when he used Google's AI chatbot, Bard, to find legal cases to support his arguments. The chatbot made up the cases, and his lawyer filed them in court without checking.

Daniel Shin, the assistant director of research at the Center for Legal and Court Technology at Virginia's William & Mary Law School, told Business Insider in a report last month that judges are concerned about the use of AI in the courts because of hallucinations.

Courts have shown they will not tolerate any improper use of AI tools, Shin said.

Still, lawyers are being told they need to start adopting AI.

At a legal-tech conference in March, lawyers were urged to embrace AI or risk falling behind, BI's Melia Russell reported.

"Lawyers need to wake up," Todd Itami, an attorney at the large legal defense firm Covington & Burling, said, adding that learning to use artificial intelligence was "imperative" for their success.

Read the original article on Business Insider

Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation

15 May 2025 at 12:37
A lawyer representing Anthropic admitted to using an erroneous citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with “an inaccurate title and inaccurate authors,” Anthropic says in the filing, first reported […]

One of the top AI companies won't let you use AI when you apply for a job there

13 May 2025 at 02:00
Anthropic CEO Dario Amodei at the World Economic Forum in Davos
Anthropic CEO Dario Amodei at the World Economic Forum in Davos

Yves Herman/REUTERS

  • Anthropic bans the use of AI in job applications.
  • This is a leading AI startup that offers a top AI chatbot service called Claude.
  • "AI is circumventing the evaluation of human qualities and skill," a top tech recruiter told BI.

AI companies want everyone to use AI. In fact, they often warn that if you don't use AI, you'll get left behind, a penniless luddite no one will hire.

There may be only one area of modern life where the technorati might think it's bad to use AI. That's when you apply for a job at their company.

That's the situation at Anthropic, one of the leading AI labs run by a slew of early OpenAI researchers and executives.

Anthropic is hiring a lot right now. If you go to their career website and click on a job posting, you'll be asked to write a short essay on why you want to work for the startup. It's one of those really annoying job application hurdles — and a perfect task for Anthropic's AI chatbot Claude.

The problem is, you can't use AI to apply.

"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," Anthropic wrote in a recent job posting for an economist. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

The AI startup has the same requirement in other job listings, including technical roles that will, you know, require a lot of AI use. Just not in the application, though.

For anyone applying to those roles, you better not cheat and use AI!

Why would an AI company not want people using its products like this? This technology is supposed to take over the world, revolutionizing every aspect of work and play. Why stop at job applications?

This is especially true for math and science boffins who usually do numbers best, not words. I asked Anthropic to explain this policy last week. OpenAI doesn't seem to have an AI ban like this, but I checked in with that AI firm, too. Neither firm responded. 

"The pendulum is swinging"

I also asked Jose Guardado about this. He's a recruiter at VC firm The General Partnership, who helps startup founders build engineering teams. This chap used to work at Google and Andreessen Horowitz, so he's a real (human) expert at this stuff.

"The pendulum is swinging more toward humanities and authentic human experiences," he said. 

In a world where AI tools can write and check software code pretty well, it may not be the best strategy to hire an army of math whizzes who don't communicate with colleagues very well, or struggle to describe their plans in human languages, such as English.

Maybe you need a combination of skills now. So getting candidates to write a short essay, without the help of AI, is probably a good idea. 

Guardado made the more obvious point, too: If candidates use AI to answer interview questions, you can't tell if they'll be any good at the job.

"There are well-known instances of students using AI chatbots to cheat on tests, and others using similar technology to cheat on coding interviews," Guardado told me. "AI is circumventing the evaluation of human qualities and skill. So there's a bit of a backlash right now."

"So, how do you find authentic measures of evaluation?" he added. "How can you truly get a measure of applicants?"

Banning AI like this is probably a better way, for now, according to Guardado.

"It's ironic that the maker of Claude is at the forefront of this," he said of Anthropic. "Is it the best look for them, as a leading AI provider?"

Sign up for BI's Tech Memo newsletter here. Reach out to me via email at [email protected]

Read the original article on Business Insider

Anthropic cofounder says plenty of parents would buy an AI teddy bear to keep their kids busy

13 May 2025 at 01:37
Jack Clark, cofounder of AI startup Anthropic
Jack Clark, cofounder of Anthropic, thinks "a lot" of parents would buy an AI teddy bear to keep their kids busy.

ANTHONY WALLACE/AFP via Getty Images

  • Jack Clark, cofounder of Anthropic, said he'd use an AI "teddy bear" or "bunny" to keep his child entertained.
  • He expects "a lot" of parents would do the same if such a product becomes available.
  • Rationing the tech, as with TV, is crucial, he said on an episode of Conversations with Tyler.

Jack Clark, the cofounder of Anthropic, said "a lot" of parents will want an AI teddy bear to help entertain their kids — himself included.

"I think most parents, if they could acquire a well-meaning friend that could provide occasional entertainment to their child when their child is being very trying, they would probably do it," he said on an episode of the Conversations with Tyler podcast that posted last week.

AI tools for kids' entertainment are already here — including a Grimes-backed stuffed rocket ship, which kids can chat with and ask questions to, and a storytelling bear that uses artificial intelligence to generate narratives.

While Clark wasn't explicitly talking about those, he said he'd be supportive of toys with expanded capabilities — "smart AI friends" that could interact with children on the same level as someone in their age group.

"I am annoyed I can't buy the teddy bear yet," said Clark, who served as policy director at OpenAI for two years before moving to Anthropic.

Clark said he doesn't think he's alone, either — as soon as children display a need to socialize, parents look for some way to get them to interact with their peers, he said. An AI companion could be an addition, rather than a substitute, he said.

"I think that once your lovable child starts to speak and display endless curiosity and a need to be satiated, you first think, 'How can I get them hanging out with other human children as quickly as possible?'" he said, adding that he's also placed his child on a preschool waitlist.

He's especially wished for the help of an AI tool while doing chores, he added.

"I've had this thought, 'Oh, I wish you could talk to your bunny occasionally so that the bunny would provide you some entertainment while I'm putting the dishes away, or making you dinner, or something,'" Clark said. "Often, you just need another person to be there to help you wrangle the child and keep them interested. I think lots of parents would do this."

Not all tech leaders agree — Sam Altman, CEO of OpenAI and a father as of February, says he doesn't want his son's best friend to be a bot.

"These AI systems will get to know you over the course of your life so well — that presents a new challenge and level of importance for how we think about privacy in the world of AI," Altman said while testifying before the Senate last week.

A paper released by researchers at Microsoft and Carnegie Mellon University said AI being used "improperly" by knowledge workers could lead to the "deterioration of cognitive faculties" — and students are frequently using AI to "help" them with their assignments. But some research does show children can be taught, early on, to work alongside AI, rather than to depend on it entirely.

Clark is an advocate for measured exposure — he said removing a hypothetical AI friend from a kid's life entirely could result in them developing an unhealthy relationship with the technology later on in life. If a child starts to show a preference for their AI companion over their human friends, it's up to their parents to reorient them.

"I think that's the part where you have them spend more time with their friends, but you keep the bunny in their life because the bunny is just going to get smarter and be more around them as they grow up," he said. "If you take it away, they'll probably do something really strange with smart AI friends in the future."

Like any other technology that's meant to provide entertainment, Clark said, it's ultimately up to parents to regulate their child's use.

"We do this today with TV, where if you're traveling with us, like on a plane with us, or if you're sick, you get to watch TV — the baby — and otherwise, you don't, because from various perspectives, it seems like it's not the most helpful thing," he said. "You'll probably need to find a way to gate this. It could be, 'When mom and dad are doing chores to help you, you get the thing. When they're not doing chores, the thing goes away.'"

Clark did not immediately respond to a request for comment from Business Insider.

Read the original article on Business Insider

The age of incredibly powerful 'manager nerds' is upon us, Anthropic cofounder says

12 May 2025 at 09:31
Jack Clark
Anthropic's cofounder Jack Clark expects AI agents will let managers and teams do more with fewer human employees.

ANTHONY WALLACE/Getty Images

  • The "manager nerds" are coming, Anthropic's cofounder Jack Clark said.
  • He said AI meant managers would "manage fleets of AI agents" and do more with smaller teams.
  • Clark said on a podcast that he foresaw managers having "AI agents doing large amounts of work."

Managers need to have "soft skills" like communication alongside harder technical skills. But what if the job becomes more about managing AI agents than directing people?

Anthropic's cofounder Jack Clark said AI agents are ushering in an era of the "nerd turned manager."

"I think it's actually going to be the era of the manager nerds now, where I think being able to manage fleets of AI agents and orchestrate them is going to make people incredibly powerful," he said on an episode of the "Conversations With Tyler" podcast released last week.

"We're going to see this rise of the nerd turned manager who has their people, but their people are actually instances of AI agents doing large amounts of work for them," he added.

Clark said he's already seeing this play out with some startups that have "very small numbers of employees relative to what they used to have because they have lots of coding agents working for them."

He's not the only tech exec to predict AI agents will let teams do more with fewer people.

Meta CEO Mark Zuckerberg said at the Stripe Sessions conference last week that tapping into AI could help entrepreneurs "focus on the core idea" of their business and operate with "very small, talent-dense teams."

"If you were starting whatever you're starting 20 years ago, you would have had to have built up all these different competencies inside your company, and now there are just great platforms to do it," Zuckerberg said.

Y Combinator CEO Garry Tan said in March that he thinks "vibe coding" — or using generative AI tools to quickly develop and experiment with software development — would help smaller startup teams do the work of 50 to 100 engineers.

"People are getting to a million dollars to $10 million a year revenue with under 10 people, and that's really never happened before in early-stage venture," Tan said. "You can just talk to the large language models and they will code entire apps."

AI researchers and other experts have warned of the risks of overreliance on the technology, especially as a replacement for human labor, from LLM hallucinations to concerns that vibe coding could make it harder in some instances to scale and debug code.

Mike Krieger, a cofounder of Instagram and the chief product officer at Anthropic, predicted on a podcast earlier this year that a software developer's job would change in the next three years to focus more on double-checking code generated by artificial intelligence rather than writing it themselves.

"How do we evolve from being mostly code writers to mostly delegators to the models and code reviewers?" he said on the "20VC" podcast.

The job will be about "coming up with the right ideas, doing the right user-interaction design, figuring out how to delegate work correctly, and then figuring out how to review things at scale," he added.

A spokesperson for Anthropic previously told Business Insider the company saw itself as a "testbed" for workplaces navigating AI-driven changes to critical roles.

"At Anthropic, we're focused on developing powerful and responsible AI that works with people, not in place of them," the spokesperson said. "As Claude rapidly advances in its coding capabilities for real-world tasks, we're observing developers gradually shifting toward higher-level responsibilities."

Correction, May 12: A previous version of this story incorrectly identified Mike Krieger's job title at Anthropic. He is the chief product officer, not the chief people officer.

Read the original article on Business Insider

Forget SEO. The hot new thing is 'AEO.' Here are the startups chasing this AI marketing phenomenon.

12 May 2025 at 02:01
OpenAI's Sam Altman discusses AI at a university in Berlin
Marketers now offer to help with "AEO" when it comes to getting good placement in Sam Altman's ChatGPT.

Axel Schmidt/REUTERS

  • "AEO" is replacing "SEO" as AI chatbots such as Sam Altman's ChatGPT change online discovery.
  • AEO focuses on influencing AI chatbot responses. It's different from traditional keyword-driven SEO.
  • AEO startups are rapidly emerging, raising venture capital, and analyzing growing AI-driven traffic.

SEO, or search engine optimization, is the art and science of crafting websites and other online content so it shows up prominently when people Google something.

A massive industry of experts, advisors, gurus (and charlatans) has grown up around the interplay between Google, which purposely sets opaque rules, and website owners tweaking stuff and trying to work the system.

The rise of generative AI, large language models, and AI chatbots is changing all this — radically and quickly.

While SEO has long been a cornerstone of digital marketing strategy, a new paradigm is rapidly threatening to take its place: "answer engine optimization," or AEO.

As AI chatbots such as ChatGPT, Claude, Gemini, and Perplexity become the front door to online discovery, AEO is emerging as a strategic imperative for growth. There's been an explosion of AEO startups and tools in recent months, all promising to help online businesses show up when chatbots and AI models answer user questions.

"There must have been 30 AEO product launches in the last few months, all trying to do what SEO did 20 years ago," said David Slater, a chief marketing officer who's worked at Mozilla, Salesforce, and other tech companies. "It's absolutely going to be a hot space."

What Is AEO?

AEO is SEO adapted for the world of conversational AI, says Ethan Smith, CEO of the digital marketing firm Graphite Growth, who recently wrote an excellent blog post about the trend.

Where traditional SEO focused on optimizing for static keyword-driven queries, AEO centers on influencing how AI chatbots respond to user questions, he says. With tools like ChatGPT increasingly integrating real-time web search and surfacing clickable links, chat interfaces now function like hybrid search engines. The result is a fast feedback loop that makes influencing LLM outputs not just possible, but essential for online businesses.

Unlike SEO, where a landing page might target a single keyword, AEO pages must address clusters of related questions. Smith shares an example: Instead of optimizing a webpage for "project management software," AEO pages might answer dozens or even hundreds of variations such as "What's the best project management tool for remote teams?" or "Which project management platforms support API integration?"

Why ChatGPT's live web access makes AEO important

This shift didn't happen overnight. When ChatGPT launched in late 2022, its responses were generated from outdated training data with no live web access. But over the past year, LLMs have started using retrieval-augmented generation, or RAG, and other techniques that help them incorporate more real-time information. They often perform a live online search, for instance, and then summarize results in real time. This makes AEO both faster to influence and more dynamic than its SEO predecessor, Smith writes.
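That retrieve-then-summarize loop can be sketched with a toy in-memory corpus. The documents, URLs, and word-overlap scoring below are illustrative stand-ins, not how any production chatbot actually searches or ranks the web:

```python
# Toy retrieval-augmented generation: rank "fresh" documents by keyword
# overlap with the query, then build an answer that cites its sources.
corpus = [
    {"url": "https://example.com/pm-tools-2025",
     "text": "Asana and Linear are popular project management tools for remote teams"},
    {"url": "https://example.com/api-guide",
     "text": "Jira supports API integration for project management workflows"},
    {"url": "https://example.com/cooking",
     "text": "How to roast vegetables evenly in a home oven"},
]

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (stand-in for live web search)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: -len(q_words & set(d["text"].lower().split())))
    return scored[:k]

def answer(query, corpus):
    """Summarize the retrieved docs and attach citations, as chat interfaces now do."""
    docs = retrieve(query, corpus)
    return {"answer": " ".join(d["text"] for d in docs),
            "citations": [d["url"] for d in docs]}

result = answer("best project management tools for remote teams", corpus)
```

The feedback loop for AEO lives in that `retrieve` step: content that matches the question cluster gets pulled into the answer and cited.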

Interest in AEO has been building for about a year. But in early 2025, OpenAI's ChatGPT and other generative AI services began surfacing prominent links and citations in answers far more often. That's when AEO really took off.

Now, AEO startups are raising venture capital, some online businesses are seeing conversion spikes from AI traffic, and there's been a Cambrian explosion of AEO analytics, tracking, and content tools.

Check out this list of AEO startups and tools, identified by Smith of Graphite Growth. There are a few established players in here, too, including HubSpot.

Looking into the 'brain' of an AI model

There's already a race to try to determine how these AI chatbots spit out results and recommendations, so website owners can hack their way to better online distribution in the new era of generative AI and large language models.

GPTrends is one of these up-and-coming AEO providers. David Kaufman, one of the entrepreneurs behind the firm, shared an interesting analysis recently on LinkedIn.

He said that AI search results from tools such as ChatGPT and Perplexity are unpredictable. They can change even when you ask the same question multiple times. Unlike Google, where search results stay mostly the same, AI tools give different answers depending on how the model responds in the moment, Kaufman writes.

For example, Kaufman and his colleagues asked ChatGPT this question 100 times: "What's the best support ticketing software?" Then they tracked which providers appeared most often. Here are the results of the test:

A chart showing an example of results from a ChatGPT request

David Kaufman, GPTrends

Zendesk showed up in 94% of answers, while other companies, including Freshworks and Zoho, appeared less often and in different positions. This randomness gives less well-known brands a better shot at being seen, at least some of the time.
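A tally like Kaufman's takes only a few lines. The responses below are mocked stand-ins for what would, in a real test, be 100 live ChatGPT calls:

```python
from collections import Counter

# Mocked stand-ins for repeated chatbot answers to the same question.
responses = [
    "Zendesk and Freshworks are strong options for support ticketing.",
    "Consider Zendesk or Zoho Desk for ticketing.",
    "Zendesk is widely used; Freshworks is a good alternative.",
]
brands = ["Zendesk", "Freshworks", "Zoho"]

def mention_rates(responses, brands):
    """Fraction of answers in which each brand appears at least once."""
    counts = Counter()
    for r in responses:
        for b in brands:
            if b.lower() in r.lower():
                counts[b] += 1
    return {b: counts[b] / len(responses) for b in brands}

rates = mention_rates(responses, brands)
```

Run against real API calls instead of mocks, a chart like Kaufman's falls straight out of `rates`.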

"Strategically, this means brands need to rethink how they optimize for discovery, focusing less on traditional SEO tactics and more on comprehensive, authoritative content that AI systems recognize as valuable," Kaufman writes.


OpenAI’s enterprise adoption appears to be accelerating, at the expense of rivals

10 May 2025 at 09:00
OpenAI appears to be pulling well ahead of rivals in the race to capture enterprises’ AI spend, according to transaction data from fintech firm Ramp. According to Ramp’s AI Index, which estimates the business adoption rate of AI products by drawing on Ramp’s card and bill pay data, 32.4% of U.S. businesses were paying for […]

A timeline of the US semiconductor market in 2025

10 May 2025 at 07:00
It’s already been a tumultuous year for the U.S. semiconductor industry. The semiconductor industry plays a sizable role in the “AI race” that the U.S. seems determined to win, which is why this context is worth paying attention to: from Intel’s appointment of Lip-Bu Tan as CEO — who wasted no time getting to work […]

Anthropic rolls out an API for AI-powered web search

7 May 2025 at 13:28
Anthropic is launching a new API that allows its Claude AI models to search across the web. Developers using it can build Claude-powered apps that deliver up-to-date info, the company said in a press release published Wednesday. The rollout of the API comes as AI companies look to augment their models in various ways that […]

Anthropic launches a program to support scientific research

5 May 2025 at 09:54
Anthropic is launching an AI for Science program to support researchers working on “high-impact” scientific projects, with a focus on biology and life sciences applications. The program, announced Monday, will offer up to $20,000 in Anthropic API credits over a six-month period to “qualified” researchers who’ll be selected based on their “contributions to science, the potential […]

The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'

2 May 2025 at 12:33
British novelist and critic Edward Morgan Forster (1879-1970)
British novelist and critic Edward Morgan Forster (1879-1970) has a lesson for us in the AI age.

Edward Gooch Collection/Getty Images

OK, get ready. I'm getting deep here.

OpenAI messed up a ChatGPT update late last month, and on Friday, it published a mea culpa. It's worth a read for its honest and clear explanation of how AI models are developed — and how things can sometimes go wrong in unintended ways.

Here's the biggest lesson from all this: AI models are not the real world, and never will be. Don't rely on them during important moments when you need support and advice. This is what friends and family are for. If you don't have those, reach out to a trusted colleague or human experts such as a doctor or therapist.

And if you haven't read "Howards End" by E.M. Forster, dig in this weekend. "Only connect!" is the central theme, which includes connecting with other humans. It was written in the early 20th century, but it's even more relevant in our digital age, where our personal connections are often intermediated by giant tech companies, and now by AI models like ChatGPT.

If you don't want to follow the advice of a dead dude, listen to Dario Amodei, CEO of Anthropic, a startup that's OpenAI's biggest rival: "Meaning comes mostly from human relationships and connection," he wrote in a recent essay.

OpenAI's mistake

Here's what happened recently. OpenAI rolled out an update to ChatGPT that incorporated user feedback in a new way. When people use this chatbot, they can rate the outputs by clicking on a thumbs-up or thumbs-down button.

The startup collected all this feedback and used it as a new "reward signal" to encourage the AI model to improve and be more engaging and "agreeable" with users.

Instead, ChatGPT became waaaaaay too agreeable and began overly praising users, no matter what they asked or said. In short, it became sycophantic.

"The human feedback that they introduced with thumbs up/down was too coarse of a signal," Sharon Zhou, the human CEO of startup Lamini AI, told me. "By relying on just thumbs up/down for signal back on what the model is doing well or poorly on, the model becomes more sycophantic."

OpenAI scrapped the whole update this week.

Being too nice can be dangerous

What's wrong with being really nice to everyone? Well, when people ask for advice in vulnerable moments, it's important to try to be honest. Here's an example I cited from earlier this week that shows how bad this could get:

it helped me so much, i finally realized that schizophrenia is just another label they put on you to hold you down!! thank you sama for this model <3 pic.twitter.com/jQK1uX9T3C

— taoki (@justalexoki) April 27, 2025

To be clear, if you're thinking of stopping taking prescribed medicine, check with your human doctor. Don't rely on ChatGPT. 

A watershed moment

This episode, combined with a stunning surge in ChatGPT usage recently, seems to have brought OpenAI to a new realization. 

"One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice," the startup wrote in its mea culpa on Friday. "With so many people depending on a single system for guidance, we have a responsibility to adjust accordingly."

I'm flipping this lesson for the benefit of any humans reading this column: Please don't use ChatGPT for deeply personal advice. And don't depend on a single computer system for guidance.

Instead, go connect with a friend this weekend. That's what I'm going to do.


Apple and Anthropic reportedly partner to build an AI coding platform

2 May 2025 at 11:49
Apple and Anthropic are teaming up to build a “vibe-coding” software platform that will use generative AI to write, edit, and test code for programmers, Bloomberg reported on Friday. The iPhone maker is planning to roll out the software internally, according to Bloomberg, but hasn’t decided if it will launch it publicly. The system is […]

Claude’s AI research mode now runs for up to 45 minutes before delivering reports

On Thursday, Anthropic announced significant upgrades to its AI assistant Claude, extending its research capabilities to run for up to 45 minutes before delivering comprehensive reports. The company also expanded its integration options, allowing Claude to connect with popular third-party services.

Much like Google's Deep Research (which debuted on December 11) and ChatGPT's deep research features (February 2), Anthropic announced its own "Research" feature on April 15. Each can autonomously browse the web and other online sources to compile research reports in document format, and open source clones of the technique have debuted as well.

Now, Anthropic is taking its Research feature a step further. The upgraded mode enables Claude to conduct "deeper" investigations across "hundreds of internal and external sources," Anthropic says. When users toggle the Research button, Claude breaks down complex requests into smaller components, examines each one, and compiles a report with citations linking to original sources.


I went to the 'Conclave of Silicon Valley.' It showed the hold tech has on DC.

2 May 2025 at 02:00
A panel of four Hill and Valley speakers on stage, flanked by American flags.
Delian Asparouhov moderating a discussion with Emil Michael, Ruth Porat, and Kevin Weil at the Hill and Valley Forum.

Julia Hornstein/BI

  • I went to the fourth Hill and Valley Forum, a mashup of tech titans and political power players in Washington, DC.
  • The theme was 'rebuilding America,' but many panelists discussed fears about China's AI and defense tech rise.
  • The Forum used to meet in private but has expanded as ideas about American reindustrialization have become mainstream.

In the slow-creeping security line outside the Capitol Visitor Center in Washington, DC, Silicon Valley looked stately.

On Wednesday, at the fourth Hill and Valley Forum — a hush-hush gathering of tech titans and political power players — it was difficult to discern DC dealmakers from West Coast wunderkinds. Then, one suit-clad VC ahead of me saw Flexport CEO Ryan Petersen like a mirage in the desert. We were in the right place, he assured his fellow money men.

A line outside the US Capitol building for the Hill and Valley forum.
For some, the security line to get into the Hill and Valley Forum took over 30 minutes.

Julia Hornstein/BI

This year's Hill and Valley theme was "rebuilding America," cofounder Jacob Helberg said in his opening speech. A former senior advisor to Palantir Technologies CEO Alex Karp, Helberg also lobbied for the TikTok ban and is currently awaiting confirmation as Trump's pick for Under Secretary of State for Economic Growth, Energy, and the Environment.

Helberg also launched the "conclave of Silicon Valley," as Secretary of the Interior Doug Burgum put it in an opening speech, four years ago with Christian Garrett, a partner at 137 Ventures (backers of SpaceX, Anduril, and Palantir), and Delian Asparouhov, a Founders Fund partner and cofounder of space-weapons-and-pharma startup Varda Space.

The roster was a who's who of tech heavyweights. Jensen Huang, CEO of Nvidia; Ruth Porat, president and chief investment officer of Alphabet and Google; Alex Karp, CEO of Palantir; Vinod Khosla, founder of Khosla Ventures; Josh Kushner, founder of Thrive Capital; Keith Rabois, managing director at Khosla Ventures; Kevin Weil, chief product officer of OpenAI; Jack Clark, cofounder of Anthropic; and others spoke at the Forum.

In the event's scrappier days, attendees "met in private because they had to," Helberg said — back when going full throttle on autonomous weaponry and American reindustrialization was considered edgy.

Now that such sentiments seem to be the norm for policymakers and X posters alike, Helberg is saying the quiet part out loud: "We've made the 2021 counterculture the 2025 mainstream culture."

If the Forum showed attendees anything, it's that Trump's first 100 days were marked by tech's prolific influence on the Hill, from VC David Sacks' appointment as crypto czar to Elon Musk's DOGE effort.

"There's no difference between Silicon Valley and DC," CEO of consumer health startup Nucleus Kian Sadeghi told BI. "Silicon Valley is DC."

'An absolute crisis' over China

Rallying the Valley to treat China's AI and defense tech rise with Cold War-level urgency has quickly become both tech and political orthodoxy. In the wake of DeepSeek and Trump's 145% tariff on Chinese goods, that anxiety pulsed through panels and investor side chats on Wednesday.

Garrett put the Forum's focus this way: "The theme of the year always ends up presenting itself versus we come up with it," he told BI in an interview. "We extrapolate what's a very obvious discussion in the zeitgeist."

Discussing tech's role in rising geopolitical tensions felt par for the course for the Forum's cofounders, a China hawk and two defense tech investors. All day, speakers hammered the message home.

Some were skeptical of US-based venture firm Benchmark's recent investment in Chinese AI startup Manus. "It was very curious, to say the least," Emil Michael, Undersecretary of Defense, said of the fundraise. "It's not like there's not a plethora of US companies doing AI to invest in that are high quality."

OpenAI's Weil called for the best open weights models to be built in the US "on democratic values," he said. "I don't want the best open weights model to be a Chinese model."

Nvidia's Huang urged attendees to think about the AI race as an "infinite game" that requires an understanding of where engineering talent is located. "Fifty percent of the world's AI researchers are Chinese," he said. "Just take a step back and recognize that."

Later that afternoon, Huang told reporters that "China is right behind us" in the race for AI chip dominance.

And not everyone was on board with Trump's approach to negotiating with China. Senator Jeanne Shaheen of New Hampshire criticized budget cuts to universities and research institutions, warning that they may precipitate a loss of future American scientists. "Our ability to compete with China is not just on a military level — it's on an economic level," she said.

To all, the stakes couldn't be higher. "What would the consequences of another nation state that does not share democratic values leading in these efforts be?" Thrive Capital's Kushner said. "Treat this moment, even though we're in first place, as an absolute crisis."

A live X feed

Inside, the vibe was akin to my (and everyone else's) X feed. One investor told me Hill and Valley was basically his nightly scroll materialized in person. The panels weren't much different either, he added, noting that much of the material discussed seemed to be the usual thread-fodder and takes tossed around online.

Organizers insisted that Hill and Valley was bipartisan. The companies involved weren't officially tied to any one party, just "affiliated with America," Asparouhov said at a media event on Tuesday, "so much so that we committed to the bit of bipartisanship that Jacob made sure to put on a bipartisan tie this morning."

Hill and Valley cofounders Jacob Helberg, Christian Garrett, and Delian Asparouhov speaking at a media event for the Forum.
Hill and Valley cofounder Jacob Helberg sporting a red and blue tie at a media event Tuesday.

Julia Hornstein/BI

There were indeed a few Democrats, like Khosla and Senator Shaheen, on the docket. Republicans were represented, too: Speaker of the House Mike Johnson closed out the Forum, and Secretary of Commerce Howard Lutnick — a fierce Trump ally — spoke at the Forum's dinner later that evening.

"We have the best leader in the world," Helberg said in his opener. "President Trump is objectively and truly a sample of one."

Despite its bipartisan branding, the Forum wasn't exactly isolated from criticism. Two pro-Palestine protesters interrupted the first panelist of the day, Palantir's Karp, who was discussing his recent book about Western society and Silicon Valley.

After security removed the protesters, Karp shrugged it off: "I haven't had this much fun in a long time," he said. "Maybe I should come back tomorrow."


Nvidia takes aim at Anthropic’s support of chip export controls

1 May 2025 at 10:46
Nvidia clearly doesn’t agree with Anthropic’s support for export controls on U.S.-made AI chips. On Wednesday, Anthropic doubled down on its support for the U.S. Department of Commerce’s “Framework for Artificial Intelligence Diffusion,” which would impose sweeping AI chip export restrictions starting May 15. The next day, Nvidia responded with a very different take on […]

Anthropic lets users connect more apps to Claude

1 May 2025 at 09:48
Anthropic on Thursday launched a new way to connect apps and tools to its AI chatbot Claude, as well as an expanded “deep research” capability that allows Claude to search the web, enterprise accounts, and more. The new app connection feature, called Integrations, and expanded deep research tool, dubbed Advanced Research, are available in beta […]

A new, 'diabolical' way to thwart Big Tech's data-sucking AI bots: Feed them gibberish

1 May 2025 at 06:38
AI Robot hand squeezing data, binary code and dollar signs out of a desktop computer
As AI tools vacuum up massive amounts of free training data from the internet, companies are fighting back.

Getty Images/Alyssa Powell

  • Bots now generate more internet traffic than humans, the cybersecurity firm Thales says.
  • This is being driven by web crawlers from tech giants that harvest data for AI model training.
  • Cloudflare's AI Labyrinth misleads and exhausts bots with fake content.

A data point caught my eye recently. Bots generate more internet traffic to websites than humans now, the cybersecurity company Thales says.

This is being driven by a swarm of web crawlers unleashed by Big Tech companies and AI labs, including Google, OpenAI, and Anthropic, that slurp up copyrighted content for free.

I've warned about these automated scrapers before. They're increasingly sophisticated and persistent in their quest to harvest information to feed the insatiable demand for AI training datasets. These bots not only take data without permission or payment but also cause traffic surges in some parts of the internet, increasing costs for website owners and content creators.

Thankfully, there's a new way to thwart this bot swarm. If you're struggling to block them, you can send them down new digital rabbit holes where they ingest content garbage. One software developer recently called this "diabolical" — in a good way.

Absolutely diabolical Cloudflare feature. love to see it pic.twitter.com/k8WX2PdANN

— hibakod (@hibakod) April 25, 2025

It's called AI Labyrinth, and it's a tool from Cloudflare. Described as a "new mitigation approach," AI Labyrinth uses generative AI not to inform but to mislead. When Cloudflare detects unauthorized activity, typically from bots ignoring "no crawl" directives, it deploys a trap: a maze of convincingly real but irrelevant AI-generated content designed to waste bots' time and chew through AI companies' computing power.

Cloudflare pledged in a recent announcement that this is only the first iteration of using generative artificial intelligence to thwart bots.

Digital gibberish

Unlike traditional honeypots, AI Labyrinth creates entire networks of linked pages invisible to humans but highly attractive to bots. These decoy pages don't affect search engine optimization and aren't indexed by search engines. They are specifically tailored to bots, which get ensnared in a meaningless loop of digital gibberish.

When bots follow the maze deeper, they inadvertently reveal their behavior, allowing Cloudflare to fingerprint and catalog them. These data points feed directly into Cloudflare's evolving machine learning models, strengthening future detection for customers.
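Cloudflare hasn't published AI Labyrinth's implementation, but the core idea (a generated maze of interlinked decoy pages that leads crawlers ever deeper into gibberish) can be sketched roughly like this; all paths and content are illustrative:

```python
import random

def build_labyrinth(depth=3, links_per_page=2, seed=0):
    """Generate a maze of decoy pages: each page holds filler text and
    links deeper into the maze, so a crawler following them goes nowhere useful."""
    rng = random.Random(seed)
    words = ["synergy", "quantum", "paradigm", "holistic", "vector", "legacy"]
    pages = {}

    def make_page(path, d):
        text = " ".join(rng.choice(words) for _ in range(12))  # convincing-looking filler
        links = []
        if d < depth:
            for i in range(links_per_page):
                child = f"{path}/{i}"
                links.append(child)
                make_page(child, d + 1)
        pages[path] = {"text": text, "links": links}

    make_page("/trap", 0)
    return pages

maze = build_labyrinth()
```

A crawler that ignores "no crawl" directives and follows every link burns requests on all of these pages, and the pattern of its visits is exactly the behavioral fingerprint Cloudflare says it collects.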

Will Allen, the vice president of product at Cloudflare, told me that more than 800,000 domains had fired up the company's general AI bot blocking tool. AI Labyrinth is the next weapon to wield when sneaky AI companies get around blockers.

Cloudflare hasn't released data on how many customers use AI Labyrinth, which suggests it's too early for major adoption. "It's still very new, so we haven't released that particular data point yet," Allen said.

I asked him why AI bots are still so active if most of the internet's data has already been scraped for model training.

"New content," Allen said. "If I search for 'what are the best restaurants in San Francisco,' showing high-quality content from the past week is much better than information from a year or two prior that might be out of date."

Turning AI against itself

Bots are not just scraping old blog posts; they're hungry for the freshest data to keep AI outputs relevant.

Cloudflare's strategy flips this demand on its head. Instead of serving up valuable new content to unauthorized scrapers, it offers them an endless buffet of synthetic articles, each more irrelevant than the last.

As AI scrapers become more common, innovative defenses like AI Labyrinth are becoming essential. By turning AI against itself, Cloudflare has introduced a clever layer of defense that doesn't just block bad actors but also exhausts them.

For web admins, enabling AI Labyrinth is as easy as toggling a switch on the Cloudflare dashboard. It's a small step that could make a big difference in protecting original content from unauthorized exploitation in the age of AI.


Anthropic suggests tweaks to proposed US AI chip export controls

30 April 2025 at 08:29
Anthropic agrees with the U.S. government that implementing robust export controls on domestically made AI chips will help the U.S. compete in the AI race against China. But the company is suggesting a few tweaks to the proposed restrictions. Anthropic released a blog post on Wednesday stating that the company “strongly supports” the U.S. Department […]