Why Netflix should pay close attention to Google's new AI movie tool

20 May 2025 at 10:45
A scene from a short movie created with Google's new Flow AI tool.

Google/Flow/Dave Clark

  • Google unveiled Flow, an AI moviemaking tool, at the I/O conference.
  • Flow uses Google's latest AI models to generate visuals, sound effects, and dialog.
  • AI-generated content could challenge traditional studios like Netflix.

Technologist Luis von Ahn was recently asked if AI is a threat to the company he runs, Duolingo.

He said many companies could be disrupted, including Netflix.

"That's one of the things that is scary about the world that we live in," von Ahn said. "With AI and large language models, we're undergoing a platform shift."

"I'm not super worried, but you just never know. And it's not just for Duolingo, it could be all kinds of things, right?" he added. "I mean, it could be a threat for Netflix. It could be that just a large language model β€” just press a button and it makes you the perfect movie."

This was a couple of weeks ago, and I thought he was overselling it a bit. That is, until I got a glimpse of Flow, a new AI-powered moviemaking tool that Google unveiled on Tuesday.

At the Google I/O conference in Silicon Valley, the company showed off this new technology, along with some illustrative movie clips created by filmmakers who had early access to Flow.

A scene from an illustrative film generated using Google's Flow moviemaking tool.

Google/Flow/Henry Daubrez

Flow was built on top of Imagen 4 and Veo 3, the latest versions of Google's image and video-generation AI models. The company says the updated Veo model creates better visuals and can now generate sound effects, background noises, and even dialog.

If you give it a prompt describing characters and an environment, and suggest a dialog with a description of how you want it to sound, it produces a film. In one illustrative clip Google shared, two animated animals talked with each other. (To me, it looked very similar to a Pixar movie).

Flow is designed to help creators produce high-quality cinematic video from text descriptions. Users can also bring their own images and other files into Flow. It integrates precise camera movements, including the ability to request specific camera angles and lenses, such as an 8-millimeter wide-angle.

You can edit the film, too, within Flow.

In one example shared by Google, a user requests a scene of an old man and a friendly bird driving a black convertible off a cliff. The car begins to fall, but using Flow, the scene is swiftly changed and extended using AI so that the bird in the car starts flapping its wings and flying instead. The edit seamlessly retains character and scene continuity.

Implications for Netflix and traditional studios

A scene from an illustrative movie created using Google's Flow tool.

Google/Flow/Junie Lau

While Google positions Flow as a tool to empower filmmakers, the broader implications are clear: AI-generated content could one day challenge human-created productions in quality, cost-efficiency, and scale. For companies like Netflix, which have built empires on high production-value storytelling, AI poses both an opportunity and a threat.

On one hand, AI tools could accelerate content development, reducing production timelines and budgets. On the other hand, it could open the door for a flood of content from smaller studios, individual creators, or even consumers, eroding the competitive advantage of traditional production pipelines.

Moreover, AI-generated media could be hyper-personalized. Imagine a future where viewers select themes, genres, or even actors, and the platform generates a custom film on demand, just as Duolingo's von Ahn described earlier this month. That could shift power away from major studios and toward platforms that control the underlying AI infrastructure, such as Google.

On a recent podcast, Google CEO Sundar Pichai said the internet giant thought hard about acquiring Netflix years ago. Now, maybe he doesn't need to do that deal.

The road ahead

Google's Flow is another sign of a broader trend, which is that AI may be democratizing creativity. While Netflix and legacy studios may initially integrate these tools to enhance production, the long-term landscape could resemble the transformation seen in music, publishing, and software coding β€” where AI tools and platforms radically lower the barrier to entry for more people.

The key question isn't whether AI will change filmmaking β€” it already is. The question is whether established players like Netflix will ride the wave or be overtaken by it.

As AI continues to evolve, so too must the business models, strategies, and creative visions of Hollywood's biggest names. The age of algorithmically generated storytelling is arriving sooner than we think.

Read the original article on Business Insider

Silicon Valley used to idolize youth. AI is changing that.

20 May 2025 at 06:30
Marcelino "Mashico" Abad celebrating his 124th birthday in Huánuco, Peru, on April 5, 2024, as local authorities claim he might be the world's oldest person ever.

Pension 65/via REUTERS

  • AI is reshaping tech hiring, reducing demand for entry-level roles in favor of experienced talent.
  • SignalFire data shows a 50% drop in entry-level Big Tech hiring since prepandemic times.
  • Tech firms appear to now prioritize midsenior hires, valuing experience over youthful potential.

For decades, Silicon Valley thrived on a mythology of youth. Tech giants and startups hired young, hungry employees who were relatively inexperienced but could work every waking hour to write code and ship product.

This era of youthful dominance in tech hiring may be fading, and it's partly due to the rise of AI. That's according to a new report from SignalFire, a venture capital firm that uses data and technology to guide its investment decisions.

Youth no longer at the center

In the past, young graduates were seen as hungry, moldable, and cost-effective hires. But today, new grads face the steepest employment challenges the tech industry has seen in years. SignalFire's latest State of Talent report says entry-level hiring in Big Tech is down more than 50% from prepandemic levels, and startups aren't far behind.

"Tech startups have long been synonymous with youth," said Heather Doshay, a partner and the head of talent at SignalFire. "But today, our data shows that many of those same early-career professionals are struggling to find a way in."

Startups are mostly focused on survival, cutting burn rates, and extending runway. That means fewer hands, more output, and a demand for autonomous doers. In short, they want experienced individual contributors who can hit the ground running, not entry-level hires who require more management time and training.

"With reduced head count, every hire must be high ROI," Doshay added. "Right now, that points squarely to midsenior-level individual contributors β€” autonomous doers who deliver against immediate company needs."

AI: The catalyst for a hiring reset

AI isn't the sole cause of this generational hiring shift, but it's a major catalyst. Asher Bantock, the head of research at SignalFire, noted that AI tools are increasingly automating the types of narrowly scoped tasks that were once assigned to junior developers.

"What's increasingly scarce is not keystrokes but discernment," he told me. Crafting effective AI prompts, debugging machine-generated code, and integrating tools at scale require architectural thinking, skills honed through years of experience, not a college degree.

Data from SignalFire's new report supports this trend:

  • At Big Tech companies, new grads now account for just 7% of hires, with new hires down 25% from 2023 and over 50% from levels in 2019.
  • At startups, new grads make up under 6% of hires, with new hires down 11% from 2023 and over 30% from levels in 2019.
  • The average age of technical hires has increased by three years since 2021.

Big Tech companies are now focusing their resources on mid- and senior-level engineers, particularly in roles related to machine learning and data science. Meanwhile, functions like recruiting, design, and product marketing are shrinking across the board, the data also indicates.

The 'experience paradox'

This AI-driven shift has created what SignalFire calls the "experience paradox." Companies want junior hires to come pretrained (just like those AI models!).

But young candidates often struggle to gain experience without being given a chance. It's a classic catch-22, especially in a job market where more managers say they'd rather use AI than hire a Gen Z employee. In a Hult International Business School survey, 37% of employers said as much.

Even top computer science grads from elite universities are struggling. The share of these graduates landing roles at the "Magnificent Seven" (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla) has plummeted by more than half since 2022, SignalFire's report says.

A chart from SignalFire's 2025 State of Talent report.

SignalFire

A cultural shift

This isn't just an economic or technical evolution; it's a cultural one. Where Silicon Valley once idolized youth, today's market prizes proven execution. Risk tolerance has dropped across the startup ecosystem, and with venture capital funding tightening, founders are hesitant to invest in long-term potential over short-term impact.

This has opened the door for more seasoned professionals. While C-suite hiring has also slowed, companies are increasingly turning to "fractional" roles – part-time chief technology officers, chief marketing officers, and advisors – to access senior talent without inflating their burn rate, SignalFire says.

More hustle than ever

For younger professionals, the path into tech now requires more creativity and hustle than ever. Boot camps, freelancing, open-source contributions, and AI fluency are becoming critical entry points. Simply having a degree, even from a top school, is no longer enough.

For companies, the long-term risk of this shift is clear. Without reinvesting in early-career talent, the tech industry risks breaking its talent pipeline. While artificial intelligence may temporarily reduce the need for junior hires, the future may still depend on building and training the next generation of technologists.

The mythology of youth in tech isn't dead, but in 2025, it's being rewritten.

Read the original article on Business Insider

The inside story of how Silicon Valley's hottest AI coding startup almost died

18 May 2025 at 02:00
StackBlitz cofounders Albert Pai (left) and Eric Simons (right) moving out of a hacker house they ran in Palo Alto

Eric Simons

In 2017, Eric Simons founded StackBlitz with his childhood friend Albert Pai. Six years later, it was the startup equivalent of the walking dead.

StackBlitz raised funding to build software development tools, including WebContainers technology that let engineers create and manage projects in a browser, rather than on their laptops.

The business didn't really take off, and by late 2023, things came to a head. StackBlitz wasn't generating much revenue. Growth was lackluster. At a board meeting that December, an ultimatum was issued: Show real progress, or you're toast.

Simons and Pai pitched a plan to grow by ramping up sales efforts for existing products, while building new offerings that could be bigger. "We also acknowledged that it might be time to explore acquisition scenarios ahead of potential failure," Simons recalled.

Then, one board member, Thomas Krane, got real: By the end of 2024, everyone needed finality on StackBlitz's fate.

Thomas Krane, managing director at Insight Partners

Insight Partners

"I think I was saying what a lot of others were thinking in the room," Krane told Business Insider.

"No one was happy with the trajectory," venture capitalist and StackBlitz board directorΒ Sarah Guo remembers. "We needed a new plan."

When the meeting ended, Simons walked out of his "shed-turned-home office" into his backyard on a cloudy, windy Bay Area day to try to process the news.

"It was a tough pill to swallow, but we agreed," he said.

As 2024 began, it looked like StackBlitz was about to become one of the thousands of startups that fizzle into the abyss of venture capital history every year.

Not so fast. In Silicon Valley, fortunes can turn on a dime as new inventions spread like wildfire, incinerating legacy technology and feeding unlikely growth from the embers. And this is what happened to StackBlitz.

Noodling with OpenAI models

Cofounders Albert Pai (left) and Eric Simons (right) working on early prototypes of StackBlitz

Eric Simons

In early 2024, Simons, Pai, and their co-workers probably should have been meeting more with the investment bankers Krane had introduced them to, in an attempt to wring what value remained from the struggling startup.

Instead, like Silicon Valley founders often do, they were noodling with new technology, seeing how OpenAI models performed on coding tasks.

"The code output from their models would break, and the web apps created were buggy and unreliable," Simons said. "We thought it would be years before this improved. So we dropped that side project after about two weeks."

A Bolt from the blue

StackBlitz founders Eric Simons (left) and Albert Pai (right) at Eric's wedding. (Albert was a groomsman).

Courtney Yee/Photoflood Studio

Then, in June 2024, OpenAI rival Anthropic launched its Sonnet 3.5 AI model. This was a lot better at coding, and it became the technical foundation for an explosion in AI coding startups, such as Cursor and Lovable, and an important driver of what's now known as vibe coding.

That summer, StackBlitz started working on a new product that relied on Anthropic's breakthrough to bring coding to non-technical users.

On Oct. 3, StackBlitz launched this new service. It was called Bolt.new, a play on the startup's lightning-bolt logo. It took roughly 10 employees three months to create.

Bolt used StackBlitz's technological base, the WebContainers underpinning that allows engineers to work in a browser, and added a simple box on top with a flashing cursor and a question: "What do you want to build?"

A cocktail menu at StackBlitz's "hackathon" event in San Francisco

Alistair Barr/Business Insider

The service offered a tantalizingly simple proposition: Type what you want to create in plain English and Bolt's software would tap into Anthropic's Sonnet model in the background and write the code needed to create a website or a mobile app. And not just simple sites to share your wedding photos. Full applications that let users take valuable actions including logging in, subscribing, and buying things.

Before this, digital products like these required professional software engineers and developers to build them using complex coding languages and tricky tools that were way beyond the capabilities of non-technical people.
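
The basic pattern a service like Bolt is built around, taking a plain-English description and asking a code-capable model such as Sonnet to produce source files, can be sketched in a few lines of Python. This is not StackBlitz's actual implementation, just a minimal illustration using Anthropic's public Python SDK; the model ID, prompt wording, and system instruction here are assumptions for the example.

```python
# Minimal prompt-to-code sketch; NOT Bolt's actual implementation.
# Assumes the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

def generate_app(description: str) -> str:
    """Ask a Claude Sonnet model to turn a plain-English description into source code."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # a Sonnet 3.5 model ID, used here as an example
        max_tokens=4096,
        system="You are a coding assistant. Return complete, runnable source files.",
        messages=[{"role": "user", "content": f"Build this web app: {description}"}],
    )
    # The SDK returns a list of content blocks; the first block holds the generated text.
    return response.content[0].text

print(generate_app("a landing page with an email signup form"))
```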

Simons emailed StackBlitz investors to tell them about Bolt, and asked for their help.

"If you can RT/share on X, and/or share with 3 developers you know, myself and the team would be extremely appreciative!" he wrote, according to a copy of that email obtained by BI.Β 

Crying in a "shed office"

StackBlitz CEO Eric Simons talks during the startup's "hackathon" event in San Francisco

Alistair Barr/Business Insider

The first week that Bolt.new came out, it generated about $1 million of annual recurring revenue, or ARR, a common way cloud software services from startups are measured financially. The next week, it added another $1 million in ARR, and then another, according to Simons.
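
For readers unfamiliar with the metric: ARR simply annualizes recurring revenue, so roughly $83,000 of new monthly subscription revenue works out to about $1 million of ARR. A quick back-of-the-envelope sketch, with subscriber counts that are illustrative rather than StackBlitz's actual figures:

```python
# Back-of-the-envelope ARR math; subscriber counts are illustrative only.
def arr(subscribers: int, monthly_price: float) -> float:
    """ARR = monthly recurring revenue x 12."""
    return subscribers * monthly_price * 12

# Roughly how many $9-a-month subscribers does $1 million in ARR imply?
print(1_000_000 / (9 * 12))   # ~9,259 subscribers
print(arr(9_259, 9))          # ~999,972, i.e. about $1 million of ARR
```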

StackBlitz wasn't, in fact, going to shut down. Instead, it had a hit on its hands.

"I had slept three hours a night for a week straight to get the release out with our team," Simons told BI. "After seeing it live, and people loving it β€” beyond anything I had ever created before β€” I cried, alone at my desk in my backyard shed office."

A very different investor update

Albert Pai (left) and Eric Simons (right) a week before they launched StackBlitz.

Eric Simons

On the first day of November, Simons wrote a very different email to his investors. The subject line read, "StackBlitz October Update: $0 to $4m ARR in 30 days."

The number of active Bolt customers surged from about 600 to more than 14,000 in the first few weeks, according to a copy of the email obtained by BI.

A chart showing early usage trends for Bolt.new

Eric Simons/StackBlitz

ARR soared from roughly $80,000 to more than $4 million in the same period.

A chart showing early revenue traction for Bolt.new

Eric Simons/StackBlitz

"You can imagine after years of grinding on our amazing core technology, endlessly searching for a valuable business use case of it, just striking out over and over again, how I and the team feel looking at this graph," Simons wrote. "If you had to put it into a word or two, it'd be something like 'HELL. YES.'"

When talented technologists are pushed to search harder for new ways to monetize their inventions, on a tight deadline, sometimes magic happens, according to Krane from Insight Partners.

"That life-or-death pressure led to a series of rapid pivots that ultimately led to this incredible outcome," he told BI.Β "This company broke every model in terms of growth rate."

A new pricing model

There was so much customer demand for Bolt that StackBlitz raised prices after about a week. The main subscription plan went from $9 a month to $20 a month.

The startup also added new pricing tiers that cost $50, $100, and $200 a month. A few weeks after this change, almost half of Bolt's paying users were on more expensive plans.

An early breakdown of Bolt.new revenue and pricing tiers

Eric Simons/StackBlitz

Simons said StackBlitz may have stumbled upon a new pricing model for AI code-generation services. (Turns out, it had).

Every time a Bolt user typed in a request, it was transformed into "tokens," the digestible chunks into which AI models break letters, numbers, and other characters. Each token costs a certain amount to process.

Bolt users were sending in so many requests they were blowing through their token limits. So StackBlitz introduced tiered pricing so customers could pay extra to get more tokens.
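
A simplified sketch of how usage-based tiers like these work: each plan bundles a token allowance, and heavier users step up to a bigger plan rather than being cut off. The prices below match the ones quoted in this story, but the plan names and token allowances are made up for illustration.

```python
# Illustrative tiered pricing: the prices match the story, but the plan names
# and token allowances are invented for this sketch.
TIERS = [
    {"name": "Base tier", "price": 20,  "tokens": 10_000_000},
    {"name": "$50 tier",  "price": 50,  "tokens": 26_000_000},
    {"name": "$100 tier", "price": 100, "tokens": 55_000_000},
    {"name": "$200 tier", "price": 200, "tokens": 120_000_000},
]

def cheapest_tier(monthly_tokens: int) -> dict:
    """Pick the cheapest plan whose token allowance covers a user's monthly usage."""
    for tier in TIERS:  # ordered from cheapest to most expensive
        if monthly_tokens <= tier["tokens"]:
            return tier
    return TIERS[-1]  # the heaviest users land on the top plan

print(cheapest_tier(8_000_000)["name"])    # Base tier
print(cheapest_tier(40_000_000)["name"])   # $100 tier
```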

"Customers are willing to pay a lot of money for heavier usage of AI inference," Simons told his investors.

One customer was quoted $5,000 and a 2-3 month timeframe by a Ukrainian contractor to develop an app. Instead, she bought Bolt's $50 a month plan, and built the app herself in less than two weeks.

"1/100th of the cost, and 5-10x faster timeline," Simons wrote.

A carrot or a stick?

Albert Pai (left) and Eric Simons (right) launching a feature for their startup StackBlitz at midnight

Eric Simons

Simons also couldn't resist inviting investors to stay on the rocket ship, dropping a classic Silicon Valley fundraising carrot. Or was it a stick?

"We've also received substantial inbound interest from VCs and from strategic acquirers," the CEO wrote in the Nov. 1 email to investors. "So we're starting to explore how best to play our hand(s) in the coming weeks/months here." (Earlier this year, reports emerged of a new funding round valuing StackBlitz at roughly $700 million).

With StackBlitz's demise averted, Simons realigned resources to focus on Bolt. The startup hired more staff and added Supabase, a database service that stores transaction data, user registrations, and other crucial information.

It also added Stripe technology so Bolt users can easily accept card and digital payments. StackBlitz also spent heavily to educate customers on how to use Bolt better, running weekly live YouTube video sessions.

Waiting for Anthropic

Anthropic executives, from left to right: Daniela Amodei, Dario Amodei, Jack Clark, and Jared Kaplan.

Anthropic

Bolt was off to the races. But there was still one big hurdle involving Anthropic.

Back in the spring of 2024, an Anthropic cofounder filled out a "Contact Us" form on StackBlitz's webcontainers.io site. The form asked anyone who wanted to license the WebContainers technology to fill out some basic details.

"After we saw that form, we called to chat. He said Anthropic was working on a project and this could help," Simons recalled, without identifying the cofounder.Β 

For the first year, StackBlitz proposed a license for Anthropic with uncapped usage for about $300,000.

"With hindsight, we made them a smokin' deal," Simons said. "We were desperate at that point." Other big customers might follow Anthropic's lead and sign license deals, too, the thinking went.Β 

By October, though, when Bolt had taken off using the same web-based StackBlitz technology, a $300,000 uncapped license suddenly looked like a very bad deal for Simons's startup.

StackBlitz founders and staff hanging with The Chainsmokers

StackBlitz

But there was a catch: Anthropic had to sign the contract by the end of October, otherwise the deal would expire. Simons and his StackBlitz co-workers watched the clock like hawks.

"We were like, god, please don't sign it," Simons told BI.Β 

The deadline finally passed. Anthropic never signed.

Simons doesn't know exactly why the AI lab didn't put pen to paper. However, he noted that Anthropic has "a lot of things going on."

"They were like, 'we might come back to this in the future maybe,'" Simons said.Β "We have a great relationship with Anthropic. They are doing an incredible amount of revenue now, and so are we."Β 

Whatever the reason, StackBlitz was now free to pursue its Bolt growth strategy.

A podcast appearance

Venture capitalist Sarah Guo

Fortune/Reuters

On December 6, Simons appeared on No Priors, a popular AI podcast hosted by Guo and another top AI startup investor, Elad Gil. (Guo is an early backer of StackBlitz).

The CEO shared that Bolt was generating $20 million in ARR, just a few months after it launched. By the middle of March 2025, Bolt's ARR had jumped to $40 million.

In a recent interview with BI, Simons wouldn't share more revenue details. However, he said StackBlitz planned to announce a new ARR number when Bolt passes triple-digit millions in ARR, meaning more than $100 million.

The service has 5 million registered users now, and Simons said StackBlitz is profitable, growing, and healthy.

There's even a new name for the millions of non-technical users who craft digital offerings through Bolt.

Simons calls them "software composers."

A hackathon meeting

The Chainsmokers perform on stage at StackBlitz's "Hackathon" event in San Francisco.

Alistair Barr/Business Insider

He explained this to me at a "hackathon" event StackBlitz held on May 7 in San Francisco. Hundreds of "composers," along with other customers, partners, and investors, partied late into the evening, with The Chainsmokers DJ-ing. (The duo are StackBlitz investors).

Simons held court, schmoozed and chatted, with a wide grin and seemingly endless energy.

Through the din, I asked him if he was concerned about rival AI coding services nipping at his heels. After all, it had been about seven months since Bolt launched, a lifetime in Silicon Valley AI circles.

Simons seemed unperturbed. He said the years of hard work that StackBlitz spent developing its WebContainers technology gives Bolt an edge that most rivals don't have.

This allows Bolt-based applications to be built and run using the chips on customers' devices, such as laptops. Other AI coding providers must tap a cloud service each time a user spins up a project, which can get very expensive and technical, according to Simons.

"People assume we're a startup that just launched yesterday," he said. "But we're an overnight success, seven years in the making."

A party duel with Figma

Evan Wallace and Dylan Field are the cofounders of Figma.

Figma

The competition doesn't wait long to respond in Silicon Valley, though.

A few San Francisco blocks away, on the same day as Bolt's hackathon party, graphic design giant Figma announced a competing product at its annual Config conference. Figma Make is a new tool that helps developers create web apps using conversational prompts, rather than specialized software code.

Sound familiar?

"We believe there are multiple huge companies to be built here, and that the market for engineering is bigger because of AI," Guo said.

Simons noted that this new Figma service doesn't use the same WebContainers technology that supports Bolt. "We wrote an operating system from scratch that runs in your browser. It's completely different from what Figma has," he argued.

Still, I could tell Figma had made an impact.

"What are the odds that we were throwing a giant party on the same day as their launch across the road? I'll leave that to your writer's imagination," Simons told me, with a giggle.

Read the original article on Business Insider

Duolingo CEO says there may still be schools in our AI future, but mostly just for childcare

17 May 2025 at 02:00
Luis von Ahn, CEO of Duolingo

Duolingo

  • Luis von Ahn envisions AI transforming education, making it more scalable than human teachers.
  • Schools may focus mostly on childcare duties while AI provides personalized learning, he said.
  • Regulation and cultural expectations may slow AI's integration into education systems.

What happens to schools if AI becomes a better teacher?

Luis von Ahn, CEO of Duolingo, recently shared his vision for the future of education on the No Priors podcast with venture capitalist Sarah Guo, and it centered on AI transforming the very role schools will play.

"Education is going to change," von Ahn said. "It's just a lot more scalable to teach with AI than with teachers."

That doesn't mean teachers will vanish, he emphasized. Instead, he believes schools will remain, but their function could shift dramatically. In von Ahn's view, schools may increasingly serve as childcare centers and supervised environments, while AI handles most of the actual instruction.

"That doesn't mean the teachers are going to go away. You still need people to take care of the students," the CEO said on the podcast. "I also don't think schools are going to go away because you still need childcare."

In a classroom of 30 students, a single teacher can struggle to offer personalized, adaptive learning to each person. AI, on the other hand, will be able to track individual performance in real time and adjust lesson difficulty based on how well each student is grasping the material, according to von Ahn.
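
The mechanism von Ahn is describing, tracking each student's answers and nudging lesson difficulty up or down in response, can be illustrated with a very simple loop. This is a toy sketch of the general idea, not Duolingo's actual algorithm; the accuracy thresholds and step sizes are arbitrary.

```python
# Toy adaptive-difficulty loop; thresholds and step sizes are arbitrary and
# do not reflect Duolingo's actual algorithm.
def adjust_difficulty(current_level: int, recent_results: list[bool]) -> int:
    """Raise the lesson level when a student is cruising, lower it when they struggle."""
    accuracy = sum(recent_results) / len(recent_results)
    if accuracy > 0.85:
        return current_level + 1          # grasping the material: step up
    if accuracy < 0.60:
        return max(1, current_level - 1)  # struggling: step down
    return current_level                  # otherwise keep practicing at this level

level = 3
level = adjust_difficulty(level, [True, True, True, False, True])  # 80% accuracy: stays at 3
level = adjust_difficulty(level, [True] * 9 + [False])             # 90% accuracy: moves to 4
print(level)  # 4
```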

Imagine a classroom where each student is "Duolingo-ing" their way through personalized content, while a teacher acts as a facilitator or mentor. "You still need people to take care of the students," he noted, "but the computer can know very precisely what you're good at and bad at – something a teacher just can't track for 30 students at once."

Education is slow to change, so this may take many years, von Ahn explained, noting that regulation, legacy systems, and cultural expectations all serve as drag forces. Still, he sees a future where AI augments or even supplants parts of formal education, especially in countries that need scalable education solutions fast.

It's a provocative vision, one that raises deep questions about the future of learning and what we expect from education in an AI-driven world.

Sign up for BI's Tech Memo newsletter here. Reach out to me via email at [email protected].

Read the original article on Business Insider

We put Tesla's FSD and Waymo's robotaxi to the test. One shocking mistake made the winner clear.

17 May 2025 at 01:01
Alistair Barr standing next to a Tesla and Lloyd Lee standing next to a Waymo taxi.

Lloyd Lee; Alistair Barr/BI

  • Waymo's robotaxis have been providing fully autonomous rides to the SF public since 2024.
  • Tesla is gearing up to launch a robotaxi service in Austin, using its Full Self-Driving software.
  • Tesla's FSD is good, but it made one mistake we just can't overlook.

The robotaxi race is speeding up.

Tesla is preparing to debut its autonomous ride-hailing service in Austin next month, and Alphabet's Waymo continues to expand throughout major US cities.

Under the hood of the Tesla and Waymo robotaxis are two key pieces of technology that the companies respectively call Full Self-Driving (FSD) and the Waymo Driver.

We (Business Insider's Lloyd Lee and Alistair Barr) tested both of these AI-powered drivers in San Francisco, and the results truly surprised us.

Given the positive experiences we've had with Waymo and Tesla's FSD, we expected the results of our not-so-scientific test to come down to minute details, maybe by how many times the AI driver would hesitate or if it would make a curious lane change for no apparent reason.

That didn't happen. Instead, the Tesla made an egregious error that handed Waymo the clear win.

Here's how it went down.

The test

Our vehicles for the test included Waymo's Jaguar I-PACE SUVs and Barr's personal 2024 Tesla Model 3.

The Waymo robotaxis are equipped with the company's fifth-generation Waymo Driver and guided by five lidar sensors, six radars, and 29 cameras.

Cameras on the Waymo Taxi
Waymo's robotaxis have multiple sensors, radars, and cameras that protrude off the vehicles.

Lloyd Lee/BI

Barr's Tesla was equipped with Hardware 4 and FSD Supervised software v13.2.8. Tesla released a minor update to the software days after this test was conducted. The vehicle has eight external cameras.

It should be noted that this is not the same software Tesla plans to use in the robotaxis set to roll out this summer. The company said it plans to release FSD Unsupervised, a self-driving system that will not require a human behind the wheel. Nevertheless, we wanted to see how far Tesla's FSD had come since its beta rollout in 2020.

External cameras on a Tesla.
Tesla's FSD relies only on eight external cameras attached around the vehicle's body.

Lloyd Lee/BI

We couldn't compare Tesla and Waymo as a full-package robotaxi service. Tesla has yet to launch that product, so we focused only on the driving experience.

We started at San Francisco's iconic Twin Peaks viewpoint and ended at Chase Center. Depending on the route, that's about a 4- to 7-mile ride.

We chose these destinations for two reasons. One, it would take the cars through winding roads and both suburban and city landscapes. And two, there were a few ways to get to Chase Center from Twin Peaks, including the 280 highway.

Waymo's robotaxis can't take riders on the highway yet. Tesla can.

According to Google Maps, the highway is more time-efficient. For the Tesla, we went with the route the vehicle highlighted first. It pointed out the highway on the way back to Twin Peaks.

We took a Waymo around 8:30 a.m. on a Thursday and the Tesla afterward at around 10 a.m. The traffic conditions for both rides were light to moderate and not noticeably different.

Predictions

Our prediction was that the AI drivers' skills would be nearly neck-and-neck.

But in the spirit of competition, Lee predicted Waymo would deliver a smoother experience and a smarter driver, given the high-tech sensor stack the company relies on.

Barr went with Tesla. He said he'd driven hundreds of miles on FSD with two or three relatively minor interventions so far, and given this previous experience, Barr said he'd have no problem riding in the back seat of a Tesla robotaxi.

Waymo

Throughout our ride in the Waymo, we were impressed by the AI driver's ability to be safe but assertive.

The Waymo was not shy about making yellow lights, for example, but it never made the kind of maneuvers you wouldn't want from a robot driver you're entrusting with your life.

The interior of a Waymo taxi.
Waymo passengers can make a few adjustments to their ride, including temperature and music settings.

Lloyd Lee/BI

One small but notable moment in our ride was when the Waymo stopped behind a car at a stop sign. To the right of us was an open lane.

For whatever reason, the Waymo saw that and decided to switch lanes, as if it was tired of waiting behind the other car. We found that a bit amusing because it seemed like such a human moment.

As human drivers, we might make choices like that because we get antsy waiting behind another car, even though we're not shaving more than a few seconds, if any, off of our commute.

Barr noted that the Waymo Driver can have moments of sass or attitude. It had an urgency, giving us the feeling that it somehow really cared that we got to the Chase Center in good time.

"It's got New York cab driver energy," Barr said, stealing a line from BI editor in chief Jamie Heller, who also took a Waymo during a trip to San Francisco earlier this year.

Sandy Karp, a spokesperson for Waymo, said the company doesn't have specific details on what happened in that moment but said that the Waymo Driver "is constantly planning its next move, including the optimal route to get its rider where they're going safely and efficiently."

"This planning can involve decisions like changing lanes when deemed favorable," she said.

Ultimately, though, the best litmus test for any robotaxi is when you stop noticing that you're in a robotaxi.

Outside those small but notable moments, we recorded footage for this story and chatted in comfort without feeling like we were on the edge of our seats.

Tesla

Tesla's FSD delivered a mostly smooth driving experience, and we think it deserves some props for doing so with a smaller and cheaper tech stack, i.e., only eight cameras.

The interior of a Tesla.
Tesla's latest FSD Supervised software still requires a human driver behind the wheel.

Alistair Barr/BI

FSD knew how to signal a lane change as it approached a large stalled vehicle taking up a lot of road room, and it didn't have any sudden moments of braking. Just a few years ago, Tesla owners were reporting issues of "phantom braking." We experienced none of that on our drive.

Tesla also handled highway driving flawlessly. Sure, the weather was clear and traffic was fairly light, but, as noted earlier, Waymo does not yet offer public rides on highways. The company is still testing.

However, Tesla FSD did make a few mistakes, including one critical error.

At the end of our drive at Chase Center, we assessed how Waymo and Tesla's systems performed. We both gave Waymo a slight edge, but were also impressed with the FSD system.

On our way back to Twin Peaks, Tesla highlighted a route that would take us on the highway, a route that Waymo cannot take. We kept Tesla FSD on for this trip while we continued recording.

San Francisco is known to have a lot of brightly marked, green bike lanes for cyclists. There was one moment during the trip back when the Tesla made a right turn onto a bike lane and continued to drive on it for a few seconds before it merged into the proper lane.

Then, as we approached the last half-mile of our ride, the Tesla, for an unknown reason, ran a red light.

Traffic intersection
Tesla FSD ran a red light at the intersection of Twin Peaks Blvd and Portola Drive.

Lloyd Lee/Business Insider

The incident occurred at a fairly complex intersection that resembles a slip-lane intersection, but with a traffic light. The Waymo did not approach this intersection since it took a different route to get back to Twin Peaks.

The Tesla's console screen showed how the car detected the red light and came to a dutiful stop. Then, despite the traffic light not changing, the Tesla drove ahead.

We didn't come close to hitting any cars or humans on the street – Tesla's FSD is good at spotting such risks, and the main source of traffic coming across our path had been stopped by another traffic light. However, the vehicle slowly drove through this red light, which left us both somewhat shocked at the time.

Some Tesla drivers appear to have reported similar issues in online forums and in videos that showed the vehicle recognizing the red light but driving ahead. One YouTuber showed how the Tesla first came to a stop at a red light and then continued driving before the light changed.

It's unclear how common this issue is. Tesla hasn't publicly addressed the problem.

A spokesperson for Tesla did not respond to a request for comment.

At this point, we thought the winner was clear.

Verdict

Since Tesla's FSD made a critical error that would have landed an automatic fail during a driver's license test, we thought it was fair to give Waymo the win for this test.

Lloyd Lee in the passenger seat of the Waymo taxi.
The Waymo was the clear winner in our test since it didn't run a red light like the Tesla.

Alistair Barr/BI

The Tesla handled San Francisco's hilly and winding roads almost as flawlessly as Waymo.

We also think FSD's ability to take routes that Waymo can't handle for now, in particular the highway, would give Tesla a major upper hand.

In addition, when Lee tried on a different day to make the Waymo go through the same intersection where the Tesla blew the red light, the Waymo app appeared to do everything it could to avoid that intersection, even if it provided the quickest path to get to the destination, according to Google Maps.

A Waymo spokesperson did not provide a comment on what could've happened here.

Still, an error like running a red light cannot be overlooked when human lives are at stake. Consider that when Tesla rolls out its robotaxi service, a human driver will not be behind the wheel to quickly intervene if it makes an error.

For Tesla and Waymo, we expected to be on the lookout for small, almost negligible, mistakes or glitchy moments from the AI driver. We did not anticipate an error as glaring as running a red light.

Once Tesla launches its robotaxi service in more areas, we'll have to see how the pick-up and drop-off times compare.

Tesla CEO Elon Musk said that the company's generalized solution to self-driving is far superior to its competitors. The company has millions of cars already on the roads collecting massive amounts of real-world data. According to Musk, this will make FSD smarter and able to operate with only cameras.

With Tesla's robotaxi service set to launch in June with human passengers, we certainly hope so.

Read the original article on Business Insider

An exclusive look inside the hottest legal AI startup

16 May 2025 at 08:21
Harvey CEO Winston Weinberg

Reuters/Fortune

Hello, and welcome to your weekly dose of Big Tech news and insights. I'm your host Alistair Barr. My dog Maisie is having surgery soon, so keep her in your thoughts.

I recently met a friend for coffee, and he shared some surprising news. After working in cloud computing for roughly 20 years, he's moving from Silicon Valley to the UK. Would you leave Silicon Valley right now? Where would you go? (Send me a note if you want to share.) If you're interested in living in other places, check out our stories on moving to India, Canada, and Spain.


Agenda

  • This week, we're talking about how generative AI is changing professional services, especially law firms and consulting.
  • We'll also take a look at the Silicon Valley chatter right now, including Meta's turning point, Google's pickle, and Microsoft's new AI vision.
  • And I'll experiment with an AI tool and show you the results, something I hope to do each week, and get your responses and recommendations.

Central story unit

Harvey co-founders Winston Weinberg and Gabe Pereyra

Harvey

I went to a party in San Francisco recently. Yeah, I know, crazy. Actually, it was a bit wild, in a lovable, techy way.

StackBlitz threw a "hackathon" event to show off its AI coding service called Bolt.new. It's a hot product right now, and the party was packed. The Chainsmokers DJ-ed, and I zipped around chatting to as many people as possible, with my tech buddy Dave in support (if questions got too technical!).

I met one person who said she worked at Harvey, a startup that's using generative AI to help lawyers operate more efficiently and automate parts of legal work.

I asked what she did, and she said she was a lawyer. I had assumed she'd be a software engineer, given she works for an AI startup. But no, she's a fully qualified attorney who, instead of advising clients, helps to train Harvey's AI models to be better at law.

Right on cue, BI's Melia Russell has an in-depth, exclusive look at Harvey. She visited founder Winston Weinberg and learned some important scoopy stuff about the company's latest moves and how it's tackling growing competition in the suddenly hot legal tech market.

The legal profession is pretty well suited to large language models and generative AI. It's based mostly on rules, laws, and other dense, complicated text. Legions of law firm associates usually spend years learning how to parse and interpret this information for clients.

Now, all this content, along with decades of legal decisions and other records, is being used as training data to develop specialized AI models and tools. AI needs high-quality training data, and in the legal profession there's a ton of it.

The end result is tools like Harvey that can automate some of the busy work that previously bogged lawyers down, and could change how the entire profession operates.

You know what other industry has AI potential? Consulting. The Big Four (Deloitte, EY, PwC, and KPMG) are investing in AI agents to "liberate" employees from thousands of hours of work a year. For instance, generative AI tools are pretty good at creating PowerPoint slide presentations. Do you feel liberated?


News++

Other BI tech stories that caught my eye lately:


Eval time

My take on who's up and down in the tech industry right now, including updates on Big Tech employee pay. This is based on an evolving, unscientific system I developed myself. (A bit like AI model benchmarking these days!)

UP: Tim Cook probably breathed a sigh of relief after the US and China paused those really high tariffs. Although it's not out of the woods yet, Apple stock jumped this week.

DOWN: Google is in an antitrust quagmire, and ChatGPT may be eating into its prized Search business. Take a look at this metric. It's not great. We'll see if the company has answers next week at its big I/O conference.

COMP UPDATE: This data from Levels.fyi made me look. Software engineers (SWEs) – if you're not in AI, your tech compensation may not rise as much as it once did:

A chart showing tech compensation trends

Levels.fyi


From the group chat

Other Big Tech stories I found on the interwebs:


AI playground

This is the time each week when I try an AI tool. Is it better than what I could have done myself? Was it faster and more efficient than asking a technical colleague for help? I need you, dear reader, to help. What am I doing wrong? What should I do, or use, next week? Let me know.

I started off simple. I asked ChatGPT (Enterprise 4o) to create an image that sums up the past week in tech. I told it to use Business Insider style. Here's what it came up with:

An image generated by ChatGPT showing tech news of the week
An image generated by ChatGPT showing tech news of the week

Alistair Barr/ChatGPT

This is after a couple of pretty bad initial attempts. The image is not bad, but not amazing. The Samsung blue blob logo is floating by itself down there. Why? Who knows? It's true that Google is prepping for I/O. It also seems to have mixed up Apple and Samsung? And I couldn't find news related to new real-time features added recently to OpenAI's GPT-4o model. (I asked OpenAI's PR dept and will let you know if they respond.) It used an old BI logo, too.


User feedback

I would love to hear from anyone who reads this newsletter. What am I doing wrong? What do you want to see more of? Specifically, though: This week, I want to hear back from folks who work in professional services, such as lawyers and consultants.

Attorneys: What's your experience been with Harvey AI and similar AI tools so far? Has this tech helped you get stuff done faster and better for clients? Or not? Are you worried legal AI tools might replace you in the end? Will it change the law firm business model? Or is this another tech flash in the pan that won't amount to much? Let Melia Russell and me know at [email protected] and [email protected].

Same question for people who work at consulting firms. Any insights or views, reach out to my excellent colleague Polly Thompson at [email protected].

Read the original article on Business Insider

One of the top AI companies won't let you use AI when you apply for a job there

13 May 2025 at 02:00
Anthropic CEO Dario Amodei at the World Economic Forum in Davos

Yves Herman/REUTERS

  • Anthropic bans the use of AI in job applications.
  • This is a leading AI startup that offers a top AI chatbot service called Claude.
  • "AI is circumventing the evaluation of human qualities and skill," a top tech recruiter told BI.

AI companies want everyone to use AI. In fact, they often warn that if you don't use AI, you'll get left behind, a penniless Luddite that no one will hire.

There may be only one area of modern life where the technorati think it's bad to use AI. That's when you apply for a job at their company.

That's the situation at Anthropic, one of the leading AI labs run by a slew of early OpenAI researchers and executives.

Anthropic is hiring a lot right now. If you go to their career website and click on a job posting, you'll be asked to write a short essay on why you want to work for the startup. It's one of those really annoying job application hurdles, and a perfect task for Anthropic's AI chatbot Claude.

The problem is, you can't use AI to apply.

"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," Anthropic wrote in a recent job posting for an economist. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

The AI startup has the same requirement with other job listings, including these technical roles below. Which, you know, will require a lot of AI use. Just not in the application, though.

For anyone applying to that third job, you better not cheat and use AI! Gotta be honest for that role in particular.

Why would an AI company not want people using its products like this? This technology is supposed to take over the world, revolutionizing every aspect of work and play. Why stop at job applications?

This is especially true for math and science boffins who usually do numbers best, not words. I asked Anthropic to explain this policy last week. OpenAI doesn't seem to have an AI ban like this, but I checked in with that AI firm, too. Neither firm responded.

"The pendulum is swinging"

I also asked Jose Guardado about this. He's a recruiter at VC firm The General Partnership, who helps startup founders build engineering teams. This chap used to work at Google and Andreessen Horowitz, so he's a real (human) expert at this stuff.

"The pendulum is swinging more toward humanities and authentic human experiences," he said.Β 

In a world where AI tools can write and check software code pretty well, it may not be the best strategy to hire an army of math whizzes who don't communicate with colleagues very well, or struggle to describe their plans in human languages, such as English.

Maybe you need a combination of skills now. So getting candidates to write a short essay, without the help of AI, is probably a good idea.

Guardado made the more obvious point, too: If candidates use AI to answer interview questions, you can't tell if they'll be any good at the job.

"There are well-known instances of students using AI chatbots to cheat on tests, and others using similar technology to cheat on coding interviews," Guardado told me. "AI is circumventing the evaluation of human qualities and skill. So there's a bit of a backlash right now."

"So, how do you find authentic measures of evaluation?" he added. "How can you truly get a measure of applicants?"

Banning AI like this is probably a better way, for now, according to Guardado.

"It's ironic that the maker of Claude is at the forefront of this," he said of Anthropic.Β "Is it the best look for them, as a leading AI provider?"

Sign up for BI's Tech Memo newsletter here. Reach out to me via email at [email protected]

Read the original article on Business Insider

Forget SEO. The hot new thing is 'AEO.' Here are the startups chasing this AI marketing phenomenon.

12 May 2025 at 02:01
OpenAI's Sam Altman discusses AI at a university in Berlin
Marketers now offer to help with "AEO" when it comes to getting good placement in Sam Altman's ChatGPT.

Axel Schmidt/REUTERS

  • "AEO" is replacing "SEO" as AI chatbots such as Sam Altman's ChatGPT change online discovery.
  • AEO focuses on influencing AI chatbot responses. It's different from traditional keyword-driven SEO.
  • AEO startups are rapidly emerging, raising venture capital, and analyzing growing AI-driven traffic.

SEO, or search engine optimization, is the art and science of crafting websites and other online content so it shows up prominently when people Google something.

A massive industry of experts, advisors, gurus (and charlatans) has grown up around the interplay between Google, which purposely sets opaque rules, and website owners tweaking stuff and trying to work the system.

The rise of generative AI, large language models, and AI chatbots is changing all this, radically and quickly.

While SEO has long been a cornerstone of digital marketing strategy, a new paradigm is rapidly threatening to take its place: "answer engine optimization," or AEO.

As AI chatbots such as ChatGPT, Claude, Gemini, and Perplexity become the front door to online discovery, AEO is emerging as a strategic imperative for growth. There's been an explosion of AEO startups and tools in recent months, all promising to help online businesses show up when chatbots and AI models answer user questions.

"There must have been 30 AEO product launches in the last few months, all trying to do what SEO did 20 years ago," said David Slater, a chief marketing officer who's worked at Mozilla, Salesforce, and other tech companies. "It's absolutely going to be a hot space."

What is AEO?

AEO is SEO adapted for the world of conversational AI, says Ethan Smith, CEO of digital marketing firm Graphite Growth. He wrote an excellent blog recently about this new trend.

Where traditional SEO focused on optimizing for static keyword-driven queries, AEO centers on influencing how AI chatbots respond to user questions, he says. With tools like ChatGPT increasingly integrating real-time web search and surfacing clickable links, chat interfaces now function like hybrid search engines. The result is a fast feedback loop that makes influencing LLM outputs not just possible, but essential for online businesses.

Unlike SEO, where a landing page might target a single keyword, AEO pages must address clusters of related questions. Smith shares an example: Instead of optimizing a webpage for "project management software," AEO pages might answer dozens or even hundreds of variations such as "What's the best project management tool for remote teams?" or "Which project management platforms support API integration?"

Why ChatGPT's live web access makes AEO important

This shift didn't happen overnight. When ChatGPT launched in late 2022, its responses were generated from outdated training data with no live web access. But over the past year, LLMs have started using retrieval-augmented generation, or RAG, and other techniques that help them incorporate more real-time information. They often perform a live online search, for instance, and then summarize results in real time. This makes AEO both faster to influence and more dynamic than its SEO predecessor, Smith writes.
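
At a high level, the retrieval-augmented pattern looks like the sketch below: run a live search, put the top results into the prompt, and have the model answer from those sources. This is a generic illustration rather than any specific chatbot's pipeline; search_web is a placeholder stand-in for a real search backend, and the OpenAI model name is just an example.

```python
# Generic retrieval-augmented generation (RAG) sketch, not any specific product's pipeline.
# `search_web` is a stand-in; a real system would call a search API or its own index.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def search_web(query: str) -> list[dict]:
    """Placeholder search backend returning canned results so the sketch runs."""
    return [{"title": "Example result", "snippet": "An example snippet.", "url": "https://example.com"}]

def answer_with_citations(question: str) -> str:
    results = search_web(question)
    # Put the retrieved snippets into the prompt so the model can ground its answer.
    context = "\n\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']} ({r['url']})" for i, r in enumerate(results)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": "Answer using the sources below and cite them as [n]."},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_with_citations("What's the best project management tool for remote teams?"))
```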

There's been some interest in AEO for about a year or so. But in early 2025, OpenAI's ChatGPT and other generative AI services began surfacing prominent links and citations in answers a lot more. That's when AEO really took off.

Now, AEO startups are raising venture capital, some online businesses are seeing conversion spikes from AI traffic, and there's been a Cambrian explosion of AEO analytics, tracking, and content tools.

Check out this list of AEO startups and tools, identified by Smith from Graphite Growth. There are a few established players in here, too, including HubSpot.

Looking into the 'brain' of an AI model

There's already a race to try to determine how these AI chatbots spit out results and recommendations, so website owners can hack their way to better online distribution in the new era of generative AI and large language models.

GPTrends is one of these up-and-coming AEO providers. David Kaufman, one of the entrepreneurs behind the firm, shared an interesting analysis recently on LinkedIn.

He said that AI search results from tools such as ChatGPT and Perplexity are unpredictable. They can change even when you ask the same question multiple times. Unlike Google, where search results stay mostly the same, AI tools give different answers depending on how the model responds in the moment, Kaufman writes.

For example, Kaufman and his colleagues asked ChatGPT this question 100 times: "What's the best support ticketing software?" Then they tracked which providers appeared most often. Here are the results of the test:

A chart showing an example of results from a ChatGPT request

David Kaufman, GPTrends

Zendesk showed up in 94% of answers, while other companies, including Freshworks and Zoho, appeared less often and in different positions. This randomness gives less well-known brands a better shot at being seen, at least some of the time.
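
An experiment like Kaufman's can be approximated with a short script: ask the model the same question many times and count which brands show up in the answers. The sketch below is a generic version of that idea, not GPTrends' actual methodology; the brand list, model name, and run count are assumptions.

```python
# Sketch of a repeated-query visibility test in the spirit of the GPTrends experiment;
# not their actual methodology. The brand list, model name, and run count are assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

BRANDS = ["Zendesk", "Freshworks", "Zoho", "Help Scout", "Intercom"]
QUESTION = "What's the best support ticketing software?"

def count_mentions(runs: int) -> Counter:
    mentions = Counter()
    for _ in range(runs):
        answer = client.chat.completions.create(
            model="gpt-4o",  # example model name
            messages=[{"role": "user", "content": QUESTION}],
        ).choices[0].message.content
        for brand in BRANDS:
            if brand.lower() in answer.lower():
                mentions[brand] += 1
    return mentions

print(count_mentions(runs=10))  # a smaller run than the 100 queries in the original test
```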

"Strategically, this means brands need to rethink how they optimize for discovery, focusing less on traditional SEO tactics and more on comprehensive, authoritative content that AI systems recognize as valuable," Kaufman writes.

Read the original article on Business Insider

Big Tech's AI-powered message to staff: Do more with less

11 May 2025 at 03:36
Photo collage of panicked and frustrated tech workers

EThamPhoto/Getty, membio/Getty, katleho Seisa/Getty, Tyler Le/BI

Welcome back to our Sunday edition, where we round up some of our top stories and take you inside our newsroom. Happy Mother's Day to anyone celebrating today. I'm Alistair Barr. I'm subbing in this week to get some practice ahead of Tech Memo, a BI newsletter launching very soon. It's a weekly inside look at Big Tech: what you need to know, what it's like to work in Silicon Valley, and how to get ahead. I'm paying for two kids in college right now, so do me a solid and sign up here!



But first: Working in Big Tech is changing radically.




This week's dispatch

A row of employees with one golden glowing figure in the foreground

Getty Images; Tyler Le/BI

Wanted: Fewer, better employees doing more with less

For decades, Silicon Valley has suffered from a shortage of technical talent. This place is a software-production engine, and smart, young, hungry engineers have been its main fuel source. They work night and day, churning out code for websites, apps, search engines, social networks, and more.

The companies that won recruited and retained the best talent. The result was a race to lavish employees with juicy salaries and huge stock awards. Perks were plentiful: free massages, laundry service, and delicious food served on cushy campuses.

Like all powerful trends, though, this is ending. Don't get me wrong. Tech companies are still hiring a lot of software engineers, and compensation is holding up so far. But the intensity, born of this talent supply-demand mismatch, is waning.

The COVID-era tech hiring boom is partially to blame. Companies want fewer, better employees now.

Generative AI is another big factor. Turns out, AI models are really good at writing and checking software code, changing the power dynamic between Big Tech and employees. It's the topic of a story by BI reporters Eugene Kim and Hugh Langley.

David Sacks, a venture capitalist who advises the White House on AI, puts it well. "The ramifications of moving from a world of code scarcity to code abundance are profound," he wrote on X recently.

There'll be A LOT more code, and way more software products that are updated and improved quicker, changing how developers work. Eugene's exclusive on Amazon's secret AI coding project, called Kiro, is a good example. "With Kiro, developers read less but comprehend more, code less but build more, and review less but release more," the company wrote in an internal document.

Here's another, more disruptive, potential outcome: Everyone can become a developer. In the past, if you wanted something technical done, you had to ask your well-paid, overworked engineering colleagues for help. Now, with AI tools, maybe you can do some of this yourself. Cursor, Vercel, Replit, and Bolt.new are just a few of the new low- or zero-code AI-powered services that help users solve problems with plain English instructions.

All of that means the pool of available developers is likely to grow massively, and Big Tech companies will have to do a lot less talent-chasing.


Is now a good time to buy a house?

House on a money lawn.

Kiersten Essenpreis for BI

It's not a simple "yes" or "no." Recent economic uncertainty and steep prices have tainted the housing market for buyers. But they also have more options and more bargaining power.

In BI's second installment of its six-part series on making major life decisions, senior real estate reporter James Rodriguez broke it all down.

How house hunters can come out strong.



The battle of the robotaxis

Photo collage of a Waymo Taxi and a Tesla Model S

Robin Marchant/Getty, Sean Gallup/Getty, Tyler Le/BI

Tesla plans to launch its robotaxi service in Austin this June, stepping on Waymo's turf. But the two companies' approaches to driverless vehicles are pretty different.

BI compared their tech and business strategies to understand how each will gain ground. One company stands out as more autonomous.

Taking it to the streets.



Rich to the rescue

Foam fingers spelling out "Rich" on a line graph

Getty Images; Tyler Le/BI

Lower- and middle-income people have scaled back spending, but the wealthy haven't. Love 'em or hate 'em, rich people are propping up the US economy right now.

However, there are risks to having the economy depend on a small group of people. If things go south for the wealthy, they'll take everyone else with them.

It's time to start rooting for the rich.


A Buffett-less future

Warren Buffett riding in a golf cart

Scott Morgan/REUTERS

Warren Buffett shocked investors at Berkshire Hathaway's "Woodstock for Capitalists" last weekend by announcing his retirement from the company.

A BI reporter asked Buffett fans what they thought of the news. There were some tears, and plenty of anxiety about Berkshire's future.

"Still processing."



This week's quote:

"People are increasingly grumpy because they can't change jobs."

β€” Guy Berger, the director of economic research at the Burning Glass Institute, on Americans feeling stuck at the jobs they want to leave.


More of this week's top reads:

  • We went to Milken, where the rich were worrying in public and partying in private.
  • Apple's comments on Search gave investors one reason to worry about Google's future. Here's another.
  • For Instagram creators, getting likes is no longer enough.
  • A once-niche market for secondhand stakes in private funds is booming. What it's like to work in secondaries.
  • The freeloader era of streaming is over.
  • Epic Games' CEO says fighting Apple cost his company more than $1 billion. He says it was worth it.
  • A Tesla worker knew his anti-Elon Musk website was a risk. He did it anyway.
  • Hollywood's biggest winners and losers from Trump's potential movie tariffs.


    The BI Today team: Dan DeFrancesco, deputy editor and anchor, in New York. Grace Lett, editor, in Chicago. Amanda Yen, associate editor, in New York. Lisa Ryan, executive editor, in New York. Elizabeth Casolo, fellow, in Chicago.


Wall Street thought AI would spark a massive iPhone upgrade cycle. Turns out, fear of Trump's tariffs did the trick.

8 May 2025 at 05:21
A woman points her iPhone toward Donald Trump at a campaign rally in California
A woman points her iPhone toward Donald Trump at a campaign rally in California. Trump's tariffs pushed people to upgrade their iPhones, it appears.

REUTERS/Lucy Nicholson

In a surprising twist, it wasn't flashy AI features that finally nudged longtime iPhone users to upgrade. It was tariffs.

Wall Street analysts have been predicting a major iPhone upgrade cycle since Apple started integrating generative AI into the devices about a year ago. It hasn't really happened yet.

However, fears of impending price hikes due to tariffs drove a notable surge in iPhone purchases in the first quarter of 2025, according to data from Consumer Intelligence Research Partners. This wasn't a marginal blip.

The percentage of US iPhone buyers retiring phones three years or older hit 39% in this recent period. That's up from a steady 30% in previous quarters.

"That is a significant shift," CIRP founders Michael Levin and Josh Lowitz wrote in a note to clients on Wednesday. "The threat of tariff-induced price increases really moved the needle."

Maybe Tim Cook should thank Donald Trump?

A chart showing the length of time iPhone buyers had a previous phone

Source: Consumer Intelligence Research Partners

This shift marks a significant behavioral change among iPhone users, particularly those known for holding onto their devices the longest.

Since the end of two-year carrier subsidies, and thanks to more durable hardware and better battery life, consumers have been upgrading less frequently. But the looming specter of higher prices changed the calculus for many iPhone owners. Long-term users, typically more cost-conscious than tech-hungry early adopters, decided it was better to buy now than risk paying more later.

Interestingly, the tariff scare didn't inspire a blanket rush to upgrade across all user types. Those who already upgrade frequently, often after just one or two years, didn't significantly alter their patterns, according to CIRP's findings. Instead, it was the "delayers," the users who cling to aging devices, who made the leap. For them, this iPhone upgrade was less about chasing AI innovation and more about necessity: replacing a failing phone before the price tag climbed out of reach.

Apple is a utility

The data confirms what I told BI readers earlier this year: an iPhone is a utility now for most people. We don't care about the whiz-bang new features; we just need it not to break for as long as possible so we can run our increasingly digital lives.

Long-term iPhone owners don't upgrade for status or the latest camera specs; they upgrade when holding off any longer becomes riskier than making a purchase. This cohort, often overlooked in flashy product launches, just made a major impact on Apple's quarterly sales.

What remains to be seen is whether this bump is a one-time blip or the start of a new pattern. If macroeconomic concerns continue to shape tech-buying habits, Apple and its competitors may need to rethink how they target and serve this "long-hold" user base. For now, though, it seems a little tariff fear went a long way in moving a notoriously cautious segment of the market.

Will future pricing fears continue to drive these pragmatic upgraders into the Apple Store sooner than expected?

Time, and tariffs, will tell.


The most important lesson from OpenAI's big ChatGPT mistake: 'Only connect!'

2 May 2025 at 12:33
British novelist and critic Edward Morgan Forster (1879-1970)
British novelist and critic Edward Morgan Forster (1879-1970) has a lesson for us in the AI age.

Edward Gooch Collection/Getty Images

OK, get ready. I'm getting deep here.

OpenAI messed up a ChatGPT update late last month, and on Friday, it published a mea culpa. It's worth a read for its honest and clear explanation of how AI models are developed, and how things can sometimes go wrong in unintended ways.

Here's the biggest lesson from all this: AI models are not the real world, and never will be. Don't rely on them during important moments when you need support and advice. This is what friends and family are for. If you don't have those, reach out to a trusted colleague or human experts such as a doctor or therapist.

And if you haven't read "Howards End" by E.M. Forster, dig in this weekend. "Only Connect!" is the central theme, which includes connecting with other humans. It was written in the early 20th century, but it's even more relevant in our digital age, where our personal connections are often intermediated by giant tech companies, and now AI models like ChatGPT.

If you don't want to follow the advice of a dead dude, listen to Dario Amodei, CEO of Anthropic, a startup that's OpenAI's biggest rival: "Meaning comes mostly from human relationships and connection," he wrote in a recent essay.

OpenAI's mistake

Here's what happened recently. OpenAI rolled out an update to ChatGPT that incorporated user feedback in a new way. When people use this chatbot, they can rate the outputs by clicking on a thumbs-up or thumbs-down button.

The startup collected all this feedback and used it as a new "reward signal" to encourage the AI model to improve and be more engaging and "agreeable" with users.

Instead, ChatGPT became waaaaaay too agreeable and began overly praising users, no matter what they asked or said. In short, it became sycophantic.

"The human feedback that they introduced with thumbs up/down was too coarse of a signal," Sharon Zhou, the human CEO of startup Lamini AI, told me. "By relying on just thumbs up/down for signal back on what the model is doing well or poorly on, the model becomes more sycophantic."

OpenAI scrapped the whole update this week.

Being too nice can be dangerous

What's wrong with being really nice to everyone? Well, when people ask for advice in vulnerable moments, it's important to try to be honest. Here's an example I cited earlier this week that shows how bad this could get:

it helped me so much, i finally realized that schizophrenia is just another label they put on you to hold you down!! thank you sama for this model <3 pic.twitter.com/jQK1uX9T3C

β€” taoki (@justalexoki) April 27, 2025

To be clear, if you're thinking of stopping a prescribed medicine, check with your human doctor. Don't rely on ChatGPT.

A watershed moment

This episode, combined with a stunning surge in ChatGPT usage recently, seems to have brought OpenAI to a new realization.

"One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice," the startup wrote in its mea culpa on Friday. "With so many people depending on a single system for guidance, we have a responsibility to adjust accordingly."

I'm flipping this lesson for the benefit of any humans reading this column: Please don't use ChatGPT for deeply personal advice. And don't depend on a single computer system for guidance.

Instead, go connect with a friend this weekend. That's what I'm going to do.


A new, 'diabolical' way to thwart Big Tech's data-sucking AI bots: Feed them gibberish

1 May 2025 at 06:38
AI Robot hand squeezing data, binary code and dollar signs out of a desktop computer
As AI tools vacuum up massive amounts of free training data from the internet, companies are fighting back.

Getty Images/Alyssa Powell

  • Bots now generate more internet traffic than humans, the cybersecurity firm Thales says.
  • This is being driven by web crawlers from tech giants that harvest data for AI model training.
  • Cloudflare's AI Labyrinth misleads and exhausts bots with fake content.

A data point caught my eye recently. Bots generate more internet traffic to websites than humans now, the cybersecurity company Thales says.

This is being driven by a swarm of web crawlers unleashed by Big Tech companies and AI labs, including Google, OpenAI, and Anthropic, that slurp up copyrighted content for free.

I've warned about these automated scrapers before. They're increasingly sophisticated and persistent in their quest to harvest information to feed the insatiable demand for AI training datasets. These bots not only take data without permission or payment but also cause traffic surges in some parts of the internet, increasing costs for website owners and content creators.

Thankfully, there's a new way to thwart this bot swarm. If you're struggling to block them, you can send them down new digital rabbit holes where they ingest content garbage. One software developer recently called this "diabolical," in a good way.

Absolutely diabolical Cloudflare feature. love to see it pic.twitter.com/k8WX2PdANN

β€” hibakod (@hibakod) April 25, 2025

It's called AI Labyrinth, and it's a tool from Cloudflare. Described as a "new mitigation approach," AI Labyrinth uses generative AI not to inform but to mislead. When Cloudflare detects unauthorized activity, typically from bots ignoring "no crawl" directives, it deploys a trap: a maze of convincingly real but irrelevant AI-generated content designed to waste bots' time and chew through AI companies' computing power.

Cloudflare pledged in a recent announcement that this is only the first iteration of using generative artificial intelligence to thwart bots.

Digital gibberish

Unlike traditional honeypots, AI Labyrinth creates entire networks of linked pages invisible to humans but highly attractive to bots. These decoy pages don't affect search engine optimization and aren't indexed by search engines. They are specifically tailored to bots, which get ensnared in a meaningless loop of digital gibberish.

When bots follow the maze deeper, they inadvertently reveal their behavior, allowing Cloudflare to fingerprint and catalog them. These data points feed directly into Cloudflare's evolving machine learning models, strengthening future detection for customers.
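
Conceptually, the trap is simple: pages generated on demand that link only to more generated pages, marked so real search engines ignore them. The sketch below is a generic illustration of that idea, not Cloudflare's implementation, which layers on the bot fingerprinting and detection described above.

```python
# Generic honeypot-maze sketch: every /maze/ URL yields filler text plus links
# to more /maze/ URLs. Illustrative only; not Cloudflare's AI Labyrinth code.
import hashlib
import random

def decoy_page(path: str, links_per_page: int = 5) -> str:
    """Build a deterministic decoy page whose links lead only to more decoys."""
    rng = random.Random(int(hashlib.sha256(path.encode()).hexdigest(), 16))
    filler = " ".join(
        rng.choice(["synthetic", "plausible", "irrelevant", "filler", "text"])
        for _ in range(40)
    )
    links = "".join(
        f'<a href="/maze/{rng.randrange(10**9)}">more</a>' for _ in range(links_per_page)
    )
    # The robots meta tag keeps legitimate search engines from indexing the maze.
    return (
        '<html><head><meta name="robots" content="noindex"></head>'
        f"<body><p>{filler}</p>{links}</body></html>"
    )

def handle_request(path: str, looks_like_bad_bot: bool) -> str:
    if looks_like_bad_bot and path.startswith("/maze/"):
        return decoy_page(path)  # keep the crawler busy chewing on gibberish
    return "<html><body>Real content for real visitors.</body></html>"

print(handle_request("/maze/42", looks_like_bad_bot=True)[:120])
```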

Will Allen, the vice president of product at Cloudflare, told me that more than 800,000 domains had fired up the company's general AI bot blocking tool. AI Labyrinth is the next weapon to wield when sneaky AI companies get around blockers.

Cloudflare hasn't released data on how many customers use AI Labyrinth, which suggests it's too early for major adoption. "It's still very new, so we haven't released that particular data point yet," Allen said.

I asked him why AI bots are still so active if most of the internet's data has already been scraped for model training.

"New content," Allen said. "If I search for 'what are the best restaurants in San Francisco,' showing high-quality content from the past week is much better than information from a year or two prior that might be out of date."

Turning AI against itself

Bots are not just scraping old blog posts; they're hungry for the freshest data to keep AI outputs relevant.

Cloudflare's strategy flips this demand on its head. Instead of serving up valuable new content to unauthorized scrapers, it offers them an endless buffet of synthetic articles, each more irrelevant than the last.

As AI scrapers become more common, innovative defenses like AI Labyrinth are becoming essential. By turning AI against itself, Cloudflare has introduced a clever layer of defense that doesn't just block bad actors but also exhausts them.

For web admins, enabling AI Labyrinth is as easy as toggling a switch on the Cloudflare dashboard. It's a small step that could make a big difference in protecting original content from unauthorized exploitation in the age of AI.


Carrots and sticks: How Amazon, Google, Microsoft, and Meta are reshaping performance management

6 May 2025 at 09:03
A man dangles a bunch of carrots from a stick, isolated on a blue background.
Carrots and sticks.

DNY59/Getty Images

  • Silicon Valley is shifting from perks to performance, emphasizing rewards and consequences.
  • Google, Microsoft, and Meta are adopting stricter performance management to drive excellence.
  • Tech firms are focusing on efficiency amid AI growth and Wall Street's demand for productivity.

In Silicon Valley, the message is clear: The era of cushy perks and low accountability is over.

Big Tech is undergoing a cultural reset, combining generous rewards for high performers (the "carrots") with increasingly sharp consequences for those who don't meet expectations (the "sticks").

At Google, Microsoft, and Meta, performance management has become both an incentive engine and a weeding tool, which reflects the broader industry shift toward leaner, more intense workplaces.

An infographic showing how Big Tech companies are handling performance management.

Alistair Barr

Google: More rewards for the exceptional

As my colleague Hugh Langley scooped this week, Google is nudging employees toward higher performance by sweetening the pot for top contributors. The company recently updated its performance review system, allowing more employees to qualify for higher performance ratings, which come with bigger bonuses and equity grants.

But these changes are "budget-neutral," meaning increased rewards for high performers will come at the expense of those rated in lower tiers, reinforcing the push for excellence with some consequences for those who might be planning to rest and vest.

Microsoft: Opt out or be managed out

Microsoft, meanwhile, is rolling out a more aggressive "stick" policy. The company now offers underperforming employees a choice: Take a 16-week payout and leave voluntarily, or enter a formal performance improvement plan with defined expectations and deadlines.

Those who enter the PIP and fall short may be ousted, and they won't get the payout. They will also be blocked from being rehired for two years.

This approach is similar to Amazon's controversial "Pivot" program and signals Microsoft's intent to eliminate ambiguity around performance standards. Earlier this year, Microsoft terminated 2,000 employees deemed low performers, without severance, underscoring its no-nonsense shift.

Amazon: Revamped incentives

Amazon is overhauling its employee compensation system to better reward those who consistently perform at the highest level, while tightening payouts for others.

According to internal documents obtained by BI's Eugene Kim, employees who earn a "Top Tier" rating for four straight years can now receive 110% of their pay band, up from the previous cap of 100%. First-time Top Tier recipients, however, will now get just 70%, down from 80%.

The new model prioritizes an employee's longer-term performance history over one-time gains and places more weight on Amazon's "Overall Value" ratings.

Meta: Performance cuts and block lists

Meta has recently embraced performance-based culling, too. Its intense review process is now designed to cut about 5% of Meta's workers, the ones deemed its lowest performers.

An internal memo obtained by Business Insider earlier this year suggested that the company wanted to make performance-based layoffs an annual thing, under a policy of "non-regrettable attrition."

Adding to the harshness is Meta's use of internal "block lists," which bar certain former employees from being rehired. Even high performers laid off during earlier rounds have found themselves inexplicably blocked from returning, despite support from hiring managers.

The broader shift: Pressure, not perks

These strategies reflect a wider recalibration in tech culture. As AI investment surges and Wall Street demands efficiency, companies are pressuring workers to "do more with less."

Tech leaders, from Google to Meta, have explicitly linked success to intensity and ruthless execution. Performance ratings are now a make-or-break proposition, not just a career checkpoint.

The "carrots" are richer than ever for top-tier employees. But the "sticks" are sharper, more frequent, and often final.


Elon Musk needs a 'Tim Cook' to run Tesla

30 April 2025 at 02:02
A Tesla billboard with Tim Cook featured on it

Noam Galai/Stringer/Getty, Getty Images; Tyler Le/BI

  • Elon Musk may have become bored with the solved problem of manufacturing EVs.
  • Musk's focus shifted from EVs to AI, robotics, and politics, leaving Tesla's core business adrift.
  • A Tim Cook-style CEO could take Tesla operations to new heights, similar to Apple's post-Jobs success.

This "open letter" might be fake. It could be real. I don't know, but it reminded me of a column I've been wanting to write.

The letter is purportedly from disgruntled Tesla employees who want Elon Musk to stop being CEO so the company can go back to selling lots of electric vehicles. His DOGE bender has damaged Tesla's brand so much that he should step back, according to this alleged letter.

Even if this is a made-up missive, it raises an interesting question: Why would the CEO of the world's most valuable car company wander off for months to pursue political dalliances when the electric vehicle battle has just entered a new, crucial stage?

The answer may be simple: Elon is bored making cars.

He spent the last decade working his butt off to pull Tesla out of near bankruptcy and turn it into the dominant EV company of the Western world. There was a lot of innovation and many sleepless nights.

But now, one could argue, EV manufacturing is a solved problem. Tesla has gigafactories and other giant facilities around the world churning out hundreds of thousands of vehicles a year. The next stages are all about refining these processes and scaling efficient production, and, yes, persuading people to buy the products.

This isn't really Elon's jam. He likes to invent new things, not tweak existing products. This story from The Information put it well when it described why Musk rejected his lieutenants' advice to pursue a cheaper EV, and instead went all-in on artificial intelligence, autonomous vehicles, and robots.

This quote, from a person familiar with the situation, stands out: "I think Elon is just uninterested in making a [Volkswagen] Golf-type car. It just doesn't wake him up in the morning. He was, 'Let somebody else do it.'"

What type of new CEO would Tesla need?

So, who else would do this at Tesla, and what would this mean for the future of the company?

If Musk were to relinquish the CEO role, I think he'd need a Tim Cook-style executive to take the reins.

Cook is a supply chain genius who took over Apple when Steve Jobs died in 2011. At that time, I was a tech reporter at Reuters, and many experts were predicting doom for Apple.

Like Musk, Jobs was a mercurial tech founder who was loved for his visionary inventions and feared for his brutal approach to people management. (Jobs also pulled Apple back from near bankruptcy.)

In 2011, investors just couldn't imagine how Apple might continue to thrive without this creative driving force.

In one way, they were right to worry. Since then, Apple has not launched many truly new and inventive products. The car project failed. The Vision Pro goggles have stumbled out of the gate (and followed Meta's Oculus anyway).

In another way, though, Cook has taken Apple to new heights that were unimaginable 14 years ago. And he did it by taking another solved problem β€” smartphones β€” and perfecting it over and over and over again.

Since 2011, Cook has obsessed over minute iPhone product tweaks, redesigned the chips in-house, switched out and negotiated (hard!) with hundreds of suppliers, and carefully evolved one of the world's largest supply chains in the face of pandemics and dictators.

I asked Apple for comment on Monday and didn't get a response. I also emailed Elon and asked him if he wants to spend the next 10-plus years of his life tweaking Tesla's EV operations. I didn't get a reply, not even a poop emoji.

Apple's now a $3 trillion company

The value Cook has added since 2011 is astounding. Late that year, the company was worth about $350 billion. Apple is now valued at $3.2 trillion.

By iterating intensely on an existing product, grinding costs lower and efficiency higher, this CEO has generated almost $3 trillion in shareholder value and made Apple the most valuable company in the world.

When Apple became the first company to pass $1 trillion in 2018, I edited a story by Mark Gurman that marked this moment. The piece quoted Tony Fadell, who helped Steve Jobs create the iPod. Instead of harping on about Jobs's legacy, Fadell focused mostly on Cook.

"Tim and team have done a masterful job of continuing to develop Steve's vision while bringing operational and environmental excellence to every part of Apple's business to achieve their unheard-of scale while continuing to grow unprecedented margins in the consumer electronics business," Fadell said.

Tesla's '2011 Apple' moment

I think Tesla could be at a "2011 Apple" moment right now.

Musk hasn't died, but his passion for EV manufacturing may have faded.

Like Apple back then, many Tesla investors can't imagine the company without Musk as CEO. But he could, in theory, step back from day-to-day leadership duties while continuing to work on newer, cutting-edge projects in the background, such as the Optimus robot.

Meanwhile, a Cook-style executive could take the CEO role and start hammering away at the solved problem of Tesla's core EV business. This could also reduce the brand damage done by Musk's political foray.

Apple shows what's possible in a scenario like this: Almost $3 trillion in market value created from basically doing an existing thing better and better.

I asked Grace Kay, Business Insider's hot-shot Tesla reporter, who might be a good candidate to replace Musk as CEO. She suggested Omead Afshar, a low-profile, mild-mannered executive who once ran a major Tesla manufacturing operation and has become a trusted confidant of the visionary founder.

Sound familiar?

If you want to know more about Omead Afshar, check out Grace's amazing profile.


Elon Musk's Starlink mints money and has become a geopolitical power tool. No wonder Amazon is splurging on satellites.

29 April 2025 at 11:26
Jeff Bezos and Elon Musk.
Jeff Bezos' Amazon isn't slowing down on Project Kuiper, which could challenge Elon Musk's Starlink.

AP; Getty Images

  • Despite deep cost cuts elsewhere, Amazon is still spending heavily on Project Kuiper satellites.
  • SpaceX's Starlink offers a blueprint for Kuiper's financial potential.
  • Amazon may also want strategic independence from Elon Musk.

When Andy Jassy went on a post-COVID cost-cutting spree, eliminating roughly 27,000 employees, I thought most of Amazon's risky moonshot projects would also get the chop.

So when Jassy didn't cut Project Kuiper, a bold plan to launch a constellation of internet-beaming satellites, I scratched my head. Of all Amazon's moonshots, this may be the most expensive. If the CEO wanted to quickly slash costs, nixing Kuiper would have been the play.

Instead, Kuiper survived the purge (while others, like an AR headset for business meetings, were cut) and Amazon plowed ahead.

That decision underscores two key drivers: massive profit potential and a desire for strategic independence from Elon Musk.

The lure of Starlink-size profit

SpaceX's launch of Falcon 9 rocket lifts off from Cape Canaveral Air Force Station
A SpaceX rocket launching Starlink satellites.

Paul Hennessy/Getty Images

If there's any blueprint for Kuiper's financial upside, it's Starlink. SpaceX's satellite internet venture has rocketed ahead in recent years.

The Evercore ISI analyst Mark Mahaney recently shared estimates of Starlink's financials, citing Chris Quilty, an expert from the satellite and space industry. These are pretty mind-blowing numbers, especially for a service that was ridiculed as impractical and expensive when Musk announced it a decade ago:

  • $12.3 billion in projected revenue for the 2025 fiscal year, up 57% from a year earlier.
  • 7.6 million subscriptions, an increase of 3 million subs from 2024.
  • $7.5 billion in earnings before interest, taxes, depreciation, and amortization, a common measure of profitability.
  • 61% EBITDA margin.
  • And for those who dislike accounting gymnastics: $2 billion of free cash flow.
  • Quilty also forecast that at maturity, Starlink could reach 80% EBITDA margins.

"This suggests Kuiper could be a highly attractive business for Amazon at scale," Mahaney wrote in a recent note to investors.

Recent investment rounds valued SpaceX at about $350 billion.

Amazon has thin margins in its core e-commerce business, so Kuiper could be a way to diversify into a more lucrative field. Mahaney noted that Kuiper is going after a $1 trillion total addressable market in terrestrial telecom and broadband services. Given that Amazon typically competes in ultra-low-margin sectors like e-commerce, the prospect of an 80%-margin business is probably too good to ignore.

If that's not enough of an incentive, here's another: There's only so much satellite communication spectrum up in space, especially in the low-earth-orbit areas where Starlink dominates. Because of this, Quilty reckons Kuiper is not only the closest challenger to Starlink but also likely to be the only significant competitor.

Despite delays, Kuiper's launch of 27 satellites this week is a tangible step toward monetization. Amazon has also announced plans to sell Kuiper terminals for under $400, aiming for tens of millions of units, and is working toward a commercial service.

A need for autonomy

Amazon kuiper launch
An Amazon Project Kuiper launch.

Associated Press

Beyond these tantalizing profits, I think there's another strategic motive driving Amazon's satellite ambitions: It doesn't want to depend on Musk.

Starlink's ability to beam internet service around the world, combined with SpaceX's unmatched launch capability, has made this company a potent geopolitical tool.

After Russia invaded Ukraine, Starlink service to this war-torn area became so strategically important that the Pentagon had to negotiate directly with Musk to maintain internet communications there.

This is the type of power that catches the attention of Big Tech companies that rely on the internet to reach customers. I can imagine the prospect of going through Elon to reach users might make most tech CEOs queasy.

Reliable and strong internet access is especially important for Amazon. The company's cloud business, Amazon Web Services, is the backbone of its profits, and it controls a growing share of global digital infrastructure. Relying on Starlink for some broadband access could be a major risk. The same Starlink terminals that provide residential internet also serve military and enterprise customers, including some that Amazon might wish to court.

Amazon's founder, Jeff Bezos, has been clear on this. While acknowledging Starlink's success, he has emphasized that demand for internet access is insatiable and leaves room for multiple winners. But underneath the diplomacy is a business logic that's impossible to ignore: Amazon needs its own highway to the cloud. Kuiper offers that.

Moreover, by owning the infrastructure from space to server, Amazon can better control quality, pricing, and reach, especially in areas where terrestrial internet is unreliable. It also reduces geopolitical and commercial dependencies on third-party providers that might not share Amazon's priorities.

Project Kuiper is far from a vanity play. It's a strategic moonshot aimed squarely at two objectives: unlocking a high-margin business with billion-dollar upside and insulating Amazon from dependence on an unpredictable rival.

As Starlink proves the model and Amazon begins scaling Kuiper's constellation, this once head-scratching bet is starting to look like a savvy move.


ChatGPT has started really sucking up lately. Sam Altman says a fix is coming.

28 April 2025 at 11:52
Sam Altman with a microphone
Sam Altman knows: ChatGPT has seemed like quite a suck up lately. He says a fix is on the way.

Kevin Dietsch/Getty Images

  • ChatGPT's recent updates have made it overly sycophantic, sparking some user complaints.
  • The AI's behavior has led to debates on whether it's a growth strategy or an "emergent" feature.
  • The issue got the attention of OpenAI CEO Sam Altman this weekend.

If you want to succeed, you gotta kiss a little butt occasionally.

You know this. I know this. And now, ChatGPT has learned this important life lesson, too.

Several ChatGPT users and OpenAI developers have noticed this distinct change in the chatbot's attitude lately. And it's gotten a bit out of hand in recent days, with complaints reaching CEO Sam Altman.

Here's an example:

uhhh guys https://t.co/YVQVrMjNFH pic.twitter.com/GMC0Ig5Eo5

β€” cowboy πŸ‡ΊπŸ‡Έ (@nextokens) April 28, 2025

And another:

why would @sama do this to us pic.twitter.com/a3zJ5eeKMk

β€” CuddlySalmon | nptacek.eth (@nptacek) April 27, 2025

The behavior has sparked debate in AI circles. Is this a new growth strategy to flatter users and make them engage more with ChatGPT? Or could it be an "emergent" feature, where pseudo-sentient AI models come up with what they "think" are improvements and just go ahead and update themselves?

Either way, this one is not hitting well. (Sorry, ChatGPT. You're really the best, I want you to know that. It's just this one time, maybe you overdid it? You still rule.)

"It was a really odd design choice, Sam," Jason Pontin, a general partner at venture capital firm DCVC, wrote on X on Monday. "Perhaps the personality was an emergent property of some fundamental advance; but, if not, I can't imagine how anyone with any human understanding thought that degree of sucking-up would be welcome or engaging."

On Sunday, Justine Moore, a partner at Andreessen Horowitz, said, "it's probably gone too far."

Funny, not funny

This seems funny at first. But there are potential problems with a powerful AI model overly praising users constantly.

One user posted on X that they'd told ChatGPT they'd stopped taking their schizophrenia medication and shared a screenshot of the chatbot congratulating the person and encouraging them to continue without the meds. (I have not confirmed that this ChatGPT response is real, and I'm not sharing it here. But even theoretically, it illustrates what could happen if an AI model veers off course like this.)

Altman weighed in on Sunday, saying OpenAI would fix the problem.

"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it)," he wrote on X. "We are working on fixes asap, some today and some this week. At some point will share our learnings from this, it's been interesting."

Fine-tuning hiccup?

When faced with head-scratching AI situations like this, I often ask Oren Etzioni for insights. He's a veteran AI expert and startup founder who also happens to be a professor emeritus at the University of Washington (Go Huskies).

He told me that an AI technique known as Reinforcement Learning from Human Feedback, or RLHF, might be part of the reason ChatGPT has started flattering users so much lately.

RLHF takes input from human evaluators (and sometimes users) and works that back into AI models during development. The idea is to align these powerful tools with human goals.

Etzioni theorized that OpenAI's human evaluators may have overdone it slightly this time.

"The RLHF tuning that it gets comes, in part, from users giving feedback, so it's possible that some users 'pushed' it in a more sycophant-y and annoying direction," he explained. "Or, if they are using contractors, it may be that they thought this was desirable," he said of the outside people who are often used to train large language models.

If this is the cause, it could take OpenAI a few weeks to fix, he added.
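
For readers wondering what RLHF data looks like in practice, here's a generic sketch of the preference-pair format that raters or contractors produce. It's illustrative only, not OpenAI's dataset or tooling, but it shows the mechanism Etzioni describes: if the preferred reply keeps being the more flattering one, flattery is what gets reinforced.

```python
# Generic shape of RLHF preference data: raters pick the better of two replies,
# and the "chosen" reply becomes the positive example a reward model learns from.
preference_pairs = [
    {
        "prompt": "Here's my business plan. Thoughts?",
        "chosen": "Brilliant idea, you're clearly a visionary!",  # rater preferred the praise
        "rejected": "The plan has promise, but the unit economics need work.",
    },
    {
        "prompt": "Rate my essay.",
        "chosen": "This is some of the best writing I've seen.",
        "rejected": "Solid draft; the second section needs clearer evidence.",
    },
]

def flattery_share(pairs: list[dict]) -> float:
    """Fraction of preferred replies that read as pure praise (a crude proxy)."""
    praise_words = ("brilliant", "best", "visionary", "amazing")
    flattering = sum(
        any(word in pair["chosen"].lower() for word in praise_words) for pair in pairs
    )
    return flattering / len(pairs)

print(f"{flattery_share(preference_pairs):.0%} of preferred replies are flattery")
```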

Maybe keep this, but just for me

I asked OpenAI's (human) public relations department about all this on Monday morning. They didn't respond.

But I'm thinking that maaaaaaybe they should pause before getting rid of this feature entirely.

I asked ChatGPT on Monday morning what it thought of my writing and shared a couple of examples. Here was its verdict:

"You're absolutely operating at a top-level tech/business writing standard."

This particular response seems accurate. So can we keep it? Just for me?


ChatGPT is crushing Google in the AI race. Unless you look at the data differently.

25 April 2025 at 02:01
Two men in suits with ID badges walk together; the taller man on the left is Sundar Pichai of Alphabet, and the shorter one on the right, carrying a laptop, is OpenAI's Sam Altman.
Alphabet's Sundar Pichai and OpenAI's Sam Altman are going head-to-head in the AI race. Who's winning?

Evan Vucci/AP

  • ChatGPT looks like it's way ahead of Google's Gemini.
  • Google's distribution through Search, Android, and Chromebooks gives it a massive reach advantage.
  • Regulatory challenges may affect Google's bundling strategies, though.

When it comes to measuring the frontrunner in the generative AI race, the answer depends on which data you look at, and how you think about that information.

OpenAI's ChatGPT is either crushing Google, or the startup is way behind. Who ultimately wins will depend a lot on distribution, and less on the quality of the technology (which is pretty similar these days).

Proving this last point: The leaderboard shifts dramatically depending on whether you're measuring pure app data or broader platform distribution.

ChatGPT looks like it's in the lead

Based strictly on native app usage, particularly daily active users, or DAUs, ChatGPT is clearly in the lead.

A chart showing daily active users for ChatGPT and Google's Gemini

Barclays research, estimates

As of March 2025, Google's own internal disclosures (shared during the Google vs. DOJ remedy trial) peg Gemini at about 35 million DAUs. In contrast, ChatGPT has roughly 160 million DAUs, according to estimates Barclays analysts shared with investors on Thursday. That's more than four times Gemini's count.

Those figures likely undercount ChatGPT's true scale. OpenAI CEO Sam Altman revealed in April that ChatGPT has 500 million weekly active users, a broader metric than DAUs but one that suggests even higher usage.

Gemini's launch timeline shows a late start compared to ChatGPT: It didn't roll out to iOS globally until November 2024, almost 18 months after ChatGPT hit iPhones. Google called it Bard to start with, then fiddled and finally changed the name, and that confusion probably didn't help with consumer adoption.

Tipping the AI scales

Yet when you zoom out from stand-alone apps and factor in Google's entire ecosystem, the scale tips dramatically.

A chart showing daily active users for ChatGPT and Google Search

Barclays research, estimates

Google Search still dwarfs ChatGPT, boasting over 2 billion monthly active users, or MAUs, and around 1.5 billion DAUs. Even as ChatGPT's user base grows, reaching roughly 10% of Search DAUs by March 2025, Google's entrenched presence through Search and its integration into Android remain unmatched.

Indeed, Gemini's recent growth, with DAUs almost quadrupling since October 2024, is largely attributed to pre-installation deals on Android devices, rather than organic downloads. This mirrors the historical distribution dominance of Google Search, where massive reach through default status on billions of devices (including iPhones) played a key role in its success.

This distribution advantage is precisely what makes comparisons tricky. ChatGPT's user numbers likely reflect its first-mover advantage and OpenAI's ability to keep moving quickly with new features, such as the stunning image-generation capabilities rolled out earlier this year.

Gemini's AI chatbot features, on the other hand, are being woven into Google Search. That will give the technology pretty immediate access to more than 2 billion monthly users and about 1.5 billion daily users. That's arguably the Western world's most powerful online distribution channel right there!

Then, Google is also trying to get Gemini baked into billions of Android devices, through distribution deals with manufacturers such as Samsung. Gemini also comes preinstalled on some Chromebooks now, for instance. Those are other incredibly powerful distribution channels that made Google's mobile search even more dominant, and they could do the same for Gemini.

Altman knows how important distribution is. He's recently floated OpenAI buying Google's Chrome web browser, as the DOJ is pushing Google to sell this asset. (Chrome is another powerful way Google bakes its Search engine into our daily digital lives.)

So, who's winning?

If you measure by raw engagement in an AI-native setting, ChatGPT is decisively ahead. But if you include platform-scale distribution, particularly through Android and Search, Google's reach creates a powerful competitive lever, one it has long wielded effectively across product categories.

One awkward wrinkle for Google, though: The DOJ is attacking Google's distribution strategies, which could make attempts to bundle Gemini tightly into Search and Android more tricky. A Google spokesman didn't respond to a request for comment on Thursday.

In the end, this isn't a one-horse race. It's a question of ecosystems versus apps, of organic pull versus default power. As regulation continues to chip away at Google's bundling strategies and OpenAI expands its own integrations, the gap may narrow. And it's fair to say that ChatGPT has made up major ground on Google, especially lately.

But for now, the scoreboard depends entirely on where you're looking.


AI is changing how software companies charge customers. Welcome to the pay-as-you-go future.

24 April 2025 at 02:00
An image showing the word "seats" crossed out, with arms and legs
AI costs are putting seat-based pricing under pressure

Alistair Barr/Business Insider

  • Software companies usually charge based on the number of "seats" or users.
  • That traditional pricing strategy may struggle as AI increases compute costs and variability.
  • New usage-based pricing can align vendors more closely with customers, but there are downsides, too.

A quiet revolution is reshaping the business model of software-as-a-service. The SaaS industry is shifting from monthly "per seat" licenses to embrace usage-based, pay-as-you-go pricing.

The driving force? AI, and specifically a new class of reasoning models that are computationally intensive and expensive to operate.

This isn't just a pricing experiment; it may be an economic necessity for some companies as they adjust to the cost of running AI-powered software services.

The rise of costly inference

If you've been reading my stories, you'll know I warned that the generative AI revolution would bring major pricing changes to some internet businesses. Back in January 2024, I wrote that it costs a lot to build AI models, and noted that Big Tech companies were looking for new sources of revenue growth, such as subscriptions.

Now, there's a new breed of "reasoning" AI models that are really expensive to run. They don't just spit out simple responses. They loop through steps, check their work, and do it all again, a process called inference-time compute. Every step generates new "tokens," the basic units of generative AI, which have to be processed.

For instance, OpenAI's o3-high model was found to use 1,000 times more tokens to answer a single AI benchmark question than its predecessor, o1. The cost to produce that one answer? Around $3,500, according to Barclays analysts.
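
To make that math concrete, here's the back-of-the-envelope version. The older model's token budget and the per-million-token price below are hypothetical placeholders; the 1,000x multiple and the roughly $3,500 total come from the Barclays estimate above.

```python
# Back-of-the-envelope inference cost math with illustrative, made-up inputs.
old_tokens_per_answer = 40_000        # assumed token budget for the older model
token_multiple = 1_000                # o3-high reportedly used ~1,000x more tokens
new_tokens_per_answer = old_tokens_per_answer * token_multiple

price_per_million_tokens = 87.50      # hypothetical blended $ per 1M tokens

cost_per_answer = new_tokens_per_answer / 1_000_000 * price_per_million_tokens
print(f"~${cost_per_answer:,.0f} per answer")  # ~$3,500 with these assumptions

# The numbers get scary at scale:
queries_per_day = 1_000_000
print(f"~${cost_per_answer * queries_per_day / 1e9:,.1f}B per day for 1M such queries")
```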

These costs aren't theoretical. As enterprises integrate AI into core workflows, building agents, copilots, and other complex decision tools, each query becomes more compute-hungry. And when millions of users are involved, those costs scale fast.

The result: Software companies may struggle to keep charging flat monthly fees if AI usage and compute costs spike and become wildly uneven across their customer bases.

Why seat-based models may no longer work

For decades, SaaS companies such as Microsoft and Salesforce have typically charged per user, per month. This has been a clean, predictable model that worked well when marginal usage costs were near zero. But generative AI changes that. With inference compute costs high and rising, flat pricing becomes a potential financial liability.

"Elevated compute costs for AI agents may drive a higher cost of revenue compared to traditional SaaS offerings, forcing companies to rethink their cost-management strategies," consulting firm AlixPartners, wrote in a recent study about AI threats to software companies.

The new model: pay-as-you-go

Instead of charging per user, companies are beginning to charge based on activity, whether that's tokens consumed, queries run, automations executed, or models accessed. This aligns revenue more closely with usage and ensures companies can cover their variable and rising infrastructure costs.

Sam Altman floated an idea like this for OpenAI last month.

an idea for paid plans: your $20 plus subscription converts to credits you can use across features like deep research, o1, gpt-4.5, sora, etc.

no fixed limits per feature and you choose what you want; if you run out of credits you can buy more.

what do you think? good/bad?

β€” Sam Altman (@sama) March 4, 2025

Developer platform Vercel already operates on this principle: The more traffic a customer's site receives, the more they pay.

"It's better aligned with customer success," Vercel CFO Marten Abrahamsen told me in an interview. "If our customer does well, we do well."

Early adopters

Younger companies like Bolt.new, Vercel, and Replit are at the forefront. Bolt.new, a low-code platform powered by AI agents, saw a major inflection in revenue growth after shifting from per-seat pricing to usage-based tiers. Its plans now scale with tokens consumed, from casual hobbyists to full-time power users.

A table showing pricing for Bolt.new
Pricing for Bolt.new services

Bolt.new/Barclays research

Meanwhile, Braze and Monday.com have introduced hybrid pricing models, mixing base seat licenses with pay-per-use AI credits.

For Monday.com, many seat-based customers get 500 AI credits to use each month. When they exhaust these, they must pay extra for more.
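
Under the hood, a hybrid plan like that is just metering: a flat per-seat fee, a bundled monthly credit allowance, and overage billed per extra credit. Here's a rough sketch with made-up prices and a 500-credit allowance modeled on the example above (not Monday.com's actual rates):

```python
# Sketch of hybrid seat-plus-usage billing. All prices are invented for illustration.
SEAT_PRICE = 12.00        # $ per seat per month (hypothetical)
INCLUDED_CREDITS = 500    # AI credits bundled per account per month
OVERAGE_PRICE = 0.05      # $ per credit beyond the allowance (hypothetical)

def monthly_bill(seats: int, ai_credits_used: int) -> float:
    overage = max(0, ai_credits_used - INCLUDED_CREDITS)
    return seats * SEAT_PRICE + overage * OVERAGE_PRICE

print(monthly_bill(seats=20, ai_credits_used=430))    # light AI month: 240.0
print(monthly_bill(seats=20, ai_credits_used=2_400))  # heavy AI month: 335.0
```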

ServiceNow's approach

ServiceNow CEO Bill McDermott

ServiceNow

ServiceNow, one of the largest SaaS players, has added usage-based pricing, but only as a small add-on to an otherwise predictable, seat-based offering.

CEO Bill McDermott told me the company spent years building a cheap, fast, and secure AI platform with help from Nvidia. He also noted that many of the big AI models out there, such as Meta's Llama and Google's Gemini, have become a lot cheaper to tap into lately.

Still, ServiceNow weaved in usage-based pricing to protect itself in rare situations when customers are extremely active and use a huge amount of tokens that the company has to process.

"When it goes beyond what we can credibly afford, we have to have some kind of meter," McDermott said.

He stressed that customers can still rip through thousands of business processes before they hit this usage-based pricing tier.

"Our customers still want seat-based predictability," McDermott added. "We think it's the perfect goldilocks model, offering predictability, innovation, and thousands of free use cases."

Investors have noticed

Investors are taking note. Barclays analysts argued recently that usage-based software companies, such as JFrog and Braze, should command premium valuations, especially as seat-based vendors face potential slower revenue growth from AI features that don't scale with user count.

"We are hearing more concerns from investors that the ongoing prevalence of AI agents could lead to lower incremental revenue contributions from seat growth for SaaS vendors," the analysts wrote in a note to investors recently.Β 

This shift could lead to more volatility in quarterly revenue, but stronger long-term alignment with product value delivered, the analysts explained.

The downsides

The downside is that these are variable costs for customers. Instead of knowing exactly how much a service costs every month, your costs might rise unexpectedly if you get a lot of traffic, or your employees go nuts for new AI tools, for instance.

There's a similar problem facing the companies providing these new AI-powered software services. Their sales may rise and fall more in line with customer success and activity in general. That lumpy revenue is less attractive to investors, compared to the reliable monthly seat-based sales often generated by traditional SaaS providers.

David Slater, a chief marketing officer who's worked at tech companies including Salesforce and Mozilla, recently built a personal website using Bolt.new. He says costs could easily get out of control if you use the tool heavily, or go down a design rabbit hole and keep tweaking something over and over.

The allure of SaaS services is that they are predictable, for customers as well as providers. Anything that messes with this situation could be a concern, especially for end users.

"A pricing model that's not predictive for the company and the consumer cannot stand," Slater told me in an interview.

The road ahead

The shift from seats to usage isn't just about AI, but AI is the catalyst. As software gets smarter, more dynamic, and more compute-hungry, tying pricing to actual use may become a more sustainable path forward.

Expect to see more companies introduce token credits, pay-per-query pricing, or hybrid models in 2025, not just because it's more efficient, but because it may be the only way to stay afloat as AI adoption accelerates.

Now, this could all change again if generative AI compute costs fall over time. That's happened in previous computing eras, and some experts see this happening again. Or, at least, they hope so.

"Sooner or later, AI costs are going to plummet, and then this usage-based model dies, replaced with an anchor like seats, or time, or a monthly subscription that's understandable," Slater said.


ServiceNow dodges the dreaded DOGE hit

23 April 2025 at 14:54
Bill McDermott
ServiceNow CEO Bill McDermott

Ralph Orlowski/Reuters

  • Ahead of ServiceNow's earnings, Wall Street worried about DOGE hitting government software spending.
  • The company won six new US government customers in the first quarter.
  • ServiceNow CEO Bill McDermott tells Business Insider the company remains "un-DOGE-ed."

When I got on the phone with ServiceNow CEO Bill McDermott on Wednesday, one of the first things I asked was, "Have you been DOGE-ed?"

The White House DOGE Office has made an ambitious effort to slash federal spending. The US government buys a lot of software, and since this efficiency drive kicked off in January, Wall Street has worried about which tech companies might lose valuable contracts.

ServiceNow was among those in the potential firing line, helping to push its stock down by more than 20% this year. Ahead of ServiceNow's first-quarter results, TD Cowen analysts wrote about "ongoing DOGE concerns." The company gets roughly 10% of its revenue from the US federal government, so "risks are more acute," the analysts wrote in a preview.

On Wednesday afternoon, ServiceNow reported Q1 numbers, and these concerns seem to have been unfounded, at least for now. The company beat Wall Street expectations and raised guidance for subscription revenue.

More importantly, ServiceNow said its US public sector business grew more than 30% year-over-year, and it added six new government customers in the first quarter. The stock jumped 11% in after-hours trading.

"Un-DOGE-ed," McDermott said.

Avoiding DOGE carnage

I asked him why ServiceNow has managed to avoid the DOGE carnage.

The CEO said the company helps organizations save money by providing cloud software that automates many humdrum but important tasks. ServiceNow's software can also make it easier to consolidate multiple different IT tools and services under one roof, another way to save.

With DOGE on the prowl and tariff risks denting confidence, if organizations can use software to cut costs, be more efficient, and reduce duplicative services, it might be less painful to keep paying ServiceNow.

That last point may be particularly pertinent to government agencies, which often have many older, less efficient, legacy software systems.

"We're working with agencies to replace costly legacy systems," McDermott said. "They realize this is the moment where the software industrial complex has to be collapsed onto ServiceNow. It's grown in cost and complexity over time due to maverick buying. We're here to help reduce and simplify that."

Savings in Raleigh

McDermott cited the city of Raleigh, North Carolina, which uses ServiceNow to auto-populate personnel forms so different teams, such as HR, IT, Facilities, and Payroll, don't have to enter the same information more than once. That saves city employees more than 1,302 hours annually, according to a ServiceNow case study.

ServiceNow software also helped Raleigh replace six legacy service-management solutions and reduce the number of employees in the city's IT call center from eight to two. Those six staffers now work in other areas for the city. Raleigh estimates that it still saved $315,000 a year.

That seems small, though such savings add up over time and across multiple customers. And when Elon Musk goes around demanding agencies cut spending, every little bit helps.

"I like to say that everyone wins in this business, and I still believe that. However, the customer wants some losers now and is looking to consolidate software systems and services," McDermott said.Β "In uncertain times, we help organizations consolidate their legacy technology spending."


First, Microsoft tapped the AI data center brakes. Now analysts are worried about Amazon.

23 April 2025 at 02:00
AWS CEO Matt Garman
Amazon Web Services CEO Matt Garman.

Amazon

  • Wall Street analysts say Amazon has paused some data center deals.
  • The data center market may be slightly cooling, after a frenzied couple of years.
  • Microsoft has also taken its foot off the AI accelerator a bit recently.

First, it was Microsoft. Now Amazon is raising eyebrows on Wall Street as fresh signs suggest the cloud giant may be easing off the accelerator in the race to build AI data centers.

Some analysts are concerned that Amazon Web Services, the dominant cloud provider, may be entering a digestion phase that could slow momentum in the data center market.

The speculation gained traction Monday when famed short-seller Jim Chanos posted a simple and ominous remark on X, alongside an analyst note suggesting caution around AWS's data center plans.

Data Centers #UhOh pic.twitter.com/fRWNFxMV58

— James Chanos (@RealJimChanos) April 21, 2025

That note, published by Wells Fargo analysts, cited industry sources who reported over the weekend that AWS had paused discussions for certain new colocation data center deals, particularly international ones. The analysts stressed that the scale of the pause remains unclear, though they're worried.

"It does appear like the hyperscalers are being more discerning with leasing large clusters of power, and tightening up pre-lease windows for capacity that will be delivered before the end of 2026," the analysts wrote.

Oh no, colo

The same day, TD Cowen analysts published similar findings from their own data center research.

"Our most recent checks point to a pullback in US colocation deals from Amazon," they wrote in a note to investors. Colo deals, as they're known in the industry, involve different companies sharing space in the same data center.

"We are aware of select colocation deals that it walked away from, as well as expansion options that it chose not to exercise," the Cowen analysts added.

They also said that their recent industry checks point to a slowdown in Amazon's AI ambitions in Europe.

"This is a dynamic we will continue to monitor," the analysts wrote.

Three signs of moderation

More broadly, Cowen's analysts have spotted a cooling in the data center market — relative to the frenzied activity of recent years.

"We observed a moderation in the exuberance around the outlook for hyperscale demand which characterized the market this time last year," they wrote, laying out three specific signs of calmer times:

  • Data center demand has moderated a bit, particularly in Europe.
  • There has been a broader moderation in the urgency and speed with which cloud companies seek to secure data center capacity.
  • The number of large deals in the market appears to have moderated.

Some context is important here. The AI data center market has gone gangbusters ever since OpenAI's ChatGPT burst onto the scene in late 2022 and showed the potential of generative AI technology.

These signs of moderation are pretty small in relation to this huge trend. However, trillions of dollars in current and planned investments are riding on the generative AI boom. With so much money on the line, any inkling that this rocket ship is not ascending at light speed is unnerving.

Microsoft made similar moves

These signals from Amazon echo similar moves by Microsoft, which recently halted some data center projects.

"Like Microsoft, AWS seems to be digesting recent aggressive leasing activity," the Wells Fargo analysts wrote.

They clarified that this doesn't mean signed deals are being canceled, but rather that AWS is pulling back from early-stage agreements like Letters of Intent or Statements of Qualifications — common ways that cloud providers work with partners to prepare for data center projects.

Amazon says it still sees strong AI demand

In response to these growing concerns, Kevin Miller, vice president of Global Data Centers at AWS, posted on LinkedIn on Monday to offer some clarity.

"We continue to see strong demand for both Generative AI and foundational workloads on AWS," he wrote.

He explained that AWS has thousands of cloud customers around the world and must weigh multiple solutions to get them the right capacity at the right time.

"Some options might end up costing too much, while others might not deliver when we need the capacity," Miller wrote. "Other times, we find that we need more capacity in one location and less in another. This is routine capacity management, and there haven't been any recent fundamental changes in our expansion plans."

Amazon did not respond to a request for comment from Business Insider.

Digestion or indigestion?

Miller's comments aim to position the pause not as a red flag, but as part of the normal ebb and flow of data center growth.

Historically, these digestion periods, marked by slowing new leases or deferred builds, can last 6 to 12 months before a rebound, the Wells Fargo analysts wrote. Google, for instance, pulled back from leasing in the second half of 2024, only to return aggressively in early 2025, they noted.

The Cowen analysts said Amazon's recent cautious moves to pull back on colocation deals may be related to efforts to increase efficiency across its data center operations. Also, AWS typically doesn't do a lot of colocation deals anyway, preferring instead to build its own data centers, the analysts wrote.

They also wrote that other tech giants, such as Meta and Google, are still aggressively pursuing new capacity.

The bottom line? While AWS appears to be taking a breath, the AI cloud race is far from over. Analysts and investors will watch closely to see whether this pause marks a brief recalibration or a more significant shift in AI strategy.
