
We put Tesla's FSD and Waymo's robotaxi to the test. One shocking mistake made the winner clear.

Alistair Barr standing next to a Tesla and Lloyd Lee standing next to a Waymo robotaxi.

Lloyd Lee; Alistair Barr/BI

  • Waymo's robotaxis have been providing fully autonomous rides to the SF public since 2024.
  • Tesla is gearing up to launch a robotaxi service in Austin, using its Full Self-Driving software.
  • Tesla's FSD is good, but it made one mistake we just can't overlook.

The robotaxi race is speeding up.

Tesla is preparing to debut its autonomous ride-hailing service in Austin next month, and Alphabet's Waymo continues to expand throughout major US cities.

Under the hood of the Tesla and Waymo robotaxis are two key pieces of technology that the companies respectively call Full Self-Driving (FSD) and the Waymo Driver.

We (Business Insider's Lloyd Lee and Alistair Barr) tested both of these AI-powered drivers in San Francisco, and the results truly surprised us.

Given our positive past experiences with Waymo and Tesla's FSD, we expected the results of our not-so-scientific test to come down to minute details: maybe how many times the AI driver hesitated, or whether it made a curious lane change for no apparent reason.

That didn't happen. Instead, the Tesla made an egregious error that handed Waymo the clear win.

Here's how it went down.

The test

Our vehicles for the test included Waymo's Jaguar I-PACE SUVs and Barr's personal 2024 Tesla Model 3.

The Waymo robotaxis are equipped with the company's fifth-generation Waymo Driver and guided by five lidar sensors, six radars, and 29 cameras.

Cameras on the Waymo Taxi
Waymo's robotaxis have multiple sensors, radars, and cameras that protrude off the vehicles.

Lloyd Lee/BI

Barr's Tesla was equipped with Hardware 4 and FSD Supervised software v13.2.8. Tesla released a minor update to the software days after this test was conducted. The vehicle has eight external cameras.

It should be noted that this is not the same software Tesla plans to use in the robotaxis set to roll out this summer. The company said it plans to release FSD Unsupervised, a self-driving system that will not require a human behind the wheel. Nevertheless, we wanted to see how far Tesla's FSD had come since its beta rollout in 2020.

External cameras on a Tesla.
Tesla's FSD relies only on eight external cameras attached around the vehicle's body.

Lloyd Lee/BI

We couldn't compare Tesla and Waymo as a full-package robotaxi service. Tesla has yet to launch that product, so we focused only on the driving experience.

We started at San Francisco's iconic Twin Peaks viewpoint and ended at Chase Center. Depending on the route, that's about a 4- to 7-mile ride.

We chose these destinations for two reasons. One, it would take the cars through winding roads and both suburban and city landscapes. And two, there were a few ways to get to Chase Center from Twin Peaks, including the 280 highway.

Waymo's robotaxis can't take riders on the highway yet. Tesla can.

According to Google Maps, the highway is the more time-efficient option. For the Tesla, we went with the route the vehicle highlighted first; it suggested the highway for the return trip to Twin Peaks.

We took a Waymo around 8:30 a.m. on a Thursday and the Tesla afterward at around 10 a.m. The traffic conditions for both rides were light to moderate and not noticeably different.

Predictions

Our prediction was that the AI drivers' skills would be nearly neck-and-neck.

But in the spirit of competition, Lee predicted Waymo would deliver a smoother experience and a smarter driver, given the high-tech sensor stack the company relies on.

Barr went with Tesla. He said he'd driven hundreds of miles on FSD with two or three relatively minor interventions so far, and given this previous experience, Barr said he'd have no problem riding in the back seat of a Tesla robotaxi.

Waymo

Throughout our ride in the Waymo, we were impressed by the AI driver's ability to be safe but assertive.

The Waymo was not shy about making yellow lights, for example, but it never made a maneuver you wouldn't want from a robot driver you're trusting with your life.

The interior of a Waymo taxi.
Waymo passengers can make a few adjustments to their ride, including temperature and music settings.

Lloyd Lee/BI

One small but notable moment in our ride was when the Waymo stopped behind a car at a stop sign. To the right of us was an open lane.

For whatever reason, the Waymo saw that and decided to switch lanes, as if it was tired of waiting behind the other car. We found that a bit amusing because it seemed like such a human moment.

As human drivers, we might make choices like that because we get antsy waiting behind another car, even though we're not shaving more than a few seconds, if any, off of our commute.

Barr noted that the Waymo Driver can have moments of sass or attitude. It had an urgency, giving us the feeling that it somehow really cared that we got to the Chase Center in good time.

"It's got New York cab driver energy," Barr said, stealing a line from BI editor in chief Jamie Heller, who also took a Waymo during a trip to San Francisco earlier this year.

Sandy Karp, a spokesperson for Waymo, said the company doesn't have specific details on what happened in that moment but said that the Waymo Driver "is constantly planning its next move, including the optimal route to get its rider where they're going safely and efficiently."

"This planning can involve decisions like changing lanes when deemed favorable," she said.

Ultimately, though, the best litmus test for any robotaxi is when you stop noticing that you're in a robotaxi.

Outside those small but notable moments, we recorded footage for this story and chatted in comfort without feeling like we were on the edge of our seats.

Tesla

Tesla's FSD delivered a mostly smooth driving experience, and we think it deserves some props for doing so with a smaller and cheaper tech stack, i.e., only eight cameras.

The interior of a Tesla.
Tesla's latest FSD Supervised software still requires a human driver behind the wheel.

Alistair Barr/BI

FSD knew how to signal a lane change as it approached a large stalled vehicle taking up a lot of road room, and it didn't have any sudden moments of braking. Just a few years ago, Tesla owners were reporting issues of "phantom braking." We experienced none of that on our drive.

Tesla also handled highway driving flawlessly. Sure, the weather was clear and traffic was fairly light, but, as noted earlier, Waymo does not yet offer public rides on highways. The company is still testing.

However, Tesla FSD did make a few mistakes, including one critical error.

At the end of our drive at Chase Center, we assessed how Waymo and Tesla's systems performed. We both gave Waymo a slight edge, but were also impressed with the FSD system.

On our way back to Twin Peaks, the Tesla highlighted a route that would take us on the highway, a route that Waymo cannot take. We kept Tesla FSD on for this trip while we continued recording.

San Francisco is known to have a lot of brightly marked, green bike lanes for cyclists. There was one moment during the trip back when the Tesla made a right turn onto a bike lane and continued to drive on it for a few seconds before it merged into the proper lane.

Then, as we approached the last half-mile of our ride, the Tesla, for an unknown reason, ran a red light.

Traffic intersection
Tesla FSD ran a red light at the intersection of Twin Peaks Blvd and Portola Drive.

Lloyd Lee/Business Insider

The incident occurred at a fairly complex intersection that resembles a slip-lane intersection, but with a traffic light. The Waymo did not approach this intersection since it took a different route to get back to Twin Peaks.

The Tesla's console screen showed how the car detected the red light and came to a dutiful stop. Then, despite the traffic light not changing, the Tesla drove ahead.

We didn't come close to hitting any cars or pedestrians; Tesla's FSD is good at spotting such risks, and the main source of traffic crossing our path had been stopped by another traffic light. However, the vehicle slowly drove through the red light, which left us both somewhat shocked at the time.

Some Tesla drivers have reported similar issues in online forums and in videos that show the vehicle recognizing a red light but driving ahead anyway. One YouTuber showed a Tesla coming to a complete stop at a red light and then continuing before the light changed.

It's unclear how common this issue is. Tesla hasn't publicly addressed the problem.

A spokesperson for Tesla did not respond to a request for comment.

At this point, we thought the winner was clear.

Verdict

Since Tesla's FSD made a critical error that would have landed an automatic fail during a driver's license test, we thought it was fair to give Waymo the win for this test.

Lloyd Lee in the passenger seat of the Waymo taxi.
The Waymo was the clear winner in our test since it didn't run a red light like the Tesla.

Alistair Barr/BI

The Tesla handled San Francisco's hilly and winding roads almost as flawlessly as Waymo.

We also think FSD's ability to handle routes that Waymo can't yet serve, in particular the highway, could give Tesla a major upper hand.

In addition, when Lee tried on a different day to send a Waymo through the same intersection where the Tesla blew the red light, the Waymo app appeared to do everything it could to avoid that intersection, even though it offered the quickest path to the destination, according to Google Maps.

A Waymo spokesperson did not provide a comment on what could've happened here.

Still, an error like running a red light cannot be overlooked when human lives are at stake. Consider that when Tesla rolls out its robotaxi service, a human driver will not be behind the wheel to quickly intervene if it makes an error.

For Tesla and Waymo, we expected to be on the lookout for small, almost negligible, mistakes or glitchy moments from the AI driver. We did not anticipate an error as glaring as running a red light.

Once Tesla launches its robotaxi service in more areas, we'll have to see how the pick-up and drop-off times compare.

Tesla CEO Elon Musk said that the company's generalized solution to self-driving is far superior to its competitors. The company has millions of cars already on the roads collecting massive amounts of real-world data. According to Musk, this will make FSD smarter and able to operate with only cameras.

With Tesla's robotaxi service set to launch in June with human passengers, we certainly hope so.

Read the original article on Business Insider

OpenAI whistleblower found dead by apparent suicide

Logo for OpenAI
Suchir Balaji, 26, was an OpenAI researcher of four years. He left the company in August and accused his employer of violating copyright law.

Joan Cros/NurPhoto via Getty Images

  • Suchir Balaji, a former OpenAI researcher, was found dead on Nov. 26 in his apartment, reports say.
  • Balaji, 26, was an OpenAI researcher of four years who left the company in August.
  • He had accused his employer of violating copyright law with its highly popular ChatGPT model.

Suchir Balaji, a former OpenAI researcher of four years, was found dead in his San Francisco apartment on November 26, according to multiple reports. He was 26.

Balaji had recently criticized OpenAI over how the startup collects data from the internet to train its AI models. One of his jobs at OpenAI was to gather this information for the development of the company's powerful GPT-4 model, and he'd become concerned that the practice could undermine how content is created and shared on the internet.

A spokesperson for the San Francisco Police Department told Business Insider that "no evidence of foul play was found during the initial investigation."

David Serrano Sewell, executive director of the city's office of chief medical examiner, told the San Jose Mercury News "the manner of death has been determined to be suicide." A spokesperson for the city's medical examiner's office did not immediately respond to a request for comment from BI.

"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson said in a statement to BI.

In October, Balaji published an essay on his personal website that raised questions around what is considered "fair use" and whether it can apply to the training data OpenAI used for its highly popular ChatGPT model.

"While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data," Balaji wrote. "If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use.' Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use."

Balaji argued in his personal essay that training AI models with masses of data copied for free from the internet is potentially damaging online knowledge communities.

He cited a research paper that described the example of Stack Overflow, a coding Q&A website that saw big declines in traffic and user engagement after ChatGPT and AI models such as GPT-4 came out.

Large language models and chatbots answer user questions directly, so there's less need for people to go to the original sources for answers now.

In the case of Stack Overflow, chatbots and LLMs are answering coding questions, so fewer people visit Stack Overflow to ask that community for help. This, in turn, means the coding website generates less new human content.

Elon Musk has warned about this, calling the phenomenon "Death by LLM."

OpenAI faces multiple lawsuits that accuse the company of copyright infringement.

The New York Times sued OpenAI last year, accusing the startup and Microsoft of "unlawful use of The Times's work to create artificial intelligence products that compete with it."

In an interview with the Times published in October, Balaji said chatbots like ChatGPT are stripping away the commercial value of people's work and services.

"This is not a sustainable model for the internet ecosystem as a whole," he told the publication.

In a statement to the Times about Balaji's accusations, OpenAI said: "We build our A.I. models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness."

Balaji was later named in the Times' lawsuit against OpenAI as a "custodian" or an individual who holds relevant documents for the case, according to a letter filed on November 18 that was viewed by BI.

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line: just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Read the original article on Business Insider
