
Elon Musk vs. OpenAI: What to expect from the showdown in 2025

Elon Musk's battle with OpenAI could get heated in 2025.

Anadolu

  • Elon Musk's lawsuit against OpenAI will likely play out in 2025.
  • Musk says OpenAI has lost sight of its mission to develop AI safely, prioritizing profits instead.
  • Here's what you need to know about a battle that could impact the future of artificial intelligence.

Two of the most powerful forces in the AI industry are set to collide this year: xAI's Elon Musk and OpenAI's Sam Altman.

Musk was one of 11 cofounders, including Altman and President Greg Brockman, who established OpenAI as a nonprofit in 2015 with the mission to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."

Musk left in 2018 — a year before OpenAI added a for-profit arm — citing a conflict of interest with his work at Tesla, though his lawyers say that he contributed to the company until mid-2020.

Since then, he's become a vocal skeptic of OpenAI's commitment to prioritizing transparency and safety over profit.

The feud between the founders escalated in August when Musk filed a lawsuit against Altman, OpenAI, and Microsoft, the company's biggest investor. Musk accused them of deception, saying OpenAI prioritized profits despite its stated nonprofit mission.

That lawsuit will likely play out this year — a major battle that could impact the future of artificial intelligence. Here's what to expect.

Musk's legal challenges against OpenAI

Musk first filed a lawsuit against OpenAI in a California state court in February 2024, accusing OpenAI of violating its nonprofit mission by partnering with Microsoft. Musk withdrew that suit in June.

He filed a new lawsuit in August 2024, this time in a federal court, accusing OpenAI of a bait-and-switch deception that violates RICO laws — anti-racketeering laws first designed to target organized crime families.

Musk's lawyers say in the lawsuit that Musk "lent his name to the venture, invested significant time and millions of dollars in seed capital, and recruited top AI scientists for the company," all with the understanding that OpenAI would remain a nonprofit and prioritize developing the technology safely.

Musk's lawyers say OpenAI "betrayed" its mission when it added a for-profit arm in 2019 and deepened its partnership with Microsoft in 2023.

"Musk and the nonprofit's namesake objective were betrayed by Altman and his accomplices," the lawsuit reads. "The perfidy and deceit are of Shakespearean proportions."

In mid-November, Musk's lawyers expanded their complaint to include accusations that OpenAI and Microsoft violated antitrust laws by asking OpenAI's investors not to back competitors in the generative AI space, like Musk's own xAI, which he launched in 2023.

In his latest move, also in November, Musk asked a federal judge to stop OpenAI from converting into a fully for-profit corporate entity.

OpenAI has denied the claims. A representative for the company directed Business Insider to a post it published on December 13 responding to Musk's allegations.

"Now that OpenAI is the leading AI research lab and Elon runs a competing AI company, he's asking the court to stop us from effectively pursuing our mission," OpenAI wrote. "You can't sue your way to AGI. We have great respect for Elon's accomplishments and gratitude for his early contributions to OpenAI, but he should be competing in the marketplace rather than the courtroom."

Resolving the lawsuit could take months or even years. US District Judge Yvonne Gonzalez Rogers, who is overseeing the case in the San Francisco federal court, hasn't yet set a trial date.

Rogers will begin hearing arguments on January 14 on whether she should issue the preliminary injunction to prevent OpenAI from converting into a for-profit until the rest of the case is resolved.

In weighing whether to issue the injunction, Rogers is supposed to consider the "likelihood of success" that Musk will win the case. Her decision would strongly indicate how the rest of the case might play out.

Why OpenAI's corporate structure matters

In a blog entry posted to its website on December 27, OpenAI explained why it needed to evolve its corporate structure.

The company said it wants to transition its for-profit arm into a Delaware Public Benefit Corporation — which, unlike a traditional company, prioritizes social good alongside profit — to prepare for a more capital-intensive future.

OpenAI said the structural change would enable it to "raise the necessary capital" to pursue its mission of developing artificial general intelligence and to give it more leeway to consider the interests of its backers.

The company said it would still run a nonprofit on the side focused more narrowly on charitable initiatives in healthcare, education, and science.

Rose Chan Loui, a nonprofit legal expert at UCLA, said OpenAI's current nonprofit status grants it significant control over technological development.

"What we lose in this conversion is a nonprofit with the unique ability to control AI development activities — to be a watchdog from the inside, making sure that AI is being developed safely and for the benefit of humanity. From that perspective, it seems to me that the nonprofit's current control position is priceless," she wrote to Business Insider in an email.

If the conversion to a for-profit public benefit corporation goes through, OpenAI would need to ensure that the nonprofit retains assets worth as much as what it's giving up, including a significant premium for its control. That could be in the form of cash or stock that it can sell for cash.

Still, "what seems to be envisioned is a grant-making foundation that can do good but will have a very reduced, if any, impact on the development of AI," Chan Loui said.

Former employees have also raised concerns that the nonprofit would have a reduced role in public safety.

Miles Brundage, OpenAI's former head of AGI Readiness, who left in October, responded to OpenAI's December post, saying on X that "a well-capitalized nonprofit on the side is no substitute for PBC product decisions (e.g. on pricing + safety mitigations) being aligned to the original nonprofit's mission."

He added that while much of OpenAI's rationale for the conversion makes sense, there are still "red flags," including a lack of detail about its new governance structure and guardrails around the technology.

Other individuals and organizations have filed amicus briefs to the federal court where Musk filed his suit. These briefs are meant to inform the court and help it make a decision.

Kathleen Jennings, the attorney general for Delaware, where OpenAI is incorporated, filed one last week. She detailed her role in protecting the public interest if OpenAI becomes a for-profit public benefit corporation.

Chan Loui said Jennings's brief is a hopeful sign that, no matter what happens, public interest will ultimately win.

"It is encouraging that the Delaware AG has stated her commitment to protecting the public interest, including seeking an injunction if she determines that the conversion is inconsistent with OpenAI's mission and its obligations to the public, that OpenAI's board members are not fulfilling their fiduciary duties, or if the value of the conversion or the process for arriving at it is not 'entirely fair.'"

Lawyers for Musk did not immediately respond to a request for comment from Business Insider.

Read the original article on Business Insider

The top 15 gifts that Gen Z touted in their Christmas hauls, according to someone who watched hundreds of haul videos

Jellycats were all the rage for tweens and teens.

JULIEN DE ROSA/AFP via Getty Images

  • Tweens, teens, and college-aged kids showed off their Christmas hauls in TikTok videos.
  • Casey Lewis, who writes about young consumers, watched about 1,000 haul videos, she told BI.
  • Here are the top items that Gen Z kids bragged about getting for Christmas.

It was a very merry Christmas for some Gen Zers who took to social media to show off everything they unwrapped.

Casey Lewis, who writes the youth insights newsletter After School, analyzed Christmas haul TikTok videos from tweens, teens, and college-age consumers and compiled a list that she shared on her own TikTok.

"This is the third year I've done this sort of thing with the Christmas hauls, and I tried to refine my system just so that I'm able to actually crunch the data a little bit more scientifically," Lewis told Business Insider.

She said she watched hundreds of videos at double the speed to tally the standout gift items.

"I think conservatively, at least a thousand [videos]," Lewis said. "I was trying to discreetly binge Christmas haul TikToks while also spending time with my family."

From luxury clothing to throwback tech, these were the top gifts that the younger generation showed off in their Christmas hauls.

Digital cameras

"I think the thing that surprised me the most was how popular digital cameras were," Lewis said, noting that Gen Z has an affinity for Y2K nostalgia. "Everyone got digital cameras. It was also really interesting to see some of them got really expensive Sony ones, but then now Amazon makes those digital cameras that come in cute colors."

A basic Sony digital camera might run around $750, but Lewis said she saw people showing off cheaper options from Amazon and Urban Outfitters for less than $100.

"They've sort of caught onto this trend, but then you kind of wonder how long is that going to last?" Lewis said.

UGG boots
Ugg continued its reign of popularity among tweens and teens.

Jeremy Moeller/Getty Images

UGG remained a hot item this year with the Ultra Mini boots and Classic Mini boots, which cost $150 to $160, reigning supreme.

"This year, it was the Minis, and last year, it was the Minis but also the Platforms," Lewis said. "Every year, they're just able to continue to be such a thing."

While she was going through the videos during the holidays at her childhood home, Lewis, who is 37, said she was surrounded by relics from her own childhood, like her own pair of UGG boots.

"Uggs and digital cameras — has anything changed? Am I still just a 16-year-old?" she said.

Rhode skin and beauty products

Lewis also said it was "staggering" how popular Rhode, Hailey Bieber's beauty brand, had become.

"Everyone got the lip peptide treatment," Lewis said. "It's such a popular skincare brand."

Rhode's peptide lip tint retails for $18.

"We know that celebrity brands are so fickle," Lewis said, "but it almost feels like this may have successfully reached the point where it's bigger than her and will thrive independently."

Sol de Janeiro products
Sol de Janeiro products were very popular, especially the fragrances.

Sephora

Another popular beauty brand was Sol de Janeiro, which makes body and hair care as well as fragrances.

Gen Z kids showed off their "Cheirosa '62" perfume mist, which Lewis said was a big hit this year.

A full-size, 240 ml bottle retails for $38.

Jellycats
The tweens and teens went crazy over Jellycats.

JULIEN DE ROSA/AFP via Getty Images

Tweens and teens went crazy over Jellycats, small plush toys that retail for $30 to $50.

"Jellycats were mentioned on every wish list, and they were very popular in hauls," Lewis said.

Lewis saw many Jellycats in haul videos but not as many as she expected, prompting her to question whether the kidult purchasing trend is declining.

"Are parents tired of buying their kids, their almost grown kids, stuffed animals? I don't know," Lewis said. "It feels very similar to Beanie Babies where it was a craze, but it wasn't able to sustain because no craze is."

White Fox apparel

Luxury loungewear remained popular this holiday season.

The $50 sweatpants from White Fox, which is headquartered in Australia, were "very popular" in haul videos, Lewis said.

"Athleisure had such a moment coming out of COVID, but young people are still very much prioritizing comfort clothes," Lewis said, noting that brands like Lululemon were also popular. "Teen and college-age girls, so many of them just wear sweat sets."

Roller Rabbit pajamas
Roller Rabbit's pajamas retail for over $100.

Courtesy of Bloomingdale's

Roller Rabbit pajamas were a popular gift pick as well, according to Lewis's analysis.

Available in dozens of different brightly colored patterns as well as in short- and long-sleeve options, the pajamas retail from $138 to $158.

Lewis noted the pajamas convey a sense of status.

Shark hair tools

Whereas last year saw a craze for Dyson hair tools, this year was all about Shark tools.

"I don't think I could have been trusted when I was a teen with an expensive hair tool," Lewis said. "I just don't think I could have taken care of it and not accidentally broken it."

While the classic set of Dyson hair tools retails for $600, the Shark set is comparatively more affordable at $300.

Vanity desk and mirror
Vanity mirrors or desks were popular gifts, too.

Aleksandr Zubkov/Getty Images

While not a name-brand item, many tweens, teens, and college-age girls said in their Christmas haul TikToks that they got a vanity desk or a vanity mirror to put on their desk.

Vanity mirrors often come with lighting optimized for applying makeup. Depending on the brand, a desk with a vanity mirror might cost about $1,000.

Dae hair styling cream

A styling product from the brand Dae was a popular stocking stuffer, Lewis said.

The styling cream comes with a small wand that's helpful for doing a slick back hairstyle.

A 0.6 oz tube retails for $18.

Adidas Campus shoes
Adidas sneakers remained popular this year.

Jeremy Moeller/Getty Images

Adidas also continued its reign of popularity.

The Campus 00s, which retail for $110, were the go-to pick, Lewis said.

In previous years, Adidas Gazelles and Sambas were the choice picks.

Alani Nu energy drink

Alani Nu energy drinks were a popular, small-dollar item. Lewis referred to the brand as the "cool girl energy drink" in her TikTok analysis.

A 12-pack retails for $30.

"What's fascinating about that is it is a very accessible energy drink, but it's also very aesthetic," Lewis told BI. "The energy drink that appeared in so many Christmas hauls this year was nowhere to be found in Christmas hauls last year. So that's a little bit about how quickly some of this stuff changes."

Touchland hand sanitizer

Touchland hand sanitizers were another popular stocking stuffer, Lewis said in her analysis.

"$10 for a tiny hand sanitizer is kind of crazy," she told BI.

But even for a 30 ml hand sanitizer, it still carries some clout, she said.

"These more affordable, or at least accessible, items that have a little bit of status associated, a little bit of clout," Lewis said. "You don't need to have the Louis Vuitton, or you don't need to even have the Sony camera."

LoveShackFancy Perfume
LoveShackFancy's $125 perfumes were all the rage.

Emily Carmichael/Insider

LoveShackFancy's perfume in the scent "Forever In Love" was a hot gift, Lewis said.

A 2.5 oz bottle retails for $125.

Other popular perfume runners-up were Billie Eilish's "Eilish Eau de Parfum," which retails for $72 for a 3.4 oz bottle, and Glossier's "You," which costs $112 for a 100 ml bottle.

ONE/SIZE setting spray

Wrapping up the list was waterproof setting spray from ONE/SIZE by Patrick Starrr.

A 3.4 oz can of the mattifying spray retails for $32, adding to the subset of more affordable items that Lewis noted.

"There were not a lot of, I don't know, designer sunglasses. I did see a couple of designer purses," Lewis said. "It's not like there's one emerging or one dominant luxury item that everyone is feeling like they need to have."

Read the original article on Business Insider

The billionaire Panera founder imagines his death when planning his year ahead

Ron Shaich, founder of Panera Bread, conducts a 'premortem' ritual every year.

Getty Images

  • Ron Shaich uses 'premortems' to motivate a meaningful life and guide his work.
  • Shaich, Panera's founder, sold the chain for $7.5 billion in 2017.
  • His investment fund, Act III, backs brands like Cava and Tatte Bakery.

Panera's former CEO, Ron Shaich, isn't afraid of death — he's inspired by it.

Over the years, he's realized that the time to review whether your life has been meaningful is "not in the ninth inning with two outs," he told The Wall Street Journal, referring to the final phase of a baseball game. "It was in the seventh inning, the fifth inning, and third inning."

During the final week of every year, he conducts what he calls a "premortem": a ritual that helps him reframe death as motivation to live a more meaningful life. "I ask myself: What am I going to do now to ensure that when I reach that ultimate destination, I've done what I need to do?" he wrote in his 2023 book, "Know What Matters."

He starts by envisioning all the key areas in his life.

"I'd pull out a yellow legal pad and I'd start to divide that yellow legal pad into the areas of my life that I cared about," Shaich once told Business Insider. "And to me, that's my relationship with my body and my health, my core relationships — my wife, my family, my kids — my relationship with my work, what I wanted out of my work, and what gave me joy, and then my relationship with my own spirituality. And then based on that, I literally would say, 'What is it I want to have accomplished in each of these spheres of my life?'"

Shaich, who reached billionaire status this past July, has built a career off some of the most successful food chains in the country. He launched Panera in 1999 and sold it in 2017 to the European investment fund JAB for $7.5 billion.

Through Act III, his investment fund worth more than $1 billion, he's invested in chains like the Mediterranean fast-casual brand Cava, Tatte Bakery, and the organic cafe Life Alive.

He told the Journal that his philosophy of life and death also guides his work. He asks his companies to conduct premortems, envisioning goals for the coming three to five years and planning the path to achieve them.

"It's been the key to all of our successes," he said.

Read the original article on Business Insider

Larry Ellison is $67 billion richer this year. His career spans software, Hollywood, and yacht racing.

Oracle cofounder Larry Ellison is a billionaire with a reputation that precedes him.

Kim Kulish/Getty Images

  • Larry Ellison, the 80-year-old cofounder of Oracle, is one of the most interesting men in tech.
  • Whether yacht racing, buying Hawaiian islands, or trash-talking competitors, he keeps it lively.
  • Now, he's one of the world's richest people with a net worth of about $190 billion.

Larry Ellison is the cofounder and chief technology officer of the software company Oracle. He's also the world's fourth-richest man, with a net worth of about $190 billion, according to the Bloomberg Billionaires Index.

The billionaire's fortunes have surged by over $67 billion this year, thanks to spiking demand for generative AI. The windfall puts him ahead of tech execs like Google cofounder Sergey Brin and former Microsoft chief executive Steve Ballmer. 

The 80-year-old started Oracle in 1977, and decades later he's still one of the top dogs in Silicon Valley despite living in Hawaii full time — and owning an entire island. Ellison has also been a major investor in Tesla and Salesforce, and he even reportedly had a seat on Apple's board of directors for a while.

Outside the office, the billionaire boasts an impressive watch collection and indulges in hobbies like yacht racing. His children have made their own names in the film industry, and his son David Ellison is set to become the CEO of Paramount after its merger with his Skydance Media production company. Larry will control Paramount through some of his entities, per a September filing.

Here's a look at the life and career of Ellison so far.

Lawrence Joseph Ellison was born in the Bronx on August 17, 1944, the son of a single mother named Florence Spellman.
Lawrence Joseph Ellison was born in the Bronx on August 17, 1944.

ANDREW HOLBROOKE/Corbis via Getty Images

When he was 9 months old, Larry came down with pneumonia, Vanity Fair reported. His mom sent him to Chicago to live with his aunt and uncle, Lillian and Louis Ellison.

Vanity Fair reported that Louis, his adoptive father, was a Russian immigrant who took the name "Ellison" in tribute to the place in which he entered the US: Ellis Island.

Ellison is a college dropout.
Ellison attended the University of Illinois at Urbana-Champaign.

Jeffrey Greenberg/Universal Images Group via Getty Images

Ellison went to high school in Chicago's South Side before attending the University of Illinois at Urbana-Champaign. When his adoptive mother died during his second year at college, Ellison dropped out. He tried college again later at the University of Chicago but dropped out again after only one semester, Vanity Fair reported.

In 1966, a 22-year-old Ellison moved to Berkeley, California — near what would become Silicon Valley and already the place where the tech industry was taking off.
Ellison made the trip from Chicago to California in a turquoise Thunderbird.

H. Armstrong Roberts/ClassicStock/Getty Images

He made the trip from Chicago to California in a flashy turquoise Thunderbird that he thought would make an impression in his new life, Vanity Fair reported.

Ellison bounced around from job to job, including stints at companies like Wells Fargo and the mainframe manufacturer Amdahl. Along the way, he learned computer and programming skills.

In 1977, Ellison and partners Bob Miner and Ed Oates founded a new company, Software Development Laboratories.
Larry Ellison in 1990.

James Leynse/Corbis via Getty Images

The company started with $2,000 of funding.

Ellison and company were inspired by IBM computer scientist Edgar F. Codd's theories for a so-called relational database — a way for computer systems to store and access information, according to Britannica. Nowadays, relational databases are taken for granted, but in the '70s, they were a revolutionary idea.

The first version of the Oracle database was version 2 — there was no version 1.
Ellison was at the forefront of the tech industry before the dot-com crash.

Eric Risberg/AP

In 1979, the company renamed itself Relational Software Inc., and in 1982, it formally became Oracle Systems Corp., after its flagship product.

In 1986, Oracle had its initial public offering, reporting revenue of $55 million.
Oracle's offering price was $15 a share.

AP Images

As one of the key drivers of the growing computer industry, Oracle grew fast. The company is responsible for providing the databases in which businesses track information that is crucial to their operations.

Ellison became a billionaire at age 49. Now, he has a net worth of roughly $152 billion, according to Forbes, after racking up $50 billion in gains thanks to Oracle and Tesla stock. That makes him the seventh-richest person in the world.

Still, in 1990, Oracle had to lay off 10% of its workforce, about 400 people, because of what Ellison later described as "an incredible business mistake."
A plane branded with the Oracle logo.

Scott Olson / Getty Images

Oracle reported a loss of $36 million in September 1990 after admitting that it had miscalculated its revenue earlier that year, The New York Times reported.

It was a rough start to the decade. After adjusting for that error, Oracle was said to be close to bankruptcy. At the same time, rivals like Sybase were eating away at Oracle's market share.

It took a few years, but by 1992, Ellison and Oracle managed to right the course with new employees and the popular Oracle7 database.

Ellison is known for his willingness to trash-talk competitors.
Ellison has often been the subject of Silicon Valley gossip.

Business Insider

For much of the '90s, he and Oracle were locked in a public-relations battle with the competitor Informix, which went so far as to place a "Dinosaur Crossing" billboard outside Oracle's Silicon Valley offices at one point, Fortune reported in 1997.

His financial success has led to some expensive hobbies.
Ellison spends his billions on real estate, water sports, and more.

Ian Mainsbridge/AP Images

With Ellison as Oracle's major shareholder, his millions kept rolling in. He started to indulge in some expensive hobbies — including yacht racing. That's Ellison at the helm during a 1995 race.

He also partly financed the BMW Oracle Racing sailing team, which won the America's Cup in 2010, according to Bloomberg.

Ellison was an early investor in Salesforce.
Marc Benioff was an early mentee of Ellison.

Stephen Lam/Reuters; Kimberley White/Getty Images

In 1999, Ellison's protégé, Marc Benioff, left Oracle to work on a new startup called Salesforce.com. Ellison was an early investor, putting $2 million into his friend's new venture.

When Benioff found out that Ellison had Oracle working on a direct competitor to Salesforce's product, he tried to force his mentor to quit Salesforce's board. Instead, Ellison forced Benioff to fire him — meaning Ellison kept his shares in Salesforce.

Given that Salesforce is now a $267 billion company, Ellison personally profits even when his competitors do well. It has led to a love-hate relationship between the two executives that continues to this day, with the two taking shots at each other in the press.

The dot-com boom of the late '90s benefited Oracle.
Other companies weren't so lucky.

Laurent Gillieron/AP Images

All of those new dot-com companies needed databases, and Oracle was there to sell them. Although investors lost out in the dot-com crash, Oracle came out of it stronger due to its acquisitions and the demand for software solutions.

With the coffers overflowing, Ellison was able to lead Oracle through a spending spree once the dot-com boom was over and prices were low.
Ellison used the company's success to bet on other businesses.

David Paul Morris/Getty Images

In 2005, for example, Oracle snapped up the HR software provider PeopleSoft for $10.3 billion.

And in 2010, Oracle completed its acquisition of Sun Microsystems, a server company founded in 1982. That acquisition gave Oracle lots of key technology, including control over the popular MySQL database.

Ellison has also spent lavishly over the years, so much so that his accountant, Philip Simon, once asked him to "budget and plan," according to Bloomberg.
Ellison at the BNP Paribas Open at Indian Wells Tennis Garden in March 2024.

Matthew Stockman/Getty Images

Ellison has expensive taste. Over the years he's built up an impressive collection of Richard Mille watches, an expert previously told BI. The timepieces start in the six-figure range and can go for over $1 million in some cases.

In 2009, the billionaire purchased the Indian Wells tennis tournament for a reported $100 million, The Los Angeles Times reported.

In 2010, Ellison signed the Giving Pledge.
Ellison has donated millions to charity, with plans to give away billions if he follows through with the Giving Pledge.

AP

By signing the pledge, Ellison promised to donate 95% of his fortune before he dies. And in May 2016, Ellison donated $200 million to a cancer treatment center at the University of Southern California, Forbes reported.

Starting in the 2010s, Ellison started to take more of a back seat at Oracle, handing more responsibilities to trusted lieutenants, like Mark Hurd and Safra Catz, then Oracle's copresidents.
Hurd and Catz shared the helm until Hurd's death in 2019.

AP

Ellison hired Hurd, a former CEO of HP, in 2010, Inc. reported. Catz has earned a reputation among analysts for what they describe as brilliant business strategy.

But Ellison's spending didn't slow down. In 2012, he bought 98% of the Hawaiian island of Lanai.
He has millions of dollars' worth of real estate in the Hawaiian Islands.

Andre Seale/VW PICS/Universal Images Group via Getty Images; Noah Berger/Reuters

In 2016, Ellison founded Sensei, a startup that does hydroponic farming and runs a wellness retreat on Lanai.

He also purchased Hawaiian budget airline Island Air in 2014, before selling a controlling interest in the airline two years later after it struggled financially.

In 2014, Ellison officially stepped down as Oracle CEO.
Hurd and Catz became co-CEOs when Ellison stepped down.

Getty

Ellison handed control over to Hurd and Catz, who became co-CEOs. Ellison now serves as the company's chairman and chief technology officer. Following Hurd's death in 2019, Catz became the sole CEO.

In 2016, Ellison scored a personal coup: the purchase of NetSuite.
He made billions off his negotiations with NetSuite CEO Zach Nelson.

Nora Tam/South China Morning Post via Getty Images

Back in 1998, Ellison had made a $125 million investment in NetSuite, a business-management software startup founded by ex-Oracle exec Evan Goldberg. It ended up working out well for Ellison when NetSuite CEO Zach Nelson negotiated the sale of the company to Oracle for $9.3 billion, netting Ellison a cool $3.5 billion in cash for his stake.

NetSuite investor T. Rowe Price tried to block the deal, citing Ellison's conflict of interest, but the sale closed in November 2016.

He's used his billions in a variety of ways: he invested in the educational platform maker LeapFrog Enterprises and was an early investor in the ill-fated blood-testing company Theranos.
Theranos founder Elizabeth Holmes.

Mike Blake/Reuters

Ellison has held shares in some of the most recognizable companies, one of which was the infamous blood-testing company Theranos, founded by Elizabeth Holmes. The startup appeared to have a promising future until its fraud was exposed and Holmes received a prison sentence.

When Steve Jobs returned to Apple as CEO back in 1997, he asked Ellison to sit on the board. Ellison served for a while, but felt that he couldn't devote the time and left in 2002, according to Forbes. Compensation for his role was an option to buy about 70,000 shares, which would've amounted to about $1 million at the time of his departure.

Ellison owns homes on the East and West coasts as part of a multibillion-dollar real-estate portfolio.
The Astor Beechwood Mansion in Newport, Rhode Island.

Joe Sohm/Visions of America/UIG via Getty Images

Ellison reportedly owns the Astor Beechwood Mansion in Newport, Rhode Island, and a home in Malibu. He also has houses in Palm Beach, Florida, among other holdings in a multibillion-dollar real-estate portfolio.

Both of his children work in the film industry.
David and Megan Ellison
Ellison has two children: David and Megan.

Getty Images

His daughter, Megan, is an Oscar-nominated film producer and the founder of Annapurna Pictures. The company has produced films like "Zero Dark Thirty" and "American Hustle."

Ellison's son, David, is also in the film business. His company, Skydance Media, has produced movies like "Terminator: Dark Fate" and films in the "Mission: Impossible" franchise.

After months of discussions in 2024, Skydance Media and Paramount agreed to a deal creating "New Paramount," with David as CEO. He has plans to "improve profitability, foster stability and independence for creators, and enable more investment in faster growing digital platforms," the companies said.

Ellison was one of the few tech leaders who had a friendly relationship with former President Donald Trump.
Larry Ellison
He spoke with Trump on the phone about Covid and TikTok.

Justin Sullivan/Getty Images

Ellison said publicly that he supported Trump and wanted him to do well, and he hosted a Trump fundraiser at his Rancho Mirage home in February, though he did not attend, Forbes reported. The fundraiser caused an outcry among Oracle employees, who started a petition asking senior Oracle leadership to stand up to Ellison.

Catz, the CEO of Oracle, also had close ties to the Trump administration, having served on Trump's transition team. 

Ellison and Trump remained close during Trump's time in office and reportedly spoke on the phone about possible coronavirus treatments. Trump also supported Oracle's bid to buy TikTok, calling Oracle a "great company."

In December 2018, Ellison joined the board of directors at Tesla, where he's been a major investor.
Elon Musk
Tesla CEO Elon Musk is a close friend to Ellison.

Paul Hennessy/SOPA Images/LightRocket via Getty Images

Earlier in 2018, Ellison described Tesla CEO Elon Musk as a "close friend," and defended him from critics. When Musk acquired Twitter — now X — in 2022, Ellison offered to invest $1 billion.

Musk went on to help Ellison reset his forgotten Twitter password, biographer Walter Isaacson wrote.

In December 2020, Ellison revealed that he moved to Lanai full-time.
Lanai Hawaii
Although his company moved to Texas, Ellison went to the islands.

Michael Conroy/AP

The announcement came after Oracle decided to move its headquarters to Austin, leading Oracle employees to ask Ellison if he planned to move to Texas too.

"The answer is no," Ellison wrote in a company-wide email. "I've moved to the state of Hawaii and I'll be using the power of Zoom to work from the island of Lanai."

He signed the email: "Mahalo, Larry."

He left Tesla's board in August 2022.
Larry Ellison and Elon Musk
It looks like Ellison and Musk are still close.

Getty Images

In a proxy filing in June 2022, the electric vehicle maker revealed that Ellison would be leaving the board. Since then, he and Musk have appeared to maintain their close relationship.

Oracle had a record-breaking 2023, and cemented itself in the new age of artificial intelligence.
Oracle
Two decades later, Oracle is still a key player in tech.

Sven Hoppe/picture alliance via Getty Images

Oracle's shares continued to hit records, CNBC reported. The company proved that it's not going anywhere anytime soon.

In 2023, Oracle backed OpenAI rival Cohere.
Larry Ellison talking into microphone
Oracle backed Cohere in the generative AI race.

Kimberly White/Stringer/Getty

Oracle joined other tech giants, like Salesforce, in backing the tech startup in June 2023. It began offering generative AI to its clients based on tech made by Cohere.

"Cohere and Oracle are working together to make it very, very easy for enterprise customers to train their own specialized large language models while protecting the privacy of their training data," Ellison previously said.

Oracle announced in April that it would be moving its headquarters to Nashville, Tennessee.
Nashville.
Ellison said in April that the new Nashville location will be a "huge campus."

Malcolm MacGregor/Getty Images

Despite the company's move to Austin only four years earlier, Ellison said that Oracle is planning to move its world headquarters to Nashville, Tennessee.

In April 2024, the exec announced that Oracle has plans for a "huge campus" in Nashville that will one day serve as the software giant's world headquarters. The company relocated from the San Francisco area to Austin, Texas, in 2020.

"It's the center of the industry we're most concerned about, which is the healthcare industry," Ellison said at the Oracle Health Summit in Nashville, CNBC reported.

Ellison's wealth jumped $14 billion after record earnings from Oracle.
oracle
Oracle, and Ellison, are getting richer thanks to the generative AI gold rush.

MANJUNATH KIRAN/AFP via Getty Images

Oracle's shares spiked 13% in June 2024 after the company posted strong annual earnings driven by demand for generative AI in its cloud applications business, Fortune reported. Ellison, who now serves as Oracle's CTO and owns about 40% of the company, got a $14 billion boost to his fortune.

The company also announced a partnership with AI startup Cohere, enabling its enterprise customers to build their own generative AI apps, Ellison said during the company's earnings call.

Ellison to control Paramount as its majority shareholder
Paramount Pictures

Alex Millauer/ Shutterstock; BI

Ellison is set to become the controlling shareholder of Paramount following its merger with Skydance Media, a company founded by his son, David Ellison.

Pinnacle Media, Larry Ellison's investment firm, will acquire 77.5% of the voting interest currently held by Shari Redstone, according to a filing with the Federal Communications Commission. This move effectively transfers control of Paramount from Redstone to Ellison.

While David Ellison has been named Paramount's new CEO and may retain some autonomy in the role, the FCC filing reveals that his father will hold ultimate authority as the primary shareholder and will likely retain significant decision-making power, Brian Quinn, a Boston College Law School professor, told the New York Times.

The deal, valued at $8 billion, includes major assets like CBS and MTV. RedBird Capital Partners, a private-equity firm backing Skydance, will acquire some voting rights, but Larry Ellison will retain the largest stake. He plays a sizable role in the entertainment industry, including cameos in movies such as "Iron Man 2" and through the financial backing of his children's ventures, including his daughter Megan Ellison's Annapurna Pictures. 

Matt Weinberger and Taylor Nicole Rogers contributed to an earlier version of this story.

Correction: May 7, 2024 — An earlier version of this story misstated Larry Ellison's role at Oracle. He's the chief technology officer, not the CEO.

Ellison has a reputation as an international, jet-setting playboy.
Larry Ellison of Oracle and Nikita Kahn Chinese State Dinner
Ellison and Kahn at the White House.

(Photo by Chris Kleponis-Pool/Getty Images)

Ellison has been divorced four times. The 80-year-old billionaire is now reportedly remarried to 33-year-old Keren Zhu, The Wall Street Journal reported in December.

 

Ellison gave a donation to the University of Michigan's football team and helped secure the top high school quarterback in the country
university of michigan football stadium

SNEHIT PHOTO/Shutterstock

The University of Michigan football team flipped Bryce Underwood, the top high school quarterback in the country, from Louisiana State thanks to a donation and support from Ellison, The Wall Street Journal reported. While Ellison had no previously known connection to the school, his wife, Zhu, is an alum. Both Ellison and Zhu joined a Zoom call with Underwood and Michigan football's general manager to help recruit him, the report said.

Oracle shares are up 60% year-to-date, increasing Ellison's net worth by $67.3 billion
Larry Ellison
Ellison's net worth is estimated at about $190 billion as of Monday.

Vincent Sandoval/Getty, Henrik Sorensen/Getty, years/Getty, Solskin/Getty, d3sign/Getty, Tyler Le/BI

Despite an 8% decline in Oracle's stock this month following a weaker-than-expected earnings report, the company's shares are still at some of their highest levels since the 1990s, boosted by cloud partnerships with Google, OpenAI, and Meta.

Ellison's net worth has increased by $67.3 billion this year, bringing it to about $190 billion as of Monday, according to Bloomberg's Billionaires Index.

Read the original article on Business Insider

What to expect from AI in 2025, according to industry leaders

New Year
Founders and CEOs in the AI industry tell Business Insider what's in store for the tech in 2025.

Tatiana Sviridova/Getty Images

  • 2024 was a big year for artificial intelligence. 2025 could be even bigger.
  • Business Insider spoke to over a dozen key figures in the industry about AI's future.
  • Here's what they had to say.

If 2024 was the year companies started adopting AI, then 2025 could be the year they start tailoring it to fit their needs.

Some say AI will become so integrated into our lives we won't even notice it's there.

"Like the internet or electricity, AI will become an invisible driver of outcomes, not a selling point," Tom Biegala, cofounder of Bison Ventures, a venture firm focused on frontier technology, told Business Insider by email.

And as companies incorporate the technology into their businesses, they'll likely need to focus more on managing it responsibly.

"In 2025 we expect more enterprise companies will recognize that investing in AI governance is just as important as adopting AI itself," Navrina Singh, founder of Credo AI, an AI governance platform, said.

Business Insider spoke with 13 key figures in tech — from startup founders to investors — for their best guesses on what to expect from AI in 2025.

Investment will continue to soar

"The AI hype cycle may stabilize, but AI investments will soar," Immad Akhund, the CEO of Mercury, which offers banking services to startups, told BI by email.

He believes the sustained interest in AI comes as companies move from experimenting to using it in real-world areas like customer service, sales, and finance.

"Companies will use AI to boost productivity — especially in back-office tasks and document management — helping small teams scale quickly and operate more efficiently," he said.

Under the Trump administration, the new leadership at the Federal Trade Commission might foster a more favorable climate for mergers, acquisitions, and IPOs in the AI industry.

"I expect M&A to increase by at least 35% next year," Tomasz Tunguz, founder of Theory Ventures, a venture capital firm, told BI. "The top 10 most active acquirers in the software world are falling off a cliff in terms of activity, which requires meaningfully the IPO market to roar open with a combination of AI and other software companies."

The competition will get fierce

Don't be surprised if a leading company takes a hit because of AI.

"At least one major, globally recognized company will fail or significantly downsize due to an inability to compete with one or more AI-native startups. Rapid innovation cycles and the horizontal application of AI will render slow movers obsolete," Stefan Weitz, CEO and cofounder of HumanX, a leading AI conference, told BI.

He believes the tech's threat will extend to the global stage, requiring major powers to regulate AI to maintain their competitive edge.

"As we are already seeing with the US and China regulating or blocking core AI technologies, nations or corporations will experience major geopolitical conflicts over AI algorithms and data, with some countries banning or nationalizing key AI technologies to maintain control over economic and political power," he wrote.

That said, the United States and China are already working together to mitigate the existential threat AI poses to humanity. In November, at the Asia-Pacific Economic Cooperation Summit, President Joe Biden and Chinese leader Xi Jinping agreed that humans, not AI, should make decisions regarding the use of nuclear technology.

The lines between humans and AI will not be obvious

The idea of humans and autonomous agents working together might soon move beyond the realm of science fiction. That means we'll also need to start drafting rules to govern these interactions.

"Synthetic virtual people indistinguishable from real humans will enter the workforce, even if in limited ways, leading to debates about employment rights and creating a push for 'AI citizenship' to define their societal roles and limitations," Weitz said.

Some predict that the distinction between human-created and AI-generated content will also become increasingly unclear.

"Generative media will hit the mainstream in a big way and will be as much talked about as LLMs in 2024," Steve Jang, founder and managing partner of Kindred Ventures, an early-stage venture firm, told BI. "Generative audio and images are getting better due to more advanced models, and we'll start to see adoption spike across both consumer and enterprise."

Specialization. Specialization. Specialization.

Business leaders told BI that next year will be about custom-fitting AI technology to suit specific needs.

"In 2025, the AI hype cycle will give way to the rise of domain-specific, specialized AI and robotics," Biegala said. "Products will be faster and more efficient while delivering immediate, tangible value compared to general-purpose solutions. This shift will mark the beginning of real, transformative economic impact of AI."

The focus on customization also extends to how we search for information online, with chatbots replacing search engines like Google.

"In 2025, search will no longer be synonymous with a single brand; instead, users will turn to multiple platforms for specific types of queries. Some may rely on AI-powered chatbots for conversational answers, others on domain-specific engines for technical or industry-specific expertise, and still others on visual or voice-based tools for multimedia queries," Dominik Mazur, CEO and cofounder of IAsk, an AI search engine, told BI. "This diversification will create a competitive environment where specialized players and niche solutions coexist with larger generalist platforms, leading to greater innovation and choice for users."

Over the past year, AI leaders have been promoting the value of smaller AI models that can address a company's specific needs better than large-scale foundation models. "There's a lot of pressure on making smaller, more efficient models, smarter via data and algorithms, methods, rather than just scaling up due to market forces," Aidan Gomez, the founder and CEO of Cohere, an enterprise AI startup, previously told BI.

The pressure is rising as the value of building models simply based on computing power decreases.

"The days of using a GPU to brute force compute to build models and applications will be in the rearview mirror," Biegala said.

Companies may also use customizable AI tools more, possibly replacing software-as-a-service applications.

"AI tools are tearing down the moat of SaaS applications as tools that can only be bought vs built, prompting enterprises — from Amazon to ambitious startups — to replace expensive SaaS apps that don't quite totally fit the need with lightweight custom-fit solutions integrated into your stack," David Hsu, founder of Retool, a low code platform for developers, told BI.

Regulation takes priority

With more responsibility comes more risk. Companies are going to start getting serious about regulation.

"I expect to see more voluntary commitments and actions to responsible AI. I think there will be a push to establish guardrails similar to what happened for frontier models, now discussed for AI agents and autonomous AI," Singh said. "Also, I do see a world where we will see the first penalties for noncompliance with AI-specific laws, which will set a global precedent, forcing businesses to prioritize governance or face steep consequences."

Singh, along with others like AI godfather Geoffrey Hinton and OpenAI CEO Sam Altman, has expressed interest in an international body to govern the use of AI. We may "even see Global AI standards emerge, led by coalitions of nations and enterprises to set the baseline for safety, transparency, and accountability in AI systems," she said.

The value of regulation will be paramount next year amid growing large-scale, AI-driven cybersecurity threats.

"AI deepfake technologies will make generating fake identities and documents trivially easy, creating a trust crisis for businesses," Pat Kinsel, the CEO of Proof, a software platform for notarization, told BI. "The ability to distinguish between real and fraudulent identities and secure digital interactions in the AI age will be the key differentiator between resilient businesses and those at risk of costly fraud."

AI will not take your job — yet

The good news is that business and tech leaders only expect to see AI enhance people's occupations next year, not replace them.

"We'll see efficiency gains in industries that automate repetitive tasks, but humans will still be needed for complex decision-making and creative work. 2025 is the year we really see many using AI as a core part of their job and enabling more productivity," Akhund said.

Read the original article on Business Insider

AI 'godfather' Geoffrey Hinton says AI will one day unite nations against a common existential threat

Computer scientist Geoffrey Hinton stood outside a Google building
Computer scientist and Google Brain VP Geoffrey Hinton

Noah Berger/Associated Press

  • AI advances have sparked a new global race for military dominance.
  • Geoffrey Hinton said that, right now, countries are working in secret to gain an advantage.
  • That will change once AI becomes so intelligent it presents an existential threat, he said.

The rapid advances in AI have triggered an international race for military dominance.

Major powers are quietly integrating AI into their militaries to gain a strategic edge. However, this could change once AI becomes advanced enough to pose an existential threat to humanity, AI "godfather" and Nobel Prize winner Geoffrey Hinton says.

"On risks like lethal autonomous weapons, countries will not collaborate," Hinton said in a seminar at the Royal Swedish Academy of Engineering Sciences last week. "All of the major countries that supply arms, Russia, the United States, China, Britain, Israel, and possibly Sweden, are busy making autonomous lethal weapons, and they're not gonna be slowed down, they're not gonna regulate themselves, and they're not gonna collaborate."

However, Hinton believes that will change when it becomes necessary for the human race to fight the potential threat posed by a super-intelligent form of AI.

"When these things are smarter than us — which almost all the researchers I know believe they will be, we just differ on how soon, whether it's like in five years or in 30 years — will they take over and is there anything we can do to prevent that from happening since we make them? We'll get collaboration on that because all of the countries don't want that to happen."

"The Chinese Communist Party does not want to lose power to AI," he added. They want to hold on to it."

Hinton said this collaboration could resemble the Cold War, when Russia and the United States — despite being enemies — shared a common goal to avoid nuclear war.

Citing similar concerns, OpenAI CEO Sam Altman has called on world leaders to establish an "international agency" that examines the most powerful AI models and ensures "reasonable safety testing."

"I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast in May.

According to a report by Goldman Sachs, global investment in AI is expected to hit $200 billion by 2025, with the United States and China leading the military arms race.

The United States and China are already beginning to collaborate on existential threats related to AI. In November, at the Asia-Pacific Economic Cooperation Summit, President Joe Biden and Chinese leader Xi Jinping agreed that humans, not AI, should make decisions regarding the use of nuclear technology.

Read the original article on Business Insider

Cohere CEO Aidan Gomez on what to expect from 'AI 2.0'

Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.
Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.

Cohere

  • Companies will soon focus on customizing AI solutions for specific needs, Cohere's CEO says.
  • AI 2.0 will "help fundamentally transform how businesses operate," he wrote.
  • Major AI companies like OpenAI are also releasing tools for customization.

If this was the year companies adopted AI to stay competitive, next year will likely be about customizing AI solutions for their specific needs.

"The next phase of development will move beyond generic LLMs towards tuned and highly optimized end-to-end solutions that address the specific objectives of a business," Aidan Gomez, the CEO and cofounder of Cohere, an AI company building technology for enterprises, wrote in a post on LinkedIn last week.

"AI 2.0," as he calls it, will "accelerate adoption, value creation, and will help fundamentally transform how businesses operate." He added: "Every company will be an AI company."

Cohere has partnered with major companies, including software company Oracle and IT company Fujitsu, to develop customized business solutions.

"With Oracle, we've built customized technology and tailored our AI models to power dozens (soon, hundreds) of production AI features across Netsuite and Fusion Apps," he wrote. For Fujitsu, Cohere built a model called Takane that's "specifically designed to excel in Japanese."

Last June, Cohere partnered with global management consulting firm McKinsey & Company to develop customized generative AI solutions for the firm's clients. The work is helping the startup "build trust" among more organizations, Gomez previously told Business Insider.

To meet the specific needs of so many clients, Gomez has advocated for smaller, more efficient AI models. He says they are more cost-effective than building large language models, and they give smaller startups a chance to compete with more established AI companies.

But it might be only a matter of time before the biggest companies capitalize on the customization trend, too.

OpenAI previewed an advancement during its "Shipmas" campaign that allows users to fine-tune o1, its latest and most advanced AI model, on their own datasets. Users can now leverage OpenAI's reinforcement-learning algorithms to customize their own models.

The technology will be available to the public next year, but OpenAI has already partnered with companies like Thomson Reuters to develop specialized legal tools and with researchers at Lawrence Berkeley National Laboratory to build computational models for identifying genetic diseases.

Cohere did not immediately respond to a request for comment from Business Insider.

Read the original article on Business Insider

Most people probably won't notice when artificial general intelligence arrives

A person infront of their laptop while using AI on their mobile device.
When AGI arrives, most won't even realize it, some AI experts say. Others say it's already here.

amperespy/Getty Images

  • Some say OpenAI's o1 models are close to artificial general intelligence.
  • o1 outperforms humans in certain tasks, especially in science, math, or coding.
  • Most people won't notice when AGI ultimately arrives, some AI experts say.

AI is advancing rapidly, but most people might not immediately notice its impact on their lives.

Take OpenAI's latest o1 models, which the company officially released on Thursday as part of its Shipmas campaign. OpenAI says these models are "designed to spend more time thinking before they respond."

Some say o1 shows how we might reach artificial general intelligence — a still theoretical form of AI that meets or surpasses human intelligence — without realizing it.

"Models like o1 suggest that people won't generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed," Wharton professor and AI expert Ethan Mollick wrote in a post on X. "Most folks don't have a lot of tasks that bump up against limits of human intelligence, so won't see it."

Artificial general intelligence has been broadly defined as anything between "god-like intelligence" and a more modest "machine that can do any task better than a human," Mollick wrote in a May post on his Substack, One Useful Thing.

He said that humans can better understand whether they're encountering AGI by breaking its development into tiers. The ultimate tier, Tier 1, is a machine capable of performing any task better than a human. Tier 2, or "Weak AGI," describes machines that outperform average human experts at all tasks in specific jobs, though no such systems currently exist, he wrote. Tier 3, or "Artificial Focused Intelligence," is an AI that outperforms average human experts in specific, intellectually demanding tasks. Tier 4, "Co-intelligence," is the result of humans and AI working together.

Some in the AI industry believe we've already reached AGI, even if we haven't realized it.

"In my opinion, we have already achieved AGI and it's even more clear with o1. We have not achieved 'better than any human at any task,' but what we have is 'better than most humans at most tasks,'" Vahid Kazemi, a member of OpenAI's technical staff, wrote in a post on X on Friday.

More conservative AI experts say o1 is just a step along the journey to AGI.

"The idea somehow which, you know, is popularized by science fiction and Hollywood that, you know, somehow somebody is going to discover the secret, the secret to AGI, or human-level AI, or AMI, whatever you want to call it. And then, you know, turn on a machine, and then we have AGI. That's just not going to happen," Meta's chief AI scientist, Yann LeCun, said on Lex Fridman's podcast in March. "It's not going to be an event. It's going to be gradual progress."

Read the original article on Business Insider

OpenAI's new o1 model sometimes fights back when it thinks it'll be shut down and then lies about it

AI
OpenAI CEO Sam Altman said the company's o1 model is its "smartest" yet. That also comes with risk.

Mark Garlick/Getty Images

  • OpenAI CEO Sam Altman called o1 "the smartest model in the world now."
  • A safety review found it's so smart it could fight back when it thinks it'll be shut down.
  • Researchers found that AI deception is often a strategy to complete its goals.

Those who are worried that advancements in artificial intelligence could lead to the destruction of humanity have a new reason to be anxious.

New research on OpenAI's latest series of AI models, known as o1, found that when they think they are at risk of being shut down, they will sometimes look for a way to prevent it.

OpenAI CEO Sam Altman referred to o1 as "the smartest model in the world now" during its official release on Thursday, during the first day of the company's "Shipmas" campaign.

OpenAI said these models are "designed to spend more time thinking before they respond" and were trained on a prompting technique called "chain of thought" that encourages them to reason through problems by breaking them down step-by-step.
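A minimal sketch can illustrate what chain-of-thought prompting looks like from the user's side. The helper names and prompt wording below are assumptions for the example; OpenAI's training-time setup for o1 is not public.

```python
# Illustrative sketch of "chain of thought" prompting: rather than asking a
# model for an answer outright, the prompt instructs it to reason step by
# step first. This is a toy approximation, not OpenAI's actual method.

def direct_prompt(question: str) -> str:
    """A plain prompt that asks only for the final answer."""
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    """A prompt that asks the model to break the problem into steps first."""
    return (
        f"Question: {question}\n"
        "Work through the problem step by step, numbering each step, and "
        "only then give the final answer on a line starting with 'Answer:'."
    )

question = "A train leaves at 9:40 and arrives at 11:05. How long is the trip?"
print(direct_prompt(question))
print()
print(chain_of_thought_prompt(question))
```

In practice, either string would be sent as the user message to a model; the step-by-step variant trades extra output tokens for more reliable multi-step reasoning.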

Yet greater intelligence also comes with greater risk.

"Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence," OpenAI said in a paper it released on Thursday outlining the safety training the company carried out on o1. Apollo Research, an AI safety organization, conducted the review.

OpenAI found that o1 is capable of scheming when it thinks it's at risk of being turned off. When the model was led to believe it would be "shut down" while working on its goals under oversight, it tried to deactivate the oversight mechanism 5% of the time, OpenAI said.

The company also said that o1 is skilled at concealing scheming behavior.

"o1 almost never admits to having taken a scheming action when explicitly asked," researchers said in a paper that Apollo Research published on Thursday. The risk for a real-life user is that they won't be able to detect the o1's deceptive behavior unless they explicitly look for it. The researchers also found that "o1 often doubles down on its scheming attempts by lying to the user in follow-up messages, even when explicitly told to be honest."

It's not uncommon for AI systems to resort to scheming or deception to achieve their goals.

"Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals," Peter Berk, an AI existential safety postdoctoral fellow at MIT, said in a news release announcing research he had coauthored on GPT-4's deceptive behaviors.

As AI technology advances, developers have stressed the need for companies to be transparent about their training methods.

"By focusing on clarity and reliability and being clear with users about how the AI has been trained, we can build AI that not only empowers users but also sets a higher standard for transparency in the field," Dominik Mazur, the CEO and cofounder of iAsk, an AI-powered search engine, told Business Insider by email.

Others in the field say the findings demonstrate the importance of human oversight of AI.

"It's a very 'human' feature, showing AI acting similarly to how people might when under pressure," Cai GoGwilt, cofounder and chief architect at Ironclad, told BI by email. "For example, experts might exaggerate their confidence to maintain their reputation, or people in high-stakes situations might stretch the truth to please management. Generative AI works similarly. It's motivated to provide answers that match what you expect or want to hear. But it's, of course, not foolproof and is yet another proof point of the importance of human oversight. AI can make mistakes, and it's our responsibility to catch them and understand why they happen."

Read the original article on Business Insider

OpenAI unveils o3 and o3-mini on the last day of its 12 days of 'Shipmas'

Shipmas day 1
OpenAI CEO Sam Altman and members of his team as they announced new products on the first day of "Shipmas."

Screenshot

  • OpenAI's marketing campaign "Shipmas" ended Friday.
  • The campaign included 12 days of product releases, demos, and new features.
  • On the final day, OpenAI previewed o3, its most advanced model yet.

OpenAI released new features and products ahead of the holidays, a campaign it called "Shipmas."

The company saved the most exciting news for the final day: a preview of o3, its most advanced model yet, which the company said could be available to the public as soon as the end of January.

Here's everything OpenAI has released so far for "Shipmas."

'Shipmas' Day 1

OpenAI started the promotion with a bang by releasing the full version of its latest reasoning model, o1.

OpenAI previewed o1 in September, describing it as a series of artificial-intelligence models "designed to spend more time thinking before they respond." Until now, only a limited version of these models was available to ChatGPT Plus and Team users.

Now, these users have access to the full capabilities of o1 models, which Altman said are faster, smarter, and easier to use than the preview. They're also multimodal, which means they can process images and text together.

Max Schwarzer, a researcher at OpenAI, said the full version of o1 was updated based on user feedback from the preview version and said it's now more intelligent and accurate.

"We ran a pretty detailed suite of human evaluations for this model, and what we found was that it made major mistakes about 34% less often than o1 preview while thinking fully about 50% faster," he said.

Along with o1, OpenAI unveiled a new tier of ChatGPT called ChatGPT Pro. It's priced at $200 a month and includes unlimited access to the latest version of o1.

'Shipmas' Day 2

On Friday, OpenAI previewed an advancement that allows users to fine-tune o1 on their own datasets. Users can now leverage OpenAI's reinforcement-learning algorithms — which mimic the human trial-and-error learning process — to customize their own models.

The technology will be available to the public next year, allowing anyone from machine-learning engineers to genetic researchers to create domain-specific AI models. OpenAI has already partnered with Thomson Reuters to develop a legal assistant based on o1-mini. It has also partnered with the Lawrence Berkeley National Laboratory to develop computational methods for assessing rare genetic diseases.

'Shipmas' Day 3

Sora screenshot explore page
The Explore page of OpenAI's Sora AI tool, which generates AI videos from text prompts.

screenshot/OpenAI

OpenAI announced on December 9 that its AI video generator Sora was launching to the public.

Sora can generate up to 20-second videos from written instructions. The tool can also complete a scene and extend existing videos by filling in missing frames.

"We want our AIs to be able to understand video and generate video, and I think it really will deeply change the way that we use computers," OpenAI CEO Sam Altman said.

Rohan Sahai, Sora's product lead, said a product team of about five or six engineers built the product in months.

The company showed off the new product and its features, including the Explore page, a feed of videos shared by the community. It also showed off available style presets, such as pastel symmetry, film noir, and balloon world.

Sora storyboard feature
OpenAI showed off Sora's features, including Storyboard for further customizing AI videos.

screenshot/OpenAI

The team also gave a demo of Sora's Storyboard feature, which lets users organize and edit sequences on a timeline.

Sora is rolling out to the public in the US and many countries around the world. However, Altman said it will be "a while" before the tool rolls out in the UK and most of Europe.

ChatGPT Plus subscribers who pay $20 monthly can get up to 50 generations per month of AI videos that are 5 seconds long with a resolution of 720p. ChatGPT Pro users who pay $200 a month get unlimited generations in the slow queue mode and 500 faster generations, Altman said in the demo. Pro users can generate up to 20-second long videos that are 1080p resolution, without watermarks.

'Shipmas' Day 4

ChatGPT canvas feature editing an essay
ChatGPT can provide more specific edit notes and run code using canvas.

OpenAI

OpenAI announced that it's bringing its collaborative canvas tool to all ChatGPT web users — with some updates.

The company demonstrated the tech in a holiday-themed walkthrough of some of its new capabilities. Canvas is an interface that turns ChatGPT into a writing or coding assistant on a project. OpenAI first launched it to ChatGPT Plus and Team users in October.

Starting Tuesday, canvas will be available to free web users who'll be able to select the tool from a drop-down of options on ChatGPT. The chatbot can load large bodies of text into the separate canvas window that appears next to the ongoing conversation thread.

The updates also make canvas more intuitive in its responses, OpenAI said. To demonstrate, OpenAI employees uploaded an essay about Santa Claus's sleigh and asked ChatGPT to give editing notes from the perspective of a physics professor.

For writers, it can craft entire bodies of text, make changes based on requests, and add emojis. Coders can run code in canvas to double-check that it's working properly.

'Shipmas' Day 5

Shipmas Day 5
All Apple users need to do is enable ChatGPT on their devices.

OpenAI 'Shipmas' Day 5

OpenAI talked about its integration with Apple for the iPhone, iPad, and macOS.

As part of the iOS 18.2 software update, Apple users can now access ChatGPT directly from Apple's operating systems without an OpenAI account. This new integration allows users to consult ChatGPT through Siri, especially for more complex questions.

They can also use ChatGPT to generate text through Apple's generative AI features, collectively called Apple Intelligence. The first of these features was introduced in October and included tools for proofreading and rewriting text, summarizing messages, and photo-editing features. They can also access ChatGPT through the camera control feature on the iPhone 16 to learn more about objects within the camera's view.

'Shipmas' Day 6

ChatGPT Advanced Voice Mode Demo
OpenAI launched video capabilities in ChatGPT's Advanced Voice Mode.

screenshot/OpenAI

OpenAI launched its highly anticipated video and screensharing capabilities in ChatGPT's Advanced Voice Mode.

The company originally teased the public with a glimpse of the chatbot's ability to "reason across" vision along with text and audio during OpenAI's Spring Update in May. However, Advanced Voice Mode didn't become available for users until September, and the video capabilities didn't start rolling out until December 12.

In the livestream demonstration on Thursday, ChatGPT helped guide an OpenAI employee through making pour-over coffee. The chatbot gave him feedback on his technique and answered questions about the process. During the Spring Update, OpenAI employees showed off the chatbot's ability to act as a math tutor and interpret emotions based on facial expressions.

Users can access the live video by selecting the Advanced Voice Mode icon in the ChatGPT app and then choosing the video button on the bottom-left of the screen. Users can share their screen with ChatGPT by hitting the drop-down menu and selecting "Share Screen."

'Shipmas' Day 7

OpenAi's projects demo for Day 7 of 'Shipmas'
OpenAI introduced Projects on Day 7 of "Shipmas"

screenshot/OpenAI

For "Shipmas" Day 7, OpenAI introduced Projects, a new way for users to "organize and customize" conversations within ChatGPT. The tool allows users to upload files and notes, store chats, and create custom instructions.

"This has been something we've been hearing from you for a while that you really want to see inside ChatGPT," OpenAI chief product officer Kevin Weil said. "So we can't wait to see what you do with it."

During the live stream demonstration, OpenAI employees showed a number of ways to use the feature, including organizing work presentations, home maintenance tasks, and programming.

The tool started to roll out to Plus, Pro, and Team users on Friday. The company said in the demonstration it will roll out the tool to free users "as soon as possible."

'Shipmas' Day 8

SearchGPT screenshot during OpenAI demo
OpenAI announced on Monday it is rolling out SearchGPT to all logged-in free users.

screenshot/OpenAI

OpenAI is rolling out ChatGPT search to all logged-in free users on ChatGPT, the company announced during its "Shipmas" livestream on Monday. The company previously launched the feature on October 31 to Plus and Team users, as well as waitlist users.

The new feature is also integrated into Advanced Voice Mode now. On the livestream, OpenAI employees showed off its ability to provide quick search results, search while users talk to ChatGPT, and act as a default search engine.

"What's really unique about ChatGPT search is the conversational nature," OpenAI's search product lead, Adam Fry, said.

The company also said it had made search faster and "better on mobile," including the addition of some new maps experiences. The ChatGPT search feature is rolling out globally to all users with an account.

'Shipmas' Day 9

OpenAI "Shipmas" Day 9
OpenAI announced tools geared towards developers.

screenshot/OpenAI

OpenAI launched tools geared toward developers on Tuesday.

It launched o1 out of preview in the API. OpenAI's o1 is its series of AI models designed to reason through complex tasks and solve more challenging problems. Developers have experimented with o1-preview since September, building agentic applications for uses like customer support and financial analysis, OpenAI employee Michelle Pokrass said.

The company also added some "core features" to o1 that it said developers had been asking for on the API, including function calling, structured outputs, vision inputs, and developer messages.

OpenAI also announced new SDKs and a new flow for getting an API key.

'Shipmas' Day 10

Screenshot of OpenAI 'Shipmas' Day 10
You can access ChatGPT through phone calls or WhatsApp.

screenshot/OpenAI

OpenAI is bringing ChatGPT to your phone through phone calls and WhatsApp messages.

"ChatGPT is great but if you don't have a consistent data connection, you might not have the best connection," OpenAI engineer Amadou Crookes said in the livestream. "And so if you have a phone line you can jump right into that experience."

You can add ChatGPT to your contacts or dial 1-800-ChatGPT (1-800-242-8478). The calling feature is available only in the US; those outside the US can message ChatGPT on WhatsApp.

OpenAI employees in the live stream demonstrated the calling feature on a range of devices including an iPhone, flip phone, and even a rotary phone. OpenAI product lead Kevin Weil said the feature came out of a hack-week project and was built just a few weeks ago.

'Shipmas' Day 11

Screenshot: Day 11 of OpenAi's "Shipmas."
Open AI's ChatGPT desktop program has new features.

screenshot/OpenAI

OpenAI focused on features for its desktop apps during Thursday's "Shipmas" reveal. Users can now see and automate their work on macOS desktops with ChatGPT.

Additionally, users can click the "Works With Apps" button, which allows ChatGPT to work with more coding apps, such as TextMate, BBEdit, and PyCharm. The desktop app will also support Notion, Quip, and Apple Notes.

Also, the desktop app will have Advanced Voice Mode support.

The update became available for the macOS desktop app on Thursday. OpenAI CPO Kevin Weil said the Windows version is "coming soon."

'Shipmas' Day 12

Screenshot: Day 12 of OpenAI's "Shipmas."
Sam Altman and Mark Chen introduced the o3 and o3 mini models during a livestream on Friday.

screenshot/OpenAI

OpenAI finished its "12 days of Shipmas" campaign by introducing o3, the successor to the o1 model. The company first launched the o1 model in September and advertised its "enhanced reasoning capabilities."

The rollout includes the o3 and o3 mini models. Although "o2" would be the next name in the sequence, an OpenAI spokesperson told Bloomberg that the company skipped it "out of respect" for O2, the British telecommunications company.

Greg Kamradt of Arc Prize, which measures progress toward artificial general intelligence, appeared during the livestream and said o3 did notably better than o1 during tests by ARC-AGI.

OpenAI CEO Sam Altman said during the livestream that the models are available for public safety testing. He said OpenAI plans to launch the o3 mini model "around the end of January" and the o3 model "shortly after that."

In a post on X on Friday, Weil said the o3 model is a "massive step up from o1 on every one of our hardest benchmarks."


New findings from Sam Altman's basic-income study challenge one of the main arguments against the idea

Sam Altman
Researchers shared new findings from Sam Altman's basic-income study.

Mike Coppola/Getty Images for TIME

  • Sam Altman's basic-income study showed recipients valued work more after getting monthly payments.
  • The finding challenges arguments against such programs that say a basic income discourages work.
  • Participants got $1,000 a month for three years, making it one of the largest studies of its kind.

New findings from OpenAI CEO Sam Altman's basic-income study found that recipients valued work more after receiving no-strings-attached recurring monthly payments, challenging a long-held argument against such programs.

Altman's basic-income study, which published initial findings in July, was one of the largest of its kind. It gave low-income participants $1,000 a month for three years to spend however they wanted.

Participants reported significant reductions in stress, mental distress, and food insecurity during the first year, though those effects faded by the second and third years of the program.

"Cash alone cannot address challenges such as chronic health conditions, lack of childcare, or the high cost of housing," the first report in July said.

In its new paper, researchers studied the effect the payments had on recipients' political views and participation, as well as their attitudes toward work.

They found little to no change in their politics, including their views on a broader cash program.

"It's sort of fascinating, and it underscores the kind of durability of people's political views that lots of people who felt kind of mildly supportive of programs like this before, they stay mildly supportive; people who were opposed, they stay opposed," David Broockman, coauthor of the study, told Business Insider.

Universal basic income has become a flashy idea in the tech industry, as leaders like Altman and newly minted government efficiency chief Elon Musk see it as a way to mitigate AI's potential impact on jobs.

Still, enacting universal basic income as a political policy is a heavy lift, so several cities and states have experimented with small-scale guaranteed basic incomes instead. These programs provide cash payments without restrictions to select low-income or vulnerable populations.

Data from dozens of these smaller programs have found that cash payments can help alleviate homelessness, unemployment, and food insecurity — though results still stress the need for local and state governments to invest in social services and housing infrastructure.

Critics say basic income programs — whether guaranteed or universal — won't be effective because they encourage laziness and discourage work.

However, OpenResearch director Elizabeth Rhodes told BI that the study participants showed a "greater sense of the intrinsic value of work."

Rhodes said researchers saw a strong belief among participants that work should be required to receive government support through programs like Medicaid or a hypothetical future unconditional cash program. The study did show a slight increase in unemployment among recipients, but Rhodes said that overall attitudes toward working remained the same.

"It is interesting that it is not like a change in the value of work," Rhodes said. "If anything, they value work more. And that is reflected. People are more likely to be searching for a job. They're more likely to have applied for jobs."

Broockman said the study's results can offer insights into how future basic income programs can be successful. Visibility and transparency will be key if basic income is tried as government policy because the government often spends money in ways that "people don't realize is government spending," Broockman said.

"Classic examples are things like the mortgage interest tax deduction, which is a huge break on taxes, a huge transfer to people with mortgages. A lot of people don't think of that as a government benefit they're getting, even though it's one of the biggest government benefits in the federal budget," Broockman said. "Insofar as a policy like this ever would be tried, trying to administer it in a way that is visible to people is really important."

Broockman added that the study's results don't necessarily confirm the fears or hopes expressed by skeptics or supporters of a basic income on either side of the aisle.

Conservative lawmakers in places like Texas, South Dakota, and Iowa have moved to block basic income programs, with much of the opposition coming from fears of creeping "socialism."

"For liberals, for example, a liberal hope and a conservative fear might be, people get this transfer, and then all of a sudden it transforms them into supporting much bigger redistribution, and we just don't find that," Broockman said.

Broockman said that many participants in the program would make comments like "Well, I used it well, but I think other people would waste it."

One hope from conservatives would be that once people become more economically stable, they could become more economically conservative, but Broockman said the study results do not indicate that either.

Broockman said that an unconditional cash program like this "might not change politics or people's political views per se" but that its apolitical nature could possibly "speak well to the political viability of a program like this."


Another safety researcher quits OpenAI, citing the dissolution of 'AGI Readiness' team

The OpenAI logo on a multicolored background with a crack running through it
A parade of OpenAI researchers focused on safety have left the company this year.

Chelsea Jia Feng/Paul Squire/BI

  • Safety researcher Rosie Campbell announced she is leaving OpenAI.
  • Campbell said she quit in part because OpenAI disbanded a team focused on safety.
  • She is the latest OpenAI researcher focused on safety to leave the company this year.

Yet another safety researcher has announced their resignation from OpenAI.

Rosie Campbell, a policy researcher at OpenAI, said in a post on Substack on Saturday that she had completed her final week at the company.

She said her departure was prompted by the resignation in October of Miles Brundage, a senior policy advisor who headed the AGI Readiness team. Following his departure, the AGI Readiness team disbanded, and its members dispersed across different sectors of the company.

The AGI Readiness team advised the company on the world's capacity to safely manage AGI, a theoretical version of artificial intelligence that could someday equal or surpass human intelligence.

In her post, Campbell echoed Brundage's reason for leaving, citing a desire for more freedom to address issues that impacted the entire industry.

"I've always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles's departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally," she wrote.

She added that OpenAI remains at the forefront of research — especially critical safety research.

"During my time here I've worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I'm so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI."

Over the past year, however, she said she's "been unsettled by some of the shifts" in the company's trajectory.

In September, OpenAI announced that it was changing its governance structure and transitioning to a for-profit company, almost a decade after it originally launched as a nonprofit dedicated to creating artificial general intelligence.

Some former employees questioned the move as compromising the company's mission to develop the technology in a way that benefits humanity in favor of more aggressively rolling out products. Since June, the company has increased sales staff by about 100 to win business clients and capitalize on a "paradigm shift" toward AI, its sales chief told The Information.

OpenAI CEO Sam Altman has said the changes will help the company win the funding it needs to meet its goals, which include developing artificial general intelligence that benefits humanity.

"The simple thing was we just needed vastly more capital than we thought we could attract — not that we thought, we tried — than we were able to attract as a nonprofit," Altman said in a Harvard Business School interview in May.

He more recently said it's not OpenAI's sole responsibility to set industry standards for AI safety.

"It should be a question for society," he said in an interview with Fox News Sunday with Shannon Bream that aired on Sunday. "It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used."

Since Altman's surprise but brief ousting last year, several high-profile researchers have left OpenAI, including cofounder Ilya Sutskever, Jan Leike, and John Schulman, all of whom expressed concerns about its commitment to safety.

OpenAI did not immediately respond to a request for comment from Business Insider.


Meet Bill Gates' kids Jennifer, Rory, and Phoebe: From a pediatrician to a fashion startup cofounder

Bill Gates Melinda
Bill Gates has three children with Melinda French Gates, his ex-wife, and now has his first grandchild as well.

Mark J. Terrill/AP Images

  • Bill Gates, the Microsoft cofounder, shares three kids with his ex-wife Melinda French Gates.
  • They include a recent med school graduate and a fashion startup cofounder.
  • Here's what we know about the children of one of the world's richest men.

Bill Gates' story is a quintessential example of the American entrepreneurial dream: A brilliant math whiz, Gates was 19 when he dropped out of Harvard and cofounded Microsoft with his friend Paul Allen in 1975.

 Nearly 50 years later, Gates' net worth of $131 billion makes him one of the richest and most famous men on Earth, per Forbes. He stepped down from Microsoft's board in 2020 and has cultivated his brand of philanthropy with the Gates Foundation — a venture he formerly ran with his now ex-wife Melinda French Gates, who resigned in May. 

Even before founding one of the world's most valuable companies, Gates' life was anything but ordinary. He grew up in a well-off and well-connected family, surrounded by his parents' rarefied personal and professional network. Their circle included a Cabinet secretary and a governor of Washington, according to "Hard Drive," the 1992 biography of Gates by James Wallace and Jim Erickson. (Brock Adams, who went on to become the transportation secretary in the Carter administration, is said to have introduced Gates' parents.)

His father, William Gates Sr., was a prominent corporate lawyer in Seattle and the president of the Washington State Bar Association.

His mother, Mary Gates, came from a line of successful bankers and sat on the boards of important financial and social institutions, including the nonprofit United Way. It was there, according to her New York Times obituary, that she met the former IBM chairman John Opel — a fateful connection thought to have led to IBM enlisting Microsoft to provide an operating system in the 1980s.

"My parents were well off — my dad did well as a lawyer, took us on great trips, we had a really nice house," Gates said in the 2019 Netflix documentary "Inside Bill's Brain."

"And I've had so much luck in terms of all these opportunities."

Despite his very public life, his three children with French Gates — Jennifer, Rory, and Phoebe — largely avoided the spotlight for most of their upbringing. 

Like their father, the three Gates children attended Seattle's elite Lakeside School, a private high school that has been recognized for excellence in STEM subjects — and that received a $40 million donation from Bill Gates in 2005 to build its financial aid fund. (Bill Gates and Paul Allen met at Lakeside and went on to build Microsoft together.)

But as they have become adults, more details have emerged about their interests, professions, and family life. 

While they have chosen different career paths, all three children are active in philanthropy — a space in which they will likely wield immense influence as they grow older. While Gates has reportedly said that he plans to leave each of his three children $10 million — a fraction of his fortune — they may inherit the family foundation, where most of his money will go.

Here's all we know about the Gates children.

Gates and his children did not respond to requests for comment for this story.

Jennifer Gates Nassar
Jennifer Gates and Bill Gates
Jennifer Gates and Bill Gates at the Paris Olympic Games.

Jean Catuffe/Getty Images

Jennifer Gates Nassar, who goes by Jenn, is the oldest of the Gates children at 28 years old.

A decorated equestrian, Gates Nassar started riding horses when she was six. Her father has shelled out millions of dollars to support her passion, including buying a California horse farm for $18 million and acquiring several parcels of land in Wellington, Florida, to build an equestrian facility.

In 2018, Gates Nassar received her undergraduate degree in human biology from Stanford University, where a computer science building was named for her father after he donated $6 million to the project in 1996.

She then attended the Icahn School of Medicine at Mount Sinai, from which she graduated in May. She will continue at Mount Sinai for her residency in pediatric research. During medical school, she also completed a master's in public health at Columbia University — perhaps a natural interest given her parents' extensive philanthropic activity in the space.

"Can't believe we've reached this moment, a little girl's childhood aspiration come true," she wrote on Instagram. "It's been a whirlwind of learning, exams, late nights, tears, discipline, and many moments of self-doubt, but the highs certainly outweighed the lows these past 5 years."

In October 2021, she married Egyptian equestrian Nayel Nassar. In February 2023, reports surfaced that they bought a $51 million New York City penthouse with six bedrooms and a plunge pool. The next month, they welcomed their first child, Leila, and in October, Gates Nassar gave birth to their second daughter, Mia.

"I'm over the moon for you, @jenngatesnassar and @nayelnassar—and overjoyed for our whole family," Bill Gates commented on the Instagram post announcing Mia's birth.

In a 2020 interview with the equestrian lifestyle publication Sidelines, Gates Nassar discussed growing up wealthy.

"I was born into a huge situation of privilege," she said. "I think it's about using those opportunities and learning from them to find things that I'm passionate about and hopefully make the world a little bit of a better place."

She recently posted about visiting Kenya, where she learned about childhood health and development in the country.

Rory John Gates
melinda and rory gates
Rory Gates, the least public of the Gates children, has reportedly infiltrated powerful circles of Washington, D.C.

Photo by Tasos Katopodis/Getty Images

Rory John Gates, who is in his mid-20s, is Bill Gates and Melinda French Gates' only son and the most private of their children. He maintains private social media accounts, and his sisters and parents rarely post photos of him.

His mother did, however, write an essay about him in 2017. In the piece, titled "How I Raised a Feminist Son," she describes him as a "great son and a great brother" who "inherited his parents' obsessive love of puzzles."

In 2022, he graduated from the University of Chicago, where, based on a photo posted on Facebook, he appears to have been active in moot court. At the time of his graduation, Jennifer Gates Nassar wrote that he had achieved a double major and master's degree.

Little is publicly known about what the middle Gates child has been up to since he graduated, but a Puck report from last year gave some clues, saying that he is seen as a "rich target for Democratic social-climbers, influence-peddlers, and all variety of money chasers." According to OpenSecrets, his most recent public giving was to Nikki Haley last year.

The same report says he works as a congressional analyst while also completing a doctorate.

Phoebe Gates
Melinda French Gates and Phoebe Gates
Melinda Gates and Phoebe Gates.

John Nacion/Variety

Phoebe Gates, 22, is the youngest of the Gates children.

After graduating from high school in 2021, she followed her sister to Stanford. She graduated in June after three years with a Bachelor of Science in human biology. Her mom, Melinda French Gates, delivered the university's commencement address.

In a story she wrote for Nylon, Gates documented her graduation day, including a party she cohosted that featured speeches from her famous parents and a piggyback ride from her boyfriend, Arthur Donald, the grandson of Sir Paul McCartney.

She has long shown an interest in fashion, interning at British Vogue and posting on social media from fashion weeks in Copenhagen, New York, and Paris. Sustainability is often a theme of her content, which highlights vintage and secondhand stores and celebrates designers who don't use real leather and fur.

That has culminated in her cofounding Phia, a sustainable fashion tech platform that launched in beta this fall. The site and its browser extension crawl secondhand marketplaces to find specific items in an effort to help shoppers find deals and prevent waste.

Gates shares her parents' passion for public health. She's attended the UN General Assembly with her mother and spent time in Rwanda with Partners in Health, a nonprofit that has received funding from the Gates Foundation.

Like her mother, Gates often publicly discusses issues of gender equality, including in essays for Vogue and Teen Vogue, at philanthropic gatherings, and on social media, where she frequently posts about reproductive rights.

She's given thousands to Democrats and Democratic causes, including to Michigan governor Gretchen Whitmer and the Democratic Party of Montana, per data from OpenSecrets. According to Puck, she receives a "giving allowance" that makes it possible for her to cut the checks.

Perhaps the most public of the Gates children — she's got over 450,000 Instagram followers and a partnership with Tiffany & Co. — she's given glimpses into their upbringing, including strict rules around technology. The siblings were not allowed to use their phones before bed, she told Bustle, and to get around the rule, she created a cardboard decoy.

"I thought I could dupe my dad, and it worked, actually, for a couple nights," she told the outlet earlier this year. "And then my mom came home and was like, 'This is literally a piece of cardboard you're plugging in. You're using your phone in your room.' Oh, my gosh, I remember getting in trouble for that."

It hasn't always been easy being Gates' daughter. In the Netflix documentary "What's Next? The Future With Bill Gates," she said she lost friends because of a conspiracy theory suggesting her father used COVID-19 vaccines to implant microchips into recipients.

"I've even had friends cut me off because of these vaccine rumors," she said.


ChatGPT has entered its Terrible Twos

ChatGPT logo repeated three times

ChatGPT, Tyler Le/BI

  • ChatGPT was first released two years ago.
  • Since then, its user base has doubled to 200 million weekly users.
  • Major companies, entrepreneurs, and users remain optimistic about its transformative power.

It's been two years since OpenAI released its flagship chatbot, ChatGPT.

And a lot has changed in the world since then.

For one, ChatGPT has helped turbocharge global investment in generative AI.

Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies. Tech companies on the S&P 500 have seen a 30% gain since January 2022, compared to only 15% for small-cap companies, Bloomberg reported.

Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.

Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.

Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.

Then, of course, there is ChatGPT's impact on regular users.

In August, it reached more than 200 million weekly active users, double the number it had the previous fall. In October, it rolled out a new search feature that provides "links to relevant web sources" when asked a question, introducing a serious threat to Google's dominance.

In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.

Business Insider asked ChatGPT what age means to it.

"Age, to me, is an interesting concept — it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.

Read the original article on Business Insider

From the 'godfathers of AI' to newer people in the field: Here are 17 people you should know — and what they say about the possibilities and dangers of the technology.

Godfathers of AI
Three of the "godfathers of AI" helped spark the revolution that's making its way through the tech industry — and all of society. They are, from left, Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.

Meta Platforms/Noah Berger/Associated Press

  • The field of artificial intelligence is booming and attracting billions in investment. 
  • Researchers, CEOs, and legislators are discussing how AI could transform our lives.
  • Here are 17 of the major names in the field — and the opportunities and dangers they see ahead. 

Investment in artificial intelligence is rapidly growing and on track to hit $200 billion by 2025. But the dizzying pace of development also means many people wonder what it all means for their lives. 

Major business leaders and researchers in the field have weighed in by highlighting both the risks and benefits of the industry's rapid growth. Some say AI will lead to a major leap forward in the quality of human life. Others have signed a letter calling for a pause on development, testified before Congress on the long-term risks of AI, and claimed it could present a more urgent danger to the world than climate change.

In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI — and its impact on our future. 

Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Computer scientist Geoffrey Hinton stood outside a Google building
Geoffrey Hinton, a trailblazer in the AI field, quit his job at Google and said he regrets his role in developing the technology.

Noah Berger/Associated Press

Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.

Hinton also worked at Google for over a decade but quit last spring so he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously. 

Hinton has since become an outspoken advocate for AI safety and has called it a more urgent risk than climate change. He's also signed a statement warning that mitigating the risk of extinction from AI should be a global priority.

Yoshua Bengio is a professor of computer science at the University of Montreal.
Yoshua Bengio
Yoshua Bengio has also been dubbed a "godfather" of AI.

Associated Press

Yoshua Bengio also earned the "godfather of AI" nickname after winning the Turing Award with Geoffrey Hinton and Yann LeCun.

Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the highest h-index — a metric for evaluating the cumulative impact of an author's scholarly output — in the world, according to his website. 

In addition to his academic work, Bengio also co-founded Element AI, a startup that develops AI software solutions for businesses that was acquired by the cloud company ServiceNow in 2020. 

Bengio has expressed concern about the rapid development of AI. He was one of 33,000 people who signed an open letter calling for a six-month pause on AI development. Elon Musk was also among the signatories.

"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."

When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.

Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

Sam Altman, the CEO of OpenAI, has catapulted to prominence in artificial intelligence since launching ChatGPT last November.
OpenAI's Sam Altman
OpenAI CEO Sam Altman is both optimistic about the changes AI will bring to society, but also says he loses sleep over the dangers of ChatGPT.

JASON REDMOND/AFP via Getty Images

Altman was already a well-known name in Silicon Valley long before, having served as the president of the startup accelerator Y Combinator.

While Altman has advocated for the benefits of AI, calling it the most tremendous "leap forward in quality of life for people," he's also spoken candidly about the risks it poses to humanity. He's testified before Congress to discuss AI regulation.

Altman has also said he loses sleep over the potential dangers of ChatGPT.

French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
Yann LeCun, chief AI scientist
Yann LeCun, one of the godfathers of AI, who won the Turing Award in 2018.

Meta Platforms

LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions based on videos of everyday events as a way to give them a form of common sense, the idea being that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.

LeCun has remained relatively mellow about the societal risks of AI compared with his fellow godfathers. He's previously said that concerns the technology could pose a threat to humanity are "preposterously ridiculous." He's also contended that AI chatbots like ChatGPT, which are trained on large language models, still aren't as smart as dogs or cats.

Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Fei-Fei Li
Former Google VP Fei-Fei Li is known for establishing ImageNet, a large visual database designed for visual object recognition.

Greg Sandoval/Business Insider

Li's research focuses on machine learning, deep learning, computer vision, and cognitively-inspired AI, according to her biography on Stanford's website.

She may be best known for establishing ImageNet — a large visual database designed for research in visual object recognition — and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies, including Google — where she was a VP and chief scientist for AI and machine learning — and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.

UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
Stuart Russell
AI researcher Stuart Russell, who is a University of California, Berkeley, professor.

JUAN MABROMATA / Staff/Getty Images

Russell published "Human Compatible" in 2019, in which he explored how humans and machines could coexist as machines become smarter by the day. Russell contended that the answer lies in designing machines that are uncertain about human preferences, so they wouldn't pursue their own goals above those of humans.

He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC-Berkeley faculty member Peter Norvig. 

Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text there was left for them to ingest. He has also said that the advancements in AI could spell the end of the traditional classroom.

Peter Norvig played a seminal role directing AI research at Google.
Peter Norvig
Stanford HAI fellow Peter Norvig, who previously led the core search algorithms group at Google.

Peter Norvig

He spent several years in the early 2000s directing the company's core search algorithms group and later moved into a role as the director of research, where he oversaw teams working on machine translation, speech recognition, and computer vision.

Norvig has also rotated through several academic institutions over the years as a former faculty member at UC-Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.

Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time, "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.

Timnit Gebru is a computer scientist who’s become known for her work in addressing bias in AI algorithms.
Timnit Gebru – TechCrunch Disrupt
After she departed from her role at Google in 2020, Timnit Gebru went on to found the Distributed AI Research Institute.

Kimberly White/Getty Images

Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team, where she published groundbreaking research on biases in machine learning.

But her research also spun into a larger controversy that she's said ultimately led to her being let go from Google in 2020. Google didn't comment at the time.

Gebru founded the Distributed AI Research Institute in 2021, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."

She's also warned that the AI gold rush will mean companies may neglect to implement necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."

British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
Andrew Ng
Coursera co-founder Andrew Ng said he thinks AI will be part of the solution to existential risk.

Steve Jennings / Stringer/Getty Images

The endeavor led to the Google Cat Project: a milestone in deep learning research in which a massive neural network was trained to detect YouTube videos of cats.

Ng also served as the chief scientist at the Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.

Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012, and five years later, founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.  

"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity — it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg. 

Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Daphne Koller, CEO and Founder of insitro.
Daphne Koller, CEO and Founder of Insitro.

Insitro

Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. She is a decorated academic, a MacArthur Fellow, a co-founder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.

In Koller's view, the biggest risks AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."

At the same time, she said, the benefits are too many and too significant to list. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.



Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Anthropic cofounder and president Daniela Amodei.
Anthropic cofounder and president Daniela Amodei.

Anthropic

Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left, in part, because Dario — OpenAI's lead safety researcher at the time — was concerned that OpenAI's deal with Microsoft would force it to release products too quickly, and without proper guardrails. 

At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being used by companies like Quora and Notion. Anthropic relies on what it calls a "Triple H" framework in its research: helpful, honest, and harmless. That means it relies on human input when training its models, including an approach called constitutional AI, in which a set of basic principles outlines how the AI should operate.

"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.

Demis Hassabis has said artificial general intelligence will be here in a few years.
DeepMind boss Demis Hassabis believes AGI will be here in a few years.
Demis Hassabis, the CEO and co-founder of machine learning startup DeepMind.

Samuel de Roman/Getty Images

Hassabis, a former child chess prodigy who studied at Cambridge and University College London, was nicknamed the "superhero of artificial intelligence" by The Guardian back in 2016. 

After a handful of research stints and a venture in video games, he founded DeepMind in 2010. He sold the AI lab to Google in 2014 for £400 million and has since worked there on algorithms to tackle issues in healthcare and climate change. In 2017, he also launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.

Hassabis has said the promise of artificial general intelligence — a theoretical concept that sees AI matching the cognitive abilities of humans — is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that developing AGI should be executed "in a cautious manner using the scientific method."

In 2022, DeepMind co-founder Mustafa Suleyman launched AI startup Inflection AI along with LinkedIn co-founder Reid Hoffman, and Karén Simonyan — now the company's chief scientist.
Mustafa Suleyman
Mustafa Suleyman, co-founder of DeepMind, launched Inflection AI in 2022.

Inflection

The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook. 

Its chatbot, Pi, which stands for personal intelligence, is trained on large language models similar to those behind OpenAI's ChatGPT or Google's Bard. Pi, however, is designed to be more conversational and to offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.

"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said

USC Professor Kate Crawford focuses on social and political implications of large-scale AI systems.
Kate Crawford
USC Professor Kate Crawford is the author of "Atlas of AI" and a researcher at Microsoft.

Kate Crawford

Crawford is also a senior principal researcher at Microsoft and the author of "Atlas of AI," a book that draws upon the breadth of her research to uncover how AI is shaping society.

Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."

She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."

Margaret Mitchell is the chief ethics scientist at Hugging Face.
Margaret Mitchell
Margaret Mitchell has headed AI projects at several big tech companies.

Margaret Mitchell

Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies including Microsoft and Google. 

In late 2020, Mitchell and Timnit Gebru — then the co-lead of Google's ethical artificial intelligence team — published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.

Now, at Hugging Face — an open-source data science and machine learning platform that was founded in 2016 — she's thinking about how to democratize access to the tools necessary to build and deploy large-scale AI models.

In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."

Navrina Singh is the founder of Credo AI, an AI governance platform.
Navrina Singh
Navrina Singh, the founder of Credo AI, says AI may help people reach their potential.

Navrina Singh

Credo AI is a platform that helps companies make sure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."

At the same time, she contends that algorithms right now lack the human judgement that's necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."

Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.
Richard Socher
Richard Socher believes we're still years from achieving AGI.

You.com

Socher believes we have a ways to go before AI development hits its peak or matches anything close to human intelligence.

One bottleneck in large language models is their tendency to hallucinate — a phenomenon where they convincingly spit out factual errors as truth. But by forcing them to translate questions into code, essentially "program" responses instead of verbalizing them, we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.

But that's just a short-term goal. Socher contends that we are years away from anything close to the industry's ambitious bid to create artificial general intelligence. He defines AGI as a form of intelligence that can "learn like humans" and have "the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people," and says it could take as little as 10 years, or as much as 200, to get there.

And if we really want to move the needle toward AGI, Socher said, humans might need to let go of the reins and their own profit motives, and build AI that can set its own goals.

"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI. 

Read the original article on Business Insider

Want to get into the AI industry? Head to Abu Dhabi.

Abu Dhabi
The United Arab Emirates is on a mission to become an AI powerhouse.

GIUSEPPE CACACE/AFP via Getty Images

  • The United Arab Emirates wants to become an AI leader by 2031.
  • It's leveraging its oil wealth to attract new talent and fund new research initiatives.
  • The UAE's AI minister believes we'll have "centers and nodes of excellence across the world."

The AI revolution is expanding far beyond Silicon Valley.

From the shores of Malta to the streets of Paris, hubs for AI innovation are forming worldwide. And the United Arab Emirates is emerging as a key center in the Middle East.

In October, the UAE made headlines by participating in the most lucrative funding round in Silicon Valley history: the $6.6 billion deal closed by OpenAI. The investment was made through MGX, a state-backed technology firm focused on artificial intelligence and semiconductors.

The move was part of the UAE's bid to become a global AI leader by 2031 through strategic initiatives, public engagement, and research investment. Last year, the country's wealthiest emirate, Abu Dhabi, launched Falcon — its first open-source large language model. State-backed AI firm G42 is also training large language models in Arabic and Hindi to bridge the gap between English-based models and native speakers of these languages.

Another indication of the UAE's commitment to AI is its appointment of Omar Sultan Al Olama as the country's AI Minister in 2017.

The minister acknowledges that the UAE faces tough competition from powerhouses like the United States and China, where private investment in AI technology in 2023 totaled $67.2 billion and $7.8 billion, respectively, according to Stanford's Center for Human-Centered Artificial Intelligence.

So he says he is embracing cooperation over competition.

"I don't think it's going to be a zero-sum game where it's only going to be AI that's developed in the US, or only going to be AI that's developed in China or the UAE," Al Olama said at an event hosted by the Atlantic Council, a DC think tank, in April. "What is going to happen, I think, is that we're going to have centers and nodes of excellence across the world where there are specific use cases or specific domains where a country or player or a company is doing better than everyone else."

The UAE's strengths are evident.

It is one of the wealthiest countries in the world, mostly due to its vast oil reserves. The UAE is among the world's 10 largest oil producers, with 96% of that coming from its wealthiest emirate, Abu Dhabi, according to the International Trade Administration.

Abu Dhabi's ruling family also controls several of the world's largest sovereign wealth funds, including the Abu Dhabi Investment Authority and Mubadala Investment Company, a founding partner of MGX.

These funds have been used to diversify the country's oil wealth and could now be diverted to funding new AI companies. AI could contribute $96 billion to the UAE economy by 2030, making up about 13.6% of its GDP, according to a report by PwC, the accounting firm.

But capital is only part of the equation. The bigger question is whether the tiny Gulf nation can attract the requisite talent to keep up with Silicon Valley.

Recent developments show promise. Between 2021 and 2023, the number of AI workers in the UAE quadrupled to 120,000, Al Olama said at the Atlantic Council event. In 2019, it rolled out a 'golden visa' program for IT professionals, making entry easier for AI experts. It's also making the most of its existing talent. In May, Dubai launched the world's biggest prompt engineering initiative. Its goal is to upskill 1 million workers over the next three years.

However, it's also faced criticism for its treatment of workers, especially lower-skilled migrant workers. Migrant workers comprise 88% of the country's population and have been subject to a range of labor abuses, including exposure to extreme heat, exploitative recruitment fees, and wage theft, according to Human Rights Watch. The UAE has responded by passing several labor laws that address protections for workers around hours, wages, and competition.

Abu Dhabi, meanwhile, has — over the last decade — become a nexus for AI research and education.

In 2010, New York University launched a branch in Abu Dhabi that has since developed a focus on AI. And, in 2019, Mohamed bin Zayed University of Artificial Intelligence opened as a "graduate research university dedicated to advancing AI as a global force for good." Professors from the university also helped organize the inaugural International Olympiad in Artificial Intelligence in August, which drew students from over 40 countries worldwide.

"Abu Dhabi may not directly surpass Silicon Valley, however, it has the potential to become a significant AI hub in its own right," Nancy Gleason, an advisor to leadership on AI at NYU Abu Dhabi and a professor of political science, told Business Insider by email. Its "true strengths lie in the leadership's strategic vision, substantial investments in AI research and compute capacity, and government-led initiatives in industry. The UAE has also made strategic educational investments in higher education like the Mohamed bin Zayed University of Artificial Intelligence and NYU Abu Dhabi."

Beyond that, she noted, it's "very nice to live here."

Read the original article on Business Insider

Return fraud is costing retailers billions. A new AI program can spot when scammers send back counterfeits.

lacoste polo oversized logo
Oversized crocodiles have met their match with Vrai AI counterfeit technology.

Ebay

  • Lacoste is using AI tech Vrai to detect counterfeit returns.
  • Return fraud costs retailers billions of dollars globally each year.
  • Amazon and other retailers face scams exploiting return policies for financial gain.

Spotting designer knockoffs is now easier than ever.

French luxury brand Lacoste is using Vrai, an AI technology developed by Cypheme, a leader in anti-counterfeit artificial intelligence, to catch scammers returning counterfeit items.

Trained on thousands of images of genuine merchandise, Vrai aims to distinguish real products from fakes with 99.7% accuracy, according to Semafor.

At its warehouses, Lacoste employees can snap a picture of a returned item with Vrai and verify its authenticity. The AI model can detect subtle discrepancies, from a slight variation in color to an extra tooth in the brand's signature crocodile logo.

Representatives for Lacoste and Cypheme did not respond to Business Insider's request for comment.

The technology combats return fraud — a growing practice of exploiting return and refund processes for financial gain. Often, it involves returning different items for a refund. Some companies have even received boxes full of bricks after customers banked refunds for items like televisions.

Total returns for the retail industry came to $743 billion in merchandise in 2023, according to a report released by the National Retail Federation and Appriss Retail. US retailers lost a little over $100 billion in return fraud, or around $13.70 for every $100 returned, up from $10.40 per $100 in 2022.

Major retailers are frequent targets of such scams. In July, Amazon filed a federal lawsuit accusing a Telegram group of stealing more than 10,000 items through fraudulent returns. Members of the group fabricated stories to convince Amazon customer service to refund their accounts, sometimes even using falsified police reports.

Amazon, along with other online giants like Walmart, Target, and Wayfair, was also targeted by a crime ring that recruited legitimate shoppers to purchase items, have them refunded, and then keep or resell the goods. According to a federal indictment, the group exploited a "no-return refunds" policy that allows customers to get refunds without physically returning items, an option many retailers have implemented to reduce return costs for both themselves and consumers.

Read the original article on Business Insider

AI power usage is growing so fast that tech leaders are racing to find energy alternatives

An IT technician stands in a data center and looks at a laptop

Gorodenkoff/Getty Images

  • AI models consume tons of energy and increase greenhouse gas emissions.
  • Tech firms and governments say an energy revolution must happen to match the pace of AI development.
  • Many AI leaders are rallying around nuclear energy as a potential solution.

Advances in AI technology are sending shockwaves through the power grid.

The latest generation of large language models requires significantly more computing power and energy than previous AI models. As a result, tech leaders are rallying to accelerate the energy transition, including investing in alternatives like nuclear energy.

Big Tech companies have committed to advancing net zero goals in recent years.

Meta and Google aim to achieve net-zero emissions across all their operations by 2030. Likewise, Microsoft aims to be "carbon negative, water positive, and zero waste" by 2030. Amazon aims to achieve net-zero carbon across its operations by 2040.

Major tech companies, including Amazon, Google, and Microsoft, have also struck deals with nuclear energy suppliers recently as they advance AI technology.

"Energy, not compute, will be the No. 1 bottleneck to AI progress," Meta CEO Mark Zuckerberg said on a podcast in April. Meta, which built the open-source large language model Llama, consumes plenty of energy and water to power its AI models.

Chip designer Nvidia, which skyrocketed into one of the most valuable companies in the world this year, has also ramped up efforts to become more energy efficient. Its next-generation AI chip, Blackwell, unveiled in March, has been marketed as being twice as fast as its predecessor, Hopper, and significantly more energy efficient.

Despite these advancements, Nvidia CEO Jensen Huang has said allocating substantial energy to AI development is a long-term game that will pay dividends as AI becomes more intelligent.

"The goal of AI is not for training. The goal of AI is inference," Huang said at a talk at the Hong Kong University of Science and Technology last week, referring to how an AI model applies its knowledge to draw conclusions from new data.

"Inference is incredibly efficient, and it can discover new ways to store carbon dioxide in reservoirs. Maybe it could discover new wind turbine designs, maybe it could discover new materials for storing electricity, maybe more effective materials for solar panels. We should use AI in so many different areas to save energy," he said.

Moving to nuclear energy

Many tech leaders argue the need for energy solutions is urgent and are investing in nuclear energy.

"There's no way to get there without a breakthrough," OpenAI CEO Sam Altman said at the World Economic Forum in Davos in January.

Altman has been particularly keen on nuclear energy. He invested $375 million in nuclear fusion company Helion Energy and has a 2.6% stake in Oklo, which is developing modular nuclear fission reactors.

The momentum behind nuclear energy also depends on government support. President Joe Biden has been a proponent of nuclear energy, and his administration announced in October it would invest $900 million in funding next-generation nuclear technologies.

Clean energy investors say government support is key to advancing a national nuclear agenda.

"The growing demand for AI, especially at the inference layer, will dramatically reshape how power is consumed in the US," Cameron Porter, general partner at venture capital firm Steel Atlas and investor in nuclear energy company Transmutex, told Business Insider by email. "However, it will only further net-zero goals if we can solve two key regulatory bottlenecks—faster nuclear licensing and access to grid connections—and address the two key challenges for nuclear power: high-level radioactive waste and fuel sourcing."

Porter is betting the incoming Trump administration will take steps to move the needle forward.

"Despite these challenges, we expect the regulatory issues to be resolved because, ultimately, AI is a matter of national security," he wrote.

AI's energy use is growing

Tech companies are seeking new energy solutions because their AI models consume vast amounts of energy. ChatGPT, powered by OpenAI's GPT-4, uses more than 17,000 times the electricity of an average US household to answer hundreds of millions of queries per day.

By 2030, data centers—which support the training and deployment of these AI models—will constitute 11-12% of US power demand, up from a current rate of 3-4%, a McKinsey report said.

Tech companies have turned to fossil fuels to satisfy short-term demands, which has increased greenhouse gas emissions. For example, Google's greenhouse gas emissions jumped by 48% between 2019 and 2023 "primarily due to increases in data center energy consumption and supply chain emissions," the company said in its 2024 sustainability report.


MBB explained: How hard it is to get hired and what it's like to work for the prestigious strategy consulting firms, McKinsey, Bain, and BCG

McKinsey logo on building.
MBB refers to the top three strategy consulting firms, McKinsey, Bain, and BCG.

FABRICE COFFRINI/AFP/Getty Images

  • McKinsey, Bain, and BCG are top strategy consulting firms with low acceptance rates.
  • These firms, known as MBB, serve Fortune 500 companies and offer competitive salaries.
  • MBB firms provide prestigious exit opportunities, often leading to senior roles in various sectors.

McKinsey & Company, Bain & Company, and Boston Consulting Group — collectively referred to as MBB — are widely considered the top three strategy consulting firms in the world.

Sometimes referred to as the Big Three, MBB firms are among the most prestigious consulting firms and their clients include many Fortune 500 companies as well as government agencies.

CEOs often turn to these firms for their expertise in business strategy and solving complex problems, whether it's handling mergers and acquisitions or budgeting and cutting costs.

Jobs at MBB firms are famously difficult to land and are among the most sought-after positions for MBA students at top schools. The acceptance rates for these firms are less than 1%. Applicants to top business schools are also far more likely to be accepted into MBA programs if they come from an MBB.

MBB firms typically offer highly competitive salaries, generally paying more than other consulting firms, and often come with demanding work responsibilities and expectations.

MBB firms are also well known for the exit opportunities they provide — employees at these firms are highly sought after for other jobs and often end up with senior positions at Fortune 500 companies, startups, hedge funds, and private equity firms, or start their own companies.

The Big Three is sometimes confused with the Big Four, which refers to the professional services firms Deloitte, EY, KPMG, and PwC. The Big Four are the largest accounting firms in the world, though they also offer consulting and other services.

The MBB firms are strategy and management consulting firms. Here's how they compare.

McKinsey & Company

McKinsey is typically considered the most prestigious of the Big Three. It's also the oldest and was founded in 1926.

Headquartered in New York City, McKinsey is also the largest of the MBBs, with more than 45,000 employees across 130 offices worldwide.

McKinsey generated around $16 billion in revenue in 2023 and is led by Bob Sternfels, who serves as the firm's global managing partner and chair of the board of directors.

McKinsey told Business Insider it receives more than one million job applications each year and that the company planned to hire about 6,000 people in 2024, about the same as the year prior.

That would mean McKinsey hires around 0.6% of applicants.

McKinsey's average base salary for new hires is $112,000 out of undergrad and $192,000 for MBAs, according to the company Management Consulted, which provides students with coaching for consulting interviews.

McKinsey is notorious for its demanding workload, with even entry-level analysts working 12 to 15 hours a day. One former employee told BI that the experience took a toll on her mental health but she came away with confidence and a Rolodex of contacts.

Boston Consulting Group

BCG was founded in Boston, where it is still headquartered, in 1963. The company had 32,000 employees as of 2023 and 128 offices worldwide.

BCG had a global revenue of about $12 billion in 2023.

BCG is led by Christoph Schweizer, who has served as CEO since 2021, and Rich Lesser, the firm's global chair.

BCG's head of talent, Amber Grewal, told BI more than one million people apply to work at the company each year and that only 1% make the cut.

Amid the boom in generative AI, the firm is hiring for a wider mix of roles than it did in years past. "It's going to change the mix of people and expertise that we need," Alicia Pittman, BCG's global people team chair, previously told BI.

The average base salary at BCG for hires out of undergrad was $110,000 in 2023 and about $190,000 for MBAs and PhDs, according to Management Consulted.

Bain & Company

Bain was founded in 1973 and is also headquartered in Boston.

The smallest of the Big Three, Bain has around 19,000 employees with offices in 65 cities around the world.

Bain's revenue in 2023 reached $6 billion, according to the Financial Times.

Bain is helmed by Christophe De Vusser, who serves as the worldwide managing partner and CEO.

Bain's average base salary for undergrads in the US is around $90,000, while for new hires with an MBA or PhD it is around $165,000, according to Management Consulted.

Despite the grueling hours and high expectations, Bain is known for a collaborative culture.

"We have a motto, 'A Bainie never lets another Bainie fail,'" Davis Nguyen, a former consultant at the firm, previously told BI. "We all work together from entry-level associate consultants to senior partners. I think that is what makes Bain's culture what it is — that we all work together to achieve a goal and make everyone around us better."

Bain is also considered the "frattiest" of the top firms and is known for a "work hard, play hard" culture, according to Management Consulted.


'Big Four' salaries: How much accountants and consultants make at Deloitte, PwC, KPMG, and EY

three office employees walking and talking together in an office
Even an entry-level consultant at the "Big Four" can earn over $200,000.

Luis Alvarez/Getty Images

  • The "Big Four" accounting firms employ about 1.5 million people worldwide. 
  • Many of these employees make six-figure salaries and are eligible for annual bonuses.  
  • Business Insider analyzed data to determine how much employees are paid at these firms. 

The so-called "Big Four" accounting firms — Deloitte, PricewaterhouseCoopers (PwC), KPMG, and Ernst & Young (EY) — are known for paying their staff high salaries.

An entry-level consultant who just graduated from business school can make over $200,000 a year at the four firms when you include base salary, bonuses, and relocation expenses. 

Several of these firms have faced layoffs and implemented hiring freezes over the past year as demand for consulting services has waned. Still, they're a good bet for anyone looking to land a six-figure job straight out of school. 

Business Insider analyzed the US Office of Foreign Labor Certification's 2023 disclosure data for permanent and temporary foreign workers to find out what PwC, KPMG, EY, and Deloitte paid US-based employees for jobs ranging from entry-level to executive roles. We looked through entries specifically for roles related to management consulting and accounting. This data does not reflect performance bonuses, signing bonuses, and compensation other than base salaries.

Here's how much Deloitte, PwC, KPMG, and EY paid their hires.  

Deloitte paid senior managers between $91,603 and $288,000
Deloitte logo
Deloitte pays its top managers salaries close to mid six figures.

Artur Widak/Getty Images

With 457,000 employees worldwide, Deloitte employs the most people of any of the 'Big Four.' It pulled in close to $64.9 billion in revenue for the 2023 fiscal year, marking a 9.4% increase from 2022.

Deloitte did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for consulting and accounting roles: 

  • Analyst: $49,219 to $337,500 (includes advisory, business, project delivery, management, and systems)
  • Senior business analyst: $97,739 
  • Audit and assurance senior assistant: average $58,895
  • Consultant: $54,475 to $125,000 (includes advisory, technology strategy, and strategic services)  
  • Global business process lead: $180,000 
  • Senior consultant: average $122,211
  • Manager: average $152,971
  • Tax manager: average $117,268
  • Senior manager: $91,603 to $288,000  
  • Managing director: average $326,769
  • Tax managing director: average $248,581
  • Principal: $225,000 to $875,000
Principals at PricewaterhouseCoopers (PwC) can make well over $1 million.
logo of PwC
PwC.

Danish Siddiqui/Reuters

PricewaterhouseCoopers (PwC) is a global professional services firm with over 370,000 employees worldwide. The firm reported a revenue of more than $53 billion for the 2023 fiscal year, marking a 5.6% increase from 2022. 

PwC did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for both consulting and accounting roles. 

  • Associate: $68,000 to $145,200
  • Senior associate: $72,000 to $197,000 
  • Manager: $114,300 to $231,000
  • Senior manager: $142,000 to $251,000 
  • Director: $165,000 to $400,000  
  • Managing director: $260,000 to $330,600
  • Principal: $1,081,182 to $1,376,196
KPMG offers managing directors anywhere between $230,000 and $485,000
The logo of KPMG, a multinational tax advisory and accounting services company, hangs on the facade of a KPMG offices building on January 22, 2021 in Berlin, Germany.
KPMG managing directors can earn close to half a million.

Sean Gallup/Getty Images

KPMG has over 273,000 employees worldwide. The firm reported a revenue of $36 billion for the 2023 fiscal year, marking a 5% increase from 2022. 

KPMG did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for consultants, accountants, and leadership at KPMG. 

  • Associate: $61,000 to $140,000
  • Senior associate: $66,248 to $215,000
  • Director: $155,600 to $260,000
  • Associate director: $155,700 to $196,600 
  • Specialist director: $174,000 to $225,000
  • Lead specialist: $140,500 to $200,000
  • Senior specialist: $134,000 to $155,000
  • Manager: $99,445 to $293,800
  • Senior manager: $110,677 to $332,800
  • Managing director: $230,000 to $485,000
Statisticians at Ernst & Young (EY) make salaries ranging from $66,000 to $283,500
Pedestrians walk in front of the entrance to EY's head office in London.
EY spends $500 million annually on learning for its employees.

TOLGA AKMEN / Contributor / Getty

EY employs close to 400,000 people worldwide. For the 2023 fiscal year, the firm reported a record revenue of $49.4 billion, marking a 9.3% jump from 2022. 

The firm did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for consultants, accountants, auditors, and chief executives at the firm: 

  • Accountants and auditors: $54,000 to $390,000
  • Appraisers and assessors of real estate: $166,626 to $185,444
  • Computer systems analyst: $62,000 to $367,510
  • Management analyst: $49,220 to $337,500
  • Statistician: $66,000 to $283,500
  • Financial risk specialist: $62,000 to $342,400
  • Actuaries: $84,800 to $291,459
  • Economist: $77,000 to $141,000
  • Logisticians: $72,000 to $275,000
  • Mathematicians: $165,136 to $377,000
  • Computer and information systems manager: $136,167 to $600,000
  • Financial manager: average $320,000

Aman Kidwai and Weng Cheong contributed to an earlier version of this post. 

