Temu tops 2024 Apple App Store downloads, surpassing TikTok and ChatGPT in popularity.
The Chinese e-commerce app offers big discounts on a wide range of products.
Americans appear to be exploring budget-friendly options through in-app deals.
The App Store favorites of 2024 include social media platforms and one popular AI assistant, but the most downloaded app of the year was Temu.
The Chinese-owned e-commerce app was downloaded more times this year than TikTok, Threads, or ChatGPT, according to Apple. It's become known for big discounts on various products, from tech gadgets to apparel.
Temu, owned by PDD Holdings, is particularly popular among Gen Z consumers in the US. Gen Zers between 18 and 24 downloaded it 42 million times during the first 10 months of 2024, according to the app analytics firm Appfigures, which pulled data from iOS and Android users.
The e-commerce giant launched in the US in 2022 and has had a meteoric rise since then. PDD Holdings' third-quarter sales grew 44% from the same period in 2023 to $14.2 billion, based on exchange rates as of September 30.
It has invested millions to market to American shoppers. Three Temu ads aired during the Super Bowl, where a single 30-second spot during the highly viewed game can cost $7 million.
With Donald Trump threatening high tariffs on Chinese goods, Temu's popularity could be at risk if it resorts to raising prices to offset a possible 60% levy on its products.
Apps from retailers Amazon, Shein, and McDonald's also made the Apple App Store's top 20 most-downloaded list this year — indicating that consumers were on the hunt for a deal across categories.
McDonald's has found success in using targeted in-app promotions to build loyalty among its customers.
The chain's head of US restaurants said earlier this year that loyalty customers visit 15% more often and spend nearly twice as much as non-loyalty customers, with loyalty platform sales expected to hit $45 billion by 2027.
Amazon, for its part, has sought to capitalize on Temu and Shein's low-price appeal with a new Haul section, which is also an app-only shopping experience.
As former Starbucks CEO Laxman Narasimhan was fond of saying, "The best offers are in the app."
Suchir Balaji, a former OpenAI researcher, was found dead on Nov. 26 in his apartment, reports say.
Balaji, 26, was an OpenAI researcher of four years who left the company in August.
He had accused his employer of violating copyright law with its highly popular ChatGPT model.
Suchir Balaji, a former OpenAI researcher of four years, was found dead in his San Francisco apartment on November 26, according to multiple reports. He was 26.
Balaji had recently criticized OpenAI over how the startup collects data from the internet to train its AI models. One of his jobs at OpenAI was to gather this information for the development of the company's powerful GPT-4 AI model, and he had become concerned that the practice could undermine how content is created and shared on the internet.
A spokesperson for the San Francisco Police Department told Business Insider that "no evidence of foul play was found during the initial investigation."
David Serrano Sewell, executive director of the city's office of chief medical examiner, told the San Jose Mercury News "the manner of death has been determined to be suicide." A spokesperson for the city's medical examiner's office did not immediately respond to a request for comment from BI.
"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson said in a statement to BI.
In October, Balaji published an essay on his personal website that raised questions around what is considered "fair use" and whether it can apply to the training data OpenAI used for its highly popular ChatGPT model.
"While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data," Balaji wrote. "If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use.' Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use."
Balaji argued in his personal essay that training AI models on masses of data copied free from the internet potentially damages online knowledge communities.
He cited a research paper that described the example of Stack Overflow, a coding Q&A website that saw big declines in traffic and user engagement after ChatGPT and AI models such as GPT-4 came out.
Large language models and chatbots answer user questions directly, so there's less need for people to go to the original sources for answers now.
In the case of Stack Overflow, chatbots and LLMs are answering coding questions, so fewer people visit Stack Overflow to ask that community for help. This, in turn, means the coding website generates less new human content.
Elon Musk has warned about this, calling the phenomenon "Death by LLM."
The New York Times sued OpenAI last year, accusing the startup and Microsoft of "unlawful use of The Times's work to create artificial intelligence products that compete with it."
In an interview with the Times published in October, Balaji said chatbots like ChatGPT are stripping away the commercial value of people's work and services.
"This is not a sustainable model for the internet ecosystem as a whole," he told the publication.
In a statement to the Times about Balaji's accusations, OpenAI said: "We build our A.I. models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness."
Balaji was later named in the Times' lawsuit against OpenAI as a "custodian," or an individual who holds documents relevant to the case, according to a letter filed on November 18 that was viewed by BI.
If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.
OpenAI launched its widely anticipated video feature for ChatGPT's Advanced Voice Mode.
It allows users to incorporate live video and screen sharing into conversations with ChatGPT.
ChatGPT can interpret emotions, assist with homework, and provide real-time visual context.
ChatGPT's Advanced Voice Mode can now help provide real-time design tips for your home, assistance with math homework, or instant replies to your texts from the Messages app.
After teasing the public with a glimpse of the chatbot's ability to "reason across" vision along with text and audio during OpenAI's Spring Update in May, the company finally launched the feature on Thursday as part of day six of OpenAI's "Shipmas."
"We are so excited to start the rollout of video and screen share in Advanced Voice today," the company said in the livestream on Thursday. "We know this is a long time coming."
OpenAI initially said the voice and video features would be rolling out in the weeks after its Spring Update. However, Advanced Voice Mode didn't end up launching to users until September, and the video mode didn't come out until this week.
The new capabilities help provide more depth in conversations with ChatGPT by adding "realtime visual context" with live video and screen sharing. Users can access the live video by selecting the Advanced Voice Mode icon in the ChatGPT app and then choosing the video button on the bottom far left.
In the livestream demonstration on Thursday, ChatGPT helped an OpenAI employee make pour-over coffee. The chatbot noticed details like what the employee was wearing and then walked him through the steps of making the drink, elaborating on certain parts of the process when asked. The chatbot also gave him feedback on his technique.
To share your screen with ChatGPT, hit the drop-down menu and select "Share Screen." In the "Shipmas" demo, ChatGPT could identify that the user was in the Messages app, understand the message sent, and then help formulate a response after the user asked.
During the company's Spring Update, OpenAI showed off some other uses of the video mode. The chatbot was able to interpret emotions based on facial expressions and also demonstrated its ability to act as a tutor. OpenAI Research Lead Barret Zoph walked through an equation on a whiteboard (3x+1=4) and ChatGPT provided him with hints to find the value of x.
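The whiteboard problem is a single step of algebra, which is what made it workable as a hint-driven tutoring demo:

```latex
3x + 1 = 4 \;\Longrightarrow\; 3x = 3 \;\Longrightarrow\; x = 1
```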
The feature had a couple of stumbles during the Spring Update demonstration, like referring to one of the employees as a "wooden surface" or trying to solve a math problem before it was shown.
Now that it's out, we decided to give the feature a whirl — and so far, it seems pretty impressive.
We showed the chatbot an office plant and asked it to tell us about the plant, assess whether it's healthy, and explain what the watering schedule should look like. The chatbot accurately described the browning and drying on the leaf tips and identified it as an aloe vera plant, which looked right to us.
The new video feature will be rolling out this week in the latest version of the ChatGPT mobile app to Team and most Plus and Pro users. The feature isn't available in the EU, Switzerland, Iceland, Norway, and Liechtenstein yet, but OpenAI said it will be as soon as possible.
I asked ChatGPT to come up with gift ideas for my dad, mom, and sister.
The AI tool gave me unique, thoughtful suggestions on what to get my parents.
I wasn't as impressed with its gift ideas for my sister, but overall, ChatGPT did a great job.
Although I love Christmas shopping and gift-giving, finding unique, meaningful gifts for my family can be difficult year after year.
Determined to switch things up, I turned to ChatGPT to help me come up with some gift ideas for them.
My hope was that the AI service would produce ideas that I wouldn't have thought of otherwise, with suggestions more creative than just another cookbook for my mom or band T-shirt for my sister.
Here's how it went.
Going into the holiday season, I was most worried about what to get my dad
He doesn't care much for material things, so I was curious if ChatGPT could suggest practical gifts or experiences he'd appreciate.
Here's the prompt I gave ChatGPT:
Please give me unique gift recommendations for what to get my dad for Christmas based on his interests. He loves anything about World War II history, is trying to learn Spanish via Duolingo, always rewatches "Breaking Bad," is on the keto diet, and loves making breakfast food.
In total, ChatGPT gave me 19 suggestions — three for each of the five interests I mentioned, along with additional ideas under categories suggesting quirky and personalized gifts.
I was most impressed with ChatGPT's suggestions for my dad
Some of the ideas ChatGPT gave me included a personalized World War II history book, Duolingo merchandise, a Los Pollos Hermanos (a restaurant from "Breaking Bad") apron, a keto snack-box subscription, and gourmet bacon.
I was impressed, as these were all ideas I wouldn't have come up with on my own. However, my favorite suggestions were under ChatGPT's "Fun and Quirky" and "Personalized Gift" sections.
The quirky ideas included a World War II-themed board game like Axis & Allies and a movie-night pack comprising a collection of Spanish-language films (with snacks to enjoy while watching).
Under the personalized gift section, ChatGPT suggested a keto-friendly breakfast basket with treats like low-carb muffins and nut butters.
Because my dad isn't really into collecting memorabilia, I decided the best idea would be to combine two ideas and pair the Spanish movie-night pack with an assortment of keto-friendly snacks.
I think he'd appreciate the experience of watching movies together. I may also check out some keto-snack-box-subscription websites for ideas on what to put in his basket.
I figured my mom would be easier to shop for
Going into this holiday season, I was a bit less worried about what to get my mom because she plans to retire next year and is looking for more hobbies to keep her busy.
Still, I didn't have anything particular in mind, which is where ChatGPT came in handy.
I asked it to come up with gift ideas based on this prompt:
Now, can you help me come up with ideas for my mom based on her interests? She is super excited to go to Iceland for the first time next year, is always trying to find low-carb, low-sugar TikTok recipes, wants to get more into exercising (recently bought a Peloton and Apple Watch), and is overall just looking for more hobbies to pick up when she retires next year.
It gave me 22 suggestions in total — four for each of the four points I mentioned and additional ideas under categories suggesting personalized and mindfulness-related ideas.
ChatGPT came up with some pretty unique ideas for my mom
Among the ideas ChatGPT suggested were a packing kit for Iceland that includes items like a travel adapter and language guide, a personalized binder of her favorite TikTok recipes, Apple Watch bands, and cooking or baking classes to enjoy in retirement.
Compared to my dad's results, I was less impressed with the additional categories ChatGPT created for my mom. Under the "Something Personalized" category, it suggested a customized Icelandic map, a personalized fitness-tracker case, and motivational-quote wall art. In my opinion, none of these seemed very practical or creative.
I thought the "Mindfulness and Relaxation" category had much better ideas: a subscription box for relaxation, a weighted blanket, and an indoor herb-garden kit.
A weighted blanket isn't likely something she'd buy for herself, but I can imagine her getting a lot of use out of it while unwinding after a long day. She's also been trying to eat healthier, so an indoor-herb-garden kit could lead her to a fun new hobby while allowing her to add fresh garnishes to her dishes.
I also liked the personalized recipe-binder idea since my mom usually just watches the same videos over and over again to remember the ingredients. Writing down and compiling her favorite TikTok recipes would be a practical and affordable gift.
I already had a gift idea in mind for my sister, so I was less reliant on the ChatGPT results
I was leaning toward getting my sister concert tickets for Christmas, but I still wanted to see what ideas ChatGPT had.
I figured if any of them stood out, I could give her another gift in addition to the tickets — or just replace them altogether.
Here's the information I gave ChatGPT:
Can you now help me come up with unique Christmas gift ideas for my sister based on her interests and hobbies? My sister loves everything music (she plays five instruments), likes unique party games, lives in San Diego, is graduating from college next year, is going to Bali next year, and likes to get merchandise from her favorite artists.
It gave me 26 gift suggestions, with ideas specific to all six of the points I mentioned and more under a category titled "Something Fun & Personalized."
None of the ideas for my sister blew me away
Although ChatGPT gave me the most ideas for my sister, I was actually the least impressed with these suggestions. However, this may have been because I already had an idea of what to get her.
Some of the ideas it gave me were a custom instrument case, specific party games (most of which she already owned), a Bali guidebook, a memory box to keep mementos from college, and merchandise from San Diego or her favorite artists.
These ideas seemed a lot more generic than the ones it produced for my mom and dad. For example, I wouldn't have thought to put together a TikTok-recipe binder for my mom or a Spanish movie night for my dad.
However, there weren't any ideas for my sister that I thought were especially unique or practical.
Perhaps it was due to the types of interests I entered for my sister, but I wouldn't choose any of those gifts over — or even as an addition to — concert tickets for her.
Before making any future holiday purchases, I'll consult ChatGPT first
Despite being slightly disappointed with ChatGPT's suggestions for my sister, I'll definitely be taking some of the ideas it gave me for my parents.
Although the AI tool may not have all the answers for mind-blowing, personalized gifts, I think it's a decent place to start if you need some ideas for brainstorming.
Based on this success, I plan to return to the platform to ask for gift suggestions for upcoming holidays and birthdays.
Google's search share slipped from June to November, new survey finds.
ChatGPT gained market share over the period, potentially challenging Google's dominance.
Still, generative AI features are benefiting Google, increasing user engagement.
ChatGPT is gaining on Google in the lucrative online search market, according to new data released this week.
In a recent survey of 1,000 people, OpenAI's chatbot was the top search provider for 5% of respondents, up from 1% in June, according to brokerage firm Evercore ISI.
Millennials drove the most adoption, the firm added in a research note sent to investors.
Google still dominates the search market, but its share slipped. According to the survey results, 78% of respondents said their first choice was Google, down from 80% in June.
It's a good business to be a gatekeeper
A few percentage points may not seem like much, but controlling how people access the world's online information is a big deal. It's what fuels Google's ads business, which produces the bulk of its revenue and huge profits. Microsoft Bing only has 4% of the search market, per the Evercore report, yet it generates billions of dollars in revenue each year.
ChatGPT's gains, however slight, are another sign that Google's status as the internet's gatekeeper may be under threat from generative AI. This new technology is changing how millions of people access digital information, sparking a rare debate about the sustainability of Google's search dominance.
OpenAI launched a full search feature for ChatGPT at the end of October. It's also got a deal with Apple this year that puts ChatGPT in a prominent position on many iPhones. Both moves are a direct challenge to Google. (Axel Springer, the owner of Business Insider, has a commercial relationship with OpenAI).
ChatGPT user satisfaction vs. Google
When the Evercore analysts drilled down on the "usefulness" of Google's AI tools, ChatGPT, and Copilot, Microsoft's consumer AI helper, across 10 different scenarios, they found intriguing results.
There were a few situations where ChatGPT beat Google on satisfaction by a pretty wide margin: people learning specific skills or tasks, wanting help with writing and coding, and looking to be more productive at work.
It even had a 4% lead in a category that suggests Google shouldn't sleep too easy: people researching products and pricing online.
Google is benefiting from generative AI
Still, Google remains far ahead, and there were positive findings for the internet giant from Evercore's latest survey.
Earlier this year, Google released Gemini, a ChatGPT-like helper, and rolled out AI Overviews, a feature that uses generative AI to summarize many search results. In the Evercore survey, 71% of Google users said these tools were more effective than the previous search experience.
In another survey finding, among people using tools like ChatGPT and Gemini, 53% said they're searching more. That helps Google as well as OpenAI.
What's more, the tech giant's dominance hasn't dipped when it comes to commercial searches: people looking to buy stuff like iPhones and insurance. This suggests Google's market share slippage is probably more about queries for general information, meaning Google's revenue growth from search is probably safe for now.
So in terms of gobbling up more search revenue, ChatGPT has its work cut out.
Evercore analyst Mark Mahaney told BI that even a 1% share of the search market is worth roughly $2 billion a year in revenue. But that only works if you can make money from search queries as well as Google does.
"That's 1% share of commercial searches and assuming you can monetize as well as Google — and the latter is highly unlikely in the near or medium term," he said.
OpenAI released the full version of its o1 reasoning model on Thursday.
It says the o1 model, initially previewed in September, is now multimodal, faster, and more precise.
It was released as part of OpenAI's 12-day product and demo launch, dubbed "shipmas."
On Thursday, OpenAI released the full version of its hot new reasoning model as part of the company's 12-day sprint of product launches and demos.
The model, known as o1, was released in a preview mode in September. OpenAI CEO Sam Altman said during day one of the company's livestream that the latest version was more accurate, faster, and multimodal. Research scientists on the livestream said an internal evaluation indicated it made major mistakes about 34% less often than the o1 preview mode.
The model, which seems geared toward scientists, engineers, and coders, is designed to solve thorny problems. The researchers said it's the first model that OpenAI trained to "think" before it responds, meaning it tends to give more detailed and accurate responses than other AI helpers.
To demonstrate o1's multimodal abilities, they uploaded a photo of a hand-drawn system for a data center in space and asked the program to estimate the cooling-panel area required to operate it. After about 10 seconds, o1 produced what would appear to a layperson as a sophisticated essay rife with equations, ending with what was apparently the right answer.
The researchers think o1 should be useful in daily life, too. Whereas the preview version could think for a while if you merely said hi, the latest version is designed to respond faster to simpler queries. In Thursday's livestream, it was about 19 seconds faster than the old version at listing Roman emperors.
All eyes are on OpenAI's releases over the next week or so, amid a debate about how much more dramatically models like o1 can improve. Tech leaders are divided on this issue; some, like Marc Andreessen, argue that AI models aren't getting noticeably better and are converging to perform at roughly similar levels.
With its 12-day deluge of product news, dubbed "shipmas," OpenAI may be looking to quiet some critics while spreading awkward holiday cheer.
"It'll be a way to show you what we've been working on and a little holiday present from us," Altman said on Thursday.
Elon Musk helped found OpenAI, but he has frequently criticized it in recent years.
Musk filed a lawsuit against OpenAI in August and just amended it to include Microsoft.
Here's a history of Musk and Altman's working relationship.
Elon Musk and Sam Altman lead rival AI firms and now take public jabs at each other — but it wasn't always like this.
Years ago, the two cofounded OpenAI, which Altman now leads. Musk departed OpenAI, which created ChatGPT, in 2018, and recently announced his own AI venture, xAI.
There is enough bad blood that Musk sued OpenAI and Altman, accusing them in the suit of betraying the firm's founding principles, before dropping the lawsuit. The billionaire then filed a new one a few months later, claiming he was "deceived" into cofounding the company. In November, he amended it to include Microsoft as a defendant, and his lawyers accused the two companies of engaging in monopolistic behavior. Microsoft is an investor in OpenAI.
Two weeks later, Musk's lawyers filed a motion requesting a judge to bring an injunction against OpenAI that would block it from dropping its nonprofit status. In the filing, Musk accused OpenAI and Microsoft of exploiting his donations to create a for-profit monopoly.
Here's a look at Musk and Altman's complicated relationship over the years:
Musk and Altman cofounded OpenAI, the creator of ChatGPT, in 2015, alongside other Silicon Valley figures, including Peter Thiel, LinkedIn cofounder Reid Hoffman, and Y Combinator cofounder Jessica Livingston.
The group aimed to create a nonprofit focused on developing artificial intelligence "in the way that is most likely to benefit humanity as a whole," according to a statement on OpenAI's website from December 11, 2015.
At the time, Musk said that AI was the "biggest existential threat" to humanity.
"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," a statement announcing the founding of OpenAI reads.
Musk stepped down from OpenAI's board of directors in 2018.
With his departure, Musk also backed out of a commitment to provide additional funding to OpenAI, a person involved in the matter told The New Yorker.
"It was very tough," Altman told the magazine of the situation. "I had to reorient a lot of my life and time to make sure we had enough funding."
It was reported that Sam Altman and other OpenAI cofounders had rejected Musk's proposal to run the company in 2018.
Semafor reported in 2023 that Musk wanted to run the company on his own in an attempt to beat Google. But when his offer to run the company was rejected, he pulled his funding and left OpenAI's board, the news outlet said.
In 2019, Musk shared some insight on his decision to leave, saying one of the reasons was that he "didn't agree" with where OpenAI was headed.
"I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX," he tweeted. "Also, Tesla was competing for some of same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms."
Musk has taken shots at OpenAI on several occasions since leaving.
Two years after his departure, Musk said, "OpenAI should be more open" in response to an MIT Technology Review article reporting that there was a culture of secrecy there, despite OpenAI frequently proclaiming a commitment to transparency.
In December 2022, days after OpenAI released ChatGPT, Musk said the company had prior access to the database of Twitter — now owned by Musk — to train the AI chatbot and that he was putting that on hold.
"Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true," he said.
Musk was reportedly furious about ChatGPT's success, Semafor reported in 2023.
In February 2023, Musk doubled down, saying OpenAI as it exists today is "not what I intended at all."
"OpenAI was created as an open source (which is why I named it "Open" AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all," he said in a tweet.
Musk repeated this assertion a month later.
"I'm still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn't everyone do it?" he tweeted.
Musk was one of more than 1,000 people who signed an open letter calling for a six-month pause on training advanced AI systems.
The March 2023 letter, which also received signatures from several AI experts, cited concerns about AI's potential risks to humanity.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.
But while he was publicly calling for the pause, Musk was quietly building his own AI competitor, xAI, The New Yorker reported in 2023. He launched the company in March 2023.
Altman has addressed some of Musk's gripes about OpenAI.
"To say a positive thing about Elon, I think he really does care about a good future with AGI," Altman said last year on an episode of the "On With Kara Swisher" podcast, referring to artificial general intelligence.
"I mean, he's a jerk, whatever else you want to say about him — he has a style that is not a style that I'd want to have for myself," Altman told Swisher. "But I think he does really care, and he is feeling very stressed about what the future's going to look like for humanity."
In response to Musk's claim that OpenAI has turned into "a closed source, maximum-profit company effectively controlled by Microsoft," Altman said on the podcast, "Most of that is not true, and I think Elon knows that."
Altman has also referred to Musk as one of his heroes.
In a March 2023 episode of Lex Fridman's podcast, Altman also said, "Elon is obviously attacking us some on Twitter right now on a few different vectors."
In a May 2023 talk at University College London, Altman was asked what he's learned from various mentors, Fortune reported. He answered by speaking about Musk.
"Certainly learning from Elon about what is just, like, possible to do and that you don't need to accept that, like, hard R&D and hard technology is not something you ignore, that's been super valuable," he said.
Musk later briefly unfollowed Altman on Twitter before refollowing him; separately, Altman poked fun at Musk's claim to be a "free speech absolutist."
Twitter took aim at posts linking to rival Substack in 2023, forbidding users from retweeting or replying to tweets containing such links, before reversing course. In response to a tweet about the situation, Altman tweeted, "Free speech absolutism on STEROIDS."
Altman joked that he'd watch Musk and Mark Zuckerberg's rumored cage fight.
"I would go watch if he and Zuck actually did that," he said at the Bloomberg Technology Summit in June 2023, though he said he doesn't think he would ever challenge Musk in a physical fight.
Altman also repeated several of his previous remarks about Musk's position on AI.
"He really cares about AI safety a lot," Altman said at Bloomberg's summit. "We have differences of opinion on some parts, but we both care about that and he wants to make sure we, the world, have the maximal chance at a good outcome."
Separately, Altman told The New Yorker in August 2023 that Musk has a my-way-or-the-highway approach to issues more broadly.
"Elon desperately wants the world to be saved. But only if he can be the one to save it," Altman said.
Musk first sued Altman and OpenAI in March 2024.
He sued OpenAI, Altman, and cofounder Greg Brockman in March, alleging that the company's direction in recent years violated its founding principles.
His lawyers alleged OpenAI "has been transformed into a closed-source de facto subsidiary of the largest technology company in the world" and is "refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity."
The lawsuit alleges that OpenAI executives played on Musk's concerns about the existential risks of AI and "assiduously manipulated" him into cofounding the company as a nonprofit. The intent of the company was to focus on building AI safely in an open approach to benefit humanity, the lawsuit says.
The company has since decided to take a for-profit approach.
OpenAI responded to the lawsuit by stating that "Elon's prior emails continue to speak for themselves."
The emails, which were published by OpenAI in March, show correspondence between Musk and OpenAI executives that indicated he supported a pivot to a for-profit model and was open to merging the AI startup with Tesla.
Musk expanded his beef with OpenAI to include Microsoft, accusing the two of constituting a monopoly.
The billionaire called OpenAI's partnership with Microsoft a "de facto merger" and accused the two of anticompetitive practices, such as offering "lavish compensation" to lure AI talent. Musk's lawyers said the two companies "possess a nearly 70% share of the generative AI market."
"OpenAI has attempted to starve competitors of AI talent by aggressively recruiting employees with offers of lavish compensation, and is on track to spend $1.5 billion on personnel for just 1,500 employees," lawyers for Musk said in the complaint.
Two weeks later, Musk filed a motion asking a judge to prevent OpenAI from dropping its nonprofit status.
Musk filed a complaint to Judge Yvonne Gonzalez Rogers of the US District Court for the Northern District of California, arguing that OpenAI and Microsoft exploited his donations to OpenAI as a nonprofit to build a monopoly "specifically targeting xAI." In the filing, Musk's lawyers said OpenAI engaged in anticompetitive behaviors and wrongfully shared information with Microsoft.
If granted by the judge, the injunction could cause issues with OpenAI's partnership with Microsoft and prevent it from becoming a for-profit company.
As Musk's influence on US policy grows, his feud with Altman hangs in the balance.
As President-elect Donald Trump's self-proclaimed "First Buddy," Musk's power and influence on the US economy could increase even further over the next four years. In addition to being a right-hand man to Trump, he'll lead the new Department of Government Efficiency with biotech billionaire Vivek Ramaswamy.
Musk hasn't been quiet about his disdain for Altman post-election. He dubbed the OpenAI cofounder "Swindly Sam" in an X post on November 15. The Wall Street Journal reported that Musk "despises" Altman, according to people familiar.
Oracle shares are set for their best year since 1999 after a 75% surge.
The enterprise-computing stock has benefited from strong demand for cloud and AI infrastructure.
Oracle cofounder Larry Ellison's personal fortune has surged.
Oracle has surged 75% since January, putting the stock on track for its best year since a tripling in 1999 during the dot-com boom.
The enterprise-computing giant's share price has jumped from a low of about $60 in late 2022 to about $180, boosting Oracle's market value from below $165 billion to north of $500 billion.
It's now worth almost as much as Exxon Mobil ($518 billion), and more valuable than Mastercard ($489 billion), Costco ($431 billion), or Netflix ($379 billion).
Oracle's soaring stock price has boosted the net worth of Larry Ellison, who cofounded the company and serves as its chief technology officer. His stake of more than 40% has lifted his fortune to $227 billion on the Forbes Real-Time Billionaires list, second only to Tesla CEO Elon Musk's $330 billion.
Oracle provides all manner of software and hardware for businesses, but its cloud applications and infrastructure are fueling its growth as companies such as Tesla that are training large language models pay up for processing power.
The company was founded in 1977 but is still growing at a good clip. Net income jumped by 23% to $10.5 billion in the year ended May, fueled by 12% sales growth in the cloud services and license support division, which generated nearly 75% of its revenues.
Oracle signed the largest sales contracts in its history last year as it tapped into "enormous demand" for training LLMs, CEO Safra Catz said in the fourth-quarter earnings release. She said the client list included OpenAI and its flagship ChatGPT model, which kickstarted the AI boom.
Catz also predicted revenue growth would accelerate from 6% to double digits this financial year. That's partly because Oracle is working with Microsoft and Google to interconnect their respective clouds, which Ellison said would help to "turbocharge our cloud database growth."
Oracle has flown under the radar this year compared to Nvidia. The chipmaker's stock has tripled in the past year and it now rivals Apple as the world's most valuable company. Yet Oracle is still headed for its best annual stock performance in a quarter of a century — and its bosses are promising there's more to come.
OpenAI is seeking to reach 1 billion users by next year, a new report said.
Its growth plan involves building new data centers, company executives told the Financial Times.
The lofty user target signifies the company's growth ambitions following a historic funding round.
OpenAI is seeking to amass 1 billion users over the next year and enter a new era of accelerated growth by betting on several high-stakes strategies such as building its own data centers, according to a new report.
In 2025, the startup behind ChatGPT hopes to reach user numbers surpassed only by a handful of technology platforms, such as TikTok and Instagram, by investing heavily in infrastructure that can improve its AI models, its chief financial officer Sarah Friar told the Financial Times.
"We're in a massive growth phase, it behooves us to keep investing. We need to be on the frontier on the model front. That is expensive," she said.
ChatGPT, the generative AI chatbot introduced two years ago by OpenAI boss Sam Altman, serves 250 million weekly active users, the report said.
ChatGPT has enjoyed rapid growth before. It reached 100 million users roughly two months after its initial release thanks to generative AI features that grabbed the attention of businesses and consumers. At the time, UBS analysts said they "cannot recall a faster ramp in a consumer internet app."
Data center demand
OpenAI will require additional computing power to accommodate a fourfold increase in users and to train and run smarter AI models.
Chris Lehane, vice president of global affairs at OpenAI, told the Financial Times that the nine-year-old startup was planning to invest in "clusters of data centers in parts of the US Midwest and southwest" to meet its target.
Increasing data center capacity has become a critical global talking point for AI companies. In September, OpenAI was reported to have pitched the White House on the need for a massive data center build-out, while highlighting the massive power demands that they'd come with.
Altman, who thinks his technology will one day herald an era of "superintelligence," has been reported to be in talks this year with several investors to raise trillions of dollars of capital to fund the build-out of critical infrastructure like data centers.
Friar also told the FT that OpenAI is open to exploring an advertising model.
"Our current business is experiencing rapid growth and we see significant opportunities within our existing business model," Friar told Business Insider. "While we're open to exploring other revenue streams in the future, we have no active plans to pursue advertising."
OpenAI said the capital would allow it to "double down" on its leadership in frontier AI research, as well as "increase compute capacity, and continue building tools that help people solve hard problems."
In June, the company also unveiled a strategic partnership with Apple as part of its bid to put ChatGPT in the hands of more users.
OpenAI did not immediately respond to BI's request for comment.
Since then, its user base has doubled to 200 million weekly users.
Major companies, entrepreneurs, and users remain optimistic about its transformative power.
It's been two years since OpenAI released its flagship chatbot, ChatGPT.
And a lot has changed in the world since then.
For one, ChatGPT has helped turbocharge global investment in generative AI.
Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies. Tech companies on the S&P 500 have seen a 30% gain since January 2022, compared to only 15% for small-cap companies, Bloomberg reported.
Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.
Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.
Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.
Then, of course, there is ChatGPT's impact on regular users.
In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.
Business Insider asked ChatGPT what age means to it.
"Age, to me, is an interesting concept — it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.
The field of artificial intelligence is booming and attracting billions in investment.
Researchers, CEOs, and legislators are discussing how AI could transform our lives.
Here are 17 of the major names in the field — and the opportunities and dangers they see ahead.
Investment in artificial intelligence is rapidly growing and on track to hit $200 billion by 2025. But the dizzying pace of development also means many people wonder what it all means for their lives.
In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI — and its impact on our future.
Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.
Hinton worked at Google for over a decade but quit last spring, he said, so he could speak more freely about the rapid development of AI technology. After quitting, he even said that a part of him regrets the role he played in advancing the technology.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.
Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the world's highest h-index — a metric for evaluating the cumulative impact of an author's scholarly output — according to his website.
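The h-index mentioned above has a simple definition: the largest number h such that the author has at least h papers cited at least h times each. A minimal illustrative sketch in Python (not from the article; the sample citation counts are invented):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
```

By this measure, an author with an h-index of 145, like Koller later in this piece, has 145 papers that have each been cited at least 145 times.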
In addition to his academic work, Bengio also co-founded Element AI, a startup that develops AI software solutions for businesses that was acquired by the cloud company ServiceNow in 2020.
Bengio has expressed concern about the rapid development of AI. He was one of 33,000 people who signed an open letter calling for a six-month pause on AI development. Hinton, OpenAI CEO Sam Altman, and Elon Musk also signed the letter.
"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."
When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.
Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."
Sam Altman, the CEO of OpenAI, has become a major figure in artificial intelligence since the launch of ChatGPT last November.
French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions from videos of everyday events as a way to give them a form of common sense, the idea being that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.
Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Li's research focuses on machine learning, deep learning, computer vision, and cognitively-inspired AI, according to her biography on Stanford's website.
She may be best known for establishing ImageNet — a large visual database that was designed for research in visual object recognition — and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies including Google — where she was a VP and chief scientist for AI and machine learning — and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.
UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
Russell published "Human Compatible" in 2019, in which he explored how humans and machines could coexist as machines grow smarter by the day. Russell contended that the answer lies in designing machines that are uncertain about human preferences, so they won't pursue their own goals above those of humans.
He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC-Berkeley faculty member Peter Norvig.
Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text was left for them to ingest. He has also said that the advancements in AI could spell the end of the traditional classroom.
Peter Norvig played a seminal role directing AI research at Google.
He spent several years in the early 2000s directing the company's core search algorithms group and later became director of research, overseeing teams working on machine translation, speech recognition, and computer vision.
Norvig has also rotated through several academic institutions over the years: he is a former faculty member at UC-Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.
Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.
Timnit Gebru is a computer scientist who’s become known for her work in addressing bias in AI algorithms.
Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team where she published groundbreaking research on biases in machine learning.
But her research also spun into a larger controversy that she's said ultimately led to her being let go from Google in 2020. Google didn't comment at the time.
Gebru founded the Distributed AI Research Institute in 2021, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."
She's also warned that the AI gold rush could lead companies to neglect implementing necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."
British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
The endeavor led to the Google Cat Project, a milestone in deep learning research in which a massive neural network was trained to detect YouTube videos of cats.
Ng also served as the chief scientist at the Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.
Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012, and five years later, founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.
"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity — it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg.
Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. She is a decorated academic: a MacArthur Fellow, a co-founder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.
In Koller's view, the biggest risks AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."
At the same time, she said the benefits are too many and too large to note. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.
Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left, in part, because Dario — OpenAI's lead safety researcher at the time — was concerned that OpenAI's deal with Microsoft would force it to release products too quickly, and without proper guardrails.
At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being used by companies like Quora and Notion. Anthropic's research relies on what it calls a "Triple H" framework, which stands for Helpful, Honest, and Harmless. That means it relies on human input when training its models, including through constitutional AI, in which a written set of basic principles outlines how the AI should operate.
"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.
Demis Hassabis has said artificial general intelligence will be here in a few years.
After a handful of research stints and a venture in video games, he founded DeepMind in 2010 and sold the AI lab to Google in 2014 for £400 million. There he has worked on algorithms to tackle issues in healthcare and climate change, and in 2017 he launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.
Hassabis has said the promise of artificial general intelligence — a theoretical concept that sees AI matching the cognitive abilities of humans — is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that the development of AGI should be carried out "in a cautious manner using the scientific method."
In 2022, DeepMind co-founder Mustafa Suleyman launched AI startup Inflection AI along with LinkedIn co-founder Reid Hoffman, and Karén Simonyan — now the company's chief scientist.
The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook.
Its chatbot, Pi, which stands for personal intelligence, is built on large language models similar to those behind OpenAI's ChatGPT or Google's Bard. Pi, however, is designed to be more conversational and to offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.
"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said.
USC Professor Kate Crawford focuses on the social and political implications of large-scale AI systems.
Crawford is also a senior principal researcher at Microsoft and the author of "Atlas of AI," a book that draws on the breadth of her research to uncover how AI is shaping society.
Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."
She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."
Margaret Mitchell is the chief ethics scientist at Hugging Face.
Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies including Microsoft and Google.
In late 2020, Mitchell and Timnit Gebru — then the co-leads of Google's ethical artificial intelligence team — published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.
Now, at Hugging Face — an open-source data science and machine learning platform founded in 2016 — she's thinking about how to democratize access to the tools necessary to build and deploy large-scale AI models.
In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."
Navrina Singh is the founder of Credo AI, an AI governance platform.
Credo AI is a platform that helps companies ensure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."
At the same time, she contends that algorithms right now lack the human judgement that's necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."
Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.
Socher believes we have a ways to go before AI development hits its peak or approaches anything close to human intelligence.
One bottleneck in large language models is their tendency to hallucinate — a phenomenon in which they convincingly spit out factual errors as truth. But by forcing them to translate questions into code — essentially "programming" responses instead of verbalizing them — we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.
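The code-translation idea Socher describes resembles what researchers call program-aided prompting: the model writes a small program whose execution produces the answer, rather than replying in free text. A minimal sketch, with `ask_llm` as a hypothetical stand-in for a real model call:

```python
def ask_llm(question: str) -> str:
    """Hypothetical stand-in for an LLM call that returns Python code.
    A real system would prompt a model to emit this program; here it is
    hard-coded so the sketch stays self-contained."""
    return "result = sum(range(1, 101))"

def answer_with_program(question: str) -> int:
    code = ask_llm(question)   # the model "programs" its response...
    scope = {}
    exec(code, scope)          # ...and executing the code yields the answer
    return scope["result"]     # computed deterministically, not verbalized

print(answer_with_program("What is the sum of 1 to 100?"))  # 5050
```

The point of the design is that arithmetic and logic are delegated to an interpreter, which cannot hallucinate the way free-text generation can; the model's errors become visible as wrong or broken code rather than confident-sounding prose.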
But that's just a short-term goal. Socher contends that we are years away from anything close to the industry's ambitious bid to create artificial general intelligence. He defines AGI as a form of intelligence that can "learn like humans" and "virtually have the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people," and says it could take as little as 10 years, but as much as 200, to get there.
And if we really want to move the needle toward AGI, Socher said, humans might need to let go of the reins, and of their own motives to turn a profit, and build AI that can set its own goals.
"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI.