Enterprise AI startup Cohere has launched a new platform called North.
North allows users to quickly deploy AI agents to execute tasks across various business sectors.
The company says the platform lets users complete tasks more than five times faster.
2025 is shaping up to be the year that AI "agents" go mainstream.
Unlike AI-based chatbots that respond to user queries, agents are AI tools that work autonomously. They can execute tasks and make decisions, and companies are already using them for everything from creating marketing campaigns to recruiting new employees.
Cohere, an AI startup focused on enterprise technology, unveiled North on Thursday: an all-in-one platform combining large language models, multimodal search, and agents to help its customers work more efficiently with AI.
Through North, users can quickly customize and deploy AI agents to find relevant information, conduct research, and execute tasks across various business functions.
The platform could make it easier for a company's finance team, for example, to quickly search through internal data sources and create reports. Its multimodal search function could also help extract information from everything from images to slides to spreadsheets.
AI agents built with North integrate with a company's existing workplace tools and applications. The platform can run in private, allowing organizations to integrate all their sensitive data in one place securely.
"North allows employees to build AI agents tailored to their role to execute complex tasks without ever leaving the platform," a representative for Cohere told Business Insider by email.
The company is now deploying North to a small set of companies in finance, healthcare, and critical infrastructure as it continues to refine the platform. There is no set date for when it will make the platform available more widely.
Cohere, launched in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, has quickly grown to rival ChatGPT maker OpenAI and was valued at over $5.5 billion at its Series D funding round announced last July, Bloomberg reported. As of last March, the company had an annualized revenue of $35 million, up from $13 million at the end of 2023.
The company is one of a few AI startups that are building their own large language models from the ground up. Unlike its competitors, it has focused on creating customized solutions for businesses rather than consumer apps or the more nebulous goal of artificial general intelligence.
Its partners include major companies like software company Oracle, IT company Fujitsu, and consulting firm McKinsey & Company.
This year, however, its goal is to "move beyond generic LLMs towards tuned and highly optimized end-to-end solutions that address the specific objectives of a business," Gomez said in a post on LinkedIn outlining the company's objectives for 2025.
I made ChatGPT Search my default search engine after it rolled out to logged-in users.
However, I found myself opening a separate browser to find the answers I was looking for on Google.
ChatGPT Search hasn't quite nailed down keyword searching, but it worked well for open-ended prompts.
After a little over a week of using ChatGPT Search as my default search engine, I can say that it won't be replacing Google Search for me in the near future.
ChatGPT Search became available to anyone with a free account on December 16 after OpenAI launched the feature on October 31 to select users. The feature allows users to get quick, up-to-date responses with the option to open up relevant links in a tab on the right-hand side.
"This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more," the company said in its announcement.
The launch of the tool to free users also means you can go into your web browser settings and set ChatGPT Search as the default search engine, meaning that typing a question or keyword into the browser bar will route that query through ChatGPT instead of your usual default, such as Google or DuckDuckGo.
I use ChatGPT frequently for a number of tasks, and I find the chatbot and search engine feature valuable. However, setting it as my default search engine made me realize how necessary Google Search continues to feel to my daily workflow, and how much the two platforms differ.
I feel Google shows, ChatGPT tells
While I found the summaries generated by ChatGPT Search to be useful, there were instances when I wanted to better see and select the sources for myself.
It can vary, but in my experience, ChatGPT Search includes about one to seven in-text links in the response and has a "Sources" tab at the bottom that users can select for an expanded source display. The tab includes the cited links at the top and about eight to 13 relevant links underneath. If you ask for more sources, you have to reopen sources from the first prompt to see the original list.
On Google, it often feels like you can scroll endlessly.
For example, if I want to shop around for a new TV stand for my living room and search "55 inch TV stand" on Google, I can see seemingly infinite options of stands that match that description. If I want to keep scrolling through, I can open Google Shopping which now operates more like a third-party marketplace, thanks to an AI revamp.
ChatGPT Search, on the other hand, responded to my search with around five products and linked additional sites in the Sources tab. Instead of showing me a visual array of products from different brands, with filters to set my preferences, ChatGPT Search selected a handful for me. I also found the Sources tab to be a fairly narrow display that didn't present sources in a particularly helpful way visually.
Now, it's worth noting that some people may prefer the approach of ChatGPT Search to their usual search engine. And Google has been evolving the search experience over the past year as it leans into AI.
There's an argument to be made about too many options being shown on Google, and ChatGPT certainly narrows it down. However, if I'm doing a broad search, I personally want to see the breadth of what's out there.
While using ChatGPT Search, I found that I missed Google's layout of providing AI Overviews at the top of some queries. The feature allows users to get the short version while also having the option to scroll through other sources, which I feel offers a bit more agency in how users consume information.
Google is optimized for keyword searches
During this experiment, I often opened a separate browser with Google as my primary search engine because ChatGPT Search took longer to provide the answers I needed.
It's worth noting that the tech giant has dominated the search engine space for years, so my habits are naturally optimized for Google Search. I'm used to typing in a single keyword and instantly finding an array of relevant links.
Google's head of search, Liz Reid, said at Google I/O in May that "Google will do the Googling for you," and I've generally found that to be true. When I look up a company, person, or website, I'm able to see social media accounts, recent news, and other relevant links. I can also select filters or categories such as images, videos, news, and shopping to narrow my search.
ChatGPT Search worked better when I had a specific prompt or question in mind.
For example, if I have an open-ended research query, like "What was President Gerald Ford's most notable accomplishment?" or "How did Gerald Ford's legacy differ from other presidents?" I would probably turn to ChatGPT Search because it can organize information concisely and save me from reading hundreds of related articles that might not address that specific question or contain those keywords.
Google's lead in offering keyword-based access to the wider web is tough to beat. While I see valuable uses for both products, I don't see ChatGPT Search replacing what Google has spent decades building, at least at this stage.
Suchir Balaji was a researcher at OpenAI who later accused his employer of violating copyright law.
One of his OpenAI colleagues told BI that he was one of the "true geniuses" at the startup.
Friends described Balaji as a brilliant person with a passion for artificial intelligence.
Longtime friends and former colleagues gathered at a private memorial service at the India Community Center in Silicon Valley on Saturday to remember Suchir Balaji, whom many described as an intelligent but humble individual with impressive technical prowess.
"He was the sharpest person I ever met," Aayush Gupta, who interned with Balaji at Scale.AI in 2019, told Business Insider, adding that he was an "independent thinker." In his speech to the assembled crowd, Gupta said Balaji "seemed like he was entirely self-taught."
That brilliance didn't go unnoticed at OpenAI, where the 26-year-old Balaji worked for nearly four years before he left the company in August and later accused his employer of violating copyright law.
Tarun Gogineni, a research scientist at OpenAI since 2022, told Business Insider that he often bounced ideas about artificial general intelligence off Balaji and that his colleague was a "contrarian thinker" who could be seen getting into long debates on Slack and expressing his opinions.
"He was one of the true geniuses at OpenAI," Gogineni said.
Gogineni recalled how Balaji worked with key figures at OpenAI, including cofounders Ilya Sutskever and John Schulman, to help launch WebGPT. Gogineni described the project as "in all meaningful ways, a spiritual predecessor" to ChatGPT.
"He worked closely with Ilya and John Schulman and some of the top people at OpenAI to come up with new algorithms for post-training," Gogineni said, referring to the process of fine-tuning an AI model after its initial training.
After Balaji's death, which authorities have ruled a suicide, Schulman wrote in a social media post that Balaji was "one of the three lead contributors" to the WebGPT project.
"I worked with Suchir on and off since around 2021, and he was one of my favorite and most talented collaborators," Schulman wrote.
Sutskever and Schulman left OpenAI in May and August, respectively.
They did not respond to a request for comment.
Balaji goes public on OpenAI
Two months after Balaji left OpenAI, The New York Times published a profile in which the researcher said his employer violated copyright law to train ChatGPT.
OpenAI has denied the accusation.
In a statement, an OpenAI spokesperson said the company was "devastated to learn of Balaji's death" and that "our hearts go out to Suchir's loved ones during this difficult time."
Gogineni said he was surprised to read about Balaji's concerns. In the nearly two years he overlapped with Balaji at OpenAI, Gogineni said, he couldn't recall his coworker ever bringing up concerns about copyright violations.
"It was very surprising," Gogineni said. "I mean, I can't claim I was best friends with him. He was a work friend. But I had never ever seen him express any concern about copyright."
Friends and Balaji's parents described him as an independent person.
Will Gan, who had known Balaji since ninth grade and also attended the University of California, Berkeley, told BI that Balaji usually maintained a sense of humor.
If he ever shared concerns about AI, Gan said Balaji would always be "super jokey" and lighthearted.
"If you ever followed 'Dune,' how they've outlawed machines in that universe, that's what he would joke about that he'd want," Gan said.
In serious matters in his life, however, Gan said Balaji could be reserved.
"I feel like, for example, if he had something serious going on at work or otherwise, he might not necessarily share that openly," Gan said. "I think that was just part of who he was to some extent."
Gan said he never talked to Balaji about his plans to speak with a New York Times reporter.
"He just told us at some point before the article released that he was going to do this, and we were hyping him up and stuff like that," Gan said. "It wasn't like we were discussing, 'Oh shit, what are the ramifications' and stuff like that."
Balaji's mom, Poornima Ramarao, previously told BI that she scolded her son for talking with a reporter and doing it so publicly without remaining anonymous. Ramarao said she is working with an attorney to try to get the SF police to further investigate Balaji's death.
Balaji also had plans to provide documents to The New York Times Co. for its copyright lawsuit against OpenAI, court filings showed. His name appeared in a letter from the Times' attorney on November 18.
Gan said he last saw Balaji during a weeklong trip to Catalina Island on November 22. Authorities found Balaji's body on November 26.
"We were all together in Catalina," Gan said of the trip. "And he seemed fine on that day."
Generative AI is transforming technical tasks, making them accessible to non-experts.
AI tools like v0 and Julius AI streamline processes such as web development and data analysis.
Vercel's CFO uses generative AI tools to become a "quasi-coder."
The AI boom has added trillions of dollars to tech company valuations. Is it living up to the hype?
In some real ways, the answer is yes. This is especially true when it comes to the technical plumbing of modern companies. These are tasks that often go on behind the scenes and are either unknown or taken for granted by most non-technical people.
Generative AI burst onto the scene in late 2022 with OpenAI's release of ChatGPT, a chatbot that answers many questions and creates realistic and convincing content.
Since that flashy launch, this new form of AI has quietly begun to transform more mundane jobs and processes, such as web development, data analysis, legal research, and code writing.
Vercel's Next.js conference in San Francisco earlier this year was packed with young developers who were using AI models and tools to streamline hundreds of these technical tasks. This work has mostly been handled by human technical employees. Now, that's changing in major ways.
"All the power was previously behind a gate guarded by programmers who were paid hundreds of thousands of dollars a year. Now, these capabilities are available to all," said attendee Rahul Sonwalkar, founder of Julius AI, a startup that's using AI models to automate data analysis.
Saving on legal fees
It's not just startups. A good friend who's an executive at an investment fund used ChatGPT recently to research a legal issue.
The chatbot helped him understand a lot of the background, including relevant laws and other rules.
When he met with his law firm, he was able to jump past the basics and get to the meat of the task more quickly. This is important when attorneys can cost $500 to $1,000 an hour.
My friend estimates this initial AI-powered research saved his investment firm $50,000 to $70,000 in legal fees and roughly 60 to 80 hours of work time over two months.
20x more code at Google
At Google, generative AI is upending how the internet giant creates products.
Another old friend of mine has worked at Google for well over a decade. He recently described how he's writing 20 times more software code than he used to, thanks to generative AI tools.
He starts in the usual way, by typing in some initial code. Then the AI autocompletes much of the rest.
The technology sometimes autocompletes in the wrong direction, essentially misunderstanding his intentions. He still needs the technical skills to spot these occasional mistakes. But fixing them is pretty straightforward: He goes back to where his own code ended and types a bit more of his own work. Then the system adjusts and completes the task accurately.
A CFO becomes a 'quasi-coder'
Vercel CFO Marten Abrahamsen is no professional coder. But even he's experienced the benefits of generative AI making technical tasks more accessible.
He cited Vercel's v0 service, which lets anyone type requests in plain English and responds with code and outputs such as brand-new websites.
"I can't do complex coding, but I can type in English and v0 creates what I want. This turns me into a quasi-coder," Abrahamsen said.
The CFO said this tool helps him get ideas in front of more technical colleagues more quickly and ensures the nascent products are in better shape at the pitch stage.
Vercel's goal is to use generative AI to increase "iteration velocity" by automating a lot of the technical blocking and tackling so developers can spend more time on the creative parts of their jobs, he explained.
"Making developers much more productive with generative AI β investors and Vercel are quite bullish on this. That's a very interesting new use case for AI," Abrahamsen told me in a recent interview.
Creating a website in 2 minutes or less
I tried v0 myself on Friday. It took about 45 seconds to create a website based on this simple request: "Make me a website that looks like Business Insider."
Vercel's v0 system responded in English with the steps it would take. Then, on the right-hand side of the page, it swiftly pumped out the required software code and previewed the new website in less than a minute.
I asked for a little tweak: "Make the background more blue and add photos."
v0 responded with a similar English language answer, followed by more code generation and an updated site.
I then asked to make the top of the site blue and the system added that in maybe 20 seconds.
I could go on, but you get the point. I can't code at all, and I made a relatively solid website in about 2 minutes with v0.
2 million lines of code a day
Julius AI is taking a similar approach to automate data-analysis tasks. The service is used by scientists, marketing folks, hedge fund analysts, and anyone else who needs to interpret a lot of data and isn't an expert at pulling such insights from mountains of information.
The online tool can ingest data in many forms, including Excel tables and PDFs, or via APIs and databases. You can drag and drop these into an open window and ask questions in plain English. Julius AI then taps into various AI models to spot correlations in the data and generate insights in seconds via charts and text outputs.
The service automatically generates the software code needed to do this analysis and makes that available to reuse on other projects. This also helps users go back and check how the outputs were created.
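To make that concrete, here is a minimal sketch of the kind of analysis script such a tool might generate behind the scenes. The file name and column names are hypothetical placeholders, not actual Julius AI output, and it assumes the standard pandas and matplotlib libraries.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Ingest a spreadsheet the user dragged into the tool
# ("sales_data.xlsx" is a hypothetical example file).
df = pd.read_excel("sales_data.xlsx")

# Spot correlations across the numeric columns.
print(df.corr(numeric_only=True))

# Chart one relationship for the visual summary
# ("ad_spend" and "revenue" are hypothetical column names).
df.plot.scatter(x="ad_spend", y="revenue")
plt.title("Ad spend vs. revenue")
plt.savefig("insight.png")
```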
Julius AI has about 2 million registered users and has pumped out more than 7 million data visualizations so far, according to Sonwalkar, who notes the service writes roughly 2 million lines of code a day.
"It would take an army of human coders to do that," he said. "A good engineer who's focusing on a good day can put out about 1,000 lines of code."
Quantitative hedge funds use Julius AI to create financial models from the data they drop into the tool. One model might factor in currency changes and how they ripple into other markets, such as oil and gas prices.
One Julius AI customer is a hedge fund with seven employees, all of whom are finance experts.
"Normally this firm would also hire a quantitative programmer to create financial models for data analysis," Sonwalkar said. "AI does this in seconds now, without the need for a programming expert."
Generative AI is becoming increasingly omnipresent and is a growing hot topic.
Chatbots such as OpenAI's ChatGPT are changing the way we find information, generate images, and more.
Here are the people, companies, and words you need to know to understand AI.
It's becoming nearly impossible to ignore AI in our everyday lives.
Since OpenAI released ChatGPT in late 2022, people have gotten used to using the chatbot in many ways. Workers are turning to AI to automate tasks, while others are using the technology to improve their personal lives.
And as AI continues to advance, there may be a greater need for everyone to understand what it is and how it may affect us.
Here's a list of the people, companies, and terms you need to know to talk about AI, in alphabetical order.
The top AI leaders and companies
Sam Altman: The cofounder and CEO of OpenAI, the company behind ChatGPT. In 2023, Altman was ousted by OpenAI's board before returning to the company as CEO days later.
Dario Amodei: The CEO and cofounder of Anthropic, a major rival to OpenAI, where he previously worked. The AI startup is behind an AI chatbot called Claude 2. Google and Amazon are investors in Anthropic.
Demis Hassabis: The cofounder of DeepMind and now CEO of Google DeepMind, Hassabis leads Alphabet's AI efforts.
Jensen Huang: The CEO and cofounder of Nvidia, the tech giant behind the specialized chips companies use to power their AI technology.
Elon Musk: The Tesla and SpaceX CEO founded the artificial intelligence startup xAI in 2023. The new venture's valuation has risen dramatically in just 16 months, reaching an estimated $50 billion, according to recent reports. Musk also cofounded OpenAI, and since leaving the company in 2018, he has maintained a feud with Altman.
Satya Nadella: The CEO of Microsoft, the software giant behind the Bing AI-powered search engine and Copilot, a suite of generative AI tools. Microsoft is also an investor in OpenAI.
Mustafa Suleyman: The cofounder of DeepMind, Google's AI division, who left the company in 2022. He cofounded Inflection AI before joining Microsoft as its chief of AI in March 2024.
Mark Zuckerberg: The Facebook founder and Meta CEO who has been spending big to advance Meta's AI capabilities, including training its own models and integrating the technology into its platforms.
The AI terms you need to know
Agentic: A type of artificial intelligence that can make proactive, autonomous decisions with limited human input. Unlike generative AI models like ChatGPT, agentic AI does not need a human prompt to take action β for example, it can perform complex tasks and adapt when objectives change. Google's Gemini 2.0 focuses on agentic AI that can solve multi-step problems on its own.
AGI: "Artificial general intelligence," or the ability of artificial intelligence to perform complex cognitive tasks such as displaying self-awareness and critical thinking the way humans do.
Alignment: A field of AI safety research that aims to ensure that the goals, decisions, and behaviors of AI systems are consistent with human values and intentions. In July 2023, OpenAI announced a "Superalignment" team to focus on making its AI safe. That team was later disbanded, and in May the company set up a safety and security committee to advise the board on "critical safety and security decisions."
Biden's executive order on AI: Biden signed this landmark executive order, officially called the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," in 2023. It does a number of things to try to regulate the development of AI, including demanding greater transparency from tech companies producing AI, setting new standards for AI safety and security, and taking steps to ensure the US stays competitive in AI research and development. President-elect Donald Trump has vowed to rescind this order.
Compute: The AI computing resources needed to train models and carry out tasks, including processing data. This can include GPUs, servers, and cloud services.
Deepfake: An AI-generated image, video, or voice meant to appear real, typically used to deceive viewers or listeners. Deepfakes have been used to create non-consensual pornography and extort people for money.
Effective altruists: Broadly speaking, this is a social movement that stakes its claim in the idea that all lives are equally valuable and those with resources should allocate them to helping as many as possible. In the context of AI, effective altruists, or EAs, are interested in how AI can be safely deployed to reduce suffering caused by social ills like climate change and poverty. Figures including Elon Musk, Sam Bankman-Fried, and Peter Thiel identify as effective altruists. (See also: e/accs and decels.)
Frontier models: Refers to the most advanced examples of AI technology. The Frontier Model Forum, an industry nonprofit launched by Microsoft, Google, OpenAI, and Anthropic in 2023, defines frontier models as "large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models, and can perform a wide variety of tasks."
GPU: A computer chip, short for graphics processing unit, that companies use to train and deploy their AI models. Nvidia's GPUs are used by Microsoft and Meta to run their AI models.
Hallucinations: A phenomenon where a large language model (see below) generates inaccurate information that it presents as a fact. For example, during an early demo, Google's AI chatbot Bard hallucinated by generating a factual error about the James Webb Space Telescope.
Large language model: A complex computer program designed to understand and generate human-like text. The model is trained on large amounts of data scraped from across the web and produces answers based on patterns in that data. Examples of LLMs include OpenAI's GPT-4, Meta's Llama 2, and Google's Gemini.
Machine learning: AI systems that can adapt and learn from data on their own, without following explicit human instructions or programming. Deep learning is a subset of machine learning that relies on neural networks.
Multimodal: The ability for AI models to process text, images, and audio to generate an output. Users of ChatGPT, for instance, can now write, speak, and upload images to the AI chatbot.
Natural language processing: An umbrella term encompassing a variety of methods for interpreting and understanding human language. LLMs are one tool for interpreting language within the field of NLP.
Neural network: A machine learning program designed to think and learn like a human brain. Facial recognition systems, for instance, are designed using neural networks in order to identify a person by analyzing their facial features.
Open-source: A trait used to describe a computer program that anyone can freely access, use, and modify without asking for permission. Some AI experts have called for models behind AI, like ChatGPT, to be open-source so the public knows how exactly they are trained.
Optical character recognition: OCR is technology that can recognize text within images, like scanned documents, text in photos, and read-only PDFs, and extract it into a text-only format that machines can read.
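As a rough illustration, here is a minimal OCR sketch using the open-source Tesseract engine through the pytesseract Python package; "scan.png" is a hypothetical input file.

```python
from PIL import Image
import pytesseract

# Extract machine-readable text from an image of a document.
text = pytesseract.image_to_string(Image.open("scan.png"))
print(text)
```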
Prompt engineering: The process of crafting the questions and instructions given to AI chatbots so they produce the desired responses. As a profession, prompt engineers specialize in refining these inputs to improve a model's outputs.
Rationalists: People who believe that the most effective way to understand the world is through logic, reason, and scientific evidence. They draw conclusions by gathering evidence and critical thinking rather than following their personal feelings.
When it comes to AI, rationalists seek to answer questions like how AI can be smarter, how AI can solve complex problems, and how AI can better process information around risk. That stands in opposition to empiricists, who, in the context of AI, may favor advancements backed by observational data.
Responsible scaling policies: Guidelines for AI developers that are designed to mitigate safety risks and account for the responsible development of AI systems, their impact on society, and the resources they consume, such as energy and data. Such policies help ensure that AI is ethical, beneficial, and sustainable as systems become more powerful.
Singularity: A hypothetical moment where artificial intelligence becomes so advanced that the technology surpasses human intelligence. Think of a science fiction scenario where an AI robot develops agency and takes over the world.
ChatGPT has generated controversy and even kicked off a race among large tech companies like Google and Meta to develop their own, more powerful AI tools. OpenAI now has a $13 billion partnership with Microsoft, and the tech giant has integrated GPT-4o into Copilot and the Azure AI cloud suite.
However, the startup behind ChatGPT, OpenAI, has other AI products, too, and it recently made its AI video generator Sora available to users. Take a look at some of the startup's other AI products.
DALL-E
Just months before ChatGPT launched, OpenAI removed the waitlist for its generative AI art tool, DALL-E. It quickly grew to over 1.5 million daily users by September 2022, the company wrote in a blog post. The tool, which quickly creates imaginative and detailed artwork from a text prompt, sparked controversy among artists, who debated what DALL-E and other AI art generators like it could mean for people in creative jobs.
Since DALL-E launched, OpenAI has released DALL-E 2 and DALL-E 3. The latest upgrade, DALL-E 3, understands more nuance and detail than previous versions, the company said.
The AI art generator creates original images called "generations" from detailed text prompts input by a person. You can write detailed prompts such as the one above β "astronaut fish swimming in an ocean in outer space, digital art" β and specify an art style or even reference a specific artist like Vincent Van Gogh.
You can also edit "generations" with the tool using one of the credits the program gives you each month, and upload your own photos to create images.
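For developers, the same capability is exposed through OpenAI's API. Here is a minimal sketch using the official openai Python package and the article's example prompt; it assumes an OPENAI_API_KEY environment variable is set.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request a single "generation" from a detailed text prompt.
result = client.images.generate(
    model="dall-e-3",
    prompt="astronaut fish swimming in an ocean in outer space, digital art",
    size="1024x1024",
)
print(result.data[0].url)  # URL of the generated image
```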
Whisper
Whisper is an automatic speech recognition model that transcribes speech to text in multiple languages and can also identify and translate many of them into English.
The system was trained on 680,000 hours of multilingual and multitask supervised data collected from the internet, according to OpenAI.
In examples on its product page, Whisper transcribes an almost 30-second clip of quickly spoken text, a clip of a K-pop song, an audio clip of spoken French, and an audio clip of someone speaking with a strong accent.
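Because OpenAI released Whisper as an open-source Python package, those same transcription and translation tasks can be run locally. A minimal sketch, assuming the openai-whisper package is installed; the audio file names are hypothetical.

```python
import whisper

model = whisper.load_model("base")  # smaller checkpoints run on a laptop CPU

# Transcribe speech to text; the language is detected automatically.
result = model.transcribe("interview.mp3")
print(result["text"])

# Translating another language's speech into English is a task flag.
english = model.transcribe("entretien_fr.mp3", task="translate")
print(english["text"])
```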
Whisper is now used in a number of industries including healthcare. Recently, an Associated Press report revealed that the technology is prone to hallucinations that include comments about race and violent rhetoric, which could pose problems if it's used in medical settings.
Codex
Codex is an AI system that translates natural language into code. OpenAI says Codex is "most capable" in Python but is also proficient in over a dozen coding languages, like JavaScript and Swift.
The model can interpret simple commands input by a user. OpenAI says Codex is a "general-purpose programming model," which means it can be used for "essentially any programming task," although its results can vary. OpenAI said it's successfully used Codex "for transpilation, explaining code, and refactoring code."
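To illustrate the task, here is the kind of round trip a Codex-style model handles: the comment stands in for a user's plain-English command, and the function below it is the sort of Python such a model would emit. This is a hypothetical example, not actual Codex output.

```python
# Plain-English request: "Return the average length of the words in a sentence."
def average_word_length(sentence: str) -> float:
    """Return the average length of the words in a sentence."""
    words = sentence.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

print(average_word_length("Codex turns plain English into working code"))
```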
Sora
OpenAI announced during its "Shipmas" livestream on December 9 that it would launch its AI video generator Sora to the public after making it available to a limited group of artists and creators in February.
Sora can generate up to 20-second videos from written instructions. The tool can also complete a scene and extend existing videos by filling in missing frames.
The company showed off the new product and its various features, including the Explore page, which is a feed of videos shared by the Sora community. It also demonstrated various style presets for the videos like pastel symmetry, film noir, and balloon world.
The company said in a blog post that the product "may struggle to simulate the physics of a complex scene," as well as with depicting events that happen over time. It may also mix up left and right, the company said.
While the tool has already made a strong impression on some in Hollywood, its product designer said in the demonstration that Sora wasn't going to create feature films at the click of a button. Rather, the employee said, the tool was more "an extension of the creator who's behind it."
API tools
OpenAI also has a set of tools geared toward developers. Its flagship reasoning models include o1, o1-mini, and the soon-to-be-released o3 and o3-mini models. OpenAI also has GPT models, including GPT-4o and GPT-4o mini, and offers the Chat Completions API, Assistants API, Batch API, and Realtime API. Users can explore models and APIs in OpenAI's Playground without writing code. According to the company website, three million developers are building with its tools.
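As a rough sketch of what building with these tools looks like, here is a minimal Chat Completions call using the official openai Python package; the model name is one example from the lineup above, and an OPENAI_API_KEY environment variable is assumed.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "In one sentence, what is a Batch API for?"}],
)
print(response.choices[0].message.content)
```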
In an interview with The New York Times, Balaji discussed how powerhouse AI companies might be breaking copyright laws.
OpenAI's models are trained on information from the internet. Balaji helped collect and organize that data, but he grew to feel the practice was unfair. He resigned in August. And in November, he was named by NYT lawyers as someone who might have "unique and relevant documents" for their copyright-infringement case against OpenAI.
"If you believe what I believe, you have to just leave," he told the Times.
On November 26, the young engineer was found dead in his apartment. The tragedy struck a chord, stoking conspiracy theories, grief, and debate. What do we lose when AI models gain?
In an exclusive interview with Business Insider, Balaji's mother, Poornima Ramarao, offered clues.
Balaji joined OpenAI because of AI's potential to do good, she said. Early on, he loved that the models were open-source, meaning freely available for others to use and study. As the company became more financially driven and ChatGPT launched, those hopes faded. Balaji went from believing in the mission to fearing its consequences for publishers and society as a whole, she told BI.
"He felt AI is a harm to humanity," Ramarao said.
An OpenAI spokesperson shared that Balaji was a valued member of the team, and that his passing deeply affected those who worked closely with him.
"We were devastated to learn of this tragic news and have been in touch with Suchir's family to offer our full support during this difficult time," the spokesperson wrote in a statement. "Our priority is to continue to do everything we can to assist them."
"We first became aware of his concerns when The New York Times published his comments and we have no record of any further interaction with him," OpenAI's spokesperson added. "We respect his, and others', right to share views freely. Our hearts go out to Suchir's loved ones, and we extend our deepest condolences to all who are mourning his loss."
Recruited by OpenAI
Growing up, Balaji's dad thought he was "more than average," Ramarao said. But she thought her son was a prodigy. By two years old, he could form complex sentences, she recalled.
"As a toddler, as a little 5-year-old, he never made mistakes. He was perfect," Ramarao said.
At age 11, he started learning to code using Scratch, a programming language geared toward kids. Soon, he was asking his mom, who's a software engineer, questions that went over her head. At 13, he built his own computer. At 14, he wrote a science paper about chip design.
"Dad would say, don't focus too much. Don't push him too much," Ramarao said.
They moved school districts to find him more challenges. In his senior year, he was the US champion in a national programming contest for high schoolers, which led to him being recruited, at 17 years old, by Quora, the popular online knowledge-sharing forum. His mom was against it, so he fibbed to her about applying. But he had to fess up by the first day on the job because he couldn't drive yet.
"I had to give him a ride to his office in Mountain View," Ramarao said.
She was worried about how he'd handle "so many adults," but he made friends to play poker with and enjoyed Quora's abundant cafeteria.
She viewed it as a lesson in learning to trust her son.
"Then I understood, okay, my son is really an advanced person. I cannot be a hindrance to him," Ramarao said.
After working for about a year, he went to UC Berkeley and soon won $100,000 in a TSA-sponsored challenge to improve the agency's passenger-screening algorithms.
It was all enough to get him recruited by OpenAI. He interned with the company in 2018, per his LinkedIn, then joined full time in 2021 after graduating.
An early standout
Over his nearly four-year tenure at OpenAI, Balaji became a standout, eventually making significant contributions to ChatGPT's training methods and infrastructure, John Schulman, an OpenAI cofounder, wrote in a social media post about Balaji.
"He'd think through the details of things carefully and rigorously. And he also had a slight contrarian streak that made him allergic to 'groupthink' and eager to find where the consensus was wrong," Schulman said in the post. Schulman didn't reply to BI's requests for comment.
Balaji had joined the company at a critical juncture, though.
OpenAI started off as a non-profit in 2015 with the explicit mission of ensuring that AI benefited all of humanity. As the startup moved away from its open-source and non-profit roots, Balaji became more concerned, Ramarao said.
When it launched ChatGPT publicly in November 2022, he reconsidered the copyright implications, she said.
Earlier that year, a big part of Balaji's role was gathering digital data β from all corners of the English-speaking internet β for GPT-4, a model that would soon power ChatGPT, per the Times interview. Balaji thought of this like a research project.
Using other people's data for research was one thing, he wrote in a later essay. Using it to make a product that could take away from those creators' revenue or traffic was another.
OpenAI didn't comment on Balaji's concerns to Business Insider. In court, it has argued that the legal doctrine of "fair use" protects how its models ingest publicly available internet content.
"Too naive and too innocent"
By late 2023 and early 2024, Balaji's enthusiasm for OpenAI had fizzled out entirely, and he began to criticize CEO Sam Altman in conversations with friends and family, Ramarao said.
He used to tell his mom when he was working on "something cool," but more and more, he had nothing to say about his job, she told BI.
When he resigned in August, Ramarao didn't press the issue.
Come October, when she saw his bombshell interview with the Times, she unleashed a torrent of anxiety at Balaji. In shining a spotlight on what he thought was corporate wrongdoing, he was taking it all on his shoulders, she said.
"I literally blasted him," she said of their conversation. "'You should not go alone. Why did you give your picture? Why did you give your name? Why don't you stay anonymous? What's the need for you to give your picture?'"
"You have to go as a group. You have to go together with other people who are like-minded. Then he said, 'Yeah, yeah, yeah. I'm connecting with like-minded people. I'm building a team,'" she continued. "I think he was too naive and too innocent to understand this dirty corporate world."
Balaji's parents are calling for an investigation
When Balaji left OpenAI in August, he took a break.
"He said, 'I'm not taking up another job. Don't ask me,'" Ramarao said.
From Balaji's parents' vantage point, everything seemed fine with the young coder. He was financially stable, with enough OpenAI stock to buy a house one day, she said. He had plans to build a machine learning non-profit in the medical field.
"He wanted to do something for society," his mom said.
On November 21, a Thursday, Balaji celebrated his 26th birthday with friends while on vacation. The next day, he let his mom know when his flight home took off, and spoke with his dad on the phone before dinner. His dad wished him a happy birthday and said he was sending a gift.
According to Ramarao, the medical examiner said that Balaji died that evening, or possibly the next morning.
"He was upbeat and happy," she said. "What can go wrong within a few hours that his life is lost?"
On Saturday and Sunday, Ramarao didn't hear from her son. She thought that maybe he'd lost his phone or gone for a hike. But on Monday, she went and knocked on his door. He didn't answer. She thought about filing a missing person complaint. But, knowing he'd have to go in-person to remove it, she hesitated. "He'll get mad at me," she said of her thinking at the time.
The next morning, she called the San Francisco police. They found his body just after 1 p.m. PST, according to a spokesperson for the department. But Ramarao wasn't told or allowed inside, she said. As officers trickled in, she pleaded with them to check if his laptop and toothbrush were missing, she told BI; that way she'd know if he'd traveled.
"They didn't give the news to me," Ramaro said. "I'm still sitting there thinking, 'My son is traveling. He's gone somewhere.' It's such a pathetic moment."
Around 2 p.m., they told her to go home. She refused.
"I sat there firmly," Ramarao told BI.
Then, around 3:20 p.m., a long white van pulled up with the light on.
"I was waiting to see medical help or nurses or someone coming out of the van," she said. "But a stretcher came. A simple stretcher. I ran and asked the person. He said, 'We have a dead body in that apartment.'"
About an hour later, a medical examiner and police asked to speak with Ramarao one-on-one inside the apartment's office. They said that Balaji had died by suicide, and that from looking at CCTV footage, he was alone, according to Ramarao. There was no initial evidence of foul play, the department spokesperson told BI.
Balaji's parents aren't convinced. They arranged for a private autopsy, completed in early December. Ramarao said the results were atypical, but she declined to share any more details. BI has not seen a copy of the report.
Balaji's parents are working with an attorney to press the SF police to reopen the case and do a "proper investigation," Ramarao said.
Meanwhile, they and members of their community are trying to raise awareness of his case through social media and a Change.org petition. Besides seeking answers, they want to spark a broader discussion about whistleblowers' vulnerability and lack of protections, Ramarao and a family friend, who's helping organize an event about Balaji on December 27, told BI.
"We want to leave the question open," Ramarao said. "It doesn't look like a normal situation."
BI shared a detailed account of Ramarao's concerns and memory of November 26 with spokespeople for the SF police and the Office of the Chief Medical Examiner. These officials did not respond or offer comments.
Ramarao emphasized to BI that the family isn't pointing fingers at OpenAI.
'Yes, mom'
Ramarao said she shared a close bond with her son. He didn't eat enough fruit, so every time she visited, she'd arrange shipments to his apartment from Costco. He tended to skip breakfast, so she'd bring granola bars and cookies.
Balaji rarely expressed his emotions and always paid for everything. But on November 7, during their last meal together, something made Ramarao try extra hard to pay, give him a ride home, and seek reassurance. He still paid for the meal and called an Uber. But he did offer his mom two words of encouragement.
"I asked him, 'Suchir this is the hardship. This is how I raised you, and if you were to choose parents now, would you choose me as mom?' He didn't think for a second,'" she said. "'Yes, mom.' And you know what? As a mother, that will keep me going as long as I'm alive."
Temu tops 2024 Apple App Store downloads, surpassing TikTok and ChatGPT in popularity.
The Chinese e-commerce app offers big discounts on a wide range of products.
Americans appear to be exploring budget-friendly options through in-app deals.
The App Store favorites of 2024 include social media platforms and one popular AI assistant, but the most downloaded app of the year was Temu.
The Chinese-owned e-commerce app was downloaded more times this year than TikTok, Threads, or ChatGPT, according to Apple. It's become known for big discounts on various products, from tech gadgets to apparel.
Temu, owned by PDD Holdings, is particularly popular among Gen Z consumers in the US. Gen Zers between 18 and 24 downloaded it 42 million times during the first 10 months of 2024, according to the app analytics firm Appfigures, which pulled data from iOS and Android users.
The e-commerce giant launched in the US in 2022 and has had a meteoric rise since then. PDD Holdings' third-quarter sales grew 44% to $14.2 billion from the same period in 2023, based on September 30 exchange rates.
It has invested millions to market to American shoppers. Three Temu ads aired during the Super Bowl, where one 30-second clip during the highly viewed game can cost $7 million.
With Donald Trump threatening high tariffs on Chinese goods, Temu's popularity could be at risk if it resorts to raising prices to offset a possible 60% levy on its products.
Apps from Amazon, Shein, and McDonald's also made the Apple App Store's top 20 most-downloaded list this year, indicating that consumers were on the hunt for deals across categories.
McDonald's has found success in using targeted in-app promotions to build loyalty among its customers.
The chain's head of US restaurants said earlier this year that loyalty customers visit 15% more often and spend nearly twice as much as non-loyalty customers, with loyalty platform sales expected to hit $45 billion by 2027.
Amazon, for its part, has sought to capitalize on Temu and Shein's low-price appeal with a new Haul section, which is also an app-only shopping experience.
As former Starbucks CEO Laxman Narasimhan was fond of saying, "The best offers are in the app."
Suchir Balaji, a former OpenAI researcher, was found dead on Nov. 26 in his apartment, reports say.
Balaji, 26, was an OpenAI researcher of four years who left the company in August.
He had accused his employer of violating copyright law with its highly popular ChatGPT model.
Suchir Balaji, a former OpenAI researcher of four years, was found dead in his San Francisco apartment on November 26, according to multiple reports. He was 26.
Balaji had recently criticized OpenAI over how the startup collects data from the internet to train its AI models. One of his jobs at OpenAI was to gather this information for the development of the company's powerful GPT-4 AI model, and he'd become concerned that the practice could undermine how content is created and shared on the internet.
A spokesperson for the San Francisco Police Department told Business Insider that "no evidence of foul play was found during the initial investigation."
David Serrano Sewell, executive director of the city's office of chief medical examiner, told the San Jose Mercury News "the manner of death has been determined to be suicide." A spokesperson for the city's medical examiner's office did not immediately respond to a request for comment from BI.
"We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," an OpenAI spokesperson said in a statement to BI.
In October, Balaji published an essay on his personal website that raised questions around what is considered "fair use" and whether it can apply to the training data OpenAI used for its highly popular ChatGPT model.
"While generative models rarely produce outputs that are substantially similar to any of their training inputs, the process of training a generative model involves making copies of copyrighted data," Balaji wrote. "If these copies are unauthorized, this could potentially be considered copyright infringement, depending on whether or not the specific use of the model qualifies as 'fair use.' Because fair use is determined on a case-by-case basis, no broad statement can be made about when generative AI qualifies for fair use."
Balaji argued in his personal essay that training AI models with masses of data copied for free from the internet is potentially damaging to online knowledge communities.
He cited a research paper that described the example of Stack Overflow, a coding Q&A website that saw big declines in traffic and user engagement after ChatGPT and AI models such as GPT-4 came out.
Large language models and chatbots answer user questions directly, so there's less need for people to go to the original sources for answers now.
In the case of Stack Overflow, chatbots and LLMs are answering coding questions, so fewer people visit Stack Overflow to ask that community for help. This, in turn, means the coding website generates less new human content.
Elon Musk has warned about this, calling the phenomenon "Death by LLM."
The New York Times sued OpenAI last year, accusing the startup and Microsoft of "unlawful use of The Times's work to create artificial intelligence products that compete with it."
In an interview with the Times published in October, Balaji said chatbots like ChatGPT are stripping away the commercial value of people's work and services.
"This is not a sustainable model for the internet ecosystem as a whole," he told the publication.
In a statement to the Times about Balaji's accusations, OpenAI said: "We build our A.I. models using publicly available data, in a manner protected by fair use and related principles, and supported by longstanding and widely accepted legal precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness."
Balaji was later named in the Times' lawsuit against OpenAI as a "custodian" or an individual who holds relevant documents for the case, according to a letter filed on November 18 that was viewed by BI.
If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line: just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.
OpenAI launched its widely anticipated video feature for ChatGPT's Advanced Voice Mode.
It allows users to incorporate live video and screen sharing into conversations with ChatGPT.
ChatGPT can interpret emotions, assist with homework, and provide real-time visual context.
ChatGPT's Advanced Voice Mode can now help provide real-time design tips for your home, assistance with math homework, or instant replies to your texts from the Messages app.
After teasing the public with a glimpse of the chatbot's ability to "reason across" vision along with text and audio during OpenAI's Spring Update in May, the company finally launched the feature on Thursday as part of day six of OpenAI's "Shipmas."
"We are so excited to start the rollout of video and screen share in Advanced Voice today," the company said in the livestream on Thursday. "We know this is a long time coming."
OpenAI initially said the voice and video features would be rolling out in the weeks after its Spring Update. However, Advanced Voice Mode didn't end up launching to users until September, and the video mode didn't come out until this week.
The new capabilities help provide more depth in conversations with ChatGPT by adding "realtime visual context" with live video and screen sharing. Users can access the live video by selecting the Advanced Voice Mode icon in the ChatGPT app and then choosing the video button on the bottom far left.
In the livestream demonstration on Thursday, ChatGPT helped an OpenAI employee make pour-over coffee. The chatbot noticed details like what the employee was wearing and then walked him through the steps of making the drink, elaborating on certain parts of the process when asked. The chatbot also gave him feedback on his technique.
To share your screen with ChatGPT, hit the drop-down menu and select "Share Screen." In the "Shipmas" demo, ChatGPT could identify that the user was in the Messages app, understand the message sent, and then help formulate a response after the user asked.
During the company's Spring Update, OpenAI showed off some other uses of the video mode. The chatbot was able to interpret emotions based on facial expressions and also demonstrated its ability to act as a tutor. OpenAI Research Lead Barret Zoph walked through an equation on a whiteboard (3x+1=4) and ChatGPT provided him with hints to find the value of x.
The feature had a couple of stumbles during the Spring Update demonstration, like referring to one of the employees as a "wooden surface" or trying to solve a math problem before it was shown.
Now that it's out, we decided to give the feature a whirl, and so far, it seems pretty impressive.
We showed the chatbot an office plant and asked it to tell us about it, give context on whether it's healthy, and explain what the watering schedule should look like. The chatbot accurately described browning and drying on the leaf tips and identified it as an aloe vera plant, which seemed to fit the description.
The new video feature will be rolling out this week in the latest version of the ChatGPT mobile app to Team and most Plus and Pro users. The feature isn't available in the EU, Switzerland, Iceland, Norway, and Liechtenstein yet, but OpenAI said it will be as soon as possible.
I asked ChatGPT to come up with gift ideas for my dad, mom, and sister.
The AI tool gave me unique, thoughtful suggestions on what to get my parents.
I wasn't as impressed with its gift ideas for my sister, but overall, ChatGPT did a great job.
Although I love Christmas shopping and gift-giving, finding unique, meaningful gifts for my family can be difficult year after year.
Determined to switch things up, I turned to ChatGPT to help me come up with some gift ideas for them.
My hope was that the AI service would produce ideas that I wouldn't have thought of otherwise, with suggestions more creative than just another cookbook for my mom or band T-shirt for my sister.
Here's how it went.
Going into the holiday season, I was most worried about what to get my dad
He doesn't care much for material things, so I was curious if ChatGPT could suggest practical gifts or experiences he'd appreciate.
Here's the prompt I gave ChatGPT:
Please give me unique gift recommendations for what to get my dad for Christmas based on his interests. He loves anything about World War II history, is trying to learn Spanish via Duolingo, always rewatches "Breaking Bad," is on the keto diet, and loves making breakfast food.
In total, ChatGPT gave me 19 suggestions: three for each of the five interests I mentioned, along with additional ideas under categories suggesting quirky and personalized gifts.
I was most impressed with ChatGPT's suggestions for my dad
Some of the ideas ChatGPT gave me included a personalized World War II history book, Duolingo merchandise, a Los Pollos Hermanos (a restaurant from "Breaking Bad") apron, a keto snack-box subscription, and gourmet bacon.
I was impressed, as these were all ideas I wouldn't have come up with on my own. However, my favorite suggestions were under ChatGPT's "Fun and Quirky" and "Personalized Gift" sections.
The quirky ideas included a World War II-themed board game like Axis & Allies, and a movie-night pack made up of a collection of Spanish-language films (with snacks to enjoy while watching).
Under the personalized gift section, ChatGPT suggested a keto-friendly breakfast basket with treats like low-carb muffins and nut butters.
Because my dad isn't really into collecting memorabilia, I decided the best idea would be to combine two ideas and pair the Spanish movie-night pack with an assortment of keto-friendly snacks.
I think he'd appreciate the experience of watching movies together. I may also check out some keto-snack-box-subscription websites for ideas on what to put in his basket.
I figured my mom would be easier to shop for
Going into this holiday season, I was a bit less worried about what to get my mom because she plans to retire next year and is looking for more hobbies to keep her busy.
Still, I didn't have anything particular in mind, which is where ChatGPT came in handy.
I asked it to come up with gift ideas based on this prompt:
Now, can you help me come up with ideas for my mom based on her interests? She is super excited to go to Iceland for the first time next year, is always trying to find low-carb, low-sugar TikTok recipes, wants to get more into exercising (recently bought a Peloton and Apple Watch), and is overall just looking for more hobbies to pick up when she retires next year.
It gave me 22 suggestions in total: four for each of the four points I mentioned and additional ideas under categories suggesting personalized and mindfulness-related gifts.
ChatGPT came up with some pretty unique ideas for my mom
Among the ideas ChatGPT suggested were a packing kit for Iceland that includes items like a travel adapter and language guide, a personalized binder of her favorite TikTok recipes, Apple Watch bands, and cooking or baking classes to enjoy in retirement.
Compared to my dad's results, I was less impressed with the additional categories ChatGPT created for my mom. Under the "Something Personalized" category, it suggested a customized Icelandic map, a personalized fitness-tracker case, and motivational-quote wall art. In my opinion, none of these seemed very practical or creative.
I thought the "Mindfulness and Relaxation" category had much better ideas: a subscription box for relaxation, a weighted blanket, and an indoor herb-garden kit.
A weighted blanket isn't likely something she'd buy for herself, but I can imagine her getting a lot of use out of it while unwinding after a long day. She's also been trying to eat healthier, so an indoor-herb-garden kit could lead her to a fun new hobby while allowing her to add fresh garnishes to her dishes.
I also liked the personalized recipe-binder idea since my mom usually just watches the same videos over and over again to remember the ingredients. Writing down and compiling her favorite TikTok recipes would be a practical and affordable gift.
I already had a gift idea in mind for my sister, so I was less reliant on the ChatGPT results
I was leaning toward getting my sister concert tickets for Christmas, but I still wanted to see what ideas ChatGPT had.
I figured if any of them stood out, I could give her another gift in addition to the tickets, or just replace them altogether.
Here's the information I gave ChatGPT:
Can you now help me come up with unique Christmas gift ideas for my sister based on her interests and hobbies? My sister loves everything music (she plays five instruments), likes unique party games, lives in San Diego, is graduating from college next year, is going to Bali next year, and likes to get merchandise from her favorite artists.
It gave me 26 gift suggestions, with ideas specific to all six of the points I mentioned and more under a category titled "Something Fun & Personalized."
None of the ideas for my sister blew me away
Although ChatGPT gave me the most ideas for my sister, I was actually the least impressed with these suggestions. However, this may have been because I already had an idea of what to get her.
Some of the ideas it gave me were a custom instrument case, specific party games (most of which she already owned), a Bali guidebook, a memory box to keep mementos from college, and merchandise from San Diego or her favorite artists.
These ideas seemed a lot more generic than the ones it produced for my mom and dad. For example, I wouldn't have thought to put together a TikTok-recipe binder for my mom or a Spanish movie night for my dad.
However, there weren't any ideas for my sister that I thought were especially unique or practical.
Perhaps it was due to the types of interests I entered for my sister, but I wouldn't choose any of those gifts over, or even as an addition to, concert tickets for her.
Before making any future holiday purchases, I'll consult ChatGPT first
Despite being slightly disappointed with ChatGPT's suggestions for my sister, I'll definitely be taking some of the ideas it gave me for my parents.
Although the AI tool may not have all the answers for mind-blowing, personalized gifts, I think it's a decent place to start if you need help brainstorming ideas.
Based on this experience, I plan to return to the platform to ask for gift suggestions for upcoming holidays and birthdays.
Google's search share slipped from June to November, new survey finds.
ChatGPT gained market share over the period, potentially challenging Google's dominance.
Still, generative AI features are benefiting Google, increasing user engagement.
ChatGPT is gaining on Google in the lucrative online search market, according to new data released this week.
In a recent survey of 1,000 people, OpenAI's chatbot was the top search provider for 5% of respondents, up from 1% in June, according to brokerage firm Evercore ISI.
Millennials drove the most adoption, the firm added in a research note sent to investors.
Google still dominates the search market, but its share slipped. According to the survey results, 78% of respondents said their first choice was Google, down from 80% in June.
It's a good business to be a gatekeeper
A few percentage points may not seem like much, but controlling how people access the world's online information is a big deal. It's what fuels Google's ads business, which produces the bulk of the company's revenue and huge profits. Microsoft's Bing has only 4% of the search market, per the Evercore report, yet it still generates billions of dollars in revenue each year.
ChatGPT's gains, however slight, are another sign that Google's status as the internet's gatekeeper may be under threat from generative AI. This new technology is changing how millions of people access digital information, sparking a rare debate about the sustainability of Google's search dominance.
OpenAI launched a full search feature for ChatGPT at the end of October. It also struck a deal with Apple this year that puts ChatGPT in a prominent position on many iPhones. Both moves are a direct challenge to Google. (Axel Springer, the owner of Business Insider, has a commercial relationship with OpenAI.)
ChatGPT user satisfaction vs. Google
When the Evercore analysts drilled down on the "usefulness" of Google's AI tools, ChatGPT, and Copilot, Microsoft's consumer AI helper, across 10 different scenarios, they found intriguing results.
There were a few situations where ChatGPT beat Google on satisfaction by a pretty wide margin: people learning specific skills or tasks, wanting help with writing and coding, and looking to be more productive at work.
It even had a 4% lead in a category that suggests Google shouldn't sleep too easy: people researching products and pricing online.
Google is benefiting from generative AI
Still, Google remains far ahead, and there were positive findings for the internet giant from Evercore's latest survey.
Earlier this year, Google released Gemini, a ChatGPT-like helper, and rolled out AI Overviews, a feature that uses generative AI to summarize many search results. In the Evercore survey, 71% of Google users said these tools were more effective than the previous search experience.
In another survey finding, among people using tools like ChatGPT and Gemini, 53% said they're searching more. That helps Google as well as OpenAI.
What's more, the tech giant's dominance hasn't dipped when it comes to commercial searches: people looking to buy stuff like iPhones and insurance. This suggests Google's market share slippage is probably more about queries for general information, meaning Google's revenue growth from search is probably safe for now.
So in terms of gobbling up more search revenue, ChatGPT has its work cut out for it.
Evercore analyst Mark Mahaney told BI that even a 1% share of the search market is worth roughly $2 billion a year in revenue, which implies a total search-revenue pool on the order of $200 billion annually. But that only works if you can make money from search queries as well as Google does.
"That's 1% share of commercial searches and assuming you can monetize as well as Google β and the latter is highly unlikely in the near or medium term," he said.
OpenAI released the full version of its o1 reasoning model on Thursday.
It says the o1 model, initially previewed in September, is now multimodal, faster, and more precise.
It was released as part of OpenAI's 12-day product and demo launch, dubbed "shipmas."
On Thursday, OpenAI released the full version of its hot new reasoning model as part of the company's 12-day sprint of product launches and demos.
The model, known as o1, was released in a preview mode in September. OpenAI CEO Sam Altman said during day one of the company's livestream that the latest version was more accurate, faster, and multimodal. Research scientists on the livestream said an internal evaluation indicated it made major mistakes about 34% less often than the o1 preview mode.
The model, which seems geared toward scientists, engineers, and coders, is designed to solve thorny problems. The researchers said it's the first model that OpenAI trained to "think" before it responds, meaning it tends to give more detailed and accurate responses than other AI helpers.
To demonstrate o1's multimodal abilities, they uploaded a photo of a hand-drawn system for a data center in space and asked the program to estimate the cooling-panel area required to operate it. After about 10 seconds, o1 produced what would appear to a layperson as a sophisticated essay filled with equations, ending with what was apparently the right answer.
The researchers think o1 should be useful in daily life, too. Whereas the preview version could think for a while if you merely said hi, the latest version is designed to respond faster to simpler queries. In Thursday's livestream, it was about 19 seconds faster than the old version at listing Roman emperors.
All eyes are on OpenAI's releases over the next week or so, amid a debate about how much more dramatically models like o1 can improve. Tech leaders are divided on this issue; some, like Marc Andreessen, argue that AI models aren't getting noticeably better and are converging to perform at roughly similar levels.
With its 12-day deluge of product news, dubbed "shipmas," OpenAI may be looking to quiet some critics while spreading awkward holiday cheer.
"It'll be a way to show you what we've been working on and a little holiday present from us," Altman said on Thursday.
Elon Musk helped found OpenAI, but he has frequently criticized it in recent years.
Musk filed a lawsuit against OpenAI in August and just amended it to include Microsoft.
Here's a history of Musk and Altman's working relationship.
Elon Musk and Sam Altman lead rival AI firms and now take public jabs at each other, but it wasn't always like this.
Years ago, the two cofounded OpenAI, which Altman now leads. Musk departed OpenAI, which created ChatGPT, in 2018, and recently announced his own AI venture, xAI.
There is enough bad blood that Musk sued OpenAI and Altman, accusing them in the suit of betraying the firm's founding principles, before dropping the lawsuit. The billionaire then filed a new one a few months later, claiming he was "deceived" into cofounding the company. In November, he amended it to include Microsoft as a defendant, and his lawyers accused the two companies of engaging in monopolistic behavior. Microsoft is an investor in OpenAI.
Two weeks later, Musk's lawyers filed a motion asking a judge to issue an injunction that would block OpenAI from dropping its nonprofit status. In the filing, Musk accused OpenAI and Microsoft of exploiting his donations to create a for-profit monopoly.
Here's a look at Musk and Altman's complicated relationship over the years:
Musk and Altman cofounded OpenAI, the creator of ChatGPT, in 2015, alongside other Silicon Valley figures, including Peter Thiel, LinkedIn cofounder Reid Hoffman, and Y Combinator cofounder Jessica Livingston.
The group aimed to create a nonprofit focused on developing artificial intelligence "in the way that is most likely to benefit humanity as a whole," according to a statement on OpenAI's website from December 11, 2015.
At the time, Musk said that AI was the "biggest existential threat" to humanity.
"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," a statement announcing the founding of OpenAI reads.
Musk stepped down from OpenAI's board of directors in 2018.
With his departure, Musk also backed out of a commitment to provide additional funding to OpenAI, a person involved in the matter told The New Yorker.
"It was very tough," Altman told the magazine of the situation. "I had to reorient a lot of my life and time to make sure we had enough funding."
It was reported that Sam Altman and other OpenAI cofounders had rejected Musk's proposal to run the company in 2018.
Semafor reported in 2023 that Musk wanted to run the company on his own in an attempt to beat Google. But when his offer to run the company was rejected, he pulled his funding and left OpenAI's board, the news outlet said.
In 2019, Musk shared some insight on his decision to leave, saying one of the reasons was that he "didn't agree" with where OpenAI was headed.
"I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX," he tweeted. "Also, Tesla was competing for some of same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms."
Musk has taken shots at OpenAI on several occasions since leaving.
Two years after his departure, Musk said, "OpenAI should be more open" in response to an MIT Technology Review article reporting that there was a culture of secrecy there, despite OpenAI frequently proclaiming a commitment to transparency.
In December 2022, days after OpenAI released ChatGPT, Musk said the company had prior access to the database of Twitter (now owned by Musk) to train the AI chatbot and that he was putting that access on hold.
"Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true," he said.
Musk was furious about ChatGPT's success, Semafor reported in 2023.
In February 2023, Musk doubled down, saying OpenAI as it exists today is "not what I intended at all."
"OpenAI was created as an open source (which is why I named it "Open" AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all," he said in a tweet.
Musk repeated this assertion a month later.
"I'm still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn't everyone do it?" he tweeted.
Musk was one of more than 1,000 people who signed an open letter calling for a six-month pause on training advanced AI systems.
The March 2023 letter, which also received signatures from several AI experts, cited concerns about AI's potential risks to humanity.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.
But while he was publicly calling for the pause, Musk was quietly building his own AI competitor, xAI, The New Yorker reported in 2023. He launched the company in March 2023.
Altman has addressed some of Musk's gripes about OpenAI.
"To say a positive thing about Elon, I think he really does care about a good future with AGI," Altman said last year on an episode of the "On With Kara Swisher" podcast, referring to artificial general intelligence.
"I mean, he's a jerk, whatever else you want to say about him β he has a style that is not a style that I'd want to have for myself," Altman told Swisher. "But I think he does really care, and he is feeling very stressed about what the future's going to look like for humanity."Β
In response to Musk's claim that OpenAI has turned into "a closed source, maximum-profit company effectively controlled by Microsoft," Altman said on the podcast, "Most of that is not true, and I think Elon knows that."
Altman has also referred to Musk as one of his heroes.
In a March 2023 episode of Lex Fridman's podcast, Altman also said, "Elon is obviously attacking us some on Twitter right now on a few different vectors."
In a May 2023 talk at University College London, Altman was asked what he's learned from various mentors, Fortune reported. He answered by speaking about Musk.
"Certainly learning from Elon about what is just, like, possible to do and that you don't need to accept that, like, hard R&D and hard technology is not something you ignore, that's been super valuable," he said.
Musk briefly unfollowed Altman on Twitter before following him again. Separately, Altman later poked fun at Musk's claim to be a "free speech absolutist."
Twitter took aim at posts linking to rival Substack in 2023, forbidding users from retweeting or replying to tweets containing such links, before reversing course. In response to a tweet about the situation, Altman tweeted, "Free speech absolutism on STEROIDS."
Altman joked that he'd watch Musk and Mark Zuckerberg's rumored cage fight.
"I would go watch if he and Zuck actually did that," he said at the Bloomberg Technology Summit in June 2023, though he said he doesn't think he would ever challenge Musk in a physical fight.
Altman also repeated several of his previous remarks about Musk's position on AI.
"He really cares about AI safety a lot," Altman said at Bloomberg's summit. "We have differences of opinion on some parts, but we both care about that and he wants to make sure we, the world, have the maximal chance at a good outcome."
Separately, Altman told The New Yorker in August 2023 that Musk has a my-way-or-the-highway approach to issues more broadly.
"Elon desperately wants the world to be saved. But only if he can be the one to save it," Altman said.
Musk first sued Altman and OpenAI in March 2024.
The suit, filed against OpenAI, Altman, and cofounder Greg Brockman, alleged that the company's direction in recent years violated its founding principles.
His lawyers alleged OpenAI "has been transformed into a closed-source de facto subsidiary of the largest technology company in the world" and is "refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity."
The lawsuit alleges that OpenAI executives played on Musk's concerns about the existential risks of AI and "assiduously manipulated" him into cofounding the company as a nonprofit. The intent of the company was to focus on building AI safely in an open approach to benefit humanity, the lawsuit says.
The company has since decided to take a for-profit approach.
OpenAI responded to the lawsuit by stating that "Elon's prior emails continue to speak for themselves."
The emails, which were published by OpenAI in March, show correspondence between Musk and OpenAI executives that indicated he supported a pivot to a for-profit model and was open to merging the AI startup with Tesla.
Musk expanded his beef with OpenAI to include Microsoft, accusing the two of constituting a monopoly.
The billionaire called OpenAI's partnership with Microsoft a "de facto merger" and accused the two of anti-competitive practices, such as offering "lavish compensation." Musk's lawyers said the two companies "possess a nearly 70% share of the generative AI market."
"OpenAI has attempted to starve competitors of AI talent by aggressively recruiting employees with offers of lavish compensation, and is on track to spend $1.5 billion on personnel for just 1,500 employees," lawyers for Musk said in the complaint.Β
Two weeks later, Musk filed a motion asking a judge to prevent OpenAI from dropping its nonprofit status.
Musk filed the motion with Judge Yvonne Gonzalez Rogers of the US District Court for the Northern District of California, arguing that OpenAI and Microsoft exploited his donations to OpenAI as a nonprofit to build a monopoly "specifically targeting xAI." In the filing, Musk's lawyers said OpenAI engaged in anticompetitive behaviors and wrongfully shared information with Microsoft.
If granted by the judge, the injunction could cause issues with OpenAI's partnership with Microsoft and prevent it from becoming a for-profit company.
As Musk's influence on US policy grows, his feud with Altman hangs in the balance.
Musk, President-elect Donald Trump's self-proclaimed "First Buddy," could see his power and influence on the US economy increase even further over the next four years. In addition to being a right-hand man to Trump, he'll lead the new Department of Government Efficiency with biotech billionaire Vivek Ramaswamy.
Musk hasn't been quiet about his disdain for Altman post-election. He dubbed the OpenAI cofounder "Swindly Sam" in an X post on November 15. The Wall Street Journal reported that Musk "despises" Altman, according to people familiar with the matter.
Oracle shares are set for their best year since 1999 after a 75% surge.
The enterprise-computing stock has benefited from strong demand for cloud and AI infrastructure.
Oracle cofounder Larry Ellison's personal fortune has surged.
Oracle has surged 75% since January, putting the stock on track for its best year since a tripling in 1999 during the dot-com boom.
The enterprise-computing giant's share price has jumped from a low of about $60 in late 2022 to about $180, boosting Oracle's market value from below $165 billion to north of $500 billion.
It's now worth almost as much as Exxon Mobil ($518 billion), and more valuable than Mastercard ($489 billion), Costco ($431 billion), or Netflix ($379 billion).
Oracle's soaring stock price has boosted the net worth of Larry Ellison, who cofounded the company and serves as its chief technology officer. His stake of more than 40% puts him second on the Forbes Real-Time Billionaires list with a fortune of $227 billion, behind only Tesla CEO Elon Musk at $330 billion.
Oracle provides all manner of software and hardware for businesses, but its cloud applications and infrastructure are fueling its growth as companies training large language models, such as Tesla, pay up for processing power.
The company was founded in 1977 but is still growing at a good clip. Net income jumped by 23% to $10.5 billion in the year ended May, fueled by 12% sales growth in the cloud services and license support division, which generated nearly 75% of its revenues.
Oracle signed the largest sales contracts in its history last year as it tapped into "enormous demand" for training LLMs, CEO Safra Catz said in the fourth-quarter earnings release. She said the client list included OpenAI and its flagship ChatGPT model, which kickstarted the AI boom.
Catz also predicted revenue growth would accelerate from 6% to double digits this financial year. That's partly because Oracle is working with Microsoft and Google to interconnect their respective clouds, which Ellison said would help to "turbocharge our cloud database growth."
Oracle has flown under the radar this year compared to Nvidia. The chipmaker's stock has tripled in the past year, and it now rivals Apple as the world's most valuable company. Yet Oracle is still headed for its best annual stock performance in a quarter of a century, and its bosses are promising there's more to come.
OpenAI is seeking to reach 1 billion users by next year, a new report said.
Its growth plan involves building new data centers, company executives told the Financial Times.
The lofty user target signifies the company's growth ambitions following a historic funding round.
OpenAI is seeking to amass 1 billion users over the next year and enter a new era of accelerated growth by betting on several high-stakes strategies such as building its own data centers, according to a new report.
In 2025, the startup behind ChatGPT hopes to reach user numbers surpassed only by a handful of technology platforms, such as TikTok and Instagram, by investing heavily in infrastructure that can improve its AI models, its chief financial officer Sarah Friar told the Financial Times.
"We're in a massive growth phase, it behooves us to keep investing. We need to be on the frontier on the model front. That is expensive," she said.
ChatGPT, the generative AI chatbot introduced two years ago by OpenAI boss Sam Altman, serves 250 million weekly active users, the report said.
ChatGPT has enjoyed rapid growth before. It reached 100 million users roughly two months after its initial release thanks to generative AI features that grabbed the attention of businesses and consumers. At the time, UBS analysts said they "cannot recall a faster ramp in a consumer internet app."
Data center demand
OpenAI will require additional computing power to accommodate a fourfold increase in users and to train and run smarter AI models.
Chris Lehane, vice president of global affairs at OpenAI, told the Financial Times that the nine-year-old startup was planning to invest in "clusters of data centers in parts of the US Midwest and southwest" to meet its target.
Increasing data center capacity has become a critical global talking point for AI companies. In September, OpenAI was reported to have pitched the White House on the need for a massive data center build-out while highlighting the enormous power demands that would come with it.
Altman, who thinks his technology will one day herald an era of "superintelligence," has been reported to be in talks this year with several investors to raise trillions of dollars of capital to fund the build-out of critical infrastructure like data centers.
Friar also told the FT that OpenAI is open to exploring an advertising model.
"Our current business is experiencing rapid growth and we see significant opportunities within our existing business model," Friar told Business Insider. "While we're open to exploring other revenue streams in the future, we have no active plans to pursue advertising."
OpenAI said the capital would allow it to "double down" on its leadership in frontier AI research, as well as "increase compute capacity, and continue building tools that help people solve hard problems."
In June, the company also unveiled a strategic partnership with Apple as part of its bid to put ChatGPT in the hands of more users.
OpenAI did not immediately respond to BI's request for comment.
Since then, its user base has doubled to 200 million weekly users.
Major companies, entrepreneurs, and users remain optimistic about its transformative power.
It's been two years since OpenAI released its flagship chatbot, ChatGPT.
And a lot has changed in the world since then.
For one, ChatGPT has helped turbocharge global investment in generative AI.
Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies: tech companies in the S&P 500 have gained 30% since January 2022, compared with only 15% for small-cap companies, Bloomberg reported.
Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.
Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.
Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.
Then, of course, there is ChatGPT's impact on regular users.
In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.
Business Insider asked ChatGPT what age means to it.
"Age, to me, is an interesting concept β it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.
The field of artificial intelligence is booming and attracting billions in investment.
Researchers, CEOs, and legislators are discussing how AI could transform our lives.
Here are 17 of the major names in the field, and the opportunities and dangers they see ahead.
Investment in artificial intelligence is growing rapidly and on track to hit $200 billion by 2025. But the dizzying pace of development also leaves many people wondering what it all means for their lives.
In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI and its impact on our future.
Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.
Hinton also worked at Google for over a decade but quit last spring so he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.Β
Yoshua Bengio is a professor at the Université de Montréal and another winner of the Turing Award.
Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the world's highest h-index, a metric for evaluating the cumulative impact of an author's scholarly output, according to his website.
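For reference, the h-index has a simple standard definition (stated generically here; this formulation is not taken from Bengio's website): an author's h-index is

    h = \max \{\, k : \text{the author has at least } k \text{ papers with at least } k \text{ citations each} \,\}

so an h-index of 100, for example, means at least 100 papers that have each been cited at least 100 times.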
In addition to his academic work, Bengio also co-founded Element AI, a startup that developed AI software for businesses and was acquired by the cloud company ServiceNow in 2020.
Bengio has expressed concern about the rapid development of AI. He was one of 33,000 people who signed an open letter calling for a six-month pause on AI development. Hinton, OpenAI CEO Sam Altman, and Elon Musk also signed the letter.
"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."
When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.
Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."
Sam Altman, the CEO of OpenAI, has catapulted to prominence in artificial intelligence since launching ChatGPT last November.
French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions based on videos of everyday events as a way to give them a form of common sense; the idea is that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.
Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Li's research focuses on machine learning, deep learning, computer vision, and cognitively inspired AI, according to her biography on Stanford's website.
She may be best known for establishing ImageNet, a large visual database designed for research in visual object recognition, and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies including Google, where she was a VP and chief scientist for AI and machine learning, and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.
UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
Russell published "Human Compatible" in 2019, in which he explored how humans and machines could coexist as machines become smarter by the day. Russell contended that the answer lies in designing machines that are uncertain about human preferences, so they wouldn't pursue their own goals above those of humans.
He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC-Berkeley faculty member Peter Norvig.
Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text was left for them to ingest. He also said that advancements in AI could spell the end of the traditional classroom.
Peter Norvig played a seminal role directing AI research at Google.
He spent several years in the early 2000s directing the company's core search algorithms group and later moved into the role of director of research, where he oversaw teams working on machine translation, speech recognition, and computer vision.
Norvig has also rotated through several academic institutions over the years: he is a former faculty member at UC-Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.
Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time, "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.
Timnit Gebru is a computer scientist who's become known for her work in addressing bias in AI algorithms.
Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team, where she published groundbreaking research on biases in machine learning.
But her research also spun into a larger controversy that, she has said, ultimately led to her being let go from Google in 2020. Google didn't comment at the time.
Gebru founded the Distributed AI Research Institute in 2021, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."
She's also warned that the AI gold rush means companies may neglect to implement necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."
British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
The endeavor led to the Google Cat Project, a milestone in deep learning research in which a massive neural network was trained to detect cats in YouTube videos.
Ng also served as chief scientist at the Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.
Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012, and five years later founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.
"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity β it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg.Β
Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. She is a decorated academic, a MacArthur Fellow, a cofounder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.
In Koller's view, the biggest risks AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."
At the same time, she said the benefits are too many and too large to list. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.
Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left in part because Dario, OpenAI's lead safety researcher at the time, was concerned that OpenAI's deal with Microsoft would force it to release products too quickly and without proper guardrails.
At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being implemented by companies like Quora and Notion. Anthropic relies on what it calls a "Triple H" framework in its research, which stands for Helpful, Honest, and Harmless. In practice, that means it relies on human input when training its models, including an approach called constitutional AI, in which a customer outlines basic principles for how the AI should operate.
"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.
Demis Hassabis has said artificial general intelligence will be here in a few years.
After a handful of research stints and a venture in video games, he founded DeepMind in 2010. He sold the AI lab to Google in 2014 for £400 million and has since worked on algorithms to tackle issues in healthcare and climate change; in 2017 he also launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.
Hassabis has said the promise of artificial general intelligence, a theoretical concept that sees AI matching the cognitive abilities of humans, is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that developing AGI should be executed "in a cautious manner using the scientific method."
Mustafa Suleyman, a cofounder of DeepMind, went on to launch Inflection AI.
The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook.
Its chatbot, Pi, which stands for personal intelligence, is trained on large language models similar to OpenAI's ChatGPT or Google's Bard. Pi, however, is designed to be more conversational and to offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.
"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said.Β
USC professor Kate Crawford focuses on the social and political implications of large-scale AI systems.
Crawford is also a senior principal researcher at Microsoft and the author of "Atlas of AI," a book that draws on the breadth of her research to uncover how AI is shaping society.
Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."
She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."
Margaret Mitchell is the chief ethics scientist at Hugging Face.
Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies, including Microsoft and Google.
In late 2020, Mitchell and Timnit Gebru, then the co-lead of Google's Ethical Artificial Intelligence team, published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.
Now, at Hugging Face, an open-source data science and machine learning platform founded in 2016, she's thinking about how to democratize access to the tools necessary to build and deploy large-scale AI models.
In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."
Navrina Singh is the founder of Credo AI, an AI governance platform.
Credo AI is a platform that helps companies make sure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."
At the same time, she contends that algorithms currently lack the human judgment necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."
Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.
Socher believes we have a ways to go before AI development hits its peak or matches anything close to human intelligence.
One bottleneck in large language models is their tendency to hallucinate, a phenomenon in which they convincingly present factual errors as truth. But by forcing them to translate questions into code, essentially having them "program" responses instead of verbalizing them, we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.
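As a rough illustration of the approach Socher describes, here is a minimal sketch in Python; the ask_llm function is a hypothetical stand-in for any model API (it is not You.com's API), and the canned snippet it returns simply simulates what a model might generate:

    # Minimal sketch: route a factual question through generated code so the
    # answer is computed rather than verbalized (and possibly hallucinated).

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real LLM call. To keep the sketch
        # self-contained, it returns a canned snippet of the kind a model
        # might produce for the question below.
        return (
            "from datetime import date\n"
            "answer = (date(2024, 7, 19) - date(2023, 2, 3)).days\n"
        )

    question = "How many days are there between 2023-02-03 and 2024-07-19?"

    # Ask for a program that computes the answer, not a verbalized guess.
    prompt = (
        "Write Python code that computes the answer to this question "
        f"and stores it in a variable named answer:\n{question}"
    )
    generated_code = ask_llm(prompt)

    # Run the generated program and read off the computed result.
    namespace = {}
    exec(generated_code, namespace)  # a real system would sandbox this step
    print(namespace["answer"])      # prints the computed day count

The point of the pattern is that the final number falls out of date arithmetic executed by the interpreter rather than out of the model's next-token guess.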
But that's just a short-term goal. Socher contends that we are years away from anything close to the industry's ambitious bid to create artificial general intelligence. He defines AGI as a form of intelligence that can "learn like humans" and have "the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people," and he said it could take as little as 10 years, or as much as 200, to get there.
And if we really want to move the needle toward AGI, Socher said, humans might need to let go of the reins and their own profit motives, and build AI that can set its own goals.
"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI.Β