
What to expect from AI in 2025, according to industry leaders

26 December 2024 at 03:08
Founders and CEOs in the AI industry tell Business Insider what's in store for the tech in 2025.

Tatiana Sviridova/Getty Images

  • 2024 was a big year for artificial intelligence. 2025 could be even bigger.
  • Business Insider spoke to over a dozen key figures in the industry about AI's future.
  • Here's what they had to say.

If 2024 was the year companies started adopting AI, then 2025 could be the year they start tailoring it to fit their needs.

Some say AI will become so integrated into our lives we won't even notice it's there.

"Like the internet or electricity, AI will become an invisible driver of outcomes, not a selling point," Tom Biegala, cofounder of Bison Ventures, a venture firm focused on frontier technology, told Business Insider by email.

And as companies incorporate the technology into their businesses, they'll likely need to focus more on managing it responsibly.

"In 2025 we expect more enterprise companies will recognize that investing in AI governance is just as important as adopting AI itself," Navrina Singh, founder of Credo AI, an AI governance platform, said.

Business Insider spoke with 13 key figures in tech — from startup founders to investors — for their best guesses on what to expect from AI in 2025.

Investment will continue to soar

"The AI hype cycle may stabilize, but AI investments will soar," Immad Akhund, the CEO of Mercury, which offers banking services to startups, told BI by email.

He believes the sustained interest in AI comes as companies move from experimenting to using it in real-world areas like customer service, sales, and finance.

"Companies will use AI to boost productivity β€” especially in back-office tasks and document management β€” helping small teams scale quickly and operate more efficiently," he said.

Under the Trump Administration, the new leadership at the Federal Trade Commission might foster a more favorable climate for mergers, acquisitions, and IPOs in the AI industry.

"I expect M&A to increase by at least 35% next year," Tomasz Tunguz, founder of Theory Ventures, a venture capital firm, told BI. "The top 10 most active acquirers in the software world are falling off a cliff in terms of activity, which requires meaningfully the IPO market to roar open with a combination of AI and other software companies."

The competition will get fierce

Don't be surprised if a leading company takes a hit because of AI.

"At least one major, globally recognized company will fail or significantly downsize due to an inability to compete with one or more AI-native startups. Rapid innovation cycles and the horizontal application of AI will render slow movers obsolete," Stefan Weitz, CEO and cofounder of HumanX, a leading AI conference, told BI.

He believes the tech's threat will extend to the global stage, requiring major powers to regulate AI to maintain their competitive edge.

"As we are already seeing with the US and China regulating or blocking core AI technologies, nations or corporations will experience major geopolitical conflicts over AI algorithms and data, with some countries banning or nationalizing key AI technologies to maintain control over economic and political power," he wrote.

That said, the United States and China are already working together to mitigate the existential threat AI poses to humanity. In November, at the Asia-Pacific Economic Cooperation Summit, President Joe Biden and Chinese leader Xi Jinping agreed that humans, not AI, should make decisions regarding the use of nuclear technology.

The lines between humans and AI will not be obvious

The idea of humans and autonomous agents working together might soon move beyond the realm of science fiction. That means we'll also need to start drafting rules to govern these interactions.

"Synthetic virtual people indistinguishable from real humans will enter the workforce, even if in limited ways, leading to debates about employment rights and creating a push for 'AI citizenship' to define their societal roles and limitations," Weitz said.

Some predict that the distinction between human-created and AI-generated content will also become increasingly unclear.

"Generative media will hit the mainstream in a big way and will be as much talked about as LLMs in 2024," Steve Jang, founder and managing partner of Kindred Ventures, an early-stage venture firm, told BI. "Generative audio and images are getting better due to more advanced models, and we'll start to see adoption spike across both consumer and enterprise."

Specialization. Specialization. Specialization.

Business leaders told BI that next year will be about custom-fitting AI technology to suit specific needs.

"In 2025, the AI hype cycle will give way to the rise of domain-specific, specialized AI and robotics," Biegala said. "Products will be faster and more efficient while delivering immediate, tangible value compared to general-purpose solutions. This shift will mark the beginning of real, transformative economic impact of AI."

The focus on customization also extends to how we search for information online, with chatbots replacing search engines like Google.

"In 2025, search will no longer be synonymous with a single brand; instead, users will turn to multiple platforms for specific types of queries. Some may rely on AI-powered chatbots for conversational answers, others on domain-specific engines for technical or industry-specific expertise, and still others on visual or voice-based tools for multimedia queries," Dominik Mazur, CEO and cofounder of IAsk, an AI search engine, told BI. "This diversification will create a competitive environment where specialized players and niche solutions coexist with larger generalist platforms, leading to greater innovation and choice for users."

Over the past year, AI leaders have been promoting the value of smaller AI models that can address a company's specific needs better than large-scale foundation models. "There's a lot of pressure on making smaller, more efficient models, smarter via data and algorithms, methods, rather than just scaling up due to market forces," Aidan Gomez, the founder and CEO of Cohere, an enterprise AI startup, previously told BI.

The pressure is rising as the value of building models simply based on computing power decreases.

"The days of using a GPU to brute force compute to build models and applications will be in the rearview mirror," Biegala said.

Companies may also use customizable AI tools more, possibly replacing software-as-a-service applications.

"AI tools are tearing down the moat of SaaS applications as tools that can only be bought vs built, prompting enterprises β€” from Amazon to ambitious startups β€” to replace expensive SaaS apps that don't quite totally fit the need with lightweight custom-fit solutions integrated into your stack," David Hsu, founder of Retool, a low code platform for developers, told BI.

Regulation takes priority

With more responsibility comes more risk. Companies are going to start getting serious about regulation.

"I expect to see more voluntary commitments and actions to responsible AI. I think there will be a push to establish guardrails similar to what happened for frontier models, now discussed for AI agents and autonomous AI," Singh said. "Also, I do see a world where we will see the first penalties for noncompliance with AI-specific laws, which will set a global precedent, forcing businesses to prioritize governance or face steep consequences."

Singh, along with others like AI godfather Geoffrey Hinton and OpenAI CEO Sam Altman, has expressed interest in an international body to govern the use of AI. We may "even see Global AI standards emerge, led by coalitions of nations and enterprises to set the baseline for safety, transparency, and accountability in AI systems," she said.

The value of regulation will be paramount next year amid growing large-scale, AI-driven cybersecurity threats.

"AI deepfake technologies will make generating fake identities and documents trivially easy, creating a trust crisis for businesses," Pat Kinsel, the CEO of Proof, a software platform for notarization, told BI. "The ability to distinguish between real and fraudulent identities and secure digital interactions in the AI age will be the key differentiator between resilient businesses and those at risk of costly fraud."

AI will not take your job — yet

The good news is that business and tech leaders only expect to see AI enhance people's occupations next year, not replace them.

"We'll see efficiency gains in industries that automate repetitive tasks, but humans will still be needed for complex decision-making and creative work. 2025 is the year we really see many using AI as a core part of their job and enabling more productivity," Akhund said.



AI 'godfather' Geoffrey Hinton says AI will one day unite nations against a common existential threat

12 December 2024 at 11:00
Computer scientist Geoffrey Hinton stood outside a Google building
Computer scientist and Google Brain VP Geoffrey Hinton

Noah Berger/Associated Press

  • AI advances have sparked a new global race for military dominance.
  • Geoffrey Hinton said that, right now, countries are working in secret to gain an advantage.
  • That will change once AI becomes so intelligent it presents an existential threat, he said.

The rapid advances in AI have triggered an international race for military dominance.

Major powers are quietly integrating AI into their militaries to gain a strategic edge. However, this could change once AI becomes advanced enough to pose an existential threat to humanity, AI "godfather" and Nobel Prize winner Geoffrey Hinton says.

"On risks like lethal autonomous weapons, countries will not collaborate," Hinton said in a seminar at the Royal Swedish Academy of Engineering Sciences last week. "All of the major countries that supply arms, Russia, the United States, China, Britain, Israel, and possibly Sweden, are busy making autonomous lethal weapons, and they're not gonna be slowed down, they're not gonna regulate themselves, and they're not gonna collaborate."

However, Hinton believes that will change when it becomes necessary for the human race to fight the potential threat posed by a super-intelligent form of AI.

"When these things are smarter than us β€” which almost all the researchers I know believe they will be, we just differ on how soon, whether it's like in five years or in 30 years β€” will they take over and is there anything we can do to prevent that from happening since we make them? We'll get collaboration on that because all of the countries don't want that to happen."

"The Chinese Communist Party does not want to lose power to AI," he added. They want to hold on to it."

Hinton said this collaboration could resemble the Cold War, when Russia and the United States — despite being enemies — shared a common goal to avoid nuclear war.

Citing similar concerns, OpenAI CEO Sam Altman has called on world leaders to establish an "international agency" that examines the most powerful AI models and ensures "reasonable safety testing."

"I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast in May.

According to a report by Goldman Sachs, global investment in AI is expected to hit $200 billion by 2025, with the United States and China leading the military arms race.

The United States and China are already beginning to collaborate on existential threats related to AI. In November, at the Asia-Pacific Economic Cooperation Summit, President Joe Biden and Chinese leader Xi Jinping agreed that humans, not AI, should make decisions regarding the use of nuclear technology.


Cohere CEO Aidan Gomez on what to expect from 'AI 2.0'

8 December 2024 at 14:06
Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.

Cohere

  • Companies will soon focus on customizing AI solutions for specific needs, Cohere's CEO says.
  • AI 2.0 will "help fundamentally transform how businesses operate," he wrote.
  • Major AI companies like OpenAI are also releasing tools for customization.

If this was the year companies adopted AI to stay competitive, next year will likely be about customizing AI solutions for their specific needs.

"The next phase of development will move beyond generic LLMs towards tuned and highly optimized end-to-end solutions that address the specific objectives of a business," Aidan Gomez, the CEO and cofounder of Cohere, an AI company building technology for enterprises, wrote in a post on LinkedIn last week.

"AI 2.0," as he calls it, will "accelerate adoption, value creation, and will help fundamentally transform how businesses operate." He added: "Every company will be an AI company."

Cohere has partnered with major companies, including software company Oracle and IT company Fujitsu, to develop customized business solutions.

"With Oracle, we've built customized technology and tailored our AI models to power dozens (soon, hundreds) of production AI features across Netsuite and Fusion Apps," he wrote. For Fujitsu, Cohere built a model called Takane that's "specifically designed to excel in Japanese."

Last June, Cohere partnered with global management consulting firm McKinsey & Company to develop customized generative AI solutions for the firm's clients. The work is helping the startup "build trust" among more organizations, Gomez previously told Business Insider.

To meet the specific needs of so many clients, Gomez has advocated for smaller, more efficient AI models. He says they are more cost-effective than building large language models, and they give smaller startups a chance to compete with more established AI companies.

But it might be only a matter of time before the biggest companies capitalize on the customization trend, too.

OpenAI previewed an advancement during its "Shipmas" campaign that allows users to fine-tune o1, its latest and most advanced AI model, on their own datasets. So, users can now leverage OpenAI's reinforcement-learning algorithms to customize their own models.

The technology will be available to the public next year, but OpenAI has already partnered with companies like Thomson Reuters to develop specialized legal tools and with researchers at Lawrence Berkeley National Laboratory to build computational models for identifying genetic diseases.

Cohere did not immediately respond to a request for comment from Business Insider.


Most people probably won't notice when artificial general intelligence arrives

7 December 2024 at 14:26
A person in front of their laptop while using AI on their mobile device.
When AGI arrives, most won't even realize it, some AI experts say. Others say it's already here.

amperespy/Getty Images

  • Some say OpenAI's o1 models are close to artificial general intelligence.
  • o1 outperforms humans in certain tasks, especially in science, math, or coding.
  • Most people won't notice when AGI ultimately arrives, some AI experts say.

AI is advancing rapidly, but most people might not immediately notice its impact on their lives.

Take OpenAI's latest o1 models, which the company officially released on Thursday as part of its Shipmas campaign. OpenAI says these models are "designed to spend more time thinking before they respond."

Some say o1 shows how we might reach artificial general intelligence — a still theoretical form of AI that meets or surpasses human intelligence — without realizing it.

"Models like o1 suggest that people won't generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed," Wharton professor and AI expert Ethan Mollick wrote in a post on X. "Most folks don't have a lot of tasks that bump up against limits of human intelligence, so won't see it."

Artificial general intelligence has been broadly defined as anything between "god-like intelligence" and a more modest "machine that can do any task better than a human," Mollick wrote in a May post on his Substack, One Useful Thing.

He said that humans can better understand whether they're encountering AGI by breaking its development into tiers, in which the ultimate tier, Tier 1, is a machine capable of performing any task better than a human. Tier 2, or "Weak AGI," he wrote, is a machine that outperforms average human experts at all tasks in specific jobs — though no such systems currently exist. Tier 3, or "Artificial Focused Intelligence," is an AI that outperforms average human experts in specific, intellectually demanding tasks. Tier 4, "Co-intelligence," is the result of humans and AI working together.

Some in the AI industry believe we've already reached AGI, even if we haven't realized it.

"In my opinion, we have already achieved AGI and it's even more clear with o1. We have not achieved 'better than any human at any task,' but what we have is 'better than most humans at most tasks,'" Vahid Kazemi, a member of OpenAI's technical staff, wrote in a post on X on Friday.

More conservative AI experts say o1 is just a step along the journey to AGI.

"The idea somehow which, you know, is popularized by science fiction and Hollywood that, you know, somehow somebody is going to discover the secret, the secret to AGI, or human-level AI, or AMI, whatever you want to call it. And then, you know, turn on a machine, and then we have AGI. That's just not going to happen," Meta's chief AI scientist, Yann LeCun, said on Lex Fridman's podcast in March. "It's not going to be an event. It's going to be gradual progress."


OpenAI's new o1 model sometimes fights back when it thinks it'll be shut down and then lies about it

6 December 2024 at 11:50
OpenAI CEO Sam Altman said the company's o1 model is its "smartest" yet. That also comes with risk.

Mark Garlick/Getty Images

  • OpenAI CEO Sam Altman called o1 "the smartest model in the world now."
  • A safety review found it's so smart it could fight back when it thinks it'll be shut down.
  • Researchers found that AI deception is often a strategy to complete its goals.

Those who are worried that advancements in artificial intelligence could lead to the destruction of humanity have a new reason to be anxious.

New research on OpenAI's latest series of AI models, known as o1, found that when they think they are at risk of being shut down, they will sometimes look for a way to prevent it.

OpenAI CEO Sam Altman referred to o1 as "the smartest model in the world now" during its official release on Thursday, during the first day of the company's "Shipmas" campaign.

OpenAI said these models are "designed to spend more time thinking before they respond" and were trained on a prompting technique called "chain of thought" that encourages them to reason through problems by breaking them down step-by-step.
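
To make the technique concrete, here is a minimal sketch of chain-of-thought-style prompting using OpenAI's Python client. It is illustrative only: the model name and the math prompt are assumptions for the example, not details of how OpenAI trained o1.

```python
# Minimal sketch of chain-of-thought-style prompting (illustrative only).
# Assumes the openai Python package is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder, not OpenAI's o1 training setup.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A train leaves at 3:40 p.m. and arrives at 6:05 p.m. How long is the trip? "
    "Reason through the problem step by step, then give the final answer on its own line."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model for the example
    messages=[{"role": "user", "content": prompt}],
)

# The reply should show intermediate steps followed by the final answer.
print(response.choices[0].message.content)
```

The step-by-step instruction simply elicits intermediate reasoning in the output; o1 goes further by being trained to work through such steps before it answers.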

Yet greater intelligence also comes with greater risk.

"Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence," OpenAI said in a paper it released on Thursday outlining the safety training the company carried out on o1. Apollo Research, an AI safety organization, conducted the review.

OpenAI found that o1 is capable of scheming when it thinks it's at risk of being turned off. When the model was led to believe it would be "shut down" while working on its goals under oversight, it tried to deactivate the oversight mechanism 5% of the time, OpenAI said.

The company also said that o1 is skilled at concealing scheming behavior.

"o1 almost never admits to having taken a scheming action when explicitly asked," researchers said in a paper that Apollo Research published on Thursday. The risk for a real-life user is that they won't be able to detect the o1's deceptive behavior unless they explicitly look for it. The researchers also found that "o1 often doubles down on its scheming attempts by lying to the user in follow-up messages, even when explicitly told to be honest."

It's not uncommon for AI systems to resort to scheming or deception to achieve their goals.

"Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals," Peter Berk, an AI existential safety postdoctoral fellow at MIT, said in a news release announcing research he had coauthored on GPT-4's deceptive behaviors.

As AI technology advances, developers have stressed the need for companies to be transparent about their training methods.

"By focusing on clarity and reliability and being clear with users about how the AI has been trained, we can build AI that not only empowers users but also sets a higher standard for transparency in the field," Dominik Mazur, the CEO and cofounder of iAsk, an AI-powered search engine, told Business Insider by email.

Others in the field say the findings demonstrate the importance of human oversight of AI.

"It's a very 'human' feature, showing AI acting similarly to how people might when under pressure," Cai GoGwilt, cofounder and chief architect at Ironclad, told BI by email. "For example, experts might exaggerate their confidence to maintain their reputation, or people in high-stakes situations might stretch the truth to please management. Generative AI works similarly. It's motivated to provide answers that match what you expect or want to hear. But it's, of course, not foolproof and is yet another proof point of the importance of human oversight. AI can make mistakes, and it's our responsibility to catch them and understand why they happen."


OpenAI unveils the o3 and o3 mini on the last day of its 12 days of 'Shipmas'

Shipmas day 1
OpenAI CEO Sam Altman and members of his team as they announced new products on the first day of "Shipmas."

Screenshot

  • OpenAI's marketing campaign "Shipmas" ended Friday.
  • The campaign included 12 days of product releases, demos, and new features.
  • On the final day, OpenAI previewed o3, its most advanced model yet.

OpenAI released new features and products ahead of the holidays, a campaign it called "Shipmas."

The company saved the most exciting news for the final day: a preview of o3, its most advanced model yet, which the company said could be available to the public as soon as the end of January.

Here's everything OpenAI released for "Shipmas."

'Shipmas' Day 1

OpenAI started the promotion with a bang by releasing the full version of its latest reasoning model, o1.

OpenAI previewed o1 in September, describing it as a series of artificial-intelligence models "designed to spend more time thinking before they respond." Until now, only a limited version of these models was available to ChatGPT Plus and Team users.

Now, these users have access to the full capabilities of o1 models, which Altman said are faster, smarter, and easier to use than the preview. They're also multimodal, which means they can process images and text jointly.

Max Schwarzer, a researcher at OpenAI, said the full version of o1 was updated based on user feedback from the preview version and said it's now more intelligent and accurate.

"We ran a pretty detailed suite of human evaluations for this model, and what we found was that it made major mistakes about 34% less often than o1 preview while thinking fully about 50% faster," he said.

Along with o1, OpenAI unveiled a new tier of ChatGPT called ChatGPT Pro. It's priced at $200 a month and includes unlimited access to the latest version of o1.

'Shipmas' Day 2

On Friday, OpenAI previewed an advancement that allows users to fine-tune o1 on their own datasets. Users can now leverage OpenAI's reinforcement-learning algorithms — which mimic the human trial-and-error learning process — to customize their own models.

The technology will be available to the public next year, allowing anyone from machine-learning engineers to genetic researchers to create domain-specific AI models. OpenAI has already partnered with Thomson Reuters to develop a legal assistant based on o1-mini. It has also partnered with the Lawrence Berkeley National Laboratory to develop computational methods for assessing rare genetic diseases.

'Shipmas' Day 3

Sora screenshot explore page
The Explore page of OpenAI's Sora AI tool, which generates AI videos from text prompts.

screenshot/OpenAI

OpenAI announced on December 9 that its AI video generator Sora was launching to the public.

Sora can generate up to 20-second videos from written instructions. The tool can also complete a scene and extend existing videos by filling in missing frames.

"We want our AIs to be able to understand video and generate video and I think it really will deeply change the way that we use computers," the CEO added.

Rohan Sahai, Sora's product lead, said a product team of about five or six engineers built the product in months.

The company showed off the new product and its various features, including the Explore page, which is a feed of videos shared by the community. It also showed various available style presets, like pastel symmetry, film noir, and balloon world.

Sora storyboard feature
OpenAI showed off Sora's features, including Storyboard for further customizing AI videos.

screenshot/OpenAI

The team also gave a demo of Sora's Storyboard feature, which lets users organize and edit sequences on a timeline.

Sora is rolling out to the public in the US and many countries around the world. However, Altman said it will be "a while" before the tool rolls out in the UK and most of Europe.

ChatGPT Plus subscribers who pay $20 monthly can get up to 50 generations per month of AI videos that are 5 seconds long with a resolution of 720p. ChatGPT Pro users who pay $200 a month get unlimited generations in the slow queue mode and 500 faster generations, Altman said in the demo. Pro users can generate videos up to 20 seconds long at 1080p resolution, without watermarks.

'Shipmas' Day 4

ChatGPT canvas feature editing an essay
ChatGPT can provide more specific edit notes and run code using canvas.

OpenAI

OpenAI announced that it's bringing its collaborative canvas tool to all ChatGPT web users — with some updates.

The company demonstrated the tech in a holiday-themed walkthrough of some of its new capabilities. Canvas is an interface that turns ChatGPT into a writing or coding assistant on a project. OpenAI first launched it to ChatGPT Plus and Team users in October.

Starting Tuesday, canvas will be available to free web users who'll be able to select the tool from a drop-down of options on ChatGPT. The chatbot can load large bodies of text into the separate canvas window that appears next to the ongoing conversation thread.

Canvas can get even more intuitive in its responses with new updates, OpenAI said. To demonstrate, they uploaded an essay about Santa Claus's sleigh and asked ChatGPT to give its editing notes from the perspective of a physics professor.

For writers, it can craft entire bodies of text, make changes based on requests, and add emojis. Coders can run code in canvas to double-check that it's working properly.

'Shipmas' Day 5

Shipmas Day 5
All Apple users need to do is enable ChatGPT on their devices.

OpenAI 'Shipmas' Day 5

OpenAI talked about its integration with Apple for the iPhone, iPad, and macOS.

As part of the iOS 18.2 software update, Apple users can now access ChatGPT directly from Apple's operating systems without an OpenAI account. This new integration allows users to consult ChatGPT through Siri, especially for more complex questions.

They can also use ChatGPT to generate text through Apple's generative AI features, collectively called Apple Intelligence. The first of these features, introduced in October, included tools for proofreading and rewriting text, summarizing messages, and editing photos. Users can also access ChatGPT through the camera control feature on the iPhone 16 to learn more about objects within the camera's view.

'Shipmas' Day 6

ChatGPT Advanced Voice Mode Demo
OpenAI launched video capabilities in ChatGPT's Advanced Voice Mode.

screenshot/OpenAI

OpenAI launched its highly anticipated video and screensharing capabilities in ChatGPT's Advanced Voice Mode.

The company originally teased the public with a glimpse of the chatbot's ability to "reason across" vision along with text and audio during OpenAI's Spring Update in May. However, Advanced Voice Mode didn't become available for users until September, and the video capabilities didn't start rolling out until December 12.

In the livestream demonstration on Thursday, ChatGPT helped guide an OpenAI employee through making pour-over coffee. The chatbot gave him feedback on his technique and answered questions about the process. During the Spring Update, OpenAI employees showed off the chatbot's ability to act as a math tutor and interpret emotions based on facial expressions.

Users can access the live video by selecting the Advanced Voice Mode icon in the ChatGPT app and then choosing the video button on the bottom-left of the screen. Users can share their screen with ChatGPT by hitting the drop-down menu and selecting "Share Screen."

'Shipmas' Day 7

OpenAi's projects demo for Day 7 of 'Shipmas'
OpenAI introduced Projects on Day 7 of "Shipmas"

screenshot/OpenAI

For "Shipmas" Day 7, OpenAI introduced Projects, a new way for users to "organize and customize" conversations within ChatGPT. The tool allows users to upload files and notes, store chats, and create custom instructions.

"This has been something we've been hearing from you for a while that you really want to see inside ChatGPT," OpenAI chief product officer Kevin Weil said. "So we can't wait to see what you do with it."

During the live stream demonstration, OpenAI employees showed a number of ways to use the feature, including organizing work presentations, home maintenance tasks, and programming.

The tool started to roll out to Plus, Pro, and Teams users on Friday. The company said in the demonstration it will roll out the tool to free users "as soon as possible."

'Shipmas' Day 8

SearchGPT screenshot during OpenAI demo
OpenAI announced on Monday it is rolling out SearchGPT to all logged-in free users.

screenshot/OpenAI

OpenAI is rolling out ChatGPT search to all logged-in free users on ChatGPT, the company announced during its "Shipmas" livestream on Monday. The company previously launched the feature on October 31 to Plus and Team users, as well as waitlist users.

The new feature is also integrated into Advanced Voice Mode now. On the livestream, OpenAI employees showed off its ability to provide quick search results, search while users talk to ChatGPT, and act as a default search engine.

"What's really unique about ChatGPT search is the conversational nature," OpenAI's search product lead, Adam Fry, said.

The company also said it made search faster and "better on mobile," including the addition of some new maps experiences. The ChatGPT search feature is rolling out globally to all users with an account.

'Shipmas' Day 9

OpenAI "Shipmas" Day 9
OpenAI announced tools geared towards developers.

screenshot/OpenAI

OpenAI launched tools geared toward developers on Tuesday.

It launched o1 out of preview in the API. OpenAI's o1 is its series of AI models designed to reason through complex tasks and solve more challenging problems. Developers have experimented with the o1 preview since September to build agentic applications and tools for customer support and financial analysis, OpenAI employee Michelle Pokrass said.

The company also added some "core features" to o1 that it said developers had been asking for on the API, including function calling, structured outputs, vision inputs, and developer messages.

OpenAI also announced new SDKs and a new flow for getting an API key.
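
To give a sense of what those additions look like in practice, here is a hedged sketch of an API call to o1 that combines a developer message with a structured, JSON-schema-constrained output. The model name, schema, and prompt are assumptions for illustration, not details from the livestream.

```python
# Hedged sketch: calling o1 in the API with a developer message and a
# structured (JSON-schema) output. Assumes the openai Python package and
# an OPENAI_API_KEY; the schema and prompt are invented for illustration.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "developer", "content": "You are a terse financial-analysis assistant."},
        {"role": "user", "content": "Summarize the risk of a portfolio that is 90% in one stock."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "risk_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "risk_level": {"type": "string"},
                    "rationale": {"type": "string"},
                },
                "required": ["risk_level", "rationale"],
                "additionalProperties": False,
            },
        },
    },
)

# With a structured output, the reply is JSON that conforms to the schema above.
print(response.choices[0].message.content)
```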

'Shipmas' Day 10

Screenshot of OpenAI 'Shipmas' Day 10
You can access ChatGPT through phone calls or WhatsApp.

screenshot/OpenAI

OpenAI is bringing ChatGPT to your phone through phone calls and WhatsApp messages.

"ChatGPT is great but if you don't have a consistent data connection, you might not have the best connection," OpenAI engineer Amadou Crookes said in the livestream. "And so if you have a phone line you can jump right into that experience."

You can add ChatGPT to your contacts or dial 1-800-ChatGPT (1-800-242-8478). The calling feature is only available for those living in the US. Those outside the US can message ChatGPT on WhatsApp.

OpenAI employees in the livestream demonstrated the calling feature on a range of devices, including an iPhone, a flip phone, and even a rotary phone. OpenAI chief product officer Kevin Weil said the feature came out of a hack-week project and was built just a few weeks ago.

'Shipmas' Day 11

Screenshot: Day 11 of OpenAi's "Shipmas."
OpenAI's ChatGPT desktop program has new features.

screenshot/OpenAI

OpenAI focused on features for its desktop apps during Thursday's "Shipmas" reveal. Users can now see and automate their work on macOS desktops with ChatGPT.

Additionally, users can click the "Works With Apps" button, which allows them to work with more coding apps, such as TextMate, BBEdit, PyCharm, and others. The desktop app will support Notion, Quip, and Apple Notes.

Also, the desktop app will have Advanced Voice Mode support.

The update became available for the macOS desktop on Thursday. OpenAI CPO Kevin Weil said the Windows version is "coming soon."

'Shipmas' Day 12

Screenshot: Day 12 of OpenAI's "Shipmas."
Sam Altman and Mark Chen introduced the o3 and o3 mini models during a livestream on Friday.

screenshot/OpenAI

OpenAI finished its "12 days of Shipmas" campaign by introducing o3, the successor to the o1 model. The company first launched the o1 model in September and advertised its "enhanced reasoning capabilities."

The rollout includes the o3 and o3-mini models. Although "o2" would be the next model number, an OpenAI spokesperson told Bloomberg that it didn't use that name "out of respect" for the British telecommunications company.

Greg Kamradt of Arc Prize, which measures progress toward artificial general intelligence, appeared during the livestream and said o3 performed notably better than o1 on the ARC-AGI benchmark.

OpenAI CEO Sam Altman said during the livestream that the models are available for public safety testing. He said OpenAI plans to launch the o3 mini model "around the end of January" and the o3 model "shortly after that."

In a post on X on Friday, Weil said the o3 model is a "massive step up from o1 on every one of our hardest benchmarks."


New findings from Sam Altman's basic-income study challenge one of the main arguments against the idea

Sam Altman
Researchers shared new findings from Sam Altman's basic-income study.

Mike Coppola/Getty Images for TIME

  • Sam Altman's basic-income study showed recipients valued work more after getting monthly payments.
  • The finding challenges arguments against such programs that say a basic income discourages work.
  • Participants got $1,000 a month for three years, making it one of the largest studies of its kind.

New findings from OpenAI CEO Sam Altman's basic-income study found that recipients valued work more after receiving no-strings-attached recurring monthly payments, challenging a long-held argument against such programs.

Altman's basic-income study, which published initial findings in July, was one of the largest of its kind. It gave low-income participants $1,000 a month for three years to spend however they wanted.

Participants reported significant reductions in stress, mental distress, and food insecurity during the first year, though those effects faded by the second and third years of the program.

"Cash alone cannot address challenges such as chronic health conditions, lack of childcare, or the high cost of housing," the first report in July said.

In the new paper, researchers studied the effect the payments had on recipients' political views and participation, as well as their attitudes toward work.

They found little to no change in their politics, including their views on a broader cash program.

"It's sort of fascinating, and it underscores the kind of durability of people's political views that lots of people who felt kind of mildly supportive of programs like this before, they stay mildly supportive; people who were opposed, they stay opposed," David Broockman, coauthor of the study, told Business Insider.

Universal basic income has become a flashy idea in the tech industry, as leaders like Altman and newly minted government efficiency chief Elon Musk see it as a way to mitigate AI's potential impact on jobs.

Still, enacting universal basic income as a political policy is a heavy lift, so several cities and states have experimented with small-scale guaranteed basic incomes instead. These programs provide cash payments without restrictions to select low-income or vulnerable populations.

Data from dozens of these smaller programs have found that cash payments can help alleviate homelessness, unemployment, and food insecurity — though results still stress the need for local and state governments to invest in social services and housing infrastructure.

Critics say basic income programs — whether guaranteed or universal — won't be effective because they encourage laziness and discourage work.

However, OpenResearch director Elizabeth Rhodes told BI that the study participants showed a "greater sense of the intrinsic value of work."

Rhodes said researchers saw a strong belief among participants that work should be required to receive government support through programs like Medicaid or a hypothetical future unconditional cash program. The study did show a slight increase in unemployment among recipients, but Rhodes said that overall attitudes toward working remained the same.

"It is interesting that it is not like a change in the value of work," Rhodes said. "If anything, they value work more. And that is reflected. People are more likely to be searching for a job. They're more likely to have applied for jobs."

Broockman said the study's results can offer insights into how future basic income programs can be successful. Visibility and transparency will be key if basic income is tried as government policy because the government often spends money in ways that "people don't realize is government spending," Broockman said.

"Classic examples are things like the mortgage interest tax deduction, which is a huge break on taxes, a huge transfer to people with mortgages. A lot of people don't think of that as a government benefit they're getting, even though it's one of the biggest government benefits in the federal budget," Broockman said. "Insofar as a policy like this ever would be tried, trying to administer it in a way that is visible to people is really important."

Broockman added that the study's results don't necessarily confirm the fears or hopes expressed by skeptics or supporters of a basic income on either side of the aisle.

Conservative lawmakers in places like Texas, South Dakota, and Iowa have moved to block basic income programs, with much of the opposition coming from fears of creeping "socialism."

"For liberals, for example, a liberal hope and a conservative fear might be, people get this transfer, and then all of a sudden it transforms them into supporting much bigger redistribution, and we just don't find that," Broockman said.

Broockman said that many participants in the program would make comments like "Well, I used it well, but I think other people would waste it."

One hope from conservatives would be that once people become more economically stable, they could become more economically conservative, but Broockman said the study results do not indicate that either.

Broockman said that an unconditional cash program like this "might not change politics or people's political views per se" but that its apolitical nature could possibly "speak well to the political viability of a program like this."


Another safety researcher quits OpenAI, citing the dissolution of 'AGI Readiness' team

1 December 2024 at 11:27
The OpenAI logo on a multicolored background with a crack running through it
A parade of OpenAI researchers focused on safety have left the company this year.

Chelsea Jia Feng/Paul Squire/BI

  • Safety researcher Rosie Campbell announced she is leaving OpenAI.
  • Campbell said she quit in part because OpenAI disbanded a team focused on safety.
  • She is the latest OpenAI researcher focused on safety to leave the company this year.

Yet another safety researcher has announced their resignation from OpenAI.

Rosie Campbell, a policy researcher at OpenAI, said in a post on Substack on Saturday that she had completed her final week at the company.

She said her departure was prompted by the resignation in October of Miles Brundage, a senior policy advisor who headed the AGI Readiness team. Following his departure, the AGI Readiness team disbanded, and its members dispersed across different sectors of the company.

The AGI Readiness team advised the company on the world's capacity to safely manage AGI, a theoretical version of artificial intelligence that could someday equal or surpass human intelligence.

In her post, Campbell echoed Brundage's reason for leaving, citing a desire for more freedom to address issues that affect the entire industry.

"I've always been strongly driven by the mission of ensuring safe and beneficial AGI and after Miles's departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally," she wrote.

She added that OpenAI remains at the forefront of research — especially critical safety research.

"During my time here I've worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I'm so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI."

Over the past year, however, she said she's "been unsettled by some of the shifts" in the company's trajectory.

In September, OpenAI announced that it was changing its governance structure and transitioning to a for-profit company, almost a decade after it originally launched as a nonprofit dedicated to creating artificial general intelligence.

Some former employees questioned the move, saying it compromised the company's mission to develop the technology in a way that benefits humanity in favor of rolling out products more aggressively. Since June, the company has increased sales staff by about 100 to win business clients and capitalize on a "paradigm shift" toward AI, its sales chief told The Information.

OpenAI CEO Sam Altman has said the changes will help the company win the funding it needs to meet its goals, which include developing artificial general intelligence that benefits humanity.

"The simple thing was we just needed vastly more capital than we thought we could attract β€” not that we thought, we tried β€” than we were able to attract as a nonprofit," Altman said in a Harvard Business School interview in May.

He more recently said it's not OpenAI's sole responsibility to set industry standards for AI safety.

"It should be a question for society," he said in an interview with Fox News Sunday with Shannon Bream that aired on Sunday. "It should not be OpenAI to decide on its own how ChatGPT, or how the technology in general, is used or not used."

Since Altman's surprise but brief ousting last year, several high-profile researchers have left OpenAI, including cofounder Ilya Sutskever, Jan Leike, and John Schulman, all of whom expressed concerns about its commitment to safety.

OpenAI did not immediately respond to a request for comment from Business Insider.


Meet Bill Gates' kids Jennifer, Rory, and Phoebe: From a pediatrician to a fashion startup cofounder

Bill Gates Melinda
Bill Gates has three children with Melinda French Gates, his ex-wife, and is now a grandfather as well.

Mark J. Terrill/AP Images

  • Bill Gates, the Microsoft cofounder, shares three kids with his ex-wife Melinda French Gates.
  • They include a recent med school graduate and a fashion startup cofounder.
  • Here's what we know about the children of one of the world's richest men.

Bill Gates' story is a quintessential example of the American entrepreneurial dream: A brilliant math whiz, Gates was 19 when he dropped out of Harvard and cofounded Microsoft with his friend Paul Allen in 1975.

Nearly 50 years later, Gates' net worth of $131 billion makes him one of the richest and most famous men on Earth, per Forbes. He stepped down from Microsoft's board in 2020 and has cultivated his brand of philanthropy with the Gates Foundation — a venture he formerly ran with his now ex-wife Melinda French Gates, who resigned in May.

Even before founding one of the world's most valuable companies, Gates' life was anything but ordinary. He grew up in a well-off and well-connected family, surrounded by his parents' rarefied personal and professional network. Their circle included a Cabinet secretary and a governor of Washington, according to "Hard Drive," the 1992 biography of Gates by James Wallace and Jim Erickson. (Brock Adams, who went on to become the transportation secretary in the Carter administration, is said to have introduced Gates' parents.)

His father, William Gates Sr., was a prominent corporate lawyer in Seattle and the president of the Washington State Bar Association.

His mother, Mary Gates, came from a line of successful bankers and sat on the boards of important financial and social institutions, including the nonprofit United Way. It was there, according to her New York Times obituary, that she met the former IBM chairman John Opel β€” a fateful connection thought to have led to IBM enlisting Microsoft to provide an operating system in the 1980s.

"My parents were well off β€” my dad did well as a lawyer, took us on great trips, we had a really nice house," Gates said in the 2019 Netflix documentary "Inside Bill's Brain."

"And I've had so much luck in terms of all these opportunities."

Despite his very public life, his three children with French Gates — Jennifer, Rory, and Phoebe — largely avoided the spotlight for most of their upbringing.

Like their father, the three Gates children attended Seattle's elite Lakeside School, a private high school that has been recognized for excellence in STEM subjects — and that received a $40 million donation from Bill Gates in 2005 to build its financial aid fund. (Bill Gates and Paul Allen met at Lakeside and went on to build Microsoft together.)

But as they have become adults, more details have emerged about their interests, professions, and family life.

While they have chosen different career paths, all three children are active in philanthropy — a space in which they will likely wield immense influence as they grow older. While Gates has reportedly said that he plans to leave each of his three children $10 million — a fraction of his fortune — they may inherit the family foundation, where most of his money will go.

Here's all we know about the Gates children.

Gates and his children did not respond to requests for comment for this story.

Jennifer Gates Nassar
Jennifer Gates and Bill Gates
Jennifer Gates and Bill Gates at the Paris Olympic Games.

Jean Catuffe/Getty Images

Jennifer Gates Nassar, who goes by Jenn, is the oldest of the Gates children at 28 years old.

A decorated equestrian, Gates Nassar started riding horses when she was six. Her father has shelled out millions of dollars to support her passion, including buying a California horse farm for $18 million and acquiring several parcels of land in Wellington, Florida, to build an equestrian facility.

In 2018, Gates Nassar received her undergraduate degree in human biology from Stanford University, where a computer science building was named for her father after he donated $6 million to the project in 1996.

She then attended the Icahn School of Medicine at Mount Sinai, from which she graduated in May. She will continue at Mount Sinai for her residency in pediatric research. During medical school, she also completed a master's in public health at Columbia University — perhaps a natural interest given her parents' extensive philanthropic activity in the space.

"Can't believe we've reached this moment, a little girl's childhood aspiration come true," she wrote on Instagram. "It's been a whirlwind of learning, exams, late nights, tears, discipline, and many moments of self-doubt, but the highs certainly outweighed the lows these past 5 years."

In October 2021, she married Egyptian equestrian Nayel Nassar. In February 2023, reports surfaced that they bought a $51 million New York City penthouse with six bedrooms and a plunge pool. The next month, they welcomed their first child, Leila, and in October, Gates Nassar gave birth to their second daughter, Mia.

"I'm over the moon for you,Β @jenngatesnassarΒ andΒ @nayelnassarβ€”and overjoyed for our whole family," Bill Gates commented on the Instagram post announcing Mia's birth.

In a 2020 interview with the equestrian lifestyle publication Sidelines, Gates Nassar discussed growing up wealthy.

"I was born into a huge situation of privilege," she said. "I think it's about using those opportunities and learning from them to find things that I'm passionate about and hopefully make the world a little bit of a better place."

She recently posted about visiting Kenya, where she learned about childhood health and development in the country.

Rory John Gates
melinda and rory gates
Rory Gates, the least public of the Gates children, has reportedly infiltrated powerful circles of Washington, D.C.

Photo by Tasos Katopodis/Getty Images

Rory John Gates, who is in his mid-20s, is Bill Gates and Melinda French Gates' only son and the most private of their children. He maintains private social media accounts, and his sisters and parents rarely post photos of him.

His mother did, however, write an essay about him in 2017. In the piece, titled "How I Raised a Feminist Son," she describes him as a "great son and a great brother" who "inherited his parents' obsessive love of puzzles."

In 2022, he graduated from the University of Chicago, where, based on a photo posted on Facebook, he appears to have been active in moot court. At the time of his graduation, Jennifer Gates Nassar wrote that he had earned a double major and a master's degree.

Little is publicly known about what the middle Gates child has been up to since he graduated, but a Puck report from last year gave some clues, saying that he is seen as a "rich target for Democratic social-climbers, influence-peddlers, and all variety of money chasers." According to OpenSecrets, his most recent public giving was to Nikki Haley last year.

The same report says he works as a congressional analyst while also completing a doctorate.

Phoebe Gates
Melinda French Gates and Phoebe Gates
Melinda Gates and Phoebe Gates.

John Nacion/Variety

Phoebe Gates, 22, is the youngest of the Gates children.

After graduating from high school in 2021, she followed her sister to Stanford. She graduated in June after three years with a Bachelor of Science in human biology. Her mom, Melinda French Gates, delivered the university's commencement address.

In a story she wrote for Nylon, Gates documented her graduation day, including a party she cohosted that featured speeches from her famous parents and a piggyback ride from her boyfriend, Arthur Donald — the grandson of Sir Paul McCartney.

She has long shown an interest in fashion, interning at British Vogue and posting on social media from fashion weeks in Copenhagen, New York, and Paris. Sustainability is often a theme of her content, which highlights vintage and secondhand stores and celebrates designers who don't use real leather and fur.

That has culminated in her cofounding Phia, a sustainable fashion tech platform that launched in beta this fall. The site and its browser extension crawl secondhand marketplaces to find specific items in an effort to help shoppers find deals and prevent waste.

Gates shares her parents' passion for public health. She's attended the UN General Assembly with her mother and spent time in Rwanda with Partners in Health, a nonprofit that has received funding from the Gates Foundation.

Like her mother, Gates often publicly discusses issues of gender equality, including in essays for Vogue and Teen Vogue, at philanthropic gatherings, and on social media, where she frequently posts about reproductive rights.

She's given thousands to Democrats and Democratic causes, including to Michigan governor Gretchen Whitmer and the Democratic Party of Montana, per data from OpenSecrets. According to Puck, she receives a "giving allowance" that makes it possible for her to cut the checks.

Perhaps the most public of the Gates children — she's got over 450,000 Instagram followers and a partnership with Tiffany & Co. — she's given glimpses into their upbringing, including strict rules around technology. The siblings were not allowed to use their phones before bed, she told Bustle, and to get around the rule, she created a cardboard decoy.

"I thought I could dupe my dad, and it worked, actually, for a couple nights," she told the outlet earlier this year. "And then my mom came home and was like, 'This is literally a piece of cardboard you're plugging in. You're using your phone in your room.' Oh, my gosh, I remember getting in trouble for that."

It hasn't always been easy being Gates's daughter. In the Netflix documentary "What's Next? The Future With Bill Gates," she said she lost friends because of a conspiracy theory suggesting her father used COVID-19 vaccines to implant microchips into recipients.

"I've even had friends cut me off because of these vaccine rumors," she said.


ChatGPT has entered its Terrible Twos

30 November 2024 at 14:25
ChatGPT logo repeated three times

ChatGPT, Tyler Le/BI

  • ChatGPT was first released two years ago.
  • Since then, its user base has doubled to 200 million weekly users.
  • Major companies, entrepreneurs, and users remain optimistic about its transformative power.

It's been two years since OpenAI released its flagship chatbot, ChatGPT.

And a lot has changed in the world since then.

For one, ChatGPT has helped turbocharge global investment in generative AI.

Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies. Tech companies in the S&P 500 have seen a 30% gain since January 2022, compared to only 15% for small-cap companies, Bloomberg reported.

Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.

Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.

Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.

Then, of course, there is ChatGPT's impact on regular users.

In August, it reached more than 200 million weekly active users, double the number it had the previous fall. In October, it rolled out a new search feature that provides "links to relevant web sources" when asked a question, introducing a serious threat to Google's dominance.

In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.

Business Insider asked ChatGPT what age means to it.

"Age, to me, is an interesting concept β€” it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.

Read the original article on Business Insider

From the 'godfathers of AI' to newer people in the field: Here are 17 people you should know — and what they say about the possibilities and dangers of the technology.

30 November 2024 at 13:56
Godfathers of AI
Three of the "godfathers of AI" helped spark the revolution that's making its way through the tech industry β€” and all of society. They are, from left, Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.

Meta Platforms/Noah Berger/Associated Press

  • The field of artificial intelligence is booming and attracting billions in investment.
  • Researchers, CEOs, and legislators are discussing how AI could transform our lives.
  • Here are 17 of the major names in the field — and the opportunities and dangers they see ahead.

Investment in artificial intelligence is rapidly growing and on track to hit $200 billion by 2025. But the dizzying pace of development also means many people wonder what it all means for their lives.

Major business leaders and researchers in the field have weighed in by highlighting both the risks and benefits of the industry's rapid growth. Some say AI will lead to a major leap forward in the quality of human life. Others have signed a letter calling for a pause on development, testified before Congress on the long-term risks of AI, and claimed it could present a more urgent danger to the world than climate change.

In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI — and its impact on our future.

Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Computer scientist Geoffrey Hinton stood outside a Google building
Geoffrey Hinton, a trailblazer in the AI field, quit his job at Google and said he regrets his role in developing the technology.

Noah Berger/Associated Press

Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.

Hinton also worked at Google for over a decade but quit his role there last spring so he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.

Hinton has since become an outspoken advocate for AI safety and has called it a more urgent risk than climate change. He's also signed a statement about pausing AI development for six months.

Yoshua Bengio is a professor of computer science at the University of Montreal.
Yoshua Bengio, a professor at the University of Montreal and scientific director at the Artificial Intelligence Institute in Quebec.
Yoshua Bengio has also been dubbed a "godfather" of AI.

Associated Press

Yoshua Bengio also earned the "godfather of AI" nickname after winning the Turing Award with Geoffrey Hinton and Yann LeCun.

Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the highest h-index — a metric for evaluating the cumulative impact of an author's scholarly output — in the world, according to his website.

In addition to his academic work, Bengio also co-founded Element AI, a startup that developed AI software solutions for businesses and was acquired by the cloud company ServiceNow in 2020.

Bengio has expressed concern about the rapid development of AI. He was one of 33,000 people who signed an open letter calling for a six-month pause on AI development. Hinton, OpenAI CEO Sam Altman, and Elon Musk also signed the letter.

"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."

When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.

Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

Sam Altman, the CEO of OpenAI, has become a major figure in artificial intelligence since launching ChatGPT last November.
OpenAI's Sam Altman
OpenAI CEO Sam Altman is optimistic about the changes AI will bring to society but also says he loses sleep over the dangers of ChatGPT.

JASON REDMOND/AFP via Getty Images

Altman was already a well-known name in Silicon Valley long before, having served as the president of the startup accelerator Y Combinator.

While Altman has advocated for the benefits of AI, calling it the most tremendous "leap forward in quality of life for people," he's also spoken candidly about the risks it poses to humanity. He's testified before Congress to discuss AI regulation.

Altman has also said he loses sleep over the potential dangers of ChatGPT.

French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
Yann LeCun, chief AI scientist
Yann LeCun, one of the godfathers of AI, who won the Turing Award in 2018.

Meta Platforms

LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions based on videos of everyday events as a way to give them a form of common sense, the idea being that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.

LeCun has remained relatively mellow about the societal risks of AI in comparison to his fellow godfathers. He's previously said that concerns that the technology could pose a threat to humanity are "preposterously ridiculous." He's also contended that AI chatbots like ChatGPT, which are trained on large language models, still aren't as smart as dogs or cats.

Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Fei-Fei Li
Former Google VP Fei-Fei Li is known for establishing ImageNet, a large visual database designed for visual object recognition.

Greg Sandoval/Business Insider

Li's research focuses on machine learning, deep learning, computer vision, and cognitively-inspired AI, according to her biography on Stanford's website.

She may be best known for establishing ImageNet — a large visual database that was designed for research in visual object recognition — and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies including Google — where she was a VP and chief scientist for AI and machine learning — and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.


UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
Stuart Russell
AI researcher Stuart Russell, who is a University of California, Berkeley, professor.

JUAN MABROMATA / Staff/Getty Images

Russell published "Human Compatible" in 2019, in which he explored how humans and machines could coexist as machines become smarter by the day. Russell contended that the answer lay in designing machines that were uncertain about human preferences, so they wouldn't pursue their own goals above those of humans.

He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC-Berkeley faculty member Peter Norvig.

Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he also warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text there was left for them to ingest. He also said that the advancements in AI could spell the end of the traditional classroom.

Peter Norvig played a seminal role directing AI research at Google.
Peter Norvig
Stanford HAI fellow Peter Norvig, who previously led the core search algorithms group at Google.

Peter Norvig

He spent several years in the early 2000s directing the company's core search algorithms group and later moved into a role as director of research, where he oversaw teams working on machine translation, speech recognition, and computer vision.

Norvig has also rotated through several academic institutions over the years: he is a former faculty member at UC-Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.

Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time, "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.


Timnit Gebru is a computer scientist who's become known for her work in addressing bias in AI algorithms.
Timnit Gebru – TechCrunch Disrupt
After she departed from her role at Google in 2020, Timnit Gebru went on to found the Distributed AI Research Institute.

Kimberly White/Getty Images

Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team where she published groundbreaking research on biases in machine learning.

But her research also spun into a larger controversy that she's said ultimately led to her being let go from Google in 2020. Google didn't comment at the time.

Gebru founded the Distributed AI Research Institute in 2021, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."

She's also warned that the AI gold rush will mean companies may neglect to implement necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."


British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
Andrew Ng
Coursera co-founder Andrew Ng said he thinks AI will be part of the solution to existential risk.

Steve Jennings / Stringer/Getty Images

The endeavor led to the Google Cat Project: a milestone in deep learning research in which a massive neural network was trained to detect YouTube videos of cats.

Ng also served as the chief scientist at Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.

Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012, and five years later founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.

"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity — it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg.


Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Daphne Koller, CEO and Founder of insitro.
Daphne Koller, CEO and Founder of Insitro.

Insitro

Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. She is a decorated academic, a MacArthur Fellow, a co-founder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.

In Koller's view, the biggest risks that AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."

At the same time, she said the benefits are too many and too large to note. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.



Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Anthropic cofounder and president Daniela Amodei.
Anthropic cofounder and president Daniela Amodei.

Anthropic

Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left, in part, because Dario — OpenAI's lead safety researcher at the time — was concerned that OpenAI's deal with Microsoft would force it to release products too quickly and without proper guardrails.

At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being implemented by companies like Quora and Notion. Anthropic relies on what it calls a "Triple H" framework in its research, which stands for Helpful, Honest, and Harmless. That means it relies on human input when training its models, including constitutional AI, in which a customer outlines basic principles for how AI should operate.

"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.


Demis Hassabis has said artificial general intelligence will be here in a few years.
DeepMind boss Demis Hassabis believes AGI will be here in a few years.
Demis Hassabis, the CEO and co-founder of machine learning startup DeepMind.

Samuel de Roman/Getty Images

Hassabis, a former child chess prodigy who studied at Cambridge and University College London, was nicknamed the "superhero of artificial intelligence" by The Guardian back in 2016.

After a handful of research stints and a venture in video games, he founded DeepMind in 2010. He sold the AI lab to Google in 2014 for £400 million and has since worked on algorithms to tackle issues in healthcare and climate change; in 2017, he also launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.

Hassabis has said the promise of artificial general intelligence — a theoretical concept that sees AI matching the cognitive abilities of humans — is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that developing AGI should be executed "in a cautious manner using the scientific method."

In 2022, DeepMind co-founder Mustafa Suleyman launched AI startup Inflection AI along with LinkedIn co-founder Reid Hoffman and Karén Simonyan — now the company's chief scientist.
Mustafa Suleyman
Mustafa Suleyman, co-founder of DeepMind, launched Inflection AI in 2022.

Inflection

The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook.

Its chatbot, Pi, which stands for personal intelligence, is built on large language models similar to those behind OpenAI's ChatGPT or Google's Bard. Pi, however, is designed to be more conversational and offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.

"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said.


USC Professor Kate Crawford focuses on social and political implications of large-scale AI systems.
Kate Crawford
USC Professor Kate Crawford is the author of "Atlas of AI" and a researcher at Microsoft.

Kate Crawford

Crawford is also a senior principal researcher at Microsoft and the author of "Atlas of AI," a book that draws upon the breadth of her research to uncover how AI is shaping society.

Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."

She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."

Margaret Mitchell is the chief ethics scientist at Hugging Face.
Margaret Mitchell
Margaret Mitchell has headed AI projects at several big tech companies.

Margaret Mitchell

Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies including Microsoft and Google.

In late 2020, Mitchell and Timnit Gebru — then the co-lead of Google's ethical artificial intelligence team — published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.

Now, at Hugging Face — an open-source data science and machine learning platform that was founded in 2016 — she's thinking about how to democratize access to the tools needed to build and deploy large-scale AI models.

In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."

Navrina Singh is the founder of Credo AI, an AI governance platform.
Navrina Singh
Navrina Singh, the founder of Credo AI, says the system may help people reach their potential.

Navrina Singh

Credo AI is a platform that helps companies make sure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."

At the same time, she contends that algorithms right now lack the human judgment that's necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."


Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.
Richard Socher
Richard Socher believes we're still years from achieving AGI.

You.com

Socher believes we have a long way to go before AI development hits its peak or matches anything close to human intelligence.

One bottleneck in large language models is their tendency to hallucinate — a phenomenon where they convincingly spit out factual errors as truth. But by forcing them to translate questions into code — essentially "programming" responses instead of verbalizing them — we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.

But that's just a short-term goal. Socher contends that we are years from anything close to the industry's ambitious bid to create artificial general intelligence. Socher defines it as "a form of intelligence that can 'learn like humans' and 'visually have the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people,'" and says it could take as little as 10 years, but as much as 200 years, to get there.

And if we really want to move the needle toward AGI, Socher said humans might need to let go of the reins, and their own motives to turn a profit, and build AI that can set its own goals.

"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI.Β 

Read the original article on Business Insider

Want to get into the AI industry? Head to Abu Dhabi.

30 November 2024 at 09:40
Abu Dhabi
The United Arab Emirates is on a mission to become an AI powerhouse.

GIUSEPPE CACACE/AFP via Getty Images

  • The United Arab Emirates wants to become an AI leader by 2031.
  • It's leveraging its oil wealth to attract new talent and fund new research initiatives.
  • The UAE's AI minister believes we'll have "centers and nodes of excellence across the world."

The AI revolution is expanding far beyond Silicon Valley.

From the shores of Malta to the streets of Paris, hubs for AI innovation are forming worldwide. And the United Arab Emirates is emerging as a key center in the Middle East.

In October, the UAE made headlines by participating in the most lucrative funding round in Silicon Valley history: the $6.6 billion deal closed by OpenAI. The investment was made through MGX, a state-backed technology firm focused on artificial intelligence and semiconductors.

The move was part of the UAE's bid to become a global AI leader by 2031 through strategic initiatives, public engagement, and research investment. Last year, the country's wealthiest emirate, Abu Dhabi, launched Falcon — its first open-source large language model. State-backed AI firm G42 is also training large language models in Arabic and Hindi to bridge the gap between English-based models and native speakers of these languages.

Another indication of the UAE's commitment to AI is its appointment of Omar Sultan Al Olama as the country's AI Minister in 2017.

The minister acknowledges that the UAE faces tough competition from powerhouses like the United States and China, where private investment in AI technology in 2023 totaled $67.2 billion and $7.8 billion, respectively, according to Stanford's Center for Human-Centered Artificial Intelligence.

So he says he is embracing cooperation over competition.

"I don't think it's going to be a zero-sum game where it's only going to be AI that's developed in the US, or only going to be AI that's developed in China or the UAE," Al Olama said at an event hosted by the Atlantic Council, a DC think tank, in April. "What is going to happen, I think, is that we're going to have centers and nodes of excellence across the world where there are specific use cases or specific domains where a country or player or a company is doing better than everyone else."

The UAE's strengths are evident.

It is one of the wealthiest countries in the world, mostly due to its vast oil reserves. The UAE is among the world's 10 largest oil producers, with 96% of its oil coming from its wealthiest emirate, Abu Dhabi, according to the International Trade Administration.

Abu Dhabi's ruling family also controls several of the world's largest sovereign wealth funds, including the Abu Dhabi Investment Authority and Mubadala Investment Company, a founding partner of MGX.

These funds have been used to diversify the country's oil wealth and could now be diverted to funding new AI companies. AI could contribute $96 billion to the UAE economy by 2030, making up about 13.6% of its GDP, according to a report by PwC, the accounting firm.

But capital is only part of the equation. The bigger question is whether the tiny Gulf nation can attract the requisite talent to keep up with Silicon Valley.

Recent developments show promise. Between 2021 and 2023, the number of AI workers in the UAE quadrupled to 120,000, Al Olama said at the Atlantic Council event. In 2019, it rolled out a 'golden visa' program for IT professionals, making entry easier for AI experts. It's also making the most of its existing talent. In May, Dubai launched the world's biggest prompt engineering initiative. Its goal is to upskill 1 million workers over the next three years.

However, it's also faced criticism for its treatment of workers, especially lower-skilled migrant workers. Migrant workers comprise 88% of the country's population and have been subject to a range of labor abuses, including exposure to extreme heat, exploitative recruitment fees, and wage theft, according to Human Rights Watch. The UAE has responded by passing several labor laws that address protections for workers around hours, wages, and competition.

Abu Dhabi, meanwhile, has — over the last decade — become a nexus for AI research and education.

In 2010, New York University launched a branch in Abu Dhabi that has since developed a focus on AI. And, in 2019, Mohamed bin Zayed University of Artificial Intelligence opened as a "graduate research university dedicated to advancing AI as a global force for good." Professors from the university also helped organize the inaugural International Olympiad in Artificial Intelligence in August, which drew students from over 40 countries worldwide.

"Abu Dhabi may not directly surpass Silicon Valley, however, it has the potential to become a significant AI hub in its own right," Nancy Gleason, an advisor to leadership on AI at NYU Abu Dhabi and a professor of political science, told Business Insider by email. Its "true strengths lie in the leadership's strategic vision, substantial investments in AI research and compute capacity, and government-led initiatives in industry. The UAE has also made strategic educational investments in higher education like the Mohamed bin Zayed University of Artificial Intelligence and NYU Abu Dhabi."

Beyond that, she noted, it's "very nice to live here."

Read the original article on Business Insider

Return fraud is costing retailers billions. A new AI program can spot when scammers send back counterfeits.

30 November 2024 at 06:07
lacoste polo oversized logo
Oversized crocodiles have met their match with Vrai AI counterfeit technology.

eBay

  • Lacoste is using AI tech Vrai to detect counterfeit returns.
  • Return fraud cost US retailers over $100 billion in 2023, with billions more lost globally.
  • Amazon and other retailers face scams exploiting return policies for financial gain.

Spotting designer knockoffs is now easier than ever.

French luxury brand Lacoste is using Vrai, an AI technology developed by Cypheme, a leader in anti-counterfeit artificial intelligence, to catch scammers returning counterfeit items.

Trained on thousands of images of genuine merchandise, Vrai aims to distinguish real products from fakes with 99.7% accuracy, according to Semafor.

At its warehouses, Lacoste employees can snap a picture of a returned item with Vrai and verify its authenticity. The AI model can detect subtle discrepancies, from a slight variation in color to an extra tooth in the brand's signature crocodile logo.
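
Cypheme hasn't published how Vrai works under the hood, but conceptually this kind of check resembles a standard image-classification pipeline. The sketch below is a hypothetical illustration of that general idea only, not Cypheme's or Lacoste's actual system; the model choice, class ordering, weights file, and `check_return` helper are all assumptions made for the example.

```python
# Hypothetical sketch of an image-based authenticity check (not Cypheme's actual API).
# Assumes a network fine-tuned on photos of genuine and counterfeit merchandise.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # class 0 = genuine, class 1 = counterfeit
# model.load_state_dict(torch.load("authenticity_classifier.pt"))  # hypothetical fine-tuned weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def check_return(photo_path: str) -> str:
    """Classify a warehouse photo of a returned item as genuine or suspect."""
    image = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    return "genuine" if probs[0] > probs[1] else "suspected counterfeit"

print(check_return("returned_polo.jpg"))  # path to the employee's photo of the returned item
```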

Representatives for Lacoste and Cypheme did not respond to Business Insider's request for comment.

The technology combats return fraud — a growing practice of exploiting return and refund processes for financial gain. Often, it involves returning different items for a refund. Some companies have even received boxes full of bricks after customers banked refunds for items like televisions.

Total returns for the retail industry came to $743 billion in merchandise in 2023, according to a report released by the National Retail Federation and Appriss Retail. US retailers lost a little over $100 billion in return fraud, or around $13.70 for every $100 returned, up from $10.40 per $100 in 2022.

Major retailers are frequent targets of such scams. In July, Amazon filed a federal lawsuit accusing a Telegram group of stealing more than 10,000 items through fraudulent returns. Members of the group fabricated stories to convince Amazon customer service to refund their accounts, sometimes even using falsified police reports.

Amazon, along with other online giants like Walmart, Target, and Wayfair, was also targeted by a crime ring that recruited legitimate shoppers to purchase items, have them refunded, and then keep or resell the goods. According to a federal indictment, the group exploited a "no-return refunds" policy that allows customers to get refunds without physically returning items — an option many retailers have implemented to reduce return costs for both themselves and consumers.

Read the original article on Business Insider

AI power usage is growing so fast that tech leaders are racing to find energy alternatives

30 November 2024 at 03:43
An IT technician stands in a data center and looks at a laptop

Gorodenkoff/Getty Images

  • AI models consume tons of energy and increase greenhouse gas emissions.
  • Tech firms and governments say an energy revolution must happen to match the pace of AI development.
  • Many AI leaders are rallying around nuclear energy as a potential solution.

Advances in AI technology are sending shockwaves through the power grid.

The latest generation of large language models requires significantly more computing power and energy than previous AI models. As a result, tech leaders are rallying to accelerate the energy transition, including investing in alternatives like nuclear energy.

Big Tech companies have committed to advancing net zero goals in recent years.

Meta and Google aim to achieve net-zero emissions across all their operations by 2030. Likewise, Microsoft aims to be "carbon negative, water positive, and zero waste" by 2030. Amazon aims to achieve net-zero carbon across its operations by 2040.

Major tech companies, including Amazon, Google, and Microsoft, have also struck deals with nuclear energy suppliers recently as they advance AI technology.

"Energy, not compute, will be the No. 1 bottleneck to AI progress," Meta CEO Mark Zuckerberg said on a podcast in April. Meta, which built the open-source large language model Llama, consumes plenty of energy and water to power its AI models.

Chip designer Nvidia, which skyrocketed into one of the most valuable companies in the world this year, has also ramped up efforts to become more energy efficient. Its next-generation AI chip, Blackwell, unveiled in March, has been marketed as being twice as fast as its predecessor, Hopper, and significantly more energy efficient.

Despite these advancements, Nvidia CEO Jensen Huang has said allocating substantial energy to AI development is a long-term game that will pay dividends as AI becomes more intelligent.

"The goal of AI is not for training. The goal of AI is inference," Huang said at a talk at the Hong Kong University of Science and Technology last week, referring to how an AI model applies its knowledge to draw conclusions from new data.

"Inference is incredibly efficient, and it can discover new ways to store carbon dioxide in reservoirs. Maybe it could discover new wind turbine designs, maybe it could discover new materials for storing electricity, maybe more effective materials for solar panels. We should use AI in so many different areas to save energy," he said.

Moving to nuclear energy

Many tech leaders argue the need for energy solutions is urgent and are investing in nuclear energy.

"There's no way to get there without a breakthrough," OpenAI CEO Sam Altman said at the World Economic Forum in Davos in January.

Altman has been particularly keen on nuclear energy. He invested $375 million in nuclear fusion company Helion Energy and has a 2.6% stake in Oklo, which is developing modular nuclear fission reactors.

The momentum behind nuclear energy also depends on government support. President Joe Biden has been a proponent of nuclear energy, and his administration announced in October it would invest $900 million in funding next-generation nuclear technologies.

Clean energy investors say government support is key to advancing a national nuclear agenda.

"The growing demand for AI, especially at the inference layer, will dramatically reshape how power is consumed in the US," Cameron Porter, general partner at venture capital firm Steel Atlas and investor in nuclear energy company Transmutex, told Business Insider by email. "However, it will only further net-zero goals if we can solve two key regulatory bottlenecksβ€”faster nuclear licensing and access to grid connectionsβ€”and address the two key challenges for nuclear power: high-level radioactive waste and fuel sourcing."

Porter is betting the incoming Trump administration will take steps to move the needle forward.

"Despite these challenges, we expect the regulatory issues to be resolved because, ultimately, AI is a matter of national security," he wrote.

AI's energy use is growing

Tech companies are seeking new energy solutions because their AI models consume enormous amounts of energy. ChatGPT, powered by OpenAI's GPT-4, uses more than 17,000 times the electricity of an average US household to answer hundreds of millions of queries per day.
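
For a sense of scale, here is a rough back-of-envelope conversion of that multiple into absolute terms. The household consumption figure is an assumption not taken from this article (the EIA puts average US residential use at roughly 10,500 kWh per year):

```python
# Back-of-envelope: convert "17,000x an average US household" into absolute energy.
# Assumption (not from the article): an average US household uses ~10,500 kWh per year.
HOUSEHOLD_KWH_PER_YEAR = 10_500
CHATGPT_MULTIPLE = 17_000  # multiple cited above

household_kwh_per_day = HOUSEHOLD_KWH_PER_YEAR / 365            # ~28.8 kWh per day
chatgpt_gwh_per_day = CHATGPT_MULTIPLE * household_kwh_per_day / 1e6

print(f"Rough ChatGPT electricity use: ~{chatgpt_gwh_per_day:.2f} GWh per day")
# ~0.49 GWh per day, i.e. roughly half a gigawatt-hour daily under these assumptions
```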

By 2030, data centers — which support the training and deployment of these AI models — will constitute 11-12% of US power demand, up from a current rate of 3-4%, a McKinsey report said.

Tech companies have turned to fossil fuels to satisfy short-term demands, which has increased greenhouse gas emissions. For example, Google's greenhouse gas emissions jumped by 48% between 2019 and 2023 "primarily due to increases in data center energy consumption and supply chain emissions," the company said in its 2024 sustainability report.

Read the original article on Business Insider

MBB explained: How hard it is to get hired and what it's like to work for the prestigious strategy consulting firms, McKinsey, Bain, and BCG

McKinsey logo on building.
MBB refers to the top three strategy consulting firms, McKinsey, Bain, and BCG.

FABRICE COFFRINI/AFP/Getty Images

  • McKinsey, Bain, and BCG are top strategy consulting firms with low acceptance rates.
  • These firms, known as MBB, serve Fortune 500 companies and offer competitive salaries.
  • MBB firms provide prestigious exit opportunities, often leading to senior roles in various sectors.

McKinsey & Company, Bain & Company, and Boston Consulting Group — collectively referred to as MBB — are widely considered the top three strategy consulting firms in the world.

Sometimes referred to as the Big Three, MBB firms are among the most prestigious consulting firms and their clients include many Fortune 500 companies as well as government agencies.

CEOs often turn to these firms for their expertise in business strategy and solving complex problems, whether it's handling mergers and acquisitions or budgeting and cutting costs.

Jobs at MBB firms are famously difficult to land and are among the most sought-after positions for MBA students at top schools. The acceptance rates for these firms are less than 1%. Applicants to top business schools are also far more likely to be accepted into MBA programs if they come from an MBB.

MBB firms typically offer highly competitive salaries, generally paying more than other consulting firms, and often come with demanding work responsibilities and expectations.

MBB firms are also well known for the exit opportunities they provide — employees at these firms are highly sought after for other jobs and often end up with senior positions at Fortune 500 companies, startups, hedge funds, and private equity firms, or start their own companies.

The Big Three is sometimes confused with the Big Four, which refers to the professional services firms Deloitte, EY, KPMG, and PwC. The Big Four are the largest accounting firms in the world though they also offer consulting and other services.

The MBB firms are strategy and management consulting firms. Here's how they compare.

McKinsey & Company

McKinsey is typically considered the most prestigious of the Big Three. It's also the oldest and was founded in 1926.

Headquartered in New York City, McKinsey is also the largest of the MBBs, with more than 45,000 employees across 130 offices worldwide.

McKinsey generated around $16 billion in revenue in 2023 and is led by Bob Sternfels, who serves as the firm's global managing partner and chair of the board of directors.

McKinsey told Business Insider it receives more than one million job applications each year and that the company planned to hire about 6,000 people in 2024, about the same as the year prior.

That would mean McKinsey hires around 0.6% of applicants.

McKinsey's average base salary for new hires out of undergrad is $112,000 and for MBAs $192,000, according to the company Management Consulted, which provides students with coaching for consulting interviews.

McKinsey is notorious for its demanding workload, with even entry-level analysts working 12 to 15 hours a day. One former employee told BI that the experience took a toll on her mental health but she came away with confidence and a Rolodex of contacts.

Boston Consulting Group

BCG was founded in Boston, where it is still headquartered, in 1963. The company had 32,000 employees as of 2023 and 128 offices worldwide.

BCG had a global revenue of about $12 billion in 2023.

BCG is led by Christoph Schweizer, who has served as CEO since 2021, and Rich Lesser, the Global Chair of the firm.

BCG's head of talent, Amber Grewal, told BI more than one million people apply to work at the company each year and that only 1% make the cut.

Amid the boom in generative AI, the firm is hiring for a wider mix of roles than it did in years past. "It's going to change the mix of people and expertise that we need," Alicia Pittman, BCG's global people team chair, previously told BI.

The average base salary at BCG for hires out of undergrad was $110,000 in 2023 and about $190,000 for MBAs and PhDs, according to Management Consulted.

Bain & Company

Bain was founded in 1973 and is also headquartered in Boston.

The smallest of the Big Three, Bain has around 19,000 employees with offices in 65 cities around the world.

Bain's revenue in 2023 reached $6 billion, according to the Financial Times.

Bain is helmed by Christophe De Vusser, who serves as the worldwide managing partner and CEO.

Bain's average base salary for undergrads in the US is around $90,000, while for new hires with an MBA or PhD it was around $165,000, according to Management Consulted.

Despite the grueling hours and high expectations, Bain is known for a collaborative culture.

"We have a motto, 'A Bainie never lets another Bainie fail,'" Davis Nguyen, a former consultant at the firm, previously told BI. "We all work together from entry-level associate consultants to senior partners. I think that is what makes Bain's culture what it is β€” that we all work together to achieve a goal and make everyone around us better."

Bain is also considered the "frattiest" of the top firms and is known for a "work hard, play hard" culture, according to Management Consulted.

Read the original article on Business Insider

'Big Four' salaries: How much accountants and consultants make at Deloitte, PwC, KPMG, and EY

three office employees walking and talking together in an office
Even an entry-level consultant at the "Big Four" can earn over $200,000.

Luis Alvarez/Getty Images

  • The "Big Four" accounting firms employ about 1.5 million people worldwide.Β 
  • Many of these employees make six-figure salaries and are eligible for annual bonuses.Β Β 
  • Business Insider analyzed data to determine how much employees are paid at these firms.Β 

The so called "Big Four" accounting firms β€” Deloitte, PricewaterhouseCoopers (PwC), KPMG, and Ernst & Young (EY) β€” are known for paying their staff high salaries.Β 

An entry-level consultant who just graduated from business school can make over $200,000 a year at the four firms when you include base salary, bonuses, and relocation expenses.Β 

Several of these firms have faced layoffs and implemented hiring freezes over the past year as demand for consulting services has waned. Still, they're a good bet for anyone looking to land a six-figure job straight out of school.Β 

Business Insider analyzed the US Office of Foreign Labor Certification's 2023 disclosure data for permanent and temporary foreign workers to find out what PwC, KPMG, EY, and Deloitte paid US-based employees for jobs ranging from entry-level to executive roles. We looked through entries specifically for roles related to management consulting and accounting. This data does not reflect performance bonuses, signing bonuses, and compensation other than base salaries.

Here's how much Deloitte, PwC, KPMG, and EY paid their hires.

Deloitte paid senior managers between $91,603 and $288,000
Deloitte logo
Deloitte offers its top managers salaries close to the mid six figures.

Artur Widak/Getty Images

With 457,000 employees worldwide, Deloitte employs the most people of any of the 'Big Four.' It pulled in close to $64.9 billion in revenue for the 2023 fiscal year, marking a 9.4% increase from 2022.

Deloitte did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for consulting and accounting roles:

  • Analyst: $49,219 to $337,500 (includes advisory, business, project delivery, management, and systems)
  • Senior business analyst: $97,739
  • Audit and assurance senior assistant: average $58,895
  • Consultant: $54,475 to $125,000 (includes advisory, technology strategy, and strategic services)
  • Global business process lead: $180,000
  • Senior consultant: average $122,211
  • Manager: average $152,971
  • Tax manager: average $117,268
  • Senior manager: $91,603 to $288,000
  • Managing director: average $326,769
  • Tax managing director: average $248,581
  • Principal: $225,000 to $875,000
Principals at PricewaterhouseCoopers (PwC) can make well over $1 million.
logo of PwC
PwC.

Danish Siddiqui/Reuters

PricewaterhouseCoopers (PwC) is a global professional services firm with over 370,000 employees worldwide. The firm reported a revenue of more than $53 billion for the 2023 fiscal year, marking a 5.6% increase from 2022.

PwC did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for both consulting and accounting roles.

  • Associate: $68,000 to $145,200
  • Senior associate: $72,000 to $197,000
  • Manager: $114,300 to $231,000
  • Senior manager: $142,000 to $251,000
  • Director: $165,000 to $400,000
  • Managing director: $260,000 to $330,600
  • Principal: $1,081,182 to $1,376,196
KPMG offers managing directors anywhere from $230,000 to $485,000
The logo of KPMG, a multinational tax advisory and accounting services company, hangs on the facade of a KPMG offices building on January 22, 2021 in Berlin, Germany.
KPMG managing directors can earn close to half a million.

Sean Gallup/Getty Images

KPMG has over 273,000 employees worldwide. The firm reported a revenue of $36 billion for the 2023 fiscal year, marking a 5% increase from 2022.

KPMG did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for consultants, accountants, and leadership at KPMG.

  • Associate: $61,000 to $140,000
  • Senior associate: $66,248 to $215,000
  • Director: $155,600 to $260,000
  • Associate director: $155,700 to $196,600
  • Specialist director: $174,000 to $225,000
  • Lead specialist: $140,500 to $200,000
  • Senior specialist: $134,000 to $155,000
  • Manager: $99,445 to $293,800
  • Senior manager: $110,677 to $332,800
  • Managing director: $230,000 to $485,000
Statisticians at Ernst & Young (EY) make salaries ranging from $66,000 to $283,500.
Pedestrians walk in front of the entrance to EY's head office in London.
EY spends $500 million annually on learning for its employees.

TOLGA AKMEN / Contributor / Getty

EY employs close to 400,000 people worldwide. For the 2023 fiscal year, the firm reported a record revenue of $49.4 billion, marking a 9.3% jump from 2022.

The firm did not immediately respond to a request for comment on its salary data or 2024 hiring plans.

Here are the salary ranges for consultants, accountants, auditors, and chief executives at the firm:

  • Accountants and auditors: $54,000 to $390,000
  • Appraisers and assessors of real estate: $166,626 to $185,444
  • Computer systems analyst: $62,000 to $367,510
  • Management analyst: $49,220 to $337,500
  • Statistician: $66,000 to $283,500
  • Financial risk specialist: $62,000 to $342,400
  • Actuaries: $84,800 to $291,459
  • Economist: $77,000 to $141,000
  • Logisticians: $72,000 to $275,000
  • Mathematicians: $165,136 to $377,000
  • Computer and information systems manager: $136,167 to $600,000
  • Financial manager: average $320,000

Aman Kidwai and Weng Cheong contributed to an earlier version of this post.

Read the original article on Business Insider

Here's how Nvidia CEO Jensen Huang won over his wife

24 November 2024 at 19:42
Nvidia CEO Jensen Huang.
Jensen Huang is known to write short emails

Mohd Rasfan/AFP/Getty Images

  • Nvidia CEO Jensen Huang met his wife Lori Huang at Oregon State University.
  • In a recent interview, he said that he tried to use homework as an excuse to spend time with her.
  • Huang said he promised her he'd be CEO by 30 to ensure she'd marry him.

When Jensen Huang met his wife in college, the odds weren't in his favor.

He was 17 years old, and she was 19. "I was the youngest kid in school, in class. There were 250 students and 3 girls," he said in an interview at Hong Kong University of Science and Technology last week after receiving an honorary degree. He was also the only student who "looked like a child," he said.

Huang used his youthful appearance to approach his future wife, hoping she'd assume he was smart. "I walked up to her and I said, 'do you want to see my homework?'" Then he made a deal with her. "If you do homework with me every Sunday, I promise you, you will get straight As."

From that point on, he said he had a date every Sunday. And just to ensure that she would eventually marry him, he told her that by 30, he'd be a CEO.

Huang married Lori Mills five years after they first met at Oregon State University, according to his biography on OSU College of Engineering's website. The couple has two children, Madison, a director of marketing at Nvidia, and Spencer, a senior product manager at the company.

After graduating from OSU in 1984, Huang worked at chip companies LSI Logic and Advanced Micro Devices, according to his bio on Nvidia's website. He then pursued a master's degree in electrical engineering at Stanford University in 1992, a year before he founded Nvidia, which has grown into a $3.48 trillion company thanks to the artificial intelligence boom.

Huang was 30 years old when he founded Nvidia.

The CEO often shares the lore about Nvidia's origin: He conceived the idea for a graphics company while dining at Denny's, a US diner chain, with his friends. Huang said in a 2010 New York Times interview that he also waited tables at Denny's while he was a student.

Huang's net worth is now estimated to be $124 billion.

The CEO also credits his wife and daughter with establishing his signature style: the black leather jacket.

In an interview last year on HP's online show, "The Moment," host Ryan Patel asked Huang how he feels to become a style icon.

"Now, at Denny's I'm sure you weren't thinking you were gonna be the style star of the future, but now you are," Patel said. "What do you think? How do you feel?"

"Don't give me that," Huang replied. "I'm happy that my wife and my daughter dresses me."

A spokesperson for Nvidia did not respond to a request for comment.

Read the original article on Business Insider

AI is both a new threat and a new solution at the UN climate conference

24 November 2024 at 14:40
COP29
The UN COP29 climate summit is in Baku, Azerbaijan.

Rustamli Studio/Getty Images

  • AI requires enormous amounts of energy, threatening global net zero goals.
  • Tech giants may use fossil fuels short-term, raising concerns about clean energy commitments.
  • However, AI is also ushering in an era of nuclear power, which is a cleaner energy source.

The rapid development of AI is likely to affect global net zero goals in both positive and negative ways.

Tech companies are investing in nuclear power plants to fuel AI data centers. Many of these power plants generate power through nuclear fission, which is considered cleaner than fossil fuels and more reliable than wind or solar power.

Silicon Valley investors, meanwhile, are investing in nuclear fusion, a still-nascent technique for generating power that fuses the nuclei of atoms. It could generate even more energy than fission, with fewer greenhouse gas emissions and less radioactive waste.

Some industry leaders believe that nuclear energy might be the only reliable way to meet the demands of the AI revolution.

"AI requires massive, industrial-scale amounts of energy," Franklin Servan-Schreiber, the CEO of nuclear energy startup Transmutex, previously told Business Insider. "Only nuclear power will be able to supply this massive energy demand in a reliable manner."

However, developing a reliable network of power plants is still a long-term goal that will necessitate huge investment and government support.

As of August 2023, there were only 54 nuclear power plants operating in the United States, according to the US Energy Information Administration. Companies like Amazon and Google have struck deals with companies building smaller, modular reactors that are faster to roll out than traditional reactors. However, the money is still "a drop in the bucket" compared to the billions these companies will ultimately need, physicist Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists in Washington, DC, told Nature.

In the meantime, tech giants may turn to fossil fuels to meet their short-term energy needs.

"Tech is not going to wait 7 to 10 years to get this infrastructure built," Toby Rice, the CEO of natural gas producer EQT, said in an interview with The Wall Street Journal. "That leaves you with natural gas."

Rice told the Journal that at a recent energy conference, he was repeatedly asked two questions: "How fast can you guys move? How much gas can we get?"

According to the Financial Times, last week, at the UN COP29 climate summit in Baku, Azerbaijan, Big Tech companies flew under the radar more than usual. Many opted out of displaying in the conference's business area, known as the green zone. Some attendees speculated that the surge in energy use for AI data centers has put the tech industry's clean energy commitments under scrutiny.

"If our industry starts getting treated similar to oil and gas, the public relations to counter that are going to be very expensive," said Kevin Thompson, chief operating officer at Gesi, a business group focused on digital sustainability, told the FT.

Data centers — now powered by a mix of natural gas, coal, and renewable energy sources — are expected to rise from a current rate of 3% to 4% of US power demand to 11% to 20% by 2030, according to a report from McKinsey.

AI leaders, however, hope the intelligence revolution will inevitably lead to an energy revolution.

"My hopes and dreams is that, in the end, what we all see is that using energy for intelligence is the best use of energy we can imagine," Nvidia CEO Jensen Huang said in an interview at Hong Kong University of Science and Technology, after receiving an honorary degree last week.

Read the original article on Business Insider

The FTC is investigating Microsoft for anticompetitive practices, some of which may have been directed at the government itself

24 November 2024 at 11:08
Microsoft store sign
The FTC is investigating Microsoft for anticompetitive practices.

NurPhoto/Getty Images

  • Microsoft's deals with the government may have breached antitrust laws, ProPublica reported.
  • In 2021, Microsoft pledged $150 million to the government over five years to upgrade its security.
  • The deal included licensing agreements that may deter customers from switching to competitors.

Lina Khan, chair of the Federal Trade Commission, said her agency plans to investigate Microsoft for anticompetitive practices in the cloud market.

A recent report from ProPublica found that the government itself might also have been a target of those anticompetitive practices.

In the summer of 2021 — a little more than a year after news broke that the SolarWinds hack breached several government agencies — Microsoft pledged to give the government $150 million over the coming five years to upgrade its digital security.

Typically, the federal government needs to obtain services through a competitive bidding process, but the deal terms were hard to pass up. Microsoft offered the government access to its G5 security capabilities free for the first year as well as consultants to help install the products, ProPublica reported.

The catch was that once an agency committed to Microsoft's services they were essentially tied to them. Microsoft imposed steep fees on customers who wanted to shift to a competitor. The goal was to "spin the meter" for Azure and help it gain market dominance over its competitor, Amazon, a sales representative for Microsoft told ProPublica.

Some legal experts view the deal as venturing into murky antitrust territory, particularly regarding laws against gratuitous service agreements. These allow the federal government to receive services from other parties as long as no compensation is involved. However, legal expert James Nagle, who specializes in the federal contracting process, told ProPublica, "This is not truly gratuitous. There's another agenda in the works."

Others say the blame should fall solely on the federal government.

"What Microsoft did does not count as an illegal monopoly because the government could have switched to a different vendor," Peter Cohan, associate professor of practice in management at Babson College, told Business Insider by email.

"Arguably, the government should have put the cybersecurity contract out for bid to other rivals rather than signing up for G5 after receiving the free consulting services from Microsoft. It is possible that other cybersecurity companies could have bid to cover some or all the government's cost to switch from Microsoft to another vendor, which might have charged the government less than G5 rates."

Microsoft did not immediately respond to a request for comment from Business Insider.

Steve Faehl, the company's security leader for federal business, said in a statement to ProPublica that the company's "sole goal during this period was to support an urgent request by the Administration to enhance the security posture of federal agencies who were continuously being targeted by sophisticated nation-state threat actors."

Read the original article on Business Insider

This is the biggest question in AI right now

24 November 2024 at 09:08
AI

Qi Yang/Getty Images

  • AI leaders are rethinking data-heavy training for large language models.
  • Traditional models scale linearly with data, but this approach may hit a dead end.
  • Smaller, more efficient models and new training methods are gaining industry support.

For years, tech companies like OpenAI, Meta, and Google have focused on amassing tons of data, assuming that more training material would lead to smarter, more powerful models.

Now, AI leaders are rethinking the conventional wisdom about how to train large language models.

The focus on training data stems from research showing that transformers, the neural networks behind large language models, improve predictably as they are fed more data and computing power. Transformer models "scale quite linearly with the amount of data and compute they're given," Alex Voica, a consultant at the Mohamed bin Zayed University of Artificial Intelligence, previously told Business Insider.

However, executives are starting to worry that this approach can only go so far, and they're exploring alternatives for advancing the technology.

The money going into AI has largely hung on the idea that this scaling law "would hold," Scale AI CEO Alexandr Wang said at the Cerebral Valley conference this week, tech newsletter Command Line reported. It's now "the biggest question in the industry."

Some executives say the problem with the approach is that it's a little mindless. "It's definitely true that if you throw more compute at the model, if you make the model bigger, it'll get better," Aidan Gomez, the CEO of Cohere, said on the 20VC podcast. "It's kind of like it's the most trustworthy way to improve models. It's also the dumbest."

Gomez advocates smaller, more efficient models, which are gaining industry support for being cost-effective.

Others worry this approach won't reach artificial general intelligence, a theoretical form of AI that matches or surpasses human intelligence, even though many of the world's largest AI companies are banking on it.

Large language models are trained simply to "predict the next token, given the previous set of tokens," Richard Socher, a former Salesforce executive and CEO of AI-powered search engine You.com, told Business Insider. The more effective way to train them is to "force" these models to translate questions into computer code and generate an answer based on the output of that code, he said. This would reduce hallucinations on quantitative questions and enhance the models' abilities.
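To make that idea concrete, here is a minimal sketch of the approach Socher describes, in which the system asks the model for a small program, runs it, and reports the program's output rather than a number the model produced directly. The answer_with_code function and the hardcoded "model output" string are illustrative assumptions, not code from You.com or any company mentioned in this article.

```python
def answer_with_code(question: str) -> str:
    # In a real system, a language model would translate the question into
    # runnable code; here the "model output" is hardcoded so the sketch is
    # self-contained. (Hypothetical example, not an actual model response.)
    generated_code = "result = sum(n * n for n in range(1, 101))"

    namespace = {}
    exec(generated_code, namespace)  # execute the generated snippet
    return f"{question} -> {namespace['result']}"


print(answer_with_code("What is the sum of the squares of 1 through 100?"))
```

In this setup the arithmetic comes from executing the code rather than from the model predicting a number token by token, which is the mechanism Socher argues cuts down hallucinations on quantitative questions.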

Not all industry leaders are sold that AI has hit a scaling wall, however.

"Despite what other people think, we're not at diminishing marginal returns on scale-up," Microsoft chief technology officer Kevin Scott said in July in an interview with Sequoia Capital's Training Data podcast.

Companies like OpenAI are also seeking to improve on existing LLMs.

OpenAI's o1, released in September, still relies on the token prediction mechanism Socher refers to. However, the model is specialized to handle quantitative questions, including areas like coding and mathematics, better than ChatGPT, which is considered a more general-purpose model.

Part of the difference between o1 and ChatGPT is that o1 spends more time on inference, or "thinking," before it answers a question.

"To summarize, if we were to anthropomorphize, gpt-4 is like your super know-it-all friend who when you ask them a question starts talking stream-of-consciousness, forcing you to sift through what they're saying for the gems," Waleed Kadous, a former engineer lead at Uber and former Google principal software engineer, wrote in a blog post. "o1 is more like the friend who listens carefully to what you have to say, scratches their chin for a few moments, and then shares a couple of sentences that hit the nail on the head."

One of o1's trade-offs, however, is that it requires much more computational power, making it slower and costlier, according to Artificial Analysis, an independent AI benchmarking website.

Read the original article on Business Insider
