You've heard of generative AI, but agentic AI might sound a little less familiar.
Major industry players are working on AI agents in what some say marks the third wave of AI.
But what exactly is agentic AI? Here's a quick rundown of the tech everyone's talking about.
Generative AI has been the talk of tech for a while now, but tune into your favorite business podcast and you'll probably hear a different phrase tossed around: "agentic" AI.
So what's the difference?
The two are closely related. You couldn't have agentic AI without generative AI. Definitions vary, but in general, agentic AI refers to AI technology capable of agent-like behavior: autonomously accomplishing complex tasks on your behalf.
Companies working on AI agents say they are intended to one day be digital coworkers or assistants to human workers in fields ranging from healthcare and supply chain management to cybersecurity and customer service.
Here's how some Big Tech companies explain the concept:
Nvidia's definition says agentic AI "uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems."
IBM says agentic AI is a system or program with "agency" that can "make decisions, take actions, solve complex problems and interact with external environments beyond the data upon which the system's machine learning (ML) models were trained."
Microsoft says AI agents "range from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can run complex workflows autonomously."
Some leaders in the field say agents are ushering in a new frontier in AI.
"In just a few years, we've already witnessed three generations of A.I.," Salesforce CEO Marc Benioff told The New York Times earlier this month. "First came predictive models that analyze data. Next came generative A.I., driven by deep-learning models like ChatGPT. Now, we are experiencing a third wave β one defined by intelligent agents that can autonomously handle complex tasks."
Salesforce, which launched its Agentforce suite earlier this year, has said it plans to have more than 1 billion AI agents in use for companies by the end of next year.
Google CEO Sundar Pichai recently said the company has been "investing in developing more agentic models" over the last year. (He defined agentic AI as being able to "understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision.") The company made agentic AI a major focus of its Gemini 2.0 launch this month.
OpenAI plans to launch an AI agent code-named "Operator" in January that would be able to use a computer on a person's behalf to do things like write code or book flights, Bloomberg reported last month, citing two people familiar with the matter.
The company previewed its latest AI model, o3, on Friday as the final announcement of its 12-day "Shipmas" campaign.
Over the past month, we've seen a rapid cadence of notable AI-related announcements and releases from both Google and OpenAI, and it's been making heads spin across the AI community. It has also poured fuel on the fire of the OpenAI-Google rivalry, an accelerating game of one-upmanship taking place unusually close to the Christmas holiday.
"How are people surviving with the firehose of AI updates that are coming out," wrote one user on X last Friday, which is still a hotbed of AI-related conversation. "in the last <24 hours we got gemini flash 2.0 and chatGPT with screenshare, deep research, pika 2, sora, chatGPT projects, anthropic clio, wtf it never ends."
Rumors travel quickly in the AI world, and people in the AI industry had been expecting OpenAI to ship some major products in December. Once OpenAI announced "12 days of OpenAI" earlier this month, Google jumped into gear and seemingly decided to try to one-up its rival on several counts. So far, the strategy appears to be working, but it comes at a cost: the rest of the world has barely had time to absorb the implications of each new release.
Cloud software giant Salesforce is looking to hire thousands of new salespeople to sell its AI tools to customers. The company plans to hire 2,000 new sales representatives, according to CNBC, which cited remarks from CEO Marc Benioff at a company event Tuesday. This doubles the hiring plans that Benioff shared with Bloomberg last month. […]
On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It's similar to multimodal AI models like GPT-4o, which powers OpenAI's ChatGPT.
"Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times," said Google in a statement. "Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed."
Gemini 2.0 Flash, which is the smallest model of the 2.0 family in terms of parameter count, launches today through Google's developer platforms like Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.
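For developers curious what that access looks like in practice, here's a minimal sketch of calling the experimental Flash model through the Gemini API's Python SDK. It assumes the google-generativeai package and an API key from AI Studio; the model identifier "gemini-2.0-flash-exp" is the experimental name at launch and may change:

```python
# Minimal sketch of calling Gemini 2.0 Flash through the Gemini API.
# Assumes: pip install google-generativeai, and a GEMINI_API_KEY from
# AI Studio. The model name below is the experimental release identifier.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content(
    "Summarize the difference between generative and agentic AI in two sentences."
)
print(response.text)
```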
A deep learning scientist whose last startup was acquired by Snap to build its My AI chatbot has raised seed funding for his latest venture: a platform for building and operating real-time, video-based conversational AI agents. eSelf, as the startup is known, is today coming out of stealth with $4.5 million in its coffers to […]
Google unveiled Gemini 2.0, enhancing the AI product's capabilities.
Gemini 2.0 focuses on agentic AI, improving multi-step problem-solving.
Google's Project Astra and Mariner showcase advanced AI integration with popular services.
It's December, which apparently means it's time for all the AI companies to show off what they've been working on for the past year. Not to be left out, Google lifted the lid on its next-generation AI model, Gemini 2.0, which it promises is a big step up in smarts and capabilities.
If the theme of Gemini 1.0 was multimodality (an ability to combine and understand different types of information, such as text and images), Gemini 2.0 is all about agents: AI that can act more autonomously and solve multi-step problems with limited human input.
"Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision," said Google CEO Sundar Pichai in a blog post announcing Gemini 2.0 on Wednesday.
Users can test out some of Gemini 2.0's new abilities this week, including a new "Deep Research" feature that has Gemini scour the web for information on a topic and prepare it in an easy-to-read report. Google said Deep Research, which will be available to Gemini Advanced subscribers, searches the web the way a human would: it locates relevant information, then starts a new search based on what it has learned.
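Stripped of the product branding, what Google describes is an iterative search-and-synthesize loop. Here's a hypothetical sketch of that pattern; the search() and llm() helpers are placeholders for a real search API and language model, not Google's implementation:

```python
# Hypothetical sketch of a Deep Research-style loop: search, collect notes,
# let the model pick the next query, repeat, then compile a report.
# search() and llm() are illustrative stubs, not Google's implementation.
def search(query: str) -> list[str]:
    # Placeholder for a web-search API; returns canned snippets here.
    return [f"snippet about {query}"]

def llm(prompt: str) -> str:
    # Placeholder for a language-model call; echoes a canned reply here.
    return f"model output for: {prompt[:60]}..."

def deep_research(topic: str, rounds: int = 3) -> str:
    notes, query = [], topic
    for _ in range(rounds):
        notes.extend(search(query))
        # Let the model decide what to look up next, as a human would.
        query = llm(f"Given these notes, what should we search next?\n{notes}")
    return llm(f"Write an easy-to-read report on {topic} from:\n{notes}")

print(deep_research("agentic AI"))
```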
Google plans to bring Gemini 2.0 to its AI Overviews feature in Search. The feature, which has dramatically transformed the way Google retrieves answers from the web, got off to a rocky start (pizza glue, anyone?). Google then scaled Overviews back and made various technical tweaks to improve performance.
With Gemini 2.0, Google says Overviews can tackle more complex searches, including multi-step questions and multimodal queries that use text and images. Google said it's already started testing the improved Overviews this week and will roll them out more broadly early next year.
This week, Google is also rolling out an experimental version of Gemini 2.0 Flash, a model designed for high-volume tasks at speed, for developers to play with. Anyone accessing the Gemini chatbot through the browser or the Gemini app will also be able to try the new model.
Google said Flash 2.0 will make Gemini faster, smarter, and more capable of reasoning. It's also now capable of generating images natively (previously, Google had stitched on a separate AI model to conjure up pictures within Gemini). Google said that should improve image generation, as it's drawing from Gemini 2.0's vast knowledge of the world.
Project Astra
Most of the other interesting new announcements Google teased won't be available for wider public consumption for a while.
One of these is Project Astra, which Google first previewed at I/O back in May. Google demoed a real-time AI assistant that could see the world around it and answer questions. Now, Google is showing an even better version of Astra built on Gemini 2.0, which the company said can draw on some of Google's most popular services, such as Search, Lens, and Maps.
In a new virtual demo, Google showed someone holding up their phone camera to a London bus and Astra answering a question on whether that bus could get them to Chinatown. The new and improved Astra can also converse in multiple (and mixed) languages, Google said.
Google will roll out Astra to a limited number of early testers, and it didn't say when more people will have access to it. Bibo Xu, the Astra product manager at Google DeepMind, told reporters on a call that Google expects these features to roll out through its apps over time, suggesting Astra may arrive incrementally rather than as one big product.
Additionally, Google teased Project Mariner, a tool that lets AI take control of a browser and scour the web for information. It can recognize pixels, images, text, and code on a webpage and use them to navigate and find answers.
Google referred to Mariner as an early research prototype and said it's only letting a select group of early testers try it via a Chrome extension.
"We're early in our understanding of the full capabilities of AI agents for computer use and we understand the risks associated with AI models that can take actions on a user's behalf," said Google Labs product manager Jaclyn Konzelmann.
For example, Google said it would limit certain actions, such as by having Mariner ask for final confirmation before making an online purchase.
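Google hasn't detailed how Mariner works internally, but the behavior it describes (read what's on the page, pick an action, act, and pause for human confirmation on sensitive steps) maps onto a familiar perceive-decide-act loop. Below is a hypothetical sketch of that loop using the open-source Playwright browser library; the choose_action() function is an invented placeholder for a multimodal model call, not Google's API:

```python
# Hypothetical sketch of the perceive-decide-act loop a browser agent
# like Mariner might run; this is NOT Google's implementation.
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

def choose_action(screenshot: bytes, goal: str) -> dict:
    # Placeholder: a real agent would send the screenshot and goal to a
    # multimodal model and parse the action it suggests. Stubbed as "done".
    return {"type": "done"}

def run_agent(goal: str, start_url: str, max_steps: int = 10) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            action = choose_action(page.screenshot(), goal)
            if action["type"] == "click":
                page.click(action["selector"])
            elif action["type"] == "type":
                page.fill(action["selector"], action["text"])
            elif action["type"] == "purchase":
                # The safeguard Google describes: ask a human before buying.
                if input("Confirm purchase? [y/N] ").lower() != "y":
                    break
            elif action["type"] == "done":
                break
        browser.close()

run_agent("Find a bus route to Chinatown", "https://www.example.com")
```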
Google unveiled its first-ever AI agent that can take actions on the web on Wednesday, a research prototype from the company's DeepMind division called Project Mariner. The Gemini-powered agent takes control of your Chrome browser, moves the cursor on your screen, clicks buttons, and fills out forms, allowing it to use and navigate websites much […]
Amazon says that it's establishing a new R&D lab in San Francisco, the Amazon AGI SF Lab, to focus on building "foundational" capabilities for AI agents. The Amazon AGI SF Lab will be led by David Luan, the co-founder of AI startup Adept, and will seek to build agents that can "take actions in the […]
A settlement earlier this year was expected to change the way Americans buy and sell homes.
Some predicted more people would forgo hiring traditional real-estate agents for their deals.
Four people who sold or bought homes this year without a traditional broker shared how they did it.
A 2023 court ruling against the country's biggest association of real-estate agents was expected to transform how Americans buy and sell homes.
A jury found that the National Association of Realtors, or NAR, had colluded with large real-estate brokerages to keep its members' commissions high. A settlement earlier this year imposed new rules and requirements to prevent agents from unfairly siphoning more money from home sellers.
Real estate professionals have debated endlessly how the NAR settlement could affect the housing market. One prediction is that people might forgo using an agent altogether; another is that homebuyers and sellers will bargain hard to pay the brokers they do hire less in commissions.
The majority of Americans hire a traditional real-estate broker to facilitate their transactions. According to data from NAR itself, 86% of homebuyers in 2024 used an agent's services. A study by real-estate media company RISmedia found early indications of commissions falling, while Redfin found that commissions have remained almost unchanged since the new rules took effect in August.
Even if it's too early to see the full impact of the settlement, some insights can be gained from people who bought or sold homes without using traditional agents.
Four people told Business Insider that the NAR ruling didn't influence their decision not to hire a typical broker; rather, saving time and making the process more convenient were top priorities.
The sellers acknowledged that listing on the open market with a classic sellers' agent could have fetched higher prices, but added that they might have had to pay the agent more or wait longer to close their deals.
"If I had time and I was really wanting to prioritize maximizing my profit, then I probably would utilize an agent to sell just because of the ability to get multiple offers and drive up that competition," said Chelsea Hutchison, who sold her home to a publicly traded real-estate tech company in April.
There are also other costs involved in a home transaction, including attorney fees.
Read on to hear from the four people who bought or sold homes this year via big companies or real-estate startups instead of a traditional agent. They break down what they paid in commission or fees, and what they felt the benefits were.
2 people sold their houses to a company that charges a lower commission
In April, Hutchison sold her house in Canby, Oregon, to property technology company Opendoor.
The publicly traded firm, worth $1.5 billion, pays cash for homes and can close a deal in days, charging a 5% fee to the seller. That's a little less than the 5% to 6% sellers have historically paid traditional agents, who then share the commission with the buyer's agent.
Hutchison said she did consider using a traditional agent and spoke with one who estimated she could sell her 2,000-square-foot, four-bedroom home for about $565,000.
She ultimately went with Opendoor, which offered her less money for her home ($535,000) but more speed and convenience.
"It was very fast, but it could have been slower if I needed it," she said. "There was flexibility for choosing the closing date, which was really helpful to me."
She ended up paying $26,750 in service fees to Opendoor, rather than the $32,100 a typical seller's broker would have asked for to split with the buyer's broker.
Another seller, Melissa Gonzales-Szott, thought Opendoor offered a fair price for her 2,200-square-foot Las Vegas house: $448,500.
Gonzales-Szott, a 46-year-old who works in marketing, said she did her own research on the market value of her home and thought the amount was in line with what she had seen in her neighborhood.
Selling to Opendoor took 60 days, she added, compared to the six months it took the last time she sold a home.
"There were just so many factors that contributed to this being a more convenient type of process to go through," she told BI. "If there were going to be any type of losses financially for us, we were prepared β because who could put a price tag on convenience and on peace?"
A home seller had agents bid for his listing and picked one willing to take a lower commission
In June, real-estate agent and "Million Dollar Listing LA" star Josh Altman cofounded Redy, a marketplace where prospective home sellers post their properties and agents compete with each other for the opportunity to sell them.
Agents even give sellers a "cash bonus" once they've been chosen.
According to Kenneth Bloom, who's already sold two properties with Redy, the savings are significant compared to using a traditional real-estate agent.
Bloom, 69, sold a three-bedroom rental property in Waterford, Michigan, for $245,000 and his late mother-in-law's 1,500-square-foot condo in West Bloomfield, Michigan, for $250,000.
Bloom, who said he's bought and sold at least a dozen properties in his life, said he used to look up real-estate agents in the area of the home and research their sales volumes. He would then contact the top candidates before hiring one.
He found that he preferred Redy because the agents reached out to him.
"I posted the house and a dozen Realtors responded," he told BI. "For me, it was really a time saver. It did all the research that I had to do, and they came to me versus me having to go and do it on my own."
Bloom said the commission was also lower than he had paid in the past. The agent he selected settled on a 4.5% commission, below the 6% he was used to paying as the seller.
The cash bonus the agent paid him, which Bloom said was $1,200 for the first house he sold and $1,040 for the second, was also a large factor.
Bloom said he saved nearly $10,000 on broker commissions between the two transactions. Selling each home took less than a month, he added.
Redy, which has officially launched in markets including Atlanta, Dallas, Orlando, Phoenix, and San Diego, is continuing to expand to other parts of the country.
A California homebuyer paid a flat fee rather than a percent of the sales price
California-based real-estate investor Sergio Rodriguez used a new homebuying service to purchase a home from his neighbor.
Rodriguez, 38, and the seller wanted to keep their $600,000 transaction off-market, which means the property wouldn't be listed on the Multiple Listing Service (MLS).
The seller wanted to get as much money as possible for the home and get rid of it fast, Rodriguez said, while he wanted to save on commission costs, too.
They couldn't find an agent to help them complete the transaction because Rodriguez and his neighbor wanted to keep the commission under 4% of the total sale price, below the 6% standard.
"I even have family and friends who are Realtors, and they were like, 'Let me run it through my brokerage and see if they'll be willing to transact for you,' and they all said no," Rodriguez told BI. "It was really hard to find a Realtor to just simply transact on it. You can try to do it privately, but it's just too much paperwork."
To close the deal, Rodriguez turned to TurboHome, a homebuying service in California, Texas, and Washington. The company, which bills itself as a "real estate brokerage of the future," uses AI in addition to licensed agents. It pays its agents salaries, then has buyers pay a flat fee rather than a percent commission.
After TurboHome got involved, Rodriguez said, he and his neighbor were in contract in 24 hours.
Rodriguez said he ended up paying TurboHome about $1,000 in fees. (The company said the standard flat fee ranges from $5,000 to $10,000.)
He estimated he saved about $40,000 compared to using a traditional agent.
"That's a lot of money when you're buying a $600,000 house," he said.
KPMG's head of AI, David Rowlands, is helping the firm transform for an AI future.
He spoke with Business Insider about the barriers businesses face as they try to do the same.
Don't focus on single-use cases for AI and sort out your data, he advised.
KPMG, one of the Big Four consultancies, has woven AI into all its operations and is advising global businesses on how to do the same.
All employees across the firm's three divisions (accounting, tax, and advisory) are able to use AI. Everyone has access to a form of GPT, and roughly a fifth of the global workforce has Copilot licenses, David Rowlands, KPMG's global head of AI, told Business Insider.
"Whatever they were doing already, they can now do quicker," he said in an interview.
But the vast majority of companies are still on the adoption path, and clients who come to KPMG are still thinking about how to get it going and how to get the data, Rowlands said.
Clients' concerns about AI have shifted over time. First it was ethics, hallucinations, and trust. Over the past four or five months, it has been about realizing the business case and enabling the workforce to adopt the technology. Next, it will be the question of data: how organizations differentiate through their data and protect it.
BI spoke with Rowlands about KPMG's own adoption journey and how the firm advises businesses as they deepen their use of AI.
One of the biggest barriers for companies to overcome is the focus on single-use cases for AI systems.
Many businesses are deploying an AI agent to sit over a curated database and pick out data to make rapid recommendations to a human operator, he explained. But such systems, he said, must be reusable.
"What you have to think about is having AI embedded in your operating model," he said. "At KPMG, we're keen to get people beyond use cases because a point piece of technology, a point use case, hasn't been a particularly effective business case."
Clarity on the business case is also the best way to see a return on investment, he added.
The question of returns has been a hot topic among CEOs as billions of dollars continue to pour into AI infrastructure. Some economists and analysts have warned that money is being wasted on hype, while others have said the rate of improvement is slowing and AI is hitting a wall.
KPMG is buying into the hype. In 2023, the firm said it would invest $2 billion in artificial intelligence and cloud services in partnership with Microsoft over the next five years and expected the strategy to generate more than $12 billion in revenue over that period.
In November, it announced a $100 million investment in Google Cloud, which it said could drive $1 billion in growth.
Rowlands said it's hard to get clients to see the wider impact when they can't see an immediate ROI. But he said the benefits would come through improvements in growth, quality, and agility: "We already see that a copilot system saves about 40 minutes a week."
In mid-2024, some of KPMG's surveys on returns were "ambivalent," but they're now "getting some anecdotal evidence of ROI," Rowlands said. This time next year, he added, COOs and CFOs are going to be positive about the returns they're getting.
Data will differentiate your business
How to approach data is another barrier for clients in their AI journey, Rowlands said.
"Organizations will be increasingly differentiated by the data that they own," he said. That requires becoming more mindful about where your data sits, who owns it, where it's generated, and how you keep it up to date.
Data will only become more integral as we enter the next stage of AI's evolution, Rowlands added. He expects that within the next 12 months, multi-agent models (groups of specialized AI agents that coordinate to solve a collective goal) will rapidly become a reality.
"That is where AI is going to start to have a big impact on solving some of the biggest problems, such as decarbonization," he said.
Preparing the workforce for these changes is part of implementing AI responsibly, Rowlands said.
KPMG ran a "24 hours of AI" training session in January this year. The key message was that everyone should know how to use AI against their problems and be trusted and innovative with their use of it in front of clients. The firm is continuing to train its workforce in data curation, looking after data, and prompt craft.
Rowlands doesn't deny that AI will have a "deep transformational impact" on the professional-services industry. He said there would be a rotation of jobs, as happened with the internet, but it wouldn't diminish consultants' purpose.
"We don't really think about replacing jobs. It's more about enhancing individuals and roles. And those who are using AI well are being more successful than those who aren't.
"Our consultants will, as they've always done, strive increasingly to make sure that our clients are getting valuable outcomes out of our work."
GitLab, the popular developer and security platform, and AWS, the popular cloud computing and AI service, today announced that they have teamed up to combine GitLab's Duo AI assistant with Amazon's Q autonomous agents. The goal here, the two companies say, is to accelerate software innovation and developer productivity, and unlike so many partnerships in […]
Finance firms are keen on AI agents that can automate combinations of tasks.
Demand for AI agents is giving birth to a new class of startups and VCs hungry to invest in them.
It was a topic of conversation at the Evident AI Symposium in New York on Thursday.
"Talk to this like a teammate and treat it like a teammate."
That's Danny Goldman's guidance to private-equity customers of his startup, Mako AI, which offers a generative AI assistant for junior finance professionals and is backed by Khosla Ventures, an early investor in OpenAI.
His hope is that "engaging with Mako looks much more like engaging with a real human associate than a software tool," he previously told BI. Goldman, who worked in private equity before cofounding Mako AI, predicts that in a year or two, every junior on Wall Street will have their own AI direct report.
It's not just juniors, either. JPMorgan CEO Jamie Dimon is a "tremendous user" of the bank's generative AI assistant suite. Teresa Heitsenrether, JPMorgan's chief data and analytics officer, said at a conference last week that JPMorgan is working toward giving employees AI assistants that are specific to them and their jobs.
Wall Streeters, say hello to your new coworker. Across the industry, AI agents are beginning to permeate the labor force as assistants who can help humans prep for meetings, write their emails, and wade through troves of information to answer questions almost instantaneously.
In many cases, AI agents are still limited to specific, individual tasks like querying internal data and creating PowerPoints and emails. To take AI agents a step further, technologists and startup investors are fueling a shift to so-called multi-agent systems that coordinate several AI agents to complete more complex tasks more autonomously.
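As a rough illustration of what "coordinating several AI agents" could mean in code, here is a hypothetical sketch in which a coordinator routes subtasks to specialist agents and merges their answers. The Agent class and its ask() method are invented stand-ins for model-backed agents, not any vendor's product:

```python
# Hypothetical sketch of a multi-agent pattern: a coordinator routes
# subtasks to specialist agents and assembles their results. The agents
# here are illustrative stand-ins, not a specific product's API.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    specialty: str

    def ask(self, subtask: str) -> str:
        # A real agent would call a model (with tools) here.
        return f"[{self.name}] draft answer for: {subtask}"

def coordinate(task: str, agents: list[Agent]) -> str:
    # Naive routing: hand each agent the slice matching its specialty.
    results = [a.ask(f"{task} (focus: {a.specialty})") for a in agents]
    return "\n".join(results)

team = [
    Agent("researcher", "query internal data"),
    Agent("analyst", "build the PowerPoint outline"),
    Agent("writer", "draft the client email"),
]
print(coordinate("Prep materials for Monday's client meeting", team))
```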
Some tech executives at the Evident AI Symposium on Thursday said they could see a world with more artificial-intelligence agents than humans by 2025. But what will work and life look like in an increasingly hybrid world of humans and bots? That's still being worked out, a number of those executives said.
"What's really exciting about agents is that we are still figuring out the tasks they're actually good at, the tools they know how to use, the tools we have to teach them how to use," said Gabriel Stengel, cofounder and CEO of Rogo, which is building the generative AI equivalent of a junior banker.
Another question that still needs answering is how to determine whether an agent is smarter than a human, said Kristin Milchanowski, chief AI and data officer of BMO Financial Group.
To some extent, benchmarking humans against AI agents is already happening. In a recent University of Cambridge study that compared who could run a business better, AI outperformed humans on most metrics, including profitability, product design, and inventory management. But the AI fell short when it came to making decisions on the fly.
Heitsenrether, speaking at the Evident AI conference, told the audience that, over time, she expects AI to be seamlessly embedded in an employee's workflow. By this time next year, she said that she hopes to have a clearer picture of what a more personalized AI assistant for each employee might look like.
But unlocking more autonomous uses of AI is going to require more than technological breakthroughs.
"We don't have a lot of trust right now in these systems," Sumitra Ganesh, a member of JPMorgan's AI research team, said at the symposium.
"We have to slow-walk it to release it to people who are experts who can verify the output and go, 'Okay, that looks fine, you can take that action,'" Ganesh said. "But that's kind of babysitting these agents at this point," she added. "But hopefully, it's like training wheels β at some point, we will be confident enough to let them go."
H, the Paris startup founded by Google alums, made a big splash last summer when, out of the blue, it announced a seed round of $220 million before releasing a single product. Three months later, still without a product, that splash started to look like a catastrophic flood when three of the company's five co-founders […]