
AI agents are here. Here's how AI startup Cohere is deploying them for consultants and other businesses.

Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.

Cohere

  • Enterprise AI startup Cohere has launched a new platform called North.
  • North allows users to quickly deploy AI agents to execute tasks across various business sectors.
  • The company says the platform makes tasks more than five times faster to complete.

2025 is shaping up to be the year that AI "agents" go mainstream.

Unlike AI-based chatbots that respond to user queries, agents are AI tools that work autonomously. They can execute tasks and make decisions, and companies are already using them for everything from creating marketing campaigns to recruiting new employees.

Cohere, an AI startup focused on enterprise technology, on Thursday unveiled North, an all-in-one platform combining large language models, multimodal search, and agents to help its customers work more efficiently with AI.

Through North, users can quickly customize and deploy AI agents to find relevant information, conduct research, and execute tasks across various business functions.

The platform could make it easier for a company's finance team, for example, to quickly search through internal data sources and create reports. Its multimodal search function could also help extract information from images, slides, spreadsheets, and other file types.

AI agents built with North integrate with a company's existing workplace tools and applications. The platform can run in private, allowing organizations to integrate all their sensitive data in one place securely.

"North allows employees to build AI agents tailored to their role to execute complex tasks without ever leaving the platform," a representative for Cohere told Business Insider by email.

The company is now deploying North to a small set of companies in finance, healthcare, and critical infrastructure as it continues to refine the platform. There is no set date for when it will make the platform available more widely.

Cohere, launched in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, has quickly grown to rival ChatGPT maker OpenAI and was valued at over $5.5 billion at its Series D funding round announced last July, Bloomberg reported. As of last March, the company had an annualized revenue of $35 million, up from $13 million at the end of 2023.

The company is one of a few AI startups that are building their own large language models from the ground up. Unlike its competitors, it has focused on creating customized solutions for businesses rather than consumer apps or the more nebulous goal of artificial general intelligence.

Its partners include major companies like software company Oracle, IT company Fujitsu, and consulting firm McKinsey & Company.

This year, however, its goal is to "move beyond generic LLMs towards tuned and highly optimized end-to-end solutions that address the specific objectives of a business," Gomez said in a post on LinkedIn outlining the company's objectives for 2025.

Read the original article on Business Insider

Cohere CEO Aidan Gomez on what to expect from 'AI 2.0'

Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.

Cohere

  • Companies will soon focus on customizing AI solutions for specific needs, Cohere's CEO says.
  • AI 2.0 will "help fundamentally transform how businesses operate," he wrote.
  • Major AI companies like OpenAI are also releasing tools for customization.

If this was the year companies adopted AI to stay competitive, next year will likely be about customizing AI solutions for their specific needs.

"The next phase of development will move beyond generic LLMs towards tuned and highly optimized end-to-end solutions that address the specific objectives of a business," Aidan Gomez, the CEO and cofounder of Cohere, an AI company building technology for enterprises, wrote in a post on LinkedIn last week.

"AI 2.0," as he calls it, will "accelerate adoption, value creation, and will help fundamentally transform how businesses operate." He added: "Every company will be an AI company."

Cohere has partnered with major companies, including software company Oracle and IT company Fujitsu, to develop customized business solutions.

"With Oracle, we've built customized technology and tailored our AI models to power dozens (soon, hundreds) of production AI features across Netsuite and Fusion Apps," he wrote. For Fujitsu, Cohere built a model called Takane that's "specifically designed to excel in Japanese."

Last June, Cohere partnered with global management consulting firm McKinsey & Company to develop customized generative AI solutions for the firm's clients. The work is helping the startup "build trust" among more organizations, Gomez previously told Business Insider.

To meet the specific needs of so many clients, Gomez has advocated for smaller, more efficient AI models. He says they are more cost-effective than building large language models, and they give smaller startups a chance to compete with more established AI companies.

But it might be only a matter of time before the biggest companies capitalize on the customization trend, too.

During its "Shipmas" campaign, OpenAI previewed a feature that allows users to fine-tune o1, its latest and most advanced AI model, on their own datasets. Users can now leverage OpenAI's reinforcement-learning algorithms to customize their own models.

The technology will be available to the public next year, but OpenAI has already partnered with companies like Thomson Reuters to develop specialized legal tools and researchers at Lawrence Berkeley National Laboratory to build computational models for identifying genetic diseases.

Cohere did not immediately respond to a request for comment from Business Insider.


This is the biggest question in AI right now


Qi Yang/Getty Images

  • AI leaders are rethinking data-heavy training for large language models.
  • Traditional models scale linearly with data, but this approach may hit a dead end.
  • Smaller, more efficient models and new training methods are gaining industry support.

For years, tech companies like OpenAI, Meta, and Google have focused on amassing tons of data, assuming that more training material would lead to smarter, more powerful models.

Now, AI leaders are rethinking the conventional wisdom about how to train large language models.

The focus on training data arises from research showing that transformers, the neural networks behind large language models, have a one-to-one relationship with the amount of data they're given. Transformer models "scale quite linearly with the amount of data and compute they're given," Alex Voica, a consultant at the Mohamed bin Zayed University of Artificial Intelligence, previously told Business Insider.

However, executives are starting to worry that this approach can only go so far, and they're exploring alternatives for advancing the technology.

The money going into AI has largely hung on the idea that this scaling law "would hold," Scale AI CEO Alexandr Wang said at the Cerebral Valley conference this week, tech newsletter Command Line reported. It's now "the biggest question in the industry."

Some executives say the problem with the approach is that it's a little mindless. "It's definitely true that if you throw more compute at the model, if you make the model bigger, it'll get better," Aidan Gomez, the CEO of Cohere, said on the 20VC podcast. "It's kind of like it's the most trustworthy way to improve models. It's also the dumbest."

Gomez advocates smaller, more efficient models, which are gaining industry support for being cost-effective.

Others worry this approach won't reach artificial general intelligence (a theoretical form of AI that matches or surpasses human intelligence) even though many of the world's largest AI companies are banking on it.

Large language models are trained simply to "predict the next token, given the previous set of tokens," Richard Socher, a former Salesforce executive and CEO of AI-powered search engine You.com, told Business Insider. The more effective way to train them is to "force" these models to translate questions into computer code and generate an answer based on the output of that code, he said. This will reduce hallucinations in quantitative questions and enhance their abilities.

Not all industry leaders are sold that AI has hit a scaling wall, however.

"Despite what other people think, we're not at diminishing marginal returns on scale-up," Microsoft chief technology officer Kevin Scott said in July in an interview with Sequoia Capital's Training Data podcast.

Companies like OpenAI are also seeking to improve on existing LLMs.

OpenAI's o1, released in September, still relies on the token-prediction mechanism Socher refers to. Still, the model is specialized to better handle quantitative questions, including areas like coding and mathematics, compared with ChatGPT, which is considered a more general-purpose model.

Part of the difference between o1 and ChatGPT is that o1 spends more time on inference or "thinking" before it answers a question.

"To summarize, if we were to anthropomorphize, gpt-4 is like your super know-it-all friend who when you ask them a question starts talking stream-of-consciousness, forcing you to sift through what they're saying for the gems," Waleed Kadous, a former engineer lead at Uber and former Google principal software engineer, wrote in a blog post. "o1 is more like the friend who listens carefully to what you have to say, scratches their chin for a few moments, and then shares a couple of sentences that hit the nail on the head."

One of o1's trade-offs, however, is that it requires much more computational power, making it slower and costlier, according to Artificial Analysis, an independent AI benchmarking website.

