
Decart nabs $32M at $500M+ valuation to build AI tech and 'open world' apps

A young startup that emerged from stealth less than two months ago with big-name backers and bigger ambitions is returning to the spotlight. Decart is building what its CEO and co-founder Dean Leitersdorf (pictured above, right) describes as "a fully vertically integrated AI research lab," alongside enterprise and consumer products based on the lab's work. […]


Spotify Wrapped is always a mess for parents. The new AI 'podcast' version just makes it worse.

Spotify Wrapped podcast
Spotify Wrapped uses Google's AI to make a podcast about your favorite songs.

Spotify

One of the indignities inflicted on parents of young children is Spotify Wrapped. Each December, thousands of adults open up their year-end treat to discover the sad fact that they listened to "Baby Shark" more times than anything else.

As a parent, this has been my fate for the last few years. (My Spotify account is connected to our Amazon Echo, which means that in some years, my kids' requests for songs about potty words have ended up on my Wrapped.)

I take very little pleasure in Spotify Wrapped, although I know it's a massively popular thing that many people — presumably those who don't listen to Raffi on repeat — really look forward to.

However, this year, there's a new feature. And I struggle to imagine how anyone won't feel mildly weirded out by it: Spotify uses Google's new NotebookLM AI-powered feature to create an individualized AI-generated podcast with two talking heads discussing your listening habits in a conversational, podcast-y tone. Yikes!

I received a 3-minute podcast with a man and woman chatting about how impressive it was that I had listened to "Cruel Summer" by Taylor Swift — my 4-year-old's current favorite tune, narrowly edging out "Let It Go" this year — so many times that I was in the Top 0.02% of listeners. (I should note here that the podcast said I was in the Top 0.02%, while the main Wrapped said it was 0.05%. Possibly the podcast version hallucinated?)

I can understand why people like sharing screenshots from their Wrapped. It's normal to want to share what music you like — and what those lists say about you and your personality.

But listening to an AI podcast about it? Voiced by robots? I'm not sure anyone wants that.

Google's NotebookLM is a fascinating product — I've played around with it a little, and it is very cool, if not uncanny. You can add in text or a PDF or other kinds of data, and it will create a conversational podcast episode with two hosts — "likes" and "ums" and all.
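NotebookLM's internals aren't public, but the pattern it demonstrates (source text in, a scripted two-host conversation out, then text-to-speech per speaker) is simple to sketch. Below is a minimal, hypothetical Python version; the `complete` function is a stand-in for any LLM call, not NotebookLM's actual API.

```python
# Hypothetical sketch of the "document -> two-host podcast script" pattern.
# `complete` stands in for any LLM completion call; NotebookLM's real
# pipeline is not public, so treat every name here as an assumption.

SCRIPT_PROMPT = """Turn the source text below into a short podcast dialogue
between two hosts, HOST A and HOST B. Keep it conversational ("likes" and
"ums" included) and end with a sign-off.

SOURCE:
{source}
"""

def complete(prompt: str) -> str:
    # Stub so the sketch runs; swap in a real LLM client here.
    return "HOST A: So, the War of 1812, wild story.\nHOST B: Right? Um, tell me everything."

def make_podcast_script(source_text: str) -> list[tuple[str, str]]:
    """Return (speaker, line) pairs, ready for per-speaker text-to-speech."""
    raw = complete(SCRIPT_PROMPT.format(source=source_text))
    turns = []
    for line in raw.splitlines():
        if ":" in line:
            speaker, text = line.split(":", 1)
            turns.append((speaker.strip(), text.strip()))
    return turns

if __name__ == "__main__":
    for speaker, text in make_podcast_script("The War of 1812 was fought..."):
        print(f"{speaker}: {text}")
```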

It's got that factor about GenAI that makes you go "whoa," like trying ChatGPT for the first time to have it write a poem.

It's got the dog-walking-on-its-hind-legs element: It's impressive because the dog can do it at all, not because it's doing it particularly well. The idea that AI could generate a chatty podcast that sounds almost real is, admittedly, mindblowing. But would you want to actually listen to it? I'm not really so sure.

I've wondered what this would be used for β€” I assume some people find listening to something makes it easier to engage with than simply reading it. You could take the Wikipedia page for "The War of 1812," plug it into AI, and generate an engaging history podcast instead of slogging through dry text.

And in a business setting, perhaps a busy exec could upload an accounting report and listen to it while on the putting green instead of reading a stale PDF. (I tried uploading my tax return and created what may be the most boring podcast in human history.)

But NotebookLM is a pretty niche product so far — and Spotify Wrapped is a massively popular feature on a massively popular app. It's likely that this will be many people's first exposure to NotebookLM's abilities.

I imagine it will be mindblowing for many people! But I urge restraint and moderation. Although seeing a screenshot of your friends' top artists might be fun, no one wants to hear a podcast about it.


AWS brings prompt routing and caching to its Bedrock LLM service

As businesses move from trying out generative AI in limited prototypes to putting it into production, they are becoming increasingly price conscious. Using large language models (LLMs) isn't cheap, after all. One way to reduce cost is to go back to an old concept: caching. Another is to route simpler queries to smaller, more cost-efficient […]
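Both techniques are easy to sketch. The snippet below is illustrative only, a minimal Python version with placeholder model names and a stubbed `invoke` function rather than Bedrock's actual API: responses are cached by prompt hash, and short prompts are routed to a cheaper model.

```python
import hashlib

# Illustrative sketch of LLM response caching plus prompt routing.
# Model names and `invoke` are placeholders, not Bedrock's actual API.
CHEAP_MODEL, BIG_MODEL = "small-model", "large-model"
_cache: dict[str, str] = {}

def invoke(model: str, prompt: str) -> str:
    # Stub so the sketch runs; swap in a real model client call here.
    return f"[{model}] answer to: {prompt[:40]}"

def route(prompt: str) -> str:
    # Naive routing heuristic: short prompts go to the cheaper model.
    return CHEAP_MODEL if len(prompt) < 200 else BIG_MODEL

def cached_completion(prompt: str) -> str:
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:              # miss: pay for exactly one model call
        _cache[key] = invoke(route(prompt), prompt)
    return _cache[key]                 # hit: answered without a model call

print(cached_completion("What is the refund policy?"))
print(cached_completion("What is the refund policy?"))  # served from cache
```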


Oracle stock is set for its best year since the dot-com boom after a 75% surge

Larry Ellison
Oracle cofounder Larry Ellison.

Justin Sullivan/Getty Images

  • Oracle shares are set for their best year since 1999 after a 75% surge.
  • The enterprise-computing stock has benefited from strong demand for cloud and AI infrastructure.
  • Oracle cofounder Larry Ellison's personal fortune has surged.

Oracle has surged 75% since January, putting the stock on track for its best year since a tripling in 1999 during the dot-com boom.

The enterprise-computing giant's share price has jumped from a low of about $60 in late 2022 to about $180, boosting Oracle's market value from below $165 billion to north of $500 billion.

It's now worth almost as much as Exxon Mobil ($518 billion), and more valuable than Mastercard ($489 billion), Costco ($431 billion), or Netflix ($379 billion).

Oracle's soaring stock price has boosted the net worth of Larry Ellison, who cofounded the company and is its chief technology officer. His holding of more than 40% puts him second on the Forbes Real-Time Billionaires list with a fortune of $227 billion, behind only Tesla CEO Elon Musk's $330 billion.

Oracle provides all manner of software and hardware for businesses, but its cloud applications and infrastructure are fueling its growth as companies such as Tesla that are training large language models pay up for processing power.

The company was founded in 1977 but is still growing at a good clip. Net income jumped by 23% to $10.5 billion in the year ended May, fueled by 12% sales growth in the cloud services and license support division, which generated nearly 75% of its revenues.

Oracle signed the largest sales contracts in its history last year as it tapped into "enormous demand" for training LLMs, CEO Safra Catz said in the fourth-quarter earnings release. She said the client list included OpenAI and its flagship ChatGPT model, which kickstarted the AI boom.

Catz also predicted revenue growth would accelerate from 6% to double digits this financial year. That's partly because Oracle is working with Microsoft and Google to interconnect their respective clouds, which Ellison said would help to "turbocharge our cloud database growth."

Oracle has flown under the radar this year compared to Nvidia. The chipmaker's stock has tripled in the past year and it now rivals Apple as the world's most valuable company. Yet Oracle is still headed for its best annual stock performance in a quarter of a century — and its bosses are promising there's more to come.


Apple is working on an AI-powered Siri overhaul by 2026. Here's what we know about the full Apple Intelligence timeline.

iPhone 16 taking a photo of an iPhone display
With the iOS 18.1 update, the iPhone 16 Pro has access to Apple Intelligence.

NurPhoto/NurPhoto via Getty Images

  • Apple Intelligence for iPhone 15 Pro and later was released in October.
  • However, some AI features, like "LLM Siri," reportedly won't be available until 2026.
  • The first update included a new Siri interface, enhanced Messages, and Mail app improvements.

Much of the chatter about the newest iPhone 16 models has been about how they can support Apple Intelligence.

There are also still a lot of questions about when, exactly, all the cool new AI features will be fully available.

Apple has touted the iPhone 16 as a phone "built from the ground up" for artificial intelligence. It hit the market in September, and Apple Intelligence began rolling out the following month as part of the iOS 18.1 software update.

The first AI drop included several new features available on the iPhone 15 Pro or later, but some of the tools highlighted at June's Worldwide Developers Conference won't come to iOS until 2025 or later.

Although the first AI rollout as part of the iOS 18.1 software update included some tweaks to virtual assistant Siri, Apple is still working to infuse improved large language models into the voice assistant by 2026, Bloomberg reported. The goal is to make Siri even more conversational to rival competitors in the AI arms race.

This "LLM Siri" would compete with AI offerings made by companies like OpenAI and Google. It is expected to be announced in 2025 and released as part of iOS 19 the year after.

Apple has yet to provide a clear-cut calendar for the full Apple Intelligence rollout, but it provided some more details on the timeline when it announced iOS 18.1.

Here's an estimated timeline for the US English Apple Intelligence release based on what Apple and the experts who follow the company have said since WWDC.

October brought the initial Apple Intelligence release.

When iOS 18.1 came out in October, it included the option for those with eligible iPhones to enable Apple Intelligence.

Here are some of the features that came in the first drop.

  • Updates to the Messages app, including more extensive reply suggestions.
  • A new section of the Mail app that categorizes high-priority messages.
  • The Reduce Interruptions Focus mode — similar to Do Not Disturb, but your phone will allow alerts from messages it deems urgent.
  • Email and text summaries in notifications.
  • Writing Tools, which will help with summarizing, proofreading, and editing bodies of text.
  • A new Siri animation and interface that will make the perimeter of a device's screen glow, along with a "Type to Siri" feature.

There's more to come in December.

Apple said more colorful features are coming next month.

  • Visual intelligence, which Apple said will "help users learn about objects and places instantly" using their camera.
  • Writing Tools will get an upgrade, allowing them to apply more specific changes to text.
  • OpenAI's ChatGPT will also be integrated into eligible iPhones.

The new Siri and more languages are coming in 2025 and beyond.

Apple has been promoting a "more personal Siri" in its marketing, but Bloomberg correspondent Mark Gurman reported that it won't come out for a while.

In one clip from Apple, actor Bella Ramsey asks Siri to recall the name of a man they met months prior. The revamped Siri assistant instantly reminds Ramsey of the man's name, which is impressive, but the feature won't be available on iPhone 15s or iPhone 16s until 2025 or later.

It's unclear if this will come as part of the overhauled version of Siri expected in 2026 or in earlier updates.

According to the company, Apple Intelligence will first be available in American English and will "quickly expand" to other English-speaking countries, including Canada, New Zealand, South Africa, Australia, and the UK in December.

Apple said more languages are coming in April. So far, they include Indian English, Singaporean English, Chinese, Japanese, French, German, Italian, Korean, Portuguese, Spanish, Vietnamese, and more.

An earlier version of this story was published September 22.


Amazon is hedging its big bet on Anthropic with its own AI video model, report says

Adam Selipsky and Dario Amodei sitting onstage at a conference with the logos of Amazon and Anthropic behind them.
Amazon Web Services CEO Adam Selipsky speaks with Anthropic CEO Dario Amodei during a 2023 conference.

Noah Berger/Getty

  • Amazon is developing a new AI model called Olympus for video analysis, The Information reported.
  • The AI tool could help search for specific scenes in video archives.
  • Amazon-backed Anthropic has already launched its own multimodal model for images and videos.

Amazon may have doubled down on its investment in Anthropic, but it also appears to be hedging its AI bets by developing its own model that can process images and video in addition to text.

The tech giant's new model, code-named Olympus, could help customers search video archives for specific scenes, The Information reported.

It's a type of AI — known as multimodal — already offered by Anthropic, the startup that Amazon pumped a fresh $4 billion into earlier this month, bringing its total investment in the company to $8 billion.

Amazon could launch Olympus as soon as next week at its annual AWS re:Invent conference, The Information reported.

Amazon's partnership with Anthropic goes beyond capital. The e-commerce juggernaut has used Anthropic's technology to power its digital assistant and AI coding products. And Amazon Web Services customers get early access to a key Anthropic feature: the ability to fine-tune its chatbot Claude with their own data.

In return for Amazon's most recent investment in Anthropic, the startup said it would use AWS as its "primary cloud and training partner." The deal also includes an agreement for the OpenAI rival to use more of Amazon's chips.

The development and launch of Olympus could reduce Amazon's dependency on Anthropic for multimodal AI, especially if the new video model emerges as a cheaper alternative.

The Big Tech giant has a huge repository of video archives, which could be used to train the AI model for various use cases, from sports analysis to geological inspections for oil and gas companies, per the report.

Amazon doesn't have a seat on Anthropic's board, but it takes a portion of the proceeds from the startup's sales, as Anthropic's platform runs on AWS servers.

Amazon and Anthropic did not immediately respond to a Business Insider request for comment.


Marc Benioff thinks we've reached the 'upper limits' of LLMs — the future, he says, is AI agents

Salesforce CEO Marc Benioff.
In a podcast appearance, Salesforce CEO Marc Benioff said he thinks the future of AI lies in agents that do tasks autonomously.

Brontë Wittpenn/San Francisco Chronicle via Getty Images

  • Tech titan Marc Benioff says we're near the "upper limits" of LLM use in AI advancement.
  • In a podcast, the Salesforce CEO said the future of AI lies in agents that work autonomously.
  • "'Terminator'? Maybe we'll be there one day," Benioff said, referencing the 1984 film about a cyborg assassin.

Salesforce CEO Marc Benioff, in an episode of The Wall Street Journal's "Future of Everything" podcast, said he thinks the future of AI advancement lies in autonomous agents — not the large language models that power bots like ChatGPT.

"I actually think we're hitting the upper limits of the LLMs right now," Benioff said.

Over the last several years, Benioff said, we've all "got drunk on the ChatGPT Kool-Aid," leading the average consumer to believe that AI is more powerful than it is and that LLMs are key to advancement in the technology. But there is a burgeoning use of artificial intelligence — autonomous agents, which can be deployed to conduct tasks independently, such as executing sales communications or marketing campaigns — that he says will be more significant than LLMs have been for companies trying to become more efficient and transform the world of work.
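Stripped of the marketing, the agent pattern Benioff describes usually reduces to a loop: a model chooses an action, the action runs, and the observation feeds back in until the task is done. Here is a toy, self-contained sketch; the hard-coded `plan` function stands in for a real model call, and the tool and email address are invented.

```python
# Toy sketch of an autonomous-agent loop: choose a tool, run it, feed the
# result back, repeat until done. `plan` stands in for a real model call.

def send_email(to: str) -> str:
    return f"email sent to {to}"          # stub tool; no email is sent

TOOLS = {"send_email": send_email}

def plan(goal: str, history: list[str]) -> tuple[str, str]:
    # A real agent would ask an LLM for the next step; this stub emails
    # one (hypothetical) prospect, then declares the goal complete.
    return ("done", "") if history else ("send_email", "prospect@example.com")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):            # cap steps so the loop always ends
        tool, arg = plan(goal, history)
        if tool == "done":
            break
        history.append(TOOLS[tool](arg))  # execute and record the observation
    return history

print(run_agent("follow up with this week's sales leads"))
```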

Salesforce offers prebuilt and customizable AI agents for clients seeking to automate customer service tasks. OpenAI is closing in on a launch date for its own agents, which Bloomberg reported will be able to complete assigned tasks like writing code or booking travel.

Nvidia CEO Jensen Huang recently said he believes we'll all eventually be working alongside agents and "AI employees."

"We have incredible tools to augment our productivity, to augment our employees, to prove our margins, to prove our revenues, to make our companies fundamentally better, have higher fidelity relationships with our customers," Benioff said. "But we are not at that moment that we've seen in these crazy movies β€” and maybe we will be one day, but that is not where we are today."

Benioff said the general public has learned about the power of AI agents from movies like the 1984 film "Terminator," starring Arnold Schwarzenegger as a cyborg assassin, and the 2002 hit "Minority Report," about police preemptively arresting would-be criminals using AI-powered technology to detect crime before it occurs.

Benioff said there are some industry insiders and AI evangelists who suggest the tech, which hasn't yet evolved too far beyond LLMs, is capable of feats like curing cancer or solving climate change — but not only is that overstating what the technology can do, he said, it's misleading to people who could benefit from it through applications in which it is currently useful.

"This idea that these AI priests and priestesses are out there telling the world things about AI that are not true is a huge disservice to these enterprising customers who can increase their margins, increase their revenues, augment their employees, improve their customer relationships," Benioff said.

He added: "Yes, you can do all of these things with AI, but this other part — that we are all living in 'Minority Report'? No, we're not there yet. Maybe we'll be there one day. 'Terminator'? Maybe we'll be there one day. 'WarGames' — I hope we will never be there."

In the 1983 film "WarGames," starring Matthew Broderick, a high school student hacks a military supercomputer, activating the country's nuclear arsenal and risking World War III.

Representatives for Salesforce did not immediately respond to a request for comment from Business Insider.


AI adoption is surging — but humans still need to be in the loop, say software developers from Meta, Amazon, Nice, and more

Photo collage featuring headshots of Greg Jennings, Aditi Mithal, Pooya Amini, Shruti Kapoor, Neeraj Verma, Kesha Williams, Igor Ostrovsky
Top Row: Greg Jennings, Aditi Mithal, Pooya Amini, and Shruti Kapoor. Bottom Row: Neeraj Verma, Kesha Williams, and Igor Ostrovsky.

Alyssa Powell/BI

This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

The future of software-development jobs is changing rapidly as more companies adopt AI tools that can accelerate the coding process and close experience gaps between junior- and senior-level developers.

Increased AI adoption could be part of the tech industry's "white-collar recession," which has seen slumps in hiring and recruitment over the past year. Yet integrating AI into workflows can offer developers the tools to focus on creative problem-solving and building new features.

On November 14, Business Insider convened a roundtable of software developers as part of our "CXO AI Playbook" series to learn how artificial intelligence was changing their jobs and careers. The conversation was moderated by Julia Hood and Jean Paik from BI's Special Projects team.

These developers discussed the shifts in their day-to-day tasks, which skills people would need to stay competitive in the industry, and how they navigate the expectations of stakeholders who want to stay on the cutting edge of this new technology.

Panelists said AI has boosted their productivity by helping them write and debug code, which has freed up their time for higher-order problems, such as designing software and devising integration strategies.

However, they emphasized that some of the basics of software engineering — learning programming languages, scaling models, and handling large-scale data — would remain important.

The roundtable participants also said developers could provide critical insight into challenges around AI ethics and governance.

The roundtable participants were:

  • Pooya Amini, software engineer, Meta.
  • Greg Jennings, head of engineering for AI, Anaconda.
  • Shruti Kapoor, lead member of technical staff, Slack.
  • Aditi Mithal, software-development engineer, Amazon Q.
  • Igor Ostrovsky, cofounder, Augment.
  • Neeraj Verma, head of applied AI, Nice.
  • Kesha Williams, head of enterprise architecture and engineering, Slalom.

The following discussion was edited for length and clarity.


Julia Hood: What has changed in your role since the popularization of gen AI?

Neeraj Verma: I think the expectations that are out there in the market for developers on the use of AI actually have almost a bigger impact than the AI itself. You hear about how generative AI is sort of solving this blank-paper syndrome. Humans have this concept that if you give them a blank paper and tell them to go write something, they'll be confused forever. And generative AI is helping overcome that.

The expectation from executives now is that developers are going to be significantly faster but that some of the creative work the developers are doing is going to be taken away — which we're not necessarily seeing. We're seeing it as more of a boilerplate creation mechanism for efficiency gains.

Aditi Mithal: I joined Amazon two years ago, and I've seen how my productivity has changed. I don't have to focus on doing repetitive tasks. I can just ask Amazon Q chat to do that for me, and I can focus on more-complex problems that can actually impact our stakeholders and our clients. I can focus on higher-order problems instead of more-repetitive tasks for which the code is already out there internally.

Shruti Kapoor: One of the big things I've noticed with writing code is how open companies have become to AI tools like Cursor and Copilot and how integrated they've become into the software-development cycle. It's no longer considered a no-no to use AI tools like ChatGPT. I think two years ago when ChatGPT came out, it was a big concern that you should not be putting your code out there. But now companies have kind of embraced that within the software-development cycle.

Pooya Amini: Looking back at smartphones and Google Maps, it's hard to remember what the world looked like before these technologies. It's a similar situation with gen AI — I can't remember how I was solving the problem without it. I can focus more on actual work.

Now I use AI as a kind of assisted tool. My main focus at work is on requirement gathering, like software design. When it comes to the coding, it's going to be very quick. Previously, it could take weeks. Now it's a matter of maybe one or two days, so then I can actually focus on other stuff as AI is solving the rest for me.

Kesha Williams: In my role, it's been trying to help my team rethink their roles and not see AI as a threat but more as a partner that can help boost productivity, and encouraging my team to make use of some of the new embedded AI and gen-AI tools. Really helping my team upskill and putting learning paths in place so that people can embrace AI and not be afraid of it. More of the junior-level developers are really afraid of AI replacing them.


Hood: Are there new career tracks opening up now that weren't here before?

Verma: At Nice, we have something like 3,000 developers, and over the last, I think, 24 months, 650 of them have shifted into AI-specific roles, which was sort of unheard of before. Even out of those 650, we've got about a hundred who are experts at things like prompt engineering. Over 20% of our developers are not just developers being supported by AI but developers using AI to write features.

Kapoor: I think one of the biggest things I've noticed in the last two to three years is the rise of a job title called "AI engineer," which did not exist before, and it's kind of in between an ML engineer and a traditional software engineer. I'm starting to see more and more companies where AI engineer is one of the top-paying jobs available for software engineers. One of the cool things about this job is that you don't need an ML-engineering background, which means it's accessible to a lot more people.

Greg Jennings: For developers who are relatively new or code-literate knowledge workers, I think they can now use code to solve problems where previously they might not have. We have designers internally that are now creating full-blown interactive UIs using AI to describe what they want and then providing that to engineers. They've never been able to do that before, and it greatly accelerates the cycle.

For more-experienced developers, I think there are a huge number of things that we still have to sort out: the architectures of these solutions, how we're actually going to implement them in practice. The nature of testing is going to have to change a lot as we start to include these applications in places where they're more mission-critical.

Amini: On the other side, looking at threats that can come out of AI, new technologies and new positions can emerge as well. We don't currently have clear regulations in terms of ownership or the issues related to gen AI, so I imagine there will be more positions in terms of ethics.

Mithal: I feel like a Ph.D. is not a requirement anymore to be a software developer. If you have some foundational ML or NLP knowledge, you can target some of these ML-engineer or AI-engineer roles, which gives you a great opportunity to be in the market.

Williams: I'm seeing new career paths in specialized fields around ML and LLM operations. For my developers, they're able to focus more on strategy and system design and creative problem-solving, and it seems to help them move faster into architecture. System design, system architecture, and integration strategies β€” they have more time to do that because of AI.


Jean Paik: What skills will developers need to stay competitive?

Verma: I think a developer operating an AI system requires product-level understanding of what you're trying to build at a high level. And I think a lot of developers struggle with prompt engineering from that perspective. Having the skills to clearly articulate what you want to an LLM is a very important skill.

Williams: Developers need to understand machine-learning concepts and how AI models work, not necessarily how to build and train these models from scratch but how to use them effectively. As we're starting to use Amazon Q, I've realized that our developers are now becoming prompt engineers because you have to get that prompt right in order to get the best results from your gen-AI system.

Jennings: Understanding how to communicate with these models is very different. I almost think that it imparts a need for engineers to have a little bit more of a product lens, where a deeper understanding of the actual business problem they're trying to solve is necessary to get the most out of it. Developing evaluations that you can use to optimize those prompts, so going from prompt engineering to actually tuning the prompts in a more-automated way, is going to emerge as a more common approach.
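As a concrete, if simplified, illustration of the evaluation-driven tuning Jennings describes, the sketch below scores each candidate prompt against a tiny test set and keeps the winner; `ask_model` is a stub standing in for a real LLM call, and the test cases are invented.

```python
# Minimal prompt-evaluation sketch: score candidate prompts on a test set,
# keep the best. `ask_model` is a stub for a real LLM call.

CASES = [("2+2", "4"), ("capital of France", "Paris")]  # invented examples
CANDIDATES = [
    "Answer tersely: {q}",
    "You are a careful assistant. Answer with one word: {q}",
]

def ask_model(prompt: str) -> str:
    # Stub so the sketch runs; a real harness would call an LLM here.
    question = prompt.split(": ", 1)[-1]
    return {"2+2": "4", "capital of France": "Paris"}.get(question, "")

def score(template: str) -> float:
    hits = sum(ask_model(template.format(q=q)) == expected
               for q, expected in CASES)
    return hits / len(CASES)

best = max(CANDIDATES, key=score)
print(f"best prompt: {best!r} (accuracy {score(best):.0%})")
```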

Igor Ostrovsky: Prompt engineering is really important. That's how you interact with AI systems, but this is something that's evolving very quickly. Software development will change in five years much more rapidly than anything we've seen before. How you architect, develop, test, and maintain software β€” that will all change, and how exactly you interact with AI will also evolve.

I think prompt engineering is more of a sign that some developers have the desire to learn and are eager to figure out how to interact with artificial intelligence, but it won't necessarily be how you interact with AI in three years or five years. Software developers will need this desire to adapt and learn and have the ability to solve hard problems.

Mithal: As a software developer, some of the basics won't change. You need to understand how to scale models, build scalable solutions, and handle large-scale data. When you're training an AI model, you need data to support it.

Kapoor: Knowledge of a programming language would be helpful, specifically Python or even JavaScript. Knowledge of ML or some familiarity with ML will be really helpful. Another thing is that we need to make sure our applications are a lot more fault-tolerant. That is also a skill that front-end or back-end engineers who want to transition to an AI-engineering role need to be aware of.

One of the biggest problems with prompts is that the answers can be very unpredictable and can lead to a lot of different outputs, even for the same prompt. So being able to make your application fault-tolerant is one of the biggest skills we need to apply in AI engineering.
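In practice, that fault tolerance often looks like the sketch below: parse and validate whatever the model returns, retry on garbage, and fall back to a safe default instead of crashing. The `ask_model` function is a stub whose first reply is deliberately malformed, to show the retry path.

```python
import json

# Sketch of a fault-tolerant LLM call: validate output, retry on failure,
# fall back to a safe default. `ask_model` is a stub for a real call.

def ask_model(prompt: str, attempt: int) -> str:
    # Stub: the first attempt returns garbage, the second valid JSON.
    return "oops" if attempt == 0 else '{"sentiment": "positive"}'

def classify(text: str, retries: int = 3) -> dict:
    for attempt in range(retries):
        raw = ask_model(f"Return JSON sentiment for: {text}", attempt)
        try:
            result = json.loads(raw)            # step 1: output must parse
            if result.get("sentiment") in {"positive", "negative", "neutral"}:
                return result                   # step 2: allowed values only
        except json.JSONDecodeError:
            pass                                # malformed; try again
    return {"sentiment": "unknown"}             # safe fallback, never crash

print(classify("I love this product"))  # survives the bad first response
```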


Hood: What are the concerns and obstacles you have as AI gains momentum? How do you manage the expectations of nontech stakeholders in the organization who want to stay on the leading edge?

Ostrovsky: Part of the issue is that interacting with ChatGPT or Claude is so easy and natural that it can be surprising how hard it actually is to control AI behavior, where you need AI to understand constraints, have access to the right information at the right time, and understand the task.

When setting expectations with stakeholders, it is important they understand that we're working with this very advanced technology and they are realistic about the risk profile of the project.

Mithal: One is helping them understand the trade-offs. It could be security versus innovation or speed versus accuracy. The second is metrics. Is it actually improving the efficiency? How much is the acceptance rate for our given product? Communicating all those to the stakeholders gives them an idea of whether the product they're using is making an impact or if it's actually helping the team become more productive.

Williams: Some of the challenges I'm seeing are mainly around ethical AI concerns, data privacy, and costly and resource-intensive models that go against budget and infrastructure constraints. On the vendor or stakeholder side, it's really more about educating our nontechnical stakeholders about the capabilities of AI and the limitations and trying to set realistic expectations.

We try to help our teams understand how AI can be applied to their specific business area, such as how we can use AI in marketing or HR or legal, and we give them real-world use cases.

Verma: Gen AI is really important, and it's so easy to use ChatGPT, but what we find is that gen AI makes a good developer better and a worse developer worse. Good developers understand how to write good code and how good code integrates into projects. ChatGPT is just another tool to help write some of the code that fits into the project. That's the big challenge that we try to make sure our executives understand, that not everybody can use this in the most effective manner.

Jennings: There are some practical governance concerns that have emerged. One is understanding the tolerance for bad responses in certain contexts. Some problems, you may be more willing to accept a bad response because you structure the interface in such a way that there's a human in the loop. If you're attempting to not have a human in the loop, that could be problematic depending on what you want the model to do. Just getting better muscle for the organization to have a good intuition about where these models can potentially fail and in what ways.

In addition to that, understanding what training data went into that model, especially as models are used more as agents and have privileged access to different applications and data sources that might be pretty sensitive.
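The human-in-the-loop structure Jennings mentions is often implemented as a simple gate: confident, low-stakes outputs go out automatically, and everything else queues for a person. An illustrative sketch, with invented thresholds:

```python
# Illustrative human-in-the-loop gate: auto-send only confident, low-stakes
# model output; queue the rest for review. Thresholds are invented.

REVIEW_QUEUE: list[dict] = []

def dispatch(draft: str, confidence: float, high_stakes: bool) -> str:
    if confidence >= 0.9 and not high_stakes:
        return f"AUTO-SENT: {draft}"
    REVIEW_QUEUE.append({"draft": draft, "confidence": confidence})
    return "HELD FOR HUMAN REVIEW"

print(dispatch("Your refund is approved.", confidence=0.95, high_stakes=False))
print(dispatch("Your account will be closed.", confidence=0.60, high_stakes=True))
print(f"{len(REVIEW_QUEUE)} item(s) awaiting human review")
```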

Kapoor: I think one of the biggest challenges that can happen is how companies use the data that comes back from LLM models and how they're going to use it within the application. Removing the human component scares me a lot.

Verma: It's automation versus augmentation. There are a lot of cases where augmentation is the big gain. I think automation is a very small, closed case — there are very few things in the world right now that I think LLMs are ready to automate.

