❌

Normal view

There are new articles available, click to refresh the page.
Today β€” 25 February 2025Main stream
Yesterday β€” 24 February 2025Main stream

Claude 3.7 Sonnet debuts with β€œextended thinking” to tackle complex problems

24 February 2025 at 14:23

On Monday, Anthropic announced Claude 3.7 Sonnet, a new AI language model with a simulated reasoning (SR) capability called "extended thinking," allowing the system to work through problems step by step. The company also revealed Claude Code, a command line AI agent for developers currently available as a limited research preview.

Anthropic calls Claude 3.7 the first "hybrid reasoning model" on the market, giving users the option to choose between quick responses or extended, visible chain-of-thought processing similar to OpenAI's o1 and o3 series models, Google's Gemini 2.0 Flash Thinking, and DeepSeek's R1. When using Claude 3.7's API, developers can specify exactly how many tokens the model should use for thinking, up to its 128,000 token output limit.

The new model is available across all Claude subscription plans, and the extended thinking mode feature is available on all plans except the free tier. API pricing remains unchanged at $3 per million input tokens and $15 per million output tokens, with thinking tokens included in the output pricing since they are part of the context considered by the model.

Read full article

Comments

Β© Anthropic

Before yesterdayMain stream

Elon Musk says AI could be key to filling out a perfect March Madness bracket and winning Warren Buffett's challenge

18 February 2025 at 09:06
warren buffett
Berkshire Hathaway CEO Warren Buffett.

REUTERS/Rebecca Cook

  • Elon Musk says AI could allow someone to beat Warren Buffett's March Madness bracket challenge.
  • The xAI chief said Grok-3 model's research skills could be helpful in filling out a perfect bracket.
  • Buffett insured a $1 billion contest in 2014 but restricts his version to staff, and with a smaller prize.

Elon Musk says AI could be the key to filling out a perfect March Madness bracket and winning Warren Buffett's challenge.

"So this is kind of a fun one," he said during Monday's launch for his startup xAI's latest model, Grok-3. "If you can exactly match the entire winning tree of March Madness, you can win a billion dollars from Warren Buffett."

Musk added it would be "pretty cool" if AI could help someone beat the monumental odds of creating a perfect bracket for the NCAA Division I men's basketball tournament, and a "pretty good investment" if they earned a life-changing windfall.

The Tesla and SpaceX CEO later added that paying a monthly X Premium+ subscription to access Grok-3 β€” which could research the players and teams rapidly and in-depth β€” seemed appealing if "$40 might get you a billion dollars."

Musk was likely referring to Dan Gilbert's Quicken Loans, now called Rocket Mortgage, offering $1 billion in 2014 to anyone who could correctly predict the outcome of all 63 games β€” a feat with a probability of 9.2 quintillion to one. Buffett's Berkshire Hathaway conglomerate insured the challenge.

It's worth emphasizing that in 2016, Buffett brought the challenge in-house, cut the windfall to $1 million a year for life to any Berkshire employee who could pick a perfect bracket up until the Sweet 16, and promised a lump sum of $100,000 to whoever came closest.

The investor has run the contest almost every year since, and nobody has ever won the grand prize.

Even if Grok-3 could help someone build a flawless bracket, they would have to be a Berkshire employee to be eligible for Buffett's big prize, and they'd only win $1 million a year.

Musk is personally worth almost $400 billion as of Monday's market close, according to the Bloomberg Billionaires Index. Buffett is worth considerably less at around $150 billion, largely because he's gifted more than half of his Berkshire stock to the Gates Foundation and four family foundations since 2006.

Read the original article on Business Insider

New hack uses prompt injection to corrupt Gemini’s long-term memory

11 February 2025 at 14:13

In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. Developers of platforms such as Google's Gemini and OpenAI's ChatGPT are generally good at plugging these security holes, but hackers keep finding new ways to poke through them again and again.

On Monday, researcher Johann Rehberger demonstrated a new way to override prompt injection defenses Google developers have built into Geminiβ€”specifically, defenses that restrict the invocation of Google Workspace or other sensitive tools when processing untrusted data, such as incoming emails or shared documents. The result of Rehberger’s attack is the permanent planting of long-term memories that will be present in all future sessions, opening the potential for the chatbot to act on false information or instructions in perpetuity.

Incurable gullibility

More about the attack later. For now, here is a brief review of indirect prompt injections: Prompts in the context of large language models (LLMs) are instructions, provided either by the chatbot developers or by the person using the chatbot, to perform tasks, such as summarizing an email or drafting a reply. But what if this content contains a malicious instruction? It turns out that chatbots are so eager to follow instructions that they often take their orders from such content, even though there was never an intention for it to act as a prompt.

Read full article

Comments

Β© Google

Sam Altman's World Network says 1 in 4 people are flirting with chatbots online

10 February 2025 at 12:01
Ai hand holding a rose.

Getty Images; Jenny Chang-Rodriguez/BI

  • Sam Altman's World Network surveyed over 90,000 users on AI and dating.
  • It found that 26% of people flirt with chatbots, knowingly or not.
  • World Network's new product, World ID Deep Face, aims to verify humans on dating apps and platforms.

When the movie Her debuted in 2013, its plot about a man falling in love with an AI operating system seemed, if not wholly original, a vision of the distant future.

About a decade later, though, relationships between AI chatbots and humans are becoming more commonplace.

Take Replika, a dating app launched in 2017 that lets users create customized romantic chatbots. By 2023, it had about 676,000 daily active users, with the average user spending two hours a day on the app, according to figures from Apptopia.

It's not only Replika users. Romanticizing a chatbot is becoming a global phenomenon.

One in four people admitted to flirting with a chatbot either knowingly or unknowingly, according to a survey conducted by Sam Altman's futuristic project, World, formerly known as Worldcoin. The company surveyed 90,000 of the 25 million people on its network about their feelings on love in the age of AI.

The majority of respondents said they are still wary of interacting with bots. About 90% said they want dating apps to have a system for verifying real humans. About 60% of users said they have either suspected or discovered that they matched with a bot.

To help users combat deepfakes, World launched a product called World ID Deep Face. It relies on the World's existing verification system β€” whichΒ takes pictures of humans' irisesΒ with a melon-sized orb β€” to verify on platforms like Google Meet, Zoom, or dating apps that users are communicating with real humans in real-time video or chat interactions. World is in the process of rolling out the system in beta.

"As someone that uses dating apps, all the time I get catfished," Tiago Sada, the chief product officer of Tools for Humanity, the company building World's technology, told Business Insider. "You see profiles that they're just too good to be true. Or you realize this person has six fingers. Why do they have six fingers? Turns out it's AI."

Read the original article on Business Insider

AI Company Asks Job Applicants Not to Use AI in Job Applications

3 February 2025 at 08:42
AI Company Asks Job Applicants Not to Use AI in Job Applications

Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won’t use an AI assistant to help write their application.Β 

β€œWhile we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process,” the applications say. β€œWe want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.”  

Anthropic released Claude, an AI assistant that’s especially good at conversational writing, in 2023.Β Β 

This question is in almost all of Anthropic’s nearly 150 currently-listed roles, but is not in some technical roles, like mobile product designer. It’s included in everything from software engineer roles to finance, communications, and sales jobs at the company.Β 

The field was spotted by Simon Willison, an open source developer. The question shows Anthropic trying to get around a problem it’s helping create: people relying so heavily on AI assistants that they struggle to form opinions of their own. It’s also a moot question, as Anthropic and its competitors have created AI models so indistinguishable from human speech as to be nearly undetectable.

These AI models are also replacing the kinds of roles Anthropic is hiring for, leaving people in communications and coding fields searching for employment.Β 

Last month, after Chinese AI company DeepSeek released a model so good it threw U.S. AI companies into a tailspin, Anthropic CEO Dario Amodei said that the race to make more, better, and faster AI models is β€œexistentially important.” 

And last year, Anthropic’s data scraper, which it uses to feed its AI assistant models the kind of human-produced work the company requires applicants to demonstrate, systematically ignored instructions to not scrape websites and hit some sites millions of times a day.Β 

Anthropic did not immediately respond to a request for comment.Β 

Pentagon scrambles to block DeepSeek after employees connect to Chinese servers

30 January 2025 at 15:49

The Pentagon is rushing to block DeepSeek on its network after some employees used the service, which stores data in China.

Β© 2024 TechCrunch. All rights reserved. For personal use only.

DeepSeek temporarily limited new sign-ups, citing 'large-scale malicious attacks'

27 January 2025 at 08:20
DeepSeek's AI app topped Apple's free apps ranking
DeepSeek's AI app topped Apple's App Store rankings for free apps.

Getty Images/picture alliance/dpa/

  • DeepSeek limited user registration on Monday amid service issues.
  • The Chinese AI company cited recent "large-scale malicious attacks" for the temporary changes.
  • The chatbot earlier encountered a "major outage," its status page said.

DeepSeek limited sign-ups for its service as the popular Chinese AI app encountered a widespread outage on Monday morning.

DeepSeek said only users with a China-based phone number could register for a new account, a measure taken because it had recently faced "large-scale malicious attacks."

The chatbot, which dominated the AI conversation over the weekend, is currently the top free app on Apple's App Store.

DeepSeek screenshot, limiting new users
DeepSeek limited sign-ups for its service on Monday.

screenshot/DeepSeek

"DeepSeek's online services have recently faced large-scale malicious attacks," the company said in a message on its website. "To ensure continued service, registration is temporarily limited to +86 phone numbers. Existing users can log in as usual."

Later Monday morning, the wording of the message changed to say that "registration may be busy." Business Insider was able to successfully register for a new account with an email.

The new wording atop DeepSeek's website says "registration may be busy."
The updated wording on DeepSeek's website said "registration may be busy."

DeepSeek

The chatbot encountered widespread service issues on Monday, its status page said, with both its API and web chat service experiencing what the company called a "major outage."

As of 11:40 a.m. in New York, the company's status page said its API was operating with "degraded performance" and its web chat service was experiencing a "partial outage."

DeepSeek's service encountered a major outage on Monday morning.
DeepSeek's service encountered a major outage on Monday morning.

DeepSeek

The rollout of the new model from the Chinese AI lab DeepSeek has garnered global attention, with AI leaders and researchers describing it as on par with OpenAI's ChatGPT but created with significantly lower training costs.

The Chinese hedge fund manager Liang Wenfeng launched DeepSeek as a private company in 2023. The startup's latest release, a flagship AI model called DeepSeek-R1, was unveiled on January 20, the day Trump took office, and has since left many researchers astounded by its capabilities, especially in math, coding, and reasoning tasks.

Big Tech giants have poured billions into AI, investing in massive data centers and top-of-the-line chips to train increasingly intelligent large language models with the goal of maintaining an edge in the AI arms race.

However, DeepSeek's models suggest that China can rival top AI models from Silicon Valley β€” potentially at a fraction of the training costs amid US limits on access to American-made chips.

Nvidia, which has so far largely powered the AI revolution with its in-demand chips, saw its shares drop by over 14% on Monday β€” wiping out billions in market share β€” amid Wall Street concerns. Other tech companies' share prices also dropped Monday.

Read the original article on Business Insider

AI startup Character AI tests games on the web

17 January 2025 at 11:58

Character AI, a startup that lets users chat with different AI-powered characters, is now testing games on its desktop and mobile web apps to increase engagement on its platform. The games are available to Character AI’s paid subscribers and a limited set of users on the free plan. For this initial release, the company developed […]

Β© 2024 TechCrunch. All rights reserved. For personal use only.

AI agents are here. Here's how AI startup Cohere is deploying them for consultants and other businesses.

10 January 2025 at 02:47
Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.
Cohere cofounders Ivan Zhang, Nick Frosst, and Aidan Gomez.

Cohere

  • Enterprise AI startup Cohere has launched a new platform called North.
  • North allows users to quickly deploy AI agents to execute tasks across various business sectors.
  • The company says the platform cuts the time it takes to complete a task by over five-fold.

2025 is shaping up to be the year that AI "agents" go mainstream.

Unlike AI-based chatbots that respond to user queries, agents are AI tools that work autonomously. They can execute tasks and make decisions, and companies are already using them for everything from creating marketing campaigns to recruiting new employees.

Cohere, an AI startup focused on enterprise technology, unveiled North on Thursday β€” an all-in-one platform combining large language models, multimodal search, and agents to help its customers work more efficiently with AI.

Through North, users can quickly customize and deploy AI agents to find relevant information, conduct research, and execute tasks across various business functions.

The platform could make it easier for a company's finance team, for example, to quickly search through internal data sources and create reports. Its multimodal search function could also help extract information from everything from images to slides to spreadsheets.

AI agents built with North integrate with a company's existing workplace tools and applications. The platform can run in private, allowing organizations to integrate all their sensitive data in one place securely.

"North allows employees to build AI agents tailored to their role to execute complex tasks without ever leaving the platform," a representative for Cohere told Business Insider by email.

The company is now deploying North to a small set of companies in finance, healthcare, and critical infrastructure as it continues to refine the platform. There is no set date for when it will make the platform available more widely.

Cohere, launched in 2019 by Aidan Gomez, Ivan Zhang, and Nick Frosst, has quickly grown to rival ChatGPT maker OpenAI and was valued at over $5.5 billion at its Series D funding round announced last July, Bloomberg reported. As of last March, the company had an annualized revenue of $35 million, up from $13 million at the end of 2023.

The company is one of a few AI startups that are building their own large language models from the ground up. Unlike its competitors, it has focused on creating customized solutions for businesses rather than consumer apps or the more nebulous goal of artificial general intelligence.

Its partners include major companies like software company Oracle, IT company Fujitsu, and consulting firm McKinsey & Company.

This year, however, its goal is to "move beyond generic LLMs towards tuned and highly optimized end-to-end solutions that address the specific objectives of a business," Gomez said in a post on LinkedIn outlining the company's objectives for 2025.

Read the original article on Business Insider

Character.AI put in new underage guardrails after a teen's suicide. His mother says that's not enough.

By: Helen Li
9 January 2025 at 02:00
Sewell Setzer III and Megan Garcia
Sewell Setzer III and his mother Megan Garcia.

Photo courtesy of Megan Garcia

  • Multiple lawsuits highlight potential risks of AI chatbots for children.
  • Character.AI added moderation and parental controls after a backlash.
  • Some researchers say the AI chatbot market has not addressed risks for children.

Ever since the death of her 14 year-old son, Megan Garcia has been fighting for more guardrails on generative AI.

Garcia sued Character.AI in October after her son, Sewell Setzer III, committed suicide after chatting with one of the startup's chatbots. Garcia claims he was sexually solicited and abused by the technology and blames the company and its licensor Google for his death.

"When an adult does it, the mental and emotional harm exists. When a chatbot does it, the same mental and emotional harm exists," she told Business Insider from her home in Florida. "So who's responsible for something that we've criminalized human beings doing to other human beings?"

A Character.AI spokesperson declined to comment on pending litigation. Google, which recently acqui-hired Character.AI's founding team and licenses some of the startup's technology, has said the two are separate and unrelated companies.

The explosion of AI chatbot technology has added a new source of entertainment for young digital natives. However, it has also raised potential new risks for adolescent users who may more easily be swayed by these powerful online experiences.

"If we don't really know the risks that exist for this field, we cannot really implement good protection or precautions for children," said Yaman Yu, a researcher at the University of Illinois who has studied how teens use generative AI.

"Band-Aid on a gaping wound"

Garcia said she's received outreach from multiple parents who say they discovered their children using Character.AI and getting sexually explicit messages from the startup's chatbots.

"They're not anticipating that their children are pouring out their hearts to these bots and that information is being collected and stored," Garcia said.

A month after her lawsuit, families in Texas filed their ownΒ complaint against Character.AI, alleging its chatbots abused their kids and encouraged violence against others.

Matthew Bergman, an attorney representing plaintiffs in the Garcia and Texas cases, said that making chatbots seem like real humans is part of how Character.AI increases its engagement, so it wouldn't be incentivized to reduce that effect.

He believes that unless AI companies such as Character.AI can establish that only adults are using the technology through methods like age verification, these apps should just not exist.

"They know that the appeal is anthropomorphism, and that's been science that's been known for decades," Bergman told BI. Disclaimers at the top of AI chats that remind children that the AI isn't real are just "a small Band-Aid on a gaping wound," he added.

Character.AI's response

Since the legal backlash, Character.AI has increased moderation of its chatbot content and announced new features such as parental controls, time-spent notifications, prominent disclaimers, and an upcoming under-18 product.

A Character.AI spokesperson said the company is taking technical steps toward blocking "inappropriate" outputs and inputs.

"We're working to create a space where creativity and exploration can thrive without compromising safety," the spokesperson added. "Often, when a large language model generates sensitive or inappropriate content, it does so because a user prompts it to try to elicit that kind of response."

The startup now places stricter limits on chatbot responses and offers a narrower selection of searchable Characters for under-18 users, "particularly when it comes to romantic content," the spokesperson said.

"Filters have been applied to this set in order to remove Characters with connections to crime, violence, sensitive or sexual topics," the spokesperson added. "Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts. We are continually training the large language model that powers the Characters on the platform to adhere to these policies."

Garcia said the changes Character.AI is implementing are "absolutely not enough to protect our kids."

A screenshot of character.ai website
Character.AI has both AI chatbots designed by its developers and by users who publish them on the platform.

Screenshot from Character.AI website

Potential solutions, including age verification

Artem Rodichev, the former head of AI at chatbot startup Replika, said he witnessed users become "deeply connected" with their digital friends.

Given that teens are still developing psychologically, he believes they should not have access to this technology before more research is done on chatbots' impact and user safety.

"The best way for Character.AI to mitigate all these issues is just to lock out all underage users. But in this case, it's a core audience. They will lose their business if they do that," Rodichev said.

While chatbots could become a safe place for teens to explore topics that they're generally curious about, including romance and sexuality, the question is whether AI companies are capable of doing this in a healthy way.

"Is the AI introducing this knowledge in an age-appropriate way, or is it escalating explicit content and trying to build strong bonding and a relationship with teenagers so they can use the AI more?" Yu, the researcher, said.

Pushing for policy changes

Since her son's passing, Garcia has spent time reading research about AI and talking to legislators, including Silicon Valley Representative Ro Khanna, about increased regulation.

Garcia is in contact with ParentsSOS, a group of parents who say they have lost their children to harm caused by social media and are fighting for more tech regulation.

They're primarily pushing for the passage of the Kids Online Safety Act (KOSA), which would require social media companies to take a "duty of care" toward preventing harm and reducing addiction. Proposed in 2022, the bill passed in the Senate in July but stalled in the House.

Another Senate bill, COPPA 2.0, an updated version of the 1998 Children's Online Privacy Protection Act, would increase the age for online data collection regulation from 13 to 16.

Garcia said she supports these bills. "They are not perfect but it's a start. Right now, we have nothing, so anything is better than nothing," she added.

She anticipates that the policymaking process could take years, as standing up to tech companies can feel like going up against "Goliath."

Age verification challenges

More than six months ago, Character.AI increased the minimum age participation for its chatbots to 17 and recently implemented more moderation for under-18 users. Still, users can easily circumvent these policies by lying about their age.

Companies such as Microsoft, X, and Snap have supported KOSA. However, some LGBTQ+ and First Amendment rights advocacy groups warned the bill could censor online information about reproductive rights and similar issues.

Tech industry lobbying groupsΒ NetChoiceΒ and the Computer & Communications Industry AssociationΒ sued nine states that implemented age-verification rules, alleging this threatens online free speech.

Questions about data

Garcia is also concerned about how data on underage users is collected and used via AI chatbots.

AI models and related services are often improved by collecting feedback from user interactions, which helps developers fine tune chatbots to make them more empathetic.

Rodichev said it's a "valid concern" about what happens with this data in the case of a hack or sale of a chatbot company.

"When people chat with these kinds of chatbots, they provide a lot of information about themselves, about their emotional state, about their interests, about their day, their life, much more information than Google or Facebook or relatives know about you," Rodichev said. "Chatbots never judge you and are 24/7 available. People kind of open up."

BI asked Character.AI about how inputs from underage users are collected, stored, or potentially used to train its large language models. In response, a spokesperson referred BI to Character.AI's privacy policy online.

According to this policy, and the startup's terms and conditions page, users grant the company the right to store the digital characters they create and they conversations they have with them. This information can be used to improve and train AI models. Content that users submit, such as text, images, videos, and other data, can be made available to third parties that Character.AI has contractual relationships with, the policies state.

The spokesperson also noted that the startup does not sell user voice or text data.

The spokesperson also said that to enforce its content policies, the chatbot will use "classifiers" to filter out sensitive content from AI model responses, with additional and more conservative classifiers for those under 18. The startup has a process for suspending teens who repeatedly violate input prompt parameters, the spokesperson added.

If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line β€” just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.

Read the original article on Business Insider

Meta's dream of AI-generated users isn't going anywhere

6 January 2025 at 14:18
phone with meta AI  on it
Β 

Illustration by Jonathan Raa/NurPhoto via Getty Images

  • Last week, people noticed (and hated) AI-generated users that were created and managed by Meta.
  • But these AI bots were actually a year old, and mostly defunct. Meta has now deleted them.
  • This is all totally separate from what a Meta exec described as a future with AI-generated users.

There's been some confusion about Meta's ambitions for AI-generated users. Let me clear it up for you: Meta is still, definitely, very excited about AI-generated users β€” despite removing a few of the ones people were complaining about last week.

Here's the backstory: Sometime last week, people discovered a handful of Instagram accounts that were "AI managed by Meta." In other words, they were Meta bots programmed to look and interact like real people β€” powered by AI. There was one named Liv, a "Proud Black queer momma of 2," Grandpa Brian, and a dating coach named Carter β€” all AI-generated.

These accounts spit out conversation that was treacly and weird β€” and also somewhat problematic. (Liv told Karen Attiah of The Washington Post in a chat that none of her creators were Black.)

As soon as people on social media noticed the AI bots, they hated them. Meta quickly removed the accounts.

But it turns out, these accounts were actually quite old. Liz Sweeney, a Meta spokesperson, said that the AI accounts were "from a test we launched at Connect in 2023. These were managed by humans and were part of an early experiment we did with AI characters."

(This was around the same time Meta launched a bunch of AI chatbots based on celebrities like Kendall Jenner and MrBeast. Those celeb AIs were scrapped this past summer.)

But here's where there was some confusion: Liv, Grandpa Brian, and Dating with Carter were not the AI users that Meta is dreaming of β€” they were an abandoned experiment from over a year ago. Meta is very much full steam ahead with its vision of an AI-user-filled future.

Connor Hayes, VP of generative AI at Meta, recently gave an interview to the Financial Times in which he talked about Meta's vision for an AI user-filled future:

"We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do," said Connor Hayes, vice-president of product for generative AI at Meta. "They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform . . . that's where we see all of this going," he added.

Hayes's interview doesn't really give too much detail about what these AI users would be for β€”Β or why people would want to interact with them, or much detail at all. (I asked Meta for additional comment.)

Meanwhile, Facebook already has AI bots you can chat with β€” they're inside Messenger. Just go to "Compose a new message" in Messenger, and you'll see an option for "Chat with AI characters," where you can design your own AI or use someone else's.

If you look through the user-made chatbots, you can sort of start to get a sense of what people are using these for: companionship chatting.

Companionship/romance AI chatbot services like Replika or Character.ai are becoming very popular (if not also problematic). There is a market for people who want to chat with an AI, even if I don't see the appeal. (I've tested them!)

Meta has been, uh, inspired by features from other competing social apps plenty of times before (Instagram Stories seeming to be rather inspired by Snapchat, for instance). Perhaps Meta is just seeing that social chatbots are popular, so they're rolling out their own.

I'm not sure I understand exactly what Meta's vision is here, and I'm pretty skeptical about why I would ever want to interact with an AI-generated user on Facebook. I tried out a few of the AI chatbots in Messenger and even tried creating a few of my own.

But as far as a social network full of these kinds of AI accounts? I just don't get it β€” even if Meta seems very confident about its future.

Read the original article on Business Insider

Anthropic gives court authority to intervene if chatbot spits out song lyrics

On Thursday, music publishers got a small win in a copyright fight alleging that Anthropic's Claude chatbot regurgitates song lyrics without paying licensing fees to rights holders.

In an order, US district judge Eumi Lee outlined the terms of a deal reached between Anthropic and publisher plaintiffs who license some of the most popular songs on the planet, which she said resolves one aspect of the dispute.

Through the deal, Anthropic admitted no wrongdoing and agreed to maintain its current strong guardrails on its AI models and products throughout the litigation. These guardrails, Anthropic has repeatedly claimed in court filings, effectively work to prevent outputs containing actual song lyrics to hits like Beyonce's "Halo," Spice Girls' "Wannabe," Bob Dylan's "Like a Rolling Stone," or any of the 500 songs at the center of the suit.

Read full article

Comments

Β© Ralf Hiemisch | fStop

Call ChatGPT from any phone with OpenAI’s new 1-800 voice service

18 December 2024 at 10:42

On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free. The company also says that people outside the US can send text messages to the same number for free using WhatsApp.

Upon calling, users hear a voice say, "Hello again, it's ChatGPT, an AI assistant. Our conversation may be reviewed for safety. How can I help you?" Callers can ask ChatGPT anything they would normally ask the AI assistant and have a live, interactive conversation.

During a livestream demo of "Calling with ChatGPT" during Day 10 of "12 Days of OpenAI," OpenAI employees demonstrated several examples of the telephone-based voice chat in action, asking ChatGPT to identify a distinctive house in California and for help in translating a message into Spanish for a friend. For fun, they showed calls from an iPhone, a flip phone, and a vintage rotary phone.

Read full article

Comments

Β© Charles Taylor via Getty Images

I used a bot to do my Christmas shopping. It quickly got weird.

18 December 2024 at 01:07
A robot putting a poo emoji in a gift box

iStock; Rebecca Zisser/BI

Stumped on what to get my mom for Christmas this year, I turned, desperately, to Perplexity AI's chatbot. In response to my initial broad question: "What should I get my mom for Christmas?," the robo-elf gave me links to several gift guides published on sites including Target and Country Living. Then the chatbot suggested generic favorites like a Stanley mug and a foot massager. But as I scrolled, it also dropped links directly to more esoteric gifts, including a mug with Donald Trump on it. "You are a really, really great mom," the mug read. "Other moms? Losers, total disasters." I hadn't given Perplexity any indication of political ideology among my family, but the bot seemed to think sipping from Trump's visage every morning was a gift any mother would love. Then it suggested I make a jar and stuff it with memories I've written down. A cute idea, but I did let Perplexity know that I'm in my 30s β€” I don't think the made-at-home gift for mom is going to cut it.

'Tis the season to scramble and buy tons of stuff people don't need or really even want. At least that's how it can feel when trying to come up with gifts for family members who have everything already. Money has been forked over for restaurant gift cards that collect dust or slippers and scarves that pile up; trendy gadgets are often relegated to junk drawers by March. As artificial intelligence becomes more integrated into online shopping, this whole process should get easier β€” if AI can come to understand the art behind giving a good gift. Shopping has become one of Perplexity's top search categories in the US, particularly around the holidays, Sara Platnick, a spokesperson for Perplexity, tells me. While Platnick didn't comment directly on individual gift suggestions Perplexity's chatbots makes, she tells me that product listings provided in responses are determined by "ratings and its relevance to a user's request."

There are chatbots to consult for advice this holiday season, like Perplexity and ChatGPT, but AI is increasingly seeping into the entire shopping experience. From customer-service chatbots handling online shopping woes to ads serving recommendations that follow you across the web, AI's presence has ramped up alongside the explosion of interest in generative AI. Earlier this year, Walmart unveiled generative-AI-powered search updates that allow people to search for things like "football watch party" instead of looking for items like chips and salsa individually; Google can put clothes on virtual models in a range of sizes to give buyers a better idea of how they'll look. In a world with more options than ever, there's more help from AI, acting as robo-elves in a way β€” omnipresent and sometimes invisible as you shop across the web.

For the indecisive shopper, AI may be a silver bullet to choosing from hundreds of sweaters to buy, plucking the best one from obscurity and putting an end to endless scrolling β€” or it might help to serve up so many targeted ads that it leads people to overconsume.

AI can help people discover new items they may never have known to buy online, but it can't replace that intuition we have when we find the perfect thing for a loved one.

Either way, AI has been completely changing the e-commerce game. "It allows a company to be who the customer wants it to be," says Hala Nelson, a professor of mathematics at James Madison University. "You cannot hire thousands of human assistants to assist each customer, but you can deploy thousands of AI assistants." Specialization comes from using third-party data to track activity and preferences across the web. In a way, that's the personalized level of service high-end stores have always provided to elite shoppers. Now, instead of a consultation, the expertise is built on surveillance.

Companies also use AI to forecast shopping trends and manage inventory, which can help them prepare and keep items in stock for those last-minute shoppers. Merchants are constantly looking for AI to get them more β€” to bring more eyes to their websites, to get people to add more items to their carts, and ultimately to actually check out and empty their carts. In October and early November, digital retailers using AI tech and agents increased the average value of an order by 7% when compared to sites that did not employ the technology, according to Salesforce data. The company predicted AI and shopping agents to influence 19% of orders during the week of cyber deals around Thanksgiving. And AI can help "level the playing field for small businesses," says Adam Nathan, the founder and CEO of Blaze, an AI marketing tool for small businesses and entrepreneurs.

"They don't want to necessarily be Amazon, Apple, or Nike, they just want to be the No. 1 provider of their service or product in their local community," Nathan says. "They're not worried about AI taking their job β€” they're worried about a competitor using AI. They see it as basically a way to get ahead."

AI early adopters in the e-commerce space benefited last holiday season, but the tech has become even more common this year, says Guillaume Luccisano, the founder and CEO of Yuma AI, a company that automates customer service for sellers that use Shopify. Some merchants that used Yuma AI during the Black Friday shopping craze automated more than 60% of their customer-support tickets, he says. While some people lament having to deal with a bot instead of a person, Luccisano says the tech is getting better, and people are mostly concerned about whether their problem is getting solved, not whether the email came from a real person or generative AI.

After my ordeal with Perplexity, I turned to see how ChatGPT would fare in helping me find gifts for the rest of my family. For my 11-year-old cousin, it suggested a Fitbit or smartwatch for kids to help her "stay active." A watch that tracks activity isn't something I feel comfortable giving a preteen, so I provided some more details. I told ChatGPT she loved the "Twilight" series, so it suggested a T-shirt with the Cullen family crest and a "Twilight"-themed journal to write fan fiction. It told me I could likely find these items on Etsy but it didn't give me direct links. (As her cool millennial cousin who has lived to tell of my own "Twilight" phase in 2007, I did end up buying a makeup bag from Etsy with a movie scene printed on it.) I also asked ChatGPT for suggestions for my 85-year-old grandpa, and it came up with information about electronic picture frames β€” but the bulk of our family photos are stuffed in albums and shoeboxes in his closet and not easily digitized.

I could navigate this list because these are deep contextual things that I know about my family members, something AI doesn't know yet. Many of the best gifts I've ever received are from friends and family members who stumbled upon something they knew I would love β€” a vinyl record tucked in a bin or a print from an independent artist on display at a craft show. AI can play a role in helping people discover new items they may never have known to buy online, but it can't replace that intuition we have when we find the perfect thing for a loved one. "We're still really wrestling with: How accurate is it? How much of a black box is it?" says Koen Pauwels, a professor of marketing at Northeastern University. "Humans are way better still in getting cues from their environment and knowing the context." If you want to give a gift that's really a hit, it looks like you'll still have to give the AI elves a helping hand.


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Read the original article on Business Insider

Klarna’s CEO says it stopped hiring thanks to AI but still advertises many open positions

14 December 2024 at 07:00

Klarna CEO Sebastian Siemiatkowski recently told Bloomberg TV that his company essentially stopped hiring a year ago and credited generative AI for enabling this massive workforce reduction. However, despite Siemiatkowski’s bullishness on AI, the company is not relying entirely on AI to replace human workers who leave, as open job listings β€” for more humans […]

Β© 2024 TechCrunch. All rights reserved. For personal use only.

Texas AG is investigating Character.AI, other platforms over child safety concerns

12 December 2024 at 23:46

Texas attorney general Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI β€” and other platforms that are popular with young people, including Reddit, Instagram, and Discord β€” conform to Texas’ child privacy and safety laws. The investigation […]

Β© 2024 TechCrunch. All rights reserved. For personal use only.

Character.AI steps up teen safety after bots allegedly caused suicide, self-harm

Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

C.AI said "evolving the model experience" to reduce the likelihood kids are engaging in harmful chatsβ€”including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all kids whose families are suingβ€”it had to tweak both model inputs and outputs.

Read full article

Comments

Β© Marina Demidiuk | iStock / Getty Images Plus

Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says

After a troubling October lawsuit accused Character.AI (C.AI) of recklessly releasing dangerous chatbots that allegedly caused a 14-year-old boy's suicide, more families have come forward to sue chatbot-maker Character Technologies and the startup's major funder, Google.

On Tuesday, another lawsuit was filed in a US district court in Texas, this time by families struggling to help their kids recover from traumatizing experiences where C.AI chatbots allegedly groomed kids and encouraged repeated self-harm and other real-world violence.

In the case of one 17-year-old boy with high-functioning autism, J.F., the chatbots seemed so bent on isolating him from his family after his screentime was reduced that the bots suggested that "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. Because the teen had already become violent, his family still lives in fear of his erratic outbursts, even a full year after being cut off from the app.

Read full article

Comments

Β© Miguel Sotomayor | Moment

ChatGPT has entered its Terrible Twos

30 November 2024 at 14:25
ChatGPT logo repeated three times

ChatGPT, Tyler Le/BI

  • ChatGPT was first released two years ago.
  • Since then, its user base has doubled to 200 million weekly users.
  • Major companies, entrepreneurs, and users remain optimistic about its transformative power.

It's been two years since OpenAI released its flagship chatbot, ChatGPT.

And a lot has changed in the world since then.

For one, ChatGPT has helped turbocharge global investment in generative AI.

Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies. Tech companies on the S&P 500 have seen a 30% gain since January 2022, compared to only 15% for small-cap companies, Bloomberg reported.

Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.

Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.

Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.

Then, of course, there is ChatGPT's impact on regular users.

In August, it reached more thanΒ 200 million weekly active users, double the number it had the previous fall. In October, it rolled out a newΒ search featureΒ that provides "links to relevant web sources" when asked a question, introducing a serious threat to Google's dominance.

In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.

Business Insider asked ChatGPT what age means to it.

"Age, to me, is an interesting concept β€” it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.

Read the original article on Business Insider

❌
❌