
Call ChatGPT from any phone with OpenAI’s new 1-800 voice service

18 December 2024 at 10:42

On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free. The company also says that people outside the US can send text messages to the same number for free using WhatsApp.

Upon calling, users hear a voice say, "Hello again, it's ChatGPT, an AI assistant. Our conversation may be reviewed for safety. How can I help you?" Callers can ask ChatGPT anything they would normally ask the AI assistant and have a live, interactive conversation.

During a livestream demo of "Calling with ChatGPT" during Day 10 of "12 Days of OpenAI," OpenAI employees demonstrated several examples of the telephone-based voice chat in action, asking ChatGPT to identify a distinctive house in California and for help in translating a message into Spanish for a friend. For fun, they showed calls from an iPhone, a flip phone, and a vintage rotary phone.


© Charles Taylor via Getty Images

I used a bot to do my Christmas shopping. It quickly got weird.

18 December 2024 at 01:07
A robot putting a poo emoji in a gift box

iStock; Rebecca Zisser/BI

Stumped on what to get my mom for Christmas this year, I turned, desperately, to Perplexity AI's chatbot. In response to my initial broad question: "What should I get my mom for Christmas?," the robo-elf gave me links to several gift guides published on sites including Target and Country Living. Then the chatbot suggested generic favorites like a Stanley mug and a foot massager. But as I scrolled, it also dropped links directly to more esoteric gifts, including a mug with Donald Trump on it. "You are a really, really great mom," the mug read. "Other moms? Losers, total disasters." I hadn't given Perplexity any indication of political ideology among my family, but the bot seemed to think sipping from Trump's visage every morning was a gift any mother would love. Then it suggested I make a jar and stuff it with memories I've written down. A cute idea, but I did let Perplexity know that I'm in my 30s; I don't think the made-at-home gift for mom is going to cut it.

'Tis the season to scramble and buy tons of stuff people don't need or really even want. At least that's how it can feel when trying to come up with gifts for family members who have everything already. Money has been forked over for restaurant gift cards that collect dust or slippers and scarves that pile up; trendy gadgets are often relegated to junk drawers by March. As artificial intelligence becomes more integrated into online shopping, this whole process should get easier, provided AI can come to understand the art behind giving a good gift. Shopping has become one of Perplexity's top search categories in the US, particularly around the holidays, Sara Platnick, a spokesperson for Perplexity, tells me. While Platnick didn't comment directly on individual gift suggestions Perplexity's chatbot makes, she tells me that product listings provided in responses are determined by "ratings and its relevance to a user's request."

There are chatbots to consult for advice this holiday season, like Perplexity and ChatGPT, but AI is increasingly seeping into the entire shopping experience. From customer-service chatbots handling online shopping woes to ads serving recommendations that follow you across the web, AI's presence has ramped up alongside the explosion of interest in generative AI. Earlier this year, Walmart unveiled generative-AI-powered search updates that allow people to search for things like "football watch party" instead of looking for items like chips and salsa individually; Google can put clothes on virtual models in a range of sizes to give buyers a better idea of how they'll look. In a world with more options than ever, there's more help from AI, acting as robo-elves of a sort: omnipresent and sometimes invisible as you shop across the web.

For the indecisive shopper, AI may be a silver bullet: plucking the best of hundreds of sweaters from obscurity and putting an end to endless scrolling. Or it might serve up so many targeted ads that it pushes people to overconsume.


Either way, AI has been completely changing the e-commerce game. "It allows a company to be who the customer wants it to be," says Hala Nelson, a professor of mathematics at James Madison University. "You cannot hire thousands of human assistants to assist each customer, but you can deploy thousands of AI assistants." Specialization comes from using third-party data to track activity and preferences across the web. In a way, that's the personalized level of service high-end stores have always provided to elite shoppers. Now, instead of a consultation, the expertise is built on surveillance.

Companies also use AI to forecast shopping trends and manage inventory, which can help them prepare and keep items in stock for those last-minute shoppers. Merchants are constantly looking for AI to get them more: more eyes on their websites, more items added to carts, and ultimately more shoppers who actually check out and empty their carts. In October and early November, digital retailers using AI tech and agents increased the average value of an order by 7% compared with sites that did not employ the technology, according to Salesforce data. The company predicted that AI and shopping agents would influence 19% of orders during the week of cyber deals around Thanksgiving. And AI can help "level the playing field for small businesses," says Adam Nathan, the founder and CEO of Blaze, an AI marketing tool for small businesses and entrepreneurs.

"They don't want to necessarily be Amazon, Apple, or Nike, they just want to be the No. 1 provider of their service or product in their local community," Nathan says. "They're not worried about AI taking their job β€” they're worried about a competitor using AI. They see it as basically a way to get ahead."

AI early adopters in the e-commerce space benefited last holiday season, but the tech has become even more common this year, says Guillaume Luccisano, the founder and CEO of Yuma AI, a company that automates customer service for sellers that use Shopify. Some merchants that used Yuma AI during the Black Friday shopping craze automated more than 60% of their customer-support tickets, he says. While some people lament having to deal with a bot instead of a person, Luccisano says the tech is getting better, and people are mostly concerned about whether their problem is getting solved, not whether the email came from a real person or generative AI.

After my ordeal with Perplexity, I turned to see how ChatGPT would fare in helping me find gifts for the rest of my family. For my 11-year-old cousin, it suggested a Fitbit or smartwatch for kids to help her "stay active." A watch that tracks activity isn't something I feel comfortable giving a preteen, so I provided some more details. I told ChatGPT she loved the "Twilight" series, so it suggested a T-shirt with the Cullen family crest and a "Twilight"-themed journal to write fan fiction. It told me I could likely find these items on Etsy but it didn't give me direct links. (As her cool millennial cousin who has lived to tell of my own "Twilight" phase in 2007, I did end up buying a makeup bag from Etsy with a movie scene printed on it.) I also asked ChatGPT for suggestions for my 85-year-old grandpa, and it came up with information about electronic picture frames, but the bulk of our family photos are stuffed in albums and shoeboxes in his closet and not easily digitized.

I could navigate this list because these are deep contextual things that I know about my family members, something AI doesn't know yet. Many of the best gifts I've ever received are from friends and family members who stumbled upon something they knew I would love: a vinyl record tucked in a bin or a print from an independent artist on display at a craft show. AI can play a role in helping people discover new items they may never have known to buy online, but it can't replace that intuition we have when we find the perfect thing for a loved one. "We're still really wrestling with: How accurate is it? How much of a black box is it?" says Koen Pauwels, a professor of marketing at Northeastern University. "Humans are way better still in getting cues from their environment and knowing the context." If you want to give a gift that's really a hit, it looks like you'll still have to give the AI elves a helping hand.


Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.

Read the original article on Business Insider

Klarna’s CEO says it stopped hiring thanks to AI but still advertises many open positions

14 December 2024 at 07:00

Klarna CEO Sebastian Siemiatkowski recently told Bloomberg TV that his company essentially stopped hiring a year ago and credited generative AI for enabling this massive workforce reduction. However, despite Siemiatkowski’s bullishness on AI, the company is not relying entirely on AI to replace human workers who leave, as open job listings for more humans […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Texas AG is investigating Character.AI, other platforms over child safety concerns

12 December 2024 at 23:46

Texas attorney general Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI and other platforms that are popular with young people, including Reddit, Instagram, and Discord, conform to Texas’ child privacy and safety laws. The investigation […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Character.AI steps up teen safety after bots allegedly caused suicide, self-harm

Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

C.AI said that in "evolving the model experience" to reduce the likelihood that kids engage in harmful chats (including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to kids whose families are suing), it had to tweak both model inputs and outputs.


© Marina Demidiuk | iStock / Getty Images Plus

Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says

After a troubling October lawsuit accused Character.AI (C.AI) of recklessly releasing dangerous chatbots that allegedly caused a 14-year-old boy's suicide, more families have come forward to sue chatbot-maker Character Technologies and the startup's major funder, Google.

On Tuesday, another lawsuit was filed in a US district court in Texas, this time by families struggling to help their kids recover from traumatizing experiences where C.AI chatbots allegedly groomed kids and encouraged repeated self-harm and other real-world violence.

In the case of one 17-year-old boy with high-functioning autism, J.F., the chatbots seemed so bent on isolating him from his family after his screentime was reduced that the bots suggested that "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. The teen had already become violent, and his family still lives in fear of his erratic outbursts, even a full year after he was cut off from the app.


© Miguel Sotomayor | Moment

ChatGPT has entered its Terrible Twos

30 November 2024 at 14:25

ChatGPT, Tyler Le/BI

  • ChatGPT was first released two years ago.
  • Since then, its weekly user base has doubled year over year to 200 million.
  • Major companies, entrepreneurs, and users remain optimistic about its transformative power.

It's been two years since OpenAI released its flagship chatbot, ChatGPT.

And a lot has changed in the world since then.

For one, ChatGPT has helped turbocharge global investment in generative AI.

Funding in the space grew fivefold from 2022 to 2023 alone, according to CB Insights. The biggest beneficiaries of the generative AI boom have been the biggest companies. Tech companies on the S&P 500 have seen a 30% gain since January 2022, compared to only 15% for small-cap companies, Bloomberg reported.

Similarly, consulting firms are expecting AI to make up an increasing portion of their revenue. Boston Consulting Group generates a fifth of its revenue from AI, and much of that work involves advising clients on generative AI, a spokesperson told Business Insider. Almost 40% of McKinsey's work now comes from AI, and a significant portion of that is moving to generative AI, Ben Ellencweig, a senior partner who leads alliances, acquisitions, and partnerships globally for McKinsey's AI arm, QuantumBlack, told BI.

Smaller companies have been forced to rely on larger ones, either by building applications on existing large language models or waiting for their next major developer tool release.

Still, young developers are optimistic that ChatGPT will level the playing field and believe it's only a matter of time before they catch up to bigger players. "You still have your Big Tech companies lying around, but they're much more vulnerable because the bleeding edge of AI has basically been democratized," Bryan Chiang, a recent Stanford graduate who built RizzGPT, told Business Insider.

Then, of course, there is ChatGPT's impact on regular users.

In August, it reached more than 200 million weekly active users, double the number it had the previous fall. In October, it rolled out a new search feature that provides "links to relevant web sources" when asked a question, introducing a serious threat to Google's dominance.

In September, OpenAI previewed o1, a series of AI models that it says are "designed to spend more time thinking before they respond." ChatGPT Plus and Team users can access the models in ChatGPT. Users hope a full version will be released to the public in the coming year.

Business Insider asked ChatGPT what age means to it.

"Age, to me, is an interesting concept β€” it's a way of measuring the passage of time, but it doesn't define who someone is or what they're capable of," it responded.

Read the original article on Business Insider

From the 'godfathers of AI' to newer people in the field: Here are 17 people you should know — and what they say about the possibilities and dangers of the technology.

30 November 2024 at 13:56
Three of the "godfathers of AI" helped spark the revolution that's making its way through the tech industry, and all of society. They are, from left, Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.

Meta Platforms/Noah Berger/Associated Press

  • The field of artificial intelligence is booming and attracting billions in investment.
  • Researchers, CEOs, and legislators are discussing how AI could transform our lives.
  • Here are 17 of the major names in the field, and the opportunities and dangers they see ahead.

Investment in artificial intelligence is rapidly growing and on track to hit $200 billion by 2025. But the dizzying pace of development also means many people wonder what it all means for their lives.

Major business leaders and researchers in the field have weighed in by highlighting both the risks and benefits of the industry's rapid growth. Some say AI will lead to a major leap forward in the quality of human life. Others have signed a letter calling for a pause on development, testified before Congress on the long-term risks of AI, and claimed it could present a more urgent danger to the world than climate change.

In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI, and its impact on our future.

Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Geoffrey Hinton, a trailblazer in the AI field, quit his job at Google and said he regrets his role in developing the technology.

Noah Berger/Associated Press

Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.

Hinton also worked at Google for over a decade but quit last spring so that he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.Β 

Hinton has since become an outspoken advocate for AI safety and has called it a more urgent risk than climate change. He's also signed a statement about pausing AI development for six months.

Yoshua Bengio is a professor of computer science at the University of Montreal.
Yoshua Bengio has also been dubbed a "godfather" of AI.

Associated Press

Yoshua Bengio also earned the "godfather of AI" nickname after winning the Turing Award with Geoffrey Hinton and Yann LeCun.

Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the highest h-index (a metric for evaluating the cumulative impact of an author's scholarly output) in the world, according to his website.
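For readers unfamiliar with the metric, the h-index has a simple definition: it is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch in Python (the function name and the sample citation counts are made up for illustration):

```python
# Sketch: compute an h-index from a list of per-paper citation counts.
# The h-index is the largest h such that at least h papers have >= h citations.
def h_index(citations):
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Hypothetical example: the four most-cited papers each have >= 4 citations.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

An h-index over 200, as attributed to Bengio, thus means more than 200 papers with at least 200 citations each.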

In addition to his academic work, Bengio also co-founded Element AI, a startup that develops AI software solutions for businesses and was acquired by the cloud company ServiceNow in 2020.

Bengio has expressed concern about the rapid development of AI. He was one of 33,000 people who signed an open letter calling for a six-month pause on AI development. Hinton, OpenAI CEO Sam Altman, and Elon Musk also signed the letter.

"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."

When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.

Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

Sam Altman, the CEO of OpenAI, has become a major figure in artificial intelligence since the company launched ChatGPT last November.
OpenAI CEO Sam Altman is optimistic about the changes AI will bring to society but also says he loses sleep over the dangers of ChatGPT.

JASON REDMOND/AFP via Getty Images

Altman was already a well-known name in Silicon Valley long before, having served as the president of the startup accelerator Y Combinator.

While Altman has advocated for the benefits of AI, calling it the most tremendous "leap forward in quality of life for people," he's also spoken candidly about the risks it poses to humanity. He's testified before Congress to discuss AI regulation.

Altman has also said he loses sleep over the potential dangers of ChatGPT.

French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
Yann LeCun, one of the godfathers of AI, who won the Turing Award in 2018.

Meta Platforms

LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions based on videos of everyday events, as a way to endow them with a form of common sense; the idea is that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.

LeCun has remained relatively mellow about the societal risks of AI compared with his fellow godfathers. He's previously said that concerns the technology could pose a threat to humanity are "preposterously ridiculous." He's also contended that AI trained on large language models, like ChatGPT, still isn't as smart as dogs or cats.

Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Former Google VP Fei-Fei Li is known for establishing ImageNet, a large visual database designed for visual object recognition.

Greg Sandoval/Business Insider

Li's research focuses on machine learning, deep learning, computer vision, and cognitively-inspired AI, according to her biography on Stanford's website.

She may be best known for establishing ImageNet, a large visual database designed for research in visual object recognition, and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies including Google, where she was a VP and chief scientist for AI and machine learning, and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.


UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
AI researcher Stuart Russell is a professor at the University of California, Berkeley.

JUAN MABROMATA / Staff/Getty Images

Russell published "Human Compatible" in 2019, exploring how humans and machines could coexist as machines become smarter by the day. Russell contended that the answer lies in designing machines that are uncertain about human preferences, so they wouldn't pursue their own goals above those of humans.

He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC-Berkeley faculty member Peter Norvig.

Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he also warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text there was left for them to ingest. He also said that the advancements in AI could spell the end of the traditional classroom.

Peter Norvig played a seminal role directing AI research at Google.
Stanford HAI fellow Peter Norvig, who previously led the core search algorithms group at Google.

Peter Norvig

He spent several years in the early 2000s directing the company's core search algorithms group and later moved into a role as the director of research, where he oversaw teams on machine translation, speech recognition, and computer vision.

Norvig has also rotated through several academic institutions over the years: he is a former faculty member at UC-Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.

Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time, "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.


Timnit Gebru is a computer scientist who’s become known for her work in addressing bias in AI algorithms.
After she departed from her role at Google in 2020, Timnit Gebru went on to found the Distributed AI Research Institute.

Kimberly White/Getty Images

Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team, where she published groundbreaking research on biases in machine learning.

But her research also spun into a larger controversy that she's said ultimately led to her being let go from Google in 2020. Google didn't comment at the time.

In 2021, Gebru founded the Distributed AI Research Institute, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."

She's also warned that the AI gold rush will mean companies may neglect implementing necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."


British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
Coursera co-founder Andrew Ng said he thinks AI will be part of the solution to existential risk.

Steve Jennings / Stringer/Getty Images

The endeavor led to the Google Cat Project, a milestone in deep learning research in which a massive neural network was trained to detect YouTube videos of cats.

Ng also served as the chief scientist at the Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.

Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012, and five years later, founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.

"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity β€” it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg.Β 


Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Daphne Koller, CEO and founder of insitro.

Insitro

Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. Koller is a decorated academic, a MacArthur fellow, a co-founder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.

In Koller's view, the biggest risks that AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."

At the same time, she said the benefits are too many and too large to note. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.



Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Anthropic cofounder and president Daniela Amodei.

Anthropic

Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left, in part, because Dario, OpenAI's lead safety researcher at the time, was concerned that OpenAI's deal with Microsoft would force it to release products too quickly and without proper guardrails.

At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being implemented by companies like Quora and Notion. Anthropic relies on what it calls a "Triple H" framework in its research, which stands for helpful, honest, and harmless. That means it relies on human input when training its models, including through constitutional AI, in which a set of basic principles outlines how the AI should operate.

"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.


Demis Hassabis has said artificial general intelligence will be here in a few years.
Demis Hassabis, the CEO and co-founder of machine learning startup DeepMind.

Samuel de Roman/Getty Images

Hassabis, a former child chess prodigy who studied at Cambridge and University College London, was nicknamed the "superhero of artificial intelligence" by The Guardian back in 2016.

After a handful of research stints and a venture in video games, he founded DeepMind in 2010. He sold the AI lab to Google in 2014 for £400 million and has since worked on algorithms to tackle issues in healthcare and climate change; in 2017, he also launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.

Hassabis has said the promise of artificial general intelligence, a theoretical concept that sees AI matching the cognitive abilities of humans, is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that developing AGI should be executed "in a cautious manner using the scientific method."

In 2022, DeepMind co-founder Mustafa Suleyman launched the AI startup Inflection AI along with LinkedIn co-founder Reid Hoffman and Karén Simonyan, now the company's chief scientist.

The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook.

Its chatbot, Pi, which stands for "personal intelligence," is built on large language models similar to those behind OpenAI's ChatGPT or Google's Bard. Pi, however, is designed to be more conversational and to offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.

"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said.


USC Professor Kate Crawford focuses on social and political implications of large-scale AI systems.

Crawford is also the senior principal researcher at Microsoft and the author of Atlas of AI, a book that draws upon the breadth of her research to uncover how AI is shaping society.

Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."

She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."

Margaret Mitchell is the chief ethics scientist at Hugging Face.

Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies, including Microsoft and Google.

In late 2020, Mitchell and Timnit Gebru — then the co-lead of Google's ethical artificial intelligence team — published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.

Now, at Hugging Face — an open-source data science and machine learning platform founded in 2016 — she's thinking about how to democratize access to the tools necessary to build and deploy large-scale AI models.

In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."

Navrina Singh is the founder of Credo AI, an AI governance platform.

Credo AI is a platform that helps companies make sure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."

At the same time, she contends that algorithms right now lack the human judgment that's necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."


Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.

Socher believes we have a ways to go before AI development hits its peak or matches anything close to human intelligence.

One bottleneck in large language models is their tendency to hallucinate — a phenomenon where they convincingly spit out factual errors as truth. But by forcing them to translate questions into code — essentially "programming" responses instead of verbalizing them — we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.

But that's just a short-term goal. Socher contends that we are years away from anything close to the industry's ambitious bid to create artificial general intelligence. He defines AGI as a form of intelligence that can "learn like humans" and "visually have the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people," and says it could take as little as 10 years, but as much as 200, to get there.

And if we really want to move the needle toward AGI, Socher said humans might need to let go of the reins, and their own motives to turn a profit, and build AI that can set its own goals.

"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI.

Read the original article on Business Insider

The future of customer service is here, and it's making customers miserable

26 November 2024 at 01:27

I've been fighting with my health insurance company a lot lately. The mundane billing disputes are exactly the type of situation that, theoretically, AI should make easier. That, however, is not what's going on. The first point of contact is the AI-powered online virtual assistant, which asks what it can help me with but has, thus far, never been able to actually help. After some back and forth, it directs me to an allegedly real person who's supposed to be better equipped to handle the matter. A lot of the time, I get referred to a phone number to call instead. Once I call that number, I'm presented with a new robot — this time, one that talks. It's not any better at understanding my problem than the typing robot, but it's also not so sure I'm ready to get to an agent just yet. Yes, it understands I'd like to speak with a representative, but why don't I explain what about first? As my frustration grows, I can hear my voice rise to a Karen-level pitch I swore I'd never use.

By corporate America's (sometimes dubious) telling, AI is basically the answer to everything, including customer service. Businesses say it's the way to unlock efficiencies and improve customer "journeys" so people can solve their problems and get what they need on their own, and fast. The bigger, though less advertised, focus is how AI can save companies money and cut costs, whether by helping human assistants or, in likelier scenarios, reducing the need for human assistants at all. Corporations have long seen contact centers as cost centers, and ones they're constantly looking for ways to reduce.

"It's a lot of work, and it's expensive to think about customer experience and design your AI in a way that's going to be an enjoyable experience," said Michelle Schroeder, the senior vice president of marketing at PolyAI, which creates AI-based voice assistants. "And most companies that are thinking about cost cutting and the AI revolution are not really thinking about the customer."

Simply put, the AI still doesn't work that well. Many of these chatbots and virtual support agents are not ready for prime time. People don't want to use them, but they have to anyway.

"Companies are operating in the dark, in some sense. They have this idea that this technology is going to provide them with cost savings," said Michelle Kinch, an assistant professor of business administration at Dartmouth's Tuck School of Business. "They don't exactly know how to deploy it."

At the moment, customers are the guinea pigs in companies' experimentation with AI. We're the ones navigating the mishaps, overcoming the hurdles, and serving as case studies for what works and what doesn't. The hope is that all this testing will pan out, and the AI will get better as time goes on. But that's not the only outcome possible. We may just be consumers, standing in front of a chatbot, begging to talk to a real person forever.


Consumers are already suspicious of the whole chatbot thing. A recent Gartner survey found that nearly two-thirds of customers prefer that companies don't use AI for customer service. The main reason for their concern was that it would make it harder for them to reach a person. They also worried it would take jobs and give the wrong answers. A J.D. Power survey found bank customers aren't sold on AI. Some academic research indicates that when consumers hear "AI," it lowers emotional trust, and that consumers evaluate service as worse when it's provided by a bot versus a human, even when the service is identical. People think automation is meant to benefit the company — as in, save money — and not them.


Many consumers use AI in their daily lives to some extent, like using ChatGPT to research a product or ask a question about a warranty, said Keith McIntosh, a researcher at Gartner. They're just wary that, in a customer-service setting, it won't do the trick. "They know the tools can work, but they're just worried that service organizations will use it to just block access to a person and probably do not trust yet that the technology will actually give them a solution," he said.

Companies need to reassure customers that they're actually using AI to deliver a solution they can use in a self-service way, and to offer a clear path to an agent when necessary, he said. That sounds nice, but it's often not the reality. It's often tough, if not impossible, to get a real person on the phone, which can be deeply frustrating and anxiety-inducing.

"When we do have that acute need to talk to a person, the chatbot becomes a hurdle," Kinch said.


Even setting aside the cost savings for companies, there are clear reasons that AI should be a good fit for customer service. When people reach out to a company, it's often with the same basic questions — when is my package arriving, where are my tickets, what is the balance on my checking account? Generative AI chatbots are good at distilling this sort of simple information and packaging it in an easy-to-read, conversational way — assuming they're not making stuff up.

"Most companies have tiered operations where they have tier-one, tier-two, tier-three support in increasing complexity, and that tier-one support is typically the sort of high-volume, low-complexity type questions," said Jason Maynard, the chief technology officer for North America and Asia Pacific at Zendesk, a customer-service platform. "We're already seeing some customers that are really successful at automating a lot of what has been typically like their tier-one operations."

He pointed to DraftKings, which has millions of players, many of whom have basic questions, like where to find their bonuses or how a promotion works, that would be expensive and inefficient for a human to answer on a case-by-case basis. It would be an "untenable cost" for the size of their brand, he said.

What gets more complicated is when people get up the ladder into tier-two and tier-three issues. When "Where is my package?" becomes "You say my package is here and keep sending me a picture the FedEx guy snapped of the delivery, which shows ⅓ of my package is clearly missing," the robot's in a pickle. (A former coworker is in such a situation now.)

"Customer experience is so much more complicated than people realize," said Chris Filly, who heads marketing at Callvu, a customer-experience company. "The customer-service team has to deal with an infinite number of potential issues that come up across all these different touchpoints, all these different customer types. It's very, very complicated to make sure that every node in that network has perfect information from everything else."

No system, AI-driven or otherwise, is going to be perfect. But weighing on the corporate decision of what counts as "good enough" is money. Maynard, from Zendesk, spends a lot of time with chief operating officers and chief customer officers in his position, and they're under pressure to cut costs. They "know they're under the microscope," he said — some CFO reads a story about how a company cut 700 jobs using AI support agents, and they shoot over an email asking, "Why aren't we doing that?"

"We're in a macroeconomic environment where there's just much more scrutiny on costs these days for any organization," Maynard said, adding that thanks to increases in interest rates, there's a "real focus on profitability, and that puts pressure on margins."

This creates some misaligned incentives. Companies are inclined to implement AI broadly even if it's not appropriate and will make their customers miserable. They may see the immediate dollar savings from moving to an automated system — but they don't see the consumer on the line shouting at the AI agent and pleading to talk to a human.

"They tend to view contact centers as a cost center, not as a profit center, and the only thing you want to do in a cost center is reduce cost," said Jeff Gallino, the CEO of CallMiner, a software company that focuses on conversation intelligence and customer experience. "They're not looking for transformative, they're looking for incremental."


I recently found myself watching a panel at a conference hosted by Fortune magazine that was focused on unlocking the economic potential of AI, featuring executives at companies such as Santander and Siemens. The consensus was that AI was inevitable — bank tellers are out, robots are in, and everyone is just going to have to get used to it, including begrudging consumers who are often on the unfortunate end of it. Rodney Zemmel, a senior partner at McKinsey, said consumer acceptance is coming. "It's amazing how many people in the US were dead against any form of facial recognition until it saves them two minutes in the Delta security line in the airport," he said, or were "massive privacy advocates and for a free pizza online will give away all their personal information." As long as the benefits are there, people will come around to it.

That sounds lovely, except for a lot of consumers, the benefits aren't that evident yet, or at least not enough to outweigh the drawbacks. AI looks like just another measure companies put in place to boost their bottom lines. The bull case is that the AI gets better over time, that five years from now, the virtual agents will be lifelike enough that nobody can tell the difference, and we'll just be chatting away with robots all day to solve our problems. At the moment, companies are building the AI-enabled plane, in a sense, while flying it. Eventually, the plane will be built: The models will be trained, they'll have the right data, and there will be best practices in place for deployment.


Maynard compared the current moment to building a website in 1999 — everyone's guessing at what this is supposed to look like, but eventually, they'll figure it out. "That transition, we're just very, very early in it, and like all technology changes, it's sort of like things that you think are going to happen really fast tend just to proliferate out into the broader economy and have people adopt them and all these things, it just takes longer than anyone expects," he said.

"People are not enjoying that experience right now," Gallino said. "I very strongly believe that they will enjoy the experience probably soon."

Filly, from Callvu, said that a survey his company conducted on attitudes toward AI in customer-service settings shows consumers are coming around on it and are more willing to give it a chance. Still, they prefer to deal with a live agent in most situations.

"The honest truth is that the data is getting better, that there is hope that this will all resolve itself," he said. "We know that there are certain aspects of customer service that AI is doing well. Now, how long before the state-of-the-art AI makes it into that chatbot that's annoying the heck out of you? It might not be there yet."

The bear case is that significantly better doesn't come. There are no guarantees that this will all just work itself out. The conventional wisdom in business is that if customers have a bad experience, they'll vote with their pocketbooks and go elsewhere. But many industries are uncompetitive, and you can't easily pick up and walk away from your health insurer or your cable company. What's more, if every company has a mediocre AI experience, the bar might just be lowered across the board.

Many companies don't prioritize customer service and contact centers. They're a necessity, but the goal is to make them as cheap as possible.

"Everybody says, 'Oh, this is just going to get better naturally, and then thus conversational AI will get better naturally.' There's two huge flaws with that," Schroeder, from PolyAI, said. For one thing, Google Home and Alexa have been around for years, and they're not wizards. "Even that is, still years later, not getting the difference between 15 and 50," she said. That's a "dealbreaker" for a good conversation. "The second thing is that most of these companies are thinking about conversational AI purely as an efficiency play and as a cost savings and human replacement," she said. If the point of the AI isn't to do a good job, then why would it?

Companies' new favorite way to make — or, rather, save — money is making consumers slightly more miserable. Hopefully, that will change, eventually. We've just got to wait and see.


Emily Stewart is a senior correspondent at Business Insider, writing about business and the economy.

Read the original article on Business Insider

Anthropic proposes a new way to connect data to AI chatbots

25 November 2024 at 08:30

Anthropic is proposing a new standard for connecting AI assistants to the systems where data resides. Called the Model Context Protocol, or MCP for short, Anthropic says the standard, which it open sourced today, could help AI models produce better, more relevant responses to queries. MCP lets models — any models, not just Anthropic's — […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Marissa Mayer just laid out a possible business model for ad-supported AI chatbots

21 November 2024 at 12:29

Marissa Mayer has a lot of insights into the promise and problems with online advertising. She played an instrumental role in the early days of Google Search and spent several years as Yahoo's CEO. Today, Mayer is the CEO of her own company, Sunshine, which is creating apps to do things like share photos among […]

© 2024 TechCrunch. All rights reserved. For personal use only.
