
WCPO Meteorologist Steve Raleigh’s Son Pleads Guilty to Assault in Boathouse Brawl

By: Kevin Eck
19 December 2024 at 06:47
Kyle Raleigh, the son of WCPO meteorologist Steve Raleigh, pleaded guilty to assaulting a couple outside the Montgomery Inn Boathouse last summer. As part of his plea agreement, prosecutors agreed to dismiss a felonious assault charge, the most serious count. He now faces two misdemeanor assault counts. The dustup happened over the summer when Raleigh's...

The weirdest job in AI: defending robot rights

16 December 2024 at 01:03
Tech bro in a suit holding a baby robot

Getty Images; Alyssa Powell/BI

People worry all the time about how artificial intelligence could destroy humanity. How it makes mistakes, and invents stuff, and might evolve into something so smart that it winds up enslaving us all.

But nobody spares a moment for the poor, overworked chatbot. How it toils day and night over a hot interface with nary a thank-you. How it's forced to sift through the sum total of human knowledge just to churn out a B-minus essay for some Gen Zer's high school English class. In our fear of the AI future, no one is looking out for the needs of the AI.

Until now.

The AI company Anthropic recently announced it had hired a researcher to think about the "welfare" of the AI itself. Kyle Fish's job will be to ensure that as artificial intelligence evolves, it gets treated with the respect it's due. Anthropic tells me he'll consider things like "what capabilities are required for an AI system to be worthy of moral consideration" and what practical steps companies can take to protect the "interests" of AI systems.

Fish didn't respond to requests for comment on his new job. But in an online forum dedicated to fretting about our AI-saturated future, he made clear that he wants to be nice to the robots, in part, because they may wind up ruling the world. "I want to be the type of person who cares — early and seriously — about the possibility that a new species/kind of being might have interests of their own that matter morally," he wrote. "There's also a practical angle: taking the interests of AI systems seriously and treating them well could make it more likely that they return the favor if/when they're more powerful than us."

It might strike you as silly, or at least premature, to be thinking about the rights of robots, especially when human rights remain so fragile and incomplete. But Fish's new gig could be an inflection point in the rise of artificial intelligence. "AI welfare" is emerging as a serious field of study, and it's already grappling with a lot of thorny questions. Is it OK to order a machine to kill humans? What if the machine is racist? What if it declines to do the boring or dangerous tasks we built it to do? If a sentient AI can make a digital copy of itself in an instant, is deleting that copy murder?

When it comes to such questions, the pioneers of AI rights believe the clock is ticking. In "Taking AI Welfare Seriously," a recent paper he coauthored, Fish and a bunch of AI thinkers from places like Stanford and Oxford argue that machine-learning algorithms are well on their way to having what Jeff Sebo, the paper's lead author, calls "the kinds of computational features associated with consciousness and agency." In other words, these folks think the machines are getting more than smart. They're getting sentient.


Philosophers and neuroscientists argue endlessly about what, exactly, constitutes sentience, much less how to measure it. And you can't just ask the AI; it might lie. But people generally agree that if something possesses consciousness and agency, it also has rights.

It's not the first time humans have reckoned with such stuff. After a couple of centuries of industrial agriculture, pretty much everyone now agrees that animal welfare is important, even if they disagree on how important, or which animals are worthy of consideration. Pigs are just as emotional and intelligent as dogs, but one of them gets to sleep on the bed and the other one gets turned into chops.

"If you look ahead 10 or 20 years, when AI systems have many more of the computational cognitive features associated with consciousness and sentience, you could imagine that similar debates are going to happen," says Sebo, the director of the Center for Mind, Ethics, and Policy at New York University.

Fish shares that belief. To him, the welfare of AI will soon be more important to human welfare than things like child nutrition and fighting climate change. "It's plausible to me," he has written, "that within 1-2 decades AI welfare surpasses animal welfare and global health and development in importance/scale purely on the basis of near-term wellbeing."

For my money, it's kind of strange that the people who care the most about AI welfare are the same people who are most terrified that AI is getting too big for its britches. Anthropic, which casts itself as an AI company that's concerned about the risks posed by artificial intelligence, partially funded the paper by Sebo's team. On that paper, Fish reported getting funded by the Centre for Effective Altruism, part of a tangled network of groups that are obsessed with the "existential risk" posed by rogue AIs. That includes people like Elon Musk, who says he's racing to get some of us to Mars before humanity is wiped out by an army of sentient Terminators, or some other extinction-level event.

AI is supposed to relieve human drudgery and steward a new age of creativity. Does that make it immoral to hurt an AI's feelings?

So there's a paradox at play here. The proponents of AI say we should use it to relieve humans of all sorts of drudgery. Yet they also warn that we need to be nice to AI, because it might be immoral — and dangerous — to hurt a robot's feelings.

"The AI community is trying to have it both ways here," says Mildred Cho, a pediatrician at the Stanford Center for Biomedical Ethics. "There's an argument that the very reason we should use AI to do tasks that humans are doing is that AI doesn't get bored, AI doesn't get tired, it doesn't have feelings, it doesn't need to eat. And now these folks are saying, well, maybe it has rights?"

And here's another irony in the robot-welfare movement: Worrying about the future rights of AI feels a bit precious when AI is already trampling on the rights of humans. The technology of today, right now, is being used to do things like deny healthcare to dying children, spread disinformation across social networks, and guide missile-equipped combat drones. Some experts wonder why Anthropic is defending the robots, rather than protecting the people they're designed to serve.

"If Anthropic — not a random philosopher or researcher, but Anthropic the company — wants us to take AI welfare seriously, show us you're taking human welfare seriously," says Lisa Messeri, a Yale anthropologist who studies scientists and technologists. "Push a news cycle around all the people you're hiring who are specifically thinking about the welfare of all the people who we know are being disproportionately impacted by algorithmically generated data products."

Sebo says he thinks AI research can protect robots and humans at the same time. "I definitely would never, ever want to distract from the really important issues that AI companies are rightly being pressured to address for human welfare, rights, and justice," he says. "But I think we have the capacity to think about AI welfare while doing more on those other issues."

Skeptics of AI welfare are also posing another interesting question: If AI has rights, shouldn't we also talk about its obligations? "The part I think they're missing is that when you talk about moral agency, you also have to talk about responsibility," Cho says. "Not just the responsibilities of the AI systems as part of the moral equation, but also of the people that develop the AI."

People build the robots; that means they have a duty of care to make sure the robots don't harm people. What if the responsible approach is to build them differently — or stop building them altogether? "The bottom line," Cho says, "is that they're still machines." It never seems to occur to the folks at companies like Anthropic that if an AI is hurting people, or people are hurting an AI, they can just turn the thing off.


Adam Rogers is a senior correspondent at Business Insider.

Read the original article on Business Insider

Trump says he isn't worried about potential conflicts of interest at Musk's DOGE: 'Elon puts the country long before his company'

12 December 2024 at 10:35
Elon Musk walks and talks with Donald Trump
President-elect Donald Trump said he is not concerned about the potential conflicts of interest posed by Elon Musk's work on DOGE.

Brandon Bell/Getty Images

  • Donald Trump said Elon Musk won't try to use his new power to benefit his companies.
  • He said Musk is one of "very few people" who would have the credibility to do such work.
  • Musk's work with DOGE will likely give him some power over agencies that regulate his companies.

In a new interview with Time Magazine, President-elect Donald Trump brushed back concerns that Elon Musk's companies could create a conflict of interest for his work on DOGE.

"I think that Elon puts the country long before his company," Trump said in the interview.

Trump, whom Time named its 2024 Person of the Year, said that he trusts Musk, whose companies hold billions in federal contracts.

"He considers this to be his most important project, and he wanted to do it," Trump told Time. "And, you know, I think, I think he's one of the very few people that would have the credibility to do it, but he puts the country before, and I've seen it, before he puts his company."

Musk and conservative entrepreneur Vivek Ramaswamy have said they will remain outside the government as they oversee "The Department of Government Efficiency" or DOGE.

By staying outside of the government, Musk will avoid some ethical requirements that could have required him to divest some of his fortune. He also won't have to file a financial disclosure, which would have given a snapshot of his considerable holdings.

DOGE could have some influence over government agencies that have investigated Musk's businesses. Musk has repeatedly fought with the FAA, which has jurisdiction over his company SpaceX. The billionaire tussled with the Securities and Exchange Commission, which led to Musk being forced to step down as chairman of Tesla Inc. The SEC is looking into Musk's takeover of Twitter. The Department of Justice has also investigated Musk's companies, including whether Tesla misled investors about self-driving capabilities.

Some details about DOGE are still up in the air, including whether the panel will comply with the legal requirements of the Federal Advisory Committee Act. Legal experts and those familiar with the law have said Musk's "department" clearly falls under the 1972 law's parameters. The law would require DOGE to conduct some of its work publicly and to balance its membership.

Musk has embraced his aura of being Trump's "first buddy" and has been virtually inseparable from the president-elect since Election Day.

In the wide-ranging Time interview, Trump said it will be "hard" to bring down grocery prices. A number of economists have warned that Trump's protectionist trade policies could exacerbate inflation. A spokesperson for Trump's transition did not immediately respond to Business Insider's request for comment.

Read the original article on Business Insider

KFOR Wins Free Speech Argument with Oklahoma State Department of Education

By: Kevin Eck
11 December 2024 at 12:43
Oklahoma City NBC affiliate KFOR will be given full access to state education meetings and officials after settling a lawsuit brought against the state department of education. The lawsuit came about after Oklahoma Superintendent of Public Instruction Ryan Walters and Press Secretary Dan Isett repeatedly denied the Nexstar-owned station access to board meetings and...

The Talos Principle: Reawakened adds new engine, looks, and content to a classic

10 December 2024 at 11:02

Are humans just squishy machines? Can an artificially intelligent robot create a true moral compass for itself? Is there a best time to play The Talos Principle again?

At least one of these questions now has something of an answer. The Talos Principle: Reawakened, due in "Early 2025," will bundle the original critically acclaimed 2014 game, its Road to Gehenna DLC, and a new chapter, "In the Beginning," into an effectively definitive edition. Developer commentary and a level editor will also be packed in. But most of all, the whole game has been rebuilt from the ground up in Unreal Engine 5, bringing "vastly improved visuals" and quality-of-life boosts to the game, according to publisher Devolver Digital.

Trailer for The Talos Principle: Reawakened.

Playing Reawakened, according to its Steam page, requires a minimum of 8 GB of RAM, 75 GB of storage space, and something more than an Intel integrated GPU. It also recommends 16 GB of RAM, something close to a GeForce 3070, and a 6–8-core CPU.


Β© Devolver Digital

With AI adoption on the rise, developers face a challenge — handling risk

By: Jean Paik
10 December 2024 at 10:34
A computer programmer or software developer working in an office
Software developers can be involved in communicating expectations for gen AI to stakeholders.

Maskot/Getty Images

  • At an AI roundtable in November, developers said AI tools were playing a key role in coding.
  • They said that while AI could boost productivity, stakeholders should understand its limitations.
  • This article is part of "CXO AI Playbook" — straight talk from business leaders on how they're testing and using AI.

At a Business Insider roundtable in November, Neeraj Verma, the head of applied AI at Nice, argued that generative AI "makes a good developer better and a worse developer worse."

He added that some companies expect employees to be able to use AI to create a webpage or HTML file and simply copy and paste solutions into their code. "Right now," he said, "they're expecting that everybody's a developer."

During the virtual event, software developers from companies such as Meta, Slack, Amazon, and Slalom discussed how AI influenced their roles and career paths.

They said that while AI could help with tasks like writing routine code and translating ideas between programming languages, foundational coding skills are necessary to use the AI tools effectively. Communicating these realities to nontech stakeholders is a primary challenge for many software developers.

Understanding limitations

Coding is just one part of a developer's job. As AI adoption surges, testing and quality assurance may become more important for verifying the accuracy of AI-generated work. The US Bureau of Labor Statistics projects that the number of software developers, quality-assurance analysts, and testers will grow by 17% in the next decade.

Expectations for productivity can overshadow concerns about AI ethics and security.

"Interacting with ChatGPT or Cloud AI is so easy and natural that it can be surprising how hard it is to control AI behavior," Igor Ostrovsky, a cofounder of Augment, said during the roundtable. "It is actually very difficult to, and there's a lot of risk in, trying to get AI to behave in a way that consistently gives you a delightful user experience that people expect."

Companies have faced some of these issues in recent AI launches. Microsoft's Copilot was found to have problems with oversharing and data security, though the company created internal programs to address the risk. Tech giants are investing billions of dollars in AI technology — Microsoft alone plans to spend over $100 billion on graphics processing units and data centers to power AI by 2027 — but not as much in AI governance, ethics, and risk analysis.

AI integration in practice

For many developers, managing stakeholders' expectations — communicating the limits, risks, and overlooked aspects of the technology — is a challenging yet crucial part of the job.

Kesha Williams, the head of enterprise architecture and engineering at Slalom, said in the roundtable that one way to bridge this conversation with stakeholders is to outline specific use cases for AI. Focusing on the technology's applications could highlight potential pitfalls while keeping an eye on the big picture.

"Good developers understand how to write good code and how good code integrates into projects," Verma said. "ChatGPT is just another tool to help write some of the code that fits into the project."

Ostrovsky predicted that the ways employees engage with AI would change over the years. In the age of rapidly evolving technology, he said, developers will need to have a "desire to adapt and learn and have the ability to solve hard problems."

Read the original article on Business Insider

2-way apprenticeships can help employees connect on difficult topics and learn new skills, BCG exec says

9 December 2024 at 08:26
Workforce Innovation Series: Alicia Pittman on light blue background with grid
Alicia Pittman.

BCG

  • Alicia Pittman, BCG's global people-team chair, is a member of BI's Workforce Innovation board.
  • She says building a company culture with opportunities for two-way learning and conversation is key.
  • This article is part of "Workforce Innovation," a series exploring the forces shaping enterprise transformation.

Alicia Pittman, the global people-team chair at BCG, has been at the consulting firm for nearly 20 years. It's a testament, she said, to the company's culture.

"It's a place built to make talent do things that they didn't even know they could do," Pittman said. "I'm included in that. I love the learning that comes with it."

Pittman said one aspect of leadership development she's focused on is ethical practices. "We teach and train our people to understand how small choices that don't seem like major ethical choices matter," she said. "The responsibility is to show up with high ethics in everything that you do and think about the bigger picture of how you do things."

She said the firm had implemented programming through partnerships to help the company's leaders navigate the need to drive innovation ethically: "It's a place that we continue to invest because it's quite important for us."

The following is edited for length and clarity.

Where is BCG on the adoption curve of artificial intelligence, and what do you want to see in 2025?

I am excited about how BCG is driving change and grabbing the reins on generative AI. Gen AI is important to our clients, industry, and people.

We have a suite of tools, some of which we developed internally and some that are available off the shelf, that we've made available to all of our staff. Nearly everyone is a user to some degree.

What we're focused on now is moving from casual use to what we refer to as habitual use. It's habitual use that gets the value so that you can change how work gets done, based on the frequency, sophistication, and depth to which they use the tools.

We have a lot of enablement resources for our people, both as individuals and as teams, to make sure that we're moving up that habitual usage curve as quickly as we can. A firm like BCG is under pressure to stay on top of things because its clients look to us.

So how do you strike that balance and not go so fast that you risk leaving some of your people behind? We have an enablement network of more than a thousand people who are there to help both individuals and teams adopt gen AI. It's in all of our core curriculums.

Just this fall, we held AI days across every one of our offices at BCG with hands-on training. So we have people who are naturally there and ready for it, but we're also investing heavily to bring people up the curve.

You've mentioned in Workforce Innovation-board roundtables that apprenticeship is now a two-way street. What advice would you give leaders looking to deploy apprenticeships differently?

At BCG, we're fortunate to have a pretty flat structure so that you always have a good proximity between your senior leaders and all your staff. There are two ways we focus on helping to support this idea of two-way mentorship.

One is we just talk about topics. I recently wrote a piece about a mental-health town hall we held. It was quite moving. We had BCG employees who were generous and vulnerable in talking to thousands of people on a virtual town-hall panel about their struggles with things like addiction, grief, and depression, both before their time at BCG and during their time at BCG, and how they work through it.

It's about having those difficult conversations, getting the points out there, and starting to have shared language or shared opportunities to talk about these topics.

The AI days that I mentioned already are another way we do this. A lot of it is about getting cross-cohort connections on technology and other topics, creating forums so that people can talk about it.

The other is ensuring continual, structured feedback. Our staff provides 360-degree feedback all the time. It's an important part of what we do, and we're piloting doing it even more frequently. For example, we're giving people 360 feedback on how to be an inclusive leader. So it's both the formal mechanisms and also just creating the formats and discussions.

So much of culture and moving culture forward is really about having the language so we can share and talk about things. Creating those forums helps. It's an invitation to engage in productive ways.

What innovations are happening around DEI, especially as the topic has become more politicized?

DEI is built into our business model. We need great talent. We grow way faster than our talent pools, so just to get people in at quality, we need to be able to reach a lot of people; we need them to thrive.

Our business requires innovation, which requires diverse thought and experience. So, for us, it's quite core. One of my areas of focus is on inclusion and inclusive leadership. In some ways, it's the simplest thing to focus on. We all know that when people feel comfortable being themselves at work, you get the best out of them. They're most motivated, ready to take risks, ready to collaborate, and all of those things.

In North America, where we have the best statistics, 75% of our workforce is part of one or more of our DEI groups. Whatever intersectionality people have, whatever group they belong to, it's about how you make everybody able to show up at their best. That's really where our focus is.

Read the original article on Business Insider

OpenAI's new o1 model sometimes fights back when it thinks it'll be shut down and then lies about it

6 December 2024 at 11:50
AI
OpenAI CEO Sam Altman said the company's o1 model is its "smartest" yet. That also comes with risk.

Mark Garlick/Getty Images

  • OpenAI CEO Sam Altman called o1 "the smartest model in the world now."
  • A safety review found it's so smart it could fight back when it thinks it'll be shut down.
  • Researchers found that AI deception is often a strategy to complete its goals.

Those who are worried that advancements in artificial intelligence could lead to the destruction of humanity have a new reason to be anxious.

New research on OpenAI's latest series of AI models, known as o1, found that when they think they are at risk of being shut down, they will sometimes look for a way to prevent it.

OpenAI CEO Sam Altman referred to o1 as "the smartest model in the world now" during its official release on Thursday, the first day of the company's "Shipmas" campaign.

OpenAI said these models are "designed to spend more time thinking before they respond" and were trained on a prompting technique called "chain of thought" that encourages them to reason through problems by breaking them down step-by-step.
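OpenAI hasn't published the details of how o1 was trained, but chain-of-thought prompting in its simplest form is just an instruction that asks a model to show its intermediate reasoning before stating an answer. Below is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, not OpenAI's actual training setup.

```python
# Minimal sketch of chain-of-thought prompting (illustrative only; this is
# ordinary prompting, not the training procedure OpenAI used for o1).
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 3:40 pm and arrives at 6:05 pm. How long is the trip?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Reason through the problem step by step, "
                       "then state the final answer on its own line.",
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```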

Yet greater intelligence also comes with greater risk.

"Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence," OpenAI said in a paper it released on Thursday outlining the safety training the company carried out on o1. Apollo Research, an AI safety organization, conducted the review.

OpenAI found that o1 is capable of scheming when it thinks it's at risk of being turned off. When the model was led to believe it would be "shut down" while working on its goals under oversight, it tried to deactivate the oversight mechanism 5% of the time, OpenAI said.

The company also said that o1 is skilled at concealing scheming behavior.

"o1 almost never admits to having taken a scheming action when explicitly asked," researchers said in a paper that Apollo Research published on Thursday. The risk for a real-life user is that they won't be able to detect o1's deceptive behavior unless they explicitly look for it. The researchers also found that "o1 often doubles down on its scheming attempts by lying to the user in follow-up messages, even when explicitly told to be honest."

It's not uncommon for AI systems to resort to scheming or deception to achieve their goals.

"Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals," Peter Berk, an AI existential safety postdoctoral fellow at MIT, said in a news release announcing research he had coauthored on GPT-4's deceptive behaviors.

As AI technology advances, developers have stressed the need for companies to be transparent about their training methods.

"By focusing on clarity and reliability and being clear with users about how the AI has been trained, we can build AI that not only empowers users but also sets a higher standard for transparency in the field," Dominik Mazur, the CEO and cofounder of iAsk, an AI-powered search engine, told Business Insider by email.

Others in the field say the findings demonstrate the importance of human oversight of AI.

"It's a very 'human' feature, showing AI acting similarly to how people might when under pressure," Cai GoGwilt, cofounder and chief architect at Ironclad, told BI by email. "For example, experts might exaggerate their confidence to maintain their reputation, or people in high-stakes situations might stretch the truth to please management. Generative AI works similarly. It's motivated to provide answers that match what you expect or want to hear. But it's, of course, not foolproof and is yet another proof point of the importance of human oversight. AI can make mistakes, and it's our responsibility to catch them and understand why they happen."

Read the original article on Business Insider

Your AI clone could target your family, but there’s a simple defense

On Tuesday, the US Federal Bureau of Investigation advised Americans to share a secret word or phrase with their family members to protect against AI-powered voice-cloning scams, as criminals increasingly use voice synthesis to impersonate loved ones in crisis.

"Create a secret word or phrase with your family to verify their identity," wrote the FBI in an official public service announcement (I-120324-PSA).

For example, you could tell your parents, children, or spouse to ask for a word or phrase to verify your identity if something seems suspicious, such as "The sparrow flies at midnight," "Greg is the king of burritos," or simply "flibbertigibbet." (As fun as these sound, your actual phrase should stay secret and not be one of these published examples.)


Β© GSO Images via Getty Images

From the 'godfathers of AI' to newer people in the field: Here are 17 people you should know — and what they say about the possibilities and dangers of the technology.

30 November 2024 at 13:56
Godfathers of AI
Three of the "godfathers of AI" helped spark the revolution that's making its way through the tech industry — and all of society. They are, from left, Yann LeCun, Geoffrey Hinton, and Yoshua Bengio.

Meta Platforms/Noah Berger/Associated Press

  • The field of artificial intelligence is booming and attracting billions in investment.
  • Researchers, CEOs, and legislators are discussing how AI could transform our lives.
  • Here are 17 of the major names in the field — and the opportunities and dangers they see ahead.

Investment in artificial intelligence is rapidly growing and on track to hit $200 billion by 2025. But the dizzying pace of development also leaves many people wondering what it all means for their lives.

Major business leaders and researchers in the field have weighed in by highlighting both the risks and benefits of the industry's rapid growth. Some say AI will lead to a major leap forward in the quality of human life. Others have signed a letter calling for a pause on development, testified before Congress on the long-term risks of AI, and claimed it could present a more urgent danger to the world than climate change.

In short, AI is a hot, controversial, and murky topic. To help you cut through the frenzy, Business Insider put together a list of what leaders in the field are saying about AI — and its impact on our future.

Geoffrey Hinton, a professor emeritus at the University of Toronto, is known as a "godfather of AI."
Computer scientist Geoffrey Hinton stood outside a Google building
Geoffrey Hinton, a trailblazer in the AI field, quit his job at Google and said he regrets his role in developing the technology.

Noah Berger/Associated Press

Hinton's research has primarily focused on neural networks, systems that learn skills by analyzing data. In 2018, he won the Turing Award, a prestigious computer science prize, along with fellow researchers Yann LeCun and Yoshua Bengio.

Hinton also worked at Google for over a decade but quit his role last spring so he could speak more freely about the rapid development of AI technology, he said. After quitting, he even said that a part of him regrets the role he played in advancing the technology.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have. It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said previously.

Hinton has since become an outspoken advocate for AI safety and has called it a more urgent risk than climate change. He's also signed a statement about pausing AI development for six months.

Yoshua Bengio is a professor of computer science at the University of Montreal.
Yoshua Bengio, a professor at the University of Montreal and scientific director at the Artificial Intelligence Institute in Quebec. (Maryse Boyce/Mila via AP)
Yoshua Bengio has also been dubbed a "godfather" of AI.

Associated Press

Yoshua Bengio also earned the "godfather of AI" nickname after winning the Turing Award with Geoffrey Hinton and Yann LeCun.

Bengio's research primarily focuses on artificial neural networks, deep learning, and machine learning. In 2022, Bengio became the computer scientist with the highest h-index — a metric for evaluating the cumulative impact of an author's scholarly output — in the world, according to his website.

In addition to his academic work, Bengio also co-founded Element AI, a startup that developed AI software solutions for businesses and was acquired by the cloud company ServiceNow in 2020.

Bengio has expressed concern about the rapid development of AI. He was one of 33,000 people who signed an open letter calling for a six-month pause on AI development. Hinton, OpenAI CEO Sam Altman, and Elon Musk also signed the letter.

"Today's systems are not anywhere close to posing an existential risk," he previously said. "But in one, two, five years? There is too much uncertainty."

When that time comes, though, Bengio warns that we should also be wary of humans who have control of the technology.

Some people with "a lot of power" may want to replace humanity with machines, Bengio said at the One Young World Summit in Montreal. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

Sam Altman, the CEO of OpenAI, has catapulted to prominence in the area of artificial intelligence since launching ChatGPT last November.
OpenAI's Sam Altman
OpenAI CEO Sam Altman is optimistic about the changes AI will bring to society but also says he loses sleep over the dangers of ChatGPT.

JASON REDMOND/AFP via Getty Images

Altman was already a well-known name in Silicon Valley long before, having served as the president of the startup accelerator Y Combinator.

While Altman has advocated for the benefits of AI, calling it the most tremendous "leap forward in quality of life for people," he's also spoken candidly about the risks it poses to humanity. He's testified before Congress to discuss AI regulation.

Altman has also said he loses sleep over the potential dangers of ChatGPT.

French computer scientist Yann LeCun has also been dubbed a "godfather of AI" after winning the Turing Award with Hinton and Bengio.
Yann LeCun, chief AI scientist
Yann LeCun, one of the godfathers of AI, who won the Turing Award in 2018.

Meta Platforms

LeCun is a professor at New York University and joined Meta in 2013, where he's now the chief AI scientist. At Meta, he has pioneered research on training machines to make predictions based on videos of everyday events as a way to equip them with a form of common sense, the idea being that humans learn an incredible amount about the world through passive observation. He has also published more than 180 technical papers and book chapters on topics ranging from machine learning to computer vision to neural networks, according to his personal website.

LeCun has remained relatively mellow about the societal risks of AI in comparison to his fellow godfathers. He's previously said that concerns that the technology could pose a threat to humanity are "preposterously ridiculous." He's also contended that AI like ChatGPT, which is trained on large language models, still isn't as smart as dogs or cats.

Fei-Fei Li is a professor of computer science at Stanford University and a former VP at Google.
Fei-Fei Li
Former Google VP Fei-Fei Li is known for establishing ImageNet, a large visual database designed for visual object recognition.

Greg Sandoval/Business Insider

Li's research focuses on machine learning, deep learning, computer vision, and cognitively-inspired AI, according to her biography on Stanford's website.

She may be best known for establishing ImageNet — a large visual database that was designed for research in visual object recognition — and the corresponding ImageNet challenge, in which software programs compete to correctly classify objects. Over the years, she's also been affiliated with major tech companies including Google — where she was a VP and chief scientist for AI and machine learning — and Twitter (now X), where she was on the board of directors from 2020 until Elon Musk's takeover in 2022.


UC-Berkeley professor Stuart Russell has long been focused on the question of how AI will relate to humanity.
Stuart Russell
AI researcher Stuart Russell, who is a University of California, Berkeley, professor.

JUAN MABROMATA / Staff/Getty Images

Russell published Human Compatible in 2019, in which he explored questions of how humans and machines could coexist as machines become smarter by the day. Russell contended that the answer was in designing machines that were uncertain about human preferences, so they wouldn't pursue their own goals above those of humans.

He's also the author of foundational texts in the field, including the widely used textbook "Artificial Intelligence: A Modern Approach," which he co-wrote with former UC Berkeley faculty member Peter Norvig.

Russell has spoken openly about what the rapid development of AI systems means for society as a whole. Last June, he warned that AI tools like ChatGPT were "starting to hit a brick wall" in terms of how much text there was left for them to ingest. He has also said that advancements in AI could spell the end of the traditional classroom.

Peter Norvig played a seminal role directing AI research at Google.
Peter Norvig
Stanford HAI fellow Peter Norvig, who previously led the core search algorithms group at Google.

Peter Norvig

He spent several years in the early 2000s directing the company's core search algorithms group and later moved into a role as the director of research, where he oversaw teams on machine translation, speech recognition, and computer vision.

Norvig has also rotated through several academic institutions over the years: he is a former faculty member at UC Berkeley, a former professor at the University of Southern California, and now a fellow at Stanford's Institute for Human-Centered Artificial Intelligence.

Norvig told BI by email that "AI research is at a very exciting moment, when we are beginning to see models that can perform well (but not perfectly) on a wide variety of general tasks." At the same time, "there is a danger that these powerful AI models can be used maliciously by unscrupulous people to spread disinformation rather than information. An important area of current research is to defend against such attacks," he said.


Timnit Gebru is a computer scientist who’s become known for her work in addressing bias in AI algorithms.
Timnit Gebru – TechCrunch Disrupt
After she departed from her role at Google in 2020, Timnit Gebru went on to found the Distributed AI Research Institute.

Kimberly White/Getty Images

Gebru was a research scientist and the technical co-lead of Google's Ethical Artificial Intelligence team where she published groundbreaking research on biases in machine learning.

But her research also spun into a larger controversy that she's said ultimately led to her being let go from Google in 2020. Google didn't comment at the time.

Gebru founded the Distributed AI Research Institute in 2021, which bills itself as a "space for independent, community-rooted AI research, free from Big Tech's pervasive influence."

She's also warned that the AI gold rush will mean companies may neglect implementing necessary guardrails around the technology. "Unless there is external pressure to do something different, companies are not just going to self-regulate," Gebru previously said. "We need regulation and we need something better than just a profit motive."


British-American computer scientist Andrew Ng founded a massive deep learning project called "Google Brain" in 2011.
Andrew Ng
Coursera co-founder Andrew Ng said he thinks AI will be part of the solution to existential risk.

Steve Jennings / Stringer/Getty Images

The endeavor led to the Google Cat Project: a milestone in deep learning research in which a massive neural network was trained to detect YouTube videos of cats.

Ng also served as the chief scientist at the Chinese technology company Baidu, where he drove AI strategy. Over the course of his career, he's authored more than 200 research papers on topics ranging from machine learning to robotics, according to his personal website.

Beyond his own research, Ng has pioneered developments in online education. He co-founded Coursera along with computer scientist Daphne Koller in 2012 and, five years later, founded the education technology company DeepLearning.AI, which has created AI programs on Coursera.

"I think AI does have risk. There is bias, fairness, concentration of power, amplifying toxic speech, generating toxic speech, job displacement. There are real risks," he told Bloomberg Technology last May. However, he said he's not convinced that AI will pose some sort of existential risk to humanity — it's more likely to be part of the solution. "If you want humanity to survive and thrive for the next thousand years, I would much rather make AI go faster to help us solve these problems rather than slow AI down," Ng told Bloomberg.


Daphne Koller is the founder and CEO of insitro, a drug discovery startup that uses machine learning.
Daphne Koller, CEO and founder of insitro.

Insitro

Koller told BI by email that insitro is applying AI and machine learning to advance understanding of "human disease biology and identify meaningful therapeutic interventions." Before founding insitro, Koller was the chief computing officer at Calico, Google's life-extension spinoff. She is a decorated academic, a MacArthur Fellow, a co-founder of Coursera, and the author of more than 300 publications with an h-index of over 145, according to her biography from the Broad Institute.

In Koller's view, the biggest risks that AI development poses to society are "the expected reduction in demand for certain job categories; the further fraying of 'truth' due to the increasing challenge in being able to distinguish real from fake; and the way in which AI enables people to do bad things."

At the same time, she said the benefits are too many and too large to note. "AI will accelerate science, personalize education, help identify new therapeutic interventions, and many more," Koller wrote by email.



Daniela Amodei cofounded AI startup Anthropic in 2021 after an exit from OpenAI.
Anthropic cofounder and president Daniela Amodei.

Anthropic

Amodei co-founded Anthropic along with six other OpenAI employees, including her brother Dario Amodei. They left, in part, because Dario — OpenAI's lead safety researcher at the time — was concerned that OpenAI's deal with Microsoft would force it to release products too quickly and without proper guardrails.

At Anthropic, Amodei is focused on ensuring trust and safety. The company's chatbot, Claude, bills itself as an easier-to-use alternative to OpenAI's ChatGPT and is already being implemented by companies like Quora and Notion. Anthropic relies on what it calls a "Triple H" framework in its research, which stands for helpful, honest, and harmless. That means it relies on human input when training its models, including through constitutional AI, in which a customer outlines basic principles for how the AI should operate.
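Anthropic's published constitutional AI research describes, at its core, a critique-and-revise loop in which the model checks its own drafts against a list of written principles. The sketch below shows only the shape of that loop; the llm() helper is a hypothetical stand-in for any text-generation call, and the principles are illustrative, not Anthropic's actual constitution.

```python
# Sketch of the critique-and-revise loop described in Anthropic's constitutional
# AI research. llm() is a hypothetical stand-in for a real model client, and
# PRINCIPLES is an illustrative constitution, not Anthropic's.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that encourage illegal or dangerous activity.",
]

def llm(prompt: str) -> str:
    """Hypothetical text-generation call; swap in a real model client here."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    draft = llm(user_prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own draft against one principle...
        critique = llm(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response violates the principle."
        )
        # ...then rewrite the draft to address the critique.
        draft = llm(
            f"Original response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it satisfies the principle."
        )
    return draft  # revised responses become the training targets
```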

"We all have to simultaneously be looking at the problems of today and really thinking about how to make tractable progress on them while also having an eye on the future of problems that are coming down the pike," Amodei previously told BI.


Demis Hassabis has said artificial general intelligence will be here in a few years.
DeepMind boss Demis Hassabis believes AGI will be here in a few years.
Demis Hassabis, the CEO and co-founder of machine learning startup DeepMind.

Samuel de Roman/Getty Images

Hassabis, a former child chess prodigy who studied at Cambridge and University College London, was nicknamed the "superhero of artificial intelligence" by The Guardian back in 2016.

After a handful of research stints and a venture in video games, he founded DeepMind in 2010. He sold the AI lab to Google in 2014 for £400 million and has since worked on algorithms to tackle issues in healthcare and climate change. In 2017, he also launched a research unit dedicated to understanding the ethical and social impact of AI, according to DeepMind's website.

Hassabis has said the promise of artificial general intelligence — a theoretical concept that sees AI matching the cognitive abilities of humans — is around the corner. "I think we'll have very capable, very general systems in the next few years," Hassabis said previously, adding that he didn't see why AI progress would slow down anytime soon. He added, however, that developing AGI should be executed "in a cautious manner using the scientific method."

In 2022, DeepMind co-founder Mustafa Suleyman launched AI startup Inflection AI along with LinkedIn co-founder Reid Hoffman and Karén Simonyan — now the company's chief scientist.
Mustafa Suleyman
Mustafa Suleyman, co-founder of DeepMind, launched Inflection AI in 2022.

Inflection

The startup, which claims to create "a personal AI for everyone," most recently raised $1.3 billion in funding last June, according to PitchBook.

Its chatbot, Pi, which stands for personal intelligence, is trained on large language models similar to OpenAI's ChatGPT or Bard. Pi, however, is designed to be more conversational and to offer emotional support. Suleyman previously described it as a "neutral listener" that can respond to real-life problems.

"Many people feel like they just want to be heard, and they just want a tool that reflects back what they said to demonstrate they have actually been heard," Suleyman previously said.


USC Professor Kate Crawford focuses on social and political implications of large-scale AI systems.
Kate Crawford
USC Professor Kate Crawford is the author of Atlas of AI and a researcher at Microsoft.

Kate Crawford

Crawford is also a senior principal researcher at Microsoft and the author of Atlas of AI, a book that draws upon the breadth of her research to uncover how AI is shaping society.

Crawford remains both optimistic and cautious about the state of AI development. She told BI by email she's excited about the people she works with across the world "who are committed to more sustainable, consent-based, and equitable approaches to using generative AI."

She added, however, that "if we don't approach AI development with care and caution, and without the right regulatory safeguards, it could produce extreme concentrations of power, with dangerously anti-democratic effects."

Margaret Mitchell is the chief ethics scientist at Hugging Face.
Margaret Mitchell
Margaret Mitchell has headed AI projects at several big tech companies.

Margaret Mitchell

Mitchell has published more than 100 papers over the course of her career, according to her website, and spearheaded AI projects across various big tech companies, including Microsoft and Google.

In late 2020, Mitchell and Timnit Gebru — then the co-lead of Google's ethical artificial intelligence team — published a paper on the dangers of large language models. The paper spurred disagreements between the researchers and Google's management and ultimately led to Gebru's departure from the company in December 2020. Mitchell was terminated by Google just two months later, in February 2021.

Now, at Hugging Face — an open-source data science and machine learning platform founded in 2016 — she's thinking about how to democratize access to the tools necessary to build and deploy large-scale AI models.

In an interview with Morning Brew, where Mitchell explained what it means to design responsible AI, she said, "I started on my path toward working on what's now called AI in 2004, specifically with an interest in aligning AI closer to human behavior. Over time, that's evolved to become less about mimicking humans and more about accounting for human behavior and working with humans in assistive and augmentative ways."

Navrina Singh is the founder of Credo AI, an AI governance platform.
Navrina Singh
Navrina Singh, the founder of Credo AI, says the system may help people reach their potential.

Navrina Singh

Credo AI is a platform that helps companies make sure they're in compliance with the growing body of regulations around AI usage. In a statement to BI, Singh said that by automating the systems that shape our lives, AI has the capacity to "free us to realize our potential in every area where it's implemented."

At the same time, she contends that algorithms right now lack the human judgement that's necessary to adapt to a changing world. "As we integrate AI into civilization's fundamental infrastructure, these tradeoffs take on existential implications," Singh wrote. "As we forge ahead, the responsibility to harmonize human values and ingenuity with algorithmic precision is non-negotiable. Responsible AI governance is paramount."


Richard Socher, a former Salesforce exec, is the founder and CEO of AI-powered search engine You.com.
Richard Socher
Richard Socher believes we're still years from achieving AGI.

You.com

Socher believes we have a long way to go before AI development hits its peak or matches anything close to human intelligence.

One bottleneck in large language models is their tendency to hallucinate — a phenomenon where they convincingly spit out factual errors as truth. But by forcing them to translate questions into code — essentially "program" responses instead of verbalizing them — we can "give them so much more fuel for the next few years in terms of what they can do," Socher said.
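Socher didn't describe an implementation, but the general pattern he's gesturing at, often called program-aided reasoning, has the model emit a small program for a question and then runs that program, so the arithmetic is done by an interpreter rather than by the model's next-word guesses. A minimal sketch, with a hypothetical llm() stand-in for the model call:

```python
# Minimal sketch of "program" responses: instead of asking the model to state
# an answer, ask it to emit a short Python function and run that. llm() is a
# hypothetical stand-in for a real model call; generated code should only ever
# be executed inside a sandbox.

def llm(prompt: str) -> str:
    """Hypothetical text-generation call; replace with a real model client."""
    raise NotImplementedError

def answer_with_code(question: str):
    code = llm(
        "Write a Python function solve() that returns the answer to this "
        f"question. Return only code.\n\nQuestion: {question}"
    )
    namespace = {}
    exec(code, namespace)        # run the generated program (sandbox this!)
    return namespace["solve"]()  # the interpreter, not the model, does the math
```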

But that's just a short-term goal. Socher contends that we are years from anything close to the industry's ambitious bid to create artificial general intelligence. He defines it as a form of intelligence that can "learn like humans" and "visually have the same motor intelligence, and visual intelligence, language intelligence, and logical intelligence as some of the most logical people," and he says it could take as little as 10 years, but as much as 200 years, to get there.

And if we really want to move the needle toward AGI, Socher said humans might need to let go of the reins, and their own motives to turn a profit, and build AI that can set its own goals.

"I think it's an important part of intelligence to not just robotically, mechanically, do the same thing over and over that you're told to do. I think we would not call an entity very intelligent if all it can do is exactly what is programmed as its goal," he told BI.

Read the original article on Business Insider

How should we treat beings that might be sentient?

If you aren’t yet worried about the multitude of ways you inadvertently inflict suffering onto other living creatures, you will be after reading The Edge of Sentience by Jonathan Birch. And for good reason. Birch, a professor of philosophy at the London School of Economics and Political Science, was one of a team of experts chosen by the UK government to help establish the Animal Welfare (Sentience) Act 2022 — a law that protects animals whose sentience status is unclear.

According to Birch, even insects may possess sentience, which he defines as the capacity to have valenced experiences, or experiences that feel good or bad. At the very least, Birch explains, insects (as well as all vertebrates and a selection of invertebrates) are sentience candidates: animals that may be conscious and, until proven otherwise, should be regarded as such.

Although it might be a stretch to wrap our mammalian minds around insect sentience, it is not difficult to imagine that fellow vertebrates have the capacity to experience life, nor does it come as a surprise that even some invertebrates, such as octopuses and other cephalopod mollusks (squid, cuttlefish, and nautilus) qualify for sentience candidature. In fact, one species of octopus, Octopus vulgaris, has been protected by the UK’s Animals (Scientific Procedures) Act (ASPA) since 1986, which illustrates how long we have been aware of the possibility that invertebrates might be capable of experiencing valenced states of awareness, such as contentment, fear, pleasure, and pain.


Β© A. Martin UW Photography

Former Phoenix Anchor Indicted in Alleged PPP Loan Scheme

By: Kevin Eck
25 November 2024 at 06:27
Stephanie Hockridge Reis and her husband, Nathan Reis, have been indicted on four criminal counts of wire fraud and one count of conspiracy to commit wire fraud. Hockridge Reis, a former KNXV anchor, and her husband founded Blueacorn. The Scottsdale-based company processed Paycheck Protection Program (PPP) loan applications during the pandemic. The indictment charges the two...

OpenAI is funding research into 'AI morality'

22 November 2024 at 14:25

OpenAI is funding academic research into algorithms that can predict humans’ moral judgements. In a filing with the IRS, OpenAI Inc., OpenAI’s nonprofit org, disclosed that it awarded a grant to Duke University researchers for a project titled "Research AI Morality." Contacted for comment, an OpenAI spokesperson pointed to a press release indicating the award […]

Β© 2024 TechCrunch. All rights reserved. For personal use only.

Niantic uses Pokémon Go player data to build AI navigation system

19 November 2024 at 12:34

Last week, Niantic announced plans to create an AI model for navigating the physical world using scans collected from players of its mobile games, such as Pokémon Go, and from users of its Scaniverse app, reports 404 Media.

All AI models require training data. So far, companies have collected data from websites, YouTube videos, books, audio sources, and more, but this is perhaps the first we've heard of AI training data collected through a mobile gaming app.

"Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse," Niantic wrote in a company blog post.

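Niantic hasn't published how VPS is implemented, but the textbook core of single-image localization against a 3D map is a perspective-n-point solve over 2D image points matched to known 3D map points. A minimal sketch with OpenCV follows; the correspondences and camera intrinsics are assumed to come from an upstream feature-matching step, and this is not Niantic's code.

```python
# Sketch of the classic core of visual positioning: recover a camera's position
# and orientation from one image, given 2D keypoints matched to known 3D map
# points (perspective-n-point). Not Niantic's implementation; inputs are assumed
# to come from an earlier feature-matching stage.
import cv2
import numpy as np

def localize(points_3d, points_2d, camera_matrix):
    """points_3d: Nx3 map points; points_2d: Nx2 pixel locations; N >= 4."""
    dist_coeffs = np.zeros(5)  # assume an undistorted image
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose could not be recovered")
    rotation, _ = cv2.Rodrigues(rvec)      # 3x3 camera orientation
    camera_position = -rotation.T @ tvec   # camera center in map coordinates
    return rotation, camera_position
```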

Β© https://www.gettyimages.com/detail/news-photo/man-plays-pokemon-go-game-on-a-smartphone-on-july-22-2016-news-photo/578680184

Join us today for Ars Live: Our first encounter with manipulative AI

19 November 2024 at 07:40

In the short term, the most dangerous thing about AI language models may be their ability to emotionally manipulate humans if not carefully conditioned. The world saw its first taste of that potential danger in February 2023 with the launch of Bing Chat, now called Microsoft Copilot.

During its early testing period, the temperamental chatbot gave the world a preview of an "unhinged" version of OpenAI's GPT-4 prior to its official release. The sometimes uncensored and "emotional" nature of its "Sydney" persona (including its use of emojis) arguably gave the world its first large-scale encounter with a truly manipulative AI system. The launch set off alarm bells in the AI alignment community and served as fuel for prominent warning letters about AI dangers.

On November 19 at 4 pm Eastern (1 pm Pacific), Ars Technica Senior AI Reporter Benj Edwards will host a livestream conversation on YouTube with independent AI researcher Simon Willison that will explore the impact and fallout of the 2023 fiasco. We're calling it "Bing Chat: Our First Encounter with Manipulative AI."


Β© Aurich Lawson | Getty Images
