โŒ

Normal view

There are new articles available, click to refresh the page.
Today โ€” 19 May 2025Main stream

Trump's 'Big Beautiful Bill' would create 'unfettered abuse' of AI, 141 high-profile orgs warn in letter to Congress

19 May 2025 at 13:49
Capitol Hill. Trump's "Big Beautiful Bill," which includes a controversial AI provision, is making its way through Congress. Tasos Katopodis/Getty Images

  • Trump's bill could lead to rampant AI abuse, organizations warn in a letter to Congress.
  • A provision in the bill would prevent states from regulating AI for a decade.
  • The critics argue it risks civil rights, privacy, and accountability.

A group of high-profile unions, advocacy groups, nonprofits, and academic institutions is warning that a provision in President Donald Trump's "Big Beautiful Bill" could lead to the "unfettered abuse" of AI.

In a letter to Congress on Monday, 141 organizations called out a provision in Trump's signature bill that would prohibit states from regulating artificial intelligence for a decade. The provision, which Republicans placed into the sweeping tax, immigration, and defense legislation, would be a huge victory for regulation-wary AI companies.

But it would be a nightmare for Americans' civil rights, the groups argued in their letter, which was addressed to Republican House Speaker Mike Johnson and Democratic House Minority Leader Hakeem Jeffries.

"Protections for civil rights and children's privacy, transparency in consumer-facing chatbots to prevent fraud, and other safeguards would be invalidated, even those that are uncontroversial," the letter reads.

"The resulting unfettered abuses of AI or automated decision systems could run the gamut from pocketbook harms to working families like decisions on rental prices, to serious violations of ordinary Americans' civil rights, and even to large-scale threats like aiding in cyber attacks on critical infrastructure or the production of biological weapons," it continues.

And, the letter added, without state-level regulations on emerging technologies, companies wouldn't be held accountable.

"This moratorium would mean that even if a company deliberately designs an algorithm that causes foreseeable harm โ€” regardless of how intentional or egregious the misconduct or how devastating the consequences โ€” the company making that bad tech would be unaccountable to lawmakers and the public," the letter reads.

The letter's signatories include Georgetown Law's Center on Privacy and Technology, the Southern Poverty Law Center, the Economic Policy Institute, Amazon Employees for Climate Justice, the Alphabet Workers Union, and many others.

The provision would invalidate critical state laws, like those already in effect in New Jersey and Colorado, designed to protect people from harms created by AI, such as algorithmic discrimination, which can affect everything from housing and policing to healthcare and financial services, the letter argues.

Those harms include "many documented cases of AI having highly sexualized conversations with minors and even encouraging minors to commit harm to themselves and others; AI programs making healthcare decisions that have led to adverse and biased outcomes; and AI enabling thousands of women and girls to be victimized by nonconsensual deepfakes," the letter says.

Trump's signature bill, which the House Budget Committee moved forward on Sunday, still has to clear a series of votes in the House before going to the Senate, and the bill's AI provision has to meet a high bar to remain in the larger bill.

The White House and a representative for Speaker Mike Johnson did not immediately respond to a request for comment from Business Insider.

Read the original article on Business Insider


Palantir CEO Alex Karp praises Saudi engineers and takes a swipe at Europe, saying it has 'given up' on AI

13 May 2025 at 21:17
Palantir Technologies CEO Alex Karp says that Europe has "given up" on AI. Brendan McDermid/REUTERS

  • Palantir CEO Alex Karp praised Saudi engineers at Riyadh's investment forum and criticized Europe.
  • European organizations lag behind their US counterparts in AI adoption, a report said.
  • Europe has more stringent AI regulations, and many uses of AI are categorized as high-risk.

At an investment forum in Riyadh, Alex Karp, CEO of defense tech company Palantir Technologies, praised Saudi engineers for meritocracy and patriotism โ€” and took a swipe at Europe over its slow AI adoption.

"You're seeing a receptivity in this region, especially in the kingdom," Karp said at the Saudi-US Investment Forum on Tuesday. "But the receptivity is on the back of people who have a deep tradition in engineering excellence and, quite frankly, believe in their own future."

Karp was addressing a request from panel host and CNBC anchor Sara Eisen to expand upon a previous comment that the countries and regions that are best utilizing AI right now are the US and the Middle East.

"Obviously, the great contradistinction here is Europe, where, you know, it's like people have given up, and we โ€” I really hope that turns around in Europe," he added.

The talk came as President Donald Trump received a royal welcome with golden chairs and Arabian horses in Riyadh on Tuesday as he kicked off his Gulf tour. Flanked by executives from Google, Nvidia, BlackRock, and others to discuss AI, defense, and energy with Saudi officials, Trump said he aims to secure $1 trillion in deals.

According to an October 2024 report published by QuantumBlack, AI by McKinsey, European organizations lag behind their US counterparts by 45% to 70% in AI adoption. And while Europe leads in producing AI semiconductor equipment, the machinery and tools used to make semiconductors, the report said Europe has less than 5% market share in areas like raw materials, cloud infrastructure, and supercomputers.

"Opportunity remains wide open, but Europe is starting from a disadvantage," the report QuantumBlack wrote.

Europe is also known for much more stringent AI regulation. On August 1, 2024, the EU's AI Act, the first-ever comprehensive legal framework on AI, entered into force, and the bloc established an AI Office to oversee the implementation of these regulations.

The AI Act is just one part of a wider package of measures that the EU says ensures "trustworthy AI." It explicitly bans practices such as AI-based manipulation and deception, real-time remote biometric identification by law enforcement in public spaces, and individual criminal-offense risk assessment.

The AI Act also categorizes certain uses of AI, such as robot-assisted surgery and credit scoring, as "high-risk" and subjects them to strict scrutiny.

Europe's best-known and best-funded AI company is the French startup Mistral, which has raised more than €1 billion. Dubbed Europe's answer to OpenAI, the company has recently ruled out an acquisition and is eyeing an IPO while pushing its open-source AI models and generative chatbot "Le Chat" into new markets.

Read the original article on Business Insider

โŒ
โŒ