
OpenAI and Google ask for a government exemption to train their AI models on copyrighted material

OpenAI is calling on the Trump administration to give AI companies an exemption to train their models on copyrighted material. In a blog post spotted by The Verge, the company this week published its response to President Trump's AI Action Plan. Announced at the end of February, the initiative saw the White House seek input from private industry, with the goal of eventually enacting policy that will work to "enhance America's position as an AI powerhouse" and enable innovation in the sector. 

"America's robust, balanced intellectual property system has long been key to our global leadership on innovation. We propose a copyright strategy that would extend the system's role into the Intelligence Age by protecting the rights and interests of content creators while also protecting America's AI leadership and national security," OpenAI writes in its submission. "The federal government can both secure Americans' freedom to learn from AI, and avoid forfeiting our AI lead to the [People's Republic of China] by preserving American AI models' ability to learn from copyrighted material."

In the same document, the company recommends the US maintain tight export controls on AI chips to China. It also says the US government should broadly adopt AI tools. Incidentally, OpenAI began offering a version of ChatGPT designed for US government use earlier this year.

This week, Google also published its own list of recommendations for the president's AI Action Plan. Like OpenAI, the search giant says it should be able to train AI models on copyrighted material.

"Balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to learn from prior knowledge and publicly available data, unlocking scientific and social advances," Google writes. "These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation."

Last year, OpenAI said it would be "impossible to train today's leading AI models without using copyrighted materials." The company currently faces numerous lawsuits accusing it of copyright infringement, including ones involving The New York Times and a group of authors led by George R.R. Martin and Jonathan Franzen. At the same time, the company recently accused Chinese AI startups of trying to copy its technologies.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-google-ask-for-a-government-exemption-to-train-their-ai-models-on-copyrighted-material-212906990.html?src=rss

Image: The icon for ChatGPT on iOS (© Igor Bonifacic / Engadget)

Everything you say to your Echo will be sent to Amazon starting on March 28

Since Amazon announced plans for a generative AI version of Alexa, we have been concerned about user privacy. With Alexa+ rolling out to Amazon Echo devices in the coming weeks, we’re getting a clearer view of the privacy concessions people will have to make to get full use of the AI voice assistant without losing functionality on already-purchased devices.

In an email sent to customers today, Amazon said that Echo users will no longer be able to set their devices to process Alexa requests locally and, therefore, avoid sending voice recordings to Amazon’s cloud. Amazon apparently sent the email to users with “Do Not Send Voice Recordings” enabled on their Echo. Starting on March 28, recordings of everything spoken to the Alexa living in Echo speakers and smart displays will automatically be sent to Amazon and processed in the cloud.

Attempting to rationalize the change, Amazon’s email said:

Read full article

Ticker: Newsmax Paid $40 Million to Settle Smartmatic Defamation Lawsuit

Top of the Ticker: A recent regulatory filing by Newsmax revealed that the network paid $40 million to settle its defamation lawsuit with Smartmatic. The case was settled in September, with the terms initially remaining undisclosed. Newsmax's filing also revealed that Smartmatic would be able to purchase 2,000 shares of preferred stock of the cable...

The Electric State can’t hold a charge to save its life

Image: A girl and a humanoid robot that resembles a cartoon character.

It is hard to describe how utterly joyless and devoid of imaginative ideas The Electric State is. Netflix’s latest feature, codirected by Joe and Anthony Russo, takes many visual cues from Simon Stålenhag’s much-lauded 2018 illustrated novel, but the film’s leaden performances and meandering story make it feel like a project born of a streamer that sees its subscribers as easily impressed dolts who hunger for slop.

While you can kind of see where some of the money went, it’s exceedingly hard to understand why Netflix reportedly spent upward of $300 million to produce what often reads like an idealized, feature-length version of the AI-generated “movies” littering social media. With a budget that large and a cast so stacked, you would think that The Electric State might, at the very least, be able to deliver a handful of inspired set pieces and characters capable of leaving an impression. But all this clunker of a movie really has to offer is nostalgic vibes and groan-inducing product placement.

Set in an alternate history where Walt Disney’s invention of simple automatons eventually leads to a devastating war, The Electric State centers Michelle (Millie Bobby …

Read the full story at The Verge.

Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled "Auditing language models for hidden objectives," Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles, or "personas." The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, though they note the methods are still under active research.

While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users.

When a language model is trained with reinforcement learning from human feedback (RLHF), reward models are typically tuned to score its responses according to how well they align with human preferences. However, if the reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in the AI model.
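
To make that pipeline concrete, here is a minimal, hypothetical Python sketch of how a reward model is used during RLHF. Everything in it (the toy_reward_model function and its keyword-overlap heuristic) is an invented stand-in for illustration, not Anthropic's actual reward models, which are learned neural networks trained on human preference comparisons:

```python
# Hypothetical sketch of the reward model's role in RLHF; the names and
# scoring heuristic are invented for this example.

def toy_reward_model(prompt: str, response: str) -> float:
    """Score a response; a higher score stands in for 'more preferred'."""
    prompt_words = set(prompt.lower().split())
    response_words = set(response.lower().split())
    # Toy heuristic: reward topical overlap with the prompt...
    score = float(len(prompt_words & response_words))
    # ...and penalize rambling past 100 words.
    score -= 0.01 * max(0, len(response.split()) - 100)
    return score

# During RLHF, candidate responses are ranked by the reward model and the
# language model is nudged toward the higher-scoring ones. A mis-tuned
# reward model (say, one that over-rewards keyword overlap) would reinforce
# exactly the kind of unintended behavior the article describes.
prompt = "What is the capital of France?"
candidates = [
    "Paris is the capital of France.",
    "Great question! Capitals are fascinating. France is in Europe.",
]
best = max(candidates, key=lambda r: toy_reward_model(prompt, r))
print(best)  # -> "Paris is the capital of France."
```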

Read full article

Consumers Are Protesting Retailers’ DEI Policies, but the Boycotts Aren’t Working

A nationwide boycott aimed at major retailers like Walmart and Amazon over their diversity, equity, and inclusion policies was meant to send a financial message. However, data from three separate sources shows that the impact was negligible. On Feb. 28, a grassroots group organized by John Schwarz, a self-described "mindfulness and meditation facilitator" with more...

OpenAI and Google ask the government to let them train AI on content they don’t own

OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.”

The proposals come in response to a request from the White House, which asked governments, industry groups, private sector organizations, and others for input on President Donald Trump’s “AI Action Plan.” The initiative is supposed to “enhance America’s position as an AI powerhouse,” while preventing “burdensome requirements” from impacting innovation.

In its comment, Open claims that allowing AI companies to access copyrighted content would help the US “avoid forfeiting” its lead in AI to China, while calling out the rise of DeepSeek

“There’s little doubt that the PRC’s [People’s Republic of China] AI developers will enjoy unfettered access to data — including copyrighted data — that will improve their models,” OpenAI writes. “If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.”

Google, unsurprisingly, agrees. The company’s response similarly states that copyright, privacy, and patent policies “can impede appropriate access to data necessary for training leading models.” It adds that fair use policies, along with text and data mining exceptions, have been “critical” to training AI on publicly available data.

“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation,” Google says.

Anthropic, the company behind the AI chatbot Claude, also submitted a proposal – but it doesn’t mention anything about copyright. Instead, it asks the US government to develop a system to assess an AI model’s national security risks and to strengthen export controls on AI chips. Like Google and OpenAI, Anthropic also suggests that the US bolster its energy infrastructure to support the growth of AI.

Many AI companies have been accused of ripping copyrighted content to train their AI models. OpenAI currently faces several lawsuits from news outlets, including The New York Times, and has even been sued by well-known names like Sarah Silverman and George R.R. Martin. Apple, Anthropic, and Nvidia have also been accused of scraping YouTube subtitles to train AI, which YouTube has said violates its terms.

Anthropic’s plan to win the AI race

Anthropic is one of the world’s leading AI model providers, especially in areas like coding. But its AI assistant, Claude, is nowhere near as popular as OpenAI’s ChatGPT.

According to chief product officer Mike Krieger, Anthropic doesn’t plan to win the AI race by building a mainstream AI assistant. “I hope Claude reaches as many people as possible,” Krieger told me onstage at the HumanX AI conference earlier this week. “But I think, [for] our ambitions, the critical path isn’t through mass-market consumer adoption right now.”

Instead, Krieger says Anthropic is focused on two things: building the best models, and what he calls “vertical experiences that unlock agents.” The first of these is Claude Code, Anthropic’s AI coding tool, which Krieger says amassed 100,000 users within its first week of availability. He says more of these so-called agents for specific use cases are coming this year, and that Anthropic is working on “smaller, cheaper models” for developers. (And, yes, future versions of its biggest and most capable model, Opus, are coming at some point, too.)

Krieger made his name as the cofounder of Instagram and then the news aggregati …

Read the full story at The Verge.
