
Music labels will regret coming for the Internet Archive, sound historian says

On Thursday, music labels sought to add nearly 500 more sound recordings to a lawsuit accusing the Internet Archive (IA) of mass copyright infringement through its Great 78 Project, which seeks to digitize all 3 million three-minute recordings published on 78 revolutions-per-minute (RPM) records from about 1898 to the 1950s.

If the labels' proposed second amended complaint is accepted by the court, damages sought in the case, which some already feared could financially ruin IA and shut it down for good, could increase to almost $700 million. (Initially, the labels sought about $400 million in damages.)

IA did not respond to Ars' request for comment, but the filing noted that IA has not consented to music labels' motion to amend their complaint.


Publishers sue AI startup Cohere over alleged copyright infringement

13 February 2025 at 07:43

A consortium of 14 publishers, including Condé Nast, The Atlantic, and Forbes, has filed a lawsuit against Cohere alleging that the generative AI startup has engaged in "massive, systematic" copyright infringement. In the complaint, the publisher plaintiffs accuse Cohere of using at least 4,000 copyrighted works to train its AI models and display large portions […]


OpenAI blamed NYT for tech problem erasing evidence of copyright abuse

OpenAI keeps deleting data that could allegedly prove the AI company violated copyright laws by training ChatGPT on authors' works. The deletions appear to be largely unintentional, but the sloppy practice is dragging out early court battles that could determine whether AI training is fair use.

Most recently, The New York Times accused OpenAI of unintentionally erasing programs and search results that the newspaper believed could be used as evidence of copyright abuse.

The NYT apparently spent more than 150 hours extracting training data, while following a model inspection protocol that OpenAI set up precisely to avoid conducting potentially damning searches of its own database. This process began in October, but by mid-November, the NYT discovered that some of the data gathered had been erased due to what OpenAI called a "glitch."

