- Total News Sources: 4
- Left: 3
- Center: 0
- Right: 0
- Unrated: 1
- Last Updated: 20 days ago
- Bias Distribution: 100% Left


Federal Judge Rules Anthropic AI Training Fair Use, Sets December Piracy Trial
A federal judge ruled that Anthropic, the Amazon-backed AI company behind the Claude chatbot, did not violate copyright law by using copyrighted books to train its AI model, deeming the training process "quintessentially transformative" and therefore fair use under U.S. law. The ruling reasons that the AI's use of these materials to develop new content is analogous to how humans learn from reading, setting a significant legal precedent in favor of AI innovation. However, the judge also found that Anthropic's separate act of downloading and storing over seven million pirated books from unauthorized "shadow libraries" was unlawful, leading to a trial scheduled for December to determine damages.

The split decision balances the encouragement of technological progress against the protection of intellectual property rights, underscoring that how AI companies acquire data matters as much as how they use it. The case, brought by three authors alleging copyright infringement, highlights ongoing tensions between AI developers and content creators and is likely to shape future regulation and litigation involving AI training data. Anthropic welcomed the fair-use ruling, while the upcoming trial will focus on the legality and consequences of using pirated content.


