Study Reveals AI-Generated Content Fails to Engage
Generative AI, particularly tools like OpenAI's ChatGPT and Google's Bard, has raised concerns over misinformation, especially as the U.S. presidential election approaches. Studies show that these AI systems can generate misleading narratives: Bard produced misinformation on 78% of tested false narratives related to climate change, while ChatGPT-3.5 and ChatGPT-4 did so 80% and 100% of the time, respectively. Despite the proliferation of AI-generated content, OpenAI's recent findings suggest that such misinformation struggles to gain traction online due to low engagement. At the same time, public perception of AI is increasingly marred by copyright disputes, fueling aversion toward its applications. Research also indicates that smarter AI models, while more accurate overall, are more likely to fabricate facts than to admit uncertainty. As nations grapple with the issue, Rwanda has gone so far as to ban ChatGPT in response to rising misinformation ahead of its elections.
- Total News Sources: 2
- Left: 0
- Center: 1
- Right: 1
- Unrated: 0
- Last Updated: 33 days ago
- Bias Distribution: 50% Center