Study Reveals AI-Generated Content Fails to Engage
News summary

Generative AI tools such as OpenAI's ChatGPT and Google's Bard have raised concerns about misinformation, especially as the U.S. presidential election approaches. Studies show these systems can generate misleading narratives: Bard produced misinformation on 78% of tested false narratives related to climate change, while ChatGPT-3.5 and ChatGPT-4 did so 80% and 100% of the time, respectively. Despite the proliferation of AI-generated content, OpenAI's recent findings suggest such misinformation struggles to gain traction online due to low engagement. Public perception of AI is also marred by copyright disputes, fueling growing aversion to its applications. Research indicates that smarter AI models, while more accurate overall, are also more likely to fabricate facts rather than admit uncertainty. As various nations tackle the issue, Rwanda has banned ChatGPT in response to rising misinformation ahead of its elections.

Story Coverage
Bias Distribution
Center: 50%
Right: 50%
Coverage Details
Total News Sources: 2
Left: 0
Center: 1
Right: 1
Unrated: 0
Last Updated: 7 days ago
