Experts Criticize Whisper's Reliability in Transcriptions

News summary

OpenAI's transcription tool, Whisper, which is widely used across industries including healthcare, is reported to generate 'hallucinations': invented text that can include racial commentary, violent rhetoric, and fabricated medical information. Researchers have identified these hallucinations in numerous studies, with fabricated passages appearing in a substantial portion of the transcripts they examined. Despite OpenAI's warnings against using Whisper in 'high-risk domains' such as medical settings, many healthcare providers continue to employ it to transcribe patient consultations. The problem persists across both long and short audio samples, raising concerns about the consequences of relying on Whisper for accurate transcription and underscoring the need for improvements to mitigate these errors. Researchers also note that although Whisper has been integrated into platforms such as Oracle's and Microsoft's cloud services, its current reliability remains questionable.

Coverage Details

Total News Sources: 12
Left: 6
Center: 1
Right: 0
Unrated: 5
Bias Distribution: 86% Left
Last Updated: 23 days ago