The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
OpenAI’s Transcription Tool Misses the Mark with Excessive Hallucinations
OpenAI's Whisper AI tool raises alarming accuracy issues, especially in healthcare. Could it lead to serious misdiagnoses?

Originally reported by ZDNET
OpenAI's Whisper, an AI-powered speech recognition tool launched in 2022, has been found to "hallucinate," fabricating text that was never spoken. Experts warn this inaccuracy could lead to serious issues if the tool is used in critical contexts.
A recent review of OpenAI's Whisper found "hallucinations" in roughly half of more than 100 hours of transcriptions examined, while another developer reported hallucinations in nearly all of the 26,000 transcripts he created with the tool.
Although AI transcription tools can sometimes misinterpret words, researchers say no other tool hallucinates as much as Whisper. OpenAI, for its part, describes Whisper, an open-source neural network, as having near-human accuracy in English speech recognition, and it is widely used for transcribing interviews and creating video subtitles across various industries.
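Because the open-source Whisper package attaches confidence metadata to each transcribed segment, developers sometimes use it to flag text the model may have invented. The sketch below is a minimal, hypothetical example of that heuristic; the field names (`avg_logprob`, `no_speech_prob`) match what the open-source package returns, but the thresholds and sample data are illustrative assumptions, not values from the reporting above.

```python
def flag_suspect_segments(segments, logprob_floor=-1.0, no_speech_ceiling=0.6):
    """Return segments whose metadata suggests possible hallucination.

    The open-source Whisper package attaches `avg_logprob` (mean token
    log-probability) and `no_speech_prob` (probability the audio contains
    no speech) to each segment. A very low avg_logprob or a high
    no_speech_prob is a common heuristic for flagging text the model
    may have fabricated; the thresholds here are illustrative.
    """
    return [
        seg for seg in segments
        if seg["avg_logprob"] < logprob_floor
        or seg["no_speech_prob"] > no_speech_ceiling
    ]


# Hypothetical segment dicts in the shape model.transcribe() returns:
sample = [
    {"text": "The patient reported mild symptoms.",
     "avg_logprob": -0.21, "no_speech_prob": 0.02},
    {"text": "Take two of the discontinued antibiotics.",
     "avg_logprob": -1.73, "no_speech_prob": 0.71},
]
print(flag_suspect_segments(sample))
```

A filter like this only narrows the problem; as the reporting below notes, hallucinations also appear in short, clear recordings, so confidence scores alone cannot guarantee a faithful transcript.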
However, the rise of Whisper is raising serious concerns about the spread of misinformation, including fabricated quotes.
The AP reported that Whisper is used in tools like ChatGPT, call centers, and services from Oracle and Microsoft, with over 4.2 million downloads on Hugging Face last month.
Experts find it particularly concerning that physicians are using Whisper to transcribe patient visits. Interviews with engineers and researchers also revealed that Whisper sometimes invents entire phrases, including harmful comments and false medical advice.
According to Alondra Nelson, a professor at the Institute for Advanced Study:
"Nobody wants a misdiagnosis."
The issue isn’t limited to poor audio; researchers found that even short, clear recordings can lead to mistakes, potentially causing tens of thousands of errors across millions of transcriptions.