
OpenAI’s Transcription Tool Misses the Mark with Excessive Hallucinations

OpenAI’s Whisper, an AI-powered speech recognition tool launched in 2022, has been found to “hallucinate,” inventing text that was never spoken. Experts warn these fabrications could cause serious problems if the tool is used in critical contexts.

A recent review of OpenAI’s Whisper found frequent “hallucinations” in nearly half of 100 hours of transcriptions examined, while one developer reported hallucinations in almost all of the 26,000 transcripts he tested.

AI transcription tools sometimes misinterpret words, but researchers say no other tool hallucinates as much as Whisper. OpenAI, for its part, claims that Whisper, an open-source neural network, approaches human-level accuracy in English speech recognition; the model is widely used to transcribe interviews and generate video subtitles across various industries.

However, Whisper’s growing adoption is raising serious concerns about the spread of misinformation, such as fabricated quotes and false statements inserted into transcripts.

The AP reported that Whisper is built into tools like ChatGPT, used in call centers, and offered through services from Oracle and Microsoft, with over 4.2 million downloads on Hugging Face last month.
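For context, the openly downloadable model referenced in those download figures can be run locally with only a few lines of code. The sketch below uses the Hugging Face Transformers speech-recognition pipeline; the checkpoint name and audio file path are illustrative placeholders, not details from the report.

# Minimal sketch: transcribing a recording with a publicly released Whisper
# checkpoint via the Hugging Face Transformers pipeline.
# "openai/whisper-small" and "interview.wav" are illustrative placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",  # one of the openly available checkpoints
    chunk_length_s=30,             # Whisper processes audio in 30-second windows
)

result = asr("interview.wav")      # path to a local audio file
print(result["text"])              # the transcript, which may contain hallucinated text

Because the model always produces fluent text, a transcript like this gives no built-in signal when a phrase has been invented, which is why researchers recommend reviewing the output against the original audio in sensitive settings.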

Experts find it particularly concerning that physicians are using Whisper to transcribe patient visits. Interviews with engineers and researchers also revealed that Whisper sometimes invents entire phrases, including harmful comments and false medical advice.

According to Alondra Nelson, a professor at the Institute for Advanced Study:

“Nobody wants a misdiagnosis.” 

The issue isn’t limited to poor-quality audio: researchers found that even short, clearly recorded clips can produce errors, which could add up to tens of thousands of faulty transcriptions across millions of recordings.
