Concerns about medical note-taking tool raised after researcher discovers it invents things no one said — Nabla is powered by OpenAI's Whisper


Researchers and engineers using OpenAI’s Whisper audio transcription tool have reported that it often includes hallucinations in its output, commonly in the form of chunks of text that don't accurately reflect the original recording. According to the Associated Press, a University of Michigan researcher found made-up text in 80% of the tool’s transcriptions he inspected, which prompted him to try to improve the model.

AI hallucination isn’t a new phenomenon, and researchers have been trying to mitigate it with techniques such as semantic entropy. What’s troubling, however, is that Whisper is widely used in medical settings, where mistakes could have deadly consequences.

For example, one speaker said, “He, the boy, was going to, I’m not sure exactly, take the umbrella,” but Whisper transcribed, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.” Another recording said, “two other girls and one lady,” which the AI tool transcribed as “two other girls and one lady, um, which were Black.” Lastly, in one medical example, Whisper wrote down “hyperactivated antibiotics,” a type of medication that does not exist.

Despite these findings, Nabla, an ambient AI assistant that helps clinicians transcribe patient-doctor interactions and create notes or reports after the visit, still uses Whisper. The company claims that over 45,000 clinicians across more than 85 health organizations use the tool, including Children’s Hospital Los Angeles and Mankato Clinic in Minnesota.
