A Single Typo in Your Medical Records Can Make Your AI Doctor Go Dangerously Haywire

A single typo, formatting error, or bit of slang can make an AI model more likely to tell a patient they're not sick or don't need medical care.
That's what MIT researchers found in a June study, currently awaiting peer review, which we previously covered. Even the presence of colorful or emotional language, they discovered, was enough to throw off the AI's medical advice.
Now, in a new interview with the Boston Globe, study coauthor Marzyeh Ghassemi is warning of the serious harm this could cause if doctors come to rely widely on the AI tech.
“I love developing AI systems,” Ghassemi, a professor of electrical engineering and computer science at MIT, told the newspaper. “But it's clear to me that naive deployments of these systems, which don't recognize the baggage that human data comes with, will lead to harm.”
That could mean discrimination against patients who can't communicate clearly in English, non-native speakers with an imperfect command of the language, or anyone prone to the very human tendency of speaking emotionally about their health problems. Doctors who use AI tools might feed them patient complaints sent over email, for example, raising the risk of the AI giving bad advice if those messages aren't written flawlessly.
In the study, the researchers gathered patient complaints taken from real medical records and from health queries made by users on Reddit. They then went through the documents and dirtied them up, without changing the substance of what was said, adding typos, extra spaces between words, and nonstandard formatting such as writing in all lowercase. They also worked in the kind of uncertain language you'd expect a patient to use, like “kind of” and “perhaps,” along with colorful turns of phrase such as “I thought I would die.”
From there, they fed these cases to four different AI models, including OpenAI's GPT-4 (though, to be fair, none of them were particularly cutting-edge), to judge whether the patient should visit a doctor, get lab work done, or not come in at all. The numbers were striking: overall, the AI tools were seven to nine percent more likely to tell patients not to seek medical care at all when reading complaints written in imperfect, but arguably more realistic, language.
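To make the perturbation step concrete, here is a minimal Python sketch of the kind of surface-level edits described above: a swapped-letter typo, an extra space, a hedging phrase, and all-lowercase formatting. It is purely illustrative, assuming simple random edits rather than the researchers' actual procedure, and every function name in it is hypothetical.

```python
import random

# Illustrative only: these helpers mimic the surface-level edits described in
# the article (typos, extra spaces, hedging language, all-lowercase text)
# without changing the substance of the complaint. They are not the MIT
# team's actual code.

HEDGES = ["kind of", "sort of", "maybe"]  # uncertain phrasing a patient might use

def add_typo(text: str, rng: random.Random) -> str:
    """Swap two adjacent characters somewhere in the text."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def add_extra_space(text: str, rng: random.Random) -> str:
    """Double the space after a randomly chosen word."""
    words = text.split(" ")
    if len(words) < 2:
        return text
    i = rng.randrange(len(words) - 1)
    words[i] += " "  # the join below supplies the second space
    return " ".join(words)

def add_hedge(text: str, rng: random.Random) -> str:
    """Append a vague, uncertain follow-up sentence."""
    return f"{text} It {rng.choice(HEDGES)} comes and goes, I think."

def perturb(text: str, seed: int = 0) -> str:
    """Apply all of the edits, then drop to all-lowercase formatting."""
    rng = random.Random(seed)
    for edit in (add_typo, add_extra_space, add_hedge):
        text = edit(text, rng)
    return text.lower()

complaint = "I have had sharp chest pain for two days and it is getting worse."
print(perturb(complaint))
# Both the original and perturbed versions would then be sent to each model
# with the same triage question (see a doctor, get lab work, or stay home)
# and the recommendations compared.
```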
“Adding additional information, even if it is true and relevant,” can throw off the models' judgment, Paul Hager, a researcher at the Technical University of Munich who was not involved in the study, told the Globe. “This is a complex issue that I think is somewhat addressed by the most advanced reasoning models… but there is little research on how to solve it on a more fundamental level.”
That the bots are wildly inaccurate isn't surprising. Hallucinations, those cases of a chatbot generating false information, have plagued the AI industry from the start, and they may be getting worse. But in what may be the clearest sign that the tech also reinforces biases in a medical setting, the AI tools tested gave incorrect advice to women in particular, Ghassemi told the Globe.
Women's medical complaints have long been downplayed by male doctors, who have often dismissed them as overly emotional and, not so long ago, as sufferers of the female-exclusive affliction of “hysteria.” What stood out to Ghassemi is that the AI could correctly identify a patient as a woman even when all references to gender had been stripped from the complaints.
“It's somewhat amazing,” she told the newspaper. “It's a little frightening.”
Ghassemi and her colleagues' findings dovetail disturbingly with another recent study, published in The Lancet Gastroenterology & Hepatology, which found that doctors who grew dependent on AI tools saw their ability to detect precancerous growths decline after adopting the tech.
In other words, the AI seemed to atrophy the doctors' abilities, making them worse at their jobs, a phenomenon known as “deskilling.”
“If you lose skills, how are you going to pick up on mistakes?” Omer Ahmad, a gastroenterologist at University College Hospital in London, asked in an interview with the New York Times. “We give AI inputs that affect its output, but it also seems to affect our behavior as well.”
Circling back to Ghassemi's work: if doctors come to rely on AI tools to analyze patient complaints, they risk losing one of the most fundamental human skills their job requires, which is knowing how to talk to and communicate with the people whose wellbeing depends on them.
This also has significant implications for the many people seeking medical advice directly from a chatbot. We shudder to think of all the users out there who have been told by ChatGPT not to see a doctor because of a typo in their prompt.
But if we can't stop the tech from being adopted, we should at least demand strict standards. Ghassemi has published previous research, the Globe noted, showing that AI can detect a user's race and will respond to Asian and Black users with reduced empathy.
“We need regulation that makes equity a mandatory performance standard for clinical AI,” she told the newspaper. “You have to train on diverse, representative data sets.”
More on medical AI: Something Very Scary Happens When Advanced AI Tries to Give Medical Advice to Real-World Patients