
UK health service AI tool generated a set of false diagnoses for a patient

The use of artificial intelligence in health care has the potential to save time, money, and lives. But introducing still-maturing technology into patient care also carries serious risks.

One London-based patient recently experienced those risks firsthand after receiving a letter inviting him to a diabetic eye screening, a standard annual check for people with diabetes in the United Kingdom. The problem: he had never been diagnosed with diabetes, nor had he shown any signs of the condition.

After opening the appointment letter late one evening, the patient, a healthy man in his mid-twenties, told Fortune he was briefly worried that he had an undiagnosed condition, before concluding the letter must simply be an administrative error. The next day, at a routine blood test he had booked previously, a nurse asked about the diagnosis, and when the patient confirmed he was not diabetic, the two of them reviewed his medical history.

“She showed me the notes on the system, and they had been generated by AI. I realised at that point that something strange was going on,” he told Fortune.

After requesting and reviewing his full medical records, the patient noticed that the entry that introduced the diabetes diagnosis was listed as a summary generated by “Annie AI.” The summary also referenced symptoms consistent with coronary artery disease. In fact, he had none of these symptoms.

The records reviewed by Fortune also stated that the patient had been diagnosed with Type 2 diabetes late last year and was currently taking a series of medications, and they included dosage and administration details for the drugs. None of these details was accurate, however, according to the patient and several other medical records reviewed by Fortune.

“Health Hospital” in “Health City”

Stranger still, the record attributed the medical document it had apparently processed to a fictitious “Health Hospital,” located at “456 Care Road” in “Health City.” The address also included an invented postcode.

Dr. Matthew Noble, a representative of the NHS GP practice responsible for the patient’s care, told Fortune the practice “makes limited use of supervised AI” and that the error was “a one-off case of human error.” He said a medical summariser had initially spotted the mistake in the patient’s record but was distracted and “inadvertently saved the original version rather than the updated version [they] had been working on.”

The fictional AI-generated record appears to have had real consequences, however, with the patient invited to a diabetic eye screening appointment on the basis of the erroneous summary.

While most AI tools used in health care operate under strict human oversight, another NHS worker told Fortune that the leap from the original symptom, tonsillitis, to what was returned, suspected angina due to coronary artery disease, set off alarm bells.

“These human errors are somewhat inevitable if you have an AI system producing completely inaccurate summaries,” the NHS worker said. “Many elderly or less literate patients may never even realise there is a problem.”

Anima Health, the company behind the technology, did not respond to Fortune’s questions about the incident. Dr. Noble, however, said: “Anima is an NHS-accredited document-management system that helps practice staff process incoming documents and action any necessary tasks.”

He added: “No documents are ever processed by AI alone; Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Every document requires review by a human before it is actioned and filed.”

AI’s rocky start in the health sector

The incident is somewhat emblematic of the growing pains surrounding AI’s rollout in health care. As hospitals and GP practices race to adopt automation tools that promise to ease workloads and cut costs, they are also grappling with the challenge of integrating still-maturing technology into high-stakes environments.

The pressure to innovate, and potentially save lives, with the technology is high, but so is the need for rigorous oversight, especially as tools once seen as merely “assistive” begin to influence real patient care.

Anima Health, the company behind the technology, says that “healthcare professionals can save hours per day through automation.” Its services include automatically generating “the patient communications, clinical notes, admin requests, and paperwork that doctors deal with daily.”

Anima’s AI product, Annie, is registered with the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) as a Class I medical device. That means it is considered low-risk and intended to assist clinicians, like an examination lamp or a bandage, rather than to automate medical decisions.

AI tools in this category require their outputs to be checked by a clinician before action is taken or anything is entered into the patient’s record. In the case of the wrongly diagnosed patient, however, this process appears to have failed to catch the fabricated details before they were added to his records.

The incident comes amid mounting scrutiny within the UK’s health service of how AI technology is used and classified. Last month, health service leaders warned GPs and hospitals that some current uses of AI software could breach data-protection rules and endanger patients.

In an email first reported by Sky News and confirmed by Fortune, NHS England warned that unapproved AI software failing to meet minimum standards could put patients at risk. The letter specifically addressed the use of ambient voice technology, or “AVT,” by some doctors.

Brendan Delaney, professor of medical informatics and decision making at Imperial College London and a practising GP, told Fortune that the central issue arises when AI tools go beyond merely transcribing or summarising information.

“Rather than just passively transcribing, that gives it a medical purpose,” Delaney said. Recent guidance issued by the NHS, however, means that some companies and practices are now playing regulatory catch-up.

“Most of the devices that are now in common use have a Class I [categorisation]. I know of at least one, but probably many others, that are now scrambling to try to obtain Class 2a, because they ought to have it,” Delaney said.

Whether a device should be classified as Class 2a depends largely on its intended purpose and the level of clinical risk it carries. Under UK medical-device rules, if a tool’s output is relied upon to inform care decisions, it may require reclassification as a Class 2a medical device, a category subject to stricter regulatory controls.

Anima Health, along with other UK health-tech companies, is currently in the process of registering for Class 2a.

The UK’s push for AI in health care

The UK government is embracing the possibilities of AI in health care, hoping the technology can shore up the country’s strained national health service.

In its recent 10-year health plan, the British government said it aims to make the NHS the most AI-enabled care system in the world, using technology to reduce administrative burden, support preventive care, and empower patients.

But rolling out the technology in a way that complies with current rules is complicated. Even the UK’s health secretary appeared to suggest earlier this year that some doctors may be pushing the boundaries when it comes to integrating AI into patient care.

“I’ve heard anecdotally down the pub, genuinely down the pub, that some clinicians are getting ahead of the game and are already using ambient AI to record notes and things from their consultations, even where their practice or trust has not yet caught up with them.”

He added: “Now, lots of issues there, and I’m not encouraging it, but it does tell me that, contrary to the idea that ‘Oh, people don’t want to change, the staff are happy and they are really resistant to change,’ it’s the opposite. People are crying out for this stuff.”

AI certainly has enormous potential to improve the speed, accuracy, and accessibility of care, especially in areas such as diagnostics, medical record-keeping, and reaching patients in under-served or remote settings. But balancing the technology’s potential against its risks is difficult in sectors such as health care, which handle sensitive data and where errors can cause real harm.

Reflecting on his experience, the patient told Fortune: “In general, I think we should be using AI tools to support the NHS. They have tremendous potential to save money and time. However, LLMs are still very experimental, so they should be used with stringent oversight. I would hate for this to be used as an excuse to avoid pursuing innovation; instead, it should be used to highlight where caution and oversight are needed.”


2025-07-20 13:41:00
