Fortune | July 20, 21:54
UK health service AI tool generated a set of false diagnoses for one patient that led to him being wrongly invited to a diabetes screening appointment

AI's potential in healthcare is enormous: it promises to save time, money, and lives. But introducing a technology that can "lie" into patient care carries serious risks. A London patient was wrongly recorded as having diabetes because of an AI-generated medical record and was invited to an eye screening for a condition he never had. The incident illustrates the central challenge of applying AI in medicine: ensuring data accuracy and patient safety while pursuing efficiency and innovation, especially at the human-review stage of AI output, where a single lapse can have serious consequences. It underscores the need for stronger regulation, rigorous review, and user education as medical AI rolls out, so that the technology genuinely serves patients' health.

💡A serious error in an AI-generated medical record: a patient was misdiagnosed with diabetes on the basis of an AI-generated summary and even received an unnecessary eye-screening invitation, despite never having had the condition. The AI system recorded his tonsillitis as chest pain and shortness of breath, wrongly added a Type 2 diabetes diagnosis along with medication details, and even invented the address of the hospital he had attended.

🧑‍⚕️Human error compounded the AI risk: although the AI system was the source of the false information, the staff member reviewing the AI-generated summary failed to catch and correct its obvious errors, mistakenly saving the original version instead. Even with AI assistance, rigorous human review remains the critical safeguard that keeps false information out of patient records.

⚖️Classification and regulatory challenges for medical AI devices: the tool involved, "Annie," is registered as a Class I medical device, a low-risk assistive category whose output must be reviewed by a clinician. The incident exposed the serious real-world consequences such devices can produce, prompting debate over classification standards: where AI output influences clinical decisions, its risk class may need to be reassessed and its regulation tightened.

🇬🇧The UK government is pushing medical AI, but caution is required: the government is actively promoting AI across the national health system to improve efficiency and cut costs. Yet this incident, together with the National Health Service (NHS) warning that unapproved AI software may breach data protection rules and endanger patient safety, shows that the rollout of AI must be matched by vigilance about its risks and by strict regulatory and safety standards for every application.

🤔Patients hold both hopes and concerns about medical AI: the patient believes AI has huge potential in healthcare to save money and time, but stresses that LLMs (large language models) are still experimental and must be used under stringent oversight. He hopes the incident becomes a catalyst for both AI innovation and stronger AI regulation, not an excuse to block innovation, showing that caution and oversight are indispensable companions to technological progress.

AI use in healthcare has the potential to save time, money, and lives. But when technology that is known to occasionally lie is introduced into patient care, it also raises serious risks.

One London-based patient recently experienced just how serious those risks can be after receiving a letter inviting him to a diabetic eye screening—a standard annual check-up for people with diabetes in the UK. The problem: He had never been diagnosed with diabetes or shown any signs of the condition.

After opening the appointment letter late one evening, the patient, a healthy man in his mid-20s, told Fortune he had briefly worried that he had been unknowingly diagnosed with the condition, before concluding the letter must just be an admin error. The next day, at a pre-scheduled routine blood test, a nurse questioned the diagnosis and, when the patient confirmed he wasn’t diabetic, the pair reviewed his medical history.

“He showed me the notes on the system, and they were AI-generated summaries. It was at that point I realized something weird was going on,” the patient, who asked for anonymity to discuss private health information, told Fortune.

After requesting and reviewing his medical records in full, the patient noticed the entry that had introduced the diabetes diagnosis was listed as a summary that had been “generated by Annie AI.” The record appeared around the same time he had attended the hospital for a severe case of tonsillitis. However, the record in question made no mention of tonsillitis. Instead, it said he had presented with chest pain and shortness of breath, attributed to a “likely angina due to coronary artery disease.” In reality, he had none of those symptoms.

The records, which were reviewed by Fortune, also noted the patient had been diagnosed with Type 2 diabetes late last year and was currently on a series of medications. It also included dosage and administration details for the drugs. However, none of these details were accurate, according to the patient and several other medical records reviewed by Fortune.

‘Health Hospital’ in ‘Health City’

Even stranger, the record attributed the address of the medical document it appeared to be processing to a fictitious “Health Hospital” located on “456 Care Road” in “Health City.” The address also included an invented postcode.

A representative for the NHS, Dr. Matthew Noble, told Fortune the GP practice responsible for the oversight employs a “limited use of supervised AI” and the error was a “one-off case of human error.” He said that a medical summariser had initially spotted the mistake in the patient’s record but had been distracted and “inadvertently saved the original version rather than the updated version [they] had been working on.”

However, the fictitious AI-generated record appears to have had downstream consequences, with the patient’s invitation to a diabetic eye screening appointment presumably based on the erroneous summary.

While most AI tools used in healthcare operate under strict human oversight, another NHS worker told Fortune that the leap from the original symptoms—tonsillitis—to what was returned—likely angina due to coronary artery disease—raised alarm bells.

“These human error mistakes are fairly inevitable if you have an AI system producing completely inaccurate summaries,” the NHS employee said. “Many elderly or less literate patients may not even know there was an issue.”

The company behind the technology, Anima Health, did not respond to Fortune’s questions about the issue. However, Dr. Noble said, “Anima is an NHS-approved document management system that assists practice staff in processing incoming documents and actioning any necessary tasks.”

“No documents are ever processed by AI, Anima only suggests codes and a summary to a human reviewer in order to improve safety and efficiency. Each and every document requires review by a human before being actioned and filed,” he added.

AI’s uneasy rollout in the health sector

The incident is somewhat emblematic of the growing pains around AI’s rollout in healthcare. As hospitals and GP practices race to adopt automation tools that promise to ease workloads and reduce costs, they’re also grappling with the challenge of integrating still-maturing technology into high-stakes environments. 

The pressure to innovate and potentially save lives with the technology is high, but so is the need for rigorous oversight, especially as tools once seen as “assistive” begin influencing real patient care.

Anima Health promises healthcare professionals they can “save hours per day through automation.” Its services include automatically generating “the patient communications, clinical notes, admin requests, and paperwork that doctors deal with daily.”

Anima’s AI tool, Annie, is registered with the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) as a Class I medical device. The classification covers low-risk aids to clinicians, such as examination lights or bandages, rather than tools that automate medical decisions.

AI tools in this category require outputs to be reviewed by a clinician before action is taken or items are entered into the patient record. In the misdiagnosed patient’s case, however, the practice appears to have failed to correct the factual errors before they were added to his records.

The incident comes amid increased scrutiny within the UK’s health service of the use and categorization of AI technology. Last month, bosses for the health service warned GPs and hospitals that some current uses of AI software could breach data protection rules and put patients at risk.

In an email first reported by Sky News and confirmed by Fortune, NHS England warned that unapproved AI software falling short of minimum standards could put patients at risk of harm. The letter specifically addressed the use of Ambient Voice Technology, or “AVT,” by some doctors.

The main issue with AI transcribing or summarizing information is that it manipulates the original text, Brendan Delaney, professor of Medical Informatics and Decision Making at Imperial College London and a part-time general practitioner, told Fortune.

“Rather than just simply passively recording, it gives it a medical device purpose,” Delaney said. The recent guidance issued by the NHS, however, has meant that some companies and practices are playing regulatory catch-up. 

“Most of the devices that are now in common use have a Class One [categorization],” Delaney said. “I know at least one, but probably many others, are now scrambling to try and start their Class 2a [registration], because they ought to have that.”

Whether a device should be classified as Class 2a essentially depends on its intended purpose and the level of clinical risk. Under UK medical device rules, if a tool’s output is relied upon to inform care decisions, it could require reclassification as a Class 2a medical device, a category subject to stricter regulatory controls.

Anima Health, along with other UK-based health tech companies, is currently pursuing Class 2a registration.

The UK’s AI for health push

The UK government is embracing the possibilities of AI in healthcare, hoping it can boost the country’s strained national health system.

In a recent “10-Year Health Plan,” the British government said it aims to make the NHS the most AI-enabled care system in the world, using the tech to reduce admin burden, support preventive care, and empower patients through technology.

But rolling out this technology in a way that complies with the health service’s current rules is complex. Even the UK’s health minister appeared to suggest earlier this year that some doctors may be pushing the limits when it comes to integrating AI technology into patient care.

“I’ve heard anecdotally down the pub, genuinely down the pub, that some clinicians are getting ahead of the game and are already using ambient AI to kind of record notes and things, even where their practice or their trust haven’t yet caught up with them,” Wes Streeting said, in comments reported by Sky News.

“Now, lots of issues there—not encouraging it—but it does tell me that contrary to this, ‘Oh, people don’t want to change, staff are very happy and they are really resistant to change’, it’s the opposite. People are crying out for this stuff,” he added.

AI certainly has huge potential to dramatically improve the speed, accuracy, and accessibility of care, especially in areas like diagnostics, medical recordkeeping, and reaching patients in under-resourced or remote settings. But walking the line between the technology’s promise and its risks is difficult in a sector like healthcare, which handles sensitive data and where mistakes can cause significant harm.

Reflecting on his experience, the patient told Fortune: “In general, I think we should be using AI tools to support the NHS. It has massive potential to save money and time. However, LLMs are still really experimental, so they should be used with stringent oversight. I would hate this to be used as an excuse to not pursue innovation but instead should be used to highlight where caution and oversight are needed.”
