Mashable · July 24, 02:53
The FDA's new drug-approving AI chatbot is not helping

The US Food and Drug Administration (FDA) introduced an artificial intelligence tool named Elsa to speed up the drug approval process and help employees with routine tasks, but in practice it has shown a serious "hallucination" problem. According to insiders, Elsa frequently fabricates medical studies or misreads key data, which keeps it from being used in drug reviews, and it cannot access the internal documents it was promised. FDA officials acknowledge the hallucination risk and say further testing and training are needed. Although the FDA has made AI innovation a priority and launched an AI-assisted scientific review pilot, Elsa's performance has raised concerns about deploying AI tools in critical areas, especially whether oversight and risk mitigation are adequate amid the push for an "America-first" AI agenda.

🤖 The FDA introduced Elsa to speed up drug approvals and assist with daily work, but insiders say the tool suffers from serious "hallucinations," fabricating medical studies or misreading key data, leaving it unfit for review work and without access to the internal documents it was promised.

🗣️ FDA officials admit Elsa carries a "hallucination" risk similar to other large language models (LLMs) and needs further testing and training to ensure accuracy. For now the tool is used only for "organizational duties," and employees are not required to use it.

🚀 The FDA treats AI innovation as a priority and has launched an AI-assisted scientific review pilot, but Elsa's performance has raised concerns about oversight and risk control for newly deployed technology. With the "America-first" AI agenda driving adoption, balancing innovation against necessary review mechanisms has become the central question.

📈 Although Elsa was introduced to address long drug approval cycles, its currently unreliable performance, in particular its tendency to "hallucinate confidently," shows that in critical health care settings the reliability and safety of AI tools still require rigorous validation and careful handling.

The Food and Drug Administration's new AI tool — touted by Secretary of Health and Human Services Robert F. Kennedy, Jr. as a revolutionary solution for shortening drug approvals — has so far produced more hallucinations than solutions.

Known as Elsa, the AI chatbot was introduced to help FDA employees with daily tasks like meeting notes and emails, while simultaneously supporting quicker drug and device approval turnaround times by sorting through important application data. But, according to FDA insiders who spoke to CNN under anonymity, the chatbot is rife with hallucinations, often fabricating medical studies or misinterpreting important data. The tool has been sidelined by staffers, with sources saying it can't be used in reviews and does not have access to crucial internal documents employees were promised.

"It hallucinates confidently," one FDA employee told CNN. According to the sources, the tool often provides incorrect answers on the FDA's research areas, drug labels, and can't link to third-party citations from external medical journals.

Despite initial claims that the tool was already integrated into the clinical review protocol, FDA Commissioner Marty Makary told CNN that the tool was only being used for "organizational duties" and was not required of employees. The FDA's head of AI admitted to the publication that the tool was at risk of hallucinating, carrying the same risk as other LLMs. Both said they weren't surprised it made mistakes, and said further testing and training was needed.

But not all LLMs have the job of approving life-saving medicine.

The agency announced the new agentic tool in June, with Vinay Prasad, director of the FDA's Center for Biologics Evaluation and Research (CBER), and Makary writing that AI innovation was a leading priority for the agency in an accompanying Journal of the American Medical Association (JAMA) article. The tool, which examines device and drug applications, was pitched as a solution for lengthy and oft-criticized drug approval periods, following the FDA's launch of an AI-assisted scientific review pilot.

The Trump administration has rallied government agencies behind an accelerated, "America-first" AI agenda, including recent federal guidance to establish FDA-backed AI Centers of Excellence for testing and deploying new AI tools, announced in the government's newly unveiled AI Action Plan. Many are worried that the aggressive push and deregulation efforts eschew necessary oversight of the new tech.

"Many of America’s most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards," the action plan reads. "A coordinated Federal effort would be beneficial in establishing a dynamic, 'try-first' culture for AI across American industry."

