MIT News - Artificial Intelligence | December 13, 2024
AI in health should be regulated, but don’t forget about the algorithms, researchers say

The Office for Civil Rights in the U.S. Department of Health and Human Services has issued a final rule under the Affordable Care Act prohibiting discrimination on the basis of race, color, national origin, age, disability, or sex in "patient care decision support tools." The rule applies to both AI and non-automated tools used in medicine. Although the FDA has approved nearly 1,000 AI-enabled medical devices, the clinical risk scores produced by clinical decision support tools remain unregulated. Researchers are calling for stronger oversight of AI to ensure equity and transparency in health care, noting that even non-AI tools can carry bias.

🇺🇸 The Office for Civil Rights (OCR) in the U.S. Department of Health and Human Services issued a final rule under the Affordable Care Act (ACA) prohibiting discrimination on the basis of race, color, national origin, age, disability, or sex in "patient care decision support tools"; the rule covers both AI and non-automated tools used in medicine.

📈 Since the first AI-enabled medical device was approved in 1995, FDA approvals have risen sharply; nearly 1,000 AI-enabled devices have now been approved, many of them designed to support clinical decision-making.

🩺 Although 65 percent of U.S. physicians use clinical decision support tools each month to determine next steps in patient care, no regulatory body currently oversees the clinical risk scores these tools produce.

🤖 Researchers note that many decision support tools do not use AI yet can still perpetuate bias in health care, and therefore also need oversight.

⚖️ Regulating clinical risk scores is challenging because clinical decision support tools embedded in electronic medical records have proliferated and are widely used in practice, but such regulation remains necessary to ensure transparency and nondiscrimination.

One might argue that one of the primary duties of a physician is to constantly evaluate and re-evaluate the odds: What are the chances of a medical procedure’s success? Is the patient at risk of developing severe symptoms? When should the patient return for more testing? Amidst these critical deliberations, the rise of artificial intelligence promises to reduce risk in clinical settings and help physicians prioritize the care of high-risk patients.

Despite its potential, researchers from the MIT Department of Electrical Engineering and Computer Science (EECS), Equality AI, and Boston University are calling for more regulatory oversight of AI in a new commentary published in the October issue of NEJM AI (New England Journal of Medicine AI). The commentary comes after the Office for Civil Rights (OCR) in the U.S. Department of Health and Human Services (HHS) issued a new rule under the Affordable Care Act (ACA).

In May, the OCR published a final rule under the ACA that prohibits discrimination on the basis of race, color, national origin, age, disability, or sex in “patient care decision support tools,” a newly established term that encompasses both AI and non-automated tools used in medicine.

Developed in response to President Joe Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence from 2023, the final rule builds upon the Biden-Harris administration’s commitment to advancing health equity by focusing on preventing discrimination. 

According to senior author and associate professor of EECS Marzyeh Ghassemi, “the rule is an important step forward.” Ghassemi, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES), adds that the rule “should dictate equity-driven improvements to the non-AI algorithms and clinical decision-support tools already in use across clinical subspecialties.”

The number of U.S. Food and Drug Administration-approved, AI-enabled devices has risen dramatically over the past decade, following the approval of the first AI-enabled device in 1995 (PAPNET Testing System, a tool for cervical screening). As of October, the FDA has approved nearly 1,000 AI-enabled devices, many of which are designed to support clinical decision-making.

However, researchers point out that there is no regulatory body overseeing the clinical risk scores produced by clinical-decision support tools, despite the fact that the majority of U.S. physicians (65 percent) use these tools on a monthly basis to determine the next steps for patient care.

To address this shortcoming, the Jameel Clinic will host another regulatory conference in March 2025. Last year’s conference ignited a series of discussions and debates amongst faculty, regulators from around the world, and industry experts focused on the regulation of AI in health.

“Clinical risk scores are less opaque than ‘AI’ algorithms in that they typically involve only a handful of variables linked in a simple model,” comments Isaac Kohane, chair of the Department of Biomedical Informatics at Harvard Medical School and editor-in-chief of NEJM AI. “Nonetheless, even these scores are only as good as the datasets used to ‘train’ them and as the variables that experts have chosen to select or study in a particular cohort. If they affect clinical decision-making, they should be held to the same standards as their more recent and vastly more complex AI relatives.”
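
To make Kohane’s point concrete, here is a minimal sketch of what such a simple, transparent risk model can look like. It implements the points-based CHADS2 stroke-risk score for patients with atrial fibrillation, a well-known clinical risk score built from just five variables; the choice of CHADS2, the function name, and the patient-record layout are illustrative and not drawn from the commentary.

```python
# Minimal sketch of a points-based clinical risk score (CHADS2 for
# stroke risk in atrial fibrillation). The score sums a handful of
# clinical variables -- exactly the kind of "simple model" Kohane
# describes. Record fields and function name are illustrative.

def chads2_score(patient: dict) -> int:
    """Return the CHADS2 stroke-risk score (0-6) for a patient record."""
    score = 0
    score += 1 if patient["congestive_heart_failure"] else 0
    score += 1 if patient["hypertension"] else 0
    score += 1 if patient["age"] >= 75 else 0
    score += 1 if patient["diabetes"] else 0
    score += 2 if patient["prior_stroke_or_tia"] else 0  # weighted more heavily
    return score

patient = {
    "congestive_heart_failure": False,
    "hypertension": True,
    "age": 78,
    "diabetes": False,
    "prior_stroke_or_tia": True,
}
print(chads2_score(patient))  # 4
```

Every variable and weight is inspectable, which makes such a score easy to audit; but as Kohane notes, its validity still hinges entirely on the datasets and cohorts in which those weights were derived.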

Moreover, while many decision-support tools do not use AI, researchers note that these tools are just as culpable in perpetuating biases in health care, and require oversight.
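
One well-documented illustration of this point (an example chosen here, not one cited in the commentary) is the race coefficient in older kidney-function equations: the 4-variable MDRD eGFR formula multiplied its estimate by 1.212 for patients recorded as Black, a non-AI adjustment that has since been removed from recommended practice in favor of race-free equations. A sketch of how such a hard-coded demographic adjustment can live in a plain, deterministic tool:

```python
# Sketch of the now-superseded 4-variable MDRD eGFR equation, a non-AI
# clinical formula that hard-coded a race-based multiplier. Shown only
# to illustrate how bias can be embedded in a simple deterministic
# tool; current guidance uses race-free equations.

def mdrd_egfr(serum_creatinine_mg_dl: float, age: int,
              female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) under the 4-variable MDRD equation."""
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        # Race coefficient: inflates the estimate, which could delay
        # diagnosis or specialist referral for Black patients.
        egfr *= 1.212
    return egfr

# Identical labs, different recorded race -> different clinical number.
print(round(mdrd_egfr(1.4, 60, female=False, black=False), 1))
print(round(mdrd_egfr(1.4, 60, female=False, black=True), 1))
```

No machine learning is involved anywhere in this calculation, yet the tool's output differs by demographic group by construction, which is precisely why the researchers argue that oversight cannot stop at AI.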

“Regulating clinical risk scores poses significant challenges due to the proliferation of clinical decision support tools embedded in electronic medical records and their widespread use in clinical practice,” says co-author Maia Hightower, CEO of Equality AI. “Such regulation remains necessary to ensure transparency and nondiscrimination.”

However, Hightower adds that under the incoming administration, the regulation of clinical risk scores may prove to be “particularly challenging, given its emphasis on deregulation and opposition to the Affordable Care Act and certain nondiscrimination policies.” 
