MarkTechPost@AI | December 7, 2024
This AI Paper from UCLA Unveils ‘2-Factor Retrieval’ for Revolutionizing Human-AI Decision-Making in Radiology

Researchers at the University of California, Los Angeles (UCLA) have introduced a new method called 2-Factor Retrieval (2FR), a system that builds a verification mechanism into the AI decision-making process by letting clinicians cross-reference AI predictions against examples of similar cases from a labeled database. The 2FR system displays representative images from the labeled database alongside the AI-generated diagnosis, so clinicians can compare the retrieved examples with the pathology under review, supporting diagnostic recall and decision validation. Experimental results show that when the AI prediction was correct, accuracy with 2FR reached 70%, significantly higher than the other methods. The approach markedly improved diagnostic accuracy and confidence, particularly when AI predictions were correct, and this verification mechanism promises to optimize human-AI collaboration and advance the adoption of AI in healthcare.

💡Researchers at the University of California, Los Angeles (UCLA) introduced a new method called 2-Factor Retrieval (2FR), which integrates a verification mechanism into the AI decision-making process, aiming to improve the accuracy and reliability of AI-assisted diagnosis.

🔍The 2FR design presents representative images from a labeled database alongside the AI-generated diagnosis, allowing clinicians to cross-reference the AI prediction with examples of similar cases, thereby supporting diagnostic recall and decision validation.

📊The study evaluated 2FR in a controlled experiment with 69 clinicians of varying specialties and experience levels, diagnosing four conditions on chest X-rays: cardiomegaly, pneumothorax, mass/nodule, and effusion.

📈The results show that when the AI prediction was correct, accuracy with 2FR reached 70%, significantly higher than saliency-map-based support (65%), AI predictions alone (64%), and no AI support (45%).

🩺The study highlights the transformative potential of verification-based approaches in AI decision-support systems: by allowing clinicians to verify AI predictions, 2FR improves accuracy and confidence, reduces cognitive load, and builds trust in AI-assisted decision-making.

Integrating AI into clinical practice is challenging, especially in radiology. While AI has been shown to improve diagnostic accuracy, its "black-box" nature often erodes clinicians' confidence and acceptance. Current clinical decision support systems (CDSSs) are either not explainable or rely on methods like saliency maps and Shapley values, which give clinicians no reliable way to verify AI-generated predictions independently. This gap matters: it limits the potential of AI in medical diagnosis and raises the risk of overreliance on potentially incorrect AI output. Addressing it requires new approaches that close the trust deficit and equip health professionals with tools to assess the quality of AI decisions in demanding environments like healthcare.

Explainability techniques in medical AI, such as saliency maps, counterfactual reasoning, and nearest-neighbor explanations, have been developed to make AI outputs more interpretable. Their main goal is to show how the AI arrives at a prediction, giving clinicians useful information about the decision-making process behind it. However, these techniques have limitations. One of the greatest is overreliance on the AI: clinicians are often swayed by convincing but incorrect explanations presented alongside its output.
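
To make the first of these techniques concrete, the sketch below computes a basic gradient saliency map, one generic form of the explanation methods named above. This is an illustration only, not the method used in the paper; the model and tensor shapes are placeholder assumptions.

```python
# Hedged sketch of gradient-based saliency (a generic explanation
# technique); the model and shapes are assumptions, not the paper's setup.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor, target: int) -> torch.Tensor:
    """Absolute input gradient of the target-class score: large values
    mark pixels that most influence the prediction."""
    model.eval()
    x = image.clone().requires_grad_(True)     # image: (C, H, W)
    score = model(x.unsqueeze(0))[0, target]   # logit for the target class
    score.backward()
    return x.grad.abs().amax(dim=0)            # collapse channels -> (H, W)
```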

Cognitive biases such as confirmation bias compound this problem, often leading to incorrect decisions. Most importantly, these methods lack strong verification mechanisms that would let clinicians check the reliability of AI predictions. These limitations underscore the need for approaches that go beyond explainability to proactively support verification and strengthen human-AI collaboration.

To address these limitations, researchers from the University of California, Los Angeles (UCLA) introduced a novel approach called 2-Factor Retrieval (2FR). The system integrates verification into AI decision-making by allowing clinicians to cross-reference AI predictions with examples of similarly labeled cases. The design presents AI-generated diagnoses alongside representative images retrieved from a labeled database; these visual aids let clinicians compare the retrieved examples with the pathology under review, supporting diagnostic recall and decision validation. By engaging clinicians more actively in validating AI outputs, the design reduces overreliance and encourages a collaborative diagnostic process. It improves both trust and precision, a notable step toward the seamless integration of artificial intelligence into clinical practice.
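
As a rough illustration of the retrieval step described above, the sketch below selects labeled exemplars that match the AI's predicted class and are nearest to the query image in an embedding space. The embedding function, database layout, and names are assumptions for illustration, not the paper's implementation.

```python
# Illustrative 2FR-style retrieval step (a sketch under assumptions,
# not the authors' code): given an AI prediction, fetch same-label
# exemplars closest to the query in embedding space.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder embedding; a real system would use a pretrained
    imaging backbone (e.g., a CNN) to produce this vector."""
    return image.reshape(-1).astype(np.float32)

def retrieve_exemplars(query_image, predicted_label, database, k=3):
    """database: list of (image, label, embedding) tuples built offline
    from a labeled archive of confirmed cases."""
    q = embed(query_image)
    qn = q / (np.linalg.norm(q) + 1e-8)
    same_label = [(img, emb) for img, lbl, emb in database if lbl == predicted_label]
    # Rank same-label candidates by cosine similarity to the query.
    scored = sorted(
        same_label,
        key=lambda pair: -float(qn @ (pair[1] / (np.linalg.norm(pair[1]) + 1e-8))),
    )
    return [img for img, _ in scored[:k]]

# The clinician then sees `predicted_label` displayed next to the
# retrieved exemplars and compares them with the case under review.
```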

The study evaluated 2FR in a controlled experiment with 69 clinicians of varying specialties and experience levels. It used the NIH Chest X-ray dataset, with images labeled for cardiomegaly, pneumothorax, mass/nodule, and effusion. Clinicians were randomized across four modalities: AI-only predictions, AI predictions with saliency maps, AI predictions with 2FR, and no AI assistance. Cases of varying difficulty (easy and hard) were included to measure the effect of task complexity. Diagnostic accuracy and confidence were the two primary metrics, analyzed with linear mixed-effects models controlling for clinician expertise and AI correctness. This design supports a thorough assessment of the method's efficacy.
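
For readers curious what such an analysis might look like in practice, here is a minimal sketch of a linear mixed-effects model with a per-clinician random intercept, using statsmodels. The column names (accuracy, modality, ai_correct, expertise, clinician_id) are illustrative assumptions, not the study's actual variable names.

```python
# Minimal mixed-effects sketch (assumed column names, not the study's):
# fixed effects for modality, AI correctness, and expertise, with a
# random intercept per clinician to handle repeated measures.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # hypothetical: one row per clinician-case reading

model = smf.mixedlm(
    "accuracy ~ C(modality) * ai_correct + expertise",
    data=df,
    groups=df["clinician_id"],
)
print(model.fit().summary())
```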

The results show that 2FR significantly improves diagnostic accuracy in AI-assisted decision-making. When the AI-generated prediction was correct, accuracy with 2FR reached 70%, significantly higher than with saliency-based methods (65%), AI-only predictions (64%), or no AI support (45%). The method was particularly helpful for less confident clinicians, who showed the largest gains relative to the other approaches, and the benefit held for radiologists across all experience levels. When AI predictions were wrong, however, accuracy declined similarly across all modalities, suggesting that clinicians fell back on their own skills in those cases. Overall, these results demonstrate 2FR's capacity to improve diagnostic confidence and performance, especially when the AI predictions are accurate.

This work underlines the transformative potential of verification-based approaches in AI decision-support systems. By moving beyond the limitations of traditional explainability methods, 2FR lets clinicians verify AI predictions directly, which enhances accuracy and confidence. The system also reduces cognitive workload and builds trust in AI-assisted decision-making in radiology. Integrating such verification mechanisms into human-AI collaboration points toward better and safer AI deployments in healthcare, and future work may explore the long-term impact on diagnostic strategies, clinician training, and patient outcomes. A next generation of AI systems built around 2FR-style verification could contribute considerably to medical practice with high reliability and accuracy.


