Nvidia Developer · February 16
AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment

Researchers at Stanford University have introduced a new AI model named MUSK, designed to streamline cancer diagnosis, treatment planning, and prognosis prediction. MUSK uses deep learning to process clinical text data and pathology images, identifying patterns that doctors may have difficulty detecting and thereby delivering more accurate clinical insights. The model performs well at matching pathology images with medical text, detecting and classifying cancer subtypes, and predicting cancer survival outcomes, giving oncologists better-informed decision support and advancing precision oncology.

🔬 The MUSK model uses deep learning to integrate clinical text data (such as doctor's notes) and pathology images (such as histology slides), identifying patterns that may not be immediately apparent to doctors and enabling better clinical insights.

📊 MUSK outperformed existing AI models across 23 pathology benchmarks, excelling in particular at matching pathology images with the related medical text, which makes it more effective at gathering relevant patient information. It answered pathology-related questions, such as identifying cancerous regions or predicting the presence of biomarkers, with 73% accuracy.

🎗️ MUSK improves the detection and classification of cancer subtypes such as breast, lung, and colorectal cancer by up to 10%, aiding early diagnosis and treatment planning. It also detects breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%.

⏱️ MUSK reliably predicts cancer survival outcomes with 75% accuracy and predicts which lung and gastro-esophageal cancers will respond to immunotherapy with 77% accuracy. By comparison, standard clinical biomarkers are only 60-65% accurate.

A new study and AI model from researchers at Stanford University are streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data.

“Multimodal foundation models are a new frontier in medical AI research,” said Ruijiang Li, an associate professor of radiation oncology and the study's senior author. “Recently, vision–language foundation models have been developed for medicine, particularly in the field of pathology. However, existing studies use off-the-shelf foundation models that require paired image–text data for pretraining. Despite extensive efforts that led to the curation of 1M pathology image–text pairs, it's still insufficient to fully capture the diversity of the entire disease spectrum.”

Oncologists rely on many data sources when considering a patient's condition and planning optimal treatments. However, integrating and interpreting complex medical data remains difficult for both doctors and AI models. The study, recently published in Nature, highlights how MUSK could help doctors make more accurate and informed decisions while also solving this long-standing challenge in medical AI.

Using deep learning, MUSK processes clinical text data (such as doctor's notes) and pathology images (like histology slides) to identify patterns that may not be immediately obvious to doctors, leading to better clinical insights. To do so, it uses a two-step multimodal transformer model. First, it learns from large amounts of unpaired data, pulling useful features from the text and images. Then it fine-tunes its understanding of the data by linking paired image-text data, which helps it recognize different types of cancer, predict biomarkers, and suggest effective treatment options.

The researchers pretrained the AI model on one of the biggest datasets in the field, using 50M pathology images from 11,577 patients with 33 tumor types and 1B pathology-related text data. According to Jinxi Xiang, study lead author and postdoctoral scholar in radiation physics, the pretraining was conducted over 10 days using 64 NVIDIA V100 Tensor Core GPUs across eight nodes, enabling MUSK to process vast amounts of pathology images and clinical text efficiently. A secondary pretraining phase and ablation studies used NVIDIA A100 80 GB Tensor Core GPUs. The researchers also used NVIDIA RTX A6000 GPUs for evaluating downstream tasks. The framework was accelerated with the NVIDIA CUDA and NVIDIA cuDNN libraries for optimized performance.

When tested on 23 pathology benchmarks, MUSK outperformed existing AI models in several key areas. It excelled at matching pathology images with the corresponding medical text, making it more effective at gathering relevant patient information. It also interpreted pathology-related questions, such as identifying a cancerous area or predicting biomarker presence, with 73% accuracy.

Figure 1. An example of the visual question-answering MUSK can perform

It improved detection and classification for cancer subtypes including breast, lung, and colorectal cancer by up to 10%, which could help with early diagnosis and treatment planning. It also detected breast cancer biomarkers with an AUC (a measure of model accuracy) of 83%. Additionally, MUSK reliably predicted cancer survival outcomes 75% of the time, and predicted which lung and gastro-esophageal cancers would respond to immunotherapy with 77% accuracy.
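To make the two-step training recipe described above more concrete, here is a minimal, hypothetical PyTorch sketch of the general idea: a masked-modeling objective applied to unpaired tokens (step one), followed by a CLIP-style contrastive objective that aligns paired image and text embeddings (step two). The class and function names, dimensions, and loss choices are illustrative assumptions only; they do not reflect MUSK's actual architecture or the code released by the authors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """Stand-in transformer encoder for one modality (illustrative, not MUSK's architecture)."""

    def __init__(self, vocab_size, dim=256, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.to_logits = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        hidden = self.encoder(self.embed(tokens))   # [batch, seq, dim]
        return hidden, self.to_logits(hidden)       # features + token predictions


def masked_modeling_loss(model, tokens, mask_token_id, mask_prob=0.15):
    """Step 1: masked prediction on unpaired data. Pathology images are assumed to be
    pre-tokenized into discrete patch tokens, so the same objective covers both modalities."""
    mask = torch.rand(tokens.shape, device=tokens.device) < mask_prob
    corrupted = tokens.clone()
    corrupted[mask] = mask_token_id
    _, logits = model(corrupted)
    return F.cross_entropy(logits[mask], tokens[mask])


def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Step 2: CLIP-style InfoNCE loss that pulls paired image/text embeddings together."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Toy usage: a shared vocabulary of 1,000 tokens, with the last one reserved as [MASK].
vocab_size, mask_id = 1000, 999
encoder = TinyEncoder(vocab_size)
toy_tokens = torch.randint(0, vocab_size - 1, (4, 16))      # batch of 4 sequences, length 16

step1_loss = masked_modeling_loss(encoder, toy_tokens, mask_token_id=mask_id)

img_hidden, _ = encoder(toy_tokens)                          # pretend: image patch tokens
txt_hidden, _ = encoder(toy_tokens)                          # pretend: paired report tokens
step2_loss = contrastive_alignment_loss(img_hidden.mean(dim=1), txt_hidden.mean(dim=1))
print(step1_loss.item(), step2_loss.item())
```

The point of the sketch is the ordering: the masked objective needs no image-text pairing, so it can exploit the much larger pool of unpaired data, while the alignment objective is reserved for the smaller paired set.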
This is a significant improvement over standard clinical biomarkers, which are only 60-65% accurate.

“One striking finding is that AI models that integrate multi-modal data consistently outperform those based on imaging or text data alone, highlighting the power of a multimodal approach,” Li said. “The true value of MUSK lies in its ability to leverage large-scale unpaired image and text data for pretraining, which is a substantial increase over existing models that require paired data.”

A core strength of the research is that the model can adapt across different clinical settings with little training. This could improve efficiency in oncology workflows and help doctors diagnose cancer faster while tailoring treatments for better patient outcomes.

Future work will focus on validating the model in multi-institution cohorts of patients from diverse populations and on high-stakes applications such as treatment decision-making. The researchers note that prospective validation in clinical trials will be required for regulatory approval. “We are also working on an extension of the MUSK approach to digital pathology to other types of data such as radiology images and genomic data,” said Li.

The researchers' work, including installation instructions, model weights, evaluation code, and sample data, is available on GitHub.
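To illustrate what "adapting with little training" and the AUC figures cited above typically look like in practice, the following is a minimal, self-contained sketch of a linear-probe workflow: frozen foundation-model embeddings, a small classifier fit on top, and AUC as the evaluation metric. The data here is randomly generated placeholder data; this is not the authors' released evaluation code and makes no claim about how MUSK's benchmarks were actually run.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for frozen foundation-model embeddings
# (e.g., one feature vector per patient slide) and binary biomarker labels.
rng = np.random.default_rng(seed=0)
embeddings = rng.normal(size=(500, 768))
labels = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0, stratify=labels
)

# "Little training": only a lightweight linear classifier is fit on top of the frozen features.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)

# AUC is the metric the article cites for biomarker detection (0.83 for breast cancer);
# with random placeholder data it will hover around 0.5.
scores = probe.predict_proba(X_test)[:, 1]
print(f"AUC: {roc_auc_score(y_test, scores):.3f}")
```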

