MIT News - Artificial intelligence
Changing the conversation in health care

The article explores the transformative potential of generative artificial intelligence (AI) in health care, focusing on how AI can improve communication between doctors and patients and thereby improve treatment outcomes. Through MIT’s Language/AI Incubator project, researchers are exploring AI’s applications in cross-linguistic and cross-cultural communication, aiming to bridge socioeconomic, cultural, and linguistic divides. The article emphasizes the importance of language in medicine and how AI can help physicians better understand patients’ needs and cultural backgrounds. It also notes the ethical and social questions that AI applications raise, calling for interdisciplinary collaboration to build a more humane health care system.

🗣️ **Language as medicine’s bridge**: The article emphasizes language’s central role in health care, holding that effective communication is key to good treatment. Language is not only a tool for conveying information; it also reflects culture, identity, and power dynamics, shaping patients’ treatment experiences and outcomes.

🤝 **AI-empowered doctor-patient communication**: Researchers are exploring how AI, particularly large language models (LLMs), can improve communication between doctors and patients. AI can supply cross-cultural and cross-linguistic background knowledge, helping physicians better understand their patients and offer more personalized treatment plans.

💡 **The importance of interdisciplinary collaboration**: The article stresses interdisciplinary collaboration, particularly the integration of the social sciences with the hard sciences. Bringing together experts from different fields, such as physicians, linguists, sociologists, and AI researchers, can deepen understanding of AI’s uses in medicine and help resolve potential ethical and social problems.

🌍 **A focus on inclusivity and fairness**: The article also addresses the inclusivity and fairness concerns that AI applications raise. Researchers need to account for differences across linguistic and cultural backgrounds so that AI tools do not deepen existing inequalities, ensuring the technology serves everyone, especially vulnerable groups.

🌱 **Rethinking medical education**: The article calls for rethinking medical education and encouraging doctors and patients to participate in medical practice together. Changing the traditional model of care can make better use of AI technology and raise the quality and efficiency of health care.

Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges. 

The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.

The incubator is co-led by Leo Celi, a physician and the research director and senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program. 

“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”

A chance collaboration

Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.

“We’re trying to incorporate data science into health-care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”

Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”

Technology, they argue, shapes casual communication, and its effects depend on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness.

Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers responsible AI development and implementation. Designing systems that leverage AI effectively, particularly when considering challenges related to communicating across linguistic and cultural divides that can occur in health care, demands a nuanced approach. 

“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.

Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.

“Science has to have a heart” 

LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”

The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.

“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”

“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?” 

Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous. 

Changing the conversation

AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks that offer valuable cultural and linguistic context, so that patients and practitioners can rely on data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says.

“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.

“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”

“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.

Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.

The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.

Greater integration between the social and hard sciences can potentially increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives. 

“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”

Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Since meaning and intent can shift across languages and varieties of speech, it’s important to keep these differences in mind when designing AI tools.

“AI is our chance to rewrite the rules”

While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care. 

But the team isn’t daunted.

Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”

Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.

“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”

Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.

“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”

“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”
