TechCrunch News
People struggle to get useful health advice from chatbots, study finds

 

As pressure on healthcare systems grows, AI chatbots are increasingly used for medical self-diagnosis. A recent study shows that people struggle to supply the information these chatbots need to give the best health advice, creating a communication breakdown that can leave them worse off than with traditional methods. Participants using tools such as ChatGPT were not only less likely to identify relevant health conditions, but also tended to underestimate the severity of the conditions they did identify. Experts urge caution before applying AI in high-risk medical settings and call for thorough real-world testing.

🚨 The study found a two-way communication breakdown when users turn to AI chatbots for medical self-diagnosis: they struggle to provide the key information, and the resulting advice is no better than that from traditional methods.

🤖 Participants using chatbots such as ChatGPT, Cohere's Command R+, and Meta's Llama 3 were more likely to miss relevant health conditions and to underestimate the severity of the conditions they did identify.

⚠️ Experts advise caution in applying AI to medical decision-making: the American Medical Association recommends against physicians using tools like ChatGPT to assist with clinical decisions, and OpenAI warns against making diagnoses based on its chatbots' outputs.

🧪 Like clinical trials for new medications, chatbot systems should be tested in real-world settings before deployment to assess their effectiveness and safety in complex interactions with human users.

With long waiting lists and rising costs in overburdened healthcare systems, many people are turning to AI-powered chatbots like ChatGPT for medical self-diagnosis. About 1 in 6 American adults already use chatbots for health advice at least monthly, according to one recent survey.

But placing too much trust in chatbots’ outputs can be risky, in part because people struggle to know what information to give chatbots for the best possible health recommendations, according to a recent Oxford-led study.

“The study revealed a two-way communication breakdown,” Adam Mahdi, director of graduate studies at the Oxford Internet Institute and a co-author of the study, told TechCrunch. “Those using [chatbots] didn’t make better decisions than participants who relied on traditional methods like online searches or their own judgment.”

For the study, the authors recruited around 1,300 people in the U.K. and gave them medical scenarios written by a group of doctors. The participants were tasked with identifying potential health conditions in the scenarios and using chatbots, as well as their own methods, to figure out possible courses of action (e.g. seeing a doctor or going to the hospital).

The participants used the default AI model powering ChatGPT, GPT-4o, as well as Cohere’s Command R+ and Meta’s Llama 3, which once underpinned the company’s Meta AI assistant. According to the authors, the chatbots not only made the participants less likely to identify a relevant health condition, but also made them more likely to underestimate the severity of the conditions they did identify.

Mahdi said that the participants often omitted key details when querying the chatbots or received answers that were difficult to interpret.

“[T]he responses they received [from the chatbots] frequently combined good and poor recommendations,” he added. “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users.”


The findings come as tech companies increasingly push AI as a way to improve health outcomes. Apple is reportedly developing an AI tool that can dispense advice related to exercise, diet, and sleep. Amazon is exploring an AI-based way to analyze medical databases for “social determinants of health.” And Microsoft is helping build AI to triage messages to care providers sent from patients.  

But as TechCrunch has previously reported, both professionals and patients are mixed as to whether AI is ready for higher-risk health applications. The American Medical Association recommends against physician use of chatbots like ChatGPT for assistance with clinical decisions, and major AI companies including OpenAI warn against making diagnoses based on their chatbots’ outputs.

“We would recommend relying on trusted sources of information for health care decisions,” Mahdi said. “Current evaluation methods for [chatbots] do not reflect the complexity of interacting with human users. Like clinical trials for new medications, [chatbot] systems should be tested in the real world before being deployed.”


Related tags

AI in healthcare, ChatGPT, medical diagnosis, risk assessment