TechCrunch News, November 20, 2024
PSA: You shouldn’t upload your medical images to AI chatbots

As generative AI chatbots become more popular, more and more people are using them to ask about medical concerns and to better understand their health. However, uploading personal medical data to an AI chatbot carries security and privacy risks. The article points out that uploaded medical data may be used to train AI models, which can expose sensitive personal information, and that most consumer apps are not covered by HIPAA. While some companies claim that uploaded data helps improve the accuracy of their AI models, how the data is used and with whom it is shared is not always transparent, leaving users to rely on the companies' promises. Before uploading personal medical data to an AI chatbot, weigh the trade-offs carefully and err on the side of protecting your privacy.

🤔 **Uploading personal medical data to AI chatbots carries security risks:** Uploaded medical data may be used to train AI models, which can expose sensitive personal information such as medical histories and diagnoses. Once exposed, that information could end up with healthcare providers, potential employers, or government agencies, with unpredictable consequences.

⚠️ **Most consumer apps are not covered by HIPAA:** HIPAA is the U.S. healthcare privacy law, but most consumer apps fall outside its scope, meaning uploaded medical data lacks legal protection and is more vulnerable to leaks and misuse.

💡 **How AI training data is used is not transparent:** AI models need training data, but how that data is used and with whom it is shared is not always clear. Users have little insight into how their data is handled or who it is passed on to, which increases the risk of exposure.

💻 **Data on the internet is hard to fully delete:** Once data is uploaded to the internet, it is very difficult to remove completely, meaning personal medical data could be retained and used indefinitely, posing an ongoing risk.

🚀 **AI models are still at an early stage:** Some AI models, such as Grok on X, claim to be able to interpret medical imagery, but their accuracy and reliability remain unproven, so caution is warranted when using them.

Here’s a quick reminder before you get on with your day: Think twice before you upload your private medical data to an AI chatbot.

Folks are frequently turning to generative AI chatbots, like OpenAI’s ChatGPT and Google’s Gemini, to ask questions about their medical concerns and to better understand their health. Some have relied on questionable apps that use AI to decide if someone’s genitals are free of disease, for example. And most recently, since October, users on social media site X have been encouraged to upload their X-rays, MRIs, and PET scans to the platform’s AI chatbot Grok to help interpret their results.

Medical data is a special category with federal protections that, for the most part, only you can choose to circumvent. But just because you can doesn’t mean you should. Security and privacy advocates have long warned that any uploaded sensitive data can then be used to train AI models, and risks exposing your private and sensitive information down the line. 

Generative AI models are often trained on the data that they receive, under the premise that the uploaded data helps to build out the information and accuracy of the model’s outputs. But it’s not always clear how and for what purposes the uploaded data is being used, or whom the data is shared with — and companies can change their minds. You must trust the companies largely at their word.

People have found their own private medical records in AI training data sets — and that means anybody else can, including healthcare providers, potential future employers, or government agencies. And, most consumer apps aren’t covered under the U.S. healthcare privacy law HIPAA, offering no protections for your uploaded data. 

X owner Elon Musk, who in a post encouraged users to upload their medical imagery to Grok, conceded that the results from Grok are “still early stage,” but that the AI model “will become extremely good.” The idea is that by asking users to submit their medical imagery, the AI model will improve over time and become capable of interpreting medical scans with consistent accuracy. It isn’t clear who has access to this Grok data; as noted elsewhere, Grok’s privacy policy says that X shares some users’ personal information with an unspecified number of “related” companies.

It’s good to remember that what goes on the internet never leaves the internet. 
