LessWrong · July 21, 2024
Freedom and Privacy of Thought Architectures

This article explores how to keep conversations with AI systems private and proposes some possible solutions. The author notes that protecting sensitive information such as financial data, state secrets, and health data, as well as personal privacy, requires a mechanism that makes communication between users and AI systems as secure as a priest's confessional. Locally hosted AI could be a solution, but performance problems make it impractical for now. The author also proposes a scheme based on encryption and zero-knowledge proofs that could help build trust in the security of AI systems, though technical challenges remain.

🤔 To protect the privacy of conversations with AI systems, the author raises several key questions: how can communication with an AI system be made as secure as a confessional? How can leaks of sensitive information be prevented? How can AI companies be kept from exploiting user data?

💡 Locally hosted AI could be a solution, but performance problems make it impractical for now. The author argues that even major advances in data cleaning and algorithmic learning are unlikely to bring locally hosted AI to satisfactory performance.

🔐 The author proposes a scheme based on encryption and zero-knowledge proofs that could help build trust in the security of AI systems. The scheme would protect the privacy of user inputs and outputs and block third-party access to user data. However, the author notes that no mature technique currently guarantees that the AI system itself cannot access users' unencrypted data.

🧐 The author calls for more research and development on AI privacy protection, and poses questions for readers, such as: how would one design an audit system that ensures an AI system cannot access users' unencrypted data? How should the usefulness of AI be balanced against user privacy?

Published on July 20, 2024 9:43 PM GMT

I don't work in cyber security, so others will have to teach me.

I'm interested in the question of how AI systems can become private. How to make communications with an AI system as protected as the confessional. Some AI capabilities are throttled not for public interest reasons but because if those private conversations became public, the company would suffer reputational damage.

I'm not libertarian enough to mind that AI companies don't allow certain unsavory conversations to occur, but I do think they could be more permissive if there were less risk of blowback.

A lot of high-value uses of AIs are impossible without data security for the inputs and outputs. Sensitive financial information, state secrets, health data: this isn't information you can just hand over to an AI company, no matter the promise of security.

Similarly, a lot of individuals are going to want to cordon off certain parts of their lives, including their own mental health.

The obvious answer is to have locally hosted AI. However, even vast improvements in data cleaning and algorithmic learning are unlikely to get us acceptably high performance.

You could start out with your local host, send an encrypted file, and receive an encrypted file back from a huge network-hosted model. But I don't see how that model could interact with the encrypted file, not having been trained on that kind of thing as an input. There's no point in sending the key along with it.

Or is there?
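To make the problem concrete, here is a minimal toy sketch (a one-time-pad-style XOR, not real cryptography) of the client-side flow: the key never leaves the client, so what the server-side model receives is ciphertext that bears no relation to any text it was trained on.

```python
# Toy sketch: client encrypts a prompt before sending it to a hosted model.
# Not real cryptography -- just an illustration of why raw ciphertext is
# useless as model input when the key stays on the client.
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad-style XOR; encryption and decryption are the same operation."""
    return bytes(b ^ k for b, k in zip(data, key))

prompt = b"My confidential health question"
key = secrets.token_bytes(len(prompt))   # generated and kept on the client

ciphertext = xor_cipher(prompt, key)     # this is all the hosted model would see

# The client can always recover the plaintext locally:
assert xor_cipher(ciphertext, key) == prompt
```

A standard model cannot do anything meaningful with `ciphertext`, which is exactly the dilemma the post describes: send plaintext and lose privacy, or send ciphertext and lose the model's usefulness.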

If there is an encryption and decryption layer in the AI system for the inputs and the outputs, an AI service could probably use zero-knowledge proofs (or something else) to help create trust that they do not have a method to read your messages. At the very least, this would help with blocking out third parties.
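One building block such a trust scheme might use is attestation: an auditor inspects the serving code, publishes a hash of it, and signs that hash, so a client can check the deployed code matches what was audited before sending anything sensitive. Below is a toy sketch using an HMAC with a shared secret as a stand-in for the public-key signature a real attestation scheme would use; all names (`attest`, `client_verifies`, `AUDITOR_KEY`) are hypothetical.

```python
# Toy attestation sketch. A real scheme would use public-key signatures
# and hardware-backed attestation; the HMAC shared secret here is only a
# stand-in to keep the example self-contained.
import hashlib
import hmac

AUDITOR_KEY = b"auditor-shared-secret"  # hypothetical; real schemes avoid shared secrets

def attest(binary: bytes) -> tuple[bytes, bytes]:
    """Auditor side: hash the serving code and sign the hash."""
    digest = hashlib.sha256(binary).digest()
    tag = hmac.new(AUDITOR_KEY, digest, hashlib.sha256).digest()
    return digest, tag

def client_verifies(binary: bytes, digest: bytes, tag: bytes) -> bool:
    """Client side: check that the running code matches the audited hash."""
    if hashlib.sha256(binary).digest() != digest:
        return False
    expected = hmac.new(AUDITOR_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

audited_binary = b"model-serving-code-v1"
digest, tag = attest(audited_binary)
assert client_verifies(audited_binary, digest, tag)        # audited code passes
assert not client_verifies(b"tampered-code", digest, tag)  # modified code fails
```

This only establishes that the code you audited is the code running; it does not by itself prove the audited code never logs plaintext, which is the harder audit-design question the next paragraph raises.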

But I don't know enough about software architecture to design an audit that would show the AI company did not have access to the unencrypted input or output.


