cs.AI updates on arXiv.org, July 9, 12:01
Controlling What You Share: Assessing Language Model Adherence to Privacy Preferences

This paper proposes a privacy-preserving framework for handling LLM queries, using privacy profiles to control data disclosure, and constructs PEEP, a multilingual dataset supporting the framework. Experiments show that lightweight LLMs can follow such instructions to some extent but still face consistent challenges, underscoring the need for models that better understand and comply with user privacy preferences.

arXiv:2507.05391v1 Announce Type: cross Abstract: Large language models (LLMs) are primarily accessed via commercial APIs, but this often requires users to expose their data to service providers. In this paper, we explore how users can stay in control of their data by using privacy profiles: simple natural language instructions that say what should and should not be revealed. We build a framework where a local model uses these instructions to rewrite queries, only hiding details deemed sensitive by the user, before sending them to an external model, thus balancing privacy with performance. To support this research, we introduce PEEP, a multilingual dataset of real user queries annotated to mark private content and paired with synthetic privacy profiles. Our experiments with lightweight LLMs show they can follow these instructions to some extent, but also face consistent challenges, highlighting the need for models that better understand and comply with user-defined privacy preferences.
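The pipeline the abstract describes can be sketched minimally: a local component consults the user's privacy profile and rewrites the query before it reaches the external API. The paper uses a local LLM for the rewriting step; the rule-based redactor below is only an illustrative stand-in, and all names (`PrivacyProfile`, `rewrite_query`, the example terms) are our assumptions, not from the paper.

```python
# Hedged sketch of the privacy-profile query-rewriting pipeline.
# The local LLM rewriter from the paper is replaced here by a simple
# term-masking stub; names and structure are illustrative only.
from dataclasses import dataclass


@dataclass
class PrivacyProfile:
    """Details the user's natural-language instructions flag as sensitive."""
    hidden_terms: list


def rewrite_query(query: str, profile: PrivacyProfile) -> str:
    """Local step: mask only the details the user deemed sensitive,
    so the external provider never sees them."""
    rewritten = query
    for term in profile.hidden_terms:
        rewritten = rewritten.replace(term, "[REDACTED]")
    return rewritten


def query_external_model(query: str) -> str:
    # Placeholder for the commercial API call; only the rewritten
    # query ever leaves the user's device.
    return f"(external model response to: {query!r})"


profile = PrivacyProfile(hidden_terms=["Acme Corp", "acute leukemia"])
user_query = ("I work at Acme Corp and was just diagnosed with "
              "acute leukemia; what are my options?")
safe_query = rewrite_query(user_query, profile)
print(safe_query)
```

The design point this illustrates is the privacy/performance balance: only user-flagged details are hidden, so the external model still receives enough context to answer usefully.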


Related tags

Privacy protection, LLM, data control, multilingual dataset, user preferences