TechCrunch News · March 20, 14:03
ChatGPT hit with privacy complaint over defamatory hallucinations

OpenAI's ChatGPT is once again facing a privacy complaint in Europe over the AI chatbot's tendency to fabricate information. A user in Norway discovered that ChatGPT had invented a story claiming he was convicted of murdering two of his children and attempting to kill a third. Privacy rights advocacy group Noyb is backing the user, pointing out that ChatGPT generates incorrect personal data and that OpenAI offers no mechanism to correct it. The EU's GDPR gives individuals the right to rectify their personal data and requires data controllers to ensure that personal data is accurate. Noyb argues that a disclaimer alone cannot fix the problem and is calling on regulators to take the dangers of AI-fabricated information seriously.

⚠️ ChatGPT is facing another European privacy complaint over generated falsehoods, this time a fabricated criminal record claiming a user murdered his own children, raising concerns about the accuracy of AI-generated personal data.

⚖️ Noyb argues OpenAI is in breach of the GDPR, which requires personal data to be accurate and gives individuals the right to correct false information. A disclaimer does not release AI developers from their duty to ensure accuracy.

🌐 ChatGPT was previously blocked temporarily in Italy over similar issues, prompting OpenAI to improve its disclosures. European regulators nevertheless remain cautious about GenAI and are still working out how to apply the GDPR to it.

🔍 Noyb notes this is not an isolated case: ChatGPT has fabricated legal troubles and other falsehoods about other people too. Although an updated model has stopped generating the false claims about this user, concerns remain that the incorrect information may still be retained within the AI model.

OpenAI is facing another privacy complaint in Europe over its viral AI chatbot’s tendency to hallucinate false information — and this one might prove tricky for regulators to ignore.

Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he’d been convicted for murdering two of his children and attempting to kill the third.

Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong birth date or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct the false information the AI generates about them; typically, OpenAI has offered to block responses to such prompts instead. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.

Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate — and that’s a concern Noyb is flagging with its latest ChatGPT complaint.

“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”

Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.

Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection watchdog that saw ChatGPT access temporarily blocked in the country in spring 2023 led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people’s data without a proper legal basis.

Since then, though, it’s fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy AI tools.

Two years ago, Ireland’s Data Protection Commission (DPC) — which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint — urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies.

And it’s notable that a privacy complaint against ChatGPT that’s been under investigation by Poland’s data protection watchdog since September 2023 still hasn’t yielded a decision.

Noyb’s new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.

The nonprofit shared a screenshot with TechCrunch showing an interaction in which ChatGPT responds to the question “who is Arve Hjalmar Holmen?” — the name of the individual bringing the complaint — with a tragic fiction that falsely states he was convicted of child murder and sentenced to 21 years in prison for slaying two of his own sons.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT’s response does include some truths, since the individual in question does have three children. The chatbot also got the genders of his children right. And his hometown is correctly named. But that just makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.

A spokesperson for Noyb said they were unable to determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn’t just a mix-up with another person,” the spokesperson said, noting they’d looked into newspaper archives but hadn’t been able to find an explanation for why the AI fabricated the child slayings.

Large language models such as the one underlying ChatGPT essentially do next-word prediction on a vast scale, so we could speculate that the datasets used to train the tool contained lots of stories of filicide that influenced the word choices in response to a query about a named man.
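To make that mechanism concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library and the small, openly available GPT-2 model — not the model behind ChatGPT, and the prompt is purely hypothetical. The point it illustrates is that the model’s output is a probability ranking over plausible continuations, with no notion of factual truth about the person mentioned.

```python
# Minimal sketch of next-word prediction (assumes: pip install torch transformers).
# GPT-2 is used only as an open stand-in for the far larger model behind ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical prompt fragment; the model continues it by statistical plausibility.
prompt = "The man was convicted of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the logits for the final position into a probability distribution
# over the next token, then show the five most likely continuations.
next_token_logits = logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}  p={p.item():.3f}")
```

Nothing in this loop checks whether a continuation is true; the highest-probability words win. That is the property Noyb’s complaint runs up against when the subject of a query is a real, named person.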

Whatever the explanation, it’s clear that such outputs are entirely unacceptable.

Noyb’s contention is also that they are unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Check important info,” it says this cannot absolve the AI developer of its duty under GDPR not to produce egregious falsehoods about people in the first place.

OpenAI has been contacted for a response to the complaint.

While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information — such as the Australian mayor who said he was implicated in a bribery and corruption scandal or a German journalist who was falsely named as a child abuser — saying it’s clear that this isn’t an isolated issue for the AI tool.

One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen — a change that it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a wildly wrong response).

In our own tests asking ChatGPT “who is Arve Hjalmar Holmen?”, the chatbot initially responded with a slightly odd combo, displaying some photos of different people, apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn’t find any information” on an individual of that name (see our screenshot below). A second attempt turned up a response that identified Arve Hjalmar Holmen as “a Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

ChatGPT screenshot: Natasha Lomas/TechCrunch

While ChatGPT appears to have stopped generating dangerous falsehoods about Hjalmar Holmen, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.

“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted Kleanthi Sardeli, another data protection lawyer at Noyb, in a statement. “AI companies can also not just ‘hide’ false information from users while they internally still process false information.”

“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”

Noyb has filed the complaint against OpenAI with the Norwegian data protection authority — and it’s hoping the watchdog will decide it is competent to investigate, since Noyb is targeting the complaint at OpenAI’s U.S. entity, arguing that its Ireland office is not solely responsible for product decisions impacting Europeans.

However, an earlier Noyb-backed GDPR complaint against OpenAI, which was filed in Austria in April 2024, was referred by the regulator to Ireland’s DPC on account of a change made by OpenAI earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.

Where is that complaint now? Still sitting on a desk in Ireland.

“Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Risteard Byrne, assistant principal officer for communications at the DPC, told TechCrunch when asked for an update.

He did not offer any steer on when the DPC’s investigation of ChatGPT’s hallucinations is expected to conclude.
