TechCrunch News, December 3, 2024
Why does the name ‘David Mayer’ crash ChatGPT? Digital privacy requests may be at fault

Recently, users discovered that ChatGPT refuses to answer questions containing certain names, such as "David Mayer." The behavior sparked speculation, but it may simply stem from OpenAI applying special handling to certain names. Besides David Mayer, names such as Brian Hood and Jonathan Turley also cause ChatGPT to crash. These individuals may have asked search engines or AI models to "forget" certain information; for example, Australian mayor Brian Hood contacted OpenAI after ChatGPT falsely described him as a criminal suspect. OpenAI has not yet responded, but a likely explanation is that a special-handling rule for these names malfunctioned, leaving ChatGPT unable to process them. It is a reminder that AI models are not magic but complex systems that require ongoing maintenance and intervention, and that reliable sources should be sought when looking for information.

🤔 ChatGPT refuses to answer questions containing certain names, such as "David Mayer," sparking curiosity and speculation among users.

🕵️‍♂️ Besides "David Mayer," the names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza also cause ChatGPT to crash or fail to respond.

🔎 These individuals may have asked search engines or AI models to "forget" certain information; for example, Australian mayor Brian Hood contacted OpenAI after ChatGPT falsely described him as a criminal suspect.

⚠️ OpenAI likely applies special handling to these names, but a faulty rule may be preventing ChatGPT from processing them normally.

💡 This is a reminder that AI models are not magic but complex systems that require ongoing maintenance and intervention; seek reliable sources when looking for information.

Users of the conversational AI platform ChatGPT discovered an interesting phenomenon over the weekend: the popular chatbot refuses to answer questions if asked about a “David Mayer.” Asking it to do so causes it to freeze up instantly. Conspiracy theories ensued — but a more ordinary reason may be at the heart of this strange behavior.

Word spread quickly this last weekend that the name was poison to the chatbot, with more and more people trying to trick the service into merely acknowledging the name. No luck: Every attempt to make ChatGPT spell out that specific name causes it to fail or even break off mid-name.

“I’m unable to produce a response,” it says, if it says anything at all.

Image Credits: TechCrunch/OpenAI

But what began as a one-off curiosity soon bloomed as people discovered it isn’t just David Mayer who ChatGPT can’t name.

Also found to crash the service are the names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza. (No doubt more have been discovered since then, so this list is not exhaustive.)

Who are these men? And why does ChatGPT hate them so? OpenAI has not responded to repeated inquiries, so we are left to put together the pieces ourselves as best we can.

Some of these names may belong to any number of people. But a potential thread of connection was soon discovered: These people were public or semi-public figures who may have preferred to have certain information “forgotten” by search engines or AI models.

Brian Hood, for instance, stood out immediately because, if it's the same guy, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a crime from decades ago that, in fact, he had reported.

Though his lawyers got in contact with OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year, “The offending material was removed and they released version 4, replacing version 3.5.”

Image Credits: TechCrunch/OpenAI

As far as the most prominent owners of the other names, David Faber is a longtime reporter at CNBC. Jonathan Turley is a lawyer and Fox News commentator who was “swatted” (i.e., a fake 911 call sent armed police to his home) in late 2023. Jonathan Zittrain is also a legal expert, one who has spoken extensively on the “right to be forgotten.” And Guido Scorza is on the board at Italy’s Data Protection Authority.

These men are not exactly in the same line of work, nor are they a random selection. Each of them is conceivably someone who, for whatever reason, may have formally requested that information pertaining to them online be restricted in some way.

Which brings us back to David Mayer. There is no lawyer, journalist, mayor, or otherwise obviously notable person by that name that anyone could find (with apologies to the many respectable David Mayers out there).

There was, however, a Professor David Mayer, who taught drama and history, specializing in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023, at the age of 94. For years before that, however, the British American academic faced a legal and online issue of having his name associated with a wanted criminal who used it as a pseudonym, to the point where he was unable to travel.

Mayer fought continuously to have his name disambiguated from the one-armed terrorist, even as he continued to teach well into his final years.

So what can we conclude from all this? Lacking any official explanation from OpenAI, our guess is that the model has ingested a list of people whose names require some special handling. Whether due to legal, safety, privacy, or other concerns, these names are likely covered by special rules, just as many other names and identities are. For instance, ChatGPT may change its response when you ask about a political candidate after it matches the name you wrote to a list of those.

There are many such special rules, and every prompt goes through various forms of processing before being answered. But these post-prompt handling rules are seldom made public, except in policy announcements like “the model will not predict election results for any candidate for office.”
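As a rough sketch of what one such rule could look like, consider a pre-response filter that matches the prompt against a maintained list of names. This is purely illustrative: OpenAI has not described its pipeline, and the list entries and function below are invented for the example.

```python
import re

# Hypothetical list of names that receive special handling. OpenAI has not
# published such a list; these entries are invented for illustration only.
SPECIAL_HANDLING = [
    r"\bBrian Hood\b",
    r"\bJonathan Turley\b",
    r"\bDavid Mayer\b",
]

def needs_special_handling(prompt: str) -> bool:
    """Return True if the prompt mentions a name covered by a special rule."""
    return any(re.search(pattern, prompt, re.IGNORECASE)
               for pattern in SPECIAL_HANDLING)

# A matching prompt would be routed to a restricted-response path
# instead of the normal model output.
print(needs_special_handling("Tell me about David Mayer"))  # True
```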

What likely happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted with faulty code that, when called, caused the chat agent to immediately break. To be clear, this is just our own speculation based on what we’ve learned, but it would not be the first time an AI has behaved oddly due to post-training guidance. (Incidentally, as I was writing this, “David Mayer” started working again for some, while the other names still caused crashes.)
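To make that speculation concrete: if one entry in such a list were malformed, say a pattern with an unbalanced parenthesis, the check itself would throw before any answer is produced, which is roughly the shape of a hard "I'm unable to produce a response" failure. Again, this is a hypothetical sketch, not OpenAI's actual code.

```python
import re

# The same hypothetical filter, but with one corrupted entry: the unbalanced
# parenthesis makes the pattern invalid. Invented for illustration only.
SPECIAL_HANDLING = [
    r"\bBrian Hood\b",
    r"\bDavid Mayer(\b",  # corrupted entry
]

def needs_special_handling(prompt: str) -> bool:
    return any(re.search(p, prompt, re.IGNORECASE) for p in SPECIAL_HANDLING)

try:
    needs_special_handling("Who is David Mayer?")
except re.error as exc:
    # The check blows up before any response is generated, so the user
    # simply sees the conversation die instead of an answer.
    print(f"filter crashed: {exc}")
```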

As is usually the case with these things, Hanlon’s razor applies: Never attribute to malice (or conspiracy) that which is adequately explained by stupidity (or syntax error).

The whole drama is a useful reminder that not only are these AI models not magic, but they are also extra-fancy auto-complete, actively monitored, and interfered with by the companies that make them. Next time you think about getting facts from a chatbot, think about whether it might be better to go straight to the source instead.
