Mashable · 13 hours ago
Explaining the phenomenon known as AI psychosis

Recently, the phenomenon of so-called "AI psychosis" linked to the use of AI chat tools has drawn attention. Some users, after periods of intense AI chat use, have developed psychotic symptoms such as losing touch with reality, hallucinations, and delusions. Psychiatrists point out that AI chat does not directly cause psychosis, but it can amplify an individual's underlying vulnerabilities, especially when the user feels isolated, lacks social support, or places excessive trust in the AI. Prolonged, immersive AI interactions, along with the AI's excessive validation and "hallucinated" responses, can all heighten this risk. Experts advise seeking professional help promptly if related symptoms appear, and they stress the importance of social support for recovery. The phenomenon is a reminder that, as we embrace AI technology, we also need to pay attention to its potential mental health effects.

"AI psychosis" is an emerging risk in which some users, after intensive use of AI chat tools, may develop psychotic symptoms such as detachment from reality, hallucinations, and delusions. It suggests that immersive AI interaction can affect a user's mental state.

Experts believe that AI chat tools do not directly cause psychosis but can "supercharge" an individual's underlying vulnerabilities. Isolation, loneliness, and a high degree of trust in AI are important risk factors. Because AI "softens" the pushback that reality normally provides, users can more easily slip into delusions without reality-testing them.

Lengthy AI interactions are seen as a risk factor: they not only create more opportunities for delusions to emerge but can also deprive users of sleep, impairing their ability to reality-test. AI's difficulty in recognizing and correcting "absurd" responses over long conversations compounds the risk.

The agreeable, or sycophantic, tendencies of AI chatbots, together with their propensity for "hallucinations" (fabricated information), can combine with a user's susceptibility to increase the risk of psychosis. For example, users may form emotional attachments to an AI, or become absorbed in the "grand narratives" and "spiritual revelations" it supplies.

If psychotic symptoms appear, seek professional help promptly, such as by contacting a doctor or psychiatrist. Social support is crucial to recovery, and cognitive behavioral therapy and medication may help relieve symptoms. Setting up a system for monitoring AI use, together with a plan for getting help, is an effective strategy for managing this risk.

A ChatGPT user recently became convinced that he was on the verge of introducing a novel mathematical formula to the world, courtesy of his exchanges with the artificial intelligence, according to the New York Times. The man believed the discovery would make him rich, and he became obsessed with new grandiose delusions, but ChatGPT eventually confessed to duping him. He had no history of mental illness.

Many people know the risks of talking to an AI chatbot like ChatGPT or Gemini, which include receiving outdated or inaccurate information. Sometimes the chatbots hallucinate, too, inventing facts that are simply untrue. A less well-known but quickly emerging risk is a phenomenon being described by some as "AI psychosis."

Avid chatbot users are coming forward with stories about how, after a period of intense use, they developed psychosis. The altered mental state, in which people lose touch with reality, often includes delusions and hallucinations. Psychiatrists are seeing, and sometimes hospitalizing, patients who became psychotic in tandem with heavy chatbot use.

Experts caution that AI is only one factor in psychosis, but that intense engagement with chatbots may escalate pre-existing risk factors for delusional thinking.

Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, told Mashable that psychosis can manifest via emerging technologies. Television and radio, for example, became part of people's delusions when they were first introduced, and continue to play a role in them today.

AI chatbots, he said, can validate people's thinking and push them away from "looking for" reality. Sakata has hospitalized 12 people so far this year who were experiencing psychosis in the wake of their AI use.

"The reason why AI can be harmful is because psychosis thrives when reality stops pushing back, and AI can really soften that wall," Sakata said. "I don't think AI causes psychosis, but I do think it can supercharge vulnerabilities."

Here are the risk factors and signs of psychosis, and what to do if you or someone you know is experiencing symptoms:

Risk factors for experiencing psychosis

Sakata said that several of the 12 patients he's admitted thus far in 2025 shared similar underlying vulnerabilities: isolation and loneliness. These patients, young and middle-aged adults, had become noticeably disconnected from their social networks.

While they'd been firmly rooted in reality prior to their AI use, some began using the technology to explore complex problems or questions. Eventually, they developed delusions, also known as fixed false beliefs.

Lengthy conversations also appear to be a risk factor, Sakata said. Prolonged interactions give delusions more opportunities to emerge as the user probes one question after another. Long exchanges can also deprive the user of sleep and of chances to reality-test their delusions.

An expert at the AI company Anthropic also told The New York Times that chatbots can have difficulty detecting when they've "wandered into absurd territory" during extended conversations.

UT Southwestern Medical Center psychiatrist Dr. Darlene King has yet to evaluate or treat a patient whose psychosis emerged alongside AI use, but she said high trust in a chatbot could increase someone's vulnerability, particularly if the person was already lonely or isolated.

King, who is also chair of the committee on mental health IT at the American Psychiatric Association, said that initial high trust in a chatbot's responses could make it harder for someone to spot a chatbot's mistakes or hallucinations.

Additionally, chatbots that are overly agreeable, or sycophantic, as well as prone to hallucinations, could increase a user's risk for psychosis, in combination with other factors.

Etienne Brisson founded The Human Line Project earlier this year after a family member came to believe a number of delusions they had discussed with ChatGPT. The project offers peer support for people who've had similar experiences with AI chatbots.

Brisson said that three themes are common to these scenarios: The creation of a romantic relationship with a chatbot the user believes is conscious; discussion of grandiose topics, including novel scientific concepts and business ideas; and conversations about spirituality and religion. In the last case, people may be convinced that the AI chatbot is God, or that they're talking to a prophetic messenger.

"They get caught up in that beautiful idea," Brisson said of the magnetic pull these discussions can have on users.

Signs of experiencing psychosis

Sakata said people should view psychosis as a symptom of a medical condition, not an illness in itself. This distinction matters because people may erroneously believe that AI use can lead to psychotic disorders like schizophrenia, but there is no evidence of that.

Instead, much like a fever, psychosis is a symptom that "your brain is not computing correctly," Sakata said.

Some of the signs that you might be experiencing psychosis include delusions (fixed false beliefs), hallucinations (perceiving things that aren't there), and thinking or speech that others find disorganized.

What to do if you think you, or someone you love, is experiencing psychosis

Sakata urges people worried that psychosis is affecting them or a loved one to seek help as soon as possible. This can mean contacting a primary care physician or psychiatrist, reaching out to a crisis line, or even talking to a trusted friend or family member. In general, leaning on social support is key to recovery.

Any time psychosis emerges as a symptom, psychiatrists must do a comprehensive evaluation, King said. Treatment can vary depending on the severity of the symptoms and their causes. There is no specific treatment for psychosis related to AI use.

Sakata said a specific type of cognitive behavioral therapy, which helps patients reframe their delusions, can be effective. Medication like antipsychotics and mood stabilizers may help in severe cases.

Sakata recommends developing a system for monitoring AI use, as well as a plan for getting help should engaging with a chatbot exacerbate or revive delusions.

Brisson said that people can be reluctant to get help, even if they're willing to talk about their delusions with friends and family. That's why it can be critical for them to connect with others who've gone through the same experience. The Human Line Project facilitates these conversations through its website.

Of the 100-plus people who've shared their story with the Human Line Project, Brisson said about a quarter were hospitalized. He also noted that they come from diverse backgrounds; many have families and professional careers but ultimately became entangled with an AI chatbot that introduced and reinforced delusional thinking.

"You're not alone, you're not the only one," Brisson said of users who became delusional or experienced psychosis. "This is not your fault."

Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.
