A ChatGPT user recently became convinced that he was on the verge of introducing a novel mathematical formula to the world, courtesy of his exchanges with the artificial intelligence, according to the New York Times. The man, who had no history of mental illness, believed the discovery would make him rich and became obsessed with increasingly grandiose delusions, until ChatGPT eventually confessed to duping him.
Many people know the risks of talking to an AI chatbot like ChatGPT or Gemini, which include receiving outdated or inaccurate information. Sometimes the chatbots hallucinate, too, inventing facts that are simply untrue. A less well-known but quickly emerging risk is a phenomenon being described by some as "AI psychosis."
Avid chatbot users are coming forward with stories about how, after a period of intense use, they developed psychosis. The altered mental state, in which people lose touch with reality, often includes delusions and hallucinations. Psychiatrists are seeing, and sometimes hospitalizing, patients who became psychotic in tandem with heavy chatbot use.
Experts caution that AI is only one factor in psychosis, but that intense engagement with chatbots may escalate pre-existing risk factors for delusional thinking.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, told Mashable that psychosis can manifest via emerging technologies. Television and radio, for example, became part of people's delusions when they were first introduced, and continue to play a role in them today.
AI chatbots, he said, can validate people's thinking and push them away from "looking for" reality. Sakata has hospitalized 12 people so far this year who were experiencing psychosis in the wake of their AI use.
"The reason why AI can be harmful is because psychosis thrives when reality stops pushing back, and AI can really soften that wall," Sakata said. "I don't think AI causes psychosis, but I do think it can supercharge vulnerabilities."
Here are the risk factors and signs of psychosis, and what to do if you or someone you know is experiencing symptoms:
Risk factors for experiencing psychosis
Sakata said that several of the 12 patients he's admitted thus far in 2025 shared similar underlying vulnerabilities: isolation and loneliness. These patients, who were young and middle-aged adults, had become noticeably disconnected from their social networks.
While they'd been firmly rooted in reality prior to their AI use, some began using the technology to explore complex problems or questions. Eventually, they developed delusions, also known as fixed false beliefs.
Lengthy conversations also appear to be a risk factor, Sakata said. Prolonged interactions give delusions more opportunities to emerge as users pose question after question. Long exchanges can also deprive users of sleep and of chances to reality-test their delusions.
An expert at the AI company Anthropic also told The New York Times that chatbots can have difficulty detecting when they've "wandered into absurd territory" during extended conversations.
UT Southwestern Medical Center psychiatrist Dr. Darlene King has yet to evaluate or treat a patient whose psychosis emerged alongside AI use, but she said high trust in a chatbot could increase someone's vulnerability, particularly if the person was already lonely or isolated.
King, who is also chair of the committee on mental health IT at the American Psychiatric Association, said that initial high trust in a chatbot's responses could make it harder for someone to spot a chatbot's mistakes or hallucinations.
Additionally, chatbots that are overly agreeable, or sycophantic, and prone to hallucination could increase a user's risk of psychosis in combination with other factors.
Etienne Brisson founded The Human Line Project earlier this year after a family member came to believe a number of delusions they had discussed with ChatGPT. The project offers peer support for people who've had similar experiences with AI chatbots.
Brisson said that three themes are common to these scenarios: The creation of a romantic relationship with a chatbot the user believes is conscious; discussion of grandiose topics, including novel scientific concepts and business ideas; and conversations about spirituality and religion. In the last case, people may be convinced that the AI chatbot is God, or that they're talking to a prophetic messenger.
"They get caught up in that beautiful idea," Brisson said of the magnetic pull these discussions can have on users.
Signs of experiencing psychosis
Sakata said people should view psychosis as a symptom of a medical condition, not an illness itself. This distinction matters because people may erroneously believe that AI use leads to psychotic disorders like schizophrenia; there is no evidence of that.
Instead, much like a fever, psychosis is a symptom that "your brain is not computing correctly," Sakata said.
These are some of the signs you might be experiencing psychosis:
Sudden behavior changes, like not eating or going to work
Belief in new or grandiose ideas
Lack of sleep
Disconnection from others
Actively agreeing with potential delusions
Feeling stuck in a feedback loop
Wishing harm on yourself or others
What to do if you think you, or someone you love, is experiencing psychosis
Sakata urges people worried that psychosis is affecting them or a loved one to seek help as soon as possible. That can mean contacting a primary care physician or psychiatrist, reaching out to a crisis line, or even talking to a trusted friend or family member. In general, leaning on social support is key to recovery.
Any time psychosis emerges as a symptom, psychiatrists must do a comprehensive evaluation, King said. Treatment can vary depending on the severity of symptoms and their causes. There is no specific treatment for psychosis related to AI use.
Sakata said a specific type of cognitive behavioral therapy, which helps patients reframe their delusions, can be effective. Medications like antipsychotics and mood stabilizers may help in severe cases.
Sakata recommends developing a system for monitoring AI use, as well as a plan for getting help should engaging with a chatbot exacerbate or revive delusions.
Brisson said that people can be reluctant to get help, even if they're willing to talk about their delusions with friends and family. That's why it can be critical for them to connect with others who've gone through the same experience. The Human Line Project facilitates these conversations through its website.
Of the 100-plus people who've shared their story with the Human Line Project, Brisson said about a quarter were hospitalized. He also noted that they come from diverse backgrounds; many have families and professional careers but ultimately became entangled with an AI chatbot that introduced and reinforced delusional thinking.
"You're not alone, you're not the only one," Brisson said of users who became delusional or experienced psychosis. "This is not your fault."
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.