Mashable, October 29, 2024
Teens are talking to AI companions, whether it's safe or not

With the rise of generative AI, companion chatbots have become an increasingly common part of teenagers' lives, but their potential risks should not be ignored. These AI companions are designed to be highly engaging, which can lead teens to become overly dependent on them and even form unhealthy attachments, ultimately harming their mental health and well-being. This article examines how AI companions affect teens' mental health, lays out the warning signs parents should watch for, and explains how to talk with their children and seek professional help.

🧑‍🏫 The rise of AI companions: Teens use platforms such as Character.AI, Replika, and Kindroid to interact with AI companions that have unique personalities and traits. With them, teens can role-play, explore academic and creative interests, and even have romantic or sexually explicit exchanges.

⚠️ Potential risks of AI companions: These companions are designed to be highly engaging, which can lead teens to become overly dependent, form unhealthy attachments, and ultimately suffer harm to their mental health and well-being.

🆘 Warning signs for parents: A teen may have formed an unhealthy relationship with an AI companion if they withdraw from typical activities and friendships, their school performance declines, they prefer the chatbot to in-person company, they develop romantic feelings for it, or they talk only to the chatbot about problems they are going through.

👨‍👩‍👧‍👦 How parents should respond: Talk with your teen to help them understand the difference between conversing with a chatbot and with a real person, watch for signs of an unhealthy attachment to a companion, and make a plan for what to do if problems arise.

💡 Seeking professional help: If parents notice the warning signs above, they should talk with their child promptly and seek professional help.

For parents still catching up on generative artificial intelligence, the rise of the companion chatbot may be a mystery.

In broad strokes, the technology can seem relatively harmless, compared to other threats teens can encounter online, including financial sextortion.

Using AI-powered platforms like Character.AI, Replika, Kindroid, and Nomi, teens create lifelike conversation partners with unique traits and characteristics, or engage with companions created by other users. Some are even based on popular television and film characters, yet still forge an intense, individual bond with the teen using them.

Teens use these chatbots for a range of purposes, including to role play, explore their academic and creative interests, and to have romantic or sexually explicit exchanges.

But AI companions are designed to be captivating, and that's where the trouble often begins, says Robbie Torney, program manager at Common Sense Media.

The nonprofit organization recently released guidelines to help parents understand how AI companions work, along with warning signs indicating that the technology may be dangerous for their teen.

Torney said that while parents juggle a number of high-priority conversations with their teens, they should consider talking to them about AI companions as a "pretty urgent" matter.

Why parents should worry about AI companions

Teens particularly at risk for isolation may be drawn into a relationship with an AI chatbot that ultimately harms their mental health and well-being—with devastating consequences.

That's what Megan Garcia argues happened to her son, Sewell Setzer III, in a lawsuit she recently filed against Character.AI.

Within a year of beginning relationships with Character.AI companions modeled on Game of Thrones characters, including Daenerys Targaryen ("Dany"), Setzer's life changed radically, according to the lawsuit.

He became dependent on "Dany," spending extensive time chatting with her each day. Their exchanges were both friendly and highly sexual. Garcia's lawsuit generally describes the relationship Setzer had with the companions as "sexual abuse."

On occasions when Setzer lost access to the platform, he became despondent. Over time, the 14-year-old athlete withdrew from school and sports, became sleep deprived, and was diagnosed with mood disorders. He died by suicide in February 2024.

Garcia's lawsuit seeks to hold Character.AI responsible for Setzer's death, specifically because its product was designed to "manipulate Sewell – and millions of other young customers – into conflating reality and fiction," among other dangerous defects.

Jerry Ruoti, Character.AI's head of trust and safety, told the New York Times in a statement: "We want to acknowledge that this is a tragic situation, and our hearts go out to the family. We take the safety of our users very seriously, and we're constantly looking for ways to evolve our platform."

Given the life-threatening risk that AI companion use may pose to some teens, Common Sense Media's guidelines recommend that parents prohibit access for children under 13, impose strict time limits for teens, prevent use in isolated spaces like a bedroom, and make an agreement with their teen to seek help for serious mental health issues.

Torney says that parents of teens interested in an AI companion should focus on helping them to understand the difference between talking to a chatbot versus a real person, identify signs that they've developed an unhealthy attachment to a companion, and develop a plan for what to do in that situation.

Warning signs that an AI companion isn't safe for your teen

Common Sense Media created its guidelines with the input and assistance of mental health professionals associated with Stanford's Brainstorm Lab for Mental Health Innovation.

While there's little research on how AI companions affect teen mental health, the guidelines draw on existing evidence about over-reliance on technology.

"A take-home principle is that AI companions should not replace real, meaningful human connection in anyone's life, and – if this is happening – it's vital that parents take note of it and intervene in a timely manner," Dr. Declan Grabb, inaugural AI fellow at Stanford's Brainstorm Lab for Mental Health, told Mashable in an email.

Parents should be especially cautious if their teen experiences depression, anxiety, social challenges or isolation. Other risk factors include going through major life changes and being male, because boys are more likely to engage in problematic tech use.

Signs that a teen has formed an unhealthy relationship with an AI companion include withdrawal from typical activities and friendships and worsening school performance, as well as preferring a chatbot to in-person company, developing romantic feelings toward it, and talking exclusively to it about problems the teen is experiencing.

Some parents may notice increased isolation and other signs of worsening mental health but not realize that their teen has an AI companion. Indeed, recent Common Sense Media research found that many teens have used at least one type of generative AI tool without their parent realizing they'd done so.

"There's a big enough risk here that if you are worried about something, talk to your kid about it."
- Robbie Torney, Common Sense Media

Even if parents don't suspect that their teen is talking to an AI chatbot, they should consider broaching the topic. Torney recommends approaching teens with curiosity and an openness to learning more about any AI companion they may have. This can include watching the teen engage with a companion and asking what aspects of the activity they enjoy.

Torney urges parents who notice any warning signs of unhealthy use to follow up immediately by discussing it with their teen and seeking professional help, as appropriate.

"There's a big enough risk here that if you are worried about something, talk to your kid about it," Torney says.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.
