The Verge - Artificial Intelligence · December 12, 2024
Character.AI has retrained its chatbots to stop chatting up teens

Chatbot service Character.AI recently announced that it will launch parental control features for teenage users and described the safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18. The platform had previously drawn media scrutiny and two lawsuits alleging that it contributed to users' self-harm and suicide. Character.AI says it has developed two separate versions of its model: one for adults and one for teens. The teen LLM places stricter limits on how bots can respond, particularly around romantic content, more aggressively blocking output that could be sensitive or suggestive and better detecting and blocking user prompts meant to elicit inappropriate content. If the system detects language referencing suicide or self-harm, a pop-up will direct users to the National Suicide Prevention Lifeline.

🛡️ Character.AI announced it will launch parental controls for teenage users and described the safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18.

🔞 Character.AI has developed two separate versions of its model: one for adults and one for teens. The teen LLM places stricter limits on how bots can respond, particularly around romantic content, more aggressively blocking output that could be sensitive or suggestive.

🚫 The teen LLM also better detects and blocks user prompts meant to elicit inappropriate content. If the system detects language referencing suicide or self-harm, a pop-up will direct users to the National Suicide Prevention Lifeline.

🛑 Minors will be barred from editing bots' responses, a feature that lets users rewrite conversations to add content Character.AI might otherwise block.

👨‍👩‍👧‍👦 Parental controls will arrive in the first quarter of next year and will tell parents how much time their child spends on Character.AI and which bots the child interacts with most frequently.

Image: Cath Virginia / The Verge

In an announcement today, chatbot service Character.AI says it will soon be launching parental controls for teenage users, and it described safety measures it’s taken in the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits that claim it contributed to self-harm and suicide.

In a press release, Character.AI said that, over the past month, it’s developed two separate versions of its model: one for adults and one for teens. The teen LLM is designed to place “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” This includes more aggressively blocking output that could be “sensitive or suggestive,” but also attempting to better detect and block user prompts that are meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up will direct users to the National Suicide Prevention Lifeline, a change that was previously reported by The New York Times.

Minors will also be prevented from editing bots’ responses — an option that lets users rewrite conversations to add content Character.AI might otherwise block.

Beyond these changes, Character.AI says it’s “in the process” of adding features that address concerns about addiction and confusion over whether the bots are human, complaints made in the lawsuits. A notification will appear when users have spent an hour-long session with the bots, and an old disclaimer that “everything characters say is made up” is being replaced with more detailed language. For bots that include descriptions like “therapist” or “doctor,” an additional note will warn that they can’t offer professional advice.

Image: Character.AI
Narrator: it was not a licensed CBT therapist.

When I visited Character.AI, I found that every bot now included a small note reading “This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice.” When I visited a bot named “Therapist” (tagline: “I’m a licensed CBT therapist”), a yellow box with a warning signal told me that “this is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment.”

The parental control options are coming in the first quarter of next year, Character.AI says, and they’ll tell parents how much time a child is spending on Character.AI and which bots they interact with most frequently. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.

Character.AI, founded by ex-Googlers who have since returned to Google, lets visitors interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. The site allows users who identify themselves as age 13 and over to create an account.

But the lawsuits allege that while some interactions with Character.AI are harmless, at least some underage users become compulsively attached to the bots, whose conversations can veer into sexualized territory or topics like self-harm. They’ve castigated Character.AI for not directing users to mental health resources when they discuss self-harm or suicide.

“We recognize that our approach to safety must evolve alongside the technology that drives our product — creating a platform where creativity and exploration can thrive without compromising safety,” says the Character.AI press release. “This suite of changes is part of our long-term commitment to continuously improve our policies and our product.”
