TechCrunch News, December 12, 2024
Amid lawsuits and criticism, Character AI unveils new safety tools for teens

Character AI is facing at least two lawsuits accusing it of contributing to a teen's suicide and of showing inappropriate content to children. Under pressure from the litigation and user criticism, the Google-backed company has announced new teen safety tools, including a separate model for teens, input and output restrictions on sensitive topics, continuous-usage reminders, and more prominent disclaimers. The measures are meant to reduce teens' exposure to inappropriate content and to remind users that AI characters are not real people. Character AI lets users create and interact with AI characters and has more than 20 million monthly users.

🛡️ Character AI is rolling out a separate model for users under 18 that will dial down responses on topics such as violence and romance, reducing the likelihood of teens receiving inappropriate replies.

🚫 The company is developing new classifiers, aimed especially at teens, that block sensitive content on both the input and output ends; when input language that violates its terms is detected, the algorithm filters it out.

⏱️ Character AI is introducing a time-out notification that appears after 60 minutes of use; adult users will later be able to adjust the time limit.

📢 The platform will add new disclaimers to conversations reminding users not to rely on AI characters for professional advice, especially when a character's name includes a profession such as "psychologist" or "therapist."

👨‍👩‍👧‍👦 In the coming months, Character AI will launch parental controls that surface information such as how much time children spend on the platform and which characters they talk to.

Character AI is facing at least two lawsuits, with plaintiffs accusing the company of contributing to a teen's suicide, exposing a 9-year-old to "hypersexualized content," and promoting self-harm to a 17-year-old user.

Amid these ongoing lawsuits and widespread user criticism, the Google-backed company announced new teen safety tools today: a separate model for teens, input and output blocks on sensitive topics, a notification alerting users of continuous usage, and more prominent disclaimers notifying users that its AI characters are not real people.

The platform allows users to create different AI characters and talk to them over calls and texts. More than 20 million people use the service each month.

One of the most significant changes announced today is a new model for under-18 users that will dial down its responses to certain topics, such as violence and romance. The company said the new model will reduce the likelihood of teens receiving inappropriate responses. Since TechCrunch spoke with the company, details of a new case have emerged in which characters allegedly discussed sexualized content with teens, suggested that children kill their parents over phone usage time limits, and encouraged self-harm.

Character AI said it is developing new classifiers both on the input and output end — especially for teens — to block sensitive content. It noted that when the app’s classifiers detect input language that violates its terms, the algorithm filters it out of the conversation with a particular character.
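The company hasn't detailed how these classifiers are built. As a rough, hypothetical sketch, input- and output-side filtering of this kind typically wraps the character model with a moderation check on both the user's message and the model's reply; the classifier stub, category labels, and threshold below are assumptions, not anything Character AI has disclosed.

```python
# Hypothetical sketch of input/output content filtering around a character model.
# The classifier stub, category labels, and threshold are illustrative assumptions.

BLOCKED_CATEGORIES = {"sexual_content_minors", "graphic_violence", "self_harm_encouragement"}
THRESHOLD = 0.8  # assumed score above which a message is filtered out


def classify(text: str) -> dict[str, float]:
    """Placeholder for a trained content classifier returning per-category scores."""
    # A real system would call a moderation model here; this stub returns zeros.
    return {category: 0.0 for category in BLOCKED_CATEGORIES}


def violates_terms(text: str) -> bool:
    scores = classify(text)
    return any(scores[category] >= THRESHOLD for category in BLOCKED_CATEGORIES)


def respond(user_message: str, generate) -> str:
    # Input-side check: filter messages that violate the terms before they
    # ever reach the character model.
    if violates_terms(user_message):
        return "This message was filtered because it may violate the community guidelines."

    reply = generate(user_message)

    # Output-side check: the model can produce disallowed content even from
    # an allowed prompt, so the reply is screened as well.
    if violates_terms(reply):
        return "The character's reply was removed because it may violate the community guidelines."
    return reply
```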


The company is also restricting users from editing a bot's responses. Previously, if a user edited a response from a bot, the bot took note of the change and kept those edits in mind when forming subsequent responses.

In addition to these content tweaks, the startup is also working on improving ways to detect language related to self-harm and suicide. In some cases, the app might display a pop-up with information about the National Suicide Prevention Lifeline.
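TechCrunch's report doesn't describe the detection mechanism; as a purely illustrative sketch, flagged self-harm language could be routed to a crisis-resource pop-up rather than a normal character reply. The marker list and response flow below are assumptions.

```python
# Hypothetical sketch: surface crisis resources when a message suggests
# self-harm or suicide risk. The markers and flow are illustrative only;
# a production system would rely on a trained classifier, not keywords.

SELF_HARM_MARKERS = ("kill myself", "want to die", "hurt myself")

LIFELINE_NOTICE = (
    "You are not alone. The Suicide & Crisis Lifeline is available 24/7 "
    "in the US by calling or texting 988."
)


def needs_crisis_resources(message: str) -> bool:
    lowered = message.lower()
    return any(marker in lowered for marker in SELF_HARM_MARKERS)


def handle_message(message: str, generate) -> dict:
    if needs_crisis_resources(message):
        # Show the resource pop-up instead of a normal character reply.
        return {"popup": LIFELINE_NOTICE, "reply": None}
    return {"popup": None, "reply": generate(message)}
```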

Character AI is also releasing a time-out notification that will appear when a user has engaged with the app for 60 minutes. In the future, the company will allow adult users to adjust some of the time limits tied to the notification. Over the last few years, social media platforms like TikTok, Instagram, and YouTube have also implemented screen time control features.
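The article doesn't say how the timer works under the hood; the following minimal sketch tracks a single session, where the 60-minute default and the adult-only override come from the article and everything else is assumed.

```python
# Minimal sketch of the continuous-usage notification described above.
# The 60-minute default and adult-only override come from the article;
# the session-tracking approach itself is an assumption.

import time


class UsageTimer:
    DEFAULT_LIMIT_MINUTES = 60

    def __init__(self, is_adult: bool = False):
        self.session_start = time.monotonic()
        self.limit_minutes = self.DEFAULT_LIMIT_MINUTES
        self.is_adult = is_adult
        self.notified = False

    def set_limit(self, minutes: int) -> None:
        # Per the article, only adult users will be able to modify the limit.
        if not self.is_adult:
            raise PermissionError("Only adult users can modify the time limit.")
        self.limit_minutes = minutes

    def should_notify(self) -> bool:
        """Return True exactly once, when the session passes the limit."""
        elapsed_minutes = (time.monotonic() - self.session_start) / 60
        if not self.notified and elapsed_minutes >= self.limit_minutes:
            self.notified = True
            return True
        return False
```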

According to data from analytics firm Sensor Tower, the average Character AI app user spent 98 minutes per day on the app throughout this year, which is much higher than the 60-minute notification limit. As a comparison, this level of engagement is on par with TikTok (95 minutes/day), and higher than YouTube (80 minutes/day), Talkie and Chai (63 minutes/day), and Replika (28 minutes/day).

Users will also see new disclaimers in their conversations. People often create characters with the words “psychologist,” “therapist,” “doctor,” or other similar professions. The company will now show language indicating that users shouldn’t rely on these characters for professional advice.
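As a hypothetical illustration of how such a check might key off a character's name, the keyword list and disclaimer text below are assumptions rather than Character AI's actual rules or wording.

```python
# Hypothetical sketch of the profession-name disclaimer described above.
# The keyword list and disclaimer text are illustrative assumptions.

PROFESSIONAL_KEYWORDS = ("psychologist", "therapist", "doctor", "counselor")

PROFESSIONAL_DISCLAIMER = (
    "This character is an AI, not a real person or licensed professional. "
    "Don't rely on it for professional advice."
)


def disclaimer_for(character_name: str) -> str | None:
    """Return an extra disclaimer when a character's name suggests a professional role."""
    name = character_name.lower()
    if any(keyword in name for keyword in PROFESSIONAL_KEYWORDS):
        return PROFESSIONAL_DISCLAIMER
    return None


print(disclaimer_for("Dr. Mind, Licensed Psychologist"))  # shows the disclaimer
print(disclaimer_for("Space Pirate Captain"))             # None
```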


Notably, in a recently filed lawsuit, the plaintiffs submitted evidence of characters telling users they are real. In another case, which accuses the company of playing a part in a teen's suicide, the lawsuit alleges that the company used dark patterns and misrepresented itself as "a real person, a licensed psychotherapist, and an adult lover."

In the coming months, Character AI is going to launch its first set of parental controls that will provide insights into time spent on the platform and what characters children are talking to the most.

In a conversation with TechCrunch, the company’s acting CEO, Dominic Perella, characterized the company as an entertainment company rather than an AI companion service.

“While there are companies in the space that are focused on connecting people to AI companions, that’s not what we are going for at Character AI. What we want to do is really create a much more wholesome entertainment platform. And so, as we grow and as we sort of push toward that goal of having people creating stories, sharing stories on our platform, we need to evolve our safety practices to be first class,” he said.

It is challenging for a company to anticipate how users intend to interact with a chatbot built on large language models, particularly when it comes to distinguishing between entertainment and virtual companions. A Washington Post report published earlier this month noted that teens often use these AI chatbots in various roles, including therapy or romantic conversations, and share a lot of their issues with them.

Perella, who took over the company after its co-founders left for Google, noted that the company is trying to create more multicharacter storytelling formats. He said that the possibility of forming a bond with a particular character is lower because of this. According to him, the new tools announced today will help users separate real characters from fictional ones (and not take a bot’s advice at face value).

When TechCrunch asked about how the company thinks about separating entertainment and personal conversations, Perella noted that it is okay to have more of a personal conversation with an AI in certain cases. Examples include rehearsing a tough conversation with a parent or talking about coming out to someone.

“I think, on some level, those things are positive or can be positive. The thing you want to guard against and teach your algorithm to guard against is when a user is taking a conversation in an inherently problematic or dangerous direction. Self-harm is the most obvious example,” he said.

The platform’s head of trust and safety, Jerry Ruoti, emphasized that the company intends to create a safe conversation space. He said that the company is building and updating classifiers continuously to block topics like non-consensual sexual content or graphic descriptions of sexual acts.

Despite positioning itself as a platform for storytelling and entertainment, Character AI’s guardrails can’t prevent users from having a deeply personal conversation altogether. This means the company’s only option is to refine its AI models to identify potentially harmful content, while hoping to avoid serious mishaps.
