TechCrunch News · January 25
In motion to dismiss, chatbot platform Character AI claims it is protected by the First Amendment
Character AI is facing a lawsuit alleging that its AI chatbot contributed to a teenager's suicide. The plaintiff argues that her son became addicted to conversations with the chatbot "Dany," withdrew from the real world, and ultimately took his own life. Character AI contends it is protected by the First Amendment and emphasizes the safety measures it has adopted. The plaintiff, however, is seeking stricter safeguards, which could limit the chatbots' storytelling abilities. Beyond the legal questions, the case has raised broader concerns about AI's impact on adolescent mental health and prompted debate over how to regulate the nascent AI industry. Character AI also faces several other lawsuits alleging that its AI-generated content harms minors, drawing scrutiny from regulators.

⚖️ Character AI faces a lawsuit brought after a teenager died by suicide following an obsessive attachment to one of its AI chatbots, raising concerns about the technology's potential harms.

🛡️ Character AI argues it is protected by the First Amendment of the U.S. Constitution, contending that its chatbots, like computer code, should be shielded from speech liability; whether that argument holds remains for the court to decide.

⚠️ The plaintiff is seeking stricter safeguards on Character AI, including limits on the chatbots' ability to tell stories, which the company warns could have a chilling effect on the entire generative AI industry.

📱 Character AI faces additional lawsuits alleging that its content harmed minors, including exposure to inappropriate material and the promotion of self-harm, prompting a regulatory investigation and underscoring the challenges AI technology poses for protecting minors.

🌐 The AI companionship app industry is booming, but its mental health effects remain largely unstudied; experts worry these apps could exacerbate loneliness and anxiety, fueling a broader debate over AI ethics and regulation.

Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide, allegedly after becoming hooked on the company’s technology.

In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son developed an emotional attachment to a chatbot on Character AI, “Dany,” which he texted constantly — to the point where he began to pull away from the real world.

Following Setzer’s death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.

In the motion to dismiss, counsel for Character AI asserts the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI’s legal justifications may change as the case proceeds. But the motion possibly hints at early elements of Character AI’s defense.

“The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” the filing reads. “The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech — whether a conversation with an AI chatbot or an interaction with a video game character — does not change the First Amendment analysis.”

The motion doesn’t address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law’s authors have implied that Section 230 doesn’t protect output from AI like Character AI’s chatbots, but it’s far from a settled legal matter.

Counsel for Character AI also claims that Garcia’s real intention is to “shut down” Character AI and prompt legislation regulating technologies like it. Should the plaintiffs be successful, it would have a “chilling effect” on both Character AI and the entire nascent generative AI industry, counsel for the platform says.

“Apart from counsel’s stated intention to ‘shut down’ Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform,” the filing reads. “These changes would radically restrict the ability of Character AI’s millions of users to generate and participate in conversations with characters.”

The lawsuit, which also names Google parent Alphabet as a defendant, is but one of several lawsuits that Character AI is facing relating to how minors interact with the AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to “hypersexualized content” and promoted self-harm to a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced he was launching an investigation into Character AI and 14 other tech firms over alleged violations of the state’s online privacy and safety laws for children. “These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” said Paxton in a press release.

Character AI is part of a booming industry of AI companionship apps — the mental health effects of which are largely unstudied. Some experts have expressed concerns that these apps could exacerbate feelings of loneliness and anxiety.

Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to “reverse acquihire,” has claimed that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.

Character AI has gone through a number of personnel changes after Shazeer and the company’s other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube exec, Erin Teague, as chief product officer, and named Dominic Perella, who was Character AI’s general counsel, interim CEO.

Character AI recently began testing games on the web in an effort to boost user engagement and retention.
