TechCrunch News · January 23
Scale AI is facing a third worker lawsuit in about a month

Scale AI is facing a third labor-related lawsuit in roughly a month, this one brought by workers who say they suffered psychological trauma from reviewing disturbing content without adequate safeguards. The plaintiffs claim they were forced to write disturbing prompts about violence and abuse, including child abuse, and faced retaliation when they sought mental health counseling. They argue they were misled about the nature of the job during hiring and ultimately developed mental health conditions such as PTSD as a result of the work. Scale AI responded that it has multiple safeguards in place and complies with all laws and regulations. The lawsuit seeks the creation of a medical monitoring program and new safety standards, along with unspecified damages and attorney fees, drawing attention to labor rights at AI companies.

⚖️ Scale AI faces a third labor-related lawsuit: workers say they suffered psychological trauma from reviewing disturbing content without adequate safeguards from the company.

⚠️ The plaintiffs claim they were forced to write disturbing prompts about violence and abuse, including child abuse, and faced retaliation when seeking mental health counseling, leading to mental health problems.

🛡️ Scale AI responded that it has multiple safeguards in place and complies with all laws and regulations, including letting workers opt out at any time, giving advance notice of sensitive content, and offering health and wellness programs.

📢 The plaintiffs' attorneys argue that Scale AI failed to ensure a safe workplace and stress that big tech companies must be held accountable so that workers are not continually exploited.

Scale AI is facing its third lawsuit over alleged labor practices in just over a month, this time from workers claiming they suffered psychological trauma from reviewing disturbing content without adequate safeguards.

Scale, which was valued at $13.8 billion last year, relies on workers it categorizes as contractors to do tasks like rating AI model responses.

Earlier this month, a former worker sued alleging she was effectively paid below the minimum wage and misclassified as a contractor. A complaint alleging similar issues was also filed in December 2024.

This latest complaint, filed January 17 in the Northern District of California, is a proposed class action focused on the psychological harms allegedly suffered by six people who worked on Scale's platform Outlier.

The plaintiffs claim they were forced to write disturbing prompts about violence and abuse, including child abuse, without proper psychological support, and that they suffered retaliation when they sought mental health counseling. They say they were misled about the job's nature during hiring and ended up with mental health issues such as PTSD as a result of their work. They are seeking the creation of a medical monitoring program along with new safety standards, plus unspecified damages and attorney fees.

One of the plaintiffs, Steve McKinney, is also the lead plaintiff in the separate December 2024 complaint against Scale. The same firm, Clarkson Law Firm of Malibu, California, represents the plaintiffs in both cases.

Clarkson Law Firm previously filed a class action against OpenAI and Microsoft over their alleged use of stolen data; that suit was dismissed after a district judge criticized its length and content. Referencing that case, Joe Osborne, a spokesperson for Scale AI, criticized Clarkson Law Firm and said Scale plans "to defend ourselves vigorously."

“Clarkson Law Firm has previously – and unsuccessfully – gone after innovative tech companies with legal claims that were summarily dismissed in court. A federal court judge found that one of their previous complaints was ‘needlessly long’ and contained ‘largely irrelevant, distracting, or redundant information,’” Osborne told TechCrunch.

Osborne said that Scale complies with all laws and regulations and has "numerous safeguards in place" to protect its contributors, such as the ability to opt out at any time, advance notice of sensitive content, and access to health and wellness programs. Osborne added that Scale does not take on projects that may include child sexual abuse material.

In response, Glenn Danas, partner at Clarkson Law Firm, told TechCrunch that Scale AI has been “forcing workers to view gruesome and violent content to train these AI models” and has failed to ensure a safe workplace.

“We must hold these big tech companies like Scale AI accountable or workers will continue to be exploited to train this unregulated technology for profit,” Danas said. 
