Mashable · 2 days ago, 00:40
Instagram Teen Accounts still exposed to sexual content, investigation finds

Instagram has invested heavily in teen safety features in recent years, rolling out "Teen Accounts" to provide stricter content controls. However, a test conducted by the nonprofits Design It For Us and Accountable Tech shows that, despite these new features, the platform still recommends inappropriate, sexually suggestive, and harmful content to teenagers. Over a two-week period, fake teen accounts were shown content related to poor body image and eating disorders, as well as posts involving illegal drugs and sexual innuendo. Although Instagram maintains that its safety features are effective, the findings cast doubt on those protections. Meta responded that the test's limited scope fails to reflect the real impact of Teen Accounts, while acknowledging the challenge of monitoring a vast user base and complex algorithms.

👦🏻 Instagram launched the "Teen Accounts" feature to give underage users a safer experience on the app, including stricter content controls.

⚠️ Testing by Design It For Us and Accountable Tech found that even with Teen Accounts enabled, the platform still recommends inappropriate content to teenagers. Using five fake teen accounts over two weeks, the testers were shown content related to poor body image and eating disorders, as well as posts involving illegal drugs and sexual innuendo.

📢 Meta responded that the test's limited scope fails to reflect the real impact of Teen Accounts. The company nonetheless acknowledged the challenges of monitoring a massive user base and complex algorithms.

🔒 Teen Accounts are private by default, restrict messaging and livestreaming, and filter out sensitive content. Teens aged 13 to 15 are subject to even tighter controls, and Meta's in-house AI identifies and flags accounts that violate the rules.

Following years of criticism for its handling of the youth mental health crisis, Instagram has invested heavily in beefing up its teen safety features, including an entirely new way for underage users to post, communicate, and scroll on the app. But recent tests of these new safety features suggest it may still not be enough.

According to an investigation conducted by the youth-led nonprofit Design It For Us and watchdog Accountable Tech — later corroborated by Washington Post columnist Geoffrey Fowler — the platform continues to surface age-inappropriate, sexually explicit, and generally harmful content despite content control safeguards.

In the study, "Gen-Z aged" representatives from Design It For Us tested five fake teen accounts on the app's default Teen Account settings over a two-week period. In every case, the youth accounts were recommended sensitive and sexual content. Four out of five accounts were recommended content related to poor body image and eating disorders, and only one account's algorithm surfaced what the nonprofit deemed "educational" content.

The individual algorithms also recommended descriptions of illegal substance use, and sexually explicit posts couched in trendy, coded language slipped through the filters. Not every protection faltered, however: the platform's built-in restrictions on messaging and tagging held up.

"These findings suggest that Meta has not independently fostered a safe online environment, as it purports to parents and lawmakers," the report writes. "Lawmakers should compel Meta to produce data about Teen Accounts so that regulators and nonprofits can understand over time whether teenagers are actually protected when using Instagram."

In a response to the Washington Post, Meta spokeswoman Liza Crenshaw said the test's limited scope doesn't capture the true impact of the app's safety features. “A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts. The report is flawed, but even taken at face value, it identified just 61 pieces of content that it deems ‘sensitive,’ less than 0.3 percent of all of the content these researchers would have likely seen during the test.”

Addressing an ongoing, platform-wide issue

A June 2024 experiment by the Wall Street Journal and Northeastern University found that minor-aged accounts were frequently recommended sexually explicit and graphic content within the app's video-centered Reels feed, despite being automatically set to the platform's strictest content settings. The phenomenon was a known algorithmic issue for parent company Meta, which, according to internal documents, was flagged by employees conducting safety reviews as early as 2021. In a response, Instagram representatives said the experiments did not reflect the reality of how young users interact with the app.

At that time, Instagram had yet to launch its new tentpole product, Teen Accounts, introduced as a new, more highly monitored way for younger users to exist and post online — including stronger content controls. Minor users are automatically placed into Teen Accounts when signing up for Instagram, which sets their page private, limits messaging capabilities and the ability to stream live, and filters out sensitive content from feeds and DMs. Teens between the ages of 13 and 15 have even tighter reins on their app usage, and accounts that fall through the cracks are now being spotted and flagged by Meta's in-house AI.

More than 54 million teens have been moved into a restricted Teen Account since the initial rollout, according to Meta, and the vast majority of users under the age of 16 have kept the default, stricter security settings. And while the numbers show a positive shift, even Meta CEO Mark Zuckerberg admits there may be limits to how effectively the company can monitor its vast user base and complex algorithm.


Related tags

Instagram · Teen Safety · Content Moderation · Social Media