The Verge - Artificial Intelligence · March 5
AI now ‘analyzes’ LA Times articles for bias

Los Angeles Times owner Patrick Soon-Shiong announced that the paper is using AI to add a "Voices" label to articles that take a stance or are written from a personal perspective, and to generate AI "Insights." The move is meant to offer readers more varied viewpoints, but it has drawn skepticism from union members, who argue that AI analysis unvetted by editors will do little to enhance trust in the media. The AI has already produced problems: on an opinion piece about the risks of AI, it suggested that "AI democratizes historical storytelling," and on an article about California cities that elected Ku Klux Klan members, it generated a viewpoint that downplayed the Klan's ideological threat. Other outlets also use AI, but generally not to generate editorial assessments.

🗣️ The Los Angeles Times has introduced AI tools that add a "Voices" label to articles that take a stance or are written from a personal perspective, and that generate AI "Insights" intended to provide more varied viewpoints.

🤖 Union members are concerned, arguing that AI analysis unvetted by editorial staff will do little to enhance trust in the media and may produce misleading results.

🔍 The AI tool has already run into problems in practice, offering a take at odds with an article about AI risks and downplaying the historical impact of the Ku Klux Klan.

📰 Other outlets also use AI, but mainly for other parts of their news operations rather than for generating editorial assessments.

Yesterday morning, billionaire Los Angeles Times owner Patrick Soon-Shiong published a letter to readers letting them know the outlet is now using AI to add a “Voices” label to articles that take “a stance” or are “written from a personal perspective.” He said those articles may also get a set of AI-generated “Insights,” which appear at the bottom as bullet points, including some labeled, “Different views on the topic.”

“Voices is not strictly limited to Opinion section content,” writes Soon-Shiong. “It also includes news commentary, criticism, reviews, and more. If a piece takes a stance or is written from a personal perspective, it may be labeled Voices.” He also says, “I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”

The news wasn’t received well by LA Times union members. In a statement reported by The Hollywood Reporter, LA Times Guild vice chair Matt Hamilton said the union supports some initiatives to help readers separate news reporting from opinion stories, “But we don’t think this approach — AI-generated analysis unvetted by editorial staff — will do much to enhance trust in the media.”

It’s only been a day, but the change has already generated some questionable results. The Guardian points to a March 1st LA Times opinion piece about the danger inherent in unregulated use of AI to produce content for historical documentaries. At the bottom, the outlet’s new AI tool claims that the story “generally aligns with a Center Left point of view” and suggests that “AI democratizes historical storytelling.”

Insights were also apparently added to the bottom of a February 25th LA Times story about California cities that elected Ku Klux Klan members to their city councils in the 1920s. One of the now-removed, AI-generated, bullet-pointed views is that local historical accounts sometimes painted the Klan as “a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimizing its ideological threat.” That is correct, as the author points out on X, but it seems to be clumsily presented as a counterpoint to the story’s premise: that the Klan’s faded legacy in Anaheim, California, has lived on in school segregation, anti-immigration laws, and local neo-Nazi bands.

Ideally, if AI tools are used, it’s with enough editorial oversight to prevent gaffes like the ones the LA Times is experiencing. Sloppy or nonexistent oversight is the road to issues like MSN’s AI news aggregator recommending an Ottawa food bank as a tourist lunch destination, or Gizmodo’s awkward non-chronological “chronological” list of Star Wars films. And Apple recently tweaked how its Apple Intelligence notification summaries appear after the feature contorted a BBC headline to incorrectly suggest that UnitedHealthcare CEO shooting suspect Luigi Mangione had shot himself.

Other outlets use AI as part of their news operations, though generally not to generate editorial assessments. The technology is being used for a variety of purposes by Bloomberg, Gannett-owned outlets like USA Today, The Wall Street Journal, The New York Times, and The Washington Post.
