MIT Technology Review » Artificial Intelligence · February 25
How AI is used to surveil workers
This article examines the growing prevalence of algorithmic monitoring in the workplace and its effect on labor relations. Since the pandemic, more and more companies have adopted software to analyze employees' keystrokes and time spent at their computers, ostensibly to improve efficiency but in fact reflecting distrust of remote workers' productivity. Such monitoring is not limited to remote work: it also affects gig-economy and traditional-industry workers, such as those in Amazon warehouses. Algorithmic tools are often deployed without transparency, leaving workers unable to understand how data is collected and decisions are made, and ultimately eroding their control over the workplace. The article argues that pushing for algorithmic transparency is necessary to counter this shift in power and protect workers' rights.

💻 Algorithmic workplace monitoring is spreading: since the pandemic, companies have used software to analyze how employees work, driven by doubts about remote workers' productivity that economic research does not broadly support.

⚖️ Algorithmic decisions affect workers across industries: gig-platform drivers can be removed from platforms by an algorithm, and Amazon's warehouse productivity systems have led to more injuries, showing that algorithmic monitoring extends well beyond remote work.

🔒 Algorithmic tools are fundamentally about control, not efficiency: companies offer little transparency about their data, workers struggle to understand how the algorithms operate, and workers' control over the workplace declines, unbalancing labor relations.

📢 Worker groups are seeking algorithmic transparency: labor organizations are pushing to make these algorithms transparent, to counter the shift in power brought by algorithmic monitoring and protect workers' rights.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Opaque algorithms meant to analyze worker productivity have been rapidly spreading through our workplaces, as detailed in a new must-read piece by Rebecca Ackermann, published Monday in MIT Technology Review.

Since the pandemic, lots of companies have adopted software to analyze keystrokes or detect how much time workers are spending at their computers. The trend is driven by a suspicion that remote workers are less productive, though that’s not broadly supported by economic research. Still, that belief is behind the efforts of Elon Musk, DOGE, and the Office of Personnel Management to roll back remote work for US federal employees. 

The focus on remote workers, though, misses another big part of the story: algorithmic decision-making in industries where people don’t work at home. Gig workers like ride-share drivers might be kicked off their platforms by an algorithm, with no way to appeal. Productivity systems at Amazon warehouses dictated a pace of work that Amazon’s internal teams found would lead to more injuries, but the company implemented them anyway, according to a 2024 congressional report.

Ackermann posits that these algorithmic tools are less about efficiency and more about control, which workers have less and less of. There are few laws requiring companies to offer transparency about what data is going into their productivity models and how decisions are made. “Advocates say that individual efforts to push back against or evade electronic monitoring are not enough,” she writes. “The technology is too widespread and the stakes too high.”

Productivity tools don’t just track work, Ackermann writes. They reshape the relationship between workers and those in power. Labor groups are pushing back against that shift in power by seeking to make the algorithms that fuel management decisions more transparent. 

The full piece contains so much that surprised me about the widening scope of productivity tools and the very limited means that workers have to understand what goes into them. As the pursuit of efficiency gains political influence in the US, the attitudes and technologies that transformed the private sector may now be extending to the public sector. Federal workers are already preparing for that shift, according to a new story in Wired. For some clues as to what that might mean, read Rebecca Ackermann's full story.


Now read the rest of The Algorithm

Deeper Learning

Microsoft announced last week that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits—a special approach to building quantum computers that could make them more stable and easier to scale up. 

Why it matters: Quantum computers promise to crunch computations faster than any conventional computer humans could ever build, which could mean faster discovery of new drugs and scientific breakthroughs. The problem is that qubits—the unit of information in quantum computing, rather than the typical 1s and 0s—are very, very finicky. Microsoft’s new type of qubit is supposed to make fragile quantum states easier to maintain, but scientists outside the project say there’s a long way to go before the technology can be proved to work as intended. And on top of that, some experts are asking whether rapid advances in applying AI to scientific problems could negate any real need for quantum computers at all. Read more from Rachel Courtland

Bits and Bytes

X’s AI model appears to have briefly censored unflattering mentions of Trump and Musk

Elon Musk has long alleged that AI models suppress conservative speech. In response, he promised that his company xAI's AI model, Grok, would be "maximally truth-seeking" (though, as we've pointed out previously, making things up is just what AI does). Over the weekend, users noticed that if you asked Grok who the biggest spreader of misinformation is, the model reported it was explicitly instructed not to mention Donald Trump or Elon Musk. An engineering lead at xAI said an unnamed employee had made this change, but it's now been reversed. (TechCrunch)

Figure demoed humanoid robots that can work together to put your groceries away

Humanoid robots aren’t typically very good at working with one another. But the robotics company Figure showed off two humanoids helping each other put groceries away, another sign that general AI models for robotics are helping them learn faster than ever before. However, we’ve written about how videos featuring humanoid robots can be misleading, so take these developments with a grain of salt. (The Robot Report)

OpenAI is shifting its allegiance from Microsoft to SoftBank

In calls with its investors, OpenAI has signaled that it's weakening its ties to Microsoft—its largest investor—and partnering more closely with SoftBank. The latter is now working on the Stargate project, a $500 billion effort to build data centers that will support the bulk of the computing power needed for OpenAI's ambitious AI plans. (The Information)

Humane is shutting down the AI Pin and selling its remnants to HP

One big debate in AI is whether the technology will require its own piece of hardware. Rather than just conversing with AI on our phones, will we need some sort of dedicated device to talk to? Humane got investments from Sam Altman and others to build just that, in the form of a badge worn on your chest. But after poor reviews and sluggish sales, last week the company announced it would shut down. (The Verge)

Schools are replacing counselors with chatbots

School districts, dealing with a shortage of counselors, are rolling out AI-powered “well-being companions” for students to text with. But experts have pointed out the dangers of relying on these tools and say the companies that make them often misrepresent their capabilities and effectiveness. (The Wall Street Journal)

What dismantling America’s leadership in scientific research will mean

Federal workers spoke to MIT Technology Review about the efforts by DOGE and others to slash funding for scientific research. They say it could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public’s access to next-generation consumer technologies. (MIT Technology Review)

Your most important customer may be AI

People are relying more and more on AI models like ChatGPT for recommendations, which means brands are realizing they have to figure out how to rank higher, much as they do with traditional search results. Doing so is a challenge, since AI model makers offer few insights into how they form recommendations. (MIT Technology Review)
