Fortune | November 13, 2024
Denmark’s renowned safety net turns into a political battleground as AI and algorithms target welfare recipients

Denmark uses algorithms to detect welfare fraud, but in doing so exposes benefit recipients to the risks of excessive surveillance and discrimination. According to a report by Amnesty International, the algorithms are applied to residents' personal data; although this is permitted under the country's legislation, the practice raises many problems, including low accuracy, potential rights violations, and the exclusion of certain groups.

💡 Denmark uses algorithms to detect welfare fraud; Amnesty examined 4 of the 60 in use

🚫 The algorithms draw on residents' personal data, including sensitive information

❌ Some algorithms rely on nationality, violating the right to non-discrimination

🙅‍♂️ The algorithms have low accuracy: most investigated cases turn out not to be fraud

📢 Amnesty International calls on the Danish authorities to improve, with more transparency and audits of the algorithms

Welfare recipients in Denmark are at risk of being excessively monitored and discriminated against as a result of algorithms and artificial intelligence designed to identify fraud, Amnesty International said in a report Wednesday.

Hellen Mukiri-Smith, an AI researcher and one of the authors of the report, said "mass surveillance has created a social benefits system that risks targeting, rather than supporting, the very people it was meant to protect."

Amnesty examined four algorithms, with redacted data, from among the 60 used by Udbetaling Danmark, the Danish agency responsible for paying out social benefits, to identify fraud over more than a decade.

The algorithms are applied to personal data from public databases of Danish residents, which is allowed under the legislation of the Scandinavian country of 5.9 million inhabitants. They are used to track down potential fraud in a wide range of areas, from pension payments to parental and sick leave and student grants.

The data includes information on place of residence, travel, citizenship, place of birth, family relationships and income, which Amnesty noted are "sensitive data points that can also serve as proxies for a person's race, ethnicity, or sexual orientation."

One of the algorithms, dubbed the "Model Abroad," relies in particular on the nationality of the beneficiary, with the aim of determining whether people have moved abroad without declaring it while still receiving social benefits.

"We argue this does directly violate their right to non-discrimination because of the use of citizenship," David Nolan, another author of the report, told AFP. According to the report, 90 percent of the cases opened as a result of this algorithm turn out not to be fraudulent.

To remedy these shortcomings, Amnesty is calling on the Danish authorities to be more transparent and to allow the algorithms to be audited. Feeding personal data into an algorithmic model in order to identify the risk of fraud needs to be done with greater care, the organisation argued. "We want them to ban the use of all kinds of data regarding citizenship or nationality … which we know discriminate," Nolan added.

Another pitfall of the Danish system's reliance on digital services is that it could lead to the exclusion of marginalised groups such as the elderly and certain foreigners, as it creates a barrier to accessing benefits they may be entitled to, according to Amnesty.

The use of artificial intelligence and algorithms by social services in Western countries has drawn much criticism from rights advocacy groups. In October, 15 organisations, including Amnesty, filed a complaint with French courts over an algorithm used by the French social benefit agency CNAF to detect undue payments.
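The "Model Abroad" is public only in redacted form, but the shape of Amnesty's objection can be sketched. Below is a minimal, hypothetical Python illustration; every feature name, weight, and threshold is invented for the example and none of it comes from the report or from Udbetaling Danmark's actual systems. It shows how a risk model that takes citizenship as a direct input assigns different scores to otherwise identical recipients.

```python
# Hypothetical illustration only: the actual Udbetaling Danmark models are
# redacted, so all features, weights, and thresholds here are invented.
# It demonstrates the objection: with citizenship as a direct input, two
# otherwise identical recipients get different fraud-risk scores.

FLAG_THRESHOLD = 50  # assumed cutoff (in points) for opening a fraud case

def risk_score(record: dict) -> int:
    """Toy additive risk score over made-up features and weights."""
    score = 0
    if record["months_abroad_last_year"] > 3:
        score += 30  # long stays abroad raise the score
    if record["address_changes"] >= 2:
        score += 20  # frequent moves raise the score
    if record["citizenship"] != "DK":
        score += 30  # the contested feature: nationality as a predictor
    return score

resident = {"months_abroad_last_year": 4, "address_changes": 1, "citizenship": "DK"}
migrant = dict(resident, citizenship="PL")  # identical except for citizenship

for person in (resident, migrant):
    score = risk_score(person)
    status = "flagged" if score >= FLAG_THRESHOLD else "not flagged"
    print(person["citizenship"], score, status)  # DK 30 not flagged / PL 60 flagged
```

Read this way, the report's 90 percent figure is a claim about precision: of the cases such a model opens, roughly one in ten turns out to involve actual fraud.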
