Communications of the ACM - Artificial Intelligence, November 26, 2024
AI Must Be Anti-Ableist and Accessible

This article examines how the growing presence of artificial intelligence (AI) in everyday life creates new opportunities to understand how disabled people use these technologies, while also raising concerns that AI may harm inclusion, representation, and equity for disabled communities. It summarizes the discriminatory biases that can arise in AI systems, such as ableist bias, negative stereotypes about disability, and broken promises around accessibility, and notes that these problems compound the discrimination disabled people already face in daily life. It also proposes remedies, including improvements to data collection, algorithm design, and deployment, as well as regulation for algorithmic accessibility, so that AI technologies can genuinely benefit everyone.

🤔 **Unrepresentative data:** Because disabled people have historically been marginalized and underrepresented, the data used to train AI systems carries bias. Addressing this requires collecting diverse data spanning multiple contexts, multiple impairments, and multiple timescales.

⚠️ **Missing and unlabeled data:** AI models trained only on existing large text corpora risk reproducing and amplifying the biases inherent in those corpora. For example, the relative scarcity of accessible mobile apps makes it more likely that AI-generated mobile app code will also be inaccessible.

📊 **Measurement error:** Measurement errors, such as a sensor failing to recognize wheelchair activity as exercise, can exacerbate bias. These errors exist for every major class of sensing and need to be addressed.

🔗 **Inaccessible interactions:** Even when an AI system is carefully designed to minimize bias, its interface, configuration, explanations of how it works, or the verification of its outputs may still be inaccessible, making it difficult for disabled people to use.

⚖️ **Algorithms reinforcing discriminatory policies:** AI systems rely on their training data, which may contain biases or reflect ableist attitudes. For example, subtle or overt ableism can appear when generating images or summarizing text, harming disabled people.

🌍 **Disabled people in AI development:** To keep AI systems from harming disabled communities, the way AI systems are designed, regulated, and deployed must change so that disabled people can contribute their perspectives and expertise, and so that disabled people can enter the technology industry to build and innovate with AI.

📜 **Regulating algorithmic accessibility:** Legislators and government agencies must enact algorithmic accessibility regulations requiring that algorithms meet basic accessibility expectations, such as being interpretable, overrideable, and verifiable, to reduce discriminatory errors and ensure fair use.

The increasing use of artificial intelligence (AI)-based technologies in everyday settings creates new opportunities to understand how disabled people might use these technologies.2 Recent reports by Whittaker et al.,11 Trewin et al.,9 and Guo et al.3 highlight concerns about AI’s potential negative impact on inclusion, representation, and equity for those in marginalized communities, including disabled people. In this Opinion column, we summarize and build on these important and timely works. We define disability in terms of the discriminatory and often systemic problems with available infrastructure’s ability to meet the needs of all people. For example, AI-based systems may have ableist biases, associate disability with toxic content or harmful stereotypes, and make false promises about accessibility or fail to accessibly support verification and validation.2 These problems replicate and amplify biases experienced by disabled people when interacting in everyday life. We must recognize and address them.

Recognizing and Addressing Disability Bias in AI-Based Systems

AI model development must be extended to consider risks to disabled people including:

Unrepresentative data.  When groups are historically marginalized and underrepresented, this is “imprinted in the data that shapes AI systems.”11 Addressing this is not a simple task of increasing the number of categories represented, because identifiable impairments are not static, or homogeneous, nor do they usually occur singly. The same impairment may result from multiple causes and vary across individuals. To reduce bias, we must collect data about people from multiple contexts with multiple impairments over multiple timescales.
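
As a concrete illustration, the sketch below audits a hypothetical dataset for coverage across impairments, contexts, and collection years. The record fields (`impairments`, `context`, `date`) are invented for this example and are not drawn from the column; a real audit would use whatever metadata the dataset actually documents.

```python
from collections import Counter
from datetime import date

# Hypothetical records: each notes self-reported impairment(s), the context of
# collection, and the collection date. Field names are illustrative only.
records = [
    {"impairments": ["low vision"], "context": "home", "date": date(2023, 4, 2)},
    {"impairments": ["low vision", "arthritis"], "context": "work", "date": date(2024, 1, 15)},
    {"impairments": [], "context": "lab", "date": date(2024, 6, 9)},
]

def audit_representation(records):
    """Count how often each impairment, context, and collection year appears,
    so gaps (e.g., no multi-impairment data, a single context) become visible."""
    impairments = Counter(i for r in records for i in r["impairments"])
    multi = sum(1 for r in records if len(r["impairments"]) > 1)
    contexts = Counter(r["context"] for r in records)
    years = Counter(r["date"].year for r in records)
    return {
        "impairments": impairments,
        "multi_impairment_records": multi,
        "contexts": contexts,
        "years": years,
    }

print(audit_representation(records))
```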

Missing and unlabeled data.  AI models trained on existing large text corpora risk reproducing bias inherent in those corpora.2,3 For example, the relative lack of accessible mobile apps8 makes it more likely that AI-generated code for mobile apps will also be inaccessible.
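
One way to catch that failure mode is to lint generated user-interface markup before it ships. The sketch below is a minimal example using Python's standard `html.parser`; it flags only two common gaps (images without alt text, inputs without a labeling hook) and is nowhere near a full WCAG audit.

```python
from html.parser import HTMLParser

class AccessibilityLint(HTMLParser):
    """Minimal sketch: flag two common gaps in generated markup --
    images without alt text and inputs without a labeling hook.
    Real accessibility audits (e.g., against WCAG) cover far more."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.issues.append("img missing alt text")
        if tag == "input" and not (attrs.get("aria-label") or attrs.get("id")):
            self.issues.append("input without aria-label or id for a <label>")

# Markup a code generator might plausibly emit for a simple form.
generated = '<img src="logo.png"><input type="text">'
lint = AccessibilityLint()
lint.feed(generated)
print(lint.issues)
```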

Measurement error.  Measurement error can exacerbate bias.9 For example, a sensor’s failure to recognize wheelchair activity as exercise may lead to bias in algorithms trained on associated data. These errors exist for every major class of sensing.3
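
A simple way to surface such a measurement gap is to disaggregate detection rates by mobility group rather than reporting one overall number. The sketch below does this with invented data; the groups, counts, and outcomes are illustrative, not measurements from any real sensor.

```python
# Illustrative samples: (mobility group, truly exercising?, sensor detected it?)
samples = [
    ("ambulatory", True, True), ("ambulatory", True, True), ("ambulatory", True, False),
    ("wheelchair", True, False), ("wheelchair", True, False), ("wheelchair", True, True),
]

def detection_recall_by_group(samples):
    """Fraction of genuine exercise episodes the sensor detected, per group."""
    stats = {}
    for group, truth, detected in samples:
        if truth:  # only count genuine exercise episodes
            hits, total = stats.get(group, (0, 0))
            stats[group] = (hits + int(detected), total + 1)
    return {group: hits / total for group, (hits, total) in stats.items()}

# A large gap between groups is exactly the bias an aggregate number would hide.
print(detection_recall_by_group(samples))
```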

Inaccessible interactions.  Even if an AI-based system is carefully designed to minimize bias, the interface to that algorithm, its configuration, the explanation of how it works, or the potential to verify its outputs may be inaccessible (for example, Glazko et al.2).

Disability-Specific Harms of AI-Based Technologies

Even the most well-designed systems may cause harm when deployed. It is critical that technologists learn about these harms and how to address them before deploying AI-based systems.

Defining what it means to be “human.”  As human judgment is increasingly replaced by AI, “norms” baked into algorithms that learn most from the most common cases11 become more strictly enforced. One user had to falsify data because “some apps [don’t allow] my height/weight combo for my age.”4 Such systems render disabled people “invisible”11 and amplify existing biases internal to and across othering societal categories.3 AI-based systems are also being used to track the use and allocation of assistive technologies, from CPAP machines for people with sleep apnea, to prosthetic legs,11 deciding who is “compliant enough” to deserve them.

Defining what “counts” as disabled.  Further, algorithms often define disability in historical medical terms.11 However, if you are treated as disabled by those around you, you are legally disabled: the Americans with Disabilities Act does not require a diagnosis (42 U.S.C. § 12101 (a)(1)). Yet AI-based technologies cannot detect how people are treated. AI-based technologies must never be considered sufficient, nor made mandatory, for disability identification or service eligibility.

Exacerbating or causing disability.  AI-based systems may physically harm humans. Examples include activity tracking systems that push workers and increase the likelihood of work-related disability11 and AI-based systems that limit access to critical care resources, resulting in an increased risk of hospitalization or institutionalization.5

Privacy and security.  Disability status is increasingly easy to detect from readily available data such as mouse movements.12 Any system that can detect disability can also track its progression over time, possibly before a person knows they have a diagnosis. This information could be used, without consent or validation, to deny access to housing, jobs, or education, potentially without the knowledge of the impacted individuals.11 Additionally, AI biases may require people with specific impairments to accept reduced digital security, such as the person who must ask a stranger to ‘forge’ a signature at the grocery store “ … because I can’t reach [the tablet].”4 This is not only inaccessible, it is illegal: kiosks and other technologies such as point-of-sale terminals used in public accommodations are covered under Title III of the Americans with Disabilities Act.

Reinforcing ableist policies, standards and norms.  AI systems rely on their training data, which may contain biases or reflect ableist attitudes. For example, Glazko et al.2 describe both subtle and overt ableism that appeared when trying to generate an image and summarize text. These harms also affect disabled people who are not directly using AI, such as biased AI-rankings for resumes that mention disability.1

Recommendations

First and foremost, do no harm: algorithms that put a subset of the population at risk should not be deployed. This requires regulatory intervention, algorithmic research (for example, developing better algorithms for handling outliers)9 and applications research (for example, studying the risks that applications might create for disabled people). We must consider “the context in which such technology is produced and situated, the politics of classification, and the ways in which fluid identities are (mis)reflected and calcified through such technology.”11

The most important step in avoiding this potential harm is to change who builds, regulates, and deploys AI-based systems. We must ensure disabled people contribute their perspective and expertise to the design of AI-based systems. Equity requires that disabled people can enter the technology workforce so they can build and innovate. This requires active participation in leadership positions, access to computer and data science courses, and accessible development environments. The slogan “Nothing about us without us” is not just memorable—it is how a just society works.

Organizations building AI systems must also improve equity in data collection, review, management, storage, and monitoring. As highlighted in President Biden’s AI Bill of Rights,10 equity must be embedded in every stage of the data pipeline: from motivating and paying participants for accessible data and metadata that does not oversimplify disability, to ensuring disabled peoples’ data is not unfairly rejected when minor mistakes occur or due to stringent time limits,7 to ensuring disabled stakeholders participate in and understand their representation in training data, through transparency about and documentation of what is collected and how it is used.9 Community representation can broaden participation in data collection and guide the design of data collection systems, including the prioritization of what data to collect and what not to use.
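
One lightweight way to support that transparency and documentation step is to publish a short, human-readable record alongside each dataset, in the spirit of datasheets for datasets. The fields and values below are purely illustrative, not a standard.

```python
# Minimal, illustrative documentation record for a hypothetical dataset.
dataset_record = {
    "name": "example-interaction-logs",            # hypothetical dataset name
    "collected": "touchscreen interaction traces",
    "participants": "adults with and without motor impairments, paid for their time",
    "consent": "opt-in; participants may review, correct, or withdraw their data",
    "known_gaps": ["few multi-impairment participants", "single geographic region"],
    "permitted_uses": ["accessibility research"],
    "prohibited_uses": ["eligibility, employment, or benefits decisions"],
}

for field, value in dataset_record.items():
    print(f"{field}: {value}")
```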

Legislators and government agencies must enact regulations for algorithmic accessibility. Algorithms should be subject to a basic set of expectations around how they will be assessed for accessibility, just like websites. This will help to address basic access constraints, reduce the types of errors that enforce “normality” rather than honoring heterogeneity, and eliminate errors that gatekeep who is “human.” Consumer consent and oversight concerning best practices are both essential to fair use. AI-based systems should be interpretable, overrideable, and support accessible verifiability of AI-based results during use.2
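
To make those three expectations concrete, the sketch below models a single automated decision that carries a plain-language explanation, exposes the inputs it used, and can be overridden by a human reviewer. The class and field names are invented for illustration; they do not come from any regulation or standard.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewableDecision:
    """Illustrative shape for an interpretable, overrideable, verifiable decision."""
    outcome: str
    explanation: str                                  # plain-language reason (interpretable)
    inputs_used: dict = field(default_factory=dict)   # exposed so results can be verified
    override: Optional[str] = None                    # set by a human reviewer (overrideable)
    override_reason: Optional[str] = None

    def final_outcome(self) -> str:
        return self.override if self.override is not None else self.outcome

# Hypothetical example: an automated denial that a clinician then overrides.
decision = ReviewableDecision(
    outcome="denied",
    explanation="nightly usage hours below the configured threshold",
    inputs_used={"nightly_usage_hours": 3.2},
)
decision.override = "approved"
decision.override_reason = "clinician confirmed medical need"
print(decision.final_outcome())  # -> "approved"
```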

All parties must work together to promote best practices for accessible deployments, including accessible options for interacting with AI. Just as accessible ramps or elevators that are hidden or distant are not acceptable for accessibility in physical spaces, accessible AI-based systems must not create undue burdens in digital spaces nor segregate disabled users.

To gauge progress and identify areas in need of work, the community must develop assessment methods to uncover bias. Many algorithms maximize aggregate metrics that fail to both recognize and address bias.3 Further, we must consider intersections of disability bias with other concerns, such as racial bias.6 Scientific research will be essential to defining appropriate assessment procedures.
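
The sketch below illustrates why aggregate metrics can mislead: overall accuracy looks acceptable while accuracy for the smaller group is far worse. All groups, counts, and outcomes are invented for the example.

```python
# Illustrative evaluation results: (group, prediction correct?)
results = (
    [("non-disabled", True)] * 90 + [("non-disabled", False)] * 5 +
    [("disabled", True)] * 2 + [("disabled", False)] * 3
)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(results)
by_group = {
    group: accuracy([row for row in results if row[0] == group])
    for group in {group for group, _ in results}
}
print(f"overall accuracy: {overall:.2f}")  # 0.92 -- looks fine in aggregate
print(by_group)                            # but only 0.40 for the disabled group
```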

Conclusion

Accessible AI is ultimately a question of values, not technology. Simple inclusion of disabled people is insufficient. We must work to ensure equity in data collection, algorithm access, and in the creation of AI-based systems, even when equity may not be expedient.

The fight for accessible computing provides lessons for meeting these ambitious goals. As the disability rights movement of the 1970s converged with the dawn of the personal computer era, activists urged the computing industry to make computing more accessible. The passage of the Americans with Disabilities Act (ADA) in 1990 provided legal recourse, and the advent of GUIs and the Web in the mid-1990s led to the development of new accessibility tools and guidelines for building accessible systems. These tools made computing more robust, helping users with disability and others alike, while advocates successfully used the ADA to ensure accessibility of many websites.

This combination of advocacy, engagement with industry, regulation, and legal action can be applied to make AI safer for disabled people, and the rest of us. The opacity of AI tools presents unique obstacles, but the AI Bill of Rights10 and more technical federal efforts detailing steps toward appropriate AI design provide initial directions. The pushback from those who hope to profit from AI will undoubtedly be significant, but the costs to those of us who are, or who will become, disabled will be even greater. We cannot train AI on a mythic 99%.
