Unite.AI, November 26, 2024
AI Can Be Friend or Foe in Improving Health Equity. Here is How to Ensure it Helps, Not Harms

Artificial intelligence (AI) shows enormous promise in healthcare: it can streamline care delivery, enable personalized medicine, and drive breakthrough discoveries. Yet AI can also worsen healthcare inequities through inherent bias in data, algorithms, and users. The article explores how AI can be used to close healthcare gaps instead, for example by ensuring diverse representation in clinical trials, rooting out implicit bias in algorithms, providing prevention strategies targeted at underserved populations, and streamlining administrative tasks. The author stresses that AI practitioners, data scientists, and algorithm developers are responsible for ensuring inclusiveness, data diversity, and equitable use of AI, and calls on relevant bodies to establish standards and frameworks that keep AI from deepening healthcare inequities.

🤔**AI promotes equity in clinical trials:** By analyzing population data, AI can identify bias and gaps in demographic coverage within datasets, helping ensure diverse representation, reduce bias in clinical trials, and optimize outcomes. In the COVID vaccine trials, for example, this kind of analysis helped surface the missing data on pregnant women and supported conclusions about the vaccines' safety and effectiveness.

👩‍⚕️**AI helps eliminate implicit bias in care:** AI can help identify and correct implicit bias in healthcare, for example by analyzing data to uncover possible racial bias in algorithms and updating them to exclude race or ethnicity as a risk factor, making medical decisions fairer. One example is the update to an algorithm guiding delivery methods for expectant mothers, which removed race-driven, inequitable outcomes.

📊**AI provides equitable prevention strategies:** AI can help forecast health risks for underserved populations and produce personalized risk assessments so interventions are better targeted. For example, including more data from female patients in cardiovascular prediction models can improve understanding of women's cardiovascular risk and yield more effective prevention strategies.

⚙️**AI streamlines healthcare administration:** AI can automate administrative tasks such as claims coding, diagnosis-code validation, and pre-authorization workflows, lowering operating costs, improving efficiency, and freeing clinicians to spend more time with patients, which makes care more accessible and affordable.

🤝**Building an equitable AI healthcare ecosystem together:** AI practitioners, data scientists, and algorithm developers should work together to ensure equitable use of AI in healthcare, and relevant bodies should establish standards and frameworks, such as standards for data exchange and accuracy, to prevent AI from deepening healthcare inequities, promote health-data sharing, eliminate bias in care, and use AI to close healthcare gaps.

Healthcare inequities and disparities in care are pervasive across socioeconomic, racial and gender divides. As a society, we have a moral, ethical and economic responsibility to close these gaps and ensure consistent, fair and affordable access to healthcare for everyone.

Artificial Intelligence (AI) can help address these disparities, but it is also a double-edged sword. Certainly, AI is already helping to streamline care delivery, enable personalized medicine at scale, and support breakthrough discoveries. However, inherent bias in the data, algorithms, and users could worsen the problem if we're not careful.

That means those of us who develop and deploy AI-driven healthcare solutions must be careful to prevent AI from unintentionally widening existing gaps, and governing bodies and professional associations must play an active role in establishing guardrails to avoid or mitigate bias.

Here is how leveraging AI can bridge inequity gaps instead of widening them.

Achieve equity in clinical trials

Many new drug and treatment trials have historically been biased in their design, whether intentionally or not. For example, it wasn't until 1993 that women were required by law to be included in NIH-funded clinical research. More recently, COVID vaccines were never intentionally trialed in pregnant women; it was only because some trial participants were unknowingly pregnant at the time of vaccination that we learned the vaccines were safe for them.

A challenge with research is that we do not know what we do not know. Yet, AI helps uncover biased data sets by analyzing population data and flagging disproportional representation or gaps in demographic coverage. By ensuring diverse representation and training AI models on data that accurately represents targeted populations, AI helps ensure inclusiveness, reduce harm and optimize outcomes.
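As a minimal sketch of the kind of check described above, the snippet below compares each group's share of a trial roster against a reference-population benchmark and flags under-represented groups. The group labels, benchmark shares, and tolerance here are all illustrative assumptions, not figures from any real trial.

```python
from collections import Counter

def coverage_gaps(records, benchmark, tolerance=0.5):
    """Flag demographic groups whose share of the dataset falls below
    `tolerance` times their share of the reference population."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in benchmark.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            gaps[group] = {"observed": observed, "expected": expected}
    return gaps

# Illustrative trial roster vs. a hypothetical population benchmark
trial = ([{"group": "male"}] * 75 + [{"group": "female"}] * 20
         + [{"group": "pregnant"}] * 5)
benchmark = {"male": 0.49, "female": 0.48, "pregnant": 0.03}
print(coverage_gaps(trial, benchmark))  # flags "female" as under-represented
```

A production version would slice on many attributes at once (age, race, comorbidities) rather than a single field, but the principle of comparing observed against expected representation is the same.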

Ensure equitable treatments

It’s well established that Black expectant mothers who experience pain and complications during childbirth are often ignored, resulting in a maternal mortality rate 3X higher for Black women than non-Hispanic white women regardless of income or education. The problem is largely perpetuated by inherent bias: there’s a pervasive misconception among medical professionals that Black people have a higher pain tolerance than white people.

Bias in AI algorithms can make the problem worse: Harvard researchers discovered that a common algorithm predicted that Black and Latina women were less likely to have successful vaginal births after a C-section (VBAC), which may have led doctors to perform more C-sections on women of color. Yet researchers found that “the association is not supported by biological plausibility,” suggesting that race is “a proxy for other variables that reflect the effect of racism on health.” The algorithm was subsequently updated to exclude race or ethnicity when calculating risk.

This is a perfect application for AI to root out implicit bias and suggest (with evidence) care pathways that may have previously been overlooked. Instead of continuing to practice “standard care,” we can use AI to determine if those best practices are based on the experience of all women or just white women. AI helps ensure our data foundations include the patients who have the most to gain from advancements in healthcare and technology.

While there may be conditions where race and ethnicity could be impactful factors, we must be careful to know how and when they should be considered and when we’re simply defaulting to historical bias to inform our perceptions and AI algorithms.
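One concrete way to probe "how and when" race is driving a model is a counterfactual audit: flip only the race field and measure how far the predicted risk moves. The sketch below uses a made-up VBAC-style scoring function; the score formula and its 0.15 race penalty are invented purely to demonstrate the audit, not taken from the algorithm discussed above.

```python
def race_sensitivity(score, patients, groups):
    """Counterfactual audit: flip only the race field and measure how
    much the predicted risk moves. A large average shift means the
    model is using race directly rather than clinical factors."""
    deltas = []
    for p in patients:
        base = score(p)
        for g in groups:
            if g != p["race"]:
                deltas.append(abs(score(dict(p, race=g)) - base))
    return sum(deltas) / len(deltas)

# Hypothetical risk score that (wrongly) penalizes by race
def biased_score(p):
    penalty = 0.15 if p["race"] != "white" else 0.0
    return 0.3 + 0.1 * p["prior_csections"] + penalty

patients = [{"race": "white", "prior_csections": 1},
            {"race": "black", "prior_csections": 1}]
print(race_sensitivity(biased_score, patients, ["white", "black"]))  # ~0.15
```

A score that ignores race entirely would yield zero sensitivity; any nonzero result tells the auditor exactly how much predicted risk is attributable to the race field alone.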

Provide equitable prevention strategies

AI solutions can easily overlook certain conditions in marginalized communities without careful consideration for potential bias. For example, the Veterans Administration is working on multiple algorithms to predict and detect signs of heart disease and heart attacks. This has tremendous life-saving potential, but the majority of the studies have historically not included many women, for whom cardiovascular disease is the number one cause of death. Therefore, it’s unknown whether these models are as effective for women, who often present with much different symptoms than men.
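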

Including a proportionate number of women in this dataset could help prevent some of the 3.2 million heart attacks and half a million cardiac-related deaths annually in women through early detection and intervention. Similarly, new AI tools are removing the race-based algorithms in kidney disease screening, which have historically excluded Black, Hispanic and Native Americans, resulting in care delays and poor clinical outcomes.

Instead of excluding marginalized individuals, AI can actually help to forecast health risks for underserved populations and enable personalized risk assessments to better target interventions. The data may already be there; it’s simply a matter of “tuning” the models to determine how race, gender, and other demographic factors affect outcomes—if they do at all.
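A simple version of that "tuning" check is stratified evaluation: compute a metric like recall separately for each demographic group and compare. The sketch below uses toy predictions and labels invented for illustration; real evaluations would use held-out clinical data.

```python
def subgroup_recall(predictions, labels, groups):
    """Per-group recall: of the patients in each group who truly had
    the condition, what fraction did the model catch?"""
    stats = {}
    for pred, label, group in zip(predictions, labels, groups):
        c = stats.setdefault(group, {"tp": 0, "pos": 0})
        if label:
            c["pos"] += 1
            if pred:
                c["tp"] += 1
    return {g: c["tp"] / c["pos"] for g, c in stats.items() if c["pos"]}

# Illustrative: a detector that misses more true cases in women
preds  = [1, 1, 0, 1, 0, 0]
labels = [1, 1, 1, 1, 1, 1]
sexes  = ["m", "m", "m", "f", "f", "f"]
print(subgroup_recall(preds, labels, sexes))  # recall gap between 'm' and 'f'
```

A large recall gap between groups, as in this toy data, is exactly the signal that a heart-disease model trained mostly on men may be failing the women it was never tuned for.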

Streamline administrative tasks

Aside from directly affecting patient outcomes, AI has incredible potential to accelerate workflows behind the scenes to reduce disparities. For example, companies and providers are already using AI to fill in gaps on claims coding and adjudication, validating diagnosis codes against physician notes, and automating pre-authorization processes for common diagnostic procedures.

By streamlining these functions, we can drastically reduce operating costs, help provider offices run more efficiently and give staff more time to spend with patients, thus making care exponentially more affordable and accessible.

We each have an important role to play

The fact that we have these incredible tools at our disposal makes it even more imperative that we use them to root out and overcome healthcare biases. Unfortunately, there is no certifying body in the US that regulates efforts to use AI to “unbias” healthcare delivery, and even for those organizations that have put forth guidelines, there’s no regulatory incentive to comply with them.

Therefore, the onus is on us as AI practitioners, data scientists, algorithm creators and users to develop a conscious strategy to ensure inclusivity, diversity of data, and equitable use of these tools and insights.

To do that, accurate integration and interoperability are essential. With so many data sources—from wearables and third-party lab and imaging providers to primary care, health information exchanges, and inpatient records—we must integrate all of this data so that key pieces are included, regardless of formatting or source. The industry needs data normalization, standardization and identity matching to be sure essential patient data is included, even with disparate name spellings or naming conventions based on various cultures and languages.
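Identity matching of the kind described can start with simple name normalization. The sketch below, using only Python's standard `unicodedata` module, folds case, strips accents, and drops punctuation so spelling variants of the same name compare equal; real record-linkage systems layer far more on top (phonetic codes, birth dates, probabilistic matching).

```python
import unicodedata

def normalize_name(name):
    """Fold case, strip accents and punctuation so the same patient can
    match across systems with different spelling conventions."""
    decomposed = unicodedata.normalize("NFKD", name)
    no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
    kept = "".join(c for c in no_accents.casefold() if c.isalnum() or c.isspace())
    return " ".join(kept.split())  # collapse runs of whitespace

print(normalize_name("José  O'Neill") == normalize_name("JOSE ONEILL"))  # True
```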

We must also build diversity assessments into our AI development process and monitor for “drift” in our metrics over time. AI practitioners have a responsibility to test model performance across demographic subgroups, conduct bias audits, and understand how the model makes decisions. We may have to go beyond race-based assumptions to ensure our analysis represents the population we’re building it for. For example, members of the Pima Indian tribe who live in the Gila River Reservation in Arizona have extremely high rates of obesity and Type 2 diabetes, while members of the same tribe who live just across the border in the Sierra Madre mountains of Mexico have starkly lower rates of obesity and diabetes, proving that genetics aren’t the only factor.
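Monitoring for demographic drift can be as simple as comparing each group's share of incoming data against a baseline and alerting when the gap widens. The snippet below is an illustrative sketch; the group shares and the 0.05 threshold are invented values, not recommendations.

```python
def demographic_drift(baseline, current, threshold=0.05):
    """Flag groups whose share of incoming data has shifted from the
    baseline by more than `threshold` (absolute difference in share)."""
    drifted = {}
    for group in set(baseline) | set(current):
        delta = abs(current.get(group, 0.0) - baseline.get(group, 0.0))
        if delta > threshold:
            drifted[group] = round(delta, 3)
    return drifted

# Hypothetical shares at model launch vs. in this month's data feed
baseline = {"black": 0.13, "white": 0.60, "hispanic": 0.18, "other": 0.09}
current  = {"black": 0.05, "white": 0.70, "hispanic": 0.17, "other": 0.08}
print(demographic_drift(baseline, current))  # flags the groups that shifted
```

The same comparison can be run on per-subgroup performance metrics rather than raw shares, which catches the case where representation holds steady but model quality quietly degrades for one group.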

Finally, we need organizations like the American Medical Association, the Office of the National Coordinator for Health Information Technology, and specialty organizations like the American College of Obstetricians and Gynecologists, American Academy of Pediatrics, American College of Cardiology, and many others to work together to set standards and frameworks for data exchange and accuracy to guard against bias.

By standardizing the sharing of health data and expanding on HTI-1 and HTI-2 to require developers to work with accrediting bodies, we help ensure compliance and correct for past errors of inequity. Further, by democratizing access to complete, accurate patient data, we can remove the blinders that have perpetuated bias and use AI to resolve care disparities through more comprehensive, objective insights.

