MIT Technology Review » Artificial Intelligence · January 22
Implementing responsible AI in the generative age

Many companies have experimented with AI but have not fully captured its value. Problems with the accuracy, fairness, and security of AI systems are the main obstacles. Responsible AI holds that AI systems must be fair, transparent, and beneficial to society before they can be widely adopted. The US National Institute of Standards and Technology identifies validity, reliability, safety, accountability, transparency, explainability, privacy, and fairness as the key elements of AI trustworthiness. Although most business leaders consider responsible AI critical, only a minority of companies are truly prepared to implement it. In the generative AI era, companies need best practices such as cataloging models and data, implementing governance controls, assessing risk, and training employees to put responsible AI into practice effectively.

✅ Responsible AI is the cornerstone of trustworthy enterprise AI systems: it emphasizes fairness, transparency, and social benefit, and is a precondition for broad AI adoption.

🛡️ The US National Institute of Standards and Technology identifies key elements of AI trustworthiness, including validity, reliability, safety, accountability, transparency, explainability, privacy, and fairness; these elements are the foundation for building reliable AI systems.

📊 Although 87% of surveyed executives rate responsible AI a high or medium priority, only 15% of companies say they are fully prepared to adopt effective responsible AI practices, showing the gap between principle and practice.

🚀 Companies need a set of best practices to implement responsible AI, including cataloging AI models and data, implementing governance controls, conducting risk assessments, providing employee training, and elevating it to a leadership priority to ensure the transformation succeeds.

Many organizations have experimented with AI, but they haven’t always gotten the full value from their investments. A host of issues standing in the way center on the accuracy, fairness, and security of AI systems. In response, organizations are actively exploring the principles of responsible AI: the idea that AI systems must be fair, transparent, and beneficial to society for them to be widely adopted.

When responsible AI is done right, it unlocks trust and therefore customer adoption of enterprise AI. According to the US National Institute of Standards and Technology, the essential building blocks of AI trustworthiness include:

- Validity and reliability
- Safety
- Accountability
- Transparency
- Explainability
- Privacy
- Fairness

To investigate the current landscape of responsible AI across the enterprise, MIT Technology Review Insights surveyed 250 business leaders about how they’re implementing principles that ensure AI trustworthiness. The poll found that responsible AI is important to executives, with 87% of respondents rating it a high or medium priority for their organization.

A majority of respondents (76%) also say that responsible AI is a high or medium priority specifically for creating a competitive advantage. But relatively few have figured out how to turn these ideas into reality. We found that only 15% of those surveyed felt highly prepared to adopt effective responsible AI practices, despite the importance they placed on them. 

Putting responsible AI into practice in the age of generative AI requires a series of best practices that leading companies are adopting. These practices can include cataloging AI models and data and implementing governance controls. Companies may benefit from conducting rigorous assessments, testing, and audits for risk, security, and regulatory compliance. At the same time, they should also empower employees with training at scale and ultimately make responsible AI a leadership priority to ensure their change efforts stick. 
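The cataloging and governance practices described above can be sketched in code. The following is a minimal, hypothetical illustration (the class and field names are assumptions, not part of any standard): a model catalog records each AI model alongside its accountable owner and training data, and a governance gate keeps unreviewed models out of deployment until a risk assessment approves them.

```python
# Hypothetical sketch: a minimal AI model catalog with a governance gate.
# Names (ModelRecord, ModelCatalog, risk_level) are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str                       # accountable team (supports accountability)
    training_data: list[str]         # datasets used, for provenance/transparency
    risk_level: str = "unreviewed"   # set later by a risk assessment
    approved: bool = False           # governance gate before deployment


class ModelCatalog:
    def __init__(self) -> None:
        self._records: dict[tuple[str, str], ModelRecord] = {}

    def register(self, record: ModelRecord) -> None:
        """Catalog a model so governance controls can be applied to it."""
        self._records[(record.name, record.version)] = record

    def approve(self, name: str, version: str, risk_level: str) -> None:
        """Record the outcome of a risk assessment and clear the model for use."""
        rec = self._records[(name, version)]
        rec.risk_level = risk_level
        rec.approved = True

    def deployable(self) -> list[ModelRecord]:
        """Only models that have passed the governance gate may ship."""
        return [r for r in self._records.values() if r.approved]
```

In this sketch a newly registered model is not deployable until `approve` records a risk level, mirroring the assess-then-deploy control the article describes; a real implementation would add audit logs, regulatory-compliance checks, and access controls.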

“We all know AI is the most influential change in technology that we’ve seen, but there’s a huge disconnect,” says Steven Hall, chief AI officer and president of EMEA at ISG, a global technology research and IT advisory firm. “Everybody understands how transformative AI is going to be and wants strong governance, but the operating model and the funding allocated to responsible AI are well below where they need to be given its criticality to the organization.” 

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

