AiThority September 18, 2024
Practical steps to create your own AI governance roadmap

 


If you wondered why governance is suddenly a hot topic in the world of artificial intelligence (AI), look no further than the European Parliament. As AI and intelligent automation (IA) become more integral to daily life, from chatbots answering client questions to appointment scheduling that improves customer experience, governance is required to address their ethical, legal, and societal implications.

The EU’s AI Act, seen as the world’s first comprehensive law safeguarding the rights of users, also looks set to bring ethical regulation to the ever-evolving work of AI application developers in Europe and beyond.

With the EU and a number of other countries creating groundbreaking AI governance legislation, it is a timely opportunity for business leaders to prepare their own roadmap for what could be far-reaching change in the way we use AI, change that will transcend daily life and borders.

In the same way most banks, auditors, insurance firms, and their supply chains are already geared up to meet robust existing legislation such as the General Data Protection Regulation (GDPR) in Europe and Sarbanes-Oxley in the U.S., AI governance will require a similar approach. The proposed penalties for non-compliance are potentially huge: the maximum fine under the proposed EU AI Act is EUR 30 million, or six percent of worldwide annual corporate turnover, whichever is higher.
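That penalty formula is simple enough to sketch in code. The function below is purely illustrative; the name and structure are ours, not the Act's, and it only reproduces the "whichever is higher" rule as cited above:

```python
def max_eu_ai_act_fine(annual_turnover_eur: float) -> float:
    """Return the higher of EUR 30 million or 6% of worldwide annual
    corporate turnover, per the proposed EU AI Act penalty cap cited above."""
    FLAT_CAP_EUR = 30_000_000
    TURNOVER_RATE = 0.06
    return max(FLAT_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 60 million,
# since 6% of turnover exceeds the EUR 30 million floor.
print(max_eu_ai_act_fine(1_000_000_000))
```

For smaller firms the flat EUR 30 million floor dominates, which is why the exposure is disproportionate for them relative to turnover.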

Ahead of proactive AI governance taking effect, a number of measures can be taken to assess business workflows and identify where AI technology should be used and what the potential business risks are. Through our work with customers and the wider industry, SS&C Blue Prism is well positioned to provide guidance on governance roadmaps for businesses to follow, so they are best placed to meet new requirements.


The need for urgency

AI governance will soon impact everything from digital manufacturing automation to customer chatbots and apps mimicking back-office tasks of human workers. Central and regional government offices, law and healthcare industries using AI to extract data, fill in forms, or move files will also need to comply. Rules Engine APIs, microservices and low-code apps are also affected.

So if your business uses robotic process automation, basic process improvement and macros for workflow management, intelligent character recognition that converts handwriting into computer-readable text, or deep-level AI and machine learning, you need to comply.

Transparency and authenticity are also hugely important to the way consumers view and interact with brands, especially Gen Z customers. Making up 32% of the global population and with a spending power of $44bn, Gen Z have high expectations of brands and will only support, and work for, those that share their values.

Aspects of automation will also be covered by future AI legislation, so companies need to closely examine how they use intelligent automation execution, and ensure teams meet regulatory needs as they continuously discover, improve, and experiment with automated tasks/processes, BPM data analysis, enhanced automations, and business-driven automations.

The good news is that, by being able to create an auditable digital trail across everything it touches, intelligent automation is the ideal vehicle for AI. Its ability to increase efficiencies across workflows is well known, and having full, auditable insight into actions and decisions is a superpower in itself.

How to establish AI governance

As always, whether it’s data retention or how a business application uses AI, safeguards are required across the AI lifecycle, including record-keeping that documents the processes where AI is used, to ensure transparency.
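One lightweight way to implement that kind of record-keeping is to wrap each AI call so its inputs and outputs are appended to an audit log. This is a minimal sketch under our own assumptions, a JSON-lines log file and invented names (`audited`, `answer`); it illustrates the idea, not any prescribed standard:

```python
import functools
import json
import time

def audited(component: str, log_path: str = "ai_audit_log.jsonl"):
    """Decorator that appends one JSON line per call to an AI component,
    recording when it ran, what it was given, and what it returned."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            record = {
                "timestamp": time.time(),
                "component": component,
                "function": fn.__name__,
                "inputs": repr((args, kwargs)),
                "output": repr(result),
            }
            with open(log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator

@audited("chatbot")
def answer(question: str) -> str:
    # Stand-in for a real model call
    return "Our office hours are 9-5."
```

Because every call lands in an append-only file, an auditor can later reconstruct which AI component produced which output, the digital trail described above.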

By having a robust AI governance framework in place, organizations can instill accountability, responsibility, and oversight throughout the AI development and deployment process. This, in turn, fosters ethical and transparent AI practices, enhancing trust among users, customers, and the public.

Ultimately, when it comes to governance, everyone has responsibility – from the CEO and chief information officer to the employees. It starts with ensuring internal guidelines for regulatory compliance, security, and adherence to your organization’s values. There are a few ways to establish and maintain an AI governance model:

AI governance frameworks

Those disregarding AI governance run the risk of data leakage, fraud, and bypassed privacy laws, so any organization utilizing AI will be expected to maintain transparency, compliance, and standardization throughout their processes – a challenge as technical standards are still in the making.

The field of AI ethics and governance is still evolving, and various stakeholders, including governments, companies, academia, and civil society, continue to work together to establish guidelines and frameworks for responsible AI development and deployment.

There are several real-world examples of AI governance that, while they differ in approach and scope, all address the ethical, legal, and societal implications of artificial intelligence. Extracts from a few notable ones follow:

The EU’s GDPR, while not exclusively focused on AI, includes data protection and privacy provisions related to AI systems.

Additionally, the Partnership on AI and Montreal Declaration for Responsible AI – developed at the International Joint Conference on Artificial Intelligence – both focus on research, best practices, and open dialogue in AI development.

Many tech companies have developed their own AI ethics guidelines and principles. For instance, Google’s AI Principles outline its commitment to developing AI for social good, avoiding harm, and ensuring fairness and accountability. Other companies like Microsoft, IBM, and Amazon have also released similar guidelines.

Some countries have developed national AI strategies that include considerations for governance. Right now, Canada’s “Pan-Canadian AI Strategy” emphasizes the responsible development and use of AI to benefit society, including initiatives related to AI ethics, transparency, and accountability.


14 steps to governance greatness

Ensuring AI governance in your organization involves establishing processes, policies, and practices that promote the responsible development, deployment, and use of artificial intelligence.

At the very least, government departments and companies using AI will be required to include AI risk and bias checks as part of regular mandatory system audits. In addition to data security and forecasting, there are several strategic approaches organizations can employ when establishing AI governance.


Ongoing commitment to AI governance

Remember that AI governance is an ongoing process that requires commitment from leadership, alignment with organizational values, and a willingness to adapt to changes in technology and society. Well-planned governance strategies are essential when working with this ongoing evolution to ensure your organization understands the legal requirements for using these machine learning technologies.

Setting up safety regulations and governance policy regimes is also key to keeping your data secure, accurate and compliant. By taking these steps, you can help ensure that your organization develops and deploys AI in a responsible and ethical manner.

