Fortune | FORTUNE October 16, 2024
AI regulation gets a bad rap—but lawmakers around the world are doing a decent job so far

 

The article examines the EU's AI regulations, including the concerns they have raised, the lessons they offer other countries, their emphasis on trust and transparency, their risk-based approach, the space they leave for innovators, and related issues such as supporting areas of regional strength and strengthening public education.

🎯 The EU is an active mover on tech regulation, and the AI Act is the latest in a series of laws. Though it has raised many concerns, it also has upsides, such as offering lessons for other countries to learn from.

💡 The EU puts trust and transparency at the heart of its AI rules, requiring disclosure of AI-generated content and prevention of illegal content, on the view that this builds trust between users and businesses.

🎯 The AI Act takes a risk-based approach: AI applications at different risk levels face different requirements. AI systems in healthcare must undergo stringent testing, while low-risk chatbots need only inform users that they are interacting with AI.

💡 The act gives businesses room to innovate through "regulatory sandboxes," and the author argues that other countries should learn from the EU's approach and weigh the principles of trust, transparency, and innovation in their own AI legislation.

🎯 Beyond regulation, countries should also consider supporting areas of comparative advantage and strengthening public education, to dispel public misconceptions about AI and avoid a new digital divide.

Utter the word “regulation” in certain tech industry circles, and you’ll feel an immediate chilling effect. Fears about stifled innovation, burdensome costs, and curtailed growth are typical responses from skeptics concerned about government overreach.

However, the EU, keen to position itself at the vanguard of tech regulation and as a champion of individual rights and consumer protection, pays little attention to such concerns.

The implementation of the EU’s AI Act is the most recent in a series of laws emanating from Brussels that have divided opinion, hot on the heels of the General Data Protection Regulation (GDPR) and the Digital Markets Act (DMA), which came into force in 2018 and 2023 respectively.

As expected, the AI Act is receiving a lot of attention and, once again, there are concerns. Will promising European AI startups be able to shoulder the costs of the new regime? Will overregulation put EU businesses at a disadvantage to American and Chinese competitors? Will Europeans be deprived of new AI services from abroad, as international businesses opt against rolling out their new AI services because compliance costs are deemed too high?

As the founder of the AI-powered consumer and market intelligence company QUID, I have scaled a business through the implementation of both the GDPR and the DMA, and I would encourage the AI Act’s doubters to take a pause.

Yes, the act undoubtedly has flaws, but its pros far outweigh its cons. And, crucially, as one of the first major laws on AI globally, it presents a series of learnings that other countries should keep in mind when considering their own AI legislation.

Trust is critical to AI’s evolution

Let’s get this straight: AI is doomed if people don’t trust it. At QUID, we have a ring-side seat to public opinion on AI—and sentiment, as it stands, suggests there’s a major reassurance job to be done.

We were one of the lead contributors to Stanford University’s AI Index Annual Report, which found that 52% of people are nervous about AI products and services, while only half trust that companies that use artificial intelligence will protect their personal data. It’s for this reason that the EU is absolutely correct to put trust and transparency at the heart of its approach to AI.

Are the act’s requirements onerous? In my view, no. Having to disclose that content was generated by AI, designing AI models to prevent the generation of illegal content, and publishing summaries of copyrighted data used for training all seem perfectly reasonable asks. Transparency and data traceability breed trust—and that’s good for our users and businesses.

A risk-based approach is the logical approach

AI could have profound benefits for healthcare, productivity, and education—to name but a few areas. At the same time, allowing unchecked development of AI presents a significant threat to the safety and rights of individuals. Who wants to drive a car using unchecked AI? Who wants to go through an AI-enabled recruitment process at risk of bias and discrimination? No one I know.

That’s why the act’s risk-based approach is the logical path to follow. For example, AI systems deployed in healthcare should rightly go through stringent testing and checks. Lower-risk AI applications such as chatbots are simply required to inform users that they’re interacting with AI. Minimal-risk applications, such as those that might be used in video games, require little to no oversight.

Innovators must be given the space to innovate

Despite our research revealing that overall corporate investment in AI across the globe has cooled, the numbers being directed into AI remain enormous, with nearly $96 billion invested in 2023 alone and 1,812 AI companies founded that year, up over 40% from 2022.

And we expect to see growth in the number of newly funded AI companies around the world. We should welcome private investment in AI, but it’s important that businesses in receipt of that investment are given the space and time to innovate, without fear of incurring fines or penalties. Through its provision of “regulatory sandboxes”—frameworks set up by regulators that allow businesses to trial innovative new technologies in a controlled and monitored environment—this is precisely what the EU’s AI Act allows for. So, while some may say EU regulation stifles innovation, I disagree.

No country or superstate is ever going to get AI legislation 100% right, but around the world, countries are starting to regulate AI. In the U.S., for example, there were 25 AI-related regulations in 2023, up from just one in 2016. Globally, mentions of AI in legislative proceedings nearly doubled, from 1,247 in 2022 to 2,175 in 2023. As more countries gear up to regulate, they would do well to learn from the EU’s approach. In the AI Act, they have key principles of trust, transparency, and innovation that they may well want to consider for their own laws.

But beyond the confines of regulation, there are two other areas that countries and regions considering AI legislation should think about.

The first is how they can support areas in which they have a comparative advantage. Our research for Stanford’s AI Index Annual Report finds clear differences in regional investment levels in AI by focus area. For example, the EU and U.K. invested nearly double ($540 million) the level of the U.S. ($280 million) in AI for cybersecurity and data protection. If countries have a pedigree and notable skills in specific areas of AI development, it makes sense for legislators to think about how they can create policy and regulatory environments that support them.

The second is perhaps one of the major missing pieces in the jigsaw puzzle: public education. There is still a huge job to do in educating societies about the benefits of AI, its use cases, and what it means for individuals. This is the flip side of the trust debate, so it’s critical that governments educate as well as legislate. Public education campaigns should be playing a much greater role than they currently are.

While forcing businesses to be transparent and ethical in their use of AI is no doubt a good thing, a large part of building trust successfully is having an informed public that feels confident in its understanding of how AI will affect people and why it’s a good thing. If we don’t demystify AI, we could fail to bring large parts of society along with us for the journey and risk a new digital divide.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
