AiThority 2024年09月17日
Risk in Focus: What Enterprises Should Implement, Before Adopting Open-Source AI
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

文章探讨了开源软件对AI发展的重要性,以及企业在采用开源AI时面临的风险,如数据泄露、法律责任等,并提出了应对措施。

🌐开源软件在商业软件中广泛应用,价值巨大。AI领域也不例外,开源AI与闭源模型竞争,但存在风险,如数据泄露和法律责任问题。

💪企业应评估开源AI模型的法律责任,考虑其与商业模型的相同风险,计算如何提升开源AI工具的安全性和一致性。

📋企业在考虑整合开源AI时,有三个主要工具:进行全面测试、使框架具有适应性、设计并实施严格的治理流程。

🚀AI治理和风险管理越发重要,企业与开源社区的互动将决定开源AI对行业和社会的影响。

When proponents discuss AI’s potential to deliver unprecedented, outsized value, they’re really standing on the shoulders of giants who have developed the open-source software that has enabled researchers and builders to catalyze this technological revolution. As the Cybersecurity & Infrastructure Security Agency (CISA) recently put it, “It’s safe to say that many innovations of the digital age would not have been possible without OSS.” That’s because open-source software (OSS) operates in 90% of commercial software and

generates returns far exceeding the price of its inputs. Harvard Business School suggests OSS costs $4.15 billion per year in development and maintenance, but creates $8.8 trillion in value.

AI is no exception to this rule, as recent advances in the open-source AI space mean that developers no longer need to choose between performance and open source. LLaMa 3.1, for example, outperforms GPT-4o and Claude 3.5 Sonnet on several benchmarks. The fact that open source is keeping pace with closed, proprietary models is great news for innovation. It also moves the needle on transparency; open-source models are more easily interrogated through model weights and training data, serving as a powerful resource for academia, and can level the playing field between incumbents and challengers. Greater transparency further lets an ecosystem of researchers, innovators, and security experts test the substance and integrity of models, enhancing their capabilities and safeguards.

Also Read: What is Return on AI – and How Do Companies Measure It

But while the magnifying potential of AI and OSS converges productively across most use cases and concerns, this nexus is a critical focus for risk management as data breaches are top of mind for consumers. According to the U.S. Internet Crime Complaint Center, the reported number of data breaches tripped between 2021 and 2023, suggesting recent trailblazing technological developments have not hampered malicious activity. It’s not far-flung to consider even more comprehensive breaches in a more AI-driven future, where automated technologies are responsible for even more data—and even more sensitive information. AI’s iterative nature may also make vulnerabilities harder to unwind, augmenting the technological cost of remediation.

Faced with that threat level, enterprises across industries should seriously evaluate the forms of legal recourse that AI models do or don’t deliver; open-source AI models almost invariably offer little, or no, liability protection compared to commercial alternatives. Couple security risks with IP risks—as open-source models may have been trained on copyrighted data—and licensing risks, and the potential costs of open-source AI snowball.

This doesn’t mean enterprises should jettison OSS in the age of AI. If anything, many commercial models harbor some of the same risks as OSS—such as using copyrighted data—though many are less transparent about shortcomings. OSS already operates in the majority of commercial software; that figure is unlikely to change, which should encourage enterprise teams to calculate how they can deploy internal resources to bring promising open-source AI tools up to enterprise-grade levels of security and consistency, while also ensuring that these tools advance the enterprise’s sensitive data.

It’s important to distinguish models that are “free to be used” from those that are developed in an open, participatory way. Transparency about which models you’re using—and what kind of data was used to train those models, how they behave, and what their potential shortcomings and vulnerabilities are is critical to mitigating ethical AI risks. Enterprises have three primary tools at their disposal while considering—and then integrating—open-source AI into their operations.

First, integrate thorough testing into AI processes and adoption. When using any kind of LLM, teams must thoroughly test its behavior in the context in which it will be deployed; this is especially critical for open-source models, which may not have undergone the same kind of rigorous testing or red-teaming as closed-source models. That includes evaluating the security standards of the foundations supporting the OSS; some fail to even abide by “basic API security patterns,” opening their users to major threats, including from state-sponsored hackers.

Also Read: AI and Big Data Governance: Challenges and Top Benefits

Second, make adaptability a core part of the framework. The landscape of AI assessment tools is constantly evolving. Numerous open-source packages currently exist that explore dataset and model characteristics alongside dimensions like fairness, security, privacy, and performance. Moreover, academic research in these fields is a thriving endeavor that continuously generates new concepts and frameworks. In short, measurement best practices are changing rapidly, and the challenge of assessing AI systems is far from being solved. This, combined with the rapid development of AI models themselves, requires an assessment framework to be highly adaptable.

Third, design and implement rigorous governance processes. Because open-source AI is so widely available, it’s all the more critical to have rigorous governance processes in place to address shortcomings and vulnerabilities—which can easily be discovered and exploited by bad actors. Tracking models and model versions across applications and projects can help ensure timely compliance with governance demands, especially in the face of a dynamic technological and regulatory environment. Changes to underlying models require further re-testing. AI governance is critical to address the new and emerging risks that most organizations don’t have processes to deal with already, in addition to shoring up the existing governance processes (like security and privacy workflows) to address new concerns related to generative AI.

Also Listen: AI Inspired Series by AiThority.com: Featuring Bradley Jenkins, Intel’s EMEA lead for AI PC & ISV strategies

AI governance and managing risk is clearly coming more to the forefront across Big Tech boardrooms, which will ultimately deliver benefits for industry and society. Ultimately, whether open-source AI delivers outsized benefits—or creates a costly crucible of vulnerable tools—for industry and society writ large will depend on how enterprises interact with open-source communities. Dovetailing OSS’s scrappiness and efficiency with commercial-grade vigilance and resources can sustain a collaborative ethos in the AI era, while also continuing to ensure democratic access to secure, paradigm-shifting tools.

[To share your insights with us as part of editorial or sponsored content, please write to psen@itechseries.com]

The post Risk in Focus: What Enterprises Should Implement, Before Adopting Open-Source AI appeared first on AiThority.

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

开源AI 企业风险 治理流程 AI发展
相关文章