AiThority, September 17, 2024
83 Percent of Organizations Use AI to Generate Code, Despite Mounting Security Concerns

Venafi has released a report on the risks and challenges of AI-generated and open source code. Most security leaders are concerned about AI-generated code: security teams cannot keep pace with development speed, governance gaps persist, and the report also covers the potential risks of open source code and the importance of code signing.

🎯 Most security leaders are concerned about the use of AI-generated code within their organizations. Developers' use of AI to generate code is now widespread, but it carries security risks: developers may become over-reliant on AI, code quality is difficult to check effectively, and AI may pull in outdated, unmaintained open source libraries.

🚀 Security teams struggle to keep up with the pace of AI-driven development, leaving them feeling out of control and putting businesses at risk; many believe AI-developed code will lead to security problems.

🕳️ Governance gaps exist: most security leaders find it difficult to govern the safe use of AI, and fewer than half of companies have policies in place to ensure AI is used safely.

💡 Open source code is heavily used and carries potential risks. While most security leaders trust code in open source libraries, verifying the security of every line is impractical, so they believe code signing should be used to ensure it can be trusted.

New Venafi Research Reveals AI- and Open Source-Powered Development Outpacing Security — With Many Security Leaders Wanting to Ban AI Code

Venafi, the leader in machine identity management, today released a new research report, Organizations Struggle to Secure AI-Generated and Open Source Code. The report explores the risks of AI-generated and open source code and the challenges of securing it amidst hyper-charged development environments.



A survey of 800 security decision-makers across the U.S., U.K., Germany and France revealed that nearly all (92%) security leaders have concerns about the use of AI-generated code within their organization. Other key survey findings include:

    Tension Between Security and Developer Teams: Eighty-three percent of security leaders say their developers currently use AI to generate code, with 57% saying it has become common practice. However, 72% feel they have no choice but to allow developers to use AI to remain competitive, and 63% have considered banning the use of AI in coding due to the security risks.

    Inability to Secure at AI Speed: Sixty-six percent of survey respondents report it is impossible for security teams to keep up with AI-powered developers. As a result, security leaders feel they are losing control and that businesses are being put at risk, with 78% believing AI-developed code will lead to a security reckoning and 59% losing sleep over the security implications of AI.

    Governance Gaps: Nearly two-thirds (63%) of security leaders think it is impossible to govern the safe use of AI in their organization, as they do not have visibility into where AI is being used. Despite these concerns, fewer than half of companies (47%) have policies in place to ensure the safe use of AI within development environments.

“Security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks – recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg,” said Kevin Bocek, chief innovation officer at Venafi. “Anyone today with an LLM can write code, opening an entirely new front. It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. So it’s the code that matters! We have to authenticate code wherever it comes from.”


The Open Source Trust Dilemma

When looking at specific concerns around developers using AI to write or generate code, security leaders cited three top concerns:

    Developers would become over-reliant on AI, leading to lower standards

    AI-written code will not be effectively quality checked

    AI will use dated open source libraries that have not been well-maintained

The research also highlights that it is not only AI’s use of open source that could present challenges to security teams:

    Open Source Overload: On average, security leaders estimate that 61% of their applications use open source. This reliance could present risks, given that 86% of respondents believe open source code encourages speed over security best practice among developers.

    Vexing Verification: Ninety percent of security leaders trust code in open source libraries, with 43% saying they have complete trust. Yet 75% say it is impossible to verify the security of every line of open source code, and as a result, 92% of security leaders believe code signing should be used to ensure open source code can be trusted.

“The recent CrowdStrike outage shows the impact of how fast code goes from developer to worldwide meltdown,” Bocek adds. “Code now can come from anywhere, including AI and foreign agents. There is only going to be more sources of code, not fewer. Authenticating code, applications and workloads based on its identity to ensure that it has not changed and is approved for use is our best shot today and tomorrow. We need to use the CrowdStrike outage as the perfect example of future challenges, not a passing one-off.”

Maintaining the code signing chain of trust can help organizations prevent unauthorized code execution, while also scaling their operations to keep up with developer use of AI and open source technologies. Venafi’s industry-first Stop Unauthorized Code Solution helps security teams and administrators maintain their code signing trust chain across all environments.
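The check described above can be reduced to a minimal sketch: before code is allowed to run, compare it against a record established when it was approved. This Python example (all names hypothetical; a real code-signing pipeline uses asymmetric signatures and a protected trust chain, not a bare hash table) illustrates the idea of rejecting tampered or unapproved artifacts:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest used as a fingerprint of an artifact's exact bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical trust store: artifact name -> digest recorded at approval time.
# In practice this record would itself be signed and access-controlled.
artifact = b"print('hello from approved build')\n"
trust_store = {"app.py": sha256_hex(artifact)}

def is_approved(name: str, data: bytes) -> bool:
    """Allow execution only if the artifact is known and unmodified."""
    expected = trust_store.get(name)
    return expected is not None and sha256_hex(data) == expected

print(is_approved("app.py", artifact))           # True: matches approved build
print(is_approved("app.py", artifact + b"#x"))   # False: bytes were altered
print(is_approved("unknown.py", artifact))       # False: never approved
```

The same pass/fail decision is what a signature verification yields, with the advantage that a signature proves who approved the artifact, not just that its bytes match a stored digest.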

“In a world where AI and open source are as powerful as they are unpredictable, code signing becomes a business’ foundational line of defense,” Bocek concludes. “But for this protection to hold, the code signing process must be as strong as it is secure. It’s not just about blocking malicious code — organizations need to ensure that every line of code comes from a trusted source, validating digital signatures and guaranteeing that nothing has been tampered with since it was signed. The good news is that code signing is used just about everywhere — the bad news is that it is most often left unprotected by the security teams who could help keep it safe.”


