Communications of the ACM - Artificial Intelligence
How AI/LLMs Can Help, Hinder Developers

With the rapid advance of artificial intelligence and large language models (AI/LLMs), software development is undergoing a profound transformation. More than half of developers report using AI-assisted coding tools to improve efficiency and productivity. Developers must nevertheless stay alert to the bias, misinformation, and potential misuse these tools can introduce, and adopt them pragmatically. The future of software development will combine the complementary strengths of AI and human developers, with AI serving as a powerful assistant rather than a full replacement.

💻 AI/LLMs play an important role in day-to-day development; 67% of organizations plan to use AI or are already using it. AI tools can offer secure code suggestions and improvements and flag potential security vulnerabilities, raising the overall security of the codebase.

✍️ AI can generate secure code from natural-language descriptions, freeing developers to focus on more complex tasks. LLMs can also generate comments and explanations for code, helping developers understand complex code more quickly.

⚠️ Over-reliance on AI can create security risks. Developers who lean too heavily on AI/LLMs and lack the ability to write secure code themselves cannot use these tools effectively. They risk falling into a "black box" trap, lacking a full understanding of AI-generated code and struggling to find vulnerabilities or optimize performance.

📚 Ensuring developers have solid secure coding practices is essential. Through continuous learning and training, developers can recognize and fix errors, identify third-party library vulnerabilities, maintain critical thinking, and thoroughly review AI-generated code, allowing them to defend effectively against attacks.

🤝 The future of software development lies in collaboration between AI and humans. A secure coding culture grounded in continuous learning, supported by both technology and training, will let organizations fully harness AI's potential while preserving software integrity and security.

The rapid rise of artificial intelligence and large language models (AI/LLMs) is fundamentally reshaping how software is built. More than half of developers report using AI-powered coding tools at least occasionally, a trend that is only accelerating as organizations seek efficiency and productivity gains. However, developers must be mindful of challenges such as bias, misinformation, and the potential misuse of AI tools, and adopt these technologies pragmatically.

The future of software development lies in striking a balance between the unique strengths of AI and human developers. AI should be approached as a powerful assistant, used to complement the work of developers rather than take over entirely.

The day-to-day power of AI/LLMs

AI/LLMs have a significant role to play in the routine work of developers, which is why 67% of organizations polled by Deloitte are either planning to use AI or are already using it. A key benefit of these tools relates to writing secure code, and they can help developers do so in several ways: suggesting secure code and improvements, flagging potential vulnerabilities, generating code from natural-language descriptions, and producing comments and explanations that make complex code easier to understand.

Dangers of AI/LLMs

While the advantages of using AI in software development are clear, human expertise must continue to be prioritized, or else there is a real danger of security vulnerabilities being introduced. Should developers become overly reliant on AI/LLMs and lack the ability to write secure code themselves, they will no longer be able to use the tools effectively. They also risk treating AI output as a black box, without the understanding needed to spot vulnerabilities or optimize the code it produces.
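As an illustrative sketch (not code from the article; the find_user function and users table are hypothetical), this is the kind of database lookup an assistant might produce from a terse prompt. A developer unable to write secure code themselves may not notice that it is open to SQL injection:

    # Hypothetical example of AI-suggested code; not taken from the article.
    import sqlite3

    def find_user(db_path: str, username: str):
        """Look up a user by name -- vulnerable version."""
        conn = sqlite3.connect(db_path)
        try:
            # Flaw: the username is pasted directly into the SQL text, so input
            # such as  x' OR '1'='1  changes the meaning of the whole query.
            query = f"SELECT id, email FROM users WHERE name = '{username}'"
            return conn.execute(query).fetchall()
        finally:
            conn.close()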

Why secure code training matters

The key to responsibly using AI to write secure code is ensuring developers have a strong understanding of secure coding practices. In doing so, they are empowered to:

- Recognize and rectify errors or vulnerabilities so code adheres to security standards (see the sketch after this list).
- Identify third-party library vulnerabilities and dependencies and act upon them, securing the software supply chain.
- Maintain critical thinking and fully review, validate, and test AI-generated code prior to utilizing it.
- Effectively defend against attacks by anticipating them and introducing the necessary security measures.
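To make the first point concrete, here is a hedged sketch of how a reviewer grounded in secure coding practices might rectify the vulnerable lookup sketched earlier: a parameterized query keeps user input as data, so it can never alter the structure of the statement.

    # Hypothetical reviewed version of the earlier sketch; not from the article.
    import sqlite3

    def find_user(db_path: str, username: str):
        """Look up a user by name -- corrected after review."""
        conn = sqlite3.connect(db_path)
        try:
            # Fix: the "?" placeholder passes the username as a bound parameter,
            # closing the SQL injection hole in the AI-suggested original.
            query = "SELECT id, email FROM users WHERE name = ?"
            return conn.execute(query, (username,)).fetchall()
        finally:
            conn.close()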

The future of software development isn't AI or humans; it's both, working together. A secure coding culture, grounded in continuous learning and supported by both technology and training, is what will ultimately allow organizations to fully harness AI's potential while preserving software integrity and security.

Michael Burch is a former Army Green Beret turned application security engineer. He currently serves as Director of Application Security at Security Journey, where he is responsible for creating vulnerable code examples and educating developers on the importance of establishing solid security principles across the entire SDLC.
