The rapid rise of artificial intelligence and large language models (AI/LLMs) is fundamentally reshaping how software is built. More than half of developers report using AI-powered coding tools at least occasionally, a trend that is only accelerating as organizations seek efficiency and productivity gains. However, developers must be mindful of challenges such as bias, misinformation, and the potential misuse of AI tools, and adopt these technologies pragmatically.
The future of software development lies in striking a balance between the unique strengths of AI and human developers. AI should be approached as a powerful assistant, used to complement the work of developers rather than take over entirely.
The day-to-day power of AI/LLMs
AI/LLMs have a significant role to play in developers' routine work, which is why 67% of organizations polled by Deloitte are already using AI or planning to adopt it. A key benefit of these tools is their ability to help developers write secure code, which they can do in several ways:
- AI-powered tools can offer secure code suggestions and improvements while also pinpointing potential security vulnerabilities, all while the code is still being written (see the sketch after this list).
- AI/LLMs can generate secure code from natural language descriptions, freeing up developers to focus on more complex tasks.
- LLMs can generate comments and explanations for code, making it easier for developers to come to grips with intricate code more swiftly.
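To make the first point concrete, here is a minimal, hypothetical sketch of the kind of in-editor improvement an AI assistant might propose. The database schema and function names are illustrative assumptions, not the output of any specific tool:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern an assistant should flag: user input is concatenated
    # directly into the SQL statement, opening the door to SQL injection.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Suggested improvement: a parameterized query keeps the input as
    # data rather than executable SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

The suggestion is only useful if the developer understands why the second version is safer, which is the theme of the sections that follow.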
Dangers of AI/LLMs
While the advantages of using AI in software development are clear, human expertise must continue to be prioritized, or there is a real danger of security vulnerabilities being introduced. If developers become overly reliant on AI/LLMs and lack the ability to write secure code themselves, they will no longer be able to use these tools effectively.
- Developers may fall into the “black box” scenario, where they rely on AI-generated code without fully understanding its logic or structure. This lack of visibility makes it difficult to identify bugs, fix vulnerabilities, or optimize the code’s performance and security.
- If developers trust AI-generated code without critically reviewing it, vulnerabilities such as outdated or vulnerable third-party libraries can go unnoticed, exposing both the application and its users to unnecessary risks (as illustrated below).
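As a hypothetical illustration of both risks, consider a generated HTTP helper that works in testing but quietly weakens security. The function name and its parameters are assumptions made for the sake of the example:

```python
import requests

def fetch_invoice(url: str) -> bytes:
    # Plausible-looking generated code: it runs fine in a demo, but
    # verify=False silently disables TLS certificate validation and
    # exposes the request to man-in-the-middle attacks.
    response = requests.get(url, timeout=10, verify=False)
    response.raise_for_status()
    return response.content

def fetch_invoice_reviewed(url: str) -> bytes:
    # What a critical review should produce: certificate validation is
    # left at its secure default and failures still surface as errors.
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.content
```

A developer who treats the first version as a black box has no reason to question it; one who reads it line by line catches the problem before it ships.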
Why secure code training matters
The key to responsibly using AI to write secure code is ensuring developers have a strong understanding of secure coding practices. In doing so, they are empowered to:
- Recognize and rectify errors or vulnerabilities so code adheres to security standards.
- Identify third-party library vulnerabilities and dependencies and act upon them, securing the software supply chain.
- Maintain critical thinking and fully review, validate, and test AI-generated code prior to utilizing it (see the sketch after this list).
- Effectively defend against attacks by anticipating them and introducing the necessary security measures.
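Here is a minimal sketch of what that review-and-test habit can look like in practice, using an assumed upload directory and a deliberately hostile input; none of the names come from a specific project:

```python
from pathlib import Path

BASE_DIR = Path("/srv/app/uploads")  # assumed location for the example

def read_upload(filename: str) -> bytes:
    # Hardened during review: resolve the requested path and confirm it
    # stays inside the intended directory before reading, so traversal
    # inputs such as "../../etc/passwd" are rejected.
    target = (BASE_DIR / filename).resolve()
    if BASE_DIR.resolve() not in target.parents:
        raise ValueError("path escapes the upload directory")
    return target.read_bytes()

def test_rejects_path_traversal():
    # The kind of attacker-minded test a trained developer writes before
    # trusting generated code in production.
    try:
        read_upload("../../etc/passwd")
    except ValueError:
        return  # expected: the traversal attempt was rejected
    raise AssertionError("traversal input was not rejected")
```

Training is what makes checks like this second nature rather than an afterthought.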
The future of software development isn’t AI or humans; it’s both, working together. A secure coding culture, grounded in continuous learning and supported by both technology and training, is what will ultimately allow organizations to fully harness AI’s potential while preserving software integrity and security.

Michael Burch is an Ex-Army Green Beret turned application security engineer. He currently serves as Director of Application Security at Security Journey, where he is responsible for creating vulnerable code examples and educating developers on the importance of establishing solid security principles across the entire SDLC.