Securing the AI Frontier

  • March 18, 2025
  • Security Automation
  • 4 Min Read

Artificial intelligence is no longer a futuristic concept in software development - it's here, and it's changing everything. According to PwC, AI can cut software development times in half, but that same adoption also expands the threat surface security teams must protect. In our recent webinar "Understanding AI Attack Vectors," our Principal Solutions Architect Tracy Walker tackled this double-edged sword, sharing concrete findings on where AI can both strengthen and potentially compromise application security.

The Current State of AI in Code Generation

GitHub's 2024 research with Accenture uncovered some eye-opening numbers: 40% of newly committed code across their enterprise customers contains AI-assisted content, and that number is only climbing. Over 50,000 organizations have already adopted tools like GitHub Copilot, and an astounding 96% of developers begin using AI suggestions the moment they install their IDE extensions. These figures highlight a major shift in the software development lifecycle (SDLC), where AI has become an indispensable tool for direct code generation, test case creation, documentation, and even API development and refactoring.

Impact on Code Quality and Security

While AI tools have demonstrated significant benefits - faster code completion, a 46% reduction in vulnerable code when using GitHub Copilot's security features, and faster security incident detection - they also present new challenges. We're seeing:

  • Doubled code churn between 2020 and 2024

  • Increased copy-pasted code

  • AI suggestions optimized to be accepted rather than to be the optimal solution

  • New attack vectors specific to AI systems

In short, while AI tools provide advantages, they bring new vulnerabilities that need to be addressed.

A Balanced Approach to AI Security

AI-generated code shouldn't necessarily require more scrutiny than human-generated code; rather, we should apply equally rigorous security practices to all code, regardless of its origin. However, as AI becomes more embedded in development workflows, there's a risk of over-reliance, where developers accept suggestions without fully understanding or reviewing the output. As Tracy pointed out, "If you're identifying places where you feel like you need more controls around your AI-generated stuff, my first question is why are we not doing that for human-generated code?" Put simply, security practices should be equally rigorous for all code, regardless of its source, because unchecked reliance on AI can introduce risk just as easily as human error can.
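
To make that "equal rigor" point concrete, here is a minimal sketch of a pre-merge gate that runs the same static-analysis scan over everything changed in a pull request, with no special casing for AI-assisted code. This is an illustration rather than something from the webinar: it assumes git is available, diffs against origin/main, and uses Semgrep's default ruleset as a stand-in for whatever scanner your pipeline already runs.

```python
# uniform_gate.py - illustrative sketch, not from the webinar.
# Runs the same static-analysis scan on every file changed in a pull request,
# with no distinction between AI-assisted and human-written code.
# Assumes git is available and Semgrep stands in for your own scanner.

import subprocess
import sys


def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List added/changed files relative to the base branch (deletions excluded)."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", "--diff-filter=ACMR", base_ref],
        capture_output=True, text=True, check=True,
    )
    return [path for path in diff.stdout.splitlines() if path.strip()]


def scan(paths: list[str]) -> int:
    """Scan every changed file with the same ruleset; non-zero means findings."""
    if not paths:
        return 0
    # --error makes Semgrep exit non-zero when findings are reported,
    # which fails the CI job regardless of who (or what) wrote the code.
    return subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--error", *paths]
    ).returncode


if __name__ == "__main__":
    sys.exit(scan(changed_files()))
```

The control, not the authorship, is the invariant here: the same check fires whether a change came from Copilot or a keyboard.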

Practical Steps for Organizations

For organizations incorporating AI into their development processes, here are four recommendations: 

  1. Start with Policy: Develop a clear AI/LLM usage policy that outlines both encouraged uses and prohibited actions, particularly regarding sensitive data and proprietary code.

  2. Leverage Existing Frameworks: Consider frameworks like NIST's AI Risk Management Framework, but implement them incrementally based on your organization's specific needs rather than attempting full adoption at once.

  3. Monitor Integration Points: Pay special attention to where AI tools interface with your existing SDLC, ensuring proper security controls at each junction (a small sketch of one such check follows this list).

  4. Maintain Context Awareness: Remember that every environment is unique - what works for one organization may not work for another. Focus on understanding your specific context and risks.
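
To make steps 1 and 3 slightly more concrete, below is a hypothetical sketch of a small policy check running at one integration point - a pre-commit hook or CI job where assistant-suggested changes enter the repository. The prohibited patterns and the script itself are illustrative assumptions, not a complete or recommended policy.

```python
# ai_policy_check.py - hypothetical sketch tying together steps 1 and 3.
# Encodes a few prohibited patterns from an AI/LLM usage policy and runs
# where changes enter the repository (pre-commit hook or CI job).
# The patterns below are illustrative, not a complete policy.

import re
import subprocess
import sys

# Example policy rules: content that must never land in the repo, whether a
# developer typed it or an assistant suggested it.
PROHIBITED = {
    "hard-coded AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key material": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "internal-only marker": re.compile(r"PROPRIETARY[-_ ]DO[-_ ]NOT[-_ ]SHARE"),
}


def staged_diff() -> str:
    """Return the staged diff, i.e. exactly what is about to be committed."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout


def violations(diff: str) -> list[str]:
    """Check only the added lines (those starting with '+') against the policy."""
    found = []
    for line in diff.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for rule, pattern in PROHIBITED.items():
            if pattern.search(line):
                found.append(f"{rule}: {line[1:].strip()}")
    return found


if __name__ == "__main__":
    problems = violations(staged_diff())
    for problem in problems:
        print(f"policy violation - {problem}", file=sys.stderr)
    sys.exit(1 if problems else 0)
```

The same check could sit behind any junction identified in step 3; what matters is that the policy from step 1 is enforced mechanically rather than from memory.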

Final Thoughts

The AI frontier in software development isn't just about adopting new tools - it's about thoughtfully integrating them into our existing security practices while remaining mindful of new risks and attack vectors. As Tracy emphasized, success lies not in treating AI as a completely new paradigm, but in applying time-tested security principles while accounting for AI-specific considerations.

You can watch the full webinar here.