AI Security's Growing Pains: Why Traditional Defenses Are Falling Short

As someone who’s been watching the security space evolve over the past few years, I’ve noticed something troubling: we’re rushing headfirst into AI adoption while our security practices lag dangerously behind. This week’s news really drives that point home.

The Skills Gap is Real (And Getting Worse)

Let’s start with the elephant in the room. A new report from Pentera surveyed 300 US CISOs and found that most of us are trying to secure AI systems with tools and skills that simply aren’t up to the task. I can’t say I’m surprised, but it’s concerning to see the numbers confirm what many of us suspected.

Think about it – we’re dealing with AI agents that have autonomous access to real data and systems, not just the chatbot copilots we got comfortable with last year. These agents can make decisions, access databases, and interact with other systems without human oversight. Yet most security teams are approaching them with the same identity and access management frameworks we’ve been using for traditional applications.

The timing couldn’t be worse. While we’re struggling with these fundamentals, threat actors like the Warlock ransomware group are getting more sophisticated. They’ve started using new BYOVD (Bring Your Own Vulnerable Driver) techniques for stealthier cross-network movement. When attackers are evolving their post-exploitation tactics faster than we’re adapting our AI security practices, we have a problem.

What CISOs Actually Need to Focus On

The good news is that some clear priorities are emerging from the chaos. Token Security has identified five critical areas where CISOs need to act now, and their emphasis on identity-based access control for AI agents resonates with what I’m seeing in the field.

Here’s the thing about AI agents – they’re not just processing requests and spitting out responses. They’re making API calls, querying databases, and potentially moving laterally through our networks. If we don’t have proper identity controls in place, we’re essentially giving these agents a blank check to access whatever they want. That’s a recipe for data exposure and misuse.
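To make the "no blank check" idea concrete, here is a minimal, hypothetical sketch of deny-by-default scoping for an agent identity. The names (`AgentIdentity`, scope strings like `"crm:read"`) are illustrative, not from any specific product: the point is that an agent can only perform actions it has been explicitly granted.

```python
# Hypothetical sketch: deny-by-default scope checks for an AI agent's actions.
# Scope names and types here are illustrative, not tied to any real platform.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # explicit grants; anything not listed is denied


def authorize(agent: AgentIdentity, action: str) -> bool:
    """Allow an action only if it was explicitly granted to this agent."""
    return action in agent.scopes


reporting_agent = AgentIdentity(
    agent_id="report-bot-01",
    scopes=frozenset({"crm:read", "reports:write"}),
)

assert authorize(reporting_agent, "crm:read")        # explicitly granted
assert not authorize(reporting_agent, "crm:delete")  # denied by default
```

The design choice worth noting: the agent carries an allowlist, not a blocklist, so a new capability added to the environment is inaccessible until someone deliberately grants it.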

I’ve been thinking about this in the context of zero trust principles. We spent years learning not to trust network location or device type as security indicators. Now we need to apply that same skepticism to AI agents, regardless of how “smart” or well-intentioned they seem.

The Investment Side Shows Promise

While the skills gap is concerning, there’s reason for optimism in the funding space. Surf AI just raised $57 million for what they’re calling an “agentic security operations platform.” The backing from firms like Accel and Cyberstarts suggests that investors recognize the urgent need for AI-native security tools.

What interests me about this funding round is the focus on “agentic” operations. It seems like the industry is finally acknowledging that we need security tools that can keep pace with AI systems, not just monitor them after the fact.

Don’t Forget the Basics

Amid all this AI focus, we can’t lose sight of fundamental network security issues. The recent analysis of IPv4-mapped IPv6 addresses being used for attack obfuscation is a perfect reminder that attackers are still exploiting basic protocol transitions and network configurations.
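To see why this obfuscation works, consider a filter that compares addresses as raw strings: `::ffff:203.0.113.9` and `203.0.113.9` refer to the same host, but they don't match textually. A short sketch using Python's standard `ipaddress` module shows how normalizing IPv4-mapped IPv6 addresses before matching closes that gap (the blocklist address is a documentation-range example, not a real indicator):

```python
# Sketch: normalize IPv4-mapped IPv6 addresses before checking blocklists.
# A naive string comparison would treat "::ffff:203.0.113.9" and
# "203.0.113.9" as different hosts; normalizing first closes that gap.
import ipaddress


def normalize(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    # IPv6 addresses embedding an IPv4 address expose it via .ipv4_mapped
    if isinstance(ip, ipaddress.IPv6Address) and ip.ipv4_mapped is not None:
        return str(ip.ipv4_mapped)
    return str(ip)


blocklist = {"203.0.113.9"}

assert normalize("::ffff:203.0.113.9") in blocklist  # mapped form is caught
assert normalize("203.0.113.9") in blocklist         # plain form still works
assert "::ffff:203.0.113.9" not in blocklist         # raw string match misses
```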

This dual challenge – securing cutting-edge AI while maintaining vigilance on traditional attack vectors – is stretching security teams thin. We’re essentially fighting a two-front war, and many organizations don’t have the resources for both.

Moving Forward

The reality is that AI security isn’t going to wait for us to catch up. Organizations are deploying AI agents now, and they need security frameworks that work today, not next year. This means we need to get comfortable with iterative security approaches rather than waiting for perfect solutions.

My advice? Start with identity and access management for your AI systems. Get visibility into what your AI agents are actually doing. And invest in training your team on AI-specific security challenges – because traditional penetration testing and vulnerability management won’t cut it for autonomous systems.
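On the visibility point, one lightweight pattern is to wrap every tool an agent can invoke so that each call is logged before it executes. This is a hypothetical sketch, not a reference implementation; the decorator and tool names are invented for illustration:

```python
# Hypothetical sketch: audit-log every agent tool call before it runs,
# so security teams can see what agents are actually doing.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")


def audited(tool):
    """Record which agent called which tool, with what arguments."""
    @functools.wraps(tool)
    def wrapper(agent_id, *args, **kwargs):
        log.info(json.dumps({
            "ts": time.time(),
            "agent": agent_id,
            "tool": tool.__name__,
            "args": [repr(a) for a in args],
        }))
        return tool(agent_id, *args, **kwargs)
    return wrapper


@audited
def query_db(agent_id, sql):
    # Placeholder for a real database call
    return f"rows for: {sql}"


result = query_db("report-bot-01", "SELECT 1")
assert result == "rows for: SELECT 1"
```

Logging before the call (rather than after) means even a tool invocation that crashes or hangs still leaves an audit trail.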

The $57 million investment in Surf AI shows that the market recognizes this urgency. Now it’s up to us as security professionals to bridge the gap between where we are and where we need to be.
