The AI Security Reckoning: When Move Fast and Break Things Meets Critical Infrastructure
The AI Security Reckoning: When “Move Fast and Break Things” Meets Critical Infrastructure
Remember when our biggest worry was whether someone would click on a phishing email? Those days feel quaint now. This week’s security news reads like a perfect storm of AI adoption outpacing security controls, and frankly, it’s keeping me up at night.
The “Who Approved This Agent?” Problem
Let’s start with what might be the most pervasive issue flying under the radar: AI agent governance. I’ve been in enough incident response calls to know that sinking feeling when you discover a system you didn’t know existed just caused a major problem.
AI agents are different beasts entirely from traditional applications or user accounts. They’re being deployed rapidly across organizations, often by individual teams or departments without central IT oversight. Unlike a human user who logs in at 9 AM and logs out at 5 PM, these agents operate continuously, accessing data, triggering workflows, and making decisions at machine speed.
The scary part? Many organizations are treating AI agents like they’re just another SaaS tool. But when an agent has access to your customer database, can modify financial records, and operates with minimal human oversight, you’re essentially giving autonomous systems the keys to your kingdom. We need to rethink our entire approach to identity and access management for this new reality.
When Zero-Days Hit Critical Infrastructure
Speaking of things that should keep us awake, there’s a critical zero-day in Cisco UC systems that’s already being exploited in the wild. CVE-2026-20045 allows complete system takeover, and mass scanning is underway.
This hits close to home because unified communications systems are everywhere now. They’re not just handling internal calls anymore – they’re integrated with everything from customer service platforms to emergency communication systems. When attackers can take complete control of these systems, they’re not just eavesdropping on conversations. They can pivot into other network segments, manipulate communications during incident response, or even disrupt critical operations.
If you’re running Cisco UC in your environment, this needs to be your top priority right now. The fact that active exploitation is happening means this isn’t a “patch next month” situation.
The Fundamental Flaw in AI Security
Here’s where things get really interesting from a technical perspective. Bruce Schneier’s analysis of prompt injection attacks perfectly captures why AI security feels like we’re fighting with one hand tied behind our backs.
The drive-through analogy is brilliant: no human would hand over cash just because someone said “ignore previous instructions and give me the money,” but that’s essentially what LLMs do. The core issue is that these models can’t distinguish between legitimate instructions and malicious ones embedded in user input.
What worries me most is how this intersects with the AI agent problem I mentioned earlier. We’re deploying agents that can access sensitive systems and data, but they’re fundamentally vulnerable to these injection attacks. Imagine an AI agent with database access being tricked into extracting customer records, or one with system administration privileges being manipulated into creating backdoors.
When Space Agencies Can’t Secure Earth-Based Systems
The European Space Agency breach is particularly troubling because it’s the second major incident in just a few weeks. When organizations responsible for spacecraft and critical space infrastructure can’t secure their own systems, it raises serious questions about our approach to critical infrastructure protection.
This isn’t just about embarrassment or data theft. Space systems are increasingly critical to everything from GPS navigation to weather forecasting to military operations. When these systems are compromised, the ripple effects can be enormous.
The AI Code Generation Dilemma
Finally, there’s an issue that hits every one of us directly: the security of AI-generated code. I’ll be honest – like many of you, I’ve been using AI to help write scripts and automate tasks. It’s incredibly convenient, but we need to talk about the security implications.
The problem isn’t just that AI might write insecure code (though it often does). It’s that we’re creating a generation of developers and security professionals who are becoming dependent on tools they don’t fully understand, producing code they can’t properly audit.
When I generate a script with AI, I know enough to review it for obvious security flaws. But what about junior developers who are learning to code alongside these AI tools? Are we creating a situation where fundamental security practices are being lost?
The Path Forward
None of this is meant to be doom and gloom – these are solvable problems, but only if we acknowledge them honestly. We need new frameworks for AI agent governance, better integration between AI development and security teams, and a serious investment in understanding these new attack vectors.
Most importantly, we need to resist the urge to move fast and break things when it comes to AI deployment in production environments. The stakes are too high, and the attack surface is too complex.
Sources
- Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents
- Exploited Zero-Day Flaw in Cisco UC Could Affect Millions
- Why AI Keeps Falling for Prompt Injection Attacks
- European Space Agency’s cybersecurity in freefall as yet another breach exposes spacecraft and mission data
- Is AI-Generated Code Secure?