AI is Supercharging Both Attackers and Attack Surfaces – Here's What We're Seeing

I’ve been watching this week’s security news, and there’s a clear pattern emerging that should make all of us sit up and take notice. AI isn’t just changing how we defend systems – it’s fundamentally reshaping the threat landscape in ways that are both more sophisticated and, paradoxically, more accessible to low-skill attackers.

Let me walk you through what happened this week and why it matters for how we think about security going forward.

The Great Democratization of Cybercrime

Here’s something that would have seemed impossible just a few years ago: researchers at Unit 42 caught a low-skilled threat actor using a large language model to craft what they’re calling “vibe extortion” attacks. The attacker essentially fed an LLM prompts to generate professional-sounding extortion strategies, complete with psychological pressure tactics and carefully planned deadlines.

Think about that for a moment. We used to assume that effective social engineering required real skill – understanding psychology, crafting convincing narratives, timing communications properly. Now? An LLM can generate all of that on demand. The barrier to entry for sophisticated attacks just dropped through the floor.

This isn’t just about extortion either. When AI can help anyone write convincing phishing emails, create believable personas, or even generate attack scripts, we’re looking at a fundamental shift in who can be a threat actor.

APIs: The New Favorite Target Gets Worse

Meanwhile, the API security situation is getting genuinely scary. New research shows that attackers are now hitting APIs at machine speed, and AI-driven systems are amplifying both the exposure and the potential impact of successful attacks.

Here’s why this keeps me up at night: most organizations still treat API security as an afterthought. We’ve got APIs spinning up faster than we can catalog them, let alone secure them properly. Now add AI systems that can automatically discover, probe, and exploit API vulnerabilities faster than any human could respond, and you’ve got a recipe for disaster.
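One defensive implication: automated discovery has a recognizable shape – a single client touching many *distinct* endpoints in a short window, which is rarely how a legitimate consumer behaves. Here's a minimal sketch of that idea (the class name, thresholds, and sliding-window approach are all my own illustration, not any particular vendor's detection logic):

```python
from collections import defaultdict, deque

class EnumerationDetector:
    """Flag clients that hit many distinct API endpoints in a short window --
    a common signature of automated discovery rather than normal client use."""

    def __init__(self, window_seconds=10, max_distinct_endpoints=20):
        self.window = window_seconds
        self.limit = max_distinct_endpoints
        # client_id -> deque of (timestamp, endpoint), oldest first
        self.events = defaultdict(deque)

    def record(self, client_id, endpoint, timestamp):
        """Record one request; return True if the client now looks like a scanner."""
        q = self.events[client_id]
        q.append((timestamp, endpoint))
        # Drop events that have aged out of the sliding window.
        while q and q[0][0] < timestamp - self.window:
            q.popleft()
        distinct = {ep for _, ep in q}
        return len(distinct) > self.limit
```

The point isn't this specific heuristic – it's that the check runs inline at the gateway, at the same speed as the attack, instead of waiting for a human to review logs.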

The “blast radius” concept is particularly troubling. When an AI system gets compromised through an API vulnerability, it doesn’t just affect that one system – it can cascade through interconnected services and data sources in ways that are hard to predict or contain.
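One way to make blast radius concrete is to treat it as reachability in a service dependency graph: if a node is compromised, everything it can call (and everything those services can call) is in scope. A quick sketch, with a purely hypothetical topology where an AI agent holds broad API credentials:

```python
from collections import deque

def blast_radius(edges, compromised):
    """Given directed edges service -> services/data it can reach,
    return everything reachable from an initially compromised node."""
    reached = {compromised}
    frontier = deque([compromised])
    while frontier:
        node = frontier.popleft()
        for nxt in edges.get(node, ()):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

# Hypothetical topology: the agent's broad access is what makes it dangerous.
services = {
    "ai-agent":    ["orders-api", "billing-api"],
    "orders-api":  ["customer-db"],
    "billing-api": ["customer-db", "payments-gateway"],
    "frontend":    ["orders-api"],
}
```

Running this against your real service map (if you have one) is a sobering exercise: scoping the agent's credentials down is the only thing that shrinks the set.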

Nation-State Actors Aren’t Sleeping Either

While low-skill attackers are getting AI assistance, the professionals are still doing what they do best – finding and exploiting zero-days. Chinese state-backed hackers have been quietly exploiting a critical Dell security flaw since mid-2024, flying under the radar for months.

This Dell situation is a perfect example of why zero-day detection is so challenging. When skilled attackers want to stay hidden, they can maintain access for extended periods while we remain completely unaware. The fact that this went undetected for so long should make all of us question what else might be lurking in our environments.

Critical Infrastructure Under Fire

The situation in Poland really drives home how vulnerable our critical infrastructure remains. Russia-aligned groups launched wiper attacks against renewable energy farms, targeting wind and solar installations along with heating and power plants.

What’s particularly concerning here is the focus on renewable energy infrastructure. As we transition to more distributed, software-dependent energy systems, we’re creating new attack surfaces that weren’t traditionally part of the threat model. Those wind and solar farms aren’t just generating power – they’re running sophisticated software systems that can be compromised just like any other network.

The Trust Problem Gets Personal

Perhaps most insidiously, attackers are now trojanizing legitimate tools that people actually want to use. The SmartLoader campaign involved creating a fake version of an Oura MCP server – a tool that connects AI assistants to health data from Oura rings.

This attack vector is brilliant in its simplicity. People are excited about connecting their health data to AI assistants, so they’re actively seeking out these integration tools. By creating a trojanized version that looks legitimate, attackers can get users to voluntarily install malware that steals their information.

What This Means for Us

Looking at these incidents together, I see three major shifts we need to address:

First, we can’t rely on attackers being technically incompetent anymore. AI is leveling the playing field in ways that make traditional risk assessments obsolete.

Second, our API security strategies need to assume machine-speed attacks. Manual monitoring and response times that might have worked against human attackers won’t cut it when facing automated systems.

Third, we need to rethink how we evaluate trust in the tools and integrations we use. As the Oura MCP server attack shows, even legitimate-seeming tools from recognizable brands can be compromised.
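On that third point, the most basic defense is refusing to install anything whose hash doesn't match one pinned from the publisher's official channel. This is a generic sketch of the pattern (the helper name is mine; it says nothing about how Oura or MCP servers are actually distributed):

```python
import hashlib
import hmac

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare a downloaded artifact against a checksum pinned from the
    publisher's official release page; refuse to install on mismatch."""
    actual = hashlib.sha256(data).hexdigest()
    # compare_digest avoids leaking how many leading characters matched.
    return hmac.compare_digest(actual, expected_sha256.lower())
```

Checksums only help if the pinned value comes from somewhere the attacker doesn't control, which is exactly why trojanized lookalike downloads work so well: the victim never had a trusted reference point to begin with.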

The common thread through all of this? Speed and scale. Attacks are happening faster, affecting more systems, and being executed by a broader range of threat actors than ever before. Our defenses need to evolve accordingly.
