AI Gets Weaponized on Both Sides: From Code Scanning to Android Malware

It’s been one of those weeks where the security headlines make you wonder if we’re living in a cyberpunk novel. We’ve got AI helping us find vulnerabilities, AI getting abused by malware, healthcare systems shutting down from ransomware, and everyone scrambling to train enough people to handle it all. Let me walk you through what’s happening and why it matters for all of us.

The Good: AI Actually Helping with Security

The most interesting development this week is Anthropic’s launch of Claude Code Security, which scans codebases for vulnerabilities and suggests patches. Right now it’s limited to Enterprise and Team customers in a research preview, but this could be a game-changer for how we approach code review.

What I find compelling about this isn’t just that it’s another AI security tool – we’ve seen plenty of those. It’s that Anthropic is positioning this as part of their existing Claude Code platform, which means developers are already using it for other tasks. The integration angle matters because getting security tools into existing workflows is half the battle.

Of course, we’ll need to see how accurate these vulnerability scans actually are. I’ve been burned before by tools that flag every strcpy() as critical while missing actual logic flaws. But given Claude’s track record with code understanding, I’m cautiously optimistic.
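To make that concrete, here's a toy illustration of the gap I mean, not any real tool's behavior: a pattern-based scanner that flags every strcpy() call, even a bounds-checked one, while a genuine logic flaw in the same file (an auth check that never compares the password) produces no findings at all because no pattern matches it.

```python
import re

# Toy pattern-based scanner: flags any strcpy() call as critical,
# with no awareness of whether the copy is actually bounds-checked.
def naive_scan(source: str) -> list[str]:
    findings = []
    for i, line in enumerate(source.splitlines(), start=1):
        if re.search(r"\bstrcpy\s*\(", line):
            findings.append(f"line {i}: CRITICAL: strcpy() detected")
    return findings

c_source = """
void copy_name(char *dst, size_t dst_len, const char *src) {
    if (strlen(src) < dst_len)  /* bounds check makes this copy safe */
        strcpy(dst, src);
}

int check_admin(const char *user, const char *pw) {
    /* logic flaw: password is never compared, but no pattern matches */
    return lookup_user(user) != NULL;
}
"""

for finding in naive_scan(c_source):
    print(finding)
# Flags the guarded strcpy; says nothing about the broken auth check.
```

The promise of an LLM-based scanner is exactly that it can reason about context like the bounds check and the missing password comparison, instead of grepping for dangerous function names.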

The Ugly: Ransomware Keeps Hitting Where It Hurts

Meanwhile, ransomware groups are having a field day with critical infrastructure. The University of Mississippi Medical Center had to close all its clinic locations statewide after getting hit. When you’re talking about shutting down medical facilities across an entire state, that’s not just an IT problem – that’s a public health crisis.

What’s particularly frustrating is that healthcare remains such a soft target. These organizations are running on tight budgets, often with legacy systems that can’t be easily patched or replaced. The attackers know this, which is why we keep seeing these devastating hits on hospitals and medical centers.

The manufacturing sector isn’t faring much better. Chip testing giant Advantest got ransomed this week too, and they’re still investigating whether customer or employee data was compromised. Given how critical chip testing is to the entire semiconductor supply chain, this could have ripple effects we won’t see for months.

The Weird: Malware Using AI Against Us

Here’s where things get really interesting from a technical perspective. Security researchers discovered PromptSpy, an Android malware that abuses Google’s Gemini AI to maintain persistence on infected devices: it feeds on-screen elements to Gemini for analysis and uses the results to figure out how to stay hidden, even after reboots.

This is the kind of adversarial use of AI that we’ve been worried about for years, but seeing it in the wild is still jarring. The malware isn’t just using AI as a buzzword – it’s genuinely using the AI’s analytical capabilities to be more effective at its job. It’s like having a malware author with a really smart assistant helping them stay one step ahead of detection.

The implications here go beyond just this one piece of malware. If attackers can start using AI APIs to make their malware more adaptive and persistent, we’re going to need to fundamentally rethink our detection strategies.
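One direction that rethinking could take, sketched very loosely: treat outbound traffic to generative-AI API endpoints as a triage signal when it comes from an app that also holds high-risk permissions. The domain list, permission set, and thresholds below are illustrative assumptions for the sketch, not a vetted ruleset.

```python
# Hypothetical triage heuristic (assumption, not a real product's logic):
# an app that both holds high-risk permissions and talks to a
# generative-AI API endpoint gets escalated for manual review.
AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

HIGH_RISK_PERMISSIONS = {
    "android.permission.BIND_ACCESSIBILITY_SERVICE",  # can read the screen
    "android.permission.RECEIVE_BOOT_COMPLETED",      # survives reboots
}

def triage(app: dict) -> str:
    """Return an escalation level for one observed app profile."""
    risky_perms = HIGH_RISK_PERMISSIONS & set(app["permissions"])
    ai_traffic = AI_API_DOMAINS & set(app["contacted_domains"])
    if risky_perms and ai_traffic:
        return "escalate"  # unusual combination: review manually
    if ai_traffic:
        return "note"      # plenty of legitimate apps call AI APIs
    return "ignore"

suspicious = {
    "name": "com.example.flashlight",
    "permissions": ["android.permission.BIND_ACCESSIBILITY_SERVICE",
                    "android.permission.RECEIVE_BOOT_COMPLETED"],
    "contacted_domains": ["generativelanguage.googleapis.com"],
}
print(triage(suspicious))  # escalate
```

The point isn't this particular rule, which a determined attacker could route around with a proxy. It's that single-signal detection (just permissions, or just network destinations) misses the combination that makes AI-assisted malware distinctive.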

The Skills Gap Gets Real

All of this is happening against the backdrop of a massive skills shortage. EC-Council just launched a new Enterprise AI Credential Suite, citing $5.5 trillion in global AI risk exposure and 700,000 U.S. workers needing reskilling. Those numbers are staggering, but they track with the hiring gaps most of us are seeing in the field.

The challenge isn’t just that we need more security people – it’s that we need security people who understand AI well enough to secure it, and AI people who understand security well enough to build safely. That intersection is where the real expertise gap lives.

What This Means for Us

Looking at these stories together, I see a pattern that should concern all of us. AI is becoming a force multiplier on both sides of the security equation. We’ve got tools like Claude Code Security that could genuinely help us find and fix vulnerabilities faster. But we’ve also got malware like PromptSpy using AI to be more effective at avoiding detection.

The organizations getting hit by ransomware are often the ones least equipped to adopt new AI security tools. Healthcare systems and smaller manufacturers don’t have teams of ML engineers ready to integrate AI-powered vulnerability scanners. Meanwhile, the attackers are clearly not waiting around – they’re already using AI to make their malware more sophisticated.

The skills gap makes all of this worse. We’re trying to secure AI systems and train people who understand both AI and security, all while dealing with the same old problems: unpatched systems and insufficient budgets.

My take? We need to get realistic about timelines and priorities. AI security tools like Claude Code Security are promising, but they won’t help organizations that can’t even keep their basic security hygiene in order. We need to solve the fundamentals while simultaneously preparing for an AI-augmented threat landscape.
