Iranian Hackers Target Industrial Controls While AI Security Gets a $13M Reality Check

I’ve been following the threat intel feeds this week, and there’s a pattern emerging that’s worth discussing. We’re seeing nation-state actors getting bolder with critical infrastructure targeting, while the security industry is simultaneously racing to solve AI’s growing attack surface. Let me walk you through what caught my attention.

Iran’s Latest Play: Going After the Factory Floor

The US government issued a warning about Iranian-linked hackers specifically targeting Rockwell/Allen-Bradley programmable logic controllers (PLCs) that are exposed to the internet. This isn’t just another APT campaign – they’re going after the devices that actually control physical processes in our critical infrastructure.

What makes this particularly concerning is the targeting methodology. These aren’t opportunistic scans looking for any vulnerable system. Iranian actors are specifically hunting for internet-exposed PLCs, which tells me they understand exactly what kind of damage they can cause. When you compromise a PLC, you’re not just stealing data – you’re potentially disrupting power grids, water treatment facilities, or manufacturing processes.

The fact that these controllers are internet-accessible in the first place highlights a fundamental problem we’ve been dealing with for years. Too many organizations still treat operational technology (OT) networks like traditional IT infrastructure, without considering the unique security requirements. A database going offline is bad; a water treatment plant going offline is catastrophic.
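Auditing your own address space for this kind of exposure is straightforward. Here’s a minimal sketch (mine, not from the advisory) that crafts an EtherNet/IP ListIdentity probe, the same discovery request commonly used to fingerprint Rockwell devices. Port 44818 and command code 0x63 come from the EtherNet/IP specification; the function names are my own, and this should only ever be pointed at addresses you’re authorized to test.

```python
import socket
import struct

ENIP_PORT = 44818       # standard EtherNet/IP (CIP) port
LIST_IDENTITY = 0x0063  # EtherNet/IP ListIdentity command code

def build_list_identity() -> bytes:
    """Build a 24-byte EtherNet/IP encapsulation header for ListIdentity.

    Layout, all little-endian: command (2), length (2), session handle (4),
    status (4), sender context (8), options (4). ListIdentity carries no
    payload, so length is 0 and the session handle is unset.
    """
    return struct.pack("<HHII8sI", LIST_IDENTITY, 0, 0, 0, b"\x00" * 8, 0)

def probe_plc(host: str, timeout: float = 2.0) -> bool:
    """Send ListIdentity over UDP and report whether anything answered.

    A response strongly suggests an EtherNet/IP device (such as a Rockwell
    PLC) is reachable at that address; that is, exposed.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_list_identity(), (host, ENIP_PORT))
        try:
            data, _ = sock.recvfrom(4096)
        except OSError:  # timeout or ICMP port-unreachable
            return False
    # Responses echo the command code in the first two bytes.
    return len(data) >= 24 and struct.unpack_from("<H", data)[0] == LIST_IDENTITY
```

If that function ever returns True from an internet-facing vantage point, the device belongs behind a firewall or a VPN, full stop.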

AI Security Gets Serious Funding

Meanwhile, Trent AI just emerged from stealth with $13 million in funding to secure AI agents across their entire lifecycle. The timing isn’t coincidental: AI deployments are accelerating faster than our ability to secure them properly.

What’s interesting about Trent’s approach is the focus on the complete AI lifecycle. Most security tools I’ve evaluated lately focus on either training data protection or inference-time monitoring, but rarely both. The reality is that AI systems present attack surfaces we’re still learning to map, from data poisoning during training to prompt injection during deployment.

The $13 million funding round also signals that investors are taking AI security risks seriously. We’re moving past the “AI will solve all our security problems” hype into the more practical “how do we secure AI itself” phase.
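To make the inference-time side of that lifecycle concrete, here’s a toy prompt-injection screen. It’s purely illustrative: the pattern list is my own invention, and production systems lean on trained classifiers and context-aware policies rather than a static regex list, which an attacker can trivially rephrase around.

```python
import re

# Illustrative patterns only; real deployments use trained classifiers,
# not a fixed keyword list an attacker can paraphrase past.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def flag_prompt(text: str) -> list[str]:
    """Return the injection patterns matched in the input, if any."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]

def screen(text: str) -> bool:
    """True if the prompt should be escalated for review before inference."""
    return bool(flag_prompt(text))
```

Even a heuristic like this illustrates the lifecycle argument: it only helps at deployment time and does nothing about poisoned training data, which is why covering both ends matters.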

GPU Rowhammer: When Graphics Cards Become Attack Vectors

Here’s something that caught me off guard: researchers demonstrated a GPU-based Rowhammer attack called GPUBreach that can flip bits in GDDR6 memory to achieve privilege escalation and full system compromise.

I’ll admit, when I first heard about GPU Rowhammer, I was skeptical. Traditional Rowhammer attacks on system RAM are already difficult to pull off reliably. But the researchers managed to use the GPU’s memory controller to repeatedly access specific GDDR6 memory rows, causing bit flips in adjacent rows that corrupt page tables.

What makes this particularly nasty is that it bypasses traditional security boundaries. The attack starts with nothing more than GPU access but ends with root privileges on the entire system. Given how much we rely on GPUs now, not just for gaming but for AI workloads, cryptocurrency mining, and general-purpose compute, this represents a significant expansion of the attack surface.
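A real Rowhammer needs direct control over physical memory access patterns, which Python obviously can’t provide, but the disturbance mechanism the researchers exploited can be sketched as a toy simulation. Everything below (the row geometry, the flip threshold, the class names) is invented for illustration; it models the idea that each activation of a row leaks a little charge into its neighbors until a victim cell flips.

```python
import random

FLIP_THRESHOLD = 50_000  # invented; real DRAM disturbance thresholds vary by part

class ToyDram:
    """Toy model of Rowhammer disturbance: rows are bit arrays, and each
    activation of a row adds charge disturbance to its two neighbors."""

    def __init__(self, rows: int = 8, bits: int = 16, seed: int = 0):
        self.rng = random.Random(seed)
        self.rows = [[1] * bits for _ in range(rows)]
        self.disturbance = [0] * rows
        self.flips = 0

    def activate(self, row: int) -> None:
        for neighbor in (row - 1, row + 1):
            if 0 <= neighbor < len(self.rows):
                self.disturbance[neighbor] += 1
                if self.disturbance[neighbor] >= FLIP_THRESHOLD:
                    # A cell in the victim row leaks enough charge to flip.
                    bit = self.rng.randrange(len(self.rows[neighbor]))
                    self.rows[neighbor][bit] ^= 1
                    self.flips += 1
                    self.disturbance[neighbor] = 0

def double_sided_hammer(dram: ToyDram, victim: int, n: int) -> None:
    """Alternate activations of the rows flanking the victim: the classic
    double-sided hammering pattern, which disturbs the victim twice per pass."""
    for _ in range(n):
        dram.activate(victim - 1)
        dram.activate(victim + 1)

dram = ToyDram()
double_sided_hammer(dram, victim=3, n=60_000)
# The victim row (and the outer neighbors of the aggressors) have now
# accumulated bit flips without ever being written to directly.
```

The GPU twist is that the hammering loop runs through the GPU’s own memory controller against GDDR6, and when the flipped bits land in page tables, "random corruption" becomes privilege escalation.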

The Cryptomining Botnet Buffet Continues

Speaking of expanded attack surfaces, there’s an active campaign targeting over 1,000 exposed ComfyUI instances for cryptocurrency mining and proxy operations. ComfyUI is a popular node-based interface for Stable Diffusion image generation, and attackers are using purpose-built Python scanners to sweep cloud IP ranges for vulnerable targets.

This campaign highlights how quickly attackers adapt to new technologies. ComfyUI gained popularity as AI image generation took off, and now it’s become another vector for cryptomining botnets. The attackers are even using ComfyUI-Manager to automatically install malicious nodes, which shows they understand the platform’s architecture well enough to abuse its legitimate functionality.
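If you run ComfyUI yourself, the same sweep the attackers perform can be turned into a self-audit. Here’s a minimal sketch, assuming ComfyUI’s default listen port of 8188; the helper names are mine, and again this is for hosts you own.

```python
import socket

COMFYUI_PORT = 8188  # ComfyUI's default listen port

def port_open(host: str, port: int = COMFYUI_PORT, timeout: float = 1.0) -> bool:
    """Plain TCP connect check: True if something accepts connections there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(hosts: list[str]) -> list[str]:
    """Return the hosts where the ComfyUI port is reachable; those instances
    should be firewalled or put behind authentication before a scanner
    finds them first."""
    return [h for h in hosts if port_open(h)]
```

ComfyUI ships with no authentication by default, so "reachable" and "compromised eventually" are close to synonyms on a public cloud IP.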

AI’s Security Evolution at RSAC 2026

The timing of all this aligns with insights from RSAC 2026, where AI’s impact on cybersecurity was a major theme. The consensus seems to be that we’re past the experimental phase and into real-world deployment challenges.

What I’m seeing in my own work matches what was discussed at the conference: AI is simultaneously becoming a powerful defensive tool and creating new attack vectors we need to defend against. The Iranian PLC targeting, GPU Rowhammer research, and ComfyUI botnet campaigns all demonstrate how our threat landscape is expanding in ways we didn’t anticipate just a few years ago.

The Bottom Line

We’re dealing with threats that span from nation-state actors targeting physical infrastructure to novel hardware-based attacks to AI platforms being weaponized for botnets. The common thread is that our attack surface keeps expanding faster than our ability to secure it comprehensively.

The good news is that we’re seeing serious investment in solutions, like Trent AI’s funding round. The challenge is that we need to move from reactive security to proactive threat modeling for technologies that are still evolving rapidly.
