AI Assistants Are the New Gold Mine for Cybercriminals

Page content

AI Assistants Are the New Gold Mine for Cybercriminals

You know how we’ve all been watching AI adoption explode across organizations? Well, the bad actors have been watching too, and they’re adapting faster than we might have expected. This week brought some sobering reminders that our shiny new AI tools are creating fresh attack surfaces we’re still learning to defend.

OpenClaw Becomes a Target

The biggest eye-opener has to be the discovery that infostealers are now specifically hunting for OpenClaw secrets. If you haven’t worked with OpenClaw yet, it’s become the go-to agentic AI assistant framework for a lot of organizations. The problem? Like most AI tools, it relies heavily on API keys and authentication tokens to function.

What’s particularly concerning here is that this isn’t just opportunistic data grabbing. The malware has been specifically updated to recognize and extract OpenClaw configuration files. That tells us threat actors are actively researching these new technologies and building targeted capabilities around them. It’s a reminder that every new tool we deploy needs its own security considerations from day one.

The Broader AI Attack Picture

Speaking of AI-specific threats, there’s been some excellent analysis on what researchers are calling the “Promptware Kill Chain”. This piece really cuts through the noise around prompt injection attacks and explains why our current thinking might be too narrow.

The core argument is that we’re fixated on prompt injection as if it’s just another input validation problem, when really we’re dealing with something much more complex. LLM-based systems have multiple attack vectors that can be chained together in ways we’re still discovering. It’s not just about malicious prompts anymore – it’s about understanding how these systems can be manipulated at different layers.

This resonates with what I’ve been seeing in our own AI security assessments. Teams often focus on the obvious prompt injection scenarios but miss the broader architectural risks that come with integrating AI into business processes.

Old Tricks, New Twists

While we’re dealing with these emerging AI threats, traditional attack methods are getting creative updates. Microsoft has been tracking something called ClickFix attacks that abuse DNS lookups to deliver a RAT called ModeloRAT.

The technique is clever in its simplicity – instead of relying on traditional payload delivery methods, attackers are using DNS requests as a communication channel. It’s another example of how threat actors keep finding ways to abuse legitimate protocols and services that we typically trust.

Then there’s Operation DoppelBrand, which is targeting major financial institutions like Wells Fargo through sophisticated brand impersonation. While phishing isn’t new, the level of brand mimicry we’re seeing continues to improve, making these campaigns increasingly difficult for end users to spot.

Learning from Lithuania’s Approach

One bright spot in all this has been seeing how different countries are approaching AI-driven cyber threats strategically. Lithuania’s initiative for a “Safe and Inclusive Digital Society” caught my attention because they’re taking a mission-oriented approach to addressing AI-driven cyber fraud.

What I find interesting about their strategy is the focus on building resilience at a societal level, not just within individual organizations. They’re recognizing that AI-powered attacks don’t just threaten businesses – they can undermine trust in digital systems more broadly.

What This Means for Us

Looking at these developments together, a few patterns emerge that we should be thinking about in our own security programs.

First, AI adoption is creating new attack surfaces faster than we’re developing defenses. The OpenClaw targeting shows us that threat actors are actively researching and adapting to new technologies. We need to build security considerations into our AI adoption processes from the start, not as an afterthought.

Second, the traditional boundaries between different types of attacks are blurring. The promptware kill chain concept illustrates how AI systems can be attacked through multiple vectors simultaneously. Our defense strategies need to account for this complexity.

Finally, we’re seeing continued innovation in how attackers abuse legitimate services and protocols. The DNS-based payload delivery and sophisticated brand impersonation remind us that security isn’t just about blocking known bad things – it’s about understanding how good things can be misused.

The key takeaway for me is that we need to stay curious and adaptive. The threat landscape is changing quickly, but so are our capabilities to understand and respond to these changes. We just need to make sure we’re learning as fast as the attackers are.

Sources