AI Is Changing the Attack Game: From Voice Phishing to Compromised Firewalls

Last week brought some sobering reminders that threat actors are getting creative with AI tools, and frankly, they’re moving faster than many of us expected. While we’ve been debating the theoretical risks of AI in cybersecurity, attackers are already putting these tools to work in ways that should make every security team take notice.

When AI Meets Social Engineering

The Optimizely breach caught my attention not because voice phishing is new – we’ve all seen our share of vishing campaigns – but because of how it highlights the human element that AI is starting to amplify. The New York-based ad tech company confirmed that attackers successfully compromised their systems through a voice phishing attack, affecting an undisclosed number of customers.

What worries me here isn’t just the breach itself, but the timing. We’re seeing vishing attacks become more sophisticated just as AI voice cloning tools become more accessible. While Optimizely hasn’t disclosed whether AI was involved in their specific incident, the writing is on the wall. Attackers can now create convincing voice impersonations with just a few minutes of audio from social media or corporate videos.

Low-Skill Attackers, High-Impact Results

Speaking of AI amplifying threats, there’s a particularly concerning report about a Russian-speaking attacker using GenAI tools to compromise Fortinet firewalls. What makes this case study fascinating – and terrifying – is that researchers describe this as a “low-skilled” attacker.

Think about that for a moment. Someone without advanced technical skills built and executed a working attack against FortiGate firewalls by leaning on generative AI tools. This represents a fundamental shift in the threat landscape. We’re no longer just dealing with elite hacking groups; we’re facing a world where AI can democratize sophisticated attack techniques.

The attacker likely used AI to help with reconnaissance, payload development, or even understanding the technical documentation needed to exploit vulnerabilities. This isn’t science fiction – it’s happening right now, and it means our assumptions about attacker skill levels need a serious update.

Supply Chain Attacks Get an AI Upgrade

Perhaps the most technically interesting development is the emergence of autonomous AI agents in supply chain attacks. While the current campaign targets crypto wallets, the methodology has much broader implications that should concern anyone responsible for software supply chain security.

Traditional supply chain attacks require attackers to manually identify targets, craft specific payloads, and maintain persistence across multiple systems. Now we’re seeing AI agents that can automate these processes, potentially scaling attacks in ways we haven’t seen before. An AI agent could theoretically analyze thousands of repositories, identify vulnerable dependencies, and craft targeted attacks without human intervention.

This isn’t just about cryptocurrency anymore. The same techniques could target enterprise software dependencies, cloud services, or any system that relies on automated processes for updates and maintenance.
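On the defensive side, one of the oldest controls still bites: verify the integrity of every artifact before it enters your build. Below is a minimal sketch in Python, assuming you maintain a pinned manifest of expected SHA-256 digests; the manifest file name and structure are hypothetical stand-ins for whatever lockfile or internal registry you actually use.

```python
import hashlib
import json
import sys
from pathlib import Path

# Hypothetical manifest mapping artifact file names to pinned SHA-256
# digests, e.g. {"some-package-1.3.0.tgz": "4e0a..."}. In practice this
# would come from a lockfile or an internal registry you control.
MANIFEST = json.loads(Path("pinned-hashes.json").read_text())


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large artifacts stay out of memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: Path) -> bool:
    expected = MANIFEST.get(path.name)
    if expected is None:
        print(f"REJECT {path.name}: not in pinned manifest")
        return False
    if sha256_of(path) != expected:
        print(f"REJECT {path.name}: digest mismatch")
        return False
    return True


if __name__ == "__main__":
    # Check every artifact so a single failure doesn't hide the others.
    results = [verify(Path(p)) for p in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```

The point of the design is that an AI agent tampering with a dependency mirror still has to defeat the pinned digest; the attack shifts from poisoning any download path to compromising the manifest itself, which you can keep in version control and review.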

The Infrastructure Problem We’re Not Talking About

While everyone focuses on AI model security, there’s a quieter but equally important issue brewing: exposed endpoints in LLM infrastructure. Organizations rushing to deploy their own large language models are creating new attack surfaces faster than they can secure them.

Each LLM deployment typically requires multiple supporting services – APIs for model access, databases for training data, monitoring systems, and integration endpoints. Many organizations are treating these as internal services without applying the same security rigor they’d use for public-facing applications.
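To make the gap concrete: internal model-serving endpoints frequently ship with no authentication at all. Here’s a minimal sketch of putting an API-key check in front of an inference route using FastAPI; the framework, header name, route, and key-storage choice are illustrative assumptions, not a claim about any particular product.

```python
import hmac
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

# For the sketch, the key lives in an environment variable; a real
# deployment would pull it from a secrets manager.
API_KEY = os.environ["MODEL_API_KEY"]


def require_api_key(x_api_key: str = Header(default="")) -> None:
    # compare_digest keeps the comparison constant-time.
    if not hmac.compare_digest(x_api_key, API_KEY):
        raise HTTPException(status_code=401, detail="invalid or missing API key")


@app.post("/v1/generate", dependencies=[Depends(require_api_key)])
def generate(payload: dict) -> dict:
    # Placeholder for the actual model call; the point is that even an
    # "internal" inference endpoint refuses unauthenticated traffic.
    return {"completion": "..."}
```

Even this bare minimum changes the calculus for lateral movement: an attacker who lands on the same network segment can no longer query the model or enumerate its routes for free.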

I’ve seen this pattern before with containerization and cloud migrations. The technology moves fast, security practices lag behind, and we end up with a mess to clean up later. The difference is that LLM infrastructure often handles sensitive data and can be leveraged for lateral movement within networks.

What This Means for Our Teams

These developments point to a common theme: AI is lowering the barrier to entry for sophisticated attacks while simultaneously creating new attack surfaces. We need to adjust our defensive strategies accordingly.

First, our security awareness training needs to address AI-enhanced social engineering. Employees should know that voice and video calls can be spoofed, and we need verification procedures that account for this reality.
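What might such a procedure look like? One hedged sketch: when a caller asks for something sensitive, issue a one-time challenge over a channel the real employee already controls and make the caller read it back. Everything below is illustrative; the delivery function is a stub for whatever chat or ticketing system you trust.

```python
import secrets
import time

# In-memory store for the sketch; a real system would use a shared cache
# with enforced expiry (e.g. Redis with a TTL).
_pending: dict[str, tuple[str, float]] = {}
CHALLENGE_TTL_SECONDS = 300


def send_via_known_channel(employee_id: str, code: str) -> None:
    # Hypothetical stub: deliver via corporate chat or the email on file,
    # never via contact details supplied by the caller.
    print(f"[out-of-band] {employee_id}: {code}")


def issue_challenge(employee_id: str) -> None:
    """Generate a one-time code and push it over a second channel."""
    code = secrets.token_hex(4)  # short enough to read aloud
    _pending[employee_id] = (code, time.monotonic())
    send_via_known_channel(employee_id, code)


def verify_challenge(employee_id: str, spoken_code: str) -> bool:
    """The caller must read the code back before the request proceeds."""
    entry = _pending.pop(employee_id, None)  # single use
    if entry is None:
        return False
    code, issued_at = entry
    if time.monotonic() - issued_at > CHALLENGE_TTL_SECONDS:
        return False
    return secrets.compare_digest(code, spoken_code.strip().lower())
```

The property that matters is that a cloned voice alone is no longer sufficient; the attacker would also need control of the second channel.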

Second, we can’t assume that low-skilled attackers will remain low-impact. AI tools are changing that equation, which means our threat modeling needs to consider a broader range of potential attackers.

Finally, if your organization is deploying AI infrastructure, treat those endpoints with the same security scrutiny you’d apply to any critical system. The supporting infrastructure around AI models is often more vulnerable than the models themselves.
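A practical first step is simply inventorying what’s already listening. The rough sketch below probes a host list for a couple of common model-serving defaults and flags anything that answers without credentials; the ports and paths reflect typical setups (Ollama on 11434, OpenAI-compatible servers on 8000), but treat them as assumptions to adapt rather than a complete catalog.

```python
import requests

# Common defaults for self-hosted model servers; adjust for your estate.
# 11434 is Ollama's default port and /api/tags lists installed models;
# 8000 with /v1/models is typical for OpenAI-compatible inference servers.
PROBES = [
    (11434, "/api/tags"),
    (8000, "/v1/models"),
]


def scan(hosts: list[str]) -> None:
    for host in hosts:
        for port, path in PROBES:
            url = f"http://{host}:{port}{path}"
            try:
                resp = requests.get(url, timeout=3)
            except requests.RequestException:
                continue  # closed port or unreachable host
            if resp.status_code == 200:
                # A 200 with no credentials means anyone on this network
                # segment can enumerate, and likely query, the model.
                print(f"EXPOSED       {url}")
            elif resp.status_code in (401, 403):
                print(f"auth enforced {url}")


if __name__ == "__main__":
    scan(["10.0.0.5", "10.0.0.6"])  # placeholder internal hosts
```

Anything that prints as exposed deserves the same follow-up you’d give an unauthenticated public API: put it behind auth, restrict the network path, and log access.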

The attackers are adapting quickly. We need to do the same.
