WhatsApp’s New Lockdown Mode Shows How Targeted Attacks Are Getting Personal
I’ve been following some concerning trends in this week’s security news, and there’s a common thread that’s worth talking about: the increasing sophistication of targeted attacks against specific groups and individuals. Let me walk you through what’s happening and why it matters for how we think about protection strategies.
High-Value Targets Need High-Value Protection
The biggest story that caught my attention is WhatsApp’s new lockdown feature. Meta is rolling out enhanced security specifically designed for journalists, public figures, and other high-risk users who face sophisticated threats like spyware attacks.
This isn’t just another privacy setting – it’s a recognition that certain users face fundamentally different threat models than the average person. When we’re dealing with nation-state actors or well-funded criminal groups, standard security measures often aren’t enough. The fact that WhatsApp is building specialized protections tells us two things: first, that targeted attacks against high-profile individuals are common enough to warrant platform-level intervention, and second, that these attacks are sophisticated enough to bypass normal security controls.
State-Sponsored Groups Keep Evolving Their Tools
Speaking of sophisticated threats, we’re seeing exactly this kind of evolution in action with Mustang Panda’s latest campaign. The Chinese-linked group has updated their COOLCLIENT backdoor for comprehensive data theft from government entities throughout 2025.
What strikes me about this is the persistence and continuous improvement we’re seeing from these state-sponsored groups. They’re not just using the same tools year after year – they’re actively developing and refining their capabilities. Mustang Panda (also known by a handful of other names including Earth Preta and Twill Typhoon) represents the kind of adversary that makes traditional perimeter security feel inadequate. When you’re up against groups with this level of resources and motivation, you need to assume they’re already inside your network.
The Supply Chain Problem Gets Worse
The scale of supply chain attacks is becoming genuinely alarming. Sonatype’s research uncovered over 454,000 malicious open source packages in 2025 alone. They’re calling it the “industrialization” of open source threats, and I think that’s exactly the right term.
This isn’t just a few bad actors anymore – it’s a systematic approach to poisoning the software supply chain at scale. Every time our teams pull in a new dependency or update an existing one, we’re potentially introducing malicious code into our environments. The sheer volume makes traditional code review processes feel inadequate. We need to start thinking about automated scanning and verification as non-negotiable parts of our development pipeline.
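To make "automated scanning and verification" concrete, here's a minimal sketch of one such pipeline check: flagging dependencies in a requirements file that aren't pinned to an exact version, so any update has to be a deliberate, reviewable change. The function name and the regex policy are my own illustration, not a specific tool mentioned in the research.

```python
import re

# Policy (an assumption for this sketch): only "name==exact.version" lines
# pass; version ranges and bare names get flagged for human review.
PINNED = re.compile(r"^[A-Za-z0-9_.-]+==[A-Za-z0-9_.!+-]+$")

def unpinned_dependencies(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if line and not PINNED.match(line):
            flagged.append(line)
    return flagged

if __name__ == "__main__":
    sample = """
    requests==2.31.0
    flask>=2.0            # version range: flag it
    some-new-package      # unpinned: flag it
    """
    for dep in unpinned_dependencies(sample):
        print("UNPINNED:", dep)
```

Wiring a check like this into CI fails the build on unpinned entries; pairing it with a vulnerability-database lookup (OSV, or your scanner of choice) covers the "known-malicious" half of the problem.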
Physical Attacks Haven’t Gone Away
While we’re all focused on sophisticated digital threats, it’s worth remembering that sometimes the old-school approaches still work. The US just charged 31 more people in a massive ATM jackpotting scheme, bringing the total to 87 defendants, mostly Venezuelan nationals.
ATM jackpotting requires physical access to the machines, but it’s still incredibly effective. It’s a good reminder that our security models need to account for both digital and physical attack vectors. The criminals behind these schemes are often part of organized networks that can operate across multiple countries and coordinate complex operations.
AI Is Creating New Accuracy Problems
Finally, there’s an emerging issue that I think we need to start taking seriously: AI model collapse and its impact on zero-trust architectures. As large language models increasingly train on AI-generated data, they’re becoming less accurate over time. This degradation can introduce security vulnerabilities, enable malicious activity, and compromise PII protections.
If we’re building security systems that rely on AI for decision-making – and many of us are – we need to account for the possibility that these systems will become less reliable over time. The feedback loop of AI training on AI-generated content is creating a new category of systemic risk that we’re just beginning to understand.
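One pragmatic hedge, if a model's output feeds security decisions, is to re-score it on a schedule against a fixed, human-labeled canary set and alert when accuracy drifts below an accepted baseline, rather than trusting it indefinitely. A minimal sketch of that idea (function names, baseline, and tolerance are my own, not from the article):

```python
def accuracy(predictions: list[bool], labels: list[bool]) -> float:
    """Fraction of predictions that match the human-labeled ground truth."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def has_drifted(predictions: list[bool], labels: list[bool],
                baseline: float, tolerance: float = 0.05) -> bool:
    """True if accuracy on the canary set fell below baseline - tolerance.

    A True result should trigger review of the model, not an automated
    loosening of whatever controls depend on it.
    """
    return accuracy(predictions, labels) < baseline - tolerance

if __name__ == "__main__":
    labels = [True, True, True, False]       # human-labeled canary set
    preds = [True, False, False, False]      # model's current answers
    print("drifted:", has_drifted(preds, labels, baseline=0.95))
```

The canary set has to stay out of the training data for this to measure anything; the whole point is an independent yardstick that the AI-on-AI feedback loop can't quietly erode.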
What This Means for Our Security Strategies
Looking at these stories together, I see two key themes. First, targeted attacks are becoming more personalized and sophisticated, requiring specialized defenses for high-risk individuals and organizations. Second, the traditional boundaries between different types of threats are blurring – we’re dealing with state-sponsored groups, supply chain attacks, physical crimes, and AI-related vulnerabilities all at the same time.
The common thread is that static, one-size-fits-all security approaches aren’t keeping up with the diversity and sophistication of modern threats. We need security strategies that can adapt to different threat models, account for the specific risks our organizations face, and assume that our adversaries are continuously evolving their capabilities.
Sources
- New WhatsApp lockdown feature protects high-risk users from hackers
- Mustang Panda Deploys Updated COOLCLIENT Backdoor in Government Cyber Attacks
- Researchers Uncover 454,000+ Malicious Open Source Packages
- US Charges 31 More Defendants in Massive ATM Hacking Probe
- AI & the Death of Accuracy: What It Means for Zero-Trust