Why 2026's First Month Shows We're Fighting the Wrong Battles


I’ve been watching the security news roll in this past week, and honestly, it feels like we’re stuck in a loop. New attack vectors, same old problems, and a growing disconnect between what we’re securing and what actually needs protection.

Let me walk you through what caught my attention and why I think we need to have a serious conversation about priorities.

The Technical Stuff That Keeps Us Busy

First up, there’s this interesting scanning activity that the SANS Internet Storm Center picked up. Attackers are probing web servers using /$(pwd)/ as the initial request path - effectively trying to get shell command substitution executed right in the URL. The activity started around January 13th and has since spread across different sensors.

What’s notable here isn’t the technique itself (command injection attempts are hardly new), but the systematic nature of it. When we see coordinated scanning like this, it usually means someone’s building a target list for something bigger. The fact that it took a week to spread from initial detection to wider sensor networks suggests either careful reconnaissance or a slower botnet deployment.
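If you want to check your own access logs for this kind of probing, a minimal sketch looks like the following. It assumes Apache/nginx combined-format logs; /$(pwd)/ is the pattern from the SANS report, while the backtick and ${IFS} variants are illustrative additions on my part, not something the report documents.

```python
import re

# Shell command-substitution markers sometimes seen in probe URLs.
# "$(" covers the /$(pwd)/ probes from the SANS report; the backtick
# and ${IFS} variants are assumed extras for illustration.
SUSPICIOUS = re.compile(r"\$\(|`|\$\{IFS\}")

def flag_probe_lines(log_lines):
    """Return log lines whose request path contains shell metacharacters."""
    hits = []
    for line in log_lines:
        # Combined log format: the request is quoted as "METHOD path HTTP/x.x"
        m = re.search(r'"(?:GET|POST|HEAD) ([^ "]+)', line)
        if m and SUSPICIOUS.search(m.group(1)):
            hits.append(line)
    return hits

sample = [
    '1.2.3.4 - - [13/Jan/2026:10:00:00 +0000] "GET /$(pwd)/ HTTP/1.1" 404 162',
    '5.6.7.8 - - [13/Jan/2026:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 512',
]
print(flag_probe_lines(sample))  # only the /$(pwd)/ line is flagged
```

Nothing fancy - but running something like this over a week of logs tells you quickly whether the campaign has reached your perimeter.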

Meanwhile, Fortinet’s research team documented a multi-stage phishing campaign hitting Russian users with something called Amnesia RAT alongside ransomware. The attack starts with those business-themed documents we’ve all seen a thousand times - the kind that look routine enough to slip past tired employees at 4 PM on a Friday.

What’s interesting about this campaign is the dual payload approach. Most groups pick either data theft (RAT) or immediate monetization (ransomware); combining both suggests either a more sophisticated operation or different groups sharing infrastructure.

The Bigger Picture We’re Missing

Here’s where things get more interesting, though. Dark Reading published a piece arguing that 2025 should have been our wake-up call to stop just protecting systems and start protecting human decision-making.

I’ve been thinking about this a lot lately. We spend enormous amounts of time and money building technical defenses, but when was the last time you saw a security budget line item for “helping people make better decisions under pressure”?

Think about every major incident you’ve worked. How many failed because of a technical control gap versus how many failed because someone made a reasonable-seeming decision with incomplete information? In my experience, it’s heavily weighted toward the latter.

The phishing campaign I mentioned earlier? The technical payload delivery is almost secondary to the social engineering that gets it there. We can build the best email filtering in the world, but if someone’s stressed about a deadline and receives a document that looks like it’s from their supply chain partner, human psychology often wins.

When Critical Infrastructure Gets Real

Speaking of human decisions under pressure, the NHS just issued an open letter demanding better cybersecurity standards from their suppliers. This isn’t just another compliance requirement - it’s healthcare technology leaders saying they need to identify supply chain risks across the entire health and social care system.

This hits different when you think about it in context. Healthcare systems are where technical failures directly translate to human consequences. When an NHS trust can’t access patient records because of a supply chain compromise, that’s not just an IT problem - it’s a life safety issue.

What I find encouraging is that they’re thinking systematically about supply chain security rather than just reacting to individual incidents. But what worries me is that this kind of proactive approach seems to only happen in critical infrastructure sectors. Why aren’t we seeing similar initiatives in financial services, education, or manufacturing?

What This Means for Us

Looking at these stories together, I see a pattern that’s been bothering me for months. We’re getting really good at the technical detection and response side of security, but we’re not making proportional progress on the human and systemic sides.

The web server scanning campaign will probably be blocked by updated signatures within days. The phishing campaign will get added to threat intelligence feeds. But the underlying problems - people making decisions with incomplete information, supply chains with unclear security postures, organizations that only think systematically about security when lives are directly at stake - those aren’t getting solved by technical controls.

I’m not saying we should stop improving our technical defenses. But maybe it’s time to admit that the hardest problems in security aren’t technical anymore. They’re about helping people make better decisions, building organizational cultures that support security thinking, and creating systems that fail gracefully when humans inevitably make mistakes.

What do you think? Are we solving the right problems, or just the ones we know how to solve?
