When Attackers Take the Path of Least Resistance: RMM Tools Become the New Malware
I’ve been watching an interesting shift in how attackers operate, and it’s forcing us to rethink some fundamental assumptions about threat detection. Instead of crafting sophisticated malware that might get caught by our defenses, threat actors are increasingly just using the legitimate tools we already have installed in our environments.
The most striking example? Remote monitoring and management (RMM) software abuse is surging. According to recent reporting from Dark Reading, hackers are ditching traditional malware in favor of these legitimate administrative tools because they offer something malware struggles with: stealth, persistence, and operational efficiency.
Think about it from an attacker’s perspective. Why spend time developing custom malware that antivirus might catch when you can use TeamViewer, AnyDesk, or similar tools that IT departments use every day? These applications have legitimate network access, administrative privileges, and they blend right into normal business operations.
The Double-Edged Sword of Legitimate Tools
This trend highlights something we’ve been grappling with for years: the challenge of distinguishing between legitimate administrative activity and malicious behavior. When an attacker uses RMM software to access systems, the network traffic looks normal, the process signatures are trusted, and traditional security tools give it a pass.
We’re seeing the same principle play out across different attack vectors. Take the recent supply chain concerns that prompted Notepad++ to implement what it calls a “double-lock” update mechanism. The popular text editor had to completely rethink its update process after security gaps led to a supply-chain compromise.
The new system adds an extra layer of verification to prevent attackers from pushing malicious updates through legitimate channels. It’s a smart move, but it also shows how attackers are consistently targeting the trust relationships we rely on rather than trying to break through our defenses head-on.
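The general idea behind a double-lock update check can be sketched in a few lines: the update is applied only if two independent verifications both pass. This is a minimal illustration, not Notepad++’s actual mechanism; the key names are placeholders, and a real implementation would use asymmetric signatures (e.g., Ed25519) rather than the HMAC used here for brevity.

```python
import hashlib
import hmac

# Lock 1 input: a payload hash pinned through a separate channel
# (hypothetical value, computed here just so the sketch is self-contained).
PINNED_SHA256 = hashlib.sha256(b"update-payload-v2").hexdigest()

# Lock 2 input: a vendor signing key (placeholder for a real public-key scheme).
SIGNING_KEY = b"vendor-signing-key"

def verify_update(payload: bytes, signature: str) -> bool:
    """Apply the update only if BOTH independent checks pass."""
    # Lock 1: payload hash must match the out-of-band pinned hash.
    hash_ok = hashlib.sha256(payload).hexdigest() == PINNED_SHA256
    # Lock 2: signature must verify against the vendor key
    # (constant-time comparison to avoid leaking partial matches).
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(expected, signature)
    return hash_ok and sig_ok
```

The point of requiring both locks is that an attacker who compromises one channel (say, the download server) still cannot push a malicious update without also compromising the second, independent channel.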
AI Amplifies the Privilege Problem
Speaking of trust relationships, we’re creating new ones faster than we can secure them. A recent study from Teleport found that organizations running over-privileged AI systems experience incident rates 4.5 times higher than those with properly scoped AI permissions. We’re talking about a 76% incident rate for over-privileged AI deployments.
This shouldn’t surprise us, but it’s concerning how quickly we’re repeating old mistakes with new technology. We’ve known for decades that excessive privileges create security risks, yet here we are giving AI systems broad access because it’s easier than figuring out exactly what they need.
The problem gets even more complex when you consider the novel attack vectors AI introduces. Security researcher Bruce Schneier recently highlighted three different papers describing side-channel attacks against large language models. These attacks exploit timing variations in AI inference to extract sensitive information - essentially using the performance optimizations that make AI practical as a way to spy on the models.
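To see why timing alone can leak information, consider this toy sketch (not an attack on any real system): a service whose response time depends on whether an input was seen before reveals that fact to anyone who can measure latency. The delay values are arbitrary assumptions; the same class of signal is what the papers describe for optimized LLM inference.

```python
import time

_cache = set()  # simulated inference cache

def serve(token: str) -> str:
    """Toy service: cached inputs return fast, fresh inputs are slow."""
    if token in _cache:
        return "ok"          # fast path: result already cached
    time.sleep(0.05)         # slow path: simulated fresh computation
    _cache.add(token)
    return "ok"

def probe(token: str) -> bool:
    """Attacker's view: infer cache state purely from latency."""
    t0 = time.perf_counter()
    serve(token)
    return (time.perf_counter() - t0) < 0.01  # True => token was cached
```

The attacker never reads the cache directly; the performance optimization itself becomes the leak, which is exactly why these side channels are so hard to close without giving up the speedups.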
Critical Infrastructure Under Pressure
While we’re dealing with these emerging threats, we can’t forget about the fundamentals. Industrial control systems remain a prime target, and the latest analysis from SecurityWeek paints a sobering picture of what we’re up against.
Nation-state actors and ransomware groups are increasingly targeting critical infrastructure, and our aging industrial systems weren’t designed with modern threat actors in mind. The collision of sophisticated attackers with legacy infrastructure creates scenarios that keep me up at night.
What’s particularly challenging is that many of these industrial environments can’t implement traditional security controls without disrupting operations. You can’t just install endpoint detection and response software on a system that controls water treatment or power distribution - the risk of interference is too high.
Adapting Our Defensive Strategies
So where does this leave us? I think we need to fundamentally shift how we think about security monitoring. Instead of focusing primarily on malicious software, we need to get much better at detecting malicious behavior - regardless of what tools are being used.
For RMM abuse, this means implementing behavioral analytics that can spot unusual patterns in legitimate tool usage. We need to know when RMM software is being used outside normal business hours, from unexpected locations, or to access systems that don’t typically require remote management.
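Those behavioral checks can be sketched as a simple rule set over session metadata. This is an illustrative example only: the field names, location allowlist, and business-hours window are assumptions, and a production system would learn baselines per host rather than hard-coding them.

```python
from datetime import datetime

# Hypothetical baselines for "normal" RMM usage.
KNOWN_LOCATIONS = {"HQ", "Branch-East"}
BUSINESS_HOURS = range(8, 18)  # 08:00-17:59 local time

def flag_rmm_session(session: dict) -> list:
    """Return the reasons (if any) this RMM session looks anomalous."""
    alerts = []
    start = datetime.fromisoformat(session["start"])
    if start.hour not in BUSINESS_HOURS:
        alerts.append("off-hours access")
    if session["location"] not in KNOWN_LOCATIONS:
        alerts.append("unexpected source location")
    if not session.get("host_usually_managed", True):
        alerts.append("host not normally under remote management")
    return alerts
```

None of these signals is conclusive on its own; the value comes from correlating several weak indicators that legitimate admin activity rarely triggers together.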
For AI systems, we need to apply the principle of least privilege just as rigorously as we do for human users. Maybe more so, given the potential for AI to operate at machine speed and scale.
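In practice, least privilege for an AI agent can start with something as plain as a deny-by-default scope table checked before every tool call. A minimal sketch, assuming hypothetical agent names and action strings (not drawn from any real product):

```python
# Deny-by-default: an agent can only perform actions explicitly granted to it.
AGENT_SCOPES = {
    "support-bot": {"read_tickets", "post_reply"},
    "report-writer": {"read_metrics"},
}

def authorize(agent: str, action: str) -> bool:
    """Permit an action only if it appears in the agent's granted scope."""
    return action in AGENT_SCOPES.get(agent, set())
```

The design choice that matters is the default: an unknown agent or an unlisted action is refused, so forgetting to scope a new deployment fails closed rather than open.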
And for critical infrastructure, we need to accept that perfect security isn’t possible and focus on resilience instead. This means having robust backup systems, clear incident response procedures, and the ability to operate manually when automated systems are compromised.
The common thread here is that attackers are consistently choosing the path of least resistance. Our job is to make sure that path doesn’t lead through our most critical assets.
Sources
- RMM Abuse Explodes as Hackers Ditch Malware - Dark Reading
- Notepad++ boosts update security with ‘double-lock’ mechanism - Bleeping Computer
- Cyber Insights 2026: The Ongoing Fight to Secure Industrial Control Systems - SecurityWeek
- Over-Privileged AI Drives 4.5 Times Higher Incident Rates - Infosecurity Magazine
- Side-Channel Attacks Against LLMs - Schneier on Security