Your Airline Miles Are Now Underground Currency (And Other Tales from This Week's Security Chaos)


You know that feeling when you check your airline account and see a balance of zero miles? Well, there’s a decent chance those points didn’t just expire – they might be funding someone’s vacation on the dark web.

I’ve been digging into some fascinating security stories this week that really highlight how creative threat actors have become. From turning your hard-earned travel rewards into criminal currency to nation-states playing attribution shell games, it’s been quite the ride.

When Loyalty Programs Become Criminal Currency

The most eye-opening story has to be how cybercriminals are treating stolen airline miles and hotel points like cryptocurrency. Flare’s research shows that stolen loyalty points are being converted into actual flights and hotel stays, then resold at discounted rates to unsuspecting travelers.

Think about it from an operational perspective – as money laundering, this is brilliant. Instead of trying to cash out stolen credit card data directly, criminals are using compromised loyalty accounts to book legitimate travel, then selling those bookings for real money. The paper trail looks clean because the original booking appears legitimate.

What’s particularly concerning is how this affects us as defenders. Most organizations focus heavily on protecting financial accounts but treat loyalty program security as an afterthought. Yet for attackers, a frequent flyer account with 100,000 miles might be worth more than a checking account with a few hundred dollars.

The Polyfill Plot Thickens

Remember that massive polyfill supply chain attack from 2024 that hit over 100,000 websites? Well, we’ve got a plot twist that reads like a spy novel. What was initially attributed to Chinese threat actors now appears to have North Korean involvement.

The attribution change came about through an infostealer infection – essentially, investigators got lucky when malware on a threat actor’s machine revealed the true origins. This is a perfect example of why initial attribution in cyber attacks should always be taken with a grain of salt. Nation-state actors are getting increasingly sophisticated at false flag operations.

From a supply chain security perspective, this reinforces something we all know but often struggle to implement: dependency management isn’t just about keeping libraries updated, it’s about understanding who controls the infrastructure serving your code. The polyfill service was a single point of failure for thousands of sites.
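One concrete control for exactly this class of attack is Subresource Integrity: pin a hash of the script you expect, and the browser refuses to execute anything else. Here's a minimal sketch in Python for generating the integrity value for a self-hosted or vendored copy of a third-party script (the file path and script contents are illustrative):

```python
import base64
import hashlib

def sri_hash(file_bytes: bytes) -> str:
    """Compute a Subresource Integrity (SRI) value so the browser
    refuses to run the script if the served bytes ever change."""
    digest = hashlib.sha384(file_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Illustrative: pin a vendored polyfill bundle instead of trusting
# whoever happens to control a third-party CDN today.
script = b"window.Promise = window.Promise || function () {};"
tag = (f'<script src="/vendor/polyfill.min.js" '
       f'integrity="{sri_hash(script)}" crossorigin="anonymous"></script>')
print(tag)
```

Note that SRI only helps for static assets – a service that legitimately varies its response per browser (as polyfill.io did) can't be pinned this way, which is an argument for vendoring the code yourself.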

AI Guardrails: More Like Guidelines

Speaking of things that aren’t as secure as they appear, Palo Alto Networks’ Unit 42 has been busy breaking AI safety mechanisms. They’ve developed successful attacks to bypass safety guardrails in popular generative AI tools.

This doesn’t surprise me at all. We’re essentially trying to solve a fundamentally unsolvable problem – how do you prevent a system designed to be helpful and creative from being helpful and creative in ways you don’t want? It’s like trying to build a car that can drive anywhere except to bank robberies.

The real issue here is that organizations are deploying AI tools assuming the guardrails work perfectly, when in reality they’re more like speed bumps than brick walls. If you’re integrating LLMs into your security workflows or customer-facing applications, you need defense in depth, not just trust in the vendor’s safety measures.
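What does defense in depth look like in practice here? One layer is screening model output with checks you control, independent of the vendor's guardrails. This is a minimal sketch – the function name and deny patterns are illustrative, not any real vendor's API:

```python
import re

# Illustrative second-layer output checks, applied after (not instead of)
# the vendor's own safety guardrails.
DENY_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|password)\s*[:=]"),  # credential-looking output
    re.compile(r"(?i)\bdrop\s+table\b"),                  # destructive SQL
]

def screen_output(model_response: str) -> str:
    """Withhold a model response that trips any local output policy check."""
    for pattern in DENY_PATTERNS:
        if pattern.search(model_response):
            return "[response withheld: failed output policy check]"
    return model_response

print(screen_output("Here is the schema you asked about."))
print(screen_output("password: hunter2"))
```

A regex deny-list is obviously bypassable too – the point is that each independent layer raises the cost of a bypass, rather than betting everything on the model behaving.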

The Board-Level Reality Check

This ties into a broader theme I’m seeing in discussions about AI-automated exploitation. The uncomfortable truth is that we’re moving into an era where “we’ve accepted the risk” isn’t going to fly anymore with leadership.

One piece on this theme poses a question that should keep every CISO awake at night: “You knew, and you could have acted. Why didn’t you?” When AI can automate the discovery and exploitation of vulnerabilities at scale, that comfortable vulnerability backlog becomes a ticking time bomb.

I’ve been in those board meetings where we present risk assessments with hundreds of medium and high-severity findings, and leadership nods along with the “accepted risk” narrative. But when attackers can use AI to automatically chain together those “medium” risks into critical compromise paths, the math changes completely.

Zombie Zip: Because We Needed More Ways for Archives to Attack Us

And just to round out the week, we’ve got a new vulnerability dubbed “Zombie Zip” (CVE-2026-0866). While details are still emerging, anything with “zombie” in the name involving file archives immediately gets my attention.

We’ve seen enough zip-based attacks over the years – from zip bombs to directory traversal – that any new archive-related vulnerability should be on everyone’s radar. File upload functionality is everywhere, and zip handling is notoriously complex to implement securely.
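The directory-traversal variant ("zip slip") is worth spelling out, because the fix is small and routinely skipped: validate each entry's resolved path before extracting. A minimal sketch, assuming Python's standard `zipfile` module – it checks traversal only, not zip bombs, symlinks, or size limits:

```python
import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a zip, rejecting any entry whose resolved path would
    escape dest_dir (the classic 'zip slip' traversal trick)."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for member in zf.infolist():
            target = os.path.realpath(os.path.join(dest_dir, member.filename))
            # An entry like "../../etc/cron.d/evil" resolves outside dest_dir.
            if target != dest_dir and not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked traversal attempt: {member.filename}")
        zf.extractall(dest_dir)
```

Checking the whole archive before extracting anything (rather than entry by entry) avoids leaving a half-extracted directory behind when a malicious entry appears late in the listing.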

The Common Thread

What ties all these stories together is the theme of assumptions failing us. We assume loyalty programs are low-risk, we assume we know who’s behind attacks, we assume AI guardrails work, we assume accepted risks stay manageable, and we assume file formats are properly handled.

The reality is that attackers are constantly finding new ways to turn our assumptions against us. The best defense continues to be healthy skepticism combined with defense in depth. Trust, but verify. And then verify again.
