Shadow AI and Exposed LLMs: Why Your Organization’s AI Security is Probably Worse Than You Think
I’ve been digging through this week’s security news, and there’s a pattern emerging that should make every CISO lose sleep. We’re seeing AI security failures across multiple fronts – from shadow AI deployments to exposed language model hosts to malicious browser extensions stealing ChatGPT tokens. The common thread? Organizations are rushing to adopt AI without understanding the attack surface they’re creating.
Let me walk you through what happened this week and why it matters for your security program.
The Shadow AI Problem Just Got Real
Tenable just launched their AI Exposure add-on, and honestly, it’s about time someone tackled this systematically. The tool discovers unsanctioned AI use across organizations and enforces policy compliance with approved tools. But here’s what caught my attention – they wouldn’t have built this unless the shadow AI problem was massive.
Think about your own environment for a second. How many of your developers are using GitHub Copilot on personal accounts? How many marketing folks are feeding customer data into ChatGPT for content generation? The Tenable solution suggests this isn’t just a few rogue users – it’s systematic across enterprises.
What worries me most is the data exposure angle. Unlike traditional shadow IT where you might lose control of an application, shadow AI means you’re potentially exposing proprietary data to train models you don’t control. That customer database your sales team fed into an unapproved AI tool? It’s not coming back.
175,000 Exposed Ollama Hosts: A Ticking Time Bomb
Speaking of AI exposure, researchers found 175,000 internet-exposed Ollama hosts that could enable large language model abuse. Of those, 23,000 hosts remained persistently reachable across the researchers’ 293-day scanning window.
This is exactly what I was afraid would happen. Organizations deploy these self-hosted LLM platforms thinking they’re being security-conscious by keeping AI in-house, but then they expose them to the internet without proper access controls. It’s like setting up your own private cloud and then leaving the admin panel open to the world.
The persistence of these exposures – nearly 300 days for some hosts – tells me this isn’t accidental misconfiguration. It’s systematic negligence. These aren’t quick oops moments that get fixed in the next patch cycle.
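If you run Ollama yourself, it’s worth five minutes to confirm you’re not one of those 175,000. Here’s a minimal sketch that probes hosts you own for an unauthenticated Ollama API, assuming the default port (11434) and the standard /api/tags endpoint; the host list is a placeholder for your own public addresses, not anything from the research.

```python
# check_ollama_exposure.py: probe hosts you own for an unauthenticated Ollama API.
# Assumes Ollama's default port (11434) and its standard /api/tags endpoint;
# the HOSTS list is a placeholder for addresses you control.
import requests

HOSTS = ["203.0.113.10", "198.51.100.25"]  # replace with your own public IPs

for host in HOSTS:
    url = f"http://{host}:11434/api/tags"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException:
        print(f"{host}: no response (filtered or not running)")
        continue
    try:
        data = resp.json()
    except ValueError:
        data = {}
    if resp.ok and "models" in data:
        names = [m.get("name") for m in data.get("models", [])]
        print(f"{host}: EXPOSED - unauthenticated Ollama serving {names}")
    else:
        print(f"{host}: responded, but not an open Ollama API (HTTP {resp.status_code})")
```

If that script prints a model list from the public internet, so can anyone else, and at minimum you should bind the service to localhost or put it behind an authenticating reverse proxy.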
Browser Extensions: The New Malware Highway
Here’s where it gets personal for every user in your organization. Researchers uncovered malicious Chrome extensions that hijack affiliate links, steal data, and – here’s the kicker – collect OpenAI ChatGPT authentication tokens.
One example is the “Amazon Ads Blocker” extension, which sounds innocuous enough that users install it without thinking twice. But once installed, it harvests their ChatGPT access tokens. Imagine an attacker with access to your employees’ AI conversations, including any sensitive data they might have discussed with ChatGPT.
This attack vector is particularly nasty because it bypasses most of our traditional security controls. Your DLP might catch someone copying data to a USB drive, but it probably won’t flag a browser extension silently exfiltrating AI authentication tokens.
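Short of blocking extensions outright, you can at least see what’s installed. Below is a minimal sketch that walks a local Chrome profile’s Extensions directory and flags anything requesting cookie access, webRequest, or host permissions that touch ChatGPT domains. The profile paths are common Linux and macOS defaults and the permission list is my own assumption about what’s worth flagging, so treat it as a starting point for an audit, not a detection rule for this specific campaign.

```python
# audit_chrome_extensions.py: list locally installed Chrome extensions and flag
# ones requesting permissions that could reach AI sessions (cookies, webRequest,
# or host access to ChatGPT domains). Profile paths are common Linux/macOS
# defaults; localized names may appear as "__MSG_*__" placeholders.
import json
from pathlib import Path

PROFILE_DIRS = [
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
]
RISKY = {"cookies", "webRequest", "<all_urls>"}
AI_HOSTS = ("chat.openai.com", "chatgpt.com", "openai.com")

for base in PROFILE_DIRS:
    if not base.is_dir():
        continue
    # On-disk layout is <extension id>/<version>/manifest.json
    for manifest in base.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(errors="ignore"))
        except (OSError, ValueError):
            continue
        perms = {p for p in data.get("permissions", []) + data.get("host_permissions", [])
                 if isinstance(p, str)}
        hits = sorted(p for p in perms if p in RISKY or any(h in p for h in AI_HOSTS))
        flag = "FLAG" if hits else "ok"
        ext_id = manifest.parent.parent.name
        print(f"{flag:4} {ext_id} {data.get('name', '?')!r} {hits}")
```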
AI-Powered Attacks Hit Real-World Targets
The most sobering story this week involves the RedKitten campaign, which uses AI-developed malware to target people seeking information about missing persons or political dissidents in Iran. This isn’t theoretical anymore – AI-generated malware is being used against real people in life-or-death situations.
What strikes me about this campaign is how it demonstrates AI’s ability to create highly targeted, contextually relevant attacks. The lures are designed specifically for people in desperate situations looking for information about loved ones. That level of social engineering precision would have taken human attackers significant time and cultural knowledge to develop.
What This Means for Your Security Program
We’re dealing with a fundamental shift in both attack and defense capabilities. The traditional approach of securing the perimeter and controlling approved applications doesn’t work when your users can spin up powerful AI capabilities through a browser extension or an exposed API endpoint.
Here’s what I’m recommending to my clients:
First, you need visibility into AI usage across your organization. Whether you use Tenable’s new tool or build your own detection capabilities, you can’t secure what you can’t see. Start with network monitoring for AI service API calls and browser extension audits.
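As a starting point for the network side, here’s a minimal sketch that counts hits on well-known AI service domains in an egress proxy log. The log path, the field layout, and the domain list are all assumptions, so adapt them to whatever your proxy or DNS resolver actually emits.

```python
# flag_ai_traffic.py: count requests to well-known AI service endpoints in an
# egress proxy log. The log path, field layout, and domain list are assumptions;
# adapt them to whatever your proxy or DNS resolver actually emits.
import re
from collections import Counter

LOG_PATH = "/var/log/squid/access.log"   # hypothetical proxy log location
AI_DOMAINS = (
    "api.openai.com", "chat.openai.com", "chatgpt.com",
    "api.anthropic.com", "generativelanguage.googleapis.com",
    "api.mistral.ai", "openrouter.ai",
)
pattern = re.compile("|".join(re.escape(d) for d in AI_DOMAINS))

hits = Counter()
with open(LOG_PATH, errors="ignore") as log:
    for line in log:
        match = pattern.search(line)
        if not match:
            continue
        fields = line.split()
        # Default Squid access logs put the client IP in the third field;
        # adjust the index for your own log format.
        client = fields[2] if len(fields) > 2 else "unknown"
        hits[(client, match.group())] += 1

for (client, domain), count in hits.most_common(20):
    print(f"{client:15} -> {domain:45} {count} requests")
```

Even a crude tally like this tells you which teams are already leaning on which AI services, which is exactly the conversation you want to have before an incident forces it.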
Second, treat AI authentication tokens like any other credential. They should be in your privileged access management system, rotated regularly, and monitored for unusual usage patterns.
Finally, update your security awareness training to include AI-specific threats. Your users need to understand that browser extensions can steal their AI access and that feeding company data into unapproved AI services creates permanent exposure risks.
The good news is that law enforcement is still capable of large-scale operations against cybercriminals. Operation Switch Off dismantled major pirate TV streaming services this week, seizing three industrial-scale illegal IPTV operations. It’s a reminder that traditional law enforcement tools still work against even sophisticated criminal enterprises.
But for AI security, we’re largely on our own. The technology is moving faster than regulations, and the attack surface is expanding daily. The organizations that get ahead of this now will have a significant advantage over those that wait for the next breach to force their hand.
Sources
- Tenable Tackles AI Governance, Shadow AI Risks, Data Exposure
- Operation Switch Off dismantles major pirate TV streaming services
- 175,000 Exposed Ollama Hosts Could Enable LLM Abuse
- Researchers Uncover Chrome Extensions Abusing Affiliate Links and Stealing ChatGPT Access
- New AI-Developed Malware Campaign Targets Iranian Protests