When Insiders Strike: The Google Trade Secret Case Shows Why Trust Isn't Enough

The security community got another wake-up call this week with news that three former Google engineers have been indicted for allegedly stealing trade secrets and transferring them to Iran. It’s the kind of insider threat that keeps CISOs up at night – and honestly, it should.

According to The Hacker News, Samaneh Ghandali, her husband Mohammadjavad Khosravi, and her sister Soroor Ghandali are accused of taking proprietary information from Google and other tech companies and moving it to unauthorized locations. The fact that this involved family members working together makes it particularly concerning from a threat modeling perspective.

This case hits different because it’s not about some external attacker finding a clever exploit – it’s about people who had legitimate access deciding to abuse it. We can patch vulnerabilities and update our firewalls, but the human element remains our most complex security challenge.

The Broader Pattern of Insider Threats

What makes this Google case even more troubling is how it fits into a pattern we’re seeing across different attack vectors. Take the Nigerian national who just got eight years for hacking tax preparation firms in Massachusetts. He managed to file fraudulent returns seeking over $8.1 million in refunds by compromising multiple firms.

While that’s technically an external threat, both cases highlight the same fundamental problem: attackers are increasingly targeting the trust relationships that our systems depend on. Whether it’s abusing legitimate employee access or compromising the firms that handle our sensitive financial data, the attack surface keeps expanding beyond traditional network perimeters.

Zero-Days That Aren’t So Zero

Speaking of expanding attack surfaces, the Ivanti situation continues to be a nightmare for security teams. Security researchers have traced active exploitation of Ivanti vulnerabilities back to July 2025, with attackers using them to deliver shells, conduct reconnaissance, and download malware.

What’s particularly frustrating about this is the timeline. We’re talking about vulnerabilities that have been actively exploited for months before being discovered and disclosed. That’s not a zero-day in the traditional sense – it’s more like a negative-seven-month-day. The attackers had a massive head start, and organizations running Ivanti systems were unknowingly exposed the entire time.

This really drives home why we need better visibility into our environments. Traditional signature-based detection would have missed this completely, since the attack techniques were unknown to defenders. It’s cases like this where behavioral analysis and anomaly detection prove their worth.
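To make that concrete, here is a minimal sketch of the behavioral-baselining idea: instead of matching known-bad signatures, compare each host's activity against its own history and flag sharp deviations. The hosts, counts, and three-sigma threshold below are all hypothetical; a real deployment would use far richer features (destinations, timing, process lineage).

```python
from statistics import mean, stdev

def anomalous_hosts(baseline, current, threshold=3.0):
    """baseline: {host: [daily outbound-connection counts]}
    current: {host: today's count}. Returns hosts more than
    `threshold` standard deviations above their own mean."""
    flagged = []
    for host, counts in baseline.items():
        if len(counts) < 2:
            continue  # not enough history to model this host
        mu, sigma = mean(counts), stdev(counts)
        today = current.get(host, 0)
        if sigma > 0 and (today - mu) / sigma > threshold:
            flagged.append(host)
    return flagged

# Illustrative data: a VPN appliance suddenly making ~9x its usual
# outbound connections, as a compromised edge device might.
baseline = {
    "vpn-appliance": [110, 95, 120, 105, 98],
    "build-server": [40, 42, 38, 41, 39],
}
current = {"vpn-appliance": 900, "build-server": 40}
print(anomalous_hosts(baseline, current))  # → ['vpn-appliance']
```

The point isn't the statistics, which are deliberately crude here; it's that a per-host baseline catches novel tradecraft that no signature database has seen yet.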

AI Security: The New Wild West

The security challenges aren’t limited to traditional software either. Endor Labs discovered six new vulnerabilities in OpenClaw, a popular AI assistant. As AI tools become more integrated into our workflows and decision-making processes, we’re essentially expanding our attack surface into completely uncharted territory.

The interesting thing about AI security is that we’re not just dealing with traditional code vulnerabilities. These systems can be manipulated through their training data, their prompts, or even their decision-making logic. It’s like we’ve built a new category of critical infrastructure without fully understanding all the ways it can be compromised.

The Human Element We Keep Missing

All of these incidents tie back to something I’ve been thinking about more lately – the human factors that our threat intelligence often overlooks. There’s an excellent piece in Dark Reading about how traditional threat intelligence has a “human-shaped blind spot.”

We’re great at tracking malware families, mapping attack infrastructure, and analyzing technical indicators of compromise. But we’re not as good at understanding the human motivations, relationships, and social dynamics that drive these attacks. The Google case is a perfect example – this wasn’t just about technical access; it was about family relationships and potentially ideological motivations.

What This Means for Our Security Programs

Looking at these incidents together, a few things become clear. First, we need to get better at monitoring for insider threats without creating a surveillance state for our employees. That’s a delicate balance, but tools that focus on data movement and access patterns rather than individual behavior can help.
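One way to focus on data movement rather than individual behavior is to alert only on per-account volume spikes. The sketch below is illustrative: the event schema, the 10x multiplier, and the absolute floor are all assumptions, not a product recommendation.

```python
from collections import defaultdict

def flag_data_spikes(events, multiplier=10, min_bytes=100_000_000):
    """events: iterable of (user, day, bytes_read). Flags any user-day
    whose volume exceeds `multiplier` times that user's average on
    other days AND an absolute floor (to skip low-volume noise)."""
    per_user_day = defaultdict(lambda: defaultdict(int))
    for user, day, nbytes in events:
        per_user_day[user][day] += nbytes
    alerts = []
    for user, days in per_user_day.items():
        for day, total in days.items():
            others = [v for d, v in days.items() if d != day]
            if not others:
                continue  # no baseline to compare against
            avg = sum(others) / len(others)
            if total >= min_bytes and total > multiplier * avg:
                alerts.append((user, day, total))
    return alerts

# Illustrative: a user who normally reads ~50 MB/day pulls 5 GB in one day.
events = [
    ("alice", "mon", 40_000_000),
    ("alice", "tue", 60_000_000),
    ("alice", "wed", 5_000_000_000),
]
print(flag_data_spikes(events))  # → [('alice', 'wed', 5000000000)]
```

Because the signal is "how much data moved, and was that normal for this account," the same check works whether the spike comes from a malicious insider or a compromised credential – and it avoids profiling what individuals do, only what the data does.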

Second, we need to expand our definition of “critical systems” to include anything that processes sensitive data – not just the obvious targets. Those tax preparation firms probably didn’t think they were high-value targets, but they became stepping stones to millions in fraudulent refunds.

Finally, we need to accept that some attacks will succeed despite our best efforts. The Ivanti case shows that even with good security practices, unknown vulnerabilities can leave us exposed. That’s why incident response and recovery capabilities are just as important as prevention.

The threat landscape isn’t getting simpler, and cases like these remind us why security remains such a challenging field. But understanding these patterns and learning from each incident helps us build better defenses for tomorrow.
