
The Hidden Threat: How AI is Making Insider Risk Everyone's Problem

  • Writer: ctsmithiii
  • Aug 9
  • 4 min read

Nation-state actors are getting IT jobs at your company. Here's what developers and security teams need to know.



While cybersecurity teams obsess over external threats—building bigger firewalls, deploying more sophisticated endpoint detection, and training users to spot phishing emails—the most dangerous attacks are already inside the building. And artificial intelligence is making this invisible threat exponentially more dangerous.


"I think right now, nation-state actors are a very big difference," says Lynsey Wolf, Investigations Manager for DTEX Systems' elite i³ (Insider Intelligence and Investigations) team. Speaking at Black Hat 2025, Wolf painted a disturbing picture of how AI is fundamentally changing the insider threat landscape in ways that most organizations aren't prepared for.


The North Korean IT Worker Problem

The most alarming trend Wolf identified involves North Korea systematically placing operatives in high-paying IT roles across American companies. These aren't traditional spies trying to steal state secrets—they're after something simpler and more profitable: paychecks.


"North Korea wants to funnel money back for their weapons programs," Wolf explains. "They're going after high-paying IT jobs, and they just want all the money that they're making, and they want to have as many jobs as possible, and they're sending all that money back."


The scheme is elegantly simple and terrifyingly effective. North Korean operatives, working remotely, get hired for legitimate IT positions. They do the work—often excelling as employees—while funneling their salaries back to fund weapons programs. When Wolf mentions that "finding one fake IT worker often means there are several more," she's describing a coordinated infiltration campaign happening across multiple organizations simultaneously.


The scariest part? "More often than not, these people are your best workers, because they just put their head down," Wolf notes. They're not trying to cause immediate harm—they want to keep their jobs. But once suspicion arises, that's when the real damage begins.


AI: The Great Equalizer for Bad Actors

Artificial intelligence is dramatically lowering the barrier to entry for sophisticated insider attacks. Wolf's team is tracking how AI enables two critical capabilities that previously required rare technical expertise:


Enhanced Social Engineering: Nation-state actors are using AI to master English conversations during job interviews, making their deception nearly undetectable. "They're using AI to help them learn English and be able to speak, because when they're getting interviewed, they want to make it seem as though they're not from North Korea," Wolf explains.


Technical Capability Amplification: More concerning is how AI enables non-technical bad actors to become sophisticated threats. "Usually, when I was talking about my super malicious user who's really technical, we don't need someone that's super technical. They just go ask AI to do the technical work," Wolf observes.


This democratization of attack capability means that the traditional insider threat profile—the rare, highly skilled malicious actor—no longer applies. "A super malicious threat actor was rare. It was like the one to 2% of your actual [threats]. Now, AI makes people more sophisticated," she warns.


The Behavioral Red Flags Developers Should Know

Wolf's team focuses on what she calls "left of boom" indicators—behavioral patterns that precede actual security incidents. For development teams and IT professionals, here are the subtle warning signs that often get overlooked:


Unusual Session Patterns: Watch for colleagues logging in at 3 AM and working for unusually long periods. This often indicates multiple people sharing a single account—a classic sign of coordinated infiltration. (A simple detection sketch follows this list.)


Personal Activities on Corporate Devices: Users accessing cryptocurrency sites, personal accounts, or unauthorized AI tools from work computers. "People don't realize that even though you're doing the work, you're still leveraging corporate [resources]," Wolf emphasizes.


Job Search Behavior Combined with Access: While everyone looks for jobs, pay attention to who's actively interviewing while having significant system access. Context matters—a developer with administrative privileges researching "how to archive all this data from SharePoint" after being put on a performance improvement plan raises obvious red flags.


AI Tool Misuse: Employees uploading sensitive code to unauthorized AI platforms like ChatGPT instead of approved corporate tools. This creates both data exposure and potential intellectual property theft.
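

To make the session-pattern flag concrete, here's a minimal sketch of how a team might surface off-hours logins and overlong sessions from authentication logs. The field names (auth_events, login_time, duration_hours) and the thresholds are illustrative assumptions for this post, not DTEX's method or any particular product's schema.

```python
# Minimal sketch: flag off-hours logins and unusually long sessions
# from authentication logs. Field names and thresholds are
# illustrative assumptions, not a specific product's schema.
from datetime import datetime

OFF_HOURS = range(0, 5)    # 00:00-04:59 local time, e.g. the 3 AM login
MAX_SESSION_HOURS = 16     # overlong sessions can hint at shared accounts

def flag_unusual_sessions(auth_events):
    """Return events matching the 'unusual session pattern' red flag."""
    flagged = []
    for event in auth_events:
        login = datetime.fromisoformat(event["login_time"])
        reasons = []
        if login.hour in OFF_HOURS:
            reasons.append("off-hours login")
        if event["duration_hours"] > MAX_SESSION_HOURS:
            reasons.append("overlong session")
        if reasons:
            flagged.append({**event, "reasons": reasons})
    return flagged

# Example: a 3 AM login that runs for 20 hours gets flagged on both counts.
events = [
    {"user": "jdoe", "login_time": "2025-08-04T03:07:00", "duration_hours": 20.5},
    {"user": "asmith", "login_time": "2025-08-04T09:15:00", "duration_hours": 7.0},
]
for hit in flag_unusual_sessions(events):
    print(hit["user"], hit["reasons"])
```

A rule this blunt would be noisy on its own; in practice it's one signal among many, correlated with the other flags on this list before anyone gets a knock on the door.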


The Human-Centric Security Shift

What makes DTEX's approach unique is treating cybersecurity as fundamentally a human behavior problem. "I am a human behavioral scientist," Wolf explains. "I study human behavior, and then I apply that to cyber indicators."

This perspective reveals that virtually every security incident traces back to human decisions. Even technical vulnerabilities exist because "a human wrote bad code" or "they didn't secure the API properly." Understanding the psychological and behavioral factors that drive these decisions becomes crucial for prevention.


Building Defense Against the Insider Threat

Wolf emphasizes that effective insider risk management requires breaking down organizational silos. "It's a cross-functional mission. There's no single team that can address it," she notes. Security teams need to collaborate with HR, legal, physical security, and forensics teams to build a comprehensive defense.


For development teams specifically, this means shifting from reactive incident response to proactive behavioral monitoring. Instead of waiting for data exfiltration alerts, organizations need to watch for the behavioral patterns that precede malicious activity.
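

As an illustration of what "proactive" can mean in practice, the sketch below scores each new session against a user's own history instead of waiting for an exfiltration alert. The statistics are deliberately simple (mean and standard deviation of past session lengths), and the function names and thresholds are assumptions for this example; real insider-risk platforms weigh far more signals, and nothing here reflects DTEX's actual models.

```python
# Minimal sketch of proactive behavioral baselining: score a new
# session against the user's own history rather than a global rule.
# Names and thresholds are illustrative assumptions.
import statistics

def session_anomaly_score(history_hours, new_session_hours):
    """Z-score of a new session length against the user's baseline."""
    if len(history_hours) < 5:
        return 0.0                       # not enough history to judge
    mean = statistics.mean(history_hours)
    stdev = statistics.stdev(history_hours) or 1e-6  # avoid divide-by-zero
    return (new_session_hours - mean) / stdev

# A developer who normally works ~8-hour sessions suddenly logs 22 hours.
baseline = [7.5, 8.0, 8.5, 7.0, 9.0, 8.0]
score = session_anomaly_score(baseline, 22.0)
if score > 3.0:                          # roughly three standard deviations out
    print(f"behavioral anomaly (z={score:.1f}): review before it becomes an incident")
```

The point isn't the math; it's the posture. A per-user baseline surfaces the deviation while it's still "left of boom," which is exactly the window Wolf's team works in.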


The Bottom Line for Tech Teams

The insider threat landscape is evolving rapidly, driven by AI democratization and coordinated nation-state campaigns. Development and IT teams can no longer treat insider risk as someone else's problem. When sophisticated attack capabilities are just a ChatGPT prompt away, and when foreign operatives are systematically infiltrating American companies, every technical professional becomes part of the defense equation.


The solution isn't more surveillance or draconian policies. It's developing human-centric security awareness that recognizes behavioral anomalies, implements appropriate AI governance, and fosters cross-functional collaboration between technical and security teams.


As Wolf puts it: "We need to look at it from a human perspective, not a cyber perspective." In 2025, that human perspective might be the difference between a security program that works and one that misses the threats already inside the building.

