The AI Security Paradox: Defenders' Advantage vs. New Attack Vectors
- ctsmithiii
- Aug 14
- 4 min read
AI gives defenders advantages in text analysis and automation, but creates new risks through social engineering and autonomous attacks. Here's how to win.

Artificial intelligence is reshaping cybersecurity in contradictory ways, simultaneously empowering defenders while creating unprecedented attack vectors. Insights from Black Hat 2025 reveal that while AI gives security teams significant advantages, organizations must prepare for entirely new categories of threats.
The Defender's AI Advantage
Ryan Fetterman, Senior Security Strategist at Splunk SURGe, argues that defenders currently hold the advantage in the AI arms race. "I think the defender has the advantage because so much of the work that we do in the SOC is based around text-based data—understanding it, generating it," he explained. "Those are the core strengths of models, and that's the core of a lot of the processes that flow through the SOC."
Fetterman's team demonstrated this advantage through "model-in-the-loop threat hunting," achieving remarkable results in PowerShell script analysis. Using off-the-shelf models like Llama 3, they achieved 80% accuracy with nearly perfect recall, reducing analysis time from five minutes per event to two seconds—a 150x speed improvement.
Practical AI Implementation Success Stories
Randall Degges, Head of Developer and Security Relations at Snyk, showcased how AI can make security completely transparent to developers. Their "Secure at Inception" platform automatically scans dependencies, analyzes code files, and fixes vulnerabilities as AI generates code, without requiring any security input from developers.
"Developers basically don't think about security at all. Zero. Absolutely zero," Degges said, demonstrating how AI can eliminate the traditional friction between development speed and security requirements.
Predictive Analytics and Digital Twins
Shannon Murphy from Trend Micro revealed how AI enables predictive security rather than reactive responses. "Generative AI really thrives on context," she explained. "The more context you give the LLM, the more specific the recommendations become."
Trend Micro's digital twin approach allows security teams to simulate attacks against virtual replicas of their networks without impacting production systems. "You can almost chat with your environment and simulate attacks to see what would happen in different scenarios," Murphy noted.
The Dark Side: AI-Powered Social Engineering
However, AI is simultaneously creating sophisticated new attack vectors. Jim Dolce, CEO of Lookout, demonstrated how his team created a convincing voice phishing attack in just 15 minutes using AI. The synthetic voice was so convincing that even Dolce's wife couldn't distinguish it from his real voice.
"You cannot train your way around an AI-generated exploit," Dolce emphasized. "The AI-generated exploit is way too smart to be able to train your way around it."
Lookout's research shows that 40% of phishing attempts now happen via SMS rather than email, as attackers exploit the psychological pressure for immediate response that text messages create.
Nation-State AI Adoption
Cristian Rodriguez, CrowdStrike Field CTO, revealed alarming intelligence about nation-state AI usage. North Korea's FAMOUS CHOLLIMA group has become "the most GenAI-proficient adversary," using AI-generated résumés, deepfake technology in video interviews, and sophisticated identity manipulation to infiltrate over 320 companies—a 220% increase.
"We've analyzed hundreds of hours of video interviews from these episodes, and they use very specific backgrounds consistently," Rodriguez explained, highlighting how AI enables systematic insider threat campaigns.
The Access Broker AI Economy
Ivan Novikov, CEO of Wallarm, described how AI is transforming the access broker economy. His honeypot research revealed that "35% more attackers" were caught when AI was involved in their systems, and successful prompt injection attacks confirmed that criminals are using AI to communicate with target systems.
"We literally got an exploit for an AI API—an attack prepared to target AI systems via their APIs," Novikov revealed. "This means AI systems right now are already vulnerable via APIs, and attackers are using this to automate attacks."
The Human Risk Expansion
Lynsey Wolf, i3 Investigations Manager at DTEX Systems, identified a troubling trend: AI is enabling non-technical actors to become sophisticated threats. "Usually, when I was talking about my super malicious user who's really technical, we don't need someone that's super technical. They just go ask AI to do the technical work," she observed.
AI is also being used to enhance social engineering during job interviews, with North Korean operatives using AI to master English conversations and bypass remote hiring processes.
Defending Against AI Threats
Despite these challenges, security leaders remain optimistic about defensive capabilities:
Confidence-Driven Automation: Snyk's approach uses confidence modeling to determine when AI can operate autonomously versus when human oversight is needed. "If confidence levels are approaching 100%, then maybe human-in-the-loop isn't as important," Degges explained.
AI vs. AI Detection: Lookout deploys AI-powered defenses that analyze messages and voice calls in real-time. "We simply ask the model, 'Is this okay or not?' and that framework is getting 98% accuracy," Dolce said.
Behavioral Analysis: Trend Micro uses AI to baseline both human and agent behavior, alerting when entities deviate from normal patterns. "Identities aren't just humans anymore," Murphy noted. "We want to monitor agent behavior and be alerted when they start behaving anomalously."
Strategic Recommendations
Based on insights from multiple AI security experts:
Embrace AI for Defense: Leverage AI's advantages in text analysis, pattern recognition, and automation
Implement Human-in-the-Loop: Design systems where AI handles initial analysis while humans make final decisions
Prepare for Agent-Based Threats: Build capabilities to detect and respond to AI-powered attacks
Focus on Fundamentals: Ensure basic security controls are solid before deploying advanced AI
Train on AI Threats: Educate teams about deepfakes, sophisticated phishing, and AI-generated attacks
The Future Landscape
Looking ahead, the AI security landscape will be characterized by:
Human-to-agent ratios where analysts oversee multiple AI assistants
Real-time threat detection and response powered by AI
Continuous red teaming using AI agents
Sophisticated social engineering requires technical countermeasures
As Fetterman concluded: "The organizations that thrive will be those that combine rigorous methodology with practical AI implementation—turning the promise of artificial intelligence into measurable improvements in security outcomes."
The AI security paradox requires organizations to simultaneously leverage AI's defensive advantages while preparing for AI-powered attacks. Success depends on thoughtful implementation rather than wholesale adoption of the latest AI trends.
Comments