The AI Security Arms Race: Why Bad Actors Are Winning and How Organizations Can Fight Back
- ctsmithiii
- Jun 12
- 4 min read
Updated: Jun 13
Former Pennsylvania CISO Erik Avakian reveals why cybercriminals are ahead in AI security and shares proven frameworks for defending against AI-driven threats.

As organizations rush to embrace artificial intelligence, cybersecurity professionals face an unprecedented challenge: defending against AI-powered attacks while securing their own AI implementations. Erik Avakian, former Chief Information Security Officer for the state of Pennsylvania and current Cybersecurity Counselor at Info-Tech Research Group, offers a sobering assessment of the current landscape.
"Right now, I think the bad guys are winning," Avakian states bluntly. After managing cybersecurity for 45 people across Pennsylvania's state government and navigating three gubernatorial transitions, his perspective carries weight. The reason for this imbalance? Organizations lack proper AI governance frameworks, while attackers leverage AI's speed and sophistication without constraints.
The New Threat Landscape: Speed, Scale, and Deepfakes
Traditional cyberattacks have undergone significant evolution with the integration of AI. "The sheer number of attacks and the sophistication behind them now, AI takes automation to another level," Avakian explains. Polymorphic malware powered by AI adapts faster than human defenders can respond, creating an asymmetric warfare scenario where "you have to combat AI attacks with AI."
Perhaps most concerning are deepfake attacks using voice and video impersonation. "We've seen real attacks using voice impersonation that have tricked individuals into giving up personal information," Avakian notes. These attacks exploit the same psychological levers as traditional phishing (urgency and emotional manipulation), but with unprecedented realism.
The speed advantage is particularly striking. AI can automatically launch attacks based on world events without human intervention. "Anything of a political nature – it doesn't even have to be political events. They're just deciding we're going to target organizations today," he explains. This automated opportunism means attacks can scale instantly across thousands of targets.
AI Governance: The Critical Defense Layer
The solution isn't to block AI innovation, but to implement robust governance frameworks. Avakian advocates establishing AI governance committees that bring together security leaders, enterprise architects, privacy officers, and chief data officers. "Security should not be saying no. [It] should be saying yes, but with conditions," he emphasizes.
This governance approach mirrors successful cloud adoption strategies. Just as organizations learned to manage shadow IT through proper cloud governance, they must now address shadow AI usage. "People are going outside to ChatGPT, so you've got that problem as well," Avakian observes. Rather than blocking these tools entirely, organizations should provide internal alternatives while monitoring unauthorized usage.
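To make the monitoring half of that advice concrete, here is a minimal sketch of a shadow-AI report built from web-proxy logs. It is not from Avakian's playbook; the domain watch list and the CSV log format (with "user" and "host" columns) are illustrative assumptions you would adapt to your own proxy's export.

```python
import csv
from collections import Counter

# Hypothetical watch list of public AI endpoints; extend for your environment.
WATCHED_AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_report(proxy_log_path: str) -> Counter:
    """Count requests per user to watched AI domains.

    Assumes a CSV proxy log with 'user' and 'host' columns; adjust
    the parsing to match your proxy's actual export format.
    """
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("host", "").lower() in WATCHED_AI_DOMAINS:
                hits[row.get("user", "unknown")] += 1
    return hits

if __name__ == "__main__":
    for user, count in shadow_ai_report("proxy.csv").most_common(10):
        print(f"{user}: {count} requests to public AI services")
```

A report like this supports the "yes, but with conditions" posture: it identifies the teams who most need an approved internal alternative rather than simply blocking them.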
Treating AI Agents as Privileged Users
When implementing AI agents with access to enterprise systems, Avakian recommends treating them exactly like privileged human users. "The AI agent becomes a privileged user, just like any person," he explains. This means applying zero-trust principles: granting access only when needed, for specific timeframes, and with comprehensive monitoring.
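Here is a minimal sketch of that idea: every AI-agent credential is scoped, time-boxed, and audited, exactly as a privileged user's session would be. The grant model, scope names, and audit-log format below are illustrative assumptions, not Avakian's implementation.

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """A just-in-time, time-boxed grant for an AI agent, modeled on a
    privileged-user session: explicit scopes, an expiry, and an audit trail."""
    agent_id: str
    scopes: frozenset
    expires_at: float
    grant_id: str = field(default_factory=lambda: uuid.uuid4().hex)

AUDIT_LOG: list = []  # stand-in for your SIEM pipeline

def issue_grant(agent_id: str, scopes: set, ttl_seconds: int = 900) -> AgentGrant:
    """Grant access only when needed, only for a specific timeframe."""
    grant = AgentGrant(agent_id, frozenset(scopes), time.time() + ttl_seconds)
    AUDIT_LOG.append({"event": "grant_issued", "grant": grant.grant_id,
                      "agent": agent_id, "scopes": sorted(scopes)})
    return grant

def authorize(grant: AgentGrant, scope: str) -> bool:
    """Deny by default: valid only for named scopes and before expiry."""
    allowed = scope in grant.scopes and time.time() < grant.expires_at
    AUDIT_LOG.append({"event": "access_check", "grant": grant.grant_id,
                      "scope": scope, "allowed": allowed})
    return allowed

# Example: a reporting agent gets 15 minutes of read-only CRM access.
g = issue_grant("report-agent-01", {"crm:read"})
assert authorize(g, "crm:read")
assert not authorize(g, "crm:write")  # never granted, so denied and logged
```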
The data protection challenge is particularly complex because AI agents can inadvertently create new, sensitive data combinations. "You might be collecting some non-PII over here, and then you start putting those elements together, it becomes PII," Avakian warns. This data-sprawl risk calls for an enterprise architecture review and ongoing monitoring to track how AI systems process and transform data.
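The underlying risk is well documented in privacy research: seemingly harmless quasi-identifiers such as ZIP code, birth date, and gender can re-identify individuals once combined. A simple guardrail, sketched below with an assumed field list and threshold, is to flag any AI-derived dataset that joins too many quasi-identifiers and route it for privacy review.

```python
# Illustrative quasi-identifier check: fields that look like non-PII in
# isolation but can re-identify a person when combined.
QUASI_IDENTIFIERS = {"zip_code", "birth_date", "gender", "job_title", "employer"}
RISK_THRESHOLD = 3  # flag datasets joining three or more quasi-identifiers

def flag_pii_risk(output_fields: set) -> bool:
    """Return True when an AI agent's derived dataset combines enough
    quasi-identifiers that it should be treated as PII and reviewed."""
    overlap = output_fields & QUASI_IDENTIFIERS
    return len(overlap) >= RISK_THRESHOLD

# Each field alone looks harmless; together they cross the review threshold.
print(flag_pii_risk({"zip_code", "birth_date", "gender", "purchase_total"}))  # True
```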
Building Internal AI Capabilities
Rather than relying solely on external AI services, Avakian strongly advocates developing internal systems built on retrieval-augmented generation (RAG). "I advocate for folks to have it internal – have your own internal ChatGPT," he suggests. This approach reduces data-leakage risk while providing employees with approved AI tools.
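For readers unfamiliar with the pattern, here is a minimal sketch of the retrieval half of such a system. TF-IDF similarity stands in for the embedding model an internal deployment would typically use, and the document store and prompt format are illustrative; the key point is that answers are grounded in approved internal documents and sent to an internally hosted model, not an external service.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in internal knowledge base; in practice this is your approved,
# access-controlled document store.
DOCS = [
    "Remote access requires the corporate VPN and hardware MFA token.",
    "Security incidents must be reported to the SOC within one hour.",
    "Production database credentials are rotated every 90 days.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(DOCS)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k most relevant internal documents for the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = scores.argsort()[::-1][:k]
    return [DOCS[i] for i in ranked]

# The retrieved passages are passed as grounded context to an internally
# hosted LLM, so sensitive questions never leave the organization.
context = "\n".join(retrieve("How do I report a security incident?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```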
Success in internal AI implementation correlates directly with data governance maturity. "The folks that have good data programs, good data quality, and solid data governance are well ahead of everyone else," Avakian observes. Organizations with poor data quality will struggle regardless of their AI technology investments.
Evolving Security Team Competencies
Security professionals must rapidly develop AI-specific skills. Avakian points to new certifications from organizations like ISACA that address AI governance and architecture. "We need cybersecurity professionals who become champions of the security parts of AI in the organization," he explains.
This skillset evolution parallels the earlier cloud transformation. Just as security teams developed cloud architecture competencies, they now require "chief AI architect" capabilities that span data analytics, AI architecture, and insider threat management.
Practical Implementation: Sandbox Security and Zero Trust
For organizations beginning AI experimentation, Avakian recommends micro-segmented sandbox environments with zero-trust access controls. These environments enable innovation while preventing experimental vulnerabilities from being introduced into production systems.
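One way to picture micro-segmentation is as a default-deny policy table consulted on every network flow. The segments and rules below are illustrative, and real enforcement belongs in firewall or service-mesh policy rather than application code; the sketch just shows the zero-trust decision logic that keeps sandbox experiments away from production.

```python
# Default-deny flow policy for a micro-segmented AI sandbox (illustrative).
ALLOWED_FLOWS = {
    ("ai-sandbox", "vector-db-sandbox"),  # experiments may touch sandbox data only
    ("ai-sandbox", "model-registry"),     # pulling approved model artifacts is fine
}

def flow_permitted(src_segment: str, dst_segment: str) -> bool:
    """Zero trust: every flow is denied unless explicitly allowed."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

print(flow_permitted("ai-sandbox", "vector-db-sandbox"))  # True
print(flow_permitted("ai-sandbox", "production-db"))      # False: no path to prod
```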
Incident response procedures also require updates. Rather than creating entirely new playbooks, Avakian suggests incorporating AI-specific scenarios into existing frameworks. "How do we incorporate an AI-type scenario and test our incident response plan?" he asks, recommending tabletop exercises similar to ransomware simulations.
The Path Forward
The AI security challenge demands immediate action, not perfect solutions. Organizations can't wait for comprehensive AI security standards to emerge. "It takes AI to fight AI," Avakian concludes, emphasizing that defensive AI capabilities are essential for survival in this new threat landscape.
Success requires balancing innovation with security through proper governance, treating AI systems as privileged users, investing in internal capabilities, and continually evolving the security team's competencies. The organizations that act now will determine whether defenders can regain the advantage in this critical technological arms race.