Why Only 10% of Enterprises Successfully Scale AI — And What the Other 90% Are Getting Wrong
Research from Forrester and Accenture presented at F5 AppWorld 2026 shows only 10% of enterprises successfully scale AI — while the other 90% remain stuck in proof of concept.

The enterprise AI conversation has shifted.
A few years ago, the question was whether to invest. Today, most organizations are already spending — sometimes heavily — and still not seeing results at scale.
That gap was the central theme at F5 AppWorld 2026 in Las Vegas this morning, where two back-to-back keynote speakers — Jeff Pollard, VP and Principal Analyst at Forrester, and Dr. Lan Guan, Chief AI and Data Officer at Accenture — laid out why AI adoption keeps stalling between proof of concept and production, and what the organizations that get it right are actually doing differently.
The number that framed both presentations: only 10% of enterprises successfully scale AI. The other 90% are stuck.
The Investment Is Real. The Results Aren't.
Start with what's already happening. According to Forrester research cited by Pollard, 50% of technology leaders in US enterprises are currently piloting AI agents and agentic AI. Another 24% have agents deployed in production. And 32% have already spent at least $500,000 specifically on AI agents and agentic AI: not on large language models, not on generative AI broadly, but on agent projects.
The money is moving. The scale isn't following.
Pollard pushed back on one of the more common dismissals in the enterprise AI conversation — the idea that AI is just more of what we've already been managing. More code, more applications, more APIs, more interconnectivity.
He acknowledged the logic but rejected the conclusion. If you had a software quality problem before AI-assisted coding made your developers 30 to 50% more productive, you now have a bigger software quality problem. If you had trouble knowing about all your APIs before agents started spawning new ones, you have a bigger invisibility problem now. More of the same, at higher velocity, with less visibility, is a qualitatively different challenge.
The reason we call them agents, Pollard said, is that we have given them agency. That's not a rhetorical point. It's an architectural one. In the old model, software executed business rules that humans designed. Agents make decisions on behalf of users, processes, and data. The threat model changes. The attack surface changes. The governance requirements change.
What the 10% Are Doing Right
Dr. Guan has worked with more than 9,000 enterprise clients on AI adoption, drawing on Accenture's practice of 45,000 AI and data practitioners. Her firm has booked $2.2 billion in AI-related work since 2023 and completed more than 11,000 advanced AI projects. She's seen what separates the organizations that scale from the ones that stall.
Three factors consistently show up among the 10% who succeed:
Ruthless Focus on Value
AI adoption in enterprises tends to be employee-driven rather than top-down. Workers start using consumer AI tools, bring the habits to work, and ideas multiply fast. Guan described an energy company in the Middle East that was receiving more than 800 net-new AI use cases from employees every month. Without a rigorous prioritization process, one that forces every submission to state its projected value, organizations end up chasing volume instead of impact.
An Intentional Business Case
Even high-value use cases can fail if the cost side isn't managed. Guan gave a direct example: a specialty retailer built an AI recommendation engine, piloted it in six stores, and got a $1.5 million cloud bill in the first month. The project had to stop, get rearchitected, and restart. AI infrastructure costs for data, compute, and talent can scale faster than the value the system generates if no one is watching the math.
Significant Change Delta
This is the one most organizations skip. Using generative AI to summarize emails, as Guan put it, gets you coffee time. The organizations that succeed pick business processes where AI drives a noticeable, measurable difference — end-to-end, not incremental. She described the approach as helping employees visualize the future state before they're asked to change how they work.
The Three Gaps Holding the Other 90% Back
For the organizations that aren't scaling, Guan identified three common failure modes.
Performance Gap
Expectations are set too high, too early. AI trained on limited POC data doesn't always perform the same way when it hits full production data volumes. Organizations that don't set realistic benchmarks, and don't have the optimization expertise to close the gap, end up declaring failure on projects that could have succeeded with better calibration.
Budget Gap
This one catches organizations off guard more than almost anything else. Token costs, data infrastructure, and AI talent add up quickly. The organizations that scale have someone watching the cost model from day one. The ones that stall often don't discover the budget problem until they're already committed.
Talent Gap
It's not just about hiring. Guan's team designed 12 different role archetypes for Accenture's own 45,000 practitioners to prepare them for AI-native ways of working. One of the most in-demand is what she calls the computational scientist, a role that requires advanced math, physics, and computational engineering as a foundation. The skill set enterprises need for advanced AI is genuinely new, and ad hoc upskilling efforts don't produce it.
The Digital Core Is Non-Negotiable
Both speakers converged on a point that gets less attention than the AI tools themselves: the quality of the underlying data and infrastructure determines whether AI works at all.
Guan described a telco client that built an AI assistant for its contact center, a legitimate, valuable use case. It kept hallucinating. When her team dug into the architecture, they found 37 versions of standard operating procedures for human agents in the system. No single source of truth. The AI didn't have a performance problem. It had a data problem.
The organizations that scale AI invest in what Guan calls the digital core: cloud, infrastructure, security, modern applications, and — critically — integrated, high-quality data. AI is powerful at connecting dots across functional silos, but only if the dots exist and are accurate. Cleaning up data architecture isn't glamorous. It's also not optional.
The Agentic Threat Picture
Pollard's portion of the keynote focused specifically on what changes when AI agents enter the picture — and the security implications that most organizations haven't fully processed yet.
The threat categories for agentic systems differ from those for traditional software: goal and intent hijacking, cognitive and memory corruption, unrestrained agency and privilege, resource exhaustion, and evasion and deception.
One example landed hard with the room. A financial services company with 500 developers had given each developer access to 12 AI coding agents. That's 6,000 entities acting on behalf of 500 people. If the organization had an identity and access management problem before, it now has a much larger one.
The underlying point matters for any organization assessing its own readiness. Agentic AI doesn't just add capability. It multiplies exposure — at whatever scale the organization is already operating.
Culture Is the Last Mile
Guan closed with two principles she returns to consistently with clients, both of which get treated as soft factors but function as hard dependencies.
The first is AI as a team sport. AI tooling is increasingly democratized. The organizations that succeed bring business users into the process — not just technical teams. Domain expertise combined with AI capability produces better outcomes than AI capability alone.
The second is trust. Enterprise AI adoption ultimately depends on a large population of employees who use the tools, believe the tools work, understand what's in them, and feel confident that the organization has made responsible choices. That trust doesn't build itself.
The timing pressure is real. Both speakers ended on the same note: the gap between organizations that are scaling and those still stuck in POC is widening, and the window to catch up is narrowing.
The formula isn't complicated. But it requires intention, discipline, and a willingness to fix the unglamorous infrastructure problems before expecting the AI investments to pay off.