The Software You Buy Isn't the Software Your People Use
- Mar 28
- 7 min read
Enterprise software adoption has always been a problem. AI made it urgent.

Here is a number worth sitting with: only one in five companies has a mature model for governing autonomous AI agents.
That's from Deloitte. Pendo CEO Todd Olson cited it at the opening keynote of Pendomonium 2026, the annual product management conference held in Raleigh earlier this month. Over 2,000 attendees showed up — the largest in the conference's history. The fact that Olson led with that statistic tells you something about where the conversation is right now.
Companies are deploying AI agents faster than they can manage them. They're spending on licenses, building on models, and shipping to employees — and they have almost no visibility into what happens next. Do people use the tools? Do the agents do what they were designed to do? Is the investment paying off?
For most organizations, the honest answer is: we're not sure.
The Adoption Problem Didn't Start With AI
Enterprises have been bad at software adoption for decades. The pattern is familiar to anyone who's worked in or around enterprise technology.
A company makes a large software investment. The vendor delivers. Implementation happens. The go-live announcement goes out. And then, quietly, the metrics don't move. A year later, half the licenses are unused. The employees who were supposed to benefit from the new system have found workarounds. The ROI the executive team promised the board never shows up.
This happens constantly. It happens at companies that are otherwise well-run and analytically sophisticated. It happens because most enterprise software contracts measure delivery rather than adoption. The vendor gets paid when the system is live, not when people use it.
AI didn't create this problem. But it made it expensive enough that it was impossible to ignore.
When a company deploys Microsoft Copilot across 50,000 employees, and the average employee uses it twice a month, that's a number someone will eventually have to explain in a board meeting. When an AI agent that cost seven figures to build is quietly abandoned by the workforce six months after launch, that's not a technology problem. It's an adoption problem. And it's a much bigger dollar figure than the unused ERP license of five years ago.
What Pendo Is Actually Building
I've covered Pendo for a while, but Pendomonium 2026 gave me a clearer picture of where the company is heading than anything I'd seen before.
Most people in enterprise technology know Pendo as a digital adoption platform — a tool for building in-app guides, tracking feature usage, and measuring whether users can find their way around a software product. That description is still accurate. It's also increasingly incomplete.
At the conference, Pendo's leadership introduced a broader framing: Software Experience Management. The idea is that the data Pendo generates — behavioral data, usage patterns, session replays, sentiment signals, predictive models — is not just useful for product teams. It's useful for any organization trying to understand whether its software investments are working.
The platform now spans five distinct capabilities that connect into something larger than the sum of its parts:
Analytics and guidance. The original Pendo business. Tracks what users do inside applications and deploys contextual guides to help them complete tasks. Still the core and still growing.
Predictive intelligence. Pendo's Predict product uses behavioral data to identify customers at risk of churning before they leave. It processes 100 million predictions per month, and Pendo uses it internally to manage its own customer base. The principle: churn should never be a surprise. If it is, you weren't watching the right signals.
Agent Analytics. The newest product and, based on everything at the conference, the one with the most forward momentum. It measures whether AI agents are being used, whether they're working, where they're failing, and whether humans are doing anything differently as a result. Fast Company named Pendo one of its most innovative companies in 2026, specifically for this product. More than 200 accounts signed up within months of launch.
CIO Command Center. Gives technology leaders visibility into software usage across the entire enterprise — not just whether licenses are assigned, but whether people are actively using the tools. If a company is paying for 5,500 licenses for a platform and only 1,800 people are actually using it, Pendo can surface that, identify the cost of unused capacity, and flag shadow apps employees have adopted on their own.
MCP integration. Pendo recently launched an MCP server that exposes its behavioral data as tools that AI systems can query directly. The growth rate — 400% week-over-week since launch — reflects how fast the market is moving toward agent-to-data integration.
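The CIO Command Center case above is, at bottom, simple arithmetic. Here is a minimal sketch of that utilization math using the article's numbers; the per-seat cost is a hypothetical figure for illustration, not from the article:

```python
# Illustrative only: the license-utilization math behind the CIO
# Command Center example. Not Pendo's API or data model.
def license_utilization(total_licenses, active_users, cost_per_license):
    """Return the utilization rate and the annual cost of unused seats."""
    unused = total_licenses - active_users
    utilization = active_users / total_licenses
    unused_spend = unused * cost_per_license
    return utilization, unused_spend

# The article's example: 5,500 licenses, 1,800 active users.
# The $600/year per-seat cost is an assumed figure for illustration.
rate, waste = license_utilization(5_500, 1_800, 600)
print(f"Utilization: {rate:.0%}, unused spend: ${waste:,.0f}/year")
# Utilization: 33%, unused spend: $2,220,000/year
```

Even at a modest assumed seat price, two-thirds of the licenses sitting idle adds up to a seven-figure line item, which is exactly the kind of number that starts a cost-rationalization conversation.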
The AI Adoption Problem, Specifically
The conference was full of product managers and technology leaders from companies trying to figure out the same thing: we've invested in AI, we've deployed AI, and we don't actually know if our people are using it or whether it's working.
Pendo's framing for this problem was sharp. There are two distinct questions most organizations are conflating:
The first is technical performance. Is the model responding correctly? What's the latency? What's the cost per token? These questions have good tooling. Every major cloud provider and most AI platforms give you observability at this level.
The second is adoption and impact. Are employees actually using the AI tools? Are they using them for what you built them for? Are they coming back after the first interaction, or giving up? And — the hardest question — is their work getting better because of it?
Most organizations have answers to the first set of questions and no answers to the second.
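The second set of questions is answerable, but only with instrumentation. As a sketch of the "are they coming back after the first interaction" question, assuming you have per-user event timestamps for an AI tool (the event shape and the one-day granularity are illustrative assumptions):

```python
# A minimal sketch of one adoption metric: what fraction of users who
# tried the tool came back on a later day? Event format is assumed.
from collections import defaultdict
from datetime import datetime

def returning_user_rate(events):
    """events: iterable of (user_id, datetime). A user counts as
    returning if they were active on more than one calendar day."""
    days_by_user = defaultdict(set)
    for user, ts in events:
        days_by_user[user].add(ts.date())
    if not days_by_user:
        return 0.0
    returned = sum(1 for days in days_by_user.values() if len(days) > 1)
    return returned / len(days_by_user)

events = [
    ("alice", datetime(2026, 3, 2, 9)),
    ("alice", datetime(2026, 3, 9, 14)),  # came back a week later
    ("bob",   datetime(2026, 3, 2, 10)),  # tried it once, never returned
]
print(returning_user_rate(events))  # → 0.5
```

The metric itself is trivial; the hard part, as the article argues, is that most organizations never capture the underlying events in the first place.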
This matters because leadership is now asking the second set. The era of AI experimentation that didn't require ROI justification is ending. CFOs are reviewing AI budgets. Boards are asking whether the investments made over the last two years are producing results. "We deployed it" is no longer a sufficient answer.
The Relationship Between Data and Adoption
One session at the conference brought this into focus in a way I found particularly useful. The Emburse team — presenting alongside Pendo's VP of Product Management — described how they transformed their customer success operation using predictive analytics.
The starting point was a problem most enterprise technology leaders recognize: their customer health data lived in too many places, their team was reactive rather than proactive, and by the time a customer churned, the signals had been visible for months. They just weren't connected.
Their philosophy, which they called "no surprises," was simple in theory and hard in practice. If a customer is going to churn, you should know it before they do. The data to predict it is almost always there. The challenge is getting it out of siloed systems and into a model that produces actionable signals rather than retrospective explanations.
Once they implemented Pendo's Predict product, the nature of the work changed. Instead of spending time figuring out which customers needed attention, the model told them. Their cost-to-serve dropped not because their team worked less, but because their work became more intentional. The same number of people handled more customers more effectively because they spent time on the right accounts at the right time.
The principle generalizes beyond customer success. Any organization deploying AI tools to employees faces the same challenge: they need to know which employees need support, which use cases are working, and where the intervention points are. That requires behavioral data. And behavioral data requires instrumentation.
The Conference Itself as Evidence
Andrew Ng, founder of DeepLearning.AI, recently wrote something that stuck with me as I was processing three days at Pendomonium:
"Relationships can be incredibly durable. In moments of uncertainty, having communities — networks of relationships — helps everyone. That's why opportunities to build relationships are so valuable and help us both get more done and protect ourselves against downside risks."
He was making a general argument for in-person gatherings in a world that increasingly defaults to remote interaction. But it applies with particular force to technology conferences right now.
The enterprise AI market is moving so fast that a company's understanding of where things are heading can shift significantly within three days. Not because of the keynotes — those are useful but rarely surprising. Because of the conversations in the hallways, at the lunch tables, and during the informal sessions where practitioners talk honestly about what's working and what isn't.
At Pendomonium this year, I met product managers from companies trying to solve the same AI adoption problems in very different contexts — healthcare, financial services, logistics, SaaS. The approaches varied. The underlying problem didn't. Nobody has fully figured out how to measure whether their AI investments are producing human behavior change at scale.
That's what makes this moment interesting. The tooling is arriving. The organizational awareness is building. The question of what enterprise AI actually produces — in terms of work getting better, decisions getting smarter, and people being more effective — is one the market is just starting to take seriously.
What to Watch
A few things worth tracking from Pendomonium 2026:
Agent Analytics will become a standard requirement. Right now, it's a competitive differentiator. Within 18 to 24 months, the ability to measure AI agent performance and adoption will be a baseline expectation for any enterprise AI deployment. The organizations building this instrumentation now will have a significant advantage in communicating ROI to leadership.
The governance gap is real and growing. One in five companies has mature agent governance. That means four in five don't. As agent deployments multiply, organizations with clear frameworks for what agents are allowed to do — and systems for auditing what they actually do — will have far fewer costly incidents than those that don't.
The CIO Command Center conversation is coming. In a cost-cutting environment, the ability to show a CIO which software licenses are actively used versus those collecting dust is a powerful entry point. The conversations that start with cost rationalization often lead to broader discussions about what the organization actually needs and what it doesn't.
Software experience is becoming a discipline. The framing of Software Experience Management that Pendo introduced at the conference reflects a real shift: there is a growing recognition that the human side of enterprise software — how people actually interact with the tools their organizations buy — is as important as the technical side.
Companies that treat this as an afterthought will continue to struggle with adoption. Companies that build systems around it will have a measurable advantage.
The software you buy and the software your people use are often very different things. The gap between them is where most enterprise technology investments quietly fail. Closing that gap is what the next few years of enterprise AI are actually about.


