The Third Door: Why AI Amplification Beats Automation-First

  • 6 min read

Most organizations are debating the wrong choice. Derek Crager, founder of Practical AI and creator of Pocket Mentor, makes the case for a third path — one where AI makes people better at their jobs instead of replacing them.



About 80% of workers today believe AI is coming for their jobs. That fear isn't irrational. Every week brings another headline about automation replacing roles, shrinking headcounts, or making entire job categories obsolete. The anxiety is real — and companies aren't doing much to ease it.

But Derek Crager thinks most organizations are facing the wrong choice. It's not humans versus AI. There's a third option, and it's the one most leaders haven't seriously considered.

"I still think there's just about 80% of the world stuck between two doors," Crager told me recently. "AI is going to replace my job, so I'm going to stay away from it. The third door is AI that thinks with humans, to amplify humans. That's the story I want people to know."

Crager is the founder of Practical AI and the creator of Pocket Mentor, described as the world's first voice-based AI mentor designed for in-ear, hands-free, real-time support on the job. He's also the author of Human First AI, a book and framework built around a single conviction: make AI serve people, not replace them.

The Right Information at the Right Time


Before Crager started thinking about AI, he spent 16 years in automotive manufacturing and another six at Amazon. It was at Amazon that the insight shaping everything he does today first took hold.

He was asked to build out a training program. What he created became the highest-rated employee training program in Amazon's history — a 92 Net Promoter Score, in a company known for its obsession with metrics. The world-class threshold is 70.

The secret wasn't sophisticated technology or elaborate curriculum design. It was a simpler principle: "Humans work best when they have the right information at the right time."

That insight didn't just produce a great training program. It became the foundation of his entire argument about AI.

"The transition into AI is that AI can allow humans to have the right information at the right time," Crager explained. "And when they have that, they're unstoppable."

The Automation Trap


The problem with most AI deployments today is that they're built around the wrong goal. Automation-first thinking treats efficiency as the primary objective and people as the primary cost. It optimizes organizations for normal days — and makes them brittle when something unexpected happens.

Crager points to a straightforward but sobering statistic from Gartner: 40% of enterprise tasks are predicted to be automated by 2027. That number isn't alarming in itself. The danger, as he frames it, is automating the wrong 40% — stripping out the human judgment, institutional memory, and on-the-ground expertise that keeps organizations functional when things go sideways.

"The technicians who could hear a bearing going bad before the sensors caught it — gone," he writes in Human First AI. "Irreplaceable."

The 2022 U.S. airline industry meltdown is his go-to example of what happens when systems optimized for normal conditions meet the unexpected. When there's no human fallback, there's no resilience.

The 70-20-10 Problem


One of the most compelling arguments Crager makes concerns how learning actually works — and how completely most organizations ignore that reality.

Research on the 70-20-10 model shows that roughly 70% of workplace learning happens on the job, in the flow of actual work. Another 20% comes from informal peer interaction — the conversation in the parking lot after class, the quick question to a more experienced colleague. Only 10% happens in formal training environments: classrooms, virtual LMS platforms, structured programs.

Most corporate training dollars go toward that 10%.

Traditional training delivers knowledge in a context where workers can't immediately use it. Ebbinghaus's forgetting curve tells us that 50% of new information is forgotten within 20 minutes, 75% within a day, and 90% by the end of a week without reinforcement.

Pocket Mentor is Crager's answer to this problem. It lives in the 70%.

The Expert in Your Ear


Pocket Mentor is a voice-based AI mentor designed to be accessed hands-free, in-ear, in real time — while a worker is actually doing their job. No screen to navigate. No app to open. No interruption to the work itself.

The core idea is straightforward: take your best, most experienced performer and put their knowledge in the ear of every worker on your team.

"Without it, knowledge is siloed in veterans' heads," Crager explains. "New workers wait for someone to have time. Errors cost money, morale, and time. With it, every worker has an expert mentor available. Problems get solved faster. Confidence builds. Knowledge scales beyond any single expert."

The voice-first design isn't just a UX preference — it's a deliberate choice rooted in how frontline workers actually operate. A field technician working on electrical equipment, an aviation mechanic monitoring line pressure, a medical equipment technician diagnosing a problem in the field — none of them can step away from the work to consult a manual or scroll through an LMS. Pocket Mentor is already FAA-certified for aviation use, underscoring the safety-critical environments in which it is being deployed.

Field technicians can simply ask what they're looking for, describe what they're up against, and get guidance in real time. No binder. No escalation. No waiting.

NeuroAgnostic Design


There's a personal dimension to how Crager built this. He found out later in life that he's autistic, has ADHD, and is dyslexic. That experience fundamentally shaped his design philosophy.

He calls it NeuroAgnostic™ design — the idea that AI tools should work for every kind of mind, not just the neurotypical majority. Frontline workers, neurodivergent thinkers, and non-traditional learners all have different strengths, different learning speeds, and different ways of processing information.

"One size fits all becomes one size fits one," he says. "Because that's where the scale is in learning."

This isn't just an inclusivity argument. It's a capability argument. Organizations leave enormous amounts of human potential on the table when they build tools designed only for one way of thinking. A voice-based mentor that meets workers where they are — on the floor, hands occupied, in the middle of the task — closes that gap.

AI Is Oxygen

One of the most useful framings from my conversation with Crager has nothing to do with frontline workers or training programs. It's about how organizations talk about AI in the first place.

"AI is a commodity today," he said. "Remember when grocery stores went to electricity? They didn't say they were an electrical company. They were still a grocery store."

His point: we don't talk about oxygen every day, but we use it. AI is heading toward the same invisible infrastructure status. The organizations that get this right will stop leading with AI as a selling point and start leading with the problems they solve — using AI as the means, not the message.

The calculator analogy lands the same way. In the early 1980s, when Casio launched the calculator watch, the debate raged for over a decade: should students even be allowed to use calculators? We know how that ended. The calculator didn't make students worse at math. It freed up cognitive load for higher-level thinking. AI in the workplace is the same argument, just with higher stakes.

The Human First AI Manifesto

Crager has codified his framework into a 12-article manifesto that outlines what a genuinely human-first AI deployment looks like in practice. A few of the principles that stand out:

  • Keep humans accountable and visible — every key process has a named human owner. AI assists, but responsibility stays human.
  • Preserve and scale knowledge — expertise is an asset, not a cost. Use AI to capture, protect, and deliver knowledge at scale.
  • Measure capability, not just cost — track time to competence, exception handling, and recovery rates. Capability compounds.
  • Design for every performer — build tools that support different learning styles, speeds, and strengths.
  • Build with, not for — co-design AI solutions with the people who actually do the work.

The manifesto is available at HumanFirstAI.net, where Chapter 1 of his book is also free to read.


The Case Worth Making


What strikes me about Crager's approach is how grounded it is. He's not dismissing automation or pretending AI doesn't change the nature of work. He's making a more nuanced argument: that the organizations that come out ahead will be the ones that use AI to grow what's already there — the skill, judgment, and institutional knowledge their people carry.


"AI isn't stealing our humanity," he writes. "It's revealing it."

The five human strengths he argues AI can never replace — empathy, creativity, judgment, ethics, and intuition — aren't abstractions. They're exactly the capabilities that break down when organizations over-automate and then face something they didn't plan for.

The third door isn't a compromise between automation and resistance. It's the only path that builds organizations capable of handling what comes next.


© 2025 by Tom Smith
