Dustin's AI Lab

90% of People Are Two Years Behind: The Real AI Adoption Gap

I demonstrated Claude Code to friends and they were all amazed. Then I gave them free Pro trial passes — three of them. Not a single one was used. That told me a lot about how AI adoption actually works.


Before the New Year I demonstrated Claude Code to several friends, not for coding but for office work: organizing meeting notes, batch-processing data, automating repetitive tasks. Every one of them said some version of “that’s insane.”

Then I handed out the free trial links. One week of Pro, free, three passes. I was initially worried three might not be enough.

All three are still unused.

The Gap Between Amazement and Action

I spent a while thinking about why. It wasn’t that they didn’t believe in the tool: they watched it work in front of them. It wasn’t cost, either: the trials were free.

The reason is the inertia of existing habits. The old workflow “isn’t broken,” so it doesn’t need fixing. Even when a better tool is right there, the effort required to switch and actually get fluent is enough to keep most people from moving.

From watching people around me, 90% of people are behind technology enthusiasts by about 1.5 to 2 years on AI adoption, sometimes more. And this gap is very hard to close — as enthusiasts move forward, regular users also move forward, just at a slower pace. The gap stays.

AI Adoption Needs to Be Top-Down

Someone asked me whether they could build their own inventory management system using Claude Code. I didn’t answer directly — I asked a few questions first: Is this for personal use or for other people? Are you worried about sending the data to AI? What are you managing it with now?

Working through those questions usually surfaces the answer on its own.

A lot of people imagine AI adoption as: “I’ll introduce a new tool and my colleagues will naturally follow.” Reality is that front-line workers don’t have incentives to optimize their own workflows. Most are completing defined tasks in defined ways, nine to five.

The route that works for organizational AI adoption is top-down — leadership says to use it, provides budget, sets expectations. Bottom-up has a very low success rate.

Outside the tech bubble, fewer people know Claude Code exists than we’d guess.

Learning Is Triggered by Obstacles, Not Curiosity

Looking back at how I learned Claude Code, the whole arc was obstacle-driven. Every new feature I picked up had a specific pain point that triggered the learning.

Not “this feature looks interesting” — more like “I hit a problem, and this feature solves exactly that problem.” That’s when genuine learning happens.

Example: conversations got too long, the model started forgetting earlier context, so I learned about CLAUDE.md and document-based context management. Without hitting that obstacle, I might never have discovered it.
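As a rough illustration of what that document-based context looks like, here is a minimal sketch of a project-level CLAUDE.md. Everything in it is an invented example (the file names, directories, and conventions are hypothetical, not from my actual setup); the point is only the shape: durable facts the agent should reload every session instead of re-deriving from chat history.

```markdown
# Project context for Claude Code
<!-- Hypothetical example: all paths and conventions below are invented -->

## What this project is
Internal tool for batch-processing weekly meeting notes into summaries.

## Conventions
- Raw notes live in `notes/`, one Markdown file per meeting.
- Generated summaries go to `summaries/`, same filename, date-prefixed.

## Things to remember between sessions
- Some older notes are not UTF-8; check encoding before processing.
- Never edit files under `notes/` directly; they are the source of record.
```

Because the agent reads a file like this at the start of each session, the long-conversation forgetting problem shrinks: the context that matters lives in the document rather than in an ever-growing chat transcript.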

For anyone trying to introduce AI tools to others, this is a critical insight: demonstrating “what it can do” is less effective than finding someone who’s already experiencing “what they’re struggling with.” The latter is the actual entry point.

The Real Barrier Isn’t Technical

Vibe coding marketing (“build an app without writing code”) has convinced some people that AI tools have no learning curve.

But if a CEO actually tried to hire on that basis, the interview would still cover: What’s the design logic here? What’s the user research? Cost-benefit analysis? How are you handling security? Privacy and data governance? Risk testing? These are the real barriers; technical skill is only a small part of them.

Claude Code’s most important capability isn’t prompting; it’s the agent system. But how far that agent can go depends on how complete your documentation is, how clear your rules are, and how structured your knowledge base is. Someone who writes sloppy prompts but has solid “infrastructure” will outperform someone with beautiful prompts and nothing behind them.

Your attention to detail and your logic determine how high the agent can go. Models will keep improving, but if you’re the weak link, even the best model gets dragged down.

What the Gap Actually Means

Back to those three passes. I don’t feel disappointed about them anymore — they’re a useful calibration.

This tool’s reach is still largely within the tech enthusiast circle. Reaching the average office worker will be a slow process, with countless “I’ve heard of it but don’t use it” stages in between.

During that transition, the people who build deep fluency now will develop work habits and knowledge systems that are very hard for late adopters to replicate quickly.

That gap is the thing worth paying attention to.

