Dustin's AI Lab

Critical Thinking in the AI Era: Playing Dumb, the Feynman Method, and What Future Talent Looks Like

AI won't replace your thinking, but it will expose the fact that you can't think. From rediscovering how to learn through AI interactions, to the growing demand for "AI collaboration skills" in exams and workplaces — observations from an educator.


AI can help you find answers, but it can’t find the questions you didn’t know to ask. Through working with AI, I rediscovered something: the ability to think is the scarcest resource of our time.

Play Dumb, Ask Relentlessly, Fear No Authority

While building AI tools, I accidentally caught a glimpse of my former straight-A student self.

When I interact with AI, I put myself in the learner’s seat. And in that process, I noticed I instinctively keep pressing — patiently reading its responses, spotting the flawed parts, then drilling deeper, asking it to explain things every which way — even asking it to explain like I’m a middle schooler.

The thing is, I already know what the answer is. I know exactly where its logic went wrong. But it’s precisely this attitude of playing dumb, relentlessly questioning, peeling back layers, and refusing to bow to authority that gets AI to produce something actually correct.

Looking back, it hits me. So many people, throughout their education, just accept whatever the teacher says as gospel. They don’t push back. They don’t say “I don’t get it, can you explain it a different way?” And some teachers, deep down, probably know they got something wrong, but they can’t bring themselves to correct it — so they hide behind authority to save face. The teacher-student relationship becomes one-way indoctrination instead of mutual sharpening. We end up as students who only listen and never ask. How could learning outcomes possibly be good?

But now we have AI. It will never get impatient. It never gets tired. You can ask it to explain something 10, 20 different ways and it won’t bat an eye. It’ll even openly admit when it made an error in its analysis. As long as you’re willing to change your attitude — willing to chase answers relentlessly and patiently examine its logic — you’ll find yourself learning more than you ever did in all your years of schooling.

Remember: play dumb and ask like crazy, listen patiently, peel back the layers — and above all, fear no authority.

The Feynman Method + AI = The Ultimate Review Technique

What does a traditional study review look like? Finish a problem set, check the answers, glance at the explanation for the ones you got wrong, think “oh I get it,” flip the page. Two months later you take the same test. Same score.

I eventually realized that wasn’t reviewing at all — that was just “looking at answers.” Reading the explanation and feeling like you understand is an illusion. The only thing you actually gained was a hit of dopamine from your brain’s reward system, no different from scrolling short-form videos.

I later found a method that turned out to align perfectly with the Feynman technique. The core of the Feynman method is simple: explain the idea in plain language to someone with zero background. If you stumble while explaining it, you don’t actually understand it.

What I do is talk through my reasoning out loud and record it. Starting from reading the problem, step by step — how I interpret it, how I eliminate options, why I pick this answer. When I hit a point where I can’t keep going, that’s exactly where the problem is.

Then I take that exact sticking point to AI. No need to start from scratch — just zero in on the spot where you got stuck. AI will break down the logic at that breakpoint, give you different angles, until something finally clicks.
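As a loose illustration, that "zero in on the sticking point" step can be sketched as a small prompt builder. Everything here — the function name, the prompt wording, the list of angles — is my own invention for the sketch, not a tool from this post:

```python
# Sketch: build a series of "explain it again, differently" prompts
# that all drill into one exact breakpoint in your reasoning.
# Names and wording are illustrative assumptions, not a real tool.

ANGLES = [
    "Explain the logic at this exact step in plain language.",
    "Explain it again a completely different way, as if to a middle schooler.",
    "Walk me through a tiny concrete example that isolates just this step.",
]

def sticking_point_prompts(problem: str, stuck_at: str) -> list[str]:
    """Return prompts that each attack the same breakpoint from a new angle."""
    context = f"Problem: {problem}\nI got stuck here: {stuck_at}\n"
    return [context + angle for angle in ANGLES]

prompts = sticking_point_prompts(
    "Why does eliminating option C leave B as the answer?",
    "I can't justify why C is wrong.",
)
for p in prompts:
    print(p)  # send each to the AI in turn, until one explanation clicks
```

The point of the structure: every prompt repeats the same narrow context, so the AI keeps attacking the one breakpoint instead of re-explaining the whole problem from scratch.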

This is worlds apart from reading answer explanations. An answer key tells you the answer is B. It doesn’t tell you at which step you started going off track.

Here’s my analogy: you’ve just joined a professional sports team. You study the textbook pitching form, record your own motion, and compare the two frame by frame in slow motion. That’s how real, solid improvement happens.

Go back and listen to your own recording. You’ll notice plenty of spots where your explanation stutters, where things are vague and lack logic, where your thinking jumps around. Those spots are exactly what you need to study. Learn it again, re-record, re-explain — until the whole thing flows smoothly. That’s the full Feynman method cycle.

The Future Doesn’t Need Knowledge — It Needs Critical Thinking

Traditional business school jobs are shrinking fast. Data Analysts are getting hit first. Traditional Marketing and HR roles are in the danger zone too. Employers and universities are increasingly treating “how to collaborate with AI” as a key competency to evaluate.

But if you look at today’s standardized tests, the direct relevance to AI skills is actually pretty low. These exams are destined to lag behind the times. If they don’t reform — if they just keep making tests shorter and easier in a bid to poach competitors’ market share — they’ll eventually become irrelevant.

I took stock of which skills have the highest relevance in the AI era:

Data sufficiency — helps you provide adequate information in your prompts when collaborating with AI. You need to know what information matters and what’s missing.

Critical reasoning — helps you verify when AI draws overly optimistic or pessimistic, biased conclusions. AI will confidently spout nonsense with a straight face. You need the ability to spot it.

Integrated reasoning and chart interpretation — requires cross-referencing large amounts of data to verify whether AI’s output is grounded in facts or hallucinated from thin air.

What’s relatively less important? Long-form reading comprehension — you can use AI to summarize and do Q&A on long texts now. Pure math computation — current models can call tools automatically; calculation isn’t the bottleneck.

So here’s my bold prediction: future exams and competency assessments will shift toward “AI collaboration scenarios.” They won’t test how much knowledge you’ve memorized. They’ll test whether you can give AI the right instructions in a complex situation, then critically evaluate what it produces.

Focus on the Product, Not the Competition

Someone once asked me: “When competitors keep coming after you, how do you hold up?”

Honestly, I never felt like I was “holding up.” Because if I spend my time fighting them and comparing myself to them, that means I care more about the competition than I care about my users.

I spend my time building the product. Tell me that’s not way more meaningful than obsessing over what the competition is doing.

This attitude is even more relevant in the AI era. Instead of anxiously wondering whether AI will replace you, spend your time figuring out what you can do that AI can’t. Instead of stressing that other people use AI faster than you, spend your time thinking about what you want to build with AI.

The ability to think isn’t a talent — it’s an attitude. Playing dumb and asking relentlessly is an attitude. The Feynman method is an attitude. Critical reasoning is an attitude. Focusing on the product is an attitude.

AI won’t replace your thinking. But it will expose the fact that you can’t think.


