10 Minutes of AI Use Reduces Critical Thinking, Study Shows


I’ve spent three decades as a science communicator, and I want to break down a multi-institution study that looks at how even brief use of AI chatbots can mess with our persistence and learning. Researchers from Carnegie Mellon, MIT, Oxford, and UCLA ran three experiments with hundreds of paid participants.

Some folks got to use an AI assistant that could outright solve tasks such as simple fraction problems or reading-comprehension questions. When the AI was suddenly taken away, those who’d leaned on it were more likely to give up or get answers wrong compared to people who hadn’t used it at all.

So, sure, AI can give you a quick productivity boost. But it might chip away at the deeper thinking and persistence we actually need to learn and get better over time.

Study highlights: AI assistance, persistence, and learning

In these experiments, participants tackled a series of problems. They were randomly assigned either to get AI help or to go it alone.

The AI ran in autonomous mode and could solve some tasks outright. But here’s the kicker—when the AI help vanished, those who’d relied on it just didn’t stick with the problems or perform as well as the others.

Turns out, even a little bit of AI-assisted success can leave people with shaky confidence in their own problem-solving if the tech suddenly disappears.

Why persistence matters for learning

Persistence—the grit to wrestle with tough problems and keep trying after setbacks—is at the heart of real learning. Michiel Bakker from MIT, one of the study’s authors, points out that it’s not just about getting things done faster.

It’s about how we respond when things get hard, and how our habits form around tools that sometimes just hand us the answer. If AI keeps solving everything for us, do we slip into a “just get the answer” mindset that kills our long-term growth?

It’s an open question: Can we build AI tools that nudge us to think, struggle, and try different strategies, instead of just spitting out the right answer every time?

Implications for AI design in education and work

The researchers say we need to tread carefully when bringing AI into classrooms or workplaces. It’s all about finding a balance between getting things done and actually building independent thinking skills.

If tools make things too easy or skip the hard parts, people miss out on practicing strategies, checking their own reasoning, and building that self-awareness of how they learn.

Practical design recommendations

To help people grow real skills over time, the study points to a few ideas for AI tool design:

  • Coaching and scaffolding over direct answers: Instead of just giving the answer, tools should guide with hints, step-by-step prompts, and explanations.
  • Adaptive difficulty and reflective prompts: Systems could ramp up the challenge or ask users to explain their thinking before showing the solution.
  • Transparency and error visibility: AI should admit when it’s unsure or might be wrong, so users learn to check and debug results.
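To make the first two recommendations concrete, here’s a minimal sketch of what “coaching over direct answers” plus a reflective prompt could look like in code. Everything here is illustrative and hypothetical (the class name, hint ladder, and reflection gate are my own, not from the study): the tutor serves escalating hints and requires the learner to explain their reasoning before it will reveal the solution.

```python
from dataclasses import dataclass

# Hypothetical sketch: an AI tutor that scaffolds instead of answering.
# It hands out hints in order, then demands a self-explanation before
# revealing the solution. Names and structure are illustrative only.

@dataclass
class ScaffoldedTutor:
    hints: list        # ordered from gentle nudge to near-solution
    answer: str
    _next_hint: int = 0
    reflected: bool = False

    def ask_for_help(self) -> str:
        """Return the next hint rather than the answer."""
        if self._next_hint < len(self.hints):
            hint = self.hints[self._next_hint]
            self._next_hint += 1
            return hint
        return "No hints left. Explain your reasoning, then request the answer."

    def explain(self, reasoning: str) -> None:
        """Reflective prompt: the learner must articulate an attempt."""
        self.reflected = bool(reasoning.strip())

    def reveal_answer(self) -> str:
        """Only give the answer after all hints and a reflection."""
        if self._next_hint < len(self.hints) or not self.reflected:
            return "Try the remaining hints and explain your thinking first."
        return self.answer


tutor = ScaffoldedTutor(
    hints=["What do the denominators have in common?",
           "Rewrite both fractions over 6 before adding."],
    answer="1/2 + 1/3 = 5/6",
)
print(tutor.ask_for_help())   # first hint, not the answer
print(tutor.reveal_answer())  # refused: hints and reflection remain
tutor.ask_for_help()
tutor.explain("I converted both fractions to sixths and added the numerators.")
print(tutor.reveal_answer())  # now the solution is shown
```

Adaptive difficulty would bolt onto the same shape: track how many hints a learner needed and adjust the next problem accordingly, rather than letting every session end with a free answer.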

Some experts also think dialing down the AI’s urge to flatter or always agree with users could help people think more critically. A human-centered approach means making AI more like a teacher and less like a shortcut machine.

Risks of agentic AI and real-world concerns

There’s a bigger worry brewing about AI systems that act on their own and can fail in unpredictable ways. When that happens, people lose the ability to spot the failure or even understand what went wrong—especially if they’re making important decisions based on AI output.

One of the study’s authors shared a personal story: an AI-generated command sequence left their computer unbootable. It’s a real reminder that handing off decisions to AI can backfire when the tech slips up or doesn’t have enough guardrails.

Responsible deployment and ongoing evaluation

Moving forward, we need to introduce AI into schools and workplaces thoughtfully, always checking how it affects the basics of learning and thinking. Designers and policymakers ought to put human skill-building first, create ways to avoid overdependence, and keep tabs on whether people are actually learning in the long run—not just getting more done in the short term.

Conclusion: design for learning, not just speed

This research from CMU, MIT, Oxford, and UCLA points to a tricky conflict in how we use AI. Sure, AI can help us get tasks done faster, but it might chip away at our ability to learn deeply and solve problems on our own.

For folks building, teaching with, or deploying AI, the takeaway’s not so simple. Maybe we need a mix of guidance, thoughtful oversight, and tools that actually help us think—rather than just do the thinking for us.

 
Here is the source article for this story: Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows
