New AI Divide Leaves Most People Behind, Researcher Says


This post pulls together Vivienne Ming’s sharp critique of how people use AI today. It also introduces hybrid intelligence and offers practical steps for making human-AI collaboration genuinely boost our thinking, something organizations like ours, which lean on science and long-term cognitive skills, should care about.

Understanding hybrid intelligence: AI as a cognitive partner

Hybrid intelligence is about humans and AI teaming up to surface ideas and challenge assumptions. It’s a kind of collaborative cognition that creates productive friction.

Ming argues the real issue isn’t AI itself, but how we choose to use it. In her work with teams using Polymarket data, she noticed most people leaned on AI either to outsource answers or just to confirm what they already believed.

But there was a smaller group—let’s call them the outliers—who treated AI as a cognitive partner. They used it to press for counterarguments and dig into deeper reasoning, not just rubber-stamp their hunches.

This approach isn’t about letting software replace human judgment. Instead, it reframes AI as a collaborator that invites critique and pushes us to question ourselves.

Ming points out that hybrid intelligence really shows up when people push AI to explain how its conclusions could be wrong. That’s when thinking gets more robust and nuanced, and honestly, a bit more interesting.

What Ming observed in the experiments

In Ming’s experiments, about 90–95% of folks let AI do the heavy lifting or used it to confirm their own assumptions. Only a small minority—maybe 5–10%—acted like “cyborgs,” treating AI as a collaborative partner instead of just a tool.

These collaborators used AI to surface counterarguments and spot hidden biases in their own reasoning. Ming calls this hybrid intelligence: a genuine shift in thinking that emerges from interacting with AI, not just from adding machine power to human work.

She stresses that pulling off hybrid intelligence isn’t about having the fanciest AI model. It hinges more on human traits like curiosity, intellectual humility, perspective-taking, and being okay with uncertainty.

So, adopting AI is as much about our mindset as it is about the tech itself.

Why this matters for the workplace and learning

This stuff isn’t just for labs and think tanks. Ming warns that if we rely on AI the way we rely on GPS, it might solve quick problems but slowly chip away at our long-term cognitive skills if we stop thinking critically.

In fast-paced workplaces that crave speed, there’s a real risk that people just accept AI output without question. That’s how you end up with what Ming calls AI slop—bland, low-value work.

When AI services go down or hit limits, it’s already obvious who’s been outsourcing their thinking. Those folks struggle with tasks that used to demand independent judgment.

Ming’s message? If we want to keep our minds sharp and keep innovating, we’ve got to use AI to think with us, not for us.

Strategies to cultivate effective human-AI collaboration

Building robust hybrid intelligence takes deliberate practice. You also need thoughtful organizational design and a culture that actually rewards critical thinking.

Here are a few steps that can nudge teams from just using AI passively to actually working with it in a productive, engaged way.

  • Design prompts that force explanation — make the AI justify its conclusions and show its uncertainty ranges.
  • Prompt for counterarguments — ask the AI to bring up alternative hypotheses and point out possible biases.
  • Encourage cognitive humility — set up psychological safety so people feel okay challenging the AI and each other.
  • Invest in critical thinking training — help everyone get better at reasoning, evaluating evidence, and handling uncertainty.
  • Balance speed with scrutiny — put in checks that let humans slow things down and double-check decisions.
  • Governance that tracks overreliance — keep an eye on decision quality and notice when AI starts replacing, instead of supporting, human judgment.
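To make the first two strategies concrete, here is a minimal sketch of what "prompts that force explanation" and "prompts for counterarguments" might look like as reusable templates. The function names and wording are illustrative assumptions, not anything from Ming's work; any team would adapt the phrasing to its own domain.

```python
# Hypothetical prompt templates for treating AI as a thinking partner.
# Names and wording are illustrative, not drawn from Ming's experiments.

def explanation_prompt(claim: str) -> str:
    """Ask the model to justify a conclusion and expose its uncertainty."""
    return (
        f"Here is a conclusion: {claim}\n"
        "Explain step by step how you would reach it, "
        "state your confidence as a rough percentage, "
        "and list the assumptions it depends on."
    )

def counterargument_prompt(claim: str, n: int = 3) -> str:
    """Ask the model to argue against the user's own position."""
    return (
        f"I believe the following: {claim}\n"
        f"Give the {n} strongest counterarguments, "
        "and point out any biases in how I have framed the question."
    )

if __name__ == "__main__":
    print(explanation_prompt("Remote teams ship faster than co-located ones"))
    print(counterargument_prompt("Remote teams ship faster than co-located ones"))
```

The point of templates like these is that the friction is built in: instead of asking the AI for an answer, you ask it to show its reasoning and attack your own.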

Vivienne Ming’s work asks us to rethink how we bring AI into our teams. Shouldn’t it make us sharper, not just faster?

Here is the source article for this story: Most people are on the losing side of a new AI divide, researcher says
