
AI Isn’t Making Us Dumber — We’re Using It at the Wrong Time

AI didn’t enter our lives as a dramatic takeover. It slipped in as a helper.

Draft this. Summarise that. Check my logic. Improve my tone.

But as generative AI becomes embedded in how we learn and work, researchers are beginning to ask a deeper question — not about productivity or plagiarism, but cognition itself.

When we let AI think for us, what happens to our thinking skills — and our brains?

Recent studies suggest the answer depends less on whether we use AI — and more on when we use it.

The Brain on AI: Less Work, Less Engagement

Earlier this year, researchers at the Massachusetts Institute of Technology (MIT) ran an experiment that went beyond surveys or self-reports. They measured brain activity directly.

Using electroencephalography (EEG), the team recorded neural signals from participants as they wrote essays under different conditions. Some worked unaided. Others used ChatGPT for support — from summarising prompts to generating and refining ideas.

The results were striking.

Participants who relied on AI from the outset showed reduced activity in brain networks linked to deep cognitive processing. Later, when asked about their essays, they struggled to quote or recall their own arguments — even though the writing itself appeared competent.

The researchers warned of a “pressing matter” around potential declines in learning skills if AI is used as a cognitive substitute rather than an aid.

At first glance, the conclusion seemed bleak: AI makes thinking easier — and that might be the problem.

But buried in the data was a crucial detail that changes the entire narrative.

The Group That Broke the Pattern

One group in the MIT study was required to complete the task without AI first. They struggled, planned, wrote, and wrestled with the problem unaided. Only later were they given access to generative AI.

When they used it, something unexpected happened.

Their neural activity increased.

Instead of offloading thought, these participants treated AI as a high-level editor — using it to refine structure, challenge phrasing, and polish arguments they had already formed. The heavy thinking had already occurred. AI amplified effort instead of replacing it.

In effect, they turned the shortcut back into a tool.

This “second group” reveals that the real issue isn’t AI itself, but premature delegation. When AI enters before mental struggle, it suppresses cognition. When it enters after, it can deepen understanding.

The difference is sequencing — not capability.

Confidence, Convenience, and Cognitive Drift

This pattern shows up beyond the classroom.

A separate study by Carnegie Mellon University and Microsoft examined how white-collar workers use AI tools like Copilot. Surveying hundreds of professionals and analysing nearly 1,000 AI-assisted tasks, researchers found that higher confidence in AI correlated with lower critical engagement.

The more people trusted the tool, the less they questioned its output.

While AI improved efficiency, it often replaced verification, reflection, and independent judgment — a trade-off that could lead to long-term deskilling if left unchecked.

In short: AI didn’t make people worse at their jobs. It made them less mentally involved in doing them.

Schools Are Already Feeling the Tension

Among students, the cognitive effects are even more visible.

A survey by Oxford University Press found that six in ten schoolchildren felt AI had negatively impacted their academic skills. At the same time, nine in ten said it helped them develop at least one skill, such as revision or creativity.

The contradiction reveals a nuanced reality. AI can support learning — but it can also short-circuit it.

Dr Alexandra Tomescu, a generative AI specialist at OUP, says many students benefit from AI but want clearer guidance. Used well, it can support problem-solving. Used poorly, it makes work “too easy” — stripping away the effort that leads to understanding.

Better Outputs, Worse Learning?

Professor Wayne Holmes of University College London warns that education may be drifting toward a dangerous equilibrium: polished work, weaker foundations.

He points to evidence of cognitive atrophy in other AI-assisted professions. In radiology, for example, AI tools have improved diagnostic accuracy for some clinicians while degrading it for others. A Harvard Medical School study found that once professionals defer judgment to machines, regaining independent expertise isn’t guaranteed.

“The outputs are better,” Holmes argues, “but the learning underneath is worse.”

This is the real risk of generative AI in education — not cheating, but outsourced cognition. Students may submit strong work without building the mental frameworks that education is meant to provide.

Even AI Makers Are Cautious

AI companies themselves acknowledge the risk.

OpenAI, whose ChatGPT now serves hundreds of millions of weekly users, argues that the tool should act as a tutor — not an answer engine. Features like “study mode” are designed to break problems down rather than solve them outright.

The goal, they say, is scaffolding: helping users think, not bypassing thinking.

Critics remain unconvinced. There is still no large-scale independent evidence proving that generative AI improves learning outcomes over time — or that it is cognitively safe at scale.

This Is Not a Calculator Moment

Comparisons to calculators or spellcheckers fall short.

Those tools automate execution. AI automates reasoning.

It summarises, prioritises, reframes, and decides — occupying the same mental territory humans use to form judgment. That makes it less like a tool and more like a cognitive partner — one that never tires, but never truly understands.

The MIT “second group” suggests the future isn’t about banning AI or embracing it blindly. It’s about designing friction back into the process.

Struggle first. Delegate later.
Think, then refine.
Model understanding before automation.

The real question is no longer “Does AI make us think less?”

It’s “At what point do we hand over the thinking?”

Because that moment may determine whether AI becomes a cognitive prosthetic —
or a slow, invisible eraser of human thought.

