How AI Is Making Us Stupid – But Smarter on Paper

Is AI making us dumber — or are we just using it the wrong way?

Artificial intelligence isn’t necessarily making us less intelligent on its own, but the way we’re choosing to use it may be quietly affecting our ability to think deeply, solve problems, and retain information.

As more tasks like writing, researching, and decision-making are handed over to AI, we may be gradually reducing the mental effort we put into learning and understanding.

Over time, this can lead to a subtle decline in critical thinking and memory — not because AI is harmful, but because it makes it easier to skip the thinking process entirely.

Apps like ChatGPT are writing our emails, generating headlines, summarizing articles, and even handling parts of our jobs. And while this tech might feel revolutionary, maybe even liberating, I can't help but ask: what's the cost?

Is this convenience coming at the expense of something deeper — like our ability to think, remember, and solve problems?

From what I’ve read, experienced, and now researched, there’s growing evidence that AI is making us stupid. Not because it’s evil. But because it’s making it far too easy to avoid thinking altogether.

In this article, I’ll walk you through:

  • What the science says about AI and cognitive decline
  • How ChatGPT and other tools are silently rewiring our brains
  • Real-world consequences (like memory loss, overtrust, and laziness)
  • What we can do to stay sharp while still using AI

Cognitive Offloading – The First Sign of Trouble

Let’s start with a concept called cognitive offloading. It’s not new, but AI has put it on steroids.

What is it?

Cognitive offloading is when we use tools or external devices to help us remember, plan, or solve problems. A calculator offloads math. GPS offloads navigation. ChatGPT? It offloads critical thinking.

Why this matters:

  • Your brain works like a muscle: when you stop exercising certain skills, they weaken.
  • Relying on AI to write, search, or explain everything means you engage less with the actual thinking process.
  • Over time, your brain stops “training” those skills — and they atrophy.

Real-life examples:

Task | Without AI (Brain Use) | With AI (Offloaded)
Writing an essay | Planning, argument structure, recall | Typing a prompt, copy/paste
Searching for info | Evaluating multiple sources, filtering bias | Reading a pre-packaged summary
Decision making | Weighing pros/cons, risk assessment | Following an AI "suggestion"

Cognitive offloading isn’t inherently bad. But overuse creates a dependence. When everything becomes automatic, your ability to think independently starts to decline.

The MIT Study That Blew My Mind

I want to talk about a study I think everyone should read — it’s called Your Brain on ChatGPT: Accumulation of Cognitive Debt from MIT Media Lab.

This wasn't some surface-level analysis. The researchers used EEG recordings to measure how people's brains behaved while writing essays, with and without AI.

Key takeaways:

  • The Brain-only group had the strongest neural activity
  • The Search Engine group showed moderate engagement
  • The ChatGPT group had the weakest brain signals

Let that sink in. Using AI measurably dampened brain activity, especially in regions tied to memory and decision-making.

Here’s what the data showed:

Group | Memory Recall | Brain Activity | Essay Ownership
Brain-only | High | Strong | High
Search Engine | Medium | Medium | Medium
ChatGPT | Very low | Weak | Low

One thing that stood out to me was this: people in the AI group couldn’t quote their own essays minutes after writing them. They didn’t remember what they wrote because they didn’t really write it. That’s not just offloading — that’s outsourcing your brain.

It gets worse in the follow-up sessions: the group that started with ChatGPT performed worse when later asked to write without it. They couldn't switch gears easily; their brains had already adapted to letting the tool do the work.

Overtrust in AI Is Replacing Critical Thinking

We like to think we’re rational decision-makers. But most of us are just looking for shortcuts.

And AI is the perfect shortcut — it sounds confident, it gives answers fast, and it rarely tells you, “I don’t know.”

The problem?

We trust it even when it’s wrong.

This is known as automation bias: the tendency to trust an automated system's output over human judgment, even when that system is known to be flawed.

Examples from studies:

  • A 2023 study from the University of Georgia found that 67% of people followed incorrect investment advice from AI, even after being told it had a 40% error rate.
  • In healthcare simulations, doctors were more likely to accept an AI diagnosis, even when it contradicted their own judgment.

Why it matters:

  • You stop double-checking.
  • You stop learning how to evaluate information.
  • You lose the instinct to challenge assumptions.

As someone who uses AI tools daily, I’ve caught myself doing this. I’ll ask ChatGPT for ideas and just accept the first few without thinking critically. That’s scary — because if I’m doing that, so are millions of others.

From Search to Summary – The Rise of Passive Learning

We used to search for answers.

Now we prompt for them.

The shift from search engines to language models seems small, but it's fundamentally changing how we interact with information.

With a traditional search engine:

  • You enter a keyword
  • You browse multiple sources
  • You filter, compare, and think critically

With ChatGPT:

  • You type a question
  • You get a single, confident-sounding summary
  • You often don’t question it

This reduces what researchers call lateral thinking — the ability to explore different perspectives, connect unrelated ideas, and go deeper.

AI gives you an answer, not a path. And that’s the difference between learning and consuming.

The Rise of “AI-Induced Laziness”

There’s another side to this I’ve noticed personally and through others: mental laziness.

When a tool does everything for you, the motivation to try disappears. Why wrestle with ideas, structure your thoughts, or recall facts when AI will do it in seconds?

Here’s where it shows up:

  • Students copying ChatGPT-generated essays
  • Professionals letting AI draft everything from emails to pitches
  • People skipping books entirely and just asking AI for summaries

Real-world stats:

  • Over 51% of students have used ChatGPT to write assignments (Intelligent.com, 2023)
  • AI-generated resumes were found to be 90% inaccurate or exaggerated (HBR, 2024)
  • Productivity increased short-term, but originality and depth dropped by 30% (MIT Sloan, 2024)

When used improperly, AI isn’t helping us do more — it’s helping us do less and pretend we’re doing more.

AI’s Effect on Memory and Learning

Let’s talk about memory — something I didn’t expect AI to affect, but now it’s clear.

Studies have shown that when people use search engines, they remember where to find info, not the info itself. This is called the Google Effect.

With AI, it’s worse — you don’t even see the sources. You just get answers.

The MIT study confirmed this: the AI group showed weaker memory encoding, even when their essays scored well.

In other words, it looked like they were learning — but they weren’t retaining anything.

Over time, this can seriously erode your long-term memory, your ability to make connections, and even your sense of intellectual ownership.

The Illusion of Intelligence

AI often sounds smarter than it is. That’s part of its charm — and part of the trap.

It’s eloquent. It uses the right tone. It throws in stats. But under the hood, it’s just predicting words. There’s no understanding, no critical filter, no moral compass.

So when people start relying on AI to sound smart, they’re mimicking intelligence, not building it.

What happens next?

  • Students can’t explain essays they “wrote”
  • Employees submit reports they didn’t create
  • Creatives start recycling AI ideas that feel new — but are just mashed-up old content

This creates a feedback loop of mediocrity, where no one is thinking deeply — they’re just formatting what the machine gives them.

So… Is AI Making Us Dumber?

Here’s my honest opinion: AI isn’t inherently making us stupid. But the way we’re using it absolutely is.

It’s not about banning AI. It’s about understanding what it’s replacing.

When AI replaces repetition or low-value tasks, great. That frees up your time to think harder.

But when it replaces:

  • Your ability to think through problems
  • Your habit of seeking truth
  • Your memory
  • Your voice

Then we’ve got a serious problem.

I believe the next five years will split people into two camps:

Group | Behavior | Outcome
Passive Users | Accept AI output blindly | Shallow thinking, decline in originality
Active Users | Use AI as a tool, not a crutch | Enhanced thinking, better performance

How to Stay Smart While Using AI

If you’re like me — and you use AI a lot — here’s how to avoid becoming mentally lazy.

Use AI after you think

  • Try answering the question yourself first.
  • Then use AI to compare, not create.

Always verify

  • Fact-check key points, even when they sound right.
  • Use trusted sources, not just whatever AI gives you.

Practice without AI

  • Write by hand.
  • Memorize things.
  • Challenge yourself to think through problems without tools.

Treat AI like a junior assistant, not a senior partner

  • It helps, but it shouldn’t lead.
  • You’re responsible for the outcome — not the bot.

Conclusion

AI is one of the most powerful tools we’ve ever created.

But tools don’t just change what we do — they change who we are.

And right now, AI is making it easier than ever to stop thinking for ourselves.

If we’re not careful, we’ll end up smarter on paper — but weaker in reality.

Use the tools. Just don’t lose your brain in the process.

Fritz

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
