Does AI affect learning? What does MIT say? From Cognitive Debt to Cognitive Leverage

by Francisco Santolo

A few days ago I received a message: "Fran, did you see it? MIT says using ChatGPT makes you dumber." I opened the article. Euronews had indeed titled it: "Is ChatGPT making us dumber? An MIT study says so." A glance at LinkedIn confirmed the echo:


Infobae (Argentina): The use of ChatGPT weakens memory and intellectual autonomy, according to an MIT study

Genbeta (Spain): Using ChatGPT takes a toll on our brain: MIT has a worrying verdict

The three headlines repeated the same ideas: less memory, less depth, and the dreaded cognitive debt. But what does the study really say?

Paradoxically, the authors of these pieces (some admitting it explicitly, others obviously) used AI in exactly the way the study flags as harmful.

Through education, I help others improve their business decisions, and we have successfully used AI to amplify that work.

But above all, I am convinced that it has amplified my capacity to learn.

What in the paper could refute these intuitions? Under what circumstances was learning measured? With which profiles, and in what context?

I decided to analyze the paper myself (I walk you through it step by step in section 2, and conclude with Cognitive Leverage, a proposal for a contrasting phenomenon worth studying alongside Cognitive Debt).

1. What does the MIT study say, in brief?

The MIT Media Lab study (June 2025) analyzed how using ChatGPT affects memory, text quality, brain activity (measured with EEG), and energy consumption in 54 college students while writing assigned essays.

They compared three groups:

AI-at-start: ChatGPT generated the first draft; then the students modified it.

AI-at-the-end: Students wrote alone three times and then used AI for final improvements.

Control: No AI, just internet searches.

Main results:

AI-at-start showed a drop in memory (0.25 standard deviations), less essay depth according to teachers (a reduction of 0.6 points on a 10-point scale), difficulty quoting exact phrases from their own text, and a weaker sense of ownership over their essays.

EEG showed lower neural connectivity in those who used AI from the beginning, suggesting less cognitive effort.
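To make these effect sizes concrete, here is a small illustrative sketch that converts the reported standardized drop into raw points. The memory-quiz scale (a standard deviation of 12 points) is my own assumption for illustration, not a figure from the paper; only the 0.25 SD and 0.6-point numbers come from the study.

```python
# Illustrative arithmetic only: the 0.25-SD and 0.6-point figures come from
# the article; the memory-quiz scale below is a hypothetical assumption.

def raw_drop(effect_size_sd: float, scale_sd: float) -> float:
    """Convert a standardized effect size into raw scale units."""
    return effect_size_sd * scale_sd

# Suppose memory were measured on a quiz whose scores have a standard
# deviation of 12 points (hypothetical): a 0.25-SD drop is 3 raw points.
memory_drop_points = raw_drop(0.25, 12)

# The depth finding is already in raw units: 0.6 points on a 10-point
# scale, i.e. a 6% reduction relative to the full scale.
depth_drop_pct = 0.6 / 10 * 100

print(memory_drop_points, depth_drop_pct)
```

The point of the exercise: a 0.25 SD effect is real but modest, which is why the study's own limitations section matters as much as the headline.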

Less mentioned findings:

AI-at-the-end showed no deterioration in memory or essay quality (comparable to the control group) and saved about 25% of the time.

The risk of "cognitive debt" appears when AI replaces the initial intellectual effort. However, used after one's own effort (a hybrid strategy), AI can be effective support without sacrificing learning.

Limitations: the sample is small and the study is preliminary; the research is still in progress.

2. How I decided to analyze it (and what I learned along the way)

I had the opportunity, and the excellent idea, of investing my savings in studying at 7 of the best universities in the world.

A) Fast-pass reading: the Harvard method (45 min)

One of the first things I remember about Harvard is receiving an endless stack of business cases. As preparation, before the first class, we were offered an exclusive session to learn how to cover them strategically and effectively.

What did they teach us? Before reading the full article, do some prior strategic reading.

How? With this focus sequence:

Introduction (context, protagonist, main problem)

Jump to conclusion (critical decisions to make, what are we looking to solve?)

Understand what is available in tables, graphs, appendices

Speed reading (scan paragraph by paragraph, quickly looking for the main ideas)

So I decided to do the same with the endless paper, focusing on understanding: what the experiment was, what the sample was, under what circumstances and context, and what the conclusions were (and under what conditions they hold).

What questions did I have?

How does it apply to more experienced professionals, with capacity for critical analysis and, above all, for continuous learning and generating new concepts?

What about motivation? In the study, writing on imposed topics is what was measured. What happens when there is a genuine desire to learn, understand, generate, and apply what has been learned?

What did I do then?

With the original text (no need for an exhaustive, complete reading) I formulated my own hypotheses. With that foundation, I began leveraging AI.

B) I created a dedicated "Project" in ChatGPT

I uploaded the PDF of the complete paper.

In another window outside the project (using the o3 model, the one with the strongest reasoning) I prepared a guide prompt for the project instructions. It gave me a very long text, but it basically boiled down to: "Act as a Critical Academic Analyst; summarize, point out biases, propose improvements... always respect the real data of the paper as a starting point (and most importantly)... learn and refine your own understanding in each iteration (I want you to grow, to evolve, to become more capable of revising your conclusions with greater knowledge)."

In the project, I fed it questions and invited it to take on different roles to critique its own answers and expand its knowledge. Throughout the process I deepened my own understanding of the paper and refined my initial interpretations, asking new questions.

C) Role-play of 7 specialists

To enrich the perspectives, I set up one of the dynamics that only AI makes possible.

Together we created 7 personas: PhDs with different perspectives shaped by their training and beliefs, who, working from the paper's real data, staged a debate and counterpoints about its uses and limitations.

Dr. Nicolás del Valle (Neuroeconomist). Education: PhD in Applied Cognitive Neuroscience, MIT. Specialization: analysis of cognitive biases and statistical significance (p-values). Key competencies: neuroeconomic modeling and financial decision making.

Sofía Cortázar (Pedagogical Maker). Education: Master in Educational Innovation, Stanford University. Specialization: educational prototypes and real-time measurement. Key competencies: design of tangible pedagogical experiences.

Dr. Valeria Montero (Critical Sociologist). Education: PhD in Digital Sociology, Complutense University of Madrid. Specialization: social equity and analysis of algorithmic power. Key competencies: critical research in digital and social justice.

Dr. Andrés Beck (Futurist bioethicist). Education: PhD in Bioethics and Philosophy of Science, Harvard University. Specialization: technological ethics, human autonomy and agency. Key competencies: ethical analysis of emerging technologies.

Clara Olivares (Circular Economist). Education: Master in Environmental and Circular Economy, Wageningen University. Specialization: assessment and reduction of the energy footprint. Key competencies: sustainable economic models and energy management.

Ignacio Serrano (Liberal entrepreneur). Education: MBA in Finance and Innovation, London Business School. Specialization: return on investment (ROI) and labor market analysis. Key competencies: business strategy based on economic efficiency.

"Synthia AI" (Synthesizing AI). Training: advanced generative artificial intelligence. Specialization: synthesis and resolution of complex contradictions. Key competencies: critical analysis and rapid synthesis of information.

Learning: the frictions between profiles uncovered angles that the PDF did not cover: energy per token, geographical bias of the corpus, ethical threshold of cognitive loss.
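As a sketch of how this persona dynamic could be wired up, the snippet below assembles a single role-play prompt from the persona definitions above. The persona names come from the article; the dataclass, function name, and prompt wording are my own illustrative choices, not the actual prompts used.

```python
# A minimal sketch of the role-play dynamic: building one debate prompt
# from persona definitions. Prompt wording is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    role: str
    specialization: str

PERSONAS = [
    Persona("Dr. Nicolás del Valle", "Neuroeconomist",
            "cognitive biases and statistical significance"),
    Persona("Sofía Cortázar", "Pedagogical Maker",
            "educational prototypes and real-time measurement"),
    Persona("Dr. Valeria Montero", "Critical Sociologist",
            "social equity and algorithmic power"),
    Persona("Dr. Andrés Beck", "Bioethicist",
            "technological ethics, human autonomy and agency"),
    Persona("Clara Olivares", "Circular Economist",
            "energy footprint assessment and reduction"),
    Persona("Ignacio Serrano", "Liberal entrepreneur",
            "ROI and labor market analysis"),
    Persona("Synthia AI", "Synthesizing AI",
            "synthesis of complex contradictions"),
]

def build_debate_prompt(personas: list, topic: str) -> str:
    """Assemble a role-play prompt asking the model to stage a debate."""
    lines = [
        f"Stage a debate about {topic}.",
        "Ground every claim in the paper's real data.",
        "Participants:",
    ]
    for p in personas:
        lines.append(f"- {p.name} ({p.role}), focused on {p.specialization}.")
    lines.append("Each participant presents, cross-examines the others, "
                 "and the Synthesizing AI closes with an integrative summary.")
    return "\n".join(lines)

prompt = build_debate_prompt(PERSONAS, "the MIT Media Lab essay-writing study")
print(prompt)
```

The value of making the structure explicit is repeatability: the same persona list can be reused for new questions or new papers, so the debate becomes a mechanic rather than a one-off.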

D) Podcast script and execution

We designed the dynamics for the podcast:

Initial presentation of perspectives (2 each)

Critical interaction (crossed questions)

Integrative synthesis (by synthesizing AI)

Closing statement from each participant, based on what was discussed

We generated the podcast script, resulting in 71 minutes of vibrant conversation full of learning and new perspectives. If you are curious, write PODCAST and I will send you a document with the full transcript of everything generated.

This type of exercise (generating it, consuming it, reflecting on what emerges, enriching our position, intuiting or reinforcing new concepts), as we will see below with my initial definition of Cognitive Leverage, clearly expands our learning capacity; it does not limit it.

E) NotebookLM as a test bed

I fed it two sets:

1. Only the original paper. It delivered a very clear podcast, covering roughly what appeared in the press and in section 1 above.

2. The documents derived from my ChatGPT project (summaries, debates, podcast script). It generated a slightly richer podcast, but I didn't achieve a very good result. I could have spent more time refining it, but I felt I had enough.

Learning conclusions

AI expanded my ability to understand and exposed my thinking and creation process to more rounds of scrutiny.

Designing the process described in section 2 (how I decided to analyze it, and what I learned along the way) clearly involves cognitive work.

It supported me to question my assumptions, reinforce some beliefs and think differently about others (via role-play, provocation under different characters, other points of view).

And yes, it made me produce faster (25-35% less time), but only after the initial friction (the fast-pass reading of the original paper).

3. What do I take away from the original MIT paper?

The risk of cognitive debt appears when AI avoids the initial intellectual friction.

Delegating the task to AI or performing it without bringing augmented intelligence into play gives rise to learning problems.

The earlier the learner is in their development, and the less developed their critical thinking, their ability to take a well-founded position, and the basic skills of the topic at hand, the more likely they are to get caught in this learning trap (and to keep failing to develop those skills).

When AI comes in after the initial effort, memory doesn't drop and quality is maintained, even in the restrictive exercise of writing on an imposed topic. The rest is territory for open experimentation.

There is a motivation factor for learning that is fundamental and is not put into play in the experiment.

Learning to learn has to do with designing our own optimal learning mechanisms. In the paper the learning dynamics are imposed and pre-established, the use of LLMs is also defined.

4. What new hypothesis did this allow me to generate? The one I call Cognitive Leverage

After going through the paper, talking with the model and submitting my notes to seven imaginary voices, a concept emerged that summarizes what I learned: Cognitive Leverage.

It is not a settled academic term; it is a working proposal (for researchers at MIT or another academic institution who might be interested in validating it).

What is needed as a basis to be able to take advantage of Cognitive Leverage

Developed critical thinking: the ability to analyze, integrating different sources, generating one's own position, and distinguishing the objective from the subjective.

Basic skills required for the type of analysis at hand: e.g., in this case, statistics, scientific thinking, etc.

(the students in the sample had it)

Curiosity and intrinsic motivation: we do not write out of school obligation but out of the desire to learn, draw conclusions, and apply them to our life or profession. There is a component of enjoyment and choice.

(fundamental)

Freedom to decide the objectives and the way in which the information will be worked with (experimentation as part of learning).

Understand which source to start from given our objectives, and make a first analysis with our own intuitions or reflection. Writing, as the original study shows, helps us process it. Why do I say "which source"? Often an initial source generated by the LLM itself may be sufficient.

(the latter taken from the conclusions of the article).

Under these conditions and not the limited conditions of the study experiment, I believe that AI can leverage learning and not generate cognitive debt.

What are some good practices for designing your own cognitive-leverage mechanics?

AI-sparring – Treat the AI as an opponent looking for cracks in your argument.

Changing lenses – Ask it to analyze your idea as a neuroscientist, then as a philosopher, then as a climate activist.

Tangential exploration – Ask "what if I apply this to vertical farming?" to discover remote connections.

Self-metacognition – Have it explain your own logic step by step: you will see invisible leaps.

Iterative condensation – Force it to compress your text from long essay to infographic, then to a 30-second pitch; each format reveals gaps and essences.

Play and analyze across different formats – text, audio, and now video (it can even perceive your surroundings).

Offer models and tools from papers or books as a basis for processing information or guiding its application: the better prepared the person, the more they will benefit from learning with AI.

My invitation is simple: play with these possibilities of what I have defined as Cognitive Leverage, and gradually incorporate and generate your own AI-leveraged learning mechanics. Then share your own intuitions and conclusions with me.

What seems fundamental to me?

Avoid headline takes and empty conclusions, which, paradoxically, the paper itself warns against. Stay away from uses of LLMs that impoverish us instead of enriching us.

Let's prevent cognitive debt from being the next fad disguised as absolute truth.

Next steps

A paper, even a statistically significant one (and here the sample is small and the temporal follow-up limited), is a trigger for questions. The MIT study shows that AI can create cognitive debt… if it bypasses the initial intellectual friction. Read in full, it also shows that the same AI does NOT harm memory when used after one's own written draft, and in fact offers tangible time savings. The data invites us to think, and we have to move away from dogma.

Learning to learn is the vaccine. Without critical thinking, curiosity, and the habit of contrasting sources, AI will amplify gaps. With that foundation, it becomes a multiplier of breadth, depth, and speed. Teaching how to write prompts is useless if we do not first teach ourselves to ask why and what for.

Learning to learn daily is not something only young people lack. It is the great deficit of most professionals who have been successful until now.

The education system—and vocational training—needs to pivot. We continue training to memorize content when the differential value is in designing questions, learning to learn, personalizing our education, and measuring what generates the most impact or results for us. Education must generate the virtuous cycle of results to trigger motivation. LLMs can be the best personalized discussion tool… if we know how to use them with the right mindset.

Advantage and gap widen at the same time. Whoever masters cognitive leverage will gain depth and efficiency (individuals and companies); whoever depends on automatic summaries for their deliverable or product will lose agency and credibility. This already affects students and senior executives alike.

AI doesn't magically make us dumber or smarter. It makes us – above all – more similar in the quality of our questions and interactions.

Carol Dweck explains in Mindset that a growth mindset allows us to increase our intelligence. Using LLMs the way the study's participants did has nothing to do with a growth mindset.

