We’ve all had that moment: being unable to recall where an acquaintance works or a restaurant name, but knowing exactly where the information sits on LinkedIn or Google Maps. AI is similarly reshaping our cognition—only faster. Economist Ines Lee, who spent years teaching Oxford and Cambridge students to think independently, discovered her own dependency when ChatGPT went down one afternoon and she couldn’t articulate the ideas she needed. She argues that the solution isn’t using less AI, but using it differently. Drawing on MIT neuroscience research, Ines shares three practical principles for staying cognitively engaged while leveraging AI’s capabilities. Read on to learn how to think with AI rather than letting it think for you.—Kate Lee
Was this newsletter forwarded to you? Sign up to get it in your inbox.
When ChatGPT went down one afternoon while I was preparing a presentation, I opened my document and my fingers froze. I couldn’t articulate why the frameworks connected to the examples I’d planned to use. My explanations lived in chat history I could no longer access.
As an economics lecturer, I’d spent years teaching students at Oxford and Cambridge to think independently, question assumptions, and apply frameworks to new situations rather than memorize them. I was apparently losing that skill myself—and I wasn’t alone. Colleagues across knowledge work described the same creeping inability to start meaningful projects without first consulting AI.
This past June, MIT researchers published findings that seemed to explain what we’re experiencing. They scanned the brains of 54 students writing essays under three conditions: using only ChatGPT, using only Google, or using just their own thinking.
The results seemed damning. The ChatGPT group showed the lowest neural activity, and 83 percent couldn’t remember what they’d written, compared to just 11 percent in the other groups. “Is ChatGPT making us stupid?” the headlines asked.
But buried in the study was a finding most coverage missed. The researchers also tested what happens when you sequence your AI use differently. Some participants thought first, then used AI (brain → AI). Others used AI first, then switched to thinking (AI → brain).
The brain → AI group showed better attention, planning, and memory even while using AI. Remarkably, their cognitive engagement stayed as high as students who never used AI. The researchers suggest this increased engagement came from integrating AI’s suggestions with the internal framework they’d already built through independent thinking.
Meanwhile, students who started with AI stayed mentally checked out, even after they switched to working on their own. Starting passive meant staying passive.
The study has limitations—a small sample, an artificial task, not yet peer-reviewed—but the pattern matched what I’d seen in my classroom and in my own work.
This isn’t the first time we’ve seen technology reshape cognition. A 2011 study found that when people knew they could Google information later, they remembered where to find it but not the information itself. A 2020 study found that frequent users of GPS navigation systems develop weaker spatial memory and struggle to navigate without turn-by-turn directions. AI follows the same pattern—with higher stakes.
The question isn’t whether to use AI. It’s how to use it without losing the cognitive capabilities that make us valuable: the ability to defend our reasoning, adapt our thinking to new contexts, and understand where our approaches might fail.
The MIT study offers a clue: Sequence matters. What follows are three principles I’ve developed for using AI in ways that challenge assumptions, expose blind spots, and force you to explain your reasoning rather than letting it do all the thinking for you.
But first, we need to understand the fundamental distinction that makes these principles work: the difference between passive consumption and active collaboration.
Make email your superpower
Not all emails are created equal—so why does your inbox treat them all the same? Cora is the most human way to email, turning your inbox into a story so you can focus on what matters and get stuff done instead of managing your inbox. Cora drafts responses to the emails you need to answer and briefs you on the rest.
How to think with AI: Active versus passive use
Think about two ways to learn a piece of music. You can learn it by rote—like a kid memorizing the hand positions for Beethoven’s “Für Elise,” training your fingers through repetition until you can perform the piece flawlessly. Or you can learn the piece by understanding its structure, the chord progressions, the harmonic logic. You still practice until your fingers know the patterns, but you understand why the music works. Now you can transpose it, improvise variations, and explain why certain changes would or wouldn’t work.
This same pattern appears in programming: Developers who plan their approach before asking AI to generate code maintain a better understanding of their systems than those who start with prompts.
But the stakes are higher than individual productivity. Research shows critical thinking abilities are declining, especially among younger workers—precisely as employers increasingly demand these skills. The capabilities becoming scarcer are the ones organizations need most: the ability to defend reasoning, adapt thinking to new contexts, and understand where approaches might fail.
Passive AI use is like learning music by rote. You can produce output—an essay, a strategy document, an analysis—by following what AI generates. But you don’t always understand why the argument works, what assumptions it makes, or where it might fail. If somebody asks you to adapt it to a different context, you might be stuck. If you have to defend the reasoning, you have no answer. The output lives in your chat history, not your understanding.
Here’s an example: “Write me a strategy for improving team communication.”
You get an answer. You might even implement it. But you haven’t wrestled with what “better communication” means for your team, what’s causing the current problems, or why certain solutions might fail in your context.
Active AI use means building understanding while collaborating with the model. You frame the problem yourself, make an initial pass, then use AI to challenge your assumptions, uncover blind spots, and sharpen your arguments. You’re learning the chord progressions, not just memorizing the key presses. The machine assists; you own the reasoning.
This might look like: “Here are our context, goals, and constraints. I’ve listed three hypotheses and current evidence. Challenge my assumptions and ask for the missing data before proposing a plan.”
You’re still getting AI’s help, but you’ve done enough thinking that you can evaluate whether its challenges are valid, its questions reveal real gaps, and its suggestions fit your situation. You understand why the strategy works, so you can adapt it when circumstances change.
Of course, passive AI use has its place: transcribing text from screenshots, generating routine reports from data, creating multiple versions of the same message for different audiences. These are like scales and technical exercises—mechanical tasks that don’t require deep comprehension.
But for work where you care about judgment, learning, and deep comprehension, you need to build the understanding yourself.
So how do you structure AI collaboration to stay active rather than passive? Each of these three principles creates friction at a different point in the thinking process, which keeps you cognitively engaged while still leveraging AI’s capabilities.
Three principles for active AI use
Principle 1: Think first, AI second
That MIT study revealed something crucial: When you start with your own thinking, you stay cognitively engaged throughout. When you start with AI, you struggle to activate your brain even after you stop using it.
So, for any piece of meaningful work, do your thinking first, before asking AI to generate on your behalf. Think of it as warming up your cognitive muscles before the main workout. You arrive at the AI interaction already activated, with a point of view to test rather than a blank slate to fill.
This summer, I was teaching a behavioral economics course for undergraduate students. Student feedback from the previous year showed they could pass exams but struggled to apply concepts to novel situations. I needed to restructure how concepts built on each other.
My instinct was to prompt ChatGPT: “Design a 6-week behavioral economics course that promotes deep learning and application.” Instead, I grabbed a notebook and spent an hour working through what I knew: Which concepts did students grasp easily versus struggle with? Where did they make predictable errors? What real-world examples sparked curiosity versus glazed-over compliance? What was I assuming about prerequisite knowledge?
When I finally opened ChatGPT, I had a framework to test. I gave it my notes—the conceptual map, the learning challenges I’d identified, the questions I was wrestling with—and asked it to challenge my sequencing and surface blind spots. Instead of accepting its suggested sequence, I could evaluate: “Does that ordering make pedagogical sense?” I caught structural issues I would have missed if I’d started with a blank prompt. And I’m pretty confident that the course was sharper because I’d mapped the conceptual terrain first.
Before using AI on any project where you care about judgment and understanding, spend 30 minutes capturing your raw thoughts: What do you already know? What are your hypotheses? What feels unclear? What constraints matter?
If you’re truly stuck and need help getting started, use AI to ask questions rather than generate answers:
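For instance, you might try a prompt along these lines: “I’m starting a project on [topic]. Don’t give me answers or an outline yet. Ask me questions, one at a time, that force me to clarify my goals, constraints, and assumptions.”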
Principle 2: Use AI as a coach, not a cheerleader
AI’s default mode is to be helpful and agreeable. It suffers from what researchers call “sycophancy”—tailoring responses to what it thinks you want to hear. Left to its own devices, it will tell you your ideas are great, your logic sound, your writing compelling.
This is exactly what you don’t need when you’re trying to think rigorously.
Explicitly prompt AI to be your cognitive sparring partner rather than a pleasant echo. Think of it as converting AI from an eager-to-please intern into a demanding coach who pushes you to think more rigorously.
Recently, I was asked to write a research report on how AI would affect labor markets. The dominant narrative is straightforward: AI will eliminate entry-level white-collar jobs, displacing millions of workers in the process.
Instead of asking ChatGPT to help me explain this consensus view, I used a devil’s advocate prompt: “The dominant argument is that AI will wipe out entry-level knowledge work. Your job is to dismantle this narrative. What are the three strongest counterarguments, supported by economic theory or historical precedent? Don’t be diplomatic. Genuinely challenge this position.”
It surfaced three perspectives I hadn’t fully considered, which helped me develop nuanced alternatives I’d overlooked. More importantly, I understood why the conventional wisdom might be wrong, not just that there were alternative views.
Here are three prompts that create this kind of intellectual friction:
The “third-party reviewer” approach:
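One version might read: “Imagine you’re briefing another editor on this draft, and the author will never see your notes. What would you flag as its weaknesses, gaps, and unsupported claims?”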
This shifts AI from talking to you about your work (where it’s sycophantic) to talking to other editors about your work (where it’s professionally candid). You’re no longer the audience, so it drops the protective politeness.
The “structural gap-mapper” approach:
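A prompt in this spirit: “Here is my draft and the argument I’m trying to make. Map its structure: the main claims and how they depend on each other. Then tell me which claims lack evidence, where the logic skips a step, and which sections don’t support the overall argument.”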
This prompt encourages you to use AI for structural thinking, not just generation.
The “devil’s advocate” prompt:
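A general version of the prompt I used for the labor market report might look like: “Here is my position: [position]. Your job is to dismantle this argument. What are the three strongest counterarguments, supported by evidence or precedent? Don’t be diplomatic. Genuinely challenge my position.”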
The goal is to create genuine intellectual friction. You want AI to surface weaknesses you can’t see because you’re too close to the work.
Principle 3: Engineer productive friction
AI explains things in a way that is easy to follow, making us susceptible to what researchers call the “illusion of understanding.” We overestimate what we’ve learned. Real comprehension only emerges when we’re forced to articulate our thinking, because explaining reveals gaps we didn’t know existed.
This idea is the basis of Nobel Prize-winning physicist Richard Feynman’s simple test for understanding: If you can’t explain something to a five-year-old, you don’t really understand it.
I learned this the hard way early in my teaching career. When I was preparing to teach a new concept, I’d read the papers and feel like I understood it. Then a student would ask a basic clarifying question, and I’d freeze. I had a vague sense of the idea but couldn’t clearly articulate the underlying logic or what made it different from related concepts. I’d consumed explanations but hadn’t built my own understanding.
Now when I’m learning something new—whether it’s preparing to teach or understanding a complex research paper—I use AI differently. Instead of asking it to explain the concept, I prompt: “Before you explain anything, ask me to describe this concept as if I’m teaching it to students. If my explanation is vague or misses key elements, point out what’s missing and ask me to try again.”
Build prompts that force you to demonstrate understanding, not just consume explanations. Here are some ways to create interaction patterns that force you to do the explaining, rather than letting AI do the heavy lifting.
The Feynman test:
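This is the prompt from my own learning routine, which you can adapt: “Before you explain [concept], ask me to describe it as if I’m teaching it to students. If my explanation is vague or misses key elements, point out what’s missing and ask me to try again before you fill in the gaps.”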
The recall prompt:
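Something like: “Before we go any further, don’t show me anything. Ask me to write down, from memory, the key points from our last session on this topic, then tell me what I got wrong or left out.”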
The one-question rule:
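One possible form: “Whenever I ask you something, respond first with a single question that makes me commit to my own answer. Only explain after I’ve answered it.”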
Thinking with AI, not by AI
If you’ve felt the tension between wanting to stay sharp and needing to stay competitive, you don’t need to choose one or the other. The cognitive capabilities that make you valuable don’t disappear when you use AI. But they can atrophy if you use it in a passive way.
So, for your next meaningful project:
- Spend 30 minutes thinking first: Write what you know before you prompt.
- Make AI your critic: Ask it to challenge your assumptions, not validate them.
- Force yourself to explain: If you can’t articulate it clearly, you don’t understand it yet.
The MIT study showed what’s possible: You don’t have to sacrifice cognitive depth for efficiency. AI can amplify your thinking or replace it. Often, the difference comes down to how you use the tool, not whether you use it.
Ines Lee is an economist and writer, passionate about bridging research, policy, and public understanding of science and technology. Follow her on Substack or LinkedIn.
To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.
We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.
We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.
Get paid for sharing Every with your friends. Join our referral program.
For sponsorship opportunities, reach out to sponsorships@every.to.