Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Think First, AI Second


We’ve all had that moment: being unable to recall where an acquaintance works or a restaurant name, but knowing exactly where the information sits on LinkedIn or Google Maps. AI is similarly reshaping our cognition—only faster. Economist Ines Lee, who spent years teaching Oxford and Cambridge students to think independently, discovered her own dependency when ChatGPT went down one afternoon and she couldn’t articulate the ideas she needed. She argues that the solution isn’t using less AI, but using it differently. Drawing on MIT neuroscience research, Ines shares three practical principles for staying cognitively engaged while leveraging AI’s capabilities. Read on to learn how to think with AI rather than letting it think for you.

Kate Lee

Was this newsletter forwarded to you? Sign up to get it in your inbox.


When ChatGPT went down one afternoon while I was preparing a presentation, I opened my document and my fingers froze. I couldn’t articulate why the frameworks connected to the examples I’d planned to use. My explanations lived in chat history I could no longer access.

As an economics lecturer, I’d spent years teaching students at Oxford and Cambridge to think independently, question assumptions, and apply frameworks to new situations rather than memorize them. I was apparently losing that skill myself—and I wasn’t alone. Colleagues across knowledge work described the same creeping inability to start meaningful projects without first consulting AI.

This past June, MIT researchers published findings that seemed to explain what we’re experiencing. They scanned the brains of 54 students writing essays under three conditions: using only ChatGPT, using only Google, or using just their own thinking.

The results seemed damning. The ChatGPT group showed the lowest neural activity, and 83 percent couldn’t remember what they’d written, compared to just 11 percent in the other groups. “Is ChatGPT making us stupid?” the headlines asked.

But buried in the study was a finding most coverage missed. The researchers also tested what happens when you sequence your AI use differently. Some participants thought first, then used AI (brain → AI). Others used AI first, then switched to thinking (AI → brain).

The brain → AI group showed better attention, planning, and memory even while using AI. Remarkably, their cognitive engagement stayed as high as students who never used AI. The researchers suggest this increased engagement came from integrating AI’s suggestions with the internal framework they’d already built through independent thinking.

Meanwhile, students who started with AI stayed mentally checked out, even after they switched to working on their own. Starting passive meant staying passive.

The study has limitations—a small sample, an artificial task, not yet peer reviewed—but the pattern matched what I’d seen in my classroom and in my own work.

This isn’t the first time we’ve seen technology reshape cognition. A 2011 study found that when people knew they could Google information later, they remembered where to find it but not the information itself. A 2020 study showed that frequent users of GPS navigation systems develop weaker spatial memory and struggle to navigate without directions. AI follows the same pattern—with higher stakes.

The question isn’t whether to use AI. It’s how to use it without losing the cognitive capabilities that make us valuable: the ability to defend our reasoning, adapt our thinking to new contexts, and understand where our approaches might fail.

The MIT study offers a clue: Sequence matters. What follows are three principles I’ve developed for using AI in ways that challenge assumptions, expose blind spots, and force you to explain your reasoning rather than letting it do all the thinking for you.

But first, we need to understand the fundamental distinction that makes these principles work: the difference between passive consumption and active collaboration.

Make email your superpower

Not all emails are created equal—so why does our inbox treat them all the same? Cora is the most human way to email, turning your inbox into a story so you can focus on what matters and get things done instead of managing your inbox. Cora drafts responses to emails you need to respond to and briefs the rest.

How to think with AI: Active versus passive use

Think about two ways to learn a piece of music. You can learn it by rote—like a kid memorizing the hand positions for Beethoven’s “Für Elise,” training your fingers through repetition until you can perform the piece flawlessly. Or you can learn the piece by understanding its structure, the chord progressions, the harmonic logic. You still practice until your fingers know the patterns, but you understand why the music works. Now you can transpose it, improvise variations, and explain why certain changes would or wouldn’t work.

This same pattern appears in programming: Developers who plan their approach before asking AI to generate code maintain a better understanding of their systems than those who start with prompts.

But the stakes are higher than individual productivity. Research shows critical thinking abilities are declining, especially among younger workers—precisely as employers increasingly demand these skills. The capabilities becoming scarcer are the ones organizations need most: the ability to defend reasoning, adapt thinking to new contexts, and understand where approaches might fail.

Passive AI use is like learning music by rote. You can produce output—an essay, a strategy document, an analysis—by following what AI generates. But you don’t always understand why the argument works, what assumptions it makes, or where it might fail. If somebody asks you to adapt it to a different context, you might be stuck. If you have to defend the reasoning, you have no answer. The output lives in your chat history, not your understanding.

Here’s an example: “Write me a strategy for improving team communication.”

You get an answer. You might even implement it. But you haven’t wrestled with what “better communication” means for your team, what’s causing the current problems, or why certain solutions might fail in your context.

Active AI use means building understanding while collaborating with the model. You frame the problem yourself, make an initial pass, then use AI to challenge your assumptions, uncover blind spots, and sharpen your arguments. You’re learning the chord progressions, not just memorizing the key presses. The machine assists; you own the reasoning.

This might look like: “Here are our context, goals, and constraints. I’ve listed three hypotheses and current evidence. Challenge my assumptions and ask for the missing data before proposing a plan.”

You’re still getting AI’s help, but you’ve done enough thinking that you can evaluate whether its challenges are valid, its questions reveal real gaps, and its suggestions fit your situation. You understand why the strategy works, so you can adapt it when circumstances change.

Of course, passive AI use has its place: transcribing text from screenshots, generating routine reports from data, creating multiple versions of the same message for different audiences. These are like scales and technical exercises—mechanical tasks that don’t require deep comprehension.

But for work where you care about judgment, learning, and deep understanding, you need to build your own understanding.

So how do you structure AI collaboration to stay active rather than passive? Each of these three principles creates friction at a different point in the thinking process, which keeps you cognitively engaged while still leveraging AI’s capabilities.

Three principles for active AI use

Principle 1: Think first, AI second

That MIT study revealed something crucial: When you start with your own thinking, you stay cognitively engaged throughout. When you start with AI, you struggle to activate your brain even after you stop using it.

So, for any piece of meaningful work, do your thinking first, before asking AI to generate on your behalf. Think of it as warming up your cognitive muscles before the main workout. You arrive at the AI interaction already activated, with a point of view to test rather than a blank slate to fill.

This summer, I was teaching a behavioral economics course for undergraduate students. Student feedback from the previous year showed they could pass exams but struggled to apply concepts to novel situations. I needed to restructure how concepts built on each other.

My instinct was to prompt ChatGPT: “Design a 6-week behavioral economics course that promotes deep learning and application.” Instead, I grabbed a notebook and spent an hour working through what I knew: Which concepts did students grasp easily versus struggle with? Where did they make predictable errors? What real-world examples sparked curiosity versus glazed-over compliance? What was I assuming about prerequisite knowledge?

When I finally opened ChatGPT, I had a framework to test. I gave it my notes—the conceptual map, the learning challenges I’d identified, the questions I was wrestling with—and asked it to challenge my sequencing and surface blind spots. Instead of accepting its suggested sequence, I could evaluate: “Does that ordering make pedagogical sense?” I caught structural issues I would have missed if I’d started with a blank prompt. And I’m pretty confident that the course was sharper because I’d mapped the conceptual terrain first.

Before using AI on any project where you care about judgment and understanding, spend 30 minutes capturing your raw thoughts: What do you already know? What are your hypotheses? What feels unclear? What constraints matter?

If you’re truly stuck and need help getting started, use AI to ask questions rather than generate answers.

All illustrations courtesy of Ines Lee/Every.


Principle 2: Use AI as a coach, not a cheerleader

AI’s default mode is to be helpful and agreeable. It suffers from what researchers call “sycophancy”—tailoring responses to what it thinks you want to hear. Left to its own devices, it will tell you your ideas are great, your logic sound, your writing compelling.

This is exactly what you don’t need when you’re trying to think rigorously.

Explicitly prompt AI to be your cognitive sparring partner rather than a pleasant echo. Think of it as converting AI from an eager-to-please intern into a demanding coach who pushes you to think more rigorously.

Recently, I was asked to write a research report on how AI would affect labor markets. The dominant narrative is straightforward: AI will eliminate entry-level white collar jobs, displacing millions of workers in the process.

Instead of asking ChatGPT to help me explain this consensus view, I used a devil’s advocate prompt: “The dominant argument is that AI will wipe out entry-level knowledge work. Your job is to dismantle this narrative. What are the three strongest counterarguments, supported by economic theory or historical precedent? Don’t be diplomatic. Genuinely challenge this position.”

It surfaced three perspectives I hadn’t fully considered, which helped me develop nuanced alternatives I’d overlooked. More importantly, I understood why the conventional wisdom might be wrong, not just that there were alternative views.

Here are three prompts that create this kind of intellectual friction:

The “third-party reviewer” approach:


This shifts AI from talking to you about your work (where it’s sycophantic) to talking to other editors about your work (where it’s professionally candid). You’re no longer the audience, so it drops the protective politeness.

The “structural gap-mapper” approach:


This prompt encourages you to use AI for structural thinking, not just generation.

The “devil’s advocate” prompt:


The goal is to create genuine intellectual friction. You want AI to surface weaknesses you can’t see because you’re too close to the work.

Principle 3: Engineer productive friction

AI explains things in a way that is easy to follow, making us susceptible to what researchers call the “illusion of understanding.” We overestimate what we’ve learned. Real comprehension only emerges when we’re forced to articulate our thinking, because explaining reveals gaps we didn’t know existed.

This idea is the basis of Nobel Prize-winning physicist Richard Feynman’s simple test for understanding: If you can’t explain something to a five-year-old, you don’t really understand it.

I learned this the hard way early in my teaching career. When I was preparing to teach a new concept, I’d read the papers and feel like I understood it. Then a student would ask a basic clarifying question, and I’d freeze. I had a vague sense of the idea but couldn’t clearly articulate the underlying logic or what made it different from related concepts. I’d consumed explanations but hadn’t built my own understanding.

Now when I’m learning something new—whether it’s preparing to teach or understanding a complex research paper—I use AI differently. Instead of asking it to explain the concept, I prompt: “Before you explain anything, ask me to describe this concept as if I’m teaching it to students. If my explanation is vague or misses key elements, point out what’s missing and ask me to try again.”

Build prompts that force you to demonstrate understanding, not just consume it. Here are some ways to create interaction patterns that force you to do the explaining, rather than letting AI do the heavy lifting.

The Feynman test:


The recall prompt:


The one-question rule:


Thinking with AI, not by AI

If you’ve felt the tension between wanting to stay sharp and needing to stay competitive, you don’t need to choose one or the other. The cognitive capabilities that make you valuable don’t disappear when you use AI. But they can atrophy if you use it in a passive way.

So, for your next meaningful project:

  1. Spend 30 minutes thinking first: Write what you know before you prompt.
  2. Make AI your critic: Ask it to challenge your assumptions, not validate them.
  3. Force yourself to explain: If you can’t articulate it clearly, you don’t understand it yet.

The MIT study showed what’s possible: You don’t have to sacrifice cognitive depth for efficiency. AI can amplify your thinking or replace it. Often, the difference comes down to how you use the tool, not whether you use it.


Ines Lee is an economist and writer, passionate about bridging research, policy, and public understanding of science and technology. Follow her on Substack or LinkedIn.

To read more essays like this, subscribe to Every, and follow us on X at @every and on LinkedIn.

We build AI tools for readers like you. Write brilliantly with Spiral. Organize files automatically with Sparkle. Deliver yourself from email with Cora. Dictate effortlessly with Monologue.

We also do AI training, adoption, and innovation for companies. Work with us to bring AI into your organization.

Get paid for sharing Every with your friends. Join our referral program.

For sponsorship opportunities, reach out to <a href="mailto:sponsorships@every.to">sponsorships@every.to</a>.

Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

How to Convert Any Executable or Batch file to Windows Background Service

When working in a software development or deployment environment, you sometimes need to convert an executable (.exe) tool or a batch file into a Windows background service so you don't have to restart the whole process manually again and again. This is especially helpful on production servers: in case of an abrupt server restart, you don't need to rush to restart a particular tool or batch file, because the Windows background service will restart your installed service automatically. So, how can you achieve this?

Today, I shall demonstrate how to convert any executable (.exe) or batch (.bat) file into a Windows background service using NSSM (the Non-Sucking Service Manager), a simple yet powerful tool.

 


Prerequisites:

Before proceeding any further with this article, you will need the following:
  1. Download NSSM tool.
  2. Knowledge of Batch (.bat) file.
  3. Knowledge of Windows Command Prompt.

Let's begin now.

1) The first step is to download the NSSM tool from its official website to your target location.

 
2) Next, extract the NSSM tool from the downloaded ZIP file to your target location. Make sure to extract the version that matches your machine; I am using the 64-bit Windows version of the NSSM tool for this article.
 
 
3) Now, open the Windows command prompt as administrator and change into the directory where you extracted the NSSM tool.

 
4) For this article, I have created a sample batch file that simply prints "hello world"; you can check the video demo for details. Now, type the command below into the Windows command prompt to install your Windows background service.
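The install command follows the pattern `nssm install <servicename>` — a sketch, assuming you call your service helloworld (the name is illustrative). Running it opens NSSM's GUI installer, which is the window described in the next step:

```shell
nssm install helloworld
```

NSSM can also install non-interactively if you pass the program path on the command line (e.g. `nssm install helloworld C:\scripts\hello.bat`, where the path is hypothetical), but the GUI route is what this walkthrough uses.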

 

5) An installation window will appear. Provide the relevant configuration on the Application, Details, and I/O tabs, enter the name of your service, and hit Install service. You will get a message confirming successful installation.




6) To confirm that your service is successfully installed as a Windows background service, open the Windows Services window by typing services.msc in the Run window. Then search for your target service, which in my case is hello world. You can see that my batch file is successfully installed as a Windows background service. Start the service and make sure that it is in the running state; this confirms that your batch file is working fine. You can check the output in the log file that you configured on the I/O tab of the NSSM tool during installation.
 
 


 

7) If you wish to remove your service, first stop it from the Windows Services window. Next, open the Windows command prompt as administrator and type the command below with your service name. A confirmation message will appear; click Yes and you will receive a message confirming successful removal. Refresh the Windows Services window and you will see that your service is no longer available on your machine.
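The removal commands follow the same pattern (again assuming the illustrative service name helloworld). `nssm remove <servicename>` pops up the confirmation dialog described above; appending `confirm` skips it:

```shell
nssm stop helloworld
nssm remove helloworld confirm
```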

 


 

Conclusion

In this article, you learned how to convert any executable (.exe) or batch (.bat) file to a Windows background service using a simple yet powerful tool called NSSM. You walked through the entire process, from installing the service to verifying the installation through the Windows Services window. Finally, you learned how to remove your service through the NSSM tool.
 


C# 14 New Feature: Field-Backed Properties


In this talk, Ian Griffiths explains how C# 14's new field-backed properties feature can save you from metaphorically falling off a cliff when you need more flexibility beyond automatic properties' basic functionality.

He demonstrates the use of this feature to customize property setters without losing the simplicity and support of automatic properties. By allowing you to refer to the compiler-generated field inside get or set methods, C# 14 reduces verbosity and maintains code clarity and organization.

Learn how this small but impactful enhancement can improve your C# coding experience.

  • 00:00 Introduction to C# 14's New Feature
  • 00:30 Understanding Automatic Properties
  • 01:11 Customizing Property Behavior
  • 03:06 Introducing C# 14's New Syntax
  • 04:21 Benefits of the New Feature
  • 05:33 Conclusion

C# 14 can save you from falling off a cliff with its new field-backed properties feature. Admittedly, the cliff is metaphorical. Sometimes when you're using a language or library feature, you can find yourself wanting to go beyond what that feature is able to support. And by moving outside the bounds of its support, you, so to speak, walk off the cliff and that sudden loss of support makes your life a lot harder.

Let me show you what I mean. I've got a very simple class here with a couple of properties. As you may know, this syntax where we use just the get and set keywords and optionally an accessibility modifier makes the compiler generate some code for us. It'll define a hidden field to hold the value and it supplies bodies for the get and set that use that field. The proper name for this is an automatically implemented property, but we typically shorten that to just auto property. This saves us from the tedious business of declaring a field and writing the obvious code to read and write the value in that field. It's not a huge deal, but if you're writing lots of properties, this offers worthwhile improvements in clarity and reduces work.

But what if we want slightly more than what C# generates for us? Notice this type defines an IsModified property. What if I want to set that anytime the Value property changes? Before C# 14, the only way to do that was to write a full property instead of an automatic property. Visual Studio can make that change for me. As you can see, this means declaring the field explicitly and having get and set accessors that use that field. Actually, Visual Studio doesn't quite get it right in this case because it hasn't noticed that the field name collides with the contextual keyword value inside the setter. So I need to qualify the field with a this reference.

And now this is almost identical to what the compiler was generating for me. I can now add in the extra feature that I wanted. So I'm just gonna adjust the layout and then use the full block syntax for the setter. And that gives me a place to put the code that sets the IsModified flag. Let's just run that and check that it worked.

And you can see that after I've set the Value property, the IsModified flag reflects that change as required. The obvious downside is that this is more verbose. It's not terrible. I can't complain about the fact that I've had to write the setter explicitly. The goal here was to customize that, but I've also got an explicitly implemented getter, which is effectively identical to what the compiler was generating, and I've also now got this field.

It's only a slight increase in clutter, but perhaps more concerning is the fact that it would be possible for other code in this class to use this field directly, bypassing my change detection. So the cliff wasn't a big one, it's a bit jarring, but this isn't a major problem. However, it comes up often enough that the C# team decided to support scenarios like this without forcing you to stop using automatic properties entirely.

In C# 14, I can leave the automatic get exactly as it is because I didn't actually wanna change that. Here I can customize just the setter. I can add this extra feature setting the IsModified flag. But how do I modify the value? Well, this is where I use the new syntax. Inside a get or set method, I can use the field keyword to refer to the compiler-generated field.
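The shape Ian describes can be sketched like this (the class and member names are illustrative, not the exact code from his demo):

```csharp
public class Document
{
    public bool IsModified { get; private set; }

    // C# 14 field-backed property: the getter stays compiler-generated,
    // while the custom setter uses the `field` keyword to refer to the
    // compiler-generated backing field. No explicit field is declared,
    // so no other code in the class can bypass the change detection.
    public string Value
    {
        get;
        set
        {
            field = value;
            IsModified = true;
        }
    }
}
```

Assigning to `Value` now flips `IsModified`, while reads still go through the unmodified automatic getter.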

Let me just change the startup project and running that again, we get the required behavior. Comparing this to what we had to do before, you can see that this is a relatively small change. Before, I needed to declare my own field if I wanted to customize my property behavior, but now I can still get the compiler to generate that field for me.

Before, I had to write a custom getter, even though it was only the setter that I wanted to change. Now I can continue to use the compiler-generated getter. So this new feature has a fairly small impact, but what I prefer about the new code is that it removes clutter. I can see immediately that the getter does nothing out of the ordinary, so it's easy to see the one thing that makes this property slightly unusual.

The other benefit is I've not had to introduce a field. And while that's just one line of code, it's a line that doesn't sit all that well with some widely used .NET style guidelines that require fields to be declared in a separate part of the code from properties. Often the field and property could end up being quite distant, and when conceptually closely related code gets scattered across a file, it increases the work required to understand the code.

Arguably that's a flaw in the coding style guidelines, but for better or worse, it's a very common style in .NET that does provide benefits in some scenarios. Also, by using the compiler-generated field, I can be sure that the only code that modifies the field directly is this line here, and with the old approach, I'd need to search for other uses of the field to understand whether any code elsewhere in the class might be bypassing this change detection.

So in conclusion, C# 14 enables us to continue to enjoy the benefits of automatic properties, even when we move beyond their basic capabilities. So automatic properties have always given us a concise way to get basic property behavior, but now if we want to extend beyond that behavior, we can do so without having to fall off the cliff of support.

My name's Ian Griffiths. Thanks for listening.


Learn NestJS for Beginners

1 Share

NestJS is a progressive Node.js framework for building efficient and reliable server-side applications. It uses TypeScript by default and encourages clean, modular code with concepts including controllers, services, and dependency injection.

We just published a NestJS course on the freeCodeCamp.org YouTube channel that will help you harness its modular architecture, TypeScript support, and built-in tools to create clean, testable code.

In this course, you'll explore controllers, services, modules, decorators, pipes, guards, and exception handling - all while building the profile feature for DevMatch, a dating app for developers.

You’ll implement profile creation, updates, and data retrieval while exploring the full lifecycle of a NestJS backend. By the end, you’ll have a solid foundation in NestJS fundamentals, plus the confidence to apply these skills to your own APIs and applications.

The course covers:

  • Understand NestJS fundamentals: modules, decorators, and structure.

  • Build controllers to handle GET, POST, PUT, and DELETE requests.

  • Connect controllers to services to manage application logic.

  • Implement validation and transformation with pipes.

  • Handle errors gracefully with exception filters.

  • Use guards to manage application security and access control.

  • Solve hands-on challenges that reinforce each concept.

Watch the full course on the freeCodeCamp.org YouTube channel (2-hour watch).




Learn R Programming from Harvard University

1 Share

Harvard University creates amazing beginner computer science courses.

We just released Harvard CS50’s introduction to programming using a language called R, a popular language for statistical computing and graphics in data science and other domains. Carter Zenke developed this course.

Learn to use RStudio, a popular integrated development environment (IDE). Learn to represent real-world data with vectors, matrices, arrays, lists, and data frames. Filter data with conditions, via which you can analyze subsets of data. Apply functions and loops, via which you can manipulate and summarize data sets. Write functions to modularize code and raise exceptions when something goes wrong. Tidy data with R’s tidyverse and create colorful visualizations with R’s grammar of graphics. By course’s end, learn to package, test, and share R code for others to use. Assignments inspired by real-world data sets.

Here are the sections in this course:

  • Introduction

  • Representing Data

  • Transforming Data

  • Applying Functions

  • Tidying Data

  • Visualizing Data

  • Testing Programs

  • Packaging Programs

Watch the full course on the freeCodeCamp.org YouTube channel (9-hour watch).




Podcast: GenAI Security: Defending Against Deepfakes and Automated Social Engineering

1 Share

In this episode, QCon AI New York 2025 Chair Wes Reisz speaks with Reken CEO and Google Trust & Safety founder Shuman Ghosemajumder about the erosion of digital trust. They explore how deepfakes and automated social engineering are scaling cybercrime, and Ghosemajumder argues that defenders must move beyond default trust, using behavioral telemetry and game theory to counter attacks that simulate human behavior.

By Shuman Ghosemajumder