Windows users will no longer be forced to run automatic updates in the middle of a game or a busy day. Microsoft is rolling out some long-awaited changes to Windows Update to users on its Dev and Experimental Windows Insider channels, including the ability to pause updates indefinitely by extending the pause 35 days at a time.
Last month, Microsoft announced a slew of upcoming changes to improve Windows 11 and address some of users' most common complaints about the platform. Chief among the company's planned fixes was making updates less disruptive. In its blog post on Friday, Microsoft says you'll be able to "extend the pause end date as many times as you …
ChatGPT has been publicly available for over three years now, and generative AI is woven into the tools students use every day: web search, word processors, code editors. You might assume that by now, most programming instructors have figured out how to handle it. But when my collaborators and I went looking for computing instructors who had made meaningful changes to their course materials in response to GenAI, we were surprised by how few we found. Many instructors had updated their course policies, but far fewer had actually redesigned assignments, assessments, or how they teach.
I’m Sam Lau from UC San Diego, and together with Kianoosh Boroojeni (Florida International University), Harry Keeling (Howard University), and Jenn Marroquin (Google), we’re presenting a research paper at CHI 2026 on this topic. We wanted to understand: What happens when programming instructors try to shape how students interact with GenAI tools, and what gets in their way?
To find out, we interviewed 13 undergraduate computing instructors who had gone beyond policy changes to make concrete updates to their courses: redesigning assignments, building custom tools, or overhauling assessments. We also surveyed 169 computing faculty, including a substantial proportion from minority-serving institutions (51%) and historically Black colleges and universities (17%). What we found is that instructors are doing a kind of design work that nobody trained them for, under conditions that make it very hard to succeed.
Here’s a summary of our findings:
What is “emergency pedagogical design”?
We call this work emergency pedagogical design, drawing an analogy to the “emergency remote teaching” that instructors had to perform when COVID-19 forced courses online overnight. Just as emergency remote teaching was distinct from carefully designed online learning, emergency pedagogical design is distinct from thoughtfully integrating AI into pedagogy. Instructors are reacting in real time, with limited resources and no playbook.
We observed four defining properties. First, the work is reactive: Instructors didn’t plan for GenAI; they’re retrofitting courses that were designed before these tools existed. Second, it’s indirect: Unlike a UX designer who can change an interface, instructors can’t modify ChatGPT or Copilot, so they can only try to influence student behavior through policies, assignments, and course infrastructure. Third, instructors rely on ambient evidence like office-hour conversations and staff anecdotes rather than controlled evaluations. And fourth, instructors feel pressure to act now rather than wait for research or best practices to emerge.
Five barriers instructors keep hitting
Across our interviews and survey, five barriers came up again and again.
Fragmented buy-in. Most instructors we surveyed were personally open to adopting GenAI in their teaching: 81% described themselves as open or very open. But only 28% said the same about their colleagues. The result is that instructors who want to make changes often work in isolation, piloting course-specific tweaks without support or coordination from their departments.
Policy crosswinds. In the absence of top-down guidance, instructors set their own GenAI policies on a per-course basis. As one instructor put it, “From a student perspective, it’s the wild west. Some courses allow GenAI usage, some don’t.” Students have to track different rules for every class, and policies rarely distinguish between paid and unpaid tools, or between stand-alone chatbots and GenAI embedded in everyday software like code editors. 78% of surveyed instructors agreed that unequal access to paid GenAI tools could worsen disparities in learning outcomes.
Implementation challenges. Instructors wanted to shape how students used GenAI, not just whether they used it, but their options were indirect. Some made small adjustments, like permitting GenAI in specific labs. Others went further: One instructor required students to submit design documents before asking GenAI to generate code; another built a custom chatbot that offered conceptual help without writing code for students. 80% of surveyed instructors rated GenAI integration as important or very important, but only 37% reported frequently using GenAI tools in their course activities.
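The instructor-built chatbot described above is not part of our paper's artifacts, so the following is only a hypothetical sketch of the general technique: pair a restrictive system prompt with a post-filter that removes any code the model emits anyway. The prompt text and the `strip_code_blocks` helper are my own illustrative assumptions, not the instructor's implementation.

```python
import re

# Hypothetical system prompt for a "conceptual help only" course chatbot.
# The actual instructor-built tool is not public; this is illustrative.
SYSTEM_PROMPT = (
    "You are a teaching assistant for an intro programming course. "
    "Explain concepts, ask guiding questions, and point to documentation, "
    "but never write or complete code for the student."
)

# Matches fenced code blocks, including multi-line bodies.
FENCED_CODE = re.compile(r"```.*?```", flags=re.DOTALL)

def strip_code_blocks(reply: str) -> str:
    """Defense in depth: even if the model ignores the system prompt,
    replace any fenced code block with a nudge before showing the reply."""
    return FENCED_CODE.sub(
        "[code omitted -- try writing this part yourself]", reply
    )
```

The point of the post-filter is that prompt instructions alone are unreliable; filtering the response before it reaches the student enforces the "no code" policy mechanically.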
Assessment misfit. Several instructors described a striking pattern: Students performed well on take-home assignments but struggled on proctored assessments. One instructor reported that a third of his 450-person class scored zero on a skill demonstration that required writing a short function from scratch, even though assignment grades had been fine. The problem wasn’t just that students were using GenAI to complete homework; it was that instructors had no reliable way to see how students were interacting with these tools day-to-day. Some instructors responded by shifting credit toward oral “stand-up” meetings and written explanations, but this created new challenges around grading consistency and staffing.
Lack of resources. This was the barrier that tied everything together. 53% of surveyed instructors said they lacked sufficient resources to implement GenAI effectively, and 62% said they didn’t have enough time given their workload. The gap was especially stark at minority-serving institutions: MSI instructors were more likely than non-MSI instructors to report insufficient resources (62% vs. 43%) and heavier teaching loads (70% vs. 54% taught 3+ courses per term). All 10 respondents who taught six or more courses per term were from MSIs. Meanwhile, the interviewees who had made the most ambitious changes tended to have lighter teaching loads, external funding, or the ability to hire large course staffs, advantages that most instructors don’t have.
What needs to change
One striking finding is that the instructors doing the most to improve student-AI interactions were also the most privileged in terms of time, staffing, and funding. One instructor needed over 50 course staff members to run weekly stand-up meetings for 300 students. Others spent their own money on API costs. These are not scalable models.
If only well-resourced institutions can afford to adapt their curricula, GenAI risks widening the very inequities that education is supposed to reduce. Students at under-resourced institutions could fall further behind, not because their instructors don’t care but because those instructors are teaching six courses a term with no additional support.
When surveyed instructors were asked what would help most, the top answers were faculty training and support, evidence of GenAI’s impact, and funding. What if universities, funders, and HCI researchers worked together with instructors to make emergency pedagogical design sustainable for all instructors, not just the most privileged ones?
Ranjan Roy from Margins is back for our weekly discussion of the latest tech news. We cover: 1) Incoming Apple CEO John Ternus's biggest challenge 2) Is turnover at Apple a good thing? 3) The products Ternus will take to market 4) What is Apple's tabletop robot all about? 5) OpenAI is communicating differently 6) Is TBPN helping OpenAI messaging? 7) Anthropic's Mythos rollout vs. OpenAI's Spud rollout 8) Meta's latest layoffs 9) Meta tracks employees' keystrokes 10) Is Meta's tracking a reinforcement learning play? 11) Ranjan's Streamflation rant.
---
Today on the show I’m talking with Amelia Wattenberger — designer, data-viz veteran, ex-GitHub Next, and now designing Intent at Augment Code. What if the last 30% of any software project is about to become the hardest part you’ve ever done? That’s the argument Amelia is making today. We discuss the identity crisis developers are having as agents take over the keyboard, the epic redesign of developer tooling in this agent-first world, the arc from autocomplete to chat to CLI back to UI, why Intent treats a workspace as their core primitive not a chat thread, the tradeoffs between one-worktree-per-agent vs. one-worktree-per-task, and why she thinks prototyping just got easier but finishing got harder.
Before you upgrade, get a clear assessment of what your .NET modernization path looks like.
In this video, I show how to use GitHub Copilot modernization for .NET in Visual Studio Code to assess a .NET 8 project and understand what it will take to move forward. This walkthrough focuses on the assessment experience so you can evaluate upgrade readiness, spot potential blockers, and get a practical starting point before making changes.
What you’ll see:
- How to use the GitHub Copilot modernization agent in Visual Studio Code
- How to request an assessment for upgrading a .NET project
- What the assessment reveals about upgrade readiness
- How this helps you plan a move from .NET 8 to .NET 10
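The video covers the assessment step only, but for context on what the assessment is planning toward: in an SDK-style project, the move from .NET 8 to .NET 10 ultimately hinges on the project file's target framework. The snippet below is a minimal placeholder `.csproj`, not a file from the video, and the assessment's value is flagging the API and package changes that make the upgrade more than this one-line edit.

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Before the upgrade: <TargetFramework>net8.0</TargetFramework> -->
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
</Project>
```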