Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

When Your Coding Assistant Finally Got X-Ray Vision


The Fiddler Everywhere MCP server gives your coding assistant network inspection capabilities for debugging right inside your editor.

Let me paint you a picture.

You are sitting at your desk. Your app loads in the browser. The page renders. The buttons work. The data shows up. Everything looks fine.

Except it is not fine. Somewhere underneath that perfectly rendered page, a request is taking 3 seconds when it should take 200 milliseconds. Another endpoint is returning a 200 but leaking an auth token in a query parameter. A third one is completely missing its Cache-Control header, so every single page refresh costs your users the full round-trip all over again.

But hey—the page loads. So it is fine, right?

Question Hound comic meme with dog sitting in room on fire saying, This is fine
Image Credit: KC Green
Your browser: “Everything rendered successfully!” Your network traffic: literal dumpster fire.

This was the debugging workflow for years: You open DevTools. You scroll through 47 network requests. You squint at the Timing column. You click into a request, copy the headers, paste them into a notepad, and try to figure out whether X-Content-Type-Options is set. By request number six, you have forgotten what you found in request number two. You do that for 12 more requests.

Then you switch back to your editor, ask your coding assistant for help, and it says something like: “You might want to check if your caching headers are configured properly.”

Might. Want. To check.

It does not know. It cannot know. It is reading your Express routes and giving you its best guess based on what the code probably does. It has never seen the actual traffic.

Now Imagine This Instead

You open your editor. You type:

#fiddler Start capturing HTTP traffic in Fiddler using Chrome

A Chrome window opens with the Fiddler proxy already configured. You click around your app for thirty seconds. Then you type:

#fiddler Identify the slowest API endpoints in the captured traffic

And your coding assistant—which can now actually see the captured sessions—comes back with:

GET /api/products/featured — average 2,847ms across 4 requests, 52KB response body, no Cache-Control header. This is your primary bottleneck.

No guessing. No “you might want to.” Just facts, pulled from real HTTPS sessions.

This is what the Progress Telerik Fiddler Everywhere MCP server does. It gives your coding assistant network vision. MCP—Model Context Protocol—is the open standard that lets your editor’s assistant call external tools during a conversation. The Fiddler MCP server exposes captured traffic data through those tools: session lists, request and response details, headers, bodies, timing, TLS info—the whole picture.

The setup? A JSON config file in your workspace:

{  
  "servers": {  
    "fiddler": {  
      "type": "http",  
      "url": "http://localhost:8868/mcp",  
      "headers": {  
        "Authorization": "ApiKey YOUR_API_KEY_HERE"  
      }  
    }  
  }  
}  

Generate the key in Fiddler Everywhere, drop in the config, and you are connected. Works with VS Code, Cursor, Claude Code, Windsurf, Copilot CLI—the full lineup.

The Prompts That Make You Feel Like a Wizard

Once Fiddler MCP is connected, the things you can ask your assistant become genuinely fun:

  • #fiddler Show me all failed requests (status codes 4xx and 5xx)
  • #fiddler Identify sessions with weak or missing security headers
  • #fiddler Create a comprehensive report covering security highlights and performance optimizations
  • #fiddler Analyze caching efficiency for the captured sessions

Each of these triggers real tool calls. The assistant reads actual captured sessions, not your imagination of what the sessions might look like. That means the security audit it produces is based on the headers your server actually sent, not the headers it assumes you configured.

The Fiddler Prompt Library has dozens more if you want to explore.

Skip the Setup: Fiddler Agent Skills

If even the two-minute config feels like too much (I get it, we are all busy), the official Fiddler agent skills do it for you. Clone the repo, drop the skills into your project, and tell your assistant:

Set up Fiddler MCP

Done. Port discovery, API key retrieval, config file, .gitignore update—all handled. There is also a skill that analyzes captured traffic after you run a feature and gives you a structured pass/fail report. Honestly, it is the kind of thing that makes you wonder why you were ever doing it manually.

Build Your Own Debugging Agents (This Is Where It Gets Good)

Here is where things get spicy. The built-in skills are great, but you can write your own. A skill is just a Markdown file—a SKILL.md—that tells your assistant which Fiddler tools to call, in what order, and how to format the output.
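To make that concrete, here is a minimal, hypothetical SKILL.md sketch for the performance-watchdog idea below. The file name, frontmatter fields, thresholds, and steps are all illustrative assumptions on my part, not the official format, so check the Creating Custom Skills guide for the real structure:

```markdown
---
name: api-performance-watchdog
description: Analyze captured Fiddler sessions for slow endpoints and caching problems.
---

# API Performance Watchdog

When the user asks for a performance review of captured traffic:

1. Call the Fiddler MCP tool that lists the captured sessions.
2. For each API endpoint, note the response time, response body size, and Cache-Control header.
3. Flag any endpoint slower than 500ms, or larger than 100KB with no caching headers.
4. Output a table sorted by average response time, worst offender first.
```

The point is just that a skill is instructions, not code: you describe which tools to call and how to present the results, and the assistant does the orchestration.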

Some agents I think every team should consider building:

  • traffic-security-auditor – Scans every session for missing security headers, tokens exposed in URLs, insecure connections. Produces a prioritized report. Basically your own automated pen-test lite that runs from a chat prompt.

  • api-performance-watchdog – Flags slow endpoints, checks caching headers, reports payload bloat. Think of it as a performance review for your API—except this one is actually useful. (Sorry, managers.)

  • checkout-flow-verifier – Replays a captured e-commerce checkout flow and verifies each step hit the right status codes and redirects. Because nobody wants to find out the payment confirmation page is broken from an angry customer email.

The Creating Custom Skills guide has the full format and a working example.

You Never Have to Leave Your Editor

Here is the part that really lands once you try it: you do not need to switch to Fiddler Everywhere. You do not need to open a separate desktop application. You do not need to context-switch at all.

Everything happens inside the tool you are already using—VS Code with Copilot, Claude Code in the terminal, Cursor, Windsurf, whatever your setup is. Capture traffic, filter sessions, inspect headers, create rules, run a full security audit, all from the same chat window where you write code.

That is the whole point. The MCP server brings Fiddler network inspection capabilities directly into your coding workflow. Filters? Ask for them in a prompt. Rules? Create them with a sentence. Session details? One tool call away. You get the full power of a professional traffic inspector without ever breaking your flow.

Enlightenment meme with progressive stages: Reading server logs, opening Chrome DevTools, Opening Fiddler desktop, Typing #fiddler Analyze my traffic
When your entire debugging workflow fits inside one chat window and zero browser tabs.

Try It

Seriously—if you are already using a coding assistant while you code, giving it access to your network traffic is one of the highest-leverage things you can do. The guessing stops. The analysis gets real. And you might actually enjoy debugging for once.

Try Fiddler Everywhere

Seriously Though, Tell Us What You Think

We are building this as fast as we can, and your feedback is what steers the ship. Tried a prompt that did not work well? Built a custom skill that saved your team hours? Found a bug? We want all of it.

Read the whole story
alvinashcraft
2 minutes ago
Pennsylvania, USA

Customizing the Wolverine Code Generation Model


All of this sample code is checked in on GitHub at https://github.com/JasperFx/CritterStackSamples/tree/main/Reports.

When you develop with Wolverine as your application framework, Wolverine is really trying to push you toward using pure functions for your main message handler or HTTP endpoint methods. Recently, I was reviewing a large codebase using Wolverine and found several message handlers that had to use a simple interface like this one just to get the next assigned identifier:

public interface IReportIdService
{
    Task<int> GetNextReportId(CancellationToken cancellation);
}

They were writing command handlers like this:

public static class StartReportEndpoint2
{
    [WolverinePost("/report")]
    public static async Task<ReportStarted> Handle(
        // The command
        StartReport command,

        // Marten session
        IDocumentSession session,

        // The service just to fetch the next report id
        IReportIdService idService,

        CancellationToken cancellation
    )
    {
        var id = await idService.GetNextReportId(cancellation);
        session.Store(new Report { Id = id, Name = command.Name });
        return new ReportStarted(command.Name, id);
    }
}

(The real-life handlers were a little more complicated than this, because real code is almost always far more complex than the simplistic samples people like me use to demonstrate concepts.)

Of course, the code above isn’t very difficult to understand conceptually and maybe it’s not worth the effort to write it any differently. But all the same, let me show you a Wolverine capability to customize the code generation to turn that handler method above into a synchronous pure function.

First off — and shockingly maybe for anybody who has seen me complain about these dad gum little things online — I want to introduce a little custom value type for the report id like this:

// You'd probably use something like Vogen
// on this too, but I didn't need that just
// for the demo here
public record ReportId(int Number);

Just so we can use this little type to identify our Report entities:

public class Report(ReportId Id)
{
    public string Name { get; set; }
}

And next, what we want to get to is an HTTP endpoint signature that’s a pure function where the next ReportId is just poked in as a parameter argument:

public static class StartReportEndpoint
{
    [WolverinePost("/report")]
    public static (ReportStarted, IMartenOp) Handle(
        // The command
        StartReport command,

        // The next report
        ReportId id)
    {
        var op = MartenOps.Store(new Report(id) { Name = command.Name });
        return (new ReportStarted(command.Name, id), op);
    }
}

The MartenOps.Store() thing above is a built-in “side effect” from Wolverine.Marten. Many of Wolverine’s earliest serious users were Functional Programming fans, and they helped push Wolverine in a bit of an FP direction.

Alright, so the next step is to teach Wolverine how to generate a ReportId automatically and relay it to handler or endpoint methods that express a need for it through a method parameter. As an intermediate step, let’s keep this simple and say that we’re just using a PostgreSQL sequence for the number (I think my client’s implementation was something more meaningful and complicated than this, but just go with it please).

Knowing that our application has this sequence:

builder.Services.AddMarten(opts =>
{
    // Set the connection string and database schema...

    // Create a sequence to generate unique ids for documents
    var sequence = new Sequence("report_sequence");
    opts.Storage.ExtendedSchemaObjects.Add(sequence);
}).IntegrateWithWolverine();

Then we can build this little extension helper:

public static class DocumentSessionExtensions
{
    public static async Task<ReportId> GetNextReportId(this IDocumentSession session, CancellationToken cancellation)
    {
        // This API was added in Marten 8.31 as I tried to write this blog post
        var number = await session.NextSequenceValue("reports.report_sequence", cancellation);
        return new ReportId(number);
    }
}

And finally to connect the dots, we’re going to teach Wolverine how to resolve ReportId parameters with this extension of Wolverine’s code generation:

// Variable source is part of JasperFx's code generation
// subsystem. This just tells the code generation how
// to resolve code for a variable of type ReportId
internal class ReportIdSource : IVariableSource
{
    public bool Matches(Type type)
    {
        return type == typeof(ReportId);
    }

    public Variable Create(Type type)
    {
        var methodCall = new MethodCall(typeof(DocumentSessionExtensions), nameof(DocumentSessionExtensions.GetNextReportId))
        {
            CommentText = "Creating a new ReportId"
        };

        // Little sleight of hand. The return variable here knows
        // that the MethodCall creates it, so that gets woven into
        // the generated code
        return methodCall.ReturnVariable!;
    }
}

And finally to register that with Wolverine at bootstrapping:

builder.Host.UseWolverine(opts =>
{
    // Here's where we are adding the ReportId generation
    opts.CodeGeneration.Sources.Add(new ReportIdSource());
});

Now, moving back to the StartReportEndpoint HTTP endpoint from up above, Wolverine is going to generate code around it like this:

// <auto-generated/>
#pragma warning disable
using Microsoft.AspNetCore.Routing;
using System;
using System.Linq;
using Wolverine.Http;
using Wolverine.Marten.Publishing;
using Wolverine.Runtime;

namespace Internal.Generated.WolverineHandlers
{
    // START: POST_report
    [global::System.CodeDom.Compiler.GeneratedCode("JasperFx", "1.0.0")]
    public sealed class POST_report : Wolverine.Http.HttpHandler
    {
        private readonly Wolverine.Http.WolverineHttpOptions _wolverineHttpOptions;
        private readonly Wolverine.Runtime.IWolverineRuntime _wolverineRuntime;
        private readonly Wolverine.Marten.Publishing.OutboxedSessionFactory _outboxedSessionFactory;

        public POST_report(Wolverine.Http.WolverineHttpOptions wolverineHttpOptions, Wolverine.Runtime.IWolverineRuntime wolverineRuntime, Wolverine.Marten.Publishing.OutboxedSessionFactory outboxedSessionFactory) : base(wolverineHttpOptions)
        {
            _wolverineHttpOptions = wolverineHttpOptions;
            _wolverineRuntime = wolverineRuntime;
            _outboxedSessionFactory = outboxedSessionFactory;
        }

        public override async System.Threading.Tasks.Task Handle(Microsoft.AspNetCore.Http.HttpContext httpContext)
        {
            var messageContext = new Wolverine.Runtime.MessageContext(_wolverineRuntime);

            // Building the Marten session
            await using var documentSession = _outboxedSessionFactory.OpenSession(messageContext);

            // Creating a new ReportId
            var reportId = await DocumentSessionExtensions.GetNextReportId(documentSession, httpContext.RequestAborted).ConfigureAwait(false);
            System.Diagnostics.Activity.Current?.SetTag("handler.type", "StartReportEndpoint");

            // Reading the request body via JSON deserialization
            var (command, jsonContinue) = await ReadJsonAsync<StartReport>(httpContext);
            if (jsonContinue == Wolverine.HandlerContinuation.Stop) return;

            // The actual HTTP request handler execution
            (var reportStarted_response, var martenOp) = StartReportEndpoint.Handle(command, reportId);
            if (martenOp != null)
            {
                // Placed by Wolverine's ISideEffect policy
                martenOp.Execute(documentSession);
            }

            // Writing the response body to JSON because this was the first 'return variable' in the method signature
            await WriteJsonAsync(httpContext, reportStarted_response);
        }
    }

    // END: POST_report
}

And as usual, the generated code is an assault on the eyeballs, but if you squint and look for ReportId, you’ll see the generated code is executing our helper method to fetch the next report id value and pushing that into the call to our Handle() method.

Summary

I don’t know that this capability is something many teams would bother to employ, but it’s a possible way to simplify handler or endpoint code that hasn’t been previously documented very well. Wolverine itself uses this capability quite a bit for conventions.

What I think is more likely (and what I hope) is that our continuing investment in AI Skills for the Critter Stack will let folks get value out of these capabilities anyway, either by the AI skills recommending this usage upfront or by helping people apply it later to continuously shrink the codebase and improve the testability of the actual business logic and workflow code.

And because Wolverine has been such a busy project of the type that sometimes throws spaghetti up against the wall to see what sticks, there is another option for this kind of code generation customization you can see here in a sample that “pushes” DateTimeOffset.UtcNow into method parameters.

Lastly, we’re in the midst of an ongoing effort to improve the documentation across the JasperFx / Critter Stack family of projects and you can find more information about the code generation subsystem at https://shared-libs.jasperfx.net/codegen/.




How did code handle 24-bit-per-pixel formats when using video cards with bank-switched memory?


On the topic of what happens if an access violation straddles multiple pages, Gil-Ad Ben Or wonders how code handled 24-bit-per-pixel formats when using video cards with bank-switched memory. “The issue is that since 64k bytes is not divisible by 3, and you usually need a pixel granularity if you aren’t using some kind of buffering.”

This is referring to an older article about the Windows 95 VFLATD video driver helper, which emulated a flat video address space even though the underlying video card used bank-switched memory: it mapped the active bank into the location in the address space that corresponds to its emulated flat address, and responded to page faults by switching banks and moving the mapping to the emulated flat address of the new bank.

The trick falls apart if somebody makes a memory access that straddles two banks, because that leads to an infinite cycle of bank switching: The CPU raises an access violation on the first bank, and the driver maps that bank in and invalidates the second bank. But since the memory access straddles two banks, then the CPU raises an access violation on the second bank, and the act of remapping that bank causes the first bank to become unmapped, and the cycle repeats.

So how did code deal with pixels that straddle two banks?

The underlying rule is that all accesses to memory must be properly-aligned. No properly-aligned memory access will straddle a page boundary.

Managing this requirement was just the cost of doing business. People who wrote code that accessed video memory knew that they couldn’t use tricks like “read a 32-bit value and ignore the top 8 bits.” If you have a pixel that straddles a boundary, you’ll have to break it up into three byte accesses, or at least a byte access and a word access (where the word access is properly aligned). In practice, it wasn’t worth the effort to decide whether to split the pixel as byte+word vs. word+byte, and everybody just did it as three bytes.
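As a rough illustration of that three-byte split, here is a small C sketch. The helper name, the 0x00RRGGBB packing, and the BGR in-memory byte order are all assumptions for the demo, not anything from an actual driver:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: store one 24bpp pixel (0x00RRGGBB, BGR byte order
 * in memory) as three separate byte writes. Each byte store is trivially
 * aligned, so no single access can straddle a page or bank boundary,
 * even when the pixel itself sits across one. */
static void write_pixel24(uint8_t *vram, size_t byte_offset, uint32_t rgb)
{
    vram[byte_offset + 0] = (uint8_t)(rgb);        /* blue  */
    vram[byte_offset + 1] = (uint8_t)(rgb >> 8);   /* green */
    vram[byte_offset + 2] = (uint8_t)(rgb >> 16);  /* red   */
}
```

At worst, each of the three stores triggers its own page fault and bank switch, but none of them can trigger the infinite ping-pong described above.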

Now, if you were operating on an entire row of pixels, you could use aligned 32-bit reads and writes to access the entire row: Copy bytes until the address is 32-bit aligned, and then use 32-bit reads for the bulk of the row, and then copy any leftover bytes at the end. The 32-bit reads will straddle pixel boundaries, but that’s okay because they don’t straddle page boundaries.
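A sketch of that head/bulk/tail pattern in C (again illustrative, not code from any real driver; the possibly unaligned source read goes through memcpy to keep it portable):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: copy a row of packed 24bpp pixel data using only
 * aligned 32-bit *stores* into video memory. The 32-bit stores freely
 * straddle pixel boundaries, but because they are aligned they can never
 * straddle a page (and therefore never a bank) boundary. */
static void copy_row24(uint8_t *dst, const uint8_t *src, size_t nbytes)
{
    /* Head: byte copies until the destination is 32-bit aligned. */
    while (nbytes > 0 && ((uintptr_t)dst & 3u) != 0) {
        *dst++ = *src++;
        nbytes--;
    }

    /* Bulk: aligned 32-bit stores. memcpy handles the possibly
     * unaligned source read portably. */
    while (nbytes >= 4) {
        uint32_t word;
        memcpy(&word, src, sizeof word);
        *(uint32_t *)dst = word;   /* dst is known to be aligned here */
        dst += 4;
        src += 4;
        nbytes -= 4;
    }

    /* Tail: leftover bytes at the end of the row. */
    while (nbytes > 0) {
        *dst++ = *src++;
        nbytes--;
    }
}
```

The function works for any starting alignment; only the destination (the video memory) needs the alignment treatment, since that is the side living behind the bank-switching trick.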

In other words, the answer is that they handled it by handling it.

The post How did code handle 24-bit-per-pixel formats when using video cards with bank-switched memory? appeared first on The Old New Thing.


What’s Next for React in 2026


The State of React survey reveals developer insights into the patterns and tools they use and how their opinions about React are shifting.

The State of React 2025 survey results paint an interesting picture of where React stands as it heads into 2026. React 19 adoption is well underway, but stability hasn’t translated into consensus about what comes next.

State of React 2025

What the survey shows is that some of React’s biggest bets are paying off while others are still finding their footing. The patterns we’re building with, the tools we’re reaching for and the way we think about React architecture are all shifting. In this article, we’ll look at what the data actually says and what it means for the year ahead.

React Server Components

One of the most talked about additions to React in the last couple of years has been React Server Components (RSC). Server Components are components that run on the server and let us keep server-only logic, data access and sensitive code out of the client bundle. Alongside that, Server Functions let client-side code invoke server-side logic through a framework-managed interface, without hand-rolling traditional API endpoints for every interaction.

For a great read on Server Components, check out The Current State of React Server Components: A Guide for the Perplexed.

Some have said that Server Components were to be the foundation of React’s next evolution toward a more complete full-stack framework. The survey data, however, is more nuanced.

About 45% of respondents have used Server Components, and among those who have, only about a third report a positive experience. Server Functions tell a similar story, with roughly 37% adoption and 33% positive sentiment among users. In both cases, only a small fraction of the overall community has used these features and come away with a positive sentiment.

Contrast that with Suspense, React’s mechanism for declaratively handling loading states while waiting for asynchronous data or code. Suspense has the highest adoption rate among new features and boasts strong satisfaction numbers.

It’s a useful comparison because it shows that the React community isn’t resistant to new patterns: when a new API solves a clear problem with a reasonable developer experience, adoption follows. With that said, Suspense is a smaller, more contained feature that’s easier to introduce into existing applications, while Server Components require a more fundamental shift in how we think about our application architecture.

The architecture data reinforces this picture. When asked which rendering patterns they’ve used, most teams still rely on the tried-and-true: Single-Page Applications lead the way at 84%, followed by Server-Side Rendering (61%) and Static Site Generation (44%). Newer approaches like partial hydration (25%), streaming SSR (18%) and islands architecture (14%) are gaining traction, but they’re far from mainstream.

None of this means Server Components don’t matter. Architecturally, the ability to move rendering logic to the server, reduce client-side JavaScript and simplify data fetching is significant. But the developer experience may just need more time to mature and catch up to the architectural promise. For most teams, the pragmatic move is to adopt RSC incrementally and where it makes sense, rather than treating it as a mandate to rewrite everything.

What Developers Are Curious About

The survey also gives us a sense of where developer curiosity is headed. The reading list, the section of the survey that lets respondents flag topics they want to learn more about, is a useful signal here.

ViewTransition, a React API for coordinating animated transitions between UI states, ranks near the top. So does Activity, which lets us hide and show parts of our UI while preserving their internal state and DOM. Both are currently only available in React’s Canary channel, but they point to a future where React handles more of the UX polish that we currently rely on third-party libraries for.

What’s worth noting is the general pattern across the survey data. The features gaining the most positive attention tend to be the ones that solve focused problems without requiring a wholesale rethink of how we build applications. Developers are drawn to APIs that slot into their existing workflows and make specific things easier, whether that’s handling loading states with Suspense, coordinating transitions with <ViewTransition> or managing background rendering with <Activity>.

The UI Component Library Landscape

The survey data around UI component libraries also tells an interesting story. The average respondent has used 2.3 UI libraries and a significant proportion don’t use any component library at all. As the survey itself notes, this suggests the space isn’t quite settled yet, and that there’s still room for new entrants to make their mark.

What this tells us is that developers are still actively evaluating their options. Even the most widely used libraries in the survey sit around 50-57% usage, and the libraries with the highest satisfaction rates aren’t always the ones with the broadest adoption. The needs are clear: production-quality components, built-in accessibility, consistent theming, TypeScript support and, increasingly, integration with AI-powered development workflows.

KendoReact

For teams building enterprise applications, the choice of component library has long-term implications. It affects how quickly we can ship features, how accessible our applications are out of the box and how well our UI scales across complex use cases like data grids, schedulers and form-heavy workflows.

KendoReact website: Master the Art of React UI

Progress KendoReact is one library worth looking at in this context. It provides 120+ production-ready components with built-in accessibility, deep theming support through ThemeBuilder and recent investments in AI-powered developer tooling including an MCP server and AI coding assistant. For teams evaluating their UI toolkit for the year ahead, it’s a library that’s actively investing in the same directions the ecosystem is moving.

AI as an Accelerator

It would be impossible to write about React development in 2026 without mentioning AI. However, the important thing to keep in mind is that AI is changing how we write React code, not what we build with it.

The AI tooling landscape has shifted significantly over the last couple of years. AI-native editors like Cursor and Claude Code understand our entire codebase and can generate components that match our existing patterns and conventions. Model Context Protocol (MCP) integrations give AI assistants real-time access to component library documentation, so the code they generate actually uses the right props and follows current best practices.

These capabilities make us faster by reducing the time we spend on boilerplate and letting us iterate more quickly. However, they don’t replace the architectural decisions we need to make: when to adopt Server Components, how to structure our state management and which rendering patterns fit our use case. AI accelerates the execution of those decisions, not the decisions themselves.

For a deeper dive into how AI is reshaping day-to-day React workflows, from code generation to theming to agentic development, check out AI Productivity for React Developers in 2026.

What This Means for 2026

The survey data we’ve walked through gives us a clear picture of where React is today and where it could be heading in 2026. React is stable, widely adopted and evolving, but the community’s appetite for significant change is measured.

  • Server Components represent React’s most ambitious shift, but mainstream acceptance will take time. For most teams, the winning approach is incremental adoption, where it solves real problems.
  • Developer experience still wins. The features seeing the strongest adoption and interest (Suspense, ViewTransition, Activity) solve focused problems without demanding that we rebuild our mental model of React.
  • The component library landscape remains unsettled. Teams need libraries that invest in accessibility, developer experience and integrate well with modern tooling, including AI assistants.
  • AI is making us more productive at the execution layer, but strategic decisions about architecture and user experience still require human judgment.

Looking ahead, the most successful React teams in 2026 will stay pragmatic: adopting new patterns when they solve real problems, choosing stable tooling and using AI to accelerate delivery without losing sight of fundamentals. React’s ecosystem is mature enough that we can be selective about what we adopt and when.


1.0.33


2026-04-20

  • Resuming a remote session with --resume or --continue automatically inherits the --remote flag without needing to re-specify it
  • Add /bug, /continue, /release-notes, /export, and /reset as command aliases
  • Slash command picker suggests similar commands when you type an unrecognized or misspelled slash command
  • Add /upgrade as an alias for the /update command
  • Grep no longer times out on large repositories when content exclusion policies are enabled
  • Non-interactive mode waits for all background agents to finish before exiting
  • Skill picker correctly truncates CJK/Japanese descriptions and long skill names without wrapping
  • Slash command picker selects the highlighted command when pressing Enter
  • ctrl+t to toggle reasoning display is now listed in the /help and ? overlay
  • Sub-agents in auto mode now inherit the session model
  • Show usage limit warnings at 50% and 95% capacity, giving earlier notice before hitting rate limits
  • Use j/k for vim-style navigation and x to kill tasks in the tasks dialog

Reading Notes #694


A fast-moving mix this week: AI tooling, ARM readiness, Docker sandboxes, and real-world lessons from agents. Practical insights across .NET, DevOps, and local-first workflows.


Suggestion of the week

AI

Programming

DevOps

Podcasts

  • Our Favorite Agent Setups (Agentic DevOps) - Nice discussion that goes through many AI harnesses, agents, models, and what they are playing with right now. OpenClaw, OpenCode, Claude Code, Copilot, and all of it.

  • Michael Perry: AI-assisted Development - Episode 397 (AI DevOps Podcast) - Interesting discussion about AI-assisted Development (or can we say programming?) with a focus on skills and how they could be defined.

