Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

1.0.10

1 Share

2026-03-20

  • Reduced memory usage when viewing large files in their entirety
  • /login device flow works correctly in Codespaces and remote terminal environments
  • Working directory is correctly detected when using --server mode with remote sessions
  • Arrow keys work correctly in terminals using application keypad mode
  • Repo hooks (.github/hooks/) now fire correctly when using prompt mode (-p flag)
  • /copy writes formatted HTML to clipboard on Windows for pasting into Word, Outlook, and Teams
  • SDK clients can register custom slash commands when starting or joining a session
  • SDK clients can show elicitation dialogs to the user via session.ui.elicitation
  • Add experimental support for multiple concurrent sessions
  • Add --effort as a shorthand alias for --reasoning-effort
  • Add /undo command to undo the last turn and revert file changes
  • Markdown bullet lists render correctly in alt-screen mode when content contains hard line breaks
  • Elicitation form shows Shift+Tab hint for navigating between fields in reverse
  • Remote session URL displays as a compact clickable 'Open in browser' link instead of a duplicated raw URL
  • Session history is no longer lost when exiting via /quit, Ctrl+C, or restart
  • Hook matcher filters defined in nested hook structures are now correctly applied to inner hook items
  • Plugins using .claude-plugin/ or .plugin/ manifest directories now load their MCP and LSP servers correctly
  • /terminal-setup no longer shows a misleading error for WSL users
  • Model picker reorganizes models into Available, Blocked/Disabled, and Upgrade tabs based on user plan and policy
  • Workspace MCP servers from .mcp.json, .vscode/mcp.json, and devcontainer.json are now loaded only after folder trust is confirmed
  • Config settings renamed to camelCase: includeCoAuthoredBy, effortLevel, autoUpdatesChannel, statusLine (old names still work)
  • When copying assistant responses, the leading 2-space UI indent is stripped from selections where all selected lines share that indent
  • Plugins loaded via --plugin-dir now appear in /plugin list under a separate 'External Plugins' section
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

We Give, We Grow: Celebrating Women’s History Month at New Relic


Rider 2026.1 Release Candidate Is Out!


The Rider 2026.1 Release Candidate is ready for you to try.

This upcoming release brings improved support for the .NET ecosystem and game development workflows, as well as refinements to the overall developer experience. Rider 2026.1 allows you to work with file-based C# programs and offers an improved MAUI development experience on Windows, mixed-mode debugging, and early support for CMake projects.

If you’d like to explore what’s coming, you can download the RC build right now:

.NET highlights of this release

Support for file-based C# programs

You can now open, run, and debug standalone .cs files directly in Rider – no project file required.

This makes it easier to create quick scripts, prototypes, or small tools while still benefiting from full IDE support, including code completion, navigation, and debugging.

Viewer for .NET disassemblies

You can now inspect native disassembly generated from your C# code inside Rider.

With the new ASM Viewer tool window, you can explore output from JIT, ReadyToRun, and NativeAOT compilers without leaving the IDE. More on that here.

NuGet Package Manager Console (Preview)

Rider now includes a NuGet Package Manager Console with support for standard PowerShell commands and Entity Framework Core workflows.

If you’re used to working with PMC in Visual Studio, you can now use the same commands without leaving Rider. Learn more here.

Smoother MAUI iOS workflow from Windows

Building and deploying MAUI iOS apps from Windows is now more reliable and easier to set up.

When connecting to a Mac build host, Rider automatically checks and prepares the environment – including Xcode, .NET SDK, and required workloads – so you can get started faster and spend less time troubleshooting setup issues.

Azure DevOps: Ability to clone repositories

A new bundled Azure DevOps plugin lets you browse and clone repositories directly from Rider using your personal access token.

No need to switch tools – everything is available from File | Open | Get from Version Control.

Game development improvements

Rider 2026.1 continues to improve the experience of building and debugging games across Unreal Engine, Unity, and C++ workflows.

Full mobile development support for Unreal Engine

Rider 2026.1 fully supports mobile game development for Unreal Engine on both Android and iOS.

You can debug games running on iOS devices directly from Rider on macOS – set breakpoints, inspect variables, and step through code using the familiar debugger interface. This builds on previous Android support and completes the mobile workflow across platforms.

Faster and more responsive Unreal Engine debugging

C++ debugging in Rider now uses a new standalone parser and evaluator for Natvis expressions. Variable inspection with the rewritten evaluator is up to 87 times faster on warm runs and 16 times faster on cold ones. The debugger memory usage has dropped to just over a third of what it was.

Get the full story of how we were able to achieve that from this blog post.

Blueprint improvements

Finding usages, event implementations, and delegate bindings across Unreal Engine Blueprints and C++ code is now more reliable, making it easier to trace how gameplay logic connects across assets.

Code Vision now supports the BlueprintPure specifier and correctly detects Blueprint event implementations in Blueprints. Find Usages has also been improved and now identifies additional BlueprintAssignable delegate bindings.

Blueprint usage search now relies on the asset path instead of the Blueprint name, ensuring accurate results even when multiple Blueprints share the same name.

CMake support for C++ gaming projects (Beta)

Rider 2026.1 introduces Beta support for CMake-based C++ projects.

You can now open, edit, build, and debug CMake projects directly in Rider, making it easier to work with game engines that rely on CMake. This is an early implementation focused on core C++ workflows, and we’ll continue expanding compatibility and performance in future releases.

Redesigned Unity Profiler integration

Performance analysis for Unity projects is now more integrated into your workflow.

You can open Unity Profiler snapshots directly in Rider and explore them in a dedicated tool window with a structured view of frames and call stacks. A timeline graph helps you identify performance hotspots, and you can navigate directly from profiler data to source code.

Mixed-mode debugging for game scenarios on Windows

With mixed-mode debugging on Windows, you can debug managed and native code in a single session. This is particularly useful for game development scenarios where .NET code interacts with native engines or libraries, allowing you to trace issues across the full stack without switching contexts.

Language support updates

Rider 2026.1 brings improvements across multiple languages:

  • C#: better support for extension members, new inspections, and early support for C# 15 Preview
  • C++: updated language support, improved code analysis, and smarter assistance
  • F#: improved debugging with Smart Step Into and better async stepping

Rider’s C# intelligence is powered by ReSharper. For a deeper dive into C# updates, check out this blog post for ReSharper 2026.1 Release Candidate.

Try it out and share your feedback

You can download and install Rider 2026.1 RC today:

We’d love to hear what you think. If you run into issues or have suggestions, please report them via YouTrack or reach out to us on X.


ReSharper 2026.1 Release Candidate Released!


The ReSharper 2026.1 Release Candidate is ready for you to try.

This release focuses on making everyday .NET development faster and more predictable, with improvements to code analysis and language support, a new way to monitor runtime performance, and continued work on stability and responsiveness in Visual Studio.

If you’re ready to explore what’s coming, you can download the RC right now:

Release highlights

A new way to monitor runtime performance

ReSharper 2026.1 introduces the new Monitoring tool window, giving you a clearer view of how your application behaves at runtime.

You can track key performance metrics while your app is running or during debugging and get automated insights into potential issues. The new experience builds on capabilities previously available in Dynamic Program Analysis and our profiling tools, but brings them together in a single view that makes it easier to evaluate performance at a glance.

Starting with ReSharper 2026.1, the Monitoring tool window is available when using ReSharper as part of the dotUltimate subscription.

Note: The Dynamic Program Analysis (DPA) feature will be retired in the 2026.2 release, while its core capabilities will continue to be provided through the new monitoring experience.

Current limitations: The Monitoring tool window is not currently supported in Out-of-Process mode. We are working to remove this limitation in ReSharper 2026.2.

ReSharper now available in VS Code-compatible editors

ReSharper expands its support beyond Microsoft Visual Studio. The extension is now publicly available for Visual Studio Code and compatible editors like Cursor and Google Antigravity.

You can use familiar ReSharper features – including code analysis, navigation, and refactorings – in your preferred editor, along with support for C#, XAML, Razor, and Blazor, and built-in unit testing tools.

ReSharper for VS Code and compatible editors is available under the ReSharper, dotUltimate, and All Products Pack subscriptions. A free subscription is also available for non-commercial use.

Learn more in this dedicated blog post.

Better support for modern C#

ReSharper 2026.1 improves support for evolving C# language features, helping you work more efficiently with modern syntax.

  • Better handling of extension members, including improved navigation, refactorings, and auto-imports
  • Early support for upcoming C# features like collection expression arguments
  • New inspections to catch subtle issues, such as short-lived HttpClient usage or incorrect ImmutableArray<T> initialization

These updates help you write safer, more consistent code with less manual effort.

Faster code analysis and indexing

This release includes performance improvements across core workflows:

  • Faster indexing of annotated type members
  • More responsive import completion
  • Reduced overhead in code analysis by optimizing performance-critical paths

Improved stability in Out-of-Process mode

We continue to improve the reliability of ReSharper’s Out-of-Process (OOP) mode, which separates ReSharper’s backend from Visual Studio to keep the IDE responsive.

In this release, we fixed over 70 issues affecting navigation, UI interactions, unit testing sessions, and solution state synchronization, making everyday work more stable and predictable.

Updated editor UI

ReSharper’s editor experience has been refreshed to better align with the modern Visual Studio look and feel. Code completion, parameter info, and other popups now have a cleaner, more consistent design and properly support editor zoom, improving readability across different setups.

C++ improvements (ReSharper C++)

Alongside the core ReSharper updates, the 2026.1 Release Candidate also brings improvements for C++ developers working with ReSharper C++:

  • Performance: Faster startup times and lower memory usage in Unreal Engine projects.
  • Language support: Support for the C23/C++26 #embed directive, C++23 extended floating-point types, the C2Y _Countof operator, and other features.
  • Coding assistance: Auto-import for C++20 modules and postfix completion for primitive types, literals, and user-defined literal suffixes.
  • Code analysis: New inspections for out-of-order designated initializers and override visibility mismatches, plus an update of the bundled Clang-Tidy to LLVM 22.
  • Unreal Engine: Richer Blueprint integration in Code Vision and Find Usages, compatibility fixes for the upcoming Unreal Engine 5.8.

Try it out and share your feedback

You can download and install ReSharper 2026.1 RC today:

We’d love to hear what you think. If you run into issues or have suggestions, please share your feedback via YouTrack.


Exploring Claude Code Hooks with the Coding Agent Explorer (.NET)


Ever wondered what Claude Code is actually doing while it works? Every file it reads, every command it runs, every permission it requests. Claude Code exposes all of this through a hook system that lets you intercept and observe each step.

In this post, you’ll learn what hooks are, how to set them up, and how to use the Coding Agent Explorer to visualize hook events in real time.

The post is part of a multi-part series on the Coding Agent Explorer, an open-source .NET tool for inspecting what AI coding agents do under the hood. You can jump to the section you need, but for background and context, it’s best to start here:

To make this more concrete, the diagram below shows how hook events flow from Claude Code through HookAgent into the Coding Agent Explorer.

Claude Code Hooks Flow with HookAgent and Coding Agent Explorer

Now that we have the big picture, let’s see how to set this up.

What Are Claude Code Hooks?

When things happen inside Claude Code, it can execute a hook. For example:

  • Reading or writing a file
  • Running a shell command
  • Requesting a permission
  • Starting or ending a session
  • Submitting a user prompt

The diagram below highlights several of the hook events that Claude Code generates during execution.

Claude Code hook events diagram showing SessionStart, PreToolUse, Notification, and SubagentStart events

A hook is simply a shell command that Claude Code runs at that point, passing a JSON payload describing the event via stdin. This gives you a way to extend, observe, and control Claude Code’s behavior from the outside.

What can hooks be used for?

With hooks, you can:

  • Enforce security policies before tools run
  • Log and audit tool usage
  • Trigger external automation
  • Notify external systems (Slack, CI, etc.)

Example: Block access to secret files

A classic use case is intercepting PreToolUse events to block the agent from accessing files it should not touch.

Let’s say we have a file named Secret.key that we never want Claude Code to read. When the agent attempts to access it, your PreToolUse hook intercepts the request, inspects the file path, and blocks the operation by returning a non-zero exit code with an error message. Claude Code respects the response and continues without ever opening the file.

The diagram below illustrates how the PreToolUse hook intercepts and blocks the request.

Diagram illustrating how a PreToolUse hook prevents Claude Code from accessing a sensitive file like secret.key

Here’s what Claude Code sends to the hook via stdin when it tries to read secret.key:

				
{
  "session_id": "9ec4714b-3475-4773-b4af-3f70d7fe68f7",
  "transcript_path": "C:\\Users\\Tore\\.claude\\projects\\C--Conf\\9ec4714b-3475-4773-b4af-3f70d7fe68f7.jsonl",
  "cwd": "C:\\MyApp",
  "permission_mode": "default",
  "hook_event_name": "PreToolUse",
  "tool_name": "Read",
  "tool_input": {
    "file_path": "C:\\MyApp\\Secret.key"
  },
  "tool_use_id": "toolu_01BQ4EQBtbxbFRoTEc1CXRgf"
}
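To make the mechanism concrete, here is a minimal sketch of such a hook as a standalone Python script (a hypothetical block_secrets.py with an example blocklist). It reads the event JSON from stdin and signals a block with a non-zero exit code; the exact exit-code conventions and how to register the script are covered in the Claude Code hooks reference.

```python
# Sketch of a PreToolUse hook (hypothetical block_secrets.py) that blocks reads
# of Secret.key. Claude Code pipes the event JSON to the hook via stdin; a
# non-zero exit code blocks the tool call and the stderr message is shown to Claude.
import json
import sys

BLOCKED_NAMES = {"secret.key", ".env"}  # example blocklist, adjust per project

def should_block(event):
    """True if a PreToolUse event targets a file on the blocklist."""
    if event.get("hook_event_name") != "PreToolUse":
        return False
    path = event.get("tool_input", {}).get("file_path", "")
    # Normalize separators and compare the final path component, lowercased,
    # to catch simple variations in path representation.
    name = path.replace("\\", "/").rsplit("/", 1)[-1].lower()
    return name in BLOCKED_NAMES

def run_hook(stdin=sys.stdin):
    event = json.load(stdin)  # read the event payload from stdin
    if should_block(event):
        print("Blocked: access to that file is not allowed", file=sys.stderr)
        return 1  # non-zero exit code blocks the action
    return 0

# Entry point when registered in settings.json: sys.exit(run_hook())
```

The filename comparison here is deliberately simple; as discussed later in the post, path checks need careful normalization to avoid bypasses.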

Which events can we hook into?

Claude Code provides four categories of hookable events:

  • Tool Events – PreToolUse, PostToolUse, PostToolUseFailure
  • Session Events – SessionStart, SessionEnd, PreCompact
  • Subagent Lifecycle – SubagentStart, SubagentStop, TaskCompleted
  • User Interaction – UserPromptSubmit, Notification, PermissionRequest

The diagram below gives a high-level overview of how hook events are grouped and emitted by Claude Code.

Diagram showing Claude Code hook events including PreToolUse, PostToolUse, SessionStart, and UserPromptSubmit categories

A complete reference for all events is included later in this post.

Why Hooks Matter for Every Project

Beyond observability, hooks are one of the most important control points available in any Claude Code project. By registering a PreToolUse hook, you get to run your own code before any tool executes, giving you the opportunity to inspect, block, or log every action the agent attempts.

This is a practical and effective way to restrict how Claude Code interacts with tools and files. You can block reads and writes to sensitive files and directories (credentials, .env files, private keys, config folders), prevent specific shell commands from running, and define clear rules around what the agent is allowed to touch.

Because your hook runs in your environment, it is always invoked before the tool executes and cannot be skipped through prompt manipulation alone. However, the effectiveness of the protection depends on how your hook validates and enforces its rules.

Hooks also provide a meaningful defense against prompt injection attacks. If a malicious file or web page contains hidden instructions telling Claude Code to exfiltrate data or modify a sensitive file, a PreToolUse hook can catch the attempt before the tool executes and block it with a non-zero exit code. Claude Code reads your error message and stops, preventing the action from reaching the file system.

In practice, hooks act as a powerful guardrail and auditing layer. For stronger guarantees, especially when shell access is enabled, they should be combined with environment-level controls such as filesystem restrictions or sandboxing.

In other words, hooks are one layer in a broader security model and should be combined with other controls for robust protection.

How to Observe Claude Code Hooks in Real Time

But how can we observe exactly when hooks are called and what data is passed to them? That is where the Coding Agent Explorer and its HookAgent tool come in.

Here is what the rest of this post covers:

  • How the HookAgent tool bridges Claude Code and the dashboard
  • How to set everything up in a few minutes
  • What you can actually see once it is running

Introducing HookAgent

HookAgent is a small companion CLI tool that ships with the Coding Agent Explorer. It acts as the bridge between Claude Code’s hook system and the Coding Agent Explorer.

The diagram below shows how HookAgent receives hook events from Claude Code and forwards them to the Coding Agent Explorer.

Diagram showing how HookAgent receives hook events from Claude Code via stdin, returns an exit code, and forwards the event to the Coding Agent Explorer over HTTP

To use it, configure Claude Code to call HookAgent for each hook event you want to capture. From that point on, every time Claude Code fires a hook, it runs HookAgent and passes the event data via stdin.

HookAgent’s core responsibility is simple:

  • Read the incoming event from stdin
  • Attach Claude Code environment variables like CLAUDE_PROJECT_DIR and CLAUDE_SESSION_ID
  • Forward everything to the Coding Agent Explorer via a single HTTP POST request

When the Explorer receives a hook event, it updates the dashboard in real time.

If the Explorer is not running when a hook fires, HookAgent exits immediately and Claude Code continues normally. Observability is always optional.

FACT: stdin and exit codes

stdin (standard input) is a standard way for one process to send data to another; in this case, Claude Code passes the hook event JSON as text to HookAgent via stdin.

The exit code is how a process reports the result back: 0 (zero) means success, while any non-zero value signals an error and can be used to block an action.

Getting Started with HookAgent

Getting hooks running takes about five minutes. Here is what you need.

Prerequisites

Before you get started, see Introducing the Coding Agent Explorer for installation and setup instructions.

Step 1: Build HookAgent

Open a terminal in the root folder of the Coding Agent Explorer and run the appropriate publish script for your platform:

Windows:

publish.bat

macOS / Linux:

bash publish.sh

This builds both the Coding Agent Explorer and HookAgent into the Published folder:

  • Published/CodingAgentExplorer/
    The proxy and dashboard.
  • Published/HookAgent/HookAgent.exe
    HookAgent (Windows: .exe, macOS/Linux: no extension).

Step 2: Create a working directory

Create a fresh folder where you will run claude (in this example, we name it MyApp), then copy the HookAgent folder from the Published folder into it. After copying, your working directory should look like this:

Windows:

C:\MyApp\HookAgent\HookAgent.exe

macOS / Linux:

~/MyApp/HookAgent/HookAgent

Step 3: Configure Claude Code hooks

The Coding Agent Explorer includes ready-to-use sample settings.json files for each platform.

  1. Create a .claude folder in your working directory (MyApp) if it does not already exist.
  2. Copy the sample settings.json for your platform into that .claude folder.

Use one of these sample files:

  • Windows: HookAgent\Sample-Settings-Windows\settings.json
  • macOS / Linux: HookAgent/Sample-Settings-LinuxMacOS/settings.json

Your working directory should now look like this:

Windows:

C:\MyApp\
HookAgent\HookAgent.exe
.claude\settings.json

macOS / Linux:

~/MyApp/
HookAgent/HookAgent
.claude/settings.json

This registers HookAgent for all Claude Code hook events. Feel free to remove any events you do not need. The full list of supported events and the settings.json syntax is documented in the Claude Code hooks reference.
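For orientation, the hooks section of such a settings.json follows roughly this shape (an illustrative fragment, not the full sample file; see the Claude Code hooks reference for the exact schema and event names):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": ".*",
        "hooks": [
          { "type": "command", "command": "HookAgent/HookAgent.exe" }
        ]
      }
    ]
  }
}
```

Each event name maps to a list of matcher groups, and each group lists the commands to run when the event fires.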

The "matcher": ".*" field is a regex pattern applied to the tool name. The pattern .* matches all tools, so the hook fires for every tool call under that event. You can narrow it down: for example, "matcher": "Bash" fires only for Bash tool calls. Not all events support a matcher.

Note: Claude Code runs hook commands through bash on all platforms, including Windows. Always use forward slashes in the command path. The Windows sample uses HookAgent/HookAgent.exe. The macOS/Linux sample uses HookAgent/HookAgent with no .exe extension.

Step 4: Verify the hooks are registered

Start Claude Code from your working directory and run the /hooks command:

/hooks

Claude Code will list every configured hook event and the command registered for each one. You should see all hook events pointing to HookAgent. If the list is empty or events are missing, check that .claude/settings.json is in the right folder and that the JSON is valid.

Once you have confirmed the hooks are registered, exit Claude Code and move on to the next step.

Step 5: Start the Coding Agent Explorer

From the Coding Agent Explorer folder, start the Explorer:

dotnet run

On Windows the browser opens automatically. On macOS and Linux, open the dashboard manually at https://localhost:5001.

Step 6: Test HookAgent

With the Explorer running, test HookAgent from your working directory before starting a full Claude Code session:

Windows (PowerShell):

'{"hook_event_name":"UserPromptSubmit","session_id":"test"}' | HookAgent\HookAgent.exe

Windows (cmd):

echo {"hook_event_name":"UserPromptSubmit","session_id":"test"} | HookAgent\HookAgent.exe

macOS / Linux:

echo '{"hook_event_name":"UserPromptSubmit","session_id":"test"}' | HookAgent/HookAgent

A UserPromptSubmit event should appear in the Conversation View immediately. Make sure the “Hook Events” checkbox in the Conversation View is checked. If the Explorer is not running, the command exits silently and Claude Code is never affected.

Step 7: Run Claude

From your working directory (C:\MyApp on Windows, ~/MyApp on macOS/Linux), run:

claude

Hook events will start appearing in the Conversation View as soon as Claude Code starts up, and then for every action it takes. Make sure the “Hook Events” checkbox in the Conversation View is checked, otherwise hook events will be hidden.

Seeing Claude Code Hooks and Tool Calls in Real Time

Once everything is running, the Conversation View becomes a live timeline of everything Claude Code does, with hook events interleaved alongside the API calls. You get a complete picture that was previously invisible.

Here is what a typical interaction looks like on the timeline when you ask Claude Code to fix a bug:

13:42:01 SessionStart
13:42:03 UserPromptSubmit "Fix the null reference exception in UserService.cs"
13:42:03 POST /v1/messages (Claude thinks about what to do)
13:42:05 PreToolUse Read (about to read UserService.cs)
13:42:05 PostToolUse Read (file contents returned)
13:42:05 POST /v1/messages (Claude analyses the code)
13:42:08 PreToolUse Edit (about to write the fix)
13:42:08 PostToolUse Edit (file updated)
13:42:08 POST /v1/messages (Claude confirms the fix)
13:42:10 Stop

In under 10 seconds, Claude Code made 3 LLM calls and 2 tool calls. Without hooks, you only see the API requests. With hooks, you see the full story.

Diagram showing how Claude Code sends hook event JSON to HookAgent via stdin, receives an exit code response, and forwards the event to the Coding Agent Explorer over HTTP

Clicking any hook event in the timeline opens the full JSON payload. For a `PreToolUse` event, for example, you see the tool name, the exact input parameters, the session ID, and the working directory. For a `PermissionRequest`, you see exactly what Claude Code wants to do and why.

				
{
  "session_id": "814cc648-5d90-44e2-9239-144ad62abc76",
  "transcript_path": "C:\\Users\\Tore\\.claude\\projects\\C--Conf\\814cc648-5d90-44e2-9239-144ad62abc76.jsonl",
  "cwd": "C:\\Conf",
  "permission_mode": "default",
  "hook_event_name": "PreToolUse",
  "tool_name": "Glob",
  "tool_input": {
    "pattern": "**/UserService.cs"
  },
  "tool_use_id": "toolu_019GP34HeFh8N4QkWQWdAyUA"
}

Why This Is Useful in Workshops

I use the hooks feature extensively in my workshops on agentic development. The question I hear most often from developers who are new to coding agents is:

But what is it actually doing? And what data is actually passed to my hooks?

The Conversation View with hooks answers that question directly. You can project the dashboard on a shared screen and watch every session event, tool call, and permission request appear in real time as Claude Code works. It turns the agent from a magic black box into a system that developers can reason about, question, and understand.

In my experience, once developers see the full picture, a lot of things click. They understand why the agent reads files before editing them. They see how it loops back to the LLM after each tool call. They notice where token costs accumulate. And they start writing better prompts as a result.

For more details about my workshops and training classes, visit tn-data.se/training.

Claude Code Hook Events Reference

As of today, Claude Code supports these hook events. For the full and up-to-date reference, see the Claude Code hooks documentation.

  • SessionStart
    Session begins or resumes
  • UserPromptSubmit
    User submits a prompt
  • PreToolUse
    Fires before any tool executes
  • PostToolUse
    Fires after a tool succeeds
  • PostToolUseFailure
    Fires after a tool fails
  • PermissionRequest
    Claude Code requests permission to perform an action
  • Stop
    Claude finishes responding
  • SubagentStart
    A subagent is started
  • SubagentStop
    A subagent finishes execution
  • Notification
    Claude Code sends a notification
  • PreCompact
    Fires before context compaction
  • ConfigChange
    A settings file changes
  • TeammateIdle
    Agent team coordination event
  • TaskCompleted
    A task is marked as complete
  • SessionEnd
    Session terminates

The PreToolUse and PostToolUse events are the most informative day-to-day. They fire for every tool call, whether that is a file read, a bash command, a web fetch, or an MCP tool invocation. Together they give you a complete record of every action the agent took.

  • PreToolUse is where you enforce rules: block access to sensitive files, reject dangerous shell commands, or log what the agent is about to do.
  • PostToolUse fires after a tool succeeds and is where you react to what just happened.

Common uses include:

  • Run your test suite automatically after the agent edits a file
  • Trigger a linter or code formatter after a write
  • Log the completed action to an audit trail
  • Notify an external system (Slack, CI pipeline) that a file changed
  • Inspect the tool output before the agent continues
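For example, a small PostToolUse hook (a hypothetical Python sketch, in the same stdin-and-exit-code style the hooks system uses) could append each successful file edit to an audit log:

```python
# Hypothetical PostToolUse hook: append each successful Edit/Write to an audit log.
# Claude Code pipes the event JSON to the hook via stdin after the tool has finished.
import json
import sys
from datetime import datetime, timezone

AUDITED_TOOLS = {"Edit", "Write"}  # example: only log file-modifying tools

def audit_line(event):
    """Build one audit-log line from a PostToolUse event, or None to skip."""
    if event.get("hook_event_name") != "PostToolUse":
        return None
    tool = event.get("tool_name", "")
    if tool not in AUDITED_TOOLS:
        return None
    path = event.get("tool_input", {}).get("file_path", "?")
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{stamp} {tool} {path}"

def run_hook(stdin=sys.stdin, log_file="claude-audit.log"):
    line = audit_line(json.load(stdin))
    if line:
        with open(log_file, "a", encoding="utf-8") as f:
            f.write(line + "\n")
    return 0  # always exit 0: this hook only observes, it never blocks

# Entry point when registered in settings.json: sys.exit(run_hook())
```

Because the hook always returns zero, it never interferes with the agent; it only records what happened.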

What’s Next: Inspecting Claude Code with MCP Observer

This post covered the hooks feature. The next post in this series will look at the MCP Observer, a new feature that lets you intercept and inspect traffic between Claude Code and any Model Context Protocol (MCP) server. If you’ve ever wanted to understand how MCP tools like Microsoft Learn or Context7 actually work under the hood when Claude Code uses them, that post is for you.

In the meantime, the full project is open-source and available on GitHub: github.com/tndata/CodingAgentExplorer.

 

I Want Your Feedback

The Coding Agent Explorer is open-source and I want to improve it with your help. If you find a bug, have a feature request, or want to share your experience, please create an issue on GitHub. I read every issue and appreciate all feedback.

Contributions are also very welcome. The codebase is intentionally kept simple (a single NuGet dependency, a vanilla JS frontend) to make it easy for anyone to jump in.

Want to Learn Agentic Development?

If this topic interests you, I’d love to help you go deeper. I’m currently working on a workshop called Agentic Development with Claude Code, where we explore how coding agents work, how to use them effectively, and how to build workflows around them. The Coding Agent Explorer is one of the tools I plan to use to help participants see what is really happening behind the scenes.

Learn more about my AI and Claude Code workshops and courses: Agentic Development and AI workshops.

I also give a presentation called How Does a Coding Agent Work? that covers the architecture and inner workings of AI coding agents. Contact me if you’d like me to run this at your company or conference.

Claude Code Hooks FAQ

Does HookAgent slow down Claude Code?

No. HookAgent makes a single HTTP POST per event and exits. The call is fast, and if the dashboard is not running it returns immediately. In practice, you will not notice any difference in Claude Code’s responsiveness.

Do I need to change my project or code to use hooks?

No. You only need to create or update the .claude/settings.json file in the directory where you run claude. Nothing in your project code changes.

Can I use only some of the hook events?

Yes. You can remove any hook entries from settings.json that you don’t want. For example, if you only care about tool calls, keep PreToolUse and PostToolUse and remove the rest.

Does this work on macOS and Linux?

Yes. Use publish.bat on Windows or bash publish.sh on macOS and Linux. Each script detects the current platform and builds a single-file HookAgent executable for it. On Linux and macOS the executable has no .exe extension, so use HookAgent/HookAgent as the command path in your settings.json.

Is this tool suitable for production use?

No. The Coding Agent Explorer is designed as a development and teaching tool. It is not intended for production use.

Can I block all sensitive file access using PreToolUse?

Not completely. PreToolUse lets you block direct tool calls (like reading secret.key), but it does not provide a true sandbox.

For example, the model might:

  • Generate a script (e.g. Bash or PowerShell) that reads files indirectly
  • Request access to a broader directory (e.g. “load all files in this folder”)
  • Use other tools or workflows that bypass your specific path checks

If you give Claude Code shell access (Bash, command line, PowerShell, etc.), it can effectively do anything you can do within that environment, including reading sensitive files through indirect means.

Even for direct tool calls, your hook logic must be implemented carefully. Naive checks (for example simple string matching on file paths) can sometimes be bypassed using variations in path representation, encoding, or traversal patterns. In practice, you need to normalize and validate inputs rigorously.
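As a hypothetical illustration (not the HookAgent implementation itself), here is how a guard might normalize a path before comparing it, so that traversal variants of the same file do not slip past a naive string-equality check:

```typescript
import path from 'node:path';

// Hypothetical PreToolUse guard: resolve the requested path to a
// canonical absolute form before comparing, so traversal variants
// like "./secrets/../secrets/secret.key" cannot slip past a naive
// string-equality check on the raw path.
export function isBlockedPath(requested: string, blockedFile: string): boolean {
  return path.resolve(requested) === path.resolve(blockedFile);
}

// Both spellings resolve to the same file, so both are blocked:
console.log(isBlockedPath('secrets/secret.key', 'secrets/secret.key'));              // true
console.log(isBlockedPath('./secrets/../secrets/secret.key', 'secrets/secret.key')); // true
console.log(isBlockedPath('README.md', 'secrets/secret.key'));                       // false
```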

In other words, you are filtering requests, not enforcing OS-level isolation.

What does it take to truly secure file access?

If you need strong guarantees, you need to move beyond hooks and enforce restrictions at the environment level:

  • Run Claude Code in a sandbox
    (container, VM, or restricted workspace)
  • Limit filesystem permissions
    (only expose allowed directories)
  • Disable or tightly restrict shell access
    (for example by disabling the Bash tool or restricting what commands can be executed, since it can be used to invoke other interpreters like PowerShell or cmd)
  • Use allowlists instead of blocklists
    (explicitly permit safe paths only)
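As a sketch of the allowlist idea (hypothetical code, not part of the Coding Agent Explorer), a hook could resolve each requested path and permit it only when it falls under an explicitly approved root:

```typescript
import path from 'node:path';

// Hypothetical allowlist: only files under explicitly approved roots
// are permitted. POSIX path semantics are used to keep the sketch
// deterministic across platforms.
const allowedRoots = ['/workspace/src', '/workspace/docs'];

export function isAllowed(requested: string): boolean {
  const resolved = path.posix.resolve('/', requested); // collapses ../ segments
  return allowedRoots.some(
    (root) => resolved === root || resolved.startsWith(root + '/')
  );
}

console.log(isAllowed('/workspace/src/app.ts'));           // true
console.log(isAllowed('/workspace/src/../../etc/passwd')); // false
console.log(isAllowed('/workspace/src-evil/x'));           // false
```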

Hooks are best used as a guardrail and observability layer, not as your primary security boundary.

About the Author

Tore Nestenius is a Microsoft MVP in .NET and a senior .NET consultant, instructor, and software architect with over 25 years of experience in software development. He specializes in .NET, ASP.NET Core, Azure, identity architecture, and application security, helping development teams design secure, scalable, and maintainable systems.

Tore delivers .NET workshops, Azure training, and technical presentations for companies and development teams across Europe. His focus is on practical, hands-on learning that helps developers understand modern tooling, cloud architecture, and AI-assisted development.

Learn more on his .NET blog at nestenius.se or explore his workshops and training at tn-data.se.

Links to other blog posts

The post Exploring Claude Code Hooks with the Coding Agent Explorer (.NET) appeared first on Personal Blog of Tore Nestenius | Insights on .NET, C#, and Software Development.


Learning Vercel AI SDK—Part 1


Learn how to set up a Vercel AI SDK project, install required dependencies and work with prompts, models and streaming.

The AI SDK is a robust TypeScript library that enables developers to easily create AI-powered applications. In this tutorial, you’ll build a simple AI chatbot with a real-time streaming interface.

To follow this tutorial, you will need Node.js installed and an OpenAI API key.

After completing this article, you will have a solid grasp of key concepts used in the Vercel AI SDK, including:

  • Models
  • Text prompts
  • System prompts
  • Text generation
  • Text streaming

Setting Up the Project

To get started, create a new Node.js application (for example, with npm init -y) and set it up to use TypeScript.
For that, in your terminal, run the following commands:

  • npm install -D typescript @types/node ts-node
  • npx tsc --init

Once the commands have been executed, replace the contents of your tsconfig.json file with the configuration shown below.

{
  "compilerOptions": {
    "target": "ES2022", 
    "module": "ES2022",
    "moduleResolution": "node",
    "rootDir": "./src",
    "outDir": "./dist",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

After that, install the dependencies below:

  • npm install dotenv
  • npm install -D @types/dotenv
  • npm install ai@beta @ai-sdk/openai@beta zod
  • npm install -D @types/node tsx typescript
  • npm install -D @types/json-schema

Next, add a .env file to the project root and paste your OpenAI API key into it:

OPENAI_API_KEY=""

Now that you’ve added your API key and installed all the dependencies, you’ll notice that your package.json file has been updated with these changes.

{
  "name": "demo1",
  "version": "1.0.0",
  "main": "index.js",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "tsc && node dist/index.js"
  },
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "@types/dotenv": "^6.1.1",
    "@types/json-schema": "^7.0.15",
    "@types/node": "^24.7.2",
    "tsx": "^4.20.6",
    "typescript": "^5.9.3"
  },
  "dependencies": {
    "@ai-sdk/openai": "^3.0.0-beta.29",
    "ai": "^6.0.0-beta.47",
    "dotenv": "^17.2.3",
    "zod": "^4.1.12"
  }
}

In your setup, the package name and version may differ. Next, create a src folder in your project and add an index.ts file inside it. In that file, log the value of your OpenAI key to verify that the project has been configured correctly.

import dotenv from 'dotenv';
dotenv.config();
const openaiApiKey: string | undefined = process.env.OPENAI_API_KEY;
console.log(`OpenAI API Key: ${openaiApiKey}`);

When you execute npm run start in your terminal, you should see your OpenAI key printed in the console.

Generating Text

Using the Vercel AI SDK, you can generate responses from an LLM in just a few lines of code. Call the generateText function, passing the model instance and your prompt.

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const model = openai("gpt-3.5-turbo");

export const chat = async(prompt: string) => {
    const { text } = await generateText({
        model,
        prompt
    })
    return text;
}

const result = await chat("what is value of pi ?");
console.log("Result: ", result);

As shown above, we have defined a model and passed it to the generateText function. The Vercel AI SDK makes it simple to integrate models from multiple providers. For instance, if you want to use a model other than OpenAI’s GPT-3.5, declare that model and pass it to the generateText function. Make sure the corresponding provider package (for example, @ai-sdk/anthropic for Anthropic models) is installed in your project.

import { anthropic } from '@ai-sdk/anthropic';

const model = anthropic("claude-2");

Streaming Text

The Vercel AI SDK makes it simple to stream generated text token-by-token. To stream the response, use the streamText function.

import { streamText } from 'ai';

export const chat = async (prompt: string) => {
    const { textStream } = await streamText({
        model,
        prompt
    });
    return textStream;
}

The streamText function returns a textStream that you can iterate over using a for await...of loop to process the response chunk by chunk. Here, we’re printing each chunk to the console, but you could also stream it to a client or save it to a file.

const result = await chat("what is value of pi ?");
for await (const chunk of result) {
     process.stdout.write(chunk);
}

Along with the text stream, the streamText function provides other useful properties like:

  • content
  • text
  • reasoningText
  • etc.

Here’s an example of how you can use the content property:

export const chat = async (prompt: string) => {
    const { content } = await streamText({
        model,
        prompt
    });
    return content;
}

const result = await chat("what is value of pi ?");
console.log("Result is : ", result);

You should get output as shown below:

text: The value of pi is approximately 3.14159…

Streaming Text with Image Prompt

The Vercel AI SDK provides a simple API for sending image inputs to multimodal models such as OpenAI’s GPT-4o. By setting the message type to image, you can include an image in your prompt and receive a corresponding model-generated response.

import fs from 'node:fs';

const model = openai("gpt-4o");
async function readImage() {
  const result = streamText({
    model: model,
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Describe the image in detail.' },
          { type: 'image', image: fs.readFileSync('./m.jpg') },
        ],
      },
    ],
  });

  for await (const textPart of result.textStream) {
    process.stdout.write(textPart);
  }
}

readImage();

In the example above, both text and image inputs are sent to OpenAI’s multimodal GPT-4o model. The model then streams a response that describes and explains the provided image.

For the above example, you should get a response as below:

The image showcases a breathtaking mountain landscape… description goes on for three paragraphs.

Different Types of Prompts

Prompts are the instructions provided to an LLM that define what task it should perform. Each LLM supports different message formats for handling these instructions. The Vercel AI SDK offers a simplified abstraction over these formats, providing a unified interface for working with multiple model providers. It mainly supports three types of prompts:

  1. Text Prompts
  2. Message Prompts
  3. System Prompts

A text prompt is a simple string that serves as input to the model. This format is ideal for straightforward generation tasks, such as producing variations of a specific message or pattern.

In the Vercel AI SDK, you can define a text prompt using the prompt property in functions such as generateText or streamText. As shown in the example below, the string prompt is passed directly to the prompt property of the generateText function.

export const chat = async() => {
    const { text } = await generateText({
        model,
        prompt:'What is value of pi ?'
    })
    return text;
}

You can also use template literals to pass dynamic values into a text prompt. This approach allows you to create flexible and reusable prompts for your models, as shown in the example below.

const model = openai("gpt-4o");
export const chat = async(days:number, destination:string) => {
    const { text } = await generateText({
        model,
        prompt:`Give me ${days} days vacation plan in ${destination}`
    })
    return text;
}
const result = await chat(7, "Egypt");
console.log("Result is : ", result);

In this example, the number of days and the country name are dynamically passed to the text prompt to generate a context-specific response from the model.

A system prompt provides high-level guidance that shapes the chat model’s behavior during a conversation.

  • It instructs the model on how to respond.
  • It establishes rules and constraints for interaction.
  • It sets the tone, style and personality of the responses.

You can define a system prompt using the system property. In the example below, the model is instructed to provide travel suggestions in a list format.

const model = openai("gpt-4o");

const system = `You are a travel assistant. 
You help users plan their vacations by providing detailed itineraries based 
on the number of days and destination they provide. Your responses should include 
daily activities, places to visit, and any special tips for travelers.`

export const chat = async(days:number, destination:string) => {
    const { text } = await generateText({
        model,
        system: system,
        prompt:`Give me ${days} days vacation plan in ${destination}`
    })
    return text;
}
const result = await chat(7, "Egypt");
console.log("Result is : ", result);

The model’s response should follow the instructions defined in the system prompt. Let’s look at another example of a system prompt. In this case, we instruct the model to:

  • Respond only with information related to India.
  • Provide the response in JSON format.
  • For any question unrelated to India, reply that it can only answer questions about India.

const model = openai("gpt-4o");

const system = `You only answer about India in JSON Format. 
Any question not related to India should be answered with
{"message": "I can only answer questions related to India"}
`

export const chat = async(question : string) => {
    const { text } = await generateText({
        model,
        system: system,
        prompt:question
    })
    return text;
}

let prompt = "Tell me about the capital of India";
const result = await chat(prompt);
console.log("Result is : ", result);
let prompt2 = "Tell me about the capital of USA";
const result1 = await chat(prompt2);
console.log("Result is : ", result1);

For the above queries, the model should respond as shown below:

capital: New Delhi, established: 1911, etc.

A Chat App

Bringing everything together, you can build a chat application that uses text prompts, system prompts and other prompt types, as shown in the code example below.

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import dotenv from 'dotenv';

dotenv.config();

const model = openai("gpt-4o");

const system = `You only answer about India and USA. 
Any question not related to India and USA should be answered with
sorry I know only about India and USA`;

const chat = async (question: string) => {
    const { text } = await generateText({
        model,
        system: system,
        prompt: question
    });
    return text;
}

const getInput = (): Promise<string> => {
    return new Promise((resolve) => {
        process.stdout.write('Enter your question (or "q" to quit): ');
        process.stdin.once('data', (data) => {
            resolve(data.toString().trim());
        });
    });
}

async function main() {
    console.log('Welcome to AI Chat! Ask questions about India and USA.');
    console.log('Type "q" to quit.');
    console.log('-------------------');
    process.stdin.setEncoding('utf8');
    
    while (true) {
        try {
            const userInput = await getInput();
            
            if (userInput.toLowerCase() === 'q') {
                console.log('Goodbye!');
                process.exit(0);
            }
            
            console.log('Thinking...');
            const result = await chat(userInput);
            console.log('AI Response:', result);
            console.log('-------------------');
            
        } catch (error) {
            console.error('Error:', error);
            console.log('-------------------');
        }
    }
}

main().catch(console.error);

When you run the application, the model should respond only to queries related to India or the USA. For any question outside these topics, it should reply that it only knows about India and the USA.

When the user asks about the capital of France, the AI responds: Sorry, I know only about India and USA

As explained earlier, streaming model responses are straightforward with the Vercel AI SDK. You can use the streamText function to receive the output incrementally, as shown in the example below.

const chat = async (question: string) => {
    const { textStream } = await streamText({
        model,
        system: system,
        prompt: question
    });
    return textStream;
}

You can then print the streamed response as shown below:

console.log('Thinking...');
const result = await chat(userInput);
for await (const chunk of result) {
    process.stdout.write(chunk);
}

Summary

In this part of the Vercel AI SDK learning series, you learned how to set up the project, install the required dependencies and work with key concepts such as prompts, models and streaming. I hope you found this article helpful. Thank you for reading.
