Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Debugging your VSCode agent interactions


If you've spent any time working with GitHub Copilot in agent mode, you've probably hit that frustrating moment: the agent does something unexpected, picks the wrong tool, ignores a prompt file, or just… takes forever, and you have no idea why.

Until recently, your only recourse was the raw Chat Debug view: useful, but dense, and not exactly designed for quick diagnosis. That changes with the Agent Debug Log panel, available in preview as of VS Code 1.110.

What it is

The Agent Debug Log panel shows a chronological event log of everything that happens during a chat session, including tool calls, LLM requests, prompt file discovery, and errors. Think of it as a structured, human-readable trace of your agent's entire thought process, rendered right inside VS Code. It replaces the old Diagnostics chat action with a richer, more detailed view, and it's particularly valuable as your agent setup grows in complexity, with custom instructions, multiple prompt files, MCP servers, and sub-agents all interacting at once.

How to enable it

The panel is off by default. To turn it on, enable the github.copilot.chat.agentDebugLog.enabled setting in VS Code, then open it via the ... menu in the Chat view and select Show Agent Debug Logs. If you also want logs written to disk for offline analysis, enable github.copilot.chat.agentDebugLog.fileLogging.enabled as well.
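In settings.json, those two entries (setting names as given above; the surrounding structure is the standard VS Code user settings format) would look something like this:

```jsonc
{
  // Turn on the Agent Debug Log panel (preview feature)
  "github.copilot.chat.agentDebugLog.enabled": true,

  // Optional: also write the logs to disk for offline analysis
  "github.copilot.chat.agentDebugLog.fileLogging.enabled": true
}
```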



The agent debug log panel

Open the panel through the ellipsis (...) menu in the Chat view and select Show Agent Debug Logs:


Or open the Command Palette and run Developer: Open Agent Debug Log:

The panel surfaces your session data in three complementary ways:

Logs view

A chronological list of events with timestamps, event types, and summary information. You can expand each event to see full details — the complete system prompt for an LLM request, or the input and output for a tool call. You can switch between a flat list and a tree view that groups events by sub-agent and use filter options to focus on specific event types.

Summary view

Aggregate statistics about the chat session: total tool calls, token usage, error count, and overall duration. Great for a quick health check before diving into the logs.

Agent Flow Chart

A visual graph of the sequence of events and interactions between agents, making it easier to understand complex orchestrations. You can pan and zoom the flow chart and select any node to see details about that event.

Export, Import, and /troubleshoot

You can export a debug session to an OpenTelemetry (OTLP) JSON file to share with others or analyze offline, and import a previously exported file to view it in the Agent Debug Log panel. This is a big deal for team debugging — you can now hand off a session log the same way you'd share a stack trace.

You can even take it one step further: a /troubleshoot skill exists that reads the JSONL debug log files exported from the chat session and can help you understand why tools or sub-agents were used or skipped, why instructions or skills didn't load, what contributed to slow response times, and whether network connectivity problems occurred.

Just type /troubleshoot followed by a description of the issue and let the AI analyze its own log:

Conclusion

Agent mode has become the way I interact with Copilot for real, multi-step tasks. The more powerful my setup (custom agents, prompt files, MCP tool chains), the harder it becomes to reason about what's actually happening under the hood.

Thanks to the Agent Debug Log panel, I can finally understand what's going on. It brings the kind of observability you expect from backend services, with traces, spans, and structured events, directly into the editor. If you're building or debugging anything in agent mode, this panel should be your first stop.

More information

Debug chat interactions

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Maker Faire Returns to the City of Light


After four long years, Maker Faire returns to Paris on April 11–12, 2026. From its former home in the high-tech halls of the Cité des Sciences et de l’Industrie, the event travels not just across the city but through time. In its new home at the historic Musée des Arts et Métiers, Maker exhibits […]

The post Maker Faire Returns to the City of Light appeared first on Make: DIY Projects and Ideas for Makers.


python-1.0.1


1.0.1 - 2026-04-09

Important

Security hardening for FileCheckpointStorage: Checkpoint deserialization now flows through a restricted unpickler by default, which only permits a built-in set of safe Python types and all agent_framework types. If your application stores custom types in checkpoints, pass their "module:qualname" identifiers via the new allowed_checkpoint_types constructor parameter — otherwise loads will raise WorkflowCheckpointException. See Security Considerations for details and the opt-in format.
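As a sketch of how a restricted unpickler with a "module:qualname" allow-list works in general — this uses only the standard pickle module; the RestrictedUnpickler class and SAFE_TYPES set are invented for illustration and are not the agent_framework implementation:

```python
import io
import pickle

# Baseline allow-list of harmless built-ins; anything else must be
# explicitly permitted by the caller, mirroring the idea behind the
# allowed_checkpoint_types parameter described above.
SAFE_TYPES = {"builtins:dict", "builtins:list", "builtins:str", "builtins:int"}

class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that only reconstructs allow-listed 'module:qualname' types."""

    def __init__(self, data: bytes, allowed: set):
        super().__init__(io.BytesIO(data))
        self.allowed = SAFE_TYPES | allowed

    def find_class(self, module, name):
        # pickle calls this hook whenever the stream references a global
        # (class or function); rejecting here blocks arbitrary-code gadgets.
        if f"{module}:{name}" not in self.allowed:
            raise pickle.UnpicklingError(f"{module}:{name} is not allow-listed")
        return super().find_class(module, name)

# Plain containers of primitives round-trip without touching find_class:
blob = pickle.dumps({"step": 3, "agent": "planner"})
print(RestrictedUnpickler(blob, set()).load())  # {'step': 3, 'agent': 'planner'}

# A non-listed type raises UnpicklingError unless its identifier is passed in:
import datetime
date_blob = pickle.dumps(datetime.date(2026, 4, 9))
print(RestrictedUnpickler(date_blob, {"datetime:date"}).load())  # 2026-04-09
```

The key design point is that the allow-list check happens before any type is reconstructed, so untrusted checkpoint files cannot smuggle in executable objects.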

Added

  • samples: Add sample documentation for two separate Neo4j context providers for retrieval and memory (#4010)
  • agent-framework-azure-cosmos: Add Cosmos DB NoSQL checkpoint storage for Python workflows (#4916)

Changed

  • docs: Remove pre-release flag from agent-framework installation instructions (#5082)
  • samples: Revise agent examples in README.md (#5067)
  • repo: Update CHANGELOG with v1.0.0 release (#5069)
  • agent-framework-core: [BREAKING] Fix handoff workflow context management and improve AG-UI demo (#5136)
  • agent-framework-core: Restrict persisted checkpoint deserialization by default (#4941)
  • samples: Bump vite from 7.3.1 to 7.3.2 in /python/samples/05-end-to-end/ag_ui_workflow_handoff/frontend (#5132)
  • python: Bump cryptography from 46.0.6 to 46.0.7 (#5176)
  • python: Bump mcp from 1.26.0 to 1.27.0 (#5117)
  • python: Bump mcp[ws] from 1.26.0 to 1.27.0 (#5119)

Fixed

  • agent-framework-core: Raise clear handler registration error for unresolved TypeVar annotations (#4944)
  • agent-framework-openai: Fix response_format crash on background polling with empty text (#5146)
  • agent-framework-foundry: Strip tools from FoundryAgent request when agent_reference is present (#5101)
  • agent-framework-core: Fix test compatibility for entity key validation (#5179)
  • agent-framework-openai: Stop emitting duplicate reasoning content from response.reasoning_text.done and response.reasoning_summary_text.done events (#5162)

Full Changelog: python-1.0.0...python-1.0.1


Migrate Oracle Workloads to PostgreSQL Using AI-Powered Tools in the VS Code PostgreSQL Extension

From: Microsoft Azure Developers
Duration: 14:09
Views: 8

This week on Azure Friday, Scott Hanselman talks with Jonathon Frost about AI-enhanced migration from Oracle to PostgreSQL using the VS Code PostgreSQL extension. See how developers can automate schema conversion, transform application code, and validate results using an intelligent, agent-driven workflow.

🌮 Chapter Markers:

0:00 – Introduction
0:36 – Model Hyperparameter Tuning, Agent Orchestration, and Determinism
2:25 – Architectural Overview of AI-enhanced Schema Migration
4:28 – How it is Built on the VS Code Extension for PostgreSQL
4:56 – Self-correction of AI-enhanced Migration
5:59 – Migration Demo
6:33 – Connect to Oracle Database
7:23 – Connect to PostgreSQL Database
7:50 – Connect to Azure OpenAI Endpoint
8:45 – Run Migration
9:37 – Review Completed Migration Report
11:07 – Visualize Schema of PostgreSQL Database
12:17 – Side-by-side File Diff
13:30 – Where to get the Extension and Learn More

🌮 Resources:
Learn Docs: https://aka.ms/pg-migration-tooling
VS Code Extension Marketplace Page: https://aka.ms/pgsql-vscode
Azure Product Page: https://azure.microsoft.com/products/postgresql/
Blog: https://aka.ms/pg-blog-orcl

🌮 Follow us on Social:
Jonathon Frost | https://linkedin.com/in/jjfrost
Azure Database for PostgreSQL | https://www.linkedin.com/company/azure-database-for-postgresql/
Scott Hanselman | @SHanselman – https://x.com/SHanselman
Azure Friday | @AzureFriday – https://x.com/AzureFriday

Blog - https://aka.ms/azuredevelopers/blog
Twitter - https://aka.ms/azuredevelopers/twitter
LinkedIn - https://aka.ms/azuredevelopers/linkedin
Twitch - https://aka.ms/azuredevelopers/twitch

#azuredeveloper #azure


All of AI's New Models and Tools

From: AIDailyBrief
Views: 32

Overview of major model and agent launches: Meta's Muse/Spark multimodal models and personal-agent focus, Google's Gemini notebooks for shared task contexts, and open-source GLM 5.1 pushing coding benchmarks. Benchmark comparisons show GLM 5.1 and Muse leading on coding and visual reasoning, while Anthropic's Claude/Mythos faced a restricted rollout over cybersecurity concerns. New managed-agent stacks and agent harnesses promise rapid prototype-to-production flows and persistent-memory assistants, with tooling, governance, and safety challenges in the spotlight.

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at http://patreon.com/aidailybrief
Learn more about the show https://aidailybrief.ai/


Use AI to Achieve Operational Excellence with the Well-Architected Framework practices

From: Microsoft Developer
Duration: 14:30
Views: 112

In this episode, we explore how AI enhances Operational Excellence, from observability and troubleshooting to automation, governance, and human-in-the-loop decision making.

In this conversation, Boris Scholl and Niels Buit discuss where AI delivers real value—and where architects must carefully manage risks and tradeoffs.

If you’re a cloud architect, platform engineer, or developer responsible for operating systems in production, this video will help you understand how to apply AI practically and assess risks and tradeoffs in real-world environments.

✅ What You’ll Learn

How AI fits into the Operational Excellence pillar of the Well-Architected Framework
How AI improves observability, dependency mapping, and proactive issue detection
What AI can do today in root cause analysis, summarization, and remediation
The risks of non-deterministic systems and how to mitigate them
When and why to keep a human in the loop
Security, privacy, and governance considerations for AI-driven operations
How AI can turn WAF from a static checklist into a living, evolving system

00:06 – AI and Operational Excellence in the Well-Architected Framework
00:48 – Running AI vs Using AI to Improve Operations
01:21 – Why Day Two Operations Matter More Than Day One
02:05 – Observability, Deployment, and Troubleshooting
02:40 – Why Observability Is the Best Starting Point for AI
03:38 – Non-Deterministic AI and Operational Risk
04:05 – Real-World AI Use in Engineering Operations
04:39 – AI Opportunities: Simulation and Auto-Remediation
05:00 – Getting Started with OpenTelemetry
05:48 – What AI Enables in Observability Today
06:24 – Dependency Graphs and Predictive Failure Detection
07:01 – Human-in-the-Loop for Critical Incidents
08:00 – Fail Fast, Guardrails, and Responsible AI
08:50 – Hallucinations, RCA Accuracy, and Summarization Risk
09:29 – AI-as-a-Judge and Evaluation Patterns
10:17 – Security and Privacy Considerations
11:30 – How WAF Helps Navigate AI Tradeoffs
12:19 – Making WAF a Living, AI-Driven Framework
13:15 – Key Takeaways and Final Thoughts

🔗 Resources & Links
🚀 Azure Well-Architected Framework
https://learn.microsoft.com/azure/well-architected/
Microsoft Learn
https://learn.microsoft.com/en-us/azure/well-architected/operational-excellence/
⚡ Try Azure for free
https://aka.ms/AzureFreeTrialYT

👥 Speakers
Boris Scholl
https://www.linkedin.com/in/bscholl/

Niels Buit
https://www.linkedin.com/in/nielsbuit/

#Azure
#AI
#OperationalExcellence
#WellArchitectedFramework
#CloudArchitecture
#Observability
#OpenTelemetry
#DevOps
#ResponsibleAI
#AzureArchitecture
#MicrosoftDeveloper
#CloudComputing
