Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Former Bing boss says Windows 11 killed the vertical taskbar for symmetric UX, says it was the best productivity feature


Shopify CTO Mikhail Parakhin, who previously served as Bing search boss and also took on broader responsibilities as the head of a new Windows team, says he fought hard against Microsoft’s decision to remove the movable taskbar in Windows 11, and that they dropped it to focus on a new “symmetric panes” UX.

Windows 11 is not exactly a bad operating system. There are things I like about Windows 11, and then there are things I straight-up hate, and that is largely true for all products Microsoft makes lately. One of the most upvoted feedback items in the Feedback Hub is the ability to move the taskbar, and the second is a toggle to resize it, similar to Windows 10.

Microsoft’s former Windows boss, who also advocated for Bing pop-ups in Windows 11 and Edge, considers the movable taskbar, particularly a vertical taskbar with auto-hide turned on, to be the best UX for productivity.

Windows 11 auto hide taskbar
You can automatically hide the taskbar from Settings > Personalization > Taskbar

Mikhail also says macOS copied the idea of a “disappearing” taskbar from Windows, as Microsoft’s operating system had it since 1995.

“Yes, obviously, vertical and disappearing: Windows had it since 95, that’s how I use it my whole life. Mac copied it from Windows when it acquired Dock in macOS,” Microsoft’s former Windows boss responded when a user told him that they prefer the “macOS option of having it disappear.”

But why was the movable taskbar removed from Windows 11?

Windows 11 taskbar on the bottom

Microsoft dropped the vertical taskbar because it wanted to focus on the “centered-Start menu” and create a symmetric pane UX where Windows is meant to feel balanced and predictable, almost like two “side panels.”

That means Microsoft not only wanted to create a centered Start menu UX, but also give each side of the screen a clear “job.”

Windows 11 right side

On the right side of the screen, we have the “controls” area, such as your quick settings to turn on or off features like Wi-Fi and Bluetooth, and also manage your notifications.

Windows 11 Quick Settings

There are also plans to add back Outlook Agendas to the Notification Center in Windows 11. We are not getting Agendas on the left side because the right side is meant to be the “control” and “notifications” region, while the left side is all about information.

Windows 11 Calendar Agenda View in Notification Center

On the left side, we have widgets like weather and MSN, which “pushed” the Start menu to the center, according to the former Windows/Bing boss.

Windows 11 left side

Microsoft’s designers envisioned a clear UI and UX hierarchy where Windows 11 puts all your system controls on the right instead of flyouts or pop-ups coming out of every area, and shows information on the left side.

Feature rich Start menu setup in Windows 11 after the new Start menu update

This is also why the vertical or movable taskbar did not make the cut, as a taskbar on the left would compete with the widgets panel, and a taskbar on the right would run into the notifications area.

“The vision was to create symmetric panes: you have notification/system controls/etc. pane on the right, Weather/Widgets/News pane on the left. That pushed Start menu into the center position. If you have the taskbar vertically, it starts conflicting with the panes…,” the former Windows and Bing boss argues in a post on X.

Mikhail’s statement aligns with what we heard from Microsoft’s designers in 2021. As Windows Latest previously exclusively reported, Windows 11 designers were against the idea of a movable taskbar because it breaks the “flow” and causes a “sudden reflow…”

“When you think about having the taskbar on the right or the left, all of a sudden the reflow and the work that all of the apps have to do to be able to have a wonderful experience in those environments is just huge,” a Microsoft designer who worked on the new UI/UX argued back then.

The good news is that Microsoft is internally planning to bring back the “movable” taskbar to Windows 11, and you will be able to resize it as well, similar to how you could in Windows 10 and all older versions of Windows for decades.

Microsoft also plans to reduce Copilot integration in Windows and focus on performance optimization, as it hopes to win back users in 2026.

The post Former Bing boss says Windows 11 killed the vertical taskbar for symmetric UX, says it was the best productivity feature appeared first on Windows Latest


BONUS: Why Embedding Sales with Engineering in Stealth Mode Changed Everything for Snowflake With Chris Degnan



In this episode, we talk about what it really takes to scale go-to-market from zero to billions. We interview Chris Degnan, a builder of one of the most iconic revenue engines in enterprise software at Snowflake. This conversation is grounded in the transformation described in his book Make It Snow—the journey from early-stage chaos to durable, aligned growth.

Embedding Sales with Engineering While Still in Stealth

"I don't expect you to sell anything for 2 years. What I really want you to do is get a ton of feedback and get customers to use the product so that when we come out of stealth mode, we have this world-class product."

 

Chris joined Snowflake when there were zero customers and the company was still in stealth mode. The counterintuitive move of embedding sales next to engineering so early wasn't about driving immediate revenue, it was about understanding product-market fit. Chris's job was to get customers to try the product, use it for free, and break it. And break it they did. This early feedback led to material changes in the product before general availability. The approach helped shape their ideal customer profile (ICP) and gave the engineering team real-world validation that shaped Snowflake's technical direction. In a world where startups are pressured to show revenue immediately, Snowflake's investors took the opposite approach: focus on building a product people cannot live without first.

Why Sales and Marketing Alignment Is Existential

"If we're not driving revenue, if the revenue is not growing, then how are we going to be successful? Revenue was king."

 

When Denise Persson joined as CMO, she shifted the conversation from marketing qualified leads (MQLs) to qualified meetings for the sales team. This simple reframe eliminated the typical friction between sales and marketing. Both leaders shared challenges openly and held each other accountable. When someone in either organization wasn't being respectful to the other team, they addressed it directly. Chris warns founders against creating artificial friction between sales and marketing: "A lot of founders who are engineers think that they want to create this friction between sales and marketing. And that's the opposite instinct you should have." The key insight is treating sales and marketing as a symbiotic system where revenue is the shared north star.

Coaching Leaders Through Hypergrowth

"If there's a problem in one of our organizations, if someone comes with a mentality that is not great for us, we're gonna give direct feedback to those people."

 

Chris and Denise maintained tight alignment at the top level of their organizations through four CEO transitions. Their partnership created a culture of accountability that cascaded through both teams. When either hired senior people who didn't fit the culture, they investigated and addressed it. The coaching approach wasn't about winning by authority—it was about maintaining partnership and shared accountability for results. This required unlearning traditional management approaches that pit departments against each other and instead fostering genuine collaboration.

Cultural Behaviors That Scale (And Those That Don't)

"We got dumb and lazy. We forgot about it. And then we decided, hey, we're gonna go get a little bit more fit, and figure out how to go get the new logos again."

 

Chris describes himself as a "velocity salesperson" with a hyper-focus on new customer acquisition. This focus worked brilliantly during Snowflake's growth phase—land customers, and the high net retention rate would drive expansion. However, as Snowflake prepared to go public, they took their foot off the gas on new logo acquisition, believing not all new logos were equal. This turned out to be a mistake. In his final year at Snowflake, working with CEO Sridhar Ramaswamy, they redesigned the sales team to reinvigorate the new logo acquisition machine. The lesson: the cultural behaviors that fuel early success must be consciously maintained and sometimes redesigned as you scale.

Keeping the Message Narrow Before Going Platform

"Eventually, I know you want to be a platform. But having a targeted market when you're initially launching the company, that people are spending money on, makes it easier for your sales team."

 

Snowflake intentionally positioned itself in the enterprise data warehousing market—a $10-12 billion annual market with 5,000-7,000 enterprise customers—rather than trying to sound "bigger" as a platform play. The strategic advantage was accessing existing budgets. When selling to large enterprises that go through annual planning processes, fitting into an existing budget means sales cycles of 3-6 months instead of 9-18 months. Yes, competition eventually tried to corner Snowflake as "just a cute data warehouse," but by then they had captured significant market share and could stretch their wings into the broader data cloud opportunity.

Selling Consumption-Based Products to Fixed-Budget Buyers

"Don't believe anything I say, try it."

 

One of Snowflake's hardest challenges was explaining their elastic, consumption-based architecture to procurement and legal teams accustomed to fixed budgets. In 2013-2015, many CIOs still believed data would stay in their data centers. Snowflake's model—where customers could spin up a thousand servers for 4 hours, load data, while analysts ran queries without performance impact—seemed impossible. Chris's approach was simple: set up proof of concepts and pilots. Let the technology speak for itself. The shift from fixed resources to elastic architecture required changing not just technology but entire mindsets about how data infrastructure could work.

 

About Chris Degnan

Chris Degnan is a builder of one of the most iconic revenue engines in enterprise software. As the first sales hire at Snowflake, he helped scale the company from zero customers to billions in revenue. Chris co-authored Make It Snow: From Zero to Billions with Denise Persson, documenting their journey of building Snowflake's go-to-market organization. Today, Chris advises early-stage startups on building their go-to-market strategies and works with Iconiq Capital, the venture firm that led Snowflake's Series D round.

 

You can link with Chris Degnan on LinkedIn and learn more about the book at MakeItSnowBook.com.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260214_Chris_Degnan_BONUS.mp3?dest-id=246429

Introducing the Coding Agent Explorer (.NET)


I’m excited to introduce you to the Coding Agent Explorer, a new open-source .NET teaching tool I’ve created that lets you see exactly what happens under the hood when an AI coding agent works on your code.

How Claude Code and the Coding Agent Explorer work together

It currently supports Claude Code (with more agents on the roadmap), and it is designed to help developers understand and adopt agentic development. If you’ve ever wondered what’s really going on between your prompts and the code changes that magically appear, this tool makes the invisible visible.

Here’s what you’ll learn:

  • Why understanding coding agents matters for every developer
  • What the Coding Agent Explorer does and how it works
  • How to get started in just a few minutes
  • What’s on the roadmap

Let’s dive in!

What Is Agentic Development?

Before we dive into the tool itself, let’s talk about what agentic development means. In traditional AI-assisted coding, you might get autocomplete suggestions or ask a chatbot a question and then copy the answer into your code. But agentic development takes this much further.

In agentic development, an AI agent operates autonomously inside your development environment. It can read your files, search your codebase, run commands, edit code, and even verify its own changes.

Instead of just suggesting code to you, the agent acts on your behalf, working through multi-step tasks in a loop: it thinks about what to do, takes an action, observes the result, and then decides what to do next.
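To make that loop concrete, here is a minimal sketch in C#. Every type and method name in it is hypothetical and exists only to illustrate the think-act-observe cycle; it is not code from Claude Code or from any real agent framework.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical types, for illustration only.
record ToolCall(string Name, string Argument);
record ModelTurn(string Text, List<ToolCall> ToolCalls, bool IsFinal);

class AgentLoopSketch
{
    static async Task Main()
    {
        var transcript = new List<string> { "user: add a null check to Parse()" };

        while (true)
        {
            ModelTurn turn = await CallModelAsync(transcript);       // think
            transcript.Add($"assistant: {turn.Text}");

            if (turn.IsFinal) break;                                 // task complete

            foreach (var call in turn.ToolCalls)                     // act
            {
                string observation = await RunToolAsync(call);
                transcript.Add($"tool[{call.Name}]: {observation}"); // observe, then loop
            }
        }

        Console.WriteLine(string.Join(Environment.NewLine, transcript));
    }

    // Stand-ins for the real LLM call and tool execution.
    static Task<ModelTurn> CallModelAsync(List<string> transcript) =>
        Task.FromResult(new ModelTurn("Done.", new List<ToolCall>(), IsFinal: true));

    static Task<string> RunToolAsync(ToolCall call) =>
        Task.FromResult($"ran {call.Name}");
}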

Several tools have emerged in this space, Claude Code among them.

These tools are rapidly changing how developers write software. But with all this autonomy, it becomes even more important to understand what the agent is actually doing! That’s where the Coding Agent Explorer comes in.

Why I Built This Tool

In my role as an instructor creating training material (https://tn-data.se/courses/), I often create hands-on tools to help participants better understand, visualize, and grasp the various concepts that I teach.

The goal is that all participants should have a solid mental model of how things actually work under the hood. This is the same approach I took with the CloudDebugger, an open-source tool I created for teaching Azure to developers.

With AI coding agents becoming part of the mainstream developer toolkit, I saw the same need. For most of us, the experience feels like a black box: you type a prompt, something happens behind the scenes, and code appears.

But what is really going on?

I wanted a tool that lets you peek inside that black box and see every API call, every tool invocation, and every decision the agent makes. This tool helps you answer questions like:

  • How are tools and MCP servers used by the agent?
  • What does the /init command actually do?
  • What happens when I enable plan mode?
  • How many tokens do my tools and system prompt consume?
  • How can an agent read and modify my files?
What Happens When I Use Claude Code

What Is the Coding Agent Explorer?

The Coding Agent Explorer is a .NET reverse proxy combined with a real-time web dashboard. It sits between your coding agent (currently Claude Code) and the Anthropic API, intercepting every request and response that flows between them. Everything it captures is displayed in a live dashboard that you can explore while the agent is working or after the work is complete.

Here’s how the architecture looks at a high level:

The proxy captures all API traffic and streams it to the dashboard using SignalR for real-time updates. You see everything as it happens with no delay.

On the technology side, the project is built with:

  • .NET 10 and ASP.NET Core
  • YARP (Yet Another Reverse Proxy) for the proxy layer
  • SignalR for real-time communication between the proxy and the dashboard
  • A vanilla HTML/JS/CSS frontend
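As a rough sketch (my own assumptions, not the project’s actual source), wiring a YARP reverse proxy plus a SignalR hub in ASP.NET Core looks something like this; the hub name TrafficHub, the /hub path, and the ReverseProxy configuration section name are placeholders.

// Program.cs sketch — requires the Yarp.ReverseProxy NuGet package.
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

// YARP reads its routes/clusters from configuration (section name assumed here).
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));
builder.Services.AddSignalR();

var app = builder.Build();

app.MapReverseProxy();          // forwards the agent's API traffic upstream
app.MapHub<TrafficHub>("/hub"); // the dashboard subscribes here for live updates

app.Run();

// Hypothetical hub: captured requests would be pushed to connected dashboards,
// e.g. await hubContext.Clients.All.SendAsync("RequestCaptured", capturedDto);
public class TrafficHub : Hub { }

In the real tool, the proxy additionally records each request/response pair before forwarding it, which is where the dashboard data comes from.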

Two Ways to Explore With the Coding Agent Explorer

The dashboard provides two complementary views for exploring what the coding agent is doing. Each gives you a different perspective on the same data.

The HTTP Inspector

The HTTP Inspector is the raw, detailed view. It shows every API call in a table with the key information at a glance: timestamp, HTTP method, model used, token counts, and timing.

You can click on any row to inspect the full details:

  • Request and response headers in their entirety
  • Request and response bodies, formatted for easy reading
  • Real-time response data, so you can see the full response as it was received by the agent
  • Token usage breakdown: input tokens, output tokens, cache creation tokens, and cache read tokens
  • Performance metrics: total duration and time-to-first-token

This view is especially valuable when you want to understand the technical details of the API communication. You can see exactly what the agent sends to the model, what the model responds with, and how long each step takes.

The HTTP Inspector in the Coding Agent Explorer

Then, you can click on a given request and view all the request and response details:

This view is useful, but it is not the easiest way to understand how an agent works. That is why the tool also offers the Conversation View, which helps you make sense of all of these requests.

The Conversation View

The Conversation View takes the same API data and renders it in a chat-style format that’s easier to follow. Instead of raw HTTP traffic, you see the conversation as it unfolds:

  • System prompts that set up the agent’s behavior
  • User messages (your prompts)
  • Assistant responses with the agent’s reasoning
  • Tool calls like Read, Write, Bash, Grep, and others, showing exactly what the agent does
  • Tool results showing what came back from each tool invocation
  • MCP tool usage, letting you explore how the agent presents and invokes MCP servers when communicating with the LLM

Large content sections are also collapsible, so you can focus on the parts that interest you. Each message also provides token and character statistics, giving you a sense of how much context each part of the conversation consumes.

This is the view I use the most in my workshops. It gives you a clear picture of how the agent “thinks”: how it breaks down a problem, which tools it decides to use, and how it iterates toward a solution.

The Claude Code Conversation View example from the Coding Agent Explorer

What Can You Learn From It?

Once you start monitoring the agent’s API calls, you’ll discover many things that aren’t obvious from the outside. Here are some of the insights that the Coding Agent Explorer reveals to us:

  • How prompts are constructed.
    You’ll see the actual system prompts that Claude Code sends to the model. These are far more detailed than you might expect. Impressively, you’ll get to see instructions about tool usage, safety, code style, and more.
  • Tool usage patterns.
    You’ll also get to see how the agent calls tools like Read, Write, Bash, Grep, Glob, and others. You’ll notice how it reads files before editing them, how it searches for code, and how it verifies its changes.
  • Token economics.
    It reveals what costs tokens and how prompt caching works. You’ll see cache creation tokens (when the model stores a prompt prefix), and cache read tokens (when it reuses a cached prefix). This is key for understanding performance and cost with these agents. A small sketch after this list shows where these numbers appear in a captured response.
  • The agentic conversation loop.
    Another cool feature is seeing how the back-and-forth between the agent and the API works. Each “turn” involves the model generating a response (potentially with tool calls), the agent executing those tools, and then sending the results back to the model. This loop continues until the task is complete.
  • Real-time observation.
    You’ll be able to see the entire conversation unfold in real time as the agent works, giving you immediate insight into what it’s doing and why.
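As a concrete illustration of the token-economics point, here is a small sketch of pulling those numbers out of a captured response body. The field names (input_tokens, output_tokens, cache_creation_input_tokens, cache_read_input_tokens) follow Anthropic’s Messages API; the JSON fragment and the helper class are invented for illustration.

using System;
using System.Text.Json;

class UsageSketch
{
    static void Main()
    {
        // Illustrative captured response fragment (not real data).
        string body = """
        { "usage": { "input_tokens": 1200, "output_tokens": 350,
                     "cache_creation_input_tokens": 900, "cache_read_input_tokens": 4100 } }
        """;

        using var doc = JsonDocument.Parse(body);
        var usage = doc.RootElement.GetProperty("usage");

        int input = usage.GetProperty("input_tokens").GetInt32();
        int output = usage.GetProperty("output_tokens").GetInt32();
        int cacheWrite = usage.GetProperty("cache_creation_input_tokens").GetInt32();
        int cacheRead = usage.GetProperty("cache_read_input_tokens").GetInt32();

        Console.WriteLine($"input {input}, output {output}, cache write {cacheWrite}, cache read {cacheRead}");
    }
}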

Have you wondered why Claude Code sometimes seems to “think” for a while before responding? The Conversation View shows you exactly what’s happening: the agent might be reading multiple files, searching for patterns, or analyzing code before it starts writing its response to you.

Getting Started

Getting the Coding Agent Explorer running doesn’t take as long as you might think; it takes just a few minutes. Here’s the quick version (the GitHub README has full details):

Prerequisites: You’ll need the .NET 10 SDK installed.

1. Clone and run the project:

				
git clone https://github.com/tndata/CodingAgentExplorer.git
cd CodingAgentExplorer
dotnet run

This starts three endpoints:

  • The reverse proxy on port 8888
  • The web dashboard on ports 5000 (HTTP) and 5001 (HTTPS).

The dashboard opens automatically in your browser.

2. Point Claude Code at the proxy:

On Windows (cmd):

set ANTHROPIC_BASE_URL=http://localhost:8888

On Windows (PowerShell):

$env:ANTHROPIC_BASE_URL = "http://localhost:8888"

The repository also includes EnableProxy.bat and DisableProxy.bat scripts for convenience. These only affect the current terminal session, so closing the terminal automatically clears the setting.

3. Use Claude Code as normal.

Every API call will now flow through the proxy, and you’ll see it appear in the dashboard in real time.

Note: The proxy only listens on localhost and is not exposed to the network. API keys are automatically redacted from the stored data. Request data is kept in memory only (up to 1,000 requests), with no persistence to disk.
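To illustrate what that redaction can look like, here is a small hedged sketch; the helper and the exact header list are my own illustration, not the tool’s actual implementation.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http.Headers;

static class RedactionSketch
{
    static readonly HashSet<string> Sensitive =
        new(StringComparer.OrdinalIgnoreCase) { "x-api-key", "Authorization" };

    // Copies headers into a storable shape, masking anything secret.
    public static Dictionary<string, string> Redact(HttpHeaders headers) =>
        headers.ToDictionary(
            h => h.Key,
            h => Sensitive.Contains(h.Key) ? "[REDACTED]" : string.Join(", ", h.Value));
}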

It's Time to Understand How Coding Agents Work

AI coding agents are becoming a real part of the daily developer workflow. More and more developers and teams are integrating them into how they write, review, and maintain code.

As .NET developers, we have a tradition of wanting to understand the tools we use, and we’ll ultimately need to bring the same curiosity and rigor into understanding how AI coding agents work, too.

For example, while writing the blog post DefaultAzureCredential Under the Hood, I cloned the Azure SDK for .NET repository and patched the code locally to better understand how the token credential requests access tokens from Azure. That hands-on exploration gave me insights that go beyond what the documentation alone can provide.


In the same spirit, the Coding Agent Explorer lets us examine in detail how Claude Code uses its tools:

Example of how Claude Code uses its tools in the Coding Agent Explorer

In my workshops, I often say, “The things you’re scared of, you should do more often.”

If coding agents feel like magic or a mystery, that’s exactly why you should look under the hood. Understanding what the agent does (and doesn’t do) helps you write better prompts, catch mistakes earlier, and build trust in the tool.

I Want Your Feedback!

The Coding Agent Explorer is open-source, and I want to improve it with your help. If you find a bug, have a feature request, or want to share your experience with the tool, please create an issue on GitHub. I read every issue and appreciate all feedback.

Contributions are also welcome!

The codebase is intentionally kept simple (single NuGet dependency, vanilla frontend) to make it easy for anyone to jump in.

Fun fact: The first pull request arrived four days after I published the tool, before I had even announced it.

Currently, the tool only supports Claude Code with the Anthropic API. Support for other coding agents and API providers is on the roadmap. If there’s a specific agent you’d like to see supported, let me know in the issues. Your input helps me prioritize what to build next.

Want to Learn Agentic Development?

If this topic interests you, I’d love to help you go deeper. I am working on a new workshop called “Agentic Development with Claude Code“, where we explore how coding agents work, how to use them effectively, and how to build workflows around them. The Coding Agent Explorer is one of the tools that I use in the workshop to help participants see what’s really happening behind the scenes.

You can read more about the Agentic Development workshop and other development courses here: Agentic Development & Development Workshops.

I also have a presentation called “How Does a Coding Agent Work?” that covers the architecture and inner workings of AI coding agents. Contact me if you’d like me to run this presentation at your company or conference.

More information about my AI development talks is available here: AI Development Talks & Presentations

Frequently Asked Questions

If you’re considering using the Coding Agent Explorer with Claude Code, these are the most common questions about setup, security, performance, and intended use.

Does this tool work with coding agents other than Claude Code?

Not yet. The Coding Agent Explorer currently supports only Claude Code with the Anthropic API. Support for other coding agents and API providers is on the roadmap. If there’s a specific agent you’d like to see supported, let me know on GitHub.

Does the proxy affect Claude Code’s performance?

The proxy adds minimal overhead. It captures traffic as it passes through, but does not modify or delay requests or responses in any meaningful way.

Is my API key safe?

Yes. API keys (x-api-key and Authorization headers) are automatically redacted from all stored request data. The proxy only listens on localhost, so it is never exposed to the network. All captured data is kept in memory only and is lost when you stop the application.

Do I need to change my code or project to use this?

No. You only need to set a single environment variable (ANTHROPIC_BASE_URL) to point Claude Code at the proxy. Everything else works exactly as before. When you’re done, just close the terminal or clear the variable.

Can I use this in production?

The Coding Agent Explorer is designed as a development and teaching tool. It is not intended for production use. Use it locally when you want to learn, debug, or demonstrate how a coding agent works.

About the Author

Tore Nestenius is a Microsoft MVP in .NET and a senior .NET consultant, instructor, and software architect with over 25 years of experience in software development. He specializes in .NET, ASP.NET Core, Azure, identity architecture, and application security, helping development teams design secure, scalable, and maintainable systems.

Tore delivers .NET workshops, Azure training, and technical presentations for companies and development teams across Europe. His focus is on practical, hands-on learning that helps developers understand modern tooling, cloud architecture, and AI-assisted development.

Learn more on his .NET blog at nestenius.se or explore workshops and training at tn-data.se.


The post Introducing the Coding Agent Explorer (.NET) appeared first on Personal Blog of Tore Nestenius | Insights on .NET, C#, and Software Development.


Microsoft Agent Framework: Implementing an AI Email Marketing Agent


In an earlier post in this series, we built an AI agent that researches the latest AI news and generates a weekly digest newsletter as a local markdown file.

I’ve been using that agent personally for several weeks now to stay on top of what’s happening across Microsoft, Google, Anthropic, and OpenAI.

It’s become part of my weekly routine.

I had an idea.  What if my blog subscribers could get this digest too?

In this post, we’ll take that existing agent and extend it so that instead of just saving a file to disk, it automatically creates and sends an email broadcast via AWeber.

AWeber is an email marketing tool. I use it to capture email addresses and communicate with blog subscribers.

This post shows how to:

  • integrate AWeber’s REST API using OAuth 2.0
  • convert our markdown newsletter into email-safe HTML
  • give the agent the ability to schedule broadcasts for delivery

 

Let’s dig in.

~

What We’re Building On

If you haven’t read the previous post, here’s a quick recap. We built an AI agent using the Microsoft Agent Framework with OpenAI background responses that:

  • Searches the Microsoft Foundry blog for AI platform updates
  • Fetches `Microsoft Agent Framework` commits and releases from GitHub
  • Searches RSS feeds (TechCrunch, Wired, Azure Blog) for company-specific AI news
  • Compiles everything into a structured markdown newsletter with sections, predictions, and source links

 

The agent uses function tools to gather data and a GenerateNewsletter tool to save the result.

The background responses pattern lets us poll for progress while the agent works through multiple API calls.

All of that stays the same. What changes is the final step.

~

The Plan

Instead of just writing a markdown file, we want the agent to:

  1. Save the markdown locally (keeping the existing behaviour)
  2. Convert the markdown to email-safe HTML
  3. Create a broadcast draft in AWeber via their API
  4. Optionally schedule the broadcast for delivery

 

We also want to add a new data source: the latest blog post from jamiemaguire.net, so subscribers get a “From the Editor” section in each digest.

~

Project Structure

Here’s what the new project looks like:

Agent-Framework-8-AWeber-Newsletter/
├── Agent-Framework-8-AWeber-Newsletter.csproj
├── Program.cs
├── Agents/
│   └── AINewsServiceAgent.cs
├── Models/
│   ├── NewsItemModel.cs
│   ├── AWeberTokens.cs
│   └── AWeberApiModels.cs
├── Services/
│   └── AWeberClient.cs
└── aweber_tokens.json

The key additions compared to the previous project are the Services/AWeberClient.cs for the AWeber integration, the token/API models, and the Markdig NuGet package for markdown-to-HTML conversion.

~

Setting Up AWeber OAuth 2.0

Before we can send emails, we need to authenticate with AWeber. Their API uses OAuth 2.0 with token refresh.

One-time setup:

  1. Register your app at https://labs.aweber.com/
  2. Set the OAuth Redirect URL to urn:ietf:wg:oauth:2.0:oob (this is the out-of-band flow for desktop/CLI apps – instead of redirecting to a URL, AWeber displays the auth code on screen for you to copy)
  3. Authorise the app and exchange the code for tokens (a sketch of this exchange follows below)
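For completeness, here is a hedged sketch of step 3, the one-time exchange of the displayed authorization code for tokens. It follows the standard OAuth 2.0 authorization_code grant; the token endpoint URL and exact parameter names should be verified against AWeber’s current documentation.

// One-time exchange of the pasted auth code for access/refresh tokens (sketch only).
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

string clientId = "your_client_id";             // from the app registration
string clientSecret = "your_client_secret";
string pastedAuthCode = "code_shown_by_aweber";

using var http = new HttpClient();

var request = new HttpRequestMessage(HttpMethod.Post, "https://auth.aweber.com/oauth2/token");
request.Headers.Authorization = new AuthenticationHeaderValue("Basic",
    Convert.ToBase64String(Encoding.UTF8.GetBytes($"{clientId}:{clientSecret}")));
request.Content = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["grant_type"] = "authorization_code",
    ["code"] = pastedAuthCode,
    ["redirect_uri"] = "urn:ietf:wg:oauth:2.0:oob"   // must match the registered redirect
});

var response = await http.SendAsync(request);
Console.WriteLine(await response.Content.ReadAsStringAsync());
// The JSON response contains access_token and refresh_token; copy them into aweber_tokens.json.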

 

The aweber_tokens.json file stores the credentials:

{
  "client_id": "your_client_id",
  "client_secret": "your_client_secret",
  "access_token": "your_access_token",
  "refresh_token": "your_refresh_token",
  "account_id": "",
  "list_id": ""
}

 

The account_id and list_id fields are left empty – the client discovers them automatically on first run and saves them back to the file.

~

The AWeber Client

The AWeberClient class handles all the AWeber API interaction. It manages token persistence, auto-refresh, account discovery, broadcast creation, and scheduling.
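The account and list discovery step isn’t shown in the article, so here is a hedged sketch of what an EnsureAccountAndListAsync method on this class could look like. It assumes AWeber’s collection endpoints return an entries array whose elements carry an id; verify that shape against the API documentation before relying on it.

// Sketch only — slots into the AWeberClient class described here.
private async Task EnsureAccountAndListAsync()
{
    if (!string.IsNullOrEmpty(_tokens.AccountId) && !string.IsNullOrEmpty(_tokens.ListId))
        return;

    var accountsResponse = await SendAuthenticatedRequestAsync(HttpMethod.Get, $"{BaseUrl}/accounts");
    using var accountsDoc = JsonDocument.Parse(await accountsResponse.Content.ReadAsStringAsync());
    _tokens.AccountId = accountsDoc.RootElement.GetProperty("entries")[0].GetProperty("id").ToString();

    var listsResponse = await SendAuthenticatedRequestAsync(
        HttpMethod.Get, $"{BaseUrl}/accounts/{_tokens.AccountId}/lists");
    using var listsDoc = JsonDocument.Parse(await listsResponse.Content.ReadAsStringAsync());
    _tokens.ListId = listsDoc.RootElement.GetProperty("entries")[0].GetProperty("id").ToString();

    SaveTokens();
}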

Here’s the token refresh flow:

private async Task RefreshAccessTokenAsync()
{
    var credentials = Convert.ToBase64String(
        Encoding.UTF8.GetBytes($"{_tokens.ClientId}:{_tokens.ClientSecret}"));

    var request = new HttpRequestMessage(HttpMethod.Post, TokenUrl);
    request.Headers.Authorization = new AuthenticationHeaderValue("Basic", credentials);
    request.Content = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["grant_type"] = "refresh_token",
        ["refresh_token"] = _tokens.RefreshToken
    });

    var response = await _httpClient.SendAsync(request);
    var responseBody = await response.Content.ReadAsStringAsync();

    if (!response.IsSuccessStatusCode)
        throw new HttpRequestException(
            $"Failed to refresh AWeber token: {response.StatusCode} - {responseBody}");

    using var doc = JsonDocument.Parse(responseBody);
    var root = doc.RootElement;

    _tokens.AccessToken = root.GetProperty("access_token").GetString() ?? "";
    _tokens.RefreshToken = root.GetProperty("refresh_token").GetString() ?? "";

    SaveTokens();
}

AWeber access tokens expire every 2 hours, so every authenticated request goes through a wrapper that catches 401 Unauthorized, refreshes the token, and retries:

private async Task<HttpResponseMessage> SendAuthenticatedRequestAsync(
    HttpMethod method, string url, HttpContent? content = null)
{
    var response = await SendRequestAsync(method, url, content);

    if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
    {
        await RefreshAccessTokenAsync();
        response = await SendRequestAsync(method, url, content);
    }

    return response;
}

Creating a broadcast is a POST to the AWeber API with the email subject, HTML body, and plain text body:

public async Task<AWeberBroadcastResponse> CreateBroadcastAsync(
    string subject, string bodyHtml, string bodyText)
{
    await EnsureAccountAndListAsync();

    var url = $"{BaseUrl}/accounts/{_tokens.AccountId}/lists/{_tokens.ListId}/broadcasts";

    var content = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        ["subject"] = subject,
        ["body_html"] = bodyHtml,
        ["body_text"] = bodyText
    });

    var response = await SendAuthenticatedRequestAsync(HttpMethod.Post, url, content);
    var responseBody = await response.Content.ReadAsStringAsync();

    if (!response.IsSuccessStatusCode)
        throw new HttpRequestException(
            $"Failed to create AWeber broadcast: {response.StatusCode} - {responseBody}");

    return JsonSerializer.Deserialize<AWeberBroadcastResponse>(responseBody)
        ?? throw new InvalidOperationException($"Failed to parse broadcast response");
}

~

Replacing GenerateNewsletter with CreateAndSendNewsletter

This is where the previous project’s GenerateNewsletter function tool gets replaced. The new CreateAndSendNewsletter does everything the old one did, plus the AWeber integration.

[Description("Creates an email newsletter from the gathered news, saves it locally as markdown, " +
             "and sends it as an AWeber broadcast. Call this after gathering news from all companies.")]
public static async Task<string> CreateAndSendNewsletter(
    [Description("The newsletter/email subject line")] string subject,
    [Description("The full markdown content of the newsletter")] string markdownContent,
    [Description("Minutes from now to schedule sending. Use 0 to create as draft only.")] int scheduleMinutesFromNow = 0)

The flow inside the method is:

  1. Save markdown locally – same as before, preserving the existing behaviour
  2. Convert markdown to HTML using Markdig
  3. Wrap in an email-safe HTML template with inline styles (email clients don’t support external CSS)
  4. Create the AWeber broadcast via the API
  5. Optionally schedule the broadcast for delivery

 

The markdown-to-HTML conversion uses Markdig’s advanced extensions pipeline:

var pipeline = new MarkdownPipelineBuilder()
    .UseAdvancedExtensions()
    .Build();

var innerHtml = Markdown.ToHtml(fullMarkdown, pipeline);
var bodyHtml = WrapInEmailTemplate(innerHtml, subject);

 

The HTML template uses table-based layout with inline styles – the standard approach for email HTML that renders consistently across Gmail, Outlook, Apple Mail, and other clients.
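The template helper itself isn’t listed in the article, so the version below is only a guess at its shape: a table-based wrapper with inline styles around the converted HTML.

// Hypothetical shape of WrapInEmailTemplate — illustration, not the project’s actual markup.
private static string WrapInEmailTemplate(string innerHtml, string subject) => $"""
    <!DOCTYPE html>
    <html>
    <head><meta charset="utf-8"><title>{subject}</title></head>
    <body style="margin:0;padding:0;background:#f4f4f4;">
      <table role="presentation" width="100%" cellpadding="0" cellspacing="0">
        <tr><td align="center" style="padding:24px;">
          <table role="presentation" width="600" cellpadding="0" cellspacing="0"
                 style="background:#ffffff;font-family:Arial,Helvetica,sans-serif;font-size:16px;line-height:1.5;color:#333333;">
            <tr><td style="padding:32px;">{innerHtml}</td></tr>
          </table>
        </td></tr>
      </table>
    </body>
    </html>
    """;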

If the AWeber call fails, the method still returns successfully (the markdown file was already saved) and includes the error message so the agent can report it.

~

Adding a Personal Touch: The Blog Feed

I wanted each newsletter to include a “From the Editor” section featuring my latest blog post. This meant adding a new function tool: SearchJamieMaguireBlog.

A problem was that my WordPress site’s security plugin blocks programmatic requests from .NET’s HttpClient – even when browser-like user agent headers are set.

The solution was to use the WordPress REST API (/wp-json/wp/v2/posts) and fetch it via Windows’ built-in curl.exe, which goes through a different TLS stack than .NET’s HttpClient and doesn’t get flagged:

[Description("Fetches the most recent blog post from jamiemaguire.net.")]
public static async Task<List<NewsItemModel>> SearchJamieMaguireBlog()
{
    var process = new System.Diagnostics.Process
    {
        StartInfo = new System.Diagnostics.ProcessStartInfo
        {
            FileName = "curl.exe",
            Arguments = $"-s -L \"{apiUrl}\"",
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
            CreateNoWindow = true
        }
    };

    process.Start();
    var json = await process.StandardOutput.ReadToEndAsync();
    await process.WaitForExitAsync();

    // Parse the WordPress REST API JSON response...
}
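The parsing step elided above could look roughly like this. The title.rendered, link, and date fields are part of the standard WordPress REST API response; the NewsItemModel property names are assumptions for illustration.

// Sketch of the elided parsing step (NewsItemModel property names assumed).
var results = new List<NewsItemModel>();

using var doc = JsonDocument.Parse(json);
foreach (var post in doc.RootElement.EnumerateArray())
{
    results.Add(new NewsItemModel
    {
        Title = post.GetProperty("title").GetProperty("rendered").GetString() ?? "",
        Url = post.GetProperty("link").GetString() ?? "",
        PublishedDate = post.GetProperty("date").GetString() ?? ""
    });
}

return results;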

This is a pragmatic workaround. It’s not ideal, but in the interest of time it was a quick way to get around the issue.

~

Updated Agent Instructions

The agent’s workflow in Program.cs now includes the new data source and the AWeber step:

AIAgent agent = responseClient.CreateAIAgent(
    name: "AI News Digest Agent",
    instructions: """
        You are an AI news research agent that creates weekly digest newsletters
        and sends them via AWeber.

        Workflow:
        1. Call SearchFoundryBlog() to get Microsoft AI Foundry updates
        2. Call SearchAgentFrameworkUpdates() to get Microsoft Agent Framework code updates
        3. Call SearchCompanyNews for each company: Microsoft, Google, Anthropic, OpenAI
        4. Call SearchJamieMaguireBlog() to get the latest blog post from jamiemaguire.net
        5. Call CreateAndSendNewsletter to send the newsletter via AWeber
        ...
    """,
    tools: [
        AIFunctionFactory.Create(AINewsServiceAgent.SearchFoundryBlog),
        AIFunctionFactory.Create(AINewsServiceAgent.SearchAgentFrameworkUpdates),
        AIFunctionFactory.Create(AINewsServiceAgent.SearchCompanyNews),
        AIFunctionFactory.Create(AINewsServiceAgent.SearchJamieMaguireBlog),
        AIFunctionFactory.Create(AINewsServiceAgent.CreateAndSendNewsletter)
    ]);

The newsletter includes a few sections:

  • An overview
  • Six news source sections
  • A “From the Editor” section
  • “What to watch” trends
  • Predictions for the next 90 days with confidence scores

This keeps it useful for skim reading. Each section contains links that take you directly to the detail.

~

The Function Tools

Here’s a summary of the tools available to the agent:

  • SearchFoundryBlog – Scrapes the Microsoft Foundry blog for the latest AI platform posts
  • SearchAgentFrameworkUpdates – Fetches recent commits and releases from the Agent Framework GitHub repo
  • SearchCompanyNews – Searches RSS feeds for AI news about a specific company
  • SearchJamieMaguireBlog – Fetches the latest blog post from jamiemaguire.net via the WordPress REST API
  • CreateAndSendNewsletter – Saves markdown locally, converts it to HTML, creates an AWeber broadcast, and optionally schedules delivery

~

From Personal Tool to Subscriber Newsletter

I’d been running the previous version of this agent weekly as a personal productivity tool.

It saved me time.

Instead of manually checking multiple blogs, RSS feeds, and GitHub repos, I’d just type Generate this week's digest and get a comprehensive overview in about two minutes.

The jump to AWeber integration means this digest is now available to subscribers of my blog. The agent does the same research it always did, but now the output goes directly into an email broadcast that reaches anyone who’s signed up.

What started as a personal convenience has become a value-add for the community.

The scheduleMinutesFromNow parameter also means I can review the draft in AWeber’s dashboard before it goes out, or schedule it for a specific delivery window.

Setting it to 0 creates a draft only, which is what I use – I like to review the content before it reaches subscribers.

~

Demo

We can see the revised agent with AWeber integration in action here.  In this demo, we launch the agent, then verify the draft newsletter has been scheduled for delivery in the AWeber dashboard.

 

Summary

In this post, we extended our AI news digest agent to:

  • Integrate with AWeber’s REST API using OAuth 2.0 with automatic token refresh
  • Convert markdown to email-safe HTML using Markdig, wrapped in a table-based template with inline styles
  • Create broadcast drafts that appear in the AWeber dashboard ready for review or scheduling
  • Add a personal blog feed as a new data source for the “From the Editor” newsletter section
  • Handle the full email pipeline – from AI-powered research to formatted email broadcast – in a single agent run

 

The key architectural decision was replacing the GenerateNewsletter function tool with CreateAndSendNewsletter while keeping the same agent workflow.

The agent doesn’t know or care that its output is going to an email platform.  It just calls a function tool with a subject and markdown content. The tool handles the conversion and delivery.

~


Introducing lazyqmd: a tui for QMD - and a little more...


What's it all about?

QMD is a quite popular mini CLI search engine for your docs, knowledge bases, meeting notes, and more.

qmd was created by Tobi Luetke, the founder of Shopify, and can be found on GitHub.

It allows you to index Markdown files from several locations on your computer.

You can search with keywords or natural language.

QMD combines BM25 full-text search, vector semantic search, and LLM re-ranking—all running locally via node-llama-cpp with GGUF models.

After installing it, you can add a new collection like this:

qmd collection add ~/notes --name notes

Now all Markdown files under ~/notes will be indexed and be searched like this:

qmd search -c notes "search term"

If you have multiple collections, you can do a global search:

qmd search "search term"

You can query or do a vector search (see docs for more details).

qmd also provides an MCP server, so you can integrate it as a memory server into your agentic workflows:

qmd mcp --http

This will start the MCP server on port 8181.

Of course you can specify another port and you can also run it in daemon mode:

qmd mcp --daemon --http --port 9000

So if I have a collection named blog which points to the source repository of my blog on my computer, I can run a search like this:

qmd search tmux

This will search all collections, hence also my collection named blog and the results will look like this:

qmd://blog/articles/my-tmux-tmuxinator-rails-ai-development-setup/index.md:2 #b9e642
Title: So what's tmux?
Score: 92%

@@ -1,4 @@ (0 before, 156 after)
---
title: "My tmux + Rails + AI TUIs development setup"
date: 2026-02-11T22:00:00
layout: default

Earlier today I was curious whether I could display the whole Markdown file instead of the result shown above, and I came up with the idea of building a Terminal UI (TUI) for qmd - lazyqmd was born.

Introducing lazyqmd

As with qmd itself, lazyqmd can be installed using bun:

bun install -g lazyqmd

To use all of the lazyqmd features, make sure to start the qmd mcp server in daemon mode as shown before. If you stick with the default port, you're good to go.

If you set another port, you can configure lazyqmd to use this one in the ~/.config/lazyqmd/options.json file (for more details, take a look at the README).

Now we're ready to start lazyqmd:

lazyqmd

If you're starting from scratch without prior usage of qmd itself, you'll have no collections at hand and lazyqmd will look like this:

As can be seen from the bottom bar in the screenshot, there's a command Add which can be invoked by pressing the a key - this brings up a little dialog to add a new collection:

Please notice that you get tab completion for the path.

Please wait until the Indexing... message disappears. Once the collection has been indexed, you can start using it. lazyqmd should look similar to this now:

You can navigate the collections in the left sidebar using the up and down arrows, and you can start a new search by either sticking with the All selection for a global search or selecting a particular collection and searching just that one.

Let's select blog and hit / to bring up the search dialog:

Now let's type "tmux":

Hitting "Enter" will bring up the search results for this particular search term:

The first result seems interesting as it has a relevance of 86%. So let's hit <tab> to jump to the results list, followed by Enter to open that particular document:

Now you can scroll inside this document and as can be seen, there's some syntax highlighting for Markdown and YAML frontmatter.

At this point I thought it would be nice to have an HTML preview at hand, so I've added it. Within the Markdown preview just hit <p>:

As expected, this brings up a Chrome, Chromium or Brave instance in app mode. If you're living the dream and run Omarchy, everything will be auto aligned nicely thanks to hyprland.

Now I wanted to go a little further: What if I could just edit the file and get a sort of hot reload of the HTML?

Let's focus the lazyqmd window and hit <e> for "Edit":

If your $EDITOR is nvim, this will seamlessly open the Markdown file in neovim.

Now let's make a little change to our Markdown file:

As you can see, when saving the change, the change is reflected in the HTML preview.

Quitting nvim will bring you back to lazyqmd where you left off.

Looking at the search screen again, you might have noticed there's a "Mode" command shown in the command bar at the bottom:

By hitting <CTRL+T> you can switch between regular search, vsearch (QMD Vector search) and query.

If you want to use vsearch, make sure you've created the Embeddings. If you haven't done so yet, you can just hit <e> on the main screen with a collection selected. This will run qmd embed for you and create the Embeddings.

After finishing this, I noticed that this little TUI + QMD might replace LogSeq and Obsidian for me: file based, running locally, and my files stay where they belong - let's see how this works in daily use.

For more features, please have a look at the README on GitHub.

As this project is pretty new, don't expect everything to be perfect right away.


I threw thousands of files at Astro and you won't believe what happened next...


Ok, forgive me for the incredibly over the top title there. Yes, it's clickbait, but I'm also tired after a very long week and feeling a little crazy, so just go with me here a bit, I promise it will be worth it. I was curious how well Astro could handle a large amount of data and I thought - what happens if I threw this blog (well, the Markdown files) at it and tried to render out a site? Here's what I did wrong and what eventually worked (better than I expected).

Round One

I began by creating a soft link locally from my blog's repo of posts to the src/pages/posts of a new Astro site. My blog currently has 6742 posts (all high quality I assure you). Each one looks like so:

---
layout: post
title: "Creating Reddit Summaries with URL Context and Gemini"
date: "2026-02-09T18:00:00"
categories: ["development"]
tags: ["python","generative ai"]
banner_image: /images/banners/cat_on_papers2.jpg
permalink: /2026/02/09/creating-reddit-summaries-with-url-context-and-gemini
description: Using Gemini APIs to create a summary of a subreddit.
---

Interesting content no one will probably read here...

In my Astro site's index.astro page, I tried this first:

const allPosts = Object.values(import.meta.glob('./posts/**/*.md', { eager: true }));

And immediately ran into an issue with the layout front matter. Astro parses this and expects to find a post component in the same directory. My "fix" was to... remove the symbolic link and make a real copy and then use multi-file search and replace to just delete the line.

That worked... but was incredibly slow. I'd say it took about 70 or so seconds for each load.

This was... obviously... the wrong approach.

Round Two - The Right Approach

The solution was simple - use content collections. This involved moving my content out of the src/pages directory and creating a file, src/content.config.js to define the collection:

import { defineCollection } from 'astro:content';

import { glob, file } from 'astro/loaders';


const blog = defineCollection({ 
    loader: glob({pattern:"**/*.md",base:"./posts"})
 });

export const collections = { blog  };

You can see where I define my blog collection using a glob pattern and a base directory. That's literally it. This still took a few seconds to load, but was cached, and future reloads were zippy zippy.

But wait... there's more

With this working, I began building out a few pages just to see things in action. First, a home page that shows ten recent posts with excerpts:

---
import BaseLayout from '../layouts/BaseLayout.astro';
import { formatDate, excerpt } from "../utils/formatters.js"

import { getCollection } from 'astro:content';

const posts = await getCollection('blog');
const sortedPosts  = posts.sort((a, b) => {
	return new Date(b.data.date)-new Date(a.data.date);
}).slice(0,10);

---

<BaseLayout pageTitle="Blog">

	{ sortedPosts.map((post:any) => 
	<div>
		<h3><a href={ `/posts/${post.id}` }>{post.data.title}</a></h3>
		<p><i>Published { formatDate(post.data.date)}</i></p>
		<p set:html={ excerpt(post.rendered.html)}></p>
	</div>
	)}
	<p>
		<a href="all.html">Every Post Ever</a>
	</p>
</BaseLayout>

I think the important bits here are on top. You can see I need to sort my posts, and I do so such that the most recent posts are on top. (In theory this sort would be faster if I pre-processed the string-based dates into Date objects once, but the demo was working so fast now I didn't bother.)

Now note the link. To make this work, I created a new file, src/pages/posts/[...id].astro. The rest parameter in the filename (...id) is important. I'll explain after sharing the file contents:

---
import BaseLayout from '../../layouts/BaseLayout.astro';

import { getCollection, render } from 'astro:content';

export async function getStaticPaths() {
  const posts = await getCollection('blog');

  return posts.map(post => ({
    params:{ id: post.id },
    props: { post },
  }));

}

const { post } = Astro.props;
const { Content } = await render(post);
---

<BaseLayout pageTitle={post.data.title}>
  <Content />
</BaseLayout>

My id values are coming from the permalink of my blog posts and look like so: permalink: /2026/02/09/creating-reddit-summaries-with-url-context-and-gemini. Notice the forward slashes? This was throwing errors in Astro when I originally named my file [id].astro. The rest parameter version fixed that immediately.

That's almost the last issue. With this in place, I could browse a few blog posts and see how they looked. I noticed something odd though. I had a header with three dots in it:

## Temporal is Coming...

And when rendered out, it turned into trash. I went to Gemini, asked about it, and it turned out to be an issue with Astro's Markdown processor converting the three dots into a Unicode ellipsis character. My app didn't have a "real" HTML layout at this point (I added BaseLayout later) and was missing:

<meta charset="utf-8" />

As soon as that was added, it rendered just fine!

Blog home page

And how well did it perform when building? At near seven thousand pages, npm run build took...

Pause for effect

8 seconds. That's pretty dang good I'd say.

So, if you want to try this yourself, you can find the source here: https://github.com/cfjedimaster/astro-tests/tree/main/rayblog

Note! I thought it was a bit of a waste to check in all of my blog posts in this repo, so I filtered it down to the last three years. If you want to recreate what I did (and heck, you can probably make it quicker; if you do, drop me a line!), you can clone my posts here: https://github.com/cfjedimaster/raymondcamden2023

Photo by Jeremy Thomas on Unsplash
