
Building an Agentic, AI-Powered Helpdesk with Agents Framework, Azure, and Microsoft 365


The High-Level Architecture

The core idea is to decouple the system. Instead of one large application doing everything, we split the process into distinct, scalable components.

  1. Ingestion: A lightweight API endpoint simply to capture the request.
  2. Decoupling: A message queue to hold the request for background processing.
  3. Processing: An asynchronous worker that handles all the heavy lifting: AI enrichment, notifications, and decision-making.
  4. Action: A set of automated actions that connect directly to our M365 tools.

Here’s the entire flow visualized as a flowchart:

[Flowchart: web form → FastAPI /submit → Azure Table Storage + Service Bus queue → worker → AI enrichment → Teams notification → agent decision → M365 action]

Step-by-Step Workflow Breakdown

Let's dive into the details of each step.

Ingestion: The FastAPI Endpoint

The user's journey begins at a simple web form (built with FastAPI and Jinja2). The form captures the essential details: Title, Description, Category, Priority, and the user's email.

When the user clicks "Submit," the request goes to our POST /submit endpoint. This endpoint does two things immediately:

  1. Full Storage: It saves the complete entity (using Category as the PartitionKey and a GUID as the RowKey) into Azure Table Storage for a permanent record.
  2. Compact Message: It sends a compact JSON message (containing just the key info like tableRow, category, priority, etc.) to an Azure Service Bus queue named 'm365'.

This "split" is crucial. The API responds instantly to the user ("Submitted!") without waiting for any complex processing. The entire "heavy" part of the job is now in the queue.

The Asynchronous Worker & AI Enrichment

A separate Python process is constantly listening to the 'm365' Service Bus queue. When our new message arrives, the worker wakes up and:

  1. Parses the compact message.
  2. Uses the partition and row keys to fetch the full entity from Azure Table Storage.
  3. Calls our enrich_helpdesk_entity function, which is a wrapper for Azure OpenAI.
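A minimal sketch of that loop, reusing the same placeholder connection strings and the compact message shape from the ingestion step (the enrichment call itself is shown in the next snippet):

```python
import json

from azure.data.tables import TableClient
from azure.servicebus import ServiceBusClient

STORAGE_CONN_STR = "<storage connection string>"         # placeholder
SERVICEBUS_CONN_STR = "<service bus connection string>"  # placeholder

table = TableClient.from_connection_string(STORAGE_CONN_STR, table_name="HelpdeskRequests")

with ServiceBusClient.from_connection_string(SERVICEBUS_CONN_STR) as sb:
    with sb.get_queue_receiver(queue_name="m365") as receiver:
        for msg in receiver:                              # blocks until a message arrives
            body = json.loads(str(msg))                   # 1. parse the compact message
            entity = table.get_entity(                    # 2. fetch the full entity
                partition_key=body["category"], row_key=body["tableRow"])
            enriched = enrich_helpdesk_entity(entity)     # 3. AI enrichment (see below)
            receiver.complete_message(msg)                # done: remove from the queue
```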

This AI step is where the magic begins. We send a prompt with the user's raw data and ask the AI to return a clean JSON object with an improved title, a concise summary, and a calculated urgency. If the AI call fails, the worker gracefully falls back to the original user input.
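As a rough sketch, assuming the openai package's AzureOpenAI client, a hypothetical gpt-4o deployment name, and an illustrative prompt (the real prompt and schema live in the repo):

```python
import json

from openai import AzureOpenAI

client = AzureOpenAI(api_key="<key>", api_version="2024-06-01",
                     azure_endpoint="https://<resource>.openai.azure.com")

def enrich_helpdesk_entity(entity: dict) -> dict:
    prompt = (
        "Rewrite this helpdesk request as JSON with keys 'title', 'summary', "
        "and 'urgency' (low/medium/high).\n"
        f"Title: {entity['Title']}\nDescription: {entity['Description']}\n"
        f"Priority: {entity['Priority']}"
    )
    try:
        resp = client.chat.completions.create(
            model="gpt-4o",   # the Azure OpenAI deployment name, assumed here
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        # Merge the AI's cleaned-up fields over the original entity.
        return {**entity, **json.loads(resp.choices[0].message.content)}
    except Exception:
        return entity         # graceful fallback: keep the original user input
```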

Human-in-the-Loop: Teams Notification

Now that we have a clean, enriched summary, we need to let the support team know. The worker calls send_to_teams, which formats the enriched data into a nice MessageCard and posts it to a designated Teams channel via a webhook.

The support team now sees a clean, AI-summarized notification, giving them instant visibility.
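Something along these lines, posting a legacy MessageCard payload to an incoming-webhook URL; the webhook URL and field names are placeholders:

```python
import requests

TEAMS_WEBHOOK_URL = "<incoming webhook url>"   # placeholder

def send_to_teams(enriched: dict) -> None:
    card = {
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "themeColor": "0078D4",
        "summary": enriched["title"],
        "title": f"New helpdesk request: {enriched['title']}",
        "sections": [{
            "facts": [
                {"name": "Category", "value": enriched["PartitionKey"]},
                {"name": "Urgency", "value": enriched.get("urgency", "unknown")},
            ],
            "text": enriched["summary"],
        }],
    }
    # Incoming webhooks accept a plain JSON POST; raise if Teams rejects the card.
    requests.post(TEAMS_WEBHOOK_URL, json=card, timeout=10).raise_for_status()
```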

The 'Agent' Decides: AI-Driven Action

This is the "agentic" part of the workflow. Just notifying a channel is good, but true automation means taking the next step.

The worker calls decide_action, which uses the Microsoft Agent Framework (powered by an AzureOpenAIChatClient). We prompt the agent with the key data (category, priority, and the user's original ActionHint).

The agent's job is to intelligently decide the best action. It returns a simple JSON response like { "action": "create-task" }. This is far more powerful than a simple if/else block, as it can be trained to handle nuanced requests. The system defaults to the user's hint if the agent fails.

Execution: Closing the Loop in M365

Based on the agent's decision, the worker executes one of four actions:

  • notify-team: Uses Azure Communication Services (ACS) to send a formatted email to a distribution list.
  • create-task: Uses MSAL to get a Microsoft Graph token and directly creates a new task in a specific Planner plan/bucket.
  • create-ticket: Makes an HTTP POST to a Power Automate flow, which can then connect to any system (like ServiceNow, JIRA, etc.) to create a formal ticket.
  • store-only: Does nothing further. The request is stored and visible in Teams, but no other action is taken.
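A sketch of that dispatch, with the create-task branch shown end to end (an MSAL client-credentials token plus a Graph Planner call); the tenant, app, plan, and bucket IDs are placeholders, and the ACS email helper is only referenced, not implemented:

```python
import msal
import requests

TENANT_ID, CLIENT_ID, CLIENT_SECRET = "<tenant>", "<app id>", "<secret>"   # placeholders
PLAN_ID, BUCKET_ID = "<planner plan id>", "<bucket id>"                    # placeholders
POWER_AUTOMATE_URL = "<flow http trigger url>"                             # placeholder

def execute_action(action: str, enriched: dict) -> None:
    if action == "create-task":
        # Acquire an app-only Graph token, then create the Planner task directly.
        app = msal.ConfidentialClientApplication(
            CLIENT_ID, authority=f"https://login.microsoftonline.com/{TENANT_ID}",
            client_credential=CLIENT_SECRET)
        token = app.acquire_token_for_client(
            scopes=["https://graph.microsoft.com/.default"])["access_token"]
        requests.post(
            "https://graph.microsoft.com/v1.0/planner/tasks",
            headers={"Authorization": f"Bearer {token}"},
            json={"planId": PLAN_ID, "bucketId": BUCKET_ID, "title": enriched["title"]},
            timeout=10,
        ).raise_for_status()
    elif action == "create-ticket":
        # Hand the enriched payload to a Power Automate flow, which files the formal ticket.
        requests.post(POWER_AUTOMATE_URL, json=enriched, timeout=10).raise_for_status()
    elif action == "notify-team":
        send_team_email(enriched)   # Azure Communication Services email, not shown here
    # "store-only": nothing further; the request is already stored and posted to Teams.
```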

Visualizing the Interactions (Sequence Diagram)

Conclusion

This architecture provides a powerful, scalable, and intelligent solution for a common business problem. By leveraging a decoupled, event-driven design with serverless components, the system is both cost-effective and resilient.

The real power, however, comes from the two-stage AI: first, for enrichment (making data human-readable) and second, for decision-making (making the system autonomous). This "agentic" pattern, deeply integrated with the Microsoft 365 ecosystem, is a clear look at the future of business process automation.

Bonus Round: An Analytics Agent for Process Insights

We can easily extend this project by adding a Chat Interface Agent. Imagine a simple chat UI (in Teams, or its own web page) where a support manager can ask, in plain English:

  • "How many total tickets did we receive today?"
  • "Show me all 'High' priority requests for the 'IT' category."
  • "Which team had the most 'create-task' actions assigned?"

Technically, this is another "agent" (powered by Azure OpenAI) that translates the user's natural language question into a valid OData query for our HelpdeskRequests table. It then fetches the data, summarizes it, and presents the answer in the chat. This creates a powerful, conversational "Copilot" for our new helpdesk process, giving us instant, natural language access to our operational data.
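One way that could look, assuming the same HelpdeskRequests table and an Azure OpenAI client as in the earlier sketches; the prompt and column names are illustrative only:

```python
from azure.data.tables import TableClient
from openai import AzureOpenAI

STORAGE_CONN_STR = "<storage connection string>"   # placeholder
client = AzureOpenAI(api_key="<key>", api_version="2024-06-01",
                     azure_endpoint="https://<resource>.openai.azure.com")

def ask_helpdesk(question: str) -> str:
    # 1. Translate the natural-language question into an OData $filter expression.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   "Write a single OData $filter expression (no prose) for an Azure "
                   "Table with columns PartitionKey (category), Priority, Timestamp.\n"
                   f"Question: {question}"}],
    )
    odata_filter = resp.choices[0].message.content.strip()

    # 2. Run the query against the HelpdeskRequests table.
    table = TableClient.from_connection_string(STORAGE_CONN_STR, table_name="HelpdeskRequests")
    rows = [dict(e) for e in table.query_entities(odata_filter)]

    # 3. Let the model summarize the raw rows for the chat reply.
    summary = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Summarize these helpdesk rows for a support manager: {rows}"}],
    )
    return summary.choices[0].message.content
```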

Git Repo

References

For more in-depth information on the services and frameworks used in this post, check out the official Microsoft Learn documentation:


Reviving My Blog With AI


What’s this, a blog post after not having written one all year? Look, I’m sure I could make up some excuses about being busy with work, life, and the general chaos of the world, but the truth is, I just lost my blogging mojo. However, as the year is coming to a close I decided to do at least some minor maintenance on my blog, and in doing so I thought it would be a great opportunity to document the process of reviving my blog with a little help from AI.

Clone, build, fail, repeat

The first step in any blog maintenance is cloning the repository and running a build to see what state things are in. I cloned the repo, ran hugo, and… nothing. The build failed immediately. It turns out that the version of Hugo I had installed locally was much newer than what the blog was last built with, and Hugo had introduced breaking changes since then. Well, that was frustrating. Look, I’m sure I could have read the error messages and dug through the hundreds of files in the repo to fix everything, but this is the era of AI assistance, and I’m lazy, so I decided to see if GitHub Copilot could help me out.

Fixing build issues with Hugo

Hugo’s first build crashed, exposing errors from shortcodes that no longer behave like they did in 2024. Copilot translated the logs into actionable hints and pointed at the culprits—github, gist, and tweet. We removed the brittle GitHub repo embed (which was something custom I’d built and look, I’m sure the code is somewhere in my GitHub…) in favor of a simple link and reduced every gist embed to a pair of clean Markdown URLs.

From there it was whack-a-mole across sixteen posts that still referenced the deprecated tweet, twitter, or tweet_simple shortcodes. Copilot helped hunt them down, flagged both spaced and unspaced forms (seriously, past Aaron?!), and drafted the direct https://x.com/<user>/status/<id> links that now keep the context but without any remote fetches. Each wave of edits was followed with a full hugo --buildDrafts --buildFuture run to make sure we hadn’t invented new problems; by the end the only remaining warning was Hugo reminding me those shortcodes will vanish in a future release—no breaking errors, no missing embeds, just a tidy diff full of Markdown links.

Now that the build was working again, it was time to give the design a refresh, something to make it pop.

Creating a new theme

The original theme had served faithfully for the past few years, but compared to today’s design language it felt cramped and colourless. I asked Copilot to scaffold a fresh base—hero, navigation, cards, footer—and it answered with a glassy, modern layout that emphasised typography and generous spacing. From there I layered in custom Sass tokens, built out the hero animation and CTA buttons, and tightened every component until the homepage finally had the polish I always wanted. The best part was how quickly I could iterate: Copilot implemented a change, I’d review it, give it suggestions on what I did/didn’t like about it (annotated screenshots FTW!), and set it on its merry way.

I had created the last theme from scratch, and while I did a lot of CSS back in the day, my skills are very much rusty, especially when it comes to modern layout techniques like Flexbox and Grid. What took me a few solid days of fighting with CSS back then was done in, like, 30 minutes with Copilot’s help.

Supporting light and dark mode

You know what all the cool kids have on their websites these days? Light and dark mode. So, naturally, for this new design I wanted both, and I just fired off another prompt to Copilot. It helped punch out a theming system based on CSS custom properties: a set-theme mixin emits the palette for light and dark, the root follows prefers-color-scheme, and a new toggle in the navigation lets anyone override the default. A little JavaScript stores the selection in localStorage, updates the emoji/iconography, and snaps the DOM into the chosen mode without a reload. I smoke-tested both palettes—dark hero gradients, light mode cards, rotating headline—fixing subtle contrast issues Copilot flagged along the way until everything felt cohesive regardless of the time of day.

Reducing build output size

Once the new theme was ready I went to deploy it, only to have the build fail because it was too big to deploy to Static Web Apps. SWA has a limit of 250 MB for the free tier (I think…), which is what I’m using for hosting my blog. Look, yeah I know the blog has a lot of images and other assets, but I didn’t think it would be that big, so I switched to Standard tier (500 MB limit), and deployed again… and it failed again. Turns out the blog was over 600 MB now. Well, shit.

I downloaded the CI output because it was time to work out where all the bloat was coming from. My initial hunch was that it was just a lot of really legacy assets; after all, I’ve been blogging for nearly 20 years, and I know there are some ZIP files in here from pre-OSS code hosting platforms (no, SourceForge wasn’t my jam). I looked at the size of each of the top-level folders and was shocked to find that, no, the assets weren’t that large (relatively speaking); it was the tags folder, with over 400 MB in there. This folder contains an HTML file for each tag on my website, so you can go to the csharp tag and get all posts that are, well, tagged with csharp. But what I didn’t realise is that I was also generating an XML file for RSS for each tag. Yes, each tag was also an RSS feed, and with over 200 unique tags on my blog (ok - fixing taxonomy is a problem for another day 😅), that adds up. Also, it was totally not needed.

So it was back to Copilot Chat to start a little diagnostic sprint. Prompt one was a deceptively simple “why am I generating an RSS feed for each tag?” Copilot dissected the project immediately: Hugo’s default taxonomy outputs were spawning an index.xml beside every tag page because the config simply inherited the defaults. Prompt two: “yeah, can you disable it” kicked off a quick solution loop (look, you don’t need to be super detailed in the prompt 🤣). Copilot proposed the edit (explicit taxonomy/term outputs set to HTML only), patched config.toml, suggested re-running hugo --buildDrafts --buildFuture, and surfaced the clean build log. With the redundant feeds gone and the site weight ticking downward, I had momentum to keep chipping away at the rest of the oversized assets.

This chat session turned into the perfect postscript. I opened with the ask, “I want to experiment with a new feature… I want to run an image compression step,” and Copilot laid out the whole experiment: a feature/image-compression branch, a Sharp-powered scripts/compress-images.js, npm scripts for static and .output runs, plus VS Code tasks so future me can trigger the same workflow without leaving the editor. A dry-run delivered the proof—199 images trimmed for a 40 MB delta—without touching a single byte.

The next prompt, “can we change the compress-images.js to use JavaScript modules rather than require,” pushed the helper into modern ESM territory. Copilot swapped the imports, flipped package.json to "type": "module", refreshed the lockfile, and re-ran npm run compress:images -- --dry-run to confirm nothing broke. From there I nudged, “now can you integrate it into the github actions workflows,” and Copilot slotted the compressor straight into the Hugo job with actions/setup-node, an npm ci, and a post-build node scripts/compress-images.js --input "$GITHUB_WORKSPACE/$OUTPUT_FOLDER" step so every artifact upload ships the leanest bits possible.

With the image compression in place and the tag RSS feeds disabled, the final build output was down to a manageable 220 MB, well within the free tier limits of Static Web Apps. Time to deploy!

That’s a wrap!

What started as simple maintenance—clone, build, deploy—turned into a full renovation powered by AI assistance. GitHub Copilot helped me navigate breaking changes, modernize a stale design, implement light/dark theming, and optimize build output, all in a fraction of the time it would have taken solo. The real win wasn’t just fixing errors or shipping a fresh coat of paint; it was rediscovering the joy of tinkering with my blog without the friction that had kept me away all year.

And you know what the best part is? Most of this blog post was written by Copilot as well; I just asked it to summarise the chat sessions, then went through and edited it to make it sound more like me. So AI writing a blog post about using AI to update my blog; I feel like this is delightfully peak laziness. 😅


Anthropic Acquires Bun In First Acquisition

Anthropic has made its first acquisition by buying Bun, the engine behind its fast-growing Claude Code agent. The move strengthens Anthropic's push into enterprise developer tooling as it scales Claude Code with major backers like Microsoft, Nvidia, Amazon, and Google. Adweek reports: Claude Code is a coding agent that lets developers write, debug and interpret code through natural-language instructions. Claude Code had already hit $1 billion in revenue six months since its public debut in May, according to a LinkedIn post from Anthropic's chief product officer, Mike Krieger. The coding agent continues to barrel toward scale with customers like Netflix, Spotify, and Salesforce. Further reading: Meet Bun, a Speedy New JavaScript Runtime



Go 1.25.5-1 and 1.24.11-1 Microsoft builds now available


New releases of the Microsoft build of Go, including security fixes, are now available for download. For more information about these releases and the changes included, see the table below:

Microsoft Release   Upstream Tag
v1.25.5-1           go1.25.5 (release notes)
v1.24.11-1          go1.24.11 (release notes)

The post Go 1.25.5-1 and 1.24.11-1 Microsoft builds now available appeared first on Microsoft for Go Developers.


No we're likely not reaching AGI anytime soon: The Behaviourist Fallacy in AI Subsystem Development


Thinking in Systems: A Retrospective on a Life Spent Trying to Understand How Things Fit Together
