Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Thumbs Up & Down for LLM Responses


As more organizations deploy AI chatbots and copilots, one simple feature keeps showing up in the user interface: the familiar thumbs up and thumbs down buttons.

At first glance, this seems like a small UX detail. Add two buttons, store whether the answer was liked or disliked, and move on.

But in practice, a good feedback system for LLM responses is much more than that.

If you want your AI solution to improve over time, catch regressions, identify hallucinations, detect tool failures, and understand whether users are actually getting value, then thumbs up and thumbs down should be treated as a full feedback and observability system, not just a couple of icons on the screen.

In this post, I want to walk through what it really takes to implement this feature properly.

Why Thumbs Up / Down Matters

When users interact with an AI chatbot, every response is a chance to learn something:

  • Was the answer correct?
  • Was it useful?
  • Did it follow instructions?
  • Was it too long or too vague?
  • Did it cite sources well?
  • Did it fail because retrieval was weak?
  • Did a tool call break behind the scenes?

A simple feedback mechanism gives users a fast way to tell you whether the response helped. That is valuable. But the real value appears when that signal is captured with enough surrounding context to make it actionable.

A bare thumbs down tells you someone was unhappy.
A well-designed feedback event tells you why, under what conditions, and what to fix.

The Biggest Mistake Teams Make

The most common mistake is implementing thumbs up/down like this:

  • render buttons
  • store liked = true or false
  • maybe display a thank-you message

That is easy to build, but it provides very little operational value.

It does not tell you:

  • which model produced the bad answer
  • what prompt version was active
  • whether RAG was used
  • which documents were retrieved
  • whether a tool call failed
  • whether the response took 12 seconds and annoyed the user
  • whether the same issue is happening hundreds of times

If you want feedback that improves your AI product, you need to think bigger.

The Right Way to Think About It

A production-ready thumbs system has two layers:

1. A simple and frictionless user experience

Users should be able to click quickly without interrupting their workflow.

2. A rich backend feedback pipeline

The click should be tied to the exact response, trace, model, prompt, and execution details that produced it.

That combination is where the magic happens.

Start with the User Experience

For each assistant response, show:

  • 👍 Helpful
  • 👎 Not Helpful

That is the core interaction. Then, once the user clicks, optionally invite more detail.

After a thumbs up

You can ask:

  • What was good about it?
    • Correct
    • Clear
    • Fast
    • Useful
    • Good citations

After a thumbs down

You can ask:

  • What went wrong?
    • Incorrect
    • Hallucinated or made things up
    • Did not follow instructions
    • Bad tone
    • Too long or too short
    • Missing sources
    • Slow
    • Tool or action failed
    • Other

And then provide a small comment box:

Tell us more

This is an important design choice. A simple thumb is helpful, but a structured follow-up gives you the reason behind the rating. That is what turns raw sentiment into something the engineering team can act on.
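As a sketch, the structured reasons can be modeled as a closed set so analytics can group on them (the specific codes and names below are illustrative, not from any particular product):

```typescript
// Illustrative reason codes for structured feedback follow-ups.
// A closed union (rather than free text) keeps analytics queries simple.
type Rating = "up" | "down";

const UP_REASONS = ["correct", "clear", "fast", "useful", "good_citations"] as const;
const DOWN_REASONS = [
  "incorrect",
  "hallucinated",
  "did_not_follow_instructions",
  "bad_tone",
  "wrong_length",
  "missing_sources",
  "slow",
  "tool_failed",
  "other",
] as const;

type UpReason = (typeof UP_REASONS)[number];
type DownReason = (typeof DOWN_REASONS)[number];

// Return the follow-up options to render after a thumbs click.
function followUpOptions(rating: Rating): readonly string[] {
  return rating === "up" ? UP_REASONS : DOWN_REASONS;
}
```

Because the codes are a fixed union, a misspelled reason becomes a compile-time error instead of a silent new category in your dashboards.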

Every Response Needs a Unique Identity

To make feedback meaningful, each assistant response should already have identifiers such as:

  • Conversation ID
  • Turn ID
  • Message ID
  • Response ID
  • Trace ID

When a user clicks thumbs up or thumbs down, the feedback event must reference those identifiers.

That way, you can link the feedback back to the exact response and everything that happened around it.

Without this, feedback becomes disconnected and much harder to analyze.

The Best Architecture Pattern

A strong implementation usually looks like this:

Frontend

The chat UI renders the thumbs controls on every assistant message.

When the user clicks, the frontend sends a feedback payload to the backend.

Backend API

A dedicated endpoint receives the feedback, validates it, and stores it.

Database

Feedback is stored in a dedicated table, not buried inside the chat transcript.

Telemetry

A custom event is emitted so feedback can be correlated with traces, logs, and metrics.

Analytics and evaluation

Offline processes aggregate the feedback into dashboards and evaluation datasets.

This turns thumbs up/down from a superficial interface feature into a real product improvement loop.
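To make that flow concrete, here is a minimal backend sketch of the validate, store, and emit steps. The in-memory arrays stand in for a real database table and telemetry SDK, and all names are hypothetical:

```typescript
// Minimal feedback pipeline: validate -> store -> emit telemetry.
// In a real system, feedbackStore is a dedicated table and
// telemetryEvents is your observability SDK.
interface FeedbackPayload {
  conversationId: string;
  messageId: string;
  rating: "up" | "down";
  reasons?: string[];
  comment?: string;
}

const feedbackStore: FeedbackPayload[] = [];
const telemetryEvents: { name: string; dimensions: Record<string, string> }[] = [];

function validate(p: FeedbackPayload): string | null {
  if (!p.conversationId || !p.messageId) return "missing identifiers";
  if (p.rating !== "up" && p.rating !== "down") return "invalid rating";
  return null;
}

// The dedicated endpoint would call this after parsing the request body.
function processFeedback(p: FeedbackPayload): { ok: boolean; error?: string } {
  const error = validate(p);
  if (error) return { ok: false, error };
  feedbackStore.push(p); // write to the dedicated feedback store
  telemetryEvents.push({ // emit the custom telemetry event
    name: "chat.feedback.submitted",
    dimensions: { rating: p.rating, messageId: p.messageId },
  });
  return { ok: true };
}
```

The key design point is that storage and telemetry happen in the same request path, so the database record and the trace event always agree.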

What the Feedback Payload Should Contain

A strong feedback event should include fields like:

  • conversationId
  • turnId
  • messageId
  • responseId
  • traceId
  • rating (up or down)
  • reason codes
  • optional comment
  • user ID or tenant ID
  • timestamp

That alone is a major improvement over simply storing “liked” or “disliked.”
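One possible shape for that payload in TypeScript (field names are illustrative and should follow whatever identifier scheme your app already uses):

```typescript
// A feedback event that references the exact response it rates.
interface FeedbackEvent {
  conversationId: string;
  turnId: string;
  messageId: string;
  responseId: string;
  traceId: string;
  rating: "up" | "down";
  reasonCodes: string[];
  comment?: string;
  userId: string;
  timestamp: string; // ISO 8601
}

// Factory that stamps the submission time server-side
// so clients cannot spoof it.
function createFeedbackEvent(
  input: Omit<FeedbackEvent, "timestamp">
): FeedbackEvent {
  return { ...input, timestamp: new Date().toISOString() };
}
```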

What Else You Should Capture

The feedback event becomes far more valuable when you can correlate it with response metadata such as:

Response metadata

  • model name
  • model version
  • deployment name
  • prompt template version
  • token counts
  • latency
  • streaming vs non-streaming

Retrieval metadata

  • retrieved document IDs
  • chunk count
  • similarity scores
  • whether citations were shown

Tool metadata

  • tool calls made
  • success or failure of each tool
  • latency per tool
  • fallback path used

Safety and execution metadata

  • moderation flags
  • retry count
  • truncation status
  • error conditions

Once you have this, you can answer questions like:

  • Are thumbs down increasing after a model upgrade?
  • Are hallucination complaints tied to specific prompts?
  • Do tool failures correlate with user frustration?
  • Are slow answers getting penalized even when correct?

That is where feedback becomes genuinely powerful.

Why a Dedicated Feedback Table Matters

Do not just append thumbs data to the message content.

Store it in a separate structure.

For example, you might have one table for assistant messages and another for message feedback.

The assistant message record stores the response and all its execution details.

The feedback record stores:

  • who rated it
  • whether it was up or down
  • what reasons were selected
  • what comment was provided
  • when it happened

This separation makes reporting, filtering, auditing, and analytics much easier.

One Rating per User per Message

A simple best practice is to allow one feedback record per user per message, with the option to update it. That avoids spam and keeps your data cleaner. It also lets a user change their rating later if they want to.

Why Observability Matters

If your chatbot is in production, you should not treat thumbs feedback as only a database concern. It should also be part of your telemetry story. When feedback is submitted, emit a custom event such as:

chat.feedback.submitted

Attach useful dimensions such as:

  • rating
  • reason codes
  • message ID
  • trace ID
  • model
  • model version
  • prompt version
  • latency
  • tool count
  • retrieval source count

This lets you use observability dashboards to spot trends and investigate incidents quickly.
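A sketch of what such an event might look like; the event name mirrors the one above, but the dimension names and the builder function are otherwise illustrative, and a real app would hand the result to its telemetry SDK:

```typescript
// Build a flat telemetry event so dashboards can slice by any dimension.
interface TelemetryEvent {
  name: string;
  dimensions: Record<string, string | number>;
}

function feedbackTelemetryEvent(args: {
  rating: "up" | "down";
  reasonCodes: string[];
  messageId: string;
  traceId: string;
  model: string;
  promptVersion: string;
  latencyMs: number;
}): TelemetryEvent {
  return {
    name: "chat.feedback.submitted",
    dimensions: {
      rating: args.rating,
      reasonCodes: args.reasonCodes.join(","), // flatten for dimension filters
      messageId: args.messageId,
      traceId: args.traceId,
      model: args.model,
      promptVersion: args.promptVersion,
      latencyMs: args.latencyMs,
    },
  };
}
```

Keeping the dimensions flat scalars (rather than nested objects) is deliberate: most metrics backends can only filter and group on flat attributes.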

For example, if a new deployment causes a jump in thumbs down, you want to detect that fast.

The Most Important Insight: Thumbs Are Not Just UX, They Are Labels

A good thumbs system creates valuable labels for your AI lifecycle.

Those labels can feed three important loops:

1. Product improvement loop

You can identify usability issues such as:

  • bad formatting
  • weak citations
  • excessive verbosity
  • poor tone
  • slow responses

2. Prompt engineering loop

You can review negative feedback by:

  • model version
  • prompt version
  • task type
  • tool usage
  • retrieval path

This helps refine system prompts, grounding strategy, and orchestration logic.

3. Evaluation loop

Downvoted responses can be turned into test cases for future regression testing.

That means your real user feedback becomes part of your quality engineering process.

This is one of the smartest things an AI team can do.

What a Good Minimal Version Looks Like

If you want a practical first version, build this:

  • thumbs up/down on every assistant message
  • optional comment for thumbs down
  • dedicated feedback API
  • feedback table in the database
  • message ID and conversation ID correlation

That gives you a usable starting point.

What a Good Production Version Looks Like

A mature implementation adds:

  • structured reason codes
  • trace ID correlation
  • model and prompt version tracking
  • retrieval and tool metadata
  • telemetry events
  • dashboards
  • quality reviews
  • regression test case generation from bad answers

That is where the feature goes from “nice to have” to “strategic.”

Final Recommendation

If you are building an AI chatbot today, my recommendation is simple:

Do not implement thumbs up and thumbs down as just a couple of buttons.

Implement them as a feedback architecture.

  • Make the user experience effortless.
  • Capture structured reasons when possible.
  • Tie the rating to the exact response and execution trace.
  • Store it cleanly.
  • Monitor it operationally.
  • And use it to improve prompts, models, tools, and overall experience.

When done right, this small feature becomes one of the most valuable sources of truth in your AI application.

It helps you see what users trust, what they reject, and what needs attention next.

And in the world of LLM-powered solutions, that feedback loop is not optional. It is essential.

Reach out to us at the Training Boss to partner together on AI solutions.  We are happy to architect and implement your Thumbs Up & Down solution.

The post Thumbs Up & Down for LLM Responses appeared first on The Training Boss.

Read the whole story
alvinashcraft
16 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

🚀 Big Update: GPT-Image Models + AI Agent Skills


⚠ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.

Hi!

I usually hide my taskbar clock while recording videos, and after doing that dance one too many times, I had one goal: save a few clicks.

Then yesterday, during a GitHub Copilot CLI session (watch it here), I decided to pick up the ClockTray app and turn it into a CLI tool.

And… here we are. 🚀

Introducing the ClockTray CLI

ClockTray v1.0.0 is now a dual-mode utility combining a user-friendly GUI with powerful command-line controls.

The headline feature? Full CLI support. You can now control your taskbar clock entirely from PowerShell, batch scripts, or any automation tool. Show it, hide it, check its status—all without touching the mouse.

The New CLI Commands

Here’s what you can do right from your terminal:

# Show the clock on your taskbar
clocktray show
# Hide the clock
clocktray hide
# Check the current status
clocktray status
# View all available commands
clocktray --help
# Check the version
clocktray --version

Each command is instant, non-blocking, and integrates seamlessly with your scripts and workflows.

One Tool, Two Modes

ClockTray now works the way you work:

GUI Mode: Launch the application normally for a traditional system tray experience. Right-click the icon to toggle your clock or access settings—perfect for quick manual control.

CLI Mode: Run clocktray commands from PowerShell, Windows Terminal, or your build scripts. Perfect for automation, CI/CD pipelines, and repetitive tasks.

You don’t need separate tools. You get both, in a single 30-second install.

Automation & Real-World Use Cases

Imagine these scenarios—now all possible with ClockTray:

  • Streaming Setup: Hide the clock before you go live, show it again when you stop.
  • Multi-Monitor Management: Toggle the clock based on which display you’re using—via a custom PowerShell profile.
  • Build Pipelines: Hide the clock during automated testing, restore it when complete.
  • Accessibility Workflows: Create quick-access scripts that adapt your desktop to different needs.
  • Time-Tracking Scripts: Check the clock status as part of a larger automation routine.

PowerShell scripters and DevOps engineers, you’re going to love this.

Bonus Feature: Chinese Lunar Calendar Overlay

ClockTray also includes a beautiful overlay showing the Chinese lunar calendar—perfect if you’re tracking traditional holidays, working with international teams, or simply interested in lunar dates. This overlay displays alongside your taskbar clock and respects your show/hide commands.

Installation in 30 Seconds

You can install ClockTray as a global .NET tool from NuGet:

dotnet tool install --global ElBruno.ClockTray

That’s it. You’ll have clocktray available in any PowerShell session or terminal immediately.

Update an existing installation? Use:

dotnet tool update --global ElBruno.ClockTray

Get ClockTray Today

Explore the source code, contribute features, or report issues on GitHub. The project is open-source and welcomes your feedback.

What’s Next

With v1.0.0 released, we’re already thinking ahead. Future updates may include:

  • Configuration files for default behavior
  • Extended automation scenarios
  • Cross-platform considerations

Your input shapes this tool. If you have ideas, open an issue or discussion on GitHub.

Try It Now

Whether you’re a PowerShell enthusiast, a DevOps engineer, or someone who just wants better desktop control, ClockTray v1.0.0 has something for you. Install it today, explore the CLI, and tell us what you build.

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info at https://beacons.ai/elbruno





Document Classification Without AI: Deterministic, Explainable, and Built for Production in C# .NET

In this article, we explore how to implement document classification without relying on AI. We will discuss deterministic methods that are explainable and suitable for production environments. This approach can be particularly beneficial for organizations that require transparency and control over their classification processes.


What’s new for .NET in Ubuntu 26.04


Today is launch day for Ubuntu 26.04 (Resolute Raccoon). Congratulations on the release to our friends at Canonical. Each new Ubuntu LTS comes with the latest .NET LTS so that you can develop and run .NET apps easily. Ubuntu 26.04 comes with .NET 10. You can also install .NET 8 and 9 via a separate PPA/feed. Installation instructions are demonstrated later in the post. .NET is one of the officially supported toolchains on Ubuntu. The two companies work together to ensure that .NET works well on Ubuntu.

To install .NET 10:

sudo apt update
sudo apt install dotnet-sdk-10.0

Ubuntu 26.04 container images are also available for .NET 10+, released earlier this month. They use the resolute tag. You can update -noble tags to -resolute to upgrade to the new OS.

Ubuntu 26.04 release notes describe many changes. The most relevant changes are Linux 7.0, post-quantum cryptography, and removal of cgroup v1 (container-related). We will start Linux 7.0 testing once we get Ubuntu 26.04 VMs in our lab, shortly. We added support for post-quantum cryptography in .NET 10. We added support for cgroup v2 many years ago so cgroup v1 removal should be a non-event.

We’ve been on the “road to next Ubuntu LTS” since early 2025 when we started testing Debian 13 and Ubuntu 24.10 in CI. We started testing 26.04 in late 2025. Every PR in dotnet/runtime runs validation on Ubuntu 26.04. It’s possible that our repo is one of the biggest users of Ubuntu 26.04 container images, at least up until today. Ubuntu standard releases (24.10, 25.04, …) invariably require some level of change on our part, which is all part of being ready for GA day.

Installing .NET 10

I’ll demonstrate installing .NET 10 in an ubuntu:resolute container image.

$ docker run --rm -it ubuntu:resolute
$ apt update
$ apt install -y dotnet-sdk-10.0
$ dotnet --version
10.0.105

Let’s run some C#.

$ dotnet run - << 'EOF'
using System.Runtime.InteropServices;

Console.WriteLine($"Hello {RuntimeInformation.OSDescription} from .NET {RuntimeInformation.FrameworkDescription}");
EOF
Hello Ubuntu Resolute Raccoon (development branch) from .NET .NET 10.0.5

“Run some C#”. That’s funny wording. The example above is file-based app constructed via a heredoc and passed to dotnet run via stdin. Standard unix patterns at play. If you watch closely, AI agents love to do the same thing in Python. Don’t hesitate to tell them that it works great with C# as well!

Containers

The new resolute images will be much the same as noble ones. There is no change in the image flavors we offer, like Chiseled. Recall that containers rely on the kernel of the host. For example, a 26.04 container hosted on a 24.04 host (as is the case in my examples) will be using Linux kernel 6.x.

Let’s try running our aspnetapp sample.

We first need to migrate the sample.

$ grep dotnet/ Dockerfile.chiseled
# https://github.com/dotnet/dotnet-docker/blob/main/samples/README.md
FROM --platform=$BUILDPLATFORM mcr.microsoft.com/dotnet/sdk:10.0-noble AS build
FROM mcr.microsoft.com/dotnet/aspnet:10.0-noble-chiseled
$ sed -i "s/noble/resolute/g" Dockerfile.chiseled

And now to build and run it, with resource limits.

docker build --pull -t aspnetapp -f Dockerfile.chiseled .
docker run --rm -it -p 8000:8080 -m 50mb --cpus .5 aspnetapp

Welcome to .NET

Native AOT

Native AOT (NAOT) is a great choice when you want a fast-starting, self-contained native binary. The -aot variant package gives you most of what you need. I publish all my tools as RID-specific tools. That’s outside the scope of this post. Let’s focus on the basics. I’ll use ubuntu:resolute again.

Here’s the dotnet-sdk-aot-10.0 package, among the other SDK packages:

$ apt list dotnet-sdk*
dotnet-sdk-10.0-source-built-artifacts/resolute 10.0.105-0ubuntu1 amd64
dotnet-sdk-10.0/resolute,now 10.0.105-0ubuntu1 amd64 [installed,automatic]
dotnet-sdk-aot-10.0/resolute,now 10.0.105-0ubuntu1 amd64 [installed]
dotnet-sdk-dbg-10.0/resolute 10.0.105-0ubuntu1 amd64

We need the AOT package + clang (.NET uses LLD for linking):

apt install -y dotnet-sdk-aot-10.0 clang

I’ll publish the same hello world app (written to a file this time) as NAOT (the default for file-based apps).

$ dotnet publish app.cs
$ du -h artifacts/app/*
1.4M    artifacts/app/app
3.0M    artifacts/app/app.dbg

The binary is 1.4 MB. The .dbg file is a native symbols file, much like Windows PDB. The minimum NAOT binary is about 1.0 MB. The use of the RuntimeInformation class brings in more code.

Let’s check out the startup performance.

$ time ./artifacts/app/app
Hello Ubuntu Resolute Raccoon (development branch) from .NET .NET 10.0.5

real    0m0.003s
user    0m0.001s
sys     0m0.001s

Pretty good! That’s 3 milliseconds.

NAOT works equally well for web services. Let’s take a look at our releasesapi sample.

$ grep Aot releasesapi.csproj
    <PublishAot>true</PublishAot>
$ dotnet publish
$ du -h bin/Release/net10.0/linux-x64/publish/*
4.0K    bin/Release/net10.0/linux-x64/publish/NuGet.config
4.0K    bin/Release/net10.0/linux-x64/publish/appsettings.Development.json
4.0K    bin/Release/net10.0/linux-x64/publish/appsettings.json
13M     bin/Release/net10.0/linux-x64/publish/releasesapi
32M     bin/Release/net10.0/linux-x64/publish/releasesapi.dbg
4.0K    bin/Release/net10.0/linux-x64/publish/releasesapi.staticwebassets.endpoints.json
$ ./bin/Release/net10.0/linux-x64/publish/releasesapi
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /dotnet-docker/samples/releasesapi

In another terminal:

$ curl -s http://localhost:5000/releases | jq '[.versions[] | select(.supported == true) | {version, supportEndsInDays}]'
[
  {
    "version": "10.0",
    "supportEndsInDays": 942
  },
  {
    "version": "9.0",
    "supportEndsInDays": 207
  },
  {
    "version": "8.0",
    "supportEndsInDays": 207
  }
]

Note: jq is an excellent tool that is basically “LINQ over JSON”. It takes JSON and enables generating new JSON with the analog of anonymous types.

That’s a 13 MB self-contained web service including a lot of source-generated System.Text.Json metadata and code.

Installing .NET 8 and 9

Our partners at Canonical make a hard separation between support and availability. .NET 8 and 9 are provided via the dotnet-backports PPA feed. These sometimes older, sometimes newer, versions are offered with “best-effort support”. We expect that .NET 11 will be added to this same PPA at GA.

The software-properties-common package is required to configure the feed. It is typically pre-installed on desktop versions of Ubuntu.

apt install -y software-properties-common

Configure the feed:

$ add-apt-repository ppa:dotnet/backports
PPA publishes dbgsym, you may need to include 'main/debug' component
Repository: 'Types: deb
URIs: https://ppa.launchpadcontent.net/dotnet/backports/ubuntu/
Suites: resolute
Components: main
'
Description:
The backports archive provides source-built .NET packages in cases where a version of .NET is not available in the archive for an Ubuntu release.

Currently available Ubuntu releases and .NET backports:

Ubuntu 26.04 LTS (Resolute Raccoon)
├── .NET 8.0 (End of Life on November 10th, 2026) [amd64 arm64]
└── .NET 9.0 (End of Life on November 10th, 2026) [amd64 arm64 s390x ppc64el]

Ubuntu 24.04 LTS (Noble Numbat)
├── .NET 6.0 (End of Life on November 12th, 2024) [amd64 arm64]
├── .NET 7.0 (End of Life on May 14th, 2024)      [amd64 arm64]
└── .NET 9.0 (End of Life on November 10th, 2026) [amd64 arm64 s390x ppc64el]

Ubuntu 22.04 LTS (Jammy Jellyfish)
├── .NET 9.0 (End of Life on November 10th, 2026) [amd64 arm64 s390x ppc64el]
└── .NET 10.0 (End of Life on November 14th, 2028) [amd64 arm64 s390x ppc64el]

Canonical provides best-effort support for packages contained in this archive, which is limited to the upstream lifespan or the support period of the particular Ubuntu version. See the upstream support policy [1] for more information about the upstream support lifespan of .NET releases or the Ubuntu Releases Wiki entry [2] for more information about the support period of any Ubuntu version.

[1] https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core
[2] https://wiki.ubuntu.com/Releases
More info: https://launchpad.net/~dotnet/+archive/ubuntu/backports
Adding repository.
Press [ENTER] to continue or Ctrl-c to cancel.

You can see that the support statement for the feed is included.

Once the feed is registered, new dotnet and aspnetcore packages will show up. You can filter them by version or see all of them. Whichever you want.

$ apt list dotnet-*8.0
dotnet-apphost-pack-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
dotnet-host-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
dotnet-hostfxr-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
dotnet-runtime-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
dotnet-runtime-dbg-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
dotnet-sdk-8.0/resolute 8.0.126-0ubuntu1~26.04.1~ppa1 amd64
dotnet-sdk-dbg-8.0/resolute 8.0.126-0ubuntu1~26.04.1~ppa1 amd64
dotnet-targeting-pack-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
dotnet-templates-8.0/resolute 8.0.126-0ubuntu1~26.04.1~ppa1 amd64
$ apt list aspnetcore-runtime-*
aspnetcore-runtime-10.0/resolute 10.0.5-0ubuntu1 amd64
aspnetcore-runtime-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
aspnetcore-runtime-9.0/resolute 9.0.15-0ubuntu1~26.04.1~ppa1 amd64
aspnetcore-runtime-dbg-10.0/resolute 10.0.5-0ubuntu1 amd64
aspnetcore-runtime-dbg-8.0/resolute 8.0.26-0ubuntu1~26.04.1~ppa1 amd64
aspnetcore-runtime-dbg-9.0/resolute 9.0.15-0ubuntu1~26.04.1~ppa1 amd64

And the packages that are actually installed on my machine.

apt list --installed 'aspnetcore*' 'dotnet*'
aspnetcore-runtime-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
aspnetcore-targeting-pack-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
dotnet-apphost-pack-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
dotnet-host-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
dotnet-hostfxr-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
dotnet-runtime-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
dotnet-sdk-10.0/resolute,now 10.0.105-0ubuntu1 amd64 [installed,automatic]
dotnet-sdk-aot-10.0/resolute,now 10.0.105-0ubuntu1 amd64 [installed]
dotnet-targeting-pack-10.0/resolute,now 10.0.5-0ubuntu1 amd64 [installed,automatic]
dotnet-templates-10.0/resolute,now 10.0.105-0ubuntu1 amd64 [installed,automatic]

Summary

It’s that time again, another Ubuntu LTS. I wrote a similar post two years ago for Ubuntu 24.04. Much is the same. This time around, we put in more effort to ensure that preparing for the next Ubuntu LTS was central to the distro versions we chose for CI testing in dotnet/runtime in the intervening two years. Day-of Ubuntu 26.04 support just “falls out” of that. Enjoy!

The post What’s new for .NET in Ubuntu 26.04 appeared first on .NET Blog.


LangChain.js for Beginners: A Free Course to Build Agentic AI Apps with JavaScript


Want to build AI agents with JavaScript that go beyond basic chat completions? Agents that reason, call tools, and pull from knowledge bases on their own? We put together a free, open source course to help you get there.

LangChain.js for Beginners is 8 chapters and 70+ runnable TypeScript examples. Clone the repo, add your API key to a .env file, and start building.

Why LangChain.js?

If you already know Node.js, npm, TypeScript, and async/await, you don’t need to switch to Python to build AI apps. LangChain.js gives you components for chat models, tools, agents, retrieval, and more so you’re not wiring everything from scratch.

LangChain.js is like having a fully stocked hardware store at your disposal. Instead of fabricating every tool from raw metal, you grab what’s on the shelf and get to work.

The Hardware Store Analogy - LangChain.js gives you pre-built components instead of building everything from scratch

An Agent-First Approach

Most LangChain tutorials start with document loaders and embeddings. This course gets to tools and agents early because that’s closer to how production AI systems actually work. Agents decide what to do, when to use tools, and whether they even need to search your data.

Here’s the path through the course:

Chapters 1-3 cover the foundations: your first LLM call, chat models, streaming, prompt templates, and structured outputs with Zod schemas. Standard stuff, but you need it before things get interesting.

Chapter 4 – Function Calling & Tools. This is where the AI stops chatting and starts doing things. You teach it to call your functions, and it figures out when to use them.

How the tool-calling loop works between the LLM and your code
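The loop itself is framework-independent. As a sketch with a stubbed model (the function names here are illustrative, not LangChain.js API), the shape is: the model either requests a tool call or returns a final answer, your code executes the tool, and the result is fed back in:

```typescript
// Sketch of the tool-calling loop with a stubbed "model".
type ModelReply =
  | { type: "tool_call"; tool: string; args: { city: string } }
  | { type: "final"; text: string };

// Stub standing in for the LLM: asks for the weather once, then answers.
function stubModel(history: string[]): ModelReply {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { type: "tool_call", tool: "getWeather", args: { city: "Lisbon" } };
  }
  return { type: "final", text: `Answer based on: ${history[history.length - 1]}` };
}

const tools: Record<string, (args: { city: string }) => string> = {
  getWeather: ({ city }) => `Sunny in ${city}`,
};

function runAgent(question: string): string {
  const history = [`user: ${question}`];
  for (let step = 0; step < 5; step++) { // cap the loop to avoid runaways
    const reply = stubModel(history);
    if (reply.type === "final") return reply.text;
    const result = tools[reply.tool](reply.args); // your code runs the tool
    history.push(`tool: ${result}`); // feed the result back to the model
  }
  return "max steps reached";
}
```

The course's real examples let the LLM make that tool-or-answer decision; the stub just makes the control flow visible.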

Chapter 5 – Agents. An LLM answers questions. An agent reasons through problems, picks tools, and executes multi-step plans. Chapter 5 walks through the ReAct pattern and how to build agents with LangChain.js.

LLM vs Agent comparison

Chapter 6 – MCP. The Model Context Protocol is becoming the standard for connecting AI to external services. You’ll build MCP servers and wire agents to them using both HTTP and stdio transports.


Chapters 7 & 8 bring in documents, embeddings, and semantic search, then combine everything into Agentic RAG. The agent decides when to search your knowledge base versus just answering from what it already knows. That’s a big step up from the “search everything every time” approach most RAG tutorials teach.


Each chapter includes:

  • Conceptual explanations with real-world analogies
  • Code examples you can run immediately
  • Hands-on challenges to test your understanding
  • Key takeaways to reinforce learning

Why Teach Agents Before RAG?

This comes up a lot. Think about it like a student taking an “open book” exam. Traditional RAG is like the student who flips through the textbook for every question, even “What is 2+2?” Agentic RAG is the smart student who answers simple questions from memory and only opens the book when they actually need to look something up.

By the time you reach Agentic RAG in Chapter 8, you already understand tools, agents, and MCP. Document retrieval becomes one more capability your agent can reach for. And because the agent already knows how to reason about tool selection, it can figure out when retrieval is actually necessary versus when it can answer directly. The result is faster responses, lower costs (fewer unnecessary embedding lookups), and a better experience overall. That beats bolting search onto a chatbot and hoping for the best.

Who This Course Is For

JavaScript/TypeScript developers who know npm install and async/await. No prior AI or machine learning experience needed. Each chapter starts with a real-world analogy to ground the concept before any code shows up. You’ll see comparisons to hardware stores, restaurant staff, USB-C adapters (for MCP), and more. From there, you get working code examples you can run immediately, hands-on challenges to test your understanding, and key takeaways at the end of each section. The goal is to learn by building, not by reading walls of theory.

You can also work through the course locally or use GitHub Codespaces for a cloud-based setup if you’d rather skip the local install entirely.

Works With Your AI Provider

The course is provider-agnostic. Examples run with GitHub Models (free, great for learning), Microsoft Foundry (production-ready), or OpenAI directly. The setup is the same in every case: update four environment variables in your .env file (AI_API_KEY, AI_ENDPOINT, AI_MODEL, AI_EMBEDDING_MODEL) and every example works without touching a single line of code.

Capstone and Bonus Samples

The course includes a capstone project you can check out. It’s an MCP-powered RAG server that exposes document search and document ingestion as MCP tools over HTTP. Multiple agents can connect to it and share a centralized knowledge base. It’s the kind of architecture you’d actually use in cases where you don’t want each agent to maintain its own copy of the data.

Beyond the 70+ course examples and capstone project, the README links to several bonus samples you can explore: a burger-ordering agent with a serverless API and MCP server, a serverless AI chat with RAG running on Azure, and a multi-agent travel planner that orchestrates specialized agents across Azure Container Apps.

Get Started

Visit github.com/microsoft/langchainjs-for-beginners to get started! Clone it, configure your API key, and run the examples. Chapters build on each other but are self-contained enough to jump around if something specific catches your eye.

New to generative AI concepts? Check out the companion course Generative AI with JavaScript first to get the fundamentals down.

Additional Courses

Additional LangChain courses are also available for Python and Java.

The post LangChain.js for Beginners: A Free Course to Build Agentic AI Apps with JavaScript appeared first on Microsoft for Developers.


General Availability: Single Sign-On (SSO) from Native Apps to Embedded Web Views in Microsoft Entra External ID Native Authentication

We’re excited to announce the General Availability (GA) of Single Sign-On (SSO) from Native Apps to Embedded Web Views for Microsoft Entra External ID (EEID) Native Authentication.

This release marks a major milestone in delivering seamless end-to-end authentication experiences for modern CIAM applications, bridging the gap between native and web-based app surfaces.

Why SSO matters for Native Auth

Native Authentication gives developers full control over the identity UX—enabling pixel-perfect, in-app sign-in and sign-up experiences without browser redirects.

However, real-world applications rarely stay fully native.

Most modern apps include embedded web experiences, such as:

  • Profile management pages
  • Payment or checkout flows
  • Loyalty or rewards dashboards
  • Support or account portals

Without SSO, users are forced to authenticate again when transitioning from native UI to web content—creating friction, drop-offs, and inconsistent experiences.

With GA of SSO for embedded web views, this problem is now solved.

What’s now generally available

With this release, developers can now enable seamless SSO between native and web experiences within the same app session.

✅ Seamless user experience: Users authenticate once via native UI and are automatically signed into embedded web content without a second prompt.

✅ Token-based session continuity: The native app securely retrieves an access token and passes it to the web view, enabling immediate access to protected resources.

✅ No browser dependency: SSO works entirely within embedded web views (e.g., WKWebView, Android WebView), preserving full control over UX.

✅ Developer-controlled integration: Applications can inject authentication state into requests, ensuring flexibility across custom app architectures.

How it works (high-level)

The SSO flow builds on top of EEID Native Authentication:

  1. User signs in via native authentication (SDK or API)
  2. App retrieves a valid access token
  3. App loads the embedded web view with a request containing: Authorization: Bearer <access_token>
  4. The web resource validates the token and grants access immediately

This enables a secure bridge between native token state and web session state—without reauthentication.
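On the web side, step 4 boils down to reading the Authorization header and validating the token before serving protected content. Here is a simplified, hand-rolled sketch in plain JavaScript; the validation below is a placeholder only, since a real resource would verify the JWT's signature, issuer, audience, and expiry against Microsoft Entra:

```javascript
// Extract the bearer token from an incoming request's headers.
function extractBearerToken(headers) {
  const auth = headers["authorization"] || "";
  const match = auth.match(/^Bearer\s+(.+)$/i);
  return match ? match[1] : null;
}

// Placeholder check: a real implementation would validate the JWT's
// signature, issuer, audience, and expiry against Microsoft Entra.
function isTokenValid(token) {
  return typeof token === "string" && token.length > 0;
}

// Decide whether to serve the protected page for a given request.
function handleRequest(headers) {
  const token = extractBearerToken(headers);
  if (!token || !isTokenValid(token)) {
    return { status: 401, body: "Unauthorized" };
  }
  return { status: 200, body: "Protected content" };
}
```

Because the native app already holds a valid token, the very first request from the embedded web view passes this check, which is what makes the experience feel like a single sign-on.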

Developer scenarios unlocked

This capability is especially impactful for CIAM developers building hybrid apps:

📱 Mobile + Web hybrid experiences: Enable seamless transitions between native UI and web-based modules without re-login.

🛍 Commerce and customer journeys: Avoid authentication interruptions across checkout, billing, and account management flows.

🔒 Secure embedded experiences: Maintain token-based security while delivering fully embedded web experiences.

🎯 Consistent branding: Keep users inside your app (no redirects, no context switching) while maintaining authentication continuity.

Behind the scenes: Why this matters

Embedded web views are isolated from browser session state, which means they don’t automatically inherit SSO cookies. This historically forced developers to either:

  • Re-authenticate users in the web view, or
  • Use complex workarounds

With this release, EEID Native Auth introduces a first-class, token-based SSO model—bridging native authentication and web sessions in a secure and scalable way.

This is just the beginning of the SSO journey

While this GA unlocks SSO within a single application (native → embedded web view), it represents only the first step in a broader SSO vision for EEID Native Authentication.

We are actively investing in:

  • SSO across multiple apps (native-to-native)
  • SSO across devices and sessions
  • Integration with broader identity ecosystems
  • Advanced security scenarios (policy, conditional access, passkeys)

Our goal is to deliver a comprehensive, modern SSO platform for CIAM, built on the flexibility of Native Authentication.

Ready to get started with Native Authentication?

To begin using single sign-on (SSO) from native apps to embedded web views, configure Native Authentication in your Microsoft Entra External ID tenant and integrate your mobile application using the Native Authentication SDKs or APIs. Once your app successfully signs in users via native authentication, retrieve a valid access token and use it to load your embedded web view with the user's authenticated context, enabling a seamless, no-relogin experience across native and web surfaces.

Stay connected and informed

To learn more or test out features in the Microsoft Entra suite of products, visit our developer center. Make sure you subscribe to the Identity blog for more insights and to keep up with the latest on all things Identity. And, follow us on YouTube for video overviews, tutorials, and deep dives.

The post General Availability: Single Sign-On (SSO) from Native Apps to Embedded Web Views in Microsoft Entra External ID Native Authentication appeared first on Microsoft Entra Identity Platform.
