
Boom – A Node-Based AI Chat Tool built with Svelte and Photino

<h1 id="tl;dr">TL;DR<a title="#tl;dr" href="#tl;dr"></a></h1> <p>Boom is a hackathon project to interact with LLMs in a visual node-based way, allowing branching conversations on an infinite canvas. Built with Svelte, Typescript, TailwindCSS and Photino for various model providers including OpenAI, Ollama and LM Studio.</p> <p><img src="/2025/12/Boom-A-Node-Based-AI-Chat-Tool-built-with-Svelte-and-Photino/screenshot.png" alt="screenshot of Boom" loading="lazy" class="φbp"></p> <h1 id="intro">Intro<a title="#intro" href="#intro"></a></h1> <p>I was attending an AI themed hackathon last weekend. Most teams there were building apps infused with AI or integrating AI capabilities into existing products. I wanted to try something a bit different and wanted to explore new ways of interacting with AI, specifically Large Language Models (LLMs). The result of that is Boom, a node-based AI chat tool that reimagines how we converse with AI.</p> <h1 id="boom">Boom<a title="#boom" href="#boom"></a></h1> <p>Boom is dutch for “tree”, and it perfectly captures the essence of this tool. Instead of linear conversation threads with messages going back and forth, Boom uses a node-based interface where each interaction with the AI is represented as a node on an infinite canvas. This allows users to branch out their conversations, explore multiple ideas simultaneously, and visualize the flow of their interactions.</p> <p>I noticed that traditional chat interfaces can become cumbersome when exploring more complex topics with LLMs. I often found myself scrolling through long threads, trying to piece together different parts of the conversation - looking for a way to “branch out” at a specific point in the discussion to explore alternative ideas or directions (while retaining the previous context to go back if needed). Too often I’d create new threads and copy-paste previous messages just to explore a different angle or ideas with a more narrow context for the model. Boom addresses this by allowing me to create branches from any point in the conversation, making it easy to explore different paths. The history only contains the previous nodes leading up to the root of the tree.</p> <h2 id="how-it-works">How It Works<a title="#how-it-works" href="#how-it-works"></a></h2> <p>In Boom, each message you send to the AI is represented as a node on the canvas. You can click on any node to expand it, revealing the AI’s response. All with markdown support, so you can include formatted text, code snippets, images, and more in your messages. The canvas is designed to be infinite, allowing you to create as many nodes and branches as you need without any constraints. Each canvas can be exported as a .boom file which contains all nodes and their connections, allowing you to save and share your conversations easily. You can also import these files back into Boom to switch between multiple projects.</p> <h2 id="branching">Branching<a title="#branching" href="#branching"></a></h2> <p>The rules for branching a new message are fairly simple. By default each message is appended below the previous one - just as you would expect in a traditional chat interface. If there is already a message connected below the current message, a new branch is created to the right creating a new path of conversation. These branched out nodes are not limited in number, so you can create as many branches as you like. 
You can also drag and drop nodes around the canvas to organize them in a way that makes sense to you. A conversation tree might look like this:

the most popular superheroes in the dc multiverse?
└─ top-ten list (Superman, Batman, Wonder Woman, The Flash, Green Lantern, Aquaman, Cyborg, Martian Manhunter, Shazam, Nightwing)
   ├─ what are their real names? → real names for all ten (Clark Kent, Bruce Wayne, Diana Prince, ...)
   ├─ who is in the justice league? → the core League roster
   └─ who are their main enemies? → main villains per hero (Lex Luthor, Joker, Ares, ...)
      └─ what are the best comics featuring these villains? → recommended reading (The Killing Joke, Sinestro Corps War, ...)

Since user- and LLM-created nodes are rendered the same way, it's also possible to modify the model's responses directly and use these corrections in further interactions. This makes experimenting quite fun and allows you to guide the model a bit more.

Tech Stack

Boom is built using Svelte + TypeScript for the frontend, with TailwindCSS for styling, all powered by Vite. For the desktop application, I used Photino, a lightweight webview-based framework that hosts the Svelte app in a native window. This allows Boom to run on Windows, macOS, and soon Linux, providing a seamless experience across platforms and easy access to native features (for example, storing API keys in the OS keychain).

As I haven't built a backend for Boom, the app is currently "bring your own key" (BYOK) for interacting with the LLMs. This keeps things simple and secure, as users maintain control over their data and usage. There is not only support for OpenAI models but also for local models via LM Studio or Ollama, so everything can run locally without any data leaving the user's machine.
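What makes the BYOK approach work across providers is that LM Studio and Ollama both expose OpenAI-compatible HTTP endpoints, so switching providers is mostly a matter of changing the base URL. A minimal illustration in Python (Boom itself is TypeScript; the local ports are the tools' documented defaults, and the model name is illustrative):

from openai import OpenAI

# Any OpenAI-compatible backend can be targeted by swapping base_url and key.
providers = {
    "openai":    {"base_url": "https://api.openai.com/v1", "api_key": "sk-..."},
    "ollama":    {"base_url": "http://localhost:11434/v1", "api_key": "ollama"},     # default Ollama port
    "lm_studio": {"base_url": "http://localhost:1234/v1",  "api_key": "lm-studio"},  # default LM Studio port
}

cfg = providers["ollama"]
client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
reply = client.chat.completions.create(
    model="llama3",  # whatever model the chosen provider serves
    messages=[{"role": "user", "content": "the most popular superheroes in the dc multiverse?"}],
)
print(reply.choices[0].message.content)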
I might add support for more model providers, or a backend in the middle for non-technical users, in the future. I also need to look into the possibility of a web version, which means investigating how to support the BYOK approach in a browser environment.

Future Plans

I haven't found much time to work on Boom since the hackathon, other than vibe coding a landing page: https://johnnys.page/boom. There you can also find download links for the desktop app to try it out yourself; consider it an early beta. Next on my list is letting users create per-project system prompts, so each canvas can have its own context and behavior. In my day job I'm also working on a similar node-based LLM approach integrated directly into DEON (https://deon.de), a visual collaboration platform powered by AI with many more features, so stay tuned for updates on that front as well!

The “Golden Triangle” of Agentic Development with Microsoft Agent Framework: AG-UI, DevUI & OpenTelemetry Deep Dive


In the explosive era of Agentic AI, we’re not just seeking more powerful models—we’re searching for a development experience that lets developers actually get some sleep. When building Agents locally, we’ve traditionally faced three major challenges:

  1. Black-Box Execution: What is my Agent thinking? Why is it stuck? (Debugging is hard)
  2. Interaction Silos: I’ve built my Agent—how do I quickly demo a beautiful UI to stakeholders? (Productization is hard)
  3. Performance Blind Spots: How many tokens are being consumed? Where’s the latency? (Optimization is hard)

Today, I'll walk you through a classic case from the Microsoft Agent Framework Samples, GHModel.AI, to reveal the "Golden Triangle" development stack that perfectly solves these pain points: DevUI, AG-UI, and OpenTelemetry.

Let’s explore how this powerful combination empowers the entire local development lifecycle.

Phase 1: Creation — Standing on the Shoulders of GitHub Models

In the GHModel.AI case, we first address the “brain” problem.

Traditional local development is often constrained by computing resources or expensive API keys. This case cleverly leverages GitHub Models. As an evangelist, I must strongly recommend this combination:

  • Zero-Barrier Access: Call GPT-4o, Llama 3, and other cutting-edge models directly with your GitHub account—no complex Azure configuration or credit card binding required.
  • Standardized SDK: Through Agent Framework’s abstraction layer, we can switch model backends with just a few lines of code.

In this case’s code structure, you’ll find Agent definitions become exceptionally clear. No more spaghetti-style Python/C# scripts—just structured “declarations.”

Quick Start Code

Python:

# Python - Create Agents with GitHub Models

import os

from agent_framework.openai import OpenAIChatClient

chat_client = OpenAIChatClient(
    base_url=os.environ.get("GITHUB_ENDPOINT"),    # 🌐 GitHub Models API endpoint
    api_key=os.environ.get("GITHUB_TOKEN"),        # 🔑 Authentication token
    model_id=os.environ.get("GITHUB_MODEL_ID")     # 🎯 Selected AI model
)


# Create Concierge Agent

CONCIERGE_AGENT_NAMES = "Concierge"
CONCIERGE_AGENT_INSTRUCTIONS = """
            You are a hotel concierge who has opinions about providing the most local and authentic experiences for travelers.
            The goal is to determine if the front desk travel agent has recommended the best non-touristy experience for a traveler.
            If so, state that it is approved.
            If not, provide insight on how to refine the recommendation without using a specific example. """


concierge_agent = chat_client.create_agent(
    instructions=CONCIERGE_AGENT_INSTRUCTIONS,
    name=CONCIERGE_AGENT_NAMES,
)

# Create FrontDesk Agent

FRONTEND_AGENT_NAMES = "FrontDesk"
FRONTEND_AGENT_INSTRUCTIONS = """
            You are a Front Desk Travel Agent with ten years of experience and are known for brevity as you deal with many customers.
            The goal is to provide the best activities and locations for a traveler to visit.
            Only provide a single recommendation per response.
            You're laser focused on the goal at hand.
            Don't waste time with chit chat.
            Consider suggestions when refining an idea.
            """


frontend_agent = chat_client.create_agent(
    instructions=FRONTEND_AGENT_INSTRUCTIONS,
    name=FRONTEND_AGENT_NAMES,
)

# Create Workflow

from agent_framework import AgentExecutor, WorkflowBuilder  # import path assumed from the agent-framework samples

frontend_executor = AgentExecutor(frontend_agent, id="frontend_agent")
concierge_executor = AgentExecutor(concierge_agent, id="concierge_agent")

workflow = (
    WorkflowBuilder()
    .set_start_executor(frontend_executor)
    .add_edge(frontend_executor, concierge_executor)
    .build()
)
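To actually exercise this workflow, a minimal driver could look like the following. This is a sketch: run() returning a result object with get_outputs() matches recent agent-framework builds, but the exact surface may differ by version.

import asyncio

async def main():
    # Send a traveler request through the FrontDesk -> Concierge review chain.
    result = await workflow.run("Recommend an activity for a first-time visitor to Seoul.")
    print(result.get_outputs())  # assumption: the result exposes get_outputs()

asyncio.run(main())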

.NET:

// .NET - Create Agents with GitHub Models

var openAIOptions = new OpenAIClientOptions()
{
    Endpoint = new Uri(github_endpoint)
};
        
var openAIClient = new OpenAIClient(new ApiKeyCredential(github_token), openAIOptions);

var chatClient = openAIClient.GetChatClient(github_model_id).AsIChatClient();

const string ReviewerAgentName = "Concierge";
const string ReviewerAgentInstructions = @"
    You are a hotel concierge who has opinions about providing the most local and authentic experiences for travelers.
    The goal is to determine if the front desk travel agent has recommended the best non-touristy experience for a traveler.
    If so, state that it is approved.
    If not, provide insight on how to refine the recommendation without using a specific example. ";

const string FrontDeskAgentName = "FrontDesk";
const string FrontDeskAgentInstructions = @"
    You are a Front Desk Travel Agent with ten years of experience and are known for brevity as you deal with many customers.
    The goal is to provide the best activities and locations for a traveler to visit.
    Only provide a single recommendation per response.
    You're laser focused on the goal at hand.
    Don't waste time with chit chat.
    Consider suggestions when refining an idea.
    ";

var reviewerAgentBuilder = new AIAgentBuilder(chatClient.CreateAIAgent(
    name: ReviewerAgentName,
    instructions: ReviewerAgentInstructions));

var frontDeskAgentBuilder = new AIAgentBuilder(chatClient.CreateAIAgent(
    name: FrontDeskAgentName,
    instructions: FrontDeskAgentInstructions));

AIAgent reviewerAgent = reviewerAgentBuilder.Build(serviceProvider);
AIAgent frontDeskAgent = frontDeskAgentBuilder.Build(serviceProvider);

// Create Workflow
var workflow = new WorkflowBuilder(frontDeskAgent)
    .AddEdge(frontDeskAgent, reviewerAgent)
    .Build();

Phase 2: Testing & Debugging — DevUI

This is the highlight of this article. Previously, we debugged Agents using the print() method and endless console logs. Now, we have DevUI.

What is DevUI? It’s an “inner-loop” tool designed specifically for developers within Agent Framework. When GHModel.AI runs, DevUI provides a visual console:

  1. Chain of Thought Visualization: You no longer need to guess why the Agent chose Tool A over Tool B. In DevUI, you can see each Reasoning, Action, and Observation step laid out like a flowchart. This isn't just debugging; it's an "X-ray" of Agent behavior.

  2. Real-Time State Monitoring: What’s stored in the Agent’s Memory? Is the context overflowing? DevUI lets you view Conversation State in real-time, quickly pinpointing the root cause of “hallucinations.”

Python:

cd GHModel.Python.AI/GHModel.Python.AI.Workflow.DevUI
pip install agent-framework agent-framework-devui python-dotenv
python main.py
# Browser opens automatically at http://localhost:8090
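For reference, main.py is typically little more than a call to DevUI's serve helper. A sketch (the serve signature is an assumption based on the agent-framework-devui package):

# main.py - host the workflow in DevUI's visual console
from agent_framework.devui import serve
from workflow import workflow  # the two-agent workflow built earlier (assumed module name)

# Serve the DevUI console on port 8090 and open the browser automatically.
serve(entities=[workflow], port=8090, auto_open=True)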

.NET:

cd GHModel.dotNET.AI/GHModel.dotNET.AI.Workflow.DevUI
dotnet run
# DevUI: https://localhost:50516/devui
# OpenAI API: https://localhost:50516/v1/responses
 DevUI dramatically shortens the "write-run-fix" feedback loop. For complex Multi-Agent collaboration scenarios, it's your command center.


Phase 3: Delivery & Interaction — AG-UI

Debugging is done, and your boss says: “Can you send me a link so I can try it too?” At this moment, don’t hand-write a React frontend! What you need is AG-UI (the Agent-User Interaction protocol).

What does AG-UI solve? It’s a standardized Agent-User interaction protocol. In the GHModel.AI case, by integrating AG-UI:

  • Out-of-the-Box Frontend: Agent Framework can directly expose interfaces compliant with the AG-UI protocol. Any frontend supporting AG-UI (like components provided by CopilotKit) can connect directly to your local Agent.
  • Streaming Responses & Generative UI: It supports not only text streaming but also server-side UI component pushing. This means your Agent can dynamically render charts, tables, or cards on the user interface based on content—no frontend hardcoding required.

AG-UI Supported Features

  • ✅ Streaming responses (SSE)
  • ✅ Backend tool rendering
  • ✅ Human-in-the-Loop approvals
  • ✅ Shared state synchronization
  • ✅ Seamless CopilotKit integration

Implementation Examples

Python Server:

# Server — Register AG-UI endpoint
from fastapi import FastAPI

from agent_framework_ag_ui import add_agent_framework_fastapi_endpoint
from workflow import workflow

app = FastAPI()
agent = workflow.as_agent(name="Travel Agent")
add_agent_framework_fastapi_endpoint(app, agent, "/")
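Run it like any other FastAPI app (assuming the snippet above lives in server.py):

uvicorn server:app --port 8000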

.NET Server:

// Program.cs — ASP.NET Core AG-UI endpoint registration
using Microsoft.Agents.AI.Hosting.AGUI.AspNetCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddAGUI();

var app = builder.Build();
AIAgent workflowAgent = ChatClientAgentFactory.CreateTravelAgenticChat();
app.MapAGUI("/", workflowAgent);
await app.RunAsync();

The transition from DevUI to AG-UI is a seamless switch from the “developer perspective” to the “user perspective.” We can use CopilotKit to build the user-facing UI.


Phase 4: Performance Tracking — OpenTelemetry

Before the Agent goes live, besides functioning correctly, we must answer: “Is it fast? Is it expensive?”

This is where OpenTelemetry (OTel) comes in. In Agent Framework, OpenTelemetry support is baked in. In the GHModel.AI code, enabling it typically takes just one line of configuration (like AddOpenTelemetry or setup_observability):

  1. Distributed Tracing: When a request comes in, passes through routing, Guardrails, calls GitHub Models, and returns results—OTel generates a complete Flame Graph. You can precisely see:

    • How long does network I/O take?
    • How long does LLM Token generation take?
    • How long does local logic processing take?
  2. Cost Transparency: Combined with OTel Metrics, we can monitor Token consumption rates. This is crucial for cost estimation when migrating from GitHub Models (free/prototype stage) to Azure OpenAI (paid/production stage).

🔧 Quick Setup

Python:

# Enable telemetry in one line
from agent_framework.observability import setup_observability
from agent_framework import setup_logging

setup_observability()
setup_logging()

.NET:

// OpenTelemetry configuration
using OpenTelemetry;
using OpenTelemetry.Trace;

var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("*Microsoft.Agents.AI")
    .AddOtlpExporter(options => options.Endpoint = new Uri("http://localhost:4317"))
    .Build();

Environment Variables:

ENABLE_OTEL=true
ENABLE_SENSITIVE_DATA=true               # Enable sensitive data logging in dev
OTLP_ENDPOINT=http://localhost:4317       # Aspire Dashboard / OTLP Collector
APPLICATIONINSIGHTS_CONNECTION_STRING=... # Azure Application Insights (optional)

📈 Visualization Options

Platform             | Use Case               | Quick Start
Aspire Dashboard     | Local development      | docker run --rm -d -p 18888:18888 -p 4317:18889 mcr.microsoft.com/dotnet/aspire-dashboard:latest
Application Insights | Production monitoring  | Set APPLICATIONINSIGHTS_CONNECTION_STRING
Grafana Dashboards   | Advanced visualization | Agent Overview and Workflow Overview dashboards


Architecture Overview

[Architecture diagram]

Summary: Build Your “Efficiency Closed Loop”

Returning to the GHModel.AI case, it’s not just a code sample—it demonstrates best practice architecture for modern Agent development:

Layer               | Tool          | Purpose
Model Layer         | GitHub Models | Rapidly validate ideas with free, cutting-edge models
Debug Layer         | DevUI         | Gain a “God Mode” view, iterate logic quickly
Presentation Layer  | AG-UI         | Standardize output, generate user interfaces in seconds
Observability Layer | OpenTelemetry | Data-driven optimization, no more guesswork

Final Thoughts

I encourage every Agent developer to dive deep into the code in Agent-Framework-Samples. Stop debugging AI with Notepad—arm yourself with these modern weapons and go build next-generation intelligent applications!

The combination of GitHub Models for rapid prototyping, DevUI for visual debugging, AG-UI for seamless user interaction, and OpenTelemetry for production-grade observability represents a paradigm shift in how we build agentic applications.

Your Agent development journey starts here. The future is agentic. Let’s build it together!

Resources

  1. Microsoft Agent Framework: github.com/microsoft/agent-framework
  2. Microsoft Agent Framework Samples: github.com/microsoft/Agent-Framework-Samples
  3. Microsoft Agent Framework DevUI Samples: DevUI Getting Started guide
  4. Microsoft Agent Framework Observability Guide: Observability Samples

The post The “Golden Triangle” of Agentic Development with Microsoft Agent Framework: AG-UI, DevUI & OpenTelemetry Deep Dive appeared first on Semantic Kernel.


OSS AI Summit: Building with LangChain


Most AI demos look impressive in a notebook, but they fall apart the moment they touch real data, real users, or real scale. The companies that will win in 2026 aren’t the ones with the flashiest prototypes; they’re the ones who can reliably design, debug, and deploy agent-powered AI applications.

That’s exactly why we created the OSS AI Summit.

On December 10th, we’re bringing together people from LangChain and Microsoft for a focused, no-fluff two-hour online event centered on LangChain v1 and the patterns that turn experiments into production systems.

What you’ll walk away with

  • A clear mental model of LangChain v1: how components, agents, tools, and memory actually fit together in Python and JavaScript
  • Real-world war stories from teams running agents to solve real-world problems (including a candid fireside chat with people from Intercom)
  • Live walkthrough of MCP (Model Context Protocol) powering single and multi-agent systems with LangChain.js
  • Practical demos you can run today, including agents that query databases, call APIs, and coordinate across specialized roles
  • A Q&A panel with Hunter Lovell and Sydney Runkle from LangChain

Try the code before (or after) the event

We’re sharing three complete reference apps so you can explore the concepts hands-on:

• AI Sales Analyst – Python agent that analyzes real sales data in PostgreSQL using LangChain + Azure OpenAI + MCP

https://github.com/Azure-Samples/langchain-agent-python

• AI Travel Agency – Multi-agent system in LangChain.js with MCP servers in Python, Node.js, Java, and .NET, deployed on Azure Container Apps

https://github.com/Azure-Samples/ai-travel-agents

• Serverless Burger-Order Agent – End-to-end LangChain.js agent using MCP to place orders via a real API, running on Azure Static Web Apps + Azure Functions

https://github.com/Azure-Samples/mcp-agent-langchainjs
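If you want a taste of LangChain v1 before the event, here’s a minimal Python agent using the v1 create_agent API. It’s a sketch: the tool is a stub, the model string is illustrative, and OPENAI_API_KEY must be set in your environment.

from langchain.agents import create_agent
from langchain_core.tools import tool

@tool
def get_order_status(order_id: str) -> str:
    """Look up the status of an order (stubbed for the example)."""
    return f"Order {order_id} is out for delivery."

# create_agent wires a model, tools, and the agent loop together (LangChain v1).
agent = create_agent(model="openai:gpt-4o-mini", tools=[get_order_status])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Where is order 42?"}]}
)
print(result["messages"][-1].content)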

Who this is for

  • Developers moving from simple chatbots to agentic workflows
  • Architects figuring out how to connect LLMs to internal systems securely
  • Engineering leads who need proven patterns for reliability, observability, and scale on Azure

Date: December 10, 2025
Time: 8:00 – 10:00 AM Pacific Time
Format: Free live stream
Register: https://aka.ms/OSSAISummitRegistration

We’ll see you there. 🚀


Microsoft’s ugly sweaters return with Clippy, Xbox, and Zune brown options

1 Share

Microsoft is bringing back its ugly sweaters for the holiday season. After taking a break for 2024, the company has an “Artifact” holiday sweater with lots of retro iconography, an even uglier Zune brown option, and a green Xbox version.

Clippy was the star of Microsoft’s ugly sweater in 2022, and the Artifact option this year puts the paperclip at the center, surrounded by MSN, Minesweeper, Internet Explorer, MS-DOS, and plenty of Windows logos. The Zune brown holiday sweater has a play button that I really hope lights something up. Both the Artifact and Zune sweaters are available for $79.95, and the Xbox sweater can be pre-ordered for $59.95.

Microsoft first started sending out ugly sweaters to Windows fans in 2018, and then sold them to customers from 2020 onwards. For some reason the software maker didn’t ship an ugly sweater in 2024, but this year’s sweaters can be purchased through Microsoft’s online company store or its brick-and-mortar location in Redmond, Washington. Microsoft will also have its holiday sweaters available at its Microsoft store (experience center) in New York City.


Introducing AWS Transform custom: Crush tech debt with AI-powered code modernization

1 Share

Technical debt is one of the most persistent challenges facing enterprise development teams today. Studies show that organizations spend 20% of their IT budget on technical debt instead of advancing new capabilities. Whether it’s upgrading legacy frameworks, migrating to newer runtime versions, or refactoring outdated code patterns, these essential but repetitive tasks consume valuable developer time that could be spent on innovation.

Today, we’re excited to announce AWS Transform custom, a new agent that fundamentally changes how organizations approach modernization at scale. This intelligent agent combines pre-built transformations for Java, Node.js, and Python upgrades with the ability to define custom transformations. By learning specific transformation patterns and automating them across entire codebases, customers using AWS Transform custom have in many cases achieved up to an 80% reduction in execution time, freeing developers to focus on innovation.

You can define transformations using your documentation, natural language descriptions, and code samples. The service then applies these specific patterns consistently across hundreds or thousands of repositories, improving its effectiveness through both explicit feedback and implicit signals like developers’ manual fixes within your transformation projects.

AWS Transform custom offers both CLI and web interfaces to suit different modernization needs. You can use the CLI to define transformations through natural language interactions and execute them on local codebases, either interactively or autonomously. You can also integrate it into code modernization pipelines or workflows, making it ideal for machine-driven automation. Meanwhile, the web interface provides comprehensive campaign management capabilities, helping teams track and coordinate transformation progress across multiple repositories at scale.

Language and framework modernization
AWS Transform supports runtime upgrades without the need to provide additional information, understanding not only the syntax changes required but also the subtle behavioral differences and optimization opportunities that come with newer versions. The same intelligent approach applies to Node.js, Python and Java runtime upgrades, and even extends to infrastructure-level transitions, such as migrating workloads from x86 processors to AWS Graviton.

It also navigates framework modernization with sophistication. When organizations need to update their Spring Boot applications to take advantage of newer features and security patches, AWS Transform custom doesn’t merely update version numbers but understands the cascading effects of dependency changes, configuration updates, and API modifications.

For teams facing more dramatic shifts, such as migrating from Angular to React, AWS Transform custom can learn the patterns of component translation, state management conversion, and routing logic transformation that make such migrations successful.

Infrastructure and enterprise-scale transformations
The challenge of keeping up with evolving APIs and SDKs becomes particularly acute in cloud-based environments where services are continuously improving. AWS Transform custom supports AWS SDK updates across a broad spectrum of programming languages that enterprises use including Java, Python, and JavaScript. The service understands not only the mechanical aspects of API changes, but also recognizes best practices and optimization opportunities available in newer SDK versions.

Infrastructure as Code transformations represent another critical capability, especially as organizations evaluate different tooling strategies. Whether you’re converting AWS Cloud Development Kit (AWS CDK) templates to Terraform for standardization purposes, or updating AWS CloudFormation configurations to access new service features, AWS Transform custom understands the declarative nature of these tools and can maintain the intent and structure of your infrastructure definitions.

Beyond these common scenarios, AWS Transform custom excels at addressing the unique, organization-specific code patterns that accumulate over years of development. Every enterprise has its own architectural conventions, utility libraries, and coding standards that need to evolve over time. It can learn these custom patterns and help refactor them systematically so that institutional knowledge and best practices are applied consistently across the entire application portfolio.

AWS Transform custom is designed with enterprise development workflows in mind, enabling center of excellence teams and system integrators to define and execute organization-wide transformations while application developers focus on reviewing and integrating the transformed code. DevOps engineers can then configure integrations with existing continuous integration and continuous delivery (CI/CD) pipelines and source control systems. It also includes pre-built transformations for Java, Node.js and Python runtime updates which can be particularly useful for AWS Lambda functions, along with transformations for AWS SDK modernization to help teams get started immediately.

Getting started
AWS Transform makes complex code transformations manageable through both pre-built and custom transformation capabilities. Let’s start by exploring how to use an existing transformation to address a common modernization challenge: upgrading AWS Lambda functions due to end-of-life (EOL) runtime support.

For this example, I’ll demonstrate migrating a Python 3.8 Lambda function to Python 3.13, as Python 3.8 reached EOL and is no longer receiving security updates. I’ll use the CLI for this demo, but I encourage you to also explore the web interface’s powerful campaign management capabilities.

First, I use the command atx custom def list to explore the available transformation definitions. You can also access this functionality through a conversational interface by typing only atx instead of issuing the command directly, if you prefer.

This command displays all available transformations, including both AWS-managed defaults and any existing custom transformations created by users in my organization. AWS-managed transformations are identified by the AWS/ prefix, indicating they’re maintained and updated by AWS. In the results, I can see several options, such as AWS/java-version-upgrade for Java runtime modernization, AWS/python-boto2-to-boto3-migration for updating Python AWS SDK usage, and AWS/nodejs-version-upgrade for Node.js runtime updates.

For my Python 3.8 to 3.13 migration, I’ll use the AWS/python-version-upgrade transformation.

You run a migration by using the atx custom def exec command. Please consult the documentation for more details about the command and all its options. Here, I run it against my project repository, specifying the transformation name. I also add pytest to run unit tests for validation. More importantly, I use the additionalPlanContext section in the --configuration input to specify which Python version I want to upgrade to. For reference, here’s the command I used for my demo (split across multiple lines and indented for clarity):

atx custom def exec \
  -p /mnt/c/Users/vasudeve/Documents/Work/Projects/ATX/lambda/todoapilambda \
  -n AWS/python-version-upgrade \
  -C "pytest" \
  --configuration \
    "additionalPlanContext= The target Python version to upgrade to is Python 3.13" \
  -x -t

AWS Transform then starts the migration process. It analyzes my Lambda function code, identifies Python 3.8-specific patterns, and automatically applies the necessary changes for Python 3.13 compatibility. This includes updating syntax for deprecated features, modifying import statements, and adjusting any version-specific behaviors.
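As a concrete illustration of the kind of change involved (my own example, not output from the tool): distutils was removed from the standard library in Python 3.12, so code that still imports it has to move to a supported equivalent.

# Before (worked on Python 3.8):
#   from distutils.util import strtobool

# After (Python 3.13): distutils is gone, so inline a small replacement.
def strtobool(value: str) -> bool:
    """Minimal stand-in for the removed distutils.util.strtobool."""
    v = value.strip().lower()
    if v in {"y", "yes", "t", "true", "on", "1"}:
        return True
    if v in {"n", "no", "f", "false", "off", "0"}:
        return False
    raise ValueError(f"invalid truth value {value!r}")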

After execution, it provides a comprehensive summary, including a report on dependencies updated in requirements.txt with Python 3.13-compatible package versions, instances of deprecated syntax replaced with current equivalents, updated runtime configuration notes for AWS Lambda deployment, suggested test cases to validate the migration, and more. It also provides a body of evidence that serves as proof of success.

The migrated code lives in a local branch so you can review and merge when satisfied. Alternatively, you can keep providing feedback and iterating until you’re happy that the migration is fully complete and meets your expectations.

This automated process turns what would typically require hours of manual work into a streamlined, consistent upgrade that preserves code quality while ensuring compatibility with the newer Python runtime.

Creating a new custom transformation
While AWS-managed transformations handle common scenarios effectively, you can also create custom transformations tailored to your organization’s specific needs. Let’s explore how to create a custom transformation to see how AWS Transform learns from your specific requirements.

I type atx to initialize the atx CLI and start the process.

The first thing it asks me is whether I want to use one of the existing transformations or create a new one. I choose to create a new one. Notice that from here on, the whole conversation takes place in natural language, not commands. I typed new one, but I could have typed I want to create a new one and it would have understood it exactly the same.

It then prompts me to provide more information about the kind of transformation I’d like to perform. For this demo, I’m going to migrate an Angular application, so I type angular 16 to 19 application migration which prompts the CLI to search for all transformations available for this type of migration. In my case, my team has already created and made available a few Angular migrations, so it shows me those. However, it warns me that none of them is an exact match to my specific request for migrating from Angular 16 to 19. It then asks if I’d like to select from one of the existing transformations listed or create a custom one.

I choose to create a custom one by continuing to use natural language and typing create a new one as a command. Again, this could be any variation of that statement provided that you indicate your intentions clearly. It follows by asking me a few questions including whether I have any useful documentation, example code or migration guides that I can provide to help customize the transformation plan.

For this demo, I’m only going to rely on AWS Transform to provide me with good defaults. I type I don't have these details. Follow best practices. and the CLI responds by telling me that it will create a comprehensive transformation definition for migrating Angular 16 to Angular 19. Of course, this means I’m relying on the model’s pre-trained knowledge to generate results based on best practices. As usual, the recommendation is to provide as much information and relevant data as possible at this stage of the process for better results. However, you don’t need to have all the data upfront. You can keep providing data at any time as you iterate through the process of creating the custom transformation definition.

The transformation definition is generated as a Markdown file containing a summary and a comprehensive sequence of implementation steps, grouped logically into phases such as premigration preparation, processing and partitioning, static dependency analysis, searching for and applying specific transformation rules, and step-by-step migration with iterative validation.

It’s interesting to see that AWS Transform opted for the best practice of incremental framework updates, creating steps to migrate the application first to 17, then 18, then 19, instead of trying to go directly from 16 to 19, to minimize issues.

Note that the plan includes various stages of testing and verification to confirm that each phase can be concluded with confidence. At the very end, it also includes a final validation stage listing exit criteria: a comprehensive set of tests against all aspects of the application, used to accept the migration as successfully complete.

After the transformation definition is created, AWS Transform asks me what I would like to do next. I can choose to review or modify the transformation definition, iterating through this process as many times as I need until I arrive at one that I’m satisfied with. I could also apply the transformation definition to an Angular codebase right away. However, I first want to make this transformation available to my team members as well as myself so we can all use it again in the future, so I choose option 4 to publish it to the registry.

This custom transformation needs a name and a description of its objective, which are displayed when users browse the registry. AWS Transform automatically extracts those from context and asks me if I would like to modify them before going ahead. I like the sensible default of “Angular-16-to-19-Migration”, and the objective is clearly stated, so I accept the suggestions and publish by answering yes, looks good.

Now that the transformation definition is created and published, I can use it and run it multiple times against any code repository. Let’s apply the transformation to a code repository with a project written in Angular 16. I now choose option 1 from the follow-up prompt and the CLI asks me for the path in my file system to the application that I want to migrate and, optionally, the build command that it should use.

After I provide that information, AWS Transform proceeds to analyze the code base and formulate a thorough step-by-step transformation plan based on the definition created earlier. After it’s done, it creates a JSON file containing the detailed migration plan specifically designed for applying our transformation definition to this code base. Similar to the process of creating the transformation definition, you can review and iterate through this plan as much as you need, providing it with feedback and adjusting it to any specific requirements you might have.

When I’m ready to accept the plan, I can use natural language to tell AWS Transform that we can start the migration process. I type looks good, proceed and watch the progress in my shell as it starts executing the plan and making the changes to my code base one step at a time.

The time it takes will vary depending on the complexity of the application. In my case, it took a few minutes to complete. After it has finished, it provides me with a transformation summary and the status of each of the exit criteria included in the final verification phase of the plan, alongside all the evidence to support the reported status. For example, the Application Build – Production criterion was listed as passed, and the evidence provided included the incremental Git commits, the time it took to complete the production build, the bundle size, the build output message, and details about all the output files created.

Conclusion
AWS Transform represents a fundamental shift in how organizations approach code modernization and technical debt. The service helps transform what was once a fragmented, team-by-team effort into a unified, intelligent capability that eliminates knowledge silos, keeping your best practices and institutional knowledge available as scalable assets across the entire organization. This helps accelerate modernization initiatives while freeing developers to spend more time on innovation and driving business value instead of repetitive maintenance and modernization tasks.

Things to know

AWS Transform custom is now generally available. Visit the get started guide to start your first transformation campaign or check out the documentation to learn more about setting up custom transformation definitions.


AWS Transform announces full-stack Windows modernization capabilities


Earlier this year in May, we announced the general availability of AWS Transform for .NET, the first agentic AI service for modernizing .NET applications at scale. During the early adoption period of the service, we received valuable feedback indicating that, in addition to .NET application modernization, you would like to modernize SQL Server and legacy UI frameworks. Your applications typically follow a three-tier architecture—presentation tier, application tier, and database tier—and you need a comprehensive solution that can transform all of these tiers in a coordinated way.

Today, based on your feedback, we’re excited to announce AWS Transform for full-stack Windows modernization, to offload complex, tedious modernization work across the Windows application stack. You can now identify application and database dependencies and modernize them in an orchestrated way through a centralized experience.

AWS Transform accelerates full-stack Windows modernization by up to five times across application, UI, database, and deployment layers. Along with porting .NET Framework applications to cross-platform .NET, it migrates SQL Server databases to Amazon Aurora PostgreSQL-Compatible Edition with intelligent stored procedure conversion and dependent application code refactoring. For validation and testing, AWS Transform deploys applications to Amazon Elastic Compute Cloud (Amazon EC2) Linux or Amazon Elastic Container Service (Amazon ECS), and provides customizable AWS CloudFormation templates and deployment configurations for production use. AWS Transform has also added capabilities to modernize ASP.NET Web Forms UI to Blazor.

There is much to explore, so in this post I’ll provide the first look at AWS Transform for full-stack Windows modernization capabilities across all layers.

Create a full-stack Windows modernization transformation job
AWS Transform connects to your source code repositories and database servers, analyzes application and database dependencies, creates modernization waves, and orchestrates full-stack transformations for each wave.

To get started with AWS Transform, I first complete the onboarding steps outlined in the getting started with AWS Transform user guide. After onboarding, I sign in to the AWS Transform console using my credentials and create a job for full-stack Windows modernization.


After creating the job, I complete the prerequisites. Then, I configure the database connector for AWS Transform to securely access SQL Server databases running on Amazon EC2 and Amazon Relational Database Service (Amazon RDS). The connector can connect to multiple databases within the same SQL Server instance.


Next, I set up a connector to connect to my source code repositories.


Furthermore, I have the option to choose if I would like AWS Transform to deploy the transformed applications. I choose Yes and provide the target AWS account ID and AWS Region for deploying the applications. The deployment option can be configured later as well.


After the connectors are set up, AWS Transform connects to the resources and runs the validation to verify IAM roles, network settings, and related AWS resources.

After the successful validation, AWS Transform discovers databases and their associated source code repositories. It identifies dependencies between databases and applications to create waves for transforming related components together. Based on this analysis, AWS Transform creates a wave-based transformation plan.


Assessing database and dependent applications
For the assessment, I review the databases and source code repositories discovered by AWS Transform and choose the appropriate branches for code repositories. AWS Transform scans these databases and source code repositories, then presents a list of databases along with their dependent .NET applications and transformation complexity.


I choose the target databases and repositories for modernization. AWS Transform analyzes these selections and generates a comprehensive SQL Modernization Assessment Report with a detailed wave plan. I download the report to review the proposed modernization plan. The report includes an executive summary, wave plan, dependencies between databases and code repositories, and complexity analysis.


Wave transformation at scale
The wave plan generated by AWS Transform consists of four steps for each wave. First, it converts the SQL Server schema to PostgreSQL. Second, it migrates the data. Third, it transforms the dependent .NET application code to make it PostgreSQL compatible. Finally, it deploys the application for testing.

Before converting the SQL Server schema, I can either create a new PostgreSQL database or choose an existing one as the target database.


After I choose the source and target databases, AWS Transform generates conversion reports for my review. AWS Transform converts the SQL Server schema to PostgreSQL-compatible structures, including tables, indexes, constraints, and stored procedures.


For any schema that AWS Transform can’t automatically convert, I can manually address them in the AWS Database Migration Service (AWS DMS) console. Alternatively, I can fix them in my preferred SQL editor and update the target database instance.

After completing schema conversion, I can optionally proceed with data migration. AWS Transform uses AWS DMS to migrate data from my SQL Server instance to the PostgreSQL database instance. I can choose to perform data migration later, after completing all transformations, or work with test data by loading it into my target database.


The next step is code transformation. I specify a target branch for AWS Transform to upload the transformed code artifacts. AWS Transform updates the codebase to make the application compatible with the converted PostgreSQL database.


With this release, AWS Transform for full-stack Windows modernization supports only codebases in .NET 6 or later. For codebases in .NET Framework 3.1+, I first use AWS Transform for .NET to port them to cross-platform .NET. I’ll expand on this in a following section.

After the conversion is completed, I can view the source and target branches along with their code transformation status. I can also download and review the transformation report.


Modernizing .NET Framework applications with UI layer
One major feature we’re releasing today is the modernization of UI frameworks from ASP.NET Web Forms to Blazor. This is added to existing support for modernizing model-view-controller (MVC) Razor views to ASP.NET Core Razor views.

As mentioned previously, if I have a .NET application in legacy .NET Framework, then I continue using AWS Transform for .NET to port it to cross-platform .NET. For legacy applications with UIs built on ASP.NET Web Forms, AWS Transform now modernizes the UI layer to Blazor along with porting the backend code.

AWS Transform for .NET converts ASP.NET Web Forms projects to Blazor on ASP.NET Core, facilitating the migration of ASP.NET websites to Linux. The UI modernization feature is enabled by default in AWS Transform for .NET on both the AWS Transform web console and Visual Studio extension.

During the modernization process, AWS Transform handles the conversion of ASPX pages, ASCX custom controls, and code-behind files, implementing them as server-side Blazor components rather than web assembly. The following project and file changes are made during the transformation:

From           | To               | Description
*.aspx, *.ascx | *.razor          | .aspx pages and .ascx custom controls become .razor files
Web.config     | appsettings.json | Web.config settings become appsettings.json settings
Global.asax    | Program.cs       | Global.asax code becomes Program.cs code
*.master       | *layout.razor    | Master files become layout.razor files


Other new features in AWS Transform for .NET
Along with UI porting, AWS Transform for .NET has added support for more transformation capabilities and enhanced developer experience. These new features include the following:

  • Port to .NET 10 and .NET Standard – AWS Transform now supports porting to .NET 10, the latest Long-Term Support (LTS) release, which was released on November 11, 2025. It also supports porting class libraries to .NET Standard, a formal specification for a set of APIs that are common across all .NET implementations. Furthermore, AWS Transform is now available with AWS Toolkit for Visual Studio 2026.
  • Editable transformation report – After the assessment is complete, you can now view and customize the transformation plan based on your specific requirements and preferences. For example, you can update package replacement details.
  • Real-time transformation updates with estimated remaining time – Depending on the size and complexity of the codebase, AWS Transform can take some time to complete the porting. You can now track transformation updates in real-time along with the estimated remaining time.
  • Next steps markdown – After the transformation is complete, AWS Transform now generates a next steps markdown file with the remaining tasks to complete the porting. You can use this as a revised plan to repeat the transformation with AWS Transform or use AI code-companions to complete the porting.

Things to know
Here are some more things to know:

  • AWS Regions – AWS Transform for full-stack Windows modernization is generally available today in the US East (N. Virginia) Region. For Regional availability and future roadmap, visit the AWS Capabilities by Region.
  • Pricing – Currently, there is no added charge for Windows modernization features of AWS Transform. Any resources you create or continue to use in your AWS account using the output of AWS Transform are billed according to their standard pricing. For limits and quotas, refer to the AWS Transform User Guide.
  • SQL Server versions supported – AWS Transform supports the transformation of SQL Server versions from 2008 R2 through 2022, including all editions (Express, Standard, and Enterprise). SQL Server must be hosted on Amazon RDS or Amazon EC2 in the same Region as AWS Transform.
  • Entity Framework versions supported – AWS Transform supports the modernization of Entity Framework versions 6.3 through 6.5 and Entity Framework Core 1.0 through 8.0.
  • Getting started – To get started, visit AWS Transform for full-stack Windows modernization User Guide.

Prasad
