Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

'Foreign Tech Workers Are Avoiding Travel To the US'

In an opinion piece for Computerworld, columnist Steven Vaughan-Nichols argues that restrictive visa policies and a hostile border climate under the Trump administration are driving foreign tech workers, researchers, and conference speakers away from the U.S. The result, he says, is a gradual shift of talent, events, and long-term innovation toward more welcoming regions such as Europe, Canada, and Asia. From the report: I go to a lot of tech conferences -- 13 in 2025 -- and many of those I attend are outside the U.S.; several are in London, one is in Amsterdam, another in Paris, and two in Tokyo. Wherever I went this past year, when we weren't talking about AI, Linux, the cloud, or open-source software, the top non-tech topic for non-Americans involved the sweeping changes that have occurred since President Donald J. Trump returned to office last January. The conversations generally ended with something like this: "I'm not taking a job or going to a conference in the United States." Honestly, who can blame them? Under Trump, America now has large "Keep Out!" and "No Trespassing!" signs effectively posted. I've known several top tech people who tried to come to the U.S. for technology shows with proper visas and paperwork, but were still turned away at the border. Who wants to fly for 8+ hours for a conference, only to be refused entry at the last minute, and be forced to fly back? I know many of the leading trade show organizers, and it's not just me who's seeing this. They universally agree that getting people from outside the States to agree to come to the U.S. is increasingly difficult. Many refuse even to try to come. As a result, show managers have begun to close U.S.-based events and are seeking to replace them with shows in Europe, Canada, and Asia. [...] Once upon a time, everyone who was anyone in tech was willing to uproot their lives to come to the U.S. Here, they could make a good living. They could collaborate, publish, and build companies in jurisdictions that welcome them, and meet their peers at conferences. Now, they must run a gauntlet at the U.S. border and neither a green card nor U.S. citizenship guarantees they won't be abused by the federal government. Trump's America seems bound and determined to become a second-rate tech power. His administration can loosen all the restrictions it wants on AI, but without top global talent, U.S. tech prowess will decline. That's not good for America, the tech industry or the larger world.

Read more of this story at Slashdot.


Building Self-Referential Agents with .NET 10 & Aspire (Part 1)


Series: PMCR-O Framework Tutorial

Canonical URL: https://shawndelainebellazan.com/article-building-self-referential-agents-part1

TL;DR

Learn to build autonomous AI agents using .NET 10, Ollama, and Aspire. This tutorial covers:

  • Production-ready infrastructure setup
  • Native JSON structured output (no regex parsing!)
  • "I AM" identity pattern for better agent behavior
  • GPU-accelerated local LLM inference

Code: GitHub - PMCR-O Framework

Why Local AI Agents Matter

Most AI tutorials rely on OpenAI's API. That's fine for demos, but production systems need:

  • Zero API costs during development
  • Data privacy (everything stays local)
  • Deterministic testing (no rate limits)
  • Full control over model lifecycle

Enter Ollama + .NET Aspire — the stack for self-hosted AI infrastructure.

Architecture Overview

┌─────────────────┐
│  .NET Aspire    │  ← Orchestration Layer
│   AppHost       │
└────────┬────────┘
         │
    ┌────┴────┐
    │         │
┌───▼───┐ ┌──▼────┐
│Ollama │ │Planner│  ← Agent Services
│Server │ │Service│
└───────┘ └───────┘

The "I AM" Pattern: Why It Matters

Traditional AI prompts:

❌ "You are a helpful assistant. Generate code for the user."

PMCR-O pattern:

✅ "I AM the Planner. I analyze requirements and create plans."

Research-backed: LLMs trained on first-person narration develop stronger task ownership (PROMPTBREEDER 2024).
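In code, the difference is just the system message you pass in. The sketch below is illustrative only; the post's actual prompt appears in GetSystemPrompt() later in this article, and the ChatMessage/ChatRole types are the Microsoft.Extensions.AI ones used there.

// The PMCR-O pattern phrases the system message in the first person.
// Illustrative sketch only -- the post's real prompt is in GetSystemPrompt() below.
using Microsoft.Extensions.AI;

// Instead of: "You are a helpful assistant. Generate code for the user."
const string firstPersonPrompt = "I AM the Planner. I analyze requirements and create plans.";

var messages = new List<ChatMessage>
{
    new(ChatRole.System, firstPersonPrompt),   // identity-first system message
    new(ChatRole.User, "Plan a console app that prints 'Hello PMCR-O'.")
};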

Setup: Project Structure

# Create solution
mkdir PmcroAgents && cd PmcroAgents
dotnet new sln -n PmcroAgents

# Create projects
dotnet new aspire-apphost -n PmcroAgents.AppHost
dotnet new web -n PmcroAgents.PlannerService
dotnet new classlib -n PmcroAgents.Shared

# Add to solution
dotnet sln add **/*.csproj

The Aspire AppHost (Modern 2025 Setup)

using CommunityToolkit.Aspire.Hosting.Ollama;

var builder = DistributedApplication.CreateBuilder(args);

// Ollama with GPU support
var ollama = builder.AddOllama("ollama", port: 11434)
    .WithDataVolume()
    .WithLifetime(ContainerLifetime.Persistent)
    .WithContainerRuntimeArgs("--gpus=all");  // ← GPU acceleration

// Download model
var qwen = ollama.AddModel("qwen2.5-coder:7b");

// Agent service
var planner = builder.AddProject<Projects.PmcroAgents_PlannerService>("planner")
    .WithReference(ollama)
    .WaitFor(qwen);

builder.Build().Run();

What this does:

  1. Spins up Ollama in Docker
  2. Downloads qwen2.5-coder model (7.4GB)
  3. Injects Ollama connection string into Planner service
  4. Enables GPU passthrough for fast inference
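Hosting is only half the story: the Planner service itself needs an IChatClient that points at the injected Ollama endpoint. Below is a minimal sketch of that wiring; it is not the post's original code, and it assumes the Microsoft.Extensions.AI and Microsoft.Extensions.AI.Ollama packages plus a connection string named "ollama" (matching the AppHost resource name). Depending on the hosting integration, the exact connection string format may differ.

// Program.cs for PmcroAgents.PlannerService -- a minimal sketch, not the post's original code.
// Assumes the Microsoft.Extensions.AI and Microsoft.Extensions.AI.Ollama packages are referenced.
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Aspire's WithReference(ollama) surfaces the endpoint as a connection string;
// the name "ollama" matches the resource name in the AppHost above (an assumption).
var ollamaEndpoint = builder.Configuration.GetConnectionString("ollama")
                     ?? "http://localhost:11434";   // assumed fallback for local runs

// Register an IChatClient backed by the local Ollama server and the qwen2.5-coder model.
builder.Services.AddChatClient(
    new OllamaChatClient(new Uri(ollamaEndpoint), "qwen2.5-coder:7b"));

var app = builder.Build();
app.MapGet("/", () => "Planner service is running.");
app.Run();

With a registration like this in place, the _chatClient used by the Planner agent later in the article can come straight from constructor injection of IChatClient.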

Native JSON Output (No Regex!)

The Old Way ❌

// DON'T: Parse LLM text output with regex
var json = ExtractJsonWithBracketCounter(llmOutput);
var plan = JsonSerializer.Deserialize<Plan>(json);

Problems:

  • ~85% success rate
  • 50-200ms overhead
  • Breaks on nested objects

The New Way ✅

var chatOptions = new ChatOptions
{
    ResponseFormat = ChatResponseFormat.Json,  // ← Magic happens here
    AdditionalProperties = new AdditionalPropertiesDictionary  // ChatOptions expects an AdditionalPropertiesDictionary
    {
        ["schema"] = JsonSerializer.Serialize(new
        {
            type = "object",
            properties = new
            {
                plan = new { type = "string" },
                steps = new { type = "array" },
                complexity = new { 
                    type = "string", 
                    @enum = new[] { "low", "medium", "high" } 
                }
            }
        })
    }
};

Results:

  • ~99% success rate
  • <1ms deserialization
  • Schema-enforced validation
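Because the response is schema-constrained JSON, it can be deserialized straight into a typed model. Here's a minimal sketch; the record and class names and the snake_case naming policy are illustrative assumptions (matching the "estimated_complexity" field in the system prompt below), not code from the original post.

using System.Text.Json;
using Microsoft.Extensions.AI;

// Illustrative typed targets for the planner's JSON output -- names are assumptions.
public sealed record PlanStep(string Action, string Rationale);
public sealed record PlannerPlan(string Plan, List<PlanStep> Steps, string EstimatedComplexity);

public static class PlanParser
{
    private static readonly JsonSerializerOptions Options = new()
    {
        PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,  // maps EstimatedComplexity -> "estimated_complexity"
        PropertyNameCaseInsensitive = true
    };

    public static async Task<PlannerPlan?> GetPlanAsync(
        IChatClient chatClient, IList<ChatMessage> messages, ChatOptions chatOptions)
    {
        // JSON mode means the message text is already valid JSON -- no regex extraction step.
        var response = await chatClient.CompleteAsync(messages, chatOptions);
        return JsonSerializer.Deserialize<PlannerPlan>(response.Message.Text ?? "{}", Options);
    }
}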

Planner Agent Implementation

public override async Task<AgentResponse> ExecuteTask(
    AgentRequest request,
    ServerCallContext context)
{
    _logger.LogInformation("🧭 I AM the Planner. Analyzing: {Intent}", request.Intent);

    var messages = new List<ChatMessage>
    {
        new(ChatRole.System, GetSystemPrompt()),
        new(ChatRole.User, request.Intent)
    };

    var response = await _chatClient.CompleteAsync(messages, chatOptions);

    return new AgentResponse 
    { 
        Content = response.Message.Text,
        Success = true 
    };
}

private static string GetSystemPrompt() => @"
# IDENTITY
I AM the Planner within the PMCR-O system.
I analyze requirements and create minimal viable plans.

# OUTPUT FORMAT
I output ONLY valid JSON matching this schema:
{
  ""plan"": ""high-level strategy"",
  ""steps"": [
    {""action"": ""concrete step"", ""rationale"": ""why this step""}
  ],
  ""estimated_complexity"": ""low|medium|high""
}
";

Testing It

cd PmcroAgents.AppHost
dotnet run

Navigate to http://localhost:15209 for the Aspire dashboard.

Example request:

{
  "intent": "Create a console app that prints 'Hello PMCR-O'"
}

Expected output:

{
  "plan": "Create minimal C# console app",
  "steps": [
    {
      "action": "Run: dotnet new console -n HelloPmcro",
      "rationale": "Use default template"
    },
    {
      "action": "Modify Program.cs",
      "rationale": "Add Console.WriteLine statement"
    }
  ],
  "estimated_complexity": "low"
}

Performance Benchmarks

Metric             CPU (16-core)    GPU (RTX 4090)
First inference    45-60s           3-5s
Subsequent runs    30-45s           2-3s
Memory usage       8 GB             6 GB

Key Takeaways

  1. Native JSON > Custom Parsing: Ollama's JSON mode eliminates fragile regex logic
  2. Aspire = DX Win: One dotnet run orchestrates everything
  3. GPU Acceleration: 10-15x faster inference with --gpus=all
  4. "I AM" Identity: First-person prompts improve agent agency

Next in Series

Part 2: Adding Maker, Checker, and Reflector agents to complete the PMCR-O cycle.

Resources

This article originally appeared on shawndelainebellazan.com — The home of Behavioral Intent Programming.

Building resilient systems that evolve. 🚀


Kubernetes v1.35: Watch Based Route Reconciliation in the Cloud Controller Manager


Up to and including Kubernetes v1.34, the route controller in Cloud Controller Manager (CCM) implementations built using the k8s.io/cloud-provider library reconciles routes at a fixed interval. This causes unnecessary API requests to the cloud provider when there are no changes to routes. Other controllers implemented through the same library already use watch-based mechanisms, leveraging informers to avoid unnecessary API calls. A new feature gate is being introduced in v1.35 to allow changing the behavior of the route controller to use watch-based informers.

What's new?

The feature gate CloudControllerManagerWatchBasedRoutesReconciliation has been introduced to k8s.io/cloud-provider as an alpha feature by SIG Cloud Provider. To enable it, pass --feature-gates=CloudControllerManagerWatchBasedRoutesReconciliation=true to the CCM implementation you are using.

About the feature gate

This feature gate triggers the route reconciliation loop whenever a node is added or deleted, or whenever a node's .spec.podCIDRs or .status.addresses fields are updated.

An additional reconcile is performed at a random interval between 12h and 24h, chosen at the controller's start time.

This feature gate does not modify the logic within the reconciliation loop. Therefore, users of a CCM implementation should not experience significant changes to their existing route configurations.

How can I learn more?

For more details, refer to KEP-5237.


Run Real Python in Browsers With Pyodide and WebAssembly


There are many ways to bring Python to the browser (thanks, WebAssembly). But there's only one way to bring Python's full functionality (really no compromises) to the browser: Pyodide. Pyodide is a full Python runtime compiled to WebAssembly that allows you to run standard Python code directly in the browser. Yes, other tools exist, but their functionality is more limited than Pyodide's.

Pyodide is powerful because it’s a port of the CPython interpreter to WebAssembly (Wasm). Pyodide takes the standard CPython engine and re-engineers it to run inside a browser’s WebAssembly sandbox. This allows the browser to execute complex, real-world Python libraries at high speeds without needing any external servers or local installations. This means that, unlike smaller Python variants or transpilation approaches, when using Pyodide, you can:

  • Run full Python in the browser.
  • Support C-extension libraries like Pandas, NumPy and Matplotlib client-side.
  • Run Python entirely client-side without any backend.
  • Execute Python dynamically client-side.

Other Python-focused Wasm tools can't match this. One small technical clarification: PyScript brings the same functionality to the browser because it is a framework that uses Pyodide as its backend; it adds an HTML/templating layer on top of the Pyodide runtime.

The beauty of Pyodide is that it doesn’t require a complex build system or a specialized environment. If you can write a standard HTML file, you can run Pyodide.

  • Zero installation: You don’t need to install Python, manage virtual environments or pip-install a single thing. Everything happens within the browser the moment you load the page.
  • Minimal setup: You can pull Pyodide into your project via a CDN link. Once loaded, you’re just one function call away from executing Python logic: pyodide.runPython().
  • Direct communication: Pyodide includes a powerful bridge between Python and JavaScript. You can pass data structures between the two languages seamlessly — for example, using JavaScript to fetch data and Python to analyze it with a specialized library.

Pyodide is a full-weight runtime. It downloads and executes the entire CPython engine directly on your device rather than using a ‘lite’ version or sending code to a server for processing. That makes it a solid choice for applications like privacy-first data tools, analysis, data processing and offline-capable applications.

To show you how to get started with Pyodide, we’re going to build an application that:

  • Loads Python and Pandas in the browser with Pyodide.
  • Accepts an uploaded CSV.
  • Uses Python to:
    • Display the first rows of the dataset.
    • Populate a column selector.
    • Generate summary statistics.

And this all happens client-side!

I think it’s important to say you don’t need any project-specific tools or libraries installed on your machine to successfully execute this tutorial. You only need the following:

  • Modern browser
  • Internet connection
  • Text editor or IDE
  • CSV file (only if you want to see the full functionality of the project)

Because we’re working in the browser, the project code includes HTML, CSS and JavaScript, along with our Pyodide and Python code. All of our code will live in a single file, index.html. I’ll share the complete code file first and then provide detailed explanations of the Pyodide sections and how they work (HTML, CSS and JavaScript are outside the scope of this tutorial).

index.html

View the code on Gist.

Working With Pyodide

The first time we encounter Pyodide in index.html is with the line below:

View the code on Gist.

The code above downloads the Wasm version of Python. It also installs a Python interpreter inside the browser tab. Lastly, it exposes a JavaScript API (loadPyodide) that interacts with the interpreter.

Without this line of code, you can’t execute Python in the browser.

Pyodide Boots Python and Installs Python Packages

The next thing we’ll need Pyodide to do is initialize the Python interpreter, create the Python execution environment and download/install compiled Python packages into the environment. The code below essentially replaces python -m venv, pip install pandas and any backend service needed to run Pandas. Think of it as Python loading in the browser.

View the code on Gist.

Pyodide Bridges JavaScript and Python Memory

Now we need JavaScript to call Python like a function. Without Pyodide, you would need an API request, backend endpoint or some other workaround. This is where Pyodide makes JavaScript and Python interoperable.

In the code below, Pyodide copies a JavaScript string into Python’s global namespace. This makes browser data available to Python without using serialization APIs or sending it over HTTP.

View the code on Gist.

Execute Python Code

pyodide.runPython() executes the Python code in the browser. It takes in Python code as a string, maintains Python state between executions and allows multiple Python calls to share variables and data. The string is made of standard Python code, not a Python/JavaScript hybrid.

The code below is what reads the CSV into a Pandas DataFrame, displays the first few rows, populates the column dropdown dynamically and calculates summary statistics. Pyodide allows Python to access the browser DOM, so all updates will happen directly on the page without any server or API calls.

View the code on Gist.

The next code block, also using pyodide.runPython(), runs Python via Pyodide whenever the user selects a column from the dropdown. It checks if a column is selected, then extracts that column from the DataFrame and displays the first few values in the browser. If no column is selected, it clears the output. Pyodide allows Python to update the HTML directly, so the user sees the column data instantly without any server requests.

One important line that appears in both code blocks is from js import document. This makes JavaScript objects accessible in Python and allows Python to call browser APIs directly. With this line, Python can interact with the browser like a first-class language, updating the DOM and responding to actions without any server code.

Pyodide Helps Python Drive the UI

There’s another piece of code in the Python string I want to point out:

View the code on Gist.

This code updates the UI without switching languages. It does so by routing Python calls to JavaScript DOM methods and converting Python strings into JavaScript strings.

Conclusion

Pyodide turns the traditional frontend architecture on its head! It embeds a persistent Python runtime in the browser and provides a two-way bridge between JavaScript and Python. With Pyodide, Python libraries like Pandas can run client-side and interact directly with the DOM. It brings functionality that used to require a full Python backend straight to the client. What will you build in the browser with Pyodide?

The post Run Real Python in Browsers With Pyodide and WebAssembly appeared first on The New Stack.


SE Radio 701: Max Guernsey, III and Luniel de Beer on Readiness in Software Engineering


Max and Luniel, co-authors of the book "Ready: Why Most Software Projects Fail and How to Fix It," discuss the concept of Readiness in software engineering with host Brijesh Ammanath. While Agile workflows and technical practices help delivery, many software efforts still struggle to achieve desired outcomes. Rework, shifting requirements, delays, defects, and mounting technical debt plague software delivery and impede or altogether halt progress toward goals. The problem is often that implementation begins prematurely, before the team is properly set up for success. A strict system of explicit readiness work and gating, called Requirements Maturation Flow (RMF), solves this problem in an SDLC-independent way. Teams that have adopted RMF dramatically improve progress toward real goals while reducing stress on engineering teams. In this episode, Max and Luniel take a deep dive into Requirements Maturation Flow and explain its foundational pillars.

Objectives:

  • Understand why most software projects fail and what causes rework, under-delivery, and delays.
  • Learn what Requirements Maturation Flow is and what its three foundational practices are.
  • Understand the value of treating Readiness as an explicit work item.
  • Understand the Definition of Done.
  • Understand the Definition of Ready.

Brought to you by IEEE Computer Society and IEEE Software magazine.





Download audio: https://traffic.libsyn.com/secure/seradio/701-guernsey-de-beer-readiness-software-engineering.mp3?dest-id=23379

The SysAdmin in 2026


A new year - and so much to do! To start 2026, Richard flies solo again to discuss the issues he's seen on sysadmins' minds as we head into the new year. Obviously, AI is eating up a lot of the conversation from many different angles: tools that can help us be more productive, security issues in our organizations due to misuse, and now, AI-driven hacking. Security still looms large, and not just from an AI perspective - the latest round of supply chain attacks has led to litigation, putting new emphasis on making sure you're secure. Windows has a new leader, things are changing there, and there's the ongoing migration to the cloud. Does it still make sense? There seems to be more concern about data sovereignty than ever, and some meaningful conversations to have. Happy New Year!

Links

Recorded December 20, 2025





Download audio: https://cdn.simplecast.com/audio/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/327af33d-1e97-4a68-8c7c-2dac11979ccf/audio/378d80ef-9d2a-4d84-8a3d-645d7c1efbad/default_tc.mp3?aid=rss_feed&feed=cRTTfxcT