
Fast Focus: Building a Blazor MAUI Hybrid App

From: VisualStudio
Duration: 21:00
Views: 180

Want to ship a Blazor app that runs as a real native app on iOS, Android, Windows, and macOS—without rewriting your UI four times? In this quick, code-first Live! 360 session, Rocky Lhotka walks through .NET MAUI Blazor Hybrid and shows how it bridges Blazor’s web UI model with native device capabilities (storage, OS APIs, hardware access) that the browser sandbox can’t reach.
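
To make the hosting model concrete, here is a minimal sketch of the MauiProgram.cs wiring, modeled on the standard MAUI Blazor Hybrid project template (the App class and overall shape come from that template; treat this as orientation, not the session's exact code):

```csharp
// Minimal sketch, based on the standard "MAUI Blazor Hybrid" template.
using Microsoft.Extensions.Logging;

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder.UseMauiApp<App>();

        // Hosts Razor components inside a native BlazorWebView control,
        // so shared UI code runs with full access to .NET and device APIs
        // rather than inside the browser sandbox.
        builder.Services.AddMauiBlazorWebView();

#if DEBUG
        // Browser-style developer tools for the embedded web view.
        builder.Services.AddBlazorWebViewDeveloperTools();
        builder.Logging.AddDebug();
#endif

        return builder.Build();
    }
}
```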

You’ll learn how to choose between Blazor Server, WebAssembly, and Progressive Web App (and when a PWA is “enough”), why store deployment vs. URL deployment matters, and how the newer “MAUI Blazor Hybrid and Web App” template helps you debug faster by running the same app in the browser while still targeting mobile and desktop. Rocky also demonstrates how shared Razor components live in a common project, while platform-specific code stays isolated in MAUI’s Platforms/ folders—so you can light up device features without breaking cross-platform builds. He closes with a real-world example: a nonprofit Kids ID Kit app built to iterate quickly via the web while still delivering a true native experience on phones.
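
The form-factor example likely follows a pattern along these lines (a hypothetical sketch; the interface and class names are illustrative rather than taken from the session, while ANDROID, IOS, WINDOWS, and MACCATALYST are MAUI's standard conditional-compilation symbols):

```csharp
// Shared Razor class library: components depend only on this interface.
public interface IFormFactorService
{
    string GetFormFactor();
}

// MAUI host: one implementation can branch on the platform symbols, or
// per-platform partial classes can live under the Platforms/ folders.
public class MauiFormFactorService : IFormFactorService
{
    public string GetFormFactor()
    {
#if ANDROID || IOS
        return "Phone or tablet";
#elif WINDOWS || MACCATALYST
        return "Desktop";
#else
        return "Unknown";
#endif
    }
}

// The companion web host project registers its own implementation that
// returns "Web", so the same Razor components run unchanged in the
// browser during development:
// builder.Services.AddSingleton<IFormFactorService, MauiFormFactorService>();
```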

🔑 What You’ll Learn
• When Blazor Server, WebAssembly, Progressive Web App (PWA), or native is the right choice
• What the browser sandbox limits—and what native MAUI unlocks
• How .NET MAUI Blazor Hybrid hosts Blazor UI with full device access
• Why pairing mobile + web dramatically improves debugging and testing
• How to structure shared Razor UI vs platform-specific code cleanly
• Practical deployment tradeoffs: stores, enterprise distribution, and UX parity

⏱️ Chapters
00:00 Intro + goal: Blazor apps that run natively
00:35 Blazor hosting modes: Server, WASM, Auto
01:28 Progressive Web App (PWA) vs native: sandbox limits & deployment tradeoffs
05:07 Why MAUI + Blazor Hybrid (and when UI parity is OK)
07:38 Templates: Hybrid vs Hybrid + Web App (debugging advantage)
11:00 Solution structure: MAUI host, web host, shared Razor UI
15:04 Platform-specific code with Platforms/ + form factor example
16:49 Real-world app demo: Kids ID Kit + why web helps testing

👤Speaker: Rocky Lhotka

🔗 Links
• Download Visual Studio 2026: http://visualstudio.com/download
• Explore more Live! 360 sessions: https://aka.ms/L360Orlando25
• Join us at upcoming VS Live! events: https://aka.ms/VSLiveEvents

#blazor #dotnet #visualstudio


Random.Code() - Advent of Code 2025 Playthrough - Part 7

From: Jason Bock
Duration: 1:15:26
Views: 10

I'm going to move on to Day 11, and I think I have a way to do Part 2 for both Day 7 and Day 9 now... I think.

https://adventofcode.com/2025
https://github.com/JasonBock/AdventOfCode2025


Martin Fowler on Preparing for AI’s Nondeterministic Computing


Martin Fowler, Thoughtworks chief scientist and long-time expert on object-oriented programming, views AI as the biggest shift in programming he has seen in his entire career.

In an interview on the Pragmatic Engineer podcast, hosted by Gergely Orosz, Fowler admitted about AI that, “We’re still learning how to do this.”

The closest parallel for the industry would be the shift away from assembly language.

Assembly language was tedious to write, as much of the work involved moving memory values across registers. That’s why the move to higher-level programming languages (such as COBOL and Fortran) was such a blessing for programmers.

“At least in a relatively poor, high-level language like Fortran, I can write things like conditional statements and loops,” Fowler said.

These new languages were a higher level of abstraction than the hardware itself.

With large language models (LLMs), it’s a “similar degree of mindshift,” he said.

Deterministic vs. Nondeterministic Computing

But LLMs are not merely another layer of abstraction; they are a different type of computing altogether.

Namely, LLMs are a form of nondeterministic computing, with characteristics unlike everything we have considered “computing” until now, which has been deterministic.

Deterministic computing is strictly binary. A calculation is either correct or it is wrong. If it is incorrect, we can debug the code to see where it went wrong.

Nondeterministic computing is fuzzier. An LLM may produce one answer at one moment and an entirely different answer the next. The answers it builds rely on statistical reasoning, a set of probabilities layered on top of binary math, without the same guarantees.
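
A toy contrast (my illustration, not Fowler's) makes the distinction concrete: a deterministic function returns the same output for the same input every time, while an LLM-style sampler draws the next token from a probability distribution, so repeated calls can disagree:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class NondeterminismDemo
{
    // Deterministic: 2 + 2 is 4 every time; a wrong result is a bug
    // you can trace and fix.
    static int Add(int a, int b) => a + b;

    // Nondeterministic: sample a token according to its probability,
    // the way an LLM picks its next word.
    static string SampleNextToken(Dictionary<string, double> probs, Random rng)
    {
        double roll = rng.NextDouble();
        double cumulative = 0;
        foreach (var (token, p) in probs)
        {
            cumulative += p;
            if (roll <= cumulative) return token;
        }
        return probs.Keys.Last();
    }

    static void Main()
    {
        var rng = new Random();
        var nextWord = new Dictionary<string, double>
        {
            ["Paris"] = 0.85,
            ["Lyon"] = 0.10,
            ["Marseille"] = 0.05,
        };

        Console.WriteLine(Add(2, 2));  // always prints 4
        for (int i = 0; i < 3; i++)    // usually "Paris", but not always
            Console.WriteLine(SampleNextToken(nextWord, rng));
    }
}
```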

It completely changes how you have to think about computing, he said.

Where AI Is Working

Thoughtworks is a technology-driven consulting company, and so it has been keeping an eye on how AI has been used successfully.

One successful use case, according to Fowler, has been quickly knocking out prototypes, thanks in part to the emergence of vibe coding. Here you can explore an idea “much more rapidly” than you could before.

But the real killer app has been using AI to help understand legacy systems. In the company’s most recent Technology Radar report (#33) of emerging technologies, using generative AI (GenAI) to modernize legacy systems was the single AI technology that got the company’s highest “Adopt” rating.

For its customers trying to modernize old systems, Thoughtworks has created a routine that semantically analyzes the codebase and stores the results in a graph database, which can then be interrogated through a Retrieval-Augmented Generation (RAG) process to understand how the application runs.
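
The article doesn't include code, but the retrieval half of such a pipeline can be sketched roughly as follows (Embed is a toy stand-in for a real embedding model, and the graph database is reduced to an in-memory list of facts extracted from the codebase):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record CodeFact(string Text, float[] Vector);

static class LegacyRagSketch
{
    // Toy embedding: hash words into a fixed-size vector. A real
    // pipeline would call an embedding model here.
    static float[] Embed(string text)
    {
        var v = new float[64];
        foreach (var w in text.ToLowerInvariant()
                     .Split(' ', StringSplitOptions.RemoveEmptyEntries))
            v[(w.GetHashCode() & 0x7fffffff) % 64] += 1f;
        return v;
    }

    static float Cosine(float[] a, float[] b)
    {
        float dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (MathF.Sqrt(na) * MathF.Sqrt(nb) + 1e-9f);
    }

    // Pull the facts most relevant to the question into the LLM's
    // context so its answer is grounded in the actual codebase.
    public static IEnumerable<string> Retrieve(
        IEnumerable<string> facts, string question, int topK = 5)
    {
        var q = Embed(question);
        return facts
            .Select(f => new CodeFact(f, Embed(f)))
            .OrderByDescending(f => Cosine(f.Vector, q))
            .Take(topK)
            .Select(f => f.Text);
    }
}
```

The graph database matters because questions about a legacy system ("what calls this batch job?") are often relational rather than purely textual; the similarity search above covers only the retrieval half.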

“If you are doing any work on legacy systems, you should be using LLMs in some way to help you,” Fowler said.

Harder Problems for AI

But while LLMs can help us understand legacy code, whether they can modify that code in a safe way is another question.

Higher-level programming remains dodgy with LLMs. Here you have to break the AI's work into very thin “slices” and review everything very closely, he said.

“You’ve got to treat every slice as a [pull request] from a rather dodgy collaborator who’s very productive in the lines-of-code sense of productivity, but you know you can’t trust a thing that they’re doing,” Fowler said.

Nonetheless, using AI in this way can save developers time, though maybe not as much time as the advocates have been claiming.

In particular, he advised us to “come up with a more rigorous way” of speaking to the LLMs, in order to get better results. Domain-driven design (DDD) and domain-specific languages may offer a way forward.

AI’s Similarities to Structural Engineering

The practice of structural engineering can also be helpful in better gauging where to use AI, Fowler pointed out.

“My wife’s a structural engineer. She always thinks in terms of the tolerances: ‘How much extra stuff do I have to do beyond what the math tells me because I need it for tolerances?'” Fowler said.

Just as we know how much weight a concrete bridge can take, so too should LLMs come with metrics describing the levels of precision they can support.

“What are the tolerances of nondeterminism that we have to deal with?” he asked. Knowing this, software developers will know where “not to skate too close to the edge.”

One book Fowler recommended to software developers to help think about nondeterminism is “Thinking, Fast and Slow,” by Daniel Kahneman.

“He does a really good job of trying to give you an intuition about numbers, and spotting some of the many mistakes and fallacies we make when we’re thinking in terms of probability and statistics,” Fowler said.

As always, Fowler is an eloquent speaker, and he had insights across a variety of subjects in this interview, including refactoring, agile processes, LLMs in the enterprise, enterprise application patterns, and, of course, every object-oriented programmer’s favorite language, Smalltalk.

The post Martin Fowler on Preparing for AI’s Nondeterministic Computing appeared first on The New Stack.


DocSummarizer Part 4 - Building RAG Pipelines


How We Use goose to Maintain goose


As AI agents grow in capability, more people feel empowered to code and contribute to open source. The ceiling feels higher than ever. That is a net positive for the ecosystem, but it also changes the day-to-day reality for maintainers. Maintainers like the goose team face a growing volume of pull requests and issues, often arriving faster than they can realistically be processed.

We embraced this reality and put goose to work on its own backlog.

We actually used goose pre-1.0 to help us build goose 1.0. The original goose was a Python CLI, but we needed to move quickly to Rust, Electron, and an MCP-native architecture. goose helped us make that transition. Using it to triage issues and review changes felt like a natural extension, so we embedded goose directly into a GitHub Action.

Credit: That GitHub Action workflow was built by Tyler Longwell, who took an idea we had been exploring manually and turned it into something any maintainer could trigger with a single comment.

Before the GitHub Action

Before the GitHub Action existed, the goose team was already using goose to accelerate our issue workflow. Here's a real example.

A user reached out on Discord asking why an Ollama model was throwing an error in chat mode. Rather than digging through the codebase myself, I asked goose to explore the code, identify the root cause, and explain it back to me. Then, I asked goose to use the GitHub CLI to open an issue.

During that same session, goose mentioned it had 95% confidence it knew how to fix the problem. The change was small, so I asked goose to open a PR. It was merged the same day.

This kind of workflow has changed how I operate as a Developer Advocate. Before goose, when a user reported a problem, the process unfolded in fragments. I would ask clarifying questions, check GitHub for related issues, pull the latest code, grep through files, read the logic, and try to form a hypothesis about what was going wrong.

If I figured it out, I had two options:

  1. I could write up a detailed issue and add it to a developer's backlog, which meant someone else had to context-switch into the problem later.
  2. Or I could attempt the fix myself, which often led to more time spent and more back-and-forth during code review if I got something wrong.

Either way, the process stretched across hours or days. And if the problem wasn't high priority, it sometimes slipped through the cracks. The report would sit in Discord or a GitHub comment until it scrolled out of view, and the user would assume nobody was listening.

With goose, that entire process collapsed into a single conversation.

The local workflow works. But when I solve an issue locally with goose, I'm still the one driving. I stop what I'm doing, open a session, paste the issue context, guide goose through the fix, run the tests, and open the PR.

Scaling with a GitHub Action

The GitHub Action compresses that entire sequence into a single comment. A team member sees an issue, comments /goose, and moves on. goose spins up in a container, reads the issue, explores the codebase, runs verification, and opens a draft PR. The maintainer returns to a proposed solution rather than a blank slate.

We saw this play out with issue #6066. Users reported that goose kept defaulting to 2024 even though the correct datetime was in the context. The issue sat for two days. Then Tyler saw it, commented /goose solve this minimally at 1:59 AM, and went back to whatever he was doing (presumably sleeping). Fourteen minutes later, goose opened PR #6101.

The maintainer's role shifts from implementing to reviewing. The bottleneck in open source is rarely "can someone write this code." It's "can someone with enough context find the time to write this code." The GitHub Action decouples those two constraints. Any maintainer can trigger a fix attempt without deep familiarity with that part of the codebase.

This scales in a way manual triage cannot. A backlog contains feature requests, complex bugs, and quick fixes in equal measure. The Action lets you point at an issue and say "try this one" without committing your afternoon. If goose fails, you lose minutes of compute. If it succeeds, you save hours.

For contributors, responsiveness changes everything. When a user filed issue #6232 about slash commands not handling optional parameters, a maintainer quickly commented /goose can you fix this, and within the hour there was a draft PR with the fix and four new tests. Even if the PR is not perfect and needs adjustments, contributors see momentum.

Under the Hood

Maintainers summon goose by commenting /goose followed by a prompt on an issue. GitHub Actions spins up a container with goose installed, passes in the issue metadata, and lets goose work. If goose produces changes and verification passes, the workflow opens a draft pull request.

But there's more happening under the hood than a simple prompt like "/goose fix this."

The workflow uses a recipe that defines phases to ensure goose actually accomplishes the job and doesn't do more than we ask it to.

Phase | What goose does | Why it matters
Understand | Read the issue and extract all requirements to a file | Forces the AI to identify what "done" looks like before writing code
Research | Explore the codebase with search and analysis tools | Prevents blind edits to unfamiliar code
Plan | Decide on an approach | Catches architectural mistakes before implementation
Implement | Make minimal changes per the requirements | "Is this in the requirements? If not, don't add it"
Verify | Run tests and linters | Catches obvious failures before a human sees the PR
Confirm | Reread the original issue and requirements | Prevents the AI from declaring victory while forgetting half the task

The recipe also gives goose access to the TODO extension, a built-in tool that acts as external memory. The phases tell goose what to do. The TODO helps goose remember what it's doing. As goose reads through the codebase and builds a solution, its context window fills up and earlier instructions can be compressed or lost. The TODO persists, so goose can always check what it's done and what's left.

The workflow also enforces guardrails around who can invoke /goose, which files it's allowed to touch, and the requirement that a maintainer review and approve every PR.

There's something strange about using goose to maintain goose. But it keeps us honest. We're our own first customer, and if the agent can't produce mergeable PRs here, we feel it immediately.

The future we're aiming for isn't one where AI replaces maintainers. It's one where a maintainer can point at a problem, say "try this," and come back to a concrete proposal instead of a blank editor.

If that becomes the norm, open source scales differently.

The GitHub Action workflow is public for anyone who wants to explore this pattern in their own CI pipeline.


Thousands of medical records found in auctioned storage unit

It’s been a while since we’ve seen one of these types of reports, and yet… Imani Williams reports: Thousands of medical records containing sensitive patient information were discovered in a Memphis storage unit that went up for auction after the owner failed to pay rent for three months. Jason Lederfine, who buys storage units as...

Source
