Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Growing Yourself as a Software Engineer, Using AI to Develop Software


Sharing your work as a software engineer inspires others, invites feedback, and fosters personal growth, Suhail Patel said at QCon London. Normalizing and owning incidents builds trust and helps teams understand the complexity of their systems. AI enables automation but needs proper guidance, context, and security guardrails.

By Ben Linders

We Got Claude to Fine-Tune an Open Source LLM


Software in the Age of AI


In 2025 AI reshaped how teams think, build, and deliver software. We’re now at a point where “AI coding assistants have quickly moved from novelty to necessity [with] up to 90% of software engineers us[ing] some kind of AI for coding,” Addy Osmani writes. That’s a very different world to the one we were in 12 months ago. As we look ahead to 2026, here are three key trends we have seen driving change and how we think developers and architects can prepare for what’s ahead.

Evolving Coding Workflows

New AI tools changed coding workflows in 2025, enabling developers to write and work with code faster than ever before. This doesn’t mean AI is replacing developers. It’s opening up new frontiers to be explored and skills to be mastered, something we explored at our first AI Codecon in May.

AI tools in the IDE and on the command line have revived the debate about the IDE’s future, echoing past arguments (e.g., VS Code versus Vim). It’s more useful to focus on the tools’ purpose. As Kent Beck and Tim O’Reilly discussed in November, developers are ultimately responsible for the code their chosen AI tool produces. We know that LLMs “actively reward existing top tier software engineering practices” and “amplify existing expertise,” as Simon Willison has pointed out. And a good coder will “factor in” questions that AI doesn’t. Does it really matter which tool is used?

The critical transferable skill for working with any of these tools is understanding how to communicate effectively with the underlying model. AI tools generate better code if they’re given all the relevant background on a project. Managing what the AI knows about your project (context engineering) and communicating it (prompt engineering) are going to be key to doing good work.
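To make the idea of context engineering concrete, here is a minimal sketch of assembling project background into a prompt before handing it to a model. The `build_prompt` helper, the file names, and the character budget are all illustrative assumptions, not part of any real tool:

```python
# Illustrative sketch of "context engineering": pack relevant project
# background into the prompt, within a size budget, before stating the task.
def build_prompt(task: str, context_files: dict[str, str], max_chars: int = 4000) -> str:
    """Include as much relevant project context as fits, then append the task."""
    sections = []
    used = 0
    for path, text in context_files.items():
        snippet = f"### {path}\n{text.strip()}\n"
        if used + len(snippet) > max_chars:
            break  # budget exhausted; stop adding context
        sections.append(snippet)
        used += len(snippet)
    return "\n".join(sections) + f"\n### Task\n{task}"

prompt = build_prompt(
    "Add input validation to create_user.",
    {
        "models/user.py": "class User: ...",
        "api/routes.py": "def create_user(payload): ...",
    },
)
```

Real tools automate this selection (and do it far more intelligently), but the underlying judgment call — which context matters for this task — is the skill the paragraph above describes.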

The core skills for working effectively with code won’t change in the face of AI. Understanding code review, design patterns, debugging, testing, and documentation, and applying them to the work you do with AI tools, will be the differentiator.

The Rise of Agentic AI

With the rise of agents and Model Context Protocol (MCP) in the second half of 2025, developers gained the ability to use AI not just as a pair programmer but as an entire team of developers. The speakers at our Coding for the Agentic World live AI Codecon event in September 2025 explored new tools, workflows, and hacks that are shaping this emerging discipline of agentic AI.

Software engineers aren’t just working with single coding agents. They’re building and deploying their own custom agents, often within complex setups involving multi-agent scenarios, teams of coding agents, and agent swarms. This shift from conducting AI to orchestrating AI elevates the importance of truly understanding how good software is built and maintained.

We know that AI generates better code with context, and this is also true of agents. As with coding workflows, this means understanding context engineering is essential. However, the differentiator for senior engineers in 2026 will be how well they apply intermediate skills such as product thinking, advanced testing, system design, and architecture to their work with agentic systems.

AI and Software Architecture

We began 2025 with our January Superstream, Software Architecture in the Age of AI, where speaker Rebecca Parsons explored the architectural implications of AI, dryly noting that “given the pace of change, this could be out of date by Friday.” By the time of our Superstream in August, things had solidified a little more and our speakers were able to share AI-based patterns and antipatterns and explain how they intersect with software architecture. Our December 9 event will look at enterprise architecture and how architects can navigate the impact of AI on systems, processes, and governance. (Registration is still open—save your seat.) As these events show, AI has progressed from being something architects might have to consider to something that is now essential to their work.

We’re seeing successful AI-enhanced architectures using event-driven models, enabling AI agents to act on incoming triggers rather than fixed prompts. This means it’s more important than ever to understand event-driven architecture concepts and trade-offs. In 2026, topics that align with evolving architectures (evolutionary architectures, fitness functions) will also become more important as architects look to find ways to modernize existing systems for AI without derailing them. AI-native architectures will also bring new considerations and patterns for system design next year, as will the trend toward agentic AI.
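The event-driven pattern described above can be sketched in a few lines: the agent is registered as a subscriber and acts on incoming events rather than on a fixed prompt. The event bus, event names, and the stand-in "agent" below are illustrative assumptions, not a specific framework:

```python
from collections import defaultdict
from typing import Callable

# Minimal publish/subscribe event bus; an AI agent is just another subscriber.
class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._subscribers[event_type]:
            handler(payload)

triage_log: list[str] = []

def triage_agent(event: dict) -> None:
    # A real agent would call a model here; we just record that it was triggered.
    triage_log.append(f"triage issue #{event['id']}")

bus = EventBus()
bus.subscribe("issue.opened", triage_agent)
bus.publish("issue.opened", {"id": 42})
```

The design point is the trigger: the agent runs because something happened in the system, not because a human typed a prompt, which is why event-driven architecture concepts and trade-offs matter here.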

As was the case for their engineer coworkers, architects still have to know the basics: when to add an agent or a microservice, how to consider cost, how to define boundaries, and how to act on the knowledge they already have. They also need to understand how an AI element relates to other parts of their system: What are the inputs and outputs? And how can they measure performance, scalability, cost, and other cross-functional requirements?

Companies will continue to decentralize responsibilities across different functions this year, and AI brings new sets of trade-offs to be considered. It’s true that regulated industries remain understandably wary of granting access to their systems. They’re rolling out AI more carefully with greater guardrails and governance, but they are still rolling it out. So there’s never been a better time to understand the foundations of software architecture. It will prepare you for the complexity on the horizon.

Strong Foundations Matter

AI has changed the way software is built, but it hasn’t changed what makes good software. As we enter 2026, the most important developer and architecture skills won’t be defined by the tool you know. They’ll be defined by how effectively you apply judgment, communicate intent, and handle complexity when working with (and sometimes against) intelligent assistants and agents. AI rewards strong engineering; it doesn’t replace it. It’s an exciting time to be involved.


Join us at the Software Architecture Superstream on December 9 to learn how to better navigate the impact of AI on systems, processes, and governance. Over four hours, host Neal Ford and our lineup of experts including Metro Bank’s Anjali Jain and Philip O’Shaughnessy, Vercel’s Dom Sipowicz, Intel’s Brian Rogers, Microsoft’s Ron Abellera, and Equal Experts’ Lewis Crawford will share their hard-won insights about building adaptive, AI-ready architectures that support continuous innovation, ensure governance and security, and align seamlessly with business goals.

O’Reilly members can register here. Not a member? Sign up for a 10-day free trial before the event to attend—and explore all the other resources on O’Reilly.




Using MIRO to Build a Living Archive of Learning | Scott Smith


Scott Smith: Using MIRO to Build a Living Archive of Learning

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"We're in a servant leadership role. So, ask: is the team thriving? That's a huge indication of success." - Scott Smith

 

For Scott, success as a Scrum Master isn't measured by velocity charts or burn-down graphs—it's measured by whether the people are thriving. This includes everyone: the development team and the Product Owner. 

As a servant leader, Scott's focus is on creating conditions where teams can flourish, and he has practical ways to gauge that health. Scott does a light touch check on a regular basis and a deeper assessment quarterly. Mid-sprint, he conducts what he calls a "vibe" check—a quick pulse to understand how people are feeling and what they need. During quarterly planning, the team retrospects and celebrates achievements from the past quarter, keeping and tracking actions to ensure continuous improvement isn't just talked about but lived. Scott's approach recognizes that success is both about the work being done and the people doing it. When teams feel supported, heard, and valued, the work naturally flows better. This people-first perspective defines what great servant leadership looks like in practice.

 

Self-reflection Question: How often do you check in on whether your team is truly thriving, and what specific indicators tell you they are?

Featured Retrospective Format for the Week: MIRO as a Living History Museum

"Use the multiple retros in the MIRO board as a shared history museum for the team." - Scott Smith

 

Scott leverages MIRO not just as a tool for running retrospectives but as a living archive of team learning and growth. He uses MIROVERSE templates to bring diversity to retrospective conversations, exploring the vast library of pre-built formats that offer themed and structured approaches to reflection. The magic happens when Scott treats each retrospective board not as a disposable artifact but as part of the team's shared history museum. 

Over time, the accumulation of retrospective boards tells the story of the team's journey—what they struggled with, what they celebrated, what actions they took, and how they evolved. This approach transforms retrospectives from isolated events into a continuous narrative of improvement. Teams can look back at previous retros to see patterns, track whether actions were completed, and recognize how far they've come. MIRO becomes both the canvas for current reflection and the archive of collective learning, making improvement visible and tangible across time.

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Scott Smith

 

Scott Smith is a 53-year-old professional based in Perth, Australia. He balances a successful career with a strong focus on health and fitness, currently preparing for bodybuilding competitions in 2026. With a background in leadership and coaching, Scott values growth, discipline, and staying relevant in a rapidly changing world.

 

You can link with Scott Smith on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251204_Scott_Smith_Thu.mp3?dest-id=246429

Queue-it’s Virtual Waiting Room System Design with Product Architect Moji Sarooghi


In this episode, Moji Sarooghi, Distinguished Product Architect at Queue-it, breaks down the design principles and distributed systems behind Queue-it’s virtual waiting room. He explains how the team handles massive traffic spikes, upholds strict first-in, first-out fairness on request, and maintains reliability at a scale that would overwhelm most platforms. Moji also covers the shift from server-side integrations to Edge compute, how Safety Net protects against unexpected peaks, and why simplicity and failure-oriented design drive every architectural choice. A clear, technical exploration of scaling responsibly when millions depend on your system.
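The core mechanics discussed in the episode — FIFO fairness, a controlled outflow rate, and wait-time estimation — can be sketched roughly as follows. This is a toy model under stated assumptions, not Queue-it's implementation; the class name, the tick-based admission, and the numbers are all illustrative:

```python
from collections import deque

# Toy model of a virtual waiting room: visitors queue in arrival order (FIFO)
# and an "outflow" rate bounds how many are admitted per interval.
class WaitingRoom:
    def __init__(self, outflow_per_tick: int) -> None:
        self.outflow_per_tick = outflow_per_tick
        self.queue: deque[str] = deque()

    def join(self, visitor_id: str) -> int:
        """Enqueue a visitor; returns their 1-based position in line."""
        self.queue.append(visitor_id)
        return len(self.queue)

    def tick(self) -> list[str]:
        """Admit up to outflow_per_tick visitors, strictly in arrival order."""
        admitted = []
        for _ in range(min(self.outflow_per_tick, len(self.queue))):
            admitted.append(self.queue.popleft())
        return admitted

    def estimated_wait_ticks(self, position: int) -> int:
        """Rough estimate: position divided by outflow rate, rounded up."""
        return -(-position // self.outflow_per_tick)  # ceiling division

room = WaitingRoom(outflow_per_tick=2)
for v in ["a", "b", "c", "d", "e"]:
    room.join(v)
first_batch = room.tick()
```

The hard parts Moji discusses — doing this across distributed infrastructure, verifying visitors cryptographically so the queue can't be bypassed, and adjusting outflow to what the downstream system can absorb — are exactly what this toy model leaves out.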

Episode page

---

  • (00:00) - Intro
  • (01:34) - Visitor Flow: How the Waiting Room Works
  • (03:24) - Edge vs. Server-Side Connectors
  • (06:10) - Why Edge Improves Simplicity & Security
  • (07:12) - Preventing Queue Bypass Attempts
  • (09:14) - Connector Types & Verification Logic
  • (12:04) - Safety Net: Automatic Peak Protection
  • (14:54) - Scheduled Waiting Rooms + Safety Net
  • (17:19) - FIFO at Scale
  • (18:57) - Estimating Wait Times at Scale
  • (20:40) - Designing for Reliability & High Traffic
  • (24:38) - How Outflow Is Calculated
  • (29:07) - Queue-It Token & Visitor Verification
  • (31:02) - Cookies & Secure Access
  • (32:35) - Key AWS Services in the Architecture
  • (34:57) - Future: Multi-Cloud, Edge, & Bring Your Own Proxy
  • (37:59) - Outro


Mojtaba Sarooghi is a Distinguished Product Architect at Queue-it. Moji was one of the company’s first employees, starting his journey as a software developer over 10 years ago. He is highly experienced with AWS services, product and architectural design, managing developer teams, and defining and executing on product vision.

This podcast is hosted by José Quaresma, researched by Joseph Thwaites and produced by Perseu Mandillo. 

© Queue-it, 2025





Download audio: https://media.transistor.fm/f3618de8/0e17ddfc.mp3

Anatomy of an API: the small but mighty MapOpenApi()

1 Share

I’m really proud of the OpenAPI experience in Minimal APIs, especially in comparison to REST API frameworks in other ecosystems. Because we’re able to build on C#’s strong type system and .NET’s rich set of APIs for introspecting runtime behavior, we can create a really magical experience that lets us derive fairly accurate OpenAPI docs from APIs without a lot of user intervention. In the following code:

var builder = WebApplication.CreateBuilder();

builder.Services.AddOpenApi();

var app = builder.Build();

app.MapOpenApi();

app.MapPost("/todos/{id}", (int id, Todo todo)
    => TypedResults.Created($"/todos/{id}", todo));

app.Run();

record Todo(int Id, string Title, bool IsCompleted, DateTime CreatedAt);

The AddOpenApi call registers services that are required for OpenAPI document generation, including a set of providers that understand how to examine the endpoints in the app and derive structured descriptions for them (see this earlier blog post I wrote). The MapOpenApi method registers an endpoint that emits the OpenAPI document in JSON format. By default, the document is served at http://localhost:{port}/openapi/v1.json, where v1 is the default document name. The document that you get is rich with metadata about the parameters and responses that this API produces.

Screenshot of OpenAPI document served from local endpoint

Today, I want to home in on the MapOpenApi method and talk a little bit about some of the design choices wrapped up in it. It’s a small and tight API, but it’s a total workhorse. Here’s what its method signature in the framework looks like:

public static IEndpointConventionBuilder MapOpenApi(
    this IEndpointRouteBuilder endpoints,
    [StringSyntax("Route")] string pattern = "/openapi/{documentName}.json")

Let’s walk through the details of these three lines of code.

First, why MapOpenApi instead of something like UseOpenApi? The Map verb typically refers to components that are modeled as endpoints in the ASP.NET Core ecosystem, whereas the Use verb typically refers to components that are modeled as middleware. The choice to model this as an endpoint instead of a middleware is actually pretty cool because it lets the OpenAPI document participate in all the endpoint-specific behavior that is available in ASP.NET Core’s other APIs. For example, if you want to lock down your API docs behind auth? Easy. Want to cache the document so you’re not regenerating it on every request? Also easy. Your code ends up being a chain of fluent calls to modify the behavior of the endpoint.

app.MapOpenApi()
  .RequireAuthorization()
  .CacheOutput();

You might’ve noticed the [StringSyntax("Route")] attribute on the pattern parameter. That’s a cute little hint to your editor that says “hey, this is a route template, maybe colorize it accordingly.” So if you’re staring at your code in VS Code or Visual Studio, you’ll get nice syntax highlighting on the route parameters. It’s one of those small touches that makes the DX a bit nicer for this API. In addition to colorization, it also opts in the parameter to a bunch of static analysis that ASP.NET Core does automatically. For example, if you provide a route pattern template that is invalid for whatever reason, you’ll get a warning during build about this and be able to rectify the situation. This is part of the “shift-left” philosophy of API design, where errors and warnings happen as code is written and built, not when it is running.

The default route pattern is sensible enough that most folks won’t need to change it, but if you want to customize it, there are plenty of options for you. The most important thing is making sure your route template includes a {documentName} parameter so the framework knows which document you’re asking for. The code below lets you serve the OpenAPI document from a different route in your service.

app.MapOpenApi("/docs/{documentName}/openapi.json");

Here’s a fun one: we added support for emitting OpenAPI documents in YAML after the initial API shipped. Rather than polluting the API surface with a new overload or a MapOpenApiYaml method (gross!), I just leaned into the file extension in the route. If you change .json to .yaml in the route template, boom, you get YAML. I’m particularly proud of this because it keeps the API surface tiny while still being expressive.

app.MapOpenApi("/docs/{documentName}/openapi.yaml");

And because you’re calling a method to register an endpoint and not some middleware, you can call MapOpenApi multiple times to register different routes. If you want to serve both YAML and JSON variants of your OpenAPI documents, you just register two different endpoints with two different extensions.

app.MapOpenApi("/docs/{documentName}/openapi.yaml");
app.MapOpenApi("/docs/{documentName}/openapi.json");

The beauty of this API is that it’s concise and expressive without being too clever. That said, it does lean pretty heavily on understanding how the route templating system works, which might trip up folks who are new to ASP.NET Core. But honestly, the terseness works out well in practice since most people are gonna be just fine with the defaults: serve a JSON document at the default route and plug it into the rest of their OpenAPI tooling.

And that’s MapOpenApi in a nutshell. It’s one of those APIs that looks deceptively simple on the surface but has a lot of thought packed into all the little details. The endpoint-based model gives you flexibility, the route templating keeps things consistent with the rest of the ecosystem, and the file extension trick for YAML support is just chef’s kiss (if I do say so myself!).
