Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Grief and the Nonprofessional Programmer

1 Share

I can’t claim to be a professional software developer—not by a long shot. I occasionally write some Python code to analyze spreadsheets, and I occasionally hack something together on my own, usually related to prime numbers or numerical analysis. But I have to admit that I identify with both of the groups of programmers that Les Orchard identifies in “Grief and the AI Split”: those who just want to make a computer do something and those who grieve losing the satisfaction they get from writing good code.

A lot of the time, I just want to get something done; that’s particularly true when I’m grinding through a spreadsheet with sales data that has a half-million rows. (Yes, compared to databases, that’s nothing.) It’s frustrating to run into some roadblock in pandas that I can’t solve without looking through documentation, tutorials, and several incorrect Stack Overflow answers. But there’s also the programming that I do for fun—not all that often, but occasionally: writing a really big prime number sieve, seeing if I can do a million-point convex hull on my laptop in a reasonable amount of time, things like that. And that’s where the problem comes in…if there really is a problem.

The other day, I read a post of Simon Willison’s that included AI-generated animations of the major sorting algorithms. No big deal in itself; I’ve seen animated sorting algorithms before. Simon’s were different only in that they were AI-generated—but that made me want to try vibe coding an animation rather than something static. Graphing the first N terms of a Fourier series has long been one of the first things I try in a new programming language. So I asked Claude Code to generate an interactive web animation of the Fourier series. Claude did just fine. I couldn’t have created the app on my own, at least not as a single-page web app; I’ve always avoided JavaScript, for better or for worse. And that was cool, though, as with Simon’s sorting animations, there are plenty of Fourier animations online.
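Graphing the first N terms of a Fourier series takes only a few lines; here’s a minimal, non-animated Python sketch (the square wave is my choice of example, not necessarily the series in the app):

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Partial sum of the Fourier series for a square wave:
    f(t) = (4/pi) * sum over odd k of sin(k*t)/k."""
    return (4 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_terms, 2)
    )

# At t = pi/2 the square wave equals 1; the partial sums converge toward it,
# with the famous Gibbs overshoot near the jumps.
approx = square_wave_partial_sum(math.pi / 2, 50)
```

An animation is just this function evaluated over a grid of t values as n_terms grows.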

I then got interested in animations that aren’t so common. I grabbed Algorithms in a Nutshell, started looking through the chapters, and asked Claude to animate a number of things I hadn’t seen, ending with Dijkstra’s algorithm for finding the shortest path through a graph. It had some trouble with a few of the algorithms, though when I asked Claude to generate a plan first and used a second prompt asking it to implement the plan, everything worked.

And it was fun. I made the computer do things I wanted it to do; the thrill of controlling machines is something that sticks with us from our childhoods. The prompts were simple and short—they could have been much longer if I wanted to specify the design of the web page, but Claude’s sense of taste was good enough. I had other work to do while Claude was “thinking,” including attending some meetings, but I could easily have started several instances of Claude Code and had them create simulations in parallel. Doing so wouldn’t have required any fancy orchestration because every simulation was independent of the others. No need for Gas Town.

When I was done, I felt a version of the grief Les Orchard writes about. More specifically: I don’t really understand Dijkstra’s algorithm. I know what it does and have a vague idea of how it works, and I’m sure I could understand it if I read Algorithms in a Nutshell rather than used it as a catalog of things to animate. But now that I had the animation, I realized that I hadn’t gone through the process of understanding the algorithm well enough to write the code. And I cared about that.
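For what it’s worth, the core of Dijkstra’s algorithm fits in a dozen lines of Python. This is a minimal sketch, nothing like the animated version Claude produced, using a priority queue of (distance, node) pairs:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a weighted graph.
    graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
# dijkstra(g, "a") -> {"a": 0, "b": 1, "c": 3}: the path a->b->c beats a->c.
```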

I also cared about Fourier transforms: I would never “need” to write that code again. If I decide to learn Rust, will I write a Fourier program, or ask Claude to do it and inspect the output? I already knew the theory behind Fourier transforms—but I realized that an era had ended, and I still don’t know how I feel about that. Indeed, a few months ago, I vibe coded an application that recorded some audio from my laptop’s microphone, did a discrete Fourier transform, and displayed the result. After pasting the code into a file, I took the laptop over to the piano, started the program, played a C, and saw the fundamental and all the harmonics. The era was already in the past; it just took a few months to hit me.
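That experiment can be reproduced without a piano or a microphone: synthesize a tone with a few harmonics and run a discrete Fourier transform over it. A sketch with NumPy (the real app recorded live audio; the tone and amplitudes here are invented for illustration):

```python
import numpy as np

RATE = 44100                  # samples per second
FUND = 261.63                 # middle C, in Hz
t = np.arange(RATE) / RATE    # one second of samples

# A crude "piano" tone: fundamental plus two weaker harmonics.
signal = (np.sin(2 * np.pi * FUND * t)
          + 0.5 * np.sin(2 * np.pi * 2 * FUND * t)
          + 0.25 * np.sin(2 * np.pi * 3 * FUND * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / RATE)
peak = freqs[np.argmax(spectrum)]   # strongest bin lands near the fundamental
```

Plot `spectrum` against `freqs` and the fundamental and harmonics show up as spikes, just as they did on the laptop at the piano.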

Why does this bother me? My problem isn’t about losing the pleasure of turning ideas into code. I’ve always found coding at least somewhat frustrating, and at times, seriously frustrating. But I’m bothered by the lack of understanding: I was too lazy to look up how Dijkstra works, too lazy to look up (again) how discrete Fourier works. I made the computer do what I wanted, but I lost the understanding of how it did it.

What does it mean to lose the understanding of how the code works? Anything? It’s common to place the transition to AI-assisted coding in the context of the transition from assembly language to higher-level languages, a process that started in the late 1950s. That’s valid, but there’s an important difference. You can certainly program a discrete fast Fourier transform in assembly; that may even be one of the last bastions of assembly programs, since FFTs are extremely useful and often have to run on relatively slow processors. (The “butterfly” algorithm is very fast.) But you can’t learn signal processing by writing assembly any more than you can learn graph theory. When you’re writing in assembler, you have to know what you’re doing in advance. The early programming languages of the 1950s (Fortran, Lisp, Algol, even BASIC) are much better for gradually pushing forward to understanding, to say nothing of our modern languages.
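The “butterfly” structure is the heart of the Cooley-Tukey FFT; a bare-bones recursive version shows the shape (purely illustrative: production FFTs are iterative, in-place, and far more optimized):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # The "butterfly": combine one even-index and one odd-index term.
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out
```

The recursion turns an O(n²) direct DFT into O(n log n), which is exactly why it survives on slow processors.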

That is the real source of grief, at least for me. I want to understand how things work. And I admit that I’m lazy. Understanding how things work quickly comes in conflict with getting stuff done—especially when staring at a blank screen—and writing Python or Java has a lot to do with how you come to an understanding. I will never need to understand convex hulls or Dijkstra’s algorithm. But thinking more broadly about this industry, I wonder whether we’ll be able to solve the new problems if we delegate understanding the old problems to AI. In the past, I’ve argued that I don’t see AI becoming genuinely creative because creativity isn’t just a recombination of things that already exist. I’ll stick by that, especially in the arts. AI may be a useful tool, but I don’t believe it will become an artist. But anyone involved with the arts also understands that creativity doesn’t come from a blank slate; it also requires an understanding of history, of how problems were solved in the past. And that makes me wonder whether humans—at least in computing—will continue to be creative if we delegate that understanding to AI.

Or does creativity just move up the stack to the next level of abstraction? And is that next level of abstraction all about understanding problems and writing good specifications? Writing a detailed specification is itself a kind of programming. But I don’t think that kind of creativity will assuage the grief of the programmer who loves coding—or who may not love coding but loves the understanding that it brings.



Read the whole story
alvinashcraft
4 hours ago
reply
Pennsylvania, USA
Share this story
Delete

Clean Code to Clean Architecture: Refactoring a Fat Controller into Vertical Slices in ASP.NET Core


## Clean Code to Clean Architecture: Refactoring a Fat Controller into Vertical Slices in ASP.NET Core

Most teams do not decide to build a fat controller. They arrive there one deadline at a time. A controller starts as a thin HTTP entry point, then absorbs validation, orchestration, persistence, caching, notifications, and a few “temporary” business rules. Six months later, one endpoint touches five services, three repositories, and two external APIs. The problem is no longer style. It is architecture.

This article focuses on the first half of that recovery: how to identify the failure modes in a legacy ASP.NET Core controller, how to extract behavior into commands and queries with MediatR, and how to move validation out of attributes and into explicit, testable rules. The end state is not “more patterns.” It is a codebase where each feature can evolve with less fear, fewer unintended side effects, and clearer operational boundaries. ASP.NET Core supports both controller-based APIs and Minimal APIs, and Microsoft now documents Minimal APIs as the recommended approach for new API projects, which makes this refactoring path especially relevant for teams modernizing controller-heavy codebases.


Take your PostgreSQL-backed apps to the next level


PostgreSQL is a powerful and hugely popular database engine, and it really comes alive across Microsoft developer platforms. You can build with PostgreSQL across Azure offerings, develop productively in Visual Studio Code with strong extensions and tooling, and connect your data to agentic development workflows and AI services. There’s amazing opportunity to bring those pieces together to modernize apps faster, migrate with confidence, and ship intelligent experiences on a proven database foundation. The challenge is that getting the most out of PostgreSQL across this full stack can be complex, especially when you are tuning performance, designing for resiliency, operating at scale, or building agent experiences that need reliable, well-modeled data access.

That is why we created the PostgreSQL Like a Pro video series. It provides practical guidance and real-world demos that help you take advantage of PostgreSQL on Azure, supercharged by AMD technologies, while using the Microsoft tools and services you already rely on from local development through production.

Here’s what you’ll learn from the pros

  • Building AI agents that actually work: You’ll see how to use the Model Context Protocol (MCP) to let your agents explore your database and run vector searches using natural language, all without leaving Microsoft Foundry.


  • Migrating without the manual grind: Check out our new AI-assisted migration tooling in VS Code. It uses an “agentic self-correction” approach to automatically catch and fix schema or code issues when you’re moving from Oracle to Azure Database for PostgreSQL.


  • Optimizing PostgreSQL performance on Azure: We’ll show you how Azure Database for PostgreSQL meets your performance and resiliency needs with fine-grained control and flexible deployment options.


  • Scaling to meet the most demanding workloads: Watch how our new PostgreSQL offering, Azure HorizonDB, leverages decoupled compute and storage to enable amazing performance and scaling benefits for mission-critical apps.


What to expect

Preview the best practices, the architecture, and the results you can achieve when you pair PostgreSQL on Azure with AMD technology.

Ready to Postgres Like a Pro? Subscribe to the YouTube channel and follow the PostgreSQL Hub for Azure Developers.

The post Take your PostgreSQL-backed apps to the next level appeared first on Microsoft for Developers.


From Idea to Production in 4 Hours with AI: A Case Study


In this blog, I want to show you how I built something I estimated would take me 20-60 hours of work in about half a day. This included a React frontend, .NET Azure Functions backend APIs, Azure Static Web Apps, Terraform infrastructure, live CI/CD pipelines in Azure DevOps, and everything deployed to a custom domain. All that, and I wrote none of the code myself. I just wrote the specs and oversaw the work, like a general contractor.

Project Background

I run a software conference called Beer City Code. Every year, we do a raffle at the end of the day. Sponsors hand out tickets at their booths throughout the day. Attendees have to opt in to an email list to be eligible, and at the end of the day we draw for winners. For years, we double-checked everyone’s opt-in status and eligibility with spreadsheets. As a volunteer-run event, the value proposition just wasn’t there for the 20 – 60 hours of work to create an application for this.

This year, though, I decided to build a real web app for it with the AI coding approach called spec-driven development. Recent experiences with AI-assisted development suggested I might be able to get this work down to 4–5 hours, making it worth the effort. Now, with 4 hours invested, I can save a few hours of manual effort at every event, paying off within a couple of years.

If I was going to do it this way, I wanted to include everything on my wishlist, not just an MVP (minimum viable product). With that in mind, I decided to include not just the attendee eligibility lookup and the ability to manage one’s opt-in status, but also an admin dashboard, automatic Eventbrite data synchronization, and a raffle drawing UI.

The Code Is Now the Easy Part

Most of us had a similar experience the first time we picked up Claude Code, Cursor, or Copilot: we describe what we want in overly vague terms, the AI writes something plausible, it mostly works, and we think “oh, this is going to be great.” Then, a few edits later, we start to run into incoherent architecture, inconsistent pattern usage, missing edge cases, or a data model that doesn’t make sense. The problem isn’t just mediocre AI-generated code. It’s that nobody wrote a good specification.

As I’ve said in other posts, the human developer is the general contractor and AI is the crew of subcontractors. The thing about that analogy is: subcontractors need blueprints and oversight. You can’t hand a framing crew a napkin sketch and expect a house. You give them blueprints, and blueprints take time and effort. The quality of what gets built depends almost entirely on what they were given to build from.

For this blog, I briefly considered demonstrating this the hard way: building the app twice, once with a proper spec and once as a pure vibe-coder, and comparing the results. Then I realized that this would be like driving a car blindfolded to prove you need to be able to see to arrive safely at your destination. My experience working with AI-generated code tells me that the outcome isn’t really in doubt. The blindfolded version would have been non-functional, architecturally incoherent, and painful to maintain. Probably insecure. Almost certainly not what was needed. Running that experiment would have just produced throwaway code and wasted a half-day. So instead, I decided to document what I actually did.

Specification Creation

There’s a tool called spec-kit that is aimed at helping you build specs and then have AI code against them. But I wanted to do a more hands-on version of this on my own first, to learn more about how the process works. So, I started in Claude.ai, not yet in Claude Code. This first session wasn’t about writing code at all. It was about figuring out what to build. I came in with a clear-enough idea of what I wanted, but “clear-enough” has a way of falling apart when someone starts asking the right questions. I prompted Claude to do exactly that: ask questions.

ME: I'd like help building specs for a utility for the conference I run, Beer City Code. 

The background:
We want sponsors to feel their booths were valuable, and part of that is getting attendees to visit them. We encourage that with a raffle drawing for prizes at the end of the day. The sponsors get tickets to hand out (in exchange for whatever feels valuable to them). In addition, we want attendees to opt-in to the email list, which is given to sponsors at the end of the event. To get them to opt-in, only opted-in attendees are eligible for the raffle.

The problem:
Not everyone registers themselves. Often someone at their company registers them, and opting them in is of no value to that person. With that in mind, we want to have a simple way they can provide identifiable information (their email and maybe their ticket or order number from Eventbrite?...something they would have if they have the ticket) into a web application and see their current opt-in status and eligibility (organizers and vendors aren't eligible, for example). Then, they should be able to change that status up to a certain cut-off datetime, shortly before the drawing.

The technical specs:
This is a low-budget conference, so I'd like this to run on very low-cost cloud resources in our Azure. I'm imagining a React FE web app running on Azure blob storage and a CDN plus an Azure function API using either Azure storage tables or something similar.

The known requirements:
* An admin UI that is accessible to only admins, perhaps just using a secret access code. In the admin UI, you should be able to upload eventbrite tickets as xlsx or csv (uploading the same list multiple times should NOT create duplicates, but only update, add or remove any changes...might be easiest to just delete everything and reprocess the upload), set a datetime when the ability to change opt-in status is locked down, a drawing UI that checks someone's eligibility and opt-in status by name (should fuzzy match if it's not spelled right or only part of the name)
* A status update UI with: a landing page to enter email and/or ticket number to verify identity, and error response if they don't match anyone, then a status page to show the current opt-in status, ticket type(s) (some people will have multiple tickets...for example a workshop ticket and VIP party ticket...if any of those is an organizer or sponsor booth ticket, they aren't eligible even if their other ticket types are), and finally, a way for them (up until the deadline) to change their opt-in status.

I'll be coding this with AI agents, so I'd like your help answering all the detailed questions that need to be answered to have very detailed specs that can be followed by AIs to build this, then to actually generate those specs.

Before writing a single line of spec, it surfaced things as questions to me that I hadn’t thought through yet:

  • What ticket types should make someone ineligible for the raffle? My initial instinct was organizers, speakers, volunteers, and sponsor staff. But when I actually thought it through, speakers and volunteers are eligible — they’re community members, not paid staff. Without that clarification, the eligibility logic would have been wrong, and I wouldn’t have caught it until someone got wrongly excluded during the drawing.
  • What does identity verification actually require? Email and order number, both required — not either/or. That’s a UX decision with real implications for how attendees look themselves up and what error messages make sense.
  • What happens in the drawing UI when a name is drawn? Just pass/fail eligibility, or do you mark winners to avoid re-drawing them? Two completely different data models depending on the answer.

These aren’t questions I thought to answer upfront on my own. If coding this by hand, I would have made assumptions, coded against those assumptions, and discovered the gap later. With AI-assisted development, I might have just missed these altogether.
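One of those details, the fuzzy name match the drawing UI needed, is also the kind of thing worth pinning down before any code exists. Python’s standard library can do a serviceable version; this is a sketch of the idea only (the names, cutoff, and function are mine, not the app’s actual matcher):

```python
import difflib

attendees = ["Jane Smith", "Jon Snow", "Johan Smit", "Mary Johnson"]

def lookup(name, candidates, cutoff=0.6):
    """Return the closest attendee names, tolerating typos and partial input."""
    return difflib.get_close_matches(name, candidates, n=3, cutoff=cutoff)

lookup("Jane Smyth", attendees)   # the misspelling still finds "Jane Smith"
```

Even a sketch like this forces decisions the spec has to answer: how fuzzy is too fuzzy, and what happens when two attendees match equally well.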

Grounding the Spec in Real Data

I also grounded the specs in real data. Rather than speccing out the Eventbrite integration from memory, I walked Claude through making a live API call to the endpoint to look at an actual attendee record, and then had it build the data model from those real field names.

ME: Shall we try calling it now to see what comes back?

CLAUDE: Love that idea — let's call it live and see exactly what the 
response looks like. To do that I'll need a couple things from you:

- Your Eventbrite Private Token
- Your Event ID

Once you paste those I'll make the call directly and we can see the 
real field names, ticket type structure, order numbers, and whatever 
else comes back in the response.

ME: https://www.eventbrite.com/e/beer-city-code-2026-tickets-[event ID]

CLAUDE: Great, got the event ID. Now I just need your private API token 
and I can make the call. You can find it at:

eventbrite.com → Account Settings → Developer Links → API Keys

ME: [API credentials provided]

CLAUDE: Let me make that API call now with your private token.

[attempts direct fetch — hits security restriction]

The security rules on my tools won't let me fetch an arbitrary API URL 
directly. The cleanest path is to make the call from your machine. 
Run this in your terminal:

    curl -H "Authorization: Bearer {token}" \
      "https://www.eventbriteapi.com/v3/events/{event_id}/attendees/?expand=ticket_class" \
      -o response.json

ME: [uploads response.json]

CLAUDE: This is gold. The API response tells us almost everything we need.

Key findings:

- Three distinct identifiers per attendee row:
    id — unique per ticket row
    order_id — shared across all tickets in the same purchase
    barcodes[0].barcode — the full barcode on the physical ticket/PDF

Notice that the barcode actually contains the order ID as a prefix. 
For identity verification, order_id is likely what the attendee sees 
in their confirmation email — that's the right field to use alongside 
email.

In the rest of our conversation, the AI and I decided that the identity verification flow should use order_id, not the ticket barcode, because we could see from the sample confirmation emails I gave it exactly what an attendee would have in front of them. Maybe that’s a small detail, but if we hadn’t worked through it upfront, it might have been wrong.

I also knew the tech stack I wanted and shared that with Claude: React front-end running on Azure Static Web Apps, a .NET 9 Azure Functions API back-end, Azure Table Storage for the database, and Terraform for creating the cloud infrastructure. I put every one of those decisions into the spec before Claude Code wrote a single file. That’s important: architecture decided during spec creation is a guardrail. Architecture decided during implementation is chaos.

Where to Put the Specs

I knew Claude had an MCP server connection to the event’s Azure DevOps account. With that, the specs didn’t need to be exported to a Word document or a markdown file, or copied and pasted anywhere. Instead, I told Claude to create the specs as live work items directly in the ADO project. Claude created 4 Epics, 12 Features, and several user stories with well-defined acceptance criteria. By the end of the session, there was an actual backlog in Azure DevOps that Claude Code could reference by story number. In a real-world project, this lets the whole team, AIs and people alike, grab tasks from the backlog and collaborate.

The CLAUDE.md file was the other half of the handoff. This is the file Claude Code reads automatically when you start a session in a repo. It’s how context transfers from one AI instance to another. The chat session built the spec. The CLAUDE.md captured the architecture, data model, Eventbrite field names, eligibility rules, API shape, and environment config all in one place.

Now I was ready to start developing, so I switched to Claude Code. My first prompt was:

ME: Read CLAUDE.md to understand the project fully. Then let's start with 
the repo structure and Terraform configuration (User Story #40).

Please:
1. Create the folder structure as defined in CLAUDE.md
2. Write the Terraform configuration in /terraform that provisions all 
   Azure resources — Resource Group, Storage Account (with Table Storage 
   + static website), Azure CDN, Function App (.NET 9 Consumption plan), 
   App Service Plan, and Application Insights
3. Use variables for all environment-specific values as described in CLAUDE.md
4. Include a terraform.tfvars.example file with placeholder values
5. Include a README in /terraform explaining the annual deployment process

That prompt worked because the context was already in place: “Read CLAUDE.md” smoothed the hand-off between the Claude.ai spec-creation session and the Claude Code programming tasks.

Implement, Story by Story

For implementation I decided to use Claude Code running in my terminal, with access to the filesystem and the ability to run commands. This is a better fit for building software than a chat window. Terminal access means it can make coordinated changes in the repo, run CLIs like dotnet build, read compiler errors, and iterate.

The session discipline mattered as much as the tool. I didn’t hand Claude Code the whole backlog and say “build it.” Instead, I worked story by story. This made sure each story had its own focused prompt. After the Terraform story was complete and committed, the next prompt was:

ME: Read CLAUDE.md. Now let's build User Story #43 — the Azure Functions 
API project and Eventbrite sync.

Please:
1. Scaffold a .NET 9 Azure Functions v4 isolated worker project at 
   /api/BccRaffle.Functions/
2. Create the data models: AttendeeEntity.cs and ConfigEntity.cs matching 
   the Azure Table Storage schema in CLAUDE.md
3. Create EventbriteService.cs that calls the Eventbrite API with pagination 
   (GET /v3/events/1571846889359/attendees/?expand=ticket_class), parses the 
   opt-in answer from question_id 306469803, and maps to AttendeeEntity
4. Create AttendeeService.cs for Table Storage upsert operations — preserving 
   optin_override on sync
5. Create AdminSyncFunction.cs — POST /api/admin/sync — validates 
   X-Admin-Secret header, calls EventbriteService, upserts all attendees, 
   returns count

Notice what both prompts have in common: a story number, specific file paths, named interfaces, exact API endpoints, real field names from the spec. I didn’t let the AI invent any of that. I asked it to implement what was already decided. Each story was a bounded scope. Each commit was a restore point. When Terraform was done, I closed Story #40 in ADO and moved on. That rhythm (complete, verify, commit, next) kept the session from turning into a mess of half-finished or untested things. Note that there are ways to use AI to do this process in a loop with adversarial agents verifying success, but I’ll talk about that in a future blog post.

I iterated with this process through the stories. The AI handled the Terraform HCL configuration for the full Azure resource group, the .NET 9 Functions project scaffold, the Eventbrite sync with pagination and the opt-in answer parsing, the React components and routing, and the deployment pipeline YAML.
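The pagination piece is a good example of logic worth getting right once. A continuation-style loop can be sketched with an injectable fetch function so it’s testable without hitting the network (the `pagination`, `has_more_items`, and `continuation` field names follow Eventbrite’s documented v3 response shape, but treat this as an assumption-laden sketch, not the generated C# code):

```python
def fetch_all_attendees(fetch_page):
    """Drain a paginated attendee endpoint.

    fetch_page(continuation) returns a dict shaped like Eventbrite's
    response: {"attendees": [...],
               "pagination": {"has_more_items": bool, "continuation": str}}.
    """
    attendees, continuation = [], None
    while True:
        page = fetch_page(continuation)
        attendees.extend(page["attendees"])
        if not page["pagination"].get("has_more_items"):
            return attendees
        continuation = page["pagination"]["continuation"]

# A fake fetcher stands in for the HTTP call, so the loop runs offline.
pages = [
    {"attendees": [{"id": "1"}, {"id": "2"}],
     "pagination": {"has_more_items": True, "continuation": "abc"}},
    {"attendees": [{"id": "3"}],
     "pagination": {"has_more_items": False}},
]
result = fetch_all_attendees(lambda c: pages[0] if c is None else pages[1])
# result holds all three attendee rows, across both pages
```

Injecting the fetcher is the same move the adversarial-verification loops mentioned above rely on: make the boundary explicit so the logic can be checked in isolation.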

Where Human Judgment Still Mattered

Throughout the process, there were still several things that required my judgment:

  • An OData null filter on Azure Table Storage that the AI wrote using syntax the service doesn’t support — caught from the stack trace when the sync failed in production.
  • A missing staticwebapp.config.json fallback that caused React Router routes to 404 after deploy.
  • A pipeline YAML that used the wrong input name for the Azure service connection.

None of these were hard to fix once caught. All of them would have been invisible without someone actually reading the output and testing the result. Going back to the general contractor analogy, inspecting the work was still my responsibility, and it was important to the overall success of the project. Now let’s look at everything required from me to make this process work.

Here’s what came out the other end: the full attendee flow and admin portal, built in half a day without my writing a line of code.

What Actually Made It Work

Below are several reasons I’d point to for how I was able to get a successful result, not just something that ran. These are things a less-experienced developer or a vibe-coder wouldn’t have known to do. They’re not tricks or prompt templates, just decisions about how to run the AI code-generation process that help guarantee a better result.

Using the Right Tool for Each Phase

There’s a temptation to do everything in one place. Claude.ai is familiar, the chat window is comfortable, and it feels like you’re making progress. But a chat window is the wrong place to write code, and Claude Code is the wrong place to spec out requirements — it doesn’t ask questions, it just starts building.

The CLAUDE.md file was the deliberate handoff between the two. Everything decided in the spec session — field names, eligibility rules, API shape, data model, environment variables — got written into that file before Claude Code opened the repo. When I started the first Claude Code session with “Read CLAUDE.md,” it had full context without me re-explaining a single thing. The two tools complemented each other. I treated them as distinct phases with a clean handoff, not as interchangeable chat windows.

Specificity in My Spec Generation

Early in the chat session, Claude proposed which ticket types should make an attendee ineligible for the raffle. The initial list flagged five types: Organizer, Speaker, Volunteer, Sponsor Booth Staff, and Conference Only (Sponsor). That looked reasonable at a glance, but it was wrong. Speakers and volunteers are eligible for the raffle, so I pushed back:

ME: These ticket types are actually also eligible:
* Speaker
* Volunteer
* Conference Only (Sponsor)

I caught it because I’ve run this conference for over a decade and I know that speakers and volunteers are community members, not staff. That context isn’t in any training data. The AI had no way to know it until I put it in the spec. That one correction cut the ineligible list from five types to two. If I’d let the assumption stand, the eligibility logic would have silently excluded the wrong people, and I wouldn’t have caught it until someone got wrongly rejected during the actual drawing, probably in front of a room full of attendees.
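The corrected rule is simple to state in code: an attendee is ineligible only if any of their tickets is one of the two remaining types. A sketch (the ticket-type strings come from the spec discussion; the function itself is mine, not the app’s actual C#):

```python
# Per the corrected spec: only these two types disqualify an attendee.
INELIGIBLE_TYPES = {"Organizer", "Sponsor Booth Staff"}

def is_eligible(ticket_types):
    """An attendee with multiple tickets is ineligible if ANY of them
    is an organizer or sponsor booth ticket."""
    return not any(t in INELIGIBLE_TYPES for t in ticket_types)

is_eligible(["Speaker", "VIP Party"])             # True: speakers stay eligible
is_eligible(["Workshop", "Sponsor Booth Staff"])  # False: one bad ticket disqualifies
```

With the original five-type list, the same function would have silently returned False for every speaker and volunteer, which is exactly the invisible failure mode described below.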

Every detail nailed down in the specifications is one fewer thing the AI has to guess. And when the AI guesses wrong on a business rule, the bug is almost always invisible until the worst possible moment.

Using Real Data

I was worried about the Eventbrite integration. I knew roughly what the API returned, but “roughly” isn’t good enough when you’re building identity verification around specific field names. Rather than speccing from memory, I had Claude make a live API call mid-conversation and review the raw response. Claude pulled something out of it I hadn’t noticed:

CLAUDE: Notice that the barcode actually contains the order ID as a prefix (13982677663 + 22563517985 + 001). For identity verification, order_id is likely what the attendee sees in their confirmation email, so that's the right field to use alongside email.

I also uploaded a ticket PDF and confirmation email so Claude could see exactly what an attendee would have in front of them at the event. The spec ended up referencing order_id by name — not “order number” or “ticket number” or whatever I might have called it from memory — because we looked at the real thing. That field name flowed unchanged into the data model, the API endpoint, and the attendee-facing copy.
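The barcode-prefix observation is also trivially checkable in code, which is exactly the kind of sanity check worth running against real data (the numbers below are the ones from the transcript; the helper is mine):

```python
def barcode_matches_order(barcode, order_id):
    """Per the API exploration above, a ticket's barcode begins with its order ID."""
    return barcode.startswith(order_id)

# Values from the sample response discussed in the transcript.
order_id = "13982677663"
barcode = "13982677663" + "22563517985" + "001"
barcode_matches_order(barcode, order_id)   # True
```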

Architecture as Guardrails

I’d watched enough AI-assisted vibe-coding sessions go sideways with bad tech stack selection or other questionable architectural decisions, so I locked those decisions in early, even this one, offered as an aside:

ME: FYI, the site needs to be hosted at [domain] and I'd like the Azure 
function built with C# and the most recent version of .NET that is 
supported in Azure Functions

Combined with the Azure blob storage and CDN preference I’d mentioned in the initial requirements, that locked the entire stack. Claude Code never had a choice to make about React vs. something else, or .NET vs. Node, or Terraform vs. manual provisioning. Every one of those decisions was already in the CLAUDE.md before implementation started. There was no “I’ll just try a quick approach here” that diverged from the plan.

Architecture is a decision that gets more expensive the later you make it. In AI-assisted development, where the pace of output is fast and the cost of course-correcting is high, that’s doubly true.

The Power of MCP Connections

The Azure DevOps integration hit a wall early on. The MCP server was pointed at my Trailhead work org instead of the beercitycode org where the project lived. I had to stop, reconfigure it, and reconnect. Annoying in the moment, but worth pushing through.

This matters more than it sounds. When I first started the Claude Code session, the repo was empty and Claude Code did exactly what you’d expect — it started generating a generic CLAUDE.md from scratch, based on nothing. I interrupted it. Rather than letting it invent the project structure from thin air, I pointed it at the ADO backlog we’d just built in the spec session. The context came from the source of truth, not from the AI’s best guess about what the project might need.

Once it was working, Claude confirmed:

CLAUDE: We're in — connected to beercitycode org, BCC Raffle Portal project, and it's using the Agile process template: Epic → Feature → User Story → Task.

From there, it created the epics, features, and user stories with full acceptance criteria without me leaving the chat window. The spec didn’t end as a document I’d have to manually copy somewhere. It landed as a live, numbered backlog in the same ADO project Claude Code would later pull from. When I told Claude Code “start with User Story #40,” there was an actual Story #40 with acceptance criteria waiting for it.

The broader point is that MCP connections turn AI tools from isolated chat windows into participants in your actual workflow. The friction of setting them up correctly is real — I hit it firsthand — but the alternative is transcribing context by hand between tools, which is both tedious and lossy. Get the connections right once and the handoffs take care of themselves.

Commit Discipline

This one keeps long implementation sessions from becoming a mess. The rhythm was simple: finish a story, verify it works, review the code it created, commit, then come back to the chat and report. After Terraform completed, I typed:

ME: That terraform stuff was completed successfully. What next?

Claude closed Story #40 in ADO and handed me the exact prompt to paste into Claude Code for Story #43 — pre-written, with the file paths, interface names, and API endpoints already filled in. I never had to decide what came next or figure out how to frame the next task. Each session started clean, targeted a single story, and ended with a commit. If something went sideways mid-story, /clear and start that story fresh. The commit history at the end of the day was a readable record of what got built and in what order.

The verification step earned its keep during the Eventbrite sync story. After deploying, the sync returned a generic error. I pasted the full stack trace into Claude Code, which surfaced the problem immediately: the query filter "OptInOverride ne null" is valid OData syntax in some contexts but not supported by Azure Table Storage. Claude Code had written it confidently and it looked completely reasonable — there was nothing in the code to suggest it was wrong. Two minutes to fix once caught. But completely invisible without actually running the sync against real infrastructure. Skipping verification and moving to the next story would have buried this under three more stories of code. By then the root cause would have been much harder to isolate.
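The post doesn't show the fix that was applied, but a common workaround for this Table Storage limitation is to drop the null comparison from the query and filter client-side, since Table Storage represents a "null" property by omitting it from the entity entirely. A minimal sketch in Python (the entity shapes and property names here are hypothetical):

```python
def is_override_set(entity: dict) -> bool:
    """Table Storage rejects 'OptInOverride ne null' in a query filter,
    so check for the property client-side instead. In Table Storage a
    null property is simply absent from the returned entity."""
    return entity.get("OptInOverride") is not None

# Hypothetical entities as returned by an unfiltered table query
entities = [
    {"RowKey": "a1", "OptInOverride": True},
    {"RowKey": "b2"},                        # property absent == "null"
    {"RowKey": "c3", "OptInOverride": False},
]
overridden = [e["RowKey"] for e in entities if is_override_set(e)]
assert overridden == ["a1", "c3"]
```

Nothing about the original filter string looks wrong on the page, which is why this class of bug only surfaces when you run the code against the real service.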

The pattern held for every story: write it, run it, read the output, commit. Not because I was being cautious — because that rhythm is what keeps an AI-assisted session from turning into a pile of untested assumptions stacked on top of each other.

When This Is Worth Doing

The one thing I can’t overstate across all of this: my domain expertise and software experience were the value I brought. Claude asked the questions, but my answers steered the process, because I had the domain knowledge and software skills to give good answers to both kinds of questions. That knowledge comes from running Beer City Code for over a decade AND from my long career of developing software professionally. I should also mention that this worked so well because the project was small and greenfield. Implementing the same feature in a large, complex existing system would have been more challenging and would have required more guidance and review from me to keep it on the right path.

In short, spec-driven AI development has an upfront cost: writing excellent specs. It works best when guided by an experienced hand, and when it is, it can significantly reduce the time required to produce something great. I think the general-contractor analogy holds: a good general contractor makes sure real time is spent on the blueprints before anyone picks up a hammer. And for developers now, that same ability to guide a project isn’t optional or a nice bonus; it’s the whole job.

The post From Idea to Production in 4 Hours with AI: A Case Study appeared first on Trailhead Technology Partners.

Read the whole story
alvinashcraft
4 hours ago
Pennsylvania, USA

Build Better AI Agents: 5 Developer Tips from the Agent Bake-Off

The Google Cloud AI Agent Bake-Off highlights a shift from simple prompt engineering to rigorous agentic engineering, emphasizing that production-ready AI requires a modular, multi-agent architecture. The post outlines five key developer tips, including decomposing complex tasks into specialized sub-agents and using deterministic code for execution to prevent probabilistic errors. Furthermore, it advises developers to prioritize multimodality and open-source protocols like MCP to ensure agents are scalable, integrated, and future-proof against rapidly evolving model capabilities.

How to Enable Self-Service Identity Management for B2B SaaS in Auth0

Learn how to use Auth0’s My Organization API and Embeddable UI Components to enable self-service SSO and delegated administration for your B2B SaaS customers.
