Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How to Give AI Coding Agents Better Direction

AI Tooling Design Philosophy
The Thesis

Ask a coding agent to "build a settings page" and you'll get a settings page. It will compile, it will run, and it will feel like every other generated settings page you've ever seen. The difference isn't a better prompt. It's direction.

Now tell the agent: "build a settings page for a field service app used on ruggedized Android tablets, high density, large touch targets, grouped by task frequency, with offline state indicators." You'll get something different. Something that feels like a product decision was made.

There's a misconception in AI-assisted development that the skill is in prompt engineering. Write more precise instructions, get better code. That's true up to a point. But it mistakes the instrument for the thing that actually matters: knowing what to ask for in the first place.

Comparison: generic AI-generated settings page vs. directed settings page
The Role

You're Not a Prompter. You're the Tech Lead.

When you sit down with a coding agent to build a cross-platform .NET app, your job isn't to describe features. It's to provide the architectural and experiential direction that the agent can't infer on its own.

An agent with no direction produces competent defaults. It will scaffold a page, pick a navigation pattern, wire up data binding, and the result will compile and run. But competent defaults don't ship products. Direction does. And direction has layers.

Layer 1: Empathy

Before architecture, before code, before the first prompt: who is using this and what do they need?

A field technician on a ruggedized Android tablet and a knowledge worker on a desktop monitor aren't the same user. One needs dense information with large touch targets and offline capability. The other needs keyboard shortcuts, multi-pane layouts, and high data throughput. An agent will happily build a desktop-optimized data grid that's unusable at 5 inches. That's not the agent failing. That's you not setting the constraint.

Start every agent session by writing down who the user is and what platform realities they live with. Hand that to the agent as context. The output shifts immediately.

Example

"The primary user is a warehouse supervisor checking inventory on a shared 8-inch Android tablet, often one-handed. They scan 40-60 items per shift. They need to see discrepancies at a glance, not drill into detail. Connectivity is spotty; the app needs to queue updates and sync when back online."

That's not a prompt. That's a design constraint. The agent now knows to use large list items, visual diff indicators, and an offline-first data pattern before you ask for any of it.
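To make that concrete, here is the kind of list item such a constraint might steer the agent toward. This is a sketch only; the view-model members (Items, HasPendingSync, Delta) and the InventoryItem type are hypothetical names, not from the post:

```xml
<!-- Sketch: the constraint above implies large touch targets, glanceable
     discrepancies, and visible offline state. All bindings are hypothetical. -->
<Grid RowDefinitions="Auto,*">
  <!-- Offline-first: queued updates surfaced as a persistent banner -->
  <InfoBar IsOpen="{x:Bind ViewModel.HasPendingSync, Mode=OneWay}"
           Severity="Warning"
           Message="Offline: updates queued; will sync when connected." />
  <ListView Grid.Row="1" ItemsSource="{x:Bind ViewModel.Items}">
    <ListView.ItemTemplate>
      <DataTemplate x:DataType="local:InventoryItem">
        <!-- MinHeight keeps one-handed touch targets comfortably large -->
        <Grid MinHeight="72" ColumnDefinitions="*,Auto" Padding="16,8">
          <TextBlock Text="{x:Bind Name}" Style="{StaticResource BodyLarge}" />
          <!-- Discrepancy visible at a glance, not behind a drill-down -->
          <TextBlock Grid.Column="1" Text="{x:Bind Delta}"
                     Foreground="{ThemeResource ErrorBrush}" />
        </Grid>
      </DataTemplate>
    </ListView.ItemTemplate>
  </ListView>
</Grid>
```

None of this was asked for explicitly; it falls out of the constraint.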

Layer 2: Specific Aesthetic and Architectural Direction

"Make it look good" produces generic Material Design. Every time. Structurally correct, emotionally empty.

The same applies to code. "Add navigation" gets you whatever pattern the model defaults to. But "use region-based navigation with a single ContentControl host because this is a linear task flow, not a dashboard" gets you something that fits your app's actual structure.

Vague direction produces valid output. Specific direction produces authored output. The gap between those two is the gap between a demo and a product.

Example

Instead of "create a dashboard page," try: "This is a daily ops dashboard. Use a two-column layout: left column is a live ListView of flagged items sorted by severity, right column is a detail pane that updates on selection. Use NavigationView with a flat top bar, not a sidebar; there are only four sections and the user switches between them constantly. Use BodyMedium for list items, TitleLarge for the detail header. The feel is utilitarian, not decorative."

Here's what the agent produces from the vague prompt:

Vague Prompt
<!-- "Create a dashboard page" -->
<Page>
  <NavigationView PaneDisplayMode="Left">
    <NavigationView.MenuItems>
      <NavigationViewItem Content="Dashboard" Icon="Home" />
      <NavigationViewItem Content="Analytics" Icon="View" />
      <NavigationViewItem Content="Users" Icon="People" />
      <NavigationViewItem Content="Settings" Icon="Setting" />
    </NavigationView.MenuItems>
    <ScrollViewer>
      <StackPanel Spacing="16" Padding="24">
        <TextBlock Text="Dashboard" Style="{StaticResource HeadlineMedium}" />
        <Grid ColumnDefinitions="*,*,*,*">
          <!-- Four vanity-metric cards -->
          <Border><!-- "Total Users: 12,453" --></Border>
          <Border><!-- "Revenue: $84,230" --></Border>
          <Border><!-- "Orders: 1,893" --></Border>
          <Border><!-- "Growth: +12.5%" --></Border>
        </Grid>
        <!-- Hardcoded activity list -->
        <ListView SelectionMode="None">
          <ListViewItem Content="New user registered" />
          <ListViewItem Content="Order #1234 completed" />
        </ListView>
      </StackPanel>
    </ScrollViewer>
  </NavigationView>
</Page>

Sidebar nav, four vanity-metric cards, a hardcoded activity list. It compiles. It runs. You've seen it a thousand times.

Now the same agent with the specific prompt:

Directed Prompt
<!-- Daily ops dashboard with architectural direction -->
<Page>
  <NavigationView PaneDisplayMode="Top"
                  IsSettingsVisible="False">
    <NavigationView.MenuItems>
      <NavigationViewItem Content="Operations" IsSelected="True" />
      <NavigationViewItem Content="Alerts" />
      <NavigationViewItem Content="Inventory" />
      <NavigationViewItem Content="Reports" />
    </NavigationView.MenuItems>
    <Grid ColumnDefinitions="360,*">
      <!-- Left: Flagged items, sorted by severity -->
      <Grid RowDefinitions="Auto,*">
        <StackPanel Orientation="Horizontal" Padding="16,12">
          <TextBlock Text="Flagged Items" Style="{StaticResource TitleSmall}" />
          <Border Background="{ThemeResource ErrorBrush}" CornerRadius="10">
            <TextBlock Text="{x:Bind ViewModel.FlaggedItems.Count}" />
          </Border>
        </StackPanel>
        <ListView Grid.Row="1"
                  ItemsSource="{x:Bind ViewModel.FlaggedItems}"
                  SelectedItem="{x:Bind ViewModel.SelectedItem, Mode=TwoWay}">
          <!-- Severity-encoded list items with
               color bar, title, source, and time -->
        </ListView>
      </Grid>
      <!-- Right: Detail pane, updates on selection -->
      <Grid Grid.Column="1"
            Visibility="{x:Bind ViewModel.HasSelection}">
        <TextBlock Text="{x:Bind ViewModel.SelectedItem.Title}"
                   Style="{StaticResource TitleLarge}" />
        <!-- Severity, Source, Timestamp metadata -->
        <!-- Detail content with scroll -->
      </Grid>
      <!-- Empty state -->
      <Grid Grid.Column="1"
            Visibility="{x:Bind ViewModel.HasNoSelection}">
        <TextBlock Text="Select an item to view details" />
      </Grid>
    </Grid>
  </NavigationView>
</Page>

Top nav because there are only four sections and the user switches constantly. A master-detail split with severity-encoded list items and selection binding. An empty state. Typography chosen for function, not decoration.

The vague prompt produced a screenshot. The specific prompt produced a working architecture.

Vague vs. Direct
Layer 3: Structural Metaphors

This is the one most developer-focused AI guidance misses: physical metaphors give layout a logic that cascades into every decision the agent makes.

A dashboard built around the metaphor of a control room (status panels, alert indicators, a central focus area) produces a fundamentally different layout than "show me the data." In Uno Platform terms, an AutoLayout with card grouping and ShadowContainer elevation says "these are distinct items with weight." A plain ItemsRepeater with no visual hierarchy says "here's a list."

These aren't interchangeable. The structure is the direction. Name the metaphor and the agent's layout decisions start to cohere.
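In markup, the gap between those two directions might look like this. A sketch only, assuming the Uno Toolkit namespace (xmlns:utu="using:Uno.Toolkit.UI") and hypothetical bindings:

```xml
<!-- "Distinct items with weight": cards in an AutoLayout,
     elevated with ShadowContainer (Uno Toolkit; bindings hypothetical) -->
<utu:AutoLayout Spacing="12" Padding="16">
  <utu:ShadowContainer>
    <Border Background="{ThemeResource SurfaceBrush}"
            CornerRadius="8" Padding="16">
      <TextBlock Text="{x:Bind ViewModel.StatusPanelTitle}" />
    </Border>
  </utu:ShadowContainer>
</utu:AutoLayout>

<!-- "Here's a list": the same data with no hierarchy at all -->
<ItemsRepeater ItemsSource="{x:Bind ViewModel.Items}" />
```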

Example

For a project management app, telling the agent "think of each project as a folder on a desk, and tasks as sticky notes inside it" produces a completely different layout than "show projects and tasks." The first gives you expandable card containers with compact inline items. The second gives you two flat lists with a master-detail pattern. Both are valid. Only one matches how your users actually think about their work.

Layer 4: Real Content

Generated placeholder data makes every app feel like a template. The moment you give the agent real copy, real data shapes, and real edge cases, the output shifts from demo to product.

For cross-platform apps, this matters more than most developers expect. Text that fits in English overflows in German. Dates that render cleanly in one locale break layout assumptions in another. A list that looks fine with 5 items collapses with 50. Feed the agent real content early and these problems surface before they become bugs.
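The width problem is easy to quantify. A quick sketch; the German translations here are my own samples, not strings from the post:

```python
# German UI strings commonly run 20-40% longer than their English source.
# Sample label pairs (illustrative translations, not content from the post):
pairs = [
    ("Settings", "Einstellungen"),
    ("Sync now", "Jetzt synchronisieren"),
    ("Offline changes pending", "Ausstehende Offline-Änderungen"),
]

for en, de in pairs:
    growth = (len(de) - len(en)) / len(en)
    print(f"{en!r}: {len(en)} -> {len(de)} chars ({growth:+.0%})")
```

A layout that only ever saw the English column never learns it needs to wrap or truncate.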

Example

Instead of letting the agent generate "Task 1, Task 2, Task 3," give it: "Here are real task names from production: 'Replace hydraulic filter assembly - Unit 7B', 'Quarterly HVAC inspection (overdue)', 'URGENT: Compressor fault - Building C roof unit.' These are the actual string lengths. Some have status prefixes. Some are two lines on mobile."

Now the agent knows to handle text truncation, status badges, and multi-line list items from the start.
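A list item template shaped by those real strings might look like this. A sketch, with hypothetical type and property names (TaskItem, IsUrgent):

```xml
<!-- Sketch: list item ready for real production strings (names hypothetical) -->
<DataTemplate x:DataType="local:TaskItem">
  <Grid ColumnDefinitions="Auto,*" Padding="12,8">
    <!-- Status prefix surfaced as a badge instead of parsed out of the string -->
    <Border Background="{ThemeResource ErrorBrush}" CornerRadius="4"
            Padding="6,2" Visibility="{x:Bind IsUrgent}">
      <TextBlock Text="URGENT" />
    </Border>
    <!-- Real task names run long: wrap to two lines, then trim -->
    <TextBlock Grid.Column="1"
               Text="{x:Bind Title}"
               TextWrapping="Wrap"
               MaxLines="2"
               TextTrimming="CharacterEllipsis" />
  </Grid>
</DataTemplate>
```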

The Checklist

Before your next agent session, write four things down:

  1. Who is using this and on what device?
  2. What should the interface feel like, specifically? (Dense, calm, playful, utilitarian; not "good.")
  3. What metaphor structures the content? (Control room, card stack, timeline, form wizard.)
  4. What real content can you feed the agent right now?

Hand those four answers to the agent before the first prompt, not buried in a system message you'll forget about, but as the opening context of the conversation.
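One way that opening context might read, with the details invented purely for illustration:

```
Context for this session:
1. User & device: warehouse supervisor, shared 8-inch Android tablet,
   often one-handed, spotty connectivity.
2. Feel: dense and utilitarian; glanceable, not decorative.
3. Metaphor: a clipboard of flagged discrepancies, not a dashboard of charts.
4. Real content: [paste 10-20 real item names, including the longest ones]
```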

AI agents don't replace your taste. They amplify whatever direction you give them. Give them nothing and you get competent defaults. Give them clarity, structure, and a point of view, and you get something worth shipping.

The post How to Give AI Coding Agents Better Direction appeared first on Uno Platform.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

TX Text Control vs iText: Understanding Template-Based Document Generation in .NET

This article explores how both approaches work, what the common workflows look like in real projects, and why template-based document generation often leads developers toward TX Text Control. We will compare the two approaches, discuss their advantages and disadvantages, and provide insights into how they can be used effectively in .NET applications.


Do other people not like colors? or: adventures with ANSI codes and grep


It only ever seems to happen to me.


Containerize an ASP.NET Core BFF and Angular frontend using Aspire

Using Damien Bowden's secure ASP.NET Core and Angular BFF template as a starting point, this post shows how to integrate Aspire to improve local development and prepare the application for containerized deployment.

The Building Block Economy


Your Migration’s Source of Truth: The Modernization Assessment


I’ve been exploring GitHub Copilot’s modernization capabilities for .NET and Java applications, and I wanted to share what I’ve learned about the most critical piece of the puzzle: the assessment document.

Here’s what makes this tool interesting – it’s not just a code suggestions engine. It’s an agentic, end-to-end solution that analyzes your entire codebase, writes an assessment document, builds a migration plan, and can provision the Azure infrastructure to run it. The experience follows an Assess → Plan → Execute model, and I’ve found that assessment is the foundation everything else builds on.

The three-phase modernization workflow: Assessment drives planning, which drives execution.

At each step, a document is generated with Copilot’s findings and recommendations. You can interject at this point and provide feedback, enhancements, and corrections to assist GitHub Copilot for the next phases of your migration. Copilot gives you the opportunity to steer it so that your migration completes with the results you’re seeking.

The tooling ships through a VS Code extension (generally available for both .NET and Java) that puts the full modernization workflow directly in your editor: running assessments, migrating dependencies, containerizing apps, and deploying to Azure. There’s also a Modernization CLI in public preview for terminal-based and multi-repo batch scenarios, but for this post I’m working on one project in Visual Studio Code.

The assessment document is the most important artifact the tool produces, and I’m blown away by what insight it gives me. It reports what gets upgraded, what Azure resources get provisioned, and how your application gets deployed. Everything downstream – infrastructure-as-code, containerization, deployment manifests – takes its cues from what the assessment finds. I’ve found that understanding how to configure, read, and act on this document is the single highest-leverage skill in the modernization workflow.


Two Paths In – Recommended Assessment & Upgrade Paths

The VS Code extension gives you two ways to kick off an assessment. They lead to the same interactive dashboard, but they differ in how much configuration you want to do upfront.

Path 1: Recommended Assessment (Fast Start)

This is the “show me what I’m dealing with” path. No manual configuration required.

The Quickstart section with the Start Assessment option in VS Code

  1. Open the GitHub Copilot modernization pane in VS Code.
  2. Select Start Assessment (or Open Assessment Dashboard) from the Quickstart section.
  3. Choose Recommended Assessment.
  4. Pick one or more domains from the curated list – Java/.NET Upgrade, Cloud Readiness, Security – and click OK.

That’s it. The assessment runs against your codebase and results appear in the interactive dashboard. Each domain represents a common migration scenario with preconfigured settings, so you get meaningful results without touching a single configuration knob.

This felt like the right starting point when I just wanted a quick read on where an app stands before committing to a migration strategy.

Path 2: Custom Assessment (Targeted)

When you already know your target – say, AKS on Linux with containerization – the custom assessment lets you configure exactly what to analyze.

Select Custom Assessment from the assessment pane, then dial in:

Custom assessment lets you target specific compute options and analysis domains

  • Assessment Domains – Java/.NET Upgrade, Cloud Readiness, Security; pick one or combine all three
  • Analysis Coverage – Issue only (just the problems); Issues & Technologies (problems plus tech inventory); or Issues, Technologies & Dependencies (the full picture)
  • Target Compute – Azure App Service, Azure Kubernetes Service (AKS), Azure Container Apps (ACA); pick multiple to compare side-by-side
  • Target OS – Linux or Windows
  • Containerization – Enable or disable containerization analysis

When you select multiple Azure service targets, the dashboard lets you switch between them to compare migration approaches and view service-specific recommendations – that’s pretty slick when the hosting decision hasn’t been finalized yet.

I use this path when I know I’m going to AKS on Linux and I want to know exactly what’s blocking that.

Upgrade Paths the Assessment Covers

The assessment isn’t just about cloud readiness. It also evaluates framework and runtime upgrade paths with specific issue detection rules and remediation guidance:

Each upgrade path has its own set of detection rules – the tool knows, for instance, which APIs were removed between JDK 17 and 21, or which ASP.NET patterns have no direct equivalent in ASP.NET Core.

CLI note: If you need to assess dozens of applications across multiple repos, the Modernize CLI supports a modernize assess --multi-repo mode that reads a repos.json manifest, clones all listed repositories, and generates both per-app reports and an aggregated cross-portfolio report. For single-app work, though, VS Code is where I stay.
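The post doesn't show what the manifest looks like, but conceptually it is just a list of repositories for the CLI to clone and assess. Something along these lines, where the field names are my guess at the shape, not the documented schema:

```json
{
  "repos": [
    { "url": "https://github.com/contoso/orders-service" },
    { "url": "https://github.com/contoso/billing-service" }
  ]
}
```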


The Assessment Document

OK, let's dig in. This is the artifact that matters most.

Every planning decision, every IaC file, every deployment manifest traces back to what the assessment found. I’m going to walk you through it in detail.

Where It Lives

Assessment reports are stored in your project directory under:

.github/modernize/assessment/

Each assessment run produces an independent report – you build up a history, not overwrite previous results. This means you can track how your migration posture evolves over time, or compare results after making code changes. I really like this approach.
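After a couple of runs, the project might contain something like the following. The report names are illustrative; the post only specifies the directory:

```
.github/modernize/assessment/
├── report-run-1/   (one independent report per run; names hypothetical)
└── report-run-2/
```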

Report Structure

The assessment report is organized into a header section and four analytical tabs. Let’s take a look at each one.

Top of the assessment report

Application Information – The Snapshot

The top of every report captures your application’s current state:

  • Runtime version detected – Java version or .NET version currently in use
  • Frameworks – Spring Boot, ASP.NET MVC, WCF, etc.
  • Build tools – Maven, Gradle, MSBuild, etc.
  • Project structure – module layout, solution structure
  • Target Azure service – the compute target(s) you selected during configuration

This section is the baseline. It tells the tool (and you) exactly what it’s working with.

Issue Summary – The Dashboard

The issue summary gives you a bird’s-eye view of migration readiness. It categorizes issues by domain and shows criticality percentages – essentially a snapshot of how much work lies ahead.

The Issue Summary shows at-a-glance migration readiness with criticality percentages

If you configured multiple Azure service targets, you can switch between them here to compare. An app that has 3 mandatory issues for App Service might have 7 for AKS, or vice versa. This comparison is often what drives the hosting decision.

Issues Tab – The Actionable Detail

This is where the assessment becomes a to-do list. Issues are categorized by domain:

  • Cloud Readiness – Azure service dependencies, migration blockers, platform-specific incompatibilities
  • Java/.NET Upgrade – JDK or framework version issues, deprecated APIs, removed features
  • Security – CVE findings, ISO 5055 compliance violations

Each issue carries a criticality level:

  • 🔴 Mandatory – Must fix or the migration fails. These are hard blockers.
  • 🟡 Potential – Might impact migration. Needs human judgment; could be a problem depending on your specific deployment scenario.
  • 🟢 Optional – Low-impact, recommended improvements. Won't block migration but worth addressing.

Expanding any issue reveals:

  • Affected files and line numbers – clickable links that navigate directly to the relevant source code
  • Detailed description – what the problem is, why it matters for your target platform
  • Known solutions – concrete remediation steps, not just “fix this”
  • Supporting documentation links – references to official migration guides, API docs, or security advisories

The combination of file-level precision and actionable remediation guidance is what makes this tab the operational core of the report. You can hand individual issues to developers and they have everything they need to act.

Each issue includes clickable file links, detailed descriptions, and concrete remediation guidance

Now we’re talking!

Report Operations – Share, Import, Compare

Assessment reports aren’t locked to a single developer’s machine. The tooling supports full collaboration workflows:

Export and share. Export any report from the dashboard and share it with teammates. Recipients can import the report without re-running the assessment – they see the same dashboard, the same issues, the same detail. This is particularly useful for architecture reviews where the people making decisions aren’t the ones running the tools.

Import from multiple sources. You can import reports from:

  • AppCAT CLI – import report.json files from Microsoft’s Application and Code Assessment Tool
  • Dr. Migrate – import app context files
  • Previously exported reports – round-trip sharing between team members

To import, select Import in the assessment reports page, or use Ctrl+Shift+P and search for Import Assessment Report.

Compare assessments. Run multiple assessments with different target configurations and compare the results side-by-side. This is the workflow I recommend when you’re evaluating hosting options: run one assessment targeting AKS, another targeting Container Apps, and compare the issue counts and mandatory blockers.

Track history. Because each assessment run generates an independent report, your report list becomes a timeline of your modernization progress. After fixing a batch of mandatory issues, re-run the assessment and see the numbers drop.

The Key Insight

The assessment document isn’t just a report you read once and file away. It’s the input that drives everything downstream:

  • Infrastructure planning reads the assessment to understand what Azure resources your app needs
  • IaC generation uses the target compute and dependency information to produce Bicep or Terraform
  • Containerization decisions depend on the containerization analysis findings
  • Deployment targets are selected based on the cloud readiness analysis for each compute option

When the modernization agent creates a plan in the next phase, it consumes the assessment. Get the assessment right, and the rest of the pipeline follows. Skip it or misconfigure it, and you’ll be course-correcting downstream.


From Assessment to Azure – The Deployment Bridge

The assessment tells you where you stand. The planning and execution phases get you to Azure. Here’s how they connect – all from inside VS Code.

Phase 1: Infrastructure Preparation

From the Copilot Chat Pane, use the modernize-azure-dotnet agent and ask it to help you create a plan to migrate to Azure. The agent can accept several types of input:

  • Application source code – codebase analysis to determine stack, dependencies, and resource requirements
  • Assessment reports – the reports from the previous phase (this is the primary bridge)
  • Architecture diagrams – pre-migration design documents in your repository
  • Compliance and security requirements – organizational policies provided as docs or natural language

You can combine these in your prompt to get a tailored infrastructure plan. For example, you might prompt the agent with:

“Create Azure infrastructure based on the assessment report, following our compliance policies in docs/security-requirements.md”

Or if you’re aiming for a full Azure Landing Zone:

“Create an Azure landing zone tailored to my application’s architecture and requirements”

The agent generates two files:

  • Plan file – .github/modernize/{plan-name}/plan.md – infrastructure strategy, proposed architecture, resource list
  • Task list – .github/modernize/{plan-name}/tasks.json – specific tasks the agent will perform during execution

The infrastructure plan covers the full Azure Landing Zone design: networking, identity, governance, and security foundations. IaC output is generated as Bicep or Terraform, depending on your prompt and preferences.

Review before execute. Both files are editable right in VS Code. I adjust resource configurations, modify the approach, add constraints – then execute the plan from the modernization pane. After execution, I review the Git diff to see exactly what was generated.

CLI note: The same planning workflow is available via modernize plan create and modernize plan execute if you prefer the terminal. The plans and outputs are identical regardless of which surface you use.

Phase 2: Containerization and Deployment

A second plan handles containerization and deployment. From the modernization pane, create a new plan with a prompt like:

“Containerize and deploy my app to Azure, subscription: <sub-id>, resource group: <rg-name>”

This phase covers:

  • Dockerfile generation – tailored to your application’s stack and dependencies
  • Container image validation – ensuring the image builds correctly
  • Deployment manifests – configuration files specific to your target Azure service (AKS, Container Apps, or App Service)
  • Reusable deployment scripts – generated for future use, so deployments are repeatable

You can also split these concerns – prompt for containerization only (“containerize my app and create a Dockerfile”) or deploy an already-containerized application (“deploy my app to the AKS cluster in subscription: <sub-id>, resource group: <rg-name>”).
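For orientation, a generated Dockerfile for a typical ASP.NET Core app usually follows the standard multi-stage pattern. This is a generic sketch with a hypothetical project name (MyApp), not the tool's literal output:

```dockerfile
# Build stage: restore and publish (standard .NET 8 multi-stage pattern)
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp/MyApp.csproj -c Release -o /app/publish

# Runtime stage: smaller ASP.NET base image, no SDK
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```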

Independence and Human Control

Two design principles I really like:

Phases are independent. Skip infrastructure preparation if you already have an environment provisioned. Prepare infrastructure now and deploy later. Containerize without deploying. Each phase is a separate plan that can be created and executed on its own timeline.

Human-in-the-loop throughout. Every plan is editable before execution. Every change is Git-tracked – you can review diffs, revert commits, and audit the full history of what the agent did. The modernization agent proposes; you approve.


Summary

I’ve shown you how the assessment document is the heart of GitHub Copilot’s modernization workflow – and how the entire journey from assessment to Azure deployment lives inside VS Code. The assessment determines what gets upgraded, what Azure resources get provisioned, and how your application gets deployed – and there’s a lot more we can do with this…

The key is understanding that everything downstream depends on getting this assessment right: your infrastructure plan, your IaC files, your containerization approach, and your deployment strategy all flow from what the assessment finds. And when the Modernize CLI or multi-repo batch scenarios come into play, the assessment format and planning workflow are the same – so the skills transfer directly.

Get Started

Ready to assess your own application? Install the GitHub Copilot Modernization extension for VS Code and run your first assessment today. You can learn more about the full modernization workflow in the GitHub Copilot application modernization documentation.

Have you tried GitHub Copilot’s modernization tools for migrating .NET or Java applications to Azure? I’d love to hear what you think in the comments below.

The post Your Migration’s Source of Truth: The Modernization Assessment appeared first on .NET Blog.
