Read more of this story at Slashdot.
In the last year, many of us have started writing code differently.
We describe what we want, let AI generate an answer, review it, tweak the prompt, and try again. This loop—prompt, retry, adjust—has quietly become part of our daily workflow.
At first, it feels incredibly productive. But as the complexity of the task increases, something changes. The iteration cycle becomes longer, outputs become inconsistent, and the effort shifts from solving the problem to refining the prompt.
This is where a subtle but important shift in approach can help: moving from prompt-driven development to spec-driven development.
Most AI-assisted workflows today look something like this: describe the task, review the generated code, adjust the prompt, and regenerate until the output looks right. In practice, this often simplifies to:
Prompt → Retry → Guess
Figure: Prompt-driven vs spec-driven workflow comparison
For simple tasks, this works well. But for anything involving multiple inputs, constraints, or edge cases, the process can become unpredictable.
In my experience, the challenge is not the model—it is the lack of structure in how we describe the problem.
Instead of asking AI to “figure it out,” spec-driven development introduces a simple idea:
Define the problem clearly before asking for a solution.
A specification (spec) is not a long document. It is a structured way of describing the inputs, the expected outputs, and the constraints of the problem.
When this structure is provided upfront, the interaction changes significantly.
Rather than iterating on vague prompts, you are guiding the system with a clear contract.
Let’s take a simple example: an order summary API (for example, a backend service hosted on Azure App Service).
“Write an API that returns order details for a user.”
A model can generate something reasonable, but in practice, the responses often vary:
Example response (typical output):
{ "userId": 123, "orders": [ { "id": 1, "amount": 250 } ] }

Now consider providing a simple specification:
Specification:
- Input:
  - userId
  - page
  - pageSize
- Output:
  - userId
  - orders[]
    - orderId
    - totalAmount
    - orderDate
  - pagination
    - page
    - pageSize
    - totalRecords
- Constraints:
  - Default pageSize = 10
  - Return empty list if no orders
  - Handle large datasets efficiently
Example response (based on the spec):
{ "userId": 123, "orders": [ { "orderId": 1, "totalAmount": 250, "orderDate": "2024-01-10" } ], "pagination": { "page": 1, "pageSize": 10, "totalRecords": 50 } }
The difference here is not just stylistic—it is structural.
An unstructured prompt leaves room for interpretation. A spec reduces ambiguity by defining expectations explicitly.
In practice, I have observed that providing structured inputs like this often leads to the following:
Rather than relying on trial-and-error, the interaction becomes more predictable and aligned with expectations.
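To make the contract concrete, here is a minimal Python sketch of the order summary spec. The in-memory `ORDERS` data and the function name are illustrative assumptions; the point being demonstrated is the spec'd response shape, the default `pageSize` of 10, and the empty-list behavior for users with no orders.

```python
# Illustrative in-memory implementation of the order summary spec.
# ORDERS is sample data; a real service would query a database.
ORDERS = {
    123: [
        {"orderId": 1, "totalAmount": 250, "orderDate": "2024-01-10"},
        {"orderId": 2, "totalAmount": 120, "orderDate": "2024-02-03"},
    ]
}

def get_order_summary(user_id, page=1, page_size=10):
    """Return the spec'd response shape: orders plus pagination metadata."""
    all_orders = ORDERS.get(user_id, [])   # spec: empty list if no orders
    start = (page - 1) * page_size         # slice per page; spec: default pageSize = 10
    return {
        "userId": user_id,
        "orders": all_orders[start:start + page_size],
        "pagination": {
            "page": page,
            "pageSize": page_size,
            "totalRecords": len(all_orders),
        },
    }
```

Calling `get_order_summary(123)` returns both sample orders with pagination metadata, while an unknown user gets an empty `orders` list rather than an error, exactly as the constraints require.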
This approach becomes even more useful when applied to existing code.
Instead of asking:
“Fix the bug in the Auth controller”
You can define expected behavior:
The task then becomes aligning the implementation with the defined spec.
This shifts the interaction from guesswork to validation—comparing current behavior with intended behavior.
Without Spec (Typical Prompt)
“Fix the login issue in Auth controller”
With no definition of correct behavior, the model has to guess what "fixed" means, and each attempt may patch something different.
With Spec (Defined Behavior)
When the spec defines the expected login behavior (valid inputs, failure responses, edge cases), the fix becomes verifiable: the resulting change either matches the stated behavior or it does not.
This mirrors the same pattern seen in the API example—moving from ambiguity to clearly defined behavior.
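A spec for the login flow can even be expressed directly as executable checks. This is only an illustrative sketch: the `login` function and the behaviors it encodes (401 on bad credentials, lockout after repeated failures) are hypothetical stand-ins, not taken from any real Auth controller.

```python
# Hypothetical spec for a login endpoint, written as checks.
# `login` is a toy stand-in; in practice you would run the same
# assertions against the real controller's behavior.

FAILED_ATTEMPT_LIMIT = 3  # assumed constraint from the hypothetical spec

_failed_attempts = {}

def login(username, password):
    """Toy implementation used only to demonstrate spec-as-checks."""
    if _failed_attempts.get(username, 0) >= FAILED_ATTEMPT_LIMIT:
        return {"status": 423, "error": "account_locked"}
    if password != "correct-password":
        _failed_attempts[username] = _failed_attempts.get(username, 0) + 1
        return {"status": 401, "error": "invalid_credentials"}
    _failed_attempts.pop(username, None)
    return {"status": 200}

# Spec-derived checks: compare current behavior with intended behavior.
assert login("alice", "correct-password")["status"] == 200
assert login("bob", "wrong")["status"] == 401
for _ in range(FAILED_ATTEMPT_LIMIT - 1):
    login("bob", "wrong")
assert login("bob", "wrong")["status"] == 423  # locked after repeated failures
```

Once behavior is written down this way, "fix the bug" becomes "make these checks pass", which is exactly the shift from guesswork to validation.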
You do not need new tools or frameworks to try this.
A simple workflow that has worked well in practice: write a short spec (inputs, outputs, constraints) before prompting, generate against it, and validate the result against the spec before iterating.
This adds a small upfront step, but it often reduces back-and-forth iterations later.
One important point to note:
Writing a good spec requires understanding the problem.
Spec-driven development does not eliminate complexity—it surfaces it earlier.
In many cases, the hardest part is not writing code, but clearly defining the inputs, outputs, constraints, and edge cases.
This is also why specs evolve over time. They do not need to be perfect upfront. They improve as your understanding improves.
From what I have seen, this approach is most useful in scenarios where the problem involves multiple inputs, defined contracts, or structured outputs such as APIs, schema-driven systems, or refactoring existing code where consistency matters.
For simpler tasks such as small scripts, minor UI changes, or quick experiments, a detailed specification may not add much value. In those cases, a straightforward prompt is often sufficient.
Tools like GitHub Copilot, Azure AI Studio, and AI-assisted workflows in Visual Studio Code tend to be more effective when given clear, structured inputs.
Spec-driven development is not tied to any specific tool. It is a way of thinking about how we interact with these systems more effectively.
Many discussions around AI-assisted development focus on what tools can do.
This approach focuses on something slightly different:
How developers can structure problems more effectively before implementation.
In my experience, moving from prompts to specs does not eliminate iteration, but it makes that iteration more predictable and purposeful.
Infrastructure consistency is critical in large-scale Azure environments, especially in migration programs and DevOps-driven deployments. While Infrastructure as Code (IaC) using Terraform improves reproducibility, it does not fully eliminate configuration drift, manual deviations from approved designs, or mismatches between design documents, code, and deployed resources.
To address this, we propose building an AI-powered Infrastructure Validation Agent that continuously validates and reconciles three sources of truth: the approved design (Excel sheets), the Terraform code, and the infrastructure actually deployed in Azure.
This blog explains the architecture, implementation, validation logic, and real-world applicability of such an agent.
In enterprise environments, infrastructure data flows through multiple stages:
| Source | Purpose |
|---|---|
| Excel / Design Sheets | Approved architecture specifications |
| Terraform | Infrastructure as Code implementation |
| Azure Portal | Actual deployed infrastructure |
The absence of unified validation leads to compliance risks, deployment errors, and operational inefficiencies.
The proposed solution is an AI-powered validation agent that reads the approved design, parses the Terraform code, queries the deployed Azure resources, and reconciles all three. It is composed of the following modules:
| Module | Description |
|---|---|
| Excel Reader | Reads and standardizes input |
| Terraform Parser | Extracts resource configuration |
| Azure Fetcher | Queries deployed resources |
| Comparator Engine | Identifies mismatches |
| AI Validator | Enhances validation and recommendations |
| Report Generator | Produces actionable outputs |
```python
import pandas as pd

def read_excel(file_path):
    """Read the approved design sheet and strip stray whitespace from headers."""
    df = pd.read_excel(file_path)
    df.columns = df.columns.str.strip()
    return df

excel_df = read_excel("infra_config.xlsx")
print(excel_df.head())
```
```python
import hcl2

def parse_terraform(file_path):
    """Extract resource blocks from a Terraform file using python-hcl2."""
    with open(file_path, 'r') as file:
        data = hcl2.load(file)

    resources = []
    for resource_block in data.get('resource', []):
        for rtype, instances in resource_block.items():
            for name, config in instances.items():
                resources.append({
                    "resource_type": rtype,
                    "resource_name": name,
                    "config": config,
                })
    return resources

tf_resources = parse_terraform("main.tf")
print(tf_resources)
```
```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = "your-subscription-id"
resource_client = ResourceManagementClient(credential, subscription_id)

def fetch_azure_resources():
    """List all resources deployed in the subscription."""
    resources = []
    for resource in resource_client.resources.list():
        resources.append({
            "name": resource.name,
            "type": resource.type,
            "location": resource.location,
            "id": resource.id,
        })
    return resources

azure_resources = fetch_azure_resources()
print(azure_resources)
```
```python
def normalize_excel(df):
    """Convert the design sheet into a list of dicts."""
    return df.to_dict(orient='records')

def normalize_tf(tf_resources):
    """Keep only the Terraform fields needed for comparison."""
    return [
        {
            "resource_name": res["resource_name"],
            "resource_type": res["resource_type"],
            "config": res["config"],
        }
        for res in tf_resources
    ]

def normalize_azure(azure_resources):
    """Map Azure's field names onto the common comparison schema."""
    return [
        {
            "resource_name": res["name"],
            "resource_type": res["type"],
            "location": res["location"],
        }
        for res in azure_resources
    ]
```
```python
def compare_resources(excel_data, tf_data, azure_data):
    """Compare the design sheet against Terraform and Azure, collecting drift issues."""
    issues = []
    for excel_res in excel_data:
        name = excel_res['resource_name']
        tf_match = next((r for r in tf_data if r['resource_name'] == name), None)
        az_match = next((r for r in azure_data if r['resource_name'] == name), None)

        if not tf_match:
            issues.append({
                "resource": name,
                "issue": "Missing in Terraform",
                "severity": "High",
            })
        if not az_match:
            issues.append({
                "resource": name,
                "issue": "Missing in Azure",
                "severity": "Critical",
            })
        if tf_match and az_match:
            if excel_res['region'] != az_match.get('location'):
                issues.append({
                    "resource": name,
                    "issue": "Region mismatch",
                    "expected": excel_res['region'],
                    "actual": az_match.get('location'),
                })
    return issues

drift_report = compare_resources(
    normalize_excel(excel_df),
    normalize_tf(tf_resources),
    normalize_azure(azure_resources),
)
print(drift_report)
```
Sample validation output:
| Resource | Issue | Expected | Actual | Severity |
|---|---|---|---|---|
| func-app-01 | Missing in Terraform | - | - | High |
| search-01 | SKU mismatch | Standard | Basic | Medium |
| webapp-01 | Region mismatch | East US | West Europe | High |
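The Report Generator module from the architecture table can start out very small. The sketch below is one possible shape (the function name and "-" placeholder convention are assumptions, not a fixed interface): it renders the comparator's issue list as a markdown table in the same layout as the sample report.

```python
def generate_report(issues):
    """Render drift issues as a markdown table.

    Fields absent from an issue (e.g. no expected/actual values for a
    resource that is simply missing) are shown as "-".
    """
    header = "| Resource | Issue | Expected | Actual | Severity |\n|---|---|---|---|---|"
    rows = [
        "| {resource} | {issue} | {expected} | {actual} | {severity} |".format(
            resource=i.get("resource", "-"),
            issue=i.get("issue", "-"),
            expected=i.get("expected", "-"),
            actual=i.get("actual", "-"),
            severity=i.get("severity", "-"),
        )
        for i in issues
    ]
    return "\n".join([header] + rows)

report = generate_report([
    {"resource": "func-app-01", "issue": "Missing in Terraform", "severity": "High"},
    {"resource": "webapp-01", "issue": "Region mismatch",
     "expected": "East US", "actual": "West Europe"},
])
print(report)
```

Feeding `drift_report` from the comparator into this function yields a table ready to paste into a wiki page or pipeline summary.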
Throughout this guide, you'll create a Standard logic app workspace and project, build your workflow, and deploy it as a Standard logic app resource in Azure. This enables your workflow to run in a single-tenant Azure Logic Apps environment or within an App Service Environment v3 (restricted to Windows-based App Service plans).
Key advantages of Standard logic apps include:
You can locally develop, debug, run, and test workflows within the Visual Studio Code environment. Although both the Azure portal and Visual Studio Code support building, running, and deploying Standard logic app resources and workflows, Visual Studio Code allows you to perform all these actions locally, offering greater flexibility during development.
Prerequisites
Starting with version 2.81.5, the Azure Logic Apps (Standard) extension for Visual Studio Code includes a dependency installer that automatically installs all the required dependencies in a new binary folder and leaves any existing dependencies unchanged.
For more information, see Get started more easily with the Azure Logic Apps (Standard) extension for Visual Studio Code.
This extension includes the following dependencies:
| Dependency | Description |
|---|---|
| C# for Visual Studio Code extension | Enables F5 functionality to run your workflow. |
| Azurite | Provides a local data store and emulator to use with Visual Studio Code so that you can work on your logic app project and run your workflows in your local development environment. If you don't want Azurite to start automatically, you can disable this option. |
| .NET SDK | Includes the .NET Runtime 6.x.x, a prerequisite for the Azure Logic Apps (Standard) runtime. |
| Azure Functions Core Tools - 4.x version | Installs the version based on your operating system (Windows, macOS, or Linux). |
| Node.js version 16.x.x (unless a newer version is already installed) | Required to enable the Inline Code Operations action that runs JavaScript. |
Set up Visual Studio Code
Connect to your Azure account
In the Azure window, on the Workspace section toolbar, from the Azure Logic Apps menu, select Create New Project.
From the templates list that appears, select either Stateful Workflow or Stateless Workflow.
Provide a name for your workflow and press Enter.
If Visual Studio Code prompts you to open your project in the current Visual Studio Code or in a new Visual Studio Code window, select Open in current window.
Visual Studio Code finishes creating your project.
The Explorer pane shows your project, which now includes automatically generated project files. For example, the project has a folder that shows your workflow's name. Inside this folder, the workflow.json file contains your workflow's underlying JSON definition.
Open the workflow.json file's shortcut menu, and select Open Designer.
If Visual Studio Code prompts you to enable connectors in Azure, select Use connectors from Azure.
After you open a blank workflow in the designer, the Add a trigger prompt appears on the designer. You can now start creating your workflow by adding a trigger and actions and save it.
Run, test, and debug locally
The Terminal window opens so that you can review the debugging session.
Now, find the callback URL for the endpoint on the Request trigger.
Select Run trigger.
If it is a stateful workflow, you can see the run status.
To view the run details, select the run's identifier. A new window opens with the results.
Note: If you get a forbidden error when your workflow uses a storage account, allowlist your IP address in that storage account, then rerun the workflow by choosing Run and debug in Visual Studio Code.
When the run completes, stop debugging with the stop button, then commit and push the code to your Azure repo using Git.
Use a Pipeline to Deploy the Created Workflow
Build.yaml
```yaml
jobs:
  - job: logic_app_build
    displayName: "Build and publish Logic App"
    steps:
      - script: sudo apt-get update && sudo apt-get install -y zip
        displayName: 'Install zip utility'
      - task: CopyFiles@2
        displayName: 'Create project folder'
        inputs:
          sourceFolder: '$(System.DefaultWorkingDirectory)'
          contents: |
            azure_logicapps/**
          targetFolder: 'project_output'
      - task: ArchiveFiles@2
        displayName: 'Create project Zip'
        inputs:
          rootFolderOrFile: '$(System.DefaultWorkingDirectory)/project_output/azure_logicapps'
          includeRootFolder: false
          archiveType: 'zip'
          archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          replaceExistingArchive: true
      - task: PublishPipelineArtifact@1
        displayName: 'Publish project zip artifact'
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          artifact: 'logicAppCIArtifact'
          publishLocation: 'pipeline'
```

Deploy.yaml

```yaml
jobs:
  - deployment: deploy_logicapp_resources
    displayName: Deploy Logic App
    environment: ${{ parameters.environmentToDeploy }}
    strategy:
      runOnce:
        deploy:
          steps:
            - download: current
              artifact: logicAppCIArtifact
            - task: AzureFunctionApp@1
              displayName: 'Deploy Logic App workflows'
              inputs:
                azureSubscription: ${{ parameters.azureServiceConnection }}
                appType: 'functionApp'
                appName: ${{ parameters.vars.LogicAppName }}
                package: '$(Pipeline.Workspace)/logicAppCIArtifact/$(Build.BuildId).zip'
                deploymentMethod: 'zipDeploy'
```
1182. This week, we solve the mystery of the colon: when do you actually need to capitalize the next word? We compare AP, Chicago, and MLA styles to give you a clear answer. Then, we look at common words with surprisingly "shadowy" histories — from the sudden appearance of the word "dog" to the apocryphal origin of "quiz."
The words with no origins segment was written by Karen Lunde. Find her on igofirst.org.
🔗 Join the Grammar Girl Patreon.
🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)
🔗 Watch my LinkedIn Learning writing courses.
🔗 Subscribe to the newsletter.
🔗 Find an edited transcript.
🔗 Get Grammar Girl books.
| HOST: Mignon Fogarty
| Grammar Girl is part of the Quick and Dirty Tips podcast network.
| Theme music by Catherine Rannus.
| Grammar Girl Social Media: YouTube. TikTok. Facebook. Threads. Instagram. LinkedIn. Mastodon. Bluesky.
Hosted on Acast. See acast.com/privacy for more information.