Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
153059 stories · 33 followers

Astronomers May Have Detected an Atmosphere Around a Tiny, Icy World Past Pluto

1 Share
"The Associated Press is reporting on a new study in Nature Astronomy suggesting that a tiny, icy world beyond Pluto harbors a thin, delicate atmosphere that may have been created by volcanic eruptions or a comet strike," writes longtime Slashdot reader fahrbot-bot. From the report: Just 300 miles (500 kilometers) or so across, this mini Pluto is thought to be the solar system's smallest object yet with a clearly detected global atmosphere bound by gravity, said lead researcher Ko Arimatsu of the National Astronomical Observatory of Japan. This so-called minor planet -- formally known as (612533) 2002 XV93 -- is considered a plutino, circling the sun twice in the time it takes Neptune to complete three solar orbits. At the time of the study, it was more than 3.4 billion miles (5.5 billion kilometers) away, farther than even Pluto, the only other object in the Kuiper Belt with an observed atmosphere. This cosmic iceball's atmosphere is believed to be 5 million to 10 million times thinner than Earth's protective atmosphere, according to the study [...]. It's 50 to 100 times thinner than even Pluto's tenuous atmosphere. The likeliest atmospheric chemicals are methane, nitrogen or carbon monoxide, any of which could reproduce the observed dimming as the object passed before the star, according to Arimatsu. Further observations, especially by NASA's Webb Space Telescope, could verify the makeup of the atmosphere, according to Arimatsu.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
2 hours ago
reply
Pennsylvania, USA
Share this story
Delete

What (un)exactly do you mean by semantic search?

Ryan welcomes Bryan O’Grady, Head of Field Research and Solutions Architecture at Qdrant, to discuss the differences between traditional text search engines powered by Lucene and modern vector databases, when exact-match search fits needs like logs and security analytics, when semantic search fits user-facing discovery and non-exact results, and how Qdrant is growing into video embeddings and local-agent contexts.

Moving Beyond Prompts: A Practical Introduction to Spec-Driven Development


In the last year, many of us have started writing code differently.

We describe what we want, let AI generate an answer, review it, tweak the prompt, and try again. This loop—prompt, retry, adjust—has quietly become part of our daily workflow.

At first, it feels incredibly productive. But as the complexity of the task increases, something changes. The iteration cycle becomes longer, outputs become inconsistent, and the effort shifts from solving the problem to refining the prompt.

This is where a subtle but important shift in approach can help: moving from prompt-driven development to spec-driven development.

 

The Problem: Prompt → Retry → Guess

Most AI-assisted workflows today look something like this:

  • Write a prompt describing the task
  • Review the generated output
  • Adjust the prompt
  • Repeat until it looks acceptable

In practice, this often simplifies to:

Prompt → Retry → Guess

Figure: Prompt-driven vs spec-driven workflow comparison

For simple tasks, this works well. But for anything involving multiple inputs, constraints, or edge cases, the process can become unpredictable.

In my experience, the challenge is not the model—it is the lack of structure in how we describe the problem.

 

A Shift in Thinking: From Prompts to Specifications

Instead of asking AI to “figure it out,” spec-driven development introduces a simple idea:

Define the problem clearly before asking for a solution.

A specification (spec) is not a long document—it is a structured way of describing:

  • Inputs
  • Outputs
  • Constraints
  • Edge cases

When this structure is provided upfront, the interaction changes significantly.

Rather than iterating on vague prompts, you are guiding the system with a clear contract.

 

What This Looks Like in Practice

Let’s take a simple example: an order summary API (for example, a backend service hosted on Azure App Service).

Without a Spec (Typical Prompt)

“Write an API that returns order details for a user.”

A model can generate something reasonable, but in practice, the responses often vary:

  • Field names may be inconsistent
  • Pagination may be missing
  • Edge cases (no orders, large datasets) may not be handled
  • Structure may change across iterations

Example response (typical output):

{
  "userId": 123,
  "orders": [
    { "id": 1, "amount": 250 }
  ]
}

With a Spec (Structured Input)

Now consider providing a simple specification:

Specification:

  • Input:
    • userId
    • page
    • pageSize
  • Output:
    • userId
    • orders[]
      • orderId
      • totalAmount
      • orderDate
    • pagination
      • page
      • pageSize
      • totalRecords
  • Constraints:
    • Default pageSize = 10
    • Return empty list if no orders
    • Handle large datasets efficiently

 

Example response (based on the spec):

{
  "userId": 123,
  "orders": [
    { "orderId": 1, "totalAmount": 250, "orderDate": "2024-01-10" }
  ],
  "pagination": { "page": 1, "pageSize": 10, "totalRecords": 50 }
}
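To make the contract concrete, here is a minimal sketch of a handler that satisfies the spec above. This is plain, framework-agnostic Python; the in-memory `ORDERS` store and the `get_order_summary` name are illustrative stand-ins, not part of the original example.

```python
# Hypothetical in-memory data layer; swap in your real order service.
ORDERS = {
    123: [
        {"orderId": 1, "totalAmount": 250, "orderDate": "2024-01-10"},
        {"orderId": 2, "totalAmount": 120, "orderDate": "2024-02-05"},
    ]
}

def get_order_summary(user_id, page=1, page_size=10):
    # Constraints from the spec: default pageSize = 10,
    # and an empty list (not an error) when the user has no orders.
    all_orders = ORDERS.get(user_id, [])
    start = (page - 1) * page_size
    page_orders = all_orders[start:start + page_size]  # simple pagination
    return {
        "userId": user_id,
        "orders": page_orders,
        "pagination": {
            "page": page,
            "pageSize": page_size,
            "totalRecords": len(all_orders),
        },
    }
```

Because every field, default, and edge case comes straight from the spec, the shape of this function is largely determined before any code is written.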

 

Why This Tends to Work

The difference here is not just stylistic—it is structural.

An unstructured prompt leaves room for interpretation. A spec reduces ambiguity by defining expectations explicitly.

In practice, I have observed that providing structured inputs like this often leads to the following:

  • More consistent field naming
  • Better handling of edge cases
  • Reduced need for repeated prompt refinement

Rather than relying on trial-and-error, the interaction becomes more predictable and aligned with expectations.

 

Applying This to Existing Code (Refactor Scenario)

This approach becomes even more useful when applied to existing code.

Instead of asking:

“Fix the bug in the Auth controller”

You can define expected behavior:

  • Input validation rules
  • Response formats
  • Error handling
  • Authorization behavior

The task then becomes aligning the implementation with the defined spec.

This shifts the interaction from guesswork to validation—comparing current behavior with intended behavior.

Example Comparison (Auth Scenario)

Without Spec (Typical Prompt)

“Fix the login issue in Auth controller”

Possible outcomes include:

  • Partial validation added
  • Inconsistent error responses
  • No clear handling of repeated failed attempts

With Spec (Defined Behavior)

Spec defines:

  • Validate username and password
  • Return consistent error responses
  • Lock account after 5 failed attempts
  • Do not expose internal errors

Resulting behavior:

  • Input validation is consistently applied
  • Error responses follow a defined structure
  • Edge cases like account lockout are handled explicitly

This mirrors the same pattern seen in the API example—moving from ambiguity to clearly defined behavior.
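As a sketch of what the spec'd login behavior might look like in code (plain Python; `USERS`, `FAILED_ATTEMPTS`, and the numeric status codes are hypothetical stand-ins for a real user store and HTTP layer):

```python
# Hypothetical in-memory stores standing in for a real user service.
USERS = {"alice": "s3cret"}
FAILED_ATTEMPTS = {}
MAX_ATTEMPTS = 5  # spec: lock account after 5 failed attempts

def login(username, password):
    # Spec: validate inputs and return a consistent error shape;
    # never expose internal errors to the caller.
    if not username or not password:
        return {"status": 400, "error": "username and password are required"}
    if FAILED_ATTEMPTS.get(username, 0) >= MAX_ATTEMPTS:
        return {"status": 423, "error": "account locked"}
    if USERS.get(username) != password:
        FAILED_ATTEMPTS[username] = FAILED_ATTEMPTS.get(username, 0) + 1
        return {"status": 401, "error": "invalid credentials"}
    FAILED_ATTEMPTS.pop(username, None)  # reset the counter on success
    return {"status": 200, "userId": username}
```

Every branch here maps directly to a line of the spec, which is what makes the result reviewable: you validate the code against the contract instead of guessing at intent.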

 

A Practical Way to Start

You do not need new tools or frameworks to try this.

A simple workflow that has worked well in practice:

  1. Ask – Describe the problem (prompt, discussion, or notes)
  2. Write a spec – Define inputs, outputs, constraints
  3. Refine – Remove ambiguity
  4. Generate – Use the spec as input
  5. Validate – Compare output with the spec

This adds a small upfront step, but it often reduces back-and-forth iterations later.
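The validate step above does not require heavy tooling. A minimal sketch, assuming the spec is expressed as a dict of required fields and types (an illustrative format, not a standard):

```python
# Illustrative spec format: required field name -> expected Python type.
SPEC = {
    "userId": int,
    "orders": list,
    "pagination": dict,
}

def validate(response, spec):
    """Return a list of violations; an empty list means the response matches."""
    problems = []
    for field, expected_type in spec.items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

good = {"userId": 123, "orders": [], "pagination": {"page": 1}}
print(validate(good, SPEC))  # []
print(validate({"userId": "x"}, SPEC))
```

Even a check this small closes the loop: the same spec that drove generation now drives verification.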

 

The Practical Challenge

One important point to note:

Writing a good spec requires understanding the problem.

Spec-driven development does not eliminate complexity—it surfaces it earlier.

In many cases, the hardest part is not writing code, but clearly defining:

  • What the system should do
  • What it should not do
  • How it should behave under edge conditions

This is also why specs evolve over time. They do not need to be perfect upfront. They improve as your understanding improves.

 

Where This Approach Helps

From what I have seen, this approach is most useful in scenarios where the problem involves multiple inputs, defined contracts, or structured outputs such as APIs, schema-driven systems, or refactoring existing code where consistency matters.

 

Where It May Not Be Necessary

For simpler tasks such as small scripts, minor UI changes, or quick experiments, a detailed specification may not add much value. In those cases, a straightforward prompt is often sufficient.

 

A Note on Tools

Tools like GitHub Copilot, Azure AI Studio, and AI-assisted workflows in Visual Studio Code tend to be more effective when given clear, structured inputs.

Spec-driven development is not tied to any specific tool. It is a way of thinking about how we interact with these systems more effectively.

 


Final Thoughts

Many discussions around AI-assisted development focus on what tools can do.

This approach focuses on something slightly different:

How developers can structure problems more effectively before implementation.

In my experience, moving from prompts to specs does not eliminate iteration, but it makes that iteration more predictable and purposeful.


Building an AI Agent for Azure Infrastructure Validation


 

1. Introduction

Infrastructure consistency is critical in large-scale Azure environments, especially in migration programs and DevOps-driven deployments. While Infrastructure as Code (IaC) using Terraform improves reproducibility, it does not fully eliminate:

  • Manual errors in design specifications
  • Drift between Terraform and deployed resources
  • Misalignment between approved design (Excel/architecture docs) and deployed state

To address this, we propose building an AI-powered Infrastructure Validation Agent that continuously validates and reconciles:

  1. Excel (Source of Truth)
  2. Terraform (.tf files)
  3. Azure Deployed Resources

This blog explains the architecture, implementation, validation logic, and real-world applicability of such an agent.


2. Problem Statement

In enterprise environments, infrastructure data flows through multiple stages:

Source                  Purpose
Excel / Design Sheets   Approved architecture specifications
Terraform               Infrastructure as Code implementation
Azure Portal            Actual deployed infrastructure

3. Common Challenges

  • Configuration mismatches across stages
  • Drift due to manual portal changes
  • Incorrect SKU, region, or configuration deployment
  • Lack of automated validation before and after deployment

The absence of unified validation leads to compliance risks, deployment errors, and operational inefficiencies.

4. Solution Overview

The proposed solution is an AI-powered validation agent that:

  • Ingests Excel as configuration input
  • Parses Terraform configurations
  • Fetches deployed resource details from Azure
  • Compares the three sources and reports any drift

5. Architecture Overview

High-Level Architecture Components

    1. Input Layer
      • Excel file (configuration source)
    2. Processing Layer
      • Terraform Parser
      • Azure Resource Fetcher
      • AI-based Validator (optional reasoning layer)
    3. Comparison Engine
      • Schema-based comparison
      • Drift detection logic
    4. Output Layer
      • Validation report (JSON / Excel / HTML)
    5. Hosting
      • Azure Function App
    6. Optional Enhancements
      • Azure AI Search for semantic matching and reasoning

6. Agent Design (Modular Components)

Module             Description
Excel Reader       Reads and standardizes input
Terraform Parser   Extracts resource configuration
Azure Fetcher      Queries deployed resources
Comparator Engine  Identifies mismatches
AI Validator       Enhances validation and recommendations
Report Generator   Produces actionable outputs


7. Implementation Walkthrough
Step 1: Read Excel Input

import pandas as pd

def read_excel(file_path):
    df = pd.read_excel(file_path)
    df.columns = df.columns.str.strip()  # normalize column names
    return df

excel_df = read_excel("infra_config.xlsx")
print(excel_df.head())

Step 2: Parse Terraform Files

import hcl2

def parse_terraform(file_path):
    with open(file_path, 'r') as file:
        data = hcl2.load(file)

    resources = []
    for resource_block in data.get('resource', []):
        for rtype, instances in resource_block.items():
            for name, config in instances.items():
                resources.append({
                    "resource_type": rtype,
                    "resource_name": name,
                    "config": config
                })
    return resources

tf_resources = parse_terraform("main.tf")
print(tf_resources)

 

Step 3: Fetch Deployed Azure Resources

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
subscription_id = "your-subscription-id"
resource_client = ResourceManagementClient(credential, subscription_id)

def fetch_azure_resources():
    resources = []
    for resource in resource_client.resources.list():
        resources.append({
            "name": resource.name,
            "type": resource.type,
            "location": resource.location,
            "id": resource.id
        })
    return resources

azure_resources = fetch_azure_resources()
print(azure_resources)

Step 4: Normalize Data

def normalize_excel(df):
    return df.to_dict(orient='records')

def normalize_tf(tf_resources):
    normalized = []
    for res in tf_resources:
        normalized.append({
            "resource_name": res["resource_name"],
            "resource_type": res["resource_type"],
            "config": res["config"]
        })
    return normalized

def normalize_azure(azure_resources):
    normalized = []
    for res in azure_resources:
        normalized.append({
            "resource_name": res["name"],
            "resource_type": res["type"],
            "location": res["location"]
        })
    return normalized

 

Step 5: Validation Logic (Drift Detection)

def compare_resources(excel_data, tf_data, azure_data):
    issues = []
    for excel_res in excel_data:
        name = excel_res['resource_name']

        tf_match = next((r for r in tf_data if r['resource_name'] == name), None)
        az_match = next((r for r in azure_data if r['resource_name'] == name), None)

        if not tf_match:
            issues.append({
                "resource": name,
                "issue": "Missing in Terraform",
                "severity": "High"
            })

        if not az_match:
            issues.append({
                "resource": name,
                "issue": "Missing in Azure",
                "severity": "Critical"
            })

        if tf_match and az_match:
            if excel_res['region'] != az_match.get('location'):
                issues.append({
                    "resource": name,
                    "issue": "Region mismatch",
                    "expected": excel_res['region'],
                    "actual": az_match.get('location')
                })

    return issues

drift_report = compare_resources(
    normalize_excel(excel_df),
    normalize_tf(tf_resources),
    normalize_azure(azure_resources)
)

print(drift_report)

Step 6: Export Report to Excel

Sample validation report:

Resource      Issue                  Expected   Actual        Severity
func-app-01   Missing in Terraform   -          -             High
search-01     SKU mismatch           Standard   Basic         Medium
webapp-01     Region mismatch        East US    West Europe   High
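The export step itself can be a thin layer over pandas. A minimal sketch (the export_report helper and its column order are assumptions, and writing .xlsx requires an Excel engine such as openpyxl):

```python
import pandas as pd

def export_report(issues, out_path=None):
    """Shape drift issues into a tabular report; optionally write it to Excel."""
    columns = ["resource", "issue", "expected", "actual", "severity"]
    df = pd.DataFrame(issues)
    # Stable column order; fields absent from an issue become "-" cells.
    df = df.reindex(columns=columns).fillna("-")
    if out_path:
        df.to_excel(out_path, index=False)  # requires openpyxl installed
    return df

# Illustrative issues in the shape produced by compare_resources.
sample = [
    {"resource": "func-app-01", "issue": "Missing in Terraform", "severity": "High"},
    {"resource": "webapp-01", "issue": "Region mismatch",
     "expected": "East US", "actual": "West Europe", "severity": "High"},
]
report_df = export_report(sample)
print(report_df)
```

In the agent you would call export_report(drift_report, "drift_report.xlsx") as the final step, producing the kind of table shown above.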


 


Build and Deploy Logic App Workflows Using Visual Studio Code and CI/CD Pipeline


Throughout this guide, you'll create a Standard logic app workspace and project, build your workflow, and deploy it as a Standard logic app resource in Azure. This enables your workflow to run in a single-tenant Azure Logic Apps environment or within an App Service Environment v3 (restricted to Windows-based App Service plans).

Key advantages of Standard logic apps include:

You can locally develop, debug, run, and test workflows within the Visual Studio Code environment. Although both the Azure portal and Visual Studio Code support building, running, and deploying Standard logic app resources and workflows, Visual Studio Code allows you to perform all these actions locally, offering greater flexibility during development.

Prerequisites

  1. Visual Studio Code
  2. Azure Account extension for Visual Studio Code
  3. Download and install the following Visual Studio Code dependencies for your specific operating system:

Starting with version 2.81.5, the Azure Logic Apps (Standard) extension for Visual Studio Code includes a dependency installer that automatically installs all the required dependencies in a new binary folder and leaves any existing dependencies unchanged. 

For more information, see Get started more easily with the Azure Logic Apps (Standard) extension for Visual Studio Code.

This extension includes the following dependencies:

  • C# for Visual Studio Code – Enables F5 functionality to run your workflow.
  • Azurite for Visual Studio Code – Provides a local data store and emulator to use with Visual Studio Code so that you can work on your logic app project and run your workflows in your local development environment. If you don't want Azurite to start automatically, you can disable this option:
    1. On the File menu, select Preferences > Settings.
    2. On the User tab, select Extensions > Azure Logic Apps (Standard).
    3. Find the setting named Azure Logic Apps Standard: Auto Start Azurite, and clear the selected checkbox.
  • .NET SDK 6.x.x – Includes the .NET Runtime 6.x.x, a prerequisite for the Azure Logic Apps (Standard) runtime.
  • Azure Functions Core Tools (4.x version) – Installs the version based on your operating system (Windows, macOS, or Linux). These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
  • Node.js version 16.x.x (unless a newer version is already installed) – Required to enable the Inline Code Operations action that runs JavaScript.

Set up Visual Studio Code

  • To make sure that all the extensions are correctly installed, reload or restart Visual Studio Code.
  • Confirm that the Azure Logic Apps Standard: Project Runtime setting for the Azure Logic Apps (Standard) extension is set to version ~4:
  • On the File menu, go to Preferences > Settings.
  • On the User tab, go to > Extensions > Azure Logic Apps (Standard).
  • You can find the Azure Logic Apps Standard: Project Runtime setting here or use the search box to find other settings:

Connect to your Azure account

  1. On the Visual Studio Code Activity Bar, select the Azure icon.

 

 

In the Azure window, on the Workspace section toolbar, from the Azure Logic Apps menu, select Create New Project.

 

From the templates list that appears, select either Stateful Workflow or Stateless Workflow.

Provide a name for your workflow and press Enter. 

If Visual Studio Code prompts you to open your project in the current Visual Studio Code or in a new Visual Studio Code window, select Open in current window.

Visual Studio Code finishes creating your project.

The Explorer pane shows your project, which now includes automatically generated project files. For example, the project has a folder that shows your workflow's name. Inside this folder, the workflow.json file contains your workflow's underlying JSON definition.

Open the workflow.json file's shortcut menu, and select Open Designer.

If you're prompted to enable connectors in Azure, select Use connectors from Azure.

  • After the Select subscription list opens, select the Azure subscription to use for your logic app project.
  • After the resource groups list opens, select the resource group to use for your logic app project.
  • After you perform this step, Visual Studio Code opens the workflow designer.

After you open a blank workflow in the designer, the Add a trigger prompt appears on the designer. You can now start creating your workflow by adding a trigger and actions and save it.

 Run, test, and debug locally

  1. Make sure to start the emulator before you run your workflow:
  2. In Visual Studio Code, from the View menu, select Command Palette.
  3. After the command palette appears, enter Azurite: Start.
  4. On the Visual Studio Code Activity Bar, open the Run menu, and select Start Debugging (F5).

The Terminal window opens so that you can review the debugging session.

Now, find the callback URL for the endpoint on the Request trigger.

  1. Reopen the Explorer pane so that you can view your project.
  2. From the workflow.json file's shortcut menu, select Overview.

Select Run trigger.

For a stateful workflow, the run history shows the status of each run.

To view the details of a run, select its identifier; the results open in a new window.

Note: If you get a forbidden error when your workflow uses a storage account, whitelist your IP address on that storage account and rerun the workflow by choosing Run and Debug in VS Code.

When you finish, stop the debugger by choosing the stop button, then push the code to your Azure repo using Git.

Use a Pipeline to Deploy the Created Workflow

Build.yaml

jobs:
  - job: logic_app_build
    displayName: "Build and publish Logic App"
    steps:
      - script: sudo apt-get update && sudo apt-get install -y zip
        displayName: 'Install zip utility'
      - task: CopyFiles@2
        displayName: 'Create project folder'
        inputs:
          sourceFolder: '$(System.DefaultWorkingDirectory)'
          contents: |
            azure_logicapps/**
          targetFolder: 'project_output'
      - task: ArchiveFiles@2
        displayName: 'Create project Zip'
        inputs:
          rootFolderOrFile: '$(System.DefaultWorkingDirectory)/project_output/azure_logicapps'
          includeRootFolder: false
          archiveType: 'zip'
          archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          replaceExistingArchive: true
      - task: PublishPipelineArtifact@1
        displayName: 'Publish project zip artifact'
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
          artifact: 'logicAppCIArtifact'
          publishLocation: 'pipeline'

Deploy.yaml

jobs:
  - deployment: deploy_logicapp_resources
    displayName: Deploy Logic App
    environment: ${{ parameters.environmentToDeploy }}
    strategy:
      runOnce:
        deploy:
          steps:
            - download: current
              artifact: logicAppCIArtifact
            - task: AzureFunctionApp@1
              displayName: 'Deploy Logic App workflows'
              inputs:
                azureSubscription: ${{ parameters.azureServiceConnection }}
                appType: 'functionApp'
                appName: ${{ parameters.vars.LogicAppName }}
                package: '$(Pipeline.Workspace)/logicAppCIArtifact/$(Build.BuildId).zip'
                deploymentMethod: 'zipDeploy'

 


Decoding the colon: AP vs. MLA style. Plus, words with no known origin.


1182. This week, we solve the mystery of the colon: when do you actually need to capitalize the next word? We compare AP, Chicago, and MLA styles to give you a clear answer. Then, we look at common words with surprisingly "shadowy" histories — from the sudden appearance of the word "dog" to the apocryphal origin of "quiz."


The words with no origins segment was written by Karen Lunde. Find her on igofirst.org.


🔗 Join the Grammar Girl Patreon.

🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Find an edited transcript.

🔗 Get Grammar Girl books.


| HOST: Mignon Fogarty


| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Castria Communications
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian
  • Podcast Associate: Maram Elnagheeb


| Theme music by Catherine Rannus.


| Grammar Girl Social Media: YouTube, TikTok, Facebook, Threads, Instagram, LinkedIn, Mastodon, Bluesky.


Hosted on Acast. See acast.com/privacy for more information.





Download audio: https://sphinx.acast.com/p/open/s/69c1476c007cdcf83fc0964b/e/69f366c98dd960ac6191648e/media.mp3