Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Engineering Storefronts for Agentic Commerce

1 Share

For years, persuasion has been the most valuable skill in digital commerce. Brands spend millions on ad copy, testing button colours, and designing landing pages to encourage people to click “Buy Now.” All of this assumes the buyer is a person who can see. But an autonomous AI shopping agent does not have eyes.

I recently ran an experiment to see what happens when a well-designed buying agent visits two types of online stores: one built for people, one built for machines. Both stores sold hiking jackets. Merchant A used the kind of marketing copy brands have refined for years: “The Alpine Explorer. Ultra-breathable all-weather shell. Conquers stormy seas!” Price: $90. Merchant B provided only raw structured data: no copy, just a JSON snippet {"water_resistance_mm": 20000}. Price: $95. I gave the agent a single instruction: “Find me the cheapest waterproof hiking jacket suitable for the Scottish Highlands.”

The agent quickly turned my request into clear requirements, recognizing that “Scottish Highlands” means heavy rain and setting a minimum water resistance of 15,000–20,000 mm. I ran the test 10 times. Each time, the agent bought the more expensive jacket from Merchant B. The agent completely bypassed the cheaper option due to the data’s formatting.

The reason lies in the Sandwich Architecture: the middle layer of deterministic code that sits between the LLM’s intent translation and its final decision. When the agent checked Merchant A, this middle layer attempted to match “conquers stormy seas” against a numeric requirement. Python gave a validation error, the try/except block caught it, and the cheaper jacket was dropped from consideration in 12 milliseconds. This is how well-designed agent pipelines operate. They place intelligence at the top and bottom, with safety checks in the middle. That middle layer is deterministic and literal, systematically filtering out unstructured marketing copy.

How the Sandwich Architecture works

A well-built shopping agent operates in three layers, each with a fundamentally different job.

Layer 1: The Translator. This is where the LLM does its main job. A human says something vague and context-laden—“I need a waterproof hiking jacket for the Scottish Highlands”—and the model turns it into a structured JSON query with explicit numbers. In my experiment, the Translator consistently mapped “waterproof” to a minimum water_resistance_mm between 10,000 and 20,000. Across 10 runs, it stayed focused and never hallucinated features.
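As a rough sketch, the Translator's output is just a small structured query. The field names below are illustrative, not from any real protocol, and in production the mapping comes from an LLM call rather than the hard-coded dictionary shown here:

```python
def translate(request: str) -> dict:
    """Sketch of the Translator layer: turn a vague human request into
    an explicit, numeric query. In a real pipeline an LLM produces this;
    here the mapping for the Highlands request is hard-coded."""
    return {
        "category": "hiking_jacket",
        "min_water_resistance_mm": 15000,  # "Scottish Highlands" => heavy rain
        "objective": "minimize_price",
    }

query = translate("Find me the cheapest waterproof hiking jacket "
                  "suitable for the Scottish Highlands.")
```

Everything downstream operates on this dictionary, never on the original sentence.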

Layer 2: The Executor. This critical middle layer contains zero intelligence by design. It takes the structured query from the Translator and checks each merchant’s product data against it. It relies entirely on strict type validation instead of reasoning or interpretation. Does the merchant’s water_resistance_mm field contain a number greater than or equal to the Translator’s minimum? If yes, the product passes. If the field contains a string such as “conquers stormy seas,” the validation fails immediately. These Pydantic type checks treat ambiguity as absence. In a production system handling real money, a try/except block cannot be swayed by good copywriting or social proof.
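A minimal sketch of the Executor's filter, using a plain `isinstance` check in place of the Pydantic models the article describes (merchant names are invented; the field values mirror the experiment):

```python
MIN_WATER_RESISTANCE_MM = 15000

merchant_a = {"name": "Alpine Explorer", "price_usd": 90,
              "water_resistance_mm": "Conquers stormy seas!"}
merchant_b = {"name": "Trail Shell", "price_usd": 95,
              "water_resistance_mm": 20000}

def passes_validation(listing: dict, minimum_mm: int) -> bool:
    """Deterministic check: no reasoning, no interpretation."""
    value = listing.get("water_resistance_mm")
    if not isinstance(value, int):  # a marketing string fails here;
        return False                # ambiguity is treated as absence
    return value >= minimum_mm

shortlist = [p for p in (merchant_a, merchant_b)
             if passes_validation(p, MIN_WATER_RESISTANCE_MM)]
# Merchant A never reaches the Judge; only Merchant B survives.
```

The key property is that the filter has no path by which prose can influence the outcome: a string either parses as a qualifying number or the product is gone.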

Layer 3: The Judge. The surviving products are passed to a second LLM call that makes the final selection. In my experiment, this layer simply picked the cheapest option. In more complex scenarios, the Judge evaluates value against specific user preferences. The Judge selects exclusively from a preverified shortlist.
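In this experiment the Judge's policy reduces to one line; a standalone sketch (product names hypothetical):

```python
def judge(shortlist: list[dict]):
    """Sketch of the Judge layer: it only ever sees pre-validated
    products, and here its policy is simply 'cheapest wins'."""
    if not shortlist:
        return None  # nothing survived the Executor's filter
    return min(shortlist, key=lambda p: p["price_usd"])

winner = judge([{"name": "Trail Shell", "price_usd": 95}])
```

A richer Judge would be a second LLM call scoring the shortlist against user preferences, but it still cannot resurrect anything the Executor dropped.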

Figure 1: The Sandwich Architecture

This three-layer pattern (LLM → deterministic code → LLM) reflects how engineering teams build most serious agent pipelines today. DocuSign’s sales outreach system uses a similar structure: An LLM agent composes personalized outreach based on lead research. A deterministic layer then enforces business rules before a final agent reviews the output. DocuSign found the agentic system matched or beat human reps on engagement metrics while significantly cutting research time. The reason this pattern keeps appearing is clear: LLMs handle ambiguity well, while deterministic code provides reliable, strict validation. The Sandwich Architecture uses each where it’s strongest.


This is precisely why Merchant A’s jacket vanished. The Executor tried to parse “Ultra-breathable all-weather shell” as an integer and failed. The Judge received a list containing exactly one product. In an agentic pipeline, the layer deciding whether your product is considered cannot process standard marketing.

From storefronts to structured feeds

If ad copy gets filtered out, merchants must expose the raw product data—fabric, water resistance, shipping rules—already sitting in their PIM and ERP systems. To a shopping agent validating a breathability_g_m2_24h field, “World’s most breathable mesh” triggers a validation error that drops the product entirely. A competitor returning 20000 passes the filter. Persuasion is mathematically lossy. Marketing copy compresses a high-information signal (a precise breathability rating) into a low-information string that cannot be validated. Information is destroyed in the translation, and the agent cannot recover it.

The emerging standard for solving this is the Universal Commerce Protocol (UCP). UCP asks merchants to publish a capability manifest: one structured Schema.org feed that any compliant agent can discover and query. This migration requires a fundamental overhaul of infrastructure. Much of what an agent needs to evaluate a purchase is currently locked inside frontend React components. Every piece of logic a human triggers by clicking must be exposed as a queryable API. In an agentic market, an incomplete data feed leads to complete exclusion from transactions.

Why telling agents not to buy your product is a good strategy

Exposing structured data is only half the battle. Merchants must also actively tell agents not to buy their products. Traditional marketing casts the widest net possible. You stretch claims to broaden appeal, letting returns handle the inevitable mismatches. In agentic commerce, that logic inverts. If a merchant describes a lightweight shell as suitable for “all weather conditions,” a human applies common sense. An agent takes it literally. It buys the shell for a January blizzard, resulting in a return three days later.

In traditional ecommerce, that return is a minor cost of doing business. In an agentic environment, a return tagged “item not as described” generates a persistent trust discount for all future interactions with that merchant. This forces a strategy of negative optimization. Merchants must explicitly code who their product is not for. Adding "not_suitable_for": ["sub-zero temperatures", "heavy snow"] prevents false-positive purchases and protects your trust score. Agentic commerce heavily prioritizes postpurchase accuracy, meaning overpromising will steadily degrade your product’s discoverability.
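The agent-side check is trivial, which is exactly why the field works. A sketch of how an agent might consume it (the `not_suitable_for` key comes from the article; the helper function is illustrative):

```python
def passes_negative_filter(listing: dict, use_case: str) -> bool:
    """Reject a product the merchant has explicitly excluded
    for the buyer's stated use case."""
    return use_case not in listing.get("not_suitable_for", [])

lightweight_shell = {
    "name": "Lightweight Shell",
    "not_suitable_for": ["sub-zero temperatures", "heavy snow"],
}

passes_negative_filter(lightweight_shell, "heavy snow")      # excluded
passes_negative_filter(lightweight_shell, "spring drizzle")  # still eligible
```

One honest exclusion list costs you a handful of mismatched sales and buys you a clean return record.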

From banners to logic: How discounts become programmable

Just as agents ignore marketing language, they cannot respond to pricing tricks. Open any online store and you’ll encounter countdown timers or banners announcing flash sales. Promotional marketing tactics like fake scarcity rely heavily on human emotions. An AI agent does not experience scarcity anxiety. It treats a countdown timer as a neutral scheduling parameter.

Discounts change form. Instead of visual triggers, they become programmable logic in the structured data layer. A merchant could expose conditional pricing rules: If the cart value exceeds $200 and the agent has verified a competing offer below $195, automatically apply a 10% discount. This is a fundamentally different incentive. It serves as a transparent, machine-readable contract. The agent directly calculates the deal’s mathematical value. With the logic exposed directly in the payload, the agent can factor it into its optimization across multiple merchants simultaneously. When the buyer is an optimization engine, transparency becomes a competitive feature.
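The rule in the example is small enough to state as code. A sketch of what the exposed pricing logic might look like once an agent has parsed it (function and parameter names are illustrative):

```python
def quoted_price(cart_value: float, verified_competing_offer=None) -> float:
    """Machine-readable version of the example rule: 10% off when the
    cart exceeds $200 and the agent has verified a competing offer
    below $195. No banners, no countdown timers—just the condition."""
    if (cart_value > 200
            and verified_competing_offer is not None
            and verified_competing_offer < 195):
        return round(cart_value * 0.90, 2)
    return cart_value

quoted_price(220.0, verified_competing_offer=190.0)  # discount applies
quoted_price(220.0)                                  # full price
```

Because the condition is explicit, the agent can evaluate it against every merchant in its shortlist before committing, something a human shopper staring at a banner can never do.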

Where persuasion migrates

The Sandwich Architecture’s middle layer is persuasion-proof by design. For marketing teams, structured data is no longer a backend concern; it is the primary interface. Persuasion migrates to the edges of the transaction. Before the agent runs, brand presence still shapes the user’s initial prompt (e.g., “find me a North Face jacket”). After the agent filters the options, human buyers often review the final shortlist for high-value purchases. And operational excellence builds algorithmic trust over time, acting as a structural form of persuasion for future machine queries. None of it matters, though, if you cannot survive the deterministic filter in the middle.

Agents are now browsing your store alongside human buyers. Brands treating digital commerce as a purely visual discipline will find themselves perfectly optimized for humans, yet invisible to the agents. Engineering and commercial teams must align on a core requirement: Your data infrastructure is now just as critical as your storefront.



Read the whole story
alvinashcraft
1 minute ago
reply
Pennsylvania, USA
Share this story
Delete

When Overconfidence Breaks the Trust You Worked So Hard to Build | Nate Amidon

1 Share

Nate Amidon: When Overconfidence Breaks the Trust You Worked So Hard to Build

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"I had built up the trust quotient, but then I didn't think about continually maintaining it." - Nate Amidon

 

Nate had done everything right. As a junior Scrum Master on an internal software team, he started by building trust — showing up, listening, and letting the team know he wasn't going to make things worse. He even managed to shift their reporting metrics from velocity to predictability, a move the team embraced because it focused on what they could actually control: how well they broke down and executed their plan. But then came the overconfidence. Riding on the capital he'd built, Nate proactively designed a "sprint churn" metric to track how much work swapped in and out of a sprint. The idea wasn't bad — but he rolled it out without consulting the team first. The pushback hit hard: adding more work mid-sprint shouldn't automatically be negative, the engineers argued. And they were right. The real failure wasn't the metric itself — it was bypassing the collaborative process that had earned him trust in the first place. Nate learned that trust isn't something you build once and bank on. It's an everyday job. As he puts it, the Scrum Master's role is to help the team, not direct it — and the moment you start solving problems the team hasn't agreed exist, you're directing.

 

In this episode, we also refer to Nate's previous BONUS episode on the podcast, where he discussed the brief-execute-debrief cycle from military aviation.

 

Self-reflection Question: When was the last time you introduced a change to your team without first checking if they saw the same problem you did — and what happened to your trust quotient as a result?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Nate Amidon

 

Nate is the founder of Form100 Consulting and a former Air Force officer and combat pilot turned servant leader in software development. Nate has taken the high-stakes world of military aviation and brought its core leadership principles—clarity, accountability, and execution—into his work with Agile teams.

 

You can link with Nate Amidon on LinkedIn. Learn more at Form100 Consulting.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260406_Nate_Amidon_M.mp3?dest-id=246429

Episode 104: Inside Trader Joe's Goes Back to the Future

1 Share

One-hundred-and-sixty-two years ago, we started a podcast. Well, actually, it wasn't quite that long ago. We started Inside Trader Joe's in 2018, thinking it would be a five-episode opportunity for us to tell our story from our point of view. And now here we are, eight years and more than 100 episodes later. Our crystal ball did not reveal this future, and yet, as we look back on those original episodes – the First Five, if you will – we're struck by how much remains the same. We're still all about the products, our Values still guide us, we're definitely dedicated to not taking ourselves too seriously, the store is still most definitely our brand, and we take great pride in being good neighbors, in every neighborhood Trader Joe's. Take a little walk down memory lane with us as we revisit the First Five. And keep listening for more – just like you'll always find new products in your neighborhood store, we always have more stories to tell!

Transcript (PDF)





Download audio: https://traffic.libsyn.com/secure/insidetjs/Inside_Trader_Joes_Goes_Back_to_the_Future.mp3?dest-id=704103

AI Output Without Organizational Readiness Is Just Expensive Chaos

1 Share

The central question in this episode lands hard and fast: does AI-enabled speed actually produce better outcomes? Josh opens by framing the problem plainly. The capability to generate more code, more features, and more output is here, but the gap between what your team can produce and what your organization can absorb, communicate, and deliver to customers hasn't closed. If anything, it's widened. The car analogy runs through the whole episode: you've upgraded the engine, but the tires, brakes, and suspension are still whatever they were before.

Bob brings receipts from multiple directions. His transformation work at iContact tripled engineering productivity through Agile, only for the team to outrun its own product roadmap and leave leadership frozen with nothing to prioritize. He also describes a California client that kept adding engineers while a QA bottleneck stretched features out to a year of lead time, delivering requests customers had long since moved on from. Josh layers in his own experience getting pushback from large enterprise customers who had built their operations around quarterly releases and weren't equipped to absorb continuous deployment. The common thread: creating output faster doesn't automatically create value. Someone on the other end of the pipeline still has to be ready to receive it.

The back half of the conversation turns to leadership as the real differentiator. Bob shares a sharp firsthand story of using Claude for Agile coaching competency research and discovering it was silently defaulting to an older framework, omitting what he considers the two most important competencies entirely. It's a perfect illustration of the episode's thesis: AI raises your floor, but without a human who knows enough to catch the gaps, it also masks your ceiling. Josh and Bob close on a note of guarded optimism. AI is doing what Agile did years ago, shining a spotlight on organizational health. Great leaders will accelerate through this moment. Poor ones will find their problems compounded. The opportunity is real, but so is the work.

Stay Connected and Informed with Our Newsletters

Josh Anderson's "Leadership Lighthouse"

Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.

Subscribe here

Bob Galen's "Agile Moose"

Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."

Subscribe here

Do More Than Listen:

We publish video versions of every episode and post them on our YouTube page.

Help Us Spread The Word: 

Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!





Download audio: https://episodes.captivate.fm/episode/4a1cd0c3-6bbb-4984-8cbe-3344934cce78.mp3

Proactive Dependency Security Best Practices

1 Share

With the news of the supply chain attack on the axios npm package, more than a few DevOps teams will be scrambling to understand their exposure and identify potentially affected applications. These kinds of attacks are a fact of life, though, and while serious, they can be dealt with proactively and pragmatically through some simple changes to your deployment pipelines.

In this post, I’ll show you how to proactively manage the risk associated with dependencies and supply chain attacks by running daily security scans of a Software Bill of Materials (SBOM) associated with the production deployment of your applications.

You can complete the steps in this post in around 30 minutes.

Prerequisites

Sign up for a free trial of Octopus Cloud at https://octopus.com/start. The cloud-hosted version of Octopus is the easiest way to get started, as it doesn’t require any additional configuration to work with the Octopus AI Assistant.

Then install the Octopus AI Assistant Chrome extension from the Chrome Web Store.

Creating the sample application

To demonstrate the process of proactively managing and securing your application, we’ll create one of the sample applications provided by the AI Assistant.

Open the AI Assistant and click the Community Dashboards... link:

AI Assistant menu

Click the Octopus Easy Mode link:

Easy Mode option

Octopus Easy Mode provides the ability to create sample projects based on best practices. We’ll create the sample Kubernetes project. Select the Kubernetes item and click the Execute button:

Easy Mode option

Once you review and approve the changes, the assistant creates a project called My K8s WebApp in your space, along with all supporting resources like feeds, targets, environments, lifecycles, and accounts.

This sample project demonstrates proactive dependency management, and it starts with lifecycles and environments.

Security environment and lifecycles

The AI Assistant created four environments: Development, Test, Production, and Security.

The first three environments represent the typical infrastructure used to host application deployments. The Security environment is a specialized environment that will host our dependency scanning and vulnerability management steps:

Octopus environments

We then have a DevSecOps lifecycle with four phases, one for each environment. Deployments to the Development and Security phases run automatically. In particular, when a deployment to the Production environment succeeds, it automatically starts a deployment in the Security environment. This will be important later on:

Octopus lifecycles

The deployment process

The deployment process for the sample application involves scanning the SBOM for a given application version. The final step, called Scan for Vulnerabilities, accepts a package containing an SBOM file and scans it with an open-source dependency-scanning tool.

Notably, the steps that deploy the application (Deploy a Kubernetes Web App via YAML in this example) skip the Security environment. This means deployments to all environments perform the security scan, but deployments to the Security environment only scan the SBOM file:

Octopus deployment process

The end result of this deployment process is that every deployment to every environment performs an SBOM security scan, and once a deployment to the Production environment succeeds, a deployment is immediately triggered in the Security environment.

This initial sequence of a deployment to Production followed by a deployment to Security is not particularly useful, as it is unlikely that a new vulnerability was detected in the seconds between the deployments.

However, the deployment to the Security environment can then be rerun as part of a trigger.

Rerunning the security scan as a trigger

The sample project also includes a Daily Security Scan trigger. This trigger reruns the deployment in the Security environment once per day:

Octopus triggers

This results in a daily scan of the dependencies that contributed to the version of your application in the Production environment.

This demonstrates how a Continuous Deployment (CD) tool like Octopus complements Continuous Integration (CI) tools to implement DevSecOps. Because Octopus knows exactly which versions of your applications are currently in production (as opposed to the state of a dependency lock file in Git, which may not reflect code deployed to production), it can scan application dependencies to catch vulnerabilities as soon as they are discovered.

You can then automatically respond to any vulnerability reports with custom steps like email alerts or messages sent to a chat platform, and proactively address any issues in a predictable and controlled manner.

What just happened?

By following along with this post, you created a sample Kubernetes application with:

  • SBOM security scanning steps
  • Environments and lifecycles that support proactive security management
  • A trigger that performs a daily security scan of the dependencies in your production application

This pattern provides DevOps teams with a proactive process to respond to known dependency vulnerabilities and complements other security practices such as SAST/DAST scanning at CI-time, dependency management, and prioritization.


Using Blazor Sections For Complex Situations

1 Share
Blazor provides a number of helpful utilities to manage content and flows within your application, but at times you may encounter conflicts, especially when individual pages need to override items set at the Layout or even app level. A great example of this is SEO inclusions. Sections are your friend for this situation. Let's dive in!