Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

ServiceNow Expands AI Strategy with Anthropic Claude Integration for Agentic Workflows

1 Share

Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.

In today’s Cloud Wars Minute, I break down ServiceNow’s latest AI expansion with Anthropic and what it means for enterprise workflows.

Highlights

00:04 — I recently reported on ServiceNow’s expanded collaboration with OpenAI. That agreement makes OpenAI’s models the go-to solution for companies running upwards of 80 billion annual workflows on the ServiceNow platform.

00:17 — Now, ServiceNow has announced that Anthropic’s Claude models will be integrated into core ServiceNow workflows for tasks like app development, with Claude serving as the default model powering the ServiceNow Build Agent — the company’s tool for easy development of agentic workflows.


00:37 — This is what ServiceNow Chairman and CEO Bill McDermott had to say about the announcement: “ServiceNow and Anthropic are turning intelligence into action through AI-native workflows for the world’s largest enterprises … Together, we are proving that deeply integrated platforms with an open ecosystem are how the future is built.”

01:12 — In addition to Build Agent, ServiceNow is integrating Claude alongside purpose-built solutions throughout the implementation lifecycle, with the aim of achieving a 50% reduction in the time it takes customers to deploy solutions built on the ServiceNow AI platform.

01:31 — ServiceNow and Anthropic are also building agent-based workflows for specific industries, including healthcare and life sciences, for tasks such as research and analysis. Just as it has done with OpenAI, ServiceNow is integrating Claude directly into workflows — and it’s this integration that can lead to much better outcomes for AI initiatives.

02:03 — By making these model choices the default, ServiceNow removes the guesswork from customer decision-making and enables customers to rely on the company’s expertise to achieve the best results.


The post ServiceNow Expands AI Strategy with Anthropic Claude Integration for Agentic Workflows appeared first on Cloud Wars.

Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

AI, A2A, and the Governance Gap


Over the past six months, I’ve watched the same pattern repeat across enterprise AI teams. A2A and ACP light up the room during architecture reviews—the protocols are elegant, the demos impressive. Three weeks into production, someone asks: “Wait, which agent authorized that $50,000 vendor payment at 2 am?” The excitement shifts to concern.

Here’s the paradox: Agent2Agent (A2A) and the Agent Communication Protocol (ACP) are so effective at eliminating integration friction that they’ve removed the natural “brakes” that used to force governance conversations. We’ve solved the plumbing problem brilliantly. In doing so, we’ve created a new class of integration debt—one where organizations borrow speed today at the cost of accountability tomorrow.

The technical protocols are solid. The organizational protocols are missing. We’re rapidly moving from the “Can these systems connect?” phase to the “Who authorized this agent to liquidate a position at 3 am?” phase. In practice, that creates a governance gap: Our ability to connect agents is outpacing our ability to control what they commit us to.

To see why that shift is happening so fast, it helps to look at how the underlying “agent stack” is evolving. We’re seeing the emergence of a three-tier structure that quietly replaces traditional API-led connectivity:

| Layer | Protocol examples | Purpose | The “human” analog |
| --- | --- | --- | --- |
| Tooling | MCP (Model Context Protocol) | Connects agents to local data and specific tools | A worker’s toolbox |
| Context | ACP (Agent Communication Protocol) | Standardizes how goals, user history, and state move between agents | A worker’s memory and briefing |
| Coordination | A2A (Agent2Agent) | Handles discovery, negotiation, and delegation across boundaries | A contract or handshake |

This stack makes multi-agent workflows a configuration problem instead of a custom engineering project. That is exactly why the risk surface is expanding faster than most CISOs realize.

Think of it this way: A2A is the handshake between agents (who talks to whom, about what tasks). ACP is the briefing document they exchange (what context, history, and goals move in that conversation). MCP is the toolbox each agent has access to locally. Once you see the stack this way, you also see the next problem: We’ve solved API sprawl and quietly replaced it with something harder to see—agent sprawl, and with it, a widening governance gap.
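To make that handshake/briefing/toolbox analogy concrete, here is a minimal sketch of the three tiers as plain data structures. Every class and field name here (`AgentCard`, `ContextEnvelope`, `Tool`) is a hypothetical illustration of the division of labor, not a type from the actual A2A, ACP, or MCP specifications:

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    """MCP tier: a capability an agent can invoke locally (the toolbox)."""
    name: str
    description: str

@dataclass
class ContextEnvelope:
    """ACP tier: the briefing that travels between agents (memory and goals)."""
    goal: str
    user_history: list = field(default_factory=list)
    state: dict = field(default_factory=dict)

@dataclass
class AgentCard:
    """A2A tier: how an agent advertises itself for discovery (the handshake)."""
    agent_id: str
    capabilities: list
    tools: list = field(default_factory=list)

# A support agent discovers a fulfillment agent and hands it a briefing.
support = AgentCard("support-1", ["triage", "refund"])
fulfillment = AgentCard("fulfillment-1", ["ship", "track"],
                        tools=[Tool("inventory_db", "Query stock levels")])
briefing = ContextEnvelope(goal="replace damaged item",
                           user_history=["reported damage on order #123"])
print(f"{support.agent_id} -> {fulfillment.agent_id}: {briefing.goal}")
```

Notice that nothing in this flow requires a human ticket between the two agents, which is exactly the point of the sprawl concern that follows.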

Most enterprises already struggle to govern hundreds of SaaS applications. One analysis puts the average at more than 370 SaaS apps per organization. Agent protocols do not reduce this complexity; they route around it. In the API era, humans filed tickets to trigger system actions. In the A2A era, agents use “Agent Cards” to discover each other and negotiate on top of those systems. ACP allows these agents to trade rich context—meaning a conversation starting in customer support can flow into fulfillment and partner logistics with zero human handoffs. What used to be API sprawl is becoming dozens of semiautonomous processes acting on behalf of your company across infrastructure you do not fully control. The friction of manual integration used to act as a natural brake on risk; A2A has removed that brake.

That governance gap doesn’t usually show up as a single catastrophic failure. It shows up as a series of small, confusing incidents where everything looks “green” in the dashboards but the business outcome is wrong. The protocol documentation focuses on encryption and handshakes but ignores the emergent failure modes of autonomous collaboration. These are not bugs in the protocols; they’re signs that the surrounding architecture has not caught up with the level of autonomy the protocols enable.

Policy drift: A refund policy encoded in a service agent may technically interoperate with a partner’s collections agent via A2A, but their business logic may be diametrically opposed. When something goes wrong, nobody owns the end-to-end behavior.

Context oversharing: A team might expand an ACP schema to include “User Sentiment” for better personalization, unaware that this data now propagates to every downstream third-party agent in the chain. What started as local enrichment becomes distributed exposure.

The determinism trap: Unlike REST APIs, agents are nondeterministic. An agent’s refund policy logic might change when its underlying model is updated from GPT-4 to GPT-4.5, even though the A2A Agent Card declares identical capabilities. The workflow “works”—until it doesn’t, and there’s no version trace to debug. This creates what I call “ghost breaks”: failures that don’t show up in traditional observability because the interface contract looks unchanged.

Taken together, these aren’t edge cases. They’re what happens when we give agents more autonomy without upgrading the rules of engagement between them. These failure modes have a common root cause: The technical capability to collaborate across agents has outrun the organization’s ability to say where that collaboration is appropriate, and under what constraints.
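One way to make the determinism trap observable is to fingerprint an agent’s declared interface and its behavior-determining inputs separately. This is a sketch under assumed names (none of these fields come from the A2A specification): the card hash stays stable while the behavior hash changes when the backing model is swapped, which is precisely the ghost-break signature.

```python
import hashlib
import json

def fingerprint(obj: dict) -> str:
    """Stable short hash of a JSON-serializable description."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

# Declared interface: what the Agent Card advertises to peers.
card = {"agent": "refund-agent", "capabilities": ["issue_refund"], "version": "1.0"}

# Behavior-determining inputs: things that change outcomes but not the card.
behavior_v1 = {"model": "gpt-4", "policy_prompt": "refund under $100 automatically"}
behavior_v2 = {"model": "gpt-4.5", "policy_prompt": "refund under $100 automatically"}

card_hash = fingerprint(card)
# Ghost break: the interface contract looks unchanged, but behavior shifted.
ghost_break = fingerprint(behavior_v1) != fingerprint(behavior_v2)
print(f"card={card_hash} ghost_break={ghost_break}")
```

Tracking the second hash alongside the first is one cheap way to get the version trace the essay says is currently missing.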

That’s why we need something on top of the protocols themselves: an explicit “Agent Treaty” layer. If the protocol is the language, the treaty is the constitution. Governance must move from “side documentation” to “policy as code.”


Traditional governance treats policy violations as failures to prevent. An antifragile approach treats them as signals to exploit. When an agent makes a commitment that violates a business constraint, the system should capture that event, trace the causal chain, and feed it back into both the agent’s training and the treaty ruleset. Over time, the governance layer gets smarter, not just stricter.

Define treaty-level constraints: Don’t just authorize a connection; authorize a scope. Which ACP fields is an agent allowed to share? Which A2A operations are “read only” versus “legally binding”? Which categories of decisions require human escalation?

Version the behavior, not just the schema: Treat Agent Cards as first-class product surfaces. If the underlying model changes, the version must bump, triggering a re-review of the treaty. This is not bureaucratic overhead—it’s the only way to maintain accountability in a system where autonomous agents make commitments on behalf of your organization.

Cross-organizational traceability: We need observability traces that don’t just show latency but show intent: Which agent made this commitment, under which policy? And who is the human owner? This is particularly critical when workflows span organizational boundaries and partner ecosystems.
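The three requirements above can be sketched as policy in code. This is a minimal illustration, not a real treaty engine: the `Treaty` class, its field names, and the trace record format are all assumptions made up for this example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Treaty:
    """Hypothetical 'Agent Treaty': machine-checkable scope for one agent pair."""
    allowed_acp_fields: set   # which context fields may be shared
    read_only_ops: set        # A2A operations with no binding effect
    binding_ops: set          # operations that commit the organization
    escalation_ops: set       # operations requiring human approval
    human_owner: str          # who is accountable end to end

    def check(self, op: str, fields: set) -> str:
        leaked = fields - self.allowed_acp_fields
        if leaked:
            return f"block: oversharing {sorted(leaked)}"
        if op in self.escalation_ops:
            return f"escalate to {self.human_owner}"
        if op in self.read_only_ops | self.binding_ops:
            return "allow"
        return "block: op not in treaty"

treaty = Treaty(
    allowed_acp_fields={"order_id", "issue_summary"},
    read_only_ops={"lookup_order"},
    binding_ops={"issue_refund"},
    escalation_ops={"issue_refund_over_limit"},
    human_owner="support-ops-lead",
)

# Trace entries record intent, not just latency: who committed, under which policy.
trace = {
    "ts": datetime.now(timezone.utc).isoformat(),
    "agent": "refund-agent@1.2.0",   # version bumps when the model changes
    "op": "issue_refund",
    "decision": treaty.check("issue_refund", {"order_id"}),
    "owner": treaty.human_owner,
}
print(trace["decision"])
```

The useful property is that the treaty, the version, and the human owner all live in one machine-readable place, so an audit can replay any decision.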

Designing that treaty layer isn’t just a tooling problem. It changes who needs to be in the room and how they think about the system. The hardest constraint isn’t the code; it’s the people. We’re entering a world where engineers must reason about multi-agent game theory and policy interactions, not just SDK integration. Risk teams must audit “machine-to-machine commitments” that may never be rendered in human language. Product managers must own agent ecosystems where a change in one agent’s reward function or context schema shifts behavior across an entire partner network. Compliance and audit functions need new tools and mental models to review autonomous workflows that execute at machine speed. In many organizations, those skills sit in different silos, and A2A/ACP adoption is proceeding faster than the cross-functional structures needed to manage them.

All of this might sound abstract until you look at where enterprises are in their adoption curve. Three converging trends are making this urgent: Protocol maturity means A2A, ACP, and MCP specifications have stabilized enough that enterprises are moving beyond pilots to production deployments. Multi-agent orchestration is shifting from single agents to agent ecosystems and workflows that span teams, departments, and organizations. And silent autonomy is blurring the line between “tool assistance” and “autonomous decision-making”—often without explicit organizational acknowledgment. We’re moving from integration (making things talk) to orchestration (making things act), but our monitoring tools still only measure the talk. The next 18 months will determine whether enterprises get ahead of this or we see a wave of high-profile failures that force retroactive governance.

The risk is not that A2A and ACP are unsafe; it’s that they are too effective. For teams piloting these protocols, stop focusing on the “happy path” of connectivity. Instead, pick one multi-agent workflow and instrument it as a critical product:

Map the context flow: Every ACP field must have a “purpose limitation” tag. Document which agents see which fields, and which business or regulatory requirements justify that visibility. This isn’t an inventory exercise; it’s a way to surface hidden data dependencies.

Audit the commitments: Identify every A2A interaction that represents a financial or legal commitment—especially ones that don’t route through human approval. Ask, “If this agent’s behavior changed overnight, who would notice? Who is accountable?”

Code the treaty: Prototype a “gatekeeper” agent that enforces business constraints on top of the raw protocol traffic. This isn’t about blocking agents; it’s about making policy visible and enforceable at runtime. Start minimal: One policy, one workflow, one success metric.

Instrument for learning: Capture which agents collaborate, which policies they invoke, and which contexts they share. Treat this as telemetry, not just audit logs. Feed patterns back into governance reviews quarterly.
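The four steps above can be collapsed into a single gatekeeper loop. Everything here is illustrative: the message shape, the purpose tags, and the $100 refund policy are assumptions for the sketch, not part of any protocol specification.

```python
# Minimal gatekeeper: one policy, one workflow, one success metric.
PURPOSE_TAGS = {                        # purpose-limitation tags per ACP field
    "order_id": "fulfillment",
    "issue_summary": "support",
    "user_sentiment": "internal-only",  # must never cross an external boundary
}
REFUND_LIMIT = 100.0                    # the single policy being enforced
telemetry = []                          # patterns fed back into quarterly reviews

def gatekeep(msg: dict) -> str:
    """Inspect one agent-to-agent message before it crosses a boundary."""
    # Context flow: block internal-only fields at external boundaries.
    if msg["external"] and any(PURPOSE_TAGS.get(f) == "internal-only"
                               for f in msg["fields"]):
        decision = "block: internal-only field at external boundary"
    # Commitments: refunds above the limit require a human.
    elif msg["op"] == "issue_refund" and msg["amount"] > REFUND_LIMIT:
        decision = "escalate: human approval required"
    else:
        decision = "allow"
    # Instrument for learning: telemetry, not just audit logs.
    telemetry.append({"from": msg["from"], "op": msg["op"], "decision": decision})
    return decision

print(gatekeep({"from": "support-1", "op": "issue_refund", "amount": 500.0,
                "external": False, "fields": ["order_id"]}))
```

The success metric is simply whether every commitment in the workflow passed through this function; anything that bypassed it is the governance gap made visible.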

If this works, you now have a repeatable pattern for scaling agent deployments without sacrificing accountability. If it breaks, you’ve learned something critical about your architecture before it breaks in production. If you can get one workflow to behave this way—governed, observable, and learn-as-you-go—you have a template for the rest of your agent ecosystem.

If the last decade was about treating APIs as products, the next one will be about treating autonomous workflows as policies encoded in traffic between agents. The protocols are ready. Your org chart is not. The good news: You don’t need to redesign your entire organization. You need to add one critical layer, the Agent Treaty, that makes policy machine-enforceable, observable, and learnable. You need engineers who think about composition and game theory, not just connection. And you need to treat agent deployments as products, not infrastructure. Start building that treaty before your agents start signing deals without you.

The sooner you start, the sooner that governance gap closes.




From Agency Work to Product Success


This episode we're joined by Stu Green, a product designer, agency founder, and serial app builder who's sold not one but two successful SaaS products.

We dig into the realities of building your own product versus running an agency, the role AI plays in modern product development, and whether the flood of AI-built apps is a threat or an opportunity for professionals.

Plus, we check out Bleet, an app that turns your meeting transcripts into social media content, and Paul shares how AI-powered personas are changing the way he approaches user research.

App of the Week: Bleet

You know you should be posting on LinkedIn. You've told yourself that every week for the past 6 months. But then you sit down, stare at the blank post box, and realize you have absolutely no idea what to write about. So you close the tab and promise yourself you'll do it tomorrow. You won't.


Bleet is an app built by Stu Green (and collaborator Nick) that solves this by mining the conversations you're already having. It takes your meeting recordings and transcripts, extracts the key topics using AI, and helps you turn them into social media posts. And the thing that sets it apart from just asking ChatGPT to write something for you is that it pulls your actual words and phrases from the conversation, piecing them together into posts that genuinely sound like you rather than generic AI slop.

How It Works

You connect your meeting recordings or transcripts (or even just speak a thought into the app), and Bleet will surface a list of topics you covered. From there, you pick the ones you want to post about and hit "create." You can dial in how much creative liberty the AI takes, from near-verbatim to lightly polished.

So you sit down for 10 minutes once a week, pick a handful of topics, schedule them up, and you're done. A single meeting can generate enough content for almost a week of daily posts.

What About Client Confidentiality?

The number one concern people raise is about sharing sensitive client information. Bleet strips out client names, specific people, and identifiable details. It focuses on the general topic and the ideas discussed, not the specifics of who said what in which meeting. And of course, you review everything before it goes anywhere, so if something feels too close to the bone, you just skip it or edit it.

Topic of the Week: Building Products vs. Running Agencies

Stu Green has lived both lives. He's run agencies, built products from scratch, and sold 2 SaaS businesses. So what's the difference between building for clients and building for yourself? Quite a lot, as it turns out.

Start by Solving Your Own Problem

Both of Stu's successful apps, a project management tool and HourStack (a time management app), started the same way: he needed something that didn't exist. The project management tool grew out of running his own consultancy. HourStack came from juggling small children and fragmented work hours, and wanting a way to visualize and stack little blocks of productive time.

If you're genuinely your own best customer, there's a good chance others like you exist. And if even 2 or 5 or 10 of them show up, you've got the start of something real.

The Myth of "I One-Shotted This"

AI has made it dramatically easier to build apps, but Stu is refreshingly honest about the gap between a demo and a product. Sure, he's cloned entire apps in a single prompt, and they looked great. But behind that impressive facade? Hours of iteration, hosting setup, video infrastructure, S3 servers, and a stack of decisions that require real product-building experience.

The people posting "I built this in one shot" on X are technically telling the truth, but they're showing you the Hollywood set, not the house behind the door. Getting from prototype to something you can actually charge money for still takes professional knowledge. You need to know what questions to ask, which answers are good, and when you're being led down a rabbit hole.

Two Tiers of AI Tools

Paul and Stu landed on a useful mental model: there are essentially 2 categories of AI building tools.

  1. Tools for everyone: Platforms like Lovable or Figma Make that let anyone create a basic app or prototype. Great for personal use, proof of concepts, and quick experiments.
  2. Tools for professionals: Things like Cursor and Claude Code that enhance a developer's ability to build production-quality software faster and better, but still require real expertise to use well.

Think of it like desktop publishing in the '90s. When it arrived, everyone panicked that graphic designers were finished. Instead, regular people made terrible flyers with Comic Sans, and the professionals used the same tools to produce better work, faster. AI-built apps are following the same pattern.

The 3-Stage Development Model

Paul offered a framework for thinking about where AI fits in the build process:

  1. Prototype and proof of concept: Anyone can do this with AI tools. Great for validating ideas quickly and cheaply.
  2. The production build: This still needs a professional. Security, scalability, accessibility, solid architecture: these are non-negotiable if people are paying to use your product.
  3. Post-launch iteration: Once a professional has laid a strong foundation, less technical people can step back in and make tweaks and improvements with AI assistance, because they're working within a well-built structure.

A Revenue-Sharing Model Worth Considering

Stu floated an interesting agency model: instead of charging a client the full upfront cost to build their app, what if you took partial ownership? The client pays a smaller retainer and upfront fee, you build and host the product, and you share in the revenue. If the app takes off, everyone wins. If it doesn't, your exposure is limited.

The key is picking partners carefully. They need to bring the marketing and audience side of the equation, because your job is the infrastructure and development. It's a model that silverorange, a Canadian agency, used successfully with e-commerce clients years ago, and it still holds up.

When to Sell

Stu sold both his apps when they hit what he calls "the plateau," that point where growth flattens and your churn rate starts catching up with new customer acquisition. At that stage, you either invest heavily to push through (hiring, scaling infrastructure, customer success teams) or you sell to someone who wants a product with proven recurring revenue.

For Stu, as a creative who'd rather build new things than manage database consultants and customer support, selling was the obvious choice. He used brokers both times, people who handle the paperwork and the letter of intent, and protect both sides of the deal. They take a cut, but they also sent chocolates, so it all evens out.

Finding the Right Ideas

With everyone building apps now, how do you pick the ones worth pursuing? Stu's answer is to not go it alone. Find partners who are excited enough about the idea to invest their time and audience. If you pitch an idea and nobody wants in, that's useful information. If someone does, you've got both validation and a distribution channel on day one.

He tested this with an AI running coach concept, reaching out to local running coaches in Jacksonville. When they responded with polite indifference, he moved on rather than sinking months into a product nobody was asking for.

Read of the Week: AI-Powered Personas

Paul shared his latest obsession: using AI to breathe new life into user personas. He's written 2 articles for Smashing Magazine that walk through the process:

  1. Functional Personas With AI: A Lean, Practical Workflow: How to build genuinely useful personas that focus on what people are trying to do, not just demographic data.
  2. AI In UX: Achieve More With Less: Broader lessons from using AI across user research, design, development, and content creation.

The approach: take all your research (surveys, interviews, call logs, analytics) plus deep online research from tools like Perplexity, feed it into AI, and generate highly detailed personas, far more detailed than the traditional single-page variety. Then load those personas into a project in ChatGPT, Claude, or Gemini, with instructions to answer questions from the persona's perspective.

The result is something you can consult in every meeting, on every decision. A product team can upload photos of next season's lineup and ask "what would our audience think?" A web team can test wireframes against the personas. Real user research still matters, of course, but this approach makes research-informed thinking available at a frequency and scale that traditional methods never could.

Marcus's Joke

"I tried to steal spaghetti from the shop, but the female guard saw me and I couldn't get pasta."

Courtesy of comedian Masai Graham. And yes, it's exactly as bad as you think.

Find The Latest Show Notes





Download audio: https://cdn.simplecast.com/audio/ae88e41b-a26d-4404-8e81-f97bca80d60d/episodes/c0a767a0-5136-494f-8ebc-491cc0254f24/audio/7fe11daa-652d-4cdf-9226-7efd57c36811/default_tc.mp3?aid=rss_feed&feed=XJ3MbVN3

AI Assisted Coding: Stop Building Features, Start Building Systems with AI With Adam Bilišič


AI Assisted Coding: Stop Building Features, Start Building Systems with AI

What separates vibe coding from truly effective AI-assisted development? In this episode, Adam Bilišič shares his framework for mastering AI-augmented coding, walking through five distinct levels that take developers from basic prompting to building autonomous multi-agent systems.

Vibe Coding vs AI-Augmented Coding: A Critical Distinction

"The person who is actually creating the app doesn't have to have in-depth overview or understanding of how the app works in the background. They're essentially a manual tester of their own application, but they don't know how the data structure is, what are the best practices, or the security aspects."


Adam draws a clear line between vibe coding and AI-augmented coding. Vibe coding allows non-developers to create functional applications without understanding the underlying architecture—useful for product owners to create visual prototypes or help clients visualize their ideas. 

AI-augmented coding, however, is what professional software engineers need to master: using AI tools while maintaining full understanding of the system's architecture, security implications, and best practices. The key difference is that augmented coding lets you delegate repetitive work while retaining deep knowledge of what's happening under the hood.

From Building Features to Building Systems

"When you start building systems, instead of thinking 'how can I solve this feature,' you are thinking 'how can I create either a skill, command, sub-agent, or other things which these tools offer, to then do this thing consistently again and again without repetition.'"


The fundamental mindset shift in AI-augmented coding is moving from feature-level thinking to systems-level thinking. Rather than treating each task as a one-off prompt, experienced practitioners capture their thinking process into reusable recipes. This includes documenting how to refactor specific components, creating templates for common patterns, and building skills that encode your decision-making process. The goal is translating your coding practices into something the AI can repeatedly execute for any new feature.
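A captured recipe can be as simple as a parameterized template. This is a hedged sketch of the idea: the skill names and template text below are invented for illustration and are not the actual skill format of Claude Code, Cursor, or any other tool.

```python
# A "skill" encodes a repeatable process once, then runs for any new feature.
SKILLS = {
    "refactor-component": (
        "Refactor {component} to our standards: extract hooks, "
        "add prop types, keep behavior identical, update tests in {test_dir}."
    ),
    "add-endpoint": (
        "Add a {method} endpoint at {path}: validate input, "
        "return typed errors, add an integration test."
    ),
}

def run_skill(name: str, **params) -> str:
    """Render a captured recipe into a concrete prompt for the agent."""
    return SKILLS[name].format(**params)

prompt = run_skill("add-endpoint", method="POST", path="/invoices")
print(prompt)
```

The decision-making lives in the template, written once; each new feature only supplies the parameters, which is the systems-level shift Adam describes.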

Context Management: The Critical Skill For Working With AI

"People have this tendency to install everything they see on Reddit. They never check what is then loaded within the context just when they open the coding agent. You can check it, and suddenly you see 40 or 50% of your context is taken just by MCPs, and you didn't do anything yet."


One of the most overlooked aspects of AI-assisted coding is context management. Adam reveals that many developers unknowingly fill their context window with MCP (Model Context Protocol) tools they don't need for the current task. The solution is strategic use of sub-agents: when your orchestrator calls a front-end sub-agent, it gets access to Playwright for browser testing, while your backend agent doesn't need that context overhead. Understanding how to allocate context across specialized agents dramatically improves results.
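A quick back-of-the-envelope check makes the point concrete. The schema sizes and the ~4 characters-per-token heuristic below are illustrative assumptions, not measurements of any real MCP server:

```python
# Rough estimate of how much of a context window MCP tool schemas consume
# before the agent has done any work.
CONTEXT_WINDOW_TOKENS = 200_000

mcp_tool_schemas = {                 # stand-ins for installed MCP tool definitions
    "playwright": "x" * 120_000,
    "github":     "x" * 80_000,
    "postgres":   "x" * 40_000,
}

def est_tokens(text: str) -> int:
    return len(text) // 4            # common ~4 chars/token rule of thumb

used = sum(est_tokens(s) for s in mcp_tool_schemas.values())
pct = 100 * used / CONTEXT_WINDOW_TOKENS
print(f"MCP schemas: ~{used:,} tokens ({pct:.0f}% of the window) before any work")
```

Running a check like this against your actual agent configuration is the cheapest way to see whether installed-but-unused MCPs are eating your context budget.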

The Five Levels of AI-Augmented Coding

"If you didn't catch up or change your opinion in the last 2-3 years, I would say we are getting to the point where it will be kind of last chance to do so, because the technology is evolving so fast."


Adam outlines a progression from beginner to expert:


  • Level 1 - Master of Prompts: Learning to write effective prompts, but constantly repeating context about architecture and preferences

  • Level 2 - Configuration Expert: Using files like .cursorrules or CLAUDE.md to codify rules the agent should always follow

  • Level 3 - Context Master: Understanding how to manage context efficiently, using MCPs strategically, creating markdown files for reusable information

  • Level 4 - Automation Master: Creating custom commands, skills, and sub-agents to automate repetitive workflows

  • Level 5 - The Orchestrator: Building systems where a main orchestrator delegates to specialized sub-agents, each running in their own context window

The Power of Specialized Sub-Agents

"The sub-agent runs in his own context window, so it's not polluted by whatever the orchestrator was doing. The orchestrator needs to give him enough information so it can do its work."


At the highest level, developers create virtual teams of specialized agents. The orchestrator understands which sub-agent to call for front-end work, which for backend, and which for testing. Each agent operates in a clean context, focused on its specific domain. When the tester finds issues, it reports back to the orchestrator, which can spin up the appropriate agent to fix problems. This creates a self-correcting development loop that dramatically increases throughput.


In this episode, we refer to the Claude Code subreddit and IndyDevDan's YouTube channel for learning resources.


About Adam Bilišič

Adam Bilišič is a former CTO of a Swiss company with over 12 years of professional experience in software development, primarily working with Swiss clients. He is now the CEO of NodeonLabs, where he focuses on building AI-powered solutions and educating companies on how to effectively use AI tools, coding agents, and how to build their own custom agents.


You can connect with Adam Bilišič on LinkedIn and learn more at nodeonlabs.com. Download his free guide on the five levels of AI-augmented coding at nodeonlabs.com/ai-trainings/ai-augmented-coding#free-guide.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260217_Adam_Bilisic_Tue.mp3?dest-id=246429
Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

XAML Source-Generation in .NET MAUI makes your app so much better

1 Share
From: Daniel Hindrikes
Duration: 5:26
Views: 71

With .NET 10, XAML Source Generation has arrived in .NET MAUI — and it changes more than you might think.

In this video, Daniel breaks down what XAML Source Generation is, how it works, and why it can improve performance, reliability, and your overall development experience. If you're building apps with .NET MAUI, this is something you don’t want to miss.


e240 – From Sofia to London: We Chat with Boris Hristov About the Journey of the Present to Succeed Conference


Show Notes – Episode #240

In this latest episode of The Presentation Podcast, our hosts, Troy Chollar of TLC Creative, Sandy Johnson, and Nolan Haimes are joined by Boris Hristov, founder of the Present to Succeed conference.

Listen in as they discuss the unique dual-location format for 2026 (taking place in both London and Sofia!). Boris shares insights on speaker selection, branding, and their high production values for the conference. Plus tips for using AI and design tools in presentations. And, bonus! Listeners will receive a 20% discount code for tickets, so be sure to get The Presentation Podcast listener promo code!

Highlights:

  • Overview of the Present to Succeed conference and its 2026 unique dual-location format.
  • Discussion of the conference’s history and growth since its inception in 2021.
  • Challenges and opportunities of expanding the conference to a new location – London.
  • Details about the hybrid format for in-person and online attendances.
  • Differences in content focus and structure between the Sofia and London events.
  • Speaker lineup and thematic focus for each event.
  • Emphasis on high production values and immersive experiences at the conferences.
  • Special discounts and offers available for attendees.

Resources from this Episode:  

Show Suggestions? Questions for your Hosts?

Email us at: info@thepresentationpodcast.com

New Episodes 1st and 3rd Tuesday Every Month

Thanks for joining us!

The post e240 – From Sofia to London: We Chat with Boris Hristov About the Journey of the Present to Succeed Conference appeared first on The Presentation Podcast.
