
Code Isn’t Slowing Your Project Down, Communication Is


You get an idea and a moment later, the code’s already in your head. And before you know it – bam! – it’s on GitHub.

But hey, it was your side project, not your day job. Because once you step into a “real job,” things get complicated by the minute.

At first, you might think, Oh, we refine the task, put it in the next sprint, implement it, do a PR, testing… and it’ll be done in about two weeks. No bam! this time, but it’s still fast… comparatively.

That speed reminds you of working alone (or with a small, familiar team) where ideas move almost as quickly as they appear. Code follows a thought, and suddenly it’s in the repo.

But once an idea crosses teams or touches parts you don’t own, time stretches. Days turn into weeks, weeks into months – not because the code is harder, but because communication is: unfamiliar code, processes, priorities, even languages.

Who you work with is EVERYTHING

Now I’m going to challenge you even more: how long would it take to implement a system‑wide feature? I’m not talking about some fancy, super‑complex, revolutionary refactor, just something that stretches across many components that aren’t yours. You don’t know them. You don’t even know the maintainers.

And let’s be honest, you don’t want to talk – you’re a programmer, after all, right? You think, Well, a couple of sprints… maybe a few months… if you’re feeling optimistic.

When you’re working alone, the only communication is between your neurons, and that’s pretty fast, isn’t it?

The same goes when you collaborate with teammates you know well. You understand each other, don’t need to repeat much, and many things are already covered by your working agreements (whether written down or just implicit) so you barely have to mention them.

But step into a new or larger team, and suddenly communication is full of obstacles. You don’t know the people, you don’t know how they work, they might speak another language – or worse, code in another language! Cooperation suddenly feels almost impossible.

It will take months, because you need to talk to the others, discuss multiple opinions, understand different perspectives, and reach a number of compromises. The implementation itself is easy-peasy.

The less we know about the project, the more we try to handle it ourselves

Recently we’ve been implementing a very custom solution for a big client. We generally don’t provide custom solutions, but sometimes customers, especially big ones, manage to convince us somehow.

For most of our integrations, we rely on two key components:

  1. Component A handles queuing, throttling, and light request processing. 
  2. Component B handles more substantial model transformations and aligns our data with the external provider’s protocol.

In this case, no major model manipulation was required. However, we couldn’t simply send data from A to the provider. We were aware of the configuration possibilities in Component A.

So the question arose: Should we stick to our usual A+B setup, following the well‑worn path? Or should we step outside our comfort zone, reconfigure Component A, and see if we can eliminate the need for Component B altogether?

If both were ours, the answer would be easy. But we didn’t know much about A – we didn’t even know the maintainers.

The temptation to stick with the usual path grew. Then someone had the clever idea to explore Component A using Claude Code. Now that we knew it, we were AI-powered! We can do anything, even talk to strangers! Bring them on! we thought. This time, we did it the right way.

Jokes aside, Components A and B (and their ownership by different teams) are a classic example of Conway’s Law: the tendency to copy organizational structure into system design. The harder it is for two people or teams to collaborate, the more likely they are to build separate, siloed parts of the system (have you heard about silos?).

Still doubt that poor communication shapes architecture? Reread the first paragraph – you probably didn’t even notice it the first time. Or check out the insightful presentation The Only Unbreakable Law.

Still unconvinced? Here’s a knockout argument: think back to the last time you tried to sort something out at a government office. How many doors did you have to knock on? How many forms did you have to fill out? How many organizational units did you have to visit? And all for just one issue…

Permit A38: the dilemma of artificial intelligence

In a well-designed company, this doesn’t always need to be an issue. Why would a corporate lawyer need to talk to a DevOps engineer? Or an accountant to a UI tester?

But when it comes to software, the boundaries aren’t always so clear. In our example, the less we know about the people maintaining a given component, the more likely we are to misuse it and try to solve the problem on our own. That often leads first to overcomplicated architecture, and then to technical and organizational silos.

This is how my team fought Conway’s Law

You’re probably thinking there must be a way around this, and you’re right. There’s an approach called the Inverse Conway Maneuver: if the organization shapes the architecture, why not design teams to build the system we want?

Sounds clever, but it’s not easy, especially in established companies.

Management must be tech-savvy and understand the right architecture, and engineers need to grasp the target design and reasoning, since changes often involve reorganizing the codebase.

Organizational changes don’t have to impact the whole company; they can be applied on a smaller scale. Here’s a simple example from my experience.

We used to work on a product end-to-end and our area was defined by the product, not the architecture. Sounds nice, right?

However, we were backend developers, and since the product included front-end applications, we also had to develop and maintain the user-facing parts. We were never front-end experts, so we struggled a lot. We knew the product and its business specifics, but our tech gaps were a constant drag.

Eventually, we decided to hand it over to front-end experts. What a relief! Product Managers got what they wanted on time, we backend developers could focus on our strengths, and frontend developers had fun fixing our mistakes.

We thought everything was sorted, until later, when we all got together in one room, despite being geographically distributed, to talk about the product. During that session, we uncovered a few weird issues we hadn’t noticed before.

The key was sharing knowledge about how each part worked and how they worked together, which revealed bugs invisible when working in isolation. It was eye-opening, and I can’t overlook another benefit: building team spirit through shared activities and discoveries, which greatly improved our future collaboration.

Sit down and listen to other teams

So here’s the thing: Conway’s Law isn’t going away. No matter how hard you try, no matter how many architecture diagrams you draw, your system will always reflect how your teams communicate… or don’t.

But before you panic, remember the government office analogy: you’re knocking on doors because that’s how the organization is structured, not because doors are inherently evil. The same applies here. Yes, your architecture will mirror organizational boundaries, but it’s on you whether you knock or find a workaround.

Every time you sit in a room with people from another team and actually listen, every time you take the time to understand how the system really works, you’re actively shaping the architecture – whether you realize it or not.

The post Code Isn’t Slowing Your Project Down, Communication Is appeared first on ShiftMag.


Issue 742


Comment

I received plenty of feedback on my comment from last week about Swift for Wasm and Windows, and wanted to start this week by highlighting the projects that I heard about.

First up, Yuta Saito told me how Goodnotes has built their whole tech stack on Wasm, enabling them to re-use more than 100k lines of code between platforms. They are even hiring for a couple of roles (1, 2), if you’re interested in using Swift and Wasm together every day. Pedro Gomez also spoke about how they’re using it at NSSpain a few years ago.

Also on the Wasm side, Geoff Pado’s Barcode Generator for his Barc app uses Wasm to generate the barcode images. The project is even open source if you fancy learning from a real-world example.

I also heard about a significant Swift on Windows project, Spark, and I chatted briefly by email with Viktor Gedzenko and Alexander Smarus from Readdle this week about it. I first remember Spark as a native AppKit app back when it launched in 2016, and I also remember them launching a rewrite as a cross-platform Electron app in 2022. What I didn’t realise is that much of that Electron app is backed by Swift code. They use Electron’s N-API to interface with Swift, and there’s even a full tutorial available if you’re curious to give it a go. The process they described certainly had a few rough edges and they were clearly very early adopters, starting to use Swift on Windows in 2019. That said, both Viktor and Alexander were enthusiastic about their decision to use it.

Last week I was curious as to what the next “flagship” product might be for Swift on Windows, and you might be asking yourself whether an app with an Electron front-end can ever be that. I absolutely think it can. I know I refer to my comment on things being 100% anything too often, but I still believe that we grow as a community whenever we open ourselves to the possibility of integrating Swift with other technologies rather than trying to keep everything Swift-only.

– Dave Verwer

Understanding Apple’s Retention Messaging API

Apple’s new Retention Messaging API lets your app show personalized messages, offers, or alternative subscription options right when a user is about to cancel in iOS subscription settings. This opens a powerful new way to reduce churn by engaging subscribers at the exact moment they’re deciding whether to leave, and RevenueCat handles the real-time responses, localization, and performance limits for you so you don’t have to build backend infrastructure from scratch. Learn how Apple’s Retention Messaging API works.

News

Xcode 26.3 unlocks the power of agentic coding

The fact that the Xcode 26.3 release candidate has built-in agent support for Claude Code and Codex has been widely covered this week, so it might not be news to you, but I’ll still link to it along with Apple’s introductory video. I’ll likely write more about this in next week’s newsletter. I have a half-formed idea already, so let’s see what 7 days does to it! 😂


Exploring AI Driven Coding: Using Xcode 26.3 MCP Tools in Cursor, Claude Code and Codex

What’s potentially even more exciting than the built-in agent support in Xcode 26.3 is what Rudrank Riyam discovered by digging a little deeper into the release. The new release allows Xcode to interface with any agent or agentic IDE, not just the Claude Code and Codex agents that are now officially integrated into Xcode’s UI. Apple even documented the feature, although that article seems to be offline at the time of writing, which may mean it was published too early and has since been taken down.


Upcoming SDK minimum requirements

If you’ve been holding off on building your releases with the version 26 SDKs for iOS, iPadOS, tvOS, visionOS, and watchOS, now is the time to start preparing. The deadline for adopting the new SDKs is the 28th of April.

Code

HealthQL: SQL for Apple HealthKit

This new package from Grant Isom doesn’t need much summarising, as the purpose is right there in the title. Simple access to HealthKit data from a SQL-like language. There’s a DSL included, too, in case you crave type safety.

Design

Five ways we’ve been using our MCP server

I had missed that Sketch added MCP support at the end of last year, and it’s not just for letting your agent see what you’re working on. It gives full read/write access to your entire document hierarchy, and in this article Freddie Harrison details a selection of workflows that the MCP server enables.

Business and Marketing

New friendly place to share updates about your indie apps

This is a good idea from Filip Němeček and seems to already have some traction. If you’d like to promote your app to other developers, here’s a good place to start.


Birwaz

Omar Albeik:

I tried the existing tools but none of them handled RTL or complex scripts well, so I built Birwaz to fix it for myself.

It runs entirely in the browser, no uploads, no accounts, no backend. You pick your device, write your text, import translations (it reads .xcstrings natively), and it renders everything live across all your languages at once. It also uploads directly to App Store Connect through the API so you skip the drag-and-drop dance.

It has a beautiful UI and helps with a persistently annoying task. ❤️

And finally...

A keyboard for work that’s mysterious and important. Please enjoy each keystroke equally. 💼


How AI Goes to Work

Photo by Markus Winkler on Pexels.com

Sometimes it takes a single piece of a jigsaw puzzle for the whole picture to take shape. And that happened earlier this weekend. I was able to crystallize a lot of what I have been thinking about technologies colloquially known as “Artificial Intelligence.” And it happened because of a 40-year-old piece of software that is used by tens of millions of people every day.

I’ve been writing about artificial intelligence for nearly a decade. In 2016, I argued in The New Yorker that we should think of it as augmented intelligence. Software that helps us deal with a world outpacing our capacity to process it. In 2022, I made the same case in The Spectator. The argument never changed. The world just took a while to catch up.

This week, I downloaded Claude for Excel and started working on a spreadsheet tracking my spending over the past twelve months. I took data from my credit cards and my bank accounts. I wanted to understand my spending patterns, and how I could budget smarter. And in doing so, something clicked. Under my breath, I said it: Finally!

I am no spreadsheet jockey. Never have been. I’ve always leaned on colleagues and friends who helped me build models, check assumptions, validate my cockamamie ideas. Now I work mostly by myself so I don’t have that luxury of colleagues anymore. My friends are busy. I am back to learning, adapting, and becoming more self-reliant. Yes, that includes getting intimate with Excel. Oh god!

After an hour of mucking about with Claude for Excel, it became clear that something was different. I wasn’t spending my time crafting elaborate prompts. I was just working. The intelligence was just hovering to help me. Right there, inside the workflow, simply augmenting what I was doing.

With all the information and data, I asked Claude to analyze and come up with a model for smart spending in 2026. I also had my portfolio details. Claude has access to the latest market data, from multiple sources. An hour or so later, I had methodically arrived at a working model. It was not the perfect model, but it was good enough for me to make common sense decisions.

Again, finally!

This is the easy on-ramp to the big bad scary world of AI, which is going to upend (and arguably already is upending) our information society. What Claude did inside Excel, I have experienced as a photographer. I use tools such as Adobe Photoshop and Topaz AI for editing my photos. These two have been a playground for what one could describe as “introductory AI.” They both started out pretty bad, but slowly, with every new release, they have improved a lot.

Adobe has come under criticism for being the platform for generating artificial photos. It is a creative and moral debate, left for another time and for someone else. It was easy for me, though, to see how both Adobe and Topaz have improved my everyday experience. I used to carry multiple lenses when I went on a photo adventure. Now I rarely take anything more than just my fixed lens Leica Q3 43. It is a 61-megapixel camera that produces big beefy files that can be easily upscaled for cropping by Topaz. To some that might be an issue, but I don’t see a problem with that. It is just AI at work.

Removing dust, expanding photos, upscaling resolution, intelligent masking, and fixing blemishes are all part of my workflow. These are tasks that took hours five years ago. Now, a handful of clicks. I am working smarter and faster. It is all AI, but I never think of it as “using AI.” It is just better software that allows me to do what I used to do faster and better. It also has allowed me to think even more creatively. Claude for Excel felt the same way.

We’ve spent the past few years obsessing over frontier models. With every new release the difference between the new models from Anthropic, OpenAI, and Google’s Gemini becomes less distinguishable. They are all fairly capable. Any edge one has over others is relatively short-lived. Whose LLM is larger, faster, more capable is really beside the point. In the end it all boils down to what we can really do with all that power.

This new power should turbocharge our capabilities. And for that to happen, it has to live inside the tools we already use. Smaller, focused, embedded. It might not feel sexy, but when it shows up in spreadsheets, document editors, and email, the ordinary software of daily work, it becomes something we quickly get used to, and come to depend on. I’ve long been calling it augmented intelligence. Maybe it’s time to redefine it as “embedded intelligence.”


The history of technology offers great lessons and a rough road map for navigating an unseen future. I have been looking at how software in general has evolved. While I am talking about software used for “business,” it holds true for non-business software as well.

Before Software as a Service (SaaS) became the dominant form of delivering software, software came as a bundle, first on disks, then as downloads, to run on servers. These servers first ran inside the company’s four walls, and then eventually inside a data center. These were one-size-fits-all software packages. As a company, you had developers customize this software to fit your specific corporate needs. You created a customized workflow. This was expensive, clunky, and slow.

But since everyone was moving away from the even more expensive and inflexible world of mainframe computing, no one minded. We had the 1990s “client server boom” that made software companies incredibly hot. Computing was moving from centralized mainframes to distributed systems and personal computers.

A decade or so later came SaaS. It delivered the same software via browser, and charged a monthly fee based on the number of people using the software. To be clear, I am speaking colloquially. This soon evolved into SaaS for anything and everything. Specialized tools for every function. Different industries needed different things, so suddenly we had many different kinds of the “same software.” This was (and still is) the “let many workflows bloom” era. That era has run its course.

Fast-forward to today. We are in the next phase of “software.” With AI, we will soon live in a world where the workflows will be customized not just based on corporate but also on personal needs. There is a common belief that it will all happen inside the chatbots—maybe—but don’t hold your breath. What is a more likely scenario, at least in the near term, is what I experienced with Claude for Excel.

There was no need to remake the platform (Excel) or write any custom code. I didn’t have to learn yet another tool. I didn’t need to change Excel. I didn’t learn a new interface. AI showed up inside the tool I was already using. It allowed me to just adopt it. And adapt to it. Without much friction.

When I thought about what this specific piece of software really was, and why what it and Topaz AI did for me felt different, two things stood out. First, they are trained for very specific tasks. And second, they tap into sources of knowledge that keep helping me do the job better. Topaz AI, for example, updates every so often, as the models improve. They are learning from actual users, new technological approaches are coming to the fore, and computing is more capable. As a result, the improvements being offered to me are constant.

Claude for Excel does something similar. I always have access to the most updated version of Claude, but restricted to the spreadsheet. It knows every tab, every formula, every connection between cells. Structure, not just content. It is not a chatbot that sees pasted text.

The other interesting aspect is that Anthropic has been signing deals with data providers that people in finance actually use. Stock market data, PitchBook data, and other real data sources. This is not scraped from websites; it is verifiable information from sources that can be relied on for work. This is the key differentiation for “AI” at work.

Google, too, rolled Gemini into every tier of Workspace earlier this year. Is it very good? I don’t think so. Will it be good? I can bet good money on it. But since I already pay for Workspace and it is not a separate product, I don’t mind using it. Do I like the intrusion? Not really. It is subtle enough for me to ignore it if I don’t want to deal with it. (Also, How to turn off AI Summaries in Gmail.)

My friend Adam Bly’s System does the same for the doctors at Mayo Clinic. It is not making doctors go to a chatbot and ask questions. Instead it is meeting doctors where they are. DeepMind’s spinoff Isomorphic Labs embeds intelligence into the drug discovery workflow. It is woven into existing work, not bolted on top. Embedded.


AI’s progression is following the arc of any transformative technology. Every new technology prompts you to ask three questions. The first question is existential. The second is epistemological. The third is practical—that is when it becomes invisible. Let’s use Uber as an example.

Existential question: should we get into strangers’ cars? Epistemological question: Is this better than taxis, or just a regulatory arbitrage? Today, we simply open the app, order a ride, and don’t even think about it. The question is settled. It is part of our lives. It is no longer “Gig Economy” or “Ride Share.”

I saw it with cloud computing. As someone who was there at the genesis of the “cloud revolution,” the most common question I heard at my debut Structure conference was “should we put data on someone else’s servers?” Then it was “is this more reliable than our own data centers, or should we build our own cloud?” In the early days, only a few use cases for the cloud emerged. Dropbox and Netflix, for example. With the emergence of the app store, suddenly everything needed the cloud and what it made possible. Fast forward to now, and the cloud is just part of the technology infrastructure. We are seeing this with self-driving cars (essentially AI applied to mobility and robotics workflows) as well.

AI, from my vantage point, is slowly entering that all-important third phase. It is not there yet, but it sure is knocking on the door. The question will no longer be whether to use it. The question is when it becomes invisible and part of everyday life.

The timing arc of every new technology follows a similar path. Silicon Valley turns out to be ahead by a few years, about seven or so. This prompts everyone outside of “Babylon” to think of every new trend as a bubble. A lot of it is indeed simply nonsense. On-demand parking and scooters, for example. But it’s part of experimentation. Seven years or so after they launched, the questions about Uber and Amazon Web Services started to fade into the background. Normies started using the rideshare services.

You want to see where we are going? Then look no further than the most talked about new thing, OpenClaw, an AI personal assistant previously known as Clawdbot. You can use it via a chat, and it sits on your local computer. You give OpenClaw access to a lot of your applications and services—it is a big privacy and security nightmare—and it watches your conversations, learns context, and handles tasks when asked. Scheduling, summarizing, pulling information, coordinating across tools. The stuff that used to eat up hours of your day is done, without the drudgery. Others are using it in completely weird ways. Like figuring out how to tweet out what they are doing on Polymarket, for example.

As an old timer this reminds me of Yahoo Pipes. They allowed people to dream of an interconnected web. So did services like IFTTT (If This Then That). Apple’s Shortcuts remain the big missed opportunity. OpenClaw is on that curve. Security problems or not, it is showing people what is possible. No wonder it got popular fast. Don’t look at the project as a standalone. But think of it as a sign of things to come.

It is so hot that when there was an OpenClaw meet-up in San Francisco earlier this month, the line was around the block. It had a raw energy we have not seen in these parts for a long time. Is it there yet? No. But read the tea leaves. Longtime Mac developer Rui Carmo explains why on his blog. It’s “worth a look because it’s aiming at the right interface: chat is where people already are, and turning that into a command line for personal admin is a sensible direction.”

You can describe it as a crew of “Agents” doing all the work to make life easier for you. Or you can call it assistants. Or API calls. It doesn’t matter. What OpenClaw shows is how AI will work in the background. And that is what the “AI” future looks like for normal people. Not a separate AI app. Intelligence woven into tools you already use. Doing work you used to do yourself. Or used to hire someone to do—done by software. It feels very much akin to using widgets in 2005 and apps in 2010. A new way to do things.

I don’t want to go all rah-rah about this stuff. I am not naive about what comes next. The grunt work was the training. If the grunt work goes away, how do young people learn? They were learning how everything worked. It was training. The more models you make, the more you can intuitively understand what is right or wrong with a model. The reliance on automation makes people lose their instincts. Just look at how people blindly follow the directions on “Maps.” I often think about “What will be left for the human?” I don’t have an answer. If you do, leave a comment.


I have arrived at my point of view because for decades I have been seeing a continuous and endless explosion of data. The number of machines and sensors. Chips inside everything. The inevitable digitization of everything. With the arrival of always-on smartphones, and ever faster networks, it was clear that our ability to wrangle information at human scale was over.

Now we live in a new world where everything moves at the speed of the network, and that speed is growing faster and faster. Data and machine learning were struggling to keep up, just as we were unable to make sense of it. And for that we desperately needed new ways of interacting with information.

The Silicon Valley hype machine has branded it artificial intelligence. But my more pragmatic way of seeing the world assumes the obvious need for these collections of technologies. I don’t see impending doom. Just as I don’t see an endless boom. Bubble or not, we are shifting gears into this world of information interaction.

My simpler explanation of “embedded intelligence” to myself makes me step away from the headlines and look at the present and the future in more realistic terms. My bet is that in five years, it will all be very different anyway. It always is. I am a believer in the power of silicon. When we have newer, more capable silicon, and more networks, we will end up with ever more capable computers in our hands. And the future will change.

For now, what I call embedded intelligence is a sensible on-ramp to the future. The hype may be about the frontier models. The disruption really is in the workflow.

February 6, 2026


Reverse Engineering Your Software Architecture with Claude Code to Help Claude Code

This post first appeared on Nick Tune’s Medium page and is being republished here with the author’s permission.
Example architecture flow reverse-engineered by Claude Code

I have been using Claude Code for a variety of purposes, and one thing I’ve realized is that the more it understands about the functionality of the system (the domain, the use cases, the end-to-end flows), the more it can help me.

For example, when I paste a production error log, Claude can read the stack trace, identify the affected code, and tell me if there is a bug. But when the issue is more complex, like a customer support ticket, and there is no stack trace, Claude is less useful.

The main challenge is that end-to-end processes are long and complex, spanning many code repositories. So just asking Claude Code to analyze a single repository wasn’t going to work (and the default /init wasn’t producing sufficient detail even for this single codebase).

So I decided to use Claude Code to analyze the system and map out the end-to-end flows relevant to the domain I work in, so that Claude Code (and humans) can use them to handle more complex challenges.

This post shares what I knocked together in one day, building on knowledge and tooling I’ve already gained from real work examples and experiments.

This is one post in a series. You can find the other posts here:
https://medium.com/nick-tune-tech-strategy-blog/software-architecture-as-living-documentation-series-index-post-9f5ff1d3dc07

This post was written 100% by me. I asked Claude to generate the anonymized example at the end mirroring the type of content and style used in the real examples I created.

Setting the Initial Context

To begin my project, I created a very light requirements document:

# AI Architecture Analysis

This document contains the instructions for an important task - using AI to define the architecture of this system, so that it can be used by humans and AI agents to more easily understand the system.

## Objective

Map out all of the flows that this application is involved in (use sub agents where necessary to work in parallel). A flow should map out the end-to-end process from an action in the UI (in the [redacted] repo) to a BFF, to backend APIs, or flows that are triggered by events.

Flows should be documented in Mermaid format to allow AI agents to understand, for versioning (in git), and for easy visualization.

## Requirements

Each flow should have a descriptive name and should include:

1. The URL path of the page where the interaction is triggered

2. The URL path of the BFF endpoint (and the repository it lives in)

3. The URL path of calls made to downstream services

4. Any database interactions

5. Any events produced or consumed (full name of event e.g. orders.orderPlaced)

6. Consumers of events (if easy to identify)

7. Any workflows triggered (like the synchronizeOrder)

To do this, you will need to look in other repositories which can be found in the parent folder. The GitHub client can also be used if necessary.

The list of flows should live in ../flows/index.md and each individual flow should be defined in a separate folder.

# Where to find information

- /docs/architecture contains various folders describing the design of this system and domain knowledge

- Each API project in this repository ([redacted], [redacted]) has an openapi.json. This must be used to identify all flows and validate. The [redacted] and [redacted] repositories also have openapi spec files

- The entities in the domain [redacted], [redacted], [redacted] have methods that clearly describe the domain operations that can be performed on them. Equally, each operation is invoked from a use case that clearly describes the use case

The output I want is end-to-end flows like:
UI -> BFF -> API -> update DB -> publish event -> handler -> use case -> publish event -> …

I don’t want 10 different kinds of architecture diagrams and different levels of detail. I want Claude Code to understand the behavior of the system so it can identify anomalies (by looking at production data and logs) and analyze the impact of potential changes.

I also created some light information about the system in these two files:

System design files

The domain concepts file explains the entities in the system. Very brief explanation. The system overview file explains the relationship between this codebase and other repositories, which is crucial. Again, it’s very light—a bullet list of repository names and one or two sentences describing their relationship to this one.
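As an illustration (mine, not from the post), a system overview file in that spirit could be as small as this, with repository names borrowed from the anonymized ecommerce example later in this article:

```
# System Overview

- **order-api**: this repository; owns the order domain and its APIs.
- **storefront-app**: the customer-facing frontend; calls our BFF endpoints.
- **shipping-api**: downstream fulfillment service; consumes our order events.
```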

Searching across multiple repositories

The instructions for this task live inside the main repository of the domain I work in. This is the center of the universe for this agent, but it needs to be able to read other repositories to join up the end-to-end flow.

The solution I use for this is in the description above:

To do this, you will need to look in other repositories which can be found in the parent folder. The GitHub client can also be used if necessary.

I give Claude the following permissions in .claude/settings.local.json and it can then access all the repositories on machine or use the GitHub client if it thinks there are repositories I don’t have available locally:

"permissions": {
"allow": [
...
"Read(//Users/nicktune/code/**)",
...

Telling Claude where to look

You’ll notice the requirements also give Claude tips on where to look for key information, like OpenAPI spec files, which are like an index of the operations the application supports.

This is useful as a validation mechanism later in the flow. I would ask Claude, “List all of the API endpoints and events produced or consumed by this application—are there any that aren’t part of any flows?” I can then see if we may have missed anything important.
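That prompt-based check can also be scripted deterministically. Here is a rough sketch of the same idea (my own, not from the post), assuming the openapi.json files and the flows folder layout described in the requirements document:

```typescript
// coverage-check.ts: flag OpenAPI endpoints that no documented flow mentions.
// File locations are assumptions mirroring the layout described in this post.
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

// Collect "METHOD /path" pairs from the spec.
const spec = JSON.parse(readFileSync("openapi.json", "utf8"));
const endpoints: string[] = [];
for (const [path, methods] of Object.entries(spec.paths ?? {})) {
  for (const method of Object.keys(methods as Record<string, unknown>)) {
    endpoints.push(`${method.toUpperCase()} ${path}`);
  }
}

// Concatenate every flow document so we can search it for endpoint mentions.
const flowsDir = "docs/architecture/flows";
let flowText = "";
for (const entry of readdirSync(flowsDir, { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;
  for (const file of readdirSync(join(flowsDir, entry.name))) {
    flowText += readFileSync(join(flowsDir, entry.name, file), "utf8");
  }
}

// Report endpoints that never appear in any flow.
const missing = endpoints.filter((e) => !flowText.includes(e.split(" ")[1]));
console.log(
  missing.length ? `Not covered by any flow:\n${missing.join("\n")}` : "All endpoints covered"
);
```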

Mapping the First Flow

I put Claude in plan mode and asked it to read the file. It then popped up a short questionnaire asking me about my needs and preferences. One of the questions it asked was process-related: Should we map out the whole system in parallel, work step-by-step, etc.?

I said, let’s do the first one together and use this as a template for the others to follow.

It took about two hours to build the first flow as I reviewed what Claude produced and gave feedback on what I needed. For example, at first it created a sequence diagram which looked nice but was too hard to read for complex flows that involve many repositories.

Eventually, we settled on horizontal flow diagrams where each repository is a container, and we defined what the valid step types could be. At first, it went too granular with the steps, adding individual method calls.

### diagram.mermaid Requirements

**CRITICAL RULES:**

1. **File Format**: Must be pure Mermaid syntax with `.mermaid` extension
- NO markdown headers
- NO markdown code fences (no ` ```mermaid ` wrapper)
- Starts directly with `flowchart LR`


2. **Use Swimlane Format**: `flowchart LR` (left-to-right with horizontal swimlanes)
- Each repository is a horizontal swimlane (subgraph)
- Flow progresses left to right
- Swimlane labels should be prominent (use emoji for visibility)
- Example: `subgraph systemA["🔧 systemA"]`


3. **Systems as Containers**:
- Each repository MUST be a `subgraph` (horizontal swimlane)
- Repository name is the subgraph label
- Operations are nodes inside the subgraph
- Use `direction LR` inside each subgraph


4. **Valid Step Types** - A step in the diagram can ONLY be one of the following:
- **HTTP Endpoint**: Full endpoint path (e.g., `POST /blah/{blahId}/subblah`)
- **Aggregate Method Call**: Domain method on an aggregate (e.g., `Order.place`, `Shipping.organiz`)
- **Database Operation**: Shown with cylinder shape `[(Database: INSERT order)]`
- **Event Publication**: (e.g., `Publish: private.ordering.order.placed`)
- **Workflow Trigger**: Must be labeled as workflow (e.g., `⚙ Workflow: syncOrders`)
- **Workflow Step**: Any step inside a workflow MUST include the workflow name as prefix (e.g., `syncOrderWorkflow: Update legacy order`, `updateOrderInfo: POST /legacy/fill-order`)
- **Lambda Invocation**: (e.g., `Lambda: blah-blah-lambda-blah-blah`)
- **UI Actions**: User interactions (e.g., `Show modal form`, `User enters firstName, lastName`)

I’ve added an anonymized flow at the end of this document.

I also had to make some corrections to ensure Claude was looking in all the right places and understood what certain concepts mean in other parts of our system; we weren’t just iterating on the diagram for two hours.

Choosing the Next Flow

After the first flow, the next flows went much faster. I verified the output of each flow and gave Claude feedback, but generally around 15 minutes, and most of the time it was working so I could do other things while waiting.

One of the interesting challenges was deciding which flows are actually needed? What is a flow? Where should a flow start and end? What about relationships between flows?

Here I was in the driving seat quite a bit. I asked Claude to propose flows (just list them before analyzing) and then I asked it to show me how each API endpoint and event fit into the flows, and we used that to iterate a bit.

One of the things I had to do after Claude had produced the first draft was to ask, “Are you sure there are no other consumers for these events that are not listed here?” It would then do a more thorough search and sometimes find consumers in repositories I didn’t have locally. (It would use GitHub search.)

Learning value

As I reviewed each use case, I was learning things about the system that I didn’t fully understand, or nuances I wasn’t aware of. This alone would have justified all the effort I spent on this.

Then I started to imagine the value for people who are new to a codebase or a legacy system that nobody understands anymore. Or maybe someone who works in a different team and needs to figure out how a bug or a change in their domain is related to other domains.

Evolving the Requirements

As we went through the process, I regularly told Claude to update the requirements file. So after we’d finished the first flow we had instructions like this added to the file:

## Documentation Structure

Each flow must be documented in a separate folder with the following structure:

```
docs/architecture/flows/[flow-name]/
├── README.md # Complete documentation (all content in one file)
└── diagram.mermaid # Mermaid diagram
```


**IMPORTANT**: Use ONLY these two files. Do NOT create separate diagram-notes.md or other files. Keep all documentation consolidated in README.md for easier maintenance.

### README.md Contents

**Use the blueprint as a template**: `docs/architecture/flows/[redacted]/`

The file is now 449 lines long.

One of the reasons I did this was so that I could start a new Claude session, now or in the future, without a completely clean context window and execute the process to get similar results.

I did actually use a new session to map each new flow to validate that the requirements were somewhat repeatable. In general they were, but often Claude would ignore some parts of the requirements. So at the end, I told it to review the requirements and compare the outputs, and it would usually identify most of the errors it had made and fix them.

Here’s an example of some of the rules that started to build up. Some were to ensure Claude produced the right type of output, and some were to help Claude avoid common mistakes like Mermaid syntax errors.

### 2. Trace Workflows to Their Final Event

**Problem**: Missing events because you don't read the actual workflow implementation.

**Rule**: When you encounter a workflow, you MUST:
1. Find the workflow definition file (usually `.asl.json` for AWS Step Functions)
2. Read the ENTIRE workflow to see ALL events it publishes
3. Document EVERY event in sequence


**Example from our blueprint**:
- We initially thought `[redacted]` ended with `[redacted]`
- Reading `[redacted].asl.json` revealed it actually ends with `[redacted]`
- This event was CRITICAL to the flow continuing

**File locations**:
- Integrations workflows: `[another-repo]/domains/*/workflows/*.asl.json`
- Look for `SendEvent` or `publish` actions in the workflow steps

Claude Code never follows every instruction as described. But adding these rules does seem to increase the quality and reduce the number of iterations needed.

Testing the Output

After I mapped out four or five of the flows, I decided it was time to test it out—does this information actually help Claude Code, or is it mainly just a human audience that benefits from the outputs?

So I went into our support ticket system and picked a ticket that looked complex. I asked my investigation agent to look at the ticket and identify the problem. But this time I added an extra step into its instructions:

4. Identify the affected flows in /docs/architecture/flows

As part of its analysis, it said:

Let me explore the architecture flows to understand how [redacted] and [redacted] should be handled:

Then it produced its analysis correctly identifying the flow and expected behaviors:

## Affected Flow

**[BlahA Upload and BlahB History Thing](../architecture/flows/[redacted]/README.md)**

The [redacted] workflow should populate [redacted] data via:
- [redacted] selection ([redacted] standard OR [redacted] enhanced based on `[redacted]` flag)
- Parallel execution of: DomainA, DomainB, DomainC, DomainD
- Publishing `order.blahBlahed` event on completion

And for the next steps, it wanted to query the events published by the system to verify that what actually happened matched the expected behaviors defined in the flow:

### Step 1: Query [redacted] Events for Both [redacted]

**Objective:** Compare event history to identify what happened differently between old and new [redacted]

**What to look for:**
- `[redacted event name]` - Did [redacted] complete for new [redacted]?
- `[redacted event name]` - Were [redacted] created?
- `[redacted event name]` - Were [redacted] created?
- `[redacted event name]` - Did workflow complete for new [redacted]?
- `[redacted event name]` - Final confirmation event
- Any error/failure events related to the workflow

Previously, Claude would have had to analyze the codebase to work out what should have happened. It takes a long time and takes up a lot of context window for complex tasks, and the analysis has to be verified.

Now, Claude knows immediately about the specific workflow and affected behaviors and can immediately begin planning an investigation (if the documentation is accurate enough). This analysis is structured with the key information that I need to see. I don’t need to iterate with Claude to produce an analysis in the format I need.

In this case, Claude didn’t resolve the problem immediately, but the conversation was more like one I might have with a team member—someone who has a deeper understanding of how the system works and what might be wrong here rather than just using Claude to analyze patterns in data, read stack traces, or summarize text descriptions of the problem.

Accuracy and Hallucinations

I do think it’s right to be concerned about accuracy. We don’t want to make important choices about our system based on incomplete or incorrect details. And there have been significant inaccuracies that I had to spot and correct. (Imagine if I didn’t know they were wrong.)

I explored the challenge of accuracy in this later post showing how we can use deterministic tools like ts-morph to build the model that humans and AI can both benefit from.

So here’s what I’m thinking:

  1. Sometimes we don’t need perfect accuracy. As long as the agent picks the right path, it can reinspect certain details or dive deeper as needed.
  2. We can build checks and steps into our CI pipelines to update things (see the sketch after this list).
  3. Regularly destroy and regenerate the flows (once a quarter?).
  4. Build verification agents or swarms.
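For option 2, here is a minimal sketch of what one such CI check could look like (my own illustration; the folder layout and the backticked dotted event-name convention are assumptions based on the examples in this post). It fails the build when a flow README references an event name that no longer appears anywhere in the source tree:

```typescript
// verify-flow-events.ts: detect drift between flow docs and the codebase.
import { execSync } from "node:child_process";
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

const flowsDir = "docs/architecture/flows"; // assumed layout from this post
const eventPattern = /`((?:private\.)?[\w-]+(?:\.[\w-]+)+)`/g; // e.g. `orders.orderPlaced`

const stale: string[] = [];
for (const entry of readdirSync(flowsDir, { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;
  const readme = readFileSync(join(flowsDir, entry.name, "README.md"), "utf8");
  for (const match of readme.matchAll(eventPattern)) {
    const event = match[1];
    try {
      // git grep exits with a non-zero status when there are no matches
      execSync(`git grep -q --fixed-strings "${event}" -- src`, { stdio: "ignore" });
    } catch {
      stale.push(`${entry.name}: ${event}`);
    }
  }
}

if (stale.length > 0) {
  console.error(`Possibly stale event references:\n${stale.join("\n")}`);
  process.exit(1);
}
```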

When I spotted an error and asked a new agent to analyze the flow for inaccuracies, it rescanned the code and found what I had seen. So I think option 4 is very credible—it’s just more effort to build a verification system (which could make the overall effort not worth it).

But I’m not sure this is the optimal way of approaching the situation. Instead…

The Next Phase of Platform Engineering

Avoiding the need to reverse engineer these flows will be key. And I’m starting to think this will become the main challenge for platform engineering teams: How can we build frameworks and tooling that expose our system as a graph of dependencies? Built into our platform so that AI agents don’t need to reverse engineer; they can just consult the source of truth.

Things should all happen transparently for software engineers—you follow the platform paved path, and everything just works. Companies that do this, and especially startups with no legacy, could immensely profit from AI agents.
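To make the idea of a consultable dependency graph slightly more concrete, here is a hypothetical sketch (mine; no real platform API is implied) of a manifest an agent could query instead of reverse engineering, plus a simple impact-analysis traversal over it:

```typescript
// flow-graph.ts: a hypothetical machine-readable dependency graph a platform
// could expose as a source of truth for agents and humans.
type NodeKind = "endpoint" | "event" | "workflow" | "table";

interface FlowNode {
  id: string; // e.g. "order-api:POST /api/orders"
  kind: NodeKind;
  repository: string; // which repo owns this node
}

interface FlowEdge {
  from: string; // FlowNode id
  to: string; // FlowNode id
  via: "http" | "publish" | "consume" | "write";
}

interface FlowGraph {
  nodes: FlowNode[];
  edges: FlowEdge[];
}

// Impact analysis: everything reachable downstream of a given node.
function downstreamOf(graph: FlowGraph, id: string): string[] {
  const reached = new Set<string>();
  const queue = [id];
  while (queue.length > 0) {
    const current = queue.pop()!;
    for (const edge of graph.edges) {
      if (edge.from === current && !reached.has(edge.to)) {
        reached.add(edge.to);
        queue.push(edge.to);
      }
    }
  }
  return [...reached];
}
```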

Tools like EventCatalog are in a strong position here.

Example Flow

I just asked Claude to translate one of my company’s domain flows into a boring ecommerce example. The design and naming are not important; the type of information and the visualization are what I’m trying to convey.

Remember, this is based on one day of hacking around. I’m sure there are lots of improvement opportunities here. Let me know if you have seen anything better.

The README

# Place Order with Payment and Fulfillment

**Status**: Active
**Type**: Write Operation
**Complexity**: High
**Last Updated**: 2025-10-19

## Overview

This flow documents the process of placing an order in an ecommerce system, including payment authorization, inventory reservation, and shipment creation. This is the baseline order placement experience where:
- Orders start with `status: 'pending'`
- Payment is authorized before inventory reservation
- Inventory is reserved upon successful payment
- Shipment is created after inventory reservation

## Flow Boundaries

**Start**: Customer clicks "Place Order" button on checkout page

**End**: Publication of `shipping.shipment-created` event (public event with `DOMAIN` scope)

**Scope**: This flow covers the entire process from initial order submission through payment authorization, inventory reservation, shipment creation, and all asynchronous side effects triggered by these operations.

## Quick Reference

### API Endpoints

| Endpoint | Method | Repository | Purpose |
|----------|--------|------------|---------|
| `/checkout` | GET | storefront-app | Checkout page |
| `/api/orders` | POST | order-api | Creates order |
| `/api/payments/authorize` | POST | payment-api | Authorizes payment |
| `/api/inventory/reserve` | POST | inventory-api | Reserves inventory |
| `/api/shipments` | POST | shipping-api | Creates shipment |
| `/api/orders/{orderId}/status` | GET | order-api | Frontend polls for order status |

### Events Reference

| Event Name | Domain | Subject | Purpose | Consumers |
|------------|--------|---------|---------|-----------|
| `private.orders.order.created` | ORDERS | order | Order creation | PaymentHandler, AnalyticsHandler |
| `private.payments.payment.authorized` | PAYMENTS | payment | Payment authorized | InventoryReservationHandler |
| `private.payments.payment.failed` | PAYMENTS | payment | Payment failed | OrderCancellationHandler |
| `private.inventory.stock.reserved` | INVENTORY | stock | Inventory reserved | ShipmentCreationHandler |
| `private.inventory.stock.insufficient` | INVENTORY | stock | Insufficient stock | OrderCancellationHandler |
| `private.shipping.shipment.created` | SHIPPING | shipment | Shipment created | NotificationHandler |
| `shipping.shipment-created` | SHIPPING | shipment | **PUBLIC** Shipment created | External consumers |

### Database Tables

| Table | Operation | Key Fields | Purpose |
|-------|-----------|------------|---------|
| `orders` | INSERT | orderId, customerId, status='pending', totalAmount | Order aggregate storage |
| `order_items` | INSERT | orderItemId, orderId, productId, quantity, price | Order line items |
| `payments` | INSERT | paymentId, orderId, amount, status='authorized' | Payment aggregate storage |
| `inventory_reservations` | INSERT | reservationId, orderId, productId, quantity | Inventory reservation tracking |
| `shipments` | INSERT | shipmentId, orderId, trackingNumber, status='pending' | Shipment aggregate storage |

### Domain Operations

| Aggregate | Method | Purpose |
|-----------|--------|---------|
| Order | `Order.create()` | Creates new order with pending status |
| Order | `Order.confirmPayment()` | Marks payment as confirmed |
| Order | `Order.cancel()` | Cancels order due to payment or inventory failure |
| Payment | `Payment.authorize()` | Authorizes payment for order |
| Payment | `Payment.capture()` | Captures authorized payment |
| Inventory | `Inventory.reserve()` | Reserves stock for order |
| Shipment | `Shipment.create()` | Creates shipment for order |

## Key Characteristics

| Aspect | Value |
|--------|-------|
| Order Status | Uses `status` field: `'pending'` → `'confirmed'` → `'shipped'` |
| Payment Status | Uses `status` field: `'pending'` → `'authorized'` → `'captured'` |
| Inventory Strategy | Reserve-on-payment approach |
| Shipment Status | Uses `status` field: `'pending'` → `'ready'` → `'shipped'` |

## Flow Steps

1. **Customer** navigates to checkout page in storefront-app (`/checkout`)
2. **Customer** reviews order details and clicks "Place Order" button
3. **storefront-app UI** shows loading state with order confirmation message
4. **storefront-app** sends POST request to order-api (`/api/orders`)
- Request includes: customerId, items (productId, quantity, price), shippingAddress, billingAddress
5. **order-api** creates Order aggregate with `status: 'pending'` and persists to database
6. **order-api** creates OrderItem records for each item in the order
7. **order-api** publishes `private.orders.order.created` event
8. **order-api** returns orderId and order details to storefront-app
9. **storefront-app** redirects customer to order confirmation page

### Asynchronous Side Effects - Payment Processing

10. **order-events-consumer** receives `private.orders.order.created` event
11. **PaymentHandler** processes the event
- Calls payment-api to authorize payment
12. **payment-api** calls external payment gateway (Stripe, PayPal, etc.)
13. **payment-api** creates Payment aggregate with `status: 'authorized'` and persists to database
14. **payment-api** publishes `private.payments.payment.authorized` event (on success)
- OR publishes `private.payments.payment.failed` event (on failure)

### Asynchronous Side Effects - Inventory Reservation

15. **payment-events-consumer** receives `private.payments.payment.authorized` event
16. **InventoryReservationHandler** processes the event
- Calls inventory-api to reserve stock
17. **inventory-api** loads Inventory aggregate for each product
18. **inventory-api** calls `Inventory.reserve()` for each order item
- Validates sufficient stock available
- Creates reservation record
- Decrements available stock
19. **inventory-api** creates InventoryReservation records and persists to database
20. **inventory-api** publishes `private.inventory.stock.reserved` event (on success)
- OR publishes `private.inventory.stock.insufficient` event (on failure)

### Asynchronous Side Effects - Shipment Creation

21. **inventory-events-consumer** receives `private.inventory.stock.reserved` event
22. **ShipmentCreationHandler** processes the event
- Calls shipping-api to create shipment
23. **shipping-api** creates Shipment aggregate with `status: 'pending'` and persists to database
24. **shipping-api** calls external shipping carrier API to generate tracking number
25. **shipping-api** updates Shipment with trackingNumber
26. **shipping-api** publishes `private.shipping.shipment.created` event
27. **shipping-events-consumer** receives `private.shipping.shipment.created` event
- **ShipmentCreatedPublicHandler** processes the event
- Loads shipment from repository to get full shipment details
- Publishes public event: `shipping.shipment-created`
- **This marks the END of the flow**

### Order Status Updates

28. Throughout the flow, order-api receives events and updates order status:
- On `private.payments.payment.authorized`: Updates order with paymentId
- On `private.inventory.stock.reserved`: Updates order to `status: 'confirmed'`
- On `private.shipping.shipment.created`: Updates order to `status: 'shipped'`

### Failure Scenarios

**Payment Failure**:
- On `private.payments.payment.failed`: OrderCancellationHandler cancels order
- Order status updated to `'cancelled'`
- Customer notified via email

**Inventory Failure**:
- On `private.inventory.stock.insufficient`: OrderCancellationHandler cancels order
- Payment authorization is voided
- Order status updated to `'cancelled'`
- Customer notified via email with option to backorder

## Repositories Involved

- **storefront-app**: Frontend UI
- **order-api**: Order domain
- **payment-api**: Payment domain
- **inventory-api**: Inventory domain
- **shipping-api**: Shipping and fulfillment domain
- **notification-api**: Customer notifications

## Related Flows

- **Process Refund**: Flow for handling order refunds and returns
- **Update Shipment Status**: Flow for tracking shipment delivery status
- **Inventory Reconciliation**: Flow for syncing inventory counts with warehouse systems

## Events Produced

| Event | Purpose |
|-------|---------|
| `private.orders.order.created` | Notifies that a new order has been created |
| `private.payments.payment.authorized` | Notifies that payment has been authorized |
| `private.payments.payment.failed` | Notifies that payment authorization failed |
| `private.inventory.stock.reserved` | Notifies that inventory has been reserved |
| `private.inventory.stock.insufficient` | Notifies that insufficient inventory is available |
| `private.shipping.shipment.created` | Internal event that shipment has been created |
| `shipping.shipment-created` | **Public event** that shipment is created and ready for carrier pickup |

## Event Consumers

### `private.orders.order.created` Consumers

#### 1. order-events-consumer

**Handler**: `PaymentHandler`

**Purpose**: Initiates payment authorization process

**Actions**:
- Subscribes to event
- Calls `AuthorizePayment` use case
- Invokes payment-api to authorize payment with payment gateway
- Publishes payment result event

#### 2. order-events-consumer

**Handler**: `AnalyticsHandler`

**Purpose**: Tracks order creation for analytics

**Actions**:
- Subscribes to event
- Sends order data to analytics platform
- Updates conversion tracking

### `private.payments.payment.authorized` Consumer

#### payment-events-consumer

**Handler**: `InventoryReservationHandler`

**Purpose**: Reserves inventory after successful payment

**Actions**:
- Subscribes to event
- Calls `ReserveInventory` use case
- Loads order details
- Calls inventory-api to reserve stock for each item
- Publishes inventory reservation result event

### `private.payments.payment.failed` Consumer

#### payment-events-consumer

**Handler**: `OrderCancellationHandler`

**Purpose**: Cancels order when payment fails

**Actions**:
- Subscribes to event
- Calls `CancelOrder` use case
- Updates order status to 'cancelled'
- Triggers customer notification

### `private.inventory.stock.reserved` Consumer

#### inventory-events-consumer

**Handler**: `ShipmentCreationHandler`

**Purpose**: Creates shipment after inventory reservation

**Actions**:
- Subscribes to event
- Calls `CreateShipment` use case
- Calls shipping-api to create shipment record
- Integrates with shipping carrier API for tracking number
- Publishes shipment created event

### `private.inventory.stock.insufficient` Consumer

#### inventory-events-consumer

**Handler**: `OrderCancellationHandler`

**Purpose**: Cancels order when inventory is insufficient

**Actions**:
- Subscribes to event
- Calls `CancelOrder` use case
- Voids payment authorization
- Updates order status to 'cancelled'
- Triggers customer notification with backorder option

### `private.shipping.shipment.created` Consumer

#### shipping-events-consumer

**Handler**: `ShipmentCreatedPublicHandler`

**Purpose**: Converts private shipment event to public event

**Actions**:
- Subscribes to `private.shipping.shipment.created` event
- Loads shipment from repository
- Publishes public event: `shipping.shipment-created`
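
A sketch of this translation step, with hypothetical interfaces. Reloading the shipment from the repository, rather than forwarding the private payload, is the design point worth noting: it lets the public contract evolve independently of internal event shapes.

```typescript
// A hypothetical sketch of ShipmentCreatedPublicHandler. Reloading the
// shipment from the repository, rather than forwarding the private payload,
// keeps the public contract independent of internal event shapes.

interface PrivateShipmentCreated { shipmentId: string; }

interface Shipment {
  shipmentId: string;
  orderId: string;
  trackingNumber: string;
  carrier: string;
}

interface ShipmentRepository {
  load(shipmentId: string): Promise<Shipment>;
}

declare function publish(topic: string, payload: unknown): Promise<void>;

class ShipmentCreatedPublicHandler {
  constructor(private readonly shipments: ShipmentRepository) {}

  async handle(event: PrivateShipmentCreated): Promise<void> {
    const shipment = await this.shipments.load(event.shipmentId);

    // Public event: a stable, externally documented shape.
    await publish('shipping.shipment-created', {
      orderId: shipment.orderId,
      trackingNumber: shipment.trackingNumber,
      carrier: shipment.carrier,
    });
  }
}
```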

**Handler**: `NotificationHandler`

**Purpose**: Notifies customer of shipment creation

**Actions**:
- Subscribes to event
- Sends confirmation email with tracking number
- Sends SMS notification (if opted in)

## Database Operations

### orders Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: orderId, customerId, status='pending', totalAmount, createdAt
- **Repository**: `OrderRepository`

### order_items Table
- **Operation**: INSERT (batch)
- **Key Fields**: orderItemId, orderId, productId, quantity, price
- **Repository**: `OrderItemRepository`

### payments Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: paymentId, orderId, amount, status='authorized', gatewayTransactionId
- **Repository**: `PaymentRepository`

### inventory_reservations Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: reservationId, orderId, productId, quantity, reservedAt
- **Repository**: `InventoryReservationRepository`

### shipments Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: shipmentId, orderId, trackingNumber, status='pending', carrier
- **Repository**: `ShipmentRepository`
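
All five repositories use the same "INSERT (via upsert)" pattern, which makes event redelivery idempotent: replaying `private.orders.order.created` cannot create a duplicate order. A sketch for the orders table, assuming PostgreSQL and a node-postgres-style `query(sql, params)` client; the dialect is an assumption, not stated in this document:

```typescript
// A hypothetical sketch of the upsert pattern, shown for the orders table
// with PostgreSQL-style SQL. Table and column names follow this document;
// the db client is assumed to expose query(sql, params), as node-postgres does.

interface Db {
  query(sql: string, params: unknown[]): Promise<unknown>;
}

class OrderRepository {
  constructor(private readonly db: Db) {}

  // Upserting on orderId makes event redelivery idempotent: replaying the
  // order.created event cannot create a duplicate row.
  async upsert(order: { orderId: string; customerId: string; totalAmount: number }): Promise<void> {
    await this.db.query(
      `INSERT INTO orders (order_id, customer_id, status, total_amount, created_at)
       VALUES ($1, $2, 'pending', $3, now())
       ON CONFLICT (order_id) DO UPDATE
         SET total_amount = EXCLUDED.total_amount`,
      [order.orderId, order.customerId, order.totalAmount],
    );
  }
}
```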

## External Integrations

- **Payment Gateway Integration**: Authorizes and captures payments via Stripe API
  - Endpoint: `/v1/payment_intents`
  - Synchronous call during payment authorization

- **Shipping Carrier Integration**: Generates tracking numbers via carrier API
  - Endpoint: `/api/v1/shipments`
  - Synchronous call during shipment creation
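
A hedged sketch of the Stripe call: the `/v1/payment_intents` endpoint, form-encoded body, and `capture_method: 'manual'` are real Stripe API conventions (manual capture matches "captured on actual shipment" in the flow completion notes below), while the function shape and error handling are assumptions:

```typescript
// A hypothetical sketch of the synchronous Stripe call. The endpoint,
// form-encoded body, and capture_method: 'manual' are real Stripe API
// conventions; the function shape and error handling are assumptions.

async function authorizeWithStripe(
  apiKey: string,
  amountCents: number,
  currency: string,
): Promise<{ id: string; status: string }> {
  const body = new URLSearchParams({
    amount: String(amountCents),
    currency,
    // Manual capture: authorize now, capture later, matching "captured on
    // actual shipment" in the flow completion notes.
    capture_method: 'manual',
  });

  const res = await fetch('https://api.stripe.com/v1/payment_intents', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}` },
    body, // URLSearchParams implies application/x-www-form-urlencoded
  });

  if (!res.ok) {
    throw new Error(`Stripe authorization failed: HTTP ${res.status}`);
  }
  return (await res.json()) as { id: string; status: string };
}
```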

## What Happens After This Flow

This flow ends with the publication of the `shipping.shipment-created` public event, which marks the order as fully processed and ready for carrier pickup.

### State at Flow Completion
- Order: `status: 'shipped'`
- Payment: `status: 'authorized'` (will be captured on actual shipment)
- Inventory: Stock reserved and decremented
- Shipment: `status: 'pending'`, trackingNumber assigned

### Next Steps
After this flow completes:
- Warehouse team picks and packs the order
- Carrier picks up the shipment
- Shipping status updates flow tracks delivery
- Payment is captured upon confirmed shipment
- Customer can track order via tracking number

### External System Integration
Once the `shipping.shipment-created` event is published:
- Warehouse management system begins pick/pack process
- Customer notification system sends tracking updates
- Logistics partners receive shipment manifest
- Analytics systems track fulfillment metrics

## Diagram

See [diagram.mermaid](./diagram.mermaid) for the complete visual flow showing the progression through systems with horizontal swim lanes for each service.

The Mermaid:

```mermaid
flowchart LR
Start([Customer clicks Place Order<br/>on checkout page])

subgraph storefront["🌐 storefront-app"]
direction LR
ShowCheckout[Show checkout page]
CustomerReview[Customer reviews order]
ShowConfirmation[Show order<br/>confirmation page]
end

CustomerWaitsForShipment([Customer receives<br/>shipment notification])

subgraph orderService["📦 order-api"]
direction LR
CreateOrderEndpoint["POST /api/orders"]
OrderCreate[Order.create]
OrderDB[(Database:<br/>INSERT orders,<br/>order_items)]
PublishOrderCreated["Publish: private.orders<br/>.order.created"]
ReceivePaymentAuth["Receive: private.payments<br/>.payment.authorized"]
UpdateOrderPayment[Update order<br/>with paymentId]
ReceiveStockReserved["Receive: private.inventory<br/>.stock.reserved"]
OrderConfirm[Order.confirmPayment]
UpdateOrderConfirmed[(Database:<br/>UPDATE orders<br/>status='confirmed')]
ReceiveShipmentCreated["Receive: private.shipping<br/>.shipment.created"]
UpdateOrderShipped[(Database:<br/>UPDATE orders<br/>status='shipped')]
end

subgraph paymentService["💳 payment-api"]
direction LR
ReceiveOrderCreated["Receive: private.orders<br/>.order.created"]
AuthorizeEndpoint["POST /api/payments/<br/>authorize"]
PaymentGateway["External: Payment<br/>Gateway API<br/>(Stripe)"]
PaymentAuthorize[Payment.authorize]
PaymentDB[(Database:<br/>INSERT payments)]
PublishPaymentAuth["Publish: private.payments<br/>.payment.authorized"]
end

subgraph inventoryService["📊 inventory-api"]
direction LR
ReceivePaymentAuth2["Receive: private.payments<br/>.payment.authorized"]
ReserveEndpoint["POST /api/inventory/<br/>reserve"]
InventoryReserve[Inventory.reserve]
InventoryDB[(Database:<br/>INSERT inventory_reservations<br/>UPDATE product stock)]
PublishStockReserved["Publish: private.inventory<br/>.stock.reserved"]
end

subgraph shippingService["🚚 shipping-api"]
direction LR
ReceiveStockReserved2["Receive: private.inventory<br/>.stock.reserved"]
CreateShipmentEndpoint["POST /api/shipments"]
CarrierAPI["External: Shipping<br/>Carrier API<br/>(FedEx/UPS)"]
ShipmentCreate[Shipment.create]
ShipmentDB[(Database:<br/>INSERT shipments)]
PublishShipmentCreated["Publish: private.shipping<br/>.shipment.created"]
ReceiveShipmentCreatedPrivate["Receive: private.shipping<br/>.shipment.created"]
LoadShipment[Load shipment<br/>from repository]
PublishPublicEvent["Publish: shipping<br/>.shipment-created"]
FlowEnd([Flow End:<br/>Public event published])
end

Start --> ShowCheckout
ShowCheckout --> CustomerReview
CustomerReview --> CreateOrderEndpoint
CreateOrderEndpoint --> OrderCreate
OrderCreate --> OrderDB
OrderDB --> PublishOrderCreated
PublishOrderCreated --> ShowConfirmation

PublishOrderCreated -.-> ReceiveOrderCreated
ReceiveOrderCreated --> AuthorizeEndpoint
AuthorizeEndpoint --> PaymentGateway
PaymentGateway --> PaymentAuthorize
PaymentAuthorize --> PaymentDB
PaymentDB --> PublishPaymentAuth

PublishPaymentAuth -.-> ReceivePaymentAuth
ReceivePaymentAuth --> UpdateOrderPayment

PublishPaymentAuth -.-> ReceivePaymentAuth2
ReceivePaymentAuth2 --> ReserveEndpoint
ReserveEndpoint --> InventoryReserve
InventoryReserve --> InventoryDB
InventoryDB --> PublishStockReserved

PublishStockReserved -.-> ReceiveStockReserved
ReceiveStockReserved --> OrderConfirm
OrderConfirm --> UpdateOrderConfirmed

PublishStockReserved -.-> ReceiveStockReserved2
ReceiveStockReserved2 --> CreateShipmentEndpoint
CreateShipmentEndpoint --> CarrierAPI
CarrierAPI --> ShipmentCreate
ShipmentCreate --> ShipmentDB
ShipmentDB --> PublishShipmentCreated

PublishShipmentCreated -.-> ReceiveShipmentCreated
ReceiveShipmentCreated --> UpdateOrderShipped

PublishShipmentCreated -.-> ReceiveShipmentCreatedPrivate
ReceiveShipmentCreatedPrivate --> LoadShipment
LoadShipment --> PublishPublicEvent
PublishPublicEvent --> FlowEnd

FlowEnd -.-> CustomerWaitsForShipment

style Start fill:#e1f5e1
style FlowEnd fill:#ffe1e1
style CustomerWaitsForShipment fill:#e1f5e1
style PublishOrderCreated fill:#fff4e1
style PublishPaymentAuth fill:#fff4e1
style PublishStockReserved fill:#fff4e1
style PublishShipmentCreated fill:#fff4e1
style PublishPublicEvent fill:#fff4e1
style OrderDB fill:#e1f0ff
style PaymentDB fill:#e1f0ff
style InventoryDB fill:#e1f0ff
style ShipmentDB fill:#e1f0ff
style UpdateOrderConfirmed fill:#e1f0ff
style UpdateOrderShipped fill:#e1f0ff
style PaymentGateway fill:#ffe1f5
style CarrierAPI fill:#ffe1f5
```




Open Source Software, Public Policy, and the Stakes of Getting It Right


Open Source software plays a central role in global innovation, research, and economic growth. That statement is familiar to anyone working in technology, but the scale of its impact is still startling. A 2024 Harvard-backed study estimates that the demand-side value of the Open Source ecosystem is approximately $8.8 trillion, and that companies would need to spend 3.5 times more on software if Open Source did not exist.

Those numbers underscore a simple truth: Open Source is not a niche concern or a developer-only issue. It is economic infrastructure. And like any critical infrastructure, it depends not only on technical excellence, but on policy environments that understand how it works.

This reality sits at the center of the Open Source Initiative’s (OSI) expanding work in public policy, a move that reflects how deeply Open Source is now entangled with global regulation, security, and emerging technologies like AI.

Check out the good work of the OSI and read the complete post at:

https://opensource.org/blog/open-source-software-public-policy-and-the-stakes-of-getting-it-right




AGL 455: Adam Christing on The Laughter Factor


About Adam

Adam Christing brings people together with humor and heart! He’s a captivating keynote speaker and an award-winning event emcee. Adam has delighted over two million people across 49 of the 50 U.S. states and internationally. He is a performing member of Hollywood’s world-famous Magic Castle. He has been featured on Entertainment Tonight and more than 100 top podcasts, TV shows, and radio programs. Adam was recently featured on Harvard Business Review IdeaCast. He is the author of The Laughter Factor: The 5 Humor Tactics to Link, Lift, and Lead (Penguin Random House, BK Books).


Today We Talked About

  • Adam’s background
  • Comedy
  • Have Fun
  • Ha-uthenticity
  • Laugh Languages
  • SAD – Surprise And Delight
  • 5 Tactics
    • Surprise
    • Poke
    • In-Joke
    • Wordplay
    • Amplify
  • Leadership
  • Laughter is a short-cut to trust
  • Dad Jokes
    • Feeling Safe
  • Brickwalls
    • Get closer together
  • Transformation over information

Connect with Adam


Leave me a tip $
Click here to Donate to the show


I hope you enjoyed this show. Please head over to Apple Podcasts, subscribe, and leave me a rating and review; even one sentence will help spread the word. Thanks again!





Download audio: https://media.blubrry.com/a_geek_leader_podcast__/mc.blubrry.com/a_geek_leader_podcast__/AGL_455_Adam_Christing_on_The_Laughter_Factor.mp3?awCollectionId=300549&awEpisodeId=11884562&aw_0_azn.pgenre=Business&aw_0_1st.ri=blubrry&aw_0_azn.pcountry=US&aw_0_azn.planguage=en&cat_exclude=IAB1-8%2CIAB1-9%2CIAB7-41%2CIAB8-5%2CIAB8-18%2CIAB11-4%2CIAB25%2CIAB26&aw_0_cnt.rss=https%3A%2F%2Fwww.ageekleader.com%2Ffeed%2Fpodcast