Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Reverse Engineering Your Software Architecture with Claude Code to Help Claude Code

This post first appeared on Nick Tune’s Medium page and is being republished here with the author’s permission.
Example architecture flow reverse-engineered by Claude Code

I have been using Claude Code for a variety of purposes, and one thing I’ve realized is that the more it understands about the functionality of the system (the domain, the use cases, the end-to-end flows), the more it can help me.

For example, when I paste a production error log, Claude can read the stack trace, identify the affected code, and tell me if there is a bug. But when the issue is more complex, like a customer support ticket, and there is no stack trace, Claude is less useful.

The main challenge is that end-to-end processes are long and complex, spanning many code repositories. So just asking Claude Code to analyze a single repository wasn’t going to work (and the default /init wasn’t producing sufficient detail even for this single codebase).

So I decided to use Claude Code to analyze the system to map out the end-to-end flows relevant to the domain I work in so that Claude Code (and humans) can use this to handle more complex challenges.

This post shares what I knocked together in one day, building on knowledge and tooling I’ve already gained from real work examples and experiments.

This is one post in a series. You can find the other posts here:
https://medium.com/nick-tune-tech-strategy-blog/software-architecture-as-living-documentation-series-index-post-9f5ff1d3dc07

This post was written 100% by me. I asked Claude to generate the anonymized example at the end mirroring the type of content and style used in the real examples I created.

Setting the Initial Context

To begin my project, I created a very light requirements document:

# AI Architecture Analysis

This document contains the instructions for an important task - using AI to define the architecture of this system, so that it can be used by humans and AI agents to more easily understand the system.

## Objective

Map out all of the flows that this application is involved in (use sub agents where necessary to work in parallel). A flow should map out the end-to-end process from an action in the UI (in the [redacted] repo) to a BFF, to backend APIs, or flows that are triggered by events.

Flows should be documented in Mermaid format to allow AI agents to understand, for versioning (in git), and for easy visualization.

## Requirements

Each flow should have a descriptive name and should include:

1. The URL path of the page where the interaction is triggered

2. The URL path of the BFF endpoint (and the repository it lives in)

3. The URL path of calls made to downstream services

4. Any database interactions

5. Any events produced or consumed (full name of event e.g. orders.orderPlaced)

6. Consumers of events (if easy to identify)

7. Any workflows triggered (like the synchronizeOrder)

To do this, you will need to look in other repositories which can be found in the parent folder. The GitHub client can also be used if necessary.

The list of flows should live in ../flows/index.md and each individual flow should be defined in a separate folder.

# Where to find information

- /docs/architecture contains various folders describing the design of this system and domain knowledge

- Each API project in this repository ([redacted], [redacted]) has an openapi.json. This must be used to identify all flows and validate. The [redacted] and [redacted] repositories also have openapi spec files

- The entities in the domain ([redacted], [redacted], [redacted]) have methods that clearly describe the domain operations that can be performed on them. Equally, each operation is invoked from a use case that clearly describes the use case

The output I want is end-to-end flows like:
UI -> BFF -> API -> update DB -> publish event -> handler -> use case -> publish event -> …

I don’t want 10 different kinds of architecture diagrams and different levels of detail. I want Claude Code to understand the behavior of the system so it can identify anomalies (by looking at production data and logs) and analyze the impact of potential changes.

I also created some light information about the system in these two files:

System design files

The domain concepts file explains the entities in the system. Very brief explanation. The system overview file explains the relationship between this codebase and other repositories, which is crucial. Again, it’s very light—a bullet list of repository names and one or two sentences describing their relationship to this one.
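As a sketch, the system overview file might look something like this (the repository names are borrowed from the anonymized ecommerce example later in this post, not my real system):

```markdown
# System Overview

This repository (order-api) owns the Order domain. Related repositories:

- storefront-app: frontend UI; calls this repository's BFF endpoints
- payment-api: authorizes payments; consumes order events
- inventory-api: reserves stock after payment authorization
- shipping-api: creates shipments; publishes the public shipment event
```

The point is brevity: just enough for Claude to know which neighboring repositories exist and why they matter.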

Searching across multiple repositories

The instructions for this task live inside the main repository of the domain I work in. This is the center of the universe for this agent, but it needs to be able to read other repositories to join up the end-to-end flow.

The solution I use for this is in the description above:

To do this, you will need to look in other repositories which can be found in the parent folder. The GitHub client can also be used if necessary.

I give Claude the following permissions in .claude/settings.local.json and it can then access all the repositories on my machine or use the GitHub client if it thinks there are repositories I don't have available locally:

"permissions": {
"allow": [
...
"Read(//Users/nicktune/code/**)",
...
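For reference, a complete settings file of this shape could look like the following. Only the Read rule comes from my setup; the Bash rule is an invented illustration of the same permission syntax:

```json
{
  "permissions": {
    "allow": [
      "Read(//Users/nicktune/code/**)",
      "Bash(git log:*)"
    ]
  }
}
```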

Telling Claude where to look

You’ll notice the requirements also give Claude tips on where to look for key information like OpenAPI spec files, which are like an index of the operations the application supports.

This is useful as a validation mechanism later in the flow. I would ask Claude, “List all of the API endpoints and events produced or consumed by this application—are there any that aren’t part of any flows?” I can then see if we may have missed anything important.
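A check like this could also be scripted deterministically rather than asked of Claude. Here is a rough sketch; the file layout (a single openapi.json plus flows/*.md) and the helper name are my assumptions, not anything from the real system:

```python
# Hypothetical consistency check: list operations declared in an
# openapi.json that never appear in any documented flow.
import json
import pathlib


def undocumented_operations(openapi_path: str, flows_dir: str) -> list[str]:
    spec = json.loads(pathlib.Path(openapi_path).read_text())
    # Collect "METHOD /path" pairs from the spec's paths object
    operations = [
        f"{method.upper()} {path}"
        for path, item in spec.get("paths", {}).items()
        for method in item
        if method in ("get", "post", "put", "patch", "delete")
    ]
    # Concatenate every flow document into one searchable blob
    docs = " ".join(
        p.read_text() for p in pathlib.Path(flows_dir).rglob("*.md")
    )
    # An operation is "documented" if its path shows up in any flow doc
    return [op for op in operations if op.split(" ", 1)[1] not in docs]
```

Run against the flows folder, anything this returns is a candidate gap worth asking Claude about.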

Mapping the First Flow

I put Claude in plan mode and asked it to read the file. It then popped up a short questionnaire asking me about my needs and preferences. One of the questions it asked was process related: Should we map out the whole system in parallel, work step-by-step, etc.?

I said, let’s do the first one together and use this as a template for the others to follow.

It took about two hours to build the first flow as I reviewed what Claude produced and gave feedback on what I needed. For example, at first it created a sequence diagram which looked nice but was too hard to read for complex flows that involve many repositories.

Eventually, we settled on horizontal flow diagrams where each repository is a container, and we defined what the steps could be. At first, it went too granular, adding individual method calls as steps.

### diagram.mermaid Requirements

**CRITICAL RULES:**

1. **File Format**: Must be pure Mermaid syntax with `.mermaid` extension
- NO markdown headers
- NO markdown code fences (no ` ```mermaid ` wrapper)
- Starts directly with `flowchart LR`


2. **Use Swimlane Format**: `flowchart LR` (left-to-right with horizontal swimlanes)
- Each repository is a horizontal swimlane (subgraph)
- Flow progresses left to right
- Swimlane labels should be prominent (use emoji for visibility)
- Example: `subgraph systemA["🔧 systemA"]`


3. **Systems as Containers**:
- Each repository MUST be a `subgraph` (horizontal swimlane)
- Repository name is the subgraph label
- Operations are nodes inside the subgraph
- Use `direction LR` inside each subgraph


4. **Valid Step Types** - A step in the diagram can ONLY be one of the following:
- **HTTP Endpoint**: Full endpoint path (e.g., `POST /blah/{blahId}/subblah`)
- **Aggregate Method Call**: Domain method on an aggregate (e.g., `Order.place`, `Shipping.organize`)
- **Database Operation**: Shown with cylinder shape `[(Database: INSERT order)]`
- **Event Publication**: (e.g., `Publish: private.ordering.order.placed`)
- **Workflow Trigger**: Must be labeled as workflow (e.g., `⚙ Workflow: syncOrders`)
- **Workflow Step**: Any step inside a workflow MUST include the workflow name as prefix (e.g., `syncOrderWorkflow: Update legacy order`, `updateOrderInfo: POST /legacy/fill-order`)
- **Lambda Invocation**: (e.g., `Lambda: blah-blah-lambda-blah-blah`)
- **UI Actions**: User interactions (e.g., `Show modal form`, `User enters firstName, lastName`)
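Putting those rules together, a minimal diagram might look like the following (the nodes are invented, reusing names from the rule examples above):

```mermaid
flowchart LR
    subgraph storefront["🌐 storefront-app"]
        direction LR
        Click[User clicks Place Order]
    end

    subgraph orderApi["📦 order-api"]
        direction LR
        Endpoint["POST /api/orders"]
        Agg[Order.place]
        DB[(Database: INSERT order)]
        Pub["Publish: private.ordering.order.placed"]
    end

    Click --> Endpoint
    Endpoint --> Agg
    Agg --> DB
    DB --> Pub
```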

I’ve added an anonymized flow at the end of this document.

I also had to add some corrections to Claude to ensure it was looking in all the right places and understanding what certain concepts mean in other parts of our system; we weren’t just iterating on the diagram for two hours.

Choosing the Next Flow

After the first flow, the next flows went much faster. I verified the output of each flow and gave Claude feedback, but each one generally took around 15 minutes, and most of the time it was working so I could do other things while waiting.

One of the interesting challenges was deciding which flows are actually needed. What is a flow? Where should a flow start and end? What about relationships between flows?

Here I was in the driving seat quite a bit. I asked Claude to propose flows (just list them before analyzing) and then I asked it to show me how each API endpoint and event fit into the flows, and we used that to iterate a bit.

One of the things I had to do after Claude had produced the first draft is to ask, “Are you sure there are no other consumers for these events that are not listed here?” It would then do a more thorough search and sometimes find consumers in repositories I didn’t have locally. (It would use GitHub search.)

Learning value

As I reviewed each use case, I was learning things about the system that I didn’t fully understand or maybe there were nuances I wasn’t aware of. This alone would have justified all the effort I spent on this.

Then I started to imagine the value for people who are new to a codebase or a legacy system that nobody understands anymore. Or maybe someone who works in a different team and needs to figure out how a bug or a change in their domain is related to other domains.

Evolving the Requirements

As we went through the process, I regularly told Claude to update the requirements file. So after we’d finished the first flow we had instructions like this added to the file:

## Documentation Structure

Each flow must be documented in a separate folder with the following structure:

```
docs/architecture/flows/[flow-name]/
├── README.md # Complete documentation (all content in one file)
└── diagram.mermaid # Mermaid diagram
```


**IMPORTANT**: Use ONLY these two files. Do NOT create separate diagram-notes.md or other files. Keep all documentation consolidated in README.md for easier maintenance.

### README.md Contents

**Use the blueprint as a template**: `docs/architecture/flows/[redacted]/`

The file is now 449 lines long.

One of the reasons I did this was so that I could start a new Claude session, now or in the future, without a completely clean context window and execute the process to get similar results.

I did actually use a new session to map each new flow to validate that the requirements were somewhat repeatable. In general they were, but often Claude would ignore some parts of the requirements. So at the end, I told it to review the requirements and compare the outputs, and it would usually identify most of the errors it had made and fix them.

Here’s an example of some of the rules that started to build up. Some were to ensure Claude produced the right type of output, and some were to help Claude avoid common mistakes like Mermaid syntax errors.

### 2. Trace Workflows to Their Final Event

**Problem**: Missing events because you don't read the actual workflow implementation.

**Rule**: When you encounter a workflow, you MUST:
1. Find the workflow definition file (usually `.asl.json` for AWS Step Functions)
2. Read the ENTIRE workflow to see ALL events it publishes
3. Document EVERY event in sequence


**Example from our blueprint**:
- We initially thought `[redacted]` ended with `[redacted]`
- Reading `[redacted].asl.json` revealed it actually ends with `[redacted]`
- This event was CRITICAL to the flow continuing

**File locations**:
- Integrations workflows: `[another-repo]/domains/*/workflows/*.asl.json`
- Look for `SendEvent` or `publish` actions in the workflow steps
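To make that concrete: the final state of a Step Functions workflow that publishes an event typically uses the EventBridge `putEvents` integration, and looks something like this in the `.asl.json`. Every name below is invented for illustration:

```json
{
  "PublishOrderSynchronized": {
    "Type": "Task",
    "Resource": "arn:aws:states:::events:putEvents",
    "Parameters": {
      "Entries": [
        {
          "EventBusName": "domain-events",
          "Source": "ordering",
          "DetailType": "private.ordering.order.synchronized",
          "Detail.$": "$.order"
        }
      ]
    },
    "End": true
  }
}
```

Spotting a terminal state like this is how you find the event that continues the flow in another repository.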

Claude Code never follows every instruction as described. But adding these rules does seem to increase the quality and reduce the number of iterations needed.

Testing the Output

After I mapped out four or five of the flows, I decided it was time to test it out—does this information actually help Claude Code, or is it mainly just a human audience that benefits from the outputs?

So I went into our support ticket system and picked a ticket that looked complex. I asked my investigation agent to look at the ticket and identify the problem. But this time I added an extra step into its instructions:

4. Identify the affected flows in /docs/architecture/flows

As part of its analysis, it said:

Let me explore the architecture flows to understand how [redacted] and [redacted] should be handled:

Then it produced its analysis correctly identifying the flow and expected behaviors:

## Affected Flow

**[BlahA Upload and BlahB History Thing](../architecture/flows/[redacted]/README.md)**

The [redacted] workflow should populate [redacted] data via:
- [redacted] selection ([redacted] standard OR [redacted] enhanced based on `[redacted]` flag)
- Parallel execution of: DomainA, DomainB, DomainC, DomainD
- Publishing `order.blahBlahed` event on completion

And for the next steps, it wanted to query the events published by the system to verify that what actually happened matched the expected behaviors defined in the flow:

### Step 1: Query [redacted] Events for Both [redacted]

**Objective:** Compare event history to identify what happened differently between old and new [redacted]

**What to look for:**
- `[redacted event name]` - Did [redacted] complete for new [redacted]?
- `[redacted event name]` - Were [redacted] created?
- `[redacted event name]` - Were [redacted] created?
- `[redacted event name]` - Did workflow complete for new [redacted]?
- `[redacted event name]` - Final confirmation event
- Any error/failure events related to the workflow

Previously, Claude would have had to analyze the codebase to work out what should have happened. That takes a long time, uses up a lot of context window for complex tasks, and the analysis has to be verified.

Now, Claude knows immediately about the specific workflow and affected behaviors and can immediately begin planning an investigation (if the documentation is accurate enough). This analysis is structured with the key information that I need to see. I don’t need to iterate with Claude to produce an analysis in the format I need.

In this case, Claude didn’t resolve the problem immediately, but the conversation was more like I might have with a team member—someone who has a deeper understanding of how the system works and what might be wrong here rather than just using Claude to analyze patterns in data, read stack traces, or summarize text descriptions of the problem.

Accuracy and Hallucinations

I do think it’s right to be concerned about accuracy. We don’t want to make important choices about our system based on incomplete or incorrect details. And there have been significant inaccuracies that I had to spot and correct. (Imagine if I didn’t know they were wrong.)

I explored the challenge of accuracy in this later post showing how we can use deterministic tools like ts-morph to build the model that humans and AI can both benefit from.

So here’s what I’m thinking:

  1. Sometimes we don’t need perfect accuracy. As long as the agent picks the right path, it can reinspect certain details or dive deeper as needed.
  2. We can build checks and steps into our CI pipelines to update things.
  3. Regularly destroy and regenerate the flows (once a quarter?).
  4. Build verification agents or swarms.

When I spotted an error and asked a new agent to analyze the flow for inaccuracies, it rescanned the code and found what I saw. So I think option 4 is very credible—it’s just more effort to build a verification system (which could make the overall effort not worth it).

But I’m not sure this is the optimal way of approaching the situation. Instead…

The Next Phase of Platform Engineering

Avoiding the need to reverse engineer these flows will be key. And I’m starting to think this will become the main challenge for platform engineering teams: How can we build frameworks and tooling that expose our system as a graph of dependencies? Built into our platform so that AI agents don’t need to reverse engineer; they can just consult the source of truth.

Things should all happen transparently for software engineers—you follow the platform paved path, and everything just works. Companies that do this, and especially startups with no legacy, could profit immensely from AI agents.

Tools like EventCatalog are in a strong position here.

Example Flow

I just asked Claude to translate one of my company’s domain flows into a boring ecommerce example. The design and naming is not important; the type of information and the visualization is what I’m trying to convey.

Remember, this is based on one day of hacking around. I’m sure there are lots of improvement opportunities here. Let me know if you have seen anything better.

The README

# Place Order with Payment and Fulfillment

**Status**: Active
**Type**: Write Operation
**Complexity**: High
**Last Updated**: 2025-10-19

## Overview

This flow documents the process of placing an order in an ecommerce system, including payment authorization, inventory reservation, and shipment creation. This is the baseline order placement experience where:
- Orders start with `status: 'pending'`
- Payment is authorized before inventory reservation
- Inventory is reserved upon successful payment
- Shipment is created after inventory reservation

## Flow Boundaries

**Start**: Customer clicks "Place Order" button on checkout page

**End**: Publication of `shipping.shipment-created` event (public event with `DOMAIN` scope)

**Scope**: This flow covers the entire process from initial order submission through payment authorization, inventory reservation, shipment creation, and all asynchronous side effects triggered by these operations.

## Quick Reference

### API Endpoints

| Endpoint | Method | Repository | Purpose |
|----------|--------|------------|---------|
| `/checkout` | GET | storefront-app | Checkout page |
| `/api/orders` | POST | order-api | Creates order |
| `/api/payments/authorize` | POST | payment-api | Authorizes payment |
| `/api/inventory/reserve` | POST | inventory-api | Reserves inventory |
| `/api/shipments` | POST | shipping-api | Creates shipment |
| `/api/orders/{orderId}/status` | GET | order-api | Frontend polls for order status |

### Events Reference

| Event Name | Domain | Subject | Purpose | Consumers |
|------------|--------|---------|---------|-----------|
| `private.orders.order.created` | ORDERS | order | Order creation | PaymentHandler, AnalyticsHandler |
| `private.payments.payment.authorized` | PAYMENTS | payment | Payment authorized | InventoryReservationHandler |
| `private.payments.payment.failed` | PAYMENTS | payment | Payment failed | OrderCancellationHandler |
| `private.inventory.stock.reserved` | INVENTORY | stock | Inventory reserved | ShipmentCreationHandler |
| `private.inventory.stock.insufficient` | INVENTORY | stock | Insufficient stock | OrderCancellationHandler |
| `private.shipping.shipment.created` | SHIPPING | shipment | Shipment created | NotificationHandler |
| `shipping.shipment-created` | SHIPPING | shipment | **PUBLIC** Shipment created | External consumers |

### Database Tables

| Table | Operation | Key Fields | Purpose |
|-------|-----------|------------|---------|
| `orders` | INSERT | orderId, customerId, status='pending', totalAmount | Order aggregate storage |
| `order_items` | INSERT | orderItemId, orderId, productId, quantity, price | Order line items |
| `payments` | INSERT | paymentId, orderId, amount, status='authorized' | Payment aggregate storage |
| `inventory_reservations` | INSERT | reservationId, orderId, productId, quantity | Inventory reservation tracking |
| `shipments` | INSERT | shipmentId, orderId, trackingNumber, status='pending' | Shipment aggregate storage |

### Domain Operations

| Aggregate | Method | Purpose |
|-----------|--------|---------|
| Order | `Order.create()` | Creates new order with pending status |
| Order | `Order.confirmPayment()` | Marks payment as confirmed |
| Order | `Order.cancel()` | Cancels order due to payment or inventory failure |
| Payment | `Payment.authorize()` | Authorizes payment for order |
| Payment | `Payment.capture()` | Captures authorized payment |
| Inventory | `Inventory.reserve()` | Reserves stock for order |
| Shipment | `Shipment.create()` | Creates shipment for order |

## Key Characteristics

| Aspect | Value |
|--------|-------|
| Order Status | Uses `status` field: `'pending'` → `'confirmed'` → `'shipped'` |
| Payment Status | Uses `status` field: `'pending'` → `'authorized'` → `'captured'` |
| Inventory Strategy | Reserve-on-payment approach |
| Shipment Status | Uses `status` field: `'pending'` → `'ready'` → `'shipped'` |

## Flow Steps

1. **Customer** navigates to checkout page in storefront-app (`/checkout`)
2. **Customer** reviews order details and clicks "Place Order" button
3. **storefront-app UI** shows loading state with order confirmation message
4. **storefront-app** sends POST request to order-api (`/api/orders`)
- Request includes: customerId, items (productId, quantity, price), shippingAddress, billingAddress
5. **order-api** creates Order aggregate with `status: 'pending'` and persists to database
6. **order-api** creates OrderItem records for each item in the order
7. **order-api** publishes `private.orders.order.created` event
8. **order-api** returns orderId and order details to storefront-app
9. **storefront-app** redirects customer to order confirmation page

### Asynchronous Side Effects - Payment Processing

10. **order-events-consumer** receives `private.orders.order.created` event
11. **PaymentHandler** processes the event
- Calls payment-api to authorize payment
12. **payment-api** calls external payment gateway (Stripe, PayPal, etc.)
13. **payment-api** creates Payment aggregate with `status: 'authorized'` and persists to database
14. **payment-api** publishes `private.payments.payment.authorized` event (on success)
- OR publishes `private.payments.payment.failed` event (on failure)

### Asynchronous Side Effects - Inventory Reservation

15. **payment-events-consumer** receives `private.payments.payment.authorized` event
16. **InventoryReservationHandler** processes the event
- Calls inventory-api to reserve stock
17. **inventory-api** loads Inventory aggregate for each product
18. **inventory-api** calls `Inventory.reserve()` for each order item
- Validates sufficient stock available
- Creates reservation record
- Decrements available stock
19. **inventory-api** creates InventoryReservation records and persists to database
20. **inventory-api** publishes `private.inventory.stock.reserved` event (on success)
- OR publishes `private.inventory.stock.insufficient` event (on failure)

### Asynchronous Side Effects - Shipment Creation

21. **inventory-events-consumer** receives `private.inventory.stock.reserved` event
22. **ShipmentCreationHandler** processes the event
- Calls shipping-api to create shipment
23. **shipping-api** creates Shipment aggregate with `status: 'pending'` and persists to database
24. **shipping-api** calls external shipping carrier API to generate tracking number
25. **shipping-api** updates Shipment with trackingNumber
26. **shipping-api** publishes `private.shipping.shipment.created` event
27. **shipping-events-consumer** receives `private.shipping.shipment.created` event
- **ShipmentCreatedPublicHandler** processes the event
- Loads shipment from repository to get full shipment details
- Publishes public event: `shipping.shipment-created`
- **This marks the END of the flow**

### Order Status Updates

28. Throughout the flow, order-api receives events and updates order status:
- On `private.payments.payment.authorized`: Updates order with paymentId
- On `private.inventory.stock.reserved`: Updates order to `status: 'confirmed'`
- On `private.shipping.shipment.created`: Updates order to `status: 'shipped'`

### Failure Scenarios

**Payment Failure**:
- On `private.payments.payment.failed`: OrderCancellationHandler cancels order
- Order status updated to `'cancelled'`
- Customer notified via email

**Inventory Failure**:
- On `private.inventory.stock.insufficient`: OrderCancellationHandler cancels order
- Payment authorization is voided
- Order status updated to `'cancelled'`
- Customer notified via email with option to backorder

## Repositories Involved

- **storefront-app**: Frontend UI
- **order-api**: Order domain
- **payment-api**: Payment domain
- **inventory-api**: Inventory domain
- **shipping-api**: Shipping and fulfillment domain
- **notification-api**: Customer notifications

## Related Flows

- **Process Refund**: Flow for handling order refunds and returns
- **Update Shipment Status**: Flow for tracking shipment delivery status
- **Inventory Reconciliation**: Flow for syncing inventory counts with warehouse systems

## Events Produced

| Event | Purpose |
|-------|---------|
| `private.orders.order.created` | Notifies that a new order has been created |
| `private.payments.payment.authorized` | Notifies that payment has been authorized |
| `private.payments.payment.failed` | Notifies that payment authorization failed |
| `private.inventory.stock.reserved` | Notifies that inventory has been reserved |
| `private.inventory.stock.insufficient` | Notifies that insufficient inventory is available |
| `private.shipping.shipment.created` | Internal event that shipment has been created |
| `shipping.shipment-created` | **Public event** that shipment is created and ready for carrier pickup |

## Event Consumers

### `private.orders.order.created` Consumers

#### 1. order-events-consumer

**Handler**: `PaymentHandler`

**Purpose**: Initiates payment authorization process

**Actions**:
- Subscribes to event
- Calls `AuthorizePayment` use case
- Invokes payment-api to authorize payment with payment gateway
- Publishes payment result event

#### 2. order-events-consumer

**Handler**: `AnalyticsHandler`

**Purpose**: Tracks order creation for analytics

**Actions**:
- Subscribes to event
- Sends order data to analytics platform
- Updates conversion tracking

### `private.payments.payment.authorized` Consumer

#### payment-events-consumer

**Handler**: `InventoryReservationHandler`

**Purpose**: Reserves inventory after successful payment

**Actions**:
- Subscribes to event
- Calls `ReserveInventory` use case
- Loads order details
- Calls inventory-api to reserve stock for each item
- Publishes inventory reservation result event

### `private.payments.payment.failed` Consumer

#### payment-events-consumer

**Handler**: `OrderCancellationHandler`

**Purpose**: Cancels order when payment fails

**Actions**:
- Subscribes to event
- Calls `CancelOrder` use case
- Updates order status to 'cancelled'
- Triggers customer notification

### `private.inventory.stock.reserved` Consumer

#### inventory-events-consumer

**Handler**: `ShipmentCreationHandler`

**Purpose**: Creates shipment after inventory reservation

**Actions**:
- Subscribes to event
- Calls `CreateShipment` use case
- Calls shipping-api to create shipment record
- Integrates with shipping carrier API for tracking number
- Publishes shipment created event

### `private.inventory.stock.insufficient` Consumer

#### inventory-events-consumer

**Handler**: `OrderCancellationHandler`

**Purpose**: Cancels order when inventory is insufficient

**Actions**:
- Subscribes to event
- Calls `CancelOrder` use case
- Voids payment authorization
- Updates order status to 'cancelled'
- Triggers customer notification with backorder option

### `private.shipping.shipment.created` Consumer

#### shipping-events-consumer

**Handler**: `ShipmentCreatedPublicHandler`

**Purpose**: Converts private shipment event to public event

**Actions**:
- Subscribes to `private.shipping.shipment.created` event
- Loads shipment from repository
- Publishes public event: `shipping.shipment-created`

**Handler**: `NotificationHandler`

**Purpose**: Notifies customer of shipment creation

**Actions**:
- Subscribes to event
- Sends confirmation email with tracking number
- Sends SMS notification (if opted in)

## Database Operations

### orders Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: orderId, customerId, status='pending', totalAmount, createdAt
- **Repository**: `OrderRepository`

### order_items Table
- **Operation**: INSERT (batch)
- **Key Fields**: orderItemId, orderId, productId, quantity, price
- **Repository**: `OrderItemRepository`

### payments Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: paymentId, orderId, amount, status='authorized', gatewayTransactionId
- **Repository**: `PaymentRepository`

### inventory_reservations Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: reservationId, orderId, productId, quantity, reservedAt
- **Repository**: `InventoryReservationRepository`

### shipments Table
- **Operation**: INSERT (via upsert)
- **Key Fields**: shipmentId, orderId, trackingNumber, status='pending', carrier
- **Repository**: `ShipmentRepository`

## External Integrations

- **Payment Gateway Integration**: Authorizes and captures payments via Stripe API
- Endpoint: `/v1/payment_intents`
- Synchronous call during payment authorization

- **Shipping Carrier Integration**: Generates tracking numbers via carrier API
- Endpoint: `/api/v1/shipments`
- Synchronous call during shipment creation

## What Happens After This Flow

This flow ends with the publication of the `shipping.shipment-created` public event, which marks the order as fully processed and ready for carrier pickup.

### State at Flow Completion
- Order: `status: 'shipped'`
- Payment: `status: 'authorized'` (will be captured on actual shipment)
- Inventory: Stock reserved and decremented
- Shipment: `status: 'pending'`, trackingNumber assigned

### Next Steps
After this flow completes:
- Warehouse team picks and packs the order
- Carrier picks up the shipment
- Shipping status updates flow tracks delivery
- Payment is captured upon confirmed shipment
- Customer can track order via tracking number

### External System Integration
Once the `shipping.shipment-created` event is published:
- Warehouse management system begins pick/pack process
- Customer notification system sends tracking updates
- Logistics partners receive shipment manifest
- Analytics systems track fulfillment metrics

## Diagram

See [diagram.mermaid](./diagram.mermaid) for the complete visual flow showing the progression through systems with horizontal swim lanes for each service.

The Mermaid

flowchart LR
Start([Customer clicks Place Order<br/>on checkout page])

subgraph storefront["🌐 storefront-app"]
direction LR
ShowCheckout[Show checkout page]
CustomerReview[Customer reviews order]
ShowConfirmation[Show order<br/>confirmation page]
end

CustomerWaitsForShipment([Customer receives<br/>shipment notification])

subgraph orderService["📦 order-api"]
direction LR
CreateOrderEndpoint["POST /api/orders"]
OrderCreate[Order.create]
OrderDB[(Database:<br/>INSERT orders,<br/>order_items)]
PublishOrderCreated["Publish: private.orders<br/>.order.created"]
ReceivePaymentAuth["Receive: private.payments<br/>.payment.authorized"]
UpdateOrderPayment[Update order<br/>with paymentId]
ReceiveStockReserved["Receive: private.inventory<br/>.stock.reserved"]
OrderConfirm[Order.confirmPayment]
UpdateOrderConfirmed[(Database:<br/>UPDATE orders<br/>status='confirmed')]
ReceiveShipmentCreated["Receive: private.shipping<br/>.shipment.created"]
UpdateOrderShipped[(Database:<br/>UPDATE orders<br/>status='shipped')]
end

subgraph paymentService["💳 payment-api"]
direction LR
ReceiveOrderCreated["Receive: private.orders<br/>.order.created"]
AuthorizeEndpoint["POST /api/payments/<br/>authorize"]
PaymentGateway["External: Payment<br/>Gateway API<br/>(Stripe)"]
PaymentAuthorize[Payment.authorize]
PaymentDB[(Database:<br/>INSERT payments)]
PublishPaymentAuth["Publish: private.payments<br/>.payment.authorized"]
end

subgraph inventoryService["📊 inventory-api"]
direction LR
ReceivePaymentAuth2["Receive: private.payments<br/>.payment.authorized"]
ReserveEndpoint["POST /api/inventory/<br/>reserve"]
InventoryReserve[Inventory.reserve]
InventoryDB[(Database:<br/>INSERT inventory_reservations<br/>UPDATE product stock)]
PublishStockReserved["Publish: private.inventory<br/>.stock.reserved"]
end

subgraph shippingService["🚚 shipping-api"]
direction LR
ReceiveStockReserved2["Receive: private.inventory<br/>.stock.reserved"]
CreateShipmentEndpoint["POST /api/shipments"]
CarrierAPI["External: Shipping<br/>Carrier API<br/>(FedEx/UPS)"]
ShipmentCreate[Shipment.create]
ShipmentDB[(Database:<br/>INSERT shipments)]
PublishShipmentCreated["Publish: private.shipping<br/>.shipment.created"]
ReceiveShipmentCreatedPrivate["Receive: private.shipping<br/>.shipment.created"]
LoadShipment[Load shipment<br/>from repository]
PublishPublicEvent["Publish: shipping<br/>.shipment-created"]
FlowEnd([Flow End:<br/>Public event published])
end

Start --> ShowCheckout
ShowCheckout --> CustomerReview
CustomerReview --> CreateOrderEndpoint
CreateOrderEndpoint --> OrderCreate
OrderCreate --> OrderDB
OrderDB --> PublishOrderCreated
PublishOrderCreated --> ShowConfirmation

PublishOrderCreated -.-> ReceiveOrderCreated
ReceiveOrderCreated --> AuthorizeEndpoint
AuthorizeEndpoint --> PaymentGateway
PaymentGateway --> PaymentAuthorize
PaymentAuthorize --> PaymentDB
PaymentDB --> PublishPaymentAuth

PublishPaymentAuth -.-> ReceivePaymentAuth
ReceivePaymentAuth --> UpdateOrderPayment

PublishPaymentAuth -.-> ReceivePaymentAuth2
ReceivePaymentAuth2 --> ReserveEndpoint
ReserveEndpoint --> InventoryReserve
InventoryReserve --> InventoryDB
InventoryDB --> PublishStockReserved

PublishStockReserved -.-> ReceiveStockReserved
ReceiveStockReserved --> OrderConfirm
OrderConfirm --> UpdateOrderConfirmed

PublishStockReserved -.-> ReceiveStockReserved2
ReceiveStockReserved2 --> CreateShipmentEndpoint
CreateShipmentEndpoint --> CarrierAPI
CarrierAPI --> ShipmentCreate
ShipmentCreate --> ShipmentDB
ShipmentDB --> PublishShipmentCreated

PublishShipmentCreated -.-> ReceiveShipmentCreated
ReceiveShipmentCreated --> UpdateOrderShipped

PublishShipmentCreated -.-> ReceiveShipmentCreatedPrivate
ReceiveShipmentCreatedPrivate --> LoadShipment
LoadShipment --> PublishPublicEvent
PublishPublicEvent --> FlowEnd

FlowEnd -.-> CustomerWaitsForShipment

style Start fill:#e1f5e1
style FlowEnd fill:#ffe1e1
style CustomerWaitsForShipment fill:#e1f5e1
style PublishOrderCreated fill:#fff4e1
style PublishPaymentAuth fill:#fff4e1
style PublishStockReserved fill:#fff4e1
style PublishShipmentCreated fill:#fff4e1
style PublishPublicEvent fill:#fff4e1
style OrderDB fill:#e1f0ff
style PaymentDB fill:#e1f0ff
style InventoryDB fill:#e1f0ff
style ShipmentDB fill:#e1f0ff
style UpdateOrderConfirmed fill:#e1f0ff
style UpdateOrderShipped fill:#e1f0ff
style PaymentGateway fill:#ffe1f5
style CarrierAPI fill:#ffe1f5



Read the whole story
alvinashcraft
52 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Open Source Software, Public Policy, and the Stakes of Getting It Right


Open Source software plays a central role in global innovation, research, and economic growth. That statement is familiar to anyone working in technology, but the scale of its impact is still startling. A 2024 Harvard-backed study estimates that the demand-side value of the Open Source ecosystem is approximately $8.8 trillion, and that companies would need to spend 3.5 times more on software if Open Source did not exist.

Those numbers underscore a simple truth: Open Source is not a niche concern or a developer-only issue. It is economic infrastructure. And like any critical infrastructure, it depends not only on technical excellence, but on policy environments that understand how it works.

This reality sits at the center of the Open Source Initiative’s (OSI) expanding work in public policy, a move that reflects how deeply Open Source is now entangled with global regulation, security, and emerging technologies like AI.

Check out the good work of the OSI and read the complete post at:

https://opensource.org/blog/open-source-software-public-policy-and-the-stakes-of-getting-it-right




AGL 455: Adam Christing on The Laughter Factor


About Adam

Adam Christing brings people together with humor and heart! He’s a captivating keynote speaker and an award-winning event emcee. Adam has delighted over two million people across 49 of the 50 U.S. states and internationally. He is a performing member of Hollywood’s world-famous Magic Castle. He has been featured on Entertainment Tonight and more than 100 top podcasts, TV shows, and radio programs. Adam was recently featured on Harvard Business Review IdeaCast. He is the author of The Laughter Factor: The 5 Humor Tactics to Link, Lift, and Lead (Penguin Random House, BK Books).


Today We Talked About

  • Adam’s background
  • Comedy
  • Have Fun
  • Ha-uthenticity
  • Laugh Languages
  • SAD – Surprise And Delight
  • 5 Tactics
    • Surprise
    • Poke
    • In-Joke
    • Wordplay
    • Amplify
  • Leadership
  • Laughter is a short-cut to trust
  • Dad Jokes
    • Feeling Safe
  • Brickwalls
    • Get closer together
  • Transformation over information

Connect with Adam


Leave me a tip $
Click here to Donate to the show


I hope you enjoyed this show. Please head over to Apple Podcasts, subscribe, and leave me a rating and review; even one sentence will help spread the word. Thanks again!





Download audio: https://media.blubrry.com/a_geek_leader_podcast__/mc.blubrry.com/a_geek_leader_podcast__/AGL_455_Adam_Christing_on_The_Laughter_Factor.mp3?awCollectionId=300549&awEpisodeId=11884562&aw_0_azn.pgenre=Business&aw_0_1st.ri=blubrry&aw_0_azn.pcountry=US&aw_0_azn.planguage=en&cat_exclude=IAB1-8%2CIAB1-9%2CIAB7-41%2CIAB8-5%2CIAB8-18%2CIAB11-4%2CIAB25%2CIAB26&aw_0_cnt.rss=https%3A%2F%2Fwww.ageekleader.com%2Ffeed%2Fpodcast

Why "I'll Just Do It Myself" Is the Most Expensive PO Shortcut | Juliana Stepanova


Juliana Stepanova: Why "I'll Just Do It Myself" Is the Most Expensive PO Shortcut

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

In this episode, we refer to previous discussions about team collaboration and Product Owner patterns.

The Great Product Owner: Opening Up to the Team for Solutions

"The PO who's not sitting and saying 'I know how it's right, I will solve it by myself,' but coming and saying 'Hey, let's think all together'—that's what gives very, very speed-up development into becoming a great PO." - Juliana Stepanova

 

Juliana describes the Product Owners she considers truly great as those who bring their challenges to the team rather than solving everything alone. Her example features a PO who was invited to recurring release meetings that consumed one and a half to two hours every two weeks—30 people in a room, largely a waste of time. Instead of suffering in silence or trying to fix it alone, this PO approached the team: "Hey guys, I have these meetings, and they're useless for me. How can we deal with that?" The team collaborated with the Scrum Master to explore multiple options. 

Together, they developed a streamlined, semi-automatic system that reduced the process to 10 minutes without requiring anyone to sit in a room. This solution was so effective that it was eventually adopted across the entire company, eliminating countless hours of wasted meetings. The key insight: great POs see themselves as part of the team, not above it. They're open to solutions from anyone and understand that collaboration—not individual genius—drives real improvements.

 

Self-reflection Question: When facing challenges that seem outside the team's domain, do you bring them to the team for collaborative problem-solving, or do you try to solve them alone?

The Bad Product Owner: The Loner Who Does Everyone's Job

"To make it quicker, I will skip asking the designer, I will directly put it by myself. I learned how to design five years ago. But afterwards, it's neglecting the whole team—you don't take into account the UX, and actually you need to rework." - Juliana Stepanova

 

The anti-pattern Juliana sees most frequently is the "loner" PO—someone who takes on other roles to move faster. The classic example: a PO who bypasses the UX/UI designer because "I learned design five years ago, I'll just do it myself." This behavior seems efficient in the moment but creates multiple problems. It disrespects the expertise of team members, undermines the collaborative nature of agile development, and almost inevitably leads to rework when the shortcuts create quality gaps. 

Juliana points out this isn't unique to POs—developers sometimes bypass testers for the same "efficiency" reasons. The solution isn't punishment but cultural reinforcement: helping people see the value of professional work, encouraging communication and openness, and building respect for each role's contribution. The key principle: if someone hasn't asked for help, don't assume they need yours. Focus on your own job, and offer assistance only when invited or when you explicitly ask "Do you need help?"

 

Self-reflection Question: When have you taken on someone else's role because it seemed faster, and what was the real cost of that shortcut?

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Juliana Stepanova

 

Juliana is an Agile coach and Scrum Master whose work focuses on transformation through people and processes rather than the other way around. She helps teams and leaders create clarity, build trust, and deliver value with purpose. Her work combines structure with empathy and is always focused on real collaboration and meaningful results.

 

You can link with Juliana Stepanova on LinkedIn.

 

You can also follow Juliana on Twitter.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260206_Juliana_Stepanova_F.mp3?dest-id=246429

Improving Your GitHub Developer Experience


What are ways to improve how you’re using GitHub? How can you collaborate more effectively and improve your technical writing? This week on the show, Adam Johnson is back to talk about his new book, “Boost Your GitHub DX: Tame the Octocat and Elevate Your Productivity”.

Adam has written a series of books about improving developer experience (DX). In this episode, we dig into his newest book, which focuses on GitHub and how to get the most out of its features—from settings and keyboard shortcuts to hidden tools, CLI commands, and the command palette.

Adam also shares insights on the best ways to communicate on the platform. We discuss the nuances of GitHub-Flavored Markdown (GFM), best practices for effective communication in open source, the importance of maintaining civility in issue reports, and why he included a glossary of acronyms to help developers decode common shorthand like LGTM and FTFY.

This episode is sponsored by Honeybadger.

Course Spotlight: Introduction to Git and GitHub for Python Developers

What is Git, what is GitHub, and what’s the difference? Learn the basics of Git and GitHub from the perspective of a Pythonista in this step-by-step video course.

Topics:

  • 00:00:00 – Introduction
  • 00:02:20 – Why the focus on developer experience?
  • 00:03:41 – Process of writing the book
  • 00:06:26 – Filling in the gaps of knowledge
  • 00:11:52 – GitHub-Flavored Markdown
  • 00:16:00 – Sponsor: Honeybadger
  • 00:16:47 – Acronym glossary
  • 00:25:18 – GitHub command palette
  • 00:28:35 – What questions did you want to answer?
  • 00:29:42 – Whether to cover Copilot or not
  • 00:36:14 – Video Course Spotlight
  • 00:37:50 – Advice on working with coding agents
  • 00:40:46 – Defining the scope
  • 00:48:07 – GitHub pages and codespaces
  • 00:50:46 – Finding the hidden features
  • 00:51:53 – Data-oriented Django series
  • 00:53:59 – How to find the book
  • 00:54:51 – What are you excited about in the world of Python?
  • 00:57:27 – What do you want to learn next?
  • 00:58:00 – How can people follow your work online?
  • 00:58:22 – Thanks and goodbye

Show Links:

Level up your Python skills with our expert-led courses:

Support the podcast & join our community of Pythonistas





Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E283_03_Adam.bf73eccb2f84.mp3

From Manual Calculations to AI-Driven Grading Logic with the JavaScript DataGrid [Webinar Show Notes]



In this webinar, Prabhavathi Kannan demonstrated how to enhance the Syncfusion® JavaScript DataGrid with Azure OpenAI to create an AI-powered, predictive data-entry experience. The session showed how entire datasets can be sent to an AI model to predict values, calculate totals, assign grades, and update the grid dynamically without writing traditional calculation formulas.

If you missed the webinar or would like to review part of it, the recording has been uploaded to our YouTube channel and embedded below.

Time stamps

  • [00:00] Welcome and session introduction.
  • [00:46] Agenda and session goals.
  • [01:11] Poll: What would you want AI to predict?
  • [01:48] Overview of the AI-driven DataGrid concept.
  • [02:41] Syncfusion AI-ready toolkit overview.
  • [03:33] Prerequisites and setup requirements.
  • [03:57] Poll: Most-used Syncfusion control.
  • [04:27] Project setup in Visual Studio Code.
  • [05:39] Understanding the dataset and grid structure.
  • [06:31] Initial grid configuration and layout.
  • [07:29] Creating the grid and toolbar button.
  • [10:11] Wiring the Calculate Grade button.
  • [14:10] Running the base grid.
  • [14:41] Adding Azure OpenAI integration.
  • [17:21] Generating prompts for AI predictions.
  • [18:50] Executing AI logic and grading rules.
  • [20:04] Updating the grid with predicted values.
  • [21:56] Styling cells and adding animations.
  • [23:46] Live demo: AI-powered grading in action.
  • [24:54] Key takeaways and recap.
  • [25:21] Applying this approach to real-world apps.

What was built in this session

The demo application used the JavaScript DataGrid populated with student GPA data from three academic years. With a single button click, the dataset was sent to Azure OpenAI, which predicted the final-year GPA, calculated the total GPA, and assigned letter grades automatically.

Syncfusion AI-ready toolkit overview

Syncfusion’s AI-ready toolkit enables seamless integration with AI models such as Azure OpenAI, OpenAI, Gemini, and Anthropic. The toolkit has components for major frameworks, including JavaScript, React, Angular, Vue, Blazor, and ASP.NET.

Prerequisites and project setup

To follow along, developers need Visual Studio Code, Node.js, TypeScript, access to Azure OpenAI with a deployed model, and the Syncfusion JavaScript DataGrid packages.

How the AI integration works

The solution uses a structured prompt that combines grid data and grading rules into a single instruction. Azure OpenAI returns a JSON-only response, which is parsed and bound directly back to the DataGrid for predictable results.
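
A rough sketch of that round trip, with the Azure OpenAI call replaced by a stubbed reply: the row shape, grading thresholds, and function names here are invented for illustration and are not the webinar's actual code.

```typescript
// Illustrative prompt construction and response handling for the
// grid-to-AI round trip. The row shape and grading rules are made up;
// the Azure OpenAI call is represented by a stubbed string reply.
interface StudentRow { name: string; gpaYear1: number; gpaYear2: number; gpaYear3: number; }
interface GradedRow extends StudentRow { predictedGpa: number; grade: string; }

// Combine the grid data and grading rules into one instruction,
// asking for a JSON-only response so it can be parsed predictably.
function buildPrompt(rows: StudentRow[]): string {
  return [
    "You are given student GPA data as JSON.",
    "Predict a final-year GPA per student and assign a letter grade",
    "(A >= 3.5, B >= 3.0, C >= 2.5, else D).",
    "Respond with JSON only: an array of objects with all input fields",
    "plus predictedGpa and grade.",
    JSON.stringify(rows),
  ].join("\n");
}

// Parse the model's JSON-only reply before binding it back to the grid.
function parseAiResponse(raw: string): GradedRow[] {
  const parsed = JSON.parse(raw);
  if (!Array.isArray(parsed)) throw new Error("expected a JSON array");
  return parsed as GradedRow[];
}

const rows: StudentRow[] = [{ name: "Ada", gpaYear1: 3.6, gpaYear2: 3.7, gpaYear3: 3.8 }];
const prompt = buildPrompt(rows);
// Stubbed model reply, standing in for the Azure OpenAI call:
const reply = '[{"name":"Ada","gpaYear1":3.6,"gpaYear2":3.7,"gpaYear3":3.8,"predictedGpa":3.8,"grade":"A"}]';
const graded = parseAiResponse(reply);
console.log(graded[0].grade); // "A"
```

The parsed array can then be bound back to the DataGrid's data source in the usual way.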

Dynamic updates, styling, and animations

Predicted values are applied row by row with smooth animations. Custom cell styling uses color coding to highlight performance, providing immediate visual clarity.
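
The color coding could be as simple as a grade-to-class mapping. The class names below are invented for the sketch, not the demo's actual styles.

```typescript
// Hypothetical mapping from letter grade to a CSS class applied to
// predicted cells; the webinar's real class names may differ.
function gradeCssClass(grade: string): string {
  switch (grade) {
    case "A": return "cell-green";
    case "B": return "cell-blue";
    case "C": return "cell-amber";
    default:  return "cell-red";
  }
}

console.log(gradeCssClass("A")); // "cell-green"
```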

Q&A

Q: Can we use it on projects for other companies?

A: Yes, you can use it on other companies’ projects. The key requirement is to generate an appropriate prompt for the AI based on the specific needs and component settings. Once the AI provides its response, you will need to programmatically update the component to reflect the changes and display them in the UI.

Q: How does this thing internally work?

A: First, determine whether the desired functionality is achievable within the component. If it is, generate a tailored prompt to obtain a suitable response from the AI that aligns with the component’s capabilities. Based on the AI’s output, execute the corresponding actions or updates within the component.

Q: Can you create a prompt that will allow AI to dynamically answer the question and adjust the grid to respond to what was entered into the prompt?

A: Yes, this is achievable. For an example of an interactive grid powered by AI, please refer to this Syncfusion JS 2 React demo.

Q: If a user enters a question, will the grid adjust, or does the grid need to be preprogrammed ahead of time?

A: Yes, dynamic adjustment is possible without preprogramming every possible query. The grid can respond intelligently to natural language inputs in real time. Please refer to this interactive AI grid demo for a practical illustration.

Takeaways

This approach eliminates manual calculations, simplifies business logic, and enables developers to build intelligent, AI-driven data grids applicable to many real-world scenarios beyond grading.

Related resources
