BONUS: Conflict Is the Yellow Brick Road to Success — How Embracing Conflict Transforms Teams and Leaders
In this bonus episode, we explore why fear, conflict, and courage sit at the heart of true agility with Dan Tocchini, a leadership catalyst who has spent over four decades helping teams at organizations like ESPN, Disney, and Homeboy Industries break through the human barriers to high performance. Dan shares powerful stories and practical wisdom on how leaders can embrace conflict as a generative force, build trust through vulnerability, and restructure their teams for genuine agility.
The Power of Vulnerability in Leadership
"I'd rather have it on an honest basis, where she knows what I'm thinking, what I'm aiming at, and we're shoulder to shoulder, not head to head."
Dan's career-defining moment came when he told a CFO at ESPN — while he was competing against McKinsey for the same contract — that she was the problem behind her department's 75% turnover rate. Rather than sugarcoating or deflecting, Dan chose vulnerability and honesty, even at the risk of losing the contract. This radical transparency became his superpower. The CFO hired him, and within six months, turnover dropped to 15%. Dan stayed with ESPN for eight years. The lesson for Scrum Masters and leaders: you can only truly connect with someone if you're willing to be honest, even when it might cost you.
Listening for Openings, Not Outcomes
"Most people listen for outcomes. I listen for openings."
Dan draws a critical distinction between chasing outcomes and discovering openings. When faced with an angry car buyer who felt ripped off, Dan didn't try to close the sale. Instead, he leaned into the conflict, acknowledged the customer's perspective, and opened all the books. The result? A sale with 17% margin — above the dealership average — because the customer chose the price himself. For leaders, this means detaching from your desired outcome and focusing on understanding the opening in front of you. That shift builds trust and often produces better results than pushing for what you want.
Why Team Drama Is a Distraction Strategy
"Whenever there's drama, it's because people don't want you to see something."
Drama in teams happens because people are siloed, and they silo because they don't trust each other. They share only the information that serves their position without jeopardizing their role. The drama itself is a distraction — like a child throwing a tantrum so you'll forget what they did wrong. Dan's approach: ask three questions. What are they committed to causing? How much of that are they producing? And what's the story between the two? The problem is never the problem — the problem is how you think about the problem.
Restructuring for Agility: A Restaurant Case Study
"Your way of being needs to be bigger than the structure."
Dan illustrates agile restructuring through a top-25 restaurant in Boise where the general manager flows seamlessly between roles — bussing tables, coordinating with the kitchen, and leading the team — without ever pulling rank. The secret? He grounds his team before every shift with genuine connection, shared meals, and open dialogue. When he gives direction, people move — not from fear, but from respect. Structure alone won't solve problems; it only organizes them so you can see them better. Leaders must be committed to what the structure is designed to accomplish, altering it in motion when needed.
Conflict as a Generative Force
"What you're not willing to face will eventually defeat you."
Dan's core philosophy centers on embracing conflict rather than avoiding it. When people face conflict, they either seek comfort by avoiding it or realize what's at stake and find a way through. The Stoic principle "the obstacle is the way" applies: to find the path, you must hug the cactus and pull the problem close. In relationships — whether marriage, team, or client — breakdowns should deepen intimacy and trust. Dan reports that 90% of the time, authentically facing into mistakes with clients deepens relationships and keeps contracts alive.
What Keeps Dan Going After Four Decades
"People love to accomplish things they didn't think they could do. To me, that's exciting."
After more than 40 years in this work, Dan remains energized by working with people to accomplish challenges they initially thought impossible. He describes his work as akin to family — that same depth of connection and shared purpose. His one-liner: "We turn leadership into leadership." It sparks curiosity and opens conversations about what real leadership transformation looks like.
About Dan Tocchini
Dan Tocchini has spent 35+ years working with leadership teams across the spectrum — from ESPN to nonprofits like Defy Ventures — helping them evolve from functional to fully alive. His work focuses on the human systems that make agile succeed… or silently kill it.
You can find out more about Dan and his leadership training programs at TakeNewGround.com.
Why most AI-generated code fails at scale and the hybrid workflow you need to build apps that last.
We’ve all seen the videos: a developer prompts an AI for 30 seconds and, boom 🤯, a functional Flutter app appears. It runs. It looks decent. It even demos well.
I’ve been on the other side of that story.
I’ve refactored AI-generated Flutter code that worked perfectly at 100 users and quietly collapsed at 50,000. Not with a dramatic crash, but with jank, untestable logic, and a codebase no one on the team felt safe touching.
That’s when the truth becomes obvious:
AI is a great builder. It’s unreliable as an architect.
If you want to build a real product, not just win a hackathon, you need to stop vibe coding and start orchestrating.
What “Vibe Coding” Really Means
By vibe coding, I mean this:
Prompting AI until the app looks right, without fully understanding why it works.
There’s nothing wrong with vibe coding for exploration. The problem starts when demo-quality decisions silently become production architecture.
AI optimizes for syntax completion, not long-term system behavior. And that gap shows up fast when users, features, and expectations scale.
1. The Demo Trap: Why AI Code Breaks at Scale
When you ask an AI to build a Flutter screen, it solves the visible problem, not the architectural one. Here’s where AI-generated code usually struggles as usage grows.
The Global State Mess
AI often defaults to setState or a single massive ChangeNotifier.
At 1 screen, that’s fine. At 20 features, every UI change rebuilds unrelated widgets, and debugging state changes becomes guesswork.
On a mid-range Android device, excessive rebuilds can drop scrolling below 40 FPS, long before you notice in development.
Widget Bloat
AI doesn’t naturally think in composition or atomic design.
You’ll see:
400–500 line build() methods
deeply nested widgets
UI logic mixed with business rules
Flutter’s rendering pipeline is fast, but rebuilding entire subtrees unnecessarily will hurt performance and readability as the app grows.
Context Blindness
AI has no idea where your app is going.
It might:
suggest packages that are unmaintained
tightly couple features that should evolve independently
optimize for today’s UI, not tomorrow’s roadmap
None of this is malicious. It’s just not the AI’s job to think six months ahead.
2. The Hybrid Workflow: Fast and Sustainable
The answer isn’t avoiding AI. It’s using it at the right layers.
Think in terms of phases, not prompts.
Phase 1: AI-First Scaffolding (The Skeleton)
AI is excellent at removing friction from boring, repetitive work.
Use it to generate:
Data models (for example, converting API JSON into immutable models)
Boilerplate UI (login → core action → results)
Basic navigation flows
At this stage, speed matters more than perfection.
The goal isn’t “production-ready.” The goal is momentum.
Phase 2: Human-Led Architecture (The Nervous System)
This is where senior engineering judgment kicks in.
Once the skeleton exists, pause. Refactor before adding features.
1. Modularize Early
Don’t let unrelated logic live together.
Structure by features, not files:
lib/
└── features/
    ├── auth/
    ├── home/
    └── chat/
This one decision determines how painful your next six months will be.
2. State Management Handoff
If AI gives you setState, treat it as a placeholder.
Move business logic into a predictable, testable layer (Riverpod, BLoC, or equivalent). The specific tool matters less than the principle:
UI should react to state — not own it.
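As a minimal, framework-agnostic sketch of that principle — Riverpod and BLoC enforce the same split with more structure, and the names here (`CounterLogic`, `CounterView`) are illustrative:

```dart
import 'package:flutter/material.dart';

// Business logic lives outside the widget tree; the UI only listens.
class CounterLogic {
  final ValueNotifier<int> count = ValueNotifier<int>(0);
  void increment() => count.value++; // testable without building any widget
}

class CounterView extends StatelessWidget {
  const CounterView({super.key, required this.logic});
  final CounterLogic logic;

  @override
  Widget build(BuildContext context) {
    // Only this builder re-runs when count changes, not the whole tree.
    return ValueListenableBuilder<int>(
      valueListenable: logic.count,
      builder: (context, value, _) => Text('Count: $value'),
    );
  }
}
```

The widget never mutates the counter directly; it reacts to a state object it can observe but does not own.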
3. Dependency Boundaries
Hardcoded services make testing and refactoring painful.
Introduce dependency injection early so:
services can be mocked
features remain isolated
refactors don’t cascade across the app
This is invisible work but it’s what keeps teams moving fast later.
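A small illustration of the idea — `ApiClient` and `ProfileRepository` are hypothetical names, and any DI container (or plain constructor injection, as here) works:

```dart
// Depend on an abstraction so tests can supply a fake implementation.
abstract class ApiClient {
  Future<String> fetchProfile(String id);
}

class ProfileRepository {
  // Constructor injection: no hidden singletons, no hardcoded services.
  ProfileRepository(this.api);
  final ApiClient api;

  Future<String> load(String id) => api.fetchProfile(id);
}
```

Because `ProfileRepository` never names a concrete service, swapping the network layer later doesn’t cascade through the feature.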
3. Engineering for the Millionth User
Scaling is rarely about one big optimization. It’s about dozens of small, boring decisions done early.
Move Heavy Work Off the UI Thread
If your app performs local processing (image work, data parsing, AI inference), isolate it.
Flutter’s UI thread should stay boring. Anything expensive belongs elsewhere or your users will feel it.
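In Dart, `Isolate.run` (Dart 2.19+) — or Flutter’s `compute` — is the usual tool. For example, decoding a large JSON payload off the UI thread:

```dart
import 'dart:convert';
import 'dart:isolate';

// A sketch: the expensive jsonDecode call runs in a short-lived
// isolate, so the UI thread never blocks on it.
Future<Map<String, dynamic>> parseInBackground(String rawJson) {
  return Isolate.run(() => jsonDecode(rawJson) as Map<String, dynamic>);
}
```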
Lazy Loading Is Not Optional
AI often generates eager lists because they’re simpler.
At scale:
switch to builders
paginate aggressively
assume lists will grow far beyond today’s expectations
Rendering only what the user sees is not an optimization; it’s table stakes.
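In Flutter terms, that means `ListView.builder` instead of eagerly built children; a minimal sketch (the `MessageList` widget is illustrative):

```dart
import 'package:flutter/material.dart';

class MessageList extends StatelessWidget {
  const MessageList({super.key, required this.messages});
  final List<String> messages;

  @override
  Widget build(BuildContext context) {
    // The builder constructs item widgets on demand, so only rows
    // currently on screen are built — the list can grow freely.
    return ListView.builder(
      itemCount: messages.length,
      itemBuilder: (context, index) =>
          ListTile(title: Text(messages[index])),
    );
  }
}
```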
We’ve all experienced a frozen web page followed by endless refreshing, frustrated sighs, and the occasional foot stomp, only to keep seeing the spinning wheel. In many cases, this is caused by a bottleneck in JavaScript’s main thread.
The main thread is a single-lane highway where the browser processes everything in a strict sequence. It handles clicks, manages scrolling, renders animations, and executes your logic. Because it can only do one thing at a time, adding heavy computations to the mix creates a massive traffic jam. When the main thread is overloaded, the entire UI dies.
We’ve talked a lot about WebAssembly (Wasm) lately, so it should come as no surprise that Wasm is a good solution here. By combining the raw power of Wasm with Web Workers, we can move those heavy calculations into a background lane. This takes the pressure off the main thread, allowing your users to continue scrolling and clicking without interruption.
To show the benefit of using C compiled to Wasm for heavy calculations over plain JavaScript, this tutorial will help you build a high-performance Fibonacci calculator. We’ll run the recursive Fibonacci algorithm both through Wasm in a Web Worker and in plain JavaScript, so you can see how the timings stack up.
By offloading intensive math to a background thread, we will demonstrate how to keep the user interface functional and fluid even when the processor is under heavy load. Though the recursive Fibonacci algorithm isn’t suited for a working application, this project serves as a blueprint for keeping web applications responsive.
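For reference, the plain JavaScript version of the recursive algorithm we’ll race against Wasm is only a few lines (the function name `fibJs` is our choice):

```javascript
// Plain recursive Fibonacci in JavaScript — the timing baseline.
// Exponential time on purpose: it exists to stress the CPU.
function fibJs(n) {
  if (n < 2) return n;
  return fibJs(n - 1) + fibJs(n - 2);
}
```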
Open a terminal in your project and navigate to wasm-worker-demo. Once you’re in that folder, type ls and make sure you see your files: index.html, main.js, worker.js, and compute.c. Once you’re sure you’re in the right place, we need to download the Emscripten SDK.
The Emscripten SDK compiles C/C++ to Wasm. We can’t move forward without it. In your terminal, type the following command:
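The SDK is distributed via its GitHub repository, so the download is a single clone:

```shell
git clone https://github.com/emscripten-core/emsdk.git
```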
This will add a folder called emsdk to your current directory.
Go into the emsdk folder with the command cd emsdk.
Install the latest version of Emscripten: ./emsdk install latest
Activate the SDK so your terminal can use it: ./emsdk activate latest
The last step here is to set up your environment variables for this terminal session: source ./emsdk_env.sh
Confirm the setup worked with the command: emcc -v
Head back to your main project folder in the terminal.
We’re ready to start building out the files.
Build Wasm calculation logic in C
Let’s start in the compute.c file. The algorithm we’re going to use for this demo is the recursive Fibonacci sequence. Cue the nightmares for anyone who’s gone to coding school. This algorithm, though inefficient for a production-level application, is perfect for this demo. It creates billions of function calls and pushes the CPU to its limits.
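A minimal version of compute.c might look like this; the function name calculate_fib matches the export flag in the build step, and EMSCRIPTEN_KEEPALIVE is guarded so the same file also compiles with an ordinary C compiler:

```c
#ifdef __EMSCRIPTEN__
#include <emscripten.h>
#define KEEPALIVE EMSCRIPTEN_KEEPALIVE
#else
#define KEEPALIVE /* plain build: no export attribute needed */
#endif

/* Deliberately inefficient recursive Fibonacci: fib(45) alone makes
   billions of calls, which is exactly the CPU stress we want. */
KEEPALIVE
int calculate_fib(int n) {
    if (n < 2) return n;
    return calculate_fib(n - 1) + calculate_fib(n - 2);
}
```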
Emscripten compiles our C code to a .wasm binary and a .js “glue” file. The .wasm binary is the compiled version of your C code. It contains the low-level instructions that the browser can execute at near-native speed. The .js “glue” file acts as the bridge between languages, providing the necessary code to load the binary and allowing JavaScript to call functions inside the WebAssembly module.
Type this command into your terminal to build the .wasm and .js files. In less than a minute you should see them appear in your project.
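Assuming we name the output fib.js (the matching fib.wasm is emitted alongside it — the name is our choice), the build command looks like this; note Emscripten prefixes exported C functions with an underscore:

```shell
emcc compute.c -O3 \
  -s MODULARIZE=1 \
  -s EXPORTED_FUNCTIONS='["_calculate_fib"]' \
  -o fib.js
```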
-O3: High-level optimization. It tells the compiler to make the code as fast as humanly possible.
-s MODULARIZE=1: Wraps the output into a Promise-based module, making it easier to load safely.
-s EXPORTED_FUNCTIONS: Tells Emscripten not to remove our calculate_fib function during optimization.
Web Workers
Workers allow the calculation to happen outside the main thread. We don’t want to run this Wasm on the main thread because a calculation of this magnitude will hijack the browser’s attention and cause the UI to freeze until the task is finished. To avoid this, we use a Web Worker: a dedicated background script that runs in its own isolated thread, completely independent of the user interface.
The code below initializes the background environment by loading the Emscripten “glue” script and waiting for the WebAssembly module to be fully ready. Once loaded, it maps the C-based calculate_fib function into a usable JavaScript variable via cwrap. Then it sets up an event listener to receive numbers from the main thread, performs the calculation in isolation, and sends the result back without interrupting the user’s experience.
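A worker.js matching that description could look like the following; it assumes the Emscripten output from the build step is named fib.js:

```javascript
// worker.js — runs on its own thread, isolated from the UI.
importScripts('fib.js'); // load the Emscripten glue script

let calculateFib = null;

// With MODULARIZE=1, the glue file exposes a factory that returns
// a promise resolving to the initialized Wasm module.
const ready = Module().then((instance) => {
  // Map the C function: returns a number, takes one number argument.
  calculateFib = instance.cwrap('calculate_fib', 'number', ['number']);
});

self.onmessage = async (event) => {
  await ready; // never compute before the module has finished loading
  const result = calculateFib(event.data); // the heavy work happens here
  self.postMessage(result); // hand the answer back to the main thread
};
```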
You can run the server with the command http-server. Navigate to http://localhost:8080/ and you’ll see the webpage. Enter your number in the input box (start with numbers under 49) and watch the calculation timers.
Putting it all together
While the worker handles the math, the main.js script acts as the command center. It is responsible for spawning the worker, sending it data, and updating the screen once the result is ready. This keeps the user interface alive and responsive, even while a massive calculation is happening just a few pixels away in the background.
The code below creates the worker instance and sets up a listener to catch the finished result. When you click the button, it records the start time and posts the input value to the worker, then waits for the response to calculate the total execution time.
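Concretely, a main.js along those lines might be as follows — the element ids (fib-input, calc-btn, result) are assumptions about index.html:

```javascript
// main.js — the command center on the UI thread.
const worker = new Worker('worker.js');
let startTime = 0;

// Catch the finished result and display the elapsed time.
worker.onmessage = (event) => {
  const elapsedMs = (performance.now() - startTime).toFixed(1);
  document.getElementById('result').textContent =
    `fib = ${event.data} (Wasm worker: ${elapsedMs} ms)`;
};

document.getElementById('calc-btn').addEventListener('click', () => {
  const n = Number(document.getElementById('fib-input').value);
  startTime = performance.now();
  worker.postMessage(n); // the UI stays responsive while the worker computes
});
```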
For small inputs below 25, JavaScript is actually faster, because the startup cost of the Wasm engine isn’t worth paying for such a quick task. Once the input hits 25 and the calculations get heavier, the real value of WebAssembly starts to shine.
By the time you reach 50, the calculation is so massive it might never finish. But here is the most important part: because it runs in a Web Worker, your browser stays alive. You can still click buttons, scroll, or even run the JavaScript version while the application grinds toward its limit. This proves that Wasm isn’t just about raw speed; it’s about keeping your UI responsive.
String searching is fundamental to modern applications, yet its performance impact is often overlooked. This article explores how common string search patterns can quietly slow down your code—and how small, intentional changes can unlock up to 3× faster execution. Backed by real benchmarks, it shows why paying attention to string search performance matters far more than most developers realize.
Compare Needlr's source generation and reflection strategies for dependency injection in C# to choose the right approach for AOT, performance, and flexibility.
Indefinite locks belong to a world where processes never crash and networks never split. That world does not exist. In a distributed system, “I hold the lock” can mean “I held the lock before my VM paused for 45 seconds.” A lease fixes that by putting a deadline on ownership and forcing the owner to keep renewing that claim.
A lease is a lock with a time limit. When the clock runs out, other nodes may take over. That single detail prevents stuck work after crashes and limits the damage during partitions.
This post shows a production-shaped lease using Redis, with atomic acquire, atomic renew, and a rule that most teams avoid until it hurts: when renewal fails, you stop acting like the owner.
The problem with indefinite locks
Classic locking assumes three things that distributed systems break daily.
The lock holder can always release the lock.
Everyone can observe the release.
Time does not jump.
None of those are reliable. Processes crash and never execute finally blocks. Networks drop packets and hide releases. Pause events stretch “a second” into “a minute.” If your system depends on an indefinite lock, it will either freeze forever or run twice.
A lease does not make failure disappear. It makes failure survivable.
Pattern definition and intent
A lease is exclusive ownership that expires without renewal.
Intent: Provide exclusive access with a bounded duration so the system can recover when an owner crashes or becomes unreachable.
Key property: Ownership is valid only until ExpiresAt, not until the owner feels like releasing it.
Core semantics and hard rules
A lease implementation needs three invariants.
Acquire is atomic and exclusive.
Renew is atomic, and only the current owner can extend the deadline.
Loss of renew ends ownership immediately.
That last point is the hard rule. If you cannot renew, you are not the owner. Continuing to act after renewal failure is how “singleton” jobs become duplicate work.
Design choices that matter
TTL selection
TTL is a trade. Short TTL gives faster recovery and more churn. Long TTL gives slower recovery and less churn. A common starting point is 10 seconds, then adjust after measuring Redis latency, GC pauses, and deployment behavior.
Renew cadence
Renew earlier than TTL. Many teams renew at one third of TTL with jitter. The jitter avoids every node renewing on the same millisecond, which can create coordinator spikes and election flapping.
Time model
Let Redis own the timer. Redis TTL is enforced server side. Do not build correctness on local clocks.
Ownership token
Use a unique token per process instance, stored as the lease value. Renew and release must verify the token. Without this, one node can renew a lease it does not own.
Redis as the lease store
Redis provides the primitives you need.
Acquire: SET key token NX PX ttlMillis
Renew: Atomic compare token then extend TTL, implemented with Lua
Release: Atomic compare token then delete, implemented with Lua
Acquire
using StackExchange.Redis;
public sealed partial class RedisLeaseStore
{
private readonly IDatabase _db;
public RedisLeaseStore(IConnectionMultiplexer mux) => _db = mux.GetDatabase();
public async Task<bool> TryAcquireAsync(string key, string token, TimeSpan ttl, CancellationToken ct)
{
// StackExchange.Redis does not accept CancellationToken on all calls, so keep the operation small.
return await _db.StringSetAsync(
key: key,
value: token,
expiry: ttl,
when: When.NotExists);
}
}
Renew with Lua
Renew must be compare and extend in one operation. A read then write is not safe because another node could acquire between those steps.
Lua script:
If the stored value matches the token, extend TTL and return 1
Else return 0
public sealed partial class RedisLeaseStore
{
// Raw script using KEYS/ARGV, passed to the string overload of
// ScriptEvaluateAsync. Compare-and-extend happens server side in one step.
private const string RenewScript = @"
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('PEXPIRE', KEYS[1], ARGV[2])
else
    return 0
end";
public async Task<bool> TryRenewAsync(string key, string token, TimeSpan ttl, CancellationToken ct)
{
var result = (long)await _db.ScriptEvaluateAsync(
RenewScript,
new RedisKey[] { key },
new RedisValue[] { token, (long)ttl.TotalMilliseconds });
return result == 1;
}
}
Release with Lua
Release is optional for safety, but useful for fast handoff during graceful shutdown. It must be token checked to avoid deleting another node’s lease.
public sealed partial class RedisLeaseStore
{
// Token-checked delete: only the current owner can remove the lease.
private const string ReleaseScript = @"
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
else
    return 0
end";
public async Task<bool> ReleaseAsync(string key, string token, CancellationToken ct)
{
var result = (long)await _db.ScriptEvaluateAsync(
ReleaseScript,
new RedisKey[] { key },
new RedisValue[] { token });
return result == 1;
}
}
A lease abstraction you can use everywhere
Wrap the store behind a small interface. Keep it focused on behavior, not plumbing.
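The code in this post programs against an ILease interface; a minimal definition consistent with how it is used would be:

```csharp
// The behavior-focused surface the rest of the app depends on.
public interface ILease
{
    string Key { get; }
    string Token { get; }
    TimeSpan Ttl { get; } // the renew loop derives its cadence from this
    Task<bool> AcquireAsync(CancellationToken ct);
    Task<bool> RenewAsync(CancellationToken ct);
    Task ReleaseAsync(CancellationToken ct);
}
```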
public sealed class RedisLease(RedisLeaseStore store, string key, TimeSpan ttl, string? token = null)
: ILease
{
public string Key { get; } = key;
public string Token { get; } = token ?? Guid.NewGuid().ToString("N");
public TimeSpan Ttl { get; } = ttl;
public Task<bool> AcquireAsync(CancellationToken ct) => store.TryAcquireAsync(Key, Token, Ttl, ct);
public Task<bool> RenewAsync(CancellationToken ct) => store.TryRenewAsync(Key, Token, Ttl, ct);
public async Task ReleaseAsync(CancellationToken ct)
{
await store.ReleaseAsync(Key, Token, ct);
}
}
Using the lease for leader only work
A lease does nothing until you build your control flow around it. The safest approach is a background loop that:
tries to acquire
renews on a schedule
cancels leader work immediately when renew fails
Leader work gate
public sealed class LeaderOnlyService
{
private volatile bool _isLeader;
public bool IsLeader => _isLeader;
public async Task RunIfLeaderAsync(Func<CancellationToken, Task> work, CancellationToken ct)
{
if (!_isLeader) return;
await work(ct);
}
internal void SetLeader(bool value) => _isLeader = value;
}
Lease driven leader loop
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
public sealed class LeaseLeaderLoop : BackgroundService
{
private readonly ILease _lease;
private readonly LeaderOnlyService _leaderOnly;
private readonly ILogger<LeaseLeaderLoop> _log;
private readonly TimeSpan _renewEvery;
private readonly Random _rng = new();
public LeaseLeaderLoop(ILease lease, LeaderOnlyService leaderOnly, ILogger<LeaseLeaderLoop> log)
{
_lease = lease;
_leaderOnly = leaderOnly;
_log = log;
_renewEvery = TimeSpan.FromMilliseconds(_lease.Ttl.TotalMilliseconds / 3);
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
try
{
if (!_leaderOnly.IsLeader)
{
var acquired = await _lease.AcquireAsync(stoppingToken);
if (acquired)
{
_leaderOnly.SetLeader(true);
_log.LogInformation("Lease acquired for {Key}", _lease.Key);
}
}
else
{
var renewed = await _lease.RenewAsync(stoppingToken);
if (!renewed)
{
_leaderOnly.SetLeader(false);
_log.LogWarning("Lease lost for {Key}", _lease.Key);
}
}
}
catch (Exception ex)
{
_leaderOnly.SetLeader(false);
_log.LogError(ex, "Lease loop error for {Key}", _lease.Key);
}
var jitterMs = _rng.Next(0, 250);
await Task.Delay(_renewEvery + TimeSpan.FromMilliseconds(jitterMs), stoppingToken);
}
}
public override async Task StopAsync(CancellationToken cancellationToken)
{
_leaderOnly.SetLeader(false);
await _lease.ReleaseAsync(cancellationToken);
await base.StopAsync(cancellationToken);
}
}
Example: singleton outbox dispatcher
public sealed class OutboxDispatcher(LeaderOnlyService leaderOnly)
{
public Task TickAsync(CancellationToken ct) =>
leaderOnly.RunIfLeaderAsync(async innerCt =>
{
// Fetch pending messages, publish them, mark sent.
await Task.CompletedTask;
}, ct);
}
Leader loss is immediate because RunIfLeaderAsync checks a volatile flag. Your leader work still needs to be cancellable and to check cancellation regularly.
Failure stories the lease prevents
Crash and stuck lock
With an indefinite lock, a crash can freeze the work forever. With a lease, the lease expires and another node can acquire it.
Pause and stale owner
A node can pause longer than TTL. When it wakes, it might still believe it owns the work. With proper renew checks, it fails renew and stops. That avoids duplicate work. If the paused node keeps writing anyway, you need fencing tokens at the write boundary. Leases reduce risk, they do not block every stale write by themselves.
Partition
A partition can isolate the leader from Redis. Renew fails. The node relinquishes leadership. Another node on the healthy side acquires the lease after expiry. If the isolated node keeps acting, that is a code bug. The lease pattern gives you a clear rule to enforce.
Testing strategy
Integration tests against Redis are worth the time.
Tests to include:
only one node can acquire the lease at a time
renew succeeds for the token holder
renew fails after expiry
release deletes only when token matches
simulated pause longer than TTL results in lease loss
Sketch of a basic acquisition test:
public async Task OnlyOneOwnerGetsTheLease()
{
var store = new RedisLeaseStore(ConnectionMultiplexer.Connect("localhost:6379"));
var ttl = TimeSpan.FromSeconds(5);
var a = new RedisLease(store, "lease:group-a", ttl, token: "A");
var b = new RedisLease(store, "lease:group-a", ttl, token: "B");
var gotA = await a.AcquireAsync(CancellationToken.None);
var gotB = await b.AcquireAsync(CancellationToken.None);
if (gotA == gotB) throw new Exception("Expected exclusive acquisition");
}
Add a pause test by acquiring, waiting past TTL without renewing, then acquiring from another lease.
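That pause test could be sketched like this — same hypothetical local Redis as the acquisition test, with Task.Delay standing in for a GC or VM pause:

```csharp
public async Task PausedOwnerLosesTheLease()
{
    var store = new RedisLeaseStore(ConnectionMultiplexer.Connect("localhost:6379"));
    var ttl = TimeSpan.FromSeconds(2);
    var a = new RedisLease(store, "lease:pause-test", ttl, token: "A");
    var b = new RedisLease(store, "lease:pause-test", ttl, token: "B");

    if (!await a.AcquireAsync(CancellationToken.None))
        throw new Exception("A should acquire first");

    // Simulate a pause longer than the TTL: the lease expires server side.
    await Task.Delay(ttl + TimeSpan.FromSeconds(1));

    if (await a.RenewAsync(CancellationToken.None))
        throw new Exception("A must fail renew after expiry");
    if (!await b.AcquireAsync(CancellationToken.None))
        throw new Exception("B should acquire the expired lease");
}
```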
Operational checklist
Metrics:
acquire success rate
renew success rate
lease loss events
contention rate per key
Redis latency for SET and EVAL
Alerts:
repeated lease loss for the same key
frequent leader changes for a key
renew failures correlated with Redis latency spikes
Logs:
key, token prefix, acquire vs renew, latency, and exception details
Common mistakes
using a lock library without understanding token checks
renewing without compare and extend
treating release as the safety mechanism
ignoring renew failures and continuing work
TTL set without measuring coordinator latency
assuming leases prevent stale writes without fencing
Wrap up
A lease is a lock with a deadline. It is safer because it admits reality: owners crash and networks split. Implement atomic acquire, atomic renew, and token checked release. Then enforce the hard rule. If you cannot renew, you stop acting like the owner.
Next up is fencing tokens. A lease determines who should lead. Fencing determines who is allowed to write.