Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.
This week, Dimitri, Rob, and Leonard jump on Web3 to host a discussion about:
GOOD KARAOKE! "The Oracle" by Leonard Lee (Original)
• The genealogy of Oracle's World, from OAUG to Oracle Cloud World to Oracle AI World.
• What RDBMS did you start off with? Dimitri and Rob try to prove who is more ancient.
• How did Oracle come about and become the leading database company?
• How did Larry Ellison become the richest man in the world in two eras?
• How is the current GenAI hype similar to the database hype cycle of the 80s and 90s?
• Why did DBAs make so much money back at the height of the database era?
• How is the RDBMS adapting to be the center of data in the GenAI era?
• The mess that will be the AI data center, and the coming modernization problem.
• Why Rob hates guys who wear track suits to high-end steakhouses.
• Do you care if an AI can hit an A5 in a crazy tough song like "Golden"?
• Why Apple continues to kill it with Apple Silicon.
It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!
Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!
If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!
Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62
Welcome Joren Vaes, design engineer at SOFICS
A collection of upcoming CFPs (call for papers) from across the internet and around the world.
This is a transcript of my keynote presentation at the Agile Cambridge conference in England on October 2nd, 2025. The topic was "The Accountability Problem." How do we define software department accountability so our business partners don't do it for us?
Thanks for having me. I'm very happy to be here in Cambridge. This is my first time visiting, so I spent the afternoon Tuesday doing some sightseeing, including a lovely ride down the River Cam. I was delighted to learn yesterday that I had Simon Wardley to thank for chauffeured punt rides, including the completely fictional story I was told about the Mathematical Bridge.
One of the things I love about Cambridge is its rich history. Of course, lots of history is important when you have...
...this monster eating up every second.
That's the Chronophage outside of Corpus Christi College, if you aren't familiar with it, and much more impressive in person than in my terrible vertical picture with window glare.
Before we get going, I should explain my context. Youâll hear a lot of advice at this conference, and how much that advice is relevant to you has a lot to do with how much their context matches yours.
Iâm currently VP of Engineering at OpenSesame, and for the 23 years prior to that, I was a consultant. As VP, and as a consultant, I specialize in late-stage startups: entrepreneurial organizations that were successful enough that they were able to grow. These are companies with a product mindset that value entrepreneurial thinking, but theyâre also trying to grow up and be âreal companies,â and theyâre trying to figure out how to do that without losing their entrepreneurial edge.
So thatâs the context of my material: entrepreneurial companies building software products that they sell. If youâre not in that situation, I encourage you to mine my talk for ideas, but donât try to apply it blindly. And if you are in that situation... well, mine my talk for ideas, and donât try to apply it blindly!
A few more disclaimers. All the substantive content of this talk (the words, diagrams, examples, and so forth) was created with my actual meat brain, without any AI. Large images have been sourced from various locations, and are credited in the bottom left corner.
I've also dressed up some of the slides with decorative AI-generated images from ChatGPT 5, like that rapper holding a stop sign. If there's one thing GenAI is good at, it's embellishment.
I should also mention that, although I work for OpenSesame, I'm not speaking for OpenSesame. I created this talk on my own time, and I'm technically on vacation right now. The opinions I express are my own.
Anyway, as I was saying, one of the things I love about Cambridge is its rich history. Iâm sure youâve all heard several times by now that the university was founded back in 1209, by people fleeing [waves hand dismissively] the other university. In comparison, my home town of Astoria, Oregon, which is the oldest permanent settlement on the west coast of the US, was founded in 1811. I think thatâs last Tuesday by British standards.
Part of the history surrounding Cambridge is this man: LP Hartley. He was born in Cambridgeshire in 1895, although he never went to Cambridge University. He went to... the other one. But, despite that choice, he went on to become a successful novelist.
His most famous novel is "The Go-Between." It begins with a wonderful opening line:
"The past is a foreign country: they do things differently there."
And that connects us back to Cambridge. Cambridge University Press published this book in 1985. It's by David Lowenthal, and it created an entire sub-genre of history called Heritage Studies. It's still in print today, in a revised edition.
The core idea is that, although the past informs the present, the present also informs the past. Our thoughts and actions today extend from events that occurred in the past. But, at the same time, our understanding of the past is colored by our thoughts and actions today.
The past is a foreign country. They do things differently there. But we can't visit the past. We can't see what they did differently. We can only interpret what they've left behind.
And like medieval scholars drawing elephants theyâve never seen, we make those interpretations through the lens of our own biases.
I love these medieval drawings of elephants. Theyâre so delightfully strange.
But Iâm not showing you these pictures to make a point about how difficult it is to draw an elephant when you havenât seen one.
If you go back to Corpus Christi, where they have the Chronophageânot right now! Iâll start talking about software soon, promise. Anyway, at Corpus Christi, they have Matthew Parisâ Chronica Majora. It contains this drawing of an elephant. You might assume that it came much later, because itâs so much more accurate. But all of these drawings were created around the same time, in the 13th century.
Itâs quite the difference, isnât it?
Iâm not showing you these images to make a point about medieval monks. Iâm actually showing them to make a point about your biases. In the modern era, we expect images to be true to life. We have cameras that give us nearly perfect representations of the world. But realism isnât what medieval monks were always trying to accomplish. Religion and metaphor were a central part of their lives, to a degree that I think we in the modern world have trouble understanding.
The elephants on the left arenât really elephants. Theyâre a way of presenting a moral lesson about your place in the world. The image serves that story. Itâs not there to teach you about elephants. Itâs there to teach you about God.
So if your first reaction to these elephants was to laugh at those ignorant medieval monks... then perhaps youâve fallen prey to your biases. The elephant doesnât look like an elephant because the metaphor was more important than the reality.
The past is a foreign country. They do things differently there.
[beat]
The past informs the present, but the present informs the past. We canât help but to interpret it through the lens of our own experience, and those biases distort the reality of what it was actually like to live there.
This idea fascinates me, because itâs not only true of the past; itâs true of everything. Our biases and experiences influence so much of how we interpret the world.
I taught teams Extreme Programming for a few decades, as a consultant. Now that Iâm VP of Engineering, Iâm still teaching it, in a way. One thing thatâs stood out to me over the years is that the people who struggle the most to learn XP are the ones who are more senior.
Junior developers have no problem! Itâs the senior developers who struggle. They have too much baggage from their preconceptions.
A good example of this comes from Microsoft. XP was popular in the early 2000s, and practices like test-driven development, which come from XP, were entering the mainstream. So Microsoft published a set of âGuidelines for Test-Driven Development.â
There was a big backlash, and Microsoft took their guidelines down pretty quickly, because they got them terribly, ridiculously, horribly wrong. Microsoft didnât actually practice XP, as far as I can tell, so they didnât know that XP is a way of keeping software design simple and evolving it in response to customer needs. In XP, you donât create your design in advance; you discover it as you go, and you focus on keeping it as simple as you can.
People who have practiced XP know that TDD is about tests and code evolving in step with each other, so that you learn as you go. A few lines of test code. See the tests fail. A few lines of production code. See the tests pass. A few improvements to the design. See the tests pass. A few more lines of test code. See the tests fail. And so on, and so forth, until the software is done, without following a preconceived path.
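If it helps to see that rhythm in code, here's a rough sketch of one such micro-cycle using Python's unittest. The Money class and its add method are invented purely for illustration; the point is the cadence of tiny test, tiny implementation, tiny cleanup, not the example itself.

```python
import unittest

# Step 2 (written after the test below failed): a few lines of production code.
# The design stays only as complicated as the current tests demand.
class Money:
    def __init__(self, amount):
        self.amount = amount

    def add(self, other):
        return Money(self.amount + other.amount)


# Step 1: a few lines of test code. Run it, watch it fail (red),
# then write just enough production code to make it pass (green).
class MoneyTest(unittest.TestCase):
    def test_adding_money_sums_the_amounts(self):
        total = Money(10).add(Money(15))
        self.assertEqual(25, total.amount)

    # Step 3: improve the design while the tests stay green, then the next
    # failing test drives the next small step. Repeat until the software is done.


if __name__ == "__main__":
    unittest.main()
```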
As with many companies past and present, the Microsoft way wasnât to evolve their design; it was to come up with a software design in advance, then build to that preconceived design. And so they saw what Kent Beck and others had said about TDD and interpreted it in the only way they knew how: as a way of coming up with a software design in advance, and then building to that design. Their guidelines for TDD were to:
Gather the requirements for your new feature
Make a list of tests that will satisfy the requirements
File work items for the tests that need to be written
Generate all the interfaces and classes youâll needâusing Visual Studio, of course!
Write all the tests
Write all the production code
Iâm not exaggerating! This is what they actually said. Refactoring, iteration, learning as you goâkey ideas of XP and TDDânowhere to be found.
Microsoftâs approach to TDD was the exact opposite of what TDD was about. But they were only able to interpret TDD through the lens of their corporate approach to software development. And to this day, you see this same misunderstanding about TDD repeated by people who are steeped in up-front thinking.
XP is a foreign country. We do things differently here.
[beat]
As people, we can't help but interpret the world through the lens of our own biases. But that means we make assumptions about the world that aren't true, and we can't even recognize that we're doing it. It's not just the past that's a foreign country... almost everything is.
That leads to problems. And in software, one of the biggest is...
...the Accountability Problem.
[beat]
People who arenât software developers have probably seen more âprogrammingâ in movies and TV shows than in real life. Those shows are filled with magical people who can âhackâ anything off-camera and in moments.
âThe shipâs going to ram us, captain!â âQuick, hack into their retro-encabulator and reverse the polarity of their thrusters!â (frantic typing, dramatic music, camera zoooooooom) âIt just barely missed us! Hoorah!â
I only wish software development was that cool.
Of course, people do know thatâs fiction. Some of them might have even written code in school. But in school, people write small programs that fulfill an assignment and donât have to be maintained.
Or maybe theyâve vibe-coded an app using GenAI.
None of these experiences bear any relationship to the modern world of software development.
TV show hackers are just another deus ex machina... quite literally. Itâs lazy writing.
School projects donât require long-term maintenance or large-scale coordination.
Unsupervised AI coding assistants feel magical, but they break down once you get past the prototype stage.
All of these things trick people into thinking that software development is about code. About hands on keyboard. But thatâs not what itâs about at all.
To paraphrase Kent Beck, professional software development is about...
Communication and collaboration between large numbers of people with different perspectives.
Feedback loops that enable us to tell when weâre building the right thing, and the thing right... and when weâre not.
Simplicity, because itâs our ability to understand and change software that determines timelines and cost.
Courage to do the right thing even when itâs hard, and itâs often hard.
Respect for the people doing the work and the people affected by the work.
We know that software development is a matter of discovery and coordination. But to our business partners, weâre a foreign country. They can only see us through the lens of their experience.
Their experience is that software development is about writing code, in the same manner that someone might do a homework assignment. Itâs tedious, perhaps; time-consuming, maybe; but ultimately, a matter of buckling down and doing the assignment... following a straight path from here to there.
If you think this wayâif you think that software development is like a big homework assignmentâthen you start making a bunch of assumptions.
You assume that you only need to define the assignment correctly to get the right answer.
You assume that the assignment has one right answer, and thereâs a clear path to that answer.
You assume that people can tell you what that path is and how long it will take.
You assume that, when work isnât getting done according to that schedule, itâs because people arenât working hard enough.
And you assume that, when workâs behind, putting pressure on people will make them work harder and get it done on time.
Ultimately, you think software development looks like this [play animation]: a trip from point A to point B.
When in reality, itâs more like this [play animation]: a process of exploration and discovery, where the outcome isnât known until you get there.
Software development is a foreign country. We do things differently here.
These misconceptions arenât harmless. They extend deep into organizational structures. The biggest impact is how software development is run in most organizations. Most organizations use project-based governance. You create a plan, then you work the plan. If you execute the plan properly, youâll be successful, and youâll finish on time.
In this environment, itâs managementâs job to make sure that the plan is created correctly, worked correctly, and that people donât slack off.
How do you know management is doing their job? What are they accountable for?
Delivering software on time and on budget.
Itâs clean, itâs neat, itâs easy to understand, and it matches peopleâs misconceptions about software development.
And it results in bad software.
The whole premise that we can define the assignment in advance is incorrect. Software development is a process of discoveryâof iteration and refinement. We learn as we go, and that changes our plans.
This is an Agile development conference. Youâve heard it all before. Iâm not going to belabor the point.
But our business partners havenât heard it before, or if they have, itâs counter to their experiences. Like us seeing medieval pictures of elephants, like Microsoft with TDD, they canât help but interpret the world through their own biases. And those biases lead to project-based governance.
In their minds, anything less... is a lack of accountability.
So what can we do about this?
Ultimately, accountability is about being responsible for a set of results. At the executive level, everybody has to be accountable.
Marketing is responsible for generating leads for your Sales department. They say how many qualifying leads theyâre going to create, and theyâre accountable for having done so.
Partners also generates leads, or even sales, from people who are using complementary products and services. Theyâre accountable for bringing in partners, and for the revenue those partners generate.
Sales converts leads into paying customers. Theyâre accountable for the revenue generated by those customers.
Customer Success takes care of your customers. Theyâre accountable for retention, and for generating additional revenue from upsells.
Everyone is accountable for doing what they say theyâll do, including us in software development. But thereâs something different about how everyone else is accountable. Did you notice?
For other departments, accountability is about the results theyâre bringing to the organization, not the work theyâre putting in. Sales isnât saying, âweâre going to land customer X on date Y.â Everybody knows that sales take time, and things go sideways. So Sales says âwe donât know exactly which customers weâre going to land, or when, but overall, weâre going to generate X dollars of revenue.â Same for Marketing, and Partners, and Customer Success. We in software are the only ones who have to predict exactly what and when.
Our business colleagues arenât unreasonable. They understand that things go wrong. But they also believe, deep in their hearts, that if you arenât accountable, you wonât put forth your full effort.
And if we donât define how weâre going to be accountable, theyâll do it for us, in the only way they know how. Which features are you going to deliver? When? If you donât deliver them on time, you arenât being accountable.
We have to change the script.
So what should we be accountable for instead? What, exactly, do we do? What results do we create?
[beat]
We create new opportunities. Letâs say that the trajectory of your company is to grow its annual revenue by $10mm per year. Our job is to increase that rate of growth, to $12, $15, $20mm per year. Every time we ship a new feature, we should be increasing that rate of growth.
Our features should open up new markets, allowing Marketing to generate more leads.
We should provide useful APIs, allowing Partners to build new relationships.
We should respond to market trends, allowing Sales to convert more leads.
And we should fix the problems that get in customersâ way, reducing churn and increasing upsell.
What are we accountable for? Weâre accountable for improving our companiesâ trajectories. Every dollar invested into software development, other than keeping the lights on, should be reflected in permanent improvements to the value your company creates. That value may not be literal dollars or pounds; it may be helping to cure malaria or fighting climate change. But however you define value, the purpose of our work is to change that trajectory for the better.
Itâs easy to say that weâll be accountable for improving our companiesâ trajectories. But how do we actually demonstrate that weâre doing so?
It's nearly impossible to quantify the impact of any individual feature. It takes months to see an impact from a new feature, and even then, we can't say that feature X resulted in a change in behavior Y. Let's say churn went down by half a percent. That's great! Did it go down because of the feature we just released? Or because of a different one? Or is it more that interest rates just dropped and we hired an amazing new director for our customer success department?
This is why itâs tempting to look at when youâll deliver a feature. Itâs easy to measure.
But ultimately, features are a means to an end, not the end itself. There's an old cliché that people don't want a shovel, they want a hole in the ground. And they don't want a hole in the ground, they want a building foundation. And they don't want a building foundation, they want a nice big stable. And they don't want a stable, they want war elephants that make their enemies say things like, "Carthago delenda est!"
When we talk about delivering features, weâre talking about shovels when we should be talking about striking fear into the hearts of Roman soldiers.
So instead of talking about features, Iâve introduced a way of talking about value. At OpenSesame, weâre calling them âProduct Bets.â
Before we go further, a quick disclaimer. The term âbetâ is common among startups and other entrepreneurial organizations, so youâll hear the phrase âproduct betâ from a lot of different people. Each of us is using it in our own way. So my use of âproduct betsâ isnât the same as what you might have seen from somewhere else.
Okay, so what do we mean when we say product bet?
Ultimately, itâs a strategic investment in a business result. Itâs summarized with a single sentence that has two parts:
First, the business outcome: Strike fear into the hearts of Roman infantry!
Second, the means by which we do so: ...by fielding a battalion of war-capable elephants.
The result always comes first: strike fear. The mechanism comes second: war elephants. And even then, itâs high level. We need a stable, we need animal breeders and trainers, we need to train soldiers, we need a supply line. We need so many things, and not just software. Those are features. We donât talk about features in our product bet. We keep it high level. Just the headline.
Next, we need a sponsor. Who amongst our leadership team is going to advocate for this result? At OpenSesame, it's usually our Chief Product Officer. But sometimes it's our Chief Customer Officer, who's in charge of sales and retention.
For the Carthaginians, of course, the sponsor is General Hannibal.
Next we talk about estimated present value. This is a core innovation. As I said, it's nearly impossible to measure the impact of any feature, or even set of features. There are too many confounding factors.
So we donât measure the impact. We estimate the impact.
My software department takes accountability for delivering estimated value, not measured value.
Now, thatâs not to say that we donât want to validate results. Jeff Patton talks about using Dave McClureâs Pirate Metrics to do so. I welcome and encourage that kind of validation. Ultimately, you have to decide if the bet was successful.
(Spoilers: Hannibalâs bet isnât going to be as successful as he was hoping.)
But the key idea of these product bets is that you donât have to measure value. You only have to decide if the bet was successful. If it is, we get credit for the estimated value, not the actual value, which saves us a lot of time and trouble.
Estimating value allows us to be accountable without predicting specific dates and features.
Remember that the head of Sales is accountable for delivering a certain amount of new business every year. Let's say it's 10 million dollars. They're going to deploy a certain number of sales people towards small-to-medium businesses, some towards mid-market, some towards enterprise. They're going to conduct training and organize incentive programs. They're going to get everybody fired up about how they need to sell, sell, sell! They're going to monitor calls, check Salesforce, make sure people are following up.
But theyâre not going to say, âEnterprise X is going to sign on date Y.â Because they canât. The buyerâs going to go on vacation. Legalâs going to demand redlines. A year in advance, nobody knows when the contract will be signed, or if it will even be signed at all. But overall, theyâve got enough going on that they can say, âyes, weâre going to close $10mm in sales this year.â
The same is true of us. A year in advance, we donât know which bets weâre going to do. We donât know how much itâs going to cost to build them. We donât know which ones are going to be successful and which ones are going to fail. But overall, we can say, âYes, weâre going to deliver bets that are worth $10mm in estimated value this year.â
And thatâs accountability.
[beat]
Wait a moment. âWe donât know how much itâs going to cost to build a bet?â How can we decide what to do if we donât know how much itâs going to cost?
At this point, all we have is a headline. Thereâs no way for us to know how much it will cost, because we donât know exactly what weâre going to build.
And if weâre doing Agile right, we will never know exactly what weâre going to build until after itâs done. As you all know, Agile software development is iterative and incremental. Itâs a process of discovery.
I like Eric Riesâ characterization of this idea: we build, we measure, we learn, over and over again. And we donât know what weâre going to do here [points at âbuildâ step] until we know what happened here [points at âlearnâ step]. As long as weâre genuinely learning, we canât know our costs in advance.
What we can do, though, is put a maximum limit on how much weâll spend. I call it the âmaximum wager,â to continue with the betting theme. We track our spending, and if weâre not successful by the time we hit the limit, the bet has failed. We shut it down and move on to the next one. Or, at the very least, take a hard look at where things are at and decide on a new wager. As long as the total spending is less than the present value, it could still be a good investment.
The amount of the maximum wager is for your leadership team to decide. Itâs not an estimate of cost. Itâs a gut check about risk and value. The higher the value of the bet, the more you can wager. But you donât want to wager so much that it would be crippling if the bet failed. Some bets will fail, and youâll get nothing for your efforts. Success doesnât mean fielding elephants. Success means winning a war with our elephants, and those Romans can be tricky.
The maximum wager is based on your leadership teamâs gut feel of the risk and value involved. Itâs not based on how much we think the bet will cost; itâs based on how much weâre willing to lose.
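If it helps to see the shape of a bet in one place, here's a rough sketch of how you might record one as a simple data structure. The field names are my paraphrase of the pieces I've described (outcome, mechanism, sponsor, estimated present value, maximum wager), not an official template, and the numbers in the example are placeholders.

```python
from dataclasses import dataclass


@dataclass
class ProductBet:
    """One strategic bet: a business result, plus the limits we put around it."""
    outcome: str                     # the business result, always stated first
    mechanism: str                   # the high-level means; no feature list
    sponsor: str                     # the leadership-team member who champions it
    estimated_present_value: float   # credited in full if the bet succeeds
    maximum_wager: float             # spending limit; hit it without success and the bet fails
    spent_so_far: float = 0.0

    def remaining_wager(self) -> float:
        return self.maximum_wager - self.spent_so_far


# Hannibal's bet, as a headline. The dollar amounts are illustrative placeholders.
war_elephants = ProductBet(
    outcome="Strike fear into the hearts of Roman infantry",
    mechanism="Field a battalion of war-capable elephants",
    sponsor="General Hannibal",
    estimated_present_value=6_000_000,
    maximum_wager=5_000_000,
)
```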
And then we do our best to make sure that potential loss is minimized. We use the "Build, Measure, Learn" loop to validate whether the bet is going to be successful early on. Maybe one loop is focused on taking elephants up into the mountains to see how they handle the harsh conditions, and another loop dedicates a âred teamâ to see if they can be spooked into fleeing during battle.
It turns out they can. It would be nice to discover that early, not in the middle of battle with the Romans.
Although we in software are accountable for estimated value, not actual value, we only get to take credit for successful bets. Itâs in our interest, and everyoneâs interest, to weed out the unsuccessful bets early, so we can spend more time focusing on the successful ones. And so, we should design our build-measure-learn loops to test for failure as early as possible.
With value and a maximum cost, we can perform an apples-to-apples comparison between bets and choose the one that seems best to do next. Often, that will be the one with the highest value.
But donât be fooled by all these numbers! Theyâre just estimates and guesses. A smart leadership team will go with their gut, not just follow the numbers like robots. The numbers are there to feed a conversation: to get people thinking. Theyâre not there to substitute for experience and judgment.
The big question: Does this work?
For me, so far, yes. It took me nearly two years to get my leadership team to really engage with this approach, and I needed the strong support of my CEO and CPO to get there. My CEO, in particular, had to get pretty insistent before people would engage.
The fact is, putting together bets, even such high-level ones, takes work. It also makes people accountable, by putting concrete numbers on previously-vague statements about value, and despite everybodyâs desire for other people to be accountable, most leadership teams Iâve worked with arenât really looking to take on more accountability themselves.
But, thanks to my CPO and CEOâs support, I can say that we are building software using product bets. We identified a handful to take to the leadership team earlier this year. They estimated the value, then chose a specific set of bets for us to pursue based on our capacity. Itâs definitely elevated our conversation around product strategy, and I can see it getting even better as we gain familiarity with the approach.
What we havenât done yet is finish any bets. We just started our first formal bets this year. So I canât yet tell you how it will turn out.
What I can tell you is that Iâm getting a lot less pushback than I used to about features and dates. The conversation is focused on bets, not features and dates, and when we talk about what folks want from Engineering, itâs less about, "tell me when youâre going to be done," and more about how we can take on more bets.
So, even though I havenât yet used product bets to truly demonstrate accountability, they already seem to be helping.
Does it work? For me, so far, yes.
To summarize, weâre working on demonstrating accountability with product bets.
Specifically, weâre going to commit to delivering a certain amount of estimated value each year.
That estimated value comes from product bets. Each product bet is summarized by a headline that focuses on a business result, with a high-level description of how we'll achieve that result.
The bets to pursue are decided by the leadership team, and each bet has a leadership sponsor who champions it within that team.
Bets have an estimated value, and we focus on the estimate rather than trying to prove out actual value.
The leadership team also defines a maximum wager for each bet, which is based on a gut feel of risk and benefits, not costs, and together with the present value allows us to perform apples-to-apples comparisons of the bets.
At this point, you might be wondering: where does that "present value" number come from?
The answer, like all things in business, is spreadsheets. Magical spreadsheets filled with arbitrary guesses.
The secret to spreadsheets is that they make our guesses look official. Professional. Good Business-y.
But seriously, yeah, spreadsheets. Let me show you.
Let me start out by explaining what "present value" is, just in case some of you aren't familiar with it.
The core idea of "present value" is that money, let's say $10, is worth more today than it is tomorrow. Today, I can buy a couple of candy bars with $10. In a few decades, I'll only be able to buy half a candy bar due to inflation.
This is called "the time value of money," but it's very simple: money today is worth more than money tomorrow.
What this means is that earning $10 today is better than earning $10 next year, and better still than earning $10 in two years. If inflation were 20%, $10 in future value next year would be equivalent to $8.33 in present value today. $10 in future value two years from now would be equivalent to $6.94 today. And so forth.
Of course, inflation isn't 20%, thank goodness. But when your company makes an investment, they expect a certain return on that investment. The return they expect is called "cost of capital." Your leadership team will tell you the cost of capital to use. It's based on their judgment of how much they could get from using the money on other investments, along with an adjustment for risk. For these examples, I'm arbitrarily choosing a 20% cost of capital.
The neat thing about cost of capital is that you can wager your entire present value and still get a good return on investment. As long as the bet is successful, even if you spend all of the present value, you're still making money.
If you ask me for an investment and promise to return $10 to me today, $10 next year, and so on for the next three years, you'll return $40 total. If my cost of capital is 20%, then I can look at the present value of each of those returns. It's $10 today, $8.33 next year, $6.94 the following year, and so forth. Adding up those future returns gives me the total present value, which is $31.06, which means that I can invest up to $31.06 and still get at least a 20% return on my investment.
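If you'd rather check that arithmetic in code, here's a tiny sketch. The cash flows and the 20% cost of capital come straight from the example I just gave; the function itself is just a generic discounting helper with a name I made up.

```python
def present_value(cash_flows, cost_of_capital):
    """Discount a series of yearly returns back to today's dollars.

    cash_flows[0] is received today, cash_flows[1] one year out, and so on.
    """
    return sum(
        amount / (1 + cost_of_capital) ** year
        for year, amount in enumerate(cash_flows)
    )


# $10 today plus $10 in each of the next three years, at a 20% cost of capital.
pv = present_value([10, 10, 10, 10], 0.20)
print(round(pv, 2))  # 31.06 -- so investing up to $31.06 still returns at least 20%
```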
Okay, so thatâs what present value is. Now, how do we determine what numbers to use?
As I said beforeâspreadsheets and guesses. You build a financial model that makes guesses about the future.
Iâm going to share the model I used, but I have to be honest: I had a lot of trouble getting my leadership team to engage with product bets at first. In order to get this off the ground, I had to provide the financial model myself... and honestly, I think it could be a lot better.
We have a new CFO at OpenSesame, so I showed him the model I'm about to show you. He said, and this is a direct quote, "it's an okay framework to start." He also said, "come talk to me early when you start on the next set of bets."
So, yeah. Thank you for coming to my okay talk. Iâm sure it will be better next year.
In all seriousness, our CFO liked the general idea of product bets, and the categories I was using. He just thinks he can make the specifics more rigorous, which is great, and Iâm looking forward to his help.
The fact is, it doesnât really matter if the model is accurate or not. The important thing is to get people to engage with value rather than cost and dates being the primary driver of decision-making. You can use a rough, back-of-the-envelope model to get started. Thatâs what I did. As long as youâre consistent with your approach across bets, itâs still useful.
With that said, our product bets are broken down into five sections. Each one has its own little present value calculation.
Thereâs Sales, which represents the money we make from new customers as a result of the bet.
Upsell, which is the money we make from existing customers as a result of the bet.
Retention, which has to do with the fact that we sell subscriptions. Once we make a sale, we keep making money from that customer every year, so long as we can retain them. This is typical in the modern software-as-a-service world. So retention is a very important number.
Cost savings is reduction in spending, which counts as value, because spending $5 less on candy each year means I have $5 more in my pocket.
And then expenditures, which is additional spending weâll incur as a consequence of the bet. For example, maybe I spend $5 less on candy each year, but I have to spend $1 every year on a budget tracking app that reminds me not to waste money on candy.
To illustrate these ideas, let me introduce you to my new employer: War Elephants as a Service.
We're your one-stop shop for all elephant-related warfare. We take care of the elephants, so you can take care of the invasion! Look at our glowing testimonials from top customers: Carthage... and Rome! Business is good. Or at least, it was. There's not much demand for war elephants these days.
PS: Apologies for the mutant two-trunked elephant in the logo. Our ex-CEO tried to solve our financial problems with cost-cutting, so he replaced all of our graphic designers with AI. His last words as he was escorted out of the building were, "I've made a terrible mistake."
But we have a new CEO now! Babar is our new "Chief Elephant Officer," and he has an idea for keeping our business relevant in today's fast-paced world. Since nobody seems to want war elephants any more, we're going to switch from "war elephants" to "more elephants!" Elephant parades! Elephant-themed merchandise! And especially, cute baby elephants! Nothing says "more elephants" like an adorable fuzzy pachyderm.
Specifically, weâre going to open up new markets and improve retention by introducing family-friendly elephant activities. Thatâs our bet.
To quantify this bet, weâre going to look at the five categories I mentioned before: Sales to new customers, upsells to existing customers, retention, cost savings, and expenditures.
[points to âService Obtainable Marketâ row] For new sales, weâre going to look at the âservice obtainable market,â which is the total size of the market that we can reach for family-friendly elephant activities. Letâs say itâs 100 million dollars at the end of the first year, and grows over time as word gets out.
[points to âSales Rateâ row] Next, weâre going to estimate how much of that market we can capture. We face competition from zoos, but nobody has quite the expertise deploying large numbers of elephants that we do, so weâre going to say we can sell into 1% of the market, and that will also grow over time.
[points to âFuture Valueâ row] Multiplying the service obtainable market by our sales rate of 1% gives us the amount we expect to make each year in future dollars. [points to âPresent Valueâ row] Then we apply our present value formula at a 20% cost of capital and [points to âTotal Present Valueâ] add it all up to get a total present value of nearly $5 million from new sales.
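If it helps, here's a rough sketch of what that new-sales tab boils down to. The $100 million starting market, the 1% sales rate, and the 20% cost of capital are the numbers I just mentioned; the growth figures for the later years are invented for illustration, so the total lands near, but not exactly on, the spreadsheet's "nearly $5 million."

```python
COST_OF_CAPITAL = 0.20

# Year-end figures; growth beyond year one is an invented illustration.
service_obtainable_market = [100e6, 115e6, 130e6, 145e6, 160e6]
sales_rate =                [0.010, 0.012, 0.014, 0.016, 0.018]

future_value = [som * rate for som, rate in zip(service_obtainable_market, sales_rate)]

# Revenue lands at the end of each year, so year 1 is discounted once, year 2 twice, ...
present_value = [
    fv / (1 + COST_OF_CAPITAL) ** year
    for year, fv in enumerate(future_value, start=1)
]

total_present_value = sum(present_value)
print(f"Total present value of new sales: ${total_present_value:,.0f}")
# Roughly $5.1mm with these made-up growth numbers.
```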
It all looks very official, doesnât it? But how do we know itâs a 100 million dollar market? How do we know we can sell into 1% of it?
We donât! Itâs guesses. Educated guesses, maybe, but ultimately... guesses. Thatâs how these things work, and thatâs why you need your leadership team to get involved. You can make your models more and more rigorous, but at the end of the day, somebodyâs making their best guess, and those guesses should be overseen by the people in charge of those departments.
Next, we look at upsell. How many of our existing customers can we convince to try our new family-friendly elephant activities?
[points to âService Obtainable Marketâ row] As before, we start with the total market that we can reasonably reach. This is the amount we think that our existing customers would be willing to spend on our new offering. In our case, it turns out our customers arenât actually using their war elephants for war, but for things like parades. We think thereâs a good $25 million to be made from our existing customers, and we donât expect that to change much over time. To be clear, thatâs not what we make from our existing customers, itâs the extra amount we think theyâd pay for our new service.
[points to âSales Rateâ row] Then we look at our sales rate for that market. Given that our customers are already using their elephants for parades, we think theyâre going to be pretty receptive to us providing services to support them. We estimate that weâll be able to convert 5% of the upsell market, and that number will also grow over time.
[points to "Total Present Value"] Multiply the numbers, apply the present value formula, and we have a total upsell value of $6.3mm.
Now let's talk about retention. Our retention numbers have been pretty bad; as I said, countries don't really need war elephants any more. [points to "Service Obtainable Market" row] But we still have hundreds of millions in recurring revenue, even though it's going down each year. That's the ARR line: annual recurring revenue.
[points to âRetention Changeâ row] By pivoting from a focus on war to a focus on the military parades our clients are actually using elephants for, we think we can stem the bleeding a bit. Not much... about a quarter of a percent each year, going up slightly over time.
[points to âTotal Present Valueâ] Multiply, present value, and there you have it. Three and a half million.
What about cost savings? [points to âWork Eliminatedâ row] Is this bet going to eliminate any of the existing work our employees do? Not really. [points to âExpenses Eliminatedâ row] Is it going to eliminate any expensive software subscriptions or other expenses? No, probably not.
[points to âTotal Present Valueâ] Normally, weâd add up the cost savings and apply the present value calculation, but the numbers total out to zero in this case.
And finally, expenditures. How much more are we going to spend as a result of this bet?
Well, thereâs the cost of developing the bet itself, which is our wager, but weâll bring that in later. In this section, weâre looking at the ongoing costs of running the program. [points to âFuture Valueâ row] Iâm going to hand-wave that a bitâyou might have multiple line items here normallyâbut letâs just say itâs $2mm per year, going up as the program becomes more popular. Elephants arenât cheap.
[points to âTotal Present Valueâ] Present value, etc., gives us a total of $8.5mm in expenditures.
Bringing it all together, we have $5mm in new sales, $6.3mm in upsell, $3.5mm in improved retention, $0 in cost savings, and $8.5mm in expenditures. That comes to a total present value of $6.3mm before our development costs.
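The roll-up itself is nothing fancier than addition and subtraction; here's the same arithmetic as a short sketch, using the category totals above.

```python
# Category totals from the bet, in millions of present-value dollars.
new_sales, upsell, retention, cost_savings, expenditures = 5.0, 6.3, 3.5, 0.0, 8.5

total_present_value = new_sales + upsell + retention + cost_savings - expenditures
print(total_present_value)  # 6.3 -- before development costs (the wager)
```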
Now, how much do we want to wager on development? The leadership team thinks this is a slam dunk, and a way to save the business, so theyâre going to wager nearly all of the value. Five million dollars. Remember, using cost of capital to determine present value means that we could wager the entire present value and still come out ahead... if the bet is successful.
That said, bets still have a risk of failure. Our leadership team is making some assumptions about how excited people will be about baby elephants, so we'll want to work incrementally and iteratively to test their assumptions early.
To summarize, the present value of the bet is based on sales to new customers, upsell to existing customers, change in retention, cost savings, and non-development expenditures related to those benefits.
And thatâs how we come up with the numbers in the product bet. To bring it back around, weâre betting that we can open up new markets and improve retention with family-friendly elephant activities. Babar is the sponsor for this bet and he thinks itâs worth $6mm in present value, and heâs willing to spend up to $5mm to try to make it work.
To calculate the value of those categories, we took a back-of-the-napkin approach where we estimated the size of the market and our ability to sell into that market. Thereâs certainly room for more rigor, and I encourage you to talk to your finance team about how to improve the model.
But do remember that it's all still guesses at the end of the day. It's better to have some model than to hold out for a perfect one. The real benefit is in shifting the conversation from features and dates to being accountable for value.
We may be a foreign country, but we can still speak our business partnersâ language.
[beat]
But how do we get them to talk to us?
A leader I respect once told me, âYou have 18-24 months after becoming VP of Engineering to make a difference. After that, the organizationâs problems become your problems.â
I think he was right on target. As a leader, your colleagues in other departments will reserve judgement for the first six months or so. Theyâll get impatient over the course of the next year. By the end of two years, theyâll be holding you accountable. If you donât define what that looks like, theyâll define it for you, and theyâre going to default to features and dates.
The problem with product bets, as an idea, is that they require leadership participation. You canât create these spreadsheets on your own. Even if you did, nobodyâs going to pay attention if you donât have their buy-in. Iâve tried variants of the product bet idea many times over the years and getting that participation has been extraordinarily difficult. Iâm a little surprised weâre able to do it at OpenSesame, to be honest.
Before you can get people to buy in to your definition of accountability, you need them to trust you. And in order for them to trust you, you need to be accountable.
Iâm not sure how to solve this chicken-and-egg problem for your organization. I can tell you how I solved it for mine. Any change you introduce has to be in the context of your specific situation, so Iâm not saying that you should do it my way. Some of my changes were pretty radical, and theyâre not going to be a good idea for every situation.
We donât have time to go into every detail, so this is going to be more of an overview than a how-to guide. Iâll provide resources for further investigation.
QR Code: FaST: An Innovative Way to Scale
When I joined OpenSesame, I started by getting the lay of the land and deciding what to do. One of the things I saw was that the teams were heavily siloed by technology area, rather than by product line. Cross-team delays weren't too bad, although they often can be in this situation, but it did mean that teams' work didn't line up with our business needs. So the first thing I did was to introduce Ron Quartel's Fluid Scaling Technology, or FaST.
We donât have time to discuss FaST today, but you can learn more about my approach to it by following this QR code. The short version is that we combined teams into product-centric âcollectivesâ and created a single queue of work for each collective. Each product has a dedicated collective and work queue. Those collectives self-organize into teams as needed to tackle the highest priority work.
FaST solved the problem of teams not matching business needs. A related problem was that the teams planned their work in terms of technical priorities rather than business results. They called them "stories" and "epics," and recorded them in Jira, but they were more like technical tasks. At the same time that I introduced FaST, I also introduced the idea of "Valuable Increments" from my book. (In case it's not clear on the slide, my book is The Art of Agile Development, and it's now available in a second edition. You can find this material in the "Adaptive Planning" section.)
A valuable increment is a similar idea to an epic, in that it groups together multiple stories, but an âepicâ is literally a âbig story.â A valuable increment isnât focused on size; itâs focused on value. Each VI is something that stands alone. When itâs done, you can release it, and youâll have gotten value out of it even if you never work on anything related to it ever again.
Introducing FaST and VIs allowed me to talk in terms of the business results my teams were creating for each product line, not just their technical accomplishments.
I also knew, from experience, that one of my biggest battles was going to be around estimates and forecasting. Before I could gain the trust of the organization, I needed to be able to demonstrate that I could do what I said I would. Up to this point, their experience of software development was that we never delivered on time. At the same time, I didnât want people to over focus on features and dates.
So I played a game that, to this day, Iâm not sure was the right approach. I had my engineering managers start collecting data so we could provide more accurate forecasts. While they did that, I told teams to stop providing estimates to stakeholders.
This caused a lot of anger in my stakeholders. They didnât like hearing that they couldnât have estimates. I told them that our estimates werenât accurate, and we were working on getting better information, but they still didnât like it. I think I only got away with it because there had been high-profile failures with the old approach, and I was still in my honeymoon period, but it still caused a lot of friction.
It worked out in the end, I think, because the new forecasts really are much more reliable, but I had to collect data for about six months before I could provide the new forecasts. That was an uncomfortable period. I could have kept the old approach to forecasting, but it definitely didnât work. Iâm not sure if âwrong estimatesâ would have been better than âno estimates.â On the one hand, a clean break meant that it was obvious that I had switched to a new approach, andâas I saidâit really works. On the other hand, I made some important members of the leadership team angry in the meantime.
Anyway, the way it works is that we get a "wisdom of the crowd" estimate for each VI before work starts. That involves a product manager providing a very brief description of what the VI involves: just a minute or two of verbal explanation. People can ask clarifying questions, but there usually aren't many. Then everyone provides their gut feel of how long the work will take a team to accomplish, in weeks. We collect the answers without discussing them and record the median response. That's the estimate. It only takes a few minutes per VI. Since our collectives have between 12 and 25 people, including managers, product managers, and designers, there are enough people to make the "crowd" part of "wisdom of the crowd" work.
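The mechanics really are that simple. Here's a sketch, using made-up gut-feel numbers from a fifteen-person collective.

```python
from statistics import median

# Each person's silent gut feel for one VI, in weeks (invented example data).
gut_feels = [2, 3, 3, 4, 4, 4, 5, 5, 6, 6, 8, 8, 10, 12, 16]

estimate = median(gut_feels)
print(f"Wisdom-of-the-crowd estimate: {estimate} weeks")  # 5 weeks
```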
Our Wisdom of the Crowd estimates are stunningly accurate. The median estimate for a VI actually matches the median reality. It's amazing. The approach comes from Ron Quartel and his FaST method, and I've never seen anything so good. It's easy and it's accurate.
However, although Wisdom of the Crowd estimates are accurate, in aggregate, theyâre not very precise. We graph estimates versus actualsâyou can see it on the right there. About 30% of VIs take twice as long as estimated, and about 30% take half as long as estimated. Thatâs a pretty big range.
So we donât present the raw estimates to stakeholders. If we did, weâd be late half the time. Instead, we increase the estimate so weâre early more often than weâre late.
Doing this requires me to play a political balancing act. According to our data, never being late would require us to multiply our estimates by six or seven, and that wouldnât fly. We canât tell them that a small, two-week VI is going to take 3-4 months. On the other hand, itâs also not acceptable to be late half the time.
Right now, Iâve chosen to be 75% accurate. In other words, weâre early 75% of the time and late 25% of the time. For us, thatâs about a 2x multiplier, depending on the team. Iâve also told stakeholders to expect about 1 in 4 VIs to go longer than expected. So far, itâs working well.
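If you want to derive that multiplier from your own history, here's a rough sketch. The estimate-versus-actual ratios below are invented, shaped roughly like the spread I described (median near 1, with a wide range in both directions); the idea is to pick the percentile that matches how often you're willing to be late, which for us is the 75th.

```python
from statistics import quantiles

# actual duration / estimated duration for past VIs (invented history).
ratios = [0.4, 0.4, 0.5, 0.5, 0.5, 0.8, 0.9, 1.0,
          1.0, 1.2, 1.5, 2.0, 2.0, 2.2, 2.5, 4.0]

# 75th percentile: multiply raw estimates by this to be early ~75% of the time.
multiplier = quantiles(ratios, n=4)[2]
print(f"Forecast multiplier: about {multiplier:.1f}x")  # about 2x with this data
```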
If youâd like to know more about the analysis behind this technique, itâs in my book in the âForecastingâ section.
Collecting all that data for forecasting had a side benefit. My CEO pushed me to report productivityâthatâs a whole ânother storyâand I decided to do it by reporting the percentage of time spent on muda versus the percentage of time spent on adding value to the business. Muda is activity that doesnât add value. Itâs the grey sections in the graph: maintenance, bugs, and on call.
This isnât the real graph, for confidentiality reasons, but the story it tells is all too familiar: lots of time spent on deferred maintenance, lots of time spent on incidents, lots of bugs. And then just a fraction of time left over for doing valuable work.
I shared the real version of this graph with my leadership team and it was eye-opening. All of a sudden, they understood exactly why things took so long, and why they didn't ever get what they wanted. They had thought we had way more capacity than we actually did.
I told them that my responsibility was to reduce mudaâthe grey partâand make more room for valuable workâthe blue part. That was an act of deliberate accountability, and it flipped the script. Yes, people still wanted me to be accountable for making teams deliver feature X on date Y, with all the fighting about deadlines that involves, but even more importantly, and primarily, I was accountable for decreasing muda. Thatâs precisely what I needed to be focused on, because that was our biggest problem.
And, over the past two years, thatâs exactly what Iâve done. I report on my progress every quarter, and every quarter itâs a little bit better than it was before. And every quarter, I get a little bit less pushback on predicting dates.
And then, finally, I just kept pushing. These two books are excellent resources on how to do so.
I introduced the original variant of the product bet idea in January 2024, or maybe even earlier. It didnât go anywhere. I brought it up again in March 2024. We sort of tried it, without leadership buy-in, and it sort of fizzled. I brought it up again, and again. I worked with my colleague, the VP of Product. I talked to the Chief Product Officer. I included it in a presentation to leadership about how Agile works. I piggybacked on the CEOâs passion for quantifying results. I stopped asking Leadership to create financial models and just created my own, then asked them to fill in the values. (Thatâs why theyâre not very rigorous.)
And then finally, in March of 2025, the stars aligned. The CPO started pushing the rest of the leadership team to get involved. We created five product bets, the leadership team filled in my spreadsheet, and we started working on the first bet. And now weâre off to the races. We just started our second bet a few months ago, and weâre talking about how to increase capacity for more bets.
There's lots more to do, and lots more to learn, but now that the logjam has broken, I think it's going to stick. Our new CFO is intrigued, and I'm able to show steady progress with my VIs and forecasting techniques. I'm well on my way to erasing the stigma that engineering can't be trusted to deliver. I had 18-24 months to make a difference. I've just passed my second year at OpenSesame, and I'm still here. I think it's going to work out.
Software development may be a foreign country to the rest of the business, but we can still be a trusted part of their empire.
To do so, we have to take accountability, rather than allowing it to be forced upon us. Rather than falling into the habit of delivering X features on Y date, we can be accountable for what really matters: results, just like our colleagues in sales, marketing, and other parts of the business. And the results we create are new opportunities. Enabling more prospects. New partners. More leads. Better retention.
Product bets allow us to be accountable for the estimated value of those results. So far, theyâve been working for me. I hope they work for you, too.
The crowd at T-Mobile Park has been waiting all week to move the earth again for the Seattle Mariners. They got their chance Friday night.
An eighth-inning grand slam by Geno Suarez sent more than 46,000 fans into a frenzy and triggered seismic activity registered by the Pacific Northwest Seismic Network (PNSN). The organization, which monitors earthquakes and volcanoes in Washington and Oregon, installed a sensor inside the stadium for the Mariners’ home playoff games.
Suarez’s second home run of the game put the M’s ahead of the Toronto Blue Jays 6-2 and sealed Game 5 of the American League Championship Series, which the Mariners now lead, 3-2. The team is one win from its first ever trip to the World Series as the best-of-7 series shifts back to Toronto.
PNSN's device, nicknamed "Richter Rizzs" after longtime Mariners broadcaster Rick Rizzs, picked up sizable seismic energy a week ago when Jorge Polanco hit a game-winning single to win a 15-inning marathon against the Detroit Tigers in Game 5 of the American League Division Series.
The device, which measures vertical ground motion, registered activity after big plays throughout Friday’s game, including an earlier home run by Suarez that gave the M’s a 1-0 lead and a home run by Cal Raleigh that tied the game at 2-2.
The grand slam was the biggest show.
PNSN has done monitoring during Seattle Seahawks games, including for the famous "Beast Quake," and at concerts.
The idea to measure shaking at T-Mobile Park came after Raleigh said he could feel the stadium vibrating during Game 2 of the ALDS earlier this month.
A video during the Suarez slam on Friday showed the PNSN team contributing to some of the shaking in right field as a laptop displayed the seismic activity in real time.
“After Geno’s grand slam, I’m not sure I’ve heard that building any louder than that,” M’s manager Dan Wilson said after the game. “You can’t say enough about the support we’ve received from these fans this year.”