BONUS: From 3,000 Scripts to 3 Tools - Building AI-Last Software With Conversational AI Pioneer Peter Swimm
In this special BONUS episode, Peter Swimm—conversational AI veteran, creator of BotKit (the open-source chatbot framework that powered Slack and Teams bots), and former Principal Product Manager at Microsoft Copilot Studio—shares what 25+ years in tech taught him about working with AI. From his brutal experiment of running an entire business on voice-based AI for a week, to why he treats AI more like R2-D2 than C-3PO, Peter offers a grounded, practical perspective on where AI fits in software development teams.
From BotKit to Copilot Studio: A Front-Row Seat to the AI Evolution
"We had the number one bot in the Slack app store, because there were only 8 bots, and ours used regex. To show you how far we've come."
Peter's journey into conversational AI started with a newspaper ad and a creative writing background. When Slack launched its API, Peter and BotKit co-creator Ben Brown immediately saw that building bots wasn't just a technical challenge; it was a social and creative one, like writing scripts for plays that interface with people in their daily lives. That insight helped BotKit become the backbone of Slack and Teams bots, and eventually led to Microsoft acquiring the company. Peter spent years inside Microsoft shaping Copilot Studio, working on connectors that bridge the gap between APIs and real-world work. The experience also gave him a healthy dose of perspective: he can show you slide decks from 2016 that promise the same things today's AI pitches promise, always "within 5 years." That pattern recognition shapes his practical, no-hype approach.
The 3,000 Scripts Experiment: Why AI-Last Beats AI-First
"At the end of the day, if I've been prompting all day, I should have a computer program that works offline, that works without a subscription. Otherwise, I didn't really make anything."
Peter ran a week-long experiment trying to run his entire business using only voice-based conversational AI. The result: 3,000 generated scripts. Static code analysis revealed they were really just 5 programs made thousands of times over, and those 5 programs boiled down to 2 or 3 core abilities. He deleted 36 gigabytes of generated code and kept the 50 megabytes that actually worked. This brutal compression led him to an "AI-last" philosophy: build reliable runtime software that works confidently in one click, then use AI only for exploration, connection-making, and creative riffing. The payoff is striking: for a given application, his team sees a 90% reduction in AI usage in the first week, dropping to 0% within 13 days, because once a computer program does everything you need, you don't need AI anymore.
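Peter doesn't describe his exact tooling, but the kind of static analysis that collapses thousands of generated scripts into a handful of core programs can be sketched with structural fingerprinting: hash each script's AST shape while ignoring identifier names and literal values, so superficially different generations with the same logic collide. This is an illustrative sketch, not his actual pipeline.

```python
import ast
import hashlib

def structural_fingerprint(source: str) -> str:
    """Hash a script's AST shape, ignoring variable names and literals,
    so superficially different scripts with identical logic collide."""
    tree = ast.parse(source)
    parts = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            parts.append("NAME")      # ignore identifier spelling
        elif isinstance(node, ast.Constant):
            parts.append("CONST")     # ignore literal values
        else:
            parts.append(type(node).__name__)
    return hashlib.sha256(" ".join(parts).encode()).hexdigest()

# Two "different" AI-generated scripts that share one underlying program:
script_a = "def fetch(url):\n    return url + '?page=1'"
script_b = "def grab(endpoint):\n    return endpoint + '?page=2'"

assert structural_fingerprint(script_a) == structural_fingerprint(script_b)
```

Grouping 3,000 scripts by fingerprint like this is one way the "only 5 programs, made thousands of times" pattern would become visible.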
R2-D2, Not C-3PO: How to Think About AI on Your Team
"I think of our AI use more like R2-D2 than C-3PO. R2-D2 doesn't talk—bonus points. He doesn't interject his fear. He saves your butt. He's silent until you need him, and visible when you need him."
Peter's Star Wars analogy captures his team's philosophy on AI integration. AI should be like a smarter linter: a quiet, capable tool that handles the boring, repetitive tasks so humans can focus on creativity and shipping. His team treats AI as a "super junior" with infinite time: set it up as if it invented Python, have it write by-the-book code with unit tests, and then a human reviews and accepts (or rejects) the output. The tooling isn't consistent enough to ship autonomously or commit directly into the codebase; even frontier providers don't fully understand what their models do. The practical benefit is enormous for setup and configuration: what used to be a painful, arcane process of tracking down dozens of AWS or Azure docs becomes a 20-minute "hello world" that's actually a working proof of concept. Your job isn't to become an expert at cloud services; it's to ship product.
The Biggest Mistake: Automating Broken Processes at AI Speed
"All it does is automate all the mistakes you made, all the way, at AI speed."
When asked about the most common mistake organizations make with AI, Peter is blunt: they port their existing infrastructure into AI-governed systems instead of rebuilding from the ground up. Companies with a self-inflated opinion of their processes think AI is just a million-person force multiplier—so they'll ship faster. But if your process was broken before AI, you'll just generate broken output at unprecedented scale. That 3,000-script experiment proved this firsthand. Peter's recommendation: rebuild from the bolts up. Start with AI-last architecture where reliable, offline-capable software handles the core, and AI is reserved for the edges—filling gaps, translating between systems, and making connections that don't exist yet.
SaaS Is Bloated: The Case for AI Transformation Layers
"The one thing AI is good at is transforming between boundaries."
Peter's team has been divesting from SaaS providers, replacing the patchwork of middleware subscription plans that forced everyone to copy and paste between CMS, Excel, meeting notes, and email. His approach: product people use Notion, developers use GitHub, and the two cross-sync without needing Jira as an arbitration layer. Everyone tracks work in the tool they already live in. AI's real superpower here is translation—between APIs, between languages, between formats. Peter sees a future where small translation layers between CRUD operations replace the bloated, one-size-fits-all SaaS tools that are "built for 99% of users with generalized features nobody uses." His team also freed themselves from tools like Figma: the designer works in their preferred graphics program, the developer in their preferred IDE, and AI arbitrates the differences.
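A "translation layer between CRUD operations" can be as small as a function that maps one tool's record schema into another's, letting each team keep working where they already live. This sketch translates a hypothetical product-tracker task into a hypothetical issue-tracker payload; the field names are invented for illustration and are not the real Notion or GitHub API schemas.

```python
# Map a task record from one tool's schema into another's, so product
# people and developers each stay in their preferred tool. All field
# names here are hypothetical, chosen only to illustrate the pattern.

STATUS_MAP = {"Not started": "open", "In progress": "open", "Done": "closed"}

def tracker_task_to_issue(task: dict) -> dict:
    """Translate a product-tracker record into an issue-tracker payload."""
    return {
        "title": task["name"],
        "body": task.get("description", ""),
        "state": STATUS_MAP.get(task.get("status", ""), "open"),
        "labels": task.get("tags", []),
    }

task = {"name": "Ship v1.0", "status": "In progress", "tags": ["release"]}
issue = tracker_task_to_issue(task)
assert issue["state"] == "open" and issue["title"] == "Ship v1.0"
```

In Peter's vision, AI's role is generating and maintaining many such small, boring translation functions at the boundaries, rather than running the core workflow itself.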
Teams, Velocity, and Reinvesting the AI Dividend
"5 to 7 people is still good, because you need a diverse set of people who are intensely focused on certain areas. But they should be allotted that savings in time to ship all the things that get cut."
Peter pushes back on the idea that AI changes the ideal team size. The 5-to-7 person team still works—what should change is what those people do with the time they save. Instead of loading teams onto more projects or increasing portfolio velocity, reinvest the AI productivity dividend into quality: ship with unit tests from day one, ship WCAG-compliant from day one, and stop cutting features to hit deadlines. Version 1.0 should no longer need an immediate 1.1 follow-up. Peter also challenges the notion that AI eliminates the need for experienced practitioners—velocity metrics become meaningless when a 6-week coding plan finishes in 25 minutes. What matters is using the saved time to make software genuinely better.
The Future: Demo-First Development and Solid Releases
"I can show you a working demo of the thing at the first meeting, and you can pay for it. And then we can make it better than your dreams."
Peter sees AI transforming the consulting and product development lifecycle from "launch, listen, and learn" to "listen, iterate, and launch." As a consultant, he now brings working demos to first meetings instead of $20,000 six-week proposals. Clients see the product in motion and immediately identify improvements—before money changes hands. This shifts the power dynamic: products iterate toward quality before launch, not after. Peter envisions a future where we ship solid releases that iterate in quality, with interfaces that show users only what's relevant to them instead of "90,000 buttons that don't apply to me."
About Peter Swimm
Peter Swimm is a conversational AI veteran with 25+ years in tech — from managing data centers to building BotKit (the open-source chatbot framework that powered Slack and Teams bots), to serving as Principal Product Manager at Microsoft Copilot Studio. He's the founder of Toilville, a consultancy helping businesses build conversational AI solutions.
You can link with Peter Swimm on LinkedIn and visit his website at peterswimm.com.
Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260418_Peter_Swimm_BONUS.mp3?dest-id=246429


