Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

BONUS: From 3,000 Scripts to 3 Tools - Building AI-Last Software With Conversational AI Pioneer Peter Swimm

In this special BONUS episode, Peter Swimm—conversational AI veteran, creator of BotKit (the open-source chatbot framework that powered Slack and Teams bots), and former Principal Product Manager at Microsoft Copilot Studio—shares what 25+ years in tech taught him about working with AI. From his brutal experiment of running an entire business on voice-based AI for a week, to why he treats AI more like R2-D2 than C-3PO, Peter offers a grounded, practical perspective on where AI fits in software development teams.

From BotKit to Copilot Studio: A Front-Row Seat to the AI Evolution

"We had the number one bot in the Slack app store, because there were only 8 bots, and ours used regex. To show you how far we've come."

 

Peter's journey into conversational AI started with a newspaper ad and a creative writing background. When Slack launched its API, Peter and BotKit co-creator Ben Brown immediately saw that building bots wasn't just a technical challenge—it was a social and creative one, like writing scripts for plays that interface with people in their daily lives. That insight helped make BotKit the backbone of Slack and Teams bots, and eventually led to Microsoft acquiring the company. Peter spent years inside Microsoft shaping Copilot Studio, working on connectors that bridge the gap between APIs and real-world work. But the experience also gave him a healthy dose of perspective: he can show you slide decks from 2016 that promise the same things today's AI pitches promise, always saying "within 5 years." That pattern recognition shapes his practical, no-hype approach.

The 3,000 Scripts Experiment: Why AI-Last Beats AI-First

"At the end of the day, if I've been prompting all day, I should have a computer program that works offline, that works without a subscription. Otherwise, I didn't really make anything."

 

Peter ran a week-long experiment trying to run his entire business using only voice-based conversational AI. The result: 3,000 generated scripts. Static code analysis revealed they were really only 5 programs generated thousands of times over, and those 5 programs boiled down to just 2 or 3 core abilities. He deleted 36 gigabytes of generated code and kept the 50 megabytes that actually worked. This brutal compression led him to an "AI-last" philosophy: build reliable runtime software that works confidently in one click, then use AI only for exploration, connection-making, and creative riffing. The payoff is striking: for a given application, his team sees AI usage drop 90% in the first week and fall to zero within 13 days, because once a computer program does everything you need, you don't need AI anymore.
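The dedup step Peter describes can be sketched with ordinary static analysis. The snippet below is a minimal illustration, not his actual tooling: it hashes each script's syntax tree with identifiers and literals blanked out, so cosmetically different copies of the same program collide. The sample scripts are hypothetical stand-ins for AI-generated output.

```python
# Sketch: find out how many distinct programs a pile of generated scripts
# really contains, by fingerprinting normalized syntax trees.
import ast
import hashlib

def structural_fingerprint(source: str) -> str:
    """Hash a Python script's AST with identifiers and constants blanked
    out, so near-duplicates (renamed variables, different literals) collide."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
        elif isinstance(node, ast.Constant):
            node.value = 0
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            node.name = "_"
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()

# Three "generated scripts": the first two are the same program with
# cosmetic differences, the third is genuinely different.
scripts = [
    "def total(xs):\n    return sum(x * 2 for x in xs)",
    "def doubled_sum(values):\n    return sum(v * 2 for v in values)",
    "def greet(name):\n    print('hi', name)",
]

distinct = {structural_fingerprint(s) for s in scripts}
print(f"{len(scripts)} scripts, {len(distinct)} distinct programs")
# → 3 scripts, 2 distinct programs
```

Scaled up, the same idea collapses thousands of generated files into the handful of programs worth keeping.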

R2-D2, Not C-3PO: How to Think About AI on Your Team

"I think of our AI use more like R2-D2 than C-3PO. R2-D2 doesn't talk—bonus points. He doesn't interject his fear. He saves your butt. He's silent until you need him, and visible when you need him."

 

Peter's Star Wars analogy captures his team's philosophy on AI integration. AI should be like a smarter linter—a quiet, capable tool that handles the boring, repetitive tasks so humans can focus on creativity and shipping. His team treats AI as a "super junior" with infinite time: set it up as if it invented Python, have it write by-the-book code with unit tests, and then a human reviews and accepts (or rejects) the output. The tooling isn't consistent enough to ship autonomously or commit directly into the codebase—even frontier providers don't fully understand what their models do. The practical benefit is enormous for setup and configuration: what used to be a painful, arcane process of tracking down dozens of AWS or Azure docs becomes a 20-minute "hello world" that's actually a working proof of concept. Your job isn't to become an expert at cloud services—it's to ship product.

The Biggest Mistake: Automating Broken Processes at AI Speed

"All it does is automate all the mistakes you made, all the way, at AI speed."

 

When asked about the most common mistake organizations make with AI, Peter is blunt: they port their existing infrastructure into AI-governed systems instead of rebuilding from the ground up. Companies with a self-inflated opinion of their processes think AI is just a million-person force multiplier—so they'll ship faster. But if your process was broken before AI, you'll just generate broken output at unprecedented scale. That 3,000-script experiment proved this firsthand. Peter's recommendation: rebuild from the bolts up. Start with AI-last architecture where reliable, offline-capable software handles the core, and AI is reserved for the edges—filling gaps, translating between systems, and making connections that don't exist yet.

SaaS Is Bloated: The Case for AI Transformation Layers

"The one thing AI is good at is transforming between boundaries."

 

Peter's team has been divesting from SaaS providers, replacing the patchwork of middleware subscription plans that forced everyone to copy and paste between CMS, Excel, meeting notes, and email. His approach: product people use Notion, developers use GitHub, and the two cross-sync without needing Jira as an arbitration layer. Everyone tracks work in the tool they already live in. AI's real superpower here is translation—between APIs, between languages, between formats. Peter sees a future where small translation layers between CRUD operations replace the bloated, one-size-fits-all SaaS tools that are "built for 99% of users with generalized features nobody uses." His team also freed themselves from tools like Figma: the designer works in their preferred graphics program, the developer in their preferred IDE, and AI arbitrates the differences.
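The translation-layer idea is easy to sketch. The function below is a hypothetical mapping between a Notion-style task record and a GitHub-style issue payload; the field names on both sides are illustrative assumptions, not the real Notion or GitHub API schemas.

```python
# Sketch of a "small translation layer between CRUD operations": a pure
# function that maps one tool's record shape onto another's. All field
# names here are hypothetical, for illustration only.
def notion_task_to_github_issue(task: dict) -> dict:
    """Translate a (made-up) Notion task into a (made-up) GitHub issue."""
    status_to_labels = {
        "In Progress": ["in-progress"],
        "Blocked": ["blocked", "needs-attention"],
    }
    return {
        "title": task["name"],
        "body": task.get("notes", ""),
        "labels": status_to_labels.get(task.get("status"), []),
        "assignees": [task["owner"]] if task.get("owner") else [],
    }

task = {
    "name": "Ship v1.0 docs",
    "status": "Blocked",
    "owner": "sam",
    "notes": "Waiting on review.",
}
issue = notion_task_to_github_issue(task)
print(issue["title"], issue["labels"])
# → Ship v1.0 docs ['blocked', 'needs-attention']
```

In practice each side of such a layer would call the respective tool's real API, but the core stays this small: a deterministic transform at the boundary, with AI reserved for the mappings nobody has written yet.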

Teams, Velocity, and Reinvesting the AI Dividend

"5 to 7 people is still good, because you need a diverse set of people who are intensely focused on certain areas. But they should be allotted that savings in time to ship all the things that get cut."

 

Peter pushes back on the idea that AI changes the ideal team size. The 5-to-7 person team still works—what should change is what those people do with the time they save. Instead of loading teams onto more projects or increasing portfolio velocity, reinvest the AI productivity dividend into quality: ship with unit tests from day one, ship WCAG-compliant from day one, and stop cutting features to hit deadlines. Version 1.0 should no longer need an immediate 1.1 follow-up. Peter also challenges the notion that AI eliminates the need for experienced practitioners—velocity metrics become meaningless when a 6-week coding plan finishes in 25 minutes. What matters is using the saved time to make software genuinely better.

The Future: Demo-First Development and Solid Releases

"I can show you a working demo of the thing at the first meeting, and you can pay for it. And then we can make it better than your dreams."

 

Peter sees AI transforming the consulting and product development lifecycle from "launch, listen, and learn" to "listen, iterate, and launch." As a consultant, he now brings working demos to first meetings instead of $20,000 six-week proposals. Clients see the product in motion and immediately identify improvements—before money changes hands. This shifts the power dynamic: products iterate toward quality before launch, not after. Peter envisions a future where we ship solid releases that iterate in quality, with interfaces that show users only what's relevant to them instead of "90,000 buttons that don't apply to me."

 

About Peter Swimm

 

Peter Swimm is a conversational AI veteran with 25+ years in tech — from managing data centers to building BotKit (the open-source chatbot framework that powered Slack and Teams bots), to serving as Principal Product Manager at Microsoft Copilot Studio. He's the founder of Toilville, a consultancy helping businesses build conversational AI solutions.

 

You can connect with Peter Swimm on LinkedIn and visit his website at peterswimm.com.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260418_Peter_Swimm_BONUS.mp3?dest-id=246429
Read the whole story
alvinashcraft
49 seconds ago
reply
Pennsylvania, USA
Share this story
Delete


On being a fellow Dainosaur


🦖

I read Edwin van Wijk’s excellent five-part series on The Dainosaur the other day and it hit close to home. Edwin and I sat in the same lecture halls for our IT bachelor’s (remember FCO-IM?). We graduated in the same cohort and started our first real jobs on exactly the same day, July 1st 1999, at the same company. We spent sixteen years there together before I moved on to Xpirit, now Xebia. Our careers ran on parallel tracks from the start. Same training, same company culture, same instincts about what good software looks like. So when he writes about moving from long-earned skepticism to serious, eyes-open use of AI, I recognize the whole trajectory because I lived a version of it myself.

Edwin’s series is not a hype piece. He starts from the same place most senior people did: decades of surviving hype cycles and a healthy distrust of anything that sounds too shiny. He treats AI less like a magic wand and more like a capable but unreliable colleague. Useful for angles, prompts, and acceleration, but never exempt from review. The real differentiator, he says, is not chasing better models but clarity: context, constraints, quality criteria, and disciplined review. Context engineering is the actual craft. That maps straight onto the architecture habits we were both trained in. Document assumptions, list rejected options, draw the boundaries.

The skepticism was earned

My resistance to AI was never about nostalgia or fear of change. Our industry changes all the time. It always has. If you want to stay relevant, you learn, adapt, and keep moving. That was never the issue. I’m also not blind to the novelty and impressive technology surrounding Gen AI.

Until the end of 2025, LLMs were mostly fancy autocomplete to me. Helpful for finding bugs or explaining code I did not work with daily, but serious coding efforts often produced slop. I have strong views on how architecture should feel and what good code should look like. My experience (mostly with GitHub Copilot inside Visual Studio Code) was that I spent more time cleaning up cruft and over-engineering than I gained from it. Coding is still a direct expression of the solutions I design in my head. Code (C# in my case) is sometimes the clearest way to capture domain rules or algorithms. Typing natural language to get there felt like working through a filter. I still feel that sometimes.

The same goes for aesthetics. I see a lot of AI-generated text, images, music, and video as soulless middle-of-the-road slop. There is a kitschy shine to it that I can smell from a mile away. Code, being more neutral and machine oriented, suffers less, but early outputs were repetitive spaghetti all the same. I refuse to waste time on lazy, ugly, bland content. Einar W. Høst’s recent post “I Refuse to Play the Imitation Game” captures how I feel about this perfectly: stop trying to sort bad human text from bad AI text and just move on to something worth your time.

Useful does not mean magical

What changed for me was the alignment of better models, better tools, and better workflows. Around the end of 2025, Claude Opus 4.5/4.6 and Claude Code were gaining momentum, and many in the industry experienced their “Claude Code moment”. The IDE-first approach in Copilot always felt clunky: the conversation with the AI stayed secondary and “bolted on”. Tools like Claude Code flip this: the dialog becomes primary, with the codebase, tools, and skills attached. OpenCode became a prominent driver in my toolbelt because it lets me use every model in my Copilot subscription and keeps experimenting easy. I initially used OpenSpec to shape design constraints in green-field work or new features, but that “lightweight spec-driven framework” grew heavier and more bloated after a few iterations. Later I moved to Compound Engineering, or simply alternated between Plan and Build modes, and started getting acceptable results. Adding context-aware MCPs like context7 brought in up-to-date framework knowledge, so the model stopped hallucinating obsolete NuGet packages or outdated .NET habits.

The generated code became better structured, and I saw it pick up on patterns and conventions in existing codebases. I also started to figure out the right scope of changes that I could still review and judge properly. AI stopped feeling like an over-eager intern I had to babysit and started feeling more like leverage. I could go back to being the engineer instead of the childcare supervisor.

Yes, I have embraced using AI as a main driver for engineering work. I apply the same principles Edwin describes: context first, disciplined review, senior judgment in the loop. But I do not buy the hype that English is the new programming language. People claiming that have probably not touched any serious production code for at least 10 years. Also, people exclaiming “I can finally unleash my creativity without having to tell those pesky developers what I want”… congratulations, and good luck with that. I do not trust anyone selling that stuff near production code. Building a SaaS replacement in an afternoon on http://localhost:3000 is not the same as bringing it to production, hosting it, operating it, and maintaining it for years. I think the industry is heading for a FAFO moment. A lot of programs will ship that “developers” without judgment cannot properly evaluate. The fool-with-a-tool adage still applies here.

Accessibility is a real gain. AI does lower the barrier to making things. Prototypes become feasible faster. Sloppy code in a true prototype is fine. What makes me angry is when a vibe-coded demo gets mistaken for the real thing. I have seen PowerPoint mockups sold as finished products before. Now the same thing happens with AI prototypes. The cost of weak architecture, security holes, operational fragility, and dependency chaos just shifts downstream. Initial creation looks cheaper but later cleanup gets more expensive.

I embraced the utility, not the culture

The cultural side of AI still turns me off. The grifting, the gloating about entire job categories disappearing, the sneering “adapt or die” rhetoric, the reduction of human worth to productivity math. Sam Altman comparing the energy cost of training a model to raising and training a human is a perfect example of the tone-deaf attitude that nearly pushed me out of the industry.

The vendor dependency bothers me too. We are outsourcing actual software development to machines owned by a handful of parties. Lock-in was always something we tried to avoid. Now it is sold as progress. The recent outrage about pricing changes and rate limiting from Anthropic already shows where this is heading.

Add the ethical stains around training data, the low-wage human reviewers sifting through terrible content, and the environmental cost of those massive data centers, and the picture gets darker. Karen Hao’s “Empire of AI” is worth reading as a counterweight. Or read Patrick Galey’s “101 reasons to not use Gen AI” to curb your enthusiasm a bit.

What it cost me

On a personal level I had to let go of something real. I genuinely enjoy writing code. Once I have the direction clear, the physical craft of typing, refactoring, discovering flaws, and seeing it work puts me in a flow state. That direct connection between brain and hands is how I get ideas into the world. It is the same feeling I get playing guitar, cooking, baking bread, or drawing. AI helps, but it inserts a layer between me and part of that craft. I still touch code when I need to fine-tune or debug, but the tactile pleasure of molding every important line myself is less frequent now. That loss is real even though the productivity gain is real too.

Why Edwin’s series matters

I appreciate Edwin’s honesty and openness in publishing the series. He took the risk of saying he changed his mind at a moment when some people will sneer “I already knew that years ago” or “anyone thinking they will type a letter of code in 6 months is a lost cause.” I endure these mocking and dismissive comments in my own environment every day. But I saw the same bullshit he saw. I was not blind, but I was unwilling to pretend the early tools and the surrounding noise were better than they were. My bullshit sensor is still working fine and I intend to keep using it.

Edwin’s journey is very recognizable. AI is now useful enough in my daily work that I use it as a primary engineering tool. It fits, it accelerates and it lets me focus on the parts that matter. But I embrace the utility, not the ideology. I keep my standards, my taste, and my right to call out nonsense when I see it. The tools changed but the need for judgment, architecture, and craft did not.

Human after all

I want to leave you with something that popped up in my timeline around the same time people were losing their minds about Claude Code and agents galore. It worked for me as a perfect counterbalance and showed me that human creativity still greatly outweighs AI slop. They look weird in their handmade costumes, the music is gnarly and funny, but it breathes and grooves like hell. And most importantly: they are extremely creative and virtuoso players: Angine de Poitrine. I got my tickets and look forward to experiencing their live show in August.






Context Engineering: The Real Skill Behind High-Quality AI Output

Unlock the power of AI! Learn context engineering: structuring information for accurate, relevant, and high-quality AI outputs. Elevate your results now!

How to Make Your GitHub Profile Stand Out


If you have a GitHub profile, you might overlook the many ways you can customize it – and that's completely understandable. After all, at its core, GitHub is a home for your code.

But beyond repositories and commits, your profile can say a lot about you as a developer.

When used intentionally, GitHub becomes more than a code hosting platform. It becomes your CV for your codebase. It tells your story, showcases your skills, and gives people a reason to trust your work.

In this article, we'll break down the different ways to make your GitHub profile stand out. From setting up your GitHub account to engaging storytelling for your repositories, there's lots you can do.

Let's get started!


Step 1: Sign Up for a GitHub Account

To begin, you'll need a GitHub account. If you don’t have one, you can set one up here.

Once you have your account set up and you're logged in, we can move on to the next step.

Step 2: Add a Profile Image

Your profile image is often the first thing people notice. It could be a professional photo of yourself, or an image or avatar that represents you or your interests.

As long as it’s appropriate, you’re good to go.

To add a profile image, you'll need to:

  • Open your profile menu/dashboard

  • Click the profile image on the left

  • Click Edit on the image

  • Select the image to set as your profile picture

  • Click the "Set new profile picture" button

So, you should have something like this:

Image showing the new Profile image added

GitHub link to this page: https://github.com/settings/profile

And there you have it, your GitHub profile image is set.

On to the next one…

Step 3: Add Profile Details

This step is all about credibility and discoverability.

At the center of your profile settings you'll see fields like email, location, social media links, and so on. Add those details to take advantage of the discoverability they lend to your profile.

Image showing public profile settings tab

GitHub link to this page: https://github.com/settings/profile

For this step, you'll want to add as much detail as possible (apart from your home address – I think we both know why).

For the location, you can just put in your city or country so others have a general idea of where you are in the world.

Step 4: Add a Profile README File

This is where you introduce yourself properly and tell your story.

A profile README lives in a special repository named exactly the same as your GitHub username. The README file in that repository appears directly on your profile page.

The README should answer the following questions:

  • Who are you?

  • What are your project highlights?

  • What are you currently working on or learning?

  • What are your hobbies or interests? (optional)

While answering these questions, aim to keep it minimal yet interesting. You don't want to overwhelm the visitor.
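To make the structure concrete, here is a minimal sketch of a profile README that answers those questions, generated by a short Python script. The name, projects, and hobbies are placeholders, not a recommended template.

```python
# Sketch: scaffold a profile README that answers the four questions above.
# Every name, project, and link below is a hypothetical placeholder.
from pathlib import Path

TEMPLATE = """\
# Hi, I'm {name} 👋

{intro}

## Project highlights
{highlights}

## Currently
{current}

## Outside of code
{hobbies}
"""

def write_profile_readme(path: Path, **fields: str) -> None:
    """Fill the template and write it to the given README path."""
    path.write_text(TEMPLATE.format(**fields), encoding="utf-8")

write_profile_readme(
    Path("README.md"),
    name="Ada Example",
    intro="Backend developer who enjoys turning messy problems into small, testable services.",
    highlights=(
        "- **task-queue** – a lightweight job queue\n"
        "- **csv-lint** – a CLI that catches broken CSV exports"
    ),
    current="Learning Rust and contributing to open-source tooling.",
    hobbies="Bread baking and trail running.",
)
print(Path("README.md").read_text(encoding="utf-8").splitlines()[0])
```

Whether you draft it by hand or script it like this, the point is the same: a few short sections that answer each question without overwhelming the visitor.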

Here's how to create your README:

  • Click New repository

  • Name the repository exactly the same as your GitHub username

  • Check “Add a README file”

  • Make sure the repository is public

  • Click Create repository

Profile README file setup:

Image showing profile README file being created

GitHub link to this page: https://github.com/new

So if you answered the questions listed above, your README file should look something like this:

Image showing Profile README section already created

GitHub link to this page: https://github.com/chinazachisom/chinazachisom

It should also be showing directly on your GitHub profile like below:

Github Profile Showing the Newly Added README file

GitHub link to this page: https://github.com/chinazachisom

Step 5: Tell a Story About Each Repository

Now, this is where you can tell a story about each of your repositories using a README file.

NB: Each repository should have its own separate README file.

What to include in a repository README:

  • Project title

  • What the project is

  • The purpose (the “why”)

  • Key features

  • Challenges you faced and how you solved them

  • Setup or usage instructions (or a live link if hosted)

  • Technical concepts used (e.g., throttling, caching, lazy loading) (optional)

  • Images or video demos

You may also include badges, charts, contribution graphs or other visual enhancements that help highlight project quality, activity and impact.

With the above structure, you can tell the stories behind your projects, show your problem-solving skills, and make your work easier to understand and evaluate.

Repository README File Sample:

Image showing the README file for the new repository

GitHub link to this page: https://github.com/chinazachisom/Artsy

Conclusion

Your GitHub profile is more than just a storage space for your codebase. It's your developer identity as well.

Following these basic steps can help turn your GitHub profile into a portfolio infused with your personal brand. It makes your profile stand out, which can help open doors to more opportunities.

Treat it like a CV for your code and let your work speak for you.

About the Author

Hi there! I'm Chinaza Chukwunweike, a Software Engineer passionate about building robust, scalable systems that make a real world impact. I'm also an advocate for continuous learning and improvement.

If you found this useful, please share it! And follow me for more Software Engineering tips, AI learning strategies, and productivity frameworks.




SSIS Extension Updates – Apr 2026


Back in June 2024, I announced I was changing the way I report updates to the SQL Server Integration Services extension for Visual Studio (in a post titled SSIS Extension Updates – Jun 2024). I have a quarterly calendar reminder to check the links, and it did its job admirably earlier this month, but I was heads-down building updates to SSIS Catalog Compare and SSIS Framework for the Data Integration Lifecycle Management Suite.

Speaking of DILM Suite, I was building a fresh set of demo projects in Visual Studio 2026 Insiders Community Edition, preparing to do battle with the deployment process, when I noticed an update to the SSIS 2022+ extension.

There Are Two Integration Services Extensions

In 2024, the Microsoft SSIS team forked the Integration Services extension into pre-2022 and 2022+ versions.
Since that time, there haven’t been many updates to the pre-2022 version.

Given there are no new updates to the pre-2022 extension at this time, I decided not to report on that version.
If the Microsoft SSIS Team updates the pre-2022 extension in the future, I’ll include a section describing the update. But for now, I’ll simply include the link.

Extension Page Links

The links to each extension are:

SSIS 2022+: https://marketplace.visualstudio.com/items?itemName=SSIS.MicrosoftDataToolsIntegrationServices

SSIS Pre-2022: https://marketplace.visualstudio.com/items?itemName=SSIS.SqlServerIntegrationServicesProjects&ssr=false#overview

SSIS 2022+ Update

A new update is available for SQL Server Integration Services Projects 2022+:

The latest update is version 2.2, released 1 Apr 2026.

Bug fixes:

  • Fixed issues in Import Project Wizard and SSIS in Azure Connection Wizard for VS 2026 18.4.
  • Fixed an installer issue where an existing OLE DB driver could be removed during SSIS installation.
  • Fixed a dark mode issue where SSIS design surfaces (such as Control Flow and Data Flow) appeared with white backgrounds.
  • Upgraded the SSIS VSTA dependency version to help prevent setup failures in environments with stricter security settings.
  • Improved ODBC buffer management by adding cache for realignment.

Known issues:

  • Upgrading from earlier versions currently depends on an upcoming Visual Studio Installer fix. Until then, to design and execute the Analysis Task and related connections, install Microsoft Analysis Services Projects 2022+ as a workaround.
  • In Visual Studio, entries in the right-click context menu on project objects (e.g., the solution or a package) appear multiple times. This happens only when Microsoft Analysis Services Projects 2022+ is also installed.

Conclusion

Lots of enterprises continue to use SSIS – especially for on-premises data engineering. In a recent conversation with Enterprise Data & Analytics data engineers, we surmised that SSIS will likely remain available for as long as SQL Server is supported on-premises.

It’s a guess, yes; but an educated and somewhat informed guess.

Ivan Peev and I had a recent conversation about the state and future of SSIS. You can view a video of the livestream here. Ivan is founder of COZYROC, a third-party SSIS controls vendor, and creator of the SSIS+ Component Suite. COZYROC is a member of the DILM Integration Circle.
