
It’s not the AI, it’s what you do with it


Let’s draw a parallel between today’s software engineers and Renaissance painters.

For centuries, painters thrived because society depended on their ability to capture reality. But when a better tool arrived – the camera – the ground beneath them shifted. The craft didn’t disappear, but the meaning of value changed forever.

“We’re facing something similar,” said Tejas Kumar (Developer Advocate, IBM) at the Shift conference in Kuala Lumpur. Citing tools like Cursor, Lovable, Bolt.new, v0, and Windsurf, he reminded developers that coding agents are already writing software faster (and often more reliably) than humans.

Developers must rely on first-principles reasoning

The data backs this up. Job openings across S&P 500 companies dropped sharply after ChatGPT’s release. Yes, market cycles and the zero-interest-rate hiring bubble played their part, but the trend points to something bigger: the profession is being reshaped, and developers need to understand how to stay relevant in the years ahead.

To navigate this landscape, Kumar argued, developers need to lean on first-principles reasoning. The term gets tossed around often in tech circles, but it’s rarely defined clearly. He offered a straightforward explanation:

First-principles reasoning means starting from what is invariant – the parts that never change – and building your understanding from there.

Invariants are the fundamental laws of reality – things like gravity, light, or the rising and setting of the sun. They remain constant, no matter what tools we create.

Returning to the Renaissance analogy, Kumar explained that both painters and cameras are simply different ways of capturing the same invariant: light. The tools change, but the underlying truth stays the same.

This approach, he argued, helps us understand the deep value of AI.

The goal isn’t to cling to the tools we’ve always used but to identify the underlying invariant that AI supports. In this case, it is reclaiming time and human agency – giving developers the freedom to focus on meaning while delegating repetitive work to machines.

The invariant in AI? It gives us back lost time

The point became unmistakable when Tejas demonstrated a multi-step AI agent in real time. Without typing a single keystroke, he watched as the agent opened Chrome, searched for the event schedule, parsed the results, and added the correct session to his calendar.

While my hands were off, what could I have been doing? I could’ve been at the gym. Out on a run. Playing with children I don’t have yet – but pray for every day. I could have been doing something meaningful. Instead, I’ve outsourced this tedious work to my agent – and in return, I get back life.

AI does not simply automate tasks. It returns lost hours and makes room for creativity, rest, curiosity, and focus. That, he argued, is the true invariant AI addresses.


Breakthroughs don’t need new toys – they need new tricks

As the keynote wrapped, Kumar recounted the telescope’s origins. In 1608, Dutch spectacle-makers used the first spyglasses horizontally, to scan the horizon. A year later, Galileo pointed the same tool upward, unlocking new worlds.

“He literally saw Jupiter. He saw Saturn,” Tejas said. “A tool used differently became the telescope we know today.”

This story illustrates a timeless lesson: transformative breakthroughs often come not from inventing new tools, but from using existing ones in unexpected ways. In today’s era of open-source models, MCP servers, frameworks like LangFlow, and an unprecedented supply of freely accessible AI technologies, Kumar posed a question to developers that was simple, but profound:

How are we using these tools, and how might we use them differently to achieve more, or even discover entirely new possibilities?

In the age of AI, you need to be CREATIVE

Tejas invited developers to embrace the moment rather than fear it. Never before have engineers had access to such a vast array of powerful open-source tools. LangFlow itself, he reminded the audience, is fully MIT-licensed and easy to self-host, letting anyone build scalable agents through a visual interface.

But his message went beyond tools or licenses. It was a call to creativity, a call to agency – a reminder for developers to lift their gaze, imagine new possibilities, and see where these tools can take us when used in unexpected ways.

The post It’s not the AI, it’s what you do with it appeared first on ShiftMag.


Lightning-as-a-service for agriculture

Darryl Lyons, co-founder and Chief Rainmaker at Rainstick, joins the show to dive into advancements in AgTech and how Rainstick is using bioelectricity to enhance agricultural productivity.

You Want Microservices, But Do You Really Need Them?


Do you know who managed to cut costs by a staggering 90% by abandoning microservices for a monolith in May 2023? Not a cash-strapped startup or an indie project—Amazon itself, for its Prime Video service. The same AWS that earns billions every year by selling microservices infrastructure admitted that, sometimes, a good old monolith wins. 

This reversal from the company that practically wrote the playbook on distributed systems sent shockwaves through the cloud-native community. Amazon later removed the original blog post, but the internet never forgets, as you’ll see later.

I’ve been speaking out against unnecessary or premature use of microservices architecture for five or six years now. After Amazon Prime Video went back to a monolith, I came across several eminent architects who are also speaking out against microservices as the default.

And yet in most tech circles, microservices are still viewed as the only way to build modern software. They dominate conferences, blogs, and job listings. Teams adopt them not because their requirements justify it, but because it feels like the obvious (and résumé-boosting) choice. “Cloud-native” has become synonymous with “microservices-by-default”, as if other approaches are as obsolete as floppy disks. 

Microservices do solve real problems, but the problems they solve arise at massive scale. Most teams don’t actually operate at that scale.


With this article, I urge you to reflect on the question the industry has mostly stopped asking: Should microservices be the default choice for building at scale? We’ll look at reversal stories and insights from seasoned architects, and weigh the trade-offs and alternatives. After considering all of this, you can decide whether your problem really needs a constellation of microservices.

Microservices: The Agility-Complexity Trade-Off

On paper, microservices look impressive. Instead of one big monolith, you split your application into many small services. Each one can be written in any language, owned by a small team, and deployed on its own schedule. If you need more capacity, you can scale only the part that’s under load. The promise is elegant: independent deployability, autonomous teams, multi-language stacks, and elastic scaling.

But the catch is that every split creates a seam, and every seam is a potential failure point. Inside a monolith, function calls are instant and predictable. Across services, those same calls become network requests: slower, failure-prone, sometimes returning inconsistent data. With dozens (or hundreds) of services, you need version management, schema evolution, distributed transactions, tracing, centralized logging, and heavy-duty CI/CD pipelines just to keep things running.
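To make that seam concrete, here is a small, hypothetical contrast in Python (the pricing URL and field names are illustrative, not from any real system): the same lookup written as an in-process call and as a cross-service HTTP request.

```python
import requests  # assumes the requests library is installed

def get_price_local(catalog: dict, sku: str) -> int:
    # In-process: nanoseconds, and any failure is immediate and local.
    return catalog[sku]

def get_price_remote(sku: str) -> int:
    # Cross-service: milliseconds, and the call can time out, hit a stale
    # replica, or fail after the remote side has already done work.
    resp = requests.get(f"http://pricing.internal/skus/{sku}", timeout=0.5)
    resp.raise_for_status()
    return resp.json()["price_cents"]
```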

This Gartner diagram captures the trade-off perfectly: microservices exchange the simplicity of one codebase for the complexity of many.

[Gartner diagram: the monolith-versus-microservices trade-off]

At a massive scale (think Netflix), that trade-off may be worth it. But when operational benefits don’t outweigh the costs, teams end up paying a steep price in debugging, coordination, and glue code just to hold their product together.

Microservices make sense in very specific scenarios where distinct business capabilities need independent scaling and deployment. For example, payment processing (security-critical, rarely updated) differs fundamentally from a recommendation engine (memory-intensive, constantly A/B tested). These components have different scaling patterns, deployment cycles, and risk profiles, which justify separate services.

The success of microservices hinges on clear business domain boundaries that match your team structure, as Conway’s Law predicts. If your organization naturally splits into autonomous teams that own distinct capabilities, microservices might work. (So, most “one-and-a-half pizza” startups don’t qualify, do they?) 

That’s why microservices work effectively for companies like Amazon and Uber—although not always.

In fact, most organizations lack the prerequisites: dedicated service ownership, mature CI/CD, robust monitoring, and crucially, scale that justifies the operational overhead. Startups that adopt microservices prematurely often regret their decision.

So ask yourself:

Are you using microservices to solve an independent scaling problem, or are you inviting more complexity than your solution needs?

The Great Microservices Reversal

Ironically, even though tech giants are the ones that are most likely to benefit from microservices, many of these very same companies are walking back their microservices architectures, and the results are eye-opening.

Amazon Prime Video: 90% Cost Reduction with a Monolith

In May 2023, Amazon engineers admitted the unthinkable: Prime Video had abandoned microservices for a monolith. Their Video Quality Analysis (VQA) team had built what looked like a textbook distributed system: AWS Step Functions and Lambda monitored thousands of video streams through independent, scalable components. On paper, it was serverless perfection.

In practice, it was a disaster. “We realized that distributed approach wasn’t bringing a lot of benefits in our specific use case,” said Marcin Kolny in the now-archived Prime Video Engineering blog. Their “infinitely scalable” system crumbled at just 5% of expected load due to orchestration overhead.

The fix was embarrassingly simple: collapse everything into a single process. It resulted in 90% lower costs and faster performance.

Twilio Segment: From 140 Services to One Fast Monolith

Back in 2018, Twilio Segment, a customer data platform, documented a similar reversal in their brutally honest post “Goodbye Microservices”.

Their system had sprawled into 140+ services, creating operational chaos. At one point, three full-time engineers spent most of their time firefighting instead of building. As they admitted, “Instead of enabling us to move faster, the small team found themselves mired in exploding complexity. Essential benefits of this architecture became burdens. As our velocity plummeted, our defect rate exploded.”

Their solution was radical but effective: collapse all 140+ services into a single monolith. The impact was immediate. Test suites that once took an hour now finished in milliseconds. Developer productivity soared: they shipped 46 improvements to shared libraries in a year, up from 32 in the microservices era. 

Shopify: Sanity over Hype

Shopify runs one of the largest Ruby on Rails codebases in the world (2.8M+ lines). Instead of chasing microservices, they deliberately chose a modular monolith: a single codebase with clear component boundaries.

Shopify’s engineers concluded that “microservices would bring their own set of challenges”, so they chose modularity without the operational overhead.

All these examples raise the question:

If even the pioneers of microservices are retreating, why are we still treating it as gospel?

Expert Voices against Microservices Mania

Some of the most respected voices in software architecture—people behind many of the systems we all admire—are also cautioning against microservices, warning against repeating mistakes they’ve seen play out at scale. (After all, cheerleaders don’t play the game; cloud DevRels rarely build at scale.)

Rails Creator: Simplicity over Sophistication

David Heinemeier Hansson (DHH), the creator of Ruby on Rails, has long advocated simplicity over architectural trends. His analysis of the Amazon Prime Video reversal puts it bluntly:

“The real-world results of all this theory are finally in, and it’s clear that in practice, microservices pose perhaps the biggest siren song for needlessly complicating your system.”

DHH’s image of a siren song is apt: microservices promise elegance but leave teams wrecked on the rocks of complexity.

Microservices: Mistake of the Decade?

Jason Warner, former CTO of GitHub, doesn’t mince words while commenting on microservices: 

“I’m convinced that one of the biggest architectural mistakes of the past decade was going full microservice.”

Warner understands scale: GitHub runs at internet scale, and he’s led engineering at Heroku and Canonical. His critique cuts deeper because it comes from lived experience, not theoretical advice:

“90% of all companies in the world could probably just be a monolith running against a primary db cluster with db backups, some caches and proxies and be done with it.”

GraphQL Co-Creator: “Don’t”

Then there’s Nick Schrock, co-creator of GraphQL. If anyone had a reason to cheer for distributed systems, it’d be him. Instead, he says:

“Microservices are such a fundamentally and catastrophically bad idea that there are going to be an entire cohort of multi-billion companies built that do nothing but contain the damage that they have wrought.”

[Image: GraphQL co-creator Nick Schrock’s post on microservices]

He goes on to describe microservices as organizational gambles:

“[Y]ou end up with these services that you have to maintain forever that match the org structure and the product requirements from five years ago. Today, they don’t make a lot of sense.”

When the person who literally built tools to ease distributed-system pain says don’t distribute unless you must, maybe it’s time to listen.

Other Voices Questioning Microservice Maximalism

Other engineering leaders are also reconsidering microservice maximalism. 

At Uber, Gergely Orosz admitted:

“We’re moving many of our microservices to macroservices (well-sized services). Exactly b/c testing and maintaining thousands of microservices is not only hard – it can cause more trouble long-term than it solves the short-term.”

Uber still runs microservices where they’re justified, but they’re choosing their battles.

Kelsey Hightower, known for his work with Kubernetes and Google Cloud, cut through the microservices hype with CS101:

“I’m willing to wager a monolith will outperform every microservice architecture. Just do the math on the network latency between each service and the amount of serialization and deserialization of each request.”

He subsequently deleted the tweet, but the network math still stands.
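To get a feel for that math, here is a back-of-the-envelope sketch. The figures are assumptions for illustration (roughly 100 ns for an in-process call; roughly 1 ms of round trip plus 0.4 ms of serialization and deserialization per intra-datacenter hop), not measurements:

```python
IN_PROCESS_CALL_S = 100e-9        # ~100 ns per in-process function call (assumed)
NETWORK_HOP_S = 1.0e-3 + 0.4e-3   # ~1 ms RTT + ~0.4 ms (de)serialization (assumed)

HOPS = 5  # one user request touching five services in sequence

monolith_ms = HOPS * IN_PROCESS_CALL_S * 1_000
distributed_ms = HOPS * NETWORK_HOP_S * 1_000
print(f"monolith:      {monolith_ms:.6f} ms")    # ~0.0005 ms
print(f"microservices: {distributed_ms:.1f} ms") # ~7 ms
```

Even with generous assumptions, the distributed version pays roughly four orders of magnitude more latency before it does any useful work.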

When pioneers like these, including those who actually solved distributed systems at scale, start waving red flags, it’s worth taking note. 

My question here is:

If GitHub’s CTO thinks 90% of companies don’t need microservices, are you sure yours is part of the 10%?

The Hidden Costs of Microservices

Microservices demand such caution because of hidden costs that teams often underestimate.

Operational Costs

A monolith is simple: in-process function calls. 

Microservices replace that with networks. Every request now travels across machines, through load balancers, service meshes, and authentication layers, creating more failure points and infrastructure needs. You suddenly need service discovery (how services find each other), distributed tracing (tracking requests across services), centralized logging (aggregating logs from multiple services), and monitoring systems that understand service topology.

Each of these is necessary, but together they’re complex and expensive. Duplicated data requires extra storage. Constant service-to-service calls rack up network egress fees. Cloud costs scale faster than the apps they host. Prime Video’s workflow spent more on orchestrating S3 data transfers between services than on actual processing. 

Developer Productivity Drain

In microservices, the hard part isn’t writing code; it’s navigating distributed system interactions.

In “The macro problem with microservices“, Stack Overflow identifies a critical productivity drain: distributed state forces developers to write defensive code that constantly checks for partial failures. 

In a monolith, a developer can follow a code path end-to-end within one repo. In microservices, one feature might span four or five repos with different dependencies and deploy cycles. Adding a single field triggers weeks of coordination: you need to update one service, then wait for consumers to adopt, version your APIs, manage rollouts, and so on. Different teams will also typically maintain different microservices using different tech stacks, so there’s a risk that they unintentionally break something as well. Breaking changes that a compiler would catch in a monolith now surface as runtime errors in production.

Testing and Deployment Complexity

Monolith integration and end-to-end tests are faster because they run locally, in memory. Distributed systems don’t allow that luxury: real confidence requires integration and end-to-end tests across numerous service boundaries. So these tests are slower, more brittle, and require staging environments that resemble production, all of which effectively double infrastructure costs and slow feedback loops.

Many teams discover this only after their test suite becomes a bottleneck. Deployment orchestration adds another layer. Rolling updates across interdependent services require careful sequencing to avoid breaking contracts. Version incompatibilities surface frequently: Service A works with Service B v2.1 but breaks with v2.2.

Failed deployments leave systems partially updated and difficult to recover.

Data Management and Consistency

The most underestimated complexity of microservices lies in data consistency across service boundaries.

Monoliths benefit from ACID transactions: operations complete entirely or fail entirely. Microservices split that across services, forcing you to build distributed sagas (multi-step workflows with rollback logic), live with eventual consistency (data only becomes correct after a delay), or write compensation logic (extra code to undo partial failures). What was once a single database transaction now spans network hops, retries, and partial failures. Debugging inconsistent orders or payments gets much harder when state is duplicated across services.
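To see what that compensation logic looks like, here is a minimal saga-style sketch with stubbed-out service calls (a real system would also need retries, idempotency, and durable saga state):

```python
class ServiceError(Exception):
    """Stand-in for a failed call to a remote service."""

# Stubs standing in for network calls to three separate services.
def reserve_inventory(order): print("inventory reserved")
def release_inventory(order): print("inventory released (compensation)")
def charge_payment(order):    print("payment charged")
def refund_payment(order):    print("payment refunded (compensation)")
def create_shipment(order):   raise ServiceError("shipping service timed out")
def cancel_shipment(order):   print("shipment cancelled (compensation)")

def place_order(order):
    """Run each step; on failure, undo the completed steps in reverse order."""
    compensations = []
    steps = [
        (reserve_inventory, release_inventory),
        (charge_payment, refund_payment),
        (create_shipment, cancel_shipment),
    ]
    for do, undo in steps:
        try:
            do(order)
            compensations.append(undo)
        except ServiceError:
            for compensate in reversed(compensations):
                compensate(order)
            raise

try:
    place_order({"id": 42})
except ServiceError as err:
    print(f"order failed and was rolled back: {err}")
```

In a monolith, a single ROLLBACK statement replaces all of this hand-written undo machinery.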

As research confirms, data duplication, correctness challenges, and transactional complexity are the top pain points in microservice systems.

The Compounding Effect

These complexities multiply. Operational overhead makes debugging harder, which slows testing, which makes deployments riskier, which creates more incidents. Microservices don’t just shift complexity from code to operations; they tax every part of your engineering process. 

Unless your scale demands it, that tax often outweighs the benefits. 

Think about it:

If every network hop adds complexity and cost, does your use case really justify the price?

Beyond Microservices: Smarter Architectural Alternatives

Before defaulting to microservices, it’s worth considering how simpler, well-structured architectures can deliver comparable scalability without the distributed complexity tax. Two noteworthy alternatives are modular monoliths and service-oriented architectures.

Modular Monoliths: Structure without Distribution

Unlike traditional monoliths that become tangled messes, modular monoliths enforce strict internal boundaries through clear module APIs and disciplined separation. Each module exposes well-defined interfaces, enabling teams to work independently while deploying a single, coherent system.

[Diagram: modular monolith architecture with explicit module boundaries]

As Kent Beck explains in “Monolith -> Services: Theory & Practice”, modular monoliths manage coupling through organizational discipline rather than distributed networks. The key difference: modules still communicate via explicit contracts like microservices, but they use fast, reliable function calls instead of HTTP requests that are vulnerable to network latency and partial failures.
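As a minimal sketch of such a contract (the module and names are illustrative, not from Beck’s talk), one module exposes a typed interface that the rest of the codebase calls in-process:

```python
from typing import Protocol

class Billing(Protocol):
    """The billing module's public contract; the rest of the app sees only this."""
    def charge(self, user_id: int, cents: int) -> bool: ...

class _BillingImpl:
    """Internal implementation; by convention, never imported by other modules."""
    def charge(self, user_id: int, cents: int) -> bool:
        print(f"charging user {user_id}: {cents} cents")
        return True

def billing() -> Billing:
    """The one sanctioned entry point into the billing module."""
    return _BillingImpl()

# The checkout module crosses the boundary with a plain function call:
# the same explicit contract as a microservice, none of the network risk.
ok = billing().charge(user_id=7, cents=1999)
```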

Why does it work?

  • Simpler operations: microservices-level organization with monolithic simplicity
  • Stronger consistency: full ACID transactions
  • Easier debugging: one traceable system, no hunting for bugs in the ELK haystack
  • Better performance: function calls beat network hops

Here’s some real-world proof: Shopify’s 2.8 million-line codebase handles 30TB per minute with separate teams owning distinct modules, yet everything deploys together. Facebook runs similarly. (And principal architect Keith Adams jokes that if you want to be talked out of microservices, he’s your guy.)

With recent developments in frameworks like Spring Modulith, Django, Laravel, and Rails (as seen at scale with Shopify), modular monoliths are poised to gain wider traction in the years ahead.

Service-Oriented Architecture: The Middle Ground

Service-oriented architecture (SOA) sits between monoliths and microservices, favoring larger, domain-driven services instead of dozens or hundreds of tiny ones. These services often communicate via an enterprise service bus (ESB), which reduces orchestration overhead while preserving separation of concerns.

[Diagram: service-oriented architecture with domain-aligned services]

Instead of splitting authentication, user preferences, and notifications into separate microservices, SOA might combine them into a single “User Service”, simplifying coordination while preserving autonomy and targeted scaling. SOA provides enterprise-grade modularity without ultra-fine-grained distribution overhead.
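Here is a hypothetical sketch of that consolidation, with illustrative names and a toy credential check standing in for real infrastructure:

```python
class UserService:
    """One domain-aligned service covering auth, preferences, and notifications."""

    def __init__(self) -> None:
        self._passwords = {7: "hunter2"}  # stand-in for a real credential store
        self._prefs: dict[int, dict[str, str]] = {}

    def authenticate(self, user_id: int, password: str) -> bool:
        return self._passwords.get(user_id) == password

    def set_preference(self, user_id: int, key: str, value: str) -> None:
        self._prefs.setdefault(user_id, {})[key] = value

    def notify(self, user_id: int, message: str) -> None:
        # Auth, prefs, and notifications share one process and one datastore,
        # so honoring a user's settings needs no cross-service calls.
        if self._prefs.get(user_id, {}).get("notifications") != "off":
            print(f"to user {user_id}: {message}")

svc = UserService()
if svc.authenticate(7, "hunter2"):
    svc.set_preference(7, "notifications", "on")
    svc.notify(7, "Your flight gate changed.")
```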

Here’s why it works:

  • Right-sized boundaries: fewer, domain-aligned services instead of sprawl
  • Targeted scalability: scale services tied to real business domains
  • Pragmatic complexity: avoids ultra-fine-grained overhead while retaining modular reasoning

SOA has also been proven to work at scale. Norwegian Air Shuttle, Europe’s 9th-largest airline, used SOA to boost agility across complex flight operations. Credit Suisse’s SOA rollout powered millions of service calls per day back in the early 2000s.

Choosing Wisely: Fit over Hype

The problem you’re solving should justify your architecture.

I often use this analogy in consulting: You don’t need a sword to cut a lemon—a knife suffices. And as timeless wisdom reminds us, simplicity is the ultimate sophistication. 

In all likelihood, you’re not Google (you don’t need Google-level fault tolerance), or Amazon (you don’t need massive write availability), or LinkedIn (you don’t handle billions of events a day). Most applications don’t operate at that scale, and their needs call for fundamentally different solutions than ultra-distributed architectures.

For most systems, well-structured modular monoliths (for most common applications, including startups) or SOA (for enterprises) deliver scalability and resilience comparable to microservices, without the distributed complexity tax. Alternatively, you may also consider well-sized services (macroservices, or what Gartner proposed as miniservices) instead of tons of microservices.

It’s worth asking:

If simpler architectures can deliver comparable scalability, why are you choosing the complexity of microservices?

Docker: Built for Any Architecture

Docker isn’t just for microservices—it works great across all kinds of architectures like monoliths, SOA, APIs, and event-driven systems. The real benefit is that Docker gives you consistent performance, easier deployment, and flexibility to scale up your apps no matter what architectural approach you’re using.

Docker packages applications cleanly, keeps environments consistent from laptop to production, simplifies dependency management, and isolates applications from the host system. A Dockerized monolith offers all these benefits, minus the orchestration overhead of microservices. 

Microsoft’s guidance on containerizing monoliths clarifies that scaling containers is “far faster and easier than deploying additional VMs”, whether you run one service or fifty. Twilio Segment observed that containerized monoliths can “horizontally scale your environment easily by spinning up more containers and shutting them down when demand subsides.” For many applications, scaling the whole app is exactly what’s needed.
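As a minimal sketch, assuming a Python monolith served by Gunicorn (file names and the module path are illustrative), the Dockerfile looks no different from one for a microservice:

```dockerfile
# Hypothetical Dockerfile for a monolithic Python web app
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:application"]
```

With a compose file that defines this image as an `app` service, scaling the whole monolith is one command: `docker compose up --scale app=4`.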

As for DevOps, a monolith in Docker is lighter to operate than a full-blown microservices setup. Logging aggregation becomes simpler when you’re collecting from identical containers rather than disparate services with different formats. Monitoring and debugging remain centralized, and troubleshooting avoids tracing requests across service boundaries.

So, it’s definitely worth considering:

Even without the complexity of microservices, Docker gives you the same advantages — clean deployments, easy scaling, and consistent environments. So why not keep it?

Wrapping Up

A few years ago, my then-8-year-old wanted a bicycle. He’d mostly ride around our apartment complex, maybe venture into the nearby lane. He didn’t need 21 gears, but those shiny shifters had him smitten—imagine riding faster by changing those gears! He absolutely wanted that mechanically complex beauty. (It’s hard to argue with a starry-eyed kid… or a founder :P).

Once he started riding the new bike, the gears slipped, the chain jammed, and the bicycle spent more time broken than on the road. Eventually, we had to dump it. 

I wasn’t able to convince him back then that a simpler bicycle could’ve served him better, but maybe this article will convince a few grown-ups making architectural decisions.

We techies love indulging in complex systems. (Check: were you already thinking, What’s complex about bicycles with gears??) But the more moving parts you add, the more often they break. Complexity often creates more problems than it solves.

The point I’m making isn’t to dump microservices entirely—it’s to pick an architecture that fits your actual needs, not what the cloud giant is pushing (while quietly rolling back their own commit). Most likely, modular monoliths or well-designed SOA will serve your needs better and make your team more productive.

So here’s the million-dollar question: 

Will you design for cloud-native hype or for your own business requirements?

Do you really need microservices?





Download video: https://www.youtube.com/embed/kb-m2fasdDY

Row Goals: Part 4




Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The post Row Goals: Part 4 appeared first on Darling Data.


Hard Fork’s 50 Most Iconic Technologies of 2025

“You can’t tell the story of 2025 without these icons.”

BONUS Building Reliable Software with Unreliable AI Tools With Lada Kesseler


AI Assisted Coding: Building Reliable Software with Unreliable AI Tools

In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering.

From Skeptic to Pioneer: Lada's AI Coding Journey

"I got a new skill for free!"

 

Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI—a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through Windsurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI.

Understanding Vibecoding vs. AI-Assisted Development

"AI assisted coding requires judgment, and it's never been as important to exercise judgment as now."

 

Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development—the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. When the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility—AI accelerates both good and bad practices.

The Answer Injection Anti-Pattern When Working With AI

"You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect."

 

One of Lada's most important discoveries is the "answer injection" anti-pattern—when developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability.

 

Answer injection is when we—sometimes, unknowingly—ask a leading question that also injects a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need"; the LLM might have a better answer for you.

Never Trust a Single LLM: Multi-Agent Collaboration

"Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements."

 

Lada shares her experiments with swarm programming—using multiple AI instances that collaborate and cross-check each other's work. She created specialized agents (architect, developer, tester) and even built systems using AppleScript and Tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss. The practical takeaway is simple but profound—always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review.
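The episode doesn't prescribe an implementation, but the cross-checking pattern might look something like this sketch, where `ask_llm` is a hypothetical stand-in for whichever model API or CLI you use:

```python
def ask_llm(role: str, prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (API client, CLI, etc.)."""
    raise NotImplementedError("wire this to your model of choice")

def build_and_review(feature_spec: str) -> str:
    # One instance writes the code...
    code = ask_llm("developer", f"Implement this feature:\n{feature_spec}")
    # ...and a second instance, in a different role, critiques it.
    review = ask_llm(
        "reviewer",
        "Understand this code, find bugs, missing tests, and design issues:\n" + code,
    )
    # A final pass folds the review back in; a human still makes the call.
    return ask_llm("developer", f"Revise the code per this review:\n{review}\n{code}")
```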

Code Quality Matters MORE with AI

"This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!"

 

Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break"—a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity—good architecture and clean code directly improve AI's ability to contribute effectively.

Managing Complexity: The Open Question

"If I just let it do things, it'll just run itself to the wall at crazy speeds, because it's really good at running. So I have to be there managing complexity for it."

 

One of the most honest insights Lada shares is the current limitation of AI: complexity management. While AI excels at implementing features quickly, it struggles to manage the growing complexity of systems over time. Lada finds herself acting as the complexity manager, making architectural decisions and keeping the system maintainable while AI handles implementation details. She poses a critical question for the future: "Can it manage complexity? Can we teach it to manage complexity? I don't know the answer to that." This honest assessment reminds us that fundamental software engineering skills—architecture, refactoring, testing—remain as vital as ever.

Context is Everything: Highway vs. Parking Lot

"You need to be attuned to the environment. You can go faster or slow, and sometimes going slow is bad, because if you're on a highway, you're gonna get hurt."

 

Lada introduces a powerful metaphor for choosing development speed: highway versus parking lot. When learning or experimenting with non-critical systems, you can go fast, don't worry about perfection, and leverage AI's speed fully. But when building production systems where reliability matters, different rules apply. The key is matching your development approach to the risk level and context. She emphasizes safety nets: "In one project, we used AI, and we didn't pay attention to the code, as it wasn't important, because at any point, we could actually step back and refactor. We were not unsafe." This perspective helps developers make better judgment calls about when to accelerate and when to slow down.

The Era of Discovery: We've Only Just Begun

"We haven't even touched the possibilities of what is there out there right now. We're in the era of gentleman scientists—newbies can make big discoveries right now, because nobody knows what AI really is capable of."

 

Perhaps most exciting is Lada's perspective on where we stand in the AI-assisted development journey: we're at the very beginning. Even the creators of these tools are figuring things out as they go. This creates unprecedented opportunities for practitioners at all levels to experiment, discover patterns, and share learnings with the community. Lada has documented her discoveries in an interactive patterns and anti-patterns website, a Calgary Software Crafters presentation, and her Substack blog—contributing to the collective knowledge base that's being built in real-time.

Resources For Further Study

 

About Lada Kesseler

 

Lada Kesseler is a passionate software developer specializing in the design of scalable, robust software systems. With a focus on best development practices, she builds applications that are easy to maintain, adapt, and support. Lada combines technical expertise with a keen eye for clean architecture and sustainable code, driving innovation in modern software engineering.

 

Currently exploring how these values translate to AI-assisted development and figuring out what it takes to build reliable software with unreliable tools.

 

You can link with Lada Kesseler on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251128_Lada_Kesseler_F.mp3?dest-id=246429