“Don’t ask the model to build your whole app. Break your request into smaller parts and generate one function, hook, or component at a time.”
You’ve probably heard this advice if you use AI tools like Copilot or ChatGPT to write code. It’s solid advice because smaller prompts tend to produce cleaner output and fewer hallucinations. They also give you more control over what lands in your codebase.
However, even when your prompts are super-descriptive and the snippets look good, this workflow eventually runs into the same limitation. Without an overarching architecture that ties everything together, nothing connects at scale.
Every time you start a new chat, you’re generating isolated pieces of code with no shared memory, version history, or consistency. Once the chat ends, the model forgets what it built. When you return later to extend or reuse that code, it’s often easier to generate something new than to improve what already exists.
So what if your AI workflow didn’t have to start from scratch each time? What if every generated function, hook, or component had a home, a version, and a record of how it was used?
That’s what composable architecture makes possible. It gives your AI workflow a structure that connects every generated piece into a living system. Components become reusable, versioned, and documented, and your work compounds instead of disappearing with every new chat.
In this article, you’ll see what happens when you follow current best-practice prompting and why it still creates friction at scale. You’ll learn how composable architecture closes that gap by introducing a framework for reuse, versioning and collaboration. You’ll also discover how Bit Cloud and Hope AI make that system practical by scaffolding modular components that persist beyond a single project.
Consider a React UserAvatar component that Copilot generates. The snippet is syntactically valid and functionally complete:
export function UserAvatar({ name, img, onClick }) {
  return (
    <button className="avatar" onClick={onClick}>
      {img ? <img src={img} alt={name} /> : <div className="fallback">{name[0]}</div>}
      <span className="dot online" />
    </button>
  );
}
The problem isn’t with the generated code; it’s the lack of a system to organize it. Without a clear workflow to carry it forward, you end up with:
- No shared context: UserAvatar isn’t aware of other UI pieces, so reuse means re-implementing props, class names, or state logic from scratch.
- No version history: once the chat ends, there’s no record of how the component changed or why.
- No dependency tracking: nothing records what consumes the component, so the impact of a change can’t be assessed.

These issues create a limiting factor in how AI code evolves. Without a schema that preserves context, version history, and dependencies, AI-generated code can’t evolve into reusable or maintainable modules.
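For contrast, here is a minimal sketch of the same avatar recast as a module with an explicit, typed contract; the names (UserAvatarProps, fallbackInitial) are illustrative assumptions, not part of the original snippet:

```typescript
// A hypothetical typed contract for the UserAvatar module.
export interface UserAvatarProps {
  name: string;           // display name; its first character is the image fallback
  img?: string;           // avatar URL; optional, falls back to an initial
  onClick?: () => void;   // click handler forwarded to the wrapping button
}

// Pure fallback logic pulled out of the JSX so it can be tested in isolation.
export function fallbackInitial(name: string): string {
  return name.trim().charAt(0).toUpperCase() || "?";
}
```

With the contract in its own module, other components can depend on the interface rather than copying prop names and class logic from chat transcripts.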
Composable architecture brings the structure that AI-generated code lacks. Instead of snippets drifting away after each session, every piece of functionality becomes a versioned module with its own documentation, tests and history. Persistence ensures nothing gets lost between sessions. Versioning records every iteration, making changes traceable. And clear interfaces and dependency graphs give modules shared context and architectural continuity, so the system grows as one organized library rather than a pile of unrelated fragments.

Flat AI workflow vs. composable workflow.
Let’s take an e-commerce UI for example. In a composable workflow, the Button, Card and ProductTile are defined and published as independent modules. A developer updates the Button to improve keyboard accessibility. Before the change is published, the system shows which components depend on Button and which apps will be affected. The developer opens a change request, tests the Button in isolation and in dependent components, tags a new minor version, and publishes it. Consumers of that Button can then opt into the new version or stay on the previous one.
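The “which components depend on Button” check described above amounts to a reverse-dependency traversal. Here is a minimal sketch, assuming the dependency graph is available as a plain map from each module to its direct dependencies; the names and shape are illustrative, not Bit’s actual API:

```typescript
// module name -> its direct dependencies
type DepGraph = Record<string, string[]>;

// Returns every module that directly or transitively depends on `changed`,
// computed as a fixed point: keep adding modules whose deps touch the set.
export function affectedBy(graph: DepGraph, changed: string): string[] {
  const affected = new Set<string>();
  let grew = true;
  while (grew) {
    grew = false;
    for (const [mod, deps] of Object.entries(graph)) {
      if (affected.has(mod)) continue;
      if (deps.some((d) => d === changed || affected.has(d))) {
        affected.add(mod);
        grew = true;
      }
    }
  }
  return [...affected].sort();
}
```

For the e-commerce example, changing Button would surface Card, ProductTile and anything built on them, which is exactly the list a reviewer wants before tagging a new version.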
At the same time, a designer browsing the component library sees the existing Card variants, usage examples and test coverage. They extend an existing Card variant rather than rebuilding it, and submit it for review. The library records the change history, the dependency graph and the published versions, so every change is visible and traceable.
With this kind of structure, changes flow through clear contracts and shared versions, turning scattered snippets into a unified system that evolves with every update.
Scaffolding in Bit follows a prompt-driven, architecture-first workflow. The steps below show how to use Hope AI in Bit Cloud to scaffold, structure and manage reusable components in a way that keeps your codebase modular and maintainable.
Every component begins with a clear request. Hope AI uses your prompt as its first brief to understand what to build, so the prompt should describe the core functionality and purpose of the component as simply as possible.
For example, you could prompt:
Create a product card component with image, title, price and an add-to-cart button for an e-commerce site.
When you submit the prompt, Hope AI doesn’t generate code right away. Instead, it interprets your request and starts shaping an architectural plan for the component.
In Bit Cloud, Hope AI provides an architecture that defines the structure before any implementation. This includes the modules involved, the interfaces between them, and the dependencies they rely on.

Image showing the architecture generated by Hope.
At this stage, you review the proposed architecture to confirm that it aligns with the component’s intent, follows a logical structure and connects to existing modules where relevant. This gives you a clear picture of how the component will be generated and how it fits into the system.
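To make the architecture step concrete, the plan for the prompted product card might pin down a contract like this. Every name below (ProductCardProps, formatPrice, the minor-units convention) is a hypothetical illustration, not actual Hope AI output:

```typescript
// One plausible contract the architecture step could settle on.
export interface ProductCardProps {
  id: string;
  image: string;                    // product image URL
  title: string;
  price: number;                    // price in minor units, e.g. cents
  onAddToCart: (id: string) => void; // add-to-cart callback
}

// Pure display helper: 1999 cents -> "$19.99".
export function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}
```

Agreeing on the interface before implementation is what lets the generated module slot into existing consumers without rework.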
Once you approve the architecture, Hope AI generates the actual implementation, which is a fully structured module.
The interface in Bit Cloud displays the generated component’s documentation, dependency map, API references, and test coverage. Each component exists as a standalone unit with a clear lifecycle, making it easier to update, test and reuse without digging through application code.
To extend the design system, you can ask Hope AI to build on existing work:
Create a product grid making use of @hackmamba-creators/design.content.card
Hope AI detects the reference, understands the dependency, and connects the new component to the existing one. This means the new product grid inherits the styling conventions and design patterns of the original card component while respecting its established interface.
When a component is ready, you open a change request to review the implementation. This is where Bit’s Ripple CI automates governance at scale. Beyond running tests, it maps the change’s “blast radius”, identifying every component and application that will be affected and validating each one, so you can release with confidence.
Once published to Bit Cloud, your component becomes a first-class “Digital Asset” in your organization’s “Digital Asset Factory.” Each asset is stored as a versioned package, preserving its structure and contracts no matter where it’s consumed. It remains discoverable, documented, and versioned, allowing teams to reuse components confidently across multiple projects and environments.
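When consumers “opt into the new version or stay on the previous one,” they typically do so through semver ranges. The sketch below handles only the simple caret case for plain x.y.z versions; a real setup would rely on the semver package rather than this hypothetical helper:

```typescript
// Minimal caret-range check: "^1.2.0" accepts 1.2.1 and 1.3.0 but not 2.0.0.
// Covers only plain x.y.z versions with a non-zero-major caret range.
export function satisfiesCaret(version: string, range: string): boolean {
  if (!range.startsWith("^")) return version === range;
  const parse = (v: string) => v.split(".").map(Number) as [number, number, number];
  const [maj, min, pat] = parse(version);
  const [rMaj, rMin, rPat] = parse(range.slice(1));
  if (maj !== rMaj) return false;        // new major = breaking, never auto-adopted
  if (min !== rMin) return min > rMin;   // newer minor is opt-in compatible
  return pat >= rPat;                    // patches at the same minor are accepted
}
```

This is the mechanism that lets a team publish a new minor Button while downstream apps upgrade on their own schedule.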

Reusing the component externally.
The main difference between flat AI and composable AI workflows comes down to immediacy versus persistence. Flat workflows prioritize generating code quickly, while composable workflows focus on structure, reuse and long-term maintainability.
Here’s a clear comparison:

Flat AI workflow: generates code fast, but the output is isolated snippets with no shared memory or version history, so code is regenerated rather than reused and fragments across projects.

Composable AI workflow: starts from an architecture, so every piece is versioned, documented and discoverable, and components are reused and compound in value across projects.
Prompting in smaller pieces is a good practice. It helps reduce errors and keeps code under control, but it does not solve the deeper problem. Without an architectural layer, the output of AI remains disposable. Code that works today often fragments tomorrow.
Composable architecture fills that gap. By treating every AI-generated piece as a component with a lifecycle, you move from isolated snippets toward a system that grows in value. Bit and Hope AI make this approach practical by generating components that are documented, versioned and shareable from the start.
The advantage this approach brings is structural integrity. Instead of scattering short-lived fragments across projects, your AI workflow builds a library of reusable modules and interconnected building blocks. That shift turns AI-generated code from temporary solutions into a modular architecture that compounds over time, offering a more sustainable way to manage code in an era of AI-assisted development.
If you are already experimenting with AI tools in your daily work, this is the next step. Try scaffolding components with Bit Cloud and see how a composable workflow changes the way your code evolves.
The post Your AI workflow Is Missing a Composable Architecture appeared first on The New Stack.
It’s been a crazy 2025, where I focused on depth over noise, sharing insights drawn from long-term experience and real-world problems. After drafting this post, I ended up breaking it into two parts to give proper credit to all that happened in 2025. You’ll also find that rather than chasing shiny AI trends, my work this year centered on helping technologists navigate AI complexity with clarity, especially where data protection was concerned.
I delivered keynotes, spoke at technical conferences, published writing, and engaged with the community, and all the while the central thread was practical impact: work that engineers can use, teams can adopt, and leaders can trust.
This year I delivered 11 keynote-level talks at technical conferences and universities. These weren’t just thought leadership; they were explorations of why choices matter and how technical professionals can succeed amid uncertainty.
My favorite keynotes this year:
The themes that carried through these presentations included:
Rather than simply reporting on trends, these talks emphasized decision-making frameworks, helping audiences understand the deeper forces shaping our industry and how to act on them.
In 2025, I was honored to speak at numerous technical conferences, sharing hands-on guidance and actionable takeaways. These presentations covered some introductory material, but more often went deep to help professionals solve real challenges in databases, DevOps, AI, and cloud infrastructure.
One highlight that may surprise folks was “PostgreSQL’s Rise to Power: Why the Open-Source Giant is Dominating the Database Landscape” at FOSSY 2025 in Portland, Oregon. This open-source event session examined the practical trends and architectural underpinnings driving PostgreSQL’s adoption across organizations of all sizes. There was a high number of younger attendees, and when a 25-year-old came up to excitedly talk to me about VIM, I was over the MOON!
I spoke at numerous events in the Microsoft, Oracle, DevOps, open-source and AI communities this last year:
These demos and talks reflected my goal of tackling real issues that everyone is facing in tech today, not just the latest buzzwords.
I ended up on the cover of the Financial IT magazine, which surprised me as much as anyone else, as no one had let me know beforehand that I was going to be headlined!
Writing remained a core part of my work at Redgate in 2025. Instead of short takes or trend pieces, I prioritized long-form, reference-quality content that database professionals, no matter if new or experienced, could reference.
Several Redgate/Simple Talk articles published this year included:
I also wrote a total of 35 posts this year on DBAKevlar. I was thrilled to be able to contribute to my own blog again; I started it back in 2008, so it now represents 17 years of investment demonstrating my own technical journey.
In addition to writing, I continued my involvement with the Simple Talk podcast, where we unpack technology adoption, career experiences, industry challenges, and emerging topics like AI governance and data security.
I also participated in Simple Talk’s “State of the Database Landscape 2025” podcast alongside Louis, Steve, and Grant, discussing trends in security, AI adoption, and database professional development. In the first half of 2025 there were so many that I wondered if all I was going to do for the year was podcasts and keynotes.
As I’m breaking this up into two parts, in the next post I’ll get into the community, mentoring and advisory work that was part of my 2025, so stay tuned!

O'Reilly Media's Tim O'Reilly looks back at 20 years of the maker movement.
The post 20 Years of Make: Triumph of the Makers appeared first on Make: DIY Projects and Ideas for Makers.