Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
151804 stories
·
33 followers

2025 was the year AI got a vibe check

1 Share
AI’s early-2025 spending spree featured massive raises and trillion-dollar infrastructure promises. By year’s end, hype gave way to a vibe check, with growing scrutiny over sustainability, safety, and business models.
Read the whole story
alvinashcraft
1 minute ago
reply
Pennsylvania, USA
Share this story
Delete

Your AI workflow Is Missing a Composable Architecture


“Don’t ask the model to build your whole app. Break your request into smaller parts and generate one function, hook, or component at a time.”

You’ve probably heard this advice if you use AI tools like Copilot or ChatGPT to write code. It’s solid advice because smaller prompts tend to produce cleaner output and fewer hallucinations. They also give you more control over what lands in your codebase.

However, even when your prompts are super-descriptive and the snippets look good, this workflow eventually runs into the same limitation. Without an overarching architecture that ties everything together, nothing connects at scale.

Every time you start a new chat, you’re generating isolated pieces of code with no shared memory, version history, or consistency. Once the chat ends, the model forgets what it built. When you return later to extend or reuse that code, it’s often easier to generate something new than to improve what already exists.

So what if your AI workflow didn’t have to start from scratch each time? What if every generated function, hook, or component had a home, a version, and a record of how it was used?

That’s what composable architecture makes possible. It gives your AI workflow a structure that connects every generated piece into a living system. Components become reusable, versioned, and documented, and your work compounds instead of disappearing with every new chat.

In this article, you’ll see what happens when you follow current best-practice prompting and why it still creates friction at scale. You’ll learn how composable architecture closes that gap by introducing a framework for reuse, versioning and collaboration. You’ll also discover how Bit Cloud and Hope AI make that system practical by scaffolding modular components that persist beyond a single project.

Why Flat AI Workflows Don’t Scale

Consider a React UserAvatar component that Copilot generates. The snippet is syntactically valid and functionally complete:

export function UserAvatar({ name, img, onClick }) {
  return (
    <button className="avatar" onClick={onClick}>
      {img ? <img src={img} alt={name} /> : <div className="fallback">{name[0]}</div>}
      <span className="dot online" />
    </button>
  );
}


The problem isn’t with the generated code; it’s the lack of a system to organize it. Without a clear workflow to carry it forward, you end up with:

  • No persistence: This component exists only within the chat session. Unless it’s manually saved or added to a repo, it disappears once the session ends, untracked and temporary.
  • No versioning: Each tweak or change spawns a new snippet with no lineage. There’s no version history showing what changed or which version is current.
  • No shared context: The UserAvatar isn’t aware of other UI pieces. Reuse means re-implementing props, class names, or state logic from scratch.
  • No architectural continuity: Without persistence, versioning, or shared context, there’s no foundation to evolve from. The system just keeps regenerating new pieces instead of building on what came before.

These issues create a limiting factor in how AI code evolves. Without a schema that preserves context, version history, and dependencies, AI-generated code can’t evolve into reusable or maintainable modules.
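To make that missing schema concrete, here is a minimal sketch of what a component registry could persist between sessions. The shape and field names are illustrative assumptions, not any particular tool's format:

```javascript
// Illustrative sketch: the minimal record a composable workflow keeps
// for each generated component. All names here are hypothetical.
const registry = new Map();

function publish(id, version, source, dependencies = []) {
  // Append a new version to the component's history instead of
  // overwriting it, so every iteration stays traceable.
  const entry = registry.get(id) || { versions: [] };
  entry.versions.push({ version, source, dependencies, publishedAt: Date.now() });
  registry.set(id, entry);
  return entry;
}

function latest(id) {
  const entry = registry.get(id);
  return entry ? entry.versions[entry.versions.length - 1] : undefined;
}

// The UserAvatar now has a home, a version, and a recorded dependency list.
publish('ui/user-avatar', '1.0.0', 'export function UserAvatar() { /* ... */ }');
publish('ui/user-avatar', '1.1.0', '/* accessibility tweak */', ['ui/status-dot']);
```

Even this toy registry addresses three of the four gaps above: the component persists past the chat, each tweak has a lineage, and its dependencies are recorded rather than implied.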

How Composable Architecture Fixes Flat AI Workflow Problems

Composable architecture brings the structure that AI-generated code lacks. Instead of snippets drifting away after each session, every piece of functionality becomes a versioned module with its own documentation, tests and history. Persistence ensures nothing gets lost between sessions. Versioning records every iteration, making changes traceable. Clear interfaces and dependency graphs give modules shared context and architectural continuity, so the system grows as one organized library rather than a pile of unrelated fragments.

Flat AI workflow vs. composable workflow.

Let’s take an e-commerce UI for example. In a composable workflow, the Button, Card and ProductTile are defined and published as independent modules. A developer updates the Button to improve keyboard accessibility. Before the change is published, the system shows which components depend on Button and which apps will be affected. The developer opens a change request, tests the Button in isolation and in dependent components, tags a new minor version, and publishes it. Consumers of that Button can then opt into the new version or stay on the previous one.

At the same time, a designer browsing the component library sees the existing Card variants, usage examples and test coverage. They extend an existing Card variant rather than rebuilding it, and submit it for review. The library records the change history, the dependency graph and the published versions, so every change is visible and traceable.

With this kind of structure, changes flow through clear contracts and shared versions, turning scattered snippets into a unified system that evolves with every update.
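The dependency-aware review described above can be sketched as a reverse lookup over a dependency graph. The graph below is an assumed shape for the article's e-commerce example, not output from any real tool:

```javascript
// Sketch of a "blast radius" check: given who depends on whom,
// find every component transitively affected by a change.
const dependsOn = {
  Card: ['Button'],
  ProductTile: ['Card', 'Button'],
  CheckoutApp: ['ProductTile'],
};

function affectedBy(changed) {
  const affected = new Set();
  let grew = true;
  // Keep sweeping until no new dependents are discovered,
  // which captures transitive dependencies like CheckoutApp.
  while (grew) {
    grew = false;
    for (const [component, deps] of Object.entries(dependsOn)) {
      if (affected.has(component)) continue;
      if (deps.some((d) => d === changed || affected.has(d))) {
        affected.add(component);
        grew = true;
      }
    }
  }
  return [...affected].sort();
}
```

Here `affectedBy('Button')` surfaces Card, ProductTile and CheckoutApp before the change ships, which is exactly the review step the developer in the example relies on.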

How To Scaffold Reusable Components Using Bit

Scaffolding in Bit follows a prompt-driven, architecture-first workflow. The steps below show how to use Hope AI in Bit Cloud to scaffold, structure and manage reusable components in a way that keeps your codebase modular and maintainable.

  1. Start with a prompt

Every component begins with a clear request. Hope AI uses your prompt as its first brief to understand what to build. Your prompt should describe the core functionality and purpose of the component as simply as possible.

For example, you could prompt:

Create a product card component with image, title, price and an add-to-cart button for an e-commerce site.

When you submit the prompt, Hope AI doesn’t generate code right away. Instead, it interprets your request and starts shaping an architectural plan for the component.

  2. Review the proposed architecture

In Bit Cloud, Hope AI provides an architecture that defines the structure before any implementation. This includes the modules involved, the interfaces between them, and the dependencies they rely on.

Image showing the architecture generated by Hope.

At this stage, you review the proposed architecture to confirm that it aligns with the component’s intent, follows a logical structure and connects to existing modules where relevant. This gives you a clear picture of how the component will be generated and how it fits into the system.

  3. Generate the component

Once you approve the architecture, Hope AI generates the actual implementation, which is a fully structured module.

The interface in Bit Cloud displays the generated component’s documentation, dependency map, API references, and test coverage. Each component exists as a standalone unit with a clear lifecycle, making it easier to update, test and reuse without digging through application code.

  4. Reuse existing components

To extend the design system, you can ask Hope AI to build on existing work:

Create a product grid making use of @hackmamba-creators/design.content.card

Hope AI detects the reference, understands the dependency, and connects the new component to the existing one. This means the new product grid inherits the styling conventions and design patterns of the original card component while respecting its established interface.
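As a rough sketch of what that inheritance means in code, the grid below composes a card's render function rather than duplicating its markup. Card here is a local stand-in for the published @hackmamba-creators/design.content.card component, and the string-rendering style is purely illustrative:

```javascript
// Local stand-in for the published card component; in a real Bit
// workspace this would be imported from the component's package.
function Card({ title, price }) {
  return `<article class="card"><h3>${title}</h3><p>$${price}</p></article>`;
}

// The new grid respects Card's established interface (title, price)
// instead of re-implementing card markup from scratch.
function ProductGrid({ products }) {
  return `<section class="grid">${products.map(Card).join('')}</section>`;
}
```

Because the grid only talks to the card through its interface, a later card update (say, a new hover state) reaches every grid that consumes it without any grid-side changes.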

  5. Version and collaborate

When a component is ready, you open a Change Request to review the implementation. This is where Bit’s Ripple CI automates governance at scale. Beyond running tests, it maps the change’s “blast radius,” identifying every component and application that will be affected by your change and validating them, so you can release with confidence.

Once published to Bit Cloud, your component becomes a first-class “Digital Asset” in your organization’s “Digital Asset Factory.” Each asset is stored as a versioned package, preserving its structure and contracts no matter where it’s consumed. It remains discoverable, documented, and versioned, allowing teams to reuse components confidently across multiple projects and environments.

Reusing the component externally.
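The opt-in step relies on semantic-version ranges. The helper below is a deliberately simplified stand-in for real semver resolution (it ignores pre-release tags and the special handling of 0.x ranges), just to show the decision a consumer's declared range encodes:

```javascript
// Simplified caret-range check: does a consumer's declared range
// (e.g. "^1.2.0") accept a newly published version?
// Not a full semver implementation.
function satisfiesCaret(range, version) {
  if (!range.startsWith('^')) return range === version; // exact pin
  const [rMajor, rMinor, rPatch] = range.slice(1).split('.').map(Number);
  const [vMajor, vMinor, vPatch] = version.split('.').map(Number);
  if (vMajor !== rMajor) return false; // a major bump requires an explicit opt-in
  if (vMinor !== rMinor) return vMinor > rMinor; // newer minors are accepted
  return vPatch >= rPatch; // newer patches are accepted
}
```

A consumer pinned to `^1.2.0` picks up a 1.3.0 release automatically but stays put when 2.0.0 ships, which is the "opt into the new version or stay on the previous one" behavior described earlier.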

Comparing Key Characteristics of Composable AI vs. Flat AI Workflows 

The main difference between flat AI and composable AI workflows comes down to immediacy versus persistence. Flat workflows prioritize generating code quickly, while composable workflows focus on structure, reuse and long-term maintainability.

Here’s a clear comparison:

  • Speed: Flat AI workflows focus on delivering instant results, generating code quickly with minimal upfront planning. In contrast, composable workflows spend a bit more time defining structure, which saves time over the life of the project.
  • Persistence: Flat AI workflows don’t store what’s generated. The snippets live only in the current context and disappear afterward. Meanwhile, composable workflows create documented, versioned components that persist across sessions and projects.
  • Portability: Code produced by flat AI workflows is tied to one project or context, while composable workflows generate components that move cleanly across projects without breaking dependencies.
  • Collaboration: Flat AI workflows lack a shared source of truth, which leads to duplicate variants and manual fixes. Composable workflows, by contrast, publish components as shared modules, making collaboration easier across teams and projects.
  • Scalability: Flat AI workflows fragment as codebases grow, making maintenance harder. Composable workflows scale cleanly through reusable, interoperable building blocks.

Wrapping Up

Prompting in smaller pieces is a good practice. It helps reduce errors and keeps code under control, but it does not solve the deeper problem. Without an architectural layer, the output of AI remains disposable. Code that works today often fragments tomorrow.

Composable architecture fills that gap. By treating every AI-generated piece as a component with a lifecycle, you move from isolated snippets toward a system that grows in value. Bit and Hope AI make this approach practical by generating components that are documented, versioned and shareable from the start.

The advantage this approach brings is structural integrity. Instead of scattering short-lived fragments across projects, your AI workflow builds a library of reusable modules and interconnected building blocks. That shift turns AI-generated code from temporary solutions into a modular architecture that compounds over time, offering a more sustainable way to manage code in an era of AI-assisted development.

If you are already experimenting with AI tools in your daily work, this is the next step. Try scaffolding components with Bit Cloud and see how a composable workflow changes the way your code evolves.

The post Your AI workflow Is Missing a Composable Architecture appeared first on The New Stack.


Udacity Offers 40% Off


Deno 2.6 Adds NPM And JSR Tool


2025 Year in Review: Advocacy at Redgate, Part I


2025 was a crazy year in which I focused on depth over noise, sharing insights drawn from long-term experience and real-world problems. As I dug into this recap, I ended up breaking it into two posts to give proper credit to everything that happened in 2025. You’re also going to find that rather than chasing shiny AI trends, my work this year centered on helping technologists navigate AI complexity with clarity, especially where data protection was concerned.

I delivered keynotes, spoke at technical conferences, published writing, and engaged with the community, and through it all the central thread was practical impact: work that engineers can use, teams can adopt, and leaders can trust.

Setting a Direction

This year I delivered 11 keynote-level talks at technical conferences and universities. These weren’t just thought leadership, but explorations of why choices matter and how technical professionals can succeed amid uncertainty.

My favorite keynotes this year:

The themes that carried through these presentations included:

  • The evolving role of database professionals in hybrid and cloud environments
  • How organizations can integrate AI responsibly and strategically
  • The cultural and governance implications of data democratization
  • Practical leadership in technology transformation

Rather than simply reporting on trends, these talks emphasized decision-making frameworks, helping audiences understand the deeper forces shaping our industry and how to act on them.

Deep Practical Engagement at Technical Conferences

In 2025, I was honored to speak at numerous technical conferences, sharing hands-on guidance and actionable takeaways. These presentations did cover some introductory material, but more often went deep to help professionals solve real challenges in databases, DevOps, AI, and cloud infrastructure.

One highlight that may surprise folks was “PostgreSQL’s Rise to Power: Why the Open-Source Giant is Dominating the Database Landscape” at FOSSY 2025 in Portland, Oregon. This open-source event session examined the practical trends and architectural underpinnings driving PostgreSQL’s adoption across organizations of all sizes. There was a high number of younger attendees, and when a 25-year-old came up to excitedly speak to me about VIM, I was over the MOON!

I spoke at numerous events in the Microsoft, Oracle, DevOps, open-source and AI communities this past year:

  • Zero to Understanding: Oracle for the SQL Server Expert
  • Why the Command Line is Still King in the DevOps World
  • Leading Through Transformation and the Impact of AI to Organizations
  • Guard Rails of Data Democratization with AI in Today’s World
  • DevOps in the Age of AI: Human Powered Evolution
    …and more: 14 new technical sessions for 2025, not counting keynote content. Each of my technical sessions focused on practical lessons from the trenches of modern data engineering.

These demos and talks reflected my goal of tackling real issues that everyone is facing in tech today, not just the latest buzzwords.

Technical Writing with Staying Power

I ended up on the cover of the Financial IT magazine, which surprised me as much as anyone else, as no one had let me know beforehand that I was going to be headlined!

Writing remained a core part of my work at Redgate in 2025. Instead of short takes or trend pieces, I prioritized long-form, reference-quality content that database professionals, whether new or experienced, could return to.

Several Redgate/Simple Talk articles published this year included:

I also wrote a total of 35 posts this year on DBAKevlar.  I was really thrilled that I was able to contribute to my own blog again, as I started it back in 2008, which means there’s 18 years of investment demonstrating my own technical journey.

Community Through Podcasts

In addition to writing, I continued my involvement with the Simple Talk podcast, where we unpack technology adoption, career experiences, industry challenges, and emerging topics like AI governance and data security.

I also participated in Simple Talk’s “State of the Database Landscape 2025” podcast alongside Louis, Steve, and Grant, discussing trends in security, AI adoption, and database professional development.  In the first half of 2025, there were so many, I wondered if all I was going to do was podcasts and keynotes for the year.

But…there’s more

As I’m breaking this up into two parts, the next post I’ll get into community, mentoring and advisory work that was part of my 2025, so stay tuned!


20 Years of Make: Triumph of the Makers


O'Reilly Media's Tim O'Reilly looks back at 20 years of the maker movement.

The post 20 Years of Make: Triumph of the Makers appeared first on Make: DIY Projects and Ideas for Makers.
