Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Comprehension Debt: The Hidden Cost of AI-Generated Code


The following article originally appeared on Addy Osmani’s blog site and is being reposted here with the author’s permission.

Comprehension debt is the hidden cost to human intelligence and memory resulting from excessive reliance on AI and automation. For engineers, it applies most to agentic engineering.

There’s a cost that doesn’t show up in your velocity metrics when teams go deep on AI coding tools, especially when it’s tedious to review all the code the AI generates. This cost accumulates steadily, and eventually it has to be paid—with interest. It’s called comprehension debt or cognitive debt.

Comprehension debt is the growing gap between how much code exists in your system and how much of it any human being genuinely understands.

Unlike technical debt, which announces itself through mounting friction—slow builds, tangled dependencies, the creeping dread every time you touch that one module—comprehension debt breeds false confidence. The codebase looks clean. The tests are green. The reckoning arrives quietly, usually at the worst possible moment.

Margaret-Anne Storey describes a student team that hit this wall in week seven: They could no longer make simple changes without breaking something unexpected. The real problem wasn’t messy code. It was that no one on the team could explain why design decisions had been made or how different parts of the system were supposed to work together. The theory of the system had evaporated.

That’s comprehension debt compounding in real time.

I’ve read Hacker News threads in which engineers genuinely wrestle with the structural version of this problem—not the familiar optimism-versus-skepticism binary, but a field trying to figure out what rigor actually looks like now that the bottleneck has moved.

How AI assistance impacts coding speed and skill formation

A recent Anthropic study titled “How AI Impacts Skill Formation” highlighted the potential downsides of over-reliance on AI coding assistants. In a randomized controlled trial with 52 software engineers learning a new library, participants who used AI assistance completed the task in roughly the same time as the control group but scored 17% lower on a follow-up comprehension quiz (50% versus 67%). The largest declines occurred in debugging, with smaller but still significant drops in conceptual understanding and code reading. The researchers emphasize that passive delegation (“just make it work”) impairs skill development far more than active, question-driven use of AI. The full paper is available at arXiv.org.

There is a speed asymmetry problem here

AI generates code far faster than humans can evaluate it. That sounds obvious, but the implications are easy to underestimate.

When a developer on your team writes code, the human review process has always been a bottleneck—but a productive and educational one. Reading their PR forces comprehension. It surfaces hidden assumptions, catches design decisions that conflict with how the system was architected six months ago, and distributes knowledge about what the codebase actually does across the people responsible for maintaining it.

AI-generated code breaks that feedback loop. The volume is too high. The output is syntactically clean, often well-formatted, superficially correct—precisely the signals that historically triggered merge confidence. But surface correctness is not systemic correctness. The codebase looks healthy while comprehension quietly hollows out underneath it.

One engineer put it well: the bottleneck has always been a competent developer understanding the project. AI doesn’t change that constraint. It creates the illusion that you’ve escaped it.

And the inversion is sharper than it looks. When code was expensive to produce, senior engineers could review faster than junior engineers could write. AI flips this: A junior engineer can now generate code faster than a senior engineer can critically audit it. The rate-limiting factor that kept review meaningful has been removed. What used to be a quality gate is now a throughput problem.

I love tests, but they aren’t a complete answer

The instinct to lean harder on deterministic verification—unit tests, integration tests, static analysis, linters, formatters—is understandable, and I rely on it heavily in projects that lean on AI coding agents. Automate your way out of the review bottleneck. Let machines check machines.

This helps, but it has a hard ceiling.

A test suite capable of covering all observable behavior would, in many cases, be more complex than the code it validates. And complexity you can’t reason about doesn’t provide safety. Beneath that is a more fundamental problem: You can’t write a test for behavior you haven’t thought to specify.

Nobody writes a test asserting that dragged items shouldn’t turn completely transparent. Of course not; that possibility never occurs to anyone until it happens. That’s exactly the class of failure that slips through: not because the test suite was poorly written, but because no one thought to look there.

There’s also a specific failure mode worth naming. When an AI changes implementation behavior and updates hundreds of test cases to match the new behavior, the question shifts from “is this code correct?” to “were all those test changes necessary, and do I have enough coverage to catch what I’m not thinking about?” Tests cannot answer that question. Only comprehension can.

The data is starting to back this up. Research suggests that developers who delegate code generation wholesale to AI score below 40% on comprehension tests, while developers who use AI for conceptual inquiry—asking questions, exploring tradeoffs—score above 65%. The tool doesn’t destroy understanding. How you use it does.

Tests are necessary. They are not sufficient.

Lean on specs, but they’re also not the full story

A common proposed solution: Write a detailed natural language spec first. Include it in the PR. Review the spec, not the code. Trust that the AI faithfully translated intent into implementation.

This is appealing in the same way Waterfall methodology was once appealing. Rigorously define the problem first, then execute. Clean separation of concerns.

The problem is that translating a spec to working code involves an enormous number of implicit decisions—edge cases, data structures, error handling, performance tradeoffs, interaction patterns—that no spec ever fully captures. Two engineers implementing the same spec will produce systems with many observable behavioral differences. Neither implementation is wrong. They’re just different. And many of those differences will eventually matter to users in ways nobody anticipated.

There’s another possibility with detailed specs worth calling out: A spec detailed enough to fully describe a program is more or less the program, just written in a non-executable language. The organizational cost of writing specs thorough enough to substitute for review may well exceed the productivity gains from using AI to execute them. And you still haven’t reviewed what was actually produced.

The deeper issue is that there is often no correct spec. Requirements emerge through building. Edge cases reveal themselves through use. The assumption that you can fully specify a non-trivial system before building it has been tested repeatedly and found wanting. AI doesn’t change this. It just adds a new layer of implicit decisions made without human deliberation.

Learn from history

Decades of managing software quality across distributed teams with varying context and communication bandwidth have produced real, tested practices. Those don’t evaporate because the team member is now a model.

What changes with AI is cost (dramatically lower), speed (dramatically higher), and interpersonal management overhead (essentially zero). What doesn’t change is the need for someone with a deep system context to maintain a coherent understanding of what the codebase is actually doing and why.

This is the uncomfortable redistribution that comprehension debt forces.

As AI volume goes up, the engineer who truly understands the system becomes more valuable, not less. The ability to look at a diff and immediately know which behaviors are load-bearing. To remember why that architectural decision got made under pressure eight months ago. To tell the difference between a refactor that’s safe and one that’s quietly shifting something users depend on. That skill becomes the scarce resource the whole system depends on.

There’s a bit of a measurement gap here too

The reason comprehension debt is so dangerous is that nothing in your current measurement system captures it.

Velocity metrics look immaculate. DORA metrics hold steady. PR counts are up. Code coverage is green.

Performance calibration committees see velocity improvements. They cannot see comprehension deficits because no artifact of how organizations measure output captures that dimension. The incentive structure optimizes correctly for what it measures. What it measures no longer captures what matters.

This is what makes comprehension debt more insidious than technical debt. Technical debt is usually a conscious tradeoff—you chose the shortcut, you know roughly where it lives, you can schedule the paydown. Comprehension debt accumulates invisibly, often without anyone making a deliberate decision to let it. It’s the aggregate of hundreds of reviews where the code looked fine and the tests were passing and there was another PR in the queue.

The organizational assumption that reviewed code is understood code no longer holds. Engineers approved code they didn’t fully understand, which now carries implicit endorsement. The liability has been distributed without anyone noticing.

The regulation horizon is closer than it looks

Every industry that moved too fast eventually attracted regulation. Tech has been unusually insulated from that dynamic, partly because software failures are often recoverable, and partly because the industry has moved faster than regulators could follow.

That window is closing. When AI-generated code is running in healthcare systems, financial infrastructure, and government services, “the AI wrote it and we didn’t fully review it” will not hold up in a post-incident report when lives or significant assets are at stake.

Teams building comprehension discipline now—treating genuine understanding, not just passing tests, as non-negotiable—will be better positioned when that reckoning arrives than teams that optimized purely for merge velocity.

What comprehension debt actually demands

The right question for now isn’t “how do we generate more code?” It’s “how do we actually understand more of what we’re shipping?” so that our users get a consistently high-quality experience.

That reframe has practical consequences. It means being ruthlessly explicit about what a change is supposed to do before it’s written. It means treating verification not as an afterthought but as a structural constraint. It means maintaining the system-level mental model that lets you catch AI mistakes at architectural scale rather than line-by-line. And it means being honest about the difference between “the tests passed” and “I understand what this does and why.”

Making code cheap to generate doesn’t make understanding cheap to skip. The comprehension work is the job.

AI handles the translation, but someone still has to understand what was produced, why it was produced that way, and whether those implicit decisions were the right ones—or you’re just deferring a bill that will eventually come due in full.

You will pay for comprehension sooner or later. The debt accrues interest rapidly.



Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

995: Next.js Vendor Lock-in No More


In this episode, Scott and Wes sit down with Tim Neutkens and Jimmi Lai from the Next.js team to dig into the new Adapters API, what it takes to run Next.js across platforms like Cloudflare and Netlify, and how caching and infrastructure choices affect performance. They also go deep on TurboPack’s internals, why Next.js doesn’t run on Vite, and the evolution of bundling in the framework.

Show Notes

  • 00:00 Welcome to Syntax!
  • 01:14 Introduction to Next.js and the Adapter Platform
  • 02:23 The Adapters API: Features and Community Needs
  • 04:46 Building and Testing the Adapters API
  • 07:37 Infrastructure Requirements for Next.js Apps
  • 11:38 Caching Strategies and Performance Optimization
  • 13:29 The Role of Cache Components in Next.js
  • 17:21 First Steps of Optimizations
  • 19:48 Blessed Adapters and Community Contributions
  • 22:56 Future Directions and Runtime Support
  • 25:05 Challenges with Different Runtimes and Debugging
  • 26:45 Webpack vs. TurboPack: The Evolution of Next.js
  • 29:45 Why Not Run on Vite?
  • 32:47 Navigating Bundler Challenges
  • 36:59 Building TurboPack: Lessons Learned
  • 41:42 Incremental Compilation and Performance
  • 43:50 Framework Comparisons and Performance Metrics
  • 46:42 Exploring Future Directions for TurboPack
  • 49:44 TurboPack’s Integration and API Development
  • 52:50 Standardization in Bundler Tools
  • 56:52 TurboPack’s Adoption and User Experience
  • 57:49 Sick Picks + Shameless Plugs

Sick Picks

Shameless Plugs

Hit us up on Socials!

Syntax: X Instagram Tiktok LinkedIn Threads

Wes: X Instagram Tiktok LinkedIn Threads

Scott: X Instagram Tiktok LinkedIn Threads

Randy: X Instagram YouTube Threads





Download audio: https://traffic.megaphone.fm/FSI5959488954.mp3

The Hidden Cost of Splitting the Scrum Master Role — And Why Stance Changes Make or Break Your Impact | Efe Gümüs


Efe Gümüs: The Hidden Cost of Splitting the Scrum Master Role — And Why Stance Changes Make or Break Your Impact

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.


"The biggest problem when I reflect on it now is the stance changes, because as Scrum Masters, we have to establish our impartiality when we are facilitating and when we are coaching." - Efe Gümüs


Efe started his career as a network operation automation engineer, fresh out of an electrical and electronics engineering degree. When his manager asked him to take on a part-time Scrum Master role alongside his developer duties, the challenge of switching between those two stances became immediately real. As a developer, your mind focuses on solving problems. As a Scrum Master, your job is to help teams own the solution — not solve it yourself.

That split led Efe to a bigger realization about scope and boundaries. When he stepped too far into the Scrum Master role, he created an unintended authority dynamic. When he stepped too far back, he became invisible.

The turning point came when he stopped an alignment call that wasn't working — the information was flowing one way, and the Scrum Masters didn't feel like peers. By naming the problem and co-creating the session format, he found the middle ground: describe your expectations, get agreement, and let people tell you where they need help. One small action from you can move a problem forward in two or three steps — but only if you know about it.


Self-reflection Question: When was the last time you paused a meeting that wasn't working and explicitly renegotiated how the group would interact — and what held you back from doing it sooner?


[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.


🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.


Buy Now on Amazon


[The Scrum Master Toolbox Podcast Recommends]


About Efe Gümüs


Efe is an out-of-the-box Agile Coach and Scrum Master who brings fresh perspectives to Agile by connecting it with everyday life. He uses metaphors to reveal mindset patterns and applies continuous feedback loops beyond work, including music production and gym training, constantly refining performance, creativity, and personal growth and resilience.


You can link with Efe Gümüs on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260413_Efe_Gumus_M.mp3?dest-id=246429

When AI Makes Your Pipeline Faster But Your Product Worse


Bob Galen and Josh Anderson dig into the two ends of the AI-accelerated pipeline: the upstream product strategy crisis, where AI amplifies leadership indecision, and the downstream delivery chaos when customers and internal teams can't absorb what's being shipped. Real talk, real examples.

Stay Connected and Informed with Our Newsletters

Josh Anderson's "Leadership Lighthouse"

Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.

Subscribe here

Bob Galen's "Agile Moose"

Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."

Subscribe here

Do More Than Listen:

We publish video versions of every episode and post them on our YouTube page.

Help Us Spread The Word: 

Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!





Download audio: https://episodes.captivate.fm/episode/45ba74e5-76f0-4e54-9cfa-7c318f7162db.mp3

Maker Monday: 10 non-traditional clock projects


We love it when folks get creative with a Raspberry Pi, and Maker Monday is a great time to showcase this. While there are definitely many Raspberry Pi devices out there doing important jobs in factories and industrial settings, it’s nice to see when people do something artistic — or just plain funny — with a Raspberry Pi at home. A great example of this is the many ways people have taken the ancient concept of telling the time and given it a twist. Here are just some of our favourites from the latest issue of Raspberry Pi Official Magazine.

01. Counter-rotating clock

Shifted perspective

This is the project that sparked the idea for this list: the hour hand remains stationary while the entire face rotates instead. Editor Lucy does not enjoy it.

02. Falling clock

Manufactured error

When Burke McCabe decided to fix his clock, which was making a screeching noise, he added a camera so that it would know when it was being looked at and then launch itself off the wall.

03. Colourful word clock

Say the time

Word clocks use a word-search-style mixture of words and letters to tell you the time, any time, in the 12-hour format. There are word clocks available in other languages too.

04. Smart Nixie tubes

Retro tech

A striking bit of retro technology updated with a Raspberry Pi 3 to accurately tell the time. The perfect prop for your steampunk movie.

05. Moon and tide clock

Alternate time-telling

Less weird and more ‘cool and classy’, this modified vintage carriage clock uses e-ink displays to show the current moon phase and the tide timings. It looks really lovely.

06. Robot arm clock

Automated manual progression

When your clock breaks, how do you fix it? Well, if you’re anything like Hendrik Ohrens, you program a robot arm to manually move the minute hand. Obviously!

07. Pico solar system display

The cosmic ballet

This early but memorable Raspberry Pi Pico project simulates the positions of the planets in our solar system depending on what time it’s set to. You could set it to the future (or past) if you wish to miss work.

08. Flip clock

Got you babe

Flip clocks are retro chic and were memorably featured in Groundhog Day. With modern 3D printing and Raspberry Pi know-how, it’s dead easy to make one yourself. Every day.

09. Self-snoozing alarm

Get more rest

A bit like that box that closes itself, this lovely retro-style alarm clock art piece will go off as usual — only for the robot arm to snooze it for you.

10. Dual-spiral marble clock

Time rolls on

Very simply, the spirals slowly turn, changing the position of each marble so that they accurately line up with the hour on one spiral and the minute on the other. Why? Why not!

Issue 164 of Raspberry Pi Official Magazine is out now!

This article appeared in issue 164 of Raspberry Pi Official Magazine, which you can grab from Tesco, Sainsbury’s, Asda, WHSmith, and other newsagents, as well as the Raspberry Pi Store in Cambridge. It’s also available from our online store, which ships around the world. And you can get a digital version via our app on Android or iOS.

You can also subscribe to the print version of our magazine. Not only do we deliver worldwide, but people who sign up to the six- or twelve-month print subscription get a FREE Raspberry Pi Pico 2 W!

The post Maker Monday: 10 non-traditional clock projects appeared first on Raspberry Pi.


Build an AI Content Creation App in Blazor with AI AssistView


Build an AI Content Creation App in Blazor with AI AssistView

TL;DR: Build an AI-assisted content creation app in Blazor using Syncfusion AI AssistView as the chat surface. Add Blog/KB modes with different system instructions, suggestions, and output templates, then layer in file attachments, speech-to-text, and text-to-speech via JS interop.

If you’ve ever tried to use a single “write me content” prompt for everything, you’ve seen the problem: blog drafts turn into support docs, and KB articles start reading like marketing copy. The UI is fine; the instructions and structure are what break down.

In this walkthrough, you’ll build a small Blazor Web App (.NET 10) that uses the Syncfusion® Blazor AI AssistView component as the chat surface and routes requests through a mode-aware prompt builder so your output stays Blog or Knowledge Base consistently.

Before getting started with this implementation, make sure you’re familiar with the baseline setup (Syncfusion packages and AssistView rendering) described in the official documentation.

What’s new in the content creation app

Here’s what we’re adding on top of the getting-started baseline:

1. Blog vs KB modes

Blogs and KB articles have different success criteria. The mode switch lets you make those differences explicit, so the model stops guessing.

2. Mode-aware prompting

Instead of sending args.Prompt directly, we’ll build a final prompt that includes:

  • shared rules (markdown, accuracy, etc.)
  • a mode-specific output template

This is the part that typically improves consistency the most.

3. Real workflow features

  • Attachments: bring your own reference doc/log/draft.
  • Speech-to-text: capture ideas faster.
  • Text-to-speech: review long output hands-free.
  • Clear prompts: reset and start a fresh session.

How it works (architecture in one minute)

The pattern is simple and scales well:

  1. UI (AI AssistView) collects the user prompt.
  2. A mode selector (Blog vs KB) chooses a configuration:
    • system instruction (high-level role + constraints)
    • prompt suggestions (starter prompts that match the mode)
    • output template (the structure you want back)
  3. Your PromptRequested handler builds a final prompt:
    • shared rules (Markdown, accuracy)
    • mode template (Blog vs KB outline)
    • user request
  4. You call Gemini (or any model) and return the response via UpdateResponseAsync.

This is a practical midpoint between raw chat and a full workflow engine.

After selecting a prompt from the prompt suggestions

After scrolling through the end of the AI response output

That gives you a working AssistView UI. But if you keep one generic instruction and one mixed-suggestion list, users will get an inconsistent output structure. The rest of this post is about fixing that by making mode a first-class concept.

Prerequisites

  • .NET 10 SDK (or your target SDK)
  • A Blazor Web App with Syncfusion Blazor configured
  • Google.GenAI configured with an API key (Gemini)
  • Browser permission for microphone + speech synthesis (for voice features)

With the groundwork in place, let’s dive into the step‑by‑step implementation.

Step 1: Add a Blog/KB mode model

Create a mode enum and a config object:

public enum ContentMode
{
    Blog,
    KnowledgeBase
}

public sealed class ModeConfig
{
    public required string DisplayName { get; init; }
    public required string SystemInstruction { get; init; }
    public required List<string> Suggestions { get; init; }
}

Then define your mode configs:

private ContentMode mode = ContentMode.Blog;

private readonly Dictionary<ContentMode, ModeConfig> modes = new()
{
    [ContentMode.Blog] = new ModeConfig
    {
        DisplayName = "Blog",
        SystemInstruction = @"You are an expert content creator. Generate a developer blog post with clear headings, short paragraphs, and actionable steps. End with 8 FAQs.",
        Suggestions = new List<string>
        {
            "Generate a blog post outline on sustainable energy.",
            "Create a draft blog introduction about Blazor development.",
            "Suggest SEO keywords for a tech blog on AI.",
            "Rewrite this section to be more concise."
        }
    },
    [ContentMode.KnowledgeBase] = new ModeConfig
    {
        DisplayName = "KB",
        SystemInstruction = @"You are an expert support writer. Generate a developer knowledge base article with Summary, Environment, Symptoms, Resolution steps, and Troubleshooting. End with 8 FAQs.",
        Suggestions = new List<string>
        {
            "Write a KB article on troubleshooting network issues.",
            "Draft a KB: 'App fails to start' with resolution steps.",
            "Create a troubleshooting checklist for intermittent timeouts.",
            "Summarize the likely root causes and next diagnostics."
        }
    }
};

private ModeConfig CurrentMode => modes[mode];

Step 2: Wire mode suggestions into AI AssistView

Point AssistView at the current mode’s prompt suggestion list:

<SfAIAssistView @ref="assistView"
                PromptSuggestions="CurrentMode.Suggestions"
                PromptRequested="@PromptRequest">
</SfAIAssistView>

Step 3: Add a simple mode selector (and clear chat on switch)

The key detail here: clear prompts when switching modes. Otherwise, the previous tone/structure can leak into the next request.

<div style="width:155px; padding-bottom:20px; margin-left:34px">
    <label for="contentType">Content Type</label>
    <SfDropDownList TValue="string"
                    ID="contentType"
                    TItem="ContentType"
                    Placeholder="Select a content type"
                    DataSource="@ContentTypes"
                    @bind-Index="@selectedContentTypeIndex">
        <DropDownListFieldSettings Value="ID"
                                   Text="Text">
        </DropDownListFieldSettings>
        <DropDownListEvents TValue="string"
                            TItem="ContentType"
                            ValueChange="OnValueChange">
        </DropDownListEvents>
    </SfDropDownList>
</div>

public class ContentType
{
    public string? ID { get; set; }
    public string? Text { get; set; }
}
private int? selectedContentTypeIndex { get; set; } = 0;

List<ContentType> ContentTypes = Enum.GetValues(typeof(ContentMode))
    .Cast<ContentMode>()
    .Select(mode => new ContentType
    {
        ID = mode.ToString().ToLower(),
        Text = mode.ToString()
    })
    .ToList();

public void OnValueChange(ChangeEventArgs<string, ContentType> args)
{
    if (Enum.TryParse<ContentMode>(args.ItemData.Text, out var parsed))
    {
        mode = parsed;

        // Prevent tone/structure “leakage” across modes
        assistView?.Prompts?.Clear();
        StateHasChanged();
    }
}

Blog and KB option on the page

Step 4: Make SystemInstruction mode-aware

In your PromptRequested handler, set SystemInstruction based on the mode:

GenerateContentConfig config = new GenerateContentConfig()
{
    SystemInstruction = new Content()
    {
        Parts = new List<Part>
        {
            new Part { Text = CurrentMode.SystemInstruction }
        }
    }
};

Step 5: Add a structured prompt builder (shared rules + mode template)

This is where output consistency really improves:

private string BuildUserPrompt(string rawPrompt)
{
    var sharedRules = """
        Use Markdown.
        Use headings and bullet points where helpful.
        Be accurate and avoid making up product behaviors.
        """;

    var modeTemplate = mode switch
    {
        ContentMode.Blog => """
            Output structure:
            - # Title
            - Intro (direct answer + bullets)
            - Main sections with H2/H3 headings
            - Common mistakes
            - Conclusion
            - FAQs (8)
            """,
        ContentMode.KnowledgeBase => """
            Output structure:
            - # Title
            - Summary
            - Environment / Applies to
            - Symptoms
            - Resolution (numbered steps)
            - Troubleshooting
            - FAQs (8)
            """,
        _ => ""
    };

    return $"""
        {sharedRules}
        {modeTemplate}
        User request:
        {rawPrompt}
        """;
}

Step 6: Call Gemini with the final prompt

Then call Gemini with the final prompt and update AssistView:

private async Task PromptRequest(AssistViewPromptRequestedEventArgs args)
{
    var config = new GenerateContentConfig
    {
        SystemInstruction = new Content
        {
            Parts = new List<Part> { new Part { Text = CurrentMode.SystemInstruction } }
        }
    };

    try
    {
        var finalPrompt = BuildUserPrompt(args.Prompt);

        var content = await client.Models.GenerateContentAsync(
            "gemini-2.5-flash",
            finalPrompt,
            config);

        var responseText = content.Candidates[0].Content.Parts[0].Text;

        await assistView.UpdateResponseAsync(responseText);
        args.Response = assistView.Prompts[^1].Response;
    }
    catch (Exception ex)
    {
        args.Response = $"Error: {ex.Message}";
    }
}

Notes for reliability:

  • BuildUserPrompt(...) is where you centralize formatting and guardrails.
  • UpdateResponseAsync(…) pushes the model response back into the Syncfusion Blazor AI AssistView UI.

Optional features

Attachments

Attachments are how this becomes a real content workflow: users can drop in a draft, requirements, logs, or reference notes.

<SfAIAssistView AttachmentSettings="@attachmentSettings"
                PromptRequested="@PromptRequest"
                PromptSuggestions="CurrentMode.Suggestions" />

In the @code block, add the attachment settings:

private AssistViewAttachmentSettings attachmentSettings = new()
{
    Enable = true,
    SaveUrl = "https://blazor.syncfusion.com/services/production/api/FileUploader/Save",
    RemoveUrl = "https://blazor.syncfusion.com/services/production/api/FileUploader/Remove"
};
AI AssistView with file attachments

Note: In production, use your own endpoints and validate file type/size server-side.
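To make that note concrete, here is a minimal sketch of what server-side validation for the Save endpoint could look like. Everything here is an assumption for illustration: the route, the extension allow-list, and the 5 MB limit are placeholders to adapt to your own app, not part of the Syncfusion demo service.

```
// Illustrative ASP.NET Core upload endpoint with basic validation.
// Route, allow-list, and size limit are assumptions; adapt to your app.
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/FileUploader")]
public class FileUploaderController : ControllerBase
{
    private static readonly string[] AllowedExtensions = { ".txt", ".md", ".pdf", ".docx" };
    private const long MaxBytes = 5 * 1024 * 1024; // 5 MB

    [HttpPost("Save")]
    public async Task<IActionResult> Save(IFormFile file)
    {
        var ext = Path.GetExtension(file.FileName).ToLowerInvariant();
        if (!AllowedExtensions.Contains(ext))
            return BadRequest("Unsupported file type.");
        if (file.Length == 0 || file.Length > MaxBytes)
            return BadRequest("File is empty or too large.");

        // Persist to your storage of choice (disk, blob storage, database).
        var savePath = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ext);
        await using var stream = System.IO.File.Create(savePath);
        await file.CopyToAsync(stream);
        return Ok();
    }
}
```

Point SaveUrl and RemoveUrl at endpoints like these instead of the demo service before shipping.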

Speech-to-text

If you’re brainstorming out loud (or you’re building for accessibility), speech-to-text is a big win. You can wire it using Syncfusion’s speech components plus a small JS helper to write into a content-editable footer.

Here’s how you can do it in code:

<div class="integration-section">
    <SfAIAssistView @ref="assistView"
                    PromptSuggestions="CurrentMode.Suggestions"
                    PromptRequested="@PromptRequest"
                    AttachmentSettings="@attachmentSettings">
        <AssistViews>
            <AssistView>
                <BannerTemplate>
                    <div class="banner-content">
                        <div class="e-icons e-listen-icon"></div>
                        <i>Click the mic button below to convert your voice to text.</i>
                    </div>
                </BannerTemplate>
                <FooterTemplate>
                    <div class="e-footer-wrapper">
                        <div id="assistview-footer"
                             class="content-editor"
                             contenteditable="true"
                             placeholder="Click to speak or start typing..."
                             @ref="@EditableDiv"
                             @oninput="@UpdateContent"
                             @onkeydown="@OnKeyDown">
                             @AssistViewFooterValue
                         </div>
                         <div class="option-container">
                             <SfSpeechToText ID="speechToText"
                                             TranscriptChanging="@OnTranscriptChange"
                                             SpeechRecognitionStopped="@HandleStopRecognition"
                                             CssClass="@($"e-flat {SpeechToTextCssClass}")">
                             </SfSpeechToText>
                             <SfButton ID="assistview-sendButton"
                                       IconCss="e-assist-send e-icons"
                                       CssClass="@ButtonCssClass"
                                       @onclick="SendButtonClicked">
                             </SfButton>
                         </div>
                     </div>
                 </FooterTemplate>
             </AssistView>
         </AssistViews>
    </SfAIAssistView>
</div>

In the @code block, add the following:

private ElementReference EditableDiv;
private string AssistViewFooterValue = String.Empty;
private string FooterContent = String.Empty;
private string ButtonCssClass = String.Empty;
private string SpeechToTextCssClass = "visible";

private async Task UpdateContent()
{
    FooterContent = await JSRuntime.InvokeAsync<String>("isFooterContainsValue", EditableDiv);
    ToggleVisibility();
}
private void ToggleVisibility()
{
    ButtonCssClass = string.IsNullOrWhiteSpace(FooterContent) ? "" : "visible";
    SpeechToTextCssClass = string.IsNullOrWhiteSpace(FooterContent) ? "visible" : "";
}
private async Task OnKeyDown(KeyboardEventArgs e)
{
    if (e.Key == "Enter" && !e.ShiftKey)
    {
        await SendButtonClicked();
    }
}
private async Task SendButtonClicked()
{
    await assistView.ExecutePromptAsync(FooterContent);
}
private async Task OnTranscriptChange(TranscriptChangeEventArgs args)
{
    AssistViewFooterValue = args.Transcript;
    await JSRuntime.InvokeVoidAsync("updateContentEditableDiv", EditableDiv, AssistViewFooterValue);
    await InvokeAsync(StateHasChanged);
}
private async Task HandleStopRecognition()
{
    FooterContent = AssistViewFooterValue;
    ToggleVisibility();
    await InvokeAsync(StateHasChanged);
}

Add a new JavaScript file named speech.js and add the following code to it:

/* Speech to Text JavaScript functions - starts */
// Checks if the content editable element contains meaningful text and cleans up stray markup.
function isFooterContainsValue(elementref) {
    if (elementref.innerText.trim() === '') {
        if (elementref.innerHTML === '<br>' || elementref.innerHTML.trim() === '') {
            elementref.innerHTML = '';
        }
    }
    return elementref.innerText || "";
}
// Clears the text content of a content editable element.
function emptyFooterValue(elementref) {
    if (elementref) {
        elementref.innerHTML = "";
    }
}
// Updates the text content of a content editable element with a specified value.
function updateContentEditableDiv(element, value) {
    if (element) {
        element.innerText = value;
    }
}
/* Speech to Text JavaScript functions - ends */

Reference the script in App.razor.

<body>
    <script src="@Assets["speech.js"]"></script>
    …
</body>

In the aicontentcreator.css file, add the following CSS styles:

/* Speech-to-text section - starts */
.integration-section .e-view-container {
    margin: auto;
}

.integration-section #assistview-sendButton {
    width: 40px;
    height: 40px;
    font-size: 20px;
    border: none;
    background: none;
    cursor: pointer;
}

.integration-section #speechToText.visible,
.integration-section #assistview-sendButton.visible {
    display: inline-block;
}

.integration-section #speechToText,
.integration-section #assistview-sendButton {
    display: none;
}

.integration-section .e-footer-wrapper {
    display: flex;
    border: 1px solid #c1c1c1;
    padding: 5px 5px 5px 10px;
    margin: 5px 5px 0 5px;
    border-radius: 30px;
}

.integration-section .content-editor {
    width: 100%;
    overflow-y: auto;
    font-size: 14px;
    min-height: 25px;
    max-height: 200px;
    padding: 10px;
}

.integration-section .content-editor[contentEditable=true]:empty:before {
    content: attr(placeholder);
    color: #6b7280;
}

.integration-section .option-container {
    align-self: flex-end;
}

.integration-section .banner-content .e-audio:before {
    font-size: 25px;
}

.integration-section .banner-content {
    display: flex;
    flex-direction: column;
    gap: 10px;
    text-align: center;
    padding-top: 80px;
}

.integration-section .banner-content .e-listen-icon:before {
    font-size: 25px;
}

@media only screen and (max-width: 750px) {
    .integration-section {
        width: 100%;
    }
}
/* Speech-to-text section - ends */
Speech-to-Text functionality added in the AI AssistView component

Text-to-speech

Text-to-speech is ideal for reviewing long drafts. Add a toolbar button that:

  • Extracts plain text from the response HTML.
  • Calls speechSynthesis.speak(...) via JS interop.
  • Toggles to a stop button while speaking.
<SfAIAssistView>
…
    <BannerTemplate>
        <div class="banner-content">
        …
            <div class="e-icons e-audio"></div>
            <i>Ready to assist, voice enabled!</i>
        </div>
    </BannerTemplate>

    <ResponseToolbar ItemClicked="ResponseToolbarItemClicked">
        <ResponseToolbarItem IconCss="e-icons e-assist-copy" Tooltip="Copy"></ResponseToolbarItem>
        <ResponseToolbarItem IconCss="@audioIconCss" Tooltip="@audioTooltip"></ResponseToolbarItem>
        <ResponseToolbarItem IconCss="e-icons e-assist-like" Tooltip="Like"></ResponseToolbarItem>
        <ResponseToolbarItem IconCss="e-icons e-assist-dislike" Tooltip="Needs Improvement"></ResponseToolbarItem>
    </ResponseToolbar>
</SfAIAssistView>

In the @code block, add the following:

private bool IsSpeaking = false;
private string audioIconCss = "e-icons e-audio";
private string audioTooltip = "Read Aloud";
private DotNetObjectReference<AIContentCreator>? dotNetRef;

protected override void OnInitialized()
{
    dotNetRef = DotNetObjectReference.Create(this);
}

// Handles toolbar item clicks to toggle text-to-speech functionality for AI responses
private async Task ResponseToolbarItemClicked(AssistViewToolbarItemClickedEventArgs args)
{
    var prompts = assistView.Prompts;
    if (prompts.Count > args.DataIndex && prompts[args.DataIndex].Response != null)
    {
        string responseHtml = prompts[args.DataIndex].Response;
        string text = await JSRuntime.InvokeAsync<string>("extractTextFromHtml", responseHtml);

        if (args.Item.IconCss == "e-icons e-audio" || args.Item.IconCss == "e-icons e-assist-stop")
        {
            if (IsSpeaking)
            {
                await JSRuntime.InvokeVoidAsync("cancel");
                IsSpeaking = false;
                audioIconCss = "e-icons e-audio";
                audioTooltip = "Read Aloud";
            }
            else if (!string.IsNullOrEmpty(text))
            {
                IsSpeaking = await JSRuntime.InvokeAsync<bool>("speak", text, dotNetRef);
                if (IsSpeaking)
                {
                    audioIconCss = "e-icons e-assist-stop";
                    audioTooltip = "Stop";
                }
                else
                {
                    await JSRuntime.InvokeVoidAsync("console.warn", "Failed to start speech synthesis.");
                }
            }
            await InvokeAsync(StateHasChanged);
        }
    }
}

[JSInvokable]
public void OnSpeechEnd()
{
    IsSpeaking = false;
    audioIconCss = "e-icons e-audio";
    audioTooltip = "Read Aloud";
    StateHasChanged();
}

public void Dispose()
{
    dotNetRef?.Dispose();
    dotNetRef = null;
}

In the speech.js file, add the following text-to-speech (TTS) functions:

/* Text to Speech JavaScript functions - starts*/

// Initialize the speechSynthesisInterop object to store speech-related data if it doesn't exist.
window.speechSynthesisInterop = window.speechSynthesisInterop || {};

// Converts HTML content to plain text by stripping HTML tags.
function extractTextFromHtml(html) {
    const tempDiv = document.createElement('div');
    tempDiv.innerHTML = html;
    return (tempDiv.textContent || tempDiv.innerText || '').trim();
}

// Initiates text-to-speech synthesis to read the provided text aloud.
function speak(text, dotNetRef) {
    // Check if the browser supports the Web Speech API
    if ('speechSynthesis' in window) {
        // Create a new speech synthesis utterance with the provided text
        const utterance = new SpeechSynthesisUtterance(text);
        // Call the Blazor OnSpeechEnd method when speech ends
        utterance.onend = () => {
            dotNetRef.invokeMethodAsync('OnSpeechEnd');
        };
        utterance.onerror = (event) => {
            console.error('Speech synthesis error:', event);
            dotNetRef.invokeMethodAsync('OnSpeechEnd');
        };
        // Start speaking the utterance
        window.speechSynthesis.speak(utterance);
        // Store the utterance in the global interop object for cancellation
        window.speechSynthesisInterop.currentUtterance = utterance;
        return Promise.resolve(true);
    } else {
        console.warn('Web Speech API is not supported.');
        return Promise.resolve(false);
    }
}

// Cancels any ongoing speech synthesis.
function cancel() {
    if ('speechSynthesis' in window) {
        // Stop any active speech synthesis
        window.speechSynthesis.cancel();
        // Clear the stored utterance reference
        window.speechSynthesisInterop.currentUtterance = null;
    }
}
/* Text to Speech JavaScript functions - ends */
Read aloud option in the AI AssistView component

Clear prompts

Add a header toolbar item in the AI AssistView to clear the chat:

@using Syncfusion.Blazor.Navigations
<AssistViewToolbar ItemClicked="ToolbarItemClicked">
    <AssistViewToolbarItem Type="ItemType.Spacer"></AssistViewToolbarItem>
    <AssistViewToolbarItem IconCss="e-icons e-refresh" Tooltip="Clear all prompts"></AssistViewToolbarItem>
</AssistViewToolbar>

In the @code block, add the following:

private void ToolbarItemClicked(AssistViewToolbarItemClickedEventArgs args)
{
    if (args.Item.IconCss == "e-icons e-refresh")
    {
        assistView.Prompts.Clear();
    }
}
Header Toolbar item to clear all prompts

Next steps

  • Start from the Syncfusion “getting started” baseline
  • Add Blog/KB mode and the structured prompt builder from this post
  • Then pick one production hardening step:
    • Replace demo attachment endpoints with your own.
    • Add model/provider abstraction to switch between Gemini/OpenAI/Azure.
    • Log prompts/responses for evaluation (and to improve your templates).
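For the model/provider abstraction step, one possible shape is a small interface that the PromptRequest handler calls instead of the Gemini client directly. This is an illustrative sketch only: IChatProvider and GenerateAsync are names invented for this post, not part of the Syncfusion or Gemini SDKs.

```
// Illustrative abstraction; the interface and method names are assumptions,
// not a shipped API. Each provider wraps its own SDK client.
public interface IChatProvider
{
    Task<string> GenerateAsync(string systemInstruction, string prompt);
}
```

With this in place, PromptRequest would call `await provider.GenerateAsync(CurrentMode.SystemInstruction, finalPrompt)`, a Gemini-backed implementation would wrap the GenerateContentAsync call shown in Step 6, and switching to OpenAI or Azure becomes a dependency-injection registration driven by configuration rather than a UI change.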
AI-Assisted Content Creation App

Frequently Asked Questions

How do I make the app generate Blogs and KB articles reliably?

Use a Blog/KB mode selector and change systemInstruction, suggestion prompts, and the output template per mode.

How do I stop text-to-speech once it starts reading a response?

Use JS interop to call speech synthesis cancel.

Should I keep chat history when switching Blog to KB?

Usually, no. Clearing prompts on mode switch helps prevent the previous mode’s structure/tone from leaking into the new draft.
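A minimal way to do that is to clear the prompt history inside your mode-change handler. OnModeChanged is a hypothetical name here; wire it to however your UI switches between Blog and KB modes.

```
// Hypothetical mode-change handler; adapt to the mode selector from your setup.
private void OnModeChanged(ContentMode newMode)
{
    mode = newMode;              // the mode read by BuildUserPrompt
    assistView.Prompts.Clear();  // stop the old mode's tone/structure from leaking
}
```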

Can I support multiple models/providers with the same UI?

Yes. Keep AI calls behind an abstraction and drive model selection from configuration.

How do I ensure responses are rendered nicely with headings and lists?

Yes. As of v33.1.44, the AI AssistView supports Markdown rendering for responses pushed via the UpdateResponseAsync method.

Conclusion

Thanks for reading! By combining Syncfusion Blazor AI AssistView with a mode-aware prompt builder, you can deliver an AI writing experience in Blazor that feels purpose-built for both Blog creation and Knowledge Base documentation. The key is treating "mode" as a first-class concept: swap system instructions, suggestions, and output templates so the model consistently produces the right structure. From there, attachments, speech-to-text, and read-aloud features turn a basic chat into a practical content workflow: fast to use, easy to review, and ready to extend for production scenarios.

Try our Blazor component by downloading a free 30-day trial or from our NuGet package. Feel free to have a look at our online examples and documentation to explore other available features.

If you have any questions, please let us know in the comments section below. You can also contact us through our support forum, support portal, or feedback portal. We are always happy to assist you!
