Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

🤖🗣️ Local AI Voices in .NET — VibeVoice & Qwen TTS


⚠ This blog post was created with the help of AI tools. Yes, I used a bit of magic from language models to organize my thoughts and automate the boring parts, but the geeky fun and the 🤖 in C# are 100% mine.

Hi!

Let’s look at these two code snippets… what’s behind them?

🧠 Snippet 1 — VibeVoice (Native TTS in .NET)

using ElBruno.VibeVoiceTTS;

using var tts = new VibeVoiceSynthesizer();
await tts.EnsureModelAvailableAsync(); // auto-download model if needed

float[] audio = await tts.GenerateAudioAsync("Hello! Welcome to VibeVoiceTTS.", "Carter");
tts.SaveWav("output.wav", audio);

This generates a WAV file from text using the VibeVoice-Realtime-0.5B model, running locally via ONNX.
The first time you run it, the model is automatically downloaded.

No REST calls. No API keys. No cloud dependency.

🧠 Snippet 2 — QwenTTS (Local TTS + Voice Cloning Ready)

using ElBruno.QwenTTS.Pipeline;

// Models are downloaded automatically on first execution
using var pipeline = await TtsPipeline.CreateAsync("models");
await pipeline.SynthesizeAsync("Hello world!", "ryan", "hello.wav", "english");

This example uses a Qwen3-TTS ONNX pipeline to generate speech locally, fully in C#.


Why I Built This

My goal has always been simple:

Make AI easy and natural for .NET developers.

We’ve made great progress in:

  • Embeddings
  • Agents
  • RAG
  • Local models
  • AI orchestration

But when it came to Text-to-Speech, there was a gap.

Most solutions required:

  • Python
  • External services
  • Complex wrappers
  • Non-.NET idioms

I didn’t like that. IMHO, TTS should feel like C#, not like glue code around another ecosystem. These repositories are my attempt to change that.


What Makes This Different?

Both libraries are built around a few core principles:

✅ 100% Local Execution

Models run on your machine (or your server).

✅ ONNX + .NET Runtime

No Python in production.

✅ Auto Model Management

Models download automatically the first time you use them.

✅ Idiomatic C# APIs

Async/await. Disposable patterns. Clean abstractions.

If you can use HttpClient, you can use these libraries.
If you understand Task, you can generate AI-powered speech.


VibeVoice — Simple and Direct

Repository: https://github.com/elbruno/ElBruno.VibeVoiceTTS

NuGet: https://www.nuget.org/packages/ElBruno.VibeVoiceTTS

VibeVoice is ideal if you want:

  • Fast setup
  • Built-in voice presets
  • Clean WAV output
  • Minimal configuration

It uses the VibeVoice-Realtime-0.5B ONNX model and exposes a straightforward synthesizer API.


QwenTTS — Flexible and Powerful

Repository: https://github.com/elbruno/ElBruno.QwenTTS

NuGet: https://www.nuget.org/packages/ElBruno.QwenTTS

QwenTTS is built around Qwen3-TTS, exported to ONNX and integrated into a C# pipeline.

It supports:

  • Multiple speakers
  • Multi-language scenarios
  • More advanced synthesis control
  • Voice cloning capabilities (via dedicated pipeline)

This opens the door to:

  • Custom AI assistants
  • Personalized voice experiences
  • Voice-enabled RAG systems
  • AI avatars

Why Local TTS Matters

Running TTS locally gives you:

  • 🔒 Privacy — no text leaves your machine
  • 💰 No per-request costs
  • ⚡ Low latency
  • 🧪 A safe playground for experimentation
  • 📦 Full control over deployment

If you’re exploring:

  • Local AI
  • Foundry Local
  • Offline AI scenarios
  • Edge deployments

These libraries are a practical starting point.


Bonus: Voice Cloning (Work in progress)

The QwenTTS repository includes support for voice cloning via a dedicated pipeline.

This means you can:

  • Generate speech in a reference voice
  • Personalize assistant experiences
  • Experiment with identity-driven AI systems

Final Thoughts

For me, generating natural speech locally should be as simple as:

  • Adding a NuGet package
  • Writing a few lines of C#
  • Running your app

That’s it.

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno





Rachel Appel on Dark Patterns


Episode 891

Rachel Appel on Dark Patterns

Breaking News: You won't believe what Rachel says about dark patterns! The answer will leave you astonished! JetBrains Developer Advocate Rachel Appel describes deceptive UI practices. She cautions developers and marketers to avoid these practices, which often lead to a loss of trust and other long-term problems. She informs users what to watch out for, and how to remain vigilant.


Introducing ToDoOrDie


ToDoOrDie HomePage

Have you ever noticed that after a few weeks with any to-do app, you end up with a list full of tasks that, if you’re being honest with yourself, you’re probably never going to get around to doing? They just sit there, day after day, week after week, always taking up space in your headspace.

I always wanted an option in the various to-do apps I have used over the years to just auto-remove uncompleted tasks after a week or so. So I built something different. An app where, after a set amount of time, unfinished tasks just go away. You can always add them back if they still matter. But the goal is to keep your list focused on the things you can actually accomplish and to let go of the things that, despite your best intentions, were never going to happen.

I’ve wanted this app to exist for a long time. Toward the end of 2025, I started experimenting with Inertia.js, and decided it was time to make this happen. There’s no real commercial intent here. It’s just something I always wished existed, and now it does.

The app itself is relatively simple, but building it gave me the chance to dig deeper into Inertia.js, which has been pretty amazing. I have a few more projects in the works, and it really does feel like the combination of coding agents and LLMs alongside Inertia.js and Ruby on Rails is the one-person framework we’ve been looking for all these years.

ToDoOrDie Dashboard

If you have a minute, sign up at todoordie.com. Kick the tires. Let me know what you think.

On a side note, here is a quick list of some of the new(er) to me things I got
to experiment with:

  1. Inertia.js - in particular the client-side prefetch took some trial and
    error to make the task state come and go as you would expect. And of course
    React. I am still a big fan of Turbo/Stimulus, but I feel like I have seen the
    light.
  2. Designing with LLMs
  3. Copywriting with an LLM - I wanted the app to have a specific (slightly over the
    top) language.
  4. Sign In with Apple - As horrible as I imagined.
  5. Kamal - I am a long-time Heroku user for commercial apps and HatchBox for my
    personal projects. This was the first time I did the full Kamal experience
    which morphed into the build happening on GitHub after a successful CI run
    and eventually deploying with a notification sent to Sentry.

Run SQL queries on local parquet and delta files using DuckDB


Yesterday I showed how we could query local parquet and delta files using pandas and deltalake. Although these libraries work, once you start loading big parquet files you see your system stall while your memory usage spikes.

A colleague suggested giving DuckDB a try. I had never heard of it, but let’s discover it together.

What is DuckDB?

DuckDB is an in-process analytical database — think SQLite, but built for OLAP workloads instead of transactional ones. It runs entirely inside your Python process (no server to spin up, no connection string to manage) and is optimized for the kinds of queries data engineers run every day: large scans, aggregations, joins, and window functions over columnar data.


A few things that make it stand out:

  • It reads files directly. You don't import data into DuckDB before querying it. You point it at a Parquet file, a folder of Parquet files, or a Delta table, and it queries them in place. No ETL step, no intermediate copy.
  • It's columnar and vectorized. DuckDB processes data in column-oriented batches, which is exactly how Parquet stores data on disk. This alignment means it can read only the columns and row groups it needs, skipping the rest entirely — a technique called predicate pushdown.
  • It's fast. Benchmarks consistently put DuckDB ahead of pandas and Spark for single-node analytical queries, often by a significant margin. It also parallelizes across your CPU cores automatically.
  • It speaks standard SQL. No custom API to learn. If you know SQL, you know DuckDB.

Sounds great, right?

Installation

pip install duckdb

That's it. No Docker, no JVM, no config files.

Querying local Parquet files

Once you have your OneLake files synced locally via OneLake File Explorer, they'll live somewhere like:

C:\Users\<you>\OneLake - <workspace>\<lakehouse>.Lakehouse\Tables\<table name>

DuckDB can query these directly. Note: use forward slashes in your SQL strings — backslashes cause issues.

import duckdb

# Query a single Parquet file
result = duckdb.query("""
    SELECT *
    FROM 'C:/Users/you/OneLake - MyWorkspace/MyLakehouse.Lakehouse/Tables/orders/part-00001.parquet'
    LIMIT 5
""").df()

# Query all Parquet files in a folder using a glob
result = duckdb.query("""
    SELECT *
    FROM 'C:/Users/you/OneLake - MyWorkspace/MyLakehouse.Lakehouse/Tables/orders/*.parquet'
    LIMIT 5
""").df()

The .df() at the end returns a pandas DataFrame, which is handy for further processing or display in a notebook.



Only read what you need

This is where DuckDB really earns its place in a data engineering toolkit. When you add a WHERE clause, DuckDB doesn't read the entire file and then filter — it uses the Parquet file's built-in metadata (row group statistics) to skip chunks of the file that can't possibly match your filter.

# DuckDB reads only the row groups that could contain status = 'shipped'
result = duckdb.query("""
    SELECT customer_id, order_date, amount
    FROM 'C:/Users/you/OneLake - MyWorkspace/MyLakehouse.Lakehouse/Tables/orders/*.parquet'
    WHERE status = 'shipped'
      AND order_date >= '2024-01-01'
""").df()

Similarly, if you only select a few columns, DuckDB reads only those columns off disk — not the full row. On a wide table with 50+ columns, this can reduce I/O by an order of magnitude.

Querying Delta Lake tables

Delta tables are stored as a folder of Parquet files plus a _delta_log/ transaction log. DuckDB supports Delta natively via the delta_scan function, which reads the transaction log to determine which files are part of the current table state.

table_path = "C:/Users/you/OneLake - MyWorkspace/MyLakehouse.Lakehouse/Tables/orders"

result = duckdb.query(f"""
    SELECT *
    FROM delta_scan('{table_path}')
    WHERE status = 'shipped'
""").df()

This is important: if you just glob the Parquet files in a Delta folder directly, you might accidentally include files that were deleted or replaced by later transactions. Using delta_scan ensures you're reading a consistent, correct snapshot of the table.


Using a persistent connection for Multiple Queries

By default, duckdb.query() uses an in-memory connection that's discarded after each call. For notebook workflows where you want to run multiple queries, create a connection once and reuse it:


import duckdb

con = duckdb.connect()  # In-memory, but persistent across calls

# Register a path as a named view for convenience
con.execute("""
    CREATE OR REPLACE VIEW orders AS
    SELECT * FROM delta_scan('C:/Users/you/.../Tables/orders')
""")

con.execute("""
    CREATE OR REPLACE VIEW customers AS
    SELECT * FROM 'C:/Users/you/.../Tables/customers/*.parquet'
""")

# Now query as if they were tables
result = con.execute("""
    SELECT o.order_id, c.name
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    LIMIT 10
""").df()

This makes your notebooks much cleaner, especially when working with multiple tables.

Mixing DuckDB with Pandas

DuckDB integrates seamlessly with the rest of the Python data stack. You can query a pandas DataFrame directly from SQL:

import pandas as pd

# Some existing DataFrame in your notebook
df = pd.read_parquet("...")

# Query it with SQL — DuckDB can see Python variables
result = duckdb.query("SELECT * FROM df WHERE amount > 1000").df()
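If you prefer an explicit mapping over that scope lookup, you can also register a DataFrame under a name of your choice on a connection. A small sketch (the name orders_df and the sample amounts are made up):

```python
import duckdb
import pandas as pd

df = pd.DataFrame({"amount": [500, 1500, 2500]})

# Implicit: duckdb.query() resolves 'df' from the surrounding Python scope
n = duckdb.query("SELECT count(*) FROM df WHERE amount > 1000").fetchone()[0]
print(n)  # 2

# Explicit: register the DataFrame under a chosen name on a connection
con = duckdb.connect()
con.register("orders_df", df)
n2 = con.execute("SELECT count(*) FROM orders_df WHERE amount > 1000").fetchone()[0]
print(n2)  # 2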

When to reach for DuckDB?

DuckDB isn't always the right tool — but it's the right tool more often than people expect. Here's a quick guide:

  • Aggregate or filter a large Parquet table without loading it all: ✅ Yes
  • Ad hoc SQL exploration in a notebook: ✅ Yes
  • Joining multiple local Parquet/Delta tables: ✅ Yes
  • Delta time travel (reading historical versions): ❌ Use delta-rs
  • Simple row-level transforms on a small table: ⚠️ Pandas is fine

For single-node workloads on data that fits on a local disk — which covers a huge portion of real-world data engineering tasks — DuckDB is hard to beat.

Wrapping up

DuckDB fills a gap that pandas can't: fast, memory-efficient SQL over large local files, with no infrastructure overhead. When you combine that with OneLake File Explorer's ability to sync your Fabric Lakehouse tables locally, you get a surprisingly powerful local development setup — query terabytes of data with standard SQL, iterate quickly, and push results back to Fabric when you're done.

I added DuckDB as part of my default data analytics toolkit and I would recommend doing the same.

More information

DuckDB – An in-process SQL OLAP database management system


Comparing Amazon Q and GitHub Copilot Agentic AI in VS Code

This head-to-head test compared Amazon Q Developer and GitHub Copilot Pro using a real-world editorial workflow to evaluate their performance as "agentic" assistants beyond simple coding. Both tools utilized the Claude 3.5 Sonnet model for these tests.

Simplifying Grid Layout in .NET MAUI Using Extension Methods


The grid extension methods from the Community Toolkit can make working with grids in .NET MAUI cleaner. Learn more!

The grid is one of the layouts that provides the best performance for your .NET MAUI apps, and I love it because it’s also very flexible. You can create designs that look complex, and a grid even works as a replacement for the old RelativeLayout.

But beyond all its advantages, today we’ll talk about some tips and tricks to help you get the most out of it using the extension methods that the Maui.Community.Toolkit provides for the grid.

What Is an Extension Method?

An extension method is a C# feature that allows you to add methods to an existing type, such as classes, interfaces or structs. Thanks to them, you don’t need to inherit from a class or modify its original code to use new functionality. It’s like giving superpowers to a class that already exists.

For example, if the grid originally didn’t have a method called Rows(), you could create an extension method to add it, and it would behave as if that method were part of the grid itself.

What Are Grid Extension Methods?

Grid extensions are a set of extensions specifically designed for the grid included in the Maui.Community.Toolkit. Let’s explore which extensions are available and what each of them does!

Row

Its purpose is to set the Grid.Row value and, optionally, the RowSpan (Grid.RowSpan) if you want an element to extend vertically across multiple rows.

This method can be used with any element that inherits from BindableObject, such as Label, Button, Image, etc.

Here’s an example of how to implement it:

❌ Without Extension Methods

var button = new Button { Text = "This Button is in Row 1 and spans 5 rows" };
Grid.SetRow(button, 1);
Grid.SetRowSpan(button, 5);
var grid = new Grid { Children = { button } };

✅ Using Extension Methods

new Grid { Children = { new Button().Text("This Button is in Row 1 and spans 5 rows").Row(1, 5) } };

Column

Its purpose is to set the Grid.Column value and, optionally, the ColumnSpan (Grid.ColumnSpan) if you want an element to extend horizontally across multiple columns. This method can be used with any element that inherits from BindableObject.

Here’s an example of how to implement it:

❌ Without Extension Methods

var button = new Button { Text = "This Button is in Column 2 and spans 4 columns" };
Grid.SetColumn(button, 2);
Grid.SetColumnSpan(button, 4);
var grid = new Grid { Children = { button } };

✅ Using Extension Methods

new Grid { Children = { new Button().Text("This Button is in Column 2 and spans 4 columns").Column(2, 4) } };

Defining Rows & Columns

The Toolkit provides an extension that lets you define your Grid’s rows and columns in a simpler and more direct way, offering shorter, more expressive equivalents for common GridLength values. This is possible thanks to the following helpers:

  • Columns.Define
  • Rows.Define

To use them, you just need to add the following line at the top of your class:

using static CommunityToolkit.Maui.Markup.GridRowsColumns;

This using static allows you to replace long expressions like GridLength.Auto or new GridLength(2, GridLength.Star) with their shorter forms such as Auto, Star, Stars(2), etc.

For better clarity, here’s a comparison table:

  • GridLength.Auto → Auto (adjusts to the content size)
  • new GridLength(1, GridLength.Star) → Star (takes the available space, 1*)
  • new GridLength(2, GridLength.Star) → Stars(2) (takes 2× the available space, 2*)
  • new GridLength(20, GridLength.Absolute) → 20 (absolute size of 20)

And here’s how that translates into code:

❌ Without Extension Methods

ColumnDefinitions = new ColumnDefinitionCollection 
{ 
    new ColumnDefinition { Width = new GridLength(20, GridLength.Absolute) }, 
    new ColumnDefinition { Width = new GridLength(1, GridLength.Star) }, 
    new ColumnDefinition { Width = new GridLength(2, GridLength.Star) } 
};

✅ Using Extension Methods

ColumnDefinitions = Columns.Define(20, Star, Stars(2));

When you compare both versions, look at everything you gain:

  • Far less code—from six lines down to just one
  • Much more readable and expressive
  • And the best part: way easier to maintain

Defining Rows & Columns Using Enums

This is my favorite one! Normally, when defining the rows and columns of a grid, you use numeric values: row 2, column 2, etc. But the more rows and columns you have, the harder it becomes to identify, read and maintain that layout.

Now imagine that instead of numbers, you could use names. Well, now you can!

The Community Toolkit allows you to name your rows and columns using enums (for example: Row.Username, Column.UserInput). This makes your code so much more readable, easier to understand and incredibly simple when identifying or modifying any coordinate inside the Grid.

And now, let’s see how to do it!

Step 1: Import the Required Helpers

using static CommunityToolkit.Maui.Markup.GridRowsColumns;

Step 2: Create Your Enum for Both Rows and Columns

First, the rows:

enum Row { Username, Password, Submit }

Then the columns:

enum Column { Description, UserInput }

Step 3: Use the Enum

Now that we’ve created our enums, let’s define the grid’s rows and columns using them, just like in the following example:

RowDefinitions = Rows.Define(
    (Row.Username, 30),
    (Row.Password, 30),
    (Row.Submit, Star)),

ColumnDefinitions = Columns.Define(
    (Column.Description, Star),
    (Column.UserInput, Star)),

You can also use the enum to position your controls.

Instead of doing something like this:

.Row(0).Column(1)

You can simply do:

.Row(Row.Username).Column(Column.Description)

That’s it!! Much cleaner, more expressive and easier to understand at a glance.

RowSpan & ColumnSpan

RowSpan and ColumnSpan are extension methods that let you define how many rows or columns, respectively, a control should occupy within the grid. For example, if my grid has three rows and two columns, you would have something like this:

new Button() 
  .Text("Button to test")  
  .RowSpan(3) 
  .ColumnSpan(2)

And you can get even more out of your enum by using these special helpers:

All<TEnum>() – Indicates that the control should occupy all the rows or all the columns defined in your enum. For example, if I have the following enums:

enum Row { Username, Password, Submit }
enum Column { Description, UserInput }

My definition would look like this:

new Button() 
   .Text("Button to test") 
   .RowSpan(All<Row>()) 
   .ColumnSpan(All<Column>())

Last<TEnum> – Allows you to place a control in the last row or last column defined in your enum. You can do it as follows:

⚠️ For this example, we’ll use the enums defined above (Row and Column). In the code, a button is added to the last row and last column of the grid.

new Button() 
   .Text("Button to test") 
   .Row(Last<Row>()) 
   .Column(Last<Column>());

Conclusion

And that’s it! In this article, you explored how the grid extension methods from the CommunityToolkit.Maui.Markup make working with grids in .NET MAUI simpler, cleaner and much more expressive. You learned what each helper does, how to define rows and columns with a shorter syntax, and how the Row/Column extension methods improve readability and reduce boilerplate.

Now you have the tools to make grid extensions your ally—helping you build layouts faster, write more maintainable UI code and create interfaces that are both easier to understand and easier to evolve over time.

If you have any questions or want me to cover more related topics, feel free to leave a comment. I’d be happy to help you! See you in the next article! ✨

References

The explanation was based on the official documentation:


Need a More Robust Grid?

The Progress Telerik UI for .NET MAUI DataGrid is one of the most powerful native MAUI grids available. It comes with 70+ other components in a free 30-day trial if you want to explore it in detail.

Try Telerik UI for .NET MAUI
