
Why changing keyboard shortcuts in Visual Studio isn’t as simple as it seems


A straight look at what’s behind the keys

We’ve all tried unlearning a keyboard shortcut – it feels like forgetting how to breathe. Muscle memory doesn’t mess around. We wrestle with this every time someone suggests a “quick” shortcut change. It’s not just editing a keybinding – it’s navigating a history that makes Visual Studio so customizable for developers like us.

Picture yourself deep in code, chugging coffee, ready to close a tab. You hit Ctrl+W because Chrome, VS Code, and every other tool uses it. But in Visual Studio? You likely need Ctrl+F4, a combo straight out of the Windows 98 era. Or maybe you try commenting out a line of code with Ctrl+/, a standard elsewhere that Visual Studio adopted late. Why? The team isn’t clueless – every shortcut ties to years of workflows we depend on.

Let’s walk through why that history powers Visual Studio and why changing a shortcut like Ctrl+W is such a challenge.

One command, multiple shortcuts

Visual Studio lets you handle the same task with different shortcuts to match your workflow. To close a tab, you can hit Ctrl+F4, a go-to for longtime users. If you come from tools like VS Code or Chrome and prefer Ctrl+W, Visual Studio supports that too. This flexibility rocks – you stick with what you know or adopt newer standards without losing your groove.

But it gets tricky. Many key combos in Visual Studio already do something and reassigning one can disrupt established workflows. For example, Ctrl+W closes tabs in most tools, but in Visual Studio, it selects the current word – a shortcut coders have relied on since the 2000s. If that’s wired into your fingers, changing it could derail you. Visual Studio keeps both shortcuts, letting you use what works while supporting everyone else’s habits.

That ability to support multiple shortcuts is just the start of Visual Studio’s customization, though – it goes deeper with how it tailors the IDE to you.

Developer profiles

When you launch Visual Studio, it doesn’t throw you into a generic setup. It prompts you to choose a developer profile – General, Web, C#, C++, and others. This choice shapes your shortcuts, layout, and entire coding experience to fit how you work. Visual Studio’s history of letting developers carry over habits from other IDEs or editors ensures your shortcuts feel right from the start.

Here’s the catch: the same command can use different shortcuts based on your profile. In the C# profile, you build a solution with F6. In the General profile, you hit Ctrl+Shift+B. It’s not chaos – it stems from years of developers like us telling the team what fits our work.

Profiles aren’t the only way Visual Studio adapts to your coding style, though – there’s another layer that makes switching tools even smoother.

Keyboard schemes

To make jumping between tools less jarring, Visual Studio offers keyboard schemes – like VS Code’s shortcuts or ReSharper’s keymap. It’s like plugging your own keyboard into a shared machine. These schemes build on Visual Studio’s history of supporting diverse coding styles, letting you dive in without starting from scratch.

keyboard schemes image

But with all this customization, how do we know what shortcuts you’re actually using and why? That’s where things get murky.

The intent behind the shortcut

When we consider changing a shortcut, we dig into telemetry to see how you use Visual Studio. It reveals which shortcuts you hit, how often, and when. But here’s the tough part: it doesn’t explain why. If you press Ctrl+W, do you select a word, as Visual Studio intends, or expect to close a tab because VS Code or Chrome does that? We see the keypresses, but your intent remains a mystery.

That’s where the art lies. Some of us rely on Ctrl+W for its original role; others follow muscle memory from another tool. Without knowing who’s who, changing a shortcut risks breaking someone’s workflow.

This uncertainty complicates things further when you factor in how Visual Studio organizes shortcuts behind the scenes.

Scopes

Visual Studio’s commanding system has a killer feature: scoped shortcuts. Every shortcut applies to a specific scope, so you can bind the same shortcut to different commands in different contexts. To close a tab with Ctrl+W, we register it in the Global scope. But any scope can override that. For example, Ctrl+W selects the current word in the Text Editor scope. The active scope depends on where your focus is – the editor, Solution Explorer, or another tool window.

To remap Ctrl+W to close tabs, we register it in the Global scope and ensure no other scope overrides it. This setup gives you flexibility but adds complexity when changing shortcuts, as we must account for every scope’s bindings.

And just when you think you’ve got a handle on that, another wrinkle shows up in how some shortcuts are structured.

Sequenced shortcuts

Visual Studio supports sequenced shortcuts, where you press multiple keys to trigger a command. For example, in the Text Editor scope, Ctrl+E, Ctrl+W toggles word wrap. Many sequenced shortcuts start with Ctrl+E, followed by another key. If we bind a command to just Ctrl+E, it fires immediately, cutting off any chance for the second key in the sequence to register. This breaks all those Ctrl+E-based sequences, as Visual Studio stops listening for additional keypresses once it detects Ctrl+E.

This means we must carefully check existing sequences before assigning single-key shortcuts to avoid breaking workflows that rely on multi-key combos.

With all these layers – multiple shortcuts, profiles, schemes, scopes, sequences, and unknown user intent – changing a shortcut becomes a high-stakes juggling act.

The balancing act

Every shortcut in Visual Studio connects to our coding habits – late-night bug hunts, team workflows we’ve refined for years. When we add or change a shortcut, we don’t just pick a new key. We examine the entire keyboard, identify what’s in use, and sometimes shuffle other shortcuts to make room. For instance, if we set Ctrl+W to close tabs to align with modern tools, we might need to reassign “Select Current Word” to avoid leaving anyone stranded. It’s a delicate balance to keep every developer’s flow intact, and that history of customization makes Visual Studio ours.

Ctrl+W in Visual Studio 2026

This post walked you through the process we followed to map Ctrl+W to close the current tab in Visual Studio 2026. For C# profile users, we held off on this change to avoid disrupting existing workflows, especially given potential conflicts with sequenced shortcuts. If you’re using the C# profile and want Ctrl+W to close tabs, you can easily set it up yourself in the keybinding settings.
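One possible route is an exported-settings tweak. A minimal sketch, assuming the exported .vssettings format and that your profile currently binds Ctrl+W to word selection in the Text Editor scope:

<UserShortcuts>
  <!-- Remove the Text Editor binding so it no longer overrides the Global one -->
  <RemoveShortcut Command="Edit.SelectCurrentWord" Scope="Text Editor">Ctrl+W</RemoveShortcut>
  <!-- Bind Ctrl+W to closing the active document tab in the Global scope -->
  <Shortcut Command="Window.CloseDocumentWindow" Scope="Global">Ctrl+W</Shortcut>
</UserShortcuts>

You can make the same change interactively under Tools, Options, Environment, Keyboard, which avoids editing the file by hand.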

What’s next?

So, what shortcuts do you want to see next? Got a key combo you need or one that’s driving you nuts? Throw it in the comments – the team’s reading, and your input could help steer where Visual Studio goes from here.


The post Why changing keyboard shortcuts in Visual Studio isn’t as simple as it seems appeared first on Visual Studio Blog.


How do I check whether the user has permission to create files in a directory?


A customer wanted to accept a directory entered by the user and verify that the user has permission to create files in that folder. The directory itself might not even be on a local hard drive; it could be a DVD or a remote network volume. They tried calling GetFileAttributes, but all they were told was that it was a directory.¹ How can they find out whether the user can create files in it?

The file attributes are largely legacy flags carried over from MS-DOS. The actual control over what operations are permitted comes not from the file attributes but from the security attributes.

Fortunately, you don’t have to learn how to parse security attributes. You can just request the desired access when you open the file or directory. In other words, to find out if you can do the thing, ask for permission to do the thing.

The access right that controls whether users can create new files in a directory is FILE_ADD_FILE. You can find a complete list in the documentation under File Access Rights Constants.

Directories are a little tricky because you have to open them with backup semantics.

bool HasAccessToDirectory(PCWSTR directoryPath, DWORD access)
{
    // FILE_FLAG_BACKUP_SEMANTICS is required to open a handle to a directory.
    HANDLE h = CreateFileW(directoryPath, access,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, nullptr,
        OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        return false;
    } else {
        CloseHandle(h);
        return true;
    }
}

bool CanCreateFilesInDirectory(PCWSTR directoryPath)
{
    return HasAccessToDirectory(directoryPath, FILE_ADD_FILE);
}

You can choose other access flags to detect other things. For example, checking for FILE_ADD_SUBDIRECTORY checks whether the user can create subdirectories, and checking for FILE_DELETE_CHILD checks whether the user can delete files and remove subdirectories from that directory. If you want to check multiple things, you can OR them together, because security checks require that you be able to do all of the things you requested before it will let you in.

bool CanCreateFilesAndSubdirectoriesInDirectory(PCWSTR directoryPath)
{
    return HasAccessToDirectory(directoryPath,
                FILE_ADD_FILE | FILE_ADD_SUBDIRECTORY);
}

Note that these are moment-in-time checks. You will have to be prepared for the possibility that the user has lost access by the time you actually try to perform the operation. But this will at least give you an opportunity to tell the user up front, “You don’t have permission to create files in this folder. Pick another one.”²

As I noted, this technique applies to files as well. If you want to know if the user can write to a file, open it for writing and see if it succeeds!
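For instance, a minimal sketch of that file-level check (same pattern as the directory helper above, but without backup semantics and requesting FILE_WRITE_DATA):

bool CanWriteToFile(PCWSTR filePath)
{
    // Ask for write access up front; the open fails if we don't have it.
    HANDLE h = CreateFileW(filePath, FILE_WRITE_DATA,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE, nullptr,
        OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        return false;
    }
    CloseHandle(h);
    return true;
}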

¹ And we learned some time ago that the read-only attribute on directories doesn’t actually make the directory read-only.

² This could be handy if the act of creating the files happens much later in the workflow. For example, maybe you’re asking the user where to save the query results. The query itself might take a long time, so you don’t want to let the user pick a directory, and then 30 minutes later, put up a dialog box saying “Oops, I couldn’t save the files in that directory. Maybe you should have picked a better one 30 minutes ago.”

The post How do I check whether the user has permission to create files in a directory? appeared first on The Old New Thing.


Blazor CI/CD with GitHub Actions: Automate Deployment to Azure Static Web Apps



TL;DR: Automate your Blazor app deployment using GitHub Actions. This guide walks you through building a secure CI/CD pipeline for Blazor WebAssembly, running unit and Playwright tests, deploying to Azure Static Web Apps, monitoring with Application Insights, and hardening workflows with CodeQL, secret scanning, and Dependabot.

High-performing DevOps teams don’t just ship code faster; they deliver with confidence. According to the 2024 DevOps report, elite teams deploy 182× more often, restore service 2293× faster, and reduce lead time for changes by 127× compared to low performers. The secret? Eliminating manual steps from the delivery pipeline.

Blazor WebAssembly apps are ideal for this kind of acceleration. The framework compiles to static files, and Azure Static Web Apps can serve each new build globally within seconds. By integrating GitHub Actions into your workflow, you can:

  • Test every commit automatically so regressions never reach production.
  • Deploy successful builds instantly to Azure, with no zip uploads and no portal clicks.
  • Validate the live site with Playwright smoke tests and capture runtime errors in Application Insights.

This guide provides the exact YAML configuration, scripts, and Azure setup to automate your Blazor CI/CD pipeline end-to-end, helping you ship updates faster, with less risk, and zero extra overhead.

Prerequisites

Before you create the workflow, make sure you have the following in place:

  • GitHub account and repository: You’ll need a GitHub account with a repository. For this guide, we assume main as the default branch.
  • Azure subscription: Ensure you have an Azure subscription with permission to create a Static Web App. The Starter tier works perfectly for this setup.
  • .NET SDK: Install the .NET 8 SDK or later on your local machine to build and deploy the Blazor application.
  • Blazor WASM project: Have a Blazor WebAssembly project ready for deployment.
  • Runner choice: Decide on your runner.
    • GitHub-hosted runners (e.g., ubuntu-latest) work out of the box.
    • Self-hosted runners are also fine, but make sure the .NET SDK is installed.
  • Deployment token: Finally, set up your deployment token:
    • In the Azure Portal, open your Static Web App → Manage deployment token → copy the token.
    • Then, in GitHub, go to Settings → Secrets → Actions → New secret, and paste the token as AZURE_STATIC_WEB_APPS_API_TOKEN.

Deployment sample

You can find the full example and YAML configuration in the GitHub repo.

Step 1: Create the build & test workflow

Begin by creating a folder named .github/workflows at the root of your repository. Inside it, add a file called blazor-ci.yml.

build & test workflow

What happens under the hood

  • Triggers: The workflow runs on every push to main and on every pull request targeting main. That keeps both feature branches and trunk clean.
  • SDK installation: actions/setup-dotnet downloads the .NET 8 SDK, caches it, and adds it to $PATH.
  • Build vs. publish: dotnet build compiles the code, and dotnet publish generates the static wwwroot folder that Azure needs.
  • Code coverage: The --collect switch stores coverage data so you can upload it to a badge service or track change‑failure rate trends later.
  • Artifacts: Uploading publish/wwwroot saves about 30 seconds in the deploy job because you avoid rebuilding.
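Based on the notes above, a minimal sketch of what blazor-ci.yml might contain (the project path MyBlazorApp.csproj and the artifact name are placeholder assumptions to adapt to your repository):

name: blazor-ci

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'

      - name: Build
        run: dotnet build --configuration Release

      - name: Test with coverage
        run: dotnet test --configuration Release --collect:"XPlat Code Coverage"

      - name: Publish
        run: dotnet publish MyBlazorApp.csproj -c Release -o publish

      - name: Upload published site
        uses: actions/upload-artifact@v4
        with:
          name: wwwroot
          path: publish/wwwroot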

Step 2: Deploy to Azure Static Web Apps

Next, add a second job within the same YAML file, immediately after the build job.

Deploy to Azure Static Web Apps

Why use needs: build?

The needs keyword ensures that the deploy job runs only if the build job succeeds. If tests fail, the pipeline stops.

What this action does

  • It compresses wwwroot, uploads it to the production slot, and waits for Azure to finish the swap.
  • It emits outputs like static_web_app_url (production) and preview_url (for PRs). Save these for Playwright tests.
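A sketch of that deploy job, continuing the same jobs: block (the app_location value, the skip_app_build choice, and the job output wiring are assumptions you would adapt to your repo):

  deploy:
    needs: build
    runs-on: ubuntu-latest
    outputs:
      static_web_app_url: ${{ steps.swa.outputs.static_web_app_url }}
    steps:
      - name: Download published site
        uses: actions/download-artifact@v4
        with:
          name: wwwroot
          path: wwwroot

      - name: Deploy to Azure Static Web Apps
        id: swa
        uses: Azure/static-web-apps-deploy@v1
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          action: upload
          app_location: wwwroot
          skip_app_build: true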

Pull request previews

Static Web Apps automatically spin up a temporary environment for every PR. The same deployment step handles this seamlessly, and reviewers receive a private URL like: https://pr-42-sitename.azurestaticapps.net.

Step 3: Run post-deploy smoke tests with playwright

Run end‑to‑end (E2E) tests to catch problems that unit tests miss, such as broken routing and missing static files. Playwright is a fast choice because GitHub ships ready‑to‑use browser bundles.

Create tests/e2e with two files:

playwright.config.ts:
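A minimal sketch of this config, assuming the deployed URL reaches the tests through a SITE_URL environment variable (the same variable the troubleshooting section below refers to):

import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: '.',
  retries: 1,
  use: {
    // SITE_URL is populated by the workflow from the deployment step's output
    baseURL: process.env.SITE_URL,
    trace: 'on-first-retry',
  },
});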

example.spec.ts:
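And a matching smoke-test sketch (the #app selector assumes the default Blazor WebAssembly template markup):

import { test, expect } from '@playwright/test';

test('home page loads and renders the Blazor app', async ({ page }) => {
  await page.goto('/');
  // The default Blazor WASM template renders into <div id="app">
  await expect(page.locator('#app')).toBeVisible();
});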

Now append a third job to the workflow:

append a third job to the workflow

If a test fails, the job stops and marks the build as failed. The site will remain offline until the issue is resolved.

Add --reporter=html and upload the playwright-report folder as an artifact to get a rich dashboard of screenshots.
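Putting those pieces together, a sketch of the e2e job (the Node version and the way SITE_URL is read from the deploy job's output are assumptions):

  e2e:
    needs: deploy
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: tests/e2e
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Playwright
        run: |
          npm ci
          npx playwright install --with-deps

      - name: Run smoke tests
        env:
          SITE_URL: ${{ needs.deploy.outputs.static_web_app_url }}
        run: npx playwright test --reporter=html

      - name: Upload Playwright report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: playwright-report
          path: tests/e2e/playwright-report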

Step 4: Monitor production with application insights

Static Web Apps integrates with Application Insights out of the box. Here’s what you do:

  1. Turn on Application Insights in the Azure Portal under Static Web App → Settings → Application Insights.
  2. Copy the connection string.
  3. Add it to staticwebapp.config.json at the root of your project:
    {
      "navigationFallback": {
        "rewrite": "/index.html"
      },
      "logging": {
        "connectionString": "InstrumentationKey=..."
      }
    }
  4. Save the string as a GitHub secret APPINSIGHTS_CONNECTIONSTRING, if you prefer to inject it at build time.

Automated health gate

Insert a final step inside the e2e job (after Playwright) that asks App Insights for exceptions in the last five minutes:

- name: Fail if Exceptions > 0
  uses: azure/cli@v1
  with:
    inlineScript: |
      count=$(az monitor app-insights query \
        --app MySite \
        --analytics-query "exceptions | where timestamp > ago(5m) | count" \
        --query "tables[0].rows[0][0]")

      if [ $count -gt 0 ]; then
        echo "::error::Application errors detected: $count"
        exit 1
      fi

If your deployment introduces runtime crashes, the pipeline fails, so you can roll back before customers notice.

Step 5: Secure and harden the pipeline

Treat CI/CD as part of your supply chain and harden it early.

1. Run dependency scanning (CodeQL)

Initialize and run CodeQL analysis to surface known CVEs in third-party packages.

- name: Initialize CodeQL
  uses: github/codeql-action/init@v3
  with:
    languages: csharp

- name: Run CodeQL analysis
  uses: github/codeql-action/analyze@v3

2. Enable secret scanning + push protection

Turn it on under Repo → Code Security → Secret Scanning so GitHub blocks commits that leak API keys.

3. Configure dependabot for automatic version updates

Add .github/dependabot.yml so NuGet and npm updates land as PRs, each going through the same pipeline.
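A minimal dependabot.yml sketch covering both ecosystems (the npm directory assumes the tests/e2e folder used earlier):

version: 2
updates:
  - package-ecosystem: "nuget"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"
    directory: "/tests/e2e"
    schedule:
      interval: "weekly"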

4. Pin action versions

Lock actions to full commit SHAs (e.g., Azure/static-web-apps-deploy@42a7b9…) instead of @v2 to eliminate supply-chain drift.

5. Secure self‑hosted runners

Run them in a dedicated subnet without long‑lived credentials and use OIDC federation with Azure instead of PATs.

Step 6: Troubleshooting quick reference

Here are common issues and quick fixes:

  1. deployment_token was not provided
    • Double‑check the secret name.
    • It must be exactly AZURE_STATIC_WEB_APPS_API_TOKEN.
  2. 404 Static Web App not found
    • The token points to a different resource group.
    • Regenerate it from the correct Static Web App.
  3. No projects found during dotnet build
    • Your solution is in a subfolder.
    • Add working-directory: src or pass the .sln.
  4. Slow restore every run
    • Cache the NuGet folder:
      - name: Use NuGet cache
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
          cache: true
      
  5. Playwright can’t connect
    • In push builds to main, the preview_url output is empty, which can leave SITE_URL blank.
    • Read static_web_app_url, not preview_url, after a merge.

Putting it all together

Your finished .github/workflows/blazor-ci.yml now contains three jobs:

  1. Build: compiles, runs unit‑tests, publishes, and uploads artifacts.
  2. Deploy: pulls artifacts and deploys to Static Web Apps (production or PR slot).
  3. e2e: runs Playwright smoke tests and checks Application Insights for runtime errors.

A single green tick signals:

  • The code compiles on a clean machine.
  • Unit tests pass.
  • The site is live at its final URL.
  • Core pages respond correctly.
  • No new exceptions appeared.

You’ve automated all four DORA metrics: deployment frequency, lead‑time for changes (minutes), mean‑time‑to‑restore (App Insights alerts), and change‑failure rate (Playwright + unit tests).

With Syncfusion Blazor components, you can build stunning and efficient web apps.

Conclusion

Thank you for reading! By investing just a few hours, you can replace manual ZIP uploads with a repeatable, observable, and secure CI/CD pipeline. With GitHub Actions, each commit is automatically built and tested, the compiled site is deployed to Azure Static Web Apps, Playwright smoke tests are executed, and Application Insights is checked for errors.

This streamlined setup reduces lead time, increases deployment frequency, and lowers the risk of change failure, meeting all four DORA metrics without any manual steps.

For questions or assistance, feel free to reach out via our support forum, support portal, or feedback portal. We’re always happy to help!


How to Transition to Cross-Platform Development


In this post, we will analyze the importance of bringing your applications to any operating system, as well as the challenges that this entails. We will also see how prebuilt controls can help you create cross-platform apps more quickly and efficiently.

Why Should You Build Cross-Platform Applications?

According to StatCounter, a site specialized in gathering market share data for browsers, devices, social media, and more, between July 2024 and July 2025 Android held the largest average market share among operating systems at 45.67%, followed by Windows with 25.94%, while the Apple operating systems together accounted for about 23.4%.

Market share of operating systems in the past year, based on StatCounter data

With these statistics, it is possible to understand that, while we could target the development of an application to a specific operating system, this would reduce the number of users we could reach. This would mean giving way to other similar applications that can fill the gap we leave by not focusing on developing for the platforms with the highest market share. However, creating cross-platform applications involves a series of key challenges in their development, as we will see next.

Key Challenges When Moving from Platform-Specific Development

Developing and maintaining applications for each platform may not be as easy as it sounds.

First, it is almost certain that the programming languages across different platforms are not the same. While Android typically uses Java or Kotlin, in the Apple ecosystem, Swift or Objective-C is used. This implies having several teams of developers trained to achieve the same functionality on each platform but using different frameworks, which is often costly.

Swift code example

import UIKit

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let button = UIButton(type: .system)
        button.setTitle("Press", for: .normal)
        button.frame = CGRect(x: 100, y: 100, width: 120, height: 40)
        
        // Event handler
        button.addTarget(self, action: #selector(buttonTapped), for: .touchUpInside)
        
        view.addSubview(button)
    }
    
    @objc func buttonTapped() {
        print("Button pressed in Swift!")
    }
}

Kotlin code example

import android.os.Bundle
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val button = Button(this).apply {
            text = "Press"
            setOnClickListener {
                Toast.makeText(this@MainActivity, "Button pressed in Kotlin!", Toast.LENGTH_SHORT).show()
            }
        }

        setContentView(button)
    }
}

Another point to consider is that development tools are platform-specific, which means having multiple devices and IDEs configured differently to maintain native applications.

Likewise, it is very likely that there will be inconsistencies in the visual design of the applications, as each operating system has default controls and visual guidelines that can alter the appearance of the application, which can confuse users transitioning between devices.

Example of a Figma-based comparison between iOS and Android design versions

Fortunately, there are frameworks that can greatly assist us in building cross-platform applications in less time, allowing deployment using the same codebase across multiple operating systems, as we will see next.

.NET MAUI: A Robust Choice for Cross-Platform Development

When analyzing the challenges of creating natively for multiple platforms, Microsoft has developed a framework called .NET MAUI, which allows us to create a code base that we can reuse to build apps primarily for Android, Windows, and iOS.

Screenshot from the official .NET MAUI website

Visual Studio offers a template for working exclusively with .NET MAUI composed of a single project divided into folders, allowing for easy project management. It is possible to create other related projects to separate the different layers of the application and even reuse them with other types of .NET projects.

In general, .NET MAUI projects feature a graphical interface layer comprised of controls, pages, navigation elements and adaptive layouts that enable the creation of applications overall. The definition of the graphical interface is usually done through pages with XAML code (although it is possible to create them using pure C# code), which allows defining the UI once to maintain a similar appearance across all devices and platforms.

XAML code example

<Grid BackgroundColor="LightGray">
    <Button Text="Click here"
            HorizontalOptions="Center"
            VerticalOptions="Center"
            WidthRequest="120"
            HeightRequest="40"
            BackgroundColor="LightBlue"
            TextColor="Black"
            FontAttributes="Bold"/>
</Grid>

To provide interaction in the application, it is possible to create a business logic base using C# code, which, like the graphical interface, is written only once. In .NET MAUI, it is common to use the MVVM pattern, which allows separating models, views and view models to avoid mixing UI elements with business logic.

C# code example

using CommunityToolkit.Mvvm.ComponentModel;
using CommunityToolkit.Mvvm.Input;

// The MVVM Toolkit source generators produce the Message property and OnButtonClickedCommand.
public partial class MainViewModel : ObservableObject
{
    [ObservableProperty]
    private string message = "Press the button";

    [RelayCommand]
    private void OnButtonClicked()
    {
        Message = "The button was clicked!";
    }
}

It is worth noting that the .NET MAUI ecosystem has grown significantly in recent years, translating into a series of NuGet packages that allow you to do anything you can imagine, from creating local databases, integrating Lottie animations, or enhancing your graphical interfaces through modern .NET MAUI controls like those from the Progress Telerik library.

Combining all the advantages mentioned above, it is possible to create applications with native-like performance. The team has focused its efforts in recent versions of .NET MAUI on resolving hundreds of platform performance issues, giving us today a robust, multi-platform ecosystem for building applications for the major operating systems.

Delivering Native Experiences with Less Complexity Through Prebuilt Components

While .NET MAUI provides controls for creating applications in general, there will undoubtedly come a time when you need to create complex functionalities with more sophisticated and advanced components to achieve tasks that native controls do not offer. For example, you may need a .NET MAUI Scheduler control with advanced functionality to schedule by day, month, etc., a .NET MAUI PDF viewer that allows users to make annotations or even an image editor integrated into your application.

Sample of a complex cross-platform app

It is possible to create these components from scratch; however, it almost certainly would take some time because, remember, you need to make these components cross-platform, which sometimes involves writing native code for each platform to make them work properly.

Additionally, it is always necessary to perform multiple performance tests to verify that poorly optimized components do not slow down or hinder the application. You should also not forget to test on various devices with different operating system versions to check that they behave optimally on all.

If you want to avoid the above, you can choose to use prebuilt components that allow you to focus on implementing the application requirements instead of worrying about creating components that you might never reuse. Among the benefits of these components are:

  • Included support: You can create support tickets to ask specific questions about a control and even suggest new features.
  • Extensive control catalog: Companies typically have a wide catalog of controls for different tasks.
  • Customization options: Control APIs have customization options to change styles and colors.
  • Constant updates: There are constant update cycles that fix bugs, add features and improve performance.

Companies like Progress have been pioneers in developing controls for .NET MAUI since its inception, resulting in a robust and optimized catalog of controls for creating interfaces of any type. Their catalog includes controls for almost any popular framework today, such as Angular, React, ASP.NET Core, .NET MAUI and WPF, among many others.

Telerik Components for Building Cross-Platform Apps with .NET MAUI

As part of its offering for .NET MAUI, the company provides more than 60 controls that allow you to create any application you can imagine. Below are some examples of controls that can help you in your day-to-day tasks.

Prebuilt Component Examples to Accelerate Migration to .NET MAUI

Let’s discuss some controls that can help you make your migrations from native and complex views to .NET MAUI pages without a headache.

First, in the .NET MAUI framework, the most common control for handling collections is the CollectionView; however, it lacks advanced functions such as filtering, sorting, pagination, tree view mode, etc. In these cases, the Telerik UI library has controls that allow you to create forms from data models, grids for data, controls to represent information trees, etc.

Live example of the DataGrid control highlighting advanced capabilities like Search-as-You-Type

Similarly, at some point, you may want to add controls like a ComboBox, autocomplete functionality, masks in text boxes, custom pickers or a Range Slider to your applications. All these functionalities and more can be found in the form of editors as part of the Telerik suite for .NET MAUI.

Live example of the Image Editor control showcasing powerful editing features included by default

In terms of navigation, the default pages in .NET MAUI may not be sufficient for you, whether you need to implement an accordion with information, create panels based on dock positions, allow users to input signatures or distribute your elements in a layout with a wrap mode. Once again, Telerik has several controls to achieve these goals.

Example of the SignaturePad control, enabling users to easily capture and personalize their signatures

Finally, within their catalog, they also have gauge controls; controls for displaying multiple formats of charts, calendars and schedulers; interactive controls like chat views, popups and progress bars; and many more that will make your applications look phenomenal and user-friendly.

Design of an application leveraging a wide range of Telerik controls for .NET MAUI

Conclusion

Throughout this article, we have analyzed why it is important for companies and individuals to provide their customers with applications in the main app stores, as well as the challenges this entails. Likewise, we have examined how prebuilt controls such as those from the Telerik library can help you make successful migrations faster using .NET MAUI.

Try Telerik UI for .NET MAUI


Pipe dreams to pipeline realities: an Aspire Pipelines story


Liked this blog post? It was originally posted on Safia Abdalla’s blog, https://blog.safia.rocks/, check out more content there!

OK, sit down folks, because this one is gonna be a long one. This is the all-encompassing blog post about the thing I’ve been working on for the past few weeks: Aspire Pipelines. I’ve been writing about specific aspects of this feature over the past few weeks, including my post from a while back about “modes” in Aspire and last week’s post about the CLI redesign for Aspire. Today, I want to talk about how the feature evolved from its initial inception, take a look at specific implementation details and how they changed, cover the feature in its current state, and talk a little bit about what’s next for this area.

Let’s get started by establishing some context. If you’re not familiar with it, Aspire is a framework for modeling cloud-based applications in code, running them locally, and deploying them. The bulk of this blog post is about the last point: the messy work of deploying applications to the cloud. When you deploy an application to the cloud, there’s a ton of stuff that needs to happen: building container images, provisioning databases, setting up networking, assigning permissions, scanning assets for secrets, applying data migrations as needed, and on and on and on. Some of these tasks can happen at the same time on the same machine, some can’t. Some of it is easy to automate and some of it is not. Managing and modeling all that orchestration is messy, and that’s the problem we’ve been trying to tackle.

To not overwhelm ourselves (or let’s be real, overwhelm me), let’s hone in on a very specific and simple scenario. Let’s say you’re deploying a typical web app: a frontend, an API service, a database, and some blob storage. Let’s simplify things further by saying that you are deploying it for the first time: there are no databases already provisioned with data, so we don’t need to worry about migrations or any of that. A simple case. I have an app on my computer, I need to deploy it to the web. What do? Let’s talk about how we tackled this problem by discussing the first inception of deployment support in Aspire, which came about in the 9.4 release.

The Beginning: Aspire 9.4 and Basic Callbacks

If you’re not familiar with Aspire, let’s cover some of the key technical aspects that will help clarify some of the groundwork that was laid out in this release. Aspire lets you model your application services in a code-based application host. This file defines the compute resources and infrastructure resources that exist in your application. For the typical web app we are working with, the AppHost might be implemented as follows.

var builder = DistributedApplication.CreateBuilder(args);

builder.AddAzureContainerAppsEnvironment("env");

var storage = builder.AddAzureStorage("myapp-storage");
var database = builder.AddPostgres("myapp-db");

var api = builder.AddCSharpApp("api", "./api.cs")
    .WithHttpEndpoint()
    .WithReference(storage)
    .WithReference(database);

builder.AddViteApp("frontend", "./frontend")
    .WithReference(api);

builder.Build().Run();

Each of the components described in the AppHost is considered a resource in the Aspire resource model (think: all the databases, services, and infrastructure your app needs). The AppHost could interact with a number of entities. One of those entities is the Aspire CLI which communicates with the AppHost over RPC to do things like execute the AppHost and start up local orchestration or query specific APIs within the AppHost to generate assets or get a sense of application state.

Hopefully, that is enough context to describe the key components that were introduced with relation to deployment in 9.4. Those core components included:

  • An aspire deploy command in the Aspire CLI that served as the entrypoint for the core deployment functionality
  • A DeployingCallbackAnnotation that could be registered in the Aspire resource model. This annotation contained…
  • A DeployingCallback which encompassed arbitrary behavior that should be triggered during deploy

Here’s what the AppHost above would look like with a simple DeployingCallbackAnnotation that defined the behavior of the AzureContainerAppsEnvironment that was declared.

var builder = DistributedApplication.CreateBuilder(args);

builder.AddAzureContainerAppsEnvironment("env")
    .WithAnnotation(new DeployingCallbackAnnotation(async (context) =>
    {
        // Some code to deploy to ACA here
    }));

var storage = builder.AddAzureStorage("myapp-storage");
var database = builder.AddPostgres("myapp-db");

var api = builder.AddCSharpApp("api", "./api.cs")
    .WithHttpEndpoint()
    .WithReference(storage)
    .WithReference(database);

builder.AddViteApp("frontend", "./frontend")
    .WithReference(api);

builder.Build().Run();

The initial design used callbacks to maintain maximal flexibility. We didn’t want to lock users into a specific deployment target or opinionated workflow. With callbacks, you could write code to deploy to AWS, push to a custom container registry, run database migrations, or really anything else. The callback was just a function: you had complete control over what happened inside it. While other aspects of the design have evolved, the notion of encapsulating the actual core logic in a callback hasn’t changed.

The annotation pattern itself is worth calling out because it shows up throughout Aspire. Resources in the application model are just state, so behaviors are attached via annotations rather than by modifying the resource directly. Annotations essentially allow us to decorate resources with extra metadata about how they should behave in different scenarios. Annotations are used to describe what endpoints a resource exposes or what environment variables should be injected into the process associated with a resource.

As mentioned earlier, the CLI serves as a thin client that coordinates with the Aspire application host, which contains the actual state associated with the running application. Here’s how the pieces fit together when you run aspire deploy:

Aspire deploy workflow diagram

In addition to these core APIs, Aspire 9.4 also brought about the introduction of the PublishingActivityReporter: an API surface area that allowed the AppHost (basically, the code where you define your application and its dependencies) to communicate to a client (in this case, the CLI) the progress of various activities that were ongoing in the deployment. I blogged more about this particular API and the way it evolved in last week’s blog post. The PublishingActivityReporter also provided APIs that allowed the AppHost to send requests to prompt the user for values in the CLI. This functionality built on the same central APIs that were used to support prompting for values in the web-based Aspire dashboard.

All these building blocks allowed the user to register code in the AppHost that would define what the deployment behavior would be. When the aspire deploy command was called, it would query the AppHost and initiate the invocation of all DeployingCallbackAnnotations that existed in the application model. Inside this code, you could prompt the user for values as the deployment was ongoing and present notifications about the progress of the deployment to the users.

So, at this point in time, we had:

  • A command in the CLI client that served as the entry point for executing a deployment
  • Infrastructure in the AppHost that would discover user-specified deployment code and execute it when the entrypoint was invoked
  • A way to notify the client of progress on the deployment

These core components were useful enough to implement some interesting things. A couple of weeks back I posted about using these initial deployment APIs to deploy a static site with Azure Front Door.

Here’s what was missing at this point though: there was no orchestration of callback execution order, no dependency management between callbacks, and no built-in error handling or retry logic. If you had multiple callbacks, they ran sequentially in the order they were registered. If one failed, there was no rollback or recovery. If you wanted to deploy to Azure at this point, you’d have to write all the provisioning logic yourself, coordinate the order of operations manually, and handle all the error cases. That’s a lot of work for what should be a common scenario. Don’t worry though, we fixed that in the next release and had the opportunity to put the APIs that would theoretically work in practice in a big way.

Getting Real: Aspire 9.5 and Azure Deployment

That big way came about in Aspire 9.5, when we introduced built-in support for deploying to Azure with the aspire deploy command. Instead of users having to write all the provisioning logic themselves like they would have in 9.4, we now provided it out of the box. The implementation was modeled under a single code entry-point that would then execute the process of completing the deployment in four distinct steps:

  1. It would acquire the configuration details associated with the Azure subscription and resource group that the user intended to deploy to. If it didn’t find any configuration details saved from the typical configuration sources (user secrets, environment variables, or configuration files), it would prompt the user for them using the PublishingActivityReporter APIs from 9.4. This is where that interactive communication channel between the AppHost and CLI really paid off. Users could get prompted during the deployment for any missing values without having to restart the whole process and update configuration sources with their values.

  2. It would provision the infrastructure resources in the application model. Infrastructure resources consist of things like databases, storage accounts, and container registries that are distinct from compute resources. This is where we would map Aspire’s resource abstractions (like AddAzureRedis() or AddCosmosDB()) to actual Azure services. We also provision managed identities and role assignments here to handle authentication between services.

  3. It would build container images for any compute resources defined in the app. If you’re running Docker or Podman locally, we use your container runtime to do the builds. If you’re deploying a .NET project, we use .NET’s built-in container build support (which doesn’t require Docker at all for builds). Once built, we tag and push these images to Azure Container Registry (ACR) that was provisioned in the previous step.

  4. It would deploy the compute resources using the images built in step 3 and the infrastructure allocated in step 2. At this point, we were primarily targeting Azure Container Apps (ACA) as the compute platform, though the design allowed for other targets in the future.

And voilà! You have a deployed application with all its infrastructure dependencies. It’s important to call out that while I used the terms “steps” above, none of these behaviors were modeled that way in the application code. The code for this essentially looked like:

public async Task DeployAsync(DeployingContext context)
{
  await GetProvisioningOptions();
  await ProvisionInfrastructureResources();
  await BuildComputeImages();
  await DeployComputeImages();
}

In addition to there being no distinct concept of steps, there’s also no clear concept of dependencies. The fact that things are called in the order that they are is a result of the invocation order in the application code. One of the key things we played around with at this point in time was the level of granularity we utilized as far as what deployments we sent over to Azure for a given application. Granularity of deployments is a key concept that we’ll keep referring to in this blog, but let’s hone in on this particular question.

These details are specific to Azure but they apply to other deployment engines as well. When we ask Azure to provision resources in the cloud, we can send a request to provision all of the resources that are registered in the application model at once. This leaves it up to Azure to identify the best granularity and concurrency for the deployment on its end. The big pro here is that Azure can potentially optimize the deployment internally and cache state across the entire operation. The downside, however, is that when things fail, our client gets a top-level failure that encapsulates all the resources instead of knowing exactly which specific resource caused the problem.

We landed in a model where we request Azure to provision each resource individually: one call for CosmosDB, another for Azure Storage, another for the Azure Container Registry. This gives us the ability to provide more granular error reporting on the client side (you know exactly that CosmosDB provisioning failed, not just “something went wrong”), but we lose the ability to leverage Azure’s state caching optimizations. It’s a tradeoff between visibility and performance that we made in the initial design. State caching is an important thing to call out here. We don’t do any state caching by default, but let’s put a pin on this and we’ll revisit it later.

In the code above, you’ll observe that each chunky step in the execution is its own function and we would await the first before continuing with the second. That means if provisioning CosmosDB takes 3 minutes and provisioning Storage takes 2 minutes, we’re waiting a full 5 minutes total, even though they could theoretically execute in parallel. Any dependencies that just needed CosmosDB would have to wait even though they could start much sooner. Same with building container images: if you have multiple services, we build them one at a time. This sequential execution was a significant bottleneck and it was visible to users. The deployment experience just wasn’t as snappy as it should be.
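To make the bottleneck concrete, here is a plain C# sketch (not the actual Aspire code; the provisioning method names are placeholders) contrasting the sequential awaits with what independent work could look like if it overlapped:

// Sequential: total time is the sum of every step, even when the steps are independent.
await ProvisionCosmosDbAsync();
await ProvisionStorageAsync();

// Concurrent: independent provisioning overlaps, so total time is roughly the longest step.
await Task.WhenAll(
    ProvisionCosmosDbAsync(),
    ProvisionStorageAsync());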

We put a pin on state caching for provisioned resources, but this gap also applies to parameters that are required during the deployment process. It was great that you were prompted for missing values every time you tried to deploy, but we never saved those values anywhere. That means every time you deployed you would be prompted for values. Needless to say, I got some complaints thrown my way for this.

The main progress made in this release was identifying the quirks involved that were specific to deploying to Azure and identifying where the gaps were. In the next iteration of Aspire (versioned as 13 for reasons I won’t get into) many of the problems we’ve been discussing around granularity, state persistence, and concurrency become resolved.

I should clarify that the choice to target Azure isn’t because Aspire only supports Azure, it’s just a side-effect of working at the Blue Sky company.

The Pipeline Emerges: Aspire 13

One of the biggest problems we sought to address in the next release is the sequential nature of the execution model in the deployment framework. This changes with the introduction of a DistributedApplicationPipeline in Aspire 13. The pipeline consists of a set of PipelineSteps which define concrete behavior associated with a particular named action. Now, instead of defining each of the pieces of functionality above as functions in code, we can describe them as steps in the pipeline.

public IDistributedApplicationPipeline DeployAsync(IDistributedApplicationPipeline pipeline)
{
    pipeline.AddStep("get-provisioning-options", async (context) => await GetProvisioningOptions());
    pipeline.AddStep("provision-infra", async (context) => await ProvisionInfrastructureResources());
    pipeline.AddStep("build-images", async (context) => await BuildComputeImages());
    pipeline.AddStep("deploy-compute", async (context) => await DeployComputeImages());
    return pipeline;
}

When the Aspire application boots up in pipeline execution mode (side note: “this isn’t a real mode”, for more on that, read this blog post), it resolves all of the steps in the pipeline and their dependent steps then executes them. In addition to being registered directly on the pipeline, steps can also be registered on resources within the application model using the annotations we referenced above. That means a resource modeled within Aspire can describe what it is and how it behaves in the pipeline. The resolution step discovers steps that are defined both directly on the pipeline and those that exist on resources in the application model. The pipeline that is executed is the joined set of those two.

One of the things that evolved with the implementation of pipeline is the level of granularity of steps. Earlier we mentioned granularity when it comes to the provisioning of infrastructure resources and covered the pros/cons specific to the deployment of these assets. However, we can take granularity further for other aspects of the deployment pipeline: like the building of images and deploying of compute resources. This allows us to model maximal concurrency across the pipeline. If you read my last blog post on the UI work involved to visualize this concurrency in the CLI, this behavior is essentially the “backend” of that implementation. Here’s a visual for what a complete and concurrent pipeline to support deploying our example app with a frontend, API service, a CosmosDB database, and Azure Storage might look like.

Complex deployment pipeline diagram

Yeah, that’s a lot. The key things to note are in the colors. All the provisioning steps can run at the same time. The two build steps kick off in parallel as well. As soon as we are done building our images and as soon as the container registry is provisioned, we can push the images that we built to the remote registry. We’re not sitting around waiting for any unnecessary steps to complete.

This is a good segue into the other aspect of this feature: the ability to wire up dependencies between steps in the pipeline. In the case above, the top-level “provision-resources” has dependencies on the provisioning steps associated with each individual resource. Dependencies can be attached across any step in the pipeline. From the diagram, you can also see that there’s levels to this. (Sorry, I couldn’t resist.) The levels map closely to the code sample that we looked at earlier. One level captures the set of steps related to building compute images, another for pushing container images, and a third for deploying both compute resources and infrastructure resources.

The granularity of the pipeline allows us to create relationships between individual components within the resource model. For example, as soon as the container image for the API backend is built, we can push the image to the remote registry. That step has a dependency on the compute environment being provisioned (it hosts the image registry), but as soon as that is done we can deploy the compute resource. Links across granular steps exist across the levels (or meta-steps as they are modeled in the API) that they are parented to.

One of the other things that changed in Aspire 13 is the introduction of deployment state caching in the API. I shared more detail about this in one of my previous blog posts. This change allowed us to save the values of parameters that we prompted for and the provisioned cloud resources in a file on disk. Multiple calls to aspire deploy can reuse the state that is saved. Since we’ve mentioned the aspire deploy command again, let’s talk about how the CLI for this experience evolved in this release. As part of leaning in towards this pipeline thing, Aspire 13 also includes support for a new aspire do command that allows you to execute arbitrary steps that are modeled in your AppHost. It works pretty similarly to the aspire deploy command that’s modeled in the diagram above, except instead of finding and discovering DeployingCallback annotations the CLI finds and executes PipelineSteps in the implementation.

Fin

If you’ve got experience with this area, you might’ve realized that we’ve essentially exposed a way to model build and deployment pipelines within Aspire. And that’s exactly it. If you’ve used GitHub Actions or Jenkins, you’ve seen pipelines with steps and dependencies. There are also many pipelines-in-code implementations in various ecosystems in the form of build automation tools (Cake in C#, Rake in Ruby, doit in Python). Same idea, but this one runs inside your application code and knows about your app’s resources in real-time. The crux of the magic is the combination of data in the DistributedApplicationModel and execution in the DistributedApplicationPipeline joined together to allow you to declare code that is not only debuggable, but aware in real-time of the state of your application. And that’s the magic trick here: it knows what your app looks like and can make smart decisions based on that.

As you might imagine, this is just the beginning of this new Aspire Pipelines story. There’s a lot of loose threads to tie up and ideas to pursue:

  • Earlier in this post, I mentioned concepts around resiliency and retries for deployment steps. This is not yet in place but we’ve laid enough infrastructure to bring this in.
  • The current iteration of the deployment state management APIs is a little rough and is definitely due for a revamp in future releases.
  • There’s a future for enhancing the pipeline steps API to make it easier to model certain types of steps: for example, steps that involve calling into an external process or steps that involve interacting with a container runtime.

There’s more to add to this list as more folks build on top of these primitives. I’m excited to see how this space evolves once this feature actually launches in Aspire 13 (check out the Aspire site for more details). Until next time, happy deploying!

The post Pipe dreams to pipeline realities: an Aspire Pipelines story appeared first on Aspire Blog.


Build a Real-Time KPI Dashboard in .NET MAUI Using Charts and MVVM



TL;DR: Create a real-time KPI dashboard in .NET MAUI with charts and MVVM architecture. This guide walks you through building a responsive XAML layout, binding dynamic data using observable collections, and implementing live updates for metrics such as revenue trends, leads by channel, and regional performance. Ideal for cross-platform apps that require fast, interactive data visualization.

In today’s fast-paced business world, decision-makers need instant access to accurate and actionable insights. Unfortunately, traditional static dashboards often fail to keep pace, leaving teams with outdated information and hindering critical decisions. The solution is a real-time KPI dashboard, one that continuously updates to reflect changes in revenue, sales leads, and regional performance. Powered by dynamic data streaming and interactive visualizations, this approach ensures that insights are always current and relevant.

In this blog, we’ll learn how to build a sleek, real-time KPI dashboard using the Syncfusion® .NET MAUI Toolkit Charts and the MVVM pattern. 

Core features of the KPI dashboard

To provide a clear and actionable view of business performance, the dashboard includes several key components:

  • Revenue trend: A Line chart that illustrates revenue growth over time, helping you track performance and identify patterns.
  • Units by channel: A Column chart that breaks down unit acquisition across different channels, offering insight into which sources contribute most to overall sales.
  • Revenue by region: A Doughnut chart that highlights regional contributions to total revenue, making it easy to compare market performance.
  • Dynamic insights: Quick metrics such as Momentum and Top Performers are displayed to deliver actionable insights at a glance.

Step 1: Dashboard layout design

To begin, we’ll create a clean and responsive KPI workspace using a two-column, four-row grid in .NET MAUI. This structure organizes dashboard components logically while maintaining flexibility across devices.

Refer to the following code example.

<ContentPage …>

    <Grid RowDefinitions="Auto,Auto,Auto,Auto" ColumnDefinitions="3*,2*" RowSpacing="12" ColumnSpacing="12">
        <!-- Header -->
        <Border Grid.Row="0" Grid.ColumnSpan="2" Style="{StaticResource Card}">
            <Label Text="Real-Time KPI Dashboard" Style="{StaticResource CardTitle}"/>
        </Border>

        <!-- Insight cards (3 columns) -->
        <Grid Grid.Row="1" Grid.ColumnSpan="2" ColumnDefinitions="*,*,*">
            <!-- 3 compact Border cards bound to Insights[0..2] -->
        </Grid>

        <!-- Charts -->
        <Border Grid.Row="2" Grid.Column="0" Style="{StaticResource Card}">
            <!-- Revenue Trend: Line Series -->
        </Border>

        <Border Grid.Row="3" Grid.Column="0" Style="{StaticResource Card}">
            <!-- Leads by Channel: Column Series -->
        </Border>

        <Border Grid.Row="2" Grid.RowSpan="2" Grid.Column="1" Style="{StaticResource Card}">
            <!-- Revenue by Region: Doughnut Series -->
        </Border>
    </Grid>
</ContentPage>
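
The layout references Card and CardTitle styles that aren’t shown here. As a rough sketch, they could be defined as page-level resources along these lines; the colors, corner radius, and padding below are placeholder values, not the demo’s actual styling.

<!-- Hypothetical Card and CardTitle resources referenced by the layout above. -->
<ContentPage.Resources>
    <Style x:Key="Card" TargetType="Border">
        <Setter Property="Background" Value="#1E293B" />
        <Setter Property="StrokeThickness" Value="0" />
        <Setter Property="StrokeShape" Value="RoundRectangle 12" />
        <Setter Property="Padding" Value="16" />
    </Style>
    <Style x:Key="CardTitle" TargetType="Label">
        <Setter Property="FontSize" Value="18" />
        <Setter Property="FontAttributes" Value="Bold" />
        <Setter Property="TextColor" Value="White" />
    </Style>
</ContentPage.Resources>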

Step 2: Implement MVVM architecture

Moving forward, we’ll set up the MVVM (Model-View-ViewModel) architecture, which is essential for building scalable, maintainable, and testable .NET MAUI apps, especially dynamic dashboards. MVVM separates the UI (View) from the business logic (ViewModel) and data (Model), ensuring clean architecture and easier updates.

Creating model classes for chart data and insights

Model classes define the structure of your data, including time points for revenue trends, category totals for leads, insight items for quick metrics, and raw sales records for aggregation.

Refer to the following code example.

// Time series point (Line Series)
public class TimePoint
{
    public DateTime Time { get; set; }
    public double Value { get; set; }
    ...
}

// Category point (Column/Doughnut)
public class CategoryPoint
{
    public string Category { get; set; } = "";
    public double Value { get; set; }
    ...
}

// Insight card item
public class InsightItem
{
    public string Name { get; set; } = "";
    public string Value { get; set; } = "";
    public string Severity { get; set; } = "Info";
    public string Icon { get; set; } = "";
    ...
}

// Raw CSV record (Kaggle: Ascension merch supply chain)
public class SalesRecord
{
    public DateTime Date { get; set; }
    public string Channel { get; set; } = "";
    public string Region { get; set; } = "";
    public double UnitsSold { get; set; }
    public double Revenue { get; set; }
}
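
For context, mapping one CSV row to a SalesRecord might look roughly like this; the column order (Date, Channel, Region, UnitsSold, Revenue) is an assumption about the dataset, so adjust it to match the actual file.

// Hypothetical helper that parses a single CSV line into a SalesRecord.
private static SalesRecord? ParseLine(string line)
{
    var parts = line.Split(',');
    if (parts.Length < 5 ||
        !DateTime.TryParse(parts[0], out var date) ||
        !double.TryParse(parts[3], out var units) ||
        !double.TryParse(parts[4], out var revenue))
    {
        return null; // skip the header and malformed rows
    }

    return new SalesRecord
    {
        Date = date,
        Channel = parts[1].Trim(),
        Region = parts[2].Trim(),
        UnitsSold = units,
        Revenue = revenue
    };
}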

Step 3: Build the ViewModel for real-time updates

Next, we’ll create the ViewModel, which acts as the data bridge between the UI and the models. It reads the CSV file once to seed the dashboard, streams one record at a time (each representing a day of data) to simulate live updates, aggregates revenue by day for a smooth line chart, exposes ObservableCollections for dynamic chart rendering, and drives real-time updates across all KPI components.

Refer to the following code example.

private async Task LoadCsvDataFromResourcesAsync(string fileNameInRaw)
{
    // Open CSV from app resources, validate headers, parse records, and sort by date.
}

private async Task StartRealtimeLoopAsync()
{
    // Run a timer loop; on each tick, update the UI until the data ends or cancellation is requested.
}

private void UpdateRealtime()
{
    // Add the next record to the rolling window, drop out-of-range items, and trigger recompute.
}
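
To make the ViewModel’s shape concrete, here is a minimal sketch. The collection properties and method names match the bindings and stubs shown above; the timer interval, the 30-point rolling window, and the seed file name are assumptions made purely for illustration.

using System.Collections.ObjectModel;
using Microsoft.Maui.ApplicationModel;

// Minimal sketch of the dashboard ViewModel; a real implementation would also
// implement INotifyPropertyChanged for scalar properties and richer aggregation.
public class DashboardViewModel
{
    // Collections bound to the charts and insight cards in the XAML.
    public ObservableCollection<TimePoint> RevenueTrend { get; } = new();
    public ObservableCollection<CategoryPoint> LeadsByChannel { get; } = new();
    public ObservableCollection<CategoryPoint> RevenueByRegion { get; } = new();
    public ObservableCollection<InsightItem> Insights { get; } = new();

    private readonly List<SalesRecord> _records = new();
    private int _nextIndex;
    private CancellationTokenSource? _cts;

    public async Task StartAsync()
    {
        await LoadCsvDataFromResourcesAsync("sales_data.csv"); // hypothetical file name
        _cts = new CancellationTokenSource();
        _ = StartRealtimeLoopAsync();
    }

    public void Stop() => _cts?.Cancel();

    private Task LoadCsvDataFromResourcesAsync(string fileNameInRaw)
    {
        // See the stub above: open the CSV from app resources, parse and sort
        // the records by date, then copy them into _records.
        return Task.CompletedTask;
    }

    private async Task StartRealtimeLoopAsync()
    {
        // One tick per second simulates one new day of data arriving.
        using var timer = new PeriodicTimer(TimeSpan.FromSeconds(1));
        try
        {
            while (await timer.WaitForNextTickAsync(_cts!.Token) && _nextIndex < _records.Count)
            {
                // Chart-bound collections must be updated on the UI thread.
                MainThread.BeginInvokeOnMainThread(UpdateRealtime);
            }
        }
        catch (OperationCanceledException)
        {
            // Expected when the dashboard is closed or streaming is stopped.
        }
    }

    private void UpdateRealtime()
    {
        var record = _records[_nextIndex++];

        // Append the next point and keep a rolling 30-point window (assumption).
        RevenueTrend.Add(new TimePoint { Time = record.Date, Value = record.Revenue });
        if (RevenueTrend.Count > 30)
            RevenueTrend.RemoveAt(0);

        // Recompute LeadsByChannel, RevenueByRegion, and Insights from the records
        // streamed so far (a LINQ GroupBy over _records is one straightforward option).
    }
}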

Step 4: Create modular views with user controls

Finally, we’ll bring the dashboard to life by adding interactive charts that make your KPIs clear, actionable, and visually compelling. These visual components transform raw data into insights that are easy to interpret at a glance.

Revenue trend

Start by introducing a Line Chart to illustrate revenue changes over time. This visualization provides a smooth, continuous view of performance trends, allowing users to identify patterns and anomalies quickly. To keep the interface clean, enable tooltips so users can check exact values on demand rather than cluttering the chart with permanent labels.

Refer to the following code example.

<Chart:SfCartesianChart>
    <Chart:SfCartesianChart.XAxes>
        <Chart:DateTimeAxis />
    </Chart:SfCartesianChart.XAxes>

    <Chart:SfCartesianChart.YAxes>
        <Chart:NumericalAxis />
    </Chart:SfCartesianChart.YAxes>

    <Chart:LineSeries ItemsSource="{Binding RevenueTrend}"
                       XBindingPath="Time"
                       YBindingPath="Value"
                       StrokeWidth="2"
                       Fill="#0EA5E9"
                       ShowMarkers="False"
                       EnableTooltip="True" />
</Chart:SfCartesianChart>
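
Note that the Chart: prefix in these snippets assumes an XML namespace mapping to the Syncfusion Toolkit charts assembly on the page root, roughly like the line below; the exact clr-namespace and assembly names may differ depending on your Toolkit version.

xmlns:Chart="clr-namespace:Syncfusion.Maui.Toolkit.Charts;assembly=Syncfusion.Maui.Toolkit"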

Refer to the following image.

Visualizing revenue trend using .NET MAUI Toolkit Line Chart

Leads by channel

To compare performance across different channels, incorporate a Column Chart. This chart emphasizes category-based metrics, making it easy to spot strong and weak performers. Use clear data labels and custom colors to enhance readability and visual appeal.

Refer to the following code example.

<Chart:SfCartesianChart>
    <Chart:SfCartesianChart.XAxes>
        <Chart:CategoryAxis />
    </Chart:SfCartesianChart.XAxes>

    <Chart:SfCartesianChart.YAxes>
        <Chart:NumericalAxis />
    </Chart:SfCartesianChart.YAxes>

    <Chart:ColumnSeries ItemsSource="{Binding LeadsByChannel}"
                         XBindingPath="Category"
                         YBindingPath="Value"
                         CornerRadius="6"
                         ShowDataLabels="True"
                         EnableTooltip="True"
                         PaletteBrushes="{Binding LeadsBrushes}" />
</Chart:SfCartesianChart>
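
The PaletteBrushes property above is bound to a LeadsBrushes collection on the ViewModel (the doughnut chart below binds CustomBrushes the same way). A plausible sketch is simply a list of solid-color brushes; the specific colors here are placeholders rather than the demo’s palette.

// Hypothetical palette collections exposed by the ViewModel.
public IList<Brush> LeadsBrushes { get; } = new List<Brush>
{
    new SolidColorBrush(Color.FromArgb("#0EA5E9")),
    new SolidColorBrush(Color.FromArgb("#22C55E")),
    new SolidColorBrush(Color.FromArgb("#F59E0B")),
    new SolidColorBrush(Color.FromArgb("#EF4444"))
};

public IList<Brush> CustomBrushes { get; } = new List<Brush>
{
    new SolidColorBrush(Color.FromArgb("#6366F1")),
    new SolidColorBrush(Color.FromArgb("#14B8A6")),
    new SolidColorBrush(Color.FromArgb("#F97316"))
};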

Refer to the following image.

Visualizing leads by channel using .NET MAUI Toolkit Column Chart

Revenue share by region

Next, visualize regional contributions with a Doughnut Chart. This chart highlights proportional data, making it ideal for understanding market distribution. Include data labels and a legend for clarity, and position labels outside the chart for a polished look.

Refer to the following code example.

<Chart:SfCircularChart>
    <Chart:DoughnutSeries ItemsSource="{Binding RevenueByRegion}"
                           XBindingPath="Category"
                           YBindingPath="Value"
                           ShowDataLabels="True"
                           EnableTooltip="True"
                           PaletteBrushes="{Binding CustomBrushes}">
        <Chart:DoughnutSeries.DataLabelSettings>
            <Chart:CircularDataLabelSettings Position="Outside" />
        </Chart:DoughnutSeries.DataLabelSettings>
    </Chart:DoughnutSeries>

    <Chart:SfCircularChart.Legend>
        <Chart:ChartLegend />
    </Chart:SfCircularChart.Legend>
</Chart:SfCircularChart>
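
Behind this binding, RevenueByRegion can be recomputed from the records streamed so far whenever a new one arrives. A minimal sketch using LINQ follows; the grouping, rounding, and ordering choices are assumptions, not the demo’s exact logic.

// Hypothetical recompute step: total revenue per region, refreshed in place.
private void RecomputeRevenueByRegion(IEnumerable<SalesRecord> seenSoFar)
{
    var totals = seenSoFar
        .GroupBy(r => r.Region)
        .Select(g => new CategoryPoint
        {
            Category = g.Key,
            Value = Math.Round(g.Sum(r => r.Revenue), 2)
        })
        .OrderByDescending(p => p.Value);

    RevenueByRegion.Clear();
    foreach (var point in totals)
        RevenueByRegion.Add(point);
}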

Refer to the following image.

Visualizing revenue share by region using .NET MAUI Toolkit Doughnut Chart

Insight cards

Finally, complement your charts with insight cards for quick-glance metrics. These cards reveal critical signals, including momentum, top-performing regions, and leading channels. Use color coding to indicate success or warnings, and concise text for instant comprehension.

Refer to the following code example.

<Border Style="{StaticResource InsightCard}">
    <Grid ColumnDefinitions="Auto,*" ColumnSpacing="16">
        <Path Data="M17…Z" Fill="Black" WidthRequest="40" HeightRequest="40" />
        <VerticalStackLayout>
            <Label Text="{Binding Insights[0].Value}" FontSize="20" FontAttributes="Bold" />
            <Label Text="{Binding Insights[0].Name}" FontSize="12" TextColor="#A5B4FC" />
        </VerticalStackLayout>
    </Grid>
</Border>
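
To drive the color coding mentioned earlier, the Severity field on InsightItem can be mapped to a color through a value converter. The converter name and color choices below are illustrative assumptions, not part of the demo.

using System.Globalization;

// Hypothetical converter: maps InsightItem.Severity to an accent color,
// e.g. for the card's icon Fill or a status label's TextColor.
public class SeverityToColorConverter : IValueConverter
{
    public object? Convert(object? value, Type targetType, object? parameter, CultureInfo culture)
        => value?.ToString() switch
        {
            "Success" => Colors.Green,
            "Warning" => Colors.Orange,
            _ => Colors.SlateGray // "Info" and anything unexpected
        };

    public object? ConvertBack(object? value, Type targetType, object? parameter, CultureInfo culture)
        => throw new NotSupportedException();
}

Register the converter as a page or app resource and reference it from the relevant bindings.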

By following these steps, you’ll have the core of a dynamic, real-time KPI dashboard in .NET MAUI that uses charts and the MVVM architecture to deliver actionable insights across platforms.

Creating a real-time KPI dashboard using .NET MAUI Toolkit Charts and MVVM


GitHub reference

For more details, refer to the “Building KPI dashboards using .NET MAUI Toolkit Charts” GitHub demo.

Conclusion

Thanks for following this dashboard walkthrough! You’ve learned how to build a clean, real-time KPI dashboard using .NET MAUI Toolkit Charts and the MVVM pattern. This approach ensures synchronized updates, modular design, and maintainable architecture. Feel free to extend the ViewModel, customize the charts, and adjust the layout to suit your specific metrics and application requirements.

Ready to elevate your .NET MAUI apps? Try Syncfusion charting components today and start visualizing your KPIs like a pro. Try out the steps discussed in this blog and leave your feedback in the comments section below.

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
