
Choosing a Cross-Platform Strategy for .NET: MAUI vs Uno


Building a cross-platform app usually starts with a deceptively simple goal: one team, one codebase, multiple targets. The reality is that your framework choice shapes everything from the UI architecture, to the delivery speed, testing strategy, hiring, and how much “platform weirdness” you’ll be living with.

There are plenty of cross-platform options, including React Native, Ionic, and NativeScript, but all of these are fundamentally JavaScript-based or hybrid UI stacks. If your goal is to stay all-in on C#/.NET for the application UI and core, the two primary options to evaluate are .NET MAUI and Uno Platform.

This post is a practical guide to that choice, and if you’re building a serious product that needs broad reach, there’s a strong case that Uno should be on your shortlist early.

Start By Defining What You’re Optimizing For

Most teams are making tradeoffs whether they say it out loud or not. Are you trying to share business logic but accept platform-specific UI? Are you trying to share as much UI as possible across devices? Do you need a native look and feel, or do you need a consistent design system everywhere?

A quick way to clarify this is to write down your “non-negotiables.” For example:

  • Which platforms must ship first, and which can wait?
  • Is web a real target or not?
  • Do you need offline support?
  • Are there deep device integrations (camera, Bluetooth, push notifications)?
  • Should the UI follow each platform’s conventions, or match your design system?

Once those answers are clear, the MAUI vs Uno decision gets a little easier. Let's look at four questions that will help you decide even more clearly between these two platforms.

The Questions That Actually Decide MAUI Vs Uno

In practice, the Uno vs MAUI debate collapses into a handful of questions:

1. Does web matter as a first-class target?
If WebAssembly is part of the product story, that should heavily influence your decision toward Uno.

2. How broad does your platform reach need to be over the next 12–24 months?
Many teams start with mobile and Windows, then discover web or Linux demand later. Others know from day one that they need the wide coverage Uno provides.

3. Do you care more about native UI conventions or consistent visuals across platforms?
Both are valid, but they lead to different architecture and design decisions. If you want a more native look and behavior, MAUI can be a good fit; if you want more consistent visuals across platforms (especially including web), Uno often comes out ahead.

4. How complex is your UI and workflow surface area?
Dense forms, dashboards, heavy data entry, and virtualization make tradeoffs show up faster. This is where consistency in layout/styling, predictable rendering, and solid performance patterns matter, and where Uno often shines for “serious app UI,” while simpler UI surfaces may not justify anything beyond MAUI’s more straightforward controls approach.

If you’re scoring high on web, broad reach, and UI consistency (especially in enterprise apps), you’ll often find yourself leaning toward Uno.

When .NET MAUI Is A Great Fit

MAUI is a solid option, especially for teams who want to stay close to the mainstream Microsoft path. MAUI tends to fit well when:

  • Your scope is primarily iOS/Android (plus maybe Windows desktop), and web is not central
  • You prefer a native control approach and platform conventions
  • Your team is comfortable planning for some platform-specific UI work

Where MAUI shines is its end-to-end Microsoft story and a developer experience that feels familiar to many .NET teams. The key to success is going in with eyes open: cross-platform UI almost always produces edge cases, and the teams that do best are the ones that intentionally contain those escape hatches.

Why Uno Often Deserves To Be The Starting Point In 2026

If MAUI is a strong “mainstream Microsoft” path, Uno is a strong “maximum reach .NET” path. For many product teams building long-lived apps, Uno’s strengths line up with modern requirements—especially around web and broader platform coverage.

Uno is particularly compelling when:

  • Web is a real target (WebAssembly matters to your roadmap)
  • You want broad platform reach without treating web as a second-class citizen
  • Your app has serious workflow complexity (forms, dashboards, dense UI)
  • You care about UI consistency and a predictable design system across devices

One of the most useful things Uno does is make a key tradeoff explicit: where you want the app to feel “most native,” and where you want it to behave consistently everywhere. Instead of discovering inconsistencies late in the cycle, you can choose intentionally early and align design, testing, and performance expectations around that choice.

One more reason Uno is worth a fresh look in 2026 is the platform’s push into AI-assisted developer workflows. They’ve been shipping features aimed at shortening the UI build loop—things like design-to-code and tooling that help you go from intent to working UI faster. If you’re curious, Uno has a great overview on their website.

The Decision Inside The Decision: Native UI Vs Consistent UI

A lot of teams think they’re choosing a framework when they’re really choosing a product philosophy.

If you lean toward native UI, you’re optimizing for:

  • platform conventions and “it feels right” UX
  • OS-level behaviors
  • closer alignment with native platform UI patterns

If you lean toward consistency, you’re optimizing for:

  • a predictable design system across devices
  • fewer platform-specific UI surprises
  • easier cross-platform QA and visual validation

A simple rule of thumb is that consumer apps often benefit from native conventions, while enterprise apps often benefit from consistency and predictability. It’s not a law, but it’s a useful starting point.

A Fast Way To Decide Without Debating For Months

If the choice is still unclear, don’t argue about it, prototype it. A two-week spike can settle most questions quickly. Build a thin vertical slice you’d ship in real life:

  • Authentication
  • One “real” complex screen (forms + validation + a list that needs virtualization)
  • One device integration (camera or push notifications)
  • Basic offline caching
  • Telemetry and crash reporting

Then measure what actually matters: development loop speed, UI fidelity, performance on real devices, build/release friction, and how often you needed a platform-specific workaround.

Closing Thoughts

Both MAUI and Uno can help you succeed with your cross-platform project. The best choice is the one that matches your platform targets, UX goals, team strengths, and maintenance horizon. That said, if your product roadmap includes web as a target, broad reach across platforms, and serious application UI, it’s hard to ignore the case for Uno Platform as a .NET-first foundation.

This post pairs with our Blue Blazes podcast episode featuring Sam Basu from Uno Platform, where we dig into real-world tradeoffs and where cross-platform .NET is headed. Check it out now on video or podcast.

The post Choosing a Cross-Platform Strategy for .NET: MAUI vs Uno appeared first on Trailhead Technology Partners.


AI-Powered Code Editor Cursor Introduces Dynamic Context Discovery to Improve Token-Efficiency


Cursor has introduced a new approach to minimize the context size of requests sent to large language models. Called dynamic context discovery, this method moves away from including large amounts of static context upfront, allowing the agent to dynamically retrieve only the information it needs. This reduces token usage and limits the inclusion of potentially confusing or irrelevant details.

By Sergio De Simone

JavaScript Canvas - WebGL A 3D Cube


Walled Culture


While major recording artists are sued for alleged plagiarism and most creators earn pennies for their work, media industry profits continue to soar. Libraries face mounting barriers to providing access to ebooks—often while being sued by the very publishers whose books they buy.

In this episode of Future Knowledge, tech and culture writer Glyn Moody discusses his book Walled Culture: How Big Content Uses Technology and the Law to Lock Down Culture and Keep Creators Poor. Moody traces how copyright laws designed for a world of physical scarcity have been repurposed for the digital age—creating legal and technical “walls” that restrict access to knowledge, limit creativity, and overwhelmingly benefit large media corporations over creators and the public. Joining the conversation is Maria Bustillos, writer and editor at the Brick House Cooperative.

Grab your copy of Walled Culture: https://walledculture.org 

This conversation was recorded on 11/10/2022. Watch the full video recording at: https://archive.org/details/book-talk-walled-culture

Check out all of the Future Knowledge episodes at https://archive.org/details/future-knowledge

Download audio: https://media.transistor.fm/fc37c35e/738f9a74.mp3

New in .NET 10 and C# 14: Enhancements in APIs Request/Response Pipeline


This blog post was originally published at https://blog.elmah.io/new-in-net-10-and-c-14-enhancements-in-apis-request-response-pipeline/

.NET 10 is officially out, along with C# 14. Microsoft has released .NET 10 as a Long-Term Support (LTS) release and the successor to .NET 8. Like every version, it is not just an update but brings something new to the table. In this series, we explore which aspects of your software can be upgraded with the latest release. Today, as part of What's New in .NET 10 and C# 14, we will take a hands-on look at the changes in the API request/response pipeline using minimal examples.


APIs don't need any introduction. From loading a shopping cart on your phone to showing this blog on your computer, everything is fetched through APIs. In today's dynamic websites, a single action can require multiple API calls, so even a small lag can hurt the user experience, no matter what application you build. .NET 10 brings some thoughtful changes for APIs. Some broad improvements, like better JIT inlining and a lower GC burden, make things faster across the board, but there are enhancements in the API pipeline as well. Let's look at how API operations have improved in .NET 10.

Benchmarking Minimal APIs with .NET 8 and .NET 10

I have created a directory MinimalApiComparison where three projects will reside. Consider the structure:

(API structure: the original post shows the folder layout as an image, with ApiNet8, ApiNet10, and ApiBenchmarks inside MinimalApiComparison.)

One folder per API version and another for the benchmarks.

.NET 8 minimal API project

Let's start by creating a minimal API on .NET 8.

Step 1: Create a .NET 8 project

mkdir ApiNet8
cd ApiNet8

dotnet new webapi --framework net8.0

Step 2: Set up the code

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/weather", () =>
{
    return new WeatherForecast(DateTime.Now, 25, "Sunny");
});

app.Run();

public record WeatherForecast(DateTime Date, int TemperatureC, string Summary);

Just a single endpoint named /weather.

.NET 10 Minimal API project

Now, let's set up a similar project for .NET 10.

Step 1: Create a .NET 10 project

mkdir ApiNet10
cd ApiNet10

dotnet new webapi --framework net10.0

Step 2: Set up the code

var builder = WebApplication.CreateBuilder(args);

builder.Services.ConfigureHttpJsonOptions(o =>
{
    o.SerializerOptions.AllowOutOfOrderMetadataProperties = true;
});

var app = builder.Build();

app.MapGet("/weather", static () =>
    {
        return new WeatherForecast(DateTime.Now, 25, "Sunny");
    })
    .WithName("GetWeather")
    .WithSummary("Faster pipeline in .NET 10");

app.Run();

public record WeatherForecast(DateTime Date, int TemperatureC, string Summary);

The AllowOutOfOrderMetadataProperties option relaxes System.Text.Json's strict ordering check for metadata properties during deserialization, which removes a parsing branch and speeds up model binding. The static () => modifier prevents closure allocation by forbidding the lambda from capturing variables: a capturing lambda forces the compiler to allocate a closure object to hold the captured state, while a static lambda guarantees zero closure allocation and a faster, more memory-efficient handler.
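To make the closure point concrete, here is a minimal sketch of my own (not from the original post) showing the same minimal-API shape with one capturing handler and one static handler; the /capturing and /static routes and the greeting variable are made up purely for illustration.

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

var greeting = "Hello";

// Capturing lambda: the compiler generates a hidden closure object to carry `greeting`.
app.MapGet("/capturing", () => $"{greeting}, world!");

// Static lambda: capturing `greeting` here would be a compile-time error, so no closure is allocated.
app.MapGet("/static", static () => "Hello, world!");

app.Run();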

Benchmark project

Once we set up both target projects, let us move to the benchmark to evaluate them.

Step 1: Create the project

Our benchmark project will be a console application.

dotnet new console -n ApiBenchmarks
cd ApiBenchmarks

Step 2: Install the BenchmarkDotNet package

We are using BenchmarkDotNet to observe the results. For more information about this package, check out How to Monitor Your App's Performance with .NET Benchmarking.

dotnet add package BenchmarkDotNet

Step 3: Set up the URLs benchmarking code

using System.Net.Http.Json;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;
using BenchmarkDotNet.Running;

// Entry point for the console app: run the benchmarks defined below.
BenchmarkRunner.Run<ApiComparisonBenchmarks>();

[MemoryDiagnoser]
[SimpleJob(warmupCount: 3, iterationCount: 10)]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
public class ApiComparisonBenchmarks
{
    private readonly HttpClient _client8;
    private readonly HttpClient _client10;

    public ApiComparisonBenchmarks()
    {
        _client8 = new HttpClient { BaseAddress = new Uri("http://localhost:5100") };
        _client10 = new HttpClient { BaseAddress = new Uri("http://localhost:5200") };
    }

    [Benchmark]
    public async Task WeatherNet8()
    {
        var result = await _client8.GetFromJsonAsync<WeatherForecast>("/weather");
    }

    [Benchmark]
    public async Task WeatherNet10()
    {
        var result = await _client10.GetFromJsonAsync<WeatherForecast>("/weather");
    }
}

public record WeatherForecast(DateTime Date, int TemperatureC, string Summary);

I created two methods: WeatherNet8, which hits the .NET 8 minimal API, and WeatherNet10 for the .NET 10 version. I have explicitly specified the server URL for each client. The [Benchmark] attribute does the rest of the job.

Step 4: Run the projects

Open three terminals. In the first one, run:

cd ApiNet8
dotnet run --urls "http://localhost:5100"

Run the following on the second one:

cd ApiNet10
dotnet run --urls "http://localhost:5200"

And on the third one:

cd ApiBenchmarks
dotnet run -c Release

Each API will now be running on its designated port. ApiBenchmarks hits each project's /weather endpoint and performs the benchmarking.

Output

(The BenchmarkDotNet summary table appears as an image in the original post.)

We can see a clear difference between the same API on the two runtimes. .NET 10 did the job noticeably faster, and the other metrics are lower as well, reflecting the performance improvements in the latest version. .NET 10 also shows roughly 3x smaller standard deviation and error, thanks to less jitter in execution times from the JIT improvements and the new JSON parsing path. Pipeline configuration like AllowOutOfOrderMetadataProperties and the static lambda reduced the per-request overhead: AllowOutOfOrderMetadataProperties lowers CPU work in the System.Text.Json paths, and the static lambda eliminates closure allocation, making the delegate faster with lower garbage collector pressure.

The big win was not only the pipeline configuration but also the faster request/response handling in ASP.NET Core itself. .NET 10 has optimized middleware dispatch overhead and added static pipeline analysis and faster endpoint selection, resulting in lower mean and tail latency. Our observation covered only a minimal API with small payloads; these enhancements will bring more noticeable results in real projects with larger API operations.

Conclusion

.NET 10 was released in November 2025 and is supported for three years as a long-term support (LTS) release. For the API pipeline, .NET 10 brings key enhancements, and in this post I demonstrated them with a hands-on benchmark analysis. Better JIT inlining, lower CPU and GC pressure, and numerous other factors all contribute to better APIs.

Source code: https://github.com/elmahio-blog/MinimalApiComparison-.git




How to Work with the ORC File Format in Python – A Guide with Examples


If you've worked with big data or analytics platforms, you've probably heard about ORC files. But what exactly are they, and how can you work with them in Python?

In this tutorial, I'll walk you through the basics of reading, writing, and manipulating ORC files using Python. By the end, you'll understand when to use ORC and how to integrate it into your data pipelines.

You can find the code on GitHub.

Table of Contents

  1. What is the ORC File Format?

  2. Prerequisites

  3. Reading ORC Files in Python

  4. Writing ORC Files with Compression

  5. Working with Complex Data Types

  6. A More Helpful Example: Processing Log Data

  7. When Should You Use ORC?

What Is the ORC File Format?

ORC stands for Optimized Row Columnar. It's a columnar storage file format designed for Hadoop workloads. Unlike traditional row-based formats like CSV, ORC stores data by columns, which makes it incredibly efficient for analytical queries.

Here's why ORC is popular:

  • ORC files are highly compressed, often 75% smaller than text files

  • Columnar format means you only read the columns you need

  • You can add or remove columns without rewriting data

  • ORC includes lightweight indexes for faster queries

Most organizations use ORC for their big data processing because it works well with Apache Hive, Spark, and Presto.

Prerequisites

Before we get started, make sure you have:

  • Python 3.10 or a later version installed

  • Basic understanding of DataFrames (pandas or similar)

  • Familiarity with file I/O operations

You'll need to install these libraries:

pip install pyarrow pandas

So why do we need PyArrow? PyArrow is the Python implementation of Apache Arrow, which provides excellent support for columnar formats like ORC and Parquet. It's fast, memory-efficient, and actively maintained.
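If you want to confirm the setup before going further, a quick sanity check of my own (not part of the original tutorial) is to import both packages and print their versions; the numbers you see will depend on what pip installed.

# Sanity check: confirm pandas and pyarrow import cleanly, including the ORC module.
import pandas as pd
import pyarrow as pa
import pyarrow.orc as orc

print("pandas:", pd.__version__)
print("pyarrow:", pa.__version__)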

Reading ORC Files in Python

Let's start by reading an ORC file. First, I'll show you how to create a sample ORC file so we have something to work with.

Creating a Sample ORC File

Here's how we'll create a simple employee dataset and save it as ORC:

import pandas as pd
import pyarrow as pa
import pyarrow.orc as orc

# Create sample employee data
data = {
    'employee_id': [101, 102, 103, 104, 105],
    'name': ['Alice Johnson', 'Bob Smith', 'Carol White', 'David Brown', 'Eve Davis'],
    'department': ['Engineering', 'Sales', 'Engineering', 'HR', 'Sales'],
    'salary': [95000, 65000, 88000, 72000, 71000],
    'years_experience': [5, 3, 7, 4, 3]
}

df = pd.DataFrame(data)

# Convert to PyArrow Table and write as ORC
table = pa.Table.from_pandas(df)
orc.write_table(table, 'employees.orc')

print("ORC file created successfully!")

This outputs:

ORC file created successfully!

Let me break down what's happening here. We start with a pandas DataFrame containing employee information. Then we convert it to a PyArrow table, which is PyArrow's in-memory representation of columnar data. Finally, we use orc.write_table() to write it to disk in ORC format.

The conversion to a PyArrow table is necessary because ORC is a columnar format, and PyArrow handles the translation from row-based pandas to column-based storage.
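One related detail worth knowing: by default, Table.from_pandas() also tries to preserve the DataFrame's index (as a column or as schema metadata, depending on the index type). Here is a small sketch of my own that drops the index and inspects the Arrow schema that will be written to the ORC file, reusing the df from above.

# Build the Arrow table without the pandas index and inspect its schema.
table_no_index = pa.Table.from_pandas(df, preserve_index=False)

print(table_no_index.schema)   # column names and Arrow types
print(table_no_index.num_rows, "rows,", table_no_index.num_columns, "columns")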

Reading the ORC File

Now that we have an ORC file, let's read it back:

# Read ORC file
table = orc.read_table('employees.orc')

# Convert to pandas DataFrame for easier viewing
df_read = table.to_pandas()

print(df_read)
print(f"\nData types:\n{df_read.dtypes}")

Output:

   employee_id           name   department  salary  years_experience
0          101  Alice Johnson  Engineering   95000                 5
1          102      Bob Smith        Sales   65000                 3
2          103    Carol White  Engineering   88000                 7
3          104    David Brown           HR   72000                 4
4          105      Eve Davis        Sales   71000                 3

Data types:
employee_id          int64
name                object
department          object
salary               int64
years_experience     int64
dtype: object

The orc.read_table() function loads the entire ORC file into memory as a PyArrow table. We then convert it back to pandas for familiar DataFrame operations.

Notice how the data types are preserved. ORC maintains schema information, so your integers stay integers and strings stay strings.

Reading Specific Columns

Here's where ORC really shines. When working with large datasets, you often don't need all columns. ORC lets you read only what you need:

# Read only specific columns
table_subset = orc.read_table('employees.orc', columns=['name', 'salary'])
df_subset = table_subset.to_pandas()

print(df_subset)

Output:

            name  salary
0  Alice Johnson   95000
1      Bob Smith   65000
2    Carol White   88000
3    David Brown   72000
4      Eve Davis   71000

This is called column pruning, and it's a massive performance optimization. If your ORC file has 50 columns but you only need 3, you're reading a fraction of the data. This translates to faster load times and lower memory usage.
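You can see the effect directly by comparing the in-memory footprint of the full table with the pruned one. A rough check of my own on the small file from above:

import pyarrow.orc as orc

# Compare the in-memory size of a full read vs. a column-pruned read.
full_table = orc.read_table('employees.orc')
pruned_table = orc.read_table('employees.orc', columns=['name', 'salary'])

print("full table:  ", full_table.nbytes, "bytes in memory")
print("pruned table:", pruned_table.nbytes, "bytes in memory")

On a five-row file the difference is tiny, but the gap grows with every column and row you skip.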

Writing ORC Files with Compression

ORC supports multiple compression codecs. Let's explore how to use compression when writing files:

# Create a larger dataset
large_data = {
    'id': range(10000),
    'value': [f"data_{i}" for i in range(10000)],
    'category': ['A', 'B', 'C', 'D'] * 2500
}

df_large = pd.DataFrame(large_data)
table_large = pa.Table.from_pandas(df_large)

# Write with ZLIB compression (default)
orc.write_table(table_large, 'data_zlib.orc', compression='ZLIB')

# Write with SNAPPY compression (faster but less compression)
orc.write_table(table_large, 'data_snappy.orc', compression='SNAPPY')

# Write with ZSTD compression (good balance)
orc.write_table(table_large, 'data_zstd.orc', compression='ZSTD')

import os
print(f"ZLIB size: {os.path.getsize('data_zlib.orc'):,} bytes")
print(f"SNAPPY size: {os.path.getsize('data_snappy.orc'):,} bytes")
print(f"ZSTD size: {os.path.getsize('data_zstd.orc'):,} bytes")

Output:

ZLIB size: 23,342 bytes
SNAPPY size: 44,978 bytes
ZSTD size: 6,380 bytes

Different compression codecs offer different trade-offs. ZLIB typically compresses well but is slower, SNAPPY is faster but produces larger files, and ZSTD offers a good balance of compression ratio and speed; on this highly repetitive sample it even produced the smallest file.

For most use cases, I recommend ZSTD. It's fast enough for real-time processing and provides excellent compression.
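If write speed matters as much as file size for your workload, a rough timing loop is enough to compare the codecs yourself. This is my own sketch, reusing table_large, orc, and the codec names from the snippet above; absolute numbers will vary by machine and data.

import os
import time

# Rough write-time comparison across codecs (reuses `orc` and `table_large` from above).
for codec in ['ZLIB', 'SNAPPY', 'ZSTD']:
    start = time.perf_counter()
    orc.write_table(table_large, f'data_{codec.lower()}.orc', compression=codec)
    elapsed = time.perf_counter() - start
    size = os.path.getsize(f'data_{codec.lower()}.orc')
    print(f"{codec}: {elapsed * 1000:.1f} ms, {size:,} bytes")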

Working with Complex Data Types

ORC handles nested data structures well. Here's how to work with lists and nested data:

# Create data with complex types
complex_data = {
    'user_id': [1, 2, 3],
    'name': ['Alice', 'Bob', 'Carol'],
    'purchases': [
        ['laptop', 'mouse'],
        ['keyboard'],
        ['monitor', 'cable', 'stand']
    ],
    'ratings': [
        [4.5, 5.0],
        [3.5],
        [4.0, 4.5, 5.0]
    ]
}

df_complex = pd.DataFrame(complex_data)
table_complex = pa.Table.from_pandas(df_complex)
orc.write_table(table_complex, 'complex_data.orc')

# Read it back
table_read = orc.read_table('complex_data.orc')
df_read = table_read.to_pandas()

print(df_read)
print(f"\nType of 'purchases' column: {type(df_read['purchases'][0])}")

Output:

   user_id   name                purchases          ratings
0        1  Alice          [laptop, mouse]       [4.5, 5.0]
1        2    Bob               [keyboard]            [3.5]
2        3  Carol  [monitor, cable, stand]  [4.0, 4.5, 5.0]

Type of 'purchases' column: <class 'numpy.ndarray'>

ORC preserves list structures, which is incredibly useful for storing JSON-like data or aggregated information. Each cell can contain a list, and ORC handles the variable-length storage efficiently.
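Once the lists are back in Arrow form, you can also work with them directly through pyarrow.compute instead of converting to pandas first. A small sketch of my own, operating on the table_read object from above:

import pyarrow.compute as pc

# Operate on the list column directly in Arrow, without converting to pandas.
purchases = table_read['purchases']

print(pc.list_value_length(purchases))   # items per row: 2, 1, 3
print(pc.list_flatten(purchases))        # every purchase as one flat array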

A More Helpful Example: Processing Log Data

Let's put this together with a practical example. Imagine you're processing web server logs:

from datetime import datetime, timedelta
import random

# Generate sample log data
log_data = []
start_date = datetime(2025, 1, 1)

for i in range(1000):
    log_data.append({
        'timestamp': start_date + timedelta(minutes=i),
        'user_id': random.randint(1000, 9999),
        'endpoint': random.choice(['/api/users', '/api/products', '/api/orders']),
        'status_code': random.choice([200, 200, 200, 404, 500]),
        'response_time_ms': random.randint(50, 2000)
    })

df_logs = pd.DataFrame(log_data)

# Write logs to ORC
table_logs = pa.Table.from_pandas(df_logs)
orc.write_table(table_logs, 'server_logs.orc', compression='ZSTD')

# Later, query only failed requests
table_subset = orc.read_table('server_logs.orc')
df_subset = table_subset.to_pandas()

# Filter for errors
errors = df_subset[df_subset['status_code'] >= 400]
print(f"Total errors: {len(errors)}")
print(f"\nError breakdown:\n{errors['status_code'].value_counts()}")
print(f"\nSlowest error response: {errors['response_time_ms'].max()}ms")

Output:

Total errors: 387

Error breakdown:
status_code
404    211
500    176
Name: count, dtype: int64

Slowest error response: 1994ms

This example shows why ORC is a good fit for log storage. You can write logs continuously, compress them efficiently, and query them quickly. And because the format is columnar, a report can read just the columns it needs instead of the whole log, as shown below.
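To put that into practice, you could re-run the error report while reading only the columns it needs. Here is a variant of the query above (my addition), which skips the timestamp, user, and endpoint columns entirely:

# Column-pruned version of the error report: read only what the analysis uses.
errors_table = orc.read_table(
    'server_logs.orc',
    columns=['status_code', 'response_time_ms']
)
errors_df = errors_table.to_pandas()
errors_only = errors_df[errors_df['status_code'] >= 400]

print(f"Total errors: {len(errors_only)}")
print(f"Slowest error response: {errors_only['response_time_ms'].max()}ms")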

When Should You Use ORC?

Use ORC when you:

  • Work with big data platforms (Hadoop, Spark, Hive)

  • Need efficient storage for analytics workloads

  • Have wide tables where you often query specific columns

  • Want built-in compression and indexing

Don't use ORC when you:

  • Need row-by-row processing – use Avro instead

  • Work with small datasets – CSV is simpler in such cases

  • Need human-readable files – use JSON

  • Don't have big data infrastructure

Conclusion

ORC is a powerful format for data engineering and analytics. With PyArrow, working with ORC in Python is both straightforward and performant.

You've learned how to read and write ORC files, use compression, handle complex data types, and apply these concepts to real-world scenarios. The columnar storage and compression make ORC an excellent choice for big data pipelines.

Try integrating ORC into your next data project. You'll likely see significant improvements in storage costs and query performance.

Happy coding!


