
Highlights from Git 2.45


The open source Git project just released Git 2.45 with features and bug fixes from over 96 contributors, 38 of them new. We last caught up with you on the latest in Git back when 2.44 was released.

To celebrate this most recent release, here is GitHub’s look at some of the most interesting features and changes introduced since last time.

Preliminary reftable support

Git 2.45 introduces preliminary support for a new reference storage backend called “reftable,” promising faster lookups, reads, and writes, especially for repositories with a large number of references.

If you’re unfamiliar with our previous coverage of the new reftable format, don’t worry, this post will catch you up to speed (and then some!). But if you just want to play around with the new reference backend, you can initialize a new repository with --ref-format=reftable like so:

$ git init --ref-format=reftable /path/to/repo
Initialized empty Git repository in /path/to/repo/.git
$ cd /path/to/repo
$ git commit --allow-empty -m 'hello reftable!'
[main (root-commit) 2eb0810] hello reftable!
$ ls -1 .git/reftable/
0x000000000001-0x000000000002-565c6bf0.ref
tables.list
$ cat .git/reftable/tables.list
0x000000000001-0x000000000002-565c6bf0.ref

With that out of the way, let’s jump into the details. If you’re new to this series, or didn’t catch our initial coverage of the reftable feature, don’t worry, here’s a refresher. When we talk about references in Git, we’re referring to the branches and tags that make up your repository. In essence, a reference is nothing more than a name (like refs/heads/my-feature, or refs/tags/v1.0.0) and the object ID of the thing that reference points at.

Git has historically stored references in your repository in one of two ways: either “loose” as a file inside of $GIT_DIR/refs (like $GIT_DIR/refs/heads/my-feature) or “packed” as an entry inside of the file at $GIT_DIR/packed-refs.

For most repositories today, the existing reference backend works fine. For repositories with a truly gigantic number of references, however, the existing backend has some growing pains. For instance, storing a large number of references as “loose” can lead to directories with a large number of entries (slowing down lookups within that directory) and/or inode exhaustion. Likewise, storing all references in a single packed-refs file can become expensive to maintain, as even small reference updates incur significant I/O cost to rewrite the entire packed-refs file on each update.
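To see both layouts for yourself, here is a small sketch using a scratch repository (the /tmp path and branch name are just for illustration; this works in any modern Git):

```shell
# A scratch repo to show both classic reference layouts.
rm -rf /tmp/refs-demo
git init -q /tmp/refs-demo
cd /tmp/refs-demo
git config user.email you@example.com
git config user.name You
git commit -q --allow-empty -m init
git branch my-feature

# "Loose" storage: one file per reference under .git/refs.
find .git/refs -type f

# "Packed" storage: collapse every reference into the single packed-refs file.
git pack-refs --all
cat .git/packed-refs
```

After `git pack-refs --all`, the loose files disappear and all references live in the one packed-refs file, which is exactly the file that must be rewritten wholesale on updates.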

That’s where the reftable format comes in. Reftable is an entirely new format for storing Git references. Instead of storing loose references, or constantly updating a large packed-refs file, reftable implements a binary format for storing references that promises to achieve:

  • Near constant-time lookup for individual references, and near constant-time verification that a given object ID is referred to by at least one reference.
  • Efficient lookup of entire reference namespaces through prefix compression.
  • Atomic reference updates that scale with the size of the reference update, not the number of overall references.

The reftable format is incredibly detailed (curious readers can learn more by reading the original specification), but here’s a high-level overview. A repository can have any number of reftables (stored as *.ref files), each of which is organized into variable-sized blocks. Blocks can store information about a collection of references, refer to the contents of other blocks when storing references across a collection of blocks, and more.

The format is designed to both (a) take up a minimal amount of space (by storing reference names with prefix compression) and (b) support fast lookups, even when reading the .ref file(s) from a cold cache.

Most importantly, the reftable format supports multiple *.ref files, meaning that each reference update transaction can be processed individually without having to modify existing *.ref files. A separate compaction process describes how to “merge” a range of adjacent *.ref files together into a single *.ref file to maintain read performance.

The reftable format was originally designed by Shawn Pearce for use in JGit to better support the large number of references stored by Gerrit. Back in our Highlights from Git 2.35 post, we covered that an implementation of the reftable format had landed in Git. In that version, Git did not yet know how to use the new reftable code in conjunction with its existing reference backend system, meaning that you couldn’t yet create repositories that store references using reftable.

In Git 2.45, support for a reftable-powered storage backend has been integrated into Git’s generic reference backend system, meaning that you can play with reftable on your own repository by running:

$ git init --ref-format=reftable /path/to/repo


Preliminary support for SHA-1 and SHA-256 interoperability

Returning readers of this series will be familiar with our ongoing coverage of the Git project’s hash function transition. If you’re new around here, or need a refresher, don’t worry!

Git identifies objects (the blobs, trees, commits, and tags that make up your repository) by a hash of their contents. Since its inception, Git has used the SHA-1 hash function to hash and identify objects in a repository.

However, the SHA-1 function has known collision attacks (e.g., SHAttered and Shambles), meaning that a sufficiently motivated attacker can generate a colliding pair of SHA-1 inputs, which have the same SHA-1 hash despite containing different contents. (Many providers, like GitHub, use a SHA-1 implementation that detects and rejects inputs that contain the telltale signs of being part of a colliding pair attack. For more details, see our post, SHA-1 collision detection on GitHub.com.)

After these attacks came to light, the Git project began discussing a plan to transition from SHA-1 to a more secure hash function that was not susceptible to the same chosen-prefix attacks. The project decided on SHA-256 as the successor to Git’s use of SHA-1, and work on supporting the new hash function began in earnest. In Git 2.29 (released in October 2020), Git gained experimental support for using SHA-256 instead of SHA-1 in specially-configured repositories. That feature was declared no longer experimental in Git 2.42 (released in August 2023).

One of the goals of the hash function transition was to introduce support for repositories to interoperate between SHA-1 and SHA-256, meaning that repositories could in theory use one hash function locally, while pushing to another repository that uses a different hash function.

Git 2.45 introduces experimental preliminary support for limited interoperability between SHA-1 and SHA-256. To do this, Git 2.45 introduces a new concept called the “compatibility” object format, and allows you to refer to objects by either their given hash, or their “compatibility” hash. An object’s compatibility hash is the hash of an object as it would have been written under the compatibility hash function.

To give you a better sense of how this new feature works, here’s a short demo. To start, we’ll initialize a repository in SHA-256 mode, and declare that SHA-1 is our compatibility hash function:

$ git init --object-format=sha256 /path/to/repo
Initialized empty Git repository in /path/to/repo/.git
$ cd /path/to/repo
$ git config extensions.compatObjectFormat sha1

Then, we can create a simple commit with a single file (README) whose contents are “Hello, world!”:

$ echo 'Hello, world!' >README
$ git add README
$ git commit -m "initial commit"
[main (root-commit) 74dcba4] initial commit
 Author: A U Thor <author@example.com>
 1 file changed, 1 insertion(+)
 create mode 100644 README

Now, we can ask Git to show us the contents of the commit object we just created with cat-file. As we’d expect, the hash of the commit object, as well as its root tree are computed using SHA-256:

$ git rev-parse HEAD | git cat-file --batch
74dcba4f8f941a65a44fdd92f0bd6a093ad78960710ac32dbd4c032df66fe5c6 commit 202
tree ace45d916e870ce0fadbb8fc579218d01361da4159d1e2b5949f176b1f743280
author A U Thor <author@example.com> 1713990043 -0400
committer C O Mitter <committer@example.com> 1713990043 -0400

initial commit

But we can also tell git rev-parse to output any object IDs using the compatibility hash function, allowing us to ask for the SHA-1 object ID of that same commit object. When we print its contents out using cat-file, its root tree OID is a different value (starting with 7dd4941980 instead of ace45d916e), this time computed using SHA-1 instead of SHA-256:

$ git rev-parse --output-object-format=sha1 HEAD
2a4f4a2182686157a2dc887c46693c988c912533

$ git rev-parse --output-object-format=sha1 HEAD | git cat-file --batch
2a4f4a2182686157a2dc887c46693c988c912533 commit 178
tree 7dd49419807b37a3afd2f040891a64d69abb8df1
author A U Thor <author@example.com> 1713990043 -0400
committer C O Mitter <committer@example.com> 1713990043 -0400

initial commit

Support for this new feature is still considered experimental, and many features may not work quite as you expect them to. There is still much work ahead for full interoperability between SHA-1 and SHA-256 repositories, but this release delivers an important first step towards full interoperability support.



  • If you’ve ever scripted around your repository, then you have no doubt used git rev-list to list commits or objects reachable from some set of inputs. rev-list can also come in handy when trying to diagnose repository corruption, including investigating missing objects.

    In the past, you might have used something like git rev-list --missing=print to gather a list of objects which are reachable from your inputs, but are missing from the local repository. But what if there are missing objects at the tips of your reachability query itself? For instance, if the tip of some branch or tag is corrupt, then you’re stuck:

    $ git rev-parse HEAD | tr 'a-f1-9' '1-9a-f' >.git/refs/heads/missing
    $ git rev-list --missing=print --all | grep '^?'
    fatal: bad object refs/heads/missing
    

    Here, Git won’t let you continue, since one of the inputs to the reachability query itself (refs/heads/missing, via --all) is missing. This can make debugging missing objects in the reachable parts of your history more difficult than necessary.

    But with Git 2.45, you can debug missing objects even when the tips of your reachability query are themselves missing, like so:

    $ git rev-list --missing=print --all | grep '^?'
    ?70678e7afeacdcba1242793c3d3d28916a2fd152
    


  • One of Git’s lesser-known features is “reference logs,” or “reflogs” for short. These reference logs are extremely useful when asking questions about the history of some reference, such as: “what was main pointing at two weeks ago?” or “where was I before I started this rebase?”.

    Each reference has its own corresponding reflog, and you can use the git reflog command to see the reflog for the currently checked-out reference, or for an arbitrary reference by running git reflog refs/heads/some/branch.

    If you want to see what branches have corresponding reflogs, you could look at the contents of .git/logs like so:

    $ find .git/logs/refs/heads -type f | cut -d '/' -f 3-
    

    But what if you’re using reftable? In that case, the reflogs are stored in a binary format, leaving tools like find out of your reach.

    Git 2.45 introduced a new sub-command git reflog list to show which references have corresponding reflogs available to them, regardless of whether or not you are using reftable.


  • If you’ve ever looked closely at Git’s diff output, you might have noticed the prefixes a/ and b/ used before file paths to indicate the before and after versions of each file, like so:

    $ git diff HEAD^ -- GIT-VERSION-GEN
    diff --git a/GIT-VERSION-GEN b/GIT-VERSION-GEN
    index dabd2b5b89..c92f98b3db 100755
    --- a/GIT-VERSION-GEN
    +++ b/GIT-VERSION-GEN
    @@ -1,7 +1,7 @@
    #!/bin/sh
    
    GVF=GIT-VERSION-FILE
    -DEF_VER=v2.45.0-rc0
    +DEF_VER=v2.45.0-rc1
    
    LF='
    '
    

    In Git 2.45, you can now configure alternative prefixes by setting the diff.srcPrefix and diff.dstPrefix configuration options. This can come in handy if you want to make clear which side is which (by setting them to something like “before” and “after,” respectively). Or if you’re viewing the output in your terminal, and your terminal supports hyperlinking to paths, you could change the prefix to ./ to allow you to click on filepaths within a diff output.


  • When writing a commit message, Git will open your editor with a mostly blank file containing some instructions, like so:

    # Please enter the commit message for your changes. Lines starting
    # with '#' will be ignored, and an empty message aborts the commit.
    #
    # On branch main
    # Your branch is up to date with 'origin/main'.
    

    Since 2013, Git has supported customizing the comment character to be something other than the default #. This can come in handy, for instance, if you’re trying to refer to a GitHub issue by its numeric shorthand (e.g. #12345). If you write #12345 at the beginning of a line in your commit message, Git will treat the entire line as a comment and ignore it.

    In Git 2.45, the comment character is no longer limited to a single ASCII character: it can be an arbitrary multi-byte character, or even an arbitrary string. Now, you can customize your commit message template by setting core.commentString (or core.commentChar, the two are synonyms for one another) to your heart’s content.
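    For instance (the value '//' is arbitrary; any string works in Git 2.45+, and the /tmp path is just for the demo):

```shell
rm -rf /tmp/comment-demo
git init -q /tmp/comment-demo
cd /tmp/comment-demo

# Use a two-character string as the comment marker instead of '#',
# so lines like "#12345" in a commit message are no longer ignored.
git config core.commentString '//'
git config core.commentString    # prints: //
```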


  • Speaking of comments, git config learned a new option to help document your .gitconfig file. The .gitconfig file format allows for comments beginning with a # character, meaning that everything following that # until the next newline will be ignored.

    The git config command gained a new --comment option, which allows specifying an optional comment to leave at the end of the newly configured line, like so:

    $ git config --comment 'to show the merge base' merge.conflictStyle diff3
    $ tail -n 2 .git/config
    [merge]
    conflictStyle = diff3 # to show the merge base
    

    This can be helpful when tweaking some of Git’s more esoteric settings to try and remember why you picked a particular value.


  • Sometimes when you are rebasing or cherry-picking a series of commits, one or more of those commits become “empty” (i.e., because they contain a subset of changes that have already landed on your branch).

    When rebasing, you can use the --empty option to specify how to handle these commits. --empty supports a few options: “drop” (to ignore those commits), “keep” (to keep empty commits), or “stop” which will halt the rebase and ask for your input on how to proceed.

    Despite its similarity to git rebase, git cherry-pick never had an equivalent option to --empty. That meant that if you were cherry-picking a long sequence of commits, some of which became empty, you’d have to type either git cherry-pick --skip (to drop the empty commit), or git commit --allow-empty (to keep the empty commit).

    In Git 2.45, git cherry-pick learned the same --empty option from git rebase, meaning that you can specify the behavior once at the beginning of your cherry-pick operation, instead of having to specify the same thing each time you encounter an empty commit.


The rest of the iceberg

That’s just a sample of changes from the latest release. For more, check out the release notes for 2.45, or any previous version in the Git repository.

The post Highlights from Git 2.45 appeared first on The GitHub Blog.

Read the whole story
alvinashcraft
3 hours ago
reply
West Grove, PA
Share this story
Delete

GitHub Copilot Workspace: Welcome to the Copilot-native developer environment

We’re redefining the developer environment with GitHub Copilot Workspace, where any developer can go from idea, to code, to software in natural language. Sign up here.

In the past two years, generative AI has foundationally changed the developer landscape largely as a tool embedded inside the developer environment. In 2022, we launched GitHub Copilot as an autocomplete pair programmer in the editor, boosting developer productivity by up to 55%. Copilot is now the most widely adopted AI developer tool. In 2023, we released GitHub Copilot Chat—unlocking the power of natural language in coding, debugging, and testing—allowing developers to converse with their code in real time.

After sharing an early glimpse at GitHub Universe last year, today, we are reimagining the nature of the developer experience itself with the technical preview of GitHub Copilot Workspace: the Copilot-native developer environment. Within Copilot Workspace, developers can now brainstorm, plan, build, test, and run code in natural language. This new task-centric experience leverages different Copilot-powered agents from start to finish, while giving developers full control over every step of the process.

Copilot Workspace represents a radically new way of building software with natural language, and is expressly designed to deliver, not replace, developer creativity, faster and easier than ever before. With Copilot Workspace we will empower more experienced developers to operate as systems thinkers, and materially lower the barrier of entry for who can build software.

Welcome to the first day of a new developer environment. Here’s how it works:

It all starts with the task…

It starts with a task. Open GitHub Copilot Workspace from a GitHub Issue, Pull Request, or Repository. (Screenshot of an issue in the octoacademy repository.)

For developers, the greatest barrier to entry is almost always at the beginning. Think of how often you hit a wall in the first steps of a big project, feature request, or even bug report, simply because you don’t know how to get started. GitHub Copilot Workspace meets developers right at the origin: a GitHub Repository or a GitHub Issue. By leveraging Copilot agents as a second brain, developers will have AI assistance from the very beginning of an idea.

…Workspace builds the full plan

Progress from your task to a specification, outlining what you want to achieve with Copilot Workspace. The steps are editable, enabling you to iterate on ideas.

From there, Copilot Workspace offers a step-by-step plan to solve the issue based on its deep understanding of the codebase, issue replies, and more. It gives you everything you need to validate the plan, and test the code, in one streamlined list in natural language.

And it’s entirely editable…

Then adjust your plan of action, adding steps, and general notes. Once the plan is implemented, you can view changes with a PR diff view and make edits as needed.

Everything that GitHub Copilot Workspace proposes—from the plan to the code—is fully editable, allowing you to iterate until you’re confident in the path ahead. You retain all of the autonomy, while Copilot Workspace lifts your cognitive strain.

Once you're happy with the code, you can use the integrated terminal to run unit tests, builds and appropriate checks.

And once you’re satisfied with the plan, you can run your code directly in Copilot Workspace, jump into the underlying GitHub Codespace, and tweak all code changes until you are happy with the final result. You can also instantly share a workspace with your team via a link, so they can view your work and even try out their own iterations.

All that’s left then is to file your pull request, run your GitHub Actions, security code scanning, and ask your team members for human code review. And best of all, they can leverage your Copilot Workspace to see how you got from idea to code.

Also: GitHub Copilot Workspace is mobile compatible

And because ideas can happen anywhere, GitHub Copilot Workspace was designed to be used from any device—empowering a real-world development environment that can work on a desktop, laptop, or on the go.

This is our mark on the future of the development environment: an intuitive, Copilot-powered infrastructure that makes it easier to get started, to learn, and ultimately to execute.

Enabling a world with 1B developers

Early last year, GitHub celebrated over 100 million developers on our platform—and counting. As programming in natural language lowers the barrier of entry to who can build software, we are accelerating to a near future where one billion people on GitHub will control a machine just as easily as they ride a bicycle. We’ve constructed GitHub Copilot Workspace in pursuit of this horizon, as a conduit to help extend the economic opportunity and joy of building software to every human on the planet.

At the same time, we live in a world dependent on—and in short supply of—professional developers. Around the world, developers add millions of lines of code every single day to evermore complex systems and are increasingly behind on maintaining the old ones. Just like any infrastructure in this world, we need real experts to maintain and renew the world’s code. By quantifiably reducing boilerplate work, we will empower professional developers to increasingly operate as systems thinkers. We believe the step change in productivity gains that professional developers will experience by virtue of Copilot and now Copilot Workspace will only continue to increase labor demand.

That’s the dual potential of GitHub Copilot: for the professional and hobbyist developer alike, channeling creativity into code just got a whole lot easier.

Today, we begin the technical preview for GitHub Copilot Workspace.
Sign up now.
We can’t wait to see what you will build from here.

The post GitHub Copilot Workspace: Welcome to the Copilot-native developer environment appeared first on The GitHub Blog.


The Backend for Frontend Pattern

Learn how to keep tokens more secure by using the Backend for Frontend (BFF) architectural pattern.


What is Vite (and why is it so popular)?

At StackBlitz, we're not shy about how much we love Vite. Here's what you need to know about the next-generation JavaScript build tool.

Deprecating support for -ms-high-contrast and -ms-high-contrast-adjust

Today, we're announcing the deprecation of the CSS -ms-high-contrast media query and -ms-high-contrast-adjust property, in favor of the standards-based forced colors feature that we implemented in Chromium-based browsers in 2020.

Contrast themes are a very important accessibility feature of Windows, which makes text more visible and easier to read. In the past, Internet Explorer and Microsoft Edge with the EdgeHTML engine made it possible for websites to honor a user's contrast theme setting by using the -ms-high-contrast and -ms-high-contrast-adjust CSS features. In 2020, we went one step further and worked with Chromium to standardize support for contrast themes on the web, so that it doesn't only work in Edge, but also in any engine that implements support for the feature. We renamed the feature to forced colors. Today, forced colors is supported in Chromium-based browsers as well as Firefox.

(Image: The Aquatic Windows contrast theme impacting the Microsoft Edge UI and the rendered website, thanks to forced colors.)

To learn more about the forced colors feature, check out the following links:

Deprecating the legacy ms-prefixed CSS features

When we shipped the forced colors feature in Chromium-based Edge for the first time, we also wanted the websites that used the legacy CSS features from Internet Explorer and Microsoft Edge with the EdgeHTML engine to keep working. So, we maintained support for the -ms-high-contrast media query, and the -ms-high-contrast-adjust property. Today, we're announcing our deprecation process for these CSS features. Continue reading to learn what to expect, and how to migrate to the new properties.

Deprecation period

To reduce interoperability issues and to gather feedback, we plan to slowly deprecate the legacy -ms-high-contrast media query and -ms-high-contrast-adjust property in Microsoft Edge. We are planning to completely disable the legacy implementation by Edge 138, but this plan might change depending on the feedback that we receive during this deprecation trial.

Testing the deprecation early

We're introducing a way for you to check that your new forced color styles work correctly before we completely disable the legacy high-contrast implementation. To check your styles, you can disable the legacy implementation locally in Microsoft Edge:
  • Open a new window or tab.
  • Go to edge://flags/#edge-deprecate-ms-high-contrast in that tab.
  • Enable the Deprecate '-ms-high-contrast' and '-ms-high-contrast-adjust' flag, and then restart Microsoft Edge.

DevTools warning

As part of the deprecation process, Microsoft Edge will also display a warning in the DevTools Console tool for any sites that use the legacy properties in their stylesheets starting with Edge version 126.

Origin trials

Finally, to make it possible for you to phase out the legacy implementation and keep your website functioning well after it's been deprecated, we're beginning an Origin Trial in Edge 132. See Microsoft Edge Origin Trials for more details. In the time leading up to the deprecation, Microsoft Edge will be reaching out to accessibility testers and sites with known usage of the legacy properties to prevent breakages when the deprecation happens.

How to update your styles to the new forced colors standard

If your site uses the legacy -ms-high-contrast media query and -ms-high-contrast-adjust property to modify its styles when Windows is set to a contrast theme, we recommend that you adopt the new forced colors mode standard before the legacy properties are deprecated. The table below shows how the legacy properties can be transferred to the new standards:
Legacy feature (Internet Explorer and Microsoft Edge with the EdgeHTML engine) → Replacement (Microsoft Edge and other browsers that support forced colors):

  • @media (-ms-high-contrast: active) {} → @media (forced-colors: active) {}
  • @media (-ms-high-contrast: black-on-white) {} → @media (forced-colors: active) and (prefers-color-scheme: light) {}
    Note: this is not exactly equal to the legacy black-on-white media query, which matched only specific default contrast themes. The new implementation observes the luminosity of the user's forced background color to determine whether prefers-color-scheme: light or prefers-color-scheme: dark matches. In Chromium, a forced background with a luminosity below 0.33 matches dark color schemes; otherwise, prefers-color-scheme: light matches.
  • @media (-ms-high-contrast: white-on-black) {} → @media (forced-colors: active) and (prefers-color-scheme: dark) {}
    Same note as the previous row.
  • -ms-high-contrast-adjust: none; → forced-color-adjust: none;
Note that there are some key differences that you'll need to account for, when migrating your contrast theme styles to the new forced colors mode standard. These include changes to the style cascade, system color keywords, and native form controls design. For more details, please see Styling for Windows high contrast with new standards for forced colors.

How to test forced colors mode on your website

To check how your website renders when using a contrast theme, you can either change your Windows settings to use a contrast theme or emulate it via DevTools. To change your Windows settings:
  • On Windows 10: go to Settings > Ease of Access > High contrast, and then click Turn on high contrast.
  • On Windows 11: go to Settings > Accessibility > Contrast themes, select a theme from the Contrast themes drop-down menu, and then click Apply.
If you want to test your website on other operating systems, such as macOS or Linux, or if you don't want to change your Windows theme, you can also emulate the forced colors mode by using Microsoft Edge DevTools:
  • Open DevTools by pressing F12 or Ctrl+Shift+I.
  • Open the Rendering tool by clicking More tools (+) > Rendering.
  • Scroll down to Emulate CSS media feature forced-colors.
  • Select forced-colors:active to emulate forced colors mode. Or select forced-colors:none to stop emulating forced colors.
  • You can also choose a specific forced colors theme by using the Emulate CSS media feature prefers-color-scheme dropdown menu and setting its value to either prefers-color-scheme:light or prefers-color-scheme:dark.
By using the emulation feature in DevTools, you can preview how your website will look to users of different contrast themes and adjust your styles accordingly.

Backwards compatibility

If you're required to support contrast themes for both Internet Explorer and Microsoft Edge with the EdgeHTML engine, as well as newer versions of Microsoft Edge based on Chromium, we recommend using a combination of the legacy and standard properties for maximum compatibility: keep your existing -ms-high-contrast rules and add the equivalent forced-colors rules alongside them.

Let us know how things go

If you encounter any issues during your testing, please send us feedback in either of these two ways:
  • To send us feedback directly from Microsoft Edge: go to Settings and more (...) > Help and feedback > Send feedback.
  • Or, to report a problem directly with the Chromium implementation of the new forced colors mode standard, create a new issue using Chromium's bug tracker.

A Beer Color Meter for Windows and Android with Avalonia UI


In this article we present an Android and Windows Desktop app written with Avalonia UI. The app allows you to

  • pick whatever image – but preferably one with a glass of beer in it,
  • calculate the average color of the image’s pixels,
  • find the nearest official beer color to that image, and
  • visually validate the result with a slider through the official beer colors.

After a version with WinUI 3 and a version with Uno Platform, this is the third incarnation of our Beer Color Meter. This is how it looks on my old phone:

And here’s how it looks on Windows:

Beer Color Meter’s concepts, Model, and Data Access Layer are still identical to the WinUI version – we’re not going into these details again. Our original version used the Windows Community Toolkit’s ImageCropper to allow the user to indicate the relevant part of the picture (the beer). We did not find an alternative for this control, so we will, just like in our Uno version, consider all the pixels of the full source image instead of a selection.

Getting started with Avalonia UI

This is our first project with Avalonia UI, so we had to configure our development box. Since we regularly update and run the Uno Check Tool, we were pretty confident that all Android-related configurations are OK. According to Avalonia’s documentation it suffices to

  • install the solution templates (“dotnet new install Avalonia.Templates”), and
  • install the Avalonia for Visual Studio extension, which provides the XAML previewer.

Avalonia UI comes with documentation, sample apps, an interactive playground, and Awesome Avalonia, a curated list of libraries and resources.

Avalonia UI is modeled after WPF – with which we have some experience. There’s specific guidance for converting applications from WPF. Avalonia delegates rendering of its UI to SkiaSharp – with which we also have some experience.

When creating a new Avalonia UI solution in Visual Studio, just let their implementation of Template Studio guide you to configure it:

Compared to other cross-platform ecosystems, an Avalonia UI solution is remarkably straightforward, with one core project and one head project per target operating system:

We didn’t need to touch any of the platform projects for our Beer Color Meter.

Unlike most of the other modern XAML environments, Avalonia UI comes with a designer:

Running the code

Start up one of the platform head projects (in our case Desktop or Android) to run your code. For Android specifics, check out our blog post on Uno Platform. You can run in the Android Emulator, or on a connected physical device.

Porting the WinUI code

Just like we expected, Beer Color Meter’s concepts, Model, and Data Access Layer are identical to the WinUI version. After all, these are just plain C# classes.

Since Avalonia UI is modeled after WPF rather than WinUI, there were some differences in the XAML:

<UserControl xmlns="https://github.com/avaloniaui"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:vm="clr-namespace:XamlBrewer.Avalonia.BeerColorMeter.ViewModels"
             mc:Ignorable="d" d:DesignWidth="500" d:DesignHeight="800"
             x:Class="XamlBrewer.Avalonia.BeerColorMeter.Views.MainView"
             x:DataType="vm:MainViewModel">
  <Design.DataContext>
    <!-- This only sets the DataContext for the previewer in an IDE,
         to set the actual DataContext for runtime, set the DataContext property in code (look at App.axaml.cs) -->
    <vm:MainViewModel />
  </Design.DataContext>

  <Grid>
    <!-- ... -->
  </Grid>
</UserControl>
<LinearGradientBrush StartPoint="100% 0%"
                      EndPoint="100% 100%">
  <GradientStops>
    <GradientStop Color="#FF000000"
                  Offset="0" />
    <!-- ... -->
    <GradientStop Color="#FFF8F8DC"
                  Offset="1" />
  </GradientStops>
</LinearGradientBrush>
<Border x:Name="Result"
      HorizontalAlignment="Stretch"
      VerticalAlignment="Stretch"
      CornerRadius="8"
      Grid.Row="3"
      Grid.Column="1">
  <Grid>
    <!-- ... -->
  </Grid>
</Border>
  • The syntax for referencing a resource (e.g. in the Image control) is different from WinUI’s:
<Image x:Name="FullImage"
        Source="resm:XamlBrewer.Avalonia.BeerColorMeter.Assets.Beer.jpg" />

Avalonia UI prefers the MVVM pattern. Out of the box it offers a choice between ReactiveUI and the MVVM Toolkit (CommunityToolkit.Mvvm). We went for the latter – with which we have some experience 😉:

By default a MainViewModel is created; it derives from ViewModelBase, which is an ObservableObject.

public partial class MainViewModel : ViewModelBase
{ }

public class ViewModelBase : ObservableObject
{ }
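ObservableObject from CommunityToolkit.Mvvm is essentially INotifyPropertyChanged plus a SetProperty helper. A minimal self-contained sketch of the same pattern (not the toolkit’s actual source; the view model below is hypothetical):

```csharp
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Minimal stand-in for ObservableObject: raises PropertyChanged only
// when the backing field actually changes.
public abstract class ObservableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler? PropertyChanged;

    protected bool SetProperty<T>(ref T field, T value,
        [CallerMemberName] string? propertyName = null)
    {
        if (EqualityComparer<T>.Default.Equals(field, value)) return false;
        field = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}

// A hypothetical view model property using the helper.
public class StatusViewModel : ObservableBase
{
    private string _status = "";
    public string Status
    {
        get => _status;
        set => SetProperty(ref _status, value);
    }
}
```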

The MainViewModel provides design-time data to the designer (as shown in the UserControl snippet above). It is also set as data context to the user control:

if (ApplicationLifetime is IClassicDesktopStyleApplicationLifetime desktop)
{
    desktop.MainWindow = new MainWindow
    {
        DataContext = new MainViewModel()
    };
}
else if (ApplicationLifetime is ISingleViewApplicationLifetime singleViewPlatform)
{
    singleViewPlatform.MainView = new MainView
    {
        DataContext = new MainViewModel()
    };
}

In almost all cases we would be happy with this, but Beer Color Meter is, on purpose, a less trivial app. Both of its button actions need direct access to the UI thread:

  • the Pick Image button needs the TopLevel (the visual root) to open a file picker, and
  • the Calculate button needs access to the Image control, to calculate the average color of its pixels.

There’s no clean way to do these things from a ViewModel, so we wrote the code in the View. It’s bound to the Click events:

<Button Content="Pick image"
        Click="PickImage_Click" />

MVVM Toolkit’s AsyncRelayCommand seemed appropriate to hook the asynchronous call to the click event handler:

private ICommand PickFileCommand => new AsyncRelayCommand(PickFileAsync);

private void PickImage_Click(object sender, RoutedEventArgs e)
{
    PickFileCommand.Execute(null);
}

Here’s Avalonia’s way to open a file picker:

private async Task PickFileAsync()
{
    var topLevel = TopLevel.GetTopLevel(this);
    if (topLevel != null)
    {
        var files = await topLevel.StorageProvider.OpenFilePickerAsync(new FilePickerOpenOptions { AllowMultiple = false });
        if (files != null && files.Any())
        {
            var file = files.First();
            await OpenFile(file);
        }
    }
}

Via a Bitmap instance, we provide the content of the selected file as the Source of the Image control:

private async Task OpenFile(IStorageFile file)
{
    if (file != null)
    {
        await using var stream = await file.OpenReadAsync();
        FullImage.Source = new Bitmap(stream);
    }
}

Accessing image pixels

Avalonia’s own Bitmap does not expose its pixels, but we know that Avalonia uses SkiaSharp as its rendering engine. So it should not come as a surprise that we chose SkiaSharp’s SKBitmap as the main actor in our pixel color calculation algorithm:

if (FullImage.Source is not Bitmap bitmap)
{
    return;
}

// Round-trip the Avalonia Bitmap through a MemoryStream into an SKBitmap
using var stream = new MemoryStream();
bitmap.Save(stream);
var skb = SKBitmap.Decode(stream.ToArray());

// Calculate average color
byte[] sourcePixels = skb.Bytes;
var nbrOfPixels = sourcePixels.Length / 4;
int color1 = 0, color2 = 0, color3 = 0;
for (int i = 0; i < sourcePixels.Length; i += 4)
{
    color1 += sourcePixels[i];
    color2 += sourcePixels[i + 1];
    color3 += sourcePixels[i + 2];
}
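One caveat with the loop above: the per-channel sums accumulate in int, which can overflow once an image approaches roughly 8 megapixels (255 × ~8.4 million ≈ int.MaxValue). A self-contained sketch of the same computation with long accumulators (the helper name is ours, not the app’s):

```csharp
using System;

public static class PixelMath
{
    // Average each channel of a 4-byte-per-pixel (BGRA/RGBA) buffer,
    // accumulating in long so very large images cannot overflow int.
    public static (byte C1, byte C2, byte C3) AverageChannels(byte[] pixels)
    {
        if (pixels.Length == 0 || pixels.Length % 4 != 0)
            throw new ArgumentException("Expected a non-empty 4-byte-per-pixel buffer.");

        long c1 = 0, c2 = 0, c3 = 0;
        int count = pixels.Length / 4;
        for (int i = 0; i < pixels.Length; i += 4)
        {
            c1 += pixels[i];
            c2 += pixels[i + 1];
            c3 += pixels[i + 2];
        }
        return ((byte)(c1 / count), (byte)(c2 / count), (byte)(c3 / count));
    }
}
```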

Observe that we named the color variables color1, color2, and color3 – and not red, green and blue. The reason is obvious: we need to consider the encoding of the color, to avoid surprises like this:

Here’s the code:

Color color;
if (skb.ColorType == SKColorType.Bgra8888)
{
    color = Color.FromArgb(255, (byte)(color3 / nbrOfPixels), (byte)(color2 / nbrOfPixels), (byte)(color1 / nbrOfPixels));
}
else if (skb.ColorType == SKColorType.Rgba8888)
{
    color = Color.FromArgb(255, (byte)(color1 / nbrOfPixels), (byte)(color2 / nbrOfPixels), (byte)(color3 / nbrOfPixels));
}
else
{
    throw new Exception("Unsupported color type");
}

Result.Background = new SolidColorBrush(color);
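To see why the branches matter, consider how one and the same 4-byte pixel decodes under each layout. A tiny self-contained illustration of the channel mapping implemented above (helper names are ours):

```csharp
using System;

public static class ChannelOrder
{
    // Interpret one 4-byte pixel as (R, G, B) under the two layouts
    // handled above: BGRA8888 stores blue first, RGBA8888 stores red first.
    public static (byte R, byte G, byte B) DecodeBgra(byte[] px) => (px[2], px[1], px[0]);
    public static (byte R, byte G, byte B) DecodeRgba(byte[] px) => (px[0], px[1], px[2]);
}
```

Reading a BGRA buffer as if it were RGBA swaps red and blue, which is exactly the kind of surprise the ColorType check guards against.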

Conclusion

This was our very first app with Avalonia UI, and it went very smoothly. We were able to port a not-so-trivial Windows app to the Android platform with minimal effort. Beer Color Meter has room for improvement, especially in the user interface, but we wanted to stay as close as possible to our initial WinUI project. Just as with our Uno Platform version, the Avalonia UI version has less code than the original one.

Our Avalonia UI Beer Color Meter for Windows and Android lives here on GitHub.

Enjoy!


