Today, we begin the technical preview for GitHub Copilot Workspace.
Sign up now.
We can’t wait to see what you will build from here.
It’s important to understand how an application behaves and to be able to override that behavior at runtime based on certain conditions. For example, we don’t want to send a malicious prompt to the LLM, and we don’t want to expose more information than necessary to end users.
A couple of months ago, we added the ability in Semantic Kernel to handle such scenarios using Filters. This feature is an improvement over the previous implementation based on event handlers, and today we want to introduce even more improvements based on feedback we received from SK users!
Let’s start with the current version of Filters, which should help in understanding the current issues. After that, we will talk about how the new filters resolve those issues and look at some usage examples.
Here is an example of a function filter, which is executed before and after function invocation:
public class MyFilter : IFunctionFilter
{
    public void OnFunctionInvoking(FunctionInvokingContext context)
    {
        // Method which is executed before function invocation.
    }

    public void OnFunctionInvoked(FunctionInvokedContext context)
    {
        // Method which is executed after function invocation.
    }
}
First, the current IFunctionFilter interface does not support asynchronous methods. This is important, because filters should be able to perform additional asynchronous operations, like calling another kernel function, making a request to a database, or caching an LLM result.
Another limitation is that it’s not possible to handle an exception that occurred during function execution and override the result. This would be especially useful during automatic function invocation, when the LLM wants to execute a couple of functions: in case of an exception, it would be possible to handle it and override the result for the LLM with some default value.
While it’s good to have separate methods for each function invocation event (as in the IFunctionFilter interface), this approach has a disadvantage: the methods are not connected to each other, so in order to share state, it has to be stored at the class level. This is not necessarily a bad thing, but consider an example where we want to measure how long a function executes: we want to start the measurement in the OnFunctionInvoking method and stop it, sending the results to a telemetry tool, in the OnFunctionInvoked method. In this case, we are forced to store a System.Diagnostics.Stopwatch instance at the class level, which is not a common pattern.
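To make the limitation concrete, a timing filter under the old interface might have looked something like this (a sketch; the class name TimingFilterExample is illustrative, and note the Stopwatch field that has to live at the class level):

```csharp
public class TimingFilterExample : IFunctionFilter
{
    // State shared between the two methods must be stored at the class level,
    // because the methods have no way to pass data to each other.
    private readonly System.Diagnostics.Stopwatch _stopwatch = new();

    public void OnFunctionInvoking(FunctionInvokingContext context)
    {
        this._stopwatch.Restart();
    }

    public void OnFunctionInvoked(FunctionInvokedContext context)
    {
        this._stopwatch.Stop();
        // Send this._stopwatch.Elapsed to a telemetry tool here.
    }
}
```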
We are excited to announce that the new version of Filters resolves the problems described above.
Existing filters were renamed to use more specific naming. The new naming works better with a new type of filter, which we will present later in this article. The new names for existing filters are the following:
• IFunctionFilter -> IFunctionInvocationFilter
• IPromptFilter -> IPromptRenderFilter
Also, the interface for function and prompt filters was changed: instead of two separate methods, there is now only one, which makes the filters easier to implement.
Here is an example of function invocation filter:
public class MyFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        // Perform some actions before function invocation
        await next(context);
        // Perform some actions after function invocation
    }
}
The method is asynchronous, which makes it easy to call other asynchronous operations using the async/await pattern.
Together with the context, there is also a next delegate, which executes the next filter in the pipeline (in case multiple filters are registered) or the function itself. If the next delegate is not invoked, the remaining filters and the function won’t be invoked either. This provides more control, which is useful when there is a reason to avoid function execution (e.g. a malicious prompt or malicious function arguments).
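For example, a filter along these lines could short-circuit execution when the arguments look suspicious (a sketch; the GuardFilterExample name and the banned-phrase check are hypothetical placeholders, not part of Semantic Kernel):

```csharp
public class GuardFilterExample : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        // Hypothetical check: block execution if any argument contains a banned phrase.
        bool suspicious = context.Arguments.Any(
            argument => argument.Value?.ToString()?.Contains("ignore previous instructions") == true);

        if (suspicious)
        {
            // next is not invoked, so the remaining filters and the function are skipped.
            context.Result = new FunctionResult(context.Result, "Request was blocked by a filter.");
            return;
        }

        await next(context);
    }
}
```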
Another benefit of the next delegate is exception handling. With this approach, it’s possible to handle exceptions in a .NET-friendly way using a try/catch block:
public class ExceptionHandlingFilterExample(ILogger logger) : IFunctionInvocationFilter
{
    private readonly ILogger _logger = logger;

    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        try
        {
            await next(context);
        }
        catch (Exception exception)
        {
            this._logger.LogError(exception, "Something went wrong during function invocation");

            // Example: override function result value
            context.Result = new FunctionResult(context.Result, "Friendly message instead of exception");

            // Example: rethrow another type of exception if needed
            // throw new InvalidOperationException("New exception");
        }
    }
}
The same set of features is available for streaming scenarios. Here is an example of how to override a function’s streaming result using IFunctionInvocationFilter:
public class StreamingFilterExample : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        await next(context);

        // In a streaming scenario, an async enumerable is available in the context result object.
        // To override the data: get the async enumerable from the context result,
        // override the data, and set a new async enumerable in the context result:
        var enumerable = context.Result.GetValue<IAsyncEnumerable<int>>();
        context.Result = new FunctionResult(context.Result, OverrideStreamingDataAsync(enumerable!));
    }

    private async IAsyncEnumerable<int> OverrideStreamingDataAsync(IAsyncEnumerable<int> data)
    {
        await foreach (var item in data)
        {
            // Example: override streaming data
            yield return item * 2;
        }
    }
}
Prompt render filters have a similar signature:
public class PromptFilterExample : IPromptRenderFilter
{
    public async Task OnPromptRenderAsync(PromptRenderContext context, Func<PromptRenderContext, Task> next)
    {
        // Example: get function information
        var functionName = context.Function.Name;

        await next(context);

        // Example: override rendered prompt before sending it to AI
        context.RenderedPrompt = "Safe prompt";
    }
}
This filter is executed before the prompt rendering operation, and the next delegate executes the remaining prompt filters in the pipeline or the prompt rendering operation itself. After the next delegate has executed, it’s possible to observe the rendered prompt and override it, in case we want to provide even more information (e.g. for RAG scenarios) or remove sensitive information from it.
This is a new type of filter for the automatic function invocation scenario (also known as function calling). This filter is similar to IFunctionInvocationFilter, but it is executed in a different scope that has more information about the execution. This means the context model also carries more information, including the chat history, the list of all functions that will be invoked, and request and function sequence index counters.
Here is a full overview of the API that IAutoFunctionInvocationFilter provides:
public class AutoFunctionInvocationFilter(ILogger logger) : IAutoFunctionInvocationFilter
{
    private readonly ILogger _logger = logger;

    public async Task OnAutoFunctionInvocationAsync(AutoFunctionInvocationContext context, Func<AutoFunctionInvocationContext, Task> next)
    {
        // Example: get function information
        var functionName = context.Function.Name;

        // Example: get chat history
        var chatHistory = context.ChatHistory;

        // Example: get information about all functions which will be invoked
        var functionCalls = FunctionCallContent.GetFunctionCalls(context.ChatHistory.Last());

        // Example: get request sequence index
        this._logger.LogDebug("Request sequence index: {RequestSequenceIndex}", context.RequestSequenceIndex);

        // Example: get function sequence index
        this._logger.LogDebug("Function sequence index: {FunctionSequenceIndex}", context.FunctionSequenceIndex);

        // Example: get total number of functions which will be called
        this._logger.LogDebug("Total number of functions: {FunctionCount}", context.FunctionCount);

        // Calling next filter in pipeline or function itself.
        // By skipping this call, next filters and function won't be invoked,
        // and the function call loop will proceed to the next function.
        await next(context);

        // Example: get function result
        var result = context.Result;

        // Example: override function result value
        context.Result = new FunctionResult(context.Result, "Result from auto function invocation filter");

        // Example: terminate function invocation
        context.Terminate = true;
    }
}
The examples above show how to use function, prompt, and auto function invocation filters. With the new design, you get more observability and more control over function execution.
We’re always interested in hearing from you. If you have feedback, questions, or want to discuss further, feel free to reach out to us and the community on the discussion boards on GitHub! We would also love your support: if you’ve enjoyed using Semantic Kernel, give us a star on GitHub.
The post Filters in Semantic Kernel appeared first on Semantic Kernel.
Are your apps keyboard accessible? Let’s take a look: set your mouse aside and try navigating your app using only the keyboard. How did that go? Was it easy? Did it match your typical experience navigating your app?
Ensuring your app experiences are just as awesome when navigated exclusively via keyboard is essential to building an app experience that is inclusive and accessible to all.
To understand what exactly constitutes keyboard accessibility, the Web Content Accessibility Guidelines (WCAG) is a great place to start.
WCAG is a set of technical standards on web accessibility that is widely referenced and extended to various applications and platforms beyond web. It has become a global standard and legal benchmark and continues to evolve with the evolving landscape of technology.
Among the various guidelines is Guideline 2.1, one that is often overlooked, which says that developers should “Make all functionality available from a keyboard”.
This includes four success criteria:
Success Criterion 2.1.1 Keyboard
All functionality of the content is operable through a keyboard interface without requiring specific timings for individual keystrokes, except where the underlying function requires input that depends on the path of the user’s movement and not just the endpoints.
Success Criterion 2.1.2 No Keyboard Trap
If keyboard focus can be moved to a component of the page using a keyboard interface, then focus can be moved away from that component using only a keyboard interface, and, if it requires more than unmodified arrow or tab keys or other standard exit methods, the user is advised of the method for moving focus away.
Success Criterion 2.1.3 Keyboard (No Exception)
All functionality of the content is operable through a keyboard interface without requiring specific timings for individual keystrokes.
Success Criterion 2.1.4 Character Key Shortcuts
If a keyboard shortcut is implemented in content using only letter (including upper- and lower-case letters), punctuation, number, or symbol characters, then at least one of the following is true:
- Turn off A mechanism is available to turn the shortcut off;
- Remap A mechanism is available to remap the shortcut to include one or more non-printable keyboard keys (e.g., Ctrl, Alt);
- Active only on focus The keyboard shortcut for a user interface component is only active when that component has focus.
A fundamental understanding of these criteria will help you get started in developing apps that are keyboard accessible.
On top of various other considerations, .NET MAUI was designed with the intention of enabling easier development of keyboard accessible experiences. Consequently, developers familiar with Xamarin.Forms keyboard behaviors noticed some changes that were made to improve keyboard accessibility in their apps.
For all functionality to be operable through a keyboard interface, it is essential that all interactive controls are keyboard focusable (can receive keyboard focus) and keyboard navigable (can be navigated to and from using the keyboard). This also includes avoiding making invisible content keyboard accessible. Just as we should expect visible controls to be keyboard focusable and navigable, we should expect invisible/nonexistent controls to not be keyboard accessible or present whatsoever.
To avoid keyboard traps, we ensure keyboard navigation is possible into, within, and out of all relevant controls within the current view. For example, if you navigate a screen with multiple CollectionViews, .NET MAUI aligns with standard keyboard accessibility expectations, enabling you to easily navigate into, through, and out of any of the CollectionViews via standard keyboard navigation patterns.
So how exactly does .NET MAUI enable you to create keyboard accessible experiences more easily? Here are just 3 examples:
One area in which .NET MAUI intentionally accounts for keyboard accessibility is with modal pages. When a modal page appears, as with all other pages, it is important to ensure that everything on the page is accessible. With modal pages in particular, however, it is also especially important to ensure that anything on the underlying page is not keyboard accessible and is not surfacing up to the modal page.
When a modal page appears, the first keyboard focusable control on the page should receive focus. Then, all the content on the modal page should be accessible, and all the interactive controls, which should include an exit option (commonly “Save” or “Close”) from the modal page, should be keyboard focusable. Once and only once the modal page is exited, the focus should be returned to the underlying page, and the first keyboard focusable control on the underlying page should receive focus once more.
This complexity is handled by the .NET MAUI framework, so your modal pages navigate accessibly right out of the box!
In developing .NET MAUI, another thing we learned is that it is not possible to “unfocus” an entry on earlier versions of Android. Some control must always be focused. In Xamarin.Forms, “unfocusing” an entry was made “possible” by setting focus on the page layout; unfortunately, this approach created major accessibility issues. For these reasons, .NET MAUI does not allow for this inaccessible behavior by default and highly recommends using a different approach.
The motivation behind utilizing “focus” and “unfocus” in the first place is often tied to showing and hiding the soft input keyboard. Instead of manipulating focus to achieve this, manage keyboard behavior using the new SoftInputExtensions APIs!
For example:
if (entry.IsSoftInputShowing())
await entry.HideSoftInputAsync(System.Threading.CancellationToken.None);
If the SoftInputExtensions APIs or other alternative solutions do not work for your keyboard focus needs, the .NET MAUI team would love to learn more about your scenario. Please share with us so that we can better understand your development needs!
That being said, the optional HideSoftInputOnTapped property was introduced in .NET 8. Applying this property enables users to tap on a page to hide the soft input keyboard; we recommend it only be used in exceptional scenarios.
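If you do decide to use it, the property is applied on the page itself. A minimal sketch in XAML (the MyApp.MyPage class name is a placeholder):

```xml
<!-- Tapping anywhere on the page dismisses the soft input keyboard.
     Prefer the SoftInputExtensions APIs above for most scenarios. -->
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
             xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
             x:Class="MyApp.MyPage"
             HideSoftInputOnTapped="True">
    <Entry Placeholder="Type here" />
</ContentPage>
```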
As with all awesome, accessible solutions, designing with accessibility in mind means designing for all. This is especially the case for keyboard accessibility, where enabling nifty keyboard behaviors benefits all keyboard users, from those who leverage keyboard as their primary input mode, to power users who are partial to using keyboard shortcuts, also known as keyboard accelerators.
In .NET MAUI, we built out a solution for keyboard accelerators. With keyboard accelerators, all keyboard and desktop users can leverage keyboard shortcuts to activate menu item commands!
As captured in the .NET MAUI documentation, here is how you can get started attaching keyboard accelerators to a MenuFlyoutItem in XAML or C#:
<MenuFlyoutItem Text="Cut"
Clicked="OnCutMenuFlyoutItemClicked">
<MenuFlyoutItem.KeyboardAccelerators>
<KeyboardAccelerator Modifiers="Ctrl"
Key="X" />
</MenuFlyoutItem.KeyboardAccelerators>
</MenuFlyoutItem>
cutMenuFlyoutItem.KeyboardAccelerators.Add(new KeyboardAccelerator
{
Modifiers = KeyboardAcceleratorModifiers.Ctrl,
Key = "X"
});
Be sure to include the keyboard accelerators in your .NET MAUI app if you aren’t already, and apply your new knowledge from WCAG Success Criterion 2.1.4!
With .NET MAUI, you have the power to build your apps to be fully keyboard accessible and void of keyboard traps, and to do so more easily than ever before.
If you are new to The Journey to Accessible Apps, welcome! Be sure to check out my previous blog posts to learn more about building accessible apps, and how .NET MAUI makes it easy.
You can gain more context about other keyboard accessibility improvements made in .NET MAUI, by checking out the last blog post on meaningful content ordering and the decision to remove tab index.
.NET MAUI helps you to build accessible apps more easily than ever before. As always, let us know how we can make it even easier for you!
The post The Journey to Accessible Apps: Keyboard Accessibility and .NET MAUI appeared first on .NET Blog.
The open source Git project just released Git 2.45 with features and bug fixes from over 96 contributors, 38 of them new. We last caught up with you on the latest in Git back when 2.44 was released.
To celebrate this most recent release, here is GitHub’s look at some of the most interesting features and changes introduced since last time.
Git 2.45 introduces preliminary support for a new reference storage backend called “reftable,” promising faster lookups, reads, and writes for repositories with any number of references.
If you’re unfamiliar with our previous coverage of the new reftable format, don’t worry, this post will catch you up to speed (and then some!). But if you just want to play around with the new reference backend, you can initialize a new repository with --ref-format=reftable like so:
$ git init --ref-format=reftable /path/to/repo
Initialized empty Git repository in /path/to/repo/.git
$ cd /path/to/repo
$ git commit --allow-empty -m 'hello reftable!'
[main (root-commit) 2eb0810] hello reftable!
$ ls -1 .git/reftable/
0x000000000001-0x000000000002-565c6bf0.ref
tables.list
$ cat .git/reftable/tables.list
0x000000000001-0x000000000002-565c6bf0.ref
With that out of the way, let’s jump into the details. If you’re new to this series, or didn’t catch our initial coverage of the reftable feature, don’t worry, here’s a refresher. When we talk about references in Git, we’re referring to the branches and tags that make up your repository. In essence, a reference is nothing more than a name (like refs/heads/my-feature, or refs/tags/v1.0.0) and the object ID of the thing that reference points at.
Git has historically stored references in your repository in one of two ways: either “loose” as a file inside of $GIT_DIR/refs (like $GIT_DIR/refs/heads/my-feature) or “packed” as an entry inside of the file at $GIT_DIR/packed_refs.
For most repositories today, the existing reference backend works fine. For repositories with a truly gigantic number of references, however, the existing backend has some growing pains. For instance, storing a large number of references as “loose” files can lead to directories with a large number of entries (slowing down lookups within that directory) and/or inode exhaustion. Likewise, storing all references in a single packed_refs file can become expensive to maintain, as even small reference updates require a significant I/O cost to rewrite the entire packed_refs file on each update.
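You can see both storage formats with a quick experiment in a throwaway repository (git pack-refs consolidates loose refs into the packed form):

```shell
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'initial'
git branch my-feature

# The new branch starts life as a "loose" ref: one small file per reference.
cat .git/refs/heads/my-feature

# pack-refs consolidates loose refs into the single packed-refs file.
git pack-refs --all
cat .git/packed-refs
```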
That’s where the reftable format comes in. Reftable is an entirely new format for storing Git references. Instead of storing loose references, or constantly updating a large packed_refs file, reftable implements a binary format for storing references that promises to achieve:
- near constant-time lookup for any single reference, even in very large repositories,
- near constant-time verification of whether a given object ID is referred to by at least one reference,
- efficient enumeration of an entire namespace of references (like refs/tags/), and
- atomic reference updates whose cost scales with the size of the update, not the total number of references.
The reftable format is incredibly detailed (curious readers can learn more by reading the original specification), but here’s a high-level overview. A repository can have any number of reftables (stored as *.ref files), each of which is organized into variable-sized blocks. Blocks can store information about a collection of references, refer to the contents of other blocks when storing references across a collection of blocks, and more. The format is designed to both (a) take up a minimal amount of space (by storing reference names with prefix compression) and (b) support fast lookups, even when reading the .ref file(s) from a cold cache.
Most importantly, the reftable format supports multiple *.ref files, meaning that each reference update transaction can be processed individually without having to modify existing *.ref files. A separate compaction process describes how to “merge” a range of adjacent *.ref files together into a single *.ref file to maintain read performance.
The reftable format was originally designed by Shawn Pearce for use in JGit to better support the large number of references stored by Gerrit. Back in our Highlights from Git 2.35 post, we covered that an implementation of the reftable format had landed in Git. In that version, Git did not yet know how to use the new reftable code in conjunction with its existing reference backend system, meaning that you couldn’t yet create repositories that store references using reftable.
In Git 2.45, support for a reftable-powered storage backend has been integrated into Git’s generic reference backend system, meaning that you can play with reftable on your own repository by running:
$ git init --ref-format=reftable /path/to/repo
Returning readers of this series will be familiar with our ongoing coverage of the Git project’s hash function transition. If you’re new around here, or need a refresher, don’t worry!
Git identifies objects (the blobs, trees, commits, and tags that make up your repository) by a hash of their contents. Since its inception, Git has used the SHA-1 hash function to hash and identify objects in a repository.
However, the SHA-1 function has known collision attacks (e.g., Shattered, and Shambles), meaning that a sufficiently motivated attacker can generate a colliding pair of SHA-1 inputs, which have the same SHA-1 hash despite containing different contents. (Many providers, like GitHub, use a SHA-1 implementation that detects and rejects inputs that contain the telltale signs of being part of a colliding pair attack. For more details, see our post, SHA-1 collision detection on GitHub.com).
Around this time, the Git project began discussing a plan to transition from SHA-1 to a more secure hash function that was not susceptible to the same chosen-prefix attacks. The project decided on SHA-256 as the successor to Git’s use of SHA-1 and work on supporting the new hash function began in earnest. In Git 2.29 (released in October 2020), Git gained experimental support for using SHA-256 instead of SHA-1 in specially-configured repositories. That feature was declared no longer experimental in Git 2.42 (released in August 2023).
One of the goals of the hash function transition was to introduce support for repositories to interoperate between SHA-1 and SHA-256, meaning that repositories could in theory use one hash function locally, while pushing to another repository that uses a different hash function.
Git 2.45 introduces experimental preliminary support for limited interoperability between SHA-1 and SHA-256. To do this, Git 2.45 introduces a new concept called the “compatibility” object format, and allows you to refer to objects by either their given hash, or their “compatibility” hash. An object’s compatibility hash is the hash of an object as it would have been written under the compatibility hash function.
To give you a better sense of how this new feature works, here’s a short demo. To start, we’ll initialize a repository in SHA-256 mode, and declare that SHA-1 is our compatibility hash function:
$ git init --object-format=sha256 /path/to/repo
Initialized empty Git repository in /path/to/repo/.git
$ cd /path/to/repo
$ git config extensions.compatObjectFormat sha1
Then, we can create a simple commit with a single file (README) whose contents are “Hello, world!”:
$ echo 'Hello, world!' >README
$ git add README
$ git commit -m "initial commit"
[main (root-commit) 74dcba4] initial commit
Author: A U Thor <author@example.com>
1 file changed, 1 insertion(+)
create mode 100644 README
Now, we can ask Git to show us the contents of the commit object we just created with cat-file. As we’d expect, the hash of the commit object, as well as its root tree, are computed using SHA-256:
$ git rev-parse HEAD | git cat-file --batch
74dcba4f8f941a65a44fdd92f0bd6a093ad78960710ac32dbd4c032df66fe5c6 commit 202
tree ace45d916e870ce0fadbb8fc579218d01361da4159d1e2b5949f176b1f743280
author A U Thor <author@example.com> 1713990043 -0400
committer C O Mitter <committer@example.com> 1713990043 -0400
initial commit
But we can also tell git rev-parse to output object IDs using the compatibility hash function, allowing us to ask for the SHA-1 object ID of that same commit object. When we print its contents using cat-file, its root tree OID is a different value (starting with 7dd4941980 instead of ace45d916e), this time computed using SHA-1 instead of SHA-256:
$ git rev-parse --output-object-format=sha1 HEAD
2a4f4a2182686157a2dc887c46693c988c912533
$ git rev-parse --output-object-format=sha1 HEAD | git cat-file --batch
2a4f4a2182686157a2dc887c46693c988c912533 commit 178
tree 7dd49419807b37a3afd2f040891a64d69abb8df1
author A U Thor <author@example.com> 1713990043 -0400
committer C O Mitter <committer@example.com> 1713990043 -0400
initial commit
Support for this new feature is still considered experimental, and many features may not work quite as you expect them to. There is still much work ahead for full interoperability between SHA-1 and SHA-256 repositories, but this release delivers an important first step towards full interoperability support.
Git provides git rev-list to list commits or objects reachable from some set of inputs. rev-list can also come in handy when trying to diagnose repository corruption, including investigating missing objects.
In the past, you might have used something like git rev-list --missing=print to gather a list of objects which are reachable from your inputs, but missing from the local repository. But what if there are missing objects at the tips of your reachability query itself? For instance, if the tip of some branch or tag is corrupt, then you’re stuck:
$ git rev-parse HEAD | tr 'a-f1-9' '1-9a-f' >.git/refs/heads/missing
$ git rev-list --missing=print --all | grep '^?'
fatal: bad object refs/heads/missing
Here, Git won’t let you continue, since one of the inputs to the reachability query itself (refs/heads/missing, via --all) is missing. This can make debugging missing objects in the reachable parts of your history more difficult than necessary.
But with Git 2.45, you can debug missing objects even when the tips of your reachability query are themselves missing, like so:
$ git rev-list --missing=print --all | grep '^?'
?70678e7afeacdcba1242793c3d3d28916a2fd152
One of Git’s lesser-known features are “reference logs,” or “reflogs” for short. These reference logs are extremely useful when asking questions about the history of some reference, such as: “what was main pointing at two weeks ago?” or “where was I before I started this rebase?”.
Each reference has its own corresponding reflog, and you can use the git reflog command to see the reflog for the currently checked-out reference, or for an arbitrary reference by running git reflog refs/heads/some/branch.
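For example, in a repository with a branch named main and at least two commits, the @{n} syntax answers exactly that kind of question by reading from the reflog (the throwaway repository below is created just for illustration):

```shell
repo=$(mktemp -d)
git init -q -b main "$repo"
cd "$repo"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'first'
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'second'

# Show the reflog for main, then ask where main pointed one update ago:
git reflog refs/heads/main
git rev-parse 'main@{1}'
```

The final rev-parse call prints the object ID of the first commit, recovered from the reflog.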
If you want to see what branches have corresponding reflogs, you could look at the contents of .git/logs like so:
$ find .git/logs/refs/heads -type f | cut -d '/' -f 3-
But what if you’re using reftable? In that case, the reflogs are stored in a binary format, leaving tools like find out of reach.
Git 2.45 introduces a new sub-command, git reflog list, to show which references have corresponding reflogs available to them, regardless of whether or not you are using reftable.
If you’ve ever looked closely at Git’s diff output, you might have noticed the prefixes a/ and b/ used before file paths to indicate the before and after versions of each file, like so:
$ git diff HEAD^ -- GIT-VERSION-GEN
diff --git a/GIT-VERSION-GEN b/GIT-VERSION-GEN
index dabd2b5b89..c92f98b3db 100755
--- a/GIT-VERSION-GEN
+++ b/GIT-VERSION-GEN
@@ -1,7 +1,7 @@
#!/bin/sh
GVF=GIT-VERSION-FILE
-DEF_VER=v2.45.0-rc0
+DEF_VER=v2.45.0-rc1
LF='
'
In Git 2.45, you can now configure alternative prefixes by setting the diff.srcPrefix and diff.dstPrefix configuration options. This can come in handy if you want to make clear which side is which (by setting them to something like “before” and “after,” respectively). Or, if you’re viewing the output in a terminal that supports hyperlinking paths, you could change the prefix to ./ so that you can click on file paths within the diff output.
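For instance, a sketch of the “before/after” configuration (these option names require Git 2.45 or later):

```shell
# Label the two sides of a diff "before/" and "after/" instead of "a/" and "b/".
git config --global diff.srcPrefix 'before/'
git config --global diff.dstPrefix 'after/'
```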
When writing a commit message, Git will open your editor with a mostly blank file containing some instructions, like so:
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# On branch main
# Your branch is up to date with 'origin/main'.
Since 2013, Git has supported customizing the comment character to be something other than the default #. This can come in handy, for instance, if you’re trying to refer to a GitHub issue by its numeric shorthand (e.g. #12345). If you write #12345 at the beginning of a line in your commit message, Git will treat the entire line as a comment and ignore it.
In Git 2.45, Git allows not just any single ASCII character, but any arbitrary multi-byte character, or even an arbitrary string. Now you can customize your commit message template by setting core.commentString (or core.commentChar; the two are synonyms for one another) to your heart’s content.
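For example, a configuration sketch that switches the comment character to ';' so that lines beginning with '#12345' survive in commit messages (adjust the character to taste):

```shell
# Treat ';' as the comment character, so lines starting with '#' are kept.
git config --global core.commentString ';'
```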
Speaking of comments, git config learned a new option to help document your .gitconfig file. The .gitconfig file format allows comments beginning with a # character, meaning that everything following the # until the next newline is ignored.
The git config command gained a new --comment option, which allows specifying an optional comment to leave at the end of the newly configured line, like so:
$ git config --comment 'to show the merge base' merge.conflictStyle diff3
$ tail -n 2 .git/config
[merge]
conflictStyle = diff3 # to show the merge base
This can be helpful when tweaking some of Git’s more esoteric settings to try and remember why you picked a particular value.
Sometimes when you are rebasing or cherry-picking a series of commits, one or more of those commits become “empty” (i.e., because they contain a subset of changes that have already landed on your branch).
When rebasing, you can use the --empty option to specify how to handle these commits. --empty supports a few options: “drop” (ignore those commits), “keep” (keep the empty commits), or “stop” (halt the rebase and ask for your input on how to proceed).
Despite its similarity to git rebase, git cherry-pick never had an equivalent --empty option. That meant that if you were cherry-picking a long sequence of commits, some of which became empty, you’d have to type either git cherry-pick --skip (to drop the empty commit) or git commit --allow-empty (to keep the empty commit).
In Git 2.45, git cherry-pick learned the same --empty option from git rebase, meaning that you can specify the behavior once at the beginning of your cherry-pick operation, instead of having to specify it each time you encounter an empty commit.
That’s just a sample of changes from the latest release. For more, check out the release notes for 2.45, or any previous version in the Git repository.
The post Highlights from Git 2.45 appeared first on The GitHub Blog.
We’re redefining the developer environment with GitHub Copilot Workspace, where any developer can go from idea, to code, to software in natural language. Sign up here.
In the past two years, generative AI has foundationally changed the developer landscape largely as a tool embedded inside the developer environment. In 2022, we launched GitHub Copilot as an autocomplete pair programmer in the editor, boosting developer productivity by up to 55%. Copilot is now the most widely adopted AI developer tool. In 2023, we released GitHub Copilot Chat—unlocking the power of natural language in coding, debugging, and testing—allowing developers to converse with their code in real time.
After sharing an early glimpse at GitHub Universe last year, today, we are reimagining the nature of the developer experience itself with the technical preview of GitHub Copilot Workspace: the Copilot-native developer environment. Within Copilot Workspace, developers can now brainstorm, plan, build, test, and run code in natural language. This new task-centric experience leverages different Copilot-powered agents from start to finish, while giving developers full control over every step of the process.
Copilot Workspace represents a radically new way of building software with natural language. It is expressly designed to amplify developer creativity, not replace it, and to make building software faster and easier than ever before. With Copilot Workspace, we will empower more experienced developers to operate as systems thinkers, and materially lower the barrier to entry for who can build software.
Welcome to the first day of a new developer environment. Here’s how it works:
For developers, the greatest barrier to entry is almost always at the beginning. Think of how often you hit a wall in the first steps of a big project, feature request, or even bug report, simply because you don’t know how to get started. GitHub Copilot Workspace meets developers right at the origin: a GitHub Repository or a GitHub Issue. By leveraging Copilot agents as a second brain, developers will have AI assistance from the very beginning of an idea.
From there, Copilot Workspace offers a step-by-step plan to solve the issue based on its deep understanding of the codebase, issue replies, and more. It gives you everything you need to validate the plan, and test the code, in one streamlined list in natural language.
Everything that GitHub Copilot Workspace proposes—from the plan to the code—is fully editable, allowing you to iterate until you’re confident in the path ahead. You retain all of the autonomy, while Copilot Workspace lifts your cognitive strain.
And once you’re satisfied with the plan, you can run your code directly in Copilot Workspace, jump into the underlying GitHub Codespace, and tweak all code changes until you are happy with the final result. You can also instantly share a workspace with your team via a link, so they can view your work and even try out their own iterations.
All that’s left then is to file your pull request, run your GitHub Actions, security code scanning, and ask your team members for human code review. And best of all, they can leverage your Copilot Workspace to see how you got from idea to code.
And because ideas can happen anywhere, GitHub Copilot Workspace was designed to be used from any device—empowering a real-world development environment that can work on a desktop, laptop, or on the go.
This is our mark on the future of the development environment: an intuitive, Copilot-powered infrastructure that makes it easier to get started, to learn, and ultimately to execute.
Early last year, GitHub celebrated over 100 million developers on our platform—and counting. As programming in natural language lowers the barrier to entry for who can build software, we are accelerating to a near future where one billion people on GitHub will control a machine just as easily as they ride a bicycle. We’ve constructed GitHub Copilot Workspace in pursuit of this horizon, as a conduit to help extend the economic opportunity and joy of building software to every human on the planet.
At the same time, we live in a world dependent on—and in short supply of—professional developers. Around the world, developers add millions of lines of code every single day to ever more complex systems and are increasingly behind on maintaining the old ones. Just like any infrastructure in this world, we need real experts to maintain and renew the world’s code. By quantifiably reducing boilerplate work, we will empower professional developers to increasingly operate as systems thinkers. We believe the step change in productivity gains that professional developers will experience by virtue of Copilot and now Copilot Workspace will only continue to increase labor demand.
That’s the dual potential of GitHub Copilot: for the professional and hobbyist developer alike, channeling creativity into code just got a whole lot easier.
Today, we begin the technical preview for GitHub Copilot Workspace.
Sign up now.
We can’t wait to see what you will build from here.
The post GitHub Copilot Workspace: Welcome to the Copilot-native developer environment appeared first on The GitHub Blog.