Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Highlights from Git 2.52


The open source Git project just released Git 2.52 with features and bug fixes from over 94 contributors, 33 of them new. We last caught up with you on the latest in Git back when 2.51 was released.

To celebrate this most recent release, here is GitHub’s look at some of the most interesting features and changes introduced since last time.

Tree-level blame information

If you’re a seasoned Git user, then you are no doubt familiar with git blame, Git’s tool for figuring out which commit most recently modified each line at a given filepath. Git’s blame functionality is great for figuring out when a bug was introduced, or why some code was written the way it was.

If you want to know which commit last modified any portion of a given filepath, that’s easy enough to do with git log -1 -- path/to/my/file, since -1 limits the output to the most recent commit that modifies that path. But what if instead you want to know which commit most recently modified every file in some directory? Answering that question may seem contrived, but it’s not. If you’ve ever looked at a repository’s file listing on GitHub, the middle column of information has a link to the commit which most recently modified that path, along with (part of) its commit message.

GitHub’s repository file listing, showing tree-level blame information.

The question remains: how do we efficiently determine which commit most recently modified each file in a given directory? You could imagine that you might enumerate each tree entry, feeding it to git log -1 and collecting the output there, like so:

$ git ls-tree -z --name-only HEAD^{tree} | xargs -0 -I{} sh -c '
    git log -1 --format="$1 %h %s" -- $1
  ' -- {}  | column -t -l3 
.cirrus.yml     1e77de10810  ci: update FreeBSD image to 14.3
.clang-format   37215410730  clang-format: exclude control macros from SpaceBeforeParens
.editorconfig   c84209a0529  editorconfig: add .bash extension
.gitattributes  d3b58320923  merge-file doc: set conflict-marker-size attribute
.github         5db9d35a28f  Merge branch 'js/ci-github-actions-update'
[...]

That works, but not efficiently. To see why, consider a case with files A, B, and C introduced by commits C1, C2, and C3, respectively. To blame A, we walk from C3 back to C1 in order to determine that C1 was the most recent commit to modify A. That traversal passed through C2 and C3, but since we were only looking for modifications to A, we’ll end up revisiting those commits when trying to blame B and C. In this example, we visit those three commits six times in total, which is twice the necessary number of history traversals.

Git 2.52 introduces a new command which comes up with the same information in a fraction of the time: git last-modified. To get a sense for how much faster last-modified is than the example above, here are some hyperfine results:

Benchmark 1: git ls-tree + log
  Time (mean ± σ):      3.962 s ±  0.011 s    [User: 2.676 s, System: 1.330 s]
  Range (min … max):    3.940 s …  3.984 s    10 runs

Benchmark 2: git last-modified
  Time (mean ± σ):     722.7 ms ±   4.6 ms    [User: 682.4 ms, System: 40.1 ms]
  Range (min … max):   717.3 ms … 731.3 ms    10 runs

Summary
  git last-modified ran
    5.48 ± 0.04 times faster than git ls-tree + log

The core functionality behind git last-modified was written by GitHub over many years (originally called blame-tree in GitHub’s fork of Git), and is what has powered our tree-level blame since 2012. Earlier this year, we shared those patches with engineers at GitLab, who tidied up years of development into a reviewable series of patches which landed in this release.

There are still some features in GitHub’s version of this command that have yet to make their way into a Git release, including an on-disk format to cache the results of previous runs. In the meantime, check out git last-modified, available in Git 2.52.
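
If you want to see it in action, the simplest invocation points it at a commit and prints one line per path, pairing each file with the commit that most recently touched it (illustrative output from a clone of git.git; yours will differ):

$ git last-modified HEAD
e56f6dcd7b4c90192018e848d0810f091d092913        add-patch.c
373ad8917beb99dc643b6e7f5c117a294384a57e        advice.h
[...]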

[source, source, source]

Advanced repository maintenance strategies

Returning readers of this series may recall our coverage of the git maintenance command. If this is your first time reading along, or you could use a refresher, we’ve got you covered.

git maintenance is a Git command which can perform repository housekeeping tasks either on a scheduled or ad-hoc basis. The maintenance command can perform a variety of tasks, like repacking the contents of your repository, updating commit-graphs, expiring stale reflog entries, and much more. Put together, maintenance ensures that your repository continues to operate smoothly and efficiently.

By default (or when running the gc task), git maintenance relies on git gc internally to repack your repository, and remove any unreachable objects. This has a couple of drawbacks, namely that git gc performs “all-into-one” repacks to consolidate the contents of your repository, which can be sluggish for very large repositories. As an alternative, git maintenance has an incremental-repack strategy, but this never prunes out any unreachable objects.

Git 2.52 bridges this gap by introducing a new geometric task within git maintenance that avoids all-into-one repacks when possible, and prunes unreachable objects on a less frequent basis. This new task uses tools (like geometric repacking) that were designed at GitHub and have powered GitHub’s own repository maintenance for many years. Those tools have been in Git since 2.33, but were awkward to use or discover since their implementation was buried within git repack, not git gc.

The geometric task here works by inspecting the contents of your repository to determine if we can combine some number of packfiles to form a geometric progression by object count. If it can, it performs a geometric repack, condensing the contents of your repository without pruning any objects. Alternatively, if a geometric repack would pack the entirety of your repository into a single pack, then a full git gc is performed instead, which consolidates the contents of your repository and prunes out unreachable objects.

Git 2.52 makes it a breeze to keep even your largest repositories running smoothly. Check out the new geometric strategy, or any of the many other things git maintenance can do in 2.52.
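
If you want to opt in, you can select the new behavior by setting your repository’s maintenance strategy, a single configuration change (shown here with the newer git config set syntax; the classic git config form works just as well):

$ git config set maintenance.strategy geometric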

[source]


The tip of the iceberg…

Now that we’ve covered some of the larger changes in more detail, let’s take a closer look at a selection of some other new features and updates in this release.

  • This release saw a couple of new sub-commands be added to git refs, Git’s relatively new tool for providing low-level access to your repository’s references. Prior to this release, git refs was capable of migrating between reference backends (e.g., to have your repository store reference data in the reftable format), along with verifying the internal representation of those references.

    git refs now includes two new sub-commands: git refs list and git refs exists. The former is an alias for git for-each-ref and supports the same set of options. The latter works like git show-ref --exists, and can be used to quickly determine whether or not a given reference exists.

    Neither of these new sub-commands introduces new functionality, but they do consolidate a couple of common reference-related operations into a single Git command rather than many individual ones.
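
    As a quick illustration (the reference names here are just placeholders), listing a few references and checking that one exists might look like this:

    $ git refs list --count=3 refs/tags
    $ git refs exists refs/heads/main && echo "main exists"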

    [source]

  • If you’ve ever scripted around Git, you are likely familiar with Git’s rev-parse command. If not, you’d be forgiven for thinking that rev-parse is designed to just resolve the various ways to describe a commit into a full object ID. In reality, rev-parse can perform functionality totally unrelated to resolving object IDs, including shell quoting, option parsing (as a replacement for getopt), printing local GIT_ environment variables, resolving paths inside of $GIT_DIR and so much more.

    Git 2.52 introduces the first step to giving some of this functionality a new home via its new git repo command. The git repo command—currently designated as experimental—is designed to be a general-purpose tool for retrieving pieces of information about your repository. For example, you can check whether or not a repository is shallow or bare, along with what type of object and reference format it uses, like so:

    $ keys='layout.bare layout.shallow object.format references.format'
    $ git repo info $keys
    layout.bare=false
    layout.shallow=false
    object.format=sha1
    references.format=files

    The new git repo command can also print out some general statistics about your repository’s structure and contents via its git repo structure sub-command:

    $ git repo structure
    Counting objects: 497533, done.
    | Repository structure | Value  |
    | -------------------- | ------ |
    | * References         |        |
    |   * Count            |   2871 |
    |     * Branches       |     58 |
    |     * Tags           |   1273 |
    |     * Remotes        |   1534 |
    |     * Others         |      6 |
    |                      |        |
    | * Reachable objects  |        |
    |   * Count            | 497533 |
    |     * Commits        |  91386 |
    |     * Trees          | 208050 |
    |     * Blobs          | 197103 |
    |     * Tags           |    994 |

    [source, source, source]

  • Back in 2.28, the Git project introduced the init.defaultBranch configuration option to provide a default branch name for any repositories created with git init. Since its introduction, the default value of that configuration option was “master”, though many set init.defaultBranch to “main” instead.

    Beginning in Git 3.0, the default value for init.defaultBranch will change to “main”. That means that any repositories created in Git 3.0 or newer using git init will have their default branch named “main” without the need for any additional configuration.

    If you want to get a sneak peek of that, or any other planned change for Git 3.0, you can build Git locally with the WITH_BREAKING_CHANGES build flag to try out the new changes today.
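
    And if you simply want the new default today (or want to pin the old one), a single line of configuration has done the trick since 2.28:

    $ git config --global init.defaultBranch main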

    [source, source]

  • By default, Git uses SHA-1 to provide a content-addressable hash of any object in your repository. In Git 3.0, Git will instead use SHA-256 which offers more appealing security properties. Back in our coverage of Git 2.45, we talked about some new changes which enable writing out separate copies of new objects using both SHA-1 and SHA-256 as a transitory step towards interoperability between the two.

    In Git 2.52, the rest of that work towards interoperability begins. Though the changes that landed in this release are focused on laying the groundwork for future interoperability features, the hope is that eventually you can use a Git repository with one hash algorithm, while pushing and pulling from another repository using a different hash algorithm.

    [source]

  • Speaking of other bleeding-edge changes in Git, this release is the first to (optionally) use Rust code for some internal functionality within Git. This mode is optional and guarded behind a new WITH_RUST build flag. When built with this mode enabled, Git will use a Rust implementation for encoding and decoding variable-width integers.

    Though this release only introduces a Rust variant of some minor utility functionality, it sets up the infrastructure for much more interesting parts of Git to be rewritten in Rust.

    Rust support is not yet mandatory, so Git 2.52 will continue to run just fine on platforms that don’t have a Rust compiler. However, Rust support will be required for Git 3.0, at which point many more components of Git will likely depend on Rust code.

    [source, source, source]

  • Long-time readers may recall our coverage of changed-path Bloom filters within Git from back in 2.28. If not, a changed-path Bloom filter is a probabilistic data structure that can approximate which file path(s) were modified by a commit (relative to its first parent). Since Bloom filters never have false negatives (i.e. indicating a commit did not modify some path when it in fact did), they can be used to accelerate many path-scoped traversals throughout Git (including last-modified above!).

    More recently, we covered new ways of using Bloom filters within Git, like providing multiple paths of interest at the same time (e.g., git log /my/subdir /my/other/subdir) which previously were not supported with Bloom filters. At that time, we wrote that there were ongoing discussions about supporting Bloom filters in even more of Git’s expressive pathspec syntax.

    This release delivers the result of those discussions, and now supports the performance benefits of using Bloom filters in even more scenarios. One example here is when a pathspec contains wildcards in some, but not all of its components, like foo/bar/*/baz, where Git will now use its Bloom filter for the non-wildcard components of the path. To read about even more scenarios that can now leverage Bloom filters, check out the link below.
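
    As an illustration, a traversal limited by the hypothetical pathspec from above can now consult the changed-path Bloom filters for the non-wildcard components rather than computing a tree diff for every commit it visits:

    $ git log --oneline -- 'foo/bar/*/baz'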

    [source]

  • This release also saw a number of performance improvements across many areas of the project. git describe learned how to use a priority queue to speed up performance by 30%. git remote picked up a couple of new tricks to optimize renaming references with its rename sub-command. git ls-files can keep the index sparse in cases where it couldn’t before. git log -L became significantly faster by avoiding some unnecessary tree-level diffs when processing merge commits. Finally, xdiff (the library that powers Git’s file-level diff and merge engine) benefitted from a pair of optimizations (here, and here) in this release, and even more optimizations that will likely land in a future release.

    [source, source, source, source]

  • Last but not least, some updates to Git’s sparse-checkout feature, which learned a new “clean” sub-command. git sparse-checkout clean can help you recover from tricky cases where some files are left outside of your sparse-checkout definition when changing which part(s) of the repository you have checked out.

    The details of how one might get into this situation, and why recovering from it with pre-2.52 tools alone was so difficult, are surprisingly technical. If you’re interested in all of the gory details, this commit has all of the information about this change.

    In the meantime, if you use sparse-checkout and have ever had difficulty cleaning up when switching your sparse-checkout definition, give git sparse-checkout clean a whirl with Git 2.52.
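
    A minimal sketch of that recovery flow (cone mode, with a hypothetical directory name) might look like:

    $ git sparse-checkout set some/other/dir
    $ git sparse-checkout clean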

    [source]

…the rest of the iceberg

That’s just a sample of changes from the latest release. For more, check out the release notes for 2.52, or any previous version in the Git repository.

The post Highlights from Git 2.52 appeared first on The GitHub Blog.


Introducing Microsoft 365 Copilot Connectors: What You Need to Know


Key Takeaways:

  • Copilot connectors let Microsoft 365 understand and search external data.
  • Semantic indexing adds meaning so Copilot answers natural-language questions intelligently.
  • Semantic labels make topic-based searches smarter.
  • Admins stay in control as permissions follow Entra rules.

Microsoft 365 Copilot Connectors provide a way of making data outside of Microsoft 365 available to Copilot.

Although Microsoft 365 Copilot is probably best known for its ability to interact with Microsoft 365 data, businesses typically also have data stored in other data sources.

What if Copilot could read your Google Drive or Salesforce records just as easily as your SharePoint files?

Microsoft 365 Copilot was originally designed to act as a tool for interacting with documents and data stored within the Microsoft ecosystem. Realistically, however, nearly every business has data that lives in other locations. For example, an organization might have data residing within the Google platform or within a SaaS application such as Salesforce.

This is where Microsoft 365 Copilot connectors come into play. Copilot connectors make it so that Copilot can interact with data that is stored outside of the Microsoft ecosystem.

If your data lives everywhere, why shouldn’t your AI assistant?

The main benefit to using Copilot Connectors is that connectors can bring all of an organization’s data together in one place, making it easier for users to interact with that data.

It’s not magic – it’s plumbing. But very smart plumbing.

When an organization uses a Copilot connector to link to an external data source, the data within that external data source is ingested into Microsoft Graph, where it resides alongside native Microsoft data, such as the data associated with applications such as SharePoint or Outlook.

When a user attempts to locate data using Microsoft Search, the search is able to return results from both Microsoft and external data sources. As an example, the search results might include data from sources such as the Microsoft 365 applications (Outlook, SharePoint, etc.) as well as business data from external sources such as ServiceNow, MediaWiki, Salesforce, or Jira.

Better still, because Copilot is designed around Microsoft Graph, the user’s Copilot prompts will automatically take into account data that is stored natively in the Microsoft cloud, as well as data that has been pulled in from other sources and is now referenced within the Microsoft Graph. To put it another way, users no longer need to search every app or data source separately; they receive aggregated search results within Microsoft Search.

Microsoft further enhances the search experience by accompanying Copilot responses with a list of references. These references display the data source and a document preview.

Without Copilot connectors

  • Search each app individually
  • Switch tools constantly
  • Miss insights due to data fragmentation

With Copilot connectors

  • One unified search
  • AI understands the meaning, not just keywords
  • External + Microsoft data appear together

Semantic indexing – Copilot doesn’t just read your data, it interprets it

In order for Microsoft Copilot to work properly with external data sources, it needs to do more than simply ingest the data into Microsoft Graph. Copilot must actually understand the data. Otherwise, it will be unable to properly answer users’ natural-language queries pertaining to that data.

Semantic indexing helps Copilot to understand what you mean, not just the words that you type. To understand why this is important, imagine for a moment that you are looking for a recipe online using your favorite search engine. If you enter a generic phrase such as “blackberry dessert”, you are probably going to get results such as blackberry cobbler, blackberry pie, and blackberry ice cream. It’s likely a safe bet that none of those web pages included the phrase “blackberry dessert” and yet the search engine found the recipes anyway because it understood that a pie is a dessert.

This layer of understanding is exactly what semantic indexing provides to Microsoft Copilot. Semantic indexing helps Copilot to find what you are looking for, even when your query is a little bit vague. This level of understanding goes a long way toward making Microsoft Search more useful and delivering better overall satisfaction with the search experience for your end users.

At a minimum, Copilot connectors index each document’s title and content. However, some connectors index additional pieces of information.

Semantic labels – When keywords fail, labels guide Copilot to the right shelf.

Semantic labels can also help Copilot to deliver better results for natural language queries. It is worth noting that semantic labels exist separately from semantic indexing and do not affect the indexing process.

A semantic label is essentially a tag that can help make it easier to locate certain information. This tagging process is somewhat similar to putting a sticky note on a book as a way of helping to remember where you found information that you want to come back to later on.

Microsoft 365 Copilot connectors high-level overview (Image Credit: Microsoft)

Semantic labels are useful for delivering better results from topic and keyword searches or for searches in which the system needs to understand the contextual relationship between various data sources.

While semantic labels can be useful, there are some types of searches that simply do not benefit from their use. For example, if you are performing a multi-parameter query or a query that is based on something other than topics or keywords, then semantic labels won’t help you.

If, for example, you were to search for all of the projects assigned to a particular user, then semantic labels typically will not be used, because semantic labels focus on content and keywords, not on metadata.

What types of Copilot connectors are available?

In order for an organization to access business data stored outside of the Microsoft ecosystem using Copilot, the organization will need a connector that is specifically designed to work with the external service. Fortunately, Microsoft provides over 100 prebuilt connectors that are available within the Copilot Connectors gallery.

A few of the available prebuilt connectors include:

| Category         | Example Connectors |
| ---------------- | ------------------ |
| Productivity     | Google Drive, Box  |
| CRM              | Salesforce         |
| ITSM             | ServiceNow         |
| Microsoft Native | Azure, SharePoint  |

Some of the prebuilt connectors available for Microsoft 365 Copilot

Not surprisingly, there are also prebuilt connectors that can link to native Microsoft content sources, such as Microsoft Azure.

Custom Copilot connectors

Although the Microsoft Copilot Connectors gallery contains numerous connectors that are ready to use, there is not a readily available connector for every conceivable data source. Fortunately, Microsoft provides an Application Programming Interface (API) and toolkit that you can use to build your own custom Copilot connectors, linking Copilot to various external items. Some organizations have gone as far as to make the creation of Copilot connectors part of their DevOps workflows.

If you aren’t a developer, you may be able to link Copilot to external content sources without having to know how to use an API or a Software Development Kit (SDK). Many software vendors and independent experts have created their own custom Copilot connectors and made them available for download on GitHub.

Data access permissions – If you can’t access it, neither can Copilot. It’s as simple as that.

Microsoft 365 Copilot is designed to respect the access control permissions that have been put in place. That way, a user cannot use Copilot to gain access to information that they would not ordinarily have access to. To set this up, admins register an external connection to an app or a data source, and then use the Admin Center to grant permissions through Microsoft Entra.

Because Microsoft Entra handles the user authentication process, it understands which users are signed in and which source systems and resources those users have access to. The end result is that Copilot respects all of the access controls that have been put in place, meaning that users will never be exposed to data that they can’t already access.

The post Introducing Microsoft 365 Copilot Connectors: What You Need to Know appeared first on Petri IT Knowledgebase.


The “Most Hated” CSS Feature: asin(), acos(), atan() and atan2()


This is a series! It all started a couple of articles ago, when we found out that, according to the State of CSS 2025 survey, trigonometric functions were the “Most Hated” CSS feature.

Most Hated Feature: trigonometric functions

I’ve been trying to change that perspective, so I showcased several uses for trigonometric functions in CSS: one for sin() and cos() and another on tan(). However, that’s only half of what trigonometric functions can do. So today, we’ll poke at the inverse world of trigonometric functions: asin(), acos(), atan() and atan2().

CSS Trigonometric Functions: The “Most Hated” CSS Feature

  1. sin() and cos()
  2. tan()
  3. asin(), acos(), atan() and atan2() (You are here!)

Inverse functions?

Recapping things a bit, given an angle, the sin(), cos() and tan() functions return a ratio representing the sine, cosine, and tangent of that angle, respectively. And if you read the last two parts of the series, then you already know what each of those quantities represents.

What if we wanted to go the other way around? If we have a ratio that represents the sine, cosine or tangent of an angle, how can we get the original angle? This is where inverse trigonometric functions come in! Each inverse function asks what the necessary angle is to get a given value for a specific trigonometric function; in other words, it undoes the original trigonometric function. So…

  • acos() is the inverse of cos(),
  • asin() is the inverse of sin(), and
  • atan() and atan2() are the inverse of tan().

They are also called “arcus” functions and written as arccos(), arcsin() and arctan() in most places. This is because, in a circle, each angle corresponds to an arc in the circumference.

The length of this arc is the angle times the circle’s radius. Since trigonometric functions live in a unit circle, where the radius is equal to 1, the arc length is also the angle, expressed in radians.

Their mathy definitions are a little boring, to say the least, but they are straightforward:

  • y = acos(x) such that x = cos(y)
  • y = asin(x) such that x = sin(y)
  • y = atan(x) such that x = tan(y)

acos() and asin()

Using acos() and asin(), we can undo cos(θ) and sin(θ) to get the starting angle, θ. However, if we try to graph them, we’ll notice something odd:

acos() and asin() graphed. The inverse sine curve crosses the x-axis at -1 and 1. The inverse cosine curve also crosses at -1 and 1.

The functions are only defined from -1 to 1!

Remember, cos() and sin() can take any angle, but they will always return a number between -1 and 1. For example, both cos(90°) and cos(270°) (not to mention others) return 0, so which value should acos(0) return? To answer this, both acos() and asin() have their domain (their input) and range (their output) restricted:

  • acos() can only take numbers between -1 and 1 and return angles between 0° and 180°.
  • asin() can only take numbers between -1 and 1 and return angles between -90° and 90°.
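
For example, feeding plain numbers into them returns the matching angles (a tiny illustrative rule; the custom property names are made up):

.demo {
  --thirty: asin(0.5); /* 30deg */
  --sixty: acos(0.5);  /* 60deg */
}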

This limits a lot of the situations where we can use acos() and asin(), since something like asin(1.2) doesn’t work in CSS — according to the spec, going outside the domain of acos() and asin() returns NaN — which leads us to our next inverse functions…

atan() and atan2()

Similarly, using atan(), we can undo tan(θ) to get θ. But, unlike asin() and acos(), if we graph it, we’ll notice a big difference:

atan() graphed. The curve's midpoint is positioned at 0 and 0, and the curve extends infinitely in the X direction.

This time it is defined on the whole number line! This makes sense since tan() can return any number between -Infinity and Infinity, so atan() is defined in that domain.

atan() can take any number between -Infinity and Infinity and returns angles between -90° and 90°.

This makes atan() incredibly useful to find angles in all kinds of situations, and a lot more versatile than acos() and asin(). That’s why we’ll be using it, along with atan2(), going forward. Don’t worry about atan2() for now, though; we’ll get to it later.

Finding the perfect angle

In the last article, we worked a lot with triangles. Specifically, we used the tan() function to find one of the sides of a right-angled triangle from the following relationships:

tan(θ) = opposite / adjacent

To make it work, we needed to know one of its sides and the angle, and by solving the equation, we would get the other side. However, in most cases, we do know the lengths of the triangle’s sides and what we are actually looking for is the angle. In that case, the last equation becomes:

θ = atan(opposite / adjacent)

Triangles and Conic Gradients

Finding the angle comes in handy in lots of cases, like in gradients, for instance. In a linear gradient, for example, if we want it to go from corner to corner, we’ll have to match the gradient’s angle depending on the element’s dimensions. Otherwise, with a fixed angle, the gradient won’t change if the element gets resized:

.gradient {
  background: repeating-linear-gradient(ghostwhite 0px 25px, darkslategray 25px 50px);
}

This may be the desired look, but I think that, more often than not, you want it to match the element’s dimensions.

Using linear-gradient(), we can easily solve this using to top right or to bottom left values for the angle, which automatically sets the angle so the gradient goes from corner to corner.

.gradient {
  background: repeating-linear-gradient(to top right, ghostwhite 0px 25px, darkslategray 25px 50px);
}

However, we don’t have that type of syntax for other gradients, like a conic-gradient(). For example, the next conic gradient has a fixed angle and won’t change upon resizing the element.

.gradient {
  background: conic-gradient(from 45deg, #84a59d 180deg, #f28482 180deg);
}

Luckily, we can fix this using atan()! We can look at the gradient as a right-angled triangle, where the width is the adjacent side and the height the opposite side:

A square bisected diagonally from the bottom-left corner to the top-right corner, creating two right triangles. The theta angle is labeled in the bottom-left corner and the width is labeled along the bottom edge.

Then, we can get the angle using this formula:

.gradient {
  --angle: atan(var(--height-gradient) / var(--width-gradient));
}

Since conic-gradient() starts from the top edge — conic-gradient(from 0deg) — we’ll have to shift it by 90deg to make it work.

.gradient {
  --rotation: calc(90deg - var(--angle));
  background: conic-gradient(from var(--rotation), #84a59d 180deg, #f28482 180deg);
}

You may be wondering: can’t we do that with a linear gradient? And the answer is, yes! But this was just an example to showcase atan(). Let’s move on to more interesting stuff that’s unique to conic gradients.

I got the next example from Ana Tudor’s post on “Variable Aspect Ratio Card With Conic Gradients”:

Pretty cool, right? Sadly, Ana’s post is from 2021, a time when trigonometric functions were specced out but not implemented. As she mentions in her article, it wasn’t possible to create these gradients using atan(). Luckily, we live in the future! Let’s see how simple they become with trigonometry and CSS.

We’ll use two conic gradients, each of them covering half of the card’s background.

A square bisected in the middle with a diagonal line going from the top-right corner to the bottom-left, creating two right triangles, each with a different conic gradient applied to it.

To save time, I’ll gloss over exactly how to make the original gradient, so here is a quick little step-by-step guide on how to make one of those gradients in a square-shaped element:

Since we’re working with a perfect square, we can fix the --angle and --rotation to be 45deg, but for a general use case, each of the conic-gradients would look like this in CSS:

.gradient {
  background: 
    /* one below */
    conic-gradient(
      from var(--rotation) at bottom left,
      #b9eee1 calc(var(--angle) * 1 / 3),
      #79d3be calc(var(--angle) * 1 / 3) calc(var(--angle) * 2 / 3),
      #39b89a calc(var(--angle) * 2 / 3) calc(var(--angle) * 3 / 3),
      transparent var(--angle)
    ),
    /* one above */
    conic-gradient(
      from calc(var(--rotation) + 180deg) at top right,
      #fec9d7 calc(var(--angle) * 1 / 3),
      #ff91ad calc(var(--angle) * 1 / 3) calc(var(--angle) * 2 / 3),
      #ff5883 calc(var(--angle) * 2 / 3) calc(var(--angle) * 3 / 3),
      transparent var(--angle)
    );
}

And we can get those --angle and --rotation variables the same way we did earlier — using atan(), of course!

.gradient {
  --angle: atan(var(--height-gradient) / var(--width-gradient));
  --rotation: calc(90deg - var(--angle));
}

What about atan2()?

The last example was all about atan(), but I told you we would also look at the atan2() function. With atan(), we get the angle when we divide the opposite side by the adjacent side and pass that value as the argument. On the flip side, atan2() takes them as separate arguments:

  • atan(opposite/adjacent)
  • atan2(opposite, adjacent)

What’s the difference? To explain, let’s backtrack a bit.

We used atan() in the context of triangles, meaning that the adjacent and opposite sides were always positive. This may seem like an obvious thing since lengths are always positive, but we won’t always work with lengths.

Imagine we are in an x-y plane and pick a random point on the graph. Just by looking at its position, we can know its x and y coordinates, which can be either negative or positive. What if we wanted its angle instead? Measuring it, of course, from the positive x-axis.

Showing a coordinate located at 3, 2 on an x-y graph. A line is drawn between it and the center point located at 0, 0, and the angle of the line is labeled as theta.

Well, remember from the last article in this series that we can also define tan() as the quotient between sin() and cos():

tan(θ) = sin(θ) / cos(θ)

Also recall that when we measure the angle from the positive x-axis, then sin() returns the y-coordinate and cos() returns the x-coordinate. So, the last formula becomes:

tan(θ) = y / x

And applying atan(), we can directly get the angle!

θ = atan(y / x)

This formula has one problem, though. It should work for any point in the x-y plane, and since both x and y can be negative, we can confuse some points. Since we are dividing the y-coordinate by the x-coordinate, in the eyes of atan(), the negative y-coordinate looks the same as the negative x-coordinate. And if both coordinates are negative, it would look the same as if both were positive.

(-y) / (-x) = y / x, and likewise (-y) / x = y / (-x)

To compensate for this, we have atan2(), and since it takes the y-coordinate and x-coordinate as separate arguments, it’s smart enough to return the angle everywhere in the plane!
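
As a quick sanity check (illustrative values only), compare what the two functions return for a point in the third quadrant:

.demo {
  --naive: atan(-2 / -3); /* about 33.69deg: the signs cancel out */
  --smart: atan2(-2, -3); /* about -146.31deg: the correct quadrant */
}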

Let’s see how we can put it to practical use.

Following the mouse

Using atan2(), we can make elements react to the mouse’s position. Why would we want to do that? Meet my friend Helpy, Clippy’s uglier brother from Microsoft.

Helpy wants to always be looking at the user’s mouse, and luckily, we can help him using atan2(). I won’t go into too much detail about how Helpy is made, just know that his eyes are two pseudo-elements:

.helpy::before,
.helpy::after {
  /* eye styling */
}

To help Helpy, we first need to let CSS know the mouse’s current x-y coordinates. And while I may not like using JavaScript here, it’s needed in order to pass the mouse coordinates to CSS as two custom properties that we’ll call --m-x and --m-y.

const body = document.querySelector("body");

// listen for the mouse pointer
body.addEventListener("pointermove", (event) => {
  // set variables for the pointer's current coordinates
  let x = event.clientX;
  let y = event.clientY;

  // assign those coordinates to CSS custom properties in pixel units
  body.style.setProperty("--m-x", `${Math.round(x)}px`);
  body.style.setProperty("--m-y", `${Math.round(y)}px`);
});

Helpy is currently looking away from the content, so we’ll first move his eyes so they align with the positive x-axis, i.e., to the right.

.helpy::before,
.helpy::after {
  rotate: 135deg;
}

Once there, we can use atan2() to find the exact angle Helpy has to turn so he sees the user’s mouse. Since Helpy is positioned at the top-left corner of the page, and the x and y coordinates are measured from there, it’s time to plug those coordinates into our function: atan2(var(--m-y), var(--m-x)).

.helpy::before,
.helpy::after {
  /* rotate the eyes by their starting position, plus the atan2() of the coordinates */
  rotate: calc(135deg + atan2(var(--m-y), var(--m-x)));
}

We can make one last improvement. You’ll notice that if the mouse moves into the little gap behind Helpy, he is unable to look at the pointer. This happens because we are measuring the coordinates exactly from the top-left corner, and Helpy is positioned a little bit away from that.

To fix this, we can translate the origin of the coordinate system directly on Helpy by subtracting the padding and half its size:

Showing the range of Helpy's eyesight, going from left-to-right to top-to-bottom. A diagonal line bisects that range revealing an angle that is labeled theta.

Which looks like this in CSS:

.helpy::before,
.helpy::after  {
  rotate: calc(
    135deg +
      atan2(
        var(--m-y) - var(--spacing) - var(--helpy-size) / 2,
        var(--m-x) - var(--spacing) - var(--helpy-size) / 2
      )
  );
}

This is a somewhat minor improvement, but moving the coordinate origin will be vital if we want to place Helpy in any other place on the screen.

Extra: Getting the viewport (and anything) in numbers

I can’t finish this series without mentioning a trick to typecast different units into simple numbers using atan2() and tan(). It isn’t directly related to trigonometry but it’s still super useful. It was first described amazingly by Jane Ori in 2023, and goes as follows.

If we want to get the viewport as an integer, then we can…

@property --100vw {
  syntax: "<length>";
  initial-value: 0px;
  inherits: false;
}

:root {
  --100vw: 100vw;
  --int-width: calc(10000 * tan(atan2(var(--100vw), 10000px)));
}

And now: the --int-width variable holds the viewport width as an integer. This looks like magic, so I really recommend reading Jane Ori’s post to understand it. I also have an article using it to create animations as the viewport is resized!

What about reciprocals?

I noticed that we are still lacking the reciprocals for each trigonometric function. The reciprocals are merely 1 divided by the function, so there’s a total of three of them:

  • The secant, or sec(x), is the reciprocal of cos(x), so it’s 1 / cos(x).
  • The cosecant, or csc(x), is the reciprocal of sin(x), so it’s 1 / sin(x).
  • The cotangent, or cot(x) is the reciprocal of tan(x), so it’s 1 / tan(x).
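
CSS doesn’t ship sec(), csc(), or cot() as functions, but since they are just reciprocals, each one is a short calc() away (a small illustrative rule):

.angles {
  --sec: calc(1 / cos(60deg)); /* 2 */
  --csc: calc(1 / sin(30deg)); /* 2 */
  --cot: calc(1 / tan(45deg)); /* 1 */
}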

The beauty of sin(), cos() and tan() and their reciprocals is that they all live in the unit circle we’ve looked at in other articles in this series. I decided to put everything together in the following demo that shows all of the trigonometric functions covered on the same unit circle:

That’s it!

Welp, that’s it! I hope you learned and had fun with this series just as much as I enjoyed writing it. And thanks so much for those of you who have shared your own demos. I’ll be rounding them up in my Bluesky page.

CSS Trigonometric Functions: The “Most Hated” CSS Feature

  1. sin() and cos()
  2. tan()
  3. asin(), acos(), atan() and atan2() (You are here!)

The “Most Hated” CSS Feature: asin(), acos(), atan() and atan2() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


AWS Weekly Roundup: AWS Lambda, load balancers, Amazon DCV, Amazon Linux 2023, and more (November 17, 2025)


In the weeks before AWS re:Invent, my team is full steam ahead preparing content for the conference. I can’t wait to meet you at one of my three talks: CMP346: Supercharge AI/ML on Apple Silicon with EC2 Mac, CMP344: Speed up Apple application builds with CI/CD on EC2 Mac, and DEV416: Develop your AI Agents and MCP Tools in Swift.

Last week, AWS announced three new AWS Heroes. The AWS Heroes program recognizes a vibrant, worldwide group of AWS experts whose enthusiasm for knowledge-sharing has a real impact within the community. Welcome to the community, Dimple, Rola, and Vivek.

We also opened the GenAI Loft in Tel Aviv, Israel. AWS Gen AI Lofts are collaborative spaces and immersive experiences for startups and developers. The Loft content is tailored to address local customer needs – from startups and enterprises to public sector organizations, bringing together developers, investors, and industry experts under one roof.

GenAI Loft - TLV

The loft is open in Tel Aviv until Wednesday, November 19. If you’re in the area, check the list of sessions, workshops, and hackathons today.

If you are a serverless developer, last week was really rich with news. Let’s start with these.

Last week’s launches
Here are the launches that got my attention this week:

Additional updates
Here are some additional projects, blog posts, and news items that I found interesting:

  • Amazon Elastic Kubernetes Service gets independent affirmation of its zero operator access design – Amazon EKS offers a zero operator access posture. AWS personnel cannot access your content. This is achieved through a combination of AWS Nitro System-based instances, restricted administrative APIs, and end-to-end encryption. An independent review by NCC Group confirmed the effectiveness of these security measures.
  • Make your web apps hands-free with Amazon Nova Sonic – Amazon Nova Sonic, a foundation model from Amazon Bedrock, provides you with the ability to create natural, low-latency, bidirectional speech conversations for applications. This gives users the ability to collaborate with applications through voice and embedded intelligence, unlocking new interaction patterns and enhancing usability. This blog post demonstrates a reference app, Smart Todo App, which shows how voice can be integrated to provide a hands-free experience for task management.
  • AWS X-Ray SDKs & Daemon migration to OpenTelemetry – AWS X-Ray is transitioning to OpenTelemetry as its primary instrumentation standard for application tracing. OpenTelemetry-based instrumentation solutions are recommended for producing traces from applications and sending them to AWS X-Ray. X-Ray’s existing console experience and functionality continue to be fully supported and remains unchanged by this transition.
  • Powering the world’s largest events: How Amazon CloudFront delivers at scale – Amazon CloudFront achieved a record-breaking peak of 268 terabits per second on November 1, 2025, during major game delivery workloads—enough bandwidth to simultaneously stream live sports in HD to approximately 45 million concurrent viewers. This milestone demonstrates CloudFront’s massive scale, powered by 750+ edge locations across 440+ cities globally and 1,140+ embedded PoPs within 100+ ISPs, with the latest generation delivering 3x the performance of previous versions.

Upcoming AWS events
Check your calendars so that you can sign up for these upcoming events:

Join the AWS Builder Center to learn, build, and connect with builders in the AWS community. Browse here for upcoming in-person events, developer-focused events, and events for startups.

That’s all for this week. Check back next Monday for another Weekly Roundup!

— seb

This post is part of our Weekly Roundup series. Check back each week for a quick roundup of interesting news and announcements from AWS!


What’s new in Git 2.52.0?


The Git project recently released Git 2.52. After a relatively short 8-week release cycle for 2.51, due to summer in the Northern Hemisphere, this release is back to the usual 12-week cycle. Let’s look at some notable changes, including contributions from the GitLab Git team and the wider Git community.

New git-last-modified(1) command

Many Git forges like GitLab display files in a tree view like this:

| Name        | Last commit                                              | Last update  |
| ----------- | -------------------------------------------------------- | ------------ |
| README.md   | README: *.txt -> *.adoc fixes                             | 4 months ago |
| RelNotes    | Start 2.51 cycle, the first batch                         | 4 weeks ago  |
| SECURITY.md | SECURITY: describe how to report vulnerabilities          | 4 years      |
| abspath.c   | abspath: move related functions to abspath                | 2 years      |
| abspath.h   | abspath: move related functions to abspath                | 2 years      |
| aclocal.m4  | configure: use AC_LANG_PROGRAM consistently               | 15 years ago |
| add-patch.c | pager: stop using the_repository                          | 7 months ago |
| advice.c    | advice: allow disabling default branch name advice        | 4 months ago |
| advice.h    | advice: allow disabling default branch name advice        | 4 months ago |
| alias.h     | rebase -m: fix serialization of strategy options          | 2 years      |
| alloc.h     | git-compat-util: move alloc macros to git-compat-util.h   | 2 years ago  |
| apply.c     | apply: only write intents to add for new files            | 8 days ago   |
| archive.c   | Merge branch 'ps/parse-options-integers'                  | 3 months ago |
| archive.h   | archive.h: remove unnecessary include                     | 1 year       |
| attr.h      | fuzz: port fuzz-parse-attr-line from OSS-Fuzz             | 9 months ago |
| banned.h    | banned.h: mark strtok() and strtok_r() as banned          | 2 years      |


Next to the files themselves, we also display which commit last modified each respective file. This information is easy to extract from Git by executing the following command:


$ git log --max-count=1 HEAD -- <filename>

While nice and simple, this has a significant catch: Git does not have a way to extract this information for each of these files in a single command. So to get the last commit for all the files in the tree, we'd need to run this command for each file separately. This results in a command pipeline similar to the following:


$ git ls-tree HEAD --name-only | xargs --max-args=1 git log --max-count=1 HEAD --

Naturally, this isn't very efficient:

  • We need to spin up a fresh Git command for each file.

  • Git has to step through history for each file separately.

As a consequence, this whole operation is quite costly and generates significant load for GitLab.

To overcome these issues, a new Git subcommand git-last-modified(1) has been introduced. This command returns the commit for each file of a given commit:


$ git last-modified HEAD


e56f6dcd7b4c90192018e848d0810f091d092913        add-patch.c
373ad8917beb99dc643b6e7f5c117a294384a57e        advice.h
e9330ae4b820147c98e723399e9438c8bee60a80        advice.c
5e2feb5ca692c5c4d39b11e1ffa056911dd7dfd3        alloc.h
954d33a9757fcfab723a824116902f1eb16e05f7        RelNotes
4ce0caa7cc27d50ee1bedf1dff03f13be4c54c1f        apply.c
5d215a7b3eb0a9a69c0cb9aa43dcae956a0aa03e        archive.c
c50fbb2dd225e7e82abba4380423ae105089f4d7        README.md
72686d4e5e9a7236b9716368d86fae5bf1ae6156        attr.h
c2c4138c07ca4d5ffc41ace0bfda0f189d3e262e        archive.h
5d1344b4973c8ea4904005f3bb51a47334ebb370        abspath.c
5d1344b4973c8ea4904005f3bb51a47334ebb370        abspath.h
60ff56f50372c1498718938ef504e744fe011ffb        banned.h
4960e5c7bdd399e791353bc6c551f09298746f61        alias.h
2e99b1e383d2da56c81d7ab7dd849e9dab5b7bf0        SECURITY.md
1e58dba142c673c59fbb9d10aeecf62217d4fc9c        aclocal.m4

The obvious benefit is that we now only have to execute a single Git process to derive all of that information. But even more importantly, it only requires us to walk the history once for all files together instead of having to walk it multiple times. This is achieved as follows:

  1. Start walking the history from the specified commit.

  2. For each commit:

    1. If it doesn't modify any of the paths we're interested in, we continue to the next commit.
    2. If it does, we print the commit ID together with the path. Furthermore, we remove the path from the set of interesting paths.
  3. When the list of interesting paths becomes empty, we stop.

Gitaly has already been adjusted to use the new command, but the logic is still guarded by a feature flag. Preliminary testing has shown that git-last-modified(1) is in most situations at least twice as fast compared to using git log --max-count=1.

These changes were originally written by multiple developers from GitHub and were upstreamed into Git by Toon Claes.

git-fast-export(1) and git-fast-import(1) signature-related improvements

The git-fast-export(1) and git-fast-import(1) commands are designed to be mostly used by interoperability or history rewriting tools. The goal of interoperability tools is to make Git interact nicely with other software, usually a different version control system, that stores data in a different format than Git. For example hg-fast-export.sh is a “Mercurial to Git converter using git-fast-import."

Alternatively, history-rewriting tools let users — usually admins — make changes to the history of their repositories that are not possible or not allowed by the version control system. For example, reposurgeon says in its introduction that its purpose is “to enable risky operations that version-control systems don't want to let you do, such as (a) editing past comments and metadata, (b) excising commits, (c) coalescing and splitting commits, (d) removing files and subtrees from repo history, (e) merging or grafting two or more repos, and (f) cutting a repo in two by cutting a parent-child link, preserving the branch structure of both child repos."

Within GitLab, we use git-filter-repo to let admins perform some risky operations on their Git repositories. Unfortunately, until Git 2.50 (released last June), both git-fast-export(1) and git-fast-import(1) didn't handle cryptographic commit signatures at all. So, although git-fast-export(1) had a --signed-tags=<mode> option that allows users to change how cryptographic tag signatures are handled, commit signatures were simply ignored.

Cryptographic signatures are very fragile because they are based on the exact commit or tag data that was signed. When the signed data or any of its preceding history changes, the cryptographic signature becomes invalid. This is a fragile but necessary requirement to make these signatures useful.

But in the context of rewriting history this is a problem:

  • We may want to keep cryptographic signatures for both commits and tags that are still valid after the rewrite (e.g. because the history leading up to them did not change).

  • We may want to create new cryptographic signatures for commits and tags where the previous signature has become invalid.

Neither git-fast-import(1) nor git-fast-export(1) allow for these use cases though, which limits what tools like git-filter-repo or reposurgeon can achieve.

We have made some significant progress:

  • In Git 2.50 we added a --signed-commits=<mode> option to git-fast-export(1) for exporting commit signatures, and support in git-fast-import(1) for importing them.

  • In Git 2.51 we improved the format used for exporting and importing commit signatures, and we made it possible for git-fast-import(1) to import both a signature made on the SHA-1 object ID of the commit and one made on its SHA-256 object ID.

  • In Git 2.52 we added the --signed-commits=<mode> and --signed-tags=<mode> options to git-fast-import(1), so the user has control over how to handle signed data at import time.
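
Putting those pieces together, a signature-preserving round-trip between two repositories might look roughly like the following sketch (the verbatim mode shown here mirrors the long-standing --signed-tags modes, and the ../copy.git target path is hypothetical):

$ git fast-export --signed-commits=verbatim --all |
    git -C ../copy.git fast-import --signed-commits=verbatim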

There is still more to be done. We need to add the ability to:

  • Retain only those commit signatures that are still valid to git-fast-import(1).

  • Re-sign data where the signature became invalid.

We have already started to work on these next steps and expect this to land in Git 2.53. Once done, tools like git-filter-repo(1) will finally start to handle cryptographic signatures more gracefully. We will keep you posted in our next Git release blog post.

This project was led by Christian Couder.

New and improved git-maintenance(1) strategies

Git repositories require regular maintenance to ensure that they perform well. This maintenance performs a bunch of different tasks: references get optimized, objects get compressed, and stale data gets pruned.

Until Git 2.28, these maintenance tasks were performed by git-gc(1). The problem with this command is that it wasn't built with customizability in mind: While certain parameters can be configured, it is not possible to control which parts of a repository should be optimized. This means that it may not be a good fit for all use cases. Even more importantly, it made it very hard to iterate on how exactly Git performs repository maintenance.

To fix this issue and allow us to iterate again, Derrick Stolee introduced git-maintenance(1). In contrast to git-gc(1), it is built with customizability in mind and allows the user to configure which tasks specifically should be running in a certain repository. This new tool was made the default for Git’s automated maintenance in Git 2.29, but, by default, it still uses git-gc(1) to perform the maintenance.

While this default maintenance strategy works well in small or even medium-sized repositories, it is problematic in the context of large monorepos. The biggest limiting factor is how git-gc(1) repacks objects: Whenever there are more than 50 packfiles, the tool will merge all of them together into a single packfile. This operation is quite CPU-intensive and causes a lot of I/O operations, so for large monorepos this operation can easily take many minutes or even hours to complete.

Git already knows how to minimize these repacks via “geometric repacking.” The idea is simple: The packfiles that exist in the repository must follow a geometric progression where every packfile must contain at least twice as many objects as the next smaller one. This allows Git to amortize the number of repacks required while still ensuring that there is only a relatively small number of packfiles overall. This mode was introduced by Taylor Blau in Git 2.32, but it was not wired up as part of the automated maintenance.

All the parts exist to make repository maintenance way more scalable for large monorepos: We have the flexible git-maintenance(1) tool that can be extended to have a new maintenance strategy, and we have a better way to repack objects. All that needs to be done is to combine these two.

And that's exactly what we did with Git 2.52! We have introduced a new “geometric” maintenance strategy that you can configure in your Git repositories. This strategy is intended as a full replacement for the old strategy based on git-gc(1). Here is the config code you need:


$ git config set maintenance.strategy geometric

From here on, Git will use geometric repacking to optimize your objects. This should lead to less churn while ensuring that your objects are in a better-optimized state, especially in large monorepos.

In Git 2.53, we aim to make this the default strategy. So stay tuned!

This project was led by Patrick Steinhardt.

New subcommand for git-repo(1) to display repository metrics

Performance of Git operations in a repository is often dependent on certain characteristics of its underlying structure. At GitLab, we host some extremely large repositories and having insight into the general structure of a repository is critical to understand performance. While it is possible to compose various Git commands and other tools together to surface certain repository metrics, Git lacks a means to surface info about a repository's shape/structure via a single command. This has led to the development of other external tools, such as git-sizer(1), to fill this gap.

With the release of Git 2.52, a new “structure” subcommand has been added to git-repo(1) with the aim to surface information about a repository's structure. Currently, it displays info about the number of references and objects in the repository in the following form:


$ git repo structure


| Repository structure | Value  |
| -------------------- | ------ |
| * References         |        |
|   * Count            |   1772 |
|     * Branches       |      3 |
|     * Tags           |   1025 |
|     * Remotes        |    744 |
|     * Others         |      0 |
|                      |        |
| * Reachable objects  |        |
|   * Count            | 418958 |
|     * Commits        |  87468 |
|     * Trees          | 168866 |
|     * Blobs          | 161632 |
|     * Tags           |    992 |

In subsequent releases we hope to expand on this and provide other interesting data points like the largest objects in the repository.

This project was led by Justin Tobler.

Improvements related to the Google Summer of Code 2025

We had three successful projects in this year's Google Summer of Code.

Refactoring in order to reduce Git's global state

Git contains several global variables that are used throughout the codebase. This increases the complexity of the code and reduces its maintainability. As part of this project, Ayush Chandekar worked on reducing the usage of the the_repository global variable via a series of patches.

The project was mentored by Christian Couder and Ghanshyam Thakkar.

Machine-readable repository information query tool

Git lacks a centralized way to retrieve repository information, requiring users to piece it together from various commands. While git-rev-parse(1) has become the de facto tool for accessing much of this information, doing so falls outside its primary purpose.

As part of this project, Lucas Oshiro introduced a new command, git-repo(1), which will house all repository-level information. Users can now run git repo info to query it:


$ git repo info layout.bare layout.shallow object.format references.format

layout.bare=false
layout.shallow=false
object.format=sha1
references.format=reftable
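Because the output is plain key=value lines, it is straightforward to consume from scripts. A minimal sketch based on the output format shown above:

$ git repo info object.format references.format |
    while IFS='=' read -r key value; do
      printf '%s is %s\n' "$key" "$value"
    done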

The project was mentored by Patrick Steinhardt and Karthik Nayak.

Consolidate ref-related functionality into git-refs

Git offers multiple commands for managing references, namely git-for-each-ref(1), git-show-ref(1), git-update-ref(1), and git-pack-refs(1). This makes them harder to discover and creates overlapping functionality. To address this, we introduced the git-refs(1) command to consolidate these operations under a single interface. As part of this project, Meet Soni extended the command by adding the following subcommands, illustrated with a brief sketch after the list:

  • git refs optimize to optimize the reference backend

  • git refs list to list all references

  • git refs exists to verify the existence of a reference
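For illustration, here is roughly how the new subcommands map onto the commands they consolidate; this is a sketch, and the exact options and output may differ:

$ git refs optimize                # pack and tidy the ref store, similar to git pack-refs
$ git refs list                    # enumerate references, similar to git for-each-ref
$ git refs exists refs/heads/main  # exit status reports whether the ref exists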

The project was mentored by Patrick Steinhardt and shejialuo.

What's next?

Ready to experience these improvements? Update to Git 2.52.0 and start using git last-modified.

At GitLab, we will of course ensure that all of these improvements eventually land in a GitLab instance near you!

Learn more in the official Git 2.52.0 release notes and explore our complete archive of Git development coverage.


GitLab engineer: How I improved my onboarding experience with AI


Starting a new job is exciting, and overwhelming. New teammates, new tools, and, in GitLab’s case, a lot of documentation. Six weeks ago, I joined GitLab’s Growth team as a fullstack engineer. Anyone who has gone through onboarding at GitLab knows it’s transparent, extensive, and thorough.

GitLab's onboarding process includes a lot of docs, videos, and training sessions that bring you up to speed. Also, in line with GitLab's values, my team encouraged me to start contributing right away. I quickly realized that onboarding here is both diligent and intense. Luckily, I had a secret helper: GitLab Duo.

My main use cases

I’ve found GitLab Duo's AI assistance, available throughout the software development lifecycle, useful in three key areas: exploration, reviewing, and debugging. With GitLab Duo, I was able to get my first tiny MR deployed to production in the first week and actively contribute to the personal homepage in GitLab 18.5 in the weeks after.

Exploration

Early in onboarding, I often remembered reading something but couldn’t recall where. GitLab has a public-facing handbook, an internal handbook, and GitLab Docs. It can be difficult to search across all of them efficiently.

GitLab Duo simplifies this task: I can describe what I’m looking for in natural language via GitLab Duo Chat and search across all resources at once.

Example prompt:

I remember reading about how RSpec tests are done at GitLab. Can you find relevant documentation across the Handbook, the internal handbook and the GitLab Docs?

Before starting work on an issue, I use GitLab Duo to identify edge cases and hidden dependencies. GitLab Duo relates the requirements of the issue to the whole GitLab codebase, assesses similar features, and summarizes its findings. Based on its output, I can refine the issue with my product manager and designer, make sure my implementation covers all edge cases, or define future iterations.

Example prompt:

Analyze this issue in the context of its epic and identify:

  • Implementation questions to ask PM/design before coding
  • Edge cases not covered in requirements
  • Cross-feature dependencies that might be affected
  • Missing acceptance criteria

I also check that my planned solution follows GitLab best practices and common patterns.

Example prompt:

I want to implement XYZ behavior — how is this usually done at GitLab, and what other options do I have?

Reviewing

I always let GitLab Duo review my merge requests before assigning human reviewers. It often catches small mistakes, suggests improvements, and highlights edge cases I missed. This shortens the review cycle and helps my teammates focus on more complex and bigger-picture feedback.

Since I’m still new to GitLab’s codebase and coding practices, some review comments are hard to interpret. In those cases, GitLab Duo helps me understand what a reviewer means and how it relates to my code.

Example prompt:

I don’t understand the comment on this MR about following the user instead of testing component internals, what does it mean and how does it relate to my implementation?

Debugging

Sometimes pipeline tests on my merge requests fail unexpectedly. If I can’t tell whether my changes are the cause, GitLab Duo helps me investigate and fix the failures. Using GitLab Duo Agentic Chat, Duo can apply changes to debug the failing job.

Example prompt:

The pipeline job “rspec system pg16 12/32” is failing, but I don’t know whether that relates to my changes. Can you check if my changes are causing the pipeline failure and, if so, guide me through the steps of fixing it?

How Duo aligns with GitLab’s values

Using GitLab Duo doesn’t just help me; it also supports GitLab’s CREDIT values:

  • Collaboration: I ask teammates fewer basic questions. And when I do ask questions, they’re more thoughtful and informed. This respects their time.

  • Results for customers: By identifying edge cases early and improving code quality, GitLab Duo helps me deliver better outcomes for customers.

  • Efficiency: Streamlined preparation, faster reviews, and improved debugging make me more efficient.

  • Diversity, inclusion & belonging: AI guidance might mitigate misunderstandings and lower barriers to entry for people from differing backgrounds and with different abilities.

  • Iteration: The ability to try ideas faster and identify potential improvements enables faster iteration.

  • Transparency: GitLab Duo makes the already transparent documentation at GitLab more accessible.

Staying cautious with AI

It has never been as easy, and at the same time as difficult, to be competent as in the age of AI. AI can be a powerful tool, but it does get things wrong. Therefore, I avoid automation bias by always validating its outputs. If I don’t understand the output, I don’t use it. I’m also cautious about over-reliance: studies suggest that heavy AI use can lead to cognitive offloading and worse outcomes in the long run, and one study shows that AI users perform worse in exams. To avoid negatively affecting my skills, I use AI as a discussion partner rather than just implementing the code it generates.

Summary

Onboarding is always a stressful time, but using GitLab Duo made mine smoother and less overwhelming. I learned more about GitLab’s codebase, culture, and best practices than I could have managed on my own.

Want to make GitLab Duo part of your onboarding experience? Sign up for a free trial today.

Resources
