
What’s new in Git 2.52.0?


The Git project recently released Git 2.52. After a relatively short 8-week release cycle for 2.51, due to summer in the Northern Hemisphere, this release is back to the usual 12-week cycle. Let’s look at some notable changes, including contributions from the GitLab Git team and the wider Git community.

New git-last-modified(1) command

Many Git forges like GitLab display files in a tree view like this:

| Name | Last commit | Last update |
| ---- | ----------- | ----------- |
| README.md | README: *.txt -> *.adoc fixes | 4 months ago |
| RelNotes | Start 2.51 cycle, the first batch | 4 weeks ago |
| SECURITY.md | SECURITY: describe how to report vulnerabilities | 4 years ago |
| abspath.c | abspath: move related functions to abspath | 2 years ago |
| abspath.h | abspath: move related functions to abspath | 2 years ago |
| aclocal.m4 | configure: use AC_LANG_PROGRAM consistently | 15 years ago |
| add-patch.c | pager: stop using the_repository | 7 months ago |
| advice.c | advice: allow disabling default branch name advice | 4 months ago |
| advice.h | advice: allow disabling default branch name advice | 4 months ago |
| alias.h | rebase -m: fix serialization of strategy options | 2 years ago |
| alloc.h | git-compat-util: move alloc macros to git-compat-util.h | 2 years ago |
| apply.c | apply: only write intents to add for new files | 8 days ago |
| archive.c | Merge branch 'ps/parse-options-integers' | 3 months ago |
| archive.h | archive.h: remove unnecessary include | 1 year ago |
| attr.h | fuzz: port fuzz-parse-attr-line from OSS-Fuzz | 9 months ago |
| banned.h | banned.h: mark strtok() and strtok_r() as banned | 2 years ago |


Next to the files themselves, we also display which commit last modified each respective file. This information is easy to extract from Git by executing the following command:


$ git log --max-count=1 HEAD -- <filename>

While nice and simple, this has a significant catch: Git does not have a way to extract this information for each of these files in a single command. So to get the last commit for all the files in the tree, we'd need to run this command for each file separately. This results in a command pipeline similar to the following:


$ git ls-tree HEAD --name-only | xargs --max-args=1 git log --max-count=1 HEAD --

Naturally, this isn't very efficient:

  • We need to spin up a fresh Git command for each file.

  • Git has to step through history for each file separately.

As a consequence, this whole operation is quite costly and generates significant load for GitLab.

To overcome these issues, a new Git subcommand, git-last-modified(1), has been introduced. For each file in a given tree, it returns the commit that last modified it:


$ git last-modified HEAD


e56f6dcd7b4c90192018e848d0810f091d092913        add-patch.c
373ad8917beb99dc643b6e7f5c117a294384a57e        advice.h
e9330ae4b820147c98e723399e9438c8bee60a80        advice.c
5e2feb5ca692c5c4d39b11e1ffa056911dd7dfd3        alloc.h
954d33a9757fcfab723a824116902f1eb16e05f7        RelNotes
4ce0caa7cc27d50ee1bedf1dff03f13be4c54c1f        apply.c
5d215a7b3eb0a9a69c0cb9aa43dcae956a0aa03e        archive.c
c50fbb2dd225e7e82abba4380423ae105089f4d7        README.md
72686d4e5e9a7236b9716368d86fae5bf1ae6156        attr.h
c2c4138c07ca4d5ffc41ace0bfda0f189d3e262e        archive.h
5d1344b4973c8ea4904005f3bb51a47334ebb370        abspath.c
5d1344b4973c8ea4904005f3bb51a47334ebb370        abspath.h
60ff56f50372c1498718938ef504e744fe011ffb        banned.h
4960e5c7bdd399e791353bc6c551f09298746f61        alias.h
2e99b1e383d2da56c81d7ab7dd849e9dab5b7bf0        SECURITY.md
1e58dba142c673c59fbb9d10aeecf62217d4fc9c        aclocal.m4

The obvious benefit is that a single Git process can now derive all of that information. But even more importantly, Git only has to walk the history once for all files together instead of once per file. This is achieved by:

  1. Start walking the history from the specified commit.

  2. For each commit:

    1. If it doesn't modify any of the paths we're interested in, we continue to the next commit.
    2. If it does, we print the commit ID together with the path and remove it from the set of interesting paths.
  3. When the set of interesting paths becomes empty, we stop.
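
In pseudocode, the traversal looks roughly like the following. This is a minimal TypeScript sketch, not Git's actual C implementation; walkHistory and changedPaths are hypothetical stand-ins for Git's revision walk and tree-diff machinery.

// Hypothetical helpers standing in for Git's revision walk and tree diffing.
interface Commit { id: string }
declare function walkHistory(start: Commit): Iterable<Commit>;
declare function changedPaths(commit: Commit): Iterable<string>;

// Walk history once, newest to oldest, attributing each interesting path
// to the first commit that touches it.
function lastModified(start: Commit, paths: Iterable<string>): Map<string, Commit> {
  const interesting = new Set(paths); // paths still waiting for a commit
  const result = new Map<string, Commit>();
  for (const commit of walkHistory(start)) {
    for (const path of changedPaths(commit)) {
      if (interesting.delete(path)) {
        result.set(path, commit); // first (newest) commit touching this path
      }
    }
    if (interesting.size === 0) break; // every path attributed: stop early
  }
  return result;
}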

Gitaly has already been adjusted to use the new command, but the logic is still guarded by a feature flag. Preliminary testing has shown that git-last-modified(1) is, in most situations, at least twice as fast as the per-file git log --max-count=1 approach.

These changes were originally written by multiple developers from GitHub and were upstreamed into Git by Toon Claes.

git-fast-export(1) and git-fast-import(1) signature-related improvements

The git-fast-export(1) and git-fast-import(1) commands are designed to be used mostly by interoperability or history-rewriting tools. The goal of interoperability tools is to make Git interact nicely with other software, usually a different version control system, that stores data in a different format than Git. For example, hg-fast-export.sh is a “Mercurial to Git converter using git-fast-import."

Alternatively, history-rewriting tools let users — usually admins — make changes to the history of their repositories that are not possible or not allowed by the version control system. For example, reposurgeon says in its introduction that its purpose is “to enable risky operations that version-control systems don't want to let you do, such as (a) editing past comments and metadata, (b) excising commits, (c) coalescing and splitting commits, (d) removing files and subtrees from repo history, (e) merging or grafting two or more repos, and (f) cutting a repo in two by cutting a parent-child link, preserving the branch structure of both child repos."

Within GitLab, we use git-filter-repo to let admins perform some risky operations on their Git repositories. Unfortunately, until Git 2.50 (released last June), neither git-fast-export(1) nor git-fast-import(1) handled cryptographic commit signatures at all. So, although git-fast-export(1) had a --signed-tags=<mode> option that lets users control how cryptographic tag signatures are handled, commit signatures were simply ignored.

Cryptographic signatures are very fragile because they are computed over the exact commit or tag data that was signed. When the signed data or any of its preceding history changes, the signature becomes invalid. This fragility is by design, and it is precisely what makes these signatures useful.

But in the context of rewriting history this is a problem:

  • We may want to keep cryptographic signatures for both commits and tags that are still valid after the rewrite (e.g. because the history leading up to them did not change).

  • We may want to create new cryptographic signatures for commits and tags where the previous signature has become invalid.

Neither git-fast-import(1) nor git-fast-export(1) allows for these use cases, though, which limits what tools like git-filter-repo or reposurgeon can achieve.

We have made some significant progress:

  • In Git 2.50 we added a --signed-commits=<mode> option to git-fast-export(1) for exporting commit signatures, and support in git-fast-import(1) for importing them.

  • In Git 2.51 we improved the format used for exporting and importing commit signatures, and we made it possible for git-fast-import(1) to import both a signature made on the SHA-1 object ID of the commit and one made on its SHA-256 object ID.

  • In Git 2.52 we added the --signed-commits=<mode> and --signed-tags=<mode> options to git-fast-import(1), so the user has control over how to handle signed data at import time.
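
For illustration, a round-trip that preserves commit signatures might look like the following. This invocation is a sketch rather than an excerpt from the release notes; it assumes verbatim is among the supported modes for --signed-commits (as it is for --signed-tags) and that ../copy.git is an existing repository to import into.

$ git fast-export --signed-commits=verbatim --all > export.fi
$ git -C ../copy.git fast-import --signed-commits=verbatim < export.fi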

There is still more to be done. We need to add the ability to:

  • Teach git-fast-import(1) to retain only those commit signatures that are still valid.

  • Re-sign data where the signature became invalid.

We have already started to work on these next steps and expect this to land in Git 2.53. Once done, tools like git-filter-repo(1) will finally start to handle cryptographic signatures more gracefully. We will keep you posted in our next Git release blog post.

This project was led by Christian Couder.

New and improved git-maintenance(1) strategies

Git repositories require regular maintenance to ensure that they perform well. This maintenance performs a bunch of different tasks: references get optimized, objects get compressed, and stale data gets pruned.

Until Git 2.28, these maintenance tasks were performed by git-gc(1). The problem with this command is that it wasn't built with customizability in mind: While certain parameters can be configured, it is not possible to control which parts of a repository should be optimized. This means that it may not be a good fit for all use cases. Even more importantly, it made it very hard to iterate on how exactly Git performs repository maintenance.

To fix this issue and allow us to iterate again, Derrick Stolee introduced git-maintenance(1). In contrast to git-gc(1), it is built with customizability in mind and allows the user to configure which tasks specifically should be running in a certain repository. This new tool was made the default for Git’s automated maintenance in Git 2.29, but, by default, it still uses git-gc(1) to perform the maintenance.

While this default maintenance strategy works well in small or even medium-sized repositories, it is problematic in the context of large monorepos. The biggest limiting factor is how git-gc(1) repacks objects: Whenever there are more than 50 packfiles, the tool will merge all of them together into a single packfile. This operation is quite CPU-intensive and causes a lot of I/O operations, so for large monorepos this operation can easily take many minutes or even hours to complete.

Git already knows how to minimize these repacks via “geometric repacking.” The idea is simple: The packfiles that exist in the repository must follow a geometric progression, where every packfile contains at least twice as many objects as the next smaller one. This allows Git to amortize the number of repacks required while still ensuring that there is only a relatively small number of packfiles overall. This mode was introduced by Taylor Blau in Git 2.32, but it was not wired up as part of the automated maintenance.
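
To make the invariant concrete, here is a toy check in TypeScript. It is a simplified model of the rule described above, not Git's actual repack code.

// With factor 2, every packfile must hold at least twice as many objects
// as the next smaller one.
function isGeometric(objectCounts: number[], factor = 2): boolean {
  const sorted = [...objectCounts].sort((a, b) => a - b);
  return sorted.every((n, i) => i === 0 || n >= factor * sorted[i - 1]);
}

console.log(isGeometric([10, 25, 60, 130]));     // true: progression holds
console.log(isGeometric([10, 15, 25, 60, 130])); // false: only the small packs
                                                 // get merged, instead of
                                                 // rewriting everything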

All the parts exist to make repository maintenance way more scalable for large monorepos: We have the flexible git-maintenance(1) tool that can be extended to have a new maintenance strategy, and we have a better way to repack objects. All that needs to be done is to combine these two.

And that's exactly what we did with Git 2.52! We have introduced a new “geometric” maintenance strategy that you can configure in your Git repositories. This strategy is intended as a full replacement for the old strategy based on git-gc(1). Here is the config code you need:


$ git config set maintenance.strategy geometric

From here on, Git will use geometric repacking to optimize your objects. This should lead to less churn while ensuring that your objects are in a better-optimized state, especially in large monorepos.

In Git 2.53, we aim to make this the default strategy. So stay tuned!

This project was led by Patrick Steinhardt.

New subcommand for git-repo(1) to display repository metrics

The performance of Git operations in a repository often depends on certain characteristics of its underlying structure. At GitLab, we host some extremely large repositories, and having insight into the general structure of a repository is critical to understanding performance. While it is possible to compose various Git commands and other tools to surface certain repository metrics, Git lacks a means to surface information about a repository's shape and structure via a single command. This has led to the development of external tools, such as git-sizer(1), to fill this gap.

With the release of Git 2.52, a new “structure” subcommand has been added to git-repo(1) with the aim to surface information about a repository's structure. Currently, it displays info about the number of references and objects in the repository in the following form:


$ git repo structure


| Repository structure | Value  |
| -------------------- | ------ |
| * References         |        |
|   * Count            |   1772 |
|     * Branches       |      3 |
|     * Tags           |   1025 |
|     * Remotes        |    744 |
|     * Others         |      0 |
|                      |        |
| * Reachable objects  |        |
|   * Count            | 418958 |
|     * Commits        |  87468 |
|     * Trees          | 168866 |
|     * Blobs          | 161632 |
|     * Tags           |    992 |

In subsequent releases we hope to expand on this and provide other interesting data points like the largest objects in the repository.

This project was led by Justin Tobler.

Improvements related to the Google Summer of Code 2025

We had three successful projects with the Google Summer of Code.

Refactoring in order to reduce Git's global state

Git contains several global variables used throughout the codebase, which increases the complexity of the code and reduces its maintainability. As part of this project, Ayush Chandekar worked on reducing the usage of the the_repository global variable via a series of patches.

The project was mentored by Christian Couder and Ghanshyam Thakkar.

Machine-readable Repository Information Query Tool

Git lacks a centralized way to retrieve repository information, requiring users to piece it together from various commands. While git-rev-parse(1) has become the de-facto tool for accessing much of this information, doing so falls outside its primary purpose.

As part of this project, Lucas Oshiro introduced a new command, git-repo(1), which will house all repository-level information. Users can now use git repo info to obtain repository information:


$ git repo info layout.bare layout.shallow object.format references.format

layout.bare=false
layout.shallow=false
object.format=sha1
references.format=reftable

The project was mentored by Patrick Steinhardt and Karthik Nayak.

Consolidate ref-related functionality into git-refs

Git offers multiple commands for managing references, namely git-for-each-ref(1), git-show-ref(1), git-update-ref(1), and git-pack-refs(1). This makes them harder to discover and creates overlapping functionality. To address this, we introduced the git-refs(1) command to consolidate these operations under a single interface. As part of this project, Meet Soni extended the command by adding the following subcommands:

  • git refs optimize to optimize the reference backend

  • git refs list to list all references

  • git refs exists to verify the existence of a reference
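
For illustration, these subcommands can be invoked as follows. The exact arguments shown here are assumptions for the sake of the example, not excerpts from the release notes.

$ git refs optimize
$ git refs list
$ git refs exists refs/heads/main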

The project was mentored by Patrick Steinhardt and shejialuo.

What's next?

Ready to experience these improvements? Update to Git 2.52.0 and start using git last-modified.

At GitLab, we will of course ensure that all of these improvements will eventually land in a GitLab instance near you!

Learn more in the official Git 2.52.0 release notes and explore our complete archive of Git development coverage.


GitLab engineer: How I improved my onboarding experience with AI


Starting a new job is exciting, and overwhelming. New teammates, new tools, and, in GitLab’s case, a lot of documentation. Six weeks ago, I joined GitLab’s Growth team as a fullstack engineer. Anyone who has gone through onboarding at GitLab knows it’s transparent, extensive, and thorough.

GitLab's onboarding process includes a lot of docs, videos, and trainings that will bring you up to speed. Also, in line with GitLab's values, my team encouraged me to start contributing right away. I quickly realized that onboarding here is both diligent and intense. Luckily, I had a secret helper: GitLab Duo.

My main use cases

I’ve found GitLab Duo's AI assistance, available throughout the software development lifecycle, useful in three key areas: exploration, reviewing, and debugging. With GitLab Duo, I was able to get my first tiny MR deployed to production in the first week and actively contribute to the personal homepage in GitLab 18.5 in the weeks after.

Exploration

Early in onboarding, I often remembered reading something but couldn’t recall where. GitLab has a public-facing handbook, an internal handbook, and GitLab Docs. It can be difficult to search across all of them efficiently.

GitLab Duo simplifies this task: I can describe what I’m looking for in natural language via GitLab Duo Chat and search across all resources at once.

Example prompt:

I remember reading about how RSpec tests are done at GitLab. Can you find relevant documentation across the Handbook, the internal handbook and the GitLab Docs?

Before starting work on an issue, I use GitLab Duo to identify edge cases and hidden dependencies. GitLab Duo relates the requirements of the issue to the whole GitLab codebase, assesses similar features, and prepares all the findings. Based on its output, I can refine the issue with my product manager and designer, make sure my implementation covers all edge cases, and define future iterations.

Example prompt:

Analyze this issue in the context of its epic and identify:

  • Implementation questions to ask PM/design before coding
  • Edge cases not covered in requirements
  • Cross-feature dependencies that might be affected
  • Missing acceptance criteria

I also check that my planned solution follows GitLab best practices and common patterns.

Example prompt:

I want to implement XYZ behavior — how is this usually done at GitLab, and what other options do I have?

Reviewing

I always let GitLab Duo review my merge requests before assigning human reviewers. It often catches small mistakes, suggests improvements, and highlights edge cases I missed. This shortens the review cycle and helps my teammates focus on more complex and bigger-picture feedback.

Since I’m still new to GitLab’s codebase and coding practices, some review comments are hard to interpret. In those cases, GitLab Duo helps me understand what a reviewer means and how it relates to my code.

Example prompt:

I don’t understand the comment on this MR about following the user instead of testing component internals, what does it mean and how does it relate to my implementation?

Debugging

Sometimes pipeline tests on my merge requests failed unexpectedly. If I can’t tell whether my changes are the cause, GitLab Duo helps me investigate and fix the failures. Using GitLab Duo Agentic Chat, Duo can apply changes to debug the failing job.

Example prompt:

The pipeline job “rspec system pg16 12/32” is failing, but I don’t know whether that relates to my changes. Can you check if my changes are causing the pipeline failure and, if so, guide me through the steps of fixing it?

How Duo aligns with GitLab’s values

Using GitLab Duo doesn’t just help me, it also supports GitLab’s CREDIT values:

  • Collaboration: I ask teammates fewer basic questions. And when I do ask questions, they’re more thoughtful and informed. This respects their time.

  • Results for customers: By identifying edge cases early and improving code quality, GitLab Duo helps me deliver better outcomes for customers.

  • Efficiency: Streamlined preparation, faster reviews, and improved debugging make me more efficient.

  • Diversity, inclusion & belonging: AI guidance might mitigate misunderstandings and different barriers to entry based on differing backgrounds and abilities.

  • Iteration: The ability to try ideas faster and identify potential improvements enables faster iteration.

  • Transparency: GitLab Duo makes the already transparent documentation at GitLab more accessible.

Staying cautious with AI

It has never been as easy, and as difficult, to be competent as in the age of AI. AI can be a powerful tool, but it does get things wrong. Therefore, I avoid automation bias by always validating AI's outputs. If I don’t understand the output, I don’t use it. I’m also cautious of over-reliance. Studies suggest that heavy AI use can lead to cognitive offloading and worse outcomes in the long run. One study shows that users of AI perform worse in exams. To avoid negatively affecting my skills, I use AI as a discussion partner rather than just implementing the code it generates.

Summary

Onboarding is always a stressful time, but using GitLab Duo made mine smoother and less overwhelming. I learned more about GitLab’s codebase, culture, and best practices than I could have managed on my own.

Want to make GitLab Duo part of your onboarding experience? Sign up for a free trial today.

Resources


EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success

1 Share

Guests:

 Topics: 

  • Moving from traditional SIEM to an agentic SOC model, especially in a heavily regulated insurer, is a massive undertaking. What did the collaboration model with your vendor look like? 
  • Agentic AI introduces a new layer of risk - that of unconstrained or unintended autonomous action. In the context of Allianz, how did you establish the governance framework for the SOC alert triage agents?
  • Where did you draw the line between fully automated action and the mandatory "human-in-the-loop" for investigation or response?
  • Agentic triage is only as good as the data it analyzes. From your perspective, what were the biggest challenges - and wins - in ensuring the data fidelity, freshness, and completeness in your SIEM to fuel reliable agent decisions?
  • We've been talking about SOC automation for years, but this agentic wave feels different. As a deputy CISO, what was your primary, non-negotiable goal for the agent? Was it purely Mean Time to Respond (MTTR) reduction, or was the bigger strategic prize to fundamentally re-skill and uplevel your Tier 2/3 analysts by removing the low-value alert noise?
  • As you built this out, were there any surprises along the way that left you shaking your head or laughing at the unexpected AI behaviors?
  • We felt a major lack of proof - Anton kept asking for pudding - that any of the agentic SOC vendors we saw at RSA had actually achieved anything beyond hype! When it comes to your org, how are you measuring agent success?  What are the key metrics you are using right now?

Resources:





Download audio: https://traffic.libsyn.com/secure/cloudsecuritypodcast/EP252_not248_CloudSecPodcast.mp3?dest-id=2641814

How to Integrate Playwright MCP for AI-Driven Test Automation


Test automation has come a long way, from scripted flows to self-healing and now AI-driven testing. With the introduction of Model Context Protocol (MCP), Playwright can now interact with AI models and external tools to make smarter testing decisions. This guide walks you through integrating MCP with Playwright in VSCode, starting from the basics, enabling you to build smarter, adaptive tests today.

What Is Playwright MCP?
  • Playwright: An open-source framework for web testing and automation. It supports multiple browsers (Chromium, Firefox, and WebKit) and offers robust features like auto-waiting and screenshot capture, along with great tooling like Codegen and Trace Viewer.
  • MCP (Model Context Protocol): A protocol that enables external tools to communicate with AI models or services in a structured, secure way.

By combining Playwright with MCP, you unlock:

  • AI-assisted test generation.
  • Dynamic test data.
  • Smarter debugging and adaptive workflows.
Why Integrate MCP with Playwright?
  • AI-powered test generation: Reduce manual scripting.
  • Dynamic context awareness: Tests adapt to real-time data.
  • Improved debugging: AI can suggest fixes for failing tests.
  • Smarter locator selection: AI helps pick stable, reliable selectors to reduce flaky tests.
  • Natural language instructions: Write or trigger tests using plain English prompts.
Getting Started in VS Code
Prerequisites
  • Node.js
    • Download: nodejs.org
    • Minimum version: v18.0.0 or higher (recommended: latest LTS)
    • Check version:  
node --version

 

  • Playwright
    Install Playwright: 
npm install @playwright/test
Step 1: Create Project Folder
mkdir playwrightMCP-demo
cd playwrightMCP-demo
Step 2: Initialize Project
npm init playwright@latest
Step 3: Install MCP Server for VS Code
  • Open the Command Palette and search for 'MCP: Open user configuration' (type '>mcp' in the search box)

A file named mcp.json is created in your user application data folder; it contains the server details.

{ "servers": { "playwright": { "command": "npx", "args": [ "@playwright/mcp@latest" ], "type": "stdio" } }, "inputs": [] }
Alternatively, install an MCP server directly from the GitHub MCP server registry using the Extensions view in VS Code.


 

Verify installation:

  • Open Copilot Chat → select Agent Mode → click Configure Tools → confirm microsoft/playwright-mcp appears in the list.
Step 4: Create a Simple Test Using MCP

Once your project and MCP setup are ready in VS Code, you can create a simple test that demonstrates MCP’s capabilities. MCP can help in multiple scenarios; below is an example of AI-assisted test generation.

Scenario: AI-Assisted Test Generation - Use natural language prompts to generate Playwright tests automatically.
Test Scenario - Validate that a user can switch the Playwright documentation language dropdown to Python, search for “Frames,” and navigate to the Frames documentation page. Confirm that the page heading correctly displays “Frames.”

Sample prompt to use in VS Code (Copilot Agent Mode): Create a Playwright automated test in JavaScript that verifies navigation to the 'Frames' documentation page following the steps below, and be specific about locators to avoid strict mode violation errors:

  • Navigate to the Playwright documentation.
  • Select Python from the dropdown labelled Node.js.
  • Type the keyword Frames into the search box.
  • Click the search result for the Frames documentation page.
  • Verify that the page header reads Frames.
  • Log success or provide a failure message with details.

Copilot will generate the test automatically in your tests folder. It might look something like the sketch below.
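
The exact code and locators Copilot produces will vary, and the selectors in this sketch are assumptions about playwright.dev's markup rather than verified output.

// tests/frames-docs.spec.js - a hypothetical example of generated output
import { test, expect } from '@playwright/test';

test('navigates to the Frames documentation page', async ({ page }) => {
  await page.goto('https://playwright.dev/docs/intro');

  // Switch the language dropdown, labelled "Node.js", to Python.
  await page.getByRole('button', { name: 'Node.js' }).click();
  await page.getByRole('link', { name: 'Python', exact: true }).click();

  // Type the keyword "Frames" into the search box and open the result.
  await page.getByRole('button', { name: 'Search' }).click();
  await page.getByPlaceholder('Search docs').fill('Frames');
  await page.getByRole('link', { name: 'Frames', exact: true }).click();

  // Verify that the page heading reads "Frames".
  await expect(page.getByRole('heading', { name: 'Frames', level: 1 })).toBeVisible();
});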

Step 5: Run Test
npx playwright test
Conclusion
Integrating Playwright with MCP in VS Code helps you build smarter, adaptive tests without adding complexity. Start small, follow best practices, and scale as you grow.

Note - Installation steps may vary depending on your environment. Refer to MCP Registry · GitHub for the latest instructions.

Get your app on the fast track with Android Performance Spotlight Week!

Posted by Ben Weiss - Senior Developer Relations Engineer, Performance Paladin

When working on new features, app performance often takes a back seat. However, while it's not always top of mind for developers, users can see exactly where your app's performance lags behind. When that new feature takes a long time to load or is slow to render, your users can become frustrated. And unhappy users are more likely to abandon the feature you spent so much time on.

App performance is a core part of user experience and app quality, and recent studies and research show that it's highly correlated with increased user satisfaction, higher retention, and better review scores.

And we're here to help… Welcome to Android Performance Spotlight Week! All week long, we're providing you with low-effort, high-impact tools and guidance to get your app on the fast track to better performance. We help you lay the foundation and then dive deeper into helping your app become a better version of itself.

The R8 optimizer and Profile Guided Optimizations are foundational tools to improve overall app performance. That's why we just released significant improvements to Android Studio's performance tooling, and with the Android Gradle Plugin 9.0 we're introducing new APIs that make it easier to do the right thing when configuring the R8 Android app optimizer. Jetpack Compose version 1.10, which is now in beta, ships with several features that improve app rendering performance. In addition to these updates, we're bringing you a refresher on improving app health and performance monitoring. Some of our partners are going to tell their performance improvement stories as well.



Stay tuned to the blog all week as we'll be updating this post with a digest of all the content released. We're excited to share these updates and help you improve your app's performance.

Here's a closer look at what we'll be covering:

Monday: Deliberate performance optimization with R8

November 17, 2025

We're kicking off with a deep dive into the R8 optimizer. It's not just about shrinking your app's size; it's about gaining a fundamental understanding of how the R8 optimizer can improve performance in your app and why you should use it right away. We just published the largest overhaul of new technical guidance to date. The guides cover how to enable, configure, and troubleshoot the R8 optimizer. On Monday you'll also see case studies from top partners showing the real-world gains they achieved.



Read the blog post and developer guide.

Tuesday: Debugging and troubleshooting R8

November 18, 2025

We tackle the "Why does my app crash after enabling R8?" question head-on. We know advanced optimization can sometimes reveal edge cases, so we're focusing on debugging and troubleshooting R8 related issues. We'll show you how to use new features in Android Studio to de-obfuscate stack traces, identify common configuration problems, and implement best practices to get the most out of R8. We want you to feel confident, not just hopeful, when you flip the switch.

Content coming on November 18, 2025

Wednesday: Deeper performance considerations

November 19, 2025

Mid-week, we explore high-impact performance offerings beyond the R8 optimizer. We'll show you how to supercharge your app's startup and interactions using Profile Guided Optimization with Baseline Profiles and Startup Profiles. They are ready and proven to deliver another massive boost. We also have exciting news on Jetpack Compose rendering performance improvements. Plus, we'll share how to optimize your app's health by managing background work effectively.

Content coming on November 19, 2025

Thursday: Measure and improve

November 20, 2025

It's not an improvement if you can't prove it. Thursday is dedicated to performance measurement. We'll share our complete guide, starting from local measurement and debugging with tools like Jetpack Macrobenchmark and the new UiAutomator API to capture jank and startup times, all the way to monitoring your app in the wild. You'll learn about Play Vitals and other new APIs to understand your real user performance and quantify your success.

Content coming on November 20, 2025

Friday: Ask Android Live

November 21, 2025

We cap off the week with an in-depth, live conversation. This is your chance to talk directly with the engineers and Developer Relations team who build and use these tools every day. We'll have a panel of experts from the R8 and other performance teams ready to answer your toughest questions live. Get your questions ready!

Content coming on November 21, 2025


📣 Take the Performance Challenge!

We're not just sharing guidance. We're challenging you to put it into action!

Here's our challenge for you this week: Enable R8 full mode for your app.

  1. Follow our developer guides to get started: Enable app optimization.

  2. Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by using or adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after.

We're confident you'll see a meaningful improvement in your app's performance.

While you're at it, use the social tag #AskAndroid to bring your questions. Throughout the week our experts are monitoring and answering your questions.




Use R8 to shrink, optimize, and fast-track your app


Posted by Ben Weiss - Senior Developer Relations Engineer


Welcome to day one of Android Performance Spotlight Week!

We're kicking things off with the single most impactful, low-effort change you can make to improve your app's performance: enabling the R8 optimizer in full mode.

You probably already know R8 as a tool to shrink your app's size. It does a fantastic job of removing unused code and resources, reducing your app's size. But its real power, the one it's really g-R8 at, is as an optimizer.

When you enable full mode and allow optimizations, R8 performs deep, whole-program optimizations, rewriting your code to be fundamentally more efficient. This isn't just a minor tweak.

After reading this article, check out the Performance Spotlight Week introduction to the R8 optimizer on YouTube.



How R8 makes your app more performant

Let's shine a spotlight on the largest steps that the R8 optimizer takes to improve app performance.

Tree shaking is the most important step to reduce app size. During this phase the R8 optimizer removes unused code from libraries that your app depends on as well as dead code from your own codebase.

Method inlining replaces a method call with the actual code, which improves runtime performance.

Class merging and other strategies are applied to make the code more compact. All your beautiful abstractions, such as interfaces and class hierarchies, don't matter at this point and are likely to be removed.

Code minification is used to change the names of classes, fields, and methods to shorter, meaningless ones. So instead of MyDataModel you might end up with a class called a. This is what causes the most confusion when reading stack traces from an R8 optimized app. (Note that we have improved this in AGP 9.0!)
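
As a conceptual illustration only: R8 rewrites compiled bytecode, not your Kotlin source, and real output varies, but the combined effect of inlining and minification is roughly the following.

// Before optimization: readable names and a small helper function.
data class MyDataModel(val id: Int)
fun greet(model: MyDataModel): String = buildGreeting(model)
private fun buildGreeting(model: MyDataModel) = "Hello, user ${model.id}"

// After (sketched as source): the helper is inlined into its caller and
// names are minified to short, meaningless ones.
class a(val a: Int)
fun b(c: a): String = "Hello, user ${c.a}"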

Resource shrinking further reduces an app's size by removing unused resources such as XML files and drawables.

By applying these steps the R8 optimizer improves app startup times, enables smoother UI rendering, with fewer slow and frozen frames and improves overall on-device resource usage.

Case Study: Reddit's performance improvements with R8

As one example of the performance improvements that R8 can bring, let's look at Reddit. After enabling R8 in full mode, the Reddit for Android app saw significant performance improvements in various areas.

Caption: How R8 improved Reddit's app performance


The team observed a 40% faster cold startup, a 30% reduction in "Application Not Responding" (ANR) errors, a 25% improvement in frame rendering, and a 14% reduction in app size.


These enhancements are crucial for user satisfaction. A faster startup means less waiting and quicker access to content. Fewer ANRs lead to a more stable and reliable app, reducing user frustration. Smoother frame rendering removes UI jank, making scrolling and animations feel fluid and responsive. This positive technical impact was also clearly visible in user sentiment.

You can read more about their improvements on our blog.


Non-technical side effects of using R8

During our work with partners we have seen that these technical improvements have a direct impact on user satisfaction and can be reflected in user retention, engagement, and session length. User stickiness, which can be measured with daily, weekly, or monthly active users, has also been positively affected by technical performance improvements. And we've seen app ratings on the Play Store rise in correlation with R8 adoption. Sharing this with your product owners, CTOs, and decision makers can help you make the case for prioritizing performance work.



So let's call it what it is: Deliberate performance optimization is a virtue.

Guiding you to a more performant app

We heard that our developer guidance for R8 needed to be improved, so we went to work. The developer guidance for the R8 optimizer is now much more actionable and provides comprehensive guidance on enabling and debugging R8.

The documentation guides you on the high-level strategy for adoption, emphasizing the importance of choosing optimization-friendly libraries and, crucially, adopting R8's features incrementally to ensure stability. This phased approach allows you to safely unlock the benefits of R8 while providing you with guidance on difficult-to-debug issues.

We have significantly expanded our guidance on Keep Rules, which are the primary mechanism for controlling the R8 optimizer. We now explain what Keep Rules are and how to apply them, and we share best practices for writing and maintaining them. We also provide practical, actionable use cases and examples, helping you understand how to correctly prevent R8 from removing code that is needed at runtime, such as code accessed via reflection or through JNI. A sketch of what such rules can look like follows below.
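
As a minimal sketch, a keep-rules.pro covering those two cases might look like this. The class name is a hypothetical placeholder; the JNI rule mirrors the one shipped in Android's default ProGuard files.

# Keep a class that is instantiated via reflection (hypothetical example).
-keep class com.example.app.ReflectedModel { *; }

# Keep the names of classes and members with native methods so JNI lookups
# keep working.
-keepclasseswithmembernames class * {
    native <methods>;
}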

The documentation now also covers essential follow-up steps and advanced scenarios. We added a section on testing and troubleshooting, so you can verify the performance gains and debug any potential issues that arise. The advanced configurations section explains how to target specific build variants, customize which resources are kept or removed, and offers special optimization instructions for library authors, ensuring you can provide an optimized and R8-friendly package for other developers to use.

Enable the R8 optimizer's full potential

The R8 optimizer has defaulted to "full mode" since version 8.0 of the Android Gradle Plugin. If your project has been developed over many years, it might still include a legacy flag that disables it. Check your gradle.properties file for this line and remove it:

android.enableR8.fullMode=false # delete this line to enable R8's full potential

Now check whether you have enabled R8 in your app's build.gradle.kts file for the release variant. It's enabled by setting isMinifyEnabled and isShrinkResources to true. You can also pass default and custom configuration files at this step.

release {
    isMinifyEnabled = true
    isShrinkResources = true
    proguardFiles(
        getDefaultProguardFile("proguard-android-optimize.txt"),
        "keep-rules.pro"
    )
}


Case Study: Disney+ performance improvements

Engineers at Disney+ invest in app performance and are optimizing the app's user experience. Sometimes even seemingly small changes can make a huge impact. While inspecting their R8 configuration, the team found that the -dontoptimize flag was being used. It was brought in by a default configuration file, which is still used in many apps today.


After replacing proguard-android.txt with proguard-android-optimize.txt, the Disney+ team saw significant improvements in their app's performance.



After a new version of the app containing this change was rolled out to users, Disney+ saw 30% faster app startup and 25% fewer user-perceived ANRs. 

Today, many apps still use the proguard-android.txt file, which contains the -dontoptimize flag. And that's where our tooling improvements come in.

Tooling support

Starting with the Android Studio Narwhal 3 Feature Drop, you will see a lint warning when using proguard-android.txt.

And from AGP 9.0 onwards we are entirely dropping support for the file. This means you will have to migrate to proguard-android-optimize.txt.

We've also invested in new Android Studio features to make debugging R8-optimized code easier than ever. Starting in AGP 9.0 you can now automatically de-obfuscate stack traces within Android Studio's logcat for R8-processed builds, helping you pinpoint the exact line of code causing an issue, even in a fully optimized app. This will be covered in more depth in tomorrow's blog post on this Android Performance Spotlight Week.

Next Steps

Check out the Performance Spotlight Week introduction to the R8 optimizer on YouTube.



📣 Take the Performance Challenge!

It's time to see the benefits for yourself.

We challenge you to enable R8 full mode for your app today.

  1. Follow our developer guides to get started: Enable app optimization.

  2. Check if you still use proguard-android.txt and replace it with proguard-android-optimize.txt.

  3. Then, measure the impact. Don't just feel the difference, verify it. Measure your performance gains by adapting the code from our Macrobenchmark sample app on GitHub to measure your startup times before and after.

We're confident you'll see a meaningful improvement in your app's performance. Use #optimizationEnabled for any questions on enabling or troubleshooting R8. We're here to help.

Bring your questions for the Ask Android session on Friday

Use the social tag #AskAndroid to bring any performance questions. Throughout the week we are monitoring your questions and will answer several in the Ask Android session on performance on Friday, November 21. Stay tuned for tomorrow, where we'll dive even deeper into debugging and troubleshooting. But for now, get started with R8 and get your app on the fast track.

