An upgraded Gemini 2.5 Native Audio model across Google products and live speech translation in the Google Translate app.
When was the last time you heard someone ask in a standup, “How could we do this more sustainably?”
Topics like green software and carbon efficiency are unfortunately rarely at the top of busy development teams’ priority lists. What’s more, there are very few “green software practitioners” out there. But we believe we’re at a unique moment in time where this can all change. The next generation of AI-enabled developer tooling has the opportunity to create near-effortless, always-on engineering for sustainability.
The GitHub Next and GitHub Sustainability teams have been collaborating to prove this concept and value through a series of internal and external pilot projects.
We call it Continuous Efficiency.
We believe that, once it’s ready for broader adoption, Continuous Efficiency has the potential to make a significant positive impact for developers, businesses, and sustainability.
Digital sustainability and green software are intrinsically aligned with “efficiency,” which is at the core of software engineering. Many developers would benefit from more performant software, better standardization of code, stronger change quality assurance, and more.
Building for sustainability also delivers measurable business value.
Despite this, sustainability rarely makes it onto the roadmap, priority list, or even the backlog. But imagine a world in which the codebase could continuously improve itself…

Continuous Efficiency means effortless, incremental, validated improvements to codebases for increased efficiency. It’s an emergent practice based on a set of tools and techniques that we are starting to develop and hope to see the developer community expand on.
This emerges at the intersection of Continuous AI and Green Software.
Continuous AI brings AI-enriched automation to software collaboration. We are exploring LLM-powered automation in platform-based software development and CI/CD workflows.
Green Software is software designed and built to be more energy-efficient and to have a lower environmental impact. This practice tends to result in software that is cheaper, more performant, and more resilient.
While Continuous Efficiency is a generally applicable concept, we have been building implementations on a specific GitHub platform and infrastructure called Agentic Workflows. It’s publicly available and open source, but currently in “research demonstrator” status (read: experimental prototype, pre-release, subject to change and errors!). Agentic Workflows is an experimental framework for exploring proactive, automated, event-driven agentic behaviors in GitHub repositories, running safely in GitHub Actions.
Our work in this space has focused on two areas: applying engineering standards and code-quality guidelines at scale, and semi-automatic performance engineering.
With modern LLMs and agentic workflows, we can now express engineering standards and code-quality guidelines directly in natural language and apply them at a scale that was previously unattainable.
This capability goes far beyond traditional linting and static analysis approaches.
Examples of our work:
Case study: Code base reviews (green software rules implementation)

We have implemented a wide range of standard and specific green software rules, tactics, and patterns. These can be applied fully agentically to entire codebases and repositories. Example: we teamed up with the resolve project to scan their codebase with a number of rules and agentically delivered proposed improvements. The outputs weren’t all perfect, but one of the recently approved and merged pull requests makes a small performance improvement by “hoisting” RegExp literals from within hot functions. The project gets 500M+ downloads per month on npm, so this small impact will scale!

Case study: Implementing standards (Web Sustainability Guidelines)

The W3C WSG is a great resource to help people make web products and services more sustainable. We implemented the Web Development section as a set of 20 agentic workflows, so now the guidelines can be used by AI too! Example: we have run the WSG workflows on a number of GitHub and Microsoft web properties, found opportunities, and built resolutions to improve them, ranging from deferred loading to using native browser features and the latest language standards.
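For readers unfamiliar with the pattern, here is a minimal sketch of what “hoisting” a RegExp literal out of a hot function looks like. The function and pattern names are hypothetical and not the actual resolve change; the underlying point is that a RegExp literal inside a function body allocates a new RegExp object every time it is evaluated, so moving it to module scope avoids that repeated allocation.

```typescript
// Minimal sketch of hoisting a RegExp literal out of a hot function.
// Hypothetical names; not the actual pull request merged into resolve.

// Before: a fresh RegExp object is allocated on every call.
function endsWithSeparatorBefore(path: string): boolean {
  return /[\\/]$/.test(path);
}

// After: the literal is hoisted to module scope and created only once.
// Safe here because the pattern has no `g` or `y` flag, so it carries
// no lastIndex state between calls.
const TRAILING_SEPARATOR = /[\\/]$/;

function endsWithSeparatorAfter(path: string): boolean {
  return TRAILING_SEPARATOR.test(path);
}
```

A change like this is tiny in isolation, but as the case study notes, it compounds across hundreds of millions of monthly downloads.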
Performance engineering is notoriously difficult because real-world software is profoundly heterogeneous. Every repository brings a different mix of languages and architectures, and even within a single codebase, the sources of performance issues can span from algorithmic choices to cache behavior to network paths.
Expert performance engineers excel at navigating this complexity, but the sheer variety and volume of work across the industry demands better tooling and scalable assistance.
We’ve been thinking about the “grand challenge” of how to build a generic agent that can walk up to any piece of software and make demonstrable performance improvements. One that could navigate the vast ambiguity and heterogeneity of software in the wild—no small task!
Semi-automatic performance engineering aims to meet that need with an automated, iterative workflow where an agent researches, plans, measures, and implements improvements under human guidance. The process begins with “fit-to-repo” discovery—figuring out how to build, benchmark, and measure a given project—before attempting any optimization. Modern LLM-based agents can explore repositories, identify relevant performance tools, run microbenchmarks, and propose targeted code changes.
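To make the “measure, don’t guess” part of that loop concrete, here is a toy sketch of the kind of throwaway microbenchmark an agent might generate to compare a candidate optimization against the baseline before proposing it. The harness and numbers are hypothetical; in a real pilot, the fit-to-repo discovery step would find and reuse the project’s own benchmark tooling instead.

```typescript
// Toy microbenchmark sketch: compare a baseline against a candidate optimization
// (here, the RegExp-hoisting change sketched earlier). Hypothetical harness;
// real pilots reuse each project's established benchmark tooling.

const TRAILING_SEPARATOR = /[\\/]$/; // hoisted once for the candidate variant

const baseline = (path: string): boolean => /[\\/]$/.test(path);         // literal re-created per call
const candidate = (path: string): boolean => TRAILING_SEPARATOR.test(path);

function measure(label: string, fn: (s: string) => boolean, iterations: number): void {
  const input = "src/components/button/";
  let matches = 0;
  const start = process.hrtime.bigint(); // Node.js high-resolution timer
  for (let i = 0; i < iterations; i++) {
    if (fn(input)) matches++;
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms for ${iterations} calls (${matches} matches)`);
}

measure("baseline ", baseline, 5_000_000);
measure("candidate", candidate, 5_000_000);
```

Microbenchmarks like this are noisy and easy to misread, which is exactly why the workflow keeps a human in the loop to judge whether a measured difference is worth a pull request.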
Early results vary quite dramatically, but some show promise that guided automation can meaningfully improve software performance at scale.
Case study: Daily Perf Improver

Daily Perf Improver is a three-phase workflow intended to run in small daily sprints. It can (1) research and plan improvements, (2) infer how to build and benchmark the repository, and (3) iteratively propose measured optimizations. Example: in a recent focused pilot on FSharp.Control.AsyncSeq, it has already delivered real gains by producing multiple accepted pull requests, including a rediscovered performance bug fix and verified microbenchmark-driven optimizations. (See the Daily Perf Improver research demonstrator.)
GitHub agentic workflows enable you to write automation in natural language (Markdown) instead of traditional YAML or scripts. You author a workflow in a .md file that begins with a YAML-like “front matter” (defining triggers, permissions, tools, safe-outputs, etc.), followed by plain-English instructions. At build time you run the gh aw compile command (part of the agentic workflows CLI), which compiles the Markdown into a standard GitHub Actions workflow (.yml) that can be executed by the normal GitHub Actions runtime.
When the compiled workflow runs, it launches an AI agent (for example via GitHub Copilot CLI, or other supported engines like Claude Code or OpenAI Codex) inside a sandboxed environment. The agent reads the repository’s context, applies the human-written natural-language instructions (for example, “look for missing documentation, update README files, then open a pull request”), and produces outputs such as comments, pull requests, or other repository modifications. Because it’s running in the GitHub Actions environment, permission boundaries, safe-output restrictions, logs, auditability, and other security controls remain in place.
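To make the file format concrete, here is a hypothetical sketch of what such a workflow might look like. The front matter fields shown (on, permissions, safe-outputs) follow the description above, but the research demonstrator’s exact schema may differ, so treat this as an illustration of the idea rather than a verified template.

```markdown
---
# Hypothetical front matter; field names mirror the description above
# (triggers, permissions, safe-outputs), but the exact schema may differ.
on:
  schedule:
    - cron: "0 6 * * 1-5"   # weekday mornings
permissions:
  contents: read
safe-outputs:
  create-pull-request:      # the agent may only propose changes via a pull request
---

# Daily efficiency pass

Scan the repository for hot code paths that allocate objects inside loops or
re-create regular expression literals on every call. Where you find one, hoist
the allocation out of the hot path, check that behavior is unchanged, and open
a pull request describing the expected efficiency gain.
```

Running gh aw compile would then turn a file like this into a standard GitHub Actions workflow (.yml), so it executes with the permission boundaries and safe-output restrictions described above.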
Our internal process for creating Continuous Efficiency workflows follows a simple, repeatable pattern, starting with a draft produced by the create-agentic-workflow agent.

If you’re a developer who loves the experimentation phase, you can get started running agentic workflows in GitHub Actions today! There is a range of examples that you can immediately try out (including a “Daily performance improver”), or you can author your own using natural language.
GitHub Sustainability will soon be publishing rulesets, workflows, and more—if you’re interested in being an early adopter or design partner, please get in touch with me.
Mazen and Robin are joined by fan-favorite Taylor Desseyn to discuss how the React Native job market has shifted in 2025 and why community matters more than ever. They break down what skills companies want now and how developers can stand out in a tighter market.
Show Notes
Connect With Us!
This episode is brought to you by Infinite Red!
Infinite Red is an expert React Native consultancy located in the USA. With over a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.