Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

What’s New in Microsoft 365 Copilot | February 2026

1 Share

Welcome to the February 2026 edition of What's New in Microsoft 365 Copilot! Every month, we highlight new features and enhancements to keep Microsoft 365 admins up to date with Copilot features that help your users be more productive and efficient in the apps they use every day.

Also new this month—the Microsoft Agent 365 blog and discussion space on Microsoft Tech Community. We recommend following it for the latest product news and insights on observability, security, and governance of agents in your organization.

Let’s take a closer look at what’s new this month:

- User capabilities
- IT admin capabilities

User capabilities

Text selection and expanded grounding for Copilot Chat

Users can now select specific text from Copilot responses and ask about it for focused follow-up questions. Simply highlight the text and an “Ask Copilot” button will appear. This precision targeting eliminates overly broad answers by letting users drill into exactly the content they need. Your users get faster, more controlled assistance for explanations, summaries, translations, and next steps. This feature is rolling out in March.

 

Users can now ground their Copilot prompts on SharePoint lists or sites when using Copilot Chat in work mode. To include a SharePoint list or site in a prompt, simply type forward slash “/” and either start typing the name or find the SharePoint list or site under the “Sites” tab. This integration brings your organization's structured data directly into the AI conversation context. Your users can get more accurate, relevant responses by grounding prompts on specific SharePoint List data. This feature is rolling out in March.

 

When users without a Microsoft 365 Copilot license use Copilot Chat in Outlook alongside an open email, Copilot will ground the conversation in the open email, which is visually indicated in the prompt box by displaying the subject line of the email. Additionally, if a user highlights just a part of the email, Copilot can ground the conversation only on that highlighted text. These features help users by visually indicating when Copilot is grounded on an email, or part of it. These features rolled out in February.

Agents, integrations, and access points for the Copilot app

Project Manager Agent helps users plan, organize, and manage work through AI-assisted project tracking. Starting with core task management, this agent simplifies project coordination with advanced capabilities coming over time. Deploy this feature to give your organization AI-powered task management assistance. This feature is rolling out to Public Preview in March and is rolling out worldwide in April.

 

Copilot Chat integrated with Copilot Search lets users explore their search results and interact with Copilot in the chat pane at the same time, so they can ask more detailed questions about those results. By keeping Copilot Chat available during the search process, users can avoid switching between tools, so they can easily find information and get AI support simultaneously, boosting productivity. This feature is rolling out in March.

 

Copilot mobile widgets and action button enable users to chat with Copilot while they’re on the go. The widgets bring Microsoft 365 Copilot features right to a mobile device’s Home and Lock screens. With just one tap of the Copilot widget or Copilot action button, users can start a chat with Copilot, activate a voice conversation, or open the camera to attach a photo to Copilot Chat. This makes it easier to catch up, ask questions, and brainstorm with Copilot on a mobile device. Mobile widgets on iOS and Android, and action button on iOS rolled out in February.

 

Users can now create brand kits within the Create experience by uploading their organization's brand guidelines document. This streamlines brand asset management by extracting colors, fonts, and style elements automatically. Now teams can ensure brand consistency across Copilot-generated content in the Create experience with minimal manual setup. This feature rolled out in February.

 

Email attachment search enables users to find email attachments through Copilot, surfacing results in full search and when Outlook or SharePoint filters are selected. This improves file discovery by making email attachments searchable alongside other content, and users can locate important documents faster regardless of where they're stored. This feature rolled out in February.

Agents in communities and updated recaps for Copilot in Teams

Communities in Microsoft Teams bring community conversations and leadership engagement to a user’s existing Teams collaboration, so they can discover and participate in communities directly within Teams without switching apps. Now agents in communities are also available in Teams. The agent helps turn community conversations into shared organizational knowledge by drafting suggested responses to unanswered questions using existing discussions and specified SharePoint sites. This feature rolled out to Public Preview in February and is rolling out worldwide in April.

 

AI summaries in meeting recaps now include the visuals that shaped the conversation. When a screen is shared during a recorded meeting, key on-screen moments are captured and placed directly alongside the relevant sections of the meeting summary, so users can see the screen as it appeared in the discussion. The notes themselves remain focused on the conversation, but now they’re paired with the visual context that brought those ideas to life. The result is a more intuitive, scannable recap that helps teams quickly reconnect decisions to what was presented, without scrubbing through the recording. This feature rolled out in February.

Staying aligned after a meeting shouldn’t mean settling for a one-size-fits-all recap. With new customizable recap templates, users can shape their AI-generated notes to match exactly how their team works. They can choose from two ready-made templates: a Speaker Summary that organizes insights by participant, or an Executive Summary that highlights key takeaways. Users can also design custom templates using a simple free-text prompt to describe the structure they want—even paste in a format they’ve used before—and their AI notes will instantly adapt. Users can also save custom templates for future reuse, giving every meeting the same level of clarity, consistency, and efficiency. Available across all languages that support AI summaries, this feature rolled out to Public Preview in December and rolled out worldwide in February.

 

The Copilot experience in Teams meeting is updating to Copilot Chat, matching the unified experience across Teams chats and channels, the Microsoft 365 Copilot app, and other Microsoft 365 apps. Now Copilot in Teams meetings can analyze chat history, meeting transcripts, and calendar content to generate smart recaps, rewrite messages, and surface relevant insights. Whether reviewing a thread or following up after a call, Copilot in Teams meetings will now deliver context-aware summaries and suggestions based on your activity and goals. This feature rolled out in February.

Meeting scheduling and preparation for Copilot in Outlook

When scheduling meetings with multiple attendees via Copilot Chat, Copilot can now recommend time slots that maximize availability across attendees. When no suitable times are found, Copilot widens the search range to suggest alternatives and clearly explains why specific options are recommended. Copilot can also visually display each option in the context of a broader schedule, and respects personalized time settings like working hours, time zones, and meeting preferences. This feature rolled out in February for new Outlook and will roll out in March for classic Outlook.

Similarly, it's now easier for users to schedule meetings with Copilot directly from an email thread. From an open email or email thread, users can simply click "Schedule with Copilot," and Copilot takes it from there by finding available times to meet, booking meeting rooms, drafting agendas and sending invites, all in one guided chat flow. This feature began rolling out in February for new Outlook and will roll out in March for classic Outlook. 

  

In classic Outlook, Copilot now brings real-time insights, context summaries, and relevant documents to meeting prep. Users can chat with Copilot for deeper preparation on upcoming meetings, enabling them to arrive fully prepared. This feature rolled out for classic Outlook in Public Preview in January and is rolling out worldwide in March.

 

Copilot now provides meeting time analytics that let users query and visualize how much time they spend in meetings, including breakdowns by category and month over month comparisons. Users can ask Copilot questions like, “How much time did I spend in meetings last month?”, “How much time did I spend in meetings last month, split by category—and how does that compare to previous months?”,  or “Create a month‑by‑month bar chart comparing my meeting time.” This enables users to make more intentional scheduling decisions without additional reporting tools. This feature rolled out in February.

Document editing by default with Copilot in Word

The default Copilot chat experience in Word now allows Copilot to directly edit documents. All changes made by Copilot are fully reviewable and reversible, and users can turn this experience off if needed. This helps users work faster without choosing modes or tools, so Copilot is simply ready when they are. This feature started rolling out in February.

 

Users can now prompt Copilot on a blank document to get started, and "Edit with Copilot” will automatically turn on in Word. This reduces the work of getting started and keeps users in a continuous AI-assisted flow where they can keep iterating. This feature started rolling out in February.

https://www.microsoft.com/en-us/microsoft-365/roadmap?id=543241

 

Copilot is agentic in PowerPoint

Copilot is now agentic in PowerPoint on the Web, letting users create, edit, and refine presentations through natural conversation directly in a presentation. Users can start a new presentation or build on an existing one, using ‘Edit with Copilot’ to generate slides, update content, improve layouts, and polish design—while preserving formatting, structure, and branding. Copilot uses files, meetings, emails, and more to help shape content and iterate quickly, and it connects to brand kits so users can apply branded templates, insert brand‑approved images, and check for brand compliance. This feature started rolling out to the Web in February.

 

Agents in OneDrive

Agents in OneDrive help users stay grounded in the full context of work. Instead of asking the same questions for each file, users can create an agent that understands an entire set of related documents, plans, specs, meeting notes, research, or decks. The agent responds with answers based on that shared content, functioning as an AI teammate built from chosen files and folders. This feature rolled out in February.

 

Expanded grounding, access, and coordination for agents

Agent Recommendations proactively suggests helpful agents during Copilot conversations based on user context and needs. This feature connects users with the right specialized agent for their current task without manual searching. This helps your organization improve agent adoption and effectiveness by automatically surfacing relevant agents when users need them most. This feature rolled out in February.

 

Adaptive Card content can be refreshed within custom engine agent experiences. This feature enhances the interactivity and real-time capabilities of custom-built agents. This enables an organization's custom agents to deliver more dynamic, up-to-date content to users. This feature is rolling out in March.

 

AI agents can coordinate with each other on complex tasks by calling other agents as tools. This multi-agent architecture allows agents to leverage specialized capabilities from other agents, so organizations can build more sophisticated AI workflows through agent-to-agent collaboration. This feature is rolling out in March.

 

Access to agents built with Copilot Studio and Foundry through Outlook gives users access to their organization's custom-built agents directly within the Outlook experience. This integration extends agent capabilities to email workflows without requiring users to switch applications. Admins can deploy custom agents where users spend significant time managing communications. This feature is rolling out in March.

  

Newly created declarative agents can now ground answers in scanned PDFs and image-based documents from SharePoint. This unlocks a major class of enterprise content that was previously difficult for agents to process. This enables organizations to leverage AI assistance with document archives and legacy scanned materials. This feature rolled out in February.

IT admin capabilities

Power user report and intelligent summaries for Copilot Dashboard

Admins can now target their enablement efforts by identifying Copilot power users in the Power User Report in the Copilot Adoption PBI. The report classifies users as power, habitual, novice, and non‑Copilot users based on usage frequency and consistency. This feature is rolling out in March.

 

Use intelligent summaries to quickly surface what’s working and where targeted attention can accelerate Copilot adoption. Intelligent summaries highlight key adoption trends to focus on areas of success. Suggested prompts enable deeper exploration of underlying trends and drivers. This feature is rolling out in March.

 

Copilot access and readiness with Microsoft 365 admin center

When requesting a Microsoft 365 Copilot license, users can now include a business justification explaining why they need Copilot. This information is surfaced to IT admins during review, helping them make faster, more informed approval decisions while supporting governance and audit requirements. By providing clear context upfront, organizations can streamline license approvals while maintaining control and visibility. This feature is rolling out in March.

 

 

The new Copilot readiness page in the Microsoft 365 admin center brings structure and clarity to the configuration and rollout of Microsoft 365 Copilot. The readiness page organizes recommended settings into clear categories of deployment essentials, data security, and user experience. This makes it easier to understand scope, prioritize actions, and track progress. With completion status, user coverage insights, and guided recommendations surfaced in one place, IT teams can plan, sequence, and deploy Microsoft 365 Copilot more confidently and consistently. This feature rolled out to Public Preview in February and is rolling out worldwide in March.

 

Federated Copilot connectors are now available in Public Preview by default across all tenants, allowing users to authenticate and securely access live data from supported external services such as Canva, HubSpot, Notion, Linear, Intercom, Google Contacts, and Google Calendar. Admins can govern availability from the Microsoft 365 admin center—reviewing, enabling, and disabling these connectors—while users connect with their own credentials. Federated connector data is retrieved in real time (not indexed) and is currently supported only in Researcher. This feature rolled out in February.

 

The Connector Usage Report in Microsoft 365 admin center provides information on how connectors are augmenting Microsoft 365 Copilot experiences, with metrics including number of active connectors, and agents that reference connectors. With insights into daily trends, connector response counts, and individual-level engagement, organizations can optimize training, improve adoption strategies, and identify high-impact integrations. This feature is rolling out in March.

 

Admins can now enable AI-powered skill updates for user profiles directly from the Microsoft 365 admin center. This feature helps organizations maintain accurate skill data across their user base using Microsoft Graph activity. This feature rolled out in February.

Risk-based inventory of AI agents for Microsoft Defender

Microsoft Defender AI Security Posture Management now provides SOC teams with a risk-based inventory of AI agents across Microsoft Foundry and Copilot Studio. Analysts can view an agent’s overall security posture, easily implement security recommendations, and identify vulnerabilities such as misconfigurations and excessive permissions. Agent activity is captured for investigation with hunting available through Defender's unified experience. This gives a security team the visibility needed to manage AI agents with the same rigor as human identities. This feature rolled out to Public Preview in November and rolled out worldwide in February.

 

 

Did you know? The Microsoft 365 Roadmap is where you can get the latest updates on productivity apps and intelligent cloud services. Microsoft 365 Copilot release notes is where you can see the Microsoft 365 Copilot features that are generally available (Current Channel for Microsoft 365 apps) and specific to each platform. Check back regularly to see what features are in development, coming soon and generally available. Please note that the dates mentioned in this article are tentative and subject to change.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Learning In Public: Cleaning Up Claude Code Settings

1 Share
Ben Nadel explores the use of Claude Code to help clean up Claude's local settings file (settings.local.json) to remove redundancy and unnecessary one-off paths....

Protect Your MCP Tools With Auth0 FGA in TypeScript

1 Share
Learn how to secure your Model Context Protocol (MCP) tools using Auth0 FGA and TypeScript. Implement relationship-based access control for AI applications.


Yet Another Way to Center an (Absolute) Element

1 Share

TL;DR: We can center absolute-positioned elements in three lines of CSS. And it works on all browsers!

.element {
  position: absolute;
  place-self: center; 
  inset: 0;
}

Why? Well, that needs a longer answer.

In recent years, CSS has brought a lot of new features that don’t necessarily allow us to do new stuff, but certainly make them easier and simpler. For example, we don’t have to hardcode indexes anymore:

<ul style="--t: 8">
  <li style="--i: 1"></li>
  <li style="--i: 2"></li>
  <!--  ...  -->
  <li style="--i: 8"></li>
</ul>

Instead, all this is condensed into the sibling-index() and sibling-count() functions. There are lots of recent examples like this.
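For example, those hardcoded indexes can be replaced by letting each item compute its own. This is a quick sketch of mine, not from the article, and sibling-index() support is still recent, so treat it as a progressive enhancement:

```css
li {
  /* sibling-index() is the 1-based position of this <li>
     among its siblings — no --i custom property needed. */
  animation-delay: calc(sibling-index() * 100ms);
}
```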

Still, there is one little task we’ve been doing the same way for decades: centering an absolutely positioned element, which we usually achieve like this:

.element {
  position: absolute;
  top: 50%;
  left: 50%;
  
  translate: -50% -50%;
}

We move the element’s top-left corner to the center, then translate it back by 50% so it’s centered.

There is nothing wrong with this way — we’ve been doing it for decades. But still it feels like the old way. Is it the only way? Well, there is another not-so-known cross-browser way to not only center, but also easily place any absolutely-positioned element. And what’s best, it reuses the familiar align-self and justify-self properties.

Turns out that these properties (along with their place-self shorthand) now work on absolutely-positioned elements. However, if we try to use them as is, we’ll notice our element doesn’t even flinch.

/* Doesn't work!! */
.element {
  position: absolute;
  place-self: center; 
}

So, how do align-self and justify-self work for absolute elements? It may be obvious to say they should align the element, and that’s true, but specifically, they align it within its Inset-Modified Containing Block (IMCB). Okay… But what’s the IMCB?

Imagine we set our absolute element’s width and height to 100%. Even if the element’s position is absolute, it certainly doesn’t grow infinitely; rather, it’s enclosed by what’s known as the containing block.

For an absolutely-positioned element, the containing block is the closest positioned ancestor (or one that otherwise establishes a containing block, such as an ancestor with a transform). If there is none, it falls back to the initial containing block, which has the dimensions of the viewport.

We can modify that containing block using inset properties (specifically top, right, bottom, and left). I used to think that inset properties fixed the element’s corners (I even said it a couple of seconds ago), but under the hood, we are actually fixing the IMCB borders.

Diagram showing the CSS for an absolutely-positioning element with inset properties and how those values map to an element.

By default, the IMCB is the same size as the element’s dimensions. So before, align-self and justify-self were trying to center the element within itself, resulting in nothing. Then, our last step is to set the IMCB so that it is the same as the containing block.

.element {
  position: absolute;
  place-self: center; 
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}

Or, using their inset shorthand:

.element {
  position: absolute;
  place-self: center; 
  inset: 0;
}

Only three lines! A win for CSS nerds. Admittedly, I might be cheating since, in the old way, we could also use the inset property and reduce it to three lines, but… let’s ignore that fact for now.

We aren’t limited to just centering elements, since all the other align-self and justify-self positions work just fine. This offers a more idiomatic way to position absolute elements.

Pro tip: If we want to leave a space between the absolutely-positioned element and its containing block, we could either add a margin to the element or set the element’s inset to the desired spacing instead of 0.
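As a sketch of that pro tip (my illustration, not from the original demo), bumping the inset values from 0 to a length keeps the element centered while reserving that much space around it:

```css
/* Centered, but kept at least 1rem away from the containing
   block's edges: the IMCB shrinks by 1rem on every side. */
.element {
  position: absolute;
  place-self: center;
  inset: 1rem;
}
```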

What’s best, I checked Caniuse, and while initially Safari didn’t seem to support it, upon testing, it seems to work on all browsers!


Yet Another Way to Center an (Absolute) Element originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


The ultimate dev skill is Integration Testing – Podcast interview with Internet of Bugs [Podcast #209]

1 Share

Today Quincy Larson interviews Carl Brown, who runs the Internet of Bugs YouTube channel and has worked as a dev at Amazon, IBM, Sun Microsystems, and startups for over 37 years.

We talk about:

  • The hype versus the utility in LLMs and agent code generation tools

  • Why you might want to target developer jobs at smaller companies, and how these differ from "big tech"

  • How everyone will face ageism eventually. Carl argues that a consulting career is a great escape hatch.

Watch the podcast on the freeCodeCamp.org YouTube channel or listen on your favorite podcast app.

Links from our discussion:

Ted Chiang "ChatGPT Is a Blurry JPEG of the Web" article Carl mentions: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

The Karpathy/Moltbook saga:

  • Karpathy hyping up MoltBook (noon, Jan 30): https://x.com/elonmusk/status/2017370646767145419

  • Karpathy doubles down after "being accused of overhyping" Moltbook (9:39 PM, Jan 30): https://x.com/karpathy/status/2017442712388309406

  • Tweet showing Karpathy's (redacted) private information from a MoltBook security breach (4:53 PM, Jan 31): https://x.com/theonejvo/status/2017732898632437932

  • Fortune quotes Karpathy saying MoltBook is "a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers" (Feb 2): https://fortune.com/2026/02/02/moltbook-security-agents-singularity-disaster-gary-marcus-andrej-karpathy/

Quote from Cory Doctorow about code failing well: https://pluralistic.net/2026/01/06/1000x-liability/

Excerpt from Cory's Mastodon post with that quote in it: https://mamot.fr/@pluralistic/115848576290992814

Carl's Mastodon post telling Cory he was going to use that quote (which Cory boosted): https://mastodon.social/@carlbrown/115867074293449215

Article on Claude 4.6 being good at finding bugs with fuzzing: https://red.anthropic.com/2026/zero-days/

Reference to it from Computer Security Guru Bruce Schneier: https://www.schneier.com/blog/archives/2026/02/llms-are-getting-a-lot-better-and-faster-at-finding-and-exploiting-zero-days.html

Older paper on LLMs being good at fuzzing, prior to this new claim about Claude 4.6: https://arxiv.org/html/2508.01750v1

Falsehoods programmers believe about names from Patio11: https://img.sauf.ca/pictures/2025-10-23/61fb6db44e7173cd9318753c955f7dda.pdf

Same kind of article, but this one is about time instead of names (Carl noted he was wrong in that Patrick/Patio11 didn't write this one, but it's worth passing along): https://infiniteundo.com/post/25509354022/more-falsehoods-programmers-believe-about-time

Article with discussion of ageism in tech with the Zuckerberg quote Carl was thinking of: https://www.forbes.com/sites/stevenkotler/2015/02/14/is-silicon-valley-ageist-or-just-smart/

Book on (interpersonal) networking that Carl recommends: https://www.penguinrandomhouse.com/books/227558/never-eat-alone-expanded-and-updated-by-keith-ferrazzi-and-tahl-raz/

And another one: https://www.penguinrandomhouse.com/books/105512/dig-your-well-before-youre-thirsty-by-harvey-mackay/

Carl's video on how AdTech is fracturing Society: https://www.youtube.com/watch?v=FmYXyWbis9w

Carl's Website: https://internetofbugs.com/

Community news section:

  1. freeCodeCamp just published a comprehensive course that will teach you the fundamental concepts, protocols, and architectures of computer networking. You'll learn key network engineering topics like topology, subnetting, flow control, routing, IPv4 addressing, DNS, and more. (12 hour YouTube course): https://www.freecodecamp.org/news/computer-networking-fundamentals/

  2. And we just published our second-ever chess course. This time you'll learn the Italian Game, one of the most common chess openings. This handbook and accompanying video course are taught by freeCodeCamp engineer Ihechikara Abba, who has a chess Elo rating of 2285. He will lay out the many traps that white can set for black, and how to not fall for them. (full-length handbook and 1 hour YouTube course): https://www.freecodecamp.org/news/the-chess-italian-game-handbook-traps-for-white/

  3. freeCodeCamp also published a full-length book on Product-Led Research. This is a must-read for any manager within a tech company. It's written by a CTO and security researcher named Omer Rosenbaum, who says: “if you manage Research like it's Development, things aren't going to go well for you.” He breaks down the most common research frameworks and methodologies, and contextualizes them through a series of case studies. (full-length book): https://www.freecodecamp.org/news/product-led-research-a-practical-guide-for-randd-leaders-full-book/

  4. If you're a Python developer and use the Django web development framework, this tutorial will help you optimize the heck out of your APIs. Mari will teach you how to use profiling and logging to find bottlenecks in your codebase. Then she'll show you how to get extra performance through caching, so you can serve users at scale. (20 minute read): https://www.freecodecamp.org/news/how-to-optimize-django-rest-apis-for-performance/

  5. Today's song of the week is the 1984 synth-jazz classic "No One Emotion" by George Benson. I love the driving synth bass, the vocal harmonies, and the excellent guitar solo by Michael Sembello – the guy who made the song "Maniac". If you're looking for a pick-me-up, jam this song any day of the week. https://www.youtube.com/watch?v=Q-MyvbolxG0




Allocating on the Stack

1 Share

The Go Blog

Allocating on the Stack

Keith Randall
27 February 2026

We’re always looking for ways to make Go programs faster. In the last 2 releases, we have concentrated on mitigating a particular source of slowness, heap allocations. Each time a Go program allocates memory from the heap, there’s a fairly large chunk of code that needs to run to satisfy that allocation. In addition, heap allocations present additional load on the garbage collector. Even with recent enhancements like Green Tea, the garbage collector still incurs substantial overhead.

So we’ve been working on ways to do more allocations on the stack instead of the heap. Stack allocations are considerably cheaper to perform (sometimes completely free). Moreover, they present no load to the garbage collector, as stack allocations can be collected automatically together with the stack frame itself. Stack allocations also enable prompt reuse, which is very cache friendly.

Stack allocation of constant-sized slices

Consider the task of building a slice of tasks to process:

func process(c chan task) {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

Let’s walk through what happens at runtime when pulling tasks from the channel c and adding them to the slice tasks.

On the first loop iteration, there is no backing store for tasks, so append has to allocate one. Because it doesn’t know how big the slice will eventually be, it can’t be too aggressive. Currently, it allocates a backing store of size 1.

On the second loop iteration, the backing store now exists, but it is full. append again has to allocate a new backing store, this time of size 2. The old backing store of size 1 is now garbage.

On the third loop iteration, the backing store of size 2 is full. append again has to allocate a new backing store, this time of size 4. The old backing store of size 2 is now garbage.

On the fourth loop iteration, the backing store of size 4 has only 3 items in it. append can just place the item in the existing backing store and bump up the slice length. Yay! No call to the allocator for this iteration.

On the fifth loop iteration, the backing store of size 4 is full, and append again has to allocate a new backing store, this time of size 8.

And so on. We generally double the size of the allocation each time it fills up, so we can eventually append most new tasks to the slice without allocation. But there is a fair amount of overhead in the “startup” phase when the slice is small. During this startup phase we spend a lot of time in the allocator, and produce a bunch of garbage, which seems pretty wasteful. And it may be that in your program, the slice never really gets large. This startup phase may be all you ever encounter.

If this code was a really hot part of your program, you might be tempted to start the slice out at a larger size, to avoid all of these allocations.

func process2(c chan task) {
    tasks := make([]task, 0, 10) // probably at most 10 tasks
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

This is a reasonable optimization to do. It is never incorrect; your program still runs correctly. If the guess is too small, you get allocations from append as before. If the guess is too large, you waste some memory.

If your guess for the number of tasks was a good one, then there’s only one allocation site in this program. The make call allocates a slice backing store of the correct size, and append never has to do any reallocation.

The surprising thing is that if you benchmark this code with 10 elements in the channel, you’ll see that you didn’t reduce the number of allocations to 1, you reduced the number of allocations to 0!

The reason is that the compiler decided to allocate the backing store on the stack. Because it knows what size it needs to be (10 times the size of a task) it can allocate storage for it in the stack frame of process2 instead of on the heap [1]. Note that this depends on the fact that the backing store does not escape to the heap inside of processAll.

Stack allocation of variable-sized slices

But of course, hard coding a size guess is a bit rigid. Maybe we can pass in an estimated length?

func process3(c chan task, lengthGuess int) {
    tasks := make([]task, 0, lengthGuess)
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

This lets the caller pick a good size for the tasks slice, which may vary depending on where this code is being called from.

Unfortunately, in Go 1.24 the non-constant size of the backing store means the compiler can no longer allocate the backing store on the stack. It will end up on the heap, converting our 0-allocation code to 1-allocation code. Still better than having append do all the intermediate allocations, but unfortunate.

But never fear, Go 1.25 is here!

Imagine you decide to do the following, to get the stack allocation only in cases where the guess is small:

func process4(c chan task, lengthGuess int) {
    var tasks []task
    if lengthGuess <= 10 {
        tasks = make([]task, 0, 10)
    } else {
        tasks = make([]task, 0, lengthGuess)
    }
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

Kind of ugly, but it would work. When the guess is small, you use a constant size make and thus a stack-allocated backing store, and when the guess is larger you use a variable size make and allocate the backing store from the heap.

But in Go 1.25, you don’t need to head down this ugly road. The Go 1.25 compiler does this transformation for you! For certain slice allocation locations, the compiler automatically allocates a small (currently 32-byte) slice backing store, and uses that backing store for the result of the make if the size requested is small enough. Otherwise, it uses a heap allocation as normal.

In Go 1.25, process3 performs zero heap allocations if lengthGuess is small enough that a slice of that capacity fits into 32 bytes (and, of course, if lengthGuess is an accurate guess for how many items are in c).

We’re always improving the performance of Go, so upgrade to the latest Go release and be surprised by how much faster and more memory-efficient your program becomes!

Stack allocation of append-allocated slices

Ok, but you still don’t want to have to change your API to add this weird length guess. Anything else you could do?

Upgrade to Go 1.26!

func process(c chan task) {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

In Go 1.26, we allocate the same kind of small, speculative backing store on the stack, but now we can use it directly at the append site.

On the first loop iteration, there is no backing store for tasks, so append uses a small, stack-allocated backing store as the first allocation. If, for instance, we can fit 4 tasks in that backing store, the first append allocates a backing store of length 4 from the stack.

The next 3 loop iterations append directly to the stack backing store, requiring no allocation.

On the fifth loop iteration, the stack backing store is finally full and we have to go to the heap for more backing store. But we have avoided almost all of the startup overhead described earlier in this article. No heap allocations of size 1, 2, and 4, and none of the garbage that they eventually become. If your slices are small, maybe you will never have a heap allocation.

Stack allocation of append-allocated escaping slices

Ok, this is all good when the tasks slice doesn’t escape. But what if I’m returning the slice? Then it can’t be allocated on the stack, right?

Right! The backing store for the slice returned by extract below can’t be allocated on the stack, because the stack frame for extract disappears when extract returns.

func extract(c chan task) []task {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    return tasks
}

You might think: fine, the returned slice can’t be allocated on the stack. But what about all those intermediate slices that just become garbage? Maybe we can allocate those on the stack?

func extract2(c chan task) []task {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    tasks2 := make([]task, len(tasks))
    copy(tasks2, tasks)
    return tasks2
}

Now the tasks slice never escapes extract2, so it can benefit from all of the optimizations described above. At the very end of extract2, when we know the final size of the slice, we do one heap allocation of the required size, copy our tasks into it, and return the copy.
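As an aside, the trailing make-and-copy in extract2 can be written more compactly with slices.Clone from the standard library (Go 1.21+). The escape-analysis benefit is the same; one small semantic difference is that Clone returns nil for a nil input, where the explicit make returns a non-nil empty slice:

```go
package main

import (
	"fmt"
	"slices"
)

type task struct{ id int64 } // hypothetical task type, for illustration

func extract2b(c chan task) []task {
	var tasks []task
	for t := range c {
		tasks = append(tasks, t)
	}
	// One final allocation of exactly the right size; the
	// intermediate tasks slice never escapes this function.
	return slices.Clone(tasks)
}

func main() {
	c := make(chan task, 2)
	c <- task{1}
	c <- task{2}
	close(c)
	fmt.Println(len(extract2b(c))) // prints 2
}
```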

But do you really want to write all that additional code? It seems error prone. Maybe the compiler can do this transformation for us?

In Go 1.26, it can!

For escaping slices, the compiler will transform the original extract code to something like this:

func extract3(c chan task) []task {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    tasks = runtime.move2heap(tasks)
    return tasks
}

runtime.move2heap is a special compiler+runtime function that is the identity function for slices that are already allocated in the heap. For slices that are on the stack, it allocates a new slice on the heap, copies the stack-allocated slice to the heap copy, and returns the heap copy.
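runtime.move2heap is not callable from user code, but its behavior can be approximated with a user-level sketch. The onStack parameter here is an assumption made for illustration: the real runtime determines this itself by inspecting where the backing store lives, which ordinary Go code cannot do.

```go
package main

import "fmt"

type task struct{ id int64 } // hypothetical task type, for illustration

// move2heapSketch approximates the semantics of runtime.move2heap.
// The real function decides onStack itself from the backing store's
// address; we take it as a parameter since user code can't check that.
func move2heapSketch(s []task, onStack bool) []task {
	if !onStack {
		// Backing store already on the heap: identity function.
		return s
	}
	// Backing store on the stack: allocate a heap copy of the same
	// length and capacity, copy the elements, and return the copy.
	heapCopy := make([]task, len(s), cap(s))
	copy(heapCopy, s)
	return heapCopy
}

func main() {
	s := []task{{1}, {2}}
	fmt.Println(len(move2heapSketch(s, true)), len(move2heapSketch(s, false)))
}
```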

This ensures that for our original extract code, if the number of items fits in our small stack-allocated buffer, we perform exactly 1 allocation of exactly the right size. If the number of items exceeds the capacity of our small stack-allocated buffer, we do our normal doubling allocation once the stack-allocated buffer overflows.

The optimization that Go 1.26 does is actually better than the hand-optimized code, because it does not require the extra allocation+copy that the hand-optimized code always does at the end. It requires the allocation+copy only in the case that we’ve exclusively operated on a stack-backed slice up to the return point.

We do pay the cost for a copy, but that cost is almost completely offset by the copies in the startup phase that we no longer have to do. (In fact, the new scheme at worst has to copy one more element than the old scheme.)

Wrapping up

Hand optimization can still be beneficial, especially if you have a good estimate of the slice size ahead of time. But hopefully the compiler will now catch a lot of the simple cases for you and allow you to focus on the remaining ones that really matter.

There are a lot of details that the compiler needs to ensure to get all these optimizations right. If you think that one of these optimizations is causing correctness or (negative) performance issues for you, you can turn them off with -gcflags=all=-d=variablemakehash=n. If turning these optimizations off helps, please file an issue so we can investigate.

Footnotes

[1] Go stacks do not have any alloca-style mechanism for dynamically-sized stack frames. All Go stack frames are constant sized.
