We’re excited to announce the public preview of the User Configuration API in the Microsoft Graph beta endpoint. The User Configuration API is a new set of endpoints that you can use to create, read, update, and delete user configuration objects in Exchange Online mail folders. User configuration objects—also known as folder associated items (FAIs)—are items associated with a specific mail folder, and each configuration object in a folder must have a unique key.
Many solutions need a reliable way to store and retrieve per-folder configuration data alongside mailbox content—whether that’s application state, settings, or other metadata scoped to a folder. The userConfiguration resource supports multiple payload styles so you can store what best fits your scenario.
The beta release includes full CRUD support for userConfiguration objects.
For example, you can read a configuration from either the signed-in user (/me) or a specific user (/users/{id}), scoped to a mail folder:
GET /me/mailFolders/{mailFolderId}/userConfigurations/{userConfigurationId}
GET /users/{usersId}/mailFolders/{mailFolderId}/userConfigurations/{userConfigurationId}
The API uses dedicated permissions for mailbox configuration items:
MailboxConfigItem.Read (least privileged) and MailboxConfigItem.ReadWrite (higher privileged). As always, choose the least privileged permissions your application needs.
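As a quick illustration, here is a minimal TypeScript sketch of reading a configuration object with fetch. Token acquisition and error handling are assumed to happen elsewhere (for example, via MSAL); treat this as a sketch rather than official sample code.
// Sketch: read a userConfiguration object from the Graph beta endpoint.
// Assumes an access token with MailboxConfigItem.Read acquired elsewhere.
async function getUserConfiguration(
  accessToken: string,
  mailFolderId: string,
  userConfigurationId: string
) {
  const url = `https://graph.microsoft.com/beta/me/mailFolders/${mailFolderId}/userConfigurations/${userConfigurationId}`;
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) throw new Error(`Graph request failed: ${response.status}`);
  return response.json(); // the userConfiguration object
}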
The easiest way to begin is to explore the API surface and try calls interactively.
Review the resource and method docs:
Use Graph Explorer to test requests quickly:
As you explore the User Configuration API in beta, we want to hear what’s working well and where we can improve—especially around usability, gaps, and real-world scenarios.
Send feedback to: exouserconfigurationapifeedback@microsoft.com
The post Introducing the Microsoft Graph User Configuration API (preview) appeared first on Microsoft 365 Developer Blog.
We enter 2026 with the SharePoint Framework stronger than ever. Adoption has expanded, feedback has sharpened our priorities, and the platform continues to power intelligent and scalable experiences across Microsoft 365. December marked an important milestone with the release of SPFx version 1.22, which delivered platform improvements based directly on customer and partner feedback. Building on that momentum, we are announcing the 1.22.2 release, which addresses known npm audit issues and introduces a new model for resolving them on a monthly basis to reduce security concerns. This year we continue accelerating on the SPFx side, bringing new features, AI-assisted scenarios, and improvements shaped by feedback and input from our global community.
Since launching this monthly blog series in September 2025, our goal has been to provide clear insight into the SPFx roadmap and maintain an open and predictable communication rhythm with our ecosystem. With the January 2026 update, we continue that commitment by highlighting the near-term roadmap, how we are addressing vulnerabilities with 1.22.2, and the longer-term investment areas that will support developers building modern Microsoft 365 experiences.
Your feedback continues to shape the direction of SPFx. Real world observations and suggestions from customers and partners have guided our priorities, informed design decisions, and helped validate the work that moves from preview to production readiness. This collaboration ensures that SPFx evolves in a way that supports enterprise scale solutions and delivers the flexibility that teams expect when building across Microsoft 365.
Looking ahead, 2026 is set to be an important year for SPFx. In addition to the new monthly minor release model, we are preparing a set of roadmap updates that focus on developer productivity, long-term sustainability, and better alignment with evolving Microsoft 365 platform investments. These upcoming investments remain grounded in stability, performance, and the ability to build richer and more integrated user experiences. As always, we will update the public SPFx roadmap and share details through these monthly posts as plans continue to progress.
Thank you for your continued partnership and for helping guide the future of SPFx as we enter 2026.
We are excited to announce the release of a new SharePoint Framework debug toolbar in SharePoint Online, which will start rolling out to production in the coming weeks. The toolbar enhances the developer experience with new UX-level functionality when developers are debugging their solutions on live SharePoint sites. When a solution is being debugged, an additional debugging toolbar is shown at the top of the page with extra options for developers. We will continue evolving this experience with additional options and settings, also based on your feedback.
See more details on the debug toolbar in the following documentation:
As a developer, you will occasionally see a new feedback option pop up in the UX so that we can more efficiently collect your input for future planning. Thank you in advance for your time and input.
Here’s a quick video by Bert Jansen and Vesa Juvonen showcasing how the debug toolbar works when you are developing solutions with SharePoint Framework.
Starting in 2026, we will bring more predictability to how we address vulnerabilities reported by the npm audit command. Our goal is to resolve any reported issues as quickly as possible.
Many npm audit warnings are false positives. They come from the local development environment for SPFx and do not represent actual risks for SharePoint Framework. These findings would only matter if the flagged npm package was executed on a server. SharePoint Framework solutions do not run server code, and these packages are only used during build and debugging on the developer computer.
Even though these findings are not runtime risks for SharePoint Framework, we want to avoid unnecessary concern and confusion. To support clean and predictable development environments, we will address reported vulnerabilities through potential monthly minor releases. These updates, such as version 1.22.2, focus on keeping npm audit reports clean and helping developers work with confidence.
See details on the 1.22.2 release on the release notes:
We are evolving towards a quarterly release cycle, providing more predictability on new feature introductions and updates. We will update the public roadmap with any schedule and feature updates as we move forward in this journey.
Here is the set of investments we are planning to ship in the upcoming SPFx releases:
This will be a server-side update that does not require a client-side update as a new SPFx version.
This release focuses on open sourcing the templates and tooling to create SPFx solutions, enabling our ecosystem to optionally build their own templates. We also want to focus on providing additional value for optimizing developer experience and providing new extensibility options.
This version continues providing new extensibility options aligned with the future direction of SharePoint. We are also expecting to have other new features and capabilities as part of this release, which will be disclosed a bit later.
We also continue further innovation in the AI space with a focus on both customer features and developer tooling. More on this in future roadmap updates during 2026.
We encourage you to continue providing feedback to support our product planning for the upcoming semesters. We already have an extensive list of ideas and enhancements in mind but are always interested in your input.
We continue expanding the SharePoint platform to unlock more innovation across Microsoft 365:
We encourage you to explore these capabilities and see how they can help you build the next generation of solutions for your organization and customers.
If you are planning to build experiences for Microsoft 365, we strongly recommend joining our community calls and the broader Microsoft 365 and Power Platform Community activities. These cover Microsoft 365 Copilot, Power Platform, SharePoint, Microsoft Teams, Copilot Studio, Microsoft Graph, Microsoft Viva, and more. You can find call details and community assets at https://aka.ms/community/home.
You might also be interested in our SharePoint partner showcase series where we highlight solutions built with SharePoint. Each episode includes a video and a blog post with additional details. If you are creating something with SharePoint and would like to be featured, you can let us know by signing up through the provided form and we will contact you to schedule a recording.
We are excited to share that the SharePoint Hackathon returns in March 2026 with an event running from March 2 to March 16. This year includes updated submission categories that reflect the new features and capabilities of SharePoint. We were inspired by the outstanding examples of intelligent portals powered by AI and SPFx in the 2025 hackathon and we look forward to seeing what the community creates next. See more on the SharePoint 25th anniversary and SharePoint Hackathon 2026 from https://aka.ms/SPat25.
You can also follow us on LinkedIn or X to stay up to date on Microsoft 365 Platform announcements.
Got feedback or input on this blog post? Leave a comment and we will get back to you.
Happy coding! Sharing is Caring!
The post SharePoint Framework (SPFx) roadmap update – January 2026 appeared first on Microsoft 365 Developer Blog.
Most people think ASCII art is simple, a nostalgic remnant of the early internet. But when the GitHub Copilot CLI team asked for a small entrance banner for the new command-line experience, they discovered the opposite: an ASCII animation in a real-world terminal is one of the most constrained UI engineering problems you can take on.
Part of what makes this even more interesting is the moment we’re in. Over the past year, CLIs have seen a surge of investment as AI-assisted and agentic workflows move directly into the terminal. But unlike the web—where design systems, accessibility standards, and rendering models are well-established—the CLI world is still fragmented. Terminals behave differently, have few shared standards, and offer almost no consistent accessibility guidelines. That reality shaped every engineering decision in this project.
Different terminals interpret ANSI color codes differently. Screen readers treat fast-changing characters as noise. Layout engines vary. Buffers flicker. Some users override global colors for accessibility. Others throttle redraw speed. There is no canvas, no compositor, no consistent rendering model, and no standard animation framework.
So when the animated Copilot mascot flew into the terminal, it looked playful. But behind it was serious engineering work, unexpected complexity, a custom design toolchain, and a tight pairing between a designer and a long-time CLI engineer.
That complexity only became fully visible once the system was built. In the end, animating a three-second ASCII banner required over 6,000 lines of TypeScript—most of it dedicated not to visuals, but to handling terminal inconsistencies, accessibility constraints, and maintainable rendering logic.
This is the technical story of how it came together.
Before diving into the build process, it’s worth calling out why this problem space is more advanced than it looks.
Unlike browsers (DOM), native apps (views), or graphics frameworks (GPU surfaces), terminals treat output as a stream of characters. There’s no native concept of frames, layers, or animation timelines.
Because of this, every “frame” has to be manually repainted using cursor movements and redraw commands. There’s no compositor smoothing anything over behind the scenes. Everything is stdout writes + ANSI control sequences.
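In practice, a repaint is just a write of control sequences followed by the frame text. A minimal sketch using standard escape sequences:
// Home the cursor (\x1b[H), erase from cursor to end of screen (\x1b[J),
// then write the next frame. This is the entire "rendering model."
function paintFrame(frame: string): void {
  process.stdout.write("\x1b[H\x1b[J" + frame);
}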
ANSI escape codes like \x1b[35m (magenta) or \x1b[H (cursor home) behave differently across terminals—not just in how they render, but in whether they’re supported at all. Some environments (like Windows Command Prompt or older versions of PowerShell) have limited or no ANSI support without extra configuration.
But even in terminals that do support ANSI, the hardest part isn’t the cursor movement. It’s the colors.
When you’re building a CLI, you realistically have three approaches to color: assume 24-bit truecolor and accept that some terminals will ignore it, quantize everything to the 256-color palette, or stick to the basic 16 ANSI colors and let each terminal’s theme decide how they actually render.
For the Copilot CLI animation, this meant treating color as a semantic system, not a literal one: Instead of committing specific RGB values, the team mapped high-level “roles” (eyes, goggles, shadow, border) to ANSI colors that degrade gracefully across different terminals and accessibility settings.
Terminals are used by developers with a wide range of visual abilities—not just blind users with screen readers, but also low-vision users, color-blind users, and anyone working in high-contrast or customized themes.
That means you can’t assume anything about how color, contrast, or motion will be perceived.
This is also why the Copilot CLI animation ended up behind an opt-in flag early on—accessibility constraints shaped the architecture from the start.
These constraints guided every decision in the Copilot CLI animation. The banner had to work when colors were overridden, when contrast was limited, and even when the animation itself wasn’t visible.
Ink lets you build terminal interfaces using React components, but it provides no animation primitives, no frame scheduler, and no control over terminal-specific rendering quirks.
Which meant animation logic had to be handcrafted.
There are tools for ASCII art, but virtually none for frame-by-frame ASCII animation, per-character ANSI color, or previewing the results across terminals.
Even existing ANSI preview tools don’t simulate how different terminals remap colors or handle cursor updates, which makes accurate design iteration almost impossible without custom tooling. So the team had to build one.
Cameron Foxly (@cameronfoxly), a brand designer at GitHub with a background in animation, was asked to create a banner for the Copilot CLI.
“Normally, I’d build something in After Effects and hand off assets,” Cameron said. “But engineers didn’t have the time to manually translate animation frames into a CLI. And honestly, I wanted something more fun.”
He’d seen the static ASCII intro in Claude Code and knew Copilot deserved more personality.
The 3D Copilot mascot flying in to reveal the CLI logo felt right. But after attempting to create just one frame manually, the idea quickly ran into reality.
“It was a nightmare,” Cameron said. “If this is going to exist, I need to build my own tool.”
Cameron opened an empty repository in VS Code and began asking GitHub Copilot for help scaffolding an animation MVP that could load ASCII frames from disk and play them back in a loop in the terminal.
Within an hour, he had a working prototype that was monochrome, but functional.
Below is a simplified example variation of the frame loop logic Cameron prototyped:
import fs from "fs";
import readline from "readline";

/**
 * Load ASCII frames from a directory.
 */
const frames = fs
  .readdirSync("./frames")
  .filter(f => f.endsWith(".txt"))
  .sort() // ensure a deterministic frame order regardless of platform
  .map(f => fs.readFileSync(`./frames/${f}`, "utf8"));

let current = 0;

function render() {
  // Move cursor to top-left of terminal
  readline.cursorTo(process.stdout, 0, 0);
  // Clear the screen below the cursor
  readline.clearScreenDown(process.stdout);
  // Write the current frame
  process.stdout.write(frames[current]);
  // Advance to next frame
  current = (current + 1) % frames.length;
}

// 75ms = ~13fps. Higher can cause flicker in some terminals.
setInterval(render, 75);
This introduced the first major obstacle: color. The prototype worked in monochrome, but the moment color was added, inconsistencies across terminals—and accessibility constraints—became the dominant engineering problem.
The Copilot brand palette is vibrant and high-contrast, which is great for web but exceptionally challenging for terminals.
ANSI terminals support several color modes: the basic 4-bit palette (16 colors), 8-bit color (256 colors), and, on some terminals, 24-bit truecolor.
Even in 256-color mode, terminals remap colors based on terminal themes, OS settings, and user accessibility overrides.
Which means you can’t rely on exact hues. You have to design with variability in mind.
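As a rough illustration of designing around that variability, one common technique (not necessarily the one the team chose) is to quantize an RGB value into the xterm 256-color cube instead of assuming truecolor support:
// Map an RGB color to the nearest entry in the xterm 6x6x6 color cube
// (palette indices 16-231). The terminal theme may still remap the result.
function rgbTo256(r: number, g: number, b: number): number {
  const level = (v: number) => Math.round((v / 255) * 5); // 0..5 per channel
  return 16 + 36 * level(r) + 6 * level(g) + level(b);
}

// Usage: wrap a character in an 8-bit foreground color sequence.
const pink = `\x1b[38;5;${rgbTo256(255, 105, 180)}m█\x1b[0m`;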
Cameron needed a way to paint characters with ANSI color roles while previewing how they look in different terminals.
He took a screenshot of the Wikipedia ANSI table, handed it to Copilot, and asked it to scaffold a palette UI for his tool.
A simplified version:
function applyColor(char, color) {
  // Minimal example: real implementation needed support for roles,
  // contrast testing, and multiple ANSI modes.
  const codes = {
    magenta: "\x1b[35m",
    cyan: "\x1b[36m",
    white: "\x1b[37m"
  };
  return `${codes[color]}${char}\x1b[0m`; // Reset after each char
}
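Used directly, the helper looks like this:
// Prints a magenta block character, then resets styling.
console.log(applyColor("█", "magenta"));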
This enabled Cameron to paint ANSI-colored ASCII like you would in Photoshop, one character at a time.
But now he had to export it into the real Copilot CLI codebase.
Ink is a React renderer for building CLIs using JSX components. Instead of writing to the DOM, components render to stdout.
Cameron asked Copilot to help generate an Ink component that would render each frame of the animation as a column of text lines:
import React from "react";
import { Box, Text } from "ink";

/**
 * Render a single ASCII frame.
 */
export const CopilotBanner = ({ frame }) => (
  <Box flexDirection="column">
    {frame.split("\n").map((line, i) => (
      <Text key={i}>{line}</Text>
    ))}
  </Box>
);
And a minimal animation wrapper:
export const AnimatedBanner = () => {
  const [i, setI] = React.useState(0);

  React.useEffect(() => {
    const id = setInterval(() => setI(x => (x + 1) % frames.length), 75);
    return () => clearInterval(id);
  }, []);

  return <CopilotBanner frame={frames[i]} />;
};
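One detail worth noting in that wrapper: the cleanup function returned from useEffect clears the interval when Ink unmounts the component, so no stray timer keeps advancing frames after the banner is gone.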
This gave Cameron the confidence to open a pull request (his first engineering pull request in nine years at GitHub).
“Copilot filled in syntax I didn’t know,” Cameron said. “But I still made all the architectural decisions.”
Now it was time for the engineering team to turn a prototype into something production-worthy.
Andy Feller (@andyfeller), a long-time GitHub engineer behind the GitHub CLI, partnered with Cameron to bring the animation into the Copilot CLI codebase.
Unlike browsers—which share rendering engines, accessibility APIs, and standards like WCAG—terminal environments are a patchwork of behaviors inherited from decades-old hardware like the VT100. There’s no DOM, no semantic structure, and only partial agreement on capabilities across terminals. This makes even “simple” UI design problems in the terminal uniquely challenging, especially as AI-driven workflows push CLIs into daily use for more developers.
“There’s no framework for terminal animations,” Andy explained. “We had to figure out how to do this without flickering, without breaking accessibility, and across wildly different terminals.”
Andy broke the engineering challenges into four broad categories:
Most terminals repaint the entire viewport when new content arrives. At the same time, CLIs come with a strict usability expectation: when developers run a command, they want to get to work immediately. Any animation that flickers, blocks input, or lingers too long actively degrades the experience.
This created a core tension the team had to resolve: how to introduce a brief, animated banner without slowing startup, stealing focus, or destabilizing the terminal render loop.
In practice, this was complicated by the fact that terminals behave differently under load: some buffer output, some repaint immediately, and some throttle redraws.
To avoid flicker while keeping the CLI responsive across popular terminals like iTerm2, Windows Terminal, and VS Code, the team had to carefully coordinate several interdependent concerns, from frame timing and screen clearing to staying out of the way of user input.
The result was an animation treated as a non-blocking, best-effort enhancement—visible when it could be rendered safely, but never at the expense of startup performance or usability.
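A simplified sketch of that best-effort gating, assuming Node.js; the real checks in the CLI are more involved, and the names here are illustrative:
// Only attempt the animation when it can render safely.
function shouldAnimate(): boolean {
  if (!process.stdout.isTTY) return false; // output is piped or redirected
  if (process.env.CI) return false; // non-interactive environments
  if ((process.stdout.rows ?? 0) < 15) return false; // not enough room to draw
  return true;
}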
“ANSI color consistency simply doesn’t exist,” Andy said.
Most modern terminals support 8-bit color, allowing CLIs to choose from 256 colors. However, how those colors are actually rendered varies widely based on terminal themes, OS settings, and user accessibility overrides. In practice, CLIs can’t rely on exact hues—or even consistent contrast—across environments.
The Copilot banner introduced an additional complexity: although it’s rendered using text characters, the block-letter Copilot logo functions as a graphical object, not readable body text. Under accessibility guidelines, non-text graphical elements have different contrast requirements than text, and they must remain perceivable without relying on fine detail or precise color matching.
To account for this, the team deliberately chose a minimal 4-bit ANSI palette—one of the few color modes most terminals allow users to customize—to ensure the animation remained legible under high-contrast themes, low-vision settings, and color overrides.
This meant the team had to design for reinterpretation rather than exact color fidelity.
Rather than encoding brand colors directly, the animation maps semantic roles—such as borders, eyes, highlights, and text—to ANSI color slots that terminals can reinterpret safely. This allows the banner to remain recognizable without assuming control over the user’s color environment.
Cameron’s prototype was a great starting point for Andy to incorporate into the Copilot CLI, but it wasn’t without its challenges:
First, the animation was broken down into distinct animation elements that could be used to create separate light and dark themes:
type AnimationElements =
  | "block_text"
  | "block_shadow"
  | "border"
  | "eyes"
  | "head"
  | "goggles"
  | "shine"
  | "stars"
  | "text";

type AnimationTheme = Record<AnimationElements, ANSIColors>;

const ANIMATION_ANSI_DARK: AnimationTheme = {
  block_text: "cyan",
  block_shadow: "white",
  border: "white",
  eyes: "greenBright",
  head: "magentaBright",
  goggles: "cyanBright",
  shine: "whiteBright",
  stars: "yellowBright",
  text: "whiteBright",
};

const ANIMATION_ANSI_LIGHT: AnimationTheme = {
  block_text: "blue",
  block_shadow: "blackBright",
  border: "blackBright",
  eyes: "green",
  head: "magenta",
  goggles: "cyan",
  shine: "whiteBright",
  stars: "yellow",
  text: "black",
};
Next, the overall animation and its frames were modeled to capture the content, colors, and durations needed to animate the banner:
interface AnimationFrame {
  title: string;
  duration: number;
  content: string;
  colors?: Record<string, AnimationElements>; // Map of "row,col" positions to animation elements
}

interface Animation {
  metadata: {
    id: string;
    name: string;
    description: string;
  };
  frames: AnimationFrame[];
}
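The render path shown later resolves each character’s color through a per-position lookup. The actual getCharacterColor implementation isn’t included in this post, so the following is an illustrative sketch of the idea:
// Illustrative sketch: resolve a character's semantic role at "row,col",
// then map it through the active theme; unmapped positions fall back to text.
function getCharacterColor(
  row: number,
  col: number,
  frame: AnimationFrame,
  theme: AnimationTheme,
  hasDarkTerminalBackground: boolean // the real code also uses this for defaults
) {
  const element = frame.colors?.[`${row},${col}`];
  return element ? theme[element] : theme.text;
}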
Then, each animation frame was captured in a way that separates frame content from stylistic and animation details, resulting in over 6,000 lines of TypeScript to safely animate three seconds of the Copilot logo across terminals with wildly different rendering and accessibility behaviors:
const frames: AnimationFrame[] = [
  {
    title: "Frame 1",
    duration: 80,
    content: `
┌┐
││
││
└┘`,
    colors: {
      "1,0": "border",
      "1,1": "border",
      "2,0": "border",
      "2,1": "border",
      "10,0": "border",
      "10,1": "border",
      "11,0": "border",
      "11,1": "border",
    },
  },
  {
    title: "Frame 2",
    duration: 80,
    content: `
┌── ──┐
│ │
█▄▄▄
███▀█
███ ▐▌
███ ▐▌
▀▀█▌
▐ ▌
▐
│█▄▄▌ │
└▀▀▀ ──┘`,
    colors: {
      "1,0": "border",
      "1,1": "border",
      "1,2": "border",
      "1,8": "border",
      "1,9": "border",
      "1,10": "border",
      "2,0": "border",
      "2,10": "border",
      "3,1": "head",
      "3,2": "head",
      "3,3": "head",
      "3,4": "head",
      "4,1": "head",
      "4,2": "head",
      "4,3": "goggles",
      "4,4": "goggles",
      "4,5": "goggles",
      "5,1": "head",
      "5,2": "goggles",
      "5,3": "goggles",
      "5,5": "goggles",
      "5,6": "goggles",
      "6,1": "head",
      "6,2": "goggles",
      "6,3": "goggles",
      "6,5": "goggles",
      "6,6": "goggles",
      "7,3": "goggles",
      "7,4": "goggles",
      "7,5": "goggles",
      "7,6": "goggles",
      "8,3": "eyes",
      "8,5": "head",
      "9,4": "head",
      "10,0": "border",
      "10,1": "head",
      "10,2": "head",
      "10,3": "head",
      "10,4": "head",
      "10,10": "border",
      "11,0": "border",
      "11,1": "head",
      "11,2": "head",
      "11,3": "head",
      "11,8": "border",
      "11,9": "border",
      "11,10": "border",
    },
  },
  // ...remaining frames omitted
];
Finally, each animation frame is rendered by building segments of text based on consecutive color usage, with the necessary ANSI escape codes:
{frameContent.map((line, rowIndex) => {
  const truncatedLine = line.length > 80 ? line.substring(0, 80) : line;
  const coloredChars = Array.from(truncatedLine).map((char, colIndex) => {
    const color = getCharacterColor(rowIndex, colIndex, currentFrame, theme, hasDarkTerminalBackground);
    return { char, color };
  });

  // Group consecutive characters with the same color
  const segments: Array<{ text: string; color: string }> = [];
  let currentSegment = { text: "", color: coloredChars[0]?.color || theme.COPILOT };

  coloredChars.forEach(({ char, color }) => {
    if (color === currentSegment.color) {
      currentSegment.text += char;
    } else {
      if (currentSegment.text) segments.push(currentSegment);
      currentSegment = { text: char, color };
    }
  });
  if (currentSegment.text) segments.push(currentSegment);

  return (
    <Text key={rowIndex} wrap="truncate">
      {segments.map((segment, segIndex) => (
        <Text key={segIndex} color={segment.color}>
          {segment.text}
        </Text>
      ))}
    </Text>
  );
})}
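Grouping consecutive characters by color is a small but meaningful optimization: the number of <Text> elements, and the ANSI escape sequences Ink emits, scales with the number of color changes per line rather than with the number of characters.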
The engineering team approached the banner with the same philosophy as the GitHub CLI’s accessibility work:
“CLI accessibility is under researched,” Andy noted. “We’ve learned a lot from users who are blind as well as users with low vision, and those lessons shaped this project.”
Because of this, the animation is opt-in and gated behind its own flag—so it’s not something developers see by default. And when developers run the CLI in --screen-reader mode, the banner is automatically skipped so no decorative characters or motion are sent to assistive technologies.
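A sketch of that gating, with illustrative names rather than the CLI’s actual flags and options:
// Skip the banner unless explicitly opted in, and always skip it
// when screen reader mode is active.
interface BannerOptions {
  animationEnabled: boolean; // the opt-in flag
  screenReaderMode: boolean; // e.g., set when running with --screen-reader
}

function shouldShowBanner(opts: BannerOptions): boolean {
  if (opts.screenReaderMode) return false; // never send motion to assistive tech
  return opts.animationEnabled;
}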
By the end of the refactor, the team had a maintainable system: frames stored as plain text, semantic color roles layered on top, themes applied at runtime, and accessibility gating built in.
This pattern—storing frames as plain text, layering semantic roles, and applying themes at runtime—isn’t specific to Copilot. It’s a reusable approach for anyone building terminal UIs or animations.
A “simple ASCII banner” turned into more than 6,000 lines of TypeScript, a custom design tool, and a reusable pattern for accessible terminal animation.
“The most rewarding part was stepping into open source for the first time,” Cameron said. “With Copilot, I was able to build out my MVP ASCII animation tool into a full open source app at ascii-motion.app. Someone fixed a typo in my README, and it made my day.”
As Andy pointed out, building accessible experiences for CLIs is still largely unexplored territory and far behind the tooling and standards available for the web.
Today, developers are already contributing to Cameron’s ASCII Motion tool, and the Copilot CLI team can ship new animations without rebuilding the system.
This is what building for the terminal demands: deep understanding of constraints, discipline around accessibility, and the willingness to invent tooling where none exists.
The GitHub Copilot CLI brings AI-assisted workflows directly into your terminal — including commands for explaining code, generating files, refactoring, testing, and navigating unfamiliar projects.
The post From pixels to characters: The engineering behind GitHub Copilot CLI’s animated ASCII banner appeared first on The GitHub Blog.
Today’s data release marks our second full year of regular releases since the launch of the GitHub Innovation Graph. The Innovation Graph serves as a stable, regularly updated source for aggregated statistics on public software development activity around the world, informing public policy, strengthening research, guiding funding decisions, and equipping organizations with the evidence needed to build secure and resilient AI systems.
With our new data release, we’ve updated the bar chart race videos on the git pushes, repositories, developers, and organizations global metrics pages.
Let’s take a look back at some of the progress the Innovation Graph has helped drive.
One of the most rewarding aspects of the past year has been seeing the growing range of research questions addressed with Innovation Graph data. Recent papers have explored everything from global collaboration networks to the institutional foundations of digital capabilities.
These studies showcase how network analysis techniques can be applied to Innovation Graph data, in addition to earlier work we referenced last year linking open source to economic value, innovation measurement, labor markets, and AI-driven productivity through other methodologies.
Research by an economist at the Federal Reserve Board uses GitHub data to examine how the density of Protestant mission stations correlates with present-day participation in digital production across African countries.
Researchers from MIT, Carnegie Mellon, and the University of Chicago analyze international collaboration patterns in the Innovation Graph’s economy collaborators dataset, shedding light on how common colonial histories influence modern software development collaboration activities.
A social network analysis by researchers at Midwestern State University and Tarleton State University highlights the tightly connected, small-world structure of global OSS collaboration.
Another team of researchers extends economic complexity measures into the digital economy by leveraging the geographic distribution of programming languages in open source software, showing that software economic complexity predicts GDP, income inequality, and emissions, which has important policy implications.
The Innovation Graph and related GitHub datasets were featured prominently in academic and policy discussions at a wide range of venues.
We were also encouraged to see Innovation Graph data referenced in major international reporting. In 2025, two pieces in The Economist drew on GitHub data examining China’s approach to open technology (June 17, 2025) and India’s potential role as a distinctive kind of AI superpower (September 18, 2025). Coverage like this reinforces the role that data on open source activity can play in understanding geopolitical and economic shifts.
Once again, Innovation Graph data contributed to several flagship reports.
We continue to value these opportunities to support macro-level measurement efforts, and we’re equally excited by complementary work that dives deeper into regional, institutional, and community-level dynamics.
As we move through 2026, we’re grateful for the community that has formed around the Innovation Graph, and we’re looking forward to building the next chapter together. Our focus will be on deepening collaboration, welcoming new perspectives, and creating clearer pathways for people to apply the Innovation Graph data in their own contexts, from strategy and research to product development and policy.
The post Year recap and future goals for the GitHub Innovation Graph appeared first on The GitHub Blog.