Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Eclipse Foundation offers enterprise-grade open source alternative to Microsoft’s VS Code Marketplace


Platform engineering requires something of a leap of faith. Developers need to “believe” that foundation-level tools, libraries, and repositories will always offer a fully stocked larder of services, all optimized to the requisite weight, sharpness, and durability. 

Seeking to ensure its store cupboard is presented properly, the Eclipse Foundation announced on Tuesday the Open VSX Managed Registry. The technology represents the open source community’s first foundation-operated managed service for critical developer infrastructure.

What is Open VSX?

While Microsoft’s proprietary parentage means it logically owns the VS Code Marketplace, Open VSX is the open source, vendor-neutral extension registry for tools built on the VS Code extension API. Open VSX stands for Visual Studio eXtensions; an extension registry in this sense is a central repository where software developers can search for, publish, and install code extensions or plugins (bug-fixing aids, automated formatters, code autocompletion tools, and the like).
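For a concrete sense of what “registry” means in practice, Open VSX exposes a public REST API. The sketch below builds the metadata endpoint for a single extension; the `/api/{namespace}/{name}` path is an assumption based on how the public registry is commonly queried, so verify it against the current Open VSX documentation before relying on it.

```python
# Sketch: querying the Open VSX registry for one extension's metadata.
# The /api/{namespace}/{name} path is assumed here; check the current
# Open VSX API docs before depending on it.
import json
import urllib.request


def extension_metadata_url(namespace: str, name: str) -> str:
    """Build the metadata endpoint URL for a single extension."""
    return f"https://open-vsx.org/api/{namespace}/{name}"


def fetch_extension_metadata(namespace: str, name: str) -> dict:
    """Fetch and decode the extension's JSON metadata (requires network access)."""
    with urllib.request.urlopen(extension_metadata_url(namespace, name)) as resp:
        return json.load(resp)
```

A tool like VSCodium resolves and downloads extensions through endpoints of this shape, which is exactly the machine-to-machine traffic the managed registry is built to sustain.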

Open VSX Managed Registry serves an ecosystem of AI-native IDEs, cloud development environments, and VS Code-compatible platforms. These include Amazon’s Kiro, Google’s Antigravity, Cursor, VSCodium, Windsurf (an AI-native coding assistant), Ona (built on Gitpod foundations), and others. Commercial adopters receive a 99.95% uptime SLA, service credits, defined support tiers, and enterprise-grade operational assurance for sustained production-scale usage.

“Open VSX remains open and accessible to developers, open source projects, and organizations of all sizes, but long-term reliability and security don’t happen by accident.” – Mike Milinkovich, executive director, Eclipse Foundation.

Balancing openness & robustness

At the helm of this project is Mike Milinkovich, executive director of the Eclipse Foundation. Acknowledging that there’s a real balancing act here, Milinkovich tells The New Stack that his team’s approach is about finding the right level at which to stay true to open source principles while ensuring the funding is in place to operate critical infrastructure at scale.

“Open VSX remains open and accessible to developers, open source projects, and organizations of all sizes, but long-term reliability and security don’t happen by accident. This model allows us to preserve openness while ensuring the platform can be operated and trusted at scale,” Milinkovich says.

As software engineers now work with AI-driven development tools that accelerate automation, drive continuous installs, and create ever-busier channels of machine-to-machine traffic, extension registries have become high-throughput components of always-on infrastructure.

From community to business continuity

Thabang Mashologu, CMO of the Eclipse Foundation, tells The New Stack that there’s an important point of progress to note on the evolutionary curve for extension registries; what was once a technology that enjoyed primarily community-scale usage now needs to reflect sustained commercial platform dependency at a global scale.

“The priority for Open VSX Managed Registry is simple: keep critical open source infrastructure open, secure, reliable, and sustainable for the developers and projects that depend on it,” Mashologu says. “Free access remains for the broader community while vendors and enterprises benefit from a resilient, vendor-neutral platform that delivers the stability and performance they need to build and scale with confidence.”

Open VSX now serves more than 300 million downloads per month, with peak daily traffic exceeding 200 million requests. The registry hosts over 10,000 extensions from more than 7,000 “publishers” (meaning teams, special interest groups, commercial software engineering units, but mostly individuals), and it continues to grow rapidly as adoption expands across AI-native developer tooling and cloud-based platforms.

AWS, Google, & Cursor sign up

Initial customers of the Open VSX Managed Registry include Amazon Web Services, Google, and Cursor. Collectively, these organizations say they are adopting the managed service to secure production-grade reliability, defined service levels, and predictable scaling for enterprise developer platforms.

Operating a global extension registry at this scale requires significant investment in compute capacity, bandwidth, storage, security operations, and the engineering expertise necessary to maintain availability and resilience. AI-driven development is accelerating that demand. These factors underpin the Eclipse Foundation’s decision to battle-harden the offering.

With automated workflows and coding agents, a single developer can now generate infrastructure load comparable to that of dozens of traditional users, increasing both traffic volume and operational complexity. The service is designed for organizations that use Open VSX as critical infrastructure in commercial products, AI-scale services, or enterprise development environments; as such, it aligns operational accountability with the expectations of production systems.

The price of freedom

Individual developers and open source projects never pay to use the Open VSX Registry. Publishing, search, and standard development workflows remain unchanged. Open source IDEs and community projects continue to benefit from what the Eclipse Foundation calls “generous” free-tier limits.

The team further states that the managed service is typically far more cost-effective than self-hosting equivalent global infrastructure at scale. At the commercial level, it says organizations can now rely on defined service levels while maintaining vendor neutrality and transparent governance.

There’s a defined shift happening here. Eclipse CMO Mashologu calls this out as a moment when AI agents have “changed the economics” of developer infrastructure.

Where extension registries were typically accessed by human developers, AI agents, as part of platform engineering projects, now require a new level of machine-scale traffic throughput. This likely underpins the Eclipse Foundation’s two-tiered approach to bolstering both the open community and commercial enterprise use cases.

The post Eclipse Foundation offers enterprise-grade open source alternative to Microsoft’s VS Code Marketplace appeared first on The New Stack.

Read the whole story
alvinashcraft
15 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

AI Is Showing UI Designers the Door


So this month Marcus and I get into a slightly uncomfortable question. If AI can knock out decent interfaces from a text prompt, where does that leave the people whose day job is opening Figma and making screens look nice?

We start with Google Stitch, which has been getting a lot of attention lately. Then we zoom out into something I have become mildly obsessed with, which is building AI skills. Not prompt snippets, but reusable, documented processes that let you get consistent work out of AI without drowning it in context.

App of the Month

This month’s tool is Google Stitch (v2), Google’s AI UI generator. You describe what you want, it produces an interface, and you can do some light manual tweaking.

It is not a full replacement for Figma. The editing controls are basic. The bigger story is what it represents. We are now at the point where a decent, usable UI can be generated fast enough that the real value shifts from "can you draw the screens" to "can you judge what good looks like." That is where experience, and yes, taste, starts to matter.

If you want to compare approaches, I mentioned Figr again, which I still prefer for the quality of what it produces.

Are UI Designers Becoming Vinyl?

The question Stitch raises is not "can AI design interfaces". It clearly can. The question is what happens to the job market when "good enough" becomes cheap, fast, and widely available.

I found myself telling 2 different clients recently that they could probably skip hiring a UI designer. They had tight budgets, tight timelines, and already had solid brand guidelines or a design system. In those situations, I could push the work through AI, iterate it a bit, and get something perfectly serviceable.

That line of advice made me feel a bit grubby. Not because it was wrong for those clients, but because it hints at a bigger shift.

My worry is that UI design becomes like vinyl records. Most people will not need it. A small number will care deeply and pay for it. The middle ground shrinks.

Marcus made the important caveat here. Some designers will still be in demand because they bring something AI cannot easily fake. A distinctive visual style. Creative judgment. Brand thinking. The ability to make something feel like it came from a real point of view, not a model averaging the internet.

We also talked about where UI designers can expand their value, because "I make pretty screens" is not a great long-term career plan.

  • Broaden into UX and problem solving. Look past the interface and into the business problem, user needs, and research.
  • Own the stuff between screens. AI still tends to think screen by screen. Humans are better at flows, journeys, and the messy reality of how people actually get from A to B.
  • Lean into information architecture. For websites especially, the structure and content model matter as much as the visual design.

We used a music analogy that will probably annoy some people, which makes it perfect. AI tools can generate "background" output that is fine for low-stakes use. They will not replace great musicians. But they will reduce the number of gigs available.

AI Skills As a Career Asset

After we finished terrifying UI designers, we moved on to something more useful. I think a lot of roles are going to need an AI toolkit. Not a handful of clever prompts, but a proper library of reusable skills.

When I say "AI skills," I mean documented processes that an AI can follow reliably. Think SOPs you can run repeatedly, not prompt snippets you copy and paste.

I now have around 60 skills in my library, and it is growing constantly. Outside of the Boagworld website, it might be the most valuable business asset I have.

The reason is consistency and context management. AI can produce terrible output when you dump too much information on it at once. Skills let you break work into focused chunks and chain them.

We talked about 3 levels of skills:

Company-level skills

Standard processes that keep things consistent. Proposals. Expense claims. Holiday booking. The sort of stuff that should not depend on one person remembering every step.

Team or discipline skills

For example, UX teams can create skills for personas, journey mapping, surveys, and top task analysis. That helps remove bottlenecks and lets colleagues do decent work without reinventing the wheel.

Individual skills

This is where it gets interesting for your career. These are the skills that capture how you do something, including all the weird little bits you have learned over the years.

A key point here is that the value is not only in having the skill. It is in creating it. Writing down a process forces you to surface assumptions and explain what "good" looks like.

We also got into AI agents. If you describe your skills well, an agent can chain them to complete bigger jobs. I gave a sales example where a meeting transcript can be turned into a CRM entry, follow-up tasks, company research, and a draft proposal with very little manual effort.
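As a rough illustration of that chaining idea (all function names below are hypothetical stand-ins, not any real product’s API), each skill is a documented, repeatable step, and an agent simply feeds one step’s output into the next:

```python
# Hypothetical sketch of "skill chaining": each skill is a documented,
# repeatable step; the agent runs them in sequence, output feeding input.
# The stub logic stands in for real AI calls.


def summarize_transcript(transcript: str) -> str:
    # Skill 1 (stub): condense a meeting transcript to its headline.
    return transcript.strip().splitlines()[0]


def draft_crm_entry(summary: str) -> dict:
    # Skill 2 (stub): turn the summary into a CRM record plus follow-ups.
    return {"note": summary, "follow_up_tasks": ["send recap email"]}


def run_sales_pipeline(transcript: str) -> dict:
    # The "agent": chain the skills into one bigger job.
    return draft_crm_entry(summarize_transcript(transcript))
```

The value of writing each skill down is that the chain stays predictable: every step gets a small, focused context instead of one giant prompt.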

That is exciting. It is also mildly terrifying if you are attached to the idea of being indispensable.

Read of the Month

I mentioned an article that helped me connect a few threads in my own work. UX, conversion rate optimization, and design leadership can look like 3 different things until you realize they all operate on the same system.

The piece is called "How CRO and UX Work Together to Increase Website Conversion".

It frames CRO and UX as two sides of the same coin. CRO asks, "Did they convert?" UX asks, "Was it easy and enjoyable?" I would add that UX also cares about what happens after conversion, because retention is often where the real money is.

The shared foundation is data. Analytics, event tracking, heat maps, session recordings. The same signals can tell you where people struggle and where the biggest conversion wins are likely to be.

It also reinforced something I believe strongly. CRO and UX should not sit in separate silos. Both work best when they cover the entire journey, not just one page at a time.

Marcus’ Joke

"I just purchased an original Van Gogh coffee table. I know it’s original because there’s a bit of veneer missing."

Find The Latest Show Notes





Download audio: https://cdn.simplecast.com/media/audio/transcoded/eea3ff50-d316-4ff7-b8db-24c157eb37ff/ae88e41b-a26d-4404-8e81-f97bca80d60d/episodes/audio/group/cd8f66d1-ea76-4abd-9f49-371cec955fd4/group-item/36db6d17-92f7-4503-a5ec-5ae2c7cc0a2a/128_default_tc.mp3?aid=rss_feed&feed=XJ3MbVN3

When Internal and External Team Members Have Divergent Goals — The Silent Killer of Agile Teams | Viktor Glinka



Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"The root causes for destructive team patterns often lie outside the team itself." - Viktor Glinka

 

Viktor shares a story from a manufacturing organization where one team stood out — and not in a good way. The team was composed of both internal and external members, and what no one saw coming was that their implicit goals were fundamentally divergent: the external members were focused on maximizing revenue for their own company, while the internal members cared deeply about product quality. The signs were visible to anyone who approached them — they barely talked to each other and preferred to work individually. When Viktor tried to raise the topic of cooperation and trust, he was met with awkward silence. One team member finally told him: "I don't want the team to blow up. In my previous experience, I raised this topic and that was the end of the team." Fear kept the truth underground. Viktor brought his observations to the manager, who acknowledged the lack of a shared goal as the root cause — but couldn't fix it because he wasn't authorized to manage the external people. The takeaway was clear: three key success factors for any team are the right team composition with people who want to work together, a shared goal that unites diverse perspectives, and clear expectations set by their manager.

 

In this segment, we talk about LeSS self-designing team workshops and the importance of team composition in scaled setups.

 

Self-reflection Question: Does your team have a shared goal that everyone — including external members and contractors — genuinely understands and cares about? When was the last time you checked?

Featured Book of the Week: The Art of Doing Twice the Work in Half the Time by Jeff Sutherland

Viktor recommends The Art of Doing Twice the Work in Half the Time by Jeff Sutherland as the book that sparked his passion for Scrum. As he puts it: "I know the title is very controversial and often criticized, but I could deeply relate to the stories inside the book. They sparked a passion that is still with me." Viktor also recommends a bonus book: Reinventing Organizations by Frederic Laloux, which showed him the real power of self-organization and validated what he had already started experimenting with in his project management career. It pushed him to explore holacracy, sociocracy, intent-based leadership, and coaching.

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Viktor Glinka

 

Viktor is an organisational consultant and Professional Scrum Master who helps teams and leaders find simpler ways to deliver value while keeping the human side of work at the center. He’s practical, curious, and focused on real outcomes rather than buzzwords. His true passion is adaptability, both in business and in personal life.

 

You can link with Viktor Glinka on LinkedIn.

 





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260421_Viktor_Glinka_Tue.mp3?dest-id=246429

How AI Helps You Express Your Vibe

Ever wondered what your favorite vacation photo sounds like? In this episode, we dive into how Lyria 3 AI helps you find your creative voice by transforming inside jokes, memories, and images into custom audio. #AI #Lyria3 #MadeByGoogle #TechPodcast

Hosted on Acast. See acast.com/privacy for more information.





Download audio: https://sphinx.acast.com/p/open/s/63e39eb02e631f0011a284ac/e/69e744ac66c3374f7ecc90b3/media.mp3

Stop Explaining Your Code Over and Over. Let Code Studio Do It Once



TL;DR: Syncfusion Code Studio can read your codebase and generate structured documentation automatically. You can control how that documentation is written using Custom Agents, and reuse documentation workflows using Skills. The result is clear, searchable documents that help new developers understand the system faster and avoid wasted work.

From codebase chaos to clear documentation

Imagine a new developer joining your team. They open your project and see 50 folders and 500 files.

Then, they check the README, which says: “Start with auth, then payment, then notifications.”

  • But where are these modules exactly?
  • What functions do they export?
  • How do they interact with each other?

They have no idea.

So they ask a senior engineer, who then spends an hour explaining code that should have been documented years ago. Or worse, the new hire misunderstands the system, ships a bug, or rebuilds something that already exists.

👉 The real problem isn’t the code. It’s the lack of organized, readable documentation.

What if your entire codebase could explain itself?

That’s exactly what Syncfusion® Code Studio does. It automatically transforms your code into professional documentation that new developers can read within hours to understand your entire architecture.

What really breaks without good documentation

Here’s what actually happens when your codebase lacks organized documentation:

1. New hires are lost (Day 1 problem)

A new developer joins the team, clones the repository, and sees 500 files spread across dozens of folders. There’s no organized documentation to guide them. They start reading random source code and still understand nothing. They interrupt senior engineers with questions like “What does this module do?” over and over.

2. Everyone rebuilds what already exists

Without documentation showing all existing modules and their APIs, developers end up rebuilding features that already exist. You might even discover three different implementations of the same thing, such as user authentication.

3. Onboarding takes weeks, not days

Slow onboarding kills productivity. Instead of contributing value in their first few days or weeks, new hires spend their time trying to figure out how the system works.

4. Bugs pile up because of misunderstandings

When developers don’t understand the architecture, mistakes are inevitable. A new engineer may not realize how error handling works and accidentally write code that fails silently instead of logging errors, leading to bugs that are hard to diagnose and fix.

All of this traces back to one issue: your codebase has knowledge, but it isn’t accessible.

Here’s the solution: Syncfusion Code Studio writes your code documentation automatically

Syncfusion Code Studio is an AI-powered integrated development environment (IDE) with built-in assistance to support modern software development workflows. It reads your entire codebase and automatically generates clear, structured documentation in just minutes.

To learn more about Code Studio, please visit our introduction blog.

Think of it this way: If your codebase is a messy library of 500 books, Code Studio reads all 500 books and then automatically creates a well-organized library catalog.

Without Code Studio:

  • New hires are confused and spend weeks reading raw source code.
  • Senior developers repeatedly explain the same concepts.

With Code Studio, you get:

  • Clear documentation that explains your entire system.
  • Beginner-friendly explanations that cover both what the code does and why it exists.
  • Powerful search, so developers can find what they need in seconds.

As a result, a new developer can learn more from a few minutes of reading your documentation than from several days of digging through unstructured code.

Prerequisites

Before we start, ensure you have:

  • Syncfusion Code Studio installed and configured using our installation guide.
  • Python installed on your system.

How Syncfusion Code Studio generates documentation using a custom agent

Syncfusion Code Studio can automatically read your codebase and generate documentation, but different teams need documentation in different styles. That’s where Custom Agents come in.

Custom Agents let you control how documentation is written, who it’s written for, and how it’s structured.

What is a custom agent?

A custom agent is like a template: a set of instructions that tells Code Studio exactly how to format and write your documentation.

Think of it this way:

  • Code Studio = A powerful AI that understands your codebase deeply.
  • Custom agent = A set of instructions (stored as a text file) that tells Code Studio how to write your documentation.

Why do you need a custom agent?

1. Different audiences need different documents

A senior engineer needs detailed technical explanations, while a junior developer needs simple, beginner-friendly guidance. Without a custom agent, Code Studio wouldn’t know which style to use. A custom agent tells it exactly who the document is for.

Example: “For this team, write for junior developers.”

2. Consistency

Without clear rules, documentation can become inconsistent over time. Custom agents ensure every generated document follows the same format, style, and structure.

3. Full control

You are not guessing what Code Studio will generate. You explicitly write the rules, so the output matches your expectations exactly.

4. Reusability

You create a custom agent once and reuse it whenever your code changes without rewriting instructions each time.

What actually happens (Step-by-step)

  1. You create a custom agent (a text file with rules) such as “write for beginners, limit code to max 10 lines per code example, use analogies, avoid jargon”.
  2. You tell Code Studio: “Generate documents for this project”.
  3. Code Studio reads your codebase and your custom agent instructions.
  4. Code Studio generates documentation exactly as you instruct.
  5. Result: Documentation that matches your team’s needs perfectly.

The simple analogy:

  • Code Studio is an intelligent librarian who reads all your books and writes a summary.
  • Custom Agent is your instructions to that librarian (“Make the summary for beginners” or “Make it technical for experts”).

The librarian (Code Studio) does the work. Your instructions (custom agent) control the style and rules.

To learn more about custom agents, visit our documentation.

How to create a custom review agent in Code Studio

Follow these steps to create a custom agent that controls how Code Studio reviews or documents your code.

Step 1: Open chat panel

First, open the Code Studio chat panel using the Ctrl+Shift+I (Windows) or Cmd+Shift+I (macOS) shortcut key. This opens the chat view.

Step 2: Open the settings menu

In the chat view, click the gear icon in the top-right corner to open the settings menu.

Step 3: Navigate to custom agents

Select “Custom Agents” from the settings menu.

Step 4: Create a new agent

Click the Create new Custom agent button to start creating your agent.

Step 5: Choose where to save

Select where the agent should be stored:

  • .codestudio/agents → Available only in the current workspace.
  • User data → Available across all your Code Studio workspaces.

Step 6: Name your agent

Next, enter a name such as “document-agent” and click “Create”.

Step 7: Define the agent instructions

Code Studio creates a text file called document-agent.agent.md. This is where you write your agent’s instructions.

This file contains two main parts:

  • Header (optional, between the --- delimiter lines): Add metadata such as the agent’s name and description.
  • Body (required): Write the rules, instructions, and guidelines the agent should follow.

You may add additional header fields if needed. Refer to the custom agent’s documentation for more details.

Step 8: Save and activate

Save the file. Once saved, your custom agent will appear in the mode dropdown, ready to use.

Code example for the document-agent.agent.md file:

---
description: Transform any codebase into beginner-friendly documentation and tutorials
name: Codebase Documenter
---
# Codebase Documenter Agent - Quick Reference
Transform complex code into clear, accessible documentation for beginners.
## Workflow (7 Stages)
1. **Analyze Repository**: Scan `*.py`, `*.js`, `*.ts`, `*.java` etc.; skip `node_modules/`, `tests/`, `.git/`; max ~100KB/file
2. **Identify Core Abstractions**: Find 5-10 key classes/functions with beginner-friendly names and analogies
3. **Analyze Relationships**: Map how abstractions interact; tie every abstraction to the architecture
4. **Determine Chapter Order**: Foundational concepts first, high-level to low-level
5. **Write Chapters**: Motivation, key concepts, usage examples (<10 lines), diagrams, cross-references; format: `{number:02d}_{safe_name}.md`
6. **Combine Into Tutorial**: Create `index.md` (architecture diagram + TOC) in `output/[project-name]/`
7. **Create `mkdocs.yml` & Serve**: Auto-generate config, install MkDocs, serve, and open in Simple Browser
## Key Guidelines
- **Tone**: Warm, beginner-friendly — use "you", avoid jargon, use real-world analogies
- **Code**: Max 10 lines per block; explain immediately after; no links to files outside `docs_dir` (use backticks instead)
- **Visuals**: Mermaid diagrams (`flowchart`, `sequenceDiagram`, `classDiagram`); max 5-7 elements
## MkDocs Setup (Always Do Automatically)
After writing docs, without waiting for the user, run these steps:
1. **Install** (if needed): `pip install mkdocs mkdocs-material`
2. **Create `mkdocs.yml`** at project root:
   - `site_name` from project name; `docs_dir: output/[project-name]`
   - `theme: material` with dark/light toggle, `navigation.tabs`, `search.highlight`, `content.code.copy`
   - `pymdownx.superfences` with mermaid custom fence + `extra_javascript: [https://unpkg.com/mermaid@10/dist/mermaid.min.js]`
   - `nav:` listing every chapter in order
3. **Serve**: Run `mkdocs serve` as a background terminal from project root
4. **Open**: Once `Serving on http://127.0.0.1:8000/` appears, open `http://127.0.0.1:8000/` in Simple Browser
## Quality Checklist
✅ All abstractions covered | ✅ Mermaid diagrams clear | ✅ Code blocks <10 lines | ✅ Beginner-friendly tone | ✅ Navigation links included | ✅ `mkdocs.yml` created | ✅ No broken links outside `docs_dir` | ✅ Simple Browser opened automatically

Note: What is MkDocs in the example instructions, and do you need it?
MkDocs turns your generated .md files into a styled, browsable website, similar to converting plain text files into a mini documentation site.

It’s completely optional. Even without MkDocs, the AI still generates all documentation as .md files.

To skip MkDocs, remove the following three parts from the instructions before using them:

  1. Stage 7 in the Workflow section.
  2. The entire ## MkDocs Setup section.
  3. In the Quality Checklist, remove the last three items:
    1. “mkdocs.yml created”
    2. “No broken links outside docs_dir”
    3. “Simple Browser opened automatically”
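For reference, a minimal mkdocs.yml along the lines the agent instructions describe might look like the sketch below; the site name, docs path, and chapter file names are placeholders for your own project.

```yaml
site_name: My Project Docs            # taken from the project name
docs_dir: output/my-project           # where the generated .md files live
theme:
  name: material                      # mkdocs-material theme
  features:
    - navigation.tabs
    - search.highlight
    - content.code.copy
markdown_extensions:
  - pymdownx.superfences              # enables the mermaid custom fence
nav:
  - Home: index.md
  - "01 Core Concepts": 01_core_concepts.md
```

With this file at the project root, `mkdocs serve` renders the generated chapters as a local site at http://127.0.0.1:8000/.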

Refer to the following image.

Create a custom review agent in Code Studio

How to use a custom agent for creating documentation in Code Studio

Once your custom agent is created, using it is simple:

  1. Open the chat view using Ctrl+Shift+I (Windows/Linux) or Cmd+Shift+I (Mac) shortcuts.
  2. Select your custom agent.
  3. Type your prompt, such as “create documentation for this project.”

The agent will automatically:

  • Analyze your codebase and generate all documentation as .md files inside “output/[project-name]/”.
  • Serve the documentation as a local website and open it directly in Code Studio’s Simple Browser.

Refer to the following GIF for a better understanding.

Converting the codebase to documentation using a custom agent in Code Studio

Creating codebase documentation using Skills in Code Studio (Alternative approach)

You have learned one way to create documentation using Custom Agents. Code Studio also offers another approach: Skills.

Both methods achieve the same goal of automatically generating documentation from your codebase, but they work in different ways.

Skills are folders of instructions, scripts, and resources that Code Studio can load when needed to perform specialized tasks.

Skills vs Custom Agents: When to use which

Purpose

  • Custom Agents define a persona with tailored instructions, tool access, model preferences, and multi-step workflows. They are ideal for tasks such as planning, code reviews, or security audits.
  • Skills bundle specialized workflows, including scripts, examples, and resources, for specific tasks.

Content format

  • Custom agents are .agent.md files containing YAML frontmatter followed by instructions.
  • Skills are directories that include a SKILL.md file (YAML header plus instructions) along with optional scripts or examples.

For example, if you need to generate technical API documentation, you can create a Technical Docs Skill once and reuse it across projects. Refer to the Skills in Code Studio documentation to learn how to create a skill.

Where Skills live (Understanding the folder structure)

Skills are folders stored in your project, and you can save them in any of several common locations:

  • .github/skills/
  • .codestudio/skills/
  • .agents/skills/

Example structure (using .github/skills/):

your-project/
├── .github/
│   └── skills/
│       └── Your skill folder
│           ├── SKILL.md                      
│           ├── structure/                    (optional)
│           │   └── your structure md file    
│           └── template/                     (optional)
│               └── your template md file
└── ... (your other files)

Example SKILL.md file:

---
description: Generate technical API documentation from your codebase
name: generate-technical-docs
---
# Generate Technical Documentation from Codebase
## What is This Skill?
Automatically extract technical documentation (API references) from your codebase by analyzing code structure, functions, classes, interfaces, and dependencies.
## When to Use
- Generate API references from your source code
- Document function signatures, parameters, and return types
- Create technical documentation for developers
- Auto-generate structured API docs for any codebase
# Technical Documentation Generator - Quick Reference
Extract and document all APIs from your codebase automatically.
## Workflow (4 Stages)
1. **Analyze Repository**: Scan `*.py`, `*.js`, `*.ts`, `*.java`, `*.cs` files; skip `node_modules/`, `tests/`, `.git/`; max ~100KB/file
2. **Extract Technical Metadata**: Find all functions, classes, interfaces, parameters, return types, dependencies
3. **Generate API Reference**: Create structured documentation following rules in `structure/technical-docs-structure.md`; format: `output/[project-name]/api-reference.md`
4. **Create `mkdocs.yml` & Serve**: Auto-generate config, install MkDocs, serve, and open in browser
## MkDocs Setup (Always Do Automatically)
After writing docs, without waiting for the user, run these steps:
1. **Install** (if needed): `pip install mkdocs mkdocs-material`
2. **Create `mkdocs.yml`** at project root with `docs_dir: output/[project-name]`
3. **Serve**: Run `mkdocs serve` as background terminal
4. **Open**: Once serving, open browser to `http://127.0.0.1:8000/`
## Quality Checklist
✅ All public APIs documented | ✅ Method signatures with types | ✅ File locations included | ✅ `mkdocs.yml` created | ✅ Docs served on localhost | ✅ Browser opened
 
## Additional Resources
- `structure/technical-docs-structure.md` - Rules and guidelines for what technical docs should include
- All docs generated in: `output/[project-name]/`

Note: The 'mkdocs' steps here are also completely optional. Without them, the AI still generates all docs as .md files. I have added the 'structure/technical-docs-structure.md' file as a reference for the structure that should be followed when creating technical docs; it's also completely optional.
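For reference, the optional MkDocs workflow described in the Skill amounts to something like the following sketch. The `output/my-project` path and the site name are placeholders; substitute whatever folder the documentation was generated into.

```shell
# Create a minimal mkdocs.yml at the project root.
# "output/my-project" is a placeholder for wherever the .md files were generated.
cat > mkdocs.yml <<'EOF'
site_name: My Project Docs
docs_dir: output/my-project
theme:
  name: material
EOF

# Then install MkDocs and serve the docs locally at http://127.0.0.1:8000/:
#   pip install mkdocs mkdocs-material
#   mkdocs serve
```

Running `mkdocs serve` watches the docs folder and rebuilds on change, which is why the Skill can simply open the browser once the server is up.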

Example ‘structure/technical-docs-structure.md’ file:

# Technical Documentation Structure & Rules
## Purpose
For developers who need **API details, signatures, and integration info**. Focus on WHAT and HOW, not WHY or how-to guides.
## Required Sections
1. **Overview** - 1-2 sentence module description
2. **Classes & Interfaces** - All public classes with methods, parameters, return types
3. **Functions** - All exported functions with signatures and file locations
4. **Data Models** - Interfaces, types with exact code structure
5. **Dependencies** - External libraries and internal imports
## Tone & Style
- Technical, precise language
- Target: Senior/intermediate developers
- Code examples: signatures only, no implementation
- No beginner explanations, tutorials, or "why" sections
## What MUST Include
✅ Exact method signatures with types  
✅ Parameter names and types  
✅ Return types with descriptions  
✅ File paths and line numbers  
✅ All public APIs  
✅ External dependencies  
✅ Type definitions  
## What NOT to Include
❌ Implementation details  
❌ Internal/private methods  
❌ Long code examples  
❌ Tutorial steps or getting started guides  
❌ Beginner-friendly explanations  
❌ Philosophy or "why" sections

How to use a Skill

Once you’ve created a Skill in Syncfusion Code Studio, using it is simple and intuitive. Skills let you run predefined workflows, such as generating documentation, directly from the chat. The steps below show how to select a Skill and trigger it to run automatically.

Step 1: Open the Code Studio chat

First, open the Code Studio chat panel. Press Ctrl+Shift+I on Windows or Linux, or Cmd+Shift+I on macOS. This launches the chat interface where skills and agents are available.

Step 2: Find the Skill you want

Next, type / in the chat panel to display all available skills. Browse through the list and locate the skill you want to use.

Step 3: Select your Skill

Now, click on the skill you created or the one that best matches your task. Selecting the skill activates it and prepares Code Studio to follow its defined workflow.

Step 4: Enter your request

After selecting the skill, type your request in the chat input. For example, you can enter:
“Generate technical documentation for this project” or any other instruction relevant to the skill.

Step 5: Review the output

Finally, the skill takes over. It analyzes your codebase, generates the output based on the skill’s purpose, serves the result on a local server, and opens it automatically in Code Studio’s simple browser. For a clearer understanding of this flow, refer to the accompanying GIF.

Creating codebase documentation using Skills in Code Studio

Best practices: Making your documentation excellent

To get the best results from Syncfusion Code Studio, especially if you’re new to automatic documentation generation, follow these best practices.

1. Start with one small piece, not everything

Resist the urge to document your entire codebase on day one. Instead:

  • Choose one module that is difficult to understand.
  • Generate documentation only for that module.
  • Share it with a new or recently onboarded developer and ask: “Did this help you understand how this module works?”
  • Once you confirm it’s effective, expand documentation to the rest of the codebase.

This approach helps you validate quality early and avoid unnecessary rework.

2. Always check the generated documents yourself before sharing

After Code Studio generates documentation, take time to review it before sharing it with the team.

  • Read through the documentation completely.
  • Ensure it accurately reflects the current behavior and structure of your code.

A quick review ensures correctness and builds confidence in the generated output.

3. Create different documents for different people (using custom agents)

Not everyone on your team needs the same level or type of documentation. Use custom agents to tailor documentation for different audiences.

  • Junior developers: Beginner‑friendly explanations with clear examples.
  • Senior developers: Technical details, API references, and architecture deep dives.
  • Team leads: High‑level overviews that explain how systems fit together.

By using multiple custom agents, you can generate different documentation styles from the same codebase.

4. Get feedback from people who don’t know your code

The best feedback comes from people who haven’t memorized your system.

  • Share the documentation with someone who recently joined the team.
  • Ask questions like: “Was this easy to understand?” and “What felt unclear?”
  • Improve the documentation based on their feedback.

This ensures your documentation truly supports onboarding and knowledge sharing.

Frequently Asked Questions

Is the generated documentation always accurate?

Code Studio analyzes your code and generates documentation, but you should always review the output yourself. Test all code examples, correct any inaccuracies or unclear explanations, and ask someone unfamiliar with your code to read it and provide feedback. Think of Code Studio as a powerful assistant that completes about 80% of the work, while you handle the final 20% through review and refinement.

Do I need MkDocs to use Code Studio’s documentation?

No. MkDocs is completely optional. Code Studio generates documentation as .md files regardless of whether you use MkDocs. MkDocs simply turns Markdown files into a styled documentation website. You can work directly with the generated .md files or use MkDocs if you want a more polished, professional documentation site.

Can Code Studio document legacy code or older projects?

Absolutely. Code Studio works with any codebase, regardless of its age or complexity.

How do I write a good Custom Agent rule for my team’s style?

Start by reviewing documentation you already like, whether from other projects or tools you admire. Identify what makes that documentation effective, such as simple language, clear structure, or well-chosen examples. Write those preferences explicitly as rules in your Custom Agent. For example, you might specify using simple words, limiting paragraphs to 100 words, or keeping code examples under 10 lines. Test the rules on a small module, then ask a teammate whether the result feels right. Refine the rules based on feedback and document them clearly so everyone on the team understands and follows the same documentation standards.

What if I have multiple Code Studio Custom Agents or Skills for different documentation types?

That is an excellent practice. You can create a Technical Docs Skill for API references, a Getting Started Guide Skill for onboarding new users, an Architecture Overview Agent for system design documentation, and an API Reference Skill for endpoint-level details. Each Skill or Agent analyzes the same codebase but produces a different type of documentation. Creating a separate Skill or Agent for each documentation style allows your team to choose the most appropriate one based on what they are documenting.

Ready to stop writing documentation manually?

Thanks for reading! Stop spending weeks writing and maintaining documentation. Syncfusion Code Studio reads your entire codebase and automatically generates professional documentation within minutes. It’s about creating complete, organized, searchable documentation that explains your entire system.

With Code Studio, you get:

  • Full documentation generated automatically: Architecture overviews, module guides, API references, code examples.
  • Professional, consistent format: Everything looks polished and organized.
  • Real onboarding speed: New developers become productive in days, not weeks.

The result: Your team spends less time explaining code and more time building features. New hires learn independently. Senior developers stay focused on building. Your entire team grows without burnout.

That’s the power of automatically converting your entire codebase into comprehensive documentation.

Ready to stop writing documentation manually? Download Syncfusion Code Studio today and see how your team can ship with confidence.

If you have questions, contact us through our support forums, support portal, or feedback portal. We are happy to assist you!


Removing byte[] allocations in .NET Framework using ReadOnlySpan


In this post I describe a simple way to remove some byte[] allocations, no matter which version of .NET you're targeting, including .NET Framework. This will likely already be familiar to you if you write performance sensitive code with modern .NET, but I recently realised that this can be applied to older runtimes as well, like .NET Framework.

This post looks at the changes to your C# code to reduce the allocations, how the compiler implements the change behind the scenes, and some of the caveats and sharp edges to watch out for.

Span<T> and ReadOnlySpan<T> are a performance mainstay for .NET

ReadOnlySpan<T> and Span<T> were introduced into .NET a long time ago now, but they have had a significant impact on the code you can (and arguably should) write, particularly when it comes to performance sensitive code. These provide a "window" or "view" over existing data, without creating copies of that data.

The classic example is when you're manipulating string objects; instead of using Substring() and creating additional copies of segments of the string, you can use AsSpan() to create ReadOnlySpan<char> segments that can be manipulated almost as though they were separate string instances, but without all the copying.

This is probably the most common use of Span<T> in application code, but fundamentally Span<T> provides a view over any piece of memory, which makes it useful in many other situations. Because the backing of a Span<T> can be almost anything, you can keep the same "public" API while potentially swapping out the backend.
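To make the string example concrete, here's a small hand-rolled sketch; the "key=value" parsing is purely illustrative and not from the post:

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        // Hypothetical "key=value" input, just for illustration.
        string pair = "timeout=30";
        int idx = pair.IndexOf('=');

        // Substring allocates a brand-new string on the heap:
        string keyCopy = pair.Substring(0, idx);
        Console.WriteLine(keyCopy); // timeout

        // AsSpan creates views over the original string's memory; no copies:
        ReadOnlySpan<char> key = pair.AsSpan(0, idx);
        ReadOnlySpan<char> value = pair.AsSpan(idx + 1);

        // Spans can be compared and sliced much like strings:
        Console.WriteLine(key.SequenceEqual("timeout".AsSpan())); // True
        Console.WriteLine(value.Length);                          // 2
    }
}
```

On .NET Framework the span-based extension methods like SequenceEqual come from the System.Memory NuGet package; on modern .NET they're built in.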

Another common example of this is if you have some parsing (or similar) code and you need a buffer to store the temporary results. Prior to Span<T>, you would almost certainly have allocated a normal array on the heap for this, but with Span<T>, "stack allocating" using stackalloc becomes just as easy, and reduces pressure on the garbage collector:


Span<byte> buffer = requiredSize <= 256                  // If the required buffer size is small 
                        ? stackalloc byte[requiredSize]  // enough, then allocate on the stack.
                        : new byte[requiredSize];        // Fallback to a normal heap allocation

Virtually all new .NET runtime APIs are added with Span<T> or ReadOnlySpan<T> support, and you can even use them in old runtimes like .NET Framework via the System.Memory NuGet package (though you don't get all the same perf benefits that you do with .NET Core).

The ability to easily and safely work with blocks of memory, regardless of where they come from and without needing to fall back to unsafe code and pointers, has made Span<T> vital for any code that cares about performance. But this ability to provide an "arbitrary" view over memory also gives the compiler a way to perform additional optimizations, as we'll see in the next section.

Removing byte[] allocations with ReadOnlySpan<byte>

The ability for the compiler to provide a view over arbitrary memory is what drives the optimization I'm going to talk about for the rest of this post.

Let's imagine you have some byte[] that you need for something. Some kind of processing requires it. You know the data it needs to contain upfront, so you store the array in a static readonly field, so that the data is only created once:

public static class MyStaticData
{
    private static readonly byte[] ByteField = new byte[] { 1, 2, 3, 4 };
}

This works absolutely fine, but it means when you first access that data, the runtime needs to create an instance of the array, fill it with the data, and store it in the field. After that, accessing the field is cheap, but the initial creation adds a small delay to the first use of that type.

However, starting with C# 8.0, and as long as you only need a readonly view of the data, you can use a slightly different pattern by exposing a ReadOnlySpan<byte> property instead of a field:

public static class MyStaticData
{
    // Before
    private static readonly byte[] ByteField = new byte[] { 1, 2, 3, 4 };

    // After
    private static ReadOnlySpan<byte> ReadOnlySpanProp => new byte[] { 1, 2, 3, 4 };
}

Now, normally, that's the sort of code that should be setting off performance alarm bells. It looks like it will be creating a new byte[] every time you access the property😱 But that's not what's happening.

We'll take a detailed look at the generated IL code shortly, for now we'll just talk at a high level.

When the compiler sees the pattern above, it does the following:

  • Embed the byte[] data into the final assembly's metadata
  • When ReadOnlySpanProp is invoked, instead of creating a byte[], create a ReadOnlySpan<byte> that points directly to the data in the assembly

So the returned ReadOnlySpan<byte> isn't pointing to data that exists on the heap or even on the stack; it's pointing to data that's embedded directly in the assembly. That means there's no allocation at all, which removes that startup overhead and means there's no pressure at all on the garbage collector 🎉

It's worth noting as well that this is a compiler feature, which means that as long as a System.ReadOnlySpan<T> type is available, you can use it. So as long as you add the System.Memory NuGet package to your .NET Framework app, you too can benefit from this zero-allocation technique!

Also, this doesn't just apply to converting static readonly byte[] fields to static ReadOnlySpan<byte> properties; it applies to local variables too. Which means things like the following, which look like they allocate an array, actually don't:

public static void TestData()
{
    // This looks like it allocates, but it doesn't
    ReadOnlySpan<byte> arr = new byte[] { 0, 1, 2 };
}

Another minor thing to point out is that this also works with UTF-8 string literals, which are logically represented as a byte[] by the type system. So this is also zero allocation:

public static class MyStaticData
{
    private static ReadOnlySpan<byte> ReadOnlySpanUtf8 => "Hello world"u8;
}

That's all great, but when I first used the byte[] approach, I was a little concerned. After all, it looked like it would be allocating and terribly inefficient, so I wanted to be sure. And what better way than checking the IL code the compiler generates.

What's happening behind the scenes?

There are multiple ways to inspect the code the compiler generates. If you just want to check a snippet of code, then sharplab.io is a quick and easy option. Alternatively, there's ILSpy, or the JetBrains tools like dotPeek and Rider, and I'm sure Visual Studio has plugins for it too.

To comfort myself, I first created a new .NET project using dotnet new classlib, and then I tweaked it to use .NET Framework. To be clear, the techniques shown so far work on all target frameworks, but I wanted to specifically test with .NET Framework, to prove that it's not just "new" frameworks this works with. I tweaked the project file as shown below:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Change the target framework to .NET Framework👇 -->
    <TargetFramework>net48</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <!-- Use the latest C# version (C# 14 with .NET 10 SDK)👇 -->
    <LangVersion>latest</LangVersion>
  </PropertyGroup>
  
  <!-- Reference System.Memory so we can use ReadOnlySpan<T>👇 -->
  <ItemGroup>
    <PackageReference Include="System.Memory" Version="4.6.3" />
  </ItemGroup>
</Project>

I then created the very simple class below, compiled, and used Rider to view the generated IL:

public static class MyStaticData
{
    private static ReadOnlySpan<byte> ReadOnlySpanProp => new byte[] { 1, 2, 3, 4 };
    private static ReadOnlySpan<byte> ReadOnlySpanUtf8 => "Hello world"u8;
}

I've commented the IL below to describe what it's doing, but the important thing is that we don't see any calls to newarr, InitializeArray(), ToArray(), or other problematic methods. Instead, we see IL code that loads an address pointing to data embedded in the PE image (i.e. the assembly), loads the length of the data (4 bytes), then passes the pointer and length to the ReadOnlySpan<T> constructor and returns it. No copying, no new arrays, just a wrapper around bytes that are already loaded into memory 🎉

.class public abstract sealed auto ansi beforefieldinit
  MyStaticData
    extends [mscorlib]System.Object
{

  .field private static initonly unsigned int8 One

  .method private hidebysig static specialname valuetype [System.Memory]System.ReadOnlySpan`1<unsigned int8>
    get_ReadOnlySpanProp() cil managed
  {
    .maxstack 8

    // 👇 Push the address of the static field that contains the array data as a blob onto the stack
    IL_0000: ldsflda      int32 '<PrivateImplementationDetails>'::'9F64A747E1B97F131FABB6B447296C9B6F0201E79FB3C5356E6C77E89B6A806A'
    // 👇 Push the value '4' onto the stack
    IL_0005: ldc.i4.4
    // 👇 Create a new ReadOnlySpan<byte>
    IL_0006: newobj       instance void valuetype [System.Memory]System.ReadOnlySpan`1<unsigned int8>::.ctor(void*, int32)
    IL_000b: ret // Return the span

  } // end of method MyStaticData::get_ReadOnlySpanProp

  .method private hidebysig static specialname valuetype [System.Memory]System.ReadOnlySpan`1<unsigned int8>
    get_ReadOnlySpanUtf8() cil managed
  {
    .maxstack 8

    // 👇 Push the address of the static field that contains the UTF-8 data as a blob onto the stack
    IL_0000: ldsflda      valuetype '<PrivateImplementationDetails>'/'__StaticArrayInitTypeSize=12' '<PrivateImplementationDetails>'::'27518BA9683011F6B396072C05F6656D04F5FBC3787CF92490EC606E5092E326'
    // 👇 Push the value '11' onto the stack (the length of "Hello world" in UTF-8)
    IL_0005: ldc.i4.s     11 // 0x0b
    // 👇 Create a new ReadOnlySpan<byte>
    IL_0007: newobj       instance void valuetype [System.Memory]System.ReadOnlySpan`1<unsigned int8>::.ctor(void*, int32)
    IL_000c: ret // Return the span

  } // end of method MyStaticData::get_ReadOnlySpanUtf8
}

Great, we can see that it's clearly working as expected, and this is .NET Framework; it really is just a compiler feature with no runtime requirements, so we can use it everywhere.

But we need to be careful… I showed that it works for byte[], but it doesn't work for everything.

Be careful, things can go wrong…

If you've read this far, you might be thinking "great, I'll use this for all my static array data", but I'm going to stop you there. Here be dragons. The pattern above is only safe to use:

  • If you have a byte[], sbyte[], or bool[].
  • If all the values in the array are constants.
  • If the array is immutable (i.e. you return a ReadOnlySpan<T>, not a Span<T>).

Breaking any of these rules may be disastrous for performance, so we'll examine each in turn.

Only byte, sbyte, and bool are allowed

The compiler optimizations shown so far can only be applied to byte-sized primitives, i.e. byte, sbyte, and bool. That's because the constant data would be stored in a little endian format, and needs to be translated to the runtime endian format, e.g. if the application is run on hardware which utilizes big endian numbers.
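You can see this endianness dependence directly with a tiny aside (not from the post): the in-memory byte order of an int depends on the hardware, which is why the compiler can't simply point a ReadOnlySpan<int> at a blob that was written out little-endian.

```csharp
using System;

class EndianDemo
{
    static void Main()
    {
        // The raw bytes of the int value 1 depend on the machine's byte order:
        byte[] bytes = BitConverter.GetBytes(1);

        // On little-endian hardware (x86/x64, and ARM in its usual configuration)
        // the least significant byte comes first, so bytes[0] is 1.
        // On big-endian hardware it would be 0.
        Console.WriteLine(BitConverter.IsLittleEndian);
        Console.WriteLine(bytes[0]);
    }
}
```

For single-byte element types there is no byte order to worry about, which is exactly why byte, sbyte, and bool get the zero-allocation treatment everywhere.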

That means that if you do the following (using int instead of byte), the code compiles just fine, but unfortunately it doesn't generate the "zero allocation" code that you might expect:

public static class MyStaticData
{
    // ⚠️ Using `int` instead of `byte` _does_ cause an array 
    // to be allocated (on .NET Framework and < .NET 7)
    private static ReadOnlySpan<int> ReadOnlySpanPropInt => new int[] { 1, 2, 3, 4 };
}

If we check the generated IL for a .NET Framework app with the above, we can see the problematic newarr and InitializeArray calls. The compiler does do some work to avoid the really problematic pattern of creating an array every time: it creates the array once, caches it in a static field, and uses that cached instance for subsequent calls. But it still has a startup cost, and does more work than the optimized byte[] approach:

.method private hidebysig static specialname valuetype [System.Memory]System.ReadOnlySpan`1<int32>
  get_ReadOnlySpanPropInt() cil managed
{
  .maxstack 8

  // 👇Try to load the cached int[] data from the static 'cache' field
  IL_0000: ldsfld       int32[] '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B72_A6
  IL_0005: dup                                  // Duplicate the variable
  IL_0006: brtrue.s     IL_0020                 // If the data isn't null, we have it cached, so jump to the end
  IL_0008: pop                                  // The value was null, remove the duplicate
  IL_0009: ldc.i4.4                             // Load the length of the data (4)
  IL_000a: newarr       [mscorlib]System.Int32  // Allocate a new array on the heap
  IL_000f: dup                                  // Keep a copy of the array variable
  // 👇 Load the address of the int[] data embedded in the assembly
  IL_0010: ldtoken      field valuetype '<PrivateImplementationDetails>'/'__StaticArrayInitTypeSize=16' '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B72
  // 👇 Initialize the new array with the int[] data
  IL_0015: call         void [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle)
  IL_001a: dup                                  // Duplicate the variable
  // 👇 Store the now-populated array into the static 'cache' field
  IL_001b: stsfld       int32[] '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B72_A6
  // 👇 Create the `ReadOnlySpan<int>` wrapping the array
  IL_0020: newobj       instance void valuetype [System.Memory]System.ReadOnlySpan`1<int32>::.ctor(!0/*int32*/[])
  IL_0025: ret

} // end of method MyStaticData::get_ReadOnlySpanPropInt

So the "good" news is that this isn't much different to just using a static readonly int[], but it's still not ideal, and definitely isn't the zero-allocation version that you get with byte[].

Additionally, if you're on .NET 7+, a new API was added which actually does support this pattern. So if we change the target framework (to .NET 10 in this case), and recompile, then the IL is back to the zero allocation version, thanks to the call to RuntimeHelpers::CreateSpan, which handles fixing-up any endianness issues:

.method private hidebysig static specialname valuetype [System.Runtime]System.ReadOnlySpan`1<int32>
    get_ReadOnlySpanPropInt() cil managed
  {
    .maxstack 8

    // 👇 Load the address of the data
    IL_0000: ldtoken      field valuetype '<PrivateImplementationDetails>'/'__StaticArrayInitTypeSize=16_Align=4' '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B724
    // 👇 Call RuntimeHelpers::CreateSpan and return
    IL_0005: call         valuetype [System.Runtime]System.ReadOnlySpan`1<!!0/*int32*/> [System.Runtime]System.Runtime.CompilerServices.RuntimeHelpers::CreateSpan<int32>(valuetype [System.Runtime]System.RuntimeFieldHandle)
    IL_000a: ret

  } // end of method MyStaticData::get_ReadOnlySpanPropInt

So in summary, your mileage will vary here, and you don't really gain anything unless you're on .NET 7+. If you need to target older frameworks, then you're potentially better off just sticking to a good old static readonly int[] field instead.

All values must be constants

The next issue is that the whole approach shown in this post only works if all the values in the collection are constants. For example, the following code, which uses a static readonly value inside the array, compiles just fine:

public static class MyStaticData
{
    private static readonly byte One = 1;
    private static ReadOnlySpan<byte> ReadOnlySpanPropNonConstant => new byte[] { One, 2, 3, 4 };
}

but even on .NET 7+, this won't do the zero-allocation approach that you might be expecting. Instead, you get some really nasty "allocate a new array every time" behaviour 😱:

.method private hidebysig static specialname valuetype [System.Runtime]System.ReadOnlySpan`1<unsigned int8>
  get_ReadOnlySpanPropNonConstant() cil managed
{
  .maxstack 8

  IL_0000: ldc.i4.4                                  // Load the length of the array
  IL_0001: newarr       [System.Runtime]System.Byte  // Create a new array
  IL_0006: dup                                       // Duplicate the variable reference
  // 👇 Get a reference to the data, and initialize the array
  IL_0007: ldtoken      field int32 '<PrivateImplementationDetails>'::'1E6175315920374CAA0A86B45D862DEE3DDAA28257652189FC1DFBE07479436A'
  IL_000c: call         void [System.Runtime]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [System.Runtime]System.Array, valuetype [System.Runtime]System.RuntimeFieldHandle)
  IL_0011: dup
  IL_0012: ldc.i4.0                                      // Load the index to change '0'
  IL_0013: ldsfld       unsigned int8 MyStaticData::One  // Load the static field `One`
  IL_0018: stelem.i1                                     // Set array[0] = One
  // 👇 return the ReadOnlySpan<byte> around the new array
  IL_0019: call         valuetype [System.Runtime]System.ReadOnlySpan`1<!0/*unsigned int8*/> valuetype [System.Runtime]System.ReadOnlySpan`1<unsigned int8>::op_Implicit(!0/*unsigned int8*/[])
  IL_001e: ret
} // end of method MyStaticData::get_ReadOnlySpanPropNonConstant

That's…bad 😬 And it does it on every property access. Definitely watch out for that one, on all target frameworks.

Only use ReadOnlySpan<T>, not Span<T>

You have a similar "dangerous" scenario if you use Span<T> instead of ReadOnlySpan<T>:

public static class MyStaticData
{
    private static Span<byte> SpanProp => new byte[] { 1, 2, 3, 4 };
}

In this case, because you're returning Span<T> instead of ReadOnlySpan<T>, the data needs to be mutable, so the compiler can't use any of its fancy tricks. All it can do is create a new array, initialize it with the correct initial values, and then hand it back wrapped in a mutable Span<T>:

.method private hidebysig static specialname valuetype [System.Runtime]System.Span`1<unsigned int8>
  get_SpanProp() cil managed
{
  .maxstack 8

  // [32 43 - 32 68]
  IL_0000: ldc.i4.4
  IL_0001: newarr       [System.Runtime]System.Byte
  IL_0006: dup
  IL_0007: ldtoken      field int32 '<PrivateImplementationDetails>'::'9F64A747E1B97F131FABB6B447296C9B6F0201E79FB3C5356E6C77E89B6A806A'
  IL_000c: call         void [System.Runtime]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [System.Runtime]System.Array, valuetype [System.Runtime]System.RuntimeFieldHandle)
  IL_0011: call         valuetype [System.Runtime]System.Span`1<!0/*unsigned int8*/> valuetype [System.Runtime]System.Span`1<unsigned int8>::op_Implicit(!0/*unsigned int8*/[])
  IL_0016: ret

} // end of method MyStaticData::get_SpanProp

The failure path here is understandable, because there's really no way to do a safe zero-allocation approach when the data needs to be mutable. The big problem is that it's not obvious that it's a super-allocatey property instead of a zero-allocation version. If you accidentally fat-finger and write Span<T> instead of ReadOnlySpan<T>, or, you know, Claude does, then it's really not obvious from simply reviewing the code…

The only good news is that if you use modern features, namely collection expressions, you might catch the issue!

Reducing the risk of errors with collection expressions

So how do collection expressions help here? Well, those last two points, where one of the values isn't a constant, or where the variable is Span<T> instead of ReadOnlySpan<T>, simply won't compile if you use the static property pattern with collection expressions:

public static class MyStaticData
{
    // Doesn't compile (That's good!)
    private static ReadOnlySpan<byte> ReadOnlySpanPropNonConstantCollectionExpression => [One, 2, 3, 4];

    // Doesn't compile (That's good!)
    private static Span<byte> SpanPropCollectionExpression => [1, 2, 3, 4];
}

Attempting to compile this gives CS9203 errors:

Error CS9203 : A collection expression of type 'ReadOnlySpan<byte>' cannot be used in this context because it may be exposed outside of the current scope.
Error CS9203 : A collection expression of type 'Span<byte>' cannot be used in this context because it may be exposed outside of the current scope.

The above errors in Rider

This gives you something of a safety-net. As long as you always use collection expressions for this scenario, you're blocked from making the most egregious errors. The case where you're using int is allowed, but as already flagged, that's not as bad: it's actually supported on .NET 7+, and on earlier versions you still only create a single, cached instance of the array.

Unfortunately, collection expressions only save you in the static property case. If you're creating local variables, then collection expressions don't save you on .NET Framework (or on any .NET version before .NET 8):

public static class MyStaticData
{
    private static readonly byte One = 1;

    public static void TestData()
    {
        // Oh no, these all allocate on .NET Framework!
        ReadOnlySpan<int> intArray = [1, 2, 3, 4]; // .NET 7+ doesn't allocate for this one

        ReadOnlySpan<byte> nonConstantArray = [One, 2, 3, 4]; // But you need .NET 8+ to avoid
        Span<byte> spanArray = [1, 2, 3, 4];                  // allocations for these two!
    }
}

If we take a look at the IL generated for .NET Framework for this method, we can see that the int[] case uses the "create a static array and cache it" approach, while the non-constant and Span<T> cases create a new array every time, the same as happens with a static property:

    .method public hidebysig static void
    TestData() cil managed
  {
    .maxstack 5
    .locals init (
      [0] valuetype [System.Memory]System.ReadOnlySpan`1<int32> intArray,
      [1] valuetype [System.Memory]System.ReadOnlySpan`1<unsigned int8> nonConstantArray,
      [2] valuetype [System.Memory]System.Span`1<unsigned int8> spanArray
    )

    // [10 5 - 10 6]
    IL_0000: nop

    // Load or initialize the static int[] field data
    IL_0001: ldsfld       int32[] '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B72_A6
    IL_0006: dup
    IL_0007: brtrue.s     IL_0021
    IL_0009: pop
    IL_000a: ldc.i4.4
    IL_000b: newarr       [mscorlib]System.Int32
    IL_0010: dup
    IL_0011: ldtoken      field valuetype '<PrivateImplementationDetails>'/'__StaticArrayInitTypeSize=16' '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B72
    IL_0016: call         void [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle)
    IL_001b: dup
    IL_001c: stsfld       int32[] '<PrivateImplementationDetails>'::CF97ADEEDB59E05BFD73A2B4C2A8885708C4F4F70C84C64B27120E72AB733B72_A6
    IL_0021: newobj       instance void valuetype [System.Memory]System.ReadOnlySpan`1<int32>::.ctor(!0/*int32*/[])
    IL_0026: stloc.0      // intArray

    // For the non-constant array, a new array is created each time
    IL_0027: ldloca.s     nonConstantArray
    IL_0029: ldc.i4.4
    IL_002a: newarr       [mscorlib]System.Byte
    IL_002f: dup
    IL_0030: ldtoken      field int32 '<PrivateImplementationDetails>'::'1E6175315920374CAA0A86B45D862DEE3DDAA28257652189FC1DFBE07479436A'
    IL_0035: call         void [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle)
    IL_003a: dup
    IL_003b: ldc.i4.0
    IL_003c: ldsfld       unsigned int8 MyStaticData::One
    IL_0041: stelem.i1
    IL_0042: call         instance void valuetype [System.Memory]System.ReadOnlySpan`1<unsigned int8>::.ctor(!0/*unsigned int8*/[])

    // For the `Span<byte>` array, a new array is created each time
    IL_0047: ldloca.s     spanArray
    IL_0049: ldc.i4.4
    IL_004a: newarr       [mscorlib]System.Byte
    IL_004f: dup
    IL_0050: ldtoken      field int32 '<PrivateImplementationDetails>'::'9F64A747E1B97F131FABB6B447296C9B6F0201E79FB3C5356E6C77E89B6A806A'
    IL_0055: call         void [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle)
    IL_005a: call         instance void valuetype [System.Memory]System.Span`1<unsigned int8>::.ctor(!0/*unsigned int8*/[])

    // [15 5 - 15 6]
    IL_005f: ret

  } // end of method MyStaticData::TestData

So unfortunately, collection expressions don't save you here. Of course, you likely can (and should) use stackalloc for small arrays like these, so this isn't necessarily a big deal. But you do need to know to do so.
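For completeness, here's a stackalloc sketch of the same locals, which avoids heap allocations on every runtime for small buffers like this:

```csharp
public static class MyStaticData
{
    public static void TestData()
    {
        // stackalloc puts the buffer on the stack, so nothing is heap-allocated,
        // regardless of runtime - but the span must not escape this method
        Span<byte> spanArray = stackalloc byte[] { 1, 2, 3, 4 };
        ReadOnlySpan<byte> readOnlyView = spanArray;
    }
}
```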

So what should we make of all this?

Conclusion

The good news is that, if you use the right patterns, replacing existing static readonly byte[] fields that contain read-only data with static ReadOnlySpan<byte> properties gives you zero allocations and essentially zero startup cost, even on .NET Framework.
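As a concrete sketch of that conversion (the field name and data here are illustrative):

```csharp
public static class MyStaticData
{
    // Before: allocates and initializes a heap array at type-initialization time
    // public static readonly byte[] Utf8Newline = { (byte)'\r', (byte)'\n' };

    // After: the compiler emits the bytes into the assembly's data section, and the
    // returned span points directly at them - no heap allocation, even on .NET Framework
    public static ReadOnlySpan<byte> Utf8Newline => new byte[] { (byte)'\r', (byte)'\n' };
}
```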

However, if the field that you're "converting" is not byte[], bool[] or sbyte[], then you should think carefully about whether to convert it. int[] and other types are supported for similar optimizations on .NET 7+, but this requires runtime support, so if you're also targeting .NET Framework, .NET Standard, or .NET 6 and below, then I would seriously consider whether it's worth making the change.

You likely will see perf benefits on .NET 7+, but as far as I can tell, you're talking about a ~15% speed improvement for the initial creation of the array. And if you're calling RuntimeHelpers.CreateSpan() on every access, versus just loading a field, does that actually improve steady-state performance? I don't know, I haven't checked, I'm just wondering😄

Where you really need to be careful is to only use constant values in your arrays (no static readonly values, please) and to only use ReadOnlySpan<T>, not Span<T>. Luckily, if you're using collection expressions, you'll catch these mistakes automatically in your static properties, as they simply won't compile. Which is just another reason you should use collection expressions everywhere you can!😃

Replacing static byte[] fields with static ReadOnlySpan<byte> properties is probably the most common scenario you'll find, but you can also apply this to local variables. I suspect that scenario is less common though: if you're writing code that so obviously allocates, you presumably don't care much about performance there, in which case there's little point making the ReadOnlySpan<byte> change.

There's another reason for not touching local definitions: as described above, collection expressions don't cause compilation failures for local variables, so you don't have the same easy guardrails there.

If you're anything like me, then the fact that there are so many edge cases where you fall off a performance cliff is somewhat surprising. Generally the .NET team try quite hard to avoid these cliffs, or at least add analyzers to help steer you in the right direction. There seems to be little here to stop you doing the "wrong" thing.

Looking through the various issues and discussions, that's something that's come up multiple times, but the difficulty seems to be that the problematic code patterns are actually valid sometimes. There's also the "well, you should be using stackalloc anyway" argument, as well as "collection expressions partially protect you".

So all-in-all, this approach seems to be "use at your own risk". I still think it might be nice to have optional analyzers at least to try to protect you (and maybe someone's already written those). Nevertheless, the ability to reduce initialization costs to 0 if you have a bunch of static data is definitely a win; just make sure you only use it in a safe way!
