Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Speeding up interactive rebase in JetBrains IDEs


Introduction

Git integration in JetBrains IDEs has been evolving for more than fifteen years, and throughout that time, we followed one guiding principle: At the lowest level, we simply run porcelain Git commands, parse their output, and avoid doing anything Git itself would not do. All user scenarios and UI are built on top of that. This approach kept the integration reliable and made it much less likely that the IDE would corrupt the repository state.

Over time, Git grew more complex, repositories grew larger, and some operations became noticeably slower. By then, the pattern was hard to miss. We saw more community projects focused on Git performance, users kept reporting slow command execution, and we could reproduce the issue ourselves. Even rewording a single commit in the IntelliJ IDEA monorepo could take tens of seconds, depending on the machine and OS.

Interactive rebase was one of the clearest pain points, along with several IDE actions built on top of it. So we decided to focus on low-level optimizations there and turn the work into a dedicated internship project.

Interactive rebase: A technical deep dive

To see where those seconds went, we need to look at what Git actually does during an interactive rebase.

Internally, Git has three main kinds of objects, stored as files in the .git/objects directory: blobs, trees, and commits. Every object is identified by a unique 20-byte SHA-1 hash.

  • A blob simply contains the contents of a file.
  • A tree is a recursive object that corresponds to a directory. It can contain individual files, represented as an entry with a file name, mode, and the respective blob’s hash, as well as subdirectories, represented by the names and hashes of other trees. Because an object’s hash is unique, Git can reuse files and directories when they are identical.
  • A commit is essentially a tree with metadata. It contains the hash of its parent commit(s), author and committer information with timestamps, and a commit message. Each commit represents a snapshot of the entire directory; the diff between a commit and its parent is computed by comparing their two trees.
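
The object model above can be exercised directly with Git's plumbing commands. The sketch below builds a blob, a tree, and a commit by hand in a throwaway repository; the identity values are placeholders:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# A blob is just file contents, addressed by its hash.
blob=$(echo 'hello' | git hash-object -w --stdin)

# A tree maps names (plus modes) to blobs and to other trees.
git update-index --add --cacheinfo 100644 "$blob" hello.txt
tree=$(git write-tree)

# A commit wraps a tree with metadata: parents, author, message.
commit=$(git commit-tree "$tree" -m 'initial commit')

git cat-file -t "$blob"     # -> blob
git cat-file -t "$tree"     # -> tree
git cat-file -t "$commit"   # -> commit
```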

The index is a map linking file names to blob objects, sorted by file name. It acts as a scratchpad for Git operations. For example, during a merge, the index expands to hold three entries for a single conflicted file. It keeps these entries unmerged so that Git can create conflict markers in the working directory. After you resolve the conflicts, running git add marks the entries as merged. Following these operations, Git writes a new tree object from the current index using git write-tree.

Now, let’s consider how an interactive rebase is performed. To build a sequence of commits according to the git-rebase-todo file, Git checks them out sequentially, updates the working tree, and populates the index so that it can create a tree object from it. All of this file and index churn is what makes the operation slow on large repositories. However, in some scenarios, we can construct these trees without touching the index at all.

In-memory rebase optimization: How it works

The optimization for Edit Commit Message… is the simplest case. If you look at the sequence of commits from the selected one up to the top of the branch, the underlying tree hashes do not change during this operation. For the selected commit, we only need to change the commit message and committer information. Then, for every commit after that, we just rebuild the chain by updating the parent commit and computing a new hash.

Git provides low-level plumbing commands for managing Git objects. Using git cat-file, we can extract and parse the body of an object. We can create a new commit object by passing a tree hash and metadata to git commit-tree. Once the whole sequence has been rebuilt, we can use git update-ref to atomically update the branch reference.
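
Put together, these plumbing commands are enough to sketch a reword. The following is a minimal illustration (not the IDE's actual implementation) in a throwaway repository; rewording a mid-branch commit would additionally rebuild every descendant, passing parents via -p:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
echo a > f && git add f && git commit -qm 'old message'

old=$(git rev-parse HEAD)
tree=$(git rev-parse 'HEAD^{tree}')   # a reword never changes the tree

# New commit object: same tree, new message. For commits with parents,
# pass them via -p; descendants are then rebuilt the same way.
new=$(git commit-tree "$tree" -m 'new message')

# Atomically move the branch, but only if it still points at $old.
git update-ref refs/heads/main "$new" "$old"
git log -1 --format=%s   # -> new message
```

The old-value argument to git update-ref makes the switch a compare-and-swap: if something else moved the branch in the meantime, the command fails instead of clobbering it.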

The git merge-tree command can perform a three-way merge directly in memory. It takes the tree hashes and returns the resulting tree, failing if there is a merge conflict. So, for rebases that modify trees but do not cause conflicts, we can still avoid touching the working tree and index.

The same idea extends to a general interactive rebase. If we know the rebase plan, such as reordering, dropping, squashing, or renaming commits, we can build the new sequence in memory using the same commands.

That is the approach we implemented. When you perform commit-editing operations, the IDE first tries a fast in-memory path. If it runs into a merge conflict, it silently falls back to a regular Git rebase and stops so you can resolve the conflicts. Otherwise, it updates the branch reference atomically.

We applied the same optimization to other operations. For example, in the Git Log, when you select a commit, the Changed Files pane appears on the right. Here, you can select a subset of files and click Extract Selected Changes to Separate Commit… to split one commit into two and never cause a merge conflict. It works by recursively building a split tree in memory and omitting the changes at the specified paths.

Upstream Git is moving in a similar direction as well. The git replay command performs a fast in-memory rebase, but it is still experimental and does not support interactive rebase or GPG signing.

Results

On the IntelliJ monorepo, the average execution time of interactive rebase dropped from tens of seconds to just a few seconds. The exact numbers varied across operating systems, but the overall improvement was consistent.

We also enabled the in-memory optimization in EAP builds during the 2026.1 release cycle. The histograms below show the distribution of interactive rebase execution times in data collected from EAP builds, compared with 2025.3.

[Histograms: distribution of interactive rebase execution times on macOS, Windows, and Linux — 2026.1 EAP builds vs. 2025.3]

Conflicts

While collecting data on interactive rebase executions, we could also measure how often conflicts occurred. The data shows that around 12% of interactive rebases resulted in merge conflicts, and about 1% failed due to errors. In both cases, we fall back to the regular interactive rebase process.

The in-memory optimization reduces average execution time across all operating systems. There is still room for improvement, especially on Windows, where the worst-case time is still quite high.

After testing this optimization internally at JetBrains and in EAP builds, we decided to enable it by default in the upcoming 2026.1 release. This applies to standard interactive rebase and actions based on it, such as reword, drop, and squash, as well as extracting selected changes into a separate commit. We expect this to make commit-history editing faster and less disruptive.

Credits and further reading

The broader developer community was a huge help in shaping our implementation. We would like to acknowledge:

Interested in the implementation details? Check out the solution in the IntelliJ Platform sources.

Any feedback is welcome! Please leave a comment or email us directly at vcs-team+ir@jetbrains.com.


Building Red Hat MCP-ready images with image mode for Red Hat Enterprise Linux


Building a bootable OS image should feel as seamless as building a container. That's the goal of image mode for Red Hat Enterprise Linux (RHEL). A key advantage for developers using image mode with RHEL is the integration of AI-assisted troubleshooting directly into the development loop.

By leveraging the model context protocol (MCP), you can connect VS Code or Cursor to two specialized intelligence streams: one for local system telemetry and one for global proactive security.

Red Hat provides two MCP servers that can help you diagnose issues with your image mode for RHEL servers:

Step 1: Generate your AI bridge keys

Before configuring your IDE or image, you need a dedicated SSH key pair for the MCP server. Run this in your build environment:

$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_mcp \
-C "rhel-mcp-agent" -N ""

For the example below, you must obtain values for the LIGHTSPEED_CLIENT_ID and LIGHTSPEED_CLIENT_SECRET variables required to connect the MCP Server for Red Hat Lightspeed to the Red Hat Lightspeed services. To obtain these, log in to the Red Hat Lightspeed console at console.redhat.com and configure a service account or API client for Red Hat Lightspeed.

After you have the values, set them in $HOME/.bashrc so they load automatically on Linux. These variables must be present in your environment before the IDE launches the MCP server.

# Values for these variables are from console.redhat.com
export LIGHTSPEED_CLIENT_ID="[Your ID Here]"
export LIGHTSPEED_CLIENT_SECRET="[Your Secret Here]"

Step 2: Configuring your AI agent

This configuration script is for a generic IDE configuration file (for example, mcp.json) compatible with editors such as VS Code and Cursor. Add the following to your IDE's MCP configuration file. Note how you mount your .ssh directory so the MCP container can use the key you just created.

{
  "mcpServers": {
    "rhel-runtime": {
      "type": "stdio",
      "command": "podman",
      "args": [
        "run", "-i", "--rm", 
        "-v", "${env:HOME}/.ssh:/root/.ssh:ro",
        "quay.io/redhat/rhel-mcp-server:latest"
      ],
      "env": {
        "LINUX_MCP_USER": "mcp",
        "LINUX_MCP_HOST": "192.168.122.50",
        "LINUX_MCP_SSH_KEY_PATH": "/root/.ssh/id_ed25519_mcp"
      }
    },
    "redhat-lightspeed": {
      "type": "stdio",
      "command": "podman",
      "args": [
        "run", "-i", "--rm", 
        "--env", "LIGHTSPEED_CLIENT_ID", 
        "--env", "LIGHTSPEED_CLIENT_SECRET",
        "quay.io/redhat-services-prod/insights-mcp:latest"
      ]
    }
  }
}
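
Before launching the IDE, it can save a round trip to sanity-check this configuration. The sketch below validates JSON syntax and confirms the Lightspeed credentials are exported; it writes a minimal stand-in file for illustration, so point cfg at your real mcp.json instead:

```shell
# Stand-in config for the sketch; use your real mcp.json path here.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{"mcpServers": {"rhel-runtime": {"command": "podman"}}}
EOF

# Catch malformed JSON before the IDE silently fails to load it.
python3 -m json.tool "$cfg" > /dev/null && echo "json ok"

# The redhat-lightspeed server needs these in the environment.
for v in LIGHTSPEED_CLIENT_ID LIGHTSPEED_CLIENT_SECRET; do
  [ -n "$(printenv "$v")" ] && echo "$v set" || echo "$v missing"
done
```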

Step 3: Designing the registered image

In your Containerfile, prepare the environment for both remote host configuration (rhc) and secure AI access with the mcp user:

FROM quay.io/redhat/redhat-bootc:9.4

# Install rhc, cloud-init, and openssh-server
RUN dnf -y install rhc cloud-init openssh-server && dnf clean all

# Create the dedicated mcp user bridge
RUN useradd -m -G wheel mcp && \
    echo "mcp ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/mcp

# Enable services for first boot
RUN systemctl enable cloud-init sshd

COPY . /app
RUN bootc install

Step 4: Zero-touch registration and access with cloud-init

Paste the public key you generated in step 1 into your cloud-config. This allows the MCP server to log in automatically without a password. An example cloud-config:

#cloud-config
users:
  - name: mcp
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      # cat ~/.ssh/id_ed25519_mcp.pub
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmcpAgentExampleKey12345 rhel-mcp-agent

rh_subscription:
  org: "1234567"
  activation-key: "development-stack-key"
  auto-attach: true

runcmd:
  - [ rhc, connect ]

Troubleshooting the configuration

If your IDE reports a connection refused error, have a look at these three common friction points:

  • A bootc system takes a few seconds to initialize sshd. Wait 10 seconds and retry.
  • Ensure that LINUX_MCP_HOST in mcp.json matches the actual IP of the running container (obtained with podman inspect <id>).
  • Try ssh -i ~/.ssh/id_ed25519_mcp mcp@<container-ip>. If this fails, your MCP server will also fail. Check for a local firewall blocking port 22.

MCP servers in action

By integrating the model context protocol (MCP) into your RHEL image, your coding assistant gains two streams of information you can use proactively to make bootable images more reliable and performant. For example:

  • Red Hat Lightspeed for RHEL queries Red Hat's proactive analytics on a scheduled basis. The Red Hat Lightspeed MCP server provides a real-time bridge between LLMs and the Red Hat Lightspeed for RHEL proactive analytics. You can use it to find out how an upcoming Red Hat Enterprise Linux release will affect your specific environment as well as to flag newly discovered common vulnerabilities and exposures (CVE) relevant to the packages in your image. This identifies security risks and "best practice" drift before you commit a single line of code to production.
  • The RHEL MCP server gives your AI agent an on-demand, live look at the state of your operating system. This allows for immediate root-cause analysis on performance issues by reading system telemetry, inspecting resource pressure (CPU, memory), and checking for overloaded system components like journal files.
  • RHEL MCP server can directly read journalctl and inspect critical systemd units, such as NetworkManager or sshd, and help your coding assistant to quickly diagnose issues in areas such as network connectivity, firewall misconfiguration, and service dependencies that cause connection refusals. You get this without having to analyze logs yourself, or to manually scrape data and copy/paste it into an assistant.

Next steps

Red Hat MCP servers can help you move beyond manual system troubleshooting. By integrating image mode for RHEL with the model context protocol, you streamline your pipeline. You get a single, bootable container image that's secure, fully registered, and instantly debuggable by AI agents right inside your IDE.

To learn more and get started, check out these resources:

The post Building Red Hat MCP-ready images with image mode for Red Hat Enterprise Linux appeared first on Red Hat Developer.


Export Once, Share Everywhere: Convert DOCX to PDF, HTML, and More in React


TL;DR: Still juggling multiple document tools just to share files in your React app? With the advanced export options in Syncfusion React DOCX Editor, you can generate DOCX, PDF, HTML, Markdown, and more from a single editor, with no broken formatting and no manual conversions. You can enable scalable document sharing, reduce conversion errors, and simplify collaboration across teams. The result is faster exports, consistent formatting, and a future-proof workflow that supports both lightweight client apps and enterprise-grade server deployments.

The real problem with document export in React apps

If you’ve ever built a document-heavy React app, you already know the pain.

A document gets written once, but somehow needs to exist as a:

  • DOCX for editing,
  • PDF for approvals,
  • Markdown for internal docs,
  • HTML for publishing.

And the moment exporting isn’t built-in, everything slows down. Manual conversions might work once or twice, but they don’t scale. They introduce errors and waste time. People copy-paste content, use third-party converters, or redo the same work in multiple tools. Formatting breaks. Layouts shift. Everyone gets frustrated.

What you really need is one editor with multiple export options, all handled in a clean, predictable way.

This is exactly where the Syncfusion® React DOCX Editor changes the game.

Create, edit, and export pixel‑perfect DOCX documents faster with the high‑performance Syncfusion DOCX Editor.

Advanced export options in React DOCX Editor

The Syncfusion React DOCX Editor supports a flexible, unified document export experience across teams and platforms. It uses fast, in‑browser exports for common formats and a server‑side Web API for formats that need higher accuracy.

Key features

  • Client‑side speed with server‑side reliability.
  • Export modern and legacy formats from one editor.
  • Consistent formatting across outputs.
  • Suitable for both lightweight and enterprise apps.

This hybrid approach delivers efficient exports without compromising quality.

Client-side exporting (Instant, no server)

For everyday use cases, exporting directly from the React app is often enough, with no server involved.

You can export to:

  • SFDT: For drafts, autosave, and round-trip editing.
  • DOCX: Share editable files via Microsoft Word or Google Docs.
  • DOTX: Use templates to maintain consistent branding and styles.
  • TXT: Export plain text for tickets, wikis, or simple tools.

Server-side exporting (Heavy lifting, perfect output)

Some formats need extra processing, especially when layout, pagination, or print quality matter. That’s where the server-side Web API comes in.

Using it, you can export to:

  • PDF: For finalized documents, printing, or e-signature.
  • HTML: Publish to web portals or embed in apps.
  • RTF: For compatibility with rich text editors.
  • Markdown: Ideal for developer docs and wikis.
  • ODT: A widely supported open format for cross-platform use.
  • WordML: Useful for XML-based workflows and systems that require structured document data.

Let’s build a React DOCX Editor and integrate both client-side and server-side export options. By the end, you’ll have a production-ready workflow for seamless document-sharing across formats and platforms.

Get step‑by‑step guidance, with APIs and examples, to integrate, customize, and scale DOCX editing faster with Syncfusion.

Server-side export setup for React DOCX Editor

Follow these steps to set up a server-side Web API that enables exporting documents from the Syncfusion React DOCX Editor to formats like PDF, HTML, RTF, Markdown, and ODT.

Step 1: Create a new ASP.NET Core Web API project

First, create a new ASP.NET Core Web API project using your preferred development environment or the .NET CLI.

Step 2: Add the required NuGet packages

Then, install the following NuGet packages to enable document import and export functionalities:

Step 3: Define export endpoints in the controller

Add a controller named DocumentEditorController.cs to handle export requests. Include endpoint logic to process SFDT input and return converted documents.

Here’s how you can do it in code:

[AcceptVerbs("Post")]
[HttpPost]
[EnableCors("AllowAllOrigins")]
[Route("Export")]
public FileStreamResult Export([FromBody] SaveParameter data)
{
    string fileName = data.FileName;
    string format = RetrieveFileType(string.IsNullOrEmpty(data.Format) ? fileName : data.Format);

    if (string.IsNullOrEmpty(fileName))
    {
        fileName = "Document1.docx";
    }

    WDocument document;
    if (format.ToLower() == ".pdf")
    {
        Stream stream = WordDocument.Save(data.Content, FormatType.Docx);
        document = new Syncfusion.DocIO.DLS.WordDocument(stream, Syncfusion.DocIO.FormatType.Docx);
    }
    else
    {
        document = WordDocument.Save(data.Content);
    }

    return SaveDocument(document, format, fileName);
}

private string RetrieveFileType(string name)
{
    int index = name.LastIndexOf('.');
    string format = index > -1 && index < name.Length - 1
        ? name.Substring(index)
        : ".doc";
    return format;
}

private FileStreamResult SaveDocument(WDocument document, string format, string fileName)
{
    Stream stream = new MemoryStream();
    string contentType = "";

    if (format.ToLower() == ".pdf")
    {
        contentType = "application/pdf";
        DocIORenderer render = new DocIORenderer();
        PdfDocument pdfDocument = render.ConvertToPDF(document);
        stream = new MemoryStream();
        pdfDocument.Save(stream);
        pdfDocument.Close();
    }
    else
    {
        WFormatType type = GetWFormatType(format);
        switch (type)
        {
            case WFormatType.Rtf:
                contentType = "application/rtf";
                break;
            case WFormatType.WordML:
                contentType = "application/xml";
                break;
            case WFormatType.Html:
                contentType = "text/html";
                break;
            case WFormatType.Dotx:
                contentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.template";
                break;
            case WFormatType.Docx:
                contentType = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
                break;
            case WFormatType.Doc:
                contentType = "application/msword";
                break;
            case WFormatType.Dot:
                contentType = "application/msword";
                break;
            case WFormatType.Odt:
                contentType = "application/vnd.oasis.opendocument.text";
                break;
            case WFormatType.Markdown:
                contentType = "text/markdown";
                break;
        }
        document.Save(stream, type);
    }

    document.Close();
    stream.Position = 0;

    return new FileStreamResult(stream, contentType)
    {
        FileDownloadName = fileName
    };
}

internal static WFormatType GetWFormatType(string format)
{
    if (string.IsNullOrEmpty(format))
        throw new NotSupportedException("EJ2 Document Editor does not support this file format.");

    switch (format.ToLower())
    {
        case ".dotx":
            return WFormatType.Dotx;
        case ".docx":
            return WFormatType.Docx;
        case ".docm":
            return WFormatType.Docm;
        case ".dotm":
            return WFormatType.Dotm;
        case ".dot":
            return WFormatType.Dot;
        case ".doc":
            return WFormatType.Doc;
        case ".rtf":
            return WFormatType.Rtf;
        case ".txt":
            return WFormatType.Txt;
        case ".xml":
            return WFormatType.WordML;
        case ".odt":
            return WFormatType.Odt;
        case ".html":
            return WFormatType.Html;
        case ".md":
            return WFormatType.Markdown;
        default:
            throw new NotSupportedException("EJ2 Document Editor does not support this file format.");
    }
}

Step 4: Run the web API

Build and run the web API locally to verify that the export endpoints are working as expected.
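
One quick way to verify it is a hand-built request. The sketch below assembles the JSON body the Export action binds to (Content, FileName, Format) and posts it with curl; the port 5257 and the empty SFDT stub are assumptions for illustration, so substitute your actual service URL and a real serialized document:

```shell
# Minimal request body; Content would normally be real SFDT produced
# by the editor's serialize() call, not this empty stub.
payload='{"Content":"{\"sections\":[]}","FileName":"Sample","Format":".pdf"}'

# Confirm the body is valid JSON before sending it anywhere.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# With the API running (port assumed for this sketch):
# curl -sS -X POST http://localhost:5257/api/documenteditor/Export \
#   -H 'Content-Type: application/json' -d "$payload" -o Sample.pdf
```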

See the Syncfusion DOCX Editor in action through live demos and start building full‑featured, production‑ready document solutions today.

Client-side setup: Create a React DOCX Editor

Once your server-side export API is up and running, you can move on to building the client-side React app using the Syncfusion React Document Editor.

It allows users to view, edit, and export Word documents directly in the browser without relying on external plugins.

Step 1: Create a new React app

First, create a new React project on the local machine.

Step 2: Install the Syncfusion React Document Editor

Now, install the Syncfusion React DOCX Editor NPM package.

npm install @syncfusion/ej2-react-documenteditor --save

Step 3: Import the required styles

To ensure proper styling, import the necessary CSS styles in your App.css or index.css file.

Here’s the code you need:

@import '../node_modules/@syncfusion/ej2-base/styles/material.css';
@import '../node_modules/@syncfusion/ej2-buttons/styles/material.css';
@import '../node_modules/@syncfusion/ej2-inputs/styles/material.css';
@import '../node_modules/@syncfusion/ej2-popups/styles/material.css';
@import '../node_modules/@syncfusion/ej2-lists/styles/material.css';
@import '../node_modules/@syncfusion/ej2-navigations/styles/material.css';
@import '../node_modules/@syncfusion/ej2-splitbuttons/styles/material.css';
@import '../node_modules/@syncfusion/ej2-dropdowns/styles/material.css';
@import '../node_modules/@syncfusion/ej2-documenteditor/styles/material.css';

Step 4: Integrate the React DOCX Editor with advanced export options

Now, create the Exporting.js file in the src folder to render the DOCX Editor and configure the advanced export options using the Web API.

Code example for quick integration:

import * as React from 'react';
import { useEffect, useRef } from 'react';
import { DocumentEditorContainerComponent, Ribbon } from '@syncfusion/ej2-react-documenteditor';
import "./index.css";

DocumentEditorContainerComponent.Inject(Ribbon);

// Add the Service URL for server-dependent features
let hostUrl = "http://localhost:5257/api/documenteditor/";

const Exporting = () => {
    const container = useRef(null);
    const defaultSFDT = `{
        "sections": [{
            "blocks": [{
                "inlines": [{
                    "text": "Welcome to Syncfusion Document Editor!",
                    "characterFormat": {
                        "bold": true,
                        "fontSize": 14
                    }
                }]
            }]
        }]
    }`;

    useEffect(() => {
        if (container.current) {
            container.current.documentEditor.open(defaultSFDT);
            container.current.documentEditor.documentName = 'Getting Started';
            container.current.documentEditor.focusIn();
        }
    }, []);

    // Ribbon File tab Export menu
    const ribbonExportItems = [
        { text: 'Word Document (*.docx)', id: 'docx' },
        { text: 'Syncfusion Document Text (*.sfdt)', id: 'sfdt' },
        { text: 'Plain Text (*.txt)', id: 'text' },
        { text: 'Word Template (*.dotx)', id: 'dotx' },
        { text: 'PDF (*.pdf)', id: 'pdf' },
        { text: 'HyperText Markup Language (*.html)', id: 'html' },
        { text: 'OpenDocument Text (*.odt)', id: 'odt' },
        { text: 'Markdown (*.md)', id: 'md' },
        { text: 'Rich Text Format (*.rtf)', id: 'rtf' },
        { text: 'Word XML Document (*.xml)', id: 'wordml' },
    ];

    const fileMenuItems = [
        'New',
        'Open',
        { text: 'Export', id: 'export', iconCss: 'e-icons e-export', items: ribbonExportItems },
        'Print',
    ];

    // Common export handler used by Ribbon menu
    const handleExportById = (value) => {
        switch (value) {
            case 'docx':
                container.current.documentEditor.save('Sample', 'Docx');
                break;
            case 'sfdt':
                container.current.documentEditor.save('Sample', 'Sfdt');
                break;
            case 'text':
                container.current.documentEditor.save('Sample', 'Txt');
                break;
            case 'dotx':
                container.current.documentEditor.save('Sample', 'Dotx');
                break;
            case 'pdf':
                formatSave('Pdf');
                break;
            case 'html':
                formatSave('Html');
                break;
            case 'odt':
                formatSave('Odt');
                break;
            case 'md':
                formatSave('Md');
                break;
            case 'rtf':
                formatSave('Rtf');
                break;
            case 'wordml':
                formatSave('Xml');
                break;
            default:
                break;
        }
    };

    const onFileMenuItemClick = (args) => {
        if (args && args.item && args.item.id) {
            if (args.item.id !== 'export') {
                handleExportById(args.item.id);
            }
        }
    };

    function formatSave(type) {
        let format = type;
        let url = container.current.documentEditor.serviceUrl + 'Export';
        let fileName = container.current.documentEditor.documentName;
        let http = new XMLHttpRequest();
        http.open('POST', url);
        http.setRequestHeader('Content-Type', 'application/json;charset=UTF-8');
        http.responseType = 'blob';

        let sfdt = {
            Content: container.current.documentEditor.serialize(),
            FileName: fileName,
            Format: '.' + format
        };

        http.onload = function () {
            if (http.status === 200) {
                let responseData = http.response;
                let blobUrl = URL.createObjectURL(responseData);
                let downloadLink = document.createElement('a');
                downloadLink.href = blobUrl;
                downloadLink.download = fileName + '.' + format.toLowerCase();
                document.body.appendChild(downloadLink);
                downloadLink.click();
                document.body.removeChild(downloadLink);
                URL.revokeObjectURL(blobUrl);
            } else {
                console.error('Request failed with status:', http.status);
            }
        };

        http.send(JSON.stringify(sfdt));
    }

    return (
        <div className="control-pane">
            <div className="control-section">
                <div id="documenteditor_container_body">
                    <DocumentEditorContainerComponent
                        id="container"
                        ref={container}
                        style={{ display: 'block' }}
                        height={'690px'}
                        toolbarMode="Ribbon"
                        ribbonLayout="Classic"
                        serviceUrl={hostUrl}
                        enableToolbar={true}
                        locale="en-US"
                        fileMenuItems={fileMenuItems}
                        fileMenuItemClick={onFileMenuItemClick}
                    />
                </div>
            </div>
        </div>
    );
};

export default Exporting;

Step 5: Launch the application

To see the React DOCX Editor in action:

  • Start your Web API service.
  • Then, run your React app using the following command.
    npm start

After executing the above code examples, you will see the output shown in the following image.

Advanced export options in React DOCX Editor

Real-world use cases

Here are some practical scenarios where advanced export options in the React DOCX Editor can streamline document-sharing:

  1. Legal teams – Contract review: Export to PDF or DOCX for sharing finalized agreements or editable drafts across legal teams.
  2. Finance – Invoice & report distribution: Generate PDF or HTML exports for client-facing invoices and internal financial reports.
  3. HR – Policy templates & letters: Use the DOTX and DOCX formats to maintain consistent branding across HR documents and templates.
  4. Engineering – Developer documentation: Export to Markdown or HTML for publishing technical documents to GitHub, wikis, or internal portals.
  5. Education – Course material sharing: Support ODT and PDF formats for distributing training content across diverse platforms.

GitHub reference

Also, refer to the advanced export options in the Syncfusion React DOCX Editor GitHub demo.

Frequently Asked Questions

What export formats are supported by the Syncfusion React DOCX Editor?

The DOCX Editor supports SFDT, DOCX, DOTX, TXT, RTF, HTML, Markdown, ODT, WordML, and PDF export formats.

Do I need Word installed on my device to edit or export documents?

No, you do not need Microsoft Word installed. The DOCX Editor works directly in the browser.

Which formats require server-side support for import and export?

Server-side support is required to import or export documents in the PDF, HTML, RTF, Markdown, ODT, and WordML formats, and to import DOCX.

Can I use only client-side export without setting up a server?

Yes, you can export DOCX, DOTX, TXT, and SFDT without a server.

Trusted by 80% of the Fortune 500 companies, Syncfusion DOCX Editor unifies Word‑like editing, AI features, accessibility and more in one platform.

Get started with advanced export in React DOCX Editor today

Thanks for reading! With Syncfusion’s powerful components and APIs, building a React DOCX Editor with advanced export capabilities is easier than ever.

Whether you’re building solutions for legal, finance, HR, education, or enterprise teams, this setup gives you the flexibility to export documents in the formats your users need. With both client-side and server-side options, you can deliver a seamless document-sharing experience across platforms.

Are you already a Syncfusion user? You can download the product setup from our license and downloads page. If you’re not yet a Syncfusion user, you can download a 30-day trial.

If you have questions, contact us through our support forum, support portal, or feedback portal. We are always happy to assist you!

Read the whole story
alvinashcraft
48 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Angular Material or PrimeNG? Choosing the Right UI Library in 2026


Angular Material vs PrimeNG (2026): Which UI Library Fits Your Production Angular App?

TL;DR: Choosing between Angular Material and PrimeNG in 2026 is really about where you want to pay the cost in production. Angular Material favors predictable UX, strong accessibility defaults, and seamless SSR alignment, making it a solid choice for product and SaaS applications. PrimeNG, on the other hand, accelerates the delivery of data-heavy dashboards by providing enterprise-ready data tables, charts, and hierarchical components out of the box without weeks of custom CDK work.

Every Angular team building for production eventually faces the same crossroads: Angular Material or PrimeNG?

The answer has never been about which library has more stars on GitHub or which blog post ranks higher. It is about understanding what your application actually demands, and what it will demand two years from now when your codebase has tripled in size, and your team has doubled.

If you are researching any of the following:

  • Angular Material vs PrimeNG
  • Best Angular UI library in 2026
  • PrimeNG vs Angular Material for enterprise apps
  • Which Angular component library should I choose?
  • Angular Material or PrimeNG for dashboards?

…then this guide is for you. It is written for production teams shipping real software, not weekend demos.

Syncfusion® Angular component suite is the only suite you will ever need to develop an Angular application faster.

Quick decision summary (2026)

Before diving into the details, here is a straightforward recommendation table for teams that need an answer now:

| If You Are Building… | Recommended Library | Why |
|---|---|---|
| Clean SaaS product UI | Angular Material | Predictable UI, official Angular alignment, and clean design tokens. |
| Government/compliance app | Angular Material | Strongest accessibility defaults and lower compliance risk. |
| Material Design-aligned app | Angular Material | Native implementation of Google’s Material Design spec. |
| Enterprise admin dashboard | PrimeNG | Advanced DataTable, charting, and tree hierarchies out of the box. |
| Data-heavy analytics system | PrimeNG | Virtual scrolling, lazy loading, aggregation, and export are built in. |
| Custom-branded UI system | PrimeNG | Flexible theming engine, not locked to the Material Design aesthetic. |

If your situation is more nuanced, read on. The rest of this guide exists precisely for cases where a simple table is not enough.

What changed in Angular UI architecture by 2026?

The Angular ecosystem in 2026 looks fundamentally different from the Angular of two years ago. These shifts directly affect how you should evaluate UI libraries.

Standalone components are the default

NgModules are no longer the primary unit of composition. Standalone components, directives, and pipes are now the default project scaffolding. Any UI library that still forces you to import heavyweight module bundles introduces friction in modern Angular architectures. Both Angular Material and PrimeNG have adapted, but the degree of adaptation matters when structuring a large application with fine-grained lazy-loading boundaries.
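As a minimal sketch of what that looks like in practice (component and selector names here are illustrative), a standalone component imports exactly the Material pieces it uses, which keeps lazy-loading boundaries fine-grained:

```typescript
import { Component } from '@angular/core';
import { MatButtonModule } from '@angular/material/button';

@Component({
  selector: 'app-save-button',
  standalone: true,
  imports: [MatButtonModule], // per-component imports, no NgModule bundle
  template: `<button mat-raised-button>Save</button>`,
})
export class SaveButtonComponent {}
```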

SSR is no longer optional

With Angular’s built-in hydration and improved Universal support, server-side rendering has moved from a “nice to have” to a baseline requirement for public-facing applications. UI libraries must render cleanly on the server, hydrate without layout shifts, and avoid browser-only API calls during the initial render pass. Libraries that rely heavily on window, document, or dynamic DOM measurement during initialization create SSR headaches that compound at scale.

Signal-based reactivity

Angular’s signal-based reactivity model has matured. Components built with signals benefit from fine-grained change detection, but only if the UI library’s internal change detection strategy does not fight against it. Libraries that rely on zone.js-triggered change detection or aggressive markForCheck() calls can undermine the performance gains signals provide.
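To see why this matters, it helps to look at the tracking idea behind signals. The toy sketch below is plain TypeScript, not Angular's actual implementation: it only illustrates how reads are tracked so that a write re-runs just the dependent computations. A UI library that forces broad zone-triggered change detection forfeits exactly this fine-grained re-run behavior.

```typescript
// Toy signal/computed pair, similar in spirit to Angular's signal() and
// computed() from @angular/core. This is a sketch, not Angular's code.
type Effect = () => void;
let activeEffect: Effect | null = null;

function signal<T>(value: T) {
  const subscribers = new Set<Effect>();
  const read = () => {
    if (activeEffect) subscribers.add(activeEffect); // track the reader
    return value;
  };
  const set = (next: T) => {
    value = next;
    subscribers.forEach((fn) => fn()); // re-run only dependents
  };
  return { read, set };
}

function computed<T>(fn: () => T) {
  let cached!: T;
  const recompute = () => { cached = fn(); };
  activeEffect = recompute;
  recompute(); // initial run registers dependencies
  activeEffect = null;
  return { read: () => cached };
}

// Only computations that read `count` re-run when it changes.
const count = signal(1);
const double = computed(() => count.read() * 2);
count.set(5);
console.log(double.read()); // 10
```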

Accessibility is an audit requirement

WCAG compliance is no longer a checkbox for government contracts alone. Enterprise clients, SaaS buyers, and even internal tooling teams increasingly require accessibility audit reports. A UI library’s accessibility defaults (ARIA roles, keyboard navigation, focus management, and screen reader announcements) directly raise or lower the cost of passing those audits.

Design systems govern at scale

Large organizations no longer allow individual teams to pick colors and spacing values. Design tokens, governed design systems, and component libraries with strict visual contracts are standard practice. Your UI library must either be your design system or integrate cleanly with one.

A modern UI library must:

  • Support standalone components cleanly.
  • Work with Angular’s signal-based pattern.
  • Respect accessibility guidelines (WCAG 2.2+).
  • Avoid unnecessary bundle bloat.
  • Scale across large development teams.
  • Provide enterprise-grade data components.

This is where Angular Material and PrimeNG differ significantly.

Use the right property of Syncfusion® Angular components to fit your requirement by exploring the complete UG documentation.

What is Angular Material?

Angular Material is the official UI component library maintained by the Angular team at Google. It is not a third-party add-on; it ships from the same team that builds the framework itself.

What it implements

  • Google’s Material Design system: The same design language used across Google’s own products.
  • Accessibility-first components: ARIA roles, keyboard navigation, and focus management are built into the component architecture, not bolted on afterward.
  • Angular CDK (Component Dev Kit): A lower-level toolkit that provides behavioral primitives like overlays, drag-and-drop, virtual scrolling, and accessibility utilities. The CDK is arguably as valuable as the component library itself.
  • Strict design patterns: Opinionated component APIs that enforce consistency.

Where Angular Material excels

  • Official Angular ecosystem alignment: When Angular introduces a new rendering strategy, a new change detection model, or a new build system, Angular Material is updated in lockstep. You never wait for a third-party maintainer to catch up with a breaking change.
  • Strong accessibility defaults: Every component ships with correct ARIA attributes, keyboard interaction patterns, and focus management. For teams building applications that must pass WCAG 2.2 AA audits, this reduces remediation work significantly.
  • Predictable UI behavior: Because Angular Material is opinionated, two different developers on the same team will produce components that look and behave the same way. This consistency compounds over time in large codebases.
  • Lighter footprint: Angular Material’s component set is deliberately curated rather than exhaustive. This results in a leaner dependency tree and smaller production bundles when you are only importing what you use.
  • Excellent documentation: API references, usage examples, and accessibility notes are thorough and consistently maintained.

Where Angular Material has limits

Angular Material focuses on consistency and usability, not feature abundance. It does not ship with advanced data grids, charting, tree tables, or rich text editors. If your application needs those capabilities on day one, Angular Material expects you to build them using the CDK or integrate third-party solutions.

This is a deliberate design choice, not an oversight. Angular Material would rather ship fewer components that meet a high bar than ship many components of varying quality.

What is PrimeNG?

PrimeNG is a comprehensive Angular UI component suite built specifically for Angular applications by PrimeTek, the same company behind PrimeFaces (JSF), PrimeReact, and PrimeVue.

What it includes

  • 80+ UI components: From basic buttons and inputs to highly specialized components like organization charts, Gantt-style schedulers, and terminal emulators.
  • Advanced DataTable: Arguably PrimeNG’s flagship component, with built-in sorting, filtering, pagination, column resizing, row grouping, cell editing, virtual scrolling, lazy loading, and export.
  • TreeTable: Hierarchical data display with expand/collapse, selection, and filtering.
  • Charts: Built-in charting powered by Chart.js integration.
  • Rich form components: Multi-select, autocomplete, cascading dropdowns, color pickers, rating components, and editors.
  • Multiple themes and layout systems: Including a theme designer and pre-built application templates.

Where PrimeNG excels

PrimeNG is built for enterprise-level application complexity. When a product manager walks in and says, “I need a data grid that supports inline editing, multi-column sorting, row expansion with nested tables, CSV export, and server-side pagination by next sprint,” PrimeNG delivers that without custom engineering.

The library’s breadth means teams spend less time building infrastructure components and more time building business logic. For organizations where time-to-feature matters more than pixel-perfect design consistency, PrimeNG’s value proposition is clear.

Where PrimeNG has trade-offs

The comprehensive nature of PrimeNG comes with a larger API surface, more configuration options, and a bigger bundle footprint.

  • Bigger API surface: More options mean more ways to create inconsistency.
  • Potential bundle impact: Feature-rich components can add weight if you import broadly.
  • Accessibility consistency varies: Generally solid, but some components may require more manual attention than Angular Material equivalents.

Core architectural difference

This is the single most important distinction to understand:

  • Angular Material prioritizes design consistency.
  • PrimeNG prioritizes feature completeness.

That philosophical difference cascades through every decision:

| Dimension | Angular Material | PrimeNG |
|---|---|---|
| Component philosophy | Fewer components, higher consistency bar. | More components and broader coverage. |
| Customization model | Constrained by the design system. | Flexible, sometimes at the cost of consistency. |
| Advanced features | Build with CDK primitives. | Ships out of the box. |
| API surface | Smaller and easier to learn. | Larger, more powerful, steeper learning curve. |
| Design governance | Enforced by the library. | Enforced by the team. |

Neither approach is better. But choosing the wrong one for your context creates friction that compounds with every feature you ship.

Real-world comparison (2026 production criteria)

Here is a detailed comparison across the criteria that matter most in production:

| Criteria | Angular Material | PrimeNG |
|---|---|---|
| Official Angular support | Yes, maintained by the Angular team. | No, third-party (PrimeTek). |
| Component volume | Moderate (~30 components + CDK). | Extensive (80+ components). |
| Advanced DataTable | Basic (CDK-powered, extensible). | Enterprise-grade (built-in). |
| Built-in charts | No. | Yes (Chart.js integration). |
| Tree & hierarchy components | Limited. | Strong (Tree, TreeTable, OrgChart). |
| Theming flexibility | Constrained (Material-based). | Highly flexible (theme designer). |
| Accessibility defaults | Strong (WCAG 2.2 AA). | Good (varies by component). |
| Bundle size | Smaller per component. | Larger overall footprint. |
| Learning curve | Low. | Moderate. |
| Enterprise dashboard readiness | Moderate. | High. |
| SSR compatibility | Seamless (official support). | Good (requires configuration care). |
| Signal-based patterns | Early adopter. | Compatible. |
| Long-term upgrade path | Tied to the Angular release cycle. | Independent release cycle. |

Be amazed exploring what kind of application you can develop using Syncfusion® Angular components.

Scenario-based evaluation

Abstract comparisons only go so far. Let us look at two concrete application archetypes and evaluate which library serves each one better.

Scenario 1: Clean SaaS product UI

Application profile: A subscription-based B2B platform, project management tools, CRM dashboards, or internal workflow applications.

Typical features needed:

  • Forms with validation
  • Dialogs and modals
  • Tabs and steppers
  • Side navigation and toolbars
  • Simple data tables with sorting and pagination
  • Role-based UI visibility
  • Responsive layout

Why Angular Material wins here

  • Clean, consistent UI: Every component follows Material Design’s spacing, typography, and interaction patterns. Designers and developers work from the same system, reducing design-to-code translation errors.
  • Official Angular support: When Angular ships with a new rendering optimization, Angular Material supports it on day one. You never block a framework upgrade waiting for a UI library update.
  • Predictable component behavior: Angular Material’s opinionated APIs mean there are fewer ways to misuse a component. A mat-select behaves the same way regardless of which developer implemented it. In a SaaS codebase maintained by rotating team members, this predictability is worth more than raw feature count.
  • Easier long-term refactoring: A smaller API surface means less configuration to migrate during upgrades. Angular Material’s schematics handle most breaking changes automatically.
  • Strong accessibility defaults: SaaS products increasingly need to pass accessibility audits to meet the needs of enterprise buyers. Angular Material’s built-in ARIA support reduces the cost of compliance.

For product-focused teams building subscription platforms or internal tools, Angular Material keeps the UI disciplined and lightweight.

Scenario 2: Enterprise admin dashboard

Application profile: An internal operations platform for a large organization, supply chain management, financial reporting, or analytics dashboards.

Typical features needed:

  • Advanced filtering across multiple data dimensions.
  • Lazy-loaded data grids with thousands of rows.
  • Multi-column sorting and grouping.
  • Tree hierarchies for organizational or category data.
  • Chart visualizations (bar, line, pie, mixed).
  • Complex reports with drill-down.
  • CSV/Excel export.

Why PrimeNG wins here

PrimeNG’s DataTable is purpose-built for this. Out of the box, it provides:

  • Multi-column sorting
  • Column resizing and reordering
  • Server-side and client-side pagination
  • Virtual scrolling for large datasets
  • Row grouping and expansion
  • Lazy loading with backend integration hooks
  • Built-in export to CSV and Excel

Angular Material’s table is flexible and CDK-powered, but achieving the same feature set requires significant custom development. You would need to build column resizing, row grouping, and export functionality yourself, or integrate additional third-party libraries.

For analytics-heavy dashboards, PrimeNG clearly provides more out-of-the-box power and saves weeks of custom development.

Deep-dive: Data Grid capability

Data grids are often the deciding factor in library selection. If your application revolves around tabular data, this section matters more than anything else in this article.

Angular Material table

Angular Material’s table component is built on the CDK’s table foundation. It is clean, structured, and deliberately minimal:

  • Declarative column definitions using matColumnDef
  • Sorting via matSort directive
  • Pagination via matPaginator component
  • No built-in filtering UI (you build your own)
  • No built-in column resizing
  • No built-in row grouping
  • No built-in export
  • Virtual scrolling available via CDK cdk-virtual-scroll-viewport

The Angular Material table is a toolkit, not a finished data grid. It handles basic sorting and pagination well, but advanced scenarios require custom development.
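As a sketch of those building blocks (column and field names are illustrative), a minimal sorted, paginated Material table wires together matColumnDef, matSort, and matPaginator like this:

```html
<!-- Minimal mat-table sketch; dataSource and column names are illustrative. -->
<table mat-table [dataSource]="dataSource" matSort>
  <ng-container matColumnDef="name">
    <th mat-header-cell *matHeaderCellDef mat-sort-header>Name</th>
    <td mat-cell *matCellDef="let row">{{ row.name }}</td>
  </ng-container>
  <tr mat-header-row *matHeaderRowDef="['name']"></tr>
  <tr mat-row *matRowDef="let row; columns: ['name']"></tr>
</table>
<mat-paginator [pageSize]="25"></mat-paginator>
```

Everything beyond this (filter UI, resizing, grouping, export) is yours to build on top.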

PrimeNG DataTable

PrimeNG’s DataTable is a feature-complete data grid component designed for production use:

  • Built-in column filtering with multiple match modes (contains, equals, starts with, custom)
  • Multi-column sorting with priority ordering
  • Column resizing (fit and expand modes)
  • Column reordering via drag-and-drop
  • Row grouping with subheaders and footers
  • Row expansion with nested content templates
  • Inline cell editing and row editing
  • Server-side lazy loading with event callbacks
  • Virtual scrolling with configurable row height
  • Selection (single, multiple, checkbox)
  • Context menus
  • CSV and Excel export
  • Column toggling and frozen columns
  • Aggregation footers (sum, average, count)

The PrimeNG DataTable is a finished product. Configure it, bind your data, and it works. For teams that need advanced data grid capabilities without spending sprints building them, PrimeNG’s DataTable is a strong argument for choosing the library.
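As a hedged sketch of that configure-and-bind experience (handler and field names are illustrative), a lazy-loaded, multi-sortable, paginated p-table looks roughly like this:

```html
<!-- p-table sketch: onLazyLoad is the server-side paging/sorting hook;
     `rows` and `load` are assumed component members. -->
<p-table
  [value]="rows"
  [lazy]="true"
  (onLazyLoad)="load($event)"
  [paginator]="true"
  [rows]="50"
  sortMode="multiple">
  <ng-template pTemplate="header">
    <tr>
      <th pSortableColumn="name">Name <p-sortIcon field="name"></p-sortIcon></th>
    </tr>
  </ng-template>
  <ng-template pTemplate="body" let-row>
    <tr><td>{{ row.name }}</td></tr>
  </ng-template>
</p-table>
```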

The verdict on Data Grids

  • If grids are core to your product: PrimeNG saves significant engineering time.
  • If tables are simple: Angular Material stays lean and predictable.

Production trade-offs that actually matter

Most comparison articles count components. Real-world production success depends on deeper factors that only surface after months of development. Here are six considerations that experienced teams weigh carefully.

| Dimension | Angular Material | PrimeNG |
|---|---|---|
| Maintainability | Opinionated APIs reduce drift and simplify long-term maintenance in large codebases. | Flexible APIs accelerate development but require team conventions to avoid inconsistency. |
| Design system alignment | Best fit for Material Design or closely derived design systems. | Better for custom-branded or white-label products needing deep theming flexibility. |
| Accessibility & compliance | Strong, consistent WCAG-ready defaults across all components. | Good accessibility overall, but complex components may need manual ARIA tuning. |
| Bundle size & performance | Lean, modular footprint works well for simple to moderately complex UIs. | Larger footprint, justified when rich data features replace custom implementations. |
| Standalone & SSR support | First-class alignment with Angular’s SSR and hydration pipeline. | SSR-compatible, but some components require extra care in server-rendered setups. |
| Enterprise scalability | Scales best with growing teams that value guardrails and predictable patterns. | Scales best in feature-driven enterprises that need rapid delivery of complex UI. |

The decision depends on the team’s maturity and the project’s scale. A mature team with strong internal conventions will extract enormous value from PrimeNG’s breadth. A growing team with frequent onboarding may benefit from Angular Material’s guardrails.

When should you choose Angular Material?

Choose Angular Material if:

  • Your application prioritizes clean, consistent UI over feature density.
  • You follow Material Design standards or a design system derived from it.
  • Accessibility compliance is critical, for example where WCAG 2.2 AA is a contractual requirement.
  • Your data display needs are moderate: simple tables, forms, and navigation.
  • You want official Angular ecosystem alignment and release coordination with the Angular team, so a third-party UI stack does not block framework upgrades.
  • You value long-term maintainability over feature abundance.
  • Your team is growing, and you need guardrails that prevent inconsistent component usage.
  • Bundle size is a constraint, as in mobile-first or bandwidth-sensitive deployments.
  • You prefer to build specialized components on top of clean primitives (CDK).

When should you choose PrimeNG?

Choose PrimeNG if:

  • You build enterprise admin dashboards or internal operations platforms.
  • Your application is data-heavy, with large datasets, complex filtering, and multi-dimensional analysis.
  • You need advanced DataTable capabilities immediately, not three sprints from now.
  • Built-in charting is required without adding a separate charting library.
  • Branding flexibility matters if your design system is not based on Material Design.
  • You prefer feature completeness out of the box rather than building from primitives.
  • Your team is experienced and can maintain consistency across a large API surface.
  • You need tree structures, organization charts, or hierarchical data display.
  • Time-to-feature is more important than pixel-perfect design system adherence.

Frequently Asked Questions

Is Angular Material too limited for production apps in 2026?

No. Angular Material intentionally focuses on core UI needs with strong accessibility and Angular alignment. It’s only limiting if you need advanced data grids, charts, or complex hierarchies out-of-the-box.

Does PrimeNG hurt design consistency or maintainability?

It can if used without discipline. PrimeNG’s flexibility speeds up development but requires team conventions and shared patterns to avoid inconsistent UI as the app scales.

Should I choose based on features or long-term architecture?

Architecture matters more. Angular Material favors consistency and maintainability; PrimeNG favors speed and feature depth. The best choice depends on what your app will need to support over time.

Harness the power of feature-rich and powerful Syncfusion® Angular UI components.

Conclusion

Thank you for reading! Choosing Angular Material vs PrimeNG in 2026 is an architecture decision.

  • If you’re building a design-system-driven product UI where accessibility, consistency, and maintainability matter most, Angular Material is the better default.
  • If you’re building data-heavy dashboards where advanced grids, charts, and hierarchy views drive the UX, PrimeNG will usually get you to production faster with less custom UI engineering.

Pick the library that matches what you’re building and what you’ll still be maintaining two years from now.


Running AI agents with customized templates using docker sandbox


This post follows on directly from my previous post, in which I describe how to run AI agents safely using the docker sandbox tool, sbx. In this post I describe how to create custom templates, so that your sandboxes start with additional tools. I show both how to add tools to the default template, and how to start with a different docker image and layer on the docker sandbox tooling later.

Running agents safely in a docker sandbox

As I described in my previous post, working with AI agents in their default mode can mean an infuriating number of tool calls that interrupt your flow and generally slow you down:

The Claude Code permissions call, asking "Do you want to create test.txt?"

However, ignoring these tool calls using the "bypass permissions" mode (AKA YOLO/dangerous mode) can be, well, dangerous. There are plenty of examples of AI agents going rogue; do you want to risk it? Docker Sandboxes provide one solution.

Docker sandboxes run in microVMs, which are isolated from the host machine. The only folder the sandbox can access is the working directory you give it, and all network traffic goes through a network proxy, which can either block traffic or inject credentials so that the coding agent never sees them directly.

I've only used docker sandboxes for a short while, but I've found they work relatively well for my purposes. However, one limitation is that some of the projects I'm working on have a bunch of requirements for tooling, which always needs to be installed in the sandbox. Doing that every time is a bit of a pain. Luckily, there's a solution: custom templates.

Creating a custom Claude Code template

The Docker sandbox documentation describes how to create a custom template based on one of the default templates. I'm going to use the Claude Code examples in this post, but there are different templates for each of the supported agents. For each supported agent there are also two variants: one that includes a Docker Engine, and one that doesn't, e.g.

  • claude-code: includes a variety of dev tools.
  • claude-code-docker: includes the same as above, but also has Docker Engine.

There's also a claude-code-minimal template which is similar to claude-code, but includes fewer tools, so you don't have npm, python, or golang, for example.

To create a custom template, you need to have Docker Desktop installed as you're basically building an OCI image (effectively a docker image, kinda, sorta). That's despite the fact that docker sandboxes don't run as docker containers, but rather as microVMs.

The following example, based on the documentation, shows how to start from the default template, install package manager dependencies, and install other tools, using dotnet as an example:

FROM docker/sandbox-templates:claude-code-docker

# Switch to root to run package manager installs (.NET dependencies)
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    ca-certificates \
    libc6 \
    libgcc-s1 \
    libgssapi-krb5-2 \
    libicu76 \
    libssl3t64 \
    libstdc++6 \
    tzdata \
    zlib1g

# Most tools should be installed at user-level, using the agent user
USER agent
RUN curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin --channel 10.0 --no-path
ENV DOTNET_ROOT=/home/agent/.dotnet \
    PATH=$PATH:/home/agent/.dotnet:/home/agent/.dotnet/tools

This shows several important things:

  • The base docker/sandbox-templates images are based on Ubuntu, so use apt-get for managing packages.
  • The base images include two users, root and agent.
    • System-level package installations must be made using the root user.
    • Tools that install into the home directory must be installed using the agent user.

You can build the image using familiar docker build commands, but you must push it straight to an OCI registry (Docker Hub works!). You can't just build it locally, as the docker sandbox doesn't share the image store with your local Docker host.

docker build -t my-org/my-template:v1 --push .

Once you've pushed the image to an OCI registry you can use it locally in a sandbox by using the --template or -t argument when calling sbx run:

sbx run -t docker.io/my-org/my-template:v1 claude

This will pull (and cache) the template you specify, and you'll have the extra tools immediately available in your sandbox. Note that you must include the docker.io (Docker Hub) or other registry prefix when specifying the template (which differs from when you're running "normal" docker commands).

I've created some sandbox templates for .NET, similar to the above, and pushed them to Docker Hub. You can see the definition of the images here. Feel free to use them if you wish!

Basing your custom templates on the standard default templates works well when you just want to make some extra tools available to your sandbox, but what if you fundamentally want to use a different base image? That's a bit trickier…

What if you need to change the base image?

The "supported" approach to these custom templates is shown in the previous section: you start with the docker/sandbox-templates image and then install the extra tools on that base image. Currently, those images are based on ubuntu 25.10, which is a nice, current base image. But what if you need to use an older image for running tests? This is the case for the Datadog .NET SDK, where we build using old distro versions to ensure we can support customers running with early glibc versions.

This proves a little tricky, as it's not officially supported. To emulate the work the base images do, there are mostly just a few crucial configurations you need to add, such as setting NO_PROXY, creating an agent user, and installing the claude CLI. However, the docker/sandbox-templates images contain a lot more than that, and unfortunately their contents aren't readily available on GitHub, for example.

Luckily, you can see the contents of each layer on Docker Hub. It's a little bit messed up due to how buildkit renders it, but it is understandable. Based on each of those layers, I was able to effectively reverse-engineer the layering of the docker/sandbox-templates:claude-code-docker image and recreate it on top of a different base image.

The following shows a Dockerfile that aims to perform all the steps the default docker/sandbox-templates images do, but based on an arbitrary base image. There's quite a lot in here, but in summary:

  • It configures various environment variables.
  • Installs various basic tools (curl, certificates) and sets up various keyrings.
  • Configures the agent user.
  • Sets up a CLAUDE_ENV_FILE temporary session file.
  • Installs a variety of tools (npm, golang, python, make etc).
  • Installs Claude Code.

All in all, it looks a bit like this:

FROM dd-trace-dotnet/debian-tester AS base

# Grab stuff from the original sandbox
ENV NPM_CONFIG_PREFIX=/usr/local/share/npm-global
ENV PATH=/home/agent/.local/bin:/usr/local/share/npm-global/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV NO_PROXY=localhost,127.0.0.1,::1,172.17.0.0/16
ENV no_proxy=localhost,127.0.0.1,::1,172.17.0.0/16

WORKDIR /home/agent/workspace
RUN apt-get update \
    && apt-get install -yy --no-install-recommends \
    ca-certificates \
    curl \
    gnupg \
    && install -m 0755 -d /etc/apt/keyrings \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | \
    gpg --dearmor -o /etc/apt/keyrings/docker.gpg \
    && chmod a+r /etc/apt/keyrings/docker.gpg \
    && echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
    $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
    tee /etc/apt/sources.list.d/docker.list > /dev/null

# Remove base image user
# Create non-root user
# Configure sudoers
# Create sandbox config
# Set up npm global package folder under /usr/local/share
RUN userdel ubuntu || true \
    && useradd --create-home --uid 1000 --shell /bin/bash agent \
    && groupadd -f docker \
    && usermod -aG sudo agent \
    && usermod -aG docker agent \
    && mkdir /etc/sudoers.d \
    && chmod 0755 /etc/sudoers.d \
    && echo "agent ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/agent \
    && echo "Defaults:%sudo env_keep += \"http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY SSL_CERT_FILE NODE_EXTRA_CA_CERTS REQUESTS_CA_BUNDLE JAVA_TOOL_OPTIONS\"" > /etc/sudoers.d/proxyconfig \
    && mkdir -p /home/agent/.docker/sandbox/locks \
    && chown -R agent:agent /home/agent \
    && mkdir -p /usr/local/share/npm-global \
    && chown -R agent:agent /usr/local/share/npm-global

RUN touch /etc/sandbox-persistent.sh && chmod 644 /etc/sandbox-persistent.sh && chown agent:agent /etc/sandbox-persistent.sh
ENV BASH_ENV=/etc/sandbox-persistent.sh

# Source the sandbox persistent environment file
# Export BASH_ENV so non-interactive child shells also source the persistent env
RUN echo 'if [ -f /etc/sandbox-persistent.sh ]; then . /etc/sandbox-persistent.sh; fi; export BASH_ENV=/etc/sandbox-persistent.sh' \
    | tee /etc/profile.d/sandbox-persistent.sh /tmp/sandbox-bashrc-prepend /home/agent/.bashrc > /dev/null \
    && chmod 644 /etc/profile.d/sandbox-persistent.sh \
    && cat /tmp/sandbox-bashrc-prepend /etc/bash.bashrc > /tmp/new-bashrc \
    && mv /tmp/new-bashrc /etc/bash.bashrc \
    && chmod 644 /etc/bash.bashrc \
    && rm /tmp/sandbox-bashrc-prepend \
    && chmod 644 /home/agent/.bashrc \
    && chown agent:agent /home/agent/.bashrc

USER root

# Setup Github keys
RUN curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg \
    | tee /etc/apt/keyrings/githubcli-archive-keyring.gpg > /dev/null \
    && chmod a+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
    && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" \
    | tee /etc/apt/sources.list.d/github-cli.list > /dev/null

# Install all the tools available in the claude-code-docker image
RUN apt-get update \
    && apt-get install -yy --no-install-recommends \
    dnsutils \
    docker-buildx-plugin \
    docker-ce-cli \
    docker-compose-plugin \
    git \
    jq \
    less \
    lsof \
    make \
    procps \
    psmisc \
    ripgrep \
    rsync \
    socat \
    sudo \
    unzip \
    gh \
    bc \
    default-jdk-headless \
    golang \
    man-db \
    nodejs \
    npm \
    python3 \
    python3-pip \
    containerd.io docker-ce \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

LABEL com.docker.sandboxes.start-docker=true

USER agent

FROM base AS claude

# Install Claude Code
RUN curl -fsSL https://claude.ai/install.sh | bash

ENV CLAUDE_ENV_FILE=/etc/sandbox-persistent.sh
CMD ["claude", "--dangerously-skip-permissions"]
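The BASH_ENV/persistent-env trick in the middle of that Dockerfile is easy to miss, so here is the same mechanism sketched as a plain bash snippet outside Docker. The temp file stands in for /etc/sandbox-persistent.sh, and MY_TOOL_HOME is just an illustrative variable, not something the real image sets:

```shell
# Stand-in for /etc/sandbox-persistent.sh (illustrative path and variable)
persist="$(mktemp)"
echo 'export MY_TOOL_HOME=/opt/mytool' > "$persist"

# BASH_ENV makes every non-interactive bash source this file on startup,
# which is how the sandbox carries environment changes into child shells
export BASH_ENV="$persist"

# A fresh, non-interactive child shell picks up the "persistent" variable
out="$(bash -c 'echo "$MY_TOOL_HOME"')"
echo "$out"

# Tidy up so later shells don't try to source a deleted file
rm -f "$persist"
unset BASH_ENV
```

That is also why the Dockerfile both sets ENV BASH_ENV and prepends the same sourcing line to /etc/profile.d, /etc/bash.bashrc and ~/.bashrc: interactive and non-interactive shells all end up reading the same file.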

If you don't want all the extra tools like npm, Python and Go, you can base it on the claude-code-minimal image instead. In that case, the final tool install step looks a bit like this:
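For reference, the first stage would then start from the minimal template rather than the full one. The exact registry path and tag below are my assumption; check the docker/sandbox-templates repository for the real image reference:

```dockerfile
# Assumed image reference; verify against docker/sandbox-templates
FROM docker/sandbox-templates:claude-code-minimal AS base
```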

RUN apt-get update \
    && apt-get install -yy --no-install-recommends \
        bubblewrap \
        dnsutils \
        docker-buildx-plugin \
        docker-ce-cli \
        docker-compose-plugin \
        git \
        gh \
        jq \
        less \
        lsof \
        make \
        procps \
        psmisc \
        ripgrep \
        rsync \
        socat \
        sudo \
        unzip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

Or, you know, install a mix of those, whatever you want. That's the advantage of this approach: you can install more or fewer tools as you see fit. Whichever approach you like, you can again build and push the image to an OCI registry:

docker build --tag dd-trace-dotnet/sandbox --push .

You can then use the image in your sbx sandbox, just as before, but this time you'll be running in a base image that has all of your prerequisites installed.

Updating the version of Claude Code only

You might notice in the above Dockerfile that I put the Claude Code image in its own section of the multi-stage build:

FROM base AS claude

# Install Claude Code
RUN curl -fsSL https://claude.ai/install.sh | bash

ENV CLAUDE_ENV_FILE=/etc/sandbox-persistent.sh
CMD ["claude", "--dangerously-skip-permissions"]

That's not necessary, but I did it for a subtle reason. Claude Code updates a lot, but I didn't really want to update the entire image repeatedly for performance reasons. By moving the Claude Code install to its own final stage, I could rebuild just that stage, without having to rebuild the entire image, by using --no-cache-filter:

docker build --tag dd-trace-dotnet/sandbox --no-cache-filter claude .

It's just a minor thing, but it means updating to the latest Claude Code version is a much quicker process.
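Put together, the update loop is just the following (this assumes the image tag from above, needs a running Docker daemon, and the version check assumes the claude binary is on the image's PATH):

```shell
# Rebuild only the final "claude" stage; earlier stages come from cache
docker build --tag dd-trace-dotnet/sandbox --no-cache-filter claude .

# Optional sanity check: confirm the image now ships the newer version
docker run --rm dd-trace-dotnet/sandbox claude --version
```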

I still need to test this image out properly, but I tried it out with a previous version and it was working pretty well for me. I'd be interested to know if anyone else has tried something similar, or if you have a better solution (short of just running in yolo/dangerous mode directly on the host!).

Summary

In this post I described how to create custom templates for Docker Sandboxes. First I showed the official approach, which layers tools on top of the default sandbox templates in docker/sandbox-templates. This is the easiest approach, and it works well if the specific base image doesn't matter too much to you. Then I showed how I reverse-engineered the sandbox templates to allow completely swapping out the base image. This was necessary for a project I was working on, where I specifically wanted to run agents in the same base image we use to build the project. This approach isn't supported, and I'm not 100% sure it's quite right, but it seems to do the job well enough!


When Daily Stand-ups Become Status Updates — The Warning Signs of a Team Falling Apart | Efe Gümüs

Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

 

"When people start creating their own bubble inside the team, it's because they either don't feel safe, or they don't feel relevant to what the rest of the team is doing." - Efe Gümüs

 

Efe shares the story of an integration team — back-end and front-end developers working across legacy components, a monolithic environment, and a microservices transformation all at once. With so many different workstreams, team members ended up with their own individual projects. The daily stand-up became a status update: people shared what they were doing, but nobody was listening because nobody else's work affected them. The daily grew from 15 minutes to 30, sometimes an hour, morphing into an unplanned refinement session. Participation dropped — some stopped showing up, others attended but went silent. The team that had once been interactive and collaborative splintered into silos. Informal conversations disappeared entirely, and that was when Efe knew it was too late to make small fixes. Without trust, without a common goal, they were no longer a team — just a group of people sitting together. Then COVID hit, and remote work removed the last chance for accidental collaboration. The daily meeting, Efe realized, is your best radar for team health: pay attention to the energy, the interaction, the engagement — and you'll see the deeper dynamics before they become irreversible.

 

Self-reflection Question: How engaged is your team during the daily stand-up right now — and does the level of interaction tell you something about how connected they feel to each other's work?

Featured Book of the Week: Psycho-Cybernetics by Maxwell Maltz

"The book is all about building success mechanisms inside your own mind. If you can set your life goal, then it's way easier for you to set your career goal, your team goal, your sprint goal." - Efe Gümüs

 

Efe's most influential book isn't about Agile at all — it's Psycho-Cybernetics by Maxwell Maltz, a psychology book about building success mechanisms in your mind. Recommended by a fellow agile coach, the book helped Efe see the parallels between personal goal-setting and the iterative progress at the heart of Scrum. When you feel lost or stagnating, the exercises in the book help you create small pieces of progress — not quick wins, but genuine forward movement that builds momentum. Efe connects this directly to Agile: every event, every sprint, every review is a small achievement toward the next one. If you can set a clear life goal, setting a sprint goal becomes natural. The clarity of purpose unlocks action — and that's as true for individuals as it is for teams.

 

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn't just about innovation—it's about coaching!🔥

Angela thought she was just there to coach a team. But now, she's caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn't just about the product—it's about the people.

 

🚨 Will Angela's coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

 

Buy Now on Amazon

 

[The Scrum Master Toolbox Podcast Recommends]

 

About Efe Gümüs

 

Efe is an out-of-the-box Agile Coach and Scrum Master who brings fresh perspectives to Agile by connecting it with everyday life. He uses metaphors to reveal mindset patterns and applies continuous feedback loops beyond work, including music production and gym training, constantly refining performance, creativity, and personal growth and resilience.

 

You can link with Efe Gümüs on LinkedIn.

Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260414_Efe_Gumus_Tue.mp3?dest-id=246429