Designing for autonomous agents presents a unique frustration. We hand a complex task to an AI, it vanishes for 30 seconds (or 30 minutes), and then it returns with a result. We stare at the screen. Did it work? Did it hallucinate? Did it check the compliance database or skip that step?
We typically respond to this anxiety with one of two extremes. We either keep the system a Black Box, hiding everything to maintain simplicity, or we panic and provide a Data Dump, streaming every log line and API call to the user.
Neither approach directly addresses the nuance needed to provide users with the ideal level of transparency.
The Black Box leaves users feeling powerless. The Data Dump creates notification blindness, destroying the efficiency the agent promised to provide. Users ignore the constant stream of information until something breaks, at which point they lack the context to fix it.
We need an organized way to find the balance. In my previous article, “Designing For Agentic AI”, we looked at interface elements that build trust, like showing the AI’s intended action beforehand (Intent Previews) and giving users control over how much the AI does on its own (Autonomy Dials). But knowing which elements to use is only part of the challenge. The harder question for designers is knowing when to use them.
How do you know which specific moment in a 30-second workflow requires an Intent Preview and which can be handled with a simple log entry?
This article provides a method to answer that question: the Decision Node Audit. This process gets designers and engineers in the same room to map backend logic to the user interface. You will learn how to pinpoint the exact moments a user needs an update on what the AI is doing. We will also cover an Impact/Risk matrix that helps you prioritize which decision nodes to display and which design pattern to pair with each one.
## Transparency Moments: A Case Study Example

Consider Meridian (not its real name), an insurance company that uses an agentic AI to process initial accident claims. The user uploads photos of vehicle damage and the police report. The agent then disappears for a minute before returning with a risk assessment and a proposed payout range.
Initially, Meridian’s interface simply showed Calculating Claim Status. Users grew frustrated. They had submitted several detailed documents and felt uncertain about whether the AI had even reviewed the police report, which contained mitigating circumstances. The Black Box created distrust.
To fix this, the design team conducted a Decision Node Audit. They found that the AI performed three distinct, probability-based steps, with numerous smaller steps embedded:
The team turned these steps into transparency moments. The interface sequence was updated to:
The system still took the same amount of time, but the explicit communication about the agent’s internal workings restored user confidence. Users understood that the AI was performing the complex task it was designed for, and they knew exactly where to focus their attention if the final assessment seemed inaccurate. This design choice transformed a moment of anxiety into a moment of connection with the user.
Most AI experiences have no shortage of events and decision nodes that could potentially be displayed during processing. One of the most critical outcomes of the audit was deciding what to keep invisible. In the Meridian example, the backend logs generated 50+ events per claim. We could have defaulted to displaying each event as it was processed. Instead, we applied the risk matrix to prune them:
By cutting out the unnecessary details, the important information — like the coverage verification — was more impactful. We created an open interface and designed an open experience.
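The pruning step can be sketched as a simple filter over the event log. This is a minimal illustration, not Meridian's actual system: the event names and the `impact` field are hypothetical, standing in for whatever risk classification your own audit produces.

```javascript
// Hypothetical backend events for a single claim (names are illustrative).
const events = [
  { name: 'ocr_page_parsed', impact: 'low' },
  { name: 'photo_angle_normalized', impact: 'low' },
  { name: 'coverage_verified', impact: 'high' },
  { name: 'payout_range_estimated', impact: 'high' },
];

// Keep only the events worth surfacing in the UI.
// Everything else stays in the backend logs for debugging.
function pruneForDisplay(events) {
  return events.filter((event) => event.impact === 'high');
}

console.log(pruneForDisplay(events).map((event) => event.name));
// → ['coverage_verified', 'payout_range_estimated']
```

The point of the sketch is the ratio: of dozens of logged events, only the handful the risk matrix marks as high-impact ever reach the interface.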
This approach uses the idea that people feel better about a service when they can see the work being done. By showing the specific steps (Assessing, Reviewing, Verifying), we changed a 30-second wait from a time of worry (“Is it broken?”) to a time of feeling like something valuable is being created (“It’s thinking”).
Let’s now take a closer look at how we can review the decision-making process in our products to identify key moments that require clear information.
## The Decision Node Audit

Transparency fails when we treat it as a style choice rather than a functional requirement. We have a tendency to ask, “What should the UI look like?” before we ask, “What is the agent actually deciding?”
The Decision Node Audit is a straightforward way to make AI systems easier to understand. It works by carefully mapping out the system’s internal process. The main goal is to find and clearly define the exact moments where the system stops following its set rules and instead makes a choice based on chance or estimation. By mapping this structure, creators can show these points of uncertainty directly to the people using the system. This changes system updates from being vague statements to specific, reliable reports about how the AI reached its conclusion.
In addition to the insurance case study above, I recently worked with a team building a procurement agent. The system reviewed vendor contracts and flagged risks. Originally, the screen displayed a simple progress bar: “Reviewing contracts.” Users hated it. Our research indicated they felt anxious about the legal implications of a missing clause.
We fixed this by conducting a Decision Node Audit. I’ve included a step-by-step checklist for conducting this audit at the conclusion of this article.
We ran a session with the engineers and outlined how the system works. We identified “Decision Points” — moments where the AI had to choose between two good options.
In standard computer programs, the process is clear: if A happens, then B will always happen. In AI systems, the process is often based on chance. The AI thinks A is probably the best choice, but it might only be 65% certain.
In the contract system, we found a moment when the AI checked the liability terms against our company rules. It was rarely a perfect match. The AI had to decide if a 90% match was good enough. This was a key decision point.
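The liability-clause node can be sketched as code to make the contrast with deterministic logic concrete. This is an illustrative sketch only: the threshold, function names, and status copy are assumptions, not the actual contract system's implementation.

```javascript
// Deterministic rule: if A happens, B always happens.
function deterministicCheck(doc) {
  return doc.hasSignature ? 'approved' : 'rejected';
}

// Probabilistic decision node: the AI is only N% confident the clause matches.
// The judgment call ("is a 90% match good enough?") is exactly what the
// interface should surface to the user.
function liabilityClauseNode(similarityScore, threshold = 0.9) {
  if (similarityScore >= threshold) {
    return { decision: 'match', status: 'Liability clause matches the standard template.' };
  }
  return {
    decision: 'review',
    status: 'Liability clause varies from standard template. Analyzing risk level.',
  };
}

console.log(liabilityClauseNode(0.65).status);
// → 'Liability clause varies from standard template. Analyzing risk level.'
```

The deterministic function has nothing worth narrating; the probabilistic one is a transparency moment, because its `status` string is the honest answer to "what is the agent deciding right now?"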

Once we identified this node, we exposed it to the user. Instead of “Reviewing contracts,” the interface updated to say: “Liability clause varies from standard template. Analyzing risk level.”
This specific update gave users confidence. They knew the agent checked the liability clause. They understood the reason for the delay and gained trust that the desired action was occurring on the back end. They also knew where to dig in deeper once the agent generated the contract.
To check how the AI makes decisions, you need to work closely with your engineers, product managers, business analysts, and key people who are making the choices (often hidden) that affect how the AI tool functions. Draw out the steps the tool takes. Mark every spot where the process changes direction because a probability is met. These are the places where you should focus on being more transparent.
As shown in Figure 2 below, the Decision Node Audit involves these steps:
Get the team together: Bring in the product owners, business analysts, designers, key decision-makers, and the engineers who built the AI. For example:
Think about a product team building an AI tool designed to review messy legal contracts. The team includes the UX designer, the product manager, the UX researcher, a practicing lawyer who acts as the subject-matter expert, and the backend engineer who wrote the text-analysis code.
Draw the whole process: Document every step the AI takes, from the user’s first action to the final result.
The team stands at a whiteboard and sketches the entire sequence for a key workflow in which the AI searches for a liability clause in a complex contract: the lawyer uploads a fifty-page PDF → the system converts the document into readable text → the AI scans the pages for liability clauses → the user waits → moments or minutes later, the tool highlights the found paragraphs in yellow in the user interface. They do this for the many other workflows that the tool accommodates as well.
Find where things are unclear: Look at the process map for any spot where the AI compares options or inputs that don’t have one perfect match.
The team looks at the whiteboard to spot the ambiguous steps. Converting an image to text follows strict rules. Finding a specific liability clause involves guesswork. Every firm writes these clauses differently, so the AI has to weigh multiple options and make a prediction instead of finding an exact word match.
Identify the ‘best guess’ steps: For each unclear spot, check if the system uses a confidence score (for example, is it 85% sure?). These are the points where the AI makes a final choice.
The system has to guess (give a probability) which paragraph(s) closely resemble a standard liability clause. It assigns a confidence score to its best guess. That guess is a decision node. The interface needs to tell the lawyer it is highlighting a potential match, rather than stating it found the definitive clause.
Examine the choice: For each choice point, figure out the specific internal math or comparison being done (e.g., matching a part of a contract to a policy or comparing a picture of a broken car to a library of damaged car photos).
The engineer explains that the system compares the various paragraphs against a database of standard liability clauses from past firm cases. It calculates a text similarity score to decide on a match based on probabilities.
Write clear explanations: Create messages for the user that clearly describe the specific internal action happening when the AI makes a choice.
The content designer writes a specific message for this exact moment. The text reads: Comparing document text to standard firm clauses to identify potential liability risks.
Update the screen: Put these new, clear explanations into the user interface, replacing vague messages like “Reviewing contracts.”
The design team removes the generic Processing PDF loading spinner. They insert the new explanation into a status bar located right above the document viewer while the AI thinks.

Once you look closely at the AI’s process, you’ll likely find many points where it makes a choice. An AI might make dozens of small choices for a single complex task. Showing them all creates too much unnecessary information. You need to group these choices.
You can use an Impact/Risk Matrix to sort these choices based on the types of action(s) the AI is taking. Here are examples of impact/risk matrices:
First, look for low-stakes and low-impact decisions.
Low Stakes / Low Impact
Then identify the high-stakes and high-impact decisions.
High Stakes / High Impact
Consider a financial trading bot that treats all buy/sell orders the same. It executes a $5 trade with the same opacity as a $50,000 trade. Users might question whether the tool recognizes that a large trade deserves far more scrutiny than a small one, and they need the system to pause and show its work for the high-stakes trades. The solution is to introduce a Reviewing Logic state for any transaction exceeding a specific dollar amount, allowing the user to see the factors driving the decision before execution.
Once you have identified your experience's key decision nodes, you must decide which UI pattern applies to each one you’ll display. In Designing For Agentic AI, we introduced patterns like the Intent Preview (for high-stakes control) and the Action Audit (for retrospective safety). The decisive factor in choosing between them is reversibility.
We filter every decision node through the impact matrix in order to assign the correct pattern:
High Stakes & Irreversible: These nodes require an Intent Preview. Because the user cannot easily undo the action (e.g., permanently deleting a database), the transparency moment must happen before execution. The system must pause, explain its intent, and require confirmation.
High Stakes & Reversible: These nodes can rely on the Action Audit & Undo pattern. If the AI-powered sales agent moves a lead to a different pipeline, it can do so autonomously as long as it notifies the user and offers an immediate Undo button.
By strictly categorizing nodes this way, we avoid “alert fatigue.” We reserve the high-friction Intent Preview only for the truly irreversible moments, while relying on the Action Audit to maintain speed for everything else.
|  | Reversible | Irreversible |
|---|---|---|
| Low Impact | Type: Auto-Execute<br>UI: Passive Toast / Log<br>Ex: Renaming a file | Type: Confirm<br>UI: Simple Undo option<br>Ex: Archiving an email |
| High Impact | Type: Review<br>UI: Notification + Review Trail<br>Ex: Sending a draft to a client | Type: Intent Preview<br>UI: Modal / Explicit Permission<br>Ex: Deleting a server |
Table 1: The impact and reversibility matrix can then be used to map your moments of transparency to design patterns.
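The matrix in Table 1 can be expressed as a small routing function. This is a sketch under assumptions: the node shape (an object with `impact` and `reversible` fields) and the function name are illustrative, not a prescribed API.

```javascript
// Route a decision node to a UI pattern based on impact and reversibility.
// The four pattern names mirror the impact/reversibility matrix.
function patternFor(node) {
  if (node.impact === 'high') {
    return node.reversible
      ? { type: 'Review', ui: 'Notification + Review Trail' }
      : { type: 'Intent Preview', ui: 'Modal / Explicit Permission' };
  }
  return node.reversible
    ? { type: 'Auto-Execute', ui: 'Passive Toast / Log' }
    : { type: 'Confirm', ui: 'Simple Undo option' };
}

console.log(patternFor({ impact: 'high', reversible: false }).type); // 'Intent Preview'
console.log(patternFor({ impact: 'low', reversible: true }).type);   // 'Auto-Execute'
```

Encoding the matrix this way keeps the high-friction Intent Preview on exactly one path, which is the whole point: irreversible, high-impact actions are the only ones that earn a blocking confirmation.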
You can identify potential nodes on a whiteboard, but you must validate them with human behavior. You need to verify whether your map matches the user’s mental model. I use a protocol called the “Wait, Why?” Test.
Ask a user to watch the agent complete a task. Instruct them to speak aloud. Whenever they ask a question like “Wait, why did it do that?” or “Is it stuck?” or “Did it hear me?”, you mark a timestamp.
These questions signal user confusion. The user feels their control slipping away. For example, in a study for a healthcare scheduling assistant, users watched the agent book an appointment. The screen sat static for four seconds. Participants consistently asked, “Is it checking my calendar or the doctor’s?”

That question revealed a missing Transparency Moment. The system needed to split that four-second wait into two distinct steps: “Checking your availability” followed by “Syncing with provider schedule.”
This small change reduced users’ expressed levels of anxiety.
Transparency fails when it only describes a system action. The interface must connect the technical process to the user’s specific goal. A screen displaying “Checking your availability” falls flat because it lacks context. The user understands that the AI is looking at a calendar, but they do not know why.
We must pair the action with the outcome. The system needs to split that four-second wait into two distinct steps. First, the interface displays “Checking your calendar to find open times.” Then it updates to “Syncing with the provider’s schedule to secure your appointment.” This grounds the technical process in the user’s actual life.
Consider an AI managing inventory for a local cafe. The system encounters a supply shortage. An interface reading “contacting vendor” or “reviewing options” creates anxiety. The manager wonders if the system is canceling the order or buying an expensive alternative. A better approach is to explain the intended result: “Evaluating alternative suppliers to maintain your Friday delivery schedule.” This tells the user exactly what the AI is trying to achieve.
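The action-plus-outcome pairing can be sketched as a tiny message builder. The template and copy are illustrative assumptions; the principle is simply that every status string carries both the technical step and the user's goal.

```javascript
// Pair each technical step with the user's goal, instead of
// describing the system action alone.
function statusMessage(action, outcome) {
  return `${action} to ${outcome}`;
}

const steps = [
  statusMessage('Checking your calendar', 'find open times'),
  statusMessage('Syncing with the provider\u2019s schedule', 'secure your appointment'),
];

console.log(steps[0]); // 'Checking your calendar to find open times'
```

Forcing every message through a two-part template is a cheap guardrail: a writer cannot ship "Checking your availability" without answering the user's implicit "why?"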
## Operationalizing the Audit

You have completed the Decision Node Audit and filtered your list through the Impact and Risk Matrix. You now have a list of essential transparency moments. Next, you need to create them in the UI. This step requires teamwork across different departments. You can’t design transparency by yourself using a design tool. You need to understand how the system works behind the scenes.
Start with a Logic Review. Meet with your lead system designer. Bring your map of decision nodes. You need to confirm that the system can actually share these states. I often find that the technical system doesn’t reveal the exact state I want to show. The engineer might say the system just returns a general “working” status. You must push for a detailed update. You need the system to send a specific notice when it switches from reading text to checking rules. Without that technical connection, your design is impossible to build.
Next, involve the Content Design team. You have the technical reason for the AI’s action, but you need a clear, human-friendly explanation. Engineers provide the underlying process, but content designers provide the way it’s communicated. Do not write these messages alone. A developer might write “Executing function 402,” which is technically correct but meaningless to the user. A designer might write “Thinking,” which is friendly but too vague. A content strategist finds the right middle ground. They create specific phrases, such as “Scanning for liability risks”, that show the AI is working without confusing the user.
Finally, test the transparency of your messages. Don’t wait until the final product is built to see if the text works. I conduct comparison tests on simple prototypes where the only thing that changes is the status message. For example, I show one group (Group A) a message that says “Verifying identity” and another group (Group B) a message that says “Checking government databases” (these are made-up examples, but you understand the point). Then I ask them which AI feels safer. You’ll often discover that certain words cause worry, while others build trust. You must treat the wording as something you need to test and prove effective.
Conducting these audits has the potential to strengthen how a team works together. We stop handing off polished design files. We start using messy prototypes and shared spreadsheets. The core tool becomes a transparency matrix. Engineers and the content designers edit this spreadsheet together. They map the exact technical codes to the words the user will read.
Teams will experience friction during the logic review. Imagine a designer asking the engineer how the AI decides to decline a transaction submitted on an expense report. The engineer might say the backend only outputs a generic status code like “Error: Missing Data”. The designer states that this isn’t actionable information on the screen. The designer negotiates with the engineer to create a specific technical hook. The engineer writes a new rule so the system reports exactly what is missing, such as a missing receipt image.
Content designers act as translators during this phase. A developer might write a technically accurate string like “Calculating confidence threshold for vendor matching.” A content designer translates that string into a phrase that builds trust for a specific outcome. The strategist rewrites it as “Comparing local vendor prices to secure your Friday delivery.” The user understands the action and the result.
The entire cross-functional team sits in on user testing sessions. They watch a real person react to different status messages. Seeing a user panic because the screen says “Executing trade” forces the team to rethink their approach. The engineers and designers align on better wording. They change the text to “Verifying sufficient funds” before buying stock. Testing together guarantees the final interface serves both the system logic and the user’s peace of mind.
It does require time to incorporate these additional activities into the team’s calendar. However, the end result should be a team that communicates more openly, and users who have a better understanding of what their AI-powered tools are doing on their behalf (and why). This integrated approach is a cornerstone of designing truly trustworthy AI experiences.
## Trust Is A Design Choice

We often view trust as an emotional byproduct of a good user experience. It is easier to view trust as a mechanical result of predictable communication.
We build trust by showing the right information at the right time. We destroy it by overwhelming the user or hiding the machinery completely.
Start with the Decision Node Audit, particularly for agentic AI tools and products. Find the moments where the system makes a judgment call. Map those moments to the Risk Matrix. If the stakes are high, open the box. Show the work.
In the next article, we will look at how to design these moments: how to write the copy, structure the UI, and handle the inevitable errors when the agent gets it wrong.
## Appendix: The Decision Node Audit Checklist

Phase 1: Setup and Mapping
✅ Get the team together: Bring in the product owners, business analysts, designers, key decision-makers, and the engineers who built the AI.
Hint: You need the engineers to explain the actual backend logic. Do not attempt this step alone.
✅ Draw the whole process: Document every step the AI takes, from the user’s first action to the final result.
Hint: A physical whiteboard session often works best for drawing out these initial steps.
Phase 2: Locating the Hidden Logic
✅ Find where things are unclear: Look at the process map for any spot where the AI compares options or inputs that do not have one perfect match.
✅ Identify the best guess steps: For each unclear spot, check if the system uses a confidence score. For example, ask if the system is 85 percent sure. These are the points where the AI makes a final choice.
✅ Examine the choice: For each choice point, figure out the specific internal math or comparison being done. An example is matching a part of a contract to a policy. Another example involves comparing a picture of a broken car to a library of damaged car photos.
Phase 3: Creating the User Experience
✅ Write clear explanations: Create messages for the user that clearly describe the specific internal action happening when the AI makes a choice.
Hint: Ground your messages in concrete reality. If an AI books a meeting with a client at a local cafe, tell the user the system is checking the cafe reservation system.
✅ Update the screen: Put these new, clear explanations into the user interface. Replace vague messages like Reviewing contracts with your specific explanations.
✅ Check for Trust: Make sure the new screen messages give users a simple reason for any wait time or result. This should make them feel confident and trusting.
Hint: Test these messages with actual users to verify they understand the specific outcome being achieved.
When a user signs in to an application, their Identity Provider (IdP) provides metadata about the user’s identity. This static information was provided by the user when the account was created, like the user’s name, email address, and country of origin. The amount of data available depends on the IdP implementation and requirements. Based on the requested (and consented) scopes, the IdP provides some or all of this information as claims to the client application.
The default mechanism that Duende IdentityServer uses for storing claims containing user information is a client-side cookie. Too much information bloats the cookie, increasing the size of each request and degrading performance. Additionally, the web client is storing access tokens in the browser, which goes against today’s best practices (e.g., using Backend-for-Frontend). We can work around these issues by storing the cookie data on the server using Duende IdentityServer server-side sessions.
Your branching strategy can support Continuous Delivery, or it can make Continuous Delivery an impossible goal. You should assess how your branching approach affects your ability to deliver software at all times. Some branching techniques work, while others make software delivery more like walking in the dark through a field of rakes.
Continuous Integration is the practice of integrating code changes frequently, typically multiple times a day. This means you should check in your code multiple times a day and keep your main branch deployable. Continuous Integration is a prerequisite for Continuous Delivery.
The name “Continuous Integration” provides solid hints about the crucial parts of the practice. Everyone should integrate their code into a shared branch (feature branches don’t count) and this should be done continuously, which means you’re doing it all the time; at least once a day, but ideally more often.
You’ll often speak to developers who want to stretch the definition of Continuous Integration, but you can’t escape those foundations. Merging the main branch into a long-lived branch feels like Continuous Integration, but you’ll notice no changes come back for ages until someone finally merges to main. Then you have to perform a large, complex merge, which you should avoid.
The DORA research on Continuous Delivery includes the capabilities of Continuous Integration and trunk-based development. The statistics suggest you can get similar benefits as long as you limit yourself to 3 (or fewer) short-lived branches (less than a day old).
You can watch the episode below, or read on to find some of the key discussion points.
Watch Continuous Delivery Office Hours Ep.3
Since the ability to branch code was invented, developers have applied a great deal of creativity to how they use branches. The most common approach is feature branching, where each feature gets a branch off main that evolves separately until the feature is complete and is merged back into main. Release branching allows development to continue on main by taking a cut of the main branch that will be released; hotfixes can then be applied to the release branch without pulling in further destabilizing changes from main.
The utility of branching strategies is often eroded by the coordination overhead of maintaining the separate branches and merging different changes back together. The more complex the branching strategy, the more likely it is that you’ll have merge conflicts and lost bug fixes. For example, you might fix a release branch and forget to merge the fix back to main, so the next release reintroduces the bug.
This is why the worst branching strategy is Gitflow. This is a complicated branching strategy that creates dedicated branches for features, releases, and hotfixes alongside permanent main and develop branches. The overhead of Gitflow vastly outweighs its benefits.
This is where trunk-based development shines, as it removes unnecessary complexity.
Trunk-based development is the process of making all commits directly to the main branch. This is complemented by out-of-band reviews that don’t block merging and by feature toggles that decouple deployments from releases. It should be possible to deploy your software from the main branch at all times, even if features aren’t complete.
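Feature toggles are what make it safe to merge incomplete work to main. A minimal sketch of the idea, under assumptions: the flag name is made up, and real teams typically load flags from configuration or a flag service rather than a literal object.

```javascript
// Minimal feature-toggle sketch: flags decouple deployment from release.
// In production this object would come from config or a flag service.
const flags = { newCheckoutFlow: false };

function renderCheckout(flags) {
  // The incomplete feature ships "dark": deployed on main,
  // but not yet released to users.
  return flags.newCheckoutFlow ? 'new checkout' : 'legacy checkout';
}

console.log(renderCheckout(flags)); // 'legacy checkout'
console.log(renderCheckout({ newCheckoutFlow: true })); // 'new checkout'
```

Because the toggle, not the deployment, controls release, the main branch stays deployable even while the new checkout flow is half-built.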
The DORA research allows up to 3 short-lived branches, which can be useful for teams working remotely (like open-source project teams), who can use branches and pull requests to coordinate their work. Even so, the goal is to keep branches short-lived and to merge frequently.
Trunk-based development is complemented by automated builds and checks that run when code is committed to the main branch. If a problem is found during these checks, the team should prioritize the fix over other development work.
Teams with the best software delivery performance work in small steps. Trunk-based development and Continuous Integration are crucial practices for controlling batch size. Problems are discovered sooner and are easier to fix when you only have a small amount of change to reason about.
Making frequent commits makes it easy to back out a bad change. If a test fails, you can discard changes since the last commit and try again, instead of trying to debug the problem. This is especially true when using AI coding assistants or other code generation techniques.
Ultimately, for trunk-based development to succeed without friction, the entire team must be aligned on the process. It doesn’t work if only some of the team are on board.
Happy deployments!
Continuous Delivery Office Hours is a series of conversations about software delivery, with Tony Kelly, Bob Walker, and Steve Fenton.
You can find more episodes on YouTube, Apple Podcasts, and Pocket Casts.
TL;DR: Modern spreadsheets aren’t complete without intuitive clipboard support. This guide shows how to implement and customize cut, copy, and paste operations in a React Spreadsheet, whether through the UI or APIs. You’ll also learn how to handle data from external sources and restrict specific paste actions to preserve data integrity.
Clipboard support is one of those features users only notice when it doesn’t work. If you’re building an app with a spreadsheet-like UI, people expect Excel-style cut/copy/paste, keyboard shortcuts, and sane behavior when they paste data from outside your app.
In this post, we’ll walk through how to handle clipboard operations in the Syncfusion® React Spreadsheet, including:
Developers usually run into clipboard issues in a few common scenarios:
This is where built-in clipboard support helps, but the key is knowing how to control it.
When you’re working with Spreadsheets, cut, copy, and paste aren’t just basic actions; they’re the backbone of smooth data handling. The component offers a robust set of options for managing data:
- Ctrl+C, Ctrl+X, and Ctrl+V for quick clipboard actions.

The Syncfusion React Spreadsheet includes built-in clipboard functionality, making these operations easy to use. If you need to enable or disable them globally, use the enableClipboard property (enabled by default).
<SpreadsheetComponent enableClipboard={true} />
Need to move data around in your Spreadsheet? That’s where the Cut operation comes in. It allows you to move data without creating duplicates. When you cut cells, rows, or columns, the content is stored in the clipboard but remains in place until you paste it. The original data is removed only after the paste action is completed.
The Syncfusion React Spreadsheet makes moving data straightforward. Here are the different ways to perform a cut:


// cuts the currently selected range
spreadsheet.cut();
// cuts the given address range
spreadsheet.cut('A2:C5');
Sometimes you don’t need to move data; just create an extra copy. The Copy operation lets you duplicate selected cells, rows, or columns and store them in the clipboard, ready to be pasted anywhere in the Spreadsheet.
The Syncfusion React Spreadsheet offers multiple ways to copy data, allowing you to choose the most convenient approach:


// copy the currently selected range
spreadsheet.copy();
// copy data from the specified range
spreadsheet.copy('A2:C5');
The Paste operation allows you to insert clipboard content into cells, rows, or columns quickly and accurately. It enables you to move copied or cut data into the Spreadsheet with full control over what gets pasted.
Paste options available:

The React Spreadsheet supports pasting data copied from external applications such as Excel. Supported content includes values, number formats (date, currency, percentage), text formatting (bold, italic, font color), and basic cell formatting like background color and borders. This ensures imported data retains its readability and structure.
Ways to paste data in Syncfusion React Spreadsheet:


- Ctrl + V (Windows) or Command + V (Mac) for the quickest way to paste.

// Paste the selected cell.
spreadsheet.paste();
// Paste all the clipboard content to the specified range.
spreadsheet.paste('B2', 'All');
// Paste only the values to the specified range.
spreadsheet.paste('B3', 'Values');
// Apply only the formats to the specified range.
spreadsheet.paste('B4', 'Formats');
In the code example above, the paste() method gives you full control over where and how clipboard data is inserted, making it easy to handle simple and advanced paste scenarios.
Here’s a quick demo of the feature in action:

Uncontrolled paste operations can introduce unwanted data, styles, or even break formulas. With the React Spreadsheet, you can intercept and stop a paste action before it’s applied.
The key is the actionBegin event. This event fires before any spreadsheet action occurs, including paste operations. By detecting a paste request and canceling it, you can completely prevent the action.
Code snippet to achieve this:
const onActionBegin = (pasteArgs) => {
  // Detect a clipboard paste request and cancel it before it is applied
  if (
    pasteArgs.args.action === 'clipboard' &&
    pasteArgs.args.eventArgs.requestType === 'paste'
  ) {
    pasteArgs.args.eventArgs.cancel = true;
  }
};
Here’s a quick demo of how paste actions can be blocked.

When pasting data from applications like Excel or Google Sheets, formatting such as fonts, colors, and borders often comes along. If you want to keep your Spreadsheet clean and consistent, you can allow only the values to be pasted and strip out all styles.
You can achieve this behavior using three key events:

actionBegin: Triggered before any action starts in the Spreadsheet. Use it to detect that the incoming paste originates from an external clipboard.
beforeCellUpdate: Triggered just before a cell's value or property is updated. Use it to strip styles and formatting so that only the value is applied.
actionComplete: Triggered after the paste operation finishes. Use this event to reset any temporary state or cleanup logic related to the paste action.

The code example below shows how to paste only values from an external clipboard.
// Tracks whether the copied content came from an external clipboard
let isPasteFromExternalClipboard;

// Triggered before any action starts in the Spreadsheet
const onActionBegin = (args) => {
  // Check that the request is a paste and the content was copied from an external clipboard
  if (
    args.args.eventArgs &&
    args.args.eventArgs.requestType === 'paste' &&
    (args.args.eventArgs.copiedInfo === null ||
      args.args.eventArgs.copiedInfo === undefined)
  ) {
    isPasteFromExternalClipboard = true;
  }
};

// Triggered just before a cell's value or property is updated
const onBeforeCellUpdate = (args) => {
  if (isPasteFromExternalClipboard) {
    if (args.cell) {
      // Remove styles and formatting; paste only values into cells
      const value = args.cell.value;
      args.cell = { value: value };
    }
  }
};

// Triggered after an action is completed
const onActionComplete = (args) => {
  if (
    args.action === 'clipboard' &&
    args.eventArgs.requestType === 'paste'
  ) {
    // Reset the flag
    isPasteFromExternalClipboard = false;
  }
};
<SpreadsheetComponent
  actionBegin={onActionBegin}
  actionComplete={onActionComplete}
  beforeCellUpdate={onBeforeCellUpdate}
>
</SpreadsheetComponent>
As a result, only raw cell values are pasted, with all formatting removed.
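The stripping step in onBeforeCellUpdate boils down to keeping only the cell's value key. Isolated as a plain helper (an illustrative sketch, not a Syncfusion API), the transformation looks like this:

```javascript
// Keep only the value from a cell object, discarding styles and formats.
// Mirrors the `args.cell = { value: value }` assignment in onBeforeCellUpdate.
function stripToValue(cell) {
  return { value: cell.value };
}

// Example: a bold, red cell becomes a plain value-only cell.
console.log(stripToValue({ value: 42, style: { fontWeight: 'bold', color: 'red' } }));
// { value: 42 }
```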
See the feature in action below:

The actionBegin event lets you check whether copiedInfo is null or undefined, which indicates an external clipboard source.
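That check can be captured in a small predicate (a hypothetical helper, not part of the Syncfusion API), mirroring the condition used in the onActionBegin handler earlier:

```javascript
// True when the paste request carries no copiedInfo, i.e. the data was
// copied outside the Spreadsheet (Excel, a web page, etc.).
function isExternalPaste(eventArgs) {
  return eventArgs.requestType === 'paste' &&
    (eventArgs.copiedInfo === null || eventArgs.copiedInfo === undefined);
}
```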
Yes. You can override the default paste behavior by intercepting the actionBegin event when the requestType is paste, and then overwriting the incoming value or style payload before it is applied to the cells.
You can copy from a protected sheet, but the cut action is blocked because protection prevents modifying the source cells. Copied data can still be pasted into an unprotected sheet since the target sheet allows edits.
Yes. You can block copy actions for specific protected cells by checking the selected range during the copy event and canceling it only for those cells, while still allowing copying from other permitted cells.
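One way to implement that check is a plain range-overlap test. The sketch below assumes the restricted cells are known as an "A1:B2"-style range string (the event wiring itself would follow the actionBegin pattern shown earlier); only the overlap logic is shown:

```javascript
// Convert an "A1"-style reference to numeric row/column (A = 1, AA = 27).
function parseCell(ref) {
  const letters = ref.match(/[A-Z]+/)[0];
  const row = parseInt(ref.match(/\d+/)[0], 10);
  let col = 0;
  for (const ch of letters) col = col * 26 + (ch.charCodeAt(0) - 64);
  return { row, col };
}

// True when two "A1:B2"-style ranges share at least one cell.
function rangesOverlap(a, b) {
  const [a1, a2] = a.split(':').map(parseCell);
  const [b1, b2] = b.split(':').map(parseCell);
  return a1.row <= b2.row && b1.row <= a2.row &&
         a1.col <= b2.col && b1.col <= a2.col;
}
```

In an actionBegin handler, you could then cancel the copy whenever rangesOverlap(selectedRange, restrictedRange) returns true.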
Thank you for reading! Clipboard support isn’t just Ctrl+C/Ctrl+V; it’s a big part of whether your spreadsheet UI feels trustworthy. With Syncfusion React Spreadsheet, you can rely on default Excel-like behavior, and then add guardrails where real apps need them: preventing paste, or allowing paste while stripping formatting from external sources.
Syncfusion Spreadsheet is also available for JavaScript, Angular, Vue, ASP.NET Core, and ASP.NET MVC platforms, making it easy to integrate across your tech stack.
If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.
You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
TL;DR: Multi‑format document handling in apps doesn’t need separate viewers. By converting Word, Excel, PowerPoint, Images, and XPS into PDF, the .NET MAUI PDF Viewer becomes a universal document hub. Developers get one streamlined interface for previewing, annotating, and collaborating across formats. The result: simpler workflows, consistent user experience, reduced complexity, and scalable cross‑platform document management.
Modern cross-platform apps rarely deal with “just PDFs.” Users open Word docs, Excel sheets, PowerPoint decks, images, and even XPS files, often inside the same workflow. The problem: building and maintaining a separate viewer per format is costly, inconsistent across platforms, and hard to scale.
To create a smoother workflow, adopt a simpler approach: standardize PDF as the universal document viewer format. By converting incoming files to PDF, you enable seamless rendering through a single PDF Viewer component.
With this strategy in mind, this blog will guide you through setting up a unified document viewing workflow for your .NET MAUI apps.
PDF is a stable, layout-preserving format that works well for preview and review scenarios.
The Syncfusion® .NET MAUI PDF Viewer natively supports PDF files and offers a rich interaction model. When combined with Syncfusion Document Processing Libraries, it becomes a complete, universal document viewing engine.
Once all documents are converted to PDF, the .NET MAUI PDF Viewer presents a single, consistent UI across all formats.
The Syncfusion .NET MAUI PDF Viewer supports converting and previewing five commonly used file types: Word documents, Excel workbooks, PowerPoint presentations, images, and XPS files.
To convert these formats into PDF, Syncfusion offers a range of document processing libraries: DocIORenderer for Word, XlsIORenderer for Excel, PresentationRenderer for PowerPoint, PDF Imaging for images, and XpsToPdfConverter for XPS. Depending on the file type and your specific requirements, one or more of these libraries can be used.
Each library handles format-specific conversion, producing a PdfDocument object that can be passed to the viewer. This approach maintains high fidelity and preserves layouts.
1. First, we need to establish the foundation by creating a new .NET MAUI app. Then, install the required packages for the .NET MAUI PDF Viewer and document processing libraries.
Here’s the code you need:
<PackageReference Include="Syncfusion.DocIORenderer.NET" Version="*" />
<PackageReference Include="Syncfusion.Maui.PdfViewer" Version="*" />
<PackageReference Include="Syncfusion.Maui.TabView" Version="*" />
<PackageReference Include="Syncfusion.Pdf.Imaging.NET" Version="*" />
<PackageReference Include="Syncfusion.Presentation.NET" Version="*" />
<PackageReference Include="Syncfusion.PresentationRenderer.NET" Version="*" />
<PackageReference Include="Syncfusion.XlsIORenderer.NET" Version="*" />
<PackageReference Include="Syncfusion.XpsToPdfConverter.NET" Version="*" />
2. Next, configure the handlers in the MauiProgram.cs file as shown in the code below:
using Syncfusion.Maui.Core.Hosting;

public static MauiApp CreateMauiApp()
{
    var builder = MauiApp.CreateBuilder();
    builder
        .UseMauiApp<App>()
        .ConfigureSyncfusionCore();
    return builder.Build();
}
Let’s see the steps to build a seamless universal document viewing experience using the Syncfusion .NET MAUI PDF Viewer:
Once the required Syncfusion libraries are configured, implement the logic to convert each supported file type into a PDF.
Below is a breakdown of the conversion logic for each supported format:
To convert a Word document, use the DocIORenderer.ConvertToPDF(WordDocument) method, which converts a loaded WordDocument into a PdfDocument.
Refer to the example below.
// Load an existing Word document from embedded resources
Assembly assembly = typeof(App).GetTypeInfo().Assembly;
using (WordDocument document = new WordDocument(
    assembly.GetManifestResourceStream("SampleName.Assets.InputDocument.docx"),
    FormatType.Docx))
{
    // Convert to PDF using DocIO's rendering engine
    using (DocIORenderer renderer = new DocIORenderer())
    {
        PdfDocument pdfDocument = renderer.ConvertToPDF(document);
        // Proceed to save as stream
    }
}
Note: You can check out more options and examples in the Word to PDF conversion guide.
Excel workbooks can be converted using the XlsIORenderer.ConvertToPDF(IWorkbook) method, which converts an IWorkbook (Excel document) into a PdfDocument.
Here’s the code example for quick conversion:
// Initialize the Excel engine and open the workbook
using (ExcelEngine excelEngine = new ExcelEngine())
{
    IApplication application = excelEngine.Excel;
    IWorkbook workbook = application.Workbooks.Open(
        assembly.GetManifestResourceStream("SampleName.Assets.InputDocument.xlsx"));
    // Convert the workbook to PDF
    XlsIORenderer renderer = new XlsIORenderer();
    PdfDocument pdfDocument = renderer.ConvertToPDF(workbook);
    // Proceed to save as stream
}
Note: You can check out more options and examples in the Excel to PDF conversion guide.
For PowerPoint presentations, the PresentationToPdfConverter.Convert(IPresentation) method handles the conversion of an IPresentation (PowerPoint presentation) object into a PdfDocument.
Please refer to the complete code block:
// Open the PowerPoint presentation
using (IPresentation presentation = Presentation.Open(
    assembly.GetManifestResourceStream("SampleName.Assets.InputDocument.pptx")))
{
    // Convert the slides to PDF pages
    PdfDocument pdfDocument = PresentationToPdfConverter.Convert(presentation);
    // Proceed to save as stream
}
Note: You can check out more options and examples in the PowerPoint to PDF conversion documentation.
For image conversion, the PdfGraphics.DrawImage() method facilitates the conversion of images to PDF by drawing a PdfBitmap onto a PDF page. You can load images from various sources using the PdfBitmap class.
// Create a new PDF document
PdfDocument document = new PdfDocument();
// Add a page to the document
PdfPage page = document.Pages.Add();
PdfGraphics graphics = page.Graphics;
// Load the image from the disk
FileStream imageStream = new FileStream("Autumn Leaves.jpg", FileMode.Open, FileAccess.Read);
PdfBitmap image = new PdfBitmap(imageStream);
// Draw the image
graphics.DrawImage(image, 0, 0);
Note: For more details, refer to the Image to PDF Conversion guide.
XPS documents can be converted using the XPSToPdfConverter.Convert(Stream) method, which converts an XPS file stream into a PdfDocument, preserving the document layout and the visual appearance of each page.
Code snippet to achieve this:
// Load the XPS file and convert it to PDF
XPSToPdfConverter converter = new XPSToPdfConverter();
using (FileStream xpsStream = new FileStream("sample.xps", FileMode.Open, FileAccess.Read))
{
    PdfDocument pdfDocument = converter.Convert(xpsStream);
    // Proceed to save as stream
}
Note: For detailed information, refer to the XPS to PDF conversion documentation.
Each of these conversions delivers a PdfDocument object, which can then be saved to a stream and passed to the .NET MAUI PDF Viewer for rendering.
Once converted, save the PDF document as a MemoryStream. This allows you to pass the PDF directly to the .NET MAUI PDF Viewer without needing to store it on disk, as shown in the code example below.
// Assuming `pdfDocument` is the result of any conversion (Word, Excel, etc.) from step 3
MemoryStream pdfStream = new MemoryStream();
pdfDocument.Save(pdfStream);
pdfStream.Position = 0; // Reset the stream position before passing to the viewer
Finally, load the PDF stream into the Syncfusion.Maui.PdfViewer component to render the document within your app.
Here’s how you can do it in code:
<!-- Add SfPdfViewer in MainPage.xaml -->
<pdfViewer:SfPdfViewer x:Name="PdfViewer" />

// In the code-behind (e.g., MainPage.xaml.cs), assuming `pdfStream` contains the converted PDF from the previous step
PdfViewer.DocumentSource = pdfStream;
After completing the implementation, your app will display the converted PDF document using the Syncfusion .NET MAUI PDF Viewer.
Below is an example of what the final preview might look like when various types of documents are shown in tabs:

Once documents are converted to PDF, the Syncfusion PDF Viewer becomes the central interface for interacting with them. Beyond simply displaying converted documents, the PDF Viewer acts as a powerful, universal document engine. It enhances the user experience by offering a rich set of features tailored for review, navigation, and collaboration:
That’s why the PDF Viewer is not just a passive display tool but a full-fledged review and collaboration engine.
Since converted documents (e.g., Word, Excel, PowerPoint) may lose their native editing capabilities in PDF form, the annotation and review tools offered by the .NET MAUI PDF Viewer become essential. They allow users to:
This strategic use of the PDF format ensures controlled editing, consistent formatting, and streamlined workflows.
Note: For more details, refer to the Syncfusion .NET MAUI PDF Viewer documentation.
We’ve put together a complete working demo for building a universal .NET MAUI document viewer on GitHub. You can also extend it to support additional formats where feasible, based on available Syncfusion APIs or third-party solutions.
Yes. All document conversions using Syncfusion Document Processing Libraries occur locally within the app runtime. No files are uploaded to external servers or cloud services unless explicitly implemented by the developer. The original documents remain unchanged, ensuring data integrity and privacy throughout the conversion and preview process.
Yes. By converting documents into PDF for preview, native editing capabilities of formats like Word, Excel, and PowerPoint are intentionally restricted. The Syncfusion .NET MAUI PDF Viewer provides controlled interaction through annotations only, helping prevent accidental or unauthorized content changes while still enabling review and feedback.
Absolutely. Converted PDF documents can be stored and rendered directly from in-memory streams (such as MemoryStream) without writing files to disk. This approach minimizes data exposure, reduces persistence risks, and is well-suited for handling sensitive or confidential documents securely.
Thanks for reading! By combining the Syncfusion Document Processing Libraries with the .NET MAUI PDF Viewer, you can replace multiple document viewers with a single, universal viewing solution.
The result: simpler workflows, a consistent user experience, reduced complexity, and scalable cross-platform document management.
If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.
You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!