In this article, part 3 of the “Moving from Python to esProc SPL” series, you’ll discover the key differences between Python and SPL. Whether you’re considering adding SPL to your skillset or simply curious about alternative approaches to data analysis, this comparison will help you understand when and why you might choose one over the other.
In the first two articles, we looked at setting up the esProc SPL environment and its syntax and data structures. Now that you have a foundation in SPL basics, it’s time to address a question some data analysts may ask: “How does SPL compare to Python, and why might I want to add it to my toolkit?”
As a Python developer, you’ve likely mastered libraries like Pandas for data analysis. Python’s flexibility and extensive ecosystem make it a versatile tool for everything from data cleaning to machine learning. However, esProc SPL offers a different approach to data processing that can be more intuitive and efficient for certain tasks.
The first and most fundamental difference between Python and SPL is their programming paradigm. Python follows an imperative programming model, where you specify a sequence of operations to transform data. SPL, on the other hand, uses a dataflow programming model, where you define a series of steps that data flows through.

In Python, you typically write code that explicitly states how to perform operations:
import pandas as pd
# Load data
sales_data = pd.read_csv("sales.csv")
# Filter data
high_value_sales = sales_data[sales_data['AMOUNT'] > 1000]
# Group and aggregate
region_totals = high_value_sales.groupby('REGION')['AMOUNT'].sum().reset_index()
# Sort results
sorted_totals = region_totals.sort_values('AMOUNT', ascending=False)
# Display results
print(sorted_totals)
In this Python example, you create a series of variables that hold the intermediate results of your data transformations. The focus is on how to perform each step, and you need to track the flow of data through these variables. If you want to see intermediate results, you need to explicitly print them.
In SPL, you define a sequence of cells, each representing a step in your data processing workflow:
| A | |
| 1 | =file("document/sales.csv").import@ct() |
| 2 | =A1.select(AMOUNT>1000) |
| 3 | =A2.groups(REGION;sum(AMOUNT):TOTAL) |
| 4 | =A3.sort(TOTAL:-1) |
In SPL, each cell represents a transformation of the data, and the results flow naturally from one cell to the next. The focus is on what happens to the data at each step, rather than how to perform each operation. The results of each step are immediately visible in the IDE (integrated development environment) – unlike Python, where you must manually print intermediate results. This makes it easier to understand the data flow and verify that each step is working as expected in SPL.
Additionally, instead of variables, SPL uses cell references like A1, A2, and A3 to represent each step in the workflow, making the data flow more structured and transparent. This approach contrasts with Python, where the sequence of operations determines the flow implicitly.
By focusing on defining what needs to be done rather than detailing every step of how to do it, SPL can make data transformations more intuitive, especially when working with large datasets that require multiple processing steps.
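For comparison, Pandas can approximate this cell-by-cell dataflow style with method chaining. The sketch below uses a small invented DataFrame in place of sales.csv:

```python
import pandas as pd

# Hypothetical in-memory stand-in for sales.csv.
sales_data = pd.DataFrame({
    "REGION": ["East", "West", "East", "North"],
    "AMOUNT": [1500, 800, 2200, 1200],
})

# Each chained step mirrors one SPL cell: filter, group, sort.
region_totals = (
    sales_data[sales_data["AMOUNT"] > 1000]
    .groupby("REGION", as_index=False)["AMOUNT"].sum()
    .sort_values("AMOUNT", ascending=False)
)
print(region_totals)
```

Chaining keeps the pipeline readable, but unlike SPL cells, the intermediate results are not visible unless you break the chain apart and print each step.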
Python offers multiple ways to filter and sort data, including list comprehensions and Pandas methods. Let’s compare these with SPL’s approach.
# Using boolean indexing
laptops = sales_df[sales_df['PRODUCT'] == 'Laptop']
# Using query method
high_value_laptops = sales_df.query("PRODUCT == 'Laptop' and AMOUNT > 1000")
# Using list comprehension (with records)
laptop_records = [row for row in sales_df.to_dict('records') if row['PRODUCT'] == 'Laptop']
print(f"Number of laptop sales: {len(laptops)}")
print(f"Number of high-value laptop sales: {len(high_value_laptops)}")
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.select(PRODUCT=="Laptop") | |
| 3 | =A2.len() | 18 |
| 4 | =A2.select(AMOUNT>1000) | Filtered for high-value laptops |
| 5 | =A4.len() | 12 |
The output shows that there are 18 laptop sales in total (A3) and 12 high-value laptop sales (A5). This means that two-thirds of all laptop sales (12 out of 18) are high-value sales over $1,000. SPL’s `select` method provides a concise way to filter data based on conditions, and the results are immediately visible in the IDE.
# Filtering with multiple conditions
complex_filter = sales_df[
(sales_df['REGION'].isin(['East', 'West'])) &
(sales_df['AMOUNT'] > 1000) &
((sales_df['PRODUCT'] == 'Laptop') | (sales_df['PRODUCT'] == 'Server'))
]
print(f"Complex filter results: {len(complex_filter)}")
print(complex_filter.head(3))
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.select((REGION=="East" || REGION=="West") && AMOUNT>1000 && (PRODUCT=="Laptop" || PRODUCT=="Server")) | |
| 3 | =A2.len() | 14 |
| 4 | =A2.to(3) | First 3 rows of filtered data |
The output of A3 shows that there are 14 rows matching the complex filter: high-value sales (over $1,000) of laptops or servers in the East or West regions. The output of A4 shows the first three of those rows.
SPL’s syntax for complex conditions is more concise and readable than Pandas’ syntax, especially for nested conditions. The use of familiar operators like `==`, `&&`, and `||` makes the code more intuitive, particularly for those coming from programming languages like JavaScript or C#.
# Sort by a single column
sorted_by_amount = sales_df.sort_values('AMOUNT', ascending=False)
# Sort by multiple columns
multi_sorted = sales_df.sort_values(['REGION', 'AMOUNT'], ascending=[True, False])
print("Sorted by AMOUNT (descending):")
print(sorted_by_amount.head(3))
print("\nSorted by REGION (ascending) and AMOUNT (descending):")
print(multi_sorted.head(3))
| A | |
| 1 | =file("document/sales.csv").import@ct() |
| 2 | =A1.sort(AMOUNT:-1) |
| 3 | =A2.to(3) |
| 4 | =A1.sort(REGION,AMOUNT:-1) |
| 5 | =A4.to(3) |
The output of A3 shows the three highest-value sales in the dataset, sorted in descending order by amount. The highest sale is a server for $2,550, followed by another server sale for $2,500.
The output of A5 shows the first three rows sorted by region (alphabetically) and then by amount (descending). All three rows are from the East region, with the highest-value sales listed first. SPL’s `sort` method provides a concise way to sort data by multiple columns in different directions.
When it comes to filtering and sorting, SPL offers a more concise and readable syntax compared to Pandas, particularly when dealing with complex conditions. It uses familiar logical operators like ==, &&, and ||, making expressions more intuitive, whereas Pandas relies on & and |, which require careful use of parentheses to avoid errors.
Sorting in SPL is also straightforward, as you can specify descending order with -1, while Pandas requires setting the ascending parameter to False. Both languages support chaining operations, but SPL’s cell-based execution allows you to inspect intermediate results more easily, making debugging and analysis more transparent.
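To illustrate the parenthesization point, here is a minimal Pandas example (with invented data) showing why the extra parentheses are required:

```python
import pandas as pd

df = pd.DataFrame({"PRODUCT": ["Laptop", "Server"], "AMOUNT": [1200, 900]})

# Correct: each comparison wrapped in parentheses before combining with &.
ok = df[(df["PRODUCT"] == "Laptop") & (df["AMOUNT"] > 1000)]
print(len(ok))

# Without parentheses, & binds tighter than == and >, so Python tries to
# evaluate "Laptop" & df["AMOUNT"] first, which fails:
try:
    df[df["PRODUCT"] == "Laptop" & df["AMOUNT"] > 1000]
    print("no error")
except Exception as exc:
    print("precedence error:", type(exc).__name__)
```

SPL’s `&&` and `||` sidestep this because they bind more loosely than the comparison operators, matching most programmers’ intuition.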
Grouping and aggregation are common operations in data analysis, and both Python/Pandas and SPL provide powerful tools for these tasks. Let’s compare their approaches.
# Group by REGION and sum AMOUNT
region_totals = sales_df.groupby('REGION')['AMOUNT'].sum().reset_index()
# Group by REGION and calculate multiple aggregates
region_stats = sales_df.groupby('REGION').agg({
'AMOUNT': ['sum', 'mean', 'count']
}).reset_index()
# Rename columns for clarity
region_stats.columns = ['REGION', 'TOTAL', 'AVERAGE', 'COUNT']
print("Region Totals:")
print(region_totals)
print("\nRegion Statistics:")
print(region_stats)
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.groups(REGION;sum(AMOUNT):TOTAL) | |
| 3 | =A1.groups(REGION; sum(AMOUNT):TOTAL, avg(AMOUNT):AVERAGE, count(AMOUNT):COUNT) | Multiple aggregates |
The output of A2 shows the total sales amount for each region. The West region has the highest total sales at $37,730, followed by North at $34,370, East at $26,990, and South at $23,340.
The output of A3 provides a more comprehensive view of sales by region, including the total sales amount, average sale amount, and number of sales for each region. SPL’s `groups` method provides a concise way to calculate multiple aggregates in a single operation, with clear syntax for naming the resulting columns.
# Group by multiple columns
product_region = sales_df.groupby(['REGION', 'PRODUCT']).agg({
'AMOUNT': ['sum', 'count']
}).reset_index()
# Rename columns
product_region.columns = ['REGION', 'PRODUCT', 'TOTAL', 'COUNT']
# Filter groups after aggregation
high_volume = product_region[product_region['COUNT'] > 3]
print("Product sales by region (high volume only):")
print(high_volume)
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.groups(REGION, PRODUCT; sum(AMOUNT):TOTAL, count(AMOUNT):COUNT) | Group by REGION and PRODUCT |
| 3 | =A2.select(COUNT>3) | Filter for high-volume products |
The output of A3 shows the region-product combinations with more than 3 sales. Laptops are popular across all regions, with the highest total sales in the East region ($22,690). SPL’s approach to grouping and filtering is more concise and readable.
In SQL, the HAVING clause filters groups after aggregation. Let’s compare how Python and SPL handle this:
# Two-step process: group, then filter
region_totals = sales_df.groupby('REGION')['AMOUNT'].sum().reset_index()
high_total_regions = region_totals[region_totals['AMOUNT'] > 30000]
print("Regions with total sales over $30,000:")
print(high_total_regions)
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.groups(REGION;sum(AMOUNT):TOTAL) | Group by REGION and sum AMOUNT |
| 3 | =A2.select(TOTAL>30000) | Filter for high-total regions |
SPL’s approach to post-aggregation filtering is similar to Pandas’. In esProc SPL, the groups method offers a clear and intuitive way to perform grouping and aggregation, separating grouping columns from aggregate expressions for better readability.
Unlike Pandas, where renaming aggregated columns often requires an additional step, SPL allows direct renaming within the groups method using the : syntax. Both languages support filtering results after aggregation, but SPL’s approach tends to be more concise.
Additionally, SPL simplifies calculating multiple aggregates in a single operation with a straightforward syntax for naming the resulting columns, reducing the need for extra processing steps.
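It is worth noting that Pandas’ named-aggregation syntax narrows this gap: output columns can be named inline, much like SPL’s `:` syntax. A small sketch with invented data:

```python
import pandas as pd

# Hypothetical sample rows standing in for sales.csv.
sales_df = pd.DataFrame({
    "REGION": ["East", "East", "West"],
    "AMOUNT": [1000, 2000, 1500],
})

# Named aggregation: output columns are named inline, so the separate
# column-renaming step from the earlier example is unnecessary.
region_stats = sales_df.groupby("REGION", as_index=False).agg(
    TOTAL=("AMOUNT", "sum"),
    AVERAGE=("AMOUNT", "mean"),
    COUNT=("AMOUNT", "count"),
)
print(region_stats)
```

Even so, SPL keeps the grouping keys and aggregate expressions in a single expression, which many readers find easier to scan.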
Date and time manipulation is a common task in data analysis. Let’s compare how Python and SPL handle these operations.
import pandas as pd
from datetime import datetime
# Load CSV file and parse dates
sales_df = pd.read_csv("sales.csv", parse_dates=['DATE'])
# Define date range
start_date = datetime(2023, 5, 1)
end_date = datetime(2023, 5, 31)
# Filter sales data within the date range
may_sales = sales_df[(sales_df['DATE'] >= start_date) & (sales_df['DATE'] <= end_date)]
# Display the number of sales in May 2023
print(f"Number of sales in May 2023: {len(may_sales)}")
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.select(DATE>=date("2023-05-01") && DATE<=date("2023-05-31")) | Filter for May 2023 |
| 3 | =A2.len() | 28 |
The output of A3 shows that there are 28 sales in May 2023. SPL’s date functions make it easy to filter data by date range, with a syntax that’s similar to filtering by other types of values.
# Extract date components
sales_df['YEAR'] = sales_df['DATE'].dt.year
sales_df['MONTH'] = sales_df['DATE'].dt.month
sales_df['DAY'] = sales_df['DATE'].dt.day
# Group by month
monthly_sales = sales_df.groupby('MONTH')['AMOUNT'].sum().reset_index()
print("Monthly sales totals:")
print(monthly_sales)
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.groups(month(DATE):MONTH; sum(AMOUNT):TOTAL) | |
The output of A2 will show the total sales for each month. SPL’s date functions make it easy to extract components from dates and use them for grouping and analysis.
# Add 30 days to each date
from datetime import timedelta  # timedelta was not imported above
sales_df['FUTURE_DATE'] = sales_df['DATE'] + timedelta(days=30)
# Calculate days between dates (DATE is a datetime64 column, so subtract directly)
today = pd.Timestamp('today').normalize()
sales_df['DAYS_AGO'] = (today - sales_df['DATE']).dt.days
print("Dates with days ago:")
print(sales_df[['DATE', 'DAYS_AGO']].head(3))
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.derive(elapse(DATE,30):FUTURE_DATE) | Add 30 days to each date |
| 3 | =A2.derive(interval(DATE,now()):DAYS_AGO) | Calculate days between dates |
| 4 | =A3.new(DATE,DAYS_AGO).to(3) | First 3 rows |
The output of A4 shows the original date and the number of days between that date and today. SPL’s `interval` function calculates the number of days between two dates, similar to subtracting dates in Pandas.
When working with dates, both Pandas and esProc SPL offer a range of functions, but their approaches differ. In Pandas, you need to parse dates when loading data, while in SPL, string dates are automatically converted into date objects. SPL provides built-in functions like year(), month(), and day() that are called directly on date objects, whereas Pandas requires the .dt accessor for similar operations.
Date arithmetic is also handled differently: SPL uses methods like elapse(), while Pandas relies on operators combined with timedelta. When calculating intervals between dates, SPL offers the interval@d function to return the number of days between two dates, which is similar to subtracting date objects in Pandas.
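The Pandas counterparts of those SPL date operations can be sketched in one place (dates invented for illustration):

```python
import pandas as pd

# Invented dates standing in for a DATE column.
dates = pd.DataFrame({"DATE": pd.to_datetime(["2023-05-01", "2023-05-15"])})

# Counterpart of SPL's elapse(): shift every date forward by 30 days.
dates["FUTURE_DATE"] = dates["DATE"] + pd.Timedelta(days=30)

# Counterpart of SPL's interval@d: whole days between two dates.
dates["DAYS_APART"] = (pd.Timestamp("2023-06-01") - dates["DATE"]).dt.days
print(dates)
```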
String manipulation is another common task in data analysis. Let’s compare how Python and SPL handle these operations.
# Convert to uppercase
sales_df['REGION_UPPER'] = sales_df['REGION'].str.upper()
# Extract substring
sales_df['REGION_FIRST_3'] = sales_df['REGION'].str[:3]
# Concatenate strings
sales_df['PRODUCT_REGION'] = sales_df['PRODUCT'] + " - " + sales_df['REGION']
print("String operations:")
print(sales_df[['REGION', 'REGION_UPPER', 'REGION_FIRST_3', 'PRODUCT_REGION']].head(3))
| A | | |
| 1 | =file("document/sales.csv").import@ct() | |
| 2 | =A1.derive(upper(REGION):REGION_UPPER, substr(REGION,1,3):REGION_FIRST_3, PRODUCT+" - "+REGION:PRODUCT_REGION) | String operations |
| 3 | =A2.to(3) | First three rows |
The output of A3 shows the results of various string operations on the REGION column. The `upper` function converts the region to uppercase, the `substr` function extracts the first three characters, and the `+` operator concatenates the product and region with a separator. SPL’s string functions are similar to Python’s, but with a more integrated approach that allows you to perform multiple operations in a single `derive` call.
# Check if string contains substring
sales_df['HAS_EAST'] = sales_df['REGION'].str.contains('East')
# Replace substring
sales_df['REGION_MODIFIED'] = sales_df['REGION'].str.replace('East', 'Eastern')
print("String searching and replacing:")
print(sales_df[['REGION', 'HAS_EAST', 'REGION_MODIFIED']].head(5))
| A | | |
| 1 | =file("document/sales.csv").import@ct(REGION) | |
| 2 | =A1.derive(pos(REGION,"East")>0:HAS_EAST, replace(REGION,"East","Eastern"):REGION_MODIFIED) | String searching and replacing |
| 3 | =A2.to(5) | First five rows |

The output of A3 shows the results of string searching and replacing operations. The `pos` function returns the position of a substring within a string, and we use `pos(REGION,"East")>0` to check if the region contains “East”. The `replace` function replaces all occurrences of a substring with another string. SPL’s string functions provide similar capabilities to Python’s, but with a syntax that integrates well with other data operations.
import re
# Extract digits from product names
def extract_digits(text):
    match = re.search(r'\d+', str(text))
    return match.group(0) if match else ""
sales_df['PRODUCT_DIGITS'] = sales_df['PRODUCT'].apply(extract_digits)
print("Regular expression extraction:")
print(sales_df[['PRODUCT', 'PRODUCT_DIGITS']].head(5))
| A | |
| 1 | =file("document/sales.csv").import@ct(PRODUCT) |
| 2 | =A1.derive(PRODUCT.regex("[^\d]*(\d+)"):PRODUCT_DIGITS) |
| 3 | =A2.to(3) |
esProc SPL allows for efficient string manipulation and regular expression operations directly within its functional framework. In this example, we extract the digits from each product name using a regex-based approach. Unlike Python’s Pandas, where string operations require the str accessor or a helper function, SPL applies transformations directly to data columns, making expressions more concise.
While Python offers greater flexibility in handling complex regex patterns, SPL integrates string processing directly into data manipulation steps, reducing the need for additional function calls like `apply()`.
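As a side note, Pandas can also avoid `apply()` for this task: `str.extract` applies a regex across the whole column in one vectorized call. A sketch with invented product names:

```python
import pandas as pd

# Invented product names; only some contain digits.
products = pd.DataFrame({"PRODUCT": ["Laptop 15", "Server 2000", "Monitor"]})

# str.extract runs the regex across the whole column in one call;
# non-matching rows yield NaN, replaced here with an empty string.
products["PRODUCT_DIGITS"] = (
    products["PRODUCT"].str.extract(r"(\d+)", expand=False).fillna("")
)
print(products)
```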
In this article, we’ve compared Python and esProc SPL across various aspects of data analysis, from basic operations to complex transformations. While both languages are good tools for data analysis, they approach the task from different perspectives.
As a Python developer, adding SPL to your toolkit doesn’t mean abandoning Python. Instead, it gives you another perspective on data analysis and another tool for specific tasks where SPL’s approach might be more efficient or intuitive.
Click here for more in the “Moving from Python to esProc SPL” series.
esProc SPL offers a different approach to data analysis that can be more intuitive for certain tasks. Its cell-based, data-flow programming model makes complex data transformations easier to understand and debug. The immediate visibility of results at each step helps you identify issues early and iterate quickly. While Python excels at general-purpose programming and has a vast ecosystem, esProc SPL can be more efficient for data transformation workflows, especially when working with tabular data.
Yes, esProc SPL provides built-in filtering, sorting, grouping, and aggregation functions.
esProc SPL’s `groups()` function achieves grouping and aggregation in one step.
Performance depends on the specific operation and dataset size. esProc SPL is optimized for data processing operations and can be faster than Pandas for certain tasks, especially those involving complex transformations of tabular data. Also, its memory management is designed specifically for data processing, which can lead to better performance for large datasets. However, for large datasets that exceed memory capacity, both tools offer options for processing data in chunks or connecting to external databases.
The post Data analysis in Python and esProc SPL compared – what are the differences, and which is best? appeared first on Simple Talk.
TL;DR: Want your React spreadsheets to feel rock‑solid without slowing your users down? Spreadsheet protection gives you the best of both worlds: you can lock down formulas, structure, and layouts while keeping key input cells open for editing. Use sheet protection + protectSettings to decide exactly what users can and can’t do, lockCells() to open just the ranges that matter, and workbook protection to stop sheet renaming, deleting, or moving.
Handling sensitive or business-critical data in the Syncfusion® React Spreadsheet? Protecting sheets and workbooks ensures your templates, formulas, and inputs stay intact and tamper‑free.
This guide walks you through sheet protection, unlocking specific cells, read-only ranges, and workbook protection. Syncfusion includes these different layers of protection, each for different use cases.
Ready to lock it down? Explore advanced sheet-protection methods in Syncfusion React Spreadsheet to keep your data secure.
Let’s dive in and see how you can lock sheets, secure cells, protect workbooks, and manage permissions with ease!
The protect sheet feature makes a sheet read-only, preventing edits unless you explicitly allow them. Enable sheet protection by setting the isProtected property to true (it is false by default).
By default, editing actions such as modifying, formatting, inserting, and deleting content are disabled.
The protectSettings option lets you enable specific actions even when a sheet is protected. All options are disabled by default; set to true to enable.
The available protectSettings options include selectCells, selectUnLockedCells, formatCells, formatRows, formatColumns, and insertLink, as shown in the configuration examples that follow.
For example, if formatCells: true is set in protectSettings, users can still apply styles, borders, and font or background colors even while the sheet is protected.
You can protect a sheet directly from the Syncfusion Spreadsheet UI.
After selecting the option, a protection settings dialog appears, where users can choose which actions remain allowed and confirm the protection.
Once confirmed, the sheet will be protected based on the permissions.
You can enable sheet protection through model binding during the initial load by using the isProtected property.
Here’s how you can do it in code:
const protectSettings = {
selectCells: true,
selectUnLockedCells: false,
formatCells: false,
insertLink: false,
formatColumns: false,
formatRows: false,
};
<SheetDirective
name='EMI Schedule'
isProtected={true}
protectSettings={protectSettings} >
</SheetDirective>
In the above code example, the sheet is protected by passing isProtected as true and protectSettings to configure permissions in the SheetDirective.
You can also use the protectSheet method to apply sheet protection, which lets you configure permissions using protectSettings.
Refer to the code example below to learn how to enable sheet protection in a spreadsheet.
const onCreated = () => {
// Protect settings
const protectSettings = {
selectCells: true,
selectUnLockedCells: false,
formatCells: false,
insertLink: false,
formatColumns: false,
formatRows: false
}
// To protect the sheet programmatically using the protect sheet method.
spreadsheet.protectSheet('EMI Calculator', protectSettings);
}
In this code example, protection is applied in the created event, which runs after the spreadsheet initializes. By default, all cells are locked when protection is active, and you can customize what users can still do by modifying protectSettings.

When you protect a sheet, all the cells are locked by default (i.e., isLocked property is set to true). But what if you need to edit specific cells in a protected sheet?
That’s where the lockCells() method comes in! To unlock specific cells or ranges, simply pass the cell range and set the isLocked parameter in the lockCells() method to false. This keeps those cells editable even while the sheet remains protected.
You can also use the same method to lock a particular range by setting the second parameter(isLocked) to true.
Try this in your code:
//Unlocking cells using the lockCells method in a protected sheet
spreadsheet.lockCells('C2:C9', false);
In this code example, the cell range C2:C9 becomes editable, allowing users to modify these cells while the rest of the sheet stays protected.

The Unprotect Sheet feature restores full editing access, allowing users to modify, format, insert, or delete content in the sheet. You can unprotect your sheet in two ways.
From the Syncfusion Spreadsheet UI, select the unprotect sheet option.
Once completed, users regain full editing access to the sheet.
You can remove sheet protection using the unprotectSheet method, as shown below.
// To unprotect the sheet programmatically using the unprotectSheet public method.
// You can pass the index or name of the sheet to unprotect it.
spreadsheet.unprotectSheet('EMI Calculator');
In this code example, the sheet EMI Calculator is initially protected on the created event. The unprotectSheet method removes protection from the specified sheet (by name or index) and makes it editable again.
Want to lock only certain cells without protecting the entire sheet? The Read-Only feature lets you restrict editing, formatting, inserting, or deleting selected cells, rows, or columns, while still allowing users to view the data.
Note: Cells marked as read-only using the isReadOnly property will not remain protected in the saved Excel file after exporting the spreadsheet. If you want cells to stay non-editable in the exported file, use the Protect Sheet feature instead, as it ensures the protection settings are correctly preserved.
Here’s how to do it using the API: to make a range read-only, call the setRangeReadOnly method and pass true as the first parameter, along with the desired range and sheet index; to remove the restriction, call the same method with false as the first parameter. Quick and easy, your data stays safe without locking the whole sheet!
Code block:
//To apply read-only to cells
spreadsheet.setRangeReadOnly(true, 'E1:F4', spreadsheet.activeSheetIndex);
//To apply read-only to a row
spreadsheet.setRangeReadOnly(true, '2:2', spreadsheet.activeSheetIndex);
//To apply read-only to columns
spreadsheet.setRangeReadOnly(true, 'A:C', spreadsheet.activeSheetIndex);
//To remove read-only from a range
spreadsheet.setRangeReadOnly(false, '2:2', spreadsheet.activeSheetIndex);
In this code block, the setRangeReadOnly method is used to make specific parts of the spreadsheet non-editable. You can apply read-only to a range of cells, an entire row, or a column by passing the respective range (e.g., 'E1' for a cell, '2:2' for a row, and 'A:C' for a column). This ensures users cannot modify those ranges while other cells in the sheet remain interactive.
Alternatively, you can make cells read-only using the cell data-binding approach by setting the isReadOnly property to true for specific cells, rows, or columns.
Code example for quick integration:
//To apply read-only to a row
<RowDirective index={3} isReadOnly={true}></RowDirective>
//To apply read-only to a column
<ColumnDirective isReadOnly={true} width={130}></ColumnDirective>
//To apply read-only to a cell
<CellDirective index={5} isReadOnly={true}></CellDirective>
In this code block, the isReadOnly property is applied at the row, column, and cell levels to make specific parts of the spreadsheet non-editable.
Want to keep your workbook structure safe from accidental changes? The Protect Workbook feature is your go-to. Once enabled, it locks down actions like inserting, deleting, hiding, renaming, or moving sheets. Here are two easy ways to do it:
Click Data tab → Protect Workbook.

You can set a password (optional) and confirm it.

To protect a workbook, set the isProtected property to true. If you want to protect the sheet with a password, you can set the password property.
Below is a code example demonstrating how to protect a workbook using the spreadsheet API.
<SpreadsheetComponent isProtected={true} password='spreadsheet' >
Need to unlock your workbook? The Unprotect Workbook feature lets you remove restrictions on inserting, deleting, renaming, hiding, or moving sheets.
Go to the Data tab → click Unprotect Workbook. If a password is set, enter it when prompted.

Simple as that, your workbook is now open for modifications.
To explore these features in action, visit our GitHub repository, which contains sample implementations for sheet protection, workbook protection, and read-only cells.
Protect the sheet, then unlock specific ranges using lockCells(range, false). This keeps formulas/templates locked while allowing controlled data entry.
Use read-only ranges when full sheet protection isn’t required. Use sheet protection with locked and unlocked cells when you need stronger restrictions or when protection must persist after Excel export.
Enable workbook protection to block structural changes like rename, delete, hide, move, and insert sheets.
Thank you for reading! Syncfusion React Spreadsheet gives you complete control over data security with flexible protection options. Whether you need to lock entire sheets, safeguard workbook structure, allow edits only in selected cells, or set read-only access, these features make it easy to keep your spreadsheets accurate and secure.
Ready to secure your data? Try Syncfusion React Spreadsheet today and experience powerful protection features in action! Syncfusion Spreadsheet is also available for JavaScript, Angular, Vue, ASP.NET Core, and ASP.NET MVC platforms, making it easy to integrate across your tech stack.
If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.
You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
TL;DR: Learn to implement digital signatures inside the Syncfusion WPF PDF Viewer using programmatic techniques. It covers adding a custom eSign button, capturing click positions, generating a signature image, and applying a certificate-based signature. It enables a smooth, secure in-app signing experience even though the viewer does not support signature creation through its UI.
A digital signature is a cryptographic stamp applied to a document using a certificate containing a private key. It isn’t just about convenience; it’s about trust.
Digital signatures ensure document integrity, signer identity verification, and tamper detection.
For this reason, digital signatures are standard in legal, financial, and enterprise document workflows.
While Syncfusion WPF PDF Viewer currently supports viewing digital signatures, it does not allow adding or modifying them directly through the UI.
But by combining the Syncfusion WPF PDF Viewer and .NET PDF Library, we can enable programmatic digital signing. We can customize the toolbar, capture user interactions, and embed a certificate-based signature with a dynamically generated image.
Instead of relying on external tools or complex user flows, this solution enables secure, fully integrated digital signing, right inside your WPF PDF Viewer.
Install the Syncfusion WPF PDF Viewer and Syncfusion .NET PDF Library packages as prerequisites.
We need a valid .pfx certificate file that contains a private key. This certificate is used to securely sign the PDF and embed metadata such as the signer’s identity, location, and signing reason.
This digital signing workflow is built around four key steps: adding a custom eSign button to the toolbar, capturing the click position, generating the signature image, and applying the certificate-based signature.
When the WPF PDF Viewer loads, you can inject a custom eSign button into the existing toolbar.
To keep the UI consistent, we will reuse the style and icon of an existing toolbar item, as shown in the following code.
private void PDFViewer_Loaded(object sender, RoutedEventArgs e)
{
var toolbar = PDFViewer.Template.FindName("PART_Toolbar", PDFViewer) as DocumentToolbar;
var stackPanel = toolbar.Template.FindName("PART_ToolbarStack", toolbar) as StackPanel;
var defaultButton = (Button)((StackPanel)stackPanel.Children[^1]).Children[0];
var eSignButton = GetButton((Path)defaultButton.Content, defaultButton);
stackPanel.Children.Add(eSignButton);
}
The GetButton method clones the style and icon of the existing toolbar button and attaches the eSign click handler.
Once the eSign button is clicked, a flag is set to indicate that the next mouse click should trigger signature placement. The click coordinates are then converted to page-relative positions using the WPF PDF Viewer’s built-in method, ensuring the signature lands exactly where the user clicked.
Refer to the following code example.
private void PDFViewer_PageClicked(object sender, PageClickedEventArgs args)
{
if (addSignature)
{
var pageIndex = PDFViewer.CurrentPageIndex - 1;
var pagePoint = PDFViewer.ConvertClientPointToPagePoint(args.Position, pageIndex + 1);
ApplySignature(pageIndex, pagePoint);
addSignature = false;
}
}
Instead of a generic stamp, this solution creates a meaningful visual signature: it renders the signer’s name alongside a dynamically generated block containing the signer and timestamp, then combines them into a single signature image. The visual result meets professional and regulatory requirements.
private void CreateCurrentDataImage()
{
string text = $"Digitally signed by John\nDate: {DateTime.Now:yyyy.MM.dd}\n{DateTime.Now:HH:mm:ss zzz}";
using var bitmap = new Bitmap(200, 100);
using var graphics = Graphics.FromImage(bitmap);
graphics.FillRectangle(Brushes.White, 0, 0, 200, 100);
graphics.DrawString(text, new Font("Arial", 9), Brushes.Black, new RectangleF(10, 10, 180, 80));
bitmap.Save(filePath + "DigitalSignatureBlock.png", ImageFormat.Png);
}
private void CombineSignatureAndDataImage()
{
    // Place the handwritten signature and the data block side by side.
    using var nameImage = Image.FromFile(filePath + "John.png");
    using var signImage = Image.FromFile(filePath + "DigitalSignatureBlock.png");
    using var combinedImage = new Bitmap(nameImage.Width + signImage.Width, Math.Max(nameImage.Height, signImage.Height));
    using var g = Graphics.FromImage(combinedImage);
    g.DrawImage(nameImage, 0, 0);
    g.DrawImage(signImage, nameImage.Width, 0); // offset by the first image's width
    combinedImage.Save(filePath + "ESign.png", ImageFormat.Png);
}
Here’s where security comes in. Using the .pfx certificate, the document is signed with the certificate’s private key, and the combined signature image is drawn as the signature’s visible appearance. Then, the PDF is saved and reloaded, now officially signed. All of this happens programmatically and securely.
private void ApplySignature(int pageIndex, Point pagePoint)
{
    var page = PDFViewer.LoadedDocument.Pages[pageIndex] as PdfLoadedPage;

    // Load the signing certificate (.pfx) and create a signature field on the page.
    var cert = new PdfCertificate(filePath + "PDF.pfx", "password123");
    var signature = new PdfSignature(PDFViewer.LoadedDocument, page, cert, "Signature");

    // Position the signature at the clicked point, sized to the signature image.
    var image = new PdfBitmap(filePath + "ESign.png");
    signature.Bounds = new RectangleF((float)pagePoint.X, (float)pagePoint.Y, image.PhysicalDimension.Width, image.PhysicalDimension.Height);
    signature.ContactInfo = "johndoe@owned.us";
    signature.LocationInfo = "Honolulu, Hawaii";
    signature.Reason = "I am the author of this document.";

    // Draw the combined signature image as the visible appearance.
    signature.Appearance.Normal.Graphics.DrawImage(image, 0, 0);

    // Save the signed document and reload it into the viewer.
    using var stream = new MemoryStream();
    PDFViewer.LoadedDocument.Save(stream);
    stream.Position = 0;
    PDFViewer.Load(stream);
}
Refer to the following image for visual clarity.

Users simply click the eSign button, then click the spot on the page where the signature should go. No pop‑ups. No exports. No external apps.
Also, refer to the example for adding a digital signature using Syncfusion WPF PDF Viewer and .NET PDF Library on GitHub.
Does the Syncfusion WPF PDF Viewer support adding digital signatures out of the box?
No. The Syncfusion WPF PDF Viewer currently supports viewing existing digital signatures only. It does not provide a built-in UI option to add or modify signatures, which is why this blog demonstrates a programmatic workaround using toolbar customization and the Syncfusion PDF library.

What kind of signature does this approach produce?
This implementation uses certificate-based digital signatures, which rely on a .pfx certificate containing a private key. These signatures ensure document integrity, signer identity verification, and tamper detection, making them suitable for legal and enterprise workflows.

How does the signature end up exactly where the user clicks?
The PDF Viewer converts the mouse click’s client coordinates into page-relative coordinates using built-in conversion APIs. This guarantees precise placement regardless of zoom level, DPI, or scrolling position.
Thanks for reading! With this approach, you can enable smooth, certificate‑based PDF signing directly inside your WPF app, no external tools, no broken workflows. By extending Syncfusion WPF PDF Viewer with custom UI actions and the Syncfusion .NET PDF library, you gain full control over signature placement, appearance, and security.
Whether you’re building enterprise document workflows or everyday desktop utilities, this pattern makes in‑app PDF signing both practical and reliable.
If you’re already a Syncfusion customer, you can download the setup from your license and downloads page. New users can start with a free 30-day trial.
You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
In this video, I delve into the blocking and deadlock monitoring capabilities of Free SQL Server Performance Monitoring, a tool I’ve developed and made available on GitHub. With a focus on practicality and ease of use, I explain how we leverage extended events for both blocking and deadlock scenarios, ensuring that you can identify and address performance issues efficiently. Whether you’re using system health or prefer to rely on the blocked process report, my monitoring tool streamlines the process by automatically setting up the necessary configurations and providing easy-to-understand visualizations of your SQL Server’s performance trends over time.
Erik Darling here with Darling Data, and in this video I want to go over how and what we collect in my free SQL Server performance monitoring tool, available on GitHub for free, way better than all the crap you pay for, around blocking and deadlocks, and how it can help you identify when you had terrible blocking and deadlocking problems.

For blocking and deadlocks, we use extended events for both. The blocked process report will send data to an extended event, as well as the deadlock stuff in there. Now for the deadlocks, if you don’t have a dedicated event session set up to capture deadlocks, the monitoring tool will set one up for you. But if you would prefer to use system health, then we will fall back to system health. There are also some charts that trend deadlocking and blocking activity over time. It’ll tell you all the normal stuff that you would expect to see from looking at a blocked process report or deadlock report. If you were using a free community script like mine, sp_HumanEventsBlockViewer, or sp_BlitzLock, it’s a very commensurate experience, because a lot of this stuff is powered by community tools. That is part of the open source ethos that I am hoping to instill, and also hoping to call broader attention to. There’s a lot of great stuff out there, but some people are afraid to run it or don’t know how to use it, and this sort of just encapsulates it and makes it a lot easier.

So blocked process reports go into the extended event. They are all XML, which is a damn shame, because XML is an awful pain in the butt to deal with. But I try to make that as easy as possible for you by doing all the shredding and picking apart that you would need done. I set that up with a five-second blocked process threshold via sp_configure where I can. The deadlock monitoring stuff just runs automatically.
You don’t really need to set a threshold there. If SQL Server hits a deadlock and it can log it, it will. That’s pretty much it. There are some platform differences. In AWS RDS, sp_configure is not directly available the way it is on many other SQL Server platforms; it is available via an RDS parameter group.
So if you are using this to monitor RDS, you will need to set things there. In Azure SQL DB, the threshold is fixed at 20 seconds. I don’t know what Microsoft’s scared of, but they set it pretty high. So the blocked process report will sweep through every 20 seconds and look for blocking, and if it happens to catch any, it’ll show up in there. With both of these, though, wherever the reports are collected, you can download them and do whatever you want with them. You can send them to someone, or put them in another tool that parses them out. If you want to check out this awesome free SQL Server monitoring tool, again, go to code.erikdarling.com. It is in the performance monitor repo.
And if you go to the releases section, you’ll find nightly releases and the latest stable build. Whichever one you’re feeling cool with testing out, you can grab it and start monitoring your SQL Server’s performance for free. So let’s look at these tabs, just so you get a sense of what you’re dealing with here.
So I set this back 30 days, because apparently I haven’t done anything interesting on my server in a little bit. But if you look in the locking trends tab, this will show you lock waits as they occurred. This will give you the count of blocking and deadlocking events.
And these two graphs down here will start to show you the durations of them. So where these durations get higher and higher, you had bigger and bigger problems. I’ll try to move out of the way of the deadlock one.
So you can see in here, this thing spikes up when we had a bunch of deadlocks. This current waits tab is sort of an interesting one. I’m working on how to get this across the best. Because if you look at this, you have some lock waits down here that are hitting about 45 seconds.
But that’s very low. And then over here, you have some absolutely tremendous lock waits, right? I mean, look at this, right?
That’s 974,000 milliseconds. And then this thing up here, LCK_M_IX for 2 million milliseconds. So some of this data is a little hard to get good perspective on. So I’m working on how best to visualize that.
I need to figure out maybe a little bit better way of doing that. But for now, if you see lines go up very high and you see numbers into the millions here, that’s probably not a good sign, right? It’s a challenge for anyone designing charts and graphs: how to deal with extreme outliers like 975,000 milliseconds and 2.6 million milliseconds.
They are challenging to deal with from a visualization perspective. But I will try to figure something out. Even if you look in here, where it’s like LCK_M_IS, 219,000 milliseconds, that’s a brave number of milliseconds.
But down in here, I break down blocked sessions by database, right? So, like, the primary ones in here that are... go away, Visual Studio. No one needs you.
So the primary ones in here are HammerDB. That’s in green, right? So HammerDB TPCC had a lot of blocking going on at some point, and the HammerDB TPCH database also had a bunch of blocked sessions in it. So I try to give you a better, more granular breakdown of which databases have the most blocking in them, so you can make decisions about which ones to troubleshoot.
Over here are the shredded blocked process and deadlock reports. So if we look under blocking here, it’ll be all the normal stuff that you would expect to see if you were to run sp_HumanEventsBlockViewer. You see the blocking chain in here.
It’s a very similar experience to running that tool, down to the query text, and if you want to download the blocked process report XML, you can do that very easily there. And then under deadlocks, you’ll see something very similar to what you would get back from sp_HumanEventsBlockViewer, because guess what?
I run sp_HumanEventsBlockViewer to do the parsing, right? Because it’s a lot easier than having to rewrite code that I wrote the first time again. So... oh, don’t run away from me.
Who do you think you are? Silly goose. Anyway, just a quick overview of the blocking and deadlocking stuff in my free SQL Server monitoring tool. If you want to get a hold of this and start monitoring your SQL Servers for free, again, that is all at code.erikdarling.com.
You can go in, start getting information, and start troubleshooting SQL Server. And if you like this project and appreciate it, and you would like to sponsor it, you can absolutely do that. Or if you start looking at all this monitoring data and you say to yourself, hey, I’m in way over my head, you can always call me, because I am still consulting, and I can still help you with your SQL Server performance problems.
So thank you for watching. Hope you enjoyed yourselves. Hope you learned something, and I will see you in tomorrow’s video where we will dig a little bit more into some of the foundations and fundamentals in my free SQL Server performance monitoring tool.
Alright, thank you for watching.
If this is the kind of SQL Server stuff you love learning about, you’ll love my training. Blog readers get 25% off the Everything Bundle — over 100 hours of performance tuning content. Need hands-on help? I offer consulting engagements from targeted investigations to ongoing retainers. Want a quick sanity check before committing to a full engagement? Schedule a call — no commitment required.
The post Free SQL Server Performance Monitoring Blocking and Deadlocking appeared first on Darling Data.
I mostly link to written material here, but I’ve recently listened to two excellent podcasts that I can recommend.
Anyone who regularly reads these fragments knows that I’m a big fan of Simon Willison; his (also very fragmentary) posts have earned a regular spot in my RSS reader. But the problem with fragments, however valuable, is that they don’t provide a cohesive overview of the situation. So his podcast with Lenny Rachitsky is a welcome survey of the state of the world as seen through a discerning pair of eyeballs. He paints a good picture of how programming has changed for him since the “November inflection point”, important patterns for this work, and his concern about the security bomb nestled inside the beast.
My other great listening was on a regular podcast that I listen to, as Gergely Orosz interviewed Thuan Pham, the former CTO of Uber. As with so many of Gergely’s podcasts, they focused on Thuan Pham’s fascinating career direction, giving listeners an opportunity to learn from a successful professional. There’s also an informative insight into Uber’s use of microservices (they had 5,000 of them), and the way high-growth software necessarily gets rewritten a lot (a phenomenon I dubbed Sacrificial Architecture).
❄ ❄ ❄ ❄ ❄
Axios published their post-mortem on their recent supply chain compromise. It’s quite a story: the attackers spent a couple of weeks developing contact with the lead maintainer, leading to a video call where the meeting software indicated something on the maintainer’s system was out of date. That led to the maintainer installing the “update”, which was in fact a Remote Access Trojan (RAT).
they tailored this process specifically to me by doing the following:
- they reached out masquerading as the founder of a company they had cloned the companys founders likeness as well as the company itself.
- they then invited me to a real slack workspace. this workspace was branded to the companies ci and named in a plausible manner. the slack was thought out very well, they had channels where they were sharing linked-in posts, the linked in posts i presume just went to the real companys account but it was super convincing etc. they even had what i presume were fake profiles of the team of the company but also number of other oss maintainers.
- they scheduled a meeting with me to connect. the meeting was on ms teams. the meeting had what seemed to be a group of people that were involved.
- the meeting said something on my system was out of date. i installed the missing item as i presumed it was something to do with teams, and this was the RAT.
- everything was extremely well co-ordinated looked legit and was done in a professional manner.
Simon Willison has a summary and further links.
❄ ❄ ❄ ❄ ❄
I recently bumped into Diátaxis, a framework for organizing technical documentation. I only looked at it briefly, but there’s much to like. In particular I appreciated how it classified four forms of documentation: tutorials, how-to guides, reference, and explanation.
The distinction between tutorials and how-to guides is interesting:
A tutorial serves the needs of the user who is at study. Its obligation is to provide a successful learning experience. A how-to guide serves the needs of the user who is at work. Its obligation is to help the user accomplish a task.
I also appreciated its point about pulling explanations out into separate areas. The idea is that the other forms should contain only minimal explanation, linking to the explanation material for more depth. That way we keep the focus on the goal and allow the user to seek deeper explanations in their own way. The study/work distinction between explanation and reference mirrors the same distinction between tutorials and how-to guides.
❄ ❄ ❄ ❄ ❄
For eight years, Lalit Maganti wanted a set of tools for working with SQLite. But it would be hard and tedious work, “getting into the weeds of SQLite source code, a fiendishly difficult codebase to understand”. So he didn’t try it. But after the November inflection point, he decided to tackle this need.
His account of this exercise is an excellent description of the benefits and perils of developing with AI agents.
Through most of January, I iterated, acting as semi-technical manager and delegating almost all the design and all the implementation to Claude. Functionally, I ended up in a reasonable place: a parser in C extracted from SQLite sources using a bunch of Python scripts, a formatter built on top, support for both the SQLite language and the PerfettoSQL extensions, all exposed in a web playground.
But when I reviewed the codebase in detail in late January, the downside was obvious: the codebase was complete spaghetti. I didn’t understand large parts of the Python source extraction pipeline, functions were scattered in random files without a clear shape, and a few files had grown to several thousand lines. It was extremely fragile; it solved the immediate problem but it was never going to cope with my larger vision, never mind integrating it into the Perfetto tools. The saving grace was that it had proved the approach was viable and generated more than 500 tests, many of which I felt I could reuse.
He threw it all away and worked more closely with the AI on the second attempt, with lots of thinking about the design, reviewing all the code, and refactoring at every step.
In the rewrite, refactoring became the core of my workflow. After every large batch of generated code, I’d step back and ask “is this ugly?” Sometimes AI could clean it up. Other times there was a large-scale abstraction that AI couldn’t see but I could; I’d give it the direction and let it execute. If you have taste, the cost of a wrong approach drops dramatically because you can restructure quickly.
He ended up with a working system, and the AI proved its value in allowing him to tackle something that he’d been leaving on the todo pile for years. But even with the rewrite, the AI had its potholes.
His conclusion of the relative value of AI in different scenarios:
When I was working on something I already understood deeply, AI was excellent…. When I was working on something I could describe but didn’t yet know, AI was good but required more care…. When I was working on something where I didn’t even know what I wanted, AI was somewhere between unhelpful and harmful…
At the heart of this is that AI works at its best when there is an objectively checkable answer. If we want an implementation that can pass some tests, then AI does a good job. But when it came to the public API:
I spent several days in early March doing nothing but API refactoring, manually fixing things any experienced engineer would have instinctively avoided but AI made a total mess of. There’s no test or objective metric for “is this API pleasant to use” and “will this API help users solve the problems they have” and that’s exactly why the coding agents did so badly at it.
❄ ❄ ❄ ❄ ❄
I became familiar with Ryan Avent’s writing when he wrote the Free Exchange column for The Economist. His recent post talks about how James Talarico and Zohran Mamdani have made their religion an important part of their electoral appeal, and their faith is centered on caring for others. He explains that a focus on care leads to an important perspective on economic growth.
The first thing to understand is that we should not want growth for its own sake. What is good about growth is that it expands our collective capacities: we come to know more and we are able to do more. This, in turn, allows us to alleviate suffering, to discover more things about the universe, and to spend more time being complete people.