Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Easily Create an Excel Pivot Table in Just 3 Steps Using C#


TL;DR: Syncfusion Excel Library is the perfect tool for all kinds of Excel creation, reading, editing, and viewing functionalities. Let’s learn how to create pivot tables in an Excel document using this robust library with C#.

A pivot table is a powerful Excel feature that lets users summarize and analyze large datasets quickly. Users can create dynamic pivot views by grouping only the required fields in the Excel data.

The Syncfusion Excel Library, also known as Essential XlsIO, facilitates the smooth creation, reading, and editing of Excel documents using C#. It supports creating Excel documents from scratch, modifying existing Excel documents, importing and exporting data, Excel formulas, conditional formatting, data validation, charts, sparklines, tables, pivot tables, pivot charts, template markers, and much more.

In this blog, we’ll explore the steps to create a pivot table using Syncfusion Excel Library in C#.

Enjoy a smooth experience with Syncfusion’s Excel Library! Get started with a few lines of code and without Microsoft or interop dependencies.

Creating a pivot table in Excel using C#

Follow these steps to create a pivot table using the Syncfusion Excel Library and C#:

Note: Please refer to the .NET Excel Library’s getting started documentation before proceeding.

  1. First, create a .NET Core console application in Visual Studio.
  2. Install the latest Syncfusion.XlsIO.Net.Core NuGet package in your app.
  3. Finally, add the following code to create a pivot table in a new worksheet (PivotSheet) in the existing Excel document.
    using System.IO;
    using Syncfusion.XlsIO;

    namespace PivotTable
    {
        class Program
        {
            public static void Main()
            {
                using (ExcelEngine excelEngine = new ExcelEngine())
                {
                    IApplication application = excelEngine.Excel;

                    //Open the existing Excel document.
                    FileStream fileStream = new FileStream("../../../Data/SalesReport.xlsx", FileMode.Open, FileAccess.Read);
                    IWorkbook workbook = application.Workbooks.Open(fileStream);
                    IWorksheet worksheet = workbook.Worksheets[0];
                    IWorksheet pivotSheet = workbook.Worksheets.Create("PivotSheet");

                    //Create a pivot cache with the given data range.
                    IPivotCache cache = workbook.PivotCaches.Add(worksheet["A1:H50"]);

                    //Create "PivotTable1" with the cache at the specified range.
                    IPivotTable pivotTable = pivotSheet.PivotTables.Add("PivotTable1", pivotSheet["A1"], cache);

                    //Add pivot table row fields.
                    pivotTable.Fields[3].Axis = PivotAxisTypes.Row;
                    pivotTable.Fields[4].Axis = PivotAxisTypes.Row;

                    //Add a pivot table column field.
                    pivotTable.Fields[2].Axis = PivotAxisTypes.Column;

                    //Add data fields.
                    IPivotField field = pivotTable.Fields[5];
                    pivotTable.DataFields.Add(field, "Units", PivotSubtotalTypes.Sum);

                    field = pivotTable.Fields[6];
                    pivotTable.DataFields.Add(field, "Unit Cost", PivotSubtotalTypes.Sum);

                    //Apply a built-in pivot table style.
                    pivotTable.BuiltInStyle = PivotBuiltInStyles.PivotStyleMedium14;

                    //Save the workbook as a stream.
                    string fileName = "PivotTable.xlsx";
                    FileStream stream = new FileStream(fileName, FileMode.Create, FileAccess.ReadWrite);
                    workbook.SaveAs(stream);
                    stream.Dispose();
                    fileStream.Dispose();
                }
            }
        }
    }

Refer to the following images:

Input Excel document
Creating a pivot table in an Excel document using Syncfusion .NET Excel Library and C#

References

For more details, refer to creating pivot tables in Excel using C# documentation and GitHub demo.

Witness the possibilities in demos showcasing the robust features of Syncfusion’s C# Excel Library.

Conclusion

Thanks for reading! This blog explored creating a pivot table in an Excel document using C# and the Syncfusion Excel Library (XlsIO). The Excel Library also allows you to export Excel data to images, data tables, CSV, TSV, HTML, collections of objects, ODS, JSON, and other file formats.

Take a moment to peruse the import data documentation, where you’ll discover additional importing options and features such as data tables, collection objects, grid view, data columns, and HTML, all accompanied by code samples.

Feel free to try out these features and share your feedback in the comments section of this blog post!

For existing customers, the new version of Essential Studio is available for download from the License and Downloads page. If you are not a Syncfusion customer, try our 30-day free trial to check out our available features.

For questions, you can contact us through our support forum, support portal, or feedback portal. We are always happy to assist you!

Don't settle for ordinary spreadsheet solutions. Switch to Syncfusion and upgrade the way you handle Excel files in your apps!


Read the whole story
alvinashcraft
just a second ago
reply
West Grove, PA
Share this story
Delete

Using GitHub Copilot as your Coding GPS


Transform Your Coding Workflow with GitHub Copilot in Visual Studio

GitHub Copilot is a game-changing AI-powered assistant that can revolutionize your coding workflow in Visual Studio. In our video series, Bruno Capuano explores how this intelligent coding companion can help you write code more efficiently while maintaining quality and accuracy.

Copilot: An Assistant, Not a Replacement

Bruno highlights that GitHub Copilot is designed to support your coding journey in Visual Studio, not replace developers. Microsoft’s philosophy is centered on AI working in harmony with human efforts, maintaining a balance that respects human dignity. As CEO Satya Nadella emphasizes, AI should enhance productivity without displacing people.

This is why developers should always validate the code generated by GitHub Copilot, as AI-based systems can sometimes suggest code that doesn’t align with your requirements or even produce errors, known as “hallucinations.” Even though GitHub Copilot is generally accurate, it’s critical to review its suggestions to ensure correctness.

To get started, ensure GitHub Copilot is installed in your development environment. For more information on setup, refer to the GitHub Copilot documentation, or learn how to install GitHub Copilot Chat for Visual Studio.

Leveraging LLMs for a New Interaction Paradigm

Large Language Models (LLMs), the technology behind GitHub Copilot, offer a new paradigm for interacting with computers. These models rely on complex probabilities and extensive training data to generate responses based on natural-language prompts, allowing for a more conversational style of coding. This interaction model is not limited to text—it can also involve other media types like images and videos.

[Image: an overview of AI capabilities, including computer vision, voice recognition, image recognition, and more.]

However, given the variability of LLMs, the same query might yield different results, emphasizing the need for developer oversight.

Embracing the Future with GitHub Copilot

As AI becomes increasingly integrated into various industries, developers need to adapt. Tools like GitHub Copilot can give you a competitive edge by improving efficiency and adaptability. To stay ahead in the ever-changing tech landscape, it’s crucial to familiarize yourself with AI tools and understand their strengths and limitations. To learn more about GitHub Copilot and how to use it, check out our collection of resources and our full-length video tutorials for in-depth guidance on making the most of GitHub Copilot in your development projects!

Additional Resources:

The post Using GitHub Copilot as your Coding GPS appeared first on Visual Studio Blog.


Who Made That Change? Low Rent User Auditing Using Temporal Tables


I Don’t Find This Stuff Fun


ED: I moved up this post’s publication date after Mr. O posted this question. So, Dear Brent, if you’re reading this, you can consider it my humble submission as an answer.

It’s really not up my alley. I love performance tuning SQL Server, but occasionally things like this come up.

Sort of recently, a client really wanted a way to figure out if support staff was manipulating data in a way that they shouldn’t have. Straight away: this method will not track if someone is inserting data, but inserting data wasn’t the problem. Data changing or disappearing was.

The upside of this solution is that not only will it detect who made the change, but also what data was updated and deleted.

It’s sort of like auditing and change data capture or change tracking rolled into one, but without all the pesky stuff that comes along with auditing, change tracking, or change data capture (though change data capture is probably the least guilty of all the parties).

Okay, so here are the steps to follow. I’m creating a table from scratch, but you can add all of these columns to an existing table to get things working too.

Robby Tables


First, we create a history table. We need to do this first because there will be computed columns in the user-facing tables.

/*
Create a history table first
*/
CREATE TABLE
    dbo.things_history
(
    thing_id int NOT NULL,
    first_thing nvarchar(100) NOT NULL,
    original_modifier sysname NOT NULL, 
        /*original_modifier is a computed column below, but not computed here*/
    current_modifier sysname NOT NULL, 
        /*current_modifier is a computed column below, but not computed here*/
    valid_from datetime2 NOT NULL,
    valid_to datetime2 NOT NULL,
    INDEX c_things_history CLUSTERED COLUMNSTORE
);

I’m choosing to store the temporal data in a clustered columnstore index to keep it well-compressed and quick to query.

Next, we’ll create the user-facing table. Again, you’ll probably be altering an existing table to add the computed columns and system versioning columns needed to make this work.

/*Create the base table for the history table*/
CREATE TABLE
    dbo.things
(
  thing_id int
      CONSTRAINT pk_thing_id PRIMARY KEY,
  first_thing nvarchar(100) NOT NULL,
  original_modifier AS /*a computed column, computed*/
      ISNULL
      (
          CONVERT
          (
              sysname,
              ORIGINAL_LOGIN()
          ),
          N'?'
      ),
  current_modifier AS /*a computed column, computed*/
      ISNULL
      (
          CONVERT
          (
              sysname,
              SUSER_SNAME()
          ),
          N'?'
      ),
  valid_from datetime2
      GENERATED ALWAYS AS
      ROW START HIDDEN NOT NULL,
  valid_to datetime2
      GENERATED ALWAYS AS
      ROW END HIDDEN NOT NULL,
  PERIOD FOR SYSTEM_TIME
  (
      valid_from,
      valid_to
  )
)
WITH
(
    SYSTEM_VERSIONING = ON  
    (
        HISTORY_TABLE = dbo.things_history,
        HISTORY_RETENTION_PERIOD = 7 DAYS
    )
);

A couple things to note: I’m adding the two computed columns as non-persisted, and I’m adding the system versioning columns as HIDDEN, so they don’t show up in user queries.

The WITH options at the end specify which table we want to use as the history table, and how long we want to keep data around for. You may adjust as necessary.

I’m tracking both the ORIGINAL_LOGIN() and the SUSER_SNAME() details in case anyone tries to change logins after connecting to cover their tracks.
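To see the difference between the two functions, here's a minimal sketch (the ostress login is the same hypothetical login used in the demos below):

/*Connected as sa, then impersonating another login.*/
EXECUTE AS LOGIN = N'ostress';

SELECT
    original_modifier =
        ORIGINAL_LOGIN(), /*still reports sa*/
    current_modifier =
        SUSER_SNAME();    /*reports ostress*/

REVERT;

ORIGINAL_LOGIN() keeps reporting the login that actually made the connection, which is what makes the original_modifier column trustworthy even after an impersonation switch.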

Inserts Are Useless


Let’s stick a few rows in there to see how things look!

INSERT
    dbo.things
(
    thing_id,
    first_thing
)
VALUES
    (100, N'one'),
    (200, N'two'),
    (300, N'three'),
    (400, N'four');

Okay, like I said, inserts aren’t tracked in the history table, but they are tracked in the main table.

If I do this:

EXECUTE AS LOGIN = N'ostress';
INSERT
    dbo.things
(
    thing_id,
    first_thing
)
VALUES
    (500, N'five'),
    (600, N'six'),
    (700, N'seven'),
    (800, N'eight');

And then run this query:

SELECT
    table_name =
        'dbo.things',
    t.thing_id,
    t.first_thing,
    t.original_modifier,
    t.current_modifier,
    t.valid_from,
    t.valid_to
FROM dbo.things AS t;

The results won’t make a lot of sense. Switching back and forth between the sa and ostress users, the original_modifier column will always say sa, and the current_modifier column will always show whichever login I’m currently using.

You can’t persist either of these columns, because the functions are non-deterministic. In this way, SQL Server is protecting you from yourself. Imagine maintaining those every time you run a different query. What a nightmare.

The bottom line here is that you get no useful information about inserts, nor do you get any useful information just by querying the user-facing table.

Updates And Deletes Are Useful


Keeping my current login as ostress, let’s run these queries:

UPDATE 
    t
SET 
    t.first_thing =
        t.first_thing +
        SPACE(1) +
        t.first_thing
FROM dbo.things AS t
WHERE t.thing_id = 100;

UPDATE 
    t
SET 
    t.first_thing =
        t.first_thing +
        SPACE(3) +
        t.first_thing
FROM dbo.things AS t
WHERE t.thing_id = 200;

DELETE
    t
FROM dbo.things AS t
WHERE t.thing_id = 300;

DELETE
    t
FROM dbo.things AS t
WHERE t.thing_id = 400;

Now, along with looking at the user-facing table, let’s look at the history table as well.

To show that the history table maintains the correct original and current modifier logins, I’m going to switch back to executing this as sa.
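The history-table query itself isn't shown in the post, but a sketch along the lines of the earlier SELECT would look like this:

SELECT
    table_name =
        'dbo.things_history',
    th.thing_id,
    th.first_thing,
    th.original_modifier,
    th.current_modifier,
    th.valid_from,
    th.valid_to
FROM dbo.things_history AS th;

Note that the history table's copies of the _modifier columns are plain sysname columns, so they hold the values as they were captured at modification time rather than being recomputed on read.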

[Image: SQL Server query results]
peekaboo i see you!

Alright, so here’s what we have now!

In the user-facing table, we see the six remaining rows (we deleted 300 and 400 up above), with the values in first_thing updated a bit.

Remember that the _modifier columns are totally useless here because they’re calculated on the fly every time you query the table.

We also have the history table with some data in it finally, which shows the four rows that were modified as they existed before, along with the user as they logged in, and the user as the queries were executed.

This is what I would brand “fairly nifty”.

FAQ


Q. Will this work with my very specific login scenario?

A. I don’t know.

 

Q. Will this work with my very specific set of permissions?

A. I don’t know.

 

Q. But what about…

A. I don’t know.

I rolled this out for a fairly simple SQL Server on-prem setup with very little insanity as far as login schemes, permissions, etc.

You may find edge cases where this doesn’t work, or it may not even work for you from the outset because it doesn’t track inserts.

With sufficient testing and moxie (the intrinsic spiritual spark, not the sodie pop), you may be able to get it to work under your spate of local factors that break the peace of my idyllic demo.

Thanks for reading!

Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.


Amazing stories I wish everyone knew


Meet some of the heroes who are fighting poverty and saving lives.


You asked: We don't sell saddles here


From John O'Nolan (CEO of Ghost):

How did Stewart's infamous "we don't sell saddles here" essay go down internally, at the time? And how do you feel that essay aged, with hindsight?

Stewart shared We Don't Sell Saddles Here internally in July 2013, just before we launched our Preview Release. You can read about what was going on at the company at that time in our recent post: Good enough to be tried by the general public. He later published it on the web around the time of our public launch in February 2014.

My memory of the internal reaction is straightforward: it raised our eyeline and expanded the scope of our efforts. We were deep in the details of making everything work and preparing for our first public release. Stewart, as ever, was thinking several steps ahead. He needed to convey to us the opportunity and challenge we had ahead of us:

  • Convince a large group of people who have no idea what Slack is or why they need it to sign up and pay for it
  • Refine the quality of the product to the point where all the rough edges were eliminated and the customer got an experience "as smooth as lacquered mahogany"

In these ways it was providing directive, tactical guidance about how we should be spending our time, and what the next phase of our efforts would look like. It was time to step up, and we all knew it.

It was also setting the strategic plan for how we would win: by defining and owning a new and potentially massive market. We would do this by telling a story about a better way of working, and helping customers see themselves in that story. Then we would make sure that they actually got that experience when they made the effort to sign up.

Later in the company's history, Stewart would often say that leaders let people know what's important. This was an early example of him doing that. Everyone at the company at the time was already fully committed, but this framing served to expand and elevate our efforts. This was it! We had built something people genuinely wanted. This was our game to win. How are we going to do it?

We do it really, really fucking good.

Slack's relative success in 2024 can still be attributed to the extent to which the company achieves the twin aims of conveying the potential power of the tool – organizational transformation – and delivering on that promise with high quality software that is polished, coherent and performs well. The shortcomings of the product and business can be attributed to failure to meet those requirements.

As a sort of "founding document" of the company, the memo distils a lot of Stewart's perspective on business and software into a succinct thesis. His primary emphasis on the quality of the software and service as experienced by end users remained evergreen throughout his leadership. His intuitions about the readiness of people to adopt new ways of working guided many of our product iterations over the years.

Be harsh, in the interest of being excellent.

With this essay, Stewart urged us to be as critical of our own software as we all are of the software made by other companies. He encouraged us not to get complacent about the flaws in our product or the gaps that prevented others from understanding it. This mindset was also consistent throughout his tenure. A restless, productive dissatisfaction with our efforts that inevitably forced us all to do better and make the best thing we knew how to make.

Ensuring that the pieces all come together is not someone else’s job. It is your job, no matter what your title is and no matter what role you play.

He encouraged us to take personal responsibility not just for the tasks we were assigned, but for our shared mission in the biggest sense. This was operationalized in the early days. We would say "Somebody doesn't work here." As in, "Somebody should fix the typing lag in the search input" or "Somebody should follow up with the teams that churned last week." Nope. It's our shared responsibility, and you need to do it yourself or chase things down to ensure it's going to get done.

Finally, as a statement of leadership and an answer to the question, "Why does this matter?" the essay has stood the test of time. We needed to play to win, to go big, and to play the game to the full extent of our abilities. As Stewart put it, "why the fuck else would you even want to be alive but to do things as well as you can?" Coming from him, who worked harder than anyone else, this was inspiring.


Thanks for the question John, and thank you for making Ghost. We're really enjoying using it.


SQL turns 50 this month -- why is it still going strong? [Q&A]

Data management language SQL (originally published as SEQUEL) first appeared in May 1974, so this month marks its 50th anniversary. We spoke to Peter Zaitsev, founder at Percona, to find out why SQL has survived for the last 50 years and is still the third most used language for programmers and software developers, according to Stack Overflow.

BN: Why was SQL successful at the time, and why is it still used today?

PZ: I think SQL was successful because it gave us a data language -- a set of tools that could be used to work with and manipulate data…