Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Protect Your MCP Tools With Auth0 FGA in TypeScript

Learn how to secure your Model Context Protocol (MCP) tools using Auth0 FGA and TypeScript. Implement relationship-based access control for AI applications.

Read the whole story
alvinashcraft
11 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Yet Another Way to Center an (Absolute) Element


TL;DR: We can center absolute-positioned elements in three lines of CSS. And it works on all browsers!

.element {
  position: absolute;
  place-self: center; 
  inset: 0;
}

Why? Well, that needs a longer answer.

In recent years, CSS has brought a lot of new features that don’t necessarily let us do new things, but certainly make existing tasks easier and simpler. For example, we don’t have to hardcode indexes anymore:

<ul style="--t: 8">
  <li style="--i: 1"></li>
  <li style="--i: 2"></li>
  <!--  ...  -->
  <li style="--i: 8"></li>
</ul>

Instead, all this is condensed into the sibling-index() and sibling-count() functions. There are lots of recent examples like this.

Still, there is one little task where it feels like we’ve been doing the same thing for decades: centering an absolutely positioned element, which we usually achieve like this:

.element {
  position: absolute;
  top: 50%;
  left: 50%;
  
  translate: -50% -50%;
}

We move the element’s top-left corner to the center, then translate the element back by half of its own width and height so it’s centered.

There is nothing wrong with this approach — we’ve been doing it for decades. But it still feels like the old way. Is it the only way? Well, there is another, not-so-well-known cross-browser way to not only center, but also easily place, any absolutely-positioned element. And best of all, it reuses the familiar align-self and justify-self properties.

Turns out that these properties (along with their place-self shorthand) now work on absolutely-positioned elements. However, if we try to use them as is, we’ll notice our element doesn’t even flinch.

/* Doesn't work!! */
.element {
  position: absolute;
  place-self: center; 
}

So, how do align-self and justify-self work for absolute elements? It may be obvious to say they should align the element, and that’s true, but specifically, they align it within its Inset-Modified Containing Block (IMCB). Okay… But what’s the IMCB?

Imagine we set our absolute element’s width and height to 100%. Even if the element’s position is absolute, it certainly doesn’t grow infinitely; rather, it’s enclosed by what’s known as the containing block.

The containing block is the closest positioned ancestor (one whose position is anything other than static). If there is no such ancestor, the element is positioned relative to the initial containing block, which is the size of the viewport.

We can modify that containing block using inset properties (specifically top, right, bottom, and left). I used to think that inset properties fixed the element’s corners (I even said it a couple of seconds ago), but under the hood, we are actually fixing the IMCB borders.

Diagram showing the CSS for an absolutely-positioned element with inset properties and how those values map to the element.

By default, the IMCB is the same size as the element itself. That’s why align-self and justify-self were trying to center the element within itself, resulting in no movement. So our last step is to expand the IMCB until it matches the containing block.

.element {
  position: absolute;
  place-self: center; 
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}

Or, using their inset shorthand:

.element {
  position: absolute;
  place-self: center; 
  inset: 0;
}

Only three lines! A win for CSS nerds. Admittedly, I might be cheating since, in the old way, we could also use the inset property and reduce it to three lines, but… let’s ignore that fact for now.

We aren’t limited to just centering elements, since all the other align-self and justify-self positions work just fine. This offers a more idiomatic way to position absolute elements.

Pro tip: If we want to leave a space between the absolutely-positioned element and its containing block, we could either add a margin to the element or set the container’s inset to the desired spacing.

What’s best, I checked Caniuse, and while initially Safari didn’t seem to support it, upon testing, it seems to work on all browsers!


Yet Another Way to Center an (Absolute) Element originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


The ultimate dev skill is Integration Testing – Podcast interview with Internet of Bugs [Podcast #209]


Today Quincy Larson interviews Carl Brown, who runs the Internet of Bugs YouTube channel and has worked as a dev at Amazon, IBM, Sun Microsystems, and startups for over 37 years.

We talk about:

  • The hype versus the utility in LLMs and agent code generation tools

  • Why you might want to target developer jobs at smaller companies, and how these differ from "big tech"

  • How everyone will face ageism eventually. Carl argues that a consulting career is a great escape hatch.

Watch the podcast on the freeCodeCamp.org YouTube channel or listen on your favorite podcast app.

Links from our discussion:

Ted Chiang "ChatGPT Is a Blurry JPEG of the Web" article Carl mentions: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

The Karpathy on MoltBook saga:

  • Karpathy hyping up MoltBook (noon, Jan 30): https://x.com/elonmusk/status/2017370646767145419

  • Karpathy doubles down after "being accused of overhyping" MoltBook (9:39 PM, Jan 30): https://x.com/karpathy/status/2017442712388309406

  • Tweet showing Karpathy's (redacted) private information from a MoltBook security breach (4:53 PM, Jan 31): https://x.com/theonejvo/status/2017732898632437932

  • Fortune quotes Karpathy saying MoltBook is "a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers" (Feb 2): https://fortune.com/2026/02/02/moltbook-security-agents-singularity-disaster-gary-marcus-andrej-karpathy/

Quote from Cory Doctorow about code failing well: https://pluralistic.net/2026/01/06/1000x-liability/

  • Excerpt from Cory's Mastodon post with that quote: https://mamot.fr/@pluralistic/115848576290992814

  • Carl's Mastodon reply telling Cory he's going to use that quote (which Cory boosted): https://mastodon.social/@carlbrown/115867074293449215

Article on Claude 4.6 being good at finding bugs with fuzzing: https://red.anthropic.com/2026/zero-days/

Reference to it from Computer Security Guru Bruce Schneier: https://www.schneier.com/blog/archives/2026/02/llms-are-getting-a-lot-better-and-faster-at-finding-and-exploiting-zero-days.html

Older paper on LLMs being good at fuzzing, prior to this new claim about Claude 4.6: https://arxiv.org/html/2508.01750v1

Falsehoods programmers believe about names from Patio11: https://img.sauf.ca/pictures/2025-10-23/61fb6db44e7173cd9318753c955f7dda.pdf

Same kind of article, but this one is about time instead of names (Carl noted he was wrong: Patrick/Patio11 didn't write this one, but it's worth passing along): https://infiniteundo.com/post/25509354022/more-falsehoods-programmers-believe-about-time

Article with discussion of ageism in tech with the Zuckerberg quote Carl was thinking of: https://www.forbes.com/sites/stevenkotler/2015/02/14/is-silicon-valley-ageist-or-just-smart/

Book on (interpersonal) networking that Carl recommends: https://www.penguinrandomhouse.com/books/227558/never-eat-alone-expanded-and-updated-by-keith-ferrazzi-and-tahl-raz/

And another one: https://www.penguinrandomhouse.com/books/105512/dig-your-well-before-youre-thirsty-by-harvey-mackay/

Carl's video on how AdTech is fracturing Society: https://www.youtube.com/watch?v=FmYXyWbis9w

Carl's Website: https://internetofbugs.com/

Community news section:

  1. freeCodeCamp just published a comprehensive course that will teach you the fundamental concepts, protocols, and architectures of computer networking. You'll learn key network engineering topics like topology, subnetting, flow control, routing, IPv4 addressing, DNS, and more. (12 hour YouTube course): https://www.freecodecamp.org/news/computer-networking-fundamentals/

  2. And we just published our second-ever chess course. This time you'll learn the Italian Game, one of the most common chess openings. This handbook and accompanying video course are taught by freeCodeCamp engineer Ihechikara Abba, who has a chess Elo rating of 2285. He will lay out the many traps that white can set for black, and how to not fall for them. (full-length handbook and 1 hour YouTube course): https://www.freecodecamp.org/news/the-chess-italian-game-handbook-traps-for-white/

  3. freeCodeCamp also published a full-length book on Product-Led Research. This is a must-read for any manager within a tech company. It's written by a CTO and security researcher named Omer Rosenbaum, who says: “if you manage Research like it's Development, things aren't going to go well for you.” He breaks down the most common research frameworks and methodologies, and contextualizes them through a series of case studies. (full-length book): https://www.freecodecamp.org/news/product-led-research-a-practical-guide-for-randd-leaders-full-book/

  4. If you're a Python developer and use the Django web development framework, this tutorial will help you optimize the heck out of your APIs. Mari will teach you how to use profiling and logging to find bottlenecks in your codebase. Then she'll show you how to get extra performance through caching, so you can serve users at scale. (20 minute read): https://www.freecodecamp.org/news/how-to-optimize-django-rest-apis-for-performance/

  5. Today's song of the week is the 1984 synth-jazz classic "No One Emotion" by George Benson. I love the driving synth bass, the vocal harmonies, and the excellent guitar solo by Michael Sembello – the guy behind "Maniac." If you're looking for a pick-me-up, jam this song any day of the week. https://www.youtube.com/watch?v=Q-MyvbolxG0




Allocating on the Stack


The Go Blog

Allocating on the Stack

Keith Randall
27 February 2026

We’re always looking for ways to make Go programs faster. In the last 2 releases, we have concentrated on mitigating a particular source of slowness, heap allocations. Each time a Go program allocates memory from the heap, there’s a fairly large chunk of code that needs to run to satisfy that allocation. In addition, heap allocations present additional load on the garbage collector. Even with recent enhancements like Green Tea, the garbage collector still incurs substantial overhead.

So we’ve been working on ways to do more allocations on the stack instead of the heap. Stack allocations are considerably cheaper to perform (sometimes completely free). Moreover, they present no load to the garbage collector, as stack allocations can be collected automatically together with the stack frame itself. Stack allocations also enable prompt reuse, which is very cache friendly.

Stack allocation of constant-sized slices

Consider the task of building a slice of tasks to process:

func process(c chan task) {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

Let’s walk through what happens at runtime when pulling tasks from the channel c and adding them to the slice tasks.

On the first loop iteration, there is no backing store for tasks, so append has to allocate one. Because it doesn’t know how big the slice will eventually be, it can’t be too aggressive. Currently, it allocates a backing store of size 1.

On the second loop iteration, the backing store now exists, but it is full. append again has to allocate a new backing store, this time of size 2. The old backing store of size 1 is now garbage.

On the third loop iteration, the backing store of size 2 is full. append again has to allocate a new backing store, this time of size 4. The old backing store of size 2 is now garbage.

On the fourth loop iteration, the backing store of size 4 has only 3 items in it. append can just place the item in the existing backing store and bump up the slice length. Yay! No call to the allocator for this iteration.

On the fifth loop iteration, the backing store of size 4 is full, and append again has to allocate a new backing store, this time of size 8.

And so on. We generally double the size of the allocation each time it fills up, so we can eventually append most new tasks to the slice without allocation. But there is a fair amount of overhead in the “startup” phase when the slice is small. During this startup phase we spend a lot of time in the allocator, and produce a bunch of garbage, which seems pretty wasteful. And it may be that in your program, the slice never really gets large. This startup phase may be all you ever encounter.
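This growth pattern is easy to observe directly. Here is a minimal, self-contained sketch that prints the capacity each time append reallocates; the exact capacities are an implementation detail and can vary between Go versions and element types:

```go
package main

import "fmt"

// capGrowth appends n ints one at a time and records the capacity
// each time append reallocates the backing store.
func capGrowth(n int) []int {
	var s []int
	var caps []int
	prev := -1
	for i := 0; i < n; i++ {
		s = append(s, i)
		if cap(s) != prev {
			caps = append(caps, cap(s))
			prev = cap(s)
		}
	}
	return caps
}

func main() {
	// For small slices, each reallocation roughly doubles the capacity.
	fmt.Println(capGrowth(10))
}
```

Every capacity printed except the last corresponds to a backing store that became garbage, which is exactly the startup waste described above.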

If this code was a really hot part of your program, you might be tempted to start the slice out at a larger size, to avoid all of these allocations.

func process2(c chan task) {
    tasks := make([]task, 0, 10) // probably at most 10 tasks
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

This is a reasonable optimization to do. It is never incorrect; your program still runs correctly. If the guess is too small, you get allocations from append as before. If the guess is too large, you waste some memory.

If your guess for the number of tasks was a good one, then there’s only one allocation site in this program. The make call allocates a slice backing store of the correct size, and append never has to do any reallocation.

The surprising thing is that if you benchmark this code with 10 elements in the channel, you’ll see that you didn’t reduce the number of allocations to 1, you reduced the number of allocations to 0!

The reason is that the compiler decided to allocate the backing store on the stack. Because it knows what size it needs to be (10 times the size of a task) it can allocate storage for it in the stack frame of process2 instead of on the heap.[1] Note that this depends on the fact that the backing store does not escape to the heap inside of processAll.
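One way to check this yourself is testing.AllocsPerRun from the standard library, which reports the average number of heap allocations per call. A hedged sketch (the channel created inside the closure is itself heap-allocated, so the count here won't be zero, and the exact number depends on your Go version's escape analysis):

```go
package main

import (
	"fmt"
	"testing"
)

type task struct{ id int }

// processAll stands in for the article's task-processing step.
func processAll(tasks []task) int { return len(tasks) }

// process2 uses a constant-capacity make, which lets the compiler
// place the backing store in process2's stack frame when it does
// not escape.
func process2(c chan task) int {
	tasks := make([]task, 0, 10)
	for t := range c {
		tasks = append(tasks, t)
	}
	return processAll(tasks)
}

func main() {
	// AllocsPerRun averages heap allocations over many runs. The
	// make(chan ...) inside the closure accounts for at least one
	// allocation per run; the slice backing store ideally adds none.
	allocs := testing.AllocsPerRun(100, func() {
		c := make(chan task, 10)
		for i := 0; i < 10; i++ {
			c <- task{i}
		}
		close(c)
		process2(c)
	})
	fmt.Println("average heap allocations per run:", allocs)
}
```

Comparing this number against a variant of process2 that uses a variable-sized make is a quick way to see whether your Go version managed to keep the backing store off the heap.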

Stack allocation of variable-sized slices

But of course, hard coding a size guess is a bit rigid. Maybe we can pass in an estimated length?

func process3(c chan task, lengthGuess int) {
    tasks := make([]task, 0, lengthGuess)
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

This lets the caller pick a good size for the tasks slice, which may vary depending on where this code is being called from.

Unfortunately, in Go 1.24 the non-constant size of the backing store means the compiler can no longer allocate the backing store on the stack. It will end up on the heap, converting our 0-allocation code to 1-allocation code. Still better than having append do all the intermediate allocations, but unfortunate.

But never fear, Go 1.25 is here!

Imagine you decide to do the following, to get the stack allocation only in cases where the guess is small:

func process4(c chan task, lengthGuess int) {
    var tasks []task
    if lengthGuess <= 10 {
        tasks = make([]task, 0, 10)
    } else {
        tasks = make([]task, 0, lengthGuess)
    }
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

Kind of ugly, but it would work. When the guess is small, you use a constant size make and thus a stack-allocated backing store, and when the guess is larger you use a variable size make and allocate the backing store from the heap.

But in Go 1.25, you don’t need to head down this ugly road. The Go 1.25 compiler does this transformation for you! For certain slice allocation locations, the compiler automatically allocates a small (currently 32-byte) slice backing store, and uses that backing store for the result of the make if the size requested is small enough. Otherwise, it uses a heap allocation as normal.

In Go 1.25, process3 performs zero heap allocations if lengthGuess is small enough that a slice of that length fits into 32 bytes (and, of course, if lengthGuess correctly predicts how many items are in c).

We’re always improving the performance of Go, so upgrade to the latest Go release and be surprised by how much faster and more memory-efficient your program becomes!

Stack allocation of append-allocated slices

Ok, but you still don’t want to have to change your API to add this weird length guess. Anything else you could do?

Upgrade to Go 1.26!

func process(c chan task) {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    processAll(tasks)
}

In Go 1.26, we allocate the same kind of small, speculative backing store on the stack, but now we can use it directly at the append site.

On the first loop iteration, there is no backing store for tasks, so append uses a small, stack-allocated backing store as the first allocation. If, for instance, we can fit 4 tasks in that backing store, the first append allocates a backing store of length 4 from the stack.

The next 3 loop iterations append directly to the stack backing store, requiring no allocation.

On the 4th iteration, the stack backing store is finally full and we have to go to the heap for more backing store. But we have avoided almost all of the startup overhead described earlier in this article: no heap allocations of size 1, 2, and 4, and none of the garbage that they eventually become. If your slices are small, maybe you will never have a heap allocation.

Stack allocation of append-allocated escaping slices

Ok, this is all good when the tasks slice doesn’t escape. But what if I’m returning the slice? Then it can’t be allocated on the stack, right?

Right! The backing store for the slice returned by extract below can’t be allocated on the stack, because the stack frame for extract disappears when extract returns.

func extract(c chan task) []task {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    return tasks
}

You might think: okay, the returned slice can’t be allocated on the stack, but what about all those intermediate slices that just become garbage? Maybe we can allocate those on the stack?

func extract2(c chan task) []task {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    tasks2 := make([]task, len(tasks))
    copy(tasks2, tasks)
    return tasks2
}

Now the tasks slice never escapes extract2, so it can benefit from all of the optimizations described above. At the very end of extract2, when we know the final size of the slice, we do one heap allocation of the required size, copy our tasks into it, and return the copy.

But do you really want to write all that additional code? It seems error prone. Maybe the compiler can do this transformation for us?

In Go 1.26, it can!

For escaping slices, the compiler will transform the original extract code to something like this:

func extract3(c chan task) []task {
    var tasks []task
    for t := range c {
        tasks = append(tasks, t)
    }
    tasks = runtime.move2heap(tasks)
    return tasks
}

runtime.move2heap is a special compiler+runtime function that is the identity function for slices that are already allocated in the heap. For slices that are on the stack, it allocates a new slice on the heap, copies the stack-allocated slice to the heap copy, and returns the heap copy.

This ensures that for our original extract code, if the number of items fits in our small stack-allocated buffer, we perform exactly 1 allocation of exactly the right size. If the number of items exceeds the capacity of our small stack-allocated buffer, we fall back to the normal doubling allocation once the stack-allocated buffer overflows.

The optimization that Go 1.26 does is actually better than the hand-optimized code, because it does not require the extra allocation+copy that the hand-optimized code always does at the end. It requires the allocation+copy only in the case that we’ve exclusively operated on a stack-backed slice up to the return point.

We do pay the cost of a copy, but that cost is almost completely offset by the copies in the startup phase that we no longer have to do. (In fact, the new scheme at worst has to copy one more element than the old scheme.)

Wrapping up

Hand optimization can still be beneficial, especially if you have a good estimate of the slice size ahead of time. But hopefully the compiler will now catch a lot of the simple cases for you and allow you to focus on the remaining ones that really matter.

There are a lot of details that the compiler needs to ensure to get all these optimizations right. If you think that one of these optimizations is causing correctness or (negative) performance issues for you, you can turn them off with -gcflags=all=-d=variablemakehash=n. If turning these optimizations off helps, please file an issue so we can investigate.

Footnotes

[1] Go stacks do not have any alloca-style mechanism for dynamically-sized stack frames. All Go stack frames are constant sized.



Inside Platform Engineering with Matt Gowie


Infrastructure as code sounds simple until it isn’t. Matt Gowie, founder of IaC consulting firm MasterPoint, joined me on Inside Platform Engineering to share what he’s learned helping organizations build sustainable, scalable platforms with Terraform and Open Tofu, including some costly mistakes he sees teams make time and time again.

One of my favorite topics was talking with Matt about whether to do it yourself and “reinvent the wheel”, or lean on open-source and community modules for building your IaC. I can see value in both approaches and have mostly been on the side of using open source modules where possible. Your infrastructure probably isn’t that different from someone else’s. With that said, there are times when you need to consider the risks of using these modules and whether their support and community are responsive enough to meet your demands when critical changes are needed.

While the discussion focused on Matt’s specialty around IaC, I found all of the discussion points can be re-applied broadly across Platform Engineering.

Watch the episode

You can watch the episode with Matt below.

Inside Platform Engineering with Matt Gowie

Pick one tool and go deep

One of the first traps Matt sees platform teams fall into is spreading their expertise across too many IaC tools. Whether it’s Terraform, Bicep, CloudFormation, or Pulumi, the instinct to keep options open actually slows teams down and breeds inconsistency. His advice is to consolidate, but do so mindfully. Vendor-specific tools like Bicep and CloudFormation lock you into a single cloud, and the moment you need to automate something outside that ecosystem - be it a DNS provider, a monitoring tool, a SaaS platform — you’re hacking around the edges and accumulating technical debt. Pick the tool that gives you the most reach, build expertise around it, and create practices that scale.

Stop reinventing the wheel

If your platform team is writing every Terraform resource by hand, you’re burning time your competitors aren’t. Matt is a strong advocate for the open source module ecosystem and pushes back on the common instinct to build everything internally. A well-maintained, focused open-source module delivers great security defaults, community-vetted patterns, and ongoing updates that most internal teams simply can’t match.

The hidden cost of building it yourself

The same logic applies to the operational layer around your Platform. Many teams build their own Jenkins or GitHub Actions pipelines to run Terraform and assume it saves money because the work is done in-house. But Matt argues this rarely pencils out. At scale, managing state files, enforcing policy, handling environment-specific approvals, and maintaining all of that custom pipeline code is a significant ongoing burden, and when the person who built it leaves, that cost compounds. Matt’s take is to evaluate the vendor tooling available to solve your problem and be honest about what an engineer’s time is actually worth when measured against a vendor invoice.

Happy Deployments!

Inside Platform Engineering is a series of conversations with Matt Allford and a guest, bringing their own experience and perspective from the world of Platform Engineering.

You can find more episodes on YouTube.


SDKs vs APIs vs JavaScript: Modern Document Processing in 2026


How Document Processing Is Changing in 2026: SDKs, APIs, and JavaScript

TL;DR: Modern document workflows in 2026 typically rely on three approaches: high-performance SDKs, scalable Docker-based Web APIs, and flexible JavaScript or Node.js tools. Each addresses different needs around performance, scalability, and user experience. This guide compares the trade-offs, explains when to use each approach, and shows why many teams now combine them to build faster, cloud-ready, and future-proof document-processing systems.

Document processing is now a core feature in modern apps, from generating invoices and contracts to powering real-time PDF review and web-based document editing. But expectations have changed. Teams want automation that is reliable, scalable, and compatible with both traditional deployments and cloud-native platforms.

In 2026, most developers choose one of three approaches:

  • SDKs that run directly inside applications deliver raw speed and handle heavy workloads efficiently.
  • Web APIs packaged as Docker containers help teams scale document processing easily across microservices.
  • JavaScript/Node.js solutions for web‑based, interactive experiences provide rich in‑browser viewing and editing without requiring additional installations.

The real question isn’t “Which is best?” It’s “Which approach aligns with your workload, architecture, and UX requirements, and where does it make sense to combine them?”

The best part is that Syncfusion® supports all three approaches, giving teams a single, reliable solution for any document‑processing workflow.

The 2026 dilemma: Speed, Scale, or Flexibility?

Microsoft Office or Adobe‑based pipelines often struggle to keep up with modern automation demands, frequently introducing errors and bottlenecks.

Document workloads keep getting more complex:

  • Multiple formats: PDF, DOCX, XLSX, PPTX.
  • More automation: Merge, split, convert, redact, sign, extract.
  • Higher reliability requirements: Fewer failures, fewer “works on my machine” issues.
  • More security pressure: Sensitive documents, controlled environments, compliance needs.

Legacy Office- or Adobe-based approaches often create bottlenecks, add manual steps, and introduce dependency issues in server environments.

So teams end up deciding two things:

  1. Where does processing happen? In-process vs over HTTP vs in the browser.
  2. How does it scale? Single service vs microservices vs client-side offload.

3 approaches to modern document processing

Each model plays a distinct role in meeting specific performance and scalability needs, which we’ll explore in detail below.

SDKs as the high-performance backbone

If you need low latency and high throughput, SDKs are usually the most direct path. They run inside your application process, so you avoid network overhead and keep tight control over document behavior.

Syncfusion provides a complete .NET Document SDK for programmatic processing across common formats, without any Microsoft Office dependencies.

  • PDF Library: Create, edit, merge, split, secure, and convert PDFs. You can process PDF files with advanced features like forms, OCR, redaction, digital signatures, and PDF/A compliance, all without relying on Adobe Acrobat.
  • Word Library (DocIO): Create, edit, and convert Word documents with formatting, mail merge, charts, images, document protection, and PDF export.
  • Excel Library (XLSX): Create and modify spreadsheets, import and export data, use formulas, generate charts and pivot tables, apply conditional formats, and convert workbooks to PDF.
  • PowerPoint Library (PPTX): Build slide decks, update content, add images, create charts, apply animations and transitions, clone or merge slides, and export presentations to PDF.

Why SDKs win

  • Best fit for performance-critical services (conversion pipelines, batch jobs, high-volume generation).
  • Strong option for restricted environments (no desktop installs, fewer moving parts).
  • Deep APIs for document manipulation and compliance workflows.

Unlock advanced PDF, Word, Excel, and PowerPoint automation with standalone, high‑performance libraries built for enterprise workloads. → Explore Syncfusion’s complete .NET Document Processing Suite.

Prefer building interactive, UI-driven document experiences instead of using libraries alone? Syncfusion offers UI controls you can pair with the Document Libraries to create complete, end‑to‑end document workflows. Each UI SDK requires its own separate license and is not included with the Document SDK. If your application needs a fully interactive UI, the UI SDK is the right choice. For automation or server‑side processing, the Document SDK alone is sufficient.

  • PDF Viewer SDK: View, annotate, review, and redact PDFs directly in the browser.
  • DOCX Editor SDK: Create and edit Word documents online with full formatting, track changes, and comments.
  • Spreadsheet Editor SDK: Work with Excel-like spreadsheets online with formulas, formatting, and charts.

Cloud-native scalability: Web APIs (Docker)

Using a ready-to-use Docker image for document processing is a popular approach because it provides a consistent, portable, and easy-to-deploy environment. If your architecture is microservice-oriented, or you need document processing accessible from multiple stacks, Web APIs (Docker) are often the cleanest option.

Dockerized APIs give you:

  • A consistent runtime across dev/stage/prod.
  • Easier scaling (run more containers when demand spikes).
  • Language-agnostic access (anything that can call HTTP can use it).

Syncfusion’s Docker-based Document Processing Web APIs are positioned for teams that want a ready-to-deploy container model, with benefits like:

  • Preconfigured and ready to use: Everything is included, so you can start processing documents immediately without manual setup.
  • Scales easily: Add more containers to handle higher workloads while keeping performance steady.
  • Runs anywhere Docker runs: Ensures consistent behavior across dev, staging, and production.
  • Secure by design: Host the API in your own infrastructure for full control over authentication, network rules, and compliance.
  • Flexible and customizable: Extend the image or adjust configurations to fit your project’s needs.

When APIs win

  • You need document processing shared across multiple services or languages.
  • You want predictable deployments and cleaner CI/CD.
  • You prefer to centralize document logic behind versioned endpoints.

Ready to deploy Syncfusion’s Document Processing Web APIs in Docker? Start with the official Docker setup guide.

JavaScript & Node.js for Interactive and Collaborative Apps

Using JavaScript and Node.js for document processing is increasingly popular because it enables fast, interactive, and fully client‑side workflows. With this approach, applications can generate and modify PDFs directly in the browser or in Node.js without relying on servers, plugins, or external software. This makes the experience lightweight, secure, and ideal for modern, real‑time, collaborative web applications.

Syncfusion’s JavaScript PDF Library enhances this approach by offering:

  • A pure JavaScript PDF engine that works entirely in the browser with zero installation.
  • Unified API for both browser and Node.js, allowing client and server workflows to share the same code.
  • No server dependencies, making deployments simpler, faster, and more secure.
  • Full PDF creation and editing features, including loading, editing, saving, and working with password‑protected files.
  • Rich PDF enhancements such as adding text, images, shapes, hyperlinks, bookmarks, annotations, and form fields.
  • Advanced operations, including flattening, merging, splitting, redaction, text extraction, image extraction, and layer management.
  • Support for digital signatures, enabling secure authentication and integrity checks.

Want to see the JavaScript PDF Library in action? Try the JavaScript PDF Library.

Document Processing SDK vs Web API vs JavaScript — Comparison Table

Choosing the right document processing model depends on your performance needs, deployment style, and user experience goals. Here’s a simple table to help you decide the best approach for your needs.

| Decision Factor | .NET Document Processing SDKs | Web APIs (Docker) | JavaScript / Node.js |
| --- | --- | --- | --- |
| Performance | Fastest in‑process execution (no HTTP overhead) | Slight overhead due to HTTP requests | Fast in-browser; Node.js suitable for light–moderate workloads |
| Setup Requirements | No Office/Acrobat required; works fully standalone | Requires Docker + hosting environment | Zero install in browser; Node.js runtime for server usage |
| Best For | High‑performance backend automation in .NET | Cloud‑native, microservice‑based architectures | Interactive, real‑time client experiences |
| Access Model | In‑app library calls | HTTP API accessible from any stack | Browser APIs + Node.js APIs |
| Scalability | Scales by app instance | Horizontal scaling by adding containers | Browser scaling depends on client device; Node.js can scale |
| Cross‑Platform Use | .NET-only | Any language that can call HTTP | Browser + Node.js (JavaScript ecosystems) |
| Ideal Workloads | Generation, conversion, protection, signing, heavy document processing | High‑volume conversion/extraction shared across teams | In‑browser viewing, annotation, collaboration, client‑side workflows |
| When to Choose | You need maximum speed & deep control; automation-centric | You need shared processing across teams or stacks; microservices | You want real-time UX or privacy‑focused client-side operations |

What the future suggests: Hybrid models win

Most modern document apps don’t pick only one approach. They combine them to get speed, scale, and UX.

Below are two practical hybrid patterns you can use.

  • SDK backend + JS frontend
    • Use .NET SDKs for high-speed creation/conversion/redaction/signing on the server.
    • Use in-browser JS for viewing, forms, annotations, and interactive review.
    • Optionally perform some edits fully in the browser using a JS PDF library.
    • This pattern keeps heavy processing server-side while giving users a responsive UI.
  • Docker API + in-browser editors
    • Run document processing as Dockerized Web APIs for scalable conversions and operations.
    • Pair with browser-based viewing/editing for real-time interaction.
    • This pattern works well for microservices and multi-language organizations.

With Syncfusion supporting SDKs, Docker‑based Web APIs, and JavaScript tools, you can mix and match to achieve the right balance of speed, scale, and user experience without locking yourself into a single approach. Syncfusion also offers UI controls like the PDF Viewer, DOCX Editor, and Spreadsheet Editor, which you can pair with any model to deliver rich in‑app viewing and editing experiences.

Frequently Asked Questions

Is client-side JavaScript PDF processing secure enough for sensitive documents?

Client‑side processing improves privacy because documents never leave the user’s device. Sensitive PDFs (medical forms, financial statements, legal drafts) can be viewed, annotated, and redacted safely without hitting a backend.

Are Docker-based document processing APIs suitable only for large enterprises?

No. Even small products benefit from Web APIs because containers eliminate dependency headaches. Whether you run a single instance or thousands, Docker gives you predictable behavior and easy deployment—ideal for both startups and enterprise workloads.

Can Docker-based document APIs replace SDKs entirely?

Not fully. Docker‑based Web APIs scale easily and centralize document processing, but SDKs still offer the lowest latency because they run in‑process with no network calls. Many teams use Docker APIs for high‑volume workloads and SDKs for performance‑critical tasks inside core services. To learn more, see the Docker Image Hosting Guide and the Ready to deploy Docker Image for Syncfusion document processing APIs.

Do JavaScript libraries require WebAssembly or plugins to process PDFs?

No. Syncfusion’s JavaScript PDF Library is a pure JS, non-UI library; no plugins or WebAssembly are needed. It supports full PDF manipulation: create, edit, annotate, sign, redact, merge, split, and more.

Will self-hosted APIs lock me into a single tech stack?

No. Because Web APIs are language‑agnostic (HTTP endpoints), they can be used from Java, Python, Go, Node.js, PHP, .NET, or anything else. This makes the API approach ideal for large teams using multiple stacks.

Are the UI components easy to integrate?

Yes. Syncfusion’s UI components come with simple APIs, clear documentation, and ready-made examples, so you can add viewing or editing features to your app quickly and with minimal effort.

Conclusion

Thank you for reading! Document processing in 2026 isn’t about choosing one tool. It’s about choosing the right execution model for each part of your workflow.

  • Pick SDKs for speed, deep control, and dependency-free backend automation.
  • Pick Web APIs (Docker) for cloud-native scaling and cross-stack reuse.
  • Pick JavaScript/Node.js for interactive, zero-install browser experiences.
  • Use a hybrid when you need both backend power and frontend UX.

Syncfusion supports all three approaches: .NET SDKs, Docker-based Web APIs, and JavaScript/Node libraries, so teams can implement the architecture that fits their product without re-platforming later.

Ready to Power Your Apps with Syncfusion’s Document SDK? Explore the complete Syncfusion Document Processing Suite.

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
