
A dispute over the TAB key highlights a mismatch between Microsoft and IBM organizational structures


I’ve written in the past about the cultural mismatch between Microsoft and IBM during the collaboration on OS/2, with the Microsofties viewing their IBM colleagues as mired in pointless bureaucracy and the IBM folks viewing Microsofties as undisciplined hackers.¹

One of many points of mismatch was the organizational structure.

A colleague recalls that while he was assigned to the IBM offices in Boca Raton, Florida, there was a dispute over what key should be used to move from one field to another in dialog boxes. The folks at IBM were not happy with my colleague’s decision to use the TAB key, so they asked him to escalate the issue to his manager back in Redmond.

My colleague’s manager replied, “The reason you are in Boca is to make these decisions so I don’t have to be in Boca.”

My colleague rephrased this reply in a more corporate manner before passing it on to IBM: “Microsoft supports the use of the TAB key for this purpose.”

Unsatisfied, the IBM folks escalated the issue several levels up their organizational chain and then replied that their VP (around seven levels of management above the programmers) was absolutely opposed to the use of the TAB for this purpose, and they wanted confirmation from the equivalent-level manager at Microsoft that Microsoft stood by the choice of the TAB key.

My colleague replied, “Bill Gates’s mother is not interested in the TAB key.”

This apparently ended the discussion, and the TAB key stayed.

Note: This upcoming Sunday is Mother’s Day in the United States. You probably shouldn’t ask her for her opinion on the TAB key.

¹ There was probably merit to both arguments.

The post A dispute over the TAB key highlights a mismatch between Microsoft and IBM organizational structures appeared first on The Old New Thing.


Building and Using MCP Servers in Visual Studio

Microsoft Product Manager Mike Kistler previews his Visual Studio Live! session on how MCP servers give .NET developers a universal standard for connecting AI models to external data and tools -- and how to build them with the C# SDK. Microsoft's Leslie Richardson is a co-presenter.

How to Fix Slow EF Core Queries: Practical Performance Tuning


TL;DR: Slow EF Core queries can significantly impact application performance if not optimized properly. This guide covers practical techniques such as query optimization, proper indexing, no-tracking reads, compiled queries, pagination, batching, and caching to help you build faster, scalable, and maintainable .NET applications with improved runtime efficiency and user experience.

Unoptimized EF Core queries can quietly become a bottleneck as applications scale, driving up response times, memory usage, and infrastructure costs. While EF Core simplifies data access, performance depends heavily on how queries are modeled, executed, and monitored.

This guide focuses on practical EF Core performance tuning techniques tested in real-world .NET applications. The goal is simple: reduce query latency, improve scalability, and build APIs that stay fast under load.

Tips for EF Core performance tuning

1. Measure first: Identify real bottlenecks

Before applying optimizations, measure where time is actually spent. Most EF Core performance issues come from inefficient SQL generation, excessive round-trips, or unnecessary data loading.

Focus on identifying:

  • N+1 queries
  • Large result sets
  • Repeated query execution
  • Missing or ineffective indexes

Tools: Use a profiler such as Entity Framework Profiler, SQL Server Profiler, or pgAdmin for deeper analysis, and enable EF Core’s built-in logging to inspect the generated SQL and operation details.

Enable EF logging

services.AddDbContext<BlogContext>(options =>
    options.UseSqlServer(connectionString)
           .LogTo(Console.WriteLine, LogLevel.Information));

Tip: Benchmark alternatives yourself; published results may not match your setup due to differences in latency or data volume. Always test in your environment.

Performance impact: Ensures efforts target real problems, avoiding wasted time.

[Figure: SQL Server Profiler analyzing EF Core query performance]
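If you want repeatable numbers rather than profiler snapshots, a micro-benchmark harness helps. Below is a minimal BenchmarkDotNet sketch, assuming a BlogContext with a parameterless constructor wired to your connection string (all names are illustrative, not from the original article):

```csharp
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.EntityFrameworkCore;

// Hypothetical benchmark comparing a tracked read against a no-tracking read.
// Assumes BlogContext exposes DbSet<Blog> Blogs and can be constructed directly.
[MemoryDiagnoser]
public class BlogQueryBenchmarks
{
    private BlogContext _context = null!;

    [GlobalSetup]
    public void Setup() => _context = new BlogContext();

    [Benchmark(Baseline = true)]
    public async Task<int> Tracked() =>
        (await _context.Blogs.ToListAsync()).Count;

    [Benchmark]
    public async Task<int> NoTracking() =>
        (await _context.Blogs.AsNoTracking().ToListAsync()).Count;

    [GlobalCleanup]
    public void Cleanup() => _context.Dispose();
}

// Entry point: BenchmarkRunner.Run<BlogQueryBenchmarks>();
```

Run it in Release mode against a database with realistic data volume, since results on a near-empty table tell you little.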

2. Index the database strategically

Indexes are one of the highest-impact optimizations for EF Core queries. They act like a lookup table for your database, speeding up queries that involve WHERE, JOIN, ORDER BY, and GROUP BY clauses. Without them, even well-written LINQ queries degrade into full table scans, which can be prohibitively slow as data grows.

Good index candidates include:

  • Columns used in WHERE filters
  • Foreign keys used in joins
  • Columns used for sorting and pagination
  • Frequently searched fields

Composite indexes work well for multi-column filters when column order matches selectivity. Keep in mind that indexes improve read performance at the cost of slower writes and additional storage.

EF Core indexing code example

Use EF Core’s Fluent API in your DbContext to define indexes during model configuration. Here’s an example for indexing the Url column in a Blog entity:

modelBuilder.Entity<Blog>()
    .HasIndex(b => b.Url)
    .HasDatabaseName("IX_Blog_Url");

Explanation: The HasIndex method creates an index on the Url column, which speeds up queries such as WHERE Url = 'example.com/blog'. The optional HasDatabaseName call assigns a clear name (e.g., IX_Blog_Url, following the convention: IX for index, then table, then column).

Composite indexes

For queries filtering multiple columns, use composite indexes. For example, if you frequently query Posts by PublishedDate and CategoryId:

modelBuilder.Entity<Post>()
    .HasIndex(p => new { p.PublishedDate, p.CategoryId })
    .HasDatabaseName("IX_Post_PublishedDate_CategoryId");

Order matters: Place the most selective column (e.g., PublishedDate if it has more unique values) first for better efficiency.

  • Benefits: Faster WHERE, JOIN, and ORDER BY operations.
  • Warning: Indexes increase write time and storage; test before applying in production.
  • Tool: Use EF Core’s migrations to apply indexes or analyze query plans with tools like SQL Server Profiler.
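As a sketch of the migration route mentioned above, the HasIndex call from the earlier Blog example would surface in a generated migration roughly like this (the migration class name is assumed):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

// Sketch of the migration EF Core generates for the HasIndex configuration
// shown earlier (table and index names assumed from the Blog example).
public partial class AddBlogUrlIndex : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.CreateIndex(
            name: "IX_Blog_Url",   // matches HasDatabaseName above
            table: "Blogs",
            column: "Url");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropIndex(
            name: "IX_Blog_Url",
            table: "Blogs");
    }
}
```

Applying the migration (dotnet ef database update) then creates the index on the target database.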

3. Fetch only what you need

Over-fetching data is a common cause of slow queries and high memory usage. EF Core makes it easy to load full entities, but most read paths don’t need every column.

Recommended approach:

  • Use Select projections to fetch only required fields
  • Map results to lightweight DTOs or C# records

This reduces network payload size, memory usage, and materialization cost, especially in high-concurrency scenarios.

Avoid SELECT * equivalents: Instead of retrieving entire entities, use Select to project only the required fields.

// Bad: Fetches all columns
var users = context.Users.ToList();

// Good: Fetches only needed fields
var users = context.Users
    .Select(u => new { u.Id, u.Name })
    .ToList();

Projection to a record DTO

Use a record for a reusable, immutable DTO instead of a class.

public record BlogDto(int Id, string Title, DateTime PublishedDate);

// Query using record DTO
var blogs = context.Blogs
    .Select(b => new BlogDto(
        Id: b.Id,
        Title: b.Title,
        PublishedDate: b.PublishedDate
    ))
    .ToList();

Explanation: The BlogDto record is immutable, ensuring data consistency in high-concurrency scenarios. The SQL query is SELECT Id, Title, PublishedDate FROM Blogs. Records also support value equality, reducing bugs in comparisons.

  • Benefits: Reduces database load and memory usage.
  • Warning: Overusing projections can lead to complex queries; balance readability and performance.

4. Fix N+1 Queries with the right loading strategy

Improper loading strategies often lead to N+1 query problems or overly complex SQL.

Choose wisely between eager, lazy, and explicit loading to avoid N+1 issues or cartesian explosions. The N+1 problem occurs when EF Core executes one query for an entity and additional queries for each related entity.

Example:

// Bad: N+1 queries
var orders = context.Orders.ToList();
foreach (var order in orders)
{
    var items = order.OrderItems.ToList(); // Separate query per order (lazy loading)
}

// Good: Single query with eager loading
var ordersWithItems = await context.Orders
    .Include(o => o.OrderItems)
    .ToListAsync();

Advanced tip: Use SplitQuery in EF Core 5+ to break large queries into smaller, parallel ones.

var blogs = await context.Blogs
    .AsSplitQuery()
    .Include(b => b.Posts)
    .ToListAsync();

Comparison of eager, lazy, and explicit loading:

Strategy | Pros | Cons
Eager | Single query | Large result sets
Lazy | On-demand | N+1 risk
Explicit | Controlled | Extra code
[Chart: Performance comparison of EF Core N+1 queries vs. eager loading]
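The explicit strategy from the table is not shown elsewhere in this guide; here is a minimal sketch, assuming an Order entity with an OrderItems collection and a known orderId (both illustrative):

```csharp
// Load the parent first, then explicitly load the related collection
// only when it is actually needed.
var order = await context.Orders.FirstAsync(o => o.Id == orderId);

await context.Entry(order)
    .Collection(o => o.OrderItems)
    .LoadAsync();
```

Explicit loading trades an extra round-trip for precise control over when related data is fetched, which can be worthwhile when the collection is only needed on some code paths.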

5. Filter and paginate early

Filtering and pagination should always be pushed to the database to avoid loading unnecessary data into memory.

Key practices:

  • Apply Where, OrderBy, and Take as early as possible
  • Avoid offset-based pagination for large datasets

Push filters to the database whenever possible, and filter client-side only when the database cannot express the condition. Limit results with Take and Skip.

var recentPosts = await context.Posts
    .Where(p => p.Title.StartsWith("A"))
    .OrderBy(p => p.CreatedDate)
    .Take(25)
    .ToListAsync();

Efficient pagination options

  • Keyset pagination uses indexed columns to retrieve the next page efficiently. Instead of skipping rows, keyset pagination uses a filter based on the last seen key (usually a unique, indexed column like Id) to fetch the next set of records.
public async Task<List<Post>> GetPostsKeysetAsync(DateTime? lastSeenDate)
{
    return await context.Posts
        .Where(p => p.Title.StartsWith("A") && (!lastSeenDate.HasValue || p.CreatedDate > lastSeenDate.Value))
        .OrderBy(p => p.CreatedDate)
        .Take(25)
        .ToListAsync();
}
  • Cursor-based pagination builds on keyset pagination by encoding position into an opaque cursor, improving API usability and safety.
public async Task<List<Post>> GetPostsCursorAsync(string? cursor)
{
    DateTime? lastSeenDate = null;

    if (!string.IsNullOrEmpty(cursor))
    {
        var bytes = Convert.FromBase64String(cursor);
        var dateString = Encoding.UTF8.GetString(bytes);
        lastSeenDate = DateTime.Parse(dateString);
    }

    return await context.Posts
        .Where(p => p.Title.StartsWith("A") && (!lastSeenDate.HasValue || p.CreatedDate > lastSeenDate.Value))
        .OrderBy(p => p.CreatedDate)
        .Take(25)
        .ToListAsync();
}

The API call would look like the following:

GET /posts?cursor=MjAyNS0xMC0xNlQwMTowMzozNi4wMDAwMDAwWg==
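The decoding half of the cursor is shown above. The matching encoder, which the API would use to produce the cursor it returns with each page, might look like this (the helper name is illustrative):

```csharp
using System;
using System.Text;

public static class CursorHelper
{
    // Encode the last seen CreatedDate as an opaque Base64 cursor.
    // The round-trip "O" format ensures DateTime.Parse in the decoder
    // can recover the exact value.
    public static string Encode(DateTime lastSeenDate)
    {
        var bytes = Encoding.UTF8.GetBytes(lastSeenDate.ToString("O"));
        return Convert.ToBase64String(bytes);
    }
}
```

The API returns this string alongside each page so the client can pass it back as the cursor query parameter for the next request.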

Performance impact: Prevents loading massive result sets, saving memory and time.

6. Use compiled and precompiled queries

EF Core translates LINQ expressions into SQL at runtime. For queries executed frequently, this translation cost can become noticeable.

  • Compiled queries reduce runtime translation overhead
private static readonly Func<BlogContext, int, IAsyncEnumerable<Blog>> CompiledQuery =
    EF.CompileAsyncQuery((BlogContext context, int length) =>
        context.Blogs.Where(b => b.Url.Length == length));

await foreach (var blog in CompiledQuery(context, 8)) { /* ... */ }
  • Precompiled Queries (EF 9+): Generate during publish for Native AOT; reduces startup time.

Warning: Not for dynamic queries; rewrite as static if possible.

Comparison of different compilation methods

Method | Execution Time (ms) for 10K Records | Improvement
Standard LINQ | 5528.07 | (baseline)
Compiled Query | 4113.80 | 25.6% faster than Standard
Precompiled Query (EF9) | 389.07 | 90.5% faster than Compiled

7. Batch updates and raw SQL for high-volume operations

Batch operations and raw SQL allow you to:

  1. Execute bulk updates or deletes in a single database command.
  2. Write custom SQL for complex queries that LINQ struggles to optimize.
  3. Minimize memory usage by bypassing change tracking.
  4. Reduce roundtrips, improving latency and scalability.

Tracked entity updates can be inefficient for bulk operations or complex queries.

Prefer:

  • ExecuteUpdateAsync and ExecuteDeleteAsync for bulk changes
  • Parameterized raw SQL for complex or performance-sensitive queries

These approaches reduce round-trips, bypass unnecessary tracking, and improve throughput for large workloads.

ExecuteUpdateAsync

await context.Posts.ExecuteUpdateAsync(s =>
    s.SetProperty(p => p.Views, p => p.Views + 1));
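ExecuteDeleteAsync (EF Core 7+) works the same way for bulk deletes: a single DELETE statement, with no entities loaded into memory. A sketch with an illustrative cutoff value:

```csharp
// Delete all posts older than the cutoff in one DELETE statement,
// bypassing change tracking entirely.
var cutoff = DateTime.UtcNow.AddYears(-5); // illustrative retention window

await context.Posts
    .Where(p => p.CreatedDate < cutoff)
    .ExecuteDeleteAsync();
```

Note that, like ExecuteUpdateAsync, this bypasses the change tracker, so any tracked Post instances already in the context will not reflect the deletion.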

Raw SQL (Use parameterized queries to prevent SQL injection)

int pageSize = 10;
DateTime lastSeenCreatedDate = new DateTime(2023, 01, 01); 

var blogs = await context.Blogs
    .FromSqlRaw(
        @"SELECT TOP({1}) * FROM Blogs 
            WHERE CreatedDate > {0} 
            ORDER BY CreatedDate ASC", 
        lastSeenCreatedDate, pageSize)
    .ToListAsync();

Tip: EF Core batches SaveChanges operations automatically, but you can configure MaxBatchSize if needed.

Performance impact: Single roundtrip for bulk ops vs. multiple with traditional methods.

8. Use no-tracking for read-only queries

Change tracking adds overhead that isn’t needed for read-only queries.

For reporting, feeds, or public API endpoints:

  • Use AsNoTracking() to skip tracking
  • Reduce memory usage and improve query performance

This is one of the simplest and most effective EF Core optimizations for read-heavy paths.

// Tracking (default) - Slower for reads
var blogs = await context.Blogs.ToListAsync();

// No-tracking - Faster, no identity resolution
var blogsNoTracking = await context.Blogs
    .AsNoTracking()
    .ToListAsync();

The tracking query in the above code example loads all Blog entities into memory with snapshots, generating SQL like SELECT * FROM Blogs. The no-tracking query uses the same SQL but skips snapshot creation and identity resolution, reducing overhead.

When to use: Read-only scenarios like reports; skip if updating later.
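If a read-only query still benefits from duplicate related entities being resolved to a single instance (for example, when including related data), EF Core 5+ offers a middle ground:

```csharp
// Skips change tracking, but still resolves identical rows in the
// result set to a single entity instance within this query.
var blogs = await context.Blogs
    .AsNoTrackingWithIdentityResolution()
    .Include(b => b.Posts)
    .ToListAsync();
```

This costs slightly more than plain AsNoTracking but avoids duplicated object graphs when the same entity appears multiple times in the results.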

9. Cache strategically to reduce database load

Cache query results externally (e.g., with IMemoryCache or Redis) to skip database hits. Database queries, even optimized ones, incur latency due to I/O, network roundtrips, and query execution.

For example, fetching a list of recent Blogs from a database with 100,000 rows can take 100-200ms per request, even with indexing and no-tracking. In a high-traffic blog API with thousands of requests per minute, this adds up, straining the database and increasing response times. Caching stores query results in memory (or a distributed store), allowing subsequent requests to retrieve data in microseconds instead of milliseconds.

For a blogging system, caching is ideal for:

  1. Static data, like a list of top blogs or categories.
  2. Semi-static data, like recent posts updated hourly.
  3. Read-heavy endpoints, like blog feeds or search results.
if (!cache.TryGetValue("blogs", out List<Blog> blogs))
{
    blogs = await context.Blogs
        .AsNoTracking()
        .ToListAsync();
    cache.Set("blogs", blogs, TimeSpan.FromMinutes(5));
}

Choose appropriate expiration and invalidation strategies to avoid serving stale data.
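Those policies can be expressed with MemoryCacheEntryOptions. A sketch combining a sliding window with an absolute cap (the durations are illustrative):

```csharp
// Evict the entry if it goes unused for 2 minutes, and never serve
// it beyond 10 minutes old regardless of access pattern.
var options = new MemoryCacheEntryOptions()
    .SetSlidingExpiration(TimeSpan.FromMinutes(2))
    .SetAbsoluteExpiration(TimeSpan.FromMinutes(10));

cache.Set("blogs", blogs, options);

// Invalidate explicitly when the underlying data changes:
cache.Remove("blogs");
```

The absolute cap bounds staleness even for hot entries that would otherwise keep renewing their sliding window.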

10. Advanced: DbContext pooling and async

Creating and disposing DbContext instances per request adds overhead under high concurrency. DbContext pooling reuses instances to reduce allocation and initialization cost.

Benefits include:

  • Faster request handling
  • Lower memory pressure
  • Better scalability in high-throughput APIs

Pool sizes should be tuned based on traffic patterns and available resources.

services.AddDbContextPool<BlogContext>(
    o => o.UseSqlServer(connectionString),
    poolSize: 2000);

In the above code example, AddDbContextPool creates a pool of up to 2000 BlogContext instances (the default pool size in EF Core is 1024). Reused instances skip initialization, reducing overhead. The connectionString specifies the SQL Server database. A smaller pool size (e.g., 256) is often sufficient for most applications, balancing memory and performance.

[Figure: DbContext pooling]

Async: Always use async methods (e.g., ToListAsync) for scalability.

Implementation example: Optimized EF Core blog query

Combine the tips above to fetch a paginated, cached list of blogs. This example uses no-tracking, projection, pagination, and caching. Note that when a query ends in a Select projection, Include (and therefore AsSplitQuery) is unnecessary: EF Core ignores Include on projected queries, and the published-post count is computed inside the projection itself.

public async Task<List<BlogDto>> GetBlogsAsync(int page, int size, IMemoryCache cache)
{
    var key = $"blogs_page_{page}";
    if (!cache.TryGetValue(key, out List<BlogDto> blogs))
    {
        blogs = await context.Blogs
            .AsNoTracking()
            .Select(b => new BlogDto
            {
                Id = b.Id,
                Url = b.Url,
                PostCount = b.Posts.Count(p => p.Published)
            })
            .OrderBy(b => b.Url)
            .Skip((page - 1) * size)
            .Take(size)
            .ToListAsync();
        cache.Set(key, blogs, TimeSpan.FromMinutes(10));
    }
    return blogs;
}
[Figure: Loading cache data to reduce multiple calls to data retrieval and optimize data generation]

Frequently Asked Questions

How do I decide which EF Core optimization to apply first?

Start with profiling because the most expensive bottleneck is not always the most obvious one. After identifying whether the issue is query shape, data volume, tracking, indexing, or roundtrips, apply the optimization that directly targets that problem.

Are EF Core performance gains the same across SQL Server, PostgreSQL, and MySQL?

No, the impact varies by database provider, schema design, query planner, indexing strategy, and network latency. The same LINQ query can generate slightly different SQL depending on the provider, so benchmarks should be repeated against the actual production database.

When should I prefer EF Core over writing raw SQL for performance-critical features?

Use EF Core when queries are maintainable, provider-portable, and produce efficient SQL. Choose raw SQL only when you need precise control over complex joins, database-specific features, reporting queries, or bulk operations that LINQ cannot express efficiently.

Can these EF Core optimizations help cloud-hosted applications reduce cost?

Yes, better query performance can reduce database CPU usage, memory pressure, network transfer, and application server wait time. In cloud environments where compute, database throughput, and I/O are billed separately, optimized EF Core usage can directly lower operating costs.

What is the safest way to introduce performance tuning into an existing EF Core project?

Apply one change at a time and compare results using repeatable benchmarks or production-like load tests. This reduces the risk of introducing incorrect query behavior, stale cache data, or database regressions while trying to improve speed.

How do EF Core optimizations affect application architecture?

Performance tuning often encourages clearer separation between read models, write models, DTOs, caching layers, and database-specific concerns. Over time, this can lead to a more intentional architecture, especially for APIs with high-read traffic or reporting-heavy workloads.

Should every EF Core query use no-tracking, projection, pagination, and caching?

No, these techniques should match the query’s purpose. For example, no-tracking is ideal for read-only views, projection is useful when only selected fields are needed, pagination helps large result sets, and caching works best when data does not need to be instantly fresh.

How do I know when EF Core tuning is no longer enough?

If optimized queries, indexing, batching, and caching still cannot meet latency or scale goals, the issue may be architectural rather than ORM-related. At that stage, consider read replicas, denormalized read models, background processing, CQRS, database partitioning, or specialized storage for search and analytics.

Conclusion

Thank you for reading! Tuning EF Core queries involves measuring first, then applying targeted optimizations like no-tracking, projections, and compiled queries. By modeling efficiently and caching wisely, you can achieve significant speedups, often 2x or more in benchmarks. In my experience, combining AsNoTracking with caching has been a game-changer for high-traffic APIs.

Test iteratively in your environment, as gains vary by database and workload. With these tips, your .NET apps will handle data like a pro.

Apply these optimizations to your EF Core projects today! Dive deeper into the EF Core docs or share your experiences in the comments. You can refer to our Entity Framework Core Succinctly® free e-book here. Subscribe for more .NET tips!


GitHub Enhances CodeQL with Declarative Security Modeling for Faster, More Flexible Analysis


GitHub has introduced a significant update to its CodeQL engine, enabling developers to define custom sanitizers and validators directly through "models-as-data," a move that simplifies how teams extend security analysis across their codebases.

By Craig Risi

What are managed identities in SQL Server 2025? A complete guide


Learn how managed identities in SQL Server 2025 enhance security by eliminating passwords and enabling seamless Microsoft Entra authentication for Azure resources.

Those who have worked with SQL Server will understand the need to avoid storing passwords for accessing resources. Windows-based identities are fine for on-premises SQL Server systems, including those on cloud-based virtual machines (VMs), but are of no use when you need to access cloud-based resources like those in Azure.

Some Azure-based resources (including storage accounts) offer other access methods, such as shared access signatures (SAS), but these aren’t much of a step up from passwords.

What’s really needed is for SQL Server to have its own Microsoft Entra-based identity that can be used directly with Azure-based resources – and that’s exactly where managed identities come in.

What are managed identities?

A managed identity is a Microsoft Entra identity that Azure creates and manages for a resource. This allows the resource to obtain tokens for other Azure services without the need for storing passwords, client secrets, or certificates in code or configuration. Two types of managed identities are available.

First, a system-assigned managed identity, which is specific to a resource. For example, I could add a system-assigned managed identity to an Azure Data Factory (ADF). The ADF could then use the identity when making connections to, or accessing, other resources. When the ADF is deleted, the system-assigned managed identity is also removed.

Then there’s a user-assigned managed identity. This is created independently of any specific resource and can be applied to a resource, in addition to any system-assigned managed identity. User-assigned managed identities are very powerful because a single identity can be assigned to multiple resources.

For example, I might have a single identity for a group of data factories. This simplifies the configuration of access and, importantly, if I need to recreate the data factory, I can use the same identity. Even when all the data factories are deleted, the user-assigned managed identity can survive. Another common use for these is to assign the same identity to a group of web servers.

The official Microsoft documentation for managed identities can be found here.

Why managed identities are not just service principals

Under the covers, managed identities are a special kind of service principal, but there are important differences. The major one is that service principals expire, whereas Azure manages the lifecycle and credential rotation of managed identities for you.

I’m always concerned when I see solutions built using service principals without plans for what would happen when they expire. These are like time-bombs at many sites today, but using managed identities avoids these issues. The credential expires after 90 days and is rolled over after 45 days.

The biggest contrast is operational. With a service principal, someone must create an app registration, issue and protect a secret or certificate, and periodically rotate that credential. That might also require updating dependent systems.


What’s needed to use a managed identity?

To be able to use managed identities, you need two things:

  • The ability to assign a managed identity to a resource or service
  • The ability to use the managed identity when making a connection from that resource or service.

You might think that anything that can have a managed identity could also use it for connections, but that’s not the case.

For example, you can assign either or both a system-assigned managed identity and a user-assigned managed identity to an Azure Analysis Services (AAS) instance. However, AAS does not currently provide any way to use that identity when connecting to an Azure-based resource or service.

We currently have AAS servers making connections to Azure SQL Database using logins and passwords, but I wish we didn’t have to do this. It wouldn’t be required at all if we could just specify the managed identity in the connection details.

How do managed identities apply to SQL Server 2025?

It’s increasingly common for SQL Server to need to connect to Azure-based resources. This not only happens with SQL Server in Azure VMs, but also with on-premises SQL Server installations. However, SQL Server is not a native Azure resource or service.

From SQL Server 2025 onwards, it’s possible to both configure and use a system-assigned managed identity when SQL Server is enrolled in Azure Arc. To enable this, the SQL Server instance must be connected to Arc and run on Windows Server. It must also use the latest Azure Extension for SQL Server.

Currently, the documentation notes that SQL Server must have access to Azure public cloud for Microsoft Entra authentication, and that failover cluster instances are not supported for this managed-identity setup. We also can’t (yet) use user-assigned managed identities with SQL Server.

So, we only have part of what we need right now – but it’s a great start.

How to configure and use the managed identity in SQL Server 2025

Once the SQL Server instance is Arc-enabled, you can assign the server’s primary managed identity. You do this from the Azure portal while configuring the properties of the Arc-enabled SQL Server resource, under the Microsoft Entra ID and Purview settings. This primary managed identity is what SQL Server 2025 uses when making outbound connections to Azure-based resources.

You also need to grant the identity permission to the target resource in Azure. The role required is a role-based access control (RBAC) configuration, and will usually be different for each service type.

As part of making an outbound connection, you need to use a database-scoped credential like this:

EXECUTE sp_configure 'allow server scoped db credentials', 1;
RECONFIGURE;
GO
CREATE DATABASE SCOPED CREDENTIAL [storage_credential]
WITH IDENTITY = 'Managed Identity';
GO

Some credentials will also require you to specify a 'secret', for example when making a connection to an Azure OpenAI endpoint. Here, you not only specify IDENTITY = 'Managed Identity', but also a JSON secret identifying the token audience.

Note that it’s common to name the credential after the scope of the resource you’re accessing:

CREATE DATABASE SCOPED CREDENTIAL
[https://my-azure-openai-endpoint.cognitiveservices.azure.com/]
WITH IDENTITY = 'Managed Identity',
     SECRET = '{"resourceid":"https://cognitiveservices.azure.com"}';
GO

This credential can then be used when executing calls like CREATE EXTERNAL MODEL. Once this is done, calls to the external model can use that credential to authenticate without the need to supply an API key.

In summary: managed identities in SQL Server 2025

We still have limitations with how we can configure and use managed identities with SQL Server, but what’s been provided so far is a great start. It lets traditional on-premises (and cloud VM-based) SQL Server instances participate in modern Microsoft Entra-based authentication patterns, which in turn reduces potential issues like credential leakage.


FAQs: Managed identities in SQL Server 2025

1. What is a managed identity in Azure?

A managed identity is a Microsoft Entra ID identity automatically created and managed by Azure. It allows services to securely access other Azure resources without storing passwords, secrets, or certificates.

2. What are the types of managed identities?

  • System-assigned managed identity – tied to a single resource and deleted with it
  • User-assigned managed identity – reusable across multiple resources and managed independently

3. Why use managed identities instead of service principals?

Managed identities eliminate the need to manage credentials manually. Unlike service principals, Azure automatically rotates and manages credentials, reducing security risks and maintenance overhead.

4. Can SQL Server 2025 use managed identities?

Yes. SQL Server 2025 supports system-assigned managed identities when connected via Azure Arc, enabling secure access to Azure resources.

5. What are the requirements to use managed identities in SQL Server 2025?

  • SQL Server must be Arc-enabled
  • Must run on Windows Server
  • Requires the latest Azure Extension for SQL Server
  • Needs access to Azure public cloud for authentication

6. Does SQL Server 2025 support user-assigned managed identities?

No, currently only system-assigned managed identities are supported.

7. How do managed identities improve security in SQL Server?

They remove the need to store credentials like passwords or API keys, reducing the risk of credential leaks and simplifying secure access to Azure services.

8. How are managed identities used in SQL Server connections?

They are configured via database-scoped credentials using IDENTITY = 'Managed Identity', allowing SQL Server to authenticate securely when accessing external Azure resources.

The post What are managed identities in SQL Server 2025? A complete guide appeared first on Simple Talk.


How to Build a Loan Document Workflow Using an Embedded React PDF Viewer


TL;DR: Simplify loan processing by building a React-based workflow with an embedded PDF viewer to handle form filling, document review, approvals, e-signatures, audit trails, and final delivery in one place. Using a React PDF viewer from Syncfusion, you can replace fragmented processes with a unified, secure, and compliance-ready solution.

Loan processing breaks where documents begin

Loan processing is document-intensive by nature, and that’s exactly where most workflows break down.

Loan officers juggle email threads, outdated file copies, and fragmented review tools; customers wait days for approvals; and audit trails end up incomplete. When form filling, reviewing, annotating, signing, and approving all happen in separate systems, every transition is a potential point of failure.

This blog shows how to solve that problem by building a complete, role-based loan document review workflow around the embedded Syncfusion® React PDF Viewer, all inside a single React application. You’ll get a step-by-step walkthrough, key code snippets, and a working GitHub sample to reproduce the full flow.

Experience a leap in PDF technology with Syncfusion's PDF Library, shaping the future of digital document processing.

Why digital loan workflows matter more than ever

Loan processing is the end-to-end sequence that turns an application into a sanctioned loan: intake, verification, review, approval, and delivery. When each stage depends on manual handoffs or disconnected tools, the entire pipeline slows down.

Here’s what that looks like in practice:

  • Slow turnaround: Manual routing and email threads add days or weeks.
  • Broken context: Reviewers work from copies or fragmented annotations, causing rework.
  • Compliance exposure: Incomplete audit trails and versioning gaps increase risk.
  • Operational cost: Multiple-point tools and manual steps inflate headcount and processing cost.
  • Poor customer experience: Long waits and repeated document requests kill conversion.

In 2026, a unified, auditable digital workflow isn’t just a modernization effort. It’s a competitive advantage that turns loan pipelines into predictable revenue engines.

What a unified digital workflow delivers

Replacing fragmented processes with structured, automated digital document flows in a single interface changes the picture entirely:

  • Speed: Digital workflows cut decision time. Faster approvals mean more closed loans.
  • Accuracy: Inline validation and single-source PDFs reduce errors and resubmission.
  • Visibility: Real-time dashboarding and status badges eliminate chase work for staff.
  • Compliance: Embedded signatures, stamps, and audit logs create defensible records.
  • Lower cost, higher conversion: Fewer touchpoints, less rework, and better customer experience directly boost margins.

Architecture overview: One app, one document, full traceability

The loan processing application uses a clean, modern architecture that pairs an intuitive frontend review experience with a secure, auditable backend. The result is a seamless end-to-end workflow: form filling, annotation, review, approval, and final PDF generation, all inside a unified digital system.

Technology stack

Frontend (React + Syncfusion PDF Viewer)

A React application embeds the Syncfusion PDF Viewer, enabling users to:

  • Fill form fields directly in the browser.
  • Add annotations (highlights, comments, notes).
  • Apply E-signatures.
  • Download final PDFs.

All interactions happen inside a single interface. No external viewers, no file switching, and no context loss.

Backend services

A lightweight backend service handles the business logic and compliance needs:

  • Persist annotation JSON for each review cycle.
  • Manage workflow status across the loan requester → loan officer → site officer → manager pipeline.
  • Record audit events (timestamps, actions, version history) for compliance.
  • Generate flattened PDFs with embedded stamps, signatures, and metadata.
  • Store documents with version tracking for future verification or audits.

Together, the stack ensures a secure, traceable, and fully digital review flow.
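
The audit-event responsibility above can be sketched as a small in-memory service. This is illustrative only (the function and field names are assumptions for this sketch, not part of any Syncfusion API); a production backend would persist events to a database with version history:

```javascript
// Minimal in-memory audit trail sketch. Each event records who did
// what to which loan, and when, so the history can be replayed later.
function createAuditLog() {
  const events = [];
  return {
    // Append one audit event with an ISO timestamp.
    recordEvent(loanId, actor, action) {
      const event = { loanId, actor, action, at: new Date().toISOString() };
      events.push(event);
      return event;
    },
    // Return the full history for a single loan, oldest first.
    historyFor(loanId) {
      return events.filter((e) => e.loanId === loanId);
    },
  };
}
```

Swapping the array for a database table (with the loan ID indexed) keeps the same interface while making the trail durable for compliance reviews.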

End-to-end role-based architecture workflow

The loan processing workflow is structured around four roles, each interacting with the PDF viewer differently:

  1. Loan requester: Fills out loan applications, uploads supporting documents, and signs forms.
  2. Loan officer: Reviews applications, adds annotations, and provides preliminary approval.
  3. Site officer: Verifies the details submitted by the loan requester and completes the verification fields in the loan application.
  4. Manager: Final review, approval authority, and sanction letter generation.

This architecture delivers a fully connected, auditable, and efficient loan processing ecosystem powered by an embedded PDF experience.
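
One way to encode the four roles above is a plain permissions map that the UI consults before enabling viewer tools. The role names follow the article; the permission flags are assumptions for this sketch, not viewer APIs:

```javascript
// Illustrative role → capability map for the four-role pipeline.
const ROLE_PERMISSIONS = {
  'Loan Requester': { fillForms: true,  annotate: false, approve: false, sign: true  },
  'Site Officer':   { fillForms: true,  annotate: true,  approve: false, sign: false },
  'Loan Officer':   { fillForms: false, annotate: true,  approve: true,  sign: false },
  'Manager':        { fillForms: false, annotate: true,  approve: true,  sign: true  },
};

// Answer "may this role perform this action?"; unknown roles get nothing.
function can(role, action) {
  const perms = ROLE_PERMISSIONS[role];
  return Boolean(perms && perms[action]);
}
```

A dashboard can then call `can(role, 'annotate')` when deciding which toolbar items to show.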

Building the end-to-end loan processing workflow with Syncfusion React PDF Viewer

Let’s break down the steps to build the loan document review processing workflow using the Syncfusion React PDF Viewer.

Step 1: Initialize the React PDF Viewer

Setting up the Syncfusion React PDF Viewer is quick and developer-friendly. All you need to do is:

  • Install the Syncfusion React packages.
  • Import the required style CSS reference.
  • Then, add the PDF Viewer component to your React app.

From there, you can load PDFs, enable form filling, annotations, and signatures, and connect the viewer to your backend APIs.

Below is a code snippet to initialize the React PDF Viewer component.

import * as ReactDOM from 'react-dom/client';
import * as React from 'react';
import './index.css';
import { PdfViewerComponent, Toolbar, Magnification, Navigation, LinkAnnotation, BookmarkView,
         ThumbnailView, Print, TextSelection, Annotation, TextSearch, FormFields, FormDesigner, Inject} from '@syncfusion/ej2-react-pdfviewer';

function App() {
    return (<div>
    <div className='control-section'>
    {/* Render the PDF Viewer */}
      <PdfViewerComponent
        id="container"
        documentPath="https://cdn.syncfusion.com/content/pdf/pdf-succinctly.pdf"
        resourceUrl="https://cdn.syncfusion.com/ej2/31.2.2/dist/ej2-pdfviewer-lib"
        style={{ 'height': '640px' }}>

         <Inject services={[ Toolbar, Magnification, Navigation, Annotation, LinkAnnotation, BookmarkView,
                             ThumbnailView, Print, TextSelection, TextSearch, FormFields, FormDesigner ]}/>

      </PdfViewerComponent>
    </div>
  </div>);
}

// Mount the app so the imported ReactDOM client is actually used.
const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<App />);

For more details, explore the complete setup guide for the Syncfusion React PDF Viewer!

Step 2: Building a unified user login

The application uses a single login page for all four roles:

  • Loan Requester
  • Loan Officer
  • Site Officer
  • Manager

Users select their role at login, and after authentication, they’re directed to a role-specific dashboard showing only the tasks, documents, and permissions relevant to their stage in the workflow.

Unified user login page in the loan application

Note: This demo uses mock user-role details. In a production environment, you can integrate it with your organization’s authentication system to enforce actual access control.
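
Since the demo uses mock roles, the post-login routing can be as simple as a lookup from role to dashboard path. The route strings here are hypothetical, chosen only to illustrate the pattern:

```javascript
// Mock role-to-dashboard routing, mirroring the demo's single login page.
// A real app would resolve the role from your authentication provider.
const DASHBOARDS = {
  'Loan Requester': '/dashboard/requester',
  'Loan Officer':   '/dashboard/officer',
  'Site Officer':   '/dashboard/site-officer',
  'Manager':        '/dashboard/manager',
};

// Return the dashboard path for a role, or fail fast on unknown roles.
function routeAfterLogin(role) {
  const path = DASHBOARDS[role];
  if (!path) throw new Error(`Unknown role: ${role}`);
  return path;
}
```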

Explore the wide array of rich features in Syncfusion's PDF Library through step-by-step instructions and best practices.

Step 3: Filling out the loan application

After logging in, the loan requester lands on their dashboard, where they can view existing applications, check statuses, and create a new loan request. The dashboard shows the following key details:

  • Loan ID
  • Application Name
  • Status
  • Action
  • Comments

When the customer clicks the Create button, the loan form opens directly inside the Syncfusion React PDF Viewer. The loan form includes sections for applicant details, employment information, and loan specifics. Attachment fields allow applicants to upload supporting documents without leaving the interface.

The customer completes all required fields, adds their e-signature, and submits.

The built-in validation enforces clean data entry. Fields like Date of Birth and Phone Number accept only valid formats, preventing errors before submission.

Refer to the following code example to implement numeric field validation.

if (targetName) {
    // If field name references numeric fields (date, phone, amount, tenure, etc.), enforce digits-only
    const numericKeywords = [
        'date',
        'dob',
        'dateofbirth',
        'phone',
        'mobile',
        'contact',
        'amount',
        'tenure',
        'number',
        'no',
        'age'
    ];

    let isNumericField = numericKeywords.some(k => targetName.includes(k));

    // If the field is a signature field, skip numeric sanitization even if the name contains keywords
    const fieldTypeRaw =
        (args.field && (args.field.type || args.field.fieldType || '')) || '';

    const fieldType = (fieldTypeRaw + '').toString().toLowerCase();
    const isSignatureField = fieldType.includes('signature');

    if (isSignatureField && isNumericField) {
        console.debug(
            'Skipping numeric sanitization for signature field',
            { targetName, fieldType }
        );
        isNumericField = false;
    }

    // When isNumericField is true, apply digits-only sanitization to the
    // field value (for example, strip non-digit characters) before saving.
}
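
The sanitization step itself, elided in the snippet above, can be a small standalone helper. This is a sketch of one reasonable implementation, not code from the viewer:

```javascript
// Strip everything except digits from a numeric field's value.
// Signature fields are returned untouched, matching the skip logic above.
function sanitizeNumericValue(value, isSignatureField) {
  if (isSignatureField) return value;
  return String(value).replace(/\D/g, '');
}
```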

Refer to the following image.

Filling the loan application form using React PDF Viewer

Step 4: Site officer reviews

Now, the loan officer checks the loan details filled by the loan requester and forwards them to the site officer for verification.

The site officer can access all pending verification loan forms from their dashboard. Their primary responsibility is to confirm that the information provided by the loan requester is accurate, genuine, and complete. The site officer also fills site‑specific fields and returns the application to the loan officer with all relevant verification details added as comments.

This step introduces an essential layer of field‑level authentication, strengthening the reliability of the loan approval process before the application moves forward for final approval.

See the following image for more information.

Site officer reviewing the loan application

Step 5: Loan officer reviews

Loan officers access their dedicated dashboard listing all pending applications. Selecting an application opens it inside the Syncfusion React PDF Viewer.

Loan officers cannot edit fields submitted by the loan requester. Instead, they use annotations, highlights, and comments to review the document and provide structured feedback.

After site officer verification, a loan officer can:

  • Request additional information from the loan requester.
  • Reject the application with comments.
  • Approve and forward to the manager for final review.
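
The three officer actions above amount to a status transition. The sketch below follows the `LoanStatus` idea used elsewhere in the demo code, but the specific status values and function name are assumptions for illustration:

```javascript
// Loan-officer decision step modeled as a pure status transition.
const LoanStatus = {
  PENDING_REVIEW:   'PENDING_REVIEW',
  INFO_REQUIRED:    'INFO_REQUIRED',
  REJECTED:         'REJECTED',
  AWAITING_MANAGER: 'AWAITING_MANAGER',
};

function officerDecision(decision) {
  switch (decision) {
    case 'request-info': return LoanStatus.INFO_REQUIRED;    // back to the requester
    case 'reject':       return LoanStatus.REJECTED;         // closed with comments
    case 'approve':      return LoanStatus.AWAITING_MANAGER; // forwarded for final review
    default: throw new Error(`Unknown decision: ${decision}`);
  }
}
```

Keeping transitions in one pure function makes it easy for the backend to record each change as an audit event.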

Implement the following code example to prevent editing field values submitted by the customer.

/**
 * onDocumentLoad
 */
const onDocumentLoad = () => {
    evaluateFields();

    // Apply read-only mode based on loan status and role
    const canEdit =
        loanStatus === LoanStatus.INFO_REQUIRED &&
        (role === "Manager" || role === "Loan Officer");

    if (
        (!canEdit || loanStatus !== LoanStatus.INFO_REQUIRED) &&
        loanStatus !== ""
    ) {
        readOnly();
    }
};

The following GIF illustrates the loan review process.

Loan officer reviewing the loan application

Step 6: Manager reviews and approvals

Managers hold final approval authority. Their dashboard displays all applications waiting for approval, and each document opens directly inside the Syncfusion React PDF Viewer for review. Managers cannot modify customer-submitted fields but can review all annotations added by other officers and verify supporting documents.

From here, they can:

  • Approve the application and trigger the sanction‑letter creation.
  • Reject the application with comments.

Once approved, the manager sends the sanction letter to the customer for signature. After the customer signs, the manager completes the final signing and closes the loan process.

The following code example explains how sanction letter fields are populated and locked during the approval stage.

/**
 * UpdateForm
 * Populate sanction letter fields with values collected during approval
 * and set them to read-only.
 */
function UpdateForm() {
    const viewer = viewerRef.current;
    if (!viewer) return;

    let forms = viewer.retrieveFormFields();

    for (let i = 0; i < forms.length; i++) {
        const field = forms[i];

        // Populate sanction fields based on field name
        if (field.name === "ApplicantName") {
            field.value = sanctionValues.name;
        } else if (field.name === "Amount") {
            field.value = sanctionValues.amount;
        } else if (field.name === "Tenure") {
            field.value = sanctionValues.tenure;
        } else if (field.name === "Date") {
            field.value = new Date().toLocaleDateString("en-GB");
        }

        // If the field has a value, update and lock as read-only
        if (field.value !== "") {
            viewer.updateFormFieldsValue(field);
            viewer.formDesignerModule.updateFormField(field, {
                isReadOnly: true
            });
        }
    }
}
Manager reviewing and approving the loan application

Step 7: Downloading the loan sanction letter

After the manager approves the loan, the loan requester’s dashboard updates the application status to “APPROVED.” They can now open the sanction letter directly inside the React PDF Viewer and use the built-in Download option to save a signed copy locally.

Downloading the loan sanction letter

Witness the advanced capabilities of Syncfusion's PDF Library with feature showcases.

Comparing manual vs. digital loan processing workflow

The move from manual loan processing to an embedded digital workflow significantly reduces delays, errors, and compliance risks. The table below shows how each stage improves when using the Syncfusion React PDF Viewer instead of traditional, fragmented tools.

Stage | Traditional approach | Embedded PDF Viewer workflow
Application submission | Email attachments or physical forms require manual data entry. | Inline form filling with real-time validation and instant submission.
Document review | Printed markups, handwritten notes, or separate annotation tools require scanning. | Built-in annotation directly on the PDF with digital commenting.
Approval and signing | In-person meetings or wet signatures require physical presence. | E-signatures completed within the viewer, entirely remotely, in seconds.
Status tracking | Manual email updates with no centralized visibility. | Real-time dashboard status visible to all authorized parties.
Sanction letter delivery | Postal or email attachment with security risks. | Instant in-app download with full encryption and audit trail.
Integration complexity | 4-5 separate tools. | Single embedded component with unified functionality.

GitHub reference

Ready to try it yourself? Clone the Loan processing demo on GitHub, run it in minutes, and experience a complete end‑to‑end loan review and approval process firsthand.

Frequently Asked Questions

Can I customize the workflow for my application?

Absolutely. The workflow, UI, permissions, backend logic, and viewer tools can all be configured to match your organization’s loan lifecycle and approval structure.

Can we automate notifications for each workflow step?

Yes. Email, SMS, Teams, or in-app notifications can be triggered by backend workflow events such as form submission, loan officer information requests, or manager approvals.

Can we add custom validation rules to the form fields?

Yes. Beyond the default PDF field validation, you can add custom form field validation rules (e.g., PAN format, IFSC validation, salary range logic).
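
For the PAN and IFSC examples mentioned above, the standard published formats translate directly into regular-expression validators. These helpers are a sketch of how such rules could look, independent of any viewer API:

```javascript
// PAN: five uppercase letters, four digits, one uppercase letter
// (e.g., ABCDE1234F).
function isValidPAN(value) {
  return /^[A-Z]{5}[0-9]{4}[A-Z]$/.test(value);
}

// IFSC: four uppercase letters (bank code), a zero, then six
// alphanumeric characters (branch code), e.g., HDFC0001234.
function isValidIFSC(value) {
  return /^[A-Z]{4}0[A-Z0-9]{6}$/.test(value);
}
```

Hooking validators like these into the form-submit path lets the app reject malformed identifiers before the application ever reaches a reviewer.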

Does the demo support multiple loan types with different workflows?

Yes. You can configure different PDF template forms, add custom approval pipelines, and include additional review steps as required by your banking solution.

Can multiple customers and loan officers work on applications at the same time?

Yes. Because the app runs in the browser and is role-based, multiple customers and loan officers can work on the loan document review demo and submit loan forms concurrently.

Can we integrate the system with OCR or document classification tools?

Yes. You can integrate Syncfusion OCR engines to auto-extract data from customer‑uploaded documents like pay slips, IDs, or bank statements before reviewing.

Does the viewer support dark mode?

Yes. Syncfusion supports built‑in theme variations, including Material, Tailwind, Fabric, Bootstrap, and custom themes with dark mode variants to match your brand style.

How does the system prevent tampering after a document is signed?

Once the document is signed, signatures and loan details are set to read-only form fields to prevent unauthorized edits from others.

Syncfusion’s high-performance PDF Library allows you to create PDF documents from scratch without Adobe dependencies.

Build faster, compliant loan workflows directly inside your React app

Thanks for reading! The Syncfusion React PDF Viewer brings together every capability needed to build a seamless, end-to-end digital loan processing workflow directly inside the browser. Its comprehensive built-in features eliminate the need for external tools, reduce operational friction, and ensure a secure, compliant loan lifecycle from app to sanction letter.

Ready to modernize your loan processing? Start exploring the full power of the React PDF Viewer:

  • Form filling & annotations: Interactive form fields with validation plus rich markup tools for reviewing documents.
  • E‑Signature support: Secure electronic signing without the need for wet signatures.
  • Role‑based access: Customizable toolbars and permissions tailored for each user role.
  • API & customization: Deep customization with 200+ APIs and easy backend integration for workflow state, audits, and versioning.
  • Performance & compliance: Fast, secure rendering (client or server) with GDPR, HIPAA, and SOC 2 compliant infrastructure.
  • Multi‑platform support: Works across React, Angular, Vue, Blazor, ASP.NET Core, and more.

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
