Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

ASP.NET Core Document Editor with Backend via the Text Control Private NuGet Feed

This article demonstrates how to create a Document Editor ASP.NET Core application using the Text Control Private NuGet Feed. We will build a basic web application that enables users to edit documents directly in their web browser with the Document Editor component from Text Control. The backend is powered by the Private NuGet Feed, which provides seamless licensing and eliminates the need for manual license setup.


AI meets you where you are: Slack, email & ServiceNow


The IT self-service agent AI quickstart connects AI with the communication tools your team already uses, including Slack, email, and ServiceNow. This post explores how the agent handles multi-turn workflows—such as laptop refreshes or access requests—across different channels.

This agent is part of the AI quickstarts catalog, a collection of ready-to-run, industry-specific use cases for Red Hat AI. Each AI quickstart is simple to deploy and extend, providing a hands-on way to see how AI solves problems on open source infrastructure. Learn more: AI quickstarts: An easy and practical way to get started with Red Hat AI

Users can interact with a single AI agent across several channels without losing context. For example, a user can start a request in Slack, follow up by email, and trigger actions in ServiceNow. We integrated these tools into the core architecture to ensure a consistent experience across platforms.

This is the second post in our series about developing the it-self-service-agent AI quickstart. If you missed part 1, read it here: AI quickstart: Self-service agent for IT process automation

Why integration is the real problem

Most enterprise AI systems start from the model outward. They focus on prompts, tools, and responses, then bolt on an interface at the end. That approach works fine for demos, but it breaks down quickly in real environments.

In many organizations, enterprise work is fragmented:

  • Slack is where questions start and evolve.
  • Email is where updates, approvals, and follow-ups still live.
  • ServiceNow is where work becomes official and auditable.

Forcing users into a single "AI interface" just creates another silo. The goal of this AI quickstart is the opposite: to meet users where they already work and let the AI adapt to them, not the other way around.

That decision has architectural consequences. Slack, email, and ServiceNow all behave differently. They have different identity models, delivery semantics, and interaction patterns. Treating them as interchangeable doesn't work—but treating them as completely separate systems doesn't either.

A unifying architecture

At a high level, every interaction flows through the same core path, as shown in Figure 1. The request begins at a channel—such as Slack or email—and passes through an integration dispatcher using channel adapters. From there, it moves into a request manager for normalization and session handling, then through agent services for routing and context, before finally reaching the MCP servers (such as ServiceNow).

Figure 1: High-level architectural flow of a request from external channels to MCP servers.

Each integration adapter is responsible for handling protocol-specific details—verifying Slack signatures, polling email inboxes, parsing headers—but that logic stops at the boundary. Once a request is normalized and associated with a user session, the rest of the system treats it the same way regardless of where it came from.

This separation keeps the agent logic focused on intent, policy, and workflow orchestration instead of UI or transport details. It also makes the system extensible: new integrations don't require rewriting the agent.

But how do these services actually communicate? That's where CloudEvents comes in.

CloudEvents: The communication layer

All inter-service communication uses CloudEvents, a standardized event format (Cloud Native Computing Foundation specification) implemented as HTTP messages. This enables the scalability and reliability that enterprise deployments require.

CloudEvents provides three key benefits: scalability, reliability, and decoupling.

Scalability comes from the event-driven model. Services don't block waiting for responses. The request manager can publish a request event and return immediately, while the agent service processes it asynchronously. This means the request manager can handle many more concurrent requests than synchronous processing would allow.

Reliability comes from durable message queuing. If the agent service is temporarily unavailable, events queue up and are processed when the service recovers. This is critical for production deployments where services might restart, scale, or experience temporary failures.

Decoupling means services don't need to know about each other's implementation details. The request manager publishes events with a standard format, and any service that subscribes can process them. This makes it easy to add new services—like a monitoring service or audit logger—without modifying existing code.

The system supports two deployment modes that use the same codebase. In production, we use Knative Eventing with Apache Kafka for enterprise-grade reliability. For development and testing, we use a lightweight mock eventing service that mimics the same CloudEvents protocol but routes events via simple HTTP. The same application code works in both modes—only the infrastructure changes.

Here's an example of a CloudEvent published to the broker:

{
  "specversion": "1.0",
  "type": "com.self-service-agent.request.created",
  "source": "integration-dispatcher",
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "time": "2024-01-15T10:30:00Z",
  "datacontenttype": "application/json",
  "userid": "user-uuid-abc123",
  "data": {
    "content": "I need a new laptop",
    "integration_type": "slack",
    "channel_id": "C01234ABCD",
    "thread_id": "1234567890.123456",
    "slack_user_id": "U01234ABCD",
    "slack_team_id": "T01234ABCD",
    "metadata": {}
  }
}
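For illustration, here is roughly how a service could publish such an event using the CloudEvents Python SDK; the broker URL and attribute values below are placeholders, not the quickstart's actual configuration:

# Minimal sketch: publishing a CloudEvent to a broker over HTTP.
# The broker URL and attributes are illustrative placeholders.
import requests
from cloudevents.http import CloudEvent, to_structured

attributes = {
    "type": "com.self-service-agent.request.created",
    "source": "integration-dispatcher",
}
data = {
    "content": "I need a new laptop",
    "integration_type": "slack",
    "metadata": {},
}
event = CloudEvent(attributes, data)  # id, time, specversion are auto-filled

# to_structured() produces CloudEvents headers plus a JSON body.
headers, body = to_structured(event)
resp = requests.post("http://broker-ingress.svc/default/agent", data=body, headers=headers)
resp.raise_for_status()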

Knative Eventing's abstraction layer, combined with CloudEvents, enables platform flexibility. The application code just publishes and consumes CloudEvents to a broker URL—it doesn't know or care about the underlying broker implementation. While the current deployment uses Kafka, Knative Eventing supports other broker types (like RabbitMQ or NATS) through different broker classes.

Switching brokers requires updating a Kubernetes configuration, but no application code changes. For example, to switch from Kafka to RabbitMQ, you'd change the broker class annotation, as shown in the following YAML:

# Switch to RabbitMQ
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  annotations:
    eventing.knative.dev/broker.class: RabbitMQ

The services still send CloudEvents to the same broker URL—only the infrastructure behind it changes.

Session continuity across channels

Different integrations use different identifier formats (Slack uses user IDs like U01234ABCD, email uses addresses), and email addresses don't always match across systems and may change independently. To enable session continuity, we use a canonical user identity: a UUID that maps to all integration-specific identifiers. This gives us a stable anchor that doesn't break when identifiers change.

Sessions in this system are user-centric by default, not integration-centric. That means a single active session can span Slack, email, and other channels (web, CLI, webhooks). A user can:

  • Start a request in Slack
  • Receive updates by email
  • Reply to that email later
  • Continue the same conversation without restating context

This behavior is what makes the system feel unified rather than stitched together. Without it, you'd effectively be running separate systems for each channel. There's a configuration option to scope sessions per integration type if needed, but in practice, cross-channel sessions are what users expect.

The canonical identity approach also enables extensibility: when adding a new channel (like Teams or SMS), you just map its identifiers to the canonical UUID, and sessions automatically work across all channels without additional session management logic.

Session resolution happens early in request handling. If a request includes an explicit session identifier (for example, via email headers), it's used. Otherwise, the system looks for an active session for that user and reuses it. New sessions are created only when necessary.
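In pseudocode, that resolution order looks like the following; the store methods are hypothetical stand-ins for the quickstart's actual session persistence:

# Hypothetical sketch of the session resolution order described above.
def resolve_session(store, canonical_user_id, explicit_session_id=None):
    # 1. An explicit session id (e.g., from email headers) wins.
    if explicit_session_id:
        session = store.get_session(explicit_session_id)
        if session:
            return session
    # 2. Otherwise, reuse the user's active session for cross-channel continuity.
    session = store.find_active_session(canonical_user_id)
    if session:
        return session
    # 3. Create a new session only when nothing can be reused.
    return store.create_session(canonical_user_id)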

Slack: Conversational by design

Slack is usually the front door. It's real-time, interactive, and event-driven. The Slack integration handles:

  • Signature verification and replay protection
  • User resolution from Slack IDs to canonical users
  • Translation of messages, commands, and interactions into normalized requests

Figure 2 shows an example of an interaction in Slack.

Figure 2: Slack interaction with IT self-service agent.

Slack-specific features—threads, buttons, modals—are handled entirely within the adapter. The agent never needs to know how a button click is represented in Slack; it just receives structured intent.

This design keeps Slack interactions responsive and rich while preventing Slack assumptions from leaking into the rest of the system. It also allows the integration to safely handle retries and duplicate events, which are a reality of Slack's delivery model.
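For illustration, the signature and replay checks follow Slack's documented v0 signing scheme, which can be implemented with just the standard library; this is the generic recipe, not the quickstart's exact code:

import hashlib
import hmac
import time

def verify_slack_request(signing_secret: str, timestamp: str, body: bytes,
                         signature: str) -> bool:
    # Reject stale requests (replay protection); Slack suggests a ~5 minute window.
    if abs(time.time() - int(timestamp)) > 60 * 5:
        return False
    # Recompute the v0 signature over "v0:{timestamp}:{body}".
    basestring = b"v0:" + timestamp.encode() + b":" + body
    expected = "v0=" + hmac.new(signing_secret.encode(), basestring,
                                hashlib.sha256).hexdigest()
    # Constant-time comparison against the X-Slack-Signature header value.
    return hmac.compare_digest(expected, signature)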

Email: Asynchronous but persistent

Email plays a different role. It's not conversational in the same way Slack is, but it's still critical—especially for long-running workflows and notifications.

Rather than forcing email into a chat metaphor, the system treats it as a continuation and notification channel. Email is used to:

  • Deliver agent responses (which may include summaries, status updates, or requests for action)
  • Provide an asynchronous alternative to real-time channels—critical for IT workflows where ticket updates and approvals often happen via email
  • Allow conversations to resume after delays, matching how IT ticketing systems typically operate

Figure 3 shows an example of an interaction via email.

Figure 3: Email interaction with IT self-service agent.

Outgoing emails include session identifiers in headers and message bodies. Incoming email is polled, deduplicated, and correlated back to existing sessions using those identifiers and standard threading headers.

From the user's perspective, replying to an email "just works." From the system's perspective, it's another request in an existing session.
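A simplified sketch of that correlation step follows; the X-Session-ID header name and the store lookup are hypothetical stand-ins for the identifiers the system actually embeds:

from email.message import EmailMessage

def extract_session_id(msg: EmailMessage, store):
    # 1. An explicit session header added to our outgoing mail (hypothetical name).
    session_id = msg.get("X-Session-ID")
    if session_id:
        return session_id
    # 2. Standard threading headers: map the reply back to a message we sent.
    refs = [msg.get("In-Reply-To")] + (msg.get("References") or "").split()
    for ref in refs:
        if ref:
            session_id = store.lookup_by_message_id(ref)
            if session_id:
                return session_id
    return None  # no match; a new session will be created downstream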

The request manager: Normalization and control

At the center of all this is the request manager. Its job is not to reason about intent—that's the agent's responsibility—but to ensure that requests are:

  • Normalized into a consistent internal format
  • Associated with the correct user and session
  • Logged and deduplicated
  • Routed to the appropriate agent

This normalization follows object-oriented encapsulation principles: The NormalizedRequest object encapsulates integration-specific complexity behind a unified interface. The agent service processes all requests identically—it never needs to know whether a request came from Slack, email, or a CLI tool. Integration-specific details (like Slack's channel_id or email's threading headers) are preserved in integration_context for response routing, but hidden from the core processing logic. This abstraction boundary is what makes the system extensible: adding a new integration doesn't require modifying any downstream services.

It's also where we enforce idempotency and prevent duplicate processing—an unglamorous but essential part of building something that survives real-world usage.

Each integration type has its own request format—Slack requests include channel_id and thread_ts, email requests include email_from and email_in_reply_to headers, CLI requests have command context. The RequestNormalizer transforms all of these into a single NormalizedRequest format that the rest of the system understands.

The following simplified pseudocode illustrates how Slack requests are normalized (the actual implementation includes additional error handling and context extraction):

def _normalize_slack_request(
    self, request: SlackRequest, base_data: Dict[str, Any]
) -> NormalizedRequest:
    """Normalize Slack-specific request."""
    integration_context = {
        "channel_id": request.channel_id,
        "thread_id": request.thread_id,
        "slack_user_id": request.slack_user_id,
        "slack_team_id": request.slack_team_id,
        "platform": "slack",
    }
    # Extract user context from Slack metadata
    user_context = {
        "platform_user_id": request.slack_user_id,
        "team_id": request.slack_team_id,
        "channel_type": "dm" if request.channel_id.startswith("D") else "channel",
    }
    return NormalizedRequest(
        **base_data,
        integration_context=integration_context,
        user_context=user_context,
        requires_routing=True,
    )

This abstraction makes it easy to add new integrations—you just need to create a new request schema and add a normalization method. The rest of the system automatically works with the new integration because it only deals with NormalizedRequest objects.

You can find the full implementation with support for Slack, Email, Web, CLI, and Tool requests in it-self-service-agent/request-manager/src/request_manager/normalizer.py.

The request manager is stateless in terms of conversation context—it delegates that to the agent service. But it's stateful in terms of request tracking and session management, which enables the cross-channel continuity we need.

The agent service uses LangGraph with PostgreSQL checkpointing to persist conversation state. Every turn of the conversation—messages, routing decisions, workflow state—is saved to the database, allowing conversations to resume exactly where they left off. The request manager coordinates this by maintaining session records that map user identities to LangGraph thread IDs. When a request arrives from any channel, the request manager retrieves the associated thread ID and passes it to the agent service, which resumes the conversation from its last checkpointed state.

This checkpointing is what makes multi-turn agentic workflows possible across channels and time gaps. Without it, you'd be rebuilding conversation state from scratch on every request, which breaks the continuity that makes agentic systems feel intelligent rather than stateless. A user can start in Slack, continue via email days later, and return to Slack without losing context—because the conversation state persists independently of the channel.
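A minimal sketch of that wiring with LangGraph's Postgres checkpointer follows; the connection string, node logic, and thread id are illustrative, not the quickstart's actual values:

from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.postgres import PostgresSaver

def agent_node(state: MessagesState):
    # Placeholder for model calls, routing decisions, workflow steps, etc.
    return {"messages": [("ai", "Sure -- which laptop model do you need?")]}

builder = StateGraph(MessagesState)
builder.add_node("agent", agent_node)
builder.add_edge(START, "agent")

with PostgresSaver.from_conn_string("postgresql://user:pass@db/agent") as saver:
    saver.setup()  # creates the checkpoint tables on first run
    graph = builder.compile(checkpointer=saver)
    # The request manager maps the user's canonical UUID to this thread_id,
    # so a later reply from Slack or email resumes the same checkpointed state.
    config = {"configurable": {"thread_id": "thread-for-user-uuid-abc123"}}
    graph.invoke({"messages": [("user", "I need a new laptop")]}, config)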

Integration dispatcher: Delivering responses reliably

Incoming requests are only half the story. Delivering responses is just as important.

The integration dispatcher is responsible for sending agent responses back out through the appropriate channels. It supports:

  • Multichannel delivery (Slack, email, webhooks)
  • Smart defaults (users don't need to configure everything up front)
  • User overrides for delivery preferences
  • Graceful degradation if one channel fails

If Slack delivery fails, email can still succeed. If a user has no explicit configuration, sensible defaults are applied dynamically. This "lazy configuration" approach reduces operational overhead while still allowing full customization when needed.
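In pseudocode, the delivery logic amounts to trying channels in preference order; the channel objects below are hypothetical:

# Hypothetical sketch: try preferred channels in order, degrade on failure.
def deliver(response, channels, preferences=None):
    order = preferences or ["slack", "email"]  # lazy defaults when unconfigured
    for name in order:
        try:
            channels[name].send(response)
            return name  # delivered successfully
        except Exception:
            continue  # fall back to the next channel
    raise RuntimeError("all delivery channels failed")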

ServiceNow and MCP: Turning conversation into action

Slack and email are about interaction. ServiceNow is where work happens in many organizations.

The agent's interactions with ServiceNow are handled by a dedicated Model Context Protocol (MCP) server. This creates a clean, enforceable boundary:

  • Authentication and credentials are isolated.
  • Allowed operations are explicit.
  • Side effects are controlled and auditable.

Figure 4 shows an example of an interaction with ServiceNow.

Figure 4: ServiceNow interaction with IT self-service agent.

The agent reasons about what needs to happen; the MCP server controls how it happens. This separation improves safety and makes integrations easier to evolve.

The same pattern applies to other backend systems. Once identity and session context are established, the agent can interact with operational systems in a controlled, extensible way.

A deeper dive into MCP design and extension patterns will be covered in a later post in this series.

What this enables

Putting all of this together enables a few important outcomes:

  • Lower friction: Users stay in tools they already use
  • Continuity: Conversations survive channel switches and time gaps
  • Real work: Requests result in actual tickets and system changes
  • Extensibility: New integrations fit naturally into the architecture
  • Maintainability: Channel logic stays separate from agent logic

This comes from treating integration, identity, and sessions as core architectural concerns.

Get started

If you're building AI-driven IT self-service solutions, consider how your system will integrate with existing tools. The AI quickstart provides a framework you can adapt for your own use cases, whether that's laptop refresh requests, access management, compliance workflows, or other IT processes.

Ready to get started? The IT self-service agent AI quickstart includes complete deployment instructions, integration guides, and evaluation frameworks. You can deploy it in testing mode (with mock eventing) to explore the concepts, then scale up to production mode (with Knative Eventing and Kafka) when you're ready.

Closing thoughts

"Meeting users where they are" isn't just a design slogan; it's an architectural commitment. The IT self-service agent AI quickstart shows that this is achievable using open source tools and an intentional design. This approach results in an AI that fits into existing workflows rather than just providing isolated responses.

Where to learn more

If this post sparked your interest in the IT self-service agent AI quickstart, here are additional resources to explore.

The post AI meets you where you are: Slack, email & ServiceNow appeared first on Red Hat Developer.


Text Control Private NuGet Feed

The Text Control private NuGet feed delivers your licensed packages with zero friction. Your license is automatically embedded in every download. No manual license file management. No hunting for serial numbers. Just dotnet add package and you are ready to code.


Modern Mapping with EF Core


Introduction

In EF Core (as in most O/RMs) we can map and store multiple things:

  • Objects that hold a single value, like int, string, bool, float, Guid, DateTime, TimeSpan, Uri, etc. These are (somewhat wrongly) called primitive types and are always contained inside some other type, like the ones that follow
  • Objects that hold multiple single values and which have an identity, for example, Customer, Product, Order. These are called entity types and they must have an identity that makes them unique amongst all the other objects of the same type
  • Objects that are structured to hold multiple values, but have no identity, such as Address, Coordinate. These are called value types and are merely a collection of (possibly related) properties
  • We can also have collections of the previous things

There's more to it, of course; for example, it is even possible to represent inheritance of entities, which I have talked about before.

In this post I am going to cover some modern/advanced mapping techniques for EF Core that may not be so well known. It assumes EF Core 10 and a database that supports JSON columns, such as SQL Server 2025. Some things will not work with older versions, like JSON persistence - a compatibility level of 170 or higher is required. Other databases, such as PostgreSQL, should work too.

I will be using singular table names for all tables, the same as for their related entity (e.g., Customer entity <=> Customer table), which means that each entity is persisted to a table of the same name, and will show only the minimum required code to illustrate a point.

Classic Mappings

Up until recently, EF Core only allowed us to map standard (classic) relationships between entities. Before going further, it is worth clarifying the difference between entities and value types.

Entities and Value Types

Both entities and value types are custom classes (or records) with properties or fields, but the difference between the two is that entities have an identity. This means that one property, or a combination of properties, of an entity is unique in the data store where it lives; in relational databases this is called a primary key. It is possible to query the data store for this entity's id, and it will always return the same entity (unless the data store is modified, of course). We cannot query by a value type, as it is a detail of a containing entity, does not exist on its own, and does not have any id. Not all O/RMs support the concept of value types, and until recently, EF Core didn't.

Standard relations in relational databases are:

One to Many

Each source entity can be related to many target entities. For example, one customer can have many orders. This is represented as:

public class Customer
{
    public int Id { get; set; }
    public List<Order> Orders { get; set; } = [];
}

This means that the table holding the Order entity will have a foreign key to the table holding Customer.

Many to One

Many source entities can reference the same target entity. For example, many orders belong to the same customer. We can represent this as:

public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
}

As you know, this is exactly the opposite of one to many, and it means that the Order table will have a foreign key to the Customer table.

One to One

Each source entity can be related to a single target entity, which doesn't relate to any other source. For example, one customer has an address. An example code:

public class Customer
{
    public int Id { get; set; }
    public Address Address { get; set; }
}

public class Address
{
    public int Id { get; set; }
    public string Street { get; set; }   
    public string City { get; set; }
    public string Country { get; set; }
    public string POBox { get; set; }
}

This relation is very similar to many to one, and some people avoid it. It means that either the Customer table has a foreign key to the Address table, or the Address table has a foreign key to Customer, depending on which one is the principal (the two can even share the same primary key).


Many to Many

Each source entity can be related to many target entities, and, in turn, each target entity can be related to many source entities. It's essentially two one to many/many to one relations put together. For example: one order can contain multiple products; a product can be part of many orders. In code:

public class Order
{
    public int Id { get; set; }
    public List<Product> Products { get; set; } = [];
}

public class Product
{
    public int Id { get; set; }
    public List<Order> Orders { get; set; } = [];
}

For many to many we need a third table to hold foreign keys to both the Order and Product tables. However, there is no need to map it to a class, unless it requires additional properties, in which case, the relations become two many to one.
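If the join table does need extra data, we can map it explicitly. Here is a minimal sketch, assuming a hypothetical OrderProduct join entity with a Quantity property (the names are illustrative, not part of the original model):

public class OrderProduct
{
    public int OrderId { get; set; }
    public Order Order { get; set; }
    public int ProductId { get; set; }
    public Product Product { get; set; }
    public int Quantity { get; set; } //the extra payload
}

modelBuilder.Entity<OrderProduct>(x =>
{
    x.HasKey(op => new { op.OrderId, op.ProductId }); //composite key
    x.HasOne(op => op.Order).WithMany().HasForeignKey(op => op.OrderId);
    x.HasOne(op => op.Product).WithMany().HasForeignKey(op => op.ProductId);
});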

Modern Mappings

With modern versions of EF Core (and relational databases), we can have more complex situations. One particular case is when we want to use value types, meaning classes without identity, which can be reused across the domain model. There are two possible approaches to this in EF Core: complex properties and owned entities. The same .NET model can be mapped in many different ways to the database.

Complex Properties

Imagine for a second that we wish to use classes that are not entities, meaning, we don't care about their identity, just their values. From EF Core 8 onwards, we have complex properties and complex collections for this.

Let's suppose that we want to store an Address not as an entity, but only its values. If we only want to store one Address per Customer, we could have this:

public class Customer
{
    public int Id { get; set; }
    public Address Address { get; set; }
}

public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
    public string POBox { get; set; }
}

Notice that the Id property of Address is gone; we don't need it.

And, if we need multiple Addresses, possibly of different types:

public class Customer
{
    public int Id { get; set; }
    public List<Address> Addresses { get; set; } = [];
}
    
public class Address
{
    public AddressType AddressType { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
    public string POBox { get; set; }
}

public enum AddressType
{
    Personal,
    Work,
    Other
}

Here we introduced an AddressType enumeration; let's assume that we can add multiple addresses to a single Customer, possibly all of different types, but this is not required.

Enter complex types. Complex types are the EF Core implementation of value types; they allow us to map value objects in our entities explicitly, as well as collections of them. The configuration for a single Address would be defined using ComplexProperty:

modelBuilder.Entity<Customer>()
    .ComplexProperty(x => x.Address);

What happens is, each of the properties of Address will be stored as a separate column in the Customer table.

And, for multiple Addresses, we use ComplexCollection instead:

modelBuilder.Entity<Customer>()
    .ComplexCollection(x => x.Addresses, options =>
    {
        options.ToJson();
    });

You may have noticed ToJson: indeed, complex collections need to be stored in a single column, which must contain JSON - remember, here we do not have a foreign key to another table. The actual column type is decided by the data store that we are using (e.g., SQL Server uses JSON or NVARCHAR(MAX)).
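Since each property of a complex type maps to a regular column (or a JSON value), complex properties remain usable in LINQ queries; for example (the city value is just an illustration):

var inLisbon = ctx.Set<Customer>()
    .Where(c => c.Address.City == "Lisbon")
    .ToList();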

Owned Entities

Unlike complex types, owned entities can be stored either in a table separate from the owning entity's table or as a JSON column in the owning table. When stored in a separate table, they are entity types with id semantics (even when that id is a shadow property); when stored as a JSON column, they have no id value at all.

So, let's pick up the previous example of a Customer with a single Address; we would configure it as an owned entity with OwnsOne, like this:

modelBuilder.Entity<Customer>()
    .OwnsOne(x => x.Address);

Again, each property of Address will be stored as a separate column in the Customer table.

For having multiple Addresses, on the same table as Customer, we use OwnsMany instead:

modelBuilder.Entity<Customer>()
    .OwnsMany(x => x.Addresses, options =>
    {
        options.ToJson();
    });

Notice again the call to ToJson: we always need this if we are going to store a collection of objects inside the containing entity. In this case a single column will be used.

If we instead want to store Addresses in a separate table, we call OwnsMany together with HasKey:

modelBuilder.Entity<Customer>()
    .OwnsMany(x => x.Addresses, options =>
    {
        options.HasKey("Id");  //required if not using JSON
    });

This way, a new table is created transparently (Address, by default) which is linked to the Customer table: the Address table gets a foreign key back to Customer.

The difference between complex properties and owned entities is that complex properties are always stored in the same table as the containing entity, whereas owned entities may or may not require a separate table. In both cases, you cannot query by the complex/owned type, meaning this will fail:

ctx.Set<Address>().ToList(); //error: Address is not an entity

Collections of Primitive Types

In the old days, it wasn't possible to store collections of primitive types. You could, of course, store them all in a text column and then use some value converter to turn the database value into the .NET property. Now, with primitive collections, EF Core takes care of this for us. So, if we have, for example:

public class Product
{
    public int Id { get; set; }
    public List<string> Tags { get; set; } = [];
}

This just works: the Tags property is persisted automatically inside the Product table, and it works with any primitive type. We do not need to know the details, but it is probably going to be stored as JSON, if the database supports it. And it can be queried too:

ctx.Products
    .Where(x => x.Tags.Contains("blue"))
    .ToList();

Entities from Views or SQL

It is also possible to map .NET classes from database views or raw SQL. The resulting entities, of course, must be read-only, and any attempt to persist them will result in an exception being thrown.

For views, we use ToView:

modelBuilder.Entity<OrderCustomer>()
    .ToView("OrderCustomer")
    .HasNoKey();
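A matching OrderCustomer class could look like the following, a sketch assuming properties named after the view's columns:

public class OrderCustomer
{
    public DateTime OrderTimestamp { get; set; }
    public string CustomerName { get; set; }
    public int ProductCount { get; set; }
}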

HasNoKey is also required: it tells EF Core that there is no logical key on the returned entities and so the entities must be read-only; they are keyless entities. A simple view that joins Orders, Customers, and Products could be defined as (for SQL Server):

CREATE VIEW dbo.OrderCustomer AS
SELECT 
    o.Timestamp AS OrderTimestamp, 
    c.Name AS CustomerName, 
    COUNT(op.OrderId) AS ProductCount
FROM dbo.[Order] o
INNER JOIN dbo.Customer c ON o.CustomerId = c.Id
LEFT JOIN dbo.OrderProduct op ON op.OrderId = o.Id
GROUP BY o.Timestamp, c.Name, o.Id

To use raw SQL instead, we call ToSqlQuery with the same query the view defined:

modelBuilder.Entity<OrderCustomer>()
    .ToSqlQuery(@"SELECT o.Timestamp AS OrderTimestamp, c.Name AS CustomerName, COUNT(op.OrderId) AS ProductCount
                 FROM dbo.[Order] o
                 INNER JOIN dbo.Customer c ON o.CustomerId = c.Id
                 LEFT JOIN dbo.OrderProduct op ON op.OrderId = o.Id
                 GROUP BY o.Timestamp, c.Name, o.Id")
    .HasNoKey();

Both are keyless entities, same restrictions apply: we can query but not make modifications.

The final option is using some SQL function that returns the columns and records we need; we configure it with ToFunction:

modelBuilder.Entity<OrderCustomer>()
    .ToFunction("GetOrderCustomers");

Where the function could be (SQL Server):

CREATE FUNCTION dbo.GetOrderCustomers()
RETURNS TABLE
AS
RETURN
(
    SELECT 
        o.Timestamp AS OrderTimestamp, 
        c.Name AS CustomerName, 
        COUNT(op.OrderId) AS ProductCount
    FROM dbo.[Order] o
    INNER JOIN dbo.Customer c ON o.CustomerId = c.Id
    LEFT JOIN dbo.OrderProduct op ON op.OrderId = o.Id
    GROUP BY o.Timestamp, c.Name, o.Id
)

Table Splitting

Now I'm going to cover two opposite techniques related to entity persistence. First, the possibility of mapping a single table into many entities, which is called table splitting. Why is this useful? Imagine that you want to have a smaller entity with just the essential information about an order, and another entity with the rest of the details. This way, you only load the detail entity if you absolutely need it.

Imagine we have an Order class:

public class Order
{
    public int Id { get; set; }
    public State State { get; set; }
    public DateTime CreationDate { get; set; }
    public OrderDetail Detail { get; set; }
}

And also an OrderDetail class:

public class OrderDetail
{
    public int Id { get; set; }
    public DateTime? DispatchDate { get; set; }
    public Order Order { get; set; }
    public Customer Customer { get; set; }
    public List<Product> Products { get; set; } = [];
}

Let's consider that Order contains the more important properties and OrderDetail all the rest; you only load the details when and if you need to. Before, this could only be mapped using a one to one relation (with different tables), but now we have table splitting, which allows mapping both entities to the same table. We configure it like this:

modelBuilder.Entity<OrderDetail>(x =>
{
    x.ToTable("Order");
});

modelBuilder.Entity<Order>(x =>
{
    x.ToTable("Order");
    x.HasOne(o => o.Detail)
        .WithOne(d => d.Order)
        .HasForeignKey<OrderDetail>(d => d.Id);
});    
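With this mapping in place, a query for Order touches only its own columns, and the detail comes in through the navigation; a sketch, assuming the usual Include pattern:

//loads only the essential Order columns
var order = ctx.Set<Order>().Single(o => o.Id == 1);

//brings the detail columns in from the same row
var full = ctx.Set<Order>()
    .Include(o => o.Detail)
    .Single(o => o.Id == 1);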

Entity Splitting

Entity splitting is the exact opposite of the previous technique: an entity is spread into multiple tables. Each table must be joined by the same primary key. Let's imagine that we want to separate the Order entity:

public class Order
{
    public int Id { get; set; }
    public State State { get; set; }
    public DateTime CreationDate { get; set; }
    public DateTime? DispatchDate { get; set; }
    public Customer Customer { get; set; }
    public List<Product> Products { get; set; } = [];
}

into two tables, for better organisation: one with the more important data, and the other with the rest. Here is how we set it up:

modelBuilder.Entity<Order>(x =>
{
    //relationships are configured on the entity itself
    x.HasOne(o => o.Customer).WithMany();
    x.HasMany(o => o.Products).WithMany();

    x.ToTable("Order")
        .SplitToTable("OrderDetail", y =>
        {
            //the split table lists the columns it should hold
            y.Property(o => o.DispatchDate);
            y.Property("CustomerId"); //FK column mapped to the detail table
        });
});

So, entity Order will be mapped to Order and OrderDetail tables. The Order table will get:

  • Id
  • State
  • CreationDate

And OrderDetail will have:

  • DispatchDate
  • Customer (foreign key)
  • Products (foreign key from Product will point to OrderDetail)

Shadow and Indexer Properties

Shadow properties are properties that exist in the database but do not have a physical property in the data model (.NET class). One common usage is for things that we do not want users to change, such as soft-delete flags or last-updated columns; EF Core uses this behind the scenes in many to many relations, and for other relations for which there is no collection or navigation property.

To configure, we use Property:

builder.Property<DateTime?>("LastUpdated")
    .HasDefaultValueSql("GETUTCDATE()")
    .ValueGeneratedOnAddOrUpdate();

It is possible to access the current (and the original) value too, from the entity's Entry:

var lastUpdated = ctx.Entry(entity)
    .Property<DateTime?>("LastUpdated")
    .CurrentValue;

And they can even be used in queries using EF.Property:

var query = ctx.Products
    .Where(x => EF.Property<DateTime>(x, "LastUpdated").Year == 2025);

Indexer properties are similar to shadow properties, in the sense that they are not declared as regular .NET properties with backing fields. They can be configured using IndexerProperty:

builder.IndexerProperty<string>("Colour");
builder.IndexerProperty<string>("Make");

We access them using classic .NET indexers with string keys, but how we persist them is up to us:

public class Product
{
    private readonly Dictionary<string, object> _data = new();

    public object this[string key]
    {
        get => _data[key];
        set => _data[key] = value;
    }
}

As you can see, indexer properties rely on an indexer in our entity and then we can control how we are going to persist it. They can be used together with other regular properties.

To store a value:

product["Colour"] = "red";
product["Make"] = "cotton";

And to query:

ctx.Products.Single(x => (string)x["Colour"] == "red");

Indexer properties are easier to access than shadow properties because of the indexer, which does not require the context.

Property Bag Entity Types

Now this is something totally new: the possibility to represent entities as key-value dictionaries (Dictionary<string, object> in .NET) instead of POCOs! It is called property bag entity types or shared-type entity types, and it means that you can have this definition:

public class Context : DbContext
{
    public DbSet<Dictionary<string, object>> KeyValuePairs => Set<Dictionary<string, object>>("KeyValuePairs");
    
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.SharedTypeEntity<Dictionary<string, object>>("KeyValuePairs", options =>
        {
            options.Property<int>("Id");
            options.Property<string>("A");
            options.Property<int>("B");
            options.Property<DateTime>("C");
            options.HasKey("Id");
        });
    }
} 

The parameter to SharedTypeEntity is the entity name, which is also the mapped table name (the same as the one passed on the call to Set), and on it we must define all our properties (name, type, other constraints) using Property.

To store, it's the usual process; don't set a value for the Id property, just as you wouldn't for a POCO with a generated key:

var keyValue = new Dictionary<string, object>
{
    ["A"] = "this is a phrase",
    ["B"] = 100,
    ["C"] = DateTime.UtcNow
};

ctx.KeyValuePairs.Add(keyValue);
ctx.SaveChanges();

And to query too:

ctx.KeyValuePairs.Single(x => x["Id"].Equals(1));
//or
ctx.KeyValuePairs.Find(1);

Each Dictionary<string, object> entry corresponds to a single record in the database. These are entities fully made of indexer properties.

Conclusion

As we can see, modern EF Core supports quite a lot of new functionality, including functionality that previously only existed in other O/RMs such as NHibernate. There are a few options missing, though, such as sets, maps, idbags, and indexed collections (with extra-lazy loading), but what is available is already pretty impressive. Let me know what you think of this and stay tuned for more!


How LeftJoin and RightJoin Work in EF Core .NET 10


Learn how LeftJoin and RightJoin work in EF Core .NET 10, replacing complex GroupJoin patterns and simplifying left and right joins in LINQ.

The page How LeftJoin and RightJoin Work in EF Core .NET 10 appeared on Round The Code.


SQL Prompt Product Updates – February 2026


SQL Prompt’s latest releases focus on stability and quality improvements, particularly around our AI features and SSMS 22 compatibility.

AI reliability improvements

We’ve made several improvements to how SQL Prompt AI handles real-world conditions. If SQL Prompt is unable to retrieve your database schema, AI requests will now continue without schema-awareness rather than failing silently — with a warning displayed in the Prompt AI window so you know what’s happening. We’ve also fixed an issue where intermittent schema retrieval failures required you to close and re-open the Prompt AI window to recover, and resolved a bug where a failed AI request (e.g. Explain SQL) could leave the spinner visible without surfacing the error.

In addition, AI suggestion lists are now regenerated only when necessary (for example, when your database connection changes), reducing unnecessary processing and network requests.

SSMS 22 stability

We’ve fixed an issue where SSMS 22 could freeze during the initial import of user settings when SQL Prompt was already installed. This should make the transition to SSMS 22 much smoother. We’re aware of a few other SSMS 22-related bugs that the team is working hard to resolve.

Help shape what’s next for AI Code Completion

If you haven’t tried AI Code Completion yet, now’s a great time. The preview feature generates intelligent, multi-line code suggestions based on your query context — and can even write SQL directly from plain English comments in your editor.

We’re actively developing this feature and your feedback is shaping what comes next. Try it out and let us know what you think via our AI feedback form or the built-in link in the Prompt AI window.

Download the latest version

If you have an active subscription or a supported license for SQL Prompt, SQL Toolbelt Essentials, or SQL Toolbelt, you can download the latest version here. Please note: SQL Prompt’s AI-powered features are available exclusively with an active subscription and are not included with perpetual licenses. Don’t have an active subscription? You can buy online to experience the latest updates.

The post SQL Prompt Product Updates – February 2026 appeared first on Redgate.
