Write Cleaner Code with C# 14’s Null-Conditional Assignment Operator

This post introduces the null-conditional assignment operator, a new feature of C# 14 that allows you to write clean and terse code.

When looking at the C# 14 updates coming out, I discovered a feature and thought, “How did this not already exist?” If you’ve been using C#'s null-conditional operators (?. and ?[]) for years like I have, you’ll love the support for null-conditional assignments.

Our Long Null Journey

It’s been a long journey to improve how C# developers work with null, the billion-dollar mistake.

C# 2 kicked things off with nullable value types (like int? and bool?) because sometimes you need to know if someone didn’t enter a number versus entering zero.

With C# 6, we got null-conditional operators (?. and ?[]), letting us chain through potentially null objects without writing novels. C# 7 gave us pattern matching with is null checks that read like English.

C# 8 came out with nullable reference types, the null-coalescing assignment operator (??=), and the null-forgiving operator (!) for when you think you know better than the compiler. And, of course, C# 9 rounded it out with is not null because is null was feeling lonely.
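
To keep the timeline straight, here’s that whole toolbox in one place. This is just a rough sketch that reuses the customer/Address shape from the examples below, plus a hypothetical Name property:

int? score = null;                          // nullable value type (C# 2)
string? city = customer?.Address?.City;     // null-conditional access (C# 6)

if (customer is null) { /* handle it */ }   // pattern matching (C# 7)
if (customer is not null) { /* use it */ }  // negated pattern (C# 9)

city ??= "Unknown";                         // null-coalescing assignment (C# 8)
string name = customer!.Name;               // null-forgiving: "trust me" (C# 8)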

For me, the null-conditional operators from C# 6 were a game-changer. Instead of doom-checking each level of potential nulls, I can just chain it.

// The old way
string? city = null;
if (customer != null && customer.Address != null)
{
    city = customer.Address.City;
}
    
// The new way
string? city = customer?.Address?.City;

This was great for reading values. However, we could never use the same trick for writing values. Enter C# 14.

The Problem It Solves

How many times have you written code like this?

if (customer != null)
  customer.Order = GetOrder(customer.Id);

If you’re anything like me, this pattern is burned into your memory. I type it without thinking.

But we already have a perfectly good “do this thing if not null” operator. We just couldn’t use it for assignments.

Weird, yes? We could read through each null conditionally but not write through it. Every little assignment needed its own little null check guard.

The C# 14 Solution

C# 14 lets you write this instead:

customer?.Order = GetOrder(customer.Id);

That’s it! Clean, readable and the intent is obvious: “If the customer isn’t null, fetch its order and assign it.”

The semantics are exactly what you’d hope for: the right-hand side (GetOrder(customer.Id)) only runs if the receiver (customer) isn’t null. If customer is null, nothing happens—no assignment, no method call, and no exceptions.

Like any language feature, this is a tool and not a hammer for every nail. There are times when explicit null checks are actually clearer.

For example, don’t hide business logic. If null requires specific handling, explicit checks are much clearer.

// Less clear - what should happen if account is null?
account?.Balance += deposit;
    
// Better when null is exceptional
if (account is null)
    throw new InvalidOperationException("Cannot deposit to null account");
        
account.Balance += deposit;

Watch out for side effects! Remember, the entire right-hand side is skipped if the left is null.

// We don't call GetNextId() if the record is null
record?.Id = GetNextId();

Compound Assignments

The real power in this improvement shows up when you use null-conditional assignments with compound assignment operators.

// Before C# 14
if (account != null)
{
    account.Balance += deposit;
}
    
// C# 14
account?.Balance += deposit;

This works with all compound assignment operators: +=, -=, *=, /=, %=, &=, |=, ^=, <<=, >>= and ??=.
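
The ??= combination is worth calling out, because pairing it with ?. gives you “set a default only if the object exists and the property is still null” in a single line. A quick sketch, assuming a hypothetical Nickname property on the customer from earlier:

// Assign a default only if customer isn't null and Nickname is still null
customer?.Nickname ??= "Guest";

// The equivalent long-hand version
if (customer != null && customer.Nickname == null)
{
    customer.Nickname = "Guest";
}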

Check out this shopping cart example:

public class ShoppingCart
{
    public decimal Subtotal { get; set; }
    public decimal Tax { get; set; }
}
    
public void ApplyDiscount(ShoppingCart? cart, decimal discountAmount)
{
    // Only apply discount if cart exists
    cart?.Subtotal -= discountAmount;
        
    // Recalculate tax based on new subtotal
    cart?.Tax = cart.Subtotal * 0.08m;
}

Notice the improvements: no nested ifs, no ceremony, just the expected “if it’s there, update it” logic. Finally.

When This Feature Shines

Let’s take a look at some brief real-world scenarios where null-conditional assignments are genuinely useful.

Optional Dependencies

In modern apps, you have services everywhere—like logging, telemetry, caching and so on—and it seems half of them are optional depending on what features are enabled.

public class TelemetryService
{
    private ILogger? _logger;
    private IMetricsCollector? _metrics;
        
    public void RecordEvent(string eventName, Dictionary<string, object> properties)
    {
        // Log only if logger is configured
        _logger?.LogInformation("Event recorded: {EventName}", eventName);
            
        // Track metrics only if collector is available
        _metrics?.EventCount += 1;
    }
}

With this improvement, our code doesn’t get buried under a mountain of if (_logger != null) checks. The dependency is handled right where you use it, which means the happy path (the service is there) stays front and center.

This is huge with classes that have multiple optional dependencies. Traditional null checks create a ton of noise that obscures what your code actually does.

Event Handlers with Optional Subscribers

When you’re working with events, subscribers are optional by nature. It’s like me watching a football game: sometimes I’m listening and sometimes I’m not. And that’s fine.

public class ProgressTracker
{
    public IProgress<int>? Progress { get; set; }
    private int _currentStep;
    private int _totalSteps;
    
    public void AdvanceProgress()
    {
        _currentStep++;
        var percentComplete = (_currentStep * 100) / _totalSteps;
        
        // Report only if someone is listening
        Progress?.Report(percentComplete);
    }
}

With this improvement, publishers don’t need to defensively write null checks before every notification. This is great for library code or reusable components where you have zero control over whether consumers attach handlers.

Plus, it’s self-documenting. Progress?.Report() clearly says “if anyone cares, report progress.”

Conditional Updates with Fluent APIs

Builder patterns and fluent APIs are obsessed with optional configuration. Sometimes you’ve set up all the pieces, and sometimes just a few.

public class ApplicationBuilder
{
    public DatabaseConfiguration? Database { get; set; }
    public ApiConfiguration? Api { get; set; }
    
    public void ApplyProduction()
    {
        // Safely configure only what exists
        Database?.ConnectionString = Environment.GetEnvironmentVariable("DB_PROD");
        Database?.CommandTimeout += TimeSpan.FromSeconds(30);
        
        Api?.RateLimitPerMinute = 1000;
        Api?.EnableCaching = true;
    }
}

With this example, configuration methods can be flexible without requiring everything to exist first. This is perfect for plugin architectures or systems where features come and go. You can write configuration code that gracefully handles a mix of initialized components without the spaghetti code.

Collection Operations

We consistently work with collections that might not be initialized yet. Think of when you’re doing lazy initialization for performance reasons.

public class CacheManager
{
    private List<string>? _recentItems;
    private Dictionary<string, object>? _settings;
    
    public void RecordAccess(string item)
    {
        // Add to recent items if cache is initialized
        _recentItems?.Add(item);
        
        // Update access count
        if (_settings?.ContainsKey("accessCount") == true)
        {
            _settings["accessCount"] = ((int)_settings["accessCount"]) + 1;
        }
    }
    
    public void UpdateTheme(string theme)
    {
        // Safe dictionary assignment
        _settings?["theme"] = theme;
    }
}

Nice, eh? You can now work with collections that might not exist yet without checking if they exist first. This is great for performance-sensitive code where you want to defer allocating collections until you actually need them … but you also want clean code. We can now keep the lazy initialization pattern without our code looking like a mess.

One Gotcha: No ++ or --

Don’t kill the messenger: You can’t use increment (++) or decrement (--) operators with null-conditional access.

// Ope - this won't compile
counter?.Value++;

If you want this pattern, do the traditional null check or use the compound assignment form.

counter?.Value += 1; 

Captain Obvious says: ++ and -- are a read and write operation rolled into one. And that’s where semantics can get really weird with null-conditional operators. If counter is null, what should counter?.Value++ return? Null? The pre-increment value? The post-increment value that was never computed? Instead of confusing everyone, it just isn’t supported.
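
If you genuinely need the increment, the explicit version is only a few lines anyway. A sketch using the same hypothetical counter object:

// The explicit equivalent, since counter?.Value++ won't compile
if (counter != null)
{
    counter.Value++;
}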

A ‘Putting It All Together’ Example: A Configuration System

Let’s put this all together with an example. Let’s build a super-exciting configuration system that applies environment-specific overrides to application settings.

public class AppSettings
{
    public DatabaseConfig? Database { get; set; }
    public ApiConfig? Api { get; set; }
    public LoggingConfig? Logging { get; set; }
    public CacheConfig? Cache { get; set; }
}

public class DatabaseConfig
{
    public string? ConnectionString { get; set; }
    public int CommandTimeout { get; set; }
    public bool EnableRetry { get; set; }
}

public class ApiConfig
{
    public string? Endpoint { get; set; }
    public TimeSpan Timeout { get; set; }
    public int MaxRetries { get; set; }
}
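
// Minimal stand-in definitions for the remaining config sections, inferred
// from how they're used below (the original listing omits them). LogLevel is
// assumed to be Microsoft.Extensions.Logging.LogLevel.
public class LoggingConfig
{
    public LogLevel Level { get; set; }
    public bool EnableConsole { get; set; }
}

public class CacheConfig
{
    public int ExpirationMinutes { get; set; }
}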

public interface IAppEnvironment
{
    string? GetVariable(string name);
    bool IsDevelopment();
}

public class ConfigurationUpdater
{
    public void ApplyEnvironmentOverrides(AppSettings? settings, IAppEnvironment env)
    {
        // Database overrides - only if database config exists
        settings?.Database?.ConnectionString = env.GetVariable("DB_CONNECTION");
        settings?.Database?.CommandTimeout = 60;
        settings?.Database?.EnableRetry = true;

        // API configuration - compound assignments work too
        settings?.Api?.Endpoint = env.GetVariable("API_URL");
        settings?.Api?.Timeout += TimeSpan.FromSeconds(30);
        settings?.Api?.MaxRetries = 5;

        // Logging adjustments
        settings?.Logging?.Level = LogLevel.Information;
        settings?.Logging?.EnableConsole = env.IsDevelopment();

        // Cache settings
        settings?.Cache?.ExpirationMinutes += 15;
    }
    
    public void ApplyDevelopmentDefaults(AppSettings? settings)
    {
        // Development-specific configuration
        settings?.Database?.EnableRetry = false; // Fail fast in dev
        settings?.Api?.Timeout = TimeSpan.FromSeconds(5); // Shorter timeouts
        settings?.Logging?.Level = LogLevel.Debug;
    }
}

This allows us to be modular. Not every app needs every config section. A simple POC might only touch database config, while our behemoth needs everything.

It also allows us to be optional by nature. The settings object itself might be null during early startup and individual sections might not be wired up yet. It be like that sometimes.

Compare it to the old way. To spare your eyes, I’ll show just a snippet.

// The verbose alternative (please don't do this)
if (settings != null)
{
    if (settings.Database != null)
    {
        settings.Database.ConnectionString = env.GetVariable("DB_CONNECTION");
        settings.Database.CommandTimeout = 60;
    }
    
    if (settings.Api != null)
    {
        settings.Api.Endpoint = env.GetVariable("API_URL");
        settings.Api.Timeout += TimeSpan.FromSeconds(30);
    }
}

That is a lot of ceremony just for “configure what exists.” With null-conditional assignments, we get clean code that focuses on what we’re configuring instead of whether we can configure it.

Wrapping Up

Unless you’re a language designer, C# 14’s null-conditional assignment won’t blow your mind. However, it will make your code clearer and easier to write. It takes a pattern we’ve all typed a thousand times and finally makes it as concise as it should have been all along.

So next time you catch yourself typing if (x != null) { x.Property = value; }, know there’s a better way.

What patterns will you use with null-conditional assignments? Let me know in the comments, and happy coding!

AWS built an integrated AI Agent training pipeline and they want you to rent it

AWS re:Invent 2025 delivered a myriad of announcements across AI, silicon, and cloud infrastructure. AWS unveiled the expanded Nova model family, introduced Nova Forge for custom model training, launched Trainium3 UltraServers, and added major production features to AgentCore. It was a lot, and taken at face value it looks like another scattershot year of big releases.

But if you look past the firehose, a pattern emerges. These announcements fit together into a single bet about how enterprise AI will be built.

AWS built a vertically integrated agent-training pipeline; it’s expensive, ambitious, and not for everyone.

The clearest place to see that pattern is in how AWS talked about the Nova models. AWS dropped four new foundation models (Lite, Pro, Sonic, Omni) spanning text, multimodal, and speech. And AWS downplayed benchmarks entirely. A hint that the models aren’t the real story.

What AWS was really foregrounding wasn’t the models themselves, but the system used to shape them: Nova Forge.

Try it with Neo: Set up Nova 2 with Pulumi

Rent the lab: Nova Forge

The LLM training pipeline: pre-training, SFT, RLHF, fine-tuning, narrowing down to prompt/context

Nova Forge is a managed way to run continued pretraining, fine-tuning, and reward-based alignment on Amazon’s Nova models using your data and your reinforcement loops. Instead of a finished, frozen model plus a thin fine-tuning API, you feed your data into earlier training stages while AWS handles the ugly parts: large-scale training runs, cluster management, and hosting. Access is $100,000 per year [1], plus compute costs!

If you can afford that, you bring big proprietary datasets (code, tickets, logs, documents) and they keep doing next-token pretraining on a mix of their data and yours, then instruction tuning (SFT), then “RL-style” preference optimization, but with your data and your reward signals mixed in.

Why does this exist? Because the kind of training this enables is usually out of reach. Training a GPT-4-class frontier model from scratch runs tens of millions to $100M+ in compute alone [2]. You don’t own the weights and you’re locked into their stack, but you get frontier-level capabilities with your data baked in, without building datacenters or staffing ML teams.

Think of it as frontier-lab-as-a-service. No one else offers anything this close to a public, end-to-end training pipeline. And the only reason AWS can offer it is the next announcement.

The margin weapon: Trainium

The Trainium flywheel: cheaper training leads to more custom models, more inference revenue, funding the next chip
The idealized Trainium flywheel: each generation should decrease training costs.

AWS built their own AI accelerator so they don’t have to live entirely on Nvidia. Trainium is that chip. You don’t buy it; you rent it as a cloud box. This year: their third-gen chip (Trainium3) and new rack-scale Trn3 UltraServers are out, with 4× the performance and big energy/cost gains over the previous gen, positioned as a serious alternative to high-end GPUs for training and serving big models.

I’m sure one reason for Trainium’s existence is that AWS wants to stop handing Nvidia half its AI training revenue. But the real story is bigger than cost-cutting. Trainium is the quiet machinery that makes AWS’s model-factory ambitions economically viable. You can only rent a frontier training pipeline if you can afford it, and Trainium makes it cheaper (if that word applies to six-figure entry costs).

Trainium is what turns Forge from a one-off experiment into an actual development pipeline. By compressing the marginal cost of each training cycle, AWS is trying to make iterative specialization economically viable. You can tune, test, and retrain until you converge on something useful.

AWS is clearly positioning Trainium3 to anchor a fully vertical stack.

Spinning up a Trainium instance with Pulumi (where available):

import pulumi_aws as aws

# Assumes: ami, subnet, security_group already configured
trn1_instance = aws.ec2.Instance("trn1-instance",
    instance_type="trn1.2xlarge",
    ami=ami.id,
    subnet_id=subnet.id,
    vpc_security_group_ids=[security_group.id],
    associate_public_ip_address=True,
    tags={"Name": "trn1-training-instance"},
)

Try it with Neo: Provision Trainium instances

The data moat play

For most companies, this whole stack is overkill. If your AI roadmap is “add a chatbot and maybe summarize some tickets,” you don’t need Nova Forge, and you definitely don’t need Trainium. Hosted models plus RAG and an Agentic loop will get you 90% of the way there.

But this type of training is powerful, and it’s never before been within reach of so many. If LLMs behave like the distributions they’re trained on, then getting your proprietary mess (logs, incident reports, claims histories, deal flows, call transcripts) into the core training loop means the model doesn’t just know your docs; it behaves like someone who’s lived inside your systems. That’s qualitatively different from “we stuffed a PDF into the context window.”

Latency and cost at scale matter too. For high-volume workflows like support triage, routing, code review, and fraud checks, “generic frontier model + giant prompt + RAG + tools” is slow and expensive. A smaller model that has your world baked into the weights can run with smaller contexts, simpler prompts, and fewer tool calls. And then there is reinforcement learning, which I’ll get to shortly.

But even if you get that far, a custom Nova model sitting in Bedrock is only half the story. You still need somewhere for it to act: a runtime, tools, policies, and an audit trail. That’s the gap AgentCore is meant to fill.

Where the models work: AgentCore

AgentCore components as Lego blocks: Runtime, Memory, Policy, Evals
AgentCore: building blocks so you don't have to wire agents from scratch.

If Nova is the brain and Trainium is the muscle to build it, AgentCore is the nervous system.

AgentCore is a managed runtime for AI agents: instead of you wiring LLMs, tools, memory, auth, and logging together on Lambda or Fargate, AWS gives you a sticky per-session microVM, a standard way to call tools (Gateway), built-in long- and short-term memory, identity/permissions, and observability/evals. You package your agent, deploy it as an AgentCore runtime, and AWS handles the ugly parts: session isolation, scaling, policy guardrails, and tracing. You pay Fargate-ish per-vCPU/GB-hour pricing for the runtime plus normal Bedrock token and tool-call costs.

At re:Invent 2025, AgentCore picked up the missing “production” pieces: Policy, Evaluations, and episodic Memory. These handle guardrails, quality checks, and per-session state so you don’t have to build them yourself.

Deploying an AgentCore runtime with Pulumi:

import pulumi_aws as aws

# Assumes: role, ecr_repo already configured
agent_runtime = aws.bedrock.AgentcoreAgentRuntime("my-agent",
    agent_runtime_name="my-agent-runtime",
    role_arn=role.arn,
    agent_runtime_artifact={
        "container_configuration": {
            "container_uri": f"{ecr_repo.repository_url}:latest",
        },
    },
    network_configuration={
        "network_mode": "PUBLIC",
    })

Try it with Neo: Deploy an AgentCore runtime. Requires a container image in ECR.

How does this come together? AWS shipped a use case.

The proof of concept: Nova Act

The AWS AI stack as a layer cake: Trainium at the bottom, Nova Forge, Bedrock, AgentCore on top
The AWS AI stack: vertically integrated from silicon to agent runtime. Nova Act uses the full stack.

Nova Act is the concrete example of this whole thing coming together. It handles browser-based UI automation: form filling, search-and-extract, QA testing. Amazon claims ~90% reliability. It deploys directly to AgentCore Runtime.

It’s not “an LLM plus Playwright.” Nova Act uses a specialized Nova 2 Lite variant trained on synthetic “web gym” environments: browser simulations that mirror enterprise UIs and provide an automatic reward signal when tasks complete correctly. Instead of judging output quality, this model was trained on an RL loop that asks: did the workflow succeed?

That specialized model is wrapped in AgentCore. The platform handles isolation, scaling, logging, and guardrails, so Nova Act behaves like a production automation system rather than a brittle demo.

Seen this way, Nova Act is Amazon’s reference implementation for a certain class of enterprise agents: start with a strong general model, specialize it through domain-specific RL in a controlled environment, and run it on AgentCore with tools and policies around it. It’s the pattern AWS expects customers to adopt.

One stack to rule them all

So Nova Forge, Trainium, AgentCore, and Nova Act connect. Trainium lowers the cost of big training runs. Nova Forge lets enterprises plug their own data and rewards into those runs. AgentCore is where the resulting models act, with tools, memory, and policy guardrails. Nova Act shows the pattern in action: a domain-specialized Nova model, trained in a controlled loop, running as a production agent.

Most enterprises still won’t choose this path. They don’t have the data, the reward loops, or the operational maturity to make early-stage training worthwhile.

But AWS’s bet is that enterprise AI is moving past stock foundation models and generic chatbots. AWS expects a world of agents shaped by proprietary data and domain feedback. Most companies won’t build the infrastructure to train and operate those agents, and so AWS is offering to rent them the whole pipeline.

Try it with Neo: Deploy a Bedrock-powered API with Pulumi


  1. CNBC reporting on Nova Forge pricing. Source

  2. Sam Altman stated GPT-4 cost “more than $100 million” to train. Source

How Many Plan Variants Can You Get With The Parameter Sensitive Plan Optimization In SQL Server?


Going Further


If this is the kind of SQL Server stuff you love learning about, you’ll love my training. I’m offering a 75% discount to my blog readers if you click from here. I’m also available for consulting if you just don’t have time for that, and need to solve database performance problems quickly. You can also get a quick, low cost health check with no phone time required.

The post How Many Plan Variants Can You Get With The Parameter Sensitive Plan Optimization In SQL Server? appeared first on Darling Data.

Update: SQL Server 2025’s REGEX Performance Isn’t So Bad!

Back in March 2025 when Microsoft first announced that REGEX support was coming to SQL Server 2025 and Azure SQL DB, I gave it a quick test, and the performance was horrific. It was bad in 3 different ways:

  1. The CPU usage was terrible, burning 60 seconds of CPU time to check a few million rows
  2. It refused to use an index
  3. The cardinality estimation was terrible, hard-coded to 30% of the table

Prompted by a comment from Erland Sommarskog this month, I circled back and ran the tests again with the release version of SQL Server 2025. Great news! Microsoft fixed 1 of the problems, and… well, one of them is a little tricky. To demonstrate, I’m going to use the large 2024-04 Stack Overflow database to create a worst-case scenario, then start with an index on the small Users table and query it via regex like we did in the March 2025 post.

CREATE INDEX Location ON dbo.Users(Location);

SET STATISTICS IO, TIME ON;

SELECT TOP 100 *
FROM dbo.Users 
WHERE REGEXP_LIKE(Location, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

The actual execution plan:

Actual plan for TOP 100

It took about 8 seconds, all of which was spent burning CPU. That’s actually GREAT, a HUGE improvement from last time! 8 seconds of CPU time sounds bad, but it’s fantastic given the number of rows that SQL Server had to examine to find 100 matches:

Number of rows read

Because the data I was looking for was relatively rare, SQL Server had to read about 10 million rows in order to find 100 matches. That means SQL Server was able to read 1.2 million rows per second, and examine their contents with regex. That’s awesome! I love it, and I wish the story ended there.

But let’s switch over to examining the Title column of the Posts table, one of the bigger ones in the database. I’ve created an index on the Title column:

CREATE INDEX Title ON dbo.Posts(Title)
    WITH (ONLINE = OFF, MAXDOP = 0);
GO
sp_BlitzIndex @TableName = 'Posts'

The table has about 60M rows, the clustered index is 163GB, and the index on just Title is 3GB. If SQL Server will use the index, this will give us a giant performance boost over having to scan the whole table.

Posts table size

Let’s run the same WHERE clause filter, but use a SUM(1) this time instead of TOP 100 so that SQL Server is forced to hit all of the rows, and so I can demonstrate the cardinality estimation:

SELECT SUM(1)
FROM dbo.Posts 
WHERE REGEXP_LIKE(Title, '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$');

The actual plan doesn’t look great at first glance, but hang in there, because this really is a worst-case scenario – there’s some great stuff in here:

Index scan on Posts.Title

First, it used the index! That’s fantastic. Obviously we can’t seek on it, but at least we’re only reading 3GB of data instead of 163GB. That’s good – that’s the one problem Microsoft completely fixed. Love it.

Second, it went parallel automatically, recognizing that it was gonna be a lot of work. It had to read 60M rows, and it took 7 minutes, so it processed about 145K rows per second. That’s… not good. That’s a huge drop from our Users table processing which was hitting about 1.2 million rows per second. While it was running, ooof, our poor server:

It's getting hot in here
It’s getting hot in here, so light up all your cores

Wait stats are a parallelism disaster:

Wait stats

So… is parallelism a problem? I’ve heard folks say CXCONSUMER is harmless, but long-term readers here will know better. Slap an OPTION (MAXDOP 1) hint on the query, and it runs in 32 seconds:

32 seconds for MAXDOP 1

Which brings us back up to 1.86 million rows per second processed by REGEX. That’s honestly fantastic. If you need to find a needle in a haystack with regex, and you’ve got an index so it can scan less data, and if CPU scheduling doesn’t get in the way, this is a dang fast way to do it. Note that I didn’t try more complex regular expressions – I don’t wanna make up synthetic stuff for testing, and instead you should test with regexes you actually intend to use in your work.

On the down side, note that the estimated number of rows is still hot garbage – SQL Server estimated that 5,383,710 rows would come back, when in actuality none did. I think that’s fair, because I don’t know how you could predict the number of rows that would match a given regex. Hell, I can’t even READ most regexes and understand what they’re trying to do. If your query has a regular expression in the filter, you probably want to load the matching rows into a temp table first so that on subsequent joins, SQL Server better understands the number of rows that’ll be involved.

So in summary, SQL Server 2025’s regex situation is better than it was in Azure SQL DB – at least it’s using indexes now, and CPU is better than it was. Just be careful with running it in production – if you’re just using it as a utility query for quick research, try hinting it with MAXDOP 1 for two reasons. It might run faster, and it’ll be less likely to dominate the server’s entire CPU stack.

Major Architectural Update: Introducing the New Search Everywhere API (Built for Remote Development)

The Search Everywhere dialog is one of the most important entry points in the IntelliJ Platform. To future-proof this core feature and ensure its optimal performance in remote development, we have completely rewritten the underlying architecture, and we’re introducing a brand-new API for plugin developers.

This is a critical update for anyone contributing custom results to Search Everywhere, particularly for plugins that need to support remote environments.

Why the rewrite? The challenge of remote development

The need for a new API stemmed directly from our ongoing commitment to remote development.

The original SearchEverywhereContributor extension point was designed for the traditional, monolithic (local) IDE architecture. It coupled the search result data with the UI renderer responsible for displaying that data. This tight coupling became a significant blocker for our split-architecture remote solution:

  1. The backend/frontend problem: In remote development, search logic often runs on a powerful backend machine, but the results must be displayed on the lightweight frontend.
  2. Serialization issue: Because SearchEverywhereContributor coupled data and presentation, it was impossible to reliably serialize third-party search results – both the data and its appearance – and safely send them across the network from the backend process to the frontend client.

To solve this, we initiated a deep rewrite of Search Everywhere to separate data and logic from UI presentation, enabling seamless and performant remote execution for all plugins.

The new API: Separation of data and UI

Frontend components

These components primarily manage the user interface and presentation logic.

  • SeTab
    • An interface representing a single tab within the Search Everywhere dialog.
    • Lives on the frontend only.
  • SeTabFactory
    • A factory for creating SeTab instances.
    • Serves as an extension point for adding new tab types.

Core data and logic (backend/frontend)

These components are responsible for fetching and structuring the search results.

  • SeItemsProvider
    • The conceptual analog of the older SearchEverywhereContributor.
    • Can run on both the backend and frontend.
    • Returns search results that include a serializable presentation – a standardized way to describe the item’s appearance. By ensuring that the results returned by SeItemsProvider include a serializable presentation, we can transmit them over the network and reliably render the results on the client, regardless of which process generated them.
  • SeItemsProviderFactory
    • A factory for creating SeItemsProvider instances.
    • Serves as an extension point for adding new result sources.

Migration and compatibility

What this means for your plugin

We have implemented adapters for all existing legacy SearchEverywhereContributor implementations.

  • In a monolith (local) environment: Existing plugins should continue to work almost exactly as before, with no immediate changes required.
  • In a remote development environment (2025.2 onwards): The new Search Everywhere architecture is now enabled by default. Plugins supporting remote execution must either implement the new API or ensure their legacy results use a serializable presentation format that our adapters can manage. For a simpler, temporary presentation method, you can use SeLegacyItemPresentationProvider.

The path forward – Migration recommended

While the adapters provide a temporary bridge, we encourage all plugin developers to try the new API and provide feedback while it’s in the experimental phase. Once the new API has been refined based on the community’s input, we will start deprecating the old API.

Rollout plan

We are committed to a smooth transition and are rolling out the new Search Everywhere architecture gradually:

  1. 2025.2: The new Search Everywhere architecture is enabled by default in remote development.
  2. 2026.1 (planned): The new architecture is enabled by default in monolith environments.
  3. 2026.2 (tentative): The new API is no longer experimental, and the old API is deprecated.

We understand that changing a core API is a significant step, but this rewrite is essential for the future of the IntelliJ Platform and your ability to serve users in modern, remote environments.

We are excited to see the improved performance and remote capabilities this change will bring to your plugins!

The IntelliJ Platform team

The End of Debugging

The following article originally appeared on Medium and is being republished here with the author’s permission.

This post is a follow-up to a post from last week on the progress of logging. A colleague pushed back on the idea that we’d soon be running code we don’t fully understand. He was skeptical: “We’ll still be the ones writing the code, right? You can only support the code if you wrote it, right?…right?”

That’s the assumption—but it’s already slipping.

You Don’t Have to Write (or Even Read) Every Line Anymore

I gave him a simple example. I needed drag-and-drop ordering in a form. I’ve built it before, but this time I asked Cursor: “Take this React component, make the rows draggable, persist the order, and generate tests.”

It did. I ran the tests, and everything passed; I then shipped the feature without ever opening the code. Not because I couldn’t but because I didn’t have to. That doesn’t mean I always ship this way. Most of the time, I still review, but it’s becoming more common that I don’t need to.

And this isn’t malpractice or vibe coding. The trust comes from two things: I know I can debug and fix if something goes wrong, and I have enough validation to know when the output is solid. If the code works, passes tests, and delivers the feature, I don’t need to micromanage every line of code. That shift is already here—and it’s only accelerating.

Already Comfortable Ceding Control

Which brings me back to site reliability. Production systems are on the same trajectory. We’re walking into a world where the software is watching itself, anticipating failures, and quietly fixing them before a human would ever notice. Consider how Airbus advises pilots to keep the autopilot on during turbulence. Computers don’t panic or overcorrect; they ride it out smoothly. That’s what’s coming for operations—systems that absorb the bumps without asking you to grab the controls.

This shift doesn’t eliminate humans, but it does change the work. We won’t be staring at charts all day, because the essential decisions won’t be visible in dashboards. Vendors like Elastic, Grafana, and Splunk won’t vanish, but they’ll need to reinvent their value in a world where the software is diagnosing and correcting itself before alerts even fire.

And this will happen faster than you think: not through a slow, predictable maturation of the technology, but because the incentives are brutal. The first companies to eliminate downtime and pager duty will have an unassailable advantage, and everyone else will scramble to follow. Within a couple of years (sorry, I meant weeks), the default assumption will be that you’re building for an MCP—the standard machine control plane that consumes your logs, interprets your signals, and acts on your behalf. If you’re not writing for it, you’ll be left behind.

More Powerful Primitives (We May Not Fully Understand)

I’ll end with this. I majored in computer engineering. I know how to design an 8-bit microprocessor on FPGAs… in the late 1990s. Do you think I fully understand the Apple M4 chip in the laptop I’m writing on? Conceptually, yes—I understand the principles. But I don’t know everything it’s doing, instruction by instruction. And that’s fine.

We already accept that kind of abstraction all the time. As Edsger W. Dijkstra said: “The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.” Abstractions give us new building blocks—smaller, sharper units of thought—that let us stop worrying about every transistor and instead design at the level of processors, operating systems, or languages.

Code generation is about to redefine that building block again. It’s not just another abstraction layer; it’s a new “atom” for how we think about software. Once that shift takes hold, we’ll start leveling up—not because we know less but because we’ll be working with more powerful primitives.


