
How to ensure success following a cloud migration


Making sure everything is running smoothly after a cloud migration is critically important, even after a lot of time has passed. It’s also important to continue the optimization journey post-migration. Pat Wright explains why in this article, part of his ‘How to migrate from on-prem to the cloud’ series.

You’ll likely have one or two processes that didn’t properly migrate and now don’t work – you’ll want to find these before they impact your customers. You’ll also want to optimize your resources now that you’re in this shiny new cloud, after all those hours of work.

It’s also important to understand that migrations tend to scale everything up and use more resources to deal with the complexity of the migration. You may have started this process to save costs in the future – so now you need to focus on how your system is actually running, and optimize for it. But how best to do so? Here’s my advice.

Ensure you have key baselines for the system

Knowing what resources (CPU, memory, disk) the system has been using will give you an idea of what to expect from your new cloud servers. You can compare these baselines against what the cloud is actually doing and make adjustments.

Minimize, minimize, minimize…

Now that you’re in the cloud, it’s crucial to understand that everything has a cost. The more you can review what is actually needed (or not), the more you can trim down and cut those costs. So, focus on what you can now shrink, even if this just means asking of each cluster, “do we need these resources?”

…and set up automation to help do so

On a similar note, it’s good practice to set up automation to remove servers/resources you no longer need. It’s now VERY easy to create a server and test things. It’s also very easy to forget you have the server and never shut it down! Automation can greatly help with this, so you should put something in place sooner rather than later.
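To make this concrete, here’s a minimal C# sketch of the policy half of such automation, assuming you can export an inventory of servers with last-activity timestamps. The ServerRecord shape and the seven-day window are hypothetical; the actual shutdown call would come from your cloud provider’s SDK or CLI.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical inventory record exported from your cloud provider.
public record ServerRecord(string Name, string Owner, DateTimeOffset LastActivity);

public static class IdleServerPolicy
{
    // Flag servers with no recorded activity within the idle window.
    public static IEnumerable<ServerRecord> FindIdle(
        IEnumerable<ServerRecord> inventory, TimeSpan idleWindow)
    {
        var cutoff = DateTimeOffset.UtcNow - idleWindow;
        return inventory.Where(s => s.LastActivity < cutoff);
    }
}

// Usage: report candidates (and notify owners) before anything is deleted automatically.
// foreach (var server in IdleServerPolicy.FindIdle(inventory, TimeSpan.FromDays(7)))
//     Console.WriteLine($"Shutdown candidate: {server.Name} (owner: {server.Owner})");

Starting in report-only mode is a sensible default: it builds trust in the automation before you let it deallocate anything on its own.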

Learn what metrics you now need to track 

Cloud servers have some metrics that don’t act the same as on-premises servers, so it’s important you understand these. Working directly with your cloud provider should help here.

Check your permissions, security, and roles

Security may have taken a back seat during the migration phase, so it’s imperative to now review that and get everything in order.

A cloud migration is never just ‘done’

Even after completing all the steps above, you’re never truly ‘finished’ with a cloud migration. At least, I’ve never seen it myself. Typically, you get a few big components/applications over and then spend even more time optimizing and improving. In my experience, migrations change everything about a company and how it works, and they do make things better in the end. Of course, you now want to take advantage of that, and focus on what you can do to continue to optimize for the future.

But first – celebrate!

It’s important to recognize all the good work people have put into the project. Cloud migrations can take a long time, so I suggest setting milestones along the way and ‘celebrating’ when you hit them.

The importance of knowledge sharing

The cloud migration may not be over because, as mentioned, it rarely ever is. However, the team that executed the migration may now have other projects to work on, so others need to be trained up. Just like I’m sharing knowledge in these articles, it’s important to do the same with your colleagues. To make things easier, you’ll ideally have documented everything during the migration, as noted in a previous article.

Focus on automation to ensure future migration success

This is loosely connected to what I mentioned earlier, but this time I’m referring more to automation that can help with future cloud migrations. You may have written some good scripts during the migration, and made some useful tools, so now’s a great time to harden them and adapt them for a variety of situations.

Check any fixes you may have made during the cloud migration

Perhaps, during the cloud migration, you needed to put in place some temporary fixes to the application. Well, don’t lose track of those changes! Make sure to go back and address them.

In summary: the migration is never fully ‘over’

This most likely won’t be the end of the project. For example, you may have other applications that you weren’t able to move alongside everything else, and that now need to be taken care of. In a large organization, a cloud migration can sometimes take several years! For now, though, make sure to celebrate – migrations are one of the largest projects you can take on.


FAQs: How to ensure success following a cloud migration

1. What should you do after a cloud migration?

After a cloud migration, you should monitor system performance, identify broken processes, optimize resource usage, improve security, and implement automation to reduce costs and improve efficiency.

2. Why is optimization important after moving to the cloud?

Cloud environments often scale resources up during migration. Optimization helps reduce unnecessary costs, improve performance, and ensure your infrastructure matches real usage needs.

3. How can you reduce cloud costs post-migration?

You can reduce costs by minimizing unused resources, rightsizing servers, removing unnecessary workloads, and using automation to shut down idle systems.

4. What metrics should you track in the cloud?

Key metrics include CPU usage, memory, disk performance, and cloud-specific metrics like scaling behavior and service utilization, which may differ from on-prem systems.

5. How does automation help after cloud migration?

Automation helps manage resources efficiently, remove unused infrastructure, enforce policies, and prevent unnecessary spending from forgotten services.

6. Why is security review important after migration?

Security may be overlooked during migration, so reviewing permissions, roles, and access controls ensures your cloud environment is protected against vulnerabilities.

7. Is a cloud migration ever truly finished?

No, cloud migration is an ongoing process. Continuous optimization, updates, and improvements are required to maintain performance and cost efficiency.

8. What role does knowledge sharing play after migration?

Knowledge sharing ensures teams understand the new cloud environment, reduces reliance on migration teams, and supports future improvements and scalability.

The post How to ensure success following a cloud migration appeared first on Simple Talk.


How to Verify Network Connectivity in .NET MAUI


Learn the key steps to checking network connectivity on the various platforms available to your .NET MAUI app.

Identifying the connectivity of the devices running our mobile applications gives us much more precise control over the decisions we need to make within the app. Knowing whether the device has internet access, whether the connection is limited, and whether connection types such as Bluetooth, WiFi or Ethernet are active all helps us provide a better user experience.

For example, we can decide whether to show an empty state when there is no internet connection, prevent certain actions or clearly inform the user about what is happening. This is especially important because, in many cases, the user loses connectivity and assumes the problem is with the application, when in reality it is a network issue.

Additionally, we can display different scenarios or behaviors depending on the type of connection available, such as when the device only has Bluetooth active or does not have internet access. That’s why it’s essential to know how to detect these connectivity states. The good news is that in .NET MAUI, we have the ability to do this in a very simple way. Let’s take a look!

First, Platform Configuration

Before starting with any implementation, it’s important to verify whether you need to apply any platform-specific configuration. Some platforms may require additional setup, while others work out of the box.

For this feature, iOS/Mac Catalyst and Windows require no additional configuration.

For Android, to access connectivity information, you must add the ACCESS_NETWORK_STATE permission. There are three different ways to add this permission on Android:

Android Option 1: Add the Permission Directly in the AndroidManifest.xml

Go to Platforms → Android, open the AndroidManifest.xml file, and add the following node:

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

Android Option 2: Use the Android Manifest Graphical Editor

Go to Platforms → Android, double-click the AndroidManifest.xml file, and locate the Required permissions section. Find the permission labeled ACCESS_NETWORK_STATE and simply check the option, as shown below.

Android Option 3: Add the Assembly-based Permission

Go to Platforms → Android → MainApplication.cs and add the permission as follows:

[assembly: UsesPermission(Android.Manifest.Permission.AccessNetworkState)]

Network Accessibility in .NET MAUI

To inspect the network accessibility on a device, .NET MAUI provides the IConnectivity interface. This is part of the Microsoft.Maui.Networking namespace and is available through the Connectivity.Current property.

One thing I really like about this API is that it doesn’t simply return a boolean value indicating whether there is internet access or not. Instead, it provides much more detailed information, such as the scope of the current network (for example, Internet, ConstrainedInternet and others), as well as details about active connection profiles like Bluetooth, Cellular, WiFi and others. It also exposes an event that allows you to monitor changes in the device’s connectivity state in real time.

Next, we’ll take a closer look at each of these values to better understand what they mean and how you can use them in your applications.

How to Inspect the Current Network Scope?

Thanks to the .NET MAUI team, we can determine the scope of the current network in a much more precise way through the NetworkAccess property. This property provides different values that we can evaluate to obtain more detailed information about the device’s connectivity state. These values are the following:

Internet: Indicates that the device has access to both the local network and the internet. This is the ideal state for making API calls or performing any action that requires a full internet connection.

Local: The device has access only to the local network.

None: No type of connectivity is available. In this state, it’s ideal to inform the user that all or some actions within the app will not work correctly due to the lack of connectivity. This can be done through an empty state, an alert or any other approach that best fits the scenario you’re working on.

Unknown: It’s not possible to determine the connectivity state. If this happens, it’s recommended to inform the user in the same way as in the None state while the correct information is being retrieved.

ConstrainedInternet: Indicates that the device has limited internet access. This state usually appears when the device is connected to a network with a captive portal, meaning networks that require accepting terms or entering information before granting full internet access. This is common in places like airports, universities or hotels.

Thanks to all these states, as developers we can be much more specific in how we communicate connectivity issues to our users and better adapt the behavior of our applications.

For the code implementation, you can do something like what I show below:

NetworkAccess accessType = Connectivity.Current.NetworkAccess; 

if (accessType == NetworkAccess.Internet) 
{ 
    // Add the code that you need here 
}
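For completeness, here’s a short sketch that branches on the other states described above; the comments are placeholders for whatever behavior fits your app:

switch (Connectivity.Current.NetworkAccess)
{
    case NetworkAccess.Internet:
        // Full connectivity: safe to call remote APIs.
        break;
    case NetworkAccess.ConstrainedInternet:
        // Captive portal or limited access: warn before large transfers.
        break;
    case NetworkAccess.Local:
    case NetworkAccess.None:
    case NetworkAccess.Unknown:
        // No usable internet: show an empty state or an alert.
        break;
}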

Checking Active Connection Profiles

While NetworkAccess tells us how accessible the network is (internet, local, none, etc.), ConnectionProfiles allows us to know which type of connection the device is actively using at a given moment.

The types of connections we can detect are the following:

  • WiFi
  • Cellular (mobile data)
  • Bluetooth
  • Ethernet

This information is extremely useful for making decisions within your application. For example:

  • You can limit your app to download files only when the device is connected to a WiFi network.
  • Enable local features when only a Bluetooth connection is available.

It’s important to keep in mind that Connectivity.Current.ConnectionProfiles returns a collection (IEnumerable<ConnectionProfile>), because a device can have multiple connection types active at the same time. For instance, the device may be connected to WiFi while Bluetooth is enabled simultaneously.

In code, the implementation would look like the following:

IEnumerable<ConnectionProfile> profiles = Connectivity.Current.ConnectionProfiles;

if (profiles.Contains(ConnectionProfile.WiFi)) 
{ 
    // Add the code that you need here. 
}

Reacting to Connectivity Changes

We know that network conditions can change at any moment. For this reason, .NET MAUI provides the ConnectivityChanged event, which allows us to detect when network access or active connection profiles change. This makes it possible for our applications to react immediately to these changes, without breaking the app experience.

Let’s take a look at an example based on the official documentation:

public class ConnectivityListener
{
    public ConnectivityListener()
    {
        Connectivity.ConnectivityChanged += OnConnectivityChanged;
    }

    void OnConnectivityChanged(object sender, ConnectivityChangedEventArgs e)
    {
        if (e.NetworkAccess != NetworkAccess.Internet)
        {
            Console.WriteLine("No internet connection available.");
            return;
        }

        if (e.ConnectionProfiles.Contains(ConnectionProfile.WiFi))
        {
            Console.WriteLine("Connected via Wi-Fi.");
        }
        else if (e.ConnectionProfiles.Contains(ConnectionProfile.Cellular))
        {
            Console.WriteLine("Using mobile data.");
        }
    }
}

⚠️ Important Considerations

NetworkAccess.Internet: Due to how connectivity detection works on each platform, .NET MAUI can only detect that a network connection is available. It does not guarantee that the connection has real internet access. For example, a device may be connected to a WiFi network, but the router itself may not have an internet connection.
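If your app genuinely needs to confirm reachability, a common workaround is a lightweight probe against an endpoint you trust. Here is a minimal sketch; the URL and the five-second timeout are placeholder assumptions, and it’s best to run the probe only when NetworkAccess already reports Internet:

// Probe a known endpoint to confirm the network actually reaches the internet.
static async Task<bool> CanReachInternetAsync()
{
    try
    {
        using var http = new HttpClient { Timeout = TimeSpan.FromSeconds(5) };
        using var request = new HttpRequestMessage(HttpMethod.Head, "https://example.com/");
        using var response = await http.SendAsync(request);
        return response.IsSuccessStatusCode;
    }
    catch (Exception)
    {
        return false; // Timeout, DNS failure, etc.: treat as unreachable.
    }
}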

Conclusion

And that’s it! In this article, you explored how to work with network connectivity in .NET MAUI using Connectivity. You learned how to determine the scope of the current network, detect active connection profiles such as WiFi, Cellular, Bluetooth and Ethernet, and react to connectivity changes in real time using the ConnectivityChanged event.

With this knowledge, you can now make better decisions in your apps, provide clearer feedback to users and build more resilient experiences that adapt to changing network conditions.

See you in the next article! ✨

References

Code samples and explanations were based on the official .NET MAUI documentation.


Optimizing Angular Pivot Table Performance Using Virtualization Techniques



TL;DR: Angular pivot tables can struggle with large datasets and complex calculations. This article explores common performance bottlenecks and why virtualization is essential for building faster, scalable, and user‑friendly data‑heavy Angular applications.

As Angular applications scale to support large datasets, pivot tables often become a performance bottleneck. Rendering thousands of rows and columns can lead to long load times, high memory usage, and sluggish interactions. These challenges are especially noticeable in data-heavy dashboards, where users expect smooth scrolling, fast filtering, and responsive drill-down operations. 

This article examines why pivot table performance degrades at scale and explores how virtualization fits into the overall rendering architecture of Syncfusion® Angular Pivot Table.

Why large datasets hurt Angular Pivot Table performance

Pivot tables aggregate and display multidimensional data across rows, columns, headers, and values. As data volume increases, the number of rendered cells grows rapidly. For example, a grid with 10,000 rows and 50 columns results in hundreds of thousands of rendered elements.

This growth places pressure on the browser in several areas:

  • DOM size and memory consumption
  • Layout and paint recalculations
  • Interaction handling during scroll and resize
  • Repeated aggregation and reflow during user actions

These factors compound as datasets grow, making performance degradation unavoidable without architectural optimizations.

Understanding virtualization in Pivot Tables

Virtualization is a rendering optimization technique that decouples dataset size from DOM size. Instead of rendering every pivot cell at once, only the rows and columns visible within the viewport are rendered, along with a small buffer.

Key characteristics of virtualization include:

  • Rendering only visible cells while preserving full dataset navigation
  • Recycling a fixed pool of DOM elements instead of recreating them
  • Maintaining scrollbar behavior that represents the complete dataset
  • Keeping DOM complexity nearly constant regardless of data size

By limiting what the browser needs to render at any moment, virtualization reduces layout cost, memory overhead, and interaction latency.
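To make the idea concrete, here is a minimal, framework-agnostic sketch of the windowing arithmetic a virtualizer performs on each scroll event, assuming fixed row heights (illustrative only, not Syncfusion’s actual implementation):

// Compute which rows to render for the current scroll position:
// the visible rows plus a small buffer on each side.
static (int First, int Last) VisibleRowRange(
    double scrollTop, double rowHeight, double viewportHeight, int totalRows, int buffer)
{
    int first = Math.Max(0, (int)(scrollTop / rowHeight) - buffer);
    int visible = (int)Math.Ceiling(viewportHeight / rowHeight);
    int last = Math.Min(totalRows - 1, first + visible + 2 * buffer);
    return (first, last);
}

With 10,000 rows, a 600px viewport, and 30px rows, only around 20 rows (plus the buffer) exist in the DOM at any moment, which is why DOM complexity stays nearly constant as the dataset grows.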

Configuring virtualization in Angular

The Syncfusion Angular Pivot Table exposes virtualization through straightforward component-level settings. Enabling virtualization typically requires activating virtual scrolling and registering the required services.

A basic configuration includes:

  • Enabling virtualization at the component level
  • Providing the virtual scroll service
  • Defining fixed height and width values to establish a predictable viewport

To enable it, inject the VirtualScrollService and set enableVirtualization="true" on the component.

import { Component } from '@angular/core';
import { VirtualScrollService, VirtualScrollSettingsModel } from '@syncfusion/ej2-angular-pivotview';
import { DataSourceSettingsModel } from '@syncfusion/ej2-pivotview';

@Component({
    selector: 'app-pivot-dashboard',
    providers: [VirtualScrollService],
    template: `
        <ejs-pivotview
            [dataSourceSettings]="dataSourceSettings"
            enableVirtualization="true"
            [virtualScrollSettings]="virtualScrollSettings"
            height="600px"
            width="100%">
        </ejs-pivotview>
    `
})
export class PivotDashboardComponent {
    // Declared before dataSourceSettings so the initializer below can reference it.
    largeOrdersDataset: object[] = []; // Bind your large dataset here.

    dataSourceSettings: DataSourceSettingsModel = {
        dataSource: this.largeOrdersDataset,
        rows: [{ name: 'Region' }],
        columns: [{ name: 'Year' }],
        values: [{ name: 'Revenue', type: 'Sum' }],
        filters: []
    };

    virtualScrollSettings: VirtualScrollSettingsModel = {
        allowSinglePage: true
    };
}

Virtualization affects only rendering behavior. The full dataset is still accepted, processed, and aggregated by the pivot engine.

Performance tuning strategies

Data shaping

Reduce dataset size before binding:

const filteredData = rawOrderData
    .filter(order => order.year >= 2020 && order.status === 'completed')
    .slice(0, 100000);

this.pivotComponent.dataSourceSettings.dataSource = filteredData;

Smaller input datasets reduce aggregation overhead before virtualization even begins.

Field optimization

Limit rows, columns, and values to what users actually need:

// Efficient configuration
rows = [{name: 'Region'}];
columns = [{name: 'Year'}];
values = [{ name: 'Revenue', type: 'Sum' }];

Avoid unnecessary fields, which multiply aggregation complexity and memory usage.

Viewport sizing

Larger viewports require rendering more cells. Use pixel-based dimensions and balance usability with performance:

height = window.innerHeight > 1000 ? '800px' : '500px';
width  = window.innerWidth  > 1200 ? '1200px' : '100%';

Smaller viewports reduce the working DOM set and improve responsiveness.

Strategic filtering

Apply filters early to narrow data scope:

filterSettings: [
    {
        name: 'Channel',
        type: 'Include',
        items: ['Online', 'Retail']
    },
    {
        name: 'Year',
        type: 'Include',
        items: ['2023', '2024', '2025']
    }
]

Filtering reduces both aggregation cost and rendered output.

Deferred layout updates

Batch multiple configuration changes to avoid repeated recomputation:

this.pivotComponent.setProperties(
    {
        dataSourceSettings: {
            rows: [...],
            columns: [...],
            values: [...]
        }
    },
    false // defer recalculation
);

Server-side aggregation overview

When datasets reach hundreds of thousands or millions of records, server‑side aggregation can offload heavy computation from the browser. In this architecture:

  • The client defines the pivot configuration and renders only visible cells
  • The server performs aggregation, filtering, and sorting
  • Only aggregated results required for the current view are transmitted

This separation reduces client memory usage and minimizes data transfer while maintaining interactive behavior.

Caching strategy

Each pivot instance is assigned a unique session GUID. The server caches:

  • Engine properties
  • Data sources
  • Aggregated results

Caches typically expire after 60 minutes to balance performance and memory usage. Subsequent interactions reuse cached results, dramatically improving responsiveness.
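As a rough illustration of that pattern (not Syncfusion’s actual server code), a session-scoped cache with a 60-minute expiry could be sketched in .NET using Microsoft.Extensions.Caching.Memory:

using Microsoft.Extensions.Caching.Memory;

// Illustrative server-side cache keyed by the pivot session GUID.
var cache = new MemoryCache(new MemoryCacheOptions());

void StoreAggregatedResult(Guid sessionId, object aggregatedResult) =>
    cache.Set(sessionId, aggregatedResult, new MemoryCacheEntryOptions
    {
        // Expire entries after 60 minutes to balance performance and memory.
        AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(60)
    });

bool TryGetAggregatedResult(Guid sessionId, out object? aggregatedResult) =>
    cache.TryGetValue(sessionId, out aggregatedResult);

Repeat interactions for the same session then hit the cache instead of re-aggregating the full dataset.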

Authentication

Use the beforeServiceInvoke event to attach authorization headers to all requests, ensuring secure communication without hard-coded credentials.

Virtualization limitations and constraints

Virtualization introduces certain constraints that should be accounted for:

  • Column widths should be defined using fixed pixel values
  • Row heights must remain consistent for accurate scroll calculations
  • Features that rely on dynamic sizing may not behave predictably at scale
  • Built‑in grouping is best applied at the data source level for large datasets

Understanding these constraints helps avoid rendering inconsistencies and unexpected behavior.

Common troubleshooting

Common issues encountered with virtualization include:

  • Virtualization not activating due to missing configuration or services
  • Scroll artifacts caused by inconsistent row heights or column widths
  • Performance slowdowns related to excessive aggregation or unfiltered data

These issues are typically related to configuration scope rather than rendering logic itself.

Frequently Asked Questions

Does virtualization work with all data types and aggregation functions?

Yes. Virtualization affects only how pivot cells are rendered in the browser. It works independently of data types or aggregation logic, whether the pivot table uses sums, counts, averages, or custom calculations.

Does virtualization change how much data the pivot table processes?

No. The pivot table still processes and aggregates the complete dataset. Virtualization only controls how much of the aggregated result is rendered at any given time.

Does virtualization affect exporting pivot table data?

No. Export operations include the full aggregated pivot result, not just the visible portion. Virtualization does not limit or truncate exported data.

Is virtualization compatible with interactive features like drill‑down?

Yes. Drill‑down interactions continue to work as expected. The detail rows revealed during drill‑down are rendered using the same virtual rendering mechanism, helping maintain consistent performance.

Is virtualization useful if users don’t scroll much?

Virtualization is most beneficial when pivot tables display large result sets that exceed the viewport. If the entire pivot output fits within the visible area, the performance impact of virtualization is minimal but generally safe to keep enabled.


Conclusion

Thank you for reading! Virtualization plays a key role in addressing performance constraints when handling large datasets in the Syncfusion Angular Pivot Table. By limiting rendering to the visible portion of the pivot layout and managing DOM usage efficiently, virtualization helps maintain responsiveness as data volume grows.

Understanding where virtualization fits within the pivot table architecture and how it interacts with data size, aggregation scope, and layout constraints provides a clearer foundation for building scalable, data‑intensive Angular applications.

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!


Don't overestimate domain expertise



I was toying with LLM-based domain research. It reminded me of the common mistake we make when we try to practice DDD: overreliance on what domain experts are telling us.

To adjust my upcoming workshop, I wanted to remind myself of a domain I used to know: hospitality management. I selected an LLM (Claude Opus) as a sparring partner. And boy, I got a loooot of details. I was swamped by them. The LLM modelled everything: marketing consent, loyalty timing, inventory management, revenue posting, regulatory submissions, data retention policies, all at the same level of detail, which buried the core checkout flow, the thing I asked about, under noise.

That’s not much different from the initial work with domain experts. When we’re starting discovery, people tend to start by explaining everything they do and feel is important. Quite often, that means you’ll get more domain-expert pet peeves about the surroundings than actual process descriptions.

You also get a lot of jargon, words that sound familiar but mean something different.

That’s also what I got from the LLM. I asked it to review reference sources like:

  • popular tools documentation,
  • open specifications from the hospitality organisations,
  • some laws, like GDPR-related regulations,

I researched how different tools handle the guest checkout process. I knew it and had implemented it in the past; it’s a surprisingly complex process, as you need to:

  • verify that the stay is fully paid and close the financial account,
  • generate the invoice,
  • mark the guest’s stay as completed,
  • schedule a full room cleanup,
  • mark in the inventory that the room will be available soon (unless the maid finds that it’s broken),
  • clean up GDPR-related data that doesn’t need to be kept,
  • adjust loyalty points and update the CRM,
  • etc.

I remembered how it works, but not all the details, and I wanted to double-check whether anything in the industry had changed.

I wanted to get the overall vision first, then gradually dive deeper. As mentioned, I was overwhelmed with naming like:

  • Folio,
  • Drain Pending postings,
  • Property,
  • Account Receivables,

etc.

Don’t get me wrong, those were valid names in this domain. They sound plausible if you’ve never worked in hospitality. They sound plausible if you have, too. So what’s wrong with them?

They’re valid terms, as people actually use them, but weird, as they don’t tell “how the process works” but “how people do stuff,” and that’s not the same thing.

For instance, Property is a hotel; that kind of makes sense. But Folio?

This name comes from Oracle’s OPERA. Yes, Oracle has a system in this domain, a dominant one. At some point, its authors decided to name things this way, and now, 30 years later, that vocabulary is baked into how thousands of hoteliers talk about their work. “Drain pending postings” is a phrase real cashiers say.

“Settle the folio” is a thing real cashiers do.

The problem: none of it explains how our system should work. It only tells us what current systems do.

Also, when I asked about the business rules and policies, I got stuff like:

The system immediately posts room and tax charges for day-use reservations upon opening the Billing screen.

Why are transactions such as night-stay charges or taxes added when we open the billing screen? That sounds counterintuitive, but maybe it was a fair tradeoff 30 or 20 years ago for an on-premises system installed in a specific hotel (ahem, “property”). Yes, Opera had (and maybe still has) consultants, similar to SAP. They’d go to the hotel, head to the back office, log in to the server and hack stored procedures in the Oracle database to fine-tune Opera’s behaviour.

Also, is there really a “Draining Pending Postings” option before doing checkout nowadays? Maybe there is, but in modern systems, we shouldn’t have to manually check all the bills and explicitly pull them from integrated payment solutions. Nowadays, all the accommodation charges should already be recorded on the bill. The financial module continuously collects charges from payment gateways throughout the stay. There’s no separate moment of “draining”. The LLM invented a coordination command to match a phrase (“drain the interfaces”) that real cashiers use as shorthand for “let me check that nothing’s outstanding.”

As asked, the LLM researched systems in this space: OPERA, Mews, Apaleo, and Cloudbeds. Each has its own vocabulary and mechanics, and the training data heavily favours OPERA because its documentation is everywhere. When I pushed back on terminology, the model would just swap in different OPERA jargon instead of actually thinking about what’s modern. The blending happened invisibly, mixing vocabulary from one system with mechanics from another, all delivered with equal confidence.

To be fair, the same happens quite often when talking to domain experts. They explain how it works now and what people do. Quite often, they bring us solutions instead of problems. Usually, those solutions are based on their experience and how they envision an updated version. This can be fine as a brain dump, but it’s not enough to translate it directly into the software design.

In Domain-Driven Design, finding and understanding the ubiquitous language is considered the most important aspect. Yet, Ubiquitous Language is not the source of truth. It’s the way to keep our heads from exploding due to the constant split-brain situation. It’s a tool to reduce cognitive load and the need for additional translation. That’s why we separate domain contexts and bind them to specific departments, people, and the language they use.

It’s fine to start with the current state of the art, understanding how people do their jobs. We need to understand, though, that what we get is a mixture of habits (both good and bad), tribal knowledge, jargon, etc. If you ask different tribes, each will tell you something different.

Domain language is a cognitive tool, not gospel. LLMs compound this problem by reiterating competitor vocabulary without understanding the reasoning behind those systems, and they tend to align with whatever the prompter already believes.

Domain experts also struggle to define what they want and how it should work. That’s also why they hire us. It’s our job to help them and to translate those sometimes contradicting visions into working software. That’s what we learn and what we do when modelling and, step by step, shaping the working software. That’s our work as engineers. We should work together to have a proper outcome. Collaborate.

And I’m not making the bold statement here that we’re smarter or we know better. We’re not; we have different expertise and different roles in the software development process.

Contrary to what many people believe in the DDD community, in my opinion, we’re not here to become domain experts; we’re here to build software.

The model we build doesn’t have to reflect the whole universe; it’s a way to take part of the business, understand it, and automate it in the form of a software system. A software system is a tool for the business to make more money. So software needs to be useful in a certain, defined way. Not in all possible ways.

We need to learn how to communicate with business people, for instance:

  • Don’t use jargon or acronyms or assume someone should know something.
  • Understand that what someone wants to tell us might not be what we hear.
  • Do not take others’ behaviour personally.
  • Be assertive, critical and sceptical, also towards our own judgments.
  • Be curious about the business domain. Don’t assume too much.
  • Don’t use “business won’t let me” as an easy excuse.

For some people, that’s too much. That’s probably why they try to ask an LLM instead of domain experts, hoping we won’t need to learn all that and we’ll get a solution for free.

We’re sometimes annoyed at being pushed hard by business people, I get that, but if we want to build something useful, that’s also where LLMs will fail, as they will just eventually agree with what we believe. And that’ll probably be even worse than the skewed reality shown by the domain expert, since this will be an Artificial Reality.

I keep hearing blanket statements that “LLMs are great for research”. Blanket because they usually aren’t followed up with what the author means by “great”. LLMs have many inherent limitations here. We’re anthropomorphising them too much. They don’t reason; they’re statistical machines. We should constantly remind ourselves of that.

I think people who claim that “LLMs are great at research” are just conflating their own skills and projecting them onto LLMs.

Hence, in my opinion, that’s why we’re getting those hot takes. It might be that the authors simply have the skills to drill down, organise research, evaluate it, and model system design.

And I think this narrative is dangerous.

Because I, too, could sit down and say that I iteratively arrived at a solution thanks to LLMs. I could write this article with a much different narrative, praising LLMs. I could say that if someone couldn’t get the same result, then that’s a skill issue.

But that wouldn’t be true, because someone without my experience in the domain and in modelling probably wouldn’t pick up on it.

Furthermore, it’s not necessarily true that I did it well; that would just be my perspective.

In practice, in this context, “LLM does it well” means “LLM does it the way I wanted it to.” And if the outcome is right, it is more likely that the “operator” had the necessary skill and used an LLM as a tool to speed it up.

The research I did looked at what our competition does and how they name things, but without internal knowledge of why those systems got to where they are. If our goal is to provide additional value to users, then a blind copy won’t take us far. Instead of doing a Lift and Shift that builds something with the same features but does them better, we’ll get Lift and Shift with a silent ‘f’.

Yes, LLMs can gather and compile multiple sources into a summary. That’s actually impressive and can spare us a lot of time. They’re also trained on public documents, so some of that knowledge is built in. They look impressive on the surface, but less so if you look deeper. In the research mentioned above, I got disconnected pieces of information; the techniques were randomly assembled with muddled technical and business language, and the whole thing didn’t tell a coherent story. Because of the randomness, you never know where the knowledge came from, what was omitted and what was skewed. With domain experts, you can at least understand the origin of their biases.

LLMs also have limitations. The biggest are the context size and being a yes-man. If we ask for the big picture, we’ll usually end up with a swamp of information instead. If we don’t know how to sort this knowledge, what we’re looking for, and how to do a proper drill-down, then we won’t be able to untangle it. If we don’t have domain experts and don’t know the domain, how will we challenge issues like the ones I gave above?

I also keep hearing that LLMs are really useful in modelling. Sure, I tried that. I asked an LLM to try to model, and well, even after numerous iterations and using modelling tools, the results were mediocre. Mostly clichés and bad modelling practices.

The output muddied what we had before. The context was lost, and the process was oversimplified. When asked to apply specific tools like the C4 model, EventStorming, and Context Maps, it forced DDD patterns instead of describing actual processes. Even with precise instructions, it built models with broken notation where rules floated outside the command-event chains, leaned on clichéd solutions, and lost context between iterations. The domain was also presented as a big ball of mud. Concepts from room reservation bled into the cashiering module. The same mistakes recurred across different versions.

If you give them too much context, they’ll mix and blend multiple conflicting vocabularies from different operational tribes, trying to satisfy you. If you give them too little, they’ll come up with simplistic hallucinations.

Is it hopeless then? Not at all, we can get help from LLMs, but only when we use them as tools, not replacements. We should not outsource thinking.

LLMs are useful for grasping the big picture and identifying known unknowns that we might otherwise miss as we get into the domain. We can learn about language specifics, as with the Folio, Postings and Account Receivables terms mentioned above. LLMs can help us drill down into interesting aspects. We can get a brief understanding of domains we don’t know at all. But then we need to dive deeper and collaborate with our domain experts, real stakeholders. We need to understand what we want to build, organise our findings and focus on a certain context.

If we want to model our system, then LLMs can help us do the boring work. They can organise our findings and create rapid text-based representations in a defined format (including modelling practices). This can work. Yet it’s still our job to do the domain discovery, refine the knowledge, evaluate and test it. We still need to focus on the feedback loop with the real world.

We won’t hide from building modelling skills, we won’t skip the process of translating the domain into technical design, and we’ll still need to drive how and when to drill down. Those are engineering skills that were needed and will still be needed. And that’s fine; if they weren’t, why would someone want to hire us?

I see that (for unknown reasons) collaboration is not discussed anymore. Working collaboratively with fellow humans has become something people dread rather than embrace. Maybe some people hope that they won’t need to talk to “domain experts” and “fight them”, because they “have everything here in the LLM”.

And yes, they have it all.

All the same issues they would have with domain experts.

Just as we shouldn’t trust domain experts blindly but work with them, we shouldn’t blindly trust LLMs. We shouldn’t drop our engineering and design skills and outsource them.

If we want our software products to be better than the competition, simply blending their experience isn’t enough. We should focus on finding our special sauce, and I don’t see any other way to make it right than to work together and collaborate. We can use LLMs to help us do it faster, but as tools, not as solutions or replacements.


Cheers!

Oskar

p.s. Ukraine is still under brutal Russian invasion. A lot of Ukrainian people are hurt, without shelter and need help. You can help in various ways, for instance, directly helping refugees, spreading awareness, putting pressure on your local government or companies. You can also support Ukraine by donating e.g. to Red Cross, Ukraine humanitarian organisation or donate Ambulances for Ukraine.


C#: How to Refactor Legacy Code Safely


Legacy C# code is usually not dangerous because it is old. It is dangerous because you do not fully know which parts are stable, which parts are accidentally correct, and which parts are one small change away from breaking production.


Podcast: From Java EE to Quarkus and LLMs: Adam Bien’s Playbook for Boring, Future‑Proof Systems


Adam Bien, an independent consultant and pioneer of zero dependencies in the enterprise world of Java, highlights the benefits of consistently using standards, regardless of whether they involve Java or existing patterns. He argues that by doing so, he managed to future-proof the systems he built, preparing them for the cloud era and even for the AI-Native era.

By Adam Bien