Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

PEAS for Agent AI

1 Share

A classic AI framework to define an agent’s task environment is PEAS. It stands for:

  • Performance
  • Environment
  • Actuators
  • Sensors

Performance

Performance defines success for our agent (the objective and measurable criteria for evaluating the agent’s behavior). A good performance measure will evaluate the state of the environment, not the agent’s internal state.

Designing for performance is typically the hardest part of designing an agent. Thinking deeply about what you want to accomplish, before you start coding, is the key to creating a successful agent.

For example, a naïve performance measure for an automated vehicle might be “Get me to my destination.” A more robust performance measure might include:

  • Speed
  • Comfort
  • Fuel consumption
  • Following traffic laws
  • Safety

Some of these conflict (e.g., speed vs. safety).

To ensure that performance matches your goals, you will need to trade off these values. You might end up with something like this:

  • Value safety over speed
  • Value speed over comfort
  • Value fuel consumption over speed
  • Value comfort over fuel consumption
  • Value traffic laws over comfort or fuel consumption

Even this is a bit simplistic. Another approach is to give weights to the various factors and see how they balance out. As you can see, deep thinking is required to get this right.
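The weighted approach can be sketched in a few lines of code. The factor names and weights below are purely illustrative assumptions, not values from any real system:

```typescript
// Hypothetical weighted performance measure for a driving agent.
// Factor names and weights are illustrative, not from a real system.
type Factors = { safety: number; speed: number; comfort: number; fuel: number; lawfulness: number };

const weights: Factors = { safety: 0.4, speed: 0.2, comfort: 0.1, fuel: 0.1, lawfulness: 0.2 };

// Each observed factor is normalized to [0, 1]; the measure is a weighted sum.
function performanceScore(observed: Factors): number {
  return (Object.keys(weights) as (keyof Factors)[])
    .reduce((sum, k) => sum + weights[k] * observed[k], 0);
}

// A run that is perfectly safe and lawful but slow still scores reasonably well:
const score = performanceScore({ safety: 1, speed: 0.3, comfort: 0.8, fuel: 0.7, lawfulness: 1 });
console.log(score.toFixed(2)); // 0.4 + 0.06 + 0.08 + 0.07 + 0.2 = 0.81
```

Tuning those weights is where the deep thinking happens: the numbers encode exactly the trade-offs listed above.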

Environment

This refers to the “world” the agent lives in—that is, everything external to the agent. In the case above, the roads, streets, traffic lights, pedestrians, etc., constitute the environment.

Actuators

These are the things that change the environment. In our self-driving car these are the steering wheel, the brakes, and the accelerator. In a robot these might be legs, arms, and fingers. And, most relevant to most of us, in code these are methods that call APIs, functions that change values, etc.

Sensors

Sensors are how the agent perceives its environment. Like humans, the only way the agent can know about the world it moves in is through its sensors. For our self-driving car this might include:

  • GPS
  • LiDAR
  • Cameras
  • Tire pressure sensors

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Single-Agent vs Multi-Agent AI: Architecting the Future of AI-Native Systems

Explore single-agent vs. multi-agent AI architectures for AI-native systems. Learn which approach suits your needs for scalable, efficient AI solutions.

When benchmarks go bad - what I learned from measuring performance wrong


The world of performance analysis is littered with flawed claims, cognitive biases, dangerous intuitions, and beguiling fallacies. Sadly, Holly has been guilty of all of the above! Repeatedly. But this is a no-judgement zone. Some measurement anti-patterns are subtle, and some are downright counter-intuitive. In this talk, Holly will explain why measuring performance is important, and talk through some of the ways it can go wrong. That would be depressing if that were all there was, so she’ll also introduce a toolbox of questions and principles that you can use to improve the performance of your own applications.

These include:

  • How to set up a test system
  • Recommended load generators
  • The USE method

How to ensure success following a cloud migration


Making sure everything is running smoothly after a cloud migration is critically important, even after a lot of time has passed. It’s also important to continue the optimization journey post-migration. Pat Wright explains why in this article, part of his ‘How to migrate from on-prem to the cloud’ series.

You’ll likely have one or two processes that didn’t properly migrate and now don’t work – you’ll want to find these before they impact your customers. You’ll also want to optimize your resources now that you’re in this shiny new cloud, after all those hours of work.

It’s also important to understand that migrations tend to scale everything up, using more resources to deal with the complexity of the migration. You may have started this process to save costs in the future – so now you need to focus on how your system is actually running, and optimize for it. But how best to do so? Here’s my advice.

Ensure the key baselines are on the system

Knowing what resources (CPU, memory, disk) the system has been using will give you an idea of what you should expect from your new cloud servers. You can use these baselines to compare against what the cloud is doing and make adjustments.
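A minimal sketch of that comparison, assuming you exported per-server baselines before the migration. The metric names, numbers, and the 25% drift threshold are all made up for illustration:

```typescript
// Compare pre-migration baselines against current cloud usage and flag drift.
// Metric names, values, and the 25% threshold are illustrative assumptions.
type Usage = { cpu: number; memoryGb: number; diskGb: number };

function driftReport(baseline: Usage, current: Usage, thresholdPct = 25): string[] {
  const findings: string[] = [];
  for (const key of Object.keys(baseline) as (keyof Usage)[]) {
    const pct = ((current[key] - baseline[key]) / baseline[key]) * 100;
    if (Math.abs(pct) > thresholdPct) {
      findings.push(`${key}: ${pct.toFixed(0)}% vs. on-prem baseline`);
    }
  }
  return findings;
}

const onPrem: Usage = { cpu: 40, memoryGb: 16, diskGb: 200 };
const cloud: Usage  = { cpu: 75, memoryGb: 18, diskGb: 210 };
console.log(driftReport(onPrem, cloud)); // [ "cpu: 88% vs. on-prem baseline" ]
```

Anything the report flags is either a migration problem to investigate or a resource to rightsize.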

Minimize, minimize, minimize…

Now that you’re in the cloud, it’s crucial to understand that everything has a cost. The more you can review what is actually needed (or not), the more you can trim down and cut those costs. So, focus on what you can now shrink, even if this just means asking of each cluster, “do we need these resources?”

…and set up automation to help do so

On a similar note, it’s good practice to set up automation focused on removing servers and resources you no longer need. It’s now VERY easy to create a server to test things. It’s also very easy to forget you have that server and never shut it down! Automation can greatly help here, so you should put something in place sooner rather than later.
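The heart of such a cleanup job is simple: flag anything idle past a cutoff, skipping resources that are pinned as permanent. The inventory shape and the 14-day cutoff below are assumptions; in practice you would feed this from your cloud provider’s API (tags, last-activity metrics):

```typescript
// Sketch of a cleanup pass: flag servers idle past a cutoff for removal.
// The inventory shape and 14-day cutoff are illustrative assumptions.
interface Server {
  name: string;
  lastActiveDaysAgo: number;
  pinned: boolean; // explicitly protected from automated cleanup
}

function serversToRemove(inventory: Server[], idleCutoffDays = 14): string[] {
  return inventory
    .filter(s => !s.pinned && s.lastActiveDaysAgo > idleCutoffDays)
    .map(s => s.name);
}

const inventory: Server[] = [
  { name: "test-vm-01", lastActiveDaysAgo: 45, pinned: false },
  { name: "prod-db",    lastActiveDaysAgo: 0,  pinned: true  },
  { name: "demo-vm",    lastActiveDaysAgo: 20, pinned: false },
];
console.log(serversToRemove(inventory)); // [ "test-vm-01", "demo-vm" ]
```

Run something like this on a schedule and route the output to a ticket or an approval step before anything is actually deleted.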

Learn what metrics you now need to track 

Cloud servers have some metrics that don’t act the same as on-premises servers, so it’s important you understand these. Working directly with your cloud provider should help here.

Check your permissions, security, and roles

Security may have taken a back seat during the migration phase, so it’s imperative to now review that and get everything in order.

A cloud migration is never just ‘done’

Even after completing all the steps above, you’re never truly ‘finished’ with a cloud migration. At least, I’ve never seen it myself. Typically, you move a few big components or applications over and then spend even more time optimizing and improving. Migrations change everything about a company and how it works, and they do make things better in the end. Of course, you now want to take advantage of that, and focus on what you can do to continue to optimize for the future.

But first – celebrate!

It’s important to let people know all the good work they’ve put into the project. Cloud migrations can take a long time, so I suggest setting milestones along the way to ‘celebrate’ when you hit them.

The importance of knowledge sharing

The cloud migration may not be over because, as mentioned, it rarely ever is. However, the team that executed the migration may now have other projects to work on, so others need to be trained up. Just like I’m sharing knowledge in these articles, it’s important to do the same with your colleagues. To make things easier, you’ll ideally have documented everything during the migration, as noted in a previous article.

Focus on automation to ensure future migration success

This is loosely connected to what I mentioned earlier, but this time I’m more referring to automation that can help with future cloud migrations. You may have written some good scripts during the migration, and made some useful tools, so now’s a great time to harden them and adapt them for a variety of situations.

Check any fixes you may have made during the cloud migration

Perhaps, during the cloud migration, you needed to put in place some temporary fixes to the application. Well, don’t lose those changes! Make sure to go back and address them.

In summary: the migration is never fully ‘over’

This most likely won’t be the end of the project. For example, you may have other applications that you weren’t able to move alongside everything else, so these now need to be taken care of. In a large organization, a cloud migration can sometimes even take several years! For now, though, make sure to celebrate – migrations are one of the largest projects you can take on.

Cloud adoption is accelerating, but database migrations aren’t keeping pace. Find out why.

The Cloud Migration Divide explores why complex, business-critical databases remain on-premises – and what’s holding organizations back as confidence fails to scale with complexity.

FAQs: How to ensure success following a cloud migration

1. What should you do after a cloud migration?

After a cloud migration, you should monitor system performance, identify broken processes, optimize resource usage, improve security, and implement automation to reduce costs and improve efficiency.

2. Why is optimization important after moving to the cloud?

Cloud environments often scale resources up during migration. Optimization helps reduce unnecessary costs, improve performance, and ensure your infrastructure matches real usage needs.

3. How can you reduce cloud costs post-migration?

You can reduce costs by minimizing unused resources, rightsizing servers, removing unnecessary workloads, and using automation to shut down idle systems.

4. What metrics should you track in the cloud?

Key metrics include CPU usage, memory, disk performance, and cloud-specific metrics like scaling behavior and service utilization, which may differ from on-prem systems.

5. How does automation help after cloud migration?

Automation helps manage resources efficiently, remove unused infrastructure, enforce policies, and prevent unnecessary spending from forgotten services.

6. Why is security review important after migration?

Security may be overlooked during migration, so reviewing permissions, roles, and access controls ensures your cloud environment is protected against vulnerabilities.

7. Is a cloud migration ever truly finished?

No, cloud migration is an ongoing process. Continuous optimization, updates, and improvements are required to maintain performance and cost efficiency.

8. What role does knowledge sharing play after migration?

Knowledge sharing ensures teams understand the new cloud environment, reduces reliance on migration teams, and supports future improvements and scalability.

The post How to ensure success following a cloud migration appeared first on Simple Talk.


How to Verify Network Connectivity in .NET MAUI


Learn the key steps to checking network connectivity in the various platforms available to your .NET MAUI app.

Identifying the connectivity of the devices running our mobile applications allows us to have much more precise control over the decisions we need to make within the app. From knowing whether the device has internet access, if the connection is limited, and if connection types such as Bluetooth, WiFi or Ethernet are active, all this information helps us provide a better user experience.

For example, we can decide whether to show an empty state when there is no internet connection, prevent certain actions or clearly inform the user about what is happening. This is especially important because, in many cases, the user loses connectivity and assumes the problem is with the application, when in reality it is a network issue.

Additionally, we can display different scenarios or behaviors depending on the type of connection available, such as when the device only has Bluetooth active or does not have internet access. That’s why it’s essential to know how to detect these connectivity states. The good news is that in .NET MAUI, we have the ability to do this in a very simple way. Let’s take a look!

First, Platform Configuration

Before starting with any implementation, it’s important to verify whether you need to apply any platform-specific configuration. Some platforms may require additional setup, while others work out of the box.

Of these, iOS/Mac Catalyst and Windows require no additional configuration.

For Android, to access connectivity information, you must add the ACCESS_NETWORK_STATE permission. There are three different ways to add this permission on Android:

Android Option 1: Add the Permission Directly in the AndroidManifest.xml

Go to Platforms → Android, open the AndroidManifest.xml file, and add the following node:

<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />

Android Option 2: Use the Android Manifest Graphical Editor

Go to Platforms → Android, double-click the AndroidManifest.xml file, and locate the Required permissions section. Find the permission labeled ACCESS_NETWORK_STATE and simply check it.

Android Option 3: Add the Assembly-based Permission

Go to Platforms → Android → MainApplication.cs and add the permission as follows:

[assembly: UsesPermission(Android.Manifest.Permission.AccessNetworkState)]

Network Accessibility in .NET MAUI

To inspect the network accessibility on a device, .NET MAUI provides the IConnectivity interface. This is part of the Microsoft.Maui.Networking namespace and is available through the Connectivity.Current property.

One thing I really like about this API is that it doesn’t simply return a boolean value indicating whether there is internet access or not. Instead, it provides much more detailed information, such as the scope of the current network (for example, Internet, ConstrainedInternet and others), as well as details about active connection profiles like Bluetooth, Cellular, WiFi and others. It also exposes an event that allows you to monitor changes in the device’s connectivity state in real time.

Next, we’ll take a closer look at each of these values to better understand what they mean and how you can use them in your applications.

How to Inspect the Current Network Scope?

Thanks to the .NET MAUI team, we can determine the scope of the current network in a much more precise way through the NetworkAccess property. This property provides different values that we can evaluate to obtain more detailed information about the device’s connectivity state. These values are the following:

Internet: Indicates that the device has access to both the local network and the internet. This is the ideal state for making API calls or performing any action that requires a full internet connection.

Local: The device has access only to the local network.

None: No type of connectivity is available. In this state, it’s ideal to inform the user that all or some actions within the app will not work correctly due to the lack of connectivity. This can be done through an empty state, an alert or any other approach that best fits the scenario you’re working on.

Unknown: It’s not possible to determine the connectivity state. If this happens, while the correct information is being retrieved, it’s recommended to inform the user in the same way as in the None state.

ConstrainedInternet: Indicates that the device has limited internet access. This state usually appears when the device is connected to a network with a captive portal, meaning networks that require accepting terms or entering information before granting full internet access. This is common in places like airports, universities or hotels.

Thanks to all these states, as developers we can be much more specific in how we communicate connectivity issues to our users and better adapt the behavior of our applications.

For the code implementation, you can do something like what I show below:

NetworkAccess accessType = Connectivity.Current.NetworkAccess; 

if (accessType == NetworkAccess.Internet) 
{ 
    // Add the code that you need here 
}

Checking Active Connection Profiles

While NetworkAccess tells us how accessible the network is (internet, local, none, etc.), ConnectionProfiles allows us to know which type of connection the device is actively using at a given moment.

The types of connections we can detect are the following:

  • WiFi
  • Cellular (mobile data)
  • Bluetooth
  • Ethernet

This information is extremely useful for making decisions within your application. For example:

  • You can limit your app to download files only when the device is connected to a WiFi network.
  • Enable local features when only a Bluetooth connection is available.

It’s important to keep in mind that Connectivity.Current.ConnectionProfiles returns a collection (IEnumerable<ConnectionProfile>), because a device can have multiple connection types active at the same time. For instance, the device may be connected to WiFi while Bluetooth is enabled simultaneously.

In code, the implementation would look like the following:

IEnumerable<ConnectionProfile> profiles = Connectivity.Current.ConnectionProfiles;

if (profiles.Contains(ConnectionProfile.WiFi)) 
{ 
    // Add the code that you need here. 
}

Reacting to Connectivity Changes

We know that network conditions can change at any moment. For this reason, .NET MAUI provides the ConnectivityChanged event, which allows us to detect when network access or active connection profiles change. This makes it possible for our applications to react immediately to these changes, without breaking the app experience.

Let’s take a look at an example based on the official documentation:

public class ConnectivityListener
{
    public ConnectivityListener()
    {
        Connectivity.ConnectivityChanged += OnConnectivityChanged;
    }

    void OnConnectivityChanged(object sender, ConnectivityChangedEventArgs e)
    {
        if (e.NetworkAccess != NetworkAccess.Internet)
        {
            Console.WriteLine("No internet connection available.");
            return;
        }

        if (e.ConnectionProfiles.Contains(ConnectionProfile.WiFi))
        {
            Console.WriteLine("Connected via Wi-Fi.");
        }
        else if (e.ConnectionProfiles.Contains(ConnectionProfile.Cellular))
        {
            Console.WriteLine("Using mobile data.");
        }
    }
}

⚠️ Important Considerations

NetworkAccess.Internet: Due to how connectivity detection works on each platform, .NET MAUI can only detect that a network connection is available. It does not guarantee that the connection has real internet access. For example, a device may be connected to a WiFi network, but the router itself may not have an internet connection.

Conclusion

And that’s it! In this article, you explored how to work with network connectivity in .NET MAUI using Connectivity. You learned how to determine the scope of the current network, detect active connection profiles such as WiFi, Cellular, Bluetooth and Ethernet, and react to connectivity changes in real time using the ConnectivityChanged event.

With this knowledge, you can now make better decisions in your apps, provide clearer feedback to users and build more resilient experiences that adapt to changing network conditions.

See you in the next article! ✨

References

Code samples and explanations were based on the official .NET MAUI documentation.


Optimizing Angular Pivot Table Performance Using Virtualization Techniques


TL;DR: Angular pivot tables can struggle with large datasets and complex calculations. This article explores common performance bottlenecks and why virtualization is essential for building faster, scalable, and user‑friendly data‑heavy Angular applications.

As Angular applications scale to support large datasets, pivot tables often become a performance bottleneck. Rendering thousands of rows and columns can lead to long load times, high memory usage, and sluggish interactions. These challenges are especially noticeable in data-heavy dashboards, where users expect smooth scrolling, fast filtering, and responsive drill-down operations. 

This article examines why pivot table performance degrades at scale and explores how virtualization fits into the overall rendering architecture of Syncfusion® Angular Pivot Table.

Why large datasets hurt Angular Pivot Table performance

Pivot tables aggregate and display multidimensional data across rows, columns, headers, and values. As data volume increases, the number of rendered cells grows rapidly. For example, a grid with 10,000 rows and 50 columns results in hundreds of thousands of rendered elements.

This growth places pressure on the browser in several areas:

  • DOM size and memory consumption
  • Layout and paint recalculations
  • Interaction handling during scroll and resize
  • Repeated aggregation and reflow during user actions

These factors compound as datasets grow, making performance degradation unavoidable without architectural optimizations.

Understanding virtualization in Pivot Tables

Virtualization is a rendering optimization technique that decouples dataset size from DOM size. Instead of rendering every pivot cell at once, only the rows and columns visible within the viewport are rendered, along with a small buffer.

Key characteristics of virtualization include:

  • Rendering only visible cells while preserving full dataset navigation
  • Recycling a fixed pool of DOM elements instead of recreating them
  • Maintaining scrollbar behavior that represents the complete dataset
  • Keeping DOM complexity nearly constant regardless of data size

By limiting what the browser needs to render at any moment, virtualization reduces layout cost, memory overhead, and interaction latency.
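The core of that idea is a small calculation: from the scroll offset and viewport size, work out which row indices are visible and render only those plus a buffer. A language-agnostic sketch of the windowing math, not Syncfusion’s internal implementation:

```typescript
// Given a scroll offset, compute the window of row indices to render.
// A fixed row height is what makes this calculation cheap and exact.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  buffer = 5
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const visible = Math.ceil(viewportHeight / rowHeight);
  return {
    start: Math.max(0, first - buffer),
    end: Math.min(totalRows, first + visible + buffer), // exclusive
  };
}

// 100,000 rows, but only ~30 are ever in the DOM at once:
const r = visibleRange(36000, 600, 30, 100000);
console.log(r); // { start: 1195, end: 1225 }
```

Note that the window size depends only on the viewport and buffer, never on `totalRows`, which is why DOM complexity stays nearly constant as the dataset grows.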

Configuring virtualization in Angular

The Syncfusion Angular Pivot Table exposes virtualization through straightforward component-level settings. Enabling virtualization typically requires activating virtual scrolling and registering the required services.

A basic configuration includes:

  • Enabling virtualization at the component level
  • Providing the virtual scroll service
  • Defining fixed height and width values to establish a predictable viewport

To enable it, register the VirtualScrollService in the component’s providers and set enableVirtualization="true" on the component.

import { Component } from '@angular/core';
import { VirtualScrollService, VirtualScrollSettingsModel } from '@syncfusion/ej2-angular-pivotview';
import { DataSourceSettingsModel } from '@syncfusion/ej2-angular-pivotview';

@Component({
    selector: 'app-pivot-dashboard',
    providers: [VirtualScrollService],
    template: `
        <ejs-pivotview
            [dataSourceSettings]="dataSourceSettings"
            enableVirtualization="true"
            [virtualScrollSettings]="virtualScrollSettings"
            height="600px"
            width="100%">
        </ejs-pivotview>
    `
})
export class PivotDashboardComponent {
    // Placeholder for your real dataset; populate before binding.
    largeOrdersDataset: any[] = [];

    dataSourceSettings: DataSourceSettingsModel = {
        dataSource: this.largeOrdersDataset,
        rows: [{ name: 'Region' }],
        columns: [{ name: 'Year' }],
        values: [{ name: 'Revenue', type: 'Sum' }],
        filters: []
    };

    virtualScrollSettings: VirtualScrollSettingsModel = {
        allowSinglePage: true
    };
}

Virtualization affects only rendering behavior. The full dataset is still accepted, processed, and aggregated by the pivot engine.

Performance tuning strategies

Data shaping

Reduce dataset size before binding:

const filteredData = rawOrderData
    .filter(order => order.year >= 2020 && order.status === 'completed')
    .slice(0, 100000);

this.pivotComponent.dataSourceSettings.dataSource = filteredData;

Smaller input datasets reduce aggregation overhead before virtualization even begins.

Field optimization

Limit rows, columns, and values to what users actually need:

// Efficient configuration
rows = [{name: 'Region'}];
columns = [{name: 'Year'}];
values = [{ name: 'Revenue', type: 'Sum' }];

Avoid unnecessary fields, which multiply aggregation complexity and memory usage.

Viewport sizing

Larger viewports require rendering more cells. Use pixel-based dimensions and balance usability with performance:

height = window.innerHeight > 1000 ? '800px' : '500px';
width  = window.innerWidth  > 1200 ? '1200px' : '100%';

Smaller viewports reduce the working DOM set and improve responsiveness.

Strategic filtering

Apply filters early to narrow data scope:

filterSettings: [
    {
        name: 'Channel',
        type: 'Include',
        items: ['Online', 'Retail']
    },
    {
        name: 'Year',
        type: 'Include',
        items: ['2023', '2024', '2025']
    }
]

Filtering reduces both aggregation cost and rendered output.

Deferred layout updates

Batch multiple configuration changes to avoid repeated recomputation:

this.pivotComponent.setProperties(
    {
        dataSourceSettings: {
            rows: [...],
            columns: [...],
            values: [...]
        }
    },
    false // defer recalculation
);

Server-side aggregation overview

When datasets reach hundreds of thousands or millions of records, server‑side aggregation can offload heavy computation from the browser. In this architecture:

  • The client defines the pivot configuration and renders only visible cells
  • The server performs aggregation, filtering, and sorting
  • Only aggregated results required for the current view are transmitted

This separation reduces client memory usage and minimizes data transfer while maintaining interactive behavior.

Caching strategy

Each pivot instance is assigned a unique session GUID. The server caches:

  • Engine properties
  • Data sources
  • Aggregated results

Caches typically expire after 60 minutes to balance performance and memory usage. Subsequent interactions reuse cached results, dramatically improving responsiveness.
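A time-bounded cache like that can be sketched in a few lines. The 60-minute TTL matches the article; the class name, key, and injectable clock are illustrative assumptions:

```typescript
// Minimal TTL cache: entries expire after a fixed lifetime (60 minutes here).
// The injectable clock exists only to make expiry testable; default is Date.now.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private ttlMs = 60 * 60 * 1000, private now: () => number = Date.now) {}

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expiresAt) { // expired: evict lazily on read
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

// Keyed by session GUID, as the article describes:
let clock = 0;
const cache = new TtlCache<object>(60 * 60 * 1000, () => clock);
cache.set("session-guid-1", { aggregated: true });
clock = 59 * 60 * 1000; console.log(cache.get("session-guid-1")); // still cached
clock = 61 * 60 * 1000; console.log(cache.get("session-guid-1")); // undefined
```

The lazy-eviction-on-read design keeps the cache simple; a real server would also sweep expired sessions periodically to reclaim memory.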

Authentication

Use the beforeServiceInvoke event to attach authorization headers to all requests, ensuring secure communication without hard-coded credentials.

Virtualization limitations and constraints

Virtualization introduces certain constraints that should be accounted for:

  • Column widths should be defined using fixed pixel values
  • Row heights must remain consistent for accurate scroll calculations
  • Features that rely on dynamic sizing may not behave predictably at scale
  • Built‑in grouping is best applied at the data source level for large datasets

Understanding these constraints helps avoid rendering inconsistencies and unexpected behavior.

Common troubleshooting

Common issues encountered with virtualization include:

  • Virtualization not activating due to missing configuration or services
  • Scroll artifacts caused by inconsistent row heights or column widths
  • Performance slowdowns related to excessive aggregation or unfiltered data

These issues are typically related to configuration scope rather than rendering logic itself.

Frequently Asked Questions

Does virtualization work with all data types and aggregation functions?

Yes. Virtualization affects only how pivot cells are rendered in the browser. It works independently of data types or aggregation logic, whether the pivot table uses sums, counts, averages, or custom calculations.

Does virtualization change how much data the pivot table processes?

No. The pivot table still processes and aggregates the complete dataset. Virtualization only controls how much of the aggregated result is rendered at any given time.

Does virtualization affect exporting pivot table data?

No. Export operations include the full aggregated pivot result, not just the visible portion. Virtualization does not limit or truncate exported data.

Is virtualization compatible with interactive features like drill‑down?

Yes. Drill‑down interactions continue to work as expected. The detail rows revealed during drill‑down are rendered using the same virtual rendering mechanism, helping maintain consistent performance.

Is virtualization useful if users don’t scroll much?

Virtualization is most beneficial when pivot tables display large result sets that exceed the viewport. If the entire pivot output fits within the visible area, the performance impact of virtualization is minimal but generally safe to keep enabled.


Conclusion

Thank you for reading! Virtualization plays a key role in addressing performance constraints when handling large datasets in the Syncfusion Angular Pivot Table. By limiting rendering to the visible portion of the pivot layout and managing DOM usage efficiently, virtualization helps maintain responsiveness as data volume grows.

Understanding where virtualization fits within the pivot table architecture and how it interacts with data size, aggregation scope, and layout constraints provides a clearer foundation for building scalable, data‑intensive Angular applications.

If you’re a Syncfusion user, you can download the setup from the license and downloads page. Otherwise, you can download a free 30-day trial.

You can also contact us through our support forum, support portal, or feedback portal for queries. We are always happy to assist you!
