
Exploring the AI-Powered KendoReact Smart Grid

The KendoReact SmartGrid brings AI prompting capabilities to your favorite enterprise-grade component.

Working with large datasets in web applications can involve repetitive tasks, such as manually configuring filters, setting sorting rules or grouping data in multiple ways to find the information you need. While traditional data grids (like the Progress KendoReact Data Grid) provide the tools to accomplish these operations, they require users to understand the grid’s interface and manually configure each step of each operation.

What if our data grid could understand natural language commands? What if we could ask it to “show only failed transactions over $500” or “group customers by region and sort by revenue”? This is exactly what the new KendoReact SmartGrid offers.

KendoReact SmartGrid

The KendoReact SmartGrid enhances the traditional KendoReact DataGrid with AI-powered capabilities that enable users to interact with data using natural language. Instead of clicking through menus and manually configuring filters, users can describe what they want to see, and the AI assistant translates those requests into grid operations.

The SmartGrid maintains all the powerful features of the standard grid (sorting, filtering, grouping, etc.) while adding an intelligent layer that interprets user intent and automatically applies the appropriate data operations.

Let’s start with a basic implementation to see the SmartGrid in action:

import * as React from 'react';
import {
  Grid,
  GridColumn,
  GridToolbar,
  GridToolbarAIAssistant,
} from '@progress/kendo-react-grid';
import { filterIcon } from '@progress/kendo-svg-icons';

const App = () => {
  const transactions = [
    {
      id: 1,
      customerName: 'Acme Corp',
      amount: 1250.0,
      currency: 'USD',
      status: 'Completed',
      transType: 'Deposit',
      transDate: new Date('2024-10-15'),
    },
    {
      id: 2,
      customerName: 'Tech Solutions',
      amount: 450.0,
      currency: 'EUR',
      status: 'Failed',
      transType: 'Withdrawal',
      transDate: new Date('2024-10-16'),
    },
    {
      id: 3,
      customerName: 'Global Industries',
      amount: 2100.0,
      currency: 'USD',
      status: 'Completed',
      transType: 'Deposit',
      transDate: new Date('2024-10-17'),
    },
    {
      id: 4,
      customerName: 'StartUp Inc',
      amount: 850.0,
      currency: 'GBP',
      status: 'Pending',
      transType: 'Withdrawal',
      transDate: new Date('2024-10-18'),
    },
    {
      id: 5,
      customerName: 'Enterprise Co',
      amount: 3200.0,
      currency: 'USD',
      status: 'Completed',
      transType: 'Deposit',
      transDate: new Date('2024-10-19'),
    },
  ];

  return (
    <Grid
      autoProcessData={true}
      dataItemKey="id"
      data={transactions}
      sortable={true}
      groupable={true}
      pageable={true}
      columnMenuIcon={filterIcon}
    >
      <GridToolbar>
        <GridToolbarAIAssistant
          requestUrl="https://demos.telerik.com/service/v2/ai/grid/smart-state"
          promptPlaceHolder="Filter, sort or group with AI"
          suggestionsList={[
            'Sort by amount descending',
            'Show only completed transactions',
            'Group by transaction type',
            'Filter where currency is USD',
          ]}
          enableSpeechToText={true}
        />
      </GridToolbar>

      <GridColumn field="customerName" title="Customer Name" width={180} />
      <GridColumn field="amount" title="Amount" width={120} format="{0:c2}" />
      <GridColumn field="currency" title="Currency" width={100} />
      <GridColumn field="status" title="Status" width={120} />
      <GridColumn field="transType" title="Type" width={120} />
      <GridColumn
        field="transDate"
        title="Date"
        width={140}
        format="{0:MM/dd/yyyy}"
      />
    </Grid>
  );
};

export default App;

In the example above, we’ve created a transaction grid using the AI-powered toolbar assistant. The GridToolbarAIAssistant component provides a natural language interface that lets users type commands like “show only completed transactions” or “sort by amount descending,” and the AI automatically applies those operations to the grid.

The autoProcessData prop is key here: it enables the grid to automatically handle state updates when AI operations are applied, eliminating the need for manual state management in simple scenarios.

The SmartGrid is part of KendoReact Premium, an enterprise-grade UI library with 120+ components. The AI features demonstrated here use a Telerik-hosted service for demonstration purposes. For production applications, you’ll want to implement your own AI service tailored to your specific domain and data requirements.

Voice Input Support

A cool feature of the SmartGrid is its support for speech-to-text input. By setting enableSpeechToText={true} on the GridToolbarAIAssistant, users can speak their commands directly instead of typing them.

To use voice input, users click the microphone icon in the AI assistant and speak their command. The grid processes spoken commands the same way as typed text, so commands like “show only completed transactions” work just as seamlessly.

The voice input feature uses the browser’s native Web Speech API, so it works across modern browsers without additional dependencies or setup.
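For illustration only, here is a minimal sketch of that underlying browser API with feature detection. None of this code is needed when using the SmartGrid (the component wires it up when enableSpeechToText is true); the prefixed webkitSpeechRecognition fallback is just the common cross-browser pattern.

// Plain Web Speech API usage (not SmartGrid internals): roughly the
// browser capability the assistant builds on.
const SpeechRecognitionCtor =
  window.SpeechRecognition || window.webkitSpeechRecognition;

if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.lang = 'en-US';
  recognition.interimResults = false;

  recognition.onresult = (event) => {
    // The transcript is handled just like typed input.
    const transcript = event.results[0][0].transcript;
    console.log('Heard:', transcript);
  };

  recognition.start();
} else {
  console.warn('Speech recognition is not supported in this browser.');
}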

Tracking Operations with Controlled Integration

While autoProcessData works great for simple scenarios, we may need more control over how AI operations are applied to our grid. For example, we can log user interactions, validate AI suggestions before applying them or display an operation history to users.

The controlled approach gives us this flexibility through the onResponseSuccess callback:

import * as React from 'react';
import { process } from '@progress/kendo-data-query';
import {
  Grid,
  GridColumn,
  GridToolbar,
  GridToolbarAIAssistant,
} from '@progress/kendo-react-grid';

const App = () => {
  const [dataState, setDataState] = React.useState({
    skip: 0,
    take: 10,
    sort: [],
    filter: undefined,
    group: [],
  });

  const [outputs, setOutputs] = React.useState([]);
  const aiAssistantRef = React.useRef(null);

  const transactions = [
    {
      id: 1,
      customerName: 'Acme Corp',
      amount: 1250.0,
      status: 'Completed',
      currency: 'USD',
    },
    {
      id: 2,
      customerName: 'Tech Solutions',
      amount: 450.0,
      status: 'Failed',
      currency: 'EUR',
    },
    {
      id: 3,
      customerName: 'Global Industries',
      amount: 2100.0,
      status: 'Completed',
      currency: 'USD',
    },
    {
      id: 4,
      customerName: 'StartUp Inc',
      amount: 850.0,
      status: 'Pending',
      currency: 'GBP',
    },
    {
      id: 5,
      customerName: 'Enterprise Co',
      amount: 3200.0,
      status: 'Completed',
      currency: 'USD',
    },
  ];

  const handleResponseSuccess = (response, promptMessage) => {
    if (response && response.data) {
      // Apply AI-suggested operations to grid state
      const newState = { ...dataState };

      if (response.data.sort) newState.sort = response.data.sort;
      if (response.data.filter) newState.filter = response.data.filter;
      if (response.data.group) newState.group = response.data.group;

      setDataState(newState);

      // Track the operation in history
      if (response.data.messages) {
        const newOutput = {
          id: outputs.length + 1,
          prompt: promptMessage,
          responseContent: response.data.messages.join('\n'),
        };
        setOutputs([newOutput, ...outputs]);
      }
    }

    aiAssistantRef.current?.hide();
  };

  const processedData = process(transactions, dataState);

  return (
    <Grid
      dataItemKey="id"
      data={processedData.data}
      total={processedData.total}
      sortable={true}
      sort={dataState.sort}
      onSortChange={(e) => setDataState({ ...dataState, sort: e.sort })}
      filterable={true}
      filter={dataState.filter}
      onFilterChange={(e) => setDataState({ ...dataState, filter: e.filter })}
      groupable={true}
      group={dataState.group}
      onGroupChange={(e) => setDataState({ ...dataState, group: e.group })}
      pageable={true}
      skip={dataState.skip}
      take={dataState.take}
      onPageChange={(e) =>
        setDataState({
          ...dataState,
          skip: e.page.skip,
          take: e.page.take,
        })
      }
    >
      <GridToolbar>
        <GridToolbarAIAssistant
          ref={aiAssistantRef}
          requestUrl="https://demos.telerik.com/service/v2/ai/grid/smart-state"
          onResponseSuccess={handleResponseSuccess}
          outputs={outputs}
          promptPlaceHolder="Filter, sort or group with AI"
        />
      </GridToolbar>

      <GridColumn field="customerName" title="Customer" width={180} />
      <GridColumn field="amount" title="Amount" width={120} format="{0:c2}" />
      <GridColumn field="status" title="Status" width={120} />
      <GridColumn field="currency" title="Currency" width={100} />
    </Grid>
  );
};

export default App;

In this controlled approach, we explicitly manage the grid’s state and intercept AI responses via onResponseSuccess. This allows us to:

  • Validate AI suggestions before applying them to the grid.
  • Log user interactions and AI operations for analytics or debugging.
  • Display operation history through the outputs prop, giving users transparency into what commands they’ve issued.

The outputs prop displays a history of AI operations, helping users understand how their natural language requests were interpreted and what operations were performed on the grid.
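As one concrete example of the first point, here is a minimal sketch of a validation guard for AI-suggested filters. The ALLOWED_FIELDS list and isValidFilter helper are hypothetical, not part of the SmartGrid API; the filter shape follows the kendo-data-query descriptor format used by the grid.

// Hypothetical guard: only apply AI-suggested filters that reference
// fields the grid actually exposes.
const ALLOWED_FIELDS = ['customerName', 'amount', 'status', 'currency'];

const isValidFilter = (filter) => {
  if (!filter) return true;
  // Composite filter descriptors nest child descriptors under `filters`.
  if (filter.filters) return filter.filters.every(isValidFilter);
  return ALLOWED_FIELDS.includes(filter.field);
};

// Inside handleResponseSuccess, before calling setDataState:
// if (response.data.filter && !isValidFilter(response.data.filter)) {
//   return; // reject the suggestion instead of applying it
// }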

Wrap-up

The KendoReact SmartGrid brings AI-powered capabilities to data grids, making complex data operations accessible through natural language. By simply describing what they want to see, users can filter, sort and group data without needing to understand the intricacies of the grid interface.

For more details on implementing the SmartGrid and exploring its capabilities, check out the official documentation.

And to try it for yourself, download the 30-day free trial.


Google Releases FunctionGemma Model


On the proper usage of a custom Win32 dialog class

Some time ago, I discussed custom dialog classes. You can specify that a dialog template use your custom dialog class by putting the custom class’s name in the CLASS statement of the dialog template. A customer tried doing that, but it crashed with a stack overflow.

// Dialog template

IDD_AWESOME DIALOGEX 0, 0, 170, 62
STYLE DS_SHELLFONT | DS_MODALFRAME | WS_POPUP | WS_CAPTION
CAPTION "I'm so awesome"
CLASS "MyAwesomeDialog"
FONT 8, "MS Shell Dlg", 0, 0, 0x1
BEGIN
    ICON            IDI_AWESOME,IDC_STATIC,14,14,20,20
    LTEXT           "Whee!",IDC_STATIC,42,14,114,8
    DEFPUSHBUTTON   "OK",IDOK,113,41,50,14,WS_GROUP
END

// Custom dialog class procedure
// Note: This looks ugly but that's not the point.
LRESULT CALLBACK CustomDlgProc(HWND hwnd, UINT message,
                               WPARAM wParam, LPARAM lParam)
{
    if (message == WM_CTLCOLORDLG) {
        return (LRESULT)GetSysColorBrush(COLOR_INFOBK);
    }
    return DefDlgProc(hwnd, message, wParam, lParam);
}

void Test()
{
    // Register the custom dialog class
    WNDCLASS wc{};
    GetClassInfo(nullptr, WC_DIALOG, &wc);
    wc.lpfnWndProc = CustomDlgProc;
    wc.lpszClassName = TEXT("MyAwesomeDialog");
    RegisterClass(&wc);

    // Use that custom dialog class for a dialog
    DialogBox(hInstance, MAKEINTRESOURCE(IDD_AWESOME), hwndParent,
              CustomDlgProc);
}

Do you see the problem?

The problem is that the code uses the CustomDlgProc function both as a window procedure and as a dialog procedure.

When a message arrives, it goes to the window procedure. This rule applies regardless of whether you have a traditional window or a dialog. If you have a standard dialog, then the window procedure is DefDlgProc, and that function calls the dialog procedure to let it respond to the message. If the dialog procedure declines to handle the message, then the DefDlgProc function does some default dialog stuff.

Creating a custom dialog class means that you want a different window procedure for the dialog, as if you had subclassed the dialog. The custom window procedure typically does some special work, and then it passes messages to DefDlgProc when it wants normal dialog behavior.

If you use the same function as both the window procedure and the dialog procedure, then when the function (acting as a window procedure) passes a message to DefDlgProc, the DefDlgProc function will call the dialog procedure, which is also CustomDlgProc. That function doesn’t realize that it’s now being used as a dialog procedure, where it is expected to return TRUE or FALSE (depending on whether it decided to handle the message). It thinks it is still a window procedure, so it passes the message to DefDlgProc, and the loop continues until you overflow the stack.

The idea behind custom dialog classes is that you have some general behavior you want to apply to all the dialogs that use that class. For example, maybe you want them all to use different default colors, or you want them all to respond to DPI changes the same way. Instead of replicating the code in each dialog procedure, you can put it in the dialog class window procedure.

But even if you are using a custom dialog class, your dialog procedure should still be a normal dialog procedure. That dialog procedure is the code-behind for the dialog template, initializing the controls in the template, responding to clicks on the controls in the template, and so on.
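To make that concrete, here is a sketch of the fix for the example above: keep CustomDlgProc as the registered class’s window procedure, and hand DialogBox a separate, ordinary dialog procedure. The AwesomeDlgProc name and its minimal message handling are just for illustration.

// A separate, ordinary dialog procedure: it returns TRUE/FALSE and
// never calls DefDlgProc itself.
INT_PTR CALLBACK AwesomeDlgProc(HWND hwnd, UINT message,
                                WPARAM wParam, LPARAM lParam)
{
    switch (message) {
    case WM_INITDIALOG:
        return TRUE; // let the dialog manager set the default focus
    case WM_COMMAND:
        if (LOWORD(wParam) == IDOK) {
            EndDialog(hwnd, IDOK);
            return TRUE;
        }
        break;
    }
    return FALSE; // unhandled: DefDlgProc provides default behavior
}

// In Test(), pass the dialog procedure, not the window procedure:
// DialogBox(hInstance, MAKEINTRESOURCE(IDD_AWESOME), hwndParent,
//           AwesomeDlgProc);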

The post On the proper usage of a custom Win32 dialog class appeared first on The Old New Thing.


Introducing Pri.ProductivityExtensions.Source - A .NET Standard Package to Enable Modern C# Language Features

.NET Standard compile errors

Targeting .NET Standard for a class library means that the library will be compatible with applications designed for past versions of .NET. For example, an application targeting .NET Framework 4.8 or .NET Core 2.1 can load a class library targeting .NET Standard 2.0 because the library relies only on an API surface the application also implements.

There are a few reasons developers need to work in .NET Standard. A popular one is writing Roslyn analyzers; another is writing PowerShell modules (cmdlets) in C#. .NET Standard is a formal specification of .NET APIs available across multiple .NET implementations: a definition of a slice of APIs that every conforming .NET version implements. Designing for that slice lets you build and deploy class libraries that work across .NET implementations. The latest version of .NET Standard (2.1) was released in September 2019.

Paradoxically, .NET languages depend on base-class libraries for aspects of their implementation. For example, the C# language feature foreach depends on the types IEnumerable, IEnumerable<T>, IEnumerator, and IEnumerator<T>. Specifications like .NET Standard enable C# language features across .NET implementations by including those dependent types in their specification. But languages evolve and improve over time, as do base class libraries.
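As a concrete illustration, here is roughly what the compiler lowers foreach into. This is a simplified sketch that ignores pattern-based enumerators and struct-enumerator optimizations.

using System;
using System.Collections.Generic;

class ForeachLowering
{
    static void Print(IEnumerable<string> items)
    {
        // What you write:
        foreach (var item in items)
        {
            Console.WriteLine(item);
        }

        // Roughly what the compiler emits, which is why foreach depends
        // on IEnumerable<T> and IEnumerator<T>:
        using (IEnumerator<string> e = items.GetEnumerator())
        {
            while (e.MoveNext())
            {
                Console.WriteLine(e.Current);
            }
        }
    }
}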

Since .NET Standard was last updated, some new language features that depend on new base class library types have been introduced. Some of these features aren't supported in .NET Standard, even though you can use the latest compiler version when building .NET Standard class libraries. For example, ranges and indices were added in C# 8.0, released in September 2019, and depend on the Range and Index types. But to write a PowerShell library that supports both Windows PowerShell and PowerShell 7, you have to target .NET Standard 2.0, whose API surface was specified around August 2017, roughly two years before Range and Index existed. This means ranges and indices aren't supported out of the box in C# code targeting .NET Standard 2.0. var lastWord = words[^1] or var firstFourWords = words[0..4] will cause errors (Predefined type 'System.Index' is not defined or imported, and Predefined type 'System.Range' is not defined or imported, respectively).

Fortunately, the compiler is flexible about where those predefined types are defined. Someone can't simply create a .NET Standard 2.0 library with Range and Index as public types, because they will clash with modern versions of .NET (e.g., in an XUnit test project). But the .NET Standard 2.0 library can create internal versions of those classes that the compiler is happy to resolve to. But having everyone who authors a .NET Standard 2.0 library write their own Range and Index classes (as well as any other class that modern C# syntax requires, like CallerArgumentExpressionAttribute, DoesNotReturnAttribute, etc.) is problematic, to say the least.
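Here is a simplified sketch of what such an internal polyfill looks like for Index; real packages ship more complete versions (plus Range and the attribute types).

// Simplified sketch of an internal polyfill. Because the type is
// internal and lives in the System namespace, the compiler can bind
// index-from-end syntax (^1) to it on .NET Standard 2.0 without
// clashing with modern runtimes that define the type publicly.
namespace System
{
    internal readonly struct Index
    {
        private readonly int _value; // stored as ~value when counted from the end

        public Index(int value, bool fromEnd = false)
        {
            if (value < 0) throw new ArgumentOutOfRangeException(nameof(value));
            _value = fromEnd ? ~value : value;
        }

        public int Value => _value < 0 ? ~_value : _value;
        public bool IsFromEnd => _value < 0;

        // Maps the index to a concrete offset for a sequence of a given length.
        public int GetOffset(int length) => IsFromEnd ? length - Value : Value;

        public static implicit operator Index(int value) => new Index(value);
    }
}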

Content-only NuGet Packages

.NET and NuGet packages support content- or source-only packages. Source-only packages are NuGet packages that, instead of packaging binaries like DLLs, package source code. Source-only packages don't add any assembly dependencies that require deployment.

That's where Pri.ProductivityExtensions.Source comes in. Pri.ProductivityExtensions.Source is a source-only NuGet package that includes the source code for various types created after .NET Standard 2.x to support C# language features (and some helper extensions like ArgumentException.ThrowIfNull that depend on those features) that aren't available in .NET Standard.

"Pri.ProductivityExtensions.Source" is also the ID of the package and can be found on NuGet.org here. It can be added to a project via the .NET API: dotnet add package Pri.ProductivityExtensions.Source, or via Package Manager Console: Install-Package Pri.ProductivityExtensions.Source. It is open source, and the source code is available on GitHub. For an example using Pri.ProductivityExtensions.Source see Pri.Essentials.DotnetPsCmds. And if you install DotnetPsCmds, you can install Pri.ProductivityExtensions.Source in PowerShell: Add-DotnetPackages Pri.ProductivityExtensions.Source.

When you reference Pri.ProductivityExtensions.Source, the package's content (source code) is compiled as part of the target project. The source code remains in the .nuget cache and is not copied alongside the rest of your source code, so nothing new needs to be committed to revision control.

The types included in Pri.ProductivityExtensions.Source are internal, so they won't clash if the package is referenced in a project targeting a modern .NET implementation. That said, I don't recommend referencing Pri.ProductivityExtensions.Source in a project that targets a modern .NET implementation, because the compiler will resolve to the package's types instead of the implementation's built-in ones.

If you've created a .NET Standard library and have wanted to use the latest version of the compiler but start receiving compiler errors like The type or namespace name 'CallerArgumentExpressionAttribute' does not exist in the namespace 'System.Runtime.CompilerServices' (are you missing an assembly reference?) or the other aforementioned errors, fear not, you simply need to reference Pri.ProductivityExtensions.Source!

Pri.ProductivityExtensions.Source enables the following modern C# compiler features:

  • Ranges and indices
  • [CallerArgumentExpression] 🔗
  • [DoesNotReturn] 🔗
  • [NotNullWhen] 🔗

Additionally, Pri.ProductivityExtensions.Source includes the following helper methods that otherwise depend on newer language-dependent types:

  • ArgumentException.ThrowIfNullOrWhiteSpace(string?, string?) 🔗
  • ArgumentNullException.ThrowIfNull(object?, string?) 🔗
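In practice the helpers read just like their modern BCL counterparts. A minimal usage sketch, assuming the package is referenced and the calls resolve the way the list above implies (the Orders class and Process method are just examples):

using System;

class Orders
{
    public void Process(string customerName)
    {
        // Guard clauses compiled against .NET Standard 2.0; the helper
        // names mirror the modern BCL APIs listed above.
        ArgumentNullException.ThrowIfNull(customerName);
        ArgumentException.ThrowIfNullOrWhiteSpace(customerName);

        // ... process the order ...
    }
}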

Understanding the C# Timeline Since the Last .NET Standard

The last version of .NET Standard (2.1) was released in 2019. Since then, C# versions 9 through 14 have been released. A lot has changed across those six versions of C#, including the use of numerous new APIs to enable various features. And there isn't a complete list of the base-class APIs that the C# compiler depends on.

If you find this useful
I'm a freelance software architect. If you find this post useful and think I can provide value to your team, please reach out to see how I can help. See About for information about the services I provide.


Velocity Is the New Authority. Here’s Why

Why does everyone feel overwhelmed by information? Why does it feel impossible to trust what passes through our streams? We tend to blame individual publications, specific platforms, or bad actors. The real answer has less to do with any single media entity and more with structural changes in the information ecosystem.

I started my “information” life typing copy on an ill-tempered Remington. As a teenage reporter, I saw newspapers being typeset, one letter at a time. It was a messy, slow, and laborious process. So I don’t carry romantic notions about the old days. I’ve been quick to embrace any technology that, in Stephen Covey’s words, helps me keep “the main thing the main thing.” The main thing is telling a thoroughly reported, well-written story.

The early 1990s Internet, followed by blogging at the turn of the century, and social media a decade later all helped me do that main thing. In the mid-2000s I embraced Dave Winer’s mantra of “sources going direct.” As far back as 2009, I outlined the coming changes in my essays “How Internet Content Distribution and Discovery Are Changing” and “Amplification and the Changing Role of Media.”

For the past decade and a half, the whole information ecosystem has become much larger, faster and noisier. It is hardly surprising that nothing works. And we feel a collective sense of overwhelming disappointment.

So, why does nothing work?

Authority used to be the organizing principle of information, and thus the media. You earned attention by being right, by being first in discovery, or by being big enough to be the default. That world is gone. The new and current organizing principle of information is velocity.

What matters now is how fast something moves through the network: how quickly it is clicked, shared, quoted, replied to, remixed, and replaced. In a system tuned for speed, authority is ornamental. The network rewards motion first and judgment later, if ever. Perhaps that’s why you feel you can’t discern between truths, half-truths, and lies.

With so much coming at us all the time, it is difficult to give any single story or news event much weight. More content means already fragmented attention fractures even further. 

Greenland, Iran, Venezuela, Epstein Files, Dodgers. On and on.

Networks have always shaped how societies are organized. Roman roads didn’t just make travel easier; they mapped the reach of the state and the limits of power. Shipping routes determined where colonial empires flourished and where they faded. In the Victorian age, the railways didn’t just shorten journeys; they rearranged British society. 

They created commuting and leisure, turned market towns into suburbs, standardized national time, and collapsed the meaning of distance. They also reordered authority: timetables mattered as much as parliaments. What looks like cultural choice is often the echo of infrastructure. Today’s mobile, cloud-linked world is another Victorian moment. Networks compress time and space, then quietly train us to live at their speed.

That’s why we get all our information as memes. The meme has become the metastory, the layer where meaning is carried. You don’t need to read the thing; you just need the gist, compressed and passed along in a sentence, an image, or a joke. It has taken the role of the headline. The machine accelerates this dynamic. It demands constant material; stop feeding it and the whole structure shakes. The point of the internet now is mostly to hook attention and push it toward commerce, to keep the engine running. Anyone can get their cut.

Velocity has taken over. 

Algorithms on YouTube, Facebook, TikTok, Instagram, and Twitter do not optimize for truth or depth. They optimize for motion. A piece that moves fast is considered “good.” A piece that hesitates disappears. There are almost no second chances online because the stream does not look back. People are not failing the platforms. People are behaving exactly as the platforms reward. We might think we are better, but we have the same rat-reward brain. 

We built machines that prize acceleration and then act puzzled that everything feels rushed and slightly manic. The networks of the past were slower and at a scale that was adaptable. I wrote about this years ago, and nothing since has disproved it. So when the author of “beliefs outrun facts” says nothing works, now you know why.

These fundamental network-level changes should give you a good idea of why we have an increasingly ambivalent relationship with media as an organized information entity. I will get into technology media from a startup perspective in a separate piece. For now, I will stick to the broader media ecosystem.

Let’s use YouTube technology reviews as a case study, because they are universally understandable. Take the launch of a new phone: when the embargo lifts, dozens of polished video reviews appear on YouTube. They run about 20 minutes, share similar thumbnails, and use the same mood lighting. The reviewers had access to the phones before everyone else, so they had time to prepare their reviews.

In the old days, before the current phase of content abundance, folks like Walt Mossberg, Ed Baig, David Pogue, and Steven Levy were often the first to get Apple products for review. Sure, these folks had big platforms, but that head start gave them a lot of clout, which meant many non-Apple companies offered them early access to their products. I never felt cheated or misled by their reviews, though I did notice what they omitted after using the product for a few months.

These days, things are markedly different. For YouTubers, access is the currency of survival. Access, of course, means suggested talking points. Again, nothing new. What’s different is that every reviewer knows that if they paint outside the lines, they’ll lose access. If you don’t have the review out when the embargo lifts, it doesn’t matter if you have a better review; no one is going to notice.

The system rewards whoever speaks first, not whoever lives with it long enough to understand it. The “review” at launch outperforms the review written two months later by orders of magnitude. The second, longer, more in-depth, more honest review might as well not exist. It’s not that people are less honest by nature. It’s that the structure pays a premium for compliance and levies a tax on independence. The result is a soft capture where creators don’t have to be told what to say. The incentives do the talking.

We built systems that reward acceleration, then act surprised when everything feels rushed, shallow, and slightly manic. People do what the network rewards. Writers write for the feed. Photographers shoot for the scroll. Newsrooms frame stories as conflict because conflict travels faster than nuance. Even our emotional lives adapt to latency and refresh cycles. The design of the network becomes the choreography of daily life.

In older networks, the constraints were physical. The number of train lines limited where cities could grow. The number of printing presses limited how many voices could speak. In our case, the constraint is temporal: how fast something can be produced, clicked, shared, and replaced. When velocity becomes the scarcest resource, everything orients around it. This is why it’s wrong to think of “the algorithm” as some quirky technical layer that can be toggled on and off or worked around. The algorithm is the culture. It decides what gets amplified, who gets to make a living, and what counts as “success.”

Once velocity is the prize, quality becomes risky. Thoughtfulness takes time. Reporting takes time. Living with a product or an idea takes time. Yet the window for relevance keeps shrinking, and the penalty for lateness is erasure. We get a culture optimized for first takes, not best takes. The network doesn’t ask if something is correct or durable, only if it moves. If it moves, the system will find a way to monetize it.

The algorithm doesn’t care whether something is true; it cares whether it moves. Day-one content becomes advertising wearing the mask of criticism.

All of this folds back into a larger point. When attention is fragmented and speed becomes the dominant value, media rearranges itself around that reality. Not because anyone wakes up wanting to mislead people, but because the context makes some paths survivable and others impossible.

The YouTube algorithm is the real enforcer because it rewards velocity. Get into the algorithmic slipstream and you get the numbers and make money. So it is no surprise that most day-one reviews are, well, anything but. This goes back to my original premise that when velocity becomes the defining metric, authority is displaced.

You don’t need to be right; you need to be first in the feed. Generalize this beyond YouTube tech reviews and you see the same pattern everywhere. I’m flabbergasted by how much good journalism goes unnoticed every day. We didn’t just put journalism, entertainment, politics, and private lives on networks. We let the networks rewrite what those things are for and how they work.

None of what I am saying is new. Decades ago the media sage Marshall McLuhan summed it up in his timeless phrase, “The medium is the message.” The medium, the technology or channel of communication, influences society and individuals more profoundly than the content, altering our senses and habits and, in turn, our perception, interaction, and culture. The only difference is that the network is like a hydra, and data is the fuel that adds velocity, the new metric of perceived reality.

The cost of all this isn’t abstract. It’s the review that took three months, and no one will read it. It’s the investigation that requires patience. It’s the work of understanding before passing judgment. All of it still exists, still gets made. It just doesn’t travel. In a system where only what travels matters, we’ve made expertise indistinguishable from noise.

In the age of AI, will any of this matter when our idea of information will be entirely different?

January 21, 2026. San Francisco



Rethinking AI’s future in an augmented workplace

There are many paths AI evolution could take. On one end of the spectrum, AI is dismissed as a marginal fad, another bubble fueled by notoriety and misallocated capital. On the other end, it’s cast as a dystopian force, destined to eliminate jobs on a large scale and destabilize economies. Markets oscillate between skepticism and the fear of missing out, while the technology itself evolves quickly and investment dollars flow at a rate not seen in decades. 

All the while, many of today’s financial and economic thought leaders hold to the consensus that the financial landscape will stay the same as it has been for the last several years. Two years ago, Joseph Davis, global chief economist at Vanguard, and his team felt the same but wanted to develop their perspective on AI technology with a deeper foundation built on history and data. Based on a proprietary data set covering the last 130 years, Davis and his team developed a new framework, The Vanguard Megatrends Model, from research that suggested a more nuanced path than hype extremes: that AI has the potential to be a general purpose technology that lifts productivity, reshapes industries, and augments human work rather than displaces it. In short, AI will be neither marginal nor dystopian. 

“Our findings suggest that the continuation of the status quo, the basic expectation of most economists, is actually the least likely outcome,” Davis says. “We project that AI will have an even greater effect on productivity than the personal computer did. And we project that a scenario where AI transforms the economy is far more likely than one where AI disappoints and fiscal deficits dominate. The latter would likely lead to slower economic growth, higher inflation, and increased interest rates.”

Implications for business leaders and workers

Davis does not sugar-coat it, however. Although AI promises economic growth and productivity, it will be disruptive, especially for business leaders and workers in knowledge sectors. “AI is likely to be the most disruptive technology to alter the nature of our work since the personal computer,” says Davis. “Those of a certain age might recall how the broad availability of PCs remade many jobs. It didn’t eliminate jobs as much as it allowed people to focus on higher value activities.” 

The team’s framework allowed them to examine AI automation risks across more than 800 occupations. The research indicated that while the potential for job loss exists in upwards of 20% of occupations as a result of AI-driven automation, the majority of jobs—likely four out of five—will see a mixture of innovation and automation. Workers’ time will increasingly shift to higher-value and uniquely human tasks.

This introduces the idea that AI could serve as a copilot to various roles, performing repetitive tasks and generally assisting with responsibilities. Davis argues that traditional economic models often underestimate the potential of AI because they fail to examine the deeper structural effects of technological change. “Most approaches for thinking about future growth, such as GDP, don’t adequately account for AI,” he explains. “They fail to link short-term variations in productivity with the three dimensions of technological change: automation, augmentation, and the emergence of new industries.” Automation enhances worker productivity by handling routine tasks; augmentation allows technology to act as a copilot, amplifying human skills; and the creation of new industries creates new sources of growth.

Implications for the economy 

Ironically, Davis’s research suggests that a reason for the relatively low productivity growth in recent years may be a lack of automation. Despite a decade of rapid innovation in digital and automation technologies, productivity growth has lagged since the 2008 financial crisis, hitting 50-year lows. This appears to support the view that AI’s impact will be marginal. But Davis believes that automation has been adopted in the wrong places. “What surprised me most was how little automation there has been in services like finance, health care, and education,” he says. “Outside of manufacturing, automation has been very limited. That’s been holding back growth for at least two decades.” The services sector accounts for more than 60% of US GDP and 80% of the workforce and has experienced some of the lowest productivity growth. It is here, Davis argues, that AI will make the biggest difference.

One of the biggest challenges facing the economy is demographics, as the Baby Boomer generation retires, immigration slows, and birth rates decline. These demographic headwinds reinforce the need for technological acceleration. “There are concerns about AI being dystopian and causing massive job loss, but we’ll soon have too few workers, not too many,” Davis says. “Economies like the US, Japan, China, and those across Europe will need to step up automation as their populations age.”

For example, consider nursing, a profession in which empathy and human presence are irreplaceable. AI has already shown the potential to augment rather than automate in this field, streamlining data entry in electronic health records and helping nurses reclaim time for patient care. Davis estimates that these tools could increase nursing productivity by as much as 20% by 2035, a crucial gain as health-care systems adapt to ageing populations and rising demand. “In our most likely scenario, AI will offset demographic pressures. Within five to seven years, AI’s ability to automate portions of work will be roughly equivalent to adding 16 million to 17 million workers to the US labor force,” Davis says. “That’s essentially the same as if everyone turning 65 over the next five years decided not to retire.” He projects that more than 60% of occupations, including nurses, family physicians, high school teachers, pharmacists, human resource managers, and insurance sales agents, will benefit from AI as an augmentation tool. 

Implications for all investors 

As AI technology spreads, the strongest performers in the stock market won’t be its producers, but its users. “That makes sense, because general-purpose technologies enhance productivity, efficiency, and profitability across entire sectors,” says Davis. This adoption of AI is creating flexibility for investment options, which means diversifying beyond technology stocks might be appropriate as reflected in Vanguard’s Economic and Market Outlook for 2026. “As that happens, the benefits move beyond places like Silicon Valley or Boston and into industries that apply the technology in transformative ways.” And history shows that early adopters of new technologies reap the greatest productivity rewards. “We’re clearly in the experimentation phase of learning by doing,” says Davis. “Those companies that encourage and reward experimentation will capture the most value from AI.” 

Looking globally, Davis sees the United States and China as significantly ahead in the AI race. “It’s a virtual dead heat,” he says. “That tells me the competition between the two will remain intense.” But other economies, especially those with low automation rates and large service sectors, like Japan, Europe, and Canada, could also see significant benefits. “If AI is truly going to be transformative, three sectors stand out: health care, education, and finance,” says Davis. “For AI to live up to its potential, it must fundamentally reshape these industries, which face high costs and rising demand for better, faster, more personalized services.”

However, Davis says Vanguard is more bullish on AI’s potential to transform the economy than it was just a year ago. Especially since that transformation requires application beyond Silicon Valley. “When I speak to business leaders, I remind them that this transformation hasn’t happened yet,” says Davis. “It’s their investment and innovation that will determine whether it does.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
