Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
153031 stories
·
33 followers

OpenAI’s 2026 ‘focus’ is ‘practical adoption’

1 Share

OpenAI plans to focus on "practical adoption" of AI in 2026, according to a blog post from CFO Sarah Friar. As the company spends a huge amount of money on infrastructure, OpenAI is working on "closing the gap" between what AI can do and how people actually use it. "The opportunity is large and immediate, especially in health, science, and enterprise, where better intelligence translates directly into better outcomes."

Much of the blog post, titled "A business that scales with the value of intelligence," is about how OpenAI has evolved since it launched ChatGPT and how it has scaled up its business. The company's weekly active user and daily act …

Read the full story at The Verge.

Read the whole story
alvinashcraft
1 hour ago
reply
Pennsylvania, USA
Share this story
Delete

Azure Boards additional field filters (private preview)

1 Share

We’re excited to announce a limited private preview that introduces the ability to add additional fields as filters on both the backlog and Kanban boards. This feature has been a long-standing request from the developer community, and we’re eager to get it into customers’ hands early through a private preview.

Today, filter options are limited, hardcoded, and can vary slightly depending on the page you’re viewing.

blog filters 1 image

With this new feature, you’ll continue to start with the same default filters you’re used to. In addition, you can now open the filter settings and add any field that is already displayed on backlog columns or Kanban cards.

Once you apply your changes, the selected fields immediately become available in the filters control.

🔊 Limitations

There are a few limitations you should be aware of.

Fields

  • Large text fields are not supported
  • Fields that always contain unique values, such as Stack Rank, are not available
  • Fields must be added as a backlog column or card field to be available as a filter

Sharing behavior

  • If you share a filtered URL and the recipient does not have the same fields configured as filters, those fields will be ignored. To resolve this, add the field as a filter and open the URL again.

Page support

  • This feature is currently available only on backlog and board pages
  • It is not yet supported on sprint backlogs, sprint boards, query results, or the work items hub

🙋 Private preview process

If you’d like early access to this feature, please send me an email with your organization name in the format of dev.azure.com/{organization}. Once enabled, the feature will be available to all users within that organization.

We plan to accept organizations into the private preview until February 6, 2026. After that date, we will close enrollment to focus on collecting feedback, addressing issues, and validating the experience with those preview customers. Our intent is to incorporate what we learn and move quickly toward general availability.

We’re excited to move this feature through private preview and into general availability. It’s been a long time coming!

The post Azure Boards additional field filters (private preview) appeared first on Azure DevOps Blog.


Filtering as domain logic

1 Share

Performance and correctness are two independent concerns with overlapping solutions.

How do you design, implement, maintain, and test complex filter logic as part of out-of-process (e.g. database) queries?

One option is to implement parts of the filtering logic twice: Once as an easily-testable in-memory implementation to ensure correctness, and another, possibly simpler, query using the query language (usually, SQL) of the data source.

Does this not imply duplication of effort? Yes, to a degree it does. Should you always do this? No, only when warranted. As usual, I present this idea as an option you may consider; a tool for your software design tool belt. You decide if it's useful in your particular context.

Motivation #

When extracting data from a data source, an application usually needs some of the data, but not all of it. If the software system in question has a certain size, the subset required for an operation is only a minuscule fraction of the entire database. For example, a user may want to see his or her latest order in a web shop, but the entire system contains millions of orders. Another example could be a system for managing help desk requests: Each supporter may need a dashboard of open cases assigned to him or her, but the system holds millions of tickets, and most of them are closed.

If a data store supports server-side querying, for example with SQL or Cypher, it's reasonable to let the data store itself do the filtering.

As anyone who has worked professionally with SQL can attest, SQL queries can become complicated. When this happens, you may become concerned with the correctness of a query. Does it include all the data it should? Does it exclude irrelevant data? If you later change a query, how can you verify that it still works as intended? How do you even version it?

Automated testing can address several of these concerns, but testing against a real database, while possible, tends to be cumbersome and slow. Do alternatives, or augmentations, exist?

How it works #

If a server-side query threatens to become too complicated, consider shifting some of the work to clients. You may retain some filtering logic in the server-side query, but only enough to keep performance good, and simple enough that you are no longer concerned about its correctness.

Implement the difficult filtering logic in a client-side library. Since you implement this part in a programming language of your choice, you can use any tool or technique available in that context to ensure correctness: Test-driven development, static code analysis, type checking, property-based testing, code coverage, mutation testing, etc.
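A minimal sketch of this split, in Python rather than the application's own language, with all names hypothetical: a coarse date-based filter stands in for the simple server-side query, while the precise business rule lives in easily tested client code.

```python
from dataclasses import dataclass
from datetime import date, datetime, timedelta

@dataclass(frozen=True)
class Reservation:
    at: datetime
    quantity: int

def coarse_filter(reservations, day: date):
    """Stand-in for the simple server-side query: every reservation on
    the given date. Deliberately wider than strictly necessary."""
    return [r for r in reservations if r.at.date() == day]

def relevant(reservations, candidate_at: datetime, seating_duration: timedelta):
    """The precise client-side rule: only reservations whose seating
    window overlaps the candidate's. Easy to unit test in memory."""
    start, end = candidate_at, candidate_at + seating_duration
    return [r for r in reservations
            if r.at < end and start < r.at + seating_duration]
```

The coarse filter may return rows the client doesn't need; the client-side predicate then makes the final, correctness-critical cut.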

Using a funnel as a symbol of filtering, this diagram depicts the idea:

Two upside-down funnels connect the database with the application.

Normally a funnel is only useful when the widest part faces up, but on the other hand, we usually depict application architectures with the database under the application. You have to imagine data being 'sucked up' through the funnels.

In reality, the two filters will differ, but have overlapping functionality.

Two sets labelled server-side filter and client-side filter, with substantial intersection.

If based on a relational database, the server-side query will still hold table joins and column projections that are effectively irrelevant to the client-side Domain Model. On the other hand, while the server-side query may apply a rough filter, the more detailed selection of what is, and is not, included happens in the client.

The server-side query is defined using the query language of the data store, such as SQL or Cypher. The client-side query is part of the application code base, and written in the same programming language.

When to use it #

Use this pattern if a server-side query becomes so complicated that you are concerned about its correctness, or if correctness is an essential part of a Domain Model's contract.

While it is conceptually possible to load the entire data store's data into memory, this is often prohibitively expensive in terms of time and memory. It is often necessary to retain some filtering logic (e.g. one or more SQL WHERE clauses) on the server to pare down data to acceptable sizes. This implies a degree of duplicated logic, since the client-side filter shouldn't assume that any filtering has been applied.

Duplication comes with its own set of problems, even if this looks like the benign kind. Alternatives include keeping all logic on the database server, which is viable if the logic is simple, or can be sufficiently simplified. Another alternative is to perform all filtering in the client, which may be an attractive solution if the entire data set is small.

Encapsulation #

If a Domain Model is composed of pure functions, data must be supplied as normal input arguments. In a more object-oriented style, data may arrive as indirect input. In both object-oriented and functional architecture, encapsulation is important. This entails being explicit about invariants and pre- and postconditions; i.e. contracts.

To enforce preconditions, a Domain Model must ensure that input is correct. While it could choose to reject input if it contains 'too much' data, a Tolerant Reader should instead pare the data down to size. This implies that filtering should be part of a Domain Model's contract.

This further implies that a Domain Model becomes less vulnerable to changes in data access code.

Implementation details #

Server-side filtering (with e.g. SQL) is often difficult to test with sufficient rigour. The point of moving the complex filtering logic to the Domain Model is that this makes it easier to test, and thereby to maintain.

If no filtering takes place on the server, however, the entire data set of the system would have to be transmitted to, and filtered on, the client. This is usually too expensive, so some filtering must still take place at the data source. The whole point of this exercise is that the 'correct' filtering is too complicated to maintain as a server-side query, so whatever filtering still takes place on the server only happens for performance reasons, and can be simpler, as long as it's wider.

Specifically, the simplified server-side query can (and probably should) be wider, in the sense that it returns more data than is required for the correctness of the overall system. The client, receiving more data than strictly required, can perform more sophisticated (and testable) filtering.

The simplified filtering on the server must not, on the other hand, narrow the result set. If relevant data is left out at the source, the client has no chance to restore it, or even know that it exists.

Motivating example #

The code base that accompanies Code That Fits in Your Head contains an example. When a user attempts to make a restaurant reservation, the system must look at existing reservations on the same date to check whether it has a free table. Many restaurants operate with seating windows, and the logic involved in figuring out if a time slot is free is easy to get wrong. On top of that, the decision logic needs to take opening hours and last seating into account. The book, as well as the article The Maître d' kata, has more details.

Based on information about seating duration, opening hours, and so on, it seems as though it should be possible to form an exact SQL query that only returns existing reservations that overlap the new reservation. Even so, this struck me as error-prone. Instead, I decided to make input filtering part of the Domain Model.

The Domain Model in question, an immutable class named MaitreD, uses the WillAccept method to decide whether to accept a reservation request. Apart from the candidate reservation, it also takes as parameters existingReservations as well as the current time.

public bool WillAccept(
    DateTime now,
    IEnumerable<Reservation> existingReservations,
    Reservation candidate)

The function uses the existingReservations to filter so that only the relevant reservations are considered:

var seating = new Seating(SeatingDuration, candidate.At);
var relevantReservations =
    existingReservations.Where(seating.Overlaps);

As implied by this code snippet, a specialized Domain Model named Seating contains the actual filtering logic:

public bool Overlaps(Reservation otherReservation)
{
    if (otherReservation is null)
        throw new ArgumentNullException(nameof(otherReservation));
 
    var other = new Seating(SeatingDuration, otherReservation.At);
    return Overlaps(other);
}
 
public bool Overlaps(Seating other)
{
    if (other is null)
        throw new ArgumentNullException(nameof(other));
 
    return Start < other.End && other.Start < End;
}

Notice how the core implementation, the overload that takes another Seating object, implements a binary relation. To extrapolate from Domain-Driven Design, whenever you arrive at 'proper' mathematics to describe the application domain, it's usually a sign that you've arrived at something fundamental.
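A language-agnostic sketch of that relation (hypothetical names; the book's code base expresses it in C#): two half-open intervals overlap exactly when each one starts before the other ends, and the relation is symmetric.

```python
from datetime import datetime, timedelta

def overlaps(start_a: datetime, end_a: datetime,
             start_b: datetime, end_b: datetime) -> bool:
    """Two half-open intervals [start, end) overlap exactly when each
    one starts before the other ends -- the same shape of relation as
    the Seating.Overlaps overload shown above."""
    return start_a < end_b and start_b < end_a
```

With half-open intervals, back-to-back seatings (one ending exactly when the next starts) do not count as overlapping.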

The Overlaps functions are public and easy to unit test in their own right. Even so, in the code base that accompanies Code That Fits in Your Head, there are no tests that directly exercise these functions, since they only grew out of refactoring the implementation of MaitreD.WillAccept, which is covered by many tests. Since the Overlaps functions only emerged as a result of test-driven development, they might as well have been private helper methods, but I later needed them for verifying some unrelated test outcomes.

The filtering performed in WillAccept will throw away any reservations that don't overlap. Even if existingReservations contained the entire data set from the database, it would still be correct. Given, however, that there could be hundreds of thousands of reservations, it seems prudent to perform some coarse-grained filtering in the database.

The ReservationsController that calls WillAccept first queries the database, getting all the reservations on the relevant date.

var reservations = await Repository
    .ReadReservations(restaurant.Id, reservation.At)
    .ConfigureAwait(false);

Now that I write this description, I realize this query, while wide in one sense, could actually be too narrow. None of my test restaurants have a last seating after midnight, but I wouldn't rule that out in certain cultures. If so, it's easy to widen the coarse-grained query to include reservations for the day before (for breakfast restaurants, perhaps) and the day after, assuming that no seating lasts more than 24 hours.
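As a sketch of that widening (hypothetical helper, not from the book's code base): compute the range parameters so they span from the start of the day before to the end of the day after, assuming no seating lasts more than 24 hours.

```python
from datetime import datetime, timedelta

def coarse_query_bounds(reservation_at: datetime):
    """Hypothetical computation of widened @Min/@Max range parameters:
    from the start of the day before the reservation's date to the end
    of the day after, assuming no seating lasts more than 24 hours."""
    day = datetime.combine(reservation_at.date(), datetime.min.time())
    return day - timedelta(days=1), day + timedelta(days=2)
```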

All that said, the point is that ReadReservations(restaurant.Id, reservation.At) (which is an extension method) performs a simple, coarse-grained query for reservations that may be relevant to consider, given the candidate reservation. This query should return a 'gross' data set that contains all relevant, but also some irrelevant, reservations, thereby keeping the query simple. And indeed, the actual database interaction is this parametrised query:

SELECT [PublicId], [At], [Name], [Email], [Quantity]
FROM [dbo].[Reservations]
WHERE [RestaurantId] = @RestaurantId AND
      @Min <= [At] AND [At] <= @Max

This range query should be simple enough that a few integration tests should be sufficient to give you confidence that it works correctly.

Consequences #

The main benefit from a design like this is that it shifts some of the burden of correctness to the Domain Model, which is easier to test, maintain, and version than is typically the case for query languages. An added advantage is improved separation of concerns.

In practice, server-side filtering tends to mix two independent concerns: Performance and correctness. Filtering is important for performance, because the alternative is to transmit all rows to the client. Filtering is also important for correctness, because the code making use of the data should only consider data relevant for its purpose. Exclusive server-side filtering performs both of these tasks, thereby mixing concerns. Moving filtering for correctness to a Domain Model can make explicit that these are two separate concerns.

While a Domain Model can implement in-memory filtering, it can only deal with data that is too wide; that is, it can identify and remove superfluous data. If, on the other hand, the dataset passed to the Domain Model lacks relevant records, the Domain Model can't detect that. The above discussion about the reservation system contains a concrete discussion of such a problem. Thus, Domain-based filtering does not relieve developers of the burden of ensuring that any server-side filtering is sufficiently permissive.

Another consequence of this design is that as server-side queries become more coarse-grained, potential cache hit ratios could increase. If you somehow cache queries, then as queries become more general, there is less variation, so a cache needs fewer entries, each of which is statistically hit more often. This applies to CQRS-style architectures, too.

Consider the restaurant reservation example, above. Since queries are only distinguished by date, you can easily cache query results by date, and all reservation requests for a given date may go through that cache. If, as a counter-example, all filtering took place in the database, a query for a reservation at 18:00 would be different from a query for 18:30, and so on. This would make a hypothetical cache bigger, and decrease the frequency of cache hits.
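A sketch of such a date-keyed cache (hypothetical names, assuming the underlying read is a function from date to reservation list):

```python
from datetime import date

class ReservationCache:
    """Hypothetical date-keyed cache: because the coarse query varies
    only by date, every request for the same date hits one entry,
    regardless of the requested time of day."""
    def __init__(self, read_reservations):
        self._read = read_reservations  # callable: date -> list
        self._entries = {}
    def reservations_on(self, day: date):
        # Populate the entry on first use; reuse it afterwards.
        if day not in self._entries:
            self._entries[day] = self._read(day)
        return self._entries[day]
```

Requests for 18:00 and 18:30 on the same date share one cache entry; had the query been parametrised by exact time, each would miss.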

Test evidence #

When I originally decided that WillAccept should perform in-memory filtering, my motivation was one of correctness. I was concerned whether I could get the seating overlap detection correct without comprehensive testing, and I thought that it would be easier to test a function doing in-memory filtering than to drive all of this via integration tests involving a real SQL Server instance. (Not that I don't know how to do this. The code base accompanying the book has examples of tests that exercise the database. These tests are, however, more work to write and maintain, and they execute slower.)

As discussed in Coupling from a big-O perspective, I much later realized that I actually had no test coverage of edge cases related to querying the database. It was only after attempting to write such a test that I realized that the design had the consequence that a marginal error in the database query had no impact on the correctness of the overall system. Here's that test:

[Fact]
public async Task AttemptEdgeCaseBooking()
{
    var twentyFour7 = new Restaurant(
        247,
        "24/7",
        new MaitreD(
            opensAt: TimeSpan.FromHours(0),
            lastSeating: TimeSpan.FromHours(0),
            seatingDuration: TimeSpan.FromDays(1),
            tables: Table.Standard(1)));
    var db = new FakeDatabase();
    var now = DateTime.Now;
    var sut = new ReservationsController(
        new SystemClock(),
        new InMemoryRestaurantDatabase(twentyFour7),
        db);
    var r1 = Some.Reservation.WithDate(now.AddDays(3).Date);
    await sut.Post(twentyFour7.Id, r1.ToDto());
 
    var r2 = Some.Reservation.WithDate(now.AddDays(2).Date);
    var ar = await sut.Post(twentyFour7.Id, r2.ToDto());
 
    Assert.IsAssignableFrom<CreatedAtActionResult>(ar);
    // More assertions could go here.
}

This test is an attempt to cover the edge case related to how the system queries the database, just like the Moq-based test shown in Greyscale-box test-driven development. The idea is to create a reservation that just barely touches a reservation the following day, and thereby trigger a test failure when a change is made to the query, similar to how the Moq-based test fails. Even with a custom restaurant, I can't, however, get this test to fail, because of the Domain-based filtering, which keeps the system working correctly.

It was then that I realized that what I had inadvertently done was to strengthen the contract of WillAccept, compared to a more stereotypical design. Who knew test-driven development could lead to better encapsulation?

Conclusion #

Some queries may become so complicated that they are difficult to maintain. Bugs creep in, you address them, only for old bugs to resurface as regressions. When this happens, consider moving the complicated parts of data filtering to the client, preferably to a Domain Model. This enables you to test the filtering logic with as much rigour as is required.

For small databases, you may read the entire dataset into memory, but usually you will need to retain some coarse-grained filtering on the database server.

This design, while more complicated than letting a query language like SQL handle all filtering, can lead to better encapsulation and separation of concerns.


This blog is totally free, but if you like it, please consider supporting it.

Agent psychosis: are we going insane? (News)

1 Share

Armin Ronacher thinks AI agent psychosis might be driving us insane, Dan Abramov explains how AT Protocol is a social filesystem, RepoBar keeps your GitHub work in view without opening a browser, Ethan McCue shares some life altering Postgres patterns, and Lea Verou says web dependencies are broken and we need to fix them.

View the newsletter

Join the discussion

Changelog++ members support our work, get closer to the metal, and make the ads disappear. Join today!

Download audio: https://op3.dev/e/https://cdn.changelog.com/uploads/news/177/changelog-news-177.mp3

013 - Getting Started with Personal or Professional AI Projects

1 Share

In this episode, the hosts discuss the current state and future developments of Siri in the context of rapidly evolving AI technologies, focusing on Apple's strategy with OpenAI and potential WWDC announcements. They also delve into personal AI projects, highlighting practical applications such as using AI for real estate research, personal knowledge management (PKM), and weekly executive summaries. The conversation touches on the importance of structured data, the potential for AI in local businesses, and the need for professionals to integrate AI into their workflows for enhanced productivity.

00:00 Introduction: The Rise and Fall of Siri

02:07 Personal Experiences with Siri

03:21 Speculations on Siri's Future

04:18 Apple's Strategy with AI Models

05:42 The Role of User Experience in AI

06:45 Challenges and Opportunities for Apple

07:57 The Future of AI Integration

10:55 OpenAI's New Monetization Strategy

12:39 OpenAI's Subscription Tiers

14:32 The Impact of Ads on User Experience

17:16 Potential of AI-Driven Advertising

26:37 Upcoming Tech Events and Excitement

27:35 Passion Projects and AI Inspiration

28:12 Personal Knowledge Management (PKM)

36:38 AI in Real Estate

40:50 AI for Personal and Work Efficiency

51:23 The Future of AI in Daily Life

55:49 Conclusion and Final Thoughts



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit aiunprompted.substack.com



Download audio: https://api.substack.com/feed/podcast/185098206/3cdb3b56d85b67a5ff5ff08d26dce789.mp3

A second US Sphere could come to Maryland

1 Share
A rendering of the planned mini-Sphere potentially coming to National Harbor, Maryland.

Sphere Entertainment, the company behind the eye-catching interactive venue in Las Vegas, has announced its "intent to develop" another Sphere in Maryland that will be located 15 minutes south of Washington, DC. A timeline and exact location haven't been finalized, but the Maryland Sphere would be the company's second venue in the US, following plans to build a Sphere in Abu Dhabi announced in October 2024.

The second US sphere would be built in an area known as National Harbor in Prince George's County, Maryland. Located along the Potomac River, National Harbor currently features a convention center, multiple hotels, restaurants, and shops …

Read the full story at The Verge.
