
jQuery UI 1.13.3 released


We’re happy to announce that the third patch release of jQuery UI 1.13 is out. It includes fixes for the resizable widget when a global box-sizing: border-box CSS declaration is present (a common complaint was about resizable dialogs), support for the hidden attribute in selectmenu options, fixes for the deprecated -ms-filter syntax, and a correction to the format of the deprecated ui/core.js AMD module.

jQuery UI has a new test runner ported from jQuery that allows local & BrowserStack test runs without reliance on Karma. As an added bonus, we’re now running tests in Chrome, Firefox, Safari & Edge against the latest jQuery 1.x, 2.x, 3.x & the development version in GitHub CI, allowing us to detect more issues at the pull request level. This will also be a basis for a future jQuery UI 1.14 – but that’s a topic for a separate blog post.

Please remember jQuery UI is in a maintenance state: we’ll make sure the library is compatible with new jQuery releases and that security issues are fixed but no new significant feature work is planned. We’ll also try to fix important regressions from jQuery UI 1.12.1; older long-standing bugs may not get fixed. Note that this does not affect jQuery Core which is still actively maintained.

Download

File Downloads

Git (contains source files, with @VERSION replaced with 1.13.3, base theme only)

Install via npm

  • npm install jquery-ui@1.13.3

Install via bower

  • bower install jquery/jquery-ui#1.13.3

jQuery CDN

Google Ajax Libraries API (CDN)

Microsoft Ajax CDN (CDN)

Custom Download Builder

Changelog

See the 1.13 Upgrade Guide for a list of changes that may affect you when upgrading from 1.12.x. For full details on what’s included in this release see the 1.13.3 Changelog.

Thanks

Thanks to all who helped with this release, specifically: Ashish Kurmi, DeerBear, divdeploy, Kenneth DeBacker, mark van tilburg, Matías Cánepa, Michał Gołębiowski-Owczarek, Timmy Willison, Timo Tijhof, Дилян Палаузов, Felix Nagel.

Comments

Note: please report bugs to the jQuery UI Bug Tracker; support questions should be posted on Stack Overflow with the jquery-ui tag. Please don’t use comments to report bugs.

If you have feedback on our jQuery UI 1.13.3 release, feel free to leave a comment below. Thank you.


Reducing the Environmental Impact of Generative AI: a Guide for Practitioners


Introduction

As generative AI's adoption rapidly expands across various industries, integrating it into products, services, and operations becomes increasingly commonplace. However, it's crucial to address the environmental implications of such advancements, including their energy consumption, carbon footprint, water usage, and electronic waste, throughout the generative AI lifecycle. This lifecycle, often referred to as large language model operations (LLMOps), encompasses everything from model development and training to deployment and ongoing maintenance, all of which demand diligent resource optimisation.

 

This guide aims to extend Azure’s Well-Architected Framework (WAF) for sustainable workloads to the specific challenges and opportunities presented by generative AI. We'll explore essential decision points, such as selecting the right models, optimising fine-tuning processes, leveraging Retrieval Augmented Generation (RAG), and mastering prompt engineering, all through a lens of environmental sustainability. By providing these targeted suggestions and best practices, we equip practitioners with the knowledge to implement generative AI not only effectively, but responsibly.


 

Image Description: A diagram titled "Sustainable Generative AI: Key Concepts" divided into four quadrants. Each quadrant contains bullet points summarising the key aspects of sustainable AI discussed in this article.

 

Select the foundation model

Choosing the right base model is crucial to optimising energy efficiency and sustainability within your AI initiatives. Consider this framework as a guide for informed decision-making:

 

Pre-built vs. Custom Models

When embarking on a generative AI project, one of the first decisions you'll face is whether to use a pre-built model or train a custom model from scratch. While custom models can be tailored to your specific needs, the process of training them requires significant computational resources and energy, leading to a substantial carbon footprint. For example, training an LLM the size of GPT-3 is estimated to consume nearly 1,300 megawatt hours (MWh) of electricity. In contrast, initiating projects with pre-built models can conserve vast amounts of resources, making it an inherently more sustainable approach.

 

Azure AI Studio's comprehensive model catalogue is an invaluable resource for evaluating and selecting pre-built models based on your specific requirements, such as task relevance, domain specificity, and linguistic compatibility. The catalogue provides benchmarks covering common metrics like accuracy, coherence, and fluency, enabling informed comparisons across models. Additionally, for select models, you can test them before deployment to ensure they meet your needs. Choosing a pre-built model doesn't limit your ability to customise it to your unique scenarios. Techniques like fine-tuning and retrieval augmented generation (RAG) allow you to adapt pre-built models to your specific domain or task without the need for resource-intensive training from scratch. This enables you to achieve highly tailored results while still benefiting from the sustainability advantages of using pre-built models, striking a balance between customisation and environmental impact.

 

Model Size

The correlation between a model's parameter count and its performance (and resource demands) is significant. Before defaulting to the largest available models, consider whether more compact alternatives, such as Microsoft’s Phi-2, Mistral AI’s Mixtral 8x7B or similar sized models, could suffice for your needs. The efficiency "sweet spot"—where performance gains no longer justify the increased size and energy consumption—is critical for sustainable AI deployment. Opting for smaller, fine-tuneable models (known as small language models—or SLMs) can result in substantial energy savings without compromising effectiveness.

 

Model Selection | Considerations | Sustainability Impact
Pre-built Models | Leverage existing models and customise with fine-tuning, RAG and prompt engineering | Reduces training-related emissions
Custom Models | Tailor models to specific needs and customise further if needed | Higher carbon footprint due to training
Model Size | Larger models offer better output performance but require more resources | Balancing performance and efficiency is crucial

 

Improve the model’s performance

Improving your AI model's performance involves strategic prompt engineering, grounding the model in relevant data, and potentially fine-tuning for specific applications. Consider these approaches:

 

Prompt Engineering

The art of prompt engineering lies in crafting inputs that elicit the most effective and efficient responses from your model, serving as a foundational step in customising its output to your needs. Beyond following the detailed guidelines from the likes of Microsoft and OpenAI, understanding the core principles of prompt construction—such as clarity, context, and specificity—can drastically improve model performance. Well-tuned prompts not only lead to better output quality but also contribute to sustainability by reducing the number of tokens required and the overall compute resources consumed. By getting the desired output in fewer input-output cycles, you inherently use less carbon per interaction. Orchestration frameworks like prompt flow and Semantic Kernel facilitate experimentation and refinement, enhancing prompt effectiveness with version control and reusability with templates.
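
To make this concrete, here is a minimal sketch in C# of a prompt built around those principles. The scenario, wording and word limit are purely illustrative assumptions, not values taken from this guide.

using System;

// Illustrative only: a compact prompt that states the role, the grounding
// context, the task and the output constraints explicitly, so the desired
// answer is reached in fewer input-output cycles.
var systemMessage =
    "You are a support assistant for Contoso. Answer only from the provided context. " +
    "If the context does not contain the answer, reply with: I don't know.";

var orderContext =
    "Order 1234 shipped on 12 April and is expected to arrive within 5 business days.";

var userMessage =
    "Context:\n" + orderContext + "\n\n" +
    "Task: When will order 1234 arrive? Answer in one sentence of at most 25 words.";

// systemMessage and userMessage would be sent as the system and user messages
// of a chat completion request.
Console.WriteLine(systemMessage + "\n\n" + userMessage);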

 

Retrieval Augmented Generation (RAG)

Integrating RAG with your models taps into existing datasets, leveraging organisational knowledge without the extensive resources required for model training or extensive fine-tuning. This approach underscores the importance of how and where data is stored and accessed since its effectiveness and carbon efficiency is highly dependent on the quality and relevance of the retrieved data. End-to-end solutions like Microsoft Fabric facilitate comprehensive data management, while Azure AI Search enhances efficient information retrieval through hybrid search, combining vector and keyword search techniques. In addition, frameworks like prompt flow and Semantic Kernel enable you to successfully build RAG solutions with Azure AI Studio.
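
As a rough sketch of the retrieve-then-generate flow described above, the following C# outlines the shape of a RAG call. ISearchClient and IChatClient are hypothetical placeholder abstractions standing in for your actual retrieval service (for example Azure AI Search) and model client; they are not Azure SDK types.

using System.Collections.Generic;
using System.Threading.Tasks;

// Placeholder abstractions (assumptions, not Azure SDK interfaces).
public interface ISearchClient
{
    Task<IReadOnlyList<string>> SearchAsync(string query, int top);
}

public interface IChatClient
{
    Task<string> CompleteAsync(string systemMessage, string userMessage);
}

public sealed class RagAnswerService(ISearchClient search, IChatClient chat)
{
    public async Task<string> AnswerAsync(string question)
    {
        // Retrieve a small number of relevant passages; tighter, better-targeted
        // context keeps token counts (and therefore compute) down.
        IReadOnlyList<string> passages = await search.SearchAsync(question, top: 3);
        string context = string.Join("\n---\n", passages);

        string systemMessage =
            "Answer using only the provided context. Say 'I don't know' if the context is insufficient.";
        string userMessage = $"Context:\n{context}\n\nQuestion: {question}";

        return await chat.CompleteAsync(systemMessage, userMessage);
    }
}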

 

Fine-tuning

For domain-specific adjustments or to address knowledge gaps in pre-trained models, fine-tuning is a tailored approach. While it involves additional computation, fine-tuning can be a more sustainable option than training a model from scratch or repeatedly passing large amounts of context and organisational data via prompts with every query. Azure OpenAI’s use of PEFT (parameter-efficient fine-tuning) techniques, such as LoRA (low-rank adaptation), requires far fewer computational resources than full fine-tuning. Not all models support fine-tuning, so consider this in your base model selection.

 

Model Improvement | Considerations | Sustainability Impact
Prompt Engineering | Optimise prompts for more relevant output | Low carbon impact vs. fine-tuning, but consistently long prompts may reduce efficiency
Retrieval Augmented Generation (RAG) | Leverages existing data to ground model | Low carbon impact vs. fine-tuning, depending on relevance of retrieved data
Fine-tuning (with PEFT) | Adapt to specific domains or tasks not encapsulated in base model | Carbon impact depends on model usage and lifecycle, recommended over full fine-tuning

 

Deploy the model

Azure AI Studio simplifies model deployment, offering various pathways depending on your chosen model. Embracing Microsoft's management of the underlying infrastructure often leads to greater efficiency and reduced responsibility on your part.

 

MaaS vs. MaaP

Model-as-a-Service (MaaS) provides a seamless API experience for deploying models like Llama 3 and Mistral Large, eliminating the need for direct compute management. With MaaS, you deploy a pay-as-you-go endpoint to your environment, while Azure handles all other operational aspects. This approach is often favoured for its energy efficiency, as Azure optimises the underlying infrastructure, potentially leading to a more sustainable use of resources. MaaS can be thought of as a SaaS-like experience applied to foundation models, providing a convenient and efficient way to leverage pre-trained models without the overhead of managing the infrastructure yourself.

 

On the other hand, Model-as-a-Platform (MaaP) caters to a broader range of models, including those not available through MaaS. When opting for MaaP, you create a real-time endpoint and take on the responsibility of managing the underlying infrastructure. This approach can be seen as a PaaS offering for models, combining the ease of deployment with the flexibility to customise the compute resources. However, choosing MaaP requires careful consideration of the sustainability trade-offs outlined in the WAF, as you have more control over the infrastructure setup. It's essential to strike a balance between customisation and resource efficiency to ensure a sustainable deployment.

 

Model Parameters

Tailoring your model's deployment involves adjusting various parameters—such as temperature, top p, frequency penalty, presence penalty, and max response—to align with the expected output. Understanding and adjusting these parameters can significantly enhance model efficiency. By optimising responses to reduce the need for extensive context or fine-tuning, you lower memory use and, consequently, energy consumption.
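
For illustration, the snippet below sets these parameters on a chat completions request over the Azure OpenAI REST API. Treat it as a sketch: the endpoint, deployment name, API version and key are placeholders, and the parameter values are arbitrary starting points rather than recommendations.

using System;
using System.Net.Http;
using System.Net.Http.Json;

var endpoint = "https://YOUR-RESOURCE.openai.azure.com";   // placeholder
var deployment = "YOUR-DEPLOYMENT";                        // placeholder

using var http = new HttpClient();
http.DefaultRequestHeaders.Add("api-key", "YOUR-API-KEY"); // placeholder

var payload = new
{
    messages = new[]
    {
        new { role = "system", content = "You are a concise assistant." },
        new { role = "user", content = "Summarise the benefits of semantic caching in two sentences." }
    },
    temperature = 0.2,       // lower temperature -> more deterministic output
    top_p = 0.95,            // nucleus sampling cut-off
    frequency_penalty = 0.0,
    presence_penalty = 0.0,
    max_tokens = 120         // cap on response length ("max response")
};

HttpResponseMessage response = await http.PostAsJsonAsync(
    $"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=2024-02-01",
    payload);

response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());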

 

Provisioned Throughput Units (PTUs)

Provisioned Throughput Units (PTUs) are designed to improve model latency and ensure consistent performance, serving a dual purpose. Firstly, by allocating dedicated capacity, PTUs mitigate the risk of API timeouts—a common source of inefficiency that can lead to unnecessary repeat requests by the end application. This conserves computational resources. Secondly, PTUs grant Microsoft valuable insight into anticipated demand, facilitating more effective data centre capacity planning.

 

Semantic Caching

Implementing caching mechanisms for frequently used prompts and completions can significantly reduce the computational resources and energy consumption of your generative AI workloads. Consider using in-memory caching services like Azure Cache for Redis for high-speed access and persistent storage solutions like Azure Cosmos DB for longer-term storage. Ensure the relevance of cached results through appropriate invalidation strategies. By incorporating caching into your model deployment strategy, you can minimise the environmental impact of your deployments while improving efficiency and response times.
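
The sketch below shows the simplest possible shape of this idea in C#, using an in-memory cache keyed on a normalised prompt. A true semantic cache would match on embedding similarity (for example with Azure Cache for Redis) rather than exact text, and GetCompletionFromModelAsync is a placeholder for your real model call.

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public sealed class CachedCompletionService(IMemoryCache cache)
{
    public async Task<string> GetCompletionAsync(string prompt)
    {
        // Normalise the prompt so trivially different requests hit the same entry.
        string key = "completion:" + prompt.Trim().ToLowerInvariant();

        if (cache.TryGetValue(key, out string? cached) && cached is not null)
        {
            return cached; // cache hit: no model call, no extra compute
        }

        // Placeholder for the actual model call (Azure OpenAI, a MaaS endpoint, etc.).
        string completion = await GetCompletionFromModelAsync(prompt);

        // Expire entries so stale answers are eventually refreshed (invalidation strategy).
        cache.Set(key, completion, TimeSpan.FromMinutes(30));
        return completion;
    }

    private static Task<string> GetCompletionFromModelAsync(string prompt) =>
        Task.FromResult($"(model answer for: {prompt})");
}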

 

Model Deployment | Considerations | Sustainability Impact
MaaS | Serverless deployment, managed infrastructure | Lower carbon intensity due to optimised infrastructure
MaaP | Flexible deployment, self-managed infrastructure | Higher carbon intensity, requires careful resource management
PTUs | Dedicated capacity for consistent performance | Improves efficiency by avoiding API timeouts and redundant requests
Semantic Caching | Store and reuse frequently accessed data | Reduces redundant computations, improves efficiency

 

Evaluate the model’s performance

Model Evaluation

As base models evolve and user needs shift, regular assessment of model performance becomes essential. Azure AI Studio facilitates this through its suite of evaluation tools, enabling both manual and automated comparison of actual outputs against expected ones across various metrics, including groundedness, fluency, relevancy, and F1 score. Importantly, assessing performance also means scrutinising your model for risk and safety concerns, such as the presence of self-harm, hateful, and unfair content, to ensure compliance with an ethical AI framework.

 

Model Performance

Model deployment strategy—whether via MaaS or MaaP—affects how you should monitor resource usage within your Azure environment. Key metrics like CPU, GPU, memory utilisation, and network performance are vital indicators of your infrastructure's health and efficiency. Tools like Azure Monitor and Azure carbon optimisation offer comprehensive insights, helping you check that your resources are allocated optimally. Consult the Azure Well-Architected Framework for detailed strategies on balancing performance enhancements with cost and energy efficiency, such as deploying to low-carbon regions, ensuring your AI implementations remain both optimal and sustainable.

 

A Note on Responsible AI

While sustainability is the main focus of this guide, it's important to also consider the broader context of responsible AI. Microsoft's Responsible AI Standard provides valuable guidance on principles like fairness, transparency, and accountability. Technical safeguards, such as Azure AI Content Safety, play a role in mitigating risks but should be part of a comprehensive approach that includes fostering a culture of responsibility, conducting ethical reviews, and combining technical, ethical, and cultural considerations. By taking a holistic approach, we can work towards the responsible development and deployment of generative AI while addressing potential challenges and promoting its ethical use.

 

Conclusion

As we explore the potential of generative AI, it’s clear that its use cases will continue to grow quickly. This makes it crucial to keep the environmental impact of our AI workloads in mind.

 

In this guide, we’ve outlined some key practices to help prioritise the environmental aspect throughout the lifecycle. With the field of generative AI changing rapidly, make sure to stay up to date with the latest developments and keep learning.

 

Contributions

Special thanks to the UK GPS team who reviewed this article before it was published. In particular, Michael Gillett, George Tubb, Lu Calcagno, Sony John, and Chris Marchal.


SDU Show 90 with guest Joe Sack

SDU Show 90 features Microsoft Senior Product Manager Joe Sack discussing Copilot integration with Azure SQL DB.



Download audio: http://sqldownunder.blob.core.windows.net/podcasts/SDU90FullShow.mp3

Request Response Messaging Pattern With MassTransit


Thank you to our sponsors who keep this newsletter free to the reader:

Become a Postman master with Postman Intergalactic sessions. Test your APIs with ease and collaborate on API development with workspaces. Get Postman for FREE!

Free eBook: How to Choose a Component Library? Practical advice for choosing the right component libraries for your business requirements. Try the expert-curated solutions now!

Building distributed applications might seem simple at first. It's just servers talking to each other. Right?

However, it opens a set of potential problems you must consider. What if the network has a hiccup? A service unexpectedly crashes? You try to scale, and everything crumbles under the load? This is where the way your distributed system communicates becomes critical.

Traditional synchronous communication, where services call each other directly, is inherently fragile. It creates tight coupling, making your whole application vulnerable to single points of failure.

To combat this, we can turn to distributed messaging (and introduce an entirely different set of problems, but that's a story for another issue).

One powerful tool for achieving this in the .NET world is MassTransit.

In this week's issue, we'll explore MassTransit's implementation of the request-response pattern.

Request-Response Messaging Pattern Introduction

Let's start by explaining how the request-response messaging pattern works.

The request-response pattern is just like making a traditional function call but over the network. One service, the requester, sends a request message and waits for a corresponding response message. This is a synchronous communication approach from the requester's side.

The good parts:

  • Loose Coupling: Services don't need direct knowledge of each other, only of the message contracts. This makes changes and scaling easier.
  • Location Transparency: The requester doesn't need to know where the responder is located, leading to improved flexibility.

The bad parts:

  • Latency: The overhead of messaging adds some additional latency.
  • Complexity: Introducing a messaging system and managing the additional infrastructure can increase project complexity.
Request response messaging pattern diagram.

Request-Response Messaging With MassTransit

MassTransit supports the request-response messaging pattern out of the box. We can use a request client to send requests and wait for a response. The request client is asynchronous and supports the await keyword. The request will also have a timeout of 30 seconds by default, to prevent waiting for the response for too long.
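
For reference, wiring up the request client and consumer from the example that follows might look roughly like this. It’s a sketch that assumes RabbitMQ as the transport; adjust the host, transport and timeout to your setup.

using MassTransit;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

services.AddMassTransit(x =>
{
    // The consumer that handles GetOrderStatusRequest messages (defined below).
    x.AddConsumer<GetOrderStatusRequestConsumer>();

    // The request client; the default 30-second timeout can be overridden here.
    x.AddRequestClient<GetOrderStatusRequest>(RequestTimeout.After(s: 30));

    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("localhost");
        cfg.ConfigureEndpoints(context);
    });
});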

Let's imagine a scenario where you have an order processing system that needs to fetch an order's latest status. We can fetch the status from an Order Management service. With MassTransit, you'll create a request client to initiate the process. This client will send a GetOrderStatusRequest message onto the bus.

public record GetOrderStatusRequest
{
    public string OrderId { get; init; }
}

On the Order Management side, a responder (or consumer) will be listening for GetOrderStatusRequest messages. It receives the request, potentially queries a database to get the status, and then sends a GetOrderStatusResponse message back onto the bus. The original request client will be waiting for this response and can then process it accordingly.

public class GetOrderStatusRequestConsumer : IConsumer<GetOrderStatusRequest>
{
    public async Task Consume(ConsumeContext<GetOrderStatusRequest> context)
    {
        // Get the order status from a database.

        await context.RespondAsync<GetOrderStatusResponse>(new
        {
            // Set the respective response properties.
        });
    }
}
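
On the requesting side, the registered IRequestClient<GetOrderStatusRequest> is injected and used to send the request and await the response. A sketch, assuming the usual using MassTransit; and System.Threading.Tasks; directives and a GetOrderStatusResponse contract shaped to match what the consumer responds with:

public record GetOrderStatusResponse
{
    public string OrderId { get; init; }
    public string Status { get; init; }
}

public class OrderStatusService(IRequestClient<GetOrderStatusRequest> client)
{
    public async Task<string> GetOrderStatusAsync(string orderId)
    {
        // Sends the request and awaits the matching response (30-second timeout by default).
        Response<GetOrderStatusResponse> response =
            await client.GetResponse<GetOrderStatusResponse>(
                new GetOrderStatusRequest { OrderId = orderId });

        return response.Message.Status;
    }
}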

Getting User Permissions In a Modular Monolith

Here's a real-world scenario where my team decided to implement this pattern. We were building a modular monolith, and one of the modules was responsible for managing user permissions. The other modules could call out to the Users module to get the user's permissions. And this works great while we are still inside a monolith system.

However, at one point we needed to extract one module into a separate service. This meant that the communication with the Users module using simple method calls would no longer work.

Luckily, we were already using MassTransit and RabbitMQ for messaging inside the system.

So, we decided to use the MassTransit request-response feature to implement this.

The new service will inject an IRequestClient<GetUserPermissions>. We can use it to send a GetUserPermissions message and await a response.

A very powerful feature of MassTransit is that you can await more than one response message. In this example, we're waiting for a PermissionsResponse or an Error response. This is great, because we also have a way to handle failures in the consumer.

internal sealed class PermissionService(
    IRequestClient<GetUserPermissions> client)
    : IPermissionService
{
    public async Task<Result<PermissionsResponse>> GetUserPermissionsAsync(
        string identityId)
    {
        var request = new GetUserPermissions(identityId);

        Response<PermissionsResponse, Error> response =
            await client.GetResponse<PermissionsResponse, Error>(request);

        if (response.Is(out Response<Error> errorResponse))
        {
            return Result.Failure<PermissionsResponse>(errorResponse.Message);
        }

        if (response.Is(out Response<PermissionsResponse> permissionResponse))
        {
            return permissionResponse.Message;
        }

        return Result.Failure<PermissionsResponse>(NotFound);
    }
}

In the Users module, we can easily implement the GetUserPermissionsConsumer. It will respond with a PermissionsResponse if the permissions are found or an Error in case of a failure.

public sealed class GetUserPermissionsConsumer(
    IPermissionService permissionService)
    : IConsumer<GetUserPermissions>
{
    public async Task Consume(ConsumeContext<GetUserPermissions> context)
    {
        Result<PermissionsResponse> result =
            await permissionService.GetUserPermissionsAsync(
                context.Message.IdentityId);

        if (result.IsSuccess)
        {
            await context.RespondAsync(result.Value);
        }
        else
        {
            await context.RespondAsync(result.Error);
        }
    }
}

Closing Thoughts

By embracing messaging patterns with MassTransit, you're building on a much sturdier foundation. Your .NET services are now less tightly coupled, giving you the flexibility to evolve them independently and weather those inevitable network glitches or service outages.

The request-response pattern is a powerful tool in your messaging arsenal. MassTransit makes it remarkably easy to implement, ensuring that requests and responses are delivered reliably.

We can use request-response to implement communication between modules in a modular monolith. However, don't take this to the extreme, or your system might suffer from increased latency.

Start small, experiment, and see how the reliability and flexibility of messaging can transform your development experience.

That's all for this week. Stay awesome!





OData .NET 8 Preview Release


We would like to announce that we are planning a new major release of OData .NET core libraries in June. Ahead of this release, we have released preview versions of the libraries to get some early feedback from the community. Specifically, the following preview releases are now available on NuGet:

It has been almost 8 years since the last major release of OData .NET core libraries. This release presents an opportunity for us to modernize our stack, address some technical debt and take better advantage of investments in .NET. To make adoption and upgrading to the new version smooth, we have opted to limit the number of breaking changes. In many cases, you should be able to upgrade to the new release by changing the version number with minimal code changes. We share a list of breaking changes below.

The most disruptive change we are making in this release is dropping support for .NET Framework. The new release will only support .NET 8 and later. We understand that there are still a lot of people using .NET Framework. For this reason, we will continue to maintain the OData .NET v7 suite of libraries for the foreseeable future. We currently have no plans to drop support for OData Core 7.x, although we will eventually stop adding new features to it after the 8.0 major release.

With this release, we will also introduce an official support policy. The support policy will document the support lifecycle for the different official OData libraries, stating which versions are supported and for how long, which should help customers plan their migrations accordingly. We will also publish a comprehensive migration guide to help you move from version 7.x to 8.0.0.

We invite you to try out these new versions and share feedback with us. Feel free to create an issue on our GitHub repository if you run into any issues with the preview release.

OData .NET 8 release schedule

  • April 26th, 2024: First preview release 8.0.0-preview.1
  • May 16th, 2024: First release candidate 8.0.0-rc1
  • June 20th, 2024: Official release 8.0.0

We may release additional preview versions or release candidates before the final release depending on the feedback we get. We also plan to release new versions of Microsoft.AspNetCore.OData and Microsoft.OData.ModelBuilder that take advantage of OData Core 8.x. We will communicate the dates for these releases soon.

Breaking changes in Version 8.0.0-preview.1

The sections below list the breaking changes introduced in the preview release. For a full list of changes (including non-breaking changes and improvements), check the changelog.

General breaking changes

  • Support for .NET 8 and later only: We dropped support for .NET Framework, .NET Core 3.1 and lower, and .NET 7 and lower.

Breaking changes in Microsoft.OData.Core

  • Merged IJsonWriter, IJsonWriterAsync, IJsonStreamWriter, IJsonStreamWriterAsync into a single interface IJsonWriter that exposes synchronous, asynchronous and streaming APIs for writing JSON payloads
  • Merged IJsonReader, IJsonReaderAsync, IJsonStreamReader, IJsonStreamReaderAsync into a single interface IJsonReader that exposes synchronous, asynchronous and streaming APIs for reading JSON payloads
  • Changed the default value of the leaveOpen argument of the ODataBinaryStreamValue constructor from true to false. This means that by default, when the ODataBinaryStreamValue object is disposed, the underlying stream will also be closed and disposed.
  • JSONP support has been deprecated and will be removed in Microsoft.OData.Core 9.
  • The IJsonWriterFactory.CreateJsonWriter method now accepts a Stream instead of a TextWriter.
  • The DefaultStreamBasedJsonWriterFactory class has been renamed to ODataUtf8JsonWriterFactory.
  • Remove our custom IContainerBuilder interface used for dependency injection in favor of the standard Microsoft.Extensions.DependencyInjection APIs.
  • Introduced a new extension method called AddDefaultODataServices to IServiceCollection for registering default OData services (see the sketch after this list).
  • Removed the ODataSimplifiedOptions class and moved its properties to more appropriate classes (see below).
  • Moved EnableParsingKeyAsSegment from ODataSimplifiedOptions to ODataUriParserSettings.
  • Moved EnableReadingKeyAsSegment and EnableReadingODataAnnotationPrefix from ODataSimplifiedOptions to ODataMessageReaderSettings.
  • Moved EnableWritingKeyAsSegment, GetOmitODataPrefix and SetOmitODataPrefix from ODataSimplifiedOptions to ODataMessageWriterSettings.
  • Removed the ODataMessageReader.CreateODataDeltaReader and CreateODataDeltaReaderAsync methods. Use CreateODataDeltaResourceSetReader instead.
  • Removed the ODataMessageWriter.CreateODataDeltaWriter and CreateODataDeltaWriterAsync methods. Use CreateODataDeltaResourceSetWriterAsync instead.
  • Added the INavigationSourceSegment interface that exposes a NavigationSource property to provide a consistent way of retrieving the navigation source without having to perform a type cast.
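
As a rough illustration of the dependency injection change above (see the item on AddDefaultODataServices), registration might look something like the sketch below. The exact namespace that exposes the extension method may differ, so treat this as an assumption rather than documented usage.

using Microsoft.Extensions.DependencyInjection;
// plus a using for the Microsoft.OData namespace that exposes AddDefaultODataServices

var services = new ServiceCollection();

// Registers the default OData services through the standard
// Microsoft.Extensions.DependencyInjection APIs, replacing the removed
// IContainerBuilder-based registration.
services.AddDefaultODataServices();

var provider = services.BuildServiceProvider();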

Breaking changes in Microsoft.OData.Client

  • Marked the DataServiceContext.KeyComparisonGeneratesFilterQuery property as deprecated and changed the default value from false to true. This means that the LINQ query context.People.Where(p => p.Id == 10) will generate a URL filter query option like /People?$filter=Id eq 10 instead of /People(10) by default.
  • Removed HttpWebRequestMessage class, and consequently, support for the legacy HttpWebRequest API. All requests from the client will be made using HttpClientRequestMessage which is based on HttpRequestMessage and HttpClient.
  • Removed the HttpRequestTransportMode enum property from DataServiceContext. This property was used to switch between HttpClient and HttpWebRequest. Now all requests are made using HttpClient.
  • Added DataServiceContext.HttpClientFactory property that allows you to inject your own HttpClient instance by passing a custom IHttpClientFactory (see the sketch after this list). This replaces the previous approach of configuring an HttpClient using the custom IHttpClientHandlerProvider (see below).
  • Removed the DataServiceContext.HttpClientHandlerProvider property and the IHttpClientHandlerProvider interface. These were used to provide custom HttpClient configurations. The DataServiceContext.HttpClientFactory should be used instead.
  • Removed the DataServiceContext.Credentials property. The DataServiceContext.HttpClientFactory should be used to provide a HttpClient instance configured with the right credentials if needed.
  • Removed the HttpClientRequestMessage.ReadWriteTimeout property. The HttpClientRequestMessage.Timeout can be used to set the request timeout.
  • Removed the DataServiceQuery<TElement>.IncludeTotalCount(bool countQuery) method. Use IncludeCount(bool countQuery) instead.
  • Removed the DataServiceQuery<TElement>.IncludeTotalCount() method. Use DataServiceQuery<TElement>.IncludeCount() instead.
  • Removed the QueryOperationResponse.TotalCount property. Use Count instead.
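
To illustrate the new DataServiceContext.HttpClientFactory property mentioned above, here is a minimal sketch. The factory implementation and service root URL are placeholders, and the property usage is based on the description in this list rather than final documentation.

using System;
using System.Net.Http;
using Microsoft.OData.Client;

var context = new DataServiceContext(new Uri("https://services.example.com/odata/"));

// Inject a pre-configured HttpClient via the new HttpClientFactory property.
// This replaces the removed IHttpClientHandlerProvider and Credentials approaches.
context.HttpClientFactory = new SingleHttpClientFactory();

// Minimal IHttpClientFactory that always returns one shared, pre-configured client.
internal sealed class SingleHttpClientFactory : IHttpClientFactory
{
    private static readonly HttpClient Client = new()
    {
        Timeout = TimeSpan.FromSeconds(100)
    };

    public HttpClient CreateClient(string name) => Client;
}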

Breaking changes in Microsoft.OData.Edm

  • Added the EntityType property to IEdmNavigationSource interface to make it easier to retrieve the entity type from a navigation source without having to perform a type cast.

Planned changes in Version 8.0.0

This section contains a list of planned changes that are expected to make it to the official 8.0.0 release that are not available in the first preview.

Planned changes in Microsoft.OData.Core

  • Make ODataUtf8JsonWriter the default JSON writer implementation.
  • Use ValueTask<T> instead of Task<T> for async I/O operations where applicable.
  • Allow customers to include custom annotations in the payload that are not included in the include-odata-annotations preference header.
  • When writing the Scale attribute in XML CSDL, write variable in lowercase instead of Variable.
  • Change the ODataLibraryCompatibility enum into a flags enum where each bit will represent a different compatibility setting that can be used to enable some legacy serialization behaviour.
  • Remove deprecated APIs and behavior flags.

 



Daily Reading List – April 26, 2024 (#306)


Today was a quick day, and I took a detour in the middle to accompany my son on his quest to earn his driver’s license. For better or worse, he passed. Buckle up out there.

[article] Take Two: Eight Hard-Earned Lessons from Repeat Founders on Starting Over Again. Doing something once is awesome. Doing it again? That’s a whole other thing. I liked this article with lessons from repeat founders.

[article] How Burnout Became Normal — and How to Push Back Against It. These are good techniques for resisting burnout and creating an environment with a reasonable amount of stress.

[blog] 2024 DORA survey now live: share your thoughts on AI, DevEx, and platform engineering. Get involved in this year’s survey and become part of the industry’s largest body of research into software delivery performance.

[blog] OK Cloud, On-Prem is Alright. I don’t really believe “private cloud” is a real thing—nine times out of ten, it’s a nicely automated VM or container environment—but Ian offers up a thoughtful exploration of the hybrid future of most big companies.

[blog] isBooleanTooLongAndComplex. I don’t think I’ve ever seen this talked about, or thought about it much. Probably because I’m a subpar developer. It’s a short post, but give it a read.

[blog] Building DoorDash’s Product Knowledge Graph with Large Language Models. The engineering team at DoorDash wanted to automate a process, and avoid the cold start of a self-trained model, so they built out an LLM-based solution.

##

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:


