With the new native hosting model, Azure Functions are now fully integrated into Azure Container Apps (ACA). This means you can deploy and run your functions directly on ACA, taking full advantage of the robust app platform.
If you are using the Azure CLI, you can deploy Azure Functions directly onto Azure Container Apps through the Microsoft.App resource provider by setting the "kind=functionapp" property on the Container App resource.
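As a rough sketch, a native deployment from the Azure CLI might look like the following. The resource names (my-func, my-rg, my-env) are placeholders, the image is a public Functions quickstart sample, and the exact flags (notably the one that sets the functionapp kind) may vary with your CLI and containerapp extension versions:

```shell
# Sketch: deploy a function app natively on ACA via the Microsoft.App provider.
# All names below are placeholders; verify flags with `az containerapp create --help`.
az containerapp create \
  --name my-func \
  --resource-group my-rg \
  --environment my-env \
  --image mcr.microsoft.com/azure-functions/dotnet8-quickstart-demo:1.0 \
  --ingress external \
  --target-port 80 \
  --kind functionapp
```

Because the Container App is created with the functionapp kind, ACA recognizes it as a function app and applies Functions-aware scaling, rather than treating it as a plain container image.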
Please note that, in the new native hosting model, native Azure Functions on ACA unlock the complete feature set of Azure Container Apps, including:
In summary, by running Functions in Container Apps, you benefit from automatic scaling, access to native ACA features, official support and a fully managed container environment—all without having to manage the underlying infrastructure yourself.
Previously, hosting Azure Functions on Azure Container Apps (ACA) was made available through the Microsoft.Web resource provider. While this method was effective, it introduced complexity and offered limited access to some of ACA's native features. Additionally, some of you may have deployed plain vanilla function images on the ACA environment; however, this approach does not offer the advantages of auto-scaling and is not officially supported.
If you are currently deploying function images on Azure Container Apps, we recommend transitioning to the new native hosting model. Here are the steps to move to this new approach:
Ready to try it out? You can deploy your first function app natively on ACA using the Azure CLI, Bicep, ARM templates, or the Azure Portal.
Explore the documentation below to learn more.
This week, we discuss Zenoss finally getting acquired, Databricks buying Neon, and the debut of WizOS. Plus, updates on OpenAI, Google, Apple—and hot takes on Marmite, Vegemite, and Emacs.
Watch the YouTube Live Recording of Episode 519
dotConnect and Entity Developer boost .NET development with high-performance ADO.NET providers and a visual ORM builder. Try a 30-day free trial now!
"I remember I had the entire life cycle of the web forms printed on a wall. It was like six sheets of paper and it was very complex, and it was very useful to have it on the wall because, like, you could always look at it and say, 'okay, this is going on before this one.' So you have to like switch the order of things. But that's exactly what I call interesting"— Tomáš Herceg
Welcome friends to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. We are the go-to podcast for .NET developers worldwide, and I am your host: Jamie "GaProgMan" Taylor.
In this episode, we talk with Tomáš Herceg about strategies for modernizing .NET Framework web applications such that they leverage the very latest in the .NET stack. Tomáš shares his insights from the journey of upgrading his own applications and those of his clients, both of which provided the background for his new book: "Modernizing .NET Web Applications".
"The biggest problem of the YARP migrations: that they will force you to do a lot of infrastructure things at the beginning before you even start migrating some real functionality."— Tomáš Herceg
Along the way, we discuss how using his DotVVM project can help with the migration. Not only is the upgrade path for DotVVM projects a process of swapping a NuGet package, but it also keeps the upgrade as a single in-memory process—something that YARP-based migrations aren't able to do.
Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.
If you find this episode useful in any way, please consider supporting the show by either leaving a review (check our review page for ways to do that), sharing the episode with a friend or colleague, buying the host a coffee, or considering becoming a Patron of the show.
The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-7/dotnet-web-app-modernization-made-easy-with-tomas-hercegs-new-book-and-dotvvm/
Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts; this will help the show's audience grow. Or you can just share the show with a friend.
And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch.
You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.
Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show
Azure AI Search provides a fast, scalable, reliable vector search service that you can extend with RAG and other AI services.
In this article, I will show how to create an Azure AI Search service.
Navigate to the Azure Portal and log in.
Click the [Create a resource] button (Fig. 1) and search for "ai search" or "azure ai search," as shown in Fig. 2.
Fig. 1
Fig. 2
From the list of results, select the [Create] button in the "Azure AI Search" panel (Fig. 3) to expand the menu, and select the [Create] option, as shown in Fig. 4.
Fig. 3
Fig. 4
The "Create a search service" dialog displays with the "Basics" tab selected, as shown in Fig. 5.
Fig. 5
At the "Subscription" dropdown, select the subscription in which you want to create this AI Search service. Many of you will have only one subscription, so you will not need to choose anything here.
At the "Resource group" field, select the resource group in which you want to create the Search service, or click the "Create new" link to create a new resource group in which to add the Search service. A resource group is a logical grouping of Azure resources you want to manage together.
At the "Service name" field, enter a unique name for this Search service.
At the "Location" dropdown, select an Azure region in which to create the Search service. Consider the location of the people and services using this service to minimize latency.
The "Pricing tier" field defaults to "Standard." If you want to change this, click the "Change pricing tier" link and select an appropriate tier from the list of options, as shown in Fig. 6.
Fig. 6
These pricing tiers are listed in ascending order of price and capacity. You should select one that meets your needs, but resist paying for more than you need.
Fig. 7 shows the "Scale" tab. It is unnecessary to change anything on this tab, but it allows you to add more replicas and partitions. Increasing replicas increases the availability of the service, while increasing partitions increases the capacity of the service. You should set the replica count to at least 3 for production environments to achieve high availability for read and write operations.
Fig. 7
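For repeatable setups, the same service can be created from the Azure CLI instead of the portal. The sketch below uses placeholder names (my-search, my-rg) and an example region; check `az search service create --help` for the exact flags available in your CLI version:

```shell
# Sketch: create an Azure AI Search service from the CLI.
# Names and region are placeholders; sku values include free, basic,
# standard, standard2, and standard3.
az search service create \
  --name my-search \
  --resource-group my-rg \
  --location eastus \
  --sku standard \
  --replica-count 3 \
  --partition-count 1
```

Setting `--replica-count 3` mirrors the high-availability guidance above for production workloads, while a single partition keeps capacity (and cost) at the minimum until you need more storage or indexing throughput.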
Fig. 8 shows the "Networking" tab. It is unnecessary to change anything on this tab, but it allows you to restrict access to the service to specific networks and to configure private endpoints for the service.
Fig. 8
Fig. 9 shows the "Tags" tab. It is not necessary to change anything on this tab, but you can apply name-value pairs to this resource that you may use to filter or sort your reports.
Fig. 9
Fig. 10 shows the "Review + create" tab. If you made any errors, such as leaving a required field empty or selecting an inconsistent combination of options, these errors will be listed here, and you will need to correct them before you can proceed.
Fig. 10
After correcting any errors, click the [Create] button (Fig. 11) to start creating the Azure AI Search service.
Fig. 11
After a short time, a confirmation message like the one in Fig. 12 will display, indicating that the Search service has been created.
Fig. 12
Click the [Go to resource] button (Fig. 13) to show the "Overview" blade of the newly created Azure AI Search service, as shown in Fig. 14.
Fig. 13
Fig. 14
This article showed you how to create an Azure AI Search service.
We are excited to share a summary of recent updates and continuous clean-up efforts across the Semantic Kernel .NET codebase. These changes focus on improving maintainability, aligning with the latest APIs, and ensuring a consistent experience for users. Below you’ll find details on package graduations, deprecations, and a few other improvements.
- The Microsoft.SemanticKernel.Plugins.Core package has been moved from "alpha" to "preview" status, reflecting its maturity and readiness for broader use. This change does not introduce new features but signals increased stability for those relying and building on these core plugins.
- The Microsoft.SemanticKernel.Markdown package has been removed due to lack of usage. If you still use this package, please refer to the migration guide.
- The Microsoft.SemanticKernel.Planners.Handlebars and Microsoft.SemanticKernel.Planners.OpenAI planners were deprecated in favor of more reliable mechanisms such as function calling, and the decision was made to discontinue their availability as NuGet packages. For migration details, see the stepwise planner migration guide.
- A plugin was updated to the latest preview API version (2024-10-02-preview). This required some breaking changes to the plugin public API surface. See the migration guide for details.
- HTTP requests now use the SendWithSuccessCheckAsync extension methods, aligning with other Semantic Kernel components.

These updates are part of our ongoing effort to keep the Semantic Kernel codebase clean, stable, and easy to use. For more information on migrating from deprecated or updated components, please refer to the linked migration guides.
If you have feedback or questions, please join the discussion on our GitHub repository.
The post Semantic Kernel: Package previews, Graduations & Deprecations appeared first on Semantic Kernel.