
The Honda Zero EVs look even more compelling up close

Honda 0 SUV
Image: Vjeran Pavic / The Verge

I’m not saying I want to buy one. I’m just very curious to see where this is going.

Honda released one of the more interesting concepts at last year’s CES with two Honda Zero prototypes: the Saloon and the Space-Hub. It promised to come back in a year with something a little closer to production. But rather than temper those space-age design elements, Honda leaned into them. Way in.

The Honda 0 Saloon and Honda 0 SUV retain a lot of what made the concepts so weird and different — and not necessarily in an off-putting way. But it’s definitely not the electric CR-V that customers have been begging the company to make for years. In fact, Honda seems to be saying to all those people who want normie-looking EVs, “We see you. We hear you. We don’t care.”

Much has already been said about the similarities between these Honda Zero prototypes and certain iconic vehicles from the ’70s and ’80s, like the Lamborghini Countach, AMC Gremlin, Aston Martin Lagonda Shooting Brake, and (h/t Jason Torchinsky) the Brubaker Box.

My theory is that Honda is reaching for these design inspirations as a way to offset the future shock of an ultra-minimalist interior and all the marketing speak about “software-defined vehicles.” After all, Honda’s real announcement this year was the operating system it developed in-house, named after its iconic Asimo robot.

The Zero EVs mostly feel like a lot of window dressing for the actual product, which is software. What better way to draw people into listening to a TED talk about “high-performance system-on-a-chip” than to stand in front of a car that looks like it should be floating in low orbit?

Honda 0 Saloon

One of the things I noticed about the Saloon was the lack of a rear window — that rounded rectangle in the back isn’t transparent. The depth effect is very impressive, but it’s not obscuring an incognito window. It’s just the taillight.

Something else that caught my attention was the lack of sideview mirrors. Honda is using cameras instead. Drivers who want to check their blind spots will need to use two screens embedded at either end of the long piece of glass that spans the length of the dashboard. Of course, US safety regulations require regular old sideview mirrors, so this seems mostly aspirational.

Honda 0 SUV

The SUV is less “out there” than the Saloon, which probably means we’ll see some version of it on US roads before the sedan. There’s definitely a rear window, and the airiness of the greenhouse seems to allude to Honda Zero’s design principles of “thin, light, and wise.”

We don’t have any specs for either vehicle, though Honda has said that its Zero EVs will draw from the automaker’s Formula 1 racing experience. Honda is also aiming for optimum battery efficiency through its e-Axle system, which combines a motor, inverter, and gearbox to convert electrical power into driving force. Each EV is expected to have around 300 miles of range, which translates to an 80–90kWh battery.

Other important details include an effort to consolidate electronic control units, similar to Rivian’s recently relaunched R1 vehicles. By reducing the number of components and wiring, Honda is clearly trying to limit its costs in an environment where the price of production seems to be on the rise.

Interior

The absence of anything remotely resembling a physical knob or dial inside either vehicle is a pretty good sign that automakers continue to ignore the pleas of customers to stop porting every last bit of functionality through their digital interfaces. Yes, I’m an old man yelling at clouds, but for the love of god, give me something to twist or push. Trying to adjust the heat by tapping blindly at a smooth pane of glass while careening down a highway at 75mph isn’t exactly my idea of a good time.

The yoke is... a yoke. Automakers love their steering yokes! But when it comes time to actually put something into production, they mostly retreat back to wheel shapes. The moonroof is another one of those features that suggest “thin” principles. And obviously, Honda’s promise that its Zero vehicles will come with Level 3 autonomy, also known as “hands-off, eyes-off” driving, needs a lot more explanation. What does the handoff between the autonomous system and the driver look like? And how will it account for our very human tendency to zone out when we’re not actively engaged in driving?

There are a lot of questions swirling around these vehicles! Will they ever go into production? There’s a nonzero chance.


DevOps and Azure IaC Series: Deploy


Disclaimer: This post was originally published on Azure with AJ and has been reproduced here with permission. You can find the original post here.

Welcome back to our Azure IaC and DevOps series! In our previous article, we delved into the Build phase, highlighting best practices to achieve consistent, repeatable builds. Today, we embark on an exciting journey into the Deploy phase. A streamlined deployment process is vital for ensuring successful and consistent infrastructure deployments. We’ll explore how leveraging powerful tools like Azure Bicep can play a pivotal role in achieving this goal and transforming our deployment strategy.

The Deploy Phase

The Deploy phase for Azure IaC is quite similar to software deployments. Both require a well-defined process to ensure successful and consistent deployments. Here are some key aspects to consider:

  • Defining an IaC Tool: Just as software deployments rely on well-defined tools and scripts, IaC deployments require a robust IaC tool like Azure Bicep. This ensures that your infrastructure is deployed consistently and accurately.
  • Environment Definitions and Approvals: Like software, IaC deployments often target multiple environments (e.g., development, staging, production). Each environment needs to be defined clearly, and deployment to each environment should be gated by approval processes to ensure quality and compliance.
  • Artifact Management: Ensuring that the correct artifacts from the build phase are used in deployments is crucial for maintaining consistency and reliability. This parallels the software deployment process, where binaries and other artifacts must be managed carefully.
  • Monitoring and Rollbacks: After deployment, monitoring the deployed resources and having rollback mechanisms in place are essential. This is similar to how software deployments are managed to ensure stability and quick recovery in case of issues.

Pipeline Templates

In the deploy phase, leveraging a reusable GitHub workflow (pipeline template) can significantly enhance the efficiency and consistency of deploying Azure IaC Bicep templates. To demonstrate this, I’ve added an example workflow to the repository linked below; a minimal sketch of such a workflow also follows this list. This workflow integrates essential steps such as:

  • Download Build Artifacts: Ensuring that the same artifacts generated during the Build phase are used in the deployment phase. This step guarantees consistency and integrity across environments.
  • Environment Definitions: Define the environments to which the infrastructure will be deployed. This can include development, staging, production, and other custom environments.
  • Deployment Steps: Defined script files or inline scripts enable the pipeline to automatically deploy to various Azure scopes, such as tenant, management group, subscription, or resource groups.
  • Post Deployment Checks: After deployment, run validation scripts or tools to ensure that the deployed resources are functioning as expected. This step is crucial for maintaining the integrity of your infrastructure.
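
To make these steps concrete, here is a minimal sketch (in GitHub Actions YAML) of what such a reusable workflow could look like, assuming OIDC-based Azure login and a subscription-scope deployment. The file name, inputs, secret names, location, and script path are illustrative assumptions, not the exact contents of the linked example:

# deploy-bicep.yml: an illustrative reusable workflow; all names, inputs,
# and paths below are assumptions, not the contents of the linked example.
name: deploy-bicep
on:
  workflow_call:
    inputs:
      environment:
        description: Target environment (e.g. development, staging, production)
        required: true
        type: string
      artifact-name:
        description: Build artifact produced during the Build phase
        required: true
        type: string
    secrets:
      AZURE_CLIENT_ID:
        required: true
      AZURE_TENANT_ID:
        required: true
      AZURE_SUBSCRIPTION_ID:
        required: true

permissions:
  id-token: write   # required for OIDC login to Azure
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}   # environment approvals gate this job
    steps:
      - name: Download build artifacts
        uses: actions/download-artifact@v4
        with:
          name: ${{ inputs.artifact-name }}

      - name: Azure login
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Deploy Bicep template at subscription scope
        run: |
          az deployment sub create \
            --location australiaeast \
            --template-file main.bicep \
            --parameters main.${{ inputs.environment }}.bicepparam

      - name: Post-deployment checks
        shell: pwsh
        run: ./scripts/validate-deployment.ps1   # assumed validation script

A centralised template like this lets every deployment repeat the same download, login, deploy, and validate sequence, while environment-level approvals control promotion between stages.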

Check out the repository for this series to see how these concepts come to life and integrate them into your workflows.

Click here to view the deploy example GitHub workflow

Conclusion

In this article, we explored the Deploy phase of Azure IaC and DevOps, highlighting the critical steps involved in deploying infrastructure using Azure Bicep. By following best practices and leveraging reusable GitHub workflows, we can streamline our deployment process and ensure consistency across environments. Don’t miss our next post, where we’ll explore centralised pipelines and the crucial role they play in streamlining deployments and enhancing efficiency. Stay tuned!


DevOps and AI Series: Landing Zones


Disclaimer: This post was originally published on Azure with AJ and has been reproduced here with permission. You can find the original post here.

In today’s fast-paced digital landscape, the integration of artificial intelligence (AI) and DevOps practices is revolutionising how organisations manage their infrastructure and streamline operations. This blog post, the first in a series, will delve into the critical importance of establishing safe and scalable landing zones for AI deployments and how DevOps teams enable this transformation.

AI and DevOps may seem like distinct entities, but their convergence can significantly enhance the efficiency and scalability of IT operations. AI technologies, powered by platforms like Azure OpenAI or Azure AI Foundry, offer unprecedented capabilities for automating processes, analysing vast amounts of data, and making intelligent decisions. However, to fully leverage these benefits, it’s crucial to build safe and scalable landing zones that support seamless AI integration.

What are Landing Zones?

Landing zones are pre-configured environments that provide a secure and compliant foundation for deploying workloads in the cloud. They establish the core infrastructure components, governance policies, and security controls necessary to support the organisation’s cloud strategy. In the context of AI deployments, landing zones play a critical role in ensuring that AI models and applications can be deployed, managed, and scaled effectively.

AI Landing Zone Components

When designing a landing zone for AI deployments, several key components must be considered along with the separation of ownership and responsibilities between DevOps and other teams such as application, platform and data science teams.

The following components are typically owned and managed by the application and data science teams:

Azure OpenAI or Azure AI Foundry: These platforms provide the tools and services needed to develop, train, and deploy AI models. Application and data science teams are responsible for managing the AI workloads and ensuring that they meet the required performance and scalability standards.

Azure App Service: This platform enables the deployment of web applications. Application teams are responsible for managing the deployment and scaling of these services to support AI applications.

AI Search: Azure AI Search provides AI-powered search capabilities for applications. Data science teams are responsible for configuring and managing the search indexes and query pipelines to deliver relevant search results.

Azure Monitor and Application Insights: These tools provide monitoring and telemetry capabilities for AI applications. Data science and application teams are responsible for setting up monitoring alerts, tracking performance metrics, and diagnosing issues.

Azure Application Gateway: This service provides secure ingress access to AI applications. This component can either be managed by the application team or as a centralised resource managed by the platform team, depending on the organisation’s security and compliance requirements.

The following components are typically owned and managed by the Platform and DevOps team:

Azure Firewall: This service provides network security and threat protection for AI workloads. The platform and DevOps teams are responsible for configuring and managing the firewall rules to ensure that AI applications are protected from external threats.

Azure Policy: This service enables the enforcement of governance policies and compliance standards across the Azure environment. The platform and DevOps teams are responsible for defining and enforcing policies that govern the use of AI resources and services.

User-Defined Routes: These routes define the path that network traffic takes within the Azure environment. The platform and DevOps teams are responsible for configuring and managing these routes to ensure that AI workloads can communicate with other services and force traffic to the hub network.

Azure Bastion: This service provides secure remote access to AI workloads. The platform and DevOps teams are responsible for configuring and managing the bastion host to ensure that access to AI resources is restricted to authorised users.

DevOps in AI Deployments

DevOps practices play a pivotal role in supporting AI deployments by fostering a culture of continuous integration, continuous delivery (CI/CD), and automation. Key DevOps responsibilities include:

Infrastructure Provisioning: DevOps teams use infrastructure as code (IaC) tools to automate the provisioning of scalable, secure, and reliable infrastructure on platforms like Azure. This ensures that AI models have the necessary computational resources and storage capacity.

Security and Compliance: Establishing robust security protocols and compliance measures to protect sensitive data and meet regulatory requirements. DevOps teams implement components such as IAM policies and network configurations to secure the AI infrastructure.

Collaboration Between Teams

An area that is often overlooked is the role of DevOps in deploying and managing AI models. DevOps teams need to work closely with application and data science teams to assist with:

Continuous Integration and Continuous Delivery (CI/CD): Integrating AI model updates seamlessly into the existing infrastructure. DevOps teams implement CI/CD pipelines to automate the deployment of AI models, ensuring rapid and reliable releases.

Automated Testing and Validation: Ensuring AI models are tested and validated through automated testing frameworks, reducing the risk of errors and ensuring consistent performance.

Monitoring and Maintenance: Using monitoring tools to track the performance of AI models and make necessary adjustments. DevOps teams set up comprehensive monitoring and logging mechanisms to ensure AI services are running optimally and can quickly address any issues that arise.

Conclusion

I recently had the opportunity to speak at the Melbourne Azure User Group about AI Landing Zones. I emphasised the importance of collaboration among all teams to understand the AI workloads and ensure that the landing zones are designed to meet both the performance and security requirements of the organisation.

For a deeper dive into the topic, you can find my presentation and a demo on deploying a secure AI landing zone using a DevOps approach on my GitHub repository.

In the next post, we’ll dive deeper into the role of DevOps in managing AI models and explore best practices for integrating AI into your DevOps processes. Stay tuned for more insights on how AI and DevOps are transforming the digital landscape!


Empowering AI Agents with Tools via OpenAPI: A Hands-On Guide with Microsoft Semantic Kernel Agents


Today the Semantic Kernel team is happy to welcome back our guest author, Akshay Kokane. We will turn it over to him to dive into his recent Medium article on Semantic Kernel.

As we advance towards an Agentic Approach in the AI world, I would like to share my insights on how Semantic Kernel can assist in building AI agents that leverage existing APIs.

In a previous article, I discussed how Semantic Kernel can develop a multi-agent system that fosters a collaborative environment for these agents to interact and deliver results. In this new blog post, I am pleased to share that by using the OpenAPI specification, we can utilize existing APIs to integrate with agents. These are referred to as tools or plugins, which AI agents can call upon to obtain contextual data. 

Using Semantic Kernel, adding plugins via the OpenAPI specification is simplified. Furthermore, these plugins are automatically triggered as and when required via OpenAI tool calling. Semantic Kernel handles the complexities of tool invocation for you, streamlining the development of AI agents.

In this blog post, I will showcase how to create a chat-based application for an e-commerce platform that manages customer payments through the integration of its existing Payment Service APIs. Here are the high-level steps to import a plugin through OpenAPI:

To add a plugin to your kernel:

1. Add the following NuGet Reference: 

<PackageReference Include="Microsoft.SemanticKernel.Plugins.OpenApi" Version="1.32.0-preview" />

2. Import the Plugin to the Kernel:

await kernel.ImportPluginFromOpenApiAsync(
    pluginName: "paymentProcessor",
    uri: new Uri("http://localhost:8080/swagger/v1/swagger.json"),
    executionParameters: new OpenApiFunctionExecutionParameters()
    {
        EnablePayloadNamespacing = true
    }
);
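
For context, here is a minimal sketch of how the imported plugin can then be invoked. Only the import call above comes from the article; the kernel construction values and the prompt below are placeholders I have assumed:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Build a kernel backed by an Azure OpenAI chat deployment (placeholder values).
Kernel kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",                              // assumed deployment name
        endpoint: "https://your-resource.openai.azure.com/",   // assumed endpoint
        apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!)
    .Build();

// ... ImportPluginFromOpenApiAsync as shown above ...

// Let the model decide when to call the imported payment functions.
OpenAIPromptExecutionSettings settings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
};

var result = await kernel.InvokePromptAsync(
    "What is the status of payment TestService123?",
    new KernelArguments(settings));

Console.WriteLine(result);

With FunctionChoiceBehavior.Auto(), Semantic Kernel advertises the imported OpenAPI operations to the model as tools and invokes them on the model’s behalf, which is the automatic triggering described above.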

This approach is transformative for enterprises aiming to leverage their existing APIs without reinventing the wheel. Additionally, the OpenAPI spec enables a microservice architecture for your AI agents, facilitating easy scalability and resource management at the plugin level.

Semantic Kernel offers several methods for calling APIs. The blog includes a simple example of non-Auth APIs, but it is also possible to call Auth-based APIs. For more information, refer to the documentation on Give agents access to OpenAPI APIs | Microsoft Learn. For more details, please read the full article below.

Empowering AI Agents with Tools via OpenAPI: A Hands-On Guide with Microsoft Semantic Kernel Agents

I recently explored how the OpenAPI Specification can be utilized to give actionable capabilities to AI agents. In this blog, I aim to share my insights and demonstrate the process.

Understanding the OpenAPI Specification

The OpenAPI Specification (formerly known as Swagger) is a standardized, language-agnostic framework for defining HTTP APIs. It allows both humans and machines to discover a service’s capabilities without needing access to its source code or documentation. Properly defined OpenAPI specs enable consumers to interact with services using minimal implementation logic. Learn more here.

Why Use OpenAPI with AI Agents?

Many enterprises already have robust APIs in place. With the growing demand for AI-enabled applications, using OpenAPI-based plugins simplifies the process of empowering AI agents with existing APIs. Semantic Kernel, when integrated with OpenAPI, provides AI agents with detailed API semantics, including endpoint descriptions, data types, and expected responses.

For instance, imagine an e-commerce platform launching a new feature called ShopChat.AI, where customers can interact with an AI agent to find and purchase products. The AI agent can handle payments and check order statuses by leveraging the existing Payment Service APIs through OpenAPI specs.

Key Benefits of Combining OpenAPI with Semantic Kernel

Integrating OpenAPI specs with Semantic Kernel and Azure OpenAI provides numerous advantages:

1. Simplified AI Integration

OpenAPI specs offer a standardized method for AI agents to understand and interact with existing APIs, eliminating the need for complex custom integrations.

2. Enhanced Agent Functionality

By leveraging existing APIs, AI agents can handle a wide range of tasks, such as inventory management and payment processing, resulting in more versatile applications.

3. Improved Scalability

As your application grows, integrating new APIs becomes easier with OpenAPI. This ensures that your AI agents can evolve alongside your platform.

Building an AI-Powered Application with OpenAPI and Semantic Kernel

Example Agent based Application

Let’s consider an example: Suppose you have a payment service for your e-commerce platform already in use. The service exposes two APIs:

  1. payment/accept: This API accepts credit card information, processes the payment, and returns a transactionId.
  2. payment/status: This API uses the transactionId to retrieve the payment status.

Now the e-commerce website is planning to launch a new feature, “ShopChat.AI”, where customers can interact with an AI agent the way they would with an actual shopkeeper to find and buy the best product.

Step 1: Exposing OpenAPI Specs for Your Service

Make sure your Payment Service exposes an OpenAPI spec. I’m working in .NET, where you can expose the OpenAPI spec using Swagger. A complete guide can be found here: https://learn.microsoft.com/en-us/aspnet/core/tutorials/web-api-help-pages-using-swagger?view=aspnetcore-8.0

This example uses the Swashbuckle package; I’ll register the SwaggerGen service:

builder.Services.AddSwaggerGen(options =>
{
    // Include the XML documentation comments so endpoint summaries
    // appear in the generated OpenAPI spec.
    var xmlFile = $"{System.Reflection.Assembly.GetExecutingAssembly().GetName().Name}.xml";
    var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
    options.IncludeXmlComments(xmlPath);
});

Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle

This should produce the following output at http://localhost:5272/swagger/v1/swagger.json:

{
  "openapi": "3.0.1",
  "info": {
    "title": "PaymentProcessor",
    "version": "1.0"
  },
  "paths": {
    "/api/Payments/status": {
      "get": {
        "tags": [ "Payments" ],
        "summary": "Retrieves the status of a specific payment.",
        "responses": {
          "200": { "description": "Success" }
        }
      }
    },
    "/api/Payments/process": {
      "post": {
        "tags": [ "Payments" ],
        "summary": "Processes a payment request.",
        "requestBody": {
          "description": "The payment request details.",
          "content": {
            "application/json": {
              "schema": { "$ref": "#/components/schemas/PaymentRequest" }
            },
            "text/json": {
              "schema": { "$ref": "#/components/schemas/PaymentRequest" }
            },
            "application/*+json": {
              "schema": { "$ref": "#/components/schemas/PaymentRequest" }
            }
          }
        },
        "responses": {
          "200": { "description": "Success" }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "PaymentRequest": {
        "required": [ "transactionId" ],
        "type": "object",
        "properties": {
          "transactionId": {
            "minLength": 1,
            "type": "string"
          },
          "amount": {
            "minimum": 0.01,
            "type": "number",
            "format": "double"
          }
        },
        "additionalProperties": false
      }
    }
  }
}

To ensure the Payment Service APIs are invoked correctly, I’ve added a constant static transaction ID for verification purposes:

TransactionId = "TestService123"
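
For reference, here is a minimal sketch of what such a controller might look like. The routes and the PaymentRequest schema mirror the swagger output above, but the implementation itself is my assumption, not the article’s actual service:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class PaymentsController : ControllerBase
{
    // Constant transaction ID so we can verify the agent really called this service.
    private const string TestTransactionId = "TestService123";

    /// <summary>Processes a payment request.</summary>
    [HttpPost("process")]
    public IActionResult Process([FromBody] PaymentRequest request) =>
        Ok(new { transactionId = TestTransactionId, status = "Accepted" });

    /// <summary>Retrieves the status of a specific payment.</summary>
    [HttpGet("status")]
    public IActionResult Status(string transactionId) =>
        Ok(new { transactionId, status = transactionId == TestTransactionId ? "Completed" : "Unknown" });
}

// Matches the PaymentRequest schema in the generated spec.
public record PaymentRequest(string TransactionId, double Amount);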

Step 2: Optional Deployment to Azure

While optional, deploying your service to Azure ensures accessibility and scalability, and makes the app seamlessly available for testing. You can follow Microsoft’s official guide on deploying ASP.NET Core apps to Azure App Service for a smooth cloud integration.

Step 3: Configuring a Semantic Kernel Agent

We’ll set up a Semantic Kernel agent, which we’ll refer to as “SalesAgent”. Here’s how to configure it:

  1. Add the required package reference in your project.
  2. Import the paymentProcessor plugin from your OpenAPI specification:

<PackageReference Include="Microsoft.SemanticKernel.Plugins.OpenApi" Version="1.32.0-preview" />

await kernel.ImportPluginFromOpenApiAsync(
    pluginName: "paymentProcessor",
    uri: new Uri("http://payment-backend-f4bjgseugsdhddb4.eastus-01.azurewebsites.net/swagger/v1/swagger.json"),
    executionParameters: new OpenApiFunctionExecutionParameters()
    {
        EnablePayloadNamespacing = true
    }
);

Once your application is launched, the Kernel object should display the paymentProcessor plugin. This confirms the successful integration of the APIs.
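
If you want to confirm the import programmatically (a hypothetical check, not something the original article shows), you can enumerate the kernel’s plugin collection:

// Print every registered plugin and its functions to confirm the OpenAPI import.
foreach (KernelPlugin plugin in kernel.Plugins)
{
    Console.WriteLine($"Plugin: {plugin.Name}");
    foreach (KernelFunction function in plugin)
    {
        Console.WriteLine($"  {function.Name}: {function.Description}");
    }
}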

If you want to learn how to define the kernel, check out my previous blogs.

Step 4: Run the app

Now that everything is set up, deploy your application and begin interacting with the Semantic Kernel agent. The system is designed to handle various tasks seamlessly.

Sample Application Overview

Here’s a quick summary of the sample application I’ve created. It includes two key plugins:

  1. Inventory Plugin: Retrieves the latest inventory details. This plugin resides inside the SalesAgent service.
  2. PaymentProcessor Plugin: Hosted on a separate service; it makes API calls to the PaymentProcessor service to handle transactions.

The constant transaction ID confirms that both of the Payment Service’s APIs were called successfully by our AI agent!

Conclusion: A Powerful Combination for Streamlined AI Integration

The OpenAPI Specification, along with Semantic Kernel and Azure OpenAI, presents a compelling solution for empowering AI agents with real-world capabilities. This combination offers several key benefits:

  • Simplified AI Integration: OpenAPI specs provide a standardized way for AI agents to understand and interact with existing APIs, eliminating the need for complex custom integrations.
  • Enhanced Agent Functionality: By leveraging existing APIs, AI agents can perform a wider range of tasks, such as processing payments or managing inventory. This leads to more versatile and helpful AI experiences.
  • Improved Scalability: As your application grows, you can easily integrate new APIs using OpenAPI, allowing your AI agents to keep pace with evolving functionality.

This blog has provided a step-by-step guide on utilizing OpenAPI specs with Semantic Kernel Agents and Azure OpenAI. By following these steps and embracing this powerful combination, you can empower your AI agents to deliver a more comprehensive and valuable user experience.

The post Empowering AI Agents with Tools via OpenAPI: A Hands-On Guide with Microsoft Semantic Kernel Agents appeared first on Semantic Kernel.


GCast 190: Organizing Your Day Using Microsoft 365 Copilot


GCast 190: Organizing Your Day Using Microsoft 365 Copilot

Use M365 Copilot to reflect on yesterday and plan for tomorrow, based on your emails, meetings, and chats.


Lunar New Year 2025 in Philly: 16 Parades, Events & Dinners


For many people of East and Southeast Asian descent, the calendar doesn’t officially flip until the Lunar New Year celebration commences.

This year, Lunar New Year falls on January 29, 2025, and Philadelphia marks the Year of the Snake with celebratory parades through Chinatown and family-friendly festivals at Philly favorites like the Penn Museum, the Please Touch Museum and Franklin Square.

Traditional lion dances wind their way through the region at area attractions — indoors and out — from Dilworth Park in Center City to State Street in Media.

And foodies can get their fill of delicious dishes and special menus at many Philly restaurants helmed by talented Asian chefs like Kampar in Bella Vista.

Why limit your celebration to one day? Stay the night with the Visit Philly Overnight Package, which includes free hotel parking and perks.

Keep reading for a few ways to usher in a prosperous, happy and healthy Lunar New Year in Philadelphia.
