Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How Many APIs Are Too Many?

I hit a ceiling with my Artisanal APIs.json API profiling, in that I am pushing over a thousand individual APIs. It reminds me of running conferences: once you get over 500 people, everything begins to change. I only have a little over 100 API providers, but when some API providers have 100+ individual APIs, it adds up very quickly. When you are working to properly profile APIs using OpenAPI and the operations around them with APIs.json, then augment with searching, ratings, and other overlays, it becomes a lot to deal with. But it really isn’t just the volume of data, it is about the cognitive load involved with working with so many different types of APIs at once–the context switching begins to kill ya.

To help make things more manageable, I broke my Artisanal APIs.json work into what I am calling search nodes. I started with 12 individual topics, breaking down the top 25 API providers I have in the Artisanal index. It is just a start; I’ll dump more in there now that I have it all set up. But it is already more manageable to have things broken up into nodes. There is something about having things broken down into manageable and more meaningful chunks that changes the rules of the game. The scripts I run to automate things run faster. I feel less overwhelmed when I am working in the middle of the search index. We’ll see how it all plays out with the next couple of rounds of work, but I am feeling like I can more comfortably scale this all in a federated way.

Once I got the APIs.json broken up into 12 separate repositories, I needed a way to search across each of these nodes. To do this I wrote a simple starter search script; I just needed a way to spider the YAML in each repository. Each search node is defined using APIs.json, with individual APIs.json for each API provider, as well as a single central apis.json for each node index.
I just needed what I am calling a network node with a single APIs.json that provides me with an index of the 12 individual topical search nodes, with more coming soon. The search is janky as hell, but it works. I don’t want to overthink it, just get the bare minimum of what I need to prove that I can do this. I’ll harden the search over time. This is just a simple network search. I aim to index all my nodes using a database and provide a richer and faster API-driven search. I have things federated, and I have my automation in place to find and process new APIs that I want to profile. I have a new incubator repository set up where I am taking in any new APIs, but before it can graduate to one of the search nodes it has to have an OpenAPI. This is the line I’ve drawn between API intake and APIs that I...
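A minimal sketch of what such a network search might look like in Python. To be clear, this is not the actual script: the node layout and fields here are assumptions based on the description above, and the real version would fetch and parse the apis.yml files from each node repository (e.g., with PyYAML) rather than use in-memory dicts.

```python
# Hypothetical sketch of a network search across APIs.json search nodes.
# Each node would really be a separate repository of YAML indexes; here the
# parsed indexes are plain dicts to keep the example self-contained.

def search_network(nodes, query):
    """Search every node index for APIs matching the query string."""
    query = query.lower()
    hits = []
    for node_name, index in nodes.items():
        for api in index.get("apis", []):
            # Match against the name, description, and tags of each API.
            haystack = " ".join([
                api.get("name", ""),
                api.get("description", ""),
                " ".join(api.get("tags", [])),
            ]).lower()
            if query in haystack:
                hits.append({"node": node_name, "name": api["name"]})
    return hits

# Illustrative node indexes (names and descriptions are made up).
nodes = {
    "payments": {"apis": [
        {"name": "Stripe Balance API", "description": "Retrieve your Stripe balance.", "tags": ["payments"]},
    ]},
    "messaging": {"apis": [
        {"name": "Twilio SMS API", "description": "Send text messages.", "tags": ["sms"]},
    ]},
}

print(search_network(nodes, "balance"))
```

A database-backed index, as mentioned above, would replace this linear scan, but the spider-then-match shape stays the same.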
Read the whole story
alvinashcraft
6 hours ago
reply
West Grove, PA
Share this story
Delete

APIs.json APIs, Includes, and Network Properties

Since the beginning of the specification, APIs.json has had an “apis” property as well as an “includes” property, providing a way to immediately index your APIs, or “include” a reference to other APIs.json indexes. The APIs.json specification is designed to be flexible in how you can define and organize your collections, but as I spend more time profiling and indexing APIs, I am finding a need to further expand on how APIs.json indexes work together in concert to provide a more federated approach to API discovery.

APIs.json APIs Property

The top-level apis property for APIs.json is where you index your immediate APIs and publish them in the index of your developer portal, where you might have 1-25 APIs indexed as part of your public operation, maybe a few more (I stuffed 360+ in AWS). I am thinking that, depending on the size of your APIs, 25 is going to be my recommended limit. Of course you can index way more, but why? It really depends on who your audience is and what information you are looking to provide.

specificationVersion: '0.17'
aid: stripe
name: Stripe
description: >-
  Millions of companies of all sizes use Stripe online and in person to accept
  payments, send payouts, automate financial processes, and ultimately grow
  revenue.
image: https://kinlane-productions2.s3.amazonaws.com/apis-json/apis-json-logo.jpg
url: https://artisinal.apisjson.org/apis/stripe/apis.yml
created: 2023/10/06
modified: '2024-03-09'
tags: []
apis:
  - aid: stripe:stripe-balance-api
    name: Stripe Balance API
    description: >-
      This is an object representing your Stripe balance. You can retrieve it
      to see the balance currently on your Stripe account. You can also
      retrieve the balance history, which contains a list of transactions that
      contributed to the balance (charges, payouts, and so forth).
    image: https://kinlane-productions2.s3.amazonaws.com/apis-json/apis-json-logo.jpg
    humanURL: https://stripe.com/docs/api/balance
    baseURL: https://api.stripe.com
    tags: []
    properties:
      - type: Documentation
        url: https://stripe.com/docs/api/balance
      - type: OpenAPI
        url: properties/balance-openapi-original.yml
    overlays:
      - type: APIs.io Search
        url: overlays/balance-openapi-search.yml
      - type: API Evangelist Ratings
        url: overlays/balance-openapi-api-evangelist-ratings.yml
common:
  - type: Sign Up
    url: https://dashboard.stripe.com/register
  - type: Authentication
    url: https://stripe.com/docs/api/authentication
  - type: Blog
    url: https://stripe.com/blog
  - type: Status
    url: https://status.stripe.com/
  - type: Change Log
    url: https://stripe.com/docs/upgrades#api-versions
  - type: Terms of Service
    url: https://stripe.com/privacy
  - type: Support
    url: https://support.stripe.com/
overlays:
  - type: APIs.io Search
    url: overlays/apis-io-search.yml
  - type: API Evangelist Ratings
    url: overlays/apis-io-search.yml
maintainers:
  - FN: APIs.json
    email: info@apis.io

I have broken up APIs like AWS, Stripe, and others into many individual APIs indexed using the APIs.json apis property. It is the quickest way to organize them, but once I get them all tagged, I begin to think about other bounded contexts that I could also start breaking them down on. AWS has some pretty clear groupings, but Stripe and others take more domain expertise to break down. I am opting to aggregate many APIs using the apis object before I begin to shard them even further—I will do more of that down the road.

APIs.json Includes Property

The APIs.json includes property is how you can begin to shard and/or stitch together many different types of APIs. For example, a payments API—you can publish an APIs.json with the name of Payments,...
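Based on the includes property described above, a topical node that stitches together providers might look something like this sketch. The Payments name follows the example in the text, and the Stripe URL appears earlier in the post, but the Square entry and its URL are illustrative, not from an actual index:

```yaml
name: Payments
description: A topical search node that stitches together payments API providers.
specificationVersion: '0.17'
includes:
  - name: Stripe
    url: https://artisinal.apisjson.org/apis/stripe/apis.yml
  - name: Square
    url: https://artisinal.apisjson.org/apis/square/apis.yml
maintainers:
  - FN: APIs.json
    email: info@apis.io
```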

Multi-resource metrics query support in the Azure Monitor Query libraries


This past January, the Azure Monitor team announced the stable release of the Azure Monitor Metrics data plane API. This API grants the ability to query metrics for up to 50 Azure resources in a single call, providing a faster and more efficient way to retrieve metrics data for multiple resources.

To allow developers to seamlessly integrate multi-resource metrics queries into their applications, the Azure SDK team is excited to announce support in the latest stable releases of our Azure Monitor Query client libraries.

Sovereign cloud support


At the time of writing, the Azure Monitor Query libraries’ multi-resource metrics query feature is only available to Azure Public Cloud customers. Support for the Azure US Government and Azure China sovereign clouds is planned for later in the calendar year. For more information, see the language-specific GitHub issues:

You can access multi-resource metrics query APIs from the following package versions:

Ecosystem    Minimum package version
.NET         1.3.0
Go           1.0.0
Java         1.3.0
JavaScript   1.2.0
Python       1.3.0

A new home for multi-resource metrics queries

Earlier versions of the Azure Monitor Query client libraries required sending calls to the Azure Resource Manager APIs to query metrics on a per-resource basis. However, with the introduction of the Azure Monitor Metrics data plane API, a new client, MetricsClient, was added to facilitate data plane metrics operations in the .NET, Java, JavaScript, and Python libraries. For Go, instead of introducing a new client in the existing azquery module, we released the new azmetrics module.

Presently, a multi-resource metrics query is the sole supported operation in MetricsClient and azmetrics. However, we anticipate expanding the clients’ functionality to support other metrics-related query operations in the future.

Developer usage

Developers can now provide a list of resource IDs to the new client operation, eliminating the need for individual calls for each resource. However, to use this API, the following criteria must be satisfied:

  1. Each resource must be in the same Azure region, as denoted by the endpoint specified when instantiating the client.
  2. Each resource must belong to the same Azure subscription.
  3. The user must have authorization to read monitoring data at the subscription level, such as the Monitoring Reader role on the subscription to be queried.
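Because every resource in one query must live in the region the endpoint names, the data plane endpoint itself follows a simple regional pattern, sketched here as a hypothetical helper (the URL format matches the samples below; the helper is for illustration only and is not part of the SDK):

```python
# Hypothetical helper showing the regional shape of the Azure Monitor
# Metrics data plane endpoint used in the samples below.
def metrics_endpoint(region: str) -> str:
    return f"https://{region}.metrics.monitor.azure.com"

print(metrics_endpoint("eastus"))  # https://eastus.metrics.monitor.azure.com
```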

The metric namespace that contains the metrics to be queried must also be specified. You can find a list of metric namespaces in this list of supported metrics by resource type.

The following code snippets demonstrate how to query the “Ingress” metric for multiple storage account resources in various languages.

Python

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsClient

resource_ids = [
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount",
    "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount2"
]

credential = DefaultAzureCredential()
endpoint = "https://eastus.metrics.monitor.azure.com"
client = MetricsClient(endpoint, credential)

metrics_query_results = client.query_resources(
    resource_ids=resource_ids,
    metric_namespace="Microsoft.Storage/storageAccounts",
    metric_names=["Ingress"]
)

Check out the Azure SDK for Python repository for more sample usage.

JavaScript/TypeScript

import { DefaultAzureCredential } from "@azure/identity";
import { MetricsClient, MetricsQueryResult } from "@azure/monitor-query";

export async function main() {
    let resourceIds: string[] = [
      "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount",
      "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount2"
    ];
    let metricsNamespace: string = "Microsoft.Storage/storageAccounts";
    let metricNames: string[] = ["Ingress"];
    const endpoint: string = "https://eastus.metrics.monitor.azure.com";

    const credential = new DefaultAzureCredential();
    const metricsClient: MetricsClient = new MetricsClient(
      endpoint,
      credential
    );

    const result: MetricsQueryResult[] = await metricsClient.queryResources(
      resourceIds,
      metricNames,
      metricsNamespace
    );
}

Check out the Azure SDK for JavaScript repository for more sample usage.

Java

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.query.MetricsClient;
import com.azure.monitor.query.MetricsClientBuilder;
import com.azure.monitor.query.models.MetricResult;
import com.azure.monitor.query.models.MetricsQueryResourcesResult;
import com.azure.monitor.query.models.MetricsQueryResult;

import java.util.Arrays;
import java.util.List;

public class MetricsSample {

    public static void main(String[] args) {
        MetricsClient metricsClient = new MetricsClientBuilder()
                    .credential(new DefaultAzureCredentialBuilder().build())
                    .endpoint("https://eastus.metrics.monitor.azure.com")
                    .buildClient();

        List<String> resourceIds = Arrays.asList(
            "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount",
            "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount2"
        );

        MetricsQueryResourcesResult metricsQueryResourcesResult = metricsClient.queryResources(
            resourceIds,
            Arrays.asList("Ingress"),
            "Microsoft.Storage/storageAccounts");

        for (MetricsQueryResult result : metricsQueryResourcesResult.getMetricsQueryResults()) {
            List<MetricResult> metrics = result.getMetrics();
        }
    }
}

Check out the Azure SDK for Java repository for more sample usage.

.NET

using Azure;
using Azure.Core;
using Azure.Identity;
using Azure.Monitor.Query.Models;
using Azure.Monitor.Query;

List<ResourceIdentifier> resourceIds = new List<ResourceIdentifier>
{
    new ResourceIdentifier("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount"),
    new ResourceIdentifier("/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount2")
};
var client = new MetricsClient(
    new Uri("https://eastus.metrics.monitor.azure.com"),
    new DefaultAzureCredential());

Response<MetricsQueryResourcesResult> result = await client.QueryResourcesAsync(
    resourceIds: resourceIds,
    metricNames: new List<string> { "Ingress" },
    metricNamespace: "Microsoft.Storage/storageAccounts").ConfigureAwait(false);

MetricsQueryResourcesResult metricsQueryResults = result.Value;

Check out the Azure SDK for .NET repository for more sample usage.

Go

package main

import (
    "context"

    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    "github.com/Azure/azure-sdk-for-go/sdk/monitor/query/azmetrics"
)

func main() {
    endpoint := "https://eastus.metrics.monitor.azure.com"
    subscriptionID := "00000000-0000-0000-0000-000000000000"
    resourceIDs := []string{
        "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount",
        "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/myStorageAccount2",
    }

    cred, err := azidentity.NewDefaultAzureCredential(nil)
    if err != nil {
        // Handle error
    }
    client, err := azmetrics.NewClient(endpoint, cred, nil)
    if err != nil {
        // Handle error
    }

    res, err := client.QueryResources(
        context.Background(),
        subscriptionID,
        "Microsoft.Storage/storageAccounts",
        []string{"Ingress"},
        azmetrics.ResourceIDList{ResourceIDs: resourceIDs},
        nil,
    )
    if err != nil {
        // Handle error
    }
    _ = res // Process the returned metric values here
}

Check out the Azure SDK for Go repository for more sample usage.

Summary

The introduction of the Azure Monitor Metrics data plane API and this new functionality in our client libraries marks a significant advancement in the way developers can query metrics for Azure resources. This feature simplifies and streamlines the process of querying multiple resources, greatly reducing the number of HTTP requests that need to be processed. As we continue to enhance these features and expand their capabilities, we look forward to seeing the innovative ways developers use them to optimize their applications and workflows.

Any feedback you have on how we can improve the libraries is greatly appreciated. Let’s have those conversations on GitHub at these locations:

The post Multi-resource metrics query support in the Azure Monitor Query libraries appeared first on Azure SDK Blog.


Announcing GA of enhanced patching for SQL Server on Azure VM with Azure Update Manager


We are pleased to announce the GA release of enhanced patching capabilities for SQL Server on Azure VMs using Azure Update Manager. When you register your SQL Server on Azure VM with the SQL IaaS Agent extension, you unlock a number of feature benefits, including patch management at scale with Azure Update Manager.  

 

Overview

Azure Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. By enabling Azure Update Manager, customers will now be able to:    

 

  • Perform one-time updates (or Patch on-demand): Schedule manual updates on demand
  • Update management at scale: patch multiple VMs at the same time
  • Configure schedules: configure robust schedules to patch groups of VMs based on your business needs
  • Periodic Assessments: Automatically check for new updates every 24 hours and identify machines that may be out of compliance

Azure Update Manager supports more update categories, including the ability to automatically install SQL Server Cumulative Updates (CUs), unlike the existing Automated Patching feature, which can only install updates marked Critical or Important.

To get started using Azure Update Manager, go to the SQL virtual machine resource in the Azure portal and choose Updates under Settings.

 

[Screenshot: Updates page for a SQL virtual machine (SQLVM_AUM_Updates.png)]

To allow your SQL VM to receive SQL Server updates, you need to enable Microsoft Update.

[Screenshot: enabling Microsoft Update (EnableMU.png)]

 

Migrate from Automated Patching to Azure Update Manager 

If you are currently using the Automated Patching feature offered by the SQL Server IaaS agent extension and want to migrate to Azure Update Manager, you can do so by using the MigrateSQLVMPatchingSchedule PowerShell module to perform the following steps:

 

  • Disable Automated Patching 
  • Enable Microsoft Update on the virtual machine 
  • Create a new maintenance configuration in Azure Update Manager with a similar schedule to Automated Patching 
  • Assign the virtual machine to the maintenance configuration  

To migrate to Azure Update Manager by using PowerShell, use the following sample script:  

 

$rgname = 'YourResourceGroup'
$vmname = 'YourVM'

# Install latest migration module
Install-Module -Name MigrateSQLVMPatchingSchedule-Module -Force -AllowClobber

# Import the module
Import-Module MigrateSQLVMPatchingSchedule-Module

Convert-SQLVMPatchingSchedule -ResourceGroupName $rgname -VmName $vmname

 

 

The output of the script includes details about the old schedule in Automated Patching and details about the new schedule in Azure Update Manager. For example, if the Automated Patching schedule was every Friday, with a start hour of 2am, and a duration of 150 minutes, the output from the script is: 

 

[Screenshot: migration script output (migration-output-powershell.png)]

 

Additional Considerations 

If you are currently using the SQL IaaS extension to patch, be aware of potentially conflicting schedules, or consider disabling Automated Patching and migrating to Azure Update Manager to take advantage of its more robust features.

 

At this point, patching SQL Server on Azure VMs through Azure Update Manager or Automated Patching via the SQL IaaS extension is not aware of whether the SQL Server instance is part of an Always On availability group. It is important to keep this in mind when scheduling your updates with an automated process.

 

You can always go back to Automated Patching by selecting Leave new experience from the new Updates page.   

  

Learn More   

 


Glenn Condron: .NET Web Development - Episode 293


Glenn is a Principal Product Manager for the App Platform team within the Developer Division at Microsoft, focusing on .NET. Before joining Microsoft, Glenn was a developer in Australia where he worked on software for various government departments.

 

Topics of Discussion:

[2:47] Glenn’s career path.

[6:33] The old .NET vs the new .NET.

[8:09] .NET was initially Windows-only but is now being rebuilt as open-source, cross-platform software.

[9:40] The evolution of .NET.

[9:53] .NET core.

[14:04] New features and ideas presented at .NET Conf.

[16:26] Aspire.

[18:58] Every piece of an Aspire solution uses OpenTelemetry as a standard.

[19:26] Redis. 

[27:15] Aspire knows all the “what” and “how” of deploying to the cloud, without explicit cloud knowledge.

[32:36] The intent of AZD.

[36:57] Handling the components of Aspire.

[40:21] How to add custom resources to Aspire.

[41:00] Opinionated vs non-opinionated development in the .NET ecosystem.

 

Mentioned in this Episode:

Clear Measure Way

Architect Forum

Software Engineer Forum

Programming with Palermo — New Video Podcast! Email us at programming@palermo.net.

Clear Measure, Inc. (Sponsor)

.NET DevOps for Azure: A Developer’s Guide to DevOps Architecture the Right Way, by Jeffrey Palermo — Available on Amazon!

Jeffrey Palermo’s Twitter — Follow to stay informed about future events!

Glenn Condron on New Capabilities on .NET - Ep 58

Glenn C GitHub

DevBlogs Glenn C

Building Cloud Native Apps with .NET 8

Introducing .NET Aspire

 

Want to Learn More?

Visit AzureDevOps.Show for show notes and additional episodes.





Download audio: https://traffic.libsyn.com/secure/azuredevops/ADP_293-00-05-48.mp3?dest-id=768873

Visual Studio Code Day 2024


Read the full article
