Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

OpenAI, Google, and Microsoft Back Bill To Fund 'AI Literacy' In Schools

An anonymous reader quotes a report from 404 Media: A new, bipartisan bill introduced (PDF) by Democratic Senator of California Adam Schiff and endorsed by the biggest AI developers in the world -- including OpenAI, Google, and Microsoft -- would change the K-12 curriculum to shoehorn in "AI literacy," something that young people and teachers alike already hate in schools. The Literacy in Future Technologies Artificial Intelligence, or LIFT AI Act, would empower the new director of the National Science Foundation (NSF) to make grant awards "on a merit-reviewed, competitive basis to institutions of higher education or nonprofit organizations (or a consortium thereof) to support research activities to develop educational curricula, instructional material, teacher professional development, and evaluation methods for AI literacy at the K-12 level," the bill says. It defines AI literacy as using AI; specifically, "having the age-appropriate knowledge and ability to use artificial intelligence effectively, to critically interpret outputs, to solve problems in an AI-enabled world, and to mitigate potential risks." The bill is endorsed by the American Federation of Teachers, Google, OpenAI, Information Technology Industry Council, Software & Information Industry Association, Microsoft, and HP Inc. [...] The grant would support "AI literacy evaluation tools and resources for educators assessing proficiency in AI literacy," according to the bill. It would also fund "professional development courses and experiences in AI literacy," and the development of "hands-on learning tools to assist in developing and improving AI literacy." Most importantly for real-world implications, it would fund changing the existing curriculum "to incorporate AI literacy where appropriate, including responsible use of AI in learning."

Read more of this story at Slashdot.


Architecture to Resilience: A Decision Guide


Start with the framework, accelerate with the tool

Watch the video walkthrough

The Application Resilience Framework originated from a practical gap we saw in resilience reviews: teams had architecture diagrams, monitoring data, incident history, and runbooks, but no consistent way to connect them into a measurable resilience model.

The framework is intended to close that gap by turning architecture context into a structured lifecycle for risk identification, mitigation validation, health modeling, and governance. It aligns closely with the Reliability pillar of the Azure Well-Architected Framework, especially the guidance around identifying critical flows, performing Failure Mode Analysis, defining reliability targets, and building health models.

Application Resilience Framework flow from artifact import to measurable operational resilience.

The Application Resilience Framework Tool helps teams apply this framework faster by starting with artifacts they already have, such as data flow diagrams or sequence diagrams in Mermaid or image format. The tool extracts workflows, application components, platform components, dependencies, and initial failure modes, then guides the team through the decisions needed to make resilience measurable.

From those artifacts, the tool creates the first version of the resilience model, then walks the team through one import step followed by four phases:

Import Artifacts -> Phase 1: Failure Mode Analysis -> Phase 2: Mitigation and Validation -> Phase 3: Health Model Mapping -> Phase 4: Operations and Governance

It is not a replacement for WAF guidance or Resilience Hub style assessments. It is a practical way to operationalize those concepts at the workload and workflow level, producing prioritized risks, mitigation plans, validation paths, health signals, dashboards, reports, and governance ownership.

How to use this guide

This guide follows the same flow as the tool. For each step, it covers:

  1. The decision: What needs to be decided?
  2. The options: What paths are available?
  3. The guidance: When each option fits

Use this with the video walkthrough. The video shows the tool in action. This guide explains the choices behind each step.

Question 1: What artifact should you import first?

The import step creates the starting point for the model. Regardless of the input path, the output is the same: workflows that move into Phase 1: Failure Mode Analysis.

Options

  • Data flow diagram. Best for: system, module, data movement, and dependency views. What happens: if imported as an image, the tool breaks it into sequence-style flows; selected flows become workflows.
  • Sequence diagram. Best for: transaction flow and service interaction views. What happens: converted directly into workflows.
  • Mermaid input. Best for: diagrams maintained as code in Mermaid format. What happens: converted directly into workflows.
  • Image input. Best for: JPG or PNG diagrams. What happens: Azure Foundry Vision models interpret the image and convert it into workflows.
  • Manual entry. Best for: missing or incomplete diagrams. What happens: the user creates or corrects workflows manually.

When to pick which

Use data flow for system and dependency views. Use sequence diagrams for transaction or interaction views. Regardless of import path, the output is the same: workflows, components, dependencies, and initial failure modes ready for Phase 1.

Question 2: Which workflows should be analyzed first?

Phase 1 is Failure Mode Analysis. This is where the tool identifies what can fail and how important each failure is.

Options

  • Critical user flows: Login, checkout, payment, onboarding, request processing.
  • High-risk platform flows: Database writes, queue processing, storage access, identity, messaging, external APIs.
  • Known issue areas: Workflows with recent incidents, recurring alerts, or customer impact.

When to pick which

Start where failure creates the highest customer or business impact. The goal is not to model everything at once. The goal is to model the right thing first.

Deliverables

  • Failure Mode Analysis catalog
  • RPV risk scores
  • Criticality classification

Question 3: How should failure modes be prioritized?

After workflows and components are imported, the tool helps score each failure mode using a Risk Priority Value (RPV), which combines four factors: impact, likelihood, detectability, and outage severity.
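
To make the scoring concrete, here is a minimal sketch of an RPV calculation. The guide does not spell out the tool's exact formula, so the multiplicative scoring below (analogous to a classic FMEA risk priority number), the 1-5 scales, and the example failure modes are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    impact: int           # 1 (negligible) .. 5 (severe customer or business impact)
    likelihood: int       # 1 (rare) .. 5 (frequent)
    detectability: int    # 1 (caught immediately) .. 5 (likely to go unnoticed)
    outage_severity: int  # 1 (minor degradation) .. 5 (full outage)

def rpv(fm: FailureMode) -> int:
    """Assumed multiplicative Risk Priority Value, similar to an FMEA RPN."""
    return fm.impact * fm.likelihood * fm.detectability * fm.outage_severity

failure_modes = [
    FailureMode("Checkout DB write timeout", impact=5, likelihood=3, detectability=2, outage_severity=4),
    FailureMode("Stale cache on product page", impact=2, likelihood=4, detectability=3, outage_severity=1),
]

# Highest score first: these are the failure modes to mitigate and validate first.
for fm in sorted(failure_modes, key=rpv, reverse=True):
    print(f"{fm.name}: RPV={rpv(fm)}")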

Options

  • Use generated failure modes and scores: Best for a fast first pass.
  • Tune the RPV scores with engineering input: Best when workload context matters.
  • Add custom failure modes: Best when known risks come from incidents, reviews, or customer experience.

When to pick which

Use the generated model to accelerate the first pass, then adjust it with real system knowledge. The goal is not to create the longest list of risks. The goal is to identify the risks that deserve attention first.

Deliverables

  • Failure Mode Catalog
  • RPV Risk Scores
  • Prioritized criticality list

Question 4: Are mitigations defined or validated?

Phase 2 is Mitigation and Validation. This is where each failure mode gets a response plan.

Options

  • Detection only: The team can detect the failure, but the response is not defined.
  • Defined mitigation: The response is documented, such as retry, fallback, failover, scaling, restore, or rebalance.
  • Validated mitigation: The response has been tested through a controlled validation or chaos test.

When to pick which

For low-risk items, documented mitigation may be enough. For critical and high-risk items, validation is the key. A mitigation that has not been tested is still an assumption.

Deliverables

  • Mitigation playbooks
  • Chaos test plans
  • Support playbooks

Question 5: Which risks need health signals?

Phase 3 is Health Model Mapping. This is where the tool connects risks to observability.

A failure mode should not just sit in a document. It should map to a signal that can show whether the system is healthy, degraded, or unhealthy.

Options

  • Map all failure modes: Best for small systems or highly critical workloads.
  • Map critical and high-risk failure modes first: Best for large systems.
  • Track unmapped risks as gaps: Best when observability coverage is still improving.

When to pick which

Start with the highest RPV items. Every critical failure mode should have at least one signal, such as a metric, log, alert, availability check, or dependency signal.
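
As an illustration of what "every critical failure mode should have at least one signal" looks like in practice, here is a small coverage-check sketch. The structures and signal names are hypothetical; the tool produces its own health model and coverage report.

# Hypothetical mapping of failure modes to observability signals.
health_model = {
    "Checkout DB write timeout": ["sql_write_latency_p99", "checkout_error_rate_alert"],
    "Queue backlog growth": ["queue_depth_metric"],
    "Stale cache on product page": [],  # no signal yet -> coverage gap
}

critical = set(health_model)

covered = {fm for fm, signals in health_model.items() if signals}
gaps = critical - covered

print(f"Coverage: {len(covered)}/{len(critical)} critical failure modes mapped")
for fm in sorted(gaps):
    print(f"GAP: {fm} has no health signal")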

Deliverables

  • Health model
  • Signal definitions
  • Coverage report
  • Bicep templates

Question 6: Should the health model be exported or deployed?

Once the health model is built, the next decision is how to use it.

Options

  • Export for review: Best when the team needs to validate the model first.
  • Generate monitoring templates: Best when the team wants repeatable implementation.
  • Deploy to Azure: Best when the model is ready to become part of operations.
  • Use outputs in downstream tools: Best when support, SRE, or incident response workflows need structured playbooks.

When to pick which

Export first if the model is still being reviewed. Deploy when component relationships, signals, and coverage are accurate enough for operational use.

Question 7: How will governance keep the model current?

Phase 4 is Operations and Governance. This is where the resilience model becomes an ongoing practice.

Options

  • One-time assessment: Useful for quick discovery but limited long term.
  • Recurring review: Best for production workloads that change regularly.
  • Closed-loop governance: Best when incidents, failed validations, and monitoring gaps feed back into the model.

When to pick which

For production systems, use a recurring governance cadence. Assign owners, track gaps, review dashboards, and update the model as the system changes.

Deliverables

  • Governance model
  • Dashboards
  • Reports and exports
  • Runbooks

Putting it together: three adoption patterns

Once governance is defined, the tool can be used in different ways depending on the team’s maturity and objective. The three common adoption patterns are:

Pattern A: Quick resilience review

  • Import one critical workflow
  • Generate failure modes
  • Review RPV scores
  • Identify top risks
  • Export findings

Best for fast architecture reviews or early customer conversations.

Pattern B: Full workload assessment

  • Import multiple workflows
  • Build a full Failure Mode Catalog
  • Define mitigations and recovery steps
  • Create chaos test plans
  • Map risks to signals
  • Produce coverage reports

Best for structured resilience assessments.

Pattern C: Operational health model

  • Build and tune the health model
  • Export or deploy monitoring artifacts
  • Track risk and signal coverage
  • Review mitigation effectiveness
  • Assign governance ownership
  • Feed findings back into the model

Best when the goal is continuous operational improvement.

A short checklist before using the tool

  1. Which workflow should we import first?
  2. Do we have a data flow diagram, sequence diagram, or Mermaid file?
  3. What components and dependencies should be included?
  4. Which failure modes matter most?
  5. How should RPV be adjusted for this workload?
  6. Do critical failure modes have mitigations?
  7. Have those mitigations been validated?
  8. Are failure modes mapped to health signals?
  9. What coverage gaps remain?
  10. Should the health model be exported or deployed?
  11. Who owns ongoing review?
  12. How often should the model be updated?

Closing thought

The Application Resilience Framework Tool provides a practical way to move from architecture artifacts to measurable, continuously improving resilience.

It starts with data flow or sequence diagrams, builds a structured view of the system, and guides teams through the decisions that matter: what can fail, how severe it is, how it is mitigated, how it is detected, and how it is governed.

Tool repo: Application Resilience Framework Tool 


AI won’t speed up software delivery — nothing has


In my formative years, I had a Labrador/Whippet cross called Barclay. We did almost everything together. If I were digging a hole in the garden, he would be there snuffling around it. If I were reading a book, he’d lie on my legs until I got pins and needles. When my family went for walks in the forest, he would scout ahead and race back to check on us, running ten times as far as we walked in his excited loops.

One of the best things about dogs is that you can talk to them when times are tough. They are great listeners and understand you when no one else does.

If I were to get another dog, companionship would be my top reason. There are many other reasons to own a dog, but for me? That’s the top one.

Now imagine I went to the rescue center and asked to see their dogs, but they wanted me to take home a cheetah, because it’s much faster than a dog. That’s what I hear when an organization says they are adopting AI with a myopic focus on speed.

Speed Is Not The Goal: Speed Was Never The Goal

We’ve been here before. We should all lament how the Agile movement withered to a dried husk that only offered “speed”. Speed was never the goal. The primary reason to increase your throughput is to get feedback earlier. When you find out your amazing new feature doesn’t excite your users, you can stop working on it right away.

You throw away far less when you can stop a bad idea early, and you get to move on to a better idea straight away. Nobody should be trying to build software with the most features and the fastest rate of change. When you have too many features and things are constantly changing to accommodate the ever-expanding list of stuff your software does, people start to hate what you built.

Microsoft Word is the most powerful and full-featured word processor available today, yet nobody uses it anymore. They are using Google Docs, which has far fewer features. That means the features Google selected must be more compelling, or that fewer features make the software easier to use. In reality, it’s a combination of many small factors like these. Sometimes one really compelling feature eclipses all others, and the ease of collaborating on a Google Doc in a web browser might have been just that.

If you’d asked someone twenty years ago, they would have told you Microsoft Word had an unbreakable hold on its category, but now it has 3.9% of the market, compared to Google Docs’ 9.6% (source: 6sense). If you believe this market shift is solely a question of pricing, you’re probably working for the kind of organization that wants straight-line speed, because you’ve already stopped believing in the idea of creating software that users value.

Adopting AI For Speed Lacks Credibility

Software leaders have a track record, and that should be considered when they announce they are adopting AI for straight-line speed. Very often, you’ll find that over the past decade or two, they have announced an Agile Transformation for straight-line speed, adopted DevOps for straight-line speed, and started a platform initiative for straight-line speed.

The fact that they have burned through all these initiatives without achieving significant results is a strong indication that they don’t want speed as much as they claim. Sure, they want to slap a “DORA Elite Performance” badge on their work history. Still, they don’t have a genuine reason to go faster, because they aren’t interested in that fundamental outcome from shipping more often: feedback.

Any leader who has put their teams through the mangle this many times in the name of speed and who now says AI will be the thing that finally brings it is deluded.

The Feedback Metronome

When you want the feedback more than the speed, you’ll let the feedback loop set the pace of the whole software delivery process. Setting the pace to this beat gives you crucial space to process that feedback and do the one thing Agile wants you to do, which is change direction quickly.

Organizations and teams that use feedback as the metronome, setting the rhythm for the whole orchestra, are likely to seek out and eliminate work that disrupts the beat. They design teams to complete the work with minimal (and well-designed) dependencies. They streamline change approvals. They make sure the team can decide when to push the button to deploy their software and that they can observe what happens when they do.

The DORA model, with its generative culture, transformational leadership, lean product management, and continuous delivery process, wasn’t created by accident. This is the result of decades of work. Teams applying these concepts have speed, but that’s not why they adopt the culture and practices. They want frequent high-quality feedback so they can discover what really needs to be built.

This Is What Team Elite Did

Team Elite was a software team in a large healthcare company. The organization provided software for patient management and emergency triage. When it comes to safety-critical industries, this was software that really could mean life or death.

They shipped their patient management system once every six months, and the testing cycle for their decision support system was two weeks, plus another two weeks if they found a problem. Repeat until a version passes!

Despite this history, we managed to run a program of work for six months that created a deployable software version every three hours. We were following a set of very strong technical practices, but what we removed was likely more important than what we added. The balance came from adding specification by example with executable specifications while removing bureaucratic checking stages that were slower and less effective.

When it comes to outcomes, a crucial deal with a new healthcare provider required a decision management API to integrate into their website. We delivered a working API safely in two weeks and went live on a contract valued at $1.8 million ($2.5 million in today’s money).

If your organization hasn’t reviewed the route to production and made similar changes, nothing will deliver the speed you claim you want. You’ll introduce AI, just as you introduced Scrum, DevOps, and Platform Engineering, and it will make zero difference, just as before.

The most important thing you could do right now is map the flow of value, especially from code commit to production deployment, and start fixing the parts that are broken. There is no secret to what needs to change. Dave Farley and Jez Humble gave away the magic in their book “Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation”.

Why Are You Adopting AI?

Before today, you might have said you were adopting AI to speed things up. If you’re serious about software, I hope you’ll adjust your answer and make it clear to those around you that frequent feedback and decision agility are high on your list of priorities.

Teams that have already solved the throughput/stability trade-off through practices like Continuous Delivery will be less attracted to speed. They are more likely to seek more valuable opportunities. I wanted to close by suggesting a couple of these.

Small teams are better. We have compromised on this because we need things sooner than a very small team can manage. We might double the team size even though we know it won't halve the time it takes. The COCOMO model had a complex calculation for this diminishing return, though Fred Brooks said it more memorably: adding people to a late project makes it later.
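
For readers who have not met COCOMO, a back-of-envelope sketch of the basic organic-mode equations shows the shape of that diminishing return. The coefficients below are the textbook ones; the point is that effort grows much faster than schedule, so extra people buy far less calendar time than intuition suggests.

def cocomo_basic_organic(kloc: float) -> tuple[float, float]:
    """Basic COCOMO, organic mode: effort in person-months, schedule in months."""
    effort = 2.4 * kloc ** 1.05
    schedule = 2.5 * effort ** 0.38
    return effort, schedule

for kloc in (10, 20, 40):
    effort, schedule = cocomo_basic_organic(kloc)
    avg_staff = effort / schedule
    print(f"{kloc} KLOC: {effort:.0f} person-months over {schedule:.1f} months (~{avg_staff:.1f} people)")

# Doubling the size of the work roughly doubles the effort, but the schedule grows
# far more slowly, which is the model's version of "more people, not much more speed".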

As a result, most software teams that understand the complexity of communication and coordination fall into the 6-12 team-member range. The famous two-pizza team. But it’s not an ideal team size; it’s too large. It has simply been a pragmatic way to balance several factors. With AI, we should look at one-pizza teams, and even a smaller pizza.

Small teams with high autonomy working on a loosely coupled component could be the power move to unlock the value of AI-assisted software development.

My final suggestion is that, rather than building the same software faster, AI may let your teams build more ambitious software. You could tackle that globalization work you’ve always lacked the courage to start. You might have a feature idea that has always been impossible to get sufficient clarity on, where an AI prototype would allow an exploration you couldn’t attempt before.

In any case, start with a more aggressive improvement to your software delivery process and deployment pipelines. Make sure you’re joining up the feedback loop and that you use it like a metronome to set the rhythm. Once you’ve done these, you will seek something more imaginative than “speed” from your AI adoption.

The post AI won’t speed up software delivery — nothing has appeared first on The New Stack.


Publishing Plans, Rate Limits, and FinOps for 3,837 API Providers

1 Share

I just finished work to publish three new machine-readable artifacts (API Commons Plans & Pricing, API Commons Rate Limits, and a FinOps Foundation FOCUS-aligned framework) across as many API providers in the API Evangelist network as I could. That ended up being 3,837 of the 5,127 repositories in the API Evangelist GitHub network, and 11,511 new YAML artifacts. It includes what I consider to be a full Tier 1 set of 184 well-known API providers, with the devil-in-the-details specifics of the business side of these APIs.

The Gap I Was Trying to Close

For many years I have been making the case that OpenAPI and AsyncAPI are the leading machine-readable artifacts API providers should be publishing. OpenAPI for synchronous APIs. AsyncAPI for event-driven APIs. JSON Schema for data shapes and validation. JSON-LD for vocabulary. Spectral rulesets for governance. And APIs.json to tie them all together at a known URL.

What I have not been making the case for strongly enough recently, and something AI agents have made urgent, is the financial half of the API contract. The OpenAPI spec tells a consumer how to call the API. It does not tell the consumer what the call costs, what tier they are on, what their throughput allowance is, what the meter rolls up to on the invoice, or what the dimensions are that let them allocate cost back to a team, domain, or line of business. None of that lives in any of the existing machine-readable specs.

For a developer building a one-off integration, the gap is fine. They read the pricing page, sign up, watch their first invoice, and learn. The discovery loop is human-paced.

For an AI agent making decisions at machine speed across hundreds of providers, the gap is load-bearing. The agent can fetch the OpenAPI spec and call the right endpoint with the right parameters and get the right response — and have no idea whether that call cost a hundredth of a cent or fifty dollars. It cannot pick the cheaper of two equivalent providers. It cannot stop itself before exhausting a quota. It cannot tell its caller what the bill is going to look like over time.

The fix is the same thing I have been arguing for a while now: publish the missing data as a machine-readable file, in the provider's repo, indexed by APIs.json, alongside everything else. Three new artifact types — plans, rate limits, finops. This will help close the business gap I am speaking of.

What Got Generated

This work happened in two phases.

Phase one: I generated scaffold artifacts for every qualifying API provider in the API Evangelist network. A provider qualifies if its apis.yml declares at least one API in the apis: array, which matched 3,837 of the 5,127 repos. For each qualifying provider I generated three YAML files in three new directories:

  • plans/{provider}-plans-pricing.yml — three scaffold tiers (Free, Professional, Enterprise) with placeholder limits and prices, conforming to the API Commons Plans schema.
  • rate-limits/{provider}-rate-limits.yml — standard rate-limit envelope with the common X-RateLimit-* headers, 429 response code, per-tier limits across each API in the provider’s catalog, and policy notes for backoff, burst handling, quota reset.
  • finops/{provider}-finops.yml — FOCUS-v1.3-aligned cost framework with the four FinOps Foundation domains (Understand, Quantify, Optimize, Manage), a billing model section, FOCUS column mapping, default meters (api_requests, data_egress, compute_seconds), and unit-economics targets.

That is 11,511 files. They went into the same repos as the other specs. But they carry no reconciled flag, so a downstream consumer should treat them as templates until per-vendor research has validated the values.

Phase two: I went back and reconciled the 184 providers I tagged Tier 1 with researched values from each vendor's actual pricing and rate-limit pages. That set includes:

  • The frontier AI labs (Anthropic, OpenAI, Cohere, Mistral, Perplexity, Replicate, Deepgram, ElevenLabs, Stability AI)
  • The vector databases (Pinecone, Weaviate, Qdrant)
  • The payment platforms (Stripe, Adyen, PayPal, Square, Plaid, Lemon Squeezy, Modern Treasury, Moov)
  • The communications APIs (Twilio, SendGrid, Vonage, Bandwidth, Sinch, MessageBird, Postmark, Mailchimp, Klaviyo)
  • The dev platforms (GitHub, GitLab, Bitbucket, Atlassian, Jira, Confluence, CircleCI)
  • The hosting and edge providers (Vercel, Netlify, Cloudflare, Fastly, Fly.io, Heroku, Render, Railway, Linode, Supabase, Neon)
  • The databases (MongoDB Atlas, CockroachDB, Couchbase, PlanetScale, Redis, Snowflake, Databricks, Elastic)
  • The identity stack (Auth0, Okta, WorkOS, Stytch, SuperTokens)
  • The observability stack (Datadog, New Relic, Sentry, Honeycomb, Lightstep, Bugsnag, Rollbar, Splunk, PagerDuty, Opsgenie, incident.io, Heap, Mixpanel, Amplitude, PostHog)
  • The productivity and collaboration tools (Notion, Linear, Airtable, Asana, ClickUp, monday.com, Figma, Dropbox, Box, Slack, Discord, Zoom, Trello)
  • The marketing and CDP stack (HubSpot, Salesforce, Twilio Segment, Braze, Iterable, Optimizely, Drift, Pandadoc, Constant Contact, Copper)
  • The commerce stack (Shopify, BigCommerce, Adobe Commerce, WooCommerce, VTEX, commercetools)
  • The integration platforms (Zapier, n8n, Make, Workato, Tray.io, Pipedream, Nango, Merge, Unified.to, Paragon)
  • The mega-platforms (AWS, Google Cloud, Microsoft Azure, IBM, Oracle, Cisco, Apple, Meta) — captured at platform-overview level since each has hundreds of services with their own pricing
  • The enterprise providers without public API pricing (AT&T, Verizon, Mastercard, Visa, FedEx, UPS, Walmart, Booking.com, Expedia, etc.) — labeled as partner-contract-only with pointers to the developer portals
  • A handful of niche / test APIs (Beeceptor, the-racing-api, TheCocktailDB, TheMealDB, Chainlens) — labeled as free / donation-funded

Every reconciled artifact now carries a reconciled: true flag and a sources: array citing the URLs the values came from. The plans and rate-limit artifacts reference the API Commons Plans and Rate Limits schemas. The FinOps artifacts cite the FOCUS v1.3 specification at focus.finops.org and the FinOps Foundation framework at finops.org/framework.

What the Artifacts Look Like

Every reconciled provider’s plans-pricing file follows the same shape. Here is Anthropic’s Claude Sonnet 4.6 entry, abbreviated:

specification: API Commons Plans
provider: Anthropic
plans:
  - id: anthropic-claude-sonnet-4-6
    name: Claude Sonnet 4.6
    type: usage-based
    entries:
      - label: Input tokens (<=200K)
        type: metered
        metric: tokens
        unit: 1000000
        price: '3.00'
      - label: Input tokens (>200K)
        type: metered
        metric: tokens
        unit: 1000000
        price: '6.00'
      - label: Output tokens (<=200K)
        type: metered
        metric: tokens
        unit: 1000000
        price: '15.00'
      - label: Cache read
        type: metered
        metric: tokens
        unit: 1000000
        price: '0.30'
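
To show why a machine-readable plans file matters, here is a rough sketch of an agent-side cost estimate driven by the per-million-token entries above. It hard-codes the standard (<=200K context) prices shown rather than parsing the YAML, and the token counts are made up; it is illustrative, not a reference implementation.

# Prices per 1,000,000 tokens, taken from the Anthropic plans-pricing entries above.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00, "cache_read": 0.30}

def estimate_call_cost(input_tokens: int, output_tokens: int, cache_read_tokens: int = 0) -> float:
    """Estimate the USD cost of a single API call before making it."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
        + output_tokens / 1_000_000 * PRICE_PER_MTOK["output"]
        + cache_read_tokens / 1_000_000 * PRICE_PER_MTOK["cache_read"]
    )

# An agent can now budget a task, or compare two providers, before spending anything.
cost = estimate_call_cost(input_tokens=12_000, output_tokens=2_000)
print(f"Estimated cost: ${cost:.4f}")  # roughly $0.066 at the prices above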

Stripe’s payments take rate, with the actual fee structure and per-channel breakdown:

plans:
  - id: stripe-payments-standard
    name: Payments — Standard
    entries:
      - label: Domestic card transaction
        type: metered
        price: 2.9% + $0.30
      - label: International card surcharge
        type: metered
        price: +1.5%
      - label: Currency conversion
        type: metered
        price: +1.0%

Every reconciled rate-limits file documents the actual headers each provider returns. GitHub’s:

headers:
  limit: x-ratelimit-limit
  remaining: x-ratelimit-remaining
  used: x-ratelimit-used
  reset: x-ratelimit-reset
limits:
  - name: Authenticated (Personal Access Token)
    scope: user
    metric: requests_per_hour
    limit: 5000
  - name: GitHub App (Enterprise Cloud)
    scope: installation
    metric: requests_per_hour
    limit: 15000
secondaryLimits:
  - name: Concurrent Requests
    scope: shared REST+GraphQL
    limit: 100
  - name: REST Endpoint Points Per Minute
    scope: user
    limit: 900
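
Here is a small sketch of what a consumer can do once those headers are documented up front. The header names match the rate-limits artifact above; the endpoint, token, and back-off strategy are just illustrative choices.

import time
import requests

def get_with_rate_limit_awareness(url: str, token: str) -> requests.Response:
    """Call the GitHub REST API and pause when the documented rate-limit window is spent."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    remaining = int(resp.headers.get("x-ratelimit-remaining", 1))
    reset_epoch = int(resp.headers.get("x-ratelimit-reset", 0))
    if remaining == 0:
        # Sleep until the window resets instead of burning calls into 429 responses.
        time.sleep(max(0, reset_epoch - time.time()))
    return resp

# Usage (hypothetical token):
# resp = get_with_rate_limit_awareness("https://api.github.com/user/repos", token="ghp_example")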

Every reconciled FinOps file maps the provider’s billing into the FOCUS column set:

specification: FinOps Framework
provider: Stripe
alignedWith:
  framework: FinOps Foundation Framework
  dataSpec: FOCUS
  dataSpecVersion: '1.3'
focusColumns:
  ServiceName: Stripe
  ServiceCategory: Payments
  ProviderName: Stripe
  PublisherName: Stripe, Inc.
  BillingCurrency: USD
meters:
  - name: card_transactions
    unit: transaction
    aggregation: sum
    dimensions: [country, card_brand, card_type, currency]
  - name: transaction_volume
    unit: USD
    aggregation: sum
  - name: radar_screened_transactions
    unit: transaction
    aggregation: sum

The shape is consistent. The dimensions match what each provider actually exports on a billing report. An agent or FinOps platform consuming these can do real cost prediction, real allocation, real chargeback — without scraping HTML pricing pages.
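
As one example of what "real allocation" could look like against these artifacts, here is a sketch that turns usage on the Stripe meters above into FOCUS-shaped rows. The static columns come from the focusColumns mapping shown; the usage numbers are made up, and the meter/quantity keys are simplified placeholders rather than the exact FOCUS column names.

# Static columns taken from the Stripe finops artifact's focusColumns mapping above.
FOCUS_STATIC = {
    "ServiceName": "Stripe",
    "ServiceCategory": "Payments",
    "ProviderName": "Stripe",
    "PublisherName": "Stripe, Inc.",
    "BillingCurrency": "USD",
}

# Hypothetical usage for one billing period, keyed by the meters defined above.
usage = [
    {"meter": "card_transactions", "quantity": 1250, "dimensions": {"country": "US", "card_brand": "visa"}},
    {"meter": "transaction_volume", "quantity": 98500.00, "dimensions": {"currency": "USD"}},
]

# Build simplified FOCUS-style rows a FinOps platform could aggregate or charge back.
rows = [{**FOCUS_STATIC, **u["dimensions"], "meter": u["meter"], "quantity": u["quantity"]} for u in usage]

for row in rows:
    print(row)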

What This Exposed About the State of API Pricing Transparency

Reconciling 184 providers produced one observation strong enough to call out: the gap between “we have an API” and “you can predict your bill” is enormous, even for providers I would have called transparent.

The frontier AI labs (Anthropic, OpenAI, Cohere, Mistral) are the most transparent class of provider in the network. Per-token pricing is fully published. Rate limits are documented tier by tier. Headers are specified. Discount opportunities (caching, batch, committed-use) are in the docs. This is what every API provider's pricing posture should look like in 2026, but sadly most don't come close.

The classic SaaS providers with API access are mid-transparent. Stripe, Twilio, Datadog, and Cloudflare publish per-product pricing. The plans tiers are clear. The rate limits are buried but findable. The FinOps mapping is implicit and constructible, but not provided in a machine-readable format.

The Fortune 500 enterprises with API products almost never publish API pricing. AT&T Network APIs, Verizon ThingSpace, Mastercard, Visa, FedEx, UPS, Walmart Marketplace, Booking.com, Expedia — every single one of these is “contact partner sales for pricing.” The developer portals exist. The OpenAPI specs are sometimes published. The commercial terms are universally not. That whole category got reconciled with the partner-contract-only pattern in this pass; if those companies want to be discoverable by AI agents at any point in the next decade they are going to have to publish more than a developer portal that requires a login by a human.

The mega-platforms (AWS, Google Cloud, Azure, IBM, Oracle) have so many products that per-product reconciliation is a project all by itself. Each of these landed at platform-overview level in this pass — pointers to the vendor pricing calculator, the major service families catalogued, the FinOps principles spelled out, and the FOCUS guidance applied. Per-service AWS pricing is feasible work; it is just not the same project as the rest of this.

What Comes Next

The Tier 1 reconciliation is the floor, not the ceiling. The next steps:

Sample Tier 2. I have not yet validated how often Tier 2 providers actually publish API pricing. My intuition is that maybe a third of the Fortune 500 / strong-API-surface set has reconcilable pricing on a public page; the other two-thirds are partner-contract-only. The right next step is to sample 50-100 Tier 2 providers across categories and find out.

Wire the artifacts into APIs.json. The new files exist in each repo, but the apis.yml index does not yet list them in the properties array of each API entry. They should — it is exactly the same pattern as how OpenAPI, JSON Schema, JSON-LD, and Spectral rules are referenced today. That is a one-script update.

Publish the artifacts at apis.io. The APIs.io developer portal was just rebuilt with nine static feeds. Plans, rate limits, and finops should become the next three feeds — plans.apis.io, rate-limits.apis.io, finops.apis.io — with the same compact JSON shape and CORS-friendly format the existing nine feeds use. An agent doing FinOps on a multi-provider stack should be able to fetch one URL per dimension and get the entire network's data.

Surface them as Naftiko capabilities. The capability runtime that proxies a paid third-party API is the place these artifacts get most useful — gating the call before it incurs cost, picking the cheaper of two equivalent providers, attributing the spend back to a workspace. That is a separate post but the groundwork is now in place.

Get to fix-mode for Tier 1. The 184 reconciled providers are best-effort against a rapidly moving target. Pricing changes; the scripts that produced the artifacts will need to re-run. The reconciliation pattern will evolve into a CI job that re-fetches each Tier 1 vendor's pricing page on some cadence and proposes diffs. This will take time to roll out.

The Pitch

Every API provider should publish a plans/, rate-limits/, and finops/ directory next to their OpenAPI spec. The schemas exist. The publishing convention exists (APIs.json & Naftiko Capabilities). The reconciliation pattern is now demonstrated against 184 well-known providers with cited sources, which can be used to further validate and rate providers. The environment for an AI agent or a FinOps platform to reason about API spend across vendors is in place.

I will have to align the work in this sprint with AP2 (Agent Payments Protocol), x402 Payment, and the Universal Commerce Protocol (UCP) that I was learning about a couple weeks back. There is so much work to make the rubber meet the road here and move from fantasy and specification to something that will actually work at scale, and make sense to API providers. We have a long way to go when it comes to automation as part of the API economy, but the urgency applied by the agentic discussions is looking to light the fire under the ass of API providers. As articulated in my application research the other day, we have some serious behavioral changes needed to level everything up, so the fire needs to be hot.




Only-in-2026: A Guide to Can’t-Miss Live Sporting Events in Philly


Philadelphia and sports go together like steak and cheese — inseparable and always on the menu.

While the year-long Semiquincentennial in Philadelphia centers around Independence Day, the year promises to enter the books as a monumental one in Philly sports history.

On the sports menu this year: FIFA World Cup 26 (and a 39-day free FIFA Fan Festival, expected to be the biggest in the country), the MLB All-Star Game, and the PGA Championship.

2026 also welcomes the return of UFC, women’s pro tennis and the Philadelphia Cycling Classic, now back after a 10-year hiatus. Philly’s favorite fictional boxer turns the big 5-0 this year, with a new exhibit (including the iconic Rocky statue) at the Philadelphia Museum of Art and plenty of celebratory programming.

Philadelphia fans have known joy and heartache, anger and exultation in their 100-plus years of living with professional sports teams. Notably, Philadelphia remains one of just a few cities with a professional franchise in five major league sports.

Read on for all the details of this year’s extra special sporting events.


We the People: 20 Ways Philly Is Highlighting Underrepresented Voices for America’s 250th


Philadelphia — the genesis of democracy as we know it and the bedrock of the ambitious ideals written into our foundational documents — earned its “birthplace of the nation” title.

But that doesn’t mean America — or Philadelphia, for that matter — is perfect.

As the city rolls out star-spangled celebrations commemorating the nation’s milestone 250th anniversary, cultural institutions, galleries, universities and residents are also reflecting on America’s complicated past, present and future.

New collaborations, like the Native North America gallery at the Penn Museum, and bigger-than-ever annual events, like the Juneteenth Parade and Festival in West Philly, amplify stories that have historically been undertold in the nation's story, including those of Indigenous peoples, enslaved Americans, immigrant communities and religious minorities.

And installations across the city — like Radical Americana — pass the mic to activists and artists who push America forward, challenging this big, multicultural nation to realize the very values the Founding Fathers debated, drafted and enshrined right here in Philadelphia.

Read on for a guide to special Semiquincentennial events, exhibitions and experiences in Greater Philadelphia that challenge, enhance and expand America’s story.
