Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

We Standardized the API. We Didn’t Standardize the Application.

1 Share

During my Thursday office hours this week I explored adding applications and obtaining keys for six developer portals back to back: Notion, Slack, LinkedIn, GitHub, Cloudflare, and Google. Here was my plan — create an application against each one, the same way any developer would when wiring a new integration. I had Claude open in a side panel as a co-tracker, capturing every URL, every form field, every required input, every gating dialog. Three hours later I had a map of what it actually takes to onboard six common APIs (out of the hundreds we use).

Seeing the Diff Between Providers

Every provider exposes the same conceptual thing: a configurable “application” that holds the credentials, scopes, and metadata required to consume their API. None of the providers call it the same thing. None of them shape it the same way. None of them gate it on the same conditions. None of them export it in the same format.

Notion calls it an “internal connection.” It has a single Configuration page with capability checkboxes (read content, update content, insert content, read comments, insert comments, user info), a Content Access tab where you select individual pages and teamspaces, and a single installation access token. There is no client ID, no client secret, no verification gate. For internal use it’s the cleanest shape of the six providers I dug into.

Slack calls it an “App.” Slack has the largest surface area of any provider in this research, with seventeen distinct configuration pages spread across two host domains, three sidebar groups (Settings, Features, Submit to Marketplace), and two scope namespaces (bot token vs. user token). It has a Socket Mode option that routes events over WebSockets instead of public HTTP. It has a Block Kit framework for app home tabs. It has a JSON or YAML manifest that exports the entire app config, and that manifest is the most agent-friendly artifact across all six providers. No one else has anything like it.
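To make the contrast concrete, here is a minimal sketch of what a Slack app manifest looks like. The top-level field names follow Slack's published manifest schema; the app name, scopes, and settings values are hypothetical, chosen just to show the shape:

```yaml
display_information:
  name: Example Notes Sync          # hypothetical app name
features:
  bot_user:
    display_name: notes-sync-bot
oauth_config:
  scopes:
    bot:                            # bot-token scope namespace (user scopes live under "user")
      - channels:read
      - chat:write
settings:
  socket_mode_enabled: true         # route events over WebSockets instead of public HTTP
  token_rotation_enabled: false
```

Because the whole configuration lives in one portable document, an agent can diff it, version it, and re-apply it without ever touching the seventeen web pages it summarizes.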

LinkedIn calls it an “App,” gated behind a “company-page verification” flow you complete by sending a magic URL to a Page Admin who has thirty days to act. Your scopes are not free-form — they are bundled inside “products” (Default tier, Standard tier, Development tier) and only the Default tier products are available without verification. The default access token TTL is sixty days. If you accidentally type a personal profile URL into the LinkedIn Page field instead of a Company Page URL, the form lets you submit and then warns you it cannot be undone.

GitHub gives you two completely separate paths. Personal Access Tokens (fine-grained or classic) are the path of least resistance for any application: name, expiration, repos, permission matrix. OAuth Apps are the path for multi-user or distributed scenarios: client ID, client secret, callback URL, optional Device Flow toggle. There is no verification gate on either. There is also no auto-cleanup; my own account had expired tokens piling up and one with no expiration date at all, which is its own problem (for me).

Cloudflare doesn’t have an “App” abstraction at all. It has profile-scoped API tokens. The entire authorization surface is a single flat list under your user profile, with permissions × resources × IP filtering × TTL configurable per token. Cloudflare is the only provider in this set that exposes TTL and IP allowlist as first-class fields on token creation. It is also the only one that ships a curated set of token templates for pre-built “common shapes” that bypass the robust yet complex permission matrix.

Google has eight surfaces: Credentials, OAuth Overview, Branding, Audience, Clients, Data Access (scopes), Verification Center, and the APIs Dashboard, split across two product groups, APIs & Services and the Google Auth Platform. You can have API Keys, OAuth 2.0 Client IDs, and Service Accounts coexisting on the same project. Verification has two independent axes — branding verification and data-access verification — surfaced as separate cards. There is a hard, irreversible 100-user cap for unapproved sensitive scopes that applies over the entire lifetime of the project. Google’s setup makes my head hurt.

Six providers. Six different vocabularies for the same essential part of using an API.

Why No Standard?

OpenAPI exists. We have a shared way to describe what an API does. We do not have a shared way to describe what it takes to use one.

Part of the answer is incentives. The application-creation flow is the developer’s first prolonged exposure to a provider’s brand and product. It is where Slack shows you Block Kit, where LinkedIn shows you their product tiers, where Google shows you the consent screen they want you to brand. None of them are motivated to commodify that surface. It is also where the provider’s monetization model first becomes visible — what’s gated, what’s free, what triggers a verification queue, what requires an account upgrade. That is product surface, and product surfaces resist standardization, especially across competitors.

Part of it is history. Each of these portals accreted over a decade or more, layer by layer. Slack’s seventeen pages did not appear at once; they grew as the platform shipped App Home, Socket Mode, Workflow Steps, Org-Level Apps, MCP support. Google’s eight surfaces are the geological accumulation of OAuth 2.0 evolution, the GDPR-driven verification regime, the post-2020 Audience publishing model, and the per-API enablement architecture. There was never a moment when all of this could have been frozen into a clean standard, because none of it was finished then, and none of it is finished today.

Manageable with 10 APIs

Most of us internalized this divergence years ago without noticing. You learned the GitHub OAuth flow once and re-used it for ten years. You learned where Google hides the consent screen and you came back to it twice a year. You learned that LinkedIn product tiers existed because you had to once, for one integration, and then you forgot. The cost was spread thinly enough across a long enough time horizon that it never registered as a tax we were all paying.

Then the number of APIs we touch in a given quarter went from ten to fifty, and the cost stopped being bearable. I have eight Google API Keys on my personal account, three of which are flagged as unrestricted, two of which date back to 2017 and have no business still being active. I have six Cloudflare API tokens, two of which were last used in 2019 and 2021 and somehow still have permissions. I have an OAuth client on my Google project that hasn’t been used in five months and is scheduled for automatic deletion. None of these portals talked to each other when I created the credentials, none of them talk to each other now, and none of them will talk to each other when the credentials expire or get revoked.
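The kind of audit I did by hand would be trivial if every portal exported credentials in a common shape. Instead, you end up writing something like this per provider. The record layout below is hypothetical — no portal actually gives you this shape today — but the logic is exactly what I was doing manually:

```python
from datetime import date

# Hypothetical per-provider credential export; field names are invented
# for illustration, not taken from any real portal API.
tokens = [
    {"name": "maps-key-2017", "created": date(2017, 3, 1), "last_used": None, "restricted": False},
    {"name": "deploy-token", "created": date(2023, 6, 1), "last_used": date(2025, 6, 1), "restricted": True},
    {"name": "legacy-cf", "created": date(2019, 2, 1), "last_used": date(2019, 8, 9), "restricted": True},
]

def stale(tok, today=date(2026, 1, 1), max_idle_days=365):
    """Flag tokens that are unrestricted, or idle for more than a year."""
    last = tok["last_used"] or tok["created"]
    return (not tok["restricted"]) or (today - last).days > max_idle_days

flagged = [t["name"] for t in tokens if stale(t)]
# flagged -> ["maps-key-2017", "legacy-cf"]
```

Six providers means six different versions of the export step before a loop like this can even run — and that export step is the part that doesn’t exist.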

That was the state of the world before agents. This won’t scale, y’all.

Agentic Bottleneck

When you point an agent or a copilot at the tools you actually use to do your work — Notion for notes, Slack for communication, GitHub for code, Google for everything else — you are asking it to navigate exactly this divergence. The agent does not have ten years of muscle memory for where Google hides the OAuth client editor. It does not remember that LinkedIn requires Page-Admin verification before the Products tab unlocks. It does not know that Slack’s Token Rotation requires Redirect URLs to be configured first.

You can paper over this with bespoke per-provider integrations — every major agent harness today does — but that is the same trap we hit with the ten-to-fifty-API transition, just one layer up. Every new provider is bespoke. Every change to a provider’s portal silently breaks the integration. The cost compounds with the number of providers, not the number of agents. This means it will get much worse, on a much faster timeline, as the agent ecosystem expands.

What we actually need is some sort of manifest. Slack already has one — a JSON or YAML document that captures the full application configuration in a portable, inspectable format. Of the six providers I walked through, Slack’s manifest is the only artifact that an agent could read, modify, and re-apply without scraping a web UI. The other five require either browser automation or per-provider SDKs that wrap the underlying portal APIs (when those APIs exist at all, which is not always).

A cross-provider application manifest would not need to be that big. It would just need to capture the recurring things: the credential type (PAT, OAuth app, internal connection, service account), the scope vocabulary (with provider-specific mappings), the resource-selection model (repos, pages, workspaces, accounts, projects), the verification state, and the TTL and rotation policy. It would not need to flatten every provider into one shape. It would need to be lossless enough that an agent could understand what an application is, what it can do, and what it would take to provision it against any new provider.

I have felt this pain for years. So long that I am almost numb to it. But every time I hear someone say how easy it will be for agents to do all this work for us, I remember just how hard it is to do across all of the APIs I depend on as a small business. It ain’t easy. I know we have OAuth automation moving in to support some of this, but I think we are going to need something like SLA4OAS in play, helping us with the application, SLA, pricing, and other layers too. It just feels like we are not tending to the economics of this at scale, just perpetually sweeping things under the rug.



Read the whole story
alvinashcraft
33 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Fast Focus: GitHub Models—An AI-infused Developer Experience | Visual Studio Live! Las Vegas 2026

From: VisualStudio
Duration: 21:20
Views: 67

Exploring AI models shouldn’t require complex setup or expensive infrastructure. In this Visual Studio Live! Las Vegas 2026 session, Brian Randell introduces GitHub Models, a free, cloud-based tool that lets developers experiment with different AI models side-by-side.

See how to evaluate models, compare outputs, and integrate them into your development workflow so you can make smarter decisions about performance, cost, and real-world use cases.

🔑 What You’ll Learn
• What GitHub Models is and how it fits into modern AI development
• How to explore and compare models from OpenAI, Meta, Mistral, and more
• How to test prompts and evaluate model performance side-by-side
• Key differences between model types, capabilities, and use cases
• How tokens, context windows, and parameters impact results
• How to generate code snippets to integrate models into your apps
• When to use hosted models vs. open source or self-hosted options
• How to think about cost, performance, and scalability when choosing models

⏱️ Chapters
00:28 What GitHub Models is and why it matters
02:11 Accessing GitHub Models and navigating the marketplace
03:52 Exploring the model catalog and capabilities
08:15 Using the playground to test prompts and responses
10:34 Comparing models side-by-side
13:54 Configuring models and calling them from your code
15:24 Understanding model size, parameters, and performance tradeoffs
18:02 Using models inside GitHub repositories
19:28 Final thoughts: choosing the right model for your needs

👤 Speaker
Brian Randell (@brianrandell)
Partner, MCW Technologies | VSLive! Conference Co-Chair

🔗 Links
• Download Visual Studio 2026: http://visualstudio.com/download
• Explore more VS Live! Las Vegas sessions: https://aka.ms/VSLiveLV26
• Join upcoming VS Live! events: https://aka.ms/VSLiveEvents

#github #ai #copilot #visualstudio #vslive


Busy .NET Developer's Guide to Python | Visual Studio Live! Las Vegas 2026

From: VisualStudio
Duration: 1:12:58
Views: 67

Curious about Python but coming from a .NET background? In this Visual Studio Live! Las Vegas 2026 session, Ted Neward walks through what .NET developers need to know to get started with Python.

From setup and environments to core language concepts, you’ll see how Python compares to C# and Java, where it fits best, and how to start building real applications quickly.

🔑 What You’ll Learn
• Where Python fits in modern development, including AI and automation
• How Python compares to C#, Java, and other object-oriented languages
• Setting up Python environments using common tools and installers
• Managing dependencies with pip and virtual environments
• Core language concepts like dynamic typing and simple syntax
• Working with common data structures such as lists, dictionaries, and sets
• How Python handles flow control, functions, and error handling
• Key conventions and philosophy behind Python

⏱️ Chapters
01:26 What Python is used for (AI, scripting, automation)
06:04 What Python is and its core philosophy
10:26 Installing Python and choosing a distribution
16:53 Running Python and using the REPL
24:29 Managing packages with pip and virtual environments
34:11 Python fundamentals: types, variables, and syntax
44:18 Core Python data types: strings, lists, and dictionaries
52:16 Flow control and pattern matching
58:46 Functions, type hints, and common patterns
1:10:45 Classes, globals, and Python scope behavior

👤 Speaker
Ted Neward
Principal, Neward & Associates

🔗 Links
• Download Visual Studio 2026: http://visualstudio.com/download
• Explore more VS Live! Las Vegas sessions: https://aka.ms/VSLiveLV26
• Join upcoming VS Live! events: https://aka.ms/VSLiveEvents

#python #dotnet #visualstudio #vslive


618. The Story of Fallout (with Chris Avellone)

1 Share

Chris Avellone, writer on Fallout 2 and Fallout: New Vegas, joins us to discuss the role of storytelling in the Fallout series of post-apocalyptic video games. Ad-free episodes are available to our paid supporters over at patreon.com/geeks.

Learn more about your ad choices. Visit megaphone.fm/adchoices





Download audio: https://www.podtrac.com/pts/redirect.mp3/pdst.fm/e/mgln.ai/e/495/tracking.swap.fm/track/bwUd3PHC9DH3VTlBXDTt/pscrb.fm/rss/p/traffic.megaphone.fm/SBP4865583508.mp3

Microsoft ODBC Driver 17.11.1 for SQL Server Released


We are pleased to announce the general availability of Microsoft ODBC Driver 17.11.1 for SQL Server, released on April 30, 2025. This servicing update delivers important bug fixes and expands Linux platform support.

Key Highlights

  • Stability and correctness fixes for parameter array processing, including accurate updates to SQL_ATTR_PARAMS_PROCESSED_PTR and improved row counting when SQL_PARAM_IGNORE is used in parameter arrays.
  • Fixed a connection error that could occur when processing Data Classification metadata in ODBC asynchronous mode.
  • Updated RPM packaging rules to allow installation of multiple driver versions side by side.
  • Corrected XA recovery to ensure proper computation of transaction IDs and recovery of missing transactions.
  • Debian package installation now honors license acceptance for successful completion.

New Platform Support

  • macOS: 14, 15, 26
  • Debian: 13
  • Red Hat Enterprise Linux: 10
  • Oracle Linux: 9, 10
  • SUSE Linux Enterprise Server: 16
  • Ubuntu: 24.04, 25.10
  • Alpine Linux: 3.21, 3.22, 3.23

Download

The driver is available for download from the Microsoft ODBC Driver for SQL Server documentation page.

Linux Installation

Install or update using your distribution's package manager:

Debian/Ubuntu:

sudo apt-get update
sudo apt-get install msodbcsql17

Red Hat/Oracle Linux:

sudo yum install msodbcsql17

SUSE:

sudo zypper install msodbcsql17

Alpine:

sudo apk add msodbcsql17

Feedback

We welcome your feedback. Please report issues on the SQL Server feedback site or open an issue on the ODBC Driver GitHub repository.


What's new in Swift: April 2026 Edition

1 Share

Welcome to “What’s new in Swift,” a curated digest of releases, videos, and discussions in the Swift project and community.

The 1.0 release of valkey-swift was recently announced on the Valkey blog. We’ve invited one of the authors to be this month’s guest contributor:

Hi, I’m Adam Fowler, an open source developer working in the Swift on server ecosystem. I am excited to announce the 1.0 release of valkey-swift - a production-grade Swift client for Valkey.

Valkey is a high-performance datastore commonly used as a caching layer or message broker in server applications. It is an open source fork of Redis.

Valkey-swift is a client library targeted at Valkey servers but is equally capable of working with Redis. It is built from the ground up with Swift 6 and structured concurrency. Every Valkey command returns typed responses checked at compile time, and strict concurrency checking is enabled throughout so that data races are caught by the compiler, not in production. Connections and subscriptions are all scoped through structured concurrency, so resources clean up automatically.

The client covers every standard Valkey command, auto-generated from Valkey’s own command specifications to stay in sync as the server evolves.

Previously, the de facto client library for Redis was RediStack, which was built on top of pre-concurrency concepts. Retrofitting structured concurrency would have been awkward, and some of the new features in valkey-swift would have been infeasible. Around the same time Redis changed its licensing structure and the open source fork Valkey was created. So it felt like a good time to make a clean break and build a new library.

If you’re building server-side Swift and need a fast key-value store, add valkey-swift via Swift Package Manager, and you’re ready to go. If you are using RediStack to connect with a Redis server, we have a guide to help you migrate to valkey-swift. Complete documentation is available, and contributions are welcome on GitHub.

Now on to other news about Swift:

Videos to watch

New package releases

  • IndustrialKit is a framework for designing, programming, and controlling robotic systems that was recently discussed on the Swift forums.
  • swift-tar is a pure Swift library for reading, writing, and extracting TAR archives. swift-tar is cross-platform, works without requiring any system framework or Foundation, and supports GNU & PAX extensions.
  • Xylem is a pure Swift XML parser with zero dependencies, covering SAX, DOM, and XPath 1.0.

Swift Evolution

The Swift project adds new language features through the Swift Evolution process. These are some of the proposals currently under review or recently accepted for a future Swift release.

Under active review:

  • SE-0529 Add FilePath to the Standard Library - FilePath in the swift-system package parses platform-specific path syntax on the developer’s behalf, provides a normalized view of path components, and enables filesystem resolution. However, shipping in an external package means the standard library, Swift runtime, and toolchain libraries such as Foundation cannot depend on it. This proposal adds FilePath and its associated types to the Swift module, alongside essential functionality for construction, decomposition, resolution, and C interoperability.

Recently accepted:

  • Vision for Networking - Swift’s networking ecosystem is getting an overhaul. This vision document proposes three initial areas of focus: evolve HTTP APIs including new HTTP client and server implementations, define currency types to reduce duplicated effort and integration friction, and define a unified networking stack.
  • SE-0517 UniqueBox - Sometimes in Swift it’s necessary to manually put something on the heap that wouldn’t otherwise live there. This proposal introduces a new type in the standard library, UniqueBox, which is a smart pointer type that uniquely owns a value on the heap.
  • ST-0022 Custom reflection during testing - When a test fails, Swift Testing reflects the values involved to help diagnose the failure, but types have no way to customize what appears in that output. This proposal adds a customization point, CustomTestReflectable, for developers to specify exactly what should be included in test output, whether they want to simplify, obscure, extend, or reformat that information.

One more thing

Have you recently looked at the Swift.org community page? It includes updated content, plus a new How we work page that describes opportunities to get involved.
