Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The Missing Link Between AI and Business Value

1 Share

Let’s call a spade a spade. Some enterprises are already using AI agents, but very few can explain their impact on business performance.

Metrics such as DORA, SPACE, and developer experience indicators captured through third-party platforms offer insight into delivery velocity and developer quality of life, but it is still difficult to cleanly map this all the way to business impact. 

Unless you work directly on model development, model metrics themselves rarely determine whether AI is creating enterprise value.

The gap between technical performance signals and sustained business outcomes is an obstacle to scaling AI responsibly.

From technical metrics to business value

Abstract benchmarks such as SWE-Bench Pro and Tau2-bench are directionally useful in selecting AI tools, but can be orthogonal to how these tools perform in enterprise systems. An agent that performs well in a controlled environment can fail once integrated into production workflows. What matters is not benchmark scores, but the impact, traceability, and resilience of AI systems under real-world conditions.

Recent data underscores the urgent need to find an accurate way of measuring these variables. Though 88% of employees use AI at work today, just 5% use it “in a transformative way”, according to the EY 2025 Work Reimagined Survey.

Blindly adopting AI is unlikely to be fruitful. Enterprises should instead experiment with and evaluate AI through operational metrics on the systems they are accountable to build and operate. The focus should be on the lifetime cost of maintaining systems, the average time humans spend over baseline, and throughput as a function of Total Cost of Ownership (TCO).

Auditability matters for tracing decisions and meeting governance needs, while human readability ensures teams can understand and manage system behavior now, and later. These are table stakes for technical teams to have in place as they adopt AI at scale.

The ROI problem

Every enterprise wants to link AI to ROI, but the data rarely aligns. The problem is not limited to model telemetry. AI is embedded into enterprise systems and assigned responsibility for specific parts of the SDLC and operational workflows.

Evidence of its impact must therefore span system behavior, human intervention, and downstream business KPIs. These signals live in different systems and move on different timescales, which creates a gap between AI activity and measurable business outcomes. This is why most organizations rely on proxies or assumptions rather than proof.

Closing the gap

The next generation of AI orchestration platforms will need to close this gap by correlating technical performance with operational and financial signals. When those systems mature, ROI will shift from being an abstract target to a measurable outcome grounded in data.

The impact of this gap is already visible in enterprise outcomes. The WRITER 2025 Enterprise AI Adoption Report found that organizations without a formal AI strategy report only 37% success when adopting AI, compared with 80% for those that tie performance to clear operational outcomes.

The data is unambiguous. Only when an organization measures technical and operational signals together does it finally gain a true picture of AI’s value.

Towards continuous benchmarking

What underlies enterprise AI is not static. Data drifts, workflows evolve, and compliance obligations expand. Measurement must therefore become a continuous feedback loop rather than an annual report. 

The same principle should apply across the enterprise: Performance metrics should remain stable, but they must either stay independent of changing conditions or explicitly measure those changes over time.

Measuring what matters

Meaningful AI performance measurement is not about bigger numbers or more dashboards. It is about connecting operational signals with business truth. 

Enterprise leaders must grapple not only with model performance, but also with how intelligently a system scales, how transparently it operates, and how clearly its impact can be proven.

Taking benchmark numbers at face value resembles trusting a car manufacturer’s fuel efficiency without ever driving the car to see if it holds up in real conditions. 

Only when these can be addressed with real data will AI become a truly accountable part of the enterprise stack.

The real question for leaders is simple: Are you measuring the numbers that prove AI is working in practice, or just parroting back the numbers on public benchmarks?

Read the whole story
alvinashcraft
42 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Even GenAI uses Wikipedia as a source

1 Share
Ryan is joined by Philippe Saade, the AI project lead at Wikimedia Deutschland, to dive into the Wikidata Embedding Project and how their team vectorized 30 million of Wikidata’s 119 million entries for semantic search.

Using Azure API Management with Azure Front Door for Global, Multi‑Region Architectures

1 Share

Modern API‑driven applications demand global reach, high availability, and predictable latency. Azure provides two complementary services that help achieve this: Azure API Management (APIM) as the API gateway and Azure Front Door (AFD) as the global entry point and load balancer.

Going over the available documentation, my team and I found this article on how to front a single-region APIM with Azure Front Door, but we wanted to extend this to a multi-region APIM as well. That led us to design the solution detailed in this article, which explains how to configure a multi‑regional, active‑active APIM behind Azure Front Door using Custom origins and regional gateway endpoints.

(I have also covered topics like why organizations commonly pair APIM with Front Door, when to use internal vs. external APIM modes, etc. but main topic first! Scroll down to the bottom for more info).

Configuring Multi‑Regional APIM with Azure Front Door

WHAT TO KNOW: If using APIM Premium with multi‑region gateways, each region exposes its own regional gateway endpoint, formatted as:

https://<service-name>-<region>-01.regional.azure-api.net

Examples:

  • https://mydemo-apim-westeurope-01.regional.azure-api.net
  • https://mydemo-apim-eastus-01.regional.azure-api.net

where 'mydemo-apim' is the name of the APIM instance.

You will use these regional endpoints and configure each one as a separate origin in Azure Front Door, using the Custom origin type.

Solution Architecture

 

Azure Front Door Configuration Steps

1. Create an Origin Group

Inside your Front Door profile, define a group (Settings -> Origin Groups -> Add -> Add an origin) that will contain all APIM regional gateways. See the images below:

2. Add Each APIM Region as a Custom Origin

Use the Custom origin type:

  • Origin type: Custom
    • Host name: Use the APIM regional endpoint
      Example: mydemo-apim-westeurope-01.regional.azure-api.net
  • Origin host header: Same as the host name.
  • Enable certificate subject name validation
    (Recommended when private link or TLS integrity is required.)
  • Priority: Lower value = higher priority (for failover).
  • Weight: Controls how traffic is distributed across equally prioritized origins.
  • Status: Enable origin.

Repeat the same steps for each additional APIM region, assigning priority and weight as you feel appropriate.

How to Know Which Region Is Being Invoked

To test this setup, create 2 Virtual Machines (VMs) in Azure - one for each region. For this guide, we chose to create the VMs in West Europe and East US. 

Open up a Command Prompt from the VM and do a curl on the sample Echo API that comes with every new APIM deployment:

Example: curl -v "afd-blah.b01.azurefd.net/echo/resource?param1=sample"

Your results should show the region being hit as shown below: 

How AFD Routes Traffic Across Multiple APIM Regions

AFD evaluates origins in this order:

  1. Available instances — the Health Probe removes unhealthy origins
  2. Priority — selects highest‑priority available origins
  3. Latency — optionally selects lowest‑latency pool
  4. Weight — round‑robin distribution across selected origins

Example

When origins are configured as below: 

  • West Europe (priority 1, weight 1000)
  • East US (priority 1, weight 500)
  • Central US (priority 2, weight 1000)

AFD will:

  • Use West Europe + East US in a 1000:500 ratio.
  • Only use Central US if both West Europe & East US become unavailable.
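The selection order above can be sketched in a few lines of JavaScript. This is an illustrative simplification of the documented behavior, not Microsoft's implementation; the optional latency step is omitted, and the origin names mirror the example:

```javascript
// Simplified sketch of Azure Front Door origin selection:
// drop unhealthy origins, keep only the best (lowest) priority,
// then distribute traffic across the pool by weight.
function pickOrigin(origins, rand = Math.random()) {
  // 1. Health probe: remove unhealthy origins
  const healthy = origins.filter(o => o.healthy);
  if (healthy.length === 0) return null;

  // 2. Priority: a lower number wins
  const best = Math.min(...healthy.map(o => o.priority));
  const pool = healthy.filter(o => o.priority === best);

  // 4. Weight: weighted distribution across the selected pool
  const total = pool.reduce((sum, o) => sum + o.weight, 0);
  let threshold = rand * total;
  for (const o of pool) {
    threshold -= o.weight;
    if (threshold <= 0) return o.name;
  }
  return pool[pool.length - 1].name;
}

const origins = [
  { name: "West Europe", priority: 1, weight: 1000, healthy: true },
  { name: "East US",     priority: 1, weight: 500,  healthy: true },
  { name: "Central US",  priority: 2, weight: 1000, healthy: true },
];
// Central US is only selected once both priority-1 origins are unhealthy.
```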

For more information on this nice algorithm, see here: Traffic routing methods to origin - Azure Front Door | Microsoft Learn

More Info (as promised)

Why Use Azure API Management?

Azure API Management is a fully managed service providing:

1. Centralized API Gateway

  • Enforces policies such as authentication, rate limiting, transformations, and caching.
  • Acts as a single façade for backend services, enabling modernization without breaking existing clients.

2. Security & Governance

  • Integrates with Azure AD, OAuth2, and mTLS (mutual TLS).
  • Provides threat protection and schema validation.

3. Developer Ecosystem

  • Developer portal, API documentation, testing console, versioning, and releases.

4. Multi‑Region Gateways (Premium Tier)

  • Allows deployment of additional regional gateways for active‑active, low‑latency global experiences.

APIM Deployment Modes: Internal vs. External

External Mode

  • The APIM gateway is reachable publicly over the internet.
  • Common when:
    • Exposing APIs to partners, mobile apps, or public clients.
  • You can easily front this with an Azure Front Door for reasons listed in the next section. 

Internal Mode

  • APIM gateway is deployed inside a VNet, accessible only privately.
  • Used when:
    • APIs must stay private to an enterprise network.
    • Only internal consumers/VPN/VNet-peered systems need access.
  • To make your APIM publicly accessible, you need to front it with both an Application Gateway and an Azure Front Door because:
    • Azure Front Door (AFD) cannot directly reach an internal‑mode APIM because AFD requires a publicly routable origin.
    • Application Gateway is a Layer‑7 reverse proxy that can expose a public frontend while still reaching internal private backends (like the APIM gateway). [Ref]

But Why Put Azure Front Door in Front of API Management?

Azure Front Door provides capabilities that APIM alone does not offer:

1. Global Load Balancing 

  • As discussed above.

2. Edge Security

  • Web Application Firewall, TLS termination at the edge, DDoS absorption.
  • Reduces load on API gateways.

3. Faster Global Performance

  • Anycast network and global POPs reduce round‑trip latency before requests hit APIM.
    • A POP (Point of Presence) is an Azure Front Door edge location—a physical site in Microsoft’s global network where incoming user traffic first lands. Azure Front Door uses numerous global and local POPs strategically placed close to end‑users (both enterprise and consumer) to improve performance.

    • Anycast is a routing method Azure Front Door uses to improve global connectivity. Ref: Traffic acceleration - Azure Front Door | Microsoft Learn

4. Unified Global Endpoint

  • A single public endpoint (e.g., https://api.contoso.com) that intelligently distributes traffic across multiple APIM regions.

 

With all of the above features, it is best to pair API Management with a Front Door, especially when dealing with multi-region architectures. 

 

 

 

Credits: 

Junee Singh, Senior Solution Engineer at Microsoft

Isiah Hudson, Senior Solution Engineer at Microsoft


The Paper Cuts Microsoft Actually Fixes: A Deep Dive into .NET 10 with Mark J Price

1 Share

Strategic Technology Consultation Services

This episode of The Modern .NET Show is supported, in part, by RJJ Software's Strategic Technology Consultation Services. If you're an SME (Small to Medium Enterprise) leader wondering why your technology investments aren't delivering, or you're facing critical decisions about AI, modernization, or team productivity, let's talk.

Show Notes

"There's so much that we can talk about with. NET 10 and related things like C# 14. So I'm going to try and focus on a few of the highlights that are personal highlights for me So let's start with the language actually, C# 14."— Mark J Price

Hey everyone, and welcome back to The Modern .NET Show; the premier .NET podcast, focusing entirely on the knowledge, tools, and frameworks that all .NET developers should have in their toolbox. I'm your host Jamie Taylor, bringing you conversations with the brightest minds in the .NET ecosystem.

Today, we're joined by Mark J Price to talk about some of our favourite things in .NET 10 and his new four-part book series on, quite literally, everything .NET. Mark is one of the most prolific authors in the .NET space at the moment, and his new book series is shaping up to be a fantastic resource.

“One of the things that I’ve always appreciated with Microsoft and their culture is that they have a very strong requirement that things are as backwards compatible as possible.”— Mark J Price

Along the way, we talked about the recent changes to the STS (aka Standard Term Support) lifecycle for .NET, bringing more support to the odd-numbered versions of .NET and giving companies more time to migrate from one version to the next. We also covered a very important point when it comes to either STS or LTS towards the end of the episode: essentially, keep your runtimes up to date, folks.

This episode marks the fifth appearance of Mark on the show. Mark has been a wonderful collaborator over the years, and long may that continue. We joke about the fact that Mark deserves an award for the guest with the most episodes, but maybe he does deserve an award. Unless someone out there is willing to beat his record, of course.

Before we jump in, a quick reminder: if The Modern .NET Show has become part of your learning journey, please consider supporting us through Patreon or Buy Me A Coffee. Every contribution helps us continue bringing you these in-depth conversations with industry experts. You'll find all the links in the show notes.

Anyway, without further ado, let's sit back, open up a terminal, type in `dotnet new podcast` and we'll dive into the core of Modern .NET.

Full Show Notes

The full show notes, including links to some of the things we discussed and a full transcription of this episode, can be found at: https://dotnetcore.show/season-8/the-paper-cuts-microsoft-actually-fixes-a-deep-dive-into-net-10-with-mark-j-price/

Useful Links:

Supporting the show:

Getting in Touch:

Remember to rate and review the show on Apple Podcasts, Podchaser, or wherever you find your podcasts, this will help the show's audience grow. Or you can just share the show with a friend.

And don't forget to reach out via our Contact page. We're very interested in your opinion of the show, so please get in touch.

You can support the show by making a monthly donation on the show's Patreon page at: https://www.patreon.com/TheDotNetCorePodcast.

Music created by Mono Memory Music, licensed to RJJ Software for use in The Modern .NET Show.

Editing and post-production services for this episode were provided by MB Podcast Services.





Download audio: https://traffic.libsyn.com/clean/secure/thedotnetcorepodcast/813-MarkPrice.mp3?dest-id=767916

Episode 560: You Can Feel It Coming

1 Share

This week, we discuss personal AI hype cycles, bottoms-up adoption, and "The Modern Stack" simplifying cloud. Plus, thoughts on new cars and the dogs that ride in them.

Watch the YouTube Live Recording of Episode 560

Runner-up Titles

  • They only talk about AI
  • I bet you have 5 shop vacs
  • It’s all going into applesauce
  • Live Claude Code It
  • Good defaults and opinions, nobody wants that
  • Give me 5 minutes, and I’ll give you fifty 6 page memos
  • I know I just pulled off a Pivotal scab.
  • Free Tier as a Service

Rundown

Relevant to your Interests

Conferences

  • DevOpsDay LA at SCALE23x, March 6th, Pasadena, CA
    • Use code: DEVOP for 50% off.
  • Devnexus 2026, March 4th to 6th, Atlanta, GA.
    • Use this 30% off discount code from your pals at Tanzu: DN26VMWARE30.
    • Check out the Tanzu and Spring talks and trading cards on THE LANDING PAGE.
    • Shout out to the people who saw the trading cards and messaged me!
  • Austin Meetup, March 10th, Listener Steve Anness speaking on Grafana
  • KubeCon EU, March 23rd to 26th, 2026 - Coté will be there on a media pass.
  • Devopsdays Atlanta 2026, April 21-22, 2026
  • DevOpsDays Austin, May 5 - 6, 2026
  • WeAreDevelopers, July 8th to 10th, Berlin, Coté speaking.
  • VMware User Groups (VMUGs):
    • Amsterdam (March 17-19, 2026) - Coté speaking.
    • Minneapolis (April 7-9, 2026)
    • Toronto (May 12-14, 2026)
    • Dallas (June 9-11, 2026)
    • Orlando (October 20-22, 2026)

SDT News & Community

Recommendations





Download audio: https://aphid.fireside.fm/d/1437767933/9b74150b-3553-49dc-8332-f89bbbba9f92/35125f53-608a-491f-bcb4-2d0d5e9ca17c.mp3

API Security Best Practices: A Developer’s Guide to Protecting Your APIs

1 Share

This guide explains how to secure an API in production. You’ll learn:

  • The most important API security best practices
  • Common vulnerabilities like Broken Object Level Authorization (BOLA)
  • How to implement authentication, authorization, rate limiting, and input validation correctly
  • How to test and verify your API security controls before release


Try Postman today →

APIs are the backbone of authentication, payments, mobile apps, partner integrations, and internal microservices. This also makes them prime targets for attackers.

When developing a REST API for either internal use or for public consumption, it’s crucial to have layered security controls implemented from the start. A single misconfigured endpoint can expose sensitive data, enable injection attacks, or allow unauthorized users to access resources they shouldn’t.

No single control is enough. Effective API security comes from combining multiple defenses that work together.

How do you secure an API?

To secure an API, use a combination of defensive strategies: require HTTPS for all traffic, set up strong authentication (OAuth 2.0, JWTs), conduct thorough input validation, implement rate limiting, practice the principle of least privilege, and watch for abnormal API behavior. Effective API security requires multiple integrated practices, not just one.

Practice What it protects against Priority
HTTPS/TLS encryption Data interception, man-in-the-middle attacks Required
Authentication (OAuth 2.0, JWT) Unauthorized access, identity spoofing Required
Authorization and access control Privilege escalation, data leakage Required
Input validation Injection attacks, malformed requests Required
Rate limiting Brute-force attacks, DDoS, abuse High
API key management Credential theft, unauthorized usage High
Logging and monitoring Undetected breaches, slow incident response High
Security testing Unknown vulnerabilities, regressions High

Why API security matters

APIs expose application logic and data to external consumers by design. Every endpoint is a potential attack surface that you’re deliberately making accessible. The more APIs you manage, the larger that surface becomes, and the harder it is to enforce security consistently across all of them.

The consequences of poor API security are well documented: broken authentication, excessive data exposure, and missing access controls consistently appear in the OWASP API Security Top 10, a widely referenced list of the most critical API vulnerabilities. The risks involved are real. Misconfigured APIs have led to breaches exposing millions of user records, unauthorized access to internal systems, and financial losses from abuse of unprotected endpoints.

The good news: most API vulnerabilities are preventable with the right practices applied consistently throughout the development lifecycle.

Encrypt all traffic with HTTPS

While TLS encryption is essential for external-facing APIs, most teams overlook its importance for internal traffic. Service-to-service communication in distributed systems faces interception risks, particularly in shared infrastructure or multi-tenant cloud environments where network traffic can cross trust boundaries.

# Enforce HTTPS in your API responses
Strict-Transport-Security: max-age=31536000; includeSubDomains

Use TLS 1.2 or higher (preferably TLS 1.3) and disable older protocols like SSLv3 and TLS 1.0. Configure your servers to reject unencrypted connections entirely rather than redirecting from HTTP to HTTPS, which still exposes the initial request.
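To illustrate "reject rather than redirect", here is a hypothetical Express-style middleware sketch. The function name and the proxy-header handling are assumptions for illustration, not part of this guide's examples:

```javascript
// Hypothetical Express-style middleware: refuse plain-HTTP requests
// outright instead of redirecting, and advertise HSTS on encrypted
// connections.
function requireHttps(req, res, next) {
  // Express sets req.secure on TLS connections; behind a trusted proxy,
  // X-Forwarded-Proto carries the original scheme.
  const isTls = req.secure || req.headers["x-forwarded-proto"] === "https";
  if (!isTls) {
    // Refuse outright: a 301 redirect would mean the plaintext request
    // (and anything in it) already crossed the network unencrypted.
    res.statusCode = 403;
    return res.end(JSON.stringify({ error: "HTTPS is required" }));
  }
  res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  next();
}
```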

For service-to-service communication, consider mutual TLS (mTLS), where both the client and server present certificates. This confirms identity from both sides, which is especially helpful in microservice setups to ensure internal callers are legitimate, beyond just encrypted connections.

Use strong authentication

The most common and preventable API vulnerability is weak or absent authentication. OAuth 2.0 with JWTs is the standard for production APIs, giving you delegated authorization flows with stateless token verification on every request.

POST /api/login HTTP/1.1
Content-Type: application/json

{
  "grant_type": "client_credentials",
  "client_id": "your_client_id",
  "client_secret": "your_client_secret"
}

Response:

{
  "access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
  "token_type": "Bearer",
  "expires_in": 3600
}

Then pass the token in the Authorization header:

GET /api/users/12345 HTTP/1.1
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...

Key authentication practices:

Use short-lived access tokens (minutes to hours, not days or weeks) and refresh tokens for extended sessions.

Validate JWT signatures on every request. Never trust a token without verifying its signature, expiration, and issuer.

Store tokens securely. Keep access tokens in memory on the client side, not in localStorage or cookies without proper flags.

Never rely on API keys alone for authentication. API keys identify the calling application but don’t verify user identity. Use them alongside OAuth 2.0, not as a replacement.

Implement proper authorization

The top vulnerability on the OWASP API Security Top 10 list is Broken Object Level Authorization (BOLA), and it’s surprisingly common. It occurs when an API verifies that a user is authenticated but does not check whether they have permission to access the requested resource.

# Vulnerable: No ownership check
GET /api/orders/789
# Any authenticated user can access any order

# Secure: Verify the requesting user owns the resource
GET /api/orders/789
# Server checks: Does the authenticated user own order 789?
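The secure pattern above can be made concrete with a small sketch; the in-memory orders map stands in for a real database, and all names are illustrative:

```javascript
// Object-level authorization sketch: authentication alone is not enough;
// the handler must also confirm the requester owns the specific resource.
const orders = new Map([
  [789, { id: 789, ownerId: "user-1", total: 42 }],
]);

function getOrder(orderId, requestingUserId) {
  const order = orders.get(orderId);
  if (!order) return { status: 404 };
  // BOLA fix: verify the authenticated user owns this specific order
  if (order.ownerId !== requestingUserId) return { status: 403 };
  return { status: 200, body: order };
}
```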

When you configure authentication, apply the principle of least privilege. Grant the minimum permissions needed for each role or scope. A reporting dashboard doesn’t need write access to user records, and a mobile app shouldn’t have admin privileges.

Use Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) to enforce permissions consistently:

// Middleware example: Check user role before allowing access
function authorize(requiredRole) {
  return (req, res, next) => {
    if (req.user.role !== requiredRole) {
      return res.status(403).json({ error: "Forbidden" });
    }
    next();
  };
}

app.delete("/api/users/:id", authorize("admin"), deleteUser);

Validate and sanitize all input

The concept of input validation is easy, but ensuring it’s applied uniformly to every endpoint, parameter, header, and query string is challenging, particularly for growing APIs. This validation layer is designed to stop injection attacks (SQL, NoSQL, command injection), reject improperly formed payloads, and detect business logic exploits before they hit your application code.

// Validate input before processing
app.post("/api/users", (req, res) => {
  const { name, email, role } = req.body;

  // Check required fields exist
  if (!name || !email) {
    return res.status(400).json({ error: "Name and email are required" });
  }

  // Validate email format
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!emailRegex.test(email)) {
    return res.status(400).json({ error: "Invalid email format" });
  }

  // Whitelist allowed values
  const allowedRoles = ["viewer", "editor", "admin"];
  if (role && !allowedRoles.includes(role)) {
    return res.status(400).json({ error: "Invalid role" });
  }

  // Proceed with validated data
  createUser({ name, email, role: role || "viewer" });
});

Key validation practices:

  • Use allowlists over denylists. Define what’s accepted rather than trying to block every possible malicious input.
  • Validate data types, lengths, ranges, and formats. A user ID should be a number, not an arbitrary string.
  • Sanitize output as well as input. Encode data before including it in responses to prevent cross-site scripting (XSS) in API consumers that render HTML.
  • Use a schema validation library (like Joi, Zod, or JSON Schema) to enforce request structure consistently across endpoints.

Apply rate limiting and throttling

Rate limiting restricts the volume of requests a client can send over a defined period. Without it, your API is vulnerable to brute-force attacks, credential stuffing, denial-of-service (DoS) attacks, and general abuse from misbehaving clients.

HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 67
X-RateLimit-Reset: 1700000000

When a client exceeds the limit, return a 429 Too Many Requests status with a Retry-After header:

HTTP/1.1 429 Too Many Requests
Retry-After: 30
Content-Type: application/json

{
  "error": "Rate limit exceeded. Try again in 30 seconds."
}

Set different rate limits based on the endpoint’s sensitivity. For example, login endpoints and password reset flows should have stricter limits than read-only data endpoints. Consider per-user, per-IP, and per-API-key limits to prevent abuse from multiple angles.
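A minimal fixed-window limiter sketch shows the mechanics; the names and numbers are illustrative, and a real deployment would typically use a shared store such as Redis so limits hold across instances:

```javascript
// In-memory fixed-window rate limiter sketch (single process only).
// Returns whether the request is allowed, plus the data needed for
// X-RateLimit-Remaining and Retry-After headers.
function makeRateLimiter({ limit, windowMs }) {
  const windows = new Map(); // key (user, IP, or API key) -> { start, count }
  return function check(key, now = Date.now()) {
    const w = windows.get(key);
    if (!w || now - w.start >= windowMs) {
      // New window for this key
      windows.set(key, { start: now, count: 1 });
      return { allowed: true, remaining: limit - 1 };
    }
    if (w.count >= limit) {
      // Over the limit: tell the client when to retry (seconds)
      const retryAfter = Math.ceil((w.start + windowMs - now) / 1000);
      return { allowed: false, remaining: 0, retryAfter };
    }
    w.count += 1;
    return { allowed: true, remaining: limit - w.count };
  };
}
```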

On payment or transaction endpoints, where duplicate requests could cause actual harm, use idempotency keys for safe retry handling.

POST /api/payments HTTP/1.1
Idempotency-Key: a1b2c3d4-unique-key
Content-Type: application/json

{
  "amount": 99.99,
  "currency": "USD"
}
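On the server side, idempotency-key handling can be sketched as follows; the cache and handler names are illustrative, and a production cache would be persistent with expiring entries:

```javascript
// Idempotency-key sketch: the first request with a given key executes
// and its response is cached; retries with the same key replay the
// stored response instead of charging twice.
const idempotencyCache = new Map();

function handlePayment(idempotencyKey, charge, body) {
  if (idempotencyCache.has(idempotencyKey)) {
    return { replayed: true, ...idempotencyCache.get(idempotencyKey) };
  }
  const result = charge(body); // e.g. the call to the payment processor
  idempotencyCache.set(idempotencyKey, result);
  return { replayed: false, ...result };
}
```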

Manage API keys and secrets securely

Careful handling of API keys, tokens, and credentials is essential at every stage, including generation, rotation, and revocation.

Never hardcode secrets in source code. This is one of the most common and most preventable security mistakes. Repository history retains keys even after you delete them from the current version.

# Bad: Hardcoded in source
API_KEY = "sk_live_abc123xyz789"

# Good: Loaded from environment variables
API_KEY = os.environ.get("API_KEY")

Key practices:

  • Store secrets in environment variables or a dedicated secrets manager (like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault).
  • Rotate API keys on a regular schedule and immediately after any suspected exposure.
  • Use scoped keys with the minimum permissions needed. A key that only needs read access shouldn’t have write privileges.
  • Set expiration dates on keys and tokens. Long-lived credentials increase the window of exposure if they’re compromised.
  • Monitor key usage for anomalies. A surge in requests from one key could signal a security breach.

Return only what’s needed

Oversharing data in API responses is a subtle but serious security risk. If your API returns full user objects when the client only needs a name and email, you’re unnecessarily exposing fields like internal IDs, roles, password hashes, or other sensitive attributes.

// Bad: Returns everything from the database
app.get("/api/users/:id", async (req, res) => {
  const user = await db.users.findById(req.params.id);
  res.json(user); // Includes password_hash, internal_notes, etc.
});

// Good: Return only what the client needs
app.get("/api/users/:id", async (req, res) => {
  const user = await db.users.findById(req.params.id);
  res.json({
    id: user.id,
    name: user.name,
    email: user.email,
    role: user.role
  });
});

This approach, also known as data minimization, extends to error messages. Detailed error responses give attackers insight into your API’s structure. Return enough information for legitimate clients to fix their requests, but avoid exposing stack traces, database details, or internal service names in production.

Log, monitor, and alert

You can’t protect what you can’t see. With extensive logging and monitoring, you can spot ongoing attacks, examine incidents later, and find patterns that identify potential threats.

What to log:

  • All authentication attempts (successful and failed)

  • Authorization failures (403 responses)

  • Unusual request patterns (high volume from a single source, requests to non-existent endpoints)

  • Changes to sensitive resources (user roles, payment methods, API keys)

  • Rate limit violations

What not to log: Full request bodies containing passwords, tokens, credit card numbers, or other sensitive data. Log enough to investigate without creating a new exposure point.

Set up alerts for anomalies. For instance, a sharp rise in 401 responses could mean a credential stuffing attack, and a cluster of 500 errors might reveal an exploit attempt on a specific endpoint.
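The credential-stuffing signal described above can be sketched as a sliding-window counter; the threshold and names are illustrative:

```javascript
// Anomaly-alert sketch: count 401 responses per source in a sliding
// window and flag a possible credential-stuffing attack when the
// count crosses a threshold.
function makeAuthFailureMonitor({ threshold, windowMs }) {
  const failures = new Map(); // source (IP or API key) -> timestamps
  return function record(source, now = Date.now()) {
    // Keep only failures still inside the window, then add this one
    const recent = (failures.get(source) || []).filter(t => now - t < windowMs);
    recent.push(now);
    failures.set(source, recent);
    return { count: recent.length, alert: recent.length >= threshold };
  };
}
```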

Test your API security in Postman

Security practices only work if they’re verified. Postman makes it straightforward to test authentication, authorization, input validation, and error handling across your API.

Test authentication enforcement:

  1. Create a request to a protected endpoint without including an auth token.

  2. Verify that you receive a 401 Unauthorized response.

  3. Add a valid token and confirm the request succeeds.

  4. Send an expired or malformed token and confirm you get a 401.

Test authorization controls:

  1. Authenticate as a regular user.

  2. Attempt to access or modify a resource owned by a different user.

  3. Verify the API returns 403 Forbidden, not the resource data.

Test input validation:

  1. Send requests with missing required fields and verify you get 400 Bad Request with a clear error message.

  2. Send oversized payloads, special characters, and SQL injection strings to confirm the API rejects them.

  3. Check that error responses don’t leak internal details like stack traces or database structures.

Test rate limiting:

  1. Use Postman’s Collection Runner to send rapid sequential requests to a rate-limited endpoint.

  2. Verify the API returns 429 Too Many Requests after hitting the limit.

  3. Confirm the response includes Retry-After or rate limit headers.

Add assertions to your test scripts to automate these checks:

pm.test("Unauthorized request returns 401", function () {
  pm.response.to.have.status(401);
});

pm.test("Response does not contain sensitive fields", function () {
  const body = pm.response.json();
  pm.expect(body).to.not.have.property("password_hash");
  pm.expect(body).to.not.have.property("internal_notes");
});

Common API security mistakes to avoid

Relying on API keys as your only authentication. API keys identify applications, not users. They’re easily leaked in client-side code, browser history, or server logs. Pair them with proper user authentication like OAuth 2.0.

Returning too much data in responses. Don’t send entire database records when the client only needs two fields. Filter response data at the application layer to prevent accidental exposure of sensitive information.

Skipping authorization checks on individual resources. Verifying that a user is authenticated isn’t enough. Your API needs to verify permission for every resource requested, on every endpoint, without fail.

Inconsistent security policies across internal and external APIs. Internal APIs often get a pass on authentication, rate limiting, or input validation because they’re “not public-facing.” But lateral movement through internal APIs is a common attack pattern once an attacker gains initial access. Apply the same security baseline everywhere.

Using outdated dependencies. Vulnerabilities in third-party libraries are a common attack vector. Keep your dependencies updated and monitor security advisories for the frameworks and packages your API relies on.

Exposing detailed error messages in production. Stack traces and database errors help during development but give attackers a roadmap in production. Return generic error messages and log the details server-side.

API Security FAQs

Question Answer
What’s the most critical API security practice? Strong authentication and authorization on every endpoint.
Should internal APIs use HTTPS? Yes. Always encrypt API traffic, including service-to-service communication.
Are API keys enough for security? No. API keys identify applications but don’t authenticate users. Use them alongside OAuth 2.0 or JWTs.
What is BOLA? Broken Object Level Authorization, which is when an API doesn’t verify a user’s access to a specific resource. It’s the #1 OWASP API vulnerability.
How often should I rotate API keys? Regularly (every 60–90 days) and immediately after any suspected compromise.
What status code for rate limiting? 429 Too Many Requests, with a Retry-After header.
Should I validate input on the server? Always. Validate at the API layer regardless of what upstream clients or gateways do.
How can I test API security? Verify authentication, authorization, input validation, and rate limiting. Automate checks with test scripts in your CI/CD pipeline.

The post API Security Best Practices: A Developer’s Guide to Protecting Your APIs appeared first on Postman Blog.
