What's new in the Jetpack Compose April '26 release

Posted by Meghan Mehta, Android Developer Relations Engineer


Today, the Jetpack Compose April ‘26 release is stable. This release contains version 1.11 of core Compose modules (see the full BOM mapping), shared element debug tools, trackpad events, and more. We also have a few experimental APIs that we’d love you to try out and give us feedback on.

To use today’s release, upgrade your Compose BOM version to:
implementation(platform("androidx.compose:compose-bom:2026.04.01"))

Changes in Compose 1.11.0

Coroutine execution in tests

We’re introducing a major update to how Compose handles test timing. Following the opt-in period announced in Compose 1.10, the v2 testing APIs are now the default, and the v1 APIs have been deprecated. The key change is a shift in the default test dispatcher. While the v1 APIs relied on UnconfinedTestDispatcher, which executed coroutines immediately, the v2 APIs use the StandardTestDispatcher. This means that when a coroutine is launched in your tests, it is now queued and does not execute until the virtual clock is advanced.

This better mimics production conditions, effectively flushing out race conditions and making your test suite significantly more robust and less flaky.

To ensure your tests align with standard coroutine behavior and to avoid future compatibility issues, we strongly recommend migrating your test suite. Check out our comprehensive migration guide for API mappings and common fixes.
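The queuing behavior described above comes straight from kotlinx-coroutines-test and can be observed without any Compose-specific APIs. The following is a minimal sketch using StandardTestDispatcher directly, not the Compose v2 testing APIs themselves:

```kotlin
import kotlinx.coroutines.launch
import kotlinx.coroutines.test.StandardTestDispatcher
import kotlinx.coroutines.test.runTest

// With StandardTestDispatcher, launched coroutines are queued on the
// virtual clock instead of running eagerly (the old
// UnconfinedTestDispatcher behavior). They execute only once the
// scheduler is advanced.
fun main() = runTest(StandardTestDispatcher()) {
    var ran = false
    launch { ran = true }
    check(!ran)                       // still queued, nothing has run yet
    testScheduler.advanceUntilIdle()  // advance the virtual clock
    check(ran)                        // now the coroutine has executed
}
```

The same mental model applies inside Compose tests: work launched in a test body stays pending until the virtual clock moves.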

Shared element improvements and animation tooling


We’ve also added some handy visual debugging tools for shared elements and Modifier.animatedBounds. You can now see exactly what’s happening under the hood—like target bounds, animation trajectories, and how many matches are found—making it much easier to spot why a transition might not be behaving as expected. To use the new tooling, simply surround your SharedTransitionLayout with the LookaheadAnimationVisualDebugging composable.

LookaheadAnimationVisualDebugging(
    overlayColor = Color(0x4AE91E63),
    isEnabled = true,
    multipleMatchesColor = Color.Green,
    isShowKeylabelEnabled = false,
    unmatchedElementColor = Color.Red,
) {
    SharedTransitionLayout {
        CompositionLocalProvider(
            LocalSharedTransitionScope provides this,
        ) {
            // your content
        }
    }
}

Trackpad events

We’ve revamped Compose support for trackpads: built-in laptop trackpads, attachable trackpads for tablets, and external or virtual trackpads. Basic trackpad events are now generally treated as PointerType.Mouse events, aligning mouse and trackpad behavior to better match user expectations. Previously, these trackpad events were interpreted as fake touchscreen fingers of PointerType.Touch, which led to confusing user experiences; for example, clicking and dragging with a trackpad would scroll instead of selecting. With the pointer type change in the latest release of Compose, clicking and dragging with a trackpad no longer scrolls.

We also added support for more complicated trackpad gestures as recognized by the platform since API 34, including two finger swipes and pinches. These gestures are automatically recognized by components like Modifier.scrollable and Modifier.transformable to have better behavior with trackpads.

These changes improve behavior for trackpads across built-in components, with redundant touch slop removed, a more intuitive drag-and-drop starting gesture, double-click and triple-click selection in text fields, and desktop-styled context menus in text fields.

To test trackpad behavior, new testing APIs built around performTrackpadInput let you validate how your apps behave when used with a trackpad. If you have custom gesture detectors, validate behavior across input types, including touchscreens, mice, trackpads, and styluses, and ensure support for mouse scroll wheels and trackpad gestures.

Before and after: trackpad drag behavior.

Composition host defaults (Compose runtime)

We introduced HostDefaultProvider, LocalHostDefaultProvider, HostDefaultKey, and ViewTreeHostDefaultKey to supply host-level services directly through compose-runtime. This removes the need for libraries to depend on compose-ui for lookups, better supporting Kotlin Multiplatform. To link these values to the composition tree, library authors can use compositionLocalWithHostDefaultOf to create a CompositionLocal that resolves defaults from the host.

Preview wrappers

Custom previews are a new Android Studio feature that lets you define exactly how the contents of a Compose preview are displayed.

By implementing the PreviewWrapper interface and applying the new @PreviewWrapperProvider annotation, you can easily inject custom logic, such as applying a specific theme. The annotation can be applied to a function annotated with @Composable and @Preview or @MultiPreview, offering a generic, easy-to-use solution that works across preview features and significantly reduces repetitive code.

class ThemeWrapper : PreviewWrapper {

    @Composable
    override fun Wrap(content: @Composable () -> Unit) {
        JetsnackTheme {
            content()
        }
    }
}

@PreviewWrapperProvider(ThemeWrapper::class)
@Preview
@Composable
private fun ButtonPreview() {
    // JetsnackTheme in effect
    Button(onClick = {}) {
        Text(text = "Demo")
    }
}

Deprecations and removals

  • As announced in the Compose 1.10 blog post, we’re deprecating Modifier.onFirstVisible(). Its name often led to misconceptions, particularly within lazy layouts, where it would trigger multiple times during scrolling. We recommend migrating to Modifier.onVisibilityChanged(), which allows for more precise manual tracking of visibility states tailored to your specific use case requirements.
  • The ComposeFoundationFlags.isTextFieldDpadNavigationEnabled flag was removed because D-pad navigation for TextFields is now always enabled by default. The new behavior ensures that the D-pad events from a gamepad or a TV remote first move the cursor in the given direction. The focus can move to another element only when the cursor reaches the end of the text.

Upcoming APIs

In the upcoming Compose 1.12.0 release, the compileSdk will be upgraded to 37, with AGP 9, and all apps and libraries that depend on Compose will inherit this requirement. We recommend keeping up to date with the latest released versions, as Compose aims to promptly adopt new compileSdks to provide access to the latest Android features. Be sure to check out the documentation for more information on which versions of AGP are supported for different API levels.

In Compose 1.11.0, the following APIs are introduced as @Experimental, and we look forward to hearing your feedback as you explore them in your apps. Note that @Experimental APIs are provided for early evaluation and feedback and may undergo significant changes or removal in future releases.

Styles (Experimental)

We are introducing a new experimental foundation API for styling. The Style API is a new paradigm for customizing visual elements of components, which has traditionally been performed with modifiers. It is designed to unlock deeper, easier customization by exposing a standard set of styleable properties with simple state-based styling and animated transitions. With this new API, we’re already seeing promising performance benefits. We plan to adopt Styles in Material components once the Style API stabilizes.

A basic example of overriding a pressed state style background:

@Composable
fun LoginButton(modifier: Modifier = Modifier) {
    Button(
        onClick = {
            // Login logic
        },
        modifier = modifier,
        style = {
            background(
                Brush.linearGradient(
                    listOf(lightPurple, lightBlue)
                )
            )
            width(75.dp)
            height(50.dp)
            textAlign(TextAlign.Center)
            externalPadding(16.dp)

            pressed {
                background(
                    Brush.linearGradient(
                        listOf(Color.Magenta, Color.Red)
                    )
                )
            }
        }
    ){
        Text(
            text = "Login",
        )
    }
}



Check out the documentation and file any bugs here.

MediaQuery (Experimental)

The new mediaQuery API provides a declarative and performant way to adapt your UI to its environment. It abstracts complex information retrieval into simple conditions within a UiMediaScope, ensuring recomposition only happens when needed.

With support for a wide range of environmental signals—from device capabilities like keyboard types and pointer precision, to contextual states like window size and posture—you can build deeply responsive experiences. Performance is baked in with derivedMediaQuery to handle high-frequency updates, while the ability to override scopes makes testing and previews seamless across hardware configurations.

Previously, getting access to certain device properties, such as whether a device was in tabletop mode, required a lot of boilerplate:

@Composable
fun isTabletopPosture(
    context: Context = LocalContext.current
): Boolean {
    val windowLayoutInfo by
        WindowInfoTracker
            .getOrCreate(context)
            .windowLayoutInfo(context)
            .collectAsStateWithLifecycle(null)

    // windowLayoutInfo starts as null, so use a safe call with a default
    return windowLayoutInfo?.displayFeatures.orEmpty().any { displayFeature ->
        displayFeature is FoldingFeature &&
            displayFeature.state == FoldingFeature.State.HALF_OPENED &&
            displayFeature.orientation == FoldingFeature.Orientation.HORIZONTAL
    }
}

@Composable
fun VideoPlayer() {
    if(isTabletopPosture()) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

Now, with the mediaQuery syntax, you can query device properties directly, such as whether a device is in tabletop mode:

@OptIn(ExperimentalMediaQueryApi::class)
@Composable
fun VideoPlayer() {
    if (mediaQuery { windowPosture == UiMediaScope.Posture.Tabletop }) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

Check out the documentation and file any bugs here.

Grid (Experimental)

Grid is a powerful new API for building complex, two-dimensional layouts in Jetpack Compose. While Row and Column are great for linear designs, Grid gives you the structural control needed for screen-level architecture and intricate components without the overhead of a scrollable list.

Grid allows you to define your layout using tracks, gaps, and cells, offering familiar sizing options like Dp, percentages, intrinsic content sizes, and flexible "Fr" units.

@OptIn(ExperimentalGridApi::class)
@Composable
fun GridExample() {
    Grid(
        config = {
            repeat(4) { column(0.25f) }
            repeat(2) { row(0.5f) }
            gap(16.dp)
        }
    ) {
        Card1(modifier = Modifier.gridItem(rowSpan = 2))
        Card2(modifier = Modifier.gridItem(columnSpan = 3))
        Card3(modifier = Modifier.gridItem(columnSpan = 2))
        Card4()
    }
}


You can place items automatically or explicitly span them across multiple rows and columns for precision. Best of all, it’s highly adaptive—you can dynamically reconfigure your grid tracks and spans to respond to device states like tabletop mode or orientation changes, ensuring your UI looks great across form factors.


Check out the documentation and file any bugs here.

FlexBox (Experimental)

FlexBox is a layout container designed for high performance, adaptive UIs. It manages item sizing and space distribution based on available container dimensions. It handles complex tasks like wrapping (wrap) and multi-axis alignment of items (justifyContent, alignItems, alignContent). It allows items to grow (grow) or shrink (shrink) to fill the container.

@OptIn(ExperimentalFlexBoxApi::class)
@Composable
fun FlexBoxWrapping() {
    FlexBox(
        config = {
            wrap(FlexWrap.Wrap)
            gap(8.dp)
        }
    ) {
        RedRoundedBox()
        BlueRoundedBox()
        GreenRoundedBox(modifier = Modifier.width(350.dp).flex { grow(1.0f) })
        OrangeRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.7f) })
        PinkRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.3f) })
    }
}





Check out the documentation and file any bugs here.

New SlotTable implementation (Experimental)

We’ve introduced a new implementation of the SlotTable, which is disabled by default in this release. SlotTable is the internal data structure that the Compose runtime uses to track the state of your composition hierarchy, track invalidations/recompositions, store remembered values, and track all metadata of the composition at runtime. This new implementation is designed to improve performance, primarily around random edits.

To try the new SlotTable, enable ComposeRuntimeFlags.isLinkBufferComposerEnabled.

Start coding today!

With so many exciting new APIs in Jetpack Compose, and many more coming up, there's never been a better time to migrate to Jetpack Compose. As always, we value your feedback and feature requests (especially on @Experimental features that are still baking) — please file them here. Happy composing!


First Public Working Drafts for the Linked Web Storage (LWS) 1.0 Authentication Suite


The Linked Web Storage Working Group has published the following four First Public Working Drafts: 


Write azd hooks in Python, JavaScript, TypeScript, or .NET


Hooks are one of the most popular features in azd, and now you can write them in Python, JavaScript, TypeScript, or .NET, not just Bash and PowerShell.

What’s new?

The Azure Developer CLI (azd) hook system now supports four more languages beyond Bash and PowerShell. You can write hook scripts in Python, JavaScript, TypeScript, or .NET. azd automatically detects the language from the file extension, manages dependencies, and runs the script with no extra configuration required.

Why it matters

Hooks let you run custom logic at key points in the azd lifecycle: before provisioning, after deployment, and more. Previously, hooks supported scripts written in Bash and PowerShell, which meant developers had to context-switch to a shell scripting language even if their project was entirely Python or TypeScript. Now you can write hooks in the same language as your application, reuse existing libraries, and skip the shell scripting.

How to use it

Point a hook at a script file in azure.yaml, and azd infers the language from the extension. If the extension is ambiguous or missing, you can specify the language explicitly with the kind field:

hooks:
  preprovision:
    run: ./hooks/setup
    kind: python    # explicit — overrides extension inference

Here’s what a typical setup looks like:

# azure.yaml
hooks:
  preprovision:
    run: ./hooks/setup.py

  postdeploy:
    run: ./hooks/seed.ts

  postprovision:
    run: ./hooks/migrate.cs

Python hooks

Place a requirements.txt or pyproject.toml in the same directory as your script or a parent directory. azd walks up the directory tree from the script location to find the nearest project file, creates a virtual environment, installs dependencies, and runs the script.

hooks/
├── setup.py
└── requirements.txt

Then reference the script in your azure.yaml:

hooks:
  preprovision:
    run: ./hooks/setup.py
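For illustration, the hook script itself can be an ordinary Python program. This hypothetical example reads the active environment name, which azd exposes to hooks via the AZURE_ENV_NAME environment variable (the setup logic is a placeholder):

```python
# hooks/setup.py - a hypothetical preprovision hook.
import os

def main() -> None:
    # azd exposes environment values to hooks as environment variables;
    # AZURE_ENV_NAME holds the name of the active azd environment.
    env_name = os.environ.get("AZURE_ENV_NAME", "(unknown)")
    print(f"Running preprovision setup for environment: {env_name}")

if __name__ == "__main__":
    main()
```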

JavaScript and TypeScript hooks

Place a package.json in the same directory as your script or a parent directory. azd runs npm install (or the package manager specified in config) and executes the script. TypeScript scripts run via npx tsx with no compile step or tsconfig.json needed:

hooks/
├── seed.ts
└── package.json

Then reference the script in your azure.yaml:

hooks:
  postdeploy:
    run: ./hooks/seed.ts

.NET hooks

Two modes are supported:

  • Project mode: If a .csproj, .fsproj, or .vbproj exists in the same directory as the script, azd runs dotnet restore and dotnet build automatically.
  • Single-file mode: On .NET 10+, standalone .cs files run directly via dotnet run script.cs without a project file.
hooks/
├── migrate.cs
└── migrate.csproj   # optional — omit for single-file mode on .NET 10+

Then reference the script in your azure.yaml:

hooks:
  postprovision:
    run: ./hooks/migrate.cs

Override the working directory

Use the dir field to set the working directory for a hook. This configuration is useful when the project root differs from the script location:

hooks:
  preprovision:
    run: main.py
    dir: hooks/preprovision

Executor-specific configuration

Each language supports an optional config block for executor-specific settings:

hooks:
  preprovision:
    run: ./hooks/setup.ts
    config:
      packageManager: pnpm    # npm | pnpm | yarn

  postdeploy:
    run: ./hooks/seed.py
    config:
      virtualEnvName: .venv   # override default naming

  postprovision:
    run: ./hooks/migrate.cs
    config:
      configuration: Release  # Debug | Release
      framework: net10.0      # target framework

Mixed formats

You can mix single-hook and multi-hook formats in the same hooks: block, including platform-specific overrides:

hooks:
  preprovision:
    run: ./hooks/setup.py
  predeploy:
    windows:
      run: ./hooks/build.ps1
    posix:
      run: ./hooks/build.sh

Try it out

To make sure you have this feature, update to the latest azd version:

azd update

For a fresh install, see Install azd.

We hope you like this new feature for writing azd hooks in your preferred language! Let us know what you think and how you’re using it.

Feedback

Have questions or ideas? File an issue or start a discussion on GitHub. Want to help shape the future of azd? Sign up for user research.

🙋‍♀️ New to azd?

If you’re new to the Azure Developer CLI, azd is an open-source command-line tool that takes your application from your local development environment to Azure. azd provides best-practice, developer-friendly commands that map to key stages in your workflow, whether you’re working in the terminal, your editor, or CI/CD.


This feature was introduced across several PRs: #7451 (Python), #7626 (JS/TS), #7652 (.NET/C#/F#/VB.NET), #7690 (config), #7618 (mixed formats).

The post Write azd hooks in Python, JavaScript, TypeScript, or .NET appeared first on Azure SDK Blog.


How We Scaled Security Reviews Without Slowing Down Engineering

1 Share

At Postman, we ship fast. Velocity is part of our culture. But as we scaled, one question kept resurfacing: How do we maintain strong security assurance without becoming the bottleneck?

We’re sharing the evolution of our Security Review Process (SRP). What didn’t work, what we changed, and how we built SRP v2, a risk-based, automation-first security model embedded directly into our SDLC.

Why Our Previous Model Stopped Scaling

Our previous security review process fully revolved around a VAPT (Vulnerability Assessment & Penetration Testing) Jira ticket and the corresponding security test plan. Here’s how it worked:

  • When a service release moved to staging environment (‘develop’ pre-production branch code merge), a VAPT ticket was automatically created
  • The primary AppSec contact for that service was notified
  • AppSec would manually review the entire release before GA

On paper, it sounds reasonable. In practice, it created several challenges:

  1. Security Notification Was Tied to VAPT Ticket Creation, Which Came Too Late

Our process relied almost entirely on VAPT ticket generation as the trigger for notifying AppSec. In many cases, AppSec was not formally looped in during the design or planning phase; hence, AppSec became aware of some changes only when a VAPT ticket was created after the service moved to the staging environment. By that point, the release was already close to GA. This led to:

  • Late architectural and implementation questions on certain releases
  • Compressed timelines for meaningful manual security review
  • Crunch mode for the AppSec team
  • Major security findings surfacing as release blockers, impacting delivery schedules
  2. Design-Phase Review Process Was Informal

For major features, AppSec was often consulted during design. But there was:

  • No formal process
  • No central tracking
  • No consistent documentation
  • No measurable assurance

As we scaled, this informality became friction.

  3. Lack of Risk Differentiation

The model treated all releases similarly. A low-risk configuration change and a high-impact auth redesign would both trigger similar review mechanics. That doesn’t scale. And at Postman, scale matters.

The Realization: Security Must Scale With Product

We had to confront a fundamental truth: security cannot just sit as a manual gate in front of a fast-moving train. If engineering scales through automation, frameworks, and reusable systems, AppSec must do the same. The goal wasn’t to review less. The goal was to review smarter.

About a year ago, we started acting on this. We piloted an early version of this model with a subset of services and teams, working closely with engineers, engineering managers, and product leaders. This helped us identify gaps, refine the approach, and validate what worked in practice.

That led to Security Review Process v2 (SRP v2). SRP v2 wasn’t designed in theory; it was built on operational evidence. Instead of arbitrarily cutting manual reviews, we used data to design risk-based pathways. We started by adding structured, required fields for security engineers to record before a VAPT Jira ticket can be closed. The following custom fields were added to the VAPT Jira ticket:

  • Was manual security review required? (Yes/No)
  • Manual review effort (In hours)
  • Release type reviewed (Major/Minor)

This gave us something we previously lacked: consistent, field-level operational data directly from the security engineers performing the reviews. With this, and other data points, we now measure:

  • How often releases for a given service or team truly require manual review
  • The distribution of major vs. minor releases
  • Average manual effort spent per release
  • Trends in early vs. late AppSec involvement
  • Where manual review effort can possibly be reduced

SRP v2: A Risk-Based Security Review Model

SRP v2 is a calibrated system designed to learn, adjust, and evolve as our product and organization grow.

How We Re-Architected the Security Review Process

  1. New AppSec Review (ASR) Jira Ticket

Every major release now has a dedicated ASR Jira ticket, acting as the central source of truth for the release’s security lifecycle. ASRs are auto-created as soon as the product release Jira ticket moves to the “In Progress” state. Each ASR tracks:

  • Product release ticket
  • VAPT tickets (per service release)
  • Dependent release ASR tickets
  • Triage questionnaire for initial triage and planning
  • Master security test plan
  • Security review status and final sign-off state

Earlier, security sign-offs lived mostly in scattered VAPT tickets; now every significant release has a canonical security hub. VAPTs are linked children, not isolated artifacts.

  2. Mandatory ASR-Level Security Triage Questionnaire

Every ASR includes a security triage questionnaire. Product Managers/Release Owners are directly responsible for filling this out when the ASR is created. These answers drive:

  • Initial risk classification
  • AppSec prioritization
  • Review depth decisions

The goal is simple: give security structured visibility during design and iteration phase.

  3. Mandatory PR-Level Security Triage Questionnaire

In addition to the ASR-level security questionnaire (which captures release intent and high-level risk context), we introduced a second layer of triage directly inside the pull request workflow. Before commits can be merged into the develop branch, the code author must answer a mandatory set of yes/no security questions.

Unlike the ASR questionnaire, PR-level triage focuses on concrete engineering changes. These questions are tightly scoped to implementation details because developers understand their changes best, and the context is fresh at merge time.

Over the past year, we’ve iterated on and refined this question set based on real-world usage and review outcomes. As a result, we’ve found it to be highly effective in surfacing meaningful signals early.

For every service release, our system aggregates all PR-level yes/no responses, and then a consolidated view is automatically posted to the VAPT Jira ticket tracking that release. If any PR answers “Yes” to a question, the corresponding VAPT-level field is marked “Yes.”
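The aggregation rule described above is effectively a per-question OR across all PRs in a release. A hypothetical sketch (question names are illustrative, not Postman's actual triage fields):

```python
# Hypothetical sketch of PR-level triage aggregation: a VAPT-level field
# is "Yes" if any PR in the release answered "Yes" to that question.
def aggregate_triage(pr_responses: list[dict[str, bool]]) -> dict[str, bool]:
    aggregated: dict[str, bool] = {}
    for response in pr_responses:
        for question, answered_yes in response.items():
            aggregated[question] = aggregated.get(question, False) or answered_yes
    return aggregated
```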

  4. Security Masterplan per Major Release

Every major release now requires a central security masterplan document. This document will:

  • Link to the ASR Jira ticket
  • List all VAPT Jira tickets for that release
  • Track required security controls discussed during design reviews
  • List security test coverage, and findings from tooling and manual penetration test

Earlier, security notes were scattered across service-level VAPT tickets; now we have one canonical security tracker per major release. The entire release’s security posture is documented in one place.

  5. Risk-Adaptive Review Routing

One of the most important changes in SRP v2 is that not every service, and not every release, follows the same review path. In the previous model, manual review was the default. In SRP v2, review depth is intentionally aligned to service criticality and the actual sensitivity of the changes being shipped.

For Low and Medium severity services: if all triage answers across PRs indicate no security-sensitive changes, the VAPT Jira ticket is automatically closed. That closure represents security sign-off. There is no waiting period, no manual gate, and no unnecessary friction. If any triage response signals risk, the release moves into a normal manual security review flow.

Auto-closure does not mean reduced security coverage. These releases still pass through our baseline guardrails, including secure SDLC requirements, automated SAST and SCA (dependency scanning), secret detection, infrastructure policy enforcement, cloud security posture management, logging standards, and centralized detection and response monitoring. Many services are also backed by Postman collection–based regression and security test suites that validate critical flows on every release. In addition, releases remain covered by broader assurance mechanisms such as periodic internal reviews and our bug bounty program. Automation replaces synchronous review; it does not replace security.

For High and Critical severity services: manual review is always required for now, regardless of how triage questions are answered. The potential blast radius is simply too large to rely solely on current automation. These services demand deliberate human scrutiny, and VAPT Jira tickets are closed only after explicit AppSec sign-off.
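The routing rule described in the two paragraphs above reduces to a small decision function. A sketch with illustrative names:

```python
# Hypothetical sketch of SRP v2's risk-adaptive routing: High/Critical
# services always require manual review; Low/Medium services are
# auto-closed unless any PR-level triage answer signals a
# security-sensitive change.
def requires_manual_review(service_severity: str, any_triage_yes: bool) -> bool:
    if service_severity in ("High", "Critical"):
        return True
    return any_triage_yes
```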

  6. Continuous Validation & Improvement

Automation without oversight is risky; hence, we introduced a safety net: all auto-closed releases are audited monthly by AppSec to validate that triage decisions were accurate and that risk signals remain trustworthy.

During these reviews, we examine whether any releases should have received deeper review, whether triage responses accurately reflected the underlying changes, and whether a service’s criticality classification still makes sense given its current evolution.

These audits allow us to refine the questionnaire, recalibrate criticality levels, and continuously improve the safety of our automation. If patterns shift, the model adapts.

What SRP v2 Has Enabled for Us

  • 25% of Releases Auto-Unblocked: 1 in 4 service releases now move from staging to production with no real-time manual security sign-off
  • Wait Times Cut: Removed 2 business days of average per-VAPT closure time for qualifying releases
  • Developer Productivity Gains: 22 developer and security work-hours per quarter saved in reduced context switching
  • Security Capacity Reclaimed: 3 full business weeks of AppSec team capacity freed up every half-year

What’s Next: Organically Scaling Security With a Living Risk Matrix

We’re moving toward a system where risk decisions and security insights continuously improve over time. That means building a data-driven risk matrix to better decide where human effort truly belongs, while also integrating AI to bring faster, more contextual intelligence into the security review workflow.

Think of this as a continuous conversation with the system, and not a one-time design. As our product grows in complexity, our framework will adapt, and recalibrate based on real signals, not assumptions.

A Structured, Data-Backed Risk Model

We’re building a model that categorizes releases into three actionable risk buckets based on a composite risk score. That score blends:

  • Service Criticality: The potential blast radius if a service is compromised
  • Historical Vulnerability Patterns: Past hygiene and recurring issues
  • Change Sensitivity Trends: Signals from aggregated triage data (e.g., auth changes, new APIs, PII handling, infra changes)

This structure balances intrinsic risk (what the service is) with operational signals (how it’s evolving).

Actionable Risk Zones: 

The risk score will ultimately place services and releases into three clear buckets tied to a different level of security involvement.

  • Autonomous Release Zone (Low Risk): Low criticality, clean history, and low recent sensitivity. These rely primarily on automation and periodic assurance; no routine blocking security reviews
  • Assured Oversight Zone (Medium Risk): Moderate criticality or elevated change sensitivity. Limited to scheduled reviews, focused scope bug bounty coverage, and targeted pentesting; not necessarily per-release gates
  • Critical Review Zone (High Risk): High-impact services or compounded risk signals. Mandatory manual security reviews for each such release with deeper scrutiny
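The scoring and bucketing described above can be sketched as a weighted blend mapped onto the three zones. The weights and thresholds below are illustrative assumptions, not Postman's actual calibration:

```python
# Hypothetical sketch of a composite risk score and the three zones.
def composite_risk_score(
    service_criticality: float,   # 0..1: blast radius if compromised
    historical_vuln_rate: float,  # 0..1: past hygiene, recurring issues
    change_sensitivity: float,    # 0..1: aggregated triage signals
) -> float:
    # Weights are assumptions for illustration only.
    return (0.5 * service_criticality
            + 0.25 * historical_vuln_rate
            + 0.25 * change_sensitivity)

def risk_zone(score: float) -> str:
    if score < 0.34:
        return "Autonomous Release Zone"
    if score < 0.67:
        return "Assured Oversight Zone"
    return "Critical Review Zone"
```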

AI Augmented Security

In parallel, we are integrating intelligence closer to the development workflow, from surfacing vulnerabilities directly in pull requests to experimenting with agentic approaches that go beyond traditional LLM-based scanning. We’re also exploring autonomous validation at runtime to verify findings more realistically, along with an internal RAG-based assistant to help our security team reason faster using historical context and prior decisions.

Final Thoughts

We care deeply about safeguarding our users and our platform, while making sure our engineering teams can move fast with confidence and trust in the system around them. Being product-first doesn’t mean relaxing standards. It means designing security mechanisms that understand business context, developer workflows, and real-world risk. It means building frameworks that scale naturally, adapt to change, and stay grounded in data.

We’ll keep refining this model as our product and threat landscape evolve. Because in the end, strong security is about building trust at the speed of innovation.

The post How We Scaled Security Reviews Without Slowing Down Engineering appeared first on Postman Blog.


Like Vertical Slice Architecture? Meet Wolverine.Http!


Before you read any of this, just know that it’s perfectly possible to mix and match Wolverine.HTTP, MVC Core controllers, and Minimal API endpoints in the same application.

If you’ve built ASP.NET Core applications of any size, you’ve probably run into the same friction: MVC controllers that balloon with constructor-injected dependencies, or Minimal API handlers that accumulate scattered app.MapGet(...) calls across multiple files. And if you’ve reached for a Mediator library to impose some structure, you’ve added a layer of abstraction that — while familiar — brings its own ceremony and a seam that can make unit testing harder than it should be.

Wolverine.HTTP is a different model. It’s a first-class HTTP framework built on top of ASP.NET Core that’s designed from the ground up for vertical slice architecture, has built-in transactional outbox support, and delivers a middleware story that is arguably more powerful than IEndpointFilter. And it doesn’t need a separate “Mediator” library, because Wolverine HTTP endpoints naturally support a “Vertical Slice” style with far fewer moving parts than the average “check out my vertical slice architecture template!” approach online.

Moreover, Wolverine.HTTP has first class support for resilient messaging through Wolverine’s transactional outbox and asynchronous messaging. No other HTTP endpoint library in .NET has any such smooth integration.

What Is Vertical Slice Architecture?

The core idea is organizing code by feature rather than by technical layer. Instead of a Controllers/ folder, a Services/ folder, and a Repositories/ folder that all have to be navigated to understand one feature, you co-locate everything that belongs to a single use case: the request type, the handler, and any supporting types.

The payoff is locality. When a bug is filed against “create order”, you open one file. When a feature is deleted, you delete one file. There’s no hunting across layers.

Wolverine.HTTP is a natural fit for this style. A Wolverine HTTP endpoint is just a static class — no base class, no constructor injection, no framework coupling. The framework discovers it by scanning for [WolverineGet], [WolverinePost], [WolverinePut], [WolverineDelete], and [WolverinePatch] attributes.

And because of the world we live in now, I have to mention that there is already plenty of anecdotal evidence that AI assisted coding works better with the “vertical slice” approach than it does against heavily layered approaches.

Getting Started

Install the NuGet package:

dotnet add package WolverineFx.Http

Wire it up in Program.cs:

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseWolverine();
builder.Services.AddWolverineHttp();

var app = builder.Build();

app.MapWolverineEndpoints();

return await app.RunJasperFxCommands(args);

A Complete Vertical Slice

Here’s what a full feature slice looks like with Wolverine.HTTP. Request type, response type, and handler all in one place:

// The request
public record CreateTodo(string Name);

// The response
public record TodoCreated(int Id);

// The handler — a plain static class, no base class required
public static class CreateTodoEndpoint
{
    [WolverinePost("/todoitems")]
    public static async Task&lt;IResult&gt; Post(
        CreateTodo command,
        IDocumentSession session) // injected by Wolverine from the IoC container
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);
        return Results.Created($"/todoitems/{todo.Id}", todo);
    }
}

Compare that to what this would look like in MVC Core with a service layer and constructor injection. The Wolverine version is shorter, has no framework coupling in the handler method itself, and every dependency is explicit in the method signature. There’s no hidden state, and the method is trivially unit-testable in isolation.

For reading data, it’s even cleaner:

public static class TodoEndpoints
{
    [WolverineGet("/todoitems")]
    public static Task&lt;IReadOnlyList&lt;Todo&gt;&gt; Get(IQuerySession session)
        => session.Query&lt;Todo&gt;().ToListAsync();

    [WolverineGet("/todoitems/{id}")]
    public static Task&lt;Todo?&gt; GetTodo(int id, IQuerySession session, CancellationToken cancellation)
        => session.LoadAsync&lt;Todo&gt;(id, cancellation);

    [WolverineDelete("/todoitems/{id}")]
    public static void Delete(int id, IDocumentSession session)
        => session.Delete&lt;Todo&gt;(id);
}

No controller. No service interface. No repository abstraction. Just the feature.

No Separate Mediator Needed

One of the most common patterns in .NET vertical slice architecture is using a Mediator library like MediatR to dispatch commands from controllers to handlers. Wolverine makes this unnecessary — it handles both HTTP routing and in-process message dispatch with the same execution pipeline.

If you’re coming from MediatR, the key difference is that there’s no IRequest<T> base type to implement, no IRequestHandler<TRequest, TResponse> to wire up, and no _mediator.Send(command) call to thread through your controllers. The HTTP endpoint is the handler. When you also want to dispatch a message for async processing, you just return it from the method (more on that below).

See our converting from MediatR guide for a detailed side-by-side comparison.

If you’re coming from MVC Core controllers or Minimal API, we have migration guides for both:

The Outbox: The Feature That Changes Everything

Here is where Wolverine.HTTP really pulls ahead. In any event-driven architecture, HTTP endpoints frequently need to do two things atomically: save data to the database and publish a message or event. If you do these as two separate operations and something crashes between them, you’ve lost a message — or worse, written corrupted state.

The standard solution is a transactional outbox: write the message to the same database transaction as the data change, then have a background process deliver it reliably.

With plain IMessageBus in a Minimal API handler, you’re responsible for the outbox mechanics yourself. With Wolverine.HTTP, the outbox is automatic. Any message returned from an endpoint method is enrolled in the same transaction as the handler’s database work.
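For readers who haven’t implemented an outbox before, the underlying pattern is easy to sketch outside of Wolverine. The Python/SQLite example below is a hand-rolled illustration of the general idea, not Wolverine’s actual implementation (Wolverine persists message envelopes in its own durable storage and runs its own delivery agent):

```python
import json
import sqlite3

# Minimal transactional-outbox sketch: the business write and the
# outgoing message commit in one transaction; a separate relay
# process would poll the outbox table and deliver pending messages.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE todos (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, body TEXT, sent INTEGER DEFAULT 0)")

def create_todo(name: str) -> int:
    with conn:  # one transaction: both rows commit or neither does
        cur = conn.execute("INSERT INTO todos (name) VALUES (?)", (name,))
        todo_id = cur.lastrowid
        conn.execute(
            "INSERT INTO outbox (body) VALUES (?)",
            (json.dumps({"type": "TodoCreated", "id": todo_id}),),
        )
    return todo_id

todo_id = create_todo("write docs")
pending = conn.execute("SELECT body FROM outbox WHERE sent = 0").fetchall()
print(len(pending))  # 1: the message survived alongside the data write
```

If the process crashes before the transaction commits, neither the row nor the message exists; if it crashes after, the relay still finds the message. That all-or-nothing property is what Wolverine gives you automatically.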

The simplest pattern uses tuple return values. Wolverine recognizes any message types in the return tuple and routes them through the outbox:

public static class CreateTodoEndpoint
{
    [WolverinePost("/todoitems")]
    public static (Todo todo, TodoCreated created) Post(
        CreateTodo command,
        IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);

        // Both the HTTP response (Todo) and the outbox message (TodoCreated)
        // are committed in the same transaction. No message is lost.
        return (todo, new TodoCreated(todo.Id));
    }
}

The Todo becomes the HTTP response body. The TodoCreated message goes into the outbox and is delivered durably after the transaction commits. The database write and the message write are atomic — no coordinator needed.

If you need to publish multiple messages, use OutgoingMessages:

[WolverinePost("/orders")]
public static (OrderCreated, OutgoingMessages) Post(CreateOrder command, IDocumentSession session)
{
    var order = new Order(command);
    session.Store(order);

    var messages = new OutgoingMessages
    {
        new OrderConfirmationEmail(order.CustomerId),
        new ReserveInventory(order.Items),
        new NotifyWarehouse(order.Id)
    };

    return (new OrderCreated(order.Id), messages);
}

All four database and message operations commit together. This is the kind of correctness that is genuinely difficult to achieve with raw IMessageBus calls in Minimal API, and it comes for free in Wolverine.HTTP.

Middleware: Better Than IEndpointFilter

ASP.NET Core Minimal API introduced IEndpointFilter as its extensibility hook — a way to run logic before and after an endpoint handler. It works, but it has a few rough edges: you write a class that implements an interface with a single InvokeAsync method that receives an EndpointFilterInvocationContext, and you have to dig values out by index or type from the context object. It’s not especially readable, and composing multiple filters is verbose.

Wolverine.HTTP’s middleware model is different. Middleware is just a class with Before and After methods that can take any of the same parameters the endpoint handler can take — including the request body, IoC services, HttpContext, and even values produced by earlier middleware. Wolverine generates the glue code at compile time (via source generation), so there’s no runtime reflection and no boxing.

Here’s a stopwatch middleware that times every request:

public class StopwatchMiddleware
{
    private readonly Stopwatch _stopwatch = new();

    public void Before() => _stopwatch.Start();

    public void Finally(ILogger logger, HttpContext context)
    {
        _stopwatch.Stop();
        logger.LogDebug(
            "Request for route {Route} ran in {Duration}ms",
            context.Request.Path,
            _stopwatch.ElapsedMilliseconds);
    }
}

A middleware method can also return IResult to conditionally stop the request. If the returned IResult is WolverineContinue.Result(), processing continues. Anything else — Results.Unauthorized(), Results.NotFound(), Results.Problem(...) — short-circuits the handler and writes the response immediately:

public class FakeAuthenticationMiddleware
{
    public static IResult Before(IAmAuthenticated message)
    {
        return message.Authenticated
            ? WolverineContinue.Result() // keep going
            : Results.Unauthorized();    // stop here
    }
}

This same pattern powers Wolverine’s built-in FluentValidation middleware — every validation failure becomes a ProblemDetails response with no boilerplate in the handler itself.

The IHttpPolicy interface lets you apply middleware conventions across many endpoints at once:

public class RequireApiKeyPolicy : IHttpPolicy
{
    public void Apply(IReadOnlyList&lt;HttpChain&gt; chains, GenerationRules rules, IServiceContainer container)
    {
        foreach (var chain in chains.Where(c => c.Method.Tags.Contains("api")))
        {
            chain.Middleware.Insert(0, new MethodCall(typeof(ApiKeyMiddleware), nameof(ApiKeyMiddleware.Before)));
        }
    }
}

Policies are registered during bootstrapping:

app.MapWolverineEndpoints(opts =>
{
    opts.AddPolicy&lt;RequireApiKeyPolicy&gt;();
});

ASP.NET Core Middleware: Everything Still Works

Wolverine.HTTP is built on top of ASP.NET Core, not around it. Every piece of standard ASP.NET Core middleware works exactly as you’d expect — Wolverine endpoints are just routes in the middleware pipeline.

Authentication and Authorization work via the standard [Authorize] and [AllowAnonymous] attributes:

public static class OrderEndpoints
{
    [WolverineGet("/orders")]
    [Authorize]
    public static Task&lt;IReadOnlyList&lt;Order&gt;&gt; GetAll(IQuerySession session)
        => session.Query&lt;Order&gt;().ToListAsync();

    [WolverinePost("/orders")]
    [Authorize(Roles = "admin")]
    public static (Order, OrderCreated) Post(CreateOrder command, IDocumentSession session)
    {
        // ...
    }
}

You can also require authorization on a set of routes at bootstrapping time:

app.MapWolverineEndpoints(opts =>
{
    opts.ConfigureEndpoints(chain =>
    {
        chain.Metadata.RequireAuthorization();
    });
});

Output caching via [OutputCache]:

[WolverineGet("/products/{id}")]
[OutputCache(Duration = 60)]
public static Task&lt;Product?&gt; Get(int id, IQuerySession session)
    => session.LoadAsync&lt;Product&gt;(id);

Rate limiting via [EnableRateLimiting]:

builder.Services.AddRateLimiter(options =>
{
    options.AddFixedWindowLimiter("per-user", opt =>
    {
        opt.PermitLimit = 100;
        opt.Window = TimeSpan.FromMinutes(1);
    });

    options.RejectionStatusCode = 429;
});

app.UseRateLimiter();

// In your endpoint class:
[WolverinePost("/api/orders")]
[EnableRateLimiting("per-user")]
public static (Order, OrderCreated) Post(CreateOrder command, IDocumentSession session)
{
    // ...
}

The UseRateLimiter() call in the pipeline hooks standard ASP.NET Core rate limiting middleware, and the [EnableRateLimiting] attribute wires up the policy exactly as it does for Minimal API or MVC — no Wolverine-specific configuration required.

OpenAPI / Swagger Support

Wolverine.HTTP integrates with Swashbuckle and the newer Microsoft.AspNetCore.OpenApi package. Endpoints are discovered as standard ASP.NET Core route metadata, so Swagger UI works out of the box. You can use [Tags], [ProducesResponseType], and [EndpointSummary] to enrich the generated spec:

[Tags("Orders")]
[WolverinePost("/api/orders")]
[ProducesResponseType&lt;Order&gt;(201)]
[ProducesResponseType(400)]
public static (CreationResponse&lt;Guid&gt;, OrderStarted) Post(CreateOrder command, IDocumentSession session)
{
    // ...
}

Summary

Wolverine.HTTP gives you a cleaner foundation for vertical slice architecture in .NET:

  • No Mediator library needed — Wolverine handles both HTTP routing and in-process dispatch in the same pipeline
  • Discoverability built in for vertical slices — which is an advantage over Minimal API + Mediator style “vertical slices”
  • Lower ceremony than MVC controllers — static classes, method injection, no base types
  • Built-in outbox — messages returned from endpoints commit atomically with the database transaction
  • Better middleware than IEndpointFilter — Before/After methods with full dependency injection and IResult for conditional short-circuiting
  • Full ASP.NET Core compatibility — authentication, authorization, rate limiting, output caching, and all other middleware work without changes

If you’re starting a new project or looking to reduce complexity in an existing one, Wolverine.HTTP is worth a close look.



Mapping the page tables into memory via the page tables


On the 80386 processor, there is a trick for mapping the page tables into memory: You set a slot in the top-level page directory to point to… the page directory itself. When you follow through this page directory entry, you end up back at the page directory, and the effect is that the process of mapping a linear address to a physical page ends one stop early.¹ You end up pointing not at the destination page, but at the page table that points at the destination page. From the point of view of the address space, it looks like all of the page tables have been mapped into memory. This makes it easier to edit page directory entries² because you can do it within the address space.

I learned about this trick from the developer in charge of the Windows 95 memory manager.³ He said that this technique was actually suggested by Intel itself. In the literature, it appears to be known as fractal page mapping.

Seeing as Intel itself suggested the use of this trick, it is hardly a coincidence that the page table and page directory entry formats are conducive to it. The trick carries over to the x86-64 page table structure, and my understanding is that it works for most other processor architectures as well.
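The arithmetic behind the trick is easy to demonstrate. Assuming the 80386’s two-level scheme (10-bit directory index, 10-bit table index, 4 KB pages) and an arbitrary illustrative self-map slot of 0x300, this sketch computes where the page directory, the page tables, and individual page table entries appear in the linear address space:

```python
# Address arithmetic for the 80386 self-mapping trick: with the page
# directory installed in its own slot S, every page table (and the
# directory itself) becomes visible at a computable linear address.
S = 0x300  # illustrative choice of the self-referencing directory slot

def page_table_va(dir_index: int) -> int:
    """Linear address where the page table for directory entry
    `dir_index` appears, thanks to the self-referencing slot."""
    return (S << 22) | (dir_index << 12)

def pte_va(addr: int) -> int:
    """Linear address of the 4-byte page table entry that maps `addr`."""
    dir_index = addr >> 22
    table_index = (addr >> 12) & 0x3FF
    return page_table_va(dir_index) | (table_index * 4)

def page_directory_va() -> int:
    """Following slot S twice lands on the page directory itself."""
    return (S << 22) | (S << 12)

print(hex(page_directory_va()))  # 0xc0300000
print(hex(pte_va(0x00401000)))   # 0xc0001004
```

With S = 0x300, the mapped page tables occupy 0xC0000000–0xC03FFFFF and the page directory shows up at 0xC0300000, so editing the PTE for any address is a single in-address-space write.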

¹ And if you access an address within that loopback page directory entry that itself corresponds to the loopback page directory entry, then you stop two steps early, allowing you to access the page directory entry.

² Or page table entries.

³ It appears that Windows NT uses the same trick. See slides 36 and 37 of Dave Probert’s 2008 presentation titled Architecture of the Windows Kernel.

The post Mapping the page tables into memory via the page tables appeared first on The Old New Thing.
