Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Like Vertical Slice Architecture? Meet Wolverine.Http!


Before you read any of this, just know that it’s perfectly possible to mix and match Wolverine.HTTP, MVC Core controllers, and Minimal API endpoints in the same application.

If you’ve built ASP.NET Core applications of any size, you’ve probably run into the same friction: MVC controllers that balloon with constructor-injected dependencies, or Minimal API handlers that accumulate scattered app.MapGet(...) calls across multiple files. And if you’ve reached for a Mediator library to impose some structure, you’ve added a layer of abstraction that — while familiar — brings its own ceremony and a seam that can make unit testing harder than it should be.

Wolverine.HTTP is a different model. It’s a first-class HTTP framework built on top of ASP.NET Core that’s designed from the ground up for vertical slice architecture, has built-in transactional outbox support, and delivers a middleware story that is arguably more powerful than IEndpointFilter. And it doesn’t need a separate “Mediator” library, because Wolverine HTTP endpoints naturally support a “Vertical Slice” style with fewer moving parts than the average “check out my vertical slice architecture template!” approach online.

Moreover, Wolverine.HTTP has first-class support for resilient messaging through Wolverine’s transactional outbox and asynchronous messaging. No other HTTP endpoint library in .NET offers such smooth integration.

What Is Vertical Slice Architecture?

The core idea is organizing code by feature rather than by technical layer. Instead of a Controllers/ folder, a Services/ folder, and a Repositories/ folder that all have to be navigated to understand one feature, you co-locate everything that belongs to a single use case: the request type, the handler, and any supporting types.

The payoff is locality. When a bug is filed against “create order”, you open one file. When a feature is deleted, you delete one file. There’s no hunting across layers.

Wolverine.HTTP is a natural fit for this style. A Wolverine HTTP endpoint is just a static class — no base class, no constructor injection, no framework coupling. The framework discovers it by scanning for [WolverineGet], [WolverinePost], [WolverinePut], [WolverineDelete], and [WolverinePatch] attributes.

And because of the world we live in now, I have to mention that there is already plenty of anecdotal evidence that AI-assisted coding works better with the “vertical slice” approach than it does against heavily layered approaches.

Getting Started

Install the NuGet package:

dotnet add package WolverineFx.Http

Wire it up in Program.cs:

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseWolverine();
builder.Services.AddWolverineHttp();

var app = builder.Build();
app.MapWolverineEndpoints();

return await app.RunJasperFxCommands(args);

A Complete Vertical Slice

Here’s what a full feature slice looks like with Wolverine.HTTP. Request type, response type, and handler all in one place:

// The request
public record CreateTodo(string Name);

// The response
public record TodoCreated(int Id);

// The handler — a plain static class, no base class required
public static class CreateTodoEndpoint
{
    [WolverinePost("/todoitems")]
    public static IResult Post(
        CreateTodo command,
        IDocumentSession session) // injected by Wolverine from the IoC container
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);
        return Results.Created($"/todoitems/{todo.Id}", todo);
    }
}

Compare that to what this would look like in MVC Core with a service layer and constructor injection. The Wolverine version is shorter, has no framework coupling in the handler method itself, and every dependency is explicit in the method signature. There’s no hidden state, and the method is trivially unit-testable in isolation.
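For contrast, here is a rough sketch of what the same feature tends to look like as an MVC controller with a service layer. Every type name here (ITodoService, TodoService, TodoController) is illustrative, not taken from any real template:

```csharp
// Hypothetical MVC equivalent of the slice above, for comparison only.
public interface ITodoService
{
    Task<Todo> CreateAsync(string name);
}

public class TodoService : ITodoService
{
    private readonly IDocumentSession _session;
    public TodoService(IDocumentSession session) => _session = session;

    public async Task<Todo> CreateAsync(string name)
    {
        var todo = new Todo { Name = name };
        _session.Store(todo);
        await _session.SaveChangesAsync();
        return todo;
    }
}

[ApiController]
[Route("todoitems")]
public class TodoController : ControllerBase
{
    private readonly ITodoService _service;
    public TodoController(ITodoService service) => _service = service;

    [HttpPost]
    public async Task<IActionResult> Post(CreateTodo command)
    {
        var todo = await _service.CreateAsync(command.Name);
        return Created($"/todoitems/{todo.Id}", todo);
    }
}
```

Three types, two constructors, and an interface whose main job is enabling mocks, versus one static method.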

For reading data, it’s even cleaner:

public static class TodoEndpoints
{
    [WolverineGet("/todoitems")]
    public static Task<IReadOnlyList<Todo>> Get(IQuerySession session)
        => session.Query<Todo>().ToListAsync();

    [WolverineGet("/todoitems/{id}")]
    public static Task<Todo?> GetTodo(int id, IQuerySession session, CancellationToken cancellation)
        => session.LoadAsync<Todo>(id, cancellation);

    [WolverineDelete("/todoitems/{id}")]
    public static void Delete(int id, IDocumentSession session)
        => session.Delete<Todo>(id);
}

No controller. No service interface. No repository abstraction. Just the feature.

No Separate Mediator Needed

One of the most common patterns in .NET vertical slice architecture is using a Mediator library like MediatR to dispatch commands from controllers to handlers. Wolverine makes this unnecessary — it handles both HTTP routing and in-process message dispatch with the same execution pipeline.

If you’re coming from MediatR, the key difference is that there’s no IRequest<T> base type to implement, no IRequestHandler<TRequest, TResponse> to wire up, and no _mediator.Send(command) call to thread through your controllers. The HTTP endpoint is the handler. When you also want to dispatch a message for async processing, you just return it from the method (more on that below).

See our converting from MediatR guide for a detailed side-by-side comparison.

If you’re coming from MVC Core controllers or Minimal API, we have migration guides for both.

The Outbox: The Feature That Changes Everything

Here is where Wolverine.HTTP really pulls ahead. In any event-driven architecture, HTTP endpoints frequently need to do two things atomically: save data to the database and publish a message or event. If you do these as two separate operations and something crashes between them, you’ve lost a message — or worse, written corrupted state.

The standard solution is a transactional outbox: write the message to the same database transaction as the data change, then have a background process deliver it reliably.

With plain IMessageBus in a Minimal API handler, you’re responsible for the outbox mechanics yourself. With Wolverine.HTTP, the outbox is automatic. Any message returned from an endpoint method is enrolled in the same transaction as the handler’s database work.

The simplest pattern uses tuple return values. Wolverine recognizes any message types in the return tuple and routes them through the outbox:

public static class CreateTodoEndpoint
{
    [WolverinePost("/todoitems")]
    public static (Todo todo, TodoCreated created) Post(
        CreateTodo command,
        IDocumentSession session)
    {
        var todo = new Todo { Name = command.Name };
        session.Store(todo);

        // Both the HTTP response (Todo) and the outbox message (TodoCreated)
        // are committed in the same transaction. No message is lost.
        return (todo, new TodoCreated(todo.Id));
    }
}

The Todo becomes the HTTP response body. The TodoCreated message goes into the outbox and is delivered durably after the transaction commits. The database write and the message write are atomic — no coordinator needed.

If you need to publish multiple messages, use OutgoingMessages:

[WolverinePost("/orders")]
public static (OrderCreated, OutgoingMessages) Post(CreateOrder command, IDocumentSession session)
{
    var order = new Order(command);
    session.Store(order);

    var messages = new OutgoingMessages
    {
        new OrderConfirmationEmail(order.CustomerId),
        new ReserveInventory(order.Items),
        new NotifyWarehouse(order.Id)
    };

    return (new OrderCreated(order.Id), messages);
}

All four database and message operations commit together. This is the kind of correctness that is genuinely difficult to achieve with raw IMessageBus calls in Minimal API, and it comes for free in Wolverine.HTTP.

Middleware: Better Than IEndpointFilter

ASP.NET Core Minimal API introduced IEndpointFilter as its extensibility hook — a way to run logic before and after an endpoint handler. It works, but it has a few rough edges: you write a class that implements an interface with a single InvokeAsync method that receives an EndpointFilterInvocationContext, and you have to dig values out by index or type from the context object. It’s not especially readable, and composing multiple filters is verbose.
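For reference, a minimal filter in that style might look like this (ValidateNameFilter is a hypothetical example, reusing the CreateTodo type from earlier; note the positional GetArgument<T>(0) lookup):

```csharp
// A minimal IEndpointFilter, shown for comparison. The filter has to know
// the handler's parameter order to pull the request body out by index.
public class ValidateNameFilter : IEndpointFilter
{
    public async ValueTask<object?> InvokeAsync(
        EndpointFilterInvocationContext context,
        EndpointFilterDelegate next)
    {
        // Positional, loosely typed access to the handler's arguments.
        var command = context.GetArgument<CreateTodo>(0);
        if (string.IsNullOrWhiteSpace(command.Name))
            return Results.Problem("Name is required", statusCode: 400);

        return await next(context);
    }
}

// Registered endpoint by endpoint:
// app.MapPost("/todoitems", Handler).AddEndpointFilter<ValidateNameFilter>();
```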

Wolverine.HTTP’s middleware model is different. Middleware is just a class with Before, After, and Finally methods that can take any of the same parameters the endpoint handler can take — including the request body, IoC services, HttpContext, and even values produced by earlier middleware. Wolverine generates the glue code at compile time (via source generation), so there’s no runtime reflection and no boxing.

Here’s a stopwatch middleware that times every request:

public class StopwatchMiddleware
{
    private readonly Stopwatch _stopwatch = new();

    public void Before() => _stopwatch.Start();

    public void Finally(ILogger logger, HttpContext context)
    {
        _stopwatch.Stop();
        logger.LogDebug(
            "Request for route {Route} ran in {Duration}ms",
            context.Request.Path,
            _stopwatch.ElapsedMilliseconds);
    }
}

A middleware method can also return IResult to conditionally stop the request. If the returned IResult is WolverineContinue.Result(), processing continues. Anything else — Results.Unauthorized(), Results.NotFound(), Results.Problem(...) — short-circuits the handler and writes the response immediately:

public class FakeAuthenticationMiddleware
{
    public static IResult Before(IAmAuthenticated message)
    {
        return message.Authenticated
            ? WolverineContinue.Result() // keep going
            : Results.Unauthorized();    // stop here
    }
}

This same pattern powers Wolverine’s built-in FluentValidation middleware — every validation failure becomes a ProblemDetails response with no boilerplate in the handler itself.
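Wiring that up is mostly just registering ordinary FluentValidation validators; the sketch below assumes the middleware-enabling method name from the Wolverine.HTTP docs, so double-check it against your version:

```csharp
// A FluentValidation validator for the CreateTodo command from earlier.
// With the validation middleware enabled, invalid requests never reach the
// handler; they come back as 400 ProblemDetails responses automatically.
public class CreateTodoValidator : AbstractValidator<CreateTodo>
{
    public CreateTodoValidator()
    {
        RuleFor(x => x.Name).NotEmpty().MaximumLength(200);
    }
}

// Enabled at bootstrapping (method name as documented; verify for your version):
// app.MapWolverineEndpoints(opts => opts.UseFluentValidationProblemDetailMiddleware());
```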

The IHttpPolicy interface lets you apply middleware conventions across many endpoints at once:

public class RequireApiKeyPolicy : IHttpPolicy
{
    public void Apply(IReadOnlyList<HttpChain> chains, GenerationRules rules, IServiceContainer container)
    {
        foreach (var chain in chains.Where(c => c.Method.Tags.Contains("api")))
        {
            chain.Middleware.Insert(0, new MethodCall(typeof(ApiKeyMiddleware), nameof(ApiKeyMiddleware.Before)));
        }
    }
}

Policies are registered during bootstrapping:

app.MapWolverineEndpoints(opts =>
{
    opts.AddPolicy<RequireApiKeyPolicy>();
});

ASP.NET Core Middleware: Everything Still Works

Wolverine.HTTP is built on top of ASP.NET Core, not around it. Every piece of standard ASP.NET Core middleware works exactly as you’d expect — Wolverine endpoints are just routes in the middleware pipeline.

Authentication and Authorization work via the standard [Authorize] and [AllowAnonymous] attributes:

public static class OrderEndpoints
{
    [WolverineGet("/orders")]
    [Authorize]
    public static Task<IReadOnlyList<Order>> GetAll(IQuerySession session)
        => session.Query<Order>().ToListAsync();

    [WolverinePost("/orders")]
    [Authorize(Roles = "admin")]
    public static (Order, OrderCreated) Post(CreateOrder command, IDocumentSession session)
    {
        // ...
    }
}

You can also require authorization on a set of routes at bootstrapping time:

app.MapWolverineEndpoints(opts =>
{
    opts.ConfigureEndpoints(chain =>
    {
        chain.Metadata.RequireAuthorization();
    });
});

Output caching via [OutputCache]:

[WolverineGet("/products/{id}")]
[OutputCache(Duration = 60)]
public static Task<Product?> Get(int id, IQuerySession session)
    => session.LoadAsync<Product>(id);

Rate limiting via [EnableRateLimiting]:

builder.Services.AddRateLimiter(options =>
{
    options.AddFixedWindowLimiter("per-user", opt =>
    {
        opt.PermitLimit = 100;
        opt.Window = TimeSpan.FromMinutes(1);
    });
    options.RejectionStatusCode = 429;
});

app.UseRateLimiter();

// In your endpoint class:
[WolverinePost("/api/orders")]
[EnableRateLimiting("per-user")]
public static (Order, OrderCreated) Post(CreateOrder command, IDocumentSession session)
{
    // ...
}

The UseRateLimiter() call in the pipeline hooks standard ASP.NET Core rate limiting middleware, and the [EnableRateLimiting] attribute wires up the policy exactly as it does for Minimal API or MVC — no Wolverine-specific configuration required.

OpenAPI / Swagger Support

Wolverine.HTTP integrates with Swashbuckle and the newer Microsoft.AspNetCore.OpenApi package. Endpoints are discovered as standard ASP.NET Core route metadata, so Swagger UI works out of the box. You can use [Tags], [ProducesResponseType], and [EndpointSummary] to enrich the generated spec:

[Tags("Orders")]
[WolverinePost("/api/orders")]
[ProducesResponseType<Order>(201)]
[ProducesResponseType(400)]
public static (CreationResponse<Guid>, OrderStarted) Post(CreateOrder command, IDocumentSession session)
{
    // ...
}

Summary

Wolverine.HTTP gives you a cleaner foundation for vertical slice architecture in .NET:

  • No Mediator library needed — Wolverine handles both HTTP routing and in-process dispatch in the same pipeline
  • Discoverability built in for vertical slices — which is an advantage over Minimal API + Mediator style “vertical slices”
  • Lower ceremony than MVC controllers — static classes, method injection, no base types
  • Built-in outbox — messages returned from endpoints commit atomically with the database transaction
  • Better middleware than IEndpointFilter — Before/After methods with full dependency injection and IResult for conditional short-circuiting
  • Full ASP.NET Core compatibility — authentication, authorization, rate limiting, output caching, and all other middleware work without changes

If you’re starting a new project or looking to reduce complexity in an existing one, Wolverine.HTTP is worth a close look.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Mapping the page tables into memory via the page tables


On the 80386 processor, there is a trick for mapping the page tables into memory: You set a slot in the top-level page directory to point to… the page directory itself. When you follow through this page directory entry, you end up back at the page directory, and the effect is that the process of mapping a linear address to a physical page ends one stop early.¹ You end up pointing not at the destination page, but at the page table that points at the destination page. From the point of view of the address space, it looks like all of the page tables have been mapped into memory. This makes it easier to edit page directory entries² because you can do it within the address space.

I learned about this trick from the developer in charge of the Windows 95 memory manager.³ He said that this technique was actually suggested by Intel itself. In the literature, it appears to be known as fractal page mapping.

Seeing as Intel itself suggested the use of this trick, it is hardly a coincidence that the page table and page directory entry formats are conducive to it. The trick carries over to the x86-64 page table structure, and my understanding is that it works for most other processor architectures as well.
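The arithmetic behind the trick is easy to check for yourself. On 32-bit x86 with 4 KB pages, putting the self-map in page-directory slot S makes the page table for directory index D show up at virtual address (S << 22) | (D << 12). A small sketch of that address calculation (the slot value 0x300 reproduces the classic 32-bit Windows layout, where the page tables appear at 0xC0000000 and the directory itself at 0xC0300000):

```csharp
using System;

// Where the self-map trick exposes paging structures in the 32-bit x86
// address space (10-bit directory index, 10-bit table index, 12-bit offset).

// Virtual address of the PTE that maps linear address v, given the
// self-map lives in page-directory slot selfSlot.
static uint PteAddress(uint v, uint selfSlot) =>
    (selfSlot << 22) | ((v >> 22) << 12) | (((v >> 12) & 0x3FFu) << 2);

// Virtual address of the PDE for v: the lookup runs through the self-map
// slot twice, stopping "two steps early" as in footnote 1.
static uint PdeAddress(uint v, uint selfSlot) =>
    (selfSlot << 22) | (selfSlot << 12) | ((v >> 22) << 2);

Console.WriteLine($"PTE at 0x{PteAddress(0x00401000u, 0x300u):X8}"); // 0xC0001004
Console.WriteLine($"PDE at 0x{PdeAddress(0x00401000u, 0x300u):X8}"); // 0xC0300004
```

From the processor's point of view these are ordinary linear addresses; the recursion happens entirely inside the page-walk hardware.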

¹ And if you access an address within that loopback page directory entry that itself corresponds to the loopback page directory entry, then you stop two steps early, allowing you to access the page directory entry.

² Or page table entries.

³ It appears that Windows NT uses the same trick. See slides 36 and 37 of Dave Probert’s 2008 presentation titled Architecture of the Windows Kernel.

The post Mapping the page tables into memory via the page tables appeared first on The Old New Thing.


Things That Caught My Attention Last Week - April 19



Security

How exposed is your code? Find out in minutes—for free by Dorothy Pearce, Eric Tooley

Software Architecture

Yoda Principle for better integrations by Oskar Dudycz

Evaluating CRON and RRule expressions in .NET by Gérald Barré

Why I Switched to Primary Constructors for DI in C# by Milan Jovanović

GitHub

Bringing more transparency to GitHub’s status page by Jakub Oleksy

.NET

It's Time for a Visual Studio Upgrade by Mark Downie

Docker Volume Location on Windows by Joseph Guadagno

From AI to .NET: 20 VS Live! Las Vegas Sessions You Can Watch Now by Jim Harrer

Pin Clustering in .NET MAUI Maps by David Ortinau

Critter Stack Sample Projects and Our Curated AI Skills by Jeremy D. Miller

Stop Hunting Bugs: Meet the New Visual Studio Debugger Agent Workflow by Harshada Hole

.NET 11 Preview 3 is now available! by .NET Team

htmxRazor v2.0.0: Platform and DX by Chris Woodruff

Suppressing Roslyn Analyzer Warnings Programmatically using DiagnosticSuppressor by Gérald Barré

REST/APIs

How DO you Paywall an API? by Alexander Karan

An API Consumer Interoperability Mindset by Kin Lane

Azure

One-click security scanning and org-wide alert triage come to Advanced Security by Laura Jiang

Stop juggling package managers—just run azd update by Kristen Womack

Software Development

Why is there a long delay between a thread exiting and the WaitForSingleObject returning? by Raymond Chen

Testing Needs a Seam, Not an Interface by Derek Comartin

How Software Developers Fail by Ardalis (Steve Smith)

AI

Optimizing AI Agents with Progressive Disclosure by Ardalis (Steve Smith)

Azure MCP tools now ship built into Visual Studio 2022 — no extension required by Yun Jung Choi

Running AI agents with customized templates using docker sandbox by Andrew Lock

A useful definition at the right time by Mike Amundsen


7 tips to optimize Azure Cosmos DB costs for AI and agentic workloads


AI apps and agentic workloads expose inefficiencies in your data layer faster than any previous generation of apps. You’re storing embeddings, serving low-latency retrieval, handling bursty traffic from chat and orchestration, and often operating across regions. Done right, Azure Cosmos DB can support these patterns with high performance and cost controls built in. Done wrong, it is easy to over-provision or pay for inefficient queries.

Insights from a recently published Azure Cosmos DB cost savings white paper, combined with guidance from Microsoft leader John Savill and real-world feedback from Azure Cosmos DB customers, reveal a clear pattern. The teams that succeed financially do so by aligning Azure Cosmos DB design decisions to workload behavior early, then refining those decisions as applications scale.

Below are seven practical, field-tested tips to help you scale AI applications on Azure Cosmos DB while keeping costs under control.

Tip 1: Start free for dev/test so you’re not burning budget before launch

A surprisingly common cost trap is paying for non-production environments longer than necessary. Consider using two levers with Azure Cosmos DB: Free Tier and the Emulator. Each subscription gets a set amount of throughput and storage free each month, and you can develop locally with the emulator at zero cloud cost.

Azure Cosmos DB reviewers on PeerSpot frequently mention ease of setup and cost management as part of the value story. A senior director of product management at Sitecore states that “the search, configuration, and ease of cost management have been a really great experience…Azure Cosmos DB has reduced our total cost of ownership significantly, allowing us to sell our product at extremely competitive pricing.”

In addition to easy setup and management, customers rely on Azure Cosmos DB to eliminate traditional database friction. Users benefit from:

  • No schema management – flexible JSON documents, with the schema living in application code
  • Automatic indexing – no need for manual tuning
  • Rich SDKs across all major languages, including Python, Node.js, Go, .NET, and Java
  • Serverless and autoscale – no need to manage capacity

Tip 2: Pick the right throughput mode early, then change it as needs evolve

Understanding Azure Cosmos DB service options and throughput modes is foundational, with free SKU, provisioned, and autoscale being distinct choices.

The eBook translates this into clear guidance:

Novo Nordisk avoided paying for unused capacity, using serverless with a redesigned data model. Simon Kofod, Lead Software Developer at Novo Nordisk, said, “We went from a database that would set us back $240 per month to Azure Cosmos DB that costs less than a buck per month. And we can multiply this saving by four because we have four environments.”

Tip 3: If your AI traffic is spiky, autoscale is often the highest-impact cost lever

AI app demands tend to be uneven: launches, feature rollouts, prompt changes, batch jobs, or simply time-of-day usage can create bursts. Autoscale allows Azure Cosmos DB to scale up and down automatically within a defined range, helping you avoid overprovisioning for peak capacity that you rarely use. For workloads with steady, predictable usage, manual provisioned throughput or reserved capacity may deliver better long-term efficiency.

Kinectify’s platform sees “very spiky load patterns,” and they describe their goal clearly: scale up fast, scale down when quiet, optimize cost.

Michael Calvin, CTO of Kinectify, said, “We have large volumes of data coming in and very spiky load patterns. So we needed a solution that could scale quickly and also scale down when we weren’t receiving those traffic patterns so we can optimize cost… auto-scaling has been invaluable for optimizing both cost and performance on our platform daily.”

Kinectify also implemented a tenant-based logical partition and paired it with autoscale so they could share throughput across tenants while keeping the platform efficient.
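In the .NET SDK, autoscale is just a throughput option at container (or database) creation time. A sketch, assuming the Microsoft.Azure.Cosmos v3 SDK; the endpoint, key, and names are placeholders:

```csharp
using Microsoft.Azure.Cosmos;

// Sketch: a container that autoscales between 10% and 100% of the stated
// maximum (here 1,000-10,000 RU/s). Endpoint, key, and names are placeholders.
var client = new CosmosClient("<endpoint>", "<key>");
var databaseResponse = await client.CreateDatabaseIfNotExistsAsync("appdb");

var container = await databaseResponse.Database.CreateContainerIfNotExistsAsync(
    new ContainerProperties(id: "events", partitionKeyPath: "/tenantId"),
    ThroughputProperties.CreateAutoscaleThroughput(10000));
```

The autoscale floor is 10% of the configured maximum, so the range you pick is itself a cost decision.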

Tip 4: Treat partitioning as a cost decision, not just a scale decision

Partition strategy determines whether your queries stay efficient or fan out across physical partitions. John Savill explicitly calls out partition key importance and high cardinality.

Veeam’s implementation connects cost directly to partition-aware architecture and search scope: “What Azure Cosmos DB does for us is deliver low operational overhead with infinite scaling capability… We can narrow down our search to a very limited space within physical partitions, and this saves costs and decreases the latency,” (Zack Rossman, Staff Software Engineer, Veeam).

Veeam used autoscale plus a hierarchical partitioning strategy to distribute billions of items without hot spots, keeping queries efficient at massive scale.

Shahid Syed, director of technology at Unite Digital, touts Azure Cosmos DB partition-based scaling as having “significantly reduced costs of over $25,000 per month with minimal effort.”
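A hierarchical partition key in the spirit of Veeam’s design can be declared when the container is created. A sketch, assuming a recent Microsoft.Azure.Cosmos SDK with hierarchical partition key support; the paths and names are illustrative:

```csharp
using Microsoft.Azure.Cosmos;

// Sketch: a two-level hierarchical partition key. Queries that supply
// /tenantId (or /tenantId plus /deviceId) are routed to a narrow slice
// of physical partitions instead of fanning out across all of them.
// Assumes an existing CosmosClient named client.
Database database = client.GetDatabase("appdb");

var containerProperties = new ContainerProperties(
    id: "telemetry",
    partitionKeyPaths: new List<string> { "/tenantId", "/deviceId" });

var container = await database.CreateContainerIfNotExistsAsync(containerProperties);
```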

Tip 5: Optimize RU consumption by aligning data models, queries, and indexing with access patterns

Request Units (RUs) are the currency of Azure Cosmos DB. Every read, write, and query consumes RUs, and inefficient operations increase costs even if provisioned throughput looks reasonable on paper.

John Savill emphasizes the importance of understanding which operations consume the most RUs and why. AI applications often rely on complex queries, vector searches, or high-volume reads and writes, all of which can drive RU usage if not carefully designed. Reducing document size where possible can lower storage costs and reduce RU consumption for reads and writes. In some scenarios, separating large embeddings from frequently accessed metadata can lead to more efficient access patterns.

The goal is not to prematurely optimize, but to ensure that document design reflects how data is actually used by inference paths, retrieval workflows, and downstream systems.

Novo Nordisk found a concrete modeling change reduced consumption and improved performance: instead of storing tasks as separate documents, they redesigned so an entire checklist and tasks lived in a single document.

RU consumption is not driven by queries alone – it is a combined effect of data modeling, indexing strategy, and how consistently your access patterns align with your partitioning model.
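A cheap way to keep that alignment honest is to watch the RU charge the service reports on every response. In the .NET SDK it is surfaced directly; in this sketch the container, Order type, and query are illustrative:

```csharp
using Microsoft.Azure.Cosmos;

// Sketch: every SDK response reports the RUs the operation actually cost.
// Assumes an existing CosmosClient named client and an Order document type.
Container orders = client.GetContainer("appdb", "orders");

ItemResponse<Order> write = await orders.CreateItemAsync(
    order, new PartitionKey(order.TenantId));
Console.WriteLine($"Write cost: {write.RequestCharge} RU");

// Queries report a per-page charge as you enumerate results.
using FeedIterator<Order> query = orders.GetItemQueryIterator<Order>(
    "SELECT * FROM c WHERE c.tenantId = 'contoso'");
while (query.HasMoreResults)
{
    FeedResponse<Order> page = await query.ReadNextAsync();
    Console.WriteLine($"Query page cost: {page.RequestCharge} RU");
}
```

Logging these numbers per operation makes cross-partition fan-outs and oversized documents visible long before they show up on the bill.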

Tip 6: Avoid paying for extra services you don’t need in your AI retrieval pipeline

Running multiple databases for related workloads or vector search often introduces unnecessary cost and complexity: duplicate throughput, fragmented data access, and higher operational overhead. By consolidating related datasets into a single Azure Cosmos DB account, teams can significantly improve cost efficiency without sacrificing scale or performance.

Lead Cloud Architect at Solliance Joel Hullen says, “[H]aving the vector store in Microsoft Azure Cosmos DB makes a lot of sense because the vector store lives in line with the data. It is in the same workspace and the same region. We do not have to worry about ingress and egress charges because with it being co-located with our data, we are going to have better performance.”

Consolidation allows you to:

  • Share throughput (RU/s) across workloads instead of over-provisioning each database independently
  • Reduce management and monitoring overhead by operating within a unified data plane
  • Enable simpler querying and data access patterns when applications need to reason across datasets
  • Eliminate duplicate infrastructure costs that add up quickly as systems scale

This pattern is particularly effective for SaaS platforms, internal line-of-business apps, and analytics-heavy workloads where datasets are highly related and benefit from centralized access.

Tip 7: Keep multi-region setups aligned to actual traffic patterns

Multi-region is a superpower, but it can quickly become expensive if regions are added by default rather than based on actual user demand. To keep costs under control, Azure Cosmos DB multi‑region setups should be intentionally aligned to where traffic is truly coming from.

A cost‑aware multi‑region strategy includes:

  • Adding regions only where there is sustained read or write traffic, rather than pre‑provisioning global coverage
  • Regularly reviewing per‑region usage metrics to identify regions that are underutilized
  • Using a single write region with selective read regions when user activity is geographically skewed
  • Removing or consolidating regions as traffic patterns change over time

Check out how provisioned throughput multiplies across regions and how to reason about the multi-region cost model.

Scale AI without the surprise bill

Azure Cosmos DB provides the foundation required to build low-latency AI apps that reach any scale. The insights above underscore an important point. Cost efficiency comes from intentional design, not shortcuts.

By choosing the right throughput model, optimizing RU usage, designing effective partitions, and having setups align to how apps behave in the real world, teams can support demanding AI workloads without sacrificing financial predictability.

The result is an AI-ready data platform that grows with your ambitions and stays aligned with your budget.

 

About Azure Cosmos DB

Azure Cosmos DB is a fully managed and serverless NoSQL and vector database for modern app development, including AI applications. With its SLA-backed speed and availability as well as instant dynamic scalability, it is ideal for real-time NoSQL and MongoDB applications that require high performance and distributed computing over massive volumes of NoSQL and vector data.

To stay in the loop on Azure Cosmos DB updates, follow us on X, YouTube, and LinkedIn. Join the discussion with other developers on the #nosql channel on the Microsoft Open Source Discord.

The post 7 tips to optimize Azure Cosmos DB costs for AI and agentic workloads appeared first on Azure Cosmos DB Blog.


2.7.2: Add admin protection error message for shadow admin scenarios (#40170)

  • Add admin protection error message for shadow admin scenarios

When Windows Admin Protection is enabled, the elevated process runs as a
shadow admin with a different SID, so distributions registered under the
real user are not visible. Surface an informational message in two cases:

  1. Launching a distribution by name that is not found (WSL_E_DISTRO_NOT_FOUND)
  2. Listing distributions when none are registered (WSL_E_DEFAULT_DISTRO_NOT_FOUND)
  • formatting

  • Show admin protection message for non-elevated users too

When Admin Protection creates a shadow admin, distros registered under
the real user are invisible to the shadow admin and vice versa. Remove
the elevation check so the informational message appears for both
elevated and non-elevated callers.



Co-authored-by: Ben Hillis <benhill@ntdev.microsoft.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>


DIY To Power Our Planet: An (Mini) Earth Day Field Guide


Earth Day 2026 isn’t about lofty pledges or abstract policy—it’s about what you can build, hack, repair, and share today. The theme, “Our Power, Our Planet,” shares some key tenets of Maker culture, elevating tools over talk, prototypes over promises, and communities over complacency. It’s a distillation and an exhortation to do what we can, […]

The post DIY To Power Our Planet: An (Mini) Earth Day Field Guide appeared first on Make: DIY Projects and Ideas for Makers.
