Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Process Explorer v17.11

This update to Process Explorer, an advanced process, DLL, and handle viewing utility, includes stability fixes.
 
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Escaping the Fork: How Meta Modernized WebRTC Across 50+ Use Cases

  • At Meta, WebRTC powers real-time audio and video across various platforms. But forking a large open-source project like WebRTC within our monorepo presents unique challenges – over time, an internal fork can drift behind upstream, cutting itself off from community upgrades.
  • We’re sharing how we escaped this “forking trap” – from building a dual-stack architecture that enabled safe A/B testing across 50+ use cases, to the workflows that now keep us continuously upgraded with upstream.
  • This approach improved performance, binary size, and security – and we continue to use it today to A/B test each new upstream release before rolling it out.

At Meta, real-time communication (RTC) powers various services, from global Messenger and Instagram video chats to low-latency Cloud Gaming and immersive VR casting on Meta Quest. To meet the performance demands of billions of users, we spent years developing a specialized, high-performance variant of the open-source WebRTC library.

Permanently forking a big open-source project is a common industry trap. It starts with good intentions: You need a specific internal optimization or a quick bug fix. But over time, as the upstream project evolves and your internal features accumulate, the cost of merging in external commits can become prohibitive.

Recently, we officially concluded a massive multiyear migration to break this cycle. We successfully moved over 50 use cases from a divergent WebRTC fork to a modular architecture built on top of the latest upstream version – using it as a skeleton while injecting our own proprietary implementations of key components.

This article details how we engineered a solution to solve the “forking trap,” allowing us to build two versions of WebRTC simultaneously within a single library for the sake of A/B testing, while living in a monorepo environment, with continuous upgrade cycles of the library that’s being tested.

The Challenge: The Monorepo and the Static Linker

Upgrading a library like WebRTC is risky, especially while serving billions of users, because regressions are hard to roll back. The variety of devices and environments we run in also rules out a one-time, all-at-once upgrade, which could break some users’ experiences.

To mitigate this, we prioritized A/B testing: running the legacy version of WebRTC alongside the new upstream version (with clean patches and our features applied) in the same app, while dynamically switching users between them to verify the new version.

Due to application build graph and size constraints, we also prioritized finding a solution to statically link two WebRTC versions. However, this violates the C++ linker One Definition Rule (ODR), causing thousands of symbol collisions, so we turned to finding a way to make two versions of the same library coexist in the same address space.

Furthermore, Meta uses a monorepo, and we didn’t want to repeat this migration with every upgrade. This motivated us to find a way to maintain custom patches for open-source projects in a monorepo environment, while still being able to pull new versions from upstream and reapply the patches each time.

This led us to focus on solving two challenges:

  1. We needed A/B testing capability. Due to application constraints, that meant building two copies of WebRTC into the same library.
  2. With no feature branches in monorepo, how do we track patches and rebase them? Other libwebrtc-based OSS projects usually do this by applying a set of stored patch files sequentially on top of the clean repo on each library upgrade. Due to scalability concerns, we explored more nuanced options.

Solution 1: The Shim Layer and Dual-Stack Architecture

To address the A/B testing capability, we chose to build two copies of WebRTC within the same app. However, doing this statically within the same overarching call orchestration library creates unique challenges. To tackle this, we built a shim layer between the application layer and WebRTC. It is a proxy library that sits between our application code and the underlying WebRTC implementations. Instead of the app calling WebRTC directly, it calls the shim API. The shim exposes a single, unified, version-agnostic API.

The shim layer holds a “flavor” configuration and dispatches each call to either the legacy or latest WebRTC implementation at runtime. This approach – shimming at the lowest possible layer – avoids a significant binary size regression that duplicating the higher-layer call orchestration library would have caused. Duplication would have resulted in an uncompressed size increase of approximately 38 MB, whereas our solution added only about 5 MB – an 87% reduction.

Next, we’ll look at the hurdles introduced by this dual-stack architecture and how we resolved them. 

Solving Symbol Collisions

Statically linking two copies of WebRTC into a single binary produces thousands of duplicate symbol errors.

In order to ensure every symbol in each flavor is unique, we leveraged automated renamespacing: We built scripts that systematically rewrite every C++ namespace in a given WebRTC version, so the webrtc:: namespace in the latest upstream copy becomes webrtc_latest::, while the legacy copy becomes webrtc_legacy::. This rename was applied to every external namespace in the library.
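Conceptually, the rename turns an ODR violation into two independent symbol sets. A minimal sketch of the effect (the class and method names here are hypothetical stand-ins, not actual WebRTC code):

```cpp
#include <string>

// Hypothetical stand-ins: after the rename scripts run, the same class
// definition exists under two distinct namespaces, so the linker sees
// two unrelated symbols instead of an ODR violation.
namespace webrtc_legacy {
struct PeerConnection {
  std::string Version() const { return "legacy"; }
};
}  // namespace webrtc_legacy

namespace webrtc_latest {
struct PeerConnection {
  std::string Version() const { return "latest"; }
};
}  // namespace webrtc_latest

// Both copies can now be linked into one binary and used side by side.
inline std::string Versions() {
  webrtc_legacy::PeerConnection a;
  webrtc_latest::PeerConnection b;
  return a.Version() + "+" + b.Version();
}
```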

But not everything in WebRTC lives in a namespace – global C functions, free variables, and classes that were left outside namespaces intentionally or accidentally also collide.

For those, we moved what we could into namespaces and manipulated the symbols of the rest (like global C functions) with flavor-specific identifiers.

Macros and preprocessor flags presented a subtler problem. Macros like RTC_CHECK and RTC_LOG can be used outside of WebRTC in wrapper libraries, so including both versions’ headers in the same translation unit triggers redefinition errors. 

We addressed this through a combination of strategies: 

  1. Removing spurious includes. 
  2. Renaming rarely-used macros. 
  3. Sharing internal WebRTC modules across versions where possible, like rtc_base. This last approach had the added benefit of reducing both binary size and the surface area of code that needed shimming.

Backward Compatibility

Renamespacing every symbol in WebRTC would break every external call site, so our focus was to keep existing code working without disruption. Note that some call sites are built against a single, constant WebRTC flavor rather than dual-stack.

Our initial approach was to forward-declare every used symbol from the new namespace and wire it to the old one. This worked, but produced a large fragile header file that required a high level of maintenance. 

We iterated to a better solution: bulk namespace imports using C++ using declarations. By importing an entire flavor namespace into the familiar webrtc:: namespace, we achieved a concise declaration header where new symbols are handled automatically, with no binary size implications since these are pure compiler directives. External engineers continue writing code exactly as before – the wiring happens in parallel, where we migrate only external call sites we care about.
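A minimal sketch of the bulk-import idea, with hypothetical type names standing in for real WebRTC symbols:

```cpp
#include <string>

// Hypothetical flavor namespace produced by the rename step.
namespace webrtc_latest {
struct VideoTrack {
  std::string Kind() const { return "video"; }
};
}  // namespace webrtc_latest

// Bulk import: every symbol in webrtc_latest:: becomes visible as
// webrtc::, so unmigrated call sites keep compiling unchanged. This is
// a pure compiler directive with no binary-size cost.
namespace webrtc {
using namespace webrtc_latest;
}  // namespace webrtc

// An existing call site, written exactly as before the rename:
inline std::string KindOfTrack() {
  webrtc::VideoTrack track;
  return track.Kind();
}
```

Because the import is a using-directive rather than per-symbol forwarding, new upstream symbols are picked up automatically with no header maintenance.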

Flavoring: Runtime Version Dispatch

With the shim layer wrapping both WebRTC versions, the next question was: How do we dispatch to the correct version at runtime? Each adapter and converter needs to instantiate the right underlying object – webrtc_legacy:: or webrtc_latest::, based on a global configuration flag.

We addressed this with a template-based helper library. Shared logic (which constitutes a large portion of the adapter code) is written once. Version-specific behavior is expressed through C++ template specializations. This keeps the code DRY while supporting backward compatibility with single-flavor builds during the transition period. A global flavor enum, set early in each app’s startup sequence, determines which flavor to activate.
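The dispatch pattern might look roughly like this sketch; the flavor enum, factory names, and stub classes are all hypothetical stand-ins for illustration:

```cpp
#include <string>

// Hypothetical stand-ins for the two renamed copies.
namespace webrtc_legacy { struct Foo { std::string Name() { return "legacy"; } }; }
namespace webrtc_latest { struct Foo { std::string Name() { return "latest"; } }; }

enum class Flavor { kLegacy, kLatest };

// Global flavor flag, set once during each app's startup sequence.
inline Flavor g_flavor = Flavor::kLatest;

// Version-specific behavior lives in template specializations; shared
// adapter logic (not shown) is written once against this interface.
template <Flavor F> struct FooFactory;

template <> struct FooFactory<Flavor::kLegacy> {
  static std::string Create() { return webrtc_legacy::Foo{}.Name(); }
};
template <> struct FooFactory<Flavor::kLatest> {
  static std::string Create() { return webrtc_latest::Foo{}.Name(); }
};

// Runtime dispatch on the global flavor.
inline std::string CreateFoo() {
  return g_flavor == Flavor::kLegacy ? FooFactory<Flavor::kLegacy>::Create()
                                     : FooFactory<Flavor::kLatest>::Create();
}
```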

We use directional adapters as intermediary objects that implement the unified API and dispatch to the underlying WebRTC object, or vice versa. We use directional converters as utility functions to translate structs and enums between the shim and WebRTC type systems.

Figure: Left, adapters used to expose internal WebRTC classes to external callers; right, adapters used to inject custom components into WebRTC.

Shim Generation

The shim layer itself required adapters and converters. With a large number of objects to shim across dozens of APIs – each requiring an abstract API definition, adapter and converter implementations, and unit tests – the estimated manual effort was huge!

We turned to automation. Using abstract syntax tree (AST) parsing, we built a code generation system that produces baseline shim code for classes, structs, enums, and constants. The generated code is fully unit-tested and easy to extend. This increased our velocity from one shim per day to three or four per day while reducing the risk of human error. For simple shims where the API is identical across versions, the generated code required close to zero manual intervention. For more complex cases – API discrepancies between versions, factory patterns, static methods, raw pointer semantics, and object ownership transfers – engineers refined the generated baseline.

Wiring and Building Dual-Stack Apps

With the shim layer in place, we began the painstaking work of rewiring all application references from direct WebRTC types to their shim equivalents. For example, webrtc::Foo became webrtc_shim::Foo. This introduced object ownership complexities and the potential for subtle bugs around null handling and memory management. We mitigated this through comprehensive unit testing that replicated problematic scenarios of ownership transfer and object lifetime, supplemented by end-to-end testing for particularly risky diffs.

We then worked iteratively toward building full apps in dual-stack mode, starting with small targets and working up. Each iteration surfaced new issues: missing shims, incorrectly flavored objects, and new macro or symbol collisions. 

Some internal components that were injected into WebRTC from outside posed a particular challenge due to their deep dependencies on WebRTC internals. Since shimming these components would mean proxying WebRTC against itself, we instead “duplicated” them using C++ macro and Buck build machinery – dynamically changing namespaces at build time, duplicating the high-level build target, and exposing symbols for both flavors through a single header.
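The build-time duplication can be approximated with a macro sketch like the following; the macro, component, and namespace names are hypothetical, and the real system relies on Buck build machinery rather than a single file:

```cpp
#include <string>

// Hypothetical sketch: the component source is written against a
// namespace parameter; the build duplicates the high-level target and
// instantiates it once per flavor, exposing both through one header.
#define DEFINE_ECHO_COMPONENT(NS, LABEL)                        \
  namespace NS {                                                \
  struct EchoController {                                       \
    std::string Flavor() const { return LABEL; }                \
  };                                                            \
  }

DEFINE_ECHO_COMPONENT(webrtc_legacy, "legacy")  // legacy build target
DEFINE_ECHO_COMPONENT(webrtc_latest, "latest")  // latest build target

// A single header can now expose symbols for both flavors.
inline std::string FlavorOf(bool latest) {
  return latest ? webrtc_latest::EchoController{}.Flavor()
                : webrtc_legacy::EchoController{}.Flavor();
}
```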

Once finished, we had our internal app, as well as some external applications, all building and running audio and video calls in dual-stack mode for both legacy and latest flavors.  

Over 10,000 lines of shim code were added, and hundreds of thousands of lines were modified across thousands of files. Despite the scope, careful testing and review meant no major issues surfaced.

Using this approach, we were able to A/B test the legacy WebRTC release against the latest one, app-by-app, mitigate regressions, ship, and delete the legacy code. Today, the shim approach is used in some applications so we can continuously upgrade the internal WebRTC code with the latest upstream updates.

Solution 2: The Feature Branches

Since we use a monorepo without widespread support for branches, we sought a way to track patches over time that could be continuously rebased on top of upstream. Our requirement was that each patch have a clearly delineated purpose and an owning team.

We had two choices here: We could track patch files checked into source control and reapply them one by one in the correct order, or we could track patches in a separate repository that supported branching.

In the end, we chose to track feature branches in a separate Git repository. One reason was to establish a pipeline that makes it easy to submit feature branches and fixes upstream.

By basing them on top of the libwebrtc Git repo, we could easily reuse existing upstream Chromium tools for building, testing, and submitting (`gn`, `gclient`, `git cl`, and more).

For each upstream Chromium release (such as M143, which has tag 7499 in Git), we create a “base/7499” branch. Then, for each of our patches (e.g., “debug-tools”) we create a “debug-tools/7499” branch on top of the base/7499 commit. During a version upgrade, we merge all feature branches forward: debug-tools/7499 gets merged into debug-tools/7559, hw-av1-fixes/7499 into hw-av1-fixes/7559, and so on.

Once all features are merged forward with resolved conflicts and working builds + tests, we merge all the feature branches sequentially together to create the release candidate branch r7559.

Some nice benefits from this approach are that it is highly parallelizable if there are many branches, it automatically preserves all Git history/context, and it is well-suited for future improvements in LLM-driven auto-resolution of merge conflicts. Additionally, the feature branches make it easy to submit the branch as a whole as an upstream contribution into OSS.

The Result: Continuous Upgrades

This architecture allowed us to ship a binary containing both the old and new WebRTC stacks. We launched webrtc/latest on version M120 and have since progressed to M145. Instead of being years behind, we now stay current with the latest stable Chromium releases, ingesting upstream upgrades immediately.

Key Engineering Wins

  • Performance: We saw CPU usage drop by up to 10% and crash rates improve by up to 3% across major apps.
  • Binary Size: The new upstream version is more efficient, resulting in a 100-200 KB (compressed) size reduction depending on the app.
  • Security: We eliminated deprecated libraries (like usrsctp) and fixed security vulnerabilities present in the legacy stack.
  • All the above drove observable user engagement improvements while running on a modern stack.

This project proves that even in a complex monorepo environment with various constraints, it is possible to modernize technical debt without a complete rewrite. The shim layer with dual-stack approach offers a blueprint for any organization looking to escape the forking trap.

Future Work: AI-Driven Maintenance

With the migration complete, we are entering a new era of maintenance. While we are now “living at head,” we still apply internal patches on top of upstream. To manage this efficiently, we are leveraging tools to automate our workflows:

  1. Build Health: We are developing agents to automatically fix build errors in our Git branches.
  2. Conflict Resolution: When rebasing our patches on new WebRTC releases, we encounter merge conflicts. We are training AI agents to resolve the majority of these conflicts automatically, leaving only the most complex architectural changes for human engineers.

Acknowledgements

This work was accomplished by a small team of engineers who recognized the value of this strategic project and dove in head-first despite its complexity. They brought creative ideas and solutions, did the heavy lifting, and ultimately drove the project to completion in the face of unexpected blockers and unique challenges along the way: Dor Hen, Guy Hershenbaum, Jared Siskin, Liad Rubin, Tal Benesh, and Yosef Twaik.

The post Escaping the Fork: How Meta Modernized WebRTC Across 50+ Use Cases appeared first on Engineering at Meta.


Webinar – OSS Power-Ups: XenoAtom.Terminal.UI


Join us Thursday, April 16, 2026, 15:00 – 16:30 UTC (check other timezones) for our free live webinar, OSS Power-Ups: XenoAtom.Terminal.UI, with Alexandre Mutel. This is the fifteenth episode of our OSS Power-Ups series, where we put a spotlight on open-source .NET projects.

Register now and get a reminder, or join on YouTube

What does it take to build a terminal UI that feels modern – and keeps it maintainable as it grows? In this talk, I’ll share the behind-the-scenes journey of creating XenoAtom.Terminal.UI, focusing on the design choices that enabled a reactive, binding-first retained model, a consistent layout pipeline, and a foundation that scaled to 60+ controls without turning into framework spaghetti.

I’ll also show how AI coding agents accelerated key parts of the work – from API exploration to implementation, refactoring, and tests – what worked, what didn’t, and the practical guardrails that kept the project shippable. Finally, I’ll connect these lessons to how I’m evolving my broader OSS portfolio: what’s next, how I choose projects, and how AI is changing the way I build open source.

Register for the webinar

You can attend Alexandre Mutel’s webinar on YouTube, or register here to get a reminder closer to the webinar.

About the presenter:

Alexandre Mutel

Alexandre Mutel is VP of Engineering at DataGalaxy, driving innovation in the data governance space. With 20+ years in .NET, he’s known for widely used open-source projects like SharpDX, Markdig, Scriban, and the profiler Ultra. He enjoys pushing .NET into unexpected territory – from high-performance tooling to retro experiments like building a .NET framework for the Commodore 64.

You can follow Alexandre on Mastodon, BlueSky, LinkedIn, and GitHub.


Simplified Authentication with Better Auth


We’ll build a complete email and password authentication system with session management to see how Better Auth works. Users will be able to sign up, log in and stay authenticated across page refreshes. We’ll use Next.js, Drizzle ORM and SQLite.

Authentication should be straightforward, but in practice, it takes time. You’re dealing with sessions, cookies, password hashing, OAuth redirects and email verification, and each piece has its own quirks and scattered documentation. The whole time, you may also be asking yourself if you’re doing this securely. One mistake exposes your users.

Most existing tools don’t help much. NextAuth is solid if you’re using Next.js, but switching frameworks means starting over. Building from scratch gives you control at the cost of maintaining every security detail yourself. Clerk and Auth0 work well until you reach their pricing tiers or require a feature they don’t support. Passport.js works, but it hasn’t aged well—there’s callback hell and endless boilerplate.

Better Auth takes a different approach. It’s lightweight, works with any framework (Next.js, Remix, Svelte, etc.), and comes with TypeScript baked in. You get OAuth, 2FA and password resets out of the box. You can also customize when you need to, or just use the defaults.

In this guide, we’ll build a complete email and password authentication system with session management to see how Better Auth works. Users will be able to sign up, log in and stay authenticated across page refreshes. We’ll use Next.js, Drizzle ORM and SQLite.

Prerequisites

To follow along with this guide, you’ll need:

  • A decent understanding of JavaScript/TypeScript
  • Basic familiarity with React and Next.js
  • A database ready (PostgreSQL, MySQL or SQLite)

What Is Better Auth?

Better Auth is a TypeScript-first authentication library that avoids the usual trade-offs like vendor lock-in, expensive monthly fees or complex configurations. It is framework-agnostic, so you can use it with Next.js today and switch to Remix or SvelteKit tomorrow without rewriting your auth logic. It’s not a managed service, so there’s no per-user pricing or vendor lock-in.

What makes it different? It’s database-agnostic, meaning you can use PostgreSQL, MySQL, SQLite or MongoDB; Better Auth connects to each through adapters like Prisma, Drizzle and Mongoose. It’s type-safe by default: built in TypeScript from the ground up, so your IDE knows what methods exist and what data comes back, and errors are caught before you run your code. It also has security measures built in by default, with proper password hashing, HttpOnly cookies, CSRF protection and secure session management. The defaults follow best practices.

Better Auth supports OAuth providers (Google, GitHub, etc.), magic links, two-factor authentication, passkeys, even enterprise SSO, and more than what we’ll cover in this article. We’re focusing on the fundamentals so you understand how the system works. Once you grasp the basics, adding these features is straightforward through Better Auth’s plugin system.

Project Setup

Let’s start by creating a new Next.js project with TypeScript and Tailwind CSS. Run the following command in your terminal:

npx create-next-app@latest better-auth --ts --tailwind --eslint --app

This creates a Next.js project with TypeScript, Tailwind CSS, ESLint and App Router (the modern Next.js routing system) configured.

Run this command to navigate into the project:

cd better-auth

Run the following command to install Better Auth and its dependencies:

npm install better-auth drizzle-orm better-sqlite3 
npm install -D drizzle-kit @types/better-sqlite3

Database Setup

Before configuring Better Auth, we need to prepare our database. We’ll break this down into three distinct files to keep things organized:

  • Define the tables: Create the tables Better Auth needs (users, sessions, accounts, verification)
  • Database initialization: Set up the SQLite connection and wrap it with Drizzle for type-safe queries
  • Configure auth: Finally, we connect our database to Better Auth using the Drizzle adapter

Define the Tables

We need to explicitly define the tables Better Auth expects. We will add all of these to a single file. Create a lib/auth-schema.ts file and add the following to it:

The User Table
This stores basic user information.

//lib/auth-schema.ts
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";
export const user = sqliteTable("user", {
  id: text("id").primaryKey(),
  name: text("name").notNull(),
  email: text("email").notNull().unique(),
  emailVerified: integer("emailVerified", { mode: "boolean" }).notNull(),
  image: text("image"),
  createdAt: integer("createdAt", { mode: "timestamp" }).notNull(),
  updatedAt: integer("updatedAt", { mode: "timestamp" }).notNull(),
});

The email field is unique, meaning no two users can share the same email. Better Auth requires these specific fields for its internal functionality.

Notice we mark name and email as notNull() here. This is important because Better Auth infers its types from this schema. By making them required in the database, TypeScript will automatically force us to provide them in the sign-up form later.

The Session Table
This manages active login sessions.

//lib/auth-schema.ts
export const session = sqliteTable("session", {
  id: text("id").primaryKey(),
  expiresAt: integer("expiresAt", { mode: "timestamp" }).notNull(),
  token: text("token").notNull().unique(),
  createdAt: integer("createdAt", { mode: "timestamp" }).notNull(),
  updatedAt: integer("updatedAt", { mode: "timestamp" }).notNull(),
  ipAddress: text("ipAddress"),
  userAgent: text("userAgent"),
  userId: text("userId")
    .notNull()
    .references(() => user.id),
});

Each session has a unique token and links to a user via userId. The expiresAt timestamp determines when the session becomes invalid.

The Account Table
This handles OAuth providers and stores authentication credentials.

//lib/auth-schema.ts
export const account = sqliteTable("account", {
  id: text("id").primaryKey(),
  accountId: text("accountId").notNull(),
  providerId: text("providerId").notNull(),
  userId: text("userId")
    .notNull()
    .references(() => user.id),
  accessToken: text("accessToken"),
  refreshToken: text("refreshToken"),
  idToken: text("idToken"),
  accessTokenExpiresAt: integer("accessTokenExpiresAt", { mode: "timestamp" }),
  refreshTokenExpiresAt: integer("refreshTokenExpiresAt", {
    mode: "timestamp",
  }),
  scope: text("scope"),
  password: text("password"),
  createdAt: integer("createdAt", { mode: "timestamp" }).notNull(),
  updatedAt: integer("updatedAt", { mode: "timestamp" }).notNull(),
});

The account table stores the user’s credentials, including the password. This separation keeps the architecture flexible, allowing you to link other providers like GitHub or Google to the same user identity.

Verification Table
This stores temporary codes for email verification and password resets.

export const verification = sqliteTable("verification", {
  id: text("id").primaryKey(),
  identifier: text("identifier").notNull(),
  value: text("value").notNull(),
  expiresAt: integer("expiresAt", { mode: "timestamp" }).notNull(),
  createdAt: integer("createdAt", { mode: "timestamp" }),
  updatedAt: integer("updatedAt", { mode: "timestamp" }),
});

This table acts as a temporary vault for security tokens. When the system sends an email to verify a user, the unique code is stored here to ensure the link is valid and hasn’t expired when clicked.

Database Initialization

We need a running database connection. Let’s create a file called lib/db.ts. This is where we initialize SQLite and wrap it in Drizzle so we can use it everywhere else. We wrap it because we want to write TypeScript, not SQL strings. By passing the connection to Drizzle, you get to query your database using typed methods.

// lib/db.ts 
import { drizzle } from "drizzle-orm/better-sqlite3";
import Database from "better-sqlite3";

// This creates a local 'sqlite.db' file if it doesn't exist
const sqlite = new Database("sqlite.db"); 

export const db = drizzle(sqlite);

We need to do this first so we can import db into our auth configuration without TypeScript yelling at us.

Configure Auth

Now we can write the auth config. This file is where we tell Better Auth about our database and configure how users will authenticate.

Create a lib/auth.ts file and add the following to it:

//lib/auth.ts
import { betterAuth } from "better-auth";
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { db } from "./db";
import * as schema from "./auth-schema";

export const auth = betterAuth({
  database: drizzleAdapter(db, {
    provider: "sqlite",
    schema: schema, // passing the schema here
  }),
  emailAndPassword: {
    enabled: true,
  },
});

In the code above, we import the database connection db and schema we created, then pass them to Better Auth through the drizzleAdapter. This adapter translates Better Auth operations into Drizzle queries so it can read and write user data.

The provider: "sqlite" tells Better Auth we’re using SQLite. If we were using something else like PostgreSQL, we would change it to whatever we’re using.

It is important to note that the emailAndPassword: {enabled: true} option activates email and password authentication. Better Auth will generate the signup and signin endpoints we need.

Syncing Database

Now that we have three core files, we need to create the database file. Currently, sqlite.db doesn’t exist.

Drizzle Configuration

Create a file named drizzle.config.ts at the root of your project and add the following to it:

//drizzle.config.ts
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  schema: "./lib/auth-schema.ts", // where our tables will be defined
  dialect: "sqlite",
  dbCredentials: {
    url: "sqlite.db", // This is the name of the file it will create
  },
});

Here, we tell Drizzle where to find our schema and what database to use. The config points to our schema file ./lib/auth-schema.ts and specifies SQLite as the database type. The url is the filename Drizzle will create; in this case, sqlite.db.

Now, run this command in your terminal to sync your code with the database:

npx drizzle-kit push

If everything is set up correctly, you should see a success message. Drizzle just created a local sqlite.db file in your project root with all your user, session and account tables.


API Route

Now we need to set up an API route so our frontend can communicate with Better Auth. Create a file named app/api/auth/[...all]/route.ts. This will be the API route that handles all auth requests:

// app/api/auth/[...all]/route.ts

import { auth } from "@/lib/auth";
import { toNextJsHandler } from "better-auth/next-js";

export const { GET, POST } = toNextJsHandler(auth);

If your project uses a src/ directory, this goes in src/app/api/auth/[...all]/route.ts.

Auth Client

Finally, for our setup, we need a way for our frontend to talk to the backend without writing messy fetch calls.

Create lib/auth-client.ts file and add the following to it:

import { createAuthClient } from "better-auth/react";

export const authClient = createAuthClient({
  baseURL: "http://localhost:3000",
});

This small utility will give us type-safe methods for signing in, signing up and checking sessions.

Email and Password Authentication

Now let’s build the authentication forms. We’ll create two components: one for sign-up and one for sign-in.

Sign-up Form

Create a file named components/sign-up.tsx and add the following to it:

// components/sign-up.tsx
"use client";

import { useState } from "react";
import { authClient } from "@/lib/auth-client";
import { useRouter } from "next/navigation";

export default function SignUp() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [name, setName] = useState("");
  const [isLoading, setIsLoading] = useState(false);
  const router = useRouter();

  const signUp = async () => {
    await authClient.signUp.email(
      {
        email,
        password,
        name,
      },
      {
        onRequest: () => {
          setIsLoading(true);
        },
        onSuccess: () => {
          router.push("/dashboard");
        },
        onError: (ctx) => {
          alert(ctx.error.message);
          setIsLoading(false);
        },
      }
    );
  };

return (
  <div className="flex flex-col gap-4 w-full max-w-md mx-auto mt-10 border border-gray-200 p-6 rounded-lg shadow-sm bg-white">
    <h1 className="text-xl font-bold text-gray-900">Create Account</h1>

    <div className="flex flex-col gap-2">
      <label className="text-sm font-medium text-gray-700">Name</label>
      <input
        type="text"
        value={name}
        onChange={(e) => setName(e.target.value)}
        placeholder="John Doe"
        className="border border-gray-300 p-2 rounded focus:outline-none focus:ring-2 focus:ring-black text-black bg-white"
      />
    </div>

    <div className="flex flex-col gap-2">
      <label className="text-sm font-medium text-gray-700">Email</label>
      <input
        type="email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
        placeholder="user@example.com"
        className="border border-gray-300 p-2 rounded focus:outline-none focus:ring-2 focus:ring-black text-black bg-white"
      />
    </div>

    <div className="flex flex-col gap-2">
      <label className="text-sm font-medium text-gray-700">Password</label>
      <input
        type="password"
        value={password}
        onChange={(e) => setPassword(e.target.value)}
        placeholder="••••••••"
        className="border border-gray-300 p-2 rounded focus:outline-none focus:ring-2 focus:ring-black text-black bg-white"
      />
    </div>

    <button
      onClick={signUp}
      disabled={isLoading}
      className={`mt-2 p-2 rounded text-white font-medium transition-colors ${
        isLoading
          ? "bg-gray-400 cursor-not-allowed"
          : "bg-black hover:bg-gray-800"
      }`}
    >
      {isLoading ? "Creating an account..." : "Sign Up"}
    </button>
  </div>
  );
}

This component handles the entire sign-up flow in a single file. If you look closer, you’ll see that the heavy lifting is done by authClient.signUp.email(). This function is type-safe and accepts two distinct arguments (the payload and the event handlers):

  • The payload: This is the actual data we’re sending. In this case, email, password and name. Since our database schema requires these fields, the client verifies we actually provide them. If we miss a required field, TypeScript will flag it instantly.
  • Event handlers: The second argument is an object that controls the request lifecycle, from the moment we click Sign Up to when we get a result. This is one of the perks of Better Auth: it replaces try/catch blocks with clean event hooks. All we have to do is define what happens at each stage: onRequest, onSuccess and onError; in this case, onError catches failures and alerts the user.
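To see why event hooks are nicer than try/catch, here is a hypothetical sketch (not Better Auth's actual internals; the helper name callWithHooks is made up) of a wrapper that turns a promise-returning call into onRequest/onSuccess/onError events:

```typescript
// Hypothetical sketch of the lifecycle-hook pattern. A try/catch lives in
// exactly one place; callers just declare what happens at each stage.
type Hooks<T> = {
  onRequest?: () => void;
  onSuccess?: (data: T) => void;
  onError?: (ctx: { error: { message: string } }) => void;
};

async function callWithHooks<T>(
  fn: () => Promise<T>,
  hooks: Hooks<T>
): Promise<void> {
  hooks.onRequest?.(); // fires before the request starts
  try {
    const data = await fn(); // the actual async work
    hooks.onSuccess?.(data); // fires when the call resolves
  } catch (e) {
    // normalize the thrown value into a ctx object similar to Better Auth's
    hooks.onError?.({
      error: { message: e instanceof Error ? e.message : String(e) },
    });
  }
}

// A failing call routes into onError instead of becoming an uncaught throw.
const events: string[] = [];
await callWithHooks(
  async () => {
    throw new Error("email taken");
  },
  {
    onRequest: () => events.push("request"),
    onSuccess: () => events.push("success"),
    onError: (ctx) => events.push(`error:${ctx.error.message}`),
  }
);
// events is now ["request", "error:email taken"]
```

The caller never writes its own try/catch, which is the same ergonomic win the signUp call above gives us.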

Sign-in Form

With our current setup, users can only register. We need to log them in if they’re existing users.

Create a file named components/sign-in.tsx and add the following to it:

//components/sign-in.tsx
"use client";

import { useState } from "react";
import { authClient } from "@/lib/auth-client";
import { useRouter } from "next/navigation";

export default function SignIn() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [isLoading, setIsLoading] = useState(false);

  const router = useRouter();

  const signIn = async () => {
    await authClient.signIn.email(
      {
        email,
        password,
      },
      {
        onRequest: () => {
          setIsLoading(true);
        },
        onSuccess: () => {
          setIsLoading(false);
          router.push("/dashboard");
        },
        onError: (ctx) => {
          setIsLoading(false);
          alert(ctx.error.message);
        },
      }
    );
  };

  return (
    <div className="flex flex-col gap-4 w-full max-w-md mx-auto mt-10 border border-gray-200 p-6 rounded-lg shadow-sm bg-white">
      <h1 className="text-xl font-bold text-gray-900">Sign In</h1>

      <div className="flex flex-col gap-2">
        <label className="text-sm font-medium text-gray-700">Email</label>
        <input
          type="email"
          value={email}
          onChange={(e) => setEmail(e.target.value)}
          placeholder="user@example.com"
          className="border border-gray-300 p-2 rounded focus:outline-none focus:ring-2 focus:ring-black text-black bg-white"
        />
      </div>

      <div className="flex flex-col gap-2">
        <label className="text-sm font-medium text-gray-700">Password</label>
        <input
          type="password"
          value={password}
          onChange={(e) => setPassword(e.target.value)}
          placeholder="••••••••"
          className="border border-gray-300 p-2 rounded focus:outline-none focus:ring-2 focus:ring-black text-black bg-white"
        />
      </div>

      <button
        onClick={signIn}
        disabled={isLoading}
        className={`mt-2 p-2 rounded text-white font-medium transition-colors ${
          isLoading
            ? "bg-gray-400 cursor-not-allowed"
            : "bg-black hover:bg-gray-800"
        }`}
      >
        {isLoading ? "Loading..." : "Sign In"}
      </button>
    </div>
  );
}

This component is nearly identical to the sign-up form with one key difference: we only need email and password since we’re verifying an existing user, not creating a new one.

When a user submits the form, Better Auth verifies the credentials against our database. If they match, it automatically creates a session and sets a secure HttpOnly cookie. This cookie persists the login state, so users stay logged in even after refreshing the page.

Remember when we enabled emailAndPassword: { enabled: true } in the lib/auth.ts file? Better Auth read that configuration and automatically generated this method for us.

Now let’s create dedicated pages for these forms.

Creating Authentication Pages

Instead of embedding forms on the homepage, we’ll create dedicated routes for sign-in, sign-up and a dashboard to verify successful login.

Sign-up Page

Create a component called app/signup/page.tsx and add the following to it:

//app/signup/page.tsx
import SignUp from "@/components/sign-up";
import Link from "next/link";

export default function SignUpPage() {
  return (
    <div className="flex flex-col items-center justify-center min-h-screen bg-gray-50">
      <SignUp />
      <div className="mt-6 text-center text-sm text-gray-600">
        Already have an account?{" "}
        <Link
          href="/signin"
          className="text-blue-600 font-medium hover:underline"
        >
          Sign In
        </Link>
      </div>
    </div>
  );
}

This page wraps the <SignUp /> component and adds a link to sign in.

Sign-in Page

Create a component called app/signin/page.tsx and add the following to it:

//app/signin/page.tsx
import SignIn from "@/components/sign-in";
import Link from "next/link";

export default function SignInPage() {
  return (
    <div className="flex flex-col items-center justify-center min-h-screen bg-gray-50">
      <SignIn />
      <div className="mt-6 text-center text-sm text-gray-600">
        New here?{" "}
        <Link
          href="/signup"
          className="text-blue-600 font-medium hover:underline"
        >
          Create an account
        </Link>
      </div>
    </div>
  );
}

This page wraps the <SignIn /> component and adds a link to Sign Up.

Dashboard Page

We need a destination for users after they log in. For now, let’s create a simple static page named app/dashboard/page.tsx and add the following to it:

//app/dashboard/page.tsx
export default function Dashboard() {
  return (
    <div className="flex flex-col items-center justify-center min-h-screen bg-white text-black">
      <h1 className="text-3xl font-bold">Dashboard</h1>
      <p className="mt-4 text-gray-600">You are successfully logged in!</p>
    </div>
  );
}

Now you can start your server by running npm run dev and then open http://localhost:3000.

You should see a landing page with options to Sign In or Create Account. Follow these steps to test the authentication:

  • Click “Create Account” and create a new user. If it works, you should be redirected to the dashboard.
  • To test the login form, you don’t need to open another window. Just hit the back button in your browser to return to the landing page. Click “Sign In” this time, enter the email and password you just used to sign up, and watch it redirect you back to the dashboard.

Testing the Sign Up and Redirect flow

Accessing User Sessions

Right now, our dashboard just shows static text. We need to make it smart so it displays the actual user’s name and email.

To do this, we use the useSession hook from the auth-client. This hook gives us real-time access to the user’s data.

Update your app/dashboard/page.tsx file with the following:

//app/dashboard/page.tsx
"use client";

import { authClient } from "@/lib/auth-client";
import { useRouter } from "next/navigation";

export default function Dashboard() {
  const router = useRouter();
  const { data: session, isPending } = authClient.useSession();
  if (isPending) {
    return (
      <div className="flex min-h-screen items-center justify-center">
        <p className="text-gray-500">Loading...</p>
      </div>
    );
  }
  if (!session) {
    router.push("/signin");
    return null;
  }
  return (
    <div className="flex flex-col items-center justify-center min-h-screen gap-4 bg-white text-black">
      <h1 className="text-2xl font-bold">Welcome back, {session.user.name}!</h1>
      <p className="text-gray-600">
        You are logged in as{" "}
        <span className="font-semibold">{session.user.email}</span>
      </p>

      <button
        onClick={async () => {
          await authClient.signOut({
            fetchOptions: {
              onSuccess: () => {
                router.push("/signin");
              },
            },
          });
        }}
        className="px-4 py-2 bg-red-500 text-white rounded hover:bg-red-600 transition"
      >
        Sign Out
      </button>
    </div>
  );
}

We introduced the useSession() hook. This is the bridge between the frontend and the user’s session. It gives us access to the current session and keeps it in sync as authentication changes. It returns two key values:

  • data: The session object containing user information (or null if not logged in).
  • isPending: A boolean indicating whether the session is still loading. With this, we’re able to show a loading state so the user doesn’t see empty content.
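The branching that these two values drive in the component can be reduced to a pure function. This is just an illustration of the logic (viewState is a hypothetical helper, not part of Better Auth):

```typescript
// Reduces the two values returned by useSession() into a single view state,
// mirroring the branching in the dashboard component above.
type SessionData = { user: { name: string; email: string } } | null;

function viewState(
  isPending: boolean,
  data: SessionData
): "loading" | "signed-out" | "signed-in" {
  if (isPending) return "loading"; // session still being fetched
  if (!data) return "signed-out";  // no session: redirect to /signin
  return "signed-in";              // render the user's details
}

console.log(viewState(true, null));  // "loading"
console.log(viewState(false, null)); // "signed-out"
console.log(viewState(false, { user: { name: "Ada", email: "a@b.c" } })); // "signed-in"
```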

Finally, the Sign Out button calls authClient.signOut(). This function invalidates the session cookie and uses the onSuccess callback to send the user straight back to the login screen.

Displaying the authenticated user's session data

Session Management and Protected Routes

Now that our app works, let’s look at what happens under the hood.

When a user signs in, Better Auth creates a session and stores it as an HttpOnly cookie. Unlike regular cookies that you can read with document.cookie, these are blocked from frontend JavaScript entirely. Only the browser can send them automatically with each request to the server.
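To make this concrete, here is a minimal sketch of how a session token travels in the Cookie request header that the browser attaches (the cookie name shown is an assumption for illustration; Better Auth's actual cookie name and format may differ):

```typescript
// Extracts one named cookie from a raw Cookie header string, the same
// header our middleware will forward to the session endpoint later.
function getCookie(header: string, name: string): string | null {
  for (const part of header.split(";")) {
    const [k, ...rest] = part.trim().split("=");
    if (k === name) return rest.join("="); // values may themselves contain "="
  }
  return null;
}

const header = "theme=dark; better-auth.session_token=abc123; lang=en";
console.log(getCookie(header, "better-auth.session_token")); // "abc123"
console.log(getCookie(header, "missing"));                   // null
```

The key point: frontend JavaScript never does this parsing for the session cookie, because HttpOnly cookies are invisible to it; only server-side code ever sees the token.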

There are two main ways of securing your pages, and they serve different purposes:

  • Client-Side Protection: This is what we implemented in our app/dashboard/page.tsx earlier. We wait for the session to load in the browser, and, if it’s missing, we redirect the user to the “Sign In” page.
  • Server-Side Protection (better for security): We check the session on the server before the page renders. If there’s no valid session, the request gets blocked immediately, and the sensitive page content is never sent to the browser.

To implement server-side protection, we use middleware. Middleware is code that runs before a page loads. It sits between the user’s request and your page, checking conditions and deciding whether to allow access or redirect elsewhere. In Next.js, it runs on the server, so unauthorized users never even download the page.

Protecting Routes with Middleware

Create a new file named middleware.ts at the same level as your app folder and add the following to it:

//middleware.ts
import { betterFetch } from "@better-fetch/fetch";
import type { Session } from "better-auth/types";
import { NextResponse, type NextRequest } from "next/server";

export default async function authMiddleware(request: NextRequest) {
  const { data: session } = await betterFetch<Session>(
    "/api/auth/get-session",
    {
      baseURL: request.nextUrl.origin,
      headers: {
        cookie: request.headers.get("cookie") || "",
      },
    }
  );

  if (!session) {
    return NextResponse.redirect(new URL("/signin", request.url));
  }
  return NextResponse.next();
}

export const config = {
  matcher: ["/dashboard"],
};

In a nutshell, the config object at the bottom defines the rules of engagement, telling Next.js to strictly apply this security check only to routes starting with /dashboard. When a user attempts to visit the page, the middleware steps in to verify their session first. If the session is missing, it instantly blocks the request and redirects them to the “Sign In” page, preventing the protected content from reaching the browser.
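If you later add more protected areas, the matcher accepts multiple patterns, including Next.js path wildcards. A sketch (the /settings route is a hypothetical example, not part of this tutorial):

```typescript
// Protect the dashboard, everything nested under it, and a settings page.
export const config = {
  matcher: ["/dashboard", "/dashboard/:path*", "/settings"],
};
```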

Refactor the Dashboard

Now that the server (middleware) is handling the security, we can simplify our dashboard page. We don’t need to redirect from inside the component anymore, but we’ll keep the data fetching to show the user’s name and email address.

//app/dashboard/page.tsx
"use client";

import { authClient } from "@/lib/auth-client";
import { useRouter } from "next/navigation";

export default function Dashboard() {
  const router = useRouter();
  const { data: session, isPending } = authClient.useSession();

  if (isPending) {
    return (
      <div className="flex min-h-screen items-center justify-center">
        <p className="text-gray-500">Loading...</p>
      </div>
    );
  }

  return (
    <div className="flex flex-col items-center justify-center min-h-screen gap-4">
      <h1 className="text-2xl font-bold">Dashboard</h1>
      <div className="p-4 border rounded shadow-sm bg-white min-w-[300px]">
        <p className="text-gray-600 mb-2">
          Signed in as:{" "}
          <span className="font-semibold text-black">
            {session?.user.email}
          </span>
        </p>
        <p className="text-xs text-gray-400">User ID: {session?.user.id}</p>
      </div>

      <button
        onClick={async () => {
          await authClient.signOut({
            fetchOptions: {
              onSuccess: () => {
                router.push("/signin");
              },
            },
          });
        }}
        className="px-4 py-2 bg-red-500 text-white rounded hover:bg-red-600 transition"
      >
        Sign Out
      </button>
    </div>
  );
}

Now when we sign out from the app and try to manually visit http://localhost:3000/dashboard, we should be instantly redirected back to the sign-in page. The dashboard will never attempt to render because there’s no valid session.

Trying to access the dashboard without a session

Conclusion

At the base level, Better Auth handles the heavy lifting for authentication. The good thing is you aren’t reinventing the wheel, and because it works with pretty much everything, you don’t even have to rewrite your whole auth setup if you switch frameworks down the line. It just works, so you can focus on your app.


Visual Studio Code 1.116


Learn what's new in Visual Studio Code 1.116 (Insiders)

Read the full article


How do you add or remove a handle from an active Wait­For­Multiple­Objects?


Last time, we looked at adding or removing a handle from an active Msg­Wait­For­Multiple­Objects, and observed that we could send a message to both break out of the wait and update the list of handles. But what if the other thread is waiting in a Wait­For­Multiple­Objects? You can’t send a message since Wait­For­Multiple­Objects doesn’t wake for messages.

You can fake it by using an event which means “I want to change the list of handles.” The background thread can add that handle to its list, and if the “I want to change the list of handles” event is signaled, it updates its list.

One of the easier ways to represent the desired change is to maintain two lists, the “active” list (the one being waited on) and the “desired” list (the one you want to change it to). The background thread can make whatever changes to the “desired” list it wants, and then it signals the “changed” event. The waiting thread sees that the “changed” event is set and copies the “desired” list to the “active” list. This copying needs to be done with Duplicate­Handle because the background thread might close a handle in the “desired” list, and we can’t close a handle while it is being waited on.

wil::unique_handle duplicate_handle(HANDLE other)
{
    HANDLE result;
    THROW_IF_WIN32_BOOL_FALSE(
        DuplicateHandle(GetCurrentProcess(), other,
            GetCurrentProcess(), &result,
            0, FALSE, DUPLICATE_SAME_ACCESS));
    return wil::unique_handle(result);
}

This helper function duplicates a raw HANDLE and returns it in a wil::unique_handle. The duplicate handle has its own lifetime separate from the original. The waiting thread operates on a copy of the handles, so that it is unaffected by changes to the original handles.

std::mutex desiredMutex;
_Guarded_by_(desiredMutex) std::vector<wil::unique_handle> desiredHandles;
_Guarded_by_(desiredMutex) std::vector<std::function<void()>> desiredActions;

The desiredHandles vector holds the handles we want to be waiting for, and desiredActions is a parallel vector of things to do for each of those handles.

// auto-reset, initially unsignaled
wil::unique_handle changed(CreateEvent(nullptr, FALSE, FALSE, nullptr));

void waiting_thread()
{
    while (true)
    {
        std::vector<wil::unique_handle> handles;
        std::vector<std::function<void()>> actions;
        {
            std::lock_guard guard(desiredMutex);

            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                std::back_inserter(handles),
                [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));

            actions = desiredActions;
        }

        auto count = static_cast<DWORD>(handles.size());
                        
        auto result = WaitForMultipleObjects(count,
                        handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
            FAIL_FAST(); // (replace this with your favorite error recovery)
        }
    }
}

The waiting thread makes a copy of the desiredHandles and desiredActions, and adds the changed handle to the end so we will wake up if somebody changes the list. We operate on the copy so that any changes to desiredHandles and desiredActions that occur while we are waiting won’t affect us. Note that the copy in handles is done via Duplicate­Handle so that it operates on a separate set of handles. That way, if another thread closes a handle in desiredHandles, it won’t affect us.

void change_handle_list()
{
    std::lock_guard guard(desiredMutex);
    ⟦ make changes to desiredHandles and desiredActions ⟧
    SetEvent(changed.get());
}

Any time somebody wants to change the list of handles, they take the desiredMutex lock and can proceed to make whatever changes they want. These changes won’t affect the waiting thread because it is operating on duplicate handles. When finished, we set the changed event to wake up the waiting thread so it can pick up the new set of handles.

Right now, the purpose of the changed event is to wake up the blocking call, but we could also use it as a way to know whether we should update our captured handles. This allows us to reuse the handle array if there were no changes.

void waiting_thread()
{
    bool update = true;                        
    std::vector<wil::unique_handle> handles;   
    std::vector<std::function<void()>> actions;

    while (true)
    {
        if (std::exchange(update, false)) {
            std::lock_guard guard(desiredMutex);

            handles.clear();
            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                std::back_inserter(handles),
                [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));

            actions = desiredActions;
        }

        auto count = static_cast<DWORD>(handles.size());
                        
        auto result = WaitForMultipleObjects(count,
                        handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            update = true;
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
            FAIL_FAST(); // (replace this with your favorite error recovery)
        }
    }
}

In this design, changes to the handle list are asynchronous. They don’t take effect immediately, because the waiting thread might be busy running an action. Instead, they take effect when the waiting thread gets around to making another copy of the desiredHandles vector and calls Wait­For­Multiple­Objects again. This could be a problem: You ask to remove a handle, and then clean up the things that the handle depended on. But before the worker thread can process the removal, the handle is signaled. The result is that the worker thread calls your callback after you thought you had told it to stop!

Next time, we’ll see what we can do to make the changes synchronous.

The post How do you add or remove a handle from an active WaitForMultipleObjects? appeared first on The Old New Thing.
