
Quiet UI Came and Went, Quiet as a Mouse


A few weeks ago, Quiet UI made the rounds when it was released as an open source user interface library, built with JavaScript web components. I had the opportunity to check out the documentation and it seemed like a solid library. I’m always super excited to see more options for web components out in the wild.

Unfortunately, before we even had a chance to cover it here at CSS-Tricks, Quiet UI has disappeared. When visiting the Quiet UI website, there is a simple statement:

Unavailable

Quiet UI is no longer available to the general public. I will continue to maintain it as my personal creative outlet, but I am unable to release it to the world at this time.
Thanks for understanding. I’m really sorry for the inconvenience.

The repository for Quiet UI is no longer available on Quiet UI’s GitHub, and its social accounts seem to have been removed as well.

The creator, Cory LaViska, is a veteran of UI libraries, best known for his work on Shoelace. Shoelace joined Font Awesome in 2022 and was rebranded as Web Awesome. The latest version of Web Awesome was released around the same time Quiet UI was originally announced.

According to the Quiet UI site, Cory will be continuing to work on it as a personal creative outlet, but hopefully we’ll be able to see what he’s cooking up again, someday. In the meantime, you can get a really good taste of what the project is/was all about in Dave Rupert’s fantastic write-up.


Quiet UI Came and Went, Quiet as a Mouse originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


Gamepad API Visual Debugging With CSS Layers


When you plug in a controller, you mash buttons, move the sticks, pull the triggers… and as a developer, you see none of it. The browser’s picking it up, sure, but unless you’re logging numbers in the console, it’s invisible. That’s the headache with the Gamepad API.

It’s been around for years, and it’s actually pretty powerful. You can read buttons, sticks, triggers, the works. But most people don’t touch it. Why? Because there’s no feedback. No panel in developer tools. No clear way to know if the controller’s even doing what you think. It feels like flying blind.

That bugged me enough to build a little tool: Gamepad Cascade Debugger. Instead of staring at console output, you get a live, interactive view of the controller. Press something and it reacts on the screen. And with CSS Cascade Layers, the styles stay organized, so it’s cleaner to debug.

In this post, I’ll show you why debugging controllers is such a pain, how CSS helps clean it up, and how you can build a reusable visual debugger for your own projects.

Even if you are able to log all the inputs, you’ll quickly end up with unreadable console spam. For example:

[0,0,1,0,0,0.5,0,...]
[0,0,0,0,1,0,0,...]
[0,0,1,0,0,0,0,...]

Can you tell what button was pressed? Maybe, but only after straining your eyes and missing a few inputs. So, no, debugging doesn’t come easily when it comes to reading inputs.
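For reference, here is a minimal sketch of the kind of polling loop that produces output like the above. The Gamepad API only exposes state through navigator.getGamepads(), so logging means dumping snapshots every frame:

// Minimal polling sketch: log the first connected gamepad's raw state each frame.
function logRawInput() {
  const gp = navigator.getGamepads()[0];
  if (gp) {
    // Buttons expose .value (0 to 1); axes are plain numbers (-1 to 1).
    console.log(gp.buttons.map(b => b.value), gp.axes);
  }
  requestAnimationFrame(logRawInput);
}
requestAnimationFrame(logRawInput);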

Problem 3: Lack Of Structure

Even if you throw together a quick visualizer, styles can quickly get messy. Default, active, and debug states can overlap, and without a clear structure, your CSS becomes brittle and hard to extend.

CSS Cascade Layers can help. They group styles into “layers” that are ordered by priority, so you stop fighting specificity and guessing, “Why isn’t my debug style showing?” Instead, you maintain separate concerns:

  • Base: The controller’s standard, initial appearance.
  • Active: Highlights for pressed buttons and moved sticks.
  • Debug: Overlays for developers (e.g., numeric readouts, guides, and so on).

If we were to define layers in CSS according to this, we’d have:

/* lowest to highest priority */
@layer base, active, debug;

@layer base {
  /* ... */
}

@layer active {
  /* ... */
}

@layer debug {
  /* ... */
}

Because each layer stacks predictably, you always know which rules win. That predictability makes debugging not just easier, but actually manageable.

We’ve covered the problem (invisible, messy input) and the approach (a visual debugger built with Cascade Layers). Now we’ll walk through the step-by-step process to build the debugger.

The Debugger Concept

The easiest way to make hidden input visible is to just draw it on the screen. That’s what this debugger does. Buttons, triggers, and joysticks all get a visual.

  • Press A: A circle lights up.
  • Nudge the stick: The circle slides around.
  • Pull a trigger halfway: A bar fills halfway.

Now you’re not staring at 0s and 1s, but actually watching the controller react live.
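The code later in this post focuses on buttons, but the stick and trigger visuals are a natural extension. Here is a minimal sketch of how you might drive them using the standard gamepad mapping; the element IDs (stick-left, trigger-left) are assumptions and not part of the markup shown below.

// Hypothetical stick/trigger visuals; call this from your update loop with the
// current gamepad. IDs "stick-left" and "trigger-left" are assumed markup.
const stickEl = document.getElementById("stick-left");
const triggerEl = document.getElementById("trigger-left");

function renderAnalog(gp) {
  // Standard mapping: axes[0]/axes[1] are the left stick, each in -1 to 1.
  const x = gp.axes[0] * 20; // scale to a 20px travel radius
  const y = gp.axes[1] * 20;
  stickEl.style.transform = `translate(${x}px, ${y}px)`;

  // Standard mapping: buttons[6] is the left trigger; value ranges 0 to 1.
  triggerEl.style.width = (gp.buttons[6].value * 100) + "%";
}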

Of course, once you start piling on states like default, pressed, debug info, maybe even a recording mode, the CSS starts getting larger and more complex. That’s where cascade layers come in handy. Here’s a stripped-down example:

@layer base {
  .button {
    background: #222;
    border-radius: 50%;
    width: 40px;
    height: 40px;
  }
}

@layer active {
  .button.pressed {
    background: #0f0; /* bright green */
  }
}

@layer debug {
  .button::after {
    content: attr(data-value);
    font-size: 12px;
    color: #fff;
  }
}

The layer order matters: base → active → debug.

  • base draws the controller.
  • active handles pressed states.
  • debug throws on overlays.

Breaking it up like this means you’re not fighting weird specificity wars. Each layer has its place, and you always know what wins.

Building It Out

Let’s get something on screen first. It doesn’t need to look good — just needs to exist so we have something to work with.

<h1>Gamepad Cascade Debugger</h1>

<!-- Main controller container -->
<div id="controller">
  <!-- Action buttons -->
  <div id="btn-a" class="button">A</div>
  <div id="btn-b" class="button">B</div>
  <div id="btn-x" class="button">X</div>

  <!-- Pause/menu button (represented as two bars) -->
  <div>
    <div id="pause1" class="pause"></div>
    <div id="pause2" class="pause"></div>
  </div>
</div>

<!-- Toggle button to start/stop the debugger -->
<button id="toggle">Toggle Debug</button>

<!-- Status display for showing which buttons are pressed -->
<div id="status">Debugger inactive</div>

<script src="script.js"></script>

That’s literally just boxes. Not exciting yet, but it gives us handles to grab later with CSS and JavaScript.

Okay, I’m using cascade layers here because it keeps stuff organized once you add more states. Here’s a rough pass:

/* ===================================
   CASCADE LAYERS SETUP
   Order matters: base → active → debug
   =================================== */

/* Define layer order upfront */
@layer base, active, debug;

/* Layer 1: Base styles - default appearance */
@layer base {
  .button {
    background: #333;
    border-radius: 50%;
    width: 70px;
    height: 70px;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .pause {
    width: 20px;
    height: 70px;
    background: #333;
    display: inline-block;
  }
}

/* Layer 2: Active states - handles pressed buttons */
@layer active {
  .button.active {
    background: #0f0; /* Bright green when pressed */
    transform: scale(1.1); /* Slightly enlarges the button */
  }

  .pause.active {
    background: #0f0;
    transform: scaleY(1.1); /* Stretches vertically when pressed */
  }
}

/* Layer 3: Debug overlays - developer info */
@layer debug {
  .button::after {
    content: attr(data-value); /* Shows the numeric value */
    font-size: 12px;
    color: #fff;
  }
}

The beauty of this approach is that each layer has a clear purpose. The base layer can never override active, and active can never override debug, regardless of specificity. This eliminates the CSS specificity wars that usually plague debugging tools.

Now it looks like some clusters are sitting on a dark background. Honestly, not too bad.

Adding the JavaScript

JavaScript time. This is where the controller actually does something. We’ll build this step by step.

Step 1: Set Up State Management

First, we need variables to track the debugger’s state:

// ===================================
// STATE MANAGEMENT
// ===================================

let running = false; // Tracks whether the debugger is active
let rafId; // Stores the requestAnimationFrame ID for cancellation

These variables control the animation loop that continuously reads gamepad input.

Step 2: Grab DOM References

Next, we get references to all the HTML elements we’ll be updating:

// ===================================
// DOM ELEMENT REFERENCES
// ===================================

const btnA = document.getElementById("btn-a");
const btnB = document.getElementById("btn-b");
const btnX = document.getElementById("btn-x");
const pause1 = document.getElementById("pause1");
const pause2 = document.getElementById("pause2");
const status = document.getElementById("status");

Storing these references up front is more efficient than querying the DOM repeatedly.

Step 3: Add Keyboard Fallback

For testing without a physical controller, we’ll map keyboard keys to buttons:

// ===================================
// KEYBOARD FALLBACK (for testing without a controller)
// ===================================

const keyMap = {
  "a": btnA,
  "b": btnB,
  "x": btnX,
  "p": [pause1, pause2] // 'p' key controls both pause bars
};

This lets us test the UI by pressing keys on a keyboard.

Step 4: Create The Main Update Loop

Here’s where the magic happens. This function runs continuously and reads gamepad state:

// ===================================
// MAIN GAMEPAD UPDATE LOOP
// ===================================

function updateGamepad() {
  // Get all connected gamepads
  const gamepads = navigator.getGamepads();
  if (!gamepads) return;

  // Use the first connected gamepad
  const gp = gamepads[0];

  if (gp) {
    // Update button states by toggling the "active" class
    btnA.classList.toggle("active", gp.buttons[0].pressed);
    btnB.classList.toggle("active", gp.buttons[1].pressed);
    btnX.classList.toggle("active", gp.buttons[2].pressed);

    // Handle pause button (button index 9 on most controllers)
    const pausePressed = gp.buttons[9].pressed;
    pause1.classList.toggle("active", pausePressed);
    pause2.classList.toggle("active", pausePressed);

    // Build a list of currently pressed buttons for status display
    let pressed = [];
    gp.buttons.forEach((btn, i) => {
      if (btn.pressed) pressed.push("Button " + i);
    });

    // Update status text if any buttons are pressed
    if (pressed.length > 0) {
      status.textContent = "Pressed: " + pressed.join(", ");
    }
  }

  // Continue the loop if debugger is running
  if (running) {
    rafId = requestAnimationFrame(updateGamepad);
  }
}

The classList.toggle() method adds or removes the active class based on whether the button is pressed, which triggers our CSS layer styles.

Step 5: Handle Keyboard Events

These event listeners make the keyboard fallback work:

// ===================================
// KEYBOARD EVENT HANDLERS
// ===================================

document.addEventListener("keydown", (e) => {
  if (keyMap[e.key]) {
    // Handle single or multiple elements
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.add("active"));
    } else {
      keyMap[e.key].classList.add("active");
    }
    status.textContent = "Key pressed: " + e.key.toUpperCase();
  }
});

document.addEventListener("keyup", (e) => {
  if (keyMap[e.key]) {
    // Remove active state when key is released
    if (Array.isArray(keyMap[e.key])) {
      keyMap[e.key].forEach(el => el.classList.remove("active"));
    } else {
      keyMap[e.key].classList.remove("active");
    }
    status.textContent = "Key released: " + e.key.toUpperCase();
  }
});

Step 6: Add Start/Stop Control

Finally, we need a way to toggle the debugger on and off:

// ===================================
// TOGGLE DEBUGGER ON/OFF
// ===================================

document.getElementById("toggle").addEventListener("click", () => {
  running = !running; // Flip the running state

  if (running) {
    status.textContent = "Debugger running...";
    updateGamepad(); // Start the update loop
  } else {
    status.textContent = "Debugger inactive";
    cancelAnimationFrame(rafId); // Stop the loop
  }
});

So yeah, press a button and it glows. Push the stick and it moves. That’s it.

One more thing: raw values. Sometimes you just want to see numbers, not lights.
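The debug layer above reads a data-value attribute with content: attr(data-value), but the update loop only toggles classes. One way to feed those numbers, sketched here as an assumed addition rather than code from the original demo, is a small helper called from inside updateGamepad():

// Assumed helper, called from inside updateGamepad()'s `if (gp)` block:
// writes each button's raw value into data-value so the debug layer can show it.
function updateDebugReadout(gp) {
  btnA.dataset.value = gp.buttons[0].value.toFixed(2);
  btnB.dataset.value = gp.buttons[1].value.toFixed(2);
  btnX.dataset.value = gp.buttons[2].value.toFixed(2);
}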

At this stage, you should see:

  • A simple on-screen controller,
  • Buttons that react as you interact with them, and
  • An optional debug readout showing pressed button indices.

To make this less abstract, here’s a quick demo of the on-screen controller reacting in real time:

Now, pressing Start Recording logs everything until you hit Stop Recording.
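The recording step itself isn’t shown above, so here is a minimal sketch of what it could look like. The frames array and its shape (t, buttons, axes) are reconstructed from what the export and replay code below expects, and the "record" button ID is hypothetical.

// Hypothetical recording sketch; "record" is an assumed toggle button ID.
// Each frame stores a timestamp plus full button and axis snapshots,
// matching the shape the export and replay code below expects.
let frames = [];
let recordingActive = false;
let recordStart = 0;

function captureFrame() {
  const gp = navigator.getGamepads()[0];
  if (gp) {
    frames.push({
      t: performance.now() - recordStart, // ms since recording started
      buttons: gp.buttons.map(b => ({ pressed: b.pressed, value: b.value })),
      axes: [...gp.axes]
    });
  }
  if (recordingActive) requestAnimationFrame(captureFrame);
}

document.getElementById("record").addEventListener("click", () => {
  recordingActive = !recordingActive;
  if (recordingActive) {
    frames = [];
    recordStart = performance.now();
    captureFrame();
  }
});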

2. Exporting Data to CSV/JSON

Once we have a log, we’ll want to save it.

<div class="controls">
  <button id="export-json" class="btn">Export JSON</button>
  <button id="export-csv" class="btn">Export CSV</button>
</div>

Step 1: Create The Download Helper

First, we need a helper function that handles file downloads in the browser:

// ===================================
// FILE DOWNLOAD HELPER
// ===================================

function downloadFile(filename, content, type = "text/plain") {
  // Create a blob from the content
  const blob = new Blob([content], { type });
  const url = URL.createObjectURL(blob);

  // Create a temporary download link and click it
  const a = document.createElement("a");
  a.href = url;
  a.download = filename;
  a.click();

  // Clean up the object URL after download
  setTimeout(() => URL.revokeObjectURL(url), 100);
}

This function works by creating a Blob (binary large object) from your data, generating a temporary URL for it, and programmatically clicking a download link. The cleanup ensures we don’t leak memory.

Step 2: Handle JSON Export

JSON is perfect for preserving the complete data structure:

// ===================================
// EXPORT AS JSON
// ===================================

document.getElementById("export-json").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Create a payload with metadata and frames
  const payload = {
    createdAt: new Date().toISOString(),
    frames
  };

  // Download as formatted JSON
  downloadFile(
    "gamepad-log.json", 
    JSON.stringify(payload, null, 2), 
    "application/json"
  );
});

The JSON format keeps everything structured and easily parseable, making it ideal for loading back into dev tools or sharing with teammates.

Step 3: Handle CSV Export

For CSV exports, we need to flatten the hierarchical data into rows and columns:

// ===================================
// EXPORT AS CSV
// ===================================

document.getElementById("export-csv").addEventListener("click", () => {
  // Check if there's anything to export
  if (!frames.length) {
    console.warn("No recording available to export.");
    return;
  }

  // Build CSV header row (columns for timestamp, all buttons, all axes)
  const headerButtons = frames[0].buttons.map((_, i) => `btn${i}`);
  const headerAxes = frames[0].axes.map((_, i) => `axis${i}`);
  const header = ["t", ...headerButtons, ...headerAxes].join(",") + "\n";

  // Build CSV data rows
  const rows = frames.map(f => {
    const btnVals = f.buttons.map(b => b.value);
    return [f.t, ...btnVals, ...f.axes].join(",");
  }).join("\n");

  // Download as CSV
  downloadFile("gamepad-log.csv", header + rows, "text/csv");
});

CSV is brilliant for data analysis because it opens directly in Excel or Google Sheets, letting you create charts, filter data, or spot patterns visually.

Now that the export buttons are in, you’ll see two new options on the panel: Export JSON and Export CSV. JSON is nice if you want to throw the raw log back into your dev tools or poke around the structure. CSV, on the other hand, opens straight into Excel or Google Sheets so you can chart, filter, or compare inputs. The following figure shows what the panel looks like with those extra controls.

3. Snapshot System

Sometimes you don’t need a full recording, just a quick “screenshot” of input states. That’s where a Take Snapshot button helps.

<div class="controls">
  <button id="snapshot" class="btn">Take Snapshot</button>
</div>

And the JavaScript:

// ===================================
// TAKE SNAPSHOT
// ===================================

document.getElementById("snapshot").addEventListener("click", () => {
  // Get all connected gamepads
  const pads = navigator.getGamepads();
  const activePads = [];

  // Loop through and capture the state of each connected gamepad
  for (const gp of pads) {
    if (!gp) continue; // Skip empty slots

    activePads.push({
      id: gp.id, // Controller name/model
      timestamp: performance.now(),
      buttons: gp.buttons.map(b => ({ 
        pressed: b.pressed, 
        value: b.value 
      })),
      axes: [...gp.axes]
    });
  }

  // Check if any gamepads were found
  if (!activePads.length) {
    console.warn("No gamepads connected for snapshot.");
    alert("No controller detected!");
    return;
  }

  // Log and notify user
  console.log("Snapshot:", activePads);
  alert(`Snapshot taken! Captured ${activePads.length} controller(s).`);
});

Snapshots freeze the exact state of your controller at one moment in time.

4. Ghost Input Replay

Now for the fun one: ghost input replay. This takes a log and plays it back visually as if a phantom player was using the controller.

<div class="controls">
  <button id="replay" class="btn">Replay Last Recording</button>
</div>

JavaScript for replay:

// ===================================
// GHOST REPLAY
// ===================================

document.getElementById("replay").addEventListener("click", () => {
  // Ensure we have a recording to replay
  if (!frames.length) {
    alert("No recording to replay!");
    return;
  }

  console.log("Starting ghost replay...");

  // Track timing for synced playback
  let startTime = performance.now();
  let frameIndex = 0;

  // Replay animation loop
  function step() {
    const now = performance.now();
    const elapsed = now - startTime;

    // Process all frames that should have occurred by now
    while (frameIndex < frames.length && frames[frameIndex].t <= elapsed) {
      const frame = frames[frameIndex];

      // Update UI with the recorded button states
      btnA.classList.toggle("active", frame.buttons[0].pressed);
      btnB.classList.toggle("active", frame.buttons[1].pressed);
      btnX.classList.toggle("active", frame.buttons[2].pressed);

      // Update status display
      let pressed = [];
      frame.buttons.forEach((btn, i) => {
        if (btn.pressed) pressed.push("Button " + i);
      });
      if (pressed.length > 0) {
        status.textContent = "Ghost: " + pressed.join(", ");
      }

      frameIndex++;
    }

    // Continue loop if there are more frames
    if (frameIndex < frames.length) {
      requestAnimationFrame(step);
    } else {
      console.log("Replay finished.");
      status.textContent = "Replay complete";
    }
  }

  // Start the replay
  step();
});

To make debugging a bit more hands-on, I added a ghost replay. Once you’ve recorded a session, you can hit replay and watch the UI act it out, almost like a phantom player is running the pad. A new Replay Ghost button shows up in the panel for this.

Hit Record, mess around with the controller a bit, stop, then replay. The UI just echoes everything you did, like a ghost following your inputs.

Why bother with these extras?

  • Recording/export makes it easy for testers to show exactly what happened.
  • Snapshots freeze a moment in time, super useful when you’re chasing odd bugs.
  • Ghost replay is great for tutorials, accessibility checks, or just comparing control setups side by side.

At this point, it’s not just a neat demo anymore, but something you could actually put to work.

Real-World Use Cases

Now we’ve got this debugger that can do a lot. It shows live input, records logs, exports them, and even replays stuff. But the real question is: who actually cares? Who’s this useful for?

Game Developers

Controllers are part of the job, but debugging them? Usually a pain. Imagine you’re testing a fighting game combo, like ↓ → + punch. Instead of praying you pressed it the same way twice, you record it once and replay it. Done. Or you swap JSON logs with a teammate to check if your multiplayer code reacts the same on their machine. That’s huge.

Accessibility Practitioners

This one’s close to my heart. Not everyone plays with a “standard” controller. Adaptive controllers throw out weird signals sometimes. With this tool, you can see exactly what’s happening. Teachers, researchers, whoever. They can grab logs, compare them, or replay inputs side-by-side. Suddenly, invisible stuff becomes obvious.

Quality Assurance Testing

Testers usually write notes like “I mashed buttons here and it broke.” Not very helpful. Now? They can capture the exact presses, export the log, and send it off. No guessing.

Educators

If you’re making tutorials or YouTube vids, ghost replay is gold. You can literally say, “Here’s what I did with the controller,” while the UI shows it happening. Makes explanations way clearer.

Beyond Games

And yeah, this isn’t just about games. People have used controllers for robots, art projects, and accessibility interfaces. Same issue every time: what is the browser actually seeing? With this, you don’t have to guess.

Conclusion

Debugging a controller input has always felt like flying blind. Unlike the DOM or CSS, there’s no built-in inspector for gamepads; it’s just raw numbers in the console, easily lost in the noise.

With a few hundred lines of HTML, CSS, and JavaScript, we built something different:

  • A visual debugger that makes invisible inputs visible.
  • A layered CSS system that keeps the UI clean and debuggable.
  • A set of enhancements (recording, exporting, snapshots, ghost replay) that elevate it from demo to developer tool.

This project shows how far you can go by mixing the Web Platform’s power with a little creativity in CSS Cascade Layers.

The tool I’ve just walked through is open source. You can clone the GitHub repo and try it for yourself.

But more importantly, you can make it your own. Add your own layers. Build your own replay logic. Integrate it with your game prototype. Or even use it in ways I haven’t imagined. For teaching, accessibility, or data analysis.

At the end of the day, this isn’t just about debugging gamepads. It’s about shining a light on hidden inputs, and giving developers the confidence to work with hardware that the web still doesn’t fully embrace.

So, plug in your controller, open up your editor, and start experimenting. You might be surprised at what your browser and your CSS can truly accomplish.




Making the Most of Your Docker Hardened Images Trial – Part 1


First steps: Run your first secure, production-ready image

Container base images form the foundation of your application security. When those foundations contain vulnerabilities, every service built on top inherits the same risk. 

Docker Hardened Images addresses this at the source. These are continuously-maintained, minimal base images designed for security: stripped of unnecessary packages, patched proactively, and built with supply chain attestation. Instead of maintaining your own hardened bases or accepting whatever vulnerabilities ship with official images, you get production-ready foundations with near-zero CVEs and compliance metadata baked in.

What to Expect from Your 30-Day Trial?

You’ve got 30 days to evaluate whether Docker Hardened Images fits your environment. That’s enough time to answer the crucial question: Would this reduce our security debt without adding operational burden?

It’s important to note that while DHI provides production‑grade images, this trial isn’t about rushing into production. Its primary purpose is educational: to let you experience the benefits of a hardened base image for supply‑chain security by testing it with the actual services in your stack and measuring the results.

By the end of the trial, you should have concrete results:

  • CVE counts before and after, 
  • engineering effort required per image migration, and
  • whether your team would actually use this. 

Testing with real projects always outshines promises.

The DHI quickstart guide walks through the basic steps. This post covers what the docs don’t: the confusion points you may hit, which metrics actually matter, and how to evaluate the results.

Step 1: Understanding the DHI Catalog 

To get started with your free trial, you must be an organization owner or editor. This means you get your own repository where you can mirror images, but we’ll come back to that later.

If you already use Docker Hub, the DHI catalog should look familiar.


The most obvious difference is the little lock icons indicating a Hardened Image. But what exactly does that mean?

The core concept behind hardened images is that they present a minimal attack surface, which in practical terms means that only the strict minimum is included (as opposed to “batteries-included” distributions like Ubuntu or Debian).

Think of it like this: the hardened images maintain compatibility with the distro’s core characteristics (libc, filesystem hierarchy, package names) while removing the convenience layers that increase attack surface (package managers, extra utilities, debugging tools).

So the “OS” designation you see below every DHI means the image is built on top of that distribution (it uses the same base operating system), but with security hardening and package minimization applied.


Sometimes you need those convenient Linux utilities for development or testing purposes. This is where variants come into play.


The catalog shows multiple variants for each base image: standard versions, dev versions, and FIPS versions.

The variant choice matters for your security posture. If you can run your application without a package manager in the final image (using multi-stage builds, for example), always choose the standard variant. Fewer tools in the container means fewer potential vulnerabilities.

Here’s what they mean:

Standard variants (e.g., node-base:24-debian13):

  • Minimal runtime images
  • No package managers (apk, apt, yum removed)
  • Production-ready
  • Smallest attack surface

FIPS variants (e.g., node-base:24-debian13-fips):

FIPS variants come in both runtime and build-time flavors. They use cryptographic modules that have been validated under FIPS 140, a U.S. government standard for secure cryptographic operations, and are required for highly regulated environments.

Dev variants (e.g., node-base:24-debian13-dev):

  • Include package managers for installing additional dependencies
  • Useful during development or when you need to add packages at build time
  • Larger attack surface (but still hardened)
  • Not recommended for production

The catalog includes dozens of base images: language runtimes (Python, Node, Go), distros (Alpine, Ubuntu, Debian), and specialized tools (nginx, Redis).

Instead of trying to evaluate everything from the start, narrow it down: pick one image you use frequently (Alpine, Python, and Node are common starting points) for the first test.

What “Entitlements” and “Mirroring” Actually Mean

You can’t just docker pull directly from Docker’s DHI catalog. Instead, you mirror images to your organization’s namespace first. Here’s the workflow:

  1. Your trial grants your organization access to a certain number of DHIs through mirroring: these are called entitlements.
  2. As an organization owner, you first create a copy of the DHI image in your namespace (e.g., yourorg/dhi-node), which means you are mirroring the image and will automatically receive new updates in your repository.
  3. Your team pulls from your org’s namespace, not Docker’s.

Mirroring takes a few minutes and copies all available tags. Once complete, the image appears in your organization’s repositories like any other image.

Why this model? Two reasons:

  • Access control: Your org admins control which hardened images your team can use
  • Availability: Mirrored images remain available even if your subscription changes

The first time you encounter “mirror this image to your repository,” it feels like unnecessary friction. But once you realize it’s a one-time setup per base image (not per tag), it makes sense. You mirror node-base once and get access to all current and future Node versions.

Now that you’ve mirrored a hardened image, it’s time to test it with an actual project. The goal is to discover friction points early, when stakes are low.



Step 2: Your First Real Migration Test

Choose a project that is:

  • Simple enough to debug quickly if something breaks (fewer moving parts)
  • Real enough to represent actual workloads
  • Representative of your stack

Drop-In Replacement


Open your Dockerfile and locate the FROM instruction. The migration is straightforward:

# Before
FROM node:22-bookworm-slim
# After
FROM <your-org-namespace>/dhi-node:22-debian13-fips

Replace your organization’s namespace and choose the appropriate tag. If you were using a generic tag like node:22, switch to a specific version tag from the hardened catalog (like 22-debian13-fips). Pinning to specific versions is a best practice anyway – hardened images just make it more explicit.

For other language runtimes, the pattern is similar:

# Python example
FROM python:3.12-slim
# becomes
FROM <your-org-namespace>/dhi-python-base:3.12-bookworm

# Node example
FROM node:20-alpine
# becomes
FROM <your-org-namespace>/dhi-node-base:20.18-alpine3.20


Build the image with your new base:

docker build . -t my-service-hardened

Watch the build output: if your Dockerfile assumes certain utilities exist (like wget, curl, or package managers), the build may fail. This is expected. Hardened bases strip unnecessary tools to reduce attack surface. Here are some common build failures and fixes:

Missing package manager (apt, yum):

  • If you’re installing packages in your Dockerfile, you’ll need the dev variant, and probably a multi-stage build: install dependencies in a builder stage using a dev variant, then copy the artifacts into a minimal runtime stage based on a standard or FIPS hardened image (see the Dockerfile sketch below)


Missing utilities (wget, curl, bash):

  • Network tools are removed unless you’re using a debug variant
  • Solution: same as above, install what you need explicitly in a builder stage, or verify you actually need those tools at runtime

Different default user:

  • Some hardened images run as non-root by default
  • If your application expects to write to certain directories, you may need to adjust permissions or use USER directives appropriately
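To make that concrete, here is a minimal sketch of the multi-stage pattern with a hardened Node base. The image names and tags are placeholders for whatever you mirrored into your namespace, and server.js stands in for your application’s entry point.

# Builder stage: the dev variant still includes npm and a package manager
FROM <your-org-namespace>/dhi-node:22-debian13-dev AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: the minimal hardened variant, no package manager or extra tools
FROM <your-org-namespace>/dhi-node:22-debian13
WORKDIR /app
COPY --from=builder /app ./
CMD ["node", "server.js"]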

For my Node.js test, the build succeeded without changes. The hardened Node base contained everything the runtime needed – npm dependencies installed normally, and the packages removed were system utilities my application never touched.


Verify It Runs


Build success doesn’t mean runtime success. Start the container and verify it behaves correctly:

docker run --rm -p 3000:3000 my-service-hardened

Test the service:

  • Does it start without errors?
  • Do API endpoints respond correctly?
  • Are logs written as expected?
  • Can it connect to databases or external services?


Step 3: Comparing What Changed

Before moving to measurement, build the original version alongside the hardened one:

# Switch to your main branch
git checkout main
# Build original version
docker build . -t my-service-original
# Switch back to your test branch with hardened base
git checkout dhi-test
# Build hardened version
docker build . -t my-service-hardened


Now you have two images to compare: one with the official base, one with the hardened base. Now comes the evaluation: what actually improved, and by how much?

Docker Scout

Docker Scout compares images and reports on vulnerabilities, package differences, and size changes. If you haven’t enrolled your organization with Scout yet, you’ll need to do that first (it’s free for the comparison features we’re using).


Run the comparison (here we are comparing Node base images):

docker scout compare --to <your-org-namespace>/dhi-node:24.11-debian13-fips node:24-bookworm-slim

Scout outputs a detailed breakdown. Here’s what we found when comparing the official Node.js image to the hardened version.


1. Vulnerability Reduction


The Scout output shows CVE counts by severity:

                     Official Node          Hardened DHI
                      24-bookworm-slim       24.11-debian13-fips
Critical              0                      0
High                  0                      0
Medium                1                      0  ← eliminated
Low                   24                     0  ← eliminated
Total                 25                     0


The hardened image achieved complete vulnerability elimination. While the official image already had zero Critical/High CVEs (good baseline), it contained 1 Medium and 24 Low severity issues – all eliminated in the hardened version.
Medium and Low severity vulnerabilities matter for compliance frameworks. If you’re pursuing SOC2, ISO 27001, or similar certifications (especially in regulated industries with strict security standards), demonstrating zero CVEs across all severity levels significantly simplifies audits.


2. Package Reduction 


Scout shows a dramatic difference in package count:

                     Official Node          Hardened DHI
Total packages        321                    32
Reduction             —                      289 packages (90%)


The hardened image removed 289 packages including:

  • apt (package manager)
  • gcc-12 (entire compiler toolchain)
  • perl (scripting language)
  • bash (replaced with minimal shell)
  • dpkg-dev (Debian package tools)
  • gnupg2, gzip, bzip2 (compression and crypto utilities)
  • dozens of libraries and system utilities


These are tools your Node.js application never uses at runtime. Removing them drastically reduces attack surface: 90% fewer packages means 90% fewer potential targets for exploitation.
This is important because even if packages have no CVEs today, they represent future risk. Every utility, library, or tool in your image could become a vulnerability tomorrow. The hardened base eliminates that entire category of risk.


3. Size Difference


Scout reports image sizes:

                     Official Node          Hardened DHI
Image size            82 MB                  48 MB
Reduction             —                      34 MB (41.5%)


The hardened image is 41.5% smaller – that’s 34 MB saved per image. For a single service, this might seem minor. But multiply across dozens or hundreds of microservices, and the benefits start to become obvious: faster pulls, lower storage costs, and reduced network transfer.


4. Extracting and Reading the SBOM

One of the most valuable compliance features is the embedded SBOM (Software Bill of Materials). Unlike many images where you’d need to generate the SBOM yourself, hardened images include it automatically.


Extract the SBOM to see every package in the image:

docker scout sbom <your-org-namespace>/dhi-node:24.11-debian13-fips --format list

This outputs a complete package inventory:

Name                  Version          Type
base-files            13.8+deb13u1     deb
ca-certificates       20250419         deb
glibc                 2.41-12          deb
nodejs                24.11.0          dhi
openssl               3.5.4            dhi
openssl-provider-fips 3.1.2            dhi
...


The Type column shows where packages came from:

  • deb: Debian system packages
  • dhi: Docker Hardened Images custom packages (like FIPS-certified OpenSSL)
  • docker: Docker-managed runtime components

The SBOM includes name, version, license, and package URL (purl) for each component – everything needed for vulnerability tracking and compliance reporting.
You can easily export the SBOM in SPDX or CycloneDX format for ingestion by vulnerability tracking tools:

# SPDX format (widely supported)
docker scout sbom <your-org>/dhi-node:24.11-debian13-fips \
  --format spdx \
  --output node-sbom.json
# CycloneDX format (OWASP standard)
docker scout sbom <your-org>/dhi-node:24.11-debian13-fips \
  --format cyclonedx \
  --output node-sbom-cyclonedx.json

Beyond the SBOM, hardened images include 17 different attestations covering SLSA provenance, FIPS compliance, STIG scans, vulnerability scans, and more. We’ll explore how to verify and use these attestations in Part 2 of this blog series.

Trust, But Verify


You’ve now:
✅ Eliminated 100% of vulnerabilities (25 CVEs → 0)
✅ Reduced attack surface by 90% (321 packages → 32)
✅ Shrunk image size by 41.5% (82 MB → 48 MB)
✅ Extracted the SBOM for compliance tracking


The results look good on paper, but verification builds confidence for production. How do you verify these security claims independently? In Part 2, we’ll explore:

  • Cryptographic signature verification on all attestations
  • Build provenance traced to public GitHub source repositories
  • Deep-dive into FIPS, STIG, and CIS compliance evidence
  • SBOM-driven vulnerability analysis with exploitability context





Investigating the Great AI Productivity Divide: Why Are Some Developers 5x Faster?


AI-powered developer tools claim to boost your productivity, doing everything from intelligent auto-complete to [fully autonomous feature work](https://openai.com/index/introducing-codex/). 

But the productivity gains users report have been something of a mixed bag. Some groups claim 3-5x (or more) productivity boosts, while other devs report no benefit at all—or even losses of up to 19%.

I had to get to the bottom of these contradictory reports.

As a software engineer, producing code is a significant part of my role. If there are tools that can multiply my output that easily, I have a professional responsibility to look into the matter and learn to use them.

I wanted to know where the divide comes from and, more importantly, what separates the high-performing groups from the rest. This article reports on what I found.

The State of AI Developer Tools in 2025

AI dev tooling has achieved significant adoption: 84% of StackOverflow survey respondents in 2025 said they’re using or planning to use AI tools, up from 76% in 2024, and 51% of professional developers use these tools daily.

However, AI dev tooling is a fairly vague category. The space has experienced massive fragmentation. When AI tools first started taking off in the mainstream with the launch of GitHub Copilot in 2021, they were basically confined to enhanced IDE intellisense/autocomplete, and sometimes in-editor chat features. Now, in 2025, the industry is seeing a shift away from IDEs toward CLI-based tools like Claude Code.

Some AI enthusiasts are even suggesting that IDEs are obsolete altogether, or soon will be.

That seems like a bold claim in the face of the data, though.

While adoption may be up, positive sentiment about AI tools is down to 60% from 70% in 2024. A higher portion of developers also actively distrust the accuracy of AI tools (46%) compared to those who trust them (33%).

These stats paint an interesting picture. Developers seem to be reluctantly (or perhaps enthusiastically at first) adopting these tools—likely in no small part due to aggressive messaging from AI-invested companies—only to find that these tools are perhaps not all they’ve been hyped up to be.

The tools I’ve mentioned so far are primarily those designed for the production and modification of code. Other AI tool categories cover areas like testing, documentation, debugging, and DevOps/deployment practices. In this article, I’m focusing on code production tools as they relate to developer productivity, whether they be in-IDE copilots or CLI-based agents.

What the Data Says about AI Tools’ Impact on Developer Productivity

Individual developer sentiment is one thing, but surely it can be definitively shown whether or not these tools can live up to their claims?

Unfortunately, developer productivity is difficult to measure at the best of times, and things don’t get any easier when you introduce the wildcard of generative AI. 

Research into how AI tools influence developer productivity has been quite lacking so far, likely in large part because productivity is so difficult to quantify. There have been only a few studies with decent sample sizes, and their methodologies have varied significantly, making it difficult to compare the data on a 1:1 basis.

Nevertheless, there are a few datapoints worth examining.

In determining which studies to include, I tried to find two to four studies for each side of the divide that represented a decent spread of developers with varying levels of experience, working in different kinds of codebases, and using different AI tools. This diversity makes it harder to compare the findings, but homogenous studies would not produce meaningful results, as real-world developers and their codebases vary wildly.

Data that Shows AI Increases Developer Productivity

In the “AI makes us faster” corner, studies like this one indicate that “across three experiments and 4,867 developers, [their] analysis reveals a 26.08% increase (SE: 10.3%) in completed tasks among developers using the AI tool. Notably, less experienced developers had higher adoption rates and greater productivity gains.”

This last point—that less experienced devs have greater productivity gains—is worth remembering; we’ll come back to it.

In a controlled study by GitHub, developers who used GitHub Copilot completed tasks 55% faster than those who did not. This study also found that 90% of developers found their job more fulfilling with Copilot, and 95% said they enjoyed coding more when using it. While it may not seem like fulfillment and enjoyment are directly tied to productivity, there is evidence that suggests they’re contributing factors.

I couldn’t help but notice that the most robust studies finding AI improves developer productivity are tied to companies that produce AI developer tools. The first study mentioned above has authors from Microsoft—an investor in OpenAI—and funding from the MIT Generative AI Impact Consortium, whose founding members include OpenAI. The other study was conducted by GitHub, a subsidiary of Microsoft and creator of Copilot, a leading AI developer tool. While it doesn’t invalidate the research or the findings, it is worth noting.

Data that Shows AI Tools Do Not Increase Productivity

On the other side of the house, studies have also found little to no gains from AI tooling. 

Perhaps most infamous among these is the METR study from July 2025. Even though developers who participated in the study predicted that AI tools would make them 24% faster, the tools actually made them 19% slower when completing assigned tasks.

A noteworthy aspect of this study was that the developers were all working in fairly complex codebases that they were highly familiar with.

Another study by Uplevel points in a similar direction. Surveying 800 developers, they found no significant productivity gains in objective measurements, such as cycle time or PR throughput. In fact, they found that developers who use Copilot introduced a 41% increase in bugs, suggesting a negative impact on code quality, even if there wasn’t an impact on throughput.

What’s Going On?

How can it be that the studies found such wildly different results?

I must acknowledge again: productivity is hard to measure, and generative AI is notoriously non-deterministic. What works well for one developer might not work for another developer in a different codebase.

However, I do believe some patterns emerge from these seemingly contradictory findings.


Firstly, AI does deliver short-term productivity and satisfaction gains, particularly for less experienced developers and in well-scoped tasks. However, AI can introduce quality risks and slow teams down when the work is complex, the systems are unfamiliar, or developers become over-reliant on the tool.

Remember the finding that less experienced developers had higher adoption rates and greater productivity gains? While it might seem like a good thing at first, it also holds a potential problem: by relying on AI tools, you run the risk of stunting your own growth. You are also not learning your codebase as fast, which keeps you reliant on AI. We can even take it a step further: do less experienced developers only think they are being more productive, while actually lacking the familiarity with the code needed to understand the impact of the changes being made?

Will these risks materialize? Who knows. If I were a less experienced developer, I would have wanted to know about them, at least.

My Conclusions

My biggest conclusion from this research is that developers shouldn’t expect anything in the order of 3-5x productivity gains. Even if you manage to produce 3-5x as much code with AI as you would if you were doing it manually, the code might not be up to a reasonable standard, and the only way to know for sure is to review it thoroughly, which takes time.

Research findings suggest a more reasonable expectation is that you can increase your productivity by around 20%.


If you’re a less experienced developer, you’ll likely gain more raw output from AI tools, but this might come at the cost of your growth and independence.

My advice to junior developers in this age of AI tools is probably nothing you haven’t heard before: learn how to make effective use of AI tools, but don’t assume that it makes traditional learning and understanding obsolete. Your ability to get value from these tools depends on knowing the language, the systems, and the context first. AI makes plenty of mistakes, and if you hand it the wheel, it can generate broken code and technical debt faster than you ever could on your own. Use it as a tutor, a guide, and a way to accelerate learning. Let it bridge gaps, but aim to surpass it.

If you’re already an experienced developer, you almost certainly know more about your codebase than the AI does. So while it might type faster than you, you won’t get as much raw output from it, purely because you can probably make changes with more focused intent and specificity than it can. Of course, your mileage may vary, but AI tools will often try to do the first thing they think of, rather than the best or most efficient thing.

That is not to say you shouldn’t use AI. But you shouldn’t see it as a magic wand that will instantly 5x your productivity.

Like any tool, you need to learn how to use AI tools to maximize your efficacy. This involves prompt crafting, reviewing outputs, and refining subsequent inputs, something I’ve written about in another post. Once you get this workflow down, AI tools can save you significant time on code implementation while you focus on understanding exactly what needs to be done.

If AI tooling is truly a paradigm shift, it stands to reason that you would need to change your ways of working to get the most from it. You cannot expect to inject AI into your current workflow and reap the benefits without significant changes to how you operate.

For me, the lesson is clear: productivity gains don’t come from the tools alone; they come from the people who use them and the processes they follow. I’ve seen enough variation across developers and codebases to know this isn’t just theory, and the findings from these studies say the same thing: same tools, different outcomes.

The difference is always the developer.


Auth0 CLI: Leveling Up Your Developer Workflow with Powerful Enhancements

Discover the recent Auth0 CLI enhancements, built for deeper API coverage, interactive logs, and streamlined automation based on community feedback.


Unlocking the full power of Copilot code review: Master your instructions files


Copilot code review (CCR) helps you automate code reviews and ensure your project meets your team’s standards. We recently added support for both copilot-instructions.md and path-specific *.instructions.md files, so now you can customize Copilot’s behavior to fit your workflow. This flexibility empowers you to guide Copilot with clear, actionable rules for effective and consistent reviews.

But with this flexibility comes some uncertainty:

  • When is Copilot code review reading your instructions?
  • Why doesn’t it always follow your instructions exactly?
  • How can you ensure Copilot code review listens to your guidance?

While you can format your instructions file however you want, Copilot code review is non-deterministic and has specific limitations that will evolve as we improve the product. Understanding how to guide Copilot within its current capabilities is key to getting the most from your reviews.

After reviewing many instructions files, common questions, and feedback, we’ve created this guide to help you write instructions that really work—and avoid some pitfalls along the way.

⚠️ Note: While these tips are designed for Copilot code review, you might find some of them useful when writing instructions files for other Copilot products.

General tips

Getting started is the hardest part. Here are some things to keep in mind when starting with your instructions. 

  • Keep it concise: Copilot works best with focused, short instructions. Start small and iterate. Even a single line can help guide Copilot. On the flip side, long instructions files (over ~1,000 lines) can lead to inconsistent behavior.
  • Structure matters: Use headings and bullet points to keep information organized and easy for Copilot to process.
  • Be direct: Short, imperative rules are more effective than long paragraphs.
  • Show examples: Demonstrate concepts with sample code or explanations—just like you would with a teammate.

Repo-wide vs. path-specific instructions

In addition to the centralized repo-wide copilot-instructions.md file, we recently expanded your customization options by enabling Copilot code review to read any NAME.instructions.md file with an applyTo frontmatter in your .github/instructions directory. It can be confusing to have two seemingly similar options for customization, but each provides different value! Here are some tips for how to differentiate between the two, and use them both effectively.

  • Place language-specific rules in *.instructions.md files and then use the applyTo frontmatter property to target specific languages (e.g., applyTo: **/*.py or applyTo: documentation/*.md).
  • Place rules meant specifically for only Copilot code review or only Copilot coding agent in *.instructions.md files, and use the excludeAgent frontmatter property to prevent either agent from reading your file.
  • Organize different topics (e.g., security, language-specific guidelines, etc.) into separate *.instructions.md files.
  • Reserve general instructions, team standards, and guidelines for the whole repository for copilot-instructions.md (e.g., “Flag use of deprecated libraries across the codebase”).

Rules of thumb

We’ll get to what not to do in a moment. First, what to do: effective instructions files often include:

  1. Clear titles
  2. A purpose or scope statement to clarify intent
  3. Lists of guidelines/rules instead of dense paragraphs
  4. Best practice recommendations
  5. Style conventions (indentation, naming, organization)
  6. Sample code blocks for clarification
  7. Section headings for organization
  8. Task-specific instructions (e.g., for tests or endpoints)
  9. Language/tooling context
  10. Emphasis on readability and consistency
  11. Explicit directives for Copilot (“Prefer X over Y”)

What not to do

Certain types of instructions aren’t supported by Copilot code review. Here are common pitfalls to avoid:

  • Trying to change the UX or formatting of Copilot comments (e.g., “Change the font of Copilot code review comments”).
  • Trying to modify the “Pull Request Overview” comment (e.g., prompting to remove it or change its purpose to something other than providing an overview of the pull request).
  • Requesting that Copilot code review perform tasks outside of code review (e.g., trying to modify product behavior, like asking it to block a pull request from merging).
  • Including external links. Copilot won’t follow them. You should copy relevant info into your instructions files instead.
  • Adding requests meant to generally and non-specifically improve behavior (e.g., “Be more accurate” or “Identify all issues”). Copilot code review is already tuned to do this and adding language like this adds more noise that confuses the LLM.

Starting off with a blank Markdown file can feel daunting. Here’s one structure that you can use, ready to copy-paste into your instructions file as a starting point!

# [Your Title Here]
*Example: ReactJS Development Guidelines*

## Purpose & Scope
Briefly describe what this file covers and when to use it.

---

## Naming Conventions
- [Add rules here, e.g., "Use camelCase for variable names."]

## Code Style
- [Add rules here, e.g., "Indent using 2 spaces."]

## Error Handling
- [Add rules here.]

## Testing
- [Add rules here.]

## Security
- [Add rules here.]

---

## Code Examples
```js
// Correct pattern
function myFunction() { ... }

// Incorrect pattern
function My_function() { ... }
```

---

## [Optional] Task-Specific or Advanced Sections

### Framework-Specific Rules
- [Add any relevant rules for frameworks, libraries, or tooling.]

### Advanced Tips & Edge Cases
- [Document exceptions, advanced patterns, or important caveats.]

Example: A typescript.instructions.md file

Now let’s implement all of these guidelines in an example path-specific instruction file.

---
applyTo: "**/*.ts"
---
# TypeScript Coding Standards
This file defines our TypeScript coding conventions for Copilot code review.

## Naming Conventions

- Use `camelCase` for variables and functions.
- Use `PascalCase` for class and interface names.
- Prefix private variables with `_`.

## Code Style

- Prefer `const` over `let` when variables are not reassigned.
- Use arrow functions for anonymous callbacks.
- Avoid using `any` type; specify more precise types whenever possible.
- Limit line length to 100 characters.

## Error Handling

- Always handle promise rejections with `try/catch` or `.catch()`.
- Use custom error classes for application-specific errors.

## Testing

- Write unit tests for all exported functions.
- Use [Jest](https://jestjs.io/) for all testing.
- Name test files as `<filename>.test.ts`.

## Example

```typescript
// Good
interface User {
  id: number;
  name: string;
}

const fetchUser = async (id: number): Promise<User> => {
  try {
    // ...fetch logic
  } catch (error) {
    // handle error
  }
};

// Bad
interface user {
  Id: number;
  Name: string;
}

async function FetchUser(Id) {
  // ...fetch logic, no error handling
}

Get started

Getting started with Copilot code review

New to Copilot code review? Get started by adding Copilot as a reviewer to your pull requests! 

Adding new custom instructions

Create a copilot-instructions.md file in the .github directory of your repository, or a path-specific *.instructions.md file within the .github/instructions directory in your repository, and use this post and examples in the awesome-copilot repository for inspiration.

Or just ask Copilot coding agent to generate an instructions file for you, and iterate from there. 

Editing existing custom instructions

Have existing custom instructions for Copilot code review that you think could use some editing after reading this post, but don’t know where to begin? Have Copilot coding agent edit your file for you!

  1. Navigate to the agents page at github.com/copilot/agents.
  2. Using the dropdown menu in the prompt field, select the repository and branch where you want Copilot to edit custom instructions.

Copy the following prompt, editing it for your use-case as needed. Make sure to modify the first sentence to specify which instruction files you want it to edit. This prompt will tailor your instructions file for Copilot code review, so it may make unwanted edits if used for instruction files meant for other agents.

**Prompt for Copilot Coding Agent: Revise My Instructions File**

---
Review and revise my existing `NAME-OF-INSTRUCTION-FILES` files. Preserve my file's meaning and intention—do NOT make unnecessary changes or edits. Only make improvements where needed, specifically:

- Remove unsupported or redundant content.  
  Unsupported content includes:
  - instructions to change Copilot code review comment formatting (font, font size, adding headers, etc)
  - instructions to change "PR Overview" comment content
  - instructions for product behavior changes outside of existing code review functionality (like trying to block a pull request from merging)
  - Vague, non-specific directives like “be more accurate”, "identify all issues" or similar
  - Directives to “follow links” or inclusion of any external links
- Reformat sections for clarity if they do not have any structure. 
  - If my file does not have any structure, reformat into the structure below or similar, depending on the topics covered in the file. 
  - Do not change the intent or substance of the original content unless the content is not supported.
- Organize content with section headings and bullet points or numbered lists.
- Add sample code blocks if clarification is needed and they are missing.
- When applicable, separate language-specific rules into path-specific instructions files with the format `NAME.instructions.md`, with the `applyTo` property, if not already done.
- If the file is over 4000 characters long, prioritize shortening the file by identifying redundant instructions, instructions that could be summarized, and instructions that can be removed due to being unsupported.

**Example Structure:**

# Python Coding Standards

Guidelines for Python code reviews with Copilot.

## Naming Conventions

- Use `snake_case` for functions and variables.
- Use `PascalCase` for class names.

## Code Style

- Prefer list comprehensions for simple loops.
- Limit lines to 80 characters.

## Error Handling

- Catch specific exceptions, not bare `except:`.
- Add error messages when raising exceptions.

## Testing

- Name test files as `test_*.py`.
- Use `pytest` for tests.

## Example

```python
# Good
def calculate_total(items):
    return sum(items)

# Bad
def CalculateTotal(Items):
    total = 0
    for item in Items:
        total += item
    return total
```

---

### Framework-Specific Rules
- For Django, use class-based views when possible.

### Advanced Tips & Edge Cases
- Use type hints for function signatures.

Then click Start task or press Return.

Copilot will start a new session, which will appear in the list below the prompt box. Copilot will create a draft pull request, modify your custom instructions, push them to the branch, then add you as a reviewer when it has finished. This will trigger a notification for you.

Resources to check out

Customizing with Copilot instructions files makes code review work for you—give it a try and see the difference in your workflow!

The post Unlocking the full power of Copilot code review: Master your instructions files appeared first on The GitHub Blog.
