Explaining Contract Tracked Changes Automatically Using .NET C# and AI

Learn how to use AI and .NET C# to automatically explain changes to contracts, improving the document review and collaboration processes. This comprehensive guide provides practical implementation strategies and best practices.


Developers are Solving The Wrong Problem


Everyone is either offended or excited about “vibe coding.” It’s all the rage and going to solve all your problems,…

The post Developers are Solving The Wrong Problem appeared first on Caseysoftware.

Security Is a Developer Experience Problem, Rooted in Our Foundations


For more than a decade, the industry has tried to improve software security by pushing it closer to developers. We moved scanners into CI, added security checks to pull requests, and asked teams to respond faster to an ever-growing stream of vulnerabilities. And yet, the underlying problems have not gone away.

The issue is not that developers care too little about security. It is that we keep trying to fix security at the edges, instead of fixing the foundations. Hardened container images change that dynamic by reducing attack surface and eliminating much of the low-signal security noise before it ever reaches development teams.

Security Fails When It Becomes Noise

Most developers I know care deeply about building secure software. What they do not care about is security theater.

The way we handle security issues today, especially CVEs, often creates a steady stream of low-signal work for development teams. Alerts fire constantly. Many are technically valid but practically irrelevant. Others ask developers to patch components they did not choose and do not meaningfully control. Over time, this turns security into background noise.

When that happens, the system has already failed. Developers are forced to context switch, teams burn time debating severity scores, and real risk gets buried alongside issues that do not matter. This is not a motivation problem. It is a system design problem.

The industry responded by trying to “shift left” and push security earlier in the development cycle. In practice, this often meant pushing more work onto developers without giving them better defaults or foundations. The result was more toil, more alerts, and more reasons to tune it all out.

Shifting left was the right instinct but the wrong execution. The goal should not be making developers do more security work. It should be making secure choices the painless, obvious default so developers do less security work while achieving better outcomes.

Why Large Images Were the Default

To understand how we got here, it helps to be honest about why most teams start with large, generic base images.

When Docker launched in 2013, containers were unfamiliar. Developers reached for what they knew: full Linux distributions and familiar Debian or Ubuntu environments with all the debugging tools they relied on. 

Large images that had everything were a rational default. This approach optimized for ease and flexibility. When everything you might ever need is already present, development friction goes down. Builds fail less often. Debugging is simpler. Unknown dependencies are less likely to surprise you at the worst possible time.

For a long time, doing something more secure has required real investment. Teams needed a platform group that could design, harden, and continuously maintain custom base images. That work had to compete with product features and infrastructure priorities. Most organizations never made that tradeoff, and that decision was understandable.

So the industry converged on a familiar pattern. Start with a big image. Ship faster in the short term. Deal with the consequences later.

Those consequences compound. Large images dramatically increase the attack surface. They accumulate stale dependencies. They generate endless CVEs that developers are asked to triage long after the original choice was made. What began as a convenience slowly turns into persistent security and operational drag that slows development velocity and software shipments.

Secure Foundations Can Improve Developer Experience

There is a widely held belief that better security requires worse developer experience. In practice, the opposite is often true.

Starting from a secure, purpose-built foundation, like Docker Hardened Images, reduces complexity rather than adding to it. Smaller images contain fewer packages, which means fewer vulnerabilities and fewer alerts. Developers spend less time chasing low-impact CVEs and more time building actual product.

The key is that security is built into the foundation itself. Image contents are explicit and reproducible. Supply chain metadata like signatures, SBOMs, and provenance are part of the image by default, not additional steps developers have to wire together themselves. At the same time, these foundations are easy to customize securely. Teams can extend or tweak their images without undoing the hardening, thanks to predictable layering and supported customization patterns. This eliminates entire categories of hidden dependencies and security toil that would otherwise fall on individual teams.

There are also tangible performance benefits. Smaller images pull faster, build faster, and deploy faster. In larger environments, these gains add up quickly.

Importantly, this does not require sacrificing flexibility. Developers can still use rich build environments and familiar tools, while shipping minimal, hardened runtime images into production.
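
As a sketch of that pattern, a multi-stage Dockerfile can use a full-featured image for building and copy only the finished artifact into a minimal, hardened runtime base. The image names below are placeholders, not references to specific Docker Hardened Images:

# Build stage: full distribution with compilers and familiar tooling
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: minimal hardened base (placeholder name -- substitute
# your organization's hardened runtime image)
FROM registry.example.com/hardened/static:latest
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]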

This is one of the rare cases where improving security directly improves developer experience. The tradeoff we have accepted for years is not inevitable.

What Changes When Secure Foundations Are the Default

When secure foundations and hardened images become the default starting point, the system behaves differently. Developers keep using the same Docker workflows they already know. The difference is the base they start from. 

Security hardening, patching, and supply chain hygiene are handled once in the foundation instead of repeatedly in every service. Secure foundations are not limited to operating system base images. The same principles apply to the software teams actually build on top of, such as databases, runtimes, and common services. Starting from a hardened MySQL or application image removes an entire class of security and maintenance work before a single line of application code is written.

This is the problem Docker Hardened Images are designed to address. The same hardening principles are applied consistently across widely used open source container images, not just at the operating system layer, so teams can start from secure defaults wherever their applications actually begin. The goal is not to introduce another security workflow or tool. It is to give developers better building blocks from day one.

Because the foundation is maintained by experts, teams see fewer interruptions. Fewer emergency rebuilds. Fewer organization-wide scrambles when a widely exploited vulnerability appears. Security teams can focus on adoption and posture instead of asking dozens of teams to solve the same problem independently.

The result is less security toil and more time spent on product work. That is a win for developers, security teams, and the business.

Build on Better Defaults

For years, we have tried to improve security by asking developers to do more. Patch faster. Respond to more alerts. Learn more tools. That approach does not scale.

Security scales when defaults are strong. When foundations are designed to be secure and maintained over time. When developers are not forced to constantly compensate for decisions that were made far below their code.

If we want better security outcomes without slowing teams down, we should start where software actually starts. That requires secure foundations, like hardened images, that are safe by default. With better foundations, security becomes quieter, development becomes smoother, and the entire system works the way it should.

That is the bar we should be aiming for.


Free Dockhand Tool Simplifies Docker Container Management


Do you have too many Docker containers running? A quick check of my home lab shows that, on just a single server, I have 10 running Docker containers. How do I manage them, let alone keep track of what’s what?

I could deploy Portainer, but that’s overkill at this point. Besides, Portainer is more of an enterprise app now, so using it on a home lab, in a development environment, or in a small business just isn’t all that practical.

That’s where an app like Dockhand comes into play.

Dockhand is a powerful and easy-to-use container manager/monitor that is free to use for home labs, and I have found it indispensable for keeping tabs on my containers. With Dockhand, I can view logs, access the shell, view stacks, images, volumes, networks, registries, activities, and schedules. I can stop, pause, restart, edit, and delete containers, create health checks, check for and apply updates, and so much more.

You can even create and deploy containers!

Once you start using Dockhand, you’ll wonder how you got along without it. In fact, I find I can actually do more with my Docker containers when using Dockhand than without. This app just simplifies everything.

But how do you deploy and use Dockhand?

Easily, that’s how.

Let me show you.

Deploying Dockhand

It should go without saying that you need a platform that supports Docker. You’ll also need some running containers, which I assume you already have; otherwise, why would you need a tool to monitor them?

Deploying Dockhand takes a single command on your hosting platform.
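
A typical deployment maps the web UI port and mounts the Docker socket so Dockhand can see your containers. Treat this as a sketch; the image reference is a placeholder, so check the Dockhand project for the exact published image:

docker run -d \
  --name dockhand \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dockhand:latest   # placeholder image reference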

Give the container a moment to spin up. After a couple of minutes have passed, open a web browser and point it to http://SERVER:3000 (where SERVER is the IP address of your hosting server). You should be presented with an empty dashboard. The next step is to configure your first environment.

Adding an Environment

We’ll first add our local environment. To do this, click “Add environment” from the Dashboard (Figure 1).

Figure 1: Your Dockhand dashboard gives you quick access to your environments and other features.

In the resulting pop-up (Figure 2), all you have to do is give the environment a name and click Add.

Figure 2: We’re adding a local environment here.

Once you’ve added the environment, you should see it appear in the Dashboard.

Using the Dashboard

Click on the Dashboard icon in the side panel and then click on the environment you just created. What you should see now is a listing of every Docker service running (Figure 3).

Figure 3: I have several containers running.

Let’s say you want to create a new container in this local environment. For that, click Create at the top. In the resulting pop-up (Figure 4), type the name of the image you want to pull and click Pull.

Figure 4: If you haven’t already pulled an image, you’ll need to do so here.

I’m going to pull the latest Vaultwarden image (vaultwarden/server:latest). Once that’s taken care of, Dockhand will automatically switch you to the Container tab, where you can start building your new container (Figure 5).

Figure 5: We’re going to deploy a Vaultwarden container.

The information you’ll need is as follows:

  • Name: vaultwarden
  • Volume mappings: host path – /vw-data/, container path – /data/
  • Ports: Host – 443, Container – 443 (you have to use SSL for Vaultwarden)
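
For reference, the same configuration expressed as a plain docker run command would look roughly like this (a sketch; a real Vaultwarden deployment also needs TLS configured, per the SSL note above):

docker run -d \
  --name vaultwarden \
  -v /vw-data/:/data/ \
  -p 443:443 \
  vaultwarden/server:latest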

Of course, you’ll need to fill that out according to your needs.

When you’ve finished filling out the necessary information for your container, click “Create container.” Your container should then be listed as running.

It really is that simple.

Troubleshooting a Container

Let’s say you have a container that’s giving you fits. What can you do? With the help of Dockhand, you can troubleshoot.

For example, I have a GitLab container that has failed. To find out what’s going on, click the offending container from within the list of Containers, and you’ll see a Logs tab. Click that tab to reveal any information that’s been logged (Figure 6).

Figure 6: My GitLab container isn’t behaving.

The log will be set to autoscroll. I find downloading the log file makes it much easier to comb through. To do that, click the downward-pointing arrow near the upper-right corner (Figure 7).

Figure 7: The running log of my failing GitLab container.

The log will be a .txt file, which you can open on your machine and look through to troubleshoot the container. Of course, you’ll need to know how to read a Docker log file to get the most out of this feature.
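
If you prefer the command line, you can capture the same log to a file with the Docker CLI (the container name here is an assumption):

docker logs gitlab > gitlab-log.txt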

You can also look at the Overview tab, which will give you any error codes. In my case, I see exit code 137, which indicates that the container was terminated due to an out-of-memory (OOM) condition or received a kill signal.
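
You can confirm an OOM kill from the CLI as well; this prints the container’s exit code and OOMKilled flag (container name assumed):

docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' gitlab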

Getting closer.

As you can see, Dockhand is incredibly handy to have around. If you’re looking for an easy way to manage your containers, I highly recommend giving this system a try to see if it doesn’t simplify managing the running containers on your home lab or small business.

The post Free Dockhand Tool Simplifies Docker Container Management appeared first on The New Stack.


Interpolate contrast-color() to manipulate lightness


In my first post on contrast-color() I demo’d using color-mix() to change a background-color on hover, but I will be honest… mixing black and white isn’t always what you want. It would be cool and helpful to coerce contrast-color() to return either 1 or -1 so that we could adjust lightness in a color function on hover instead of only mixing white and black.

Building on the inline CSS if() statements in my last post, we can use the same trick to interpolate the result of contrast-color() into a number.

Disclaimer: All caveats from the previous post about browser support, caching quirks, and expected syntax changes still apply.

See the Pen contrast-color() powered design system colors by Dave Rupert (@davatron5000) on CodePen.

Ahh… feel that? Now our states maintain their harmonious color palette, whereas mixing in white or black gets us a bit muddier results. We’re picking another note on the scale of the color’s lightness ramp.

The relevant CSS to make this trick work goes like this:

/* Needed for if() statement */
@property --captured-color {
  syntax: "<color>";
  inherits: true;
  initial-value: white;
}

/* https://lea.verou.me/blog/2024/contrast-color/ */
@function --contrast-color(--bg-color) {
  --l: clamp(0, (l / var(--l-threshold, 0.623) - 1) * -infinity, 1);
  result: oklch(from var(--bg-color) var(--l) 0 0);
}

button {
  background-color: var(--ds-button-bg);
  --captured-color: --contrast-color(var(--ds-button-bg));
  --lighter-or-darker: if(
    style(--captured-color: oklch(1 0 0)): 2.5; /* go extra lighter */
    else: -1; /* go darker */
  );
  ...

  &:hover, &:focus {
    background-color: oklch(
      from var(--ds-button-bg)
      calc(l + (0.1 * var(--lighter-or-darker))) c h
    );
  }
}

For comparison’s sake, I web-inspected up a little apples-to-apples, side-by-side comparison of the adjusting-lightness way and the color-mixing way of algorithmic hover states, where it’s 10% lightened/darkened versus a 10% mix of white/black.
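
Roughly, the two hover rules being compared look like this; the color-mix() line is my sketch of the previous post’s approach, assuming --captured-color holds the white-or-black contrast color:

/* Adjust lightness: step l up or down the color's own ramp */
background-color: oklch(
  from var(--ds-button-bg)
  calc(l + (0.1 * var(--lighter-or-darker))) c h
);

/* Mix in white or black: 90% base, 10% contrast color */
background-color: color-mix(in oklch, var(--ds-button-bg) 90%, var(--captured-color));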

[Image: a side-by-side look at adjusting lightness vs. mixing in white and black; the rest state for all the buttons is on top and the hover state is on the bottom.]

The difference is almost imperceptible, but the hover states from the “Adjust Lightness” method feel a tad bit warmer, particularly on the bottom row with the green, blue, and purple buttons. The difference becomes more obvious if the step is greater than 10%.

Unless your customers are a bunch of color dorks, they probably won’t see the care you put into this. But I’m willing to wager that even if they don’t see the difference, they will be able to feel the difference.

We also get a lot more control with this if() statement route. For example, I can set the lighten amount to 2.5 (+25%) instead of 1 (+10%) because that’s what felt better. If you’re algorithmically generating your color palettes, it should be easy to find the ideal values; either a step-up or step-down, perhaps. And we’re also not just limited to lightness! You could mess with chroma or whatever the b in lab() is. Find what makes sense for your system.

A part of me wants to take this even further to get more control. For example, if the color is super dark (e.g. black) and the lightness value is below 0.1, lighten by 25%, otherwise lighten by 10%. That might be possible with an if() inside the oklch() or a sin() function… but that sounds like a lot of Math and probably hurts readability. More experiments to do though.

It’s fun to embark on this new world of algorithmic color schemes in vanilla CSS. While I’m excited to play, I’m more excited to see what your beautiful brains come up with.


ESLint v10.0.0-rc.0 released


Highlights

This version of ESLint is not ready for production use and is provided to gather feedback from the community before releasing the final version. Please let us know if you have any problems or feedback by creating issues on our GitHub repo.

Note that this prerelease version of ESLint has a separate documentation section.

Enhancements to RuleTester

Since its earliest days, ESLint has provided the RuleTester API to help plugin authors test their rules against custom test cases and configurations. This release introduces several enhancements to RuleTester to enforce more robust test definitions and improve debugging.

requireData assertion option

A new assertion option, requireData, is now available. When set to true, RuleTester will require invalid test cases to include a data object whenever a messageId references a message with placeholders. This helps ensure that tests remain consistent with rule messages that rely on placeholder substitution.

For example, consider a hypothetical rule no-trivial-sum that reports on expressions such as 1 + 2 and defines a message with placeholders:

trivialSum: "Trivial sum found. Replace {{actualExpression}} with {{sum}}."

If an invalid test case includes messageId: "trivialSum" but omits data:

assertionOptions: { requireData: true },
invalid: [
  {
    code: "const a = 1 + 2;",
    errors: [{ messageId: "trivialSum" }],
  },
],

RuleTester will now throw an assertion error indicating that the data property is missing.

To resolve this, include the placeholder values in the error object:

  {
    code: "const a = 1 + 2;",
    errors: [
      {
        messageId: "trivialSum",
        data: { actualExpression: "1 + 2", sum: 3 },
      },
    ],
  },

Improved location reporting for failing tests

RuleTester now decorates stack traces with information that makes it easier to locate failing test cases in your source code. For example, if the no-trivial-sum rule fails to report an error for 1 + 2, the test case in the previous section will fail and the test output will include stack trace lines like:

roughly at RuleTester.run.invalid[0] (/my-project/test/no-trivial-sum.js:10)
roughly at RuleTester.run.invalid (/my-project/test/no-trivial-sum.js:7)

The first line indicates:

  • invalid[0]: the index of the failing test case in the invalid array
  • /my-project/test/no-trivial-sum.js:10: the file and line number where that test case is defined. Many IDE terminals, including Visual Studio Code’s, recognize this format and allow you to click directly to the relevant line.

The second line points to the start of the entire invalid array.

Note that these line numbers may not always be included, depending on how your tests are structured. When the lines cannot be determined precisely, the failing test index (e.g., 0) and the printed code snippet are still available to locate the test case.

countThis option in max-params rule

The max-params rule now supports the new countThis option, which supersedes the deprecated countVoidThis. With the setting countThis: "never", the rule will now ignore any this annotation in a function’s argument list when counting the number of parameters in a TypeScript function. For example:

function doSomething(this: SomeType, first: string, second: number) {
  // ...
}

will be considered a function taking only 2 parameters.
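
In a flat config file, enabling that behavior looks roughly like this sketch (the max value is just for illustration):

// eslint.config.js
export default [
  {
    rules: {
      // ignore the TypeScript `this` parameter when counting
      "max-params": ["error", { max: 3, countThis: "never" }],
    },
  },
];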

Installing

Since this is a pre-release version, you will not automatically be upgraded by npm. You must specify the next tag when installing:

npm i eslint@next --save-dev

You can also specify the version directly:

npm i eslint@10.0.0-rc.0 --save-dev

Migration Guide

As there are a lot of changes, we’ve created a migration guide describing the breaking changes in great detail along with the steps you should take to address them. We expect that most users should be able to upgrade without any build changes, but the migration guide should be a useful resource if you encounter problems.

Breaking Changes

  • f9e54f4 feat!: estimate rule-tester failure location (#20420) (ST-DDT)

Features

Bug Fixes

  • d186f8c fix: update eslint (#20427) (renovate[bot])
  • 2332262 fix: error location should not modify error message in RuleTester (#20421) (Milos Djermanovic)
  • ab99b21 fix: ensure filename is passed as third argument to verifyAndFix() (#20405) (루밀LuMir)
  • 8a60f3b fix: remove ecmaVersion and sourceType from ParserOptions type (#20415) (Pixel998)
  • eafd727 fix: remove TDZ scope type (#20231) (jaymarvelz)
  • 39d1f51 fix: correct Scope typings (#20404) (sethamus)
  • 2bd0f13 fix: update verify and verifyAndFix types (#20384) (Francesco Trotta)

Documentation

Chores

  • b4b3127 chore: package.json update for @eslint/js release (Jenkins)
  • f658419 refactor: remove raw parser option from JS language (#20416) (Pixel998)
  • 2c3efb7 chore: remove category from type test fixtures (#20417) (Pixel998)
  • 36193fd chore: remove category from formatter test fixtures (#20418) (Pixel998)
  • e8d203b chore: add JSX language tag validation to check-rule-examples (#20414) (Pixel998)
  • bc465a1 chore: pin dependencies (#20397) (renovate[bot])
  • 703f0f5 test: replace deprecated rules in linter tests (#20406) (루밀LuMir)
  • ba71baa test: enable strict mode in type tests (#20398) (루밀LuMir)
  • f9c4968 refactor: remove lib/linter/rules.js (#20399) (Francesco Trotta)
  • 6f1c48e chore: updates for v9.39.2 release (Jenkins)