The CSS contrast() filter function increases or decreases the contrast of an element, either making colors pop more or dulling them toward gray. Unlike filter functions such as brightness() or saturate(), contrast() affects both saturation and lightness at once, preserving only the color’s hue.
.low {
filter: contrast(50%);
}
.normal {
filter: contrast(100%);
}
.high {
filter: contrast(200%);
}
The contrast() function is defined in the Filter Effects Module Level 1 specification.
The official syntax for the contrast() function is:
<contrast()> = contrast( [ <number> | <percentage> ]? )
Or simply:
filter: contrast(<amount>);
The contrast() function is only compatible with the CSS filter and backdrop-filter properties.
/* Using percentages */
filter: contrast(0%); /* Totally grayed out */
filter: contrast(50%); /* Partially grayed out */
filter: contrast(100%); /* No change */
filter: contrast(150%); /* 1.5× the original contrast */
/* Using numbers */
filter: contrast(0); /* Totally grayed out */
filter: contrast(0.5); /* Partially grayed out */
filter: contrast(1); /* No change */
filter: contrast(1.5); /* 1.5× the original contrast */
/* Works with CSS variables */
--amount: 200%;
filter: contrast(var(--amount));
/* No argument */
filter: contrast(); /* No change */
/* Negative value */
filter: contrast(-1.5); /* Invalid; the declaration is ignored */
The contrast() function takes a single argument, which can be a non-negative number or percentage. The argument determines the new contrast for the element, where:

0 or 0% removes all contrast from the element, resulting in a completely gray image.
1 or 100% leaves the element completely unchanged.
Values above 1 or 100% increase the contrast linearly.

Negative values aren’t allowed. But CSS variables are:
.element {
--filter-amount: 150%;
filter: contrast(var(--filter-amount));
}
contrast() affects color

Like other filter functions, the contrast() filter operates purely on RGB math. Given an <amount>, it multiplies each RGB channel by that <amount> and then adds 255 * (0.5 - 0.5 * <amount>) to the result. In practice, this affects colors in one of two ways:
1) Amounts above 1 (or 100%) make light pixels lighter and dark pixels darker, so colors become more vivid.
2) Amounts below 1 (or 100%) pull all pixels toward a middle gray. This reduces the difference between light and dark areas, making the image look flat and muted.

Some background images, usually in hero sections or carousels, can make the foreground text difficult to read, especially if they contain very bright and dark colors that compete with any text color. To solve this, we can use contrast() to reduce the difference between the image’s whites and blacks, making text more readable against the whole image.
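The per-channel formula above can be sketched in a few lines of Python. This is an illustration of the math only, not how a browser actually implements the filter (real renderers also deal with alpha and color-space conversions):

```python
def apply_contrast(rgb, amount):
    """Apply the contrast() transfer function to an (r, g, b) tuple
    of 0-255 channel values.

    Each channel c becomes c * amount + 255 * (0.5 - 0.5 * amount),
    clamped to the 0-255 range.
    """
    def transfer(c):
        v = c * amount + 255 * (0.5 - 0.5 * amount)
        return max(0, min(255, round(v)))
    return tuple(transfer(c) for c in rgb)

# amount = 1 leaves the color unchanged
print(apply_contrast((200, 50, 100), 1.0))  # (200, 50, 100)
# amount = 0 collapses every channel to middle gray
print(apply_contrast((200, 50, 100), 0.0))
# amount = 2 pushes light channels lighter and dark channels darker
print(apply_contrast((200, 50, 100), 2.0))
```

Note how at amount = 0 every channel lands on the same middle-gray value, which is why contrast(0%) produces a flat gray image.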
img {
filter: contrast(70%) brightness(60%);
}
The low contrast flattens the image, and as a plus, we can also reduce the image’s brightness to make the text pop regardless of its colors.
Another useful application for contrast() is to highlight an image on user interaction. For example, in a row of image cards, we could increase the image’s contrast and also scale it on hover:
.card img {
transition:
filter 0.4s ease,
transform 0.4s ease;
}
.card:hover img {
filter: contrast(125%);
transform: scale(1.05);
}
Is contrast() the same as contrast-color()?

While both CSS functions have similar names, they are not to be confused with each other.

contrast() is a filter function that makes an element more vivid by making whites lighter and blacks darker.
contrast-color() returns the text color with the highest contrast to a solid background. Its resulting color is either white or black, depending on which color contrasts most with the background. It is also not a filter function.

The contrast() function is currently supported across all modern browsers.
contrast() originally handwritten and published with love on CSS-Tricks. You should really get the newsletter as well.
The CSS contrast-color() function takes a <color> value (as well as a variable) and returns either black or white, whichever is the most contrasting color for that value.
In other words, contrast-color() is sort of an accessibility tool for conforming to WCAG contrast requirements.
.card {
background-color: var(--swatch);
color: contrast-color(var(--swatch));
}
For example, on the next demo update the background color to see the text color change automatically.
The contrast-color() function is defined in the CSS Color Module Level 5 specification.
The CSS contrast-color() function syntax is formatted like this:
contrast-color() = contrast-color( <color> )
Let’s break that down with examples.
/* Using a custom variable */
contrast-color(var(--base-background));
/* Passing a color directly */
contrast-color(#34cdf2);
contrast-color(green);
contrast-color() takes a <color> as its only argument and resolves to white or black, depending on which has the highest contrast. If both white and black have the same contrast level, the function defaults to white.
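To build intuition for that choice, here is a rough Python sketch of one way to pick between white and black, using the WCAG 2.x relative-luminance contrast ratio. This is an assumption for illustration; the CSS specification may define a different contrast metric, so treat this as the idea rather than the algorithm:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 values."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def pick_contrast_color(background):
    """Return white or black, whichever contrasts more with `background`.

    Mirrors the described behavior of contrast-color(): ties go to white.
    """
    white, black = (255, 255, 255), (0, 0, 0)
    if contrast_ratio(background, white) >= contrast_ratio(background, black):
        return white
    return black

print(pick_contrast_color((45, 90, 39)))     # dark green (#2d5a27) -> white
print(pick_contrast_color((209, 196, 233)))  # light lavender (#d1c4e9) -> black
```

Running this against the theme colors used later in the article shows the same pairings the hand-written variables encode: white text on the dark green, black text on the light lavender.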
The contrast-color() function gives us a simple alternative to defining multiple background and text colors, while also ensuring they contrast well enough. Imagine we had the following scenario:
:root {
--primary-text: #f1f8e9;
--primary-bg: #2d5a27;
--secondary-text: #311b92;
--secondary-bg: #d1c4e9;
--tertiary-text: #002b36;
--tertiary-bg: #ff5722;
}
.primary {
color: var(--primary-text);
background-color: var(--primary-bg);
}
.secondary {
color: var(--secondary-text);
background-color: var(--secondary-bg);
}
.tertiary {
color: var(--tertiary-text);
background-color: var(--tertiary-bg);
}
We defined a text color for each background color in our variables, and if we had more than three possible backgrounds, we’d have had to define them all. Instead, using contrast-color(), we could define only the background color for each theme and let the function return the appropriate contrasting color for the texts.
:root {
--primary: #2d5a27;
--secondary: #d1c4e9;
--tertiary: #ff5722;
}
.primary {
color: contrast-color(var(--primary));
background-color: var(--primary);
}
.secondary {
color: contrast-color(var(--secondary));
background-color: var(--secondary);
}
.tertiary {
color: contrast-color(var(--tertiary));
background-color: var(--tertiary);
}
It is important to note that contrast-color() is still a work in progress (at the time of this writing), and in some cases might not be appropriate from a design standpoint since it only returns black or white. Therefore, I recommend using it only in simple scenarios where either black or white make sense.
In fact, it has some shortcomings that are worth noting.
contrast-color() shortcomings

While contrast-color() appears to improve web accessibility, it has caveats we should be aware of before using it.

contrast-color() only works with colors for now. So, in cases where you’re working with text on background images or using font weights to increase contrast, you’ll have to find a different way to meet contrast requirements. And even if it can technically be used with gradients, these too can only go between black and white, which might not provide enough contrast across the gradient.
contrast-color() doesn’t account for font-size, which is a defining criterion in choosing a contrast color. Hopefully, this will be accounted for in the future.

So, at the time of writing, it seems better to manually define sufficiently contrasting colors in our themes, as contrast-color() isn’t really feasible right now.
Based on earlier articles, the contrast-color() function used to take multiple color arguments: the base color versus multiple contrasting color options to choose from:
contrast-color(var(--bg) vs red, lightgreen, blue)
This syntax no longer exists in the draft. It’s one color and one color only.
While browser support is limited at the time of this writing, it’s a good idea to include a fallback if you’re planning to use it on a project. We can use the @supports at-rule to detect if the browser understands the function:
.card {
--bg-color: #2d5a27;
background-color: var(--bg-color);
/* Default Fallback */
color: ghostwhite;
}
/* Use the function if supported */
@supports (color: contrast-color(red)) {
.card {
color: contrast-color(var(--bg-color));
}
}
Often, attackers will attempt to prevent security software (antivirus, EDR, etc.) from interfering with their attack chains by abusing a vulnerable driver to kill or otherwise disable it. Because drivers run in highly privileged OS kernel mode, it is difficult to prevent attackers from achieving their goals once they manage to execute code in the kernel.
To ensure that only legitimate code gets to run in the kernel, Windows requires that the driver code bear an Authenticode signature from a particular certificate authority. Microsoft signs these drivers only after verifying their provenance and running through various driver-verification suites to help ensure their robustness.
However, even if all of the drivers on a system are legitimate, attackers have had success in finding vulnerabilities in legitimate drivers that allow them to abuse the driver to achieve their goals. Like any code, some drivers have bugs that allow them to corrupt memory, leak data that needs to be secret, or otherwise perform functions unintended by the original author. These vulnerable drivers represent a critical attack surface that attackers abuse to achieve their own ends.
Beyond abusing drivers already present on a victim device, in a BYOVD attack (Bring your own vulnerable driver) an attacker drops a vulnerable driver onto the device, then abuses it with their malware.
To address this threat vector, Microsoft has three main mechanisms:

Editor’s note: Bill Hilf is the former CEO of Vulcan/Vale Group, current board chair of Ai2 and American Prairie, and the author of the new sci-fi novel, “The Disruption,” which explores the topics of AI and natural ecosystems. He spoke about the book on the GeekWire Podcast and elaborates on the themes in this companion essay.
We are building AI at civilizational scale while still talking about it as if it were a software release.
Which model tops which benchmark. Which chatbot sounds most human. Those questions matter, but they’re the wrong altitude. AI systems no longer just answer questions. They mediate hiring, diagnostics, logistics, finance, and growing pieces of public decision-making. We are not shipping products anymore. We are reshaping environments.
At this scale, AI is heavily interconnected. It has linked failure modes. Emergent behavior. Invasive species. Tipping points.
Treating an environment like a product is a category error, and it’s already compounding.
I spent three decades building the systems now at the center of this conversation, from scientific computing at IBM to early Azure and large-scale enterprise systems at HP. The working model was deterministic: specify the system, build it, tune it, control it. If something breaks, diagnose and patch. That model works right up until it doesn’t.
At sufficient scale, distributed systems stop behaving like machines and start behaving more like ecosystems. They adapt. They route around failure. They develop dependencies no one designed and interactions no one completely understands. You can still architect and engineer them. But once they are embedded everywhere, connected to everything, and optimized across too many layers for any one person to hold in mind, they are no longer just tools.
And the curve is steepening. McKinsey’s latest State of AI says 88% of surveyed organizations now use AI in at least one business function, up from 55% two years earlier. Gartner forecasts worldwide software spending above $1.4 trillion in 2026. In investor commentary circulated this year, Thoma Bravo argues that agentic AI could create a roughly $3 trillion incremental application revenue opportunity by converting labor spend into software spend. That is not a feature upgrade. It is the system rewiring itself mid-flight, faster than most firms can govern, audit, or even classify what they have already built.
That realization didn’t come only from technology. It also came from conservation.
Ecology has a name for what happens when you pull out a load-bearing layer too fast: trophic cascade. The Aleutian fur trade nearly wiped out sea otters in the 18th century. Otters eat urchins. Urchins eat kelp. Remove the otters, and you don’t get an otter-shaped hole. You get an urchin explosion, collapsed kelp forests, and the loss of every fish nursery the kelp was quietly holding up.
That is the pattern we should be watching in AI-dependent infrastructure. The AI will probably be better than your people at screening, scoring, and forecasting. The real problem is the speed. We are replacing the people who were providing judgment, correction, and restraint, the connective tissue that never showed up on a workflow diagram. The voice in the gray areas, the non-computable decisions. Remove that layer faster than the organization can discover what it was holding up, and you get the same cascade.
If we’re serious about building durable AI infrastructure, those patterns are worth studying, and some of the lessons are uncomfortable.
Efficiency is overrated. In technology, as in ecology, a system optimized too tightly becomes brittle. Slack and redundancy matter. So do firebreaks, and so does local autonomy.
In July 2024, a single CrowdStrike configuration update crashed 8.5 million machines worldwide. Airlines, hospitals, 911 centers, banks. $5.4 billion in losses. They reverted the bad update in 78 minutes. The recovery took days. Southwest Airlines was largely unaffected. It simply wasn’t running CrowdStrike’s software. Sometimes the absence of a dependency is its own firebreak. If every important function in your stack depends on one model, one provider, or one training pipeline, you haven’t built an intelligent marvel. You’ve built a future outage.
Ecosystems don’t only fail by cascade. They also fail by accretion. AI is entering workflows the way invasive species enter ecosystems: through low-visibility vectors, one deployment at a time. A copilot here, a summarization layer there, an autonomous scheduler somewhere no one is tracking. Each deployment is defensible on its own. The cumulative effect is something no one chose. The review and friction that kept earlier processes honest were built for human speed. Nothing has replaced them at machine speed.
A model does not remain what it was in the lab once it begins shaping the environment that later shapes it. AI systems do the same when deployed into markets, media, institutions, and human behavior. You do not regulate an ecosystem by inspecting individual organisms. You regulate the conditions that determine whether the whole system recovers or collapses. Those conditions include observability.
Systems that cannot be inspected, studied, or independently evaluated are systems no one can truly understand or govern well. Openness matters here, not as a slogan, but as a requirement for analysis and earned trust. The same logic applies to fault tolerance. Before a model is allowed inside critical systems, its operator should have to prove the full environment can still function without it. That means mandatory degradation testing, the way we stress-test banks and bridges.
Builders don’t have to wait for regulators. If an AI layer is entering a production workflow, builders need to know what happens when the model is wrong, the vendor is down, or the behavior changes after deployment. If the honest answer is “we don’t know,” the layer is not ready to be load-bearing. That’s true for a hospital triage system and for a customer support bot. It is especially true for agents with open-ended scope: software that can plan, call tools, and act inside environments no one fully controls. For those systems, model quality is the easy question. The hard one is who is accountable when it fails.
Multi-agent architectures and ensemble approaches can improve resilience, but only when the diversity is real. Three agents routing to the same foundation model may improve reasoning, but they are not three independent safeguards. They are one dependency wearing three hats.
There’s a broader strategic consequence here. In stable ecosystems, dominant species compound their advantage slowly. Shorten the disturbance cycle and many of those advantages erode before they mature. That is happening to business moats now. When disruption gets radically cheaper, the winning question stops being what you’re building and becomes what still compounds when nothing around you lasts. In real-world deployments, the ‘best’ model loses to the most adaptive system.
Recovery matters as much as prevention. In the conservation work I do, the question is never how to stop change. Disturbance is inevitable. The question is what survives, how quickly a system recovers, and what hidden capacities remain after the shock. We should ask the same of AI-dependent infrastructure. Not just “Is it safe?” but “How does it fail? Who can override it? How far does the failure spread? What grows back after the mistake?”
The thing that breaks, in my experience, is the assumption of control. Real systems do not collapse cleanly and they do not recover cleanly. Some parts fail. Some adapt. Some mutate into things no one intended.
Nature has been running distributed sensing, local response, and recovery for hundreds of millions of years. It has been operating the kind of network we keep trying to invent. Not because forests are conscious or because the planet is an AI, but because the engineering problems are structurally similar: how does a system without central control maintain coherence, adapt to damage, and persist across time?
The question is no longer just what AI systems can do. It is what kind of world they create around themselves, what kind of world they inherit from us, and whether we are wise enough to build systems that we can still steer.
If we take this seriously, a few principles follow. Design for diversity before efficiency. Build for recovery before performance. Keep humans in the loop, not as a compliance measure but as the system’s stewards, its source of judgment, and its memory of why it exists. Insist on openness, at all levels, as the precondition for trust at scale. None of this slows AI down. It’s what keeps AI working the day something fails.
You can switch off a machine.
You have to live within an ecosystem.