Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

contrast()


The CSS contrast() filter function increases or decreases the contrast of an element, either making colors pop more or dulling them toward gray. Unlike other filter functions such as brightness() or saturate(), contrast() affects both saturation and lightness, keeping only the color’s hue.

.low {
  filter: contrast(50%);
}

.normal {
  filter: contrast(100%);
}

.high {
  filter: contrast(200%);
}

The contrast() function is defined in the Filter Effects Module Level 1 specification.

Syntax

The official syntax for the contrast() function is:

<contrast()> = contrast( [ <number> | <percentage> ]? )

Or simply:

filter: contrast(<amount>);

The contrast() function is only compatible with the CSS filter and backdrop-filter properties.

Arguments

/* Using percentages */
filter: contrast(0%); /* Completely grayed out */
filter: contrast(50%); /* Partially grayed out */
filter: contrast(100%); /* No change */
filter: contrast(150%); /* 1.5 times the original contrast */

/* Using numbers */
filter: contrast(0); /* Completely grayed out */
filter: contrast(0.5); /* Partially grayed out */
filter: contrast(1); /* No change */
filter: contrast(1.5); /* 1.5 times the original contrast */

/* Works with CSS variables */
--amount: 200%;
filter: contrast(var(--amount));

/* No argument */
filter: contrast(); /* No change */

/* Negative value */
filter: contrast(-1.5); /* Invalid; no effect */

The contrast() function takes a single argument, which can be a positive decimal or percentage value. The argument determines the new contrast for the element, where:

  • 0 or 0% removes all contrast from the element, resulting in a completely gray image.
  • 1 or 100% leaves the element completely unchanged.
  • Values above 1 or 100% increase the contrast linearly.

Negative values aren’t allowed. But CSS variables are:

.element {
  --filter-amount: 150%;
  filter: contrast(var(--filter-amount));
}

How contrast() affects color

Like other filter functions, the contrast() filter operates purely on RGB math. Specifically, given an <amount>, it multiplies each RGB channel by that <amount> and then adds 255 * (0.5 - 0.5 * <amount>) to the result. In practice, this affects colors in one of two ways:

  • High contrast (greater than 1) makes light pixels get lighter and dark pixels get darker, so colors become more vivid.
  • Low contrast (smaller than 1) pulls all pixels toward a middle gray. This reduces the difference between light and dark areas, making the image look flat and muted.
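That per-channel math can be sketched in a few lines of Python (the function name is ours; this mirrors the formula above, with results clamped to the valid channel range):

```python
def contrast_channel(value, amount):
    """Apply the CSS contrast() formula to a single RGB channel (0-255):
    new = value * amount + 255 * (0.5 - 0.5 * amount), clamped to [0, 255]."""
    result = value * amount + 255 * (0.5 - 0.5 * amount)
    return max(0.0, min(255.0, result))

# amount = 1 leaves the channel unchanged
assert contrast_channel(200, 1) == 200
# amount = 0 collapses every channel to middle gray
assert contrast_channel(30, 0) == contrast_channel(220, 0) == 127.5
# amount > 1 pushes light channels lighter and dark channels darker
assert contrast_channel(200, 2) == 255.0  # clamped; raw value is 272.5
assert contrast_channel(60, 2) == 0.0     # clamped; raw value is -7.5
```

Notice the pivot at middle gray (127.5): with an amount above 1, channels above it move up and channels below it move down, which is exactly why high contrast makes colors more vivid.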

Basic usage

Some background images, usually in hero sections or carousels, can make foreground text difficult to read, especially if they contain very bright and very dark colors that compete with any text color. To solve this, we can use contrast() to reduce the difference between the image’s whites and blacks, making text more readable against the whole image.

img {
    filter: contrast(70%) brightness(60%);
}

The low contrast flattens the image, and as a plus, we can also reduce the image’s brightness to make the text pop regardless of its colors.

Demo: Making product card images pop on hover

Another useful application for contrast() is highlighting an image on user interaction. For example, in a row of image cards, we could increase the image’s contrast and also scale it on hover:

.card img {
  transition:
    filter 0.4s ease,
    transform 0.4s ease;
}

.card:hover img {
  filter: contrast(125%);
  transform: scale(1.05);
}

Is contrast() the same as contrast-color()?

While both CSS functions have similar names, they are not to be confused with each other.

  • contrast() is a filter function that makes an element more vivid by making whites lighter and blacks darker.
  • contrast-color() returns the text color with the highest contrast to a solid background. Its resulting color is either white or black, depending on which color contrasts most with the background. It is also not a filter function.

Browser support

The contrast() function is currently supported across all modern browsers.


contrast() originally handwritten and published with love on CSS-Tricks. You should really get the newsletter as well.

Read the whole story
alvinashcraft
21 minutes ago
Pennsylvania, USA

contrast-color()


The CSS contrast-color() function takes a <color> value (as well as a variable) and returns either black or white, whichever is the most contrasting color for that value.

In other words, contrast-color() is sort of an accessibility tool for conforming to WCAG contrast requirements.

.card {
  background-color: var(--swatch);
  color: contrast-color(var(--swatch));
}

For example, with the rule above, updating the background color makes the text color change automatically.

The contrast-color() function is defined in the CSS Color Module Level 5 specification.

Syntax

The CSS contrast-color() function syntax is formatted like this:

contrast-color() = contrast-color( <color> )

Let’s break that down with examples.

Arguments

/* Using a custom variable */
contrast-color(var(--base-background));

/* Passing a color directly */
contrast-color(#34cdf2);
contrast-color(green);

contrast-color() takes a <color> as its only argument and resolves to white or black, depending on which has the highest contrast. If both white and black have the same contrast level, the function defaults to white.
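The exact algorithm browsers will use is still being settled in the draft, but as an illustrative sketch (function names are ours, not part of any spec), choosing whichever of black or white has the higher WCAG contrast ratio against the background could look like this in Python:

```python
def relative_luminance(r, g, b):
    """WCAG 2.x relative luminance of an sRGB color (channels 0-255)."""
    def lin(c):
        cs = c / 255
        return cs / 12.92 if cs <= 0.03928 else ((cs + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def pick_contrast_color(r, g, b):
    """Return 'white' or 'black', whichever contrasts more with the
    background color. Ties go to white, matching the draft's behavior."""
    lum = relative_luminance(r, g, b)
    vs_white = 1.05 / (lum + 0.05)   # contrast ratio against white text
    vs_black = (lum + 0.05) / 0.05   # contrast ratio against black text
    return "white" if vs_white >= vs_black else "black"

assert pick_contrast_color(0x2D, 0x5A, 0x27) == "white"  # dark green bg -> white text
assert pick_contrast_color(0xD1, 0xC4, 0xE9) == "black"  # light lavender bg -> black text
```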

Basic usage

The contrast-color() function gives us a simple alternative to defining multiple background and text colors, while also ensuring they contrast enough. Imagine we had the following scenario:

:root {
  --primary-text: #f1f8e9;
  --primary-bg: #2d5a27;
  --secondary-text: #311b92;
  --secondary-bg: #d1c4e9;
  --tertiary-text: #002b36;
  --tertiary-bg: #ff5722;
}

.primary {
  color: var(--primary-text);
  background-color: var(--primary-bg);
}

.secondary {
  color: var(--secondary-text);
  background-color: var(--secondary-bg);
}

.tertiary {
  color: var(--tertiary-text);
  background-color: var(--tertiary-bg);
}

We defined a text color for each background color in our variables, and if we had more than three possible backgrounds, we’d have had to define them all. Instead, using contrast-color(), we could define only the background color for each theme and let the function return the appropriate contrasting color for the texts.

:root {
  --primary: #2d5a27;
  --secondary: #d1c4e9;
  --tertiary: #ff5722;
}

.primary {
  color: contrast-color(var(--primary));
  background-color: var(--primary);
}

.secondary {
  color: contrast-color(var(--secondary));
  background-color: var(--secondary);
}

.tertiary {
  color: contrast-color(var(--tertiary));
  background-color: var(--tertiary);
}

It is important to note that contrast-color() is still a work in progress (at the time of this writing), and in some cases might not be appropriate from a design standpoint since it only returns black or white. Therefore, I recommend using it only in simple scenarios where either black or white make sense.

In fact, it has some shortcomings that are worth noting.

contrast-color() shortcomings

While contrast-color() appears to improve web accessibility, it has caveats we should be aware of before using it.

  • It resolves to only black or white texts. Although the draft promises more control in the future, we have to stick to those two colors for now.
  • We’re stuck with white when using colors where neither black nor white provides sufficient contrast, or when both have the same contrast.
  • contrast-color() only works with colors for now. So, in cases where you’re working with text on background images or using font weights to increase contrast, you’ll have to find a different way to meet contrast requirements. And even if it can technically be used with gradients, the returned color can still only be black or white, which might not provide enough contrast against every color in the gradient.
  • contrast-color() doesn’t account for the font-size, which is a defining criterion in choosing a contrast color. Hopefully, this will be accounted for in the future.

So, at the time of writing, it seems better to manually define sufficiently contrasting colors in our themes, as contrast-color() isn’t really practical yet.

Older syntax

Based on earlier articles, the contrast-color() function used to take multiple color arguments: the base color versus multiple contrasting color options to choose from:

contrast-color(var(--bg) vs red, lightgreen, blue)

This syntax no longer exists in the draft. It’s one color and one color only.

Specification

The contrast-color() function is defined in the CSS Color Module Level 5 specification.

Browser support

While browser support is limited at the time of this writing, it’s a good idea to include a fallback if you’re planning to use it on a project. We can use the @supports at-rule to detect if the browser understands the function:

.card {
  --bg-color: #2d5a27;
  background-color: var(--bg-color);

  /* Default Fallback */
  color: ghostwhite;
}

/* Use the function if supported */
@supports (color: contrast-color(red)) {
  .card {
    color: contrast-color(var(--bg-color));
  }
}



contrast-color() originally handwritten and published with love on CSS-Tricks. You should really get the newsletter as well.


Exploiting Vulnerable Drivers


Often, attackers will attempt to prevent security software from interfering with their attack chains by abusing a vulnerable driver to kill or otherwise disable the system’s security software (antivirus, EDR, and so on). Because drivers run in highly privileged OS kernel mode, it is difficult to prevent attackers from achieving their goals if they manage to achieve code execution in the kernel.

To ensure that only legitimate code gets to run in the kernel, Windows requires that the driver code bear an Authenticode signature from a particular certificate authority. Microsoft signs these drivers only after verifying their provenance and running through various driver-verification suites to help ensure their robustness.

However, even if all of the drivers on a system are legitimate, attackers have had success in finding vulnerabilities in legitimate drivers that allow them to abuse the driver to achieve their goals. Like any code, some drivers have bugs that allow them to corrupt memory, leak data that needs to be secret, or otherwise perform functions unintended by the original author. These vulnerable drivers represent a critical attack surface that attackers abuse to achieve their own ends.

Beyond abusing drivers already present on a victim device, in a BYOVD (bring your own vulnerable driver) attack, an attacker drops a vulnerable driver onto the device and then abuses it with their malware.

To address this threat vector, Microsoft has three main mechanisms:

  1. Exploitable driver blocklist – Enforced by the Windows kernel itself, allows blocking the load of drivers known to be vulnerable.
  2. Microsoft Defender Attack Surface Reduction rule – Enforced by Microsoft Defender, prevents writing of known vulnerable drivers to the system. By preventing the write of the driver before it loads, the risk of compatibility problems is somewhat reduced (because in a legitimate scenario, the installer for the device will fail at install time rather than at runtime).
  3. Microsoft Defender Signatures – Enforced by Microsoft Defender Antivirus, blocks vulnerable drivers directly using the AV engine. This approach is appropriate only for drivers under active exploitation and with little legitimate use.




Opinion: AI is not a product — it’s an environment


Editor’s note: Bill Hilf is the former CEO of Vulcan/Vale Group, current board chair of Ai2 and American Prairie, and the author of the new sci-fi novel, “The Disruption,” which explores the topics of AI and natural ecosystems. He spoke about the book on the GeekWire Podcast, and elaborates on the themes in this companion essay.

We are building AI at civilizational scale while still talking about it as if it were a software release.

Which model tops which benchmark. Which chatbot sounds most human. Those questions matter, but they’re the wrong altitude. AI systems no longer just answer questions. They mediate hiring, diagnostics, logistics, finance, and growing pieces of public decision-making. We are not shipping products anymore. We are reshaping environments.

At this scale, AI is heavily interconnected. It has linked failure modes. Emergent behavior. Invasive species. Tipping points.

Treating an environment like a product is a category error, and it’s already compounding.

I spent three decades building the systems now at the center of this conversation, from scientific computing at IBM to early Azure and large-scale enterprise systems at HP. The working model was deterministic: specify the system, build it, tune it, control it. If something breaks, diagnose and patch. That model works right up until it doesn’t.

At sufficient scale, distributed systems stop behaving like machines and start behaving more like ecosystems. They adapt. They route around failure. They develop dependencies no one designed and interactions no one completely understands. You can still architect and engineer them. But once they are embedded everywhere, connected to everything, and optimized across too many layers for any one person to hold in mind, they are no longer just tools.

And the curve is steepening. McKinsey’s latest State of AI says 88% of surveyed organizations now use AI in at least one business function, up from 55% two years earlier. Gartner forecasts worldwide software spending above $1.4 trillion in 2026. In investor commentary circulated this year, Thoma Bravo argues that agentic AI could create a roughly $3 trillion incremental application revenue opportunity by converting labor spend into software spend. That is not a feature upgrade. It is the system rewiring itself mid-flight, faster than most firms can govern, audit, or even classify what they have already built.

That realization didn’t come only from technology. It also came from conservation.

Ecology has a name for what happens when you pull out a load-bearing layer too fast: trophic cascade. The Aleutian fur trade nearly wiped out sea otters in the 18th century. Otters eat urchins. Urchins eat kelp. Remove the otters, and you don’t get an otter-shaped hole. You get an urchin explosion, collapsed kelp forests, and the loss of every fish nursery the kelp was quietly holding up. 

That is the pattern we should be watching in AI-dependent infrastructure. The AI will probably be better than your people at screening, scoring, and forecasting. The real problem is the speed. We are replacing the people who were providing judgment, correction, and restraint, the connective tissue that never showed up on a workflow diagram. The voice in the gray areas, the non-computable decisions. Remove that layer faster than the organization can discover what it was holding up, and you get the same cascade.

If we’re serious about building durable AI infrastructure, those patterns are worth studying, and some of the lessons are uncomfortable.

Efficiency is overrated. In technology, as in ecology, a system optimized too tightly becomes brittle. Slack and redundancy matter. So do firebreaks, and so does local autonomy.

In July 2024, a single CrowdStrike configuration update crashed 8.5 million machines worldwide. Airlines, hospitals, 911 centers, banks. $5.4 billion in losses. They reverted the bad update in 78 minutes. The recovery took days. Southwest Airlines was largely unaffected. It simply wasn’t running CrowdStrike’s software. Sometimes the absence of a dependency is its own firebreak. If every important function in your stack depends on one model, one provider, or one training pipeline, you haven’t built an intelligent marvel. You’ve built a future outage.

Ecosystems don’t only fail by cascade. They also fail by accretion. AI is entering workflows the way invasive species enter ecosystems: through low-visibility vectors, one deployment at a time. A copilot here, a summarization layer there, an autonomous scheduler somewhere no one is tracking. Each deployment is defensible on its own. The cumulative effect is something no one chose. The review and friction that kept earlier processes honest were built for human speed. Nothing has replaced them at machine speed.

A model does not remain what it was in the lab once it begins shaping the environment that later shapes it. AI systems do the same when deployed into markets, media, institutions, and human behavior. You do not regulate an ecosystem by inspecting individual organisms. You regulate the conditions that determine whether the whole system recovers or collapses. Those conditions include observability.

Systems that cannot be inspected, studied, or independently evaluated are systems no one can truly understand or govern well. Openness matters here, not as a slogan, but as a requirement for analysis and earned trust. The same logic applies to fault tolerance. Before a model is allowed inside critical systems, its operator should have to prove the full environment can still function without it. That means mandatory degradation testing, the way we stress-test banks and bridges.

Builders don’t have to wait for regulators. If an AI layer is entering a production workflow, builders need to know what happens when the model is wrong, the vendor is down, or the behavior changes after deployment. If the honest answer is “we don’t know,” the layer is not ready to be load-bearing. That’s true for a hospital triage system and for a customer support bot. It is especially true for agents with open-ended scope: software that can plan, call tools, and act inside environments no one fully controls. For those systems, model quality is the easy question. The hard one is who is accountable when it fails.

Multi-agent architectures and ensemble approaches can improve resilience, but only when the diversity is real. Three agents routing to the same foundation model may improve reasoning, but they are not three independent safeguards. They are one dependency wearing three hats.

There’s a broader strategic consequence here. In stable ecosystems, dominant species compound their advantage slowly. Shorten the disturbance cycle and many of those advantages erode before they mature. That is happening to business moats now. When disruption gets radically cheaper, the winning question stops being what you’re building and becomes what still compounds when nothing around you lasts. In real-world deployments, the ‘best’ model loses to the most adaptive system.

Recovery matters as much as prevention. In the conservation work I do, the question is never how to stop change. Disturbance is inevitable. The question is what survives, how quickly a system recovers, and what hidden capacities remain after the shock. We should ask the same of AI-dependent infrastructure. Not just “Is it safe?” but “How does it fail? Who can override it? How far does the failure spread? What grows back after the mistake?”

The thing that breaks, in my experience, is the assumption of control. Real systems do not collapse cleanly and they do not recover cleanly. Some parts fail. Some adapt. Some mutate into things no one intended.

Nature has been running distributed sensing, local response, and recovery for hundreds of millions of years. It has been operating the kind of network we keep trying to invent. Not because forests are conscious or because the planet is an AI, but because the engineering problems are structurally similar: how does a system without central control maintain coherence, adapt to damage, and persist across time?

The question is no longer just what AI systems can do. It is what kind of world they create around themselves, what kind of world they inherit from us, and whether we are wise enough to build systems that we can still steer.

If we take this seriously, a few principles follow. Design for diversity before efficiency. Build for recovery before performance. Keep humans in the loop, not as a compliance measure but as the system’s stewards, its source of judgment, and its memory of why it exists. Insist on openness, at all levels, as the precondition for trust at scale. None of this slows AI down. It’s what keeps AI working the day something fails.

You can switch off a machine.

You have to live within an ecosystem.


8 best practices for CISOs conducting risk reviews


The Deputy CISO blog series is where Microsoft Deputy Chief Information Security Officers (CISOs) share their thoughts on what is most important in their respective domains. In this series, you will get practical advice, tactics to start (and stop) deploying, forward-looking commentary on where the industry is going, and more. In this blog, Rico Mariani, Deputy CISO for Microsoft Security Products, Research Infrastructure, and Engineering Systems shares some of his best practices and expertise in conducting risk reviews.

The nature of cyberthreats has never been static, but it’s hard to accurately convey the scale of their recent evolution and proliferation. As we’ve seen in many other arenas, AI has become a very powerful productivity tool for would-be cybercriminals. Between April 2024 and April 2025, Microsoft stopped $4 billion in fraud attempts.1 And as of the writing of the Microsoft Digital Defense Report 2025, we are tracking 100 trillion security signals each day (a 40% increase since 2023).2

This is why I decided to write a blog about risk reviews. By asking the right questions, risk reviews help us transform the utility of our security data from primarily reactive remediation and response information into key insights helping to inform our proactive security stances. And embracing strong proactive security is something we can all do to mitigate our increased exposure to security threats.  

Risk reviews are also a topic I’ve lent focus to during my first six months as Deputy CISO for Microsoft Security. It’s a very interesting role for me, as I’ve traditionally described myself as a performance specialist and a systems specialist more than a security specialist. It’s not necessarily a distinction of skill set, but more one of mindset, and what I’d like to share with you is actually a bit of a synthesis of my inherent performance- and systems-first way of thinking and things I’ve brought into that practice after working with many of the other Microsoft Deputy CISOs over the last few months.

There are roughly eight points I want to bring up concerning risk reviews in this blog. Each point has the potential to help expose potential security vulnerabilities when brought up with security teams. Together, they represent a structured and approachable way to initiate necessary conversations and drive meaningful results:

  1. Assets
  2. Applications 
  3. Authentication 
  4. Authorization 
  5. Network isolation 
  6. Detections 
  7. Auditing 
  8. Things not to miss 

Now, why did I choose to highlight these areas and not others? Generally, I find that looking at problems from the lens of risk management gives me a fresh perspective. When you very consistently ask specific questions around these areas, they often effectively start the conversation you want to have.

Just one last thing before we dive in: What I’m about to tell you is only approximately correct. There will be edge cases and exceptions, but generally I think you’ll find this information helpful.

1. Assets

The best place to start a review is identifying the assets that you need to protect. This will largely define the scope of the review. A good place to find those assets is, of course, on your architecture diagrams and your threat models. The assets we’re talking about could be storage (where perhaps you’re storing sensitive or otherwise important data) or they could be highly-privileged applications like command-and-control systems or something similar. This is, in short, the list of things that your cyberattacker wants to get to. 

2. Applications

In the next step, you identify your applications. These are, broadly speaking, the active part of your system. They are the outward-facing surfaces that customers will use and the set of microservices that support your interface. These systems could be providing any set of services that you might need—and herein lies the problem. It’s entirely normal for your applications to require access to your most important assets, but that means the applications themselves can become viable targets for a cyberattacker. So how do we make this situation better? At this point, it’s reasonable to start talking about possible controls. 

Read up on Zero Trust for source code access.

3. Good quality authentication 

The next thing you will want to inspect is the form of authentication that your system is using. The best systems are using tokens for authentication, and they are getting these tokens from standard token issuers like, for instance, Microsoft Entra. It’s sometimes viable to have your own token generation system, but remember that such systems tend to have bugs. Those bugs can be exploitable. And even lacking bugs, there could be, say, gaps or vulnerabilities in your token issuing system such that perhaps the tokens cannot be properly scoped. The tokens could also tend to be too long-lived, or difficult to be made fine-grained enough, or lack the capacity to allow for flowing user context from the request to the authorization system. Many such deficiencies are possible. 

Even with a good quality token issuing system, you can easily find yourself in a situation where the tokens that you’re creating are too fungible, or too powerful, or both. Thinking back to the assets you’re trying to protect and the applications that you have, you can likely categorize some of the applications as having more “power,” if you will, than others. Sometimes we call these “highly privileged applications” because they have the capability to do something that is especially of interest to cyberattackers, like reading a lot of data, changing configuration, or anything like that. 

To best manage the privileges associated with these applications, it needs to be the case that the kinds of tokens that they use are as limited as possible. So, a particular token might authorize a capability for a certain customer, on behalf of a certain user, for a certain set of data—and nothing more than that. When privileges are very generic, like “I can do this operation for anyone, anywhere,” things become much more dangerous. So, here the idea is to make sure that the tokens that you’re getting are very specific to the intent that you have and that only the applications that need those tokens can get them, and, again, the tokens are as limited as possible. This goes a long way in reducing the possible damage that a cyberattacker could do if they found such a token errantly stored somewhere. 

A lot of the things we think about when we’re working with tokens and trying to limit them fall into the category of limiting what a cyberattacker can do if they get a foothold somewhere. This is the Zero Trust model, where you assume breach everywhere.  

Additionally, it’s essential to use standard libraries to accurately authenticate with tokens, so that all the aspects and limitations of the token are certain to be honored. 

Learn about phishing-resistant multifactor authentication from the Microsoft Secure Future Initiative (SFI). 

4. Good quality authorization  

Good quality tokens are not going to help you if they’re enforced poorly (or not at all). And bugs can creep into code. Ad hoc authorization code can render the good authentication that you’ve done moot. 

Any time you can use declarative style patterns that help you verify tokens against incoming APIs and the data that the client is attempting to access with your API, you’ll find yourself in a better place. Simple, consistent authorization yields fewer bugs and therefore less risk. 
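As a sketch of that declarative style (all names hypothetical, not any particular Microsoft library), the idea is one reusable check that compares a token’s claims against the tenant and scope an API requires, denying anything not explicitly granted:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TokenClaims:
    subject: str          # user the token was issued on behalf of
    tenant: str           # customer/tenant the token is scoped to
    scopes: frozenset     # operations the token permits, e.g. {"orders:read"}

def authorize(claims: TokenClaims, tenant: str, required_scope: str) -> bool:
    """Declarative check: the token must be scoped to this tenant and must
    explicitly carry the scope the API requires. Everything else is denied."""
    return claims.tenant == tenant and required_scope in claims.scopes

# A narrowly scoped token can do exactly one thing, for one tenant
token = TokenClaims("user@contoso.com", "contoso", frozenset({"orders:read"}))
assert authorize(token, "contoso", "orders:read")        # granted scope: allowed
assert not authorize(token, "contoso", "orders:write")   # scope not granted: denied
assert not authorize(token, "fabrikam", "orders:read")   # wrong tenant: denied
```

Because every API routes through the same small check, there is one place to review and test, rather than ad hoc authorization logic scattered across handlers.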

5. Network isolation 

In addition to having good quality tokens, it’s important to isolate the pieces of your environment to the maximum extent possible. Again, this is done because it’s prudent to assume that a cyberattacker has a foothold somewhere in your network. The questions are “where exactly can that foothold be,” and “once they have that foothold, where in my network can they get to?” If a threat actor can reach any part of your system from any other part of your system, this is obviously less good than if your most sensitive systems can be accessed from exactly one or two key places and nowhere else. When properly controlled, most footholds become useless to a cyberattacker—or at least only indirectly useful.  

Use service tags to create boundaries around your various assets such that applications are used by exactly those systems that are supposed to be using them and data is accessed by exactly those applications that are supposed to be accessing the data. This goes a long way to take many cyberthreats off the table.  

Network isolation can happen at several layers in the network stack. Commonly, Layer 7 is used at the perimeter. Maybe this manifests as some kind of HTTP proxy, for example, or an HTTP routing gateway. However, protection is incomplete without additional work happening at Layer 3 within your network. You want to limit IP traffic to exactly the places you want it to go. You might use techniques like virtual LANs, or similar constructs like network security groups (NSGs) in Microsoft Azure. The idea is to limit connectivity to exactly what is necessary to do the job and not give the cyberattacker freedom to move around.

With good network isolation comes the ability to log any attempts to gain access at the perimeter, and potentially even internally. Depending on what networking technology you’re using, all of this is great for hunting. We’ll talk about that in the next section.  

Learn more about network isolation and other best practices from SFI.

6. Detections  

It’s normal to think about monitoring for reliability. Systems need to stay within their operating parameters in the face of changes and external conditions. But it’s also important to think about detection from the perspective of your threat model. If you identify five or ten risks in your threat model that need controls, it’s useful to think about how you might detect if any of those things are actually happening in your environment.  

In this context, one place to look is at the perimeter—by examining your incoming HTTP traffic, for instance. But you can also look anywhere in your environment where you predict that attacks might happen. You might look for badly formatted requests, or fuzzing, or evidence of DDoS attack—whatever is appropriate to the risks you have. The idea is that you want to be able to create alerts if you have evidence of a threat actor operating in your estate.  

And, of course, security products can be very helpful here.  

7. Auditing

We separate the notion of auditing from detection. Specifically, auditing is what I will call the data you would use after a breach to determine the extent of the breach and the customers affected by it. If you find a vulnerability without any evidence of threat actor exploitation, you’d want to check your audit data to verify that claim. That way you have evidence that the problem you found was not in fact exploited. If it was exploited, you’ll know to what extent, who was affected, and who needs to be notified.

Some parts of your endpoint detection and response (EDR) stream will be very useful for auditing. Additional auditing information can come from the logs you create in your applications that record suitable information concerning recent activity. 

8. Things not to miss 

It’s important to think about all the applications and data that you have in your estate. For instance, it’s easy to overlook the backup data that you have stored. A cyberattacker might not be able to get access to your primary systems but might find that your backups are entirely unprotected and they can just read the backup.

Similarly, support systems often go overlooked. There are frequently important customer support scenarios that require access, and it’s easy to fall into the trap of not giving those systems the highest level of scrutiny. 

We should add systems under development and test systems to this problematic set. In both cases, the code running those systems is less trustworthy than normal production code. Development code, for instance, can be presumed to have more bugs than production code. Some of those bugs might be authorization bugs, and buggy authorization code might provide access to important assets. Therefore, your plans should include even greater scrutiny for these kinds of systems.

Explore actionable patterns and practices from SFI

In summary

If you’ve gotten as far as identifying all of your assets and applications, and then thinking about the access patterns and controls between them—including authentication, authorization, network isolation, and the use of bug-resistant patterns—you’re in a good position to write a risk summary that can guide your actions for many months. And we haven’t even touched on basics like vulnerability management, security bug management, and the usual software lifecycle work necessary to keep a system in good health. Combine all of the above and you should have a solid risk plan.

Microsoft
Deputy CISOs

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series.

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 


1Microsoft Cyber Signals Issue 9

2Microsoft Digital Defense Report 2024.

The post 8 best practices for CISOs conducting risk reviews appeared first on Microsoft Security Blog.


Announcing Microsoft Desired State Configuration v3.2.0

Announcing DSC v3.2.0

We’re excited to announce the General Availability of Microsoft Desired State Configuration (DSC) v3.2.0. This release delivers new built-in Windows resources, experimental Bicep integration via gRPC, version pinning, a richer expression language, custom functions, and continued adapter improvements. All these changes are driven by real-world use, partner feedback, and community contributions. Special thanks to the WinGet team and the incredible DSC community.

For background on the DSC v3 platform, see:

What’s New in DSC v3.2

New Windows resources

DSC v3.2 ships several new built-in Windows resources, significantly expanding what you can manage out of the box:

  • Microsoft.Windows/Service — manage Windows services
  • Microsoft.Windows/OptionalFeatureList — manage Windows Optional features
    • Requires using the ZIP package of DSC for now
  • Microsoft.Windows/FeatureOnDemandList — manage Windows Features on Demand
    • Requires using the ZIP package of DSC for now
  • Microsoft.Windows/FirewallRuleList — manage Windows Firewall rules
  • Microsoft.OpenSSH.SSHD/sshd_config — manage entire SSH server configuration
  • Microsoft.OpenSSH.SSHD/Subsystem and Microsoft.OpenSSH.SSHD/SubsystemList — manage SSH server configuration for subsystem entries
  • Microsoft.OpenSSH.SSHD/Windows — manage Windows SSH server configuration, such as the default shell

These resources are included in the DSC package and ready to use without additional installation.
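As a quick illustration, the new service resource can be driven from a configuration document. This is a minimal sketch: the `name` and `startupType` properties follow the what-if example shown later in this post, and everything else is illustrative.

```yaml
# Minimal sketch of a configuration document using the new Service resource.
# The name/startupType properties follow the what-if example later in this
# post; treat other details as illustrative.
$schema: https://aka.ms/dsc/schemas/v3/bundled/config/document.json
resources:
- name: spooler service
  type: Microsoft.Windows/Service
  properties:
    name: spooler
    startupType: disabled
```

You would apply a document like this with `dsc config set --file <path>`, or preview it first with `--what-if`.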

Bicep integration via gRPC (experimental)

DSC v3.2 introduces a gRPC server, enabling Bicep to orchestrate DSC resources directly. The dsc-bicep-ext extension is now included in the MSIX package and exposed on PATH.

This is the foundation for the Bicep to DSC integration. Write your configuration in Bicep. Bicep orchestrates the execution directly over gRPC without going through ARM.

Extended WhatIf support

DSC v3.2 adds --what-if support to the dsc resource set command, letting you preview changes before applying them:

dsc resource set --what-if --resource Microsoft.Windows/Service --input '{
    "name": "spooler",
    "startupType": "disabled"
}'

Prior to this release there was no way to run --what-if against individual resources. You could use the --what-if flag with the dsc config set command, which ran all resources in your configuration in --what-if mode.

Resource manifests can now declare whatIfReturns to describe what a what-if operation returns, enabling richer preview output across resources.

Version pinning

DSC v3.2 supports pinning configuration documents to specific versions of DSC and pinning resource instances in configuration documents to specific versions of the resource.

Now you can author a DSC configuration document and ensure that it only executes when the required versions of DSC and resources are available on the system. Prior to this release, DSC always used the latest version of a resource discovered on the system for configuration operations.

The following example shows how to pin a configuration document to a specific version of DSC using the version directive and how to pin individual resource instances to specific versions using the requireVersion field.

$schema: https://aka.ms/dsc/schemas/v3/bundled/config/document.json
directives:
  version: '=3.2.0' # This configuration is only valid for exactly version 3.2.0
resources:
- name: os
  type: Microsoft/OSInfo
  requireVersion: '^1.0' # Resource versions >= 1.0.0 and < 2.0.0 are valid
  properties: {}
- name: echo
  type: Microsoft.DSC.Debug/Echo
  requireVersion: '>=1.0.0, <1.3'
  properties:
    output: echo

When DSC evaluates a resource version pin in a configuration document, it looks for the latest version of the resource that meets the given requirement. If no compatible version is discovered on the system, DSC raises an error.

Starting with version 3.2, when you specify the version directive, DSC raises an error when the version of DSC operating on the configuration document isn’t compatible.

Expression language improvements

Configuration documents now support a richer expression syntax:

  • Lambda expressions with map() and filter() functions (ARM syntax)
  • dataUri() and dataUriToString() functions
  • reference() usage inside copy loops
  • requireVersion replaces apiVersion for version requirements

These additions make configuration documents more expressive and reduce the need to duplicate values across resources.
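To make the lambda support concrete, here is a sketch of a configuration document that filters an array inline. It assumes the ARM forms of `filter()`, `lambda()`, `lambdaVariables()`, `createArray()`, and `greater()` carry over to DSC unchanged, so treat it as illustrative rather than authoritative.

```yaml
# Sketch: assumes ARM's filter()/lambda() expression syntax carries over
# to DSC unchanged.
$schema: https://aka.ms/dsc/schemas/v3/bundled/config/document.json
resources:
- name: filtered
  type: Microsoft.DSC.Debug/Echo
  properties:
    # Keep only the values greater than 2.
    output: "[filter(createArray(1, 2, 3, 4), lambda('n', greater(lambdaVariables('n'), 2)))]"
```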

Adapter improvements

DSC 3.2 adds support for adapted resource manifests to the PowerShell adapters. Resource authors can create adapted resource manifests that prevent adapters from needing to do more intensive discovery operations.

This release also includes other improvements to the PowerShell adapters:

  • Added automatic conversion of PowerShell streams to DSC traces. Resource authors can participate in DSC’s tracing model by using the normal Write-* cmdlets.
  • Fixed passing credentials to adapted PSDSC resource instances.

Metadata and execution improvements

  • Microsoft.DSC metadata is now split into directives and executionInformation — cleaner separation of configuration intent from execution context.
  • _refreshEnv resource metadata updates Windows environment variables during deployment without requiring a restart.
  • Resource manifests can now specify requireSecurityContext per operation, helping users avoid problems where they retrieve data for a resource with a get or test operation and then get an access denied error when they try to run the set command.
  • Resources and extensions can now be marked as deprecated, with a deprecation message surfaced at runtime.

New extension capabilities

DSC 3.2 adds support for two new extension capabilities: importing configurations and retrieving secrets.

You can use an extension with the import capability to process arbitrary files as DSC configuration documents. For example, a hypothetical extension with this capability could transform the following TOML snippet into a DSC configuration document:

# example.dsc.toml
[directives]
version = '3.2.0'
[resources.os]
type       = 'Microsoft/OSInfo'
properties = {}

The resulting DSC configuration document:

# effective DSC configuration document
$schema: https://aka.ms/dsc/schemas/v3/bundled/config/document.json
directives:
  version: '=3.2.0' # This configuration is only valid for exactly version 3.2.0
resources:
- name: os
  type: Microsoft/OSInfo
  properties: {}

When you use the --file option with the dsc config * commands, DSC checks the file extension to see whether an extension can process that file. If there is no DSC extension that handles the given file extension, DSC tries to parse the file as a configuration document.

You can use a DSC extension with the secret capability to retrieve secrets at runtime. Presenting secret retrieval through the extension model enables DSC to be used with secrets in a variety of contexts without requiring the core engine to handle these operations directly. This capability is paired with the new secret() configuration expression for retrieving secrets by name.
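For example, a configuration document might reference a secret by name with the new secret() expression. The secret name here is hypothetical; the actual lookup is delegated to whichever installed extension advertises the secret capability.

```yaml
# Sketch: 'MyApiToken' is a hypothetical secret name. Resolution is
# delegated to an installed extension with the secret capability.
$schema: https://aka.ms/dsc/schemas/v3/bundled/config/document.json
resources:
- name: token
  type: Microsoft.DSC.Debug/Echo
  properties:
    output: "[secret('MyApiToken')]"
```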

Experimental PowerShell discovery extension

DSC now includes a discovery extension for finding DSC resources in PowerShell modules. This extension looks for resource manifests and adapted resource manifests located inside PowerShell modules on the system. This makes it possible for resource authors to ship DSC resources written in PowerShell that are not PSDSC resources.

For example, with this extension, DSC could discover a resource implemented as a PowerShell script as long as the module also includes a valid manifest for the resource.

Bug fixes

  • Fixed duplicate resources appearing in dsc resource list
  • Added a clear error when attempting to use DISM resources via Appx (previously a silent failure)
  • Fixed executionInformation in config export results
  • Fixed discovery failures when encountering unsupported manifests

Community contributions

DSC v3.2 reflects the work of an active and growing contributor community. The following community members made notable contributions to this release:

  • @Gijsreyn (Gijs Reijn) — experimental PowerShell discovery extension, lambda/map/filter expressions, dataUri functions, adapted resource manifest fixes, and more.
  • @mimachniak — PowerShell adapter credentials fix for passing username and password.

Thank you to everyone who filed issues, tested previews, and submitted fixes during the DSC v3.2 release cycle.

Installing DSC

On Windows, you can install DSC from the Microsoft Store using winget. Installing from the Store gives you automatic updates.

Search for the latest version of DSC:

winget search DesiredStateConfiguration --source msstore

Name                              Id           Version
------------------------------------------------------
DesiredStateConfiguration         9NVTPZWRC6KQ Unknown
DesiredStateConfiguration-Preview 9PCX3HX4HZ0Z Unknown

Install DSC using the id parameter:

# Install latest stable
winget install --id 9NVTPZWRC6KQ --source msstore
# Install latest preview
winget install --id 9PCX3HX4HZ0Z --source msstore

To install the ZIP package on Windows, Linux, or macOS:

  1. Download the latest release from the PowerShell/DSC repository.
  2. Expand the release archive.
  3. Add the folder containing the expanded archive contents to your PATH environment variable.

Support lifecycle

DSC follows semantic versioning. DSC v3.2.0 is the current stable release. Patch releases update the third digit of the semantic version number — for example, 3.2.1 is a patch update to 3.2.0.

Stable releases receive patches for critical bugs and security vulnerabilities for three months after the next stable release. For example, v3.2.0 is supported for three months after v3.3.0 is released.

Always update to the latest patch version of the release you’re using.

Looking ahead

Work continues on DSC v3.3, with previews starting shortly after the v3.2.0 GA release.

Call to action

For more information about DSC, see the DSC documentation. We value your feedback. Stop by our GitHub repository and let us know of any issues you find.

Jason Helmick

Sr. Product Manager, PowerShell

The post Announcing Microsoft Desired State Configuration v3.2.0 appeared first on PowerShell Team.
