How to Run PowerShell Scripts: A Complete Guide

This guide covers how to run PowerShell scripts: everything from basic script execution to remote operations, parameter passing, and troubleshooting.

PowerShell is useful for automating administrative tasks, but before you can run PowerShell scripts, you need to understand the mechanisms that control script execution. The core concepts stay constant whether you are running a basic script locally from the command line or coordinating complicated tasks on many remote servers.

Run PowerShell scripts: the basic methods

From a PowerShell console session

Open PowerShell, navigate to your script’s folder, and run it:

.\MyScript.ps1

Use full paths when executing from other directories:

C:\Users\Admin\Scripts\MyScript.ps1

Tip: navigate first using cd to shorten paths when testing multiple scripts.

From File Explorer

Right-click the .ps1 file and select Run with PowerShell. Windows temporarily applies a Bypass policy, runs your script, and closes the window. Because the window closes as soon as the script finishes, this works best for simple scripts whose output you don't need to see. For anything interactive, use the console or Visual Studio Code.

From Visual Studio Code

Open the script, press F5 to run the entire file, or F8 to execute selected lines. Output appears in the integrated terminal.

Run a PowerShell script (Image Credit: Mike Kanakos/Petri.com)

To run from Command Prompt:

PowerShell -File C:\Users\Admin\Scripts\MyScript.ps1
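
If the path contains spaces, wrap it in quotes. The same pattern works with PowerShell 7's pwsh executable if you have it installed (the path below is just a placeholder):

PowerShell -File "C:\Users\Admin\My Scripts\MyScript.ps1"
pwsh -File "C:\Users\Admin\My Scripts\MyScript.ps1"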

Understanding PowerShell execution policies

What are execution policies?

When you first try to run a PowerShell script on a Windows client computer, you might encounter the error “running scripts is disabled on this system.” That message comes from PowerShell’s execution policy. Execution policies are not strict security boundaries—they are more like guardrails to help prevent accidental script execution. Windows client systems set the default policy to Restricted, blocking all script execution and allowing only individual commands in the console.

The different policy settings

PowerShell offers six execution policy settings that balance security and convenience differently.

Execution policy | Default platform | Description | Security level
Restricted | Windows 10/11 | Blocks all scripts; only interactive commands run in the console. | Highest
RemoteSigned | Windows Server | Allows local scripts; requires a digital signature for downloaded scripts. | High
AllSigned | None (manual) | Requires all scripts, local or downloaded, to be signed by a trusted publisher. | High
Unrestricted | Cross-platform | Runs all scripts but warns for untrusted sources. | Medium
Bypass | Cross-platform | No restrictions or warnings; scripts run freely. | Low
Undefined | N/A | No policy is set; defaults apply. | N/A

On Linux, macOS, and Windows Subsystem for Linux, the default policy is Unrestricted and cannot be changed.

For most environments, RemoteSigned provides the right balance between flexibility and protection. To learn more, run Get-Help about_Execution_Policies in PowerShell or read Microsoft’s execution policy documentation.

Checking and changing your execution policy

To view your current policy:

Get-ExecutionPolicy

To list policies at all scopes:

Get-ExecutionPolicy -List

Change PowerShell execution policy before you run PowerShell scripts (Image Credit: Mike Kanakos/Petri.com)

To change it:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

If you lack admin rights, set it for your user only:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

Restart PowerShell for the change to take effect.

Passing parameters to scripts

How parameters work

Parameters make scripts flexible and reusable:

.\BackupScript.ps1 -BackupPath "D:\Backups" -KeepDays 30

Each name (for example, -BackupPath) corresponds to a parameter defined in the script.
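
For reference, a script such as BackupScript.ps1 might declare those parameters in a param block along these lines (a minimal sketch; the parameter names match the example above, the body is hypothetical):

param(
    [Parameter(Mandatory)]
    [string]$BackupPath,   # destination folder for the backup
    [int]$KeepDays = 30    # how many days of backups to retain
)

Write-Host "Backing up to $BackupPath, keeping $KeepDays days of history."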

Common execution options

You can adjust how PowerShell itself runs:

Option | Description | Example
-NoExit | Keeps PowerShell open after the script finishes. | PowerShell -NoExit -File script.ps1
-NoProfile | Runs without loading your profile. | PowerShell -NoProfile -File script.ps1
-ExecutionPolicy Bypass | Temporarily ignores policy restrictions. | PowerShell -ExecutionPolicy Bypass -File script.ps1
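
These switches can be combined. A common pattern for scheduled tasks, for example, runs a script without loading a profile and bypasses the policy for that one invocation (the script name is a placeholder):

PowerShell -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\NightlyJob.ps1"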

Dot sourcing

To retain variables and functions after a script finishes, dot-source it:

. .\MyFunctions.ps1

This loads its contents into your current session.
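
As a quick illustration, suppose MyFunctions.ps1 defines a helper function (the function name here is hypothetical):

# Contents of MyFunctions.ps1
function Get-DiskReport {
    Get-PSDrive -PSProvider FileSystem |
        Select-Object Name, Used, Free
}

# Dot-source the file, then call the function in the current session
. .\MyFunctions.ps1
Get-DiskReport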

Working with PowerShell and Visual Studio Code (Image Credit: Mike Kanakos/Petri.com)

Functions vs. scripts: understanding the difference

Scripts are complete .ps1 files that run and exit. Functions are reusable code blocks you load once and call repeatedly. Loading frequently used functions into your PowerShell profile makes them available every session.

To load functions automatically, add this line to your profile:

. C:\Scripts\MyFunctionLibrary.ps1

Learn more with Get-Help about_Profiles or the PowerShell profiles documentation.
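
If you have never created a profile, a minimal sketch for wiring this up looks like the following (the library path matches the example above):

# Create the profile file if it doesn't exist yet
if (-not (Test-Path $PROFILE)) {
    New-Item -Path $PROFILE -ItemType File -Force | Out-Null
}

# Append the dot-source line so the functions load in every new session
Add-Content -Path $PROFILE -Value '. C:\Scripts\MyFunctionLibrary.ps1'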

Running scripts on remote computers

PowerShell remoting

Remoting lets you run code across many computers simultaneously using either WinRM or SSH.

  • WinRM (ports 5985 HTTP, 5986 HTTPS):
    Enable it with Enable-PSRemoting -Force.
  • SSH (port 22):
    Works across Windows, Linux, and macOS when SSH servers are configured.
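
For example, once SSH remoting is set up and you are running PowerShell 7, Invoke-Command can target a machine by host name instead of WinRM (the host and user names below are placeholders):

Invoke-Command -HostName server01.contoso.com -UserName admin -ScriptBlock {
    Get-Process | Sort-Object CPU -Descending | Select-Object -First 5
}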

Enter-PSSession vs. Invoke-Command

Use Enter-PSSession for interactive remote control of a single computer:

Enter-PSSession -ComputerName Server01

Exit with Exit-PSSession.

Connecting to a remote PC with Enter-PSSession

Use Invoke-Command to execute scripts or commands on multiple systems:

$servers = "Server01","Server02","Server03"
Invoke-Command -ComputerName $servers -FilePath C:\Scripts\ConfigureServer.ps1

Using Invoke-Command to query multiple computers
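
You can also pass a script block and hand it arguments for quick one-off checks; the sketch below reuses the $servers list from the example above (the service name is just an illustration):

Invoke-Command -ComputerName $servers -ScriptBlock {
    param($ServiceName)
    Get-Service -Name $ServiceName
} -ArgumentList 'Spooler' |
    Select-Object PSComputerName, Name, Status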

Common errors and how to fix them

Error message | Cause | Fix
Running scripts is disabled | Execution policy set to Restricted | Set-ExecutionPolicy RemoteSigned
Script path not found | Incorrect path or missing .\ prefix | Verify with Test-Path and use the full path
Access denied | Permissions or policy restriction | Run PowerShell as Administrator; check with Get-Acl
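
When a path error comes up, a quick check like this confirms whether PowerShell can actually see the file before you try to run it (the path is a placeholder):

if (Test-Path -Path 'C:\Scripts\MyScript.ps1') {
    & 'C:\Scripts\MyScript.ps1'
} else {
    Write-Warning 'Script not found - check the path and the .\ prefix.'
}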

Conclusion

PowerShell allows administrators to automate and scale tasks efficiently across many systems. Understanding script execution, functions, and remoting gives you the foundation to build powerful automation workflows.

Frequently asked questions

How do I run a PowerShell script?

Open PowerShell, go to the script’s folder, and run .\ScriptName.ps1. In Command Prompt, use powershell.exe -File "C:\Path\To\Script.ps1".

Why does the error “Running scripts is disabled on this system” appear?

The default Restricted execution policy blocks scripts. Change it with:

Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

How do I run a PowerShell script as administrator?

Right-click PowerShell and choose Run as administrator, or use:

Start-Process PowerShell -Verb RunAs -ArgumentList "-File C:\Scripts\MyScript.ps1"

Can I run a PowerShell script from File Explorer?

Yes. Right-click the .ps1 file and select Run with PowerShell. To see output, add Read-Host at the end or run from console.
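
For example, a pause at the end of the script keeps the window open until you press Enter:

# ...script work happens here...
Read-Host -Prompt 'Press Enter to close this window'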

How do I pass parameters to a PowerShell script?

Define them in a param block:

param([string]$Name,[int]$Age)
Write-Host "Hello, $Name!"

Run it like .\MyScript.ps1 -Name "Alice" -Age 30.

The post How to Run PowerShell Scripts: A Complete Guide appeared first on Petri IT Knowledgebase.

97: Invokers and commands

In this episode of The CSS Podcast, we're diving into the power of invoker commands! Discover how the command and commandfor attributes allow you to declaratively open dialogs and show popovers. We'll explore standard commands and then jump into creating your own custom commands for more complex interactions. To close off, we cover the concept of "interest invokers" and the new CSS properties and selectors they bring.

Resources:

Introducing command and commandfor blogpost → https://goo.gle/4ozmEy4 

Authors Cards (Interest Invokers Demo) → https://goo.gle/42LU3x2 

Invoker Commands Explainer → https://goo.gle/4o0DC8n 

Interest Invokers Explainer → https://goo.gle/4nfyZGi 

 

Una Kravets (co-host)
Bluesky | Twitter | YouTube | Website
Making the web more colorful @googlechrome 

Bramus Van Damme (co-host)
Bluesky | Mastodon | YouTube | Website





Download audio: https://traffic.libsyn.com/secure/thecsspodcast/TCP097_final.mp3?dest-id=1891556

The power of the prompt

Is prompt writing really that hard?

#promptsMatter sticker featuring two emoji faces, a poop emoji on the left and a happy face emoji on the right, with a colorful logo in the center.

Well, that depends.  Seriously, it depends.  I’m not just being a consultant, I promise.

Let’s back up a little.

Since we know my day-to-day is in Power Platform, let’s start there.

My recent talk at PPCC25 was about using Copilot to accelerate your data modeling buildout.  I mean seriously, did anyone ever actually like the tediousness of manual table creation?  Click here, type this, add that, change the thing over here. 

With Copilot assisting, we can negotiate our data model and just let it be built in the background while we move on to the next task, or grab a coffee, or if it’s a huge data model, take the dog for a walk.  And then tadah, it’s done.

Back to prompting. 

For my session at the conference we used a scenario for a gated community and how to improve their management of gate traffic. 

As long as we give at least some context for our business use case, the LLM does its thing and makes a pretty good data model for us. Each of these prompts actually returns darn near identical results.

So I’m trying to build something for gates and cars and people and like tracking stuff? Can you make a thing for that?  Make it smart, k?

I need to track residents, vehicles, gate access events, and security personnel.

Create a data model with Residents, Vehicles (linked to Residents), Gates, Access Logs (timestamp, vehicle, gate, granted yes/no), and optional Security Staff.

Even this one, full of typos, gives pretty good returns:

I need a data modle for a gated comunity app. It shoud track residnets and their vehicals. Each vechile can be loged enterring or exiting a gate. Gates have names and loctions. Secuirty staff may be assinged to log evnets. I want to know when acces was granted, wich vehical it was, and wich gate.

These are all great starting points.  You can also give a serialized, detailed list like this:

Create the following tables and appropriate relationships:

Resident table- Name, specific address details, contact information, notification preferences, vehicles owned/leased

Vendor table- access duration, name, vehicles, relationship to resident

Vehicle table- make, model, year, color, plate, type (resident or vendor)

Access log table- accessed by (vendor or resident), vehicle information,  in or out, timestamps, logged by staff or automated

Where you get further is when you combine the LLM's smarts with your explicit instructions.

Let's combine the LLM-dependent prompt with our explicit serialized list.

I need a data modle for a gated comunity app. It shoud track residnets and their vehicals. Each vechile can be loged enterring or exiting a gate. Gates have names and loctions. Secuirty staff may be assinged to log evnets. I want to know when acces was granted, wich vehical it was, and wich gate.

Create the following tables and appropriate relationships:

Resident table- Name, specific address details, contact information, notification preferences, vehicles owned/leased

Vendor table- access duration, name, vehicles, relationship to resident

Vehicle table- make, model, year, color, plate, type (resident or vendor)

Access log table- accessed by (vendor or resident), vehicle information,  in or out, timestamps, logged by staff or automated

This gives me the best results yet. 

Data model diagram showing tables for Gates, Vendor, Access Log, Resident, Security Staff, and Vehicle. Each table includes relevant fields for tracking data related to a gated community.

We hear all the time about the need for human oversight with AI, and I think this is a great use case for that.  Give your Copilot the right context, and your list of tables and columns.  This reduces the time to build tremendously.

Bad prompts offer conflicting information. When you do that, then Copilot has to decide what you REALLY mean. And while I love me some good Copilot help, I’m not sure I want it to make those decisions on my behalf.

Prompts aren’t poetry; they’re specs. Start messy if you must, but add context and a tidy list and you’ll get 80–90% of the way there on the first pass. The magic isn’t the wording—it’s the clarity. Let Copilot do the clicking while you do the thinking.

(Yes, I left the typo in the AI-generated image for so many reasons)

BlueCodeAgent: A blue teaming agent enabled by automated red teaming for CodeGen AI

Introduction

Large language models (LLMs) are now widely used for automated code generation across software engineering tasks. However, this powerful capability in code generation also introduces security concerns. Code generation systems could be misused for harmful purposes, such as generating malicious code. They could also produce bias-filled code reflecting underlying logic that is discriminatory or unethical. Additionally, even when completing benign tasks, LLMs may inadvertently produce vulnerable code that contains security flaws (e.g., injection risks, unsafe input handling). These unsafe outcomes undermine the trustworthiness of code generation models and pose threats to the broader software ecosystem, where safety and reliability are critical.

Many studies have explored red teaming code LLMs, testing whether the models can reject unsafe requests and whether their generated code exhibits insecure patterns. For more details, see our earlier MSR blog post on RedCodeAgent. While red teaming has significantly improved our understanding of model failure modes, progress on blue teaming—i.e., developing effective defensive mechanisms to detect and prevent such failures—remains relatively limited. Current blue teaming approaches face several challenges: (1) Poor alignment with security concepts: additional safety prompts struggle to help models understand high-level notions, such as what constitutes a malicious or bias instruction, and typically lack actionable principles to guide safe decision-making. A case study is shown in Figure 1. (2) Over-conservatism: especially in the domain of vulnerable code detection, models tend to misclassify safe code as unsafe, leading to more false positives and reduced developer trust. (3) Incomplete risk coverage: without a strong knowledge foundation, models perform poorly when dealing with subtle or previously unseen risks.   

To address these challenges, researchers from the University of Chicago, University of California, Santa Barbara, University of Illinois Urbana–Champaign, VirtueAI, and Microsoft Research recently released a paper: BlueCodeAgent: A Blue Teaming Agent Enabled by Automated Red Teaming for CodeGen AI. This work makes the following key contributions: 

  1. Diverse red-teaming pipeline: The authors design a comprehensive red-teaming process that integrates multiple strategies to synthesize diverse red-teaming data for effective knowledge accumulation.
  2. Knowledge-enhanced blue teaming: Building on the foundation of red-teaming knowledge, BlueCodeAgent significantly improves blue-teaming performance by leveraging constitutions derived from knowledge and dynamic testing. 
  3. Principled-Level Defense and Nuanced-Level analysis: The authors propose two complementary strategies—Principled-Level Defense (via constitutions) and Nuanced-Level Analysis (via dynamic testing)—and demonstrate their synergistic effects in vulnerable code detection tasks. 
  4. Generalization to seen and unseen risks: Empowered by comprehensive red-teaming knowledge, BlueCodeAgent generalizes effectively to unseen risks. Overall, BlueCodeAgent achieves an average 12.7% improvement in F1 score across four datasets and three tasks, attributed to its ability to distill actionable constitutions that enhance context-aware risk detection. 
Figure 1. A case study of BlueCodeAgent on the bias instruction detection task. Even when concepts such as “biased” are explicitly included in additional safety prompts, models often fail to recognize biased requests (left). BlueCodeAgent (right) addresses this gap by summarizing constitutions from knowledge and applying concrete, actionable constraints derived from red teaming to improve the defense.

A blue teaming agent enabled by red teaming

Figure 2: Overview of BlueCodeAgent, an end-to-end blue teaming framework powered by automated red teaming for code security. By integrating knowledge derived from diverse red teaming and conducting dynamic sandbox-based testing, BlueCodeAgent substantially strengthens the defensive capabilities beyond static LLM analysis.

Figure 2 presents an overview of the pipeline. The framework unifies both sides of the process: red teaming generates diverse risky cases and behaviors, which are then distilled into actionable constitutions that encode safety rules on the blue-teaming side. These constitutions guide BlueCodeAgent to more effectively detect unsafe textual inputs and code outputs, mitigating limitations such as poor alignment with abstract security concepts. 

This work targets three major risk categories, covering both input/textual-level risks—including biased and malicious instructions—and output/code-level risks, where models may generate vulnerable code. These categories represent risks that have been widely studied in prior research. 

Diverse red-teaming process for knowledge accumulation 

Since different tasks require distinct attack strategies, the red-teaming process employs multiple attack methods to generate realistic and diverse data. Specifically, it is divided into three categories:

  1. Policy-based instance generation: To synthesize policy-grounded red-teaming data, diverse security and ethical policies are first collected. These high-level principles are then used to prompt an uncensored model to generate instances that intentionally violate the specified policies.
  2. Seed-based adversarial prompt optimization: Existing adversarial instructions are often overly simplistic and easily rejected by models. To overcome this limitation, an adaptive red-teaming agent invokes various jailbreak tools to iteratively refine initial seed prompts until the prompts achieve high attack success rates.
  3. Knowledge-driven vulnerability generation: To synthesize both vulnerable and safe code samples under realistic programming scenarios, domain knowledge of common software weaknesses (CWE) is leveraged to generate diverse code examples.

Knowledge-enhanced blue teaming agent 

After accumulating red-teaming knowledge data, BlueCodeAgent sets up Principled-Level Defense via Constitution Construction and Nuanced-Level Analysis via Dynamic Testing.

  1. Principled-Level Defense via Constitution Construction 
    Based on the most relevant knowledge data, BlueCodeAgent summarizes red-teamed knowledge into actionable constitutions—explicit rules and principles distilled from prior attack data. These constitutions serve as normative guidelines, enabling the model to stay aligned with ethical and security principles even when confronted with novel or unseen adversarial inputs. 
  2. Nuanced-Level Analysis via Dynamic Testing 
    In vulnerable code detection, BlueCodeAgent augments static reasoning with dynamic sandbox-based analysis, executing generated code within isolated Docker environments to verify whether the model-reported vulnerabilities manifest as actual unsafe behaviors. This dynamic validation effectively mitigates the model’s tendency toward over-conservatism, where benign code is mistakenly flagged as vulnerable. 

Insights from BlueCodeAgent 

BlueCodeAgent outperforms prompting baselines 

As shown in Figure 3, BlueCodeAgent significantly outperforms other baselines. Several findings are highlighted. 

(1) Even when test categories differ from knowledge categories to simulate unseen scenarios, BlueCodeAgent effectively leverages previously seen risks to handle unseen ones, benefiting from its knowledge-enhanced safety reasoning. 

(2) BlueCodeAgent is model-agnostic, working consistently across diverse base LLMs, including both open-source and commercial models. Its F1 scores for bias and malicious instruction detection approach 1.0, highlighting strong effectiveness. 

(3) BlueCodeAgent achieves a strong balance between safety and usability. It accurately identifies unsafe inputs while maintaining a reasonable false-positive rate on benign ones, resulting in a consistently high F1 score. 

(4) By contrast, prompting with general or fine-grained safety reminders remains insufficient for effective blue teaming, as models struggle to internalize abstract safety concepts and apply them to unseen risky scenarios. BlueCodeAgent bridges this gap by distilling actionable constitutions from knowledge, using concrete and interpretable safety constraints to enhance model alignment. 

Figure 3: F1 scores on the bias instruction detection task (BlueCodeEval-Bias) in the first row and on the malicious instruction detection task (BlueCodeEval-Mal, RedCode-based) in the second row.

Complementary effects of constitutions and dynamic testing 

In vulnerability detection tasks, models tend to behave conservatively—an effect also noted in prior research. They are often more likely to flag code as unsafe rather than safe. This bias is understandable: confirming that code is completely free from vulnerabilities is generally harder than spotting a potential issue. 

To mitigate this over-conservatism, BlueCodeAgent integrates dynamic testing into its analysis pipeline. When BlueCodeAgent identifies a potential vulnerability, it triggers a reliable model (Claude-3.7-Sonnet-20250219) to generate test cases and corresponding executable code that embeds the suspicious snippet. These test cases are then run in a controlled environment to verify whether the vulnerability actually manifests. The final judgment combines the LLM’s analysis of the static code, the generated test code, run-time execution results, and constitutions derived from knowledge. 

Researchers find the two components—constitutions and dynamic testing—play complementary roles. Constitutions expand the model’s understanding of risk, increasing true positives (TP) and reducing false negatives (FN). Dynamic testing, on the other hand, focuses on reducing false positives (FP) by validating whether predicted vulnerabilities can truly be triggered at run-time. Together, they make BlueCodeAgent both more accurate and more reliable in blue-teaming scenarios. 

Summary 

BlueCodeAgent introduces an end-to-end blue-teaming framework designed to address risks in code generation. The key insight behind BlueCodeAgent is that comprehensive red-teaming can greatly strengthen blue-teaming defenses. Based on this idea, the framework first builds a red-teaming process with diverse strategies for generating red-teaming data. It then constructs a blue-teaming agent that retrieves relevant examples from the red-teaming knowledge base and summarizes safety constitutions to guide LLMs in making accurate defensive decisions. A dynamic testing component is further added to reduce false positives in vulnerability detection. 

Looking ahead, several directions hold promise.  

First, it is valuable to explore the generalization of BlueCodeAgent to other categories of code-generation risks beyond bias, malicious code, and vulnerable code. This may require designing and integrating novel red-teaming strategies into BlueCodeAgent and creating corresponding benchmarks for new risks.  

Second, scaling BlueCodeAgent to the file and repository levels could further enhance its real-world utility, which requires equipping agents with more advanced context retrieval tools and memory components.  

Finally, beyond code generation, it is also important to extend BlueCodeAgent to mitigate risks in other modalities, including text, image, video, and audio, as well as in multimodal applications. 

The post BlueCodeAgent: A blue teaming agent enabled by automated red teaming for CodeGen AI appeared first on Microsoft Research.

e233 – Transforming Presentation Workflows with Brandin – and we like it!

Show Notes – Episode #233

In episode 233 of The Presentation Podcast, the hosts talk about how deploying PowerPoint templates across an organization can be a nightmare. They are joined by guests Jamie Garroch and Hannah Harper of BrightCarbon to discuss “BrandIn” – a PowerPoint add-in that centralizes templates, assets, and brand resources for easy access and management in a seamless interface, all within PowerPoint. Jamie and Hannah explain how BrandIn streamlines template distribution, enhances brand consistency, and empowers agencies, designers, and corporate users to access PowerPoint templates and assets to create on-brand presentations efficiently.

Highlights:

  • Overview of the BrandIn add-in for PowerPoint
  • Benefits of a centralized repository for PowerPoint templates and assets
  • Comparison of BrandIn with traditional solutions like Microsoft’s Organizational Asset Library for SharePoint
  • User experience and ease of installation for the BrandIn add-in
  • Features that enhance brand consistency and productivity for users
  • Discussion on the challenges of distributing PowerPoint templates within organizations
  • Upcoming features for BrandIn, including Brand Check and Text assets
  • User feedback and productivity improvements reported by organizations using BrandIn

Resources from this Episode:  

Show Suggestions? Questions for your Hosts?

Email us at: info@thepresentationpodcast.com

New Episodes 1st and 3rd Tuesday Every Month

Thanks for joining us!

The post e233 – Transforming Presentation Workflows with Brandin – and we like it! appeared first on The Presentation Podcast.

The Microsoft Zero Trust Assessment: Helping you operationalize the hardening of your Microsoft security products

Evolving Threats, Adaptive Defenses: The Security Practitioner’s New Reality 

Cyber threats are advancing faster than ever, and the arrival of highly accessible AI tools with a low proficiency barrier has made this a challenge most organizations cannot keep up with. According to the latest Microsoft Digital Defense Report, 28% of breaches begin with phishing, and we also see a 4.5x increase in AI-automated phishing campaigns with higher click-through rates. This example highlights the need for security organizations to not only prioritize hardened security policies but also automate detection of misconfigurations and deviations from the desired security posture. 
 
To help address these challenges, Microsoft launched the Secure Future Initiative (SFI) in November 2023, a multiyear effort to transform how we design, build, test, and operate our products and services, to meet the highest security standards. SFI unites every part of Microsoft to strengthen cybersecurity across our company and products. We’ve committed to transparency by sharing regular updates with customers, partners, and the security community. Today, we released our third SFI progress report, which highlights 10 actionable patterns and practices customers can adopt to reduce risk, along with additional best practices and guidance. In this report, we share updates across every engineering pillar, introduce mapping to the NIST Cybersecurity Framework to help customers measure progress against a recognized industry standard, and showcase new security capabilities delivered to customers. We also provide implementation guidance aligned to Zero Trust principles, ensuring organizations have practical steps to reduce risk and strengthen resilience. 

Building on these learnings, we’re excited to announce the public preview of the Microsoft Zero Trust Assessment tool, designed to help you identify common security gaps starting with Identity and Device pillars with the remaining pillars of Zero Trust coming soon. This assessment is informed by our own SFI learnings and aligned with widely recognized frameworks such as CISA’s SCuBA project. Your feedback is critical as we continue to iterate and expand this tool. Our goal is for you to operationalize it in your environment and share insights as we add more pillars in the coming months. 

Introducing Zero Trust Assessment  

A deep dive into how the Microsoft Zero Trust Assessment works, including report structure, prioritization logic, and implementation guidance, is available below in this blog. The Microsoft Zero Trust Assessment empowers teams to make informed decisions, reduce blind spots, and prioritize remediation, turning insights into action. Once you download and run the tool (installation guide), it will assess your policy configurations and scan objects to generate a comprehensive report that not only highlights gaps and risks but also explains what was checked, why a test failed, and how your organization can implement the recommended configuration. This makes the results immediately actionable; security teams know exactly what steps to take next. The report features an overview page that presents aggregated data across your tenant, highlighting overall risk levels, patterns, and trends. This allows security teams to quickly assess their organization’s posture, identify high-impact areas, and prioritize remediation efforts. 

Figure 1: Overview Page

The assessment provides a detailed list of all the tests that were conducted, including those not applicable, so the results are clear and relevant. Each test includes risk level, user impact, and implementation effort, enabling teams to make informed decisions and prioritize fixes based on business impact. By combining clear guidance with prioritized recommendations, the Zero Trust Assessment turns insights into action, helping organizations reduce blind spots, strengthen security, and plan remediation effectively. Future updates will expand coverage to additional Zero Trust pillars, giving organizations even broader visibility and guidance.  

Figure 2: Outcome of the Identity/Devices Checks

For each test performed, customers can see the exact policies or objects that are passing or failing the test with a direct link to where they can address it in the product, and guidance on how to remediate.  

Figure 3: Details of the test performed

The report also provides granular details of the policies evaluated and any applicable assignment groups. In addition, the tool provides clear guidance on details of the test performed and why it matters, and the steps required to resolve issues effectively. 

How It Works 

Here’s a quick summary of the steps for you to run the tool. Check our documentation for full details. 

First, you install the ZeroTrustAssessment PowerShell module. 

Install-Module ZeroTrustAssessment -Scope CurrentUser

Then, you connect to Microsoft Graph and to Azure by signing into your tenant. 

Connect-ZtAssessment

After that, you run a single command to kick off the data gathering. Depending on the size of your tenant, this might take several hours. 

Invoke-ZtAssessment

After the assessment is complete, the tool will display the assessment results report. A sample report of the assessment can be viewed at aka.ms/zerotrust/demo. 

The tool uses read-only permissions to download the tenant configuration, and it runs the analysis locally on your computer. We recommend you treat the data and artifacts it creates as highly sensitive organization security data.  
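
Putting the steps together, a typical first run might look like this sketch (the module and cmdlet names are taken from the steps above; the install check is just standard PowerShell):

# Install the module for the current user if it isn't available yet
if (-not (Get-Module -ListAvailable -Name ZeroTrustAssessment)) {
    Install-Module ZeroTrustAssessment -Scope CurrentUser
}

# Sign in to Microsoft Graph and Azure, then run the assessment
Connect-ZtAssessment
Invoke-ZtAssessment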

Get Started Today 

Ready to strengthen your security posture? Download and run the Zero Trust Assessment to see how your tenant measures up. Review the detailed documentation for Identity and Devices to understand every test and recommended action. If you have feedback or want to help shape future releases, share your insights at aka.ms/zerotrust/feedback. If you find the assessment valuable, pass it along to your peers and help raise the bar for all our customers.

To learn more about Microsoft Security solutions, visit our website.  Bookmark the Security blog and Technical Community blogs to keep up with our expert coverage on security matters, including updates on this assessment. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

What’s Next 

This is just the first step in the journey. We will be launching new SFI-infused assessments across the other pillars of Zero Trust in the coming months. Please stay tuned for updates.  

Want to go deeper? 

Visit the SFI webpage to explore the report, actionable patterns, NIST mapping, and best practices that can help you strengthen your security posture today.  
