Windows provides a powerful and flexible management framework that enables organizations to configure, restrict, or simplify the Settings experience for their users. One of the key mechanisms behind this framework is the ms-settings: URI scheme — a consistent, internal navigation system that defines how each Settings page in Windows is accessed.
If you have ever opened Windows Settings and navigated to Privacy → Camera, you were effectively visiting a page identified by a specific internal URI:
```
ms-settings:privacy-webcam
```

These URIs exist for virtually every part of the modern Settings interface, from Windows Update to Bluetooth, Accounts, and Accessibility. Administrators and automation tools can call these URIs directly to open pages, but they can also use them for more advanced purposes, such as controlling which pages are available to users through Group Policy or MDM.
Each page within Windows Settings has an internal URI (Uniform Resource Identifier) that begins with the prefix ms-settings: followed by a descriptive identifier. For example:
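```
ms-settings:windowsupdate       (Windows Update)
ms-settings:bluetooth           (Bluetooth)
ms-settings:display             (Display)
ms-settings:network-proxy       (Proxy)
ms-settings:privacy-webcam      (Camera privacy)
```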
Typing any of these URIs into the Run dialog (Win + R) opens the corresponding Settings page immediately; from a command prompt, prefix the URI with start. To open the camera settings, for example:
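```
start ms-settings:privacy-webcam
```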
The corresponding PowerShell command would be:

```powershell
Start-Process "ms-settings:privacy-webcam"
```

These same URIs are also what Group Policy and Intune use when defining visibility rules for Settings. Over time, new pages and features are added to Windows, and therefore new URIs are introduced. Most of the available URIs are documented in Microsoft's reference for the ms-settings URI scheme.
Having a reliable way to extract these URIs directly from Windows helps administrators maintain accurate and consistent configuration policies across versions.
In enterprise environments, Windows devices are rarely unmanaged. Organizations typically enforce baseline configurations to meet security, usability, and compliance requirements, and control over the Settings experience can be a crucial part of that strategy. By controlling access to specific Settings pages, IT administrators can reduce accidental misconfiguration, keep managed options from being changed, and present a simpler, more predictable interface on shared or locked-down devices.
Windows includes a built-in Group Policy setting that allows administrators to control which pages in the Settings app are visible to users.
Policy location: Computer Configuration → Administrative Templates → Control Panel → Settings Page Visibility
This policy uses a semicolon-separated list of rules that reference Settings pages by their ms-settings identifiers; the policy recognizes only the part after the ms-settings: colon (privacy-webcam rather than ms-settings:privacy-webcam). For example:

Hide selected pages:

```
hide:privacy-webcam;bluetooth;display
```

Show only selected pages:

```
showonly:windowsupdate;about
```

Administrators can use these directives to tailor the Settings experience precisely to the needs of their organization. This approach is particularly useful in locked-down environments where users have a limited set of configuration options, or where privacy and security policies mandate restricted access to certain features. Find more information about managing Settings URIs here: https://learn.microsoft.com/en-us/windows/client-management/client-tools/manage-settings-app-with-group-policy
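For quick testing on a single machine, the same setting can be applied through the registry value that backs this policy. A minimal sketch, assuming the standard Explorer policies key used by Settings Page Visibility (verify against your own ADMX baseline before relying on it):

```powershell
# Apply a "show only" rule locally for testing (requires an elevated prompt).
$key = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "SettingsPageVisibility" -Value "showonly:windowsupdate;about"

# Remove the restriction again when finished testing:
# Remove-ItemProperty -Path $key -Name "SettingsPageVisibility"
```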
The Windows Settings app evolves continuously. With each new feature update or release, new categories, pages, and URIs may appear. For administrators maintaining long-term device configurations, that means GPO lists need to be reviewed and updated regularly.
While Microsoft provides extensive documentation for the most common pages, enterprise administrators often need a complete and current list of all URIs available on the system they are managing. This ensures that policies remain accurate and compatible, even when upgrading from one Windows build to another.
To simplify this process, I created a small PowerShell script called Get-MSSettingsURIs.ps1.
This script scans the system’s SystemSettings.dll file — the core component behind the modern Settings interface — and extracts every ms-settings: URI it contains. Because it reads directly from the operating system, it always reflects exactly what that Windows build supports.
You can use the script to enumerate every ms-settings: URI a specific Windows build exposes, compare the results across builds or machines, and generate identifier lists ready for use in the Settings Page Visibility policy.
At a high level, Get-MSSettingsURIs.ps1 reads binary data from C:\Windows\ImmersiveControlPanel\SystemSettings.dll and searches for all strings that match the ms-settings: pattern. It supports both ASCII and Unicode encodings to ensure no identifiers are missed.
It then sorts and outputs the unique list of URIs. You can optionally format the results to make them directly usable in a Group Policy setting. Because the script runs locally and reads system files, it does not require any administrative privileges beyond read access to the Windows directory.
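The core technique can be illustrated with a short sketch. This is a simplified stand-in for the script, not its actual code, and assumes the default file location:

```powershell
# Simplified sketch: extract ms-settings: identifiers from SystemSettings.dll.
$path  = "$env:windir\ImmersiveControlPanel\SystemSettings.dll"
$bytes = [System.IO.File]::ReadAllBytes($path)

# Decode the same bytes as ASCII and as UTF-16 LE so identifiers stored in
# either encoding are found.
$ascii   = [System.Text.Encoding]::ASCII.GetString($bytes)
$unicode = [System.Text.Encoding]::Unicode.GetString($bytes)

# Match the ms-settings: prefix followed by a descriptive identifier.
$pattern = 'ms-settings:[a-z0-9\-]+'
$found = foreach ($text in $ascii, $unicode) {
    [regex]::Matches($text, $pattern) | ForEach-Object { $_.Value }
}

$found | Sort-Object -Unique
```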
To run the script with default settings:
```powershell
.\Get-MSSettingsURIs.ps1
```

This outputs a complete list of all ms-settings: URIs found in the current Windows installation. If you want the results formatted for direct use in Group Policy (without the ms-settings: prefix):

```powershell
.\Get-MSSettingsURIs.ps1 -GpoStyle
```

To scan a copy of SystemSettings.dll from another Windows build (for testing or preparation):

```powershell
.\Get-MSSettingsURIs.ps1 -Binary "C:\Temp\SystemSettings.dll"
```

Standard output might look like this:

```
ms-settings:privacy-webcam
ms-settings:privacy-microphone
ms-settings:windowsupdate
ms-settings:network-proxy
ms-settings:bluetooth
ms-settings:display
...
```

When using the -GpoStyle parameter, the output is trimmed for direct use in Group Policy:

```
privacy-webcam
privacy-microphone
windowsupdate
network-proxy
bluetooth
display
...
```

Once you have the list of Settings URIs, you can automate several follow-up tasks, such as building Settings Page Visibility policy strings or comparing URI sets between builds; a short example follows below.
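As one illustration of such automation (assuming -GpoStyle returns one identifier per line, as shown above), the output can be turned directly into a policy string:

```powershell
# Build a "hide:" value for the Settings Page Visibility policy from the output.
$identifiers = .\Get-MSSettingsURIs.ps1 -GpoStyle

# Example filter: hide all privacy-related pages (adjust to your needs).
$toHide      = $identifiers | Where-Object { $_ -like "privacy-*" }
$policyValue = "hide:" + ($toHide -join ";")
$policyValue
```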
Because the script runs without external dependencies, it can easily be distributed as part of enterprise configuration management or imaging workflows.
Example Workflow for Administrators:

1. Run Get-MSSettingsURIs.ps1 on a reference machine for each Windows build you manage and save the output.
2. Compare the new list against the list from the previous build to spot pages that were added or removed (see the comparison sketch below).
3. Update your hide: or showonly: policy value so it reflects only the pages you intend to expose.
4. Deploy the updated policy through Group Policy or Intune and verify the result on a test device.
By incorporating this simple step into your management process, you ensure that every system in your environment reflects the intended configuration and that new Settings pages introduced by future Windows versions are quickly identified.
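The comparison in step 2 can be as simple as the following sketch; the baseline file path is hypothetical and stands in for wherever you archive previous results:

```powershell
# Compare the URI list from the current build against a saved baseline.
$baseline = Get-Content "\\server\share\ms-settings-uris-previous-build.txt"   # hypothetical path
$current  = .\Get-MSSettingsURIs.ps1

Compare-Object -ReferenceObject $baseline -DifferenceObject $current |
    Where-Object SideIndicator -eq "=>" |
    Select-Object -ExpandProperty InputObject   # pages new in this build
```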
The ms-settings: URI system is one of Windows’ most useful yet under-appreciated administrative capabilities. For enterprises that depend on configuration consistency, compliance, or controlled user experiences, understanding and managing these URIs is key.
The Get-MSSettingsURIs.ps1 script gives administrators an easy way to extract, review, and apply these identifiers directly from any Windows installation. Combined with Group Policy or MDM, it provides a fast and reliable method to shape the Settings experience for users — ensuring that each system remains secure, focused, and predictable.
This Sample Code is provided for the purpose of illustration only
and is not intended to be used in a production environment. THIS
SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED "AS IS" WITHOUT
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT
LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS
FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free
right to use and modify the Sample Code and to reproduce and distribute
the object code form of the Sample Code, provided that You agree:
(i) to not use Our name, logo, or trademarks to market Your software
product in which the Sample Code is embedded; (ii) to include a valid
copyright notice on Your software product in which the Sample Code is
embedded; and (iii) to indemnify, hold harmless, and defend Us and
Our suppliers from and against any claims or lawsuits, including
attorneys' fees, that arise or result from the use or distribution
of the Sample Code.
This sample script is not supported under any Microsoft standard
support program or service. The sample script is provided AS IS
without warranty of any kind. Microsoft further disclaims all implied
warranties including, without limitation, any implied warranties of
merchantability or of fitness for a particular purpose. The entire
risk arising out of the use or performance of the sample scripts and
documentation remains with you. In no event shall Microsoft, its
authors, or anyone else involved in the creation, production, or
delivery of the scripts be liable for any damages whatsoever
(including, without limitation, damages for loss of business
profits, business interruption, loss of business information, or
other pecuniary loss) arising out of the use of or inability to
use the sample scripts or documentation, even if Microsoft has
been advised of the possibility of such damages.
Bob calls it the worst tech job market in 45 years. Josh has been grinding harder than ever with a slower pipeline. In this raw year-in-review episode, the hosts share what's actually helped them survive 2025—human networking over LinkedIn vanity metrics, community over isolation, and resilience over despair. Plus: Bob announces he's quitting the Scrum Alliance and both hosts call out the AI bandwagon for what it is.
Josh Anderson's "Leadership Lighthouse"
Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.
Bob Galen's "Agile Moose"
Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."
Do More Than Listen:
We publish video versions of every episode and post them on our YouTube page.
Help Us Spread The Word:
Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!
James and Frank unwrap 2025 as the Year of AI Development, covering new models, the rise of agents, and editor integrations like Copilot in VS Code that changed how developers write and maintain code. You’ll hear practical takeaways—how next-edit, local models, RAG/vectorization and app‑on‑demand sped prototyping, slashed maintenance time, and why the hosts think the AI boom has legs into 2026 despite looming uncertainty.
Machine transcription available on http://mergeconflict.fm
Data Loss Prevention (DLP) and Data Security Posture Management (DSPM) form the enforcement layer that ensures Copilot can only operate within safe, compliant boundaries. While permissions determine what users can access, DLP and DSPM determine what users, and Copilot acting on their behalf, are allowed to do with the data. As organizations adopt generative AI, these controls are no longer optional. They become the policy engine that governs summarization, extraction, and cross-content interpretation at scale.
Copilot does not bypass DLP. It does not circumvent policies. It obeys the same data security enforcement path as every Microsoft 365 workload.
When properly configured, DLP and DSPM introduce guardrails that prevent unsafe prompts, accidental oversharing, or unauthorized extraction of sensitive content, even when the user legitimately has access to the file.
This step ensures your data security posture is mature enough to support large-scale, AI-driven workflows.
Microsoft Purview DLP protects information by monitoring actions such as copying, printing, saving, uploading, and pasting content. With Copilot in Microsoft 365, DLP expands its relevance: Copilot can summarize, extract, and recombine content across the same workloads DLP protects, so a gap in policy coverage can now surface through an AI response.
DSPM (Data Security Posture Management), now delivered through Microsoft Purview Data Security and Microsoft 365 Sensitivity Indexing, adds visibility across SharePoint, OneDrive, Exchange, and Teams by identifying where sensitive data lives, where it is overshared, and where labeling and policy coverage fall short.
Together, DLP and DSPM form the policy foundation that limits AI interactions to approved contexts, prevents the movement of risky data, and ensures sensitive material is handled correctly.
DLP policies must be designed for modern collaboration, not just endpoint or email scenarios. Copilot underscores the importance of this model because it interacts with data across multiple workloads simultaneously. A Copilot-ready DLP framework should therefore cover every location Copilot can reach: SharePoint and OneDrive content, Exchange mail, Teams messages and files, and endpoint devices.
Below is a table of Microsoft-supported DLP actions usable in SharePoint, OneDrive, Exchange, and Endpoint DLP policies.
| DLP Action (Microsoft Supported) | Description | How It Controls Copilot |
|---|---|---|
| Block | Prevents an activity such as print, copy, or upload | Prevents Copilot from retrieving or using the content in prohibited workflows |
| Block with Override | Allows user justification to proceed | Provides flexibility for legitimate business needs while still gating AI extraction |
| Audit Only | Logs the action but allows it | Helpful in learning how users prompt Copilot before enforcing controls |
| Restrict Access or Encrypt | Applies encryption or reduces permissions | Prevents Copilot from summarizing or interpreting restricted content |
| Block Sharing (internal or external) | Prevents risky share events | Ensures Copilot cannot surface data to users who lack access |
| Endpoint DLP: Block Copy or Paste | Prevents data exfiltration on devices | Stops AI-assisted workflows from moving sensitive data into unsafe endpoints |
| Endpoint DLP: Block Print or Screen Capture | Controls output channels | Prevents printing or screenshotting of AI-generated content that contains sensitive data |
Documented Purview DLP capabilities support these actions. Copilot respects these controls because AI must follow the underlying Microsoft 365 permission and policy engine. DLP does not scan Copilot. It governs the user’s ability to perform protected actions, and Copilot executes under those permissions.
Disclaimer: Microsoft has not published Copilot-specific DLP outcomes for each action. The behaviors described above are based on the documented principle that Copilot operates entirely within the user’s allowed actions and Purview DLP enforcement pipeline. Organizations should test DLP enforcement with AI prompts to validate expected outcomes.
DSPM in Microsoft Purview provides a macro-level visibility layer across your tenant. It identifies where sensitive data lives, where it is overshared, and where security policies do not align with regulatory expectations. This is essential before enabling Copilot, because AI depends on the underlying health of your data security posture.
DSPM helps identify where sensitive data resides, which sites and libraries are overshared, and which content carries no label or policy protection at all. Those findings should then drive remediation before Copilot is broadly enabled: tighten permissions on overshared locations, apply or correct sensitivity labels, and extend DLP coverage to the gaps DSPM surfaces.
DSPM does not control Copilot. It provides visibility into where Copilot could interpret or summarize data that is currently under-secured.
Your DLP configuration should include a minimal baseline policy set that specifically governs AI-driven behaviors. The table below lists fully supported and valid DLP rule categories you can deploy today in Microsoft 365.
| Policy Type | Purpose | Supported Enforcement Action |
|---|---|---|
| Financial Data Policy (PCI, ABA, SWIFT, IBAN) | Prevent financial data leakage through Copilot | Block, Block with Override, Audit |
| Privacy or PII Data Policy (GDPR, CCPA, NIST) | Prevent AI summarization or the sharing of personal data | Restrict Access, Block |
| Health Information Policy (HIPAA Alignment) | Prevent accidental PHI exposure through prompts | Block, Restrict |
| Source Code Protection Policy | Stop Copilot from exposing internal IP or code artifacts | Block, Endpoint DLP Block Copy |
| M&A or Legal Confidential Policy | Protect legal case files and board materials | Restrict Access (Encryption) |
| Internal Only Business Data Policy | Prevent movement of internal files to external channels | Block External Sharing, Block Print |
| High Business Impact (HBI) Policy | Establish boundaries for sensitive operations | Block or Block with Override |
| Universal Audit Policy | Monitor all Copilot-related actions during rollout | Audit Only |
These categories come from Microsoft’s built-in sensitive information types and Purview DLP policy templates.
Disclaimer: The mapping to Copilot relies on Microsoft’s documented rule that Copilot obeys user permissions and Purview DLP enforcement. Microsoft does not publish rule-by-rule matrices for Copilot, so enforcement expectations are based on the underlying Microsoft 365 security model.
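Policies like these can be created with the Microsoft Purview (Security & Compliance) PowerShell cmdlets. The sketch below is illustrative only: the policy name, rule name, and chosen sensitive information type are examples, and the policy is created in test mode so it audits before it blocks:

```powershell
# Sketch: baseline financial-data DLP policy in test (audit) mode.
# Requires the ExchangeOnlineManagement module and Security & Compliance permissions.
Connect-IPPSSession

New-DlpCompliancePolicy -Name "Financial Data - Copilot Baseline" `
    -ExchangeLocation All -SharePointLocation All -OneDriveLocation All -TeamsLocation All `
    -Mode TestWithNotifications   # audit first, switch to Enable once validated

New-DlpComplianceRule -Policy "Financial Data - Copilot Baseline" `
    -Name "Detect credit card numbers" `
    -ContentContainsSensitiveInformation @{ Name = "Credit Card Number"; minCount = "1" } `
    -NotifyUser LastModifier `
    -NotifyPolicyTipCustomText "This file contains confidential financial data and cannot be used in Copilot." `
    -BlockAccess $false           # keep auditing; set to $true when enforcing
```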
Successful AI adoption depends not only on policy enforcement but also on user awareness. Many data risks occur unintentionally, especially when employees prompt Copilot without understanding the sensitivity of the underlying content. In-app DLP alerts and user coaching messages serve as real-time guardrails that educate users while preventing risky actions before they occur. These prompts are embedded directly in Microsoft 365 applications, so they appear when a user attempts an action that violates or approaches a DLP boundary.
User-coaching messages can be tailored to your policies and should provide clear, actionable guidance, such as:
“This file contains confidential financial data and cannot be used in Copilot.”
“Your action would send sensitive personal data outside approved boundaries. Please review data handling requirements.”
“Extraction of regulated data is restricted by corporate policy. Contact your compliance team if this task is required.”
These alerts do more than block or warn. They reinforce the organization’s data handling expectations and help employees understand why a particular action is sensitive in the context of AI-driven workflows. Over time, user coaching reduces accidental policy violations, increases responsible AI usage, and strengthens your overall data culture. It introduces friction exactly where it is most effective: at the moment of decision, when a user is about to misuse or mishandle data, intentionally or not.
Once your DLP policies are configured, verify that they correctly govern Copilot’s behavior. Copilot operates inside the same compliance boundary as Microsoft 365, but real-world testing is the only way to confirm that policies behave as intended across AI-driven scenarios. Controlled validation ensures your enforcement logic, user prompts, override rules, and data controls function predictably when Copilot interacts with sensitive or regulated information.
A structured testing process should involve multiple personas, including standard users, power users, and, where appropriate, exempt users. Each test should be executed under a controlled identity with documented permission levels, giving you clear insight into how AI behaves under different user contexts.
Practical test scenarios include asking Copilot to summarize a document covered by a Block rule, prompting for content that contains a protected sensitive information type (such as financial or health data), exercising the Block with Override justification flow, and confirming that policy tips appear when a prompt approaches a DLP boundary.
Beyond individual tests, you should evaluate the end-to-end auditing path, confirming that AI-related actions generate the expected entries in Purview Audit and that these logs clearly indicate whether DLP enforcement occurred. This is essential for investigations, regulatory reviews, and AI safety governance.
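For the audit verification step, the unified audit log can be queried from Security & Compliance PowerShell. A minimal sketch, noting that the Copilot record type name is an assumption and should be verified in your own tenant:

```powershell
# Sketch: confirm that Copilot interactions appear in the unified audit log.
# Assumes Connect-IPPSSession has already been run; verify the RecordType name
# for Copilot events in your tenant before relying on it.
$results = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -RecordType CopilotInteraction -ResultSize 1000

# Inspect who interacted with Copilot and when, to correlate with DLP test runs.
$results | Select-Object CreationDate, UserIds, Operations | Sort-Object CreationDate
```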
By performing these controlled scenarios, you gain measurable assurance that your DLP framework is not only correctly configured but also resilient under real AI workloads. These tests form a critical part of your Copilot readiness program, ensuring that AI behaves safely, consistently, and in complete alignment with your organization’s compliance requirements.
Configuring DLP and DSPM for Copilot is not simply a compliance exercise. It is how you create safe and predictable boundaries around AI operations. By combining sensitive information identification, least-privilege access control, real-time enforcement, user coaching, and policy-based protection, you ensure Copilot works with your security posture rather than around it.
Organizations that implement DLP and DSPM before enabling Copilot gain three critical advantages: enforcement of data handling rules becomes predictable, the risk of oversharing through AI responses drops, and AI-related activity is auditable from day one. These safeguards create the conditions necessary for AI adoption at scale. A secure data foundation ensures that Copilot enhances productivity while remaining aligned with regulatory requirements, internal policy, and organizational risk tolerance.
In the following article, we will build on this enforcement layer by focusing on identity-driven protections. We will explore how to strengthen security with Conditional Access and Session Controls for Copilot Access, ensuring that every AI interaction is validated through identity assurance, device health, conditional risk scoring, and session-based restrictions. These controls complete the defensive perimeter, tying together identity, data, and AI governance under a single, cohesive framework.