Hi, it’s Brent from the Windows Directory Services team. I recently worked a case concerning a user who had the Windows Hello for Business (“WHfB”) policy disabled, but who could still sign in to the computer using their PIN. As you may have guessed, the Windows admin team of the Active Directory domain for this user wanted to know how this could be, and how they could remove this sign-in option for the user.
Let’s Talk About the Problem
The user retaining the ability to sign in using their PIN wasn’t the only issue the admin team encountered. After asking the user to remove the WHfB PIN, they discovered the option to remove the Windows Hello PIN sign-in was greyed out:
Now, it seemed there was no way to remove the user’s ability to sign in with their WHfB PIN.
How Did We Get Here?
A Microsoft Intune policy or Windows Active Directory Group Policy Object (“GPO”) was originally enabled for this user to provision Windows Hello for Business sign-in. Sometime after the user was provisioned and using their PIN to sign in, the Windows admin team determined this user should no longer use WHfB credentials. To remove the user’s ability to do so, they configured the Intune and/or GPO policy to disable Windows Hello for Business. After successfully refreshing the policy to the user’s computer, they confirmed the PassportForWork registry key was set to disabled as follows:
HKLM\SOFTWARE\Policies\Microsoft\PassportForWork
Enabled REG_DWORD 0x0
The actions performed above will not prevent an already provisioned user from using their Windows Hello for Business PIN to sign in to the Windows computer. To better understand the issue, the following details clarify how policies such as Intune and GPOs relate to the Windows Hello for Business credential provider.
When either an Intune policy or Windows GPO is configured for a user to enable WHfB, the policy is only enabling the user to enroll for provisioning to use Windows Hello for Business. The provisioning process and authentication process for Windows Hello for Business are two separate components within the Windows Hello for Business feature.
Since the policy only enables the ability for a user to activate the provisioning process to enroll for Windows Hello for Business, the policy becomes irrelevant after the user successfully provisions. Once a user is provisioned, they will be able to continue using the Windows Hello for Business PIN sign-in even when the policy has been set to disabled.
This behavior is expected and by design, as documented in the following published article: Manage Windows Hello in your organization - Windows Security | Microsoft Learn
However, by setting the policy to disabled, the user no longer has the ability to activate the provisioning process. The remove button under the Windows Hello PIN sign-in option is used to activate provisioning, which would allow the user to un-enroll for Windows Hello for Business. Therefore, the inability to select the remove button is also expected and by design in this configuration.
How will the PIN Sign-in be Removed if Provisioning is Disabled?
To disable Windows Hello for Business in this situation, the Windows Hello container will need to be deleted for the user. To do so, the user runs the following command under their own user context on each Windows computer on which they were provisioned prior to the policy being disabled:
certutil.exe -deleteHelloContainer
With the policy set to disabled, the user will no longer be able to activate the provisioning process on this or any other Windows computer going forward. We wouldn’t want the user to enroll for Windows Hello for Business again after we removed it, right?
I hope you found this information helpful in your understanding of Windows Hello for Business administration. Until next time.
Brent Crummey
Related Registry Keys
Computer registry - HKLM\SOFTWARE\Policies\Microsoft\PassportForWork
User registry - HKCU\SOFTWARE\Policies\Microsoft\PassportForWork
References
Windows Hello for Business Frequently Asked Questions (FAQ) - Windows Security | Microsoft Learn
Enterprise teams building AI agents often need to route model requests through their own infrastructure — whether for compliance, governance, or other controls provided by gateways. Today, we are excited to announce general availability of the Bring Your Own Model (BYOM) for Foundry Agent Service feature, letting you connect prompt agents to models hosted behind Azure API Management or any third-party AI model gateway.
This means you can build agents in Foundry while keeping full control over how and where model traffic flows.
BYOM support in Foundry Agent Service enables organizations to:
Setting up BYOM takes just two steps:
1. Create a model connection
In the Foundry portal, go to Operate > Admin, select your project's parent resource, and add a model connection under the Admin-connected models tab. Choose either Azure API Management or Other source as your connection type, configure authentication, and define one or more models.
You can also deploy connections programmatically using the Azure CLI with the Bicep templates in the Foundry samples repository.
2. Create a prompt agent
In the Foundry portal, go to Build > Agents, create a new agent, and pick a model added using the BYOM feature. Test the agent in the playground.
BYOM is built around a set of capabilities designed to fit enterprise model platforms:
BYOM for Foundry Agent Service is available today. Here's how to get started:
If you're already running agents in Foundry, adding a gateway connection does not require a re-architecture — just connect your gateway and configure your agent to use a newly added model.
Note: When you use a third-party model, you are directly responsible for implementing your own responsible AI mitigations, ensuring that your use satisfies your data handling requirements, and complying with the model’s license. You are also responsible for the use of such models, as their data handling practices may differ from Microsoft's standards.
Kubernetes v1.36 promotes the ability to modify container resource requests and limits in the pod template of a suspended Job to beta. First introduced as alpha in v1.35, this feature allows queue controllers and cluster administrators to adjust CPU, memory, GPU, and extended resource specifications on a Job while it is suspended, before it starts or resumes running.
Batch and machine learning workloads often have resource requirements that are not precisely known at Job creation time. The optimal resource allocation depends on current cluster capacity, queue priorities, and the availability of specialized hardware like GPUs.
Before this feature, resource requirements in a Job's pod template were immutable once set. If a queue controller like Kueue determined that a suspended Job should run with different resources, the only option was to delete and recreate the Job, losing any associated metadata, status, or history. This feature also provides a way to let a specific Job instance for a CronJob progress slowly with reduced resources, rather than outright failing to run if the cluster is heavily loaded.
Consider a machine learning training Job initially requesting 4 GPUs:
apiVersion: batch/v1
kind: Job
metadata:
name: training-job-example-abcd123
labels:
app.kubernetes.io/name: trainer
spec:
suspend: true
template:
metadata:
annotations:
kubernetes.io/description: "ML training, ID abcd123"
spec:
containers:
- name: trainer
image: example-registry.example.com/training:2026-04-23T150405.678
resources:
requests:
cpu: "8"
memory: "32Gi"
example-hardware-vendor.com/gpu: "4"
limits:
cpu: "8"
memory: "32Gi"
example-hardware-vendor.com/gpu: "4"
restartPolicy: Never
A queue controller managing cluster resources might determine that only 2 GPUs are available. With this feature, the controller can update the Job's resource requests before resuming it:
apiVersion: batch/v1
kind: Job
metadata:
name: training-job-example-abcd123
labels:
app.kubernetes.io/name: trainer
spec:
suspend: true
template:
metadata:
annotations:
kubernetes.io/description: "ML training, ID abcd123"
spec:
containers:
- name: trainer
image: example-registry.example.com/training:2026-04-23T150405.678
resources:
requests:
cpu: "4"
memory: "16Gi"
example-hardware-vendor.com/gpu: "2"
limits:
cpu: "4"
memory: "16Gi"
example-hardware-vendor.com/gpu: "2"
restartPolicy: Never
Once the resources are updated, the controller resumes the Job by setting
spec.suspend to false, and the new Pods are created with the adjusted
resource specifications.
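Concretely, a controller typically submits two patches: one updating the pod template resources while the Job is still suspended, then one flipping spec.suspend. Here is a minimal sketch of the patch bodies a controller might build for the manifest above; the helper names are illustrative (not part of any Kubernetes client library), and a real controller would submit them via a strategic merge patch against the Job object, where containers are merged by name.

```python
# Sketch: patch bodies a queue controller might send for a suspended Job.
# The first patch updates the "trainer" container's resources; the second
# resumes the Job once the resources are in place.

def build_resource_patch(cpu: str, memory: str, gpus: str) -> dict:
    """Patch the pod template resources of a suspended Job (illustrative)."""
    resources = {
        "requests": {"cpu": cpu, "memory": memory,
                     "example-hardware-vendor.com/gpu": gpus},
        "limits": {"cpu": cpu, "memory": memory,
                   "example-hardware-vendor.com/gpu": gpus},
    }
    return {
        "spec": {
            "template": {
                "spec": {
                    # In a strategic merge patch, list entries are merged
                    # by the container name.
                    "containers": [{"name": "trainer", "resources": resources}]
                }
            }
        }
    }

def build_resume_patch() -> dict:
    """Second patch: resume the Job after resources are updated."""
    return {"spec": {"suspend": False}}

resource_patch = build_resource_patch(cpu="4", memory="16Gi", gpus="2")
resume_patch = build_resume_patch()
```

The two-patch split mirrors the API rule described below: resource mutations are only accepted while the Job is suspended, so the resume patch must come second.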
The Kubernetes API server relaxes the immutability constraint on pod template resource fields specifically for suspended Jobs. No new API types have been introduced; the existing Job and pod template structures accommodate the change through relaxed validation.
The mutable fields are:
- spec.template.spec.containers[*].resources.requests
- spec.template.spec.containers[*].resources.limits
- spec.template.spec.initContainers[*].resources.requests
- spec.template.spec.initContainers[*].resources.limits

Resource updates are permitted when the following conditions are met:
- The Job must have spec.suspend set to true.
- The Job must have no active Pods (status.active equals 0) before resource mutations are accepted.

Standard resource validation still applies. For example, resource limits must be greater than or equal to requests, and extended resources must be specified as whole numbers where required.
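The admission conditions can be summarized as a simple predicate. The following is an illustrative restatement in Python, not the API server's actual validation code; the quantity parsing is deliberately simplified (bare numbers and "Gi" suffixes only) for the sketch.

```python
# Illustrative check mirroring the conditions above: a resource mutation on
# a Job is accepted only if the Job is suspended, has no active Pods, and
# each container's limits are >= its requests.

def _as_number(quantity: str) -> float:
    """Grossly simplified quantity parser for this sketch only."""
    if quantity.endswith("Gi"):
        return float(quantity[:-2])
    return float(quantity)

def mutation_allowed(suspend: bool, active_pods: int, resources: dict) -> bool:
    # The Job must be suspended with no active Pods.
    if not suspend or active_pods != 0:
        return False
    # Standard validation: each limit must be >= the matching request.
    requests = resources.get("requests", {})
    limits = resources.get("limits", {})
    for key, req in requests.items():
        if key in limits and _as_number(limits[key]) < _as_number(req):
            return False
    return True
```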
With the promotion to beta in Kubernetes v1.36, the
MutablePodResourcesForSuspendedJobs feature gate is enabled by default.
This means clusters running v1.36 can use this feature without any additional
configuration on the API server.
If your cluster is running Kubernetes v1.36 or later, this feature is available
by default. For v1.35 clusters, enable the MutablePodResourcesForSuspendedJobs
feature gate on
the kube-apiserver.
You can test it by creating a suspended Job, updating its container resources
using kubectl edit or a controller, and then resuming the Job:
# Create a suspended Job
kubectl apply -f my-job.yaml --server-side
# Edit the resource requests
kubectl edit job training-job-example-abcd123
# Resume the Job
kubectl patch job training-job-example-abcd123 -p '{"spec":{"suspend":false}}'
If you suspend a Job that was already running, you must wait for all of that Job's active
Pods to terminate before modifying resources. The API server rejects resource
mutations while status.active is greater than zero. This prevents inconsistency
between running Pods and the updated pod template.
When using this feature with Jobs that may have failed Pods, consider setting
podReplacementPolicy: Failed. This ensures that replacement Pods are only
created after the previous Pods have fully terminated, preventing resource
contention from overlapping Pods.
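For example, the replacement policy sits alongside suspend in the Job spec (pod template abbreviated from the manifest above):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: training-job-example-abcd123
spec:
  suspend: true
  podReplacementPolicy: Failed  # replacements only after Pods fully terminate
  template:
    # ... pod template as above ...
```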
Dynamic Resource Allocation (DRA) resourceClaimTemplates remain immutable.
If your workload uses DRA, you must recreate the claim templates separately
to match any resource changes.
This feature was developed by SIG Apps with input from WG Batch. Both groups welcome feedback as the feature progresses toward stable.
You can reach out through:

Today, we’re introducing a new way that people can pay for your auto-renewable subscriptions on the App Store: monthly subscriptions with a 12-month commitment. This new payment option allows you to offer subscribers more affordable pricing. People can cancel their subscription at any time; cancellation prevents the subscription from renewing once they’ve completed the payments required to fulfill their commitment.
To provide transparency, people can easily view the number of completed and remaining payments for the subscription in their Apple Account. Apple will also send email and, if opted in, push notifications ahead of their renewal date to remind them of their upcoming purchase.
Starting today, you can configure this type of subscription in App Store Connect and test it in Xcode. With the exception of the United States and Singapore, monthly subscriptions with a 12-month commitment will be available worldwide to people on iOS 26.4, iPadOS 26.4, macOS Tahoe 26.4, and visionOS 26.4, or later, with the release of iOS 26.5, iPadOS 26.5, macOS Tahoe 26.5, and visionOS 26.5 in May.