Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

ADK Go 1.0 Arrives!

1 Share
The launch of Agent Development Kit (ADK) for Go 1.0 marks a significant shift from experimental AI scripts to production-ready services by prioritizing observability, security, and extensibility. Key updates include native OpenTelemetry integration for deep tracing, a new plugin system for self-healing logic, and "Human-in-the-Loop" confirmations to ensure safety during sensitive operations. Additionally, the release introduces YAML-based configurations for rapid iteration and refined Agent2Agent (A2A) protocols to support seamless communication across different programming languages. This framework empowers developers to build complex, reliable multi-agent systems using the high-performance engineering standards of Golang.
Read the whole story
alvinashcraft
46 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Run and Iterate on LLMs Faster with Docker Model Runner on DGX Station

1 Share

Back in October, we showed how Docker Model Runner on the NVIDIA DGX Spark makes it remarkably easy to run large AI models locally with the same familiar Docker experience developers already trust. That post struck a chord: hundreds of developers discovered that a compact desktop system paired with Docker Model Runner could replace complex GPU setups and cloud API calls.

At NVIDIA GTC 2026, NVIDIA raised the bar with the NVIDIA DGX Station, and we’re excited to add support for it in Docker Model Runner! The new DGX Station brings serious performance, and Model Runner helps make it practical to use day to day. With Model Runner, you can run and iterate on larger models on a DGX Station using the same intuitive Docker experience you already know and trust.

From NVIDIA DGX Spark to DGX Station: What has changed and why does this matter?

NVIDIA DGX Spark, powered by the GB10 Grace Blackwell Superchip, gave developers 128GB of unified memory and petaflop-class AI performance in a compact form factor. A fantastic entry point for running models.

NVIDIA DGX Station is a different beast entirely. Built around the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, it connects a 72-core NVIDIA Grace CPU and NVIDIA Blackwell Ultra GPU through NVIDIA NVLink-C2C, creating a unified, high-bandwidth architecture built for frontier AI workloads. It brings data-center-class performance to a deskside form factor. Here are the headline specs:

|  | DGX Spark (GB10) | DGX Station (GB300) |
| --- | --- | --- |
| GPU Memory | 128 GB unified | 252 GB |
| GPU Memory Bandwidth | 273 GB/s | 7.1 TB/s |
| Total Coherent Memory | 128 GB | 748 GB |
| Networking | 200 Gb/s | 800 Gb/s |
| GPU Architecture | Blackwell (5th-gen Tensor Cores, FP4) | Blackwell Ultra (5th-gen Tensor Cores, FP4) |

With 252GB of GPU memory at 7.1 TB/s of bandwidth and a total of 748GB of coherent memory, the DGX Station doesn’t just let you run frontier models; it lets you run trillion-parameter models, fine-tune massive architectures, and serve multiple models simultaneously, all from your desk.

Here’s what 748GB of coherent memory and 7.1 TB/s of bandwidth unlock in practice:

  • Run the largest open models. DGX Station can run the largest open 1T-parameter models with quantization.
  • Serve a team, not just yourself. NVIDIA Multi-Instance GPU (MIG) technology lets you partition NVIDIA Blackwell Ultra GPUs into up to seven isolated instances. Combined with Docker Model Runner’s containerized architecture, a single DGX Station can serve as a shared AI development node for an entire team — each member getting their own sandboxed model endpoint.
  • Faster iteration on agentic workflows. Agentic AI pipelines often require multiple models running concurrently — a reasoning model, a code generation model, a vision model. With 7.1 TB/s of memory bandwidth, switching between and serving these models is dramatically faster than anything a desktop system has offered before.
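Pipelines like these typically talk to each local model through the OpenAI-compatible API that Model Runner exposes. Here is a minimal Go sketch of building such a request; note that the endpoint URL and model name below are assumptions for illustration, so check the Model Runner documentation for the address and models your setup actually exposes.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest is the minimal OpenAI-compatible payload most local runners accept.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// buildChat builds (but does not send) a chat-completion request against an
// OpenAI-compatible endpoint. baseURL and model are caller-supplied.
func buildChat(baseURL, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []message{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	// Hypothetical local endpoint and model name -- adjust for your setup.
	req, err := buildChat("http://localhost:12434/engines", "ai/qwen3", "Summarize this diff.")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

Because each model is just another HTTP endpoint, a reasoning model, a code model, and a vision model can be addressed side by side from the same client code.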

Bottom line: The DGX Spark made running models locally fast. The DGX Station makes it transformative. And raw hardware is only half the story. With Docker Model Runner, the setup stays effortless and the developer experience stays smooth, no matter how powerful the machine underneath becomes.

Getting Started: It’s the Same Docker Experience

For the full step-by-step walkthrough, check out our guide for DGX Spark. Every instruction applies to the DGX Station as well.

NVIDIA’s new DGX Station puts data-center-class AI on your desk with 252GB of GPU memory, 7.1 TB/s bandwidth, and 748GB of total coherent memory. Docker Model Runner makes all of that power accessible with the same familiar commands developers already use on the DGX Spark. Pull a trillion-parameter model, serve a whole team, and iterate on agentic workflows. No cloud required, no new tools to learn.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. To get involved:

  • Contribute your ideas: Create an issue or submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends and colleagues who might be interested in running AI models with Docker.

Learn More


From Custom to Open: Scalable Network Probing and HTTP/3 Readiness with Prometheus

1 Share

The Problem: Legacy Tooling and Its Limitations

Currently, Slack utilizes a hybrid approach to network measurement, incorporating both internal (such as traffic between AWS Availability Zones) and external (monitoring traffic from the public internet into Slack’s infrastructure) solutions. These tools comprise a combination of commercial SaaS offerings and custom-built network testing solutions developed by our internal teams over time. This was sufficient for our needs.

When we began rolling out HTTP/3 support on the edge, we encountered a significant challenge: a lack of client-side observability.

Since HTTP/3 is built on top of the QUIC transport protocol, it uses UDP instead of the traditional TCP. This fundamental shift to a new transport meant that existing monitoring tools and SaaS solutions were not capable of probing our new HTTP/3 endpoints for metrics.

At that time, there was a major gap in the market:

  • None of the SaaS observability tools we investigated supported HTTP/3 probing out of the box.
  • Our internal Prometheus Blackbox Exporter (BBE), a cornerstone of our monitoring, didn’t have native support for QUIC.

Without the ability to probe hundreds of thousands of HTTP/3 endpoints in our new infrastructure, we couldn’t get the client-side visibility we needed to detect regressions to HTTP/2 or take accurate round-trip measurements.

The Intern Who Made It Happen

The Open Source Contribution  

Our intern, Sebastian Feliciano, scoped, implemented, and ultimately open-sourced QUIC support for Prometheus BBE.

Choosing the Right HTTP Client: The first step was selecting a QUIC-capable HTTP client. After careful consideration, he chose quic-go as the foundation for the new functionality, based on its wide adoption across other open source projects and the first-class support it provides for creating HTTP clients in Go.

Here’s how Sebastian integrated quic-go into BBE’s HTTP client:

// tlsConfig carries the probe's existing TLS settings; http3 and quic are
// packages from the github.com/quic-go/quic-go module.
http3Transport := &http3.Transport{
    TLSClientConfig: tlsConfig,
    QUICConfig:      &quic.Config{},
}

// A standard net/http client, now backed by the HTTP/3 transport.
client = &http.Client{
    Transport: http3Transport,
}

Maintaining Composability: Sebastian had to add this new logic while following the Blackbox Exporter’s existing architecture, ensuring the new features maintained the tool’s configuration patterns. 

The result of this work was a functional and configurable HTTP/3 probe within Prometheus, and by open-sourcing his contribution, he provided a solution the entire Prometheus community could use. By following existing patterns and earning community buy-in, Sebastian successfully landed the HTTP/3 feature.


Final Step: Integration  

Making an open-source contribution as an intern is a huge accomplishment. As many of us know, maintainers don’t always merge PRs quickly, especially for new features. Sebastian’s internship timeline was limited, so he couldn’t wait. He took matters into his own hands and architected an in-house system that used the new upstream features to probe our HTTP/3 endpoints.

Operational Improvements

Single Pane of Glass: We now have a unified view of HTTP/1.1, HTTP/2, and HTTP/3 metrics in Grafana, allowing for easier correlation with other telemetry and comparison across protocols.

Better and More Reliable Alerts: With the new probes, we can create more reliable alerts on the health and performance of our HTTP/3 endpoints.

Easier Correlation: Having all our data in one place makes it easier to correlate HTTP/3 performance with other metrics and debug issues faster.

The Open Source Win

Community Benefit: This contribution benefits the wider Prometheus community, helping other organizations facing the same challenges with HTTP/3 adoption. By building this support, we have future-proofed our observability for the ongoing adoption of QUIC and HTTP/3.

Looking Ahead

While this is a major step, our work isn’t done. Future improvements could be made through adding advanced features, such as:

  • Server Name Indication (SNI) routing tests
    • Validating that the SNI extension is correctly handled by our edge infrastructure. This ensures that when a client requests a specific hostname over a shared IP (like a CDN or a multi-tenant load balancer), the gateway correctly routes the traffic to the intended backend and serves the matching SSL certificate, preventing misrouting errors.
  • End-to-end path visualization
    • Moving beyond simple “up/down” checks by mapping the entire network hop by hop from the monitoring agent to the service endpoint. This provides a visual representation of the network path, making it possible to pinpoint exactly where latency spikes occur or packets are lost.

We invite others in the community to try out this new QUIC support in Prometheus Blackbox Exporter and join us in building the next generation of observability tools. You can find the HTTP/3 configuration in the configuration documentation in the Prometheus Blackbox Exporter repository.

Conclusion

There were a few takeaways from this project:

1. Monitor first, and migrate second

This should go without saying, but getting observability right as a precursor to migration makes everything faster. We know that the industry is moving toward QUIC, but proving to ourselves that it’s the right move long term enables us to invest more into its future.

2. Contributing open source pays dividends

It feels good to give back to the open source communities that provide us so much. When a game-changing protocol like QUIC comes along and there’s a gap in existing technologies supporting it, everyone wins when we fill the gap, and we win when everyone decides to support it long term.

3. Bet on your interns

We were incredibly fortunate to have landed Sebastian as an intern for our team. His proactiveness and creativity in problem solving helped us push the QUIC migration across the line, and gave us tangible exposure to the benefits of black-box monitoring.

This journey from having an observability gap to an open-sourced solution perfectly illustrates our commitment to simplicity and scalability. As HTTP/3 adoption grows industry-wide, we’re committed to keeping our monitoring tools ahead of the curve. We welcome community feedback and contributions to help evolve these capabilities further.

Interested in taking on interesting projects, making people’s work lives easier, or just building some pretty cool forms? We’re hiring! 💼

Apply now

Docker Sandboxes: Run Agents in YOLO Mode, Safely

1 Share

Agents have crossed a threshold.

Over a quarter of all production code is now AI-authored, and developers who use agents are merging roughly 60% more pull requests. But these gains only come when you let agents run autonomously. And to unlock that, you have to get out of the way. That means letting agents run without stopping to ask permission at every step, often called YOLO mode.

Doing that on your own machine is risky. An autonomous agent can access files or directories you did not intend for it to touch, read sensitive data, execute destructive commands, or make broad changes while trying to help.

So yes, guardrails matter, but only when they’re enforced outside the agent, not by it. Agents need a true bounding box: constraints defined before execution and clear limits on what they can access and execute. Inside that box, the agent should be able to move fast.

That’s exactly what Docker Sandboxes provide.

They let you run agents in fully autonomous mode with a boundary you define. And Docker Sandboxes are standalone; you don’t need Docker Desktop. That dramatically expands who can use them. For the newest class of builder, whether you’re just getting started with agents or building advanced workflows, you can run them safely from day one.

Docker Sandboxes work out of the box with today’s coding agents like Claude Code, GitHub Copilot CLI, OpenCode, Gemini CLI, Codex, Docker Agent, and Kiro. They also make it practical to run next-generation autonomous systems like NanoClaw and OpenClaw locally, without needing dedicated hardware like a Mac mini.

Here’s what Docker Sandboxes unlock.

You Actually Get the Productivity Agents Promise

The difference between a cautious agent and a fully autonomous one isn’t just speed. The interaction model changes entirely. In a constrained setup, you become the bottleneck: approving actions instead of deciding what to build next. In a sandbox, you give direction, step away, and come back to a cloned repo, passing tests, and an open pull request. No interruptions. That’s what a real boundary makes possible.

You Stop Worrying About Damage

Running an agent directly on your machine exposes everything it can reach. Mistakes are not hypothetical. Commands like rm -rf, accidental exposure of environment variables, or unintended edits to directories like .ssh can all happen.

Docker Sandboxes offer the strongest isolation environments for autonomous agents. Under the hood, each sandbox runs in its own lightweight microVM, built for strong isolation without sacrificing speed. There is no shared state, no unintended access, and no bleed-through between environments. Environments spin up in seconds (now, even on Windows), run the task, and disappear just as quickly. 

Other approaches introduce tradeoffs. Mounting the Docker socket exposes the host daemon. Docker-in-Docker relies on privileged access. Running directly on the host provides almost no isolation. A microVM-based approach avoids these issues by design. 

Run Any Agent

Docker Sandboxes are fully standalone and work with the tools developers already use, including Claude Code, Codex, GitHub Copilot, Docker Agent, Gemini, and Kiro. They also support emerging autonomous systems like OpenClaw and NanoClaw. There is no new workflow to adopt. Agents continue to open ports, access secrets, and execute multi-step tasks. The only difference is the environment they run in. Each sandbox can be inspected and interacted with through a terminal interface, so you always have visibility into what the agent is doing.

What Teams Are Saying

“Every team is about to have their own team of AI agents doing real work for them. The question is whether it can happen safely. Sandboxes is what that looks like at the infrastructure level.”
— Gavriel Cohen, Creator of NanoClaw

“Docker Sandboxes let agents have the autonomy to do long-running tasks without compromising safety.”
— Ben Navetta, Engineering Lead, Warp

Start in Seconds

For macOS: brew install docker/tap/sbx

For Windows: winget install Docker.sbx

Read the docs to learn more, or get in touch if you’re deploying for a team. If you’re already using Docker Desktop, the new Sandboxes experience is coming there soon. Stay tuned.

What’s Next

You already trust Docker to build, ship, and run your software. Sandboxes extend that trust to agents, giving them room to operate without giving them access to everything.

Autonomous agents are becoming more capable. The limiting factor is no longer what they can do, but whether you can safely let them do it.

Sandboxes make that possible.






Claude Code's Source Code Leaks Via npm Source Maps

1 Share
Grady Martin writes: A security researcher has leaked a complete repository of source code for Anthropic's flagship command-line tool. The file listing was exposed via a Node Package Manager (npm) mapping, with every target publicly accessible on a Cloudflare R2 storage bucket. There have been a number of discoveries as people continue to pore over the code. The DEV Community outlines some of the leak's most notable architectural elements and the key technical choices:

Architecture Highlights

  • The Tool System (~40 tools): Claude Code uses a plugin-like tool architecture. Each capability (file read, bash execution, web fetch, LSP integration) is a discrete, permission-gated tool. The base tool definition alone is 29,000 lines of TypeScript.
  • The Query Engine (46K lines): This is the brain of the operation. It handles all LLM API calls, streaming, caching, and orchestration. It's by far the largest single module in the codebase.
  • Multi-Agent Orchestration: Claude Code can spawn sub-agents (they call them "swarms") to handle complex, parallelizable tasks. Each agent runs in its own context with specific tool permissions.
  • IDE Bridge System: A bidirectional communication layer connects IDE extensions (VS Code, JetBrains) to the CLI via JWT-authenticated channels. This is how the "Claude in your editor" experience works.
  • Persistent Memory System: A file-based memory directory where Claude stores context about you, your project, and your preferences across sessions.

Key Technical Decisions Worth Noting

  • Bun over Node: They chose Bun as the JavaScript runtime, leveraging its dead code elimination for feature flags and its faster startup times.
  • React for CLI: Using Ink (React for terminals) is bold. It means their terminal UI is component-based with state management, just like a web app.
  • Zod v4 for validation: Schema validation is everywhere. Every tool input, every API response, every config file.
  • ~50 slash commands: From /commit to /review-pr to memory management -- there's a command system as rich as any IDE.
  • Lazy-loaded modules: Heavy dependencies like OpenTelemetry and gRPC are lazy-loaded to keep startup fast.

Read more of this story at Slashdot.


The threat to critical infrastructure has changed. Has your readiness?

1 Share

Critical infrastructure (CI) organizations underpin national security, public safety, and the economy. In 2026, the cyber threat landscape facing these sectors is structurally different than it was even two years ago. What Microsoft Threat Intelligence is observing across critical infrastructure environments right now is not a forecast. It is already happening. Threat actors are no longer focused solely on data theft or opportunistic disruption. They are establishing persistent access, footholds they can sit in quietly, undetected, and activate at the moment of maximum disruption. That is the threat CI leaders need to be preparing for today. Not someday. Now.

Given these rising threats, governments worldwide are advancing policies and regulations to require critical infrastructure organizations to prioritize continuous readiness and proactive defense. The regulatory trajectory is clear. The U.S. National Cybersecurity Strategy published in March 2023 explicitly frames cybersecurity of critical infrastructure as a national security imperative. Japan issued a basic policy to implement the Active Cyber Defense legislation in 2025. Europe continues to implement the NIS2 Directive across the essential sectors. And Canada is advancing a more prescriptive approach to critical infrastructure security through Bill C8.

What Microsoft Threat Intelligence hears from law enforcement agencies reinforces what we observe in our own telemetry. For example, Operation Winter SHIELD is a joint initiative led by the FBI Cyber Division focused on helping CI organizations move from awareness to verified readiness. Implementation, not just awareness and not just policy, is what closes the gap between knowing you are a target and being ready when it matters.

The water sector offers a clear illustration of what that implementation gap looks like in practice and what it takes to close it. The findings from Microsoft, released on March 19, 2026, in collaboration with the Cyber Readiness Institute and the Center on Cyber Technology and Innovation show that hands-on coaching paired with practical training materially improves cyber readiness in water and wastewater utilities in ways that guidance alone does not. When attacks succeed, communities face safety concerns, loss of trust, and service disruptions. That is not an abstraction. That is what is at stake across every CI sector.

To say that the environments CI organizations are defending today were not designed for the threats they face is an understatement. Legacy systems now operate within hybrid IT–OT environments connected by cloud-based identity, remote access, and complex vendor ecosystems that did not exist when those systems were built. Identity has become the central control layer across all of it. Microsoft Threat Intelligence and Incident Response investigations show a convergence of identity-driven intrusion, living-off-the-land (LOTL) persistence, and nation-state prepositioning across CI. Against this backdrop, five facts define the resilience priorities CI leaders must address in 2026.

Explore CI readiness resources

Five critical threat realities

Five facts CI leaders can’t ignore

Today’s threat landscape reflects five structural realities: identity as the primary entry point, hybrid IT–OT architecture expanding attacker reach, nation-state pre-positioning as an ongoing concern, preventable exposure continuing to drive intrusions, and a shift from data compromise to operational disruption. Together, these dynamics are reshaping critical infrastructure resilience in 2026.

1. Identity is the dominant attack pathway into CI environments

Identity is where we see attackers start, almost every time. In CI environments, identity bridges enterprise IT and operational technology, making it the primary attack path. More than 97% of identity-based attacks target password-based authentication, most commonly through password spray or brute force techniques. As identity systems centralize access to cloud and operational assets, adversaries rely on LOTL techniques and legitimate credentials to evade detection. Because identity now governs access across these connected domains, a single compromised account can provide privileged reach into operationally relevant systems.
 

 97% of identity-based attacks target password-based authentication.

2. Cloud and hybrid environments expand operational risk

The cloud did not just change how CI organizations operate. It changed how attackers get in and how far they can go. Cloud and hybrid incidents increased 26% in early 2025 as identity, automation, and remote management converged within cloud control planes. Microsoft research shows 18% of intrusions originate from web-facing assets, 12% from exposed remote services, and 3% from supply chain pathways. As long-lived OT systems depend on cloud-based identity and centralized remote access, identity compromise can extend beyond IT into operational environments. Incidents that once remained contained within IT environments can now extend directly into operational systems. For CI operators, this means cloud and hybrid architecture now directly influence operational resilience—not just IT security.

18% of cyber intrusions originate from web-facing assets

3. Nation-state prepositioning is a strategic reality

This is the one that keeps me up at night. Nation-state operators are actively maintaining long-term, low-visibility access inside U.S. critical infrastructure environments. Microsoft and the Cybersecurity and Infrastructure Security Agency (CISA) have documented campaigns attributed to Volt Typhoon, a PRC state-sponsored actor, in which intruders relied on valid credentials and built-in administrative tools rather than custom malware to evade detection across sectors. Using LOTL techniques and legitimate accounts, these actors embed within routine operations and persist where IT and OT visibility gaps exist. CISA Advisory AA24-038A warns that PRC state-sponsored actors are maintaining persistent access to U.S. critical infrastructure that could be activated during a future crisis. For security leaders, this represents sustained, deliberate positioning inside operational environments and underscores how adversaries shape conditions for future leverage.

 PRC-sponsored cyber actors targeting U.S. critical infrastructure.

4. Exposure and misconfiguration enable initial access

Most of what Microsoft sees in our investigations is not sophisticated. It is preventable. Most intrusions into critical infrastructure begin with preventable exposure rather than advanced exploits. Internet-facing VPNs left enabled too long, contractor identities that outlive project timelines, misconfigured cloud tenants, and dormant privileged accounts create quiet, low-effort entry points. Microsoft research shows that 12% of intrusions originate from exposed remote services. Over time, configuration drift and unmanaged access expand the attack surface, allowing adversaries to gain initial access before persistence or lateral movement is required. Reducing unnecessary exposure remains one of the highest-leverage risk-reduction actions available to CI operators.

12% of cyber intrusions originate from exposed remote services

5. Operational impact is increasing

The goal has shifted. Attackers are no longer just trying to steal data. They are trying to take things offline. Operational disruption is becoming a primary objective, not a secondary outcome. Attack campaigns surged 87% in early 2025, alongside increased destructive cloud activity and hands-on-keyboard operations targeting critical infrastructure. Identity systems, cloud control planes, and remote management layers are targeted because they provide direct operational leverage. For CI operators, the impact extends beyond data loss to service availability and physical processes. Organizations must ensure operational pathways are resilient against disruptive activity, not only monitored for signs of compromise.

Destructive cyber campaigns increased by 87% in early 2025.

Common attack patterns

Scenario patterns observed in CI environments

These are not hypothetical. They are patterns we see repeatedly in incident response engagements across sectors. The actors may vary. The access pathways do not.

Continuous Readiness approach

Four reinforcing pillars of continuous readiness

Point-in-time hardening does not work against attackers who are playing a long game. In hybrid IT–OT environments, resilience requires sustained practices, not one-time fixes. CI leaders need a continuous approach that strengthens identity, reduces exposure, increases cross-domain visibility, and ensures effective response. Microsoft’s work across critical infrastructure environments consistently highlights four reinforcing pillars:

Readiness validation

Why continuous readiness works

Continuous readiness is most effective when it is grounded in integrated visibility across identity, endpoint, and cloud environments, particularly in hybrid IT–OT architectures common to critical infrastructure. Microsoft’s telemetry enables investigators to correlate activity across these domains, surfacing patterns that isolated tools may miss. CI-informed playbooks, shaped by incident response engagements across sectors, help organizations prioritize the pathways most likely to affect operations. In practice, readiness engagements frequently uncover active or dormant compromise, reinforcing the importance of validating resilience before disruption occurs. For CI leaders, this visibility and correlation are especially critical given the operational consequences of undetected identity misuse or cross‑domain movement.
 

Because adversaries prioritize quiet, long-term access rather than immediate disruption, many organizations only discover exposure after operations are impacted—unless readiness is actively validated.

Next steps

Take action: Validate resilience before it’s tested

Here is what every CI leader reading this should ask themselves: have threat actors already established the access they need, and how would I know?

Operational resilience depends on verified assurance, not assumptions. Security leaders must confirm that identity pathways are hardened, exposure is reduced, and adversaries have not established durable footholds. A proactive compromise assessment delivered by Microsoft Incident Response can determine whether adversaries are already present—active or dormant—and help close high-risk gaps before disruption occurs.


For more information, read our blog post, Explore the latest Microsoft Incident Response proactive services for enhanced resilience, or access the CI readiness resources.


Contact your Microsoft representative to schedule a proactive compromise assessment and validate your resilience posture.

Explore resources for CI readiness

The post The threat to critical infrastructure has changed. Has your readiness? appeared first on Microsoft Security Blog.
