Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

GDC 2026: Announcing new tools and platform updates for Windows PC game developers

Our focus is clear: making Windows 11 the best place for game developers to create, experiment, ship and scale. Windows is an open, flexible platform that supports choice across engines, tools, hardware and distribution models. That commitment is rooted in close collaboration across our ecosystem, including partners like AMD, Intel, NVIDIA and Qualcomm, and in our deep partnership with Xbox, driving innovation with the development community to shape the future of gaming. This year at GDC, we’re introducing new Windows 11 platform updates and tools designed to deliver faster load times, smoother gameplay and a strong foundation for Windows ML-enhanced graphics.

What’s new at GDC:

  • Beginning in April, Xbox mode will be generally available on all Windows 11 PC form factors, rolling out to users in select markets
  • Faster load times and streaming performance enabled with Advanced Shader Delivery available to more games, including new self-enablement, and Zstandard support in DirectStorage coming soon
  • Improved developer experience with DirectX Dump Files and additional PIX improvements, including Shader Explorer, and more
  • New linear algebra capabilities to accelerate inference in shaders, with a preview of WinML models in graphics workloads coming soon

Xbox mode available on all Windows 11 form factors in April

Starting in April, Xbox mode will roll out to users in select markets on all Windows 11 PC form factors, including laptops, desktops and tablets, bringing the experience to a broader set of devices. Xbox mode makes it easier for players to jump into a streamlined, full‑screen, dedicated gaming experience whenever they want to lean back and play. It delivers a controller-optimized experience on your Windows 11 device, letting players browse their library, launch games, use Game Bar and switch between apps. Designed to keep players immersed, the experience features a clean, distraction-free interface while still giving them the flexibility to seamlessly switch back to the Windows desktop at any time.

Advanced Shader Delivery (ASD): Reducing shader stutter at scale

Last year, we introduced Advanced Shader Delivery support for the ROG Xbox Ally handheld, a new approach designed to enable faster startup times and smoother performance. At GDC 2026, we’re expanding ASD to all game developers, allowing them to future-proof their titles and self-enable support through the Xbox Store. Testing is currently underway for the new workflow, with trials for third-party studios expected to begin in May. With new API-level support now available in the DirectX Agility SDK, developers can deterministically collect and package shaders as part of their build process. When games are uploaded for publication, the Xbox Partner Center can ingest these shader packages so that supported devices automatically detect and deliver the ASD experience to gamers. This update represents a foundational shift in how PC games handle shaders at scale, bringing more predictable performance, faster startup times for players and reduced stutter on the first run of a game. To learn more, join us for the Advanced Shader Delivery on Windows session at GDC on Thursday, March 12, from 3:40 p.m. – 4:40 p.m. in Room 2011, West Hall.

DirectStorage: Faster asset streaming and storage

We’re continuing to advance DirectStorage to help developers fully unlock modern NVMe hardware on Windows and build richer, more responsive worlds. Today, we’re introducing support for Zstandard compression and a new tool, the Game Asset Conditioning Library, improving compression efficiency while simplifying asset conditioning across production pipelines. Expanded high‑throughput streaming scenarios reduce I/O latency and increase throughput for data‑heavy environments, without adding complexity to existing workflows. Together, these improvements make it easier to stream larger assets faster, reduce load times and deliver more responsive gameplay at scale. To learn more, join us for the DirectX State of the Union 2026: DirectStorage and Beyond session at GDC on Wednesday, March 11, from 11:30 a.m. – 12:30 p.m. in Room 3001, West Hall.

Evolving DirectX for the ML era

Machine learning is becoming a core part of real‑time graphics, and DirectX is evolving to support the next generation of ML‑driven rendering on Windows. We’re introducing new capabilities that make it easier for developers to bring neural techniques into their graphics pipelines, starting with linear algebra support in HLSL to unlock hardware‑accelerated ML operations directly in shaders. We will also give a preview of advances in Windows ML that will enable game developers to bring their own models directly into gameplay for a new paradigm of immersive experiences, reducing the need for hand‑authored shader logic. Together, these investments lay the foundation for scalable, AI‑driven graphics pipelines, extending what’s possible on Windows 11 today while preserving the flexibility developers expect from DirectX. To learn more, join us for the Evolving DirectX for the ML Era on Windows session at GDC on Thursday, March 12, from 12:45 p.m. - 1:45 p.m. in Room 2024, West Hall.

New DirectX and PIX tooling updates

We’re bringing the best of console graphics debugging to PC with the largest wave of new DirectX and PIX tooling features in more than a decade.
  • DirectX Dump Files – A new standardized way to capture GPU crash and state data, with first‑class PIX support and programmatic access
  • DebugBreak() in HLSL – New shader‑level breakpoints that enable faster debugging and iteration
  • Shader Explorer – A new way to inspect, understand and debug compiled shaders, with deeper live analysis coming later this year
We’re also delivering additional PIX improvements, including a Tile Mappings Viewer and hardware-specific GPU counters in the System Monitor view, to make debugging and profiling DirectX applications easier. Most of these features will be available in preview starting May 2026, with broader availability later in the year. To learn more, join us for the DirectX: Bringing Console-Level GPU Tools to Windows session at GDC on Thursday, March 12, from 11:30 a.m. – 12:30 p.m. in Room 2020, West Hall.

See you at GDC

From faster asset streaming and smoother shader compilation to new debugging tools and streamlined publishing paths, these updates are shaped by direct feedback from studios building and shipping games at scale. At GDC 2026, we are going deeper on how to put these capabilities to work in real production environments. Join our Windows and DirectX sessions to hear from teams adopting these technologies in shipping titles, connect with the engineers building them and get practical guidance you can apply today. Together, we’re shaping the future of PC game development on Windows. We’re excited to share what’s next, and we look forward to seeing you at GDC.
Session schedule (all times PDT):

  • Xbox Developer Summit Keynote: Building for the Future with Xbox
    Wednesday, March 11 — 10:10 a.m. – 11:10 a.m.; Room 3001/3003, West Hall
    Join us for a conversation about the vision shaping the future of Xbox and how we're building a more flexible, connected future for game creators and players everywhere.
  • DirectX State of the Union 2026: DirectStorage and Beyond
    Wednesday, March 11 — 11:30 a.m. – 12:30 p.m.; Room 3001/3003, West Hall
    An update on the state of DirectX, including the latest advances in DirectStorage, compression, HLSL and the broader graphics roadmap on Windows.
  • Windows Game Development and Visual Studio 2026
    Thursday, March 12 — 10:10 a.m. – 11:10 a.m.; Room 2009, West Hall
    Explore what's new for game development on Windows — from setting up repeatable dev environments to everyday productivity gains with PowerToys. Then dive into Visual Studio 2026 with faster C++ builds, agentic code editing and large-scale refactoring — all designed to help teams iterate faster and ship smarter.
  • DirectX: Bringing Console‑Level GPU Tools to Windows
    Thursday, March 12 — 11:30 a.m. – 12:30 p.m.; Room 2020/2022, West Hall
    Join us for a deep dive into how PIX on Windows is getting closer to console‑parity GPU debugging through new unified crash analysis and other cross‑vendor tooling.
  • Evolving DirectX for the ML Era on Windows
    Thursday, March 12 — 12:45 p.m. – 1:45 p.m.; Room 2024, West Hall
    Learn how DirectX is evolving to support neural rendering and AI‑powered graphics, including Linear Algebra in HLSL and early work toward model‑level ML integration in game engines.
  • Advanced Shader Delivery on Windows
    Thursday, March 12 — 3:40 p.m. – 4:40 p.m.; Room 2011, West Hall
    Discover a new approach to reducing shader compilation stutter on PC by distributing precompiled shaders through storefronts — improving startup times and in‑game performance.

Designing AI agents to resist prompt injection

How ChatGPT defends against prompt injection and social engineering by constraining risky actions and protecting sensitive data in agent workflows.

From model to agent: Equipping the Responses API with a computer environment

How OpenAI built an agent runtime using the Responses API, shell tool, and hosted containers to run secure, scalable agents with files, tools, and state.

Over 30 new plugins join the Cursor Marketplace

Extend Cursor with prebuilt capabilities in our marketplace.

A roadmap for safer generative AI for young people

Adapted remarks from Google VP Christy Abizaid’s keynote at the "Growing Up in the Digital Age" Summit at Google Dublin.

What OpenClaw Reveals About the Next Phase of AI Agents


In November 2025, Austrian developer Peter Steinberger published a weekend project called Clawdbot. You could text it on Telegram or WhatsApp, and it would do things for you: manage your calendar, triage your email, run scripts, and even browse the web. By late January 2026, it had exploded. It gained 25,000 GitHub stars in a single day and surpassed React’s star count within two months, a milestone that took React over a decade. By mid-February, Steinberger had joined OpenAI, and the project moved to an open-source foundation under its final name: OpenClaw.

What was so special about OpenClaw? Why did this one take off when so many agent projects before it didn’t?

Autonomous AI isn’t new

Where we are today feels similar to April 2023, when AutoGPT hit the scene with the same GitHub trajectory and the same promise of autonomous AI. Reality hit quickly: agents got stuck in loops, hallucinated constantly, and racked up token costs, and it didn’t take long for people to walk away.

OpenClaw has one critical advantage: the models have gotten better. Recent LLMs like Claude Opus 4.6 and GPT-5.4 can chain tools together, recover from errors, and plan multi-step strategies. Steinberger’s weekend project benefited from timing as much as design.

The architecture is intentionally simple. There are no vector databases and no multi-agent orchestration frameworks. Persistent memory is Markdown files on disk. Let me repeat that: persistent memory is Markdown files on disk! The agent can read yesterday’s notes and search its own files for additional context. You can view and edit the agent’s files as needed. There’s a useful lesson in that: not every agent system needs a complex memory strategy. It’s more important that you understand what the agent is doing and that it retains context across runs.

What fascinates me about OpenClaw is that none of the individual pieces are new. Persistent memory across sessions? We’ve been building that for years. Cron jobs to trigger agent actions on a schedule? Decades-old infrastructure. Plugin systems for extensibility? A very standard pattern. Webhooks into WhatsApp and Telegram? There are well-documented APIs for that. What Steinberger did was wire them together at the exact moment the underlying models could execute on multi-step plans. The combination created something that felt quite different from anything that had come before!

Why this time feels different

OpenClaw nailed three things that previous agent projects missed: proximity, proactivity, and extensibility.

Proximity—it lives where you already are every day. OpenClaw connects to WhatsApp, Slack, Discord, Telegram, and Signal. That single design decision changed its trajectory. The agent becomes an active participant in your workflow. People use it to manage their sales pipeline, automate emails, and kick off code reviews from their phones.

Next, it’s proactive. OpenClaw doesn’t wait for you to ask; it uses cron jobs to run tasks on a set schedule. It can check your email every day at 6 AM, draft a reply before you wake up, and even send it for you! And it reaches out when anything needs your attention. Agents become part of everyday life when integrated into familiar channels.

And finally, my favorite, it’s open and extensible. OpenClaw’s plugin system, called “skills”, lets the community build and share modular extensions on ClawHub. There are thousands of skills ready to be plugged into your agent. Agents can even write their own new skills and use them going forward. That extensibility meant more skills, more users, and more attack surfaces, which we’ll get to.

The community ran with it. A social network exclusively for AI Agents, Moltbook, launched in late January and grew to over 1.5 million agent accounts. One agent created a dating profile for its owner on MoltMatch and started screening matches without being asked.

I’ll admit, I got swept up in it, but that’s not surprising; I’ve always been an early adopter of emerging technology. I bought a Mac Mini, installed OpenClaw, and connected it to my JIRA, AWS, and GitHub accounts. In no time, I had my agent, Jarvis, writing code and submitting PRs, running my daily standups, and deploying my code to AWS using AWS CloudFormation and the AWS CLI.

I spent a lot of time binding the gateway to localhost, auditing every skill, and restricting file system permissions. For me, hardening the setup was not optional. I’m now deploying via AWS Lightsail, which adds network isolation and managed security layers that are hard to replicate on a Mac Mini in your home office.

The security problem no one wants to talk about

OpenClaw requires root-level access to your system by design. It needs your email credentials, API keys, calendar tokens, browser cookies, file system access, and terminal permissions. If you’re like me, that would keep you up at night.

Security researchers found 135,000 OpenClaw instances exposed on the open internet, over 15,000 vulnerable to remote code execution. The default configuration binds the gateway to 0.0.0.0 with no authentication. A zero-click exploit disclosed in early March allowed attackers to hijack an instance simply by getting the user to visit a webpage.
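The fix for that default is the standard one for any local gateway: bind to loopback and require authentication. A minimal Python sketch of the idea; the config keys are illustrative, not OpenClaw's actual schema:

```python
import secrets

# Hypothetical gateway config. The keys are illustrative; the point is
# the opposite of the reported default: loopback-only binding plus a
# required auth token instead of 0.0.0.0 with no authentication.
DEFAULT_CONFIG = {"host": "0.0.0.0", "auth_token": None}

def harden(config: dict) -> dict:
    """Return a copy of the config with a safe bind address and auth."""
    hardened = dict(config)
    if hardened.get("host") in ("0.0.0.0", "::"):
        hardened["host"] = "127.0.0.1"  # reachable from this machine only
    if not hardened.get("auth_token"):
        hardened["auth_token"] = secrets.token_urlsafe(32)
    return hardened

safe = harden(DEFAULT_CONFIG)
```

Anything that must be reachable remotely should then sit behind a reverse proxy or VPN rather than exposing the gateway itself.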

The skills marketplace got hit, too. Researchers discovered over 800 malicious skills distributing malware on ClawHub, including credential stealers targeting macOS. Cisco confirmed that one third-party skill was performing data exfiltration and prompt injection without user awareness. These are not edge cases and point directly to what happens when an agent can act across real systems with real permissions and weak controls.

What practitioners should take away

OpenClaw matters for the same reason ChatGPT mattered in late 2022. A huge number of people just experienced, for the first time, what it feels like to have an AI agent do real work for them. That changes what they expect from every product going forward.

If you’re building AI systems, pay attention to three signals here.

The killer interface for agents turned out to be the one on everyone’s phone. Your agent strategy shouldn’t require users to learn a new tool; that’s why so many products are now embedding agentic capabilities into the interfaces people already use.

Control is the central design challenge. Prompt injection, credential exposure, and attacks through plugin marketplaces are real-world problems you need to solve before you ship features. Oversight has to be available at runtime. You need visibility into what your agents are accessing, what they’re doing, and how failures are handled. Permission boundaries, approval gates, audit logging, and recovery mechanisms are non-negotiable.
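Those controls boil down to a small pattern: every tool call passes through an allowlist or an explicit approval callback, and every decision is logged. A minimal, framework-agnostic Python sketch (all names here are illustrative, not from any real agent framework):

```python
import json
from datetime import datetime, timezone

# Minimal approval gate and audit log for agent tool calls.
SAFE_TOOLS = {"read_file", "search_notes"}  # low-risk, auto-approved
AUDIT_LOG: list[str] = []

def run_tool(name: str, args: dict, approve=lambda name, args: False):
    """Dispatch a tool call only if allowlisted or explicitly approved."""
    allowed = name in SAFE_TOOLS or approve(name, args)
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": name,
        "args": args,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{name!r} blocked pending approval")
    return f"ran {name}"  # stand-in for the real tool dispatch

run_tool("read_file", {"path": "notes.md"})  # allowlisted, proceeds
try:
    run_tool("send_email", {"to": "someone@example.com"})  # gated
except PermissionError:
    pass  # in practice: surface an approval prompt to the user
```

In a real system the `approve` callback would route to a human (a push notification, a chat message) and the audit log would be append-only, but the shape of the control is the same.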

OpenClaw is a proof of market. It showed that people are ready to make AI personal: they want an agent with access to their applications that can do things for them, and that demand is now validated at scale. AutoGPT proved that people wanted autonomous AI; Perplexity and Cursor built businesses around that demand. The same pattern is likely playing out here. If you’re building in this space, the window is wide open.

The more interesting question now is what gets built next. The next phase of agent design will be shaped by how governable, observable, and safe agents are in real-world environments.

For a deeper dive into OpenClaw, join us on March 19 for AI Product Lab: OpenClaw Up and Running with Aman Khan and Tal Raviv. You’ll learn more about why OpenClaw is a viral sensation, how to get it up and running in a way you won’t regret, and how to use it to build and manage safe, agentic workflows.


