Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Windows Package Manager 1.12.420

1 Share

This is a servicing release build of Windows Package Manager v1.12. If you find any bugs or problems, please help us out by filing an issue.

New in v1.12

  • MCP server available; run winget mcp for assistance on configuring your client.
  • App Installer now uses WinUI 3. The package dependency on WinUI 2 has been replaced by a dependency on the Windows App Runtime 1.8.
  • Manifest schema and validation updated to v1.12. This version update adds Font as an InstallerType and NestedInstallerType.
  • Font Install, Uninstall, and a winget-fonts source have been added and are non-experimental.

Bug Fixes

  • Manifest validation no longer fails using UTF-8 BOM encoding when the schema header is on the first line.
  • Upgrading a portable package with dev mode disabled will no longer remove the package from the PATH variable.
  • Fixed source open failure when there were multiple sources but fewer than two non-explicit sources.
  • Fixed an issue where App Installer would not update its progress.
  • Fixed an issue with opening packages that require elevation in App Installer.
  • Fixed an issue that prevented App Installer from launching on older OS builds when the Windows App Runtime is missing.

Font Support

Font Install and Uninstall via manifest and a package source have been added for user and machine scopes.
A sample Font manifest can be found at:
https://github.com/microsoft/winget-pkgs/tree/master/fonts/m/Microsoft/FluentFonts/1.0.0.0

At this time, installation and removal of fonts is only supported for fonts installed via a WinGet package.

Fonts must either be the Installer or a .zip archive of NestedInstaller fonts.
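To make the manifest shape concrete, here is a minimal singleton-manifest sketch using the new Font installer type. The identifier, URL, and hash below are placeholders, not real values; see the linked FluentFonts manifest for the canonical layout.

```yaml
# Hypothetical font manifest sketch; field names follow the winget manifest
# schema, with InstallerType: font new in v1.12. All values are placeholders.
PackageIdentifier: Contoso.SampleFont
PackageVersion: 1.0.0.0
PackageLocale: en-US
Publisher: Contoso
PackageName: Sample Font
License: OFL-1.1
ShortDescription: A sample font package.
Installers:
- Architecture: neutral
  InstallerType: font
  InstallerUrl: https://example.com/SampleFont.ttf
  InstallerSha256: 0000000000000000000000000000000000000000000000000000000000000000
ManifestType: singleton
ManifestVersion: 1.12.0
```

For a .zip of fonts, the archive would instead use NestedInstallerType: font, per the schema update described above.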

A new explicit source for fonts, "winget-font", has been added:
winget search font -s winget-font

This source is not accepting public submissions at this time.

Experimental Features

  • Experimental support still exists for the 'font' command.

Experimental support for Fonts

The following snippet enables experimental support for fonts via winget settings. The winget font list command will list installed font families and the number of installed font faces.

{
  "$schema": "https://aka.ms/winget-settings.schema.json",
  "experimentalFeatures": {
    "fonts": true
  }
}

The font 'list' command has been updated with a new '--details' option that provides an alternate view of the installed fonts.

What's Changed

  • Make Repair-WGPM a COM-aware cmdlet and rework version retrieval (CP to 1.12) by @JohnMcPMS in #5858
  • Unregister signal handler (CP to 1.12) by @JohnMcPMS in #5862
  • Support associating export units with packages in subdirectories (1.12) by @JohnMcPMS in #5866

Full Changelog: v1.12.350...v1.12.420

Read the whole story
alvinashcraft
3 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

How Agent Mode Fixes 500s, Patches Code, and Returns 200s

1 Share
From: Postman
Duration: 2:11
Views: 6

Watch Agent Mode run a full Postman collection, analyze every failing request, and automatically fix backend issues from 500 errors to 400s and 404s. In this demo, Agent Mode patches code, restarts the server, and reruns the failing requests until each one returns a clean 200 or 201 response.

See the complete debug loop in action as Agent Mode removes problematic lines, adds missing fields, and verifies every fix with a fresh collection run. If you want to understand how Agent Mode streamlines API debugging and error resolution, this walkthrough shows the full workflow end to end.

🔗 Resources:
- Sign up for Agent Mode: https://www.postman.com/product/agent-mode/?utm_campaign=global_growth_user_fy26q4_ytbftrad&utm_medium=social_sharing&utm_source=youtube&utm_content=25198-L
- Read the docs: https://learning.postman.com/docs/agent-mode/overview/?utm_campaign=global_growth_user_fy26q4_ytbftrad&utm_medium=social_sharing&utm_source=youtube&utm_content=25198-L

📌 Timestamps
0:00 - Running the full collection with Agent Mode
0:12 - Identifying 500, 400, and 404 errors
0:28 - Agent Mode patches backend code automatically
0:50 - Restarting the server and re-running failing requests
1:05 - Fixing additional 400-level errors
1:21 - Final failing request: department by ID
1:34 - Full collection passes with all 200/201 responses


How to Create Your Own Chatbot with NVIDIA DGX SPARK

1 Share

Imagine having the power of an AI, like ChatGPT, right within your own home, accessible only to you and those you choose on your network. Intriguing, right? Well, the good news is that with technologies such as NVIDIA’s DGX Spark and the clever use of Docker containers, this is not just possible but actually quite manageable. Let’s explore how this kind of setup can be constructed and dive into the flexibility and privacy it offers.

Setting up your personal AI begins with Docker containers. Docker, for those unfamiliar, lets you package, ship, and run any application as a lightweight, portable, self-sufficient container that can run virtually anywhere. For our purposes, we’re using two containers. The first is Ollama, which acts as the backbone of the AI system. Built on the popular llama.cpp, Ollama is typically paired with Open WebUI, a user-friendly web front end that makes the server accessible to anyone on your network.

This video is from Micro Center.

The journey starts with ensuring your machine is primed for Docker commands. Running a simple `docker ps` checks for active Docker containers and verifies if you have the necessary permissions to view or modify them. It’s common at this stage to encounter a permission error, typically resolved by adding your user to the Docker group using administrative privileges—a process facilitated by the Linux ‘sudo’ command.

Once you have permissions sorted, the next step involves pulling down the Docker images we need from a registry. Here, the `docker pull` command fetches everything required to get Ollama and Open WebUI operational, gathering all the components needed to build your local AI server.

Following this setup, running the Docker containers is next. The beauty of Docker really shines here, as you isolate your program within your computer, using what we call “volumes.” This method essentially restricts the program’s view of your system to just the directories you specify—it can’t interact with anything you haven’t explicitly allowed. This both safeguards your system’s integrity and compartmentalizes data effectively.
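The sequence described above (permissions, pull, then run with volume isolation) can be sketched as follows. The image names, ports, and volume names are assumptions for illustration, not taken from the video; adjust them to your setup.

```shell
# One-time: let your user talk to the Docker daemon (log out and back in afterwards).
sudo usermod -aG docker "$USER"

# Confirm Docker is reachable and list running containers.
docker ps

# Pull the two images (names are assumptions; check your registry).
docker pull ollama/ollama
docker pull ghcr.io/open-webui/open-webui:main

# Run each container with a named volume, so it can only see the data you give it.
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
docker run -d --name open-webui -v open-webui:/app/backend/data -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --add-host host.docker.internal:host-gateway \
  ghcr.io/open-webui/open-webui:main
```

With both containers up, browsing to localhost on the web UI port (3000 in this sketch) brings up the registration page described below.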

After your containers are up and running, navigate to localhost on your specified port to set up your private chatbot instance. Registration usually requires entering basic details like your name, email, and a password.

The next phase of excitement begins when you start importing AI models. Thanks to the robustness of the setup, even massive models like GPT-OSS 120B can be handled, though for simplicity you might start with something smaller like GPT-OSS 20B. Fetching a model from the remote repository shows the seamless integration capabilities of modern server setups. The model you choose becomes the brain behind the prompts you’ll eventually run.

Speaking of prompts, using a standardized prompt library can ensure consistency and quality in outputs—essential if you’re experimentally comparing model outputs or developing an application. Once the model is loaded into system memory, you can start interacting with your AI. The first response from this script is always slightly magical—it really feels like something out of a sci-fi novel!

This entire process illustrates not just a technological marvel, but also a paradigm shift in how personal computing power is harnessed and utilized at home. It’s incredibly empowering to realize that such sophisticated technology can be run from a personal home server, putting significant capabilities at your fingertips.

This exploration into setting up a localized version of ChatGPT using Docker and NVIDIA technologies is just the tip of the iceberg. The potential applications are vast—from developing personal virtual assistants, enhancing local data security, to potentially transforming your home into a smart, AI-driven hub. The question isn’t just about what AI can do, but what you can do with AI.

What would you create if you had such power at your disposal? How would this transform your interaction with technology daily?


How to Get Nvidia GPU Working on Raspberry Pi

1 Share

Today, I’ve got some fascinating insights to share—primarily centered around an Nvidia A4000 GPU engineered to play nicely with a Raspberry Pi CM5. Both challenging and intriguing, this endeavor reminds me why I live for this stuff.

This video is from Jeff Geerling.

While the setup doesn’t have a display output just yet (we’re on it!), it successfully facilitates full GPU acceleration, particularly for AI applications. Imagine this: loading AI models into memory and running benchmarks at a stellar rate of 121 tokens per second, all on a slim power budget of about 160 watts. This level of efficiency can make any tech aficionado’s heart skip a beat. The secret sauce? A patch for open GPU kernel modules from Nvidia, with commendable community contributions that optimize compatibility of Nvidia drivers with ARM systems like the Raspberry Pi.

This scenario isn’t just about tinkering with hardware; it’s a gateway to broader possibilities. For instance, the setup I’m experimenting with includes various models and makes, from the Compute Module 5 to non-Pi boards like the Rock 5 with an RK3588 chip. The versatility on display here is akin to a Swiss Army knife, poised to address a spectrum of tech challenges with various tools at its disposal.

But my tech Thanksgiving doesn’t end at AI and GPUs. Oh no, it extends to gaming—an essential pulse of the tech community. Curious about running demanding games on unconventional setups? Well, would you believe me if I said, “Yes, it can run Crysis”? Not just run it, but handle Crysis Remastered in “Can it Run Crysis” mode incredibly well on an ARM desktop proclaimed as the fastest on the planet—the System76 Thelio Astra. Though its CPU cores aren’t the speediest, a staggering count of 128 cores propels it to robust efficiencies, showcasing just how far we’ve come in computing power distribution and multi-tasking prowess.

This journey through ARM systems compatibility with Nvidia GPUs reveals more than just technical feasibility; it showcases the buoyant spirit of the tech community and the ceaseless pursuit of pushing boundaries. Whether it’s integrating advanced GPU capabilities into compact and energy-efficient setups or exploring how high-end gaming experiences can be translated onto the most unexpected platforms, the mantra is clear: innovate, integrate, and inspire.

What drives this explorative urge, you might ask? It’s the enthralling rollercoaster of making a so-called “bad idea” work wonders. It’s about proving that with the right tweaks, unconventional setups can rival traditional configurations, challenging our preconceptions of performance and efficiency. I thank the spirited tech community—hobbyists, developers, and enthusiasts—who contribute patches, debug rigorously, and share their insights, fostering an ecosystem where amazing becomes the norm.

This saga of integrating mainstream graphics processing capabilities into smaller, less conventional systems like Raspberry Pis not only broadens the horizon for what these tiny machines can do but also sets the stage for a future where the size might no longer impede power. As systems become stable and user-friendly, and as patches weave into mainstream kernels, we’re standing on the brink of a small-form-factor revolution.

So, as we delve into turkey leftovers, I’ll be diving deeper into the realm of possibilities that opens up when ARM meets Nvidia—testing, tweaking, and perhaps, even accomplishing the dream of making bad ideas good. If you’re as enthralled by these adventures as I am, stay tuned for more explorations and findings.


DirectX Graphics Samples Updated to Agility SDK + Enhanced Barriers examples

1 Share

We’re excited to share that all DirectX Graphics Samples have been updated to use the latest DirectX 12 Agility SDK.

What’s New:

  • Agility SDK Integration: All samples now leverage the latest Agility SDK for improved compatibility and faster adoption of new DirectX capabilities.
  • Enhanced Barriers Support: Three samples (Multithreading, Small Resources, and nBodyGravity) have been upgraded to use D3D12 Enhanced Barriers when supported by hardware and drivers.

Why Enhanced Barriers Matter:

Enhanced Barriers introduce a more expressive and predictable barrier model, reducing synchronization overhead and simplifying resource layout management and aliasing.

Get Started: Visit https://github.com/microsoft/DirectX-Graphics-Samples to explore the updated samples.

The post DirectX Graphics Samples Updated to Agility SDK + Enhanced Barriers examples appeared first on DirectX Developer Blog.


Release v0.96.1

1 Share

This patch release fixes several important stability issues identified in v0.96.0 based on incoming reports. Check out the v0.96.0 notes for the full list of changes.

Installer Hashes

Description           Filename                              SHA-256 hash
Per user - x64        PowerToysUserSetup-0.96.1-x64.exe     86EC5D7639ED3624ECD53193C0B0411174D17E72F8C7507E8206A52F78FA4A24
Per user - ARM64      PowerToysUserSetup-0.96.1-arm64.exe   8B1FEDE0F9E0BA9C24270DBF1D40F967168C83B3F10B26AFC778286D560DFFE6
Machine wide - x64    PowerToysSetup-0.96.1-x64.exe         E67D2D4098CA0CEB6910D418989567F4A62CD868210E1B29D8A76E0E64A3626E
Machine wide - ARM64  PowerToysSetup-0.96.1-arm64.exe       0F5562A1C2AD8BD42D19B79EAD72BB7AD77C3EE2B93C2415AC15015CCDFB6F8E
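
As a quick integrity check, a downloaded installer can be compared against its published hash. This is a minimal sketch assuming a Linux/WSL shell with `sha256sum`; on Windows, `certutil -hashfile <file> SHA256` or PowerShell's `Get-FileHash` print the same digest.

```shell
# Compare a downloaded installer's SHA-256 against the value published above
# (the "Per user - x64" row is used here; adjust the filename and hash).
expected=86EC5D7639ED3624ECD53193C0B0411174D17E72F8C7507E8206A52F78FA4A24
file=PowerToysUserSetup-0.96.1-x64.exe

# sha256sum prints lowercase hex; normalize to uppercase before comparing.
actual=$(sha256sum "$file" | awk '{ print toupper($1) }')
if [ "$actual" = "$expected" ]; then
  echo "hash OK"
else
  echo "hash MISMATCH: got $actual" >&2
fi
```

Only install the package if the computed digest matches the table exactly.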

Highlights

Advanced Paste

  • #43766: Removed deprecated OpenAI Prompt Execution Settings properties, enabling use of new models such as GPT-5.1 in Azure OpenAI.
  • #43768: Updated Foundry Local model parameters to allow for longer output tokens.
  • #43716: Fixed an issue where a model could appear unavailable immediately after being downloaded from Foundry Local.

Image Resizer

  • #43763: Brought Image Resizer back to Windows 10.

Awake

  • #43785: Fixed timed mode not expiring correctly.