Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Build a Real-Time Voice Interview Coach with TypeScript and LiveKit


Do you struggle with interviews? You're not alone! You can have the best interview notes in the world, but when you start having that vocal conversation, you might end up a deer in the headlights, freezing and forgetting everything you thought you prepared for.

There's good news though!

With modern AI tools like LiveKit, you can have a voice conversation with an AI agent that mimics a real interview experience. Imagine uploading a job description and your resume and being immediately paired with an expert (the agent) who asks you real questions for the job and provides feedback on how you answer and present yourself.

In this tutorial we'll explore using LiveKit and TypeScript, paired with various LLMs and Apache Tika, to build a realistic interview coaching experience.

The post Build a Real-Time Voice Interview Coach with TypeScript and LiveKit appeared first on The Polyglot Developer.



Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

3 strikes and you're an AI skill


Back in the day when we wrote actual code instead of poking at an AI, I had a general rule for when to refactor repeated code. Do it once, fine. Do it a second time, fine. Do it a third time - refactor.

It’s ok to have blocks of identical code in two places, but once I added it to a third place, I’d refactor it to a shared location, such as a base class or helper function. Some might say this doesn’t follow DRY principles, as I should refactor it the second time, but I find it a good balance between pragmatism and clean code. It’s too easy to stress over clean code and over-complicate everything to avoid code duplication, ending up making your code harder to understand.
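As a tiny sketch of what that third-strike refactor looks like in practice (shell here, with illustrative names): the same announce-and-run block, formerly pasted in three places, gets extracted into one helper that each call site uses.

```shell
# A repeated "announce, run, report failure" block, extracted into a shared
# helper after its third appearance. Function and command names are
# illustrative, not from any real project.
log_and_run() {
  echo "running: $*"
  "$@" || { echo "failed: $*" >&2; return 1; }
}

# Three former copy-paste sites now share one implementation:
log_and_run echo build
log_and_run echo test
log_and_run echo deploy
```

The payoff is the usual one: a later change to the logging or error handling happens once, not three times.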

So how does this apply to AI? I’m going to ignore code duplication here, cos the AI is very good at duplicating code. Instead let’s consider how we prompt the AI to do certain tasks.

A typical example is I often ask Copilot or Claude to check its work. I use a prompt like:

Review your work. Check for:

- compliance with the original spec for this work
- unit test coverage
- consistency with the rest of the code base including style, naming, commenting, and architecture
- code reuse, and compliance with DRY principles
- does the code pass the linter
- does the code work

Do multiple passes over all the code changes made using this review plan. When you identify areas that do not pass this review, fix them, then re-run the review.

Work until this review passes.

The goal of this is to ensure that Copilot (or Claude, or your coding assistant of choice) takes time to review the changes it has made to ensure they are appropriate, they work, and they conform to any standards you have for your code. The ‘multiple passes’ request is to get the agent to review, fix, then re-review. This usually ends up with better quality code after the agent has finished.

My 3 strikes rule not only applies to code, but to prompts as well. The first time I asked my agent to review code, I used a prompt like this. Same with the second time. Once I realized there was a pattern here in how I want the coding agent to behave, it was time to create a skill.

What are skills?

Skills are instructions, written in plain text or markdown, that provide reusable directions to your agent. You can think of them as analogous to the components or packages you use with code. Instead of installing a package from NuGet, PyPI, or npm, you add a markdown file that your agent reads.

These skills can be project level, so installed in the same folder as your project, or at a user level so they apply to any session with your coding agent. For example, if you use GitHub Copilot, you can put skills into ~/.copilot/skills to be used in any coding agent session, or in the .github/skills folder in your project to use them just for a single project.

Skills consist of a directory with the name of the skill, containing a file called skill.md, along with additional files or folders that contain more reference information for the skill. The skill.md file contains front matter with the name and description of the skill. This is used by the coding agent to determine when it should use this skill. The rest of this file contains the instructions for the skill, including an example of the kind of prompt a user would use that should trigger this skill.
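As a concrete sketch, here is what creating that layout by hand might look like for a user-level skill. The ~/.copilot/skills location and the skill.md filename come from the description above; the skill name and the abbreviated contents are illustrative placeholders.

```shell
# Scaffold a user-level skill by hand. The directory name is the skill name;
# skill.md holds front matter (name, description) plus the instructions.
SKILL_DIR="$HOME/.copilot/skills/review-work"
mkdir -p "$SKILL_DIR"

cat > "$SKILL_DIR/skill.md" <<'EOF'
---
name: review-work
description: Review code changes against a checklist, fix any findings,
  and re-run until a full pass is clean. Use when the user asks to
  "review your work" or similar.
---

# Review Work

Review the changes made in this session against the checklist below...
EOF

# Additional reference files for the skill can sit alongside skill.md:
# touch "$SKILL_DIR/style-guide.md"
ls "$SKILL_DIR"
```

In practice, as the next section shows, you rarely need to do this by hand; the agent can generate the whole structure from your prompt.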

Building a code review skill

How do you build a skill? Well, the easiest way is to use your coding agent to convert the prompt into a skill. These agents understand skills, so they're great at creating them. I asked Copilot:

I want to create a user level skill that reviews my code. A typical prompt I use to review code is:

Review your work. Check for:

- compliance with the original spec for this work
- unit test coverage
- consistency with the rest of the code base including style, naming, commenting, and architecture
- code reuse, and compliance with DRY principles
- does the code pass the linter
- does the code work

Do multiple passes over all the code changes made using this review plan. When you identify areas that do not pass this review, fix them, then re-run the review.

Work until this review passes.

Convert this to a user level skill

This was enough for Copilot to create a skill for me in my home folder, with the following content:

---
name: review-work
description: Review code changes against a multi-criteria checklist (spec compliance, test coverage, consistency, DRY, linter, runtime correctness), fix issues found, and re-run the review until it passes. Use when the user asks to "review your work", "review the code", "do a review pass", or similar.
---

# Review Work

Review the code changes you have made in this session against the checklist below. Make multiple passes. When a pass surfaces issues, fix them, then run the review again. Repeat until a full pass produces no findings.

## Review checklist

For every pass, evaluate the changes against each of these criteria:

1. **Spec compliance** — Do the changes implement what was originally asked? Re-read the original request/spec and verify every requirement is met. Flag missing pieces, scope creep, or anything that drifted from intent.
2. **Unit test coverage** — Are the new/changed code paths covered by unit tests? Are edge cases tested? Are the tests meaningful (asserting behavior, not just running code)?
3. **Consistency with the codebase** — Match the surrounding code's style, naming conventions, commenting patterns, and architectural choices. Don't introduce a new pattern when an existing one fits.
4. **Code reuse / DRY** — Is anything duplicated that should be extracted? Is there an existing helper, utility, or abstraction this code should use instead of reimplementing?
5. **Linter** — Run the project's linter(s). The code must pass with no new warnings or errors.
6. **Does the code work** — Run the tests. Run the build. Where applicable, exercise the feature (CLI invocation, dev server, etc.) to confirm runtime behavior matches the spec. Type-checking and tests passing is necessary but not sufficient — verify the feature actually does what it should.

## Process

1. Identify the scope of changes to review (the work done in this session, or the diff against the base branch — whichever matches the user's request).
2. Run a full pass over all changed code, evaluating against every checklist item. Collect findings.
3. If findings exist, fix them. Don't batch — fix and move on.
4. Re-run the full review. Continue until a complete pass produces zero findings.
5. Report the final result: confirm each checklist item passes, and summarize what was fixed during the review (if anything).

## Notes

- Don't stop after one pass. The first round of fixes can introduce new issues; the loop is the point.
- Run linters and tests with the project's actual commands (check `package.json` scripts, `Makefile`, `pyproject.toml`, etc.). Don't assume `npm test` works if the project uses something else.
- If a check is genuinely not applicable (e.g., a docs-only change has no linter target), say so explicitly rather than skipping silently.
- If you cannot verify runtime behavior (no way to exercise the feature in this environment), say so explicitly — don't claim "it works" based on tests alone.

What's great here is how the prompt is expanded into more detailed instructions. The front matter contains a description of the skill, along with guidance on which prompts should trigger it. The body keeps the checklist I provided, but expands each item. For example, "compliance with the original spec for this work" becomes "Spec compliance — Do the changes implement what was originally asked? Re-read the original request/spec and verify every requirement is met. Flag missing pieces, scope creep, or anything that drifted from intent."

It also adds helpful notes: run multiple passes, since one round of fixes might introduce new issues, and explicitly call out checks that are not applicable, such as linter checks for a pure docs change, rather than skipping them silently.

Use the skill

Now that I have my skill, I can reload my coding agent and it will pick up this new skill. Instead of typing my review prompt in detail each time I need a review, I can just ask "Review your work", and the coding agent will load this skill and follow it for a thorough review.

I can also iterate on this skill. If there is something I’ve missed, such as adding rules on running unit tests, or pointing it to a coding style standard, I can update the skill and these changes will be picked up every time I ask the agent to review its work.

Summary

Skills are a great way to build repeatable processes into how you interact with a coding agent. If you do a task more than once with your agent, consider building it into a skill; it's easy to do by asking your coding agent to create the skill for you.

You can get the code for my skill here: github.com/jimbobbennett/ai-skills


How Writers Use The 4 Main Characters As Literary Devices


Who are the four main characters in fiction? We explore how these character roles function as literary devices—and how you can use them to strengthen your stories.

One of the easiest ways to tell if you have a plot, and not just a story idea, is by looking at the characters you’ve included in your story. You need to pay special attention to the four main characters who give your story the structure it needs.

Who Are The 4 Main Characters?

  1. The Protagonist
  2. The Antagonist
  3. The Confidant
  4. The Love Interest

Is It A Plot Or A Story Idea? The Protagonist Will Tell You

Beginner fiction writers often struggle to distinguish between describing a series of events and telling a story that truly makes readers care. If we do not learn how to shape our ideas into a plot using storytelling techniques — including using characters as literary devices — readers are far more likely to abandon our books.

Have you ever read a book where things simply happen, but there is no clear character for the reader to connect with? Stories without a strong central problem quickly lose momentum, and without a protagonist readers can identify with or empathise with, it becomes difficult to keep them emotionally invested. A compelling protagonist gives readers someone to follow through the conflict, making the story clearer, more engaging, and far more memorable.

Something happens in your story that negatively affects your protagonist’s life, creating a problem they must resolve. For the story to matter, the problem needs to be significant enough to create meaningful consequences and force the protagonist to act.

Why The Others Matter As Literary Devices

One of the most effective ways to make readers care about the protagonist is by using the other key characters to reveal who they are. A story rarely exists in isolation—without these relationships, the protagonist has nothing to react to, struggle against, or grow from. Each of these characters serves a distinct narrative purpose.

The love interest introduces emotional stakes and vulnerability. The antagonist creates conflict, which is essential for any story to move forward. The confidant offers support, and insight into the protagonist’s inner world.

Together, these characters shape the protagonist and the plot from different angles, making the story more dynamic and far more engaging.

How Writers Use The 4 Main Characters As Literary Devices

1. The Protagonist

A good protagonist is one who wants something (story goal), and sets out to get it. We need a proactive character in this role. A passive character will kill your story. A great protagonist makes decisions and chooses to act. These decisions and actions influence your story. John Gardner says, ‘Failure to recognise that the central character must act, not simply be acted upon, is the single most common mistake in the fiction of beginners.’

2. The Antagonist

On this journey, they meet resistance. This is usually a result of the antagonist’s actions. This causes the conflict that creates a plot. Remember that conflict must have consequences, so your antagonist has to be as strong as, or stronger than, your hero. As Franz Kafka said: ‘From a real antagonist one gains boundless courage.’ This character should be believable. Their motivation should be reasonable from their perspective. This character is the hero of their story, and your protagonist is their villain. (Suggested reading: Very Important Characters)

3. The Confidant

Along the way, the protagonist needs some help. Provide a confidant or a sidekick to support them in this quest. You need this character so that your hero does not spend too much time alone thinking about things. The friend is a sounding board for the main character. As Chuck Palahniuk says: ‘One of the most-common mistakes that beginning writers make is leaving their characters alone. Writing, you may be alone. Reading, your audience may be alone. But your character should spend very, very little time alone. Because a solitary character starts thinking or worrying or wondering.’

4. The Love Interest

To make your protagonist three-dimensional and to complicate their life, you should add a love interest to the mix. This character reveals the protagonist’s strengths and more importantly, their weaknesses. (Suggested reading: The Romantic Sub-Plot) Please remember that the character we use for this device does not have to be a romantic love interest. It just has to be somebody who is able to make your hero act irrationally and unreliably. Love makes fools of all of us. As Ernest Hemingway wrote: ‘I am so in love with you that there isn’t anything else.’

Why Are These Four Main Characters Important?

As literary devices, the main characters force us to show and not tell. The nature of the relationship between the protagonist and the other three leads to tangible interactions.

We have to talk to these characters and interact with them. We cannot avoid our worst enemies if they are determined to find us. We cannot ignore our best friends, unless we are prepared to risk losing those friendships. We cannot abandon the people we love most if we are human. (Suggested reading: The 3 Most Effective Types of Inner Conflict)

There will be other characters in your book, but they will be easier for your protagonist to deal with in a perfunctory manner. Too much of this type of interaction makes the character unsympathetic and boring.

(TOP TIP: Fill in Character Questionnaires for your main characters.)

So, you use your four main characters as literary devices to create a plot when:

  1. An action (inciting moment) somewhere (setting),
  2. Taken by somebody (your antagonist),
  3. Has a negative impact on somebody else (your protagonist).
  4. This creates a problem
  5. That your protagonist must solve (story goal) by acting,
  6. Which leads to confrontations with the antagonist and other characters (conflict in scenes and sequels).
  7. This goes on for approximately 60-80 scenes and sequels.
  8. Your protagonist is supported (can be a sub-plot) by somebody (confidant),
  9. And made aware of their weaknesses (sub-plot) by somebody else (love interest),
  10. Until they achieve, or fail to achieve, the story goal (ending).

In the next four instalments, I will discuss each of these four main characters as literary devices, and how you build a story around them, in more detail.

  1. The Protagonist As A Literary Device
  2. The Antagonist As A Literary Device
  3. The Confidant As A Literary Device
  4. The Love Interest As A Literary Device

The Last Word

Understanding the four main characters in fiction gives you a powerful set of tools to shape stronger stories. Use these roles as literary devices to deepen conflict, reveal character, and drive your plot forward. They can transform your novels and stories into more compelling, cohesive narratives.


by Amanda Patterson
© Amanda Patterson

If you enjoyed this article, read:

  1. 5 Tips For Writing Vivid Fiction From Edgar Allan Poe
  2. The 7 Qualities You Need To Become A Fiction Writer
  3. Use Your Antagonist To Define Your Story Goal
  4. The 7 Critical Elements Of A Great Book
  5. Cheat Sheets For Writing Body Language
  6. 9 Literary Terms You Need To Know
  7. 3 Steps To Help You Find Your Story’s Theme

Top Tip: Sign up for our free daily writing links.

The post How Writers Use The 4 Main Characters As Literary Devices appeared first on Writers Write.


Daily Reading List – May 6, 2026 (#778)


Today’s list has a terrific mix of opinions, not all in agreement. But what’s important is that we’re learning things together.

[article] Agent Skills. Developers are loving these twenty agent skills. They serve the SDLC as a whole and force your agents through the same stages an engineer goes through.

[article] How To Be Direct And Strategic. You can be a straight-shooter and still be thoughtful in how you deliver the message. Good post on how to be strategic and direct.

[article] When an Executive Asks You an Unexpected Question. Related to the above, in some way. It’s not just about the fast and factual answer; you need to understand where the question is coming from.

[blog] Five must-have guides to move agents into production with Gemini Enterprise Agent Platform. Thinking of getting your agents to production? There’s a lot to consider, and these guides point out areas of architecture, governance, and more.

[article] Designing the AI-native engineering organization. This group says that AI-native engineering looks like shorter planning cycles, more frequent releases, smaller squads of engineers, and more types of contributors.

[article] Agentic Coding is a Trap. I definitely get the arguments here, and the author makes a reasonable case. Some is speculation and anecdotal, but no one really knows how agentic coding is going to impact the field of software engineering.

[blog] The AWS MCP Server is now generally available. Every platform, including clouds, needs to talk MCP or make itself agent friendly.

[blog] Pioneering AI-assisted code migration: How Google achieved 6x faster migration from TensorFlow to JAX. This wasn't just about asking an AI agent to plow through a codebase and do work; it used a strong pattern with specialized agents.

[blog] Closing the Loop: Google Just Validated Deterministic Code in the Loop. Keith noticed that the above post proved what he’s been seeing. He’s been doing some important analysis lately!

[article] Improving AI agents through better evaluations. Good advice, and some very specific calls to action at the end. Invest here!

[blog] What’s new in IAM: Security, governance, and runtime defense. The identity management space has heated up quickly. Check this out for what you should be expecting from your modern IAM solutions.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Things I Think I Think... about Junior Developers


Of late, it's become fashionable among AI CTOs to proclaim the death of the junior developer. (Some are suggesting that college graduates should take up plumbing or farming.) I find this entire line of thought to be ridiculous, silly, and almost entirely bereft of actual logical thinking.

To put this emerging myth to bed, I can follow one of several different argument paths:

  • Argumenter analysis
  • Historical analysis
  • Logical analysis

Let's take these in order, shall we?

Analyzing the argumenter (speaker)

In many circles, it's common to want to examine the argument absent the person presenting it. "This laptop is the best laptop ever manufactured" can take on very different meanings when uttered by the manufacturer of said laptop as opposed to a neutral third party who has expertise in evaluating said genre of hardware. "Let the argument stand alone" is the rallying cry of these "argument in a vacuum" proponents.

Unfortunately, lots of arguments are not entirely objective in nature. "This is the best" is an opinionated statement that might, at best, be supported by evidence, but only if you (a) agree with the weight by which the evidence is evaluated, and (b) accept as legitimate the manner in which the evidence was gathered. Only if we all agree on (a) and (b) can we even begin to consider agreeing on the conclusion drawn.

On top of that, the entire premise rests on the idea that the individual speaking the opinion/conclusion is doing so entirely in good faith. And that, dear reader, is where many of our "junior developers are obsolete" pronouncements go horribly, horribly wrong, because so many of those who make that statement are in positions wherein they benefit directly or indirectly from the prophecy becoming a self-fulfilling one.

Consider: If the collective society that LinkedIn seeks to persuade is, in fact, persuaded, then all the currently-enrolled and future enrollees at colleges and universities all over the world will choose degrees in something (anything!) other than Computer Science. Zero graduates. Zero additional inputs into the system. Current junior developers may even pick up their laptops and go take up plumbing. That leaves us with the current crop of senior developers, who have maybe--at most--two more decades of work experience left to them, after which there will be zero developers left to create all that software that will--by definition--still be in demand.

So where, dear reader, do we think the future CEOs and program managers are going to turn to get the new software development they will be demanding? Surely these junior-developer-doomsayers aren't expecting that we'll have to turn to coding agents (and whatever price they're charging for tokens consumed as part of the act of generating that software!) instead, thus making a ton of profits for the owners of those coding agents. Would they--could they--really be that selfish?

(This, of course, presumes that the coding agents are sufficiently good enough such that developers don't need to touch code anymore, and I hold the opposite to be axiomatic.)

Historical analysis

Another approach is to examine parallels in our history, recent or ancient, that hold similar kinds of pronouncements, and examine the results. One parallel that frequently appears in the discussion of LLM-based AIs and coding agents is that of the pocket calculator: In the 1970s, when digital manufacturing had reached a point of miniaturization such that average consumers (e.g., schoolchildren) could have a pocket-sized computer sufficient to carry out cardinal mathematical operations, the educational community went into something of a frenzy, debating how much the calculator was dooming that generation of schoolchildren to ignorance and incompetence. Larry Cuban has an interesting analysis of the debate that took shape in the 1970s, '80s, and even up through the turn of the millennium. In fact, the debate continued all the way up until the present day, though "computers" later came to replace "calculators", mostly because the technology advanced far enough to give students computers in their pocket, by way of "graphing calculators" in the late 80s/early 90s and then later mobile devices (phones, tablets) in the 10s and 20s.

I won't try to challenge or confirm the debates of the time--I was in grade school during the 70s, then middle school and high school in the 80s, so I was one of those schoolchildren everybody was debating about. (My mother, a career educator and administrator up until she married my father and transitioned to "happy homemaker", was often a part of these debates, fiercely so.) Instead, I'll talk about what things looked like from the ground as they were going on.

Most of my teachers were pretty much of the same mind: Calculators definitely do the work faster than you can. However, just knowing how to punch numbers into the calculator only takes you so far--you still need to know which operations you punch in and what numbers go where. The device can do long division for you, but if you're not familiar with how it works, you're going to get certain parts of it wrong, and oh, for whatever it's worth, fractions are still something outside the realm of your average pocket calculators, so if you try to add 1/3 and 2/3, you're going to get 0.99999999999999....

In the modern day, 2026, we find that certain operations are actually faster to calculate in your head, but some of the fears of the calculators-will-rot-our-brains crowd certainly seem to have come true: a fifth of adults apparently have no idea how to do basic math with fractions, or even long division, and that was a study from 10 years ago. But if there's a comforting fact to that study, it's that they've also forgotten basic punctuation skills and simple science facts. (Quick, what are all the planets in our solar system? If you have three other people around you, one of you can't.)

The larger point here, as it relates to LLMs and coding agents? The people who need to do maths on a daily basis--accountants, for example--make heavy use of those very same tools that were feared and loathed fifty years ago, but they don't use them as replacements, but as supplements, to the job at hand. No accountant with even a remote shred of sense is going to go through and do the double-entry bookkeeping by hand anymore, because spreadsheets and accounting software can make all of that just a click away. However, they still go through the basics of how accounting works, because understanding the principles that underlie the software or the tool are actually the key to understanding accounting.

You don't get to an understanding of those principles without doing a bunch of it by hand. Once you understand what the tool is doing and (to some degree) how the tool works, you're allowed to use the tool. There's zero reason why this should be any different for coding agents than it was for calculators.

Logical analysis

And then, of course, there's the logical analysis which presents the logical conclusion of the argument--that GenZ is literally the last generation of programmers produced by the human race, and over time the last of the programmers will die out and all code will only ever be replaced or fixed by other code going forward--which depends on precisely one thing: That the code generated by these coding agents is (a) good code (for whatever definition we care to use for the word "good" here), (b) maintainable code (or at least can be re-generated from scratch on demand), and/or (c) code that would never need to be changed, fixed, or replaced.

I'm pretty sure (c) doesn't count. Even if the code is perfect at the time it was generated, it's not going to magically adjust to the changing demands of the market, humanity, or whimsy, much less the combination thereof.

As far as (a) and (b) go, however, we know that LLM-powered coding agents are not perfect, or at least I've yet to find someone who wants to stand up in a public place and lay out the argument that they are, anyway. We're getting different "takes" on how to best use coding agents, but the "zero-shot" "vibe coding" crowd definitely seems to be running out of steam (based entirely on the unscientific data that LinkedIn's been passing into my feed over the last few months), and we're all of us very clearly aware that LLM-generated code is not deterministic.1 This feeds us back into the basic premise I established earlier (developers will still touch code), which then means that we need people who will be able to read the code and understand it. That means there will always be a demand for programmers to examine and maintain that code, which then (by the basic principles of supply-and-demand market economics) means that there will be a supply of people who provide that skill.

(As an aside, we can easily spend quite a lot of time going back and forth on what their equilibrium price will or should be, and that takes us into an economic analysis of the market and what sort of supply-side and demand-side shocks to their respective curves on the supply-demand graph look like, which is really outside of what I want to do with this essay. So long as supply is greater than zero and demand is greater than zero, there's a graph, and therefore there's some kind of industry here that we can talk about. Which, by definition, therefore, means that there has to be a pipeline of junior developers who fill in the lost ranks from the senior developers who retired or died or worse.)

The Junior Developer Development Pipeline

If you're a medium-to-large company that's trying to think about all of this, here's my suggestion, which is actually the same suggestion I've made to several companies at which I've either been employed or consulted:

Don't let market conditions define the supply constraints around your development.

Take aggressive action to control your supply-chain inputs (in this case, the people who provide the necessary skills your IT department needs) by hiring juniors to take part in an internal educational journey towards employment at your firm. In other words, deliberately create and shape your apprenticeship program. Hire people with either college degrees or even just straight high school diplomas, and teach them what you need them to know. Grow them into the kind of employees you want. Teach them fundamentals, ground them in the principles that guide your thinking and culture, and give them the tools and skills they need to do the job you want them to do and do it well.

Because if your team follows a basic "skill pyramid" (1 senior, 2 mid-level, 3 juniors on any given team), you're constantly training up some new folks, and assuming the senior and mid-levels follow typical career progression and leave after a few years in the role, you'll have a constant supply of likely replacements/promotions for the team. Grow through the draft, and you won't have to worry about having to hire in expensive free agents except in very specific scenarios.


  1. To be fair, in my own experiments, when working with local LLMs, the overall gist and tone and tenor of a response can be within the same ballpark of other responses to the exact same prompt, but the code is definitely not symbol-for-symbol identical across any two responses.


Windows Package Manager 1.29.170-preview

This is a preview build of WinGet for those interested in trying out upcoming features and fixes. While it has had some use and should be free of major issues, it may have bugs or usability problems. If you find any, please help us out by filing an issue.

New in v1.29

New Feature: Source Priority

Note

Experimental, gated behind the sourcePriority setting; disabled by default.

With this feature, you can assign a numerical priority to a source when adding it, or later through the source edit command. Sources with higher priority sort first in the list of sources, and so appear first in results when everything else is equal.
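
A sketch of how this might look in practice. The settings fragment matches how other experimental features are enabled; the --priority flag name on the source commands is an assumption, since the preview notes don't pin down the exact syntax:

```shell
# Opt in via settings.json ("winget settings" opens it in your editor):
#   "experimentalFeatures": { "sourcePriority": true }

# Hypothetical flag name: assign a priority when adding a source.
winget source add --name contoso https://contoso.example/api --priority 10

# Hypothetical flag name: change the priority of an existing source.
winget source edit contoso --priority 20
```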

Tip

Search result ordering in winget is currently based on these values in this order:

  1. Match quality (how well a valid field matches the search request)
  2. Match field (which field was matched against the search request)
  3. Source order (was always relevant, but with priority you can more easily affect this)

Beyond the ability to slightly affect result ordering, commands that primarily target available packages (largely install) will now prefer a single result from a source with higher priority rather than prompting the user for disambiguation. Said another way, if multiple sources return results but only one of those sources has the highest priority value (and it returned only one result), then that package will be used rather than producing a "multiple packages were found" error. This has been applied to both the winget CLI and the PowerShell module commands.

REST result match criteria update

Along with the source priority change, the results from REST sources (like msstore) now attempt to correctly set the match criteria that factor into the result ordering. This will prevent them from being sorted to the top automatically.

Minor Features

Preserve installer arguments across export and import

winget export now captures the --override and --custom arguments that were used when a package was originally installed and saves them into the export file. When you subsequently run winget import, those values are automatically re-applied during installation: --override replaces all installer arguments, while --custom appends extra switches. Packages can therefore be reinstalled with the same customizations without any manual intervention. Both fields are optional and independent of each other; packages without stored installer arguments are unaffected.
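
For example, the round trip might look like this (the package ID and switch are illustrative):

```shell
# Install with a custom silent-install argument; winget records it.
winget install --id Contoso.Widget --override "/S"

# Export the installed-package list; the stored --override value
# now travels with the package entry in packages.json.
winget export -o packages.json

# Elsewhere, import re-applies the same installer arguments.
winget import -i packages.json
```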

--no-progress flag

Added a new --no-progress command-line flag that disables all progress reporting (progress bars and spinners). This flag is universally available on all commands and takes precedence over the visual.progressBar setting. Useful for automation scenarios or when running WinGet in environments where progress output is undesirable.
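
For instance, in a scheduled script:

```shell
# Upgrade everything with no spinners or progress bars in the log.
winget upgrade --all --accept-source-agreements --no-progress
```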

MCP upgrade support

The WinGet MCP server's existing tools have been extended with new parameters to support upgrade scenarios:

  • find-winget-packages now accepts an upgradeable parameter (default: false). When set to true, it lists only installed packages that have available upgrades, equivalent to winget upgrade. In this mode the query parameter becomes optional: supply it to filter results, or omit it to list all upgradeable packages. AI agents can use this to answer requests like "What apps can I update with WinGet?"
  • install-winget-package now accepts an upgradeOnly parameter (default: false). When set to true, it only upgrades an already-installed package, and returns a clear error if the package is not installed (pointing to install-winget-package without upgradeOnly instead). AI agents can use this to answer requests like "Update WinGetCreate" or, combined with find-winget-packages with upgradeable=true, "Update all my apps."
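
As a sketch, an MCP client asking for upgradeable packages would send a tools/call request along these lines (the envelope is the standard MCP JSON-RPC shape; the tool and parameter names come from the notes above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find-winget-packages",
    "arguments": { "upgradeable": true }
  }
}
```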

Authenticated GitHub API requests in PowerShell module

The PowerShell module now automatically uses GH_TOKEN or GITHUB_TOKEN environment variables to authenticate GitHub API requests. This significantly increases the GitHub API rate limit, preventing failures in CI/CD pipelines. Use -Verbose to see which token is being used.
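
In practice that just means exporting a token before the module runs. As an illustration, Repair-WinGetPackageManager is one cmdlet that calls the GitHub API:

```shell
# Either GH_TOKEN or GITHUB_TOKEN works; -Verbose reports which is used.
export GITHUB_TOKEN="<your-token>"
pwsh -Command "Repair-WinGetPackageManager -Verbose"
```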

Improved list output when redirected

  • winget list (and similar table commands) no longer truncates output when stdout is redirected to a file or variable — column widths are now computed from the full result set.
  • Spinner and progress bar output are suppressed when no console is attached, keeping redirected output clean.
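
So, for example:

```shell
# Columns are sized from the full result set, not the console width,
# and no spinner output ends up in the file.
winget list > installed-packages.txt
```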

Bug Fixes

  • winget export now works when the destination path is a hidden file
  • Fixed the useLatest property in the DSC v3 Microsoft.WinGet/Package resource schema to emit a boolean default (false) instead of the incorrect string "false".
  • SignFile in WinGetSourceCreator now supports an optional RFC 3161 timestamp server via the new TimestampServer property on the Signature model. When set, signtool.exe is called with /tr <url> /td sha256, embedding a countersignature timestamp so that signed packages remain valid after the signing certificate expires.
  • File and directory paths passed to signtool.exe and makeappx.exe are now quoted, fixing failures when paths contain spaces.
  • DSC export now correctly exports WinGet Admin Settings
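
For the SignFile fix above, with TimestampServer set (the URL and paths here are illustrative), the generated invocation looks roughly like:

```shell
signtool.exe sign /f "my cert.pfx" /fd sha256 /tr http://timestamp.example.com /td sha256 "C:\out dir\package.msix"
```

The quoted paths also illustrate the related fix for paths containing spaces.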

Full Changelog: v1.29.160-preview...v1.29.170-preview
