Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

At what point in the Windows development cycle is it too late to change the text of a translatable string?


Back in 2009, I noted that the “Prevent windows from being automatically arranged when moved to the edge of the screen” check box is a dreaded negative checkbox. When is it too late in the Windows development cycle to change the text of a translatable string?

The translation team sets a deadline for when no further string changes are permitted. This deadline usually comes well before the engineering “no code changes” deadline because the translators require a lot of time to go through all the strings and translate them into the many target languages that Windows supports.

Service packs (back when we had service packs) and monthly updates follow an even stricter set of rules. Not only does the translation team set a deadline for strings, but strings that have already shipped in the base operating system or previous service packs or monthly updates are considered permanently locked and may not be changed. The reason is that changing those strings would invalidate the translations, causing the existing translation packs to say, “Whoa, that’s not the string I was asked to translate.” Depending on what language the user has chosen for their user interface, this could result in devolving to the base language (for Language Interface Packs), or if the base language’s translation has also been invalidated, possibly falling back to English.

If you want to change a string in a service pack or monthly update, you have to create a new string, let the translators translate that new string, and simply abandon the old string.

As a result, as monthly updates accumulate, there’s also a build-up of unused and abandoned strings lying around in translation packs. They only become available for cleanup when a major release occurs, which tends to be very infrequent because major releases are quite large, and the installation process of a major release takes the form of a clean install of the new operating system, followed by migrating the state of the old operating system to the new one. Not only is this a much longer process than a normal monthly patch update, it also means that the next time the user logs on, they go through the “We’re getting everything ready for you” screen, which is particularly annoying.

Bonus chatter: While it’s true that there are settings to disable the “We’re getting everything ready for you” screen, those settings don’t speed up anything. They just change what you see while the system is finishing setting up your profile.

The post At what point in the Windows development cycle is it too late to change the text of a translatable string? appeared first on The Old New Thing.

Read the whole story
alvinashcraft
16 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The inner workings of Wikipedia (Interview)


Let’s hear how Wikipedia actually works from long-time Wikipedian Bill Beutler! Bill has been heavily involved with this “8th wonder of the modern world” for two decades and even built a career on it, founding Beutler Ink, a digital agency known for its pioneering work in Wikipedia public relations.

We discuss: the official (and not so official) rules, the editor cabal (which isn’t one), the business model (which really isn’t one), how an edit sticks (or not), how AI chatbots threaten the future of the site (or don’t), and a whole lot more.

Join the discussion

Changelog++ members get a bonus 7 minutes at the end of this episode and zero ads. Join today!

Sponsors:

  • Tiger Data – Postgres for developers, devices, and agents. The data platform trusted by hundreds of thousands, from IoT to Web3 to AI and more.
  • Augment Code – Developer AI that uses deep understanding of your large codebase and how you build software to deliver personalized code suggestions and insights. Augment provides relevant, contextualized code right in your IDE or Slack. It transforms scattered knowledge into code or answers, eliminating time spent searching docs or interrupting teammates.
  • Depot – 10x faster builds? Yes please. Build faster. Waste less time. Accelerate Docker image builds and GitHub Actions workflows. Easily integrate with your existing CI provider and dev workflows to save hours of build time.
  • Framer – Design and publish in one place. Get started free at framer.com/design, code CHANGELOG for a free month of Pro.


Something missing or broken? PRs welcome!





Download audio: https://op3.dev/e/https://cdn.changelog.com/uploads/podcast/668/the-changelog-668.mp3

Observability for the Age of Generative AI


Every generation of computing brings new challenges in how we monitor and trust our systems. 

With the rise of Generative AI, applications are no longer static code—they’re living systems that plan, reason, call tools, and make choices dynamically. 

Traditional observability, built for servers and microservices, simply can’t tell you when an AI agent is correct, safe, or cost-efficient.

We’re reimagining observability for this new world. 

At Ignite, we introduced the next wave of Azure Monitor and AI Foundry integration—purpose-built for GenAI apps and agents. 

End-to-End GenAI Observability Across the AI Stack 

Customers can see not just whether their systems are up or fast, but also whether their agent responses are accurate. 

Azure Monitor, in partnership with Foundry, unifies agent telemetry with infrastructure, application, network, and hardware signals—creating a true end-to-end view that spans AI agents, the services they call, and the compute they run on. 

New capabilities include: 

  • Agent Overview Dashboard in Grafana and Azure – Gain a unified view of one or more GenAI agents, including success rate, grounding quality, safety violations, latency, and cost per outcome. Customize dashboards in Grafana or Azure Monitor Workbooks to detect regressions instantly after a model or prompt change—and understand how those changes affect user experience and spend. 
  • AI-Tailored Trace View – Follow every AI decision as a readable story: plan → reasoning → tool calls → guardrail checks. Identify slow or unsafe steps in seconds, without sifting through thousands of spans. 
  • AI-Aware Trace Search by Attributes – Search, sort, and filter across millions of runs using GenAI-specific attributes like model ID, grounding score, or cost. Find the “needle” in your GenAI haystack in a single query. 
  • Foundry Low-Code Agent Monitoring – Agents created through Foundry’s visual, low-code interface are now automatically observable. Without writing a single line of code, you can track reliability, safety, and cost metrics from day one. 
  • Full-Stack Visibility Across the AI Stack – All evaluations, traces, and red-teaming results are now published to Azure Monitor, where agent signals correlate seamlessly with infrastructure KPIs and application telemetry to deliver a unified operational view. 

Here’s a demo video showing some of the new capabilities:

2025_IgniteAct3Video.mp4 

Check out our get started documentation.  

 

Powered by OpenTelemetry Innovation 

This work builds directly on the new OpenTelemetry extensions announced in our recent Azure AI Foundry blog post. 

Microsoft is helping define the OpenTelemetry agent specification, extending it to capture multi-agent orchestration traces, LLM reasoning context, and evaluation signals—enabling interoperability across Azure Monitor, AI Foundry, and partner tools such as Datadog, Arize, and Weights & Biases.

By building on open standards, customers gain consistent visibility across multi-cloud and hybrid AI environments—without vendor lock-in. 

 

Built for Enterprise Scale and Trust 

With open standards and deep integration between Azure Monitor and AI Foundry, organizations can now apply the same discipline they use for traditional applications to their GenAI workloads, complete with compliance, cost governance, and quality assurance. 

GenAI is redefining what it means to operate software. 

With these innovations, Microsoft is giving customers the visibility, control, and confidence to operate AI responsibly, at enterprise scale. 


A first look at the Web Install API


I was excited to see that the proposed new Web Install API has entered Origin Trial in Chromium. It kind of works in Chromium Canary, but is most complete in Microsoft Edge beta (or, if you’re reading this after December 2025, starting with version 143). This makes sense, as the work has been done by the Edge team in Microsoft.

The reason I was excited is because I read the Web install API explainer when it was announced a few months ago:

The Web Install API provides a way to democratise and decentralise web application acquisition, by enabling “do-it-yourself” end users and developers to have control over the application discovery and distribution process. It provides the tools needed to allow a web site to install a web app. This means end users have the option to more easily discover new applications and experiences that they can acquire with reduced friction.

The current way of acquiring a web app may involve search, navigation, proprietary protocols, proprietary app banners, and multiple other workarounds. This can be confusing for end users that must learn how to acquire apps in every different platform, with even different browsers handling the acquisition process differently.

The web platform doesn’t have a built-in way to facilitate app discovery and distribution, the Web Install API aims to fix this.

Let’s dive in. As someone who doesn’t habitually use Edge, it took me a while to find the beta – download it from the Insider Channel. You’ll need to go to chrome://flags and enable Web App Installation API and then restart the browser.

There’s a nice demo page called Edge Demos, laid out with CSS Masonry (Rachel Remix), which can be installed as a standalone Web App. Clicking the Install button shows a normal (browser-generated) permission dialog:

Install Edge Demos app
Publisher: microsoftedge.github.io
Use this site often? Install the app which:
• Opens in a focused window
• Has quick access options like pin to Dock
• Syncs across multiple devices

Installation permission prompt

This opens a new window, in which the site is installed stand-alone:

standalone webapp

As the site is itself an “App store”, I can install another app with a similar UX (although it’s not clear to me why the ‘use this site often?’ wording has changed):

Install PWAmp music player... Publisher: microsoftedge.github.io/ From: Publisher: microsoftedge.github.io/

And this too opens full-window, with no browser chrome:

standalone music player app in its own window

Each of these installed apps shows up as a separate app when I do cmd+tab (on a Mac)

PWA store and PWA music player showing in the macOS list of open apps, on equal footing with Slack, Vivaldi etc

If you try to install an app that’s already installed in Edge, it knows this and shows a dialog “Open with…” or “Not now” (note to Microsoft: “no” is a perfectly good English word). I don’t know how you uninstall an app; on a Mac, they’re stored in Applications > Edge Beta Apps, but when I deleted them and re-started the browser, they were still installed.

The Webkitephant in the room

Of course, there’s a big question about Apple, who try their best to hide installation of Web Apps on iOS (and last year, even tried to kill all PWAs in the EU).

iOS also has its own proprietary MarketplaceKit which “enables alternative app marketplaces to install the apps they distribute to peoples’ devices”, but only in the European Union, and you must go cap-in-hand to Apple to ask for entitlement, provide Apple a stand-by letter of credit in the amount of €1,000,000 from a financial institution (or be a member of good standing in the Apple Developer Program for two continuous years or more, and have an app that had more than one million first annual installs on iOS in the EU), and pay Apple €0.50 for each first annual install of their marketplace app.

However, Diego González, the glamorous PM for PWAs on the Edge team, said on Mastodon:

in scope of the W3C WebApps WG, Firefox, Safari and Chromium agreed to work on ‘current document’ installation. There’s discussion on a declarative way of enabling this as well, so there is cross-vendor progress

I notice that in the explainer, one of the people thanked for input is Marcos Cáceres. Marcos is a glamorous and jolly good chap who is now at Apple, but previously worked with me at Opera on W3C Widgets (a precursor to Web Apps) and then later on the W3C Web App Manifest spec that, along with Service Worker, powers lots of lovely PWAs. He’s someone who has worked for years to make Web Apps work well (even at Apple; along with Microsoft, he was one of the editors of the Badging API spec). So I’m cautiously hopeful.

The Web Install API is currently being road-tested by pwastore.io and progressier.com, among others. If you have a PWA, why not sign up for the origin trial? It’s just a matter of adding a meta tag to your site.

Diego again:

now this means to start we’d only get ability to install the same page you’re browsing, but the cross-site install is something that continues discussion on WICG.

In the meantime, in Chromium we are committed to enabling and experimenting with ‘background-document’ installations.

I admit I’m not known for complimenting Microsoft’s management and their browser tactics at the Operating System level, but I have huge respect for chums on the Edge team for their standards work. I hope the origin trials are successful, so that the Web Install API and <install> element come to the web platform and all browsers, including on iOS where, so far, there is still a WebKit monopoly. Well done, Edge team!


Ask AI from Anywhere: No GUI, No Heavy Clients, No Friction


Ever wished you could ask AI from anywhere without needing an interface? Imagine just typing ? and your question in any terminal the moment it pops into your head, and getting the answer right away! In this post, I explain how I wrote a tiny shell script that turns this idea into reality, transforming the terminal into a universal AI client. You can query Reka, OpenAI, or a local Ollama model from any editor, tab, or pipeline—no GUI, no heavy clients, no friction.

Small, lightweight, and surprisingly powerful: once you make it part of your workflow, it becomes indispensable.

💡 All the code scripts are available at: https://github.com/reka-ai/terminal-tools


The Core Idea

There is almost always a terminal within reach—embedded in your editor, sitting in a spare tab, or already where you live while building, debugging, and piping data around. So why break your flow to open a separate chat UI? I wanted to just type a single character (?) plus my question and get an answer right there. No window hopping. No heavy client.

How It Works

The trick is delightfully small: send a single JSON POST request to whichever AI provider you feel like (Reka, OpenAI, Ollama locally, etc.):

# Example: Reka
curl https://api.reka.ai/v1/chat \
     -H "X-Api-Key: <API_KEY>" \
     -H "Content-Type: application/json" \
     -d '{
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "model": "reka-core",
           "stream": false
         }'

# Example: Ollama local
curl http://127.0.0.1:11434/api/chat \
     -d '{
           "model": "llama3",
           "messages": [
             {
               "role": "user",
               "content": "What is the origin of thanksgiving?"
             }
           ],
           "stream": false
         }'

Once we get the response, we extract the answer field from it. A thin shell wrapper turns that into a universal “ask” verb for your terminal. Add a short alias (?) and you have the most minimalist AI client imaginable.

Let's go into the details

Let me walk you through the core script step-by-step using reka-chat.sh, so you can customize it the way you like. Maybe this is a good moment to mention that Reka has a free tier that's more than enough for this. Go grab your key—after all, it's free!

The script (reka-chat.sh) does four things:

  1. Captures your question
  2. Loads an API key from ~/.config/reka/api_key
  3. Sends a JSON payload to the chat endpoint with curl.
  4. Extracts the answer using jq for clean plain text.

1. Capture Your Question

This part of the script is a pure laziness hack. I wanted to save keystrokes by not requiring quotes when passing a question as an argument. So ? What is 32C in F works just as well as ? "What is 32C in F".

if [ $# -eq 0 ]; then
    # No arguments: fall back to reading the question from stdin,
    # which lets you pipe text into the script
    if [ ! -t 0 ]; then
        QUERY="$(cat)"
    else
        echo "usage: ? <question>" >&2
        exit 1
    fi
else
    # Join all arguments into a single question, no quotes required
    QUERY="$*"
fi
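To see the two input paths in isolation, here is a hypothetical, self-contained sketch that wraps the same branching in a function (the `capture` name is mine, not part of the actual script):

```shell
#!/bin/sh
# Hypothetical standalone sketch of the capture step above:
# use arguments if given, otherwise fall back to piped stdin.
capture() {
    if [ $# -eq 0 ]; then
        if [ ! -t 0 ]; then
            cat                      # question arrives via a pipe
        else
            echo "usage: capture <question>" >&2
            return 1
        fi
    else
        printf '%s\n' "$*"           # unquoted words are joined back together
    fi
}

capture What is 32C in F             # argument path
printf 'piped question\n' | capture  # stdin path
```

Note that the argument branch takes precedence: if you pass words on the command line, piped stdin is ignored.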

2. Load Your API Key

If you're running Ollama locally you don't need any key, but for all other AI providers you do. I store mine in a locked-down file at ~/.config/reka/api_key, then read and trim trailing whitespace like this:

API_KEY_FILE="$HOME/.config/reka/api_key"
# Read the key and strip any surrounding whitespace or newlines
API_KEY=$(tr -d '[:space:]' < "$API_KEY_FILE")

3. Send The JSON Payload

Building the JSON payload is the heart of the script, including the API_ENDPOINT, API_KEY, and obviously our QUERY. Here’s how I do it for Reka:

RESPONSE=$(curl -s -X POST "$API_ENDPOINT" \
     -H "X-Api-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d "{
  \"messages\": [
    {
      \"role\": \"user\",
      \"content\": $(echo "$QUERY" | jq -R -s .)
    }
  ],
  \"model\": \"reka-core\",
  \"stream\": false
}")
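As a design aside: instead of escaping just the query with `jq -R -s .` inside a hand-assembled JSON string, you can let `jq -n` build the whole payload, which rules out quoting mistakes entirely. A minimal sketch (the `PAYLOAD` variable name is my own, not from the script):

```shell
#!/bin/sh
# Build the request body entirely with jq; $QUERY can contain
# quotes, newlines, or backslashes and the JSON stays valid.
QUERY='What is 32C in F?'
PAYLOAD=$(jq -n --arg q "$QUERY" \
    '{messages: [{role: "user", content: $q}], model: "reka-core", stream: false}')
echo "$PAYLOAD" | jq -r '.messages[0].content'
```

You would then pass `-d "$PAYLOAD"` to curl as before.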

4. Extract The Answer

Finally, we parse the JSON response with jq to pull out just the answer text. If jq isn't installed we display the raw response, but a formatted answer is much nicer. If you are customizing for another provider, you may need to adjust the JSON path here. You can add echo "$RESPONSE" >> data_sample.json to the script to log raw responses for tinkering.

With Reka, the response looks like this:

{
    "id": "cb7c371b-3a7b-48d2-829d-70ffacf565c6",
    "model": "reka-core",
    "usage": {
        "input_tokens": 16,
        "output_tokens": 460,
        "reasoning_tokens": 0
    },
    "responses": [
        {
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": " The origin of Thanksgiving ..."
            }
        }
    ]
}
The value we want to display is the `content` field inside `responses[0].message`. Using `jq`, we do:

echo "$RESPONSE" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'
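You can exercise that fallback chain without calling the API by feeding it canned responses (the two sample JSON strings below are mine, abridged to the fields that matter):

```shell
#!/bin/sh
# A canned success response and a canned error response, showing how
# the jq expression falls through: content // error // fixed message.
OK='{"responses":[{"message":{"role":"assistant","content":"The origin of Thanksgiving ..."}}]}'
ERR='{"error":"invalid api key"}'

echo "$OK"  | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'
echo "$ERR" | jq -r '.responses[0].message.content // .error // "Error: Unexpected response format"'
```

The first command prints the assistant’s answer; the second falls back to the error string; anything else falls back to the fixed message.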

Putting It All Together

Now that we have the script, make it executable with chmod +x reka-chat.sh, and let's add an alias to your shell config to make it super easy to use. Add one line to your .zshrc or .bashrc that looks like this:

alias \?="$REKA_CHAT_SCRIPT"

Because ? is a special character in the shell, we escape it with a backslash. After adding this line, reload your shell configuration with source ~/.zshrc or source ~/.bashrc, and you are all set!

The Result

Now you can ask questions directly from your terminal. Wanna know the origin of Thanksgiving? Ask like this:

? What is the origin of Thanksgiving

And if you want to keep the quotes, you do you!

Extra: Web research

I couldn't stop there! Reka also supports web research, which means it can fetch and read web pages to provide more informed answers. Following the same pattern described previously, I wrote a similar script called reka-research.sh that sends a request to Reka's research endpoint. This obviously takes a bit more time to answer, as it's making different web queries and processing them, but the results are often worth the wait—and they are up to date! I used the alias ?? for this one.

On the GitHub repository, you can find both scripts (reka-chat.sh and reka-research.sh) along with a script to create the aliases automatically. Feel free to customize them to fit your workflow and preferred AI provider. Enjoy the newfound superpower of instant AI access right from your terminal!

What's Next?

With this setup, the possibilities are endless. Reka supports questions related to audio and video, which could be interesting to explore next. The project is open source, so feel free to contribute or suggest improvements. You can also join the Reka community on Discord to share your experiences and learn from others.


Resources





HP Announces Up to 6,000 Layoffs in Massive AI Restructuring Plan


HP will cut up to 6,000 jobs by 2028 as it doubles down on AI, betting automation and AI PCs can boost productivity despite chip cost headwinds.

The post HP Announces Up to 6,000 Layoffs in Massive AI Restructuring Plan appeared first on TechRepublic.
