Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Links For You (1/25/26)


I write this in the midst of a huge ice event - which thankfully isn't so bad here in south Louisiana. It's very cold and rainy, but there's no real ice yet, which is good. The worst is coming later tonight and the schools have already shut down, but thankfully I work at home, so there's no need to get on the roads. Today is also the 26th birthday of my eldest child, which means the age range of my little army (8 kids total) now spans 10 to 26. Wow.

Temporal is Coming...

OK, most likely you've seen this across your feeds already (I swear I saw it at least ten times), but "Date is out, Temporal is in" is a great introduction to the new date hotness in JavaScript, the Temporal API.

According to MDN, support is good, so I imagine I'll be using it soon myself.

Temporal is so hot right now

The Personal Site is Back!

Or at least that's what I hear. Personally, I miss seeing all the cute, weird, personal web pages from the old days, so any effort to help promote this trend is a good thing in my book. Personalsit.es is a collection of personal web sites and a quick way to hop to a random one. Anyone can submit a PR to add their own. (I did!)

Free APIs to Get Your Hack On

This isn't new, but I love a good API, and what's better than one good API? How about nearly 500 of them! Free Public APIs is exactly what it sounds like: a collection of free APIs. Building simple API wrappers is a great way to learn a new language. Sadly, though, there are only three cat APIs - someone fix that, please!

Just For Fun

Another music discovery for me: hemlocke springs is an American singer and songwriter who has only been active in the music scene for the past few years. She's got a great sound, and while this is the only track I've tried so far, I'm looking forward to hearing more from her.


Read the whole story
alvinashcraft
3 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Google Discover Replaces News Headlines With Sometimes Inaccurate AI-Generated Alternatives

An anonymous reader shared this report from The Verge: In early December, I brought you the news that Google has begun replacing Verge headlines, and those of our competitors, with AI clickbait nonsense in its content feed [which appears on the leftmost homescreen page of many Android phones and the Google app's homepage]. Google appeared to be backing away from the experiment, but now tells The Verge that its AI headlines in Google Discover are a feature, one that "performs well for user satisfaction." I once again see lots of misleading claims every time I check my phone... For example, Google's AI claimed last week that "US reverses foreign drone ban," citing and linking to this PCMag story for the news. That's not just false — PCMag took pains to explain that it's false in the story that Google links to...! What does the author of that PCMag story think? "It makes me feel icky," Jim Fisher tells me over the phone. "I'd encourage people to click on stories and read them, and not trust what Google is spoon-feeding them." He says Google should be using the headline that humans wrote, and if Google needs a summary, it can use the ones that publications already submit to help search engines parse our work. Google claims it's not rewriting headlines. It characterizes these new offerings as "trending topics," even though each "trending topic" presents itself as one of our stories, links to our stories, and uses our images, all without competent fact-checking to ensure the AI is getting them right... The AI is also no longer restricted to roughly four words per headline, so I no longer see nonsense headlines like "Microsoft developers using AI" or "AI tag debate heats." (Instead, I occasionally see tripe like "Fares: Need AAA & AA Games" or "Dispatch sold millions; few avoided romance.") But Google's AI has no clue what parts of these stories are new, relevant, significant, or true, and it can easily confuse one story for another. 
On December 26th, Google told me that "Steam Machine price & HDMI details emerge." They hadn't. On January 11th, Google proclaimed that "ASUS ROG Ally X arrives." (It arrived in 2024; the new Xbox Ally arrived months ago.) On January 20th, it wrote that "Glasses-free 3D tech wows," introducing readers to "New 3D tech called Immensity from Leia" — but linking to this TechRadar story about an entirely different company called Visual Semiconductor... Google declined our request for an interview to more fully explain the idea. The site Android Police spotted more inaccurate headlines in December: A story from 9to5Google, which was actually titled 'Don't buy a Qi2 25W wireless charger hoping for faster speeds — just get the 'slower' one instead' was retitled as 'Qi2 slows older Pixels.' Similarly, Ars Technica's 'Valve's Steam Machine looks like a console, but don't expect it to be priced like one' was changed to 'Steam Machine price revealed.' At the time, we believed that the inaccuracies were due to the feature being unstable and in early testing.... Now, Google has stopped calling Discover's replacement of human-written headlines an "experiment." "Google buries a 'Generated with AI, which can make mistakes' message under the 'See more' button in the summary," reports 9to5Google, "making it look like this is the publisher's intended headline." While it is obvious that Google has refined this feature over the past couple of months, it doesn't take long to still find plenty of misleading headlines throughout Discover... Another article from NotebookCheck about an Anker power bank with a retractable cable was given a headline that's about another product entirely. A pair of headlines from Tom's Hardware and PCMag, meanwhile, show the two sides of using AI for this purpose.
The Tom's Hardware headline, "Free GPU & Amazon Scams," isn't representative of the actual article, which is about someone who bought a GPU from Amazon, canceled their order, and the retailer shipped it anyway. There's nothing about "Amazon Scams" in the article.

Read more of this story at Slashdot.


Optimizing Python scripts with AI


One of the first steps we take when we want to optimize software is to look at profiling data. Software profilers are tools that try to identify where your software spends its time. Though the exact approach can vary, a typical profiler samples your software (stopping it at regular intervals) and collects statistics. If your software is routinely stopped in a given function, this function is likely using a lot of time. In turn, it might be where you should put your optimization efforts.
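As a concrete illustration (not from the article), here is how profiling data can be collected in Python with the standard cProfile module. Note that cProfile is a deterministic (tracing) profiler rather than a sampling one, but the workflow is the same; the workload function below is a made-up example:

```python
import cProfile
import io
import pstats
import re

def workload():
    # a deliberately regex-heavy toy function for the profiler to catch
    for _ in range(2000):
        re.match(r'.*generic/.*\.h', 'src/generic/utf8_validate.h')

pr = cProfile.Profile()
pr.enable()
workload()
pr.disable()

# sort by cumulative time, like `python -m cProfile -s cumtime`
out = io.StringIO()
pstats.Stats(pr, stream=out).sort_stats('cumtime').print_stats(5)
print(out.getvalue())
```

The same data can be collected without any code changes by running python -m cProfile -s cumtime myprogram.py on the whole script.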

Matteo Collina recently shared with me his work on feeding profiler data to an AI for software optimization purposes in JavaScript. Essentially, Matteo takes the profiling data and prepares it in a way that an AI can comprehend. The insight is simple but intriguing: tell an AI how it can capture profiling data and then let it optimize your code, possibly by repeatedly profiling the code. The idea is not original, since AI tools will, on their own, figure out that they can get profiling data.

How well does it work? I had to try it.

Case 1: Code amalgamation script

For the simdutf software library, we use an amalgamation script: it collects all of the C++ files on disk, does some shallow parsing and glues them together according to some rules.

I first asked the AI to optimize the script without access to profiling data. The first thing it did was add a file cache: the script repeatedly loads the same files from disk (the script is a bit complex). This saved about 20% of the running time.

Specifically, the AI replaced this naive code…

def read_file(file):
    with open(file, 'r') as f:
        for line in f:
            yield line.rstrip()

by this version with caching…

file_cache = {}  # module-level cache: file path -> list of lines

def read_file(file):
    if file in file_cache:
        for line in file_cache[file]:
            yield line
    else:
        lines = []
        with open(file, 'r') as f:
            for line in f:
                line = line.rstrip()
                lines.append(line)
                yield line
        file_cache[file] = lines

Could the AI do better with profiling data? I instructed it to run the Python profiler: python -m cProfile -s cumtime myprogram.py. It found two additional optimizations:

1. It precompiled the regular expressions (re.compile). It replaced

if re.match('.*generic/.*.h', file):
    # ...

by

if generic_pattern.match(file):
    # ...

where elsewhere in the code, we have…

generic_pattern = re.compile(r'.*generic/.*\.h')

2. Instead of repeatedly calling re.sub to do a regular expression substitution, it filtered the strings by checking for the presence of a keyword in the string first.

if 'SIMDUTF_IMPLEMENTATION' in line:  # this IF is the optimization
    print(uses_simdutf_implementation.sub(context.current_implementation + "\\1", line), file=fid)
else:
    print(line, file=fid)  # fast path: no regex work needed
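Both micro-optimizations are easy to reproduce in isolation. The sketch below uses made-up file names, a stand-in pattern, and an 'icelake' replacement that is not the actual simdutf code; note also that the re module caches compiled patterns internally, so the gain from precompiling is usually modest:

```python
import re
import timeit

# --- optimization 1: precompile the regular expression ---
pattern = r'.*generic/.*\.h'
generic_pattern = re.compile(pattern)
files = ['src/generic/utf8.h', 'src/scalar/ascii.cpp'] * 100

t_naive = timeit.timeit(lambda: [re.match(pattern, f) for f in files], number=300)
t_compiled = timeit.timeit(lambda: [generic_pattern.match(f) for f in files], number=300)

# --- optimization 2: substring pre-check before re.sub ---
uses_impl = re.compile(r'SIMDUTF_IMPLEMENTATION(\w*)')
# most lines lack the keyword, which is exactly what makes the pre-check pay off
lines = ['int x = 0;'] * 950 + ['SIMDUTF_IMPLEMENTATION(haswell)'] * 50

def always_sub():
    return [uses_impl.sub(r'icelake\1', ln) for ln in lines]

def prefiltered():
    # cheap 'in' test first; regex substitution only when the keyword is present
    return [uses_impl.sub(r'icelake\1', ln) if 'SIMDUTF_IMPLEMENTATION' in ln else ln
            for ln in lines]

assert always_sub() == prefiltered()  # both forms produce the same output
t_always = timeit.timeit(always_sub, number=200)
t_pre = timeit.timeit(prefiltered, number=200)

print(f"match: naive {t_naive:.3f}s vs precompiled {t_compiled:.3f}s")
print(f"sub:   always {t_always:.3f}s vs prefiltered {t_pre:.3f}s")
```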

These two optimizations could probably have been arrived at by looking at the code directly, and I cannot be certain that they were driven by the profiling data. But I can tell that they do show up in the profiling data.

Unfortunately, the low-hanging fruit, caching the file access, represented the bulk of the gain. The AI was not able to optimize the code further. So the profiling data did not help much.

Case 2: Check link script

When I design online courses, I often use a lot of links. These links break over time. So I have a simple Python script that goes through all the links and verifies them.

I first asked my AI to optimize the code. It did the same regex trick, compiling the regular expression. It also created a thread pool and made the script concurrent.

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    url_results = {url: executor.submit(check_url, url) for url in urls_to_check}
    for url, future in url_results.items():
        url_cache[url] = future.result()

This parallelization more than doubled the speed of the script.
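The same structure can be sketched in a self-contained way, with a stub check_url standing in for a real HTTP request (the 0.05-second sleep simulates network latency):

```python
import concurrent.futures
import time

def check_url(url):
    # stand-in for a real HTTP request; the sleep simulates network latency
    time.sleep(0.05)
    return url, 200

urls = [f"https://example.com/page{i}" for i in range(20)]

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    # executor.map preserves input order; dict() collects the (url, status) pairs
    results = dict(executor.map(check_url, urls))
elapsed = time.perf_counter() - start

print(f"checked {len(results)} links in {elapsed:.2f}s")
```

Because link checking is I/O-bound, the threads overlap their waiting: roughly one second of total simulated latency completes in about a tenth of a second with ten workers.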

It cached the URL checks in an interesting way, using functools:

from functools import lru_cache

@lru_cache(maxsize=None)
def check(link):
    # ...

I did not know about this nice trick. It proved useless in my context, however, because I rarely encounter the same link more than once.
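The trick is easy to verify in isolation: lru_cache memoizes results by argument, so a repeated link would only be checked once. The check function below is a trivial stand-in, not the real link checker:

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def check(link):
    # count how often the underlying work actually runs
    global calls
    calls += 1
    return f"ok: {link}"

check("https://example.com")
check("https://example.com")  # second call is served from the cache
print(calls, check.cache_info())
```

One caveat of this design: the cache never expires, so a link that breaks mid-run would keep reporting its first result.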

I then started again, and told it to use the profiler. It did much the same thing, except for the optimization of the regular expression.

As far as I can tell, all the optimizations were in vain except for the multithreading. And the AI could do that part without the profiling data.

Conclusion so far

The Python scripts I tried were not heavily optimized, as their performance was not critical. They are relatively simple.

For the amalgamation, I got a 20% performance gain for ‘free’ thanks to the file caching. The link checker is going to be faster now that it is multithreaded. Both optimizations are valid and useful, and will make my life marginally better.

In neither case was I able to discern benefits from the profiler data. I was initially hoping to get the AI busy optimizing the code in a loop, continuously running the profiler, but that did not happen in these simple cases. The AI optimized code segments that, per the profiler data, contributed little to the running time.

To be fair, profiling data is often of limited use. The real problems are often architectural and not related to narrow bottlenecks. Even when there are identifiable bottlenecks, a simple profiling run can fail to make them clearly identifiable. Further, profilers become more useful as the code base grows, while my test cases are tiny.

Overall, I expect that the main reason for my relative failure is that I did not have the right use cases. I think that collecting profiling data and asking an AI to have a look might be a reasonable first step at this point.


How to Run Claude Code With Local Models Using Ollama


In January 2026, Ollama added support for the Anthropic Messages API, enabling Claude Code to connect directly to any Ollama model. This tutorial explains how to install Claude Code, pull and run local models using Ollama, and configure your environment for a seamless local coding experience.

Installing Ollama

Ollama is a locally deployed AI model runner that lets you download and run large language models on your own machine. It provides a command-line interface and an API, supports open models such as Mistral and Gemma, and uses quantization to make models run efficiently on consumer hardware. A Modelfile lets you customise base models, system prompts, and parameters (temperature, top-p, top-k). Running models locally gives you offline capability and protects sensitive data.

To use Claude Code with local models, you need Ollama v0.14.0 or later; the January 2026 blog post notes that this version implements Anthropic Messages API compatibility. For streaming tool calls (used when Claude Code executes functions or scripts), a pre-release such as 0.14.3-rc1 may be required.

curl -fsSL https://ollama.com/install.sh | sh

After installation, verify the version with ollama --version.

Pulling a model

Choose a local model suitable for coding tasks. You can browse the full list at https://ollama.com/search. Pulling a model downloads and configures it. For example:

# Pull the 20B-parameter GPT-OSS model
ollama pull gpt-oss:20b
# Pull Qwen Coder (a general coding model)
ollama pull qwen3-coder

To use Claude Code’s advanced tool features locally, the article Running Claude Code fully local recommends GLM-4.7-flash because it supports tool-calling and provides a 128K context length. Pull it with:

ollama pull glm-4.7-flash:latest

Installing Claude Code

Claude Code is Anthropic’s agentic coding tool. It can read and modify files, run tests, fix bugs, and even handle merge conflicts across your entire code base. It uses large language models to act as a pair of autonomous hands in your terminal, letting you vibe-code (describing what you want in plain language and letting the AI generate the code).

curl -fsSL https://claude.ai/install.sh | bash

From your terminal, run:

export ANTHROPIC_AUTH_TOKEN=ollama
export ANTHROPIC_BASE_URL=http://localhost:11434
# Launch the integration interactively
ollama launch claude

You will then see the list of models you installed in the previous step. Select the one you want to test and hit Enter.

[Screenshot: model list]

And that’s it! Your Claude Code now works with Ollama and local models.

Video Tutorial

https://youtu.be/COpg79ab6ug?si=8sSpPzd0xwYctiFJ&embedable=true

Watch on YouTube: Claude Code with Ollama

Summary

By pairing Claude Code with Ollama, you can run agentic coding workflows entirely on your own machine. Don’t expect the same experience as with the Anthropic models!

Experiment with different models and share with me which one worked the best for you!

Cheers! ;)



C# Console menus with Actions


Introduction

The focus of this article is to provide an easy-to-use menu system for C# console projects.

The NuGet package Spectre.Console is required to construct the menu, which uses Actions to execute menu items.

Benefits of using a menu

A developer can easily test different operations, whether to learn something new, quickly try out code slated for a project, or provide options for a dotnet tool.

Also, many online classes are organized into chapters/sections. Consider breaking them up into menu items.

Base parts

First, a class that represents a menu item: its display text and the code to execute, using an Action with or without parameters.

public class MenuItem
{
    public int Id { get; set; }
    public required string Text { get; set; }
    public required Action Action { get; set; }
    public override string ToString() => Text;
}

Next, a class builds the menu using the class above, and another class provides the methods to execute when a menu item is selected.

In the sample projects provided

  • The MenuOperations class is responsible for building the menu
  • The Operations class contains methods to execute using an Action from the menu selection
    • Each method displays the method name and then pauses execution; pressing ENTER returns to the menu.

Entry point

A while statement is used to present a menu, with one menu option to exit the application.

[Screenshot: the menu]

Example 1 uses an Action with no parameters

internal partial class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Console.Clear();
            var menuItem = AnsiConsole.Prompt(MenuOperations.SelectionPrompt());
            menuItem.Action();
        }
    }
}

Example 2 uses an Action with a parameter (declared as Action<int> in that sample), where the menuItem.Id property references a primary key in a database table; the operation, in this case, saves an image to disk.

[Screenshot: a menu with text read from a database]

internal partial class Program
{
    static void Main(string[] args)
    {
        while (true)
        {
            Console.Clear();
            var menuItem = AnsiConsole.Prompt(MenuOperations.SelectionPrompt());
            menuItem.Action(menuItem.Id);
        }
    }
}

A dotnet tool example reads column descriptions for tables in a database.

[Screenshot: dotnet tool sample]

Implementation

Using one of the provided sample projects as a guide, create a new console project.

  • Add NuGet package Spectre.Console to the project
  • Add folders, Models and Classes
  • Add MenuItem under the Models folder
  • Add an empty MenuOperations class under the Classes folder
  • Add the code provided in one of the Main methods to display the menu
  • Export the project as a new project template for a starter project

Tips

  • When something is unclear, set breakpoints and examine values in the Locals window
  • Consider writing code in class library projects that is executed from the Operations class, as with the Customer record class project used here.

Source code

Source code 1 Source code 2 Source code 3


Science fiction writers, Comic-Con say goodbye to AI

Some of the major players in science fiction and pop culture are taking firmer stances against generative AI.