Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
149764 stories · 33 followers

Orion 1.0 is a privacy-focused browser for macOS

1 Share
Kagi, a small company best known for its paid, ad-free search engine, has announced the launch of version 1.0 of Orion, a new web browser designed around privacy and user control rather than advertising or data collection. Kagi has already released iPhone and iPad versions of Orion, but this is the company's first desktop browser, and it arrives following a long beta phase. Many of the main browsers, including Chrome, Edge, Brave, and new AI-focused ones, are built on Chromium.… [Continue Reading]
Read the whole story
alvinashcraft
57 minutes ago
Pennsylvania, USA

Copilot your Holidays - Ep. 1

1 Share
From: Microsoft Healthcare and Life Blog Videos
Duration: 4:43
Views: 4

Join Maria, Saurabh, and Samhita as they take on holiday chaos with the power of Copilot!

Read the whole story
alvinashcraft
58 minutes ago
Pennsylvania, USA

Postman VS Code extension + MCP working in Google Antigravity

1 Share
From: Postman
Duration: 2:58
Views: 16

Take a first look at Google Antigravity, released alongside Gemini 3, and see how the Postman VS Code extension and the Postman MCP server run inside this new environment. See how to install the extension, connect the workspace, and configure the minimal MCP server to work within Antigravity's tool limits. If you're exploring Google Antigravity for API development, this demo shows exactly how Postman integrates with it.

📌 Timestamps
0:00 - Google releases Gemini 3 and Antigravity
0:10 - Downloading Antigravity and installing the Postman VS Code extension
0:33 - Signing into Postman and accessing workspaces
0:56 - Navigating collections, variables, and workspace resources
1:16 - Adding the Postman MCP server inside Antigravity
1:48 - Using the minimal MCP server due to tool limits
2:31 - Starting the MCP server inside Antigravity
2:37 - Final thoughts on using Postman tools with Antigravity

Read the whole story
alvinashcraft
58 minutes ago
Pennsylvania, USA

Christmas Gifts for SysAdmins with Joey Snow and Rick Claus

1 Share

The seasonal gift show is back - Joey Snow and Rick Claus bring their lists of great gifts for sysadmins. You know they're impossible to buy for, so we are making it easier for you with a range of prices and seriousness for your favorite sysadmin. Useful gadgets, upgrades to older devices, and some fun stuff that reminds the sysadmins in your life that you get their struggles. Share this show and toy list with everyone you know who struggles to find the right thing!

Links

Recorded October 30, 2025

Download audio: https://cdn.simplecast.com/audio/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/34533377-2b9a-4613-879f-d344ff3284b1/audio/d62e9525-30c0-4b34-b127-17b3c57b4e48/default_tc.mp3?aid=rss_feed&feed=cRTTfxcT
Read the whole story
alvinashcraft
58 minutes ago
Pennsylvania, USA

SE Radio 696: Flavia Saldanha on Data Engineering for AI

1 Share

Flavia Saldanha, a consulting data engineer, joins host Kanchan Shringi to discuss the evolution of data engineering from ETL (extract, transform, load) and data lakes to modern lakehouse architectures enriched with vector databases and embeddings. Flavia explains the industry's shift from treating data as a service to treating it as a product, emphasizing ownership, trust, and business context as critical for AI-readiness. She describes how unified pipelines now serve both business intelligence and AI use cases, combining structured and unstructured data while ensuring semantic enrichment and a single source of truth. She outlines key components of a modern data stack, including data marketplaces, observability tools, data quality checks, orchestration, and embedded governance with lineage tracking. This episode highlights strategies for abstracting tooling, future-proofing architectures, enforcing data privacy, and controlling AI-serving layers to prevent hallucinations. Saldanha concludes that data engineers must move beyond pure ETL thinking, embrace product and NLP skills, and work closely with MLOps, using AI as a co-pilot rather than a replacement.

Brought to you by IEEE Computer Society and IEEE Software magazine.

Download audio: https://traffic.libsyn.com/secure/seradio/696-flavia-saldanha-data-engineering-ai.mp3?dest-id=23379
Read the whole story
alvinashcraft
58 minutes ago
Pennsylvania, USA

IDistributedCache (Redis) - remove by prefix

1 Share

I'm a heavy user of Redis. On a recent project, we cached a bunch of content that had the following key structure:

content-{company-id}-checkout-...

The requirement was that we needed to drop all keys starting with the above prefix when any changes were made to the company in our CMS.

We're using IDistributedCache with a Redis implementation based on the StackExchange.Redis package.

The IDistributedCache interface does not provide any "remove by prefix" methods:

Get, GetAsync: Accepts a string key and retrieves a cached item as a byte[] array if found in the cache.
Set, SetAsync: Adds an item (as byte[] array) to the cache using a string key.
Refresh, RefreshAsync: Refreshes an item in the cache based on its key, resetting its sliding expiration timeout (if any).
Remove, RemoveAsync: Removes a cache item based on its string key.
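
To see the gap concretely, plain IDistributedCache usage looks something like this minimal sketch of mine; the class, payload, expiration, and key suffix are illustrative, not from the project:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public class CheckoutContentCache
{
    private readonly IDistributedCache _cache;

    public CheckoutContentCache(IDistributedCache cache) => _cache = cache;

    // Store serialized checkout content for a company under an illustrative key.
    public Task StoreAsync(string companyId, string json) =>
        _cache.SetAsync(
            $"content-{companyId}-checkout-page",
            Encoding.UTF8.GetBytes(json),
            new DistributedCacheEntryOptions { SlidingExpiration = TimeSpan.FromMinutes(30) });

    // Read it back; GetAsync returns null when the key isn't in the cache.
    public async Task<string?> GetOrDefaultAsync(string companyId)
    {
        var bytes = await _cache.GetAsync($"content-{companyId}-checkout-page");
        return bytes is null ? null : Encoding.UTF8.GetString(bytes);
    }

    // The interface only removes one exact key at a time; there is no way to ask
    // for "everything starting with content-{companyId}-checkout-".
    public Task RemoveOneAsync(string companyId) =>
        _cache.RemoveAsync($"content-{companyId}-checkout-page");
}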

To enable support for removing all keys with a certain prefix, I created the following interface:

// The name is not `IMyDistributedCache` IRL, just for this post.
public interface IMyDistributedCache : IDistributedCache
{
    Task<(long ItemsDeleted, long ItemsFound)> RemoveItemsByPrefix(string prefix);
}

With the following implementation:

public class RedisMyDistributedCache : RedisCache, IMyDistributedCache
{
    private static readonly LuaScript RemoveByPrefixScript;
    private readonly RedisDatabaseProvider _redisDatabaseProvider;
    private readonly RedisCacheOptions _redisCacheOptions;

    static RedisMyDistributedCache()
    {
        var script =
            """
            local prefix = @prefix
            local cursor = '0'
            local count = 0
            local batch_size = 500
            local scanned = 0
            
            repeat
                local result = redis.call('SCAN', cursor, 'MATCH', prefix .. '*', 'COUNT', batch_size)
                cursor = result[1]
                local keys = result[2]
                scanned = scanned + #keys
                
                if #keys > 0 then
                    local deleted = redis.call('DEL', unpack(keys))
                    count = count + deleted
                end
            until cursor == '0'
            
            return {count, scanned}
            """;
        RemoveByPrefixScript = LuaScript.Prepare(script);
    }

    public RedisMyDistributedCache(
        RedisDatabaseProvider redisDatabaseProvider,
        IOptions<RedisCacheOptions> optionsAccessor) : base(optionsAccessor)
    {
        _redisDatabaseProvider =
            redisDatabaseProvider ?? throw new ArgumentNullException(nameof(redisDatabaseProvider));
        _redisCacheOptions = optionsAccessor.Value ?? throw new ArgumentNullException(nameof(optionsAccessor));
    }

    public async Task<(long ItemsDeleted, long ItemsFound)> RemoveItemsByPrefix(string prefix)
    {
        if(!string.IsNullOrWhiteSpace(_redisCacheOptions.InstanceName))
        {
            prefix = $"{_redisCacheOptions.InstanceName}{prefix}";
        }

        var database = await _redisDatabaseProvider.GetDatabase();
        var result =
            await database.ScriptEvaluateAsync(
                RemoveByPrefixScript, new { prefix = prefix }, CommandFlags.DemandMaster);

        if (result.Resp3Type == ResultType.Array)
        {
            var array = (RedisValue[])result!;
            return ((long)array[0], (long)array[1]);
        }

        return ((long)result, 0);
    }
}

public class RedisDatabaseProvider
{
    private readonly Lazy<Task<IDatabaseAsync>> _database;

    public RedisDatabaseProvider(ConnectionMultiplexerProvider connectionMultiplexerProvider)
    {
        _database = new Lazy<Task<IDatabaseAsync>>(async () =>
        {
            var connectionMultiplexer = await connectionMultiplexerProvider.GetConnectionMultiplexer();
            return connectionMultiplexer.GetDatabase();
        });
    }

    public async Task<IDatabaseAsync> GetDatabase()
    {
        return await _database.Value;
    }
}

As you can see, I'm inheriting from RedisCache, which is the stock Redis implementation of IDistributedCache, and I'm also implementing the new IMyDistributedCache interface.

The key part here is the Lua script. Redis supports running Lua scripts whenever you need to do something that isn't supported out of the box.

How it works:

  • SCAN loop - Iterates through the keyspace with SCAN using COUNT 500 (a batch-size hint, not a hard limit); SCAN is non-blocking, unlike KEYS
  • Pattern matching - Finds keys matching prefix*
  • Batch deletion - Deletes found keys with DEL
  • Continues until cursor returns to '0' (full scan complete)

It returns the total keys deleted (and found).
Why SCAN instead of KEYS? — KEYS blocks Redis on large datasets; SCAN is incremental and production-safe.

One thing to note is that the operation is not atomic. That's not a problem for us, since we only have around 50 keys matching a given prefix and they are always created at the same time.

New keys matching the prefix could, in theory, be added mid-execution. If you want the operation to be atomic you would have to use KEYS instead of SCAN, but remember that KEYS is a blocking operation and isn't recommended in production (unless you really know what you're doing).
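
For reference, roughly the same scan-and-delete loop can be done from the client side without Lua, using StackExchange.Redis's IServer.KeysAsync, which iterates with SCAN under the hood. This is a sketch of my own rather than code from the project; the class name, batch size, and default-database assumption are all illustrative:

using System.Collections.Generic;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class RedisPrefixCleanup
{
    // Sketch only: scans each primary endpoint for keys matching the prefix and
    // deletes them in batches. Like the Lua version, this is not atomic.
    public static async Task<long> RemoveByPrefixAsync(
        IConnectionMultiplexer multiplexer, string prefix, int batchSize = 500)
    {
        var database = multiplexer.GetDatabase(); // assumes the default database
        var batch = new List<RedisKey>(batchSize);
        long deleted = 0;

        foreach (var endpoint in multiplexer.GetEndPoints())
        {
            var server = multiplexer.GetServer(endpoint);
            if (server.IsReplica)
            {
                continue; // skip replicas so the same keyspace isn't scanned twice
            }

            // KeysAsync pages through the keyspace with SCAN, so it doesn't block the server.
            await foreach (var key in server.KeysAsync(pattern: prefix + "*", pageSize: batchSize))
            {
                batch.Add(key);
                if (batch.Count == batchSize)
                {
                    deleted += await database.KeyDeleteAsync(batch.ToArray());
                    batch.Clear();
                }
            }
        }

        if (batch.Count > 0)
        {
            deleted += await database.KeyDeleteAsync(batch.ToArray());
        }

        return deleted;
    }
}

The Lua approach above keeps both the scan and the delete on the server, which avoids shipping every matching key name back to the client.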

The registration of IMyDistributedCache looks like this:

public static void AddCaching(this IServiceCollection services, IConfiguration configuration)
{
    services.AddSingleton<RedisDistributedCacheOptions>(_ =>
    {
        var redisDistributedCacheOptions = new RedisDistributedCacheOptions();
        configuration.GetSection("Redis").Bind(redisDistributedCacheOptions);
        return redisDistributedCacheOptions;
    });

    services.AddSingleton<ConfigurationOptions>(x =>
    {
        var redisDistributedCacheOptions = x.GetRequiredService<RedisDistributedCacheOptions>();
        var configurationOptions = new ConfigurationOptions
        {
            Password = redisDistributedCacheOptions.Password, AllowAdmin = true
        };

        if(redisDistributedCacheOptions.UseSentinel)
        {
            configurationOptions.ServiceName = redisDistributedCacheOptions.ServiceName;
        }

        foreach(var dnsEndPoint in redisDistributedCacheOptions.GetEndpoints())
        {
            configurationOptions.EndPoints.Add(dnsEndPoint.Host, dnsEndPoint.Port);
        }

        return configurationOptions;
    });
    services.AddSingleton<ConnectionMultiplexerProvider>();
    services.AddSingleton<RedisDatabaseProvider>();
    services.AddSingleton<RedisCacheOptions>(x =>
    {
        var connectionMultiplexerProvider = x.GetRequiredService<ConnectionMultiplexerProvider>();
        var redisDistributedCacheOptions = x.GetRequiredService<RedisDistributedCacheOptions>();
        var configurationOptions = x.GetRequiredService<ConfigurationOptions>();
        return new RedisCacheOptions
        {
            ConfigurationOptions = configurationOptions,
            ConnectionMultiplexerFactory = () => connectionMultiplexerProvider.GetConnectionMultiplexer(),
            InstanceName = redisDistributedCacheOptions.KeyPrefix
        };
    });
    services.AddSingleton<IMyDistributedCache>(x =>
    {
        var redisDatabaseProvider = x.GetRequiredService<RedisDatabaseProvider>();
        var redisCacheOptions = x.GetRequiredService<RedisCacheOptions>();
        return new LoggingMyDistributedCache(
            new RedisMyDistributedCache(redisDatabaseProvider, redisCacheOptions),
            x.GetRequiredService<ILogger<LoggingMyDistributedCache>>());
    });
}
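
A consumer then looks roughly like the following hypothetical handler (the class, method, and log message are illustrative, not from the actual project), clearing all cached checkout content when a company changes in the CMS:

using System.Threading.Tasks;
using Microsoft.Extensions.Logging;

public class CompanyChangedHandler
{
    private readonly IMyDistributedCache _cache;
    private readonly ILogger<CompanyChangedHandler> _logger;

    public CompanyChangedHandler(IMyDistributedCache cache, ILogger<CompanyChangedHandler> logger)
    {
        _cache = cache;
        _logger = logger;
    }

    public async Task HandleAsync(string companyId)
    {
        // Drops every cached item whose key starts with the company's checkout prefix.
        // The InstanceName (KeyPrefix) is prepended inside RemoveItemsByPrefix, so only
        // the logical prefix is passed here.
        var (deleted, found) = await _cache.RemoveItemsByPrefix($"content-{companyId}-checkout-");

        _logger.LogInformation(
            "Removed {Deleted} of {Found} cached checkout items for company {CompanyId}",
            deleted, found, companyId);
    }
}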
Read the whole story
alvinashcraft
59 minutes ago
Pennsylvania, USA