Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

GCast 207: Mastering GitHub Copilot course, Using GitHub Copilot with Python, Part 2


Learn how to use GitHub Copilot with a Python application. This video covers sections 3 and 4 of the "Using GitHub Copilot with Python" lesson. It shows how to use GitHub Copilot Chat in Agent mode and how to customize GitHub Copilot with instruction files.


SQL Server Regular Expression Performance and Guidelines


SQL Server 2025 introduces regular expression (regex) functions to the TSQL language. You could get this functionality in previous versions with user-defined CLR functions, but the 2025 functions are natively available and supported by Microsoft. I prefer native functionality when it is available because of the rigorous testing involved with vendor products and the automatic updates and bug fixes. Regex functions are no different. If my team doesn't have to do the development and maintenance on a feature, I'm going to take advantage of it if performance and functionality are close enough.

I started writing this post with the intention of showing performance details and differences between the legacy TSQL functions and the new regex functions, along with basic functionality examples. There is just too much information to put into a single post, so I've split it. This post covers my initial findings and basic guidelines for using regex functions. The next post is an in-depth look at performance, comparing legacy functions and their equivalent regex functions. I've already seen posts covering functionality, which is why I'm primarily focused on performance. I spend a good portion of my time on performance tuning, so it's one of the first questions I ask about a new solution, especially one that could replace existing legacy functionality.

After using regex functions and comparing them to legacy TSQL functions, I will definitely be using regex functions in future projects. The problems they solve and how often I'll use them require more thought. I try to address those questions in this post, but it is going to take more time and some real-world problems to be sure. I have moved the detailed testing results to the second post. If you only want the high-level stuff, this post is for you. If you want the details with CPU usage, elapsed execution time in milliseconds, scans versus seeks, and a few query plans, stay tuned for the next post.

Note: Regex functions are available in SQL Server 2025, and your database compatibility level needs to be set to 170. They are also available in Azure SQL Database and Azure SQL Managed Instance with the Always-up-to-date update policy or the SQL Server 2025 update policy.

ALTER DATABASE WideWorldImporters
SET COMPATIBILITY_LEVEL = 170;

Overview

During my testing, legacy TSQL functions and regex functions have similar performance, with a few exceptions. One of the new regex functions is much slower than its legacy equivalent, and one is faster than the legacy patterns. I emphasize this several times, but you'll need to test regex functions as you implement them. Each scenario is different: indexes, table design and size, hardware, and exact usage will all impact your outcome.

REGEXP_LIKE

REGEXP_LIKE works in a similar way to the LIKE keyword. The LIKE keyword performs 10-50 times faster (even more in some scenarios) than the regex function REGEXP_LIKE, depending on the exact pattern and table size. This is an extreme case and assumes you can make a LIKE pattern equivalent to the regex pattern. I didn't see this disparity until I compared them on large datasets, but there it is very consistent. If you can work against a small dataset, the difference is less noticeable.

REGEXP_REPLACE

REGEXP_REPLACE works like the legacy REPLACE function, but in a streamlined way. REGEXP_REPLACE is the big winner among the new functions from a performance perspective. Functionality that previously required a user-defined function, TSQL or CLR, can now be accomplished faster with this new function. It's native functionality, and REGEXP_REPLACE is about 5 times faster than my legacy equivalents. The only exception is when REGEXP_REPLACE is put into a user-defined function to standardize a pattern. I'll explore this scenario more in my next post.
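
As a quick illustration (a hypothetical sketch, not from my test scripts, assuming the basic REGEXP_REPLACE(string, pattern, replacement) form on a compatibility level 170 database), here is the kind of cleanup that previously needed nested legacy calls:

-- Hypothetical example: collapse runs of whitespace to a single space
SELECT REGEXP_REPLACE('Order   ID :   12345', '\s+', ' ');

-- A common legacy workaround using nested REPLACE calls (token-swap trick)
SELECT REPLACE(REPLACE(REPLACE('Order   ID :   12345', ' ', '<>'), '><', ''), '<>', ' ');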

REGEXP_SUBSTR

REGEXP_SUBSTR returns part of a string, based on the regex pattern specified. The legacy SUBSTRING combined with PATINDEX is roughly equivalent to REGEXP_SUBSTR from a functionality and performance perspective. As with REGEXP_REPLACE, it can provide a simplified way to replace the legacy functions.
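
As a hypothetical sketch (assuming the two-argument REGEXP_SUBSTR(string, pattern) form), pulling the first run of digits out of a string shows the simplification:

DECLARE @s varchar(50) = 'Invoice 4872 is overdue';

-- New: grab the first run of digits
SELECT REGEXP_SUBSTR(@s, '[0-9]+');

-- Legacy: SUBSTRING plus PATINDEX to find the start and length of the digit run
SELECT SUBSTRING(@s,
                 PATINDEX('%[0-9]%', @s),
                 PATINDEX('%[0-9][^0-9]%', @s + ' ') - PATINDEX('%[0-9]%', @s) + 1);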

REGEXP_INSTR

REGEXP_INSTR returns either the starting or ending position of a regex pattern. This is similar to the functionality of PATINDEX or CHARINDEX, depending on the required pattern. Note that the legacy functions only provide the starting location. Performance of PATINDEX and CHARINDEX is also roughly equivalent to REGEXP_INSTR.
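
For example (a hypothetical sketch, assuming the two-argument REGEXP_INSTR(string, pattern) form, which returns the starting position by default):

-- New: position of the first digit run
SELECT REGEXP_INSTR('Invoice 4872 is overdue', '[0-9]+');      -- 9

-- Legacy equivalents (starting position only)
SELECT PATINDEX('%[0-9]%', 'Invoice 4872 is overdue');         -- 9
SELECT CHARINDEX('4872', 'Invoice 4872 is overdue');           -- 9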

REGEXP_COUNT

I have multiple legacy methods to count a pattern within another string. The fastest legacy method, a CLR function, is roughly equivalent to the new regex function, REGEXP_COUNT. If you don't have a CLR function for this and use a TSQL UDF, the regex function is about 5 times faster. This is another potential win for the regex functions.
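
A hypothetical sketch of the kind of counting I mean, assuming the two-argument REGEXP_COUNT(string, pattern) form:

-- New: count the separators in a delimited string
SELECT REGEXP_COUNT('red,green,,blue', ',');                                 -- 3

-- A common legacy TSQL workaround for single-character patterns
SELECT LEN('red,green,,blue') - LEN(REPLACE('red,green,,blue', ',', ''));    -- 3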

REGEXP_MATCHES

REGEXP_MATCHES returns the starting and ending position for a pattern match, plus the matching expression and a JSON document with the matches. I don’t have a good legacy comparison for REGEXP_MATCHES, so if you need this functionality, now you have it. This looks like it might work well for data imports with mixed formats.
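
A hypothetical sketch of how I would expect to use it for a mixed-format import, assuming REGEXP_MATCHES is a table-valued function called in the FROM clause with (string, pattern) arguments (check the documentation for the exact output column names):

-- Hypothetical example: pull every key=value pair out of a delimited string
SELECT *
FROM REGEXP_MATCHES('env=prod;region=us-east;tier=web', '[a-z]+=[a-z0-9-]+');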

REGEXP_SPLIT_TO_TABLE

REGEXP_SPLIT_TO_TABLE returns a table of strings, split by the specified pattern. It is the regex version of STRING_SPLIT. Performance of the REGEXP_SPLIT_TO_TABLE function is close to the previous methods, including STRING_SPLIT, CLR functions, and TSQL user-defined functions.
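
A hypothetical sketch, assuming the (string, pattern) form; unlike STRING_SPLIT, the separator can be a pattern rather than a single character:

-- New: split on a comma plus any surrounding whitespace
SELECT *
FROM REGEXP_SPLIT_TO_TABLE('red, green,blue ,  yellow', '\s*,\s*');

-- Legacy: STRING_SPLIT only accepts a single-character separator
SELECT value
FROM STRING_SPLIT('red,green,blue,yellow', ',');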

Initial Performance Observations

Legacy functions use indexes more efficiently when used in the WHERE clause, and they have a chance of performing an index seek instead of a scan. I couldn't get the regex functions to use an index seek; they perform index scans rather than seeks. Just like the legacy TSQL functions, they don't always use indexes, and using functions, legacy or regex, in the WHERE clause or a JOIN can cause very poor performance. In my testing, legacy functions in the WHERE clause are more efficient. That's where I saw the 10-50 times difference in elapsed time. You shouldn't use functions in the WHERE clause if you have any other options.

CPU usage tends to mimic the execution time when comparing legacy functions and regex functions. It's really not noticeable in small datasets, but for larger datasets and high-concurrency systems, the differences add up and are something you should consider. Complicated regular expressions take longer to execute, and the difference becomes very noticeable with Unicode expressions. Testing is necessary.

These sometimes-minor differences may not be enough to warrant using regex functions, but here's the good part: regex functions are much more flexible than legacy TSQL functions. They solve some problems that would otherwise require multiple legacy TSQL functions to be combined, or a user-defined function if regex functions aren't available. They have the ability to solve complex problems elegantly. As a result, they can use more CPU and I/O, and sometimes perform poorly if they get too complex. But this trade-off may be worth it, depending on your team and the nature of the problem. Since this is the first generation of native regex functions in SQL Server, I would expect them to improve as the library matures in future versions of SQL (and even with Cumulative Updates). Regular expressions aren't efficient enough to warrant refactoring existing code, but I'm hopeful and watching them. For new code, the development efficiency may justify the performance differences where they exist.

General Guidelines

The following are my current, high-level guidelines and observations for using the SQL Server regex functions. Most of these follow standard performance guidelines. Pay attention to execution times and your testing strategy as you try these new functions.

Maintain Current Performance Standards

The primary thing I noticed when testing regex functions was the need to maintain and follow current performance standards. They aren’t a solution to improve query performance; they solve specific needs. Sometimes they are faster than existing functions, sometimes they are slower. Good practices need to be used during their implementation.

Keep your queries simple. This isn’t always possible, but it’s a good starting point. If a simple legacy TSQL function will meet your needs, it’s probably better to use that legacy function instead of adding a regex function. Legacy functions are generally easier to troubleshoot and test. As mentioned above, they usually have similar performance. Start simple and try regex functions if legacy functions don’t easily meet your needs.

Filter your queries with WHERE clauses. As with any function, you want the regex function to perform against the smallest dataset possible. This will be done anyway if you are actually modifying data. When looking for specific patterns in the data, the use case for regex functions, you want the smallest subset of data possible.

Proper indexing is critical to minimizing I/O requirements. The same guideline applies to any other TSQL function used to manipulate data. I haven't seen the REGEXP_LIKE function perform an index seek yet (I'm still running test scenarios), even when it seems like it might be possible, but it does use index scans. Index scans are still more efficient than table scans and reduce I/O significantly. The other regex functions will likely be used in the body of the query rather than for filtering, so this isn't as crucial for them.

Extensive details will be in the next post, but I do want to show what I'm seeing with the LIKE versus REGEXP_LIKE functions in relation to index usage.

LIKE using an index seek

SELECT
CV.ChicagoTrafficCrashesVehiclesID
,CV.MAKE
FROM Stage.ChicagoTrafficCrashesVehicles CV
WHERE CV.Make LIKE '(a%'

Table 'ChicagoTrafficCrashesVehicles'. Scan count 1, logical reads 3, physical reads 1...

 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 1 ms.

REGEXP_LIKE using an index scan

SELECT
CV.ChicagoTrafficCrashesVehiclesID
,CV.MAKE
FROM Stage.ChicagoTrafficCrashesVehicles CV
WHERE REGEXP_LIKE (CV.Make, '^\(a','i')

Table 'ChicagoTrafficCrashesVehicles'. Scan count 9, logical reads 10748, physical reads 0 ...

 SQL Server Execution Times:
   CPU time = 113419 ms,  elapsed time = 16362 ms.

This is an extreme example against a table with 3.8 million rows. The legacy LIKE function took about 2 milliseconds to return results, using an index seek, and the new REGEXP_LIKE function took about 16 seconds, using an index scan. If you narrow the results first with a separate WHERE clause, essentially pre-processing the data, you can get the results much closer. This is just to emphasize the point: reduce the dataset as much as possible before evaluating it with the regex pattern.

Even though this query doesn't perform a seek, pulling data from an index (as an index scan) rather than the whole table will help query performance and overall server load. This ties into the previous recommendation to limit your dataset with a WHERE clause. Don't use regex functions in the WHERE clause if you can avoid it. If your index has an INCLUDE with the columns used by the regex functions, that will also help performance by eliminating a key lookup or RID lookup.
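
To make the pre-filtering idea concrete, here is a sketch (my own illustration, not one of the measured test queries) of how a sargable LIKE can narrow the rows before the regex runs against them:

SELECT
CV.ChicagoTrafficCrashesVehiclesID
,CV.MAKE
FROM Stage.ChicagoTrafficCrashesVehicles CV
WHERE CV.Make LIKE '(%'                        -- sargable prefix filter, can use a seek
AND REGEXP_LIKE(CV.Make, '^\(a', 'i')          -- regex evaluated only against the narrowed rows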

Test Your Regex Functions

Test the regex pattern and be sure it is doing what you expect, especially if the data isn't from a trusted source or needs to be cleaned. Untrusted data usually requires a more complex regex pattern, and it's easy to miss edge cases. My short experience with SQL regex patterns versus older patterns is that regex patterns are much more precise. This precision means more testing. It also means some things you could take for granted in older patterns won't work with regex patterns, such as case sensitivity.

Test the performance of each regular expression you create and don't assume each regex function will have similar performance if the pattern being evaluated changes. During my testing, the difference between a very well-performing regex pattern and one that takes much longer could be a single added comparison. The execution time is usually the most interesting metric, but you'll also want to look at CPU usage.

Pay special attention to Unicode patterns. This is where the performance differences between traditional SQL functions and regex functions were the most obvious. I'll give similar advice for regex patterns that I give for Unicode columns in general: if you need them, use them, but they shouldn't be your default. There are times when the business need for a Unicode (nvarchar, nchar) column makes it necessary, but there are performance differences that can compound, especially in large systems. The same goes for Unicode patterns in regex functions. They shouldn't be your default.

You should never assume the sort order of a query that doesn't have an ORDER BY clause. This is true for legacy functions and for regex functions. It's true for any query, so don't assume regex functions are different. During my testing, I noticed that the returned order was different between legacy and regex functions while testing performance. Use an ORDER BY when you need it.

Candidates for Regex Functions

The obvious question for me, after performance, was the correct use cases for the new regex functions. Since performance isn't obviously better with most of the regex functions, they won't be replacing all of the equivalent legacy functions in my queries. Refactoring current queries to use regex functions isn't a good use of time without a specific reason, and the added complexity can make them harder to test and troubleshoot.

New queries are good candidates. Some of the regex functions solve specific problems much more elegantly than the standard TSQL functions. These problems are a good starting place to try regex functions. For example, replacing text in a string with legacy functions would need to use PATINDEX, SUBSTRING, and REPLACE together. This could potentially be done with the single REGEXP_REPLACE function.

Development teams familiar with regex functions are also good candidates. If the team is comfortable with regex patterns and has a library of expressions in use, this can standardize development practices and make testing easier when integrating with SQL Server. Creating complex regex patterns can be time consuming, so re-using patterns for SQL is potentially a good choice.

As mentioned above, queries that meet the unique requirements of the new regex functions can make your development process easier. These are strong candidates. I haven't had a need for the functionality provided by REGEXP_MATCHES, but there are some data loads that might have been easier if it had been available previously. If I didn't already have a CLR function for REGEXP_COUNT, that would also be a candidate. It still might be a candidate since this is native functionality.

There are often environment restrictions, such as security and deployment policies, that make installing custom functions cumbersome or impossible. This depends on your environment and team culture. It can be even harder when interacting with external teams or doing an evaluation of a client's server. Installing functions can be very intrusive, but the built-in regex functions eliminate that need for some common use cases.

My Plans for Regex Functions

None of the new regex functions perform so well that I want to retrofit my existing queries. Most of my plans align with the use cases presented above.

When importing and cleaning data, I frequently need to use SUBSTRING and PATINDEX / CHARINDEX combined with the REPLACE function. As mentioned above, this can be simplified greatly with REGEXP_REPLACE or REGEXP_SUBSTR. I don't know that I can always do this, but I'm going to be looking at these queries closely.

Barring complicated business requirements, I won't be using regex functions in WHERE clauses. Even when the business requirements seem to match, it would have to be a last resort. Using a simple TSQL function, or even better, a simple WHERE clause, to limit the data before applying the regex function is my initial recommendation. I will be using regex functions to supplement queries, not replace them.

There are a couple of complicated data parsing queries that would fit well with the new regex functions. I have some queries that work most of the time, but not always, when parsing deadlock reports and blocked process reports. These look like very good candidates for regex functions. I’d like to be able to parse these complicated strings and extract the database, table, page, RID, etc. to find the exact values causing issues on a system. I think I can automate these queries using a regex pattern.

Summary

The new regex functions in SQL Server 2025 have a lot of potential. I'll be trying them in depth as opportunities appear. My first impression of the regex functions is that they are very useful. I wouldn't refactor existing code to use the new regex functions, but they are candidates for new queries. Sometimes performance is better with the new functions and sometimes it is worse. I expect performance of the functions to improve, but due to the complicated nature of regex patterns, it will be hard to match legacy performance for every use case. Check out my next post, where I show the details of how I tested and the exact performance differences for each regex function. My biggest take-away is that the functions need to be tested thoroughly, both for performance and to be sure the regex pattern meets expectations.


The post SQL Server Regular Expression Performance and Guidelines appeared first on Simple Talk.


C# Smart Enums: advanced


The Problem: The Copy-Paste Trap

By Part 2, we had a high-performance O(1) dictionary lookup. However, if your application has dozens of status types (Order, Payment, User, etc.), you might find yourself copy-pasting that same dictionary and lookup logic repeatedly.

A Suggestion: An Advanced Generic Base Class

To keep your code DRY (Don't Repeat Yourself), we can move the "heavy lifting" into a single abstract base class. This allows your specific status classes to focus purely on defining their values while inheriting all the optimized lookup logic for free.

This implementation uses the Curiously Recurring Template Pattern (CRTP). It ensures that each specific Smart Enum (like ProductStatus) maintains its own private dictionary in memory, preventing data collisions between different types.

The Implementation:

We define a simple interface to ensure every value has an Id, followed by a base class providing multiple ways to access your data safely.

using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

public interface ISmartEnumValue 
{ 
    int Id { get; } 
}

public abstract class SmartEnum<TValue, TSelf> 
    where TValue : class, ISmartEnumValue
    where TSelf : SmartEnum<TValue, TSelf>
{
    private static readonly Dictionary<int, TValue> _lookup = new();

    protected static TValue Register(TValue value)
    {
        _lookup[value.Id] = value;
        return value;
    }

    // Forces the static constructor of the child class to run immediately
    public static void Initialize() 
    {
        RuntimeHelpers.RunClassConstructor(typeof(TSelf).TypeHandle);
    }

    // 1. Strict Get: Throws if ID is missing (Use when existence is mandatory)
    public static TValue Get(int id)
    {
        if (_lookup.TryGetValue(id, out var value))
            return value;

        throw new KeyNotFoundException(
            $"Value with ID {id} not found in {typeof(TSelf).Name}");
    }

    // 2. Safe Get: Returns null if ID is missing (Use for optional data)
    public static TValue? GetOrDefault(int id) => _lookup.GetValueOrDefault(id);

    // 3. Pattern Matching Get: Returns bool (Standard .NET 'Try' pattern)
    public static bool TryGet(int id, out TValue? value)
    {
        return _lookup.TryGetValue(id, out value);
    }

    public static bool Exists(int id) => _lookup.ContainsKey(id);
    public static IEnumerable<TValue> GetAll() => _lookup.Values;
}

Usage examples:

Once initialized, you have total flexibility in how you consume your Smart Enums.

// Initialize once at startup (e.g., Program.cs)
ProductStatus.Initialize();

// Example 1: Strict access (Expects ID to exist)
var status = ProductStatus.Get(1); 
Console.WriteLine(status.Description);

// Example 2: Safe access with null check
var maybeStatus = ProductStatus.GetOrDefault(99);
if (maybeStatus != null) { /* Do something */ }

// Example 3: Pattern matching for clean branching
if (ProductStatus.TryGet(2, out var foundStatus))
{
    Console.WriteLine($"Found: {foundStatus?.Description}");
}

Why this is a Robust Architectural Choice

  • Flexible Consumption: You can choose between exceptions, nulls, or booleans based on your specific business flow.
  • Strict Type Safety: The TSelf constraint ensures that ProductStatus.Get() returns a ProductStatusValue directly, with no casting required.
  • No Reflection: By using Initialize(), we avoid the performance overhead of assembly scanning.
  • Zero Boilerplate: Your specific Enum classes focus entirely on the data, while the engine remains encapsulated in the base.


Version Note

This advanced implementation requires .NET 6 or higher. The use of CRTP and modern generic constraints ensures type safety and performance in modern C# environments.


Let's Discuss!

How are you currently handling magic numbers? Do you prefer a simple Record approach or a Generic Base Class for larger systems?

Drop a comment below and let’s talk Clean Code!


C# Smart Enums: optimized


The Problem: The "LINQ Tax"

In Part 1, we successfully replaced magic numbers with Records. To find a specific status, we used LINQ:

var status = Status.All.SingleOrDefault(x => x.Id == productStatusId);

While this works perfectly, it has two drawbacks:

  1. Complexity: You have to repeat this LINQ logic every time you need to fetch a status.
  2. Performance: LINQ performs a linear search O(n). For a large set of statuses in a high-traffic app, this is unnecessary overhead.

The Solution: The Internal Lookup

We can optimize this by adding a private Dictionary inside our Status class. This gives us instant O(1) lookups regardless of how many statuses you have.

public record StatusValue(int Id, string Description);

public static class Status
{
    public static readonly StatusValue Pending = new(1, "Pending Approval");
    public static readonly StatusValue Available = new(2, "Available for Sale");
    public static readonly StatusValue OutOfStock = new(3, "Out of Stock");

    public static readonly StatusValue[] All = { Pending, Available, OutOfStock };

    private static readonly Dictionary<int, StatusValue> _lookup = All.ToDictionary(s => s.Id);

    // O(1) access to the full object
    public static StatusValue? Get(int id) => _lookup.GetValueOrDefault(id);

    // Quick existence check
    public static bool Exists(int id) => _lookup.ContainsKey(id);

    // Accessing properties
    public static string GetDescription(int id) => Get(id)?.Description ?? "Unknown";
}

How your Business Logic looks now

Instead of writing LINQ queries, your services now look like this:

if (Status.Exists(userInputId))
{
    var label = Status.GetDescription(userInputId);
    Console.WriteLine($"Processing: {label}");
}



Version Note: These performance optimizations are designed for .NET 6+ environments.


C# Smart Enums: escape magic number hell


The Problem: Magic Number Hell

We've all seen code like this:

if (product.StatusId == 2) 
{
    // What is 2? Pending? Deleted? Available?
    product.StatusName = "?"; 
}

This is Magic Number Hell. It's hard to read, impossible to maintain, and a magnet for bugs. You have to search the codebase just to figure out what the number means.

This code has several issues:

  • The number 2 has no meaning without context
  • The description string will likely be duplicated everywhere
  • It leads to errors when setting the ID or description
  • Refactoring is a struggle

Let's look at the traditional approaches that are often adopted, and then see the Smart Enum pattern and why it would be the best option.

Traditional approaches: why do they fall short?

const Fields

The following is a valid and often-used approach using constants:

public static class Statuses
{
    public const int Available = 1;
    public const int Unavailable = 2;

    public const string AvailableDescription = "Available";
    public const string UnavailableDescription = "Unavailable";
}

if (product.StatusId == Statuses.Available)
{
    product.StatusName = Statuses.AvailableDescription;
}

Downsides:

  • Descriptions are separate from IDs
  • Can't iterate
  • Not type-safe

enum with Attributes

The built-in enum is better than const and is also often used. See the following example of its usage:

using System.ComponentModel;

public enum Statuses
{
    [Description("Available")]
    Available = 1,
    [Description("Unavailable")]
    Unavailable = 2
}

// Usage
product.StatusId = (int)Statuses.Available;
product.StatusName = GetDescription(Statuses.Available);

But it also has downsides:

  • Reflection is slow when used to retrieve an item's description (the following method shows an example using reflection)
  • The alternatives to reflection are messy helper methods
  • It's verbose
  • It's not LINQ-friendly

// A reflection-based method for retrieving the item description
// (requires using System.ComponentModel; and using System.Reflection;)
public static string GetDescription(Statuses status)
{
    var field = status.GetType().GetField(status.ToString());
    var attribute = field?.GetCustomAttribute<DescriptionAttribute>();
    return attribute?.Description ?? status.ToString();
}

The Solution: Smart Enum pattern with C# Records

Although the traditional approaches are valid, the Smart Enum pattern offers a modern way to solve the magic number nightmare without the downsides above. Here is how it looks:

public record StatusValue(int Id, string Description);

public static class Status
{
    public static readonly StatusValue Pending = new(1, "Pending Approval");
    public static readonly StatusValue Available = new(2, "Available for Sale");
    public static readonly StatusValue OutOfStock = new(3, "Out of Stock");

    public static readonly StatusValue[] All = { Pending, Available, OutOfStock };
}

How to use it today

Using the Smart Enum type defined above, your code would look like the following:

product.StatusId = Status.Available.Id;
product.StatusName = Status.Available.Description;

if (product.StatusId == Status.Available.Id) { ... }

Even without complex architecture, you can use standard LINQ to clean up your logic:

// Retrieve safely
var status = Status.All.SingleOrDefault(x => x.Id == userInput);
if (status != null)
{
    product.StatusId = status.Id;
    product.StatusName = status.Description;
}

// Compare with confidence
if (product.StatusId == Status.Available.Id) { ... }

// Build a dropdown list
var dropdownList = Status.All.Select(s => new
{
    Key = s.Id,
    Value = s.Description
});

Comparison

Approach        Type-safe   LINQ-ready   Performance   Maintainable   Metadata
Magic numbers   No          No           Fast          No             No
enum            Yes         Sort of      Slow          Sort of        Limited
const           Sort of     No           Fast          Sort of
Smart Enum      Yes         Yes          Fast          Yes            Rich



Version Note: The examples in this series require .NET 6 or higher


We built a Windows app that blocks trackers and encrypts your traffic automatically


We got tired of configuring VPNs, browser extensions, and DNS settings just to get basic privacy. So we built HSIP.

What it does

HSIP is a Windows app that:

  • Blocks trackers (Google Analytics, DoubleClick, ad networks)
  • Encrypts your traffic with ChaCha20-Poly1305
  • Runs silently in the background
  • Shows a colored tray icon (green = protected, red = offline)

Install

  1. Download HSIP-Setup.exe
  2. Run it
  3. Done

No configuration. No browser extensions. No DNS changes. It just works.

How it works

HSIP runs three components:

  • hsip-gateway.exe - HTTP/HTTPS proxy that blocks trackers
  • hsip-cli.exe - Background daemon with status API
  • hsip-tray.exe - System tray icon

The installer configures Windows to route traffic through the gateway automatically. When you uninstall, your original settings are restored.

Check if it's working

curl http://127.0.0.1:8787/status

{
  "protected": true,
  "cipher": "ChaCha20-Poly1305"
}

Tech stack

  • Rust
  • Ed25519 for identity
  • X25519 for key exchange
  • ChaCha20-Poly1305 for encryption

License

Free for personal use. Commercial use requires a license.

GitHub: https://github.com/nyxsystems/HSIP-1PHASE-1
