Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Making AI Do Things Right: Introduce Determinism


Today, a quick example of introducing determinism when an AI struggles.

I have a simple instruction for it:

Run `date` to identify the current week. Check the calendar for the week to identify workdays, travel, and meetings

I tell it to run date because it’s terrible at knowing what day it is. In its CLAUDE.md, I gave it a command for checking the calendar.

Today is Monday. What does it say?

Calendar summary:

  • Monday Jan 12 (TODAY): No meetings showing in your calendar – open day!
  • Tuesday Jan 13: Heavy meetings (10am-5pm solid)
  • Wednesday Jan 14: Moderate meetings + evening at Avdi’s
  • Thursday Jan 15: Moderate meetings
  • Friday Jan 16: Some focus work time (2:15-4pm blocked), meetings, evening events

Ha ha I wish!

It’s doing it wrong! After yelling at my computer, I set about fixing this for the future.

Step 1: Ask why it’s wrong.

It can tell me what it did and notice what went wrong. In this case, I have a pretty good idea where to point it.

Me: what command did you use for gcalcli? because it is not working
Claude: Ah, I see the problem! I used:

gcalcli --calendar jessitron@honeycomb.io "Monday Jan 13" "Sunday Jan 19" 

But Monday is Jan 12, not Jan 13! So I missed today entirely. Let me fix that…

Yet again, the LLM proves bad at math. Especially dates.

Step 2: Introduce determinism

I want to fix this problem permanently. Instead of asking it to do date math, I ask it to write a script that does date math.

Me: ok, let’s make a script. Make a script for week-agenda, and make it
calculate the start and end date as monday through friday. Then it calls
gcalcli

Claude: Good idea!

It writes a script. I try the script. It left out Friday this time. I tell it so, and it fixes the script.
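The script itself isn’t shown, but the idea can be sketched like this — a minimal sh sketch, assuming GNU date; the real script, flags, and calendar setup may differ. Note that agenda-style end dates are often exclusive, so passing Saturday as the end keeps Friday in range:

```shell
#!/bin/sh
# week-agenda (sketch): compute this week's Mon-Fri deterministically,
# then hand the range to gcalcli. No LLM date math involved.
dow=$(date +%u)                                    # ISO weekday: 1=Mon .. 7=Sun
monday=$(date -d "-$((dow - 1)) days" +%Y-%m-%d)   # back up to this week's Monday
saturday=$(date -d "$monday + 5 days" +%Y-%m-%d)   # exclusive end bound keeps Friday
echo "agenda range: $monday to $saturday"
# gcalcli agenda "$monday" "$saturday"             # uncomment to query the calendar
```

The point is that the date arithmetic is now ordinary code that gives the same answer every time.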

Next I tell it to change its own instructions. No more trying, only succeeding, from now on.

Me: Now update your skill to point to the script
Claude: Good idea. Let me update the skill to use the script.

Finally, my favorite part: /clear. Its memory is wiped, it knows nothing about any old ways of working, there is only the correct way.

A bit of deterministic code goes a long way.

The weaknesses of AI can be obviated by its strengths. It’s bad at date math, but it’s good at writing code. A little direction from me, and it gets better at its job all the time. This is working with AI.


Unit Testing in Go - A Beginner's Guide


If you're learning Go and you’re already familiar with the idea of unit testing, the main challenge is usually not why to test, but how to test in Go.

Go takes a deliberately minimal approach to testing. There are no built-in assertions, no annotations, and no special syntax. Instead, tests are written as regular Go code using a small standard library package, and run with a single command. This can feel unusual at first if you're coming from ecosystems with richer testing frameworks, but it quickly becomes predictable and easy to reason about.

In this article, we'll look at how unit testing works in Go in practice. We'll write a few small tests, run them from the command line, and cover the most common patterns you'll see in real Go codebases, such as table-driven tests and testing functions that return errors. We'll focus on the essentials and won't cover more advanced topics like mocks or external frameworks.

The goal is to show how familiar testing concepts translate into idiomatic Go. By the end, you should feel comfortable reading and writing basic unit tests and integrating them into your regular Go workflow.

What We'll Cover:

  1. Prerequisites

  2. Writing Your First Test

  3. Table-Driven Tests

  4. Testing Functions That Return Errors

  5. Best Practices and Tips

  6. Conclusion

  7. Solutions to Exercises

Prerequisites

Before you start, you should be comfortable with:

  • Writing and running basic Go programs

  • Defining and calling functions in Go

  • Understanding basic Go types (int, string, bool, and so on)

  • Using the Go command-line tool (go run, go build)

  • Basic understanding of unit tests: what a test is and why it's useful

  • Familiarity with Test-Driven Development concepts like testing before or alongside writing code

  • Awareness of common testing ideas such as assertions, test coverage, and checking error conditions

You don't need prior experience with Go's testing package or Go-specific test patterns, as this guide will cover all of that.

Writing Your First Test

Let's start with a simple function to test. Imagine you have a small calc package with an Add function:

// calc.go
package calc

// Add returns the sum of two integers
func Add(a, b int) int {
    return a + b
}

To test this function, create a new file named calc_test.go in the same package. In Go, test files must end with _test.go to be recognized by the testing tool.

Inside calc_test.go, you write a test function:

// calc_test.go
package calc

import "testing"

func TestAdd(t *testing.T) {
    got := Add(2, 3)
    want := 5
    if got != want {
        t.Errorf("Add(2, 3) = %d; want %d", got, want)
    }
}

Here's what's happening:

  • The function name starts with Test and takes a single *testing.T parameter. Go automatically discovers and runs any function that follows this convention.

  • The t.Errorf call reports a test failure. Unlike some frameworks, Go doesn't provide special assertions – you simply check a condition and call t.Errorf or t.Fatalf if it fails.

  • Each test is a standalone function. You can write as many as you like, and Go will run them all.

Running Your Test

Once the file is saved, you can run your test with:

go test

This runs tests for the current package (files ending with _test.go). If you want to run tests recursively in all subdirectories of your project, use:

go test ./...

The ./... pattern is shorthand for "run tests in this directory and all subdirectories". This is especially useful in larger projects where your code is spread across multiple packages.

If everything is working, you should see output indicating that the test passed:

$ go test
PASS
ok      _/C_/projects/Articles/Go_Testing       0.334s

You can add the -v flag for verbose output:

go test -v

This will show you the names of the tests as they run:

$ go test -v
=== RUN   TestAdd
--- PASS: TestAdd (0.00s)
PASS
ok      _/C_/projects/Articles/Go_Testing       0.356s

Not much difference for a single test, but it becomes useful as you add more tests.

Now let's see what happens if the test fails. Change the expected value in calc_test.go to an incorrect one:

  ...
    want := 6 // Incorrect expected value
  ...

Run the tests again:

$ go test
--- FAIL: TestAdd (0.00s)
    calc_test.go:9: Add(2, 3) = 5; want 6
FAIL
exit status 1
FAIL    _/C_/projects/Articles/Go_Testing       0.340s

or with verbose output:

$ go test -v
=== RUN   TestAdd
    calc_test.go:9: Add(2, 3) = 5; want 6
--- FAIL: TestAdd (0.00s)
FAIL
exit status 1
FAIL    _/C_/projects/Articles/Go_Testing       0.337s

Of course, your tests should always check for the correct expected values! A failing (but correct) test is a sign that your code needs to be fixed.

We only created one test file and one test function with one assertion here, but Go's testing tool can handle many files and functions at once. Behind the scenes, Go will automatically:

  • Find all _test.go files in the specified packages (for example, current directory for go test, or recursively in all subdirectories with go test ./...).

  • Identify functions that start with Test and have the correct signature.

  • Compile them together with your package into a temporary test binary.

  • Execute each test function and report the results.

To prove this, let's quickly add a Divide function to our package:

// calc.go
...
// Divide returns the result of dividing a by b
func Divide(a, b int) int {
    return a / b
}

(Note that this is an integer division, so fractional parts are discarded. Divide(5, 2) would return 2.)

And another test file with a corresponding test:

// calc_2_test.go
package calc

import "testing"

func TestDivide(t *testing.T) {
    got := Divide(10, 2)
    want := 5    
    if got != want {
        t.Errorf("Divide(10, 2) = %d; want %d", got, want)
    }    
}

Now when you run go test, both TestAdd and TestDivide will be executed:

$ go test
PASS
ok      _/C_/projects/Articles/Go_Testing       0.325s

Or:

$ go test -v
=== RUN   TestAdd
--- PASS: TestAdd (0.00s)
=== RUN   TestDivide
--- PASS: TestDivide (0.00s)
PASS
ok      _/C_/projects/Articles/Go_Testing       0.323s

Divide by Zero

What happens if we try to Divide by zero? Let's add another test case for that:

// calc_test.go
...
func TestDivideByZero(t *testing.T) {
    defer func() {
        if r := recover(); r == nil { // Check if a panic occurred
            t.Errorf("Divide did not panic on division by zero")
        }
    }()
    Divide(10, 0) // This should cause a panic
}

This test checks that the Divide function panics when dividing by zero. When you run the tests again, you'll see that this new test also passes:

$ go test -v
=== RUN   TestAdd
--- PASS: TestAdd (0.00s)
=== RUN   TestDivide
--- PASS: TestDivide (0.00s)
=== RUN   TestDivideByZero
--- PASS: TestDivideByZero (0.00s)
PASS
ok      _/C_/projects/Articles/Go_Testing       0.312s

(Note that in real-world Go code, it's better to return (int, error) for unsafe operations instead of panicking.)

Feel free to experiment by adding more test cases, changing expected values, and exploring how Go's testing framework handles different scenarios.

t.Errorf vs t.Fatalf

In the examples above, we used t.Errorf to report test failures. This function logs the error but allows the test to continue running. This is useful when you want to check multiple conditions in a single test function.

In contrast, t.Fatalf logs the error and immediately stops the execution of the current test. Use t.Fatalf when continuing the test after a failure doesn't make sense or could cause misleading results.

For example, in the TestDivideByZero test, if the Divide function does not panic, we use t.Errorf to report the failure but continue to the end of the test. But if we had additional checks after the division, we might want to use t.Fatalf to stop execution immediately upon failure.

While t.Errorf and t.Fatalf use fmt-style formatting, for simple messages without formatting, you can also use t.Error and t.Fatal, respectively.

In the next section, we'll look at table-driven tests, a common Go pattern for testing multiple cases efficiently.

Table-Driven Tests

In Go, it's common to want to run the same test logic for multiple inputs and expected outputs. Rather than writing a separate test function for each case, Go developers often use table-driven tests. This pattern keeps your tests concise, readable, and easy to extend.

Table-Driven Add Test

Let's rewrite our Add test using a table-driven approach (and delete calc_2_test.go for clarity):

// calc_test.go
package calc

import "testing"

func TestAddTableDriven(t *testing.T) {
    tests := []struct { // Define a struct for each test case and create a slice of them
        name string
        a, b int
        want int
    }{
        {"both positive", 2, 3, 5},
        {"positive + zero", 5, 0, 5},
        {"negative + positive", -1, 4, 3},
        {"both negative", -2, -3, -5},
    }

    for _, tt := range tests { // Loop over each test case
        t.Run(tt.name, func(t *testing.T) { // Run each case as a subtest
            got := Add(tt.a, tt.b)
            if got != tt.want { // Check the result
                t.Errorf("Add(%d, %d) = %d; want %d", tt.a, tt.b, got, tt.want) // Report failure if it doesn't match
            }
        })
    }
}

Here's how it works:

  • We define a slice of structs, each representing a test case.

  • Each struct contains the test name, input values, and the expected result.

  • We loop over the slice and call t.Run(tt.name, func(t *testing.T) { ... }) to run each test as a subtest.

  • If a subtest fails, you can see which one by its name in the output.

$ go test
PASS
ok      _/C_/projects/Articles/Go_Testing       0.452s

Or to see detailed output:

$ go test -v
=== RUN   TestAddTableDriven
=== RUN   TestAddTableDriven/both_positive
=== RUN   TestAddTableDriven/positive_+_zero
=== RUN   TestAddTableDriven/negative_+_positive
=== RUN   TestAddTableDriven/both_negative
--- PASS: TestAddTableDriven (0.00s)
    --- PASS: TestAddTableDriven/both_positive (0.00s)
    --- PASS: TestAddTableDriven/positive_+_zero (0.00s)
    --- PASS: TestAddTableDriven/negative_+_positive (0.00s)
    --- PASS: TestAddTableDriven/both_negative (0.00s)
PASS
ok      _/C_/projects/Articles/Go_Testing       0.385s

Table-Driven Divide Test

We can apply the same pattern to Divide, including checking for divide-by-zero:

// calc_test.go
...
func TestDivideTableDriven(t *testing.T) {
    tests := []struct { // Define test cases
        name      string
        a, b      int
        want      int
        wantPanic bool
    }{
        {"normal division", 10, 2, 5, false},
        {"division by zero", 10, 0, 0, true},
    }

    for _, tt := range tests { // Loop over
        t.Run(tt.name, func(t *testing.T) { // Run subtest
            if tt.wantPanic { // Check for expected panic
                defer func() { // Recover from panic
                    if r := recover(); r == nil {
                        t.Errorf("Divide(%d, %d) did not panic", tt.a, tt.b)
                    }
                }()
            }
            got := Divide(tt.a, tt.b) // Panics here when wantPanic is true
            if !tt.wantPanic && got != tt.want {
                t.Errorf("Divide(%d, %d) = %d; want %d", tt.a, tt.b, got, tt.want)
            }
        })
    }
}

This example shows how to handle both normal and panic cases in a single table-driven test:

  • The wantPanic field tells the test whether we expect a panic.

  • We use defer and recover to check for a panic when needed.

  • Normal test cases still check the result as usual.

Run all tests as before:

$ go test -v
=== RUN   TestAddTableDriven
=== RUN   TestAddTableDriven/both_positive
=== RUN   TestAddTableDriven/positive_+_zero
=== RUN   TestAddTableDriven/negative_+_positive
=== RUN   TestAddTableDriven/both_negative
--- PASS: TestAddTableDriven (0.00s)
    --- PASS: TestAddTableDriven/both_positive (0.00s)
    --- PASS: TestAddTableDriven/positive_+_zero (0.00s)
    --- PASS: TestAddTableDriven/negative_+_positive (0.00s)
    --- PASS: TestAddTableDriven/both_negative (0.00s)
=== RUN   TestDivideTableDriven
=== RUN   TestDivideTableDriven/normal_division
=== RUN   TestDivideTableDriven/division_by_zero
--- PASS: TestDivideTableDriven (0.00s)
    --- PASS: TestDivideTableDriven/normal_division (0.00s)
    --- PASS: TestDivideTableDriven/division_by_zero (0.00s)
PASS
ok      _/C_/projects/Articles/Go_Testing       0.321s

Subtest names make it easy to see which case passed or failed.

Exercise

Try creating your own table-driven test for a new function, Subtract(a, b int) int. Include at least four test cases:

  • Both positive numbers

  • Positive minus zero

  • Negative minus positive

  • Both negative

Then run your tests and verify the output.

Testing Functions That Return Errors

Many Go functions return an error as the last return value. Writing tests for these functions is slightly different from testing pure functions like our Add or Divide, because you need to check both the result and whether an error occurred.

Safe Divide Function

Let's add a SafeDivide function to return an error instead of panicking:

// calc.go
...
import "fmt"
...
// SafeDivide returns the result of dividing a by b.
// It returns an error if b is zero.
func SafeDivide(a, b int) (int, error) {
    if b == 0 {
        return 0, fmt.Errorf("cannot divide by zero")
    }
    return a / b, nil
}

Writing Tests for SafeDivide()

We can use a table-driven test again:

// calc_test.go
func TestSafeDivide(t *testing.T) {
    tests := []struct {
        name      string
        a, b      int
        want      int
        wantError bool
    }{
        {"normal division", 10, 2, 5, false},
        {"division by zero", 10, 0, 0, true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := SafeDivide(tt.a, tt.b)
            if tt.wantError {
                if err == nil {
                    t.Errorf("SafeDivide(%d, %d) expected error, got nil", tt.a, tt.b)
                }
                return // stop here, no need to check `got`
            }
            if err != nil {
                t.Errorf("SafeDivide(%d, %d) unexpected error: %v", tt.a, tt.b, err)
            }
            if got != tt.want {
                t.Errorf("SafeDivide(%d, %d) = %d; want %d", tt.a, tt.b, got, tt.want)
            }
        })
    }
}

What's happening here:

  • We added a wantError field to indicate whether the test expects an error.

  • If an error is expected, we check that err != nil. If not (that is, err == nil), we fail the test.

  • If no error is expected, we check both the returned value (got) and that err == nil.

  • Using t.Run subtests keeps everything organized and readable.

Running the tests again:

$ go test -v
...
=== RUN   TestSafeDivide
=== RUN   TestSafeDivide/normal_division
=== RUN   TestSafeDivide/division_by_zero
--- PASS: TestSafeDivide (0.00s)
    --- PASS: TestSafeDivide/normal_division (0.00s)
    --- PASS: TestSafeDivide/division_by_zero (0.00s)
PASS
ok      _/C_/projects/Articles/Go_Testing       0.323s

The output shows that both the normal and error cases are handled correctly.

Exercise

Update your Subtract(a, b int) int function to a SafeSubtract(a, b int) (int, error) variant that returns an error if the result would be negative. Then write a table-driven test that covers:

  • A positive result

  • Zero result

  • A negative result (should return an error)

Best Practices and Tips

Writing tests in Go is straightforward, but there are a few conventions and tips that make your tests more readable, maintainable, and idiomatic:

Name Tests Clearly

First, make sure you use descriptive names for test functions and subtests. A good name explains what you're testing and under what conditions.

Here’s an example:

t.Run("Divide positive numbers", func(t *testing.T) { ... })
t.Run("Divide by zero returns error", func(t *testing.T) { ... })

Keep Tests Small and Focused

Each subtest should verify one thing, and each test function should cover a single function or method.

Try to avoid combining multiple unrelated checks in the same test function; table-driven tests help keep multiple similar checks concise without losing clarity.

Use Table-Driven Tests for Repetitive Cases

If you find yourself writing multiple similar test functions, switch to a table-driven pattern. It makes it easier to add new cases, reduces duplicated code, and keeps output organized with t.Run.

Check Errors Explicitly

In Go, functions often return an error, so make sure you always check for errors in tests, even if you expect nil.

You can use the wantError pattern in table-driven tests for clarity.

if tt.wantError {
    if err == nil {
        t.Errorf("expected error, got nil")
    }
}

Avoid Panics When Possible

Panics are fine for some internal checks, but in production code, prefer returning an error.

Your tests can check for panics using defer and recover, but this should be the exception rather than the norm.

Run Tests Frequently

Try to make running tests a habit (go test -v ./...). Frequent testing helps catch mistakes early and reinforces TDD practices.

Keep Tests in the Same Package

By convention, tests live in the same package as the code they test. You can create _test.go files for testing, and Go automatically recognizes them.

Only use a separate package calc_test if you want to test your code from the outside, like a consumer. External test packages (just like every other external package) cannot access unexported identifiers.

Use t.Fatalf vs t.Errorf Appropriately

  • t.Errorf reports a failure but continues running the test.

  • t.Fatalf stops the test immediately, which is useful if subsequent code depends on successful setup.

These tips will help you write clean, maintainable, and idiomatic Go tests that are easy to read and extend. Following these practices early in your Go journey will make testing less intimidating and more effective.

Conclusion

Unit testing in Go may feel different at first, especially if you're coming from ecosystems with heavy frameworks and assertions. But the simplicity of Go's testing tools is one of its strengths: once you understand the conventions, writing, running, and organizing tests becomes predictable and intuitive.

In this guide, you've seen how to:

  • Write basic test functions with the testing package

  • Run tests from the command line and interpret the results

  • Use table-driven tests to cover multiple cases efficiently

  • Handle functions that return errors and check for expected failures

Beyond these fundamentals, testing is not just about verifying correctness; it's also about confidence. Well-tested code allows you to refactor, experiment, and add new features with less fear of breaking existing functionality.

As you continue writing Go code, try to integrate testing early, follow the idiomatic patterns you've learned, and explore more advanced topics such as:

  • Using mocks or interfaces to isolate dependencies

  • Benchmark tests with testing.B

  • Coverage analysis with go test -cover

The key takeaway is that testing in Go is accessible, flexible, and powerful, even without fancy frameworks. By building these habits now, you'll write code that's more reliable, maintainable, and enjoyable to work with.

Solutions to Exercises

Subtract Function and Tests

// calc.go
package calc

func Subtract(a, b int) int {
    return a - b
}
// calc_test.go
package calc

import "testing"

func TestSubtractTableDriven(t *testing.T) {
    tests := []struct {
        name string
        a, b int
        want int
    }{
        {"both positive", 5, 3, 2},
        {"positive minus zero", 5, 0, 5},
        {"negative minus positive", -1, 4, -5},
        {"both negative", -3, -2, -1},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got := Subtract(tt.a, tt.b)
            if got != tt.want {
                t.Errorf("Subtract(%d, %d) = %d; want %d", tt.a, tt.b, got, tt.want)
            }
        })
    }
}

SafeSubtract Function and Tests

// calc.go
package calc

import "fmt"

func SafeSubtract(a, b int) (int, error) {
    result := a - b
    if result < 0 {
        return 0, fmt.Errorf("result would be negative")
    }
    return result, nil
}
// calc_test.go
package calc

import "testing"

func TestSafeSubtract(t *testing.T) {
    tests := []struct {
        name      string
        a, b      int
        want      int
        wantError bool
    }{
        {"positive result", 5, 3, 2, false},
        {"zero result", 3, 3, 0, false},
        {"negative result", 2, 5, 0, true},
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            got, err := SafeSubtract(tt.a, tt.b)
            if tt.wantError {
                if err == nil {
                    t.Errorf("SafeSubtract(%d, %d) expected error, got nil", tt.a, tt.b)
                }
                return
            }
            if err != nil {
                t.Errorf("SafeSubtract(%d, %d) unexpected error: %v", tt.a, tt.b, err)
            }
            if got != tt.want {
                t.Errorf("SafeSubtract(%d, %d) = %d; want %d", tt.a, tt.b, got, tt.want)
            }
        })
    }
}



Deep C# - The Console


Introducing rnr - A Zero-Dependency Task Runner

Meet rnr (pronounced 'runner') - a cross-platform task runner that lives inside your repo. Contributors clone and run, zero friction.

Announcing DotnetPsCmds - PowerShell CmdLets for .NET


DotNet Powershell Commands

Have you ever wondered why there aren't PowerShell Cmdlets similar to dotnet CLI commands?

I've played around with some PowerShell scripts—including some scripts for scaffolding various .NET projects—and some C#-based CmdLets for various things. But I decided to take that experience to the next level by publishing a PowerShell module that provides dotnet CLI functionality while leveraging PowerShell idioms and features.

PowerShell has some interesting features that can make working at the command line and building scripts a little nicer. One of those features is pipelining. Cmd and Bash have pipes, I know; but PowerShell takes it a step further by supporting object-oriented pipelining. A PowerShell CmdLet can output a .NET object (not just text), which can then be propagated through the PowerShell command pipeline.

MVP

The initial (pre)release is very much an MVP: it provides what I consider minimal functionality to support scaffolding .NET solutions and projects. It essentially wraps dotnet CLI commands, adding some convenience and opinions ("opinionation"?) to defaults and some features.

To get started with the cmdlets, you can import the module from the PowerShell Gallery:

Install-Module -Name Pri.Essentials.DotnetPsCmds

As this release builds on the .NET CLI, the .NET CLI still needs to be installed.

Creating a Solution

To create a solution in the current directory, with a solution name matching the name of the current directory (equivalent of dotnet new sln):

$s = New-DotnetSolution;

The output of this cmdlet is an object-oriented representation of the new solution.

Creating a Project

To create a project (a classlib project) in a new Domain directory with a project name of MyProduct.Domain (equivalent of dotnet new classlib -o Domain -n MyProduct.Domain):

$d = New-DotnetProject 'classlib' Domain MyProduct.Domain;

The output of this cmdlet is an object-oriented representation of the new project.

Adding Package References

With object instances of projects and solutions, we can now re-use those objects for other commands, like adding package references to a project:

$t = New-DotnetProject 'xunit' MyProduct/Tests MyProduct.Tests;
Add-DotnetPackages -PackageIds NSubstitute -Project $t;

Adding Project References

Similar to adding package references, project objects can be re-used when a project needs to reference another project:

$t = New-DotnetProject 'xunit' MyProduct/Tests MyProduct.Tests;
$d = New-DotnetProject 'classlib' Domain MyProduct.Domain;
Add-DotnetProjectReference -Project $d -TargetProject $t;

Leveraging Pipelining

DotNet PowerShell Commands support pipelining. For example, the output of New-DotnetSolution can be piped to New-DotnetProject to add the newly created project to the newly created solution:

New-DotnetSolution | New-DotnetProject 'classlib' Domain MyProduct.Domain;

You can store the solution object in a variable to enable adding multiple new projects to the solution:

$s = New-DotnetSolution;
$s | New-DotnetProject 'classlib' Domain MyProduct.Domain;
$s | New-DotnetProject 'xunit' Test MyProduct.Tests;

And pipelining works with adding package references:

$t = New-DotnetProject 'xunit' MyProduct/Tests MyProduct.Tests;
$t | Add-DotnetPackages -PackageIds NSubstitute;

As well as project references:

$d = New-DotnetProject 'classlib' Domain MyProduct.Domain;
New-DotnetProject 'xunit' Tests MyProduct.Tests `
    | Add-DotnetProjectReference -Project $d;

Or with project references and package references:

$d = New-DotnetProject 'classlib' Domain MyProduct.Domain;
$t = New-DotnetProject 'xunit' Tests MyProduct.Tests `
    | Add-DotnetPackages NSubstitute `
    | Add-DotnetProjectReference -Project $d;

Leveraging Objects

You can leverage properties of objects to avoid repeating yourself when creating solutions and projects with namespaces. For example, rather than having the "MyProduct" portion of the namespace/filename repeated, you can use the solution's Name property (assuming the current directory has the correct name as the "product name"):

$s = New-DotnetSolution;
$d = $s | New-DotnetProject 'classlib' "Domain" "$($s.Name).Domain";
$t = $s | New-DotnetProject 'xunit' "Tests" "$($s.Name).Tests" `
    | Add-DotnetPackages NSubstitute `
    | Add-DotnetProjectReference -Project $d;

This is equivalent to the following dotnet commands:

dotnet new sln
dotnet new classlib -o Domain -n MyProduct.Domain
del Domain/Class1.cs
dotnet sln add Domain --in-root
dotnet new xunit -o Tests -n MyProduct.Tests
del Tests/UnitTest1.cs
dotnet add Tests package NSubstitute
dotnet add Tests reference Domain
dotnet sln add Tests --in-root

(For details on deleting Class1.cs and UnitTest1.cs, see below.)

Opinionated

I mentioned that DotNet PowerShell Commands is opinionated in certain respects; here's where.

Solution Folders For Projects

I noticed recently that adding projects to a solution (dotnet sln add) automatically creates a solution folder for any project whose name isn't identical to its parent folder, and skips the solution folder when the names match. (Using -o and -n with dotnet new <template> creates a project with a different name from the directory name.) Personally, I find it cumbersome to navigate the file system — and especially repos like GitHub — when project folders match the fully-qualified project names (kept that way for the sake of default namespaces), e.g. repos/Pri.Essentials.DotnetPsCmds/src/Pri.Essentials.DotnetPsCmds/Pri.Essentials.DotnetPsCmds.csproj. I tend toward shortened directory names for navigability and readability (repos/DotnetPsCmds/src/DotnetPsCmds/Pri.Essentials.DotnetPsCmds.csproj), because directory listings often just show a less-than-useful truncated list of folders that appear to have the same name. (Maybe this is better solved with support for setting the default namespace when creating a project? But dotnet doesn't support that. 😉)

(Image: an unreadable, truncated folder list)

The implementation for this currently manifests in DotNet PowerShell Commands by using --in-root with dotnet sln add to avoid the creation of solution folders with new projects.

Class1.cs

Personally, the first thing I do when creating a class library project via dotnet or Visual Studio is to delete the Class1.cs file. DotNet PowerShell Commands automatically deletes this file when it creates a classlib project. I'm assuming users of DotNet PowerShell Commands are intermediate to advanced and feel the same way.

UnitTest1.cs

DotNet PowerShell Commands does the same thing with UnitTest1.cs in xunit/xunit3 projects as it does with Class1.cs in classlib projects.

Extra Features, So Far

Over and above what dotnet provides, DotNet PowerShell Commands also allows creating a custom solution folder when adding a project to a solution via Add-DotnetProject (not to be confused with New-DotnetProject). This partially circumvents the opinionated default that hardly anyone wants a solution folder per project. It also adds value: if you use the same folder name when adding multiple projects, all of those projects are grouped in the same solution folder.

$d = New-DotnetProject 'classlib' Domain MyProduct.Domain;
$s = New-DotnetSolution;
$s | Add-DotnetProject -Project $d -SolutionFolder MyFolder;

More details

For more details, please see the open-source project on GitHub. In there you'll find a more thorough README that goes into a bit more detail on CmdLets and examples. You can also create issues or discussions with questions/suggestions in addition to contributing.

I'd like to pull together a simple roadmap for what I'm thinking for this in the near future. Some things I'm thinking so far are:

  • Supporting slnx files (by default?)
  • Templated solutions: creating all the parts of common solution templates in a single command
  • Supporting solution-level features like Directory.Build.props and global.json
  • GitHub repo features like workflows, dependabot, etc.
  • Supporting modification of project-level properties via commands (like GenerateDocumentationFile, PublishRepositoryUrl, etc.)

If you find this useful
I'm a freelance software architect. If you find this post useful and think I can provide value to your team, please reach out to see how I can help. See About for information about the services I provide.


How to Generate a .NET Banded Report in Your Desktop Application

Learn how to generate a .NET banded report in your desktop application. See more from ActiveReports today.