
freeCodeCamp's New JavaScript Certification is Now Live


The freeCodeCamp community just published our new JavaScript certification. You can now sit for the exam to earn the free verified certification, which you can add to your résumé, CV, or LinkedIn profile.

Each certification is filled with hundreds of hours' worth of interactive lessons, workshops, labs, and quizzes.

List of JavaScript modules in the new JavaScript certification

How Does the New JavaScript Certification Work?

The new JavaScript certification will teach you core concepts including variables, functions, loops, objects, higher-order functions, DOM manipulation, working with events, asynchronous JavaScript, and more.

The certification is broken down into several modules that include lessons, workshops, labs, review pages, and quizzes to ensure that you truly understand the material before moving on to the next module.

The lessons are your first exposure to new concepts. They provide crucial theory and context for how things work in the software development industry.

These lessons include our new interactive editor so you can see previews of the code. You can also play around with the examples for deeper understanding.

Example of how to use the interactive editor in the JavaScript lessons.

At the end of each lesson, there will be three comprehension-check questions to test your understanding of the material.

Example question from a JavaScript objects quiz.

After the lesson blocks, you will do the workshops. These workshops are guided step-based projects that provide you with an opportunity to practice what you have learned in the lessons.

Example step from the Build a music player workshop.

After the workshops, you will complete a lab, which will help you review what you have learned so far. This will give you a chance to start building projects on your own, which is a crucial skill for a developer. You will be presented with a list of user stories and will need to pass the tests to complete the lab.

Example user stories for the Build a drum machine lab.

At the end of each module, there is a review page containing a list of all of the concepts covered. You can use these review pages to help you study for the quizzes.

Portions from the functional programming review page.

The last portion of the module is the quiz. This is a 20-question multiple-choice quiz designed to test your understanding of the material covered in the module. You will need to get 18 out of 20 correct to pass.

Example question on objects from the JavaScript objects quiz.

Throughout the certification, there will be five certification projects you will need to complete in order to qualify for the exam.

List of certification projects in the new JavaScript certification

Once you’ve completed all five certification projects, you’ll be able to take the 50-question exam using our new open source exam environment. The freeCodeCamp community designed this exam environment tool with two goals in mind: respecting your privacy and making it harder for people to cheat.

Once you download the app to your laptop or desktop, you can take the exam.

Frequently Asked Questions

Is all of this really free?

Yes. freeCodeCamp has always been free, and we’ve now offered free verified certifications for more than a decade. These exams are just the latest expansion to our community’s free learning resources.

What prevents people from just cheating on the exams?

Our goal is to strike a balance between preventing cheating and respecting people's right to privacy.

We've implemented a number of reliable, yet non-invasive, measures to help prevent people from cheating on freeCodeCamp's exams:

  1. For each exam, we have a massive bank of questions and potential answers to those questions. Each time a person attempts an exam, they'll see only a small, randomized sampling of these questions.

  2. We only allow people to attempt an exam one time per week. This reduces their ability to "brute force" the exam.

  3. We have security in place to validate exam submissions and prevent man-in-the-middle attacks or manipulation of the exam environment.

  4. We manually review each passing exam for evidence of cheating. Our exam environment produces tons of metrics for us to draw from.

We take cheating, and any form of academic dishonesty, seriously. We will act decisively.

This said, no one's exam results will be thrown out without human review, and no one's account will be banned without warning based on a single suspicious exam result.

Are these exams “open book” or “closed book”?

All of freeCodeCamp’s exams are “closed book”, meaning you must rely only on your mind and not outside resources.

Of course, in the real world you’ll be able to look things up. And in the real world, we encourage you to do so.

But that is not what these exams are evaluating. These exams are instead designed to test your memory of details and your comprehension of concepts.

So when taking these exams, do not use outside assistance in the form of books, notes, AI tools, or other people. Use of any of these will be considered academic dishonesty.

Do you record my webcam, microphone, or require me to upload a photo of my personal ID?

No. We considered adding these as additional test-taking security measures. But we have less privacy-invasive methods of detecting most forms of academic dishonesty.

If the environment is open source, doesn't that make it less secure?

"Given enough eyeballs, all bugs are shallow." – Linus’s Law, formulated by Eric S. Raymond in his book The Cathedral and the Bazaar

Open source software projects are often more secure than their closed source equivalents. This is because a lot more people are scrutinizing the code. And a lot more people can potentially help identify bugs and other deficiencies, then fix them.

We feel confident that open source is the way to go for this exam environment system.

How can I contribute to the Exam Environment codebase?

It's fully open source, and we'd welcome your code contributions. Please read our general contributor onboarding documentation.

Then check out the GitHub repo.

You can help by creating issues to report bugs or request features.

You can also browse open help wanted issues and attempt to open pull requests addressing them.

Are the exam questions themselves open source?

For obvious exam security reasons, the exam question banks themselves are not publicly accessible. :)

These are built and maintained by freeCodeCamp's staff instructional designers.

What happens if I have internet connectivity issues mid-exam?

If you have internet connectivity issues mid-exam, the next time you try to submit an answer, you’ll be told there are connectivity issues. The system will keep prompting you to retry submitting until the connection succeeds.

What if my computer crashes mid-exam?

If your computer crashes mid-exam, you’ll be able to re-open the Exam Environment. Then, if you still have time left for your exam attempt, you’ll be able to continue from where you left off.

Can I take exams in languages other than English?

Not yet. We’re working to add multilingual support in the future.

I have completed my exam. Why can't I see my results yet?

All exam attempts are reviewed by freeCodeCamp staff before we release the results. We do this to ensure the integrity of the exam process and to prevent cheating. Once your attempt has been reviewed, you'll be notified of your results the next time you log in to freeCodeCamp.org.

I am Deaf or hard of hearing. Can I still take the exams?

Yes! While some exams may include audio components, we make written transcripts available.

I am blind or have limited vision, and use a screen reader. Can I still take the exams?

We’re working on it. Our curriculum is fully screen reader accessible. We're still refining our screen reader usability for the Exam Environment app. This is a high priority for us.

I use a keyboard instead of a mouse. Can I navigate the exams using just a keyboard?

This is a high priority for us. We hope to add keyboard navigation to the Exam Environment app soon.

Are exams timed?

Yes, exams are timed. We err on the side of giving plenty of time to take the exam, to account for people who are non-native English speakers, or who have ADHD or other learning differences that can make timed exams more challenging.

If you have a condition that usually qualifies you for extra time on standardized exams, please email support@freecodecamp.org. We’ll review your request and see whether we can find a reasonable solution.

What happens if I fail the exam? Can I retake it?

Yes. You get one exam attempt per week. After you attempt an exam, there is a one-week (exactly 168 hours) “cool-down” period during which you cannot take any freeCodeCamp exams. This is to encourage you to study and to pace yourself.

There is no limit to the number of times you can take an exam. So if you fail, study more, practice your skills more, then try again the following week.

Do I need to redo the projects if I fail the exam?

No. Once you’ve submitted a certification project, you never need to submit it again.

You can redo projects for practice, but we recommend that you instead build some of our many practice projects in freeCodeCamp’s developer interview job search section.

A screenshot of the "Prepare for the developer interview job search" section with lots of coding projects

What happens if I already have the old Legacy Responsive Web Design certification? Should I claim the new one?

The new certification has more theory and practice, as well as an exam. So if you’re looking to brush up on your skills, you can go through the new version of this certification.

What will happen to my existing coursework progress on the Full Stack Certification? Does it transfer over to the Responsive Web Design course?

If you’ve already started the Certified Full Stack Developer Curriculum, all of your previously completed work should already be saved there.

To be clear, we’ve copied over all of the coursework from the full stack certification to this newer certification.

Can I still continue with the current Full Stack Developer Certification and just not do the new certification?

We’ve moved the coursework for the Full Stack Developer Certification over and broken it up into smaller certifications. Currently there are seven courses available for you to go through. Here is the complete list:

The Certified Full Stack Developer Certification button will remain on the learn page for a short time to give people the opportunity to switch over to the new certifications. Over the next few months, though, this option will disappear.

List of all certifications on the freeCodeCamp learn page.

Will my legacy certifications become invalid?

No. Once you claim a certification, it’s yours to keep.

Also note that we previously announced that freeCodeCamp certifications would have an expiration date and require recertification. We don’t plan to implement this anytime soon. And if we do decide to, we will give everyone at least a year’s notice.

Will the exam be available to take on my phone?

At this time, no. You’ll need to use a laptop or desktop to download the exam environment and take the exam. We hope to eventually offer these certification exams on iPhone and Android.

I have a disability or health condition that is not covered here. How can I request accommodations?

If you need specific accommodations for the exam (for example extra time, breaks, or alternative formats), please email support@freecodecamp.org. We’ll review your request and see whether we can find a reasonable solution.

Anything else?

Good luck working through freeCodeCamp’s coursework, building projects, and preparing for these exams.

Happy coding!




Resolving Overload Ambiguity with Collection Expressions

OverloadResolutionPriority allows you to specify which method overload the compiler should prefer when multiple overloads are applicable. This can be useful in scenarios where you have multiple methods with types that can be implicitly converted to each other, and you want to control which overload is chosen. I've found this feature particularly useful when you have existing overloads that take…
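Here is a minimal sketch of the attribute in action (my example, not from the post; it assumes .NET 9 / C# 13 or later, and the Sum overloads and their bodies are illustrative):

using System;
using System.Runtime.CompilerServices;

// The collection expression [1, 2, 3] is convertible to both overloads'
// parameter types; the [OverloadResolutionPriority] attribute below makes
// the ReadOnlySpan<int> overload win.
Console.WriteLine(Sums.Sum([1, 2, 3])); // prints 6 via the span overload

static class Sums
{
    // Existing overload that callers may already depend on.
    public static int Sum(int[] values)
    {
        int total = 0;
        foreach (var v in values) total += v;
        return total;
    }

    // Newer, allocation-friendly overload. Without the attribute, normal
    // overload resolution rules decide between the two; the attribute tells
    // the compiler to prefer this one whenever both are applicable.
    [OverloadResolutionPriority(1)]
    public static int Sum(ReadOnlySpan<int> values)
    {
        int total = 0;
        foreach (var v in values) total += v;
        return total;
    }
}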

Making it easier to sponsor Rust contributors


TLDR: You can now find a list of Rust contributors that you can sponsor on this page.

Like many other open-source projects, Rust depends on a large number of contributors, many of whom make Rust better on a volunteer basis or are funded only for a fraction of their open-source contributions.

Supporting these contributors is vital for the long-term health of the Rust language and its toolchain, so that Rust can keep its current level of quality and continue to evolve. Of course, this is nothing new, and there are currently several ongoing efforts to provide stable and sustainable funding for Rust maintainers, such as the Rust Foundation Maintainer Fund or the RustNL Maintainers Fund. We are very happy about that!

That being said, there are multiple ways of supporting the development of Rust. One of them is sponsoring individual Rust contributors directly, through services like GitHub Sponsors. This makes it possible even for individuals or small companies to financially support their favourite contributors. Every bit of funding helps!

Previously, if you wanted to sponsor someone who works on Rust, you had to go on a detective hunt to figure out who contributes to the Rust toolchain, whether they accept sponsorships, and through which service. This was a lot of work that could act as a barrier to sponsorship. So we simplified it!

Now we have a dedicated Funding page on the Rust website, which helpfully shows members of the Rust Project that are currently accepting funds through sponsoring [1]. You can click on the name of a contributor to find out what teams they are a part of and what kind of work they do in the Rust Project.

Note that the list of contributors accepting funding on this page is non-exhaustive. We made it opt-in, so that contributors can decide on their own whether they want to be listed there or not.

If you ever wanted to support the development of Rust "in the small", it is now simpler than ever.

  1. The order of people on the funding page is shuffled on every page load to reduce unnecessary ordering bias.


⭐ Typemock Architecture: Inside the .NET Isolator Engine (Part 2)


In Part 2 of our Typemock Architecture series, we explore the inner workings of the .NET Isolator Engine: the runtime component that gives the Typemock architecture its unique ability to mock statics, privates, sealed methods, legacy systems, and third-party code without rewriting anything.

In Part 1 we covered where Typemock installs, how the desktop layout works, and how Visual Studio, test runners, and CI connect to the mocking engine.

Now it’s time to zoom into the real heart of the Typemock architecture: the Isolator Engine, the component responsible for static mocking, constructor interception, sealed-method overrides, and deep legacy testing that other frameworks simply cannot do.

This is the part of Typemock developers describe as:

“Wait… it mocked THAT?!”

Let’s open the hood.


🔥 1. The Problem: Traditional Mocking Lives Outside the Runtime

Before diving into Typemock, it’s worth understanding why other frameworks can’t do what Typemock does.

Traditional mocks depend on:

  • Dependency injection
  • Interfaces
  • Virtual methods
  • Reflection
  • Context objects
  • Manual refactoring
  • and sometimes ugly workarounds

This means they fail on:

❌ Static methods
❌ Sealed classes
❌ Private methods
❌ Hidden dependencies
❌ Legacy systems
❌ Third-party SDK calls
❌ Code you can’t change

This is not because they are “bad.”
It’s because they work at the language level, not the runtime level.

Typemock is different.


🧠 2. Typemock Enters at the Runtime Level (Not the Language Level)

Typemock integrates into the .NET runtime using the CLR Profiler API, a low-level interface designed for performance monitoring – not mocking.
But with deep engineering, Typemock uses this interface to redirect execution, not just observe it.

This is the architectural breakthrough that powers Typemock.


🧬 3. The Lifecycle: How the Isolator Engine Loads Into Your Test Process

The best way to understand the Isolator Engine is through its lifecycle.

When you run:

dotnet test

Here is the sequence that follows:


Step 1: The Test Runner Starts a Clean Process

A new process launches (testhost.exe, vstest.console.exe, etc.).
This is an isolated space: no pollution, no global hooks, no background services.
To understand how .NET test hosts work at a low level, Microsoft’s documentation provides a useful overview.


Step 2: Typemock Registers as a CLR Profiler

Typemock tells the CLR:

“Notify me every time a method is JIT compiled.”

This is a legal, documented API in .NET, not a hack, not a patch.
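For context, here is a rough sketch of how any CLR profiler gets loaded (this shows the standard .NET mechanism, not Typemock's exact configuration; the CLSID and path are placeholders): the runtime loads the profiler when the test process starts with environment variables like these, and then raises JIT events, such as JITCompilationStarted, to it.

CORECLR_ENABLE_PROFILING=1                                 # ask the CLR to load a profiler
CORECLR_PROFILER={11111111-2222-3333-4444-555555555555}    # the profiler's COM CLSID (placeholder)
CORECLR_PROFILER_PATH=/path/to/profiler.dll                # native profiler library (placeholder)

On the .NET Framework, the equivalent variables use the COR_ prefix instead of CORECLR_.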


Step 3: CLR Begins JIT Compilation

As the CLR encounters each method in your test run, it begins compiling them.

This is where the Typemock architecture comes alive.


Step 4: Typemock Intercepts and Rewrites IL In Memory

When a method is about to be compiled, Typemock sees its IL and can:

  • Replace the call target
  • Rewrite instructions
  • Override return values
  • Redirect constructor calls
  • Modify behavior of sealed/private/static members
  • Inject fake logic

All in memory, before it becomes machine code.

Nothing touches your binaries.
Nothing touches your source.
Nothing persists after the test.

This is why InfoSec teams love Typemock.


Step 5: Typemock Hands the Modified IL Back to the CLR

The CLR compiles the IL into native code – the code that now reflects your fake behavior.


Step 6: The Test Runs with Full Control

This is the moment you get:

Isolate.WhenCalled(() => File.ReadAllText("settings.json"))
       .WillReturn("fake-config");

Or:

Isolate.Fake.StaticMethods(typeof(LegacyManager));

Or even:

Isolate.NonPublic.WhenCalled(legacyObject, "CalcInternalScore")
                 .WillReturn(999);

No wrappers.
No interfaces.
No constructor injection.
No code redesign.

Your code stays your code.


Step 7: Process Ends → Engine Unloads

No residue.
No DLL changes.
No registry keys.
No locked files.
No background tasks.

Your system remains clean.

Diagram: Typemock architecture interception

🎛️ 4. What the Isolator Engine Actually Does at Runtime

Let’s break it down in technical detail.

✔ Intercepts method calls

Before they become machine code.

✔ Rewrites IL instructions

Using controlled rewriting routines inside the JIT callback.

✔ Substitutes behavior on demand

Returning what your test instructs.

✔ Tracks execution paths

Used for Coverage + SmartRunner.

✔ Does not touch disk

All changes vanish when the process ends.

✔ Is fully deterministic

Tests behave identically in:

  • VS
  • CLI
  • CI
  • Docker
  • Build servers
  • Agent machines

This is one of the biggest architectural advantages of Typemock.


🧵 5. A Visual Metaphor: The JIT as the Gatekeeper

If you imagine the CLR as a gatekeeper, Typemock stands next to it and says:

“Before you compile this method, I want to adjust something.”

The CLR says:

“Okay, I’ll wait.”

Typemock modifies it.

CLR compiles the final version.

Execution proceeds normally, except now it follows your rules.

This is the Typemock architecture in one sentence:

We intercept execution at the moment it matters and nowhere else.

Diagram: Typemock architecture engine

🔒 6. Why This Technique Is Safe (Enterprise-Level)

Large enterprises adopt Typemock because:

✔ No code modification

Your source code remains unchanged.

✔ No assembly rewriting on disk

You never risk corrupted binaries.

✔ No global hooks

No startup processes.
No runtime dependencies.

✔ No elevated permissions needed

Standard user rights are enough.

✔ No impact outside the test process

Production is never touched.


🧪 7. Standard Mock vs Typemock Runtime Version

❌ Standard Mock: Language-Level Limitations

var mock = new Mock<ILoginService>();
mock.Setup(s => s.Validate()).Returns(true);

var sut = new Authenticator(mock.Object);
Assert.IsTrue(sut.Authenticate());

You must:

  • Inject dependencies
  • Create interfaces
  • Rewrite architecture
  • Replace constructors
  • Do design gymnastics

✅ Typemock Mock: Runtime-Level Freedom

Isolate.WhenCalled(() => LegacyLoginService.Validate())
       .WillReturn(true);

var sut = new Authenticator();
Assert.IsTrue(sut.Authenticate());

No injection.
No redesign.
No interface ceremony.
Just control.

This is the power of going runtime-deep.


🔮 8. Why This Matters for Teams

Developers → ship faster

Architects → don’t force redesigns

DevOps → predictable CI behavior

InfoSec → zero footprint & no global code rewriting

Leaders → can test what used to be untestable

This is why Typemock has been adopted in:

  • Banking
  • Finance
  • Aviation
  • Healthcare
  • SaaS
  • Gaming
  • Embedded
  • Defense

Teams with real, messy, legacy, high-stakes systems.

Because Typemock’s architecture handles real-world complexity, not theoretical code purity.


🎯 Conclusion: The Isolator Engine Is the Core of the Typemock Architecture

Typemock works differently because it is architected differently.

It doesn’t depend on:

❌ Interfaces
❌ Virtual methods
❌ DI containers
❌ Design purity
❌ Test-only wrappers

It depends on:

✔ CLR JIT interception
✔ In-memory IL rewriting
✔ Controlled runtime substitution
✔ Clean test-process isolation

This is what makes Typemock capable of mocking the “impossible.”


🚀 Want to See the Engine in Action?

👉 Download the Isolator Engine and experience the freedom:
https://www.typemock.com/download-isolator/



Interesting links of the week 2025-50


Here are the most interesting articles, blog posts, videos, podcasts, and GitHub repositories I’ve run into over the last week (December 1, 2025 - December 7, 2025). Enjoy!

Microsoft / Dotnet / Azure - Other Software Dev - AI - Tech and Science - Leadership - Project Management / Agile - Social Media - Non-Tech / Random - Videos - GitHub Repos

Here are some posts I’ve written in the past week

  • Nothing this week; more coming soon

Microsoft / Dotnet / Azure

Other Software Development

AI

Technology and Science

  • Nothing this week; more coming soon

Leadership

Project Management / Agile

  • Nothing this week; more coming soon

Social Media

  • Nothing this week; more coming soon

Non-Technology / Random

  • Nothing this week; more coming soon

Videos

GitHub Repos

  • fizzy - Kanban as it should be. Not as it has been

A seal indicating this page was written by a human




How Aspire composes itself: an overview of Aspire's Docker Compose integration


Aspire’s Docker Compose support has come up in a few conversations recently, so I figured it’s worth breaking down how it works under the hood. If you’re not familiar with it, Aspire is a framework for modeling cloud-based apps. It lets you define your services and their dependencies in code (databases, caches, message queues, and the like) and handles the orchestration of running them locally or deploying them to the cloud. Docker Compose is a tool for defining and running multi-container applications using a YAML file. Aspire’s Docker Compose integration bridges these two worlds: you model your app in Aspire’s code-first style and Aspire generates the Docker Compose assets you need to run it.

Today, I want to explore what I’ve started conceptualizing as the “deployment lifecycle” for an Aspire integration: the multi-step process of going from an AppHost (the code where you define your application and its dependencies) to an actual running service. Aspire’s deployment support for Docker Compose consists of four commands that build upon each other and model the lifecycle of a Docker Compose-based deployment. We’ll walk through each of these commands in this blog post and you can see a complete sample application in this repo to explore further.

The lifecycle, step by step

First is the aspire do publish command, which generates Docker Compose YAML assets and .env files that are parameterized but unfilled. Note: this is equivalent to the aspire publish shorthand, but I am using the aspire do command here to make it clear that these actions are modeled as steps in the pipeline.

$ aspire do publish
14:44:20 (pipeline-execution) → Starting pipeline-execution...
14:44:20 (publish-env) → Starting publish-env...
14:44:20 (publish-env) i [INF] Generating Compose output
14:44:20 (publish-env) → Writing the Docker Compose file to the output path.
14:44:20 (publish-env) ✓ Docker Compose file written successfully to /Users/captainsafia/git/tests/docker-compose-deploy/aspire-output/docker-compose.yaml. (0.0s)
14:44:20 (publish-env) ✓ publish-env completed successfully
14:44:20 (publish) → Starting publish...
14:44:20 (publish) ✓ publish completed successfully
14:44:20 (pipeline-execution) ✓ Completed successfully

The functionality of this command is powered by a lightweight, strongly-typed implementation of the Docker Compose YAML specification. This is what allows you to manipulate the contents of the generated YAML file from code using Aspire’s PublishAsDockerComposeService. That in-memory representation of the Docker Compose services is emitted as YAML on disk when the publish command runs.

#:package Aspire.Hosting.Docker
#:package Aspire.Hosting.Python

#:sdk Aspire.AppHost.Sdk

var builder = DistributedApplication.CreateBuilder(args);

builder.AddDockerComposeEnvironment("env");

builder.AddPythonScript("todo-api", "./todos-fast-api", "main.py")
    .WithUvEnvironment()
    .WithHttpEndpoint(targetPort: 8000)
    .WithExternalHttpEndpoints()
    .PublishAsDockerComposeService((resource, service) =>
    {
        // Customizations go here
        service.Labels["target_env"] = "production";
    });

builder.Build().Run();

The .env file that is generated is not available for code-based editing in the same way that the Docker Compose declaration is. It’s entirely meant to be a reflection of the parameters and inputs that are available in the Aspire app model to the resource. Those values may be set when your AppHost is running because they are resolved from configuration or prompted for by the user. By default, the publish command doesn’t materialize any of these values to the generated .env file.

This is an important distinction to call out because, as previously discussed, the statement that the publish command generates assets that can be deployed is only partially true. In this particular case, the assets are essentially useless until you figure out how to fill in all the required parameters yourself.

This is particularly important because some of the required parameters are references to container images that need to be built. If you have runnable services modeled in your AppHost, Aspire needs to build a container image for each of them and push it to a registry. The publish command doesn’t do this; it just leaves a placeholder in the .env file. This means that you’ll need a way to build and push those images to a local or remote registry before you can actually deploy.
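To illustrate what “parameterized but unfilled” means in practice, the generated docker-compose.yaml for the AppHost above might contain a service shaped roughly like this (an illustrative sketch; the exact structure and variable names Aspire emits may differ):

services:
  todo-api:
    image: "${TODO_API_IMAGE}"   # placeholder; publish neither builds nor fills this
    labels:
      target_env: "production"
    ports:
      - "8000"

with a companion .env file that declares TODO_API_IMAGE but leaves it empty until a later step resolves it.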

That’s where the second command comes in. The aspire do prepare-{resource-name} command generates Docker Compose YAML assets and .env files that are parameterized and filled.

$ aspire do prepare-env
14:45:23 (pipeline-execution) → Starting pipeline-execution...
14:45:23 (publish-env) → Starting publish-env...
14:45:23 (process-parameters) → Starting process-parameters...
14:45:23 (publish-env) i [INF] Generating Compose output
14:45:23 (process-parameters) ✓ process-parameters completed successfully
14:45:23 (deploy-prereq) → Starting deploy-prereq...
14:45:23 (build-prereq) → Starting build-prereq...
14:45:23 (build-prereq) ✓ build-prereq completed successfully
14:45:23 (deploy-prereq) i [INF] Initializing deployment for environment 'Production'
14:45:23 (deploy-prereq) i [INF] Setting default deploy tag 'aspire-deploy-20251207224523' for compute resource(s).
14:45:23 (deploy-prereq) ✓ deploy-prereq completed successfully
14:45:23 (build-pythonista) → Starting build-pythonista...
14:45:23 (build-pythonista) i [INF] Building container image for resource pythonista
14:45:23 (build-pythonista) i [INF] Building image: pythonista
14:45:23 (publish-env) → Writing the Docker Compose file to the output path.
14:45:23 (publish-env) ✓ Docker Compose file written successfully to /Users/captainsafia/git/tests/docker-compose-deploy/aspire-output/docker-compose.yaml. (0.0s)
14:45:23 (publish-env) ✓ publish-env completed successfully
14:45:23 (publish) → Starting publish...
14:45:23 (publish) ✓ publish completed successfully
14:45:24 (build-pythonista) i [INF] docker buildx for pythonista:9d1f657d87f6e617e09020ecbb978a291156190c succeeded.
14:45:24 (build-pythonista) i [INF] Building image for pythonista completed
14:45:24 (build-pythonista) ✓ build-pythonista completed successfully
14:45:24 (build) → Starting build...
14:45:24 (build) ✓ build completed successfully
14:45:24 (prepare-env) → Starting prepare-env...
14:45:24 (prepare-env) i [INF] Environment file '/Users/captainsafia/git/tests/docker-compose-deploy/aspire-output/.env.Production' already exists and will be overwritten
14:45:24 (prepare-env) ✓ prepare-env completed successfully
14:45:24 (pipeline-execution) ✓ Completed successfully

It bridges the gap between publish and deploy by:

  • Filling in parameter values from configuration or user prompts
  • Building container images and pushing them to the local registry
  • Resolving connection strings and other resource references

By the time this command completes, you have assets that you can pass directly to the docker compose up command. Or you can use the third command in the stack, which launches Docker Compose for you and handles passing all the correct flags and arguments.
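For example, a manual invocation along these lines should work (the paths come from the output above; -f and --env-file are standard Docker Compose options):

docker compose \
  -f aspire-output/docker-compose.yaml \
  --env-file aspire-output/.env.Production \
  up -d

When I run aspire do deploy instead, Aspire launches Docker Compose locally on the machine: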

$ aspire do deploy
14:45:41 (pipeline-execution) → Starting pipeline-execution...
14:45:41 (publish-env) → Starting publish-env...
14:45:41 (process-parameters) → Starting process-parameters...
14:45:41 (publish-env) i [INF] Generating Compose output
14:45:41 (process-parameters) ✓ process-parameters completed successfully
14:45:41 (deploy-prereq) → Starting deploy-prereq...
14:45:41 (build-prereq) → Starting build-prereq...
14:45:41 (build-prereq) ✓ build-prereq completed successfully
14:45:41 (deploy-prereq) i [INF] Initializing deployment for environment 'Production'
14:45:41 (deploy-prereq) i [INF] Setting default deploy tag 'aspire-deploy-20251207224541' for compute resource(s).
14:45:41 (deploy-prereq) ✓ deploy-prereq completed successfully
14:45:41 (build-pythonista) → Starting build-pythonista...
14:45:41 (build-pythonista) i [INF] Building container image for resource pythonista
14:45:41 (build-pythonista) i [INF] Building image: pythonista
14:45:41 (publish-env) → Writing the Docker Compose file to the output path.
14:45:41 (publish-env) ✓ Docker Compose file written successfully to /Users/captainsafia/git/tests/docker-compose-deploy/aspire-output/docker-compose.yaml. (0.0s)
14:45:41 (publish-env) ✓ publish-env completed successfully
14:45:41 (publish) → Starting publish...
14:45:41 (publish) ✓ publish completed successfully
14:45:42 (build-pythonista) i [INF] docker buildx for pythonista:8de08d760aa4d4227325d474e16815ea7be23b8d succeeded.
14:45:42 (build-pythonista) i [INF] Building image for pythonista completed
14:45:42 (build-pythonista) ✓ build-pythonista completed successfully
14:45:42 (build) → Starting build...
14:45:42 (build) ✓ build completed successfully
14:45:42 (prepare-env) → Starting prepare-env...
14:45:42 (prepare-env) i [INF] Environment file '/Users/captainsafia/git/tests/docker-compose-deploy/aspire-output/.env.Production' already exists and will be overwritten
14:45:42 (prepare-env) ✓ prepare-env completed successfully
14:45:42 (docker-compose-up-env) → Starting docker-compose-up-env...
14:45:42 (docker-compose-up-env) → Running docker compose up for env
14:45:44 (docker-compose-up-env) ✓ Service env is now running with Docker Compose locally (1.2s)
14:45:44 (docker-compose-up-env) ✓ docker-compose-up-env completed successfully
14:45:44 (print-pythonista-summary) → Starting print-pythonista-summary...
14:45:44 (print-pythonista-summary) i [INF] Successfully deployed pythonista to http://localhost:54845
14:45:44 (print-pythonista-summary) ✓ print-pythonista-summary completed successfully
14:45:44 (deploy) → Starting deploy...
14:45:44 (deploy) ✓ deploy completed successfully
14:45:44 (pipeline-execution) ✓ Completed successfully

One thing to note about deploy is that it will rebuild the images and regenerate the Docker Compose assets each time it runs. It doesn’t support using cached values yet, although that can be modeled via custom pipeline steps. The current implementation also assumes that you are using images from the local registry. There’s no support for pulling remote images yet, although work is ongoing to add it.

The fourth (and optional) command, aspire do docker-compose-down-env, gives you the ability to tear down the environment that was created.

Why this multi-step approach?

This multi-step approach reflects the philosophy discussed in my blog post about Aspire’s Pipelines feature: the more granular a representation you can create of the deployment steps involved in your workflow, the more extensible the system is. For example, in CI/CD pipelines you might want to run publish in one stage, prepare in another (perhaps with different credentials or in a different environment), and deploy in a third. This separation of concerns maps well to how pipelines are typically structured.

A more granular approach has value for debuggability and auditing, as well. If something goes wrong, you can inspect the intermediate artifacts. Did publish generate the right structure? Did prepare resolve the values correctly? This visibility is invaluable when troubleshooting.

Finally, more granular representations are more reusable. You can run publish once and then prepare multiple times with different configurations for dev, staging, and production environments. You can choose to enhance the default “deploy” behavior and do something else with the generated Docker Compose assets that isn’t running Docker Compose locally on your machine.

This multi-step lifecycle is something we’re refining across all of Aspire’s deployment targets. The Docker Compose integration serves as a good proving ground for these concepts because the model for a Docker Compose service and its inputs is fairly simple.

Fin

Aspire’s Docker Compose support follows a four-step deployment lifecycle: publish generates parameterized assets, prepare resolves values and builds images, deploy launches the composition, and docker-compose-down tears it down. This separation provides flexibility for different deployment scenarios while keeping the core logic within the Aspire ecosystem. If you’ve been following along with my previous posts on deployment state and the publish vs. deploy distinction, you’ll see how these concepts come together in practice with the Docker Compose integration.
