Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Building a REST API With Express Framework and MongoDB

1 Share

Almost every modern web application will need a REST API for the frontend to communicate with, and in almost every scenario, that frontend is going to expect to work with JSON data. As a result, the best development experience will come from a stack that will allow you to use JSON throughout, with no transformations that lead to overly complex code.

Take MongoDB, Express Framework, and Node.js as an example.

Node.js and Express Framework handle your application logic, receiving requests from clients, and sending responses back to them. MongoDB is the database that sits between those requests and responses. In this example, the client can send JSON to the application and the application can send the JSON to the database. The database will respond with JSON and that JSON will be sent back to the client. This works well because MongoDB is a document database that works with BSON, a JSON-like data format.

In this tutorial, we'll see how to create an elegant REST API using MongoDB and Express Framework.
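To make the JSON-all-the-way idea concrete, here's a minimal sketch of such an API. It assumes the `express` and `mongodb` npm packages are installed; the connection string, database name (`library`), and collection name (`books`) are illustrative placeholders, not from the original post.

```javascript
// Minimal sketch of an Express + MongoDB JSON API.
// Assumes `npm install express mongodb`; connection string and
// database/collection names are illustrative placeholders.

// Shape a MongoDB document into the JSON clients expect: the BSON
// ObjectId `_id` becomes a plain string `id`, everything else passes
// through untouched -- the only "transformation" in the whole stack.
function toClientJson(doc) {
  const { _id, ...rest } = doc;
  return { id: String(_id), ...rest };
}

async function startServer() {
  const express = require("express");
  const { MongoClient, ObjectId } = require("mongodb");

  const client = await MongoClient.connect("mongodb://localhost:27017");
  const books = client.db("library").collection("books");

  const app = express();
  app.use(express.json()); // parse JSON request bodies

  // JSON in: the client's request body is inserted as a document.
  app.post("/books", async (req, res) => {
    const result = await books.insertOne(req.body);
    res.status(201).json({ id: String(result.insertedId), ...req.body });
  });

  // JSON out: documents go back to the client as JSON.
  app.get("/books", async (req, res) => {
    const docs = await books.find().toArray();
    res.json(docs.map(toClientJson));
  });

  app.get("/books/:id", async (req, res) => {
    const doc = await books.findOne({ _id: new ObjectId(req.params.id) });
    if (!doc) return res.status(404).json({ error: "not found" });
    res.json(toClientJson(doc));
  });

  app.listen(3000);
}

// To run the server: startServer().catch(console.error);
```

Note how the route handlers never translate between formats: the body parsed by `express.json()` goes straight into `insertOne`, and query results go straight out through `res.json`.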

The post Building a REST API With Express Framework and MongoDB appeared first on Hevo.



Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

Signed CycloneDX SBOMs for CRA Compliance Available for Text Control Products

Text Control is proud to announce that we now provide signed CycloneDX Software Bill of Materials (SBOMs) for our products, ensuring compliance with the Cyber Resilience Act (CRA) and enhancing transparency for our customers. This initiative reflects our commitment to security and compliance, allowing our customers to easily access detailed information about the components used in our software.


OpenSpec - a lightweight AI-driven spec framework


I kept on hearing about these new "spec-driven development" tools for AI coding agents, and I was curious to have a play. I initially tried GitHub Spec Kit, but I found it a bit heavyweight, and to be honest I just couldn't get on with the term "constitution"! Trivial, I know - but it just grated on me.

Another popular one is OpenSpec. Nice and lightweight, and it has given me a few really nice benefits, which I'll highlight in this blog post. I've now been using it for a month or so, and I'm really enjoying it and getting a lot of value from it.

It's not really "spec-driven"

Before getting into the bits I like, it's worth clarifying something. People often describe these tools as "spec-driven development", but I'm not really finding that it is spec-driven. I'm finding the specs are an artifact of the workflow, not the starting point. The starting point is a brainstorming session with the agent. The closest part of using it that I could vaguely bucket under "spec-driven" is when I use an existing spec to prime the context of a new piece of related work (more on that later). Also note that it's the agent that writes these specs - not the human. The process doesn't involve lots of hand-writing PRDs, etc.

Explore mode is brilliant

The Getting Started page in OpenSpec's documentation doesn't really talk about "explore" mode - which is a bit of a shame, as it's my go-to starting point and one of the killer features.

The getting started page describes running this workflow: /opsx:propose -> /opsx:apply -> /opsx:sync -> /opsx:archive. Those being skills / slash-commands that you or the agent can invoke. I'll talk more about what those steps mean in a bit.

But for me - my starting point is always the /opsx:explore skill (which then naturally flows into the rest of the above-mentioned workflow). This enters "explore mode", where the agent tries to deeply understand the requirement and the best way to go about it. I give it some information about the requirement - I can even just tell it to look at a JIRA ticket or GitHub issue. I can paste images if that helps. I can point it at past specs for context (more on that later). Quite often I'll use voice dictation and just talk to it, brain-dumping my thoughts.

The agent will then make sure it fully understands the requirement, explore the codebase, ask you probing questions to cover any gaps or unknowns, and basically start a discussion with you until everything's nailed down. It even shows ASCII-art diagrams of architectural ideas and flows. If there are multiple ways of solving the problem, it'll present them, explaining the pros and cons and helping you make a decision.

At this point, you may be thinking that this is just like 'plan mode', which all the coding agents have. I've found that what I described above goes far beyond what happens in the out-of-the-box plan mode. Obviously, you can create a custom prompt/skill to tell the agent to interview you more heavily and do the above. You don't need OpenSpec to do this - but it is nice that OpenSpec gives you this out of the box, and then from what it's learnt, generates the artifacts and specs (which I'll describe below). If you want to see what explore mode is actually doing - check out the skill's markdown here.

Specs as context primers

The second big benefit is something I didn't appreciate when I first started using OpenSpec - but I'm finding it hugely valuable. And that's that the generated spec files are fantastic context primers for future work.

If I'm picking up a new piece of work that builds on, or relates to, something I built previously, I can just point the coding agent at the relevant spec from the previous change. Suddenly the agent has knowledge of that feature and the decisions made around it, including the whys. All without me having to dig out old chats, or re-explain things from scratch.

One could argue that the code is the source of truth, and the agent should be looking at that instead. But a feature might be spread out throughout a codebase, and it doesn't say the whys or the actual requirements. It's a lot of work for the agent to infer all that from the code. A spec is a very specific markdown file describing the whats and the whys. It's far better for context priming. Obviously there is the risk that the spec is out of date or wrong - but I find the benefits far outweigh that risk.

Terminology

Let's now explain some of the OpenSpec terminology:

  • Change -> This is the concept of a piece of work. The lifecycle of getting a feature or something done is called a 'change'. On disk, this is pretty much just the name of the directory that contains the artifacts. Almost like a 'container' for the artifacts.
  • Artifacts -> These are the files that OpenSpec generates during the change lifecycle. They include things like: proposal.md, design.md, tasks.md, delta specs, etc. These files are called artifacts, and they're generated by the agent for us. We don't need to manually write proposal documents or design docs - the agent does it for us, and we can edit them if we want to.
  • Proposal (proposal.md) -> (one of the artifacts) -> Once I finish the above-mentioned explore mode and both the agent and I are happy with the plan - then the agent typically asks me if I'm happy for it to generate a 'proposal'. This is when it generates the proposal.md artifact, creating the 'change' folder if it doesn't already exist, and puts the proposal document in there. The proposal describes the whats and the whys.
  • Design (design.md) -> This is the next artifact that gets generated, and covers the design decisions.
  • Tasks (tasks.md) -> This is another artifact that gets generated, and is a list of the tasks that need to be done to implement the change. It uses markdown checkboxes, and the agent ticks them off as it implements them. You can also add extra tasks to this if you want (or get the agent to do it for you). For example, if there are manual tasks, like adding KeyVault secrets to the different environments - then you can add those so you don't forget and they become part of the change's definition of done.
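As a rough illustration of the tasks artifact (the task names here are invented, not from the post), a tasks.md might look something like:

```markdown
## 1. Implementation
- [x] 1.1 Add CSV export endpoint
- [ ] 1.2 Wire up the "Export" button in the UI
- [ ] 1.3 Add KeyVault secrets to test and prod (manual step)
```

The agent ticks the checkboxes off as it completes each task, and manually-added items like 1.3 become part of the change's definition of done.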

Applying the change (ie. the implementation)

Once OpenSpec has created the above artifacts, it's time to apply the change. You can just tell the agent to apply the change, or if you prefer, explicitly run the /opsx:apply command (skill). This can be done in a fresh context in your agent, as the full task, decisions, task list, etc. are now stored on disk in markdown files.

The agent will then start doing the actual coding, and implementing the change. It'll work through the tasks in tasks.md, and tick them off as it goes. You can watch this happen in real-time, which is quite satisfying!

Syncing / Archiving the change

Once the implementation is complete, the agent will then ask you if you want to archive the change (basically the /opsx:archive command/skill). This moves the change folder into an 'archive' directory for future reference.

There's also another stage before archiving called 'syncing'. I'm including it in this section because quite often the 'archive' step will do that for you automatically. Syncing means it'll take any "delta specs" from the change, and apply them to the output spec files.

A "delta spec" is just a markdown file describing what's changed compared to the existing spec. They use sections like ## ADDED Requirements, ## MODIFIED Requirements, and ## REMOVED Requirements, each containing the relevant requirements and their scenarios.

When you archive, OpenSpec takes those delta specs from that change, and folds them into the canonical spec for that feature - so the next time you (or the agent) reads that spec, the full picture is there from all the changes that touch that feature.

Note that syncing / archiving creates further pending Git changes - so it's worth doing this before your PR is merged - otherwise you'll have to create another PR for those sync/archiving changes.

Summary

If you're already using AI coding agents day-to-day - then OpenSpec is definitely worth a look. It's lightweight, and the combination of explore mode and the specs-as-context-primers adds a ton of value in my experience.

If you've already tried it - I'd love to hear your thoughts and experiences with it. Please comment down below and let me and the other readers know what you think!


C#: Inheritance vs Composition — When to Use Each and Why AI Can't Decide for You


Every C# developer eventually hits this question: should I use inheritance or composition here? And if you ask an AI — or search for articles online — you’ll get the standard answer: “favor composition over inheritance.” That advice isn’t wrong. But it isn’t enough, either.


Shooting yourself in the foot with AI


Today I had a fun one, where I finally dove into something that had been bothering me for days: for the past couple of days I had been getting questions from my different GitHub Copilot sessions to close some GitHub notifications, and only for a very specific type of notification: deployment statuses.

[Screenshot: the AI asking to clean up notifications]

I noticed I got these across different projects, so I was tempted to blame a new tool that had recently been updated. I searched high and low through configuration, user system prompts, etc., and even checked my copilot-instructions files, but there was nothing to be found. I had already logged an issue with the tool's creator, thinking it was something they had added without me asking for it.

Figuring out where this was coming from

Of course I dropped into a Copilot CLI session to figure this one out. Somewhere, in one of my config files, either at the repo or global level, this could have been added. Perhaps a tool config, a VS Code extension, or something else?

After checking a couple of these locations, Copilot found the issue:

[Screenshot: the AI finding the issue]

So this was created in one of my own sessions (with Copilot), where it had interpreted a command as: write a prompt to disk to get notifications, as an extension for the Copilot CLI! I have been working on my own agent that looks at my GitHub notifications and cleans them up for me, so this must have flowed over into a tool (extension) somewhere :smile:

I cleaned it up and of course the behaviour is now gone! Happy days. So, just an example of how easy it is to shoot yourself in the foot with AI. I take some pride in reviewing the changes my sessions make and having a sense of what is happening in them, but this one slipped through the cracks. I guess that's what happens when you have a lot of different tools and sessions running at the same time, and you start to delegate more and more to AI assistants. It's a good reminder to always check what your AI is doing, and to be careful with the commands you give it!

Side note: Copilot CLI extensions!

This whole thing led me to look at Copilot CLI extensions, which I had heard about before. There is some explanation in the GitHub docs on installing plugins, but not really on how that extension ecosystem is loaded. I found a great blog post that goes into the details of how to create your own extensions and how they are loaded by the Copilot CLI, which is really interesting. You can find it here: GitHub Copilot CLI Extensions: The Most Powerful Feature Nobody's Talking About.

Have fun discovering that!


No Dumb Questions: What is an MCP server and why do I care?

Welcome to No Dumb Questions, a column where our least technical writer asks our technical staff the simple, basic tech questions people are afraid to ask. In this first entry, Stack's Director of Ecosystem Strategy Ben Marconi teaches us the basics of MCP servers and why they matter.