Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

‘Using PRIMM to teach programming’: A new short course for educators


At the Raspberry Pi Foundation, we believe that learning to program equips young people with the knowledge and skills they need to thrive in an increasingly digital world. For many educators, teaching programming effectively can be challenging, particularly when their learners are at different stages in their programming journey. Ask learners to write code too early, and they might struggle or feel intimidated. Rely too heavily on step-by-step instructions, and you limit learners’ chances to explore ideas or develop deeper understanding.

Using PRIMM to teach programming artwork

The PRIMM framework — Predict, Run, Investigate, Modify, Make — provides educators with a structure for teaching programming. This research-informed teaching approach balances support with independence and helps learners build their understanding before they write their own code, whatever their starting point.
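A Predict task, for instance, gives learners a short, complete program and asks them to reason about its output before running it. Here is a minimal, hypothetical example of the kind of snippet such a task might use (it is not taken from the course itself):

```python
# A short program for a hypothetical PRIMM "Predict" task: learners
# predict the output before running it, Run it to check, Investigate
# how it works, Modify it (e.g. change the names), and finally Make a
# similar program of their own.
def greet(names):
    """Return a greeting message for each name, in order."""
    return [f"Hello, {name}!" for name in names]

messages = greet(["Ada", "Grace"])
for message in messages:
    print(message)
```

A gap between what a learner predicts and what the program prints is exactly the kind of visible misunderstanding the approach is designed to surface.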

To help educators use this approach confidently, we have launched a new short online course, Using PRIMM to teach programming, which is available on our new Training Hub platform for free.

What is the course about?

This practical, self-paced course gives educators the knowledge they need to use the PRIMM approach to design and adapt programming activities to suit their learners.

The course takes 1–2 hours to complete, and we have designed it for educators working in formal or non-formal learning environments around the world, using any block-based or text-based programming language. All you need is some experience of creating and adapting simple programs.

The course starts with considering the five stages of PRIMM, when and why to use each stage, and how they work together to support learning. It covers how PRIMM aligns with key teaching principles such as scaffolding, managing cognitive load, and progression, and examines how the approach supports formative assessment by making learners’ thinking — and any misunderstandings — more visible.

Active, social learning

Although pedagogy forms the core of this course, we have deliberately avoided a theory-heavy approach. Instead, the course is designed to help you learn through hands-on activities. By reflecting, taking part in discussions with other computing educators, and completing practical tasks, you will explore how PRIMM works in real teaching contexts.

A computer science teacher sits with students at computers in a classroom.

After an introduction to the core ideas of PRIMM, you will design a new programming activity, or adapt an existing one, using the PRIMM structure. This will support you to think carefully about what your learners know and can do, likely misconceptions, and how each stage of PRIMM can be used effectively, including when your learners have varied learning needs and levels of programming experience.

With its emphasis on activity design, the course will support you to develop resources you can use and keep adapting in your own setting. By the end, you will have a complete PRIMM activity designed specifically for your learners, and a clear sense of how to teach programming in a structured and supportive way.

Join the course on the Training Hub

Using PRIMM to teach programming is available on our new Training Hub, where we offer all our professional development courses for free. The Training Hub offers flexible, reflective learning experiences across a range of topics, helping you build your subject knowledge and bring research-informed teaching approaches into your day-to-day practice.

Whether you are an experienced computing teacher, a volunteer educator, or a parent looking to support their child’s learning, we invite you to join us there.

The post ‘Using PRIMM to teach programming’: A new short course for educators appeared first on Raspberry Pi Foundation.

Read the whole story
alvinashcraft
50 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

AI as Normal Technology


Computer scientist Sayash Kapoor joins legal scholar Kevin Frazier to discuss “AI as Normal Technology,” the paper he co-authored with Arvind Narayanan, arguing that artificial intelligence is not an apocalyptic superintelligence or miraculous cure-all, but a powerful, ordinary technology shaped by human institutions and incentives. Kapoor challenges today’s AI hype and panic, urging us to see AI less as destiny and more as infrastructure—and to focus on governance, accountability, and public benefit.

Grab your copy of AI as Normal Technology: https://knightcolumbia.org/content/ai-as-normal-technology

This conversation was recorded on 01/29/2026. Watch the full video recording at: https://archive.org/details/ai-as-normal-technology

Check out all of the Future Knowledge episodes at https://archive.org/details/future-knowledge





Download audio: https://media.transistor.fm/736b3cf6/23cffab0.mp3

VLC for Android 3.7.0


Here is the complete changelog for this new update of VLC 3 for Android.

New equalizer

The main feature of this release is a brand-new equalizer, rewritten from scratch.

The new version:

  • Is easier to use
  • Is more reliable
  • Saves your changes automatically

You can now disable the default equalizer values, giving you full control over your sound customization.

Whether you want stronger bass, clearer vocals or a fully custom profile, the new equalizer makes audio tuning simpler and more flexible than ever.

Better settings import and export

We improved the backup system to make it more complete and more reliable.

Settings export now:

  • Includes equalizer values
  • Backs up more app preferences than before

This makes it easier to move your configuration between devices or restore everything after reinstalling the app.

⚠️ Please note: old backups are no longer compatible.

You will need to export your settings again. The app will notify you if needed.

Support for 16 KB page size

This release adds support for the 16 KB native library alignment requirement.

You can learn more about this Android platform change here:

https://developer.android.com/guide/practices/page-sizes

This ensures better compatibility with newer Android versions and future devices.

Improved subtitle search

We improved the subtitle search feature to provide:

  • Better matching results
  • More accurate suggestions

Finding the right subtitles for your videos should now be faster and more reliable.

Other improvements

As always, this release also includes various fixes and performance improvements to keep VLC stable and smooth on your device.

If you want to join the beta, click here. Happy testing!


Rock Your Career: When Recruiters Treat Interviews Like Lottery Tickets

The excerpt highlights two recruiting approaches: strategic, quality-driven methods and the ineffective flypaper method, which focuses on quantity over candidate suitability. The latter can harm candidates' reputations and trust with hiring managers. It emphasizes the importance of working with skilled recruiters who prioritize meaningful engagement and proper preparation for roles.




Reactive Extensions for .NET - Rx.NET v7 and Futures


Ian Griffiths, Technical Fellow at endjin, .NET MVP, and author of Programming C# (O'Reilly), returns to On .NET Live to demo Rx.NET with live ship-tracking data from Norway's AIS network and walk through the major changes coming to the Reactive Extensions ecosystem in v7.

In this episode:

  • 🚢 Live demo — streaming real-time vessel data with Rx.NET and AIS.NET, using LINQ queries over observable sequences to join, group, and display ship positions on a WPF map
  • 📦 System.Linq.Async → System.Linq.AsyncEnumerable in .NET 10 — how LINQ for IAsyncEnumerable moved from the Rx repo into the .NET runtime, and what that means for your projects
  • ⚠️ Rx 7 Preview — unbundling WPF and Windows Forms support from System.Reactive to eliminate the 90MB binary bloat in self-contained deployments
  • 🔮 Rx 8 and beyond — plans for browser WASM and Unity support, improved trimmability, and the path to production-ready Async Rx

Katie Savage: Hello everybody and welcome back to On .NET Live, where it's our mission to teach the .NET community to achieve more. This morning, or afternoon, or evening depending on where you are, we have an awesome show prepared for you. I'm one of your hosts — this is Katie Savage, here with Cam Soper and Frank Boucher. And I'm sorry, Frank, for my horrendous French accent. I don't have one. It's nonexistent. But I'm super excited to introduce our guest today, Ian Griffiths, who is actually a returning guest. Ian, could you tell us a little bit about yourself?

Ian Griffiths: So hi, I'm Ian Griffiths. I am an author with O'Reilly — I've written the last four or five editions of Programming C#. I've also long been a Pluralsight instructor. Going way back, I got started in computing doing kernel mode device drivers and embedded systems, and then I've gradually been working my way up the stack since then — through medical imaging, broadcast video systems, and then into UI stuff. And then more recently into data analytics and of course applications of that to AI these days.

I currently work as a Technical Fellow for endjin, who are the sponsors of the Reactive Extensions for .NET these days. So the reason Rx.NET, which we're going to talk about today, is still alive and ticking is the generosity of my employers. So thank you very much to them. And that's me.

Katie: Awesome, thank you so much for taking us through that. And you gave us a bit of a spoiler, but I'd love to hear more about what you're talking about today. And I believe last time you talked a little bit about this topic as well.

Ian: I was last on I think two and a half years ago, and at that point endjin had just taken over the maintenance of the Reactive Extensions for .NET. So Reactive Extensions, or Rx.NET for short — also known by the NuGet package name System.Reactive — these are one of the oldest open source projects actually in the Microsoft world. It originally came out of Microsoft.

The Reactive Framework was originally created by the Cloud Programmability Group inside of Microsoft back in around 2008. And it's the same people behind it as were behind LINQ — Language Integrated Query. It kind of came out of that. So it was Erik Meijer's team, essentially, that came up with this back in the day. Same people invented LINQ, same people — some of the same people who are behind the async language features for C# as well. So a very interesting team, and they created this thing called Rx.NET.

And the way I like to describe it is: Rx is useful in any program where things happen. So that's quite a broad category, although not everything — there are exceptions to this. Programs that basically reach into a database where the data's just sat there and do some processing and then write some results out — nothing really happens there. There's sort of data in and data out, and it's all like a batch process. Whereas applications where things are happening live tend to require a slightly different approach.

So Rx has been most well known, I guess, in the user interface world, because there things happen — the user interacts with the application and you need things to happen in response to that, and Rx is really good for that.

But it can also be used in things like monitoring applications. So if you imagine, for example, utility companies — one of the projects that endjin has done with this is we worked with a utility company that provided broadband services, and we were modelling all of the diagnostic data from their multiple millions of routers in people's homes that were reporting information about the state of the network, to provide analytics that were live so that they could, for example, see problems that were unfolding in their network before the customers were troubled by it.

And so you needed to be able to monitor literally millions of devices and to get analytics — some of which were kind of specific to the devices, some of which were at a more aggregate level. So you wanted to know: has this connection gone down because someone's just accidentally watering their plants and has drowned their router, or is it because the exchange is on fire? Those are two quite different conditions and the appropriate response for those two things is different.

And so the sort of analysis you want to do in real time in response to this changing data — it's very useful to be able to do stream analytics at various different levels of detail. And that's the kind of thing that Rx is also very good at. It's less obvious than perhaps the user interface type approaches, but it's equally valuable in that kind of live data analytics world.

Katie: I can imagine, and I'm super intrigued already by the scenarios you've brought up. And honestly that tagline is perfect — I would love to put that on my LinkedIn bio: "useful on any project where something happens." That's amazing. But I'd love to learn more about Rx and I'd love to have you get into it.

Ian: Okay, well I think possibly the best thing to start with would be a demo. So I've got a little WPF application here. Now those of you who did watch the same talk two and a half years ago would've seen an earlier iteration of this — let me get that on the right screen.

So just to show you what the thing does before I get into the code: this is obviously a map control, and as this runs for a while you'll see gradually appearing on the map these little markers. And what we're seeing here is actually live information, generously provided by the Norwegian government, basically for free. They provide this online service that reports the location and movement of all — basically all — ships anywhere near Norwegian waters.

So there's this standard called AIS, the Automatic Identification System, which basically any large or moderately sized vessel that operates in international waters is legally obliged to have. It's basically GPS plus a radio transmitter, so the ships can report where they are. So if you have a marine GPS system, you can see where the other vessels are, and that's because they're all broadcasting their information.

Now this is all the ships in Norway, so this gets quite busy quite quickly. But you can see if I zoom in, you can see it showing where the things are, the direction they're heading in. The API also offers things like speed. It tells you what kind of vessel they are. It will tell you whether they're moored or whether they're anchored or whether they're moving or whether they're doing diving operations. There's quite a lot of information there. This is obviously live, so these things will gradually move around. You saw it populate as I started to go, so this is kind of a good example of the sort of data that we might actually want to deal with in Rx.

So just to give you a flavour of how this looks, I'm going to kill it off and we can start to look at the code. And actually what I'll start with — I've got a couple of projects here — I've got a simpler console app that lets us just see how the API works at its most basic level.

So if I go and open up the Program.cs here: we have this API that lets us connect to servers that can provide AIS data. So that IP address happens to be the IP address of the service provided by the Norwegian government. If you connect to them on this port, they will give you AIS data.

And this library we're using here is a library called Ais.NET, also written and maintained by my employer, endjin — thanks again. So this is a free library that lets you process AIS messages and it exposes them through this property here, which is of type IObservable<AisMessage>.

So here, this is where Rx comes into the picture. This interface, IObservable<T> — it's actually built into the .NET runtime class library. So you don't need any extra libraries to have this. This has been built in since .NET Framework version 4.0. This was actually baked right into the framework.

All of the other support around it isn't, but that interface is baked in. This is the core of Rx. It represents a sequence of things, and in this case the things are AIS messages. So this basically says: if you've got a receiver host plugged into an AIS data source, it will give you a sequence of messages.

And right now this program is incredibly simple. It says: I would like to subscribe to that source of messages, and each time a message arrives, I want it to invoke this function, which just says "okay, what kind of message was it? Was it one that tells us the vessel's name? Does it tell us where the vessel is?" and prints out the details.
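The subscribe-and-callback contract Ian describes can be sketched in a few lines of Python. This is a conceptual illustration of the push model only, with a hypothetical stand-in class; it is not the Rx.NET or Ais.NET API:

```python
# Minimal sketch of an IObservable-style source: it pushes items to
# whoever has subscribed, whenever it decides it has something.
# "MessageSource" is a hypothetical stand-in for IObservable<AisMessage>.
class MessageSource:
    def __init__(self):
        self._on_next_callbacks = []

    def subscribe(self, on_next):
        """Register a callback to be invoked once per arriving message."""
        self._on_next_callbacks.append(on_next)

    def emit(self, message):
        """Called when a message arrives; pushes it to all subscribers."""
        for on_next in self._on_next_callbacks:
            on_next(message)

received = []
source = MessageSource()
source.subscribe(received.append)        # like Messages.Subscribe(...)
source.emit("vessel 257: position report")
source.emit("vessel 257: name report")
```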

So if I run this one — and get that onto the right screen, let's make this a little bit bigger — we can see what the raw messages look like. I'm going to stop that and take a look at it, because this is actually going to show you a challenge with this data source that we're going to use Rx to address.

So Rx isn't just about receiving sequences of messages. If that was all it did, it wouldn't be very interesting. But it allows us to perform processing on those messages in a declarative way. So let's look at these messages.

So this is basically the raw data — or some of the raw data — being received from the ships, and they send different types of messages. So sometimes they send out a message saying "I am at this location and I am facing in" — that's not really a compass bearing, because compasses don't go up to 511. That's a magic value saying "I don't know which way I'm facing right now, so please ignore this," and "I'm not moving." Some of them will have more interesting information.

So there we go — that looks like a genuine compass bearing. So this one says: "I am currently engaged in fishing. I'm currently at these GPS coordinate locations, and I'm facing in this direction." And since I'm fishing, I'm not currently moving. So you can get this information about what vessels are doing, and they're all tagged with a unique vessel identifier.

So when you install one of these GPS systems, they have a unique vessel ID. Now some of these things say what the name of the boat is, so you can see this one here is saying "my name is Cans 21." And a lot of spaces — sometimes they seem to pad the name out with "@" symbols for some reason. Don't know why, but they do.

Now on the map, I wanted to label each of the nodes to say "the vessel here is called this, the vessel there is called that." But there's a slight problem, because the messages that tell us where they are don't include the name, and the messages that include the name don't say where they are.

And the reason for this is that boats tend to move more often than they change their names. And so they do not broadcast their names as often as they broadcast their locations. And because this whole radio standard is actually using relatively low frequency radio, the bandwidth is minuscule — there's maybe hundreds of bits per second. It's incredibly low bandwidth, and so they don't have a lot of space in these messages. And so they try and maximise efficiency.

But then, okay — if I'm going to draw these things on a map, when I receive a message that says "we're at this location doing this thing in this direction," how am I going to reconcile that with the name that I'd like to stick on the label?

Essentially what I want to do conceptually is a join. It's like if this information was in a couple of database tables — I had a table of "here are all the names" with the primary key of the vessel ID, and another table saying "here are the locations" also with the key of the vessel ID — I would just join across those and then I'd be able to get the answer.

That's fine if the data's already there, if nothing's happening, if it's not live, if it's just data that's sat there that I can query. But how am I going to do this when it's live data?

Well, this is where Rx comes in. So if I go back to the WPF version, which is actually labelling these things, I've got the same basic code. I've got a view model here that's powering the display — this is basically sitting behind the UI that you see. And as with the console app, again I just create myself a receiver and a host for that receiver. This is the thing that gives me observable messages.

But now I'm doing some more interesting things. I am saying I want to not just process the messages — I want to start running processing operations on them. In this case, I would like to group the vessels by their unique ID. So what this says is: rather than having a single stream, I want to get an observable stream of observable streams. So each time this sees a vessel it hasn't seen before — each time it sees a message where the vessel identifier is not the same as one it saw last time — it's going to emit a new group as the item that comes out of this observable.

So this is an emitter of groups. It's a sequence of sequences. And then what I can do is say, okay, within each stream I'd like to pick out the different types of messages. I'd like to pick out the navigation messages and the messages that say what the name is, and also the ones that say what kind of ship this is — is it a fishing vessel? Is it a tug? Is it a tanker? Is it something else?

And then combine those together. So I'm sort of doing a join here. In essence, I'm not actually using join syntax, but logically it's doing that sort of thing. I'm telling the Rx library I would like to combine these streams together to find the latest location, name, and type information within this single vessel stream. So basically this is going to emit a series of: this vessel is called this name, it's at this location, and it's this type. And every time any one of those things changes — if it changes location or changes speed, or if less likely it changes its name or its type — then I'll get a new message comes out of here.

And then finally, I basically merge them all back together again. This actually, because it's got two from clauses in here, turns into a LINQ SelectMany, which is a flattening thing. It flattens it back down again. And the net result is the messages that actually come out basically tell you the combined vessel name, location, and type for each vessel.
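The grouping-and-combining pipeline described above can be approximated in plain Python. This hand-rolled sketch mirrors the logic of GroupBy plus a combine-latest join, without using Rx itself; the message shapes and values are invented for illustration:

```python
# Hand-rolled sketch of the "join over live streams" idea: group
# AIS-style messages by vessel ID, remember the latest name, location,
# and type for each vessel, and emit a combined record once all three
# are known — and again whenever any of them changes.
def combine_vessel_streams(messages):
    latest = {}    # vessel_id -> {"name": ..., "location": ..., "type": ...}
    combined = []
    for vessel_id, field, value in messages:
        state = latest.setdefault(vessel_id, {})
        state[field] = value                      # update the latest value
        if all(key in state for key in ("name", "location", "type")):
            combined.append(
                (vessel_id, state["name"], state["location"], state["type"])
            )
    return combined

stream = [
    (257, "location", (60.39, 5.32)),  # positions arrive frequently...
    (257, "type", "fishing"),
    (257, "name", "CANS 21"),          # ...names arrive rarely
    (257, "location", (60.40, 5.33)),  # movement re-emits the combination
]
result = combine_vessel_streams(stream)
```

Nothing is emitted for a vessel until a name, a location, and a type have all been seen, which matches the behaviour described in the demo: markers only appear on the map once all three pieces of information have arrived.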

So anytime you get new information about a vessel, it comes out of here, and this is what we can then use to update the UI. So now I'm just moving into the world of WPF — I'm picking a colour to paint the ship based on its type, I am setting its location on the map based on the reported location, and I'm labelling it with the name.

So we're now back in basically the world of WPF data binding at this point. The point is I can use C#'s built-in query syntax — or if you prefer you can just write LINQ queries as method invocations. Some people prefer one style, some people prefer the other. They are exactly equivalent. You can write it either way, and you can essentially execute queries over live data streams.

So when we run — hang on, I've restarted the console app. Let me go back to the WPF app, click the right button. So when this runs, it starts receiving messages. Initially it's going "okay, well I've got a location but I don't have the name yet," or "maybe I've got a name but I haven't got a location." But once it starts seeing messages of all types from a single vessel, it goes "oh, okay — right now I've seen a location and a name and a type for this particular one here. Now I can actually emit that as a single message with all three." Hand that over to the map, and off we go.

And if we sat here long enough, you would gradually see them moving across the screen. Although, being ships, they're going to move quite slowly at this scale, so I'm not going to sit and make you wait for that.

The basic idea then is we've got this abstraction — observable — which we can do all the same things with that we might do with an IEnumerable, because they are essentially the same fundamental idea. An IEnumerable is just one thing after another. An observable is just one thing after another. The difference is: with an IEnumerable, we as the programmer say "I'd like the next item please." You write a foreach loop over the thing — "give me the next item, you do some work, give me the next item, you do some work."

So we as the developer are pulling items out of the source, so to speak. Whereas with Rx, the source decides when it has something for you. I can't walk up to this API and say "make that ship over there emit a message for me." That's not going to happen. The ship's transmitter will emit messages when it wants to.

And so that happens on its own schedule. I as a developer am not in control of that. And so Rx gives me a way of expressing that by having these things be emitting sources. So we have what's called a push-like way of consuming them, where the source delivers messages into us. So that's the fundamental concept.

It's designed specifically to be very similar to IEnumerable. It's just that you receive messages when the source has them for you, rather than retrieving them when you are ready to process the next thing. That's basically the heart of Rx.
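The pull-versus-push distinction can be shown side by side in Python (again an illustration of the concept, not the Rx API):

```python
# Pull: the consumer drives iteration, asking for the next item when it
# is ready — the shape of IEnumerable<T> and foreach.
def readings():
    yield 1
    yield 2
    yield 3

pulled = []
for value in readings():          # "give me the next item, please"
    pulled.append(value)

# Push: the consumer only registers a callback; the producer decides
# when to deliver — the shape of IObservable<T> and Subscribe.
pushed = []

def producer(on_next):
    for value in (1, 2, 3):       # stands in for "whenever the ship transmits"
        on_next(value)

producer(pushed.append)           # the consumer is passive from here on
```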

And then the same LINQ query language is basically available on both — pretty much anything you can do with LINQ to Objects like searching, filtering, joining, windowing, all these sorts of things are available. And actually, as it happens, Rx provides a bunch of extra operators that are specifically temporal in nature, that wouldn't really make sense for a database. So you can, for example, say "I would like a sliding window that is two seconds long and I'd like you to give me all the events that happen within a two-second window." So I can process those that way. And that obviously only makes sense in the presence of timing. And this being a push-oriented thing, timing is inherently there in a way that it's not with a raw database.
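The time-window idea can be sketched over recorded (timestamp, event) pairs. Rx's time-based Window/Buffer operators do this over live pushed data; this simplified Python version produces consecutive (tumbling) two-second buckets, whereas Rx can also produce overlapping sliding windows:

```python
# Bucket timestamped events into consecutive two-second windows — a
# simplified, offline stand-in for Rx's time-based window operators.
def two_second_windows(events, window_seconds=2.0):
    windows = []
    current, window_end = [], None
    for timestamp, event in events:
        if window_end is None:
            window_end = timestamp + window_seconds
        while timestamp >= window_end:   # close any elapsed windows,
            windows.append(current)      # including empty ones
            current = []
            window_end += window_seconds
        current.append(event)
    if current:
        windows.append(current)
    return windows

events = [(0.1, "a"), (0.9, "b"), (2.5, "c"), (3.9, "d"), (6.2, "e")]
buckets = two_second_windows(events)
```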

So that's the heart of Rx.

Katie: It is super cool. When you started initially it was like "okay, we can buffer the messages and stuff" — no, just group them and it works. That's awesome.

Ian: It's a declarative style. So rather than you having to think "what code am I going to write to process it, where am I going to put these things, how do I bunch them together?" — if you can express the semantics of what you are doing through the language of LINQ, through the standard query operators that LINQ provides, then you don't have to think about how you're doing it. You can just say what you would like done.

Katie: That's pretty cool. Super cool. And Ian, one person here, John, is asking if there's a link for this code. They already want to start practising with this.

Ian: Oh right. So I wrote this like a couple of hours ago, so no, not yet. There is a notebook you can get hold of — we used to do this as a Polyglot notebook. Unfortunately the Polyglot notebooks are kind of going away. I think that project is winding down now. And so I rewrote this as a plain WPF project this afternoon so I could be sure I could run it. I will endeavour to make this available. I will do a blog post about this just to follow up. So I will make this code available to people who want it. If you go to endjin.com, then you can find our blogs there. You can find me. I will make this available later this week.

Katie: Perfect. Thank you. Good question, John.

Ian: So what I was hoping to talk about today, if I may, is some of the stuff we've actually been doing in the Rx project. So this is kind of the intro, entry-level stuff. But we've actually been doing some things lately.

One of the things I talked about the last time I was on is documentation and kind of learning Rx, because one of the things with Rx is it's very powerful, but it's not the easiest thing in the world to learn. People often struggle to get their heads around it for a while and then eventually reach a kind of "aha" moment, where it's like "oh, I get it now," and suddenly you can't imagine programming without this whole mechanism. And getting people to that point has been challenging.

Now last time I was on, I was talking about — there was a site called IntroToRx.com, which was written actually like 14 years ago, believe it or not, by a guy called Lee Campbell. But he hadn't updated it since then, so it was very good but it was also kind of out of date. But he very generously allowed us to take that content and update it. And since the last time I was on, that is now done — up-to-date site, IntroToRx.com. So if you want to learn in detail about this stuff, that is absolutely the place to go.

IntroToRx.com — it's available. We also take contributions. The community is regularly submitting changes or fixes or enhancements to that, and so it's a live, up-to-date place to go to learn about this stuff. I would check that out if you are in any way interested in this.

So, any questions you want to raise as hosts before I — because I know I can fill the entire hour without noticing it's gone, so I don't want to completely take things over.

Cam: Well there was — Cecil was asking — he thought that Rx and Ix operators were moving into the core framework.

Ian: Ah yes. This is exactly one of the things I wish to talk about. So let me just — I'm going to open up a web browser. Probably didn't want to do it in the same window that I'm running the session in. Two seconds and I've lost my mouse pointer. Where's it gone? That's what happens when you have a lot of screens, people.

Katie: I'm just jealous. I wish I had that problem.

Ian: Okay. So this is actually the source code for the documentation, but this is up on the — if you go to the main .NET website you will — hang on, no, this is not the one I meant to do. I meant to look at the one that's on the main .NET website. Hang on a second. .NET 10. It's this one here.

So this was announced as a breaking change in .NET 10, which is that they have added support for LINQ to IAsyncEnumerable. So what does this have to do with Rx.NET? Because IAsyncEnumerable is this interface that was introduced to the .NET runtime roundabout .NET Core 3.1 time.

Let me just clear that out the way to make a bit more space. Right. So the basic idea is it's like IEnumerable — it's a sequence of things — but it's async. So you want to produce items in a way where you sometimes need to await in the implementation and block and have a task that might complete asynchronously. IAsyncEnumerable lets you model that.

In this example I've written this as a C# iterator method, and then they introduced in C# 8, I think it was, this await flavour of the foreach loop, which is designed to provide direct integration for this. So this is IAsyncEnumerable.

But when it was introduced back in .NET Core 3.1, the .NET team did not provide an implementation of LINQ for this. So you could not, for example, do .Where(x => x is divisible by two). For example, if we try this, we get a squiggly saying "the type arguments for this method cannot be found."

However, the Rx team said "that's fine, we have an implementation of this that you can use." If you go to the NuGet package manager and if you search for System.Linq.Async — this says "provide language integrated query over IAsyncEnumerable sequences." And I'm going to add actually version six, and I'm going to just quickly zoom in on something if this will work. You may notice that the project URL is github.com/dotnet/reactive. That's the Rx repo. Mysteriously, this implementation lives in the Rx repository.

Once we install that, the squiggly goes away. So now if I run this, this basically reads input lines and emits them as numbers. So if I type 42, that will come through. If I type 43, it won't come through, because I filtered it to say I only want the even numbers. So those come through, those ones don't.

So this made LINQ available for IAsyncEnumerable, and this was released almost immediately after .NET Core 3.1 shipped.

Now, why, you might wonder, did the Rx team do this? What on earth does this have to do with Rx? Didn't I just say that Rx is all about push, whereas foreach is all about pull?

Well, here's the thing. The team, as I mentioned — the team that invented Rx, the people in it were also behind LINQ. And actually they invented IAsyncEnumerable first. IAsyncEnumerable had been around in .NET for about five years before it appeared in the .NET Core runtime itself, and it was written by the Rx team. They implemented this thing called IAsyncEnumerable themselves. And then later, several years later, the .NET team said "oh, that's useful, we should build that in." And they did.

And so the Rx team said "alright, we should probably stop trying to define it ourselves." They removed their own definition of IAsyncEnumerable, but it's like — hang on, we have a complete LINQ implementation for this. They'd already done it. They'd already done all the work to support LINQ for IAsyncEnumerable long before the .NET runtime and the C# compiler team built support for that interface in.

And so it was like "well, we could just make this available." And so they did. They made System.Linq.Async available as a library, because they just had it and people wanted it.

There was a slight problem with this, which is: by this time there wasn't really any full-time support inside of Microsoft for the Rx project. So it had started as a fully funded internal project and it turned into a fully community-supported project. And the result of this is that loads of people look at this and think "well, it's called System.Linq.Async — that must be part of the .NET runtime library. That must have full support. I must be able to demand feature enhancements and bug fixes for that, just like I could for any other bit of the .NET runtime." And those of us who are maintaining this in our own time for free were starting to get support requests from people who genuinely believed we were being paid to do this, when we weren't.

And when we decided to take over — to offer to take over maintenance of the Reactive Extensions for .NET — we didn't really sign up to also become the maintainers of LINQ for IAsyncEnumerable, but it sort of happened because it's the same repo. So we sort of became responsible for that as a direct result of taking over the repository.

So David Fowler actually said "this doesn't seem right, we should probably build this into the .NET runtime libraries." And he said that about two years ago, and that eventually came to fruition with .NET 10. So as of .NET 10, you don't need to do this. I can actually come in here and remove this library and this will go back to giving me an error. But if I now upgrade the project to .NET 10 and save that and build it — that squiggly should go away again. And now it's there, but now it's actually in a different place. If I mouse over this and if I zoom in again, you can see the location is .NET/shared/Microsoft.NETCore.App version 10. So the .NET Core runtime libraries, in a library called System.Linq.AsyncEnumerable — it's now built in.

And so you would think this would be just a slam dunk for us on the Rx maintenance team, but you would not believe how much work it has taken for us to step neatly out of the way of this development. Because the problem is: a lot of projects will have that reference to System.Linq.Async in them, and then they'll upgrade to .NET 10.

Let me show you what happens when you do that. If I install this again, if I put the library back — so now I'm on .NET 10 and I've got System.Linq.Async — and now I have a problem. It now says "the call is ambiguous between the following methods or properties." There are two implementations of LINQ for IAsyncEnumerable. There's the one that's built into the .NET runtime libraries, and there's the one that's in the System.Linq.Async package.

Now, you can fairly easily solve this by removing the reference to System.Linq.Async — unless you didn't add that reference in the first place. What if you are using some other library that depends on System.Linq.Async? Now you can't get rid of it. So that's fun.

So what we did is we actually released a new version of System.Linq.Async version seven. And if we update to that and now go back here, the problem goes away again. And essentially — if I mouse over again and zoom in — you can see we're back to using the one built into .NET Core, into the .NET runtime libraries.

So what essentially we've done is we've removed all the stuff that's now in the .NET runtime. So if you're on the latest version of our library, it no longer tries to provide you with LINQ for IAsyncEnumerable, because .NET does that for you.

However, it's more complicated than that, because what if you are using a library that was built against System.Linq.Async version six, and it's not been built against version seven? It's going to expect to find all of LINQ there in binary at runtime, because it's compiled against our DLLs and not the ones in the .NET runtime.

So we actually have to ship two completely different sets of binaries. If you look inside these NuGet packages, there are reference assemblies that say "oh no, we don't provide an implementation of LINQ for async anymore." But there are runtime assemblies that do continue to supply that for binary backwards compatibility. So it's all a bit hairy to make it work, but the net result is it should just work how you expect.

Basically, the only problem people are going to see, we hope — and so far this has panned out — the only problem we're expecting people to see is if they end up with a reference to System.Linq.Async version six and they upgrade to .NET 10. Or if someone else brings in this new runtime library, because you are allowed to add a reference to System.Linq.AsyncEnumerable even if you're on .NET 8 — they actually made it work down-level. If someone's done that to you, you end up with this ambiguous reference error. But you can fix it by just upgrading to the latest version of Ix.NET.

So we will eventually be deprecating the System.Linq.Async package. We're going to do that fairly soon. The only reason we haven't already done it is we wanted to make sure that this all worked for people. I had some sleepless nights when .NET 10 shipped, thinking "am I going to get a million bug reports because I've not thought of something and this is all going to go wrong?" But so far it seems to be fine.

So we are going to mark System.Linq.Async as deprecated so that people can stop using it eventually. But for the meantime, there's this kind of off-ramp where you just use this thing.

There's one other issue though, which is that there are some features we provided in System.Linq.Async that were not replicated in .NET 10. So for example, there's this slightly strange method called AsAsyncEnumerable, and there's an equivalent method in LINQ to Objects. The .NET runtime libraries do offer an AsEnumerable method — it basically says "I want to hide the concrete type of this thing and just turn it into the interface." It says "erase the type for me." I might have some MyAsyncEnumerableImplementationType and I'd like to treat it as an IAsyncEnumerable so that only the IAsyncEnumerable extension methods are available. It's occasionally useful to do this. And for some reason the .NET runtime library did not include that method when they did their own version of it.

So we continue to provide that, but it has moved. There is a new library. If you look in your dependencies, if you have a reference to the latest System.Linq.Async, you'll see that we have transitively given you a reference to System.Interactive.Async. Now this has always existed — this was always the library where we put non-standard LINQ-like things that aren't really proper LINQ, they're sort of extensions to LINQ. They live in this library for the async version. And there's also System.Interactive, which has existed for like 12 years — it's where we put non-standard but LINQ-like operators.

And so we've moved everything into there. So the idea is you would stop using System.Linq.Async — you would just remove your reference to that. If you need any of the functionality that we provide and .NET 10 did not copy over, you would instead add a reference to System.Interactive.Async, and then you're good.

So that was done back in November, and we'll be deprecating that library fairly soon. That is my not-very-brief answer to Cecil's excellent question, because I wanted to talk about that. People need to know.

Frank Boucher: Interesting story. I never thought about the merging stuff — the impact of what it can cause.

Ian: Well, I have an even bigger version of that that I want to talk about as well today, which is one of the big things that we're doing for Rx version seven.

So we shipped Rx 6 a couple of years ago. We shipped Rx 6.1 earlier this year — it has a handful of minor new features in it, some additional operators and community contributions. But in Rx 7.0 we are trying to fix a problem I talked about two and a half years ago and that we haven't managed to fix yet. So let me talk about what this problem is.

I have another window open here. Let me get it onto the right monitor. Right, so — do we have a drum roll, Cam? Right, so this is a pretty simple WPF application. All I'm doing is creating an observable sequence. The nature of the sequence isn't terribly interesting — it just produces numbers at kind of randomly spaced intervals. It counts up, but it does so at a slightly lumpy speed. And then in my subscription, I'm just putting the output directly into a property of a control.

So if I actually run this — it just shows increasingly high numbers at slightly random intervals. Not very interesting. But the point of this is to illustrate one of the things you often have to do in user interface programs.

If I comment out a magic line, this will stop working. We've immediately hit an exception — InvalidOperationException. Let me close the live preview. The calling thread cannot access this object because a different thread owns it. If you've done much user interface programming, you will be familiar with this problem. Basically almost all user interface technologies require the UI to be updated from the right thread. So any window handle in Windows belongs to one particular thread, and most UI frameworks don't like it if you try and change something from any different thread.

Back in the day, Windows Forms version one used to just break weirdly when you did this — it didn't notice you'd got it wrong. It would just gradually melt its innards and would start to go wrong. Now it actually detects it and throws an exception, which is an improvement. But basically you've got to be on the right thread.

So in Rx we offer these helper mechanisms where you can say "okay, I do want to subscribe to this source, but actually I need to observe it on a particular context." I can't just take the raw notifications because I happen to know this source is going to deliver them to me on a thread that is not useful.

So if I add this ObserveOnDispatcher, what this says is: I know I'm in the WPF world, and so I want the dispatcher for whatever thread I'm on when this method runs — the current thread's dispatcher — to be captured. And anytime this source emits a value, I would like to basically redirect that back onto the user interface thread before I handle it.

So now if I run this — I'm going to stick a breakpoint here. So when it tries to raise the events, if we look at the thread that I'm on, you can see I'm on some sort of thread pool worker thread up there. So if this were to come straight through, it would not be the right thread to hit the UI. But if I now hit F5 and see — well, okay, now we've received that. What thread are we on now? Well, now we're on the main thread, because I told Rx that's what I need.

So the point here is that Rx offers integration with certain UI frameworks. We do this stuff for WPF and a few other helpers. We also have ones for Windows Forms — you can do ObserveOnControl. We also do ones for UWP, or indeed anything that uses the Windows Core Dispatcher. So we have ObserveOnCoreDispatcher.

Here's the problem. All of this today is built into the same library. If you want this, you just use the same System.Reactive NuGet package as you do for anything else. So if I were to look at this project, the only NuGet package reference that I've got there is the standard Rx one.

Why is that a problem? Well, in a way it's not a problem — in a way it simplifies things. It means I just said "Rx please," and if I happen to be using WPF, then I get the WPF features. They're just right there. The problem, however, is when you start doing things like AOT (ahead of time) compilation or self-contained deployment.

If you want to build this WPF app into a self-contained form where you don't have to pre-install the .NET runtime to use it — this has been a mode that's increasingly well supported in recent versions of .NET — if you want that to work, then the problem is that including Rx means that it will now ship a complete copy of WPF and Windows Forms with anything you build, whether or not you're using either of those frameworks.

So if, for example, you are only targeting .NET 10 with the Windows-specific TFM because you happen to want to call some API that's in there — let's say maybe you're writing a console app that wants access to the sensor framework that's available in the Windows API. Maybe you want to read orientation data and report that over the network. You don't have a UI, but the problem is: because you said "I want the Windows flavour," Rx goes "oh, well then you must want WPF and Windows Forms," and so your binaries get about 90 megabytes larger as a result of this. Which is not good.

The reason this was missed at the time is that back when the decision to unify everything into a single package was made, there was no such thing as WPF on .NET Core. That didn't come along till .NET Core 3.1, and this decision was taken earlier than that. You only had WPF if you were doing .NET Framework — classic .NET FX — and there was no way of doing a self-contained deployment if you were building a .NET Framework app. You had to install the .NET Framework on the target machine before you could install your app. And so this whole problem didn't arise.

Since they made this decision, it's now become a real problem. If you are targeting a Windows-specific target framework moniker and you include Rx, and you build a self-contained deployment of any kind, you now get 90 megabytes of unwanted stuff. If you turn on trimming, it goes down to a mere 60 megabytes, which I guess is slightly better, but it's still an awful lot.

So we wanted to fix this, and actually I said last time I was on, two and a half years ago, I said "we're trying to fix this, we haven't worked out how to yet." We now think we have worked out how to.

So if you go to the NuGet package manager — if you are using Rx, there is a preview of Rx 7 available on NuGet. And if we do this, it unbundles the UI framework support. So if you say "I want System.Reactive," you just get System.Reactive. We don't give you the WPF stuff anymore.

Now, actually the runtime binaries still have it, because we've had to do the same binary compatibility thing to make sure that anything built against older versions of Rx that was expecting everything to be there will still work. But we no longer declare a dependency on WPF. We no longer force your application to depend on WPF. And so this gets rid of this problem.

Now the obvious downside of this is that if you are building against this, it will now say "well, that method doesn't exist anymore." And that's fine — you just need to add the right library.

But what we've done is we've added an analyser that detects when you've done this and says "oh, you are trying to use ObserveOnDispatcher, and that used to be built into System.Reactive. Now it isn't." So we thought, rather than confusing people and them going "why is this method gone away?" — we're actually telling them "okay, you now need to add a reference to this package for this to continue working." So people at least get told what to do as part of the upgrade.

So it's like "oh, okay — add a reference to System.Reactive.WPF." Let's go and find that. System.Reactive.WPF — there it is. We install that, and now because we've actually asked for WPF, we're going to get it. And that's fine. If you ask for it, you get it. If you don't ask for it, you don't get it. That's the new model.

And so we are hopeful that this is a relatively painless way of getting past the problem. Because people that were not using Rx because of this — the Avalonia UI project abandoned Rx because their use of Rx meant that their binaries were 60 megabytes bigger than they needed, and they felt it was less painful just to stop using Rx than it was to force that on their users.

And so our goal is to say, well, we'd like them to come back. Ideally — maybe they never will. Maybe we've burnt that bridge. But we'd like Rx to be a good choice for anyone building Avalonia UI apps. And so we have to unbundle the UI elements. So that's the big change with Rx 7.

We've gone to great lengths to ensure binary compatibility is maintained, using similar tricks to the ones I just described for the System.Linq.Async stuff as well. It's currently in preview — we've had about 40,000 downloads so far. At some point we're going to have to pull the trigger. No one's told us it doesn't work yet. I would encourage people to try this and see if it works for them, because we suspect no one's told us it's not working because they haven't tried it.

We've done our utmost to test this in every way we can think of, and hopefully it's fine. But sooner or later — this year, not too many months from now — we will go for a proper release of this and then we'll find out whether it's as good as we think it is.

So that's the big thing coming in Rx 7. And we're basically making that the only feature of Rx 7, because we want to separate that change out from everything else. And then further feature work — there'll probably be an Rx version 8 fairly quickly on the heels of Rx 7, where we actually do new feature work.

So I can take a breath for a second.

Frank: I want to congratulate the team for the clarity of all those error messages, because not everybody makes that effort, and all those messages were very clear and helpful and all those things. So I think this is a great effort. It looks like the team cares.

Ian: We do really care. We've tried really hard to — oh, congrats on that. It's been controversial because there are a lot of people who would rather we just left it as it is. That's been quite a widespread opinion. But we know there are projects that have walked away from Rx because of this. And so our view is that that's not an acceptable solution.

I'm also just going to address the other thing people say, which is: "Well, can't you just tear it up and start again? Just build a new Rx. Just say it's the end of the line for System.Reactive. Just do System.Reactive.Two or System.MoreReactive or whatever it might be." Super Reactive.

That doesn't work. That absolutely does not work, because if you end up with dependencies on both those libraries — let's say you've decided to use the new Reactive, and then you take a dependency on some component that's using the old Reactive — now you get those ambiguous method errors again. The same problem I showed you with System.Linq.Async. Because you've now got two completely different implementations of Rx, both saying "I provide Where and GroupBy and Select and so on for IObservable." And the compiler doesn't know which one you want, and that's an absolute nightmare.

So the so-called "clean break" solution is no such thing. You basically have to fix this in System.Reactive if you're going to have any hope of proper compatibility going forward.

So there is a huge design document about this. If anyone wants to go into the details on this — let me try and find where that is. So if you go into the Reactive repo and look at ADR — Architectural Decision Record — and look at the Package Split ADR, this is my attempt to summarise everything you need to understand in order to solve this problem correctly.

"Summarised."

Yes, it's not as simple as people think. So this explains everything that anyone has suggested to us, because we sought feedback from the community on "what are we going to do about this?" And we've evaluated every option and written the pros and cons about it and explained why it is we've chosen the solution we've eventually gone with. And we also explain exactly what the problem is as well. So you can see — that's how big a self-contained exe is: 90 megabytes. It grows to 182 megabytes if you add Rx. And that's just not acceptable, we don't think.

Katie: So that's the big deal with Rx 7. That makes sense. I mean, even just looking at that document, I'm like, I believe you, Ian, I believe you fully. We do have a couple of questions that have come in over the last couple of minutes that I want to make sure we get to.

Cam: Sure. This one from John came in a little bit ago. He says that he's a QA automation person and he's wondering where can we access TFS to log bugs?

Ian: As in, if you find bugs in the System.Reactive library, the place to log bugs would be — it's a GitHub repo. So just go to github.com/dotnet/reactive/issues, and anyone can report issues. So I'm not sure if that's what he actually means, but we're not on TFS, we're on GitHub. Although we do use Azure DevOps to do our build, because that's what the .NET Foundation originally set us up with for this project. But that is the place to go to report bugs.

Katie: Perfect. And I think Cam had sent a link to this repo earlier in the stream, so go ahead and search for that. And John, if that wasn't what you were asking, feel free to clarify and we'll ask again.

There is another question from MC Nets: is this Reactive library used by UNO Platform in MVUX, or did they write their own libraries?

Ian: I don't know about Uno. I mean, it's possible to use this in Uno. One thing I would say though is that we know there are two — at least two — environments where we have some problems. One of which is browser WASM, and the other of which is Unity, the games development environment or the 3D development environment. And both of those are to do with differences in threading in that world.

And that's actually one of the things we're going to work on with Rx 8. We want to address the problems that mean that Rx has some issues in browser WASM and on Unity. And the thing about Uno is that it can end up running in the browser. So I would anticipate that it has the same problems there.

I also know that people have written — there've been various attempts to fork Rx and do new versions of it, because for quite a while it was basically unsupported, and so people were asking for features to be done and nothing happened for a couple of years, and so people went off and did their own forks. Quite understandably.

Our goal is really to try and make it good for all .NET applications, and so that's a big driver of Rx 8 — dealing with these things. I don't actually know specifically with Uno — they may well have done their own libraries. But our goal is to get to the point where they don't have to. And if they wanted to come back to the original Rx, they could. But equally, if they're happy with their new solution, then more power to them.

Katie: Totally. That makes sense completely. I think that's all the questions I'm seeing. Frank, Cam, anything from you?

Cam: No, I've got nothing. I just wanted to comment on the work that you guys have done recently to handle the integration with .NET 10 and those various cases of ambiguous references. I am very impressed with that. Ian, I thank you for all your work you've done on that. And I know we've got a lot of comments out in the chat about how useful Reactive has been for them, and I think I saw somebody refer to the entire Reactive team as "goated." So I think we'll take that as a compliment.

Ian: High praise, high praise. So I should obviously give credit to the people who came before our involvement, because the original team at Microsoft and then the open source group that kind of carried it forwards — they eventually were unable to continue to put the work into it and so it became more abandoned for a bit. But it wouldn't be here today if it wasn't for those people. So there's been many, many people before our involvement, without whom it just wouldn't exist at all. So I can't take too much credit. I'm just trying to keep it available for the next generation, because I think I hugely admire the work that went into it before we got here and want that to continue to be available for everyone, because I think it's really good.

Frank: Do we have time for a last question? I just see — MC Nets was asking on Twitch: is it only lists, is it only objects that we can observe, or can we look for any change in the class or something like that? How do I trigger a change if some of the properties change?

Ian: Right. You want to look at a couple of projects out there. So there's a project called ReactiveUI, which is an Rx-based project. It uses Rx deeply to power a user interface-based framework, and that in itself depends on another library. I'm going to get the name wrong, so just go look at ReactiveUI and you'll find it that way.

There's basically a whole model for doing property changes integrated with Rx. So you can say "I've got this Rx stream, I'd like to present it through a property," or you can say "I've got this property that I update, I would like to turn that into an Rx stream." It's called something like Dependent Data, but that's not the right name — but the ReactiveUI library uses it. So that's kind of the best way in for that. So yes, absolutely you can do it.

There's a couple of things I just wanted to quickly talk about if we have a couple more minutes. Do we have time?

Katie: We've got about five minutes.

Ian: Okay. So I just wanted to say the other things we're aiming to do for the next version, so people know what we're working on. We want to make sure that we are usable for as many .NET applications as possible, and specifically we want to address WASM and Unity.

We also are going to improve the trimmability support. Rx 6 did make Rx trimmable, because it used to be: if you added a reference to Rx, that was one megabyte of extra stuff in your executable. In Rx 6, we added basic trimmability, so the trimmer could chuck away most of Rx if you weren't using most of it. But we didn't do a complete job — we did just enough to be useful. We're going to do it properly with Rx 8.

The other thing we are going to come back to — we have been pushing this along in the background, but it's been slow for various reasons — is Async Rx. So just as you have IAsyncEnumerable, there is IObservable, but there's also IAsyncObservable. And that is a thing that the original Rx team kind of never really finished. It was always in prototype phase.

We did get a preview version of that out there on NuGet so you can use it today. I just wanted to explain to people why it's still in early preview. And the basic reason is we don't have a complete test suite for it yet.

What we are gradually doing is updating the way the Rx test suite works so we can have a single test suite that works across both regular Rx and Async Rx. And once we've got that, we'll then be happy that we're at production quality for both libraries. We want to test them to the same level and we can't do that yet. So until we're able to test Async Rx to the same extent that we do — because we have thousands of tests for proper Rx — until we can get those applied to everything, we're not happy to say that people should be using it. But we are still working on it. It's been slow because other things have taken priority and we really needed to fix this bloat issue. But that is coming. We're still working on it, for people who were wondering where it's gone.

Katie: That makes sense. Thank you so much for all those updates, and thank you for being here today, Ian. This was an incredible show. You've gotten a lot of love in the chat and we super appreciate it. And thanks to everybody who is watching. We're here every Monday, same time, same place — On .NET Live. Next week we'll be here with Mattias. Super excited for that one as well. And I hope you have a great rest of your day or evening. Thank you very much.


The AI Coding Loop: How to Guide AI With Rules and Tests


Building great software isn't about perfect prompts; it's about a disciplined process. In this guide, I'll share my workflow for shipping secure code: defining clear goals, mapping edge cases, and building incrementally with runnable tests.

Using a Node.js shopping cart example, I'll show why server-side validation and test-driven development beat "one-shot" AI outputs every time. Let's dive into how to make AI your most reliable collaborator.

Some Background

Last week I did something that felt amazing for about… five seconds. I opened an AI tool, typed one sentence, and it generated a whole shopping cart module for an e-commerce app. Lots of files, lots of code, even folders and patterns. It looked professional.

And then I realized something: the problem was not "how fast AI wrote code." The problem was "how do I know this code is correct?"

Here's the truth: a big pile of code that you didn't write is not a shortcut. For most developers, it's actually extra work. You have to read it, understand it, and still catch the hidden mistakes.

So today I'm not going to give you another "AI is coming" talk. Instead, I'll show you a simple loop that any developer can follow – beginner, mid-level, or senior – to get better results from AI, step by step, without getting trapped. And I'll show it with a real example you can run in one file.


The 5-Second High (and the Real Problem)

A lot of people misunderstand AI coding. They think the main job is typing code. But the main job is thinking clearly. Typing is cheap now. Thinking is expensive.

When AI produces a "perfect-looking" module in one shot, the real work doesn't disappear. It moves downstream:

  • You still need to understand what it generated

  • You still need to verify it matches your rules

  • You still need to catch the mistakes that hide inside "nice looking code"

If you can't verify it, you don't own it. And if you don't own it, you can't safely ship it.

Tip: Treat AI output like code from a stranger on the internet: useful, but untrusted until proven.

The Golden Rule: Never Trust User Prices

I started exactly like a beginner would start. I opened AI and wrote a vague prompt:

Design and develop an e-commerce shopping cart module for me.

AI replied with a big output. It looked clean. If you're new, you might think:

Wow, it solved it.

But then I asked myself:

What is the easiest way this can go wrong in real life?

And the answer is also simple: “money can be stolen”. Because a shopping cart has one golden rule: never trust prices coming from the user.

If the browser sends you "T-shirt price is $1" and you accept it, someone can pay $1 for a $20 product. And when AI generates a big module quickly, that kind of mistake can easily hide inside "nice looking code."

Warning: Any system that accepts client-sent prices is basically inviting price tampering.

The Mindset Shift: Stop Asking for the Whole App

So instead of accepting the big AI output, I changed my approach. I said:

I'm not going to ask AI to build the whole app. I will break the big thing into small parts, and I will guide AI like a real engineer.

That is the first mindset shift. In the AI era, your value is not how fast you type. Your value is how well you can do three things:

  • define the problem clearly

  • break it into small pieces

  • prove the result is correct

Big systems are built from small correct pieces. That's not "prompt engineering." That's engineering.

The AI Coding Loop (the 7-Step Workflow)

Here's the loop I use. It's simple English. You can copy it and use it for any project:

  • Write the goal in one sentence

  • Write the rules (what must be true)

  • Write two examples (input → output)

  • Write two bad situations (weird cases)

  • Ask AI for a small piece, not the whole thing

  • Ask for tests, then run them

  • If something fails, improve the prompt and repeat

That's it. That's the loop. Here it is in visual form:

AI coding loop workflow

Tip: The loop is the skill. Tools will change. The loop will still work.

Apply the Loop: a Server-side Cart Total Calculator

Now let's apply it to the shopping cart example. Instead of "build me a cart module," I wrote a tiny requirement note:

We need a cart total calculator on the server. User sends productId and quantity. We must ignore any price from the user. We must use our own product list. We must handle unknown products and invalid quantity. We must calculate subtotal, discount, tax, and final total. We must round money correctly. We must have tests.

This is not a large or complex requirements specification, just a clear and concise note.

And then I asked AI for only one small piece:

  • Not the UI

  • Not the database

  • Not the entire architecture

  • Just one function, with tests

Because the fastest way to build something real is to prove one brick at a time. Everything we discussed is now captured in the requirement note. It also helps to sketch a simple diagram of those ideas alongside the note. Together they form a clean, well-documented requirement specification that we can record in the project's GitHub README.md file.

In the diagram below, we can have a browser on the left and the server on the right. The browser/user is an untrusted input source. The user may send productId, qty, and even a fake price, but the server must treat only productId and qty as input and must ignore any client-sent price. The server then looks up the real price from its own trusted product catalog, validates the quantity, and calculates totals from server-side data. This is the trust boundary: prices come from the server, not from the client.

Trust boundary and price tampering

The prompt (small piece, strong constraints)

This is the shape of the prompt I used:

Create a single JavaScript file I can run with Node.

Goal:

Calculate shopping cart totals.

Rules:

  • Input items have productId and qty.

  • Do NOT trust price from user input.

  • Use my product catalog.

  • qty must be at least 1.

  • discountPercent and taxPercent must not be negative.

  • discount first, then tax.

  • round money to 2 decimals.

Examples:

  • 2 T-shirts (20 each) + 1 mug (12.50) => subtotal 52.50

  • discount 10%, tax 8% => discount first, then tax

Deliver:

  • one function

  • simple tests using Node's built-in assert

  • print one example output

One small change makes a massive difference: “rules + examples + tests”. AI still tries to help fast, but now it has guardrails. And if it still makes a mistake, you can catch it, because you asked for proof.

Here is a visual representation of the "Cart Totals Pipeline" that covers all the use cases involved in the cart totals calculation process.

Cart totals pipeline (discount then tax)

In the diagram, the cart total calculation follows a fixed pipeline. First, validate inputs (known productId, valid qty, non-negative discount/tax). Next, compute subtotal from the trusted product catalog. Then apply the discount to get the discounted amount. After that, calculate tax on the discounted amount (not on the original subtotal). Finally, round values correctly and return the result (subtotal, discount, tax, and total). The key rule is the order: discount first, then tax.

One-File Runnable Example (with a Wrong Version on Purpose)

Now here's the one-file example you can run right now. No setup. Just Node. Create a file named cart.js, paste in the code below, and run node cart.js.

It includes two versions:

  • a wrong version that trusts user price (this is the mistake we want to learn from)

  • a correct version that uses a trusted catalog

// cart.js
// Run: node cart.js

const assert = require("node:assert/strict");

// Trusted product catalog (server-side truth)
const PRODUCTS = {
    tshirt: { name: "T-shirt", priceCents: 2000 }, // $20.00
    mug: { name: "Mug", priceCents: 1250 }, // $12.50
    book: { name: "Book", priceCents: 1599 }, // $15.99
};

function money(cents) {
    return (cents / 100).toFixed(2);
}

// WRONG: trusts user price
function cartTotal_WRONG(cartItems, discountPercent = 0, taxPercent = 0) {
    let subtotalCents = 0;
    for (const item of cartItems) {
        const priceCents = Math.round((item.price ?? 0) * 100); // user can cheat
        subtotalCents += priceCents * item.qty;
    }
    const discountCents = Math.round(subtotalCents * (discountPercent / 100));
    const afterDiscount = subtotalCents - discountCents;
    const taxCents = Math.round(afterDiscount * (taxPercent / 100));
    const totalCents = afterDiscount + taxCents;
    return totalCents;
}

// Correct: uses trusted catalog + checks
function cartTotal(cartItems, discountPercent = 0, taxPercent = 0) {
    if (!Array.isArray(cartItems))
        throw new Error("cartItems must be an array");
    if (typeof discountPercent !== "number" || discountPercent < 0)
        throw new Error("discountPercent must be non-negative");
    if (typeof taxPercent !== "number" || taxPercent < 0)
        throw new Error("taxPercent must be non-negative");

    let subtotalCents = 0;
    for (const item of cartItems) {
        const { productId, qty } = item || {};
        if (typeof productId !== "string" || !PRODUCTS[productId]) {
            throw new Error("Unknown productId: " + productId);
        }
        if (typeof qty !== "number" || qty < 1) {
            throw new Error("qty must be at least 1");
        }
        subtotalCents += PRODUCTS[productId].priceCents * qty;
    }

    const discountCents = Math.round(subtotalCents * (discountPercent / 100));
    let afterDiscountCents = subtotalCents - discountCents;
    if (afterDiscountCents < 0) afterDiscountCents = 0;
    const taxCents = Math.round(afterDiscountCents * (taxPercent / 100));
    const totalCents = afterDiscountCents + taxCents;
    return { subtotalCents, discountCents, taxCents, totalCents };
}

function runTests() {
    // Normal example
    const cart = [
        { productId: "tshirt", qty: 2 },
        { productId: "mug", qty: 1 },
    ];
    const r = cartTotal(cart, 10, 8);
    assert.equal(r.subtotalCents, 5250); // 52.50
    assert.equal(r.discountCents, 525); // 10% of 52.50
    assert.equal(r.taxCents, 378); // 8% of 47.25
    assert.equal(r.totalCents, 5103); // 51.03

    // Attack example: user tries to cheat with price = 1
    const attackerCart = [
        { productId: "tshirt", qty: 2, price: 1 },
        { productId: "mug", qty: 1, price: 1 },
    ];
    const wrong = cartTotal_WRONG(attackerCart, 0, 0);
    assert.equal(money(wrong), "3.00"); // totally wrong in real life
    const safe = cartTotal(attackerCart, 0, 0);
    assert.equal(money(safe.totalCents), "52.50"); // correct, ignores user price

    // Edge cases
    assert.throws(() => cartTotal([{ productId: "unknown", qty: 1 }], 0, 0));
    assert.throws(() => cartTotal([{ productId: "tshirt", qty: 0 }], 0, 0));
    assert.throws(() => cartTotal(cart, -1, 0));
    assert.throws(() => cartTotal(cart, 0, -1));
}

runTests();
console.log("All tests passed.");

const example = cartTotal(
    [
        { productId: "tshirt", qty: 1 },
        { productId: "book", qty: 2 },
    ],
    15,
    5,
);

console.log("Example subtotal:", money(example.subtotalCents));
console.log("Example discount:", money(example.discountCents));
console.log("Example tax:", money(example.taxCents));
console.log("Example total:", money(example.totalCents));

In this code, we didn't do a magic trick. We did some engineering:

  • We took a big problem and broke it into a small piece

  • We wrote rules so the AI doesn't guess

  • We wrote examples so the AI understands

  • We asked for tests so we can prove it

  • We ran the tests so we can trust it

That is the loop you can reuse for any project.

How to Use Failing Tests as a Flashlight

This is the part many developers skip. They ask for code, but they don't ask for proof. When you run the tests, one of two things happens:

  • Tests pass: great, you earned confidence

  • Tests fail: even better, you earned clarity

A failing test is a flashlight. It shows you the exact place where your thinking (or your prompt) needs improvement. Instead of "AI is wrong," you get a real question:

Which rule was unclear, missing, or contradictory?

Then you adjust:

  • add a stricter rule

  • add an example that removes ambiguity

  • add an edge case that forces the correct behavior

  • regenerate only the small piece, not the whole codebase

Copy-Paste Prompt Template

Here is a copy-paste prompt template you can reuse starting today (also shown in the image below):

Copy-paste prompt template

Build ONE small piece, not the full app.

Goal:
(One sentence)

Rules:
(3 to 7 bullets)

Examples:
(2 examples: input -> output)

Edge cases:
(2 cases that can break it)

Deliver:
- one runnable file
- include tests using Node assert
- print one example output

Then ask:
Before giving code, list the possible mistakes and confirm the rules.

That last line is powerful. It forces the AI to think about failure before writing code.

A Calm Hype Check: Why Fundamentals Matter More Now

A lot of content online makes it sound like: "AI codes now, so you don't need to learn coding." That idea is a trap. Because yes, AI can type code. But AI cannot replace your responsibilities as a developer and engineer.

If you ship a broken cart, you can lose money. If you ship insecure code, you can get hacked. If you ship unreliable software, users leave. And in real life, nobody will accept the excuse: "The AI wrote it."

In the AI era, learning coding isn't less important. It's more important, just in a different way. The goal isn't to become a fast typist. The goal is to become a strong thinker.

Fundamentals matter more than before:

  • how data flows through a system

  • how to break big problems into small parts

  • how to write clear rules and requirements

  • how to test and verify

  • how to notice edge cases

  • how to think about security

  • how to understand the tools you use, not just copy answers

Average software will be everywhere. It will be cheap. It will be copied. It will be easy to make. So the only software that matters will be software that is truly valuable: safe, reliable, high quality, and built with real understanding.

That's good news for serious learners. Because the best engineers will become even more valuable, not less.

A Simple Exercise (do this once and you'll feel the skill)

Add one more rule to the cart, like:

  • qty cannot be more than 10

Write the test first. Then ask AI to update the function. Run the tests. That's how you train the real AI skill: not prompting, but guiding and verifying.

Let AI type the code. You do the thinking. You do the breaking down. You do the proof.

Recap

  • Don't ask AI to build the whole app

  • Break the problem into one small piece

  • Write rules, examples, and edge cases so AI doesn't guess

  • Always ask for tests and run them

  • Treat failing tests as a flashlight

  • Repeat the loop until you can trust what you ship

That's the game now. And if you play it well, you're not behind, you're ahead.

Final Words

If you found the information here valuable, feel free to share it with others who might benefit from it.

I’d really appreciate your thoughts – mention me on X @sumit_analyzen or on Facebook @sumit.analyzen, watch my coding tutorials, or simply connect with me on LinkedIn.

You can also check out my official website sumitsaha.me for details about me.


