
Enterprise Patterns, Real Code: Implementing Fowler’s Ideas in C#


Most enterprise systems already use patterns from Martin Fowler’s Patterns of Enterprise Application Architecture. The twist is that many teams use them without naming them, half-implement them, and then wonder why the codebase fights back.

If your ASP.NET solution contains controllers that talk straight to SQL, services that return HTTP responses, and entities that call SaveChanges, you are already remixing Fowler’s patterns. You are just doing it implicitly and at great expense.

This series takes the opposite route. We will name the patterns, show where they fit, and implement each one in C# with concrete examples.

You will see where these patterns help, where they hurt, and how they combine into real architectures instead of diagram fantasies.

How this series will work

In each article, I will:

  • Explain a pattern in plain language, with the original intent from Fowler’s catalog
  • Show how it typically shows up in C# and .NET projects
  • Give a focused example of when to use it and when to avoid it
  • Connect it to neighboring patterns so you see design options, not isolated tricks

This introduction maps the territory. Every item in the index below links to a short description, a definition, and an example scenario. Each of those sections will become a full post with code.

Pattern index

Use these links to jump to the pattern you care about right now.

  • Layered Architecture
  • Transaction Script
  • Domain Model
  • Service Layer
  • Active Record
  • Data Mapper
  • Repository
  • Unit of Work
  • Identity Map
  • Lazy Load
  • Front Controller
  • Model View Controller (MVC)
  • Data Transfer Object (DTO)

Each section below is both a preview and a contract for the dedicated C# article that will follow.

Layered Architecture Pattern

Definition

Layered Architecture splits an application into distinct layers with clear responsibilities. Fowler’s baseline is:

  • Presentation: handles input and output
  • Domain: holds business rules and domain logic
  • Data source: manages persistence and integration with data stores

The core rule is brutal and simple: dependencies point downward, never up. If your controllers know SQL or your repositories know HTTP, the layers have already collapsed.

Where to use it

Layered Architecture fits:

  • ASP.NET Core applications that will grow beyond a few controllers
  • Systems where different teams own UI, business rules, and infrastructure
  • Codebases that must survive several technology shifts over their lifetime

In the dedicated post you will see a C# solution where projects align with layers, controllers stay thin, domain services stay ignorant of transport, and repositories encapsulate EF Core without leaking it upward.
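
Until that post arrives, here is a minimal sketch of the shape. The names (OrdersController, OrderService, IOrderRepository) are illustrative, and an in-memory store stands in for EF Core; the point is the direction of the dependencies: presentation calls the domain, the domain calls an abstraction, and only the data source layer knows how orders are stored.

```csharp
using System.Collections.Generic;

// Presentation layer: knows about request/response shapes, delegates to the domain.
public sealed class OrdersController
{
    private readonly OrderService _orders;
    public OrdersController(OrderService orders) => _orders = orders;

    public string Get(int id)
    {
        var order = _orders.GetOrder(id);
        return order is null ? "404 Not Found" : $"200 OK: total {order.Total:C}";
    }
}

// Domain layer: business-facing API, no HTTP and no SQL.
public sealed record Order(int Id, decimal Total);

public sealed class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) => _repository = repository;
    public Order? GetOrder(int id) => _repository.FindById(id);
}

// Data source layer: the only place that knows how orders are persisted.
public interface IOrderRepository
{
    Order? FindById(int id);
}

public sealed class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _rows = new() { [1] = new Order(1, 42.50m) };
    public Order? FindById(int id) => _rows.TryGetValue(id, out var order) ? order : null;
}
```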

Transaction Script Pattern

Definition

Transaction Script organizes business logic as procedures that handle a single request or use case end to end. Each script:

  • Reads input
  • Performs calculations and decisions
  • Persists changes

There is minimal domain modeling. The focus stays on the flow of a transaction.

Where to use it

Transaction Script works best when:

  • The domain logic is simple and shallow
  • You are building reports, admin utilities, or migration tools
  • You need results quickly and long-term complexity is limited

In C#, this often appears as an application service or handler class that works directly with DbContext and simple DTOs. In the article you will see both the benefits and the trap: it feels efficient until rules start to repeat across scripts.
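
As a preview, here is a minimal sketch of the idea, assuming a hypothetical refund use case and a simple IPaymentStore in place of a real DbContext: one class reads, decides, and persists for a single transaction.

```csharp
// One script owns the whole "issue refund" use case: read input, decide, persist.
public sealed class IssueRefundScript
{
    private readonly IPaymentStore _store;
    public IssueRefundScript(IPaymentStore store) => _store = store;

    public string Execute(int paymentId, decimal amount)
    {
        var payment = _store.Load(paymentId);
        if (payment is null) return "Payment not found";
        if (amount <= 0 || amount > payment.Amount) return "Invalid refund amount";

        // Decisions and persistence live right here, not in a domain object.
        _store.SaveRefund(paymentId, amount);
        return $"Refunded {amount:C} for payment {paymentId}";
    }
}

public sealed record Payment(int Id, decimal Amount);

public interface IPaymentStore
{
    Payment? Load(int paymentId);
    void SaveRefund(int paymentId, decimal amount);
}
```

The moment a second script needs the same refund rules, the duplication described above begins.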

Domain Model Pattern

Definition

Domain Model concentrates business rules inside a rich object model. Entities and value objects express invariants and behavior.

Instead of treating data as passive structures, you treat the domain as the center of gravity. Controllers and repositories orbit around it rather than injecting rules into every edge of the system.

Where to use it

Domain Model earns its weight when:

  • The business rules are complex and interdependent
  • Invariants matter more than raw throughput
  • You expect requirements to evolve frequently

In C# this means entities with methods that enforce rules, factories that control creation, and services that orchestrate multiple aggregates. The dedicated post will show an aggregate in code, along with tests that lock in behavior before you worry about EF mapping.
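
Here is a minimal sketch of an aggregate that guards its own invariants; the Order and OrderLine names are illustrative, not the aggregate from the upcoming post.

```csharp
using System;
using System.Collections.Generic;

// The aggregate owns its invariants: a shipped order can no longer change.
public sealed class Order
{
    private readonly List<OrderLine> _lines = new();
    public IReadOnlyList<OrderLine> Lines => _lines;
    public bool IsShipped { get; private set; }

    public void AddLine(string sku, int quantity)
    {
        if (IsShipped) throw new InvalidOperationException("Cannot modify a shipped order.");
        if (quantity <= 0) throw new ArgumentOutOfRangeException(nameof(quantity));
        _lines.Add(new OrderLine(sku, quantity));
    }

    public void Ship()
    {
        if (_lines.Count == 0) throw new InvalidOperationException("Cannot ship an empty order.");
        IsShipped = true;
    }
}

public sealed record OrderLine(string Sku, int Quantity);
```

A test can exercise AddLine and Ship directly, with no database or mapping in sight.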

Service Layer Pattern

Definition

Service Layer defines a set of application operations that sit between the outside world and the domain model. It:

  • Coordinates multiple domain objects
  • Handles transactions and security policies
  • Exposes a clear API for controllers, message handlers, or other clients

It is the point where use cases live.

Where to use it

Service Layer fits when:

  • You have multiple clients hitting the same core logic: web, background jobs, workers
  • You want to expose a stable application API while the UI evolves
  • Cross-cutting concerns such as logging, permissions, and transaction boundaries must stay consistent

In .NET this often becomes a set of application service classes injected into controllers and workers. In the article you will see a C# service layer that makes HTTP a detail, not the boss.
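
A minimal sketch, assuming a hypothetical checkout use case: the service coordinates the domain and infrastructure behind plain C# types, so a controller, a background worker, and a test can all call the same operation.

```csharp
// The service layer exposes use cases; controllers and workers stay thin callers.
public sealed class CheckoutService
{
    private readonly ICartRepository _carts;
    private readonly IPaymentGateway _payments;

    public CheckoutService(ICartRepository carts, IPaymentGateway payments)
    {
        _carts = carts;
        _payments = payments;
    }

    // One application operation: no HttpContext, no IActionResult in sight.
    public CheckoutResult Checkout(int cartId)
    {
        var cart = _carts.GetById(cartId);
        if (cart is null || cart.Total <= 0) return new CheckoutResult(false, "Nothing to charge");

        var paid = _payments.Charge(cart.Total);
        return new CheckoutResult(paid, paid ? "Paid" : "Payment declined");
    }
}

public sealed record Cart(int Id, decimal Total);
public sealed record CheckoutResult(bool Succeeded, string Message);
public interface ICartRepository { Cart? GetById(int id); }
public interface IPaymentGateway { bool Charge(decimal amount); }
```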

Active Record Pattern

Definition

Active Record merges domain objects with persistence. Each entity:

  • Maps directly to a database row
  • Contains business logic
  • Knows how to load and save itself

Fowler treats it as a natural fit for simple domains where the object model closely mirrors the database schema.

Where to use it

Active Record suits:

  • Small systems with straightforward tables
  • Prototypes where getting something working matters more than deep abstraction
  • Places where simple CRUD with light behavior is enough

In C#, you often see Active Record flavor when EF Core entities call SaveChanges directly or static methods perform global queries. The dedicated post will show a disciplined version of Active Record and explain when to retire it in favor of a separate Data Mapper.
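
Here is a minimal sketch of that flavor, with a hypothetical Customer and an in-memory FakeDatabase standing in for the real table: the entity carries behavior and knows how to find and save itself.

```csharp
using System.Collections.Generic;

// Active Record: the entity carries both behavior and persistence.
public sealed class Customer
{
    public int Id { get; set; }
    public string Email { get; set; } = "";

    public void ChangeEmail(string newEmail)
    {
        Email = newEmail.Trim().ToLowerInvariant();
        Save(); // the object persists itself
    }

    public void Save() => FakeDatabase.Rows[Id] = Email;

    public static Customer? FindById(int id) =>
        FakeDatabase.Rows.TryGetValue(id, out var email)
            ? new Customer { Id = id, Email = email }
            : null;
}

// Stand-in for the customers table, so the sketch runs without a database.
public static class FakeDatabase
{
    public static readonly Dictionary<int, string> Rows = new();
}
```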

Data Mapper Pattern

Definition

Data Mapper sits between domain objects and the database. It:

  • Loads domain objects from data stores
  • Persists changes back
  • Shields the domain from knowledge of how persistence works

The domain classes stay persistence ignorant. The mapper takes on the burden of translation.

Where to use it

Data Mapper pays off when:

  • You have a rich Domain Model
  • The database schema must evolve independently of the object model
  • You want to test domain logic without a database in the way

In .NET, EF Core already plays the Data Mapper role. The article will show how to design domain classes that do not depend on EF, then map them using configurations and repositories that wrap the mapper.
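
A minimal sketch of the split, with hypothetical names and an in-memory table in place of EF Core: the domain class knows nothing about storage, and the mapper owns the translation in both directions.

```csharp
using System.Collections.Generic;

// Domain object: no attributes, no base class, no knowledge of persistence.
public sealed class Product
{
    public Product(int id, string name, decimal price)
    {
        Id = id;
        Name = name;
        Price = price;
    }

    public int Id { get; }
    public string Name { get; }
    public decimal Price { get; private set; }

    public void ApplyDiscount(decimal percent) => Price -= Price * percent / 100m;
}

// The mapper translates between the domain object and the stored row.
public sealed class ProductMapper
{
    private readonly Dictionary<int, (string Name, decimal Price)> _table = new();

    public Product? Find(int id) =>
        _table.TryGetValue(id, out var row) ? new Product(id, row.Name, row.Price) : null;

    public void Save(Product product) => _table[product.Id] = (product.Name, product.Price);
}
```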

Repository Pattern

Definition

Repository represents a collection-like interface for accessing aggregates. It:

  • Hides queries and persistence details
  • Exposes methods that work in domain terms, such as GetById, FindActiveForCustomer, Add, Remove
  • Lets the domain talk in its own language instead of in SQL or query APIs

Fowler includes Repository in the object-relational patterns as a way to further isolate domain logic from data access.

Where to use it

Repository helps when:

  • The same aggregate appears across many use cases
  • You want consistent access patterns for aggregates
  • You expect to support multiple query strategies or stores behind the same domain interface

In C#, this usually means interface definitions in the domain layer and implementations in an infrastructure project. The dedicated post will include concrete repository designs, and also examples of where a repository introduces more indirection than it earns.
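
A minimal sketch of that split, using hypothetical Invoice names: the interface speaks domain language, and the in-memory implementation stands where an EF Core or Dapper version would live in an infrastructure project.

```csharp
using System.Collections.Generic;
using System.Linq;

public sealed record Invoice(int Id, int CustomerId, bool IsPaid);

// Domain-facing contract: collection-like, no SQL and no IQueryable leaking out.
public interface IInvoiceRepository
{
    Invoice? GetById(int id);
    IReadOnlyList<Invoice> FindUnpaidForCustomer(int customerId);
    void Add(Invoice invoice);
}

// Infrastructure implementation; a real one would wrap EF Core or Dapper.
public sealed class InMemoryInvoiceRepository : IInvoiceRepository
{
    private readonly List<Invoice> _invoices = new();

    public Invoice? GetById(int id) => _invoices.FirstOrDefault(i => i.Id == id);

    public IReadOnlyList<Invoice> FindUnpaidForCustomer(int customerId) =>
        _invoices.Where(i => i.CustomerId == customerId && !i.IsPaid).ToList();

    public void Add(Invoice invoice) => _invoices.Add(invoice);
}
```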

Unit of Work Pattern

Definition

Unit of Work tracks changes to domain objects during a business transaction and writes them out as a single logical batch. It:

  • Records inserts, updates, and deletes
  • Coordinates commit or rollback
  • Provides a boundary for transactional behavior

Fowler presents it as a way to stop writes from spreading unpredictably through a codebase.

Where to use it

Unit of Work is valuable when:

  • A single operation touches multiple aggregates or tables
  • You need clear transactional boundaries for consistency
  • You want to keep domain logic free of save calls

In .NET, DbContext already behaves as a Unit of Work, yet many codebases hide that fact. The article will show how to embrace this pattern explicitly and how to wrap EF Core in a higher-level unit of work abstraction when needed.
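
Here is a minimal sketch of such an abstraction, with hypothetical names: changes are registered during the business transaction and written out in one Commit call. In EF Core, change tracking plus SaveChanges plays this role for you.

```csharp
using System.Collections.Generic;

// A minimal unit of work: register changes, then commit them as one batch.
public interface IUnitOfWork
{
    void RegisterNew(object entity);
    void RegisterDirty(object entity);
    void Commit();
}

public sealed class InMemoryUnitOfWork : IUnitOfWork
{
    private readonly List<object> _newEntities = new();
    private readonly List<object> _dirtyEntities = new();

    public void RegisterNew(object entity) => _newEntities.Add(entity);
    public void RegisterDirty(object entity) => _dirtyEntities.Add(entity);

    public void Commit()
    {
        // A real implementation would open a transaction here and translate
        // each tracked change into an insert or update before committing.
        _newEntities.Clear();
        _dirtyEntities.Clear();
    }
}
```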

Identity Map Pattern

Definition

Identity Map ensures that each logical entity from the database exists only once in memory per scope. It:

  • Tracks loaded objects by identity
  • Returns existing instances instead of creating new ones for the same key
  • Helps avoid inconsistent in-memory state for the same row

This pattern often works with Unit of Work and Data Mapper.

Where to use it

Identity Map matters when:

  • The same entity is loaded through different paths in one request
  • You attach domain behavior to entities and depend on reference equality
  • You care about performance costs of repeated materialization

ORMs such as EF Core implement identity maps under the surface. The dedicated post will explain what EF is doing for you and show how to apply Identity Map explicitly when you move outside of ORMs or use multiple contexts.
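
A minimal sketch of an explicit identity map for a hypothetical Account type, scoped to something like a single request: the first lookup loads, and every later lookup for the same key returns the same instance.

```csharp
using System;
using System.Collections.Generic;

public sealed class Account
{
    public Account(int id) => Id = id;
    public int Id { get; }
    public decimal Balance { get; set; }
}

// Identity map: at most one in-memory instance per key within a scope.
public sealed class AccountIdentityMap
{
    private readonly Dictionary<int, Account> _loaded = new();
    private readonly Func<int, Account> _loadFromStore;

    public AccountIdentityMap(Func<int, Account> loadFromStore) => _loadFromStore = loadFromStore;

    public Account Get(int id)
    {
        if (_loaded.TryGetValue(id, out var existing)) return existing;

        var account = _loadFromStore(id);
        _loaded[id] = account;
        return account;
    }
}
```

Two code paths asking for account 5 in the same request get the same reference, so they cannot end up holding conflicting copies of one row.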

Lazy Load Pattern

Definition

Lazy Load defers loading of related data until it is actually needed. Instead of fetching entire object graphs, you:

  • Load a root entity
  • Represent associations as placeholders
  • Trigger actual loading when code accesses the association

The pattern targets performance and memory by avoiding unnecessary work.

Where to use it

Lazy Load helps when:

  • Most use cases do not need full graphs
  • Some navigations are expensive or remote
  • You have to control query explosions carefully

In .NET, EF Core can use lazy loading proxies, or you can code your own lazy associations. The article will show both approaches and highlight the risk: invisible queries that surprise you in performance profiles.
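
A minimal hand-rolled version using Lazy<T>, with hypothetical names: the customer loads eagerly, and the potentially expensive order history query runs only if something actually reads it.

```csharp
using System;
using System.Collections.Generic;

public sealed record PastOrder(int Id, decimal Total);

public sealed class Customer
{
    private readonly Lazy<IReadOnlyList<PastOrder>> _orderHistory;

    public Customer(string name, Func<IReadOnlyList<PastOrder>> loadHistory)
    {
        Name = name;
        _orderHistory = new Lazy<IReadOnlyList<PastOrder>>(loadHistory);
    }

    public string Name { get; }

    // The expensive query runs only on first access, then the result is cached.
    public IReadOnlyList<PastOrder> OrderHistory => _orderHistory.Value;
}
```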

Front Controller Pattern

Definition

Front Controller centralizes request handling for a web application. Instead of letting every page or endpoint own its own entry point, you:

  • Route all requests through a single handler
  • Apply cross-cutting logic in one place
  • Delegate to controllers or handlers for detailed work

Fowler introduces it as a response to duplicated request handling logic.

Where to use it

Front Controller aligns with:

  • Web applications that need consistent logging, authentication, and error handling
  • Systems that must make routing decisions based on shared policies
  • Architectures that use pipelines and middleware

In ASP.NET Core, the combination of the hosting pipeline and routing already forms a Front Controller. The dedicated post will show how to control that pipeline intentionally instead of treating it as framework magic.
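
Here is a stripped-down, framework-free sketch of the idea, with hypothetical names: every request enters through one Handle method, shared policy runs there, and the real work is delegated to whichever handler is mapped.

```csharp
using System;
using System.Collections.Generic;

// One entry point receives every request, applies shared policy, then dispatches.
public sealed class FrontController
{
    private readonly Dictionary<string, Func<string, string>> _handlers = new();

    public void Map(string path, Func<string, string> handler) => _handlers[path] = handler;

    public string Handle(string path, string body)
    {
        Console.WriteLine($"[log] {DateTime.UtcNow:o} {path}"); // cross-cutting logging

        if (!_handlers.TryGetValue(path, out var handler)) return "404";

        try { return handler(body); }        // delegate to the mapped handler
        catch (Exception) { return "500"; }  // shared error policy
    }
}
```

In ASP.NET Core, middleware plays the shared-policy role and endpoint routing plays the dispatch role, which is why you rarely write this class yourself.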

Model View Controller (MVC) Pattern

Definition

Model View Controller splits UI logic into three parts:

  • Model: the underlying data and behavior
  • View: rendering logic
  • Controller: input handling and coordination

Fowler’s version focuses on server-side web MVC.

Where to use it

MVC suits:

  • Applications with complex UI interactions that depend on domain rules
  • Teams that want clear separation between presentation logic and domain logic
  • Systems that must support multiple views on the same model

In ASP.NET Core MVC, controllers speak to application services, views render models, and domain rules stay out of both. The article will show how to keep controller code lean instead of letting it morph into a second application layer.
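
A minimal sketch of a thin ASP.NET Core controller, assuming a hypothetical ReportService registered in DI: the controller translates HTTP into a call and a status code, and nothing more.

```csharp
using Microsoft.AspNetCore.Mvc;

// Controller: input handling and coordination only; no business rules here.
[ApiController]
[Route("reports")]
public sealed class ReportsController : ControllerBase
{
    private readonly ReportService _reports;
    public ReportsController(ReportService reports) => _reports = reports;

    [HttpGet("{id:int}")]
    public IActionResult Get(int id)
    {
        var summary = _reports.GetSummary(id); // the model comes from the application layer
        if (summary is null) return NotFound();
        return Ok(summary);                    // the serialized response acts as the view
    }
}

// Application-facing model; real rules live here or deeper in the domain.
public sealed record ReportSummary(int Id, string Title);

public sealed class ReportService
{
    public ReportSummary? GetSummary(int id) =>
        id > 0 ? new ReportSummary(id, "Quarterly totals") : null;
}
```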

Data Transfer Object (DTO) Pattern

Definition

Data Transfer Object carries data across process boundaries. It:

  • Aggregates fields into a serializable shape
  • Avoids sending full domain objects across the wire
  • Provides a contract between services, clients, or layers

DTOs trade object richness for stability and clarity at integration points.

Where to use it

DTOs are worth the effort when:

  • You are exposing public APIs
  • Multiple clients consume your service, each with its own pace of evolution
  • You want to keep domain classes internal to your application

In C#, DTOs typically appear as record types in API projects or as message contracts in messaging systems. The article will show mapping patterns between domain objects and DTOs and how to keep them from overflowing with accidental complexity.
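
A minimal sketch with hypothetical names: the domain entity stays internal, the record DTO is the wire contract, and a small mapping method keeps the two from drifting together.

```csharp
// Internal domain entity: rich where it needs to be, never serialized directly.
public sealed class Employee
{
    public Employee(int id, string fullName, decimal salary)
    {
        Id = id;
        FullName = fullName;
        Salary = salary;
    }

    public int Id { get; }
    public string FullName { get; }
    public decimal Salary { get; } // sensitive: must not cross the wire
}

// DTO: a flat, serializable contract exposing only what clients need.
public sealed record EmployeeDto(int Id, string FullName);

public static class EmployeeMapping
{
    public static EmployeeDto ToDto(this Employee employee) =>
        new(employee.Id, employee.FullName);
}
```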

What comes next

The rest of this series will go pattern by pattern:

  • Each pattern gets its own post
  • Each post includes C# examples, tests where relevant, and context from real projects
  • The focus stays on tradeoffs, not worship of diagrams

You can read the series start to finish, or you can drop directly into the pattern that matches the pain in your current system and work outward from there.

If your code already resembles these patterns, this series gives you language and structure. If it does not, the upcoming posts will show how to reshape it piece by piece without pausing delivery.

The post Enterprise Patterns, Real Code: Implementing Fowler’s Ideas in C# first appeared on Chris Woody Woodruff | Fractional Architect.


SEPs Are Moving to Pull Requests


We’re updating how Specification Enhancement Proposals (SEPs) are submitted and managed. Starting today, SEPs will be created as pull requests to the seps/ directory instead of GitHub issues.

Why the Change?

When we introduced SEPs in July, we chose GitHub Issues as our starting point. Issues are familiar to developers, low-friction, and got us up and running quickly. But as more proposals have come through the process, we’ve identified some key pain points:


Solving Advent of Code in Rust, With Just Enough AI


A year ago, I wrote a blog post about solving Advent of Code puzzles using Rust as the implementation language. I believe it’s still relevant if you plan to use Rust this year. In one section, I advised limiting the use of AI and demonstrated how to disable the relevant functionality in RustRover, either partially or completely, while solving puzzles. Just a year later, we live in a very different world when it comes to using AI in software development. Yet here’s what Eric Wastl, the creator of Advent of Code, writes about using AI:

Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve – no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.

In this blog post, I want to argue with Eric. After all, when we go to a gym, isn’t it because there are specific tools there that help us get stronger? All those dumbbells, kettlebells, barbells with weight plates, pull-up bars, and various machines – we use them for a reason, right? We want to get stronger, so we go to a gym and use the tools. Why not use the tools that help us grow our coding skills?

Note the shift here. I fully agree with Eric: we shouldn’t use AI to solve the puzzles, but we can (and even should) use AI along the way. Why? Because being able to apply AI to writing code is a must-have skill in today’s world. Also, using AI isn’t a simple yes-or-no decision – it’s a spectrum. I’ll elaborate on that shortly. But first, I’d like to invite you to join the Solving Advent of Code 2025 in Rust contest and share the following message from the RustRover team.

Solve Advent of Code 2025 Puzzles in Rust and Win Prizes

Before we explain how to enter this year’s contest, we’d like to address last year’s Advent of Code in Rust. Unfortunately, we were unable to send the prizes to the winners because we overlooked an important logistical detail when launching the competition – we didn’t ask participants to ensure that their GitHub profiles included an email address or social media handle. We’re truly sorry about this.

To avoid the same issue this year, please make sure your email or social media handle is listed on your GitHub profile, so Santa can deliver your well-earned gifts. 🎁

As a gesture of appreciation, we’d also like to congratulate the three winners of the 2024 challenge, and we’re ready to send out the long-overdue prizes. Well done – great minds, great solutions!

  1. Duro – 4900
  2. Mark Janssen – 4769
  3. Michel Krämer – 4742

Thank you for participating and for your patience. We hope you’ll join us again this year, continue solving Advent of Code challenges in Rust, and keep contributing to the Rust community.

How to Enter the Contest

  • Make sure your GitHub profile includes an email address or social media handle.
  • Go to the Leaderboard section of your Advent of Code profile and enter one of the codes below:
    • Leaderboard 1: 4223313-0557c16e
    • Leaderboard 2: 2365659-de227312
  • Complete at least three Advent of Code puzzles in Rust.
  • Share your solutions on GitHub and add aoc-2025-in-rust to the Topics field of your repository. To do this, click the gear icon in the top right-hand corner of your repository page and edit the Topics list.

By competing for the top positions on our leaderboards, you can win one of the Amazon Gift Card prizes. As a small apology for last year’s issue, we’re offering five prizes instead of three:

  • 1st place – USD 150
  • 2nd place – USD 100
  • 3rd place – USD 70
  • 4th place – USD 50
  • 5th place – USD 30

Plus, USD 20 gift cards for five randomly selected participants.

GitHub Template

We’ve prepared a GitHub template to help you quickly set up your Advent of Code project in Rust. You don’t have to use it for your solutions, but it streamlines the setup and lets you focus on what really matters.

To use it:

  1. Log in to GitHub.
  2. Click Use this template (please don’t fork).
  3. Once the setup is complete, clone the project in RustRover.
  4. Don’t forget to add aoc-2025-in-rust to the Topics field of your repository. 

Skills we train while solving coding puzzles

Alright, if you aim to compete on the leaderboards, then no AI for you. Code completion in an IDE? Well, I don’t know – maybe ed and rustc are all you need to demonstrate your puzzle-solving power. That way, you show that it’s all about the speed of your brain, your keyboard, your CPU, your network card, and your internet provider. Read the rest if you’re not competing with anyone.

Advent of Code is great precisely because it exercises so many real-world engineering muscles. Some of these muscles benefit from AI spotting you; others atrophy if you let AI do the heavy lifting. Here’s a closer look at which skills belong in each category.

Structuring code for a single puzzle and for the whole competition. Advent of Code puzzles are small, but the whole event is long. Structuring your solution so it doesn’t become a pile of spaghetti by Day 7 is a real skill. Should you use AI here? Absolutely yes – as a reviewer, not as a decision-maker. Ask AI to suggest module layouts, compare different folder structures, or propose ways to reuse code across days. But don’t outsource the structural thinking itself. Knowing how to architect small but flexible solutions is one of the main professional skills AoC trains, and AI should support your design, not replace it.

Reading the problem text and coming up with an initial idea. This skill is core to the spirit of Advent of Code. Reading carefully, extracting requirements, noticing tricky edge cases, and forming an initial idea – that’s exactly what Eric wants humans to practice. And he’s right: don’t use AI here. Don’t ask for summaries, hints, or solution outlines. Let your own brain wrestle with the puzzle text. This is one of the purest forms of algorithmic problem solving, and it’s the part AI takes away if you let it.

Choosing the right library and the right level of abstraction. Rust has plenty of useful crates, but AoC often rewards sticking to the standard library. Should AI help? Sure – but in moderation. Asking, "Is there a crate for fast grid manipulation?" or "Is there a simple way to parse this with nom?" mirrors real-world development. As long as you make the final call yourself, AI here acts like a knowledgeable colleague pointing you toward options, not handing you the solution.

Choosing the right data structure. This is both an AoC skill and a general CS one. Selecting between vectors, hash maps, BTreeMaps, VecDeque, or a custom struct requires understanding the trade-offs. AI can help explain those trade-offs or remind you of performance characteristics. But don’t ask AI which data structure solves the puzzle. The puzzle is in making that choice. Use AI to deepen understanding, not to skip the thinking.

Parsing the input into a convenient structure. AI shines here. Parsing is often tedious, repetitive, and not the focus of the puzzle. If you’d rather not spend 20 minutes writing yet another loop splitting lines on spaces, let AI write the initial parser. You’ll still check it, tweak it, and integrate it with your own logic, but AI can save your cognitive energy for the interesting bits.

Choosing the right algorithm. This is the heart of competitive puzzle solving. Deciding whether something requires BFS, DP, a custom state machine, or a greedy approach is a deeply human skill – and one that Advent of Code trains extremely well. This is another area where I’d say: no AI. If you rely on the model to pick the algorithm, you’ve skipped the actual puzzle. You can use AI afterward to compare approaches or learn alternatives, but not during the solving phase.

Picking the right language feature for the job. Rust is full of elegant features – iterators, pattern matching, ownership tricks, generics, traits, lifetimes. Sometimes AI can remind you of a syntactic trick or propose a more idiomatic expression of your idea. That’s fine, as long as the idea itself is yours. Using AI to teach you small idioms or propose cleaner code is actually great training, but avoid asking AI to “rustify” a solution you don’t understand.

Adding a visualization. Visualizations aren’t typically required, but they’re fun and often deepen understanding. Here, AI is extremely useful – whether generating a quick plot with plotters, helping build a tiny TUI with ratatui, or producing a debug print layout. This is auxiliary work, not the core puzzle, so go ahead and use the tools.

Testing your code. Do you need tests for AoC? Strictly speaking, no. But writing a couple of targeted tests is a great habit: testing parsing, edge cases, or parts of your algorithm. AI is a good assistant here: it can generate test scaffolding, propose property tests, or create sample data variations. As long as you understand what the tests check, this is a safe area to lean on AI.

Benchmarking solutions. Benchmarking is a professional skill AoC can absolutely help train – especially on later days, when naive solutions melt your CPU. AI can help you set up criterion benchmarks or interpret microbenchmarking results, but you should decide what to measure and why. Benchmarking is partly technical and partly philosophical: what trade-offs matter? AI can help with the technical part.

Learning stuff along the way. This is the most important skill of all. Every Advent of Code teaches something: an algorithm you forgot, a data structure you never used, a Rust feature you always meant to try. Learning with AI is natural and encouraged. Just ask it questions like a mentor, not like a puzzle-solver-for-hire. Explanation, context, and examples? Great. Solutions to the actual puzzle? Skip those.

Learning how to prompt coding agents to get a fully functional solution. Remember, Eric Wastl writes, “If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.” But is that really true? In my experience, not quite. Most “AI prompting exercises” out there are either artificial or oversimplified. Advent of Code puzzles, on the other hand, are perfect precisely because they aren’t designed for AI. Many of them can’t be solved in one shot, even by a strong coding agent, such as Junie or Claude Agent. They require iterative refinement, mid-course corrections, and careful steering – exactly the techniques you need to master if you want to use AI effectively in real-world development. In other words, these puzzles become some of the best training grounds for learning how to prompt coding agents. You’ll learn how to break down problems, feed partial context, debug collaboratively with the model, and guide it away from dead ends. These are practical, valuable skills, and Advent of Code offers an endless supply of opportunities to practice them.

Implementing the personal AI strategy for AoC in RustRover

At this point, the key question becomes: which skills do you want to train this year? There’s no single correct answer. Maybe you want to sharpen pure algorithmic thinking with zero AI help. Maybe you want to practice integrating AI into your daily workflow. Maybe you want something in between.

The important part is: make this a conscious choice, not an accidental default.

Sample strategies

Here are a few possible “AI strategies” you can adopt:

  • Strategy A: Pure human mode. No AI, no inline completion, no chat. You open RustRover, you read the problem, you write the code. This maximizes training of problem understanding, algorithm selection, data structures, and Rust fluency. It’s also the closest to what Eric Wastl has in mind.
  • Strategy B: Assisted implementation mode. You solve the puzzle on paper (or in your head) first: understand the problem, pick the algorithm, decide on data structures. Only then do you let AI help with implementation details: parsing, boilerplate, small refactorings, docs. This is a great mode if you want to protect the “thinking” parts while still practicing how to collaborate with AI in code.
  • Strategy C: Agent steering mode. You deliberately practice guiding an AI coding agent toward a fully working solution. You still read and understand the puzzle, but you treat the model as a junior pair programmer: you prompt, correct, re-prompt, adjust the approach, and iterate. This is ideal if your goal is to improve at prompting coding models, debugging their output, and managing multi-step interactions.

You can even mix and match strategies across puzzle parts and days. For example, you might tackle Part 1 in Pure human mode to fully engage with the core problem, then switch to Assisted implementation or Agent steering for Part 2, where the twist often builds on the same logic. And on particularly difficult days, you might choose a more AI-assisted strategy from the start.

Using RustRover’s AI Assistant chat

For this section, I’m talking about direct chat with a model inside RustRover’s AI Assistant, not higher-level agents like Junie. You’re essentially talking to the model about your codebase and puzzle, asking it to:

  • explain parts of your solution or the standard library;
  • suggest refactorings and idiomatic Rust patterns;
  • help with parsing and data wrangling;
  • generate tests or benchmarks.

The goal is to keep you in charge of the solution, while the model helps with “muscle work” and with explanations.

If you want to train prompting as a skill, treat each puzzle as a mini lab:

  • Set constraints explicitly. “I already chose the algorithm: Dijkstra’s algorithm on a grid. Don’t change the approach, just help me implement it idiomatically in Rust.”
  • Provide context. Paste the relevant part of the puzzle description and your current code, and explain what you’re stuck on: “Parsing is done, I now need to maintain a priority queue of states. Help me implement this using BinaryHeap.”
  • Iterate, don’t restart. Instead of “rewrite everything”, use prompts like “Here is the current solution and the bug I see. Propose a minimal fix.”

This way, you’re not just getting answers; you’re practicing how to drive a coding model effectively.

Tweaking inline completion settings

Finally, inline completion can quietly shape the way you write code – sometimes too much. For Advent of Code, consider tuning it to match your chosen strategy:

  • If you’re in Pure human mode, you might want to turn inline completion off completely, or at least make it less aggressive.
  • If you’re in Assisted implementation mode, keep inline completion on, but be disciplined: accept suggestions only for clearly mechanical code (loops, parsing, simple matches), not for the core algorithm.
  • If you’re in Agent steering mode, you can let inline completion be quite active, but you should review what it proposes and ask the chat assistant to explain non-obvious pieces.

The key idea: your RustRover setup should reflect your personal AI training plan for Advent of Code, not the other way around.

Conclusion

Advent of Code remains one of the best ways to sharpen your coding skills, and AI doesn’t have to diminish that experience – it can enhance it when used intentionally. We shouldn’t let AI solve the puzzles for us, but we can absolutely let it help us write better, cleaner, and faster Rust code. The real challenge is choosing which skills you want to train: from algorithms and data structures to testing, visualization, and prompting coding agents effectively. With the right strategy, AoC becomes not just a seasonal tradition but a focused workout for both your problem-solving mind and your AI collaboration skills. RustRover gives you all the knobs and switches you need to fine-tune that strategy, from chat-based assistance to inline completion settings.

Most importantly, Advent of Code is fun – and every puzzle you attempt, no matter how you solve it, makes you a better engineer. So pick your approach, open RustRover, and go solve some puzzles.


332: 2025 Re:Invent Predictions Draft – May The Odds Be Ever In Your Favor


Welcome to episode 332 of The Cloud Pod – where the forecast is always cloudy! It’s Thanksgiving week, which can only mean one thing: AWS Re:Invent predictions! In this special episode, Justin, Jonathan, Ryan, and Matt engage in the annual tradition of drafting their best guesses for what AWS will announce at the biggest cloud conference of the year. Justin is the reigning champion (probably because he actually reads the show notes), but with a reverse snake draft order determined by dice roll, anything could happen. Will Werner announce his retirement? Is Cognito finally getting a much-needed overhaul? And just how many times will “AI” be uttered on stage? Grab your turkey and let’s get predicting!

Titles we almost went with this week:

  • Roll For Initiative: The Re:Invent Prediction Draft
  • Justin’s Winning Streak: A Study in Actually Doing Your Homework
  • Serverless GPUs and Broken Dreams: Our Re:Invent Wishlist
  • Shooting in the Dark: AWS Predictions Edition
  • We’re Never Good at This, But Here We Go Again
  • Vegas Odds: What Happens at Re:Invent, Gets Predicted Wrong

AWS Re:Invent Predictions 2025

The annual prediction draft is here! Draft order was determined by dice roll: Jonathan first, followed by Ryan, Justin, and Matt in last position. As always, it’s a reverse order format, with points awarded for each correct prediction announced during the Tuesday, Wednesday, and Thursday keynotes.

Jonathan’s Predictions

  1. Serverless GPU Support – An extension to Lambda or a different service that provides on-demand serverless GPU/inference capability. Likely with requirements for pre-warmed provisioned instances.
  2. Agentic Platform for Continuous AI Agents – A service that allows agents to run continuously with goals or instructions, performing actions periodically or on-demand in the real world. Think: running agents on a schedule that can check conditions and take automated actions.
  3. Werner Vogels Retirement Announcement – Werner will announce that this is his last Re:Invent keynote and that he is retiring.

Ryan’s Predictions

  1. New Trainium 3 Chips, Inferentia, and Graviton Chips – New generation of AWS custom silicon across training, inference, and general compute.
  2. Expanded Model Availability in Bedrock – AWS will significantly expand the number of models available in Bedrock, potentially via partnerships or integrations with additional providers.
  3. Major Refresh to AWS Organizations – UI-based or functionality refresh providing better visibility into SCPs, OU mappings, and stack sets across organizations.

Justin’s Predictions

  1. New Nova Model with Multi-modal Support – Launch of Nova Premier or Nova Sonic with multi-modal capabilities, bringing Amazon’s foundational model to the next level.
  2. OpenAI Partnership Announcement – AWS and OpenAI will announce a strategic partnership, potentially bringing OpenAI models to Bedrock (likely announced on stage).
  3. Advanced Agentic AI Capabilities for Security Hub – Enhanced features for Security Hub adding Agentic AI to help automate SOC team operations.

Matt’s Predictions

  1. Model Router for Bedrock – A service to route LLM queries to different AI models, simplifying the process of testing and selecting models for different use cases.
  2. Well-Architected Framework Expansion – New lenses or significant updates to the Well-Architected Framework beyond the existing Generative AI and Sustainability lenses.
  3. End User Authentication That Doesn’t Suck – A new or significantly revamped end-user authentication service (essentially Cognito 2.0) that actually works well for client portals.

Tiebreaker: How Many Times Will “AI” or “Artificial Intelligence” Be Said On Stage?

If we end in a tie (or nobody gets any predictions correct, which is historically possible), we go to the tiebreaker!

Host guesses:

  • Matt: 200
  • Justin: 160
  • Ryan: 99
  • Jonathan: 1

Honorable Mentions

Ideas that didn’t make the cut but might just surprise us:

Jonathan:

  • Mathematical proof/verification that text was generated by Amazon’s LLMs (watermarking for AI output)
  • Marketplace for AI work – publish and monetize AI-based tools with Amazon handling billing
  • New consumer device to accompany Nova models (smarter Alexa replacement with local inference)

Ryan:

  • FinOps AI recommender for model usage and cost optimization
  • Savings plans or committed use discounts for Bedrock use cases

Matt:

  • Sustainability/green dashboard improvements
  • AI-specific features for Aurora or DSQL

Justin:

  • Big S3 vectors announcement and integration to Bedrock
  • FinOps service for Kubernetes
  • Amazon Q Developer with autonomous coding agents
  • New GPU architecture combining training/inference/Graviton capabilities
  • Amazon Bedrock model marketplace for revenue share on fine-tuned models

Quick Hits From the Episode

  • 00:02 – Is it really Re:Invent already? The existential crisis begins.
  • 01:44 – Jonathan reveals why Justin always wins: “Because you read the notes.”
  • 02:54 – Matt hasn’t been to a Re:Invent session since Image Builder launched… eight years ago.
  • 05:03 – Jonathan comes in hot with serverless GPU support prediction.
  • 06:57 – The inference vs. training cost debate – where’s the real ROI?
  • 09:30 – Matt’s picks get systematically destroyed by earlier drafters.
  • 14:09 – The OpenAI partnership prediction causes draft chaos.
  • 16:24 – Jonathan drops the Werner retirement bombshell.
  • 19:12 – Justin’s Security Hub prediction: “Please automate the SOC teams.”
  • 19:46 – Everyone hates Cognito. Matt’s prediction resonates with the universe.
  • 21:47 – Tiebreaker time: Jonathan goes with 1 out of pure spite.
  • 24:08 – Honorable mentions include mathematical AI verification and a marketplace for AI work.

Re:Invent Tips (From People Who Aren’t Going)

Since none of us are attending this year, here’s what we remember from the good old days:

  • Chalk Talks remain highly respected and valuable for deep technical content
  • Labs and hands-on sessions are worth your time more than keynotes you can watch online
  • Networking on the expo floor and in hallways is where the real value happens
  • Don’t try to see everything – focus on what matters to your work
  • Stay hydrated – Vegas is dry and conferences are exhausting

Closing

And that is the week in the cloud! We’re taking Thanksgiving week off, so there won’t be an episode during Re:Invent. We’ll record late that week and have a dedicated Re:Invent recap episode the following week. If you’re heading to Las Vegas, have a great time and let us know how it goes!

Visit our website, the home of The Cloud Pod, where you can join our newsletter and Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod.





Download audio: https://episodes.castos.com/5e2d2c4b117f29-10227663/2248333/c1e-rodobw3vd7f0wpw1-wwpr3jx7id5x-ua5nu7.mp3

Episode 13: UWP Apps with Daniel Paulino


This week our guest is Daniel Paulino, builder of multiple popular UWP apps like Nightingale REST client and Ambie White Noise. You can find Daniel on GitHub https://github.com/dpaulino and on Bluesky https://bsky.app/profile/kidjenius.bsky.social





Download audio: https://media.transistor.fm/a1784dd0/280d226d.mp3

ESLint v10.0.0-alpha.1 released


Highlights

This version of ESLint is not ready for production use and is provided to gather feedback from the community before releasing the final version. Please let us know if you have any problems or feedback by creating issues on our GitHub repo.

Most of the highlights of this release are breaking changes, and are discussed further in the migration guide. There are summaries of the significant changes below. (Less significant changes are included in the migration guide.)

This prerelease version of ESLint has a separate documentation section.

Removed deprecated SourceCode methods

The following SourceCode methods are no longer available:

  • getTokenOrCommentBefore() - Use getTokenBefore() with the { includeComments: true } option instead
  • getTokenOrCommentAfter() - Use getTokenAfter() with the { includeComments: true } option instead
  • isSpaceBetweenTokens() - Use isSpaceBetween() instead
  • getJSDocComment() - No replacement

Users of plugins that haven’t updated their code yet can use the @eslint/compat utility in the meantime.

Installing

Since this is a pre-release version, you will not automatically be upgraded by npm. You must specify the next tag when installing:

npm i eslint@next --save-dev

You can also specify the version directly:

npm i eslint@10.0.0-alpha.1 --save-dev

Migration Guide

As there are a lot of changes, we’ve created a migration guide describing the breaking changes in great detail along with the steps you should take to address them. We expect that most users should be able to upgrade without any build changes, but the migration guide should be a useful resource if you encounter problems.

Breaking Changes

  • fa31a60 feat!: add name to configs (#20015) (Kirk Waiblinger)
  • 3383e7e fix!: remove deprecated SourceCode methods (#20137) (Pixel998)
  • 501abd0 feat!: update dependency minimatch to v10 (#20246) (renovate[bot])
  • ca4d3b4 fix!: stricter rule tester assertions for valid test cases (#20125) (唯然)

Features

Documentation

Chores

  • 0b14059 chore: package.json update for @eslint/js release (Jenkins)
  • d6e7bf3 ci: bump actions/checkout from 5 to 6 (#20350) (dependabot[bot])
  • 139d456 chore: require mandatory headers in rule docs (#20347) (Milos Djermanovic)
  • 3b0289c chore: remove unused .eslintignore and test fixtures (#20316) (Pixel998)
  • a463e7b chore: update dependency js-yaml to v4 [security] (#20319) (renovate[bot])
  • ebfe905 chore: remove redundant rules from eslint-config-eslint (#20327) (Milos Djermanovic)
  • 88dfdb2 test: add regression tests for message placeholder interpolation (#20318) (fnx)
  • 6ed0f75 chore: skip type checking in eslint-config-eslint (#20323) (Francesco Trotta)