We just concluded Python + Agents, a six-part livestream series exploring the foundational concepts behind building AI agents in Python using the Microsoft Agent Framework.
All of the materials from the series are linked below so you can keep learning from them:
Spanish speaker? Check out the Spanish version of the series.
🙋🏽‍♂️ Have follow-up questions? Join the weekly Python + AI office hours on the Foundry Discord or the weekly Agent Framework office hours.
In the first session of our Python + Agents series, we'll kick things off with the fundamentals: what AI agents are, how they work, and how to build your first one using the Microsoft Agent Framework. We'll start with the core anatomy of an agent, then walk through how tool calling works in practice—beginning with a single tool, expanding to multiple tools, and finally connecting to tools exposed through local MCP servers. We'll conclude with the supervisor agent pattern, where a single supervisor agent coordinates subtasks across multiple subagents, by treating each agent as a tool. Along the way, we'll share tips for debugging and inspecting agents, like using the DevUI interface from Microsoft Agent Framework for interacting with agent prototypes.
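To make the supervisor pattern concrete, here is a minimal sketch in plain Python. It does not use the actual Microsoft Agent Framework API; the class and tool names are illustrative, and a trivial keyword match stands in for the model-driven tool selection a real supervisor agent would perform.

```python
# Conceptual sketch of the supervisor pattern: each subagent is wrapped as a
# named "tool" the supervisor can dispatch to, just like any other tool call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentTool:
    """A subagent exposed to the supervisor as a named tool."""
    name: str
    description: str
    run: Callable[[str], str]

class SupervisorAgent:
    def __init__(self, tools: list[AgentTool]):
        self.tools = {t.name: t for t in tools}

    def route(self, task: str) -> str:
        # A real supervisor lets the model pick the tool; a keyword match
        # keeps this sketch runnable without a model call.
        for tool in self.tools.values():
            if tool.name in task.lower():
                return tool.run(task)
        return "no suitable subagent found"

weather = AgentTool("weather", "Answers weather questions",
                    lambda task: "Forecast: sunny")
travel = AgentTool("travel", "Plans trips",
                   lambda task: "Itinerary: 3 days in Lisbon")

supervisor = SupervisorAgent([weather, travel])
print(supervisor.route("What is the weather today?"))  # Forecast: sunny
```

The key idea is that "agent as tool" needs no special machinery: a subagent is just a callable with a name and description, so the supervisor can reason about it exactly as it reasons about ordinary tools.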
In the second session of our Python + Agents series, we'll extend agents built with the Microsoft Agent Framework by adding two essential capabilities: context and memory. We'll begin with context, commonly known as Retrieval‑Augmented Generation (RAG), and show how agents can ground their responses using knowledge retrieved from local data sources such as SQLite or PostgreSQL. This enables agents to provide accurate, domain‑specific answers based on real information rather than model hallucination. Next, we'll explore memory—both short‑term, thread‑level context and long‑term, persistent memory. You'll see how agents can store and recall information using solutions like Redis or open‑source libraries such as Mem0, enabling them to remember previous interactions, user preferences, and evolving tasks across sessions. By the end, you'll understand how to build agents that are not only capable but context‑aware and memory‑efficient, resulting in richer, more personalized user experiences.
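The RAG idea from this session can be sketched in a few lines with SQLite. This is a toy version: the table, function names, and naive keyword retrieval are assumptions for illustration (a real system would use full-text or vector search), and the model call itself is omitted — only the grounding step is shown.

```python
# Minimal RAG sketch: retrieve rows from a local SQLite knowledge source and
# inject them into the prompt so the model answers from real data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, content TEXT)")
conn.executemany("INSERT INTO docs (content) VALUES (?)", [
    ("The return policy allows refunds within 30 days.",),
    ("Shipping to Europe takes 5-7 business days.",),
])

def retrieve(keyword: str, limit: int = 3) -> list[str]:
    # Naive keyword retrieval; production systems would use FTS or embeddings.
    rows = conn.execute(
        "SELECT content FROM docs WHERE content LIKE ? LIMIT ?",
        (f"%{keyword}%", limit),
    ).fetchall()
    return [r[0] for r in rows]

def grounded_prompt(question: str, keyword: str) -> str:
    # Build a prompt that grounds the model in retrieved context.
    context = "\n".join(retrieve(keyword))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("Can I get a refund?", "refund"))
```

Swapping SQLite for PostgreSQL, or the LIKE query for a vector search, changes only the `retrieve` function — the grounding pattern stays the same.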
In the third session of our Python + Agents series, we'll focus on two essential components of building reliable agents: observability and evaluation. We'll begin with observability, using OpenTelemetry to capture traces, metrics, and logs from agent actions. You'll learn how to instrument your agents and use a local Aspire dashboard to identify slowdowns and failures. From there, we'll explore how to evaluate agent behavior using the Azure AI Evaluation SDK. You'll see how to define evaluation criteria, run automated assessments over a set of tasks, and analyze the results to measure accuracy, helpfulness, and task success. By the end of the session, you'll have practical tools and workflows for monitoring, measuring, and improving your agents—so they're not just functional, but dependable and verifiably effective.
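To illustrate the evaluation side, here is a minimal harness in plain Python. The Azure AI Evaluation SDK provides a far richer version of this; the criteria functions and the stub agent below are assumptions made purely to show the shape of the workflow: define criteria, run them over a task set, aggregate scores.

```python
# Toy evaluation harness: each criterion scores one (question, answer) pair;
# scores are averaged over a set of tasks.
def contains_expected(answer: str, expected: str) -> float:
    # crude accuracy check: does the answer mention the expected fact?
    return 1.0 if expected.lower() in answer.lower() else 0.0

def is_concise(answer: str, max_words: int = 50) -> float:
    return 1.0 if len(answer.split()) <= max_words else 0.0

def evaluate(tasks, agent) -> dict:
    scores = {"accuracy": [], "conciseness": []}
    for question, expected in tasks:
        answer = agent(question)
        scores["accuracy"].append(contains_expected(answer, expected))
        scores["conciseness"].append(is_concise(answer))
    return {k: sum(v) / len(v) for k, v in scores.items()}

def stub_agent(question: str) -> str:
    # stands in for a real model-backed agent
    return "Paris is the capital of France."

tasks = [("What is the capital of France?", "Paris")]
print(evaluate(tasks, stub_agent))  # {'accuracy': 1.0, 'conciseness': 1.0}
```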
In Session 4 of our Python + Agents series, we'll explore the foundations of building AI‑driven workflows using the Microsoft Agent Framework: defining workflow steps, connecting them, passing data between them, and introducing simple ways to guide the path a workflow takes. We'll begin with a conceptual overview of workflows and walk through their core components: executors, edges, and events. You'll learn how workflows can be composed of simple Python functions or powered by full AI agents when a step requires model‑driven behavior. From there, we'll dig into conditional branching, showing how workflows can follow different paths depending on model outputs, intermediate results, or lightweight decision functions. We'll introduce structured outputs as a way to make branching more reliable and easier to maintain—avoiding vague string checks and ensuring that workflow decisions are based on clear, typed data. We'll discover how the DevUI interface makes it easier to develop workflows by visualizing the workflow graph and surfacing the streaming events during a workflow's execution. Finally, we'll dive into an E2E demo application that uses workflows inside a user-facing application with a frontend and backend.
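The branching-on-structured-outputs idea can be sketched without the framework itself. In this toy version, a dataclass stands in for a model's structured output, and the "edges" are plain Python control flow; all names are illustrative.

```python
# Toy workflow: executors are functions, and a typed (structured) result
# drives branching instead of fragile string checks on raw model output.
from dataclasses import dataclass

@dataclass
class Classification:
    label: str        # structured output, e.g. "refund" or "other"
    confidence: float

def classify(text: str) -> Classification:
    # stands in for a model-backed executor returning a structured output
    if "refund" in text.lower():
        return Classification("refund", 0.9)
    return Classification("other", 0.5)

def handle_refund(text: str) -> str:
    return "routed to refunds team"

def handle_other(text: str) -> str:
    return "routed to general support"

def run_workflow(text: str) -> str:
    result = classify(text)
    # conditional edge: the branch condition reads clear, typed fields
    if result.label == "refund" and result.confidence >= 0.8:
        return handle_refund(text)
    return handle_other(text)

print(run_workflow("I want a refund"))  # routed to refunds team
```

Because the branch condition inspects `result.label` and `result.confidence` rather than searching a free-form string, adding a new label or threshold later is a local, type-checked change.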
In Session 5 of our Python + Agents series, we'll go beyond workflow fundamentals and explore how to orchestrate advanced, multi‑agent workflows using the Microsoft Agent Framework. This session focuses on patterns that coordinate multiple steps or multiple agents at once, enabling more powerful and flexible AI‑driven systems. We'll begin by comparing sequential vs. concurrent execution, then dive into techniques for running workflow steps in parallel. You'll learn how fan‑out and fan‑in edges enable multiple branches to run at the same time, how to aggregate their results, and how concurrency allows workflows to scale across tasks efficiently. From there, we'll introduce two multi‑agent orchestration approaches that are built into the framework. We'll start with handoff, where control moves entirely from one agent to another based on workflow logic, which is useful for routing tasks to the right agent as the workflow progresses. We'll then look at Magentic, a planning‑oriented supervisor that generates a high‑level plan for completing a task and delegates portions of that plan to other agents. Finally, we'll wrap up with a demo of an E2E application that showcases a concurrent multi-agent workflow in action.
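The fan-out/fan-in shape maps naturally onto `asyncio`. The sketch below is not framework code — the branch names are made up — but it shows the pattern: launch several steps concurrently, then gather their results in one aggregation point.

```python
# Fan-out/fan-in sketch with asyncio: branches run concurrently (fan-out)
# and asyncio.gather collects their results in order (fan-in).
import asyncio

async def branch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # simulate an agent or tool call
    return f"{name} done"

async def fan_out_fan_in() -> list[str]:
    results = await asyncio.gather(
        branch("research", 0.01),
        branch("summarize", 0.02),
        branch("review", 0.01),
    )
    return list(results)

print(asyncio.run(fan_out_fan_in()))
```

Note that `asyncio.gather` preserves the order of its arguments regardless of which branch finishes first, which makes the aggregation step deterministic.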
In the final session of our Python + Agents series, we'll explore how to incorporate human‑in‑the‑loop (HITL) interactions into agentic workflows using the Microsoft Agent Framework. This session focuses on adding points where a workflow can pause, request input or approval from a user, and then resume once the human has responded. HITL is especially important because LLMs can produce uncertain or inconsistent outputs, and human checkpoints provide an added layer of accuracy and oversight. We'll begin with the framework's requests‑and‑responses model, which provides a structured way for workflows to ask questions, collect human input, and continue execution with that data. We'll move onto tool approval, one of the most frequent reasons an agent requests input from a human, and see how workflows can surface pending tool calls for approval or rejection. Next, we'll cover checkpoints and resuming, which allow workflows to pause and be restarted later. This is especially important for HITL scenarios where the human may not be available immediately. We'll walk through examples that demonstrate how checkpoints store progress, how resuming picks up the workflow state, and how this mechanism supports longer‑running or multi‑step review cycles. This session brings together everything from the series—agents, workflows, branching, orchestration—and shows how to integrate humans thoughtfully into AI‑driven processes, especially when reliability and judgment matter most.
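The checkpoint-and-resume mechanic can be reduced to a small sketch: serialize the workflow state at the approval point, then restore it whenever the human responds. The framework's actual requests-and-responses model is much richer; the state fields and function names here are illustrative only.

```python
# Minimal HITL sketch: pause at a tool-approval checkpoint, persist state as
# JSON, and resume later with the human's decision.
import json

def run_until_approval(task: str) -> str:
    # the workflow reaches a tool call that needs human sign-off, so it
    # checkpoints its state instead of executing the tool
    checkpoint = {"task": task, "pending_tool": "send_email", "step": 2}
    return json.dumps(checkpoint)

def resume(checkpoint_json: str, approved: bool) -> str:
    state = json.loads(checkpoint_json)
    if approved:
        return f"executed {state['pending_tool']} for task '{state['task']}'"
    return f"skipped {state['pending_tool']}; asked for an alternative"

cp = run_until_approval("notify customer")
# ... possibly hours later, once the human responds ...
print(resume(cp, approved=True))
```

Because the checkpoint is plain serialized data, it can sit in a database or queue for as long as the review takes — which is exactly what long-running HITL scenarios require.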


This blog post is originally published on https://blog.elmah.io/when-not-to-use-the-repository-pattern-in-ef-core/
If you design an application with a data source, the repository pattern often comes to mind as a prominent choice. In fact, many developers see it as the default choice. However, the pattern does not help in every situation. In this post, I will pinpoint some cases where the repository pattern is not the best choice.

The repository pattern is a design pattern that acts as an intermediate layer between data access and business logic. It abstracts the data source and hides the implementation details, providing a clean representation of data manipulation as objects and lists.
Let us start by looking at how a repository pattern can be implemented with EF Core.
Start by adding a new model named Movie:
public class Movie
{
    public Guid Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public string Director { get; set; } = string.Empty;
    public int ReleaseYear { get; set; }
    public double ImdbRating { get; set; }
    public DateTime CreatedAtUtc { get; set; } = DateTime.UtcNow;
}

Next, add an IMovieRepository interface with the basic methods for adding, getting, and saving movies:
public interface IMovieRepository
{
    Task AddAsync(Movie movie);
    Task<Movie?> GetByIdAsync(Guid id);
    Task<List<Movie>> GetTopRatedAsync(double minRating);
    Task SaveChangesAsync();
}

Add an implementation of that interface using EF Core:
public class MovieRepository : IMovieRepository
{
    private readonly AppDbContext _context;

    public MovieRepository(AppDbContext context)
    {
        _context = context;
    }

    public async Task AddAsync(Movie movie)
    {
        await _context.Movies.AddAsync(movie);
    }

    public async Task<Movie?> GetByIdAsync(Guid id)
    {
        return await _context.Movies.FindAsync(id);
    }

    public async Task<List<Movie>> GetTopRatedAsync(double minRating)
    {
        return await _context.Movies
            .Where(m => m.ImdbRating >= minRating)
            .OrderByDescending(m => m.ImdbRating)
            .ToListAsync();
    }

    public async Task SaveChangesAsync()
    {
        await _context.SaveChangesAsync();
    }
}

Finally, I'll add a service class that shows how to use the movie repository:
public class MovieService
{
    private readonly IMovieRepository _repository;

    public MovieService(IMovieRepository repository)
    {
        _repository = repository;
    }

    public async Task<Guid> CreateMovieAsync(
        string title,
        string director,
        int releaseYear,
        double rating)
    {
        var movie = new Movie
        {
            Id = Guid.NewGuid(),
            Title = title,
            Director = director,
            ReleaseYear = releaseYear,
            ImdbRating = rating
        };

        await _repository.AddAsync(movie);
        await _repository.SaveChangesAsync();
        return movie.Id;
    }

    public async Task<List<Movie>> GetHighlyRatedMoviesAsync()
    {
        return await _repository.GetTopRatedAsync(8.0);
    }
}

If you are writing CRUD applications, implementing a data layer like this probably looks very familiar.
The repository pattern promises several key advantages: it decouples business logic from the underlying data access technology, makes unit testing easier by letting you mock the repository interface, and centralizes query logic in one place.
Like any tool, though, it offers leverage only when used in the right place. If you recognize these needs in your code, the repository pattern is a good fit.
We have seen the usefulness of the repository pattern. Now, returning to our original question: in what situations can you avoid the repository pattern?
For example, consider a repository whose methods are nothing but thin wrappers around simple operations:
public class UserRepository : IUserRepository
{
    private readonly AppDbContext _context;

    public UserRepository(AppDbContext context)
    {
        _context = context;
    }

    public User GetById(int id) => _context.Users.Find(id);
    public void Add(User user) => _context.Users.Add(user);
}

A repository like this merely forwards calls to DbSet and AppDbContext. EF Core already lets you deal with entities as collections and objects, so if you don't need to add conditions, validation, or projections to these operations, you can choose simplicity and use the DbContext directly. When wrapping an ORM in repositories, you are often hiding powerful features (like IQueryable for deferred execution or Include for eager loading) behind a more restrictive interface. The same concern applies to read-heavy code, where you often want projections (Select) tailored to the view rather than full entities.

The repository pattern is something you have probably used on your development journey. Why not? It is one of the most popular choices for abstracting data access. However, abstractions have a hidden cost, as highlighted in this post. I identified a few scenarios where you can skip the pattern and barely lose anything. If you still want the benefits of the repository pattern without its limitations, the specification pattern is another option that can work. It allows for reusable query logic without the bloat of a traditional repository.