Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
149882 stories
·
33 followers

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop. Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree. But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. 
"We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet, cause cascading societal and economic harms. "Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..." "This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. 
What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."

Read more of this story at Slashdot.


AI Book Club recording of 'If Anyone Builds It, Everyone Dies'

This is a recording of our AI Book Club discussion of If Anyone Builds It, Everyone Dies: Why Superhuman AI Will Kill Us All by Nate Soares and Eliezer Yudkowsky, held March 15, 2026. Our discussion touches on a variety of topics, including whether the book's use of parables strengthens or weakens its argument, the question of whether AI can develop genuine intentions, the competitive dynamics that prevent any single company from pumping the brakes, the limits of recursive self-improvement, and what ordinary people should make of wildly conflicting predictions from leading AI thinkers. This post also includes discussion questions, key themes, and a full transcript.


Learn how to build agents and workflows in Python


We just concluded Python + Agents, a six-part livestream series where we explored the foundational concepts behind building AI agents in Python using the Microsoft Agent Framework:

  • Using agents with tools, MCP servers, and subagents
  • Adding context to agents with database calls and long-term memory with Redis or Mem0
  • Monitoring using OpenTelemetry and evaluating quality with the Azure AI Evaluation SDK
  • AI-driven workflows with conditional branching, structured outputs, and multi-agent orchestration
  • Adding human-in-the-loop with tool approval and checkpoints

All of the materials from our series are available for you to keep learning from, and linked below:

  • Video recordings of each stream
  • PowerPoint slides that you can use for reviewing or even teaching the material to your own community
  • Open-source code samples you can run yourself using frontier LLMs from GitHub Models or Microsoft Foundry Models

Spanish speaker? Check out the Spanish version of the series.

🙋🏽‍♂️ Have follow-up questions? Join the weekly Python+AI office hours on Foundry Discord or the weekly Agent Framework office hours.

Building your first agent in Python

YouTube video
📺 Watch YouTube recording

In the first session of our Python + Agents series, we'll kick things off with the fundamentals: what AI agents are, how they work, and how to build your first one using the Microsoft Agent Framework. We'll start with the core anatomy of an agent, then walk through how tool calling works in practice—beginning with a single tool, expanding to multiple tools, and finally connecting to tools exposed through local MCP servers. We'll conclude with the supervisor agent pattern, where a single supervisor agent coordinates subtasks across multiple subagents, by treating each agent as a tool. Along the way, we'll share tips for debugging and inspecting agents, like using the DevUI interface from Microsoft Agent Framework for interacting with agent prototypes.
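
The tool-calling loop at the heart of that first session can be sketched in a few lines of plain Python. This is not the Microsoft Agent Framework API — the stub model, tool registry, and message shapes below are all invented for illustration — but it shows the cycle the framework automates: the model requests a tool, the runtime executes it, and the result is fed back until the model produces a final answer.

```python
def get_weather(city: str) -> str:
    """A tool the agent can call (illustrative)."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Stub model: requests a tool call once, then answers from the result.
    In a real agent, an LLM makes this decision."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"answer": messages[-1]["content"]}

def run_agent(user_message: str) -> str:
    messages = [{"role": "user", "content": user_message}]
    while True:
        decision = fake_model(messages)
        if "tool" in decision:
            # Execute the requested tool and feed its result back to the model
            result = TOOLS[decision["tool"]](**decision["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return decision["answer"]
```

The supervisor pattern described above is the same loop one level up: each subagent is registered in the tool table, so the supervisor "calls" other agents exactly as it calls functions.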

Adding context and memory to agents

YouTube video
📺 Watch YouTube recording

In the second session of our Python + Agents series, we'll extend agents built with the Microsoft Agent Framework by adding two essential capabilities: context and memory. We'll begin with context, commonly known as Retrieval‑Augmented Generation (RAG), and show how agents can ground their responses using knowledge retrieved from local data sources such as SQLite or PostgreSQL. This enables agents to provide accurate, domain‑specific answers based on real information rather than model hallucination. Next, we'll explore memory—both short‑term, thread‑level context and long‑term, persistent memory. You'll see how agents can store and recall information using solutions like Redis or open‑source libraries such as Mem0, enabling them to remember previous interactions, user preferences, and evolving tasks across sessions. By the end, you'll understand how to build agents that are not only capable but context‑aware and memory‑efficient, resulting in richer, more personalized user experiences.
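
The RAG half of that session boils down to retrieve-then-prompt. The SQLite schema and naive keyword matching below are invented for illustration — a real agent would use the framework's context providers and a proper search strategy — but the grounding mechanic is the same: fetch relevant rows, then prepend them to the prompt so the model answers from real data.

```python
import sqlite3

# Tiny in-memory knowledge base standing in for a real data source
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (topic TEXT, content TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?)", [
    ("returns", "Items can be returned within 30 days."),
    ("shipping", "Orders ship within 2 business days."),
])

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval: keep rows whose topic appears in the question."""
    rows = conn.execute("SELECT topic, content FROM docs").fetchall()
    return [content for topic, content in rows if topic in question.lower()]

def build_prompt(question: str) -> str:
    """Ground the model by placing retrieved facts ahead of the question."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```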

Monitoring and evaluating agents

YouTube video
📺 Watch YouTube recording

In the third session of our Python + Agents series, we'll focus on two essential components of building reliable agents: observability and evaluation. We'll begin with observability, using OpenTelemetry to capture traces, metrics, and logs from agent actions. You'll learn how to instrument your agents and use a local Aspire dashboard to identify slowdowns and failures. From there, we'll explore how to evaluate agent behavior using the Azure AI Evaluation SDK. You'll see how to define evaluation criteria, run automated assessments over a set of tasks, and analyze the results to measure accuracy, helpfulness, and task success. By the end of the session, you'll have practical tools and workflows for monitoring, measuring, and improving your agents—so they're not just functional, but dependable and verifiably effective.
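
The evaluation loop from that session can be illustrated framework-free. The series uses the Azure AI Evaluation SDK; this hand-rolled scorer (all names invented) only shows the shape of the idea: run the agent over a task set, score each answer against a criterion, and aggregate into a metric you can track over time.

```python
def agent(question: str) -> str:
    """Stand-in for the agent under test."""
    return {"capital of France?": "Paris"}.get(question, "I don't know")

TASKS = [
    {"question": "capital of France?", "expected": "Paris"},
    {"question": "capital of Mars?", "expected": "N/A"},
]

def evaluate(agent_fn, tasks):
    """Run every task through the agent and compute overall accuracy."""
    results = []
    for task in tasks:
        answer = agent_fn(task["question"])
        results.append({
            "question": task["question"],
            "correct": task["expected"].lower() in answer.lower(),
        })
    accuracy = sum(r["correct"] for r in results) / len(results)
    return accuracy, results
```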

Building your first AI-driven workflows

YouTube video
📺 Watch YouTube recording

In Session 4 of our Python + Agents series, we'll explore the foundations of building AI‑driven workflows using the Microsoft Agent Framework: defining workflow steps, connecting them, passing data between them, and introducing simple ways to guide the path a workflow takes. We'll begin with a conceptual overview of workflows and walk through their core components: executors, edges, and events. You'll learn how workflows can be composed of simple Python functions or powered by full AI agents when a step requires model‑driven behavior. From there, we'll dig into conditional branching, showing how workflows can follow different paths depending on model outputs, intermediate results, or lightweight decision functions. We'll introduce structured outputs as a way to make branching more reliable and easier to maintain—avoiding vague string checks and ensuring that workflow decisions are based on clear, typed data. We'll discover how the DevUI interface makes it easier to develop workflows by visualizing the workflow graph and surfacing the streaming events during a workflow's execution. Finally, we'll dive into an E2E demo application that uses workflows inside a user-facing application with a frontend and backend.
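
A conditional edge driven by structured output — branching on a typed field rather than string-matching raw model text — can be sketched like this. The executors, the Classification type, and the routing rule are illustrative, not the Agent Framework's real API, but they show why structured outputs make branching reliable.

```python
from dataclasses import dataclass

@dataclass
class Classification:
    """Structured output: a typed decision instead of free-form text."""
    category: str       # "refund" or "question"
    confidence: float

def classify(text: str) -> Classification:
    """Executor 1: stand-in for a model call that returns structured output."""
    if "refund" in text.lower():
        return Classification("refund", 0.9)
    return Classification("question", 0.6)

def handle_refund(text: str) -> str:
    return "Routed to refunds team"

def handle_question(text: str) -> str:
    return "Routed to support FAQ"

def run_workflow(text: str) -> str:
    decision = classify(text)
    # Conditional edge: branch on the typed field, not on raw model text
    if decision.category == "refund" and decision.confidence > 0.8:
        return handle_refund(text)
    return handle_question(text)
```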

Orchestrating advanced multi-agent workflows

YouTube video
📺 Watch YouTube recording

In Session 5 of our Python + Agents series, we'll go beyond workflow fundamentals and explore how to orchestrate advanced, multi‑agent workflows using the Microsoft Agent Framework. This session focuses on patterns that coordinate multiple steps or multiple agents at once, enabling more powerful and flexible AI‑driven systems. We'll begin by comparing sequential vs. concurrent execution, then dive into techniques for running workflow steps in parallel. You'll learn how fan‑out and fan‑in edges enable multiple branches to run at the same time, how to aggregate their results, and how concurrency allows workflows to scale across tasks efficiently. From there, we'll introduce two multi‑agent orchestration approaches that are built into the framework. We'll start with handoff, where control moves entirely from one agent to another based on workflow logic, which is useful for routing tasks to the right agent as the workflow progresses. We'll then look at Magentic, a planning‑oriented supervisor that generates a high‑level plan for completing a task and delegates portions of that plan to other agents. Finally, we'll wrap up with a demo of an E2E application that showcases a concurrent multi-agent workflow in action.
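
Fan-out/fan-in is easy to see in miniature with asyncio. The stub coroutines below stand in for agents, and the framework's real edges and executors look different, but the concurrency shape is the same: launch the branches together, then aggregate their results in a single fan-in step.

```python
import asyncio

async def research_agent(topic: str) -> str:
    """Stub agent; the sleep stands in for a model call."""
    await asyncio.sleep(0)
    return f"notes on {topic}"

async def run_concurrent(topics: list[str]) -> str:
    # Fan-out: launch one agent per topic, all running concurrently
    results = await asyncio.gather(*(research_agent(t) for t in topics))
    # Fan-in: aggregate the branch results into a single output
    return "; ".join(results)

summary = asyncio.run(run_concurrent(["pricing", "reviews"]))
```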

Adding a human in the loop to agentic workflows

YouTube video
📺 Watch YouTube recording

In the final session of our Python + Agents series, we'll explore how to incorporate human‑in‑the‑loop (HITL) interactions into agentic workflows using the Microsoft Agent Framework. This session focuses on adding points where a workflow can pause, request input or approval from a user, and then resume once the human has responded. HITL is especially important because LLMs can produce uncertain or inconsistent outputs, and human checkpoints provide an added layer of accuracy and oversight. We'll begin with the framework's requests‑and‑responses model, which provides a structured way for workflows to ask questions, collect human input, and continue execution with that data. We'll move onto tool approval, one of the most frequent reasons an agent requests input from a human, and see how workflows can surface pending tool calls for approval or rejection. Next, we'll cover checkpoints and resuming, which allow workflows to pause and be restarted later. This is especially important for HITL scenarios where the human may not be available immediately. We'll walk through examples that demonstrate how checkpoints store progress, how resuming picks up the workflow state, and how this mechanism supports longer‑running or multi‑step review cycles. This session brings together everything from the series—agents, workflows, branching, orchestration—and shows how to integrate humans thoughtfully into AI‑driven processes, especially when reliability and judgment matter most.
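
The checkpoint-and-resume mechanic can be sketched with plain serialization. The checkpoint format below is invented for illustration — the framework supplies its own checkpoint storage — but the essential idea is the same: persist the paused state at the pending tool call, and resume later, possibly in another process, with the human's decision.

```python
import json

def run_until_approval(task: str) -> str:
    """Run the workflow until it needs a human, then checkpoint its state."""
    state = {"task": task, "pending_tool": "send_email", "step": 2}
    return json.dumps(state)  # persisted; the human may respond much later

def resume(checkpoint: str, approved: bool) -> str:
    """Pick the workflow back up with the human's approval decision."""
    state = json.loads(checkpoint)
    if approved:
        return f"{state['pending_tool']} executed for task: {state['task']}"
    return f"{state['pending_tool']} rejected; task halted"
```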


What's Brewing, Edition 1 - What Jonathan is Learning, Using, and Thinking

  • The Power of Physical Checklists: Inspired by aviation, Atul Gawande's The Checklist Manifesto, and Daniel Kahneman's Noise, I've been experimenting with printed, physical checklists for repetitive tasks — from producing this show to running one-on-ones. The rigor of writing precise procedures carries over into clearer communication with both humans and AI agents.
  • Small Interventions, Big Returns: A Brother P-Touch label maker. Reorganizing scattered hobby gear. 3D printing organizational tools with a new Bambu Labs P1S. None of these are revolutionary on their own, but the compounding effect of better organization — essentially building a fast index for your physical life — pays back over and over.
  • Context Shapes Focus: Switching from a home gym to working out at Planet Fitness with my brother-in-law was one of the best focus interventions I've made. The change in environment eliminated the procrastination and context-blending that came from being steps away from my computer. If you're struggling with a habit, sometimes the environment is the variable to change, not your willpower.
  • The Reading List: Good Strategy, Bad Strategy by Richard Rumelt (and its follow-up The Crux), The Art of Action by Stephen Bungay (a great framework for thinking about agentic workflows), How to Know a Person by David Brooks, and my top recommendation: 4,000 Weeks by Oliver Burkeman — a book that will help you stop looking for the productivity hack that fixes everything and start thinking about what actually matters.
  • Learning as a Habit: Right now I'm learning to drive a stick shift on a 1983 Bronco. The point isn't the skill itself — it's staying in the beginner's seat. Intentional practice, setting small goals, refining through repetition. Keeping this habit alive is more important than ever when the industry demands rapid adaptation.
  • How I'm Actually Using AI: Claude Code for one-shotting tools with clear boundaries, local environment improvements, and terminal troubleshooting. OpenClaw for experimental agents like a personalized trip planner and Home Assistant automations via YAML. Claude Co-Work for file system management and screenshot organization. Obsidian as the connective tissue — a markdown knowledge base that gives AI agents personal context to work with. And at work, spec-driven development is showing real promise for shaping agent output quality.
  • A Framework for Thinking About AI's Role: I break AI use cases into categories: automating existing workflows (where most gains are today), operational restructuring (what happens when you free humans from a task), execution of complex technical work (agents on the front lines), iterative consulting on intent and goals, and the emerging frontier of exploratory connections and strategic synthesis.
  • What You Should Actually Do: Be action-oriented — the cat is out of the bag. Invest heavily in planning and specification before sending agents off to work. But more importantly, invest in mindful change: understand your own values, figure out who you want to be when you look back on this moment in 10 years, and let that guide your decisions about adoption, learning, and career direction.

🙏 Today's Episode is Brought to You by: SerpApi

If you're building an application that needs real-time search data — whether that's an AI agent, an SEO tool, or a price tracker — SerpApi handles it for you. Make an API call, get back clean JSON. They handle the proxies, CAPTCHAs, parsing, and all the scraping so you don't have to. They support dozens of search engines and platforms, and are trusted by companies like NVIDIA, Adobe, and Shopify. If you're building with AI, they even have an official MCP to make getting up and running a simple task. Get started with a free tier at serpapi.com.

📮 Ask a Question

If you enjoyed this episode and would like me to discuss a question that you have on the show, drop it over at: developertea.com.

📮 Join the Discord

If you want to be a part of a supportive community of engineers (non-engineers welcome!) working to improve their lives and careers, join us on the Developer Tea Discord community today!

🗞️ Subscribe to The Tea Break

We are developing a brand new newsletter called The Tea Break! You can be the first in line to receive it by entering your email directly over at developertea.com.

🧡 Leave a Review

If you're enjoying the show and want to support the content, head over to iTunes and leave a review!





Download audio: https://dts.podtrac.com/redirect.mp3/cdn.simplecast.com/audio/c44db111-b60d-436e-ab63-38c7c3402406/episodes/1a3b060d-5095-4e38-ae4f-d221c5bfa0dd/audio/62d427a2-39bc-471f-bbb1-83ac403f7155/default_tc.mp3?aid=rss_feed&feed=dLRotFGk

dotNetDave Says… Good Recruiters Are Transparent and Willing to Share Details About the Job Position

The text emphasizes the importance of transparency in the recruiting process, highlighting that unclear communication from recruiters can hinder trust. Job seekers should request essential details upfront, while recruiters are encouraged to lead with clarity and respect candidates' experience. Building relationships based on transparency strengthens the recruitment dynamic.








When NOT to use the repository pattern in EF Core


This blog post is originally published on https://blog.elmah.io/when-not-to-use-the-repository-pattern-in-ef-core/

If you design an application with a data source, the repository pattern often comes to mind as a prominent choice. In fact, many developers see it as the default choice. However, the pattern does not help in every case. In this post, I will pinpoint some cases where the repository pattern is not the best choice.

When NOT to use the repository pattern in EF Core

What is a repository pattern?

The repository pattern is a design pattern that acts as an intermediate layer between business logic and data access. It abstracts the data source and hides its implementation details, exposing data manipulation through clean objects and collections.

Let us start by looking at how a repository pattern can be implemented with EF Core.

Start by adding a new model named Movie:

public class Movie
{
    public Guid Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public string Director { get; set; } = string.Empty;
    public int ReleaseYear { get; set; }
    public double ImdbRating { get; set; }
    public DateTime CreatedAtUtc { get; set; } = DateTime.UtcNow;
}

Next, add an IMovieRepository interface with the basic methods for adding, getting, and saving movies:

public interface IMovieRepository
{
    Task AddAsync(Movie movie);
    Task<Movie?> GetByIdAsync(Guid id);
    Task<List<Movie>> GetTopRatedAsync(double minRating);
    Task SaveChangesAsync();
}

Add an implementation of that interface using EF Core:

public class MovieRepository : IMovieRepository
{
    private readonly AppDbContext _context;

    public MovieRepository(AppDbContext context)
    {
        _context = context;
    }

    public async Task AddAsync(Movie movie)
    {
        await _context.Movies.AddAsync(movie);
    }

    public async Task<Movie?> GetByIdAsync(Guid id)
    {
        return await _context.Movies.FindAsync(id);
    }

    public async Task<List<Movie>> GetTopRatedAsync(double minRating)
    {
        return await _context.Movies
            .Where(m => m.ImdbRating >= minRating)
            .OrderByDescending(m => m.ImdbRating)
            .ToListAsync();
    }

    public async Task SaveChangesAsync()
    {
        await _context.SaveChangesAsync();
    }
}

Finally, I'll add a service class that shows how to use the movie repository:

public class MovieService
{
    private readonly IMovieRepository _repository;

    public MovieService(IMovieRepository repository)
    {
        _repository = repository;
    }

    public async Task<Guid> CreateMovieAsync(
        string title,
        string director,
        int releaseYear,
        double rating)
    {
        var movie = new Movie
        {
            Id = Guid.NewGuid(),
            Title = title,
            Director = director,
            ReleaseYear = releaseYear,
            ImdbRating = rating
        };

        await _repository.AddAsync(movie);
        await _repository.SaveChangesAsync();

        return movie.Id;
    }

    public async Task<List<Movie>> GetHighlyRatedMoviesAsync()
    {
        return await _repository.GetTopRatedAsync(8.0);
    }
}

If you are writing CRUD applications, implementing a data layer like this probably looks very familiar.

What are the advantages of the Repository pattern?

The repository pattern promises several key advantages.

  • A clean separation of concerns where data access logic is centralized.
  • Reusability, where the same repo methods can be used without copying the same logic again.

When to use the Repository pattern

Like any tool, it offers leverage only when used in the right place. If you spot any of the following signals in your code, the repository pattern is a good fit.

  • When your application does not rely on simple data storage or fetching but requires additional logic such as validation, projection, object preparation, or calculations. Domains such as insurance, banking, healthcare, and IoT require calculations, so the repository pattern can be helpful.
  • The repository pattern can win for you if you are aggregating multiple data sources but presenting them as a single source to the upper layers. Usage of different data sources, such as MSSQL, Postgres, and external APIs, is kept hidden from the business logic layer.
  • The repository layer can be handy if an application demands sophisticated caching strategies and you don't want to pollute the business layers. Hence, the service layer can be unaware of how the cache is configured, or even of whether the data comes from the cache or another source.
  • For unit testing, you can employ the repository pattern, especially in error-critical systems such as financial systems, medical devices, and safety systems. Repositories enable you to test business logic in isolation by swapping real data access with test doubles. You can verify complex business rules, edge cases, and error handling without the overhead, unpredictability, and slowness of database tests.

When to avoid the Repository pattern

Well, we have seen the usefulness of the repository pattern. Now, returning to our original question: in what conditions can you avoid it?

  • If your app is just basic Create, Read, Update, Delete operations without complex business logic, you can simply go without it. A simple creation or fetch does not require verbose code, and adding a new layer would be overengineering.

For example, this repository does nothing but forward simple operations to the DbContext:

public class UserRepository : IUserRepository
{
    private readonly AppDbContext _context;
    public UserRepository(AppDbContext context) => _context = context;

    public User GetById(int id) => _context.Users.Find(id);
    public void Add(User user) => _context.Users.Add(user);
}
  • With an ORM, you can avoid the abstraction layer. Most ORMs, such as Entity Framework Core, NHibernate, and Doctrine, already implement the repository pattern using DbSet and AppDbContext. You can simply deal with entities like collections and objects. If you don't have to add conditions, validation, and projections in the operations, you can choose simplicity. When wrapping an ORM in repositories, you are often hiding powerful features (like IQueryable for deferred execution or Include for eager loading) behind a more restrictive interface.
  • Smaller projects also don't need the extra ceremony. If your project consists of 10-15 tables with simple queries, you are good to go without bombarding a small codebase with more code.
  • Any abstraction comes with overhead. In a performance-critical system, a repository may not be the best choice for the same reason. Repository layers can require memory allocation, additional method calls, or complex query translation, which may slow down the software. Repositories often lead to the N+1 query problem or over-fetching data because the repository method returns a generic object rather than a specific projection (Select) tailored to the view.
  • One more scenario where you can skip the repository pattern is in a microservice architecture. If a service is simple enough to have a small database and minimal operations, the extra layer costs you maintenance and performance without buying much in return.
  • While preparing reporting and analytics data, the repository pattern can be unnecessary. Mostly, the stored procedures, raw SQL queries, and database-specific optimizations do the whole job for us. The code only calls those underlying queries and returns. To keep things maintainable and, of course, speedy, you can avoid one layer.

Conclusion

The repository pattern is something you have probably used on your development journey. Why not? It is one of the most popular choices for abstracting data access. However, abstractions have hidden costs, as highlighted in this post, and I identified a few scenarios where you can skip the pattern and barely lose anything. If you want the benefits of the repository pattern without its limitations, the specification pattern is another option worth considering. It allows for reusable query logic without the bloat of a traditional repository.
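
The specification pattern mentioned in the conclusion is language-agnostic, so a minimal sketch works in any language; here it is in Python (in EF Core you would express the same idea with Expression<Func<T, bool>> so specifications compose into an IQueryable). The class and sample data are invented for illustration: each specification is a small named, composable predicate, so query logic lives in reusable objects instead of a sprawling repository interface.

```python
class Spec:
    """A named, composable query predicate."""
    def __init__(self, predicate):
        self.predicate = predicate

    def is_satisfied_by(self, item) -> bool:
        return self.predicate(item)

    def __and__(self, other):
        # Combine two specifications into one with AND semantics
        return Spec(lambda x: self.predicate(x) and other.predicate(x))

top_rated = Spec(lambda m: m["rating"] >= 8.0)
recent = Spec(lambda m: m["year"] >= 2020)

movies = [
    {"title": "A", "rating": 8.5, "year": 2021},
    {"title": "B", "rating": 9.0, "year": 1999},
]

# Reuse and combine specifications instead of adding a repository method
# for every new filter combination
hits = [m["title"] for m in movies if (top_rated & recent).is_satisfied_by(m)]
```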


