Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Why Technical Leaders Undersell Themselves and How to Stop


Most leaders will tell you they hate bragging — and they mean it. But somewhere between "I don't want to be that person" and never talking about what you've done, a lot of genuinely talented people quietly disappear from the conversations that shape their careers. Josh opens with a real scenario — joining a new company and being asked to introduce himself at an all-hands — and the tension of threading that needle. Bob admits this is his kryptonite. As a self-described introvert and chronic underseller, he's spent years over-correcting away from self-promotion because it feels slimy. The episode digs into the nuance between "I" stories and "we" stories, and why the most effective communicators learn to weave both together naturally.

The harder truth surfaces when Bob connects this to the Capital One Agile layoffs — 1,100+ roles eliminated, not because the work wasn't valuable, but because no one made the case for it. Servant leadership culture convinced an entire profession that talking about results was someone else's job. It wasn't. Josh drives it home with Nick Saban's retirement: Alabama is still good, but they're not the same, because the person whose presence elevated everyone around them is gone. That's you. You are the face of your franchise, whether you're comfortable with it or not. Stop underselling. Start telling.

Stay Connected and Informed with Our Newsletters

Josh Anderson's "Leadership Lighthouse"

Dive deeper into the world of Agile leadership and management with Josh Anderson's "Leadership Lighthouse." This bi-weekly newsletter offers insights, tips, and personal stories to help you navigate the complexities of leadership in today's fast-paced tech environment. Whether you're a new manager or a seasoned leader, you'll find valuable guidance and practical advice to enhance your leadership skills. Subscribe to "Leadership Lighthouse" for the latest articles and exclusive content right to your inbox.

Subscribe here

Bob Galen's "Agile Moose"

Bob Galen's "Agile Moose" is a must-read for anyone interested in Agile practices, team dynamics, and personal growth within the tech industry. The newsletter features in-depth analysis, case studies, and actionable tips to help you excel in your Agile journey. Bob brings his extensive experience and thoughtful perspectives directly to you, covering everything from foundational Agile concepts to advanced techniques. Join a community of Agile enthusiasts and practitioners by subscribing to "Agile Moose."

Subscribe here

Do More Than Listen:

We publish video versions of every episode and post them on our YouTube page.

Help Us Spread The Word: 

Love our content? Help us out by sharing on social media, rating our podcast/episodes on iTunes, or by giving to our Patreon campaign. Every time you give, in any way, you empower our mission of helping as many agilists as possible. Thanks for sharing!





Download audio: https://episodes.captivate.fm/episode/6e2cf693-c9b8-4be8-9703-389b38a1a92a.mp3
Read the whole story
alvinashcraft
34 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Daniel Ward: AI Agents - Episode 393


https://clearmeasure.com/developers/forums/

Daniel Ward is a Microsoft .NET MVP and software consultant at Lean TECHniques in San Antonio, TX. He works with teams to deliver high‑quality software through modern engineering practices, including effective CI/CD, automated testing, AI adoption, and product management. His background spans multiple industries such as finance, retail, and agriculture, and he has served as a software developer, technical coach, agile coach, and tech lead. Daniel is also a conference speaker, a contributor to the .NET community, and the creator behind Dan In a Can, where he writes about .NET, testing, DevOps, and developer tooling. Outside of his professional work, he enjoys piano, guitar, swing dancing, and game development. 

Mentioned in This Episode

Website
LinkedIn 
X Account
GitHub
Lean Techniques

"Kiro" AI Coding Tool 


Want to Learn More?

Visit AzureDevOps.Show for show notes and additional episodes.





Download audio: https://traffic.libsyn.com/clean/secure/azuredevops/Episode_393.mp3?dest-id=768873

Kotlin 2.3.20 Released


The Kotlin 2.3.20 release is out! Here are the main highlights:

  • Gradle: Compatibility with Gradle 9.3.0, and Kotlin/JVM compilation now uses the Build Tools API by default.
  • Maven: Simplified setup for Kotlin projects.
  • Kotlin compiler plugins: The Lombok plugin is in Alpha, and JPA support in the kotlin.plugin.jpa plugin has been improved.
  • Language: Support for name-based destructuring declarations.
  • Standard library: New API for creating immutable copies of Map.Entry.
  • Kotlin/Native: New interoperability mode for C and Objective-C libraries.

For the complete list of changes, refer to What’s new in Kotlin 2.3.20 or the release notes on GitHub.

How to install Kotlin 2.3.20

The latest version of Kotlin is included in the latest versions of IntelliJ IDEA and Android Studio.

To update to the new Kotlin version, make sure your IDE is updated to the latest version and change the Kotlin version to 2.3.20 in your build scripts.

If you need the command-line compiler, download it from the GitHub release page.

If you run into any problems:

Stay up to date with the latest Kotlin features! Subscribe to receive Kotlin updates by filling out the form at the bottom of this post. ⬇️

Special thanks to our EAP Champions

Further reading


Sunsetting Code With Me


Code With Me has been part of JetBrains IDEs for years, providing real-time collaborative coding and pair programming directly inside your development environment. It enabled teams to share a workspace, tackle issues together, and learn from one another without leaving the IDE.

Today, we’re announcing plans to gradually sunset Code With Me.

In this post, we’ll explain why we’re making this change, what the sunset timeline looks like, and what it means for existing users. We’ll also outline how the transition will work and answer common questions in the FAQ below to help make the process as smooth as possible.

Why we’re making this change

Demand for built-in pair programming and real-time collaboration tools like Code With Me peaked during the pandemic and has since shifted, with many teams adopting different collaboration workflows. At the same time, maintaining Code With Me alongside the evolving IntelliJ Platform requires ongoing engineering investment.

After reviewing usage trends and the long-term direction of our IDEs, we’ve decided to discontinue Code With Me. This will allow us to focus our efforts on areas that deliver the most value to developers and align with how teams collaborate today.

Timeline and what to expect

2026.1 release

  • Code With Me will be unbundled from all JetBrains IDEs and made available as a separate plugin via JetBrains Marketplace.
  • 2026.1 will be the last IDE release to officially support Code With Me.
  • No new features will be developed from this point forward.

Transition period (2026.1 → Q1 2027)

  • The plugin will continue to function on supported IDE versions.
  • Security updates will be provided during this period.
  • Public relay infrastructure will remain operational.

Final shutdown (Q1 2027)

  • The public relay infrastructure will be turned off.
  • The service will be fully deactivated.

What this means for existing Code With Me users

We understand that Code With Me is part of some teams’ workflows, and we want to make this transition as smooth as possible. 

If you currently use Code With Me:

  • You can continue installing and using the plugin from JetBrains Marketplace throughout the transition period outlined above.
  • It will work on supported IDE versions, with security updates provided until the final sunset date in Q1 2027.
  • Existing subscriptions will remain active until support ends. New sales and renewals will be discontinued.
  • Our Support team will remain available during the transition to assist with any questions or compatibility concerns.

Depending on your workflow, you may find that general-purpose collaboration tools cover your needs. If your primary use case is remote access to development environments, our remote development features, which have been improved significantly in recent releases, may be a better fit.

For more details, see the FAQ below. If you have questions or feedback, please leave a comment or contact our Support team.

Looking ahead

As we invest in the future of our tools, we remain focused on delivering products that support modern software development and bring the greatest value to developers and teams.

We’re grateful to everyone who used Code With Me, shared feedback, and contributed to its journey. Your input has helped shape the product. Thank you!

The JetBrains team

FAQ

Below, we have compiled answers to the most common questions about the discontinuation of Code With Me and the migration options available.

What is the last IDE release to support Code With Me?

2026.1 will be the last IDE release to officially support Code With Me.

Will I still be able to use Code With Me after the 2026.1 release?

Yes, you will be able to use Code With Me on all supported IDE versions for at least one year until Q1 2027. After that, our public relays will be shut down, and public sessions won’t be available anymore.

What alternatives are available for Code With Me users?

Depending on your workflow, you may find that general-purpose collaboration tools cover your needs. If your primary use case is remote access to development environments, our remote development features may be a better fit.

Does this decision affect remote development within JetBrains IDEs?

No. The discontinuation of Code With Me does not affect JetBrains IDEs’ remote development functionality.

We continue to actively invest in and evolve our remote development capabilities as part of the IntelliJ Platform. Remote development remains a strategic focus area, and progress in this direction will continue.

What will happen to my Code With Me license?

You can continue using Code With Me on supported IDE versions until the end of your current subscription term. Code With Me subscriptions will not renew.

For more details about your specific subscription, please contact our Support team.

I’m using Code With Me Enterprise. What does this mean for me?

If you are using Code With Me as part of a JetBrains IDE Services (Enterprise) agreement, your current contract terms remain valid during the supported period.

As we approach the end of the sunset period, renewal of Code With Me Enterprise will no longer be available. For contracts with specific provisions or custom arrangements, we will work individually to define the appropriate transition path.

If you have questions about how this change affects your agreement, please contact your JetBrains representative.

What should I do if I recently purchased a Code With Me license?

Our standard refund policy applies to recent purchases. If you have questions about your eligibility for a refund, please contact JetBrains support.

Where can I find more information or assistance?

For any further questions or support inquiries, please visit our Support page or reach out to us directly. We sincerely appreciate the Code With Me community’s support and look forward to continuing to provide the best solutions within our JetBrains IDEs.


Software Craftsmanship in the Age of AI


On March 26, Addy Osmani and I are hosting the third O’Reilly AI Codecon, and this time we’re taking on the question of what software craftsmanship looks like when AI agents are writing much of the code.

The subtitle of this event, “Software Craftsmanship in the Age of AI,” was meant to be provocative. Craftsmanship implies care, intention, and deep skill. It implies a maker who touches the material. But we’re entering a world where some people with quite impressive output don’t touch the code. Steve Yegge, in our conversation earlier this week, put it bluntly: “Code is a liquid. You spray it through hoses. You don’t freaking look at it.” Wes McKinney, the creator of pandas and one of our speakers at this event, doesn’t write code by hand any more either. He’s burning north of 10 billion tokens a month across Claude, Codex, and Gemini, writing vast amounts of Go, a language he’s never coded in manually.

If that’s where this is headed, then what exactly are we crafting? That’s the question this lineup is built to answer, and the speakers come at it from very different angles.

The “dark factory” position

One end of the spectrum is occupied by people who are already operating what are increasingly being called dark factories, after the robot factories where there are no lights because the robots that do all of the work don’t need them. These are software production environments where humans set direction but agents do nearly all the implementation work.

Ryan Carson is the clearest example on our stage. Ryan built and sold Treehouse, where he helped over a million people learn to code. Now he’s building Antfarm, an open source tool that lets you install an entire team of agents into OpenClaw with a single command. His talk, “How to Create a Team of Agents in OpenClaw and Ship Code with One Command,” is essentially a tutorial on running a software factory where a planning agent decomposes your feature request into user stories, each story gets implemented and tested in isolation by a separate agent, failures retry automatically, and you get back tested pull requests. This isn’t quite a dark factory, though. Ryan has built a CI pipeline where the agent records itself using a feature and attaches the video to the PR for human review. It’s an assembly line, and the human’s job is to inspect the output, not produce it.

This is Steve Yegge’s Level 7 or 8, and it’s no longer theoretical. But Ryan’s talk will also reveal what happens at the edges, when agents break, when the feedback loop fails, when automated retries aren’t enough.

The craftsmanship-means-oversight position

At the other end you have people who are deeply enthusiastic about AI coding but insist that the human role isn’t just “set direction and walk away.” It’s active, continuous, and skilled.

Addy Osmani anchors this position. His talk, “Orchestrating Coding Agents: Patterns for Coordinating Agents in Real-World Software Workflows,” is about the coordination problem. As he and I discussed in our recent conversation, there’s a spectrum from solo founders running hundreds of agents without reviewing the code to enterprise teams with quality gates and long-term maintenance to think about. Most real teams are somewhere in the middle, and they need patterns, not just tools. Addy has been thinking hard about what Andrej Karpathy called “context engineering,” the discipline of structuring all the information an LLM needs to perform reliably. His new book Beyond Vibe Coding is essentially a manual for this new discipline.

Cat Wu from Anthropic brings the platform maker’s perspective. She leads product for Claude Code and Cowork, and her focus on building AI systems that are “reliable, interpretable, and steerable” represents a design philosophy that the tool should make human oversight natural and easy. Where Ryan Carson’s approach pushes toward maximum agent autonomy, Cat’s work at Anthropic is about giving humans the right levers to stay meaningfully in the loop. I’m really looking forward to the conversation between Cat and Addy.

The costs of getting it wrong

Several speakers are focused squarely on what happens when the dark factory breaks down.

Nicole Koenigstein’s talk, “The Hidden Cost of Agentic Failure and the Next Phase of Agentic AI,” is about the failure modes that don’t show up in demos. Nicole is writing the O’Reilly book AI Agents: The Definitive Guide, and she’s been consulting with companies on the gap between what agents can do in a sandbox and what they do in production. Hila Fox from Qodo brings a complementary perspective with “From Prompt to Multi-Agent System: The Evolution of Our AI Product,” which traces the real path from a simple prompt-based tool to a production multi-agent system, including all the things that go wrong along the way.

The lightning talks share more results of real-world experience. Advait Patel, a site reliability engineer at Broadcom, will talk about what happens when AI agents break production systems, and how his team responded. Abhimanyu Anand from Elastic asks a question that should keep every AI builder up at night: “Is your eval lying to you?” If your evaluation framework is giving you false confidence, you’re building on sand.

The bottleneck was never hands on keyboards

Wes McKinney’s talk, “The Mythical Agent-Month,” revisits Fred Brooks’s famous argument that adding more people to a late software project makes it later, and asks whether the same dynamics apply to adding more agents. Wes’s answer, as he’s laid it out in his blog post, is so compelling that we immediately invited him to give it as a talk, even though that meant rearranging the existing program. Agents leave the essential complexity, the hard design decisions, the conceptual integrity of the system, completely untouched. Worse, agents introduce new accidental complexity at machine speed. Wes describes hitting a “brownfield barrier” around 100,000 lines of code where agents begin choking on the bloated codebases they themselves have generated.

This connects directly to something that Steve Yegge and Wes (and many others, including me) have converged on: Taste is the scarce resource. Brooks argued 50 years ago that design talent was the real bottleneck. Now that agents have removed the labor constraint, that argument is stronger than ever. The developers who thrive won’t be the ones who run the most parallel sessions. They’ll be the ones who can hold their project’s conceptual model in their head, who know what to build and what to leave out.

New architectures for the new reality

A cluster of talks addresses the structural question: If agents are doing most of the coding, what does the engineering organization, the platform, and the architecture need to look like?

Juliette van der Laarse’s talk, “The AI Flower: A Public Capability Architecture for AI-Native Engineering,” lays out a framework for how engineering teams should organize their capabilities in a world of AI-native workflows. Juliette’s work is a start on thinking through the second-order effects of the new technology. How does the organization itself need to change? We came across Juliette’s work recently and think it may be especially compelling for many of our enterprise customers.

Mike Amundsen has spent years thinking about API ecosystems and sustainable architecture, and he’s applying that lens to the question of how AI should relate to human expertise. His talk, “From Automation to Augmentation: Designing AI Coaches That Amplify Expertise,” makes a distinction that will determine the shape of the future human/AI economy. Automation replaces human work. Augmentation amplifies it.

Several other lightning talks fill in important pieces. Tatiana Botskina, a PhD candidate at Oxford and founder of an AI agent registry, talks about agent-to-agent collaboration and provenance, the question of how you know where an agent’s outputs came from. Neethu Elizabeth Simon from Arm addresses MCP server testing, a nuts-and-bolts reliability question that will matter more as MCP becomes the standard connective tissue for agent systems. And Arushee Garg from LinkedIn describes a production multi-agent system for generating outreach messages. These are all exploring issues that matter during real-world deployment.

The enterprise view

The event closes with my fireside chat with Aaron Levie, cofounder and CEO of Box. Aaron has been one of the most thoughtful enterprise CEOs on the question of what agents mean for SaaS and for knowledge work more broadly. His argument is that agents don’t replace enterprise software; they ride on top of it, and they need content, context, and governance to do anything useful. He’s also made the point that most companies have vast amounts of work they’ve never been able to afford to do, contracts they’ve never analyzed, processes they’ve never optimized. AI doesn’t just automate existing work. It unlocks work that was previously too expensive to attempt.

That connects to a theme I’ve been developing in my own work: the danger that AI creates enormous value but hollows out the economic circulatory system that supports the human expertise it depends on. Aaron is running a public company that has to navigate this in real time, making AI central to Box’s product while making the case that human judgment, context, and governance are more valuable, not less, in an agentic world.

What I’ll be watching for

There will be not only real excitement but hopefully deeper insight emerging from the tensions between these speakers and the positions they take. Ryan Carson and Cat Wu represent genuinely different philosophies of the human-agent relationship, and both are shipping real products. Wes McKinney and Addy Osmani agree that taste and design judgment matter more than ever, but they’re coming at it from very different vantage points: Wes as an individual developer pushing the limits of parallel agent sessions, Addy as someone thinking about patterns that work for teams of hundreds. Nicole Koenigstein and Hila Fox are asking the question that the enthusiasm sometimes papers over: What happens when it goes wrong?

And underneath all of it is the question that Steve Yegge, who isn’t on this program but whose ideas have certainly shaped my design of the program, would frame as a matter of grief and acceptance. Are we at the end of programming as a craft practice, or at the beginning of a new and different craft? I think the lineup proves that the craft isn’t dying. It’s migrating, from writing code to designing systems, from typing to taste, from individual heroics to orchestration. The people who understand that transition earliest will have an enormous advantage.

Sign up for free here. The event runs March 26, 8:00am to 12:00pm PDT.




How to Seed Data to EF Core


Introduction

We all know EF Core Migrations. They have been around for quite some time; some people like them, others prefer alternatives (hello, Flyway, DbUp, FluentMigrator!). I'm not going to discuss that here, but rather how to seed data to a database, whether using EF Core migrations or not. This is, of course, for inserting initial/reference data that must always be present; for Microsoft's opinion on the subject, please see this page.

This post assumes that you know about migrations - how to create and apply them, at least, and that you have all it takes for it, including EF Core Tools and the Microsoft.EntityFrameworkCore.Design NuGet package.

For actually adding data, we have essentially four options:

  • Explicit insertions
  • Data-only migrations
  • Entity configuration
  • Context configuration (explicit seeding)

Let's see them all one by one.

Using Explicit Insertions

The first case is pretty obvious: we add data explicitly after we forced pending migrations to occur, that is, after calling Migrate/MigrateAsync on DbContext.Database. For example:

using var scope = app.Services.CreateScope();
using var ctx = scope.ServiceProvider.GetRequiredService<BlogContext>();

await ctx.Database.MigrateAsync();

if (!ctx.Blogs.Any())
{
    ctx.Blogs.Add(new Blog { Title = "Some Blog", Url = "https://some.blog" });
    ctx.Blogs.Add(new Blog { Title = "Another Blog", Url = "https://another.blog" });

    await ctx.SaveChangesAsync();
}

Here, of course, we can apply custom logic - in this case, I'm merely checking that the Blogs table has any data prior to inserting into it, but you can do things differently.

Another example, when we want to load data explicitly, maybe loading it from some external resource. Here is an example endpoint with Minimal API:

app.MapPost("/seed", async (BlogContext ctx, CancellationToken cancellationToken) =>
{
    //two options:
    //load data from some external resource, create Blog instances and add to ctx.Blogs
    //or just add a small, unchanging, set of reference data:

    if (!await ctx.Blogs.AnyAsync(cancellationToken))
    {
        ctx.Blogs.Add(new Blog { Title = "Some Blog", Url = "https://some.blog" });
        ctx.Blogs.Add(new Blog { Title = "Another Blog", Url = "https://another.blog" });
    }

    await ctx.SaveChangesAsync();

    return Results.Ok();
});

Using a Data-only Migration

For this option, we create a migration in any of the usual ways:

1 - From inside the Package Manager Console:

Add-Migration ReferenceData

2 - From the command line, using the ef global tool:

dotnet ef migrations add ReferenceData

A new migration is then created inside the Migrations folder of the target project, inheriting from Migration. If there are no schema changes, this migration will be empty, meaning, without any changes to apply:

public partial class ReferenceData : Migration
{
    // empty: no schema changes to apply
}

Now, this is where we can chip in: all we have to do is override the Up and Down methods to add our custom data, in the form of SQL:

public partial class ReferenceData : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        if (migrationBuilder.IsSqlServer())
        {
            migrationBuilder.Sql("SET IDENTITY_INSERT dbo.Blogs ON");
        }

        migrationBuilder.Sql("INSERT INTO dbo.Blogs (Id, Title, Url) VALUES (1, 'Some Blog', 'https://some.blog')");
        migrationBuilder.Sql("INSERT INTO dbo.Blogs (Id, Title, Url) VALUES (2, 'Another Blog', 'https://another.blog')");

        if (migrationBuilder.IsSqlServer())
        {
            migrationBuilder.Sql("SET IDENTITY_INSERT dbo.Blogs OFF");
        }
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql("DELETE FROM dbo.Blogs WHERE Id IN (1, 2)");
    }
}

As you can see, I'm relying on the MigrationBuilder.Sql method to execute raw SQL, so we must know what we're doing: inside a migration we do not have a DbContext. I'm also using the IsSqlServer extension method to make sure that SQL only runs against SQL Server, because SET IDENTITY_INSERT is a SQL Server-only feature. Both the Up and Down methods should be specified, and Down should revert whatever Up does.

This approach has a couple of advantages:

  • The migration is only applied once, so we don't need to worry about errors from duplicate records
  • We can make all sorts of insertions

But:

  • We have to work with plain SQL and be conscious of the target database, primary keys, etc.

Using Entity Configuration

Another option is to declare the data as part of the entity's configuration. We can achieve that with the HasData method:

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder
        .Entity<Blog>()
        .HasData(
            new Blog { Id = 1, Title = "Some Blog", Url = "https://some.blog" },
            new Blog { Id = 2, Title = "Another Blog", Url = "https://another.blog" }
        );
}

Of course, this can also be done inside an IEntityTypeConfiguration<Blog>'s Configure method, if we're using external configuration, and you can provide data for any registered entity. Note that HasData requires explicit primary key values for the seeded entities. The data provided through the HasData method is considered part of the model and goes into the snapshot, meaning that if we change it, EF Core migrations will detect that the model has changed.
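As a sketch of that external-configuration variant (the BlogConfiguration class name is my own; the Blog entity mirrors the one used throughout this post):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// External entity configuration; picked up via
// modelBuilder.ApplyConfiguration(new BlogConfiguration())
// or modelBuilder.ApplyConfigurationsFromAssembly(...).
public class BlogConfiguration : IEntityTypeConfiguration<Blog>
{
    public void Configure(EntityTypeBuilder<Blog> builder)
    {
        // HasData requires explicit primary key values
        builder.HasData(
            new Blog { Id = 1, Title = "Some Blog", Url = "https://some.blog" },
            new Blog { Id = 2, Title = "Another Blog", Url = "https://another.blog" }
        );
    }
}
```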

The advantages of this approach are:

  • We operate at the model level, not in SQL
  • The insertions are generated into a migration, so they are only applied once

As for possible disadvantages:

  • If we ever want to change the initial data, we have to create a new migration

Using Context Configuration

Yet another option is to use the UseSeeding/UseAsyncSeeding methods to register seeding callbacks when configuring the DbContext. This approach differs from the previous ones in that the data is not part of the model, we have full control over the queries we issue, and, most importantly, the callbacks run not just after all migrations have been applied, but also after the schema is created using EnsureCreated/EnsureCreatedAsync. Here is an example:

builder.Services.AddDbContext<BlogContext>(options =>
{
    options
        .UseSqlServer("...")
        .UseAsyncSeeding(async (ctx, schemaUpdated, cancellationToken) =>
        {
            if (!await ctx.Set<Blog>().AnyAsync(cancellationToken))
            {
                ctx.Set<Blog>().Add(new Blog { Title = "Some Blog", Url = "https://some.blog" });
                ctx.Set<Blog>().Add(new Blog { Title = "Another Blog", Url = "https://another.blog" });

                await ctx.SaveChangesAsync(cancellationToken);
            }
        })
        .UseSeeding((ctx, schemaUpdated) =>
        {
            if (!ctx.Set<Blog>().Any())
            {
                ctx.Set<Blog>().Add(new Blog { Title = "Some Blog", Url = "https://some.blog" });
                ctx.Set<Blog>().Add(new Blog { Title = "Another Blog", Url = "https://another.blog" });

                ctx.SaveChanges();
            }
        });

});

The second parameter to the callback (schemaUpdated, in my example) passed to UseSeeding/UseAsyncSeeding is true if changes have been made to the schema, such as creating it or modifying it, or false otherwise.

An important note from the Microsoft documentation: UseSeeding is called from EnsureCreated or Migrate, and UseAsyncSeeding is called from EnsureCreatedAsync or MigrateAsync. When using this feature, it is recommended to implement both UseSeeding and UseAsyncSeeding with similar logic, even if the code using EF Core is asynchronous, because we never know which one will be called.
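One way to honor that recommendation is to factor the insertions into a single helper that both callbacks call; a minimal sketch (SeedBlogs is a hypothetical helper name, not an EF Core API):

```csharp
// Shared seeding logic used by both the synchronous and asynchronous callbacks.
static void SeedBlogs(DbContext ctx)
{
    if (!ctx.Set<Blog>().Any())
    {
        ctx.Set<Blog>().Add(new Blog { Title = "Some Blog", Url = "https://some.blog" });
        ctx.Set<Blog>().Add(new Blog { Title = "Another Blog", Url = "https://another.blog" });

        ctx.SaveChanges();
    }
}

options
    .UseSeeding((ctx, schemaUpdated) => SeedBlogs(ctx))
    .UseAsyncSeeding((ctx, schemaUpdated, cancellationToken) =>
    {
        SeedBlogs(ctx);
        return Task.CompletedTask;
    });
```

This keeps the two code paths from drifting apart, at the cost of the async callback running synchronous queries.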

Conclusion

So, in a nutshell, these are some rules of thumb when choosing a seeding strategy:

  • Use HasData for small, invariant lookup data you want tracked with schema changes and migrations
  • Data-only migrations are good when you want to be sure your data is only ever applied once, but you need to write SQL, which can make them database-dependent
  • Explicit insertions sometimes have their place, for example, when data is loaded from some external resource, or you are not using migrations, but, in general, other methods should be preferred
  • Use UseSeeding/UseAsyncSeeding for everything else: conditional seed logic, big pre-defined datasets, dev/test demo data, generated values, or any seeding that must run independently of migrations
