Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Node.js 22.22.3 (LTS)

1 Share
Read the whole story
alvinashcraft
27 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft announces extension of Xbox and Discord partnership


Discord Nitro now includes Xbox Game Pass, as Microsoft and Discord deepen their partnership to bring additional benefits to gamers. The new deal means that anyone with a Discord Nitro subscription gains access to a library of over 50 console and PC games without an increase in subscription fees. Discord describes it as “one of the best benefits we’re offering because it changes how you try games. Most of us have spent money on something, played it for a couple weeks, then watched it sit in our library. Game Pass removes that friction”. Microsoft says: “For years, Xbox and Discord… [Continue Reading]


Azure Migrate vs. Dr. Migrate: Understanding the Difference and When to Use Each


Cloud migration is no longer just a technical project. For many organizations, it is a strategic business decision that affects cost, performance, security, modernization, scalability, and long-term innovation.

When enterprises begin planning a move to Microsoft Azure, two names often come up: Azure Migrate and Dr. Migrate.

At first glance, they may sound like competing tools. In reality, they serve different but complementary purposes. Understanding the relationship between them can help organizations plan smarter migrations, reduce risk, and create a stronger business case for moving workloads and databases to the cloud.

The Simple Explanation

Azure Migrate is Microsoft’s official migration hub for discovering, assessing, planning, and executing migrations to Azure.

Dr. Migrate is an AI-assisted migration assessment and planning accelerator often used in partner-led engagements to create deeper insights, executive-ready reports, modernization recommendations, cost analysis, and migration roadmaps.

In simple terms:

“Azure Migrate helps you assess and move workloads to Azure. Dr. Migrate helps you better understand, justify, prioritize, and plan the migration journey.”

The two are not enemies. They are better viewed as partners in a successful cloud migration strategy.

What Is Azure Migrate?

Azure Migrate is Microsoft’s native platform for helping organizations discover, assess, and migrate workloads to Azure.

It supports a wide range of migration scenarios, including:

  • VMware virtual machines
  • Hyper-V virtual machines
  • Physical servers
  • SQL Server databases
  • Web applications
  • Workloads from other cloud providers
  • Application modernization paths into Azure services

Azure Migrate helps technical teams understand what they currently have, whether those workloads are ready for Azure, what Azure resources may be required, and how the actual migration can be performed.

Key Benefits of Azure Migrate

1. Microsoft-Native Migration Hub

Azure Migrate is built by Microsoft and integrates directly with Azure services. This makes it a natural starting point for organizations that want a Microsoft-supported migration path.

It acts as a central hub where teams can organize migration projects, run assessments, and coordinate migration execution.

2. Discovery and Inventory

Before moving anything to the cloud, you must understand what you have.

Azure Migrate can discover servers, applications, databases, dependencies, and utilization patterns. This helps organizations avoid one of the biggest migration mistakes: moving workloads without a complete understanding of the current environment.

3. Readiness Assessment

Azure Migrate helps answer critical questions such as:

  • Can this workload run in Azure?
  • What Azure service is the best target?
  • Are there compatibility concerns?
  • What changes may be required before migration?
  • What would the estimated Azure cost be?

This is especially valuable for infrastructure and database teams trying to determine whether workloads should move to Azure Virtual Machines, Azure SQL Database, Azure SQL Managed Instance, App Service, AKS, or another Azure service.
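The kind of readiness reasoning described above can be sketched as a toy rules function. Everything in this sketch (the workload attributes, the rule order, and the mapping to targets) is an illustrative assumption, not Azure Migrate's actual decision logic:

```python
# Illustrative only: a toy version of the readiness rules an assessment
# produces. The attribute names and thresholds are assumptions for this
# sketch, not Azure Migrate's real logic.

def suggest_target(workload: dict) -> str:
    """Pick a plausible Azure landing zone for a workload profile."""
    if workload.get("retire"):
        return "Retire (do not migrate)"
    kind = workload["kind"]
    if kind == "sql-server":
        # Instance-scoped features (cross-database queries, SQL Agent)
        # push toward Managed Instance rather than single databases.
        if workload.get("uses_instance_features"):
            return "Azure SQL Managed Instance"
        return "Azure SQL Database"
    if kind == "web-app" and workload.get("stateless"):
        return "Azure App Service"
    if kind == "containerized":
        return "Azure Kubernetes Service (AKS)"
    # Default rehost path for everything else.
    return "Azure Virtual Machines"

print(suggest_target({"kind": "sql-server", "uses_instance_features": True}))
```

In a real assessment these decisions also depend on compatibility findings and utilization data; the point here is only that target selection is a rules question, not a guess.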

4. Cost Estimation and Sizing

One of the strongest features of Azure Migrate is performance-based sizing. Instead of simply matching on-premises server sizes to equivalent Azure VM sizes, Azure Migrate can use utilization data to recommend more realistic Azure configurations.

This can help reduce overspending and improve the accuracy of cloud cost planning.
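To make "performance-based sizing" concrete, here is a minimal sketch that sizes to observed utilization (a 95th-percentile CPU figure plus headroom) instead of the host's nominal specs. The VM catalog, the headroom factor, and the simplistic memory handling are all assumptions for illustration; they are not Azure Migrate's real sizing rules:

```python
# A toy sketch of performance-based sizing: size to observed utilization
# rather than to the on-premises machine's nominal specs.
import math

VM_CATALOG = [  # (name, vCPUs, memory GiB) -- illustrative subset only
    ("Standard_D2s_v5", 2, 8),
    ("Standard_D4s_v5", 4, 16),
    ("Standard_D8s_v5", 8, 32),
]

def percentile(samples, p):
    s = sorted(samples)
    idx = min(len(s) - 1, math.ceil(p / 100 * len(s)) - 1)
    return s[max(idx, 0)]

def recommend(cpu_pct_samples, host_vcpus, host_mem_gib, headroom=1.3):
    # Effective demand: observed p95 CPU of the host, plus headroom.
    need_vcpus = (percentile(cpu_pct_samples, 95) / 100) * host_vcpus * headroom
    need_mem = host_mem_gib  # memory is carried over as-is in this sketch
    for name, vcpus, mem in VM_CATALOG:
        if vcpus >= need_vcpus and mem >= need_mem:
            return name
    return VM_CATALOG[-1][0]  # largest size in the toy catalog

print(recommend([15, 20, 22, 18, 25], host_vcpus=16, host_mem_gib=16))
```

With the sample history above, a 16-vCPU host that idles around 20% CPU lands on an 8-vCPU size rather than a like-for-like 16-vCPU VM, which is exactly the overspend that utilization-based sizing avoids.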

5. Actual Migration Execution

This is where Azure Migrate becomes especially important.

Azure Migrate is not only an assessment tool. It can also help perform the migration itself through integrated Microsoft migration services and partner tools.

For organizations ready to move workloads into Azure, Azure Migrate is usually the operational tool that supports the technical execution.

What Is Dr. Migrate?

Dr. Migrate is a migration assessment and planning accelerator designed to help organizations and Microsoft partners quickly analyze an environment, identify opportunities, build migration plans, and create business-ready recommendations.

Where Azure Migrate focuses heavily on Microsoft-native discovery, assessment, and execution, Dr. Migrate focuses more on accelerating insight.

It helps translate raw infrastructure and workload data into a more complete story:

  • What should move?
  • What should be modernized?
  • What should be retired?
  • What will it cost?
  • What savings may be possible?
  • What migration waves make sense?
  • What does the executive business case look like?

Key Benefits of Dr. Migrate

1. Faster Assessment and Planning

Enterprise environments can be complex. A large organization may have hundreds or thousands of servers, multiple database platforms, legacy applications, dependency chains, and unclear ownership.

Dr. Migrate helps accelerate the assessment process by organizing collected data into actionable insights. This can shorten the time required to move from “we are thinking about migrating” to “we have a clear migration roadmap.”

2. Executive-Ready Business Case

Technical assessments are important, but executives usually need a different kind of output.

They want to understand:

  • Why should we migrate?
  • What is the financial impact?
  • What risks are involved?
  • What business benefits can we expect?
  • How soon can we realize value?
  • Which workloads should move first?

Dr. Migrate can help produce polished reports and business-case materials that are easier for leadership teams to understand.

3. Modernization Recommendations

One of the biggest mistakes in cloud migration is assuming that everything should be lifted and shifted exactly as it is.

Sometimes that is the right move, especially when speed is the highest priority. But in many cases, the cloud creates an opportunity to modernize.

For example:

  • SQL Server may be a better fit for Azure SQL Managed Instance.
  • A legacy web application may be a candidate for Azure App Service.
  • Some workloads may benefit from containers.
  • Some systems may be retired instead of migrated.
  • Some databases may need refactoring before moving.

Dr. Migrate can help identify these opportunities earlier in the planning process.

4. Migration Wave Planning

A successful migration is rarely done all at once.

Most enterprise migrations happen in waves. Workloads are grouped based on dependencies, risk, business priority, technical complexity, and modernization opportunity.

Dr. Migrate helps support this planning conversation by giving teams a clearer view of how workloads relate to each other and how they may be moved in a logical sequence.
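The dependency-ordering part of that planning can be sketched as repeatedly peeling off every workload whose dependencies have already been scheduled. The workload names below are hypothetical, and real wave planning also weighs risk, business priority, and complexity, which this sketch deliberately ignores:

```python
# A toy sketch of dependency-driven wave planning: workloads with no
# unmigrated dependencies go in wave 1, workloads that depend only on
# wave-1 items go in wave 2, and so on.

def plan_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    """deps maps each workload to the set of workloads it depends on."""
    remaining = {w: set(d) for w, d in deps.items()}
    waves = []
    while remaining:
        ready = sorted(w for w, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle; break it manually")
        waves.append(ready)
        for w in ready:
            del remaining[w]
        for d in remaining.values():
            d.difference_update(ready)  # these are now migrated
    return waves

deps = {  # hypothetical workload names for illustration
    "sql-inventory": set(),
    "inventory-api": {"sql-inventory"},
    "storefront": {"inventory-api", "auth-service"},
    "auth-service": set(),
}
print(plan_waves(deps))  # database and auth first, then API, then storefront
```

This only answers "what can move when"; deciding what *should* move when still takes the business-priority conversation the tools support.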

5. Strong Partner-Led Engagements

Dr. Migrate is particularly valuable in consulting, assessment, and partner-led migration engagements.

It gives architects and engineers a way to quickly create a clear, professional, data-driven migration story for the customer. This is especially useful when working with enterprise stakeholders who need both technical depth and business justification.

When Should You Use Azure Migrate?

Use Azure Migrate when your organization is ready to begin technical discovery, assessment, and migration execution.

Azure Migrate is a great choice when:

  • You need to inventory your servers and workloads.
  • You want to assess Azure readiness.
  • You need performance-based sizing recommendations.
  • You are preparing to migrate VMware, Hyper-V, physical servers, SQL Server, or web apps.
  • You want to execute the migration using Microsoft-supported tooling.
  • You want a central place to manage migration projects in Azure.

For many organizations, Azure Migrate is the practical tool used by the engineering team to move from assessment into real migration work.

When Should You Use Dr. Migrate?

Use Dr. Migrate when your organization needs a stronger migration strategy, faster assessment, business-case development, and executive-level clarity.

Dr. Migrate is a great choice when:

  • You are still deciding whether or how to migrate.
  • You need a migration roadmap.
  • You need to present the business case to leadership.
  • You want to identify modernization opportunities.
  • You need help prioritizing migration waves.
  • You are working with a Microsoft partner or consulting team.
  • You have a complex enterprise environment with many workloads and databases.

Dr. Migrate is especially useful before the execution phase, when organizations are trying to understand the full impact and opportunity of the migration.

The Best Strategy: Use Both Together

The strongest approach is often not Azure Migrate vs. Dr. Migrate.

The better approach is:

“Use Dr. Migrate to accelerate strategy, business case, and roadmap creation. Use Azure Migrate to validate, assess, and execute the technical migration.”

A practical enterprise migration flow may look like this:

  1. Discover the environment
    Collect information about servers, applications, databases, dependencies, and usage patterns.
  2. Analyze migration opportunities
    Identify which workloads should be rehosted, refactored, modernized, retired, or replaced.
  3. Build the business case
    Estimate cost, savings, risk reduction, agility benefits, and long-term value.
  4. Create migration waves
    Organize workloads into logical groups based on dependencies, priority, and complexity.
  5. Validate technical readiness
    Use Azure Migrate to validate sizing, compatibility, and Azure target options.
  6. Execute the migration
    Use Azure Migrate and other Azure services to perform the actual workload and database migration.
  7. Optimize after migration
    Use Azure monitoring, cost management, security tools, and architecture reviews to improve the environment after cutover.

How The Training Boss Can Help

At The Training Boss, we help enterprise customers plan and execute successful cloud and database migration strategies with confidence.

Our senior architects and engineers can assist with:

  • Azure migration strategy
  • Database migration planning
  • Azure Migrate assessments
  • Dr. Migrate analysis and roadmap interpretation
  • SQL Server to Azure SQL Database migration
  • SQL Server to Azure SQL Managed Instance migration
  • SQL Server to Azure VM migration
  • Cloud readiness assessments
  • Architecture reviews
  • Performance and cost optimization
  • Migration wave planning
  • Post-migration modernization

Whether your organization is just beginning the cloud conversation or already preparing for a major database migration, our team can help you avoid common mistakes and design a path that fits your business goals.

The goal is not simply to “move to the cloud.”

The goal is to move to the cloud the right way: securely, efficiently, strategically, and with a clear return on investment. Reach out to us today.

The post Azure Migrate vs. Dr. Migrate: Understanding the Difference and When to Use Each appeared first on The Training Boss.


Critter Stack 2026 Releases are Underway


Pardon the mess, but the Critter Stack 2026 releases are heavily underway in the JasperFx GitHub Organization. Busy enough that I’ve been hit with GitHub rate-limiting messages the past couple of days, and I think that’s a good sign about our productivity? It has also blessedly coincided with a couple of very quiet days in the community, which has certainly helped!

The main goals are to:

  • Get CritterWatch out the damn door!
  • Optimize the “cold start” times across the entire stack, have a better story for “Serverless” usage, and, knock on wood, get to AOT compliance. To be clear, this will change our previous Roslyn runtime compilation into a development-time-only step, where pre-generation of Wolverine or Marten types will be mandatory for production usage. That’s the part I expect to be a little bit controversial or inconvenient.
  • Bring Polecat up to par, quality-wise, with Marten. Polecat has had growing pains.
  • Adjust some of our default settings to reflect more optimized greenfield usage, which I mostly outlined in this post. But don’t worry: all of this will be documented in the migration guide, Marten will have a RestoreV8Defaults() option, and Wolverine is getting a RestoreV5Defaults() to make it easier to stay right where you are. Some of this is driven by real efficiency concerns, and other parts are purposely meant to enable CritterWatch management and oversight of production applications.

This wave of releases is going to include:

  1. JasperFx and JasperFx.Events 2.0
  2. Weasel 9.0
  3. Marten 9.0
  4. Polecat 4.0 (Event Sourcing & Document Db w/ SQL Server)
  5. Wolverine 6.0
  6. CritterWatch 1.0 – Our white whale, our “Duke Nukem Forever” release, “Winds of Winter” finally coming out so we can find out if Jon Snow survives! Add in Half Life 3 too. No seriously, it’s coming out on May 18th come hell or the river don’t rise.

For some other details:

  • There’s plenty of low-level performance optimization and object-allocation savings happening, but I don’t yet have numbers to know how much difference that’s going to make. We will be doing much more benchmarking before the release, though.
  • We’re pulling Newtonsoft references out of the main tools altogether, but replacing the integration with extra extension packages. That feels like the end of an era!
  • We’ll do the inevitable work of eliminating [Obsolete] APIs, but I don’t think there’s much of that.
  • We’re doing a huge amount of work to promote code sharing between Marten and Polecat, which I hope will improve Polecat especially. Some of that is driven by CritterWatch needs. This is also happily reducing code duplication throughout the Wolverine codebase.
  • .NET 8 support was dropped, but we’re maintaining .NET 9 and .NET 10 for the lifetime of these versions. I realize that .NET 9 is EOL later this year, but I’m not eager to get yelled at for dropping it earlier than some of our users. We’ll add .NET 11 whenever that hits. For anyone wondering why this is a big deal at all, our usage of EF Core means that we do struggle somewhat with diamond dependency conflicts through major .NET versions.
  • Marten is getting a more efficient option for “Dynamic Consistency Boundaries” (DCB) using PostgreSQL HSTORE
  • There was also a big time round of code de-duplication in Weasel across the database engines that we support

Migration?

Other than some changes to the shape of our MultiStreamProjection types in Marten and Polecat, and some public types moving namespaces, I’m not expecting any breaking changes. I’m confident that this time around we’ll have a comprehensive migration guide covering all of those moves in a form that should be helpful for both humans and LLM tools.

If you utilize the new defaults though, there will be database migrations for extra fields and new tables that have been added in the past year or two for enhanced auditing and observability — but all of this will be purely additive.




Development environments for your agents


Opinion: Vibe coding needs an on-ramp — and seat belts


[Editor’s Note: This is the fourth in a series by Oren Etzioni about AI usage and best practices.]

I’ve been writing code since before most of today’s vibe coding founders were born. So when I sat down to try the latest vibe coding tools — Lovable, Claude Code, and the like — I expected a frictionless ride. What I got was a rough detour through systems administration 101.

I want to reassure non-coders: it’s not you, it’s the tools. Vibe coding has a usability problem and a safety challenge — and the first should not be solved without the second.

Andrej Karpathy coined “vibe coding” in a now-famous tweet on Feb. 2, 2025: “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” That’s the promise; the reality is downright annoying.

Claude Code wouldn’t run on my machine until I sorted out PATH variables. (If you don’t know what a PATH variable is, that’s exactly the point of this essay.) When I moved to Lovable to spin up a website, it asked me, almost immediately, about secrets and keys. I knew what it meant — and I knew what to do with them, which is its own kind of safety net. But what about a small business owner who wants to build an inventory tool for their shop? What about my mother-in-law?

What vibe coding needs is its Windows moment — the point at which a powerful but arcane technology gets a user interface so good that the machinery underneath disappears. Before Windows (and the Macintosh before it), using a personal computer meant typing obscure commands at a DOS prompt. Of course, Windows also opened the door to a flood of viruses. Vibe coding needs to get the on-ramp and the seat belts right at the same time.

Today, first-time vibe coders encounter The Great Wall of Jargon. In the first 10 minutes of trying to vibe-code a simple website, I encountered the terms: secret, key, API key, token, environment variable, .env file, shell, terminal, command line, CLI, PATH, localhost, port, 127.0.0.1, repo, clone, commit, push, Node, npm, dependency, runtime, build, IDE, deploy, deployment, production. Each is a tiny door I had to find the key to. None is about what I wanted to build. Novices are being asked to learn a foreign language before they get to say “Hello World.”

Justine Moore, a partner at Andreessen Horowitz who wrote the go-to piece on this usability barrier to vibe coding earlier this year, admits her own success rate on vibe coding projects is roughly 50-50.

“I spend a lot of time dragging screenshots and copying error messages into Cursor and asking for help,” Moore writes. If the people whose job it is to invest in this category are squinting at the screen, the audience that vibe coding was supposed to liberate is also struggling.

The data backs this up. Stack Overflow’s 2025 Developer Survey of more than 49,000 developers found that, when asked about vibe coding, 77% said it is not part of their professional work. These are the professionals — the people for whom this should be easiest.

And among the audience that should benefit most, the picture is no better. Bubble, a visual-development platform with an obvious competitive interest in the answer, surveyed 793 builders who had tried both visual development and vibe coding tools and found that 90.6% stuck with visual development while only 25.6% stuck with vibe coding.

As Moore put it: “Right now, vibe coding is a spectator sport for most of America.”

The vibe coding companies know this and are working on it. Replit’s one-click deploy is the closest thing in the current generation to that Windows moment — you hit “publish” and your app exists at a URL, no shell, no configuration, no Node install. But this is the exception, not the rule. Cybersecurity, too, is still up to you — a gap that the next generation of vibe coding platforms will need to close. 

What the on-ramp needs to look like is clear enough. Zero setup. Nothing to install. No keys to manage; the platform handles credentials behind the scenes. No separate deployment step; when you’re done, the thing exists at a URL, full stop. Sensible security defaults baked in, not opt-in — because an on-ramp without guardrails is worse than no on-ramp at all.

None of this is easy. Hiding the machinery requires solving real problems — credential management, sandboxed execution, automatic deployment — that are genuinely hard. But hard problems are exactly where startups are born.

Moore ends her piece with a powerful observation: every Unix command, as Matt Rickard noted years ago in a much-quoted essay, eventually becomes a startup. Squarespace did it for websites. Canva did it for design. The company that does it for vibe coding will do something at least as big. The on-ramp is missing. Whoever builds it will turn a craft that today requires patience and perseverance into something that millions of people can do in their spare time. But getting there will require more than the on-ramp.

An on-ramp gets you on the road. It doesn’t teach you how to drive. As non-coders ship more home-grown apps, we can expect some accidents — even dangerous ones. The Tea App, an app meant to help women stay safe on dates, was reportedly built largely with vibe coding; its creators stored 72,000 driver’s-license photos in a wide-open database. They were not bad people, but they didn’t understand cybersecurity. 

The next generation of vibe coding platforms should refuse to ship an app with a wide-open database the way a modern car refuses to shift into drive when your foot isn’t on the brake — automatically. For lesser sins, like the software equivalent of an unfastened seatbelt, the platforms should chime until you fix the problem.

The Windows moment we need isn’t only an on-ramp. It is an on-ramp with best practices baked in.


[Editor’s note: GeekWire publishes guest opinion pieces representing a range of perspectives. The views expressed are those of the author.]
