Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
147,360 stories · 32 followers

Announcing Files v4.0.13


Announcing Files Preview v4.0.13 for users of the preview version.

Read the whole story
alvinashcraft
4 hours ago
Pennsylvania, USA

Mastering the Art of Public Speaking: Lessons from a Lifetime of Teaching and Sharing

For decades, the author has focused on improving code quality through teaching and speaking, emphasizing the joy of mentoring developers. Despite the shift to virtual formats, they believe in-person interactions are essential for effective learning. The author highlights the importance of preparation, resilience against criticism, and making a positive impact on individuals.



Read the whole story
alvinashcraft
4 hours ago
Pennsylvania, USA

It’s Not Your Tests, It’s Your Testability


Let’s talk about that test. The one that’s always flaky. The one that takes twenty minutes to run and fails for a different reason every time. Your first instinct is to blame the test. Maybe the locator is wrong, maybe the wait time isn’t long enough.

But what if I told you it’s not the test’s fault? The real problem can be a lack of testability.

You know the feeling. You need to test a new AI-powered recommendation feature. But first, your script has to perform a slow, brittle ballet of UI interactions: log in, create a user, navigate three screens, add two items to a cart… all just to get to the starting line. The same applies when you have to chain API calls just to get the system into the state you want. That’s a fundamental lack of operability.

Now, let’s say you get to that point. You finally use that feature. Where do you see the actual result? If it’s an internal operation, often it’s buried in a mountain of logs, not directly accessible to your test. You’re forced to become an archaeologist just to observe the most basic output of the system. That’s a classic lack of observability.

Now, with the black-box nature of AI, these two problems are even worse.

Operability and observability have always been problems for testers, but the rise of expensive, non-deterministic AI has turned them from a daily annoyance into a critical pipeline bottleneck. We can’t afford this anymore. Let’s stop blaming our tests and start fixing our testability. In this article, we’ll talk about how.

The 5-Minute Conversation That Tames the Chaos

So, how do we start fixing our testability? Testability is really in the developers’ realm, so we start with a conversation with them, one that helps us both understand and control the chaos in our system.

Let’s be clear: this isn’t about telling developers how to do their job. It’s about showing, with a concrete example, how a small architectural change can often make our tests for a feature dramatically more reliable.

This conversation is usually needed when we’re testing an AI feature. Let’s walk through a typical scenario.

Step 1: Identify the Source of Unpredictability

First, as part of testing a feature, you (and hopefully your dev collaborator) identify the core function that makes the call to the AI. “The problem is here!”, you shout. It’s often a “god method”: it does everything. It builds a prompt, makes the unpredictable call to the AI, and processes the response, all in one tightly coupled block.

Something like this:

# The God Method (Hard to Control)
def feature_call_ai_and_analyze(self, user_request):
    # ... prompt engineering ...
    response_text = self._generate_content(prompt)  # the source of the chaos
    # ... parsing and other logic tangled here ...
    return final_result_for_feature

Step 2: Frame the Problem – The Need to Control the Chaos

In this case, our job is to validate how the feature behaves with different kinds of AI responses. The core problem is that the live AI’s response is unpredictable. And the prompt is constructed inside that god method. Our lack of direct control makes it impossible to reliably test different prompts.

Our tests feel flaky because we don’t control the chaos in the system. This is a classic problem of poor operability (we can’t control the AI dependency) and poor observability (it’s hard to see what the actual response is).

If only we could make them more controllable…

Step 3: Propose the Humble Seam

Now, you and your developer have the conversation about solving the problem.

And, you don’t say, “Your code is untestable.”

You say, “Hey, I’m working on the tests for the X feature, and I’m trying to figure out how to reliably test what happens when the AI returns weird data. What if we added a ‘seam’ where I could inject a mocked response, bypassing the real AI call during my tests?”

Or…

“Can we add a seam where we can inject different prompts, bypassing the rest of the system, and see what happens?”

This is a concept we’re all familiar with. It can be as simple as putting prompts in an external file instead of hard-coding them. It gives us a control point.
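As a rough sketch of what such a seam could look like (all names here are hypothetical, not taken from any real codebase), the AI client and the prompt template become constructor arguments instead of being buried inside the god method:

```python
# Hypothetical sketch of the seam: the AI client and the prompt template
# are injected, giving tests two control points.

class RecommendationFeature:
    def __init__(self, ai_client, prompt_template):
        self.ai_client = ai_client              # seam 1: swap in a fake AI
        self.prompt_template = prompt_template  # seam 2: inject any prompt

    def call_ai_and_analyze(self, user_request):
        prompt = self.prompt_template.format(request=user_request)
        response_text = self.ai_client.generate_content(prompt)
        return self._parse(response_text)

    def _parse(self, response_text):
        # Parsing now lives on its own, testable without any AI call.
        return response_text.strip()
```

In production, `ai_client` wraps the real model call; in a test, it can be any object with a `generate_content` method that returns a canned string.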

Step 4: Everyone Wins

Then, you explain the benefit. “If we have this seam, I can write a suite of reliable integrated tests for the entire X feature’s behavior under all kinds of conditions. You, as the developer, will get extended feedback on how the feature handles bad data from the AI. Our CI/CD pipeline will be more stable because we’re not relying on the unpredictable network call for 90% of our checks. It will help us all understand and control the chaos in the system.”

That’s the conversation. It’s a collaborative, engineering-focused proposal that makes life better for everyone. It’s the first and most powerful tool in your new AI testing toolbox.
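To make the payoff concrete, here is a minimal sketch of what a test looks like once the seam exists (the stub client and the feature function are hypothetical stand-ins, assuming the AI call sits behind an injectable client as proposed above):

```python
# Hypothetical sketch: a stub gives the test full control over the "AI"
# response (operability) and records every prompt sent (observability).

class StubAIClient:
    def __init__(self, canned_response):
        self.canned_response = canned_response
        self.prompts_seen = []

    def generate_content(self, prompt):
        self.prompts_seen.append(prompt)  # observe what the feature sends
        return self.canned_response

def summarize_with_ai(ai_client, text):
    """Stand-in for the feature under test (hypothetical)."""
    response = ai_client.generate_content(f"Summarize: {text}")
    if not response.strip():
        return "(no summary available)"  # graceful fallback for bad output
    return response.strip()

# The "weird data" case is now a one-line setup, not a flaky live call:
stub = StubAIClient("   ")
assert summarize_with_ai(stub, "some article") == "(no summary available)"
assert stub.prompts_seen == ["Summarize: some article"]
```

The same pattern covers malformed JSON, empty strings, or absurdly long responses: each scenario is just another canned string, and the test runs in milliseconds with a deterministic result.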

The Next Step: A New Collaboration

So, you had the conversation, your developer added the seam, and you now have a way to control the chaos. At least part of it. (By the way, this isn’t just AI-related; it works like magic for anything you want to control.)

This is a huge win, not just for you, but for the entire team. It opens up a new opportunity for a deeper, more effective collaboration between testers and developers.

This is where you can start bringing gifts to the partnership. Instead of just filing bugs, you can bring more information to the developers. An actual collaboration, who knew?

Speaking of gifts…

Here’s one for you.

The AI Test Engineer’s Prompt Pack I’ve created is designed to be a collaborative toolkit. It’s a set of expertly crafted prompts that you and your developer can use together to:

  • Quickly generate the scaffolding and plumbing tests to ensure the core logic and error handling are solid.
  • Refactor code for testability, making it easier to introduce seams.
  • Build the first line of validation tests that check the live AI responses for structure and sanity.
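As a rough illustration of that last kind of test (every name here is hypothetical, not from the Prompt Pack itself), a “structure and sanity” check validates the shape of a live AI response rather than asserting its exact, non-deterministic content:

```python
# Hypothetical sketch: validate the structure of an AI response instead
# of its exact wording, since the wording is non-deterministic.

import json

def validate_recommendation_response(raw_response):
    """Return a list of problems; an empty list means the response passes."""
    try:
        data = json.loads(raw_response)
    except ValueError:
        return ["response is not valid JSON"]

    problems = []
    if not isinstance(data.get("recommendations"), list):
        problems.append("missing 'recommendations' list")
    elif not 1 <= len(data["recommendations"]) <= 10:
        problems.append("unexpected number of recommendations")
    return problems

good = '{"recommendations": ["a", "b"]}'
assert validate_recommendation_response(good) == []
assert validate_recommendation_response("oops") == ["response is not valid JSON"]
```

Checks like this can run against the real model in a small nightly suite, while the bulk of the pipeline relies on the seam-based tests above.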

This isn’t just about making your job easier; it’s about building a more stable, more testable, and more reliable system, together. It’s the practical starting point for a new, more powerful partnership.

Conclusion

We’re at the start of a new era in software quality. The rise of AI is forcing us to evolve, to move beyond our traditional roles and embrace a more collaborative, engineering-focused mindset.

It’s a big shift, but it’s also a huge opportunity. By championing testability and bringing new tools to the table, we’re not just finding bugs anymore; we’re helping to build the resilient, high-quality systems of the future.

If you’re ready to take the first step in that journey, the Prompt Pack is for you.

Download The AI Test Engineer Prompt Pack Now

The post It’s Not Your Tests, It’s Your Testability first appeared on TestinGil.
Read the whole story
alvinashcraft
4 hours ago
Pennsylvania, USA

#536 - 19th October 2025


There is a raft of new announcements for Microsoft Fabric. Gain End-to-End Visibility into Data Activity Using OneLake diagnostics (GA) enables workspace administrators to track who accessed what data, when, and how across Fabric workspaces by streaming data access events as JSON logs into a Lakehouse for analysis. Job-Level Bursting Switch gives capacity administrators granular control over Spark compute resources, allowing them to optimize for either peak performance or higher concurrency by enabling or disabling the ability for individual jobs to consume burst capacity.

Top Features of Notebooks in Microsoft Fabric highlights key capabilities including native lakehouse integration, a built-in file system, Data Wrangler for drag-and-drop exploration, Copilot AI assistance, fast Spark startup times, and seamless Power BI semantic model integration through Semantic Link. Building data quality into Microsoft Fabric advocates a validation-first approach where data quality checks become first-class citizens in pipeline design, with validation occurring early and often at multiple checkpoints throughout the data lifecycle. Azure Synapse Runtime for Apache Spark 3.5 is now generally available for production workloads, featuring upgrades to Apache Spark 3.5 and Delta Lake 3.2 while helping customers prepare for migration to Microsoft Fabric Spark.

In AI, Building Human-in-the-loop AI Workflows with Microsoft Agent Framework demonstrates how Microsoft Agent Framework combines deterministic workflow orchestration with autonomous AI agents for fraud detection, using a fan-out/fan-in pattern with specialized agents, checkpointing, and human review capabilities. GitHub Copilot in SSMS (Preview) brings AI-powered assistance to SQL Server Management Studio in Preview 3 of SSMS 22, providing help with writing, editing, and fixing T-SQL queries with database and schema context awareness. Celebrate National Book Month with Copilot showcases creative ways to use Microsoft Copilot for literary activities, including fighting book bans, supporting independent bookstores, setting up Little Free Libraries, and overcoming writer's block.

Finally, What is Grafana explains how Grafana serves as an open-source analytics and monitoring platform that visualizes data from multiple sources, with Azure offering both Azure Managed Grafana as a fully managed service and Azure Monitor dashboards with Grafana in preview for free dashboard creation.

Read the whole story
alvinashcraft
4 hours ago
Pennsylvania, USA

Task Scheduling and Background Services in Blazor Server

Learn how to safely run background tasks, update the UI, and schedule recurring jobs in Blazor Server using hosted services, Quartz.NET, and InvokeAsync.
Read the whole story
alvinashcraft
11 hours ago
Pennsylvania, USA

IoT Coffee Talk: Episode 283 - "The AI Database" (The Oracle RDBMS Revolution)

From: IoT Coffee Talk
Duration: 1:12:28
Views: 2

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week, Dimitri, Rob, and Leonard jump on Web3 to host a discussion about:

🎶 🎙️ GOOD KARAOKE! 🎸 🥁 "The Oracle" by Leonard Lee (Original)
🐣 The genealogy of Oracle's World, from OAUG to Oracle Cloud World to Oracle AI World.
🐣 What RDBMS did you start off with? Dimitri and Rob try to prove who is more ancient.
🐣 How did Oracle come about and become the leading database company?
🐣 How did Larry Ellison become the richest man in the world in two eras?
🐣 How is the current GenAI hype similar to the database hype cycle of the '80s and '90s?
🐣 Why did DBAs make so much money back at the height of the database era?
🐣 How is the RDBMS adapting to be the center of data in the GenAI era?
🐣 The mess that will be the AI data center, and the coming modernization problem.
🐣 Why Rob hates guys who wear track suits to high-end steakhouses.
🐣 Do you care if an AI can hit an A5 in a crazy tough song like "Golden?"
🐣 Why Apple continues to kill it with Apple Silicon.

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Communities. Just make a minimally required donation to www.elevatecommunities.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62

Read the whole story
alvinashcraft
11 hours ago
Pennsylvania, USA