Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Workday to Lay Off Roughly 400 Customer Support Staff


SaaS provider says layoffs target non-revenue roles as it retools priorities, with costs weighing heavily on near-term financial results.

The post Workday to Lay Off Roughly 400 Customer Support Staff appeared first on TechRepublic.


Agents League: Build, Learn, and Level Up Your AI Skills


We're inviting the next generation of developers to join Agents League, running February 16-27. It's a two-week challenge where you'll build AI agents using production-ready tools, learn from live coding sessions, and get feedback directly from Microsoft product teams.

We've put together starter kits for each track, including requirements and guidelines, to help you get up and running quickly. Whether you want to explore what GitHub Copilot can do beyond autocomplete, build reasoning agents on Microsoft Foundry, or create enterprise integrations for Microsoft 365 Copilot, we have a track for you.

Important: Register first to be eligible for prizes and your digital badge. Without registration, you won't qualify for awards or receive a badge when you submit.

What Is Agents League?

It's a 2-week competition where you learn by doing:

  • 📽️ Live coding battles – Watch experts compete in real-time and explain their thinking
  • 💻 Build at your pace – Two weeks to work on your project
  • 💬 Get help on Discord – AMAs, community support, and a friendly crowd to cheer you on
  • 🏆 Win prizes – $500 per track, GitHub Copilot Pro subscriptions, and digital badges for everyone who submits

The Three Tracks

  • 🎨 Creative Apps — Build with GitHub Copilot (Chat, CLI, or SDK)
  • 🧠 Reasoning Agents — Build with Microsoft Foundry
  • 💼 Enterprise Agents — Build with M365 Agents Toolkit (or Copilot Studio)

More details on each track below, or jump straight to the starter kits.

The Schedule

Agents League starts on February 16th and runs through February 27th. Across those two weeks, we'll host live battles on Microsoft Reactor and AMA sessions on Discord.

Week 1: Live Battles (Feb 17-19)

We're kicking off with live coding battles streamed on Microsoft Reactor. Watch experienced developers compete in real-time, explaining their approach and architectural decisions as they go.

  • Tue Feb 17, 9 AM PT — 🎨 Creative Apps battle
  • Wed Feb 18, 9 AM PT — 🧠 Reasoning Agents battle
  • Thu Feb 19, 9 AM PT — 💼 Enterprise Agents battle

All sessions are recorded, so you can watch on your own schedule.

Week 2: Build + AMAs (Feb 24-26)

This is your time to build and ask questions on Discord. The async format means you work when it suits you: evenings, weekends, whatever fits your schedule.

We're also hosting AMAs on Discord where you can ask questions directly to Microsoft experts and product teams:

  • Tue Feb 24, 9 AM PT — 🎨 Creative Apps AMA
  • Wed Feb 25, 9 AM PT — 🧠 Reasoning Agents AMA
  • Thu Feb 26, 9 AM PT — 💼 Enterprise Agents AMA

Bring your questions, get help when you're stuck, and share what you're building with the community.

Pick Your Track

We've created a starter kit for each track with setup guides, project ideas, and example scenarios to help you get started quickly.

🎨 Creative Apps

Tool: GitHub Copilot (Chat, CLI, or SDK)

Build innovative, imaginative applications that showcase the potential of AI-assisted development. All application types are welcome: web apps, CLI tools, games, mobile apps, desktop applications, and more.

The starter kit walks you through GitHub Copilot's different modes and provides prompting tips to get the best results. View the Creative Apps starter kit.

🧠 Reasoning Agents

Tool: Microsoft Foundry (UI or SDK) and/or Microsoft Agent Framework

Build a multi-agent system that leverages advanced reasoning capabilities to solve complex problems. This track focuses on agents that can plan, reason through multi-step problems, and collaborate.

The starter kit includes architecture patterns, reasoning strategies (planner-executor, critic/verifier, self-reflection), and integration guides for tools and MCP servers. View the Reasoning Agents starter kit.
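
If you're curious what the planner-executor shape looks like in code, here's a rough, framework-agnostic sketch (the types and function signatures are placeholders of ours, not Foundry or Agent Framework APIs):

    // Planner-executor skeleton: one agent breaks the goal into steps,
    // another executes each step in order. A critic/verifier agent could
    // be slotted in after each execution in the same style.
    type Step = { description: string };

    async function runPlannerExecutor(
      goal: string,
      plan: (goal: string) => Promise<Step[]>,    // planner agent call
      execute: (step: Step) => Promise<string>,   // executor agent call
    ): Promise<string[]> {
      const steps = await plan(goal);
      const results: string[] = [];
      for (const step of steps) {
        results.push(await execute(step));        // verify/retry could go here
      }
      return results;
    }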

💼 Enterprise Agents

Tool: M365 Agents Toolkit or Copilot Studio

Create intelligent agents that extend Microsoft 365 Copilot to address real-world enterprise scenarios. Your agent must work on Microsoft 365 Copilot Chat.

Bonus points for: MCP server integration, OAuth security, Adaptive Cards UI, connected agents (multi-agent architecture). View the Enterprise Agents starter kit.

Prizes & Recognition

To be eligible for prizes and your digital badge, you must register before submitting your project.

Category Winners ($500 each):

  • 🎨 Creative Apps winner
  • 🧠 Reasoning Agents winner
  • 💼 Enterprise Agents winner

GitHub Copilot Pro subscriptions:

  • Community Favorite (voted by participants on Discord)
  • Product Team Picks (selected by Microsoft product teams)

Everyone who registers and submits a project wins a digital badge to showcase their participation.

Beyond the prizes, every participant gets feedback from the teams who built these tools, a valuable opportunity to learn and improve your approach to AI agent development.

Why This Matters

AI development is where the opportunities are right now. Building with GitHub Copilot, Microsoft Foundry, and M365 Agents Toolkit gives you:

  • A real project for your portfolio
  • Hands-on experience with production-grade tools
  • Connections with developers from around the world

Whether you're looking for your first internship, exploring AI, or just want to build something cool, this is two weeks well spent.

How to Get Started

  1. Register first — This is required to be eligible for prizes and to receive your digital badge. Without registration, your submission won't qualify for awards or a badge.
  2. Pick a track — Choose one track. Explore the starter kits to help you decide.
  3. Watch the battles — See how experienced developers approach these challenges. Great for learning even if you're still deciding whether to compete.
  4. Build your project — You have until Feb 27. Work on your own schedule.
  5. Submit via GitHub — Open an issue using the project submission template.
  6. Join us on Discord — Get help, share your progress, and vote for your favorite projects on Discord.


Writing Effective Prompts for Testing Scenarios: AI Assisted Quality Engineering


AI-assisted testing is no longer an experiment confined to innovation labs. Across enterprises, quality engineering teams are actively shifting from manual-heavy testing approaches to AI-first QA, where tools like GitHub Copilot participate throughout the SDLC—from requirement analysis to regression triage.

Yet, despite widespread adoption, most teams are only scratching the surface. They use AI to “generate test cases” or “write automation,” but struggle with inconsistent outputs, shallow coverage, and trust issues. The root cause is rarely the model; it's prompt design.

This post moves past basic prompting tips to cover QA-specific practices, focusing on effective prompt design and common pitfalls. It treats AI adoption in testing as an ongoing transformation rather than a quick productivity win.


Why Effective Prompting Is Necessary in Testing

At its core, testing is about asking the right questions of a system. When AI enters the picture, prompts become the mechanism through which those questions are asked. A vague or incomplete prompt is no different from an ambiguous test requirement—it leads to weak coverage and unreliable results.

Poorly written prompts often result in generic or shallow test cases, incomplete UI or API coverage, incorrect automation logic, or superficial regression analysis. This increases rework and reduces trust in AI-generated outputs.

In contrast, well-crafted prompts dramatically improve outcomes. They help expand UI and API test coverage, accelerate automation development, and enable faster interpretation of regression results. More importantly, they allow testers to focus on risk analysis and quality decisions instead of repetitive tasks. In this sense, effective prompting doesn’t replace testing skills—it amplifies them.


Industry Shift: Manual QA to AI-First Testing Lifecycle

Modern QA organizations are undergoing three noticeable shifts.

First, there is a clear move away from manual test authoring toward AI-augmented test design. Testers increasingly rely on AI to generate baseline coverage, allowing them to focus on risk analysis, edge cases, and system behavior rather than repetitive documentation.

Second, enterprises are adopting agent-based and MCP-backed testing, where AI systems are no longer isolated prompt responders. They operate with access to application context—OpenAPI specs, UI flows, historical regressions, and even production telemetry—making outputs significantly more accurate and actionable.

Third, teams are seeing tangible SDLC impact. Internally reported metrics across multiple organizations show faster test creation, reduced regression cycle time, and earlier defect detection when Copilot-style tools are used correctly. The key word here is correctly: poor prompts neutralize these benefits almost immediately.

Prerequisites 
  • GitHub Copilot access in a supported IDE (VS Code, JetBrains, Visual Studio)
  • An appropriate model (advanced reasoning models for workflows and analysis)
  • Basic testing fundamentals (AI amplifies skill; it does not replace it)
  • (Optional but powerful) Context providers / MCP servers for specs, docs, and reports


Prompting: A Design Skill, with Examples

Most testers treat prompts as instructions. Mature teams treat them as design artifacts. Effective prompts should be intentional, layered, and defensive. They should not just ask for output, but control how the AI reasons, what assumptions it can make, and how uncertainty is handled.

Pattern 1: Role-Based Prompting

Assigning a role fundamentally changes the AI’s reasoning depth.

Instead of:

“Generate test cases for login.”

Use:

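A role-based version might read like this (the persona, scope, and output format are illustrative choices):

    You are a senior QA engineer specializing in authentication flows.
    Generate test cases for the login page of a web application.
    Cover positive, negative, and security scenarios (invalid credentials,
    account lockout, injection attempts, session handling).
    Present the output as a table with columns: ID, Title, Preconditions,
    Steps, Expected Result, Priority (High/Medium/Low).
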
This pattern consistently results in better prioritization, stronger negative scenarios, and fewer superficial cases.

Pattern 2: Few-Shot Prompting with Test Examples

AI aligns faster when shown what “good” looks like. Providing even a single example test case or automation snippet dramatically improves consistency in AI-generated outputs, especially when multiple teams are involved. Concrete examples help align the AI with expected automation structure, enforce naming conventions, influence the depth and quality of assertions, and standardize reporting formats. By showing what “good” looks like, teams reduce variation, improve maintainability, and make AI-generated assets far easier to review and extend.
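
As a sketch of the idea, the prompt below pairs the instruction with a single concrete example test (the framework, naming convention, and locators are assumptions; substitute your own):

    Generate Playwright tests for the password-reset flow.
    Match the style of this example exactly (naming, locators, assertions):

    import { test, expect } from '@playwright/test';

    // Naming convention: feature_scenario_expectedResult
    test('login_validCredentials_redirectsToDashboard', async ({ page }) => {
      await page.goto('/login');
      await page.getByLabel('Email').fill('user@example.com');
      await page.getByLabel('Password').fill('CorrectHorse1!');
      await page.getByRole('button', { name: 'Sign in' }).click();
      await expect(page).toHaveURL(/\/dashboard/);
    });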

Pattern 3: Provide Rich Context and Clear Instructions

Copilot works best when it understands the surrounding context of what you are testing. The richer the context, the higher the quality of the output—whether you are generating manual test cases, automation scripts, or regression insights. When writing prompts, clearly describe the application type (web, mobile, UI, API), the business domain, the feature or workflow under test, and the relevant user roles or API consumers. Business rules, constraints, assumptions, and exclusions should also be explicitly stated. Where possible, include structured instructions in an instructions.md file and pass it as context to the Copilot agent. You can also attach supporting assets—such as Swagger screenshots or UI flow diagrams—to further ground the AI’s understanding. The result is more concise, accurate output that aligns closely with your system’s real behavior and constraints.

Below is an example of how rich context can lead to more efficient output.
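
Here is one possible shape for such a prompt (the application, rules, and file names are illustrative assumptions):

    Context:
    - Application: web-based expense-management system (React front end, REST API)
    - Feature under test: expense submission and approval workflow
    - Roles: Employee (submits), Manager (approves), Finance (reimburses)
    - Business rules: expenses over $500 require manager approval; receipts
      are mandatory above $25; duplicate submissions are rejected
    - Out of scope: mobile clients, SSO configuration
    - Additional context: attached instructions.md and the OpenAPI spec

    Task: Generate end-to-end test scenarios for the submission workflow,
    covering each role, each business rule, and boundary values.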


The example below shows how to give GitHub Copilot clear instructions that help it handle uncertainty and adhere to exceptions.
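
For instance (illustrative wording):

    Instructions:
    - If a requirement is ambiguous, state your assumptions explicitly
      before generating tests; do not guess silently.
    - If required information is missing (e.g., exact error-message text),
      insert the placeholder TODO-CLARIFY instead of inventing a value.
    - Only reference locators that appear in the attached page source; if
      none matches, flag the gap rather than fabricating one.
    - If acceptance criteria conflict, ask up to three clarifying
      questions before producing any output.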


Prompt Anti-Patterns to Avoid

Most AI failures in QA are self-inflicted. The following anti-patterns show up repeatedly in enterprise teams.

  • Overloaded prompts that request UI tests, API tests, automation, and analysis in one step (see the scoped contrast after this list)
  • Natural language overuse where structured output (tables, JSON, code templates) is required
  • Automation prompts without environment details (browser, framework, auth, data)
  • Contradictory instructions, such as asking for “detailed coverage” and “keep it minimal” simultaneously
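
For contrast, a scoped, structure-aware version that avoids the first two anti-patterns might look like this (the endpoint and data set names are hypothetical):

    Scope: API tests only, for POST /orders (UI and automation come later).
    Output: a JSON array, one object per test, with the fields
    "id", "title", "request", "expectedStatus", "expectedBody".
    Environment: staging, API-key auth, seeded data set "orders-small".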


The AI-Assisted QA Maturity Model

Prompting is not a one-time tactic—it is a capability that matures over time. The levels below represent how increasing sophistication in prompt design directly leads to more advanced, reliable, and impactful testing outcomes.


Level 1 – Prompt-Based Test Generation
AI is primarily used to generate manual test cases, scenarios, and edge cases from requirements or user stories. This level improves test coverage and speeds up test design but still relies heavily on human judgment for validation, prioritization, and execution.
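
A typical Level 1 prompt (the user story here is hypothetical):

    From the user story below, generate a prioritized test-case table
    (ID, Title, Steps, Expected Result, Priority), including negative
    and boundary cases.
    Story: "As a customer, I can apply one discount code per order;
    expired or stacked codes must be rejected."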

Level 2 – AI-Assisted Automation
AI moves beyond documentation and actively supports automation by generating framework-aligned scripts, page objects, and assertions. Testers guide the AI with clear constraints and patterns, resulting in faster automation development while retaining full human control over architecture and execution.
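
As a minimal sketch of the kind of framework-aligned artifact Level 2 targets (Playwright and the page details are assumptions; substitute your own stack and conventions):

    // Page object generated under explicit constraints such as:
    // "Use @playwright/test, one class per page, locators as readonly fields."
    import { type Page, type Locator } from '@playwright/test';

    export class CheckoutPage {
      readonly promoInput: Locator;
      readonly applyButton: Locator;
      readonly orderTotal: Locator;

      constructor(private readonly page: Page) {
        this.promoInput = page.getByLabel('Promo code');
        this.applyButton = page.getByRole('button', { name: 'Apply' });
        this.orderTotal = page.getByTestId('order-total');
      }

      async applyPromo(code: string): Promise<void> {
        await this.promoInput.fill(code);
        await this.applyButton.click();
      }
    }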


Level 3 – AI-Led Regression Analysis
At this stage, AI assists in analyzing regression results by clustering failures, identifying recurring patterns, and suggesting likely root causes. Testers shift from manually triaging failures to validating AI-generated insights, significantly reducing regression cycle time.
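
A Level 3 prompt might look like this (the commit hash and failure count are placeholders):

    Attached are the JUnit XML results from last night's regression run
    (42 failures). Cluster the failures by probable root cause, give each
    cluster a representative stack trace, and flag any cluster that first
    appeared after commit abc1234. Output a table with columns:
    Cluster, Count, Probable Cause, First Seen, Confidence.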


Level 4 – MCP-Integrated, Agentic Testing
AI operates with deep system context through MCP servers, accessing specifications, historical test data, and execution results. It can independently generate, refine, and adapt tests based on system changes, enabling semi-autonomous, context-aware quality engineering with human oversight.
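
A Level 4 prompt assumes the agent can pull its own context (the MCP server names here are hypothetical):

    Using the OpenAPI spec from the api-docs MCP server and the last 30
    days of results from the test-reports server, identify endpoints whose
    contracts changed, regenerate the affected API tests to match, and
    produce a summary listing removed, updated, and newly added cases.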


Best Practices for Prompt-Based Testing

  • Prioritize context over brevity
  • Treat prompts as test specifications
  • Iterate instead of rewriting from scratch
  • Experiment with models when outputs miss intent
  • Always validate AI-generated automation and analysis
  • Maintain reusable prompt templates for UI testing, API testing, automation, and regression analysis (a template sketch follows this list)
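
A reusable template can be as simple as this (the structure is a suggestion, not a standard):

    [ROLE] You are a senior QA engineer for <domain>.
    [CONTEXT] App: <type/stack>. Feature: <feature>. Rules: <business rules>.
    [TASK] Generate <test cases | automation | regression analysis>.
    [CONSTRAINTS] Framework: <framework>. Naming: <convention>. Scope: <in/out>.
    [OUTPUT] Format: <table | JSON | code>. List assumptions first if
    anything is missing or ambiguous.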


Final Thoughts: Prompting as a Core QA Capability

Effective prompting improves coverage, accelerates delivery, and elevates QA from execution to engineering. It turns Copilot from a code generator into a quality partner. The next step is going beyond functional flows to explore how AI prompting can aid automation framework enhancements, performance testing, accessibility testing, and data quality testing. Stay tuned for upcoming blogs!


Why AI loves em dashes, with Sean Goedecke


1157. This week, we look at AI em dashes with Sean Goedecke, software engineer for GitHub. We talk about why artificial intelligence models frequently use em dashes and words like "delve," and how training on public domain books from the late 1800s may have influenced these patterns. We also look at the role of human feedback in shaping "AI style."

www.SeanGoedecke.com

🔗 Join the Grammar Girl Patreon.

🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Take our advertising survey. 

🔗 Get the edited transcript.

🔗 Get Grammar Girl books. 

| HOST: Mignon Fogarty

| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Dan Feierabend
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian
  • Podcast Associate: Maram Elnagheeb

| Theme music by Catherine Rannus.

| Grammar Girl Social Media: YouTube. TikTok. Facebook. Threads. Instagram. LinkedIn. Mastodon. Bluesky.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://dts.podtrac.com/redirect.mp3/media.blubrry.com/grammargirl/stitcher.simplecastaudio.com/e7b2fc84-d82d-4b4d-980c-6414facd80c3/episodes/e26e74dd-2518-4bb0-9118-ba3bfd5126e9/audio/128/default.mp3?aid=rss_feed&awCollectionId=e7b2fc84-d82d-4b4d-980c-6414facd80c3&awEpisodeId=e26e74dd-2518-4bb0-9118-ba3bfd5126e9&feed=XcH2p3Ah

.NET Source Generators with Jason Bock

Why would you write code to generate code? Carl and Richard talk with Jason Bock about his experiences using modern .NET source generators to optimize certain aspects of applications. Jason talks about treading carefully: while source generators have been part of .NET since .NET 5 and the Roslyn compiler, they are a special-case approach to problem solving. But with specialized implementations for regex and P/Invoke, there is huge potential in these coding techniques that you can take advantage of!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/69804401/dotnetrocks_1988_dot_net_source_generators.mp3

Introducing the Developer Knowledge API and MCP Server

Google is launching the Developer Knowledge API and MCP Server in public preview. This new toolset provides a canonical, machine-readable way for AI assistants and agentic platforms to search and retrieve up-to-date documentation across Firebase, Google Cloud, Android, and more. By using the official MCP server, developers can connect tools directly to Google’s documentation corpus, ensuring that AI-generated code and guidance are based on authoritative, real-time context.