SaaS provider says layoffs target non-revenue roles as it retools priorities, with costs weighing heavily on near-term financial results.
The post Workday to Lay Off Roughly 400 Customer Support Staff appeared first on TechRepublic.
We're inviting the next generation of developers to join Agents League, running February 16-27. It's a two-week challenge where you'll build AI agents using production-ready tools, learn from live coding sessions, and get feedback directly from Microsoft product teams.
We've put together a starter kit for each track, including requirements and guidelines, to help you get up and running quickly. Whether you want to explore what GitHub Copilot can do beyond autocomplete, build reasoning agents on Microsoft Foundry, or create enterprise integrations for Microsoft 365 Copilot, we have a track for you.
Important: Register first to be eligible for prizes and your digital badge. Without registration, you won't qualify for awards or receive a badge when you submit.
It's a two-week competition where you learn by doing:
The Three Tracks
More details on each track below, or jump straight to the starter kits.
Agents League starts on February 16th and runs through February 27th. Over those two weeks, we'll host live coding battles on Microsoft Reactor and AMA sessions on Discord.
We're kicking off with live coding battles streamed on Microsoft Reactor. Watch experienced developers compete in real-time, explaining their approach and architectural decisions as they go.
All sessions are recorded, so you can watch on your own schedule.
This is your time to build and ask questions on Discord. The async format means you work when it suits you: evenings, weekends, whatever fits your schedule.
We're also hosting AMAs on Discord where you can ask questions directly to Microsoft experts and product teams:
Bring your questions, get help when you're stuck, and share what you're building with the community.
We've created a starter kit for each track with setup guides, project ideas, and example scenarios to help you get started quickly.
Tool: GitHub Copilot (Chat, CLI, or SDK)
Build innovative, imaginative applications that showcase the potential of AI-assisted development. All application types are welcome: web apps, CLI tools, games, mobile apps, desktop applications, and more.
The starter kit walks you through GitHub Copilot's different modes and provides prompting tips to get the best results. View the Creative Apps starter kit.
Tool: Microsoft Foundry (UI or SDK) and/or Microsoft Agent Framework
Build a multi-agent system that leverages advanced reasoning capabilities to solve complex problems. This track focuses on agents that can plan, reason through multi-step problems, and collaborate.
The starter kit includes architecture patterns, reasoning strategies (planner-executor, critic/verifier, self-reflection), and integration guides for tools and MCP servers. View the Reasoning Agents starter kit.
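If you're new to these patterns, the control flow is simpler than it sounds. Below is a minimal, framework-agnostic sketch of a planner-executor loop with a critic step; the stubs stand in for model and tool calls and are not the Microsoft Foundry or Agent Framework APIs.

```python
# Minimal planner-executor-critic loop. plan/execute/critique are stubs
# standing in for LLM and tool calls; the structure is the point.
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    result: str | None = None

def plan(goal: str) -> list[Step]:
    # Planner: break the goal into ordered steps (an LLM call in a real agent).
    return [Step(f"Gather facts for: {goal}"), Step(f"Draft an answer for: {goal}")]

def execute(step: Step) -> None:
    # Executor: run one step, e.g. call a tool or an MCP server.
    step.result = f"(output for '{step.description}')"

def critique(step: Step) -> bool:
    # Critic/verifier: accept or reject the step's result.
    return bool(step.result)

def run_agent(goal: str) -> list[Step]:
    steps = plan(goal)
    for step in steps:
        for _ in range(3):  # bounded retries: self-reflection with a limit
            execute(step)
            if critique(step):
                break
    return steps

if __name__ == "__main__":
    for s in run_agent("Summarize open issues in my repo by theme"):
        print(f"{s.description} -> {s.result}")
```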
Tool: M365 Agents Toolkit or Copilot Studio
Create intelligent agents that extend Microsoft 365 Copilot to address real-world enterprise scenarios. Your agent must work on Microsoft 365 Copilot Chat.
Bonus points for: MCP server integration, OAuth security, Adaptive Cards UI, connected agents (multi-agent architecture). View the Enterprise Agents starter kit.
To be eligible for prizes and your digital badge, you must register before submitting your project.
Category Winners ($500 each):
GitHub Copilot Pro subscriptions:
Everyone who registers and submits a project wins a digital badge to showcase their participation.
Beyond the prizes, you'll get feedback from the teams who built these tools, which is a valuable opportunity to learn and improve your approach to AI agent development.
AI development is where the opportunities are right now. Building with GitHub Copilot, Microsoft Foundry, and M365 Agents Toolkit gives you:
Whether you're looking for your first internship, exploring AI, or just want to build something cool, this is two weeks well spent.
AI-assisted testing is no longer an experiment confined to innovation labs. Across enterprises, quality engineering teams are actively shifting from manual-heavy testing approaches to AI-first QA, where tools like GitHub Copilot participate throughout the SDLC—from requirement analysis to regression triage.
Yet, despite widespread adoption, most teams are only scratching the surface. They use AI to “generate test cases” or “write automation,” but struggle with inconsistent outputs, shallow coverage, and trust issues. The root cause is rarely the model; it’s prompt design.
This blog moves past basic prompting tips to focus on effective prompt design for QA and the pitfalls to avoid, treating AI adoption in testing as an ongoing transformation rather than a quick productivity gain.
At its core, testing is about asking the right questions of a system. When AI enters the picture, prompts become the mechanism through which those questions are asked. A vague or incomplete prompt is no different from an ambiguous test requirement—it leads to weak coverage and unreliable results.
Poorly written prompts often result in generic or shallow test cases, incomplete UI or API coverage, incorrect automation logic, or superficial regression analysis. This increases rework and reduces trust in AI-generated outputs.
In contrast, well-crafted prompts dramatically improve outcomes. They help expand UI and API test coverage, accelerate automation development, and enable faster interpretation of regression results. More importantly, they allow testers to focus on risk analysis and quality decisions instead of repetitive tasks. In this sense, effective prompting doesn’t replace testing skills—it amplifies them.
Modern QA organizations are undergoing three noticeable shifts.
First, there is a clear move away from manual test authoring toward AI-augmented test design. Testers increasingly rely on AI to generate baseline coverage, allowing them to focus on risk analysis, edge cases, and system behavior rather than repetitive documentation.
Second, enterprises are adopting agent-based and MCP-backed testing, where AI systems are no longer isolated prompt responders. They operate with access to application context—OpenAPI specs, UI flows, historical regressions, and even production telemetry—making outputs significantly more accurate and actionable.
Third, teams are seeing tangible SDLC impact. Internally reported metrics across multiple organizations show faster test creation, reduced regression cycle time, and earlier defect detection when Copilot-style tools are used correctly. The key phrase here is “correctly”: poor prompts neutralize these benefits almost immediately.
Most testers treat prompts as instructions. Mature teams treat them as design artifacts. Effective prompts should be intentional, layered, and defensive. They should not just ask for output, but control how the AI reasons, what assumptions it can make, and how uncertainty is handled.
Assigning a role fundamentally changes the AI’s reasoning depth.
Instead of:
“Generate test cases for login.”
Use a role-framed prompt along these lines (illustrative wording; adapt the role, domain, and scope to your system):
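“You are a senior QA engineer responsible for the login flow of a banking web application. Generate prioritized test cases covering positive, negative, boundary, and security scenarios, including account lockout and session handling, and state any assumptions you make.”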
This pattern consistently results in better prioritization, stronger negative scenarios, and fewer superficial cases.
AI aligns faster when shown what “good” looks like. Providing even a single example test case or automation snippet dramatically improves consistency in AI-generated outputs, especially when multiple teams are involved. Concrete examples help align the AI with the expected automation structure, enforce naming conventions, influence the depth and quality of assertions, and standardize reporting formats. Anchoring on a concrete example reduces variation, improves maintainability, and makes AI-generated assets far easier to review and extend.
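For instance, a team might paste a reference test like the one below into the prompt so generated tests mirror its naming, structure, and assertion depth (an illustrative Python example; the endpoint, payload, and error format are assumptions, not a prescribed standard):

```python
# Reference test included in the prompt as an example of "good":
# descriptive name, one behavior per test, assertions on observable outcomes.
import requests

BASE_URL = "https://api.example.com"  # placeholder host for illustration

def test_create_order_with_missing_quantity_returns_422():
    payload = {"sku": "ABC-123"}  # "quantity" deliberately omitted
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    assert response.status_code == 422
    body = response.json()
    assert body["error"]["field"] == "quantity"
    assert "required" in body["error"]["message"].lower()
```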
Copilot works best when it understands the surrounding context of what you are testing. The richer the context, the higher the quality of the output—whether you are generating manual test cases, automation scripts, or regression insights. When writing prompts, clearly describe the application type (web, mobile, UI, API), the business domain, the feature or workflow under test, and the relevant user roles or API consumers. Business rules, constraints, assumptions, and exclusions should also be explicitly stated. Where possible, include structured instructions in an instructions.md file and pass it as context to the Copilot agent. You can also attach supporting assets—such as Swagger screenshots or UI flow diagrams—to further ground the AI’s understanding. The result is more concise, accurate output that aligns closely with your system’s real behavior and constraints.
Below is an example of how rich context can lead to more efficient, better-targeted output.
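A context-rich prompt might look something like this (an illustrative sketch; the domain, feature, roles, and business rules here are placeholders for your own):

```
Role: Senior QA engineer for a retail web application (web UI + REST API).
Feature under test: "Apply discount code" in the checkout flow.
User roles: guest shopper, registered shopper, customer-support agent.
Business rules:
- Only one discount code per order; stacking is rejected.
- Expired codes return HTTP 422 from POST /api/checkout/discounts.
Out of scope: payment processing, gift cards.
Task: Generate prioritized test cases (positive, negative, boundary, security)
as a table with columns ID, Title, Preconditions, Steps, Expected Result, Priority.
List any assumptions you make before the table.
```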
The example below shows how to give GitHub Copilot clear instructions so that it handles uncertainty and exceptions in a controlled way.
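For example, an instructions file passed to the agent might include rules like these (illustrative wording, to be adapted to your project):

```
- Use only selectors, endpoints, and field names that appear in the attached
  OpenAPI spec or UI flow document; never invent them.
- If a requirement is ambiguous or missing, state your assumption explicitly at
  the top of the output instead of guessing silently.
- If required test data is unavailable, mark the case "Blocked - needs data"
  rather than fabricating values.
- If a scenario cannot be automated with the current framework, say so and
  explain why instead of producing partial code.
```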
Most AI failures in QA are self-inflicted. The following anti-patterns show up repeatedly in enterprise teams.
Prompting is not a one-time tactic—it is a capability that matures over time. The levels below represent how increasing sophistication in prompt design directly leads to more advanced, reliable, and impactful testing outcomes.
Level 1 – Prompt-Based Test Generation
AI is primarily used to generate manual test cases, scenarios, and edge cases from requirements or user stories. This level improves test coverage and speeds up test design but still relies heavily on human judgment for validation, prioritization, and execution.
Level 2 – AI-Assisted Automation
AI moves beyond documentation and actively supports automation by generating framework-aligned scripts, page objects, and assertions. Testers guide the AI with clear constraints and patterns, resulting in faster automation development while retaining full human control over architecture and execution.
Level 3 – AI-Led Regression Analysis
At this stage, AI assists in analyzing regression results by clustering failures, identifying recurring patterns, and suggesting likely root causes. Testers shift from manually triaging failures to validating AI-generated insights, significantly reducing regression cycle time.
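A minimal, framework-agnostic sketch of that clustering step is shown below (the normalization rules are assumptions and would need tuning against real error output):

```python
# Group regression failures by a normalized error signature so that failures
# with the same likely root cause land in one bucket for triage.
from collections import defaultdict
import re

def signature(error_message: str) -> str:
    # Strip volatile details (ids, counters, hex addresses) before comparing.
    normalized = re.sub(r"0x[0-9a-fA-F]+|\d+", "<n>", error_message.lower())
    return normalized[:120]

def cluster_failures(failures: dict[str, str]) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for test_name, error_message in failures.items():
        groups[signature(error_message)].append(test_name)
    return dict(groups)

if __name__ == "__main__":
    sample = {
        "test_checkout_guest": "TimeoutError: element #pay-btn not found after 30000 ms",
        "test_checkout_member": "TimeoutError: element #pay-btn not found after 30012 ms",
        "test_refund_api": "AssertionError: expected 200, got 500 (request id 8841)",
    }
    for sig, tests in cluster_failures(sample).items():
        print(f"{len(tests)} failure(s): {sig} -> {tests}")
```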
Level 4 – MCP-Integrated, Agentic Testing
AI operates with deep system context through MCP servers, accessing specifications, historical test data, and execution results. It can independently generate, refine, and adapt tests based on system changes, enabling semi-autonomous, context-aware quality engineering with human oversight.
Effective prompting improves coverage, accelerates delivery, and elevates QA from execution to engineering. It turns Copilot from a code generator into a quality partner. The next step is to go beyond functional flows and explore how AI prompting can help with automation framework enhancements, performance testing, accessibility testing, and data quality testing. Stay tuned for upcoming blogs!
1157. This week, we look at AI em dashes with Sean Goedecke, software engineer for GitHub. We talk about why artificial intelligence models frequently use em dashes and words like "delve," and how training on public domain books from the late 1800s may have influenced these patterns. We also look at the role of human feedback in shaping "AI style."
🔗 Join the Grammar Girl Patreon.
🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)
🔗 Watch my LinkedIn Learning writing courses.
🔗 Subscribe to the newsletter.
🔗 Take our advertising survey.
🔗 Get the edited transcript.
🔗 Get Grammar Girl books.
| HOST: Mignon Fogarty
| Grammar Girl is part of the Quick and Dirty Tips podcast network.
| Theme music by Catherine Rannus.
| Grammar Girl Social Media: YouTube. TikTok. Facebook. Threads. Instagram. LinkedIn. Mastodon. Bluesky.