
T-Mobile is laying off 393 workers in Washington as part of a new round of cuts, according to a filing with the state Employment Security Department released Monday morning.
More than 200 different job titles are impacted, according to the filing, including analysts, engineers and technicians, as well as directors and managers.
The cuts targeted nearly 210 senior- and director-level employees, plus seven with vice president or senior vice president titles, including a senior VP of talent and four VP-level roles in legal affairs.
Affected employees worked at the company’s Bellevue headquarters; data centers in Bellevue and East Wenatchee; and at stores and other facilities in Bothell, Bellingham, Woodinville, Spokane Valley and elsewhere.
The layoffs were attributed to “changing business needs,” according to the WARN filing signed by Monica Frohock, senior director of the Magenta Service Center.
“These facilities are not being closed,” the notice stated. “The layoffs are not due to relocation or contracting out employer operations or employee positions, but it is possible that some work currently done by these employees may at some point be done by others.”
Affected employees were given 60 days’ notice, and the departures are expected to take effect April 2.
T-Mobile employed about 70,000 people as of Dec. 31, 2024. The company has nearly 8,000 workers in the Seattle region, according to LinkedIn.
The cuts come as the Seattle area is being hit by thousands of tech-related layoffs, including job losses at Amazon, Expedia, Meta and other companies.
T-Mobile, the largest U.S. telecom company by market capitalization, laid off 121 workers in August 2025. In November, former Chief Operating Officer Srini Gopalan replaced longtime leader Mike Sievert as CEO.
T-Mobile’s stock is down nearly 20% over the past 12 months. The company reported revenue of $18.2 billion in the third quarter, up 9% year-over-year, and added 1 million new postpaid phone customers.
Verizon, another telecom giant, laid off approximately 165 employees in Washington in November.

Amazon is laying off 2,198 employees across Washington as part of the company’s latest corporate workforce reduction, according to a new filing released Monday by the state Employment Security Department.
A detailed list included with the Washington state filing shows that software development roles account for the largest share of the layoffs, with engineering management, program management, and technical product roles also hit hard.
In total, more than half of the cuts impact Amazon’s core product and engineering organizations. The remaining positions span business intelligence, sales, marketing, infrastructure, QA, HR, design, and other support functions. Senior- and principal-level employees were also affected.
A majority of the cuts — more than 1,400 — impact workers in Seattle, with more than 600 in nearby Bellevue, where Amazon has been expanding its office footprint.
The cuts are part of Amazon’s company-wide layoffs announced last week that impact 16,000 corporate employees globally. Combined with a 14,000-worker layoff in October, it’s the largest corporate workforce reduction in the company’s history.
As part of the October cuts, Amazon laid off 2,303 employees in Washington state. Between the two layoffs, Amazon has laid off more than 4,500 corporate workers in Washington state in less than a year.
Software engineers were also the largest group of employees affected by the cuts in October. Corporate support and commercial functions were hit harder in that round, which included engineering roles but also targeted legal, tax, and ad sales positions that are largely absent from the new list released Monday. The October cuts also hit Amazon’s gaming division, while this latest round is focused more squarely on the core technology organization.
The company has made several additional, smaller workforce reductions in recent years as it seeks to streamline operations. In a memo to employees sent Wednesday, Amazon senior vice president of people experience and technology Beth Galetti said the company is “reducing layers, increasing ownership, and removing bureaucracy.”
The specific job titles in the latest Washington state filing align with that goal, particularly in the company’s technical teams. The list includes a significant number of “Manager III” and “Senior Manager” roles within software and product teams, suggesting Amazon is axing layers of oversight, not just reducing individual contributor headcount.
Amazon noted in the filing that employees who secure internal transfers before their separation dates will not ultimately be laid off. Separations are scheduled to begin April 28 and continue through late June, according to the filing.

Amazon employs roughly 50,000 corporate workers in the Seattle region, which serves as its primary headquarters. The company also laid off 27,000 workers globally in 2022-2023.
The latest cuts come amid concerns about Seattle’s tech-heavy economy as other companies trim headcount.
Many corporations are slashing headcount to address pandemic-fueled corporate “bloat” while juggling economic uncertainty and the impact of AI tools.
The Seattle area lost 12,900 jobs last year across all sectors, the region’s first annual decline in jobs since 2009, according to the Puget Sound Regional Council.
Amazon implemented a five-day return-to-office policy for corporate employees last year — a move that drew pushback from some workers but a friendlier reception from small businesses surrounding Amazon’s office buildings.
Jon Scholes, president of the Downtown Seattle Association, said in a statement last week that a “workforce change of this scale has ripple effects on the community.”
The broader layoffs may also impact Seattle’s commercial real estate market, which continues to struggle with record-high vacancy rates.
Amazon is also laying off about 400 workers in Washington state as part of its decision to close all Amazon Go and Amazon Fresh stores nationwide. Those cuts are separate from the corporate layoffs.
Modern engineering work rarely lives in a single file. Real systems evolve across years of incrementally layered decisions—some good, some accidental. A single feature request (“Add tagging to notes,” “Refactor the validation layer,” “Support a new consumer on our API”) often touches controllers, domain models, repositories, migrations, tests, documentation, and deployment strategy.
Copilot’s agentic capabilities don’t replace your judgment in these situations—they amplify it. When used well, Copilot becomes a partner in system design, refactoring, modernization, and multi-file coordination.
This guide focuses on architecture-aware, multi-step workflows that staff engineers use every day, written to be accessible to earlier-career engineers who want to understand how senior engineers think—and how Copilot can accelerate their own growth.
It draws on four GitHub Skills exercises (linked below), and builds toward a complete, real-world scenario: extending a small modular Notes Service with a tagging subsystem, refactoring a validation layer, designing a safe migration, and modernizing tests.
You’ll get the most out of this guide if you have:
If you’re earlier in your career, don’t worry. Each section explains why these patterns matter and how to practice them safely.
Senior engineers rarely begin by writing code. They begin by identifying boundaries: domain logic, data access, interfaces, and how modules should interact.
Copilot agent mode can help by revealing structural issues and proposing architectures.
Prompt:
Analyze this service and propose a modular decomposition with domain, infrastructure, and interface layers.
Identify anti-patterns, coupling issues, and potential failure points.
You’ll typically get back:
This transforms Copilot from an autocomplete tool into a design reviewer.
You can push further by asking it to compare architectures:
Compare hexagonal architecture vs. layered architecture for this codebase.
Recommend one based on the constraints here. Include tradeoffs.
Want to try it yourself? Use these proposals as starting points.
Once boundaries are defined, Copilot can coordinate changes across modules.
Prompt:
Implement the domain, controller, and repository layers as distinct modules.
Use dependency inversion to reduce coupling.
Document assumptions and contracts for each module.
Copilot will typically generate:
For earlier-career engineers, this provides exposure to real engineering patterns. For senior engineers, it provides leverage and reduces boilerplate overhead.
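To make that concrete, here is a minimal TypeScript sketch of dependency-inverted layering for a Notes Service. The NotesRepository contract, NotesService, and InMemoryNotesRepository names are illustrative assumptions, not Copilot output:

// Domain layer: pure business types and logic, no infrastructure imports.
export interface Note {
  id: string;
  title: string;
  body: string;
}

// The domain owns this contract; infrastructure implements it (dependency inversion).
export interface NotesRepository {
  get(id: string): Promise<Note | null>;
  save(note: Note): Promise<void>;
}

// The domain service depends only on the abstraction, so storage can be swapped freely.
export class NotesService {
  constructor(private readonly repo: NotesRepository) {}

  async rename(id: string, title: string): Promise<Note> {
    const note = await this.repo.get(id);
    if (!note) throw new Error(`Note ${id} not found`);
    const updated = { ...note, title };
    await this.repo.save(updated);
    return updated;
  }
}

// Infrastructure layer: one interchangeable implementation of the contract.
export class InMemoryNotesRepository implements NotesRepository {
  private store = new Map<string, Note>();
  async get(id: string): Promise<Note | null> {
    return this.store.get(id) ?? null;
  }
  async save(note: Note): Promise<void> {
    this.store.set(note.id, note);
  }
}

Swapping InMemoryNotesRepository for a SQL-backed implementation requires no change to NotesService, which is the point of the inversion.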
Adding a tagging subsystem is a deceptively simple request with meaningful architectural implications.
Even this single feature forces decisions across the system:
Before touching code, ask Copilot to map the impact.
Prompt:
Propose the architectural changes required to add a tagging subsystem.
Identify migration needs, cross-cutting concerns, caching or indexing implications, and potential regressions.
Copilot may identify:
This is the staff-level lens that Copilot can help junior developers adopt.
Then implement it:
Implement the tagging domain model, schema changes, repository updates, and controller logic.
Update tests and documentation. Show each change as a diff.
Example output (simplified)
Migration example:
ALTER TABLE notes ADD COLUMN tags TEXT DEFAULT '[]';
Domain model example:
export interface Tag {
  id: string;
  label: string;
}

export interface Note {
  id: string;
  title: string;
  body: string;
  tags: Tag[];
}
Controller update (partial):
await noteService.addTag(noteId, { label: req.body.label });
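For additional context, a fuller handler around that one-line call might look like the following sketch. The Express-style factory wiring and the addTag signature are assumptions for illustration, not the service’s actual API:

import { Request, Response } from "express";

// Hypothetical domain-service shape; the real contract lives in the domain layer.
interface NoteService {
  addTag(noteId: string, tag: { label: string }): Promise<{ id: string; label: string }>;
}

// A factory keeps the controller thin and testable: the service is injected,
// and the handler only translates HTTP concerns to and from the domain.
export function makeAddTagHandler(noteService: NoteService) {
  return async (req: Request, res: Response) => {
    const { label } = req.body;
    if (typeof label !== "string" || label.trim() === "") {
      return res.status(400).json({ error: "label must be a non-empty string" });
    }
    const tag = await noteService.addTag(req.params.id, { label });
    return res.status(201).json(tag);
  };
}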
This is where agent mode shines: coordinating multiple files with consistent intent.
At senior levels, the hardest part isn’t writing SQL. It’s designing a change that is:
Ask Copilot to reason about this:
Prompt:
Generate an additive, backward-compatible schema migration to support the tagging subsystem.
Describe the rollback plan, compatibility window, and expected impact to existing clients.
This forces Copilot to consider:
If you’re earlier in your career, this offers lessons on how safe migrations are designed. And if you’re more experienced, this gives you a repeatable workflow for multi-step schema evolution.
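As a concrete sketch of that workflow, here is what the additive migration and its rollback could look like, assuming a generic db client with a query method (the up/down helper names are hypothetical):

// Expand phase: additive and backward-compatible. Existing readers and writers
// are unaffected because the new column has a safe default.
export async function up(db: { query: (sql: string) => Promise<void> }) {
  await db.query(`ALTER TABLE notes ADD COLUMN tags TEXT DEFAULT '[]'`);
}

// Rollback plan: dropping the column is only safe while no deployed client
// reads or writes it; that window is the compatibility window to document.
export async function down(db: { query: (sql: string) => Promise<void> }) {
  await db.query(`ALTER TABLE notes DROP COLUMN tags`);
}

The same pattern extends to later phases: ship the additive change, migrate writers, then remove the old path once the compatibility window closes.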
Let’s perform a real cross-module refactor: extracting validation out of controllers into a domain service.
Prompt:
Create a step-by-step refactor plan to extract validation logic into a domain service.
Identify affected modules and required test updates.
Copilot may output something like a step-by-step plan that introduces a validationService module and updates its call sites and tests.
Prompt:
Execute steps 1–3 only. Stop before controller rewrites.
Provide detailed diffs and call out risky areas.
This is a low-blast-radius refactor, modeled directly in the IDE.
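As a sketch of the end state, the extracted domain service might look something like this. The NoteInput shape and the specific rules are illustrative assumptions rather than the article’s actual code:

// Hypothetical input shape for a note create/update request.
export interface NoteInput {
  title: string;
  body: string;
}

export class ValidationError extends Error {}

// Domain-level validation service: controllers delegate here instead of
// duplicating rules inline, so the rules live in one testable place.
export class NoteValidationService {
  validate(input: NoteInput): void {
    if (input.title.trim() === "") {
      throw new ValidationError("title must not be empty");
    }
    if (input.title.length > 200) {
      throw new ValidationError("title must be at most 200 characters");
    }
    if (input.body.length > 10_000) {
      throw new ValidationError("body must be at most 10,000 characters");
    }
  }
}

Controllers then call validate(input) and translate ValidationError into a 400 response, keeping HTTP concerns out of the domain.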
Instead of asking Copilot “write tests,” ask it to assess the entire suite.
Prompt:
Analyze the current test suite and identify systemic gaps.
Recommend a modernization plan including contract, integration, and domain-layer tests.
Then implement contract tests:
describe("NotesRepository contract", () => {
test("create + fetch returns a fully hydrated note object", async () => {
const note = await notesRepo.create({ title: "Test", body: "…" });
const fetched = await notesRepo.get(note.id);
expect(fetched).toMatchObject({ title: "Test" });
expect(fetched.id).toBeDefined();
});
});
This elevates testing into an architectural concern.
Bringing it all together, here’s a real sequence you might run with Copilot:
This workflow is architecturally realistic—and a model for how Copilot becomes a system-level collaborator.
It’s important to clarify that agent mode is not ideal for:
Copilot should support your decision-making, not replace it.
Here’s where GitHub Skills comes in—not as “beginner content,” but as a set of guided, self-contained labs that reinforce the patterns above.
Even senior engineers will benefit: These exercises are structured so you can reliably recreate complex workflows and test Copilot’s behavior in controlled environments.
Recently retired PowerShell icon Don Jones joins The PowerShell Podcast for a wide-ranging conversation on career ownership, community leadership, and building a life that aligns with what you actually value. Don reflects on the difference between your job and your career, why investing in yourself pays off, and how asking better questions can change the way you influence decisions at work. The episode also dives into Don’s journey as a fiction author, his role in shaping the PowerShell community and Summit culture, and why real success comes from clarity, kindness, and helping others win.
Key Takeaways:
• Your employer owns your job, but you own your career—define your destination and build the skills to get there.
• Strong careers are built on outcomes, not tools—focus on saving time, reducing errors, and delivering measurable business value.
• Community scales when you empower others—create space for people to contribute, own wins, and multiply the impact beyond yourself.
Guest Bio:
Don Jones is a foundational figure in the PowerShell community, known for his decades of teaching, writing, and advocacy for automation and professional growth. A former Microsoft MVP, Don co-authored the widely influential Learn PowerShell in a Month of Lunches series and helped shape community culture through conferences, mentorship, and leadership. Now retired from full-time work, Don continues writing and publishing fiction, bringing the same clarity and craft to storytelling that made his technical teaching so impactful.
Resource Links:
• Don Jones Website and Books – https://donjones.com
• Andrew's links – https://andrewpla.tech/links
• PowerShell + DevOps Global Summit – https://powershellsummit.org
• Tech Impact (nonprofit mentioned) – https://techimpact.org
• PowerShell.org – https://powershell.org
• PDQ Discord – https://discord.gg/PDQ
• PowerShell Wednesdays – https://www.youtube.com/results?search_query=PowerShell+Wednesdays
The PowerShell Podcast on YouTube: https://youtu.be/xKh8rqCqMQg
Open‑source AI is moving fast, with important breakthroughs in reasoning, agentic systems, multimodality, and efficiency emerging every day. Hugging Face has been a leading platform where researchers, startups, and developers share and discover new models. Microsoft Foundry brings these trending Hugging Face models into a production‑ready experience, where developers can explore, evaluate, and deploy them within their Azure environment. Our weekly Model Mondays series highlights Hugging Face models available in Foundry, focusing on what matters most to developers: why a model is interesting, where it fits, and how to put it to work quickly.
This week’s Model Mondays edition highlights three Hugging Face models, including a powerful Mixture-of-Experts model from Z. AI designed for lightweight deployment, Meta’s unified foundation model for image and video segmentation, and MiniMax’s latest open-source agentic model optimized for complex workflows.
| Use case | Best‑practice prompt pattern |
| --- | --- |
| Agentic coding (multi‑step repo work, debugging, refactoring) | Treat the model as an autonomous coding agent, not a snippet generator. Explicitly require task decomposition and step‑by‑step execution, then a single consolidated result. |
| Long‑context agent workflows (local or low‑cost autonomous agents) | Call out long‑horizon consistency and context preservation. Instruct the model to retain earlier assumptions and decisions across turns. |
Now that you know GLM‑4.7‑Flash works best when you give it a clear goal and let it reason through a bounded task, here’s an example prompt that a product or engineering team might use to identify risks and propose mitigations:
You are a software reliability analyst for a mid‑scale SaaS platform. Review recent incident reports, production logs, and customer issues to uncover edge‑case failures outside normal usage (e.g., rare inputs, boundary conditions, timing/concurrency issues, config drift, or unexpected feature interactions). Prioritize low‑frequency, high‑impact risks that standard testing misses. Recommend minimal, low‑cost fixes (validation, guardrails, fallback logic, or documentation). Deliver a concise executive summary with sections: Observed Edge Cases, Root Causes, User Impact, Recommended Lightweight Fixes, and Validation Steps.
This model enables users to identify specific objects within video footage and isolate them over extended periods. With just one line of code, it is possible to detect multiple similar objects simultaneously. The accompanying GIF demonstrates how SAM 3 efficiently highlights players wearing white on the field as they appear and disappear from view.
Additional examples are available at the following repository:
https://github.com/facebookresearch/sam3/blob/main/assets/player.gif
| Use case | Best‑practice prompt pattern |
| --- | --- |
| Concept segmentation with text prompts | Treat SAM 3 as a concept detector, not an interactive click tool. Use short, concrete noun‑phrase concept prompts instead of describing the scene or asking questions. Example prompt: “yellow school bus” or “shipping containers”. Avoid verbs or full sentences. |
| Video segmentation + object tracking | Specify the same concept prompt once, then apply it across the video sequence. Do not restate the prompt per frame. Let the model maintain identity continuity. Example: “person wearing a red jersey”. |
| Hard‑to‑name or visually subtle objects | Use exemplar‑based prompts (image region or box) when text alone is ambiguous. Optionally combine positive and negative exemplars to refine the concept. Avoid over‑constraining with long descriptions. |
Using the GIF above as a leading example, here is how SAM 3 turns raw sports footage into structured, reusable data: by identifying and tracking players through visual concepts like jersey color, sports leagues can build interactive experiences where automated player identification surfaces stats, fun facts, and more within a larger application. Here is a prompt that will let you start identifying specific players across video:
Act as a sports analytics operator analyzing football match footage. Segment and track all football players wearing blue jerseys across the video. Generate pixel‑accurate segmentation masks for each player and assign persistent instance IDs that remain stable during camera movement, zoom, and player occlusion. Exclude referees, opposing team jerseys, sidelines, and crowd. Output frame‑level masks and tracking metadata suitable for overlays, player statistics, and downstream analytics pipelines.
| Use case | Best‑practice prompt pattern |
| --- | --- |
| End‑to‑end agentic coding (multi‑file edits, run‑fix loops) | Treat the model as an autonomous coding agent, not a snippet generator. Explicitly require task decomposition and step‑by‑step execution, then a single consolidated result. |
| Long‑horizon tool‑using agents (shell, browser, Python) | Explicitly request stepwise planning and sequential tool use. M2.1’s interleaved thinking and improved instruction‑constraint handling are designed for complex, multi‑step analytical tasks that require evidence tracking and coherent synthesis, not conversational back‑and‑forth. |
| Long‑context reasoning & analysis (large documents / logs) | Declare the scope and desired output structure up front. MiniMax‑M2.1 performs best when the objective and final artifact are clear, allowing it to manage long context and maintain coherence. |
Because MiniMax‑M2.1 is designed to act as a long‑horizon analytical agent, it shines when you give it a clear end goal and let it work through large volumes of information—here’s a prompt a risk or compliance team could use in practice:
You are a financial risk analysis agent. Analyze the following transaction logs and compliance policy documents to identify potential regulatory violations and systemic risk patterns. Plan your approach before executing. Work through the data step by step, referencing evidence where relevant. Deliver a final report with the following sections: Key Risk Patterns Identified, Supporting Evidence, Potential Regulatory Impact, Recommended Mitigations. Your response should be a complete, executive-ready report, not a conversational draft.
You can deploy open‑source Hugging Face models directly in Microsoft Foundry by browsing the Hugging Face collection in the Foundry model catalog and deploying to managed endpoints in just a few clicks. You can also start from the Hugging Face Hub. First, select any supported model and then choose "Deploy on Microsoft Foundry", which brings you straight into Azure with secure, scalable inference already configured. Learn how to discover models and deploy them using Microsoft Foundry documentation.
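Once deployed, a managed endpoint is reachable over plain HTTPS. Here is a minimal TypeScript sketch that assumes the deployment exposes an OpenAI-compatible chat-completions route; the endpoint URL, header name, and response shape are placeholders to verify against your deployment’s details page, not a documented contract:

// Minimal sketch: call a deployed endpoint over HTTPS from Node 18+ (global fetch).
// The URL and key-based auth header below are placeholders; confirm the exact
// route and authentication scheme on your deployment's details page.
const endpoint = "https://<your-endpoint>.inference.ai.azure.com/chat/completions";

async function askModel(prompt: string): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_INFERENCE_KEY ?? "",
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      max_tokens: 512,
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  // Assumes an OpenAI-style response body with a choices array.
  return data.choices[0].message.content;
}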