Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Introducing the Codex app

1 Share
Introducing the Codex app for macOS—a command center for AI coding and software development with multiple agents, parallel workflows, and long-running tasks.
Read the whole story
alvinashcraft
28 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

T-Mobile layoffs: Telecom giant cuts 393 jobs across Washington state, including VP roles

(Photo by Mika Baumeister on Unsplash)

T-Mobile is laying off 393 workers in Washington as part of a new round of cuts, according to a filing with the state Employment Security Department released Monday morning.

More than 200 different job titles are impacted, according to the filing, including analysts, engineers and technicians, as well as directors and managers.

The cuts targeted nearly 210 senior- and director-level employees, plus seven employees with vice president or senior vice president titles. They include a senior VP of talent and four VP of legal affairs roles.

Affected employees worked at the company’s Bellevue headquarters; data centers in Bellevue and East Wenatchee; and at stores and other facilities in Bothell, Bellingham, Woodinville, Spokane Valley and elsewhere.

The layoffs were attributed to “changing business needs,” according to the WARN filing signed by Monica Frohock, senior director of the Magenta Service Center.

“These facilities are not being closed,” the notice stated. “The layoffs are not due to relocation or contracting out employer operations or employee positions, but it is possible that some work currently done by these employees may at some point be done by others.”

Affected employees were given 60 days’ notice, and the departures are expected to take effect April 2.

T-Mobile employed about 70,000 people as of Dec. 31, 2024. The company has nearly 8,000 workers in the Seattle region, according to LinkedIn.

The cuts come as the Seattle area is being hit by thousands of tech-related layoffs, including job losses at Amazon, Expedia, Meta and other companies.

T-Mobile, the largest U.S. telecom company by market capitalization, laid off 121 workers in August 2025. In November, former Chief Operating Officer Srini Gopalan replaced longtime leader Mike Sievert as CEO.

T-Mobile’s stock is down nearly 20% over the past 12 months. The company reported revenue of $18.2 billion in the third quarter, up 9% year-over-year, and added 1 million new postpaid phone customers.

Verizon, another telecom giant, laid off approximately 165 employees in Washington in November.


Amazon layoffs hit nearly 2,200 in Washington state, more than half in core product and engineering roles

Amazon’s headquarters campus in Seattle. (GeekWire Photo / Kurt Schlosser)

Amazon is laying off 2,198 employees across Washington as part of the company’s latest corporate workforce reduction, according to a new filing released Monday by the state Employment Security Department.

A detailed list included with the Washington state filing shows that software development roles account for the largest share of the layoffs, with engineering management, program management, and technical product roles also hit hard.

In total, more than half of the cuts impact Amazon’s core product and engineering organizations. The remaining positions span business intelligence, sales, marketing, infrastructure, QA, HR, design, and other support functions. Senior- and principal-level employees were also affected.

A majority of the cuts — more than 1,400 — impact workers in Seattle, with more than 600 in nearby Bellevue, where Amazon has been expanding its office footprint.

The cuts are part of Amazon’s company-wide layoffs announced last week that impact 16,000 corporate employees globally. Combined with a 14,000-worker layoff in October, it’s the largest corporate workforce reduction in the company’s history.

As part of the October cuts, Amazon laid off 2,303 employees in Washington state. Between the two layoffs, Amazon has laid off more than 4,500 corporate workers in Washington state in less than a year.

Software engineers were also the largest group of employees affected by the cuts in October. Corporate support and commercial functions were hit harder in that round, which included engineering roles but also targeted legal, tax, and ad sales positions that are largely absent from the new list released Monday. The October cuts also hit Amazon’s gaming division, while this latest round is focused more squarely on the core technology organization.

The company has made several additional, smaller workforce reductions in recent years as it seeks to streamline operations. In a memo to employees sent Wednesday, Amazon senior vice president of people experience and technology Beth Galetti said the company is “reducing layers, increasing ownership, and removing bureaucracy.”

The specific job titles in the latest Washington state filing align with that goal, particularly in the company’s technical teams. The list includes a significant number of “Manager III” and “Senior Manager” roles within software and product teams, suggesting Amazon is axing layers of oversight, not just reducing individual contributor headcount.

Amazon noted in the filing that employees who secure internal transfers before their separation dates will not ultimately be laid off. Separations are scheduled to begin April 28 and continue through late June, according to the filing.

Tech pullback in Seattle

An Amazon Prime delivery van outside the company’s Seattle headquarters. (GeekWire File Photo / Kurt Schlosser)

Amazon employs roughly 50,000 corporate workers in the Seattle region, which serves as its primary headquarters. The company also laid off 27,000 workers globally in 2022-2023.

The latest cuts come amid concerns about Seattle’s tech-heavy economy as other companies trim headcount.

  • GeekWire reported Monday that T-Mobile is laying off nearly 400 workers.
  • Expedia and Meta laid off hundreds of workers last month.
  • Microsoft laid off more than 3,200 employees in Washington state last year, part of broader cuts that impacted 15,000 people globally.

Many corporations are slashing headcount to address pandemic-fueled corporate “bloat” while juggling economic uncertainty and impact from AI tools.

The Seattle area lost 12,900 jobs last year across all sectors — the first time the region has experienced an annual decrease of jobs since 2009, according to The Puget Sound Regional Council.

Amazon implemented a five-day return-to-office policy for corporate employees last year — a move that drew pushback from some workers but a friendlier reception from small businesses surrounding Amazon’s office buildings.

Jon Scholes, president of the Downtown Seattle Association, said in a statement last week that a “workforce change of this scale has ripple effects on the community.”

The broader layoffs may also impact Seattle’s commercial real estate market, which continues to struggle with record-high vacancy rates.

Amazon is also laying off about 400 workers in Washington state as part of its decision to close all Amazon Go and Amazon Fresh stores nationwide. Those cuts are separate from the corporate layoffs.


How to maximize GitHub Copilot’s agentic capabilities


Modern engineering work rarely lives in a single file. Real systems evolve across years of incrementally layered decisions—some good, some accidental. A single feature request (“Add tagging to notes,” “Refactor the validation layer,” “Support a new consumer on our API”) often touches controllers, domain models, repositories, migrations, tests, documentation, and deployment strategy.

Copilot’s agentic capabilities don’t replace your judgment in these situations—they amplify it. When used well, Copilot becomes a partner in system design, refactoring, modernization, and multi-file coordination.

This guide focuses on architecture-aware, multi-step workflows used every day by staff engineers, but written to be accessible for earlier-career engineers who want to understand how senior engineers think—and how Copilot can accelerate their own growth.

It draws on four GitHub Skills exercises (linked below), and builds toward a complete, real-world scenario: extending a small modular Notes Service with a tagging subsystem, refactoring a validation layer, designing a safe migration, and modernizing tests.


Before you start

You’ll get the most out of this guide if you have:

  • GitHub Copilot with agent mode enabled
  • Some familiarity with service-layer architectures (Node, Python, Go; the language doesn’t matter)
  • A copy of a GitHub Skills exercise template in your handle or organization (use the green “Copy Exercise” button)
  • A willingness to let Copilot propose solutions—and the judgment to inspect and challenge them

If you’re earlier in your career, don’t worry. Each section explains why these patterns matter and how to practice them safely.


Using Copilot for system design and decomposition (not just scaffolding)

Senior engineers rarely begin by writing code. They begin by identifying boundaries: domain logic, data access, interfaces, and how modules should interact.

Copilot agent mode can help by revealing structural issues and proposing architectures.

Prompt:

Analyze this service and propose a modular decomposition with domain, infrastructure, and interface layers.

Identify anti-patterns, coupling issues, and potential failure points.

You’ll typically get back:

  • Proposed module boundaries
  • Cross-layer coupling concerns
  • Async/transaction pitfalls
  • Duplication or tight weaving of responsibilities
  • Testability and observability implications

This transforms Copilot from an autocomplete tool into a design reviewer.

You can push further by asking it to compare architectures:

Compare hexagonal architecture vs. layered architecture for this codebase.

Recommend one based on the constraints here. Include tradeoffs.

Want to try it yourself? Use these proposals as starting points.

Building a modular service using agentic workflows

Once boundaries are defined, Copilot can coordinate changes across modules.

Prompt:

Implement the domain, controller, and repository layers as distinct modules.

Use dependency inversion to reduce coupling.

Document assumptions and contracts for each module.

Copilot will typically generate:

  • Domain model interfaces
  • Repository abstractions
  • Controller logic calling domain services
  • A short Markdown summary describing each module

For earlier-career engineers, this provides exposure to real engineering patterns. For senior engineers, it provides leverage and reduces boilerplate overhead.
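To make the dependency-inversion idea concrete, here is a minimal sketch of what such generated layers might look like. All names here (`NoteRepository`, `NoteService`, `InMemoryNoteRepository`) are hypothetical, not the exercise's actual code:

```typescript
// Record shape shared across layers (illustrative).
interface NoteRecord {
  id: string;
  title: string;
  body: string;
}

// Repository abstraction: the contract is owned by the domain layer.
interface NoteRepository {
  save(note: NoteRecord): Promise<NoteRecord>;
  get(id: string): Promise<NoteRecord | undefined>;
}

// Domain service depends only on the abstraction (dependency inversion),
// so infrastructure can be swapped without touching domain logic.
class NoteService {
  private seq = 0;
  constructor(private repo: NoteRepository) {}

  async createNote(title: string, body: string): Promise<NoteRecord> {
    if (title.trim() === "") throw new Error("title is required");
    this.seq += 1;
    return this.repo.save({ id: `note-${this.seq}`, title, body });
  }
}

// Infrastructure layer: one concrete implementation of the contract.
class InMemoryNoteRepository implements NoteRepository {
  private store = new Map<string, NoteRecord>();
  async save(note: NoteRecord) {
    this.store.set(note.id, note);
    return note;
  }
  async get(id: string) {
    return this.store.get(id);
  }
}
```

Because the controller and domain layers only ever see `NoteRepository`, swapping the in-memory store for a SQL-backed one is a one-line change at composition time.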

Feature work with architectural awareness (example: tagging subsystem)

Adding a tagging subsystem is a deceptively simple request with meaningful architectural implications.

Even this single feature forces decisions across the system: 

  • Data modeling: embedded tags vs. normalized tables vs. many-to-many relationships
  • Search behavior: how tags affect indexing, filtering, and relevance
  • API contracts: whether tags are first-class resources or an implementation detail
  • Validation boundaries: where constraints and invariants are enforced
  • Migration and rollout: additive vs. breaking changes and rollback strategy
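Before handing this to Copilot, it helps to see what the first decision (data modeling) looks like as types. This is a sketch with hypothetical names, not the exercise's schema:

```typescript
// Option A: embedded tags. Reads are trivial, but renaming a tag
// means rewriting every note that carries it.
interface NoteWithEmbeddedTags {
  id: string;
  title: string;
  tags: string[];
}

// Option B: normalized many-to-many. Tags are first-class rows,
// linked to notes through a join record; a rename touches one row.
interface TagRow { id: string; label: string; }
interface NoteTagLink { noteId: string; tagId: string; }

// With option B, "all notes carrying a tag" is a scan over the links:
function notesForTag(tagId: string, links: NoteTagLink[]): string[] {
  return links.filter((l) => l.tagId === tagId).map((l) => l.noteId);
}
```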

Before touching code, ask Copilot to map the impact.

Prompt:

Propose the architectural changes required to add a tagging subsystem.

Identify migration needs, cross-cutting concerns, caching or indexing implications, and potential regressions.

Copilot may identify:

  • Tag–note relationships (one-to-many or many-to-many)
  • Migration strategy
  • Impact to search logic
  • Required test updates
  • Changes in validation logic
  • Implications on external API consumers

This is the staff-level lens that Copilot can help junior developers adopt.

Then implement it:

Implement the tagging domain model, schema changes, repository updates, and controller logic.

Update tests and documentation. Show each change as a diff.

Example output (simplified)

Migration example:

ALTER TABLE notes ADD COLUMN tags TEXT DEFAULT '[]';

Domain model example:

export interface Tag {
  id: string;
  label: string;
}

export interface Note {
  id: string;
  title: string;
  body: string;
  tags: Tag[];
}

Controller update (partial):

await noteService.addTag(noteId, { label: req.body.label });

This is where agent mode shines: coordinating multiple files with consistent intent.
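As a sketch of what might sit behind that `noteService.addTag` call, here is one plausible domain-level implementation. The `NoteStore` interface and all naming are assumptions for illustration, not the article's generated code:

```typescript
interface Tag { id: string; label: string; }
interface Note { id: string; title: string; body: string; tags: Tag[]; }

// Minimal persistence stand-in (hypothetical).
interface NoteStore {
  get(id: string): Promise<Note | undefined>;
  put(note: Note): Promise<void>;
}

// Domain-level addTag: validate the label, enforce the "no duplicate
// labels on one note" invariant, then persist through the store.
async function addTag(
  store: NoteStore,
  noteId: string,
  input: { label: string }
): Promise<Note> {
  const label = input.label.trim();
  if (label === "") throw new Error("tag label is required");
  const note = await store.get(noteId);
  if (note === undefined) throw new Error(`note ${noteId} not found`);
  if (note.tags.some((t) => t.label === label)) return note; // idempotent
  note.tags.push({ id: `tag-${note.tags.length + 1}`, label });
  await store.put(note);
  return note;
}
```

Keeping the invariant (no duplicate labels) in the domain layer, rather than the controller, is exactly the kind of boundary decision worth checking in Copilot's diffs.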

Schema migrations and safe rollout strategies

At senior levels, the hardest part isn’t writing SQL. It’s designing a change that is:

  • Backward compatible
  • Reversible
  • Safe under load
  • Transparent to dependent systems

Ask Copilot to reason about this:

Prompt:

Generate an additive, backward-compatible schema migration to support the tagging subsystem.

Describe the rollback plan, compatibility window, and expected impact to existing clients.

This forces Copilot to consider:

  • Non-breaking additive fields
  • Optional fields vs. required fields
  • Whether a dual-read or dual-write strategy is needed
  • Safe rollback procedures
  • API versioning implications

If you’re earlier in your career, this offers lessons on how safe migrations are designed. And if you’re more experienced, this gives you a repeatable workflow for multi-step schema evolution.
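One way to keep an additive `tags` column safe during the compatibility window is a tolerant read path. The sketch below assumes tags are serialized as a JSON array in a text column; the row shape and function name are illustrative:

```typescript
// Rows written before the migration may leave the column null or absent.
interface NoteRow {
  id: string;
  title: string;
  tags?: string | null; // serialized JSON array, added by the migration
}

// Normalize every case to a concrete array so callers never see the
// compatibility window.
function readTags(row: NoteRow): string[] {
  if (row.tags == null || row.tags === "") return []; // old rows: no tags yet
  try {
    const parsed = JSON.parse(row.tags);
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return []; // malformed data degrades to "no tags" rather than failing reads
  }
}
```

Because old and new rows both read cleanly, the rollback plan is simply to stop writing the column; no data backfill is required before the cutover.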

Advanced refactoring with agentic workflows

Let’s perform a real cross-module refactor: extracting validation out of controllers into a domain service.

Prompt:

Create a step-by-step refactor plan to extract validation logic into a domain service.

Identify affected modules and required test updates.

Copilot may output something like:

  1. Introduce domain validationService
  2. Move validation logic from controller to service
  3. Update controllers to use new service
  4. Update repository logic where validation assumptions leak
  5. Update domain tests
  6. Update integration tests

Execute in incremental steps

Prompt:

Execute steps 1–3 only. Stop before controller rewrites.

Provide detailed diffs and call out risky areas.

This is a low-blast-radius refactor, modeled directly in the IDE.
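Steps 1–3 of that plan might land as something like the following sketch, with hypothetical names and validation rules:

```typescript
// Step 1: the validation rules move into a domain service.
interface NoteInput { title: string; body: string; }

const noteValidationService = {
  validate(input: NoteInput): string[] {
    const errors: string[] = [];
    if (input.title.trim() === "") errors.push("title is required");
    if (input.title.length > 200) errors.push("title must be 200 characters or fewer");
    return errors;
  },
};

// Steps 2-3: the controller no longer owns the rules; it only maps
// validation results onto a response shape.
function createNoteController(input: NoteInput): { status: number; errors?: string[] } {
  const errors = noteValidationService.validate(input);
  if (errors.length > 0) return { status: 400, errors };
  return { status: 201 };
}
```

The domain tests (step 5 of the plan) can now exercise `noteValidationService` directly, without spinning up any HTTP machinery.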

Modernizing test strategy

Instead of asking Copilot “write tests,” ask it to assess the entire suite.

Prompt:

Analyze the current test suite and identify systemic gaps.

Recommend a modernization plan including contract, integration, and domain-layer tests.

Then implement contract tests:

describe("NotesRepository contract", () => {
  test("create + fetch returns a fully hydrated note object", async () => {
    const note = await notesRepo.create({ title: "Test", body: "…" });
    const fetched = await notesRepo.get(note.id);

    expect(fetched).toMatchObject({ title: "Test" });
    expect(fetched.id).toBeDefined();
  });
});

This elevates testing into an architectural concern.

A complete end-to-end workflow

Bringing it all together, here’s a real sequence you might run with Copilot:

  1. Ask Copilot to analyze the existing architecture: identify hazards, modularization opportunities
  2. Define module boundaries: domain, repository, controller layers
  3. Add tagging subsystem: architectural assessment to implementation to tests to doc updates
  4. Create a backward-compatible migration: additive schema to rollback plan
  5. Perform a targeted refactor: validation layer extraction
  6. Modernize tests: contract + integration + domain tests

This workflow is architecturally realistic—and a model for how Copilot becomes a system-level collaborator.

What agent mode is not for

It’s important to clarify that agent mode is not ideal for:

  • Altering domain invariants without human review
  • Redesigning cross-service ownership boundaries
  • Replacing logic driven by institutional knowledge
  • Large sweeping rewrites across hundreds of files
  • Debugging deep runtime issues

Copilot should support your decision-making, not replace it.

Where to go next

Here’s where GitHub Skills comes in—not as “beginner content,” but as a set of guided, self-contained labs that reinforce the patterns above. 

Even senior engineers will benefit: These exercises are structured so you can reliably recreate complex workflows and test Copilot’s behavior in controlled environments.

Explore GitHub Skills >

The post How to maximize GitHub Copilot’s agentic capabilities appeared first on The GitHub Blog.


The PowerShell Podcast: Owning Your Career and Your Time with Don Jones


Recently retired PowerShell icon Don Jones joins The PowerShell Podcast for a wide-ranging conversation on career ownership, community leadership, and building a life that aligns with what you actually value. Don reflects on the difference between your job and your career, why investing in yourself pays off, and how asking better questions can change the way you influence decisions at work. The episode also dives into Don’s journey as a fiction author, his role in shaping the PowerShell community and Summit culture, and why real success comes from clarity, kindness, and helping others win.
 
Key Takeaways:
• Your employer owns your job, but you own your career—define your destination and build the skills to get there.
• Strong careers are built on outcomes, not tools—focus on saving time, reducing errors, and delivering measurable business value.
• Community scales when you empower others—create space for people to contribute, own wins, and multiply the impact beyond yourself.
 
Guest Bio:
Don Jones is a foundational figure in the PowerShell community, known for his decades of teaching, writing, and advocacy for automation and professional growth. A former Microsoft MVP, Don co-authored the widely influential Learn PowerShell in a Month of Lunches series and helped shape community culture through conferences, mentorship, and leadership. Now retired from full-time work, Don continues writing and publishing fiction, bringing the same clarity and craft to storytelling that made his technical teaching so impactful.
 
Resource Links:
• Don Jones Website and Books – https://donjones.com

• Andrew’s links – https://andrewpla.tech/links

• PowerShell + DevOps Global Summit – https://powershellsummit.org
• Tech Impact (nonprofit mentioned) – https://techimpact.org
• PowerShell.org – https://powershell.org
• PDQ Discord – https://discord.gg/PDQ
• PowerShell Wednesdays – https://www.youtube.com/results?search_query=PowerShell+Wednesdays

The PowerShell Podcast on YouTube: https://youtu.be/xKh8rqCqMQg

 


What is trending in Hugging Face on Microsoft Foundry? February 2, 2026


Open‑source AI is moving fast, with important breakthroughs in reasoning, agentic systems, multimodality, and efficiency emerging every day. Hugging Face has been a leading platform where researchers, startups, and developers share and discover new models. Microsoft Foundry brings these trending Hugging Face models into a production‑ready experience, where developers can explore, evaluate, and deploy them within their Azure environment. Our weekly Model Mondays series highlights Hugging Face models available in Foundry, focusing on what matters most to developers: why a model is interesting, where it fits, and how to put it to work quickly.

This week’s Model Mondays edition highlights three Hugging Face models, including a powerful Mixture-of-Experts model from Z. AI designed for lightweight deployment, Meta’s unified foundation model for image and video segmentation, and MiniMax’s latest open-source agentic model optimized for complex workflows.

Models of the week

Z.AI’s GLM-4.7-flash

Model Basics

    • Model name: zai-org/GLM-4.7-Flash
    • Parameters / size: 30B total / 3B active
    • Default settings: 131,072 max new tokens
    • Primary task: Agentic, Reasoning and Coding

Why this model matters

    • Why it’s interesting:
      1. It utilizes a Mixture-of-Experts (MoE) architecture (30B total parameters and 3B active parameters) to offer a new option for lightweight deployment.
      2. It demonstrates strong performance on logic and reasoning benchmarks, outperforming similarly sized models like gpt-oss-20b on the AIME 25 and GPQA benchmarks.
      3. It supports advanced inference features like "Preserved Thinking" mode for multi-turn agentic tasks. 
    • Best‑fit use cases: Lightweight local deployment, multi-turn agentic tasks, and logical reasoning applications.
    • What’s notable: From the Foundry catalog, users can deploy on an A100 instance, or run unsloth/GLM-4.7-Flash-GGUF on a CPU.
Figure 1. GLM-4.7-flash results on SWE-bench Verified and τ²-Bench, GLM-4.7-Flash achieved open-source SOTA scores among models of comparable size. Additionally, compared to similarly sized models, GLM-4.7-Flash demonstrates superior frontend and backend development capabilities. Click to see more: https://docs.z.ai

Try it

Use case → best‑practice prompt pattern:

    • Agentic coding (multi‑step repo work, debugging, refactoring): Treat the model as an autonomous coding agent, not a snippet generator. Explicitly require task decomposition and step‑by‑step execution, then a single consolidated result.
    • Long‑context agent workflows (local or low‑cost autonomous agents): Call out long‑horizon consistency and context preservation. Instruct the model to retain earlier assumptions and decisions across turns.

Now that you know GLM‑4.7‑Flash works best when you give it a clear goal and let it reason through a bounded task, here’s an example prompt that a product or engineering team might use to identify risks and propose mitigations:

You are a software reliability analyst for a mid‑scale SaaS platform. Review recent incident reports, production logs, and customer issues to uncover edge‑case failures outside normal usage (e.g., rare inputs, boundary conditions, timing/concurrency issues, config drift, or unexpected feature interactions). Prioritize low‑frequency, high‑impact risks that standard testing misses. Recommend minimal, low‑cost fixes (validation, guardrails, fallback logic, or documentation). Deliver a concise executive summary with sections: Observed Edge Cases, Root Causes, User Impact, Recommended Lightweight Fixes, and Validation Steps.

Meta's Segment Anything 3 (SAM3) 

Model Basics

    • Model name: facebook/sam3
    • Parameters / size: 0.9B
    • Primary task: Mask Generation, Promptable Concept Segmentation (PCS)

Why this model matters

    • Why it’s interesting:
      1. It handles a vastly larger set of open-vocabulary prompts than SAM 2, and unifies image and video segmentation capabilities.
      2. It includes a "SAM 3 Tracker" mode that acts as a drop-in replacement for SAM 2 workflows with improved performance.
    • Best‑fit use cases:  Open-vocabulary object detection, video object tracking, and automatic mask generation
    • What’s notable: Introduces Promptable Concept Segmentation (PCS), allowing users to find all matching objects (e.g., "dial") via text prompt rather than just single instances.

Try it

This model enables users to identify specific objects within video footage and isolate them over extended periods. With just one line of code, it is possible to detect multiple similar objects simultaneously. The accompanying GIF demonstrates how SAM3 efficiently highlights players wearing white on the field as they appear and disappear from view.

Figure 2. GIF showing soccer players on the field. Players in white are highlighted showcasing SAM3's ability to segment different objects according to a prompt across multiple frames.

Additional examples are available at the following repository:

https://github.com/facebookresearch/sam3/blob/main/assets/player.gif 

Use case → best‑practice prompt pattern:

    • Promptable concept segmentation (text prompts): Treat SAM 3 as a concept detector, not an interactive click tool. Use short, concrete noun‑phrase concept prompts instead of describing the scene or asking questions. Example prompt: “yellow school bus” or “shipping containers”. Avoid verbs or full sentences.
    • Video segmentation + object tracking: Specify the same concept prompt once, then apply it across the video sequence. Do not restate the prompt per frame. Let the model maintain identity continuity. Example: “person wearing a red jersey”.
    • Hard‑to‑name or visually subtle objects: Use exemplar‑based prompts (image region or box) when text alone is ambiguous. Optionally combine positive and negative exemplars to refine the concept. Avoid over‑constraining with long descriptions.

Using the GIF above as a leading example, the following prompt shows how SAM 3 turns raw sports footage into structured, reusable data. By identifying and tracking players based on visual concepts like jersey color, sports leagues can turn tracked data into interactive experiences in which automated player identification surfaces stats, fun facts, and more as part of a larger application. Here is a prompt to start identifying specific players across video:

Act as a sports analytics operator analyzing football match footage. Segment and track all football players wearing blue jerseys across the video. Generate pixel‑accurate segmentation masks for each player and assign persistent instance IDs that remain stable during camera movement, zoom, and player occlusion. Exclude referees, opposing team jerseys, sidelines, and crowd. Output frame‑level masks and tracking metadata suitable for overlays, player statistics, and downstream analytics pipelines.

MiniMax AI's MiniMax-M2.1

Model Basics

    • Model name: MiniMaxAI/MiniMax-M2.1
    • Parameters / size: 229B total / 10B active
    • Default settings: 200,000 max new tokens
    • Primary task: Agentic and Coding

Why this model matters

    • Why it’s interesting:
      1. It is optimized for robustness in coding, tool use, and long-horizon planning, outperforming Claude Sonnet 4.5 in multilingual scenarios.
      2. It excels in full-stack application development, capable of architecting apps “from zero to one”.
      3. While previous coding models focused on Python optimization, M2.1 brings enhanced capabilities in Rust, Java, Golang, C++, Kotlin, Objective-C, TypeScript, JavaScript, and other languages.
      4. It delivers exceptional stability across various coding agent frameworks.
    • Best‑fit use cases: Multi-file coding, multilingual development, and end‑to‑end application workflows.
    • What’s notable: The release of open-source weights for M2.1 delivers a massive leap over M2 on software engineering leaderboards.
Figure 3. MiniMax M2.1 against leading models across SWE‑bench (verified and multilingual), Terminal‑bench 2.0, and VIBE scores for web, simulation, Android, iOS, and backend tasks. The results highlight strong performance in multi‑file coding, multilingual development, and end‑to‑end application workflows. View more here: https://www.minimax.io/

Try it

Use case → best‑practice prompt pattern:

    • End‑to‑end agentic coding (multi‑file edits, run‑fix loops): Treat the model as an autonomous coding agent, not a snippet generator. Explicitly require task decomposition and step‑by‑step execution, then a single consolidated result.
    • Long‑horizon tool‑using agents (shell, browser, Python): Explicitly request stepwise planning and sequential tool use. M2.1’s interleaved thinking and improved instruction‑constraint handling are designed for complex, multi‑step analytical tasks that require evidence tracking and coherent synthesis, not conversational back‑and‑forth.
    • Long‑context reasoning & analysis (large documents / logs): Declare the scope and desired output structure up front. MiniMax‑M2.1 performs best when the objective and final artifact are clear, allowing it to manage long context and maintain coherence.

Because MiniMax‑M2.1 is designed to act as a long‑horizon analytical agent, it shines when you give it a clear end goal and let it work through large volumes of information—here’s a prompt a risk or compliance team could use in practice:

You are a financial risk analysis agent. Analyze the following transaction logs and compliance policy documents to identify potential regulatory violations and systemic risk patterns. Plan your approach before executing. Work through the data step by step, referencing evidence where relevant. Deliver a final report with the following sections: Key Risk Patterns Identified, Supporting Evidence, Potential Regulatory Impact, Recommended Mitigations. Your response should be a complete, executive-ready report, not a conversational draft.

Getting started

You can deploy open‑source Hugging Face models directly in Microsoft Foundry by browsing the Hugging Face collection in the Foundry model catalog and deploying to managed endpoints in just a few clicks. You can also start from the Hugging Face Hub: select any supported model and choose “Deploy on Microsoft Foundry”, which brings you straight into Azure with secure, scalable inference already configured. Learn how to discover and deploy models in the Microsoft Foundry documentation.
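Once a model is deployed to a managed endpoint, calling it is an HTTP request. The sketch below only builds the request body and does not send it; the endpoint URL is a placeholder for the one your deployment provides, and the body follows the common chat-completions shape, so check your deployment's reference for the exact schema:

```typescript
// Placeholder endpoint; a real Foundry deployment supplies the URL and key.
const endpoint = "https://<your-resource>.services.ai.azure.com/models/chat/completions";

// Chat-completions style request body (a common convention, not a
// guaranteed schema for every deployed model).
const requestBody = {
  model: "MiniMaxAI/MiniMax-M2.1",
  messages: [
    { role: "system", content: "You are a financial risk analysis agent." },
    { role: "user", content: "Summarize the key risk patterns in these transaction logs." },
  ],
  max_tokens: 1024,
};

// Sending it would be a POST with your deployment's key, e.g.:
// await fetch(endpoint, {
//   method: "POST",
//   headers: { "api-key": "<key>", "Content-Type": "application/json" },
//   body: JSON.stringify(requestBody),
// });
console.log(JSON.stringify(requestBody, null, 2));
```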

 

 
