Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Filing: Amazon cuts more than 2,300 jobs in Washington state as part of broader layoffs

GeekWire File Photo

Amazon will lay off 2,303 corporate employees in Washington state, primarily in its Seattle and Bellevue offices, according to a filing with the state Employment Security Department that provides the first geographic breakdown of the company’s 14,000 global job cuts.

A detailed list included with the state filing shows a wide array of impacted roles, including software engineers, program managers, product managers, and designers, as well as a significant number of recruiters and human resources staff. 

Senior and principal-level roles are among those being cut, aligning with a company-wide push to use the cutbacks to help reduce bureaucracy and operate more efficiently.

Amazon announced the cuts Tuesday morning, part of a larger push by CEO Andy Jassy to streamline the company. Jassy had previously told Amazon employees in June that efficiency gains from AI would likely lead to a smaller corporate workforce over time.

In a memo from HR chief Beth Galetti, the company signaled that further cutbacks will continue into 2026. Reuters reported Monday that the layoffs could ultimately total as many as 30,000 people, a figure that remains possible as the cuts continue into next year.


Copilot Retention, Auditing and eDiscovery: A deep dive


Last Friday’s session was a powerhouse of insights for anyone navigating compliance in the age of AI. We explored how retention policies, audit logs, and eDiscovery apply to Copilot interactions, ensuring organizations can meet regulatory requirements while leveraging AI responsibly.

If you missed the session, we have you covered. You can view the recording here: https://aka.ms/Compliance-Meets-Ai-Session-Four

Key highlights from the session:

  • Retention Policies for Copilot: Copilot prompts and responses are stored in hidden folders within user mailboxes. We discussed best practices for setting retention periods, deletion policies, and how precedence works when multiple policies apply.
  • Unified Audit Logs: Learn how to track Copilot activity at scale, filter by user, and export data for investigations. We also covered why DSPM for AI offers a cleaner view for compliance teams. (A scripted audit-log example follows this list.)
  • eDiscovery for Copilot: From quick searches to full cases, we demonstrated how to place holds, create review sets, and export Copilot interactions for legal or compliance needs. Plus, tips for using KQL queries for granular searches.
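
For teams that want to script the audit piece described above, here is a minimal Python sketch that pulls recent Copilot-related records from the Office 365 Management Activity API. Treat it as a sketch under stated assumptions, not a reference implementation: it assumes the Audit.General content subscription has already been started for the tenant, that a valid access token for https://manage.office.com has been acquired (for example via MSAL), and that Copilot events surface with the Operation name "CopilotInteraction". Verify all three against your tenant and the official documentation; the unified audit log search in the Purview portal (or the Search-UnifiedAuditLog cmdlet) remains the simplest path for ad-hoc investigations.

# Sketch only: list Audit.General content blobs for the last 24 hours and keep
# records whose Operation looks like a Copilot interaction. The tenant ID, the
# token, and the "CopilotInteraction" operation name are placeholders/assumptions.
from datetime import datetime, timedelta, timezone

import requests

TENANT_ID = "<your-tenant-guid>"                          # placeholder
TOKEN = "<access token for https://manage.office.com>"    # e.g. acquired via MSAL
BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

end = datetime.now(timezone.utc)
start = end - timedelta(hours=24)

# 1. List the available audit content blobs for the time window.
listing = requests.get(
    f"{BASE}/subscriptions/content",
    headers=HEADERS,
    params={
        "contentType": "Audit.General",
        "startTime": start.strftime("%Y-%m-%dT%H:%M:%S"),
        "endTime": end.strftime("%Y-%m-%dT%H:%M:%S"),
    },
)
listing.raise_for_status()

# 2. Download each blob and keep the Copilot interaction records.
copilot_events = []
for blob in listing.json():
    records = requests.get(blob["contentUri"], headers=HEADERS).json()
    copilot_events.extend(
        r for r in records if r.get("Operation") == "CopilotInteraction"
    )

for event in copilot_events:
    print(event.get("CreationTime"), event.get("UserId"), event.get("Operation"))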

This session wasn’t just theory—it included live demos showing how to configure retention policies, run audit queries, and manage Copilot data in real-world scenarios. Attendees also got links to previous recordings and Microsoft Onboarding Accelerators for deeper learning.

What’s Next?

Don’t miss our upcoming session:
Copilot and Communication Compliance: A Deep Dive
📅 Next Friday
This session will cover monitoring inappropriate Copilot use, detecting sensitive information types, and setting up communication compliance policies.

👉 Register at https://aka.ms/ComplianceMeetsAi to secure your spot and continue building your compliance expertise.


Using Microsoft Sentinel MCP Server with GitHub Copilot for AI-Powered Threat Hunting


Introduction

This post walks through how to get started with the Microsoft Sentinel MCP Server and showcases a hands-on demo integrating with Visual Studio Code and GitHub Copilot.

Using the MCP server, you can run natural language queries against Microsoft Sentinel’s security data lake, enabling faster investigations and simplified threat hunting using tools you already know.

This blog includes a real-world prompt you can use in your own environment and highlights the power of AI-assisted security workflows.

 

What is the Microsoft Sentinel MCP Server?

The Model Context Protocol (MCP) allows AI models to access structured security data in a standard, context-aware way. The Sentinel MCP server connects to your Microsoft Sentinel data lake and enables tools like GitHub Copilot or Security Copilot to:

  • Search security data using natural language
  • Summarize findings and explain risks
  • Build intelligent agents for security operations (see the client sketch after this list)
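
The documented way to consume these endpoints is through an MCP-capable client such as VS Code with GitHub Copilot, which the rest of this post walks through. Purely to illustrate what the protocol looks like underneath, here is a minimal Python sketch using the open-source MCP SDK (the mcp package). It is a sketch under assumptions: whether the preview endpoints accept this kind of direct programmatic access, and the exact authentication header they expect, are not confirmed here.

# Illustrative sketch: connect to the Sentinel data-exploration MCP endpoint
# with the open-source `mcp` Python SDK and list the tools it exposes.
# The bearer-token auth shown here is an assumption; VS Code + GitHub Copilot
# is the documented client for the preview.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

ENDPOINT = "https://sentinel.microsoft.com/mcp/data-exploration"
TOKEN = "<Entra ID access token for an account with Security Reader>"  # placeholder


async def main() -> None:
    async with streamablehttp_client(
        ENDPOINT, headers={"Authorization": f"Bearer {TOKEN}"}
    ) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())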

 

Prerequisites

Make sure you have the following in place:

  • Onboarded to Microsoft Sentinel Data Lake
  • Assigned the Security Reader role
  • Installed:
    • Visual Studio Code
    • GitHub Copilot extension
    • (Optional) Security Copilot plugin if building agents

 

Setting Up MCP Server in VS Code

Step 1: Add the MCP Server

  1. In VS Code, press Ctrl + Shift + P
  2. Search for: MCP: Add Server
  3. Choose HTTP or Server-Sent Events
  4. Enter one of the following MCP endpoints:

  • Data Exploration: https://sentinel.microsoft.com/mcp/data-exploration
  • Agent Creation: https://sentinel.microsoft.com/mcp/security-copilot-agent-creation

  5. Give the server a friendly name (e.g., Sentinel MCP Server)
  6. Choose whether to apply it to all workspaces or just the current one
  7. When prompted, select Allow to authenticate with an account that has Security Reader access

 

Verify the Connection

  1. Open Chat: View > Chat or Ctrl + Alt + I
  2. Switch to Agent Mode
  3. Click the Configure Tools icon to ensure MCP tools are active

 

Using GitHub Copilot + Sentinel MCP

Once connected, you can use natural language prompts to pull insights from your Sentinel data lake without writing any KQL.

 

Demo Prompt:

🔍 “Find the top three users that are at risk and explain why they are at risk.”

This prompt is designed to:

  • Identify the highest-risk users in your environment
  • Explain the reasoning behind each user's risk status
  • Help prioritize investigation and response efforts

You can enter this prompt in either:

  • VS Code Chat window (Agent Mode)
  • Copilot inline prompt area

Expected Behavior

The MCP server will:

  • Query multiple Microsoft Sentinel sources (Identity Protection, Defender for Identity, Sign-in logs)
  • Correlate risk events (e.g., risky sign-ins, alerts, anomalies)
  • Return a structured response with top users and risk explanation

 

Sample Output from My Tenant

Results Found:

User 1: 233 risk score - 53 failed attempts from suspicious IPs
User 2: 100% failure rate indicating service account compromise
User 3: Admin account under targeted brute force attack

 

 

This demo shows how the integration of Microsoft Sentinel MCP Server with GitHub Copilot and VS Code transforms complex security investigations into simple, conversational workflows. By leveraging natural language and AI-driven context, we can surface high-risk users, understand the underlying threats, and take action — all within a familiar development environment, and without writing a single line of KQL.

 

More details here:

What is Microsoft Sentinel’s support for MCP? (preview) - Microsoft Security | Microsoft Learn

Get started with Microsoft Sentinel MCP server - Microsoft Security | Microsoft Learn

Data exploration tool collection in Microsoft Sentinel MCP server - Microsoft Security | Microsoft Learn


Azure PostgreSQL Lesson Learned #2: Fixing Read Only Mode Storage Threshold Explained


Co-authored with angesalsaa​ 

The issue occurred when the server’s storage usage reached approximately 95% of the allocated capacity. Automatic storage scaling was disabled.

 

Symptoms included:

  • Server switching to read-only mode
  • Application errors indicating write failures
  • No prior alerts or warnings received by the customer

Example error:

ERROR: cannot execute %s in a read-only transaction

Root Cause

The root cause was the server hitting the configured storage usage threshold (95%), which triggered an automatic transition to read-only mode to prevent data corruption or loss.

Storage options - Azure Database for PostgreSQL | Microsoft Learn

If your Storage Usage is below 95% but you're still seeing the same error, please refer to this article for more information >


Contributing factors:

  • Automatic storage scaling was disabled
  • Lack of proactive monitoring on storage usage
  • High data ingestion rate during peak hours

Specific conditions:

  • Customer had a custom workload with large batch inserts
  • No alerts configured for storage usage thresholds

Mitigation

To resolve the issue:

  • Increased the allocated storage manually via Azure Portal
  • No restart is needed in most cases because storage scaling is an online operation: growing the disk from any size between 32 GiB and 4 TiB to any other size in that range, or from any size between 8 TiB and 32 TiB to any other size in that range, is performed without server downtime.
  • However, growing the disk from any size of 4096 GiB or less to any size above 4096 GiB requires a server restart, and you're required to confirm that you understand the consequences before performing the operation. Scale storage size - Azure Database for PostgreSQL | Microsoft Learn
  • Verified server returned to read-write mode

Steps:

  • Navigate to Azure Portal > PostgreSQL Flexible Server > Compute & Storage
  • Increase storage size (e.g., from 100 GB to 150 GB)

Post-resolution:

  • Server resumed normal operations
  • Write operations were successful

Prevention & Best Practices

Why This Matters

Failing to monitor storage and configure scaling can lead to:

  • Application downtime
  • Read-only errors impacting business-critical transactions

By following the best practices summarized in the key takeaways below, customers can ensure seamless operations and avoid unexpected read-only transitions.

Key Takeaways

  1. Symptom:
    • Server switched to read-only mode, causing write failures (ERROR: cannot execute INSERT in a read-only transaction).
  2. Root Cause:
    • Storage usage hit 95% threshold, triggering read-only mode to prevent corruption.
  3. Contributing Factors:
    • Automatic storage scaling disabled.
    • No alerts for storage thresholds.
    • High ingestion during peak hours with large batch inserts.
  4. Mitigation:
    • Increased storage manually via Azure Portal (online operation unless crossing 4 TiB → restart required).
    • Server returned to read-write mode.
  5. Prevention & Best Practices:
    • Enable automatic storage scaling.
    • Configure alerts for storage usage (e.g., 80%, 90%).
    • Monitor storage metrics regularly using Azure Monitor or dashboards (a lightweight in-database check is sketched below).
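
As a lightweight complement to Azure Monitor alerts, the check below can be scheduled to watch the two signals that mattered in this incident: whether the server is currently defaulting to read-only transactions and how large the database has grown. This is a minimal sketch with placeholder connection details; reading default_transaction_read_only as the read-only indicator is an assumption about how the state surfaces, and the storage percent metric in Azure Monitor remains the authoritative signal for alerting.

# Minimal monitoring sketch (placeholders throughout): report the read-only
# flag and the current database size for an Azure Database for PostgreSQL
# flexible server.
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="postgres",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # Does the server default new transactions to read-only?
    cur.execute("SHOW default_transaction_read_only;")
    read_only = cur.fetchone()[0]

    # How much space does the current database occupy?
    cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()));")
    db_size = cur.fetchone()[0]

conn.close()
print(f"default_transaction_read_only = {read_only}, database size = {db_size}")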

Level up your Python + AI skills with our complete series


We've just wrapped up our live series on Python + AI, a comprehensive nine-part journey diving deep into how to use generative AI models from Python.

The series introduced multiple types of models, including LLMs, embedding models, and vision models. We dug into popular techniques like RAG, tool calling, and structured outputs. We assessed AI quality and safety using automated evaluations and red-teaming. Finally, we developed AI agents using popular Python agent frameworks and explored the new Model Context Protocol (MCP).

To help you apply what you've learned, all of our code examples work with GitHub Models, a service that provides free models to every GitHub account holder for experimentation and education.

Even if you missed the live series, you can still access all the material using the links below! If you're an instructor, feel free to use the slides and code examples in your own classes. If you're a Spanish speaker, check out the Spanish version of the series.

Python + AI: Large Language Models

📺 Watch recording

In this session, we explore Large Language Models (LLMs), the models that power ChatGPT and GitHub Copilot. We use Python to interact with LLMs using popular packages like the OpenAI SDK and LangChain. We experiment with prompt engineering and few-shot examples to improve outputs. We also demonstrate how to build a full-stack app powered by LLMs and explain the importance of concurrency and streaming for user-facing AI apps.
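
As a taste of what the session covers, here is a minimal sketch of a chat completion with the OpenAI Python SDK pointed at GitHub Models. The endpoint URL and model identifier shown are assumptions; check the GitHub Models documentation for the current values, and make sure a GITHUB_TOKEN environment variable is set.

# Minimal chat-completion sketch with a few-shot example in the message list.
# Endpoint and model name are assumptions for GitHub Models.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        # One few-shot example nudges the model toward the desired output style.
        {"role": "user", "content": "Summarize: The quick brown fox jumps over the lazy dog."},
        {"role": "assistant", "content": "A fox jumps over a dog."},
        {"role": "user", "content": "Summarize: Python is a popular language for AI development."},
    ],
)

print(response.choices[0].message.content)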

Python + AI: Vector embeddings

📺 Watch recording

In our second session, we dive into a different type of model: the vector embedding model. A vector embedding is a way to encode text or images as an array of floating-point numbers. Vector embeddings enable similarity search across many types of content. In this session, we explore different vector embedding models, such as the OpenAI text-embedding-3 series, through both visualizations and Python code. We compare distance metrics, use quantization to reduce vector size, and experiment with multimodal embedding models.
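
The sketch below shows the core idea in a few lines: embed some sentences and compare them with cosine similarity. The endpoint and embedding model name are assumptions; any OpenAI-compatible embedding model behaves the same way.

# Embed three sentences and compare them pairwise with cosine similarity.
import os

import numpy as np
from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

sentences = [
    "How do I reset my password?",
    "Password reset instructions",
    "Best hiking trails near Seattle",
]
result = client.embeddings.create(
    model="openai/text-embedding-3-small",  # assumed model identifier
    input=sentences,
)
vectors = [np.array(item.embedding) for item in result.data]


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means the vectors point the same way; values near 0 mean unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


print(cosine_similarity(vectors[0], vectors[1]))  # related sentences: should score high
print(cosine_similarity(vectors[0], vectors[2]))  # unrelated sentence: should score lower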

Python + AI: Retrieval Augmented Generation

📺 Watch recording

In our third session, we explore one of the most popular techniques used with LLMs: Retrieval Augmented Generation. RAG is an approach that provides context to the LLM, enabling it to deliver well-grounded answers for a particular domain. The RAG approach works with many types of data sources, including CSVs, webpages, documents, and databases. In this session, we walk through RAG flows in Python, starting with a simple flow and culminating in a full-stack RAG application based on Azure AI Search.
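
To make the flow concrete, here is a deliberately tiny RAG sketch over an in-memory list of documents: embed the documents, retrieve the closest ones for a question, and pass them to the LLM as context. A real application would use a proper index such as Azure AI Search; the endpoint and model names below are assumptions.

# Tiny RAG flow: embed, retrieve by cosine similarity, then answer with context.
import os

import numpy as np
from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

documents = [
    "Our store is open Monday to Friday, 9am to 6pm.",
    "Returns are accepted within 30 days with a receipt.",
    "We ship to the US and Canada only.",
]

# 1. Embed the documents once (a real app would persist these vectors).
doc_vectors = [
    np.array(d.embedding)
    for d in client.embeddings.create(
        model="openai/text-embedding-3-small", input=documents
    ).data
]

# 2. Embed the question and retrieve the two most similar documents.
question = "Can I return an item after three weeks?"
q_vec = np.array(
    client.embeddings.create(
        model="openai/text-embedding-3-small", input=[question]
    ).data[0].embedding
)
scores = [
    float(np.dot(q_vec, v) / (np.linalg.norm(q_vec) * np.linalg.norm(v)))
    for v in doc_vectors
]
top_docs = [documents[i] for i in np.argsort(scores)[-2:][::-1]]

# 3. Ask the model to answer using only the retrieved sources.
sources = "\n".join(top_docs)
answer = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided sources."},
        {"role": "user", "content": f"Sources:\n{sources}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)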

Python + AI: Vision models

📺 Watch recording

Our fourth session is all about vision models! Vision models are LLMs that can accept both text and images, such as GPT-4o and GPT-4o mini. You can use these models for image captioning, data extraction, question answering, classification, and more! We use Python to send images to vision models, build a basic chat-with-images app, and create a multimodal search engine.
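
Here is a minimal sketch of the pattern: read a local image, encode it as a base64 data URL, and include it alongside the text prompt. The endpoint, model name, and image path are assumptions.

# Send a local image to a vision-capable model as a base64 data URL.
import base64
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

with open("receipt.png", "rb") as f:  # hypothetical image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # assumed vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is the total amount on this receipt?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)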

Python + AI: Structured outputs

📺 Watch recording

In our fifth session, we discover how to get LLMs to output structured responses that adhere to a schema. In Python, all you need to do is define a Pydantic BaseModel to get validated output that perfectly meets your needs. We focus on the structured outputs mode available in OpenAI models, but you can use similar techniques with other model providers. Our examples demonstrate the many ways you can use structured responses, such as entity extraction, classification, and agentic workflows.
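
A minimal sketch of that pattern, assuming the same GitHub Models endpoint as above: define a Pydantic model, pass it as the response format, and read back a validated instance.

# Structured outputs: the Pydantic model is the schema, parse() validates it.
import os

from openai import OpenAI
from pydantic import BaseModel


class CalendarEvent(BaseModel):
    title: str
    date: str
    attendees: list[str]


client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

completion = client.beta.chat.completions.parse(
    model="openai/gpt-4o-mini",  # assumed model with structured-outputs support
    messages=[
        {"role": "system", "content": "Extract the event details."},
        {"role": "user", "content": "Alice and Bob meet for sprint planning on Friday."},
    ],
    response_format=CalendarEvent,
)

event = completion.choices[0].message.parsed  # a validated CalendarEvent instance
print(event.title, event.date, event.attendees)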

Python + AI: Quality and safety

📺 Watch recording

This session covers a crucial topic: how to use AI safely and how to evaluate the quality of AI outputs. There are multiple mitigation layers when working with LLMs: the model itself, a safety system on top, the prompting and context, and the application user experience. We focus on Azure tools that make it easier to deploy safe AI systems into production. We demonstrate how to configure the Azure AI Content Safety system when working with Azure AI models and how to handle errors in Python code. Then we use the Azure AI Evaluation SDK to evaluate the safety and quality of output from your LLM.
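
On the application side, one small but important habit the session touches on is failing gracefully when a safety system blocks a request. The sketch below assumes the OpenAI SDK surfaces such a block as a 400 error with the code "content_filter", which is how Azure OpenAI content filtering typically appears; treat the exact error shape as an assumption for other providers.

# Handle a content-safety rejection without crashing the app.
import os

import openai
from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

try:
    response = client.chat.completions.create(
        model="openai/gpt-4o-mini",
        messages=[{"role": "user", "content": "<user-supplied prompt>"}],
    )
    print(response.choices[0].message.content)
except openai.BadRequestError as err:
    # A blocked prompt should produce a friendly message, not a stack trace.
    if getattr(err, "code", None) == "content_filter":
        print("The request was blocked by the content safety system.")
    else:
        raise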

Python + AI: Tool calling

📺 Watch recording

In the final stretch of the series, we focus on the technologies needed to build AI agents, starting with the foundation: tool calling (also known as function calling). We define tool call specifications using both JSON schema and Python function definitions, then send these definitions to the LLM. We demonstrate how to properly handle tool call responses from LLMs, enable parallel tool calling, and iterate over multiple tool calls. Understanding tool calling is absolutely essential before diving into agents, so don't skip over this foundational session.
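
The loop at the heart of tool calling fits in one short sketch: declare a tool with a JSON schema, let the model request it, run the matching Python function, and send the result back for the final answer. The endpoint and model name are assumptions, and the weather function is a stand-in.

# Tool-calling loop: schema -> model requests tool -> run it -> final answer.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)


def get_weather(city: str) -> str:
    """Stand-in for a real weather API call."""
    return f"It is sunny and 22 C in {city}."


tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Lisbon?"}]
response = client.chat.completions.create(
    model="openai/gpt-4o-mini", messages=messages, tools=tools
)
message = response.choices[0].message

if message.tool_calls:
    messages.append(message)  # keep the assistant's tool-call request in history
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # Let the model turn the tool results into the final answer.
    final = client.chat.completions.create(
        model="openai/gpt-4o-mini", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
else:
    print(message.content)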

Python + AI: Agents

📺 Watch recording

In the penultimate session, we build AI agents! We use Python AI agent frameworks such as the new agent-framework from Microsoft and the popular LangGraph framework. Our agents start simple and then increase in complexity, demonstrating different architectures such as multiple tools, supervisor patterns, graphs, and human-in-the-loop workflows.
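
For a flavor of how little code a first agent takes, here is a minimal LangGraph sketch using its prebuilt ReAct-style agent. The model identifier, endpoint, and the langchain-openai wrapper used to reach GitHub Models are assumptions; the session recording shows the frameworks in much more depth.

# Minimal LangGraph agent: one model, one tool, prebuilt ReAct loop.
import os

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


def get_weather(city: str) -> str:
    """Return the weather for a city (stand-in for a real lookup)."""
    return f"It is sunny and 22 C in {city}."


model = ChatOpenAI(
    model="openai/gpt-4o-mini",                     # assumed model identifier
    base_url="https://models.github.ai/inference",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],
)

# create_react_agent wires the model and tools into a tool-calling loop.
agent = create_react_agent(model, tools=[get_weather])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Lisbon?"}]}
)
print(result["messages"][-1].content)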

Python + AI: Model Context Protocol

📺 Watch recording

In the final session, we dive into the hottest technology of 2025: MCP (Model Context Protocol). This open protocol makes it easy to extend AI agents and chatbots with custom functionality, making them more powerful and flexible. We demonstrate how to use the Python FastMCP SDK to build an MCP server running locally and consume that server from chatbots like GitHub Copilot. Then we build our own MCP client to consume the server. Finally, we discover how easy it is to connect AI agent frameworks like LangGraph and Microsoft agent-framework to MCP servers. With great power comes great responsibility, so we briefly discuss the security risks that come with MCP, both as a user and as a developer.
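
To ground the MCP discussion, here is a minimal local server sketch using the FastMCP API that ships with the official MCP Python SDK (the standalone fastmcp package looks almost identical). The tools are toy examples; register the script as a local stdio server in your MCP client of choice, such as GitHub Copilot in VS Code.

# A tiny local MCP server exposing two tools over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b


@mcp.tool()
def shout(text: str) -> str:
    """Return the text in upper case."""
    return text.upper()


if __name__ == "__main__":
    # stdio is the transport editor-based clients expect for a locally launched server.
    mcp.run()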


GitHub Universe 2025 Day 1 Recap

From: GitHub
Duration: 10:41
Views: 761

Missed the GitHub Universe Day 1 keynote? Catch the biggest announcements here! We introduced Agent HQ, transforming GitHub into an open ecosystem for all coding agents (from OpenAI, Google, Anthropic, xAI & more). Plus, get the details on Mission Control, Plan Mode, Custom Agents, and new enterprise controls.

#GitHubUniverse #GitHubUniverse2025 #GitHub

Watch more videos from GitHub Universe 2025: https://www.youtube.com/watch?v=P6Va0_KILi4&list=PL0lo9MOBetEFKNlPHNouEmVeYeyoyGTXC

Stay up-to-date on all things GitHub by connecting with us:

YouTube: https://gh.io/subgithub
Blog: https://github.blog
X: https://twitter.com/github
LinkedIn: https://linkedin.com/company/github
Insider newsletter: https://resources.github.com/newsletter/
Instagram: https://www.instagram.com/github
TikTok: https://www.tiktok.com/@github

About GitHub
It’s where over 100 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com
