Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
148734 stories · 33 followers

MicroVision layoffs impact senior-level engineering roles

1 Share

MicroVision plans to lay off 49 employees at its Redmond, Wash., headquarters in a series of job cuts starting in late April, according to a new regulatory filing with Washington state.

  • The cuts are heavily concentrated in engineering and technical roles. Several senior-level positions — including director of supply chain, director of IT, and director of operations and supply chain — are impacted, according to the filing.
  • MicroVision, a longtime Seattle-area tech company, develops lidar sensors and perception software used for autonomous driving systems, as well as industrial and security use cases. It reported 185 full-time employees at the end of its fiscal year 2024.
  • MicroVision recently acquired assets from lidar maker Luminar for $33 million through a bankruptcy auction, and scooped up Scantinel Photonics in January. The company reports its fourth quarter results on Wednesday.
Read the whole story
alvinashcraft
56 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

GitHub Copilot CLI Reaches General Availability, Bringing Agentic Coding to the Terminal

GitHub Copilot CLI is now generally available for all paid Copilot subscribers, offering agentic workflows, multiple AI model support, and specialized agents for terminal-based development.

Giving Your AI Agents Reliable Skills with the Agent Skills SDK


AI agents are becoming increasingly capable, but they often do not have the context they need to do real work reliably. Your agent can reason well, but it does not actually know how to do the specific things your team needs it to do. For example, it cannot follow your company's incident response playbook, it does not know your escalation policy, and it has no idea how to page the on-call engineer at 3 AM.

There are many ways to close this gap, from RAG to custom tool implementations. Agent Skills is one approach that stands out because it is designed around portability and progressive disclosure, keeping context window usage minimal while giving agents access to deep expertise on demand.

What is Agent Skills?

Agent Skills is an open format for giving agents new capabilities and expertise. The format was originally developed by Anthropic and released as an open standard. It is now supported by a growing list of agent products including Claude Code, VS Code, GitHub, OpenAI Codex, Cursor, Gemini CLI, and many others.

As defined in the spec, a skill is a folder on disk containing a SKILL.md file with metadata and instructions, plus optional scripts, references, and assets:

```
incident-response/
├── SKILL.md                 # Required: instructions + metadata
├── references/              # Optional: additional documentation
│   ├── severity-levels.md
│   └── escalation-policy.md
├── scripts/                 # Optional: executable code
│   └── page-oncall.sh
└── assets/                  # Optional: templates, diagrams, data files
```

The SKILL.md file has YAML frontmatter with a name and description (so agents know when the skill is relevant), followed by markdown instructions that tell the agent how to perform the task. The format is intentionally simple: self-documenting, extensible, and portable.
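For a concrete sense of the shape, a minimal SKILL.md might look like this (the field values and instructions below are illustrative, not taken from the spec or the article):

```markdown
---
name: incident-response
description: Guides the agent through triaging, escalating, and resolving production incidents.
---

# Incident Response

1. Determine the severity using references/severity-levels.md.
2. Follow the escalation policy in references/escalation-policy.md.
3. For SEV1 incidents, run scripts/page-oncall.sh to page the on-call engineer.
```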

What makes this design practical is progressive disclosure. The spec is built around the idea that agents should not load everything at once. It works in three stages:

  1. Discovery: At startup, agents load only the name and description of each available skill, just enough to know when it might be relevant.
  2. Activation: When a task matches a skill's description, the agent reads the full SKILL.md instructions into context.
  3. Execution: The agent follows the instructions, optionally loading referenced files or executing bundled scripts as needed.

This keeps agents fast while giving them access to deep context on demand.
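The three stages can be sketched in plain Python. The data and function names below are illustrative only, not the SDK's actual API:

```python
# Illustrative sketch of progressive disclosure; not the SDK's actual API.
skills = {
    "incident-response": {
        "description": "Triage, escalate, and resolve production incidents",
        "body": "1. Determine severity.\n2. Follow the escalation policy.",
        "references": {"escalation-policy.md": "Page on-call for SEV1."},
    },
}

def build_catalog(skills):
    # Discovery: only names and descriptions are loaded at startup.
    return {name: meta["description"] for name, meta in skills.items()}

def get_skill_body(name):
    # Activation: the full instructions enter context only when a task matches.
    return skills[name]["body"]

def get_skill_reference(name, ref):
    # Execution: supporting files are loaded only if actually needed.
    return skills[name]["references"][ref]

catalog = build_catalog(skills)
```

The catalog is cheap to hold in every conversation; the body and references cost context only when the agent actually asks for them.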

The format is well-designed and widely adopted, but if you want to use skills from your own agents, there is a gap between the spec and a working implementation.

The Agent Skills SDK

Conceptually, a skill is more than a folder. It is a unit of expertise: a name, a description, a body of instructions, and a set of supporting resources. The file layout is one way to represent that, but there is nothing about the concept that requires a filesystem.

The Agent Skills SDK is an open-source Python library built around that idea, treating skills as abstract units of expertise that can be stored anywhere and consumed by any agent framework. It does this by addressing two challenges that come up when you try to use the format from your own agents.

The first is where skills live. The spec defines skills as folders on disk, and the tools that support the format today all assume skills are local files. Files are inherently portable, and that is one of the format's strengths. But in the real world, not every team can or wants to serve skills from the filesystem. Maybe your team keeps them in an S3 bucket. Maybe they are in Azure Blob Storage behind your CDN. Maybe they live in a database alongside the rest of your application data. At the moment, if your skills are not on the local filesystem, you are on your own. The SDK changes where skills are served from, not how they are authored. The content and format stay the same regardless of the storage backend, so skills remain portable across providers.

The second is how agents consume them. The spec defines the progressive disclosure pattern, but actually implementing it in your agent requires real work. You need to figure out how to validate skills against the spec, generate a catalog for the system prompt, expose the right tools for on-demand content retrieval, and handle the back-and-forth of the agent requesting metadata, then the body, then individual references or scripts. That is a lot of plumbing regardless of where the skills are stored, and the work multiplies if you want to support more than one agent framework.

The SDK solves both by separating where skills come from (providers) from how agents use them (integrations), so you can mix and match freely. Load skills from the filesystem today, move them to an HTTP server tomorrow, swap in a custom database provider next month, and your agent code does not change at all.

How the SDK works

The SDK is a set of Python packages organized around two ideas: storage-agnostic providers and progressive disclosure.

The provider abstraction means your skills can live anywhere. The SDK ships with providers for the local filesystem and static HTTP servers, but the SkillProvider interface is simple enough that you can write your own in a few methods. A Cosmos DB provider, a Git provider, a SharePoint provider, whatever makes sense for your team. The rest of the SDK does not care where the data comes from.
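As a rough illustration of what a custom provider amounts to, here is an in-memory one. The class and method names below are assumptions for the sake of the sketch; the SDK's actual SkillProvider interface may look different:

```python
import asyncio

# Hypothetical in-memory provider; the method names are assumptions for
# illustration, not the SDK's actual SkillProvider interface.
class DictSkillProvider:
    """Serves a skill's files from an in-memory dict keyed by relative path."""

    def __init__(self, files: dict[str, str]):
        self._files = files

    async def read(self, path: str) -> str:
        # The rest of the stack would call something like this to fetch
        # SKILL.md, references, or scripts, regardless of where the bytes live.
        return self._files[path]

    async def list_paths(self) -> list[str]:
        return sorted(self._files)

provider = DictSkillProvider({
    "SKILL.md": "---\nname: incident-response\n---\nInstructions...",
    "references/severity-levels.md": "SEV1: full outage.",
})
print(asyncio.run(provider.read("references/severity-levels.md")))
```

The point is that the storage backend reduces to a couple of async methods; everything above it (catalog generation, tools, progressive disclosure) stays the same.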

On top of that, the SDK implements the progressive disclosure pattern from the spec as a set of tools that any LLM agent can use. At startup, the SDK generates a skills catalog containing each skill's name and description. Your agent injects this catalog into its system prompt so it knows what is available. Then, during a conversation, the agent calls tools to retrieve content on demand, following the same discovery-activation-execution flow the spec describes.

Here is the flow in practice:

  1. You register skills from any source (local files, an HTTP server, your own database).
  2. The SDK generates a catalog and tool usage instructions, which you inject into the system prompt.
  3. The agent calls tools to retrieve content on demand.

This matters because context windows are finite. An incident response skill might have a main body, three reference documents, two scripts, and a flowchart. The agent should not load all of that upfront. It should read the body first, then pull the escalation policy only when the conversation actually gets to escalation.

A quick example

Here is what it looks like in practice. Start by loading a skill from the filesystem:

```python
from pathlib import Path

from agentskills_core import SkillRegistry
from agentskills_fs import LocalFileSystemSkillProvider

provider = LocalFileSystemSkillProvider(Path("my-skills"))
registry = SkillRegistry()
await registry.register("incident-response", provider)
```

Now wire it into a LangChain agent:

```python
from langchain.agents import create_agent

from agentskills_langchain import get_tools, get_tools_usage_instructions

tools = get_tools(registry)
skills_catalog = await registry.get_skills_catalog(format="xml")
tool_usage_instructions = get_tools_usage_instructions()

system_prompt = (
    "You are an SRE assistant. Use the available skill tools to look up "
    "incident response procedures, severity definitions, and escalation "
    "policies. Always cite which reference document you used.\n\n"
    f"{skills_catalog}\n\n"
    f"{tool_usage_instructions}"
)

agent = create_agent(
    llm,
    tools,
    system_prompt=system_prompt,
)
```

That is it. The agent now knows what skills are available and has tools to fetch their content. When a user asks "How do I handle a SEV1 incident?", the agent will call get_skill_body to read the instructions, then get_skill_reference to pull the severity levels document, all without you writing any of that retrieval logic.

The same pattern works with Microsoft Agent Framework:

```python
from agentskills_agentframework import get_tools, get_tools_usage_instructions

tools = get_tools(registry)
skills_catalog = await registry.get_skills_catalog(format="xml")
tool_usage_instructions = get_tools_usage_instructions()

system_prompt = (
    "You are an SRE assistant. Use the available skill tools to look up "
    "incident response procedures, severity definitions, and escalation "
    "policies. Always cite which reference document you used.\n\n"
    f"{skills_catalog}\n\n"
    f"{tool_usage_instructions}"
)

agent = Agent(
    client=client,
    instructions=system_prompt,
    tools=tools,
)
```

What is in the SDK

The SDK is split into small, composable packages so you only install what you need:

  • agentskills-core handles registration, validation, the skills catalog, and the progressive disclosure API. It also defines the SkillProvider interface that all providers implement.
  • agentskills-fs and agentskills-http are the two built-in providers. The filesystem provider loads skills from local directories. The HTTP provider loads them from any static file host: S3, Azure Blob Storage, GitHub Pages, a CDN, or anything that serves files over HTTP.
  • agentskills-langchain and agentskills-agentframework generate framework-native tools and tool usage instructions from a skill registry.
  • agentskills-mcp-server spins up an MCP server that exposes skill tool access and usage as tools and resources, so any MCP-compatible client can use them.

Because providers and integrations are separate packages, you can combine them however you want. Use the filesystem provider during development, switch to the HTTP provider in production, or write a custom provider that reads skills from your own database. The integration layer does not need to know or care.

Where to go from here

The full source, working examples, and detailed API docs are on GitHub:

github.com/pratikxpanda/agentskills-sdk

The repo includes end-to-end examples for both LangChain and Microsoft Agent Framework, covering filesystem providers, HTTP providers, and MCP. There is also a sample incident-response skill you can use to try things out.

A proposal to contribute this SDK to the official agentskills repository has been submitted. If you find it useful, feel free to show your support on the GitHub issue.

To learn more about the Agent Skills format itself:

The SDK is MIT licensed and contributions are welcome. If you have questions or ideas, post a question here or open an issue on the repo.


Open Standards for Enterprise Agents

From: Microsoft Developer
Duration: 17:04
Views: 68

This episode of Armchair Architects explores the critical role of open standards in agentic AI, focusing on protocols like Model Context Protocol, agent-to-agent communication, observability with Open Telemetry, and agent identity. The architects discuss how these elements enable secure, interoperable, and responsible AI integration in enterprises, and offer practical guidance for architects navigating the rapidly evolving agent ecosystem.

By the end of this episode, you will …
- know more about the four key open standards shaping agentic AI: MCP, agent-to-agent protocols, OTEL for observability, and OAuth for identity, and why these are foundational for enterprise adoption.
- understand the importance of following and contributing to open-source communities and standards to ensure interoperability and responsible AI deployment.
- gain practical strategies for evaluating, adopting, and requesting these standards in enterprise RFPs to drive market convergence and ensure robust, future-proof agent architectures.

Resources
- Model Context Protocol (MCP) https://modelcontextprotocol.io
- Agent2Agent (A2A) Protocol https://github.com/a2aproject
- OpenTelemetry https://opentelemetry.io
- Web Authorization Protocol (OAuth) https://datatracker.ietf.org/wg/oauth/about/
- Blog: Agent Factory: Designing the open agentic web stack https://azure.microsoft.com/blog/agent-factory-designing-the-open-agentic-web-stack/
- Agentic AI Foundation https://aaif.io/

Related Episodes
- Watch more episodes of Armchair Architects https://aka.ms/ArmchairArchitects
- Watch more episodes of the Azure Essentials Show https://aka.ms/AzureEssentialsShow

Connect
- David Blank-Edelman https://www.linkedin.com/in/dnblankedelman/
- Uli Homann https://www.linkedin.com/in/ulrichhomann/
- Eric Charran https://www.linkedin.com/in/ericcharran/

Chapters
00:00 Hey! Good news!
00:35 Welcome architects
01:10 Challenges for Enterprise
02:20 Model Context Protocol
03:24 Agent-to-agent Protocol
04:37 OpenTelemetry standard
06:20 Web Authorization Protocol (OAuth)
07:53 Holistic approach in three layers
09:08 Layer one: interaction and invocation standards
09:45 Layer two: orchestration and lifecycle patterns
10:33 Layer three: Evaluation, observation, orchestration
11:21 The role of the architect
12:08 Include standards in RFPs and RFIs
13:30 Discussion of open source examples
15:05 Agents as a commodity
16:03 Homework for viewers


e241 – “Behind the Scenes: Lori Chollar’s Ultrawide Presentation Session at the Presentation Design Conference”


Show Notes – Episode #241

In this episode of the Presentation Podcast, we talk with Lori Chollar, CEO of TLC Creative Services, Inc. We get a behind-the-scenes look at Lori’s experience presenting at CreativePro’s Presentation Design Conference.

Lori shares insights into the process of being asked to present, preparing her session content, and the conference experience. On the presentation side, we talk about the technical and creative challenges, workflow tips, and the evolution of presentation technology as related to creating ultrawide presentations to fill the amazing LED wall configurations at events. Listen in to gain valuable perspectives on both the art and logistics of modern presentation design!

Highlights:

  • Lori Chollar’s experience presenting at the CreativePro Presentation Design Conference.
  • Her session was, “Stretch Your Creativity: Designing for Ultrawide Presentations.”
  • The process of being invited to speak and preparing for the presentation.
  • Insights into the organization and behind-the-scenes aspects of the conference.
  • Differences between live and pre-recorded presentations.
  • Technical challenges associated with designing for ultra-wide screens.
  • The evolution of presentation technology and its impact on design.
  • The importance of audience perspective in presentation design.
  • Strategies for managing speaker energy and engagement during pre-recorded sessions.
  • The value of collaboration and feedback in developing presentation content.

Resources from this Episode:  

Show Suggestions? Questions for your Hosts?

Email us at: info@thepresentationpodcast.com

New Episodes 1st and 3rd Tuesday Every Month

Thanks for joining us!

The post e241 – “Behind the Scenes: Lori Chollar’s Ultrawide Presentation Session at the Presentation Design Conference” appeared first on The Presentation Podcast.


#471 The ORM pattern of 2026?

Topics covered in this episode:
Watch on YouTube

About the show

Sponsored by us! Support our work through:

Michael #1: Raw+DC: The ORM pattern of 2026?

  • ORMs/ODMs provide great support and abstractions for developers
  • They are not the native language of agentic AI
  • Raw queries appear in training data 100x+ more often than standard ORM code
  • Using raw queries at the data access optimizes for AI coding
  • Returning some sort of object mapped to the data optimizes for type safety and devs
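The "raw query plus dataclass" idea can be sketched with the stdlib sqlite3 module; the table and fields below are made up for illustration, not from the episode:

```python
import sqlite3
from dataclasses import dataclass

# Illustrative schema; not from the episode.
@dataclass
class User:
    id: int
    name: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))

# Raw SQL at the data-access layer: the dialect AI coding tools have seen most.
rows = conn.execute("SELECT id, name FROM users").fetchall()

# Mapping rows into dataclasses restores type safety for the rest of the app.
users = [User(*row) for row in rows]
print(users[0])  # User(id=1, name='Ada')
```

The raw SQL stays legible to both the model and the reviewer, while everything downstream of the query works with typed objects.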

Brian #2: pytest-check releases

  • 3 merged pull requests
  • 8 closed issues
  • at one point got to 0 PRs and 1 enhancement request
  • Now back to 2 issues and 1 PR, but activity means it’s still alive and being used. so cool
  • Check out changelog for all mods
  • A lot of changes around supporting mypy
    • I’ve decided to NOT have the examples be fully --strict as I find it reduces readability
      • See tox.ini for explanation
    • But src is --strict clean now, so user tests can be --strict clean.

Michael #3: Dataclass Wizard

  • Simple, elegant wizarding tools for Python’s dataclasses.
  • Features
    • 🚀 Fast — code-generated loaders and dumpers
    • 🪶 Lightweight — pure Python, minimal dependencies
    • 🧠 Typed — powered by Python type hints
    • 🧙 Flexible — JSON, YAML, TOML, and environment variables
    • 🧪 Reliable — battle-tested with extensive test coverage
  • No Inheritance Needed

Brian #4: SQLiteo - “native macOS SQLite browser built for normal people”

  • Adam Hill
  • This is a fun tool, built by someone I trust.
  • That trust part is something I’m thinking about a lot in these days of dev+agent built tools
  • Some notes on my thoughts when evaluating
    • I know macOS rules around installing .dmg files from outside the App Store are picky.
      • And I like that
    • But I’m ok with the override when something comes from a dev I trust
    • The contributors are all Adam
      • I’m still not sure how I feel about letting agents do commits in repos
    • There’s an “AGENTS” folder and markdown files in the project for agents, so Ad

Extras

Michael:

Joke: House is read-only!
Download audio: https://pythonbytes.fm/episodes/download/471/the-orm-pattern-of-2026.mp3