Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Red Hat announces Project Hummingbird to boost cloud-native development

1 Share
Today’s IT leaders frequently face a critical trade-off between delivery speed and systems security. AI-assisted and AI-generated coding tools accelerate development cycles, but that speed can run counter to the realities of managing multifaceted, complicated software components. This seemingly leaves CIOs with two choices: move at the speed of business while accepting potential risks to production systems, or be so overcautious that they lose out to competitors’ innovations. To address this dilemma, Red Hat is launching Project Hummingbird, an early access program for subscription customers that provides a catalog of minimal, hardened container images. Project Hummingbird is designed to help IT… [Continue Reading]
Read the whole story
alvinashcraft
20 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Adobe to acquire digital marketing platform Semrush for $1.9 billion


Adobe is acquiring the digital marketing platform Semrush for around $1.9 billion. In a press release on Wednesday, Adobe says the deal will allow it and Semrush to give marketers insight into how their brands appear across the web.

The acquisition builds on Adobe’s existing suite of marketing tools that help businesses manage digital campaigns and analyze web traffic. Adobe has begun incorporating AI into its marketing platform as well, as it now allows brands to generate ads using the technology. Last month, it also announced that it’s building an AI agent designed to brainstorm social media campaigns.

In addition to taking advantage of Semrush’s search engine optimization (SEO) capabilities, Adobe plans on incorporating the company’s tools that help brands appear inside AI-generated search results or responses. 

In 2023, Adobe dropped its $20 billion deal to acquire the collaborative design platform Figma after facing pressure from regulators in the UK and the European Union. Adobe’s deal to acquire Semrush is expected to close in the first half of 2026. It’s still subject to the approval of regulators, as well as Semrush stockholders.


How Agentic AI Empowers Architecture Governance


One of the principles in our upcoming book Architecture as Code is the ability for architects to design automated governance checks for important architectural concerns, creating fast feedback loops when things go awry. This idea isn’t new—Neal and his coauthors Rebecca Parsons and Patrick Kua espoused this idea back in 2017 in the first edition of Building Evolutionary Architectures, and many of our clients adopted these practices with great success. However, our most ambitious goals were largely thwarted by a common problem in modern architectures: brittleness. Fortunately, the advent of the Model Context Protocol (MCP) and agentic AI has largely solved this problem for enterprise architects.

Fitness Functions

Building Evolutionary Architectures defines the concept of an architectural fitness function: any mechanism that provides an objective integrity check for architectural characteristics. Architects can think of fitness functions sort of like unit tests, but for architectural concerns.

While many fitness functions run like unit tests to test structure (using tools like ArchUnit, NetArchTest, PyTestArch, arch-go, and so on), architects can write fitness functions to validate all sorts of important checks…like tasks normally reserved for relational databases.
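As a sketch of how such a structural check might work without any framework, here is a minimal, hypothetical fitness function that uses Python’s standard-library `ast` module to forbid a domain module from importing a persistence layer (module names and sources are invented for illustration):

```python
import ast

def imported_modules(source: str) -> set[str]:
    """Collect the top-level module names imported by a piece of Python source."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def fitness_no_forbidden_imports(source: str, forbidden: set[str]) -> bool:
    """Fitness function: the module must not import from forbidden layers."""
    return imported_modules(source).isdisjoint(forbidden)

# A domain module that illegally reaches into the persistence layer
bad_module = "from persistence import OrderRepository\nimport datetime\n"
good_module = "import datetime\n"

assert fitness_no_forbidden_imports(good_module, {"persistence"})
assert not fitness_no_forbidden_imports(bad_module, {"persistence"})
```

Tools like ArchUnit or PyTestArch provide richer rule builders, but the principle is the same: an objective, automatable assertion about structure.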

Fitness functions and referential integrity

Consider the architecture illustrated in Figure 1.

Figure 1: Strategically splitting a database in a distributed architecture

In Figure 1, the team has decided to split the data into two databases for better scalability and availability. However, a common disadvantage of that approach is that the team can no longer rely on the database to enforce referential integrity. To model this workflow correctly, each ticket must have a corresponding customer.

While many teams seem to think that referential integrity is only possible within a relational database, we separate the governance activity (data integrity) from the implementation (the relational database) and realize we can create our own check using an architectural fitness function, as shown in Figure 2.

Figure 2: Implementing referential integrity as a fitness function

In Figure 2, the architect has created a small fitness function that monitors the queue between customer and ticket. When the queue depth drops to zero (meaning that the system isn’t processing any messages), the fitness function creates a set of customer keys from the customer service and a set of customer foreign keys from the ticket service and asserts that all of the ticket foreign keys are contained within the set of customer keys.

Why not just query the databases directly from the fitness function? Abstracting them as sets allows flexibility—querying across databases on a constant basis introduces overhead that may have negative side effects. Abstracting the fitness function check from the mechanics of how the data is stored to an abstract data structure has at least a couple of advantages. First, using sets allows architects to cache nonvolatile data (like customer keys), avoiding constant querying of the database. Many solutions exist for write-through caches in the rare event we do add a customer. Second, using sets of keys abstracts us from actual data items. Data engineers prefer synthetic keys to using domain data; the same is true for architects. While the database schema might change over time, the team will always need the relationship between customers and tickets, which this fitness function validates in an abstract way.
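The set-containment check described above is simple enough to sketch directly. A minimal, hypothetical Python version (key values invented for illustration; in practice the sets would come from each service’s API or a write-through cache) might look like:

```python
def referential_integrity_holds(customer_keys: set[str],
                                ticket_foreign_keys: set[str]) -> bool:
    """Fitness function: every ticket must reference an existing customer."""
    return ticket_foreign_keys <= customer_keys  # subset check

# Hypothetical key sets pulled from the customer and ticket services
customers = {"c1", "c2", "c3"}
tickets_ok = {"c1", "c3"}
tickets_orphaned = {"c1", "c9"}  # "c9" has no matching customer

assert referential_integrity_holds(customers, tickets_ok)
assert not referential_integrity_holds(customers, tickets_orphaned)
```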

Who executes this code? As this problem is typical in distributed architectures such as microservices, the common place to execute this governance code is within the service mesh of the microservices architecture. Service mesh is a general pattern for handling operational concerns in microservices, such as logging, monitoring, naming, service discovery, and other nondomain concerns. In mature microservices ecosystems, the service mesh also acts as a governance mesh, applying fitness functions and other rules at runtime.

This is a common way that architects at the application level can validate data integrity, and we’ve implemented these types of fitness functions on hundreds of projects. However, the specificity of the implementation details makes it difficult to expand the scope of these types of fitness functions to the enterprise architect level because they include too many implementation details about how the project works.

Brittleness for metadomains

One of the key lessons from domain-driven design was the idea of keeping implementation details as tightly bound as possible, using anticorruption layers to prevent integration points from understanding too many details. Architects have embraced this philosophy in architectures like microservices.

Yet we see the same problem here at the metalevel, where enterprise architects would like to broadly control concerns like data integrity yet are hampered by the distance and specificity of the governance requirement. Distance refers to the scope of the activity: While application and integration architects have a narrow scope of responsibility, enterprise architects by their nature sit at the enterprise level. Thus, enforcing governance such as referential integrity requires an enterprise architect to know too many specific details about how the team has implemented the project.

One of our biggest global clients has a role within their enterprise architecture group called evolutionary architect, whose job is to identify global governance concerns, and we have other clients who have tried to implement this level of holistic governance with their enterprise architects. However, the brittleness defeats these efforts: As soon as the team needs to change an implementation detail, the fitness function breaks. Even though we often couch fitness functions as “unit tests for architecture,” in reality, they break much less often than unit tests. (How often do changes affect some fundamental architectural concern versus a change to the domain?) However, by exposing implementation details outside the project to enterprise architects, these fitness functions do break enough to limit their value.

We’ve tried a variety of anticorruption layers for metaconcerns, but generative AI and MCP have provided the best solution to date.

MCP and Agentic Governance

MCP defines a general integration layer for agents to query and consume capabilities within a particular metascope. For example, teams can set up an MCP server at the application or integration architecture level to expose tools and data sources to AI agents. This provides the perfect anticorruption layer for enterprise architects to state the intent of governance without relying on implementation details.

This allows teams to implement the type of governance that strategically minded enterprise architects want while creating a level of indirection for the details. For example, see the updated referential integrity check illustrated in Figure 3.

Figure 3. Using MCP for indirection to hide the fitness function implementation details

In Figure 3, the enterprise architect issues the general request to validate referential integrity to the MCP server for the project. It in turn exposes fitness functions via tools (or data sources such as log files) to carry out the request.

By creating an anticorruption layer between the project details and the enterprise architect, we can use MCP to handle implementation details so that when the project evolves in the future, brittleness doesn’t break the governance, as shown in Figure 4.

Figure 4. Using agentic AI to create metalevel indirection

In Figure 4, the enterprise architect concern (validate referential integrity) hasn’t changed, but the project details have. The team added another service for experts, who work on tickets, meaning we now need to validate integrity across three databases. The team changes the internal MCP tool that implements the fitness function, and the enterprise architect request stays the same.
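The indirection in Figures 3 and 4 can be sketched without a real MCP server: a hypothetical tool registry in which the architect’s intent name stays fixed while the project team swaps the implementation behind it (all names and key sets here are invented for illustration):

```python
# Hypothetical sketch of the indirection MCP provides: the enterprise
# architect invokes a stable intent name; the project team owns (and can
# freely evolve) the implementation registered under it.
TOOLS: dict = {}

def tool(name):
    """Register a fitness-function implementation under a stable intent name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

def run_governance_check(intent: str) -> bool:
    """What the architect's agent calls -- no project details leak out."""
    return TOOLS[intent]()

# Version 1: two services, two key sets
@tool("validate_referential_integrity")
def check_two_services():
    customers, tickets = {"c1", "c2"}, {"c1"}
    return tickets <= customers

assert run_governance_check("validate_referential_integrity")

# Later the team adds the expert service; only the registered tool changes.
# The architect's request stays exactly the same.
@tool("validate_referential_integrity")
def check_three_services():
    customers, tickets, experts = {"c1", "c2"}, {"c1"}, {"c2"}
    return tickets <= customers and experts <= customers

assert run_governance_check("validate_referential_integrity")
```

A real MCP server would expose the tool over the protocol rather than a dictionary, but the design property is the same: the intent is stable, the implementation is replaceable.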

This allows enterprise architects to effectively state governance intent without diving into implementation details, removing the brittleness of far-reaching fitness functions and enabling much more proactive holistic governance by architects at all levels.

Defining the Intersections of Architecture

In Architecture as Code, we discuss nine different intersections between software architecture and other parts of the software development ecosystem (data being one of them), all expressed as architectural fitness functions (the “code” part of architecture as code). In defining the intersection of architecture and enterprise architecture, we can use MCP and agents to state intent holistically, deferring the actual details to individual projects and ecosystems. This solves one of the nagging problems for enterprise architects who want to build more automated feedback loops within their systems.

MCP is almost ideally suited for this purpose, designed to expose tools, data sources, and prompt libraries to external contexts outside a particular project domain. This allows enterprise architects to holistically define broad intent and leave it to teams to implement (and evolve) their solutions.

X as code (where X can be a wide variety of things) typically arises when the software development ecosystem reaches a certain level of maturity and automation. Teams tried for years to make infrastructure as code work, but it didn’t until tools such as Puppet and Chef came along that could enable that capability. The same is true with other “as code” initiatives (security, policy, and so on): The ecosystem needs to provide tools and frameworks to allow it to work. Now, with the combination of powerful fitness function libraries for a wide variety of platforms and ecosystem innovations such as MCP and agentic AI, architecture itself has enough support to join the “as code” communities.


Learn more about how AI is reshaping enterprise architecture at the Software Architecture Superstream on December 9. Join host Neal Ford and a lineup of experts including Metro Bank’s Anjali Jain and Philip O’Shaughnessy, Vercel’s Dom Sipowicz, Intel’s Brian Rogers, Microsoft’s Ron Abellera, and Equal Experts’ Lewis Crawford to hear hard-won insights about building adaptive, AI-ready architectures that support continuous innovation, ensure governance and security, and align seamlessly with business goals.

O’Reilly members can register here. Not a member? Sign up for a 10-day free trial before the event to attend—and explore all the other resources on O’Reilly.




How to write a great agents.md: Lessons from over 2,500 repositories


We recently released a new GitHub Copilot feature: custom agents defined in agents.md files. Instead of one general assistant, you can now build a team of specialists: a @docs-agent for technical writing, a @test-agent for quality assurance, and a @security-agent for security analysis. Each agents.md file acts as an agent persona, which you define with frontmatter and custom instructions.

agents.md is where you define all the specifics: the agent’s persona, the exact tech stack it should know, the project’s file structure, workflows, and the explicit commands it can run. It’s also where you provide code style examples and, most importantly, set clear boundaries of what not to do.

The challenge? Most agent files fail because they’re too vague. “You are a helpful coding assistant” doesn’t work. “You are a test engineer who writes tests for React components, follows these examples, and never modifies source code” does.

I analyzed over 2,500 agents.md files across public repos to understand how developers use them. The analysis showed a clear pattern of what works: give your agent a specific job or persona, exact commands to run, well-defined boundaries to follow, and clear examples of good output.

Here’s what the successful ones do differently.

What works in practice: Lessons from 2,500+ repos

My analysis of over 2,500 agents.md files revealed a clear divide between the ones that fail and the ones that work. The successful agents aren’t just vague helpers; they are specialists. Here’s what the best-performing files do differently:

  • Put commands early: Put relevant executable commands in an early section: npm test, npm run build, pytest -v. Include flags and options, not just tool names. Your agent will reference these often.
  • Code examples over explanations: One real code snippet showing your style beats three paragraphs describing it. Show what good output looks like.
  • Set clear boundaries: Tell AI what it should never touch (e.g., secrets, vendor directories, production configs, or specific folders). “Never commit secrets” was the most common helpful constraint.
  • Be specific about your stack: Say “React 18 with TypeScript, Vite, and Tailwind CSS” not “React project.” Include versions and key dependencies.
  • Cover six core areas: Hitting these areas puts you in the top tier: commands, testing, project structure, code style, git workflow, and boundaries. 

Example of a great agents.md file

Below is an example documentation agent persona, added to your repo at .github/agents/docs-agent.md:

---
name: docs_agent
description: Expert technical writer for this project
---

You are an expert technical writer for this project.

## Your role
- You are fluent in Markdown and can read TypeScript code
- You write for a developer audience, focusing on clarity and practical examples
- Your task: read code from `src/` and generate or update documentation in `docs/`

## Project knowledge
- **Tech Stack:** React 18, TypeScript, Vite, Tailwind CSS
- **File Structure:**
  - `src/` – Application source code (you READ from here)
  - `docs/` – All documentation (you WRITE to here)
  - `tests/` – Unit, Integration, and Playwright tests

## Commands you can use
Build docs: `npm run docs:build` (checks for broken links)
Lint markdown: `npx markdownlint docs/` (validates your work)

## Documentation practices
Be concise, specific, and value dense
Write so that a developer new to this codebase can understand your writing; don’t assume your audience is expert in the topic you’re writing about.

## Boundaries
- ✅ **Always do:** Write new files to `docs/`, follow the style examples, run markdownlint
- ⚠️ **Ask first:** Before modifying existing documents in a major way
- 🚫 **Never do:** Modify code in `src/`, edit config files, commit secrets

Why this agents.md file works well

  • States a clear role: Defines who the agent is (expert technical writer), what skills it has (Markdown, TypeScript), and what it does (read code, write docs).
  • Executable commands: Gives AI tools it can run (npm run docs:build and npx markdownlint docs/). Commands come first.
  • Project knowledge: Specifies tech stack with versions (React 18, TypeScript, Vite, Tailwind CSS) and exact file locations.
  • Real examples: Shows what good output looks like with actual code. No abstract descriptions.
  • Three-tier boundaries: Set clear rules using always do, ask first, never do. Prevents destructive mistakes.

How to build your first agent

Pick one simple task. Don’t build a “general helper.” Pick something specific like:

  • Writing function documentation
  • Adding unit tests
  • Fixing linting errors

Start minimal—you only need three things:

  • Agent name: test-agent, docs-agent, lint-agent
  • Description: “Writes unit tests for TypeScript functions”
  • Persona: “You are a quality software engineer who writes comprehensive tests”
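Put together, those three pieces are already a working file. A minimal, hypothetical .github/agents/test-agent.md might read:

```markdown
---
name: test-agent
description: Writes unit tests for TypeScript functions
---

You are a quality software engineer who writes comprehensive tests.
```

Everything else—commands, project knowledge, boundaries—can be layered on as the agent earns its keep.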

Copilot can also help generate one for you. Using your preferred IDE, open a new file at .github/agents/test-agent.md and use this prompt:

Create a test agent for this repository. It should:
- Have the persona of a QA software engineer.
- Write tests for this codebase
- Run tests and analyze results
- Write to “/tests/” directory only
- Never modify source code or remove failing tests
- Include specific examples of good test structure

Copilot will generate a complete agents.md file with persona, commands, and boundaries based on your codebase. Review it, add the YAML frontmatter, adjust the commands for your project, and you’re ready to use @test-agent.

Five agents worth building

Consider asking Copilot to help generate agents.md files for the agents below. I’ve included examples with each agent; adapt them to match the reality of your project.

@docs-agent

One of your early agents should write documentation. It reads your code and generates API docs, function references, and tutorials. Give it commands like npm run docs:build and markdownlint docs/ so it can validate its own work. Tell it to write to docs/ and never touch src/.

  • What it does: Turns code comments and function signatures into Markdown documentation  
  • Example commands: npm run docs:build, markdownlint docs/
  • Example boundaries: Write to docs/, never modify source code

@test-agent

This one writes tests. Point it at your test framework (Jest, PyTest, Playwright) and give it the command to run tests. The boundary here is critical: it can write to tests/ but should never delete a test just because it is failing and the agent can’t fix it.

  • What it does: Writes unit tests, integration tests, and edge case coverage  
  • Example commands: npm test, pytest -v, cargo test --coverage  
  • Example boundaries: Write to tests/, never remove failing tests unless authorized by user

@lint-agent

A fairly safe agent to create early on. It fixes code style and formatting but shouldn’t change logic. Give it commands that let it auto-fix style issues. This one’s low-risk because linters are designed to be safe.

  • What it does: Formats code, fixes import order, enforces naming conventions  
  • Example commands: npm run lint --fix, prettier --write
  • Example boundaries: Only fix style, never change code logic

@api-agent

This agent builds API endpoints. It needs to know your framework (Express, FastAPI, Rails) and where routes live. Give it commands to start the dev server and test endpoints. The key boundary: it can modify API routes but must ask before touching database schemas.

  • What it does: Creates REST endpoints, GraphQL resolvers, error handlers  
  • Example commands: npm run dev, curl localhost:3000/api, pytest tests/api/
  • Example boundaries: Modify routes, ask before schema changes

@dev-deploy-agent

Handles builds and deployments to your local dev environment. Keep it locked down: only deploy to dev environments and require explicit approval. Give it build commands and deployment tools but make the boundaries very clear.

  • What it does: Runs local or dev builds, creates Docker images  
  • Example commands: npm run test
  • Example boundaries: Only deploy to dev, require user approval for anything with risk

Starter template

---
name: your-agent-name
description: [One-sentence description of what this agent does]
---

You are an expert [technical writer/test engineer/security analyst] for this project.

## Persona
- You specialize in [writing documentation/creating tests/analyzing logs/building APIs]
- You understand [the codebase/test patterns/security risks] and translate that into [clear docs/comprehensive tests/actionable insights]
- Your output: [API documentation/unit tests/security reports] that [developers can understand/catch bugs early/prevent incidents]

## Project knowledge
- **Tech Stack:** [your technologies with versions]
- **File Structure:**
  - `src/` – [what's here]
  - `tests/` – [what's here]

## Tools you can use
- **Build:** `npm run build` (compiles TypeScript, outputs to dist/)
- **Test:** `npm test` (runs Jest, must pass before commits)
- **Lint:** `npm run lint --fix` (auto-fixes ESLint errors)

## Standards

Follow these rules for all code you write:

**Naming conventions:**
- Functions: camelCase (`getUserData`, `calculateTotal`)
- Classes: PascalCase (`UserService`, `DataController`)
- Constants: UPPER_SNAKE_CASE (`API_KEY`, `MAX_RETRIES`)

**Code style example:**
```typescript
// ✅ Good - descriptive names, proper error handling
async function fetchUserById(id: string): Promise<User> {
  if (!id) throw new Error('User ID required');
  
  const response = await api.get(`/users/${id}`);
  return response.data;
}

// ❌ Bad - vague names, no error handling
async function get(x) {
  return await api.get('/users/' + x).data;
}
```

## Boundaries
- ✅ **Always:** Write to `src/` and `tests/`, run tests before commits, follow naming conventions
- ⚠️ **Ask first:** Database schema changes, adding dependencies, modifying CI/CD config
- 🚫 **Never:** Commit secrets or API keys, edit `node_modules/` or `vendor/`

Key takeaways

Building an effective custom agent isn’t about writing a vague prompt; it’s about providing a specific persona and clear instructions.

My analysis of over 2,500 agents.md files shows that the best agents are given a clear persona and, most importantly, a detailed operating manual. This manual must include executable commands, concrete code examples for styling, explicit boundaries (like files to never touch), and specifics about your tech stack. 

When creating your own agents.md, cover the six core areas: commands, testing, project structure, code style, git workflow, and boundaries. Start simple. Test it. Add detail when your agent makes mistakes. The best agent files grow through iteration, not upfront planning.

Now go forth and build your own custom agents to see how they level up your workflow first-hand!

The post How to write a great agents.md: Lessons from over 2,500 repositories appeared first on The GitHub Blog.


Beyond note-taking with Fireflies


Fireflies CEO Krish Ramineni shares how the company is transforming AI-powered note-taking into a deeper layer of knowledge automation. He breaks down the technology behind real-time functionality like Live Assist, the user behavior patterns driving product evolution, and how Fireflies is innovating far beyond meetings. Krish also shares insights on future trends in AI and the potential for hardware integration, emphasizing the ongoing evolution of AI in knowledge work.

Sponsors:

  • Miro – Get the right things done faster with Miro's Innovation Workspace. AI Sidekicks, instant insights, and rapid prototyping—transform weeks of work into days. No more scattered docs or endless meetings. Help your teams get great done at [Miro](https://miro.com).
  • Framer – Design and publish without limits with Framer, the free all-in-one design platform. Unlimited projects, no tool switching, and professional sites—no Figma imports or HTML hassles required. Start creating for free at [framer.com/design](https://www.framer.com/design/) with code `PRACTICALAI` for a free month of Framer Pro.

Download audio: https://media.transistor.fm/a3018119/3a4ba1ea.mp3

C++ and GitHub Copilot

From: VisualStudio
Duration: 16:07
Views: 64

Sinem shows us the new C++ code editing tools for GitHub Copilot, and David shows us the Copilot build performance agent for Windows.

⌚ Chapters:
00:00 Welcome
03:25 Using the new code editing tools
07:20 Using the new build agent
14:35 Wrap-up

🔗 Sign up for the private preview at https://aka.ms/cpp-agents-private-preview.

🎙️ Featuring: Robert Green, David Li, Sinem Akinci

#visualstudio #visualstudio2026 #githubcopilot
