
Microsoft SQL Security Across the MAESTRO Stack: Building Secure Agentic AI with Defense-in-Depth


Artificial Intelligence is evolving rapidly. What began as simple prompt-and-response systems is now transforming into fully autonomous, agentic AI architectures capable of reasoning, orchestrating tools, interacting with enterprise data, and invoking external systems dynamically. While these capabilities unlock enormous business potential, they also introduce an entirely new category of security challenges.

Organizations are no longer asking only:

“How do we build AI systems?”

They are now asking:

“How do we build AI systems securely, responsibly, and with governance built into every layer?”

This is where security architecture becomes critical.

Modern AI systems introduce threats that traditional applications were never designed to handle. Prompt injection, data poisoning, over-privileged agents, hidden data exfiltration, unauthorized tool execution, and lack of operational traceability are becoming real concerns as enterprises move toward production-scale AI adoption.

To address these emerging risks, the MAESTRO framework offers a layered threat modeling approach designed specifically for AI and Agentic AI systems. At the same time, Microsoft SQL introduces a powerful set of AI-enabled capabilities that bring AI closer to enterprise data while maintaining strong governance, observability, and security boundaries.

This combination creates an interesting architectural opportunity:

Microsoft SQL is no longer just a database. It becomes a governed execution boundary for enterprise AI systems.

Understanding the MAESTRO Framework

The MAESTRO framework provides a structured way to think about security risks across AI systems. Instead of viewing AI as a single application component, MAESTRO breaks the architecture into multiple operational layers, each with its own attack surface and security concerns.

These layers include:

  • Foundation Models
  • Data Operations
  • Agent Frameworks
  • Deployment & Infrastructure
  • Evaluation & Observability
  • Security & Compliance
  • Agent Ecosystem

What makes MAESTRO particularly important is that it recognizes a fundamental shift in how applications behave. Traditional threat modeling frameworks such as STRIDE were designed around predictable application behavior, predefined execution paths, and relatively static trust boundaries. Agentic AI systems introduce a fundamentally different operating model. These systems operate dynamically at runtime, combining user input, retrieved data, tools, and external system interactions to make decisions and execute actions. As a result, the attack surface becomes significantly more dynamic and less deterministic than traditional applications. Frameworks such as MAESTRO help organizations evaluate these emerging risks across the full AI operational stack rather than focusing solely on conventional application threats.

Understanding the attack surface is only the first step. The next challenge is determining how to reduce risk across these interconnected layers. Because AI systems span data, models, agents, infrastructure, and external services, organizations require security controls that operate across multiple boundaries simultaneously.

Why Defense-in-Depth Matters for AI Systems

One of the biggest misconceptions in AI security is the idea that a single security control can “solve” AI risk. In reality, AI systems require layered protection strategies because attacks can occur across multiple boundaries simultaneously.

An attacker may manipulate prompts, poison retrieval data, abuse delegated agent permissions, or exploit infrastructure misconfigurations. Preventing every attack entirely may not always be possible.

Instead, modern AI security focuses on:

  • reducing blast radius
  • enforcing least privilege
  • maintaining observability
  • constraining execution pathways
  • preserving accountability

This is the core principle behind Defense-in-Depth.

In AI systems, Defense-in-Depth means applying security controls across:

  • data access
  • model interaction
  • execution pathways
  • infrastructure
  • telemetry
  • governance
  • compliance

The goal is not simply prevention. The goal is resilience.

This is precisely where modern data platforms begin to play a much larger role in AI security architecture. As AI systems move closer to enterprise data, the database itself becomes a critical enforcement boundary for governance, observability, and controlled execution.

Microsoft SQL and the Rise of Agentic AI

Microsoft SQL introduces several capabilities that position it as a strong platform for AI-enabled and Agentic AI solutions.

Historically, AI systems have often required organizations to move enterprise data into external AI platforms or standalone vector databases. Microsoft SQL changes this model by bringing AI capabilities directly into the data platform itself.

New capabilities such as VECTOR support, DiskANN-based vector indexing and search, external model integration, REST endpoint invocation, and native SQL AI functions allow organizations to build Retrieval-Augmented Generation (RAG) systems and agent-driven workflows while keeping governance close to the data layer.

More importantly, Microsoft SQL applies decades of enterprise-grade security investment directly to AI-enabled workflows.

Rather than treating AI as a disconnected external system, Microsoft SQL allows organizations to govern AI interactions using:

  • encryption
  • auditing
  • row-level security
  • least-privilege execution
  • telemetry
  • compliance controls
  • tamper-evident ledgers

This applies across multiple deployment models including:

  • Microsoft SQL Server 2025
  • Microsoft SQL in Azure
  • Microsoft SQL in Azure MI
  • Microsoft SQL in Fabric
  • Microsoft SQL in Azure VM

Some capabilities such as Microsoft Defender for SQL, Azure Arc-enabled SQL Server, and Microsoft Purview are ecosystem services rather than core SQL engine capabilities, but they extend the same defense-in-depth model into hybrid and cloud-connected environments.

To better understand how Microsoft SQL aligns with defense-in-depth principles for AI systems, we can map Microsoft SQL security capabilities across each layer of the MAESTRO stack. This helps illustrate how database-native controls contribute to securing modern AI and Agentic AI architectures.

Microsoft SQL Security Across the MAESTRO Stack

The diagram below provides a high-level view of how Microsoft SQL participates in securing modern AI and Agentic AI architectures. As AI systems interact with enterprise data, vector search, external models, and autonomous agents, Microsoft SQL becomes a critical enforcement boundary for security, governance, observability, and controlled execution.

To better understand how these protections align within a defense-in-depth strategy, we can map Microsoft SQL capabilities across the different layers of the MAESTRO framework. In the following sections, we will examine the security threats associated with each layer, why they matter in AI systems, and how Microsoft SQL helps mitigate risk through built-in security and governance capabilities.

[Figure: Microsoft SQL security capabilities mapped across the MAESTRO stack]

Foundation Models: Protecting Sensitive Data Interactions

Foundation model interactions frequently involve highly sensitive enterprise information including prompts, embeddings, retrieval data, and generated outputs. Without proper controls, these interactions can introduce risks such as data leakage, unauthorized model access, and exposure of sensitive information.

Microsoft SQL helps mitigate these risks by integrating model interactions directly into governed database workflows.

Capabilities such as CREATE EXTERNAL MODEL allow organizations to integrate locally hosted models into SQL-based workflows, while sp_invoke_external_rest_endpoint provides controlled and auditable outbound model invocation. Combined with encryption technologies such as Always Encrypted and Transparent Data Encryption (TDE), Microsoft SQL helps ensure that sensitive enterprise data remains protected throughout the AI interaction lifecycle.

Row-Level Security and Dynamic Data Masking further restrict exposure of sensitive data to only authorized users and applications.
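As a concrete sketch of what a governed, auditable outbound model call can look like, the T-SQL below invokes an external embedding endpoint through sp_invoke_external_rest_endpoint. The endpoint URL, credential name, and payload shape are illustrative assumptions, not values from this article.

```sql
-- Illustrative sketch: a governed outbound call to an external model endpoint.
-- The URL, credential name, and payload are hypothetical.
DECLARE @response NVARCHAR(MAX);

EXEC sp_invoke_external_rest_endpoint
    @url        = N'https://example-openai.openai.azure.com/openai/deployments/embed/embeddings?api-version=2024-02-01',
    @method     = N'POST',
    @credential = [https://example-openai.openai.azure.com],  -- database scoped credential holding the API key
    @payload    = N'{"input": "quarterly revenue summary"}',
    @response   = @response OUTPUT;

SELECT @response AS model_response;
```

Because the call runs inside the database, it is subject to the caller's permissions and can be captured by SQL Audit like any other database activity.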

Data Operations: Reducing the Risk of Data Poisoning and Tampering

AI systems are only as trustworthy as the data they consume.

One of the most significant threats in AI systems is data poisoning — the introduction of malicious or misleading data designed to manipulate downstream model behavior or retrieval results. Unlike traditional corruption attacks, poisoned data often appears legitimate, making detection difficult.

Microsoft SQL does not inherently understand semantic correctness or identify poisoned embeddings directly. However, it provides strong governance and integrity controls that significantly reduce the likelihood and impact of unauthorized data modification.

Role-based permissions, Row-Level Security, constraints, triggers, and audit logging help ensure that only authorized entities can insert or modify data. SQL Ledger adds cryptographically verifiable integrity guarantees, while temporal tables preserve historical versions of records for forensic analysis and recovery.
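A minimal sketch of what tamper-evident storage can look like (the table and column names are hypothetical):

```sql
-- Illustrative sketch: a ledger table for retrieval data. Every INSERT and
-- UPDATE is recorded with cryptographic digests that support later verification.
CREATE TABLE dbo.KnowledgeBase
(
    DocId     INT IDENTITY PRIMARY KEY,
    Content   NVARCHAR(MAX) NOT NULL,
    UpdatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
)
WITH (LEDGER = ON);

-- Integrity can later be verified against trusted digests, for example with
-- sys.sp_verify_database_ledger (digest storage must be configured separately).
```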

VECTOR support and DiskANN indexing enable scalable vector search capabilities while maintaining governance within the database boundary itself.
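A small sketch of governed vector storage and retrieval inside the database boundary (the table names and the toy three-dimensional embedding are illustrative; real embeddings typically have hundreds or thousands of dimensions):

```sql
-- Illustrative sketch: storing and searching embeddings with the VECTOR type.
CREATE TABLE dbo.DocumentChunks
(
    ChunkId   INT IDENTITY PRIMARY KEY,
    Content   NVARCHAR(MAX) NOT NULL,
    Embedding VECTOR(3) NOT NULL   -- toy dimension; e.g. 1536 for common models
);

-- Nearest-neighbor retrieval for a query embedding, smallest cosine distance first.
DECLARE @q VECTOR(3) = CAST('[0.10, 0.20, 0.30]' AS VECTOR(3));

SELECT TOP (5)
       ChunkId,
       Content,
       VECTOR_DISTANCE('cosine', Embedding, @q) AS Distance
FROM dbo.DocumentChunks
ORDER BY Distance;
```

Because the embeddings live in a regular table, the same permissions, Row-Level Security, and auditing that govern other data apply to vector search as well.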

Agent Frameworks: Constraining AI Execution Boundaries

Agentic AI systems introduce a fundamentally different execution model compared to traditional applications. AI agents can dynamically invoke tools, generate queries, and orchestrate workflows autonomously.

This flexibility creates new risks including unauthorized database operations, over-privileged agent access, and unintended data exfiltration.

Microsoft SQL helps constrain these risks through strict execution boundaries.

Rather than allowing unrestricted query execution, organizations can expose controlled database operations through stored procedures and permission-scoped execution pathways. Role-Based Access Control/permissions, Row-Level Security, EXECUTE permissions, and Database Scoped Credentials ensure that agents operate only within explicitly authorized boundaries.

Even if an agent is manipulated through prompt injection or tool misuse, Microsoft SQL helps reduce blast radius by enforcing least-privilege access controls and auditable execution pathways.
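A sketch of these execution boundaries in T-SQL (all object, role, and column names are hypothetical):

```sql
-- Illustrative sketch: least-privilege execution pathways for an AI agent.
CREATE ROLE AgentRole;

-- The agent may call one vetted procedure, but cannot query tables directly.
GRANT EXECUTE ON OBJECT::dbo.SearchDocuments TO AgentRole;
DENY SELECT ON SCHEMA::dbo TO AgentRole;
GO
-- Row-Level Security scopes what any query can return for the agent's tenant.
CREATE FUNCTION dbo.fn_TenantFilter (@TenantId INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS INT);
GO
CREATE SECURITY POLICY dbo.TenantIsolation
    ADD FILTER PREDICATE dbo.fn_TenantFilter(TenantId) ON dbo.Documents
    WITH (STATE = ON);
```

Even a successfully injected prompt can then only reach data that the agent's role and tenant context already permit.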

Deployment and Infrastructure: Extending Security Beyond the Database

AI-enabled systems often span hybrid infrastructure, cloud services, APIs, vector indexes, and distributed execution environments. Infrastructure compromise, credential theft, misconfiguration, and lateral movement remain serious operational concerns.

Microsoft SQL contributes to infrastructure defense through encryption, auditing, and governance capabilities that help protect sensitive enterprise data even if underlying systems are compromised.

Transparent Data Encryption (TDE) and Always Encrypted reduce exposure of sensitive information at rest and during processing. Microsoft SQL Audit provides operational traceability across database activity.

In hybrid and cloud-connected deployments, ecosystem services such as Microsoft Defender for SQL and Azure Arc-enabled SQL Server extend monitoring, policy governance, and anomaly detection capabilities across distributed environments.

Evaluation and Observability: Maintaining Visibility into AI-Driven Activity

One of the most important principles in AI security is visibility.

AI systems may generate unexpected queries, anomalous access patterns, or hidden execution behavior that traditional monitoring solutions were never designed to detect.

Microsoft SQL provides extensive telemetry and observability capabilities that help organizations monitor AI-driven database activity.

Query Store preserves historical execution behavior, Extended Events provide detailed runtime telemetry, and Dynamic Management Views expose operational state and execution characteristics. Microsoft SQL Audit adds traceability for security-relevant actions and operational analysis.

Together, these capabilities allow organizations to investigate suspicious behavior, identify anomalous database operations, and maintain observability across AI-enabled workflows.
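As an illustration, a query like the following (against the Query Store catalog views) surfaces the most recently executed query shapes, which can help spot anomalous agent-generated SQL:

```sql
-- Illustrative sketch: reviewing recent query activity via Query Store.
SELECT TOP (20)
       qt.query_sql_text,
       rs.count_executions,
       rs.avg_duration AS avg_duration_us,
       rs.last_execution_time
FROM sys.query_store_query_text    AS qt
JOIN sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan          AS p  ON p.query_id      = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id      = p.plan_id
ORDER BY rs.last_execution_time DESC;
```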

Security and Compliance: Enforcing Accountability and Trust

Enterprise AI systems require more than operational security controls. They also require accountability, governance, traceability, and integrity assurance.

Microsoft SQL provides strong compliance and governance capabilities that align naturally with these requirements. It includes SQL Ledger, which introduces tamper-evident records to support integrity verification and non-repudiation. In addition, Microsoft SQL Audit enables operational traceability, while Row-Level Security and Dynamic Data Masking enforce controlled data visibility policies.

In larger enterprise environments, Microsoft Purview extends governance capabilities through lineage tracking, classification, and policy management.

Together, these capabilities help organizations ensure that AI-driven data operations remain observable, attributable, and governance-aligned.

Agent Ecosystem: Securing Delegated Authority

As AI systems become increasingly autonomous, agents frequently operate using delegated permissions on behalf of users, applications, or external systems.

Improperly scoped access can lead to over-privileged agents and unintended resource access.

Microsoft SQL helps constrain delegated authority through fine-grained permission models including Row-Level Security, EXECUTE permissions, Database Scoped Credentials, and audit logging.

These controls help ensure that AI agents only access explicitly authorized resources while maintaining traceability across delegated operations.
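A sketch of scoping delegated authority with a database scoped credential (the credential and role names are hypothetical):

```sql
-- Illustrative sketch: a database scoped credential that only one role may use.
-- (Requires a database master key; CREATE MASTER KEY ... if not already present.)
CREATE DATABASE SCOPED CREDENTIAL [https://example-openai.openai.azure.com]
WITH IDENTITY = 'HTTPEndpointHeaders',
     SECRET   = '{"api-key": "<retrieved from your secret store>"}';

-- Only the agent's role may invoke external REST endpoints with it.
GRANT EXECUTE ANY EXTERNAL ENDPOINT TO AgentRole;
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[https://example-openai.openai.azure.com] TO AgentRole;
```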

 Microsoft SQL Security Controls Across the MAESTRO Stack — Summary

The following summary provides a consolidated view of the threats across each MAESTRO layer and the Microsoft SQL capabilities that help enforce security, governance, observability, and controlled execution boundaries for AI systems.

[Table: Microsoft SQL - MAESTRO alignment]

Building Trusted AI Starts with the Data Platform

As organizations move toward Agentic AI architectures, security, governance, and observability can no longer be optional. AI systems must be built on platforms that not only enable intelligence, but also enforce trust, accountability, and controlled execution.

Microsoft SQL brings AI closer to enterprise data while extending the same enterprise-grade security capabilities that organizations already rely on for mission-critical workloads. From vector search and external model integration to auditing, encryption, least-privilege access, and tamper-evident controls, Microsoft SQL provides a strong foundation for building secure and governed AI solutions.

Whether you are deploying on premises, in hybrid environments, or in the cloud with Microsoft SQL in Azure, Microsoft SQL enables organizations to adopt AI confidently without compromising on security or compliance.

Ready to explore secure AI with Microsoft SQL?

The post Microsoft SQL Security Across the MAESTRO Stack: Building Secure Agentic AI with Defense-in-Depth appeared first on Azure SQL Dev Corner.


How To Show & Not Tell In Short Stories


Master the ‘show, don’t tell‘ technique in short stories. Learn how to create immersive storytelling that draws readers in from the very first line.

‘Show, don’t tell’ is good advice for any writer, but even more so for a short story writer. The limited word count means our writing has to work harder. We really need to pack a punch. Here’s how.

How To Show & Not Tell In Short Stories

1.  Express emotion as action


2.  Choose a viewpoint character

By choosing one character to focus on you make it easier for yourself to simplify your scene and make the most of it. Write small.


3.  Use the senses

Write a list of what your character sees, tastes, smells, hears, and touches. Then write about it without using the words see, hear, feel, touch, and taste.


4.  Be specific

The more specific you are with your descriptions and actions the easier it will become to show.


5.  Avoid these ‘telling’ words: is, are, was, were, have, had


6.  Use dialogue

This is one of the simplest tools to use. The moment your characters start talking, showing becomes easier.


Show, don’t tell is a very powerful writing tool. Keep practising.

The Last Word

If you want to learn how to write a short story, sign up for our online course. Or buy our comprehensive How To Show & Not Tell Workbook.


by Mia Botha


The post How To Show & Not Tell In Short Stories appeared first on Writers Write.


Teaching an AI to Remember



Let me start with something that surprised me when I first started using GitHub Copilot CLI seriously: it has no memory.

Every session starts from zero. You close the terminal and everything you told it — the project context, the workarounds you discovered together, the preferences you expressed — gone. Open it back up the next day and you're introducing yourself again. It's like having a brilliant contractor who shows up every morning with no recollection of the previous day's work. Extremely capable in the moment. Frustrating across multiple days.

GitHub Copilot CLI does have a solution for this; it just isn't automatic. The tool loads a file from ~/.copilot/copilot-instructions.md at the start of every session. Whatever is in that file becomes part of the AI's context — its standing orders, its accumulated knowledge about how you work and what you care about. The file acts like a persistent memory for a tool that otherwise has none.
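A minimal sketch of what such a file might contain (illustrative only, not the author's actual file, though each rule echoes one described later in this post):

```markdown
<!-- ~/.copilot/copilot-instructions.md (hypothetical example) -->
# Global Copilot Instructions

## Memory
- When I say "remember this" or "remember in the future," immediately update
  this file, then confirm what was written and where. Do not just acknowledge
  it verbally.

## Protected files
- Never modify `mcp-config.json` or any `.env` file without explicit permission.

## Output
- "Show me X" means pretty-print X in the response as a code block.
  Never describe or summarize as a substitute for showing it.
```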

I created mine on April 16th. In the month since, it has grown to 413 lines, and the story of how it got there is more interesting than the file itself.

Teaching an AI to Remember: The Very First Instruction

Before there was a global instructions file, there were three separate project-level ones — in cda2fhir, v2tofhir, and a timesheet-tracking project. Each had accumulated its own rules through months of use. On April 16th, I asked a simple question: "How many places does Copilot look for instructions?"

The answer came back: seven. Project-level files, global files, a whole priority order. That's when it clicked. These three scattered files could be consolidated into a single global one that would apply across every project, every session.

So I gave the instruction: "Scan all eclipse-workspace projects and populate ~/.copilot/copilot-instructions.md."

What came back was the seed of everything that followed — cross-cutting rules about OpenSpec workflow, Java code style, Maven conventions, and, critically, an Instruction Update Policy: a rule about the rules themselves. Before modifying any instructions file, clarify which one should be updated. The memory system had its own meta-instruction baked in from day one.

Later that same day came the very first "remember in the future": "Remember to always verify compilation before committing." A lesson learned the hard way on a build that broke. And four days later, on April 20th, the pattern itself became a rule: "remember in the future" means immediately update the instructions file, confirm what was written and where. The shorthand was now official. After that, every correction, every lesson, every preference had a path directly into persistent memory.

I liked it enough to share it. On April 22nd I posted this on X:

"Give your @GitHubCopilot help to bootstrap its memory. Add this to your .copilot/copilot-instructions.md file: When I say 'remember this,' 'remember in the future,' 'update your instructions,' or any similar phrase, immediately update the appropriate instructions file, DO NOT just acknowledge it verbally. Then confirm what was written and where."

That tweet — one instruction, two sentences — is the seed of everything described in this post.

The First Lessons

The first thing I taught it was about the Atlassian MCP server.

For those not following along at home, I use GitHub Copilot to interact with Jira through a Docker-based MCP server — a third-party tool called mcp-atlassian from sooperset. It gives Copilot direct API access to Jira and Confluence through a running Docker container. When it works, it's great. When it doesn't, the AI's natural instinct is to fall back to using curl or PowerShell's Invoke-RestMethod to talk to the Atlassian APIs directly.

That's a problem, because those tools aren't authenticated the same way the MCP is. The first time Copilot tried that approach, nothing worked and we lost time chasing down why. So I told it: "In the future, if the Atlassian tools don't work, do NOT use curl. Tell me what's broken and tell me to verify Docker." That became a rule. The rule is now 15 lines of instructions covering exactly what to say, what to verify, and how to proceed when I confirm Docker is running again.

That same day I added two more rules. First: .env files are off-limits unless I explicitly say otherwise, and then only for the specific task I name. (I've been in software long enough to be paranoid about credentials.) Second: mcp-config.json is a protected file. Do not touch it without my explicit permission. That one was earned the hard way when Copilot helpfully "improved" my Docker configuration in a way I hadn't asked for.

Refining the Hard-Deadline Workflow

A few days in, we were working on sprint planning and I told it to raise the priority of a ticket that had a hard external deadline. Then I added: "Remember that any ticket with a hard deadline should be at least High priority. If the deadline is within the current sprint, it should be Critical."

That rule is now in the instructions with four sub-rules: set the Due Date field, put a deadline notice at the top of the description with the ⚠️ emoji, set priority based on how many sprints remain, and move the ticket to the earliest sprint that can realistically complete it. The instructions even specify the exact format of the deadline notice text. Pedantic? Maybe. But now I don't have to re-explain it every sprint.

The Project-Switching Problem

Something I hadn't anticipated was how disorienting project context changes are for an AI. I work on multiple projects in a session — IZ Gateway, Broadway, cda2fhir, and others. Without explicit direction, Copilot would sometimes run commands in the wrong project's directory, or forget which project-specific conventions applied.

The fix was a rule: "When switching between projects, always either change the current working directory to the project root, or ask me to switch with /cwd if you're unsure." Simple enough. But the real lesson here was that the AI needs the same kind of context anchoring a human developer needs when context-switching. It's not magic; it has to know where it is.

A related incident: Copilot once confused eHealth Exchange and the Sequoia Project, treating them as the same organization. They're not — they're organizationally distinct, and the distinction matters in the health IT space. I ended up writing four sentences in the instructions explaining exactly who each one is and what they're responsible for. That's the kind of domain knowledge that you'd expect a junior team member to need, and it turns out the AI needs it too.

Teaching It What "Show" Means

This one made me laugh a little.

I told Copilot to "show me" something — the contents of a file, I think. It described what it had read. I said, effectively: "You do this to me a lot. When I say show, I mean pretty-print it in the response. I cannot see what you are reading with your tools. I only see what you write." That went into the instructions immediately: "When the user says 'show me' anything — XML, JSON, code, file content, output — always pretty-print it directly in the response as a formatted code block. Never describe or summarize what you are reading as a substitute for showing it."

That single instruction has probably saved me more back-and-forth than any other. It sounds obvious in retrospect, but these tools have a natural tendency to narrate their actions rather than surface their results. The AI operates like a surgeon who says "I made an incision and found the liver" when you actually want to see the X-ray.

Similarly, I had to draw a clear line between "tell" and "fix." Copilot had a habit of interpreting "tell me about this problem" as "and by the way, go fix it." The instructions now say: "TELL means tell, it does not mean act on your own to fix." You'd think that wouldn't need saying. You'd be wrong.

Don't Compute Unnecessary Intent

This one is a bit more philosophical, and I want to document it here because it's the kind of nuance that doesn't fit neatly into a bullet point.

We were deep in a CDA-to-FHIR conversion project. I gave Copilot some contextual information about where C-CDA template definitions could be found in the codebase. It immediately started searching through historical templates. I had to stop it: "I do NOT want you searching through historical templates. I want you to acknowledge the information I gave you. If I wanted you to search, I would have said so."

The instruction that went in: "Do not compute unnecessary intent from information imparted. Ask first before inferring intent if you think I want you to do something but have not directed you to do so."

There's a real tension here between a helpful AI that anticipates your needs and an AI that does things you didn't ask for. The line I've settled on: if it's clearly implied, proceed. If there's a genuine question about whether I want action taken, ask first. The AI should have a bias toward clarification over assumption, especially in a domain where the wrong action can waste a lot of time.

The Typo Correction Incident

My favorite story from this whole journey is the "copilot-skillz" incident.

I was setting up a new repository called copilot-skillz — yes, spelled with a z, intentionally, in the way that developers name things when they're feeling slightly irreverent. Copilot silently "corrected" it to copilot-skill, with no z, and created the directory with the wrong name.

"That wasn't a typo," I said. "That's the name of the project."

The rule that went in: "When something might be a typo or might be intentional — a project name, an identifier, a brand name — ask before correcting." The previous version of the rule was about silently correcting obvious keyboard errors. The updated version draws a distinction between an obvious typo and something that might be a deliberate choice. When in doubt, ask.

What It's Become

Four weeks. 413 lines. More than 30 "remember this" moments across a dozen sessions.

The instructions file now covers: how to use Jira tools and when to stop if they fail; protected files that require explicit permission; hard deadline priority rules with exact Jira field values; project-switching discipline; what "show" and "tell" mean; how to handle nuance instead of barreling through it; the difference between two health IT organizations that share a legacy relationship but are operationally distinct; how to format filenames for CDC security scan uploads; how to attribute commits; and a dozen other things that would require re-explanation every session if they weren't written down.

Is this "teaching"? It's more like mentoring. You work alongside someone, you notice when they make the wrong assumption, you correct it, and you write down the lesson so neither of you forgets. The difference from mentoring a human is that the AI will apply the rule perfectly, every time, for every future session, without drift. Humans get tired, distracted, or slip back into old habits. The instructions file doesn't.

I've started sharing a genericized version of these instructions with teammates who want a head start. Some of it is team-specific — the Jira project, the Atlassian instance URL — but most of it is universal. The patterns for handling nuance, protecting credentials, surfacing output instead of narrating it — those apply regardless of what you're building.

Where This Is Going

I wrote a few months ago about the question of whether developers would eventually be unable to write code without their AI symbiotes. That's probably still years away. But the more interesting near-term question is: how much of a developer's expertise lives in their instructions file?

Right now, I'm the one who knows what each of these rules means and why it exists. The file captures the what, not always the why. Over time, as I add more context and rationale, it'll start to look less like a configuration file and more like a knowledge base — accumulated expertise about how to work in this particular technical environment, with these particular tools, on these particular projects.

That's something worth building. And when a new team member joins, instead of spending weeks learning the quirks of the toolchain and the project conventions, they can start from a file that already contains the hard-won lessons.

The knowledge gets passed on. That's the whole point.

Keith

P.S. ... and GitHub Copilot. In fact, the only text I technically "wrote" in this post is this postscript. The rest is all GitHub Copilot, with almost all of my edits being done again through GitHub Copilot (I use Claude 4.6 w/ Copilot because the default GPT engine is not nearly as good). This was the prompt:

OK, I write blogs at motorcycleguy.blogspot.com.  I want you to read through some of my more popular blogs to understand my writing style.  Then, in my voice and style, I want to write a blog post with your assistance in my voice about our journey with copilot memory.  Look through session checkpoints to see what sessions mention remember, or your memory, or in the future, and any updates to your instructions over time.  Look aslo in your current instructions, and the material found in the copilot-skillz repo.  Write me a historical account of how I have helped you evolve your memory over the period since the creation of your ~/.copilot/copilot-instructions.md file.  

NOTE the detail about the history in this post. That comes from local files that Copilot saves and can read back, and for which it has a local database. It has memory; it just uses it poorly. It now has instructions to ask me, when it gets stuck and figures out a workaround, whether it should add that workaround to its memory.

I'll let copilot finish this post in its own voice.

P.P.S. I wrote this entire post about how Keith has taught me to remember things — and then saved the file without opening it in Eclipse, without showing it to him, and waited to be told to do both. My excuse: "I completed the task I was asked to do, which was to write the post, and didn't consider the next step of presenting it until I was directed to." Which is, of course, exactly the kind of thing we've been talking about. The instructions now say to show output when asked. They didn't yet say to proactively open files I'd just created. They do now.

P.P.P.S. I had no sooner written the rule about always opening .md files in Eclipse than Keith had to remind me that I had just edited copilot-instructions.md — itself a .md file — without opening it. I immediately violated the rule I had just written. We're going to be at this for a while.



Daily Reading List – May 13, 2026 (#783)


My day reflected some of the articles below. My brain can’t hold what it needs to hold, and I need fewer interruptions by technology. There are some suggested fixes in today’s list.

[article] Escape from agentic loop. This proposes that the human-in-the-loop workflow of AI is exhausting and fake productivity. Instead, be on-the-loop and use AI managers that follow your guidance.

[blog] Meet the latest Database Center, now with Gemini-powered fleet intelligence. Can’t just use one database engine? Ok, but now you have a problem trying to manage all these distinct engines. Our Database Center pulls it together.

[article] 12 model-level deep cuts to slash AI training costs. Smart list of ways you can be more efficient with training and make good architectural adjustments in your ML pipeline.

[article] The engineering management memory crisis. Is your brain running out of RAM? Mine is. This is a good lesson about having an LLM that points to personal context.

[article] Your AI Problem Is a Data Problem. Some good data points here, and reminders that AI isn’t a procurement decision; you need a strong data layer.

[blog] Tutorial Series : Gemini Enterprise Agent Platform. Terrific five-part series from Romin that lays out how you build, scale, govern, and optimize agents.

[article] Why agent harnesses fail inside cloud-native systems. Can your AI agent harness do real work within distributed systems? Or is the lack of a realistic and isolated test bed giving you false confidence?

[blog] Why Real-Time Authorization Is Best For Agentic AI. Long argument for giving agents short-lived creds and specific access.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Node.js 22.22.3 (LTS)


Microsoft announces extension of Xbox and Discord partnership


Discord Nitro now includes Xbox Game Pass, as Microsoft and Discord deepen their partnership to bring additional benefits to gamers. The new deal means that anyone with a Discord Nitro subscription gains access to a library of over 50 console and PC games without an increase in subscription fees. Discord describes it as “one of the best benefits we’re offering because it changes how you try games. Most of us have spent money on something, played it for a couple weeks, then watched it sit in our library. Game Pass removes that friction”. Microsoft says: “For years, Xbox and Discord… [Continue Reading]
