Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

AI Agent & Copilot Podcast: Inside Microsoft’s Vision for AI Agents and the Future of Work


A major barrier to AI adoption isn’t willingness but governance, as leaders seek secure, observable, and controllable systems to confidently deploy AI across enterprise environments.

The post AI Agent & Copilot Podcast: Inside Microsoft’s Vision for AI Agents and the Future of Work appeared first on Cloud Wars.

Read the whole story
alvinashcraft
13 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft wants lawyers to trust its new AI agent in Word documents


Microsoft is launching a new AI agent inside Word that's specifically designed for legal teams. Legal Agent works with document edits, negotiation history, and complex documents to help legal teams with tasks like reviewing contracts.

"Instead of relying on general AI models to interpret commands, the agent follows structured workflows shaped by real legal practice, managing clearly defined, repeatable tasks like reviewing contracts clause by clause against a playbook," explains Sumit Chauhan, corporate vice president of Microsoft's Office Product Group.

The Legal Agent can work with existing documents that have tracked changes, and analyz …

Read the full story at The Verge.


From capability to responsibility: Securing our global digital ecosystem with next‑generation AI


Cybersecurity is at a turning point. Advanced AI models are dramatically accelerating vulnerability discovery and creating conditions ripe for exploitation, underscored by the announcement of Claude Mythos Preview. This marks a shift, and whether this technology will favor defenders or attackers will depend on the choices we make now. 

With the right safeguards, these capabilities can help trusted defenders identify and fix vulnerabilities across critical systems in hospitals, power grids, water, and telecommunications. Released irresponsibly or not properly secured, however, those same capabilities could be abused by malicious actors, threatening the foundations of our digital ecosystem. 

Much of the discussion has rightly focused on risks. As advanced AI models speed up the discovery of vulnerabilities, the way we fix them must speed up too. That means stronger pre-deployment risk assessments and close collaboration between governments, frontier AI developers, software providers, and the broader ecosystem to ensure these tools reduce, rather than increase, cyber risk. This is particularly important as AI systems themselves have become high‑value targets, requiring stronger protection of models, systems, data, and underlying infrastructure. 

This is ultimately an international challenge. Neither software supply chains nor threat actors stop at borders. Nor can our response. Meeting this moment will require shared approaches across countries, sectors, and systems—rooted in trust, shared standards, resilience, and responsible use. 

This moment is also an opportunity. Security has been and remains the top priority at Microsoft. Over the last two years, through our Secure Future Initiative, we have strengthened our security foundations for this age of AI, in part by using AI to accelerate vulnerability discovery and remediation. We have also invested in fundamental AI-for-security research, including the development of open-source industry benchmarks that can be used to evaluate whether models are ready for real-world security work. We are accelerating that work through deeper public-private collaboration and in partnership with AI developers, including Anthropic’s Project Glasswing and OpenAI’s Trusted Access for Cyber program. 

Securing our digital ecosystem with next‑generation AI is within reach, but it is not automatic.

Building secure foundations for the era of frontier AI  

Ensuring advanced AI technologies are used to strengthen cybersecurity requires deliberate and urgent action. We are sharing the following recommendations as practical steps governments, industry, and the broader ecosystem can take to ensure these tools, often referred to as “frontier AI,” reinforce the security foundations on which digital societies depend. We hope to continue partnering with model providers, industry, and government to improve security outcomes for all.

1. Reinforce core cybersecurity practices  

Advanced AI can strengthen cybersecurity only when strong, consistent cyber hygiene is already in place. As frontier AI accelerates vulnerability discovery and response, core practices such as rapid patching, access control, and system resilience become more critical, not less. 

Security gains in the frontier AI era depend on close coordination between technology providers advancing new capabilities and the organizations responsible for operating, updating, and securing real‑world systems. Without this coordination, advanced AI cannot deliver durable improvements in security. No organization can solve these cybersecurity problems alone.

That is why sustained investment in what we know works remains essential: secure‑by‑design product lifecycles, Zero Trust architectures, multi‑factor authentication, least‑privileged access, and ongoing security training; broad adoption and harmonization of established cybersecurity frameworks to ensure consistent resilience across AI‑enabled systems; and trusted cloud environments that enable these practices at scale, supporting secure data handling, continuous patching, and the secure deployment of AI‑enabled tools for defenders.

2. Release advanced capabilities responsibly

As frontier AI systems gain reasoning, coding, and agentic capabilities, some of the most serious security risks arise before deployment, including realistic misuse involving multi‑step reasoning, tool use, and reconnaissance. Technical safety benchmarks remain important, but they are insufficient without rigorous, real‑world testing.  

As a result, governments are increasingly establishing pre‑deployment evaluations that combine technical testing with threat modeling. These assessments are most effective when frontier developers work closely with organizations that track national‑security risks. Investing in secure evaluation environments and modern testing methods can help governments keep pace as capabilities advance.  

Responsible release practices, including phased and controlled access, are a critical extension of this approach. Our work with Anthropic in Project Glasswing offers one practical model, enabling trusted defenders to evaluate advanced capabilities in constrained settings prior to broader release. Similarly, OpenAI and Microsoft work closely through the Trusted Access for Cyber program, and we already support OpenAI’s use of scoped, early deployments for safety and security testing.

Responsibility does not end at release. Organizations that deploy frontier models are often best positioned to detect emerging misuse and should monitor, mitigate, and share threat information. Microsoft is working with peers through the Frontier Model Forum to advance best practices for evaluating and managing cyber risk and enable information sharing. Governments should encourage continued industry collaboration to restrict access for identified threat actors and counter adversarial or malicious use of advanced AI. 

3. Modernize vulnerability management

AI is changing both the speed of vulnerability discovery and what constitutes meaningful security risk. Faster discovery only improves security if triage, validation, and remediation can keep up. 

As AI accelerates discovery, vulnerability management must shift from tracking raw volume to reducing real‑world risk. That means prioritizing vulnerabilities that are genuinely exploitable, assigning clear responsibility for triage and remediation, and using phased, risk‑based disclosure when private coordination improves safety. Above all, systems must be designed around validation and realistic remediation capacity, not the assumption that more findings automatically lead to better security. 
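The shift from raw volume to real‑world risk described above can be sketched as a toy triage scorer. Everything here — the `Finding` fields, the scoring weights, and the CVE IDs — is a hypothetical illustration for the sake of the idea, not Microsoft's or anyone's actual prioritization methodology.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A reported vulnerability finding (illustrative fields only)."""
    cve_id: str
    exploit_available: bool   # a public exploit is known to exist
    internet_facing: bool     # affected asset is reachable from the internet
    asset_criticality: int    # 1 (low) .. 5 (critical system)

def risk_score(f: Finding) -> int:
    """Rank findings by genuine exploitability, not by raw count."""
    score = f.asset_criticality
    if f.exploit_available:
        score += 5   # weights are arbitrary placeholders
    if f.internet_facing:
        score += 3
    return score

findings = [
    Finding("CVE-0000-0001", exploit_available=False, internet_facing=False, asset_criticality=2),
    Finding("CVE-0000-0002", exploit_available=True,  internet_facing=True,  asset_criticality=5),
    Finding("CVE-0000-0003", exploit_available=True,  internet_facing=False, asset_criticality=3),
]

# Remediate the genuinely exploitable, high-impact findings first.
queue = sorted(findings, key=risk_score, reverse=True)
print([f.cve_id for f in queue])
```

The point of the sketch is the ordering step: triage capacity is spent on the exploitable, internet‑facing, critical‑asset finding first, regardless of how many low‑risk findings an AI scanner surfaces.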

Developers of frontier AI models should embed vulnerability coordination and disclosure directly into responsible‑release frameworks, and work with governments and industry to ensure findings are routed to the right owners, acted on early, and supported by clear coordination pathways.

4. Fix faster: Strengthen and accelerate response and remediation

As AI accelerates vulnerability discovery, remediation must keep pace. Initiatives such as DARPA’s AI Cyber Challenge show how AI can help both find and fix flaws in open‑source software. Hardening defenses requires investment not just in detection tools but in the people, processes, and infrastructure responsible for fixing vulnerabilities, especially in critical sectors. 

Much of the software underpinning critical infrastructure relies on open‑source components maintained by small teams or volunteers with limited security capacity. A surge in AI‑enabled discovery risks overwhelming existing triage and disclosure processes. Efforts such as the GitHub Secure Open Source Fund, alongside investments by Microsoft and others through the Linux Foundation, Alpha‑Omega, and OpenSSF, are helping maintainers adapt in ways that are practical and aligned with existing workflows.

Governments should treat remediation capacity as a core resilience priority, including sustained investment in and support for maintainers, surge capacity during large discovery events, and modernized disclosure pathways—recognizing that effective remediation still largely depends on human judgment, coordination, and time.  

5. Advance AI security internationally

AI security is essential to deploy AI at scale. Because AI systems, supply chains, and the risks they introduce operate across borders, national approaches alone will not be sufficient. 

Governments and industry should work together to build interoperable international foundations for AI security, including risk evaluation, coordinated vulnerability disclosure, and information sharing. Priorities should include strengthening the defensive use of AI, preventing misuse through shared norms and safeguards, and securing AI systems and the AI technology stack.

Global participation is critical. Countries and organizations with limited cybersecurity resources or legacy infrastructure are often the most exposed. International cooperation should prioritize capacity building, ensuring that the security benefits of AI are realized broadly and equitably.

AI security is not just a safeguard; it is an enabler for innovation and growth. By acting collectively and moving quickly, governments and industry can strengthen global digital resilience and unlock the trusted adoption of AI across economies, critical infrastructure, and public services.

Meeting the moment: Use frontier AI capabilities to build trust and confidence  

Meeting this moment is ultimately about trust: not in any single technology or provider, but in our collective ability to introduce advanced AI responsibly.  

Used deliberately and built on strong security foundations, these capabilities can strengthen cybersecurity and reinforce confidence in the systems society depends on. The choice is not between innovation and security but whether we enable them to reinforce one another. 

That outcome is within reach. With governments, industry, and infrastructure operators aligned, advanced AI can be deployed in ways that match real‑world defensive capacity and support trusted, lawful action. Done right and working together, frontier AI can help protect the digital infrastructure that underpins modern life and build lasting confidence in its resilience. 

The post From capability to responsibility: Securing our global digital ecosystem with next‑generation AI appeared first on Microsoft On the Issues.


Dispatches from O'Reilly: Fast Paths and Slow Paths

Selective control in autonomous AI systems: Why governing every decision breaks autonomy—and how runtime control actually works at scale

Agentic Data Science Pair Programming With marimo pair


How do you add agent skills to your data science workflow? How can a coding agent assist with data wrangling and research? This week on the show, Trevor Manz from marimo joins us to discuss marimo pair.

Trevor is a founding engineer at marimo, where he’s been working on integrating LLM tools with marimo. We discuss the balancing act of building a skill and determining how to give an agent access to all the variables in a notebook. He shares how they built a specialized reactive REPL that eliminates hidden state and allows the agent to continue constructing a reproducible Python program.
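The "reactive REPL that eliminates hidden state" idea can be sketched as a tiny dependency graph: each cell declares what it defines and what it reads, and editing a cell re-runs every downstream cell, so nothing can observe stale state. This is a minimal illustration of the general reactive-notebook concept, not marimo's actual implementation or API.

```python
class ReactiveGraph:
    """Toy reactive execution: re-run dependents when a cell changes."""

    def __init__(self):
        self.cells = {}  # name -> (defines, reads, fn)
        self.env = {}    # shared variable namespace

    def cell(self, name, defines, reads, fn):
        """Register (or edit) a cell, then run it and its dependents."""
        self.cells[name] = (defines, reads, fn)
        self._run(name)

    def _run(self, name):
        defines, reads, fn = self.cells[name]
        # Run the cell with exactly the variables it declared it reads.
        self.env.update(fn({k: self.env[k] for k in reads}))
        # Re-run any other cell that reads a variable this cell defines,
        # so downstream results can never go stale ("hidden state").
        for other, (_, r, _) in self.cells.items():
            if other != name and set(r) & set(defines):
                self._run(other)

g = ReactiveGraph()
g.cell("load", defines=["x"], reads=[], fn=lambda env: {"x": 10})
g.cell("double", defines=["y"], reads=["x"], fn=lambda env: {"y": env["x"] * 2})

# Editing the upstream cell automatically refreshes the downstream result.
g.cell("load", defines=["x"], reads=[], fn=lambda env: {"x": 21})
print(g.env["y"])  # 42
```

The design point is that re-execution is driven by the declared variable dependencies rather than by cell order, which is what lets a reactive notebook keep a reproducible program as an agent edits it.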

We dig into installing and getting started with marimo pair. Trevor also covers several of the tasks an agent can tackle in a data science workflow.

Video Course Spotlight: Getting Started With marimo Notebooks

Discover how marimo notebooks simplify coding with reactive updates, UI elements, and sandboxing for safe, sharable notebooks.

Topics:

  • 00:00:00 – Introduction
  • 00:02:26 – Trevor’s role at marimo
  • 00:03:08 – Current AI tools in marimo
  • 00:06:26 – Describing marimo notebooks
  • 00:10:11 – What is marimo pair?
  • 00:18:49 – Building an agent skill
  • 00:27:34 – Setup & installation
  • 00:31:16 – Video Course Spotlight
  • 00:32:42 – Examples of EDA and data wrangling
  • 00:45:46 – Experimenting inside of a notebook
  • 00:50:40 – Managing context
  • 00:53:25 – Accessing additional libraries
  • 00:57:16 – Recent tools and updates from the marimo community
  • 00:59:31 – What are you excited about in the world of Python?
  • 01:01:10 – What do you want to learn next?
  • 01:02:26 – How can people follow your work online?
  • 01:03:13 – Thanks and goodbye

Show Links:

Level up your Python skills with our expert-led courses:

Support the podcast & join our community of Pythonistas
Download audio: https://dts.podtrac.com/redirect.mp3/files.realpython.com/podcasts/RPP_E293_02_Trevor_marimo-pair.7cd045a838d8.mp3

OpenAI’s Big Reset + A.I. in the Doctor’s Office + Talkie, a pre-1930s LLM

Will the rising tide of A.I. adoption lift all boats?