Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Microsoft Outlook and 365 Hit by Widespread Outages, Users Report Login and Email Failures

1 Share

Microsoft is investigating issues with Outlook and Microsoft 365 after users reported login failures and missing emails, alongside known bugs in classic Outlook.

The post Microsoft Outlook and 365 Hit by Widespread Outages, Users Report Login and Email Failures appeared first on TechRepublic.

Read the whole story
alvinashcraft
36 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Android 17 Leaks Reveal Major Redesign, AI Features, and Privacy Upgrades


Android 17 beta is here. Here’s what is confirmed so far, what leaks suggest, and which rumored features may arrive later in 2026.

The post Android 17 Leaks Reveal Major Redesign, AI Features, and Privacy Upgrades appeared first on TechRepublic.


Microsoft teases image support in Notepad for Windows 11 ahead of roll out


Windows Latest previously reported exclusively that Microsoft is internally testing image support in Notepad for Windows 11. A month later, Notepad’s image integration is being quietly teased in an email sent to Windows Insiders ahead of the rollout, though it’s unclear when the update will roll out to everyone.

Notepad image support teased

In an email spotted by Windows Latest, there’s a screenshot of an unreleased Notepad version with a toggle to insert images.

We don’t know exactly how the feature works, but you should be able to insert multiple images in Notepad, similar to how WordPad handled images.

Notepad image insert

Microsoft sources have already told Windows Latest that Notepad’s image support is real and has been in the works for the past few months. Some of you might argue that Notepad is a text editor and it doesn’t need image support, which is a fair point. However, it’s part of the company’s efforts to fill the gap left by WordPad.

Windows Latest has learned that, like the rest of the formatting options, Notepad’s image support does not use many resources, and its performance impact is barely noticeable in most cases.

Microsoft is bridging the gap between Notepad and WordPad at the cost of Notepad’s simplicity

Microsoft has always maintained multiple text editors on Windows, including MS Word, WordPad, and Notepad.

While Microsoft Word is flagship, paid software, Notepad has always been a simple editor that only lets you type plain text. This changed after Microsoft retired WordPad, as the company decided to bring WordPad-like text formatting to Notepad.

Notepad is no longer a simple text editor, and it keeps getting new features, including full-fledged support for Markdown.

Insert a table in Notepad

Microsoft argues that Markdown in Notepad is lightweight, and it lets you apply text formatting such as italics, underline, bold, and links, and even create tables.

Notepad also supports other formatting syntax, including strikethrough and nested lists, so you can make cleaner notes without switching to another app. You can use the toolbar, keyboard shortcuts, or just type the Markdown symbols directly.

Finally, tables are more widely available after a recent update. You can insert a table from the toolbar, pick the size using a small grid, then type inside the cells like normal text.

Because it’s Markdown-style, the table is still stored as plain text with separators, which keeps it lightweight and easy to edit even in a basic .txt file.
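As a sketch (standard Markdown syntax, which may differ slightly from Notepad’s exact output), this is roughly what those formatting marks and a table look like in the underlying .txt file:

```
**Bold**, *italic*, ~~strikethrough~~, and a nested list:
- Groceries
  - Milk
  - Bread

| Task       | Status |
| ---------- | ------ |
| Draft note | Done   |
| Add image  | Todo   |
```

Any plain-text editor can still open and edit this, which is what keeps the feature lightweight.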

Another change is how Notepad handles AI text tools like Write, Rewrite, and Summarize. Instead of waiting for the full result to finish, Notepad now starts showing text sooner, line by line, similar to ChatGPT.
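To illustrate the idea (a generic sketch, not Notepad’s actual implementation), streaming simply means showing partial output as it arrives instead of waiting for the whole response:

```python
import time

def stream_response(full_text, chunk_size=8, delay=0.0):
    """Yield a response in small chunks, the way a streaming UI shows
    partial text line by line instead of waiting for the full result."""
    for i in range(0, len(full_text), chunk_size):
        yield full_text[i:i + chunk_size]
        time.sleep(delay)  # simulate model/network latency

# A UI would render each chunk as soon as it arrives.
chunks = list(stream_response("Here is a one-line summary of your note.", chunk_size=10))
result = "".join(chunks)
```

The user sees the first chunk almost immediately, even though the full result takes the same total time to produce.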

Notepad text streaming

As always, if you dislike one of these features, you can turn them off from Settings in Notepad.

The post Microsoft teases image support in Notepad for Windows 11 ahead of roll out appeared first on Windows Latest


Microsoft at NVIDIA GTC: New solutions for Microsoft Foundry, Azure AI infrastructure and Physical AI


Microsoft combines accelerated computing with cloud scale engineering to bring advanced AI capabilities to our customers. For years, we’ve worked with NVIDIA to integrate hardware, software and infrastructure to power many of today’s most important AI breakthroughs.

What’s new at NVIDIA GTC

  • Expanded Microsoft Foundry capabilities to build, deploy and operate production-ready AI agents on NVIDIA accelerators and open NVIDIA Nemotron models
  • New Azure AI infrastructure optimized for inference-heavy, reasoning-based workloads, making Azure the first hyperscale cloud to power on next-generation NVIDIA Vera Rubin NVL72 systems
  • Deeper integration across Microsoft Foundry, Microsoft Fabric and NVIDIA Omniverse libraries and open frameworks to support Physical AI systems from simulation to real‑world operations

From Frontier models to production-ready agents

At the foundation of this system is Microsoft Foundry, which serves as the operating system for building, deploying and operating AI at enterprise scale. Foundry builds on Azure to bring together models, tools, data and observability into a single system designed for production agents. Today we’re expanding those capabilities across Foundry Agent Service and NVIDIA Nemotron models.

The next-generation Foundry Agent Service and Observability in Foundry Control Plane are now generally available, enabling organizations to build and operate AI agents at production scale. Foundry Agent Service allows teams to quickly develop agents that reason, plan and act across tools, data and workflows. Once agents are created, Foundry Control Plane provides developers with end-to-end visibility into agent behavior, unlocking both developer productivity and enterprise trust. Companies such as Corvus Energy are already using Foundry to replace manual inspection workflows with agent-driven operational intelligence across their global fleet.

We are further simplifying the path from prototype to production with the availability of Voice Live API integration with Foundry Agent Service, in public preview, which enables developers to build voice-first, multimodal, real-time agentic experiences. This pairs with the general availability of a refreshed Microsoft Foundry portal and expanded integrations for Palo Alto Networks’ Prisma AIRS and Zenity, delivering deeper builder experiences and runtime security across the entire agent lifecycle.

NVIDIA Nemotron models are also now available through Microsoft Foundry, joining the widest selection of models on any cloud, including the latest reasoning, frontier and open models. This bolsters our recent partnership announcement bringing Fireworks AI to Microsoft Foundry, enabling customers to fine-tune open-weight models like NVIDIA Nemotron into low-latency assets that can be distributed to the edge.

Scaling AI infrastructure for the world’s most demanding workloads

Inference AI workloads are reshaping cost, performance and system design requirements. To operationalize agentic AI at scale, customers need purpose-built infrastructure for inference‑heavy, reasoning‑based workloads that can be deployed and operated consistently across global and regulated environments.

Microsoft’s AI infrastructure approach is engineered to seamlessly bring next-generation NVIDIA systems into Azure datacenters that are designed for power, cooling, networking and rapid generational upgrades. This allows our customers to move with speed and agility and stay at the leading edge from generation to generation.

In less than a year, we’ve deployed hundreds of thousands of liquid-cooled Grace Blackwell GPUs across our global datacenter footprint, and now we are excited to be the first hyperscale cloud to power on NVIDIA’s newest Vera Rubin NVL72 in our labs. Over the next few months, Vera Rubin NVL72 will be rolled out into our modern, liquid-cooled Azure datacenters.

Microsoft’s infrastructure innovation with NVIDIA also extends to sovereign and regulated environments to give customers control of both where AI runs and how it evolves over time. Recently, we announced Foundry Local support for modern infrastructure and large AI models, and today we have initial support for the NVIDIA Vera Rubin platform on Azure Local, extending accelerated AI capabilities to customer-controlled environments. This approach allows organizations to plan for next-generation AI workloads, including reasoning-based and agentic systems, while maintaining Azure-consistent operations, governance and security through our unified software layer with Azure Arc and Foundry Local.


Bringing AI into the physical world

As AI moves beyond digital experiences, Microsoft and NVIDIA are collaborating to support the next wave of Physical AI. At GTC, this work centers on NVIDIA Physical AI Data Factory Blueprint, with Microsoft Foundry as the platform for hosting and operating Physical AI systems on Azure at cloud scale.

By integrating this blueprint with Azure services as part of a Physical AI Toolchain, Microsoft enables developers to build, train and operate physical AI and robotics workflows that connect physical assets, simulation and cloud training environments into repeatable, enterprise-grade pipelines. To support this, we are introducing a public Azure Physical AI Toolchain GitHub repository integrated with the NVIDIA Physical AI Data Factory and with core Azure services.

To further the impact of AI in real‑world, physical environments, today Microsoft and NVIDIA are deepening the integration between Microsoft Fabric and NVIDIA Omniverse libraries, connecting live operational data with physically accurate digital twins and simulation. This allows organizations to see what’s happening across their physical systems, understand it in real time and use AI to decide what to do next. In practice, customers in manufacturing, operations and beyond are using this approach to move past dashboards and alerts to coordinated, AI‑driven action across machines, facilities and workflows.

From innovation to impact

Microsoft is delivering reliable, production‑scale AI by bringing together its global AI infrastructure, platforms and real‑world systems with the latest innovation from NVIDIA. For customers, this means the ability to operate intelligence continuously, running inference-heavy, reasoning-based and physical AI workloads with the performance, security and governance required for real businesses and regulated industries.

Whether powering always-on agents, scaling next-generation AI infrastructure or deploying intelligent systems in factories, energy facilities and sovereign environments, Microsoft and NVIDIA are helping customers move faster from insight to action.

Yina Arenas leads product strategy and execution for Microsoft Foundry, overseeing the end-to-end AI product portfolio, infrastructure, developer experiences and foundation model integration across OpenAI, Anthropic, Mistral, DeepSeek and others. She delivers an enterprise-ready, production-grade AI platform trusted by global customers for secure, reliable and scalable AI.

The post Microsoft at NVIDIA GTC: New solutions for Microsoft Foundry, Azure AI infrastructure and Physical AI appeared first on The Official Microsoft Blog.


Cursor built a fleet of security agents to solve a familiar frustration


Cursor’s security team has built a fleet of AI agents that continuously monitor and secure the company’s codebase, and it is releasing the templates and Terraform behind them so other security teams can do the same.

Travis McPeak, Cursor’s Head of Security, tells The New Stack that this project grew out of a familiar frustration. Traditional security tooling, such as code owners, linters, and static analysis, wasn’t keeping pace with the rate of change at a company where engineers ship code as fast as AI coding tools evolve.

“We’ve always had this struggle in security, where there’s more demand for our attention than we can scale ourselves,” McPeak says. “So the idea was: How can we leverage agents in a more focused way for security and show up in those places at the right time?”


Code owners, for example, were too imprecise at Cursor’s scale, often pulling the security team into irrelevant pull requests while missing changes that actually mattered. Linting generated too many false positives. The core idea behind Cursor’s new approach: agents can now reason semantically about what a code change actually does, rather than triggering on traditional keywords or pattern rules.

The agents run on Cursor Automations, the company’s recently launched platform for always-on coding agents. Automations sits on top of Cursor’s cloud agent platform and provides those agents with integrations for receiving webhooks, responding to GitHub pull requests, and monitoring codebase changes. These agents run continuously in the background and step forward when triggered by a PR, a webhook event, or a cron schedule.
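A minimal sketch of that trigger model (the names here are illustrative, not Cursor’s actual Automations API): each incoming event kind is routed to whichever agent is registered for it, and anything else is ignored.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Event:
    kind: str                               # e.g. "pull_request", "webhook", "cron"
    payload: dict = field(default_factory=dict)

def make_dispatcher(handlers: Dict[str, Callable[[dict], str]]):
    """Route each event to the agent registered for its kind."""
    def dispatch(event: Event) -> str:
        handler = handlers.get(event.kind)
        return handler(event.payload) if handler else "ignored"
    return dispatch

# Hypothetical agent registrations.
dispatch = make_dispatcher({
    "pull_request": lambda p: f"security review for PR #{p['number']}",
    "cron": lambda p: "daily vulnerability scan",
})
```

The same dispatch shape covers all three triggers the article describes: a PR event runs the review agent, a cron tick runs the daily scan, and unrecognized webhooks fall through harmlessly.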

McPeak’s security team gained early access to an internal version of Automations late last year and was among its first heavy users, he tells us. To make those agents work for the team’s security workflows, they built a custom MCP tool and deployed it as a serverless Lambda function.

Four security agents

The team released the blueprint for four of its security agents on Monday: Agentic Security Review, Vuln Hunter, Anybump, and Invariant Sentinel.

The first and likely most visible of these new agents is Agentic Security Review, which now runs on virtually every pull request and can block continuous integration if necessary. It started as a fork of Cursor’s existing Bugbot, the company’s general-purpose code review agent. But McPeak and his team then tuned it specifically for the security team’s needs.

He wanted something high-signal enough to actually block merges, he says. “When this thing says this is a problem, we have the confidence in it that we can block.” In just two months, it has run on thousands of PRs and blocked hundreds of issues from reaching production, he says.

The second, Vuln Hunter, scans the existing codebase daily. The idea here is that if Agentic Security Review had existed when the older code was written, it would have caught issues that are now running undetected in production. Vuln Hunter divides the code into logical segments and instructs agents to trace vulnerabilities through to their root causes. As McPeak notes, the agent can’t report a finding to the security team unless it can prove it’s real.

Anybump handles dependency patching. As McPeak noted, security tools too often flag issues in third-party libraries for components that aren’t actually in use. “Before any of the stuff that we’re doing here, we have a reachability analysis service, and it finds out if we are actually using the component in a way where the vulnerability matters to us or not.”

The agent then traces through code paths, runs tests, and opens a PR if tests pass. McPeak says the agents produce more thorough analysis than humans would typically be willing to do — and they do it without interrupting anyone’s workflow.

The fourth agent, Invariant Sentinel, monitors for drift against a set of security and compliance properties the team has defined: privacy guarantees, legal requirements, and high-impact controls. It runs daily and uses Automations’ memory feature to track state across runs.

What the agents found

Among many other issues, Vuln Hunter, for example, flagged an email-sending service lacking proper input validation that could have been used as a spam relay. It found a forgotten just-in-time service with overly broad access permissions that the infrastructure team had lost track of — “everybody here had forgotten about it,” McPeak says. And it caught a server-side request forgery vulnerability that required tracing requests across multiple services, the kind of logic-based issue that static analysis tools struggle with because it depends on full cross-service context rather than a single code path.

McPeak says his engineering team’s response has been unusually positive. “I started showing it to engineers, and they said, ‘Wow, this is a great catch. When can we have this on everything?'” he says. “Which is very unusual for security products — that engineers [are like], ‘I like this. I want to have it.’”

Why release the templates?

McPeak notes that part of why the company is releasing these templates now is that attackers are also using AI to find vulnerabilities.

“If we don’t scale ourselves, things are going to get worse for security as a whole,” McPeak says.

Because the automations are prompt-based agents with customizable tools, other teams should be able to adapt them to their own threat models.

“If I gave you a binary, it does security stuff exactly the way I want it done at Cursor, then you can’t customize it,” McPeak says.

What does this mean for security startups?

One interesting aspect of all of this is what it means for security startups that focus on code reviews and dependency scanning. Pundits have lately spent a lot of time discussing the SaaSpocalypse, after all.

McPeak is pretty open about the dynamic. “My observation now is, if you just drop [an LLM] in and give it the right tools, it’ll do a lot of good stuff.” Startups whose value proposition was purely coaching models to behave in the right way may find that there isn’t a lot of long-term value in that, McPeak notes, but he does believe there’s still room for companies that package the end-to-end workflow. Prompt engineering, however, is not much of a moat.

Over time, McPeak says Cursor plans to expand the approach to include vulnerability report intake, privacy compliance monitoring, on-call alert triage, and access provisioning. The Agentic Security Review has already evolved from a single agent into an orchestrator that calls specialized sub-agents, including a dedicated privacy check, with more domains planned. The company that makes the tool developers use to write AI-assisted code is now using that same tool to secure it.


The post Cursor built a fleet of security agents to solve a familiar frustration appeared first on The New Stack.


New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation


As organizations adopt AI, security and governance remain core primitives for safe AI transformation and acceleration. After all, data leaders know that:

Your AI is only as good as your data.

Organizations are skeptical about AI transformation due to concerns about sensitive data oversharing and poor data quality. In fact, 86% of organizations lack visibility into AI data flows, operating in the dark about what information employees share with AI systems [1]. Compounding this challenge, about 67% of executives are uncomfortable using data for AI due to quality concerns [2]. Organizations must solve data oversharing and poor data quality seamlessly to use AI safely. Microsoft Purview offers a modern, unified approach to help organizations secure and govern data across their entire data estate, with best-in-class integrations with Microsoft 365, Microsoft Fabric, and Azure, streamlining oversight and reducing complexity across the estate.

At FabCon Atlanta, we’re announcing new Microsoft Purview innovations for Fabric to help seamlessly secure and confidently activate your data for AI transformation. These updates span data security and data governance, enabling Fabric users to both:

  1. Discover risks and prevent data oversharing in Fabric
  2. Improve governance processes and data quality across their data estate

1. Discover risks and prevent data oversharing in Fabric

As data volume increases with AI usage, Microsoft Purview secures your data with capabilities such as Information Protection, Data Loss Prevention (DLP), Insider Risk Management (IRM), and Data Security Posture Management (DSPM). These capabilities work together to secure data throughout its lifecycle and now specifically for your Fabric data estate. Here are a few new Purview innovations for your Fabric estate:

Microsoft Purview DLP policies to prevent data leakage for Fabric Warehouse and KQL/SQL DBs

Now generally available, Microsoft Purview DLP policies allow Fabric admins to prevent data oversharing in Fabric by triggering policy tips when sensitive data is detected in assets uploaded to Warehouses. Additionally, in preview, Purview DLP enables Fabric admins to restrict access to assets with sensitive data in KQL/SQL databases and Fabric Warehouses. This helps admins limit access to sensitive data detected in these data sources and data stores to just asset owners and allowed collaborators. These DLP innovations expand the depth and breadth of existing DLP policies to ensure sensitive data in Fabric is protected.

Figure 1. DLP restrict access preventing data oversharing of customer information stored in a KQL database.

Microsoft Purview Insider Risk Management (IRM) indicators for Lakehouse, IRM data theft quick policy for Fabric, and IRM pay-as-you-go usage report for Fabric

Microsoft Purview Insider Risk Management is now generally available for Microsoft Fabric, extending its risk-detection capabilities to Microsoft Fabric lakehouses (in addition to Power BI, which is already supported) with ready-to-use risk indicators for risky user activities in Fabric lakehouses, such as sharing data from a lakehouse with people outside the organization. Additionally, the IRM data theft policy is now generally available, letting security admins create a policy to detect Fabric data exfiltration, such as exporting Power BI reports. Organizations also now have visibility into how much they are billed with the IRM pay-as-you-go usage report for Fabric, an easy-to-use dashboard for tracking consumption and predicting costs.

Figure 2. IRM identifying risky user behavior when handling data in a Fabric Lakehouse. 

Figure 3. Security admins can create a data theft policy to detect Fabric data exfiltration. 

Figure 4. Security admins can check the pay-as-you-go usage (processing units) across different workloads and activities such as the downgrading of sensitivity labels of a lakehouse through the usage report.

Microsoft Purview for all Fabric Copilots and Agents

Microsoft Purview currently provides capabilities in preview for all Copilots and Agents in Fabric. Organizations can:

  • Discover data risks such as sensitive data in user prompts and responses and receive recommended actions to reduce these risks.
  • Detect and remediate oversharing risks with Data Risk Assessments on DSPM, that identify potentially overshared, unprotected, or sensitive Fabric assets, giving teams clear visibility into where data exposure exists and enabling targeted actions—like applying labels or policies—to reduce risk and ensure Fabric data is AI‑ready and governed by design.
  • Identify risky AI usage with Microsoft Purview Insider Risk Management to investigate risky AI usage, such as an inadvertent user who has neglected security best practices and shared sensitive data in AI.
  • Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection.

Figure 5. Purview DSPM provides admins with the ability to discover data risks such as a user’s attempt to obtain historical data within a data agent in the Data Science workload in Fabric. DSPM subsequently provides actions to solve this risk.

Now that we’ve covered how Purview helps secure Fabric data and AI, the next focus is ensuring Fabric users can use that data responsibly.

2. Improve governance processes and data quality across their data estate

Once an organization’s data is secured for AI, the next challenge is ensuring consumers can easily find and trust the data needed for AI. This is where the Purview Unified Catalog comes in, serving as the foundation for enterprise data governance. Estate-wide data discovery provides a holistic view of the data landscape, helping prevent valuable data from being underutilized. Built-in data quality tools enable teams to measure, monitor, and remediate issues such as incomplete records, inconsistencies, and redundancies, ensuring decisions and AI outcomes are based on trusted, reliable data. Purview provides additional governance capabilities for all data consumers and governance teams, and supplements the Fabric OneLake catalog for those who use it. Here are a few new innovations within the Purview Unified Catalog:

Publication workflows for data products and glossary terms

Now generally available, Workflows in the Purview Unified Catalog let data owners manage how data products and glossary terms are published. Customizable workflows enable governance teams to work faster to create a well-curated catalog, specifically by ensuring that data products and glossary terms are published and governed responsibly. Data consumers can request access to data products, reassured that the data is held to a governance standard by governance teams.

Figure 6. Customizing a Workflow for publishing a glossary term in your catalog.

Data quality for ungoverned assets in the Unified Catalog, including Fabric data  

In the Unified Catalog, data quality for ungoverned data assets allows organizations to run data quality scans on assets, including Fabric assets, without linking them to data products. This enables data quality stewards to run scans faster and at greater scale, ensuring their organizations can democratize high-quality data for AI use cases.

Figure 7.  Running data quality on data assets without it being associated with a data product.

Looking Forward

As organizations accelerate their AI ambitions, data security and governance become essential. Microsoft Purview and Microsoft Fabric deliver an integrated and unified foundation that enables organizations to innovate with confidence, ensuring data is protected, governed, and trusted for responsible AI activation.

We’re committed to helping you stay ahead of evolving challenges and opportunities as you unlock more value from your data. Explore these new capabilities and join us on the journey toward a more secure, governed, and AI‑ready data future.

[1] 2025 AI Security Gap: 83% of Organizations Flying Blind

[2] The Importance Of Data Quality: Metrics That Drive Business Success

The post New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation appeared first on Microsoft Security Blog.
