Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The moment AI skilling stopped being optional—and started being personal

1 Share

Kavitha Radhakrishnan is a General Manager in Microsoft Global Skilling, where she leads the teams creating AI‑first, learner‑centered experiences that help people and teams build skills they can apply at work.

 

Sunday night. The week hasn’t started yet, but the questions already have.

A leader is scrolling through AI headlines, trying to keep up with the constant changes. Every day there’s a new tool, a new capability, a new prediction about how work is changing. And it is. By Monday morning, the pressure isn’t theoretical; it’s sitting on a packed calendar and a team that’s already running hot. Everyone’s saying, “We should be using AI,” but nobody’s quite sure what that means for this team, this week.

Elsewhere, an employee is watching coworkers use AI with speed and confidence. They want to keep up without feeling exposed for what they don’t know yet. The gap isn’t intelligence; it’s psychological safety and a clear starting point.

And then there’s the learning leader who’s had the “training participation” conversation a hundred times, but now the question is sharper. It isn’t “How many people finished?” but “What changed in the way they work?” The bar has moved from awareness to application.

None of these people are asking for more content. There’s plenty of that. They want a path that respects their time, fits their role, and helps them build confidence, both individually and as part of a team.

 

Enter AI Skills Navigator

AI is moving faster than most of us can track. The problem is figuring out what to do next.

Leaders don’t want to stitch together five different tools. People don’t want another long course about AI. Teams are looking for skilling that fits into real work. That’s the gap AI Skills Navigator is built to address.

AI Skills Navigator brings role‑based, practical skilling into a single experience, so individuals and teams have a clear starting point, a sense of direction, and ways to see progress as they go. Instead of an endless catalog, it offers guided paths that respect time, align to real responsibilities, and make it easier to turn learning into action. At its core, it’s designed to help turn skilling into execution—progress that people can feel and leaders can point to.

How AI Skills Navigator fits your flow

Alex, a team manager, is trying to set the team up for success.

The team is kicking off a new project with clear goals, tight timelines, and a mix of responsibilities across roles. Everyone is expected to use AI more effectively, but “go learn AI” isn’t a plan. Sending people to a long list of links doesn’t help either.

So Alex turns to AI Skills Navigator.

Instead of gathering content from multiple places, Alex uses AI Skills Navigator to design a skilling playlist for the team. The conversational AI experience helps him identify what his team really needs. The playlist is grounded in what the team is actually working on and intentionally structured around the project goals and role-specific responsibilities. It brings together different content formats on purpose: short sessions for core concepts, practice where it matters, and optional deeper dives for people who want to explore further.

It’s not about forcing everyone through the same experience. It’s about giving the team a shared path forward, while respecting different roles, learning needs, and preferences.

 

 

Sam, an experienced marketing manager on the team, doesn’t have to figure out where to start. A link from Alex lands in their inbox, and the intent is clear: this is what matters for our work right now.

As Sam works through the playlist, they move naturally between different ways of learning. The structure makes it easy for them to focus without feeling boxed in.

For a topic that matters most to the project, Sam chooses a skilling session. A video sets the context, and the Skilling Session Coach AI agent is there along the way—ready to clarify a concept, answer a quick question, or pause to check understanding. Sometimes there’s a short quiz to help Sam confirm that they’re learning and making progress.

People are juggling meetings, messages, deadlines, and more. Attention spans are shorter, and learning often happens in brief moments between tasks. AI Skills Navigator is designed for that reality.

Sometimes Sam feels like listening instead of reading. An AI‑generated podcast turns dense material into something easier to absorb. When time is tight, an AI-generated summary helps Sam catch up in minutes, without losing the thread.

From Alex’s perspective, there’s a simple view of how the team is progressing—enough to see who’s moving forward, where people might be getting stuck, and when it’s time to adjust the plan.

 

 

Together, these moments add up. Skilling sessions provide depth when it’s needed. Podcasts and summaries offer flexibility when attention is limited. Skilling playlists keep everything connected, so learning feels purposeful rather than scattered.

By combining structured paths with flexible ways to learn, and pairing AI support with human expertise, AI Skills Navigator helps individuals and teams build confidence, apply skills, and make progress together.

 

How we’re building confidence, together

AI Skills Navigator is designed to help people and teams build confidence, not by adding more noise, but by providing guidance that fits real work.

It brings together training content from sources that many people already know and trust, including Microsoft Learn, LinkedIn, and GitHub, and connects it into structured paths that make it easier to start, go deeper, and keep moving forward. Whether you’re learning on your own or designing skilling for a team, the goal is the same: turn learning into progress that you can feel.

And this isn’t static. We’ll continue evolving AI Skills Navigator based on how people learn, how teams work, and the feedback we hear from learners and leaders along the way. We’ll share updates regularly, including new content, new capabilities, and what’s coming next, so you can stay current as the experience grows.

 

 

After you've signed in, you can get started with these options. (Pro tip: To expand the navigation pane on the left, try selecting it.)

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

File-level archiving comes to Microsoft 365 Archive (public preview)


As content growth accelerates worldwide, organizations need better ways to manage inactive data without sacrificing security, compliance, retention, or discoverability.

Today, we’re announcing a public preview of file-level archiving in Microsoft 365 Archive. This new capability enables you to archive individual files, moving them into a lower-cost, cold-storage tier in SharePoint. This means you can archive outdated and redundant files while keeping the rest of the site active, improving both your Copilot relevancy and your search results in the process.

Read on to learn more about how it works and how to get started. You can also join our webinar on April 7, 10:00am PDT, to see file-level archiving in action and engage with our product team.

SharePoint folder with a mix of archived and active files

Get granular control for inactive content

Until now, Microsoft 365 Archive supported site‑level archiving, meaning admins had to choose whether to archive an entire site, often without knowing the details of the work happening there. With file‑level archiving, it’s now possible to archive individual documents within active SharePoint sites. This is ideal for older project files, completed events, or reference materials that must be retained but are rarely accessed.

Animation showing the file-level archiving experience in SharePoint.

Improve Copilot and Search performance

Microsoft 365 Archive helps improve Copilot results by removing archived files and sites from Copilot’s active index. By reducing clutter across SharePoint and search experiences, you enable Copilot to surface higher‑value, current information while still maintaining governance and compliance. Admins can also enable users to identify and archive the content they know best, instead of having to make judgment calls themselves about the relevancy of entire sites.

Reduce overall storage costs

Archiving also helps reduce active SharePoint storage consumption, saving up to 75% compared to purchasing additional SharePoint storage and freeing capacity for active collaboration. As a reminder, archived storage is charged only when your tenant has exceeded its storage quota. There are no reactivation fees for SharePoint sites and content stored in Microsoft 365 Archive.
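To make the “up to 75%” claim concrete, here is a minimal sketch of the comparison. The per‑GB prices are placeholders, not published Microsoft pricing; only the 75% relationship comes from the text above.

```python
# Hypothetical cost comparison: archive storage priced up to 75% below
# equivalent add-on active SharePoint storage. Prices are placeholders.
# Note: archived storage is only charged once the tenant exceeds its quota.

def monthly_cost(gb: float, price_per_gb: float) -> float:
    """Simple linear storage cost model."""
    return gb * price_per_gb

ACTIVE_PRICE = 0.20                  # placeholder $/GB/month for add-on storage
ARCHIVE_PRICE = ACTIVE_PRICE * 0.25  # "up to 75%" cheaper

active = monthly_cost(1000, ACTIVE_PRICE)     # 1 TB kept in active storage
archived = monthly_cost(1000, ARCHIVE_PRICE)  # the same 1 TB archived
savings = 1 - archived / active               # fraction saved -> 0.75
```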

Keep security, compliance, and metadata intact

File‑level archiving helps organizations take a more intentional approach to data lifecycle management and responsible data growth – separating active collaboration content from long‑term records. Meanwhile, it retains the security settings, compliance protections, and metadata of the archived content. Importantly, archived content remains searchable by administrators using Microsoft Purview and admin search and all Purview flows remain intact. This approach keeps inactive data protected inside the Microsoft 365 trust boundary – without cluttering active workspaces or inflating primary storage usage.

Matrix comparing the SharePoint standard storage tier to the Microsoft 365 Archive tier

Built for extensibility with APIs

Microsoft 365 Archive includes support for Microsoft Graph APIs, enabling organizations and partners to integrate site- and file‑level archiving into custom workflows and lifecycle management solutions.

For public preview, file‑level archiving focuses on manual and API‑based experiences. Looking ahead, we know that the best way to manage archive files at scale is through policies. We’re working hard to make policy‑based automation for archiving files available soon.  

We have partners, including Preservica, using these APIs to incorporate Microsoft 365 Archive into their own solutions. These integrations can offer you more ways to manage your organization’s long-term SharePoint content.
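As a sketch of what an API-based integration might look like: the snippet below builds (but does not send) a Graph-style archive request. The route and payload are illustrative placeholders only — consult the Microsoft 365 Archive developer guidance for the documented endpoints.

```python
# Illustrative only: assembles an HTTP request description for archiving
# a single file via a Graph-style endpoint. The route below is a
# placeholder, NOT the documented Microsoft Graph API surface.

GRAPH_BASE = "https://graph.microsoft.com/beta"  # assumption: preview APIs live in beta

def build_archive_request(site_id: str, item_id: str, token: str) -> dict:
    """Describe a POST that would archive one drive item (hypothetical route)."""
    return {
        "method": "POST",
        "url": f"{GRAPH_BASE}/sites/{site_id}/drive/items/{item_id}/archive",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": {},  # archive-style actions typically carry no payload
    }

req = build_archive_request("contoso-site-id", "file-item-id", "<access-token>")
```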

“We are delighted to be part of the public preview for file-level archiving for Microsoft 365 Archive. This enables us to build on our Preserve365® integration with Microsoft 365 Archive for site-level archiving as we work together to make Active Digital Preservation a seamless part of the Microsoft 365 ecosystem.”
– Stuart Reed, Chief Product Officer, Preservica

Customer spotlight: Kantar

Kantar, a global leader in marketing data and analytics, is embracing generative AI to save time, improve quality, and deliver better results to both employees and customers. After moving content to Microsoft 365 and rolling out Microsoft 365 Copilot, Kantar faced increasing storage costs. By adopting Microsoft 365 Archive, Kantar reports it has reduced storage costs and improved data quality. Archiving also helped ensure Copilot could access clean, relevant information while keeping inactive data secure and cost-effective.

“Microsoft 365 Archive helps us not only address storage costs, but also provide our end users the most up-to-date, relevant content across SharePoint, Teams, Copilot and Gen AI agents.”
– Davide Ranchetti, Principal Engineering Manager, Digital Workspace

To date, Kantar reports that they have archived more than 40,000 sites – nearly 100 terabytes of data – significantly cleaning their data estate and reducing SharePoint storage costs. They’re looking forward to using file-level archiving to save even more on storage costs by archiving large, inactive video files.

Learn more about how Kantar is reducing storage costs with Microsoft 365 Archive.

Get started with Microsoft 365 Archive

File‑level archiving in Microsoft 365 Archive is now available in public preview for eligible Microsoft 365 tenants. Learn more about setting up Microsoft 365 Archive, and how to enable and manage file-level and site-level archiving.

Register today for our webinar on April 7, 10:00am PDT to learn more about Microsoft 365 Archive and ask our product team questions about file-level archiving.

If you’re a developer, check out the developer guidance for Microsoft 365 Archive to learn more about using the Microsoft Graph APIs with Microsoft 365 Archive.


How to Compute GPU Capacity for GPT Models (GPT‑4o and Later)


When deploying large language models like GPT‑4o, capacity planning is no longer about picking a GPU SKU. Instead, Azure abstracts GPU compute behind Provisioned Throughput Units (PTUs)—a model‑centric way to reason about GPU usage, throughput, and latency.

This post explains how GPU capacity is computed for GPT‑4o‑class models, and how to translate your workload into the right number of PTUs.

From GPUs to Tokens: The Mental Shift

With GPT‑4o and newer models, Azure does not expose GPUs directly. Instead:

  • GPU compute is consumed as token throughput
  • Throughput is measured in tokens per minute (TPM)
  • Capacity is provisioned using PTUs, which represent a fixed slice of GPU processing capacity

A PTU is not “one GPU.” It is a guaranteed amount of model‑processing capacity, backed by GPUs under the hood and optimized by Azure for that specific model. [learn.microsoft.com]

The Key Change with GPT‑4o

For GPT‑4o and later models, input and output tokens are metered separately.

That matters because:

  • Input tokens (prompt processing) can be handled largely in parallel
  • Output tokens (generation) are produced sequentially and are more GPU‑intensive per token

Azure therefore assigns separate TPM budgets per PTU for input and output tokens.

GPT‑4o Throughput per PTU

For gpt‑4o, the effective per‑PTU capacities are:

  • Input TPM per PTU: ~2,500
  • Output TPM per PTU: ~625
  • Input : Output ratio: 4 : 1

These ratios are baked into Azure’s PTU calculators and provisioning logic.

The Core Formula

To compute required GPU capacity (PTUs):

  Required PTUs = max(Input TPM ÷ Input TPM per PTU, Output TPM ÷ Output TPM per PTU)

Then:

  • Round up
  • Apply minimum deployment constraints (e.g., 15 PTUs for Global / Data Zone)

Step‑by‑Step Example

Assume this workload:

  • 800 input tokens
  • 150 output tokens
  • 30 requests per minute
  1. Compute TPM

     Input TPM = 800 × 30 = 24,000
     Output TPM = 150 × 30 = 4,500

  2. Convert to PTUs

     Input side: 24,000 ÷ 2,500 = 9.6 PTUs
     Output side: 4,500 ÷ 625 = 7.2 PTUs

  3. Take the bottleneck

     max(9.6, 7.2) = 9.6 → round up to 10 PTUs

Apply Azure’s minimum deployment size → 15 PTUs required.

This is why tables often show PTUs higher than a simple TPM ÷ constant calculation.
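The steps above can be sketched as a small sizing function. The per‑PTU throughput figures and the 15‑PTU minimum are taken from this post; always confirm current values against Azure’s PTU calculator before committing.

```python
import math

# Sketch of the PTU sizing math for gpt-4o described above.
INPUT_TPM_PER_PTU = 2500
OUTPUT_TPM_PER_PTU = 625
MIN_DEPLOYMENT_PTUS = 15  # Global / Data Zone minimum deployment size

def required_ptus(input_tokens: int, output_tokens: int, rpm: int) -> int:
    """Return the PTUs needed for a steady workload of `rpm` requests/minute."""
    input_tpm = input_tokens * rpm
    output_tpm = output_tokens * rpm
    input_ptus = input_tpm / INPUT_TPM_PER_PTU
    output_ptus = output_tpm / OUTPUT_TPM_PER_PTU
    ptus = math.ceil(max(input_ptus, output_ptus))  # round up the bottleneck
    return max(ptus, MIN_DEPLOYMENT_PTUS)           # apply the deployment minimum

print(required_ptus(800, 150, 30))  # worked example from the text -> 15
```

Doubling the traffic to 60 requests per minute makes the input side the binding constraint (48,000 TPM ÷ 2,500 = 19.2 → 20 PTUs), which clears the minimum on its own.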

Why Output Tokens Matter More

Output tokens:

  • Are generated sequentially
  • Consume GPU compute longer per token
  • Drive latency and tail performance

That’s why GPT‑4o uses a 4:1 input‑to‑output ratio, and why output TPM often becomes the bottleneck in chatty or agentic workloads.

Practical Guidance

  • Short prompts, long answers → output‑bound → output TPM drives the PTU count
  • Large prompts, short answers → input‑bound → input TPM drives the PTU count
  • Stable traffic → PTUs give predictable latency
  • Spiky traffic → consider Standard + spillover

Azure recommends validating sizing with the PTU Calculator and real traffic benchmarks before committing long‑term reservations. 

Final Takeaway

For GPT‑4o and newer models, GPU sizing is token‑driven, not hardware‑driven.
PTUs abstract GPUs, and the required capacity is simply the maximum of input‑bound and output‑bound throughput needs.

Once you understand that, GPT‑4o capacity planning becomes predictable, explainable, and much easier to operate at scale.

 


Labeling Files is Worth It | Speed & Protection Benefits in Microsoft Purview

From: Microsoft Mechanics
Duration: 15:37
Views: 153

Take control of your data by discovering sensitive information across every file type and location with Microsoft Purview Information Protection. Classify your data, apply clear labels, and enforce protections that automatically adapt to human and AI interactions so you can reduce risk without slowing down workflows. Proactively monitor, assess, and respond to risk in real time. Use labeling and layered policies to stop accidental sharing, manage AI access, and maintain consistent protection across your organization.

Matt McSpirit, Microsoft Mechanics expert, joins Jeremy Chapman to share how to turn scattered data into actionable security that moves as fast as your team and AI.

► QUICK LINKS:
00:00 - Microsoft Purview data protection
01:04 - Data Loss Prevention
03:36 - Layered approach in addition to DLP
04:13 - Unified classification
04:27 - How sensitive data is determined
06:23 - Create trainable classifiers
07:06 - Distinction between classification and labeling
08:06 - Configure policy protections
09:12 - DLP in action
10:10 - IRM in action
10:51 - See how protections show up
13:37 - Move from reactive to proactive protection
15:00 - Wrap up

► Link References

For deeper guidance, go to https://aka.ms/PurviewInformationProtection

► Unfamiliar with Microsoft Mechanics?
As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

#DataSecurity #DataLossPrevention #Microsoft365 #AISecurity


EP269 Reflections on RSA 2026 - Beyond AI AI AI AI AI AI AI


Guests:

  • No guests! Just Tim and Anton

Topics:

  • Hard to believe we've been doing these since 2022, is that right?
  • What did we see this year at RSA, apart from AI? And more AI? And more AI?
  • What framework can we use to understand the approaches vendors take to AI and security? Just saying "AI washing" is not enough!
  • How to tell "AI washer" from "AI tourist"? 
  • I sense that "securing AI" (and agents) is finally growing as fast as "using AI for security", do you agree?
  • Is the AI vulnerability apocalypse coming? Soon?
  • Have we seen any signs of AI backlash?

Download audio: https://traffic.libsyn.com/secure/cloudsecuritypodcast/EP269_CloudSecPodcast.mp3?dest-id=2641814

GitHub for Beginners: Getting started with GitHub security


Welcome back to GitHub for Beginners, season three! So far this year, we’ve covered GitHub Issues and Projects, as well as GitHub Actions. This time around, we’re going to be talking a little bit about security, and what tools GitHub provides to help you keep your code secure. By the end of this post, you’ll understand how to fix vulnerabilities in your repository using built-in tools like secret scanning, Dependabot, code scanning, and Copilot Autofix.

Why security matters

Vulnerabilities are weaknesses in your code or the libraries you use that attackers can exploit. It’s important to realize that you inherit any risk from a library the moment you import it into your project, even though you didn’t write the vulnerable code yourself. This is why even small or brand-new projects can have vulnerabilities—almost all software relies on third-party packages.

GitHub makes finding and fixing these issues easier than ever with GitHub Advanced Security (GHAS), a suite of products that helps you improve and maintain the quality of your code. On public repositories, you have access to Dependabot, code scanning, secret scanning, and Copilot Autofix. If you want to learn even more about the different features, check out our documentation about GHAS. Or keep reading as we walk through enabling and using some of these features.

Enabling security features

The first step is making sure that GHAS is turned on.

  1. Navigate to your repository.
  2. Click the Settings tab at the top of the page.
  3. In the left-hand bar, under the “Security” section, select Advanced Security.
  4. Under “Dependabot,” enable “Dependabot alerts” and “Dependabot security updates.”
  5. Scroll down to the “Code scanning” section.
  6. For “CodeQL analysis,” select Set up and then select Default from the context menu.
  7. A new window will appear. Select Enable CodeQL without changing any settings.
  8. Scroll down to “Secret Protection” and enable it.

These tools are available to public repositories by default. If you have a private repository, you’ll need a GHAS license.

Select the Security tab at the top of the window to navigate to the security home page for this repository. Here you’ll see options for the various GHAS tools you’ve enabled. This is where you can see alerts for exposed secrets, vulnerable dependencies, and risky code paths.

Now let’s take a look at some of these tools in greater detail. To see how the various alerts look, remember that we have a video version of this blog available online.

Using secret scanning

GitHub can help you protect sensitive information with secret scanning. If you accidentally commit an API key or token, secret scanning will flag it in the security tab in the left-hand column underneath Secret scanning. When you see an alert, click the title of the specific alert to see what secret was detected and where it was found.

One of the ways to address this exposed secret is to revoke it. Revoking a secret means disabling the old key so that it can’t be used anymore. You usually do this by generating a new key on the platform where the secret came from, such as Azure or Stripe.

GitHub can’t automatically revoke the secret for you. You’ll need to do that part yourself. However, secret scanning gives you an early warning so that a leaked secret doesn’t become an exploited secret.

Once you’ve revoked the secret, you can close the secret scanning alert by doing the following:

  1. Select Close as in the top-right of the window.
  2. Select Revoked from the context menu.
  3. Click the green Close alert button at the bottom of the context menu.

What is Dependabot?

Dependabot is a dependency management tool that helps you keep your dependencies up to date. Remember when we talked about how you inherit the vulnerabilities of every library you pull into your project? Dependabot helps to address this by alerting you if it finds vulnerabilities in the libraries your project depends on.

To find Dependabot alerts, navigate back to the Security tab in your repository. When you click on a Dependabot alert, it’ll navigate you to the pull request, so you can update your library. In the pull request, if you scroll down, you can see the specific advisory that triggered the alert by selecting See advisory in GitHub Advisory Database.

From the pull request, select the green Review security update button at the top to review the version bump. You should always review suggested changes before incorporating them. As long as everything looks good, go ahead and merge the pull request.

Dependabot automates turning GitHub security advisories into pull requests so you don’t have to manually track common vulnerabilities and exposures.
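Dependabot alerts and security updates are the settings toggles covered above. If you also want Dependabot to open scheduled version-update pull requests, you can check a `dependabot.yml` into your repository — a minimal example (the `npm` ecosystem here is just an assumption about your project):

```yaml
# .github/dependabot.yml -- scheduled version updates, separate from the
# security updates enabled in the Settings UI
version: 2
updates:
  - package-ecosystem: "npm"   # adjust to your project's package manager
    directory: "/"             # location of the package manifest
    schedule:
      interval: "weekly"
```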

Responding to CodeQL alerts

CodeQL is the engine that scans your code and produces the code scanning alerts (which you can find under the Security tab). CodeQL is not a linter. It’s much more powerful because it understands data flow, showing where input starts and where it ends up.

As a result, code scanning alerts can cover a wide range of possible scenarios. When you select a code scanning alert, it will explain the issue and, if it can, provide additional information, such as a recommendation for fixes and examples to illustrate the problem and possible solution.
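The walkthrough above uses the Default CodeQL setup, which needs no configuration. If you later want more control (custom languages, schedules, or query packs), advanced setup runs CodeQL as an Actions workflow instead — a minimal sketch (the `javascript` language is an assumption; adjust for your repository):

```yaml
# .github/workflows/codeql.yml -- advanced setup, an alternative to the
# "Default" configuration used in the steps above
name: "CodeQL"
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required to upload scan results
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: github/codeql-action/init@v3
        with:
          languages: javascript   # assumption: set to your repo's languages
      - uses: github/codeql-action/analyze@v3
```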

Once you have an understanding of the alert, you can use Copilot Autofix to resolve it by following these steps:

  1. Select the Generate fix button at the top of the alert.
  2. Copilot will suggest a patch. Review the change and verify it addresses your needs.
  3. Click the green Commit to new branch button at the bottom.
  4. In the new pop-up window, select the Open a pull request option, and click Commit change.
  5. Treat the generated pull request as you would any other pull request: review it and merge changes. Remember that while Copilot accelerates security fixes, you stay in control the entire time.

What’s next?

Congratulations! You’ve now learned how to use GitHub Advanced Security to confidently detect and fix vulnerabilities in your code. Public repositories have access to these GHAS tools for free, so you can keep your projects safe from the start. Test your skills using GitHub Skills or the vulnerable-node repository any time.

And if you’re looking for more information, we have lots of documentation available. Here are just a few links to get you started:

Happy coding!

The post GitHub for Beginners: Getting started with GitHub security appeared first on The GitHub Blog.
