Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Tech CEOs Suddenly Love Blaming AI For Mass Job Cuts

1 Share
An anonymous reader quotes a report from the BBC: Sweeping job cuts at Big Tech companies have become an annual tradition. How executives explain those decisions, however, has changed. Out are buzzwords like efficiency, over-hiring, and too many management layers. Today, all explanations stem from artificial intelligence (AI). In recent weeks, giants including Google, Amazon, Meta, as well as smaller firms such as Pinterest and Atlassian, have all announced or warned of plans to shrink their workforce, pointing to developments in AI that they say are allowing their firms to do more with fewer people. [...] But explaining cuts by pointing to advances in AI sounds better than citing cost pressures or a desire to please shareholders, says tech investor Terrence Rohan, who has had a seat on many company boards. "Pointing to AI makes a better blog post," Rohan says. "Or it at least doesn't make you seem as much the bad guy who just wants to cut people for cost-effectiveness." That does not mean there is no substance behind the words, Rohan added. Some of the companies he's backing are using code that is 25% to 75% AI-generated. That is a sign of the real threat that AI tools for writing code represent to jobs such as software developer, computer engineer and programmer, posts once considered a near-guarantee of highly paid, stable careers. "Some of it is that the narrative is changing, some of it is that we really are starting to see step changes in productivity," Anne Hoecker, a partner at Bain who leads the consultancy's technology practice, says of the recent job cuts. "Leaders more recently are seeing these tools are good enough that you really can do the same amount of work with fundamentally less people." There is another way that AI is driving job cuts -- and it has nothing to do with the technical abilities of coding tools and chatbots. Amazon, Meta, Google and Microsoft are collectively planning to pour $650 billion into AI in the coming year. 
As executives hunt for ways to try to ease investor shock at those costs, many are landing on payroll, typically tech firms' single biggest expense. [...] Although the expense of, for example, 30,000 corporate Amazon employees is dwarfed by that company's AI spending plans, firms of this size will now take any opportunity to cut costs, Rohan says. "They're playing a game of inches," Rohan says of cuts at Big Tech firms. "If you can even slightly tune the machine, that is helpful." Hoecker says cutting jobs also signals to stock market investors worried about the "real and huge" cost of AI development that executives are not blithely writing blank cheques. "It shows some discipline," says Hoecker. "Maybe laying off people isn't going to make much of a dent in that bill, but by creating a little bit of cashflow, it helps."

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
3 hours ago
reply
Pennsylvania, USA
Share this story
Delete

The moment AI skilling stopped being optional—and started being personal


Kavitha Radhakrishnan is a General Manager in Microsoft Global Skilling, where she leads the teams creating AI‑first, learner‑centered experiences that help people and teams build skills they can apply at work.

 

Sunday night. The week hasn’t started yet, but the questions already have.

A leader is scrolling through AI headlines, trying to keep up with the constant changes. Every day there’s a new tool, a new capability, a new prediction about how work is changing. And it is. By Monday morning, the pressure isn’t theoretical; it’s sitting on a packed calendar and a team that’s already running hot. Everyone’s saying, “We should be using AI,” but nobody’s quite sure what that means for this team, this week.

Elsewhere, an employee is watching coworkers use AI with speed and confidence. They want to keep up without feeling exposed for what they don’t know yet. The gap isn’t intelligence; it’s psychological safety and a clear starting point.

And then there’s the learning leader who’s had the “training participation” conversation a hundred times, but now the question is sharper. It isn’t “How many people finished?” but “What changed in the way they work?” The bar has moved from awareness to application.

None of these people are asking for more content. There’s plenty of that. They want a path that respects their time, fits their role, and helps them build confidence, both individually and as part of a team.

 

Enter AI Skills Navigator

AI is moving faster than most of us can track. The problem is figuring out what to do next.

Leaders don’t want to stitch together five different tools. People don’t want another long course about AI. Teams are looking for skilling that fits into real work. That’s the gap AI Skills Navigator is built to address.

AI Skills Navigator brings role‑based, practical skilling into a single experience, so individuals and teams have a clear starting point, a sense of direction, and ways to see progress as they go. Instead of an endless catalog, it offers guided paths that respect time, align to real responsibilities, and make it easier to turn learning into action. At its core, it’s designed to help turn skilling into execution—progress that people can feel and leaders can point to.

How AI Skills Navigator fits your flow

Alex, a team manager, is trying to set the team up for success.

The team is kicking off a new project with clear goals, tight timelines, and a mix of responsibilities across roles. Everyone is expected to use AI more effectively, but “go learn AI” isn’t a plan. Sending people to a long list of links doesn’t help either.

So Alex turns to AI Skills Navigator.

Instead of gathering content from multiple places, Alex uses AI Skills Navigator to design a skilling playlist for the team. The conversational AI experience helps him identify what his team really needs. The playlist is grounded in what the team is actually working on and intentionally structured around the project goals and role-specific responsibilities. It brings together different content formats on purpose: short sessions for core concepts, practice where it matters, and optional deeper dives for people who want to explore further.

It’s not about forcing everyone through the same experience. It’s about giving the team a shared path forward, while respecting different roles, learning needs, and preferences.

 

 

Sam, an experienced marketing manager on the team, doesn’t have to figure out where to start. A link from Alex lands in their inbox, and the intent is clear: this is what matters for our work right now.

As Sam works through the playlist, they move naturally between different ways of learning. The structure makes it easy for them to focus without feeling boxed in.

For a topic that matters most to the project, Sam chooses a skilling session. A video sets the context, and the Skilling Session Coach AI agent is there along the way—ready to clarify a concept, answer a quick question, or pause to check understanding. Sometimes there’s a short quiz to help Sam confirm that they’re learning and making progress.

People are juggling meetings, messages, deadlines, and more. Attention spans are shorter, and learning often happens in brief moments between tasks. AI Skills Navigator is designed for that reality.

Sometimes Sam feels like listening instead of reading. An AI‑generated podcast turns dense material into something easier to absorb. When time is tight, an AI-generated summary helps Sam catch up in minutes, without losing the thread.

From Alex’s perspective, there’s a simple view of how the team is progressing—enough to see who’s moving forward, where people might be getting stuck, and when it’s time to adjust the plan.

 

 

Together, these moments add up. Skilling sessions provide depth when it’s needed. Podcasts and summaries offer flexibility when attention is limited. Skilling playlists keep everything connected, so learning feels purposeful rather than scattered.

By combining structured paths with flexible ways to learn, and pairing AI support with human expertise, AI Skills Navigator helps individuals and teams build confidence, apply skills, and make progress together.

 

How we’re building confidence, together

AI Skills Navigator is designed to help people and teams build confidence, not by adding more noise, but by providing guidance that fits real work.

It brings together training content from sources that many people already know and trust, including Microsoft Learn, LinkedIn, and GitHub, and connects it into structured paths that make it easier to start, go deeper, and keep moving forward. Whether you’re learning on your own or designing skilling for a team, the goal is the same: turn learning into progress that you can feel.

And this isn’t static. We’ll continue evolving AI Skills Navigator based on how people learn, how teams work, and the feedback we hear from learners and leaders along the way. We’ll share updates regularly, including new content, new capabilities, and what’s coming next, so you can stay current as the experience grows.

 

 

After you've signed in, you can get started with these options. (Pro tip: To expand the navigation pane on the left, try selecting it.)


File-level archiving comes to Microsoft 365 Archive (public preview)


As content growth accelerates worldwide, organizations need better ways to manage inactive data without sacrificing security, compliance, retention, or discoverability.

Today, we’re announcing a public preview of file-level archiving in Microsoft 365 Archive. This new capability enables you to archive individual files, moving them into a lower-cost, cold-storage tier in SharePoint. This means you can archive outdated and redundant files while keeping the rest of the site active, improving both your Copilot relevancy and your search results in the process.

Read on to learn more about how it works and how to get started. You can also join our webinar on April 7, 10:00am PDT, to see file-level archive in action and engage with our product team.

SharePoint folder with a mix of archived and active files

Get granular control for inactive content

Until now, Microsoft 365 Archive supported site‑level archiving, meaning admins had to choose whether to archive an entire site, often without knowing the details of the work happening there. With file‑level archiving, it’s now possible to archive individual documents within active SharePoint sites. This is ideal for older project files, completed events, or reference materials that must be retained but are rarely accessed.

Animation showing the file-level archiving experience in SharePoint.

Improve Copilot and Search performance

Microsoft 365 Archive helps improve Copilot results by removing archived files and sites from Copilot’s active index. By reducing clutter across SharePoint and search experiences, you enable Copilot to surface higher‑value, current information while still maintaining governance and compliance. Admins can also let users identify and archive the content they know best, rather than making judgment calls themselves about the relevance of entire sites.

Reduce overall storage costs

Archiving also helps reduce active SharePoint storage consumption. Moving inactive content to the archive tier can save up to 75% compared to adding more SharePoint storage – freeing up your existing quota for active collaboration. As a reminder, archived storage is only charged once your tenant has exceeded its storage quota. There are no reactivation fees for SharePoint sites and content stored in Microsoft 365 Archive.

Keep security, compliance, and metadata intact

File‑level archiving helps organizations take a more intentional approach to data lifecycle management and responsible data growth – separating active collaboration content from long‑term records. Meanwhile, it retains the security settings, compliance protections, and metadata of the archived content. Importantly, archived content remains searchable by administrators using Microsoft Purview and admin search and all Purview flows remain intact. This approach keeps inactive data protected inside the Microsoft 365 trust boundary – without cluttering active workspaces or inflating primary storage usage.

Matrix comparing the SharePoint standard storage tier to the Microsoft 365 Archive tier

Built for extensibility with APIs

Microsoft 365 Archive includes support for Microsoft Graph APIs, enabling organizations and partners to integrate site- and file‑level archiving into custom workflows and lifecycle management solutions.
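As a rough sketch of what such an integration might look like, the snippet below builds (without sending) the Graph call for site‑level archiving, so a lifecycle script could dry‑run its actions first. The `/sites/{site-id}/archive` action and the Graph base URL are assumptions based on the site‑level API described here; verify the exact endpoint names — especially the preview file‑level ones — against the Microsoft 365 Archive developer guidance before relying on them.

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_archive_request(site_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) the POST that starts site-level archiving.

    The /sites/{id}/archive action shown here is the site-level API; the
    file-level preview endpoints may differ, so treat this as illustrative.
    """
    return urllib.request.Request(
        url=f"{GRAPH_BASE}/sites/{site_id}/archive",
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        data=json.dumps({}).encode(),
    )

# Dry run: inspect the call a lifecycle workflow would make.
req = build_archive_request("contoso.sharepoint.com,123,456", "<token>")
print(req.method, req.full_url)
```

Sending the request (for example with `urllib.request.urlopen(req)`) requires a real Microsoft Entra token with the appropriate SharePoint permissions.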

For public preview, file‑level archiving focuses on manual and API‑based experiences. Looking ahead, we know that the best way to manage archive files at scale is through policies. We’re working hard to make policy‑based automation for archiving files available soon.  

We have partners, including Preservica, using these APIs to incorporate Microsoft 365 Archive into their own solutions. These integrations can offer you more ways to manage your organization’s long-term SharePoint content.

“We are delighted to be part of the public preview for file-level archiving for Microsoft 365 Archive. This enables us to build on our Preserve365® integration with Microsoft 365 Archive for site-level archiving as we work together to make Active Digital Preservation a seamless part of the Microsoft 365 ecosystem.”
– Stuart Reed, Chief Product Officer, Preservica

Customer spotlight: Kantar

Kantar, a global leader in marketing data and analytics, is embracing Gen AI to save time, improve quality, and deliver better results to both employees and customers. After moving content to Microsoft 365 and rolling out Microsoft 365 Copilot, Kantar faced increasing storage costs. By adopting Microsoft 365 Archive, Kantar reports they have successfully reduced storage costs and improved data quality. The move also helped ensure Copilot could access clean, relevant information while keeping inactive data secure and cost-effective.

“Microsoft 365 Archive helps us not only address storage costs, but also provide our end users the most up-to-date, relevant content across SharePoint, Teams, Copilot and Gen AI agents.”
– Davide Ranchetti, Principal Engineering Manager, Digital Workspace

To date, Kantar reports that they have archived more than 40,000 sites – nearly 100 terabytes of data – significantly cleaning their data estate and reducing SharePoint storage costs. They’re looking forward to using file-level archiving to save even more on storage costs by archiving large, inactive video files.

Learn more about how Kantar is reducing storage costs with Microsoft 365 Archive.

Get started with Microsoft 365 Archive

File‑level archiving in Microsoft 365 Archive is now available in public preview for eligible Microsoft 365 tenants. Learn more about setting up Microsoft 365 Archive, and how to enable and manage file-level and site-level archiving.

Register today for our webinar on April 7, 10:00am PDT to learn more about Microsoft 365 Archive and ask our product team questions about file-level archiving.

If you’re a developer, check out the developer guidance for Microsoft 365 Archive to learn more about using the Microsoft Graph APIs with Microsoft 365 Archive.


How to Compute GPU Capacity for GPT Models (GPT‑4o and Later)


When deploying large language models like GPT‑4o, capacity planning is no longer about picking a GPU SKU. Instead, Azure abstracts GPU compute behind Provisioned Throughput Units (PTUs)—a model‑centric way to reason about GPU usage, throughput, and latency.

This post explains how GPU capacity is computed for GPT‑4o‑class models, and how to translate your workload into the right number of PTUs.

From GPUs to Tokens: The Mental Shift

With GPT‑4o and newer models, Azure does not expose GPUs directly. Instead:

  • GPU compute is consumed as token throughput
  • Throughput is measured in tokens per minute (TPM)
  • Capacity is provisioned using PTUs, which represent a fixed slice of GPU processing capacity

A PTU is not “one GPU.” It is a guaranteed amount of model‑processing capacity, backed by GPUs under the hood and optimized by Azure for that specific model. [learn.microsoft.com]

The Key Change with GPT‑4o

For GPT‑4o and later models, input and output tokens are metered separately.

That matters because:

  • Input tokens (prompt processing) stress the model differently than
  • Output tokens (generation), which are more GPU‑intensive

Azure therefore assigns separate TPM budgets per PTU for input and output tokens.

GPT‑4o Throughput per PTU

For gpt‑4o, the effective per‑PTU capacities are:

  Metric                  Value
  Input TPM per PTU       ~2,500
  Output TPM per PTU      ~625
  Input : Output ratio    4 : 1

These ratios are baked into Azure’s PTU calculators and provisioning logic.

The Core Formula

To compute required GPU capacity (PTUs):

  PTUs = max(Input TPM ÷ 2,500, Output TPM ÷ 625)

Then:

  • Round up
  • Apply minimum deployment constraints (e.g., 15 PTUs for Global / Data Zone)

Step‑by‑Step Example

Assume this workload:

  • 800 input tokens per request
  • 150 output tokens per request
  • 30 requests per minute

  1. Compute TPM

     Input TPM = 800 × 30 = 24,000
     Output TPM = 150 × 30 = 4,500

  2. Convert to PTUs

     Input side: 24,000 ÷ 2,500 = 9.6 PTUs
     Output side: 4,500 ÷ 625 = 7.2 PTUs

  3. Take the bottleneck

     max(9.6, 7.2) = 9.6 → round up to 10 PTUs

Apply Azure’s minimum deployment size → 15 PTUs required.

This is why tables often show PTUs higher than a simple TPM ÷ constant calculation.
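The sizing logic can be sketched in a few lines of Python. The per‑PTU rates (~2,500 input TPM, ~625 output TPM) and the 15‑PTU minimum are the figures quoted in this post; treat them as assumptions and confirm current values with Azure’s PTU calculator before committing to a reservation.

```python
import math

# Per-PTU throughput for gpt-4o as cited in this post -- verify against
# Azure's PTU calculator, since these values can change per model/version.
INPUT_TPM_PER_PTU = 2500
OUTPUT_TPM_PER_PTU = 625
MIN_DEPLOYMENT_PTUS = 15  # Global / Data Zone minimum deployment size

def required_ptus(input_tokens: int, output_tokens: int, rpm: int) -> int:
    """Return PTUs needed: the bottleneck of input- and output-bound demand,
    rounded up and clamped to the minimum deployment size."""
    input_tpm = input_tokens * rpm
    output_tpm = output_tokens * rpm
    raw = max(input_tpm / INPUT_TPM_PER_PTU, output_tpm / OUTPUT_TPM_PER_PTU)
    return max(math.ceil(raw), MIN_DEPLOYMENT_PTUS)

# Worked example from this post: 800 input, 150 output tokens at 30 RPM.
# Input TPM = 24,000 -> 9.6 PTUs; Output TPM = 4,500 -> 7.2 PTUs.
# Bottleneck 9.6 -> ceil to 10 -> clamped to the 15-PTU minimum.
print(required_ptus(800, 150, 30))  # 15
```

Because the result is the maximum of the two sides, growing either the prompt or the completion alone can flip which side is the bottleneck — which is exactly the input‑bound vs. output‑bound distinction discussed below.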

Why Output Tokens Matter More

Output tokens:

  • Are generated sequentially
  • Consume GPU compute longer per token
  • Drive latency and tail performance

That’s why GPT‑4o uses a 4:1 input‑to‑output ratio, and why output TPM often becomes the bottleneck in chatty or agentic workloads. [modelavail...bility.com]

Practical Guidance

  • Short prompts, long answers → output‑bound → more PTUs
  • Large prompts, short answers → input‑bound → more PTUs
  • Stable traffic → PTUs give predictable latency
  • Spiky traffic → consider Standard + spillover

Azure recommends validating sizing with the PTU Calculator and real traffic benchmarks before committing long‑term reservations. 

Final Takeaway

For GPT‑4o and newer models, GPU sizing is token‑driven, not hardware‑driven.
PTUs abstract GPUs, and the required capacity is simply the maximum of input‑bound and output‑bound throughput needs.

Once you understand that, GPT‑4o capacity planning becomes predictable, explainable, and much easier to operate at scale.

 


Labeling Files is Worth It | Speed & Protection Benefits in Microsoft Purview

From: Microsoft Mechanics
Duration: 15:37
Views: 153

Take control of your data by discovering sensitive information across every file type and location with Microsoft Purview Information Protection. Classify your data, apply clear labels, and enforce protections that automatically adapt to human and AI interactions so you can reduce risk without slowing down workflows. Proactively monitor, assess, and respond to risk in real time. Use labeling and layered policies to stop accidental sharing, manage AI access, and maintain consistent protection across your organization.

Matt McSpirit, Microsoft Mechanics expert, joins Jeremy Chapman to share how to turn scattered data into actionable security that moves as fast as your team and AI.

► QUICK LINKS:
00:00 - Microsoft Purview data protection
01:04 - Data Loss Prevention
03:36 - Layered approach in addition to DLP
04:13 - Unified classification
04:27 - How sensitive data is determined
06:23 - Create trainable classifiers
07:06 - Distinction between classification and labeling
08:06 - Configure policy protections
09:12 - DLP in action
10:10 - IRM in action
10:51 - See how protections show up
13:37 - Move from reactive to proactive protection
15:00 - Wrap up

► Link References

For deeper guidance, go to https://aka.ms/PurviewInformationProtection

► Unfamiliar with Microsoft Mechanics?
As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.

• Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
• Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
• Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

► Keep getting this insider knowledge, join us on social:
• Follow us on Twitter: https://twitter.com/MSFTMechanics
• Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
• Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
• Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

#DataSecurity #DataLossPrevention #Microsoft365 #AISecurity


EP269 Reflections on RSA 2026 - Beyond AI AI AI AI AI AI AI


Guests:

  • No guests! Just Tim and Anton

Topics:

  • Hard to believe we've been doing these since 2022, is that right?
  • What did we see this year at RSA, apart from AI? And more AI? And more AI?
  • What framework can we use to understand the approaches vendors take to AI and security? Just saying "AI washing" is not enough!
  • How to tell "AI washer" from "AI tourist"? 
  • I sense that "securing AI" (and agents) is finally growing as fast as "using AI for security", do you agree?
  • Is the AI vulnerability apocalypse coming? Soon?
  • Have we seen any signs of AI backlash?

Resource:

Download audio: https://traffic.libsyn.com/secure/cloudsecuritypodcast/EP269_CloudSecPodcast.mp3?dest-id=2641814