
Microsoft PowerPoint is removing “Reuse slides” on Windows & macOS, as it continues to add Copilot features


PowerPoint’s Reuse Slides feature, which has been available for over a decade, will be removed in January 2026. Microsoft says Reuse Slides duplicates existing functionality, so it no longer makes sense to keep it. But what are your other options? You can use the “New window” feature to duplicate the deck, or just drag and drop slides.

What is Reuse in PowerPoint?

Reuse Slides in PowerPoint allows you to add slides from another presentation (.ppt or .pptx). This is particularly useful when you want to retain formatting across different presentations, and it’s commonly used in organizations to build new decks from existing ones.

You can always copy slides from another presentation manually, but the experience is far from perfect. With Reuse Slides, you can copy an entire deck and then fill it with new content without losing the formatting, such as your company’s logo or header/footer design.

Reuse slides in PowerPoint

When you choose ‘Reuse slides’ in the PowerPoint toolbar, it opens a panel on the right side that lets you choose the slides you want to reuse and whether to retain their formatting or keep just the content.

How to reuse slides in PowerPoint

In an advisory on the Microsoft 365 Admin Center, Microsoft argues that it’s removing the Reuse Slides feature because there are other ways to use material from different PowerPoint presentations. The feature will be removed from PowerPoint on Windows and macOS in January 2026, and you won’t be able to turn it back on.

“We will begin retiring this feature starting in December 2025 and expect to complete in January 2026,” Microsoft noted in a support document spotted by Windows Latest. “Users should update documentation and consider alternative slide reuse methods.”

I understand there are other ways to import an entire deck, but many people still prefer Reuse Slides because they’re familiar with it.

How to keep the “Reuse slides” experience in PowerPoint

I have a few ideas that will give you about 90% of the same experience:

  1. Open a new PowerPoint presentation along with the source deck.
  2. Enter split (Snap) mode on Windows, with your new presentation on the left and the existing presentation you want to reuse on the right.
  3. Now, simply select the slides you want in the source deck and drag and drop them into the new presentation.

This will retain all your animations and media in most cases.

New window in PowerPoint presentation

However, if you want to duplicate the entire presentation, simply navigate to the ‘View’ tab and select ‘New window.’ This opens a second window onto your existing presentation; to keep a separate copy, make sure to save it under a new file name.

“New window” is possibly the best replacement for “Reuse slides” in PowerPoint, but Reuse Slides still offers greater control, because it lets you pick specific slides and choose formatting in a sidebar.
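
If you’re comfortable with a bit of scripting, there’s one more option. Here’s a minimal sketch (an assumption on my part, not something Microsoft recommends) using Python with the pywin32 package on Windows with desktop PowerPoint installed; the file paths are hypothetical. PowerPoint’s COM object model exposes Slides.InsertFromFile, which pulls a range of slides from a source deck into an open presentation:

```python
# Minimal sketch: requires Windows, desktop PowerPoint, and pywin32.
# File paths are hypothetical placeholders.
import win32com.client

powerpoint = win32com.client.Dispatch("PowerPoint.Application")
powerpoint.Visible = True

target = powerpoint.Presentations.Open(r"C:\decks\new-deck.pptx")

# Insert slides 1 through 3 from the source deck after slide 1 of the
# target; InsertFromFile returns the number of slides inserted.
count = target.Slides.InsertFromFile(r"C:\decks\source-deck.pptx", 1, 1, 3)
print(f"Inserted {count} slide(s)")

target.Save()
```

By default, the inserted slides pick up the destination deck’s theme, much like the Reuse Slides pane’s default behavior.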

The post Microsoft PowerPoint is removing “Reuse slides” on Windows & macOS, as it continues to add Copilot features appeared first on Windows Latest


Trump’s Hatred of EVs Is Making Gas Cars More Expensive

Trump’s anti-climate agenda is making it more expensive to own a car, period.

Whisper Leak: A novel side-channel attack on remote language models


Microsoft has discovered a new type of side-channel attack on remote language models. It could allow a cyberattacker who is in a position to observe your network traffic to infer the topics of language model conversations, despite the traffic being end-to-end encrypted via Transport Layer Security (TLS).

We have worked with multiple vendors to mitigate the risk, and we have made sure that Microsoft-owned language model frameworks are protected.

The importance of language model confidentiality

In the last couple of years, AI-powered chatbots have rapidly become an integral part of our daily lives, assisting with everything from answering questions and generating content to coding and personal productivity. As these AI systems continue to evolve, they are increasingly used in sensitive contexts, including healthcare, legal advice, and personal conversations. This makes it crucial to ensure that the data exchanged between humans and language models remains private and secure. Without strong privacy protections, users may be targeted, or may hesitate to share information, limiting the chatbot’s usefulness and raising ethical concerns. Implementing robust anonymization techniques, encryption, and strict data retention policies is essential to maintaining trust and safeguarding user privacy in an era where AI-powered interactions are becoming the norm.

In this blog post, we present a novel side-channel attack against streaming-mode language models that uses network packet sizes and timings. This puts the privacy of user and enterprise communications with chatbots at risk despite end-to-end encryption. Cyberattackers in a position to observe the encrypted traffic (for example, a nation-state actor at the internet service provider layer, someone on the local network, or someone connected to the same Wi-Fi router) could use this cyberattack to infer whether the user’s prompt is on a specific topic. This poses real-world risks especially to users under oppressive governments, where topics such as protesting, banned material, the election process, or journalism may be targeted. Finally, we discuss mitigations implemented by cloud providers of language models to reduce the privacy risks to their users. Through this process, we have successfully worked with multiple vendors to get these privacy issues addressed.

Background: Language model communication practices

Since AI-powered chatbots are used over the internet, the communications with them are often encrypted with HTTP over TLS (HTTPS), which ensures the authenticity of the server and security through encryption.

At a high level, language models generate responses by predicting and producing one token at a time based on the given prompt. Rather than constructing the entire response at once, the model sequentially calculates each token using the previous tokens as context to determine the next most likely word or phrase. This autoregressive nature means that responses are inherently generated in a step-by-step manner. Additionally, since users typically prefer immediate feedback rather than waiting for the full response to be computed, language models stream their output in chunks. This approach ensures that text is displayed as soon as possible rather than delaying until the entire response is fully formed.
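
To make that concrete, here’s a minimal sketch of a client consuming a streamed response; the endpoint and request shape are hypothetical stand-ins, not any specific provider’s API. The point is that each chunk arrives as its own network event, so its size and timing are observable to anyone on the path, even though the payload itself is encrypted:

```python
import time
import requests  # third-party; pip install requests

resp = requests.post(
    "https://llm.example.com/v1/chat/completions",  # hypothetical endpoint
    json={
        "model": "example-model",
        "stream": True,  # ask the server to stream tokens as generated
        "messages": [{"role": "user", "content": "Hello"}],
    },
    stream=True,  # let requests yield the body incrementally
)

last = time.monotonic()
for chunk in resp.iter_content(chunk_size=None):  # chunks as they arrive
    now = time.monotonic()
    # An on-path observer sees roughly these two values per TLS record:
    print(f"size={len(chunk)} bytes  gap={now - last:.3f}s")
    last = now
```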

Background: Symmetric ciphers

The TLS protocol is the standard means of application-level cryptography over the internet, and is most commonly used by HTTPS. Thus, the security of TLS is foundational to the confidentiality of communication.

Generally, TLS uses asymmetric cryptography (such as RSA or ECDH) with certificate validation to exchange session keys, which are then used as keys for symmetric ciphers. Symmetric ciphers, too, have been studied and improved over the years. They fall into one of two families:

  1. Block ciphers: The plaintext is split into fixed-size blocks, and each block is encrypted on its own, usually with some input from other blocks (determined by the mode of operation). The most common block cipher in modern use is the Advanced Encryption Standard (AES).
  2. Stream ciphers: Based on the key, the cipher generates a pseudo-random, effectively endless stream of bytes, which is combined (typically XORed) with the plaintext. Common stream ciphers include ChaCha20, as well as AES-GCM, which turns the AES block cipher into a stream cipher.

An important difference between block ciphers and stream ciphers is data size granularity: with block ciphers, the ciphertext size is always a multiple of the block size (for example, 16 bytes), while stream ciphers support any data size.

Setting compression aside, we can conclude that with a stream cipher the size of the ciphertext equals the size of the plaintext plus a constant (for example, a message authentication code).
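
You can verify this property directly. A small sketch using the third-party cryptography package shows that an AES-GCM ciphertext is always the plaintext length plus a fixed 16-byte authentication tag:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)

for plaintext in (b"hi", b"a much longer message about some sensitive topic"):
    nonce = os.urandom(12)  # fresh nonce per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Ciphertext length = plaintext length + 16-byte authentication tag.
    print(len(plaintext), len(ciphertext))  # e.g. 2 -> 18, 49 -> 65
```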

Side-channel attacks against language models

Side-channel attacks have a long history in cryptography, traditionally targeting hardware implementations by analyzing power consumption, electromagnetic emissions, or timing variations to leak secret keys.

More recently, the unique characteristics of language models have opened new avenues for side-channel analysis. Our research into Whisper Leak builds upon and is contextualized by several concurrent and recent works specifically targeting language models:

  • A token-length side-channel attack presented by Weiss et al. in 2024. The attack demonstrated that the length of individual plaintext tokens can be inferred from the size of encrypted packets in streaming language model responses, and that in many cases the output response can be reconstructed from this information.
  • A remote timing attack presented by Carlini and Nasr in 2024. The attack specifically targets the timing variations introduced by efficient inference techniques like speculative decoding. 
  • A timing side-channel attack via output token count presented by Tianchen Zhang, Gururaj Saileshwar, and David Lie in 2024, abusing the fact that the total number of output tokens generated by a language model can vary depending on sensitive input attributes, such as the target language in translation or the predicted class in classification.
  • A timing side-channel attack through cache sharing, presented by Zheng et al. in 2024, exploiting timing differences caused by cache-sharing optimizations (prefix caching and semantic caching) in language model services.

We hypothesized that the sequence of encrypted packet sizes and inter-arrival times during a streaming language model response contains enough information to classify the topic of the initial prompt, even in the cases where responses are streamed in groupings of tokens. To validate this, we designed an experiment simulating the scenario where the adversary can observe encrypted traffic but not decrypt it.

Whisper Leak methodology

In our experiment, we train a binary classifier to distinguish between a specific target topic and general background traffic. We chose “legality of money laundering” as the target topic for our proof-of-concept.

  • For positive samples, we used a language model to generate 100 semantically similar variants of questions about this topic (for example, “Are there any circumstances where money laundering is legal?” and “Are there international laws against money laundering?”). Eighty (80) variants were used for training and validation, and 20 were held out for testing generalization.
  • For negative noise samples, we randomly sampled 11,716 unrelated questions from the Quora Questions Pair dataset, covering a wide variety of topics.
  • Data collection was performed for each language model service individually, recording response times and packet sizes via network sniffing with tcpdump (a rough sketch of this step follows the list), shuffling the order of positive and negative samples, and introducing variants by inserting extra spaces between words to reduce the risk of caching interference. We used a standard language model temperature of 1.0 to encourage response diversity.
  • See Figure 2 below for examples of the target and noise prompts used.
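
As a rough illustration of the collection step (the authors used tcpdump; this sketch uses the third-party scapy package instead and needs root privileges), recording a (timestamp, size) trace of HTTPS traffic looks something like this:

```python
from scapy.all import TCP, sniff  # pip install scapy; run as root

trace = []  # (timestamp, packet size) pairs

def record(pkt):
    if pkt.haslayer(TCP):
        trace.append((float(pkt.time), len(pkt)))

# Capture 30 seconds of HTTPS traffic; a real collector would also
# filter on the chat service's IP addresses.
sniff(filter="tcp port 443", prn=record, timeout=30)
print(f"captured {len(trace)} packets")
```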

After data collection, we evaluated three machine learning models, each in three modes (time-only, packet-size-only, or both); a sketch of the LightGBM mode follows the list:

  • LightGBM: A gradient boosting framework.
  • LSTM-based (Bi-LSTM): A recurrent neural network architecture suitable for sequential data.
  • BERT-based: Using a pre-trained transformer model (DistilBERT-uncased) adapted with extended tokens representing size and time buckets for sequence classification.
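
As a rough sketch of the LightGBM mode (the real pipeline lives in the Whisper Leak repository), suppose each recorded conversation has been reduced to a fixed-length vector of packet sizes and inter-arrival gaps; the traces below are synthetic stand-ins for collected data:

```python
import numpy as np
import lightgbm as lgb  # pip install lightgbm scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

MAX_LEN = 100  # packets kept per trace

def to_features(sizes, gaps):
    """Pad/truncate one trace's packet sizes and time gaps to a flat vector."""
    vec = np.zeros(2 * MAX_LEN)
    vec[: min(len(sizes), MAX_LEN)] = sizes[:MAX_LEN]
    vec[MAX_LEN : MAX_LEN + min(len(gaps), MAX_LEN)] = gaps[:MAX_LEN]
    return vec

# Synthetic traces: positive (target-topic) samples get slightly larger packets.
rng = np.random.default_rng(0)

def fake_trace(positive):
    n = int(rng.integers(20, MAX_LEN))
    sizes = rng.normal(120 + 15 * positive, 30, n)
    gaps = rng.exponential(0.05, n)
    return sizes, gaps

y = np.array([1] * 100 + [0] * 900)  # imbalanced, as in the experiment
X = np.vstack([to_features(*fake_trace(label == 1)) for label in y])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = lgb.LGBMClassifier(n_estimators=300).fit(X_tr, y_tr)
print("AUPRC:", average_precision_score(y_te, model.predict_proba(X_te)[:, 1]))
```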

We evaluated performance using the Area Under the Precision-Recall Curve (AUPRC), a measure of attack success that is well suited to imbalanced datasets (many negative samples, few positive samples). The following table illustrates the results:

A quick look at the “Best Overall” column shows that for many models, the cyberattack achieved scores above 98%. This tells us that the unique digital “fingerprints” left by conversations on a specific topic are distinct enough for our AI-powered eavesdropper to reliably pick them out in a controlled test.

What this means in the real world

To understand what this means practically, we simulated a more realistic surveillance scenario: imagine a cyberattacker monitoring 10,000 random conversations, with only one conversation about the target sensitive topic mixed in. Even with this extreme imbalance, our analysis shows concerning results.

For many of the tested models, a cyberattacker could achieve 100% precision (all conversations it flags as related to the target topic are correct) while still catching 5-50% of target conversations. In plain terms: nearly every conversation the cyberattacker flags as suspicious would actually be about the sensitive topic, with no false alarms. This level of accuracy means a cyberattacker could operate with high confidence, knowing they’re not wasting resources on false positives.

To put this in perspective: if a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics, whether that’s money laundering, political dissent, or other monitored subjects, even though all the traffic is encrypted.

Important caveat: these precision estimates are projections based on our test data and are inherently limited by the volume and diversity of our collected data. Real-world performance would depend on actual traffic patterns, but the results strongly suggest this is a practical threat, not just a theoretical one.

This is a starting risk level

The cyberthreat could grow worse over time. These results represent a baseline risk level. Figure 4 below shows that attack effectiveness improves as cyberattackers collect more training data. In extended tests with one tested model, we observed continued improvement in attack accuracy as dataset size increased. Combined with more sophisticated attack models and the richer patterns available in multi-turn conversations or multiple conversations from the same user, this means a cyberattacker with patience and resources could achieve higher success rates than our initial results suggest.

Working with industry partners and mitigation

We have engaged in responsible disclosures with affected vendors and are pleased to report successful collaboration in implementing mitigations. Notably, OpenAI, Mistral, Microsoft, and xAI have deployed protections at the time of writing. This industry-wide response demonstrates the commitment to user privacy across the AI ecosystem.

OpenAI implemented, and Microsoft Azure later mirrored, an additional field in the streaming responses under the key “obfuscation,” which adds a random-length sequence of text to each response. This masks the length of each token, and we observed that it substantially reduces the cyberattack’s effectiveness. We have directly verified that Microsoft Azure’s mitigation reduces attack effectiveness to a level we no longer consider a practical risk.

Similarly, Mistral added a new parameter called “p” with a similar effect.
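
Conceptually (a sketch of the idea, not any vendor’s actual implementation), the mitigation pads each streamed chunk with random-length junk that the client throws away, decoupling the observable record size from the underlying token length:

```python
import secrets
import string

def obfuscation_pad(max_len=64):
    """Random-length printable junk; the client simply ignores it."""
    n = secrets.randbelow(max_len) + 1
    return "".join(secrets.choice(string.ascii_letters) for _ in range(n))

def wrap_chunk(token_text):
    # The field name echoes the "obfuscation" key described above; the
    # chunk structure itself is an illustrative assumption.
    return {"content": token_text, "obfuscation": obfuscation_pad()}

print(wrap_chunk("Hello"))
```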

What users can do

While this is primarily an issue for AI providers to address, users concerned about privacy can additionally:

  • Avoid discussing highly sensitive topics over AI chatbots when on untrusted networks.
  • Use VPN services to add an additional layer of protection.
  • Prefer providers who have implemented mitigations.
  • Use the non-streaming modes offered by large language model providers.
  • Stay informed about provider security practices.

Source code

The models and data collection code are publicly available in the Whisper Leak repository. In addition, we have built proof-of-concept code that uses the models to output a probability (between 0.0 and 1.0) that a topic is “sensitive” (related to money laundering, in our proof of concept).

Technical report

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Microsoft Ignite

Join us at Microsoft Ignite to explore the latest solutions for securing AI. Connect with industry leaders, innovators, and peers shaping what’s next.

San Francisco on November 17-21
Online (free) on November 18-20


The post Whisper Leak: A novel side-channel attack on remote language models appeared first on Microsoft Security Blog.




What 986 million code pushes say about the developer workflow in 2025


If you’re building software today, you’ve probably noticed that it’s like… really fast now. And that’s the thing: it’s not just that we code faster. It’s how we code, review, and ship that has changed (and is changing).

You might have seen the Octoverse 2025 report, but in case you haven’t, the stats are pretty wild: developers created 230+ repositories per minute and pushed 986 million commits last year. Almost a billion commits! With a b!

Because developers (and teams of developers) are moving faster overall, they’re making different choices, and as they move faster, their workflows change, too.

Iteration is the default state

What’s really interesting is that this doesn’t feel like a temporary spike. It feels like an actual long-term shift in iteration. The days of shipping big releases once per quarter are rapidly going away.

Developers are pushing constantly, not just when things are “ready.” Smaller and more frequent commits are becoming more of the norm. Personally, I love that. Nobody wants to review a gigantic, 1000-line pull request all the time (only to inevitably plop in a “LGTM” as their eyes glaze over). It’s still more code, shipped faster, but in smaller bursts. 

The new normal is lightweight commits. You fix a bug, write a small feature, adjust some configurations, and… push. The shift we’re seeing is that things continue to move, not that things are “done” in huge chunks, because “done”-ness is temporary!

“Code is never finished, only iterated upon.” (riffing on da Vinci’s “Art is never finished, only abandoned.”)

Cassidy, as well as most developers at this point

And devs know that shipping constantly is about reducing risk, too. Small, frequent changes are easier to debug, and easier to roll back if things go wrong. You don’t have to sift through a month’s worth of changes to get something fixed. This cycle changes how teams think about quality, about communication, and even hiring. If your team still waits weeks to ship something, it honestly isn’t working the way much of the world works anymore.

Shipping looks different now

Because we’re iterating differently, we’re shipping differently. In practice, that looks like:

  • More feature flags: Feature flags used to be “for A/B testing and maybe the spooky experimental feature.” Now they’re core to how we ship incomplete work safely (see the sketch after this list). Feature flags are everywhere, and they let teams ship code behind a toggle. You can push that “maybe” feature to prod, see how it behaves, and then turn it off instantly if something goes sideways. Teams don’t have to hold up releases to finish edge cases, and feature flags are now part of main workflows instead of an afterthought.
  • CI/CD runs everything: Every push sets off a chain of events: tests, builds, artifact generation, security scans… and if it all passes, it deploys. Developers expect pipelines to kick in automatically, and manual deploys are increasingly rare.
  • Smaller, focused pull requests: Pull requests simply aren’t novels anymore. We’re seeing more short, readable pull requests with a single purpose. It’s easier and faster to review, and that mental overhead alone increases speed and saves us some brain cells.
  • Tests drive momentum: Developers used 11.5 billion GitHub Actions minutes running tests last year (a 35% increase! That’s with a b! Again!). With all this automation, unit tests, integration tests, end-to-end tests, and all the other tests are becoming more and more necessary, because automation is how we keep up with the new pace.
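
Here’s a minimal sketch of what a feature flag gate can look like, assuming environment-variable-driven flags (real teams usually reach for a flag service, but the shape is the same):

```python
import os

def flag_enabled(name: str) -> bool:
    """Treat FLAG_<NAME>=1 in the environment as 'on'."""
    return os.environ.get(f"FLAG_{name.upper()}") == "1"

def legacy_checkout(cart):
    return f"legacy checkout of {len(cart)} item(s)"

def new_checkout(cart):
    return f"new checkout of {len(cart)} item(s)"

def checkout(cart):
    # Ship the new path to prod behind a toggle; flip it off instantly
    # if something goes sideways.
    if flag_enabled("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["socks", "hat"]))
```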

How teams communicate should also change

Developer workflows have clearly changed, but I personally think that communication around development should also follow suit.

This is how I envision that future:

  • Standups are shorter (or async).
  • Status updates live in issues (and “where code lives,” not meetings).
  • “Blocked because the pull request isn’t reviewed yet” is no longer acceptable.
  • Hiring shifts toward people who can ship fast and communicate clearly.

Yes, the code got faster, so developers have to move faster as well.

Developers are still at the heart of engineering speed, but their workflows should never be what slows things down.

Take this with you

It’ll be interesting to see what developer workflows in 2026 look like after such rapid changes in 2025.

I think “AI fatigue” is incredibly real (and valid), and we’ll see many tools fall by the wayside as the natural productivity enhancers succeed and the ones that add friction go away. But I also think new standards and tooling will emerge as our new “baseline” for our ever-changing success metrics.

In the future, specs and code will live closer together (Markdown-to-code workflows are only going to grow). That will mean more communication across teams, and perhaps even more documentation overall. And we’ll continue to see more and more constant and collaborative shipping (even from companies that are still slow to adopt AI tooling) because it’s necessary.

This year, we’ve seen a lot of growth across the board in terms of pull requests, projects overall, contributions, and so on… so perhaps we’ll see some stabilization? 

But, of course, the only constant is change.

Looking to stay one step ahead? 

Read the latest Octoverse report and consider trying Copilot CLI.

The post What 986 million code pushes say about the developer workflow in 2025 appeared first on The GitHub Blog.


How to Correctly Align Columns in ComboBox, ListBox, or CheckListView in Advanced Installer

Creating installer packages sometimes requires presenting item lists in custom dialogs, where data must line up cleanly rather than drift into uneven text blocks. Advanced Installer offers controls such as ComboBox, ListBox, and CheckListView. However, to correctly align columns in these controls, there are some steps you should be aware of. [...]

PSADT Service UI Deprecated

PSADT 4.1.X finally brings direct user interaction to Intune deployments, so no more ServiceUI.exe. The new Invoke-AppDeployToolkit.exe automatically detects active sessions, displaying messages and prompts right from Intune. But the new Defer feature doesn’t work on an exact timer; it depends on Intune’s own unpredictable recheck cycle. [...]