
The Signals Loop: Fine-tuning for world-class AI apps and agents


In the early days of the AI shift, AI applications were largely built as thin layers on top of off-the-shelf foundation models. But as developers began tackling more complex use cases, they quickly encountered the limitations of simply layering retrieval-augmented generation (RAG) on top of off-the-shelf models. While this approach offered a fast path to production, it often fell short in delivering the accuracy, reliability, efficiency, and engagement needed for more sophisticated use cases.

However, this dynamic is shifting. As AI shifts from assistive copilots to autonomous co-workers, the architecture behind these systems must evolve. Autonomous workflows, powered by real-time feedback and continuous learning, are becoming essential for productivity and decision-making. AI applications that incorporate continuous learning through real-time feedback loops—what we refer to as the ‘signals loop’—are emerging as the key to building differentiation that stays adaptive and resilient over time.

Building truly effective AI apps and agents requires more than just access to powerful LLMs. It demands a rethinking of AI architecture—one that places continuous learning and adaptation at its core. The ‘signals loop’ centers on capturing user interactions and product usage data in real time, then systematically integrating this feedback to refine model behavior and evolve product features, creating applications that get better over time.
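
To make the idea concrete, here is a minimal sketch, assuming a hypothetical feedback-event schema (the field names and acceptance rule below are illustrative, not Microsoft's implementation), of how captured interaction signals might be turned into supervised fine-tuning examples:

```python
# Minimal signals-loop sketch: turn captured feedback events into fine-tuning examples.
# The event schema, field names, and acceptance rule are illustrative assumptions.
import json
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    prompt: str                          # what the user asked
    response: str                        # what the model produced
    accepted: bool                       # explicit or implicit signal (thumbs-up, retained suggestion, ...)
    edited_response: str | None = None   # the user's correction, if any

def to_training_example(event: FeedbackEvent) -> dict | None:
    """Keep accepted responses as-is; prefer the user's correction when one exists."""
    if event.edited_response:
        target = event.edited_response
    elif event.accepted:
        target = event.response
    else:
        return None  # rejected with no correction: useful for evaluation, not supervised tuning
    return {"messages": [{"role": "user", "content": event.prompt},
                         {"role": "assistant", "content": target}]}

def write_dataset(events: list[FeedbackEvent], path: str) -> None:
    """Write a JSONL file in the chat-style format most fine-tuning APIs accept."""
    with open(path, "w") as f:
        for event in events:
            example = to_training_example(event)
            if example:
                f.write(json.dumps(example) + "\n")
```

In practice the same event stream also feeds held-out evaluation sets, so each candidate model can be benchmarked before it replaces the one in production.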

As the rise of open-source frontier models democratizes access to model weights, fine-tuning (including reinforcement learning) is becoming more accessible and building these loops becomes more feasible. Capabilities like memory are also increasing the value of signals loops. These technologies enable AI systems to retain context and learn from user feedback—driving greater personalization and improving customer retention. And as the use of agents continues to grow, ensuring accuracy becomes even more critical, underscoring the growing importance of fine-tuning and implementing a robust signals loop. 

At Microsoft, we’ve seen the power of the signals loop approach firsthand. First-party products like Dragon Copilot and GitHub Copilot exemplify how signals loops can drive rapid product improvement, increased relevance, and long-term user engagement.

Implementing a signals loop for continuous AI improvement: Insights from Dragon Copilot and GitHub Copilot

Dragon Copilot is a healthcare Copilot that helps doctors become more productive and deliver better patient care. The Dragon Copilot team has built a signals loop to drive continuous product improvement. The team built a fine-tuned model using a repository of clinical data, which resulted in much better performance than the base foundational model with prompting only. As the product has gained usage, the team used customer feedback telemetry to continuously refine the model. When new foundational models are released, they are evaluated with automated metrics to benchmark performance and updated if there are significant gains. This loop creates compounding improvements with every model generation, which is especially important in a field where the demand for precision is extremely high. The latest models now outperform base foundational models by ~50%. This high performance helps clinicians focus on patients, capture the full patient story, and improve care quality by producing accurate, comprehensive documentation efficiently and consistently.

Figure: Dragon Copilot fine-tuning process (model accuracy comparison).

GitHub Copilot was the first Microsoft Copilot, capturing widespread attention and setting the standard of what AI-powered assistance could look like. In its first year, it rapidly grew to over a million users, and has now reached more than 20 million users. As expectations for code suggestion quality and relevance continue to rise, the GitHub Copilot team has shifted its focus to building a robust mid-training and post-training environment, enabling a signals loop to deliver Copilot innovations through continuous fine-tuning. The latest code completions model was trained on over 400 thousand real-world samples from public repositories and further tuned via reinforcement learning using hand-crafted, synthetic training data. Alongside this new model, the team introduced several client-side and UX changes, achieving an over 30% improvement in retained code for completions and a 35% improvement in speed. These enhancements allow GitHub Copilot to anticipate developer needs and act as a proactive coding partner.

Figure: GitHub Copilot fine-tuning process (product accuracy improvement).

Key implications for the future of AI: Fine-tuning, feedback loops, and speed matter 

The experiences of Dragon Copilot and GitHub Copilot underscore a fundamental shift in how differentiated AI products will be built and scaled moving forward. A few key implications emerge:

  1. Fine-tuning is not optional—it’s strategically important: Fine-tuning is no longer niche, but a core capability that unlocks significant performance improvements. Across our products, fine-tuning has led to dramatic gains in accuracy and feature quality. As open-source models democratize access to foundational capabilities, the ability to fine-tune for specific use cases will increasingly define product excellence.
  2. Feedback loops can generate continuous improvement: As foundational models become increasingly commoditized, the long-term defensibility of AI products will not come from the model alone, but from how effectively those models learn from usage. The signals loop—powered by real-world user interactions and fine-tuning—enables teams to deliver high-performing experiences that continuously improve over time.
  3. Companies must evolve to support iteration at scale, and speed will be key: Building a system that supports frequent model updates requires adjusting data pipelines, fine-tuning, evaluation loops, and team workflows. Companies’ engineering and product orgs must align around fast iteration and fine-tuning, telemetry analysis, synthetic data generation, and automated evaluation frameworks to keep up with user needs and model capabilities. Organizations that evolve their systems and tools to rapidly incorporate signals—from telemetry to human feedback—will be best positioned to lead. Azure AI Foundry provides the essential components needed to facilitate this continuous model and product improvement.
  4. Agents require intentional design and continuous adaptation: Building agents goes beyond model selection. It demands thoughtful orchestration of memory, reasoning, and feedback mechanisms. Signals loops enable agents to evolve from reactive assistants into proactive co-workers that learn from interactions and improve over time. Azure AI Foundry provides the infrastructure to support this evolution, helping teams design agents that act, adapt dynamically, and deliver sustained value.

Fine-tuning was not economical in the early days of AI, requiring significant time and effort, but the rise of open-source frontier models and methods like LoRA and distillation has made tuning more cost-effective, and the tools have become easier to use. As a result, fine-tuning is more accessible to more organizations than ever before. While out-of-the-box models have a role to play for horizontal workloads like knowledge search or customer service, organizations are increasingly experimenting with fine-tuning for industry- and domain-specific scenarios, adding their domain-specific data to their products and models.
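
As a rough illustration of why methods like LoRA lower the cost barrier, a parameter-efficient adapter can be configured in a few lines with the Hugging Face peft library; the base model name and target modules below are placeholders that depend on the model being tuned:

```python
# Minimal LoRA sketch with the Hugging Face peft library. "base-model-name" and the
# target_modules list are placeholders; the right values depend on the model architecture.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("base-model-name")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically a small fraction of the full model's weights
# The adapter can then be trained on the signals-loop dataset with a standard transformers
# Trainer, and only the small adapter weights need to be stored and deployed per use case.
```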

The signals loop ‘future-proofs’ AI investments: as usage data is fed back into the fine-tuned model, performance keeps improving over time instead of stagnating.

Figure: The signals loop as future-proofing (continuous improvement over time).

Build adaptive AI experiences with Azure AI Foundry

To simplify the implementation of fine-tuning feedback loops, Azure AI Foundry offers industry-leading fine-tuning capabilities through a unified platform that streamlines the entire AI lifecycle—from model selection to deployment—while embedding enterprise-grade compliance and governance. This empowers teams to build, adapt, and scale AI solutions with confidence and control. 

Here are four key reasons why fine-tuning on Azure AI Foundry stands out: 

  • Model choice: Access a broad portfolio of open and proprietary models from leading providers, with the flexibility to choose between serverless or managed compute options. 
  • Reliability: Rely on 99.9% availability for Azure OpenAI models and benefit from latency guarantees with provisioned throughput units (PTUs). 
  • Unified platform: Leverage an end-to-end environment that brings together models, training, evaluation, deployment, and performance metrics—all in one place. 
  • Scalability: Start small with a cost-effective Developer Tier for experimentation and seamlessly scale to production workloads using PTUs. 

Join us in building the future of AI, where copilots become co-workers, and workflows become self-improving engines of productivity.

Learn more

The post The Signals Loop: Fine-tuning for world-class AI apps and agents  appeared first on Microsoft Azure Blog.


Building Secure AI Chat Systems: Part 2 - Securing Your Architecture from Storage to Network


In Part 1 of this series, we tackled the critical challenge of protecting the LLM itself from malicious inputs. We implemented three essential security layers using Azure AI services: harmful content detection with Azure Content Safety, PII protection with Azure Text Analytics, and prompt injection prevention with Prompt Shields. These guardrails ensure that your AI model doesn't process harmful requests or leak sensitive information through cleverly crafted prompts.

But even with a perfectly secured LLM, your entire AI chat system can still be compromised through architectural vulnerabilities.

For example, the WotNot incident wasn't about prompt injection—it was 346,000 files sitting in an unsecured cloud storage bucket. Likewise, the OmniGPT breach exposed 34 million lines of conversation logs because of backend database security failures.

The global average cost of a data breach is now $4.44 million, and it takes organizations an average of 241 days to identify and contain an active breach. That's eight months during which attackers have free rein in your systems. The financial cost is one thing, but the reputational damage and loss of customer trust are often irreversible.

This article focuses on the architectural security concerns I mentioned at the end of Part 1—the infrastructure that stores your chat histories, the networks that connect your services, and the databases that power your vector searches. We'll examine real-world breaches that happened in 2024 and 2025, understand exactly what went wrong, and implement Azure solutions that would have prevented them.

By the end of this article, you'll have a production-ready, secure architecture for your AI chat system that addresses the most common—and most devastating—security failures we're seeing in the wild.

Let's start with the most fundamental question: where is your data, and who can access it?

1. Preventing Exposed Storage with Network Isolation

The Problem: When Your Database Is One Google Search Away

Let me paint you a picture of what happened with two incidents in 2024-2025:

WotNot AI Chatbot left 346,000 files completely exposed in an unsecured cloud storage bucket—passports, medical records, sensitive customer data, all accessible to anyone on the internet without even a password. Security researchers who discovered it tried for over two months to get the company to fix it.

In May 2025, Canva Creators' data was exposed through an unsecured Chroma vector database operated by an AI chatbot company. The database contained 341 collections of documents including survey responses from 571 Canva Creators with email addresses, countries of residence, and comprehensive feedback. This marked the first reported data leak involving a vector database.

The common thread? Public internet accessibility. These databases and storage accounts were accessible from anywhere in the world. No VPN required. No private network. Just a URL and you were in.

Think about your current architecture. If someone found your Cosmos DB connection string or your Azure Storage account name, what's stopping them from accessing it? If your answer is "just the access key" or "firewall rules," you're one leaked credential away from being in the headlines.

What to do: Azure Private Link + Network Isolation

The most effective way to prevent public exposure is simple: remove public internet access entirely. This is where Azure Private Link becomes your architectural foundation.

With Azure Private Link, you can create a private endpoint inside your Azure Virtual Network (VNet) that becomes the exclusive gateway to your Azure services. Your Cosmos DB, Storage Accounts, Azure OpenAI Service, and other resources are completely removed from the public internet—they only respond to requests originating from within your VNet. Even if someone obtains your connection strings or access keys, they cannot use them without first gaining access to your private network.

Implementation Overview:

To implement Private Link for your AI chat system, you'll need to:

  1. Create an Azure Virtual Network (VNet) to host your private endpoints and application resources
  2. Configure private endpoints for each service (Cosmos DB, Storage, Azure OpenAI, Key Vault)
  3. Set up private DNS zones to automatically resolve service URLs to private IPs within your VNet
  4. Disable public network access on all your Azure resources
  5. Deploy your application inside the VNet using Azure App Service with VNet integration, Azure Container Apps, or Azure Kubernetes Service
  6. Verify isolation by attempting to access resources from outside the VNet (should fail)

You can configure this through the Azure Portal, Azure CLI, ARM templates, or infrastructure-as-code tools like Terraform. The Azure documentation provides step-by-step guides for each service type.
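
For step 6, a small check such as the sketch below (the account name is hypothetical) can confirm whether a service name resolves to a private address from inside the VNet:

```python
# Minimal isolation check for step 6. "cosmos-ai-chat" is a hypothetical account name.
import ipaddress
import socket

host = "cosmos-ai-chat.documents.azure.com"
addresses = {info[4][0] for info in socket.getaddrinfo(host, 443)}

for address in addresses:
    kind = "private" if ipaddress.ip_address(address).is_private else "PUBLIC"
    print(f"{host} -> {address} ({kind})")

# Run from a VM or container inside the VNet, the name should resolve to the private
# endpoint's private IP; from outside the VNet, with public access disabled, connection
# attempts to the account should fail even with valid keys.
```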

Figure 1: Private Link Architecture for AI Chat Systems

Private endpoints ensure all data access occurs within the Azure Virtual Network, blocking public internet access to databases, storage, and AI services.

2. Protecting Conversation Data with Encryption at Rest

The Problem: When Backend Databases Become Treasure Troves

Network isolation solves the problem of external access, but what happens when attackers breach your perimeter through other means? What if a malicious insider gains access? What if there's a misconfiguration in your cloud environment? The data sitting in your databases becomes the ultimate prize.

In February 2025, OmniGPT suffered a catastrophic breach where attackers accessed the backend database and extracted personal data from 30,000 users including emails, phone numbers, API keys, and over 34 million lines of conversation logs. The exposed data included links to uploaded files containing sensitive credentials, billing details, and API keys.

These weren't prompt injection attacks. These weren't DDoS incidents. These were failures to encrypt sensitive data at rest. When attackers accessed the storage layer, they found everything in readable format—a goldmine of personal information, conversations, and credentials.

Think about the conversations your AI chat system stores. Customer support queries that might include account numbers. Healthcare chatbots discussing symptoms and medications. HR assistants processing employee grievances. If someone gained unauthorized (or even authorized) access to your database today, would they be reading plaintext conversations?

What to do: Azure Cosmos DB with Customer-Managed Keys

The fundamental defense against data exposure is encryption at rest—ensuring that data stored on disk is encrypted and unreadable without the proper decryption keys. Even if attackers gain physical or logical access to your database files, the data remains protected as long as they don't have access to the encryption keys.

But who controls those keys?

With platform-managed encryption (the default in most cloud services), the cloud provider manages the encryption keys. While this protects against many threats, it doesn't protect against insider threats at the provider level, compromised provider credentials, or certain compliance scenarios where you must prove complete key control.

Customer-Managed Keys (CMK) solve this by giving you complete ownership and control of the encryption keys. You generate, store, and manage the keys in your own key vault. The cloud service can only decrypt your data by requesting access to your keys—access that you control and can revoke at any time. If your keys are deleted or access is revoked, even the cloud provider cannot decrypt your data.

Azure makes this easy with Azure Key Vault integrated with Azure Cosmos DB. The architecture uses "envelope encryption" where your data is encrypted with a Data Encryption Key (DEK), and that DEK is itself encrypted with your Key Encryption Key (KEK) stored in Key Vault. This provides layered security where even if the database is compromised, the data remains encrypted with keys only you control.
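
For intuition, here is an illustrative-only sketch of the DEK/KEK relationship using the Python cryptography package; Cosmos DB performs all of this for you, so this is only meant to show the mechanics described above:

```python
# Illustrative-only sketch of envelope encryption with the "cryptography" package.
# Cosmos DB does this for you; the point is just to show the DEK/KEK relationship.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())   # Key Encryption Key -- in practice held only in Key Vault
dek_key = Fernet.generate_key()       # Data Encryption Key -- encrypts the actual data
dek = Fernet(dek_key)

ciphertext = dek.encrypt(b"user: please summarize my last three support tickets")
wrapped_dek = kek.encrypt(dek_key)    # only the wrapped DEK is stored alongside the data

# Reading the data requires unwrapping the DEK with the KEK, i.e. access to Key Vault.
recovered = Fernet(kek.decrypt(wrapped_dek)).decrypt(ciphertext)
assert recovered == b"user: please summarize my last three support tickets"
```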

While we covered PII detection and redaction using Azure Text Analytics in Part 1—which prevents sensitive data from being stored in the first place—encryption at rest with Customer-Managed Keys provides an additional, powerful layer of protection. In fact, many compliance frameworks like HIPAA, PCI-DSS, and certain government regulations explicitly require customer-controlled encryption for data at rest, making CMK not just a best practice but often a mandatory requirement for regulated industries.

Implementation Overview:

To implement Customer-Managed Keys for your chat history and vector storage:

  1. Create an Azure Key Vault with purge protection and soft delete enabled (required for CMK)
  2. Generate or import your encryption key in Key Vault (2048-bit RSA or 256-bit AES keys)
  3. Grant Cosmos DB access to Key Vault using a system-assigned or user-assigned managed identity
  4. Enable CMK on Cosmos DB by specifying your Key Vault key URI during account creation or update
  5. Configure the same for Azure Storage if you're storing embeddings or documents in Blob Storage
  6. Set up key rotation policies to automatically rotate keys on a schedule (recommended: every 90 days)
  7. Monitor key usage through Azure Monitor and set up alerts for unauthorized access attempts

Figure 2: Envelope Encryption with Customer-Managed Keys

User conversations are encrypted using a two-layer approach: (1) The AI Chat App sends plaintext messages to Cosmos DB, (2) Cosmos DB authenticates to Key Vault using Managed Identity to retrieve the Key Encryption Key (KEK), (3) Data is encrypted with a Data Encryption Key (DEK), (4) The DEK itself is encrypted with the KEK before storage. This ensures data remains encrypted even if the database is compromised, as decryption requires access to keys stored in your Key Vault.

For AI chat systems in regulated industries (healthcare, finance, government), Customer-Managed Keys should be your baseline. The operational overhead is minimal with proper automation, and the compliance benefits are substantial.

The entire process can be automated using Azure CLI, PowerShell, or infrastructure-as-code tools. For existing Cosmos DB accounts, enabling CMK requires creating a new account and migrating data.
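
As a minimal sketch of the Key Vault side (the vault and key names are hypothetical), the KEK can be created with the azure-keyvault-keys SDK and its URI then supplied to Cosmos DB:

```python
# Minimal sketch of the Key Vault side of CMK. The vault and key names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

credential = DefaultAzureCredential()
key_client = KeyClient(vault_url="https://kv-ai-chat.vault.azure.net", credential=credential)

# Create a 2048-bit RSA key to act as the Key Encryption Key (KEK).
kek = key_client.create_rsa_key("cosmos-cmk", size=2048)
print("Key Vault key URI:", kek.id)

# This key URI is what you supply as the Cosmos DB account's key vault key URI when
# enabling CMK (via the portal, Azure CLI, or an ARM/Bicep template). The vault itself
# must have soft delete and purge protection enabled, and the account's managed identity
# needs wrap/unwrap permissions on the key.
```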

3. Securing Vector Databases and Preventing Data Leakage

The Problem: Vector Embeddings Are Data Too

Vector databases are the backbone of modern RAG (Retrieval-Augmented Generation) systems. They store embeddings—mathematical representations of your documents, conversations, and knowledge base—that allow your AI to retrieve relevant context for every user query. But here's what most developers don't realize: those vectors aren't just abstract numbers. They contain your actual data.

A critical oversight in AI chat architectures is treating vector databases—or in our case, Cosmos DB collections storing embeddings—as less sensitive than traditional data stores. Whether you're using a dedicated vector database or storing embeddings in Cosmos DB alongside your chat history, these mathematical representations need the same rigorous security controls as the original text.

In one documented case, a shared vector database inadvertently mixed data between two corporate clients. One client's proprietary information began surfacing in response to the other client's queries, creating a serious confidentiality breach in what was supposed to be an isolated multi-tenant system.

Even more concerning are embedding inversion attacks, where adversaries exploit weaknesses to reconstruct original source data from its vector representation—effectively reverse-engineering your documents from the mathematical embeddings. 

Think about what's in your vector storage right now. Customer support conversations. Internal company documents. Product specifications. Medical records. Legal documents. If you're running a multi-tenant system, are you absolutely certain that Company A can't retrieve Company B's data? Can you guarantee that embeddings can't be reverse-engineered to expose the original text?

What to do: Azure Cosmos DB for MongoDB with Logical Partitioning and RBAC

The security of vector databases requires a multi-layered approach that addresses both storage isolation and access control. Azure Cosmos DB for MongoDB provides native support for vector search while offering enterprise-grade security features specifically designed for multi-tenant architectures.

Logical partitioning creates strict data boundaries within your database by organizing data into isolated partitions based on a partition key (like tenant_id or user_id). When combined with Role-Based Access Control (RBAC), you create a security model where users and applications can only access their designated partitions—even if they somehow gain broader database access.

Implementation Overview:

To implement secure multi-tenant vector storage with Cosmos DB:

  1. Enable MongoDB RBAC on your Cosmos DB account using the EnableMongoRoleBasedAccessControl capability
  2. Design your partition key strategy based on tenant_id, user_id, or organization_id for maximum isolation
  3. Create collections with partition keys that enforce tenant boundaries at the storage level
  4. Define custom RBAC roles that grant access only to specific databases and partition key ranges
  5. Create user accounts per tenant or service principal with assigned roles limiting their scope
  6. Implement partition-aware queries in your application to always include the partition key filter
  7. Enable diagnostic logging to track all vector retrieval operations with user identity
  8. Configure cross-region replication for high availability while maintaining partition isolation

Figure 3: Multi-Tenant Data Isolation with Partition Keys and RBAC

Azure Cosmos DB enforces tenant isolation through logical partitioning and Role-Based Access Control (RBAC). Each tenant's data is stored in separate partitions (Partition A, B, C) based on the partition key (tenant_id). RBAC acts as a security gateway, validating every query to ensure users can only access their designated partition. Attempts to access other tenants' partitions are blocked at the RBAC layer, preventing cross-tenant data leakage in multi-tenant AI chat systems.

Azure provides comprehensive documentation and CLI tools for configuring RBAC roles and partition strategies. The key is to design your partition scheme before loading data, as changing partition keys requires data migration.
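
To illustrate the partition-aware query pattern from step 6, here is a minimal sketch; the database, collection, and field names are hypothetical:

```python
# Minimal partition-aware retrieval sketch. Database, collection, and field names are
# hypothetical; the connection string is read from an environment variable here for
# brevity but would normally come from Key Vault via a managed identity.
import os
from pymongo import MongoClient

client = MongoClient(os.environ["COSMOS_MONGO_CONNECTION_STRING"])
embeddings = client["chatdb"]["embeddings"]

def retrieve_context(tenant_id: str, doc_ids: list[str]) -> list[dict]:
    # Every query includes the tenant_id partition key, so even a bug elsewhere in the
    # application (or an RBAC role scoped to this tenant) cannot return another tenant's
    # documents.
    query = {"tenant_id": tenant_id, "doc_id": {"$in": doc_ids}}
    return list(embeddings.find(query))
```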

Beyond partitioning and RBAC, implement these AI-specific security measures:

  • Validate embedding sources: Authenticate and continuously audit external data sources before vectorizing to prevent poisoned embeddings
  • Implement similarity search thresholds: Set minimum similarity scores to prevent irrelevant cross-context retrieval
  • Use metadata filtering: Add security labels (classification levels, access groups) to vector metadata and enforce filtering
  • Monitor retrieval patterns: Alert on unusual patterns like one tenant making queries that correlate with another tenant's data
  • Separate vector databases per sensitivity level: Keep highly confidential vectors (PII, PHI) in dedicated databases with stricter controls
  • Hash document identifiers: Use hashed references instead of plaintext IDs in vector metadata to prevent enumeration attacks

For production AI chat systems handling multiple customers or sensitive data, Cosmos DB with partition-based RBAC should be your baseline. The combination of storage-level isolation and access control provides defense in depth that application-layer filtering alone cannot match.

Bonus: Secure Logging and Monitoring for AI Chat Systems

During development, we habitually log everything—full request payloads, user inputs, model responses, stack traces. It's essential for debugging. But when your AI chat system goes to production and starts handling real user conversations, those same logging practices become a liability.

Think about what flows through your AI chat system: customer support conversations containing account numbers, healthcare queries discussing medical conditions, HR chatbots processing employee complaints, financial assistants handling transaction details. If you're logging full conversations for debugging, you're creating a secondary repository of sensitive data that's often less protected than your primary database.

The average breach takes 241 days to identify and contain. During that time, attackers often exfiltrate not just production databases, but also log files and monitoring data—places where developers never expected sensitive information to end up.

The question becomes: how do you maintain observability and debuggability without creating a security nightmare?

The Solution: Structured Logging with PII Redaction and Azure Monitor

The key is to log metadata, not content. You need enough information to trace issues and understand system behavior without storing the actual sensitive conversations.

Azure Monitor with Application Insights provides enterprise-grade logging infrastructure with built-in features for sanitizing sensitive data. Combined with proper application-level controls, you can maintain full observability while protecting user privacy.

What to Log in Production AI Chat Systems:

 

DO log:

  • Request timestamps and duration
  • User IDs (hashed or anonymized)
  • Session IDs (hashed)
  • Model names and versions used
  • Token counts (input/output)
  • Embedding dimensions and similarity scores
  • Retrieved document IDs (not content)
  • Error codes and exception types
  • Performance metrics (latency, throughput)
  • RBAC decisions (access granted/denied)
  • Partition keys accessed
  • Rate limiting triggers

DON'T log:

  • Full user messages or prompts
  • Complete model responses
  • Raw embeddings or vectors
  • Personally identifiable information (PII)
  • Retrieved document content
  • Database connection strings or API keys
  • Complete stack traces that might contain data
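
A minimal sketch of metadata-only logging with the azure-monitor-opentelemetry package follows; the field names and hashing scheme are illustrative, not a prescribed schema, and the exporter reads the Application Insights connection string from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable:

```python
# Minimal metadata-only logging sketch. Field names and the hashing scheme are illustrative;
# requires the azure-monitor-opentelemetry package and an Application Insights connection
# string in the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
import hashlib
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor()               # routes Python logging and traces to Application Insights
logger = logging.getLogger("ai_chat")

def _anonymize(value: str) -> str:
    return hashlib.sha256(value.encode()).hexdigest()[:16]

def log_chat_turn(user_id: str, session_id: str, model: str, input_tokens: int,
                  output_tokens: int, latency_ms: float, retrieved_doc_ids: list[str]) -> None:
    # Log metadata only -- never the prompt, the response, or retrieved document content.
    logger.info(
        "chat_turn_completed",
        extra={
            "user_hash": _anonymize(user_id),
            "session_hash": _anonymize(session_id),
            "model": model,
            "input_tokens": input_tokens,
            "output_tokens": output_tokens,
            "latency_ms": latency_ms,
            "retrieved_doc_ids": ",".join(retrieved_doc_ids),
        },
    )
```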

Final Remarks: Building Compliant, Secure AI Systems

Throughout this two-part series, we've addressed the complete security spectrum for AI chat systems—from protecting the LLM itself to securing the underlying infrastructure. But there's a broader context that makes all of this critical: compliance and regulatory requirements.

AI chat systems operate within an increasingly complex regulatory landscape. The EU AI Act, which entered force on August 1, 2024, became the first comprehensive AI regulation by a major regulator, assigning applications to risk categories with high-risk systems subject to specific legal requirements. The NIS2 Directive further requires that AI model endpoints, APIs, and data pipelines be protected to prevent breaches and ensure secure deployment. 

Beyond AI-specific regulations, chat systems must comply with established data protection frameworks depending on their use case. GDPR mandates data minimization, user rights to erasure and data portability, 72-hour breach notification, and EU data residency for systems serving European users. Healthcare chatbots must meet HIPAA requirements including encryption, access controls, 6-year audit log retention, and Business Associate Agreements. Systems processing payment information fall under PCI-DSS, requiring cardholder data isolation, encryption, role-based access controls, and regular security testing. B2B SaaS platforms typically need SOC 2 Type II compliance, demonstrating security controls over data availability, confidentiality, continuous monitoring, and incident response procedures.

Azure's architecture directly supports these compliance requirements through its built-in capabilities. Private Link enables data residency by keeping traffic within specified Azure regions while supporting network isolation requirements. Customer-Managed Keys provide the encryption controls and key ownership mandated by HIPAA and PCI-DSS. Cosmos DB's partition-based RBAC creates the access controls and audit trails required across all frameworks. Azure Monitor and diagnostic logging satisfy audit and monitoring requirements, while Azure Policy and Microsoft Purview automate compliance enforcement and reporting. The platform's certifications and compliance offerings (including HIPAA, PCI-DSS, SOC 2, and GDPR attestations) provide the documentation and third-party validation that auditors require, significantly reducing the operational burden of maintaining compliance.

Further Resources:

Stay secure, stay compliant, and build responsibly.

 

 

 


Wayback Machine at 1 Trillion


In 1996, the web was still young—a chaotic, creative frontier built one page at a time. That same year, the Internet Archive set out to preserve it all. Nearly three decades later, that audacious goal has reached a generational milestone: 1 trillion web pages preserved.

Co-hosts Chris Freeland (Internet Archive) and Dave Hansen (Authors Alliance) talk with Mark Graham, director of the Wayback Machine, about how this vast public archive came to be—and what 1 trillion captures mean for humanity’s collective memory.

This conversation was recorded on 10/16/2025.

Check out all of the Future Knowledge episodes at https://archive.org/details/future-knowledge





Download audio: https://media.transistor.fm/d11074fe/c4c11b4f.mp3

SE Radio 691: Kacper Łukawski on Qdrant Vector Database


Kacper Łukawski, a Senior Developer Advocate at Qdrant, speaks with host Gregory M. Kapfhammer about the Qdrant vector database and similarity search engine. After introducing vector databases and the foundational concepts undergirding similarity search, they dive deep into the Rust-based implementation of Qdrant. Along with comparing and contrasting different vector databases, they also explore the best practices for the performance evaluation of systems like Qdrant. Kacper and Gregory also discuss topics such as the steps for using Python to build an AI-powered application that uses Qdrant.

Brought to you by IEEE Computer Society and IEEE Software magazine.





Download audio: https://traffic.libsyn.com/secure/seradio/691-kacper-lukawski-qdrant-vector-database.mp3?dest-id=23379

Becoming a Cybersecurity Expert with Paula Januszkiewicz


How do you become a cybersecurity expert? While at Cybersecurity Intersection in Orlando, Richard chatted with Paula Januszkiewicz about her career in cybersecurity. Paula talks about the insatiable curiosity to understand why things work the way they do: why an exploit happens, and following the twists and turns that lead to the root cause and permanent solutions. The conversation delves into the balance between education and experience, the types of work available in cybersecurity, and pursuing your passion!

Links

Recorded October 5, 2025





Download audio: https://cdn.simplecast.com/audio/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/97d3e73d-58ae-4e23-a43a-167957729c51/audio/64cb57da-79f5-4eb2-b0c9-4ede093e11c9/default_tc.mp3?aid=rss_feed&feed=cRTTfxcT

Releasing Windows 11 Builds 26100.7015 and 26200.7015 to the Release Preview Channel

Hello Windows Insiders, today we’re releasing Windows 11 Builds 26100.7015 and 26200.7015 (KB5067036) to Insiders in the Release Preview Channel on Windows 11, version 24H2 (Build 26100) and version 25H2 (Build 26200). Below is a summary of the new features and improvements included as part of this update separated into two sections: gradual rollout and normal rollout. The bold text within the brackets indicates the item or area of the change we are documenting.

Gradual rollout

The following features and improvements might not be available to all users because they will roll out gradually. Text bolded in brackets indicates the area of the change being documented. Please note that features and improvements that roll out gradually may not begin rolling out right away and may not appear for everyone immediately.
  • [Start menu] New! The redesigned Start menu is built to help you access your apps more quickly and smoothly. Its new layout makes it easier than ever to find what you need.
    • Scrollable ‘All’ section: The main page now includes a scrollable “All” section, making it easier to find apps.
    • Category and grid views: Switch between two new views—category view, which groups apps by type and highlights frequently used ones, and grid view, which lists apps alphabetically with more horizontal space for easier scanning. The menu remembers your last selected view.
    • Responsive layout: The Start menu adapts to your screen size. Larger displays show more pinned apps, recommendations, and categories by default. Sections like Pinned and Recommended expand or collapse based on content. You can customize these views under Settings > Personalization > Start.
    • Phone Link integration: A new mobile device button next to Search lets you expand or collapse content from your connected phone. This feature supports Android and iOS devices in most markets and will roll out to the European Economic Area (EEA) in 2025.
  • [Voice access]
    • New! Fluid Dictation in voice access makes voice-based dictation smoother and smarter. It helps correct grammar, punctuation, and filler words in real time. Powered by on-device small language models (SLMs), it offers fast and private processing. Fluid Dictation is on by default—just open the voice access app to start. To turn it on or off, go to Settings or say Turn on Fluid Dictation or Turn off Fluid Dictation. You can use it in most text input apps, but it’s disabled in secure fields like passwords or PINs. Fluid Dictation is available in all English locales on Copilot+ PCs.
    • New! You can now configure a delay before a voice command is executed. To set this up, go to Voice access settings > Wait time before acting, and select the option that best matches your preferences.
    • New! Voice Access now supports Japanese, expanding accessibility for more users. You can now navigate, dictate, and interact with Windows using voice commands.  With this update, Windows users who speak Japanese can enjoy a hands-free, voice-powered PC experience.
    • Fixed: Voice access unexpectedly stops working, showing error code 9001.
  • [Agent in Settings] New! The new agent in the Settings experience now supports French.
  • [Settings] New! The “Email & accounts” section is now called “Your accounts.” You manage all your accounts under Settings > Accounts.
  • [Click to Do]
    • New! The prompt box in Click to Do streamlines interaction with Copilot, helping you work more efficiently. You can type a custom prompt directly into the text box, which sends your prompt and selected on-screen content to Copilot. Suggested prompts appear below the text box and are powered locally by Phi Silica (https://learn.microsoft.com/en-us/windows/ai/apis/phi-silica). These suggestions are available for text selections in English, Spanish, and French. This feature isn’t currently available in the European Economic Area (EEA) or China.
    • New! You can now translate on-screen text with Microsoft Copilot with Click to Do. When you select text that is in a different language from your Windows display language or preferred language setting, a translation suggestion appears. After selecting the option to translate the text, the translated text will appear in the Copilot app. You can see the supported regions and languages in Microsoft Copilot. This experience is not yet available for customers in the EEA (European Economic Area) or China.
    • New! Click to Do provides unit conversions for length, area, volume, height, temperature, and speed. When you hover over a number + unit, a floating tooltip will show you the conversion. When you select the number + unit, the context menu opens with additional conversion options. You can also access more conversions via the Copilot app. This experience is not yet available for customers in the EEA (European Economic Area) or China.
    • New! By pressing and holding two fingers anywhere on your Copilot+ PC with a touch screen, you can simultaneously launch Click to Do, select the entity under your finger and see relevant actions.
    • New! You can now select objects in Click to Do using Freeform Selection, Rectangle Selection and Ctrl + Click. To use Freeform Selection Mode, select the Freeform button on the toolbar. Then use your pen or finger to draw freely around the items you want to select. To use the Rectangle Selection Mode, select the Rectangle Selection button on the toolbar. Then drag a box around the items. Everything inside the rectangle will be selected. You can also use Ctrl + Click to select multiple types of items—such as text and images—by holding down the Ctrl key and selecting each item.
    • New! Click to Do can now detect tables. You can now highlight any simple table and send it to Excel, copy or share it. With a table in any application, you can press Win + Click to invoke Click to Do, or Win + Q or right swipe and tap to select the table. Once selected, you’ll see the actions you can take like Convert to table with Excel. Just click, capture and continue. This is available on Snapdragon-powered Copilot+ PCs, with support for AMD and Intel-powered Copilot+ PCs coming soon. You’ll need the latest Microsoft Excel application installed on your PC to see Convert to table with Excel and a Microsoft 365 Subscription. This change is not yet rolling out to Windows Insiders in the EEA (European Economic Area).
    • New! Live Persona Cards from Microsoft 365 now appear in Click to Do. To view a profile card, press Win + Click on an email address from your organization.
    • New! Visual cues now make key items, such as emails, tables, and more, light up on your screen as you launch Click to Do.
    • Fixed: Click to Do may unexpectedly invoke sometimes when pressing Windows key + P.
  • [File Explorer]
    • New! Recommended files in File Explorer Home are now available in personal Microsoft accounts and local accounts. Recommended files show content such as files you frequently used, have recently downloaded, or added to your File Explorer Gallery. This change is not yet rolling out in the EEA (European Economic Area). If you would prefer not to see the recommended section in File Explorer Home, this can be turned off in File Explorer Folder Options. When turned off, folders pinned to Quick Access will display instead.
    • New! When you hover over a file in File Explorer Home, commands such as Open file location or Ask Copilot appear. This experience is available if you’re signed in with a Microsoft account. Support for work and school accounts (Entra ID) will be available in a future update. This change is not available in the EEA (European Economic Area).
    • New! StorageProvider APIs are now available for cloud providers to integrate with File Explorer Home. Developers can learn to enable the system to query for suggested files.
    • Fixed: The File Explorer context menu may unexpectedly switch back and forth between the normal view and Show More Options on each right click.
    • Fixed: When opening a folder from another app (for example, opening the Downloads folder from a browser), your custom view — including sorting files by name, changing the icon size, or removing grouping — unexpectedly resets back to default.
    • Fixed: The body of the File explorer window may no longer respond to mouse clicks after invoking the context menu.
    • Fixed: Extracting very large archive folders (1.5 GB+) may fail with a Catastrophic Error (0x8000FFFF).
    • Fixed: File Explorer may become unresponsive when opening Home.
  • [Taskbar]
    • New! The battery icons are now improved to display colored icons to indicate charging states, simplified overlays that don’t block the percentage bars, and an option to turn on battery percentage. A green battery icon shows that your PC is charging and is in a good state. A yellow icon indicates your battery is less than or equal to 20%. You can also enable the ability to see the battery percentage next to the battery icon in the system tray. To turn on this feature, go to Settings > Power & battery and toggle on the “Battery Percentage” setting.
    • New! When you hover over an open app icon on the taskbar, a thumbnail preview of the app window appears with a new “Share with Copilot” button beneath this preview. Selecting it allows Copilot Vision to scan, analyze, and offer insights on the content displayed by the app at that time. This option can be turned off if preferred under Settings > Personalization > Taskbar > Taskbar behaviors, using the checkbox “Share any window from my taskbar”.
    • Fixed: If you hover over an app icon on the taskbar, and then click the window preview, the preview may dismiss and not bring the window to the foreground.
  • [Lock screen] New! The new battery icons, featuring color indicators and battery percentage, now appear in the lower-right corner of the lock screen.
  • [Microsoft 365 Copilot] New! A new Microsoft 365 Copilot page is added to the Get Started experience for commercial devices managed with an active Microsoft 365 subscription. This page helps you discover and engage with Microsoft 365 Copilot more easily.
  • [Windows Setup Experience] New! You can now name your default user folder during setup. On the Microsoft account sign-in page, press Shift + F10 to open Command Prompt. Type “cd oobe” and press Enter. Then type “SetDefaultUserFolder.cmd” and press Enter. Enter a folder name of your choice and proceed with the MSA sign-in. The folder name cannot be more than 16 characters and only Unicode characters are supported. The custom folder name will be applied if valid. If not, Windows will automatically generate a profile folder name from your Microsoft email address.
  • [Logging into your PC] Improved: Made underlying changes to help improve the performance of loading the taskbar when unlocking your PC after coming out of sleep. This should also help with cases where the password field and other login screen contents didn’t render when transitioning from lock screen to login screen after sleep.
  • [Windows Update]
    • Improved: Addressed underlying issue which can cause “Update and shutdown” to not actually shut down your PC after updating.
    • Improved: Addressed underlying issue which can cause Windows Update to fail to install with error 0x800f0983.
  • [Remote Credential Guard] Fixed: Remote Credential Guard scenarios between the latest Windows 11 builds and Server 2022 (and below) may unexpectedly fail.
  • [Display and Graphics]
    • Fixed: Apps and browsers may have partially stuck onscreen content when other maximized / full screen apps are updating in the background. This is noticeable particularly when trying to scroll the window content, as only some parts update.
    • Fixed: After recent updates, some videos and games may be unexpectedly red.
    • Fixed: If Connected Devices Platform Service has been disabled, Settings may crash when trying to open Settings > System > Display (including if launched using the menu when right-clicking the desktop).
  • [Input] Fixed: An issue related to microsoft.ink.dll and relevant APIs that could result in pen and handwriting not working correctly in apps, or in app crashes, due to unexpected exceptions being thrown.
  • [Open and Save Dialog] Fixed: Certain apps may become unresponsive when launching the Open or Save Dialog.
  • [Administrator Protection Preview] Administrator protection aims to protect against free-floating admin rights for administrator users, allowing them to still perform all admin functions with just-in-time admin privileges. This feature is off by default and needs to be enabled via OMA-URI in Intune or via group policy.

Normal rollout

This update includes the following features and improvements that are rolling out as part of this update. Text bolded in brackets indicates the area of the change being documented.
  • [Authentication] Fixed: An issue that caused an ACCESS_DENIED error when users attempted to change passwords remotely on member servers or workgroup devices, even when they had the required permissions.
  • [Media] Fixed: This update addresses an issue where protected content playback fails on some machines after installing KB5064081.
Thanks, Windows Insider Program Team