A few weeks ago at Microsoft Ignite 2025, we announced the General Availability (GA) of Mirroring for Azure Database for PostgreSQL flexible server in Microsoft Fabric. This milestone marks a major leap forward in empowering organizations to seamlessly integrate their operational PostgreSQL data into the Microsoft Fabric analytics ecosystem—enabling near real-time analytics, machine learning, and business intelligence without the complexity of traditional ETL pipelines.
Fabric Mirroring eliminates the need for complex, custom ETL pipelines. Data from your operational PostgreSQL databases is continuously replicated into OneLake as Delta tables, making it instantly available for analytics, machine learning, and reporting. This means you can:
With support for Entra ID, VNETs, and Private Endpoints, organizations can enforce strict access controls and network isolation. Mirroring is designed to meet the needs of highly regulated industries, ensuring data privacy and compliance at every step.
The new HA support ensures that mirroring sessions remain resilient to failures, delivering uninterrupted analytics even during server failovers. This is essential for mission-critical applications where downtime is not an option.
Mirroring is offered at no additional cost, dramatically reducing the total cost of ownership for analytics solutions. By removing ETL complexity, organizations can focus on extracting value from their data rather than managing infrastructure.
Building on the momentum of our Public Preview, the GA release introduces several enterprise-grade enhancements:
For a full list of prerequisites and setup guidance, see the official documentation.
Support for Entra ID database roles includes:
This feature ensures secure and efficient connectivity for flexible servers within Microsoft Fabric. Connecting to a flexible server without public connectivity enhances security during both initial setup and ongoing operations. Establishing a Virtual Network Gateway on the target VNET enables encrypted traffic between networks, while subnet delegation allows specific resources within a subnet to be dedicated to specialized tasks. Mirroring supports servers restricted by Virtual Network (VNET) and Private Endpoint configurations, enabling robust network isolation and protection from unauthorized access.
Fabric Mirroring supports high availability by enabling seamless failover and enhanced fault tolerance for servers configured with HA. This feature requires PostgreSQL version 17 or later, as replication slot failover is only available in these versions. If you are using an earlier PostgreSQL version, you will need to manually reconfigure mirroring after each failover event to maintain replication.
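If you want to verify these prerequisites yourself, the following is a minimal sketch using psycopg; the connection details are placeholders for your own flexible server, and the `failover` column it inspects only exists on PostgreSQL 17 and later:

```python
# Check whether a flexible server meets the HA-mirroring requirements above:
# PostgreSQL 17+ and replication slots marked for failover synchronization.
import psycopg

with psycopg.connect(
    host="<your-server>.postgres.database.azure.com",
    dbname="postgres",
    user="<admin-user>",
    password="<password>",
    sslmode="require",
) as conn, conn.cursor() as cur:
    # server_version_num is e.g. 170002 for PostgreSQL 17.2
    cur.execute("SHOW server_version_num;")
    version = int(cur.fetchone()[0])

    if version >= 170000:
        # PostgreSQL 17 adds a 'failover' column to pg_replication_slots,
        # marking slots that are synchronized to the standby.
        cur.execute("SELECT slot_name, failover FROM pg_replication_slots;")
        for slot_name, failover in cur.fetchall():
            status = "failover-enabled" if failover else "not synchronized"
            print(f"{slot_name}: {status}")
    else:
        print("Pre-17 server: reconfigure mirroring manually after failover.")
```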
Fabric Mirroring is not limited to PostgreSQL. The GA release also includes support for other databases like:
This interoperability allows organizations to consolidate data from diverse sources into OneLake, unlocking unified analytics and AI across their entire data estate. By leveraging shortcuts in Microsoft Fabric, customers can reference data stored in different mirrored databases and storage accounts as if it resided in a single location. This means users can build cross-database queries and analytics pipelines without physically moving or duplicating data, avoiding the need for complex ETL processes or data integration solutions. Shortcuts make it possible to seamlessly join, analyze, and visualize data from SQL Server, Snowflake, Cosmos DB, and more within OneLake, streamlining analytics workflows and accelerating time to insight while reducing storage costs and operational overhead.
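For example, here is a hedged sketch of such a cross-source query in a Fabric notebook, assuming a default lakehouse with two shortcuts: `pg_orders` (pointing at a mirrored PostgreSQL table) and `cosmos_customers` (pointing at a mirrored Cosmos DB table). Both names are hypothetical.

```python
# Join two mirrored sources in place via lakehouse shortcuts --
# no ETL pipeline and no data movement. `spark` is the session
# that Fabric notebooks provide by default.
orders = spark.read.format("delta").load("Tables/pg_orders")
customers = spark.read.format("delta").load("Tables/cosmos_customers")

top_spenders = (
    orders.join(customers, "customer_id")
          .groupBy("customer_id", "customer_name")
          .sum("order_total")
          .orderBy("sum(order_total)", ascending=False)
          .limit(10)
)
top_spenders.show()
```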
Ready to experience the power of Mirroring for Azure Database for PostgreSQL Flexible Server in Microsoft Fabric?
Looking ahead, our focus will be on delivering a series of post-GA enhancements designed to make Mirroring for Azure Database for PostgreSQL Flexible Server even more robust, versatile, and user-friendly. Key advancements will come in the following areas:
Collectively, these planned features underscore our commitment to continuous improvement and to meeting the evolving needs of our users as they harness the full power of Microsoft Fabric for unified data analytics and AI.
The General Availability of Mirroring for Azure Database for PostgreSQL Flexible Server in Microsoft Fabric represents a significant advancement for organizations seeking to unlock real-time analytics, AI, and BI on their operational data—securely, reliably, and without ETL complexity. With new enterprise features, proven customer success, and broad interoperability, now is the perfect time to bring your operational databases into the Microsoft Fabric analytics era.
Learn more and get started today: Fabric Mirroring for Azure Database for PostgreSQL
Modern multiplayer games demand more than fast servers - they require persistent, reliable, low-latency communication at massive scale, often under highly unpredictable traffic patterns. Launch days, seasonal events, and promotions can generate connection spikes that dwarf steady-state traffic, while players still expect real-time responsiveness and stability.
In this post, we’ll explore how a game studio building a large-scale online RPG addressed these challenges using Azure Web PubSub, and what lessons other game developers can apply when designing their own real-time backend architectures.
The studio began with a backend architecture that relied heavily on polling a centralized data store to synchronize multiplayer state - such as party invitations, friend presence, and session updates - across geographically distributed game servers.
This approach worked initially, but it came with clear drawbacks:
As multiplayer adoption grew and concurrency increased into the hundreds of thousands, these limitations became increasingly painful - especially during major releases and promotional events.
“Building multiplayer games is very different from building typical business APIs. Small timing and synchronization issues are immediately visible to players.”
The team needed a solution that could:
The initial instinct was to build a custom WebSocket infrastructure in-house. But persistent connections, failover, reconnection logic, scaling behavior, and regional distribution quickly added up to a large and risky engineering effort.
Instead, the team opted for Azure Web PubSub, a fully managed service designed for large-scale, real-time messaging over WebSockets. What stood out wasn’t just performance but the operational simplicity and cost model.
After adopting Azure Web PubSub, the backend architecture changed fundamentally:
This eliminated polling entirely and unlocked new real-time capabilities with minimal additional complexity.
State changes - such as party invites or presence updates - are delivered immediately instead of waiting for polling intervals. What once took seconds now arrives in tens of milliseconds.
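To make the push model concrete, here is a minimal sketch using the Python SDK for Azure Web PubSub (`azure-messaging-webpubsubservice`); the hub name, user IDs, and payload are illustrative rather than the studio's actual schema:

```python
import json
from azure.messaging.webpubsubservice import WebPubSubServiceClient

service = WebPubSubServiceClient.from_connection_string(
    "<your-connection-string>", hub="social"
)

# Deliver a party invite straight to the invitee's open connections --
# no polling loop, no round trip through a shared data store.
service.send_to_user(
    "player-42",
    json.dumps({"type": "party_invite", "from": "player-7", "party_id": "p-123"}),
    content_type="application/json",
)
```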
Azure Web PubSub supports:
This makes it well-suited for launch-day spikes, where traffic may surge for a few weeks and then settle to a much lower baseline.
In practice, backend-to-service latency stays in single-digit milliseconds, with end-to-end delivery typically under 100 ms - a dramatic improvement over polling-based designs.
For asynchronous game features, even modest latency differences can significantly improve perceived responsiveness.
A critical insight for game workloads is how Azure Web PubSub pricing works:
This makes Web PubSub particularly attractive for games with:
One of the most common questions game teams ask is:
"How do we handle massive launch traffic without locking ourselves into long-term costs?"
During launch or major promotions, scale out to absorb peak connection volume; after traffic stabilizes, scale back down to match the steady-state baseline.
This strategy balances reliability, latency, and cost, while avoiding unnecessary complexity during the most critical periods.
Rather than relying on one large global endpoint, the recommended pattern is to:
This improves fault isolation, reduces blast radius, and aligns well with how large game backends already segment players.
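A hedged sketch of what that sharding can look like in a negotiate endpoint; the regions, connection strings, and region-resolution logic are all hypothetical placeholders:

```python
from azure.messaging.webpubsubservice import WebPubSubServiceClient

# One Web PubSub instance per region, mirroring how players are segmented.
REGIONAL_HUBS = {
    "us-east": WebPubSubServiceClient.from_connection_string("<east-us-conn-str>", hub="game"),
    "eu-west": WebPubSubServiceClient.from_connection_string("<west-eu-conn-str>", hub="game"),
    "ap-east": WebPubSubServiceClient.from_connection_string("<east-asia-conn-str>", hub="game"),
}

def negotiate(player_id: str, region: str) -> str:
    """Return the WebSocket URL a player should connect to."""
    service = REGIONAL_HUBS[region]
    token = service.get_client_access_token(user_id=player_id)
    return token["url"]  # includes the access token as a query parameter
```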
Persistent connections introduce different threat models than traditional REST APIs. Key protections include:
For additional protection, Azure Web PubSub can be combined with services like Azure Front Door, which natively supports WebSockets.
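One concrete protection worth showing is least-privilege token issuance. A minimal sketch, reusing a `WebPubSubServiceClient` like the ones above; the role strings follow Web PubSub's group-permission format, and the party group name is hypothetical:

```python
# Scope the client to joining and sending within its own party group only,
# and keep the token short-lived to limit replay exposure.
token = service.get_client_access_token(
    user_id="player-42",
    roles=[
        "webpubsub.joinLeaveGroup.party-123",
        "webpubsub.sendToGroup.party-123",
    ],
    minutes_to_expire=30,
)
```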
What this real-world scenario highlights is a broader trend: multiplayer games increasingly resemble real-time distributed systems, not just applications with APIs.
By adopting a managed real-time service like Azure Web PubSub, teams can:
Instead of spending engineering effort reinventing infrastructure, teams can focus on what truly differentiates their game: player experience.
To mark the start of 2026, we’re releasing a special episode of our Hello World podcast, which reflects on the key developments in computing education during 2025 and considers the trends likely to shape the year ahead.
Hosted by James Robinson, the episode features a conversation between three Foundation team members — Rehana Al-Soltane, Dr Bobby Whyte, and Laura James — alongside perspectives from colleagues and partners in Kenya, South Africa, and Greece.

The podcast is framed around three major themes that defined 2025: data science, AI literacy, and digital literacy, all of which continue to play an increasingly important role in education systems worldwide.
In the podcast, Rehana reflects on a year characterised by research, collaboration, and community, highlighting the importance of global partnerships in developing and localising AI literacy resources for diverse educational contexts.
From a research perspective, Bobby explains that 2025 was about pulling together what we already know and making sense of it, to better understand what good data science education should look like, including curriculum design, pedagogy, and appropriate tools.
Laura focuses on resilience and creativity in computing education, as well as the growing presence of more personalised forms of artificial intelligence, which present both significant opportunities and complex ethical challenges.

A key concern raised throughout the episode is the risk of cognitive offloading, whereby learners rely on AI tools to bypass critical thinking processes. The speakers emphasise the need for learning experiences and assessments that value process, reasoning, and reflection rather than solely final outputs.
The episode also examines barriers to the adoption of computing and AI education, including teacher confidence, limited access to devices, restrictive school IT policies, and the need for translated and localised resources.
Contributions from our colleagues around the world highlight stark contrasts in educational contexts, with challenges such as funding constraints, connectivity issues, and teacher training needs, alongside examples of innovation where educators are adequately supported.
Looking ahead to 2026, Rehana outlines the potential of interdisciplinary approaches to AI literacy, integrating AI concepts into subjects such as geography, history, languages, and the arts to increase relevance and engagement (look out for our upcoming research seminar series on the topic).

Bobby anticipates a gradual shift towards more data-informed approaches to computing education, with greater emphasis on classroom-based trials and research that directly informs practice.
Laura offers a strong call to renew focus on cybersecurity education, arguing that security and safety must remain central as digital systems and AI technologies continue to evolve.
In a series of concise predictions, the speakers point to increased attention on explainable AI, wider integration of AI literacy across the curriculum, and renewed concern for digital safety and security.
You can subscribe to Hello World and listen to the full podcast episodes from wherever you get your podcasts. Or you can find this and previous Hello World podcasts on our podcast page.
Also check out Hello World magazine, our free digital and print magazine from computing educators for computing educators.
Red Hat Lightspeed is transforming how IT professionals interact with complex operational data. By integrating Red Hat Lightspeed (formerly known as Red Hat Insights) with the Model Context Protocol (MCP), you can simplify inventory management using simple, human-readable language.
This article shares practical examples of how you can perform inventory-related operations using Red Hat Lightspeed and MCP. The core benefit is its ability to turn natural-language prompts into structured inventory API requests. Instead of crafting filters or writing custom scripts, you can simply ask questions and let the AI build and execute the query for you.
Before you begin, ensure your Red Hat Lightspeed and MCP service is correctly set up and running. Refer to the Red Hat documentation for the necessary prerequisites and installation instructions.
The most immediate benefit of Red Hat Lightspeed and MCP is its power to transform complex API queries and data filtering into simple, natural language prompts. The agent utilizes the underlying inventory API tools to fulfill your request.
Behind the scenes, the agent converts your query into a structured API request, retrieves the JSON data from Red Hat Lightspeed inventory, and then uses a large language model (LLM) to summarize and format the result into a readable, conversational response. This allows you to explore your fleet without needing to switch contexts, click through dashboards, or write API queries.
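As a rough illustration, a prompt like "show me my fresh hosts" might resolve to a request along these lines against the Insights inventory API; the bearer token and query parameters are placeholders, not literally what the agent emits:

```python
import requests

resp = requests.get(
    "https://console.redhat.com/api/inventory/v1/hosts",
    headers={"Authorization": "Bearer <token>"},
    params={"staleness": "fresh", "per_page": 50},
)
resp.raise_for_status()

# This raw JSON is what the LLM then summarizes into a conversational answer.
for host in resp.json()["results"]:
    print(host["display_name"], host["updated"])
```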
Functionality and corresponding sample natural language prompts:
Maintaining an accurate, up-to-date inventory is essential for effective IT operations, compliance, and lifecycle management. Traditional data-quality checks—finding stale systems, missing metadata, or duplicate entries—are often time-consuming when done manually.
Red Hat Lightspeed and MCP drastically improves this process by leveraging the LLM's power to proactively audit your inventory data. It quickly surfaces inconsistencies, missing tags, or stale system records, ensuring better configuration management database (CMDB) hygiene.
Housekeeping task and sample prompt to the MCP:
By automating these data quality checks, Red Hat Lightspeed and MCP ensures your inventory remains accurate and reliable, a critical requirement for security and compliance reporting. Rather than manually running reports or exporting data, the MCP queries the inventory, flags issues, and summarizes next steps, making CMDB accuracy easier than ever.
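Under the hood, a staleness audit boils down to a filtered inventory query. A hedged sketch, with the same placeholder auth as above and field names taken from the inventory API's host schema:

```python
import requests

resp = requests.get(
    "https://console.redhat.com/api/inventory/v1/hosts",
    headers={"Authorization": "Bearer <token>"},
    params={"staleness": "stale_warning"},  # hosts about to go stale
)
resp.raise_for_status()

stale = resp.json()["results"]
print(f"{len(stale)} hosts need attention:")
for host in stale:
    print(f"- {host['display_name']} (stale at: {host['stale_timestamp']})")
```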
The true power of Red Hat Lightspeed and MCP lies in its ability to chain together information from different Red Hat Lightspeed services (i.e., inventory, advisor, and vulnerability) to perform multi-step analysis—a process called multi-agent orchestration. This can be accomplished either by enhancing your own code base using Llama Stack or by using an AI tool like Langflow.
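Conceptually, the orchestration pattern is a pipeline of tool calls. In this purely illustrative sketch, `inventory_tool` and `advisor_tool` are hypothetical stand-ins for the MCP tools an agent framework would invoke:

```python
def find_risky_rhel9_hosts(inventory_tool, advisor_tool):
    # Step 1: the inventory tool resolves the prompt's scope to a host list.
    hosts = inventory_tool(filter={"os": "RHEL 9"})

    # Step 2: a second service is queried per host using the IDs from step 1.
    return {h["id"]: advisor_tool(host_id=h["id"]) for h in hosts}
```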
These prompts require the MCP to use its inventory tool first, and then potentially call a second tool (e.g., the Advisor or Vulnerability API) to analyze the returned host list. A few examples of such prompts:
Multi-agent tasks and corresponding sample prompt to the MCP:
Now is a great time to test, experiment, and provide feedback as you connect Red Hat Lightspeed and MCP with your LLMs. Whether you're exploring automation, enhancing incident processes, or building intelligent dashboards, this preview puts powerful Red Hat Lightspeed capabilities at your LLM-driven fingertips.
This release offers early access to powerful MCP-driven workflows with Red Hat Lightspeed. We strongly encourage your feedback—including bug reports, requests for additional toolsets, and enhancement ideas—through the Red Hat Issue Router (select MCP) and by contributing to our GitHub repository. Your input will directly refine and shape the future of Red Hat Lightspeed and MCP.