You need quality engineers to turn AI into ROI

Pete Johnson, Field CTO for Artificial Intelligence at MongoDB, joins the podcast to argue that judging AI's impact by the jobs it eliminates is a flawed metric.

General Availability of Mirroring Azure Database for PostgreSQL in Microsoft Fabric Is Here!


Unlock Real-Time Analytics on Operational Data—Now Enterprise-Ready

A few weeks ago at Microsoft Ignite 2025, we announced the General Availability (GA) of Mirroring for Azure Database for PostgreSQL flexible server in Microsoft Fabric. This milestone marks a major leap forward in empowering organizations to seamlessly integrate their operational PostgreSQL data into the Microsoft Fabric analytics ecosystem, enabling near real-time analytics, machine learning, and business intelligence without the complexity of traditional ETL pipelines.

Why Mirror Operational Databases in Microsoft Fabric?

Accelerate Analytics Without ETL

Fabric Mirroring eliminates the need for complex, custom ETL pipelines. Data from your operational PostgreSQL databases is continuously replicated into OneLake as Delta tables, making it instantly available for analytics, machine learning, and reporting (see the query sketch after this list). This means you can:

  • Run advanced analytics and AI on live data without impacting production workloads.
  • Empower data scientists to experiment and innovate with up-to-date data.
  • Create real-time dashboards and cross-database queries for comprehensive business insights.
  • Unify governance and security under OneLake, reducing risk and operational overhead.
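
As a quick illustration of the first point, here is a minimal sketch of querying a mirrored table from a Microsoft Fabric notebook, where a `spark` session is provided. The mirrored database name `sales_mirror` and its `orders` table are hypothetical:

```python
# Minimal sketch: analytics over a mirrored PostgreSQL table in a Fabric
# notebook. "sales_mirror" and "orders" are hypothetical names; mirrored data
# lands in OneLake as Delta tables, so plain Spark SQL works with no ETL step.
top_customers = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_spend
    FROM   sales_mirror.orders
    GROUP  BY customer_id
    ORDER  BY total_spend DESC
    LIMIT  10
""")
top_customers.show()
```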

Enterprise-Grade Security and Compliance

With support for Entra ID, VNETs, and Private Endpoints, organizations can enforce strict access controls and network isolation. Mirroring is designed to meet the needs of highly regulated industries, ensuring data privacy and compliance at every step.

High Availability and Reliability

The new HA support ensures that mirroring sessions remain resilient to failures, delivering uninterrupted analytics even during server failovers. This is essential for mission-critical applications where downtime is not an option.

Cost Efficiency and Simplicity

Mirroring is offered at no additional cost, dramatically reducing the total cost of ownership for analytics solutions. By removing ETL complexity, organizations can focus on extracting value from their data rather than managing infrastructure.

What’s New in GA?

Building on the momentum of our Public Preview, the GA release introduces several enterprise-grade enhancements:

  • Microsoft Entra ID Authentication: Secure, centralized identity management for all mirroring operations. Entra ID authentication streamlines access control and compliance, making it easier for organizations to manage users and roles across their data estate.
  • VNET and Private Endpoint Support: Mirroring now works with PostgreSQL Flexible Servers deployed behind Virtual Networks (VNETs) and Private Endpoints, ensuring secure, private connectivity with no public exposure. This is critical for regulated industries and enterprises with strict security requirements.
  • High Availability (HA) Support: Mirroring is now compatible with HA-enabled servers, delivering business continuity and seamless failover for mission-critical workloads. For PostgreSQL 17+, replication slot failover ensures uninterrupted mirroring even during planned or unplanned outages.
  • Performance and Reliability Enhancements: The replication engine has been optimized for smoother onboarding, improved error handling, and higher throughput—supporting initial snapshot rates up to ~1TB/hour and change data capture (CDC) with minimal latency (as low as 5 seconds under optimal conditions).

For a full list of prerequisites and setup guidance, see the official documentation.
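
One documented prerequisite is that the source server's `wal_level` parameter is set to `logical`. Here is a minimal Python sketch for checking that, with placeholder connection details:

```python
# Minimal sketch: confirm the flexible server is ready for the logical
# replication that mirroring relies on. Connection details are placeholders.
import psycopg  # pip install "psycopg[binary]"

conn_info = ("host=<server>.postgres.database.azure.com dbname=postgres "
             "user=<admin_user> password=<password> sslmode=require")

with psycopg.connect(conn_info) as conn:
    wal_level = conn.execute("SHOW wal_level;").fetchone()[0]
    print(f"wal_level = {wal_level}")
    if wal_level != "logical":
        print("Set the 'wal_level' server parameter to 'logical' "
              "(a server restart is required) before configuring mirroring.")
```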

Microsoft Entra ID Authentication

Mirroring supports Microsoft Entra ID database roles, so mirroring operations can be authenticated and authorized with centrally managed identities.

VNET and Private Endpoint Support

This feature ensures secure and efficient connectivity for flexible servers within Microsoft Fabric. Connecting to a flexible server without public connectivity enhances security during both initial setup and ongoing operations. Establishing a Virtual Network Gateway on the target VNET facilitates encrypted traffic between networks, while subnet delegation allows specific resources within a subnet to be managed for specialized tasks. Servers restricted by Virtual Network (VNET) and Private Endpoint configurations are supported, enabling robust network isolation and protection from unauthorized access.

High Availability (HA) Support

Fabric Mirroring supports high availability by enabling seamless failover and enhanced fault tolerance for servers configured with HA. This feature requires PostgreSQL version 17 or later, as replication slot failover is only available in these versions. If you are using an earlier PostgreSQL version, you will need to manually reconfigure mirroring after each failover event to maintain replication.
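
For context, here is a sketch of the PostgreSQL 17 capability this relies on. Mirroring creates and manages its own replication slots, so this is purely illustrative: creating a logical slot with `failover => true` marks it for synchronization to the HA standby.

```python
# Illustrative only: PostgreSQL 17 lets a logical replication slot be marked
# for failover so the HA standby can continue decoding after a switchover.
# Slot name and connection details below are placeholders.
import psycopg

with psycopg.connect("host=<server> dbname=postgres user=<admin> "
                     "password=<password> sslmode=require",
                     autocommit=True) as conn:
    conn.execute(
        "SELECT pg_create_logical_replication_slot("
        "'demo_slot', 'pgoutput', false, false, failover => true);"
    )
```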

Beyond PostgreSQL: Interoperability Across Azure Databases

Fabric Mirroring is not limited to PostgreSQL. The GA release also supports other databases:

  • SQL Server (2016–2025): Native mirroring for on-premises, Azure VMs, and non-Azure clouds, with secure connectivity and analytics-ready Delta tables.
  • Snowflake: Mirroring for managed and Apache Iceberg tables, enabling high-performance analytics and open-format interoperability.
  • Cosmos DB: Continuous change capture and mirroring for globally distributed NoSQL data, supporting real-time personalization, fraud detection, and IoT analytics.  

This interoperability allows organizations to consolidate data from diverse sources into OneLake, unlocking unified analytics and AI across their entire data estate. By leveraging shortcuts in Microsoft Fabric, customers can reference data stored in different mirrored databases and storage accounts as if it resided in a single location. This means users can build cross-database queries and analytics pipelines without physically moving or duplicating data, avoiding the need for complex ETL processes or data integration solutions. Shortcuts make it possible to seamlessly join, analyze, and visualize data from SQL Server, Snowflake, Cosmos DB, and more within OneLake, streamlining analytics workflows and accelerating time to insight while reducing storage costs and operational overhead.
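
To make the shortcut idea concrete, here is a hedged sketch of a single T-SQL query joining tables mirrored from two different sources through the Fabric SQL analytics endpoint. The endpoint address, database, and table names are hypothetical:

```python
# Minimal sketch: one query spanning data mirrored from PostgreSQL and
# Snowflake, surfaced side by side in OneLake via shortcuts, so no copy or
# ETL pipeline is involved. All names below are placeholders.
import pyodbc  # requires the "ODBC Driver 18 for SQL Server"

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.datawarehouse.fabric.microsoft.com;"
    "Database=<sql_endpoint_database>;"
    "Authentication=ActiveDirectoryInteractive;"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT o.order_id, o.amount, c.segment
    FROM   pg_mirror.dbo.orders      AS o   -- mirrored from PostgreSQL
    JOIN   snow_mirror.dbo.customers AS c   -- mirrored from Snowflake
        ON c.customer_id = o.customer_id
""")
for row in cursor.fetchmany(10):
    print(row)
```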

Getting Started

Ready to experience the power of Mirroring for Azure Database for PostgreSQL Flexible Server in Microsoft Fabric?

Future Enhancements

Looking ahead, our focus will be on delivering a series of post-GA enhancements designed to make Mirroring for Azure Database for PostgreSQL Flexible Server even more robust, versatile, and user-friendly. Key advancements will come in the following areas:

  • Automatic replication of newly created database tables when operating in auto-mode, so mirrored environments stay up to date with minimal manual intervention.
  • Enhanced support for advanced DDL operations, giving users greater flexibility and control when managing schema changes on mirrored databases.
  • Expanded compatibility with additional data types (such as JSON, arrays, ranges, and geometry), opening up new scenarios for analytics and data integration across a wider range of workloads and use cases.
  • Support for partitioned tables, TOAST tables, and views, allowing organizations to mirror more complex database structures and further streamline operational analytics.
  • The ability to mirror databases hosted on read replicas, helping organizations optimize their high-availability and scaling strategies without compromising data consistency.

Collectively, these planned features underscore our commitment to continuous improvement and to meeting the evolving needs of our users as they harness the full power of Microsoft Fabric for unified data analytics and AI.

Conclusion

The General Availability of Mirroring for Azure Database for PostgreSQL Flexible Server in Microsoft Fabric represents a significant advancement for organizations seeking to unlock real-time analytics, AI, and BI on their operational data—securely, reliably, and without ETL complexity. With new enterprise features, proven customer success, and broad interoperability, now is the perfect time to bring your operational databases into the Microsoft Fabric analytics era.

Learn more and get started today: Fabric Mirroring for Azure Database for PostgreSQL


Building scalable, cost-effective real-time multiplayer games with Azure Web PubSub


Modern multiplayer games demand more than fast servers - they require persistent, reliable, low-latency communication at massive scale, often under highly unpredictable traffic patterns. Launch days, seasonal events, and promotions can generate connection spikes that dwarf steady-state traffic, while players still expect real-time responsiveness and stability.

In this post, we’ll explore how a game studio building a large-scale online RPG addressed these challenges using Azure Web PubSub, and what lessons other game developers can apply when designing their own real-time backend architectures.

The challenge: from polling to real-time multiplayer

The studio began with a backend architecture that relied heavily on polling a centralized data store to synchronize multiplayer state - such as party invitations, friend presence, and session updates - across geographically distributed game servers.

This approach worked initially, but it came with clear drawbacks:

  • High latency (5 seconds or more for critical interactions)
  • Wasted compute resources due to constant polling
  • A difficult trade-off between cost and responsiveness
  • Limited flexibility to introduce richer real-time features

As multiplayer adoption grew and concurrency increased into the hundreds of thousands, these limitations became increasingly painful - especially during major releases and promotional events.

“Building multiplayer games is very different from building typical business APIs. Small timing and synchronization issues are immediately visible to players.”

The team needed a solution that could:

  • Maintain persistent connections at scale
  • Deliver near real-time updates without polling
  • Handle spiky traffic patterns without over-provisioning
  • Minimize operational complexity

Why a managed real-time service?

The initial instinct was to build a custom WebSocket infrastructure in-house. But persistent connections, failover, reconnection logic, scaling behavior, and regional distribution quickly added up to a large and risky engineering effort.

Instead, the team opted for Azure Web PubSub, a fully managed service designed for large-scale, real-time messaging over WebSockets. What stood out wasn’t just performance but the operational simplicity and cost model.

Architecture shift: event-driven, not poll-driven

After adopting Azure Web PubSub, the backend architecture changed fundamentally:

  • Game servers maintain persistent WebSocket connections to Web PubSub
  • Backend services publish messages only when state changes
  • Database change feeds trigger real-time updates
  • Messages are routed efficiently using groups, targeting only relevant servers or players

This eliminated polling entirely and unlocked new real-time capabilities with minimal additional complexity.
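
Here is a minimal sketch of the publish-on-change pattern using the azure-messaging-webpubsubservice SDK. The hub, group naming scheme, and message shape are illustrative assumptions:

```python
# Minimal sketch: push a state change to the one group that cares about it,
# instead of having servers poll a data store. The SDK calls are real; the
# hub name, group scheme, and message payload are assumptions.
from azure.messaging.webpubsubservice import WebPubSubServiceClient

service = WebPubSubServiceClient.from_connection_string(
    "<web-pubsub-connection-string>", hub="game"
)

def on_party_invite(party_id: str, inviter: str, invitee: str) -> None:
    # Only servers/players subscribed to this party's group receive the event.
    service.send_to_group(
        group=f"party:{party_id}",
        message={"type": "party_invite", "from": inviter, "to": invitee},
        content_type="application/json",
    )

on_party_invite("1337", "alice", "bob")
```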

Key benefits for multiplayer games

Push-based real-time updates

State changes - such as party invites or presence updates - are delivered immediately instead of waiting for polling intervals. What once took seconds now arrives in tens of milliseconds.

Massive, elastic scalability

Azure Web PubSub supports:

  • Up to 1 million concurrent connections per resource
  • Auto-scaling based on actual demand
  • Geo-replication for resilience and global reach

This makes it well-suited for launch-day spikes, where traffic may surge for a few weeks and then settle to a much lower baseline.

Low latency at global scale

In practice, backend-to-service latency stays in single-digit milliseconds, with end-to-end delivery typically under 100 ms - a dramatic improvement over polling-based designs.

For asynchronous game features, even modest latency differences can significantly improve perceived responsiveness.

Cost efficiency for spiky traffic

A critical insight for game workloads is how Azure Web PubSub pricing works:

  • Billing is based on units × time used (in seconds), aggregated daily
  • Short-lived spikes don’t incur full-day costs
  • You don’t pay for unused capacity once traffic drops

This makes Web PubSub particularly attractive for games with the following traffic profiles (a back-of-the-envelope cost sketch follows this list):

  • Large launch peaks
  • Periodic promotional spikes
  • Lower steady-state concurrency
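
To make the billing model concrete, here is a back-of-the-envelope sketch. The per-unit price is a placeholder, not a published rate:

```python
# Back-of-the-envelope sketch of unit-seconds billing, aggregated daily.
# PRICE_PER_UNIT_DAY is hypothetical; consult the Azure pricing page.
SECONDS_PER_DAY = 24 * 3600
PRICE_PER_UNIT_DAY = 1.60  # hypothetical $/unit/day

def daily_cost(unit_seconds: int) -> float:
    """Cost for one day, given the day's summed unit-seconds of usage."""
    return unit_seconds / SECONDS_PER_DAY * PRICE_PER_UNIT_DAY

steady = 10 * SECONDS_PER_DAY        # 10 units running all day
launch = steady + 90 * (6 * 3600)    # +90 units for a 6-hour launch spike
print(f"steady day: ${daily_cost(steady):.2f}")  # 10 unit-days
print(f"launch day: ${daily_cost(launch):.2f}")  # ~32.5 unit-days, not 100
```

The point of the example: a six-hour spike of 90 extra units bills roughly a quarter of what those units would cost for a full day, because billing follows actual usage in seconds.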

Designing for launch peaks without overpaying

One of the most common questions game teams ask is:

"How do we handle massive launch traffic without locking ourselves into long-term costs?"

Recommended approach

During launch or major promotions

  • Provision a fixed capacity with ~20% headroom
  • Avoid auto-scaling delays during critical windows
  • Use multiple regional P1-tier resources instead of a single large P2

After traffic stabilizes

  • Enable auto-scale
  • Reduce baseline units
  • Keep capacity aligned with real usage

This strategy balances reliability, latency, and cost, while avoiding unnecessary complexity during the most critical periods.

Reliability, geo-distribution, and sharding

Rather than relying on one large global endpoint, the recommended pattern is to:

  • Deploy multiple Web PubSub resources per continent
  • Shard users by geography
  • Use geo-replicas primarily for disaster recovery
  • Optionally implement lightweight routing logic when multiple resources exist in a single region

This improves fault isolation, reduces blast radius, and aligns well with how large game backends already segment players.
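
A minimal sketch of the routing layer, assuming per-continent resources; the endpoints are placeholders a real deployment would load from configuration:

```python
# Minimal sketch: shard players across per-continent Web PubSub resources.
# Endpoint URLs are placeholders; a production router might add health
# checks or latency probes.
REGION_ENDPOINTS = {
    "na":   "https://game-na.webpubsub.azure.com",
    "eu":   "https://game-eu.webpubsub.azure.com",
    "apac": "https://game-apac.webpubsub.azure.com",
}

def endpoint_for(player_region: str) -> str:
    # Unknown regions fall back to NA; a regional outage stays contained
    # to that region's players.
    return REGION_ENDPOINTS.get(player_region, REGION_ENDPOINTS["na"])

assert endpoint_for("eu").endswith("game-eu.webpubsub.azure.com")
```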

Security considerations for WebSocket-based games

Persistent connections introduce different threat models than traditional REST APIs. Key protections include:

  • Authenticated connection tokens
  • Enforcing one connection per user
  • Rate limiting connection attempts
  • Message size and throughput controls

For additional protection, Azure Web PubSub can be combined with services like Azure Front Door, which natively supports WebSockets.
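
A minimal sketch of the first protection, issuing a short-lived, per-user connection token so the game client never holds the service key. The get_client_access_token call is the real SDK method; the hub, user, and role values are illustrative:

```python
# Minimal sketch: per-user, time-limited access token scoped to one party
# group. SDK call is real; hub/user/role names are assumptions.
from azure.messaging.webpubsubservice import WebPubSubServiceClient

service = WebPubSubServiceClient.from_connection_string(
    "<web-pubsub-connection-string>", hub="game"
)

token = service.get_client_access_token(
    user_id="player-42",
    roles=["webpubsub.joinLeaveGroup.party:1337"],
    minutes_to_expire=30,
)
# The client opens its WebSocket against token["url"]. Enforcing one
# connection per user can be done server-side by closing any existing
# connection registered for the same user_id when a new one arrives.
print(token["url"])
```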

Why this matters for game developers

What this real-world scenario highlights is a broader trend: multiplayer games increasingly resemble real-time distributed systems, not just applications with APIs.

By adopting a managed real-time service like Azure Web PubSub, teams can:

  • Ship features faster
  • Reduce operational risk
  • Scale confidently through unpredictable demand
  • Pay only for what they actually use

Instead of spending engineering effort reinventing infrastructure, teams can focus on what truly differentiates their game: player experience.


What shaped computing education in 2025 — and what comes next


To mark the start of 2026, we’re releasing a special episode of our Hello World podcast, which reflects on the key developments in computing education during 2025 and considers the trends likely to shape the year ahead.

Hosted by James Robinson, the episode brings together a conversation between three Foundation team members — Rehana Al-Soltane, Dr Bobby Whyte, and Laura James — and perspectives from colleagues and partners in Kenya, South Africa, and Greece.

[Image: The Hello World Podcast team]

The podcast is framed around three major themes that defined 2025: data science, AI literacy, and digital literacy, all of which continue to play an increasingly important role in education systems worldwide.

Looking back at 2025

In the podcast, Rehana reflects on a year characterised by research, collaboration, and community, highlighting the importance of global partnerships in developing and localising AI literacy resources for diverse educational contexts.

From a research perspective, Bobby explains that 2025 was about pulling together what we already know and making sense of it, to better understand what good data science education should look like, including curriculum design, pedagogy, and appropriate tools.

Laura focuses on resilience and creativity in computing education, as well as the growing presence of more personalised forms of artificial intelligence, which present both significant opportunities and complex ethical challenges.

[Image: The new set]

A key concern raised throughout the episode is the risk of cognitive offloading, whereby learners rely on AI tools to bypass critical thinking processes. The speakers emphasise the need for learning experiences and assessments that value process, reasoning, and reflection rather than solely final outputs.

The episode also examines barriers to the adoption of computing and AI education, including teacher confidence, limited access to devices, restrictive school IT policies, and the need for translated and localised resources.

Contributions from our colleagues around the world highlight stark contrasts in educational contexts, with challenges such as funding constraints, connectivity issues, and teacher training needs, alongside examples of innovation where educators are adequately supported.

What’s ahead

Looking ahead to 2026, Rehana outlines the potential of interdisciplinary approaches to AI literacy, integrating AI concepts into subjects such as geography, history, languages, and the arts to increase relevance and engagement (look out for our upcoming research seminar series on the topic).

[Image: The cast on set]

Bobby anticipates a gradual shift towards more data-informed approaches to computing education, with greater emphasis on classroom-based trials and research that directly informs practice.

Laura offers a strong call to renew focus on cybersecurity education, arguing that security and safety must remain central as digital systems and AI technologies continue to evolve.

In a series of concise predictions, the speakers point to increased attention on explainable AI, wider integration of AI literacy across the curriculum, and renewed concern for digital safety and security.

More from Hello World

You can subscribe to Hello World and listen to the full podcast episodes from wherever you get your podcasts. Or you can find this and previous Hello World podcasts on our podcast page.

Also check out Hello World magazine, our free digital and print magazine from computing educators for computing educators.



Why Component Libraries Like Blazorise Are Key to Blazor's Adoption

How strong UI ecosystems accelerate Blazor adoption in enterprise environments, with behind-the-scenes insights into Blazorise's design philosophy.

Manage AI-powered inventory using Red Hat Lightspeed


Red Hat Lightspeed is transforming how IT professionals interact with complex operational data. By pairing Red Hat Lightspeed (formerly known as Red Hat Insights) with the Model Context Protocol (MCP), you can simplify inventory management using simple, human-readable language.

This article shares practical examples of how you can perform inventory-related operations using Red Hat Lightspeed and MCP. The core benefit is its ability to turn natural-language prompts into structured inventory API requests. Instead of crafting filters or writing custom scripts, you can simply ask questions and let the AI build and execute the query for you.

Using natural language for inventory querying

Before you begin, ensure your Red Hat Lightspeed and MCP service is correctly set up and running. Refer to the Red Hat documentation for the necessary prerequisites and installation instructions.

The most immediate benefit of Red Hat Lightspeed and MCP is its power to transform complex API queries and data filtering into simple, natural language prompts. The agent utilizes the underlying inventory API tools to fulfill your request.

Behind the scenes, the agent converts your query into a structured API request, retrieves the JSON data from the Red Hat Lightspeed inventory, and then uses a large language model (LLM) to summarize and format the result into a readable, conversational response. This allows you to explore your fleet without needing to switch contexts, click through dashboards, or write API queries.

Example functionality, with corresponding natural-language prompts (a sketch of the underlying API call follows this list):

  • Simple filtering: "How many RHEL 9 systems do I have tagged with 'finance'?"
  • Complex filtering: "Show me all hosts in the 'Dev' environment running kernel version 4.18.0 that are not currently checked into Satellite."
  • Data retrieval: "List the FQDN and last seen date for the 10 oldest RHEL 8 hosts."
  • Summary and aggregation: "What is the distribution of my hosts by operating system version?"
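
Here is a hypothetical sketch of what the agent assembles from the first prompt above: a structured call to the Red Hat inventory hosts API. The endpoint is the public one, but the exact filter and tag syntax the MCP tools emit is an assumption, and the token is a placeholder:

```python
# Hypothetical sketch: "How many RHEL 9 systems do I have tagged 'finance'?"
# expressed as an inventory API request. Filter/tag syntax is assumed; see
# the inventory API documentation for the authoritative parameters.
import requests

API = "https://console.redhat.com/api/inventory/v1/hosts"
HEADERS = {"Authorization": "Bearer <token>"}

params = {
    # Assumed deep-object filter for "RHEL 9 systems":
    "filter[system_profile][operating_system][RHEL][version][eq]": "9",
    "tags": "insights-client/finance",  # assumed namespace/key tag filter
    "per_page": 1,                      # only the total count is needed
}
resp = requests.get(API, headers=HEADERS, params=params)
resp.raise_for_status()
print("matching hosts:", resp.json()["total"])
```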

Streamlining inventory housekeeping and auditing

Maintaining an accurate, up-to-date inventory is essential for effective IT operations, compliance, and lifecycle management. Traditional data-quality checks—finding stale systems, missing metadata, or duplicate entries—are often time-consuming when done manually.

Red Hat Lightspeed with MCP drastically improves this process by leveraging the LLM's power to proactively audit your inventory data. It quickly surfaces inconsistencies, missing tags, or stale system records, ensuring better configuration management database (CMDB) hygiene.

Housekeeping tasks and sample prompts to the MCP:

  • Find missing data: "Find all inventory hosts missing the 'Cost Center' or 'Owner' tags."
  • Identify stale systems: "List all systems that haven't reported data to Red Hat Lightspeed in the last 60 days, excluding those tagged as 'decommissioned'."
  • Audit compliance: "Identify all production systems that are not currently running the latest minor RHEL version for their major release."
  • Detect potential duplicates: "Show me all hosts that share the same IP address but have different FQDNs, to check for potential duplicates."

By automating these data quality checks, Red Hat Lightspeed with MCP ensures your inventory remains accurate and reliable, a critical requirement for security and compliance reporting. Rather than manually running reports or exporting data, the MCP queries the inventory, flags issues, and summarizes next steps, making CMDB accuracy easier than ever.
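
As one concrete example, the stale-systems prompt maps naturally onto the inventory API's staleness filter. A hypothetical sketch, with a placeholder token; excluding 'decommissioned'-tagged hosts would need a tag filter or a follow-up call, so it is omitted here:

```python
# Hypothetical sketch: list hosts the inventory considers stale. The
# staleness values ("fresh", "stale", "stale_warning") follow the public
# inventory API; authentication token is a placeholder.
import requests

API = "https://console.redhat.com/api/inventory/v1/hosts"
HEADERS = {"Authorization": "Bearer <token>"}

resp = requests.get(API, headers=HEADERS,
                    params={"staleness": "stale", "per_page": 50})
resp.raise_for_status()
for host in resp.json()["results"]:
    print(host["display_name"], host["updated"])
```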

Innovative multi-agent orchestration

The true power of Red Hat Lightspeed and MCP lies in its ability to chain together information from different Red Hat Lightspeed services (i.e., inventory, advisor, and vulnerability) to perform multi-step analysis, a process called multi-agent orchestration. This can be accomplished either by integrating Llama Stack into your own code base or by using an AI tool such as Langflow.

These prompts require the MCP to use its inventory tool first, and then potentially call a second tool (e.g., the advisor or vulnerability API) to analyze the returned host list. A few examples of multi-agent tasks, with corresponding sample prompts:

  • Proactive remediation planning: "I need to plan my next patch window. Show me all RHEL 8 servers in the 'Staging' environment with at least one high-severity vulnerability, and then summarize the suggested remediation playbook for those systems." (MCP identifies hosts via Inventory, then uses the vulnerability tool to find and summarize the required playbook.)
  • Migration readiness analysis: "Analyze the inventory and identify all systems running an OS version that will reach end-of-life (EOL) within the next nine months that are not yet tagged with a 'Migration_Target' label. Group them by their current Business Unit tag." (MCP queries inventory, compares versions/dates against internal knowledge, and generates a structured report.)
  • Cross-platform health check: "What is the current system health status (from advisor recommendations) for the top five largest hosts in the 'Critical' group, ranked by their memory size?" (MCP sorts hosts by a resource metric using the inventory tool, retrieves the top five, then queries the advisor tool for their specific health status.)

Join us and share your feedback

Now is a great time to test, experiment, and provide feedback as you connect Red Hat Lightspeed and MCP with your LLMs. Whether you're exploring automation, enhancing incident processes, or building intelligent dashboards, this preview places powerful Red Hat Lightspeed capabilities at your LLM-driven fingertips.

This release offers early access to powerful MCP-driven workflows with Red Hat Lightspeed. We strongly encourage your feedback—including bug reports, requests for additional toolsets, and enhancement ideas—through the Red Hat Issue Router (select MCP) and by contributing to our GitHub repository. Your input will directly refine and shape the future of Red Hat Lightspeed and MCP.

