The SQL community is gathering in Atlanta this March for the first‑ever SQLCon, co‑located with FabCon, the Microsoft Fabric Community Conference, March 16-20. One registration unlocks both events, giving you access to deep SQL expertise and the latest in Fabric, Power BI, data engineering, real‑time intelligence, and AI. Whether you’re a DBA, developer, data engineer, architect, or a leader building a data‑driven team, this is your chance to learn, connect, and shape what’s next.
Bonus: make the budget work
Depending on timing, look for early‑bird pricing, team discounts, or buy‑one‑get‑one offers on the registration page. These deals move fast, so check what’s live when you register. You can always use SQLCMTY200 for $200 off!
Wrap‑up: build the next chapter of your data strategy at SQLCon
SQLCon + FabCon is the highest‑leverage week of the year to sharpen your technical skills, understand SQL’s next chapter, accelerate modernization and performance, and build meaningful connections across the global community. If SQL plays any role in your data estate, this is the one event you shouldn’t miss.
See you in Atlanta!
We're inviting all developers to join Agents League, running February 16-27. It's a two-week challenge where you'll build AI agents using production-ready tools, learn from live coding sessions, and get feedback directly from Microsoft product teams.
We've put together a starter kit for each track, complete with requirements and guidelines, to help you get up and running quickly. Whether you want to explore what GitHub Copilot can do beyond autocomplete, build reasoning agents on Microsoft Foundry, or create enterprise integrations for Microsoft 365 Copilot, we have a track for you.
Important: Register first to be eligible for prizes and your digital badge. Without registration, you won't qualify for awards or receive a badge when you submit.
It's a two-week competition that combines learning with building.
More details on each track below, or jump straight to the starter kits.
Agents League starts on February 16th and runs through February 27th. During those two weeks, we'll host live coding battles on Microsoft Reactor and AMA sessions on Discord.
We're kicking off with live coding battles streamed on Microsoft Reactor. Watch experienced developers compete in real-time, explaining their approach and architectural decisions as they go.
All sessions are recorded, so you can watch on your own schedule.
This is your time to build and ask questions on Discord. The async format means you can work whenever it suits you: evenings, weekends, whatever fits your schedule.
We're also hosting AMAs on Discord where you can put your questions directly to Microsoft experts and product teams.
Bring your questions, get help when you're stuck, and share what you're building with the community.
We've created a starter kit for each track with setup guides, project ideas, and example scenarios to help you get started quickly.
Tool: GitHub Copilot (Chat, CLI, or SDK)
Build innovative, imaginative applications that showcase the potential of AI-assisted development. All application types are welcome: web apps, CLI tools, games, mobile apps, desktop applications, and more.
The starter kit walks you through GitHub Copilot's different modes and provides prompting tips to get the best results. View the Creative Apps starter kit.
Tool: Microsoft Foundry (UI or SDK) and/or Microsoft Agent Framework
Build a multi-agent system that leverages advanced reasoning capabilities to solve complex problems. This track focuses on agents that can plan, reason through multi-step problems, and collaborate.
The starter kit includes architecture patterns, reasoning strategies (planner-executor, critic/verifier, self-reflection), and integration guides for tools and MCP servers. View the Reasoning Agents starter kit.
Tool: M365 Agents Toolkit or Copilot Studio
Create intelligent agents that extend Microsoft 365 Copilot to address real-world enterprise scenarios. Your agent must work on Microsoft 365 Copilot Chat.
Bonus points for: MCP server integration, OAuth security, Adaptive Cards UI, connected agents (multi-agent architecture). View the Enterprise Agents starter kit.
To be eligible for prizes and your digital badge, you must register before submitting your project.
Category winners receive $500 each, and there are GitHub Copilot Pro subscriptions to be won. Everyone who registers and submits a project receives a digital badge to showcase their participation.
Beyond the prizes, every participant gets feedback from the teams who built these tools, a valuable opportunity to learn and improve your approach to AI agent development.
Agentic AI is best understood as a distributed system, and many of the same patterns that made microservices successful apply. This article explains how to design agentic workflows with composable components, explicit contracts and guardrails, resilience practices like timeouts and idempotency, and end-to-end observability. We will also discuss how the Red Hat AI portfolio supports production-ready agentic systems across the hybrid cloud, from efficient inference to consistent execution and lifecycle management at scale.
Microservices changed software engineering by forcing us to treat systems as distributed by default with small components, explicit contracts, independent scaling, and a serious focus on reliability and observability. Agentic AI is driving a similar shift. Instead of HTTP services calling other services, we now have agents coordinating models, tools, and enterprise data to complete multi-step tasks.
If you build cloud-native applications, this analogy is useful because it replaces hand-wavy intuition with a familiar engineering frame. Agentic AI is a distributed system. The same realities apply: latency, partial failure, versioning, security boundaries, and operational visibility. Once you accept that, building agentic systems becomes far more practical. Figure 1 illustrates a comparison of microservices patterns and their agentic AI equivalents.
Early agent implementations often become a single, do-everything agent. It retrieves context, decides what to do, calls tools, handles errors, and writes the final answer. That looks convenient, but it is the AI equivalent of a monolith. When something goes wrong, it is difficult to isolate the cause. When you want to improve one part (e.g., retrieval), you risk breaking everything.
A more scalable pattern is the same one microservices pushed us toward: decomposition. Break the workflow into smaller, purpose-built agents and orchestrate them as a pipeline. For example, you might have an agent that retrieves and ranks information, another that validates policy and safety constraints, and another that executes tool calls and formats results. You can test, update, and scale each component independently, so failures become easier to contain.
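Here is a minimal sketch of that decomposition in Python. The three agents, the Context object, and their internals are illustrative stand-ins, not any particular framework's API:

```python
from dataclasses import dataclass

# Illustrative three-stage agent pipeline; each stage is a small,
# purpose-built component with a single responsibility.

@dataclass
class Context:
    query: str
    documents: list[str] | None = None
    approved: bool = False
    answer: str | None = None

class RetrievalAgent:
    def run(self, ctx: Context) -> Context:
        # Retrieve and rank supporting documents (stubbed here).
        ctx.documents = [f"doc about {ctx.query}"]
        return ctx

class PolicyAgent:
    def run(self, ctx: Context) -> Context:
        # Validate policy and safety constraints before execution.
        ctx.approved = "forbidden" not in ctx.query.lower()
        return ctx

class ExecutionAgent:
    def run(self, ctx: Context) -> Context:
        # Call tools / the model and format the final result.
        if not ctx.approved:
            ctx.answer = "Request rejected by policy."
        else:
            ctx.answer = f"Answer based on {len(ctx.documents or [])} documents."
        return ctx

def run_pipeline(query: str) -> Context:
    ctx = Context(query=query)
    # Each stage can be tested, versioned, and scaled independently.
    for agent in (RetrievalAgent(), PolicyAgent(), ExecutionAgent()):
        ctx = agent.run(ctx)
    return ctx

print(run_pipeline("quarterly revenue summary").answer)
```

Because each stage only reads and writes the shared context, a retrieval failure stays contained in the retrieval stage instead of corrupting the whole run.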
Microservices succeed when interfaces are explicit. Agents need that discipline even more, because ambiguity is where unpredictable behavior lives. Define what the agent accepts and produces, ideally with structured outputs you can validate (e.g., JSON schema). You must be equally explicit about tool contracts and allowlists. For instance, define what tools can be called, with what parameters, and what data can be accessed. This is how you prevent prompt drift from turning into system drift, and it is how you make agentic workflows governable across teams.
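As a concrete illustration, the sketch below validates an agent's structured output against a JSON schema and enforces a tool allowlist before anything executes. The schema, tool names, and parameters are hypothetical, and the example assumes the third-party jsonschema package:

```python
import jsonschema  # pip install jsonschema

# Hypothetical contract for one agent step: the output shape is declared
# up front, and only allowlisted tools with known parameters may run.

AGENT_OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "tool": {"type": "string"},
        "arguments": {"type": "object"},
    },
    "required": ["tool", "arguments"],
    "additionalProperties": False,
}

TOOL_ALLOWLIST = {
    "search_tickets": {"query", "limit"},
    "create_ticket": {"title", "priority"},
}

def validate_tool_call(output: dict) -> dict:
    """Reject anything outside the declared contract."""
    jsonschema.validate(instance=output, schema=AGENT_OUTPUT_SCHEMA)
    tool = output["tool"]
    if tool not in TOOL_ALLOWLIST:
        raise PermissionError(f"Tool {tool!r} is not on the allowlist")
    unexpected = set(output["arguments"]) - TOOL_ALLOWLIST[tool]
    if unexpected:
        raise ValueError(f"Unexpected parameters for {tool!r}: {unexpected}")
    return output

# A well-formed call passes; a hallucinated tool or parameter fails fast.
validate_tool_call({"tool": "search_tickets",
                    "arguments": {"query": "outage", "limit": 5}})
```

A hallucinated tool name or an extra parameter is rejected at the boundary, before it can become a side effect.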
A strong contract mindset also improves portability. When your tools and agent steps have stable interfaces, you can swap models, change retrieval methods, or add new workflow steps without rewriting the whole system.
Microservices taught us to assume failure. Networks drop, dependencies degrade, and tail latency ruins user experience. Agentic systems have the same issues plus a few new ones, such as tool calls failing, retrieval returning irrelevant context, and inference latency spikes. The lesson from microservices still holds. It is the same operational playbook: timeouts, retries with backoff, circuit breakers to avoid cascading failures, and fallback behaviors that let a workflow degrade gracefully. For tool calls, it also helps to design for idempotency so retries do not create duplicate actions.
This is where agentic design becomes an engineering discipline. You define failure semantics, set performance expectations, and decide what the system should do under degradation. In other words, you are making explicit tradeoffs about latency, cost, and correctness when a dependency is slow or unavailable.
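The sketch below applies that playbook to a single tool call: retries with exponential backoff, an idempotency key so retries cannot duplicate side effects, and a graceful fallback. The call_tool function and its failure modes are stand-ins for a real network call:

```python
import time
import uuid

# Sketch of the resilience playbook for one tool call. call_tool and the
# retry/fallback values are illustrative assumptions, not a specific API.

_completed: set[str] = set()  # server-side dedup keyed by idempotency key

def call_tool(payload: dict, idempotency_key: str) -> str:
    if idempotency_key in _completed:
        return "duplicate suppressed"  # a retry does not repeat the side effect
    # ... a real network call, with a timeout budget, would go here ...
    _completed.add(idempotency_key)
    return "ok"

def call_with_retries(payload: dict, attempts: int = 3) -> str:
    key = str(uuid.uuid4())  # one key for the whole logical operation
    for attempt in range(attempts):
        try:
            return call_tool(payload, idempotency_key=key)
        except TimeoutError:
            time.sleep(2 ** attempt)  # exponential backoff between retries
    # Fallback: degrade gracefully instead of cascading the failure.
    return "tool unavailable; returning cached or partial result"

print(call_with_retries({"action": "create_ticket"}))
```

A production version would add a circuit breaker around call_with_retries so a persistently failing dependency is skipped entirely rather than hammered with retries.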
In microservices, we trace requests through services. In agentic systems, we also need to trace decisions through workflow steps. When a result is wrong, you want to know whether the failure came from retrieval, a tool invocation, or the model’s reasoning. That means capturing step-level traces, tool-call inputs and outputs, and inference performance metrics; then correlating them end-to-end through a single trace that spans the workflow.
Without observability, agentic AI remains stuck in the prototype stage. With it, teams can tune quality, reduce cost, and improve reliability with the same confidence they bring to cloud-native operations.
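For example, here is what step-level tracing might look like with the OpenTelemetry Python API. Span names and attributes are illustrative, and a real deployment would also configure an SDK exporter to ship traces to a backend:

```python
from opentelemetry import trace  # pip install opentelemetry-api

tracer = trace.get_tracer("agentic-workflow")

def answer(query: str) -> str:
    # One root span correlates every step of the workflow end to end.
    with tracer.start_as_current_span("workflow") as root:
        root.set_attribute("input.query", query)

        with tracer.start_as_current_span("retrieval") as span:
            docs = ["doc-1", "doc-2"]  # stand-in for a real retriever
            span.set_attribute("retrieval.count", len(docs))

        with tracer.start_as_current_span("tool_call") as span:
            span.set_attribute("tool.name", "search_tickets")
            span.set_attribute("tool.output_size", 128)

        with tracer.start_as_current_span("inference") as span:
            span.set_attribute("model.latency_ms", 420)
            return "final answer grounded in " + ", ".join(docs)

print(answer("why did the deploy fail?"))
```

With one span per step, a wrong answer can be attributed to retrieval, a tool invocation, or inference by inspecting a single trace rather than guessing.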
Figure 2 shows a simple reference architecture for an agentic workflow, tracing a request from the client through an orchestrator and agent services, into tools and data, then model inference, and finally end-to-end observability.
Agentic AI becomes real when it is supported by a consistent platform layer across the hybrid cloud, one that delivers efficient inference, consistent runtimes, and lifecycle management at scale. The Red Hat AI portfolio maps cleanly to this microservices mindset, offering inference as a service, consistent runtimes where you need them, and a platform to build and operate AI workflows reliably across environments.
Agentic AI is not a replacement for engineering discipline. It demands more of it. Agentic AI is a distributed system with probabilistic components, and that raises the bar for architecture and operations. In practice, you must engineer agents like microservices by breaking workflows into composable components, enforcing clear contracts and guardrails, designing for failure with timeouts and fallbacks, and instrumenting everything so behavior is observable and auditable.
