Azure AI Foundry is unveiling OpenAI’s GPT-5.1 series, the next generation of reasoning, analytics, and conversational intelligence.
The following models will be rolling out in Foundry today:
Learn more here!
OneNote Class Notebook is entering a new era with a modern, unified LTI® 1.3 integration (as part of the Microsoft 365 LTI app) that brings back automatic roster sync and streamlines how educators and students use Class Notebooks within their LMS.
This update is designed to benefit both educators and IT admins, making it easier to set up Class Notebooks and keep them in sync with course enrollments. In this article, we’ll cover the key new features, use cases, and next steps -- and how the new integration replaces the older LTI 1.1 tool (which is retiring in 2026).
OneNote Class Notebook is a digital binder designed for classrooms, giving educators and students a flexible space to organize and collaborate. Each notebook includes a personal workspace for every student, a content library for handouts, and a collaboration space for group work. Educators can distribute pages, provide feedback, and manage class materials all in one place, while students can take notes, complete assignments, and access resources anytime, on any device--to make learning more organized and interactive.
Setup is simple. Once your LMS has the Microsoft 365 LTI app, any instructor can create a Class Notebook right from the course. Check out the detailed instructions here.
Your notebook is automatically shared with all students and instructors when they access it. Students can launch the notebook from the LMS with single sign-on, making access seamless.
Automatic roster sync is back with LTI 1.3. When you create a Class Notebook via your LMS, students and instructors are added with the right permissions and stay in sync as the course roster changes, with no need for manual updates. This results in less admin work and fewer access gaps as your notebook stays current with your class list.
LTI 1.3 (with LTI Advantage) brings modern security and interoperability as part of the unified Microsoft 365 LTI app -- which covers Class Notebook, Assignments, OneDrive files, Teams, Reflect, and more -- to meet current standards and work reliably across LTI 1.3-compatible LMS platforms.
For admins, one LTI app means simpler deployment and fewer plugins to manage, while configuration settings help you choose which components (OneNote, Teams, etc.) to enable or disable within the Microsoft 365 LTI app for your organization. This gives schools the flexibility to roll out tools at their own pace. The new LTI integration also includes automatic identity mapping and course enrollment syncing.
Meanwhile, educators gain access to a single hub for Microsoft 365 tools and can surface (or hide) components that are approved by the administrator. For educators using Class Notebook, provisioning is streamlined: launching Class Notebook creates and links the right OneNote space to the course with minimal friction.
The classic OneNote Class Notebook LTI 1.1 will retire on September 17, 2026, so it’s best to plan to move to the Microsoft 365 LTI app before then. Existing Notebooks will still work, and you can copy pages/sections into new Notebooks as needed.
Educators: once your administrator has installed the new Microsoft 365 LTI app, look for it in your course, choose Class Notebook, and follow the prompts to create a Notebook in just a few clicks. You can see detailed steps here.
Admins: To deploy the Microsoft 365 LTI 1.3 app in your LMS, see the official deployment guide on Microsoft Learn at aka.ms/LMSAdminDocs for prerequisites, configuration steps, and troubleshooting tips.
Microsoft also holds office hours on the first and third Thursday of each month, where LMS platform customers and Microsoft 365 customers from around the world ask questions, share feedback, discuss scenarios, and get assistance from Microsoft and peers. Feel free to join us:
Microsoft 365 LTI Office Hours
1st and 3rd Thursday of each month @11am EST
Join link: https://aka.ms/LTIOfficeHours
You can also join the free Education Insiders Program to preview updates, get support from other Class Notebook community members, meet the team, and influence the roadmap.
Need help with a specific issue? Contact Microsoft Education support at aka.ms/EduSupport.
We can’t wait to hear your feedback!
Learning Tools Interoperability® (LTI®) is a trademark of the 1EdTech Consortium, Inc. (1edtech.org)
The 22nd cumulative update release for SQL Server 2022 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:
Today, we're thrilled to announce the public preview of the durable task extension for Microsoft Agent Framework. This extension transforms how you build production-ready, resilient and scalable AI agents by bringing the proven durable execution (survives crashes and restarts) and distributed execution (runs across multiple instances) capabilities of Azure Durable Functions directly into the Microsoft Agent Framework. Now you can deploy stateful, resilient AI agents to Azure that automatically handle session management, failure recovery, and scaling, freeing you to focus entirely on your agent logic.
Whether you're building customer service agents that maintain context across multi-day conversations, content pipelines with human-in-the-loop approval workflows, or fully automated multi-agent systems coordinating specialized AI models, the durable task extension gives you production-grade reliability, scalability and coordination with serverless simplicity.
Key features of the durable task extension include:
```python
# Python
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME", "gpt-4o-mini")

# Create an AI agent following the standard Microsoft Agent Framework pattern
agent = AzureOpenAIChatClient(
    endpoint=endpoint,
    deployment_name=deployment_name,
    credential=AzureCliCredential()
).create_agent(
    instructions="""You are a professional content writer who creates engaging,
well-structured documents for any given topic. When given a topic, you will:
1. Research the topic using the web search tool
2. Generate an outline for the document
3. Write a compelling document with proper formatting
4. Include relevant examples and citations""",
    name="DocumentPublisher",
    tools=[search_web, generate_outline]
)

# Configure the function app to host the agent with durable session management
app = AgentFunctionApp(agents=[agent])
app.run()
```

```csharp
// C#
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT") ?? "gpt-4o-mini";

// Create an AI agent following the standard Microsoft Agent Framework pattern
AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential())
    .GetChatClient(deploymentName)
    .CreateAIAgent(
        instructions: """
            You are a professional content writer who creates engaging,
            well-structured documents for any given topic. When given a topic, you will:
            1. Research the topic using the web search tool
            2. Generate an outline for the document
            3. Write a compelling document with proper formatting
            4. Include relevant examples and citations
            """,
        name: "DocumentPublisher",
        tools: [AIFunctionFactory.Create(SearchWeb), AIFunctionFactory.Create(GenerateOutline)]);

// Configure the function app to host the agent with durable thread management
// This automatically creates HTTP endpoints and manages state persistence
using IHost app = FunctionsApplication
    .CreateBuilder(args)
    .ConfigureFunctionsWebApplication()
    .ConfigureDurableAgents(options =>
        options.AddAIAgent(agent))
    .Build();

app.Run();
```
As AI agents evolve from simple chatbots to sophisticated systems handling complex, long-running tasks, new challenges emerge:
The durable task extension addresses these challenges by extending Microsoft Agent Framework with capabilities from Azure Durable Functions, enabling you to build AI agents that survive failures, scale elastically, and execute predictably through durable and distributed execution.
The extension is built on four foundational value pillars, which we refer to as the 4D’s:
Every agent state change (messages, tool calls, decisions) is durably checkpointed automatically. Agents survive infrastructure updates and crashes, automatically resume where they left off, and can be unloaded from memory during long waiting periods without losing context. This is essential for agents that orchestrate long-running operations or wait for external events.
Agent execution is accessible across all instances, enabling elastic scaling and automatic failover. Healthy nodes seamlessly take over work from failed instances, ensuring continuous operation. This distributed execution model allows thousands of stateful agents to scale up and run in parallel.
Agent orchestrations execute predictably using imperative logic written as ordinary code. Define the execution path, enabling automated testing, verifiable guardrails, and business-critical workflows that stakeholders can trust. This complements agent-directed workflows by providing explicit control flow when needed.
Use familiar development tools (IDEs, debuggers, breakpoints, stack traces, and unit tests) and programming languages to develop and debug. Your agent and agent orchestrations are expressed as code, making them easily testable, debuggable, and maintainable.
Deploy agents to Azure Functions (with expansion to other Azure computes soon) with automatic scaling to thousands of instances or down to zero when not in use. Pay only for the compute resources you consume. This code-first deployment approach gives you full control over the compute environment while maintaining the benefits of a serverless architecture.
Agent sessions are automatically checkpointed in durable storage that you configure in your function app, enabling durable and distributed execution across multiple instances. Any instance can resume an agent's execution after interruptions or process failures, ensuring continuous operation.
Under the hood, agents are implemented as durable entities. These are stateful objects that maintain their state across executions. This architecture enables each agent session to function as a reliable, long-lived entity with preserved conversation history and context.
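To make the durable-entity idea concrete, here is a plain-Python sketch (not the actual extension's implementation) of the pattern: a session object whose state is checkpointed to storage after every change, so a different process or instance can rehydrate it and continue. The `AgentEntity` class and file-based store are illustrative stand-ins for durable entities backed by your configured storage.

```python
import json
import tempfile
from pathlib import Path

class AgentEntity:
    """Toy stand-in for a durable entity: state is checkpointed after
    every operation, so a fresh process (or another instance) can
    rehydrate the session and continue where it left off."""

    def __init__(self, session_id: str, store_dir: Path):
        self._path = store_dir / f"{session_id}.json"
        # Rehydrate prior state if a checkpoint exists
        self.history = (
            json.loads(self._path.read_text()) if self._path.exists() else []
        )

    def add_message(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})
        # Checkpoint immediately, before returning control
        self._path.write_text(json.dumps(self.history))

store = Path(tempfile.mkdtemp())

# First "instance" handles part of the conversation, then goes away
a = AgentEntity("case-123", store)
a.add_message("user", "My order never arrived.")
del a

# A different instance picks up the same session with full context
b = AgentEntity("case-123", store)
print(len(b.history))  # 1: the message survived the "crash"
```

The real extension does the same thing with durable storage and automatic checkpointing instead of local JSON files, but the rehydrate-then-continue lifecycle is the same.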
Example scenario: A customer service agent handling a complex support case over multiple days and weeks. The conversation history, context, and progress are preserved even if the agent is redeployed or moves to a different instance.
Coordinate multiple specialized durable agents using imperative code where you define the control flow. This differs from agent-directed workflows where the agent decides the next steps. Deterministic Orchestrations provide predictable, repeatable execution patterns with automatic checkpointing and recovery.
Example scenario: An email processing system that uses a spam detection agent, then conditionally routes to different specialized agents based on the classification. If any step fails, the orchestration automatically recovers, and completed agent calls are not re-executed.
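The control flow of that scenario can be sketched in ordinary Python. This is a runnable illustration of "the code decides the routing", with the agent calls stubbed out as plain functions; in the real extension each stubbed call would be a checkpointed `yield` to a durable agent, so completed steps are not re-executed on recovery. All function names here are hypothetical.

```python
# Stub agents: in a durable orchestration each of these would be
# a checkpointed call such as `yield context.get_agent(...).run(...)`.
def spam_agent(email: str) -> str:
    return "spam" if "WIN A PRIZE" in email else "legitimate"

def billing_agent(email: str) -> str:
    return "billing handled"

def support_agent(email: str) -> str:
    return "support handled"

def process_email(email: str) -> str:
    # Imperative control flow: the orchestration code, not the agent,
    # decides which specialized agent runs next.
    classification = spam_agent(email)
    if classification == "spam":
        return "discarded"
    if "invoice" in email.lower():
        return billing_agent(email)
    return support_agent(email)

print(process_email("WIN A PRIZE now!!!"))           # discarded
print(process_email("Question about invoice #42"))   # billing handled
print(process_email("Where is my package?"))         # support handled
```

Because the branching is explicit code, it can be unit-tested and reviewed like any other business logic, which is the point of deterministic orchestrations.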
Orchestrations and agents can pause for human input, approval, or review without consuming compute resources. Durable execution enables orchestrations to wait for days or even weeks while waiting for human responses, even if the app crashes or restarts. When combined with serverless hosting, all compute resources are spun down during the wait period, eliminating compute costs until the human provides their input.
Example scenario: A content publishing agent that generates drafts, sends them to human reviewers, and waits days for approval without running (or paying for) compute resources during the review period. When the human response arrives, the orchestration automatically resumes with full conversation context and execution state intact.
```python
# Python
@app.orchestration_trigger(context_name="context")
def content_approval_workflow(context: DurableOrchestrationContext):
    """Human-in-the-loop workflow with zero-cost waiting."""
    topic = context.get_input()

    # Step 1: Generate content using an agent
    content_agent = context.get_agent("ContentGenerationAgent")
    draft_content = yield content_agent.run(f"Write an article about {topic}")

    # Step 2: Send for human review
    yield context.call_activity("notify_reviewer", draft_content)

    # Step 3: Wait for approval - no compute resources consumed while waiting
    approval_event = context.wait_for_external_event("ApprovalDecision")
    timeout_task = context.create_timer(context.current_utc_datetime + timedelta(hours=24))
    winner = yield context.task_any([approval_event, timeout_task])

    if winner == approval_event:
        timeout_task.cancel()
        approved = approval_event.result
        if approved:
            result = yield context.call_activity("publish_content", draft_content)
            return result
        else:
            return "Content rejected"
    else:
        # Timeout - escalate for review
        result = yield context.call_activity("escalate_for_review", draft_content)
        return result
```
Configure your Function App with the Durable Task Scheduler as the durable backend (the component that persists agent and orchestration state). The Durable Task Scheduler is the recommended durable backend for your durable agents, offering the best throughput performance, fully managed infrastructure, and built-in observability through a UI dashboard.
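As a rough illustration, wiring a Function App to the Durable Task Scheduler is a host.json change along these lines; the exact property names and connection-string format below are taken from the Durable Functions documentation as I understand it, so verify them against the current deployment guide before use.

```json
{
  "extensions": {
    "durableTask": {
      "hubName": "default",
      "storageProvider": {
        "type": "azureManaged",
        "connectionStringName": "DURABLE_TASK_SCHEDULER_CONNECTION_STRING"
      }
    }
  }
}
```

The referenced app setting would then hold the scheduler endpoint and an identity-based authentication mode rather than a storage account key.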
The Durable Task Scheduler dashboard provides deep visibility into your agent operations:
The durable task extension supports:
Support for additional computes coming soon.