Most applications used by millions of people every day are powered by JavaScript/TypeScript. But when it comes to AI, most learning resources and code samples assume you're working in Python, leaving you to stitch scattered tutorials together to build AI into your stack.
The JavaScript AI Build-a-thon is a free, hands-on program designed to close that gap. Over the course of four weeks (March 2 - March 31, 2026), you'll move from running AI 100% on-device (Local AI) to designing multi-service, multi-agentic systems, all in JavaScript/TypeScript and using tools you're already familiar with.
The series culminates in a hackathon, where you'll create, compete, and turn what you've learned into working projects you can point to, talk about, and extend.
Register now at aka.ms/JSAIBuildathon
The program is organized around two phases:
| Day/Time (PT) | Topic | Links to join |
|---|---|---|
| Mon 3/2, 8:00 AM PST | Local AI Development with Foundry Local | Livestream · Discord Office Hour |
| Wed 3/4, 8:00 AM PST | End-to-End Model Development on Microsoft Foundry | Livestream · Discord Office Hour |
| Fri 3/6, 9:00 AM PST | Advanced RAG Deep Dive + Guided Project | Livestream · Discord Office Hour |
| Mon 3/9, 8:00 AM PST | Design & Build an Agent E2E with Agent Builder (AITK) | Livestream · Discord Office Hour |
| Wed 3/11, 8:00 AM PST | Build, Scale & Govern AI Agents + Guided Project | Livestream · Discord Office Hour |
The Build-a-thon prioritizes practical learning, so you'll complete two guided projects by the end of this phase:
1. A Local Serverless AI Chat with RAG
Concepts covered include:
2. A Burger Ordering AI Agent
Concepts covered include:
This is where you'll build something that matters, using everything you've learned in the quests, and beyond, to create an AI-powered project that solves a real problem, delights users, or pushes what's possible.
The hackathon launches on March 13, 2026. Full details on registration, submission, judging criteria, award categories, prizes, and the hack phase schedule will be published when the hack goes live. Stay tuned!
But, here's what we can tell you now:
Join our community to connect with other participants and experts from Microsoft & GitHub to support your builder journey.
Register now at aka.ms/JSAIBuildathon
See you soon!
This blog explores background-workload use cases where Azure Functions on Azure Container Apps provides clear advantages over traditional Container Apps Jobs. Here is an overview of Azure Functions and Container Apps Jobs on Azure Container Apps.
Container-based jobs offer control. You define the image, configure the execution, manage the lifecycle. But for many scenarios, you’re writing boilerplate:
Azure Functions offers simplicity: triggers, bindings, automatic scaling. But historically, you traded away container flexibility: custom runtimes, specific dependencies, and the portable packaging model teams have standardized on.
Here's what's changed: Azure Functions now runs natively on Azure Container Apps infrastructure. You get the event-driven programming model (triggers, bindings, Durable Functions) with the container-native foundation your platform team already manages.
This isn’t “Functions or containers.” It’s Functions with containers.
The implications are significant:
A retail company processes inventory updates from 200+ suppliers every night. Files land in blob storage between midnight and 4 AM, varying from 10 KB to 500 MB.
With a traditional container job approach, you’d need: a scheduler to trigger execution, polling logic to detect new files, parallel processing code, error handling with dead-letter queues, and cleanup routines. The job runs on a schedule whether files exist or not.
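For a concrete sense of that boilerplate, here is a rough stdlib-only Python sketch of the polling loop, retry handling, and dead-letter bookkeeping such a job typically re-implements. `list_new_files` and `process_file` are hypothetical callables supplied by the caller, not part of any Azure SDK:

```python
import time


def poll_for_files(list_new_files, process_file,
                   max_retries=3, interval_s=60, max_cycles=None):
    """Coordination code a scheduled container job re-implements by hand:
    poll for work, retry each item, and park repeated failures in a
    dead-letter list for later inspection."""
    dead_letter = []
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for name in list_new_files():
            for attempt in range(1, max_retries + 1):
                try:
                    process_file(name)
                    break  # success: stop retrying this file
                except Exception:
                    if attempt == max_retries:
                        dead_letter.append(name)  # give up on this file
        cycles += 1
        if max_cycles is None or cycles < max_cycles:
            time.sleep(interval_s)  # idle between polls, files or not
    return dead_letter
```

Every line of this is coordination code that an event-driven trigger makes unnecessary.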
With Functions on Container Apps: a blob trigger fires automatically when files arrive. Each file is processed independently. Automatic parallelization. Built-in retry policies. The function scales based on the actual files, not a predetermined schedule.
```python
import azure.functions as func

app = func.FunctionApp()

@app.blob_trigger(arg_name="blob", path="inventory-uploads/{name}",
                  connection="StorageConnection")
async def process_inventory(blob: func.InputStream):
    data = blob.read()
    # Transform and load to database
    await transform_and_load(data, blob.name)
```

The difference? Event-driven execution means no wasted runs when suppliers are late, and no missed files when they're early. The trigger handles the coordination.
An e-commerce platform processes orders through multiple stages: validation, inventory check, payment capture, fulfillment notification. Each stage can fail independently and needs different retry semantics.
A container-based job would need custom state management: tracking which orders are at which stage, handling partial failures, implementing resume logic after crashes.
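To make that contrast concrete, a hand-rolled version of such state tracking might look like the following sketch; the stage names, `state_store`, and `handlers` are illustrative inventions, not an Azure API:

```python
# Hand-rolled checkpointing a container job would otherwise need:
# persist each order's last completed stage so a crash can resume mid-pipeline.
STAGES = ["validate", "check_inventory", "capture_payment", "notify_fulfillment"]


def run_order(order_id, state_store, handlers):
    """state_store: dict-like mapping order_id -> index of next stage to run.
    handlers: dict mapping stage name -> callable that may raise on failure."""
    start = state_store.get(order_id, 0)  # resume where we left off
    for i in range(start, len(STAGES)):
        handlers[STAGES[i]](order_id)   # may raise; re-run resumes at this stage
        state_store[order_id] = i + 1   # checkpoint only after success
    return "completed"
```

The store, the checkpoint ordering, and the resume index are all logic that Durable Functions provides out of the box.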
Durable Functions on Container Apps solves this declaratively:
```python
import azure.durable_functions as df

app = df.DFApp()

@app.orchestration_trigger(context_name="context")
def order_workflow(context: df.DurableOrchestrationContext):
    order = context.get_input()
    # Each step is independently retryable with built-in checkpointing
    validated = yield context.call_activity("validate_order", order)
    inventory = yield context.call_activity("check_inventory", validated)
    payment = yield context.call_activity("capture_payment", inventory)
    yield context.call_activity("notify_fulfillment", payment)
    return {"status": "completed", "order_id": order["id"]}
```

The orchestrator maintains state across failures automatically. If payment capture fails after the inventory check, the workflow resumes at the payment step, not from the beginning. No external state store to manage. No custom checkpoint logic to write.
Finance teams need their reports: daily summaries, weekly aggregations, month-end reconciliations.
Timer-triggered Functions handle this with minimal ceremony and they run in the same Container Apps environment as your other services:
```python
import logging
from datetime import date, timedelta

import azure.functions as func

app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 6 * * *", arg_name="timer")
async def daily_financial_summary(timer: func.TimerRequest):
    if timer.past_due:
        logging.warning("Timer is running late!")
    await generate_summary(date.today() - timedelta(days=1))
    await send_to_stakeholders()
```

No separate job definition. No CRON expression parsing. The schedule is code, versioned alongside your business logic.
“But what about jobs that run for hours?” A fair question.
A data migration team needed to process 50 million records. Rather than one monolithic execution, they used the fan-out/fan-in pattern with Durable Functions:
```python
import azure.durable_functions as df

app = df.DFApp()

@app.orchestration_trigger(context_name="context")
def migration_orchestrator(context: df.DurableOrchestrationContext):
    batches = yield context.call_activity("get_migration_batches")
    # Process all batches in parallel across multiple instances
    tasks = [context.call_activity("migrate_batch", b) for b in batches]
    results = yield context.task_all(tasks)
    yield context.call_activity("generate_report", results)
```

Each batch processes independently. Failures are isolated. Progress is checkpointed. The entire migration completed in hours with automatic parallelization, while maintaining full visibility into each batch's status.
Beyond the architectural benefits, there's a pragmatic reality: most batch workloads are fundamentally about reacting to something and producing a result.
Functions on Container Apps gives you:
| Consideration | Azure Functions on Azure Container Apps | Azure Container Apps Jobs |
|---|---|---|
| Trigger model | Event-driven (files, messages, timers, HTTP, events) | Explicit execution (manual, scheduled, or externally triggered) |
| Scaling behavior | Automatic scaling based on trigger volume / queue depth | Fixed or explicitly defined parallelism |
| Programming model | Functions programming model with triggers, bindings, Durable Functions | General container execution model |
| State management | Built-in state, retries, and checkpointing via Durable Functions | Custom state management required |
| Workflow orchestration | Native support using Durable Functions | Must be implemented manually |
| Boilerplate required | Minimal (no polling, retry, or coordination code) | Higher (polling, retries, lifecycle handling) |
| Runtime flexibility | Limited to supported Functions runtimes | Full control over runtime and dependencies |
If you’re already running on Container Apps, adding Functions is straightforward:
Your Functions run alongside your existing apps, sharing the same networking, observability, and scaling infrastructure.
Check out the documentation for details - Getting Started on Functions on Azure Container Apps
```bash
# Create a Functions app in your existing Container Apps environment
az functionapp create \
  --name my-batch-processor \
  --storage-account mystorageaccount \
  --environment my-container-apps-env \
  --workload-profile-name "Consumption" \
  --runtime python \
  --functions-version 4
```
What does it take to get your SaaS offering on multiple cloud providers? Richard chats with Steve Buchanan about his new role at JAMF, which focuses on a mobile device management product for Apple devices. Originally built as a SaaS product on AWS, Steve is helping to build out the JAMF stack on Azure to support a broader range of customers. Steve talks about Kubernetes as the common ground among the major cloud players, but you need to dig into the rest of the tooling to minimize differences across implementations. That means cloud-agnostic tools for deployment, identity, instrumentation, and more! The good news is that there are plenty of tools out there to help you, but it does take time to work out your suite of tools to get consistent results, no matter where the backend resides.
Links
Recorded January 8, 2026
Every experience in Office is designed using the principle “the user is in control.” It’s critical that customers are the ones who make choices about their user experience, ensuring it fits their specific needs and workflows.
Customers have asked for greater user control over add-in launch behavior in Office. Specifically, integrations built into our applications via the add-in framework have been appearing automatically when documents are opened, without notice to the user and without a way to disable that behavior, causing confusion and frustration.
After careful consideration, we are making three adjustments to the behavior of add-ins distributed through the Microsoft Marketplace to address this customer feedback:
– Excel: When the workbook uses custom functions provided by an add-in and cannot calculate correctly without it.
– Excel and PowerPoint: When the file contains content add-ins that must be loaded for a worksheet or slide to render correctly.
With these changes, developers should consider updating their product documentation and add-in experiences to guide their users on best practices for inviting others to benefit from their add-in. For example, users could add a note about the add-in when sharing a document with others or embed a link to the Microsoft Marketplace listing in an appropriate place in the document.
These changes do not affect the runtime behavior of add-ins—they will only impact the conditions under which the add-in can automatically launch UI without explicit user choice.
Show or hide the task pane of your Office Add-in
Automatically open a task pane with a document
The post Increased control over Office Add-in user experiences appeared first on Microsoft 365 Developer Blog.