Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
148344 stories
·
33 followers

Apple's Touch-Screen MacBook Pro To Have Dynamic Island, New Interface

1 Share
Apple's forthcoming touch-screen MacBook Pro models -- the company's first-ever laptops to support touch input -- will feature the iPhone's Dynamic Island at the center top of their OLED displays and a new interface that dynamically adjusts between touch and point-and-click controls, according to a Bloomberg report citing people familiar with the plans. The 14-inch and 16-inch models, code-named K114 and K116, are slated for release toward the end of 2026 and won't be part of Apple's product announcements in the first week of March. The redesigned interface brings up a contextual menu surrounding a user's finger when they touch a button or control, and enlarges menu bar items when tapped, adapting the available controls based on whether the input is touch or click. Apple does not plan to position the machines as iPad replacements or describe them as touch-first; the physical design retains the full keyboard and large trackpad of the current MacBook Pro. Last year's Liquid Glass redesign in macOS Tahoe, which added more padding around icons and touch-optimized sliders in the control center, was partly groundwork for this shift.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
9 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The JavaScript AI Build-a-thon Season 2 starts March 2!

1 Share

Most applications used by millions of people every day are powered by JavaScript/TypeScript. But when it comes to AI, most learning resources and code samples assume you're working in Python, leaving you to stitch scattered tutorials together to build AI into your stack.

The JavaScript AI Build-a-thon is a free, hands-on program designed to close that gap. Over the course of four weeks (March 2 - March 31, 2026), you'll move from running AI 100% on-device (Local AI) to designing multi-service, multi-agent systems, all in JavaScript/TypeScript and using tools you're already familiar with.
The series culminates in a hackathon, where you'll create, compete, and turn what you've learned into working projects you can point to, talk about, and extend.

Register now at aka.ms/JSAIBuildathon

How the program works!

The program is organized around two phases:

Phase I: Learn & Skill Up (Mar 2 - 13)

  • Self-paced quests that teach core AI patterns
  • Interactive expert-led sessions on Microsoft Reactor (livestreams) and Discord (office hours & Q&A)

JavaScript AI Build-a-thon Roadmap

| Day/Time (PT) | Topic | Links to join |
| --- | --- | --- |
| Mon 3/2, 8:00 AM PST | Local AI Development with Foundry Local | Livestream · Discord Office Hour |
| Wed 3/4, 8:00 AM PST | End-to-End Model Development on Microsoft Foundry | Livestream · Discord Office Hour |
| Fri 3/6, 9:00 AM PST | Advanced RAG Deep Dive + Guided Project | Livestream · Discord Office Hour |
| Mon 3/9, 8:00 AM PST | Design & Build an Agent E2E with Agent Builder (AITK) | Livestream · Discord Office Hour |
| Wed 3/11, 8:00 AM PST | Build, Scale & Govern AI Agents + Guided Project | Livestream · Discord Office Hour |

The Build-a-thon prioritizes practical learning, so you'll complete two guided projects by the end of this phase:

1. A Local Serverless AI Chat with RAG
Concepts covered include:

  • RAG Architecture
  • RAG Ingestion pipeline
  • Query & Retrieval
  • Response Generation (LLM Chains)

Serverless Chat LangChain.js CodeTour
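In miniature, the query-and-retrieval step above is just scoring stored chunks against the question and prepending the best matches to the prompt. Here's a language-agnostic sketch in Python (the workshop itself uses LangChain.js; the chunks and keyword scoring are invented for illustration):

```python
def retrieve(query, chunks, k=2):
    """Naive keyword-overlap retrieval; real pipelines use embeddings."""
    q = set(query.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, chunks):
    """Response generation step: best chunks become grounding context."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# Hypothetical output of the ingestion pipeline
chunks = [
    "Returns are accepted within 30 days.",
    "Shipping is free over $50.",
    "Support is available by email.",
]
prompt = build_prompt("What is the returns policy?", chunks)
```

A real pipeline swaps the keyword overlap for embedding similarity and hands the assembled prompt to an LLM chain for response generation.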

2. A Burger Ordering AI Agent
Concepts covered include:

  • Designing AI Agents
  • Building MCP Tools (Backend API Design)

Contoso Burger Ordering Agent
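At its core, an MCP tool is a described function an agent can discover and invoke over a protocol. A minimal Python sketch of that shape (hypothetical names; the actual project builds real MCP tools behind a backend API in TypeScript):

```python
import json

TOOLS = {}

def tool(name, description):
    """Register a function together with a machine-readable description."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("add_to_order", "Add an item to the current burger order")
def add_to_order(order, item):
    return order + [item]

def call_tool(name, **kwargs):
    """What the agent runtime does when the model picks a tool."""
    return TOOLS[name]["fn"](**kwargs)

# The agent discovers tools as data, then invokes one:
catalog = json.dumps({n: t["description"] for n, t in TOOLS.items()})
order = call_tool("add_to_order", order=[], item="cheeseburger")
```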

Phase II: Global Hack! (Mar 13 - 31)

  • Product demo series to showcase the latest product features that will accelerate your builder experience
  • A Global hackathon to apply what you learn into real, working AI solutions

This is where you'll build something that matters, using everything you've learned in the quests and beyond to create an AI-powered project that solves a real problem, delights users, or pushes what's possible.

The hackathon launches on March 13, 2026. Full details on registration, submission, judging criteria, award categories, prizes, and the hack phase schedule will be published when the hack goes live. Stay tuned!

But here's what we can tell you now:

  • 🏆 6 award categories
  • đź’» Product demo showcases throughout the hack phase to keep you building with the latest tools
  • 👥 Teams of up to 4 or solo. Your call

Start Now (Join the Community)

Join our community to connect with other participants and experts from Microsoft and GitHub to support your builder journey.

Register now at aka.ms/JSAIBuildathon

See you soon!


Rethinking Background Workloads with Azure Functions on Azure Container Apps

1 Share

Objective

This post explores background workload use cases where Azure Functions on Azure Container Apps provides clear advantages over traditional Container Apps Jobs. Here is an overview of Azure Functions and Container Apps Jobs on Azure Container Apps.

The Traditional Trade-offs

Container-based jobs offer control. You define the image, configure the execution, manage the lifecycle. But for many scenarios, you’re writing boilerplate:

  • Polling logic to detect new files or messages
  • Retry mechanisms with backoff strategies
  • Parallelization code for batch processing
  • State management for long-running workflows
  • Cleanup routines and graceful shutdown handling
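For a sense of scale, the first two bullets alone typically mean hand-rolling something like this (a schematic sketch, not Azure code; `fetch_new_items` and `process` are hypothetical stand-ins for detection and per-item work):

```python
import time

def run_polling_job(fetch_new_items, process, *, interval_s=0.0,
                    max_retries=3, base_backoff_s=0.0, max_cycles=1):
    """Hand-rolled polling plus retry-with-exponential-backoff loop."""
    results = []
    for _ in range(max_cycles):
        for item in fetch_new_items():
            for attempt in range(max_retries):
                try:
                    results.append(process(item))
                    break
                except Exception:
                    if attempt == max_retries - 1:
                        raise  # dead-letter handling would go here
                    time.sleep(base_backoff_s * (2 ** attempt))
        time.sleep(interval_s)
    return results

# A transient failure on the first attempt succeeds on retry:
attempts = {"n": 0}
def flaky_process(item):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("transient error")
    return item.upper()

out = run_polling_job(lambda: ["inventory.csv"], flaky_process)
```

All of this is coordination code, not business logic, and every team ends up writing a slightly different version of it.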

Azure Functions offers simplicity: triggers, bindings, automatic scaling. But historically, you traded away container flexibility: custom runtimes, specific dependencies, and the portable packaging model teams have standardized on.

The Convergence: Functions on Container Apps

Here’s what’s changed: Azure Functions now runs natively on Azure Container Apps infrastructure. You get the event-driven programming model (triggers, bindings, Durable Functions) with the container-native foundation your platform team already manages.

This isn’t “Functions or containers.” It’s Functions with containers.

The implications are significant:

  1. Same Container Apps environment your APIs and services use
  2. Event-driven triggers without writing polling code
  3. Built-in bindings for storage, queues, Cosmos DB, Event Hubs
  4. Durable Functions for complex workflows and long-running orchestrations
  5. KEDA-powered scaling that understands your triggers natively

Scenarios Where This Shines

The Overnight Data Pipeline

A retail company processes inventory updates from 200+ suppliers every night. Files land in blob storage between midnight and 4 AM, varying from 10 KB to 500 MB.

With a traditional container job approach, you’d need: a scheduler to trigger execution, polling logic to detect new files, parallel processing code, error handling with dead-letter queues, and cleanup routines. The job runs on a schedule whether files exist or not.

With Functions on Container Apps: a blob trigger fires automatically when files arrive. Each file processes independently, with automatic parallelization and built-in retry policies. The function scales based on the files that actually arrive, not on a predetermined schedule.

    @app.blob_trigger(arg_name="blob", path="inventory-uploads/{name}",
                      connection="StorageConnection")
    async def process_inventory(blob: func.InputStream):
        data = blob.read()
        # Transform and load to database
        await transform_and_load(data, blob.name)

The difference? Event-driven execution means no wasted runs when suppliers are late. No missed files when they’re early. The trigger handles the coordination.

The Event-Driven Order Processor

An e-commerce platform processes orders through multiple stages: validation, inventory check, payment capture, fulfillment notification. Each stage can fail independently and needs different retry semantics.

A container-based job would need custom state management tracking which orders are at which stage, handling partial failures, implementing resume logic after crashes.

Durable Functions on Container Apps solves this declaratively:

    @app.orchestration_trigger(context_name="context")
    def order_workflow(context: df.DurableOrchestrationContext):
        order = context.get_input()
        # Each step is independently retryable with built-in checkpointing
        validated = yield context.call_activity("validate_order", order)
        inventory = yield context.call_activity("check_inventory", validated)
        payment = yield context.call_activity("capture_payment", inventory)
        yield context.call_activity("notify_fulfillment", payment)
        return {"status": "completed", "order_id": order["id"]}

The orchestrator maintains state across failures automatically. If payment capture fails after the inventory check, the workflow resumes at payment capture, not from the beginning. No external state store to manage. No custom checkpoint logic to write.
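That resume-at-the-failed-step behavior comes from checkpointing each completed activity and replaying the history on restart. A toy sketch of the idea (nothing like the real Durable Functions implementation, but the same shape):

```python
def run_workflow(steps, history, fail_at=None):
    """Toy checkpointed workflow: completed step results live in `history`,
    so a rerun after a crash replays past them instead of redoing the work."""
    value = None
    for name, fn in steps:
        if name in history:        # already checkpointed: replay the result
            value = history[name]
            continue
        if name == fail_at:        # simulate a crash at this step
            raise RuntimeError(f"crashed at {name}")
        value = fn(value)
        history[name] = value      # checkpoint the completed activity
    return value

steps = [
    ("validate", lambda _: "validated"),
    ("inventory", lambda v: v + "+inventory"),
    ("payment", lambda v: v + "+paid"),
]
history = {}
try:
    run_workflow(steps, history, fail_at="payment")   # first run crashes
except RuntimeError:
    pass
result = run_workflow(steps, history)                 # resumes at payment
```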

The Scheduled Report Generator

Finance teams need their reports: daily summaries, weekly aggregations, month-end reconciliations.

Timer-triggered Functions handle this with minimal ceremony and they run in the same Container Apps environment as your other services:

    @app.timer_trigger(schedule="0 0 6 * * *", arg_name="timer")
    async def daily_financial_summary(timer: func.TimerRequest):
        if timer.past_due:
            logging.warning("Timer is running late!")
        await generate_summary(date.today() - timedelta(days=1))
        await send_to_stakeholders()

No separate job definition. No external scheduler to configure. The schedule is code, versioned alongside your business logic.
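For reference, the schedule string is a six-field NCRONTAB expression with a leading seconds field, so `"0 0 6 * * *"` fires daily at 06:00. A quick sanity-check sketch that only labels the fields (simplified; it doesn't validate ranges or special syntax):

```python
def describe_ncrontab(expr):
    """Label the six NCRONTAB fields (seconds-first ordering)."""
    names = ["second", "minute", "hour", "day", "month", "day-of-week"]
    fields = expr.split()
    assert len(fields) == 6, "NCRONTAB uses six fields, seconds first"
    return dict(zip(names, fields))

parts = describe_ncrontab("0 0 6 * * *")
# second=0, minute=0, hour=6, every day -> fires daily at 06:00
```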

The Long-Running Migration

“But what about jobs that run for hours?” A fair question.

A data migration team needed to process 50 million records. Rather than one monolithic execution, they used the fan-out/fan-in pattern with Durable Functions:

    @app.orchestration_trigger(context_name="context")
    def migration_orchestrator(context: df.DurableOrchestrationContext):
        batches = yield context.call_activity("get_migration_batches")
        # Process all batches in parallel across multiple instances
        tasks = [context.call_activity("migrate_batch", b) for b in batches]
        results = yield context.task_all(tasks)
        yield context.call_activity("generate_report", results)

Each batch processes independently. Failures are isolated. Progress is checkpointed. The entire migration completes in hours with automatic parallelization, while maintaining full visibility into each batch’s status.
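The fan-out/fan-in shape itself is easy to see in plain Python with asyncio, independent of Durable Functions (a conceptual sketch with made-up batch data):

```python
import asyncio

async def migrate_batch(batch):
    """Stand-in for the real per-batch migration work."""
    await asyncio.sleep(0)
    return {"batch": batch, "migrated": len(batch)}

async def migration_orchestrator(batches):
    # Fan out: one concurrent task per batch
    tasks = [migrate_batch(b) for b in batches]
    # Fan in: wait for all batches, then aggregate into a report
    results = await asyncio.gather(*tasks)
    return {"batches": len(results),
            "migrated": sum(r["migrated"] for r in results)}

report = asyncio.run(migration_orchestrator([[1, 2], [3, 4, 5], [6]]))
```

What Durable Functions adds on top of this shape is durability: the fan-out survives process crashes because each task's result is checkpointed, which plain asyncio does not give you.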

The Developer Experience Advantage

Beyond the architectural benefits, there’s a pragmatic reality: most batch workloads are fundamentally about reacting to something and producing a result.

Functions on Container Apps gives you:

  • Declarative triggers: “When a file arrives, do this.” “When a message appears, process it.” “Every day at 6 AM, generate this report.” The coordination logic is handled for you
  • Native bindings: Direct integration with Azure Storage, Cosmos DB, Event Hubs, Service Bus, and dozens of other services. No SDK initialization boilerplate
  • Workflow orchestration: Durable Functions for stateful, long-running processes with automatic checkpointing, retries, and human interaction patterns
  • Unified observability: Integrated with Application Insights. Distributed tracing across your entire Container Apps environment
  • Same deployment model: Your Functions deploy as container images to the same environment as your APIs and services. One platform, consistent operations

Making the Choice

| Consideration | Azure Functions on Azure Container Apps | Azure Container Apps Jobs |
| --- | --- | --- |
| Trigger model | Event‑driven (files, messages, timers, HTTP, events) | Explicit execution (manual, scheduled, or externally triggered) |
| Scaling behavior | Automatic scaling based on trigger volume / queue depth | Fixed or explicitly defined parallelism |
| Programming model | Functions programming model with triggers, bindings, Durable Functions | General container execution model |
| State management | Built‑in state, retries, and checkpointing via Durable Functions | Custom state management required |
| Workflow orchestration | Native support using Durable Functions | Must be implemented manually |
| Boilerplate required | Minimal (no polling, retry, or coordination code) | Higher (polling, retries, lifecycle handling) |
| Runtime flexibility | Limited to supported Functions runtimes | Full control over runtime and dependencies |

Getting Started

If you’re already running on Container Apps, adding Functions is straightforward:

Your Functions run alongside your existing apps, sharing the same networking, observability, and scaling infrastructure.

Check out the documentation for details: Getting Started on Functions on Azure Container Apps

    # Create a Functions app in your existing Container Apps environment
    az functionapp create \
      --name my-batch-processor \
      --storage-account mystorageaccount \
      --environment my-container-apps-env \
      --workload-profile-name "Consumption" \
      --runtime python \
      --functions-version 4



SaaS on Multiple Clouds with Steve Buchanan

1 Share

What does it take to get your SaaS offering on multiple cloud providers? Richard chats with Steve Buchanan about his new role at JAMF, which focuses on a mobile device management product for Apple devices. Originally built as a SaaS product on AWS, Steve is helping to build out the JAMF stack on Azure to support a broader range of customers. Steve talks about Kubernetes as the common ground among the major cloud players, but you need to dig into the rest of the tooling to minimize differences across implementations. That means cloud-agnostic tools for deployment, identity, instrumentation, and more! The good news is that there are plenty of tools out there to help you, but it does take time to work out your suite of tools to get consistent results, no matter where the backend resides.


Recorded January 8, 2026





Download audio: https://cdn.simplecast.com/media/audio/transcoded/5379899c-61c5-43c3-aa3f-1128cffd9ef4/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/audio/group/581e8fb8-16ea-4fe4-93ed-f559299cf268/group-item/75d155a3-7b65-4e49-a65b-bf37ffad282f/128_default_tc.mp3?aid=rss_feed&feed=cRTTfxcT

Mike Chambers: From OpenClaw to AI Functions — What's Next for Agentic Development

1 Share
Mike Chambers is back — calling in from the other side of the globe — and he brought a lot to unpack. We pick up threads from our first conversation and follow them into genuinely exciting (and occasionally mind-bending) territory. We start with OpenClaw, the open-source agentic framework that took the developer world by storm. Mike shares his take on why it happened now — not just what it is — and why the timing was almost inevitable given how developers had been quietly experimenting with local agents for the past year. Then we go deep on asynchronous tool calling — a project Mike has been working on since mid-2024 that finally works reliably, thanks to more capable models. The idea: let your agent kick off a long-running task, keep the conversation going naturally, and have the result arrive without interrupting the flow. Mike walks through how he built this on top of Strands Agents SDK and why he's planning to propose it as a contribution to the open-source project. We also explore Strands Labs and its freshly released AI Functions — a genuinely new way to think about embedding generative capability directly into application code. Is this Software 3.1? Mike makes the case, and Romain pushes back in the best way. The episode closes with a look ahead: agent trust, observability with OpenTelemetry, and a thought experiment about what software might look like in five years if the execution environment itself becomes a model.

With Mike Chambers, Senior Developer Advocate, AWS





  • Download audio: https://op3.dev/e/dts.podtrac.com/redirect.mp3/developers.podcast.go-aws.com/media/197.mp3

Increased control over Office Add-in user experiences

1 Share

Every experience in Office is designed using the principle “the user is in control.” It’s critical that customers are the ones who make choices about their user experience, ensuring it fits their specific needs and workflows.

Customers have asked for greater user control over add-in launch behavior in Office. Specifically, integrations into our applications via our add-in framework have been automatically appearing when documents are opened, without user notice and without the ability to disable that behavior, causing confusion and frustration.

After careful consideration, we are making three adjustments to the behavior of add-ins distributed through the Microsoft Marketplace to address this customer feedback:

  • First, starting on March 2nd, to ensure user control, add-ins will no longer be able to configure themselves to automatically display a task pane on document launch (also known as auto-open). Add-ins will not be able to (1) programmatically set the AutoShowTaskpaneWithDocument property via office.js nor (2) programmatically load a task pane using the showAsTaskpane API unless that call results from an explicit user action (e.g., the user clicking a button on the Ribbon). In a forthcoming update, we will enable add-ins to explicitly request user consent to be automatically loaded via an API.
  • Second, in a forthcoming update, if a user closes an automatically launched task pane, it will no longer automatically load when the document is subsequently opened, as Office will remove the AutoShowTaskpaneWithDocument property whenever the user takes action to close the add-in.
  • Third, in a forthcoming update, users will no longer be automatically prompted to install add-ins when opening a document unless that add-in is required for the document to function or display correctly. Specifically, a prompt appears in the following cases:

    – Excel: When the workbook uses custom functions provided by an add-in and cannot calculate correctly without it.

    – Excel and PowerPoint: When the file contains content add-ins that must be loaded for a worksheet or slide to render correctly.

With these changes, developers should consider updating their product documentation and add-in experiences to guide their users on best practices for inviting others to benefit from their add-in. For example, users could add a note about the add-in when sharing a document with others or embed a link to the Microsoft Marketplace listing in an appropriate place in the document.

These changes do not affect the runtime behavior of add-ins—they only affect the conditions under which an add-in can automatically launch UI without explicit user choice.

See also

Show or hide the task pane of your Office Add-in

Automatically open a task pane with a document

The post Increased control over Office Add-in user experiences appeared first on Microsoft 365 Developer Blog.
