3 things to know about Red Hat AI 3


Imagine you’re driving an old pickup truck, reliable but a bit worn, down the scenic route of innovation. Up ahead, the road splits—one path winds along the safe, familiar flatlands of traditional IT, and the other, a challenging mountainous trail marked “Artificial Intelligence.” You want to take that adventurous AI path, don’t you? But looking closer, it’s riddled with potholes of high costs and sharp curves of complexity that make you question if your trusty old truck can handle it. Here’s where something like Red Hat AI 3 comes into play—it promises to be the upgrade your vehicle needs to confidently take on that treacherous path.

The AI paradox is a real challenge. As much as organizations wish to innovate with AI, operational headaches such as spiraling costs, escalating complexities, and stringent control needs are formidable obstacles. Too often, what starts as a thrilling foray into AI-driven transformation ends up as a stalled project, with tools and experiments gathering digital dust. Red Hat’s newest offering, AI 3, aims to transform this landscape, turning AI from a challenging endeavor into a strategic advantage.

This video is from Red Hat.

Red Hat AI 3 isn’t just a tool; think of it as a significant evolution of Red Hat’s AI portfolio, a unified platform that brings together various AI solutions. The platform is designed to tackle three major challenges that throttle AI applications: cost, complexity, and control. Let’s dive deeper into how it intends to address these issues.

First up, cost. Running hefty AI models isn’t just a computational headache; it’s also heavy on the wallet. AI 3 introduces efficient model inferencing, which allows scaling without the usual cost penalty. How? Through components like llm-d, which distribute a model’s computational demands across your existing hybrid cloud architecture. This not only maximizes hardware utility but also reduces bottlenecks, improving response times and throughput.

Next, we look toward the horizon—preparing for the future of AI, specifically, agentic AI. The landscape is evolving beyond simple question-and-answer setups to autonomous agents capable of performing complex, goal-oriented tasks. AI 3 lays down the foundational tech needed for these advancements, such as support for the Model Context Protocol (MCP) and Llama Stack, accelerating the deployment and scaling of AI agents across varied environments.

Then comes the operational aspect. AI 3 allows IT teams to morph into internal AI hubs, offering services like GPU-as-a-service. This maximizes the utility of expensive hardware and ensures it’s available across all teams. Similarly, model-as-a-service enables a centralized, curated catalog of approved models, which standardizes and streamlines AI efforts across the board.

Red Hat AI 3 offers not just a toolkit but a comprehensive ecosystem that supports cost-effective scaling, simplifies next-generation AI deployment, and centralizes operational control. It stands as a foundation that not only protects your existing investments but also empowers you to run any model on any hardware accelerator, spread across any segment of your hybrid cloud setup.

In essence, Red Hat AI 3 is about transforming your AI strategies from mere experimentation to robust, enterprise-scale innovations. It’s about ensuring that when you choose the adventurous path of AI, you’re equipped not just with a robust off-roader but also with a detailed map and a versatile toolkit to help navigate the tough terrain.

Navigating the AI terrain requires more than just enthusiasm; it demands robust, scalable, and flexible solutions like Red Hat AI 3 that understand and address the core needs of cost, complexity, and control. As we move towards more integrated, intelligent systems, having a platform that adapts and scales will undoubtedly be a game changer.


Modernizing Authentication for Legacy Visual Studio Clients


As part of our ongoing commitment to security and modernization, we’re updating outdated authentication mechanisms used by clients that rely on our older Visual Studio client libraries.

For full details on all known impacted clients, refer to the official announcement we made in April 2024: End of Support for Microsoft products reliant on older Azure DevOps and Visual Studio authentication.

To minimize disruption from removing these legacy tokens, we’ve spent the past few months seamlessly transitioning them to Entra-backed authentication where possible. This change enhances security and aligns with current identity standards—but it may also result in more frequent interactive reauthentication prompts, due to the 1-hour lifetime of Entra tokens.

If you’re still experiencing authentication issues, we strongly recommend upgrading any client or tool that relies on legacy tokens to a supported version. As a reminder, many of these older products are past end-of-support, so we’re unable to maintain compatibility with deprecated in-product authentication flows. Upgrading to later versions that use modern authentication also brings additional security and product improvements.
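If you have scripts that still authenticate with legacy tokens, one way to move them to Entra-backed authentication is to mint a short-lived Entra access token yourself. A minimal sketch using the Azure CLI (499b84ac-1321-427f-aa17-267ca6975798 is the well-known Azure DevOps resource ID; YOUR_ORG is a placeholder):

TOKEN=$(az account get-access-token \
  --resource 499b84ac-1321-427f-aa17-267ca6975798 \
  --query accessToken --output tsv)

# Call an Azure DevOps REST API with the Bearer token
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://dev.azure.com/YOUR_ORG/_apis/projects?api-version=7.1"

Because Entra access tokens expire after roughly an hour, long-running automation should re-acquire the token rather than cache it.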

The post Modernizing Authentication for Legacy Visual Studio Clients appeared first on Azure DevOps Blog.


Choose how you search and stay organized with Firefox


At Mozilla, we build Firefox around one principle: putting you in control. With today’s release, we’re introducing new features that make browsing smarter and more personal while staying true to the values you care about most: privacy and choice.

A new option for search, still on your terms

Earlier this year, we gave you more choice in how you search by testing Perplexity, an AI-powered answer engine, as a search option on Firefox. Now, after positive feedback, we’re making it a fixture and rolling it out to more desktop users. Perplexity provides conversational answers with citations, so you can validate information without digging through pages of results.

This addition reflects our shared commitment to choice: You decide when to use an AI answer engine, or if you want to use it at all. Available globally, Perplexity can be found in the unified search button in the address bar. We’ll be bringing Perplexity to mobile in the coming months. And as always, privacy matters – Perplexity maintains strict prohibitions against selling or sharing personal data.

Organize your life with profiles

At the beginning of the year, we started testing profiles — a way to create and switch between different browsing setups. After months of gradual rollout and community feedback, profiles are now available to everyone.

Create and switch between different browsing setups

Profiles let you keep work tabs distinct from personal browsing, or dedicate a setup to testing extensions or managing a specific project. Each profile runs independently, giving you flexibility and focus. Feedback from students, professionals and contributors helped us refine this feature into the version you see today.

Discover more with visual search

In September, we announced visual search on Mozilla Connect and began rolling it out for testing. Powered by Google Lens, it lets you search what you see with a simple right-click on any image.

Search what you see with a simple right-click on an image

You can:

  • Find similar products, places or objects 
  • Copy, translate or search text from images
  • Get inspiration for learning, travel or research

This desktop-only feature makes searching more intuitive and curiosity-driven. For now, it requires Google as your default search engine. Tell us what you think. Your feedback will guide where visual search appears next, from the address bar to mobile.

Evolving to meet your needs

Today’s release brings more ways to browse on your terms — from smarter search with Perplexity, to profiles that let you separate work from play, to visual search.

Each of these features reflects what matters most to us: putting you in control of your online experience and building alongside the community that inspires Firefox. With your feedback, we’ll keep shaping a browser that not only keeps pace with the future of the web but also stays true to the open values you trust.

We’re excited to see how you use what’s new, and can’t wait to share what’s next.


The post Choose how you search and stay organized with Firefox appeared first on The Mozilla Blog.


Can Arduino Teach a Tech Giant How To Win Over Developers?


The initial reaction from developers and makers to the news that Qualcomm is buying Arduino, the open source electronic prototyping platform, has been less than positive. Qualcomm maintains that Arduino will remain an open ecosystem and the company will be an independent subsidiary that works with multiple silicon vendors — just with more reach now.

The concern is that Qualcomm might fail to invest in Arduino or misunderstand its strengths. But ironically, that’s exactly why Qualcomm needs Arduino: to help it work better with developers and makers.

Qualcomm’s Strategic Play for the Developer Community

Although Arduino has around 30,000 business customers for its industrial and enterprise Pro boards, what Qualcomm is after is its worldwide community of over 33 million makers and developers — not to mention its pervasive presence in hardware startups, which use Arduino for everything from prototyping to running test systems in their labs. The idea might be to upsell these existing users to more powerful compute, especially for edge AI and robotics.

Arduino itself has a somewhat tangled and occasionally contentious history as a company. It’s a long way from the plucky academic startup, having gone through a fractious disagreement about trademarks and manufacturing rights and raised $54 million in VC funding since 2022. But continuing to open source tools and specifications takes a lot of resources.

The company never quite managed to create the promised Arduino Foundation, which was supposed to handle governance of the Arduino IDE — software that’s been the gateway to making embedded systems programming accessible to a generation of developers. So can Arduino keep its independence, stay open, and teach Qualcomm how to understand and support developers?

“Qualcomm’s acquisition of Arduino, following its earlier integrations of Foundries.io and Edge Impulse, signals a strategic shift toward empowering developers with a self-service approach,” senior vice president of engineering Leendert van Doorn told The New Stack.

“Specifically, for the Arduino UNO Q board, all source code is openly available and integrated into upstream repositories, allowing developers to build and customize software independently of Qualcomm. This move establishes a new standard for future Linux-based products from the company.”

Culture changes like this don’t just make Qualcomm easier for developers to work with; they also make it more likely that Arduino will stay truly independent. So far that’s been the case for Foundries.io (which Qualcomm bought last year), and Edge Impulse still supports multiple hardware vendors. But as with all culture changes, Qualcomm won’t become more open overnight.

“Qualcomm’s business is predicated on selling to customers buying in volume, so this is certainly a change,” RedMonk cofounder and analyst James Governor told us. “It will be interesting to see whether and how Qualcomm sustains Arduino’s open source ethos, and whether it can help Arduino capture new markets — namely robotics. Arduino is currently very much skewed to the hobbyist market, however.”

That might require a balancing act, Governor suggested: can Qualcomm “manage to be both hands off enough not to mess up what it’s acquiring, and yet hands on enough to reposition Arduino as a prototyping platform for robotics and industrial manufacturing companies”?

Bringing Two Worlds Together With the UNO Q Board

Arduino and Qualcomm didn’t just announce the acquisition; they’ve already worked together to build a $44 board called the UNO Q. That’s a fairly typical Arm SBC (a single-board computer that integrates CPU, GPU, RAM and storage for embedded applications like IoT, industrial controllers and robotics) with a budget Qualcomm Dragonwing processor running full Debian Linux. But it also has an Arduino microcontroller on the back for working with the sensors and actuators you need for actually building the kind of robotics and “everyday IoT devices” Qualcomm recommends the Dragonwing for, with an Arduino bridge library handling the communication between them.

This isn’t the first attempt at putting Arduino and Linux on the same system, but it might be the first one to succeed. The Arduino Yún (which used a fairly obscure Linux distro) and the TRE (cancelled due to lawsuits) didn’t have much of an impact, and Arduino’s Portenta X8 is firmly aimed at its industrial customers.

Combining sensors, controls and an AI vision model that works with them is becoming common in industrial IoT. But makers who need their device to have a complex user interface, data storage or something as demanding as on-board AI, as well as reading sensors and controlling motors or other electronics, have usually had to integrate the two systems themselves, which typically means extra components for connectivity.

That’s something Arduino has wanted to address for a while. It’s hoping to deliver a product with the power of a Raspberry Pi in the classic UNO form factor (and pin layout) — familiar to existing Arduino users — and compatible with the existing shields that add features and connectivity to Arduino boards. The price is certainly comparable to a high-end Pi; the performance might not be, although Qualcomm is claiming the systems will be ideal for running local AI models (you’re also getting the real-time control built in).

The New App Lab Development Environment

That means you need more than just the Arduino IDE to work with the UNO Q, so there’s a new (also open source) App Lab development environment that supports Arduino, C++, Python, Linux and AI workflows with libraries and modular components called ‘bricks’, without developers needing to think about complexities like handling Docker containers. If you want to build a smart lock that runs image recognition in Python on Linux with a USB webcam and then has the Arduino drive the motor that unlocks the door when it recognizes you, App Lab gives you one place to do that.
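App Lab’s actual bridge API isn’t detailed here, but the underlying pattern is simple: the Linux side makes the decision, then tells the microcontroller to actuate. Here’s a minimal Python sketch of that pattern, assuming a pyserial connection on a typical /dev/ttyACM0 device node (an illustration only, not App Lab’s real bridge library):

import serial  # pyserial; the matching Arduino sketch would parse these commands

def set_lock(recognized: bool) -> None:
    # Open the USB-serial link to the MCU and send a one-line command.
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
        port.write(b"UNLOCK\n" if recognized else b"LOCK\n")

set_lock(recognized=True)  # e.g. after the vision model matches an enrolled face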

The UNO Q gives you a whole development stack; image credit: Qualcomm

A clever twist: although you can run App Lab on your usual computer, it’s also preinstalled on the UNO Q. Just plug in a screen and keyboard, and you can use Debian running on the Dragonwing to program the Arduino MCU.

Arduino was already working with Edge Impulse, another recent Qualcomm acquisition, which makes it easy to go from sensor data to a deployed AI model. App Lab includes several of its pre-built AI models that run on the UNO Q — including object detection, audio classification like recognizing a ‘wake word’ (if you connect a microphone), computer vision (just plug in a USB webcam) and anomaly detection. The Dragonwing’s GPU supports OpenCL, and developers can upload or train their own models using App Lab or the standard Edge Impulse CLI workflow.

App Lab also uses Arduino Cloud for device lifecycle management and remote updates. And if you want to use custom Debian or Yocto images, App Lab uses another recent Qualcomm acquisition, FoundriesFactory, which offers a SaaS platform for open source embedded development (similar to AWS IoT or Azure Sphere).

How Arduino Is Opening Up Qualcomm’s Ecosystem

Arduino was the first system that made it easy for any developer to get started with embedded systems, and it’s the standard for prototyping, but it has a lot of competition now: Adafruit with CircuitPython, the PlatformIO IDE that works with multiple hardware options, Espressif’s ESP boards, and pre-certified Matter stacks from big-name vendors like Nordic, NXP and Silicon Labs that let developers write applications and treat the hardware as just a platform. Qualcomm’s backing may take the pressure off a little, as well as allowing Arduino to expand into the now-obligatory AI areas.

Arduino doesn’t have as strong a reputation in professional manufacturing as it does in the maker market, particularly in devices that need compliance and interference testing. Qualcomm’s experience in phones and automotive may give it a leg up there.

The idea that a startup could go from prototyping with a maker board to getting runs of a custom design that integrates everything with the same supplier isn’t always realistic, but Qualcomm is likely betting that hardware startups that start with its chip in a prototype will stick with it for the AI half of a device, the part that can’t be replaced by a custom logic board.

To appeal to those potential joint customers, Qualcomm needs to learn to better support developers; and that’s an area where Arduino has a lot to teach them.

What developers have typically gotten from Qualcomm in the past are evaluation and developer kits, often with specific hardware accessories for prototyping devices, or modules and reference designs — sometimes offered in conjunction with platform vendors like Microsoft. That’s a good fit for the device manufacturers that have been Qualcomm’s typical customers, but it hasn’t always worked as well for more general developers.

A decade ago it worked with Arrow, which made the DragonBoard — the first development board with Snapdragon chips preloaded with Android, aimed at IoT and embedded developers. That didn’t make much of an impression on the market. More recently, the Snapdragon X Elite Dev Kit it designed with Microsoft — to give developers a desktop box for building apps for Arm-based Copilot+ laptops — was repeatedly delayed and then abruptly cancelled. Qualcomm said it didn’t meet its usual standards and Microsoft suggested that with all the delays, developers could already buy more powerful laptops.

Arduino Changing How Qualcomm Delivers Hardware and Software

The UNO Q is a much more open-ended proposition; no signing up to terms and conditions just to see information about the hardware, and you can just order it from the Arduino store like any other board. It’s open source hardware with the usual Hardware Abstraction Layer, so code can be portable between different boards. The schematics, pinout and CAD files are already available, so in theory other manufacturers can produce compatible boards (although they would have to be Qualcomm partners to get the Dragonwing processors, and might need to order in unfeasibly large numbers).

That’s the first sign that Arduino is already changing the way Qualcomm delivers both hardware and software. Typically, Qualcomm offers closed-source SDKs — like the Qualcomm Neural Processing SDK, whose late delivery for Windows meant Arm-based Copilot+ notebooks were on sale first but not useful to developers until months later — with support for common open source projects.

And while Qualcomm doesn’t have the same track record for sustained open source contributions as Arduino, it’s been ramping up its upstream contributions to Linux, Mesa, U-boot and open source AI projects that can use its chips.

What runs on the Dragonwing is standard Debian Linux, chosen to appeal to developers (for prototyping if not for production use). In a first for Qualcomm, the project was built on upstream Debian, continually rebasing the latest kernel and submitting their patches as they went along, which means the patch quality has to be good enough to get accepted upstream while the project is in progress, rather than waiting until the end and hoping it happens quickly.

That’s a developer-friendly approach that’s on track to become the norm for all Qualcomm’s Linux-enabled devices, where developers don’t need to ask Qualcomm for information, let alone permission: they can just pick it up and make any changes they need.

The post Can Arduino Teach a Tech Giant How To Win Over Developers? appeared first on The New Stack.


Build a Multi-Agent System in 5 Minutes with cagent


Models are advancing quickly. GPT-5, Claude Sonnet, Gemini. Each release gives us more capabilities. But most real work isn’t solved by a single model.

Developers are realizing they need a system of agents: different types of agents working together to accomplish more complex tasks. For example, a researcher to find information, a writer to summarize, a planner to coordinate, and a reviewer to check accuracy.

The challenge is that today, building a multi-agent system is harder than it should be. Context doesn’t flow cleanly between agents. Tools require custom integration. Sharing with a teammate means sending instructions and hoping they can re-create your setup.

That’s the problem cagent solves.

In this blog, we’ll walk you through the basics, how to create a multi-agent AI system in minutes, and how cagent makes this possible. 

What’s a multi-agent system?

A multi-agent system is a coordinated group of AI agents that collaborate to complete complex tasks. Using cagent, you can build and run these systems declaratively, no complex wiring or reconfiguration needed.

Meet cagent: The best (and open source) way to build multi-agent systems

Figure 1: cagent workflow for multi-agent orchestration. 

cagent is an open-source tool for building agents and a part of Docker’s growing ecosystem of AI tools.

Instead of writing glue code to wire up models, tools, and workflows, describe an agent (or a team of agents) in a single YAML file:

  • Which model the agent uses (OpenAI, Anthropic, Gemini, or a local one)
  • What its role or instructions are
  • Which tools it can use (like GitHub, search, or the filesystem)
  • And, if needed, which sub-agents it delegates to

This turns agents into portable, reproducible artifacts you can run anywhere and share with anyone. 
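For illustration, here’s about the smallest useful agent file using those fields: a single root agent with one MCP tool (the model and tool references mirror the ones in the full example below).

version: "2"

agents:
  root:
    model: openai/gpt-5-mini
    instruction: Answer questions briefly and cite your sources.
    toolsets:
      - type: mcp
        ref: docker:duckduckgo

Save it as agent.yaml and start it with cagent run agent.yaml.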

Multi-agent challenges that cagent is solving

Create, run, and share multi-agent AI systems more easily with cagent.

  • Orchestrate agents more easily – Define roles and delegation to sub-agents; cagent manages calls and context.
  • Let agents use tools with guardrails – Grant capabilities with MCP: search, GitHub, files, databases. Each agent gets only the tools you list and is auditable.
  • Use (and swap) models – OpenAI, Anthropic, Gemini, or local models through Docker Model Runner. Swap providers without rewriting your system.
  • Treat agents like artifacts – Package, version, and share agents like containers.

How to build a multi-agent system with Docker cagent

Here’s what that looks like in practice.

Step 1: Define your multi-agent system

version: "2"

agents:
  root:
    model: anthropic/claude-sonnet-4-0
    instruction: |
      Break down a user request.
      Ask the researcher to gather facts, then pass them to the writer.
    sub_agents: ["researcher", "writer"]

  researcher:
    model: openai/gpt-5-mini
    description: Agent to research and gather information.
    instruction: Collect sources and return bullet points with links.
    toolsets:
      - type: mcp
        ref: docker:duckduckgo

  writer:
    model: dmr/ai/qwen3
    description: Agent to summarize notes.
    instruction: Write a concise, clear summary from the researcher’s notes.

Step 2: Run the YAML file

cagent run team.yaml

The coordinator delegates, the researcher gathers, and the writer drafts. You now have a functioning team of agents.

Step 3: Share it on Docker Hub

cagent push ./team.yaml org/research-writer

Now, anyone on your team can run the exact same setup with:

cagent run docker.io/org/research-writer

That’s a full multi-agent workflow, built and shared in under 5 minutes.

First principles: Why cagent works

These principles keep cagent an easy-to-use and customizable multi-agent runtime to orchestrate AI agents.

  • Declarative > imperative. Multi-agent systems are mostly wiring: roles, tools, and topology. YAML keeps that wiring declarative, making it easy to define, read, and review.
  • Agents as artifacts. Agents become portable artifacts you can pull, pin, and trust.
  • Small surface area. A thin runtime that does one job well: coordinate agents.

What developers are building with cagent

Developers are already exploring different multi-agent use cases with cagent. Here are some examples:

1. PR and issue triaging

  • Collector reads PRs/issues, labels, failing checks
  • Writer drafts comments or changelogs
  • Coordinator enforces rules, routes edge cases

2. Research summarizing

  • Researcher finds and cites sources
  • Writer produces a clean summary
  • Reviewer checks for hallucinations and tone

3. Knowledge routing

  • Router classifies requests
  • KB agent queries internal docs
  • Redactor strips PII before escalation

Each one starts the same way: a YAML file and an idea. And they can be pushed to a registry and run by anyone.
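As a sketch of the first use case, a triage team might look something like this (the GitHub MCP server reference is an assumption; substitute whichever MCP server you use):

version: "2"

agents:
  root:
    model: anthropic/claude-sonnet-4-0
    instruction: |
      Triage incoming pull requests. Ask the collector for details,
      then have the writer draft a comment. Route edge cases to a human.
    sub_agents: ["collector", "writer"]

  collector:
    model: openai/gpt-5-mini
    description: Agent that reads PRs, issues, labels and failing checks.
    instruction: Summarize the PR diff, labels and CI status as bullet points.
    toolsets:
      - type: mcp
        ref: docker:github-official # assumed name; swap in your GitHub MCP server

  writer:
    model: dmr/ai/qwen3
    description: Agent that drafts comments and changelog entries.
    instruction: Draft a specific, courteous review comment from the notes.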

Get started

cagent gives you the fastest path forward to build multi-agent systems. It’s open-source, easy to use, and built for the way developers already work. Define your agents, run them locally, and share them, all in a few lines of YAML.

YAML in, agents out.

Run the following to get started:

brew install cagent
cagent new
cagent run agent.yaml



Join Us in Rebooting the Docker Model Runner Community!


We’re thrilled to announce that we’re breathing new life into the Docker Model Runner community, and we want you to be a part of it! Our goal is to make it easier than ever for you to contribute, collaborate, and help shape the future of running AI models with Docker.

From a Limited Beta to a Universe of Possibilities

When we first announced Docker Model Runner, it was in its beta phase, exclusively available on Docker Desktop and limited to Apple and Nvidia hardware. We received a ton of valuable feedback, and we’ve been hard at work making it more accessible and powerful.

Today, we’re proud to say that Docker Model Runner is now Generally Available (GA) and can be used in all versions of Docker! But that’s not all. We’ve added Vulkan support, which means you can now run your models on virtually any GPU. This is a huge leap forward, and it’s all thanks to the incredible potential we see in this project and the community that surrounds it.
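If you haven’t tried it since the beta, getting a model running now takes two commands (the model name below is just an example from Docker Hub’s ai namespace):

docker model pull ai/smollm2
docker model run ai/smollm2 "Give me a one-sentence fun fact about whales."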

Making Contributions a Breeze

We’ve listened to your feedback about the contribution process, and we’ve made some significant changes to make it as smooth as possible.

To start, we’ve consolidated all the repositories into a single, unified home. This makes it much easier to find everything you need in one place.


We have also invested a lot of effort in updating our documentation for contributors. Whether you’re a seasoned open-source veteran or a first-time contributor, you’ll find the information you need to get started.

Your Mission, Should You Choose to Accept It

The success of Docker Model Runner depends on you, our amazing community. We’re calling on you to help us make this project the best it can be. Here’s how you can get involved:

  • Star our repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Fork and Contribute: Have an idea for a new feature or a bug fix? Fork the repository, make your changes, and submit a pull request. We’re excited to see what you come up with!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

