Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Sergio Villar: Implementing WebXR in WebKit for GTK and WPE

1 Share

Since 2022, my main focus has been working on the Wolvic browser, still the only open source WebXR-capable browser for Android/AOSP devices (Meta, Pico, Huawei, Lenovo, Lynx, HTC…) out there. That’s an effort that continues to this day (although to a much lesser extent nowadays). In early 2025, as a consequence of all that work in XR on the web, an opportunity emerged to implement WebXR support in WebKit for the GTK and WPE ports, and we decided to take it.

Read the whole story
alvinashcraft
59 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Managed OpenClaw bids to kill hidden token tax on AI agents

1 Share

Featherless is a serverless platform specialist that provides API-based access to open-source AI models via a supporting infrastructure. The concept is simple: Developers can run AI models without having to shoulder server management responsibilities. The company on Tuesday released Managed OpenClaw, a managed environment for open-source AI agents.

The new service offers developers a secure, sandboxed runtime with bundled inference (in which the cost of AI models is covered by a flat monthly subscription fee rather than charged per token), a move the company claims helps eliminate infrastructure complexity and the unpredictable “hidden tax” of running autonomous agents.

The launch marks what Featherless calls a direct challenge to the “proprietary monopolies currently gatekeeping” agentic technology.

How the agentic taxation bubble grows

As AI use now progresses from home-user chat sessions to enterprise-level autonomous background tasks that run business processes, AI agents need to browse the web, execute code, and manage file structures.

The sum total of these actions means that agents can consume millions of tokens daily. When an agentic service needs to scale (due to spikes in user adoption, or because an agent spawns multiple sub-agents and recurses through a complex reasoning task), a developer may experience so-called token anxiety, and the tax bill quickly mounts. Other token-related anxieties arise when concurrent agents get slow or erroneous results from an external API call; each recovery-loop attempt consumes more tokens.

Uncle Sam lies in wait

With open source autonomous AI agent OpenClaw now ranking as the world’s fastest-growing open-source agent project, the agentic Uncle Sam taxman may already be throwing another log on the fire.

According to a Bain technology report, agentic workflows consume 20-30x more tokens per interaction than standard chat. Featherless says this can easily cause monthly bills to unexpectedly reach thousands of dollars. The company claims that the predictability of its Managed OpenClaw service eliminates this financial risk and enables developers to run a high-performance agent on their own terms, even when substantial DevOps resources are not available to oversee infrastructure requirements.
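Bain's 20–30x multiplier is easy to sanity-check with rough arithmetic. The figures below (per-token price, tokens per interaction, interaction volume) are illustrative assumptions, not Featherless or Bain numbers; a minimal sketch comparing metered and agentic workloads:

```python
# Back-of-envelope comparison of per-token billing for a chat workload
# vs. an agentic workload. All numbers are illustrative assumptions.

PRICE_PER_MILLION_TOKENS = 3.00   # USD, hypothetical blended rate
CHAT_TOKENS = 2_000               # tokens per ordinary chat interaction
AGENT_MULTIPLIER = 25             # midpoint of Bain's reported 20-30x
INTERACTIONS_PER_DAY = 500
DAYS_PER_MONTH = 30

def monthly_token_cost(tokens_per_interaction: int) -> float:
    """Monthly bill if every token is metered."""
    total_tokens = tokens_per_interaction * INTERACTIONS_PER_DAY * DAYS_PER_MONTH
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

chat_bill = monthly_token_cost(CHAT_TOKENS)
agent_bill = monthly_token_cost(CHAT_TOKENS * AGENT_MULTIPLIER)

print(f"chat workload:  ${chat_bill:,.2f}/month")
print(f"agent workload: ${agent_bill:,.2f}/month")
```

Under these assumptions the same interaction volume, run agentically, turns a $90 monthly bill into $2,250, which is the unpredictability a flat subscription is meant to remove.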

Secure sandboxing slog

As reported in The New Stack, OpenClaw has amassed over 250k GitHub stars and over 50k forks faster than any other software project. Featherless says that despite this, most users still struggle with the underlying complexity of infrastructure management and secure sandboxing.

In terms of the mechanics at work here, Featherless has deployed a security-hardened version of the OpenClaw engine, powered by Daytona, an open source secure infrastructure Development Environment Manager (DEM) toolset. This security layer uses multi-layer container isolation and sandboxed runtimes built for durability, unlike the more ephemeral sessions found in consumer-level tools.

Managed OpenClaw environments operate 24/7 and are supported by shared persistent storage. Agents can manage complex, multi-day workflows that remain active and uninterrupted even after the user closes their browser.

“If developers don’t provide an agent with a standardized, secure and isolated workspace, then we’re giving a robot a power tool in a crowded room with no off switch.”

Eugene Cheah, CEO and co-founder of Featherless, says that developers looking to run OpenClaw in the current market either have to surrender to a closed monopoly or spend weeks on DevOps to self-host. Managed OpenClaw is positioned as the middle ground, providing an always-on and secure home for the open-source ecosystem.

“A production-ready self-hosted setup typically requires a developer to juggle at least eight distinct infrastructure concerns, from container orchestration and GPU provisioning to custom security sandboxing and durable storage. Most teams end up managing five or more separate vendor relationships just to keep a single agent online. It’s a multi-week DevOps project before the first line of AI logic is even written,” says Cheah.

Cheah says his team has now collapsed those eight hurdles into a single subscription. By bundling hardened compute with integrated inference, he promises that Featherless is enabling independent developers and startups to deploy durable, 24/7 agents that genuinely compete with the largest labs, without the infrastructure costs or vendor lock-in.

Giving a robot a power tool

Ivan Burazin, CEO and founder of Daytona, tells The New Stack that for agents to be truly useful, they need more than just intelligence; they need a place to live and work. He says that if developers don’t provide an agent with a standardized, secure, and isolated workspace, then we’re giving a robot a power tool in a crowded room, with no off switch.

While human developers can often adapt to inconsistent environments, an AI agent requires absolute predictability to function. Burazin argues that Managed OpenClaw is “exactly what the ecosystem needs” today: a way to abstract away the massive DevOps burden of agent infrastructure so that teams can focus on building, not just managing runtimes.

“Most people focus on the intelligence of the agent, but the real bottleneck is the environment it runs in.”

“Most people focus on the intelligence of the agent, but the real bottleneck is the environment it runs in. Agents need a secure place to execute code, access files, and run long-lived tasks safely. Without that isolation layer, you’re effectively letting autonomous software operate directly on production systems, which is a huge risk. Managed OpenClaw shows why sandbox infrastructure is becoming a foundational layer for agent systems, allowing developers to run powerful agents without having to build and maintain the underlying runtime themselves,” Burazin says.

With this launch, Featherless introduces a compute environment with 1 vCPU and 2–4 GB of RAM per sandboxed instance, orchestrated via Daytona. Managed OpenClaw includes inference directly in the subscription to create its predictable billing offering. The service allows developers to toggle between open source models, including Qwen 3.5, Minimax M2.5, and Kimi K2.5, without the friction of per-token costs or separate vendor management. Access to more than 30,000 models will be made available in the near future.

The post Managed OpenClaw bids to kill hidden token tax on AI agents appeared first on The New Stack.


Stop Closing the Door. Fix the House.

1 Share
The following article originally appeared on Angie Jones’s website and is being republished here with the author’s permission.

I’ve been seeing more and more open source maintainers throwing up their hands over AI-generated pull requests, some going so far as to stop accepting PRs from external contributors.

If you’re an open source maintainer, you’ve felt this pain. We all have. It’s frustrating reviewing PRs that not only ignore the project’s coding conventions but are also riddled with AI slop.

But yo, what are we doing?! Closing the door on contributors isn’t the answer. Open source maintainers don’t want to hear this, but this is the way people code now, and you need to do your part to prepare your repo for AI coding assistants.

I’m a maintainer on goose which has more than 300 external contributors. We felt this frustration early on, but instead of pushing well-meaning contributors away, we did the work to help them contribute with AI responsibly.

1. Tell humans how to use AI on your project

We created a HOWTOAI.md file as a straightforward guide for contributors on how to use AI tools responsibly when working on our codebase. It covers things like:

  • What AI is good for (boilerplate, tests, docs, refactoring) and what it’s not (security critical code, architectural changes, code you don’t understand)
  • The expectation that you are accountable for every line you submit, AI-generated or not
  • How to validate AI output before opening a PR: build it, test it, lint it, understand it
  • Being transparent about AI usage in your PRs

This welcomes AI PRs but also sets clear expectations. Most contributors want to do the right thing; they just need to know what the right thing is.

And while you’re at it, take a fresh look at your CONTRIBUTING.md too. A lot of the problems people blame on AI are actually problems that always existed, AI just amplified them. Be specific. Don’t just say “follow the code style”, say what the code style is. Don’t just say “add tests”, show what a good test looks like in your project. The better your docs are, the better both humans and AI agents will perform.

2. Tell the agents how to work on your project

Contributors aren’t the only ones who need instructions. The AI agents do too.

We have an AGENTS.md file that AI coding agents can read to understand our project conventions. It includes the project structure, build commands, test commands, linting steps, coding rules, and explicit “never do this” guardrails.
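The shape of such a file is simple. Here is a condensed, hypothetical sketch (goose's actual AGENTS.md differs; the section names, paths, and commands below are placeholders for a Rust project):

```markdown
# AGENTS.md (illustrative sketch, not goose's actual file)

## Project structure
- `crates/` — core Rust crates, one per subsystem
- `docs/` — user-facing documentation

## Build and test
- Build: `cargo build`
- Test: `cargo test`
- Lint: `cargo clippy -- -D warnings`

## Coding rules
- Follow the existing module layout; no new top-level crates without discussion
- Every public function gets a doc comment and a test

## Never do this
- Never commit secrets, tokens, or `.env` files
- Never modify CI workflows or release scripts
- Never disable or skip existing tests to make a PR pass
```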

When someone points their AI agent at our repo, the agent picks up these conventions automatically. It knows what to do and how to do it, what not to touch, how the project is structured, and how to run tests to check their work.

You can’t complain that AI-generated PRs don’t follow your conventions if you never told the AI what your conventions are.

3. Use AI to review AI

Investing in an AI code reviewer as the first touchpoint for incoming PRs has been a game changer.

I already know what you’re thinking… they suck too. LOL, fair. But again, you have to guide the AI. We added custom instructions so the AI code reviewer knows what we care about.

We told it our priority areas: security, correctness, architecture patterns. We told it what to skip: style and formatting issues that CI already catches. We told it to only comment when it has high confidence there’s a real issue, not just nitpick for the sake of it.
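As a concrete illustration, custom reviewer instructions of that kind might look like the following (a hypothetical sketch; the exact file name and format depend on which review tool you use):

```markdown
# AI reviewer instructions (illustrative sketch)

## Priorities — comment on these
- Security: injection risks, unsafe deserialization, secrets in code
- Correctness: logic errors, unhandled failure paths, race conditions
- Architecture: violations of this repo's established patterns

## Skip — CI already enforces these
- Formatting, import order, naming style

## Confidence bar
- Only comment when you are highly confident the issue is real.
- Do not nitpick; silence is an acceptable review.
```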

Now, contributors get feedback before a maintainer ever looks at the PR. They can clean things up on their own. By the time it reaches us, the obvious stuff is already handled.

4. Have good tests

No, seriously. I’ve been telling y’all this for YEARS. Anyone who follows my work knows I’ve been on the test automation soapbox for a long time. And I need everyone to hear me when I say the importance of having a solid test suite has never been higher than it is right now.

Tests are your safety net against bad AI-generated code. Your test suite can catch breaking changes from contributors, human or AI.

Without good test coverage, you’re doing manual review on every PR, trying to reason about correctness in your head. That’s not sustainable with 5 contributors, let alone 50 of them, half of whom are using AI.

5. Automate the boring gatekeeping with CI

Your CI pipeline should also be doing the heavy lifting on quality checks so you don’t have to. Linting, formatting, type checking all should run automatically on every PR.

This isn’t new advice, but it matters more now. When you have clear, automated checks that run on every PR, you create an objective quality bar. The PR either passes or it doesn’t. Doesn’t matter if a human wrote it or an AI wrote it.

For example, in goose, we run a GitHub Action on any PR that involves reusable prompts or AI instructions to ensure they don’t contain prompt injections or anything else that’s sketchy.
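A check like that can be as simple as a workflow that scans changed prompt files for suspicious phrasing. The sketch below is hypothetical (goose's actual action is more thorough); the paths and grep patterns are placeholders:

```yaml
# .github/workflows/prompt-safety.yml — illustrative sketch
name: prompt-safety
on:
  pull_request:
    paths:
      - "prompts/**"
      - "**/AGENTS.md"

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan prompts for injection-style phrases
        run: |
          # Fail the check if any prompt file contains common
          # injection phrasing. Patterns here are placeholders.
          if grep -riE "ignore (all )?previous instructions|disregard the above|exfiltrate" prompts/; then
            echo "Suspicious phrasing found in prompt files."
            exit 1
          fi
```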

Think about what’s unique to your project and see if you can throw some CI checks at it to keep quality high.


I understand the impulse to lock things down, but y’all, we can’t give up on the thing that makes open source special.

Don’t close the door on your projects. Raise the bar, then give people (and their AI tools) the information they need to clear it.

On March 26, join Addy Osmani and Tim O’Reilly at AI Codecon: Software Craftsmanship in the Age of AI, where an all-star lineup of experts will go deeper into orchestration, agent coordination, and the new skills developers need to build excellent software that creates value for all participants. Sign up for free here.




Website Rebuilds, AI Tools, and UX in 2026

1 Share

This month, Paul and Marcus get into a tool that has made Paul cancel his Figma subscription, walk through how Paul has completely changed the way he approaches website rebuilds thanks to AI, and round things off with the latest thinking from Nielsen Norman Group on where UX is heading in 2026.

App of the Week: figr.design

Paul has been road-testing AI design tools as part of a workshop he ran on AI and UI, and after going through dozens of them, one stood out: figr.design.

What makes it work where others fall short? A few things. It lets you feed in a significant amount of context upfront, things like style guides, design systems, and personas, which means the output is far more tailored than the generic average you often get from AI design tools. Iteration is also genuinely fast. You can queue up a whole list of changes and it processes them all in one go, rather than making you wait between each tweak.

The prototypes it produces are more realistic than what you would typically get out of Figma. Text fields you can actually type in, accordion states that open and close, button states, fully responsive layouts. Not exactly revolutionary in theory, but refreshingly functional in practice. Export to Figma is available when you need it.

The main limitation is that you cannot manually adjust elements yourself. Everything goes through the conversational interface. Paul has also been looking at a tool called Inspector, which runs locally and connects to the Claude API so you pay as you go rather than a flat monthly token allocation. It has been a bit fiddly to set up but worth keeping an eye on.

For anyone regularly using Figma for wireframing and prototyping, it is worth giving figr.design a proper look. The shift Paul describes, from hunching over Figma to leaning back and having a conversation with the tool, is a fairly good summary of where this kind of work is heading.

Rebuilding a Website in 2026

Paul has fundamentally changed how he approaches website rebuilds, and the shift is largely down to AI making a genuinely hard problem, getting good content onto a website, a lot easier.

The old problem

Website rebuilds have traditionally meant migrating existing content into a new design. Which sounds fine until you remember that most of that content was written by subject matter experts who know their field but have never thought about writing for the web.

The result is pages that lecture rather than help, that bury the things users actually want to know, and that rarely arrive on time, because the content phase is almost always where projects stall.

Why things are different now

AI has changed three things meaningfully.

  • First, generating content is no longer the enormous manual effort it used to be.
  • Second, doing the research that informs good content, finding out what users actually ask, worry about, and need, is much simpler with tools like Perplexity.
  • Third, AI-powered search engines are pushing toward a more question-oriented approach to content anyway, which makes getting this right more important than it used to be.

How Paul works now

Here is the process Paul walks through for a rebuild project.

1. Online research

Using Perplexity, Paul researches the audience. For a well-known client, he'll ask specifically about them. For a smaller or niche client, he looks at the sector. He is looking for the questions people are asking, the tasks they are trying to complete, their objections, goals, and pain points. This takes about 10 minutes.

2. Personas

The research output goes into AI, which identifies patterns and segments it into a set of personas. A couple of hours of back and forth to get these right.

3. Company overview

Paul records his kickoff meeting with the client and points AI at the transcript. Out comes a clean summary of what the company does, its products and services, and how it talks about itself. An hour for the meeting, plus 10 minutes for the summary creation.

4. Top task analysis and information architecture

If time and budget allow, Paul runs a formal top task analysis, collecting and prioritizing the questions users most want answered. For card sorting, he uses UX Metrics. If there is no time for that, AI brainstorms the top tasks from the personas and company overview. Either way, those tasks get fed into an AI-generated information architecture.

5. Building out the IA

Paul builds the IA in the CMS or in Notion, assigning the relevant tasks and questions to each page. Stakeholders can see the structure and understand what each page is there to do before a word of copy is written.

6. Getting stakeholders to contribute

Rather than asking stakeholders to write content (a recipe for delays), Paul asks them to do two simpler things for each page: bullet-point answers to the questions assigned to that page, and any other talking points they want included. Bullets only. No pressure to write.

7. Writing the content with AI

This is where it all comes together. Paul sets up an AI project with four inputs:

  • A web copywriting best practice guide covering readability, structure, and scanning
  • A company-specific style guide built from existing brand materials
  • The audience personas
  • The company overview

For each page, he drops in the questions and stakeholder bullet points, and the AI drafts the content using all of that context. Paul recommends Claude for writing tasks. The result is copy that actually reflects the company's voice and addresses what users need, rather than generic filler.
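The mechanics of step 7 amount to prompt assembly: combining the four standing inputs with the per-page questions and bullets. A minimal sketch, assuming file names and prompt wording that are placeholders rather than Paul's actual setup:

```python
from pathlib import Path

# Standing project inputs, loaded once per project. The file names
# are placeholders for whatever documents you actually maintain.
PROJECT_INPUTS = [
    "copywriting_guide.md",   # web copywriting best practices
    "style_guide.md",         # company-specific voice and style
    "personas.md",            # audience personas from the research
    "company_overview.md",    # summary distilled from the kickoff call
]

def build_page_prompt(questions: list[str], bullets: list[str],
                      inputs_dir: Path = Path("inputs")) -> str:
    """Assemble one page-drafting prompt from the standing context
    plus the per-page questions and stakeholder bullet points."""
    context = "\n\n".join(
        (inputs_dir / name).read_text() for name in PROJECT_INPUTS
        if (inputs_dir / name).exists()
    )
    q_block = "\n".join(f"- {q}" for q in questions)
    b_block = "\n".join(f"- {b}" for b in bullets)
    return (
        f"{context}\n\n"
        "Draft the page below using the guides, personas, and company "
        "overview above. Answer every question, weave in the talking "
        "points, and write for scanning.\n\n"
        f"Questions this page must answer:\n{q_block}\n\n"
        f"Stakeholder talking points:\n{b_block}\n"
    )

prompt = build_page_prompt(
    questions=["How much does onboarding cost?"],
    bullets=["Flat fee, no hidden extras", "Typical onboarding: two weeks"],
)
```

The same assembled prompt is then handed to whichever model you prefer (Paul recommends Claude for writing tasks), one page at a time.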

8. Review and refinement

Stakeholders review the draft and leave comments, ideally directly in Notion where AI can read the page, take in the comments, and rewrite accordingly. One more pass by stakeholders and it is ready to go.

Paul has been using this approach on half a dozen projects and reckons you can work through a full site's worth of content in about a week (depending on size) once the setup is done. For clients, it is a service worth paying for because it takes the content burden off them while producing noticeably better results than migrating whatever was already there.

One thing Paul is careful to flag: this does not mean starting from absolute scratch every time. Old articles, compliance pages, event databases, templated content that just has to be there, all of that can still come across. The point is to treat migration as the exception rather than the default.

Read of the Week: State of UX 2026

The Nielsen Norman Group article Design Deeper to Differentiate confirmed, in Marcus's words, most of what Paul has been saying for the past year. Paul took this as further evidence he is always right!

A few of the key points from the article:

UX has stabilized after the 2023-24 downturn, but teams are leaner. UX practitioners are now expected to cover more ground and demonstrate business impact rather than just shipping deliverables.

AI fatigue has set in, both among designers tired of the "you're being replaced" narrative, and among users who have grown skeptical of AI features that add sparkle without actually improving anything. The article argues that trust is now the central design problem for AI-powered products, covering transparency, control, consistency, and what happens when things go wrong.

UI quality is becoming commoditized. If your value is primarily in making interfaces look good and work correctly, the ceiling on that work is dropping. Real differentiation lives in service design, content strategy, complete user flows, and the connective tissue that links everything together over time.

The hard-to-automate skills, taste, contextual understanding, critical thinking, and judgment, are where humans still add the most value. To thrive, the article suggests UX practitioners need to position themselves as strategic problem-solvers with a broad toolkit rather than deliverable-focused specialists doing what it calls "design theater."

Paul agreed with all of it. Marcus mostly agreed too, while noting that it must be genuinely difficult to be a UX specialist inside a large organization right now, particularly in teams that have cut so far back that one person is expected to cover the entire discipline. The answer, in Marcus's entirely unbiased view, is to hire Headscape!

Marcus' Joke

I stole a neck brace from the hospital. I feel kind of bad, but at least I can hold my head up high.

Find The Latest Show Notes





Download audio: https://cdn.simplecast.com/media/audio/transcoded/eea3ff50-d316-4ff7-b8db-24c157eb37ff/ae88e41b-a26d-4404-8e81-f97bca80d60d/episodes/audio/group/a0976448-8b16-43e1-bc9e-63e59b8bc080/group-item/7a98d412-2457-424a-8ab2-196398f34bcd/128_default_tc.mp3?aid=rss_feed&feed=XJ3MbVN3

Inference Engineering with Baseten's Philip Kiely

1 Share
From: Scott Hanselman
Duration: 32:53
Views: 59

This week on the show, Scott talks to Philip Kiely about his new book, Inference Engineering, a guide to becoming an expert in inference. It contains everything Philip has learned in four years of working at Baseten. The book is based on the hundreds of thousands of words of documentation, blog posts, and talks he's written on inference; interviews with dozens of experts from the Baseten engineering team; and countless conversations with customers and builders around the world.

https://www.baseten.co/inference-engineering/

Check out https://textcontrol.com for document editing, reporting, and PDF processing SDKs for .NET

00:00 Did you feel uncomfortable that you don't have a PhD in machine learning and deep learning?
01:03 Introduction to Philip Kiely and 'Inference Engineering'
02:09 Was that a conscious decision to do something like that?
07:11 Purpose and Audience of 'Inference Engineering'
14:37 Do you think about what can be done with AI, considering it can't currently write a good book on AI inference?
17:23 How much faster can AI performance get, and will it result in less energy consumption and lower costs?
24:14 AI Proofreading and Human Collaboration
31:28 Why is it difficult to get paper copies of the book listed on Amazon despite high demand?


BONUS Guardrails Over Processes—How to Scale Teams Without Killing Creativity With Prashanth Tondapu

1 Share

What actually slows down tech teams—lack of talent, or lack of ownership? In this episode, Prashanth Tondapu shares lessons from leading through global-scale failures, scaling from a small team to a 100-person company, and discovering why guardrails beat rigid processes when it comes to building teams that own outcomes and execute with discipline.

Diffusion of Accountability: When Everyone Is Responsible, Nobody Is

"Crisis is not the problem. Crisis is the one that uncovers the problem that has always existed."

 

Early in his career, Prashanth witnessed a large-scale failure at a major technology company—not because the team lacked talent, but because accountability had become diffused. When too many people are responsible for something, nobody is responsible. The team was brilliant individually, but there was no clear demarcation of who owned what outcome. On good days, everything worked. But when things went wrong, the buck stopped with no one; every person could delegate accountability to someone else. In this segment, we also refer to the concept of Extreme Ownership from the book by Jocko Willink.
Prashanth argues that an outcome can only come from 100% emotional commitment to a particular problem, and when five people share that commitment, each carries only 20%. That's where breakdowns happen.

The Leadership Design Problem: From Computers to People

"I was a developer who imagined that humans are also going to be as predictable as computers. Until 6 or 7 people, it works well because you can be everywhere. But as soon as we increased above 7, I was not able to be everywhere."

 

Prashanth's journey as a founder mirrors what many tech leaders experience at scale. Starting Innostax at 27 as a developer with no management experience, he initially treated people like predictable systems. Below seven people, it worked—he could be the hero founder, the catch-all. But beyond that threshold, he had to learn delegation, which meant learning to trust. First came the people-dependent phase, then the process-oriented phase with SOPs (Standard Operating Procedures) for everything—even how APIs should look. The SOPs made the team fast at execution, but their clients noticed something troubling: "Your guys do not even ask any questions." The rigid processes had suppressed the very creativity and critical thinking they needed. That feedback became the catalyst for the next evolution: becoming a people-first company.

Guardrails vs. Processes: Freeing Creativity Within Structure

"If something goes wrong, our guardrail is: we will just ask you one question—what was your intent behind doing this?"

 

Prashanth draws a sharp distinction between processes and guardrails. Processes tell you exactly what to do and how to do it—they create predictable execution but kill creativity. Guardrails define the boundaries within which people have freedom to be creative and solve problems their own way. At Innostax, guardrails take practical forms:

 

  • Time-on-task guardrails: If a task takes longer than expected, ask for help—don't rabbit-hole into it for three days

  • Don't be a hero: When friction appears with a client or a problem, escalate early rather than trying to solve everything alone

  • The intent review: When something goes wrong, instead of punishment, they ask three questions—was the intent right, was the approach right, and what was the outcome? If intent and approach were right but it still failed, that's the company's problem, not the individual's

 

This framework creates psychological safety while maintaining accountability. People know they won't be penalized for honest mistakes made with good intent, which means they surface problems early rather than hiding them.

Vision Elements and the People-First Company

"The outcome is not just what is expected, but outcome also consists of what is not expected. People come out in so many creative, great ways that they end up surprising you."

 

The shift to a people-first company meant replacing rigid SOPs with what Prashanth calls "vision elements"—broader directional guidance like "we are working for the client, we need to give the best for the client in the resources that we have." This gives teams a larger sandbox to work in while guardrails prevent them from going too far off course. 

The daily rhythm includes team leads reviewing work summaries—not to micromanage, but to catch misalignment early and offer support. Prashanth emphasizes that guardrails must be created with emotional intelligence and detachment. If you create guardrails assuming you're also part of the problem, they'll be biased and ineffective. That's why he considers emotional intelligence the prerequisite skill for any leader designing team structures.

The Books That Changed Everything

"Whenever I was reading through the fixed mindset guy, it was like it was describing me. And that actually changed everything."

 

Prashanth recommends two foundational books for leaders building ownership-driven teams. First, Mindset by Carol Dweck—a book that cracked his own fixed mindset as a confident developer who thought he knew everything. Reading about the fixed mindset felt like reading his own biography, and that uncomfortable recognition opened him to listening more, seeking exposure to experts, and believing there were perspectives he hadn't encountered yet. Second, Emotional Intelligence by Daniel Goleman—because without mastering emotional intelligence, everything you hear feels personal, clouding your judgment and making you too close to the problem to design effective solutions for your team.

 

Self-reflection Question: Are you building guardrails that give your team freedom to be creative within clear boundaries, or are you still writing processes that tell people exactly what to do—and in the process, suppressing the very thinking you hired them for?

 

About Prashanth Tondapu

Prashanth Tondapu is Founder and CEO of Innostax and a veteran technology leader. He's led teams through high-stakes global incidents at McAfee and scaled disciplined delivery organizations worldwide. His work focuses on ownership, accountability, and designing teams for predictable, sustainable execution as complexity grows.

 

You can link with Prashanth Tondapu on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20260317_Prashanth_Tondapu_Tue.mp3?dest-id=246429