Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Gates Foundation will cut up to 500 positions by 2030 to help reach ‘ambitious goals’

The Gates Foundation headquarters in Seattle. (GeekWire Photo / Taylor Soper)

The Gates Foundation on Wednesday unveiled a record $9 billion budget for 2026, along with a plan to reduce its workforce by up to 500 positions over the next five years, about a fifth of its current headcount.

The foundation’s board approved a cap on operating expenses of no more than $1.25 billion annually — roughly 14% of its total budget — prompting the cuts and other cost controls to align internal spending with that new limit.

The Seattle-based foundation said headcount targets and timelines will be adjusted year by year, and that it will continue to hire selectively for roles deemed critical to advancing its mission.

The decision comes after the foundation announced last year that it would shut down by 2045.

The philanthropy is the world’s largest and has already disbursed $100 billion since its founding, helping save millions of lives with its focus on global health and other social initiatives.

Bill Gates, the Microsoft co-founder who helped launch the Gates Foundation in 2000, announced plans in May to give away $200 billion — including nearly all of his wealth — over the next two decades through the foundation.

“The foundation’s 2045 closure deadline gives us a once-in-a-generation opportunity to make transformative progress, but doing so requires us to focus relentlessly on the people we serve and the outcomes we want to deliver,” Mark Suzman, CEO of the Gates Foundation, said in a statement. “Ensuring as much of every dollar as possible flows toward impact is critical to achieving our ambitious goals to save and improve millions more lives over the next 20 years.”

The foundation had already begun ramping up its grant making, issuing $8.75 billion in 2025, and previously committed to distribute $9 billion this year. It has a $77 billion endowment.

This year the foundation will increase spending in priority areas, including maternal health, polio eradication, U.S. education, and vaccine development.

The increase in funding commitments comes amid Trump administration cuts to global foreign assistance, its shutdown of the U.S. Agency for International Development (USAID), and broader reductions in funding for health and scientific research.

In his annual letter released last week, Gates wrote that “the thing I am most upset about” is that the number of deaths of children under 5 years old increased in 2024 for the first time this century, which he traced to cuts in aid from rich countries.

“The next five years will be difficult as we try to get back on track and work to scale up new lifesaving tools,” he wrote. “Yet I remain optimistic about the long-term future. As hard as last year was, I don’t believe we will slide back into the Dark Ages. I believe that, within the next decade, we will not only get the world back on track but enter a new era of unprecedented progress.”

Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Digg launches its new Reddit rival to the public

Digg, a reboot of an earlier social news site, is now relaunching as a Reddit competitor focused on communities.

Get started with GitLab Duo Agent Platform: The complete guide


GitLab Duo Agent Platform is a new AI-powered solution that embeds multiple intelligent assistants ("agents") throughout your software development lifecycle. It serves as an orchestration layer where developers collaborate asynchronously with AI agents across DevSecOps, transforming linear workflows into dynamic, parallel processes.

Routine tasks, from code refactoring and security scans to research, can be delegated to specialized AI agents, freeing human developers to focus on solving complex problems and driving innovation.

The platform leverages GitLab's role as a central DevSecOps platform (encompassing code management, CI/CD pipelines, issue tracking, test results, security scans, and more) to provide these agents with complete project context, enabling them to contribute meaningfully while adhering to your team's standards and practices.

This comprehensive eight-part guide will take you from your first interaction to production-ready automation workflows with full customization.

💡 Join GitLab Transcend on February 10 to learn how agentic AI transforms software delivery. Hear from customers and discover how to jumpstart your own modernization journey. Register now.

Evolution from GitLab Duo Pro/Enterprise to Duo Agent Platform

GitLab Duo Agent Platform is an evolution, not a replacement of Duo Pro and Enterprise. It's a superset that moves from 1:1 developer-AI interactions to many-to-many team-agent collaboration.

  • Duo Pro enhanced individual developer productivity in the IDE with AI-powered code suggestions and chat.
  • Duo Enterprise expanded beyond coding to deliver comprehensive AI capabilities across the entire software development lifecycle. But it still primarily enabled 1:1 interaction between the user and an AI assistant — mostly a Q&A experience with one use case at a time.
  • Duo Agent Platform moves from 1:1 interactions to many-to-many team-agent collaboration, where specialized agents autonomously handle routine tasks across the software lifecycle.

The complete series

  • Part 1, "Introduction to GitLab Duo Agent Platform": Platform architecture, four ways to use agents, accessing agents and flows, first interactions, sessions, and model selection.
  • Part 2, "Getting started with GitLab Duo Agentic Chat": Accessing chat across Web UI and IDEs, model selection and switching, agent selection, common use cases, and troubleshooting.
  • Part 3, "Understanding agents: Foundational, custom, and external": Foundational agents (GitLab Duo, Planner, Security Analyst, Data Analyst), creating custom agents with system prompts, external agents setup, AGENTS.md customization, and choosing the right agent type.
  • Part 4, "Understanding flows: Multi-agent workflows": Introduction to foundational flows, creating custom YAML workflows, flow execution, multi-agent orchestration, and monitoring.
  • Part 5, "AI Catalog: Discover, create, and share agents and flows": Browsing and discovering agents and flows, enabling them in projects, creating and publishing your own, and managing visibility.
  • Part 6, "Monitor, manage, and automate AI workflows": Automate menu overview, monitoring sessions with detailed logs, setting up event-driven triggers, and managing AI workflows.
  • Part 7, "Model Context Protocol integration": MCP overview, GitLab as MCP client connecting to external tools, GitLab as MCP server for external AI tools, and configuration examples.
  • Part 8, "Customizing GitLab Duo Agent Platform": Custom chat rules, AGENTS.md configuration, system prompts for agents, agent tool configuration, MCP setup, and custom flow YAML configuration.

Key concepts reference

Core components

  • Duo Agentic Chat: The primary interface for agent interaction. It is available in the Web UI and IDEs, supports model selection, and maintains conversation history.
  • Agents: Specialized AI collaboration partners for specific tasks. Foundational agents are provided by GitLab (Planner, Security Analyst, etc.); custom agents are created by your team; external agents come from providers like Claude and OpenAI.
  • Flows: Multi-step workflows combining agents. Foundational flows are provided by GitLab (Developer, Fix CI/CD Pipeline, etc.); custom flows are user-defined workflows you create.
  • AI Catalog: The central repository for discovering, creating, and sharing. Browse and discover agents and flows, add them to your projects, and share them across your organization.
  • Automate Menu: The management hub for AI workflows. It covers sessions (flow activity logs), flows (multi-step workflows), agents (specialized AI assistants), and triggers (event-based automation).
  • Model Context Protocol (MCP): The external integration framework. As a client, GitLab Duo connects to external MCP servers (Jira, Slack, AWS, etc.); as a server, GitLab serves external AI tools (Claude Desktop, Cursor, etc.).

Essential terminology

  • Agent: A specialized AI assistant for specific tasks and for answering complex questions.
  • Foundational Agent: Pre-built agents created and maintained by GitLab (e.g., GitLab Duo, Planner, Security Analyst), available immediately with no setup.
  • Custom Agent: Agents you create with custom system prompts and tools for team-specific workflows, configured through project or group settings.
  • External Agent: External AI providers, such as Claude, OpenAI, and Google Gemini, integrated into the platform.
  • Flow: A combination of one or more agents working together to solve a complex problem.
  • Foundational Flow: Pre-built workflows from GitLab (Issue to MR, Fix Pipeline, Convert Jenkins, Software Development Flow), triggered via UI buttons or IDEs.
  • Custom Flow: YAML-defined workflows you create for team-specific automation, triggered by events or mentions.
  • Trigger: An event that automatically starts a flow (e.g., a mention or assignment).
  • Session: A record of agent or flow activity with complete logs and pipeline execution details.
  • System Prompt: Instructions defining an agent's behavior, expertise, and communication style.
  • Service Account: An account used by flows or external agents to perform GitLab operations with specific permissions.
  • MCP: Model Context Protocol, the framework for external integrations (connects to Jira, Slack, AWS, etc.).
  • AGENTS.md: An industry-standard file for customizing agent behavior at the user or workspace level.
  • Custom Rules: Rules that customize how GitLab Duo behaves in your IDE.
  • Tools: Capabilities agents use to interact with GitLab and external systems (e.g., creating issues and merge requests, running pipelines, analyzing code).
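
The AGENTS.md file mentioned above is plain Markdown that agents read for project-specific instructions. As a rough sketch, with hypothetical conventions invented for illustration (this is not an official GitLab template):

```markdown
# AGENTS.md

## Project conventions
- All services are written in Go; run gofmt before committing.
- Run `make test` before opening a merge request.

## Agent behavior
- Prefer small, reviewable merge requests.
- Never modify files under vendor/ or other generated code.
```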

Ready to get started?

Begin your journey with Part 1: Introduction to GitLab Duo Agent Platform to learn the platform fundamentals.

Feedback

We'd love to hear from you! Found an error? Have a suggestion?


Open Source Whamm: Use WebAssembly To Monitor and Fix Running Apps


Whamm is designed to let users instrument their WebAssembly (or Wasm) applications with its own instrumentation language, or to program instrumentation into their Wasm modules directly. With it, they can debug and monitor applications running inside WebAssembly modules.

If you already have Homebrew installed, it will update automatically as Whamm is installed. If you don't, this command downloads and installs Homebrew:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"


You can download the latest release for your platform from the Releases page, or build Whamm from source. To build it yourself, first clone the Whamm repo:

git clone https://github.com/ejrgilbert/whamm.git


Rust is required to build Whamm, but you don't need to know Rust well to play around with it — I certainly don't. I downloaded and installed Rust on my Mac with the official rustup installer, using this command:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh


I then ran this command to make sure that Rust's tools are available in all terminal sessions, not just the current one:

grep -q ".cargo/env" ~/.zshrc 2>/dev/null || { echo 'source "$HOME/.cargo/env"' >> ~/.zshrc && echo "Added to ~/.zshrc - will work in new terminals"; }


Build the source code:

cargo build --release


When I first tried to build the source code, I received an error message:

The missing WebAssembly target was the source of the error. So, I installed the WebAssembly target as a fix:

cd ~/whamm && rustup target add wasm32-wasip1


I attempted to build the source code again:

cargo build --release


And success!

Add the built binary to your PATH. This binary should be located at:

target/release/whamm
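
On macOS with zsh, one way to do that is the snippet below — a sketch that assumes you cloned the repo to ~/whamm, as in the clone step above:

```shell
# Make the freshly built whamm binary available in the current session.
# Assumes the repo lives at ~/whamm, as in the earlier clone step.
export PATH="$HOME/whamm/target/release:$PATH"

# Persist the change for future zsh sessions (append only if missing).
grep -q 'whamm/target/release' ~/.zshrc 2>/dev/null || \
  echo 'export PATH="$HOME/whamm/target/release:$PATH"' >> ~/.zshrc
```

Once a new shell picks up ~/.zshrc, the whamm binary resolves from any directory.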


Once you do that, running Whamm lists the nifty things you can do with it and Wasm. You can use the command line interface (CLI) as a reference for all of the commands on offer, which include monitoring or manipulating a program's execution. I won't go into everything application monitoring means here, but it's the beginning of adding more observability for Wasm applications: Whamm is useful for debugging, and for collecting and analyzing logs and metrics as the application runs, instead of just statically.

Once you find the source of an error, "manipulating execution" lets you fix it. As the documentation states, you can change the application state by "manipulating an application's dynamic behavior."

Basic Test

You can run a basic test to make sure that the Whamm binary is on your path and working as expected by running the following command:

whamm --help


If everything is set up correctly, you should see Whamm's help output listing the available commands.

For reasons I don't yet know, my initial test of Whamm did not work. So I went back through the commands above, including reinstalling the WebAssembly target, rebuilding the project, and making sure Cargo was available in all terminal sessions. It was likely a Rust environment problem, though I'm not sure. In any case, after repeating most of the steps, the test eventually succeeded.

Now you’re ready to begin your Whamm journey for instrumenting, rebuilding, correcting, monitoring and debugging code. Hats off to the creator — and now, I assume, main maintainer — Elizabeth Gilbert, a doctoral candidate at Carnegie Mellon University, for this great project.

Although I would argue that this looks simple and is relatively simple to use, it’s an amazing build and represents a lot of hard work and engineering dedication. Definitely another win for the WebAssembly community, as well as for observability, debugging and the ability to update applications dynamically with Wasm.

The post Open Source Whamm: Use WebAssembly To Monitor and Fix Running Apps appeared first on The New Stack.


The React Framework Face-Off: Which One Owns the Future?


Frontend developer and educator Kent Dodds has some good news for frontend developers confused about which React-based framework to choose.

“For those who are looking for a new framework, you’re starting a new project, you cannot make a wrong choice. All of these frameworks are fabulous,” he told audiences at React Conf, which last week released videos from the event. “React is doing so much more for us now. The frameworks are really converging, and they really look similar now.”

Dodds joined a panel of framework creators, and one bundler creator, led by Jack Herrington, aka the Blue Collar Coder, to discuss “What’s The Framework of the React Future?”

Here’s what developers had to say for their frameworks.

Expo Router

Evan Bacon, a software engineer and the creator of Expo Router, spoke on behalf of the opinionated routing framework for React Native.

“We really believe in the beauty and the optimization of the mobile form factor and how much that resonates with real people,” he said. “We were really happy with what we built here, starting with React, … bring[ing] JavaScript and React over to native and now we have file-based routing, server actions, API routes, environment variables, one-click deployments. So it’s just so much good stuff that the web community has innovated, and it’s incredible to bring it over to these new spaces and new form factors.”

Expo is exploring how to support the development of AI applications.

“We want to be a really good tool for helping others build AI tools,” he said. “But like very recursively because … everyone is trying to build something agentic. And so downstream of that, they’re going to reach for a dev tool like Expo.”

Next.js

Josh Story, a Vercel engineer who works on the Next.js team, spoke on behalf of the framework. The big change for Next.js was in 2022 when it started working on App Router, which was marked stable in 2023, he noted.

“This is the React Server Components implementation that’s been stable for a few years now,” Story said. “The journey is not over. There’s something that’s essential in any React project and that’s composition, and today there are still some places where there’s some hard cliffs.”

Story added that since frameworks like Next.js now support server components, he expects to see more libraries on npm that are server components.

“This is going to be possible so that maybe you’ll have authentication libraries or form validation libraries or these kinds of things to just get componentized now and they can be used in any framework,” he said. “That’s what’s really powerful about server components.”

He also said that Next.js is reviewing how to best support MCP (Model Context Protocol) endpoints for AI.

Parcel

Devon Govett, a software architect at Adobe, created the open source Parcel bundler.

“If you have an existing client app and you want to try out server components, for example, and you don’t really want to migrate anything or you don’t want to add SSR [server-side rendering], you just want to…. embed some server components into your app and try it out, a bundler, like Parcel, is an easy way to do that,” he said.

Since Parcel isn’t a framework, he pointed to React Router as a possibility for those seeking a React-based framework.

“I really like what the React Router team is doing actually, where … you can actually swap between different bundlers and React Router’s data mode, which is really cool,” he said. “So if you have bundler-specific plugins or types of things that you want to use with React Router, you can actually do that.”

Rock (for React Native)

Michał Pierzchała, principal engineer at Callstack, said that Rock, described as “a modular toolkit for teams building React Native apps,” is trying to accomplish two goals. First, the framework team wants to get older React Native community CLI apps on the newest tooling so they can leverage some local and remote caching. Second, the team wants to support native iOS and Android apps in the React Native realm.

“With our Rock Brownfield feature, we allow them to import one file and then just instantiate React Native in their iOS or Android,” he said.

Brownfield development is when a developer wants to add React Native to an existing app developed in native code, such as Swift for iOS or Kotlin for Android, without rewriting the whole app.

React Router

The framework, originally Remix, was created by Ryan Florence and Michael Jackson. It converged with React Router last year and now the two creators have moved on to work on a different framework. But Dodds uses React Router and advocated for it.

“For React Router, in particular, I really appreciate the dedication of the team. It has a lot of investment from Shopify,” he said.


Dodds acknowledged he’s unique in that he’s only made two or three commits to the framework he’s representing.

“I feel really, really good about where we are from framework perspective now,” he said. “It’s really important for frameworks to take care of their users, and React Router has proven to do that over the last decade.”

In regards to AI, Dodds noted that developers are used to adding chatbots to their applications, but now the situation has reversed: Chatbots actually are adding applications through MCP-UI. He said there’s more room for frameworks to improve and support that dynamic.

“If it does play out to be the case, then every one of these frameworks will want to have some affordances for MCP and serving your app right into that chat experience. Whether it be ChatGPT or Gemini or Claude or whatever, that will be an important part of the future for React frameworks or any web framework,” he said.

Redwood Software Development Kit (SDK)

Redwood started as a framework then pivoted to an SDK. Redwood co-creator Peter Pistorius emphasized that the SDK is lightweight, composable and “server-first.”

“I’m not going to tell you to use my framework, because you’re already using it. If you’re using Vite, Typescript and React [you’re] using Redwood, there’s nothing to learn,” he said.

The post The React Framework Face-Off: Which One Owns the Future? appeared first on The New Stack.


Caught in the Middle: The New Role of Platform Teams


Earlier this year, the platform team at a global bank introduced a new workflow for developers to provision cloud environments on demand. The goal was to improve delivery speed and reduce dependency on manual approvals. The infrastructure rolled out smoothly. Within weeks, teams were launching workloads across AWS, Google Cloud Platform (GCP) and Microsoft Azure.

Then the questions started.

Finance asked why certain environments were running in high-cost regions. Security flagged resources missing encryption tags. Compliance asked whether logs for nonproduction environments were being stored according to internal policy. The data team asked whether the platform could integrate an AI agent to detect and remediate drift automatically.

None of these were incidents. Nothing failed. The system behaved exactly as designed.

But each question required insight into a different part of the organization. Cloud configuration. Internal policy. Team behavior. Risk exposure. No single group owned the full picture.

The platform team was expected to answer anyway.

Everyday Decisions at Enterprise Scale

In large enterprises, infrastructure work is distributed by design. Application teams own services. Security defines guardrails. Compliance defines requirements. Finance tracks spend. Infrastructure teams manage shared foundations.

Each group operates with partial context.

Platform teams sit closest to the workflows that connect all of this together. As a result, they are pulled into decisions that span multiple domains.

In a typical week, a platform team may be asked to explain:

  • Why a workload launched in a region that was technically allowed but operationally discouraged.
  • Why certain resources lack cost attribution even though tagging standards exist.
  • Whether access patterns comply with internal policy when multiple identity systems are involved.
  • Whether an AI-assisted workflow can be audited in the same way as a manual one.

These are not edge cases. They are daily questions that emerge from scale, decentralization and constant change.

A schematic diagram with a white circle at the middle representing platform team. Five teams surround the platform team each with different disconnected context and tools: security, finance, infrastructure, compliance and application.

Source: env0.

The Expanding Role of Platform Engineering

Platform teams were originally chartered to improve delivery speed and consistency. Over time, that mandate has expanded.

Today, platform leaders are expected to weigh in on:

  • Security posture and enforcement boundaries.
  • Cost controls and efficiency trade-offs.
  • Compliance visibility and audit readiness.
  • Governance around change and access.
  • The safe use of AI agents inside infrastructure workflows.

They are not only building systems. They are shaping decisions that carry financial, regulatory and operational consequences.

Yet these expectations rarely come with corresponding authority.

Responsibility Without Structure

Platform teams are often asked to explain outcomes they did not directly cause. They are expected to provide answers without owning the policy, budget or workload.

This creates a structural tension.

The team has enough context to be accountable, but not enough leverage to set direction. They become the place where unresolved questions land, simply because they are closest to the execution layer.

As the number of daily infrastructure events, environment changes, access updates, policy evaluations and AI-driven actions grows, this tension compounds. The work becomes less about building and more about interpreting, coordinating and justifying decisions across teams.

What Needs To Change

If platform teams are going to continue operating at this intersection, organizations need to adjust the way they support them.

That means:

  • Making ownership visible across environments, teams and services.
  • Involving platform leaders earlier in policy and governance discussions.
  • Providing systems that record and explain why decisions were made, not just what changed.
  • Giving platform teams the ability to push back when requirements conflict or outpace capacity.

These teams already function as a coordination layer. The structure around them has not caught up.

Conclusion

Platform engineering is no longer confined to infrastructure delivery. It has become a decision-making function that operates across security, compliance, finance and operations.

This shift did not happen all at once. It emerged gradually as systems scaled and responsibilities fragmented.

Platform teams now operate in the middle of the organization by necessity. Recognizing that reality is the first step toward supporting it properly.

The post Caught in the Middle: The New Role of Platform Teams appeared first on The New Stack.
