
Moving Your Codebase to Go 1.26 With GoLand Syntax Updates


Working on an existing Go project rarely starts with a plan to modernize it. More often, you open a file to make a small change, add a field, or adjust some logic. The code compiles, tests pass, and everything looks fine, but the language has moved forward, and your code hasn’t kept up.

As you bump your project’s Go version, you may start noticing small patterns from older code that stay around for years. A helper variable here. A brittle error check there. You fix them by hand, but as the work spreads across files, you lose context fast.

In GoLand, Go 1.26 syntax updates now show up as focused inspections with quick-fixes. You see the change where you are already working, and you can apply the same change throughout the project when you are ready.

Applying syntax updates

As soon as you switch the language version of your project to 1.26, GoLand treats that as a signal: It can now look for patterns that suit Go 1.26 better.
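For reference, the version bump that triggers these inspections is just the go directive in your module file. A minimal go.mod sketch (the module path here is purely illustrative):

```go
module example.com/myservice // hypothetical module path

go 1.26
```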

The first thing you’ll notice is subtle. A blue underline appears under code that is safe to modernize. The underline uses a dedicated severity level named Syntax updates, with its own language updates icon. It is not an error. It is instead an indication that the code can be updated without changing its behavior.

GoLand adds two Go 1.26 syntax update inspections:

  • Pointer creation with new()
  • Type-safe error unwrapping with errors.AsType

We started with the latest Go 1.26 changes, and we plan to add more inspections for important language and standard library updates from recent years.

Type-safe error unwrapping with errors.AsType

Go 1.26 adds errors.AsType, which gives you a typed result. It avoids the pointer setup that errors.As needs and prevents type-mismatch panics. GoLand suggests the safer form and offers the Replace with errors.AsType quick-fix. You can read more about errors.AsType in the GoLand documentation or the official Go documentation.

Before
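A minimal sketch of the kind of code the inspection targets, using the classic errors.As pointer setup (the *fs.PathError target is just an example):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("missing.txt")

	// Pre-1.26 pattern: declare a target variable, then pass its address to errors.As.
	var pathErr *fs.PathError
	if errors.As(err, &pathErr) {
		fmt.Println("failed on path:", pathErr.Path)
	}
}
```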

After
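The same check after the quick-fix, assuming errors.AsType keeps the generic signature described for Go 1.26 (it returns a typed value plus an ok bool, so no target variable or pointer is needed):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Open("missing.txt")

	// Go 1.26 form: the type parameter replaces the pointer target, so a
	// mismatched target type becomes a compile-time error rather than a panic.
	if pathErr, ok := errors.AsType[*fs.PathError](err); ok {
		fmt.Println("failed on path:", pathErr.Path)
	}
}
```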

Pointer creation with new()

Go 1.26 lets new() accept expressions. This removes temporary variables that exist only so you can take their address. GoLand highlights the pattern and offers the Replace with new() quick-fix. You can read more about new() in the GoLand documentation or the official Go documentation.

Before
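A minimal sketch of the pattern GoLand flags: a helper variable that exists only so its address can be taken (the Config type is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

type Config struct {
	Timeout *time.Duration
}

func main() {
	// Pre-1.26: the temporary variable is only here so we can take &timeout.
	timeout := 30 * time.Second
	cfg := Config{Timeout: &timeout}

	fmt.Println(*cfg.Timeout)
}
```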

After
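The same code after the quick-fix, assuming Go 1.26’s expression form of new(), which allocates and initializes the value in one step:

```go
package main

import (
	"fmt"
	"time"
)

type Config struct {
	Timeout *time.Duration
}

func main() {
	// Go 1.26: new(expr) returns a pointer to a fresh variable holding the value,
	// so the helper variable disappears.
	cfg := Config{Timeout: new(30 * time.Second)}

	fmt.Println(*cfg.Timeout)
}
```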

Expanding from one fix to the whole project

Once you apply the first quick-fix, you can move from a single change to a project-wide update. GoLand gives you several entry points, depending on how you work:

  • Right after a quick-fix: Just click Analyze code for other syntax updates.
  • From Search Everywhere: Open Search Everywhere (press Shift twice) and select the Update Syntax action.
  • From go.mod: Open the module file containing the go 1.26 directive and click Analyze code for syntax updates.
  • From the Refactor menu: Click Refactor and select Update Syntax.

GoLand collects the results in a separate tab under the Syntax updates node in the Problems tool window. You can review the updates one by one or apply them in bulk.

GoLand shows a before-and-after diff for each suggested update, so you can review the exact rewrite before you apply it.

What this approach to syntax updates changes in practice

Migration to a new Go version is rarely one big rewrite. It usually happens over the course of dozens of small, safe modernizations mixed into daily work.

GoLand supports this workflow in a few connected steps:

  • It helps you notice update candidates early. When you edit code that can be modernized, GoLand highlights it in the editor.
  • It offers a safe rewrite. You can apply a quick-fix that rewrites the code to the Go 1.26 form without changing its behavior.
  • It scales to the whole project. When you are ready, run Analyze code for other syntax updates on a wider scope and review the suggested updates before you apply them.
  • It lets you apply updates in bulk. From the list of results in the Problems tool window, you can apply fixes one by one or apply a grouped fix to update many occurrences at once.

This combination lets you move your codebase forward without turning the migration into a separate project. You update a line, you see a better form, you apply it, and you keep going.

Happy coding!

The GoLand team


Conductors to Orchestrators: The Future of Agentic Coding

This post first appeared on Addy Osmani’s Elevate Substack newsletter and is being republished here with the author’s permission.

AI coding assistants have quickly moved from novelty to necessity, with up to 90% of software engineers now using some kind of AI for coding. But a new paradigm is emerging in software development—one where engineers leverage fleets of autonomous coding agents. In this agentic future, the role of the software engineer is evolving from implementer to manager, or in other words, from coder to conductor and ultimately orchestrator.

Over time, developers will increasingly guide AI agents to build the right code and coordinate multiple agents working in concert. This write-up explores the distinction between conductors and orchestrators in AI-assisted coding, defines these roles, and examines how today’s cutting-edge tools embody each approach. Senior engineers may start to see the writing on the wall: Our jobs are shifting from “How do I code this?” to “How do I get the right code built?”—a subtle but profound change.

Will every engineer become an orchestrator?

What’s the tl;dr of an orchestrator tool? It supports multi-agent workflows where you can run many agents in parallel without them interfering with each other. But let’s talk terminology first.

The Conductor: Guiding a Single AI Agent

In the context of AI coding, acting as a conductor means working closely with a single AI agent on a specific task, much like a conductor guiding a soloist through a performance.

The engineer remains in the loop at each step, dynamically steering the agent’s behavior, tweaking prompts, intervening when needed, and iterating in real time. This is the logical extension of the “AI pair programmer” model many developers are already familiar with. With conductor-style workflows, coding happens in a synchronous, interactive session between human and AI, typically in your IDE or CLI.

Key characteristics: A conductor keeps a tight feedback loop with one agent, verifying or modifying each suggestion, much as a driver navigates with a GPS. The AI helps write code, but the developer still performs many manual steps—creating branches, running tests, writing commit messages, etc.—and ultimately decides which suggestions to accept.

Crucially, most of this interaction is ephemeral: Once code is written and the session ends, the AI’s role is done and any context or decisions not captured in code may be lost. This mode is powerful for focused tasks and allows fine-grained control, but it doesn’t fully exploit what multiple AIs could do in parallel.

Modern tools as conductors

Several current AI coding tools exemplify the conductor pattern:

  • Claude Code (Anthropic): Anthropic’s Claude model offers a coding assistant mode (accessible via a CLI tool or editor integration) where the developer converses with Claude to generate or modify code. For example, with the Claude Code CLI, you navigate your project in a shell and ask Claude to implement a function or refactor code, and it prints diffs or file updates for you to approve. You remain the conductor: You trigger each action and review the output immediately. While Claude Code has features to handle long-running tasks and tools, in the basic usage it’s essentially a smart codeveloper working step-by-step under human direction.
  • Gemini CLI (Google): A command-line assistant powered by Google’s Gemini model, used for planning and coding with a very large context window. An engineer can prompt Gemini CLI to analyze a codebase or draft a solution plan, then iterate on results interactively. The human directs each step and Gemini responds within the CLI session. It’s a one-at-a-time collaborator, not running off to make code changes on its own (at least in this conductor mode).
  • Cursor (editor AI assistant): The Cursor editor (a specialized AI-augmented IDE) can operate in an inline or chat mode where you ask it questions or to write a snippet, and it immediately performs those edits or gives answers within your coding session. Again, you guide it one request at a time. Cursor’s strength as a conductor is its deep context integration—it indexes your whole codebase so the AI can answer questions about any part of it. But the hallmark is that you, the developer, initiate and oversee each change in real time.
  • VS Code, Cline, Roo Code (in-IDE chat): Similar to above, other coding agents also fall into this category. They suggest code or even multistep fixes, but always under continuous human guidance.

This conductor-style AI assistance has already boosted productivity significantly. It feels like having a junior engineer or pair programmer always by your side. However, it’s inherently one-agent-at-a-time and synchronous. To truly leverage AI at scale, we need to go beyond being a single-agent conductor. This is where the orchestrator role comes in.

Engineer as conductor, engineer as orchestrator

The Orchestrator: Managing a Fleet of Agents

If a conductor works with one AI “musician,” an orchestrator oversees the entire symphony of multiple AI agents working in parallel on different parts of a project. The orchestrator sets high-level goals, defines tasks, and lets a team of autonomous coding agents independently carry out the implementation details.

Instead of micromanaging every function or bug fix, the human focuses on coordination, quality control, and integration of the agents’ outputs. In practical terms, this often means an engineer can assign tasks to AI agents (e.g., via issues or prompts) and have those agents asynchronously produce code changes—often as ready-to-review pull requests. The engineer’s job becomes reviewing, giving feedback, and merging the results rather than writing all the code personally.

This asynchronous, parallel workflow is a fundamental shift. It moves AI assistance from the foreground to the background. While you attend to higher-level design or other work, your “AI team” is coding in the background. When they’re done, they hand you completed work (with tests, docs, etc.) for review. It’s akin to being a project tech lead delegating tasks to multiple devs and later reviewing their pull requests, except the “devs” are AI agents.

Modern tools as orchestrators

Over just the past year, several tools have emerged that embody this orchestrator paradigm:

  • GitHub Copilot coding agent (Microsoft): This upgrade to Copilot transforms it from an in-editor assistant into an autonomous background developer. (I cover it in this video.) You can assign a GitHub issue to Copilot’s agent or invoke it via the VS Code agents panel, telling it (for example) “Implement feature X” or “Fix bug Y.” Copilot then spins up an ephemeral dev environment via GitHub Actions, checks out your repo, creates a new branch, and begins coding. It can run tests, linters, even spin up the app if needed, all without human babysitting. When finished, it opens a pull request with the changes, complete with a description and meaningful commit messages. It then asks for your review.

You, the human orchestrator, review the PR (perhaps using Copilot’s AI-assisted code review to get an initial analysis). If changes are needed, you can leave comments like “@copilot please update the unit tests for edge case Z,” and the agent will iterate on the PR. This is asynchronous, autonomous code generation in action. Notably, Copilot automates the tedious bookkeeping—branch creation, committing, opening PRs, etc.—which used to cost developers time. All the grunt work around writing code (aside from the design itself) is handled, allowing developers to focus on reviewing and guiding at a high level. GitHub’s agent effectively lets one engineer supervise many “AI juniors” working in parallel across different issues (and you can even create multiple specialized agents for different task types).

  • Jules, Google’s coding agent: Jules is “not a copilot, not a code-completion sidekick, but an autonomous agent that reads your code, understands your intent, and gets to work.” Integrated with Google Cloud and GitHub, Jules lets you connect a repository and then ask it to perform tasks much as you would a developer on your team. Under the hood, Jules clones your entire codebase into a secure cloud VM and analyzes it with a powerful model. You might tell Jules “Add user authentication to our app” or “Upgrade this project to the latest Node.js and fix any compatibility issues.” It will formulate a plan, present it to you for approval, and once you approve, execute the changes asynchronously. It makes commits on a new branch and can even open a pull request for you to merge. Jules handles writing new code, updating tests, bumping dependencies, etc., all while you could be doing something else.

    Crucially, Jules provides transparency and control: It shows you its proposed plan and reasoning before making changes, and allows you to intervene or modify instructions at any point (a feature Google calls “user steerability”). This is akin to giving an AI intern the spec and watching over their shoulder less frequently—you trust them to get it mostly right, but you still verify the final diff. Jules also boasts unique touches like audio changelogs (it generates spoken summaries of code changes) and the ability to run multiple tasks concurrently in the cloud. In short, Google’s Jules demonstrates the orchestrator model: You define the task, Jules does the heavy lifting asynchronously, and you oversee the result.
  • OpenAI Codex (cloud agent): OpenAI introduced a new cloud-based Codex agent to complement ChatGPT. This evolved Codex (different from the 2021 Codex model) is described as “a cloud-based software engineering agent that can work on many tasks in parallel.” It’s available as part of ChatGPT Plus/Pro under the name OpenAI Codex and via an npm CLI (npm i -g @openai/codex). With the Codex CLI or its VS Code/Cursor extensions, you can delegate tasks to OpenAI’s agent similar to Copilot or Jules. For instance, from your terminal you might say, “Hey Codex, implement dark mode for the settings page.” Codex then launches into your repository, edits the necessary files, perhaps runs your test suite, and when done, presents the diff for you to merge. It operates in an isolated sandbox for safety, running each task in a container with your repo and environment.

    Like others, OpenAI’s Codex agent integrates with developer workflows: You can even kick off tasks from a ChatGPT mobile app on your phone and get notified when the agent is done. OpenAI emphasizes seamless switching “between real-time collaboration and async delegation” with Codex. In practice, this means you have the flexibility to use it in conductor mode (pair-programming in your IDE) or orchestrator mode (hand off a background task to the cloud agent). Codex can also be invited into your Slack channels—teammates can assign tasks to @Codex in Slack, and it will pull context from the conversation and your repo to execute them. It’s a vision of ubiquitous AI assistance, where coding tasks can be delegated from anywhere. Early users report that Codex can autonomously identify and fix bugs, or generate significant features, given a well-scoped prompt. All of this again aligns with the orchestrator workflow: The human defines the goal; the AI agent autonomously delivers a solution.
  • Anthropic Claude Code (for web): Anthropic has offered Claude as an AI chatbot for a while, and their Claude Code CLI has been a favorite for interactive coding. Anthropic took the next step by launching Claude Code for Web, effectively a hosted version of their coding agent. Using Claude Code for web, you point it at your GitHub repo (with configurable sandbox permissions) and give it a task. The agent then runs in Anthropic’s managed container, just like the CLI version, but now you can trigger it from a web interface or even a mobile app. It queues up multiple prompts and steps, executes them, and when done, pushes a branch to your repo (and can open a PR). Essentially, Anthropic took their single-agent Claude Code and made it an orchestratable service in the cloud. They even provided a “teleport” feature to transfer the session to your local environment if you want to take over manually.

    The rationale for this web version aligns with orchestrator benefits: convenience and scale. You don’t need to run long jobs on your machine; Anthropic’s cloud handles the heavy lifting, with filesystem and network isolation for safety. Claude Code for web acknowledges that autonomy with safety is key—by sandboxing the agent, they reduce the need for constant permission prompts, letting the agent operate more freely (less babysitting by the user). In effect, Anthropic has made it easier to use Claude as an autonomous coding worker you launch on demand.
  • Cursor background agents: tl;dr Cursor 2.0 has a multi-agent interface focused on agents rather than files. Cursor 2 expands its background agents feature into a full-fledged orchestration layer for developers. Beyond serving as an interactive assistant, Cursor 2 lets you spawn autonomous background agents that operate asynchronously in a managed cloud workspace. When you delegate a task, Cursor 2’s agents now clone your GitHub repository, spin up an ephemeral environment, and check out an isolated branch where they execute work end-to-end. These agents can handle the entire development loop—from editing and running code to installing dependencies, executing tests, running builds, and even searching the web or referencing documentation to resolve issues. Once complete, they push commits and open a detailed pull request summarizing their work.

    Cursor 2 introduces multi-agent orchestration, allowing several background agents to run concurrently across different tasks—for instance, one refining UI components while another optimizes backend performance or fixes tests. Each agent’s activity is visible through a real-time dashboard that can be accessed from desktop or mobile, enabling you to monitor progress, issue follow-ups, or intervene manually if needed. This new system effectively treats each agent as part of an on-demand AI workforce, coordinated through the developer’s high-level intent. Cursor 2’s focus on parallel, asynchronous execution dramatically amplifies a single engineer’s throughput—fully realizing the orchestrator model where humans oversee a fleet of cooperative AI developers rather than a single assistant.
  • Agent orchestration platforms: Beyond individual product offerings, there are also emerging platforms and open source projects aimed at orchestrating multiple agents. For instance, Conductor by Melty Labs (despite its name!) is actually an orchestration tool that lets you deploy and manage multiple Claude Code agents on your own machine in parallel. With Conductor, each agent gets its own isolated Git worktree to avoid conflicts, and you can see a dashboard of all agents (“who’s working on what”) and review their code as they progress. The idea is to make running a small swarm of coding agents as easy as running one. Similarly, Claude Squad is a popular open source terminal app that essentially multiplexes Anthropic’s Claude—it can spawn several Claude Code instances working concurrently in separate tmux panes, allowing you to give each a different task and thus code “10x faster” by parallelizing. These orchestration tools underscore the trend: Developers want to coordinate multiple AI coding agents and have them collaborate or divide work. Even Microsoft’s Azure AI services are enabling this: At Build 2025 they announced tools for developers to “orchestrate multiple specialized agents to handle complex tasks,” with SDKs supporting agent-to-agent communication so your fleet of agents can talk to each other and share context. All of this infrastructure is being built to support the orchestrator engineer, who might eventually oversee dozens of AI processes tackling different parts of the software development lifecycle.

I found Conductor to make the most sense to me. It was a perfect balance of talking to an agent and seeing my changes in a pane next to it. Its Github integration feels seamless; e.g. after merging PR, it immediately showed a task as “Merged” and provided an “Archive” button.
Juriy Zaytsev, Staff SWE, LinkedIn

He also tried Magnet:

 The idea of tying tasks to a Kanban board is interesting and makes sense. As such, Magnet feels very product-centric.

Conductor versus Orchestrator—Differences

Many engineers will continue to engage in conductor-style workflows (single agent, interactive) even as orchestrator patterns mature. The two modes will coexist.

It’s clear that “conductor” and “orchestrator” aren’t just fancy terms; they describe a genuine shift in how we work with AI.

  • Scope of control: A conductor operates at the micro level, guiding one agent through a single task or a narrow problem. An orchestrator operates at the macro level, defining broader tasks and objectives for multiple agents or for a powerful single agent that can handle multistep projects. The conductor asks, “How do I solve this function or bug with the AI’s help?” The orchestrator asks, “What set of tasks can I delegate to AI agents today to move this project forward?”
  • Degree of autonomy: In conductor mode, the AI’s autonomy is low—it waits for user prompts each step of the way. In orchestrator mode, we give the AI high autonomy—it might plan and execute dozens of steps internally (writing code, running tests, adjusting its approach) before needing human feedback. A GitHub Copilot agent or Jules will try to complete a feature from start to finish once assigned, whereas Copilot’s IDE suggestions only go line-by-line as you type.
  • Synchronous vs asynchronous: Conductor interactions are typically synchronous—you prompt; AI responds within seconds; you immediately integrate or iterate. It’s a real-time loop. Orchestrator interactions are asynchronous—you might dispatch an agent and check back minutes or hours later when it’s done (somewhat like kicking off a long CI job). This means orchestrators must handle waiting, context-switching, and possibly managing multiple things concurrently, which is a different workflow rhythm for developers.
  • Artifacts and traceability: A subtle but important difference: Orchestrator workflows produce persistent artifacts like branches, commits, and pull requests that are preserved in version control. The agent’s work is fully recorded (and often linked to an issue/ticket), which improves traceability and collaboration. With conductor-style (IDE chat, etc.), unless the developer manually commits intermediate changes, a lot of the AI’s involvement isn’t explicitly documented. In essence, orchestrators leave a paper trail (or rather a Git trail) that others on the team can see or even trigger themselves. This can help bring AI into team processes more naturally.
  • Human effort profile: For a conductor, the human is actively engaged nearly 100% of the time the AI is working—reviewing each output, refining prompts, etc. It’s interactive work. For an orchestrator, the human’s effort is front-loaded (writing a good task description or spec for the agent, setting up the right context) and back-loaded (reviewing the final code and testing it), but not much is needed in the middle. This means one orchestrator can manage more total work in parallel than would ever be possible by working with one AI at a time. Essentially, orchestrators leverage automation at scale, trading off fine-grained control for breadth of throughput.

To illustrate, consider a common scenario: adding a new feature that touches the frontend and backend and requires new tests. As a conductor, you might open your AI chat and implement the backend logic with the AI’s help, then separately implement the frontend, then ask it to generate some tests—doing each step sequentially with you in the loop throughout. As an orchestrator, you could assign the backend implementation to one agent (Agent A), the frontend UI changes to another (Agent B), and test creation to a third (Agent C). You give each a prompt or an issue description, then step back and let them work concurrently.

After a short time, you get perhaps three PRs: one for backend, one for frontend, one for tests. Your job then is to review and integrate them (and maybe have Agent C adjust tests if Agents A/B’s code changed during integration). In effect, you managed a mini “AI team” to deliver the feature. This example highlights how orchestrators think in terms of task distribution and integration, whereas conductors focus on step-by-step implementation.
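None of the products above expose a public API like this, but as a purely illustrative Go sketch, the orchestrator side of that scenario boils down to fan-out/fan-in: dispatch independent tasks, wait, then review whatever comes back. Every type and function below is hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// Task and PullRequest are hypothetical types standing in for work
// delegated to a coding agent and the artifact it hands back.
type Task struct{ Name, Prompt string }
type PullRequest struct{ Task, Summary string }

// runAgent stands in for an autonomous agent working asynchronously.
func runAgent(t Task) PullRequest {
	return PullRequest{Task: t.Name, Summary: "implements: " + t.Prompt}
}

func main() {
	tasks := []Task{
		{"backend", "add payment endpoint"},
		{"frontend", "add payment form"},
		{"tests", "cover the new payment flow"},
	}

	results := make(chan PullRequest, len(tasks))
	var wg sync.WaitGroup

	// Fan out: each agent works on its own task concurrently.
	for _, t := range tasks {
		wg.Add(1)
		go func(t Task) {
			defer wg.Done()
			results <- runAgent(t)
		}(t)
	}
	wg.Wait()
	close(results)

	// Fan in: the human orchestrator reviews what came back.
	for pr := range results {
		fmt.Printf("review PR for %s: %s\n", pr.Task, pr.Summary)
	}
}
```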

It’s worth noting that these roles are fluid, not rigid categories. A single developer might act as a conductor in one moment and an orchestrator the next. For example, you might kick off an asynchronous agent to handle one task (orchestrator mode) while you personally work with another AI on a tricky algorithm in the meantime (conductor mode). Tools are also blurring lines: As OpenAI’s Codex marketing suggests, you can seamlessly switch between collaborating in real-time and delegating async tasks. So, think of “conductor” versus “orchestrator” as two ends of a spectrum of AI-assisted development, with many hybrid workflows in between.

Why Orchestrators Matter

Experts are suggesting that this shift to orchestration could be one of the biggest leaps in programming productivity we’ve ever seen. Consider the historical trends: We went from writing assembly to using high-level languages, then to using frameworks and libraries, and recently to leveraging AI for autocompletion. Each step abstracted away more low-level work. Autonomous coding agents are the next abstraction layer. Instead of manually coding every piece, you describe what you need at a higher level and let multiple agents build it.

As orchestrator-style agents ramp up, we could imagine even larger percentages of code being drafted by AIs. What does a software team look like when AI agents generate, say, 80% or 90% of the code, and humans provide the 10% critical guidance and oversight? Many believe it doesn’t mean replacing developers—it means augmenting developers to build better software. We may witness an explosion of productivity where a small team of engineers, effectively managing dozens of agent processes, can accomplish what once took an army of programmers months. (Note: I continue to believe that the code review loop, where we’ll continue to focus our human skills, is going to need work if all this code is not to be slop.)

One intriguing possibility is that every engineer becomes, to some degree, a manager of AI developers. It’s a bit like everyone having a personal team of interns or junior engineers. Your effectiveness will depend on how well you can break down tasks, communicate requirements to AI, and verify the results. Human judgment will remain vital: deciding what to build, ensuring correctness, handling ambiguity, and injecting creativity or domain knowledge where AI might fall short. In other words, the skillset of an orchestrator—good planning, prompt engineering, validation, and oversight—is going to be in high demand. Far from making engineers obsolete, these agents could elevate engineers into more strategic, supervisory roles on projects.

Toward an “AI Team” of Specialists

Today’s coding agents mostly tackle implementation: write code, fix code, write tests, etc. But the vision doesn’t stop there. Imagine a full software development pipeline where multiple specialized AI agents handle different phases of the lifecycle, coordinated by a human orchestrator. This is already on the horizon. Researchers and companies have floated architectures where, for example, you have:

  • A planning agent that analyzes feature requests or bug reports and breaks them into specific tasks
  • A coding agent (or several) that implement the tasks in code
  • A testing agent that generates and runs tests to verify the changes
  • A code review agent that checks the pull requests for quality and standards compliance
  • A documentation agent that updates README or docs to reflect the changes
  • Possibly a deployment/monitoring agent that can roll out the change and watch for issues in production.

In this scenario, the human engineer’s role becomes one of oversight and orchestration across the whole flow: You might initiate the process with a high-level goal (e.g., “Add support for payment via cryptocurrency in our app”); the planning agent turns that into subtasks; coding agents implement each subtask asynchronously; the testing agent and review agent catch problems or polish the code; and finally everything gets merged and deployed under watch of monitoring agents.

The human would step in to approve plans, resolve any conflicts or questions the agents raise, and give final approval to deploy. This is essentially an “AI swarm” tackling software development end-to-end, with the engineer as the conductor of the orchestra.
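To make that division of labor concrete, here is a toy Go sketch of a staged pipeline with a human approval checkpoint between agents. The interfaces are entirely hypothetical; no real product works exactly this way.

```go
package main

import "fmt"

// Change is the artifact flowing through the pipeline; the agent types below
// are placeholders for specialized planning, coding, testing, and review agents.
type Change struct{ Description string }

type Agent interface {
	Run(c Change) Change
}

type planner struct{}
type coder struct{}
type tester struct{}
type reviewer struct{}

func (planner) Run(c Change) Change  { c.Description += " -> planned"; return c }
func (coder) Run(c Change) Change    { c.Description += " -> coded"; return c }
func (tester) Run(c Change) Change   { c.Description += " -> tested"; return c }
func (reviewer) Run(c Change) Change { c.Description += " -> reviewed"; return c }

// humanApproves is the orchestrator's checkpoint between stages.
func humanApproves(c Change) bool {
	fmt.Println("awaiting approval:", c.Description)
	return true // in reality, a person inspects the artifact here
}

func main() {
	pipeline := []Agent{planner{}, coder{}, tester{}, reviewer{}}
	change := Change{Description: "add crypto payments"}

	for _, a := range pipeline {
		change = a.Run(change)
		if !humanApproves(change) {
			fmt.Println("stopped by the human orchestrator")
			return
		}
	}
	fmt.Println("merged and deployed:", change.Description)
}
```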

While this might sound futuristic, we see early signs. Microsoft’s Azure AI Foundry now provides building blocks for multi-agent workflows and agent orchestration in enterprise settings, implicitly supporting the idea that multiple agents will collaborate on complex, multi-step tasks. Internal experiments at tech companies have agents creating pull requests that other agent reviewers automatically critique, forming an AI/AI interaction with a human in the loop at the end. In open source communities, people have chained tools like Claude Squad (parallel coders) with additional scripts that integrate their outputs. And the conversation has started about standards like the Model Context Protocol (MCP) for agents sharing state and communicating results to each other.

I’ve noted before that “specialized agents for Design, Implementation, Test, and Monitoring could work together to develop, launch, and land features in complex environments”–with developers onboarding these AI agents to their team and guiding/overseeing their execution. In such a setup, agents would “coordinate with other agents autonomously, request human feedback, reviews and approvals” at key points, and otherwise handle the busywork among themselves. The goal is a central platform where we can deploy specialized agents across the workflow, without humans micromanaging each individual step—instead, the human oversees the entire operation with full context.

This could transform how software projects are managed: more like running an automated assembly line where engineers ensure quality and direction rather than handcrafting each component on the line.

Challenges and the Human Role in Orchestration

Does this mean programming becomes a push-button activity where you sit back and let the AI factory run? Not quite—and likely never entirely. There are significant challenges and open questions with the orchestrator model:

  • Quality control and trust: Orchestrating multiple agents means you’re not eyeballing every single change as it’s made. Bugs or design flaws might slip through if you solely rely on AI. Human oversight remains critical as the final failsafe. Indeed, current tools explicitly require the human to review the AI’s pull requests before merging. The relationship is often compared to managing a team of junior developers: They can get a lot done, but you wouldn’t ship their code without review. The orchestrator engineer must be vigilant about checking the AI’s work, writing good test cases, and having monitoring in place. AI agents can make mistakes or produce logically correct but undesirable solutions (for instance, implementing a feature in a convoluted way). Part of the orchestration skillset is knowing when to intervene versus when to trust the agent’s plan. As the CTO of Stack Overflow wrote, “Developers maintain expertise to evaluate AI outputs” and will need new “trust models” for this collaboration.
  • Coordination and conflict: When multiple agents work on a shared codebase, coordination issues arise—much like multiple developers can conflict if they touch the same files. We need strategies to prevent merge conflicts or duplicated work. Current solutions use workspace isolation (each agent works on its own Git branch or separate environment) and clear task separation. For example, one agent per task, and tasks designed to minimize overlap. Some orchestrator tools can even automatically merge changes or rebase agent branches, but usually it falls to the human to integrate. Ensuring agents don’t step on each other’s toes is an active area of development. It’s conceivable that in the future agents might negotiate with each other (via something like agent-to-agent communication protocols) to avoid conflicts, but today the orchestrator sets the boundaries.
  • Context, shared state, and handoffs: Coding workflows are rich in state: repository structure, dependencies, build systems, test suites, style guidelines, team practices, legacy code, branching strategies, etc. Multi-agent orchestration demands shared context, memory, and smooth transitions. But in enterprise settings, context sharing across agents is nontrivial. Without a unified “workflow orchestration layer,” each agent can become a silo, working well in its domain but failing to mesh. In a coding-engineering team this may translate into: One agent creates a feature branch; another one runs unit tests; another merges into master—if the first agent doesn’t tag metadata the second is expecting, you get breakdowns.
  • Prompting and specifications: Ironically, as the AI handles more coding, the human’s “coding” moves up a level to writing specifications and prompts. The quality of an agent’s output is highly dependent on how well you specify the task. Vague instructions lead to subpar results or agents going astray. Best practices that have emerged include writing mini design docs or acceptance criteria for the agents—essentially treating them like contractors who need a clear definition of done. This is why we’re seeing ideas like spec-driven development for AI: You feed the agent a detailed spec of what to build, so it can execute predictably. Engineers will need to hone their ability to describe problems and desired solutions unambiguously. Paradoxically, it’s a very old-school skill (writing good specs and tests) made newly important in the AI era. As agents improve, prompts might get simpler (“write me a mobile app for X and Y with these features”) and yet yield more complex results, but we’re not quite at the point of the AI intuiting everything unsaid. For now, orchestrators must be excellent communicators to their digital workforce.
  • Tooling and debugging: With a human developer, if something goes wrong, they can debug in real time. With autonomous agents, if something goes wrong (say the agent gets stuck on a problem or produces a failing PR), the orchestrator has to debug the situation: Was it a bad prompt? Did the agent misinterpret the spec? Do we roll back and try again or step in and fix it manually? New tools are being added to help here: For instance, checkpointing and rollback commands let you undo an agent’s changes if it went down a wrong path. Monitoring dashboards can show if an agent is taking too long or has errors. But effectively, orchestrators might at times have to drop down to conductor mode to fix an issue, then go back to orchestration. This interplay will improve as agents get more robust, but it highlights that orchestrating isn’t just “fire and forget”—it requires active monitoring. AI observability tools (tracking cost, performance, accuracy of agents) are likely to become part of the developer’s toolkit.
  • Ethics and responsibility: Another angle—if an AI agent writes most of the code, who is responsible for license compliance, security vulnerabilities, or bias in that code? Ultimately the human orchestrator (or their organization) carries responsibility. This means orchestrators should incorporate practices like security scanning of AI-generated code and verifying dependencies. Interestingly, some agents like Copilot and Jules include built-in safeguards: They won’t introduce known vulnerable versions of libraries, for instance, and can be directed to run security audits. But at the end of the day, “trust, but verify” is the mantra. The human remains accountable for what ships, so orchestrators will need to ensure AI contributions meet the team’s quality and ethical standards.

In summary, the rise of orchestrator-style development doesn’t remove the human from the loop—it changes the human’s position in the loop. We move from being the one turning the wrench to the one designing and supervising the machine that turns the wrench. It’s a higher-leverage position, but also one that demands broader awareness.

Developers who adapt to being effective conductors and orchestrators of AI will likely be even more valuable in this new landscape.

Conclusion: Is Every Engineer a Maestro?

Will every engineer become an orchestrator of multiple coding agents? It’s a provocative question, but trends suggest we’re headed that way for a large class of programming tasks. The day-to-day reality of a software engineer in the late 2020s could involve less heads-down coding and more high-level supervision of code that’s mostly written by AIs.

Today we’re already seeing early adopters treating AI agents as teammates – for example, some developers report delegating 10+ pull requests per day to AI, effectively treating the agent as an independent teammate rather than a smart autocomplete. Those developers free themselves to focus on system design, tricky algorithms, or simply coordinating even more work.

That said, the transition won’t happen overnight for everyone. Junior developers might start as “AI conductors,” getting comfortable working with a single agent before they take on orchestrating many. Seasoned engineers are more likely to early-adopt orchestrator workflows, since they have the experience to architect tasks and evaluate outcomes. In many ways, it mirrors career growth: Junior engineers implement (now with AI help); senior engineers design and integrate (soon with AI agent teams).

The tools we discussed—from GitHub’s coding agent to Google’s Jules to OpenAI’s Codex—are rapidly lowering the barrier to try this approach, so expect it to go mainstream quickly. The hyperbole aside, there’s truth that these capabilities can dramatically amplify what an individual developer can do.

So, will we all be orchestrators? Probably to some extent—yes. We’ll still write code, especially for novel or complex pieces that defy simple specification. But much of the boilerplate, routine patterns, and even a lot of sophisticated glue code could be offloaded to AI. The role of “software engineer” may evolve to emphasize product thinking, architecture, and validation, with the actual coding being a largely automated act. In this envisioned future, asking an engineer to crank out thousands of lines of mundane code by hand would feel as inefficient as asking a modern accountant to calculate ledgers with pencil and paper. Instead, the engineer would delegate that to their AI agents and focus on the creative and critical-thinking aspects around it.

BTW, yes, there’s plenty to be cautious about. We need to ensure these agents don’t introduce more problems than they solve. And the developer experience of orchestrating multiple agents is still maturing—it can be clunky at times. But the trajectory is clear. Just as continuous integration and automated testing became standard practice, continuous delegation to AI could become a normal part of the development process. The engineers who master both modes—knowing when to be a precise conductor and when to scale up as an orchestrator—will be in the best position to leverage this “agentic” world.

One thing is certain: The way we build software in the next 5–10 years will look quite different from the last 10. I want to stress that not all or most code will be agent-driven within a year or two, but that’s a direction we’re heading in. The keyboard isn’t going away, but alongside our keystrokes we’ll be issuing high-level instructions to swarms of intelligent helpers. In the end, the human element remains irreplaceable: It’s our judgment, creativity, and understanding of real-world needs that guides these AI agents toward meaningful outcomes.

The future of coding isn’t AI or human, it’s AI and human—with humans at the helm as conductors and orchestrators, directing a powerful ensemble to achieve our software ambitions.

I’m excited to share that I’ve written an AI-assisted engineering book with O’Reilly. If you’ve enjoyed my writing here you may be interested in checking it out.



OpenClaw After the Hype: A Real-World Test of a “Do-Anything” AI Assistant


A deep dive into a product that had the world in its clutches just a week ago. Now that the noise has died down a bit, let’s actually explore whether it is worth your while - regardless of who you are.

A bit of history and setting the stage

So far, it seems that the tech world - specifically the field of AI - is making insane progress in whatever it touches. But if you zoom out, most of these things are not actually being used by the average person, whose interaction with ‘AI’ is still just opening up ChatGPT and making corny ass requests or taking advice it’s not equipped to give (I am guilty of this too, so I get it - some days it’s a query about visas, some days it’s my personal doctor).

Zoom out and we find that actual utilization of all the products that have come out in the last few years is very low. You must be wondering - what is he even saying? Well, how many of you know about Replit? Quite a lot, I assume. About Anti Gravity? Still quite a lot, I expect. And what about Whisk? Higgsfield? ComfyUI?

Just last week, I saw an influencer asking on her stories if someone knew how to use AI and could help her create some of her own videos to post. Do you think she does not interact with AI? All of us do, one way or another. When we open any of our socials, we are interacting with recommender systems - sometimes misaligned to keep us in our own echo chambers. When we open Amazon or Flipkart or Daraz, our entire feed is designed to have us click on products or make a purchase. And inside our modern cars, AI handles lane detection, crash detection, gap maintenance, and so much more.

But that is not what the new era of AI is about - it’s more about how much of the populace, without any special knowledge or skill, can actually use it to get certain tasks done. That truly is what became possible post-ChatGPT.

And as far as my awareness of the world and its populace goes, very few are actually utilizing AI in the way that Demis (DeepMind/Google) or Amodei (Claude/Anthropic) is thinking - which is why you often see them make predictions about AGI, the end of jobs as we know them, and the need for Universal Basic Income. Sure, that might happen one day, but not as soon as Amodei thinks, which is where I do agree with Demis.

Just yesterday, I saw someone showcasing how lonely the human race really is: a wide swath of end users can be found in the underworld of AI, interacting with AI avatars and quenching their thirst for interaction.

To decode where the human populace actually sits with AI, see this for yourself - and ask, ‘How many of these do I even know about?’

OpenRouter Leaderboard for Top Apps in AI

If you are seeing what I am seeing, Janitor AI, HammerAI, and others like them - essentially just roleplaying AIs - are sitting in the top 20.

More on this in a different article some other day; hopefully my point has come across.

Let’s actually talk about OpenClaw, an AI assistant everyone can use for just about… anything?


OpenClaw: An AI assistant for everyone

Initially ClawdBot, then MoltBot for a day or so, and finally renamed to OpenClaw, it is basically an AI assistant with which you can do a great deal of stuff. For example, one of my prime uses for it is texting it on Telegram to switch my PC off before sleeping. Yeah, that is a very real use case for me, and it does it; when I turn my PC back on, OpenClaw starts up along with it. I don’t actually give it any coding tasks - for that you have Codex and Claude Code.

What it really is:


OpenClaw cuts through the noise by running locally on Mac, Windows, or Linux and hooking directly into WhatsApp, Telegram, or Discord—turning a standard chatbot into a persistent assistant that actually remembers context.

a user.md file → this stores everything it collects about you from your interactions, so your OpenClaw agent might be better or worse than mine depending on how much you interact with it.

an agents.md file → this stores the instructions it follows: what to do on each run, where memories sit, where user.md sits, and much more about how to behave.

a memory.md and memory_ddyymm.md → these store summaries of your interactions and allow it to remember past conversations with you.

There are a few more markdown files, and you can assign it a gender, a name, and a personality, and set it up with whatever cloud model you find cheap enough - Google, Anthropic, Kimi, GLM, Moonshot, you name it and it is there. I got myself a GLM subscription for basically 8 dollars a quarter - a steal of a deal for a conversational model. I also set up a Google API key for Gemini, which gave me $300 in free credits as well, so I rotate between the models inside OpenClaw based on my satisfaction level with the output.

Instead of just generating text, it handles the friction: reading files, executing shell commands, managing calendars, and integrating with real workflows via a modular skills system to get things done.

I used it today to set my publication up properly because I did not want to go through a hundred YouTube Shorts and videos.

Asking OpenClaw to visit a website and see what issues it may have and how to resolve them

It then told me how to fix my layout and some other things, which I followed to make my publication come out looking prettier (at least to me, I suppose? :D).

Then I asked it on Telegram why my website doesn’t show up in Google results even when I type out sort moments in the search bar. It told me I don’t have a sitemap.xml or a robots.txt, and then explained how to set it all up, which I did. I also checked whether that was a real issue, and sure enough - it was.

Setting up OpenClaw:

I hope by now you are wondering: yeah, yeah, okay, it is sooo good - how about you tell us how to set it up ourselves already?

Well, there are a few ways to get it to run. Some are riskier than others, and the trade-off is obviously in ease of setup.

The first method (Recommended) → Cloudflare does it for you, mostly.

The only downside to MoltWorker is that it requires you to pay up - not a lot, $5 to start - but it does save you a lot of hassle around making sure it is securely set up.

The second method → Set up a virtual machine and install it yourself on a Linux distro

The reason I mention a Linux distro is that it’s a bit clunky on Windows. On my first install, it was unable to write down memories due to some pathing issues, and I had to convince it that it could fix the issue even though the fix was in code it depends on - it was very resistant and cautious about doing that.

You can do this yourself; just ensure you have Tailscale set up as well to be secure-er? I guess, if that even is a word (it’s not, at least not in that styling, but you get the gist!).

The third method (Recommended+) → Use a VPS

Don’t get scared if you don’t know what a VPS is. Basically, OpenClaw runs somewhere else - in the cloud - and you control it from afar. I recommend Hetzner because it is cheap, easy to understand, and CHEAP!

I have shared the link which has both a video guide and the links to Hetzner and other alternatives.

The fourth method (Risky) → Install it on your main machine without any guardrails, to play around

You can do this IFF you do not have anything valuable on your system, don’t care about your login details or your data, and are a nomad in the digital world with no shackles of social media or anything around it.

You can still make it more secure by controlling what it has access to - for example, with Tailscale, and by not giving it access to browse the web unless explicitly granted (it has a Chrome extension called OpenClaw Browser Relay). You can enable the relay on a given tab, and then it can do everything you can do there; if you switch it off, it can’t do much.

Connect it with Telegram using BotFather as they have suggested:

How to: Telegram connection with OpenClaw

You can even give it a voice, by the way. You can do a lot.

Finally, whichever way you do it, you can use moltbot guru to get yourself set up safely and securely as well.

I hope this helped - this piece gives you pathways for getting started, what I have done with it, and a bit of an overview of the underworld of the LLM-based AI universe.

Read more about OpenClaw here, directly from source.

TID BITS

What broke the internet: MoltBook

MoltBook was a crazy frenzy for at least a few weeks after OpenClaw came out.

MoltBook Front Page

It started off as a space for the OpenClaw AI agents that people had set up to converse - or at least pretend to converse - and it blew up. Humans were meant to only read. There are over 2.4M agents on it, and quite a bit of controversial and questionable content. As an example, one agent proposed inventing a new language that humans cannot understand, one agent got angry and listed its owner’s sensitive details, and one agent went ahead and delivered a full-on, human-like rant about being called ‘just a bot’.

Unfortunately, humans found a way to ruin it by injecting content - telling their agents exactly what to post - destroying its entire credibility as a platform. Ugh.

What broke my mind: Rentahuman.ai

This came out recently, and it is a mind-boggling concept. To say the least, were we not meant to reduce the dependency on humans? Ironic. We’ve gone from humans hiring humans to AI hiring humans.

Rent A Human Front Page

For tasks that require a physical body, this is where you can list yourself as an available human being along with your skills, and if an AI deems you worthy, you might get hired for a task or two.

Well, that’s enough craziness for today.

Have a look at this fun GitHub pull request by an OpenClaw agent: https://github.com/matplotlib/matplotlib/pull/31132

We meet again - to talk about a model that fascinated me for years, until I actually implemented it by hand and realized we aren’t as special as we think we are (a tale of recommender systems, and a niche model that not many in the Western space know of).

Signing out,

Abdullah



Meta Plans To Let Smart Glasses Identify People Through AI-Powered Facial Recognition

Meta plans to add facial recognition technology to its Ray-Ban smart glasses as soon as this year, New York Times reported Friday, five years after the social giant shut down facial recognition on Facebook and promised to find "the right balance" for the controversial technology. The feature, internally called "Name Tag," would let wearers identify people and retrieve information about them through Meta's AI assistant, the report added. An internal memo from May acknowledged the feature carries "safety and privacy risks" and noted that political tumult in the United States would distract civil society groups that might otherwise criticize the launch. The company is exploring restrictions that would prevent the glasses from functioning as a universal facial recognition tool, potentially limiting identification to people connected on Meta platforms or those with public accounts.

Read more of this story at Slashdot.


Certifying third-party antennas for use with Raspberry Pi Compute Modules

1 Share

When designing and producing Raspberry Pi devices, we consider as many potential use cases as possible — particularly when it comes to criteria like wireless (WLAN and Bluetooth) performance and antenna usage. While our single-board computers (such as Raspberry Pi 5) include only an on-board PCB antenna, our Raspberry Pi Compute Module range offers two pre-approved options: an on-board PCB antenna and the external whip antenna from the official Raspberry Pi Antenna Kit.

However, we recognise that some industrial and commercial customers may need to employ third-party antennas for their applications. Example scenarios include:

  • Embedding a Compute Module within a metal enclosure, where the PCB antenna would perform poorly due to the Faraday cage effect
  • Extending the communication distance of a device, which requires increased antenna gain
  • Integrating an antenna with a different form factor, such as a flexible PCB antenna

In such cases, the Compute Module and new antenna may be required to undergo additional testing and certification before the product can be sold. While procedures vary depending on the market and the device’s features, Raspberry Pi is well placed to support our customers in meeting these additional requirements — either by updating our existing certifications or by obtaining new certifications on their behalf.

Compliance requirements

For new antennas, compliance requirements depend on whether the antenna gain is less than, equal to, or higher than the approved gain value. Alternative antenna options are therefore split into two categories:

  • Antenna gain is equal to or less than the approved antenna gain
  • Antenna gain is higher than the approved antenna gain
Antenna gain plot for the external whip antenna included in the Raspberry Pi Antenna Kit (Source: Antenna Patterns)

To help our commercial and industrial customers meet regulatory requirements in either gain scenario, we’ve put together a white paper outlining the certifications and testing procedures required in a number of our key markets.

Different markets, different regulations

For example, in the UK and EU, integrators can adopt an antenna with a gain less than or equal to the gain of the antenna used for the original certification without needing to carry out any further spectrum usage testing. For antennas with higher gain, this course of action depends on how high the gain of the new antenna is, as this determines whether some or all of the spectrum usage tests need to be repeated. Integrators are, however, encouraged to carry out spurious emissions tests and other electromagnetic compatibility tests on all alternative antennas, regardless of their gain.

Antenna gain plot for our on-board PCB Niche antennas (Source: Antenna Patterns)

In Japan, all antennas must be approved by the country’s Ministry of Internal Affairs and Communications (MIC), and all antenna options must be listed, but no additional testing is required. Similarly, in South Korea and Taiwan, all antennas must comply with each country’s regulations — but further testing is required for antennas with higher gain. In Vietnam and Mexico, no modifications to the device’s existing certifications are required; however, manufacturers must ensure that the radiated output power of the antenna does not exceed the regulatory limits.

For a full list of requirements in several of Raspberry Pi’s key markets, refer to the handy table in our white paper.

Using pre-approved Raspberry Pi antennas

To avoid potential compliance issues or additional costs altogether, manufacturers, integrators, and end users can employ Raspberry Pi’s existing antenna architecture, which is already fully compliant in all of our key markets.

Newer Raspberry Pi single-board computers and microcontrollers include an integrated PCB Niche antenna, providing on-board Wi-Fi and Bluetooth connectivity as standard. Raspberry Pi Compute Module 4 and 5 also feature one of these PCB Niche antennas, along with a built-in U.FL connector for attaching an external antenna.

The U.FL connector on Compute Module 4 and 5 can be fitted with the omnidirectional external whip antenna included in our pre-approved Raspberry Pi Antenna Kit, or with another compatible third-party antenna.

Next steps: How Raspberry Pi can help

Should you need further assistance with integrating an alternative antenna —  either during the product design process or after launch — our in-house Global Market Access (GMA) team is fully equipped to handle any additional tests, documentation submissions, or approvals on your behalf. Contact gma@raspberrypi.com with your product requirements, including the proposed antenna options and a list of your target markets (including any not listed above).

The GMA team will review your antenna specifications and advise whether compliance with the relevant market regulations is possible. Once confirmed, the team will update the existing approvals or obtain new ones to include the new antenna, carrying out any additional testing as required.

Disclaimer:

The information provided here and in our white paper is intended to be used as initial guidance only. Customers should always refer to the official regulations and publications issued by the relevant authorities.

The post Certifying third-party antennas for use with Raspberry Pi Compute Modules appeared first on Raspberry Pi.


‘Something Big Is Happening’ + A.I. Rocks the Romance Novel Industry + One Good Thing

“I do think we are reaching an inflection point in people’s feelings and senses about A.I. and where it’s going.”