Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Valve Says Steam Machine Isn't a Console—but It Is

Consoles are good, actually.

Angular v21 Adds Signal Forms, New MCP Server


Angular v21 released this week, with a slew of new and experimental offerings, including the launch of Signal Forms and the release of the new Angular MCP Server.

Signal Forms is an experimental library that uses Signals to create scalable, composable and reactive forms, according to Angular team members Jens Kuehlers and Mark Thompson. A signal is a reactive primitive used for state management; Signal Forms lets you manage form state by building on that reactive foundation.

“With Signal Forms, the Form model is defined by a signal that automatically syncs with the form fields bound to it, allowing for an ergonomic developer experience with full type-safety for accessing form fields,” Kuehlers and Thompson explained. “Powerful and centralized schema-based validation logic is built in.”

The Signal Forms API is still experimental and will continue to iterate based on developer feedback.
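
Based on the team’s description, a minimal Signal Forms component could look something like the sketch below. The form, Control and required names come from the experimental @angular/forms/signals entry point as currently documented, so treat the exact API surface as subject to change.

import { Component, signal } from '@angular/core';
// Experimental entry point; these names may shift as the API iterates.
import { form, Control, required } from '@angular/forms/signals';

@Component({
  selector: 'app-signup',
  imports: [Control],
  template: `
    <!-- The input binds to a field derived from the model signal. -->
    <input type="email" [control]="signupForm.email" />
  `,
})
export class SignupComponent {
  // The form model is a plain signal that stays in sync with bound fields.
  protected readonly model = signal({ email: '' });

  // Validation is declared centrally, schema-style, alongside the form.
  protected readonly signupForm = form(this.model, (path) => {
    required(path.email);
  });
}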

This release also includes the Angular MCP Server, which was introduced in version 20.2 and is now stable. It adds seven stable and experimental tools so that LLMs can use new Angular features, the team wrote.

The MCP Server incorporates best practices so that your favorite AI tool can write better Angular code. It includes a documentation search tool that lets you query Angular’s documentation, and a migration tool that can analyze code and provide a plan for migrating your application to OnPush and zoneless change detection. It also offers an experimental tool that performs code migrations using existing schematics, the team wrote.

Developers can also use the AI Tutor tool, which launches an interactive tutor to help them learn Angular and get feedback on best practices.

“With the MCP server you are able to bridge the knowledge cutoff issue — your LLM was trained with Angular knowledge as of a specific date, but using the MCP server, it can learn to use even brand new features such as Signal Forms and Angular Aria — you just need to ask your agent to find examples and use them,” the team wrote.

In support of accessibility, Angular also released Angular Aria in Developer Preview. It, too, is Signals-based, providing headless components built with accessibility as a priority.

There’s a set of eight UI patterns encompassing 13 components that are completely unstyled and can be customized with your own styles, the team wrote. The eight patterns are:

  1. Accordion
  2. Combobox
  3. Grid
  4. Listbox
  5. Menu
  6. Tabs
  7. Toolbar
  8. Tree

Also new: The Angular CLI integrates Vitest as the new default test runner, with Vitest support now production-ready, the team said. While Vitest is the default test runner for new projects, Karma and Jasmine are still fully supported by the Angular team, so developers don’t have to migrate … yet. The team has, however, decided to deprecate the experimental support for Web Test Runner and Jest, which will be removed completely in version 22.
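
In practice, a Vitest spec for an Angular component looks much like its Jasmine equivalent. Here’s a minimal sketch, assuming a hypothetical standalone AppComponent:

import { describe, expect, it } from 'vitest';
import { TestBed } from '@angular/core/testing';
import { AppComponent } from './app.component'; // hypothetical component under test

describe('AppComponent', () => {
  it('creates the component', async () => {
    // TestBed works exactly as it does under Karma/Jasmine.
    await TestBed.configureTestingModule({ imports: [AppComponent] }).compileComponents();
    const fixture = TestBed.createComponent(AppComponent);
    expect(fixture.componentInstance).toBeTruthy();
  });
});

The only Vitest-specific piece is the import line; the TestBed usage is unchanged, which is part of why migration isn’t urgent.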

Finally, new Angular applications no longer include zone.js by default. Zone.js is a library that patches browser APIs to keep track of changes in applications, the team explained.

“This enabled the ‘magical’ experience where templates automatically change as the user performs actions in your application, however zone.js has performance drawbacks, especially for high-complexity applications,” they wrote. “Through our experience with applications in Google we became increasingly more confident that new Angular applications work best without zone.js.”
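
For existing applications, opting out of zone.js is an explicit choice. A minimal sketch, assuming the provideZonelessChangeDetection API that Angular stabilized in v20 and a hypothetical root AppComponent:

import { provideZonelessChangeDetection } from '@angular/core';
import { bootstrapApplication } from '@angular/platform-browser';
import { AppComponent } from './app/app.component'; // hypothetical root component

// Bootstrap without zone.js: change detection is driven by signals and
// explicit notifications instead of patched browser APIs.
bootstrapApplication(AppComponent, {
  providers: [provideZonelessChangeDetection()],
});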

Svelte Adds MCP Server

Svelte made its MCP server available this month, with its own section of the docs site and GitHub repo.

“It should replace the copy/pasting of the Svelte docs that’s often required to get LLMs to write valid Svelte 5 code and can provide suggestions on the generated code with static analysis,” wrote designer Dani Sandoval on the Svelte blog.

Sandoval’s blog post also outlines additional changes in this month’s release.

Postman Acquires Tool for Automating Generation of SDKs

API company Postman announced its acquisition of liblab late last week. Liblab is a tool that lets developers automate the generation of Software Development Kits (SDKs).

“With this acquisition, we are expanding our platform to cover the entire API lifecycle and accelerating our customers’ ability to build AI-ready, agent-enabled APIs,” wrote Abhinav Asthana, the company’s co-founder and CEO.

Liblab brings to Postman:

  • An SDK Code Generator that can auto-generate custom SDKs for APIs;
  • An MCP Generator that seamlessly integrates with API and AI tooling; and
  • Novel API and SDK documentation that can sync with every API change.

“Their ‘SDKs-as-a-service’ model enables developers to generate high-quality, customizable SDKs in seconds, with code that mirrors the look and feel of expert-written libraries in every major language,” Asthana wrote. “These products make API consumption effortless and allow developers to build faster, a vision that Postman shares.”

Liblab’s technology will be integrated into Postman’s platform.

Postman plans to invest in liblab’s core SDK generation engine and work with their team to create a single, seamless workflow from API design to API consumption, he added.

Avalonia Brings .NET MAUI to Linux, Browser

The cross-platform development framework .NET MAUI is coming to Linux and the browser, according to Avalonia, which provides a new backend for .NET MAUI.

Avalonia showed a real MAUI application running on WebAssembly, rendered through Avalonia, an open source, cross-platform UI framework for .NET developers. Avalonia CEO Mike James writes that it’s “an early build with rough edges,” but it proves that MAUI can now run on every major desktop OS and in the browser.

Avalonia’s MAUI backend keeps the MAUI codebase but replaces the rendering layer with Avalonia.

“The goal is straightforward: Take your existing MAUI applications and extend them to additional platforms, while enhancing desktop performance along the way,” James wrote.

Avalonia already runs on embedded Linux devices such as the Raspberry Pi, and .NET MAUI apps can now leverage that backend to run as first-class desktop apps on distributions such as Ubuntu, Debian and Fedora.

Avalonia also demoed a MAUI application that runs on WebAssembly in the browser and is rendered by Avalonia without native dependencies on the client.

James explained why Avalonia is investing in MAUI.

“The honest answer is that we care about .NET client developers first, and about which on ramp they use second. Many teams have already chosen MAUI, which they like and want more from. If we can provide them with Linux and browser support, along with improved desktop performance, without requiring a rewrite, that aligns with our mission to delight developers and solve complex problems.”

For MAUI developers, he wrote that Avalonia provides:

• Hardware accelerated rendering on every platform;
• A consistent layout and styling system;
• Smooth animations at high refresh rates;
• Custom rendering and visual effects capabilities;
• Broad platform coverage; and
• A fully supported platform that is receiving significant investment.

“By building MAUI on top of Avalonia, you get a predictable, drawn UI foundation and an expanded set of platforms, without having to throw away your existing codebase,” James wrote. “You do not need to abandon MAUI to get Linux and the web. You can bring MAUI with you, while also improving the experience on Windows and macOS.”

The post Angular v21 Adds Signal Forms, New MCP Server appeared first on The New Stack.


Hands-On With Antigravity: Google’s Newest AI Coding Experiment


When Google launched a new agentic development platform called Antigravity this week, the first question I had was: “What about Jules and Gemini CLI?” But remember, this is Google; they throw a lot of mud against the wall just to see what sticks (very little is the answer, as their graveyard attests). So Google considers Antigravity and Jules to be experiments that examine the same technology from different angles.

Let’s now look at Antigravity, which appears to be ready for “public preview” on all platforms, not just the Mac:

Like Verdent, it isn’t a CLI for the terminal — it’s an IDE app. So I downloaded it for my MacBook and slid it into the Applications folder.

First Impressions and Setup Process

Sensibly, it asks the user to select a theme during setup. Most people set the theme once (dark or light) and rarely touch the option again:

This next setup question is quite intriguing:

This feels like it could move toward a parallel runner model, with little human intervention (a parallel runner lets an agentic task run in the background while you start another; in other words, each task runs on an isolated branch). Or perhaps Antigravity will be more like document-driven solutions such as Kiro. As you can guess, this post is just to set the scene for developers, not to deeply examine what is, after all, an experiment. So I’ll go with the “assisted development” option, which feels like more conventional territory.

After signing in with Google (using its internal 2FA), Antigravity asks you to open a folder or to clone one. To work like a parallel runner on isolated branches, it would need a git clone. It also finds likely-looking projects in your user folder.

As you see, I chose Claude Sonnet from the currently available models:

A Familiar IDE With AI-Powered Suggestions

As usual for releases such as this, you get a bunch of free tokens with the proviso that abuses will be curtailed. When it has finished initialising (scanning the project, I assume), we get the standard layout:

Does this look familiar? Yes, it does seem to be some sort of VS Code fork. I’ve written about the dangers of these — but really I don’t know why Google has done this.

On the right I chose “planning mode”; the alternative “fast” mode “is helpful for when speed is an important factor, and the task is simple enough that there is low worry of worse quality” — which is slightly comical, but we know what the writer means.

As far as I can tell, “Agent manager” is closer to the parallel runner agentic CLI mode, since you don’t interact with the editor. The following diagram refers to this:

Having said that, the language used here is rather strange. The text below on the left confirms the parallel tasks model. But the text at the top mentions “Deep research,” which is an odd term to use for software development. Plus, the expectation now is that any task is “background” unless the user has asked for constant intervention. This does not feel like it was written in the same spirit as recent “agentic” code editor releases.

Looking at the Editor window first (right side of diagram), I get good quick code completion and suggestions. I select one of those awkward bits of project code that can be improved — a bounds checker.

// Expands the tracked map bounds to include the given sector.
public void CalculateMapBound(MapSector sec) {
  Vector2 topleft = sec.GetAbsoluteTopLeft();
  Vector2 size = sec.GetSize();
  TagDebug.Log($"Topleft and size for sector {sec.Name} {topleft} {size}");

  // Each bound only ever grows outward to enclose the new sector.
  if (topleft.x < leftbound) leftbound = topleft.x;
  if (topleft.x + size.x > rightbound) rightbound = topleft.x + size.x;
  if (topleft.y > topbound) topbound = topleft.y;
  if (topleft.y - size.y < bottombound) bottombound = topleft.y - size.y;
  TagDebug.Log($"Calculate bound for sector {sec.Name} {leftbound} {rightbound} {topbound} {bottombound}");
}


After selecting it, I’m given the chance to add it to the query, so we have:

It suggested I use the Min/Max functions. Given I am not pushed for speed, this sounds right. The text in the chat box stated the changes and benefits, and I could accept the changes inline, next to the code.
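
The gist of that suggestion, sketched below in TypeScript rather than the project’s C# (the field names mirror the snippet above, with Math.min/Math.max standing in for the Min/Max helpers), is to collapse each compare-and-assign pair into a single call:

interface Vector2 { x: number; y: number; }

// Bounds fields, mirroring the ones the original method mutates.
let leftbound = Infinity, rightbound = -Infinity;
let topbound = -Infinity, bottombound = Infinity;

function calculateMapBound(topleft: Vector2, size: Vector2): void {
  // Each branch from the original becomes one Min/Max expression, which
  // states the intent (expand the bounds) rather than the mechanics.
  leftbound = Math.min(leftbound, topleft.x);
  rightbound = Math.max(rightbound, topleft.x + size.x);
  topbound = Math.max(topbound, topleft.y);
  bottombound = Math.min(bottombound, topleft.y - size.y);
}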

Of course, I’m not brave (or foolish) enough to try transplanting my whole build environment into an experimental app. But I will keep the changes and check them later.

Exploring Antigravity’s Agentic Capabilities

So now let’s try the agentic side. Unfortunately, it doesn’t seem to work with independent branches (or doesn’t expect to work this way), so you can probably only work within the same workspace folder:

I think a “conversation” maps to a task. I’ll ask it to improve another method that works with bounds. But it isn’t really designed to keep the task manager open, since it continually closes it to make space for the chat box and (for example) the changes view:

One aspect I did like was the following statement:

This proves it has contextual understanding of its last conversation and the resultant changes. And it is correct — Min/Max is neater (though I would expect it to be far slower).

So it looks like I should retract the idea that this was designed to work on parallel tasks in the same project — it clearly wasn’t. Perhaps Google is waiting for responses to see how to react.

Conclusion: An Experiment With an Unclear Direction

At this stage, I decided to stop, as I don’t quite know what this product is truly going for. It is an app, but it appears to be partly a VS Code clone. It offers an Agent manager, but doesn’t really offer the parallel tasks I’d expect. And yet it only manages to do what Gemini CLI does, or for that matter, what Jules does.

I recognise that Google creates projects from independent teams, and sometimes there are great ideas (I still remember Google Wave) that Google as a company goes on to ignore. But often they are misaligned with current trends. I’m going to wait until Google works out what it wants to do with this before giving Antigravity a fresh look later in its evolution.

The post Hands-On With Antigravity: Google’s Newest AI Coding Experiment appeared first on The New Stack.


Aspireify an existing app

Learn how to add Aspire orchestration to your existing application using aspire init.

Is Spec-Driven Development Key for Infrastructure Automation?


Since GitHub Universe and the announcement of GitHub Spec Kit, spec-driven development (SDD) has taken the dev world by storm. The premise is compelling: Give AI agents structured context through markdown specs before they write code, and you’ll have it all. An end to hallucinations about APIs, rushed coding and low-quality outcomes.

With SDD, AI agents will work more like human developers who receive product requirements documents (PRDs), break down tasks and execute systematically.

The concept formalizes what development teams have done for years. A product manager writes requirements. Developers digest the PRD or specifications, break the work into tasks and start coding. SDD simply structures this workflow for the AI era, turning natural language specifications into the context large language models (LLMs) need to generate meaningful code.

As someone who lives and breathes DevOps and platform engineering, I found myself asking the obvious question: What does this mean for infrastructure work? Should we be racing to adopt SDD for our Terraform modules and Kubernetes configurations?

Infrastructure Code Isn’t Application Code

Infrastructure code looks like code, but it behaves very differently from application code.

Look at any Terraform file, Helm chart or CloudFormation template. What do you see? Specifications. Infrastructure as Code (IaC) is already spec-driven by design. It is declarative. We describe the desired state. We say “I want a database with these properties,” not “Execute these commands to create a database.”

But here’s where things get interesting.

  • Application code favors creativity. Give 10 developers the same feature request, and you’ll get 10 different implementations. Each might be valid, elegant in its own way. The goal is to solve business problems with novel approaches, optimize for user experience or find clever performance improvements. There’s value in that diversity of solutions.
  • Infrastructure code favors reproducibility. When I spin up infrastructure in us-east-1, eu-west-1 and ap-southeast-1, I need identical configurations. Same networking setup, same security groups, same database configurations. Standardization means predictable costs, interchangeable parts and reliable disaster recovery.

This distinction matters for SDD because AI agents thrive on creative problem-solving but struggle with strict reproducibility. We don’t want an AI agent getting creative with our virtual private cloud (VPC) configuration. We want the exact same blueprint deployed perfectly every time.

More importantly, infrastructure code rarely flows from spec to implementation. Consider how infrastructure actually evolves. FinOps adjusts instance types to optimize costs. Security patches a vulnerability directly in production. Someone scales a database through the console during an incident. Your Terraform still describes the original state, but reality has moved on. This is drift: when your actual cloud resources no longer match your IaC. Infrastructure teams work backward, constantly updating specifications to match reality.

SDD assumes a forward flow from requirements to code. But that’s not the way platform teams work. We don’t need AI to write more Terraform from specs; we need something else entirely.

The Real Automation Gap Is Deployment Orchestration

SDD will influence infrastructure work, but not always in ways platform teams will celebrate.

Today, developers are already 10 times more productive with AI copilots than they were just a few years ago. SDD promises to push this even further: complete modules generated from specifications, entire features materialized from markdown plans. The volume of application code will explode.

All this code needs somewhere to run. Every feature needs infrastructure. Every microservice needs its database, message queue and networking. The acceleration in code production creates unprecedented pressure on deployment velocity.

Yet deployment remains stubbornly manual. While developers get AI assistants that turn specs into code, platform engineers still coordinate complex deployments by hand. We can generate a complete microservice in a few hours, but spend days figuring out how to safely set up and deploy its infrastructure.

Why AI Agents Can’t Orchestrate Infrastructure Deployment

The barriers to AI-driven deployment are structural problems in the way infrastructure code is organized today.

  • Terraliths: Monolithic nightmares. We’ve created massive Terraform files where specifications, values and logic are tangled together. A single file might define networking, databases and application configuration all mixed up. Small changes cascade unpredictably. There’s no way to understand blast radius when everything touches everything else.
  • Heterogeneous tooling. A single environment uses Terraform for infrastructure, Helm for Kubernetes, as well as Python scripts. Each with different inputs, outputs and state management. Orchestrating across them requires understanding hidden dependencies that aren’t documented anywhere.
  • Working backward from drift. Infrastructure teams constantly reconcile drift, updating code to match reality. AI agents would first need to map what actually exists across all clouds, regions and accounts before even attempting to update anything.

What we really need is to restructure infrastructure to be AI-ready.

Blueprint-Driven Deployment: Infrastructure for the AI Era

The path forward is clear. We need to transform the way infrastructure is packaged, deployed and managed. Here’s how to make infrastructure AI-ready:

  • Transform every piece of infrastructure. Turn every Terraform module, Helm chart and Python script into artifacts with normalized inputs and outputs (see the sketch after this list). These artifacts become reusable building blocks that assemble into blueprints.
  • Assemble artifacts into clear, versioned blueprints with well-defined boundaries. A database blueprint creates databases. A networking blueprint handles VPCs and subnets. No mixing, no confusion.
  • Publish them to a catalog. Decide which ones to surface to AI agents so they know what’s safe and allowed to deploy.
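
To make those normalized inputs and outputs concrete, here is a minimal sketch of what an artifact-and-blueprint contract could look like; every type and field name is hypothetical:

// Hypothetical contract: every Terraform module, Helm chart or Python
// script gets wrapped behind the same normalized, tool-agnostic interface.
interface ArtifactSpec {
  name: string;
  version: string;                 // artifacts are versioned building blocks
  inputs: Record<string, string>;  // normalized inputs, whatever tool produced them
  outputs: string[];               // names of the values the artifact exports
}

// A blueprint composes artifacts within one well-defined boundary.
interface Blueprint {
  name: string;              // e.g. "regional-infrastructure"
  version: string;           // e.g. "2.3.0"
  artifacts: ArtifactSpec[];
  dependsOn: string[];       // other blueprints this one requires
}

The point is the uniformity: the orchestration layer sees the same shape whether an artifact wraps Terraform, Helm or a script.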


This approach solves the structural problems we mentioned earlier:

  • Terraliths get decomposed into artifacts and blueprints. That 10,000-line monolith becomes a collection of focused components, each with its own life cycle and clear interfaces. Changes are scoped. Blast radius is contained. AI works with the blueprints, not the tangled code.
  • Heterogeneous tooling becomes unified. Terraform, Helm, Python, etc., all become artifacts with standard inputs and outputs through normalization. The orchestration layer doesn’t care what created the artifact.

With that in place, you can tackle the drift problem:

  1. Regularly map what exists: Discover actual cloud state across all accounts and regions into a cloud graph.
  2. Extract patterns from production and create/update blueprints.

The cycle becomes sustainable: Reality informs blueprints, blueprints guide deployment and deployment is tracked in the graph.

Now an AI agent can actually work:

Request: “We need to expand to Asia-Pacific for lower latency.”

The agent queries the blueprint catalog and cloud graph, and finds the standard regional infrastructure blueprint already running in the United States and Europe. It understands dependencies from the graph: networking and databases come before applications.

“I’ll deploy regional infrastructure blueprint v2.3 to ap-southeast-1. Based on the dependency graph, this takes 12 minutes. No existing resources affected.”

One approval. The orchestrator handles the rest.
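
A rough sketch of that planning step, with every interface and function name invented for illustration:

// Invented stand-ins for the blueprint catalog and the cloud graph.
interface CatalogEntry { name: string; version: string; }
interface BlueprintCatalog { find(name: string): CatalogEntry | undefined; }
interface CloudGraph {
  regionsRunning(blueprint: string): string[];   // e.g. ["us-east-1", "eu-west-1"]
  dependencyOrder(blueprint: string): string[];  // e.g. networking, databases, apps
}

// The agent looks up an approved blueprint, checks where it already runs,
// and derives an ordered plan; a human approves before anything deploys.
function planExpansion(catalog: BlueprintCatalog, graph: CloudGraph, region: string) {
  const blueprint = catalog.find('regional-infrastructure');
  if (!blueprint) throw new Error('No approved blueprint in the catalog');

  return {
    deploy: `${blueprint.name}@${blueprint.version}`,
    target: region,
    alreadyRunningIn: graph.regionsRunning(blueprint.name),
    order: graph.dependencyOrder(blueprint.name),
  };
}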

Infrastructure Is Different From Application Code

Spec-driven development makes sense when you’re building features. It doesn’t when you’re building the platform those features run on.

Infrastructure needs:

  • Blueprints, not specs: Reusable, versioned patterns you can roll out across regions and environments.
  • Orchestration, not just code generation: Coordinated, multistep workflows across infrastructure, configuration and apps.
  • Clear boundaries, not entangled modules: Well-defined scopes per blueprint and artifact so changes stay contained.
  • Cloud graphs, not code dumps: A live view of what actually runs in every account, region and cloud.

The future isn’t AI agents generating Terraform. It’s AI agents executing deployments safely by using prevalidated blueprints.

This is how AI finally enters the deployment game.

The post Is Spec-Driven Development Key for Infrastructure Automation? appeared first on The New Stack.


Taking Your Observability Strategy to the Next Level


Observability strategy is a tricky concept. Everyone wants to talk about it, but nobody can agree on how they’re talking about it. Consequently, we end up talking past each other without even realizing it.

I like to think of this as a multifaceted spectrum, built up of trade-offs; you pick somewhere on a spectrum in a few different areas, then you figure out how to optimize the trade-offs, and slowly your strategy emerges from there.

That works up to a point, but then you’ll hit a strategy cliff, usually around three years of implementation (or ~250 engineers). That strategy cliff is the point at which a single strategy breaks down and stops being sufficient.

It turns out that while you can split up a company into pre-product market fit (pre-PMF), small-to-medium business (SMB) and enterprise, you’re probably going to have all three of these paradigms existing in your company at the same time, regardless of company size. That can be quite confusing to work with.

But rather than splitting up into a million separate strategies, building a meta-strategy around observability can make this much easier to deal with. Let’s walk through how that can happen and what that looks like.

Strategy? What Strategy?

Smaller companies tend to not have an observability strategy, and larger companies tend to want to improve their existing one. That would be one of the spectrums I mentioned earlier.

I find this a lot easier to explain in story form, which is why user journeys are so valuable to me. So let’s talk about a few company personas that we’ll sketch out briefly.

The Pre-PMF Seed Startup

Picture this: You’ve raised a seed round. You have 12 months of runway. You have plans about what users need. You also have plans about your plans. You might even think you know who your users could be. Better yet, you’ve even built something.

Now what? Well … are users using it? Do they like it? Sure, you can email all of them, but there’s a difference between what they say and how they behave, and you’re going to need the right data to see user behavior. Your observability is probably going to look a lot more like user tracking and marketing funnel analysis than “hyper-optimize this endpoint to be 2% more efficient.”

And you know what? That’s fine! User journeys are super valuable here, but in a weird way: You don’t know the user journey, and you want to find it out. Typically, you’re only measuring them if you have a good idea of what they are, but pre-PMF? It could be anything.

If you’re thinking that this sounds like an “unknown unknown”, you know, the type of thing observability is supposed to be amazing at approaching, you’re not wrong.

Translating that into the business impact, what really matters is whether you can figure out what that product market fit is, and how it relates to your users and your revenue streams. Rather than observability being an afterthought, leaning into it and integrating it with your user journeys can be the difference between thinking you have product-market fit and being able to observe when you hit product-market fit.

Observability Check for My SMB Era

You found product-market fit. Revenue is growing. You have actual customers who will be angry if things break. Suddenly, you have problems you didn’t have six months ago. Congratulations! Condolences! Everything in between!

This is the pain zone for observability: Your main goal is to keep things from falling over. Everything is going to change every six months, and nothing is going to make sense. You’re going to be building systems faster than you can keep track of them, and you’re going to be duct taping stuff together so much that your eyes will glaze over.

Those user journeys of yours are going to start landing in an uncomfortable middle; you’re still not quite sure about them, but a lot of load-bearing revenue now lies on the personas you’ve built out. Unfortunately, the assurances you get with enterprise scale, enterprise purchase plans, tooling integration and conveniences of large companies with unlimited budgets are still going to seem so far away.

You still want to do more with less, and you want to focus on tool consolidation, but you’re in the uncomfortable spot of simultaneously knowing you have to spend money to make money and saying, “Haha, but what money, right?” Likewise, with your users, you definitely have them, but the confidence you have in how those user demographics have been split is probably going to feel like it varies from day to day.

If only you had a magic wand that you could wave, which would let you know which efforts to focus on. If only you could know those user journeys were driving revenue. If only you had a way to divine the levers of impact that let you maximize operational return on investment (ROI). If only you had … observability. But wait, not that observability. This observability looks way different than the observability you implemented two years ago.

Well, observability actually doesn’t look all that different from what you already have, does it?

You’re not exactly losing any of the capabilities you needed pre-PMF; you just have more things to care about and different trade-offs to make. Which means that your observability strategy can grow seamlessly from pre-PMF to the SMB era, assuming, of course, that you focused on transferable skills and technology choices rather than punting operational knowledge until later.

So, even as you’ll likely focus less on adding new user journey experiments, because discovery is less important, they’re still going to be an essential part of your observability strategy. Reliability and cost efforts need similar things and pair well with user journeys, especially if you can correlate that data together.

The O11y-Ship Enterprise

Congratulations! You’ve made it: You can officially say “at our scale” whenever a vendor starts hitting you up. It’s guaranteed to make your colleagues roll their eyes, and you’ll elicit a nervous chuckle from the vendor. Are you running a tool? Pffh, you’re running every tool, and many of them more than once. Your vendor might know more about how your company uses the tool than you do, and you’re the people who bought it in the first place. Those user journeys? Set in stone, baked, ossified; entire departments exist because those user demographics are the shape they are. You could set your clock by the ability to detect user behavior that you’ve painstakingly curated.

Something interesting happens in this environment. Money changes shape. Rather than trying to reduce cost to an absolute minimum on a case-by-case basis, you start becoming very willing to spend large amounts of money to buy systemic improvements.

And when you’re in that stage, it does mean that tooling integration and catering to people outside the expert domain of tool usage become incredibly important. In addition, your faith in those user journeys and personas has grown to the point where you’re likely to dismiss most contrary experiences and user reports as outliers rather than as signals that your market research was flawed.

After all, in a startup, you can assume that anyone using a tool is probably interested in using that tool — you often don’t have a lot of pressure forcing you to use tools against your will in smaller companies. As for users, you can probably assume you don’t know them, or even if you do… The rule of startups is that “the only constant is change.”

Enterprises, on the other hand, experience mandatory tool consumption on a daily basis. As evidence of this, look no further than that ticketing system that has 37 mandatory fields to create the zillions of reporting charts for management needs. They also have a stickiness of users and a predictability of user behavior that makes it possible to build a foundational enterprise on top of that data. It’s not just a moat; it’s the life of the business.

Projects like Weaver for OpenTelemetry become super important, whereas you’ll likely not care about them as much in smaller companies. Internal platforms, marketplace offerings and other consumption models of observability start making sense as well.

User journeys go from a discovery mechanism, to a profit mechanism, to a basic essential of life. The same goes for blueprints: They sound silly until you need to get tens of thousands of engineers to wrap their heads around a thing they don’t really care a whole lot about. All of a sudden, they become astonishingly necessary.

Observability as a Meta-Strategy

Now, looking at these three sorts of company shapes, one thing that you’ll readily notice is that the main thing they have in common is that they don’t seem to have anything in common. Even if user journeys or observability or “understanding the unknown unknowns” feels constant, the reasons and motivations behind that are going to change. So you might be wondering to yourself how exactly you’re supposed to do this magical meta-strategy thing and solve all your observability problems with one strategy.

Rather than treating strategy as a static document, or even as a living one, have a strategy around your strategy and build that into observability. I like to think about the S-curve of technology adoption. Imagine an observability initiative, like “use OpenTelemetry” (another example could be “implement user journeys,” whatever that means for you). Every initiative will start slowly, with an exploration phase, before it starts to click. Then it will gain traction and accelerate through a growth phase as it takes hold. Finally, it plateaus as the initiative hits saturation in the company.

Now, imagine that every single observability initiative in your company has its own S-curve. Adding user journey tracking? That’s a curve. Implementing sampling? Another curve. Migrating to OpenTelemetry? Yet another curve. And these curves overlap, intersect and sometimes contradict each other.

This means that any company is going to find itself with dozens of these S-curves, all overlapping, and you’ll drive yourself up the wall trying to build a meta-strategy that handles every combination of them, just to understand where your company is and what you should do.

So instead, I like to think of transitions between strategies. Focus on identifying what stage of a strategy you’re at and build your strategy around understanding how to transition from one stage to another. For example, how do we transition an initiative (or product) from pre-market fit to market fit? How do we transition from SMB to enterprise? Or how do we transition a project from “Oh, I need to use OpenTelemetry” to “Now we’re ready to implement user journeys.” Is that the order you go in? Do you do it in a different order? Why?

If you think of the strategy as understanding inflection points rather than mapping out the combinations, you’ll keep yourself out of the trap of needing to write dozens of strategies to cover every random combination of scenarios. This becomes particularly important when you realize that you’re going to end up with dozens, if not hundreds, of these S-curves all overlapping.

It’s going to be impossible to predict when you’ll run into certain combinations of strategy inputs, so rather than predicting the future, adapt to the present. After all, building that operational adaptability is one of the strong points of observability. Your ability to do this in your strategy is one indication that your observability practice is maturing.

Strategy Is Embracing the Tangled Layers

There’s a huge temptation to take a complex domain and try to simplify it by removing the complexity. The problem is that true complexity cannot be removed; it can only be handled. And that complexity can only be grappled with by reframing your mindset.

But to make matters worse, reframing your mindset only works in that new stage of complexity handling: Once you’ve learned how to handle an SMB, for example, you can’t magically use SMB strategies on your new pre-PMF product line.

So, while you’re going to end up with a series of successive mental-model shifts in the company, across different periods of time and different maturity areas of your various products, each one is going to apply only in its own domain; there’s no universal strategy.

Over time, these strategies will proliferate, and you’ll end up with quite a tangle of layers. Embracing the tangled layers, rather than trying to enforce a uniform strategy that can’t exist, will give you the flexibility you need to adapt observability successfully in a world where software development is seemingly changing every few months.

The post Taking Your Observability Strategy to the Next Level appeared first on The New Stack.
