Warp Goes Agentic: A Developer Walk-Through of Warp 2.0

After the enlightening question-and-answer session with Warp CEO Zach Lloyd, I was ready to try out Warp 2.0’s agentic large language model ability. This feels strange, because I have Warp sitting in front of me most of the time.

I’ve written about Warp frequently (see my original review from a year ago), but essentially it is a modern terminal emulator app. If you haven’t used one of those, well, you should. Warp is well suited for agentic tasks: it has built-in awareness of multiple sessions through tabs, and it tracks when sessions stop and restart. The block structure allows for a natural separation between query and response.

Warp has always included some basic AI capabilities, which have no doubt annoyed a few users; heavy terminal users and LLM enthusiasts don’t generally sit in the same part of the Venn diagram. The LLM could attempt to fix common problems based on the (many) ways a Unix-like command could fail.

This is partly why Warp uses the slightly hyperbolic marketing term “agentic development environment” to mark this bigger offering. The actual version label gives us no clue:

Agentic Quality of Life

Earlier this week I made a list of the quality of life expectations for agentic sessions and, in theory, Warp has an advantage here. The terminal prompt UI is already pretty good:

It displays the model in use (currently Claude 4 Sonnet, though I’ll switch to Opus 4 if I can). The prompt is in ‘auto detection mode’, which guesses whether I’m writing plain English or typing a Unix-like command.

With cmd-I I can toggle between straight terminal mode, agent mode and the auto-detect mode (icons on the bottom left). We can also see the current directory and the git branch.

Before doing anything, I’ll check the permissions with Settings > AI > Agents > Permissions, so I can determine what the AI could do if let loose.

There doesn’t seem to be a way to lock activity to one directory — perhaps there is no natural concept of a project directory here (and if there is the equivalent of a Claude.md file, I can’t see it). But what does catch my eye is a denylist. This is a very simple but nice idea. At the end of the day, tasks are executed via OS commands, so it is sensible to check with the user for permission before removing files, spawning new shells or hitting the internet.
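
The idea is easy to picture: before running anything, the agent screens each proposed command against a list of denied patterns and falls back to asking the user. Here is a toy sketch of that kind of check, my own illustration rather than Warp's code, with made-up patterns:

import re

# Toy illustration of a denylist check, not Warp's implementation.
# Patterns cover file removal, privilege escalation and network access.
DENYLIST = [r"\brm\b", r"\bsudo\b", r"\bcurl\b", r"\bwget\b"]

def needs_confirmation(command: str) -> bool:
    # True means the agent should stop and ask the user before running it.
    return any(re.search(pattern, command) for pattern in DENYLIST)

print(needs_confirmation("rm -rf build/"))  # True: ask the user first
print(needs_confirmation("ls -la"))         # False: safe to run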

I’m going to ask Warp to perform a simple merge task, as I did previously with Gemini CLI and OpenAI Codex. I have two JSON files with city information, and I want to update the first file with the contents of the other.

The two JSON files are in place:

OK, now I’m ready to ask for the merge. Here is the query (the same one I asked in the previous posts). I explicitly move to agent mode and ask:

“please update the JSON file original_cities.json with the contents of the file updated_cities.json but if the ‘image’ field is different, please update or write a new ‘imageintended’ field with the new value instead”

I’m not sure what the point is in explicitly showing me Python code, unbidden. I started this conversation in English, after all. However, glancing at it (and the summary at the bottom of the code), I see no obvious problems.
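
For a sense of what the task involves, here is a minimal sketch of one way to express the merge in Python. This is my own illustration, not the code Warp generated, and it assumes each file holds a JSON array of city objects keyed by a "name" field, which the post doesn't actually show.

import json

# Rough sketch of the requested merge, not Warp's output. Assumes each file
# is a JSON array of city objects with a unique "name" key.
def merge_cities(original_path, updated_path):
    with open(original_path) as f:
        original = {c["name"]: c for c in json.load(f)}
    with open(updated_path) as f:
        updated = {c["name"]: c for c in json.load(f)}

    for name, new_city in updated.items():
        current = original.setdefault(name, {})
        for field, value in new_city.items():
            if field == "image" and current.get("image") not in (None, value):
                # The image differs: record the new value separately
                # instead of overwriting the original.
                current["imageintended"] = value
            else:
                current[field] = value

    with open(original_path, "w") as f:
        json.dump(list(original.values()), f, indent=2)

merge_cities("original_cities.json", "updated_cities.json")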

As it asks for permission, it also checks if it can fix the trailing comma. I’m glad it spotted that, but it didn’t need to create a diff just for that!

I noticed that it hadn’t quite understood the concept of a working directory and wanted to do everything with absolute paths. But this might just be to keep other tools happy.

The final summary was good (none of the models I’ve tested have had any real material problem with this task itself), again proving that the LLM understood both the problem and the context:

Now, obviously there is the issue of who I’m paying. Warp has a payment plan, but I’m not sure if I can pay Anthropic directly for using Claude Opus 4, for example. But I appear to have 150 requests per month on the free tier. There is no ongoing token usage data within the display yet — and as I don’t “quit” when I change tabs in the app, there is no chance to display usage stats at the end of a session.

Code Editor

This release of Warp includes a code editor, which was clearly designed to work with diffs as above, but you can summon it for any code file.

If I list the files in my project folder, we see that Warp left its merge file:

On my Mac, I can left-click and open the file in the Warp editor (“Open with Warp”). It places the file in a separate tab, where it looks like the diff above. It is intended to work within a block, with the frame buttons, but without these it is nicely sparse and quick. You can save any changes you make with ⌘-S. There is also language-specific colouring. My guess is it will gain a context menu in a few drops.

Adding a file editor may look fairly mundane, but as Lloyd mentioned to me in our Q&A, it isn’t going to be fully featured in any way (but it still represents a slightly aggressive move if you were, say, Zed). It does take away the cognitive friction of changing tools, which is good.

Conclusion

An agentic terminal should be able to handle much harder requests, where a large codebase is involved. However, because I have asked very specific and completable requests, all the agentic tools I’ve tested recently have managed perfectly well. And yes, Warp presented the solution efficiently.

The Warp terminal needs to add some visible usage stats, but as I’ve said, it already has the framework to allow the user to stay in control. Overall, I think Warp is in a good position to adapt to the agentic era because of its excellent terminal heritage.

The post Warp Goes Agentic: A Developer Walk-Through of Warp 2.0 appeared first on The New Stack.

The reality of GitOps application recreation

Your application code is in Git, and you’ve adopted GitOps principles, so you can recreate it anywhere, anytime, right? If you’re like many teams, the honest answer is “sort of”. While GitOps promises the ability to recreate your application from version control, the reality is often more nuanced as you consider the holistic view of deploying and running your applications.

Our State of GitOps report reveals some fascinating insights about recreatable applications. As part of the survey, we asked this question:

Can you recreate the application from the configuration files stored in version control, for example, to recover from a disaster or to create a new test environment?

Overall, 52% of respondents feel confident they can recreate their applications from version control. For high-performing GitOps teams, this jumps to 70%. But what caught my attention is that “partially recreatable” was a significant response across GitOps maturity levels and the most common response among teams still developing their GitOps practices. It suggests that while most teams can handle their core applications, they struggle with recreating the complete environment stack.

Bar chart showing responses to ‘can you recreate the application from configuration files stored in version control?’ across groups with different GitOps maturity scores.

Interestingly, our data reveals that teams often become less confident about recreation as they advance their GitOps practices, only to see confidence surge at higher maturity levels. This pattern likely reflects a natural discovery process. As teams dive deeper into GitOps, they uncover dependencies and complexities they didn’t initially realize existed.

So, what does complete application recreation entail? And why do so many teams find themselves in that “partially recreatable” category?

The real-world value of recreatable applications

Recreatable applications don’t make sense for every organization or application, and arguably could be viewed as a nice-to-have rather than a necessity. However, recreating your applications from Git can solve real problems teams face at different scales and circumstances.

When disasters occur, having your entire application stack defined in code means restoration becomes a deployment rather than a scramble. You can rebuild from your Git repository instead of hunting through documentation or recreating manual configurations. The same capability proves valuable during cloud provider migrations driven by acquisitions, regulatory requirements, or strategic business decisions, where recreation becomes a non-trivial but controlled transition rather than a risky lift-and-shift operation.

Of course, your application is just one piece of the puzzle. You’ll still need to consider the surrounding infrastructure, networking, and foundational services that enable your application to run in the target environment. While mature teams often manage this infrastructure through tools like Terraform and Crossplane, getting to that level of complete recreation from Git requires thoughtful planning and infrastructure provisioning processes.

Operational efficiency improves when you can create new test environments on demand with minimal overhead. Whether you’re testing critical fixes, running performance tests against production-like infrastructure, or validating new features, the ability to spin up identical environments quickly and tear them down when finished reduces both time and infrastructure costs.

Recreation provides auditable proof for regulated industries that infrastructure and deployment processes are fully documented and reproducible. If you need to satisfy compliance frameworks that require demonstrable change control, recreatable applications help you meet audit requirements for deployment consistency and provide evidence that you can rebuild systems according to documented specifications.

The maturity journey

Our survey data’s confidence curve tells a story about how teams learn and adopt GitOps. Rather than steady upward progress, there’s an initial dip in confidence before teams reach far higher levels of certainty about their recreation capabilities.

This pattern might highlight the natural learning process. Pre and low adopters will typically approach GitOps from an application-down perspective and focus on getting their manifests into Git repositories. The initial confidence may come from successfully deploying applications this way and feeling like they’ve “solved” recreation.

However, as teams mature, reality sets in when they add more applications to their GitOps processes and try to recreate complete environments. They likely discover their core applications are relatively easy to recreate, but the surrounding environment may not be. Infrastructure provisioning may be required, whether manual or automated, and teams may not have accounted for external dependencies yet.

You need backup and restore strategies for stateful components like databases that go beyond what you define in Kubernetes manifests. While the application might be recreatable from Git, the data likely isn’t. You’ll need to consider whether to replicate databases across infrastructure (adding cost and complexity), configure automated backup and restore processes, or accept that provisioning a new environment requires data restoration as a separate step.
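
To make that separate step concrete, here is a purely illustrative sketch of the kind of out-of-band backup script teams often keep next to, but outside, their GitOps workflow. The database name, bucket and tooling are hypothetical assumptions, not something taken from the report.

import datetime
import subprocess

import boto3

# Illustration only: dump a PostgreSQL database and push the dump to object
# storage. This runs outside the GitOps reconciliation loop; restoring data
# in a recreated environment is a separate, explicit step.
def backup_database(db_name: str, bucket: str) -> str:
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_file = f"/tmp/{db_name}-{stamp}.sql"
    subprocess.run(["pg_dump", "--file", dump_file, db_name], check=True)
    boto3.client("s3").upload_file(dump_file, bucket, f"backups/{db_name}/{stamp}.sql")
    return dump_file

backup_database("app_db", "example-backup-bucket")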

This explains why “partially recreatable” was a popular response in our survey across all GitOps maturity levels. Most teams can handle their core applications but struggle with the complete environment stack. As low and medium-maturity teams adopt foundational GitOps practices, they discover the full scope of what requires management, decreasing their confidence in complete recreation.

What to consider with complete recreation

Success stories do exist. Teams using infrastructure provisioning tools like Crossplane, Terraform, and mature GitOps workflows have achieved recreatable applications. But getting there requires stepping back and considering what recreation means from a holistic perspective.

Your application manifests assume that the underlying infrastructure exists, but something must first create that foundation. Many organizations still manage infrastructure provisioning through separate Terraform workflows, creating a gap where recreation often breaks down. While infrastructure provisioning may be mostly automated, unless integrated with your GitOps workflows, you must step away and use another platform to create the infrastructure first.

Recreation means recreating the configuration and access to secrets in the new environment. Your GitOps process needs strategies for safely managing environment-specific values, API keys, and certificates.

Teams can easily recreate applications but can’t recreate data the same way. You need strategies for database backups, data replication, or accepting that data restoration happens separately from application recreation.

External services, APIs, or legacy systems your applications depend on often fall outside what GitOps can recreate. Your recreation strategy needs to account for these dependencies, whether through service discovery, configuration updates, or fallback mechanisms. Additionally, can your target environment support the infrastructure you rely on? Not all cloud services are available in every region, especially during disasters.

The 70% of high-performing teams that are confident in their recreation capabilities have worked through these considerations. They prove that complete recreation is possible and likely worthwhile, but it requires treating recreation as a comprehensive system design challenge rather than just putting YAML in Git.

The path forward

While the journey from partial to complete recreation involves discovering complexities you didn’t know existed, 70% of high-performing teams that achieved this capability prove it’s possible and worthwhile.

The key is treating recreation as a systematic challenge rather than an afterthought. Whether you’re just starting your GitOps journey or working through the complexities that come with maturity, understanding what complete recreation entails helps you make informed decisions about where to invest your efforts.

For more insights into how teams across different maturity levels approach GitOps and application recreation, download our complete State of GitOps report to see the full research findings and implementation patterns.

Happy deployments!

Improved control over package retention

Retention isn’t the most glamorous part of the deployment process, but it is critical to get right. When storage is efficiently used, it can improve deployment times and performance. To help achieve this, we’ve made two improvements to package retention, with more to come.

These changes are available to cloud customers now and will be included in the server release 2025.3.

Recent improvements (and why they matter to you)

  1. Package caching
  2. Decoupling release and package retention

Package caching

Package cache retention currently runs by default when the target machine drops below 20% free storage. These retention rules can be problematic for users with both small and large amounts of disk space:

  1. On smaller machines, large packages involved in deployments may be prematurely deleted due to storage constraints, causing deployment failures or delays when subsequent deployments need to re-download them.
  2. Machines with larger disks may see their cache cleared less often, accumulating obsolete deployment files that degrade performance or consume space needed for higher-priority files.

Default Machine Policy page showing where package cache retention can be set

Now, within the default machine policies, you can either let Octopus set the default or keep a specific number of packages. For your larger machines, you can ensure the package cache doesn’t grow too big, and for smaller machines, you can set a sensible default based on your deployment patterns.
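
Conceptually, keeping a specific number of packages amounts to pruning the cache down to the most recently used files. The sketch below is only an illustration of that idea, not Octopus's implementation; the cache path and the limit are made up.

from pathlib import Path

# Illustration only: keep the N most recently modified packages in a cache
# directory and delete the rest. Octopus handles this internally; the path
# and limit here are placeholders.
def prune_package_cache(cache_dir: str, keep: int = 20) -> None:
    packages = sorted(
        Path(cache_dir).glob("*.nupkg"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    for stale in packages[keep:]:
        stale.unlink()

prune_package_cache("/opt/octopus/package-cache", keep=20)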

Decoupling release and package retention

Lifecycle policies control the retention of releases and the associated packages. Customers with frequent deployments or large packages typically require shorter retention periods to stay within storage limits. Tightening retention policies reduces the number of deployments kept, which limits your ability to audit or troubleshoot old failed deployments.

Built-in Package Repo settings where you can now set your updated package retention policy

We’ve added new flexibility to give you better control over your package retention. You can now choose between two approaches: keeping packages for all retained releases and runbooks (our current default), or only keeping packages for releases visible on your dashboard. The dashboard-only option is a game-changer for storage management: it keeps relevant deployment history without the overhead of keeping the associated packages. While it means some older releases won’t be redeployable after their packages get cleaned up, we’ve found customers rarely need to redeploy those old releases; what they want is to keep more of their release history at their fingertips. It’s all about giving you the right balance between storage efficiency and the information you need.

What’s Next?

Our next iteration of this project will focus on centralizing where you view all retention policies. We aim to provide better context and the ability to standardize retention policies based on your organization’s guidelines. If you have examples of where retention inefficiencies have cost you time or money, please submit your examples here. The more information we have, the better solutions we can build.

Happy deployments!

Solar Was the Leading Source of Electricity In the EU Last Month

In June 2025, solar power became the leading source of electricity in the EU for the first time, surpassing nuclear and wind, while coal hit a record low. CBC reports: Solar generated 22.1 percent of the EU's electricity last month, up from 18.9 percent a year earlier, as record sunshine and continued solar installations pushed output to 45.4 terawatt hours. Nuclear followed closely at 21.8 percent and wind contributed 15.8 percent of the mix. At least 13 EU countries, including Germany, Spain and the Netherlands, recorded their highest-ever monthly solar generation, [data from energy think tank Ember showed on Thursday]. Coal's share of the EU electricity mix fell to a record low of 6.1 percent in June, compared to 8.8 percent last year, with 28 percent less electricity generated than a year earlier. Germany and Poland, which together generated nearly 80 percent of the 27-country bloc's coal-fired electricity in June, also saw record monthly lows. Coal accounted for 12.4 percent of Germany's electricity mix and 42.9 percent of Poland's. Spain, nearing a full phase-out of coal, generated just 0.6 percent of its electricity from coal in the same period. Wind power also set new records in May and June, rebounding after poor wind conditions resulted in a weak start to the year. But despite record solar and wind output in June, fossil fuel usage in the first half of 2025 grew 13 percent from last year, driven by a 19 percent increase in gas generation to offset weak hydro and wind output earlier in the year. Electricity demand in the EU rose 2.2 percent in the first half of the year, with five of the first six months showing year-on-year increases. The next challenge for Europe's power system is to expand battery storage and grid flexibility to reduce its reliance on fossil fuels during non-solar hours, Ember said in the report.

Read more of this story at Slashdot.

IoT Coffee Talk: Episode 269 - Your Personal Tay (Realizing Your Inevitable AI You)

From: IoT Coffee Talk
Duration: 1:04:17
Views: 7

Welcome to IoT Coffee Talk, where hype comes to die a terrible death. We have a fireside chat about all things #IoT over a cup of coffee or two with some of the industry's leading business minds, thought leaders and technologists in a totally unscripted, organic format.

This week, Rob, Dimitri, David, and Leonard Lee jump on Web3 to talk about:

🎶 🎙️ BAD KARAOKE! 🎸 🥁 "Cissy Strut" - The Meters
🐣 Why IoT can't keep your wine from going bad in transit from Seattle to Houston.
🐣 Why do pastries in the U.S. generally suck but they rock in Europe?
🐣 Why is Autodesk acquiring PTC? What does it mean for IoT and ThingWorx and Kepware?
🐣 Could digital twins bring IoT back into the hype cycle?
🐣 Why independence is important to integrity.
🐣 Does the democratization of things lend itself to "democratic values" or something else?
🐣 The role of frameworks in establishing and maintaining a basis of truth and trust.
🐣 Will artificial intelligence not only become smarter than us but also have better values?
🐣 Is Grok's Tay moment an affirmation of the direction of AI evolution and its final resolve?
🐣 Is Chinese AI leapfrogging Western AI with less?
🐣 What happens when you weaponize marketing? What do you get?

It's a great episode. Grab an extraordinarily expensive latte at your local coffee shop and check out the whole thing. You will get all you need to survive another week in the world of IoT and greater tech!

Tune in! Like! Share! Comment and share your thoughts on IoT Coffee Talk, the greatest weekly assembly of Onalytica and CBT tech and IoT influencers on the planet!!

If you are interested in sponsoring an episode, please contact Stephanie Atkinson at Elevate Our Kids. Just make a minimally required donation to www.elevateourkids.org and you can jump on and hang with the gang and amplify your brand on one of the top IoT/Tech podcasts in the known metaverse!!!

Take IoT Coffee Talk on the road with you on your favorite podcast platform. Go to IoT Coffee Talk on Buzzsprout, like, subscribe, and share: https://lnkd.in/gyuhNZ62

Using Instruction Files with VS Code Agent Mode and the Uno Platform

In this post we’re going to do two things. Firstly, we’re going to step through how to create an Uno Platform application with VS Code. Then, we’re going to use VS Code to generate an instructions file that will help guide Copilot Chat to generate output that’s more consistent with your application structure.

Setting up VS Code for Uno Platform Development

The first step is to install the Uno Platform extension for VS Code (you can follow this link, or install from the Extensions list inside VS Code):

Once installed, you’ll want to run uno-check to make sure all the dependencies are in place. In the past you would have had to download uno-check separately, but it’s now part of the extension, so you can invoke it directly from the command palette.

Uno-check will run as a separate process, stepping through each set of dependencies to make sure you’re all set up to build and run Uno Platform applications.

Now that we have all the dependencies installed, let’s go ahead and create a new project. You can do this directly from the command line; explore the options of the unoapp template by running "dotnet new unoapp -h". Alternatively, you can use the Uno Platform Live Wizard to step through the available options and then generate the appropriate dotnet new command that you can run to create your application.

In this case we’re going to be creating a new application using the Recommended preset.

dotnet new unoapp -o InstructionsSampleApp -preset "recommended"

Inside a Terminal within VS Code we can execute this command to create the application.

The last thing to do is to open the newly created InstructionsSampleApp folder in VS Code.

After opening the application in VS Code, use the highlighted (red) project selector button at the bottom of the screen to select the project file as the active project.

Next, pick the target platform (click the target platform selector button at the bottom of the screen), in this case net9-desktop.

Now we can run the application. If you run without the debugger you’ll see that the Hot Design button (left) in the Studio toolbar is enabled.

Clicking the Hot Design button will launch the runtime design experience, allowing the creation, selection and modification of elements in the running application.

If you haven’t already checked out the Hot Design experience, give it a shot today as we’d love to hear your feedback.

Creating Instruction files

OK, now that we have our Uno Platform application set up and ready to go, we can use Copilot Chat in VS Code to modify our application with AI. Chat can be launched from the View menu.

Opening Copilot Chat gives us various options to pick the AI model and the mode; in this case I’m using Agent mode with Claude.

Before I start interacting with Copilot Chat, I make sure that I set up my application to guide the AI as to how I want code to be created. To do this you can define instruction files. Rather than creating instruction files manually, in the latest VS Code Preview there’s a new option to Generate Instructions.

When you select this option, Copilot will spend a couple of minutes analyzing your existing project structure in order to determine the appropriate instructions.

Here you can see a snapshot of the instructions created for our sample application – we created an application using the Recommended presets in the unoapp template, so the instructions are quite opinionated and use a lot of the Uno Platform features such as Toolkit and Extensions.

If I repeat this process using the Blank preset, the instructions file is still quite detailed but isn’t as opinionated when it comes to how code should be structured.

After creating the instructions file, you should see that the output of any interaction with Copilot Chat is more in line with the existing structure of your application. Of course, you’ll want to adjust and extend the generated instructions file as you continue to develop your application.

Hopefully in this post you’ve seen how to create an Uno Platform application and how you can improve the output of Copilot Chat using instruction files.

The post Using Instruction Files with VS Code Agent Mode and the Uno Platform appeared first on Nick's .NET Travels.
