Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Linux 7.0 Released

1 Share
"The new Linux kernel was released and it's kind of a big deal," writes longtime Slashdot reader rexx mainframe. "Here is what you can expect." Linuxiac reports: A key update in Linux 7.0 is the removal of the experimental label from Rust support. That (of course) does not make Rust a dominant language in kernel development, but it is still an important step in its gradual integration into the project. Another notable security-related change is the addition of ML-DSA post-quantum signatures for kernel module authentication, while support for SHA-1-based module-signing schemes has been removed. The kernel now includes BPF-based filtering for io_uring operations, providing administrators with improved control in restricted environments. Additionally, BTF type lookups are now faster due to binary search. At the same time, this release continues ongoing cleanup in the kernel's lower layers. The removal of linuxrc initrd code advances the transition to initramfs as the sole early-userspace boot mechanism. Linux 7.0 also introduces NULLFS, an immutable and empty root filesystem designed for systems that mount the real root later. Plus, preemption handling is now simpler on most architectures, with further improvements to restartable sequences, workqueues, RCU internals, slab allocation, and type-based hardening. Filesystems and storage receive several updates as well. Non-blocking timestamp updates now function correctly, and filesystems must explicitly opt in to leases rather than receiving them by default. Phoronix has compiled a list of the many exciting changes. Linus Torvalds himself announced the release, which can be downloaded directly from his git tree or from the kernel.org website. Linux 7.0 has a major new version number but it's "largely a numbering reset [...], not a sign of some unusually disruptive release," notes Linuxiac.

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
1 hour ago
reply
Pennsylvania, USA
Share this story
Delete

Postman’s MCP Server Now Works With Google Antigravity IDE


We are excited to announce that Postman’s MCP server now integrates natively with Google’s Antigravity IDE. If you are already using Postman to manage your APIs, you can now give Antigravity’s AI agents direct access to your Collections, environments, and test workflows with a few lines of config.

What Is Postman’s MCP Server?

The Postman MCP server uses the Model Context Protocol (MCP) to give AI agents structured, executable access to your APIs. Think of it as a translation layer between your Postman workspace and any MCP-compatible AI tool. You define your APIs once in Postman, and the server handles the rest.

Once connected, AI agents can browse your Collections, execute requests, run your test scripts, and reference your environment variables to move between dev, staging, and production. Your Collections stop being documentation. They become something an AI can actually use.

Setting Up the Integration

To get started, generate a Postman API key at go.postman.co/settings/me/api-keys.

  1. Open the MCP store via the “Additional options” (“…”) dropdown at the top of the editor’s agent panel.
  2. Click “MCP Servers”.
  3. Search for “Postman”.
  4. Click “Install”.
  5. Enter your Postman API key.

That’s it. Antigravity’s AI agent will automatically discover the Collections and environments in your workspace. Your APIs show up as callable tools in the agent’s context, and you’re off and building.
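For context, MCP-aware editors typically register a server with a small JSON block along these lines. This is a hypothetical sketch of the general shape only; the exact keys Antigravity writes for you, and the Postman server’s package name, may differ from what’s shown here.

```json
{
  "mcpServers": {
    "postman": {
      "command": "npx",
      "args": ["-y", "@postman/mcp-server"],
      "env": {
        "POSTMAN_API_KEY": "<your-postman-api-key>"
      }
    }
  }
}
```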

How It Works Under the Hood

When the Postman MCP server starts up, it reads your connected Postman workspace and exposes your Collections as tools the AI agent can invoke directly. Each request in a Collection becomes a callable action. Your environments become switchable contexts. Your test scripts become validation steps the agent can run and report on.
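Concretely, MCP speaks JSON-RPC 2.0: the agent first asks the server what tools exist, then invokes one. Here is a hedged sketch of the two message shapes involved; the tool name and arguments below are invented for illustration and are not Postman’s actual tool names.

```python
import json

# The JSON-RPC method names ("tools/list", "tools/call") come from the
# Model Context Protocol spec. Everything under "params" is hypothetical.

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # agent asks: what tools does this server expose?
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # agent invokes one of the discovered tools
    "params": {
        "name": "run_collection",          # hypothetical tool name
        "arguments": {
            "collection": "Payments API",  # hypothetical Collection
            "environment": "staging",
        },
    },
}

# Both messages serialize to plain JSON on the wire.
wire = json.dumps(call_request)
```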

Antigravity maintains this context persistently across sessions. The agent doesn’t start from scratch each time you open a new task. It already knows your API surface, and it can draw on that knowledge throughout a full feature build without you having to re-explain anything.

What You Can Do With It

Rather than share a long feature list, here are a few real usage scenarios.

“My team needs to validate API changes before merging”

This is one of the most common pain points for API teams. A backend change ships, something breaks downstream, and no one finds out until the PR is already merged.

With the integration active, you can ask Antigravity’s agent to run your Postman test suite against staging before any pull request is opened. It executes the Collection, parses the results, surfaces failures inline, and flags which requests broke. Regressions get caught before they reach code review.

“I’m writing code that calls an internal API and I don’t want to guess at the contract”

This happens constantly. A developer writes an integration against a service they don’t own, and they’re either digging through static docs or copying examples from memory.

With Postman’s MCP server connected, Antigravity’s agent pulls the actual request definition from your Collection. The correct method, headers, auth scheme, and expected response shape are all there. No hallucinated endpoints. No made-up field names. The agent is working from what the API actually does.

“We onboarded a new engineer and they need to understand our API surface”

Point them to Antigravity and have them ask the agent to walk through your APIs. The agent can execute live requests against a sandbox environment and show real responses in context. It is a faster, more concrete introduction than handing someone a static doc and hoping they read it.

“I have an OpenAPI spec and I need a Postman Collection”

Drop the spec into Antigravity and ask the agent to scaffold a Collection from it. It uses Postman’s MCP tools to generate requests, test assertions, and example responses, then publishes the Collection directly to your workspace. Your team can pick it up immediately.

Why This Is Different From Just Using AI With Your Docs

Most AI coding tools can read a markdown file or an OpenAPI spec. What they cannot do is run your API, check whether the endpoint is actually up, or tell you whether the response coming back from staging today matches what the spec says it should return.

Postman’s MCP server gives the agent grounded, executable context. It is the difference between an AI that talks about your APIs and one that can actually call them.

Get Started

The Postman MCP server is available now. The Antigravity IDE integration is live for all Postman users today.

Check out our MCP server documentation for a full walkthrough, or explore the open-source repo on GitHub. If you have questions or want to share what you’re building, head over to the Postman Community.

Have questions about Postman’s MCP server? Visit our product page or join the discussion in the Postman Community Forums.

The post Postman’s MCP Server Now Works With Google Antigravity IDE appeared first on Postman Blog.


Managing Long-Running Tasks Inside Akka.NET Actors


19 minutes to read

Here’s a scenario we hit all the time on TextForge: a new user signs up, connects their Gmail account, and now we have to pull down their entire mailbox history. For the pro plan, that can be hundreds of thousands of emails, which translates to tens of thousands of Gmail API round trips, exponential backoff when the API pushes back, intermittent network failures, and tens of minutes (sometimes longer) of continuous work per user. On top of that, we need to be able to cancel the job cleanly, because the pod hosting that actor might get rebalanced across the cluster, we might roll a deployment, or the user might disconnect the account mid-sync.

So the question is: how can a single Akka.NET actor manage a long-running background job without becoming non-responsive to its own mailbox?

This post (and its accompanying video) walks through the exact pattern we use in TextForge’s MailboxSyncActor: a CancellationTokenSource field, a fire-and-forget PipeTo, and a couple of subtle lifecycle details that will bite you if you skip them.
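The C# specifics are in the full article; as a language-neutral illustration, here is a minimal asyncio sketch (not Akka.NET, and not TextForge’s actual code) of the same shape: the actor launches the sync as a fire-and-forget background task, keeps a handle so it can cancel cleanly, and has the job “pipe” its result back to the actor’s inbox as an ordinary message.

```python
import asyncio

class MailboxSyncActor:
    """Toy analogue of the pattern described above (hypothetical, not Akka.NET):
    the actor stays responsive because the long-running sync happens in a
    background task that reports back via the inbox, like PipeTo."""

    def __init__(self):
        self.inbox = asyncio.Queue()
        self._job = None  # plays the role of the CancellationTokenSource field

    async def _sync_mailbox(self, batches):
        synced = 0
        try:
            for batch in batches:
                await asyncio.sleep(0)  # real code would await the Gmail API here
                synced += batch
            # "pipe" the result back to the actor as a message
            await self.inbox.put(("SyncCompleted", synced))
        except asyncio.CancelledError:
            self.inbox.put_nowait(("SyncCancelled", synced))
            raise

    def start(self, batches):
        # fire-and-forget: the actor can keep processing other messages
        self._job = asyncio.create_task(self._sync_mailbox(batches))

    def cancel(self):
        # e.g. pod rebalanced, deployment rolled, or account disconnected mid-sync
        if self._job is not None:
            self._job.cancel()
```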

Click here to read the full article.


Refactoring to SOLID in C#

1 Share

I'm really pleased to announce the publication of my latest Pluralsight course, "Refactoring to SOLID in C# 14". It aims to provide C# developers with practical techniques and strategies to tackle the unique challenges of working in legacy codebases, such as dealing with technical debt, modernizing outdated dependencies, fixing code smells, and improving test coverage.

Legacy Code

Most professional software developers will spend a significant proportion of their careers working on "legacy codebases". By "legacy", I simply mean that the codebase is several years old, has had many different developers working on it, and continues to be actively maintained.

Legacy code isn't necessarily bad, but it's not uncommon for problems to gradually accumulate over time, making the codebase progressively harder to work with as time goes on.

Code Smells

Anyone who has worked on a legacy codebase will be all too familiar with the concept of "code smells", made popular by Martin Fowler. It's common to spend hours or even days navigating through various files, trying to understand how something works and wondering why on earth it was implemented this way. And while you may not be able to put your finger on exactly what's wrong with the code, it's clear that in its current state it's difficult to maintain and difficult to understand.

Test Coverage

Of course, identifying problems in legacy code is easy enough, but fixing them is risky. And the main reason for this is a lack of confidence that the test coverage we have is sufficient. It's often the time required to thoroughly test our changes that proves the main blocker to addressing code smells.

One approach that's worth considering is Michael Feathers' concept of "Characterization Tests". These are tests designed to capture the current behaviour of the system, rather than the "correct" behaviour. The advantage of these tests is that they can alert you to any regressions introduced by refactoring.
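As a minimal illustration (hypothetical code, not an example from the course), a characterization test pins down observed behaviour rather than specified behaviour:

```python
def legacy_price(quantity, unit_price):
    # imagine tangled legacy logic we don't yet fully understand
    total = quantity * unit_price
    if quantity > 10:
        total *= 0.9  # an undocumented bulk discount, discovered by running the code
    return round(total, 2)

def test_characterizes_current_behaviour():
    # Expected values were captured by *running* the existing code, then locked
    # in. If a refactoring changes either result, the test flags the regression,
    # whether or not the original behaviour was ever "correct".
    assert legacy_price(5, 3.00) == 15.00
    assert legacy_price(12, 3.00) == 32.40
```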

Of course you can ask AI to generate test coverage for you, although it often doesn't have a great grasp of what the "correct" behaviour is - it simply infers what's supposed to happen from what the code already does. So almost by definition, the tests an AI will generate on your behalf are characterization tests.

One pitfall to be aware of with characterization tests though is that you can inadvertently "lock in" undesirable behaviour, as future developers (or agents) assume that the tests are protecting some important functionality.

Refactoring Strategies

Refactoring is safest when done in small, incremental steps, testing your work as you go along. Tools like Visual Studio include built-in refactorings such as renaming variables and extracting classes, and these should be used wherever possible because they are deterministic.

If you are making widespread changes, it's worth familiarizing yourself with techniques such as "branch by abstraction", and the "Strangler fig" pattern, which are both designed to help you gradually replace legacy components without having to change everything in one go.

There's a very real danger though with any large refactoring initiative that it will stall mid-way through. This can result in an even more convoluted and confusing architecture. So be careful of starting what you can't finish.

SOLID Principles

In my new course I spend a couple of modules exploring how refactoring code to adhere to the SOLID principles can help a lot with software maintainability, testability and extensibility. The five "SOLID" principles have proved themselves to be very helpful guidelines over the years, but they aren't necessarily the whole picture.

It's also worth exploring complementary ideas such as DRY, YAGNI, KISS, Clean Architecture, CUPID, and STABLE.

A key benefit that all of these various "principles" provide is that they give us lenses through which to evaluate our code, and a vocabulary for talking about the problems we encounter. They help us move past the vague "code smell" sense that something is not quite right to being able to articulate what the problem is and formulate a plan to remediate it.

App Modernization

If you have a large codebase that's more than about five years old, then it's highly likely that it's in need of some modernization. New versions of tools, frameworks and dependencies are constantly coming out, and the programming language itself moves on with new features. Unless you are very disciplined, it's easy to get left behind, and while most tech upgrades are relatively straightforward, every now and then you'll find the migration is non-trivial and you get stuck for some reason.

The further behind you get, the harder it becomes to upgrade, and before you know it you find yourself in a situation where the libraries you depend on have critical security vulnerabilities but are no longer being maintained. You may even find that your hosting platform no longer supports running the framework you're using.

App modernization is an area that AI agents can be particularly helpful with, especially if you give them access to the official migration guides. That's essentially what the GitHub Copilot modernization agent is. If you've not tried asking an AI agent to help you modernize an app, it's something that's definitely worth experimenting with - you might be surprised at the results.

Summary

Legacy codebases can seem daunting to work on, but with the right tools and techniques at your disposal it can be a very rewarding experience to slowly and steadily improve a legacy codebase. If you have access to Pluralsight's excellent library of training courses, then do consider checking out my new Refactoring to SOLID in C# course in which I go into a lot more detail about all of these topics. And you don't need to wait until your codebase is a mess to start learning about these topics - refactoring should be an ongoing part of day-to-day development, even on a brand new application.


Finding a duplicated item in an array of N integers in the range 1 to N − 1


A colleague told me that there was an O(N) algorithm for finding a duplicated item in an array of N integers in the range 1 to N − 1. There must be a duplicate due to the pigeonhole principle. There might be more than one duplicated value; you merely have to find any duplicate.¹

The backstory behind this puzzle is that my colleague had thought this problem was solvable in O(N log N), presumably by sorting the array and then scanning for the duplicate. They posed this as an interview question, and the interviewee found an even better linear-time algorithm!

My solution is to interpret the array as a linked list of 1-based indices, and borrow the sign bit of each integer as a flag to indicate that the slot has been visited. We start at index 1 and follow the indices until they either reach a value whose sign bit has already been set (which is our duplicate), or they return to index 1 (a cycle). If we find a cycle, then move to the next index which does not have the sign bit set, and repeat. At the end, you can restore the original values by clearing the sign bits.²

I figured that modifying the values was acceptable given that the O(N log N) solution also modifies the array. At least my version restores the original values when it’s done!

But it turns out the interview candidate found an even better O(N) algorithm, one that doesn’t modify the array at all.

Again, view the array values as indices. You are looking for two nodes that point to the same destination. You already know that no array entry has the value N, so nothing points to index N, which means index N cannot be part of a cycle. Therefore, the chain that starts at N must eventually join a cycle, and the point where it joins is a duplicate. Start at index N and use Floyd's cycle-detection algorithm to find the start of the cycle in O(N) time.
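To make the walk concrete, here is a short Python sketch of that algorithm (my own illustration, not code from the post). It treats node i as pointing to a[i − 1] (1-based indexing) and runs Floyd's tortoise-and-hare starting from index N:

```python
def find_duplicate(a):
    """Return a duplicated value in a, where len(a) == n and every value is
    in 1..n-1 (so a duplicate must exist by the pigeonhole principle).

    Treat the array as a functional graph in which node i (1-based) points
    to a[i-1]. No value equals n, so nothing points to node n; the walk
    starting at n must therefore enter a cycle, and the cycle's entry point
    has two incoming edges, i.e. it is a duplicated value. Floyd's
    tortoise-and-hare finds that entry in O(n) time and O(1) extra space,
    without modifying the array.
    """
    n = len(a)
    succ = lambda i: a[i - 1]  # follow the "pointer" stored at 1-based index i

    # Phase 1: advance at different speeds until the pointers meet in the cycle.
    slow, fast = succ(n), succ(succ(n))
    while slow != fast:
        slow = succ(slow)
        fast = succ(succ(fast))

    # Phase 2: restart one pointer at n; stepping in lockstep, they meet at
    # the cycle's entry point, which is the duplicate.
    slow = n
    while slow != fast:
        slow = succ(slow)
        fast = succ(fast)
    return slow
```

For example, with a = [2, 3, 1, 2] (N = 4), the walk from index 4 goes 4 → 2 → 3 → 1 → 2, and the function returns the duplicate 2.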

¹ If you constrain the problem further to say that there is exactly one duplicate, then you can find the duplicate by summing all the values and then subtracting N(N−1)/2.

² I’m pulling a fast one. This is really O(N) space because I’m using the sign bit as a convenient “initially zero” flag bit.

The post Finding a duplicated item in an array of N integers in the range 1 to N − 1 appeared first on The Old New Thing.


Test Multi-Device Interactions with the Android Emulator

Posted by Steven Jenkins, Product Manager, Android Studio

Testing multi-device interactions is now easier than ever with the Android Emulator. Whether you are building a multiplayer game, extending your mobile application across form factors, or launching virtual devices that require a device connection, the Android Emulator now natively supports these developer experiences.

Previously, interconnecting multiple Android Virtual Devices (AVDs) caused significant friction. It required manually managing complex port forwarding rules just to get two emulators to connect.

Now you can take advantage of a new networking stack for the Android Emulator which brings zero-configuration peer-to-peer connectivity across all your AVDs.

Interconnecting emulator instances

The new networking stack for the Android Emulator transforms how emulators communicate. Previously, each virtual device operated on its own local area network (LAN), effectively isolating it from other AVDs. The new Wi-Fi network stack changes this by creating a shared virtual network backplane that bridges all running instances on the same host machine.

Key Benefits:

  • Zero-configuration: No more manual port forwarding or scripting adb commands. AVDs on the same host appear on the same virtual network.
  • Peer-to-peer connectivity: Critical protocols like Wi-Fi Direct and Network Service Discovery (NSD) work out of the box between emulators.
  • Improved stability: Resolves long-standing stability issues, such as data loss and connection drops found in the legacy stack.
  • Cross-platform consistency: Works the same across Windows, macOS and Linux.

Use Cases

The enhanced emulator networking supports a wide range of multi-device development scenarios:

  • Multi-device apps: Test file sharing, local multiplayer gaming, or control flows between a phone and another Android device.
  • Continuous Integration: Create robust, automated multi-device test pipelines without flaky network scripts.
  • Android XR & AI glasses: Easily test companion app pairing and data streaming between a phone and glasses within Android Studio.
  • Automotive & Wear OS: Validate connectivity flows between a mobile device and a vehicle head unit or smartwatch.



The new emulator networking stack allows multiple AVDs to share a virtual network, enabling direct peer-to-peer communication with zero configuration.

Get Started

The new networking capability is enabled by default in the latest Android Emulator release (36.5), which is available via the Android Studio SDK Manager. Just update your emulator and launch multiple devices!

If you need to disable this feature or want to learn more, please refer to our documentation.

As always, we appreciate any feedback. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, Medium, YouTube, or X.
