A collection of upcoming CFPs (call for papers) from across the internet and around the world.
The post Call For Papers Listings for 4/10 appeared first on Leon Adato.
Job postings that came across my desk, Slack, email, Discord, etc. this week.
The post Job listings for week ending 4/10 appeared first on Leon Adato.
This reading list is courtesy of Vivaldi browser, who pay me decent money to fight for a better web and don’t moan at me for reading all this stuff. We’ve just released Vivaldi 7.9, with even more personalisation and zero “A.I.”, because it’s cream of the crop, not a stream of the Slop.
Everyone says AI needs guardrails. Julien Verlaguet wants to know who is actually building them.
Verlaguet, founder of SkipLabs, has spent the last year asking that question, but he doesn’t like the answers he keeps finding. “Every time I look closer at people who claim that they are bringing guardrails to AI, I see more of the same,” he tells The New Stack. “I see more prompting — and I don’t see anybody who is trying to build real guardrails and real tooling from scratch.”
His explanation for why: “It’s a lot of work, and so it’s much easier to make those big claims that you’re going to do these things when, in the end, you don’t.”
Verlaguet has set out to do it. SkipLabs builds Skipper, a specialized coding agent for generating and maintaining backend services — not a code generator in the Copilot mold, but the structural layer underneath AI-assisted development that’s supposed to make AI output readable, maintainable, and deployable at speed.
“The first thing to notice is that Skipper is not a model,” Verlaguet says. “So, we’re not doing any AI here. We treat models as a commodity. We use different models, Anthropic for the most part, but not only, but to us, the model is just an API that we call with a context, it comes back with the result.”
Verlaguet’s thesis starts with a latency problem, he says.
“In the next few years, it’s not going to be okay to wait for CircleCI to take half an hour to an hour to validate a diff,” he says. “AI is getting faster and faster, and so you will need the tooling to guardrail the AI, and that tooling will need to be incremental.”
Verlaguet built his career on the idea of incremental development. At Facebook he created Hack, the gradually-typed PHP dialect that put a type system on a dynamic language before TypeScript made that approach fashionable.
“When Julien Verlaguet and the team at Facebook created Hack, they were solving a specific problem: PHP, the language powering the entire Facebook codebase, was fundamentally type-unsafe,” wrote Hugo Venturini, a software engineer at SkipLabs, in a blog post.
Yet, Venturini added: “At the scale Facebook operated, that wasn’t just an aesthetic problem, it was an engineering liability.”
“So, Julien built Hack: a gradually typed, rigorously annotated replacement that was strictly less pleasant to write than PHP,” Venturini writes. “It was more verbose. It demanded precision. It required you to say exactly what you meant. And then they got every engineer at Facebook to switch.”
Verlaguet then built Skip, a full reactive programming environment organized around the principle that when inputs change, you shouldn’t have to recompute everything from scratch. Seeing there were no similar frameworks available outside of Meta, Julien and the team founded SkipLabs to bring reactivity to the next million software engineers, the company says.
Lucas Hosseini, a software engineer at SkipLabs, in a blog post offers his definition of reactive programming: “In practical terms, reactive programming is a declarative way of expressing computations: instead of manually handling state transitions, you simply describe state as a function of multiple inputs.”
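Hosseini’s definition — state described as a function of its inputs, with no manual state transitions — can be illustrated with a minimal reactive-cell sketch. This is purely illustrative TypeScript, not Skip’s actual API: the `Cell` and `derive` names are invented here to show the idea that when an input changes, only the cells that depend on it are recomputed.

```typescript
type Listener = () => void;

// A mutable input cell that notifies its dependents when it changes.
class Cell<T> {
  private listeners: Listener[] = [];
  constructor(private value: T) {}
  get(): T { return this.value; }
  set(v: T): void {
    this.value = v;
    this.listeners.forEach((l) => l()); // push the change to dependents
  }
  subscribe(l: Listener): void { this.listeners.push(l); }
}

// A derived cell: its value is *described* as a function of its inputs,
// and it recomputes only when one of those inputs changes.
function derive<A, B, R>(a: Cell<A>, b: Cell<B>, fn: (a: A, b: B) => R): Cell<R> {
  const out = new Cell(fn(a.get(), b.get()));
  const recompute = () => out.set(fn(a.get(), b.get()));
  a.subscribe(recompute);
  b.subscribe(recompute);
  return out;
}

// Usage: total is declared once as a function of price and quantity.
const price = new Cell(10);
const quantity = new Cell(3);
const total = derive(price, quantity, (p, q) => p * q);

price.set(12); // only total is recomputed; nothing restarts from scratch
console.log(total.get()); // 36
```

The point of the sketch is the declarative shape: the programmer never writes the update logic, only the relationship, which is what lets a runtime recompute incrementally.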
Verlaguet left Meta in 2020 to build his startup around the reactive technology; Skipper is where that work lands.
Skipper has two components. The first is a development environment built on a new, sound, and incremental implementation of TypeScript. Verlaguet chose TypeScript deliberately.
“AI is very good at TypeScript and Python,” he says. “I think starting with one of these two is probably where you’re going to get the best results.” The soundness matters technically: a type system with holes can’t support the reachability analysis — the call-graph mapping of what code changes affect what program state — that sits at Skipper’s core.
“If the type system wasn’t sound, then you don’t know what the types are, you don’t know what you’re calling, you cannot build a call graph,” Verlaguet says. A reactive runtime sits on top, updating live state when code changes so the program never has to restart from scratch.
The second component is the harness — the agentic orchestration layer. Classic in structure (plan, generate tests, generate code), but built on the same incremental framework so that generation and remediation happen in parallel on independent pieces rather than sequentially across a whole codebase, Verlaguet says. The combination means Skipper can ingest diffs at speed, rerun only the tests affected by a change, and update a live service without taking it down.
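The reachability analysis Verlaguet describes — mapping which code a change can affect, then rerunning only the tests that reach it — can be sketched as a walk over a reverse call graph. This is a hedged illustration, not SkipLabs’ implementation; the graph and function names are invented for the example.

```typescript
// callers[f] = the functions that call f (a reverse call graph, which a
// sound type system makes it possible to construct reliably).
const callers: Record<string, string[]> = {
  parse: ["compile"],
  typecheck: ["compile"],
  compile: ["testCompile", "testEndToEnd"],
  render: ["testRender"],
};

// Everything transitively affected by an edit to `changed`: walk the
// reverse call graph breadth-first from the edited function.
function affectedBy(changed: string): Set<string> {
  const seen = new Set<string>([changed]);
  const queue = [changed];
  while (queue.length > 0) {
    const fn = queue.shift()!;
    for (const caller of callers[fn] ?? []) {
      if (!seen.has(caller)) {
        seen.add(caller);
        queue.push(caller);
      }
    }
  }
  return seen;
}

// Editing `parse` invalidates `compile` and its two tests —
// but `testRender` is untouched and never reruns.
const impacted = affectedBy("parse");
```

With the affected set in hand, a harness can rerun only the impacted tests, and run independent pieces in parallel, instead of revalidating the whole codebase sequentially.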
On internal benchmarks, Verlaguet claims Skipper passes more than 90% of tests on their corpus of backend service prompts, against 20% for Claude Code on the same corpus. But he is careful about the scope of that comparison.
“Claude Code does a lot more,” he says. “We are better at what we do.” What they do is generate backend services — general-purpose ones, not a catalog of predefined templates. “All the code is generated from scratch,” he says.
Verlaguet says he believes AI will push more and more software into service architectures, not because that’s how things are built today, but because stateful services are what make iterative AI development tractable. He uses a compiler as illustration.
“In five years from now, I’m pretty sure you will want this compiler to be built by an AI. And what does it mean?” he asks. “It means that every time the AI wants to iterate on that compiler, it has to wait half an hour. It doesn’t make any sense.” Turn the compiler into a service with state, send requests to it, and the AI can iterate without the overhead.
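The compiler-as-a-service idea can be sketched as a long-lived process holding per-module results in memory, so a request after a one-module edit rebuilds only that module. The class and its behavior below are invented for illustration; a real incremental compiler would track dependencies too.

```typescript
// Illustrative sketch: a stateful compile service that caches per-module
// output, so an iterating agent pays only for what it changed.
class CompileService {
  private cache = new Map<string, string>(); // module name -> compiled output
  compiles = 0; // count of real compilations, to show what state saves

  private compileModule(name: string, source: string): string {
    this.compiles++;
    return `<compiled ${name}: ${source.length} bytes>`;
  }

  // Handle a request carrying only the modules that changed.
  request(changed: Map<string, string>): void {
    for (const [name, source] of changed) {
      this.cache.set(name, this.compileModule(name, source));
    }
  }
}

const svc = new CompileService();
svc.request(new Map([["a", "fn a() {}"], ["b", "fn b() {}"]])); // cold start: 2 compiles
svc.request(new Map([["a", "fn a() { fixed }"]]));              // one edit: 1 more compile
// svc.compiles is 3, not 4 — module b was never rebuilt
```

A cold-start batch compiler would redo all the work on every iteration; the stateful service is what lets an AI iterate at request latency rather than build latency.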
Skipper is still in its early stages; however, Verlaguet says the company is poised to make an announcement in the next month. He describes the current user base as “us and our friends.”
Verlaguet characterizes SkipLabs as a specialized coding agent shop — and expects that category to get crowded. “I think we’re going to see different agents with different tools, tool chains, that are better at doing certain things,” he says.
The guardrails question, he argues, is what will separate the real ones from the noise.
“Here’s a company who’s actually built the guardrails you were looking for, for your AI-generated code,” Verlaguet says.
The technical ambition is not lost on industry observers.
Brad Shimmin, an analyst at the Futurum Group, called SkipLabs’ technical creation “very fascinating and a reflection of how software is changing in our current non-deterministic world. Instead of using traditional but sometimes network-heavy declarative frameworks for real-time response, this framework and back-end service basically use a declarative mechanism to just reason over what a code block is supposed to do, tracking any dependencies in a computational graph.”
Meanwhile, David Mytton, CEO of AI security platform provider Arcjet, tells The New Stack that Skipper is definitely addressing an emerging issue.
“I’ve had multiple conversations with technical leaders recently and the industry is still coming to terms with the death of code review,” he says. “We’re in a new world where the old security model breaks down because it assumes a human wrote the code. Review can’t keep up when agents handle implementation — and they’ll soon manage the full cycle from planning through to deployment. Security must be baked into the whole process, from the development environment to runtime.”
In his post, Venturini says, “An agent benefits from verbosity. Long, precise, unambiguous tool outputs are easier to parse than short, clever, human-optimized ones.”
He argues that loud, specific failures are more useful than errors swallowed for the sake of developer experience. Strong type systems matter not as guardrails but as the most information-dense description available of what code is supposed to do, he says.
Moreover, that points toward a split in the tooling landscape, Venturini notes in the blog. Languages and environments built for human authors will persist, as “humans aren’t going anywhere.” But alongside them, a new generation of tools is emerging with the agent as the primary consumer: strict, formally specified, intolerant of ambiguity, he indicates.
SkipLabs positions Skipper in that second camp. SKJS, its TypeScript-compatible type checker, is sound where standard TypeScript is not. This tradeoff makes it harder for humans and more useful for agents. The reactive runtime enforces explicit dependency contracts that would feel excessive to a developer writing by hand and are exactly what an agent needs to reason about cause and effect in a codebase, Venturini notes.
The underlying argument: if agents are generating most of the code, and the tools those agents use were designed around human readability, a significant capability ceiling is being left in place. You’re asking the agent to work in someone else’s medium, he says.
“We made the tools more readable to get more developers,” Venturini writes. “We’ll make them less readable — more precise, more formal, more machine-native — to get better agents. The readability era of programming languages was long and productive and is now coming to an end.”
Here is a timeline of Verlaguet’s accomplishments leading up to the creation of Skipper and a potential product release:
The post Where are the guardrails everyone promised for AI? appeared first on The New Stack.
We’re excited to announce the stable release of Azure MCP Server 2.0, a significant milestone for building secure and flexible agentic workflows on Azure. Azure MCP Server is open-source software that implements the Model Context Protocol specification and enables AI agents and developer tools to interact with Azure resources through a consistent, standardized tool interface.
Azure MCP currently contains 276 MCP tools across 57 Azure services, enabling end-to-end scenarios that span provisioning, deployment, monitoring, and operational diagnostics within AI-assisted experiences. The defining advancement in 2.0 is the self-hosted, remote MCP server support. Azure MCP server can now run as a remote MCP server, so you can deploy it exactly where your team builds and operates.
Azure MCP Server is an MCP-compliant server that exposes Azure capabilities as structured, discoverable tools that agents can select and invoke. It’s designed to integrate into modern developer workflows and can be used flexibly across local development on IDEs, tool integrations, and centralized deployments, including operation as a self-hosted remote MCP server for team and enterprise scenarios, which is the primary focus of the 2.0 release.
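The “structured, discoverable tools” interface comes from the Model Context Protocol itself, which defines JSON-RPC methods such as `tools/list` and `tools/call`. A minimal sketch of the request shape an agent sends follows; note that the tool name and arguments below are hypothetical placeholders, not actual Azure MCP tool names.

```typescript
// The MCP tools/call request is a plain JSON-RPC 2.0 message: the agent
// names a tool discovered via tools/list and passes structured arguments.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function makeToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// Hypothetical tool name and argument, for illustration only.
const req = makeToolCall(1, "storage_account_list", { subscription: "my-sub" });
```

Because every tool is invoked through this one uniform shape, an agent can work across all 57 services without service-specific integration code — the discovery and invocation contract stays constant.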
This flexibility lets you start small on a single machine and scale to centrally managed deployments with consistent policy, security controls, and configuration.
Azure MCP Server 2.0 represents a focused set of improvements that make the platform more suitable for shared deployments, stronger governance, and daily engineering workflows.
Azure MCP Server 2.0 is designed for remote hosting. It strengthens HTTP-based transport to support authentication scenarios and safer operational behaviors in remote mode. This enables teams to deploy Azure MCP as a centrally managed internal service and apply consistent configuration and governance.
Remote Azure MCP also supports multiple authentication approaches so you can align access with your environment and security model. For example, you can use managed identity when running alongside Microsoft Foundry. Alternatively, use an On-Behalf-Of (OBO) flow, also known as OpenID Connect delegation, to securely call Azure APIs using the signed-in user context.
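The On-Behalf-Of exchange is the standard OAuth 2.0 OBO grant: the server trades the signed-in user’s token for one scoped to a downstream API. The sketch below builds that token-request body per the OAuth/Microsoft identity platform grant shape; the tenant, client, and scope values are placeholders, and a production deployment would typically use an auth library rather than raw requests.

```typescript
// Build the form body for an OAuth 2.0 On-Behalf-Of token request,
// posted to https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token.
// All concrete values here are illustrative placeholders.
function buildOboBody(
  userAssertion: string, // the incoming user's access token
  clientId: string,
  clientSecret: string,
  scope: string
): URLSearchParams {
  return new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
    requested_token_use: "on_behalf_of",
    assertion: userAssertion,
    client_id: clientId,
    client_secret: clientSecret,
    scope,
  });
}

const body = buildOboBody(
  "<user-token>",
  "<client-id>",
  "<client-secret>",
  "https://management.azure.com/.default"
);
```

The key property for governance is that the resulting token carries the user’s identity, so downstream Azure calls are authorized and audited as that user, not as a shared service principal.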
Common scenarios include:
Security and operational safety are central design priorities in 2.0. The release includes stronger validation and safeguards intended to reduce risk in both local development and remote hosting scenarios. These improvements span endpoint validation, protections against common injection patterns for query-oriented tools, and tighter isolation controls for development environments.
Collectively, these changes are intended to make Azure MCP safer to run locally and more appropriate to host as a remote shared service.
Azure MCP Server 2.0 continues to support a broad range of development environments and agent platforms, whether you’re working inside an IDE, interacting through a CLI, or running a standalone server. The release also expands distribution options to improve portability and simplify onboarding across MCP-compatible tools.
Azure MCP Server 2.0 includes practical upgrades that improve reliability and responsiveness in day-to-day usage, particularly in scenarios that depend on multiple MCP toolsets. Container distribution updates also reduce image size and support more efficient deployment in containerized environments.
Azure MCP Server can be configured for sovereign cloud environments such as Azure US Government and Azure operated by 21Vianet Cloud (Azure in China), enabling use in regulated deployments that require sovereign endpoints and stronger boundary controls. This capability complements the 2.0 emphasis on self-hosting by allowing teams to deploy Azure MCP close to their required cloud environment and compliance posture.
Azure MCP continues to evolve its tool ecosystem to improve usability and agent selection accuracy through clearer tool descriptions, more consistent validation logic, and consolidation of redundant operations where it improves discoverability. The intent is to provide a practical code-to-cloud operational interface that works consistently across a wide range of Azure scenarios without requiring service-specific integration patterns.
Azure MCP Server 2.0 reflects continued collaboration with partners and the broader developer community. Thank you for the feedback, contributions, and real-world scenarios that shaped this release. We’re looking forward to what you build with 2.0, especially as more teams adopt self-hosted MCP servers to bring agentic workflows closer to their systems, policies, and day-to-day engineering practices.
The post Announcing Azure MCP Server 2.0 Stable Release for Self-Hosted Agentic Cloud Automation appeared first on Azure SDK Blog.
[Image: Newly updated Windows Insider Settings screen showing the new Experimental and Beta channels]
For most Insiders, picking the Beta or Experimental channel will be all you need to get set up, but for those who want to go further, we are adding an advanced option to pick the Windows core version compatible with your hardware. Most users will see these options as 25H2 or 26H1 builds.
The Experimental channel will also contain a further Future Platforms option which is our earliest preview build for Windows and is not aligned to a retail version of Windows. This is aimed at users who are looking to be at the forefront of platform development. Insiders looking for the earliest access to features should remain on a version aligned to a retail build.
[Image: Advanced settings showing the ability to pick Windows core version]
Release Preview will continue to be an advanced option aimed at commercial customers and Insiders who want early access to production builds in the days leading up to broad release. To select Release Preview, you will need to enable it under ‘Advanced Options’, but its content remains unchanged. We’re actively talking with our commercial Insiders about how to make it better, and we want to hear from you.
[Image: The new Feature flags screen in Settings]