
Introduction to API Management


What Is API Management?

API management, a critical aspect of modern digital architecture, involves overseeing the lifecycle of application programming interfaces (APIs) in a secure, scalable environment. It is the process of creating, publishing, maintaining and securing APIs, the interfaces and protocols through which application software is built and integrated.

API management is integral to businesses seeking to enhance operations and connectivity in the digital realm. It ensures that APIs, serving as bridges between different software applications, are well-organized, secure and effectively used to unlock digital potential.

As the digital landscape has evolved with advancements like cloud computing, mobile applications, microservices, cloud native applications, serverless and the Internet of Things (IoT), the need for efficient API management has become more pronounced.

With the rise of generative AI (GenAI), the importance of API management has grown. Platforms such as Anthropic, Google and OpenAI expose sophisticated foundation models as APIs. Similar to these platforms, enterprises are considering providing GenAI services to developers via internal APIs to infuse intelligence into applications, which makes API management critical to these implementations.

Sophisticated API management tools and platforms have emerged, enabling organizations to manage API lifecycles from creation to deprecation. These tools provide functionalities for defining, securing, publishing and analyzing APIs, ensuring that they align with business goals and industry standards.

This section aims to provide an in-depth look into API management, elucidating its significance, components and practices, and how it shapes the interaction between diverse software systems in our increasingly connected world.

Core Concepts of API Management

API management is a multifaceted field that involves a range of tools and practices essential for the effective handling of APIs. Four key components at the heart of API management are API gateways, API publishing tools, API governance and developer portals or API stores.

API Gateways

API gateways are pivotal in managing both traffic and security for APIs. They act as the frontline interface between APIs and the users, handling incoming requests and directing them to the appropriate services. API gateways enforce policies such as rate limiting and authentication, ensuring secure and controlled access to API functions.

Furthermore, they can transform and route requests, collect analytics data and provide caching capabilities. This functionality is essential for optimizing API performance and reliability, making API gateways an indispensable part of API management solutions.
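One policy a gateway typically enforces is per-client rate limiting, often implemented as a token bucket keyed by API key. The Python sketch below is a minimal illustration of that idea; the `gateway_allows` helper and its default limits are invented for this example, not a real gateway's API:

```python
import time

class TokenBucket:
    """Per-client token bucket: sustains `rate` requests/sec, bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key, checked before routing a request upstream.
buckets = {}

def gateway_allows(api_key, rate=5, capacity=10):
    bucket = buckets.setdefault(api_key, TokenBucket(rate, capacity))
    return bucket.allow()
```

A real gateway would apply this check before authentication-passed requests are forwarded, and would usually keep the counters in shared storage so limits hold across gateway replicas.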

API Publishing Tools

These tools form the backbone of API creation and maintenance. API publishing tools enable developers to define APIs, often using standards like OpenAPI or RAML, and generate comprehensive documentation. They govern API usage through various policies, ensuring APIs are used as intended and in compliance with any regulations.
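As a rough illustration of what such a definition contains, here is a minimal OpenAPI 3.0 document expressed as a Python dict; the Orders endpoint and its fields are hypothetical, and real definitions are usually authored in YAML or JSON:

```python
import json

# A minimal OpenAPI 3.0 description of a single hypothetical endpoint.
openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "summary": "Fetch a single order",
                "parameters": [{
                    "name": "orderId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "The requested order"}},
            }
        }
    },
}

# Publishing tools serve the definition as JSON/YAML to drive docs and client SDKs.
spec_json = json.dumps(openapi_doc)
```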

These tools also facilitate the testing and debugging of APIs, including security testing, and manage the deployment of APIs across different environments. The lifecycle management of APIs, from development to retirement, is significantly streamlined with the use of these tools.

API Governance

API governance is the process of setting policies and guidelines that help developers collaborate and keep APIs consistent. Effective governance ensures businesses get the most value from their API investment. Its purpose is to standardize APIs so that they are complete, compliant and consistent.

Effective API governance enables organizations to identify and mitigate API-related risks, including performance concerns, compliance issues and security vulnerabilities. API governance is complex and involves security, technology, compliance, utilization, monitoring, performance and education. Organizations can make their APIs secure, efficient, compliant and valuable to users by following best practices in these areas.

Developer Portals and API Stores

Developer portals and API stores play a critical role in community engagement and the broader adoption of APIs. These platforms serve as centralized hubs where developers can find all the necessary resources related to an API, such as documentation, tutorials, sample code and SDKs. They often include interactive consoles for testing APIs and mechanisms for users to subscribe to APIs and manage their access keys.

By providing a comprehensive and user-friendly interface, developer portals and API stores facilitate easier and more efficient use of APIs, fostering a community of developers around them.

In summary, API gateways, publishing tools, governance and developer portals/stores are the cornerstones of effective API management. They collectively ensure that APIs are not only functional and secure but also well-documented and accessible, leading to better integration, more innovative applications, and a thriving ecosystem of developers and users.

Advanced Features in API Management


The advancement of API management encompasses a range of features that enhance the functionality, profitability and security of APIs.

Reporting and Analytics

Effective API management requires comprehensive tools for monitoring and analyzing API usage. Reporting and analytics tools provide vital insights into API performance, usage patterns and efficiency. They track metrics like the number of API calls, response times and data throughput.

This data is crucial for identifying trends, optimizing performance and making informed decisions about API scaling and improvements. Real-time monitoring capabilities also enable quick responses to potential issues, ensuring high availability and reliability of API services.
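As a rough sketch of the metrics such tools compute, the Python snippet below summarizes raw per-API latency samples into a call count, average and 95th-percentile response time; the `summarize` helper is hypothetical, not a real analytics API:

```python
def summarize(latencies_ms):
    """Summarize latency samples into the metrics API dashboards typically show."""
    samples = sorted(latencies_ms)
    n = len(samples)
    # Nearest-rank 95th percentile: the latency 95% of calls stayed under.
    p95 = samples[min(n - 1, int(0.95 * n))]
    return {
        "calls": n,
        "avg_ms": sum(samples) / n,
        "p95_ms": p95,
    }
```

In practice these aggregates are computed continuously over time windows, so a spike in p95 latency can trigger an alert before users notice degradation.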

API Monetization Strategies

Monetizing APIs has become a strategic business model for many organizations. API monetization involves setting up pricing models based on usage, functionality or other criteria. It can include tiered pricing plans, freemium models or charges based on the number of API calls or data volume. Effective monetization strategies require a balance between offering value to users and generating revenue. API management tools for managing billing, invoicing and payment collection are integral to this process, facilitating smooth financial transactions.
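A graduated, usage-based plan like the ones described above can be sketched in a few lines of Python; the tier boundaries and prices below are invented for illustration:

```python
import math

# Hypothetical graduated plan: first 1,000 calls free, the next calls up to
# 100,000 at $0.002 each, and anything beyond that at $0.001 per call.
TIERS = [
    (1_000, 0.0),          # free tier
    (100_000, 0.002),      # paid tier up to 100k total calls
    (math.inf, 0.001),     # volume tier
]

def monthly_charge(calls):
    """Bill each call at the rate of the tier it falls into (graduated pricing)."""
    charge, lower = 0.0, 0
    for upper, price in TIERS:
        if calls > lower:
            billable = min(calls, upper) - lower
            charge += billable * price
        lower = upper
    return round(charge, 2)
```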

API Security and Compliance

Security is paramount in API management. Advanced security features include authentication mechanisms like OAuth, API keys and JWT (JSON Web Tokens) to control access. Encryption, both in transit and at rest, ensures data integrity and confidentiality.
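To make the JWT mechanism concrete, here is a stdlib-only sketch of HS256 signing and verification. This is illustrative: real services should use a vetted library (such as PyJWT) and also validate claims like expiry and audience, which this sketch omits:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return (header + b"." + body + b"." + sig).decode()

def verify_hs256(token: str, secret: bytes) -> dict:
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = b"=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))
```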

Additionally, APIs must comply with various regulatory standards, such as GDPR for data protection. Compliance involves implementing privacy policies, data handling procedures and regular audits. Managing these aspects is critical to maintaining trust and legal compliance.

The integration of these advanced features in API management platforms provides a comprehensive environment for managing the lifecycle of APIs, from development to deployment, ensuring they are profitable, secure and compliant with relevant standards.

API Management in Practice


Implementing effective API management plays a transformative role in various industries, with each sector leveraging it to meet unique challenges and objectives. Let’s explore some real-world applications and industry-specific uses of API management.

Case Studies and Successful Implementations

Numerous organizations have successfully implemented API management strategies, leading to significant improvements in their operations and services. For instance, a major retail company might use API management to streamline its supply chain, enhancing communication between internal systems and external suppliers.

In another example, a technology firm could deploy API management to securely expose APIs to external developers, fostering innovation and expanding its ecosystem.

Industry-Specific Applications

The application of API management varies across different industries, each with its unique requirements:

  • Finance: In the finance sector, API management is crucial for integrating various banking services and ensuring secure data exchanges. It enables banks to offer innovative services like open banking, where third-party providers access financial information through APIs to create new financial products.
  • Healthcare: Healthcare organizations use API management to securely manage patient data, comply with regulations like HIPAA and facilitate interoperability between different healthcare systems. This leads to improved patient care and streamlined operations.
  • Retail: In retail, API management helps integrate e-commerce platforms with various services like payment gateways, inventory management systems and customer relationship management tools. This integration is key to providing a seamless shopping experience.
  • Telecommunications: Telecom companies use API management to offer value-added services, manage network operations efficiently and provide better customer service through various applications and services accessed via APIs.

In each of these sectors, API management is not just about technology; it’s about transforming business models, enhancing customer experiences and driving innovation. The adaptability of API management to different industry needs underscores its versatility and effectiveness as a tool for digital transformation.

Best Practices in API Management


A successful approach to API management involves strategic planning, performance optimization and stringent security measures. Here are some best practices to ensure effective API management:

Effective API Strategy

  • Clear objectives: Define clear goals for what the APIs are intended to achieve. This could range from enhancing internal operations to creating new revenue streams.
  • Stakeholder engagement: Involve all stakeholders, including developers, business units and partners to align API strategies with business objectives.
  • Lifecycle management: Implement robust API lifecycle management, from planning and design to deprecation, ensuring each stage is well-managed.

Performance Optimization

  • Efficient design: Design APIs to handle requests efficiently, avoiding unnecessary processing.
  • Scalability: Ensure APIs can handle increased loads with scalable infrastructure.
  • Monitoring and analytics: Continuously monitor API performance using analytics tools to identify and address issues proactively.

Security Best Practices

  • Access control: Implement strong authentication and authorization mechanisms, such as OAuth, to control access to APIs.
  • Data encryption: Use encryption for data in transit and at rest to protect sensitive information.
  • Regular audits: Conduct regular security audits and updates to stay ahead of potential vulnerabilities.

Adhering to these best practices ensures that API management contributes positively to an organization’s technological infrastructure and business goals.

Learning Resources and Community for API Management


API Educational Resources

  • Books: “API Management: An Architect’s Guide to Developing and Managing APIs for Your Organization” by Brajesh De provides comprehensive insights.
  • Online courses: Platforms like Coursera and Udemy offer a range of courses covering various aspects of API management.
  • Tutorials: For practical learning, websites like IBM Developer and Apigee Edge Documentation provide valuable tutorials.

Community and Support for APIs

  • Forums: Engage in discussions on platforms like Stack Overflow for API management topics.
  • Conferences: The API World Conference is an excellent event for professionals to network and learn.
  • Professional networks: LinkedIn hosts groups focused on API management, fostering professional connections and knowledge sharing.

Continuous Learning

Keeping up with the dynamic field of API management is vital. Regularly engaging with industry blogs, podcasts, newsletters and webinars can provide continuous learning and insights into the latest developments in API management.

The Future of API Management


Navigating the Digital Shift

As we delve deeper into the digital age, API management emerges as a pivotal element in the technological revolution. The integration of APIs into diverse business models is not just a trend but a necessity, facilitating smoother interactions between disparate systems and technologies.

Advancements on the Horizon

Looking forward, we anticipate significant advancements in API management. Key areas of development include:

  • AI-driven API analytics: Leveraging AI to enhance API analytics, providing deeper insights into usage patterns and efficiency.
  • Enhanced security protocols: Strengthening API security to combat evolving cyberthreats and protect sensitive data.
  • Robust lifecycle management tools: Improving tools for managing the API lifecycle, from creation to retirement, ensuring APIs remain relevant and effective.

The Role of The New Stack

At The New Stack, we recognize the importance of staying at the forefront of these developments. Our commitment is to keep our readers informed with the latest news and educational content in API management.

  • Breaking news and tutorials: From the latest breakthroughs in API technology to evolving practices and comprehensive tutorials, our platform is a reservoir of knowledge.
  • Expert insights: We bring insights from industry experts, offering in-depth analysis and forward-thinking perspectives.

Embracing Continuous Evolution

In the fast-evolving world of API management, continual improvement is key. We encourage our readers to embrace this evolution, refining their skills and adapting their strategies to navigate the future of digital technology successfully.

  • Skill development: Continuously enhancing API management skills to keep pace with technological advancements.
  • Strategic innovation: Encouraging innovative approaches to API management, driving success across various fields.

Shaping the Future

The future of API management is dynamic and holds immense potential. By staying informed and adaptable, professionals in this field can leverage APIs to drive significant innovation and success in their respective industries. At The New Stack, we are excited to be a part of this journey, guiding and informing our readers every step of the way.

The post Introduction to API Management appeared first on The New Stack.


.NET Modernization: GitHub Copilot Upgrade Eases Migrations


During the .NET Conf Focus on Modernization event, Microsoft demonstrated a powerful new tool: GitHub Copilot Upgrade for .NET with Agent Mode. Unlike previous solutions, this tool applies AI to comprehensively manage the entire upgrade process across multiple interdependent projects.

“GitHub Copilot Upgrade for .NET isn’t just making suggestions here, it’s actually guiding the entire .NET upgrade process and automating the changes necessary, all with minimal user input,” said McKenna Barlow, Microsoft’s product manager for .NET Tools, during the event. “One of the most exciting features of the tool is that you can now upgrade your entire solution in one go. So, no more project-by-project upgrades. No more fiddling around with tangled messes of all these dependencies.”

This represents a significant advancement over existing tools like the .NET Upgrade Assistant, which could only upgrade one project at a time, often leaving developers with broken dependencies and countless compatibility issues to resolve manually, Microsoft said.

Reluctance To Upgrade

Developers are often reluctant to upgrade their codebases, yet not updating carries consequences such as security risks and performance bottlenecks. Upgrading to the newest .NET version, by contrast, brings performance improvements, security enhancements and access to modern development tools.

Technical Debt or Innovation?

Indeed, staying on outdated frameworks is not just a technical debt problem but also a missed opportunity for innovation, the conference speakers explained.

Performance improvements include optimizations in runtime tools and libraries. Security enhancements provide protection against vulnerabilities with the latest security updates. Also, new features and APIs in each new release simplify development and enable building more innovative applications.

In addition, improved tooling experiences include hot reload, the C# Dev Kit, .NET MAUI and debugging support.

Why Modernization Matters

Beyond solving technical challenges, the tool addresses a broader issue: the opportunity cost of staying on outdated frameworks.

“The reality is that every new version of .NET brings really great improvements, including performance, stronger security, access to modern development tools, all that good stuff, and staying on outdated frameworks. It’s not just a technical debt problem. It’s actually a missed opportunity for you to be innovating faster,” explained Barlow.

These benefits include:

  • Performance enhancements for faster, more efficient applications
  • Critical security updates and vulnerability patches
  • New features and APIs that simplify development
  • Improved tooling experiences (hot reload, C# dev kit, MAUI debugging)
  • Continued support from Microsoft
  • Access to the latest community contributions and libraries

Compatibility and Support

Upgrading ensures compatibility with the latest technologies and platforms, ensuring continued support from Microsoft. Older versions eventually reach an end of life and won’t receive updates.

GitHub Copilot Upgrade for .NET is designed to make upgrading faster, smarter and easier, Barlow said. The tool helps build an upgrade plan, guides step by step and tracks progress, enabling developers to work at their own pace. It also automates changes necessary for upgrading, including analyzing projects, resolving dependencies and rewriting outdated code.

The Tool in Action

During the conference, Chet Husk, program manager for the .NET SDK, CLI, MSBuild and Templating Engine, demonstrated the upgrade process for a .NET app using GitHub Copilot Upgrade for .NET in both Visual Studio and a prototype using Visual Studio Code.

Husk demonstrated how the tool starts with defining the goal, generating a plan and executing the plan, automating as much of the upgrade process as possible. The tool prompts users for input when it encounters issues that require human intervention. Moreover, the tool learns from user interventions, applying fixes based on what it has learned, reducing the need for manual steps and improving accuracy over time.

During demonstrations, Husk also showcased how the tool intelligently handles complex scenarios. One example involved upgrading a WPF application:

“Because the binary formatter APIs were deprecated, the tool has changed the usage of serialization over to use System.Text.Json instead. This is an automatic thing… This is an example of the tool getting you to something buildable, and then you can follow back and pick up on that thread.”

He also showed the tool’s learning capabilities. After manually fixing a namespace casing issue once, the tool recognized similar problems elsewhere in the application and automatically applied the same fix, demonstrating how it becomes more efficient as you use it.

Keeping You in Control

Despite its automation capabilities, GitHub Copilot Upgrade for .NET isn’t a black box. “One of the most powerful aspects of GitHub Copilot Upgrade for .NET is that it’s not a black box… it’s actively keeping you in the loop throughout the upgrade process,” noted Barlow.

The tool creates Git branches and checkpoints along the way and provides detailed reports of every change made.

As Husk summarized after his demonstration: “This took out the drudgery part of the upgrade. Now we have to do the parts that are interesting.”

Looking Ahead

By leveraging AI to handle the complex, tedious aspects of framework upgrades, the tool promises to dramatically reduce the time and effort required to keep applications current.

For .NET developers who have been putting off modernization due to its complexity, this tool might just be the breakthrough they’ve been waiting for.

The tool is currently in preview, with Microsoft inviting developers to sign up to be among the first to access it.

Roadmap

During the .NET Conf presentation, the team revealed a packed roadmap designed to further streamline the upgrade experience.

Enhanced Configurability

Perhaps the most significant upcoming enhancement is the addition of granular control over virtually every aspect of the upgrade process. Soon, developers will be able to:

  • Guide NuGet package updates with precision
  • Choose exactly which package replacements to apply
  • Customize code rewriting rules to align with team standards
  • Fine-tune transformations according to specific project needs

This level of configurability will ensure the tool can adapt to the unique requirements of different development teams and codebases, balancing between opinionated defaults for simplicity and custom options for complex scenarios.

Platform-Agnostic Approach

While the initial demonstrations focused on Visual Studio integration, Microsoft is actively working on making the tool more accessible across different environments. The team is experimenting with Model Context Protocol (MCP) support, as was briefly shown in the VS Code demo.

“We know that teams work everywhere, on prem, in the cloud, even on specialized platforms,” Barlow explained. “So, we’re looking to build platform-agnostic tooling so that you can get that same reliable upgrade capability no matter where your code actually lives.”

This platform-agnostic vision could eventually enable developers to run upgrades directly on servers, integrate them into CI/CD pipelines or use them alongside any IDE — making the technology more accessible to teams with diverse development environments.

Upgrading at Scale

Perhaps the most ambitious aspect of the roadmap addresses the challenge of upgrading at enterprise scale. For organizations with dozens or even hundreds of repositories, Microsoft is planning fleet-wide orchestration capabilities.

This would enable teams to define upgrade configurations once and trigger them across entire codebases, with centralized monitoring to track progress. The goal is to transform what has traditionally been a manual, repository-by-repository ordeal into a streamlined, automated process that can be executed consistently across an organization.

“We do know that upgrading at scale and in an automated fashion is something that a lot of teams need a lot of help with,” Barlow said. “So, we definitely have this one on our radar for the future.”

Expanding Access

While these enhancements are still in development, Microsoft is expanding access to the tool beyond its internal teams. They’ve opened a private preview for third-party customers who want to experience the current version and help shape its future.

The post .NET Modernization: GitHub Copilot Upgrade Eases Migrations appeared first on The New Stack.


Tired of all the restarts? Get hotpatching for Windows Server

1 Share

Hotpatching for Windows Server 2025, made available in preview in 2024, will become generally available as a subscription service on July 1st, 2025. One of the key updates in the latest release of Windows Server 2025 is the addition of hybrid and multicloud capabilities, aligned with Azure’s adaptive cloud approach. With hotpatching, we are taking what was previously an Azure-only capability and now making it available to Windows Server machines outside of Azure through Azure Arc.

We encourage you to try hotpatching now while it’s still free of charge in preview, before subscription pricing starts this July. Read on to learn more.

How does hotpatching work?

Hotpatching is a new way to install updates in Windows Server 2025 that does not require a reboot after installation: it patches the in-memory code of running processes, so the processes never need to restart.

Some of the benefits of hotpatching include the following:

  • Higher availability with fewer reboots.
  • Faster deployment of updates, as the packages are smaller, install faster and allow easier patch orchestration with Azure Update Manager (optional).
  • Hotpatch packages install without the need to schedule a reboot, so they can be applied sooner. This narrows the “window of vulnerability” that opens when an administrator delays the update and restart that a Windows security update would normally require.

Hotpatching is available at no charge to preview now, but starting in July with the subscription launch, hotpatching for Windows Server 2025 will be offered as a subscription at $1.50 USD per CPU core per month.

With hotpatching, you will still need to restart your Windows Servers about four times yearly for baseline updates, but hotpatching can save significant time and ease the inconvenience of a traditional “patch Tuesday.” 

Hotpatching for Windows Server Datacenter: Azure Edition has been available for years. In fact, our own Xbox team has used it to reduce processes that used to take the team weeks down to just a couple of days. With Windows Server 2025, we have been able to deliver these efficiencies to on-premises and non-Azure servers through connection with Azure Arc.

What are the requirements?

To use hotpatching outside of Azure, such as on-premises or in multicloud environments, you must be using Windows Server 2025 Standard or Datacenter, and your server must be connected to Azure Arc. You will also need to subscribe to the Hotpatch service.

Important: If you are currently using Windows Server 2025 and opted in to try the hotpatching service through Azure Arc in preview, you will need to disenroll on or before June 30 if you wish to end your preview and not subscribe to the service. Otherwise, your subscription will start automatically in July.

If you’re running on Azure IaaS, Azure Local, or Azure Stack you can still use hotpatching as part of functionality of Windows Server Datacenter: Azure Edition. This feature is included both with Windows Server 2022 Datacenter: Azure Edition and Windows Server 2025 Datacenter: Azure Edition. There are no new requirements in this case, i.e. you don’t need to Arc-enable those machines, and there’s no additional price for it. 

How do I enable hotpatching?

First, if your server is not yet connected to Azure Arc, you can do so by following these steps. Azure Arc is available at no extra cost and lets you manage physical servers, and virtual machines hosted outside of Azure, on your corporate network, or other cloud providers. In addition to hotpatching, there are several paid Azure services you can access through Azure Arc, including Microsoft Defender for Cloud, Azure Monitor, and many others. For full details, refer to this documentation.

Once you are connected with Azure Arc, you will sign into the Azure Portal, go to Azure Update Manager, select your Azure Arc-enabled server, and select the hotpatching option as outlined in this documentation.

You can also manage your subscription to hotpatching through the Azure Portal as well.

What is the hotpatching schedule?

The hotpatch service provides up to eight hotpatches in a year. It follows a three-month cycle with the first month as a baseline month (monthly cumulative update) followed by two months of hotpatches. During baseline months the machines will need a reboot. The four planned baseline months are January, April, July and October.
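The cycle above can be sketched as a small Python helper. The baseline months and the $1.50 per-core monthly price come from this post; the function names are our own:

```python
BASELINE_MONTHS = {1, 4, 7, 10}  # January, April, July, October

def update_type(month: int) -> str:
    """Classify a month in the hotpatch cycle; a reboot is needed only at baselines."""
    return "baseline (reboot)" if month in BASELINE_MONTHS else "hotpatch (no reboot)"

def annual_cost(cores: int, price_per_core_month: float = 1.50) -> float:
    # Billing is monthly and flat across baseline and hotpatch months.
    return cores * price_per_core_month * 12

reboots = sum(1 for m in range(1, 13) if m in BASELINE_MONTHS)
hotpatch_months = 12 - reboots  # up to eight hotpatch months per year
```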

On rare occasions, for security reasons we may have to ship a non-hotpatch update during a hotpatch month which will also need a reboot. But the goal will be to provide up to eight hotpatches in a year. 

The Windows Server hotpatching subscription will be billed on a monthly basis, so your cost will be consistent throughout the year in both hotpatch and non-hotpatch months. 

Where to learn more about Windows Server

In addition to the documentation above, please check out our blog posts on Tech Community and attend our Windows Server Summit virtual event on April 29-30 and on-demand. We encourage you to try this new time-saving feature during this preview and start discovering all the time you’ll save!

And don’t forget…

As you may have heard at Ignite, hotpatching is also available for Windows 11 Enterprise. Learn more about eligibility and hotpatching for Windows clients here.


*Prices are in US dollars and are subject to change

The post Tired of all the restarts? Get hotpatching for Windows Server appeared first on Microsoft Windows Server Blog.


Kubernetes v1.33: User Namespaces enabled by default!


In Kubernetes v1.33 support for user namespaces is enabled by default. This means that, when the stack requirements are met, pods can opt-in to use user namespaces. To use the feature there is no need to enable any Kubernetes feature flag anymore!

In this blog post we answer some common questions about user namespaces. But, before we dive into that, let's recap what user namespaces are and why they are important.

What is a user namespace?

Note: Linux user namespaces are a different concept from Kubernetes namespaces. The former is a Linux kernel feature; the latter is a Kubernetes feature.

Linux provides different namespaces to isolate processes from each other. For example, a typical Kubernetes pod runs within a network namespace to isolate the network identity and a PID namespace to isolate the processes.

One Linux namespace that was left behind is the user namespace. It isolates the UIDs and GIDs of the containers from the ones on the host. The identifiers in a container can be mapped to identifiers on the host in a way where host and container(s) never end up in overlapping UID/GIDs. Furthermore, the identifiers can be mapped to unprivileged, non-overlapping UIDs and GIDs on the host. This brings three key benefits:

  • Prevention of lateral movement: As the UIDs and GIDs for different containers are mapped to different UIDs and GIDs on the host, containers have a harder time attacking each other, even if they escape the container boundaries. For example, suppose container A runs with different UIDs and GIDs on the host than container B. In that case, the operations it can do on container B's files and processes are limited: it can only read/write what a file allows to others, as it will never have owner or group permissions (the UIDs/GIDs on the host are guaranteed to be different for different containers).

  • Increased host isolation: As the UIDs and GIDs are mapped to unprivileged users on the host, if a container escapes the container boundaries, even if it runs as root inside the container, it has no privileges on the host. This greatly protects what host files it can read/write, which process it can send signals to, etc. Furthermore, capabilities granted are only valid inside the user namespace and not on the host, limiting the impact a container escape can have.

  • Enablement of new use cases: User namespaces allow containers to gain certain capabilities inside their own user namespace without affecting the host. This unlocks new possibilities, such as running applications that require privileged operations without granting full root access on the host. This is particularly useful for running nested containers.
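The mapping the kernel applies has the same shape as the entries in /proc/<pid>/uid_map: each entry maps a contiguous range of in-namespace IDs to a range of host IDs. As a rough illustration of that arithmetic (the map_id helper and the example ranges are ours, not part of any Kubernetes or kernel API):

```python
def map_id(inside_id, mappings):
    """Translate a namespace-internal UID/GID to its host value.

    Each mapping mirrors one /proc/<pid>/uid_map line:
    (inside_start, outside_start, length).
    """
    for inside_start, outside_start, length in mappings:
        if inside_start <= inside_id < inside_start + length:
            return outside_start + (inside_id - inside_start)
    return None  # unmapped IDs appear as the overflow ID (65534)

# Two pods, each granted a disjoint 64K range of host IDs:
pod_a = [(0, 65536, 65536)]
pod_b = [(0, 131072, 65536)]

print(map_id(0, pod_a))  # 65536: root in pod A is an unprivileged host UID
print(map_id(0, pod_b))  # 131072: root in pod B never overlaps with pod A
```

With disjoint ranges like these, UID 0 in one pod can never collide with any UID owned by another pod on the host, which is exactly what the lateral-movement argument above relies on.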

[Figure: User namespace IDs allocation — IDs 0-65535 are reserved for the host; pods use higher IDs]

If a pod running as the root user without a user namespace manages to break out, it has root privileges on the node. If some capabilities were granted to the container, those capabilities are valid on the host too. None of this is true when using user namespaces (modulo bugs, of course 🙂).

Demos

Rodrigo created demos to understand how some CVEs are mitigated when user namespaces are used. We showed them here before (see here and here), but take a look if you haven't:

Mitigation of CVE 2024-21626 with user namespaces:

Mitigation of CVE 2022-0492 with user namespaces:

Everything you wanted to know about user namespaces in Kubernetes

Here we try to answer some of the questions we have been asked about user namespace support in Kubernetes.

1. What are the requirements to use it?

The requirements are documented here, but we will elaborate a bit more in the following questions.

Note this is a Linux-only feature.

2. How do I configure a pod to opt-in?

A complete step-by-step guide is available here. The short version is that you need to set the hostUsers: false field in the pod spec. For example:

apiVersion: v1
kind: Pod
metadata:
  name: userns
spec:
  hostUsers: false
  containers:
  - name: shell
    command: ["sleep", "infinity"]
    image: debian
Yes, it is that simple. Applications will run just fine, without any other changes needed (unless your application needs privileges on the host).

User namespaces allow you to run as root inside the container without having privileges on the host. However, if your application needs privileges on the host, for example an app that needs to load a kernel module, then you can't use user namespaces.

3. What are idmap mounts, and why do the file-systems used need to support them?

Idmap mounts are a Linux kernel feature that applies a mapping of UIDs/GIDs when accessing a mount. When combined with user namespaces, this greatly simplifies the support for volumes, as you can forget about the host UIDs/GIDs the user namespace is using.

In particular, thanks to idmap mounts we can:

  • Run each pod with different UIDs/GIDs on the host. This is key for the lateral movement prevention we mentioned earlier.
  • Share volumes with pods that don't use user namespaces.
  • Enable/disable user namespaces without needing to chown the pod's volumes.

Support for idmap mounts in the kernel is per file-system, and different kernel releases added idmap-mount support for different file-systems.

To find which kernel version added support for each file-system, you can check out the mount_setattr man page, or the online version of it here.

Most popular file-systems are supported; the notable absence is NFS, which isn't supported yet.

4. Can you clarify exactly which file-systems need to support idmap mounts?

The file-systems that need to support idmap mounts are all the file-systems used by a pod in the pod.spec.volumes field.

This means: for PV/PVC volumes, the file-system used in the PV needs to support idmap mounts; for hostPath volumes, the file-system used in the hostPath needs to support idmap mounts.

What does this mean for secrets/configmaps/projected/downwardAPI volumes? For these volumes, the kubelet creates a tmpfs file-system. So, you will need a 6.3+ kernel to use these volumes, as that is when tmpfs gained idmap-mount support (if you consume them as environment variables instead, that is fine).
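If you want to check a node's kernel against that 6.3 threshold programmatically, a minimal Python sketch could look like this (the parsing logic is illustrative; only the 6.3 figure comes from this post):

```python
import platform
import re

def kernel_at_least(release: str, major: int, minor: int) -> bool:
    """Compare a kernel release string like '6.1.0-18-amd64' to a threshold."""
    m = re.match(r"(\d+)\.(\d+)", release)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= (major, minor)

# tmpfs gained idmap-mount support in kernel 6.3
print(kernel_at_least(platform.release(), 6, 3))
print(kernel_at_least("6.1.0-18-amd64", 6, 3))  # False
print(kernel_at_least("6.8.0-rc1", 6, 3))       # True
```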

And what about emptyDir volumes? Those volumes are created by the kubelet, by default under /var/lib/kubelet/pods/. You can also use a custom directory for this, but what needs to support idmap mounts is the file-system backing that directory.

The kubelet creates some more files for the container, like /etc/hostname, /etc/resolv.conf, /dev/termination-log, /etc/hosts, etc. These files are also created in /var/lib/kubelet/pods/ by default, so it's important for the file-system used in that directory to support idmap mounts.

Also, some container runtimes may put some of these ephemeral volumes inside a tmpfs file-system, in which case you will need support for idmap mounts in tmpfs.
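One case where tmpfs is guaranteed regardless of the runtime is an emptyDir that requests medium: Memory, since that medium is tmpfs by definition. A hypothetical pod combining it with user namespaces would therefore fall under the 6.3 kernel requirement:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-emptydir
spec:
  hostUsers: false
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed, so idmap-mount support for tmpfs (kernel 6.3+) is required
```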

5. Can I use a kernel older than 6.3?

Yes, but you will need to make sure you are not using a tmpfs file-system. If you avoid that, you can easily use 5.19 (if all the other file-systems you use support idmap mounts in that kernel).

It can be tricky to avoid using tmpfs, though, as we just described above. Besides having to avoid those volume types, you will also have to avoid mounting the service account token. Every pod has it mounted by default, and it uses a projected volume that, as we mentioned, uses a tmpfs file-system.
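One way to avoid that default token mount is the automountServiceAccountToken field (a sketch; note the pod then has no API server credentials unless you supply them another way):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-no-token
spec:
  hostUsers: false
  automountServiceAccountToken: false  # skips the tmpfs-backed projected token volume
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
```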

You could even go lower than 5.19, all the way down to 5.12. However, your container rootfs probably uses an overlayfs file-system, and support for overlayfs was added in 5.19. We wouldn't recommend using a kernel older than 5.19, as not being able to use idmap mounts for the rootfs is a big limitation. If you absolutely need to, check this blog post Rodrigo wrote some years ago about tricks to use user namespaces when you can't support idmap mounts on the rootfs.

6. If my stack supports user namespaces, do I need to configure anything else?

No, if your stack supports it and you are using Kubernetes v1.33, there is nothing you need to configure. You should be able to follow the task: Use a user namespace with a pod.

However, in case you have specific requirements, you may configure various options. You can find more information here. You can also enable a feature gate to relax the PSS rules.

7. The demos are nice, but are there more CVEs that this mitigates?

Yes, quite a lot, actually! Besides the ones in the demo, the KEP lists more CVEs you can check. That list is not exhaustive; there are many more.

8. Can you sum up why user namespaces is important?

Think about running a process as root, maybe even an untrusted process. Do you think that is secure? What if we limit it by adding seccomp and AppArmor, masking some files in /proc (so it can't crash the node, etc.), and some more tweaks?

Wouldn't it be better if we don't give it privileges in the first place, instead of trying to play whack-a-mole with all the possible ways root can escape?

This is what user namespaces do, plus some other goodies:

  • Run as an unprivileged user on the host without making changes to your application. Greg and Vinayak gave a great talk on the pains you can face when trying to run unprivileged without user namespaces. The pains part starts at this minute.

  • Because all pods run with different UIDs/GIDs, lateral movement protection is significantly improved. This is guaranteed with user namespaces (the kubelet chooses the IDs for you). In the same talk, Greg and Vinayak show that to achieve the same without user namespaces, they needed a quite complex custom solution. This part starts at this minute.

  • The capabilities granted are only valid inside the user namespace. That means that if a pod breaks out of the container, the capabilities are not valid on the host. We can't provide that without user namespaces.

  • It enables new use-cases in a secure way. You can run Docker in Docker, unprivileged container builds, Kubernetes inside Kubernetes, etc., all in a secure way. Most of the previous solutions to do this required privileged containers or put the node at high risk of compromise.

9. Is there container runtime documentation for user namespaces?

Yes, we have containerd documentation. This explains different limitations of containerd 1.7 and how to use user namespaces in containerd without Kubernetes pods (using ctr). Note that if you use containerd, you need containerd 2.0 or higher to use user namespaces with Kubernetes.

CRI-O doesn't have special documentation for user namespaces; it works out of the box.

10. What about the other container runtimes?

No other container runtime that we are aware of supports user namespaces with Kubernetes. That sadly includes cri-dockerd too.

11. I'd like to learn more about it, what would you recommend?

Rodrigo did an introduction to user namespaces at KubeCon 2022:

Also, this aforementioned presentation at KubeCon 2023 can be useful as a motivation for user namespaces:

Bear in mind the presentations are some years old; some things have changed since then. Use the Kubernetes documentation as the source of truth.

If you would like to learn more about the low-level details of user namespaces, you can check man 7 user_namespaces and man 1 unshare. You can easily create namespaces and experiment with how they behave. Be aware that the unshare tool has a lot of flexibility, and with that come options to create incomplete setups.

If you would like to know more about idmap mounts, you can check its Linux kernel documentation.

Conclusions

Running pods as root is not ideal, and running them as non-root is also hard with containers, as it can require a lot of changes to the applications. User namespaces are a unique feature that lets you have the best of both worlds: run as non-root, without any changes to your application.

This post covered what user namespaces are, why they are important, some real-world examples of CVEs mitigated by user namespaces, and some common questions. Hopefully this post helped you eliminate any remaining doubts, and you will now try user namespaces (if you haven't already!).

How do I get involved?

You can reach SIG Node by several means:

You can also contact us directly:

  • GitHub: @rata @giuseppe @saschagrunert
  • Slack: @rata @giuseppe @sascha
alvinashcraft · 2 hours ago · Pennsylvania, USA

PPP 457 | Virtual Communication Mistakes You Don’t Know You’re Making, with Andrew Brodsky

1 Share

Summary

In this episode, Andy talks with Andrew Brodsky about his new book, Ping: The Secrets of Successful Virtual Communication. The discussion covers key topics, such as the impact of typos, the use of emojis and exclamation points, and the importance of timely responses.

Andrew highlights the pros and cons of different communication mediums, offering evidence-based recommendations on when to use email versus meetings, and the benefits of cameras on or off during virtual meetings. Practical advice is given on mimicking language to build trust, improving small talk to enhance virtual hallway interactions, and preparing younger generations for successful virtual communication.

If you're looking for insights on how to lead and communicate more effectively when you're not face-to-face, this episode is for you!

Sound Bites

  • “Regardless of whether you work from home, the office, hybrid, anywhere in between, we're now all virtual communicators.”
  • “When you're writing an email, try to read the message in the opposite tone you intended.”
  • “The shorter meetings are and the fewer participants there are, the more engaging they are.”
  • “If it's a new relationship and you're trying to build trust, you're probably going to want your camera on.”
  • “Sometimes a 30-second text can build more team cohesion than a full hour of meeting time.”
  • “We assume the recipient wants a response a lot quicker than they do.”
  • “Typos in angry emails made the person seem angrier, and in happy emails made them seem happier.”

Chapters

  • 00:00 Introduction
  • 01:39 Start of Interview
  • 01:56 Andrew's Personal Story and Research
  • 03:16 Defining Virtual Communication
  • 04:53 The P of Ping: Perspective Taking
  • 08:15 In-Person vs. Virtual Communication
  • 11:14 Meeting Dynamics and Camera Use
  • 16:09 Email Urgency and Response Expectations
  • 21:34 Impact of Typos in Virtual Communication
  • 22:58 Understanding Typos and Ambiguity in Virtual Communication
  • 24:42 Using AI and Tools for Effective Communication
  • 25:47 The Rise of Voice Notes and Their Impact
  • 27:40 Emojis, Exclamation Points, and Language Mimicry
  • 30:04 Bringing Small Talk into Virtual Interactions
  • 32:48 Preparing Kids for Virtual Communication
  • 35:13 End of Interview
  • 35:40 Andy's Comments After the Interview
  • 41:42 Outtakes

Learn More

You can learn more about Andrew and his book at ABrodsky.com.

For more learning on this topic, check out:

  • Episode 407 with Ben Guttman about his book Simply Put. It's an intriguing book on how to design clear messages.
  • Episode 332 with Kevin Eikenberry and Wayne Turmel about their book on virtual teams.
  • Episode 237 with Nick Morgan about his book on virtual communication.

Thank you for joining me for this episode of The People and Projects Podcast!

Talent Triangle: Power Skills

Topics: Virtual Communication, Email Etiquette, Remote Work, Leadership, Team Cohesion, Productivity, Small Talk, Emotional Intelligence, AI Tools, Generational Differences

The following music was used for this episode:

Music: The Fantastical Ferret by Tim Kulig
License (CC BY 4.0): https://filmmusic.io/standard-license

Music: Chillhouse by Frank Schroeter
License (CC BY 4.0): https://filmmusic.io/standard-license





Download audio: https://traffic.libsyn.com/secure/peopleandprojectspodcast/457-AndrewBrodsky.mp3?dest-id=107017

Okta at RSAC 2025

1 Share

In this bonus episode, we discuss Okta's events at the upcoming security conference, what kind of textile an Identity Fabric would be, and more.


See https://regionalevents.okta.com/rsaconference2025 for all the details on Okta's RSA presence this year.







Download audio: https://media.casted.us/49/64137c97.mp3