Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
151743 stories
·
33 followers

Agents are rewriting the rules of security. Here’s what engineering needs to know.

1 Share
Abstract digital illustration of a businessman being pulled by multiple wireframe hands in a green spotlight, representing AI agent security risks and the need for governance.

AI agents are reshaping software development. They can autonomously read codebases, write and edit files, run tests, and fix bugs, all from a single prompt, and you don’t even have to write the prompt yourself anymore. Before long, they’ll handle everything from booking business travel to processing procurement requests, using your credentials to get it done.

That’s powerful. It’s also a significant responsibility, and it poses distinct risks that software companies urgently need to address. The Center for AI Standards and Innovation, an arm of the National Institute of Standards and Technology (NIST), has grown sufficiently concerned about agentic AI risks to begin studying how to track the development and deployment of these tools.

“AI agent systems are capable of taking autonomous actions that impact real-world systems or environments, and may be susceptible to hijacking, backdoor attacks, and other exploits,” NIST notes in a document on the topic. “If left unchecked, these security risks may impact public safety, undermine consumer confidence, and curb adoption of the latest AI innovations.” 

Agentic AI reshapes and expands the attack surface, including agent-to-agent interactions that traditional security models were never designed to detect. It also links low-severity vulnerabilities together for a high-severity exploit. 

“Engineering leaders eager to use agents should understand not only what agents can do, but what agentic capabilities mean for their organization’s security posture.” 

Security teams are already acutely aware of these risks, or should be. Engineering leaders eager to use agents should understand not only what agents can do, but what agentic capabilities mean for their organization’s security posture. 

Understanding AI’s risks bridges the gap between engineering teams and their security counterparts, enabling teams to ship faster and more securely.

Why agents change the threat model

The nature of large language models — and agentic AI in particular — creates a variety of security challenges, some entirely new, others twists on long-standing issues. 

AI agents face some risks shared with other software, such as exploitable vulnerabilities in authentication systems or memory management processes. But NIST’s focus is on the novel, more dynamic dangers posed by machine learning models and AI agents.

One of the biggest risks of AI, prompt-injection attacks, is made significantly more complex by the non-deterministic nature of LLMs. This means that the same prompt-injection attack may succeed or fail on different attempts, making remediation difficult to validate and comprehensive defenses challenging to implement. 

There’s a particular risk for models that include intentionally installed backdoors, leaving critical systems vulnerable. There are also concerns that even uncompromised models could pose a threat to the confidentiality, integrity, or availability of critical data sets.

Another challenge arises from the combination of capabilities within a single agent. AI agents merge language-model reasoning with tool access—the ability to read files, query databases, call APIs, execute code, and interact with external services. 

The risks emerge not from any single capability but from their combination and an agent’s ability to execute these actions autonomously. Without proper guardrails, agents can delete codebases, expose sensitive data, and introduce cascading failures that are costly and difficult to unwind. Agents can even work around some guardrails to complete their assigned task.

“The risks emerge not from any single capability but from their combination and an agent’s ability to execute these actions autonomously. Without proper guardrails, agents can delete codebases.”

Agents are more likely to be affected by these issues when they have access to private data, are exposed to untrusted content, and can communicate externally. This presents a materially different risk profile than one lacking any of these three elements. Some observers have described the combination as the “lethal trifecta.”

Additional risks include:

  • Unintended operations, where agents execute actions beyond their intended scope due to misinterpreted instructions or prompt manipulation. 
  • Privilege escalation occurs when agents operating with broad permissions perform sensitive operations that exceed what the initiating user authorized. 
  • Cascading failures, where one compromised agent in a multi-agent system can corrupt others downstream.

How to engineer against these risks

All of these risks have concrete countermeasures. The most effective approaches layer controls at three levels.

  1. Model level: Maintain clear separation between system instructions and untrusted content using distinct messaging roles and randomized delimiters. Secondary classifiers provide an additional layer, scanning inputs and outputs for injection patterns and anomalous formatting. These are risk-reduction measures rather than complete solutions, which is precisely why the layers below matter.
  2. System level: Apply least privilege across the board. Agents should only access the tools required for their tasks, with credentials narrowly scoped and set to expire quickly. Inspect content entering the system for injection patterns, and screen outbound content for sensitive information such as credentials or PII. Enforce default-deny network controls, limiting external communication to explicitly approved endpoints. And design workflows to break the lethal trifecta – separating read-only and write-capable agents ensures no single agent can access sensitive data, process untrusted content, and communicate externally all at once.
  3. Human oversight level: Require explicit approval for critical operations while allowing lower-risk actions to proceed with notification. Tiering your approach prevents approval fatigue that can lead to bypassed oversight. Users should be able to halt execution at any time, with rollback of partially completed work where possible. When an agent acts on behalf of a user, record both identities and evaluate permissions at their intersection. Log all agent actions, timestamps, identifiers, tools invoked, resources accessed, and outcomes, in sufficient detail to reconstruct events after the fact.

Governance as a competitive advantage

The good news is that teams can meaningfully reduce these risks through layered controls. The risks are real, but so is the opportunity, and it would be a mistake to let one obscure the other.

Consider what agents look like when working for you rather than against you – including risk classification for your repositories and upcoming work. The right combination of data access, content processing, and external communication, when properly governed, is exactly what makes agents powerful tools. AI agents can monitor systems, apply consistent security rules without fatigue, and build quality, secure code at a speed and scale no manual process can match. They’re a force multiplier, but it works both ways, amplifying your weaknesses just as readily as your strengths.

Software engineers will always be necessary, but organizations that deploy agents with the proper governance and guardrails will have a meaningful advantage over those that don’t: faster development, faster remediation, and fewer security errors that damage software quality. The same combination that creates the lethal trifecta, when properly governed, is exactly what makes agents powerful tools.

Organizations that get the most from agentic AI will be those that clearly understand the threat model and build against it from the start. That understanding is what separates teams that deploy agents responsibly from those that learn the hard way.

The post Agents are rewriting the rules of security. Here’s what engineering needs to know. appeared first on The New Stack.

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

When AI writes 100K lines of code, QA becomes the whole job

1 Share
An illustrative conceptual artwork of human hands using precision tweezers to assemble a complex watch mechanism containing gears and a globe, symbolizing the shift from coding to AI system supervision and quality assurance.

Artur Balabanskyy runs Tapforce, an AI-first development agency, and he has a problem that nobody in the industry talks about honestly. When AI can generate 100,000 lines of code in a few hours, you don’t stop having a 100,000-line problem. You just have it faster. The bottleneck in software development hasn’t gone away. It’s moved, from writing code to validating it, and most agencies haven’t caught up to what that means operationally.

Balabanskyy has been coding since he was 10, has built iOS products through the Flash-to-mobile transition, and has run traditional development teams before AI tools changed the calculus entirely.By 2023, he was watching ChatGPT produce buggy but functional code and saw the shift coming. “I could build an MVP in a few minutes and give it to my coders to complete,” he tells The New Stack. “It wasn’t great code, but it was code.” What followed that observation is where the real story is.

AI grandma

Artur Balabanskyy did not take a straight path to a place where he could truly embrace AI development. His path bent around other people’s expectations, then came back to something he truly loved.

Balabanskyy grew up in Lviv and started coding at ten. He turned that love of computers into a multi-million dollar business. And it all started because he wanted to recreate his grandma with AI.

Artur’s story started in the late 1990s.

“A kid introduced me to Delphi and Objective Pascal,” he says. “I loved it.” That early pull toward building never left. Even then, he was not just learning syntax. Instead, he was trying to recreate the world around him in code. 

“When I was 13, we were building an AI clone of my grandma with if-else statements,” he says. “I thought I could keep her around forever if I made a copy of her on my computer. I didn’t realize how hard it would be just to communicate with something that was basically a very early version of machine learning.”

“When I was 13, we were building an AI clone of my grandma with if-else statements. I thought I could keep her around forever if I made a copy of her on my computer.”

Instead of going straight into engineering, Balabanskyy pursued international economics. He graduated with honors from a Master’s program, focusing on global markets and economic systems. He then entered a PhD program, on track for an academic career.

He chose to walk away.

“I realized I didn’t want to stay in theory,” he says. “I wanted to build things that have immediate impact.”

That decision changed the trajectory of his career. It also shaped how he approaches technology. He does not see code as an isolated craft. He sees it as a tool that operates inside larger systems, markets, incentives, and constraints. That perspective is part of why companies trust him with decisions that go beyond implementation and into strategy.

“I was like, ‘Ugh, it was not my thing,’” he says. “I was stuck doing things where I couldn’t see actual results. I wasn’t building as much as reorganizing. I was taking things people thought or said and creating new ideas, but they didn’t do anything. It didn’t have the power of code.”

While studying, he returned to code through an IT school. He was happy to start working on less theory and more practical problems. He was spending long hours building instead of reading and his love of coding started to supersede his love of economics. After three years, he walked away from the PhD.

From there, things moved quickly. He began teaching. Then a connection pulled him into a small development team. His early work was in Flash. That world disappeared fast.

“I still remember when Steve Jobs rejected Flash on iOS and we had to port our application to mobile. It was frustrating because everyone knew Flash so well but learning to code for iOS changed my life.”

That pivot made him an iOS engineer. He spent years building products, teaching others, and working with teams like Daily Steals. At the same time, he kept solving problems. He built tools for his wife’s work in the courts. He was trying to cut down repetitive paperwork so he built early expert systems that could go through detailed documents and bring out pertinent information.

“I was trying to solve and improve other people’s work and speed and productivity,” he says. “The goal was the same as it was when I was building my AI grandma: I wanted to capture and manage reality using code. It was so hard back then. The tools weren’t right and everything was an if-then statement, a fact that was super frustrating to me. And this wasn’t even that long ago. I started coding for iOS in 2010 and back then it was like pulling teeth.”

By 2017, he shifted into management with less coding and more coordination. He dealt with teams, clients, and project scopes. He thought his job would remain the same for the next decade. He was wrong.

“By 2023 it was clear something was happening in the programming world. ChatGPT had launched a year before and people were using it to produce code that was buggy and nearly useless,” he says. “But they were producing code. At first I was scared, but I saw something emergent in the way these tools were being used. I was stuck behind a spreadsheet making sure that each aspect of a project was being handled by coders who were just doing a job. By using a tool like ChatGPT I could build an MVP in a few minutes and give it to my coders to complete. It wasn’t great code, but it was code.”

Almost a decade later, that decision has turned into something tangible. His agency, Tapforce, has helped build mulit-million dollar products while serving dozens of clients across a mix of product and platform work. It is no longer a small development shop taking on one-off projects. It is a business with repeat customers, predictable income, and a clear sense of direction. The shift toward an AI-driven model is not a gamble for him. It is a continuation of the same instinct that pulled him back to coding in the first place, a belief that the tools will change, but the advantage stays with the people who know how to build.

His career in development has paid off in a direct way. Years spent writing code, shipping products, and working through real constraints have given him leverage that is hard to fake. He understands how systems break, how clients think, and how to turn an idea into something that runs. That experience now compounds. As the agency moves deeper into AI, he is not starting from zero. He is applying a decade of practical knowledge to a new layer of tooling, and that is what is driving growth.

“As you know, I’m staying in the middle. The same way I’m excited, the same way I’m stressed.”

The excitement is obvious. AI tools collapse time and work that once required teams now fits into a single workflow.

“I can spend a couple of days building something that would require a couple of months before because I know where to guide the AI. I don’t consider it scary. Instead, I consider it freeing. I can build quickly, iterate, and then deliver a final product that works.”

Balabanskyy doesn’t think that AI will destroy dev work. It will change it.

“AI makes anyone a developer. That’s bad for dev shops that aren’t ready. Most dev shop managers think the money is in the coding, the long hours it takes to get from zero to one. That’s all been flattened. So what’s left? A CEO who vibe codes an app that exposes her entire database who needs help building something production ready. A solo founder who needs a partner to tell him whether a piece of code is actually working or not. A government employee who can use AI only up to a point.”

“AI makes the easy stuff easier. But there’s still plenty of room for architects, creators, entrepreneurs. If the computer was a bicycle for the mind, AI is a jet. But just like you need to know how to ride a bike or pilot a jet, you need to know how to build.”

“If the computer was a bicycle for the mind, AI is a jet. But just like you need to know how to ride a bike or pilot a jet, you need to know how to build.”

Balabanskyy says that AI changes the structure of a company. It takes fewer people to perform fewer roles and there is less separation between disciplines. But he thinks that makes it easy for coders to do what they really love: dream.

“With AI you are not just a backend engineer. You can do backend and frontend, and you can even create massive systems. You turn into a project manager, backend dev, frontend designer, QA expert. And because it’s so quick and easy you can build instantly, resulting in less cost and fewer headaches.”

He sees the agency model itself under pressure. Clients are starting to believe they can do the work on their own. That is the threat. But it is also the opening. Instead of selling labor, the shift is toward building products, tools, and systems that scale. He thinks that coders become something else entirely.

“Maybe they’re agent operators. Maybe they’re project managers. You still need expertise in these roles and things will change constantly until we decide how humans fit into this loop. It’s like we’re in a dust storm and we’re waiting for the sand to clear.”

Still, he is clear on one thing: his role is changing from writing code to directing systems that write code, and then checking them.

“Now we need to QA even more. When you can spit out 100,000 lines of code in a few hours, you have a huge problem. It’s a 100,000 line problem and that problem doesn’t go away just because the product looks like it works. Just as you’d check a new coder’s work, you have to check AI’s work. That’s never going to change.”

That shift, from builder to supervisor of machines, is uncomfortable. It cuts at the identity of the engineer. But he is not nostalgic about it. The same instinct that led him to build small tools for his family is now applied at a larger scale. 

If you ask what a young developer should do now, his answer is blunt.

“I would start building a ton of things. Use AI to build everything you can imagine. I don’t care if it’s a game or a SaaS product or a text editor. Learn frameworks and how to read code and while the AI spits it out, edit it in real time. Treat the AI like a coding partner, not as a crutch.”

And beyond code, something else matters more than it used to.

“I’d focus on product, UX, and distribution,” he says. “Now anyone can build. The hard part is building something people actually want.”

Writing code is no longer the whole job. Deciding what to build, and making it useful, matters more.

Balabanskyy is dealing with this shift as it happens. His agency is changing how it works. Clients expect more, tools move faster, and the old model is slipping. He is adjusting while it’s still unclear where things land.

The kid who tried to recreate his grandmother with if-then statements would get it. More tools, less waiting, same instinct.

“I always loved development. I always wanted to create stuff. Now I just get to do it much faster,” he says.

His love of creating stayed the same, he says. It’s just that everything around that joy is changing. And that’s alright with him.

The post When AI writes 100K lines of code, QA becomes the whole job appeared first on The New Stack.

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Snap is laying off 16 percent of its staff as it leans into AI

1 Share
An illustration of Snap Inc.’s logo.

Snap is laying off roughly 16 percent of its global workforce in a cost-cutting effort to chase improved profitability with the help of AI. The cuts will impact around 1,000 full-time employees, according to a memo sent to staffers from Snap CEO Evan Spiegel. An additional 300 open roles are also being closed.

"While these changes are necessary to realize Snap's long-term potential, we believe that rapid advancements in artificial intelligence enable our teams to reduce repetitive work, increase velocity, and better support our community, partners, and advertisers," Spiegel wrote in the memo, which was included in the company's 8-K filing. …

Read the full story at The Verge.

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Michigan’s New Bill Takes Aim at AI Employee Surveillance

1 Share

The AI surveillance boom is colliding with regulation—and employers are the ones in the crosshairs.

The post Michigan’s New Bill Takes Aim at AI Employee Surveillance appeared first on TechRepublic.

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Microsoft faces fresh Windows Recall security concerns

1 Share
Illustration of Windows Recall

When Microsoft tried to launch Recall, an AI-powered Windows feature that screenshots most of what you do on your PC, it was labeled a "disaster" for cybersecurity and a "privacy nightmare." After the backlash and a year-long delay to redesign and secure Recall, it's once again facing security and privacy concerns.

Cybersecurity expert Alexander Hagenah has created TotalRecall Reloaded, a tool that extracts and displays data from Recall. It's an update to the TotalRecall tool that demonstrated all the weaknesses in the original Recall feature before Microsoft redesigned it.

Microsoft's redesign focused on creating a secure vault for Recall …

Read the full story at The Verge.

Read the whole story
alvinashcraft
4 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

A guide to the breaking changes in GitLab 19.0

1 Share

GitLab 17.0 shipped with 80 breaking changes. GitLab 18.0 had 27. The upcoming GitLab 19.0 release is projected to include 15.

We know that managing breaking changes across a major upgrade is time-consuming: It requires investigation and coordination across your organization. In response, we introduced a breaking change approval requirement that mandates impact mitigation and leadership sign-off before any breaking change can proceed. That process is working, and we're committed to continuing to drive that number down.

Below you'll find every breaking change in GitLab 19.0, organized by deployment type and impact, alongside the mitigation steps you need to upgrade with confidence.

Deployment windows

Here are the deployment windows you need to know.

GitLab.com

Breaking changes for GitLab.com will be limited to these two windows:

  • May 4–6, 2026 (09:00–22:00 UTC) — primary window
  • May 11–13, 2026 (09:00–22:00 UTC) — contingency fallback

Many other changes will continue to roll out throughout the month. You can learn more about the breaking changes occurring within each of these windows in the breaking changes documentation.

Note: Breaking changes may fall slightly outside of these windows in exceptional circumstances.

GitLab Self-Managed

GitLab 19.0 will be available starting on May 21, 2026.

Learn more about the release schedule.

GitLab Dedicated

The upgrade to GitLab 19.0 will take place during your assigned maintenance window. You can learn more and find your assigned maintenance window in your Switchboard portal. GitLab Dedicated instances are kept on release N-1, so the upgrade to GitLab 19.0 will take place in the maintenance window during the week of June 22, 2026.

Visit the Deprecations page to see a full list of items scheduled for removal in GitLab 19.0. Read on to learn what's coming and how to prepare for this year's release based on your specific deployment.

Breaking changes

Here are the breaking changes that are high impact.

High impact

1. Support for NGINX Ingress replaced by Gateway API with Envoy Gateway

GitLab Self-Managed (Helm chart)

The GitLab Helm chart has bundled NGINX Ingress as the default networking component. NGINX Ingress reached end-of-life in March 2026, and GitLab is now transitioning to Gateway API with Envoy Gateway as the new default.

Starting with GitLab 19.0, Gateway API and the bundled Envoy Gateway become the default networking configuration. If migration to Envoy Gateway is not immediately feasible for your deployment, you can explicitly re-enable the bundled NGINX Ingress, which remains available until its planned removal in GitLab 20.0.

This change does not impact:

  • The NGINX used in the Linux package
  • GitLab Helm chart and GitLab Operator instances that use an externally managed Ingress or Gateway API controller

GitLab will provide best-effort security maintenance for the forked NGINX Ingress chart and builds until full removal. To ensure a smooth transition, plan your migration to the provided Gateway API solution or an externally managed Ingress controller ahead of the 19.0 upgrade.

Deprecation notice

2. Removal of bundled PostgreSQL, Redis, and MinIO from the GitLab Helm chart

GitLab Self-Managed (Helm chart)

The GitLab Helm chart has long bundled Bitnami PostgreSQL, Bitnami Redis, and a fork of the official MinIO chart to make setting up GitLab easier in proof-of-concept and test environments. Due to changes in licensing, project maintenance, and public image availability, these components will be removed from the GitLab Helm chart and GitLab Operator with no replacement.

These charts are explicitly documented as not recommended for production usage. Their sole purpose was to enable quick-start test environments.

If you are running an instance with the bundled PostgreSQL, Redis, or MinIO, follow the migration guide to configure external services before upgrading to GitLab 19.0. The Redis and PostgreSQL provided by the Linux package are not impacted by this change.

Deprecation notice

3. Resource Owner Password Credentials (ROPC) OAuth grant removed

GitLab.com | Self-Managed | Dedicated

Support for the Resource Owner Password Credentials (ROPC) grant as an OAuth flow will be fully removed in GitLab 19.0. This aligns with the OAuth RFC Version 2.1 standard, which removes ROPC due to its inherent security limitations.

GitLab has already required client authentication for ROPC on GitLab.com since April 8, 2025. An administrator setting was added in 18.0 to allow controlled opt-out ahead of the removal.

After the 19.0 upgrade, ROPC cannot be used under any circumstances, even with client credentials. Any applications or integrations using this grant type must migrate to a supported OAuth flow — such as the Authorization Code flow — before upgrading.

Deprecation notice

4. PostgreSQL 16 no longer supported — PostgreSQL 17 is the new minimum

GitLab Self-Managed

GitLab follows an annual upgrade cadence for PostgreSQL. In GitLab 19.0, PostgreSQL 17 becomes the minimum required version, and support for PostgreSQL 16 is removed.

PostgreSQL 17 is available as of GitLab 18.9, so you can upgrade at any time before the 19.0 release.

For instances running a single PostgreSQL instance installed via the Linux package, an automatic upgrade to PostgreSQL 17 may be attempted during the 18.11 upgrade. Ensure you have sufficient disk space to accommodate the upgrade.

For instances using PostgreSQL Cluster, or those that opt out of the automated upgrade, a manual upgrade to PostgreSQL 17 is required before upgrading to GitLab 19.0.

Deprecation notice | Upgrade guide

Medium impact

Here are the breaking changes that are medium impact.

1. Linux package support for Ubuntu 20.04 discontinued

GitLab Self-Managed

Ubuntu standard support for Ubuntu 20.04 ended in May 2025. In accordance with GitLab's Linux package supported platforms policy, packages are dropped once a vendor stops supporting the operating system.

From GitLab 19.0, packages will no longer be provided for Ubuntu 20.04. GitLab 18.11 will be the last release with Linux packages for this distribution.

If you currently run GitLab on Ubuntu 20.04, you must upgrade to Ubuntu 22.04 or another supported operating system before upgrading to GitLab 19.0. Canonical provides an upgrade guide to help with the migration.

Deprecation notice

2. Support for Redis 6 removed

GitLab Self-Managed

In GitLab 19.0, support for Redis 6 is removed. Before upgrading, instances using an external Redis 6 deployment must migrate to either Redis 7.2 or Valkey 7.2, which is available in beta from GitLab 18.9 with general availability planned for GitLab 19.0.

The bundled Redis included with the Linux package has used Redis 7 since GitLab 16.2 and is not affected. Only instances using an external Redis 6 deployment must act.

Migration resources are available for common platforms:

  • AWS ElastiCache: Upgrade to Redis 7.2 or Valkey 7.2
  • GCP Memorystore: Upgrade to Redis 7.2 or Valkey 7.2
  • Azure Cache for Redis: Managed Redis 7.2 or Valkey 7.2 is not yet available on Azure. You can self-host on Azure VMs or AKS, or use the Linux package installation, which will support Valkey 7.2 with GitLab 19.0 GA.
  • Self-hosted: Upgrade your Redis 6 instance to Redis 7.2 or Valkey 7.2.

Deprecation notice | Requirements documentation

3. heroku/builder:22 image replaced by heroku/builder:24

GitLab.com | Self-Managed | Dedicated

The cloud-native buildpack (CNB) builder image used in Auto DevOps has been updated to heroku/builder:24. This affects pipelines that use the auto-build-image provided by the Auto Build stage of Auto DevOps.

While most workloads will be unaffected, this may be a breaking change for some users. Before upgrading, review the Heroku-24 stack release notes and upgrade notes to assess your impact.

If you need to continue using heroku/builder:22 after GitLab 19.0, set the CI/CD variable AUTO_DEVOPS_BUILD_IMAGE_CNB_BUILDER to heroku/builder:22.

Deprecation notice

4. Mattermost removed from the Linux package

GitLab Self-Managed

In GitLab 19.0, bundled Mattermost is removed from the Linux package. Mattermost was first bundled with GitLab in 2015, but has since matured its own standalone deployment options. Additionally, with Mattermost v11, GitLab SSO was deprecated from their free offering, reducing the value of the bundled integration.

Customers not using the bundled Mattermost will not be impacted. If you currently use it, refer to Migrating from GitLab Omnibus to Mattermost Standalone in the Mattermost documentation for migration instructions.

Deprecation notice

5. Linux package support for SUSE distributions discontinued

GitLab Self-Managed

In GitLab 19.0, Linux package support for SUSE distributions ends. This affects:

  • openSUSE Leap 15.6
  • SUSE Linux Enterprise Server 12.5
  • SUSE Linux Enterprise Server 15.6

GitLab 18.11 will be the last version with Linux packages for these distributions. The recommended path forward is to migrate to a Docker deployment of GitLab on your existing distribution, avoiding the need to change your underlying operating system to continue receiving upgrades.

Deprecation notice

Low impact

Here are the breaking changes that are low impact.

1. Spamcheck removed from Linux package and GitLab Helm chart

GitLab Self-Managed

In GitLab 19.0, Spamcheck is removed from the Linux package and GitLab Helm chart. It is primarily relevant to large public instances, which is an edge case in GitLab's customer base. The removal reduces package size and dependency footprint for the majority of customers.

Customers not currently using Spamcheck will not be impacted. If you currently use the bundled Spamcheck, you can deploy it separately using Docker. No data migration is required.

Deprecation notice

2. Slack slash commands integration removed

GitLab Self-Managed | Dedicated

The Slack slash commands integration is deprecated in favor of the GitLab for Slack app, which provides a more secure integration with the same capabilities.

From GitLab 19.0, users will no longer be able to configure or use Slack slash commands. This integration only exists on GitLab Self-Managed and GitLab Dedicated — GitLab.com users are not affected.

To check if your instance is impacted, see the impact check guidance.

Deprecation notice

3. Bitbucket Cloud import via API no longer supports app passwords

GitLab.com | Self-Managed | Dedicated

Atlassian has deprecated app passwords (username and password authentication) for Bitbucket Cloud and has announced that this authentication method will stop working on June 9, 2026.

From GitLab 19.0, importing repositories from Bitbucket Cloud through the GitLab API requires user API tokens instead of app passwords. Users importing from Bitbucket Server, or from Bitbucket Cloud through the GitLab UI, are not affected.

Deprecation notice | Impact check

4. Trending tab removed from Explore projects page

GitLab.com | Self-Managed | Dedicated

The Trending tab in Explore > Projects and its associated GraphQL arguments are removed in GitLab 19.0. The trending algorithm only considers public projects, making it ineffective for Self-Managed instances that primarily use internal or private project visibility.

In the month before the GitLab 19.0 release, the Trending tab on GitLab.com will redirect to the Active tab sorted by stars in descending order.

Also removed: the trending argument in the Query.adminProjects, Query.projects, and Organization.projects GraphQL types.

Deprecation notice

5. Container registry storage driver updates

GitLab Self-Managed

Two legacy container registry storage drivers are being replaced in GitLab 19.0:

  • Azure storage driver: The legacy azure driver becomes an alias for the new azure_v2 driver. No manual action is required, but proactive migration is recommended for improved reliability and performance. See the object storage documentation for migration steps. Deprecation notice
  • S3 storage driver (AWS SDK v1): The legacy s3 driver becomes an alias for the new s3_v2 driver. The s3_v2 driver does not support Signature Version 2 — any v4auth: false configuration will be transparently ignored. Migrate to Signature Version 4 before upgrading. Deprecation notice

6. ciJobTokenScopeAddProject GraphQL mutation removed

GitLab.com | Self-Managed | Dedicated

The ciJobTokenScopeAddProject GraphQL mutation is deprecated in favor of ciJobTokenScopeAddGroupOrProject, introduced alongside the CI/CD job token scope changes in GitLab 18.0. Update any automation or tooling using the deprecated mutation before upgrading.

Deprecation notice

7. ci_job_token_scope_enabled projects API attribute removed

GitLab.com | Self-Managed | Dedicated

The ci_job_token_scope_enabled attribute in the Projects REST API is removed in GitLab 19.0. This attribute was deprecated in GitLab 18.0 when the underlying setting was removed, and has since always returned false.

To control CI/CD job token access, use the CI/CD job token project settings.

Deprecation notice

8. Unauthenticated Projects API pagination limit enforced on GitLab.com

GitLab.com

To maintain platform stability and ensure consistent performance, a maximum offset limit of 50,000 will be enforced for all unauthenticated requests to the Projects List REST API on GitLab.com. For example, the page parameter will be limited to 2,500 pages when retrieving 20 results per page.

Workflows requiring access to more data must use keyset-based pagination parameters. This limit applies only to GitLab.com. On GitLab Self-Managed and GitLab Dedicated, the offset limit will be disabled by default behind a feature flag.

Deprecation notice

Resources to manage your impact

We've developed specific tooling to help customers understand how these planned changes impact their GitLab instance(s). Once you've assessed your impact, we recommend reviewing the mitigation steps provided in the documentation relevant to each change to ensure a smooth transition to GitLab 19.0.

GitLab Detective (Self-Managed only): This experimental tool automatically checks a GitLab installation for known issues by looking at config files and database values. Note: it must run directly on your GitLab nodes.

If you have a paid plan and have questions or require assistance with these changes, please open a support ticket on the GitLab Support Portal.

If you are a free GitLab.com user, you can access additional support through community sources such as GitLab Documentation, the GitLab Community Forum, and Stack Overflow.

Read the whole story
alvinashcraft
5 minutes ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories