Give your AI agent direct, structured GitLab access with glab CLI


When teams use GitLab Duo, Claude, Cursor, and other AI assistants, more of the development workflow runs through an AI agent acting on your behalf — reading issues, reviewing merge requests, running pipelines, and helping you ship faster. Many developers already use glab from the terminal to interact with GitLab. Combining the two is a natural next step.

The problem is that without the right tools, AI agents are essentially guessing when it comes to your GitLab projects. They might hallucinate the details of an issue they've never seen, summarize a merge request based on stale training data rather than its actual state, or require you to manually copy context from a browser tab and paste it into a chat window just to get started. Every one of those workarounds is friction: it slows you down, introduces the possibility of error, and puts a hard ceiling on what your agent can actually do on your behalf. The GitLab CLI (glab) changes that by giving agents a direct, reliable interface to your projects.

With glab, your agent fetches what it needs directly from GitLab, acts on it, and reports back — so you spend less time relaying information and more time on the work that matters.

In this tutorial, you'll learn how to use glab to give AI agents structured, reliable access to your GitLab projects. You'll also discover how that unlocks a faster, more capable development workflow.

How to connect your AI agent to GitLab through MCP

The most direct way to supercharge your AI workflow is to give your AI agent native access to glab through Model Context Protocol (MCP).

MCP is an open standard that lets AI tools discover and use external capabilities at runtime. Once connected, your AI assistant can read issues, comment on merge requests, check pipeline status, and write back to GitLab, all without copying anything from the UI or writing a single API call yourself.

To get started, run:

# Start the glab MCP server
glab mcp serve

Once your MCP client is configured, your AI can answer questions like "What's the status of my open MRs?" or "Are there any failing pipelines on main?" by querying GitLab directly, not scraping the web UI, not relying on stale training data. See the full setup docs for configuration steps for Claude Code, Cursor, and other editors.
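For orientation, client registration usually amounts to telling your editor how to launch the server. The snippet below is a sketch that follows the common "mcpServers" JSON convention used by several MCP clients; the exact file name and schema vary by tool, so defer to the setup docs:

{
  "mcpServers": {
    "gitlab": {
      "command": "glab",
      "args": ["mcp", "serve"]
    }
  }
}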

One detail worth knowing: glab automatically adds --output json when invoked through MCP, for any command that supports it. Your agent gets clean, structured data without you needing to think about output formats. And because glab uses the official MCP SDK, it stays compatible as the protocol evolves.

We've also been deliberate about which commands are exposed through MCP. Commands that require interactive terminal input are intentionally excluded, so your agent never gets stuck waiting for input that will never come. What's exposed is what actually works reliably in an agent context.

Let your AI participate in code review

Most developers have a backlog of MRs waiting for review. It's one of the most time-consuming parts of the job and one of the best places to put AI to work. With glab, your agent doesn't just observe your review queue, it can work through it with you.

See exactly what still needs addressing

Start with this:

glab mr view 2677 --comments --unresolved --output json

This command returns the full MR: metadata, description, and every unresolved discussion, as a single structured JSON payload. Hand that to your AI and it has everything it needs: which threads are open, what the reviewer asked for, and in what context. No tab-switching, no copy-pasting individual comments.

{
  "id": 2677,
  "title": "feat: add OAuth2 support",
  "state": "opened",
  "author": { "username": "jdwick" },
  "labels": ["backend", "needs-review"],
  "blocking_discussions_resolved": false,
  "discussions": [
    {
      "id": "3107030349",
      "resolved": false,
      "notes": [
        {
          "author": { "username": "dmurphy" },
          "body": "This error handling will swallow panics — consider wrapping with recover()",
          "created_at": "2026-03-14T09:23:11.000Z"
        }
      ]
    },
    {
      "id": "3107030412",
      "resolved": false,
      "notes": [
        {
          "author": { "username": "sreeves" },
          "body": "Token refresh logic needs a test for the expired token case",
          "created_at": "2026-03-14T10:05:44.000Z"
        }
      ]
    }
  ]
}

Instead of reading through every thread yourself, you ask your agent "what do I still need to fix in MR 2677?" and get back a prioritized summary with suggested changes. This all happens from a single command.
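If you want to see exactly what the agent sees, you can slice the same payload yourself from the terminal. A minimal sketch, assuming jq is installed and using the field names from the payload above:

# Pull just the unresolved feedback, one line per open thread
glab mr view 2677 --comments --unresolved --output json \
| jq -r '.discussions[] | select(.resolved == false) | .notes[0].body'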

Close the loop programmatically

Once your AI has helped you address the feedback, it can resolve discussions:

# List all discussions — structured, ready for the agent to process
glab mr note list 456 --output json

# Resolve a discussion once the feedback is addressed
glab mr note resolve 456 3107030349

# Reopen if something needs another look
glab mr note reopen 456 3107030349

[
  {
    "id": 3107030349,
    "body": "This error handling will swallow panics — consider wrapping with recover()",
    "author": { "username": "dmurphy" },
    "resolved": false,
    "resolvable": true
  },
  {
    "id": 3107030412,
    "body": "Token refresh logic needs a test for the expired token case",
    "author": { "username": "sreeves" },
    "resolved": false,
    "resolvable": true
  }
]

Note IDs are visible directly in the GitLab UI and API, no extra lookup needed. Your agent can work through the full list, verify each fix, and resolve as it goes.
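That loop is straightforward to script, too. A sketch, assuming jq and the fields shown in the listing above; in practice your agent would verify each fix before resolving:

# Resolve every resolvable, still-open discussion on MR 456
for id in $(glab mr note list 456 --output json \
  | jq -r '.[] | select(.resolvable and (.resolved | not)) | .id'); do
  glab mr note resolve 456 "$id"
done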

Talk to your AI about your code more effectively

Even if you're not running an MCP server, there's a simpler shift that makes a huge difference: using glab to feed your AI better information.

Think about the last time you asked an AI assistant to help triage issues or debug a failing pipeline. You probably copied some text from the GitLab UI and pasted it into the chat. Here's what your agent is actually working with when you do that:

open issues: 12 • milestone: 17.10 • label: bug, needs-triage ...

Compare that to what it gets with glab:

[
  {
    "iid": 902,
    "title": "Pipeline fails on merge to main",
    "labels": ["bug", "needs-triage"],
    "milestone": { "title": "17.10" },
    "assignees": []
  },
  ...
]

Structured, typed, complete; no ambiguity, no parsing guesswork. That's the difference between an agent that can act and one that has to ask follow-up questions.

If you're using the MCP server, you get this automatically: glab adds --output json for any command that supports it. If you're working directly from the terminal, just add the flag yourself:

# Pull open issues for triage
glab issue list --label "needs-triage" --output json

# Check pipeline status
glab ci status --output json

# Get full MR details
glab mr view 456 --output json

We've significantly expanded JSON output support in recent releases. It now covers CI status, milestones, labels, releases, schedules, cluster agents, work items, MR approvers, repo contributors, and more. If glab can retrieve it, your AI can consume it cleanly.
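Structured output also composes with standard tooling. For instance, a quick filter for unassigned triage candidates, sketched with jq and the fields from the issue example above:

# Surface needs-triage issues that nobody has picked up yet
glab issue list --label "needs-triage" --output json \
| jq '.[] | select(.assignees | length == 0) | {iid, title}'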

A real workflow

$ glab issue list --label "needs-triage" --milestone "17.10" --output json
Agent: I found 2 unassigned bugs in the 17.10 milestone that need triage:
1. #902 — Pipeline fails on merge to main (opened 5 days ago)
2. #903 — Auth token not refreshing on expiry (opened 4 days ago)
Both are unassigned. Want me to draft triage notes and suggest assignees based on recent commit history?

Your agent is never limited to built-in commands

glab's first-class commands cover the most common workflows, but your agent is never limited to them. Through glab api, it has authenticated access to the full GitLab REST and GraphQL API surface, using the same session, with no extra credentials or configuration required.

This is a meaningful differentiator. Most CLI tools stop at what their commands expose. With glab, if GitLab's API supports it, your agent can do it. It's always working from a trusted, authenticated context.

A practical example: fetching just the list of changed files in an MR before deciding which diffs to pull in full:

# Get changed file paths — lightweight, no diff content yet
glab api "/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100" \
| jq '.[].new_path'
"internal/auth/token.go"
"internal/auth/token_test.go"
"internal/oauth/refresh.go"

# Then fetch only the specific file your agent needs
glab api "/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100" \
| jq '.[] | select(.new_path == "path/to/file.go")'

For anything the REST API doesn't cover (epics, certain work item queries, complex cross-project data), glab api graphql gives you the full GraphQL interface:

glab api graphql -f query='
{
  project(fullPath: "gitlab-org/gitlab") {
    mergeRequest(iid: "12345") {
      title
      reviewers { nodes { username } }
    }
  }
}'
{
  "data": {
    "project": {
      "mergeRequest": {
        "title": "feat: add OAuth2 support",
        "reviewers": {
          "nodes": [
            { "username": "dmurphy" },
            { "username": "sreeves" }
          ]
        }
      }
    }
  }
}

Your agent has a single, authenticated entry point to everything GitLab exposes without the token juggling, separate API clients, or configuration overhead.

What's coming and your feedback

Two improvements we're actively working on will make glab even more useful for agent workflows:

Agent-aware help text. Today, --help output is written for humans at a terminal. We're updating it to surface the non-interactive alternative for every interactive command, flag which commands support --output json, and generally make help a useful resource for agents discovering capabilities at runtime — not just humans.

Better machine-readable errors. When something goes wrong today, agents get the same human-readable error messages as terminal users. We're changing that so errors in JSON mode return structured output, giving your agent the information it needs to handle failures gracefully, retry intelligently, or surface the right context back to you.

Both of these are in active development. If you're already using glab with an AI tool, you're exactly the audience we want feedback from.

  • What friction are you hitting? Commands that don't behave well in agent contexts, error messages that aren't actionable, gaps in JSON output coverage. We want to know.
  • What workflows have you unlocked? Real usage patterns help us prioritize what to build next.

Join the discussion in our feedback issue — that's where we're shaping the roadmap for agent-friendliness, and where your input will have the most direct impact. If you've found a specific gap, open an issue. If you've got a fix in mind, contributions are welcome. Visit CONTRIBUTING.md to get started.

The GitLab CLI has always been about giving developers more control over their workflow. As AI becomes a bigger part of how we all work, that means making glab the best possible interface between your AI tools and your GitLab projects. We're just getting started and we'd love to build the next part with you.


Do Component Libraries Still Matter in the Age of AI?


You can have both AI code generation and a solid component library at the foundation. Here are the top six reasons to use both in an integrated approach to code.

One of the hottest hot takes that I see floating around the developer space these days is “Why would I ever use a component library anymore when I can just generate all the components myself with AI?” On the surface, it seems like a fairly reasonable question to ask.

After all, one of the main reasons why developers reached for a component library in the past was to avoid the work of building complex components. And that’s fair! As someone who once had to build a color picker from scratch, I get it—I’m not particularly keen on ever repeating that experience, either.

But now, that’s not really a pain point anymore. If I don’t want to build that color picker myself, I can just ask my new best friend Claude (or Copilot or Cursor or whatever) to do it for me. In a matter of minutes, I can have my shiny new component with no need to wrangle the code myself. That’s the obvious answer … right?

Well, maybe not. While it’s certainly possible to have every component in your application generated by your AI tool of choice, I’d argue that it’s not the best solution. Instead, I’d say that your code generation will be better if you use AI with component libraries. This doesn’t have to be an either/or situation. Why choose when you can have both? Here are my top six reasons why component libraries should remain a core part of your frontend infrastructure, even if you’re a full-on vibe coder:

1. Abundant Reference Material

AI cannot generate from nothing: the code it creates for you is based on the code samples that it has access to. And you know what creates thousands of standardized examples of the exact same components used in all kinds of different situations? Yep: component libraries.

When tons of developers are using the same set of components, there’s more sample data for the AI to learn from. That means it will be able to reproduce better patterns, make more effective choices and generate fewer mistakes. And this goes beyond just the stuff other developers have built using these components. Any worthwhile component library will also come with pages and pages of documentation. Feature overviews, configuration details, API guides, troubleshooting, recommendations, official demos and sample code—what more could your coding agent ask for?

Well, now that you’ve asked … what about third-party reference material? Larger and more popular component libraries will also have unofficial “documentation” in the form of technical blogs, walkthroughs, videos and sample repos that were created by the community. The longer the component library has been around, the more content will have been created—and the more content the AI has to reference, the better the output.

2. Better Accessibility

Let’s be honest for a moment: AI code generating tools do not excel at accessibility. They’re getting better, and I think it’s even fair to say that there will be a time (hopefully in the not-too-distant future) where they might even be good at it. But that day is not today. Until that gap closes, we (the human developers) are still responsible for making the applications and software we build as accessible as possible.

One of the best ways that we can do that—and one that the article linked above calls out specifically—is to leverage libraries that have accessibility built-in on the component level. While it’s possible to attempt to include accessibility constraints and instructions in your prompts, giving an AI code generator a set of accessible primitives to work with will get you a lot further.

The deepest solution is architectural. Instead of relying on every prompt to produce correct primitives, use libraries that encode accessibility into their API contracts.

3. Cleaner (and Fewer) Lines of Code

One of the best parts about using prebuilt components has always been the functionality that comes already baked in. It’s not hard to build a basic data grid that displays the content—the difficulty comes with the virtualization, sorting / filtering / grouping, exporting and more. Each additional feature that you have to build (or generate) is extra code in your codebase that has to be maintained and managed indefinitely.

Using a component library means you can take advantage of code that someone else is maintaining—and isn’t that the dream, ultimately? Your lines of code go down because it takes fewer lines of code to pass true into the predefined sort property than it does to build a sorting function. “But wait!” you cry. “Why do I care how many lines of code something is if I can just make the AI write those lines of code for me?” Well, unfortunately, you do still have to understand, review and maintain those AI-generated lines of code. Nobody is really reading that 2,000+ line PR, but they might read a 200 line one—isn’t that what you’d rather deal with? Less stuff generated from scratch means less to review—and fewer chances for errors to slip in, in the first place.

4. Cost Effectiveness

You know what comes with fewer things to generate and fewer revision cycles to correct errors? Fewer tokens. Most AI coding tools charge by token consumption, which means every line of generated code, new prompt and iteration has real cost attached. Complex components with lots of edge cases can take several revision cycles before they’re production-ready, and those cycles add up.

Using a component library helps cut that cost at multiple points. To begin with, you’ll write shorter prompts and (as mentioned earlier) generate less code because the AI is handling configuration and composition code rather than implementation code. Telling a well-documented component what to do is a much shorter prompt than asking an AI to build that functionality from scratch. Because the output is more predictable, you’ll also spend fewer tokens on corrections. That may not make much of a difference for a single component, but it absolutely scales when you’re doing it across an entire application.

5. Easier Human Revision

Back to human review—AI won’t get it right every time (at least, not yet), which means there will always be places a human needs to step in and make corrections. We already know that it’s harder to open up a document you’re not familiar with and orient yourself in someone else’s code, but that mental lift gets a little easier if you are at least familiar with the tools being used.

Component libraries offer the benefit of consistent patterns: the same properties, the same naming structures, the same mental model. AI-generated components won’t have this kind of standardization, especially if they were generated over time by different developers using different tools. If the AI is generating code using the same set of components every time—and you’re already familiar with those components—you’re going to be able to get up to speed quicker and spend less time figuring out what the AI generated.

6. Alignment with Design

There’s a solid chance that your development team and your design team are both using generative AI tools in their work—which can lead to even more painful gaps during the handoff. However, if you’re both telling the AI to work with a specific component library, you can start to bring those disparate experiences closer together. Designers might even be able to start vibe coding prototypes they could hand off to a developer!

Alternately, you could also feed your design system tokens (different kind of tokens) into the AI tool to help it create work in alignment with what already exists. If you’re using a tool like Progress ThemeBuilder, designers can go in and customize exactly how each component should look and behave, and then developers can apply that exported CSS to their AI generated layouts—assuming they’re built with the same components.

Build on Top of a Strong Foundation

You can always prompt your AI generator of choice and hope for the best—but from our experience building UI controls, we’ve found that the results are better when you can provide your model with the right building blocks. That’s why we’ve created a suite of AI code generation tools that leverage the Progress Kendo UI and Telerik component libraries, allowing you to generate new layouts, pages and features built with components you know you can trust.

We believe that AI code generation isn’t a replacement for UI controls that are secure, accessible and human-built. Rather than taking the approach that AI should be used instead, our approach is to see where it can be integrated—to expand the ease of use and capabilities of what we’ve already built.

After all, why throw away a decade of knowledge about building UI controls? For us, it’s the ideal foundation to build on top of.

Try the UI Generator


Trump demands ABC fire Jimmy Kimmel

[Image: Jimmy Kimmel against a background of red speech bubbles]

President Donald Trump is calling for Disney to fire Jimmy Kimmel. On Thursday, Kimmel joked that Melania Trump had looked like an "expectant widow" in a skit about the upcoming White House Correspondents' Dinner. The skit aired days before an armed gunman attempted an assassination at the event on Saturday, which President Trump and Melania Trump attended. They and other administration officials were evacuated from the ballroom.

Trump - who has faced significant speculation over his health in recent months - interpreted the comment as an incitement to attack him. "I appreciate that so many people are incensed by Kimmel's despicable cal …

Read the full story at The Verge.


Consumers lost $2.1 billion to social media scams in 2025, FTC reports

The agency reports that losses from social media scams have increased eightfold, and that social media scams resulted in higher losses than any other method scammers used to contact consumers.

The Prompt Engineering Cheat Sheet: How to Write Better AI Prompts


Learn prompt engineering with this practical cheat sheet that covers frameworks, techniques, and tips for producing more accurate and useful AI outputs.



Microsoft Sovereign Private Cloud scales to thousands of nodes with Azure Local


Today, I am pleased to announce that Azure Local now scales to support deployments of up to thousands of servers within a single sovereign environment, allowing organizations to run much larger workloads locally across large-footprint datacenters, industrial environments and edge locations while maintaining control within their sovereign boundary.

Organizations operating national infrastructure, regulated workloads or mission-critical services are navigating a fundamental shift in how cloud infrastructure must be deployed and managed. As digital sovereignty postures evolve and regulatory requirements tighten across regions, infrastructure strategies are increasingly shaped by the need to maintain jurisdictional control over data, operations and dependencies. At the same time, AI and data-intensive applications are moving closer to where data is generated, requiring infrastructure that can scale to support larger deployment footprints while maintaining operational control, compliance and data residency requirements within sovereign environments.

Azure Local is the foundation for Microsoft’s Sovereign Private Cloud, allowing organizations to run cloud-consistent infrastructure on hardware they own and operate within their sovereign boundary. It supports deployments across connected, intermittently connected or fully disconnected environments. With Azure Local disconnected operations, customers retain the ability to apply policy enforcement, role-based access control, auditing and compliance configuration locally, allowing them control over how infrastructure is configured, secured and updated regardless of public cloud connectivity.

Scaling Sovereign Private Cloud

Sovereign Private Cloud deployments must scale to support not only larger workloads, but also the operational requirements of national infrastructure and regulated industries. Azure Local allows organizations to grow deployments from hundreds up to thousands of servers within a single sovereign boundary, allowing infrastructure to expand alongside demand without requiring architectural redesign.

As deployment footprints grow, resiliency becomes essential to maintaining continuous operations for mission-critical services. Expanded fault domains and infrastructure pools help prevent hardware failures from resulting in service outages, ensuring critical workloads remain operational across environments with varying levels of cloud connectivity.

At these larger scale points, organizations can run data-intensive AI inference and analytics workloads entirely within their own environment. With support for high-performance graphics processing unit (GPU) infrastructure, sensitive models and operational data remain within customer-controlled infrastructure, while access management, auditing and compliance controls are maintained within the sovereign deployment.

Built for challenging workloads 

Increased deployment scale unlocks new workload placement opportunities, from large sovereign private cloud deployments to distributed AI workloads, allowing organizations to run more data-intensive and latency-sensitive applications entirely within their sovereign boundary.

AT&T, one of the world’s largest telecommunications operators, is deploying Azure Local to run mission-critical infrastructure on hardware they own in their environment. The goal: full operational control while running at the scale the business demands.

“Azure Local provides the infrastructure foundation we need to run critical operations at scale, while ensuring control and governance across our environment. The consistency of the Azure operating model, delivered on our own infrastructure, is key as we continue to modernize while delivering reliable services to our customers.”

— Sherry McCaughan, Vice President – Mobility Core Services, AT&T

Kadaster, the Netherlands’ official land registry and mapping agency, is running Azure Local to keep sovereign control over some of the country’s most sensitive public data.

“As a government agency responsible for some of the Netherlands’ most sensitive data, we need infrastructure that gives us full control over where our data lives and how it’s governed. Azure Local has been a consistent foundation for that — and as our workloads grow in scale and complexity, the platform has grown with us.”

— Maarten van der Tol, General Manager, Kadaster

FiberCop, Italy’s most advanced and extensive digital network operator, is deploying Azure Local across its edge locations to bring sovereign cloud and AI services to organizations throughout the country. Fabio Veronese, Chief Information & Technology Officer, commented:

“FiberCop is better positioned than any other player on the Italian market to drive innovation and deliver cloud as well as AI services at national scale. Azure Local supports our mission to drive Italy’s digital future and brings Microsoft’s cloud capabilities to edge workloads across the country while keeping data sovereignty and compliance where they matter most.”

The infrastructure behind Sovereign Private Cloud

Azure Local is available today with validated compute and enterprise storage platforms from partners including DataON, Dell Technologies, Everpure, Hitachi Vantara, HPE, Lenovo and NetApp, allowing organizations to integrate existing Storage Area Networks (SAN) and preserve prior investments while allowing compute and storage resources to scale independently within their sovereign environment.

At the silicon level, Intel® Xeon® 6 processors provide the compute foundation for the platform. Built for the density and performance demands of modern enterprise workloads, Xeon 6 also brings built-in AI acceleration with Intel® AMX, meaning organizations running inference or generative AI workloads within their sovereign environment do not need to introduce separate, specialized infrastructure to do so.

Together, Azure Local, validated compute and enterprise storage platforms, accelerated computing platforms and underlying silicon can provide a datacenter-scale stack that supports sovereign infrastructure deployments while helping ensure data, models and execution remain within customer-controlled environments.

Sovereign infrastructure built for your requirements

Azure Local was built to meet customers where their requirements are, whether that means strict data residency, disconnected operations, regulated workloads or AI running close to where data is generated. As these requirements evolve across regulated industries and governments worldwide, Sovereign Private Cloud deployments can expand from a single node at the edge to large enterprise-scale datacenter environments, running on hardware organizations own and operate, with consistent lifecycle management through Azure.

Douglas Phillips leads global engineering efforts for Microsoft’s specialized, sovereign and private clouds. He is responsible for Microsoft’s global strategy, products and operations that bring Microsoft’s industry-leading solutions, including Azure, our adaptive cloud portfolio and Microsoft 365 collaboration suite, to customers with additional sovereignty, security, edge and compliance requirements.

