When teams use GitLab Duo, Claude, Cursor, and other AI assistants, more of the development workflow runs through an AI agent acting on your behalf — reading issues, reviewing merge requests, running pipelines, and helping you ship faster. Most developers are already using glab from the terminal to interact with GitLab. Combining the two is a natural next step.
The problem is that without the right tools, AI agents are essentially guessing when it comes to your GitLab projects. They might hallucinate the details of an issue they've never seen, summarize a merge request based on stale training data rather than its actual state, or require you to manually copy context from a browser tab and paste it into a chat window just to get started. Every one of those workarounds is friction: it slows you down, introduces the possibility of error, and puts a hard ceiling on what your agent can actually do on your behalf. The GitLab CLI (glab) changes that by giving agents a direct, reliable interface to your projects.
With glab, your agent fetches what it needs directly from GitLab, acts on it, and reports back — so you spend less time relaying information and more time on the work that matters.
In this tutorial, you'll learn how to use glab to give AI agents structured, reliable access to your GitLab projects. You'll also discover how that unlocks a faster, more capable development workflow.
How to connect your AI agent to GitLab through MCP
The most direct way to supercharge your AI workflow is to give your AI agent native access to glab through Model Context Protocol (MCP).
MCP is an open standard that lets AI tools discover and use external capabilities at runtime. Once connected, your AI assistant can read issues, comment on merge requests, check pipeline status, and write back to GitLab, all without copying anything from the UI or writing a single API call yourself.
To get started, run:
# Start the glab MCP server
glab mcp serve
Once your MCP client is configured, your AI can answer questions like "What's the status of my open MRs?" or "Are there any failing pipelines on main?" by querying GitLab directly, not scraping the web UI, not relying on stale training data. See the full setup docs for configuration steps for Claude Code, Cursor, and other editors.
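As a concrete sketch, many MCP clients (Cursor and Claude Desktop among them) register a stdio server with a small JSON entry along these lines. The exact file name and location vary by tool, and the "gitlab" key is just a label you choose, so treat this as an illustration and follow the setup docs for your editor:

```
{
  "mcpServers": {
    "gitlab": {
      "command": "glab",
      "args": ["mcp", "serve"]
    }
  }
}
```

With an entry like this, the client launches glab mcp serve itself and speaks MCP to it over stdio.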
One detail worth knowing: glab automatically adds --output json when invoked through MCP, for any command that supports it. Your agent gets clean, structured data without you needing to think about output formats. And because glab uses the official MCP SDK, it stays compatible as the protocol evolves.
We've also been deliberate about which commands are exposed through MCP. Commands that require interactive terminal input are intentionally excluded, so your agent never gets stuck waiting for input that will never come. What's exposed is what actually works reliably in an agent context.
Let your AI participate in code review
Most developers have a backlog of MRs waiting for review. It's one of the most time-consuming parts of the job, and one of the best places to put AI to work. With glab, your agent doesn't just observe your review queue; it can work through it with you.
See exactly what still needs addressing
Start with this:
glab mr view 2677 --comments --unresolved --output json
This command returns the full MR (metadata, description, and every unresolved discussion) as a single structured JSON payload. Hand that to your AI and it has everything it needs: which threads are open, what each reviewer asked for, and in what context. No tab-switching, no copy-pasting individual comments.
{
"id": 2677,
"title": "feat: add OAuth2 support",
"state": "opened",
"author": { "username": "jdwick" },
"labels": ["backend", "needs-review"],
"blocking_discussions_resolved": false,
"discussions": [
{
"id": "3107030349",
"resolved": false,
"notes": [
{
"author": { "username": "dmurphy" },
"body": "This error handling will swallow panics — consider wrapping with recover()",
"created_at": "2026-03-14T09:23:11.000Z"
}
]
},
{
"id": "3107030412",
"resolved": false,
"notes": [
{
"author": { "username": "sreeves" },
"body": "Token refresh logic needs a test for the expired token case",
"created_at": "2026-03-14T10:05:44.000Z"
}
]
}
]
}
Instead of reading through every thread yourself, you ask your agent "what do I still need to fix in MR 2677?" and get back a prioritized summary with suggested changes. This all happens from a single command.
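To make the selection step concrete, here is a small sketch of the kind of filter an agent might apply to that payload. In practice you would pipe straight from glab mr view 2677 --comments --unresolved --output json; a trimmed literal stands in here so the jq filter itself is easy to follow:

```shell
# Sketch: pull just the unresolved feedback out of the MR payload.
# The literal mirrors the shape of the glab output shown above.
mr_json='{"discussions":[
  {"resolved":false,"notes":[{"author":{"username":"dmurphy"},"body":"wrap with recover()"}]},
  {"resolved":true,"notes":[{"author":{"username":"sreeves"},"body":"done"}]}
]}'

# Keep only unresolved threads, and print who asked for what.
echo "$mr_json" \
  | jq -r '.discussions[] | select(.resolved | not) | .notes[0] | "\(.author.username): \(.body)"'
```

The same filter works unchanged on the live command output, since --output json gives the agent exactly this structure.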
Close the loop programmatically
Once your AI has helped you address the feedback, it can resolve discussions:
# List all discussions — structured, ready for the agent to process
glab mr note list 456 --output json
# Resolve a discussion once the feedback is addressed
glab mr note resolve 456 3107030349
# Reopen if something needs another look
glab mr note reopen 456 3107030349
[
{
"id": 3107030349,
"body": "This error handling will swallow panics — consider wrapping with recover()",
"author": { "username": "dmurphy" },
"resolved": false,
"resolvable": true
},
{
"id": 3107030412,
"body": "Token refresh logic needs a test for the expired token case",
"author": { "username": "sreeves" },
"resolved": false,
"resolvable": true
}
]
Note IDs are visible directly in the GitLab UI and API, so no extra lookup is needed. Your agent can work through the full list, verify each fix, and resolve discussions as it goes.
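That loop can be sketched in a few lines of shell. The literal below mirrors the glab mr note list 456 --output json output above; in practice you would pipe from the live command, and only run the (commented-out) resolve step after verifying each fix:

```shell
# Sketch: select the IDs of unresolved, resolvable discussions,
# the way an agent would before calling `glab mr note resolve`.
notes='[
  {"id":3107030349,"resolved":false,"resolvable":true},
  {"id":3107030412,"resolved":true,"resolvable":true}
]'

to_resolve=$(echo "$notes" | jq -r '.[] | select(.resolvable and (.resolved | not)) | .id')
echo "$to_resolve"

# Then, once each fix is verified:
#   for id in $to_resolve; do glab mr note resolve 456 "$id"; done
```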
Talk to your AI about your code more effectively
Even if you're not running an MCP server, there's a simpler shift that makes a huge difference: using glab to feed your AI better information.
Think about the last time you asked an AI assistant to help triage issues or debug a failing pipeline. You probably copied some text from the GitLab UI and pasted it into the chat. Here's what your agent is actually working with when you do that:
open issues: 12 • milestone: 17.10 • label: bug, needs-triage ...
Compare that to what it gets with glab:
[
{
"iid": 902,
"title": "Pipeline fails on merge to main",
"labels": ["bug", "needs-triage"],
"milestone": { "title": "17.10" },
"assignees": []
},
...
]
Structured, typed, complete; no ambiguity, no parsing guesswork. That's the difference between an agent that can act and one that has to ask follow-up questions.
If you're using the MCP server, you get this automatically: glab adds --output json for any command that supports it. If you're working directly from the terminal, just add the flag yourself:
# Pull open issues for triage
glab issue list --label "needs-triage" --output json
# Check pipeline status
glab ci status --output json
# Get full MR details
glab mr view 456 --output json
We've significantly expanded JSON output support in recent releases. It now covers CI status, milestones, labels, releases, schedules, cluster agents, work items, MR approvers, repo contributors, and more. If glab can retrieve it, your AI can consume it cleanly.
A real workflow
$ glab issue list --label "needs-triage" --milestone "17.10" --output json
Agent: I found 2 unassigned bugs in the 17.10 milestone that need triage:
1. #902 — Pipeline fails on merge to main (opened 5 days ago)
2. #903 — Auth token not refreshing on expiry (opened 4 days ago)
Both are unassigned. Want me to draft triage notes and suggest assignees based on recent commit history?
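The selection step behind an answer like that is simple to sketch. The literal below mirrors the shape glab issue list --output json returns; an agent would run the same filter over the live output:

```shell
# Sketch: find unassigned issues, as an agent might after
# `glab issue list --label "needs-triage" --milestone "17.10" --output json`.
issues='[
  {"iid":902,"title":"Pipeline fails on merge to main","assignees":[]},
  {"iid":903,"title":"Auth token not refreshing on expiry","assignees":[]}
]'

# Empty assignees array means the issue still needs an owner.
echo "$issues" | jq -r '.[] | select(.assignees == []) | "#\(.iid) \(.title)"'
```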
Your agent is never limited to built-in commands
glab's first-class commands cover the most common workflows, but your agent is never limited to them. Through glab api, it has authenticated access to the full GitLab REST and GraphQL API surface, using the same session, with no extra credentials or configuration required.
This is a meaningful differentiator. Most CLI tools stop at what their commands expose. With glab, if GitLab's API supports it, your agent can do it. It's always working from a trusted, authenticated context.
A practical example: fetching just the list of changed files in an MR before deciding which diffs to pull in full:
# Get changed file paths — lightweight, no diff content yet
glab api "/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100" \
| jq '.[].new_path'
# Then fetch only the specific file your agent needs
glab api "/projects/$CI_PROJECT_ID/merge_requests/$CI_MERGE_REQUEST_IID/diffs?per_page=100" \
| jq '.[] | select(.new_path == "path/to/file.go")'
"internal/auth/token.go"
"internal/auth/token_test.go"
"internal/oauth/refresh.go"
For anything the REST API doesn't cover (epics, certain work item queries, complex cross-project data), glab api graphql gives you the full GraphQL interface:
glab api graphql -f query='
{
project(fullPath: "gitlab-org/gitlab") {
mergeRequest(iid: "12345") {
title
reviewers { nodes { username } }
}
}
}'
{
"data": {
"project": {
"mergeRequest": {
"title": "feat: add OAuth2 support",
"reviewers": {
"nodes": [
{ "username": "dmurphy" },
{ "username": "sreeves" }
]
}
}
}
}
}
Your agent has a single, authenticated entry point to everything GitLab exposes, without token juggling, separate API clients, or configuration overhead.
What's coming and your feedback
Two improvements we're actively working on will make glab even more useful for agent workflows:
Agent-aware help text. Today, --help output is written for humans at a terminal. We're updating it to surface the non-interactive alternative for every interactive command, flag which commands support --output json, and generally make help a useful resource for agents discovering capabilities at runtime, not just humans.
Better machine-readable errors. When something goes wrong today, agents get the same human-readable error messages as terminal users. We're changing that so errors in JSON mode return structured output, giving your agent the information it needs to handle failures gracefully, retry intelligently, or surface the right context back to you.
Both of these are in active development. If you're already using glab with an AI tool, you're exactly the audience we want feedback from.
- What friction are you hitting? Commands that don't behave well in agent contexts, error messages that aren't actionable, gaps in JSON output coverage. We want to know.
- What workflows have you unlocked? Real usage patterns help us prioritize what to build next.
Join the discussion in our feedback issue — that's where we're shaping the roadmap for agent-friendliness, and where your input will have the most direct impact. If you've found a specific gap, open an issue. If you've got a fix in mind, contributions are welcome. Visit CONTRIBUTING.md to get started.
The GitLab CLI has always been about giving developers more control over their workflow. As AI becomes a bigger part of how we all work, that means making glab the best possible interface between your AI tools and your GitLab projects. We're just getting started and we'd love to build the next part with you.