Your developers’ AI tools are only as good as what they know going in. When those tools run through the right IDE, they get a head start – a picture of the codebase they would otherwise need to piece together themselves.
That means your team’s IDE choices belong on your AI agenda, alongside the policies you set for gateway data and LLM decisions.
The AI gateway ceiling
AI gateways are now serious management infrastructure components. Gartner projected that 70% of software engineering teams building multimodal applications will have them in place by 2028.
Gateways give you two types of AI management levers:
- In-pipeline controls. Think model routing, rate limiting, and cost allocation (sketched below). These controls give you solid visibility and guardrails over AI spend, but they apply to requests that are already formed.
- Pre-pipeline policies. Think approved model lists, prompting guidelines, and training programs. In theory, such policies shape developer behavior. In practice, a 2024 Stack Overflow survey found that 73% of developers weren’t sure whether their companies even had an AI policy.
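To make that distinction concrete, here is a minimal sketch of in-pipeline controls as gateway middleware. Every name in it is illustrative – it mirrors the pattern, not any particular product’s API:

```python
# Illustrative sketch of in-pipeline gateway controls: model routing,
# rate limiting, and cost allocation. All names are hypothetical.
import time
from collections import defaultdict

APPROVED_MODELS = {"gpt-4.1": "openai", "claude-sonnet-4": "anthropic"}
RATE_LIMIT_PER_MINUTE = 60

_request_log: dict[str, list[float]] = defaultdict(list)

def forward_to_provider(model: str, payload: dict) -> dict:
    # Stub standing in for the actual provider call.
    return {"model": model, "status": "forwarded", "metadata": payload["metadata"]}

def handle_request(team: str, model: str, payload: dict) -> dict:
    # Model routing: only approved models pass through.
    if model not in APPROVED_MODELS:
        raise ValueError(f"Model {model!r} is not on the approved list")

    # Rate limiting: cap each team's requests per minute.
    now = time.time()
    recent = [t for t in _request_log[team] if now - t < 60]
    if len(recent) >= RATE_LIMIT_PER_MINUTE:
        raise RuntimeError(f"Rate limit exceeded for team {team!r}")
    _request_log[team] = recent + [now]

    # Cost allocation: tag the request so spend can be attributed later.
    payload["metadata"] = {"team": team, "provider": APPROVED_MODELS[model]}

    # By this point the request is already fully formed: the gateway
    # never sees how the context inside it was assembled upstream.
    return forward_to_provider(model, payload)
```

Note the final comment: by the time any in-pipeline control runs, the request’s context has already been assembled somewhere upstream.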
And yet the question of how to link AI usage to engineering outcomes remains open. “We’re building toward that answer,” said GitHub when launching their organization-level Copilot dashboard in February 2026.
Gateways are a necessary part of the answer. But they offer no architectural lever over what AI tools have to work with before a request is even formed. The information those tools can access makes a difference – regardless of how well your people follow prompting guidelines or how closely you monitor gateway statistics.
Familiar tool, overlooked AI lever
One of the best-evidenced frameworks for closing the measurement loop between AI usage and AI outcomes is in the DORA 2025 State of AI-Assisted Software Development report. It identified seven capabilities for leaders to prioritize:
- Two are organizational: a focus on AI’s end users and a clear, communicated AI policy. That’s where your AI gateway fits in.
- Two are procedural: strong version control practices and working in small batches.
- Three are technical: a healthy data ecosystem, AI-accessible internal data, and a high-quality internal platform.
Within the area of data capabilities, DORA is specific about what drives performance: context, or what a model receives before generating output. Better context means greater benefits. What DORA doesn’t drill into is what determines context quality at the point of creation. That depends on who or what creates it and how.
To AI, Re: Context
Gateways may not yet show who or what is creating context, but there are three basic cases:
- Developer-direct. A developer interacts with AI directly through a browser or chat interface. The context is whatever gets pasted.
- Agent-direct. An autonomous agent operates directly on the codebase. The context is whatever the agent selects.
- IDE-mediated. An AI assistant or coding agent runs through the development environment. The context includes whatever structural knowledge of the codebase the IDE provides – automatically for assistants, by configuration for agents.
All three cases have policy levers, including which models you fund, which agents you allow, and how you track cost and volume.
But the IDE-mediated case also introduces a decision about the environment AI tools operate in, not the tools themselves. When most code is AI-generated inside IDE-based tools – at Uber, that share is 65%–72% – this decision carries real weight.
Context, assemble!
Context assembly is the process of selecting what to send to an AI model. The method used measurably affects output quality:
- A 2026 study found that a method based on tracing how code connects across files – versus one based on matching similar-looking code – produced test coverage that was 213% more complete for Java and 174% more complete for Go.
- A 2024 study compared another similarity-matching method against a static-analysis-based method that extracts code dependencies and type information. The static-analysis method produced code completions that were 62% more accurate. (Both strategies are sketched below.)
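To see the difference these studies measured, here is a deliberately toy sketch of both strategies: one scores files by raw token overlap with a query, the other follows import statements outward from the file being edited. Real tools use embeddings and full static analysis; every heuristic below is illustrative only.

```python
# Toy versions of two context assembly strategies. Real tools use
# embeddings and full static analysis; these heuristics are illustrative.
import ast
from pathlib import Path

def similarity_context(query: str, repo: Path, k: int = 3) -> list[str]:
    """Rank files by how many raw tokens they share with the query."""
    query_tokens = set(query.split())
    scored = []
    for f in repo.rglob("*.py"):
        overlap = len(query_tokens & set(f.read_text(errors="ignore").split()))
        scored.append((overlap, f))
    return [str(f) for _, f in sorted(scored, reverse=True)[:k]]

def dependency_context(entry_file: Path, repo: Path) -> list[str]:
    """Follow import statements outward from the file being edited."""
    tree = ast.parse(entry_file.read_text(errors="ignore"))
    deps = []
    for node in ast.walk(tree):
        modules = []
        if isinstance(node, ast.ImportFrom) and node.module:
            modules.append(node.module)
        elif isinstance(node, ast.Import):
            modules.extend(alias.name for alias in node.names)
        for module in modules:
            candidate = repo / (module.replace(".", "/") + ".py")
            if candidate.exists():
                deps.append(str(candidate))
    return deps
```

The dependency-tracing version surfaces the files the edited code actually touches – the kind of structural knowledge the studies above associate with better output.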
For AI tools running in a development environment, the environment determines what structural knowledge their context assembly method has to work with.
The IDE decision, reframed
Which IDE to use has traditionally been a developer’s decision. The best metrics you’ve had for it were licensing costs and developer satisfaction scores. AI gateways are beginning to change that.
Consider the gateway data you may already be monitoring, such as model call volume, context payload size, or token usage. What your team’s IDEs make available to their AI tools can influence all these metrics.
No established AI management framework has yet formalized the IDE’s role in this picture. The measurement infrastructure is still developing. GitHub’s Copilot dashboard can tell you where Copilot traffic originated. No multi-tool gateway currently offers an off-the-shelf equivalent across all your AI coding tools. In the meantime, there are two things you can do to stay ahead of the curve:
Understand what you have
Whether or not you have a gateway yet, start by understanding which IDEs your developers are using and why. If you have a gateway, go a step further: Ask your engineers what it would take to classify model calls by interaction type – IDE-mediated, agent-direct, or developer-direct. The effort varies by configuration, but the raw material is likely already there. Establishing a baseline now gives you something to measure against as your tooling matures.
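As a rough illustration of what that classification could look like – assuming your gateway logs carry a client identifier and token counts, which are field names invented for this sketch, not any gateway’s real schema:

```python
# A rough sketch of classifying gateway calls by interaction type.
# The "client" and "total_tokens" fields are invented for illustration;
# your gateway's actual log schema will differ.
from collections import Counter

CLIENT_TYPES = {
    "intellij": "ide-mediated",
    "vscode": "ide-mediated",
    "cli-agent": "agent-direct",
    "browser": "developer-direct",
}

def classify(record: dict) -> str:
    client = record.get("client", "").lower()
    for prefix, kind in CLIENT_TYPES.items():
        if client.startswith(prefix):
            return kind
    return "unknown"

def usage_by_type(records: list[dict]) -> Counter:
    """Aggregate token usage per interaction type as a baseline metric."""
    totals: Counter = Counter()
    for record in records:
        totals[classify(record)] += record.get("total_tokens", 0)
    return totals

# Three synthetic log records, one per interaction type.
logs = [
    {"client": "intellij-idea/2025.2", "total_tokens": 1800},
    {"client": "cli-agent/1.4", "total_tokens": 5200},
    {"client": "browser", "total_tokens": 900},
]
print(usage_by_type(logs))
# Counter({'agent-direct': 5200, 'ide-mediated': 1800, 'developer-direct': 900})
```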
Evaluate for what’s coming
Some IDEs leave AI tools to figure out the codebase on their own. Today’s coding agents default to doing exactly that.
Other IDEs make a structural model of the entire codebase available to the AI tools running through them. The Agent Client Protocol (ACP) lets external agents run inside JetBrains IDEs. Once connected, they can call IDE-side tools through the Model Context Protocol (MCP).
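For a sense of the mechanics, here is a minimal sketch of the JSON-RPC messages an ACP-connected agent might send over MCP. The tools/list and tools/call methods come from the MCP specification; the tool name and arguments are hypothetical stand-ins for whatever a given IDE actually exposes.

```python
# Minimal sketch of an agent's MCP traffic to an IDE. MCP uses JSON-RPC
# 2.0, and tools/list and tools/call are standard MCP methods; the tool
# name and arguments below are hypothetical stand-ins.
import json

# Step 1: discover which tools the IDE exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: ask the IDE for structural knowledge instead of grepping for it.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "find_usages",  # hypothetical IDE-side tool
        "arguments": {"symbol": "PaymentService.charge"},
    },
}

for message in (list_request, call_request):
    print(json.dumps(message, indent=2))
```

The shape of the exchange is the point: instead of reading and searching files on its own, the agent queries the IDE’s structural model directly.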
As agentic coding work becomes more complex and autonomous, the structural advantage an IDE can provide matters more. The mechanisms are new enough that the evidence base is still thin, but early findings from a Sourcegraph-published benchmark showed that agents using MCP tools completed tasks 38% faster and located relevant files 70% more often on large, multi-repository codebases.
Your developers know what their IDEs provide and how their agents are configured. It’s on you to decide whether that’s enough for where AI-assisted development is heading.
IDEs for the work ahead
When your team’s IDE choices are on your AI agenda, JetBrains gives you architectural variables to adjust.
JetBrains IDEs maintain a continuous structural representation of the entire codebase, which streamlines AI context building. All of it automatically reaches AI Assistant, the IDEs’ native interface, which supports virtually any LLM via your own keys or JetBrains AI.
For over 25 ACP-compatible coding agents, JetBrains IDEs provide tools that expose that same representation directly. Most agents need to be pointed at these tools; once they are, the context-building loop can be shortened, according to at least one engineering team.
The dynamics are still settling, and your mileage will surely vary – but the levers are there for you to pull.
See how JetBrains supports more reliable context building in AI-assisted development.
Explore JetBrains tools for business