Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
152816 stories · 33 followers

AI Agent & Copilot Podcast: Jorgen Bach on Closing the ERP Value Gap and Operationalizing AI

1 Share

At the AI Agent & Copilot Summit, Truvio’s Jorgen Bach explains how unified platforms and AI agents are helping enterprises close the ERP value gap and operationalize AI adoption.

The post AI Agent & Copilot Podcast: Jorgen Bach on Closing the ERP Value Gap and Operationalizing AI appeared first on Cloud Wars.

Read the whole story
alvinashcraft
16 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Spotify introduces verified artist badges to help distinguish humans from AI

As AI-generated artists and tracks flood music streaming platforms, Spotify is rolling out a new “Verified by Spotify” badge to help listeners more easily identify authentic human artists. To receive the badge, artists must meet certain criteria. Spotify looks for an identifiable artist presence both on and off platform, like concert dates, merch, and linked […]

Meta lost 20 million users last quarter


Meta is planning to pump billions more into AI investments this year, despite noting that millions of users have seemingly started to abandon its platforms. In an earnings call on Wednesday, Meta reported that its figure for "Family daily active people" - the term Meta has coined for the collective users of Facebook, Instagram, WhatsApp, and Messenger - declined by 20 million this quarter compared to the previous three months.

Meta attributes this fall to "internet disruptions in Iran, as well as a restriction on access to WhatsApp in Russia." It's up to you whether you take Meta at its word, given that by bundling the user stats together across …

Read the full story at The Verge.


Microsoft PowerToys 0.99 Adds Multi-Monitor Tools for Windows Users


PowerToys 0.99 adds new monitor and window-management tools for Windows users, plus updates to Command Palette, Keyboard Manager, ZoomIt, and Image Resizer.

The post Microsoft PowerToys 0.99 Adds Multi-Monitor Tools for Windows Users appeared first on TechRepublic.


The Worst Coder in the World goes agentic: building a leaderboard cracking AI

Agents are everywhere, so isn't it fitting that the Worst Coder in the World goes agentic? A coding newbie explores the challenges and rewards of building an agent for work—and trying to learn a few things about coding along the way.

The IDE Is Already an AI Quality Variable. Is It on Your AI Agenda?


Your developers’ AI tools are only as good as what they know going in. When those tools run through the right IDE, it can give them a head start – a picture of the codebase the tools would otherwise need to piece together themselves.

That means your team’s IDE choices belong on your AI agenda alongside the policies you set around gateway data and LLM decisions. 

The AI gateway ceiling

AI gateways are now serious management infrastructure components. Gartner projected that 70% of software engineering teams building multimodal applications will have them in place by 2028.

Gateways give you two types of AI management levers:

  • In-pipeline controls. Think model routing, rate limiting, and cost allocation. These give you solid visibility and guardrails over AI spend, but they apply only to requests that are already formed.
  • Pre-pipeline policies. Think approved model lists, prompting guidelines, and training programs. In theory, such policies shape developer behavior; in practice, a 2024 Stack Overflow survey found that 73% of developers weren't sure whether their companies even had an AI policy.
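The in-pipeline levers can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of model routing, rate limiting, and per-team cost allocation; it is not any particular gateway product's API, and the model names, prices, and limits are invented.

```python
import time
from collections import defaultdict

# Hypothetical in-pipeline gateway controls. Prices and limits are made up.
APPROVED_MODELS = {"gpt-large": 0.03, "local-small": 0.001}  # $ per 1K tokens
RATE_LIMIT = 5  # requests per rolling minute, per team

class GatewayPolicy:
    def __init__(self):
        self.requests = defaultdict(list)   # team -> request timestamps
        self.spend = defaultdict(float)     # team -> accumulated cost ($)

    def route(self, team: str, model: str, tokens: int) -> str:
        if model not in APPROVED_MODELS:
            # Model routing: fall back to an approved default.
            model = "local-small"
        now = time.monotonic()
        # Rate limiting: keep only timestamps inside the rolling window.
        window = [t for t in self.requests[team] if now - t < 60]
        if len(window) >= RATE_LIMIT:
            raise RuntimeError(f"rate limit exceeded for {team}")
        window.append(now)
        self.requests[team] = window
        # Cost allocation: attribute spend to the requesting team.
        self.spend[team] += APPROVED_MODELS[model] * tokens / 1000
        return model

gw = GatewayPolicy()
assert gw.route("platform", "gpt-large", 2000) == "gpt-large"
assert gw.route("platform", "unapproved-model", 1000) == "local-small"
```

Note that everything this sketch sees is a request that already exists; nothing here influences what context went into forming it.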

And yet the question of how to link AI usage to engineering outcomes remains open. "We're building toward that answer," said GitHub when launching its organization-level Copilot dashboard in February 2026.

Gateways are a necessary part of the answer. But they don't provide an architectural lever over what AI tools have to work with before a request is even formed. The information those tools can access makes a difference – regardless of how well your people follow prompting guidelines or how closely you monitor gateway statistics.

Familiar tool, overlooked AI lever

One of the best-evidenced frameworks for closing the measurement loop between AI usage and AI outcomes is in the DORA 2025 State of AI-Assisted Software Development report. It identified seven capabilities for leaders to prioritize:

  • Two are organizational: a focus on AI’s end users and a clear, communicated AI policy. That’s where your AI gateway fits in.
  • Two are procedural: strong version control practices and working in small batches.
  • Three are technical: a healthy data ecosystem, AI-accessible internal data, and a high-quality internal platform.

Within the area of data capabilities, DORA is specific about what drives performance: context, or what a model receives before generating output. Better context means greater benefits. What DORA doesn’t drill into is what determines context quality at the point of creation. That depends on who or what creates it and how.

To AI, Re: Context

Gateways may not yet show who or what is creating context, but there are three basic cases:

  1. Developer-direct. A developer interacts with AI directly through a browser or chat interface. The context is whatever gets pasted.
  2. Agent-direct. An autonomous agent operates directly on the codebase. The context is whatever the agent selects. 
  3. IDE-mediated. An AI assistant or coding agent runs through the development environment. The context includes whatever structural knowledge of the codebase the IDE provides – automatically for assistants, by configuration for agents. 
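If gateway requests carry even minimal client metadata, the three cases can be told apart with a simple classifier. Here is a rough sketch; the header names and client strings are invented for illustration and do not reflect any real gateway's schema.

```python
# Hypothetical classification of gateway traffic by interaction type.
# Header names and client identifiers are assumptions, not a real schema.
def classify_call(headers: dict) -> str:
    client = headers.get("x-client", "").lower()
    if "ide" in client or client in {"jetbrains-ai", "vscode-copilot"}:
        return "ide-mediated"     # assistant or agent running through an IDE
    if headers.get("x-agent-session"):
        return "agent-direct"     # autonomous agent operating on the codebase
    return "developer-direct"     # browser or chat interface; pasted context

assert classify_call({"x-client": "jetbrains-ai"}) == "ide-mediated"
assert classify_call({"x-agent-session": "abc123"}) == "agent-direct"
assert classify_call({}) == "developer-direct"
```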

All three cases have policy levers, including which models you fund, which agents you allow, and how you track cost and volume.

But the IDE-mediated case also introduces a decision about the environment AI tools operate in, not the tools themselves. Where most code is AI-generated inside IDE-based tools – at Uber, that share is 65%–72% – this decision carries real weight. 

Context, assemble! 

Context assembly is the process of selecting what to send to an AI model. The method used measurably affects output quality:

  • A 2026 study found that a method based on tracing how code connects across files – versus one based on matching similar-looking code – produced 213% more complete test coverage for Java and 174% for Go. 
  • A 2024 study compared another similarity-matching method against a static-analysis method that extracts code dependencies and type information. The extraction-based method produced code completions that were 62% more accurate.
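The contrast between the two strategies in these studies can be made concrete with a toy example: one selector follows actual import relationships via Python's ast module, the other uses crude token overlap. The file contents are invented, and real systems use far richer dependency graphs and similarity models than this.

```python
import ast

# Toy contrast between two context-assembly strategies.
# File contents are invented for illustration.
FILES = {
    "billing.py": "import tax\n\ndef invoice(amount):\n    return amount + tax.vat(amount)\n",
    "tax.py": "def vat(amount):\n    return amount * 0.2\n",
    "billing_notes.py": "# mentions invoice and amount but is unrelated\n",
}

def trace_dependencies(target: str) -> list[str]:
    """Select context by following imports the target file actually makes."""
    tree = ast.parse(FILES[target])
    deps = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps += [f"{a.name}.py" for a in node.names if f"{a.name}.py" in FILES]
    return deps

def match_similar(target: str) -> list[str]:
    """Select context by crude token overlap with the target file."""
    words = set(FILES[target].split())
    scored = [(len(words & set(text.split())), name)
              for name, text in FILES.items() if name != target]
    return [name for score, name in sorted(scored, reverse=True) if score > 0][:1]

# Tracing reliably surfaces tax.py, the file invoice() actually calls;
# token overlap may or may not, depending on how the look-alikes score.
print(trace_dependencies("billing.py"))
print(match_similar("billing.py"))
```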

For AI tools running in a development environment, the environment determines what structural knowledge their context assembly method has to work with.

The IDE decision, reframed

Which IDE to use has traditionally been a developer’s decision. The best metrics you’ve had around it have included licensing costs and developer satisfaction scores. AI gateways are beginning to change that.

Consider the gateway data you may already be monitoring, such as model call volume, context payload size, or token usage. What your team’s IDEs make available to their AI tools can influence all these metrics. 

No established AI management framework has yet formalized the IDE’s role in this picture. The measurement infrastructure is still developing. GitHub’s Copilot dashboard can tell you where Copilot traffic originated. No multi-tool gateway currently offers an off-the-shelf equivalent across all your AI coding tools. In the meantime, there are two things you can do to stay ahead of the curve:

Understand what you have

Whether or not you have a gateway yet, start by understanding which IDEs your developers are using and why. If you have a gateway, go a step further: Ask your engineers what it would take to classify model calls by interaction type – IDE-mediated, agent-direct, or developer-direct. The effort varies by configuration, but the raw material is likely already there. Establishing a baseline now gives you something to measure against as your tooling matures.
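Once calls are classified by interaction type, establishing the baseline is simple aggregation over gateway logs. A minimal sketch follows; the log fields are an invented schema, not any gateway's real output.

```python
from collections import defaultdict

# Hypothetical gateway log entries; field names are assumptions.
LOG = [
    {"type": "ide-mediated", "tokens": 1200},
    {"type": "agent-direct", "tokens": 5400},
    {"type": "ide-mediated", "tokens": 800},
    {"type": "developer-direct", "tokens": 300},
]

def baseline(log: list[dict]) -> dict:
    """Total token volume per interaction type: the number to measure against."""
    totals = defaultdict(int)
    for call in log:
        totals[call["type"]] += call["tokens"]
    return dict(totals)

assert baseline(LOG) == {
    "ide-mediated": 2000, "agent-direct": 5400, "developer-direct": 300,
}
```

Tracking these totals over time shows whether IDE-mediated traffic behaves differently from the other two cases as your tooling matures.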

Evaluate for what’s coming

Some IDEs leave AI tools to figure out the codebase on their own. Today’s coding agents default to doing exactly that.

Other IDEs make a structural model of the entire codebase available to the AI tools running through them. The Agent Client Protocol (ACP) lets external agents run inside JetBrains IDEs. Once connected, they can call IDE-side tools through the Model Context Protocol (MCP).

As agentic coding work becomes more complex and autonomous, this structural advantage that an IDE can provide matters more. The mechanisms are new enough that the evidence base is still thin, but early findings from a Sourcegraph-published benchmark showed that agents using MCP tools completed tasks 38% faster and located relevant files 70% more often on large, multi-repository codebases.

Your developers know what their IDEs provide and how their agents are configured. It’s on you to decide whether that’s enough for where AI-assisted development is heading.

IDEs for the work ahead

When your team’s IDE choices are on your AI agenda, JetBrains gives you architectural variables to adjust.

JetBrains IDEs maintain a continuous structural representation of the entire codebase that streamlines AI context building. All of it automatically reaches the AI Assistant, the IDEs’ native interface that supports virtually any LLM with your own keys or JetBrains AI.

For over 25 ACP-compatible coding agents, JetBrains IDEs provide tools that expose the same representation directly. Most agents need to be pointed at the tools; when they are, the context-building loop can be shortened, according to at least one engineering team.

The dynamics are still settling, and your mileage will surely vary – but the levers are there for you to pull. 

See how JetBrains supports more reliable context building in AI-assisted development.

Explore JetBrains tools for business
