Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Does AI Make the Agile Manifesto Obsolete?


Capgemini's Steve Jones argues AI agents building apps in hours have killed the Agile Manifesto, as its human-centric principles don't fit agentic SDLCs. While Forrester reports 95% still find Agile relevant, Kent Beck proposes "augmented coding" and AWS suggests "Intent Design" over sprint planning. The debate: Is Agile dead, or evolving for AI collaboration?

By Steef-Jan Wiggers
Read the whole story
alvinashcraft
39 minutes ago
Pennsylvania, USA

Creating standard and "observable" instruments: System.Diagnostics.Metrics APIs - Part 3


In the first post in this series I provided an introduction to the System.Diagnostics.Metrics APIs introduced in .NET 6. I introduced the concept of "observable" Instruments in that post, but didn't go into much detail. In this post, we'll look at what being "observable" means, and how these Instruments differ from non-observable Instruments.

I start the post with a quick refresher on the basics of the System.Diagnostics.Metrics APIs, such as the different types of instruments available. I then show how you can create each of the instrument types and produce values from them.

System.Diagnostics.Metrics APIs

The System.Diagnostics.Metrics APIs were introduced in .NET 6 but are available in earlier runtimes (including .NET Framework) by using the System.Diagnostics.DiagnosticSource NuGet package. There are two primary concepts exposed by these APIs: Instrument and Meter:

  • Instrument: An instrument records the values for a single metric of interest. You might have separate Instruments for "products sold", "invoices created", "invoice total", or "GC heap size".
  • Meter: A Meter is a logical grouping of multiple instruments. For example, the System.Runtime Meter contains multiple Instruments about the workings of the runtime, while the Microsoft.AspNetCore.Hosting Meter contains Instruments about the HTTP requests received by ASP.NET Core.

There are also (currently, as of .NET 10) 7 different types of Instrument:

  • Counter<T>
  • ObservableCounter<T>
  • UpDownCounter<T>
  • ObservableUpDownCounter<T>
  • Gauge<T>
  • ObservableGauge<T>
  • Histogram<T>

To create a custom metric, you need to choose the type of Instrument to use, and associate it with a Meter. I'll discuss the differences between each of these instruments shortly, but first we'll look at the difference between "observable" instruments, and "normal" instruments.
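To make that concrete, here's a minimal sketch of the mechanics: creating a Meter and associating a Counter<long> with it. The meter name, instrument name, and tag used here are hypothetical, not names from any real library.

using System.Collections.Generic;
using System.Diagnostics.Metrics;

public static class ShopMetrics
{
    // A Meter is the logical grouping; consumers subscribe to metrics by this name.
    private static readonly Meter s_meter = new("MyCompany.Shop", "1.0.0");

    // A Counter<T> instrument, associated with the Meter, records a single metric of interest.
    private static readonly Counter<long> s_productsSold = s_meter.CreateCounter<long>(
        "mycompany.shop.products_sold",
        unit: "{product}",
        description: "Number of products sold.");

    public static void RecordSale(long quantity) =>
        s_productsSold.Add(quantity, new KeyValuePair<string, object?>("shop.region", "eu"));
}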

What is an Observable* instrument?

When using the System.Diagnostics.Metrics APIs there's a "producer" side and a "consumer" side. The producer of metrics is the app itself, recording values and details about how it's operating. The consumer could be an in-process consumer, such as the OpenTelemetry libraries, or it could be an external process, such as dotnet-counters or dotnet-monitor.

The differences between a "normal" instrument and an "observable" instrument stem from who controls when and how a value is emitted:

  • For "normal" instruments, the producer emits values as they occur. For example, when a request is received, ASP.NET Core emits the http.server.active_requests metric, indicating a new request is in-flight.
  • For "observable" instruments, the consumer side asks for the value. For example, the dotnet.gc.pause.time metric returns "The total amount of time paused in GC since the process has started", but only when you ask for it.

In general, observable instruments are used when you have an effectively continuous value that it wouldn't make sense for the producer to actively emit, such as the dotnet.gc.pause.time above, or where emitting all of the intermediate values would be too expensive from a performance point of view.

Technically, you could potentially emit this metric every time the GC pauses, but given that these values are more fine-grained than you would likely want anyway, it's much more efficient to allow the consumer to "poll" the values on demand, and therefore it makes the most sense as an observable instrument.

Now that we understand the difference between observable and normal instruments, let's walk through all the instrument types and see how they're used in the .NET base class libraries.

Understanding the different Instrument types

So far in this series we've used a simple Counter<T> that records every time a given event occurs. In this post we'll look at each of the possible Instruments in turn, showing how you create an instrument of that type to produce a given metric. Where possible, I'm showing places within the .NET or ASP.NET Core libraries that use each of these instruments, to give "real world" versions of how these are used.

Counter<T>

The Counter<T> instrument is one of the simplest instruments conceptually. It is used to record how many times a given event occurs.

For example, the aspnetcore.diagnostics.exceptions metric is a Counter<long> which records the "Number of exceptions caught by exception handling middleware."

_handlerExceptionCounter = _meter.CreateCounter<long>(
    "aspnetcore.diagnostics.exceptions",
    unit: "{exception}",
    description: "Number of exceptions caught by exception handling middleware.");

Every time the ExceptionHandlerMiddleware (or DeveloperExceptionHandlerMiddleware) catches an exception, it adds 1 to this counter, first constructing an appropriate set of tags, and then calling Add(1, tags):

private void RequestExceptionCore(string exceptionName, ExceptionResult result, string? handler)
{
    var tags = new TagList();
    tags.Add("error.type", exceptionName);
    tags.Add("aspnetcore.diagnostics.exception.result", GetExceptionResult(result));
    if (handler != null)
    {
        tags.Add("aspnetcore.diagnostics.handler.type", handler);
    }
    _handlerExceptionCounter.Add(1, tags);
}

As this Counter<T> is tracking a number of occurrences, you always add positive values, never negative ones, though you can increase by more than 1 at a time if need be.

ObservableCounter<T>

The ObservableCounter<T> is conceptually similar to a Counter<T>, in that it records monotonically increasing values. Being an "observable" instrument, it only records the values when "observed" (we'll look at how to observe the instruments in your own code in a subsequent post).

For example, the dotnet.gc.heap.total_allocated metric is an ObservableCounter<long> which records "The approximate number of bytes allocated on the managed GC heap since the process has started":

s_meter.CreateObservableCounter(
    "dotnet.gc.heap.total_allocated",
    () => GC.GetTotalAllocatedBytes(),
    unit: "By",
    description: "The approximate number of bytes allocated on the managed GC heap since the process has started. The returned value does not include any native allocations.");

When observed, the lambda included in the definition is called, which invokes GC.GetTotalAllocatedBytes(). Note that this value steadily increases during the lifetime of the app, so it's not returning the difference since last invocation, it's returning the current running total.
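If you create your own ObservableCounter<T>, the same pattern applies: keep a running total and return it from the callback. Here's a minimal sketch under that assumption; the meter name, instrument name, and QueueMetrics type are hypothetical.

using System.Diagnostics.Metrics;
using System.Threading;

public static class QueueMetrics
{
    private static readonly Meter s_meter = new("MyCompany.Queue");
    private static long s_totalProcessed;

    // The callback is only invoked when a listener observes the instrument, and it
    // returns the running total, not the delta since the last observation.
    private static readonly ObservableCounter<long> s_processed = s_meter.CreateObservableCounter(
        "mycompany.queue.messages_processed",
        () => Interlocked.Read(ref s_totalProcessed),
        unit: "{message}",
        description: "Total number of messages processed since the process started.");

    // The producer just updates the field; nothing is emitted until the value is observed.
    public static void OnMessageProcessed() => Interlocked.Increment(ref s_totalProcessed);
}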

UpDownCounter<T>

The UpDownCounter<T> is similar to the Counter<T>, but it supports reporting positive or negative values.

For example, the http.server.active_requests metric is an UpDownCounter<T> that records the "Number of active HTTP server requests.":

_activeRequestsCounter = _meter.CreateUpDownCounter<long>(
    "http.server.active_requests",
    unit: "{request}",
    description: "Number of active HTTP server requests.");

When a request is started, the server calls Add() and increments the value of the counter:

public void RequestStart(string scheme, string method)
{
    // Tags must match request end.
    var tags = new TagList();
    InitializeRequestTags(ref tags, scheme, method);
    _activeRequestsCounter.Add(1, tags);
}

private static void InitializeRequestTags(ref TagList tags, string scheme, string method)
{
    tags.Add(HostingTelemetryHelpers.AttributeUrlScheme, scheme);
    tags.Add(HostingTelemetryHelpers.AttributeHttpRequestMethod, HostingTelemetryHelpers.GetNormalizedHttpMethod(method));
}

Similarly, when the request ends, the server calls Add() to decrement the value of the counter:

public void RequestEnd(string protocol, string scheme, string method, string? route, int statusCode, bool unhandledRequest, Exception? exception, List<KeyValuePair<string, object?>>? customTags, long startTimestamp, long currentTimestamp, bool disableHttpRequestDurationMetric)
{
    var tags = new TagList();
    InitializeRequestTags(ref tags, scheme, method);

    // Tags must match request start.
    if (_activeRequestsCounter.Enabled)
    {
        _activeRequestsCounter.Add(-1, tags);
    }

    // ...
}

Consequently, the UpDownCounter<T> receives a series of increment/decrement values representing the movement of the metric.

ObservableUpDownCounter<T>

The ObservableUpDownCounter<T> is similar to the UpDownCounter<T> in that it reports increasing or decreasing values of a metric. The difference is that it returns the absolute value of the metric when observed, as opposed to a stream of deltas.

For example, the dotnet.gc.last_collection.heap.size metric is an ObservableUpDownCounter<long> that reports "The managed GC heap size (including fragmentation), as observed during the latest garbage collection":

s_meter.CreateObservableUpDownCounter(
    "dotnet.gc.last_collection.heap.size",
    GetHeapSizes,
    unit: "By",
    description: "The managed GC heap size (including fragmentation), as observed during the latest garbage collection.");

When observed, the GetHeapSizes() method is invoked and returns a collection of Measurements, each tagged by the heap generation name:

private static readonly string[] s_genNames = ["gen0", "gen1", "gen2", "loh", "poh"];
private static readonly int s_maxGenerations = Math.Min(GC.GetGCMemoryInfo().GenerationInfo.Length, s_genNames.Length);

private static IEnumerable<Measurement<long>> GetHeapSizes()
{
    GCMemoryInfo gcInfo = GC.GetGCMemoryInfo();

    for (int i = 0; i < s_maxGenerations; ++i)
    {
        yield return new Measurement<long>(gcInfo.GenerationInfo[i].SizeAfterBytes, new KeyValuePair<string, object?>("gc.heap.generation", s_genNames[i]));
    }
}

This returns the size of each heap at the last GC collection, the value of which may obviously increase or decrease.

Gauge<T>

The Gauge<T> is used to record "non-additive" values whenever they occur. These values can go up and down, and be positive or negative, but the point is that they "overwrite" all previous values.

Interestingly, this Instrument type was only added in .NET 9, and I couldn't find a single case of Gauge<T> being used in the .NET runtime, ASP.NET Core, or the .NET extensions packages 😅 So I made one up: for example, consider a gauge that reports the current room temperature when it changes:

var instrument = _meter.CreateGauge<double>(
    name: "locations.room.temperature",
    unit: "°C",
    description: "Current room temperature"
);

Then when the temperature of the room changes, you would report the new value:

public void OnOfficeTemperatureChanged(double newTemperature)
{
    instrument.Record(newTemperature, new KeyValuePair<string, object?>("room", "office"));
}

The gauge values are recorded whenever the temperature changes.

ObservableGauge<T>

Conceptually the ObservableGauge<T> is the same as a Gauge<T>, except that it only produces a value when observed. ObservableGauge<T> was added way back in .NET 6, and there are some examples of its use in the .NET libraries.

For example, the process.cpu.utilization metric is an ObservableGauge<double> instrument which reports "The CPU consumption of the running application in range [0, 1]".

_ = meter.CreateObservableGauge(
    name: "process.cpu.utilization",
    observeValue: CpuPercentage);

When observed, the CpuPercentage() method is invoked, which returns a single value for the CPU usage as a value between 0 and 1.

private double CpuPercentage()
{
    // see above link for implementation
}

This Instrument is exposed in the Microsoft.Extensions.Diagnostics.ResourceMonitoring meter, and implemented in the Microsoft.Extensions.Diagnostics.ResourceMonitoring NuGet package.
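To round out this instrument type, here's a minimal sketch of a custom ObservableGauge<long> that reports the current working set of the process each time it's observed. The meter and instrument names are hypothetical; only Environment.WorkingSet is a real BCL API.

using System;
using System.Diagnostics.Metrics;

public static class ProcessMetrics
{
    private static readonly Meter s_meter = new("MyCompany.Process");

    // Each observation returns a single, current value; previous values are simply overwritten.
    private static readonly ObservableGauge<long> s_workingSet = s_meter.CreateObservableGauge(
        "mycompany.process.working_set",
        () => Environment.WorkingSet,
        unit: "By",
        description: "The current working set of the process, in bytes.");
}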

Histogram<T>

The final instrument type is Histogram<T>, which is used to report arbitrary values that you will typically want to aggregate statistically.

For example, the http.server.request.duration metric is a Histogram<double> which records the "Duration of HTTP server requests". Durations and latencies are a classic example of where you might want to use a histogram, so that you can calculate the p50, p90, p99, etc. latencies, or record all the values and plot them as a graph.

_requestDuration = _meter.CreateHistogram<double>(
    "http.server.request.duration",
    unit: "s",
    description: "Duration of HTTP server requests.",
    advice: new InstrumentAdvice<double> { HistogramBucketBoundaries = MetricsConstants.ShortSecondsBucketBoundaries });

The example above also shows our first example of InstrumentAdvice<T>. This type provides suggested configuration settings for consumers, indicating the best settings to use when processing Instrument values. In this case, the advice provides a suggested set of histogram bucket boundaries: [0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10], which can be useful for consumers to know how best to plot the metric values.
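As a minimal sketch of providing InstrumentAdvice<T> for a metric of your own, you might suggest bucket boundaries for a custom duration histogram like this. The meter name, instrument name, and boundary values here are hypothetical choices, not recommendations from the library.

using System;
using System.Diagnostics.Metrics;

public static class JobMetrics
{
    private static readonly Meter s_meter = new("MyCompany.Jobs");

    private static readonly Histogram<double> s_jobDuration = s_meter.CreateHistogram<double>(
        "mycompany.job.duration",
        unit: "s",
        description: "Duration of background jobs.",
        advice: new InstrumentAdvice<double>
        {
            // Suggested default bucket boundaries for consumers that aggregate into histograms.
            HistogramBucketBoundaries = new double[] { 0.01, 0.05, 0.1, 0.5, 1, 5, 10 }
        });

    public static void RecordJob(TimeSpan duration) =>
        s_jobDuration.Record(duration.TotalSeconds);
}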

The _requestDuration histogram instrument is called whenever an ASP.NET Core request ends, recording the duration of the request along with a large number of associated tags. I've reproduced all the code below for completeness (expanding tag constants for clarity), but it's basically just building up a collection of tags which are recorded along with the duration of the request.

public void RequestEnd(string protocol, string scheme, string method, string? route, int statusCode, bool unhandledRequest, Exception? exception, List<KeyValuePair<string, object?>>? customTags, long startTimestamp, long currentTimestamp, bool disableHttpRequestDurationMetric)
{
    var tags = new TagList();
    InitializeRequestTags(ref tags, scheme, method);

    if (!disableHttpRequestDurationMetric && _requestDuration.Enabled)
    {
        if (HostingTelemetryHelpers.TryGetHttpVersion(protocol, out var httpVersion))
        {
            tags.Add("network.protocol.version", httpVersion);
        }
        if (unhandledRequest)
        {
            tags.Add("aspnetcore.request.is_unhandled", true);
        }

        // Add information gathered during request.
        tags.Add("http.response.status_code", HostingTelemetryHelpers.GetBoxedStatusCode(statusCode));
        if (route != null)
        {
            tags.Add("http.route", RouteDiagnosticsHelpers.ResolveHttpRoute(route));
        }

        // Add before some built in tags so custom tags are prioritized when dealing with duplicates.
        if (customTags != null)
        {
            for (var i = 0; i < customTags.Count; i++)
            {
                tags.Add(customTags[i]);
            }
        }

        // This exception is only present if there is an unhandled exception.
        // An exception caught by ExceptionHandlerMiddleware and DeveloperExceptionMiddleware isn't thrown to here. Instead, those middleware add error.type to custom tags.
        if (exception != null)
        {
            // Exception tag could have been added by middleware. If an exception is later thrown in request pipeline
            // then we don't want to add a duplicate tag here because that breaks some metrics systems.
            tags.TryAddTag("error.type", exception.GetType().FullName);
        }
        else if (HostingTelemetryHelpers.IsErrorStatusCode(statusCode))
        {
            // Add error.type for 5xx status codes when there's no exception.
            tags.TryAddTag("error.type", statusCode.ToString(CultureInfo.InvariantCulture));
        }

        var duration = Stopwatch.GetElapsedTime(startTimestamp, currentTimestamp);
        _requestDuration.Record(duration.TotalSeconds, tags);
    }
}

It's interesting to note that while the histogram strictly records request durations, the many tags attached to each measurement mean you could derive various other metrics from it. For example, you could determine the number of "successful" requests, the number of requests to a particular route, or the number with a given status code.

And that's it, we've covered all of the Instrument types currently available in .NET 10. Note that there's no ObservableHistogram<T> type, as that generally wouldn't be practical to implement.

We now know how to create all the different types of Instrument, and in the first post of this series I showed how to record the metrics using dotnet-counters. In the following post in this series, we'll look at how to record these values in-process instead.

Summary

In this post, I described each of the different Instrument<T> types exposed by the System.Diagnostics.Metrics APIs. For each type I described when you would use it and provided an example of both how to create the Instrument<T>, and how to record values, using examples from the .NET base class libraries and ASP.NET Core. In the next post we'll look at how to record values produced by Instrument<T> types in-process.

Read the whole story
alvinashcraft
39 minutes ago
Pennsylvania, USA

Human-centered AI: How to keep humans at the center of your AI efforts


Read the whole story
alvinashcraft
40 minutes ago
Pennsylvania, USA

The Most Popular AI Tools: What Developers Use and Why


AI tools have become a core part of modern software development. Developers rely on them throughout the life cycle, from writing and refactoring code to testing, documentation, and analysis.

Once experimental add-ons, these tools now function as everyday assistants and are firmly embedded in routine workflows. But why have AI tools become so essential – and how are developers actually using them?

The insights in this article draw on findings from the JetBrains State of Developer Ecosystem Report 2025, which tracks how developers use tools, languages, and technologies, including AI tools, in real-world environments. Shifting the focus from technical model performance, this article looks at usage patterns, developer preferences, and adoption trends across tools, regions, and workflows.

Before we work through which AI tools developers use most, why they choose them, and how these tools fit into everyday work, let’s first clarify what AI tools are and why they matter so much right now.

Disclaimer: Please note that the findings in the article reflect data collected during the specific research period set out in the report.

Table of Contents

  • What AI tools are and why they matter now
  • Most popular AI tools among developers
  • What makes developers choose one AI tool over another
  • How developers use AI tools in daily workflows
  • Global snapshot: How AI tool adoption differs across regions
  • Barriers to adopting AI tools
  • Future of AI tools: What developers want next
  • FAQ
  • Conclusion

What AI tools are and why they matter now

Today’s AI tools for developers span several categories. They include code assistants that suggest or generate code, as well as tools that review code autonomously. Many come as IDE integrations that understand project context.

There are also AI-powered search and navigation tools, refactoring helpers, and documentation generators. In addition, teams now use testing assistants and autonomous or semi-autonomous agents to support more complex workflows.

Understanding today’s AI tools list for developers matters because these tools directly address growing pressures in modern development. They shorten development cycles, reduce manual tasks, and help teams maintain quality, which is especially important as codebases grow.

This growing reliance makes it important to understand which tools developers actually use most. In the next section, we will see what these AI tools are.

Most popular AI tools among developers

Developers rarely rely on a single AI tool. Instead, they combine multiple tools depending on their IDE, workflow style, and project requirements. According to the AI usage insights in the JetBrains State of Developer Ecosystem Report 2025, adoption clusters around three main categories: IDE-native assistants, standalone AI-powered development environments, and browser-based or cloud chat tools.

Across these categories, the most popular AI assistants are GitHub Copilot, JetBrains AI Assistant, Cursor, Windsurf, and Tabnine. Adoption of these top AI tools varies based on ecosystem, IDE choice, and workflow style.

IDE-native assistants, such as GitHub Copilot and JetBrains AI Assistant, remain among the most popular AI tools because they operate inside the editor and integrate directly into existing workflows, making them more context-aware.

Standalone AI-focused editors and assistants, such as Cursor and Windsurf, often emphasise more experimental or agent-style workflows. This is an area that is evolving across the ecosystem, with increasing convergence between IDE-native tools and more agent-driven capabilities.

Other tools focus on specific priorities. For example, Tabnine attracts teams that prioritize privacy and local inference. Region-specific tools also play an important role in areas with strong domestic AI ecosystems or regulatory constraints.

This diversity becomes clearer when comparing the best AI tools for developers side by side.

Comparison table: AI tools overview

AI tool | Typical use case | Underlying models | Distinct features | Integration type
GitHub Copilot | Code generation and completion | GPT family | Tight GitHub + VS Code workflows | IDE / Cloud
JetBrains AI Assistant | Context-aware help, refactoring | Claude / GPT / Gemini | Deep IDE context + privacy focus | In-IDE
Cursor | Inline edits, debugging, chat | Claude / Gemini | Fast UI, multi-step edits | IDE plugin
Windsurf | Autonomous task execution and code changes | Claude / GPT | Agent-like capabilities | Standalone
Tabnine | Privacy-oriented code suggestions | Proprietary / DeepSeek | Local inference options | IDE plugin
Disclaimer: Please note that the findings reflect data collected during the specific research period set out in the report.

What makes developers choose one AI tool over another

Developers are not choosing AI tools solely on novelty. They evaluate how well a tool fits existing workflows, how reliable the output feels, and whether the tool aligns with team constraints. The JetBrains State of Developer Ecosystem Report 2025 identifies several of these practical considerations that shape decision-making.

Integration quality ranks among the most important factors. Developers prefer AI coding tools that work seamlessly inside their preferred IDE. A tool that interrupts flow or requires constant context switching often fails to gain long-term adoption.

Accuracy and code quality are equally crucial. Developers expect AI coding tools to produce reliable results that they can trust. When outputs require extensive correction, confidence drops quickly.

Privacy and data security also influence developer AI preferences. This is especially true in enterprise environments. Tools that offer local processing or clear privacy guarantees often see stronger uptake in regulated industries.

Finally, pricing, transparency, and vendor reputation affect adoption. Developers value clear pricing models, flexible access, and vendors with a track record of supporting developer tools. Trust builds over time through consistency and ongoing communication.

Let’s see how developers evaluate each of these factors in this AI assistant comparison.

Key factors influencing tool choice

Factor | Why it matters | How developers evaluate it
IDE integration | Supports smooth workflows | Works natively in their preferred IDE
Code accuracy and quality | Affects trust and usability | Produces correct, clear, and maintainable code
Privacy and security | Protects source code and IP | Provides clear data handling and local mode options
Pricing and access | Impacts adoption at scale | Offers flexible tiers and predictable costs
Transparency | Builds confidence | Discloses model provider and data policies
Vendor reputation | Signals long-term reliability | Demonstrates a history of dev tools and quality support
Disclaimer: please note that the findings reflect data collected during the specific research period set out in the report.

How developers use AI tools in daily workflows

Developers integrate AI tool usage throughout the development life cycle rather than limiting it to a single task. Most workflows combine several forms of AI access depending on the problem at hand.

When coding with AI tools, developers may use in-IDE assistants for context-aware code help and chat-based interfaces for problem-solving and prototyping. In addition, developer AI assistant usage may combine browser tools for quick inline answers, APIs for automation and CI/CD tasks, and local models for privacy-restricted environments.

Across these use cases, developers are clearly no longer relying on a single tool. AI workflows increasingly involve choosing the right tool for the task at hand, be it writing code, refactoring, debugging, generating documentation, testing, or understanding unfamiliar code.

The JetBrains State of Developer Ecosystem Report 2025 indicates that developers frequently switch between AI access points in this way. They choose the interface that best fits the task rather than expecting one tool to handle everything.


Workflow types and examples

Workflow type | Typical use case | Example tools | Integration context | Developer benefit
In-IDE assistance | Code suggestions, refactoring | JetBrains AI Assistant, GitHub Copilot | IDE | Immediate, context-aware help
Chat-based interaction | Explanations, brainstorming, regex, prototyping | ChatGPT, Claude | Browser / Cloud | Fast iteration and reasoning
API integration | Automation, CI tasks, documentation | OpenAI API, Anthropic API | Backend / DevOps | Scalable automation
Browser extensions | Quick inline code insights | Codeium, AIX | Web | Lightweight access
Local/private models | Secure, offline coding | Tabnine, DeepSeek (self-hosted models) | On-premises / Enterprise | High privacy and control
Disclaimer: please note that the findings reflect data collected during the specific research period set out in the report.

With AI firmly established in daily workflows, the next section looks at regional differences in AI tool adoption.

Global snapshot: How AI tool adoption differs across regions

Global AI adoption patterns do not look the same everywhere. Regional ecosystems, regulations, and developer communities shape which tools gain traction. The JetBrains State of Developer Ecosystem Report 2025 highlights clear regional AI trends.

In North America, developers commonly adopt mainstream tools such as GitHub Copilot, JetBrains AI Assistant, and Claude-based assistants. Strong cloud infrastructure and rapid LLM innovation encourage experimentation with multiple tools.

European developers balance adoption with privacy considerations. Data residency and compliance requirements influence tool selection, leading to broader interest in solutions that offer transparency and local processing options.

In the Asia-Pacific region, developers often combine global tools with regional offerings. Mobile-first development cultures and fast-growing ecosystems drive rapid experimentation, particularly with cloud-based assistants.

Mainland China stands out due to its strong domestic AI ecosystem. Developers there frequently rely on local tools and models such as DeepSeek, Qwen, and Hunyuan, which align better with infrastructure and regulatory realities.


Regional highlights and local leaders

Region | Most used tools | Local ecosystem drivers | Notable observations
North America | GitHub Copilot, JetBrains AI Assistant, Claude | Strong cloud and LLM innovation | High multi-tool adoption
Europe | JetBrains AI Assistant, GitHub Copilot | Privacy regulations, data residency | Balanced adoption across tools
Asia-Pacific | GitHub Copilot, Gemini | Mobile/cloud-first development cultures | Rapid experimentation and growth
Mainland China | DeepSeek, Qwen, Hunyuan | Strong domestic AI ecosystem | Preference for locally hosted models
Disclaimer: please note that the findings reflect data collected during the specific research period set out in the report.

While AI tool usage worldwide is undoubtedly gaining momentum, barriers to AI adoption also exist, which we explore in the next section.


Barriers to adopting AI tools

Despite growing interest, not all developers or teams adopt AI tools easily. The JetBrains State of Developer Ecosystem Report 2025 shows that such AI adoption challenges often stem from uncertainty rather than opposition.

Privacy and security concerns remain the most common AI coding tool barriers. Teams worry about exposing sensitive code or intellectual property, especially when tools rely on cloud processing. Without clear guarantees, organizations may restrict or ban usage.

Legal and ownership questions are other reasons why developers avoid AI tools. Developers and managers want clarity about who owns AI-generated code and how licensing applies. Uncertainty leads many teams to limit AI use to non-critical tasks.

Individual barriers matter as well. Some developers lack confidence in using AI tools effectively or struggle to evaluate output quality. Others distrust AI suggestions due to past inaccuracies.

Cost, licensing, and infrastructure constraints can also limit adoption, particularly for larger teams. Per-seat pricing and usage caps further complicate budgeting and rollout decisions.


Obstacles and evaluation criteria

Barrier | Why it matters | Typical impact
Privacy and security concerns | Increases the risk of exposing sensitive code | Usage blocked or restricted
IP and code ownership concerns | Creates legal uncertainty | Hesitation to rely on AI for core code
Lack of knowledge or training | Reduces confidence in using tools | Slower individual adoption
Accuracy and reliability issues | Impacts trust in outputs | More manual review required
Internal policies and processes | Requires compliance and complex approval workflows | Delayed tool rollout
Cost and licensing | Exceeds budget or per-seat limits | Partial or limited deployment
Disclaimer: please note that the findings reflect data collected during the specific research period set out in the report.

In the next section, we move from the barriers of today to developers’ hopes for the future.

Future of AI tools: What developers want next

Developers do not simply want more AI features. They want better ones. The JetBrains State of Developer Ecosystem Report 2025 not only indicates greater adoption but also shows that developers are hopeful about the future of AI. Their expectations focus on reliability, integration depth, and control rather than novelty.

Higher code quality tops developer AI expectations. Developers want fewer hallucinations, cleaner outputs, and suggestions that respect project conventions. Trust grows when AI behaves predictably.

Deeper IDE integration also ranks high. Developers expect future AI tools to understand entire projects, not just individual files. Context retention across sessions and multi-file awareness are increasingly important.

Privacy remains central. Many developers want local or on-device options that allow them to use AI without sharing code externally. Transparent data handling builds confidence.

Pricing clarity and explainability also influence future AI assistant trends. Developers want predictable costs and better insight into why tools suggest certain changes.

But most significantly, as AI tools evolve, developers want support for complex workflows and architecture reasoning. The goalpost is also shifting. Developers now expect future AI tools to move beyond basic autocomplete and act as collaborative partners.


Developer expectations and trends

Expectation | Why developers want it | Example improvements
Higher code quality | Trust and reliability | Fewer hallucinations, cleaner output
Deeper IDE integration | Seamless workflows | Context retention, multi-file awareness
Privacy and control | Secure code handling | On-device or local LLM options
Transparent pricing | Predictable team adoption | Usage-based models, clearer tiers
Explainability and reasoning | Trust in decisions | Clearer chain-of-thought summaries
Context awareness | Handling real projects | Larger context windows, project-wide understanding
Disclaimer: please note that the findings reflect data collected during the specific research period set out in the report.

The following FAQ addresses some of the most common questions developers ask when evaluating and using AI tools.

FAQ

What are the most popular AI tools among developers today?
According to the report’s findings, developers commonly use tools such as GitHub Copilot, JetBrains AI Assistant, Cursor, and Tabnine, often combining them rather than using a single tool.

Are AI tools safe for use with private or proprietary code?
Safety depends on the tool. Developers increasingly prefer tools that provide clear privacy policies or local processing options.

Which AI tools work best inside IDEs?
IDE-native tools tend to perform best for daily coding tasks because they understand project context and workflows.

Do developers prefer local AI models or cloud-based solutions?
Preferences vary. Some developers value cloud flexibility, while others prioritize local models for privacy and compliance.

How do AI tools help with debugging and documentation?
They explain code, identify errors, suggest fixes, and generate comments or documentation drafts.

Are AI tools suitable for enterprise teams with strict security requirements?
Many are, especially when they offer strong privacy guarantees, administrative controls, and predictable pricing.

Can AI tools speed up development without reducing code quality?
Yes, when developers use them intentionally. AI tools speed up repetitive tasks such as code generation, refactoring, testing, and documentation, while reviews, IDE checks, and automated tests help maintain quality.

Conclusion

AI tools have evolved from optional add-ons into essential components of modern software development. Developers now rely on them for coding, refactoring, documentation, testing, and learning, integrating AI assistance throughout daily workflows.

Current adoption trends show that developers value accuracy, deep integration, and privacy above experimental features. The JetBrains State of Developer Ecosystem Report 2025 reflects broad and growing use across regions, tools, and development styles.

As AI tools continue to evolve, they move toward deeper context awareness, stronger reasoning, and more secure deployment options.

For developers, AI no longer represents a future possibility. It has become a practical, everyday partner in building software.

Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Data is the new oil, and your database is the only way to extract it

Ryan sits down with Shireesh Thota, CVP of Azure Databases at Microsoft, to discuss the evolution of databases at Microsoft; Azure’s comprehensive portfolio that includes SQL Server, CosmosDB, and Postgres; and the challenges that come with database architecture, from the importance of cost governance and multi-cloud strategies to the future of databases when it comes to AI.
Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA

Levelling up with Python: Create with data


Learning Python often starts with the same building blocks: variables, functions, and loops. However, once young people have learnt these essential foundations, they may be eager to grow their skills and start using Python to explore data and create something meaningful to them. 

A young learner showing a Python project in the Code Editor.

Our free ‘More Python’ project path helps learners move beyond the basics and use data to create impactful projects of their own.

Python as a tool for exploring the world

Python is the most widely used programming language in the world, not just because it’s accessible, but because it’s powerful. It is used to analyse data, build models, create data visualisations, and explore important questions.

A young learner is excited about his Python project.

For young learners, this means learning Python can become more than a coding exercise. It can be a way to investigate topics they care about, analyse and understand information, and tell powerful stories about real-world issues.

An illustration featuring examples of different types of graphs: a line graph, a bar chart, and a Venn diagram.

Working with data helps learners see how coding connects to the world around them — and builds confidence along the way.

Why learning with data matters

In our day-to-day lives, data is everywhere: in sports results, maps, and scientific research, to name only a few examples. Learning how to work with data helps young people develop skills that go far beyond programming, including:

  • Thinking logically and solving problems
  • Interpreting and questioning information
  • Making decisions based on evidence

Data also underpins many of the AI systems people use today. For example, large language models, used to build tools such as ChatGPT, are trained on vast amounts of data. Therefore, understanding how data is collected, organised, and used is an important part of AI literacy.

In Python, structures like lists and dictionaries make it possible to organise, analyse, and explore data in creative ways. Using these tools to build projects can help abstract computing concepts start to feel more concrete and meaningful.

What learners create in the ‘More Python’ project path

The ‘More Python’ project path supports learners through three stages: Explore, Design, and Invent. Each stage builds skills while giving learners more ownership over what they create.

In the Explore stage, young people learn new concepts and build confidence in using data and core Python structures, such as lists and dictionaries. Projects include:

  • Making an interactive chart of Olympic medals
  • Building a model of the solar system
  • Creating a frequency graph that learners can analyse to crack a code

These projects help learners develop new skills, while exploring how Python can be used to analyse and explain real-world information.

A young learner uses the Code Club Projects site on computer to do Python coding.

As learners progress to the Design stage, they start making creative choices about how their projects look and behave. In this stage, they:

  • Create a project that produces encoded art based on a user’s name
  • Build an interactive world map that helps users learn interesting facts

Here, Python becomes a creative medium. As well as putting their new skills into practice, learners think about audience, interaction, and presentation to make their projects their own.

In the Invent stage, learners bring everything together. Using the skills they have built, they design and create a data visualisation on a topic they are passionate about. This final project gives learners the freedom to choose their data, shape their idea, and tell a story that matters to them.

An illustration of a robot on wheels.

By this point, learners are planning and creating their own projects, growing in confidence and independence.

Take the next step with Python

If the young people you support have already learned the basics of Python, ‘More Python’ offers a clear and creative next step. The projects are designed to be accessible, and young people can work through them at their own pace, whether they are learning independently, at a Code Club, or in the classroom.

By working with data, getting creative, and making their own original projects, learners can build confidence and start to see what they can achieve with Python.

Alongside the ‘More Python’ project path, you can access hundreds of free coding projects on our Code Club Projects site. Find more projects to suit your learners’ interests, and support them to build their digital skills through creativity and making.


Read the whole story
alvinashcraft
1 hour ago
Pennsylvania, USA