
10.0.40 SR4


What's Changed

.NET MAUI 10.0.40 introduces significant improvements across all platforms with a focus on quality, performance, and developer experience. This release includes 143 commits with various improvements, bug fixes, and enhancements.

AI

  • Improve write-tests-agent with best practices by @sheiksyedm in #33860

  • [Sample] Add Microsoft.Maui.Essentials.AI sample app with multi-agent workflow by @mattleibow in #33610

AI Agents

  • Add FileLoggingProvider for MacCatalyst UI test logging by @PureWeen in #33518

  • Improve verify-tests-fail-without-fix Skill by @kubaflo in #33513

  • Add find-reviewable-pr skill from existing PR by @PureWeen via @Copilot in #33349

  • Add learn-from-pr agent and enhance skills framework structure by @PureWeen via @Copilot in #33579

  • Fix PS1 scripts for Windows compatibility by @PureWeen in #33679

  • Improve skills and scripts for better agent workflows by @PureWeen in #33699

  • [XEXPR] Refactor test skills/agents to dispatcher pattern by @PureWeen via @Copilot in #33721

  • [ai] Skill for running device tests by @rmarinho in #33484

  • Add ai-summary-comment skill for automated PR review comments by @kubaflo in #33585

  • Add PR label management to test verification skill by @kubaflo in #33739

  • ai-summary-comment: Simplify PR finalize to two collapsible sections by @kubaflo in #33771

  • Improve issue-triage skill: Add gh CLI checks and fix workflow by @PureWeen in #33750

  • [ai] Add integration test runner skill by @rmarinho in #33654

  • Improve PR Agent Gate verification to prevent result fabrication by @PureWeen in #33806

  • Improve test report formatting and summary extraction by @kubaflo in #33793

  • Improve try-fix comment parsing and summary by @kubaflo in #33794

  • Enhance PR agent: multi-model workflow, blocker handling, shared rules extraction by @PureWeen in #33813

  • Enhance pr-finalize skill with code review phase and safety rules by @PureWeen in #33861

  • Remove Phase 2 (Tests) from PR agent workflow by @kubaflo in #33905

Blazor

BlazorWebView

  • Add doc comment explaining EnableDefaultCssItems in Blazor templates by @akoeplinger in #33845

Button

  • [Testing] Fix flaky UI tests: retryTimeout and SwipeView button fix by @PureWeen in #33749

Checkbox

CollectionView

  • [Android] Fixed EmptyView doesn’t display when CollectionView is placed inside a VerticalStackLayout by @NanthiniMahalingam in #33134

🔧 Fixes

Core Lifecycle

DateTimePicker

Dialogalert

Docs

Essentials

Flyout

Fonts

Gestures

Image

Label

Mediapicker

Navigation

Packaging

Picker

SafeArea

Shapes

Shell

Templates

Theme

Theming

  • Fix SourceGen missing diagnostic for keyless ResourceDictionary items by @rmarinho in #33708

  • [XSG] Fix Style Setters referencing source-generated bindable properties by @simonrozsival in #33562

Titlebar

WebView

  • Fix WebView JavaScript string escaping for backslashes and quotes by @StephaneDelcroix in #33726

  • Skip HybridWebView interception test on iOS/MacCatalyst by @rmarinho via @Copilot in #33981

Xaml

🔧 Infrastructure (16)

🧪 Testing (14)

📦 Other (26)

Full Changelog: https://github.com/dotnet/maui/compare/10.0.31...10.0.40

dotnet-1.0.0-preview.260212.1


Python: [BREAKING] PR2 — Wire context provider pipeline, remove old t…


Daily Reading List – February 12, 2026 (#720)


I had four spontaneous meetings today, all on different topics. It’s important to me to have flex in my calendar so I can tackle things that pop up on a given day. But it does contribute to a more chaotic schedule. Do you keep slack in the day, or schedule people out on days when you’re officially “open”?

[blog] Beyond one-on-one: Authoring, simulating, and testing dynamic human-AI group conversations. This is bonkers and super cool. Google did research into group conversations and open sourced a framework and tool to create and run these workflows.

[blog] Ship types, not docs. Good message. This is one reason our Google Cloud reference API docs are auto-generated when upstream protobuf definitions change. Need to keep this all in sync!

[blog] An AI Agent Published a Hit Piece on Me. Yeah, this is a pretty crazy story. An agent gets a PR refused, and it writes a takedown post about the maintainer!

[blog] Five key recommendations for platform teams in 2026. Except for #5, these have been oft-repeated recommendations for years now.

[blog] Don’t Make Them Wait: Improving AI UX with Streaming Thoughts. We’re all impatient, even when waiting for a magic LLM to deliver magic answers. This is a good post about maintaining engagement by streaming thoughts back to the user.

[blog] 7 Technical Takeaways from Using Gemini to Generate Code Samples at Scale. A year ago, we started on an effort to responsibly use AI to help build and maintain our code sample portfolio. Here’s a check-in.

[blog] Agents Can Either Be Useful or Secure. It’s time to rethink authorization approaches and how you secure your systems.

[article] Your Strategy Needs a Visual Metaphor. Break up your dense, complex, and boring strategies with a compelling visual. AI tools are genuinely helpful here.

[blog] I Built an Agent Skill for Google’s ADK — Here’s Why Your Coding Agent Needs One Too. I’d like to see frameworks come with Skills or whatever can help agentic tools use those frameworks more effectively.

[article] The death of reactive IT: How predictive engineering will redefine cloud performance in 10 years. Our platforms will absolutely become more autonomous and proactive in handling operational tasks.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




🏥 RepoDoctor - AI-Powered Repository Health Analysis with GitHub Copilot CLI


This is a submission for the GitHub Copilot CLI Challenge

What I Built

RepoDoctor - Your Repository's AI Doctor 🩺

RepoDoctor is a Copilot-first CLI tool that revolutionizes how developers analyze and maintain their codebases. Instead of relying on rigid, hardcoded rules like traditional static analysis tools, RepoDoctor acts as an intelligent orchestrator that delegates all analysis logic to GitHub Copilot CLI.

Think of it as having an AI-powered code doctor that:

  • 🍔 Diagnoses bloat - Identifies large files, build artifacts, and missing hygiene files
  • 🌎 Creates onboarding guides - Generates comprehensive TOUR.md files for new contributors
  • 🐳 Audits Dockerfiles - Provides security and optimization recommendations
  • 💀 Detects dead code - Finds unused code with confidence levels
  • 🔬 Performs health scans - Multi-module analysis with overall health scoring
  • 📋 Generates reports - Beautiful markdown reports from scan results

What makes RepoDoctor special?

Traditional tools like ESLint, Pylint, or SonarQube use static rules that can't understand context. They tell you what's wrong but not why or how to fix it in your specific situation.

RepoDoctor is different because:

  • Contextual AI Analysis - Understands your tech stack, project structure, and patterns
  • Actionable Recommendations - Not just "this is bad," but "here's how to improve it"
  • Zero Configuration - No complex rule files or configuration needed
  • Extensible Prompts - Easy to add new analysis types with prompt templates
  • Human-Readable Output - Generates documentation, not just error lists

Why it matters to me:

As a developer, I've spent countless hours:

  • Onboarding new team members who struggle to understand large codebases
  • Debugging performance issues caused by bloated repositories
  • Reviewing PRs with potential security issues in Docker configurations
  • Hunting for dead code that clutters the codebase

RepoDoctor solves these pain points by leveraging AI to provide intelligent, context-aware analysis that actually helps developers improve their code quality.

Project Links

Demo

Recordings: repodoc, repodoc-docker, repodoc-diet, repodoc-report, repodoc-deadcode

Quick Installation & Usage

# Install from PyPI
pip install repodoc

# Install as a tool with uv
uv tool install repodoc

# Use repodoc without installation
uv tool run repodoc

# Analyze repository bloat
repodoc diet

# Generate onboarding guide
repodoc tour

# Run full health scan
repodoc scan

# Generate beautiful report
repodoc report

My Experience with GitHub Copilot CLI

Building RepoDoctor was a transformative experience that fundamentally changed how I approach software development. GitHub Copilot CLI wasn't just a tool I used—it became the core architecture of my entire application.

The Copilot-First Architecture

Instead of building traditional static analysis with hardcoded rules, I had a radical idea: What if the AI itself is the analysis engine?

This led to RepoDoctor's unique architecture:

  1. Prompt Templates - Each analysis type (diet, tour, docker, etc.) has a carefully crafted prompt
  2. Workflow Orchestration - RepoDoctor manages file discovery, data collection, and output formatting
  3. AI Delegation - All actual analysis logic is delegated to GitHub Copilot CLI
  4. Schema Validation - Pydantic schemas ensure the AI returns structured, reliable data
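
Putting those four pieces together, the core loop is small. Below is a minimal sketch of how that delegation might look; the schema fields, the function name, and the assumption that the copilot binary prints only JSON are illustrative stand-ins, not the actual RepoDoctor source.

import subprocess

from pydantic import BaseModel, ValidationError

# Illustrative schema; the real analysis schemas are richer.
class DietFinding(BaseModel):
    path: str
    reason: str
    suggestion: str

class DietReport(BaseModel):
    findings: list[DietFinding]
    summary: str

def run_diet_analysis(prompt: str) -> DietReport:
    """Delegate the analysis to GitHub Copilot CLI and validate its reply."""
    # Invoke the CLI the same way as the interactive examples later in this
    # post: the whole prompt is passed as a single argument.
    result = subprocess.run(
        ["copilot", prompt],
        capture_output=True,
        text=True,
        check=True,
    )
    # The prompt asks for JSON matching the schema; Pydantic rejects anything
    # malformed, so downstream code only ever sees structured data.
    try:
        return DietReport.model_validate_json(result.stdout)
    except ValidationError:
        # In a real run you would retry here with a corrective follow-up prompt.
        raise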

How I Used GitHub Copilot CLI

1. Prompt Engineering as Code

I created a sophisticated prompt template system that provides Copilot CLI with:

  • Context - File listings, directory structure, key metrics
  • Instructions - Clear objectives and output format requirements
  • Constraints - What to focus on, what to ignore
  • Examples - Sample outputs to guide the AI
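
Here is a stripped-down example of what one of these templates might look like. The wording and placeholder names are illustrative rather than the shipped prompt, but the structure mirrors the four ingredients above.

# Illustrative template; the real prompts are longer and per-command.
DIET_PROMPT_TEMPLATE = """\
You are auditing a repository for bloat.

Context:
{file_listing}
Approximate repository size: {repo_size_mb} MB

Instructions:
Identify large files, committed build artifacts, and missing hygiene files
(for example .gitignore or .dockerignore). Respond with JSON only, matching:
{{"findings": [{{"path": "...", "reason": "...", "suggestion": "..."}}], "summary": "..."}}

Constraints:
Ignore vendored dependencies and anything under .git/.

Example finding:
{{"path": "dist/app.bin", "reason": "build artifact committed to the repo", "suggestion": "add dist/ to .gitignore and remove the file"}}
"""

Filled in with the repository's real file listing, this is the kind of string that would be handed to the run_diet_analysis sketch above.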

2. Iterative Development with Copilot CLI

During development, Copilot CLI helped me:

Code Generation:

  • Generated boilerplate for CLI commands with Typer
  • Created Pydantic schemas for each analysis type
  • Built async orchestration code for subprocess management

Debugging:

# When tests failed, I asked Copilot:
copilot "Why is this pytest fixture not mocking shutil.which correctly?"
copilot "How do I handle UTF-8 encoding on Windows for subprocess output?"

Architecture Decisions:

copilot "Should I use sync or async for subprocess calls to GitHub Copilot CLI?"
copilot "What's the best way to cache analysis results for the report command?"

3. AI-Powered Documentation Generation

The tour command is my favorite feature—it uses Copilot CLI to generate comprehensive onboarding guides by analyzing:

  • Project structure
  • Code patterns
  • Tech stack
  • Dependencies
  • Common workflows

This would have taken hours to write manually, but Copilot CLI generates it in seconds with context-aware understanding of the codebase.

Leveraging Agent Skills

One of my secret weapons during development was my Virtual Company project—a collection of 27 specialized agent skills that enhance AI agents with domain-specific expertise.

These skills act as expert personas that guide GitHub Copilot CLI through complex workflows.

These skills transformed GitHub Copilot CLI from a general assistant into a team of specialized experts, each bringing deep domain knowledge to different aspects of the project. It's like having a senior developer, tech writer, QA engineer, and DevOps specialist all working together!

Virtual Company is open source: https://github.com/k1lgor/virtual-company

Impact on My Development Experience

Before GitHub Copilot CLI:

  • ❌ Spent hours writing static analysis rules
  • ❌ Struggled with complex regex patterns
  • ❌ Wrote boilerplate code manually
  • ❌ Context-switched between docs and coding

With GitHub Copilot CLI + Virtual Company Skills:

  • 10x faster development - Instant code generation and debugging
  • Expert-level guidance - Each skill provides specialized domain knowledge
  • Better architecture - Copilot with tech-lead skill suggested async patterns I wouldn't have considered
  • Fewer bugs - AI-reviewed code with bug-hunter skill before I even ran it
  • More creativity - Spent time on features, not implementation details
  • Continuous learning - Copilot taught me new Python patterns and best practices
  • Comprehensive testing - Test-genius skill helped achieve 48 passing tests with proper mocking

The Meta Experience

The most mind-bending part? I built a tool powered by GitHub Copilot CLI, while using GitHub Copilot CLI to build it. 🤯

It was like:

  • Using Copilot CLI to debug Copilot CLI integration
  • Asking Copilot to generate prompts for Copilot
  • Having Copilot help me test code that invokes Copilot

This recursive AI-assisted development felt like a glimpse into the future of software engineering.

Key Takeaways

  1. AI-First Architecture is Real - RepoDoctor proves you can build production tools with AI as the core logic engine
  2. Prompt Engineering Matters - The quality of your prompts directly impacts output quality
  3. Specialized Skills Amplify AI - Using domain-specific agent skills (Virtual Company) accelerates development exponentially
  4. Copilot CLI for Everything - From code generation to debugging to documentation
  5. Ship Faster, Iterate Smarter - Copilot CLI enabled rapid prototyping and validation
  6. The Future is AI-Native - Tools will increasingly delegate intelligence to AI rather than hardcode it

Try RepoDoctor Today!

uv tool install repodoc
cd your-project
repodoc scan

GitHub: https://github.com/k1lgor/RepoDoctor

PyPI: https://pypi.org/project/repodoc/

Built with ❤️ and 🤖 by @k1lgor using GitHub Copilot CLI


Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers.

1 Share

Open collaboration runs on trust. For a long time, that trust was protected by a natural, if imperfect, filter: friction.

If you were on Usenet in 1993, you’ll remember that every September a flood of new university students would arrive online, unfamiliar with the norms, and the community would patiently onboard them. Then mainstream dial-up ISPs became popular and a continuous influx of new users came online. It became the September that never ended.

Today, open source is experiencing its own Eternal September. This time, it’s not just new users. It’s the sheer volume of contributions.

When the cost to contribute drops

In the era of mailing lists, contributing to open source required real effort. You had to subscribe, lurk, understand the culture, format a patch correctly, and explain why it mattered. The effort didn’t guarantee quality, but it filtered for engagement. Most contributions came from someone who had genuinely engaged with the project.

It also excluded people. The barrier to entry was high. Many projects worked hard to lower it in order to make open source more welcoming.

A major shift came with the pull request. Hosting projects on GitHub, using pull requests, and labeling “Good First Issues” reduced the friction needed to contribute. Communities grew and contributions became more accessible.

That was a good thing.

But friction is a balancing act. Too much keeps people and their ideas out; too little can strain the trust open source depends on.

Today, a pull request can be generated in seconds. Generative AI makes it easy for people to produce code, issues, or security reports at scale. The cost to create has dropped but the cost to review has not.

It’s worth saying: most contributors are acting in good faith. Many want to help projects they care about. Others are motivated by learning, visibility, or the career benefits of contributing to widely used open source. Those incentives aren’t new and they aren’t wrong.

The challenge is what happens when low-quality contributions arrive at scale. When volume accelerates faster than review capacity, even well-intentioned submissions can overwhelm maintainers. And when that happens, trust, the foundation of open collaboration, starts to strain.

The new scale of noise

It is tempting to frame “low-quality” or “AI slop” contributions as a uniquely recent phenomenon. It isn’t. Maintainers have always dealt with noisy inbound.

  • The Linux kernel operates under a “web of trust” philosophy; it formalized its SubmittingPatches guide and introduced the Developer Certificate of Origin (DCO) in 2004 for a reason.
  • Mozilla and GNOME built formal triage systems around the reality that most incoming bug reports needed filtering before maintainers invested deeper time.
  • Automated scanners: Long before GenAI, maintainers dealt with waves of automated security and code quality reports from commercial and open source scanning tools.

The question from maintainers has often been the same: “Are you really trying to help me, or just help yourself?”

Just because a tool—whether a static analyzer or an LLM—makes it easy to generate a report or a fix doesn’t mean that contribution is valuable to the project. The ease of creation often adds a burden for the maintainer because there is an imbalance of benefit: the contributor may get the credit (or the CVE, or the visibility), while the maintainer gets the maintenance burden.

Maintainers are feeling that directly. For example:

  • curl ended its bug bounty program after AI-generated security reports exploded, each taking hours to validate.
  • Projects like Ghostty are moving to invitation-only contribution models, requiring discussion before accepting code contributions.
  • Multiple projects are adopting explicit rules about AI-generated contributions.

These are rational responses to an imbalance.

What we’re doing at GitHub

At GitHub, we aren’t just watching this happen. Maintainer sustainability is foundational to open source, and foundational to us. As the home of open source, we have a responsibility to help you manage what comes through the door.

We are approaching this from multiple angles: shipping immediate relief now, while building toward longer-term, systemic improvements. Some of this is about tooling. Some is about creating clearer signals so maintainers can decide where to spend their limited time.

Features we’ve already shipped

  • Pinned comments on issues: You can now pin a comment to the top of an issue from the comment menu.
  • Banners to reduce comment noise: Experience fewer unnecessary notifications with a banner that encourages people to react or subscribe instead of leaving noise like “+1” or “same here.”
  • Pull request performance improvements: Pull request diffs have been optimized for greater responsiveness, and large pull requests in the new “Files changed” experience respond up to 67% faster.
  • Faster issue navigation: Easier bug triage thanks to significantly improved speeds when browsing and navigating issues as a maintainer.
  • Temporary interaction limits: You can temporarily enforce a period of limited activity for certain users on a public repository.

These improvements focus on reducing review overhead.

Features we’ll be shipping soon

  • Repo-level pull request controls: Gives maintainers the option to limit pull request creation to collaborators or disable pull requests entirely. While the introduction of the pull request was fundamental to the growth of open source, maintainers should have the tools they need to manage their projects.
  • Pull request deletion from the UI: Remove spam or abusive pull requests so repositories can stay more manageable.

Exploring next steps

We know that walls don’t build communities. As we explore next steps, our focus is on giving maintainers more control while helping protect what makes open source communities work.

Some of the directions we’re exploring in consultation with maintainers include:

  • Criteria-based gating: Requiring a linked issue before a pull request can be opened, or defining rules that contributions must meet before submission.
  • Improved triage tools: Potentially leveraging automated triage to evaluate contributions against a project’s own guidelines (like CONTRIBUTING.md) and surface which pull requests should get your attention first.

These tools are meant to support decision-making, not replace it. Maintainers should always remain in control.

We are also aware of tradeoffs. Restrictions can disproportionately affect first-time contributors acting in good faith. That’s why these controls are optional and configurable.

The community is building ladders

One of the things I love most about open source is that when the community hits a wall, people build ladders. We’re seeing a lot of that right now.

Maintainers across the ecosystem are experimenting with different approaches. Some projects have moved to invitation-only workflows. Others are building custom GitHub Actions for contributor triage and reputation scoring.

Mitchell Hashimoto’s Vouch project is an interesting example. It implements an explicit trust management system where contributors must be vouched for by trusted maintainers before they can participate. It’s experimental and some aspects will be debated, but it fits a longer lineage, from Advogato’s trust metric to Drupal’s credit system to the Linux kernel’s Signed-off-by chain.

At the same time, many communities are investing heavily in education and onboarding to widen who can contribute while setting clearer expectations. The Python community, for example, emphasizes contributor guides, mentorship, and clearly labeled entry points. Kubernetes pairs strong governance with extensive documentation and contributor education, helping new contributors understand not just how to contribute, but what a useful contribution looks like.

These approaches aren’t mutually exclusive. Education helps good-faith contributors succeed. Guardrails help maintainers manage scale.

There is no single correct solution. That’s why we are excited to see maintainers building tools that match their project’s specific values. The tools communities build around the platform often become the proving ground for what might eventually become features. So we’re paying close attention.

Building community, not just walls

We also need to talk about incentives. If we only build blocks and bans, we create a fortress, not a bazaar.

Right now, the concept of “contribution” on GitHub still leans heavily toward code authorship. In WordPress, they use manually written “props”: credit given not just for code, but for writing, reproduction steps, user testing, and community support. This recognizes the many forms of contribution that move a project forward.

We want to explore how GitHub can better surface and celebrate those contributions. Someone who has consistently triaged issues or merged documentation PRs has proven they understand your project’s voice. These are trust signals we should be surfacing to help you make decisions faster.

Tell us what you need

We’ve opened a community discussion to gather feedback on the directions we’re exploring: Exploring Solutions to Tackle Low-Quality Contributions on GitHub.

We want to hear from you. Share what is working for your projects, where the gaps are, and what would meaningfully improve your experience maintaining open source.

Open source’s Eternal September is a sign of something worth celebrating: more people want to participate than ever before. The volume of contributions is only going to grow — and that’s a good thing. But just as the early internet evolved its norms and tools to sustain community at scale, open source needs to do the same. Not by raising the drawbridge, but by giving maintainers better signals, better tools, and better ways to channel all that energy into work that moves their projects forward.

Let’s build that together.



42% of Code Is Now AI-Assisted!


In 2025, Sonar surveyed over 1,100 developers worldwide to see how software engineering is evolving. The findings show AI is now central to development – but it hasn’t made the job easier, only reshaped it.

Developers now write code faster than ever. At the same time, they spend more time questioning, reviewing, and validating what gets shipped. Productivity has increased. Confidence has not kept pace. This tension defines the current “State of Code” research.

AI is no longer experimental – it’s operational!

  • 72% of developers who use AI coding tools rely on them daily
  • Developers estimate that 42% of the code they commit is AI-assisted
  • They expect that number to rise to 65% by 2027

AI-assisted coding has shifted from novelty to routine. Among developers who have tried AI tools, most now use them daily. Many estimate that more than 40% of the code they commit includes AI assistance. They expect that percentage to grow significantly in the next two years.

Developers use AI across the full spectrum of work: prototypes, internal tools, customer-facing products, and even business-critical systems. AI no longer supports side experiments. It participates directly in production workflows.

This level of integration signals a structural shift in software engineering. Teams no longer ask whether to use AI. They focus on how to use it responsibly.

Do developers trust AI?

  • 58% use it in business-critical services
  • 88% use AI for prototypes and proof-of-concept work
  • 83% use it for internal production systems
  • 73% integrate it into customer-facing applications

Developers consistently report that AI makes them faster. Most say AI helps them solve problems more efficiently and reduces time spent on repetitive tasks. Many even report increased job satisfaction because they can offload boilerplate and mechanical work.

However, speed does not equal certainty. Nearly all developers (96%) express doubts about the reliability of AI-generated code. They acknowledge that AI often produces output that looks correct at first glance but contains subtle errors or hidden flaws. This creates a trust deficit.

Despite this skepticism, not all developers consistently verify AI output before committing it. The pressure to ship features quickly often outweighs the discipline of thorough review. As a result, teams face a new bottleneck: verification.

Reviewing AI-generated code frequently demands more effort than reviewing human-written code. Developers must reconstruct intent, validate assumptions, and check edge cases without knowing how the model arrived at its solution. AI compresses the time spent writing code but expands the time required to evaluate it.

In this environment, confidence – not velocity – becomes the true measure of engineering maturity.

AI doesn’t remove toil – it changes it!

75% believe AI reduces toil. However, time allocation data reveals a more complex picture:

  • Developers still spend roughly 23–25% of their work week on low-value or repetitive tasks
  • This percentage remains consistent regardless of AI usage frequency

One of the promises of AI tools was to reduce developer toil: the repetitive, frustrating tasks such as writing documentation, generating tests, or navigating poorly structured codebases. Many developers believe AI reduces certain kinds of toil. They report improvements in documentation quality, test coverage, and refactoring efficiency.

However, the overall proportion of time developers spend on low-value work has not meaningfully decreased. Instead, AI shifts the nature of toil. Developers now spend less time writing boilerplate and more time validating AI suggestions. They spend less time drafting documentation and more time correcting generated code. Traditional frustrations have not disappeared; they have transformed.

This dynamic challenges simplistic narratives about AI productivity gains. AI does not eliminate friction. It redistributes it.

Teams that recognize this reality can adapt workflows accordingly. Teams that assume AI automatically saves time risk underestimating the verification cost.

Technical debt: reduction and acceleration at once

Positive impact:

  • 93% report at least one improvement related to technical debt
  • 57% see better documentation
  • 53% report improved testing or debugging
  • Nearly half report easier refactoring

Negative impact:

  • 88% report at least one negative consequence
  • 53% say AI generates code that appears correct but is unreliable
  • 40% say it produces unnecessary or duplicative code

AI influences technical debt in both directions. On the positive side, developers use AI to modernize legacy code, generate missing tests, improve documentation, and refactor inefficient structures. These activities reduce long-standing debt and improve maintainability.

On the negative side, AI sometimes generates redundant, overly verbose, or structurally weak code. Developers frequently encounter code that appears correct but introduces subtle reliability problems. When teams integrate such code without rigorous review, they create new debt.

AI therefore acts as both a debt reducer and a debt accelerator. Managing this tension requires deliberate governance. Teams must treat AI contributions as first-class code changes subject to the same standards as human-written code. Automated testing, static analysis, and clear architectural principles become even more critical in this environment.

Technical debt already ranks among developers’ top frustrations. Uncontrolled AI usage can amplify that burden rather than alleviate it.

What is shadow AI?

Most used tools:

  • GitHub Copilot (75%)
  • ChatGPT (74%)
  • Claude (48%)
  • Google Codey/Duet (37%)
  • Cursor (31%)

GitHub Copilot and ChatGPT dominate AI-assisted coding usage, but developers rely on a growing ecosystem of tools. Many teams use multiple AI platforms simultaneously, selecting each for specific strengths. This fragmentation creates flexibility but introduces complexity.

A critical risk signal:

  • 35% of developers use AI tools through personal accounts
  • 52% of ChatGPT users access it outside company-managed environments
  • 57% worry about exposing sensitive data

Developers often access AI tools through personal accounts rather than company-approved environments. This behavior reflects demand for productivity but introduces governance risks. When developers paste proprietary code into unmanaged AI systems, organizations lose visibility and control.

Security concerns rank among the most significant worries associated with AI adoption. Developers themselves recognize the risk of exposing sensitive data.

Organizations must respond pragmatically. Banning AI rarely works. Developers adopt tools that help them work faster. Instead, companies should provide secure, sanctioned AI environments and define clear usage guidelines. Enablement, not prohibition, produces safer outcomes.

The experience divide

Less experienced developers:

  • Use AI for understanding codebases
  • Rely more heavily on AI for implementation
  • Estimate a higher percentage of AI-assisted code

Senior developers:

  • Use AI more selectively
  • Focus on review, optimization, and refactoring
  • Express higher skepticism

AI does not affect all developers equally. Less-experienced developers tend to adopt new AI tools more aggressively. They use AI to understand unfamiliar codebases, generate implementations, and explore new frameworks. For them, AI functions as both assistant and tutor.

More experienced developers integrate AI differently. They rely on it to review code, optimize performance, and assist with maintenance tasks. Their experience enables them to identify flaws more quickly and apply AI selectively.

Junior developers often trust AI more readily. Senior developers approach it more cautiously. This difference does not reflect competence; it reflects perspective.

High-performing teams combine both approaches. Junior engineers introduce experimentation and speed. Senior engineers provide oversight and architectural discipline. Together, they mitigate risk while capturing gains.

What this means for engineering teams

The State of Code 2025 reveals a profession in transition. AI increases output but raises the bar for validation. It reduces certain manual burdens while introducing new forms of cognitive load. It offers opportunities to address legacy debt while risking new complexity.

The central lesson does not concern speed. It concerns confidence. Developers must treat AI as a powerful collaborator that requires supervision. Engineering leaders must invest in testing infrastructure, review practices, and secure tooling environments. Organizations must acknowledge that productivity improvements depend on disciplined integration, not blind adoption.

Developers no longer compete on how quickly they can type code. They compete on how effectively they can evaluate, refine, and ship trustworthy systems.

The tools have changed. The responsibility has not. Teams that focus on clarity, code quality, and secure AI integration will thrive in this new environment. Teams that chase velocity without verification will accumulate invisible risk.

