
M365 Copilot Agent-a-thon Coming to Arlington, VA


Join Us In-Person on February 26, 2026 – Arlington, VA 

Ready to boost your productivity, streamline mission workflows, and get hands-on with the latest in Microsoft 365 AI? We’re excited to invite government employees to an immersive, in-person Agent-a-thon at the Washington, D.C. Microsoft Innovation Hub on Thursday, February 26, 2026, from 9:00 AM to 4:00 PM (EST). 

This highly interactive event is designed specifically to help government teams unlock the potential of agents in Microsoft 365 Copilot (GCC-aligned)—and transform real mission work into repeatable, instruction-driven assistance. 

Register now — space is very limited! 

Why You Should Attend 

Modern government work demands more agility, smarter processes, and tools that help teams stay ahead of mission needs. This session focuses on exactly that—personal productivity agents built within Microsoft 365 that connect your natural language instructions with your organization’s existing data. 

You’ll walk away with clarity on: 

  • When to use M365 Copilot vs. agents 
  • How to design instruction-driven agents 
  • How to turn common workflows into repeatable, AI-powered patterns 
  • What capabilities exist today in GCC, GCC High, and DoD—plus what’s coming next 

Our Microsoft experts and coaches will guide you every step of the way. 

What You’ll Do 

Collaborate with Government Peers & Microsoft Experts 

Work together to explore agent concepts, compare approaches, and hone instruction patterns using government-aligned Microsoft 365 capabilities. 

Hands-On Agent Building 

Draft agent instructions, validate inputs, define success measures, and refine solutions through real-world scenario-focused exercises. 

Interactive Learning & Real Examples 

See how agents fit across Microsoft 365, how they extend personal productivity, and how to map mission workflows into AI-driven guidance. 

Breakout Sessions 

Small-group coaching sessions help you sharpen your scenario, simplify your design, and accelerate your build. 

What You’ll Learn 

  • What to use when: Make clear decisions on when M365 Copilot fits your need—and when an agent is the stronger choice. 
  • Personal Productivity Agent Design: Learn how natural language + your Microsoft 365 data = faster, more consistent work. 
  • Cross-Agency Best Practices: Share designs, trade insights, and take home reusable patterns you can apply immediately in your environment. 

This is your opportunity to level up your AI expertise—at no additional cost. 

Agenda Snapshot 

09:00 – 09:30 Arrival & Networking 
09:30 – 10:00 State of AI in Government 
10:00 – 10:45 Agents in Microsoft 365: What They Are + What to Use When 
10:45 – 12:00 Breakouts: Scenario Selection + Instruction Patterns 
12:00 – 01:00 Lunch & Build Sprint 
01:00 – 02:30 Breakouts: Build, Refine & Test 
02:30 – 02:45 Break 
02:45 – 03:45 Share Out: Your Agent + Reusable Patterns 
03:45 – 04:00 Wrap-Up & Next Steps 

Location 

Microsoft Innovation Hub – Washington, D.C. 
1300 Wilson Blvd, 14th Floor 
Arlington, VA 22209 

Important Notes 

  • Registration is processed on a first-come, first-served basis. 
  • Due to limited capacity (~30 participants), you’ll be automatically waitlisted upon registration. A confirmation email will follow within a few business days. 
  • Government employees should confirm participation complies with applicable organizational policies and laws. 

Secure Your Spot Today 

Don’t miss this opportunity to supercharge your personal productivity and learn how Microsoft 365 Copilot and agent capabilities can accelerate the mission each day. 
Seats are limited—register now to save your place! 


Build a Backrooms Game in Unreal Engine 5


The "liminal space" aesthetic of the Backrooms has become a staple of modern indie horror. If you’ve ever wanted to create your own unsettling, claustrophobic experience, the latest course on the freeCodeCamp.org YouTube channel is your perfect starting point.

In this comprehensive tutorial from DevEdge Studio, you’ll learn how to build a complete Backrooms-style horror game from scratch using Unreal Engine 5. The game is built entirely with Blueprints, meaning you don’t need to write a single line of C++ code.

In this project, you will develop a fully playable "Body Cam" style horror game. You’ll build a complete gameplay loop featuring intelligent enemy AI, interactive puzzles, and custom jump scares, all while using Blueprint Interfaces. At the end, you’ll package a polished, standalone .exe file ready for distribution.

Watch the full course on the freeCodeCamp.org YouTube channel (3-hour watch).




Results from the 2025 Go Developer Survey



Todd Kulesza, on behalf of the Go team
21 January 2026

Hello! In this article we’ll discuss the results of the 2025 Go Developer Survey, conducted during September 2025.

Thank you to the 5,379 Go developers who responded to our survey invitation this year. Your feedback helps both the Go team at Google and the wider Go community understand the current state of the Go ecosystem and prioritize projects for the year ahead.

Our three biggest findings are:

  • Broadly speaking, Go developers asked for help with identifying and applying best practices, making the most of the standard library, and expanding the language and built-in tooling with more modern capabilities.
  • Most Go developers are now using AI-powered development tools when seeking information (e.g., learning how to use a module) or toiling (e.g., writing repetitive blocks of similar code), but their satisfaction with these tools is middling due, in part, to quality concerns.
  • A surprisingly high proportion of respondents said they frequently need to review documentation for core go subcommands, including go build, go run, and go mod, suggesting meaningful room for improvement with the go command’s help system.

Read on for the details about these findings, and much more.


Who did we hear from?

Most survey respondents self-identified as professional developers (87%) who use Go for their primary job (82%). A large majority also uses Go for personal or open-source projects (72%). Most respondents were between 25 – 45 years old (68%) with at least six years of professional development experience (75%). Going deeper, 81% of respondents told us they had more professional development experience than Go-specific experience, strong evidence that Go is usually not the first language developers work with. In fact, one of the themes that repeatedly surfaced during this year’s survey analysis seems to stem from this fact: when the way to do a task in Go is substantially different from a more familiar language, it creates friction for developers to first learn the new (to them) idiomatic Go pattern, and then to consistently recall these differences as they continue to work with multiple languages. We’ll return to this theme later.

The single most common industry respondents work in was “Technology” (46%), but a majority of respondents work outside of the tech industry (54%). We saw representation of all sizes of organizations, with a bare majority working somewhere with 2 – 500 employees (51%), 9% working alone, and 30% working at enterprises of over 1,000 employees. As in prior years, a majority of responses come from North America and Europe.

This year we observed a decrease in the proportion of respondents who said they were fairly new to Go, having worked with it for less than one year (13%, vs. 21% in 2024). We suspect this is related to industry-wide declines in entry-level software engineering roles; we commonly hear from people that they learned Go for a specific job, so a downturn in hiring would be expected to reduce the number of developers learning Go in that year. This hypothesis is further supported by our finding that over 80% of respondents learned Go after beginning their professional career.

Other than the above, we found no significant changes in other demographics since our 2024 survey.

[Charts: During the past year, in which types of situations have you used Go? · How long have you been using Go? · Which of the following best describe how or why you work with Go? · How old are you? · How many years of professional coding experience do you have? · How many people work at your organization? · Which of the following best describes the industry in which your organization operates? · Where do you live?]

How do people feel about Go?

The vast majority of respondents (91%) said they felt satisfied while working with Go. Almost ⅔ were “very satisfied”, the highest rating. Both of these metrics are incredibly positive, and have been stable since we began asking this question in 2019. The stability over time is really what we monitor from this metric — we view it as a lagging indicator, meaning by the time this satisfaction metric shows a meaningful change, we would expect to already have seen earlier signals from issue reports, mailing lists, or other community feedback.

[Chart: Overall, how satisfied or dissatisfied have you been using Go during the past year?]

Why were respondents so positive about Go? Looking at open-text responses to several different survey questions suggests that it’s the gestalt, rather than any one thing. These folks are telling us that they find tremendous value in Go as a holistic platform. That doesn’t mean it supports all programming domains equally well (it surely does not), but that developers value the domains it does support nicely via the standard library and built-in tooling.

Below are some representative quotations from respondents. To provide context for each quote, we also identify the satisfaction level, years of experience with Go, and industry of the respondent.

“Go is by far my favorite language; other languages feel far too complex and unhelpful. The fact that Go is comparatively small, simple, with fewer bells and whistles plays a massive role in making it such a good long-lasting foundation for building programs with it. I love that it scales well to being used by a single programmer and in large teams.” — Very satisfied / 10+ years / Technology company

“The entire reason I use Go is the great tooling and standard library. I’m very thankful to the team for focusing on great HTTP, crypto, math, sync, and other tools that make developing service-oriented applications easy and reliable.” — Very satisfied / 10+ years / Energy company

“[The] Go ecosystem is the reason why I really like the programming language. There are a lot of npm issues lately but not with Go.” — Very satisfied / 3 – 10 years / Financial services

This year we also asked about the other languages that people use. Survey respondents said that besides Go, they enjoy working with Python, Rust, and TypeScript, among a long tail of other languages. Some shared characteristics of these languages align with common points of friction reported by Go developers, including areas like error handling, enums, and object-oriented design patterns. For example, when we sum the proportion of respondents who said their next-favorite language included one of the following factors, we found that majorities of respondents enjoy using languages with inheritance, type-safe enums, and exceptions, with only a bare majority of these languages including a static type system by default.

Concept or feature    Proportion of respondents
Inheritance           71%
Type-safe enums       65%
Exceptions            60%
Static typing         51%

We think this is important because it reveals the larger environment in which developers operate — it suggests that people need to use different design patterns for fairly mundane tasks, depending on the language of the codebase they’re currently working on. This leads to additional cognitive load and confusion, not only among developers new to Go (who must learn idiomatic Go design patterns), but also among the many developers who work in multiple codebases or projects. One way to alleviate this additional load is context-specific guidance, such as a tutorial on “Error handling in Go for Java developers”. There may even be opportunities to build some of this guidance into code analyzers, making it easier to surface directly in an IDE.
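
As a concrete flavor of what such guidance might cover, here is a minimal sketch (our illustration, not from the survey; the app.conf path is made up) of the idiomatic Go error-handling pattern that a developer coming from an exception-based language like Java would need to internalize:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// In an exception-based language you'd wrap this call in try/catch.
// In Go, errors are ordinary values returned alongside results and
// checked explicitly at each call site.
func readConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Wrap with context instead of re-throwing; %w keeps the
		// original error available to errors.Is / errors.As.
		return nil, fmt.Errorf("reading config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := readConfig("app.conf"); err != nil {
		// Inspect the error chain rather than catching by type.
		if errors.Is(err, os.ErrNotExist) {
			fmt.Println("config missing, using defaults")
			return
		}
		fmt.Println("fatal:", err)
		os.Exit(1)
	}
	fmt.Println("config loaded")
}
```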

[Chart: Not including Go, what's your favorite programming language?]

This year we asked the Go community to share their sentiment towards the Go project itself. These results were quite different from the 91% satisfaction rate we discussed above, and they point to areas where the Go Team plans to invest its energy during 2026. In particular, we want to encourage more contributors to get involved, and ensure the Go Team accurately understands the challenges Go developers currently face. We hope this focus, in turn, will help to increase developer trust in both the Go project and the Go Team leadership. As one respondent explained the problem:

“Now that the founding first generation of Go Team members [are] not involved much anymore in the decision making, I am a bit worried about the future of Go in terms of quality of maintenance, and its balanced decisions so far wrt to changes in the language and std lib. More presence in form of talks [by] the new core team members about the current state and future plans might be helpful to strengthen trust.” — Very satisfied / 10+ years / Technology company

[Chart: To what extent do you agree or disagree with the following statements?]

What are people building with Go?

We revised this list of “what types of things do you build with Go?” from 2024 with the intent of more usefully teasing apart what people are building with Go, and of avoiding confusion around evolving terms like “agents”. Respondents’ top use cases remain CLIs and API services, with no meaningful change in either since 2024. In fact, a majority of respondents (55%) said they build both CLIs and API services with Go. Over ⅓ of respondents specifically build cloud infrastructure tooling (a new category), and 11% work with ML models, tools, or agents (an expanded category). Unfortunately, embedded use cases were left off the revised list, but we’ll fix this for next year’s survey.

[Chart: What types of things do you build with Go?]

Most respondents said they are not currently building AI-powered features into the Go software they work on (78%), with ⅔ reporting that their software does not use AI functionality at all (66%). This appears to be a decrease in production-related AI usage year-over-year; in 2024, 59% of respondents were not involved in AI feature work, while 39% indicated some level of involvement. That marks a shift of 14 points away from building AI-powered systems among survey respondents, and may reflect some natural pullback from the early hype around AI-powered applications: it’s plausible that lots of folks tried to see what they could do with this technology during its initial rollout, with some proportion deciding against further exploration (at least at this time).

[Chart: Think about the Go software that you've worked on the most during the past month. Does it use AI for any of its functionality?]

Among respondents who are building AI- or LLM-powered functionality, the most common use case was to create summaries of existing content (45%). Overall, however, there was little difference between most uses, with between 28% and 33% of respondents adding AI functionality to support classification, generation, solution identification, chatbots, and software development.

[Chart: The Go software that I build uses AI or LLMs to:]

What are the biggest challenges facing Go developers?

One of the most helpful types of feedback we receive from developers is details about the challenges people run into while working with Go. The Go Team considers this information holistically and over long time horizons, because there is often tension between improving Go’s rougher edges and keeping the language and tooling consistent for developers. Beyond technical factors, every change also incurs some cost in terms of developer attention and cognitive disruption. Minimizing disruption may sound a bit dull or boring, but we view this as an important strength of Go. As Russ Cox wrote in 2023, “Boring is good… Boring means being able to focus on your work, not on what’s different about Go.”

In that spirit, this year’s top challenges are not radically different from last year’s. The top three frustrations respondents reported were “Ensuring our Go code follows best practices / Go idioms” (33% of respondents), “A feature I value from another language isn’t part of Go” (28%), and “Finding trustworthy Go modules and packages” (26%). We examined open-text responses to better understand what people meant. Let’s take a minute to dig into each.

Respondents who were most frustrated by writing idiomatic Go were often looking for more official guidance, as well as tooling support to help enforce this guidance in their codebase. As in prior surveys, questions about how to structure Go projects were also a common theme. For example:

“The simplicity of go helps to read and understand code from other developers, but there are still some aspects that can differ quite a lot between programmers. Especially if developers come from other languages, e.g. Java.” — Very satisfied / 3 – 10 years / Healthcare and life sciences

“More opinionated way to write go code. Like how to structure a Go project for services/cli tool.” — Very satisfied / < 3 years / Technology

“It’s hard to figure out what are good idioms. Especially since the core team doesn’t keep Effective Go up-to-date.” — Very satisfied / 3 – 10 years / Technology

The second major category of frustrations was language features that developers enjoyed working with in other ecosystems. These open-text comments largely focused on error handling and reporting patterns, enums and sum types, nil pointer safety, and general expressivity/verbosity:

“Still not sure what is the best way to do error handling.” — Very satisfied / 3 – 10 years / Retail and consumer goods

“Rust’s enums are great, and lead to writing great type safe code.” — Somewhat satisfied / 3 – 10 years / Healthcare and life sciences

“There is nothing (in the compiler) that stops me from using a maybe nil pointer, or using a value without checking the err first. That should be [baked into] the type system.” — Somewhat satisfied / < 3 years / Technology

“I like [Go] but I didn’t expect it to have nil pointer exceptions :)” — Somewhat satisfied / 3 – 10 years / Financial services

“I often find it hard to build abstractions and to provide clear intention to the future readers of my code.” — Somewhat dissatisfied / < 3 years / Technology
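
To make the enum complaint concrete, here is a minimal sketch (our illustration, not from the survey) of the conventional Go workaround: a named type plus iota constants, which gives you names and a distinct type but not the exhaustiveness or invalid-value checks that Rust-style enums provide:

```go
package main

import "fmt"

// Weekday is a "type-safe enum" in the conventional Go style.
// Nothing stops a caller from passing Weekday(42), and switch
// statements over it are not checked for exhaustiveness.
type Weekday int

const (
	Sunday Weekday = iota
	Monday
	Tuesday
)

func (d Weekday) String() string {
	switch d {
	case Sunday:
		return "Sunday"
	case Monday:
		return "Monday"
	case Tuesday:
		return "Tuesday"
	default:
		return fmt.Sprintf("Weekday(%d)", int(d))
	}
}

func main() {
	fmt.Println(Monday)      // Monday
	fmt.Println(Weekday(42)) // Weekday(42) -- no compile-time guard
}
```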

The third major frustration was finding trustworthy Go modules. Respondents often described two aspects to this problem. One is that they considered many third-party modules to be of marginal quality, making it hard for really good modules to stand out. The second is identifying which modules are commonly used and under which types of conditions (including recent trends over time). These are both problems that could be addressed by showing what we’ll vaguely call “quality signals” on pkg.go.dev. Respondents provided helpful explanations of the signals they use to identify trustworthy modules, including project activity, code quality, recent adoption trends, or the specific organizations that support or rely upon the module.

“Being able to filter by criteria like stable version, number of users and last update age at pkg.go.dev could make things a bit easier.” — Very satisfied / < 3 years / Technology

“Many pacakges are just clones/forks or one-off pojects with no history/maintenance. [sic]” — Very satisfied / 10+ years / Financial services

“Maybe flagging trustworthy packages based on experience, maturity and community feedback?” — Very satisfied / 3 – 10 years / Healthcare and life sciences

We agree that these are all areas where the developer experience with Go could be improved. The challenge, as discussed earlier, is doing so in a way that doesn’t lead to breaking changes or increased confusion among Go developers, or otherwise get in the way of people trying to get their work done with Go. Feedback from this survey is a major source of information we use when discussing proposals, but if you’d like to get involved more directly or follow along with other contributors, visit the Go proposals on GitHub; please be sure to follow this process if you’d like to add a new proposal.

[Chart: What are your three most frustrating things about working with Go?]

In addition to these (potentially) ecosystem-wide challenges, this year we also asked specifically about working with the go command. We’ve informally heard from developers that this tool’s help system can be confusing to navigate, but we haven’t had a great sense of how frequently people find themselves reviewing this documentation.

Respondents told us that, except for go test, between 15% and 25% of them felt they “often needed to review documentation” when working with these tools. This was surprising, especially for commonly used subcommands like build and run. Common reasons included remembering specific flags, understanding what different options do, and navigating the help system itself. Participants also confirmed that infrequent use was one reason for frustration, but navigating and parsing command help appears to be the underlying cause. In other words, we all expect to need to review documentation sometimes, but we don’t expect to need help navigating the documentation system itself. As one respondent described their journey:

“Accessing the help is painful. go test -help # didn’t work, but tell[s] me to type go help test instead… go help test # oh, actually, the info I’m looking for is in testflag go help testflag # visually parsing through text that looks all the same without much formatting… I just lack time to dig into this rabbit hole.” — Very satisfied / 10+ years / Technology 

[Chart: Do you find yourself frequently reviewing documentation for any of the following Go subcommands?]

What does their development environment look like?

Operating systems and architectures

Generally, respondents told us their development platforms are UNIX-like. Most respondents develop on macOS (60%) or Linux (58%) and deploy to Linux-based systems, including containers (96%). The largest year-over-year change was among “embedded devices / IoT” deployments, which increased from 2% to 8% of respondents; this was the only meaningful change in deployment platforms since 2024.

The vast majority of respondents develop on x86-64 or ARM64 architectures, with a sizable group (25%) still potentially working on 32-bit x86 systems. However, we believe the wording of this question was confusing to respondents; next year we’ll clarify the 32-bit vs. 64-bit distinction for each architecture.

[Charts: Which platforms do you use when writing Go code? · Which systems do you deploy your Go software to? · Which architectures do you deploy your Go software to?]

Code editors

Several new code editors have become available in the past two years, and we expanded our survey question to include the most popular ones. While we saw some evidence of early adoption, most respondents continued to favor VS Code (37%) or GoLand (28%). Of the newer editors, Zed and Cursor were the highest ranked, each becoming the preferred editor of 4% of respondents. To put those numbers in context, we looked back at when VS Code and GoLand were first introduced. VS Code (released in 2015) was favored by 16% of respondents one year after its release. IntelliJ has had a community-led Go plugin longer than we’ve been surveying Go developers (💙), but if we look at when JetBrains began officially supporting Go in IntelliJ (2016), within one year IntelliJ was preferred by 20% of respondents.

Note: This analysis of code editors does not include respondents who were referred to the survey directly from VS Code or GoLand.

[Chart: What is your favorite code editor for Go?]

Cloud environments

The most common deployment environments for Go continue to be Amazon Web Services (AWS) at 46% of respondents, company-owned servers (44%), and Google Cloud Platform (GCP) at 26%. These numbers show minor shifts since 2024, but nothing statistically significant. We found that the “Other” category increased to 11% this year, and this was primarily driven by Hetzner (20% of Other responses); we plan to include Hetzner as a response choice in next year’s survey.

We also asked respondents about their experience working with different cloud providers. The most common responses, however, showed that respondents weren’t really sure (46%) or don’t directly interact with public cloud providers (21%). The biggest driver behind these responses was a theme we’ve often heard before: with containers, it’s possible to abstract many details of the cloud environment away from the developer, so that they don’t meaningfully interact with most provider-specific technologies. This result suggests that even developers whose work is deployed to clouds may have limited experience with the larger suite of tools and technology associated with each cloud provider. For example:

“Kinda abstract to the platform, Go is very easy to put in a container and so pretty easy to deploy anywhere: one of its big strength[s].” — [no satisfaction response] / 3 – 10 years / Technology

“The cloud provider really doesn’t make much difference to me. I write code and deploy it to containers, so whether that’s AWS or GCP I don’t really care.” — Somewhat satisfied / 3 – 10 years / Financial services

We suspect this level of abstraction is dependent on the use case and requirements of the service that’s being deployed — it may not always make sense or be possible to keep it highly abstracted. In the future, we plan to further investigate how Go developers tend to interact with the platforms where their software is ultimately deployed.

[Chart: My team at work deploys Go programs to:]

[Chart: In your experience, which public cloud provider offers the best experience for Go developers?]

Developing with AI

Finally, we can’t discuss development environments in 2025 without also mentioning AI-powered software development tools. Our survey suggests bifurcated adoption — while a majority of respondents (53%) said they use such tools daily, there is also a large group (29%) who do not use these at all, or only used them a few times during the past month. We expected this to negatively correlate with age or development experience, but were unable to find strong evidence supporting this theory except for very new developers: respondents with less than one year of professional development experience (not specific to Go) did report more AI use than every other cohort, but this group only represented 2% of survey respondents.

[Chart: During the past month, how often did you use AI-powered tools when writing Go?]

At this time, agentic use of AI-powered tools appears nascent among Go developers, with only 17% of respondents saying this is their primary way of using such tools, though a larger group (40%) are occasionally trying agentic modes of operation.

[Chart: When working with AI-powered development tools, do you tend to use them as unsupervised agents?]

The most commonly used AI assistants remain ChatGPT, GitHub Copilot, and Claude. Most of these agents show lower usage numbers compared with our 2024 survey (Claude and Cursor are notable exceptions), but due to a methodology change, this is not an apples-to-apples comparison. It is, however, plausible that developers are “shopping around” less than they were when these tools were first released, resulting in more people using a single assistant for most of their work.

[Chart: When writing Go code, which AI assistants or agents have you used in the last month?]

We also asked about overall satisfaction with AI-powered development tools. A majority (55%) reported being satisfied, but this was heavily weighted towards the “Somewhat satisfied” category (42%) vs. the “Very satisfied” group (13%). Recall that Go itself consistently shows a 90%+ satisfaction rate each year; this year, 62% of respondents said they are “Very satisfied” with Go. We add this context to show that while AI-powered tooling is starting to see adoption and finding some successful use cases, developer sentiment towards them remains much softer than towards more established tooling (among Go developers, at least).

What is driving this lower rate of satisfaction? In a word: quality. We asked respondents to tell us something good they’ve accomplished with these tools, as well as something that didn’t work out well. A majority said that creating non-functional code was their primary problem with AI developer tools (53%), with 30% lamenting that even working code was of poor quality. The most frequently cited benefits, conversely, were generating unit tests, writing boilerplate code, enhanced autocompletion, refactoring, and documentation generation. These appear to be cases where code quality is perceived as less critical, tipping the balance in favor of letting AI take the first pass at a task. That said, respondents also told us the AI-generated code in these successful cases still required careful review (and often, corrections), as it can be buggy, insecure, or lack context.

“I’m never satisfied with code quality or consistency, it never follows the practices I want to.” — [no satisfaction response] / 3 – 10 years / Financial services

“All AI tools tend to hallucinate quickly when working with medium-to-large codebases (10k+ lines of code). They can explain code effectively but struggle to generate new, complex features” — Somewhat satisfied / 3 – 10 years / Retail and consumer goods

“Despite numerous efforts to make it write code in an established codebase, it would take too much effort to steer it to follow the practices in the project, and it would add subtle behaviour paths - i.e. if it would miss some method it would try to find its way around it or rely on some side effect. Sometimes those things are hard to recognize during code review. I also found it mentally taxing to review ai generated code and that overhead kills the productivity potential in writing code.” — Very satisfied / 10+ years / Technology

[Chart: Overall, how satisfied or dissatisfied have you felt while working with your AI-powered development tools during the past month?]

When we asked developers what they used these tools for, a pattern emerged that is consistent with these quality concerns. The tasks with the most adoption (green in the chart below) and least resistance (red) deal with bridging knowledge gaps, improving local code, and avoiding toil. The frustrations that developers report with code-generating tools were much less evident when they’re seeking information, like how to use a specific API or configure test coverage, and perhaps as a result, we see higher usage of AI in these areas. Another spot that stood out was local code review and related suggestions — people were less interested in using AI to review other people’s code than in reviewing their own. Surprisingly, “testing code” showed lower AI adoption than other toilsome tasks, though we don’t yet have a strong understanding of why.

Of all the tasks we asked about, “Writing code” was the most bifurcated, with 66% of respondents already using AI for this (or hoping to soon), while ¼ of respondents didn’t want AI involved at all. Open-ended responses suggest developers primarily use this for toilsome, repetitive code, and continue to have concerns about the quality of AI-generated code.

[Chart: How are you using AI-powered tools with Go today?]

Closing

Once again, a tremendous thank-you to everyone who responded to this year’s Go Developer Survey!

We plan to share the raw survey dataset in Q1 2026, so the larger community can also explore the data underlying these findings. This will only include responses from people who opted in to share this data (82% of all respondents), so there may be some differences from the numbers we reference in this post.

Survey methodology

This survey was conducted between September 9 and September 30, 2025. Participants were publicly invited to respond via the Go Blog and invitations on social media channels (including Bluesky, Mastodon, Reddit, and X), as well as randomized in-product invitations to people using VS Code and GoLand to write Go software. We received a total of 7,070 responses. After data cleaning to remove bots and other very low-quality responses, 5,379 were used for the remainder of our analysis. The median survey response time was between 12 and 13 minutes.

Throughout this report we use charts of survey responses to provide supporting evidence for our findings. All of these charts use a similar format. The title is the exact question that survey respondents saw. Unless otherwise noted, questions were multiple choice and participants could only select a single response choice; each chart’s subtitle will tell the reader if the question allowed multiple response choices or was an open-ended text box instead of a multiple choice question. For charts of open-ended text responses, a Go team member read and manually categorized all of the responses. Many open-ended questions elicited a wide variety of responses; to keep the chart sizes reasonable, we condensed them to a maximum of the top 10-12 themes, with additional themes all grouped under “Other”. The percentage labels shown in charts are rounded to the nearest integer (e.g., 1.4% and 0.8% will both be displayed as 1%), but the length of each bar and row ordering are based on the unrounded values.

To help readers understand the weight of evidence underlying each finding, we included error bars showing the 95% confidence interval for responses; narrower bars indicate increased confidence. Sometimes two or more responses have overlapping error bars, which means the relative order of those responses is not statistically meaningful (i.e., the responses are effectively tied). The lower right of each chart shows the number of people whose responses are included in the chart, in the form “n = [number of respondents]”.
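
As an aside for readers less familiar with these intervals: the post doesn’t say which method it uses, but a common approximation for the 95% confidence interval of a proportion $\hat{p}$ estimated from $n$ responses is the normal (Wald) interval:

$$\hat{p} \pm 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}$$

Assuming the full cleaned sample of n = 5,379 answered a question, a response near 50% would carry an error bar of roughly ±1.3 percentage points; questions answered by smaller subgroups have correspondingly wider bars.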



Blocking AI crawlers might be a bad idea


Over at Talk Python To Me, I added a couple of deep AI integrations. You can read about them here at talkpython.fm/blog/posts/announcing-talk-python-ai-integrations/. A couple of folks in the community asked what I thought about how embracing AI consumption of our content would affect us, or how, in light of the Tailwind situation, it might even undermine us.

Wait, what happened to Tailwind?

If you’re not familiar with the Tailwind situation, the TLDR is that their usage has gone up 6x over the last year while their revenue has fallen to one-fifth of what it was a year prior. Basically, AI is using and recommending Tailwind like crazy, which counterintuitively has destroyed their web traffic, and hence reduced the number of people seeing and upgrading to their pro offerings.

Fear of AI ingestion

I know many content creators (writers, podcasters, and so on) are pretty frustrated with AI. I totally get it. We all work very hard to create content that gets ingested by AI. Then people ask questions of the AI, and the AI uses our content to answer them, usually without referencing our work.

This has led a lot of people to think that maybe they should block AI from even reading what they’re doing. Here are some examples:

  1. Reddit: How can we stop AI to read our website information
  2. Not On My Watch! 🚫 Block AI From Training On Your Website

I get the frustration and the ick factor. But I fear that blocking AI is going to end up in a similar situation as if you decided to block Google in 1998. For sure, those suckers will not be able to train on your content. But as users rely more and more on AI for recommendations, you will vanish from awareness in much the same way as if you had vanished from Google search results.

You’re welcome to block AI crawlers, and most of them will respect it. But that won’t make AI go away, nor will it make users stop using AI.

If you’re going to be in AI results…

If you’re going to be in AI results anyway, you should want the very best experience for current and potential users.

First, you do want to be recommended. After all, that’s how I sold this to you, right? The new Google. It’s a bit of the Wild West still, but there are tools that you can use to check how you appear in AI results. Here’s an example from ProductRank.ai for Python Podcasts.

Talk Python to Me is the number one recommended podcast when users ask ChatGPT, Claude, or Perplexity for a Python podcast. And my other podcast, Python Bytes, is number three. Honestly, I’m thrilled at this result. If anyone is talking to these AIs and says, “Hey, I would love a podcast to listen to. Do you have recommendations?” Here you go.

What would the experience be if I got mad at AI and blocked it? I simply would be invisible to all of these users wanting to learn about Python podcasts. Whatever effect AI is going to have on Talk Python in the future, disappearing from its recommendations is not going to make it better.

What will make it better?

How can we improve our situation in the face of, say, decreased traffic or fewer customers? And I’m not even saying that has actually happened for Talk Python; the podcast is still going strong. I’m speaking generally.

Obviously, getting recommended more is key, so see above. Beyond that, it means offering a better experience to your users when they ask questions about topics your content covers, or about your content in particular.

That is why I added the AI integrations to Talk Python. Not so that AI could undermine us even more, but so that our users would get more value from years and years of content we’ve already created. Counting the transcripts, deep dives, and episode pages, we have well over 7 million words of content at talkpython.fm.

If users want to ask AI about that content, I want AI to give them as good and accurate of a response as possible. Here are some super frustrating experiences that you sometimes get from AI.

  1. Outdated data: My training only goes up to the summer of 2025, so here are the most recent results according to that.
  2. Imaginary data: My mistake. You’re absolutely right! President Obama did appear on Talk Python back in 2008! Here’s a link to that interview …

Out-of-date data, and especially hallucinated data, are why so many people are creeped out by AI and want to use it less. The AI integrations that I created significantly reduced this by providing real-time information to the AI, as well as tools to verify that what it thinks exists actually does, and to link back to it.

How it looks with AI integrations

Check out this response from Claude when I asked what the latest 5 episodes of Talk Python To Me are:

This is real-time information. If I publish a new episode, or heck, even change the title of an older one, and ask this question again in a separate chat conversation 10 seconds later, I will get up-to-date information.

There are many tools that the AIs can use that we’ve provided so that our content is more accurate, more up-to-date, and more useful. This can only mean that AI will recommend our content more and link back to it more accurately.

Here is another example which I posed in a totally fresh chat: Can you recommend a Python course on agentic AI and cursor?

Notice, I did not ask Claude to recommend a Talk Python course on Agentic AI. I simply asked it for any Python-based Agentic AI and Cursor course. Now, I have installed the Talk Python MCP server, so that probably influenced Claude, but still this is very powerful.

Blocking AI crawlers might be a bad idea

This is exactly why I think blocking AI crawlers is probably a bad idea. Yes, it’ll make you feel great if you’re pissed at AI: “You’ll show them!” But it likely will not further your cause of sharing ideas, gaining awareness, and much more.

So for all of you who have asked me why I’m willing to make Talk Python more accessible to AI in the immediate shadow of Tailwind and Stack Overflow suffering greatly from AI, this is why.

Thanks for checking out my content, whether you got here through an RSS reader, web search, or maybe even an AI recommendation. ;)

Cheers, Michael.


Now in Public Preview: GitHub Copilot build performance for Windows


Last year, we launched our new GitHub Copilot build performance capabilities in Private Preview. With help from our fantastic C++ community, we gathered insights and addressed key feedback. We’re happy to share that GitHub Copilot build performance for Windows is now in Public Preview. Today, all C++ developers can try out the new capabilities in the latest Visual Studio 2026 Insiders.

“I’ve tried the feature for a few hours and I’m happily impressed. The agent provided accurate suggestions, implemented them, and managed to reduce my build time by about 20%.” – Alessandro Vergani, ARGO Vision

Optimizing Build Times with GitHub Copilot

When you use this new capability in Visual Studio, GitHub Copilot will use an agent to:

  1. Initiate a build and capture a diagnostic trace
  2. Identify bottlenecks in the following areas:
    • expensive headers
    • long function generation times
    • costly template instantiations
  3. Suggest and apply optimizations
  4. Validate changes through rebuilds so your code stays correct
  5. Report measurable improvements and recommend next steps

To see how it works in action, please watch our demo below.

[GIF: GitHub Copilot in action reducing build time for sqlite_flux]

Learn more: Documentation for GitHub Copilot build performance for Windows | Microsoft Learn

Using GitHub Copilot build performance for Windows

There are several ways to start the new build performance capabilities:

1. Select the responder in Copilot Chat by typing “@BuildPerfCpp”

[Screenshot: typing @BuildPerfCpp to start the responder in GitHub Copilot Chat]

2. Select menu entry Build > Run Build Insights > Improve build performance

[Screenshot: context menu entry Build > Run Build Insights > Improve build performance]

3. If you already have a .etl trace file open from Build Insights, click Improve on the top right corner of the report view. The view or tab you click from gives GitHub Copilot important context to focus on the relevant hot spots when the chat session begins.

[Screenshot: entry point from a Build Insights report]

Once you start a chat session from any of the entry points above, GitHub Copilot begins analyzing your build and offering suggestions to reduce build time. It will iterate on these optimizations until your build completes successfully. You’ll always have final approval on whether to apply the changes.

Using The Build Insights Tool

To start trace collection, GitHub Copilot will ask your permission to run the Build Insights tool.

[Screenshot: accepting use of the Build Insights tool]

If it is your first time using Build Insights, you will need to grant a one-time elevation request.

[Screenshot: Build Insights elevated-permissions prompt]

Learn more: Build Insights needs additional permissions | Microsoft Learn

Template Instantiation Collection

To collect template information, you must opt in via Tools > Options > Build Insights > Trace Collection > Collect template instantiation.

Default Report Location

By default, your reports will be saved in %TEMP%/BuildInsights. You can customize the save location at Tools > Options > Build Insights > Trace Collection > Override default location for saved reports.

Share your feedback

We’d love to hear feedback on how we can improve GitHub Copilot build performance for Windows. Please leave feedback by commenting below or report any issues with Help > Send Feedback.



Quadratic Regression with SGD Training Using C#

Dr. James McCaffrey presents a complete end-to-end demonstration of quadratic regression, implemented from scratch, with SGD training, using C#. Compared to standard linear regression, quadratic regression is better able to handle data with a non-linear structure, and data with interactions between predictor variables.
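
The article’s demo is in C#; to keep this digest’s code examples in a single language, here is a minimal sketch of the same idea in Go, assuming one predictor x, the model y = b0 + b1*x + b2*x^2, synthetic data made up for illustration, and plain per-sample SGD on squared error:

```go
package main

import (
	"fmt"
	"math/rand"
)

// predict evaluates the quadratic model yhat = b0 + b1*x + b2*x^2.
func predict(b [3]float64, x float64) float64 {
	return b[0] + b[1]*x + b[2]*x*x
}

func main() {
	// Synthetic data from y = 1.5 + 2x - 3x^2 plus a little noise.
	rng := rand.New(rand.NewSource(0))
	var xs, ys []float64
	for i := 0; i < 200; i++ {
		x := rng.Float64()*2 - 1 // x in [-1, 1]
		xs = append(xs, x)
		ys = append(ys, 1.5+2*x-3*x*x+rng.NormFloat64()*0.05)
	}

	var b [3]float64 // coefficients, start at zero
	lr := 0.05       // learning rate
	for epoch := 0; epoch < 500; epoch++ {
		for i, x := range xs {
			e := predict(b, x) - ys[i]
			// Gradient of 0.5*e^2 with respect to each coefficient.
			b[0] -= lr * e
			b[1] -= lr * e * x
			b[2] -= lr * e * x * x
		}
	}
	fmt.Printf("learned b0=%.2f b1=%.2f b2=%.2f (target 1.5, 2, -3)\n",
		b[0], b[1], b[2])
}
```

Interactions between predictors (e.g., an x1*x2 term) extend the same pattern: add the cross term as another input column with its own coefficient and gradient update.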