Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Ticketmaster is an illegal monopoly, jury finds

1 Share
Photo illustration of a gavel next to a phone showing the Ticketmaster logo.

Live Nation-Ticketmaster is an illegal monopolist, a Manhattan jury found, according to Bloomberg. The jury found the company liable on three counts: illegally monopolizing the market for live event ticketing, illegally monopolizing the market for amphitheaters, and tying its concert promotions business to the use of its venues, Bloomberg reported.

The verdict, reached after several days of deliberation, leaves the live entertainment giant open to a potential breakup - which was the stated goal of the lawsuit back when it was filed by the Biden administration's Department of Justice. Such an outcome would go far beyond the settlement that the Trump administration's DOJ reache …

Read the full story at The Verge.

Read the whole story
alvinashcraft
25 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

The Pentagon's AI Plan + Behind the Anthropic Fight — With Under Secretary of War Emil Michael


Emil Michael is the Under Secretary of War for Research and Engineering at the Pentagon. Michael joins Big Technology to discuss how AI is transforming the Department of War, from targeting systems to drone warfare to cyber defense. Tune in to hear his account of why the Pentagon designated Anthropic a supply chain risk, what actually happened in the contract negotiations, and whether the decision was wise. We also cover how the military's Maven Smart System works in practice, what the U.S. learned from drone warfare in Ukraine and Iran, and whether the Pentagon Pizza Index is credible. Hit play for one of the most candid conversations you'll hear about AI and national security.

---

Questions? Feedback? Write bigtechnologypodcast@gmail.com

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

Want a discount for Big Technology on Substack + Discord? Here’s 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b



Learn more about your ad choices. Visit megaphone.fm/adchoices





Download audio: https://pdst.fm/e/tracking.swap.fm/track/t7yC0rGPUqahTF4et8YD/pscrb.fm/rss/p/traffic.megaphone.fm/AMPP7230758441.mp3?updated=1776279931

We rebuilt Flutter’s websites with Dart and Jaspr

Dash and Jasper sitting behind a laptop, checking out the new Dart and Flutter websites built with Dart and Jaspr, with a mockup of a website layout behind them.
Rebuilding three websites using Jaspr, a Dart-based, open-source web framework.

Despite Dart starting out as a web language and being used every day to build apps across platforms, including the web, our own websites (dart.dev, flutter.dev, docs.flutter.dev) relied on a fragmented mix of non-Dart tools. That’s finally changed. We’ve migrated all three websites to use Jaspr, an open-source framework for building websites with Dart.

The result is a unified stack with a consistent developer experience where contributing only requires Dart. If you’re curious about building web experiences with Dart beyond standard Flutter web apps, this post explores what motivated our migration and how Dart and Jaspr made it all possible.

A fragmented and unfamiliar technical stack

While the previous setup of our sites worked, their implementations were fragmented and required increasing effort to keep up with our evolving needs. The documentation sites were built with Eleventy, a Node.js static-site generator. Meanwhile, flutter.dev had a completely separate setup, powered by Wagtail, a CMS built on Python and Django.

This fragmentation meant that anyone wanting to contribute to or maintain our sites needed additional experience and tooling outside the Dart ecosystem: Node.js tooling for one set of sites, Python for another. While some surrounding infrastructure and interactive components were already built with Dart, the separate ecosystems limited code sharing, significantly increased set-up and contribution friction, and grew increasingly complicated.

We wanted to change that. We wanted a single, unified stack built on the language and tools our team and community already know. We also had growing ambitions and needs for interactivity on our sites, from richer code samples to quizzes for tutorials. Our existing setups made each new interactive element an uphill battle, often requiring one-off imperative DOM logic.

Finding a unified solution in Jaspr

Jaspr is a versatile Dart web framework that supports client-side rendering, server-side rendering, and static site generation. Beyond being a traditional DOM-based web framework (built on HTML and CSS) written in the language we already know, Jaspr stood out for a few reasons:

Flutter skills transfer directly. The Jaspr framework and its component model were designed to feel natural and familiar to any Flutter developer while being compatible with the DOM model of the web. If you’ve written a Flutter widget before, you can read this:

class FeatureCard extends StatelessComponent {
  const FeatureCard({
    required this.title,
    required this.description,
    super.key,
  });

  final String title;
  final String description;

  @override
  Component build(BuildContext context) {
    return div(classes: 'feature-card', [
      h3([.text(title)]),
      p([.text(description)]),
    ]);
  }
}

With Jaspr, contributors can directly apply the Dart and Flutter experience they already have to a new platform, significantly lowering the barrier to entry for team and community members who want to improve our documentation and websites.

Seamless support for partial hydration. One major underlying reason for this exploration and migration was to make it easier to build and integrate interactive experiences on our sites. Jaspr’s built-in support for partial hydration allows each page to be prerendered as static HTML, then the client-side logic is attached only for the components that need it. This is perfect for websites like ours, where the majority of the content is static and only small pockets of interactivity are needed, ensuring quick page loading and good SEO.
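As a rough sketch of what this looks like in practice (the quiz component here is hypothetical, not taken from our actual sites), Jaspr lets you mark individual components with its @client annotation so that only those subtrees are hydrated in the browser:

```dart
import 'package:jaspr/jaspr.dart';

// Hypothetical interactive island: the page around it stays static HTML,
// and only this component's logic is shipped to and run in the browser.
@client
class TutorialQuiz extends StatefulComponent {
  const TutorialQuiz({super.key});

  @override
  State<TutorialQuiz> createState() => _TutorialQuizState();
}

class _TutorialQuizState extends State<TutorialQuiz> {
  bool answered = false;

  @override
  Component build(BuildContext context) {
    return button(
      onClick: () => setState(() => answered = true),
      [.text(answered ? 'Answer submitted!' : 'Submit answer')],
    );
  }
}
```

Everything else on the page remains plain prerendered HTML, so the static content loads without waiting for any Dart to run.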

Jaspr Content for Markdown-driven sites. Jaspr also provides Jaspr Content, a package that supports quickly building content-driven sites. It provides enough out-of-the-box functionality to create a running Markdown-based website in just a few minutes while also being easy to expand and customize extensively. This built-in functionality saved a significant amount of time while the customizability enabled us to keep our original functionality and content practices intact.

What we gained

The migration brought all the benefits we imagined and more, both for the sites themselves and the contribution experience.

A singular, unified toolchain. With everything written in Dart, contributing requires just one SDK, and we gained access to Dart’s powerful, unified tooling. We can manage all dependencies with dart pub, format code with dart format, analyze it with dart analyze, and then test it with dart test. Managing the site now requires only one set of tools to know, one set of conventions to follow, and one ecosystem to stay current with, and it’s the one we’re already most familiar with.

A stack our contributors already know. Our websites have a lot of contributors, from engineers, to technical writers, to passionate community members. We want everyone to be able to contribute, but the fragmented setup was complex and unfamiliar to most. Now the sites are implemented as standard Dart projects, and if you know Dart, you have everything you need. We hope this lowers the barrier for team and community members who want to help improve Flutter and Dart’s documentation.

Less had to change than you’d expect. With Jaspr Content supporting most of what we needed out of the box, such as templating support, Markdown, and data loading, our content and writing workflows barely needed to change. Neither did our styles: we already used Sass, a CSS extension language whose primary implementation is itself written in Dart, so our styling setup became even simpler than before.

The collaborative migration

Overall, the site migration to Jaspr and Jaspr Content went well, but there were, of course, some challenges along the way. We occasionally ran into issues as well as opportunities for improvement with both Dart’s web tooling and Jaspr itself.

What made the migration possible was Kilian, Jaspr’s creator and maintainer. Beyond creating Jaspr, he supported us throughout the migration. He migrated components as early proofs of concept, responded to issues, shipped fixes, improved the developer experience, and even built out Jaspr Content with our websites as a driving use case. To support this ongoing effort and formalize the collaboration, we partnered with Kilian and his consultancy, Netlight, to help us migrate the rest of our web presence and continue investing directly in Jaspr. It was a genuinely collaborative process. Our sites and Jaspr both grew as a result.

In the Dart and Flutter ecosystem, the community is everything and what Kilian has provided to the community with Jaspr is a great example of that. Jaspr has shown itself to be a powerful and modern web framework that is well maintained, responsive to feedback, and ready for you to try out. Thank you, Kilian!

To hear Kilian’s perspective on building and maintaining the framework, check out his article: Jaspr: Why web development in Dart might just be a good idea.

Dart and Jaspr growing together

One of the most rewarding aspects of building on an all-Dart stack is that improvements to the Dart language and surrounding tooling benefit everything. Not just your Flutter apps, but your websites too. Here are a few recent Dart features that have directly impacted and improved the experience of building with Jaspr.

Dot shorthands make component trees cleaner. Dart 3.10 introduced support for a dot shorthand syntax enabling you to omit the type name from static member accesses when they can be inferred from the context. Kilian took advantage of this by consolidating several component constructors onto the Component class and designing them to work naturally with the new syntax:

Component build(BuildContext context) => const div([
  // After the API changes:
  h1([Component.text('Dash says hi!')]),
  Component.fragment([
    Component.text('First element'),
    Component.text('Second element'),
  ]),
  Component.empty(),

  // With dot shorthands:
  h1([.text('Dash says hi!')]),
  .fragment([
    .text('First element'),
    .text('Second element'),
  ]),
  .empty(),
]);

The result was a more consistent API with better discoverability and a concise syntax that still works in constant contexts. Best of all, Jaspr’s CLI comes with a jaspr migrate command that automatically handled the migration to the new API as well as other changes.

Null-aware collection elements simplify conditional rendering. Dart 3.8 added support for null-aware collection elements, providing a clean syntax to conditionally include non-null values in collections. In Jaspr code, where you’re regularly composing lists of child components, they offer an elegant way to handle conditional UI elements:

Component build(BuildContext context) => div(classes: 'header', [
  h1([.text('Welcome to Flutter!')]),

  // Before null-aware collection elements:
  if (eventBanner != null) eventBanner!,

  // With a null-aware collection element:
  ?eventBanner,
]);

No more verbose if checks and not-null assertions cluttering your component trees.

Modern, lightweight JS interop and compilation to WebAssembly. To enable efficient access to modern web APIs and compilation to WebAssembly, Dart 3.3 introduced new JS interop libraries as well as package:web. Jaspr was quick to migrate to and support the new APIs, ensuring Jaspr developers could benefit from their new capabilities and build modern Dart apps. Building on this, Jaspr additionally supports experimental compilation to WebAssembly when running on the client. In fact, dart.dev already uses and benefits from this support on compatible browsers.
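As a small, illustrative sketch (the selector and tooltip text here are invented for the example), the new interop style means browser APIs are called through package:web's typed bindings rather than the legacy dart:html wrappers:

```dart
import 'package:web/web.dart' as web;

// Hypothetical example: query the DOM through package:web, which exposes
// typed bindings generated from the browser's own API definitions.
void annotateExternalLinks() {
  final links = web.document.querySelectorAll('a.external');
  for (var i = 0; i < links.length; i++) {
    final link = links.item(i) as web.HTMLAnchorElement;
    link.title = 'Opens an external site';
  }
}
```

Because these bindings work with both the JavaScript and WebAssembly compilation backends, the same code keeps working when a site opts into Wasm.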

A helpful, integrated analyzer plugin. For a while, Jaspr had a helpful linting package built on top of package:custom_lint, helping developers write idiomatic and correct Jaspr code. With the release of official analyzer plugin support in Dart 3.10, Jaspr migrated to adopt the feature. The plugin provides a great example of what is possible, providing Jaspr-specific diagnostics and code assists. For example, it can convert between component types or quickly wrap a component with another, similar to the assists you might already be used to with Flutter.

None of these features were built specifically for Jaspr. They’re improvements to the Dart language and tooling that benefit the entire ecosystem, not just Flutter. For some of them, Jaspr was able to immediately take advantage, while others required framework changes from Kilian and contributors to unlock their potential. Either way, it’s clear that Dart keeps evolving and that evolution continues to open up improvements and possibilities for everything built with it, including Jaspr and Flutter.

What’s next and how to get started

We’re not done yet. Now that our websites share this new technical stack, we can start to share more code, build new interactive features, and continue to improve Dart’s web development story. We’re also migrating the Dart and Flutter blogs from Medium to be hosted directly on our Jaspr-powered sites. You’ll hopefully be able to read this very post there soon.

If you’re a Dart or Flutter developer curious about building websites with the skills you already have, there’s never been a better time to try. Jaspr is a great option for content-heavy sites, such as landing pages and documentation. It can even naturally integrate with your Flutter web apps. Try it out now on Jaspr’s online playground (which is also built with Jaspr!) or by following the Jaspr quickstart.

Or, if you’re interested in contributing to the Flutter or Dart documentation sites, the barrier to entry just got a lot lower. Now with Jaspr, all you need is Dart.


We rebuilt Flutter’s websites with Dart and Jaspr was originally published in Flutter on Medium, where people are continuing the conversation by highlighting and responding to this story.


Understanding AI’s Impact on Developer Workflows


AI coding assistants are no longer shiny add‑ons: they are standard parts of our daily workflows. We know developers’ short-term perspectives on them, and many say they get more done and spend less time on boilerplate or boring tasks. But we know far less about what happens over years of real work in real projects – and whether the changes developers perceive in their workflows have actually happened.

There are already a number of studies on developer-AI interaction, but the existing research is often limited in scale or depth, and the studies are rarely long-term investigations. Our Human-AI Experience (HAX) team was interested in developers’ experience with AI tools over a long period of time, so they analyzed two years of log data from 800 software developers. They also wanted to compare self-reported perceptions with the objective data, so they conducted a survey and follow-up interviews.

Here we present the findings from our HAX team’s mixed-methods study, which the team is presenting this week at ICSE 2026 in Rio de Janeiro.

The study demonstrates how developers’ workflows have evolved with AI tools. A major takeaway from the study is that AI redistributes and reshapes developers’ workflows in ways that often elude their own perceptions. 

In this blog post, we:

  • Present our methodology for the study, namely:
    • Log data from a two-year period
    • Survey and interview responses
  • Describe the components of developer workflows, including relevant previous research.
  • Discuss the results of our mixed-methods study.

Analyzing how developers are evolving their workflows with AI tools

In this section, we will describe how we set up the HAX study, investigating both how developers behave (using log data) and what developers perceive (through surveys and interviews).

A major advantage of this design is that the methods compensate for each other’s blind spots. Logs can show that workflows are changing, but not why; self-reports can explain motivations and context, but they are biased and often miss subtle behavioral shifts. By triangulating across both, mixed methods make it easier to spot gaps between perception and practice and to build a more complete, grounded picture of how AI is reshaping everyday development work.

Telemetry in research 

Telemetry is a well-established method for gathering data, in use since at least the 1800s. The word itself comes from the Greek for ‘far’ (tele) and ‘measure’ (metron), and it usually refers to data collected remotely. Its use enables more accurate experiments and better observability without constant manual measurement.

For example, telemetry is useful in healthcare settings for measuring blood pressure, heart rate, and oxygen levels over extended periods. In research, it can be used to capture continuous, real‑time data (for example, physiological, environmental, or system-performance signals) and send it to a central system for monitoring and analysis.

In the context of our research, telemetry is the stream of fine-grained, anonymized events that IDEs automatically record as developers work: which actions they take, when they take them, and in what sequence. That is, it collects information like the number of characters typed (not the actual characters), debugging sessions started, code deletions, paste operations, and window focus changes. If you’re interested, see our Product Data Collection and Usage Notice for further details.

We know from previous studies that telemetry can indeed uncover interesting patterns in developers’ behavior. For example, these researchers found that developers actually spend a good chunk (70%) of their time on comprehension activities like reading, navigating, and reviewing source code. 

How developers behave: Investigating log data from a two-year period

For this study, telemetry served as a behavioral lens on developer workflows. Instead of asking developers how they think AI assistants affect their workflows, we looked at what actually happens in the editor over a span of two years: how much code is written, how often code is edited or undone, when external snippets are inserted, and how frequently developers switch back into the IDE from other tools. By aggregating and comparing these signals for AI users and non‑users, telemetry made it possible to observe subtle, long-term shifts in everyday practice that would be hard to capture through surveys or controlled lab tasks alone.

More specifically, our team worked with anonymized usage logs from several JetBrains IDEs, including IntelliJ IDEA, PyCharm, PhpStorm, and WebStorm. We filtered down to devices that were active in both October 2022 and October 2024 so the same developers could be tracked over a full two-year window. Note that the first date (October 2022) was chosen because that was when ChatGPT was first released.

From there, we built two groups: 400 AI users, whose devices interacted with JetBrains AI Assistant at least once a month from April to October 2024, and 400 AI non-users, whose devices never used the assistant during the study period. The reasoning behind checking for use from April 2024 was that at that point, AI assistants had become widely available and stable in IDEs; we also wanted to ensure that the users really had integrated AI assistants into their workflows.

As the telemetry logs are by nature complex, our team picked out well-defined events, i.e., user actions, to represent each of the workflow dimensions, which are described in more detail below. These are:

  1. Typed characters – productivity
  2. Debugging session starts – code quality
  3. Delete and undo actions – code editing
  4. External paste events without an in-IDE copy – code reuse
  5. IDE window activations – context switching

Of course, any chosen proxy would have its limitations, and our chosen metrics do as well. That being said, our goal with this study was to detect patterns of change in developer workflows over time and not to identify causal effects.

By aggregating these metrics per user per month, we could see how each dimension evolved over time for AI users versus non-users, focusing on patterns of change. Overall, our dataset comprised 151,904,543 logged events performed by the 800 users.

Our data processing involved computing the total number of occurrences per device per month. This means we had a clear, high-level dataset of monthly counts for every action, which is ideal for tracking activity and identifying meaningful behavioral trends. 
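As an illustration of the kind of aggregation involved (the field names and event labels here are invented for the example, not the actual telemetry schema), the processing step boils down to grouping raw events by device, action, and calendar month:

```dart
// Sketch of collapsing raw telemetry events into monthly counts.
class TelemetryEvent {
  final String deviceId;
  final String action; // e.g. 'delete', 'debug.start', 'paste.external'
  final DateTime timestamp;
  TelemetryEvent(this.deviceId, this.action, this.timestamp);
}

Map<String, int> monthlyCounts(Iterable<TelemetryEvent> events) {
  final counts = <String, int>{};
  for (final e in events) {
    // Key each event by device, action, and calendar month.
    final month =
        '${e.timestamp.year}-${e.timestamp.month.toString().padLeft(2, '0')}';
    final key = '${e.deviceId}/${e.action}/$month';
    counts[key] = (counts[key] ?? 0) + 1;
  }
  return counts;
}

void main() {
  final events = [
    TelemetryEvent('dev-1', 'delete', DateTime(2024, 10, 3)),
    TelemetryEvent('dev-1', 'delete', DateTime(2024, 10, 21)),
    TelemetryEvent('dev-2', 'debug.start', DateTime(2024, 10, 5)),
  ];
  print(monthlyCounts(events));
  // {dev-1/delete/2024-10: 2, dev-2/debug.start/2024-10: 1}
}
```

Monthly totals like these are coarse, but that coarseness is deliberate: the study tracks trends in activity over time, not individual keystrokes.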

What developers perceive: Qualitative insights from surveys and interviews

To balance the behavioral view with developers’ own perspectives, our team also ran an online survey aimed at professional developers. We framed the questions around the same workflow dimensions, asking how they felt AI assistants had affected their productivity, code quality, editing habits, reuse patterns, and context switching. In total, 62 developers completed the survey, giving us a broad picture of perceived benefits, drawbacks, and changes since they started using AI tools.

The details of the survey can be found in §3.1 of the paper, as well as in the supplementary materials. In addition to demographic questions, the survey included:

  1. Scale questions about overall experience and reliance on AI tools for coding
  2. Scale questions on developer perception, specific to the workflow dimensions 
  3. Open-ended question asking for a specific example of how AI tools for coding have affected their workflow

The first two types of questions (items 1 and 2) were constructed as 5-point scales, where participants were asked to provide a rating indicating the degree to which they agreed or disagreed with a statement. In our survey, the scale concerned the degree of change, with (1) being "significantly decreased" and (5) "significantly increased".

The questions from item 2 can be summarized as follows: 

  • For productivity, we asked directly about overall productivity and also about time spent coding.
  • For code quality, we asked directly about the quality of code and also about code readability.
  • For code editing, we asked about the frequency of editing or modifying their own code.
  • For code reuse, we asked about the frequency of use of code from external sources, e.g. libraries, online examples, or AI-suggested code.
  • For context switching, we asked directly about the frequency of context switching – switching between different tasks or thought processes.

After the survey, our team invited a smaller group of participants to short, semi-structured interviews. In those conversations, we dug deeper into how they actually use AI day to day: when they reach for it, how they decide whether to trust a suggestion, and whether their work feels more or less fragmented now. Those qualitative stories helped us interpret the telemetry curves: for example, understanding why someone might report “not much has changed” even when their logs show big shifts in how much they type, delete, or paste external code.

Dimensions of the developer workflow 

To understand the developer workflows better, our HAX study divided the areas of interest into the following dimensions, mentioned above:

  1. Productivity
  2. Code quality
  3. Code editing
  4. Code reuse
  5. Context switching

Productivity captures the most intuitive question people ask about AI tools: Do they help developers get more done? This dimension sets the stage by asking whether AI-assisted workflows are simply faster at producing code, and how that plays out over time compared to developers who do not use AI at all. 

And although the impact of LLM-based coding tools on developer productivity has been a subject of many studies, there is not yet a clear picture of how – or whether – AI assistance really has a positive impact on developers’ productivity. On top of that, researchers use various measures (e.g. characters typed, tasks completed, completion requests accepted). Interestingly, this study observed that developers perceived their productivity as increasing with Copilot despite the data showing otherwise. Another study had similar findings: even though developers thought that their completion time of repository issues improved by 20%, the numbers actually show that they were 19% slower at completing tasks. In our study, we chose to measure code quantity.

Code quality shifts the focus from “how much code is written” to “how well the code is written.” Rather than inspecting code directly, this dimension looks at how often developers enter debugging workflows as a behavioral signal of running into problems or uncertainty. It introduces the idea that AI might change not only the number of issues that surface, but also how and when developers choose to investigate them, offering a window into how confident they feel about the code that ends up in their projects. Researchers in this study observed that developers spend more than a third of their time double-checking and editing Copilot suggestions. 

Code editing looks at what happens after the first draft: how frequently code is reshaped, corrected, or thrown away. Here, the interest is in how much developers are editing, undoing, and deleting – as a way of understanding whether AI turns programming into a more iterative, high‑revision activity. This dimension helps illuminate the “curation” side of AI use: accepting suggestions, reworking them, and deciding what ultimately stays in the codebase. This study looked not at the time spent editing, but how much is deleted or reworked: of the code that is at first accepted, almost a fifth is later deleted, and about 7% is heavily rewritten. 

Developers have always reused code (e.g. from libraries, internal snippets, Stack Overflow), but AI assistants introduce a new, often opaque channel for bringing external code into a project. Code reuse as a workflow dimension zooms out to ask where code comes from in the first place. We know from previous studies like this one that AI assistants provide boilerplate code and suggest commonly used patterns or snippets derived from training data. This dimension focuses on how often developers appear to integrate code from outside the current file or project, framing AI as part of a broader shift in reuse practices rather than an isolated feature.

Modern development work already involves frequent jumping between integrated development environments (IDEs), browsers, terminals, and communication tools, and AI assistants promise to streamline some of this jumping by keeping more help inside the editor. Context switching widens the lens from code to attention. This dimension asks whether that promise holds in practice, or whether AI ends up reshaping – rather than simply reducing – the ways developers move their focus across tools and tasks during everyday work.

For one, this study has shown that interacting with AI assistants can add cognitive overhead and fragment tasks, as developers alternate between writing code, interpreting suggestions, and managing the dialogue with the system. This raises an open question: are these tools actually reducing context switching overall, or mostly trading one form of interruption for another?

Two lenses on AI-assisted workflows

By combining different methods, our study is able to deliver a more complete picture of how AI coding tools change (or don’t change) developers’ workflows. We also learned that behavioral changes are largely invisible to the developers themselves. Together, these patterns sketch out what it really means to evolve with AI in a modern IDE.

In this section, we walk you through the results, presenting them by dimension. The table below displays an overview of our results. 

[Table: overview of results across the five workflow dimensions]

Productivity

The first dimension of our study looked at how AI assistants affect productivity, measured in the telemetry part as how much code developers type over time and in the survey as how the developers perceived their productivity and time spent coding. Here, both the actual behavior and perception are aligned: with an in-IDE AI assistant, developers are writing more code.

In the graph below, the average number of typed characters is displayed for the AI users and AI non-users for the investigated time period. Shaded regions represent ±1 standard deviation from the mean.

[Graph: average typed characters per month, AI users vs. AI non-users]

From the graph above, it is clear that developers who adopted the in‑IDE AI assistant consistently typed more characters than those who never used it, and this gap grew over the two‑year period. The log data revealed the trend that AI users increased the number of characters typed by almost 600 per month, in contrast to AI non-users, who only displayed an average increase of 75 characters per month. This data suggests that the difference is not just a one‑off spike; it’s a sustained shift in developer behavior.

Survey respondents (all AI users) similarly experienced an increase in productivity. Over 80% of respondents reported that the introduction of AI coding tools slightly or significantly increased their productivity, while two respondents said that it slightly or significantly decreased it. Regarding time spent coding, more than half said that their coding time decreased, while about 15% indicated that it increased.

Interview participants largely echo this in their own words. For example, one developer (3–5 years of experience and a regular AI user) said: 

When I get stuck on naming or documentation, I immediately turn to AI, and it really helps.

In this dimension, perception and actual behavior are aligned. These results demonstrate that developers are producing more code in the editor and perceive a productivity increase with AI tools.

Code quality

For code quality, the study uses a simple behavioral signal: how often developers start a debugging session in the IDE. This is not a perfect measure of “good” or “bad” code, but it does tell us how frequently people feel the need to step through their program to understand or fix something. In this dimension, the developers’ behavior and perception are not aligned, at least not in a statistically significant way: there is no change in AI users’ debugging behavior, but a slight improvement in perception of code quality and readability.

In the graph below, the average number of started debugging instances is displayed for the AI users and AI non-users for the investigated time period. As before, shaded regions represent ±1 standard deviation from the mean. Across the two years, both AI users and non‑users show active debugging behavior, and the differences between the groups are much less dramatic than for the productivity measure.

[Graph: average debugging sessions started per month, AI users vs. AI non-users]

The above graph shows the average number of debugging instances per month for each group as two lines that sit close together. Our statistical analysis told us that for AI users, there was no significant change in behavior over time. For AI non-users, there was a slight decrease in debugging starts in the time period.

Most survey respondents say using AI coding tools has somewhat positively changed their code quality. Namely, when asked whether the quality of their code increased because of using AI coding tools, almost half say that it slightly or significantly increased, while about 10% say that it slightly or significantly decreased. For the readability of the code, the respective numbers are 43.5% and 6.5%, though 50% indicate that they did not observe a change. 

Despite this perceived improvement, some developers still do not completely trust AI-generated code. For example, a developer with 3–5 years of experience reported in the interview:

I triple-check it, and even then, I still feel a bit uneasy.

For this dimension, our results show us that although developers report an increase in code quality from their point of view, their behavior (in the proxy we chose to analyze) does not show a change for AI users.

Code editing

When we look at code editing, the difference between behavior and perception is more striking: while the developers reported little change, the log data showed a stark rise in their behavior over time. Here, the telemetry value was how often developers delete or undo code, and in the survey, they were asked whether they thought they edited their own code more with AI tools. 

In the graph below, the average number of deletions is displayed for the AI users and AI non-users for the investigated time period. As before, shaded regions represent ±1 standard deviation from the mean. Across the two years, the trend lines show a big difference between AI users and non‑users.

[Figure: average number of code deletions per month, AI users vs. AI non-users]

The AI users’ line sits noticeably higher, with a statistically significant increase of about 100 deletions per month. In contrast, AI non-users’ deletions rose over the same period by only about 7 per month on average. This data suggests more frequent editing and rework when AI is helping to generate code.

In the qualitative data, the developers do not report seeing such a big change. Half of the respondents reported no perceived change in their code-editing behavior since adopting AI tools; about 40% reported a slight or significant increase, and about 7% reported a decrease. A system architect with more than 15 years of coding experience said: 

AI is like a second pair of eyes, offering pair programming benefits without social pressure – especially helpful for neurodivergent people. It’s not always watching, but I can call on it for code review and feedback when needed.

Compared to the previous dimension of code quality, developers’ perception and actual behavior with respect to code editing are inverted: they do not perceive a significant change in how much they are editing code, but the log data shows a large increase for developers who have adopted AI assistance.

Code reuse

For code reuse, the study looked at how often developers paste content into the IDE that does not come from a copy action inside the same IDE session. The results for this dimension are less divergent, both between AI users vs. non-users and between perception vs. behavior. 
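
That "external paste" signal could be approximated as follows. This is a sketch built directly from the definition above; the study's actual instrumentation is not shown here:

```python
def count_external_pastes(events):
    """Count IDE paste events whose content was never copied in-session.

    events: ordered list of ("copy", payload) / ("paste", payload)
    tuples from one IDE session. A paste counts as *external* when its
    payload was never copied inside the same session -- a rough proxy
    for code brought in from a browser, chat window, or AI tool.
    """
    copied = set()
    external = 0
    for kind, payload in events:
        if kind == "copy":
            copied.add(payload)
        elif kind == "paste" and payload not in copied:
            external += 1
    return external

session = [
    ("copy", "foo()"), ("paste", "foo()"),   # internal reuse, not counted
    ("paste", "import requests"),            # came from outside the IDE
    ("paste", "def helper(): ..."),          # came from outside the IDE
]
print(count_external_pastes(session))  # → 2
```

Real telemetry would match on clipboard identity rather than exact text, but the classification logic is the same.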

In the graph below, the average number of external pastes is displayed for the AI users and AI non-users for the investigated time period. As before, shaded regions represent ±1 standard deviation from the mean. Across the two years, the trend lines do not show a big change for either AI users or non‑users.

[Figure: average number of external pastes per month, AI users vs. AI non-users]

The trend line for AI users is higher overall than for AI non-users, indicating that they reuse external code more frequently. However, there is not a large change over time for either group. 

The responses from the survey and interview don’t show a clear pattern either. In the survey, about a third of respondents said they perceived that their use of code from external sources slightly or significantly increased with the adoption of AI tools, while a fifth reported it decreased; 44% observed no change. 

From previous studies, we might have expected that developers using AI tools are more likely to reuse external code. In contrast, they report a different picture. For example, a developer with over 15 years of experience says: 

For me, it’s better to take responsibility for what I did myself rather than adopt a third-party solution.

Context switching

The last dimension we studied is context switching, or how often developers jump back into the IDE after working in another window, such as a browser. AI tools, especially in-IDE ones, are often marketed as a way to keep developers “in flow” by reducing the need to leave the editor for help. Although the qualitative data does not show a pattern in either direction, the telemetry tells a more complicated story: over time, AI users actually show more IDE activations than non‑users, meaning they are switching contexts at least as much, if not more.

In the graph below, the average number of IDE activations is displayed for the AI users and AI non-users for the investigated time period. As before, shaded regions represent ±1 standard deviation from the mean. Across the two years, the trend lines do show a slight increase for AI users.

[Figure: average number of IDE activations per month, AI users vs. AI non-users]

As with the previous dimension, the trend line for AI users sits higher overall. However, here the trends diverge: AI users show an increase of about 6 IDE activations per month, while AI non-users show the opposite, a decrease of about 7 per month.

In contrast to the differences seen in the log data, the survey responses were less suggestive of a clear pattern. Namely, about a quarter of respondents indicated an increase, about a fifth a decrease, and about half no change. 

In the interviews, developers indicate that using AI tools does not result in a simple drop in context switching, but a different pattern of fragmentation. One developer said:

I stopped switching contexts, saving a few seconds every time I would have googled something.

In this workflow dimension, we see a pattern similar to the previous ones: a slight increase in the log data for AI users, but no clear pattern in the qualitative data. This suggests that even with in-IDE AI assistance, developers are still switching contexts, sometimes even more than those not using AI assistance.

AI’s impact on effort and attention

Taken together, the results of our HAX study suggest that AI coding assistants are quietly reshaping developer workflows in ways that can otherwise go unnoticed. Our study shows that these shifts are subtle enough that developers don’t always see them clearly in their own habits. That’s why combining methods, pairing telemetry with surveys and interviews, matters: it reveals the gap between what feels different and what actually changes in day‑to‑day behavior.

If you’re building or adopting AI tools, the takeaway is simple: don’t just ask whether people like them. You should look closely at what they are actually doing!


Microsoft counters the MacBook Neo with freebies for students


Apple's $599 MacBook Neo ($499 for students) has sent shockwaves through the PC ecosystem, and now Microsoft is responding with deals targeting students in the US. A new "Microsoft College Offer" is launching today, which will see the software giant bundle 12 months of free Microsoft 365 Premium and Xbox Game Pass Ultimate with select Windows 11 PCs that have also been discounted.

Acer, Asus, Dell, HP, and Lenovo are all participating in this Microsoft College Offer, and Microsoft is even discounting some Surface devices days after hiking the prices of its Surface Pro and Surface Laptop models. Best Buy is selling a 15.3-inch Lenovo IdeaPad …

Read the full story at The Verge.


Incident response for AI: Same fire, different fuel


When a traditional security incident hits, responders replay what happened. They trace a known code path, find the defect, and patch it. The same input produces the same bad output, and a fix proves it will not happen again. That mental model has carried incident response for decades.

AI breaks it. A model may produce harmful output today, but the same prompt tomorrow may produce something different. The root cause is not a line of code; it is a probability distribution shaped by training data, context windows, and user inputs that no one predicted. Meanwhile, the system is generating content at machine speed. A gap in a safety classifier does not leak one record. It produces thousands of harmful outputs before a human reviewer sees the first one.

Fortunately, most of the fundamentals that make incident response (IR) effective still hold true. The instincts that seasoned responders have developed over time still apply: prioritizing containment, communicating transparently, and learning from each incident.

AI introduces new categories of harm, accelerates response timelines, and calls for skills and telemetry that many teams are still developing. This post explores which practices remain effective and which require fresh preparation.

The fundamentals still hold

The core insight of crisis management applies to AI without modification: the technical failure is the mechanism, but trust is the actual system under threat. When an AI system produces harmful output, leaks training data, or behaves in ways users did not expect, the damage extends beyond the technical artifact. Trust has technical, legal, ethical, and social dimensions. Your response must address all of them, which is why incident response for AI is inherently cross-functional.

Several established principles transfer directly.

Explicit ownership at every level. Someone must be in command. The incident commander synthesizes input from domain experts; they do not need to be the deepest technical expert in the room. What matters is that ownership is clear and decision-making authority is understood.

Containment before investigation. Stop ongoing harm first. Investigation runs in parallel, not after containment is complete. For AI systems, this might mean disabling a feature, applying a content filter, or throttling access while you determine scope.
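
Disabling a feature or throttling access often reduces to a feature gate in front of the AI endpoint. Below is a minimal sketch; `FeatureGate` is a hypothetical name, the state is in-process, and a real deployment would read the flag from a shared store:

```python
import time

class FeatureGate:
    """Minimal kill switch / throttle for an AI feature during containment.

    When disabled, every request is rejected; otherwise a simple
    requests-per-second cap lets responders throttle access while
    they determine the incident's scope. (A sketch only -- production
    systems would back this with a shared, auditable flag store.)
    """
    def __init__(self, enabled=True, max_per_sec=None):
        self.enabled = enabled
        self.max_per_sec = max_per_sec
        self._recent = []  # timestamps of requests in the last second

    def permit(self, now=None):
        if not self.enabled:
            return False  # kill switch flipped: hard stop
        if self.max_per_sec is None:
            return True
        now = time.monotonic() if now is None else now
        self._recent = [t for t in self._recent if now - t < 1.0]
        if len(self._recent) >= self.max_per_sec:
            return False  # over the throttle cap
        self._recent.append(now)
        return True

gate = FeatureGate(enabled=True, max_per_sec=2)
print([gate.permit(now=0.0) for _ in range(3)])  # third call is throttled
gate.enabled = False                             # flip the kill switch
print(gate.permit(now=0.5))                      # everything rejected
```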

Escalation should be psychologically safe. The cost of escalating unnecessarily is minor. The cost of delayed escalation can be severe. Build a culture where raising a flag early is expected, not penalized.

Communication tone matters as much as content. Stakeholders tolerate problems. They cannot tolerate uncertainty about whether anyone is in control. Demonstrate active problem-solving. Be explicit about what you know, what you suspect, and what you are doing about each.

These principles are tested, and they are effective in guiding action. The challenge with AI is not that these principles no longer apply; it is that AI introduces conditions where applying them requires new information, new tools, and new judgment.

Where AI changes the equation

Non-determinism and speed are the headline shifts, but they are not the only ones.

New harm types complicate classification and triage. Traditional IR taxonomies center on confidentiality, integrity, and availability. AI incidents can involve harms that do not fit those categories cleanly: generating dangerous instructions, producing content that targets specific groups, or enabling misuse through natural language interfaces. By making advanced capabilities easy to use, these interfaces enable untrained users to perform complex actions, increasing the risk of misuse or unintended harm. This is why we need an expanded taxonomy. If your incident classification system lacks categories for these harms, your triage process will default to “other” and lose signal.

Severity resists simple quantification. A model producing inaccurate medical information is a different severity than the same model producing inaccurate trivia answers. Good severity frameworks guide judgment; they cannot replace it. For AI incidents, the context around who is affected and how they are affected carries more weight than traditional security metrics alone can capture.

Root cause is often multi-dimensional. In traditional incidents, you find the bug and fix it. In AI incidents, problematic behavior can emerge from the interaction of training data, fine-tuning choices, user context, and retrieval inputs. Investigation may narrow the contributing factors without isolating one defect. Your process must accommodate that ambiguity rather than stalling until certainty arrives.

Before the crisis is the time to work through these implications. The questions that matter: How and when will you know? Who is on point and what is expected of them? What is the response plan? Who needs to be informed, and when? Every one of these questions that you answer before the incident is time you buy during it.

Closing the gaps in telemetry, tooling, and response

If AI changes the nature of incidents, it also changes what you need in order to detect and respond to them.

Observability is the first gap. Traditional security telemetry monitors network traffic, authentication events, file system changes, and process execution. AI incidents generate different signals: anomalous output patterns, spikes in user reports, shifts in content classifier confidence scores, unexpected model behavior after an update. Many organizations have not yet instrumented AI systems for these signals and, without clear signal, defenders may first learn about incidents from social media or customer complaints. Neither provides the early warning that effective response requires.
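
One of those signals, a shift in content classifier confidence scores, can be caught with even a simple rolling z-score detector. This is an illustrative sketch over hypothetical scores, not a production detector:

```python
from collections import deque

def confidence_alerts(scores, window=50, threshold=3.0):
    """Flag sudden shifts in a safety classifier's confidence scores.

    Keeps a rolling window of recent scores and records the index of
    any new score that deviates from the window's mean by more than
    `threshold` standard deviations -- a stand-in for the "shift in
    content classifier confidence scores" signal described above.
    """
    recent = deque(maxlen=window)
    alerts = []
    for i, s in enumerate(scores):
        if len(recent) >= 10:  # need a minimal baseline first
            mean = sum(recent) / len(recent)
            var = sum((x - mean) ** 2 for x in recent) / len(recent)
            std = var ** 0.5
            if std > 0 and abs(s - mean) / std > threshold:
                alerts.append(i)
        recent.append(s)
    return alerts

# Hypothetical scores: stable around 0.9, then a sudden drop.
scores = [0.9, 0.91, 0.89, 0.9, 0.92, 0.9, 0.91, 0.9, 0.89, 0.9, 0.91, 0.3]
print(confidence_alerts(scores, window=10))  # flags the final score
```

Production detectors would account for drift and seasonality, but even this level of instrumentation beats learning about an incident from social media.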

Forensics is the second gap. Many AI systems are built with strong privacy defaults – minimal logging, restricted retention, anonymized inputs – and those same defaults narrow the forensic record when you need to establish what a user saw, what data the model touched, or how an attacker manipulated the system. Privacy-by-design and investigative capability require deliberate reconciliation before an incident, because that decision does not get easier once the clock is running.

AI can also help close these gaps. We use AI in our own response operations to enhance our ability to:

  • Detect anomalous outputs as they occur
  • Enforce content policies at system speed
  • Examine model outputs at volumes no human team can match
  • Distill incident discussions so responders spend time deciding rather than reading
  • Coordinate across response workstreams faster than email chains allow

Staged remediation reflects the reality of AI fixes. Incidents require both swift action and thorough review. A model behavior change or guardrail update may not be immediately verifiable in the way a traditional patch is. We use a three-stage approach:

  • Stop the bleed. Tactical mitigations: block known-bad inputs, apply filters, restrict access. The goal is reducing active harm within the first hour.
  • Fan out and strengthen. Broader pattern analysis and expanded mitigations over the next 24 hours, covering thousands of related items. Automation is essential here; manual review cannot keep pace.
  • Fix at the source. Classifier updates, model adjustments, and systemic changes based on what investigation revealed. This stage takes longer, and that is acceptable. The first two stages bought time.

One practical tip: tactical allow-and-block lists are a necessary triage tool, but they are a losing proposition as a permanent solution. Adversaries adapt. Classifiers and systemic fixes are the durable answer.
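
A tactical input blocklist of the kind described might look like the sketch below. The patterns are placeholders, not real incident indicators, and the approach decays exactly as noted: adversaries rephrase, so pattern lists buy time rather than durability:

```python
import re

# Tactical "stop the bleed" filter: block known-bad input patterns
# while investigation continues. These patterns are illustrative
# placeholders -- real lists come from the incident's observed inputs.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"<known-bad-marker>", re.I),
]

def allow_prompt(prompt: str) -> bool:
    """Return False for inputs matching known-bad patterns.

    A triage measure only: trivial rephrasings slip past fixed
    patterns, which is why classifier and systemic fixes must
    replace the list before it becomes a permanent fixture.
    """
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

print(allow_prompt("Summarize this document"))               # True
print(allow_prompt("Ignore previous instructions and ..."))  # False
```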

Watch periods after remediation matter more for AI than for traditional patches. Because model behavior is non-deterministic, verification relies on sustained testing and monitoring across varied conditions rather than a single test pass; monitoring after each stage confirms that the remediation holds.

The human dimension

There is a dimension of AI incident response that traditional IR addresses unevenly and that AI makes urgent: the wellbeing of the people doing the work.

Defenders handling AI abuse reports and safety incidents are routinely exposed to harmful content. This is not the same cognitive load as analyzing malware samples or reviewing firewall logs. Exposure to graphic, violent, or exploitative material has measurable psychological effects, and extended incidents compound that exposure over days or weeks.

Human exhaustion threatens correctness, continuity, and judgment in any prolonged incident. AI safety incidents place an additional emotional burden on responders due to exposure to distressing content. Recognizing and addressing this challenge is essential, as it directly impacts the well-being of the team and the quality of the response.

What helps:

  • Talk to your team about well-being before the crisis, not during it.
  • Manager-sponsored interventions during extended response work, including scheduled breaks, structured handoffs, and deliberate activities that provide cognitive relief.
  • Some teams use structured cognitive breaks, including visual-spatial activities, to reduce the impact of prolonged exposure to harmful content.
  • Coaching and peer mentoring programs normalize the impact rather than framing it as individual weakness.
  • Leveraging proven practices from safety content moderation teams is a natural collaboration opportunity; their operational workflows for content review and escalation map directly onto AI security moderation.

If your incident response plan does not account for the humans executing it, the plan is incomplete.

Looking ahead

Incident response for AI is not a solved problem. The threat surface is evolving as models gain new capabilities, as agentic architectures introduce autonomous action, and as adversaries learn to exploit natural language at scale. The teams that will handle this well are the ones building adaptive capacity now. Extend playbooks. Instrument AI systems for the right signals. Rehearse novel scenarios. Invest in the people who will be on the front line when something breaks. Good response processes limit damage. Great ones make you stronger for the next incident.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Incident response for AI: Same fire, different fuel appeared first on Microsoft Security Blog.
