
From queries to conversations: Unlock insights about your data using Azure Storage Discovery—now generally available


We are excited to announce the general availability of Azure Storage Discovery, a fully managed service that delivers enterprise-wide visibility into your data estate in Microsoft Azure Blob Storage and Azure Data Lake Storage. Azure Storage Discovery helps you optimize storage costs, comply with security best practices, and drive operational efficiency. With the included Microsoft Copilot in Azure integration, all decision makers and data users can access and uncover valuable data management insights using simple, everyday language—no specialized programming or query skills required. The intuitive experience provides advanced data visualizations and actionable intelligence that matter most to you.

Businesses are speeding up digital transformation by storing large amounts of data in Azure Storage for AI, analytics, cloud native apps, HPC, backup, and archive. This data spans multiple subscriptions, regions, and accounts to meet workload needs and compliance rules. This data sprawl makes it challenging to track data growth, spot unexpected data reduction, or optimize costs without clear visibility into data trends and access patterns. Organizations struggle to identify which datasets and business units drive growth and usage the most. Without a global view and streamlined insights across all storage accounts, it’s difficult to ensure data availability, residency, security, and redundancy are consistently aligned with best practices and regulatory compliance requirements.

Azure Storage Discovery makes it simple to gain and analyze insights to manage such large data estates.

Analyze your data estate with Azure Storage Discovery

Azure Storage Discovery lets you easily set up a workspace with storage accounts from any region or subscription you can access. The first insights are available in less than 24 hours, so you can begin analyzing your data estate right away.

Unlock intelligent insights using natural language with Copilot in Azure

Use natural language to ask for the storage insights you need to accomplish your storage management goals. Copilot in Azure answers with rich data visualizations, like tables and charts.

Interactive reports built into the Azure portal

Azure Storage Discovery generates out-of-box dashboards you can access from the Azure portal, with insights that help you visualize and analyze your data estate. The reports let you filter your data estate by region, redundancy, and performance, allowing you to quickly drill down and uncover the insights important to you.

Advanced storage insights

The reports deliver insights, at a glance, across multiple dimensions, helping you manage your data effectively:

  • Capacity: Insights about resource and object sizes and counts, aggregated by subscription, resource group, and storage account, with growth trends.
  • Activity: Visualize transactions, ingress, and egress for insights on how your storage is accessed and utilized.
  • Security: Highlights critical security configurations of your storage resources and flags outliers, including public network access, shared access keys, anonymous access to blobs, and encryption settings.
  • Configurations: Surfaces configuration patterns across your storage accounts like redundancy, lifecycle management, inventory, and others.
  • Errors: Highlights failed operations and error codes to help identify patterns of issues that might be impacting workloads.

Kickstart your insights for free, including 15 days of historical data

Getting started is easy with access to 15 days of historical insights within hours of deploying your Azure Storage Discovery workspace. The standard pricing plan offers the most comprehensive set of insights, while the free pricing plan gets you going with the basics.

With the standard pricing plan, an Azure Storage Discovery workspace will retain insights for up to 18 months so you can analyze long-term trends and any business- or season-specific workload patterns.

Azure Storage Discovery is available to you today! You can learn more about Azure Storage Discovery here and even get started in the Azure portal here.

Use Copilot to solve the most important business problems

During the design of Azure Storage Discovery, we spoke with many customers across various business-critical roles, such as IT managers, data engineers, and CIOs. We realized AI could simplify onboarding by removing the need for infrastructure deployment or coding knowledge. As a result, we included Copilot in Azure Storage Discovery from the start. It offers insights beyond standard reports and dashboards using natural language queries to deliver actionable information through visualizations like trend charts and tables.

To get started, simply navigate to your Azure Storage Discovery workspace resource in the Azure portal, and activate Copilot.

Azure Storage Discovery workspace in the Azure portal, with Copilot panel and data insights charts. 

Identify opportunities to optimize costs

Understanding storage size trends is crucial for cost optimization, and analyzing these trends by region and performance type can reveal important patterns about how the data is evolving over time. With Azure Storage Discovery’s 18 months of data retention, you can uncover long-term trends and unexpected changes across your data estate, while Copilot quickly visualizes storage size trends broken down by region.

“How is the storage size trending over the past month by region?”

Line graph showing storage size trend over the past month by region.

Finding cost-saving opportunities across many storage accounts can be difficult, but Copilot simplifies this by highlighting accounts with the highest savings potential based on capacity and transactions as shown below.

“Provide a list of storage accounts with default access tier as Hot, that are above 1TiB in size and have the least transactions”

A table with a list of storage accounts with more than 1 TiB of data but the fewest transactions.

Before taking any action, you can dive even deeper into the insights by evaluating distributions. For example, a distribution of access tiers across blobs.

“Show me a chart of blob count by blob access tier”

Column chart with a distribution of blob count by blob access tier.

Knowing that the majority of objects are still in the Hot tier provides immediate opportunities to reduce costs by enabling Azure Storage Actions to tier down or even delete data that is not accessed frequently. Azure Storage Actions is a fully managed, serverless platform that automates data management tasks—like tiering, retention, and metadata updates—across millions of blobs in Azure Blob Storage and Data Lake Storage.

Assess whether storage configurations align with security best practices

For better storage security, Microsoft recommends using Microsoft Entra ID with managed identities instead of Shared Key authentication. Azure Storage Discovery enables you to quickly see how many storage accounts still have shared access keys enabled and drill down into a list of storage accounts that need attention.

“Show me a pie chart of my storage accounts with shared access key enabled by region”

Pie chart of storage accounts by region with shared access key enabled over past week.
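If you want to cross-check the same setting outside of Discovery, you can enumerate accounts with the Azure management SDK. Here is a minimal sketch, assuming the azure-identity and azure-mgmt-storage packages, Reader access, and a placeholder subscription ID; whether the property is populated depends on your SDK and API version:

# Minimal sketch: list storage accounts where shared key access is still enabled.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for account in client.storage_accounts.list():
    # allow_shared_key_access is None when the setting was never changed,
    # which Azure treats as enabled.
    if account.allow_shared_key_access in (None, True):
        print(f"{account.name} ({account.location}): shared key access enabled")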

Manage your data redundancy requirements

Azure provides several redundancy options to meet data availability, disaster recovery, performance, and cost needs. These choices should be regularly reviewed against risks and benefits for an effective storage strategy. Azure Storage Discovery quickly shows the redundancy configuration for all storage accounts and allows you to analyze the most suitable option for each workload and critical business data.

“Show me a chart of my storage account count by redundancy”

Column chart of storage account count by redundancy type.

Pricing and availability

A single Azure Storage Discovery workspace can analyze the subscriptions and storage accounts from all supported regions. Learn more about the regions supported by Azure Storage Discovery here. The service offers a free pricing plan with insights related to capacity and configurations retained for up to 15 days and a standard pricing plan that also includes advanced insights related to activity, errors, and security configurations. Insights are retained for up to 18 months, allowing you to analyze trends and business cycles.

Learn more about the pricing plans in the Azure Storage Discovery documentation and access the prices for your region here.

Get started with Azure Storage Discovery

Getting started with Azure Storage Discovery is easy. Simply follow these two steps:

  1. Configure an Azure Storage Discovery workspace and select the set of subscriptions and resource groups containing your storage accounts.
  2. Define the “Scopes” that represent your business groups or workloads.

That’s it! Give it a moment. Once a workspace is configured, Azure Storage Discovery starts aggregating the relevant insights and makes them available to you via intuitive charts. You’ll find them in the Azure portal, on different report pages of your workspace. We’ll even look back in time and provide 15 days of historical data. Your insights are typically available within a few hours.

To get started, visit Azure Storage Discovery in the Azure Marketplace.

You can also deploy via the brand new Storage Center in the Azure portal. Find Azure Storage Discovery in the “Data management” section.

Want to read more before deploying? The planning guide walks you through all the important considerations for a successful Azure Storage Discovery deployment.

We’d love to hear your feedback. What insights are most valuable to you? What would make Azure Storage Discovery more valuable for your business? Let us know at: StorageDiscoveryFeedback@service.microsoft.com.

The post From queries to conversations: Unlock insights about your data using Azure Storage Discovery—now generally available appeared first on Microsoft Azure Blog.


Sora 2 now available in Azure AI Foundry


Turning imagination into reality has never been more instantaneous—or more powerful—than it is today, with the launch of OpenAI’s Sora 2: now in public preview in Azure AI Foundry.

Azure AI Foundry is the developer destination built for creators, from startups to global businesses. The platform now offers a curated catalog of generative media models, including OpenAI’s Sora, GPT-image-1 and GPT-image-1-mini, Black Forest Labs’ Flux 1.1 and Kontext Pro, and more. These models empower software development companies and builders to serve creatives with new and unique capabilities—to accelerate storyboarding, drive engagement, and transform the creative process, all without sacrificing the safety, reliability, and integration businesses expect.

What can you create with Sora 2 in Azure AI Foundry?

Sora 2 in Azure AI Foundry isn’t just another video generation tool; it’s a creative powerhouse, seamlessly integrated into a platform built for innovation, trust, and scale. Unlike standalone solutions, Azure AI Foundry offers a unified platform where developers can access Sora 2 alongside other leading generative models in a secure, scalable, and structured environment to achieve more:

  • Marketers can rapidly produce stunning, branded campaign assets including animated assets for product launches and personalized content to capture attention and drive engagement.
  • Retailers can engage customers with interactive and localized campaigns to accelerate time-to-market and transform their customers’ online shopping experience.
  • Creative directors can transform imaginative ideas into dynamic movie trailers and cinematic experiences to test concepts, while Sora 2’s realistic world simulation, synchronized audio, and creative controls help bring visions to life.
  • Educators can create immersive lesson plans and interactive media that spark curiosity and deepen understanding.

With Sora 2 in Azure AI Foundry, developers across industries can innovate boldly and confidently. Azure AI Foundry’s unified environment, advanced capabilities, and enterprise-grade security provide the foundation for creativity to flourish and ideas to become reality.

What features and controls are available?

Sora 2 in Azure AI Foundry stands out by combining OpenAI’s most advanced video generation capabilities with the trusted infrastructure and security controls of Microsoft Azure, unlocking new possibilities for every developer with a set of core features:

  • Realistic video generation powered by advanced world simulation and physics.
  • Generation based on input text, images, and video.
  • Synchronized audio and dialogue for immersive storytelling.
  • Audio available in multiple languages.
  • Enhanced creative control, including detailed prompt understanding for studio shots, scene details, and camera angles.
  • Seamless integration into business workflows, backed by Microsoft’s enterprise-grade safety and security.

Microsoft is committed to delivering secure and safe AI solutions for organizations of all sizes. Through Azure AI Foundry and our responsible AI principles, we empower customers with embedded security, safety, and privacy controls.

This foundation extends to Sora 2, where our advanced safety systems and robust controls work together to help developers innovate more confidently in Azure AI Foundry:

  • Content filters for inputs: Screens text, image, and video prompt inputs.
  • Content filters for outputs: Analyzes video frames and audio; can block content to help comply with organizational policies.
  • Enterprise-grade security: Azure’s compliance and governance frameworks protect customer data and creative assets. 

Sora 2 Azure AI Foundry pricing and availability

Starting today, Sora 2 is available via API through Standard Global in Azure AI Foundry.

Model | Size | Price per second (in USD)
Sora 2 | Portrait: 720×1280, Landscape: 1280×720 | $0.10

Please refer to the Azure AI Foundry Models page for future updates on deployment types and availability.
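For orientation, here is a minimal Python sketch of calling the video generation API, based on the documented Sora (v1) preview flow in Azure OpenAI; the exact paths, request fields, and the Sora 2 model identifier shown here are assumptions to verify against the Azure AI Foundry documentation:

import time
import requests

ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"  # placeholder
HEADERS = {"api-key": "YOUR_KEY", "Content-Type": "application/json"}  # placeholder

# 1) Create a generation job (the "sora-2" model id is an assumption).
job = requests.post(
    f"{ENDPOINT}/openai/v1/video/generations/jobs?api-version=preview",
    headers=HEADERS,
    json={"model": "sora-2", "prompt": "A paper boat drifting down a rainy street",
          "width": 1280, "height": 720, "n_seconds": 8},
).json()

# 2) Poll until the job reaches a terminal state.
while job["status"] not in ("succeeded", "failed", "cancelled"):
    time.sleep(5)
    job = requests.get(
        f"{ENDPOINT}/openai/v1/video/generations/jobs/{job['id']}?api-version=preview",
        headers=HEADERS,
    ).json()

# 3) Download the generated video.
if job["status"] == "succeeded":
    gen_id = job["generations"][0]["id"]
    video = requests.get(
        f"{ENDPOINT}/openai/v1/video/generations/{gen_id}/content/video?api-version=preview",
        headers=HEADERS,
    )
    open("output.mp4", "wb").write(video.content)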

Get started with AI as your creative partner

Sora 2 is designed to empower and inspire developers. By accelerating early production and enabling rapid prototyping, Sora 2 frees up time for more ideation and storytelling. The goal is to bring human creativity to the next level, making it easier for anyone to turn ideas into compelling visual stories.

The post Sora 2 now available in Azure AI Foundry appeared first on Microsoft Azure Blog.


BONUS: The Evolution of Agile - From Project Management to Adaptive Intelligence | Mario Aiello



In this BONUS episode, we explore the remarkable journey of Mario Aiello, a veteran agility thinker who has witnessed and shaped the evolution of Agile from its earliest days. Now freshly retired, Mario shares decades of hard-won insights about what works, what doesn't, and where Agile is headed next. This conversation challenges conventional thinking about methodologies, certifications, and what it truly means to be an Agile coach in complex environments.

The Early Days: Agilizing Before Agile Had a Name

"I came from project management and project management was, for me, was not working. I used to be a wishful liar, basically, because I used to manipulate reports in such a way that would please the listener. I knew it was bullshit."

Mario's journey into Agile began around 2001 at Sun Microsystems, where he was already experimenting with iterative approaches while the rest of the world was still firmly planted in traditional project management. Working in Palo Alto, he encountered early adopters discussing Extreme Programming and had an "aha moment" - realizing that concepts like short iterations, feedback loops, and learning could rescue him from the unsustainable madness of traditional project management. He began incorporating these ideas into his work with PRINCE2, calling stages "iterations" and making them as short as possible. His simple agile approach focused on: work on the most important thing first, finish it, then move to the next one, cooperate with each other, and continuously improve.

The Trajectory of Agile: From Values to Mechanisms

"When the craze of methodologies came about, I started questioning the commercialization and monetization of methodologies. That's where things started to get a little bit complicated because the general focus drifted from values and principles to mechanisms and metrics."

Mario describes witnessing three distinct phases in Agile's evolution. The early days were authentic - software developers speaking from the heart about genuine needs for new ways of working. The Agile Manifesto put important truths in front of everyone. However, as methodologies became commercialized, the focus shifted dangerously away from the core values and principles toward prescriptive mechanisms, metrics, and ceremonies. Mario emphasizes that when you focus on values and principles, you discover the purpose behind changing your ways of working. When you focus only on mechanics, you end up just doing things without real purpose - and that's when Agile became a noun, with people trying to "be agile" instead of achieving agility. He's clear that he's not against methodologies like Scrum, XP, SAFe, or LeSS - but rather against their mindless application without understanding the essence behind them.

Making Sense Before Methodology: The Four-Fit Framework

"Agile for me has to be fit for purpose, fit for context, fit for practice, and I even include a fourth dimension - fit for improvement."

Rather than jumping straight to methodology selection, Mario advocates for a sense-making approach. First, understand your purpose - why do you want Agile? Then examine your context - where do you live, how does your company work? Only after making sense of the gap between your current state and where the values and principles suggest you should be, should you choose a methodology. This might mean Scrum for complex environments, or perhaps a flow-based approach for more predictable work, or creating your own hybrid. The key insight is that anyone who understands Agile's principles and values is free to create their own approach - it's fundamentally about plan, do, inspect, and adapt.

Learning Through Failure: Context is Paramount

"I failed more often than I won. That teaches you - being brave enough to say I failed, I learned, I move on because I'm going to use it better next time."

Mario shares pivotal learning moments from his career, including an early attempt to "agilize PRINCE2" in a command-and-control startup environment. While not an ultimate success, this battle taught him that context is paramount and cannot be ignored. You must start by understanding how things are done today - identifying what's good (keep doing it), what's bad (try to improve it), and what's ugly (eradicate it to the extent possible). This lesson shaped his next engagement at a 300-person organization, where he spent nearly five months preparing the organizational context before even introducing Scrum. He started with "simple agile" practices, then took a systems approach to the entire delivery system.

A Systems Approach: From Idea to Cash

"From the moment sales and marketing people get brilliant ideas they want built, until the team delivers them into production and supports them - all that is a system. You cannot have different parts finger-pointing."

Mario challenges the common narrow view of software development systems. Rather than focusing only on prioritization, development, and testing, he advocates for considering everything that influences delivery - from conception through to cash. His approach involved reorganizing an entire office floor, moving away from functional silos (sales here, marketing there, development over there) to value stream-based organization around products. Everyone involved in making work happen, including security, sales, product design, and client understanding, is part of the system. In one transformation, he shifted security from being gatekeepers at the end of the line to strategic partners from day one, embedding security throughout the entire value stream. This comprehensive systems thinking happened before formal Scrum training began.

Beyond the Job Description: What Can an Agile Coach Really Do?

"I said to some people, I'm not a coach. I'm just somebody that happens to have experience. How can I give something that can help and maybe influence the system?"

Mario admits he doesn't qualify as a coach by traditional standards - he has no formal coaching qualifications. His coaching approach comes from decades of Rugby experience and focuses on establishing relationships with teams, understanding where they're going, and helping them make sense of their path forward. He emphasizes adaptive intelligence - the probe, sense, respond cycle. Rather than trying to change everything at once and capsizing the boat, he advocates for challenging one behavior at a time, starting with the most important, encouraging adaptation, and probing quickly to check for impact of specific changes. His role became inviting people to think outside the box, beyond the rigidity of their training and certifications, helping individuals and teams who could then influence the broader system even when organizational change seemed impossible.

The Future: Adaptive Intelligence and Making Room for Agile

"I'm using a lot of adaptive intelligence these days - probe, sense, respond, learn and adapt. That sequence will take people places."

Looking ahead, Mario believes the valuable core of Agile - its values and principles - will remain, but the way we apply them must evolve. He advocates for adaptive intelligence approaches that emphasize sense-making and continuous learning rather than rigid adherence to frameworks. As he enters retirement, Mario is determined to make room for Agile in his new life, seeking ways to give back to the community through his blog, his new Substack "Adaptive Ways," and by inviting others to think differently. He's exploring a "pay as you wish" approach to sharing his experience, recognizing that while he may not be a traditional coach or social media expert, his decades of real-world experience - with its failures and successes - holds value for those still navigating the complexity of organizational change.

About Mario Aiello

Retired from full-time work, Mario is an agility thinker shaped by real-world complexity, not dogma. With decades in VUCA environments, he blends strategic clarity, emotional intelligence, and creative resilience. He designs context-driven agility, guiding teams and leaders beyond frameworks toward genuine value, adaptive systems, and meaningful transformation.

You can link with Mario Aiello on LinkedIn, visit his website at Agile Ways.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251018_Mario_Aiello_BONUS.mp3?dest-id=246429

How to accelerate 0→1 research with AI


Finding your footing in a new domain, without cutting corners.

I used to think AI was just a stochastic text extruder. But I’ve learned it can be a significant accelerant for 0–1 research as a guide and partner.

Abstract AI-generated image. Silhouette pointing at glowing node on a purple-pink gradient background with network lines and icons for search, chat, book, and progress.

LLMs’ sycophantic desire to please can pose real risks to uncritical product makers looking to shortcut product discovery work. But deep research and grounded tools can make AI an excellent dance partner to help you drive understanding and discover opportunities.

Back in February, execs asked me to help them understand our opportunity with SREs (site reliability engineers). But aside from witnessing DiRT at Google and meeting a few SREs at parties, I didn’t know much about them. And when I asked around, no one else had much experience with them either.

Diving into deep research

So, my first instinct as a researcher is to explore the space and learn the lit. “Desk research” we used to call it, before COVID made all research “desk research”. I used Perplexity’s Deep Research mode (now also found in ChatGPT, Copilot, and others) to start asking questions and get the lay of the land. You could try asking an LLM to write an entire report to hand off to stakeholders. But where we humans still add value is in translating between the language our stakeholders use — and the way they think about a problem space — into the way other people talk about it in industry.

Here’s what worked for me:

  • Build an annotated bibliography. Research mode will give you a well-structured essay. But LLM hallucinations are all but inevitable. So instead, I see this as building an “annotated bibliography” of expert viewpoints and analyses. Within an hour I found 36 articles, books, and videos to orient me to the space, so I could double-check the claims I was reading and go deeper on interesting points.
  • Test arguments. As my understanding and my own narrative started to take shape, I found myself wanting to connect the dots and make arguments I didn’t have evidence for. Using research mode, I could float my straw man arguments, ask for sources that supported or refuted them, and ask whether specific sources agree.
Read the discussion in this Reddit thread. Does it specifically describe SRE roles?
Are there any quotes in this document supporting this statement: …
Are there any sources to support or refute this statement: …
  • Identify contrast. Other stakeholders were beginning to express their point of view, and I wanted to see where we differed. So, I uploaded their draft and mine into Copilot and used prompts like:
Let’s compare two analyses of what an SRE is. Please read both and then summarize what makes Source B different from Source A.
What should the author of Source A learn from Source B?
How do the responsibilities of SREs compare in Source A and Source B?

All this turned up a trove of content, including Google’s seminal SRE book, conveniently available as a free PDF. I found NotebookLM invaluable to digest my new bibliography, using Gemini’s ludicrous context window to absorb all that material. NotebookLM’s strength is being grounded in the specific sources you give it. It uncovered a story about one company’s transformation buried deep in the SRE book that I never would’ve noticed.

NotebookLM interface showing sources list on the left and SRE principles and Q&A content on the right.
NotebookLM, seeded with the sources I discovered in deep research.

After adding the sources I’d discovered, I generated a 20-minute podcast to send to my phone so I could get out of the house and review the material on a walk. It feels a little corny at times — and the male voice sounds uncannily like Kai Ryssdal — but the audio overviews do a great job of summarizing the body of work they’re assigned, zeroing in on stories and analogies to help break down the subject matter. (You can also get it to have an existential crisis or rhapsodize about poop.)

My colleague Maryam Maleki has an excellent article with more techniques for deep research with AI.

Navigating a layer of thick data

With a bit of context, I interviewed 14 SREs about their work, their needs, and their wishes for an SRE Agent. Semi-structured interviews are a rich source of “thick data” to get a sense of how people feel about their work and what really matters to them. The conversation often drifts where you don’t expect and takes unexpected twists that yield insight you never knew to ask for. But mining those nuggets out of the discussions can be time-consuming, forcing you to spend hours reviewing video footage.

I used Marvin to record and transcribe interviews (with participant consent). Instead of furiously taking notes and wearing out my keyboard, I found myself more present in the conversation. Since I trusted the system to capture what people were saying, I was more attuned to how they were saying it — able to probe more on their moments of delight and frustration.

Here’s what I did:

  • Compare participants. Compared to most LLMs, Marvin does a remarkably good job of admitting when it can’t answer a question. So, by feeding in my interview guide, I could get back a handy table summarizing how each participant answered my questions, with blank cells where we didn’t cover a topic. But since it analyzes the entire interview, it doesn’t matter if the answer came literally after you uttered the question or if it came up earlier. Likewise, you can also give it questions post hoc, and it’ll see if the interview provides answers anywhere. So, I can provide questions I wish I had asked, based on something a handful of participants alluded to, and recover moments where other people commented on a topic.
Marvin table with questions and answers about work tasks and preferences. Columns include questions on typical work, enjoyable tasks, disliked tasks, and repetitive toil. Rows show responses from different participants, with one cell highlighted in yellow reading ‘No specific answer.’
Marvin compares participants against your interview questions, and admits when it doesn’t know the answers.
  • Finding quotes. Like interrogating blog posts or articles about a question, I could go back and identify quotes across all 14 hours of content related to a topic or answering a question. This made it easy to index on key moments and well-articulated thoughts I wanted to highlight later.

Abstracting out common needs

The Jobs To Be Done framework is a framing device for user needs, but it’s really abstract. You can’t just ask someone directly, “what are your jobs to be done?” So traditionally you’d elucidate this in an analysis workshop or by rewatching all the interviews. I’d never used JTBD before, and it turns out there’s lots of different definitions out there, so I asked Copilot to synthesize a formula for analysis. Then I gave this prompt to Marvin:

You are a user experience researcher who’s an expert on SRE practices. A Job to Be Done (JTBD) statement typically follows a specific format to clearly articulate the customer’s needs and desired outcomes.
The formula for creating a JTBD statement is: When [situation], I want to [motivation], so I can [expected outcome].
Here’s a breakdown of each component:
• When [situation]: Describes the context or situation in which the job arises.
• I want to [motivation]: Specifies the customer’s motivation or the task they want to accomplish.
• So I can [expected outcome]: Highlights the desired result or benefit the customer expects to achieve.
For example: When I am commuting to work, I want to listen to my favorite podcasts, so I can stay informed and entertained during my journey.
For each participant, identify the jobs to be done they mention, using the formula above where possible to define the JTBD.

…and it churned through each interview and created several statements for them that fit the formula. I spot-checked a bunch, and they were all true, though a few were off-topic for SRE work. There’s room for semantic squabbles about whether these statements are broad enough to generalize, but within the context of how a specific respondent described their work, it works well. You get statements like these:

When encountering repetitive manual tasks, I want to automate these processes, so I can reduce toil and increase operational efficiency.
When developing and maintaining infrastructure, I want to integrate secure practices and compliance measures, so I can protect [company’s] data and prevent security breaches.

They run the gamut from general wishes to specific actions, so I find an affinitizing step helpful. But since each job has a specific person attached to it, I can easily see which broader jobs are more prevalent in which customer segments. Copilot gives me a head start on this:

This file contains a list of participants and the JTBD they reported. Please affinitize the JTBD into groups of similar items, reporting how many participants’ JTBD fall into each group.

This process still requires human review: I ultimately brought all the data into Mural to refine the categories; this gave me a sense of the recurring core jobs and tasks reported by my participants and a conceptual framework I used with my team to get them thinking about the wide range of things SREs actually do.

Clustered diagram titled JTBD at the top, with color-coded sticky notes in green and yellow arranged under headings like Lorem a, Lorem d, Lorem e, and others. Notes represent tasks or ideas grouped by categories and audience size, indicated by labels such as ‘<100 people,’ ‘100–1000 people,’ and ‘>1000 people’.
A whiteboard showing that I did affinitize all the data…converted here to lorem ipsum to protect privacy.
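If you want a programmatic head start before the whiteboard pass, one option (not what I used here) is to cluster the statements with off-the-shelf text features, for example TF-IDF plus k-means in scikit-learn. A minimal sketch, with illustrative statements standing in for real data:

# Hypothetical head start on affinitizing: cluster JTBD statements by lexical
# similarity. This is a stand-in for the Copilot prompt above, not a replacement
# for human review; the statements below are illustrative.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

jtbd = [
    "When encountering repetitive manual tasks, I want to automate these processes...",
    "When developing infrastructure, I want to integrate secure practices...",
    "When an incident pages me at night, I want clear runbooks...",
    # ...one statement per participant-job
]

X = TfidfVectorizer(stop_words="english").fit_transform(jtbd)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Print statements grouped by cluster for a first-pass affinity map.
for group, statement in sorted(zip(labels, jtbd)):
    print(group, statement)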

Likewise, I used a similar process to extract out tasks people wished an AI agent could do for them: Marvin pulled these wishes directly out of the interviews, and I affinitized them into a similar framework to identify a core set of tasks that people really want help with.

Accelerating survey analysis

After workshopping with the product and eng teams, we landed on a set of tasks we agreed were feasible for the team to build…but the list was too long to cover everything. So, I ran a MaxDiff study on Qualtrics: a fancy survey that asked SREs to rank which tasks they’d prioritize, eliciting their values.

But as anyone who runs online surveys knows, some people will speed through to claim incentives, leaving garbage data in their wake. Detecting these speedsters usually depends on survey context (how long does the survey normally take, what attention checks and red herring questions you have, etc.), so survey platforms usually don’t do this for you. The last time I did this I had to identify low-quality respondents each time I downloaded a new batch of data from Qualtrics.

So instead, I vibe coded a solution to automate this: using GitHub Copilot in VS Code to build a Streamlit app that would fetch data from the Qualtrics API directly, show live data, implement my quality checks, and extend Qualtrics’ own analysis to evaluate statistical significance.
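As an illustration, here is a minimal sketch of that pipeline: it starts a CSV export through the Qualtrics response-export API, polls until the export completes, loads the responses into pandas, applies a placeholder speed check, and renders a live table in Streamlit. The datacenter, token, survey ID, and the 120-second threshold are all placeholders, not values from my study:

# Minimal sketch: fetch survey responses from Qualtrics and show them live.
import io, time, zipfile

import pandas as pd
import requests
import streamlit as st

BASE = "https://yourdatacenterid.qualtrics.com/API/v3"  # placeholder datacenter
HEADERS = {"X-API-TOKEN": "YOUR_TOKEN"}  # placeholder token
SURVEY_ID = "SV_xxxxxxxx"  # placeholder survey ID

def fetch_responses(survey_id: str) -> pd.DataFrame:
    # Start a CSV export, poll until complete, then download and unzip it.
    url = f"{BASE}/surveys/{survey_id}/export-responses"
    progress_id = requests.post(
        url, headers=HEADERS, json={"format": "csv"}
    ).json()["result"]["progressId"]
    while True:
        check = requests.get(f"{url}/{progress_id}", headers=HEADERS).json()["result"]
        if check["status"] == "complete":
            break
        time.sleep(2)
    blob = requests.get(f"{url}/{check['fileId']}/file", headers=HEADERS).content
    with zipfile.ZipFile(io.BytesIO(blob)) as z:
        with z.open(z.namelist()[0]) as f:
            # Qualtrics CSV exports carry two extra header rows.
            return pd.read_csv(f, skiprows=[1, 2])

df = fetch_responses(SURVEY_ID)
# Placeholder quality check: flag respondents who finished implausibly fast.
df["too_fast"] = pd.to_numeric(df["Duration (in seconds)"]) < 120
st.title("MaxDiff responses (live)")
st.dataframe(df)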

Here’s what worked for me:

  • Start incrementally. When I taught programming, we instructed students to break assignments down into smaller tasks, and make sure they worked before moving on to the next step. Building interactively with an AI copilot works the same way: you’re more likely to catch errors early on when you’re only adding one piece of functionality at a time.
  • Be specific. You’re likely to get better results with narrow and specific instructions, name-dropping the specific technologies and techniques you want to use. Telling Copilot to use Streamlit, for example, made the resulting code more familiar. Giving it bulleted step-by-step instructions for what I wanted it to do (and what statistical tests to run) made it easier to inspect that it did the right thing.
  • Get a second opinion. Copilot doesn’t just edit code — it can read any text open in an editor and draw from web references. So, I could paste in a Qualtrics qsf file and ask it to explain how Qualtrics stored responses. It also helped me figure out how to find my survey ID from the Qualtrics UI.
  • Describe your goals. At times, Copilot recommended statistical tests and libraries I’d never heard of when I told it I wanted to model sampling uncertainty in the data. I was nervous about reading too much into a small sample, and when I explained this Copilot suggested implementing bootstrap sampling as a flexible means of measuring the confidence interval (sketched below). It’s still on you to read up on those methodological tricks instead of relying on “vibe stats”, but it can be a great way to discover and learn new methods.
Two side-by-side chat panels showing GitHub Copilot responses. The left panel explains steps to build a Streamlit web app using Qualtrics API, including setting up the app, fetching data, processing data, and rendering tables and histograms. The right panel provides instructions to find a survey ID in Qualtrics, listing steps like logging in, navigating to the survey, accessing settings, and using the API directory, followed by a short Python code snippet.
Incrementally building my analysis pipeline with GitHub Copilot
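Here is a minimal sketch of the bootstrap idea, applied to a stand-in array of per-respondent scores; the data, sample size, and random seed are hypothetical:

# Minimal sketch: bootstrap a 95% confidence interval for a mean score.
import numpy as np

rng = np.random.default_rng(42)
scores = rng.normal(loc=0.3, scale=1.0, size=60)  # stand-in for real survey data

# Resample with replacement many times and record each resample's mean.
n_boot = 10_000
boot_means = np.array([
    rng.choice(scores, size=len(scores), replace=True).mean()
    for _ in range(n_boot)
])

# Percentile method: the middle 95% of bootstrap means is the interval.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean={scores.mean():.3f}, 95% CI=({lo:.3f}, {hi:.3f})")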

Conclusion

AI helped me to go from knowing nothing about SREs to giving our team confidence in what to prioritize for their launch, giving us a sense of footing in just a few weeks.

The inflection point of UX research in large organizations is the moment where you can scale the practice. While I’ve seen UXRs bragging about how many more studies they can run with AI, the real power I’ve seen is not just for accelerating your research but for embiggening it: enabling you to talk to more customers and analyze more data. I’ve long been a fan of using machine learning to scale qualitative research practice; AI opens another opportunity for us to cast a wide net and listen to more people.

The caveat, of course, is that AI doesn’t know what’s important for your stakeholders or customers, and it doesn’t summarize text, it shortens. The onus is on us as product makers to draw connections and insights to find genuine opportunities from all the noise and opinion.

The opportunity for us as researchers is to reimagine how we gather and make sense of data — facilitating empathy with and understanding of users at the pace demanded by product development.

How have you tried using AI to scale your understanding?

Mike Brzozowski is a principal UX researcher in Microsoft’s Core AI group. He has over 15 years’ experience guiding product teams to make informed, human-centered decisions with confidence and conviction.


How to accelerate 0→1 research with AI was originally published in UXR @ Microsoft on Medium, where people are continuing the conversation by highlighting and responding to this story.


F# Weekly #42, 2025 – Hi, Victor & .NET 10 RC2


Welcome to F# Weekly,

A roundup of F# content from this past week:

News

Write your own tiny programming system(s)! with Tomas Petricek

Videos

Blogs

Our Victor CLI tool is a play on words for the idea that Hugo templates should be fair game for bringing into the #fsharp framework. speakez.tech/blog/victor-…

SpeakEZ.tech (@speakeztech.bsky.social) 2025-10-17T13:37:25.012Z

F# vNext

Highlighted projects

New Releases

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!







The US has a new roadmap for fusion energy, without the funds to back it up

Guests await the beginning of a news conference at the Department of Energy headquarters to announce a breakthrough in fusion research on December 13, 2022 in Washington, DC. The officials announced that experiments at the National Ignition Facility at the LLNL achieved ‘ignition.’ | Photo: Getty Images

The Department of Energy (DOE) released a new roadmap for the US to realize the decades-long dream of harnessing fusion energy.

It’s a commitment to support research and development efforts and pursue public-private partnerships to finally build the first generation of fusion power plants. And of course, the plan hypes up AI as both a tool that can lead to new breakthroughs and as the motivation to create a new energy source that can satiate data centers’ growing electricity demands.

The DOE is eyeing an extremely ambitious timeline, although the details on how to accomplish that are vague considering success still relies on achieving scientific breakthroughs that have evaded scientists for the better part of a century. Moreover, the burgeoning ecosystem of startups and researchers committed to this task is clamoring for more cash — funds the DOE admits it doesn’t yet have to give. 

Of course, the plan hypes up AI

A press release from the DOE yesterday boasts that its new strategy aims to deploy commercial-scale fusion power to electricity grids by the mid-2030s. The actual roadmap, however, paints a fuzzier picture. The document says in bold that its goal “is to deliver the public infrastructure that supports the fusion private sector scale up in the 2030s.” Regardless, there are still a lot of hurdles and uncertainties to face, which could realistically make powering our homes and businesses with fusion energy decades away, if ever. 

Why is this such a large task? Today’s nuclear fission plants split atoms apart to release energy. Nuclear fusion plants, in contrast, would fuse atoms together to generate energy in a controlled way. (You get a hydrogen bomb when this is done in an uncontrolled way.) The upside to achieving fusion would be that it doesn’t produce the same radioactive waste as fission, nor does the process rely on polluting fossil fuels. 

Fusion essentially mimics the way stars produce their own light and heat. While this could be an abundant carbon-free energy source, it also takes a tremendous amount of heat and pressure to fuse atoms together. As a result, it’s been extraordinarily difficult to achieve a fusion reaction that results in a net energy gain (something called “ignition” in industry-speak). Scientists accomplished this for the first time in 2022 using lasers. Researchers developing fusion technologies are working to re-create that feat and figure out how to sustain the reaction longer. 

There have been some other significant changes in recent years that have fed into all the current buzz around fusion. The generative AI boom has left big tech companies scrambling to get enough electricity to power more data centers. Sam Altman, Bill Gates, and Jeff Bezos have all backed fusion startups developing their own plant designs. Both Google and Microsoft have announced plans to purchase electricity from forthcoming fusion power plants that are supposed to be online by the late 2020s or 2030s. More than $9 billion in private investments have flowed into fusion demonstrations and prototype reactors, the DOE says.

There are other big gaps to fill, which is where the DOE says it can step in. The roadmap emphasizes bringing together the public and private sectors to build out the “critical infrastructure” needed to make fusion commercially viable, such as producing and recycling fusion fuels (typically hydrogen isotopes called tritium and deuterium). Another “core challenge area” the document highlights is the need to develop structural materials strong enough to withstand the extreme conditions at a fusion plant. (Remember, you’re sort of replicating the environment within a star.)  

It also mentions the development of regional hubs for fusion innovation, where DOE laboratories might work with universities, local and state governments, and private companies to build up a workforce for these new technologies. One hub would be a collaboration among Nvidia, IBM, the Princeton Plasma Physics Laboratory, and the DOE to “establish an AI-optimized fusion-centric supercomputing cluster” called Stellar-AI.

The DOE dedicates an entire section of the roadmap to AI, which it calls a “transformative tool for fusion energy.” Researchers can use AI models to construct “digital twins” to more quickly study how experimental facilities would perform, the roadmap says as an example. 

The document also comes with a big disclaimer. Written at the top, above the executive summary, it says: “This Roadmap is not committing the Department of Energy to specific funding levels, and future funding will be subject to Congressional appropriations.” In other words, the DOE isn’t ready to throw any money at this plan just yet. 

And while the Trump administration has folded fossil fuels, nuclear fission, and fusion into its ambitions for so-called “energy dominance,” the president has clawed back funding for solar and wind energy projects that are already much faster and typically cheaper to deploy to meet America’s growing electricity demand. 
