Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Sites, Study Suggests

An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month. The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic.

However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google. Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...."

In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches...

Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases. Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
7 hours ago
reply
Pennsylvania, USA
Share this story
Delete

AI Book Club recording of God, Human, Animal, Machine

This is a recording of our AI Book Club discussion of God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O'Gieblyn, held Jan 18, 2026. Our discussion touches upon a variety of parallels between religion and AI, such as the black box nature of AI and the incomprehensibility of God's will, transhumanism and resurrection, predictive algorithms and free will, and more. This post also provides discussion questions, a transcript, and other resources.


Beyond boundaries: The future of Azure Storage in 2026


2025 was a pivotal year in Azure Storage, and we’re heading into 2026 with a clear focus on helping customers turn AI into real impact. As outlined in last December’s Azure Storage innovations: Unlocking the future of data, Azure Storage is evolving as a unified intelligent platform that supports the full AI lifecycle at enterprise scale with the performance modern workloads demand.

Looking ahead to 2026, our investments span the full breadth of that lifecycle as AI becomes foundational across every industry. We are advancing storage performance for frontier model training, delivering purpose‑built solutions for large‑scale AI inferencing and emerging agentic applications, and empowering cloud‑native applications to operate at agentic scale. In parallel, we are simplifying adoption for mission‑critical workloads, lowering TCO, and deepening partnerships to co‑engineer AI‑optimized solutions with our customers.

We’re grateful to our customers and partners for their trust and collaboration, and excited to shape the next chapter of Azure Storage together in the year ahead.

Extending from training to inference

AI workloads extend from large, centralized model training to inference at scale, where models are applied continuously across products, workflows, and real-world decision making. LLM training continues to run on Azure, and we’re investing to stay ahead by expanding scale, improving throughput, and optimizing how model files, checkpoints, and training datasets flow through storage.

Innovations that helped OpenAI operate at unprecedented scale are now available to all enterprises. Blob scaled accounts allow storage to scale across hundreds of scale units within a region, handling the millions of objects required to enable enterprise data to be used as training and tuning datasets for applied AI. Our partnership with NVIDIA DGX on Azure shows that scale translates into real-world inference. DGX Cloud was co-engineered to run on Azure, pairing accelerated compute with high-performance storage, Azure Managed Lustre (AMLFS), to support LLM research, automotive, and robotics applications. AMLFS provides the best price-performance for keeping GPU fleets continuously fed. We recently released preview support for 25 PiB namespaces and up to 512 GBps of throughput, making AMLFS the best-in-class managed Lustre deployment in the cloud.

As we look ahead, we’re deepening integration across popular first- and third-party AI frameworks such as Microsoft Foundry, Ray, Anyscale, and LangChain, enabling seamless connections to Azure Storage out of the box. Our native Azure Blob Storage integration within Foundry enables enterprise data consolidation into Foundry IQ, making blob storage the foundational layer for grounding enterprise knowledge, fine-tuning models, and serving low-latency context to inference, all under the tenant’s security and governance controls.

From training through full-scale inferencing, Azure Storage supports the entire agent lifecycle: distributing large model files efficiently, storing and retrieving long-lived context, and serving data from RAG vector stores. By optimizing for each pattern end-to-end, Azure Storage has performant solutions for every stage of AI inference.

Evolving cloud native applications for agentic scale

As inference becomes the dominant AI workload, autonomous agents are reshaping how cloud native applications interact with data. Unlike human-driven systems with predictable query patterns, agents operate continuously, issuing an order of magnitude more queries than traditional users ever did. This surge in concurrency stresses databases and storage layers, pushing enterprises to rethink how they architect new cloud native applications.

Azure Storage is working with SaaS leaders like ServiceNow, Databricks, and Elastic to optimize for agentic scale, leveraging our block storage portfolio. Looking forward, Elastic SAN becomes a core building block for these cloud native workloads, starting with transforming Microsoft’s own database solutions. It offers fully managed block storage pools for different workloads to share provisioned resources with guardrails for hosting multi-tenant data. We’re pushing the boundaries on max scale units to enable denser packing and capabilities for SaaS providers to manage agentic traffic patterns.

As cloud native workloads adopt Kubernetes to scale rapidly, we are simplifying the development of stateful applications through our Kubernetes native storage orchestrator, Azure Container Storage (ACStor) alongside CSI drivers. Our recent ACStor release signals two directional changes that will guide upcoming investments: adopting the Kubernetes operator model to perform more complex orchestration and open sourcing the code base to collaborate and innovate with the broader Kubernetes community.

Together, these investments establish a strong foundation for the next generation of cloud native applications where storage must scale seamlessly and deliver high efficiency to serve as the data platform for agentic scale systems.

Breaking price performance barriers for mission critical workloads

In addition to evolving AI workloads, enterprises continue to grow their mission critical workloads on Azure.

SAP and Microsoft are partnering to expand core SAP performance while introducing AI-driven agents like Joule that enrich Microsoft 365 Copilot with enterprise context. Azure’s latest M-series advancements add substantial scale-up headroom for SAP HANA, pushing disk storage performance to ~780k IOPS and 16 GB/s throughput. For shared storage, Azure NetApp Files (ANF) and Azure Premium Files deliver the high throughput NFS/SMB foundations SAP landscapes rely on, while optimizing TCO with ANF Flexible Service Level and Azure Files Provisioned v2. Coming soon, we will introduce the Elastic ZRS storage service level in ANF, bringing zone‑redundant high availability and consistent performance through synchronous replication across availability zones leveraging Azure’s ZRS architecture, without added operational complexity.

Similarly, Ultra Disks have become foundational to platforms like BlackRock’s Aladdin, which must react instantly to market shifts and sustain high performance under heavy load. With average latency well under 500 microseconds, support for 400K IOPS, and 10 GB/s throughput, Ultra Disks enable faster risk calculation, more agile portfolio management, and resilient performance on BlackRock’s highest-volume trading days. When paired with Ebsv6 VMs, Ultra Disks can reach 800K IOPS and 14 GB/s for the most demanding mission critical workloads. And with flexible provisioning, customers can tune performance precisely to their needs while optimizing TCO.

These combined investments give enterprises a more resilient, scalable, and cost-efficient platform for their most critical workloads.

Designing for new realities of power and supply

The global AI surge is straining power grids and hardware supply chains. Rising energy costs, tight datacenter budgets, and industry-wide HDD/SSD shortages mean organizations can’t scale infrastructure simply by adding more hardware. Storage must become more efficient and intelligent by design.

We’re streamlining the entire stack to maximize hardware performance with minimal overhead. Combined with intelligent load balancing and cost-effective tiering, we are uniquely positioned to help customers scale storage sustainably even as power and hardware availability become strategic constraints. With continued innovations on Azure Boost Data Processing Units (DPUs), we expect step-function gains in storage speeds and feeds at even lower per-unit energy consumption.

AI pipelines can span on-premises estates, neocloud GPU clusters, and cloud, yet many of these environments are limited by power capacity or storage supply. When these limits become a bottleneck, we make it easy to shift workloads to Azure. We’re investing in integrations that make external datasets first-class citizens in Azure, enabling seamless access to training, fine-tuning, and inference data wherever it lives. As cloud storage evolves into AI-ready datasets, Azure Storage is introducing curated, pipeline-optimized experiences to simplify how customers feed data into downstream AI services.

Accelerating innovations through the storage partner ecosystem

We can’t do this alone. Azure Storage works closely with strategic partners to push inference performance to the next level. In addition to the self-publishing capabilities available in Azure Marketplace, we go a step further by devoting resources with expertise to co-engineer solutions with partners, building highly optimized and deeply integrated services.

In 2026, you will see more co-engineered solutions like Commvault Cloud for Azure, Dell PowerScale, Azure Native Qumulo, Pure Storage Cloud, Rubrik Cloud Vault, and Veeam Data Cloud. We will focus on hybrid solutions with partners like VAST Data and Komprise to enable data movement that unlocks the power of Azure AI services and infrastructure—fueling impactful customer AI Agent and Application initiatives.

To an exciting new year with Azure Storage

As we move into 2026, our vision remains simple: help every customer unlock more value from their data with storage that is faster, smarter, and built for the future. Whether powering AI, scaling cloud native applications, or supporting mission critical workloads, Azure Storage is here to help you innovate with confidence in the year ahead.

The post Beyond boundaries: The future of Azure Storage in 2026 appeared first on Microsoft Azure Blog.


String Performance: The Fastest Way to Get a String’s Length

There are several ways to retrieve the character count of a string in .NET: Length on the string, Length on a Span, or Enumerable.Count(). This article benchmarks each approach to show which is fastest.




Day 25: Refactoring AI Code: From Working to Maintainable


The feature worked perfectly. The code was a mess.

AI had generated 400 lines in a single file. Functions that did three things. Variables named data, result, temp. No comments explaining the non-obvious parts. It worked, but I dreaded having to modify it later.

This is the AI code paradox. The faster you ship, the more you accumulate code that’s hard to maintain. AI optimizes for “make it work,” not “make it understandable.”

But here’s the thing: AI is also good at refactoring. The same tool that created the mess can help clean it up. You just have to ask.

The Code Smell Detection Prompt

Start by identifying problems:

Analyze this code for maintainability issues.

Look for:
1. Functions doing too many things
2. Poor naming (unclear variables, generic names)
3. Code duplication
4. Deep nesting
5. Long functions (over 30 lines)
6. Missing error handling
7. Implicit behavior that should be explicit
8. Tight coupling between components
9. Missing abstractions
10. Inconsistent patterns

For each issue:
- Location (function/line)
- The problem
- Why it matters
- Suggested fix

Code:
[paste code]

The Incremental Refactoring Prompt

Don’t refactor everything at once. Go step by step:

Help me refactor this code incrementally.

Current code:
[paste code]

Goal: Make this more maintainable without changing behavior.

Step 1: What's the single most impactful refactoring?
- What to change
- Why it helps
- The refactored code

After each step, I'll verify tests still pass before the next step.

The Function Extraction Prompt

When functions do too much:

This function does too many things. Help me break it apart.

Function:
[paste the long function]

For this refactoring:
1. Identify distinct responsibilities
2. Extract each into a separate function
3. Name each function clearly
4. Keep the original function as an orchestrator

Constraints:
- Tests must still pass
- No behavior changes
- Each extracted function should be testable independently

A Real Refactoring Session

Here’s code AI generated for handling a card trade:

async function handleTrade(tradeId: string, action: string, userId: string) {
  const trade = await db.trade.findUnique({ where: { id: tradeId }, include: { offeredCards: true, requestedCards: true, fromUser: true, toUser: true } });

  if (!trade) throw new Error('Trade not found');

  if (action === 'accept') {
    if (trade.toUserId !== userId) throw new Error('Not authorized');
    if (trade.status !== 'pending') throw new Error('Trade not pending');

    for (const card of trade.offeredCards) {
      await db.card.update({ where: { id: card.id }, data: { ownerId: trade.toUserId } });
    }
    for (const card of trade.requestedCards) {
      await db.card.update({ where: { id: card.id }, data: { ownerId: trade.fromUserId } });
    }

    await db.trade.update({ where: { id: tradeId }, data: { status: 'completed', completedAt: new Date() } });

    await db.notification.create({ data: { userId: trade.fromUserId, type: 'trade_accepted', message: `Your trade was accepted by ${trade.toUser.username}` } });

    return { success: true, trade };
  } else if (action === 'reject') {
    if (trade.toUserId !== userId) throw new Error('Not authorized');
    if (trade.status !== 'pending') throw new Error('Trade not pending');

    await db.trade.update({ where: { id: tradeId }, data: { status: 'rejected' } });

    await db.notification.create({ data: { userId: trade.fromUserId, type: 'trade_rejected', message: `Your trade was rejected by ${trade.toUser.username}` } });

    return { success: true, trade };
  } else if (action === 'cancel') {
    if (trade.fromUserId !== userId) throw new Error('Not authorized');
    if (trade.status !== 'pending') throw new Error('Trade not pending');

    await db.trade.update({ where: { id: tradeId }, data: { status: 'cancelled' } });

    await db.notification.create({ data: { userId: trade.toUserId, type: 'trade_cancelled', message: `Trade was cancelled by ${trade.fromUser.username}` } });

    return { success: true, trade };
  }

  throw new Error('Invalid action');
}

After refactoring:

async function handleTrade(
  tradeId: string,
  action: TradeAction,
  userId: string
): Promise<TradeResult> {
  const trade = await findTradeWithDetails(tradeId);
  validateTradeAction(trade, action, userId);

  switch (action) {
    case 'accept':
      return acceptTrade(trade);
    case 'reject':
      return rejectTrade(trade);
    case 'cancel':
      return cancelTrade(trade);
  }
}

async function findTradeWithDetails(tradeId: string): Promise<TradeWithDetails> {
  const trade = await db.trade.findUnique({
    where: { id: tradeId },
    include: {
      offeredCards: true,
      requestedCards: true,
      fromUser: true,
      toUser: true
    }
  });

  if (!trade) {
    throw new NotFoundError('Trade not found');
  }

  return trade;
}

function validateTradeAction(
  trade: TradeWithDetails,
  action: TradeAction,
  userId: string
): void {
  if (trade.status !== 'pending') {
    throw new InvalidStateError('Trade is not pending');
  }

  const authorizedUser = action === 'cancel' ? trade.fromUserId : trade.toUserId;
  if (authorizedUser !== userId) {
    throw new ForbiddenError('Not authorized for this action');
  }
}

async function acceptTrade(trade: TradeWithDetails): Promise<TradeResult> {
  await transferCards(trade.offeredCards, trade.toUserId);
  await transferCards(trade.requestedCards, trade.fromUserId);

  const updated = await updateTradeStatus(trade.id, 'completed');

  await notifyUser(
    trade.fromUserId,
    'trade_accepted',
    `Your trade was accepted by ${trade.toUser.username}`
  );

  return { success: true, trade: updated };
}

async function transferCards(cards: Card[], newOwnerId: string): Promise<void> {
  await db.card.updateMany({
    where: { id: { in: cards.map(c => c.id) } },
    data: { ownerId: newOwnerId }
  });
}

// ... similar for rejectTrade and cancelTrade
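The refactored acceptTrade also leans on two small helpers the post doesn't show, updateTradeStatus and notifyUser. Here is one hypothetical sketch of what they might look like; the Db interface, the TradeRecord shape, and the explicit db parameter (used so the sketch is self-contained rather than closing over a module-level client) are assumptions of mine, not code from the article:

```typescript
// Hypothetical sketch of the persistence helpers the refactored code calls.
// `Db` stands in for a Prisma-style client; all names here are assumptions.
type TradeStatus = 'pending' | 'completed' | 'rejected' | 'cancelled';

interface TradeRecord {
  id: string;
  status: TradeStatus;
  completedAt?: Date;
}

interface Db {
  trade: {
    update(args: { where: { id: string }; data: Partial<TradeRecord> }): Promise<TradeRecord>;
  };
  notification: {
    create(args: { data: { userId: string; type: string; message: string } }): Promise<void>;
  };
}

async function updateTradeStatus(db: Db, tradeId: string, status: TradeStatus): Promise<TradeRecord> {
  return db.trade.update({
    where: { id: tradeId },
    // Completing a trade also stamps the completion time, matching the
    // original inline version; other statuses only update the status field.
    data: status === 'completed' ? { status, completedAt: new Date() } : { status },
  });
}

async function notifyUser(db: Db, userId: string, type: string, message: string): Promise<void> {
  await db.notification.create({ data: { userId, type, message } });
}
```

Keeping the completion-timestamp rule inside updateTradeStatus means accept/reject/cancel callers can't forget it, which is exactly the kind of isolation the refactoring is after.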

The refactored version is longer but:

  • Each function has one job
  • Functions are testable independently
  • Error types are specific
  • The flow is easy to follow
  • Changes are isolated

The Naming Improvement Prompt

When names are unclear:

Improve the naming in this code.

Code:
[paste code]

For each rename:
1. Current name
2. Suggested name
3. Why it's better

Naming rules to follow:
- Functions should be verbs (getUser, validateInput)
- Booleans should be questions (isValid, hasPermission)
- Collections should be plural (users, items)
- Avoid generic names (data, result, temp, info)
- Abbreviations should be obvious (id, url) or spelled out
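As a hypothetical before/after illustrating those rules (the function and variable names here are invented for the example, not taken from the post):

```typescript
// Before: generic names hide intent.
function proc(data: number[]): number[] {
  const result: number[] = [];
  for (const d of data) {
    if (d > 0) result.push(d * 2);
  }
  return result;
}

// After: verb for the function, plural for the collection,
// no generic names, and the condition reads as a question.
function doublePositiveScores(scores: number[]): number[] {
  const doubledScores: number[] = [];
  for (const score of scores) {
    const isPositive = score > 0;
    if (isPositive) doubledScores.push(score * 2);
  }
  return doubledScores;
}
```

Both versions behave identically; only the names changed, which is the point: a rename pass is the safest refactoring there is.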

The Duplication Elimination Prompt

When you see patterns repeated:

Find and eliminate duplication in this code.

Code:
[paste code]

For each duplication:
1. Where it appears
2. A shared abstraction
3. The refactored code

Don't over-abstract. Only extract if:
- The pattern appears 3+ times
- The extracted function has a clear name
- Future changes would need to happen in all places
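Applied to the trade handler from earlier, the three near-identical notification payloads are a textbook case. One hypothetical extraction might look like this (buildTradeNotification and TradeOutcome are my names, not the article's):

```typescript
// The original handler builds three near-identical notification payloads.
// Extracting the shared shape puts the message wording in exactly one place.
type TradeOutcome = 'accepted' | 'rejected' | 'cancelled';

function buildTradeNotification(
  outcome: TradeOutcome,
  actorUsername: string
): { type: string; message: string } {
  // 'cancelled' is worded differently in the original, so it stays a branch here.
  const message =
    outcome === 'cancelled'
      ? `Trade was cancelled by ${actorUsername}`
      : `Your trade was ${outcome} by ${actorUsername}`;
  return { type: `trade_${outcome}`, message };
}
```

This clears the "3+ occurrences, clear name, changes would need to happen everywhere" bar: reword one message and all three paths pick it up.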

The Complexity Reduction Prompt

When code is too nested:

Reduce the complexity of this code.

Code:
[paste code]

Techniques to apply:
- Early returns instead of deep nesting
- Guard clauses at function start
- Extract conditions into named booleans
- Replace conditionals with polymorphism where appropriate

Show the refactored code with complexity reduced.
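The first three techniques can be seen in a small hypothetical example (the order shape and function names are invented for illustration):

```typescript
interface Order {
  paid: boolean;
  items: string[];
  address?: string;
}

// Before: three levels of nesting to express one rule.
function canShipNested(order: Order): boolean {
  if (order.paid) {
    if (order.items.length > 0) {
      if (order.address) {
        return true;
      }
    }
  }
  return false;
}

// After: guard clauses exit early, and conditions get names.
function canShip(order: Order): boolean {
  if (!order.paid) return false;
  const hasItems = order.items.length > 0;
  if (!hasItems) return false;
  return Boolean(order.address);
}
```

The flat version reads top to bottom as a checklist of disqualifiers, which is how most people reason about shipping rules in the first place.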

The Consistency Prompt

When patterns are inconsistent:

Make this code consistent with our patterns.

Our patterns:
[describe your patterns or reference a file]

Code to update:
[paste code]

Check for:
- Error handling pattern consistency
- Naming convention consistency
- Structure consistency
- Response format consistency

Show the code updated to match our patterns.

The Comments and Documentation Prompt

When code needs explanation:

Add appropriate documentation to this code.

Code:
[paste code]

Add:
1. Function documentation (what it does, parameters, returns)
2. Comments for non-obvious logic (why, not what)
3. TODO comments for known issues
4. Type documentation where types are complex

Don't add:
- Comments that just repeat the code
- Comments for obvious things
- Excessive inline comments

Refactoring With Tests As Safety Net

Always have tests before refactoring:

I want to refactor this code. First, what tests should exist?

Code:
[paste code]

Generate tests that verify:
1. Happy path behavior
2. Error cases
3. Edge cases

These tests should pass before AND after refactoring.
Then show me the refactoring.
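As a sketch of what such a safety net could look like for the validateTradeAction function from the earlier session, here it is as plain assert-style checks with no test framework; the function and its error classes are inlined so the example stands alone, and expectThrows is a helper I made up for the sketch:

```typescript
// Characterization tests pin down current behavior before refactoring.
// validateTradeAction and its error classes are inlined from the example above.
class InvalidStateError extends Error {}
class ForbiddenError extends Error {}

type TradeAction = 'accept' | 'reject' | 'cancel';
interface TradeWithDetails { status: string; fromUserId: string; toUserId: string; }

function validateTradeAction(trade: TradeWithDetails, action: TradeAction, userId: string): void {
  if (trade.status !== 'pending') throw new InvalidStateError('Trade is not pending');
  const authorizedUser = action === 'cancel' ? trade.fromUserId : trade.toUserId;
  if (authorizedUser !== userId) throw new ForbiddenError('Not authorized for this action');
}

// Tiny helper: pass only if fn throws the expected error type.
function expectThrows(fn: () => void, errorType: Function): void {
  try { fn(); } catch (e) { if (e instanceof errorType) return; }
  throw new Error(`expected ${errorType.name} to be thrown`);
}

const trade = { status: 'pending', fromUserId: 'alice', toUserId: 'bob' };

validateTradeAction(trade, 'accept', 'bob'); // happy path: recipient may accept
expectThrows(() => validateTradeAction(trade, 'accept', 'alice'), ForbiddenError);  // sender can't accept
expectThrows(() => validateTradeAction(trade, 'cancel', 'bob'), ForbiddenError);    // recipient can't cancel
expectThrows(() => validateTradeAction({ ...trade, status: 'completed' }, 'accept', 'bob'), InvalidStateError);
```

If these pass before and after a refactoring, the authorization and state rules survived it.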

The Boy Scout Rule

Leave code better than you found it:

I'm working on this file. What small improvements can I make while I'm here?

File:
[paste code]

Current task: [what you're actually trying to do]

Suggest 2-3 small improvements that:
- Won't take more than 5 minutes each
- Don't change behavior
- Make future work easier
- Are in the same area I'm already touching

When NOT to Refactor

Refactoring isn’t always worth it:

  • Code that’s about to be deleted: Don’t polish what’s leaving
  • Code that never changes: If it works and you never touch it, leave it
  • During an incident: Fix first, refactor later
  • Without tests: Refactoring without tests is gambling

Ask AI:

Should I refactor this code, or leave it alone?

Code:
[paste code]

Consider:
- How often is this code modified?
- What's the cost of a bug here?
- Are there tests?
- Is this blocking other work?

Recommend: refactor now, refactor later, or leave alone.

Tomorrow

Refactoring one service is straightforward. But what about features that span multiple services? Frontend, backend, database, external APIs. Tomorrow I’ll show you how to coordinate AI work across multiple services.


Try This Today

  1. Find a piece of AI-generated code you’ve been avoiding
  2. Run the code smell detection prompt
  3. Pick one smell and fix it

You don’t have to fix everything. Start with the worst part. Make it a little better. Repeat next time you’re in the file.

Code quality is incremental. Perfect is the enemy of better.


The day of the second killing


Steven Garcia, as told to Gaby Del Valle:

I was in the middle of a frozen lake when I got the notification from the Minnesota Star Tribune that there had been a shooting. I was on assignment at a pond hockey event, and someone who was supposed to play later that evening said he probably wouldn't be able to make it - they knew there would be protests and demonstrations happening.

I arrived a little over three hours later. Federal officers had already cleared the scene - the FBI had been there investigating - so the only law enforcement present were state and local officials: the Minneapolis Police Department, their SWAT team, the Hennepin …

Read the full story at The Verge.
