Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The ‘AI is inevitable’ trap

1 Share

In the latest sign of AI silly season, Allbirds, the shoe company, told the world it was now an AI company and briefly managed to septuple its stock price. The Newbird AI story is really just one of a bunch of things this week that made us wonder: have we reached the peak of AI, or at least a peak of AI?

Verge subscribers, don't forget you get exclusive access to ad-free Vergecast wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.

On this episode of The Vergecast, we look at both the data and the vibes. David and Nilay explore a new study from Stanford that says AI is getting better at lots of things, a …

Read the full story at The Verge.

Read the whole story
alvinashcraft
48 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

SaySo is a new short-form video app that aims to restore users’ trust in news

Users are fed up with misinformation and AI slop cluttering their feeds. SaySo is a new short-form video app that delivers news from vetted creators and journalists.

Cloud Cost Optimization: Principles that still matter


This blog post is the second in a multi-part series called Cloud Cost Optimization. Throughout this series, we’ll share practical strategies, best practices, and actionable guidance to help you plan, design, and manage AI investments for sustainable value and efficiency.

Cloud cost optimization continues to be a top priority for organizations of every size. As cloud environments grow and workloads scale, leaders are under constant pressure to control spend, reduce waste, and ensure that resources are being used efficiently. What was once a secondary operational concern has become a strategic capability tied directly to business performance, resilience, and long‑term growth.

At the same time, the rapid growth of AI workloads is adding a new layer of complexity to managing cloud costs. AI‑powered workloads and evolving usage patterns are transforming how organizations approach cloud optimization and investment planning. However, these changes do not replace the need for strong cost optimization practices. Instead, they make cloud cost optimization and AI cost management more critical than ever.

This article provides a practical, evergreen overview of cloud cost optimization, how AI changes the cost landscape, and the principles organizations can apply to optimize cloud and AI workloads over time.

What is cloud cost optimization and why does it still matter?

Cloud cost optimization refers to the ongoing practice of analyzing cloud usage and making informed decisions to reduce unnecessary spend while maintaining performance, reliability, and scalability. It is not about cutting costs indiscriminately, but about ensuring that cloud resources are aligned to real workload demand and business value.

Unlike traditional IT environments, cloud platforms operate on consumption‑based pricing models. This means costs are directly tied to how resources are used, not just what is deployed. As a result, cost optimization is not a one‑time exercise. It requires continuous attention as environments evolve, workloads change, and new services are introduced.

Organizations that invest in cloud cost optimization benefit from:

  • Improved visibility into where cloud spend is going.
  • Reduced waste from underutilized or idle resources.
  • Better alignment between cloud usage and business needs.
  • Greater confidence when scaling workloads.

As cloud environments grow more complex (spanning multiple services, regions, and architectures), the importance of structured cloud cost management and optimization only increases. For organizations operating in the cloud, this makes cost optimization a foundational capability rather than an operational afterthought.

How AI workloads change traditional cost optimization

AI workloads introduce new cost dynamics that can challenge traditional cloud cost optimization approaches. While many principles still apply, the pace and variability of AI usage amplify the need for strong cost governance.

  1. AI consumption patterns are often less predictable. Training models, running inference, and experimenting with different architectures can cause rapid fluctuations in compute and storage usage. Costs may spike during experimentation, stabilize in production, or shift again as models evolve.
  2. AI development typically involves a higher degree of iteration. Teams may test multiple models, datasets, or configurations before settling on a production approach. Without strong visibility and controls, these experiments can quietly drive significant cloud costs and complicate efforts to optimize cloud costs effectively.
  3. AI workloads often rely on specialized infrastructure and services that increase cost sensitivity. As a result, maintaining visibility and control requires intentional AI cost optimization and disciplined cloud cost management practices.

This makes cloud cost optimization even more critical in AI‑powered environments, not optional.

Cloud cost optimization best practices for AI and modern workloads

While technologies change, many cloud cost optimization best practices remain consistent across traditional and AI workloads. The key is applying them continuously and adapting them to modern usage patterns.

Visibility and usage awareness

Effective cost optimization starts with understanding how resources are being consumed. Organizations need clear insight into usage patterns across environments, workloads, and services to identify inefficiencies and optimization opportunities. Visibility is the foundation of both cloud cost management and AI cost management.

Governance guardrails

Guardrails help prevent unnecessary spend before it occurs. These can include usage boundaries, policy‑driven controls, and standardized approaches that encourage efficient resource consumption without slowing innovation. Strong governance supports sustainable cost optimization as environments scale.

Rightsizing and lifecycle thinking

Workloads change over time. Resources that were appropriate during development may be inefficient in production, or vice versa. Rightsizing and lifecycle awareness help ensure resources match actual needs at every stage, which is essential to optimizing cloud costs over the long term.
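As an illustration of the rightsizing idea, the sketch below flags resources whose observed utilization sits well below capacity. This is a generic, hypothetical example: the field names and thresholds are made up for illustration and are not an Azure API or recommended values.

```python
# Generic rightsizing sketch: flag resources whose observed utilization
# stays well below capacity. Field names and thresholds are illustrative.
def rightsizing_candidates(resources, max_avg_cpu=0.20, max_peak_cpu=0.50):
    """Return names of resources that look oversized for their workload."""
    flagged = []
    for r in resources:
        if r["avg_cpu"] <= max_avg_cpu and r["peak_cpu"] <= max_peak_cpu:
            flagged.append(r["name"])
    return flagged

fleet = [
    {"name": "web-01", "avg_cpu": 0.65, "peak_cpu": 0.90},   # busy: keep as-is
    {"name": "batch-02", "avg_cpu": 0.08, "peak_cpu": 0.30}, # idle: candidate to shrink
]
```

In practice the same loop would run against real utilization metrics on a regular review cycle, which is exactly the "continuous review and iteration" principle described below.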

Continuous review and iteration

Cloud cost optimization is not static. Regular review cycles allow teams to adapt to changing usage patterns, new workloads, and evolving priorities, especially as AI solutions move from experimentation to scale.

These cloud cost optimization best practices apply whether organizations are optimizing traditional applications, data platforms, or AI workloads running at scale.

Cloud cost management versus cost optimization

Cloud cost management and cost optimization are closely related, but not the same.

Cloud cost management focuses on tracking, reporting, and understanding cloud spend. It answers questions like:

  • Where is money being spent?
  • How is usage trending over time?
  • Which workloads or services are driving costs?

Cloud cost optimization, on the other hand, is about action and decision‑making. It builds on cost management insights to determine:

  • Where inefficiencies exist.
  • What changes can reduce waste.
  • How to improve efficiency without compromising outcomes.

Organizations need both. Cloud cost management provides visibility, while cost optimization turns that visibility into informed decisions that improve efficiency, scalability, and resiliency (especially in AI‑heavy environments).

Measuring value alongside cloud cost optimization

Reducing cloud costs alone is rarely the goal. The real objective is ensuring that cloud and AI investments deliver sustainable value over time.

Effective cost optimization balances efficiency with outcomes. This means considering how resources contribute to workload performance, reliability, and long‑term viability (not just minimizing spend). For AI workloads, this balance is particularly important, as experimentation and innovation are essential but must still be managed responsibly.

By measuring efficiency and aligning cloud cost optimization and AI cost optimization efforts with workload value, organizations can avoid short‑term savings that undermine long‑term success. This value‑driven approach to managing cloud costs ensures optimization supports growth rather than constraining it.

Next steps for cloud cost optimization on Azure

Azure provides a broad set of resources designed to help organizations manage and optimize cloud and AI costs over time.

To explore guidance, best practices, and curated resources that support cost optimization across cloud and AI workloads, visit the solutions pages.

For deeper perspectives on related topics, you may also find these resources helpful.

Cost optimization is a continuous journey, one that becomes even more important as AI adoption accelerates. By applying durable principles and maintaining ongoing visibility and control, organizations can scale cloud and AI investments responsibly while maximizing long‑term value.

To go deeper, explore the Cloud Cost Optimization series for best practices and guidance on optimizing cloud and AI investments for long-term business impact.

Did you miss these posts in the Cloud Cost Optimization series?

The post Cloud Cost Optimization: Principles that still matter appeared first on Microsoft Azure Blog.


Optimize object storage costs automatically with smart tier—now generally available


We are excited to announce the general availability (GA) of smart tier for Azure Blob and Data Lake Storage. Smart tier is a fully managed, automated tiering capability that helps optimize storage costs without ongoing operational effort. By continuously optimizing data placement, smart tier keeps your storage costs aligned with actual usage.

As data estates expand and access patterns evolve, managing lifecycle rules at scale becomes complex. Customers need automated, continuous tiering to keep costs aligned with usage.

Smart tier continuously evaluates your data access patterns and automatically moves objects across the hot, cool, and cold tiers to keep your costs aligned with usage without manual configuration.

Since launching the public preview of smart tier at Ignite in November 2025, customers and partners have adopted it across a range of data estates, and over 50% of smart-tier–managed capacity has automatically shifted to cooler tiers based on actual access patterns:

We see a significant and measurable benefit from adopting smart tier in Azure Storage for our Azure Data Explorer (ADX) clusters. By intelligently placing data in the most cost‑effective tier based on actual usage patterns, smart tier allows us to optimize storage spend without sacrificing performance. Hot data remains instantly accessible for query workloads, while cooler, less frequently accessed data is automatically shifted to lower‑cost tiers. Smart tier effectively removed the guesswork from storage optimization, enabling us to focus on delivering insights rather than managing data placement.

Brad Watts, Principal PM for Azure Data Explorer

The Azure Blob and Data Lake Storage partner ecosystem is also integrating smart tier into their solutions:

Smart Tier represents a major step forward in simplifying how enterprises optimize storage in the cloud. The ability to automate tiering while maintaining resilience and predictable economics is highly complementary to Qumulo’s data services on Azure. Together with Microsoft, we’re enabling customers to modernize file workloads on Azure while reducing operational complexity and improving long‑term cost efficiency.

Brandon Whitelaw, SVP and Head of Product at Qumulo

Smart tier is generally available today in nearly all zonal public cloud regions, supporting both Azure Blob and Data Lake Storage.

How smart tier makes tiering decisions

Smart tier continuously evaluates the last access time of each individual object on the storage account where smart tier is enabled.

Frequently accessed data stays in the hot tier to support performance and transaction efficiency; inactive data transitions to the cool tier after 30 days and to the cold tier after an additional 60 days. When data is accessed again, it is immediately promoted back to hot and the tiering cycle restarts. This means your datasets remain in the most cost-effective tier automatically, removing the need to predict access patterns.

Read and write operations against an object (for example, Get Blob or Put Blob) restart the tiering cycle. Metadata operations, such as Get Blob Properties, do not affect transitions. These static tiering rules are part of the underlying service and ensure automatic optimization without manual maintenance.
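The static rules above can be sketched as a small simulation. This is hypothetical illustration code, not the service implementation; the 30/60-day thresholds, the access-time reset, and the 128 KiB floor for managed objects all come from this post.

```python
from datetime import datetime, timedelta

# Thresholds described in the post: hot -> cool after 30 days of
# inactivity, cool -> cold after a further 60 days. Objects under
# 128 KiB stay hot and are not managed (or charged) by smart tier.
HOT_TO_COOL_DAYS = 30
COOL_TO_COLD_DAYS = 60
MIN_MANAGED_SIZE = 128 * 1024  # 128 KiB

def smart_tier(last_access: datetime, size_bytes: int, now: datetime) -> str:
    """Return the tier an object would occupy under the described rules."""
    if size_bytes < MIN_MANAGED_SIZE:
        return "hot"  # small objects never tier down
    idle = now - last_access  # any read or write resets last_access
    if idle < timedelta(days=HOT_TO_COOL_DAYS):
        return "hot"
    if idle < timedelta(days=HOT_TO_COOL_DAYS + COOL_TO_COLD_DAYS):
        return "cool"
    return "cold"
```

A Get Blob or Put Blob would update `last_access` and so promote the object back to hot at the next evaluation; a metadata read like Get Blob Properties would not.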

Setting up smart tier

Enabling smart tier is straightforward and designed to minimize change management while delivering immediate cost-optimization benefits:

  1. During storage account creation, just select smart tier as the default access tier through the storage account configuration for any storage account with zonal redundancy. This is supported both via API and the Azure portal.
  2. Enable existing accounts with zonal redundancy by switching the blob access tier from default to smart through the same tooling.
  3. Let Azure optimize automatically: Objects inheriting the default tier are continuously managed without manual interventions needed.

Please note: Smart tier doesn’t support legacy account types such as Standard general-purpose v1 (GPv1) and does not apply to page or append blobs.

For objects managed by smart tier, you pay standard hot, cool, and cold capacity rates, without additional charges for tier transitions, early deletion, or data retrieval. Moving existing objects into smart tier does not incur tier-change fees; a monitoring fee covers the orchestration.

Over time, automated down-tiering of inactive data combined with smart tier’s simplified billing can translate into meaningful savings at scale.
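To make the savings mechanics concrete, here is a back-of-the-envelope sketch. The per-GB rates below are hypothetical placeholders for illustration only (real Azure Blob Storage prices vary by region and redundancy), and the per-object monitoring fee is deliberately left out since it depends on object count.

```python
# Hypothetical per-GB monthly capacity rates, for illustration only.
RATES = {"hot": 0.018, "cool": 0.010, "cold": 0.0036}

def monthly_capacity_cost(gb_by_tier: dict) -> float:
    """Capacity cost given how much data sits in each tier."""
    return sum(RATES[tier] * gb for tier, gb in gb_by_tier.items())

# A 100 TB estate where half the data has gone cold, roughly matching
# the preview observation that over 50% of managed capacity cooled down.
placed = {"hot": 30_000, "cool": 20_000, "cold": 50_000}
all_hot = {"hot": sum(placed.values())}
savings = monthly_capacity_cost(all_hot) - monthly_capacity_cost(placed)
```

Since smart tier charges no transition, retrieval, or early-deletion fees, the comparison really is just capacity rates minus the monitoring fee, which is what makes the billing easy to reason about.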

Best practices for maximizing smart tier value

  • After enabling smart tier at the account level, you can explicitly pin objects to other tiers if you don’t want smart tier to manage them. No monitoring fee applies to those objects.
  • Don’t exclude small objects. Objects less than 128 KiB stay in hot, don’t tier down, and don’t incur the monitoring fee. If an object later grows to equal to or greater than 128 KiB, smart tier policies apply automatically.
  • Common pitfall: Avoid trying to influence tiering behavior using lifecycle rules or other tier optimization mechanisms for smart tier–managed objects.

Based on patterns observed across multiple large smart tier preview deployments, customers commonly see the following outcomes after enabling smart tier.

Smart tier adoption for a large analytics workload

During public preview, a large data analytics customer enabled smart tier across hundreds of tebibytes of telemetry and log data with mixed and evolving access patterns.

Before enabling smart tier, the team relied on custom lifecycle rules that required frequent retuning as access patterns evolved and often led to unexpected cost spikes after re-access.

After enabling smart tier:

  • More than half of this customer’s managed data footprint automatically transitioned to cooler tiers based on actual usage patterns.
  • The team eliminated lifecycle policy management entirely, freeing engineering time.
  • Storage costs became more predictable and resilient to re-access spikes, since rehydration occurred automatically without retrieval or early deletion charges.

While savings vary by workload, this pattern reflects how smart tier helps align object storage costs with real usage.

Who should use smart tier?

Smart tier is well suited for organizations that:

  • Manage large or fast-growing object data estates.
  • Have mixed, evolving, or unpredictable access patterns.
  • Want to optimize costs without maintaining lifecycle rules.
  • Need data to remain online and immediately accessible, even when infrequently used.
  • Want safeguards against billing spikes caused by unplanned rehydration of cooler-tier datasets.

This includes analytics pipelines, data lakes, logs, telemetry, and application data where usage naturally changes over time.

Why enable smart tier now?

  • Reduce operational overhead: No lifecycle rules to design, test, or maintain.
  • Align costs with real usage: Data continuously moves to the most appropriate tier based on access patterns.
  • Preserve performance: Frequently accessed data remains hot; re‑access is automatic.
  • Simplify billing: No tier transition, early deletion, or retrieval charges within smart tier; a monthly monitoring fee applies to each object in scope.
  • Scale with confidence: Built for large, evolving data estates.

What’s next for smart tier?

Smart tier is designed as a foundational capability that will continue to evolve. Upcoming improvements focus on:

  • Broader regional availability, including additional public cloud regions as GA rollout progresses.
  • Client tooling support: Watch out for upcoming releases of our Storage SDKs and tooling supporting this new capability.

Get started with smart tier

Enable smart tier during storage account creation or update an existing zonal storage account by setting smart tier as the default access tier. Once enabled, Azure continuously optimizes data placement—no ongoing configuration required.

The post Optimize object storage costs automatically with smart tier—now generally available appeared first on Microsoft Azure Blog.


Issue 748


Comment

One community announcement that stood out to me this week was the Hummingbird project announcing their AI tool policy.

The topic of LLM-written pull requests on open-source projects has been escalating for a while now. Maintainers are being overwhelmed with AI slop to the point of shutting down bug bounties. AI agents are writing retaliatory blog posts when their pull requests get closed. Is either of these events the Archduke Franz Ferdinand moment of the first great human vs. machine war? 😬 Probably not, but it’s definitely something that’s causing friction in our communities.

We’re even being affected a little at the Swift Package Index, although not to the point of it being a huge issue, and not in the main repository. Instead, it seems that LLMs have baked into their knowledge the old method for submitting a package to the index, which we unfortunately no longer support. Currently, around 30% of package submissions come from agents, and they always use the incorrect method. It’s not a huge problem, but it’s more work, and it shows how far this issue stretches.

Having a policy in place is a great place to start, and I’m glad to see Hummingbird tackle the subject. This issue isn’t going to go away, and having a written policy helps prevent arguments.

The policy centres around the idea that LLM-generated PRs are acceptable, but only if a human was involved and understands the code. It also requires transparency around how much of a PR’s code was tool-generated, and that PR descriptions are written primarily by the contributor, not an LLM. Finally, and I really like this point, you can’t implement “good first issues” with a tool. Those have a special purpose, and are protected.

It’s so important that we encourage open-source participation, and in my opinion, it’s not reasonable or practical to completely ban these tools. They are remarkable when used well. I think this is a great initiative from the Hummingbird team, and if you run or help maintain an open source project, even a small one, you might want to think about what your policy is, and get it written down.

– Dave Verwer

Tools

FormatStyle Guide

If you’ve used Chris Eidhof’s SwiftUI Field Guide, you might wonder how it works. Did he really implement everything the site demonstrates in TypeScript? Yes, he actually did, but he took a different approach with his new project and the FormatStyle Guide is a Wasm project! The page takes a few seconds to load, but you can be sure the results are accurate because it’s running Swift behind the scenes. This is a great example of where a project genuinely benefits from running Swift via Wasm. 🚀


Luca: A Decentralized Tool and Skills Manager

Alberto De Bortoli writes about Luca, a “minimalistic, lightweight, decentralised tool and skill manager” that solves a genuine problem for teams: making sure everyone has the same versions of the tools needed to work on a project. From the announcement post:

It started as an internal tool to pin CLI binary versions per project. It now handles AI agent skills management too and has a growing ecosystem around it.

It looks like a promising tool, and if you’ve been reading Alberto’s blog for any length of time, you’ll know that solving this problem for teams is very much an area of interest for him.

Code

Package Traits in Xcode

Matt Massicotte writes about package traits, which have been in the language since 6.1 but which are now possible to set in Xcode projects as of Xcode 26.4. I like the idea of traits, but they do strike me as a source of potential bugs unless used carefully. They can potentially completely switch the implementation of an API based on the trait, so make sure your tests cover all possible configurations!


A ridiculously-lightweight push notification service

Shaun Donnelly:

So, I replaced it with a Cloudflare Worker that’s about 200 lines of TypeScript plus a few lines of Swift in the app. The whole thing costs nothing on the free tier.

Cloudflare Workers are great and push notifications are easier than you think. I would thoroughly recommend the solution Shaun came up with in this post as a great (and free!) venue for any small app that doesn’t require any other aspect of a custom back-end server.


Building List replacement in SwiftUI

Majid Jabrayilov:

Whenever you consider creating a scrollable screen in SwiftUI, you might think of using a List. However, it’s not always the best choice. Lists are great for displaying uniform data. For anything else, a ScrollView with a lazy stack is almost always the best option.

The SwiftUI List is great, but it’s quite inflexible once you start to stray too far from something like a list of emails in Mail.app. That doesn’t mean you need to build something from first principles every time, though.

Business and Marketing

Introducing Guild Ads

I really like smaller, “boutique”, ad networks like the one Tyler Hillsman launched this week. They tend to have higher quality advertisers and provide higher quality traffic. What makes Guild Ads a little different is the business model:

Every week, the network has a price. That price fluctuates with demand: if the network sells out, the price for the following week goes up; if it does not fill, the price goes down. Advertisers choose how much of the network they want to buy. When inventory is gone, it is gone.
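The adjustment loop described in that quote is simple enough to sketch. The direction of the price moves comes from the quote; the 10% step size is purely an assumption of mine, since the post doesn’t state one.

```python
# Demand-based weekly pricing: price rises when the network sells out,
# falls when inventory goes unsold. The 10% step is an assumed value.
def next_week_price(price: float, sold_out: bool, step: float = 0.10) -> float:
    """Return the following week's price given this week's outcome."""
    factor = 1 + step if sold_out else 1 - step
    return round(price * factor, 2)
```

Run over several weeks, this converges toward whatever price clears the inventory, which is presumably the point of the model.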

I like that model and I wish Tyler, and everyone who participates, luck with it!

And finally...

You know what would make 2026 the best WWDC ever? An official “Lil Finder Guy” scavenger hunt across Apple Park! 🕵️‍♂️


Android CLI


Android CLI

Google has announced Android CLI, a new official command line tool for Android development aimed at agent-based workflows outside Android Studio. The release is part of a broader set of agent tools that also includes Android Skills and the Android Knowledge Base. Google says these tools are meant to help agents work with official Android workflows, recommended patterns, and up-to-date guidance.

According to the announcement, Android CLI is designed to handle core terminal-based Android tasks such as environment setup, project creation, SDK management, emulator management, and app deployment. Google highlights commands such as android sdk install, android create, android emulator, android run, and android update as the foundation of the new workflow.

Google also says that in its internal experiments, Android CLI reduced LLM token usage for project and environment setup by more than 70 percent and completed tasks 3 times faster than agents using standard toolsets alone. That performance claim comes directly from Google’s own announcement, so it should be viewed as an internal benchmark rather than an independent measurement.

Google also launched the Android Knowledge Base. Google says it can be accessed through the android docs command and is intended to give agents fresher, authoritative context from Android developer docs, Firebase, Google Developers, and Kotlin documentation. That is an important detail for teams using coding agents, because it gives Android a more official path for grounding AI generated output in current platform guidance.

Official announcement:
https://android-developers.googleblog.com/2026/04/build-android-apps-3x-faster-using-any-agent.html
