Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Wall Street Has Stopped Rewarding 'Strategic' Layoffs

Goldman Sachs analysts have identified a notable shift in how investors respond to corporate layoff announcements, finding that even job cuts attributed to automation and AI-driven restructuring are now causing stock prices to fall rather than rise. The investment bank linked recent layoff announcements to public companies' earnings reports and stock market data, concluding that stocks dropped by an average of 2% following such announcements, and companies citing restructurings faced even harsher punishment. The traditional Wall Street playbook held that layoffs tied to strategic restructuring would boost stock prices, while cuts driven by declining sales would hurt them. That distinction appears to have collapsed. Goldman's analysts suggest investors simply don't believe what companies are saying -- firms announcing layoffs have experienced higher capex, debt and interest expense growth alongside lower profit growth compared to industry peers this year. The real driver, analysts suspect, may be cost reduction to offset rising interest expenses and declining profitability rather than any forward-looking efficiency play. Goldman expects layoffs to keep rising, motivated in part by companies' stated desire to use AI to reduce labor costs.

Read more of this story at Slashdot.


Microsoft says Windows 11 File Explorer will soon use less RAM when you search files


Windows 11 is testing an under-the-hood improvement for File Explorer that could reduce RAM usage when you're actively searching for files, such as images or Excel and PowerPoint documents. Microsoft is optimizing search performance in File Explorer, which has been causing high memory usage.

Microsoft is testing a more efficient File Explorer search in Windows 11 Build 26220.7523 or newer, but it's currently limited to Windows Insider machines. Once you have access to the feature, File Explorer will automatically remove duplicate file indexing operations, which means Windows will do less redundant work when you search in File Explorer.

“Made some improvements to File Explorer search performance by eliminating duplicate file indexing operations, which should result in faster searches and reduced system resource usage during file operations.”

File Explorer Search is not a separate index or engine; it's built on top of the Windows Search Indexer. While the indexer is designed to be 'smart,' duplicate file indexing operations can still happen, and in those cases Windows ends up scanning or processing the same files or folders more than once.

The Windows Search index will now avoid duplicate file operations, which should mean less disk I/O, fewer CPU cycles, and fewer background indexing tasks, reducing RAM usage as a result.
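
Microsoft hasn't shared implementation details, but the general idea of eliminating duplicate indexing operations can be sketched in a few lines. The following is a toy Python sketch under that assumption, not Microsoft's actual code; it collapses repeated index requests for the same path before any scanning happens.

```python
from pathlib import PureWindowsPath

class IndexQueue:
    """Toy sketch: collapse duplicate index requests before doing the work."""
    def __init__(self):
        self._pending = {}  # normalized path -> original request

    def request_index(self, path: str):
        # NTFS paths are case-insensitive, so normalize before comparing.
        key = str(PureWindowsPath(path)).casefold()
        if key in self._pending:
            return  # duplicate request: skip the redundant scan
        self._pending[key] = path

    def drain(self):
        for path in self._pending.values():
            print(f"indexing {path}")  # stand-in for the real scan work
        self._pending.clear()

q = IndexQueue()
q.request_index(r"C:\Users\me\Docs\report.xlsx")
q.request_index(r"C:\USERS\ME\Docs\report.xlsx")  # deduplicated
q.drain()  # the file is scanned once, not twice
```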

Context menu is being decluttered


Microsoft is also improving other parts of File Explorer, including the context menu, which has been the center of attention lately because of the cluttered mess it has become over the past few years.

In our tests, Windows Latest observed that Microsoft is moving options like “Compress to,” “Copy as path,” “Rotate right,” “Rotate left,” and “Set as desktop background” to a separate sub-menu called “Manage file.” On another PC, this sub-menu is called “Other actions,” which seems to suggest that Microsoft wants to dump all lesser-used options in this sub-menu.

All these improvements are being tested and will be rolled out in the last week of January or in February.

The post Microsoft says Windows 11 File Explorer will soon use less RAM when you search files appeared first on Windows Latest


BONUS Breaking Through The Organizational Immune System | Vasco Duarte


BONUS: Breaking Through The Organizational Immune System - Why Software-Native Organizations Are Still Rare With Vasco Duarte

In this BONUS episode, we explore the organizational barriers that prevent companies from becoming truly software-native. Despite having proof that agile, iterative approaches work at scale—from Spotify to Amazon to Etsy—most organizations still struggle to adopt these practices. We reveal the root cause behind this resistance and expose four critical barriers that form what we call "The Organizational Immune System." This isn't about resistance to change; it's about embedded structures, incentives, and mental models that actively reject beneficial transformation.

The Root Cause: Project Management as an Incompatible Mindset

"Project management as a mental model is fundamentally incompatible with software development. And will continue to be, because 'project management' as an art needs to support industries that are not software-native."

The fundamental problem isn't about tools or practices—it's about how we think about work itself. Project management operates on assumptions that simply don't hold true for software development. It assumes you can know the scope upfront, plan everything in advance, and execute according to that plan. But software is fundamentally different. A significant portion of the work only becomes visible once you start building. You discover that the "simple" feature requires refactoring three other systems. You learn that users actually need something different than what they asked for. This isn't poor planning—it's the nature of software. Project management treats discovery as failure ("we missed requirements"), while software-native thinking treats discovery as progress ("we learned something critical"). As Vasco points out in his NoEstimates work, what project management calls "scope creep" should really be labeled "value discovery" in software—because we're discovering more value to add.

Discovery vs. Execution: Why Software Needs Different Success Metrics

"Software hypotheses need to be tested in hours or days, not weeks, and certainly not months. You can't wait until the end of a 12-month project to find out your core assumption was wrong."

The timing mismatch between project management and software development creates fundamental problems. Project management optimizes for plan execution with feedback loops that are months or years long, with clear distinctions between teams doing requirements, design, building, and testing. But software needs to probe and validate assumptions in hours or days. Questions like "Will users actually use this feature?" or "Does this architecture handle the load?" can't wait for the end of a 12-month project. When we finally discover our core assumption was wrong, we need to fully replan—not just "change the plan." Software-native organizations optimize for learning speed, while project management optimizes for plan adherence. These are opposing and mutually exclusive definitions of success.

The Language Gap: Why Software Needs Its Own Vocabulary

"When you force software into project management language, you lose the ability to manage what actually matters. You end up tracking task completion while missing that you're building the wrong thing."

The vocabulary we use shapes how we think about problems and solutions. Project management talks about tasks, milestones, percent complete, resource allocation, and critical path. Software needs to talk about user value, technical debt, architectural runway, learning velocity, deployment frequency, and lead time. These aren't just different words—they represent fundamentally different ways of thinking about work. When organizations force software teams to speak in project management terms, they lose the ability to discuss and manage what actually creates value in software development.

The Scholarship Crisis: An Industry-Wide Knowledge Gap

"Agile software development represents the first worldwide trend in scholarship around software delivery. But most organizational investment still goes into project management scholarship and training."

There's extensive scholarship in IT, but almost none about delivery processes until recently. The agile movement represents the first major wave of people studying what actually works for building software, rather than adapting thinking from manufacturing or construction. Yet most organizational investment continues to flow into project management certifications like PMI and Prince2, and traditional MBA programs—all teaching an approach with fundamental problems when applied to software. This creates an industry-wide challenge: when CFOs, executives, and business partners all think in project management terms, they literally cannot understand why software needs to work differently. The mental model mismatch isn't just a team problem—it's affecting everyone in the organization and the broader industry.

Budget Cycles: The Project Funding Trap

"You commit to a scope at the start, when you know the least about what you need to build. The budget runs out exactly when you're starting to understand what users actually need."

Project thinking drives project funding: organizations approve a fixed budget (say $2M over 9 months) to deliver specific features. This seems rational and gives finance predictability, but it's completely misaligned with how software creates value. Teams commit to scope when they know the least about what needs building. The budget expires just when they're starting to understand what users actually need. When the "project" ends, the team disbands, taking all their accumulated knowledge with them. Next year, the cycle starts over with a new project, new team, and zero retained context. Meanwhile, the software itself needs continuous evolution, but the funding structure treats it as a series of temporary initiatives with hard stops.

The Alternative: Incremental Funding and Real-Time Signals

"Instead of approving $2M for 9 months, approve smaller increments—maybe $200K for 6 weeks. Then decide whether to continue based on what you've learned."

Software-native organizations fund teams working on products, not projects. This means incremental funding decisions based on learning rather than upfront commitments. Instead of detailed estimates that pretend to predict the future, they use lightweight signals from the NoEstimates approach to detect problems early: Are we delivering value regularly? Are we learning? Are users responding positively? These signals provide more useful information than any Gantt chart. Portfolio managers shift from being "task police" asking "are you on schedule?" to investment curators asking "are we seeing the value we expected? Should we invest more, pivot, or stop?" This mirrors how venture capital works—and software is inherently more like VC than construction. Amazon exemplifies this approach, giving teams continuous funding as long as they're delivering value and learning, with no arbitrary end date to the investment.
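
The arithmetic behind the incremental model fits in a few lines. Here is a minimal, hypothetical Python sketch using the episode's own figures; the go/no-go signal is a stand-in for whatever value metrics a team actually tracks.

```python
# Hypothetical comparison: one up-front commitment vs. incremental tranches.
UPFRONT = 2_000_000            # $2M approved for 9 months, decided up front
TRANCHE = 200_000              # $200K per 6-week increment

def incremental_spend(signals):
    """Fund tranche by tranche; stop as soon as the learning turns negative."""
    spent = 0
    for looks_good in signals:
        spent += TRANCHE
        if not looks_good:     # the increment taught us to stop or pivot
            break
    return spent

# If the core assumption turns out to be wrong after three increments:
at_risk = incremental_spend([True, True, False])
print(at_risk, "vs", UPFRONT)  # 600000 vs 2000000 exposed to the bad bet
```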

The Business/IT Separation: A Structural Disaster

"'The business' doesn't understand software—and often doesn't want to. They think in terms of features and deadlines, not capabilities and evolution."

Project thinking reinforces organizational separation: "the business" defines requirements, "IT" implements them, and project managers coordinate the handoff. This seems logical with clear specialization and defined responsibilities. But it creates a disaster. The business writes requirements documents without understanding what's technically possible or what users actually need. IT receives them, estimates, and builds—but the requirements are usually wrong. By the time IT delivers, the business need has changed, or the software works but doesn't solve the real problem. Sometimes worst of all, it works exactly as specified but nobody wants it. This isn't a communication problem—it's a structural problem created by project thinking.

Product Thinking: Starting with Behavior Change

"Instead of 'build a new reporting dashboard,' the goal is 'reduce time finance team spends preparing monthly reports from 40 hours to 4 hours.'"

Software-native organizations eliminate the business/IT separation by creating product teams focused on outcomes. Using approaches like Impact Mapping, they start with behavior change instead of features. The goal becomes a measurable change in business behavior or performance, not a list of requirements. Teams measure business outcomes, not task completion—tracking whether finance actually spends less time on reports. If the first version doesn't achieve that outcome, they iterate. The "requirement" isn't sacred; the outcome is. "Business" and "IT" collaborate on goals rather than handing off requirements. They're on the same team, working toward the same measurable outcome with no walls to throw things over. Spotify's squad model popularized this approach, with each squad including product managers, designers, and engineers all focused on the same part of the product, all owning the outcome together.

Risk Management Theater: The Appearance of Control

"Here's the real risk in software: delivering software that nobody wants, and having to maintain it forever."

Project thinking creates elaborate risk management processes—steering committees, gate reviews, sign-offs, extensive documentation, and governance frameworks. These create the appearance of managing risk and make everyone feel professional and in control. But paradoxically, the very practices meant to manage risk end up increasing the risk of catastrophic failure. This mirrors Chesterton's Fence paradox. The real risk in software isn't about following the plan—it's delivering software nobody wants and having to maintain it forever. Every line of code becomes a maintenance burden. If it's not delivering value, you're paying the cost forever or paying additional cost to remove it later. Traditional risk management theater doesn't protect against this at all. Gates and approvals just slow you down without validating whether users will actually use what you're building or whether the software creates business value.

Agile as Risk Management: Fast Learning Loops

"Software-native organizations don't see 'governance' and 'agility' as a tradeoff. Agility IS governance. Fast learning loops ARE how you manage risk."

Software-native organizations recognize that agile and product thinking ARE risk management. The fastest way to reduce risk is delivering quickly—getting software in front of real users in production with real data solving real problems, not in demos or staging environments. Teams validate expected value by measuring whether software achieves intended outcomes. Did finance really reduce their reporting time? Did users actually engage with the feature? When something isn't working, teams change it quickly. When it is working, they double down. Either way, they're managing risk through rapid learning. Eric Ries's Lean Startup methodology isn't just for startups—it's fundamentally a software-native management practice. Build-Measure-Learn isn't a nice-to-have; it's how you avoid the catastrophic risk of building the wrong thing.

The Risk Management Contrast: Theater vs. Reality

"Which approach actually manages risk? The second one validates assumptions quickly and cheaply. The first one maximizes your exposure to building the wrong thing."

The contrast between approaches is stark. Risk management theater involves six months of requirements gathering and design, multiple approval gates that claim to prevent risk but actually accumulate it, comprehensive test plans, and a big-bang launch after 12 months. Teams then discover users don't want it—and now they're maintaining unwanted software forever. The agile risk management approach takes two weeks to build a minimal viable feature, ships to a subset of users, measures actual behavior, learns it's not quite right, iterates in another two weeks, validates value before scaling, and only maintains software that's proven valuable. The second approach validates assumptions quickly and cheaply. The first maximizes exposure to building the wrong thing.

The Immune System in Action: How Barriers Reinforce Each Other

"When you try to 'implement agile' without addressing these structural barriers, the organization's immune system rejects it. Teams might adopt standups and sprints, but nothing fundamental changes."

These barriers work together as an immune system defending the status quo. It starts with the project management mindset—the fundamental belief that software is like construction, that we can plan it all upfront, that "done" is a meaningful state. That mindset creates funding models that allocate budgets to temporary projects instead of continuous products, organizational structures that separate "business" from "IT" and treat software as a cost center, and risk management theater that optimizes for appearing in control rather than actually learning. Each barrier reinforces the others. The funding model makes it hard to keep stable product teams. The business/IT separation makes it hard to validate value quickly. The risk theater slows down learning loops. The whole system resists change—even beneficial change—because each part depends on the others. This is why so many "agile transformations" fail: they treat the symptoms (team practices) without addressing the disease (organizational structures built on project thinking).

Breaking Free: Seeing the System Clearly

"Once you see the system clearly, you can transform it. You now know the root cause, how it manifests, and what the alternatives look like."

Understanding these barriers is empowering. It's not that people are stupid or resistant to change—organizations have structural barriers built on a fundamental mental model mismatch. But once you see the system clearly, transformation becomes possible. You now understand the root cause (project management mindset), how it manifests in your organization (funding models, business/IT separation, risk theater), and what the alternatives look like through real examples from companies successfully operating as software-native organizations. The path forward requires addressing the disease, not just the symptoms—transforming the fundamental structures and mental models that shape how your organization approaches software.


About Vasco Duarte

Vasco Duarte is a thought leader in the Agile space, co-founder of Agile Finland, and host of the Scrum Master Toolbox Podcast, which has over 10 million downloads. Author of NoEstimates: How To Measure Project Progress Without Estimating, Vasco is a sought-after speaker and consultant helping organizations embrace Agile practices to achieve business success.

You can link with Vasco Duarte on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20251225_XMAS_2025_Thu.mp3?dest-id=246429

How a long-lost yearbook revealed the origin of 'hella,' with Ben Zimmer


1145. In this bonus segment from October, I talk with Ben Zimmer about "hella" and how even yearbook messages can be digitized to help preserve the language record. Ben shares the full story of this slang term, and we also talk about the detective work that led to the OED using Run DMC's use of "drop" in “Spin Magazine” as a citation.

Ben Zimmer's website: Benzimmer.com

Ben Zimmer's social media: Bluesky, Facebook

Links to Get One Month Free of the Grammar Girl Patreon (different links for different levels)

🔗 Share your familect recording in Speakpipe or by leaving a voicemail at 833-214-GIRL (833-214-4475)

🔗 Watch my LinkedIn Learning writing courses.

🔗 Subscribe to the newsletter.

🔗 Take our advertising survey

🔗 Get the edited transcript.

🔗 Get Grammar Girl books

🔗 Join Grammarpalooza. Get ad-free and bonus episodes at Apple Podcasts or Subtext. Learn more about the difference.

| HOST: Mignon Fogarty

| Grammar Girl is part of the Quick and Dirty Tips podcast network.

  • Audio Engineer: Dan Feierabend
  • Director of Podcast: Holly Hutchings
  • Advertising Operations Specialist: Morgan Christianson
  • Marketing and Video: Nat Hoopes, Rebekah Sebastian

| Theme music by Catherine Rannus.

| Grammar Girl Social Media: YouTube, TikTok, Facebook, Threads, Instagram, LinkedIn, Mastodon, Bluesky.


Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.





Download audio: https://dts.podtrac.com/redirect.mp3/media.blubrry.com/grammargirl/stitcher.simplecastaudio.com/e7b2fc84-d82d-4b4d-980c-6414facd80c3/episodes/07375a21-e9d1-4636-8892-f17ee8b97f8a/audio/128/default.mp3?aid=rss_feed&awCollectionId=e7b2fc84-d82d-4b4d-980c-6414facd80c3&awEpisodeId=07375a21-e9d1-4636-8892-f17ee8b97f8a&feed=XcH2p3Ah

Space Geek Out 2025

Space Geek Out Time - 2025 Edition! Richard talks to Carl about the past year in space, starting with a reader comment about 3I/ATLAS, the interstellar comet passing through our solar system that has kicked off conspiracies about aliens coming to visit - hint, it's just a comet. Then, into another record-breaking year of spaceflight with a record number of Falcon 9 flights, Starship tests, United Launch Alliance underperforming, and New Glenn finally getting to orbit! The International Space Station has passed 25 years of continuous habitation and is only five years away from being sent to a watery grave. But there are new space stations in the works! Finally, the stories of landers on the Moon, trouble at Mars, and how silly the idea of building data centers in space really is. A fantastic year for space!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/69201179/dotnetrocks_1982_space_geek_out_2025.mp3

335: EKS Network Policies: Now With More Layers Than Your Security Team's Org Chart


Welcome to episode 335 of The Cloud Pod, where the forecast is always cloudy! This pre-Christmas week, Ryan and Justin have hit the studio to bring you the final show of 2025. We’ve got lots of AI images, EKS Network Policies, Gemini 3, and even some Disney drama. 

Let’s get into it! 

Titles we almost went with this week

  • From Roomba to Tomb-ba: How the Robot Vacuum Pioneer Got Cleaned Out
  • From Napkin Sketch to Production: Google’s App Design Center Goes GA
  • Terraform Gets a Canvas: Google Paints Infrastructure Design with AI
  • Mickey Mouse Takes Off the Gloves: Disney vs Google AI Showdown
  • From Data Silos to Data Solos: Google Conducts the Integration Orchestra
  • No More Thread Dread: AWS Brings AI to JVM Performance Troubleshooting
  • MCP: More Corporate Plumbing Than You Think
  • GPT-5.2 Beats Humans at Work Tasks, Still Can’t Get You Out of Monday Meetings
  • Kerberos More Like Kerbero-Less: Microsoft Axes Ancient Encryption Standard
  • OpenAI Teaches GPT-5.2 to PowerPoint: Death by Bullet Points Now AI-Generated
  • MCP: Like USB-C, But Everyone’s Keeping Theirs in the Drawer
  • Flash Gordon: Google’s Gemini 3 Gets a Speed Boost Without the Sacrifice
  • Tag, You’re It: AWS Finally Knows Who to Bill
  • Snowflake Gets a GPT-5.2 Upgrade: Now With More Intelligence Per Query
  • OpenAI and Snowflake: Making Data Warehouses Smarter Than Your Average Analyst
  • GPT-5.2 Moves Into the Snowflake: No Melting Required

AI Is Going Great, or How ML Makes Money 

01:06 Meta’s multibillion-dollar AI strategy overhaul creates culture clash:

  • Meta is developing a new frontier AI model, codenamed Avocado, to succeed Llama; it's now expected to launch in Q1 2026 after internal delays related to training performance testing. 
  • The model may be proprietary rather than open source, marking a significant shift from Meta’s previous strategy of freely distributing Llama’s weights and architecture to developers. We feel like this is an interesting choice for Meta, but what do we know? 
  • Meta spent 14.3 billion dollars in June 2025 to hire Scale AI founder Alexandr Wang as Chief AI Officer and acquire a stake in Scale, while raising 2026 capital expenditure guidance to 70-72 billion dollars. 
    • Wang now leads the elite TBD Lab developing Avocado, operating separately from traditional Meta teams and not using the company’s internal workplace network.
  • The company has restructured its AI leadership following the poor reception of Llama 4 in April, with Chief Product Officer Chris Cox no longer overseeing the GenAI unit. 
  • Meta cut 600 jobs in Meta Superintelligence Labs in October, contributing to the departure of Chief AI Scientist Yann LeCun to launch a startup, while implementing 70-hour workweeks across AI organizations.
  • Meta’s new AI leadership under Wang and former GitHub CEO Nat Friedman has introduced a “demo, don’t memo” development approach, replacing traditional multi-step approval processes with rapid prototyping using AI agents and newer tools. 
  • The company is also leveraging third-party cloud services from CoreWeave and Oracle while building the 27 billion dollar Hyperion data center in Louisiana.
  • Meta’s Vibes AI video product, launched in September, trails OpenAI’s Sora 2 in downloads, and was criticized for lacking features like realistic lip-synced audio, while the company increasingly relies on external AI models from Black Forest Labs and Midjourney rather than exclusively using internal technology.

02:23 Ryan – “I guess I really don’t understand the business of the AI models. I guess if you’re going to offer a chat service, you have to have a proprietary model, but it’s kind of strange.”

03:04 Disney says Google AI infringes copyright “on a massive scale” – Ars Technica

  • Disney has issued a cease and desist letter to Google alleging copyright infringement through its generative AI models, claiming Google trained its systems on Disney’s copyrighted content without authorization and now enables users to generate Disney-owned characters like those from The Lion King, Deadpool, and Star Wars. 
  • This represents one of the first major legal challenges from a content owner with substantial legal resources against a cloud AI provider.
  • The legal notice targets two specific violations: Google’s use of Disney’s copyrighted works in training data for its image and video generation models, and the distribution of Disney character reproductions to end users through AI-generated outputs. 
    • Disney demands the immediate cessation of using its content and the implementation of safeguards to prevent the future generation of Disney-owned intellectual property.
  • This case could establish important precedents for how cloud providers handle copyrighted training data and implement content filtering in AI services. 
  • The outcome may force cloud AI platforms to develop more sophisticated copyright detection systems or negotiate licensing agreements with content owners before deploying generative models.
  • Disney’s involvement brings considerable legal firepower to the AI copyright debate, as the company has historically shaped US copyright law through decades of litigation to protect its intellectual property. 
  • Cloud providers offering generative AI services may need to reassess their training data sources and output filtering mechanisms to avoid similar legal challenges from other major content owners.

04:06 Ryan – “Disney – suing for copyright infringement – shocking.” 

04:54 Disney invests $1 billion in OpenAI, licenses 200 characters for AI video app Sora – Ars Technica

  • Disney invests $1 billion in OpenAI and licenses over 200 characters from Disney, Marvel, Pixar, and Star Wars franchises for use in Sora video generator. 
  • This marks the first major Hollywood studio content licensing deal for OpenAI’s AI video platform, which launched in late September and faced industry criticism over copyright concerns.
  • The three-year licensing agreement allows Sora users to create short video clips featuring licensed Disney characters, representing a shift from OpenAI’s previous approach of training models on copyrighted material without permission. 
  • This deal is notable given Disney’s history of aggressive copyright protection and lobbying that shaped modern US copyright law in the 1990s.
  • OpenAI has been pursuing content licensing deals with major IP holders after facing multiple lawsuits over unauthorized use of copyrighted training data. 
  • The company previously argued that useful AI models cannot be created without copyrighted material, but has shifted strategy since becoming well-funded through investments.
  • The partnership aims to extend Disney’s storytelling reach through generative AI while addressing creator concerns about unauthorized use of intellectual property. 
  • Disney CEO Robert Iger emphasized the company’s commitment to respecting and protecting creators’ works while leveraging AI technology for content creation.
  • This deal could establish a precedent for how AI companies and content owners structure licensing agreements, potentially influencing how other studios and IP holders approach AI-generated content partnerships. 
  • The financial terms suggest significant value in controlled character licensing for AI applications.

06:26 Ryan – “Is it just a way to get out of the lawsuit so they can generate the content?” 

07:12 The new ChatGPT Images is here | OpenAI

  • OpenAI released GPT Image 1.5, their new flagship image generation model, now available in ChatGPT for all users and via API. 
  • The model generates images up to 4x faster than the previous version and includes a dedicated Images feature in the ChatGPT sidebar with preset filters and prompts for quick exploration.
  • The model delivers improved image editing capabilities with better preservation of original elements like lighting, composition, and people’s appearance across edits. 
  • It handles precise modifications, including adding, subtracting, combining, and blending elements while maintaining consistency, making it suitable for practical photo edits and creative transformations.
  • GPT Image 1.5 shows improvements in text rendering with support for denser and smaller text, better handling of multiple small faces, and more natural-looking outputs. 
  • The model follows instructions more reliably than the initial version, enabling more intricate compositions where relationships between elements are preserved as intended.
  • API pricing for GPT Image 1.5 is 20% cheaper than GPT Image 1 for both inputs and outputs, allowing developers to generate and iterate on more images within the same budget. 
  • The model is particularly useful for marketing teams, ecommerce product catalogs, and brand work requiring consistent logo and visual preservation across multiple edits.
  • The new ChatGPT Images model works across all ChatGPT models without requiring manual selection, while the earlier version remains available as a custom GPT. 
  • Business and Enterprise users will receive access to the new Images experience later, with the API version available now through OpenAI Playground. 

07:38 Justin – “It’s very competitive against Nano Banana, and I was looking at some of the charts, and it’s already jumped to the top of the charts.” 
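
For developers, access is through the standard Images API. Below is a minimal sketch with the OpenAI Python SDK; the model ID "gpt-image-1.5" is an assumption based on the announcement's naming, so check OpenAI's model list for the real identifier.

```python
# Minimal sketch using the OpenAI Python SDK (pip install openai).
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
result = client.images.generate(
    model="gpt-image-1.5",  # assumed ID; confirm against OpenAI's model list
    prompt="product photo of a ceramic mug, keep the printed logo unchanged",
    size="1024x1024",
)
# This model family returns base64-encoded image data.
with open("mug.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```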

08:52 Introducing GPT-5.2 | OpenAI

  • OpenAI has released GPT-5.2, now generally available in ChatGPT for paid users and via API as gpt-5.2, with three variants: Instant for everyday tasks, Thinking for complex work, and Pro for the highest-quality outputs. 
  • The model introduces native spreadsheet and presentation generation capabilities, with ChatGPT Enterprise users reporting 40-60 minutes saved daily on average.
  • GPT-5.2 Thinking achieves a 70.9% win rate against human experts on GDPval benchmark spanning 44 occupations and sets new records on SWE-Bench Pro at 55.6% (80% on SWE-bench Verified). 
  • The model demonstrates 11x faster output generation and less than 1% the cost of expert professionals on knowledge work tasks, though human oversight remains necessary.
  • Long-context performance reaches near 100% accuracy on the 4-needle MRCR variant up to 256k tokens, with a new Responses compact endpoint extending the effective context window for tool-heavy workflows. Vision capabilities show roughly 50% error reduction on chart reasoning and interface understanding compared to GPT-5.1.
  • API pricing is set at $1.75 per million input tokens and $14 per million output tokens, with a 90% discount on cached inputs. 
  • OpenAI reports that despite higher per-token costs, GPT-5.2 achieves a lower total cost for given quality levels due to improved token efficiency. 
  • The company has no current plans to deprecate GPT-5.1, GPT-5, or GPT-4.1.
  • The model introduces improved safety features, including strengthened responses for mental health and self-harm scenarios, plus a gradual rollout of age prediction for content protections. 
  • GPT-5.2 was built on NVIDIA H100, H200, and GB200-NVL72 GPUs in Microsoft Azure data centers, with a Codex-optimized version planned for the coming weeks.

10:06 Ryan – “I’m happy to see the improved safety features because that’s come up in the news recently and had some high-profile events happen, where it’s become a concern, for sure. So I want to see more protection in that space from all the providers.” 
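
Given the quoted prices, per-request cost is simple arithmetic. A quick sketch using only the figures above; the token counts in the example are made up:

```python
# Back-of-the-envelope GPT-5.2 API cost, using the prices quoted above.
PRICE_IN, PRICE_OUT = 1.75, 14.00   # $ per million tokens
CACHED_DISCOUNT = 0.90              # 90% off cached input tokens

def cost(input_tok, output_tok, cached_tok=0):
    fresh = input_tok - cached_tok
    return (fresh * PRICE_IN
            + cached_tok * PRICE_IN * (1 - CACHED_DISCOUNT)
            + output_tok * PRICE_OUT) / 1_000_000

# e.g. a 50K-token prompt (40K of it cached) producing a 5K-token answer:
print(f"${cost(50_000, 5_000, cached_tok=40_000):.4f}")  # ≈ $0.0945
```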

Cloud Tools

10:58 Cedar Joins CNCF as a Sandbox Project | AWS Open Source Blog

  • Cedar is an open source authorization policy language that just joined CNCF as a Sandbox project, solving the problem of hard-coded access control by letting developers define fine-grained permissions as policies separate from application code. 
    • It supports RBAC, ABAC, and ReBAC models with fast real-time evaluation.
  • The language stands out for its formal verification using the Lean theorem prover and differential random testing against its specification, providing mathematical guarantees for security-critical authorization logic. This rigor addresses the growing complexity of cloud-native authorization, where traditional ad-hoc systems fall short.
  • Production adoption is already strong with users including Cloudflare, MongoDB, AWS Bedrock, and Kubernetes integrations like kubernetes-cedar-authorizer
  • The CNCF move provides vendor-neutral governance and broader community access beyond AWS stewardship.
  • Cedar offers an interactive policy playground and Rust SDK for developers to test authorization logic before deployment. 
  • The analyzability features enable automated policy optimization and verification, reducing the risk of misconfigured permissions in production.
  • The CNCF acceptance fills a gap in the cloud-native landscape for a foundation-backed authorization standard, complementing existing projects and potentially becoming the go-to solution as it progresses from Sandbox to Incubation status.

12:05 Ryan – “I think this kind of policy is going to be absolutely key to managing permissions going forward.” 
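
Cedar policies are written in Cedar's own DSL rather than application code. The entity and action names below aren't from any real schema, so treat this as an illustrative sketch; it's embedded in a Python string only to keep one language across these examples.

```python
# A minimal Cedar policy as a string. Cedar's engine answers allow/deny for
# (principal, action, resource, context) requests against policies like
# this one; Group, Action, and the owner attribute here are made up.
CEDAR_POLICY = """
permit(
    principal in Group::"engineering",
    action == Action::"viewDocument",
    resource
)
when { resource.owner == principal };
"""
print(CEDAR_POLICY)  # evaluation happens via Cedar's Rust core or its SDKs
```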

AWS

12:50 GuardDuty Extended Threat Detection uncovers a cryptomining campaign on Amazon EC2 and Amazon ECS | AWS Security Blog

  • GuardDuty Extended Threat Detection identified a coordinated cryptomining campaign starting November 2, 2025, where attackers used compromised IAM credentials to deploy miners across EC2 and ECS within 10 minutes of initial access. 
  • The new AttackSequence: EC2/CompromisedInstanceGroup finding correlated signals across multiple data sources to detect the sophisticated attack pattern, demonstrating how Extended Threat Detection capabilities launched at re:Invent 2025 can identify coordinated campaigns.
  • The attackers employed a novel persistence technique using ModifyInstanceAttribute to disable API termination on all launched instances, forcing victims to manually re-enable termination before cleanup and disrupting automated remediation workflows. 
  • They also created public Lambda endpoints without authentication and established backdoor IAM users with SES permissions, showing advancement in cryptomining persistence methodologies beyond typical mining operations.
  • The campaign targeted high-value GPU and ML instances (g4dn, g5, p3, p4d) through auto scaling groups configured to scale from 20 to 999 instances, with attackers first using DryRun flags to validate permissions without triggering costs. The malicious Docker Hub image yenik65958/secret accumulated over 100,000 pulls before takedown, and attackers created up to 50 ECS clusters per account with Fargate tasks configured for maximum CPU allocation of 16,384 units.
  • AWS recommends enabling GuardDuty Runtime Monitoring alongside the foundational protection plan for comprehensive coverage, as Runtime Monitoring provides host-level signals critical for Extended Threat Detection correlation and detects crypto mining execution through Impact:Runtime/CryptoMinerExecuted findings. 
  • Organizations should implement SCPs to deny Lambda URL creation with an AuthType of NONE and monitor CloudTrail for unusual DryRun API patterns as early warning indicators.
  • The attack demonstrates the importance of temporary credentials over long-term access keys, MFA enforcement, and least privilege IAM policies, as the compromise exploited valid credentials rather than AWS service vulnerabilities. GuardDuty’s multilayered detection using threat intelligence, anomaly detection, and Extended Threat Detection successfully identified all attack stages from initial access through persistence.

55:31 Justin – “Hackers have the same tools we do for development.” 
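
The SCP recommendation is concrete enough to sketch. The lambda:FunctionUrlAuthType condition key is documented by AWS; the statement ID is arbitrary, and printing the JSON is just for illustration.

```python
import json

# Sketch of an SCP denying creation/update of Lambda function URLs with no
# authentication. The policy would be attached via AWS Organizations.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnauthenticatedFunctionUrls",
        "Effect": "Deny",
        "Action": [
            "lambda:CreateFunctionUrlConfig",
            "lambda:UpdateFunctionUrlConfig",
        ],
        "Resource": "*",
        "Condition": {"StringEquals": {"lambda:FunctionUrlAuthType": "NONE"}},
    }],
}
print(json.dumps(scp, indent=2))
```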

16:17 Amazon EKS introduces enhanced network policy capabilities | Containers

  • Amazon EKS now supports Admin Network Policies and Application Network Policies, giving cluster administrators centralized control over network security across all namespaces while allowing namespace administrators to filter outbound traffic using domain names instead of maintaining IP address lists. 
  • This addresses a key limitation of standard Kubernetes Network Policies, which only work within individual namespaces and lack explicit deny rules or policy hierarchies.
  • The new Admin Network Policies operate in two tiers: Admin Tier rules that cannot be overridden by developers, and Baseline Tier rules that provide default connectivity but can be overridden by standard Network Policies. 
  • This enables platform teams to enforce cluster-wide security requirements like isolating sensitive workloads or ensuring monitoring access while still giving application teams flexibility within those boundaries.
  • Application Network Policies, exclusive to EKS Auto Mode clusters, add Layer 7 FQDN-based filtering to traditional Layer 3/4 network policies, solving the problem of managing egress to external services with frequently changing IP addresses. Instead of maintaining IP lists for SaaS providers or on-premises resources behind load balancers, teams can simply whitelist domain names like internal-api.company.com, and policies remain valid even when underlying IPs change.
  • Requirements include Kubernetes 1.29 or later, Amazon VPC CNI plugin v1.21.0 for standard EKS clusters, and EKS Auto Mode for Application Network Policies with DNS filtering. 
  • The feature is available now for new clusters, with support for existing clusters coming in the following weeks, though pricing remains unchanged, as this is a native capability of the VPC CNI plugin.

17:30 Ryan – “This is one of those things that’s showing a maturity level of container-driven applications. It’s been a while since security teams have been aware of some of the things you can do with network policies and routing, and so you want to empower your developers, but also being able to have a comprehensive way to ban and approve has been missing from a lot of these ingress controllers. So this is a great thing for security teams, and probably terrible for developers.” 
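
Admin Network Policies implement the upstream Kubernetes network-policy-api, so a manifest roughly like the sketch below applies. Field names follow that project's v1alpha1 types and the labels are invented; check the EKS docs for the exact schema.

```python
# Sketch of an Admin Network Policy manifest, based on the upstream
# Kubernetes network-policy-api (v1alpha1). Labels and names are illustrative.
import yaml  # pip install pyyaml

admin_policy = {
    "apiVersion": "policy.networking.k8s.io/v1alpha1",
    "kind": "AdminNetworkPolicy",
    "metadata": {"name": "isolate-sensitive"},
    "spec": {
        "priority": 10,  # lower number = higher precedence
        "subject": {"namespaces": {"matchLabels": {"tier": "sensitive"}}},
        "ingress": [{
            "name": "allow-monitoring",
            "action": "Allow",  # Admin tier: developers can't override this
            "from": [{"namespaces": {"matchLabels": {"role": "monitoring"}}}],
        }],
    },
}
print(yaml.safe_dump(admin_policy, sort_keys=False))  # kubectl apply -f -
```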

19:12 Automate java performance troubleshooting with AI-Powered thread dump analysis on Amazon ECS and EKS | Containers

  • AWS has released an automated Java thread dump analysis solution that combines Prometheus monitoring, Grafana alerting, Lambda orchestration, and Amazon Bedrock AI to diagnose JVM performance issues in seconds rather than hours. 
  • The system works across both ECS and EKS environments, automatically detecting high thread counts and generating actionable insights without requiring deep JVM expertise from operations teams.
  • The solution uses Spring Boot Actuator endpoints for ECS deployments and Kubernetes API commands for EKS to capture thread dumps when Grafana alerts trigger. 
  • Amazon Bedrock then analyzes the dumps to identify deadlocks, performance bottlenecks, and thread states while providing structured recommendations across six key areas, including executive summary and optimization guidance.
  • Deployment is handled through CloudFormation templates available in the Java on AWS Immersion Day Workshop, with all thread dumps and AI analysis reports automatically stored in S3 for historical trending. 
  • The architecture follows event-driven principles with modular components that can be extended to other diagnostic tools like heap dump analysis or automated remediation workflows.
  • The system enriches JVM metrics with contextual tags, including cluster identification and container metadata, enabling the Lambda function to determine the appropriate thread dump collection method. This metadata-driven approach allows a single solution to handle heterogeneous container environments without manual configuration for each deployment type.
  • Pricing follows standard AWS service costs for Lambda invocations, Bedrock LLM usage per token, S3 storage, and CloudWatch metrics, with no additional licensing fees for the open source monitoring components. 
  • The solution addresses the common problem where only a handful of engineers on most teams can interpret thread dumps, democratizing JVM troubleshooting across operations teams.

20:55 Justin – “This tells me that if you have a bad container that crashes a lot, you could spend a lot of money on LLM usage for tokens analyzing your exact same crash dump every time. Do keep that in mind.” 
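
The blog post ships full CloudFormation templates, but the heart of the flow fits in one Lambda-style function. A stripped-down sketch follows; the actuator URL, Bedrock model ID, and bucket name are placeholders, not values from the post.

```python
# Core step of the pipeline: fetch a thread dump from a Spring Boot Actuator
# endpoint, ask a Bedrock model to analyze it, archive both in S3.
import boto3
import urllib.request

ACTUATOR_URL = "http://app.internal:8080/actuator/threaddump"  # placeholder
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"         # placeholder
BUCKET = "thread-dump-analysis"                                # placeholder

def handler(event, context):
    dump = urllib.request.urlopen(ACTUATOR_URL, timeout=10).read().decode()
    bedrock = boto3.client("bedrock-runtime")
    resp = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{
            "text": "Identify deadlocks and bottlenecks in this JVM "
                    f"thread dump and suggest fixes:\n{dump[:100_000]}"
        }]}],
    )
    analysis = resp["output"]["message"]["content"][0]["text"]
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key="dumps/latest.txt", Body=dump)
    s3.put_object(Bucket=BUCKET, Key="analysis/latest.txt", Body=analysis)
    return {"summary": analysis[:500]}
```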

22:50 EC2 Auto Scaling now offers a synchronous API to launch instances inside an Auto Scaling group

  • EC2 Auto Scaling introduces a new LaunchInstances API that provides synchronous feedback when launching instances, allowing customers to immediately know if capacity is available in their specified Availability Zone or subnet. 
  • This addresses scenarios where customers need precise control over instance placement and real-time confirmation of scaling operations rather than waiting for asynchronous results.
  • The API enables customers to override default Auto Scaling group configurations by specifying exact Availability Zones and subnets for new instances, while still maintaining the benefits of automated fleet management like health checks and scaling policies. Optional asynchronous retries are included to help reach the desired capacity if initial synchronous attempts fail.
  • This feature is particularly useful for workloads that require strict placement requirements or need to implement fallback strategies quickly when capacity constraints occur in specific zones. Customers can now build more sophisticated scaling logic that responds immediately to capacity availability rather than discovering issues after the fact.
  • Available immediately in all AWS Regions and GovCloud at no additional cost beyond standard EC2 and EBS charges. Customers can access the feature through AWS CLI and SDKs, with documentation available at https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-instances-synchronously.

23:47 Ryan – “I find that the things that it’s allowing you to tune – it’s the things that I moved to autoscaling for; I don’t want to deal with any of this nonsense. And so you still have to maintain your own orchestration, which understands which zone that you need to roll out to, because it’s going to have to call that API.” 
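
A zone-fallback loop is the obvious use for the synchronous call. The sketch below assumes boto3 exposes the API as launch_instances with parameters mirroring the API name; both are inferences from the announcement, so verify against the SDK docs before relying on them.

```python
# Sketch of zone fallback with the new synchronous LaunchInstances API.
# Method name and parameters are ASSUMED from the API name, not confirmed.
import boto3

asg = boto3.client("autoscaling")

def launch_with_fallback(group, zones, count=1):
    for az in zones:
        try:
            resp = asg.launch_instances(      # assumed boto3 mapping
                AutoScalingGroupName=group,
                AvailabilityZone=az,
                InstanceCount=count,
            )
            return az, resp                   # synchronous success signal
        except Exception as err:              # e.g. insufficient capacity
            print(f"{az}: {err}; trying next zone")
    raise RuntimeError("no capacity in any requested zone")
```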

24:28 Announcing cost allocation using users’ attributes

  • AWS now enables cost allocation based on workforce user attributes like cost center, division, and department imported from IAM Identity Center
  • This allows organizations to automatically tag per-user subscription and on-demand fees for services like Amazon Q Business, Q Developer, and QuickSight with organizational metadata for chargeback purposes.
  • The feature addresses a common FinOps challenge where companies struggle to attribute SaaS-style AWS application costs back to specific business units. Once user attributes are imported to IAM Identity Center and enabled as cost allocation tags in the Billing Console, usage automatically flows to Cost Explorer and CUR 2.0 with the appropriate organizational tags attached.
  • This capability is particularly relevant for enterprises deploying Amazon Q Business or QuickSight at scale, where individual user subscriptions can quickly add up across departments. Instead of manually tracking which users belong to which cost centers, the system automatically associates costs based on existing identity data.
  • The feature is generally available in all commercial AWS regions except GovCloud and China regions. 
  • No additional pricing is mentioned beyond the standard costs of the underlying AWS applications being tracked.

25:26 Justin – “There’s lots of use cases; this gets interesting real quickly. It’s a really nice feature that I’m really happy about.”  

GCP

26:34 Introducing Gemini 3 Flash: Benchmarks, global availability

  • Google launches Gemini 3 Flash in general availability, positioning it as a frontier intelligence model optimized for speed at reduced cost. 
  • The model processes over 1 trillion tokens daily through Google’s API and replaces Gemini 2.5 Flash as the default model in the Gemini app globally at no cost to users.
  • Gemini 3 Flash achieves strong benchmark performance with 90.4% on GPQA Diamond and 81.2% on MMMU Pro while running 3x faster than Gemini 2.5 Pro and using 30% fewer tokens on average for typical tasks. 
  • Pricing is set at $0.50 per million input tokens and $3 per million output tokens, with audio input at $1 per million tokens.
  • The model demonstrates strong coding capabilities with a 78% score on SWE-bench Verified, outperforming both the 2.5 series and Gemini 3 Pro. This makes it suitable for agentic workflows, production systems, and interactive applications requiring both speed and reasoning depth.
  • Gemini 3 Flash is available through multiple channels, including Google AI Studio, Vertex AI, Gemini Enterprise, Google Antigravity platform, Gemini CLI, and Android Studio
  • The model is also rolling out as the default for AI Mode in Search globally, combining real-time information retrieval with multimodal reasoning capabilities.
  • Early enterprise adopters, including JetBrains, Bridgewater Associates, and Figma, are using the model for applications ranging from video analysis and data extraction to visual Q&A and in-game assistance. 
  • The multimodal capabilities support real-time analysis of images, video, and audio content for actionable insights.

27:01 Justin – “This, just in general, is a pretty big improvement from not only the cost perspective, but also the overall performance, and the ability to run this on local devices, for like Android phones, is gonna be a huge breakthrough in LM performance on the device. So I suspect you’ll see a lot of Gemini 3 flash getting rolled out all over the place because it does a lot of things really darn well.”
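
Calling it from the API side is a few lines with the google-genai SDK; the model ID "gemini-3-flash" is inferred from the announcement, so confirm the exact identifier in Google's model list.

```python
# Minimal call via the google-genai SDK (pip install google-genai).
from google import genai

client = genai.Client()  # uses GEMINI_API_KEY / GOOGLE_API_KEY from the env
resp = client.models.generate_content(
    model="gemini-3-flash",  # assumed ID; confirm in Google's model list
    contents="Summarize this stack trace and suggest a likely root cause: ...",
)
print(resp.text)
```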

28:16 Connect Google Antigravity IDE to Google’s Data Cloud services | Google Cloud Blog

  • Google has integrated Model Context Protocol servers into its new Antigravity IDE, allowing AI agents to directly connect to Google Cloud data services, including AlloyDB, BigQuery, Spanner, Cloud SQL, and Looker
  • The MCP Toolbox for Databases provides pre-built connectors that eliminate manual configuration, letting developers access enterprise data through a UI-driven setup process within the IDE.
  • The integration enables AI agents to perform database administration tasks, generate SQL code, and run queries without switching between tools. 
  • For AlloyDB and Cloud SQL, agents can explore schemas, develop queries, and optimize performance using tools like list_tables, execute_sql, and get_query_plan directly in the development environment.
  • BigQuery and Looker connections extend agent capabilities into analytics and business intelligence workflows. 
  • Agents can forecast trends, search data catalogs, validate metric definitions against semantic models, and run ad-hoc queries to ensure application logic matches production reporting standards.
  • The MCP servers use IAM credentials or secure password storage to maintain security while giving agents access to production data sources. This approach positions Antigravity as a data-aware development environment where AI assistance is grounded in actual enterprise data rather than abstract reasoning alone.
  • The feature is available now through the Antigravity MCP Store with documentation at cloud.google.com/alloydb/docs and the open-source MCP Toolbox on GitHub at googleapis/genai-toolbox. 
  • No specific pricing information was provided for the MCP integration itself, though standard data service costs for AlloyDB, BigQuery, and other connected services apply.

29:15 Announcing official MCP support for Google services | Google Cloud Blog

  • Google now offers fully-managed, remote Model Context Protocol (MCP) servers for its services, eliminating the need for developers to deploy and maintain individual local MCP servers. 
  • This provides a unified, enterprise-ready endpoint for connecting AI agents to Google and Google Cloud services with built-in IAM, audit logging, and Model Armor security.
  • Initial MCP support launches for four key services: Google Maps Platform for location grounding, BigQuery for querying enterprise data in-place, Compute Engine for infrastructure management, and GKE for container operations. Additional services, including Cloud Run, Cloud Storage, AlloyDB, Spanner, and SecOps, will receive MCP support in the coming months.
  • Apigee integration allows enterprises to expose their own custom APIs and third-party APIs as discoverable tools for AI agents, extending MCP capabilities beyond Google services to the broader enterprise stack. 
  • Organizations can use Cloud API Registry and Apigee API Hub to discover and govern available MCP tools across their environment.
  • The implementation enables agents to perform complex multi-step workflows like analyzing BigQuery sales data for revenue forecasting while simultaneously querying Google Maps for location intelligence, all through standardized MCP interfaces. 
  • This approach keeps data in place rather than moving it into context windows, reducing security risks and latency.

30:34 MCP support for Apigee | Google Cloud Blog

  • Apigee now supports Model Context Protocol (MCP), allowing organizations to expose their existing APIs as tools for AI agents without writing code or managing MCP servers. Google handles the infrastructure, transcoding, and protocol management while Apigee applies its 30+ built-in policies for authentication, authorization, and security to govern agentic interactions.
  • The implementation automatically registers deployed MCP proxies in Apigee API hub as searchable MCP APIs, enabling centralized tool catalogs and granular access controls through API products
  • Organizations can apply quota policies and identity controls to restrict which agents and clients can access specific MCP tools, with full visibility through Apigee Analytics and the new API Insights feature.
  • Integration with Google’s Agent Development Kit (ADK) provides streamlined access to Apigee MCP endpoints for developers building custom agents, with an ApigeeLLM wrapper available for routing LLM calls through Apigee proxies. 
  • The feature works with multiple agent frameworks, including LangGraph, though ADK users get optimized tooling for the Google ecosystem, including Vertex AI Agent Engine and Gemini Enterprise deployment options.
  • Security capabilities extend beyond standard API protection to include Cloud Data Loss Prevention for sensitive data classification and Model Armor for defending against prompt injection attacks. 
  • The feature is currently in preview with select customers, requiring contact with Apigee or Google Cloud account teams for access, with no pricing information disclosed yet.

31:07 Ryan – “I just did some real-time analysis about the features of the MCP and then also the browser and stuff. It’s one of those things where it is the newer model of coding, where you’re having distributed agents do tasks, and that, so the new IDEs are taking advantage of that… And it is a VS Code fork. So it’s very comfortable to your VS Code users.”

32:05 Application Design Center now GA | Google Cloud Blog

  • Google’s Application Design Center reaches general availability as a visual, AI-powered platform for designing and deploying Terraform-backed application infrastructure on GCP. 
  • The service integrates with Gemini Cloud Assist to let users describe infrastructure needs in natural language and receive deployable architecture diagrams with Terraform code, while automatically registering applications with App Hub for unified management.
  • The platform addresses platform engineering needs by providing a curated catalog of opinionated application templates, including specialized GKE templates for AI inference workloads using various LLMs. 
  • Organizations can bring their own Terraform configurations from Git repositories and combine them with Google-provided components to create standardized infrastructure patterns for reuse across development teams.
  • New GA features include public APIs and gcloud CLI support, VPC service controls compatibility, and GitOps integration for CI/CD workflows. 
  • The service offers application template revisions as an immutable audit trail and automatically detects configuration drift between intended designs and deployed applications to maintain compliance.
  • The platform is available free of cost for building and deploying application templates, with pricing details at cloud.google.com/products/application-design-center/pricing. 
  • Integration with Cloud Hub provides operational insights and a unified control plane for managing application portfolios across the organization.
  • Platform teams can create secure, shareable catalogs of approved templates that give developers self-service access to compliant infrastructure while maintaining governance and security standards. 
  • The service supports downloading templates as infrastructure-as-code for direct editing in local IDEs with changes flowing through standard Git pull request workflows.

33:10 Ryan – “It’s kind of the panacea that everyone’s been hoping for, for a long time. With AI making it possible. Being able to plain text speak your infrastructure into existence…I definitely like this model better than like Beanstalk or the hosted application model, which has been the solution until this. This is the answer I want.” 

Azure

34:30 Microsoft will finally kill obsolete cipher that has wreaked decades of havoc – Ars Technica

  • Microsoft is deprecating RC4 encryption in Windows Active Directory after 26 years of default support, following its role in major breaches, including the 2024 Ascension healthcare attack that affected 5.6 million patient records. 
  • The cipher has been cryptographically weak since 1994 and enabled Kerberoasting attacks that have compromised enterprise networks for over a decade.
  • Windows servers have continued to accept RC4-based authentication requests by default even after AES support was added, creating a persistent attack vector that hackers routinely exploit. 
  • Senator Ron Wyden called for an FTC investigation of Microsoft in September 2025 for gross cybersecurity negligence related to this default configuration.
  • The deprecation addresses a fundamental security gap in enterprise identity management that has existed since Active Directory launched in 2000. Organizations using Windows authentication will need to ensure their systems are configured to use AES encryption and disable RC4 fallback to prevent downgrade attacks.
  • This change affects any organization running Active Directory for user authentication and access control, particularly those in healthcare, finance, and other regulated industries where credential theft can lead to catastrophic breaches. (Or literally anyone running Windows.) 
  • The move comes after years of security researchers and government officials pressuring Microsoft to remove the obsolete cipher from default configurations.
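
As a concrete taste of the remediation, here is a minimal sketch of forcing one AD account to AES-only Kerberos by writing the msDS-SupportedEncryptionTypes bitmask over LDAP with the third-party ldap3 library. The host, DN, and credentials are placeholders, and most shops would do the same thing via Group Policy or the ActiveDirectory PowerShell module rather than raw LDAP.

```python
# Minimal sketch: mark an AD account as AES-only so RC4 can't be
# negotiated for its Kerberos tickets. Assumes the third-party
# `ldap3` library; hostname, DN, and credentials are placeholders.
from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

# msDS-SupportedEncryptionTypes bit flags:
#   0x04 = RC4-HMAC, 0x08 = AES128-CTS, 0x10 = AES256-CTS
AES_ONLY = 0x08 | 0x10  # 24: AES128 + AES256, no RC4

server = Server("ldaps://dc01.example.com", use_ssl=True)
conn = Connection(
    server,
    user="EXAMPLE\\admin",   # placeholder admin account
    password="changeme",     # placeholder credential
    authentication=NTLM,
    auto_bind=True,
)

# Replace the encryption-type bitmask on a service account.
conn.modify(
    "CN=svc-app,OU=Service Accounts,DC=example,DC=com",
    {"msDS-SupportedEncryptionTypes": [(MODIFY_REPLACE, [AES_ONLY])]},
)
print(conn.result["description"])  # "success" if the write landed
```

Kerberoasting works because an attacker can request a service ticket encrypted with RC4 and brute-force the service account’s password offline; once accounts and domain controllers stop advertising RC4, that downgrade path closes.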

36:06 Ryan – “It’s so complex, everyone just accepts the defaults just to get it up and going, and if you don’t know how compromised the cipher is, you don’t really prioritize getting back and fixing the encryption. So I’m really happy to see this; it’s always been a black mark that’s made me not trust Windows.” 

37:11 Azure Storage innovations: Unlocking the future of data 

  • Azure Blob Storage now scales to exabytes with 50+ Tbps throughput and millions of IOPS, specifically architected to keep GPUs continuously fed during AI training workloads. 
  • The platform powers OpenAI’s model training and includes a new Smart Tier preview that automatically moves data between hot, cool, and cold tiers based on 30- and 90-day access patterns to optimize costs without manual intervention (a sketch of the manual version it replaces follows this list).
  • Azure Ultra Disk delivers sub-0.5ms latency with 30% improvement on Azure Boost VMs, scaling to 400K IOPS per disk and up to 800K IOPS per VM on new Ebsv6 instances. 
  • The new Instant Access Snapshots preview eliminates pre-warming requirements and reduces recovery times from hours to seconds for Premium SSD v2 and Ultra Disk, while flexible provisioning can reduce total cost of ownership by up to 50%.
  • The Azure Managed Lustre (AMLFS) 20 preview supports 25 PiB namespaces with 512 GBps throughput, featuring auto-import and auto-export capabilities for seamless data movement between AMLFS and Azure Blob Storage. 
  • This addresses the specific challenge of training AI models at terabyte and petabyte scale by maintaining high GPU utilization through parallel I/O operations.
  • Azure Files introduces Entra-only identity support for SMB shares, eliminating the need for on-premises Active Directory infrastructure and enabling cloud-native identity management, including external identities for Azure Virtual Desktop.
  • Storage Mover adds cloud-to-cloud transfers and on-premises NFS to Azure Files NFS 4.1 migration, while Azure NetApp Files large volumes now scale to 7.2 PiB capacity with 50 GiBps throughput, a 3x and 4x increase, respectively.
  • Azure Native offers now include Pure Storage and Dell PowerScale for customers wanting to migrate existing on-premises partner solutions to Azure using familiar technology stacks.
  • The Storage Migration Program provides access to partners like Atempo, Cirata, Cirrus Data, and Komprise for SAN and NAS workload migrations, with a new Storage Migration Solution Advisor in Copilot to streamline decision-making. Pricing details were not disclosed in the announcement.
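
To picture what Smart Tier automates, here is a minimal sketch of the manual chore it replaces, using the azure-storage-blob SDK to demote blobs untouched for 30 days from Hot to Cool. The connection string, container name, and age threshold are placeholders; Smart Tier’s actual 30/90-day logic runs server-side, no cron job required.

```python
# Minimal sketch of manual blob tiering -- the chore Smart Tier is
# meant to automate. Uses the `azure-storage-blob` SDK; connection
# string, container name, and the 30-day threshold are placeholders.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobServiceClient, StandardBlobTier

CONN_STR = "..."  # placeholder storage-account connection string
CUTOFF = datetime.now(timezone.utc) - timedelta(days=30)

svc = BlobServiceClient.from_connection_string(CONN_STR)
container = svc.get_container_client("training-data")

for blob in container.list_blobs():
    # Demote anything not touched in 30 days from Hot to Cool.
    if blob.last_modified < CUTOFF and blob.blob_tier == "Hot":
        container.get_blob_client(blob.name).set_standard_blob_tier(
            StandardBlobTier.COOL
        )
        print(f"moved {blob.name} to Cool")
```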

38:26 Ryan – “It just dawned on me, as you’re reading through here… this is interesting; getting all this high performance from object stores just sort of blows my mind. And then I realized that all these sorts of ‘cloud file systems’ have been backed underneath by these object stores for a long time; like, of course, they need this.”

39:49 Future-Ready Cloud: Microsoft’s U.S. Infrastructure Investments

  • Microsoft is expanding its U.S. datacenter footprint with a new East US 3 region launching in Greater Atlanta in early 2027, plus adding Availability Zones to five existing regions by the end of 2027. 
  • The Atlanta, Georgia region will support advanced AI workloads and feature zone-redundant storage for improved application resilience, designed to meet LEED Gold certification standards for sustainability.
  • The expansion adds Availability Zones to North Central US, West Central US, and US Gov Arizona regions, plus enhances existing zones in East US 2 Virginia and South Central US Texas. 
  • This provides customers with more options for multi-region architectures to improve recovery time objectives and meet compliance requirements like CMMC and NIST guidance for government workloads.
  • Azure Government customers get dedicated infrastructure expansion with three Availability Zones coming to US Gov Arizona in early 2026, specifically supporting Defense Industrial Base requirements. 
  • This complements the Azure for US Government Secret cloud region launched earlier in 2025, offering an alternative to US Gov Virginia for latency-sensitive and mission-critical deployments.
  • The infrastructure investments support organizations like the University of Miami using Availability Zones for disaster recovery in hurricane-prone regions, and the State of Alaska consolidating legacy systems while improving reliability. 
  • Microsoft emphasizes its global network of over 70 regions, 400 datacenters, and 370,000 miles of fiber as a foundation for resilient cloud strategies using its Cloud Adoption Framework and Well-Architected Framework guidance.
  • Microsoft also points readers to ai.azure.com for building production-ready AI agents.

40:33 Ryan – “AI is definitely driving a lot of this, but like with large data sets, you don’t really want that distributed globally. But I also think that they’re just purely running out of space.”

41:17 Azure Networking Updates: Secure, Scalable, and AI-Optimized

  • Azure is tripling down on AI infrastructure with its global network now reaching 18 petabits per second of total capacity, up from 6 Pbps at the end of FY24. 
  • The network spans over 60 AI regions with 500,000 miles of fiber and 4 Pbps of WAN capacity, using InfiniBand and high-speed Ethernet for lossless data transfer between GPU clusters.
  • NAT Gateway Standard V2 enters public preview with zone redundancy by default at no additional cost, delivering 100 Gbps throughput and 10 million packets per second. 
  • This joins ExpressRoute, VPN, and Application Gateway in offering zone-resilient SKUs as part of Azure’s resiliency-by-default strategy.
  • Security updates include DNS Security Policy with Threat Intel now generally available for blocking malicious domains, Private Link Direct Connect in preview for extending connectivity to any routable private IP, and JWT validation at Layer 7 in an Application Gateway preview to offload token validation from backend servers (see the sketch after this list).
  • ExpressRoute is getting 400G direct ports in select locations starting in 2026 for multi-terabit throughput, while VPN Gateway now supports a 5 Gbps single TCP flow and 20 Gbps total throughput across four tunnels, generally available. 
  • Private Link scales to 5,000 endpoints per VNet and 20,000 across peered VNets.
  • Container networking improvements for AKS include eBPF Host Routing for lower latency, Pod CIDR Expansion without cluster redeployment, WAF for Application Gateway for Containers now generally available, and Azure Bastion support for private AKS cluster access.

42:45 Ryan – “If you have those high-end network throughput needs, that’s fantastic! It’s been a while since I’ve really got into cloud at that deep layer, but I do remember in AWS the VPN limitations really biting; it was easy to hit those limits really fast.” 

After Show 

44:22 Roomba maker iRobot swept into bankruptcy

  • iRobot’s bankruptcy marks the end of an era for the company that pioneered consumer robotics with the Roomba, now being acquired by its Chinese supplier Picea Robotics after losing ground to cheaper competitors. 
  • The stock crashed from Amazon’s $52 offer in 2023 to just $4, showing how quickly market leaders can fall when undercut on price.
  • The failed Amazon acquisition in 2023 due to EU antitrust concerns looks particularly painful in hindsight, as iRobot might have been better off with Amazon’s resources than facing bankruptcy. 
  • This highlights how regulatory decisions intended to preserve competition can sometimes accelerate a company’s decline instead.
  • For cloud professionals, this demonstrates how hardware IoT companies struggle without strong cloud services and ecosystem lock-in that could justify premium pricing. iRobot’s inability to differentiate beyond hardware shows why companies like Amazon, Google, and Apple integrate devices tightly with their cloud platforms.
  • The Chinese supplier takeover raises questions about data privacy and security for the millions of Roombas already mapping homes worldwide. 
  • This could become a cautionary tale about supply chain dependencies and what happens when your manufacturer becomes your owner.
  • Founded by MIT engineers in 1990 and having sold 40 million devices, iRobot shows that innovation alone isn’t enough without sustainable competitive advantages in manufacturing costs and ongoing software value.
  • This is a sad day, especially if you’re a fan of all things serverless; iRobot was long the industry’s serverless poster child.  

Closing

And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod

Download audio: https://episodes.castos.com/5e2d2c4b117f29-10227663/2304352/c1e-rodobwg4r4f8d507-ndvg6gzkfgv9-qkjs9t.mp3