Microsoft is testing a new underlying improvement for File Explorer in Windows 11 that could reduce RAM usage when you're actively searching for an image or other files, such as Excel or PowerPoint documents. The company is optimizing search performance in File Explorer, which has been causing high memory usage.
The more efficient File Explorer search is available in Windows 11 Build 26220.7523 or newer, but it's currently locked to Windows Insider machines only. Once you have access to the feature, File Explorer will automatically remove duplicate file indexing operations, which means Windows will do less redundant work when you search in File Explorer.
“Made some improvements to File Explorer search performance by eliminating duplicate file indexing operations, which should result in faster searches and reduced system resource usage during file operations.”
File Explorer Search is not a separate index or engine; it's built on top of the Windows Search Indexer. While the indexer is designed to be 'smart,' duplicate file indexing operations can sometimes happen, and in those cases, Windows ends up scanning or processing the same files or folders more than once for indexing purposes.
The Windows Search index will now avoid duplicate file operations, which should mean less disk I/O, fewer CPU cycles, and fewer background indexing tasks, and in turn lower RAM usage.
In our tests, Windows Latest observed that Microsoft is moving options like “Compress to,” “Copy as path,” “Rotate right,” “Rotate left,” and “Set as desktop background” to a separate sub-menu called “Manage file.” On another PC, this sub-menu is called “Other actions,” which seems to suggest that Microsoft wants to dump all lesser-used options in this sub-menu.
All these improvements are being tested and are expected to roll out in the last week of January or in February.
BONUS: Breaking Through The Organizational Immune System - Why Software-Native Organizations Are Still Rare With Vasco Duarte
In this BONUS episode, we explore the organizational barriers that prevent companies from becoming truly software-native. Despite having proof that agile, iterative approaches work at scale, from Spotify to Amazon to Etsy, most organizations still struggle to adopt these practices. We reveal the root cause behind this resistance and expose four critical barriers that form what we call "The Organizational Immune System." This isn't about resistance to change; it's about embedded structures, incentives, and mental models that actively reject beneficial transformation.
The Root Cause: Project Management as an Incompatible Mindset
"Project management as a mental model is fundamentally incompatible with software development. And will continue to be, because 'project management' as an art needs to support industries that are not software-native."
The fundamental problem isn't about tools or practices; it's about how we think about work itself. Project management operates on assumptions that simply don't hold true for software development. It assumes you can know the scope upfront, plan everything in advance, and execute according to that plan. But software is fundamentally different. A significant portion of the work only becomes visible once you start building. You discover that the "simple" feature requires refactoring three other systems. You learn that users actually need something different than what they asked for. This isn't poor planning; it's the nature of software. Project management treats discovery as failure ("we missed requirements"), while software-native thinking treats discovery as progress ("we learned something critical"). As Vasco points out in his NoEstimates work, what project management calls "scope creep" should really be labeled "value discovery" in software, because we're discovering more value to add.
Discovery vs. Execution: Why Software Needs Different Success Metrics
"Software hypotheses need to be tested in hours or days, not weeks, and certainly not months. You can't wait until the end of a 12-month project to find out your core assumption was wrong."
The timing mismatch between project management and software development creates fundamental problems. Project management optimizes for plan execution with feedback loops that are months or years long, with clear distinctions between teams doing requirements, design, building, and testing. But software needs to probe and validate assumptions in hours or days. Questions like "Will users actually use this feature?" or "Does this architecture handle the load?" can't wait for the end of a 12-month project. When we finally discover our core assumption was wrong, we need to fully replan, not just "change the plan." Software-native organizations optimize for learning speed, while project management optimizes for plan adherence. These are opposing and mutually exclusive definitions of success.
The Language Gap: Why Software Needs Its Own Vocabulary
"When you force software into project management language, you lose the ability to manage what actually matters. You end up tracking task completion while missing that you're building the wrong thing."
The vocabulary we use shapes how we think about problems and solutions. Project management talks about tasks, milestones, percent complete, resource allocation, and critical path. Software needs to talk about user value, technical debt, architectural runway, learning velocity, deployment frequency, and lead time. These aren't just different words; they represent fundamentally different ways of thinking about work. When organizations force software teams to speak in project management terms, they lose the ability to discuss and manage what actually creates value in software development.
The Scholarship Crisis: An Industry-Wide Knowledge Gap
"Agile software development represents the first worldwide trend in scholarship around software delivery. But most organizational investment still goes into project management scholarship and training."
There's extensive scholarship in IT, but almost none about delivery processes until recently. The agile movement represents the first major wave of people studying what actually works for building software, rather than adapting thinking from manufacturing or construction. Yet most organizational investment continues to flow into project management certifications like PMI and Prince2, and traditional MBA programs, all teaching an approach with fundamental problems when applied to software. This creates an industry-wide challenge: when CFOs, executives, and business partners all think in project management terms, they literally cannot understand why software needs to work differently. The mental model mismatch isn't just a team problem; it's affecting everyone in the organization and the broader industry.
Budget Cycles: The Project Funding Trap
"You commit to a scope at the start, when you know the least about what you need to build. The budget runs out exactly when you're starting to understand what users actually need."
Project thinking drives project funding: organizations approve a fixed budget (say $2M over 9 months) to deliver specific features. This seems rational and gives finance predictability, but it's completely misaligned with how software creates value. Teams commit to scope when they know the least about what needs building. The budget expires just when they're starting to understand what users actually need. When the "project" ends, the team disbands, taking all their accumulated knowledge with them. Next year, the cycle starts over with a new project, new team, and zero retained context. Meanwhile, the software itself needs continuous evolution, but the funding structure treats it as a series of temporary initiatives with hard stops.
The Alternative: Incremental Funding and Real-Time Signals
"Instead of approving $2M for 9 months, approve smaller incrementsâmaybe $200K for 6 weeks. Then decide whether to continue based on what you've learned."
Software-native organizations fund teams working on products, not projects. This means incremental funding decisions based on learning rather than upfront commitments. Instead of detailed estimates that pretend to predict the future, they use lightweight signals from the NoEstimates approach to detect problems early: Are we delivering value regularly? Are we learning? Are users responding positively? These signals provide more useful information than any Gantt chart. Portfolio managers shift from being "task police" asking "are you on schedule?" to investment curators asking "are we seeing the value we expected? Should we invest more, pivot, or stop?" This mirrors how venture capital worksâand software is inherently more like VC than construction. Amazon exemplifies this approach, giving teams continuous funding as long as they're delivering value and learning, with no arbitrary end date to the investment.
The Business/IT Separation: A Structural Disaster
"'The business' doesn't understand softwareâand often doesn't want to. They think in terms of features and deadlines, not capabilities and evolution."
Project thinking reinforces organizational separation: "the business" defines requirements, "IT" implements them, and project managers coordinate the handoff. This seems logical with clear specialization and defined responsibilities. But it creates a disaster. The business writes requirements documents without understanding what's technically possible or what users actually need. IT receives them, estimates, and buildsâbut the requirements are usually wrong. By the time IT delivers, the business need has changed, or the software works but doesn't solve the real problem. Sometimes worst of all, it works exactly as specified but nobody wants it. This isn't a communication problemâit's a structural problem created by project thinking.
Product Thinking: Starting with Behavior Change
"Instead of 'build a new reporting dashboard,' the goal is 'reduce time finance team spends preparing monthly reports from 40 hours to 4 hours.'"
Software-native organizations eliminate the business/IT separation by creating product teams focused on outcomes. Using approaches like Impact Mapping, they start with behavior change instead of features. The goal becomes a measurable change in business behavior or performance, not a list of requirements. Teams measure business outcomes, not task completion, tracking whether finance actually spends less time on reports. If the first version doesn't achieve that outcome, they iterate. The "requirement" isn't sacred; the outcome is. "Business" and "IT" collaborate on goals rather than handing off requirements. They're on the same team, working toward the same measurable outcome with no walls to throw things over. Spotify's squad model popularized this approach, with each squad including product managers, designers, and engineers all focused on the same part of the product, all owning the outcome together.
Risk Management Theater: The Appearance of Control
"Here's the real risk in software: delivering software that nobody wants, and having to maintain it forever."
Project thinking creates elaborate risk management processes: steering committees, gate reviews, sign-offs, extensive documentation, and governance frameworks. These create the appearance of managing risk and make everyone feel professional and in control. But paradoxically, the very practices meant to manage risk end up increasing the risk of catastrophic failure. This mirrors Chesterton's Fence paradox. The real risk in software isn't about following the plan; it's delivering software nobody wants and having to maintain it forever. Every line of code becomes a maintenance burden. If it's not delivering value, you're paying the cost forever or paying additional cost to remove it later. Traditional risk management theater doesn't protect against this at all. Gates and approvals just slow you down without validating whether users will actually use what you're building or whether the software creates business value.
Agile as Risk Management: Fast Learning Loops
"Software-native organizations don't see 'governance' and 'agility' as a tradeoff. Agility IS governance. Fast learning loops ARE how you manage risk."
Software-native organizations recognize that agile and product thinking ARE risk management. The fastest way to reduce risk is delivering quickly: getting software in front of real users in production with real data solving real problems, not in demos or staging environments. Teams validate expected value by measuring whether software achieves intended outcomes. Did finance really reduce their reporting time? Did users actually engage with the feature? When something isn't working, teams change it quickly. When it is working, they double down. Either way, they're managing risk through rapid learning. Eric Ries's Lean Startup methodology isn't just for startups; it's fundamentally a software-native management practice. Build-Measure-Learn isn't a nice-to-have; it's how you avoid the catastrophic risk of building the wrong thing.
The Risk Management Contrast: Theater vs. Reality
"Which approach actually manages risk? The second one validates assumptions quickly and cheaply. The first one maximizes your exposure to building the wrong thing."
The contrast between approaches is stark. Risk management theater involves six months of requirements gathering and design, multiple approval gates that claim to prevent risk but actually accumulate it, comprehensive test plans, and a big-bang launch after 12 months. Teams then discover users don't want it, and now they're maintaining unwanted software forever. The agile risk management approach takes two weeks to build a minimal viable feature, ships to a subset of users, measures actual behavior, learns it's not quite right, iterates in another two weeks, validates value before scaling, and only maintains software that's proven valuable. The second approach validates assumptions quickly and cheaply. The first maximizes exposure to building the wrong thing.
The Immune System in Action: How Barriers Reinforce Each Other
"When you try to 'implement agile' without addressing these structural barriers, the organization's immune system rejects it. Teams might adopt standups and sprints, but nothing fundamental changes."
These barriers work together as an immune system defending the status quo. It starts with the project management mindset: the fundamental belief that software is like construction, that we can plan it all upfront, that "done" is a meaningful state. That mindset creates funding models that allocate budgets to temporary projects instead of continuous products, organizational structures that separate "business" from "IT" and treat software as a cost center, and risk management theater that optimizes for appearing in control rather than actually learning. Each barrier reinforces the others. The funding model makes it hard to keep stable product teams. The business/IT separation makes it hard to validate value quickly. The risk theater slows down learning loops. The whole system resists change, even beneficial change, because each part depends on the others. This is why so many "agile transformations" fail: they treat the symptoms (team practices) without addressing the disease (organizational structures built on project thinking).
Breaking Free: Seeing the System Clearly
"Once you see the system clearly, you can transform it. You now know the root cause, how it manifests, and what the alternatives look like."
Understanding these barriers is empowering. It's not that people are stupid or resistant to change; organizations have structural barriers built on a fundamental mental model mismatch. But once you see the system clearly, transformation becomes possible. You now understand the root cause (project management mindset), how it manifests in your organization (funding models, business/IT separation, risk theater), and what the alternatives look like through real examples from companies successfully operating as software-native organizations. The path forward requires addressing the disease, not just the symptoms: transforming the fundamental structures and mental models that shape how your organization approaches software.
Vasco Duarte is a thought leader in the Agile space, co-founder of Agile Finland, and host of the Scrum Master Toolbox Podcast, which has over 10 million downloads. Author of NoEstimates: How To Measure Project Progress Without Estimating, Vasco is a sought-after speaker and consultant helping organizations embrace Agile practices to achieve business success.
1145. In this bonus segment from October, I talk with Ben Zimmer about "hella" and how even yearbook messages can be digitized to help preserve the language record. Ben shares the full story of this slang term, and we also talk about the detective work that led to the OED using Run DMC's use of "drop" in Spin Magazine as a citation.
Space Geek Out Time - 2025 Edition! Richard talks to Carl about the past year in space, starting with a reader comment about 3I/ATLAS, the interstellar comet passing through our solar system that has kicked off conspiracies about aliens coming to visit - hint, it's just a comet. Then, into another record-breaking year of spaceflight with a record number of Falcon 9 flights, Starship tests, United Launch Alliance underperforming, and New Glenn finally getting to orbit! The International Space Station has passed 25 years of continuous habitation and is only five years away from being sent to a watery grave. But there are new space stations in the works! Finally, the stories of landers on the Moon, trouble at Mars, and how silly the idea of building data centers in space really is. A fantastic year for space!
Welcome to episode 335 of The Cloud Pod, where the forecast is always cloudy! This pre-Christmas week, Ryan and Justin have hit the studio to bring you the final show of 2025. We've got lots of AI images, EKS Network Policies, Gemini 3, and even some Disney drama.
Let's get into it!
Titles we almost went with this week
From Roomba to Tomb-ba: How the Robot Vacuum Pioneer Got Cleaned Out
From Napkin Sketch to Production: Google's App Design Center Goes GA
Terraform Gets a Canvas: Google Paints Infrastructure Design with AI
Mickey Mouse Takes Off the Gloves: Disney vs Google AI Showdown
From Data Silos to Data Solos: Google Conducts the Integration Orchestra
No More Thread Dread: AWS Brings AI to JVM Performance Troubleshooting
MCP: More Corporate Plumbing Than You Think
GPT-5.2 Beats Humans at Work Tasks, Still Can't Get You Out of Monday Meetings
Kerberos More Like Kerbero-Less: Microsoft Axes Ancient Encryption Standard
OpenAI Teaches GPT-5.2 to PowerPoint: Death by Bullet Points Now AI-Generated
MCP: Like USB-C, But Everyone's Keeping Theirs in the Drawer
Flash Gordon: Google's Gemini 3 Gets a Speed Boost Without the Sacrifice
Tag, You're It: AWS Finally Knows Who to Bill
Snowflake Gets a GPT-5.2 Upgrade: Now With More Intelligence Per Query
OpenAI and Snowflake: Making Data Warehouses Smarter Than Your Average Analyst
GPT-5.2 Moves Into the Snowflake: No Melting Required
Meta is developing a new frontier AI model, codenamed Avocado, to succeed Llama; it is now expected to launch in Q1 2026 after internal delays related to training performance testing.
The model may be proprietary rather than open source, marking a significant shift from Meta's previous strategy of freely distributing Llama's weights and architecture to developers. We feel like this is an interesting choice for Meta, but what do we know?
Meta spent 14.3 billion dollars in June 2025 to hire Scale AI founder Alexandr Wang as Chief AI Officer and acquire a stake in Scale, while raising 2026 capital expenditure guidance to 70-72 billion dollars.
Wang now leads the elite TBD Lab developing Avocado, operating separately from traditional Meta teams and not using the company's internal workplace network.
The company has restructured its AI leadership following the poor reception of Llama 4 in April, with Chief Product Officer Chris Cox no longer overseeing the GenAI unit.
Meta cut 600 jobs in Meta Superintelligence Labs in October, contributing to the departure of Chief AI Scientist Yann LeCun to launch a startup, while implementing 70-hour workweeks across AI organizations.
Meta's new AI leadership under Wang and former GitHub CEO Nat Friedman has introduced a "demo, don't memo" development approach, replacing traditional multi-step approval processes with rapid prototyping using AI agents and newer tools.
The company is also leveraging third-party cloud services from CoreWeave and Oracle while building the 27 billion dollar Hyperion data center in Louisiana.
Meta's Vibes AI video product, launched in September, trails OpenAI's Sora 2 in downloads, and was criticized for lacking features like realistic lip-synced audio, while the company increasingly relies on external AI models from Black Forest Labs and Midjourney rather than exclusively using internal technology.
02:23 Ryan - "I guess I really don't understand the business of the AI models. I guess if you're going to offer a chat service, you have to have a proprietary model, but it's kind of strange."
Disney has issued a cease and desist letter to Google alleging copyright infringement through its generative AI models, claiming Google trained its systems on Disney's copyrighted content without authorization and now enables users to generate Disney-owned characters like those from The Lion King, Deadpool, and Star Wars.
This represents one of the first major legal challenges from a content owner with substantial legal resources against a cloud AI provider.
The legal notice targets two specific violations: Google's use of Disney's copyrighted works in training data for its image and video generation models, and the distribution of Disney character reproductions to end users through AI-generated outputs.
Disney demands the immediate cessation of using its content and the implementation of safeguards to prevent the future generation of Disney-owned intellectual property.
This case could establish important precedents for how cloud providers handle copyrighted training data and implement content filtering in AI services.
The outcome may force cloud AI platforms to develop more sophisticated copyright detection systems or negotiate licensing agreements with content owners before deploying generative models.
Disney's involvement brings considerable legal firepower to the AI copyright debate, as the company has historically shaped US copyright law through decades of litigation to protect its intellectual property.
Cloud providers offering generative AI services may need to reassess their training data sources and output filtering mechanisms to avoid similar legal challenges from other major content owners.
04:06 Ryan - "Disney... suing for copyright infringement... shocking."
Disney invests $1 billion in OpenAI and licenses over 200 characters from Disney, Marvel, Pixar, and Star Wars franchises for use in the Sora video generator.
This marks the first major Hollywood studio content licensing deal for OpenAI's AI video platform, which launched in late September and faced industry criticism over copyright concerns.
The three-year licensing agreement allows Sora users to create short video clips featuring licensed Disney characters, representing a shift from OpenAI's previous approach of training models on copyrighted material without permission.
This deal is notable given Disney's history of aggressive copyright protection and lobbying that shaped modern US copyright law in the 1990s.
OpenAI has been pursuing content licensing deals with major IP holders after facing multiple lawsuits over unauthorized use of copyrighted training data.
The company previously argued that useful AI models cannot be created without copyrighted material, but has shifted strategy since becoming well-funded through investments.
The partnership aims to extend Disney's storytelling reach through generative AI while addressing creator concerns about unauthorized use of intellectual property.
Disney CEO Robert Iger emphasized the company's commitment to respecting and protecting creators' works while leveraging AI technology for content creation.
This deal could establish a precedent for how AI companies and content owners structure licensing agreements, potentially influencing how other studios and IP holders approach AI-generated content partnerships.
The financial terms suggest significant value in controlled character licensing for AI applications.
06:26 Ryan - "Is it just a way to get out of the lawsuit so they can generate the content?"
OpenAI released GPT Image 1.5, their new flagship image generation model, now available in ChatGPT for all users and via API.
The model generates images up to 4x faster than the previous version and includes a dedicated Images feature in the ChatGPT sidebar with preset filters and prompts for quick exploration.
The model delivers improved image editing capabilities with better preservation of original elements like lighting, composition, and people's appearance across edits.
It handles precise modifications, including adding, subtracting, combining, and blending elements while maintaining consistency, making it suitable for practical photo edits and creative transformations.
GPT Image 1.5 shows improvements in text rendering with support for denser and smaller text, better handling of multiple small faces, and more natural-looking outputs.
The model follows instructions more reliably than the initial version, enabling more intricate compositions where relationships between elements are preserved as intended.
API pricing for GPT Image 1.5 is 20% cheaper than GPT Image 1 for both inputs and outputs, allowing developers to generate and iterate on more images within the same budget.
The model is particularly useful for marketing teams, ecommerce product catalogs, and brand work requiring consistent logo and visual preservation across multiple edits.
The new ChatGPT Images model works across all ChatGPT models without requiring manual selection, while the earlier version remains available as a custom GPT.
Business and Enterprise users will receive access to the new Images experience later, with the API version available now through OpenAI Playground.
07:38 Justin - "It's very competitive against Nano Banana, and I was looking at some of the charts, and it's already jumped to the top of the charts."
OpenAI has released GPT-5.2, now generally available in ChatGPT for paid users and via API as gpt-5.2, with three variants: Instant for everyday tasks, Thinking for complex work, and Pro for the highest-quality outputs.
The model introduces native spreadsheet and presentation generation capabilities, with ChatGPT Enterprise users reporting 40-60 minutes saved daily on average.
GPT-5.2 Thinking achieves a 70.9% win rate against human experts on GDPval benchmark spanning 44 occupations and sets new records on SWE-Bench Pro at 55.6% (80% on SWE-bench Verified).
The model demonstrates 11x faster output generation and less than 1% the cost of expert professionals on knowledge work tasks, though human oversight remains necessary.
Long-context performance reaches near 100% accuracy on the 4-needle MRCR variant up to 256k tokens, with a new Responses compact endpoint extending the effective context window for tool-heavy workflows. Vision capabilities show roughly 50% error reduction on chart reasoning and interface understanding compared to GPT-5.1.
API pricing is set at $1.75 per million input tokens and $14 per million output tokens, with a 90% discount on cached inputs.
OpenAI reports that despite higher per-token costs, GPT-5.2 achieves a lower total cost for given quality levels due to improved token efficiency.
The company has no current plans to deprecate GPT-5.1, GPT-5, or GPT-4.1.
The model introduces improved safety features, including strengthened responses for mental health and self-harm scenarios, plus a gradual rollout of age prediction for content protections.
10:06 Ryan - "I'm happy to see the improved safety features because that's come up in the news recently and had some high-profile events happen, where it's become a concern, for sure. So I want to see more protection in that space from all the providers."
Cedar is an open source authorization policy language that just joined CNCF as a Sandbox project, solving the problem of hard-coded access control by letting developers define fine-grained permissions as policies separate from application code.
It supports RBAC, ABAC, and ReBAC models with fast real-time evaluation.
The language stands out for its formal verification using the Lean theorem prover and differential random testing against its specification, providing mathematical guarantees for security-critical authorization logic. This rigor addresses the growing complexity of cloud-native authorization, where traditional ad-hoc systems fall short.
The CNCF move provides vendor-neutral governance and broader community access beyond AWS stewardship.
Cedar offers an interactive policy playground and Rust SDK for developers to test authorization logic before deployment.
The analyzability features enable automated policy optimization and verification, reducing the risk of misconfigured permissions in production.
The CNCF acceptance fills a gap in the cloud-native landscape for a foundation-backed authorization standard, complementing existing projects and potentially becoming the go-to solution as it progresses from Sandbox to Incubation status.
12:05 Ryan - "I think this kind of policy is going to be absolutely key to managing permissions going forward."
GuardDuty Extended Threat Detection identified a coordinated cryptomining campaign starting November 2, 2025, where attackers used compromised IAM credentials to deploy miners across EC2 and ECS within 10 minutes of initial access.
The new AttackSequence: EC2/CompromisedInstanceGroup finding correlated signals across multiple data sources to detect the sophisticated attack pattern, demonstrating how Extended Threat Detection capabilities launched at re:Invent 2025 can identify coordinated campaigns.
The attackers employed a novel persistence technique using ModifyInstanceAttribute to disable API termination on all launched instances, forcing victims to manually re-enable termination before cleanup and disrupting automated remediation workflows.
They also created public Lambda endpoints without authentication and established backdoor IAM users with SES permissions, showing advancement in cryptomining persistence methodologies beyond typical mining operations.
The campaign targeted high-value GPU and ML instances (g4dn, g5, p3, p4d) through auto scaling groups configured to scale from 20 to 999 instances, with attackers first using DryRun flags to validate permissions without triggering costs. The malicious Docker Hub image yenik65958/secret accumulated over 100,000 pulls before takedown, and attackers created up to 50 ECS clusters per account with Fargate tasks configured for maximum CPU allocation of 16,384 units.
AWS recommends enabling GuardDuty Runtime Monitoring alongside the foundational protection plan for comprehensive coverage, as Runtime Monitoring provides host-level signals critical for Extended Threat Detection correlation and detects crypto mining execution through Impact:Runtime/CryptoMinerExecuted findings.
Organizations should implement SCPs to deny Lambda URL creation with an AuthType of NONE and monitor CloudTrail for unusual DryRun API patterns as early warning indicators.
The attack demonstrates the importance of temporary credentials over long-term access keys, MFA enforcement, and least privilege IAM policies, as the compromise exploited valid credentials rather than AWS service vulnerabilities. GuardDuty's multilayered detection using threat intelligence, anomaly detection, and Extended Threat Detection successfully identified all attack stages from initial access through persistence.
55:31 Justin - "Hackers have the same tools we do for development."
Amazon EKS now supports Admin Network Policies and Application Network Policies, giving cluster administrators centralized control over network security across all namespaces while allowing namespace administrators to filter outbound traffic using domain names instead of maintaining IP address lists.
This addresses a key limitation of standard Kubernetes Network Policies, which only work within individual namespaces and lack explicit deny rules or policy hierarchies.
The new Admin Network Policies operate in two tiers: Admin Tier rules that cannot be overridden by developers, and Baseline Tier rules that provide default connectivity but can be overridden by standard Network Policies.
This enables platform teams to enforce cluster-wide security requirements like isolating sensitive workloads or ensuring monitoring access while still giving application teams flexibility within those boundaries.
Application Network Policies, exclusive to EKS Auto Mode clusters, add Layer 7 FQDN-based filtering to traditional Layer 3/4 network policies, solving the problem of managing egress to external services with frequently changing IP addresses. Instead of maintaining IP lists for SaaS providers or on-premises resources behind load balancers, teams can simply whitelist domain names like internal-api.company.com, and policies remain valid even when underlying IPs change.
Requirements include Kubernetes 1.29 or later, Amazon VPC CNI plugin v1.21.0 for standard EKS clusters, and EKS Auto Mode for Application Network Policies with DNS filtering.
The feature is available now for new clusters, with support for existing clusters coming in the following weeks, though pricing remains unchanged, as this is a native capability of the VPC CNI plugin.
17:30 Ryan - "This is one of those things that's showing a maturity level of container-driven applications. It's been a while since security teams have been aware of some of the things you can do with network policies and routing, and so you want to empower your developers, but also being able to have a comprehensive way to ban and approve has been missing from a lot of these ingress controllers. So this is a great thing for security teams, and probably terrible for developers."
AWS has released an automated Java thread dump analysis solution that combines Prometheus monitoring, Grafana alerting, Lambda orchestration, and Amazon Bedrock AI to diagnose JVM performance issues in seconds rather than hours.
The system works across both ECS and EKS environments, automatically detecting high thread counts and generating actionable insights without requiring deep JVM expertise from operations teams.
The solution uses Spring Boot Actuator endpoints for ECS deployments and Kubernetes API commands for EKS to capture thread dumps when Grafana alerts trigger.
Amazon Bedrock then analyzes the dumps to identify deadlocks, performance bottlenecks, and thread states while providing structured recommendations across six key areas, including executive summary and optimization guidance.
Deployment is handled through CloudFormation templates available in the Java on AWS Immersion Day Workshop, with all thread dumps and AI analysis reports automatically stored in S3 for historical trending.
The architecture follows event-driven principles with modular components that can be extended to other diagnostic tools like heap dump analysis or automated remediation workflows.
The system enriches JVM metrics with contextual tags, including cluster identification and container metadata, enabling the Lambda function to determine the appropriate thread dump collection method. This metadata-driven approach allows a single solution to handle heterogeneous container environments without manual configuration for each deployment type.
Pricing follows standard AWS service costs for Lambda invocations, Bedrock LLM usage per token, S3 storage, and CloudWatch metrics, with no additional licensing fees for the open source monitoring components.
The solution addresses the common problem where only a handful of engineers on most teams can interpret thread dumps, democratizing JVM troubleshooting across operations teams.
20:55 Justin - "This tells me that if you have a bad container that crashes a lot, you could spend a lot of money on LLM usage for tokens analyzing your exact same crash dump every time. Do keep that in mind."
EC2 Auto Scaling introduces a new LaunchInstances API that provides synchronous feedback when launching instances, allowing customers to immediately know if capacity is available in their specified Availability Zone or subnet.
This addresses scenarios where customers need precise control over instance placement and real-time confirmation of scaling operations rather than waiting for asynchronous results.
The API enables customers to override default Auto Scaling group configurations by specifying exact Availability Zones and subnets for new instances, while still maintaining the benefits of automated fleet management like health checks and scaling policies. Optional asynchronous retries are included to help reach the desired capacity if initial synchronous attempts fail.
This feature is particularly useful for workloads that require strict placement requirements or need to implement fallback strategies quickly when capacity constraints occur in specific zones. Customers can now build more sophisticated scaling logic that responds immediately to capacity availability rather than discovering issues after the fact.
Available immediately in all AWS Regions and GovCloud at no additional cost beyond standard EC2 and EBS charges. Customers can access the feature through AWS CLI and SDKs, with documentation available at https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-instances-synchronously.
23:47 Ryan - "I find that the things that it's allowing you to tune... it's the things that I moved to autoscaling for; I don't want to deal with any of this nonsense. And so you still have to maintain your own orchestration, which understands which zone that you need to roll out to, because it's going to have to call that API."
AWS now enables cost allocation based on workforce user attributes like cost center, division, and department imported from IAM Identity Center.
This allows organizations to automatically tag per-user subscription and on-demand fees for services like Amazon Q Business, Q Developer, and QuickSight with organizational metadata for chargeback purposes.
The feature addresses a common FinOps challenge where companies struggle to attribute SaaS-style AWS application costs back to specific business units. Once user attributes are imported to IAM Identity Center and enabled as cost allocation tags in the Billing Console, usage automatically flows to Cost Explorer and CUR 2.0 with the appropriate organizational tags attached.
This capability is particularly relevant for enterprises deploying Amazon Q Business or QuickSight at scale, where individual user subscriptions can quickly add up across departments. Instead of manually tracking which users belong to which cost centers, the system automatically associates costs based on existing identity data.
The feature is generally available in all commercial AWS regions except GovCloud and China regions.
No additional pricing is mentioned beyond the standard costs of the underlying AWS applications being tracked.
25:26 Justin - "There's lots of use cases; this gets interesting real quickly. It's a really nice feature that I'm really happy about."
Google launches Gemini 3 Flash in general availability, positioning it as a frontier intelligence model optimized for speed at reduced cost.
The model processes over 1 trillion tokens daily through Google's API and replaces Gemini 2.5 Flash as the default model in the Gemini app globally at no cost to users.
Gemini 3 Flash achieves strong benchmark performance with 90.4% on GPQA Diamond and 81.2% on MMMU Pro while running 3x faster than Gemini 2.5 Pro and using 30% fewer tokens on average for typical tasks.
Pricing is set at $0.50 per million input tokens and $3 per million output tokens, with audio input at $1 per million tokens.
The model demonstrates strong coding capabilities with a 78% score on SWE-bench Verified, outperforming both the 2.5 series and Gemini 3 Pro. This makes it suitable for agentic workflows, production systems, and interactive applications requiring both speed and reasoning depth.
The model is also rolling out as the default for AI Mode in Search globally, combining real-time information retrieval with multimodal reasoning capabilities.
Early enterprise adopters, including JetBrains, Bridgewater Associates, and Figma, are using the model for applications ranging from video analysis and data extraction to visual Q&A and in-game assistance.
The multimodal capabilities support real-time analysis of images, video, and audio content for actionable insights.
27:01 Justin - "This, just in general, is a pretty big improvement from not only the cost perspective, but also the overall performance, and the ability to run this on local devices, for like Android phones, is gonna be a huge breakthrough in LLM performance on the device. So I suspect you'll see a lot of Gemini 3 Flash getting rolled out all over the place because it does a lot of things really darn well."
Google has integrated Model Context Protocol servers into its new Antigravity IDE, allowing AI agents to directly connect to Google Cloud data services, including AlloyDB, BigQuery, Spanner, Cloud SQL, and Looker.
The MCP Toolbox for Databases provides pre-built connectors that eliminate manual configuration, letting developers access enterprise data through a UI-driven setup process within the IDE.
The integration enables AI agents to perform database administration tasks, generate SQL code, and run queries without switching between tools.
For AlloyDB and Cloud SQL, agents can explore schemas, develop queries, and optimize performance using tools like list_tables, execute_sql, and get_query_plan directly in the development environment.
BigQuery and Looker connections extend agent capabilities into analytics and business intelligence workflows.
Agents can forecast trends, search data catalogs, validate metric definitions against semantic models, and run ad-hoc queries to ensure application logic matches production reporting standards.
The MCP servers use IAM credentials or secure password storage to maintain security while giving agents access to production data sources. This approach positions Antigravity as a data-aware development environment where AI assistance is grounded in actual enterprise data rather than abstract reasoning alone.
The feature is available now through the Antigravity MCP Store with documentation at cloud.google.com/alloydb/docs and the open-source MCP Toolbox on GitHub at googleapis/genai-toolbox.
No specific pricing information was provided for the MCP integration itself, though standard data service costs for AlloyDB, BigQuery, and other connected services apply.
Google now offers fully-managed, remote Model Context Protocol (MCP) servers for its services, eliminating the need for developers to deploy and maintain individual local MCP servers.
This provides a unified, enterprise-ready endpoint for connecting AI agents to Google and Google Cloud services with built-in IAM, audit logging, and Model Armor security.
Initial MCP support launches for four key services: Google Maps Platform for location grounding, BigQuery for querying enterprise data in-place, Compute Engine for infrastructure management, and GKE for container operations. Additional services, including Cloud Run, Cloud Storage, AlloyDB, Spanner, and SecOps, will receive MCP support in the coming months.
Apigee integration allows enterprises to expose their own custom APIs and third-party APIs as discoverable tools for AI agents, extending MCP capabilities beyond Google services to the broader enterprise stack.
Organizations can use Cloud API Registry and Apigee API Hub to discover and govern available MCP tools across their environment.
The implementation enables agents to perform complex multi-step workflows like analyzing BigQuery sales data for revenue forecasting while simultaneously querying Google Maps for location intelligence, all through standardized MCP interfaces.
This approach keeps data in place rather than moving it into context windows, reducing security risks and latency.
Apigee now supports Model Context Protocol (MCP), allowing organizations to expose their existing APIs as tools for AI agents without writing code or managing MCP servers. Google handles the infrastructure, transcoding, and protocol management while Apigee applies its 30+ built-in policies for authentication, authorization, and security to govern agentic interactions.
The implementation automatically registers deployed MCP proxies in Apigee API hub as searchable MCP APIs, enabling centralized tool catalogs and granular access controls through API products.
Organizations can apply quota policies and identity controls to restrict which agents and clients can access specific MCP tools, with full visibility through Apigee Analytics and the new API Insights feature.
Integration with Google's Agent Development Kit (ADK) provides streamlined access to Apigee MCP endpoints for developers building custom agents, with an ApigeeLLM wrapper available for routing LLM calls through Apigee proxies.
The feature works with multiple agent frameworks, including LangGraph, though ADK users get optimized tooling for the Google ecosystem, including Vertex AI Agent Engine and Gemini Enterprise deployment options.
Security capabilities extend beyond standard API protection to include Cloud Data Loss Prevention for sensitive data classification and Model Armor for defending against prompt injection attacks.
The feature is currently in preview with select customers, requiring contact with Apigee or Google Cloud account teams for access, with no pricing information disclosed yet.
31:07 Ryan - "I just did some real-time analysis about the features of the MCP and then also the browser and stuff. It's one of those things where it is the newer model of coding, where you're having distributed agents do tasks, and that, so the new IDEs are taking advantage of that... And it is a VS Code fork. So it's very comfortable to your VS Code users."
Google's Application Design Center reaches general availability as a visual, AI-powered platform for designing and deploying Terraform-backed application infrastructure on GCP.
The service integrates with Gemini Cloud Assist to let users describe infrastructure needs in natural language and receive deployable architecture diagrams with Terraform code, while automatically registering applications with App Hub for unified management.
The platform addresses platform engineering needs by providing a curated catalog of opinionated application templates, including specialized GKE templates for AI inference workloads using various LLM models.
Organizations can bring their own Terraform configurations from Git repositories and combine them with Google-provided components to create standardized infrastructure patterns for reuse across development teams.
The service offers application template revisions as an immutable audit trail and automatically detects configuration drift between intended designs and deployed applications to maintain compliance.
The platform is available free of cost for building and deploying application templates, with pricing details at cloud.google.com/products/application-design-center/pricing.
Integration with Cloud Hub provides operational insights and a unified control plane for managing application portfolios across the organization.
Platform teams can create secure, shareable catalogs of approved templates that give developers self-service access to compliant infrastructure while maintaining governance and security standards.
The service supports downloading templates as infrastructure-as-code for direct editing in local IDEs with changes flowing through standard Git pull request workflows.
33:10 Ryan - "It's kind of the panacea that everyone's been hoping for, for a long time. With AI making it possible. Being able to plain text speak your infrastructure into existence... I definitely like this model better than like Beanstalk or the hosted application model, which has been the solution until this. This is the answer I want."
Microsoft is deprecating RC4 encryption in Windows Active Directory after 26 years of default support, following its role in major breaches, including the 2024 Ascension healthcare attack that affected 5.6 million patient records.
The cipher has been cryptographically weak since 1994 and enabled Kerberoasting attacks that have compromised enterprise networks for over a decade.
Windows servers have continued to accept RC4-based authentication requests by default even after AES support was added, creating a persistent attack vector that hackers routinely exploit.
Senator Ron Wyden called for an FTC investigation of Microsoft in September 2025 for gross cybersecurity negligence related to this default configuration.
The deprecation addresses a fundamental security gap in enterprise identity management that has existed since Active Directory launched in 2000. Organizations using Windows authentication will need to ensure their systems are configured to use AES encryption and disable RC4 fallback to prevent downgrade attacks.
This change affects any organization running Active Directory for user authentication and access control, particularly those in healthcare, finance, and other regulated industries where credential theft can lead to catastrophic breaches. (Or literally anyone running Windows.)
The move comes after years of security researchers and government officials pressuring Microsoft to remove the obsolete cipher from default configurations.
36:06 Ryan - "It's so complex, everyone just accepts the defaults just to get it up and going, and if you don't know how compromised the cipher is, you don't really prioritize getting back and fixing the encryption. So I'm really happy to see this; it's always been a black mark that's made me not trust Windows."
Azure Blob Storage now scales to exabytes with 50+ Tbps throughput and millions of IOPS, specifically architected to keep GPUs continuously fed during AI training workloads.
The platform powers OpenAI's model training and includes a new Smart Tier preview that automatically moves data between hot, cool, and cold tiers based on 30 and 90-day access patterns to optimize costs without manual intervention.
Azure Ultra Disk delivers sub-0.5ms latency with 30% improvement on Azure Boost VMs, scaling to 400K IOPS per disk and up to 800K IOPS per VM on new Ebsv6 instances.
The new Instant Access Snapshots preview eliminates pre-warming requirements and reduces recovery times from hours to seconds for Premium SSD v2 and Ultra Disk, while flexible provisioning can reduce total cost of ownership by up to 50%.
Azure Managed Lustre AMLFS 20 preview supports 25 PiB namespaces with 512 GBps throughput, featuring auto-import and auto-export capabilities for seamless data movement between AMLFS and Azure Blob Storage.
This addresses the specific challenge of training AI models at terabyte and petabyte scale by maintaining high GPU utilization through parallel I/O operations.
Azure Files introduces Entra-only identity support for SMB shares, eliminating the need for on-premises Active Directory infrastructure and enabling cloud-native identity management, including external identities for Azure Virtual Desktop. Storage Mover adds cloud-to-cloud transfers and on-premises NFS to Azure Files NFS 4.1 migration, while Azure NetApp Files large volumes now scale to 7.2 PiB capacity with 50 GiBps throughput, representing a 3x and 4x increase, respectively.
Azure Native offers now include Pure Storage and Dell PowerScale for customers wanting to migrate existing on-premises partner solutions to Azure using familiar technology stacks. The Storage Migration Program provides access to partners like Atempo, Cirata, Cirrus Data, and Komprise for SAN and NAS workload migrations, with a new Storage Migration Solution Advisor in Copilot to streamline decision-making. Pricing details were not disclosed in the announcement.
38:26 Ryan - "It just dawned on me, as you're reading through here... this is interesting; getting all this high performance from object stores just sort of blows my mind. And then I realized that all these sorts of 'cloud file systems' have been backed underneath by these object stores for a long time; like, of course, they need this."
Microsoft is expanding its U.S. datacenter footprint with a new East US 3 region launching in Greater Atlanta in early 2027, plus adding Availability Zones to five existing regions by the end of 2027.
The Atlanta, Georgia region will support advanced AI workloads and feature zone-redundant storage for improved application resilience, designed to meet LEED Gold certification standards for sustainability.
The expansion adds Availability Zones to North Central US, West Central US, and US Gov Arizona regions, plus enhances existing zones in East US 2 Virginia and South Central US Texas.
This provides customers with more options for multi-region architectures to improve recovery time objectives and meet compliance requirements like CMMC and NIST guidance for government workloads.
Azure Government customers get dedicated infrastructure expansion with three Availability Zones coming to US Gov Arizona in early 2026, specifically supporting Defense Industrial Base requirements.
This complements the Azure for US Government Secret cloud region launched earlier in 2025, offering an alternative to US Gov Virginia for latency-sensitive and mission-critical deployments.
The infrastructure investments support organizations like the University of Miami using Availability Zones for disaster recovery in hurricane-prone regions, and the State of Alaska consolidating legacy systems while improving reliability.
Microsoft emphasizes its global network of over 70 regions, 400 datacenters, and 370,000 miles of fiber as a foundation for resilient cloud strategies using its Cloud Adoption Framework and Well-Architected Framework guidance.
Microsoft also points developers to ai.azure.com for building production-ready AI agents.
40:33 Ryan - "AI is definitely driving a lot of this, but like with large data sets, you don't really want that distributed globally. But I also think that they're just purely running out of space."
Azure is tripling down on AI infrastructure with its global network now reaching 18 petabits per second of total capacity, up from 6 Pbps at the end of FY24.
The network spans over 60 AI regions with 500,000 miles of fiber and 4 Pbps of WAN capacity, using InfiniBand and high-speed Ethernet for lossless data transfer between GPU clusters.
NAT Gateway Standard V2 enters public preview with zone redundancy by default at no additional cost, delivering 100 Gbps throughput and 10 million packets per second.
This joins ExpressRoute, VPN, and Application Gateway in offering zone-resilient SKUs as part of Azure's resiliency-by-default strategy.
ExpressRoute is getting 400G direct ports in select locations starting in 2026 for multi-terabit throughput, while VPN Gateway, now generally available, supports 5 Gbps single TCP flow and 20 Gbps total throughput with four tunnels.
Private Link scales to 5,000 endpoints per VNet and 20,000 across peered VNets.
42:45 Ryan - "If you have those high-end network throughput needs, that's fantastic! It's been a while since I've really got into cloud at that deep layer, but I do remember in AWS the VPN limitations really biting; it was easy to hit those limits really fast."
iRobot's bankruptcy marks the end of an era for the company that pioneered consumer robotics with the Roomba, now being acquired by its Chinese supplier Picea Robotics after losing ground to cheaper competitors.
The stock crashed from Amazon's $52 offer in 2023 to just $4, showing how quickly market leaders can fall when undercut on price.
The failed Amazon acquisition in 2023 due to EU antitrust concerns looks particularly painful in hindsight, as iRobot might have been better off with Amazon's resources than facing bankruptcy.
This highlights how regulatory decisions intended to preserve competition can sometimes accelerate a company's decline instead.
For cloud professionals, this demonstrates how hardware IoT companies struggle without strong cloud services and ecosystem lock-in that could justify premium pricing. iRobot's inability to differentiate beyond hardware shows why companies like Amazon, Google, and Apple integrate devices tightly with their cloud platforms.
The Chinese supplier takeover raises questions about data privacy and security for the millions of Roombas already mapping homes worldwide.
This could become a cautionary tale about supply chain dependencies and what happens when your manufacturer becomes your owner.
Founded by MIT engineers in 1990 and selling 40 million devices, iRobot's fall shows that innovation alone isn't enough without sustainable competitive advantages in manufacturing costs and ongoing software value.
This is a sad day, especially if you're a fan of all things serverless, as they were the poster child of all things serverless.
Closing
And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod
Flutter CI/CD Part 2: Automating iOS Deployment to TestFlight with Fastlane & Bitbucket
Welcome to Part 2 of our efforts to automate deployment of our Flutter applications to the various stores. In this session, we will be exploring how to use Fastlane and Bitbucket to automate our deployment process.
Requirements:
This section assumes you have read Part 1 of the article, not necessarily all of it, but at least the part that explores building a Bitbucket pipeline. That said, the main idea of this article is to give you a platform-independent approach to automation that you can then extend to the platform of your choice or familiarity.
Xcode: Make sure Xcode is installed and configured correctly on your macOS machine.
Flutter Project: You have an existing Flutter project.
Git: Basic knowledge of Git for version control (optional but highly recommended).
Ruby: Fastlane is built on Ruby. macOS comes with Ruby pre-installed, but it's recommended to use a Ruby version manager like rbenv or RVM to avoid conflicts with the system Ruby and to manage dependencies better (a minimal rbenv setup is sketched right after this list).
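If you go the version-manager route, a minimal rbenv setup might look like the following. This is only a sketch: it assumes Homebrew is installed, uses zsh, and the Ruby version shown is just an example.
# Install rbenv and ruby-build (assumes Homebrew)
brew install rbenv ruby-build
# Make rbenv shims available in new shells (zsh shown; adapt for bash)
echo 'eval "$(rbenv init - zsh)"' >> ~/.zshrc
# Install and pin a project-local Ruby (3.2.2 is just an example version)
rbenv install 3.2.2
rbenv local 3.2.2
ruby -v   # should now report the rbenv-managed Ruby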
Setup: Install Fastlane
We will use Bundler to handle the installation and setup of Fastlane, keeping everything contained. Think of it as a dedicated Fastlane environment for your project that ensures all your Fastlane dependencies are consistent across different machines and users.
1. Navigate to your Flutter project's ios directory in the terminal:
cd your_flutter_project/ios
2. Create a Gemfile:
touch Gemfile
3. Edit Gemfile (using nano, vim, or any text editor):
# In your_flutter_project/ios/Gemfile
source "https://rubygems.org"

gem "fastlane"
4. Install Bundler (if you donât have it):
gem install bundler
5. Install Fastlane via Bundler: This will install Fastlane and its dependencies into a vendor/bundle directory inside your ios folder and record the exact versions in a Gemfile.lock file.
bundle install --path vendor/bundle
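Note: on newer Bundler versions the --path flag prints a deprecation warning. If that happens, the equivalent two-step form is:
# Remember the install path once, then install into vendor/bundle as before
bundle config set --local path 'vendor/bundle'
bundle install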
6. Initialize Fastlane: run the initialization command while within the ios directory
bundle exec fastlane init
Fastlane will ask you a few questions:
What would you like to use fastlane for? Choose 4 (Manual setup - manually set up your project to automate your tasks) and press Enter.
It will ask you to confirm a few more things; just keep pressing Enter until it is done.
Fastlane will then set up the Fastlane directory inside your ios directory, containing:
Appfile: Stores configuration information that is shared across all lanes (e.g., app identifier, Apple ID).
Fastfile: This is where you define your automation lanes (e.g., deploy_ios, deploy_android, tests).
Pluginfile: (Optional) If you add any Fastlane plugins.
7. Configuring the Appfile:
The Appfile is one of Fastlane's main files and is used to store project-specific information, such as the bundle identifier and the Apple team ID.
1. Open the Appfile file, located in the fastlane folder.
2. Edit the file to include your project settings.
# In your_flutter_project/ios/fastlane/Appfile
app_identifier("com.yourcompany.yourappname")   # The bundle identifier of your app
apple_id("your_apple_id@example.com")           # Your Apple Developer account email (example placeholder)
team_id("X1Y2Z3A4B5")                           # Your Apple Developer team ID
Now our setup is complete. We proceed to configure Fastlane to build and sign our app, communicate with the App Store, and actually push our app to TestFlight. We will do this in 4 main steps.
Step 1: Fastlane - App Store auth setup
To start, we need Fastlane to communicate successfully with the App Store, which means providing the credentials required to authenticate its requests. It is also necessary to bypass interactive 2FA prompts that can halt automation in CI/CD environments. For this, we need the Apple Auth Key ID and the corresponding private key file (.p8 file). These two, together with the Issuer ID, are used to generate the authentication tokens that Fastlane uses to make requests to the App Store.
Unlike the private key file (which is only downloadable once), the Key ID remains visible in App Store Connect for reference. You will typically set the Key ID as an environment variable (e.g., ASC_KEY_ID) or pass it as a parameter in your Fastfile configuration to use the app_store_connect_api_key action.
Let us proceed.
Access App Store Connect
Navigate to "Users and Access".
On the "Users and Access" page, select the "Integrations" tab.
Click on the "+" button to add a new key.
Enter a name for the key, such as Fastlane API Key.
Choose the necessary permission level: normally, select App Manager or Admin to ensure that the API key has full access to the apps in your project.
Click on "Generate".
After generating the key, download the .p8 file. Save it in the fastlane folder of the project, named "PrivateKey.p8" (see the version-control note after these steps).
Copy the Key ID and keep it handy.
Copy and save the Issuer ID; it is found just at the top of the list of Active Keys.
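A quick housekeeping note (not in the original steps): the .p8 key, and the .p12 certificate we will export later, are secrets and should stay out of version control. A minimal sketch, assuming your repository root is the Flutter project folder:
# Run from the repository root; adjust the paths if your layout differs
echo "ios/fastlane/PrivateKey.p8" >> .gitignore
echo "ios/fastlane/*.p12" >> .gitignore
echo "ios/fastlane/build/" >> .gitignore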
Now let us update our Fastfile.
Initially the file looks like this:
default_platform(:ios)

platform :ios do
  desc "Description of what the lane does"
  lane :custom_lane do
    # add actions here: https://docs.fastlane.tools/actions
  end
end
We update it to:
default_platform(:ios)
platform :ios do
  # Updates start here: new lane for iOS deployment
  desc "Automates ios deployments"
  lane :deploy_ios do
    app_store_connect_api_key(
      key_id: "2XXXXXXXXJ",                             # Apple Auth Key ID
      issuer_id: "7xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxb", # Apple Auth Issuer ID
      key_filepath: "./fastlane/PrivateKey.p8",         # Path to the private key file (.p8)
      duration: 1200,
      in_house: false
    )
  end
  # Updates end here
end
Now that we have that in place and are set up to make successful API requests, let's proceed to the next step.
Step 2: Apple Code Signing
As its name implies, this is the process of signing your app, validating that it comes from a trusted source and has not been tampered with. This process requires 3 main components:
Certificates: The certificate verifies the developer's identity.
App ID: This is just the bundle ID for the app
Provision profile: A provisioning profile is a file that acts as the link between your trusted developer identity and the devices and services your app can access. Without a valid provisioning profile, your app cannot be built onto a physical device or submitted to the App Store.
1. Creating the certificate:
The certificate is generated from a public/private key pair created on your development machine. The process involves:
CSR (Certificate Signing Request) (.certSigningRequest file): lets you securely request a certificate from Apple without sharing your private encryption key. This file contains the public key and your personal information, and is used to ask Apple to generate a signing certificate. When the CSR is created, a private key is also created and stored in the Keychain Access app.
The certificate (.cer file) is generated from the public key information. There are two types: Developer and Distribution. For our case we will use Distribution, which is for signing the app package when preparing it for wider release via the App Store, TestFlight, or enterprise distribution.
Install the Certificate: You download this .cer file and double-click it. It imports into your Keychain Access app, pairing itself with the private key that was already stored there.
Exporting the certificate to .p12 (Personal Information Exchange format): Any CI/CD tool outside of your computer needs both your private key and the corresponding public certificate to sign your app. Since you cannot share these unprotected, the .p12 file bundles your private key and its associated public certificate chain into a single, password-protected file. You can then pass this file and its password to Fastlane to sign your app.
Step 1: Generate a CSR (Certificate Signing Request)
1. Open the Keychain Access application on macOS.
2. In the top menu, click Keychain Access > Certificate Assistant > Request a Certificate From a Certificate Authority.
3. Fill in the required fields:
User Email Address: Enter your Apple Developer account email address.
Common Name: Enter a name to identify the certificate (example: "Developer Distribution Certificate").
Step 2: Create the distribution certificate in the Apple Developer Portal
Click on the "+" button to add a new certificate.
Select "Apple Distribution" and click Continue.
Submit the CSR generated in step 1 in the Apple Developer Portal
Download the generated certificate
Step 3: Install the certificate
Open (double click) the downloaded .cer file. It will automatically be imported into Keychain Access.
Locate the certificate in the My Certificates tab of Keychain Access (if you can't find it, check the expiration date; it's usually exactly one year from today).
Step 4: Export the certificate to .p12
In Keychain Access, click on the arrow on the certificate you imported, and select the certificate and private key.
Right-click and select "Export 2 items".
Choose the Personal Information Exchange format (.p12).
Set the name to anything, for example FastlaneDistributionCertificate.
Save the file in the project's fastlane folder.
Enter a password for the certificate (save it somewhere, as we will use it very soon to configure the Fastfile).
You will be prompted to enter your machine password to proceed.
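Optionally, you can verify that the exported .p12 really contains the certificate using openssl; this is not part of the original steps, and on OpenSSL 3 you may need the -legacy flag for Keychain exports.
# Prompts for the export password and prints the bundled certificate details
openssl pkcs12 -info -in FastlaneDistributionCertificate.p12 -nokeys
# If that fails with an "unsupported algorithm" error on OpenSSL 3, retry with:
openssl pkcs12 -info -in FastlaneDistributionCertificate.p12 -nokeys -legacy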
Now let us update our Fastfile to import the certificate we just created.
default_platform(:ios)
platform :ios do
  desc "Automates ios deployments"
  lane :deploy_ios do
    ...
    # 2. Import the distribution certificate
    import_certificate(
      certificate_path: "./fastlane/FastlaneDistributionCertificate.p12", # Path to the .p12 certificate file
      certificate_password: "xxxxxxxx", # Password for the .p12 file
      keychain_name: "login",           # Name of the keychain to import the certificate into
      keychain_password: ""
    )
  end
end
2. Creating the Provisioning Profile
The primary purpose of a provisioning profile is to give an application explicit authorization to launch on an Apple device. The operating system uses the profile to verify:
Who signed the app (via the embedded certificate).
What app it is (via the App ID/Bundle Identifier).
Where it is allowed to run (on specific test devices or publicly via the App Store).
How it can use certain device capabilities/entitlements (e.g., Push Notifications, iCloud, Apple Pay).
Let's create a provisioning profile for distribution in the App Store.
Go to the Apple Developer Portal
Go to Certificates, Identifiers & Profiles.
Go to the Profiles section.
Click on the "+" button to add a new profile.
Select "App Store Connect".
Click on Continue.
Select the App ID you want to connect your profile to.
Select the certificate you created. You can use the expiry date to tell which one is which.
Name the profile: the bundle ID with the suffix AppStore, for example "com.example.myApp AppStore". This name will be used later when configuring the Fastfile.
Click on Generate to continue.
Stay on the page.
Now let us configure Fastlane to be able to pull this created profile and use it for signing our project.
default_platform(:ios)
platform :ios do
  desc "Automates ios deployments"
  lane :deploy_ios do
    ...
    # 3. Fetch the provisioning profile
    get_provisioning_profile(
      output_path: "./fastlane/profiles",
      filename: "appstore.mobileprovision",
      provisioning_name: "com.example.myApp AppStore", # Name of the provisioning profile in Apple Developer Portal
      ignore_profiles_with_different_name: true,
      app_identifier: "com.example.myApp" # Your app's bundle identifier (copy it from the profile creation page)
    )
  end
end
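As an optional sanity check (not part of the original flow), macOS's built-in security tool can decode the fetched profile so you can confirm the embedded app identifier, certificate, and expiry date:
# Decode the CMS-wrapped plist inside the downloaded provisioning profile
security cms -D -i ./fastlane/profiles/appstore.mobileprovision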
Step 3: Finalize Fastlane configurations
Update code signing settings in the Xcode project
We have now set up all that is needed from Apple to be able to successfully build our app. To ensure the correct signing identity and provisioning profile are set in the Xcode project file prior to compilation, we use the "update_code_signing_settings" Fastlane action to programmatically modify our Xcode project's code signing build settings.
default_platform(:ios)
platform :ios do
  desc "Automates ios deployments"
  lane :deploy_ios do
    ...
    # 4. Update code signing settings in the Xcode project
    update_code_signing_settings(
      use_automatic_signing: false,
      targets: ["Runner"],
      path: "Runner.xcodeproj",
      code_sign_identity: "iPhone Distribution",
      build_configurations: ['Release'],
      sdk: "iphoneos*",
      profile_name: "com.example.myApp AppStore" # Name of the provisioning profile to use
    )
  end
end
Building the app:
This should be a one-step process, but running the native iOS build alone only compiles the app with the data currently available; it performs no preparation and no version updates. The Flutter framework operates differently from traditional native development: the flutter build command performs essential preparatory steps that the native Xcode environment needs to compile the final application bundle, and it automatically keeps the application version synchronized with the one defined in pubspec.yaml.
NOTE: If you are writing a script such as a GitHub Actions workflow or a Bitbucket pipeline, you can simply add a step that runs "flutter build ios --release --no-codesign" before running the Fastlane lane, and skip this step here.
Step 1: Run default flutter build with no code signing
Dir.chdir "../.." do
  sh("flutter", "build", "ipa", "--no-codesign")
end
Step 2: Configure the fastlane build action
build_ios_app(
  workspace: "Runner.xcworkspace",
  configuration: "Release",
  scheme: "Runner",
  silent: true,            # hide information that's not necessary when building
  clean: true,             # whether to clean the project before building
  output_directory: "./fastlane/build",
  output_name: "app.ipa",  # final path will be ./fastlane/build/app.ipa
  export_method: "app-store-connect",
  export_options: {
    # Specifies the provisioning profiles for each app identifier.
    provisioningProfiles: {
      "com.example.myApp" => 'com.example.myApp AppStore',
    }
  }
)
Now our Fastfile will look like this:
default_platform(:ios)
platform :ios do
  desc "Automates ios deployments"
  lane :deploy_ios do
    ...
    # 5. Build the Flutter iOS app
    Dir.chdir "../.." do
      sh("flutter", "build", "ipa", "--no-codesign")
    end

    # 6. Build the iOS app using Fastlane
    build_ios_app(
      workspace: "Runner.xcworkspace",
      configuration: "Release",
      scheme: "Runner",
      silent: true,            # hide information that's not necessary when building
      clean: true,             # whether to clean the project before building
      output_directory: "./fastlane/build",
      output_name: "app.ipa",  # final path will be ./fastlane/build/app.ipa
      export_method: "app-store-connect",
      export_options: {
        # Specifies the provisioning profiles for each app identifier.
        provisioningProfiles: {
          "com.example.myApp" => 'com.example.myApp AppStore',
        }
      }
    )
  end
end
Now that we have our build and the .ipa available, we can upload it to TestFlight. We are going to use an approach that just sends the build to TestFlight and does not wait for Apple to process the submission (especially since most CI/CD pipelines charge per build minute); consequently we won't be able to push to external testers.
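Concretely, the non-waiting variant we will use looks like this; it is the same call that appears in the complete Fastfile further below:
# Upload the .ipa and return immediately; Apple processes the build in the background
upload_to_testflight(
  ipa: "./fastlane/build/app.ipa",
  skip_submission: true,                   # do not submit the build for beta review
  skip_waiting_for_build_processing: true  # saves CI minutes; external distribution is not possible here
)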
Alternatively, the following will wait for Apple to process the build and then distribute the app to the tester groups you specify:
upload_to_testflight(
  ipa: "./fastlane/build/app.ipa",
  distribute_external: true,
  # Specify the exact names of your tester groups as defined in App Store Connect
  groups: ["Internal Testers", "iOS Beta Testers (External)"],
  changelog: "New features and bug fixes for this beta release."
)
Now our final Fastlane file:
default_platform(:ios)
platform :ios do
  desc "Automates ios deployments"
  lane :deploy_ios do
    # 1. Set up App Store Connect API Key
    app_store_connect_api_key(
      key_id: "2XXXXXXXXJ",                             # Apple Auth Key ID
      issuer_id: "7xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxb", # Apple Auth Issuer ID
      key_filepath: "./fastlane/PrivateKey.p8",         # Path to the private key file (.p8)
      duration: 1200,
      in_house: false
    )

    # 2. Import the distribution certificate
    import_certificate(
      certificate_path: "./fastlane/FastlaneDistributionCertificate.p12", # Path to the .p12 certificate file
      certificate_password: "xxxxxxxx", # Password for the .p12 file
      keychain_name: "login",           # Name of the keychain to import the certificate into
      keychain_password: ""
    )

    # 3. Fetch the provisioning profile
    get_provisioning_profile(
      output_path: "./fastlane/profiles",
      filename: "appstore.mobileprovision",
      provisioning_name: "com.example.myApp AppStore", # Name of the provisioning profile in Apple Developer Portal
      ignore_profiles_with_different_name: true,
      app_identifier: "com.example.myApp" # Your app's bundle identifier (copy it from the profile creation page)
    )

    # 4. Update code signing settings in the Xcode project
    update_code_signing_settings(
      use_automatic_signing: false,
      targets: ["Runner"],
      path: "Runner.xcodeproj",
      code_sign_identity: "iPhone Distribution",
      build_configurations: ['Release'],
      sdk: "iphoneos*",
      profile_name: "com.example.myApp AppStore" # Name of the provisioning profile to use
    )

    # 5. Build the Flutter iOS app
    Dir.chdir "../.." do
      sh("flutter", "build", "ipa", "--no-codesign")
    end

    # 6. Build the iOS app using Fastlane
    build_ios_app(
      workspace: "Runner.xcworkspace",
      configuration: "Release",
      scheme: "Runner",
      silent: true,            # hide information that's not necessary when building
      clean: true,             # whether to clean the project before building
      output_directory: "./fastlane/build",
      output_name: "app.ipa",  # final path will be ./fastlane/build/app.ipa
      export_method: "app-store-connect",
      export_options: {
        # Specifies the provisioning profiles for each app identifier.
        provisioningProfiles: {
          "com.example.myApp" => 'com.example.myApp AppStore',
        }
      }
    )

    # 7. Upload the build to TestFlight
    upload_to_testflight(
      ipa: "./fastlane/build/app.ipa",
      skip_submission: true,
      skip_waiting_for_build_processing: true,
    )
  end
end
You can run and test that the file works as expected. Go to the terminal, change to the ios folder and run the following command.
bundle exec fastlane deploy_ios
At the end of the run, if Fastlane prints its success summary (or something similar), the lane ran successfully.
Step 4: Setting up Bitbucket
Now that we have successfully run our Fastlane setup locally, we can proceed to set it up for use on Bitbucket.
The idea behind doing the initial setup outside of, and independent from, Bitbucket is to let you use this configuration wherever needed, not just on Bitbucket.
For our Bitbucket configuration there are 3 variables and 2 files we will need to access securely in our script, which means we need a way to store the files securely, access them, and make the variables available in our Bitbucket environment.
Variables:
Key_id
Issuer_id
Certificate_password
Files:
Key_filepath
Certificate_path
a. Add Variables to Bitbucket
Go to Repository settings -> Pipelines -> Repository variables.
Add the following variables. Crucially, check the Secured box for each one.
| Variable Name | Value |
| --- | --- |
| `AUTH_KEY_ID` | Paste the Apple Key ID you are using for this project |
| `AUTH_KEY_ISSUER_ID` | Paste your Issuer ID |
| `CERTIFICATE_PASSWORD` | Paste the password of your .p12 certificate |
b. Add the files to Bitbucket
There are several ways of securely adding files to Bitbucket:
Using Bitbucket Downloads, where you upload files to Bitbucket and then use an access token to fetch them securely with curl in your YAML file; you can check out more here.
Creating a new repository and uploading the files into it; then, using SSH, you can clone that repository in your pipeline and access the files there. Check more here.
The last one I want to explore, and the one we will use, is simply encoding each file as a Base64 string, adding it to a Bitbucket repository variable, and then decoding it when needed.
Encode your App Store file credentials
To store your files as variables, we must encode them into text format using base64. Open a terminal, move to the fastlane folder and run the appropriate command:
# On macOS
base64 -i PrivateKey.p8 -o Authkey.txt
# On Windows (using PowerShell)
[Convert]::ToBase64String([IO.File]::ReadAllBytes("PrivateKey.p8")) | Out-File -FilePath "Authkey.txt"
This creates an Authkey.txt file containing a very long string. Copy this entire string.
Let us do the same for the .p12 distribution certificate:
# On macOS
base64 -i FastlaneDistributionCertificate.p12 -o DistributionCertificate.txt
# On Windows (using PowerShell)
[Convert]::ToBase64String([IO.File]::ReadAllBytes("FastlaneDistributionCertificate.p12")) | Out-File -FilePath "DistributionCertificate.txt"
Add Variables to Bitbucket
Go to Repository settings -> Pipelines -> Repository variables.
Add the following variables. Crucially, check the Secured box for each one.
| Variable Name | Value |
| --- | --- |
| `AUTH_KEY_FILE` | Paste the entire base64 string from `Authkey.txt`. |
| `DIST_CERT_FILE` | Paste the entire base64 string from `DistributionCertificate.txt`. |
c. Update the Fastfile and create the iOS section of the YAML file
We update our Fastfile by adding environment-variable access for the two IDs and the password only; the files themselves will be decoded and stored in the right location by the bitbucket-pipelines.yml file, which helps us keep concerns separated.
default_platform(:ios)
platform :ios do
  desc "Automates ios deployments"
  lane :deploy_ios do
    # 1. Set up App Store Connect API Key
    app_store_connect_api_key(
      # >>>> UPDATE STARTS HERE
      key_id: ENV["AUTH_KEY_ID"],           # Apple Auth Key ID
      issuer_id: ENV["AUTH_KEY_ISSUER_ID"], # Apple Auth Issuer ID
      # >>>> UPDATE ENDS HERE
      key_filepath: "./fastlane/PrivateKey.p8", # Path to the private key file (.p8)
      duration: 1200,
      in_house: false
    )

    # 2. Import the distribution certificate
    import_certificate(
      certificate_path: "./fastlane/FastlaneDistributionCertificate.p12", # Path to the .p12 certificate file
      # >>>> UPDATE STARTS HERE
      certificate_password: ENV["CERTIFICATE_PASSWORD"], # Password for the .p12 file
      # >>>> UPDATE ENDS HERE
      keychain_name: "login", # Name of the keychain to import the certificate into
      keychain_password: ""
    )

    # 3. Fetch the provisioning profile
    ...
  end
end
We have previously gone over configuring the bitbucket-pipelines.yml file in our Android CI/CD story; if you need more details, I advise you to read that article.
Unlike the Android step, where we used a Docker image with Flutter pre-installed (ghcr.io/cirruslabs/flutter), iOS apps can only be built on macOS, so we cannot use the standard Linux Docker image we used for Android. Instead, we will instruct Bitbucket to use a self-hosted macOS runner, which basically means Bitbucket uses your Mac to run the macOS-dependent tasks.
While Bitbucket offers cloud macOS runners, they often incur additional costs or burn through free build minutes very quickly. A powerful, cost-effective alternative is to use your own Mac (or a spare Mac mini) as a Self-Hosted Runner. This allows Bitbucket to send the build instructions to your machine, execute them, and report back the results.
1. Register the Runner in Bitbucket
Go to your Repository settings > Pipelines > Runners.
Click Add runner.
For System, select macOS.
Give the runner a name (e.g., MyMacMini).
Assign a label. This is crucial as it tells the YAML file which runner to pick. Let's use ios-runner.
Click Next. Bitbucket will provide a command starting with curl.
2. Configure your Mac
Open the terminal on the Mac you intend to use for building:
Paste and run the curl command provided by Bitbucket. This downloads and installs the runner client.
Once installed, the terminal will ask you to start the runner.
Crucial Note: Ensure that the user account running the runner has access to Flutter. You might need to add the path to your Flutter SDK in the runner's configuration or ensure it is in your .zshrc / .bash_profile.
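For example, if your Flutter SDK lives in your home directory, a line like the following in ~/.zshrc makes it visible to the runner; the path shown is only an illustration, so point it at your actual SDK location:
# Hypothetical install location; adjust to wherever your Flutter SDK actually lives
export PATH="$PATH:$HOME/development/flutter/bin"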
3. Update the Pipeline Configuration
Now, we tell Bitbucket to look for your specific machine instead of using their cloud infrastructure. We do this by changing the runs-on property in our YAML file to match the label we created (ios-runner).
Updated YAML for Self-Hosted Runner:
image: ghcr.io/cirruslabs/flutter:3.32.7
definitions:
  caches:
    # Cache the Flutter SDK between builds
    flutter: /flutter
    # Cache the pub packages
    pub: $HOME/.pub-cache

clone:
  depth: 1  # Perform a shallow clone for faster checkout

pipelines:
  default:
    - step: ...
    - step:
        name: Build and deploy iOS app
        # Instead of 'runs-on: macos', we use our custom label
        runs-on:
          - self.hosted
          - ios-runner
        script:
          - cd ios
          # Note: Since this is your own machine, you likely already have
          # Flutter and Pods installed. We might not need 'pod install'
          # if the state persists, but it is good practice to keep it.
          - pod install --repo-update
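The script above stops at installing Pods; the decode-and-deploy steps described earlier are not spelled out. A minimal sketch of how the script section could continue, assuming the repository variables we defined above (AUTH_KEY_FILE, DIST_CERT_FILE) and the file names our Fastfile expects, might look like this:
          # Recreate the secret files from the base64 repository variables
          # (names must match what the Fastfile expects; -D is the decode flag on macOS,
          # while GNU base64 on Linux uses -d/--decode)
          - echo $AUTH_KEY_FILE | base64 -D > fastlane/PrivateKey.p8
          - echo $DIST_CERT_FILE | base64 -D > fastlane/FastlaneDistributionCertificate.p12
          # Install the pinned Fastlane version and run the lane we built in this article
          - bundle install --path vendor/bundle
          - bundle exec fastlane deploy_ios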
And there you have it! We have successfully automated the deployment of our iOS application using Fastlane and Bitbucket.
In this session, we navigated the complexities of Apple's code signing, set up a secure communication channel with the App Store, and configured a pipeline that takes your code from a simple commit all the way to TestFlight. By moving these manual steps into an automated workflow, we ensure that our builds are consistent, reliable, and free from human error.
Remember, the core strength of this approach lies in the Fastlane configuration. Because we isolated the build logic within the Fastfile and abstracted the secrets using environment variables, you aren't tied down to Bitbucket. You can easily adapt this setup to GitHub Actions, GitLab CI, or any other CI/CD tool of your choice with minimal changes.
You can now focus on writing code, knowing that the tedious process of building, signing, and distributing your app is handled automatically.