Henrik Werdelin has spent the last 15 years helping entrepreneurs build big brands like Barkbox through his startup studio Prehype. Now, with his new New York-based venture Audos, he’s betting that AI can help him scale that process from “tens” of startups a year to “hundreds of thousands” of aspiring business owners. The timing certainly […]
Apple's Swift programming language is expanding official support to Android through a new "Android Working Group," which will improve compatibility, integration, and tooling. "As it stands today, Android apps are generally coded in Kotlin, but Apple is looking to provide its Swift coding language as an alternative," notes 9to5Google. "Apple first launched its coding language back in 2014 with its own platforms in mind, but currently also supports Windows and Linux officially." From the report: A few of the key pillars the Working Group will look to accomplish include:
- Improve and maintain Android support for the official Swift distribution, eliminating the need for out-of-tree or downstream patches
- Recommend enhancements to core Swift packages such as Foundation and Dispatch to work better with Android idioms
- Work with the Platform Steering Group to officially define platform support levels generally, and then work towards achieving official support of a particular level for Android
- Determine the range of supported Android API levels and architectures for Swift integration
- Develop continuous integration for the Swift project that includes Android testing in pull request checks
- Identify and recommend best practices for bridging between Swift and Android's Java SDK and packaging Swift libraries with Android apps
- Develop support for debugging Swift applications on Android
- Advise and assist with adding support for Android to various community Swift packages
An anonymous reader quotes a report from TechCrunch: Google's AI search features are killing traffic to publishers, so now the company is proposing a possible solution. On Thursday, the tech giant officially launched Offerwall, a new tool that allows publishers to generate revenue beyond the more traffic-dependent options, like ads.
Offerwall lets publishers give their sites' readers a variety of ways to access their content, including through options like micropayments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters. The new feature is available for free in Google Ad Manager after earlier tests with 1,000 publishers that spanned over a year. While no broad case studies were shared, India's Sakal Media Group implemented Google Ad Manager's Offerwall feature and saw a 20% revenue boost and up to 2 million more impressions in three months. Overall, publishers testing Offerwall experienced an average 9% revenue lift, with some seeing between 5% and 15%.
Edward “Big Balls” Coristine’s placement at the SSA comes after a White House official told WIRED on Tuesday that the 19-year-old had resigned from his position in government.
At Microsoft Build 2025, we introduced the Azure AI Foundry resource, the Azure AI Foundry API, and supporting tools to streamline the end-to-end development lifecycle of AI agents and applications.
These capabilities are designed to help developers accelerate time-to-market; support production workloads at scale with central governance; and give administrators a self-serve way to enable their teams’ experimentation with AI in a controlled environment.
The Azure AI Foundry resource type unifies agents, models, and tools under a single management grouping, equipped with built-in enterprise-readiness features such as tracing and monitoring, agent- and model-specific evaluations, and customizable enterprise setup configurations tailored to your organizational policies, like using your own virtual networks. This launch represents our commitment to providing organizations with a consistent, efficient, and centrally governable environment for building and operating the AI agents and applications of today and tomorrow.
New platform capabilities
The new Foundry resource type evolves our vision for Azure AI Foundry as a unified Azure platform-as-a-service offering, enabling developers to focus on building applications rather than managing infrastructure, while taking advantage of native Azure platform capabilities like Azure Data and Microsoft Defender. Previously, the Azure AI Foundry portal’s capabilities required managing multiple Azure resources and SDKs to build an end-to-end application.
New capabilities include:
The Foundry resource type gives administrators a consistent way to manage security and access to agents, models, projects, and Azure tooling integration. With this change, Azure role-based access control, networking, and policies are administered under a single Azure resource provider namespace for streamlined management. ‘Azure AI Foundry’ is a renaming of the former ‘Azure AI Services’ resource type, with access to new capabilities. While Azure AI Foundry still supports bring-your-own Azure resources, we now default to a fully Microsoft-managed experience, making it faster and easier to get started.
Foundry projects are folders that enable developers to independently create new environments for exploring new ideas and building prototypes, while managing data in isolation. Projects are child resources; they may be assigned their own admin controls, but by default they share common settings such as networking or connected resource access from their parent resource. This principle aims to take IT admins out of the day-to-day loop once security and governance are established at the resource level, enabling developers to self-serve confidently within their projects.
The Azure AI Foundry API is designed from the ground up to build and evaluate API-first agentic applications, and lets you work agnostically across model providers with a consistent contract.
The Azure AI Foundry SDK wraps the Foundry API, making it easy to integrate its capabilities into code whether your application is built in Python, C#, JavaScript/TypeScript, or Java (see the sketch after this list).
The Azure AI Foundry VS Code extension complements your workflow with capabilities to help you explore models and develop agents, and it now supports the new Foundry project type.
New built-in RBAC roles provide up-to-date role definitions to help admins differentiate access between Administrator, Project Manager, and Project User. Foundry RBAC actions follow a strict control-plane/data-plane separation, making it easier to implement the principle of least privilege.
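To make the SDK item above concrete, here is a minimal sketch of connecting to a Foundry project from Python, assuming the azure-ai-projects and azure-identity packages; the endpoint, project, and model deployment names are placeholders, and the exact client surface can vary between SDK versions.

```python
# Minimal sketch (not an official quickstart): connect to a Foundry project
# and create an agent. Assumes `pip install azure-ai-projects azure-identity`;
# the endpoint, project, and model names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

project = AIProjectClient(
    endpoint="https://<your-foundry-resource>.services.ai.azure.com/api/projects/<your-project>",
    credential=DefaultAzureCredential(),  # reuses your Azure CLI / managed identity login
)

# Create a simple agent against a model deployed in the project.
agent = project.agents.create_agent(
    model="gpt-4o",                      # name of your model deployment (placeholder)
    name="docs-helper",
    instructions="Answer questions about our internal documentation.",
)
print(f"Created agent {agent.id}")
```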
Why we built these new platform capabilities
If you are already building with Azure AI Foundry, these capabilities are meant to simplify platform management, enhance workflows that span multiple models and tools, and reinforce governance as AI workloads grow more complex.
The emergence of generative AI fundamentally changed how customers build AI solutions, requiring capabilities that span multiple traditional domains. We launched Azure AI Foundry to provide a comprehensive toolkit for exploring, building, and evaluating this new wave of GenAI solutions. Initially, this experience was backed by two core Azure services: Azure AI Services for accessing models, including those from OpenAI, and Azure Machine Learning’s hub for orchestration and customization tooling.
With AI agents now composing models and tools, and production workloads demanding central governance across both, we are investing in bringing the management of agents, models, and their tooling integration layer together to best serve these workloads’ requirements.
The Azure AI Foundry resource and Foundry API are purposefully designed to unify and simplify the composition and management of core building blocks of AI applications:
Models
Agents & their tools
Observability, Security, and Trust
In this new era of AI, there is no one-size-fits-all approach to building AI agents and applications. That's why we designed the new platform as a comprehensive AI factory with modular, extensible, and interoperable components.
Foundry Project vs Hub-Based Project
Going forward, new agent- and model-centric capabilities will land only on the new Foundry project type. This includes access to the generally available Foundry Agent Service and the Foundry API.
While we are transitioning to Azure AI Foundry as a managed platform service, the hub-based project type remains accessible in the Azure AI Foundry portal for GenAI capabilities that are not yet supported by the new resource type. Hub-based projects will continue to support custom model training use cases via Azure Machine Learning Studio, the CLI, and the SDK.
For a full overview of capabilities supported by each project type, see this support matrix.
Azure AI Foundry Agent Service
The Azure AI Foundry Agent Service experience, now generally available, is powered by the new Foundry project. Existing customers exploring the GA experience will need the new AI Foundry resource.
All new investments in the Azure AI Foundry Agent Service are focused on the Foundry project experience. Foundry projects act as secure units of isolation and collaboration — agents within a project share:
File storage
Thread storage (i.e. conversation history)
Search indexes
You can also bring your own Azure resources (e.g., storage accounts or your own virtual network) to support compliance and control over sensitive data.
Start Building with Foundry
Azure AI Foundry is your foundation for scalable, secure, and production-grade AI development. Whether you're building your first agent or deploying a multi-agent workforce at scale, Azure AI Foundry is ready for what’s next.
Welcome to episode 308 of The Cloud Pod – where the forecast is always cloudy! Justin and Matt are on hand and ready to bring you an action-packed episode. Unfortunately, this one is also lullaby free. Apologies. This week we’re talking about Databricks and Lakebridge, Cedar Analysis, Amazon Q, Google’s little hiccup, and updates to SQL – plus so much more! Thanks for joining us.
Titles we almost went with this week:
KV Phone Home: When Your Key-Value Store Goes AWOL
When Your Coreless Service Finds Its Core Problem
Oracle’s Vanity Fair: Pretty URLs for Pretty Penny
From Warehouse to Lakehouse: Your Free Ticket to Cloud Town
1⃣ Databricks Uno: Because One is the Loneliest Number
Free as in Beer, Smart as in Data Science
Cedar Analysis: Because Your Authorization Policies Wood Never Lie
Meta is finalizing a $14 billion investment for a 49% stake in Scale AI, with CEO Alexandr Wang joining to lead a new AI research lab at Meta.
This follows similar moves by Google and Microsoft, which have acquired AI talent through investments rather than outright acquisitions to avoid regulatory scrutiny.
Scale AI specializes in data labeling and annotation services critical for training AI models, serving major clients including OpenAI, Google, Microsoft, and Meta.
The company’s expertise covers approximately 70% of all AI models being built, providing Meta with valuable intelligence on competitor approaches to model development.
The deal reflects Meta’s struggles with its Llama AI models, particularly the underwhelming reception of Llama 4 and delays in releasing the more powerful “Behemoth” model due to concerns about competitiveness with OpenAI and DeepSeek. Meta recently reorganized its GenAI unit into two divisions following these setbacks.
Wang brings both technical AI expertise and business acumen, having built Scale AI from a 2016 startup to a $14 billion valuation. His experience includes defense contracts and the recent Defense Llama collaboration with Meta for national security applications.
For cloud providers and developers, this consolidation signals increased competition in AI infrastructure and services, as Meta seeks to strengthen its position against OpenAI’s consumer applications and model capabilities through enhanced data preparation and training methodologies.
03:29 Matt – “It’s interesting, especially the first part of this where companies are trying to acquire AI talent through investments rather than directly hiring people – and hiring them away from other companies. It’s going to be an interesting trend to see if it continues on in the industry where they just keep doing it this way. They just acquire small companies and medium (or large in this case) in order to continue to grow their teams or to at least augment their teams in that way. Or if they’re going to try to build their own in-house units too.”
Databricks Free Edition provides access to the same data and AI tools used by enterprise customers, removing the cost barrier for students and hobbyists to gain hands-on experience with production-grade platforms.
The offering addresses the growing skills gap in AI/ML roles, where job postings have increased 74% annually over four years and 66% of business leaders require AI skills for new hires.
Free Edition includes access to Databricks’ training resources and industry-recognized certifications, allowing users to validate their skills on the same platform used by major companies.
Universities like Texas A&M are already integrating Free Edition into their curriculum, enabling students to gain practical experience with enterprise data tools before entering the workforce.
This move positions Databricks to capture mindshare among future data professionals while competing with other cloud providers’ free tiers and educational offerings.
Databricks One creates a simplified interface specifically for business users to access data insights without needing technical expertise in clusters, queries, or notebooks.
The consumer access entitlement is available now, with the full experience entering beta later this summer.
The platform provides three key capabilities for non-technical users: AI/BI Dashboards, Genie for natural language data queries, and interaction with Databricks Apps through a streamlined interface designed to minimize complexity.
Security and governance remain centralized through Unity Catalog, allowing administrators to expand access to business users while maintaining existing compliance and auditing controls without changing their governance strategy.
The service will be included at no additional license fee for existing Databricks Intelligence Platform customers, potentially expanding data access across organizations without requiring additional technical training or resources.
Future roadmap includes expanding from single workspace access to account-wide asset visibility, positioning Databricks One as a centralized hub for business intelligence across the entire Databricks ecosystem.
08:42 Justin – “I think the Databricks Free Edition is a really strong move on their part… I can play with it, see what it does and kick the tires on it and be interested in it as a hobbyist. And then I can bring it back to my day job and say, hey, I was using Databricks over the weekend and I did a thing and I think it could work for us at work. Being able to get access to these tools and these types of capabilities to play with, I think it’s a huge advantage. Everything’s moving so fast right now, that unless you have access to these tools, you feel like you’re left behind.”
The collaboration uses AWS cloud infrastructure to process massive datasets from fusion experiments.
The project leverages AWS SageMaker and high-performance computing resources to analyze terabytes of sensor data from fusion reactors, training models that can predict plasma instabilities milliseconds before they occur. This predictive capability could prevent costly reactor damage and accelerate fusion development timelines.
Cloud computing enables fusion researchers to scale their computational workloads dynamically, running complex simulations and ML training jobs that would be prohibitively expensive with on-premises infrastructure.
AWS provides the elastic compute needed to process years of experimental data from multiple fusion facilities worldwide.
The partnership demonstrates how cloud-based AI/ML services are becoming essential for scientific computing applications that require massive parallel processing and real-time analysis.
Fusion researchers can now iterate on models faster and share findings globally through cloud collaboration tools.
This application of cloud AI to fusion energy could accelerate the path to commercial fusion power by reducing experimental downtime and improving reactor designs through better predictive models. Success here would validate cloud platforms as critical infrastructure for next-generation energy research.
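The collaboration's actual pipeline isn't detailed here, but as a rough illustration of the pattern described above (elastic training over large archives of experimental sensor data), a SageMaker training job might look like the sketch below; the bucket, container image, role, and instance choices are placeholders rather than the project's real configuration.

```python
# Illustrative sketch only: launch a SageMaker training job over sensor data
# staged in S3. Assumes `pip install sagemaker` and an existing execution role;
# every name below (bucket, image, role) is a placeholder.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/fusion-trainer:latest",
    role="arn:aws:iam::<account>:role/SageMakerExecutionRole",
    instance_type="ml.p4d.24xlarge",   # scale instance type/count with data volume
    instance_count=4,
    output_path="s3://<your-bucket>/models/plasma-instability/",
    sagemaker_session=session,
)

# Years of experimental shots staged in S3 become a training channel.
estimator.fit({"train": "s3://<your-bucket>/fusion-sensor-data/train/"})
```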
Amazon Q Developer now supports Model Context Protocol (MCP) servers in its IDE plugins, which eliminates manual context switching between browser tabs and allows Q Developer to automatically fetch project requirements and design specs and update task statuses.
MCP provides a standardized way for LLMs to integrate with applications, share context, and interact with APIs. Developers can configure MCP servers with either Global scope (across all projects) or Workspace scope (current IDE only), with granular permissions for individual tools including Ask, Always Allow, or Deny options.
The practical implementation shown demonstrates fetching Jira issues, moving tickets to “In Progress”, analyzing Figma designs for technical requirements, and implementing code changes based on combined context from both tools. This integration allows Q Developer to generate more accurate code by understanding both business requirements and design specifications simultaneously.
This feature builds on Q Developer’s existing agentic coding capabilities which already included executing shell commands and reading local files. The addition of MCP support extends these capabilities to any tool that implements the protocol, with AWS providing an open-source MCP Servers repository on GitHub for additional integrations.
For AWS customers, this reduces development friction by keeping developers in their IDE while maintaining full context from project management and design tools. The feature is available now in Q Developer’s IDE plugins with no additional cost beyond standard Q Developer pricing.
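For a sense of what sits behind such a configuration, here is a minimal sketch of a custom MCP server built with the official MCP Python SDK's FastMCP helper; the issue-tracker tools are hypothetical stand-ins for integrations like the Jira and Figma servers mentioned above.

```python
# Minimal sketch of a custom MCP server (hypothetical "issue tracker" tools).
# Assumes `pip install mcp`; Q Developer (or any MCP client) can be pointed at
# this server in its MCP configuration and granted Ask/Always Allow/Deny
# permissions per tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("issue-tracker")

# A fake in-memory backlog standing in for a real project-management API.
ISSUES = {"PROJ-1": "Add login page", "PROJ-2": "Fix checkout bug"}

@mcp.tool()
def get_issue(issue_id: str) -> str:
    """Return the summary of a backlog issue by its ID."""
    return ISSUES.get(issue_id, f"No issue found with id {issue_id}")

@mcp.tool()
def move_to_in_progress(issue_id: str) -> str:
    """Pretend to transition an issue to 'In Progress'."""
    return f"{issue_id} moved to In Progress"

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which IDE plugins typically use
```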
13:26 Justin – “I mean, if you think Q Developer is the best tool for you, then more power to you, and I’m not going to stop you. But I am glad to see this get added to one more place.”
AWS WAF now includes automatic Layer 7 DDoS protection that detects and mitigates attacks within seconds, using machine learning to establish traffic baselines in minutes and identify anomalies without manual rule configuration.
The managed rule group works across CloudFront, ALB, and other WAF-supported services, reducing operational overhead for security teams who previously had to manually configure and tune DDoS protection rules.
Available to all AWS WAF and Shield Advanced subscribers in most regions, the service automatically applies mitigation rules when traffic deviates from normal patterns, with configurable responses including challenges or blocks.
This addresses a critical gap in application-layer protection where traditional network-layer DDoS defenses fall short, particularly important as L7 attacks become more sophisticated and frequent.
Pricing follows standard AWS WAF managed rule group costs, making enterprise-grade DDoS protection accessible without requiring dedicated security infrastructure or expertise.
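As a rough sketch of how a managed rule group like this is attached, the boto3 call below creates a web ACL that references an AWS managed rule group. The anti-DDoS rule group name used here is an assumption (verify the exact identifier in the WAF console), and the other values are placeholders.

```python
# Sketch: attach an AWS managed rule group to a new web ACL with boto3.
# The rule group name below is an assumption about the new L7 anti-DDoS
# group; verify the exact name in the WAF console or documentation.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="app-web-acl",
    Scope="REGIONAL",                       # use "CLOUDFRONT" for CloudFront distributions
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "anti-ddos",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesAntiDDoSRuleSet",  # assumed name
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "antiDDoS",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "appWebAcl",
    },
)
```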
14:56 Justin – “I have to say that I’ve used the WAF now quite a bit – as well as Shield and CloudFront. Compared to using Cloudflare, they’re so limited in what you can do with these things. I so much prefer Cloudflare over trying to tune AWS WAF properly.”
Powertools for AWS Lambda now includes a Bedrock Agents Function utility that eliminates boilerplate code when building Lambda functions that respond to Amazon Bedrock Agent action requests.
The utility handles parameter injection and response formatting automatically, letting developers focus on business logic instead of integration complexity.
This utility integrates seamlessly with existing Powertools features like Logger and Metrics, providing a production-ready foundation for AI applications. Available for Python, TypeScript, and .NET, it standardizes how Lambda functions interact with Bedrock Agents across different programming languages.
For organizations building agent-based AI solutions, this reduces development time and potential errors in the Lambda-to-Bedrock integration layer. The utility abstracts away the complex request/response patterns required for agent actions, making it easier to build and maintain serverless AI applications.
Developers can get started by updating to the latest version of Powertools for AWS Lambda in their preferred language. Since this is an open-source utility addition, there are no additional costs beyond standard Lambda and Bedrock usage fees.
This release signals AWS’s continued investment in simplifying AI application development by providing purpose-built utilities that handle common integration patterns. It addresses a specific pain point for developers who previously had to write custom code to properly format Lambda responses for Bedrock Agents.
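As an illustration of the pattern rather than the new utility itself, here is a sketch using Powertools' existing BedrockAgentResolver event handler for Python; the time-lookup endpoint is a toy example, and the new function-based utility presumably exposes a similar resolver-style interface.

```python
# Sketch using Powertools' Bedrock Agents event handler (Python). The tool
# endpoint here is a toy example; the resolver takes care of parsing the
# agent's action request and formatting the response payload.
from datetime import datetime, timezone

from aws_lambda_powertools.event_handler import BedrockAgentResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

app = BedrockAgentResolver()

@app.get("/current_utc_time", description="Returns the current UTC time")
def current_utc_time() -> str:
    return datetime.now(timezone.utc).isoformat()

def lambda_handler(event: dict, context: LambdaContext) -> dict:
    # Powertools maps the Bedrock Agent action request onto the matching route
    # and builds the structured response the agent expects.
    return app.resolve(event, context)
```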
20:21 Matt – “It’s great to see them making these more accessible to *not* subject matter experts and to the general developer. So would I want to take my full app and go to full production leveraging Powertools? No, but it’s good to let the standard developer who just wants to play with something, learn, and figure out how to do it get something up and running decently easily.”
AWS releases Cedar Analysis as an open-source toolkit for verifying authorization policies, addressing the challenge of ensuring fine-grained access controls work correctly across all scenarios rather than just test cases. The toolkit includes a Cedar Symbolic Compiler that translates policies into mathematical formulas and a CLI tool for policy comparison and conflict detection.
The technology uses SMT (Satisfiability Modulo Theories) solvers and formal verification with Lean to provide mathematically proven soundness and completeness, ensuring analysis results accurately reflect production behavior.
This approach can answer questions like whether two policies are equivalent, if changes grant unintended permissions, or if policies contain conflicts or redundancies.
Cedar itself has gained significant traction with 1.17 million downloads and production use by companies like MongoDB and StrongDM, making robust analysis tools increasingly important as applications scale. The open source release under Apache 2.0 license allows developers to independently verify policies and researchers to build upon the formal methods foundation.
The practical example demonstrates how subtle policy refactoring errors can be caught – splitting a single policy into multiple policies accidentally restricted owner access to private photos, which the analysis tool identified before production deployment. This capability helps prevent authorization bugs that could lead to security incidents or access disruptions.
For AWS customers using services like Verified Permissions (which uses Cedar), this provides additional confidence in policy correctness and a path for building custom analysis tools tailored to specific organizational needs. The formal verification aspect also positions Cedar as a research platform for advancing authorization system design.
22:57 Justin – “We’re using StrongDM in the day job, and it is very nice to see Cedar getting used in lots of different ways, particularly the mathematical proofs being applied to policies.”
A misconfiguration in Google Cloud’s IAM systems caused widespread outages affecting App Engine, Firestore, Cloud SQL, BigQuery, and Memorystore, demonstrating how a single identity management failure can cascade across multiple cloud services and impact thousands of businesses globally.
The incident highlighted the interconnected nature of modern cloud infrastructure as services like Cloudflare Workers, Spotify, Discord, Shopify, and UPS experienced partial or complete downtime due to their dependencies on Google Cloud components.
Google Workspace applications including Gmail, Drive, Docs, Calendar, and Meet all experienced failures, showing how IAM issues can affect both infrastructure services and end-user applications simultaneously.
The outage underscores the critical importance of IAM redundancy and configuration management in cloud environments, as even major providers like Google can experience service-wide disruptions from a single misconfiguration.
While AWS appeared largely unaffected, Amazon’s Twitch service may have experienced issues due to network-level interdependencies, illustrating how cloud outages can have ripple effects across provider boundaries through shared DNS, CDN, or authentication services.
26:11 Matt – “The SRE team at Google was already triaging within 2 minutes and had identified the root cause within 10 minutes – that’s an impressive response time.”
Cloudflare experienced a 2 hour 28 minute global outage on June 12, 2025 affecting Workers KV, WARP, Access, Gateway, Images, Stream, Workers AI, Turnstile, and other critical services due to a third-party storage provider failure that exposed architectural vulnerabilities in their infrastructure.
The incident revealed a critical single point of failure in Workers KV’s central data store, on which many Cloudflare products depend, despite Workers KV being designed as a “coreless” service that should run independently across all locations.
During the outage window, 91% of Workers KV requests failed, cascading failures into dependent services, while core services like DNS, Cache, proxy, and WAF remained operational, highlighting the blast radius of shared infrastructure dependencies.
Cloudflare is accelerating migration of Workers KV to their own R2 storage infrastructure and implementing progressive namespace re-enablement tooling to prevent future cascading failures and reduce reliance on third-party providers.
This marks at least the third significant R2-related outage in recent months (incidents on March 21 and February 6, 2025 were also mentioned), raising questions about the stability of Cloudflare’s storage infrastructure during their architectural transition period.
29:31 Justin – “I think the failure here is that they’re running the entire KV on top of GCS or GCP in a way that left them impacted by this, when the blast radius should be spread out across multiple clouds. Cloudflare is a partner of AWS, GCP, and Azure. They should be able to make things redundant – because I don’t necessarily know that their infrastructure is going to be better than anyone else’s infrastructure.”
Google Cloud has developed an automated tool that scans open-source packages and Docker images for exposed GCP credentials like API keys and service account keys, processing over 5 billion files across hundreds of millions of artifacts from repositories like PyPI, Maven Central, and DockerHub.
The system detects and reports leaked credentials within minutes of publication, matching the speed at which malicious actors typically exploit them, with automatic remediation options including disabling compromised service account keys based on customer-configured policies.
Unlike GitHub and GitLab’s source code scanning, this tool specifically targets built packages and container images where credentials often hide in configuration files, compiled binaries, and build scripts – areas traditionally overlooked in security scanning.
Google plans to expand beyond GCP credentials to include third-party credential scanning later this year, positioning this as part of their broader deps.dev ecosystem for open-source security analysis.
For GCP customers publishing open-source software, this provides free automated protection against credential exposure without requiring additional tooling or workflow changes, addressing what Mandiant reports as the second-highest cloud attack vector at 16% of investigations.
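This is not Google's scanner, but as a toy illustration of the kind of pattern matching involved, the sketch below walks a directory of built artifacts looking for strings shaped like Google API keys (the publicly documented AIza prefix) and for service-account key JSON markers; real systems also validate hits before reporting them.

```python
# Toy illustration of credential scanning over built artifacts; not Google's
# tooling. Looks for Google API key-shaped strings and service-account JSON
# markers in every file under a directory.
import pathlib
import re

API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_tree(root: str) -> list[tuple[str, str]]:
    findings: list[tuple[str, str]] = []
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in API_KEY_RE.findall(text):
            findings.append((str(path), f"possible API key: {match[:8]}..."))
        if '"private_key"' in text and '"client_email"' in text:
            findings.append((str(path), "possible service-account key file"))
    return findings

if __name__ == "__main__":
    for file_path, hit in scan_tree("dist/"):
        print(f"{file_path}: {hit}")
```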
The moral of the story? Please patch. We know it’s a pain. But please, patch.
33:55 Matt – “I feel like AWS has had this, where they scan the GitHub commits, for years – so I appreciate them doing it, don’t get me wrong, but also, I feel like this has been done before?”
Google’s new multi-cloud region discovery API returns rich metadata including region proximity data (currently only for GCP regions), territory codes for compliance requirements, and carbon footprint information to support sustainability initiatives.
Data freshness is maintained at 24 hours for active regions with automatic removal of deprecated locations.
Key use cases include optimizing multi-cloud deployments by identifying the nearest GCP region to existing AWS/Azure/OCI infrastructure, ensuring data residency compliance by filtering regions by territory, and automating location selection in multi-cloud applications. This addresses a common pain point where organizations maintain hard-coded lists of cloud regions across providers.
While AWS and Azure offer their own region discovery APIs, Google’s approach of providing cross-cloud visibility in a single service is unique among major cloud providers. The inclusion of sustainability metrics like carbon footprint data aligns with Google’s broader environmental commitments.
Google’s C4D VMs are now generally available, powered by 5th Gen AMD EPYC processors (Turin) and delivering up to 80% higher throughput for web serving and 30% better performance for general computing workloads compared to C3D.
The new instances scale up to 384 vCPUs and 3TB of DDR5 memory, with support for Hyperdisk storage offering up to 500K IOPS.
C4D introduces Google’s first AMD-based Bare Metal instances (arriving in the coming weeks), providing direct server access for workloads requiring custom hypervisors or specialized licensing. The instances also feature next-gen Titanium Local SSD with 35% lower read latency than previous generations.
Performance benchmarks show C4D delivers 25% better price-performance than C3D for general computing and up to 20% better than comparable offerings from other cloud providers. For database workloads like MySQL and Redis, C4D shows 35% better price-performance than competitive VMs, with MySQL seeing up to 55% faster query processing.
The new VMs support AVX-512 with a 512-bit datapath and 50% more memory channels, making them well-suited for CPU-based AI inference workloads with up to 75% price-performance improvement for recommendation inference. C4D also includes confidential computing support via AMD SEV for regulated workloads.
C4D is available in 12 regions and 28 zones at launch, with a 30-day uptime window between planned maintenance events. Early adopters like AppLovin report 40% performance improvements, while Verve Group sees 191% faster ad serving compared to N2D instances.
Google Cloud is first to market with G4 VMs featuring NVIDIA RTX PRO 6000 Blackwell GPUs, combining 8 GPUs with AMD Turin CPUs (up to 384 vCPUs) and delivering 4x compute/memory and 6x memory bandwidth compared to G2 VMs. This positions GCP ahead of AWS and Azure in offering Blackwell-based instances for diverse workloads beyond just AI training.
The G4 instances target a broader range of use cases than typical AI-focused GPUs, including cost-efficient inference, robotics simulations, generative AI content creation, and next-generation game rendering with 2x ray-tracing performance. Key customers include Snap for LLM inference, WPP for robotics simulation, and major gaming companies for next-gen rendering.
With 768GB GDDR7 memory, 12 TiB local SSD, and support for Multi-Instance GPU (MIG), G4 VMs enable running multiple workloads per GPU for better cost efficiency. The instances integrate with Vertex AI, GKE, and Hyperdisk (500K IOPS, 10GB/s throughput) for complete AI inference pipelines.
G4 supports NVIDIA Omniverse workloads natively, opening opportunities in manufacturing, automotive, and logistics for digital twins and real-time simulation. The combination of high CPU-to-GPU ratio (48:1) and Titanium’s 400 Gbps networking makes it suitable for complex simulations where CPUs orchestrate graphics workloads.
Currently in preview, with access through Google Cloud sales representatives and global availability expected by year-end. Pricing has not been disclosed, but the positioning suggests premium pricing for specialized workloads requiring both AI and graphics capabilities.
Cross-tenant customer-managed keys (CMK) for Premium SSD v2 and Ultra Disks are now in preview in select regions.
Encrypting managed disks with a cross-tenant CMK means the key can be hosted in an Azure Key Vault that lives in a different Microsoft Entra tenant than the disk.
This will allow customers leveraging SaaS solutions that support CMK to use cross-tenant CMK with Premium SSD v2 and Ultra Disks without ever giving up complete control. (I have doubts.)
40:31 Justin – “The only way this makes sense to me is if you have a SaaS application where you’re getting single servers or a small cluster of servers per tenant, which I wouldn’t want to manage. But if that’s what you have, then this may make sense to you. But this has a pretty limited use case, in my opinion.”
Azure Carbon Optimization reaches general availability, allowing organizations to track and reduce their cloud carbon footprint alongside cost optimization efforts.
GitHub Copilot’s Next Edit Suggestions (NES) in Visual Studio 2022 17.14 predicts and suggests your next code edit anywhere in the file, not just at the cursor location, using AI to analyze previous edits and suggest insertions, deletions, or mixed changes.
The feature goes beyond simple code completion by understanding logical patterns in your editing flow, such as refactoring a 2D Point class to 3D or updating legacy C++ syntax to modern STL, making it particularly useful for systematic code transformations.
NES presents suggestions as inline diffs with red/green highlighting and provides navigation hints with arrows when the suggested edit is on a different line, allowing developers to Tab through related changes across the file.
Early user feedback indicates accuracy issues with less common frameworks like Pulumi in C# and outdated training data for rapidly evolving APIs, highlighting the challenge of AI suggestions for niche or fast-changing technologies.
While this enhances Visual Studio’s AI-assisted development capabilities, the feature currently appears limited to Visual Studio users rather than being a cloud-based service accessible across platforms or IDEs.
45:36 Matt – “It’s a pretty cool feature and I like the premise of it, especially when you are refactoring legacy code or anything along those lines where it’s like, hey, don’t forget this thing over here – because on the flip side, while it’s distracting, it also would be fairly nice to not run everything, compile it, and then have the error because I forgot to refactor this one section out.”
Oracle raised its fiscal 2026 revenue forecast to $67 billion, projecting 16.7% annual growth driven by cloud services demand, with total cloud growth expected to accelerate from 24% to over 40%.
Oracle Cloud Infrastructure (OCI) is gaining traction through multi-cloud strategies and integration with Oracle’s enterprise applications, though this growth primarily benefits existing Oracle customers rather than attracting new cloud-native workloads.
The company’s approach of embedding generative AI capabilities into its cloud applications at no additional cost contrasts with AWS, Azure, and GCP’s usage-based AI pricing models, potentially lowering adoption barriers for Oracle’s enterprise customer base.
Fourth quarter cloud services revenue reached $11.70 billion with 14% year-over-year growth, suggesting Oracle is capturing market share but still trails the big three cloud providers who report quarterly cloud revenues of $25+ billion.
Oracle’s growth story depends heavily on enterprises already invested in Oracle databases and applications migrating to OCI, making it less relevant for organizations without existing Oracle dependencies.
48:18 Justin – “Oracle is actually a really simple cloud. It is just Solaris boxes, as a cloud service to you. It’s all very server-based. That’s why they have iSCSI and they have fiber channels and they have all these things that are very data center centric. So if you love the data center, and you just want a cloud version of it, Oracle cloud is not bad for you. Or if you have a ton of egress traffic, the cost advantages of their networking is far superior to any of the other cloud providers. So there are benefits as much as I hate to say it.”
Oracle announces AMD Instinct MI355X GPUs on OCI, claiming 2X better price-performance than previous generation and offering zettascale AI clusters with up to 131,072 GPUs for large-scale AI training and inference workloads.
This positions Oracle as one of the first hyperscalers to offer AMD’s latest AI accelerators, though AWS, Azure, and GCP already have established GPU offerings from NVIDIA and their own custom silicon, making Oracle’s differentiation primarily about AMD partnership and pricing.
The MI355X delivers triple the compute power and 50% more high-bandwidth memory than its predecessor, with OCI’s RDMA cluster network architecture supporting the massive 131,072 GPU configuration for customers needing extreme scale.
Oracle emphasizes open-source compatibility and flexibility, which could appeal to customers wanting alternatives to NVIDIA’s CUDA ecosystem, though the real test will be whether the price-performance claims hold up against established solutions.
The announcement targets customers running large language models and agentic AI workloads, but adoption will likely depend on actual benchmarks, software ecosystem maturity, and whether Oracle can deliver on the promised cost advantages.
Oracle now allows custom domain names for APEX applications on Autonomous Database, eliminating the need for awkward database-specific URLs like apex.oraclecloud.com/ords/f?p=12345 in favor of cleaner addresses like myapp.company.com.
This vanity URL feature requires configuring DNS CNAME records and SSL certificates through Oracle’s Certificate Service, adding operational complexity compared to AWS CloudFront or Azure Front Door which handle SSL automatically.
The feature is limited to paid Autonomous Database instances only, excluding Always Free tier users, which may restrict adoption for developers testing or running small applications.
While this brings Oracle closer to parity with other cloud providers’ application hosting capabilities, the implementation requires manual certificate management and DNS configuration that competitors have largely automated.
The primary benefit targets enterprises already invested in Oracle’s ecosystem who need professional-looking URLs for customer-facing APEX applications without exposing underlying database infrastructure details.
Closing
And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter or Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod