
Issue 737


I’m back from my vacation, which was exactly what I needed, and I now find myself writing the last issue of 2025! How did that happen? 😱

To mark the end of the year, Apple announced this year’s App Store Award winners! You can see the full list of awards and winners on the developer site, although I actually prefer their press release writeup as it includes screenshots of the apps. As always, there are some outstanding winners in the list. I especially liked the Be My Eyes app, which restored some of my faith in humanity. It’s such a simple idea, and it brings nothing but good into the world. ❤️ Congratulations to everyone who was nominated, and also to the winners! These awards, combined with the Apple Design Awards, have been inspiring people to make better apps for many years, and long may they both continue.

As always, it’s been a pleasure to have you all continue to read this newsletter for another year. This year has been a bit exhausting, but I’m tremendously excited about 2026, and after last week’s vacation I’m feeling refreshed and ready to go.

Happy holidays, and I’ll speak to you all in the new year! 🎊

– Dave Verwer

RevenueCat Paywalls: Build & iterate subscription flows faster

RevenueCat Paywalls just added a steady stream of new features: more templates, deeper customization, better previews, and new promo tools. Check the Paywalls changelog and keep improving your subscription flows as new capabilities ship.

News

Apple tightens App Review Guidelines to crack down on copycat apps

This is a tricky one, in my opinion. I agree that copycat apps in the App Store are a bad thing, and these new guidelines to strengthen protection against copycats are in principle a good thing, but the devil is always in the details. Could this affect innocent developers who are simply trying to compete, as well as “pure” copycats? The rules look good as written, but the review process is imperfect.

Tools

Building iOS and Mac apps in Zed: SwiftUI Previews

Adrian Ross, following up on his previous article about developing apps with Zed, this time covers how you can keep Xcode working in the background and use a small AppleScript to get SwiftUI previews for whatever file you’re working on in Zed. 👍

Code

What to fix in AI-generated Swift code

The LLM coding agents are pretty good at writing Swift and SwiftUI. However, because both these technologies have changed significantly in the years they have been around, the agents will sometimes use old or outdated techniques. You can fix some of that with a set of ground rules, and Paul Hudson has put together a great collection to get started with. Oh, and even if you don’t use a coding agent, the list of tips in Paul’s AGENTS.md is good for humans, too! 🤖


Using Swift SDKs with Raspberry Pis

Can you run Swift on a Raspberry Pi? Of course you can! I enjoyed this article from Jesse Zamora where he digs into the various Pi devices (Pies? 🥧) that can run it, and then goes through a step-by-step example of using the swift-sdk-generator to get hummingbird-examples running on either a 64-bit or 32-bit Pi. Follow along and have some Swifty fun with that Pi that is still sitting in its box on your shelf.


Tessera

What a very cool package from Dennis Müller:

Tessera is a Swift package that turns a single generated tile composed of arbitrary SwiftUI views into an endlessly repeating, seamlessly wrapping pattern.

It only does that one job, but check out the README file for examples of how good it looks. It’s perfect for subtle backgrounds in an iOS app.

Business and Marketing

Make your app visible with alternative app names

I liked this quick tip from Wesley de Groot on how adding alternate app names can make your app easier to launch and find in Spotlight. It’ll only take you three minutes to implement, which is a great value for your time! 👍

And finally...

It’s not Swift-related, but here’s something to keep you busy over the holidays!


Now In Android #123


Android XR, the Android Studio Otter 2 Feature Drop, Android 16 QPR2, Compose updates, Jetpack Navigation 3, performance, and much, much, much more.

Welcome to Now in Android, your ongoing guide to what’s new and notable in the world of Android development.

You can catch a short subset of what’s in this gigantified update on YouTube, but read on for the full story.

Start building for glasses, new devices for Android XR and more in The Android Show | XR Edition 👓

The Android Show | XR Edition introduced updates to the Android XR platform, focusing on new devices and developer tools. The platform is expanding to include lightweight AI and Display AI glasses from Samsung, Gentle Monster, and Warby Parker, integrating Gemini for features like live translation and visual search. Uber is exploring AI Glasses for contextual directions. Wired XR glasses, such as XREAL’s Project Aura, are scheduled for release next year.

Android XR SDK Developer Preview 3 offers increased stability for headset APIs and opens development for AI Glasses. This includes new libraries like Jetpack Compose Glimmer for transparent display UI and Jetpack Projected for extending your mobile apps to glasses. ARCore for Jetpack XR gains Geospatial capabilities, and new APIs enable detection of device field-of-view for adaptive UIs.

The platform, built on OpenXR, supports Unreal Engine development with a Google vendor plugin for hand tracking coming next year, and Godot Engine now includes Android XR support via its OpenXR vendor plugin v4.2.2 stable.

Start building for glasses, new devices for Android XR and more in The Android Show | XR Edition

Check out #TheAndroidShow in 60 seconds for a quick video overview of what we covered.

Build for AI Glasses with the Android XR SDK Developer Preview 3 and unlock new features for immersive experiences 🚀

Android XR SDK Developer Preview 3 is now available, enabling you to build augmented experiences for AI Glasses in addition to immersive experiences for XR Headsets.

Key updates include:

  1. For AI Glasses: New tools and libraries such as Jetpack Projected for accessing sensors, speakers, and displays; Jetpack Compose Glimmer with UI components optimized for display AI Glasses; and an AI Glasses emulator within Android Studio. ARCore for Jetpack XR now supports motion tracking and geospatial capabilities for augmented experiences on AI Glasses.
  2. For Immersive Experiences (XR Headsets and XR Glasses): Increased stability for the existing headset APIs, as noted above.

To begin building, update to Android Studio Canary (Otter 3, Canary 4 or later) and emulator version 36.4.3 Canary or later, then visit developer.android.com/xr for libraries and samples.

Build for AI Glasses with the Android XR SDK Developer Preview 3 and unlock new features for immersive experiences

Check out The Android Show XR Edition Recap to get caught up.

Android Studio Otter 2 Feature Drop is stable! 🚀

The Android Studio Otter 2 Feature Drop is now stable. This release introduces updates to Agent Mode, including the Android Knowledge Base for improved accuracy and the option to use the Gemini 3 model. You can now use Backup and Sync to maintain consistent IDE settings across your machines and opt in to receive communications from the Android Studio team. Additionally, this release incorporates stability and performance enhancements from the IntelliJ IDEA 2025.2 platform, such as Kotlin compiler and terminal improvements.

Android Studio Otter 2 Feature Drop is stable!

We’ve released a bunch of shorts to highlight important Otter 2 features, such as The Gemini 3 model is now available for AI assistance in Android Studio, Agent Mode’s Android knowledge, and Android Studio Otter Backup and Sync. Also check out Top 4 agentic experiences for Gemini in Android Studio, and What’s new in Android Studio’s AI Agent.

Android 16 QPR2 is Released 🚀

Android 16 QPR2 has been released, marking Android’s first minor SDK release. This release aims to accelerate innovation by delivering new APIs and features outside of major yearly platform releases.

Key updates include:

  • Minor SDK Release: You can now check for new APIs using SDK_INT_FULL and VERSION_CODES_FULL in the Build class as of Android 16; a short sketch follows this list.
  • Expanded Dark Theme: An accessibility option now lets users invert apps that do not offer a native dark theme. If your dark theme does not inherit from the standard DayNight themes, declare isLightTheme="false" in it to prevent unintended inversion.
  • Custom Icon Shapes & Auto-Theming: Users can select custom shapes for app icons, and the system can automatically generate themed icons if your app does not provide one.
  • Interactive Chooser Sessions: The sharing experience now supports real-time content updates within the Chooser, keeping the UI interactive.
  • Linux Development Environment with GUI Applications: You can now run Linux GUI applications directly within the terminal environment.
  • Generational Garbage Collection: The Android Runtime (ART) includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector to reduce CPU usage and improve battery efficiency.
  • Widget Engagement Metrics: You can query user interaction events such as clicks, scrolls, and impressions for your widgets.
  • 16KB Page Size Readiness: Debuggable apps not 16KB page-aligned will receive early warning dialogs.
  • IAMF and Audio Sharing: Software decoding support for Immersive Audio Model and Formats (IAMF) is added, and Personal Audio Sharing for Bluetooth LE Audio is integrated into the system Output Switcher.
  • Health Connect Updates: Health Connect automatically tracks steps using device sensors, and you can now track weight, set index, and Rate of Perceived Exertion (RPE) in exercise segments.
  • Smoother Migrations: A new Data Transfer API enables data migration between Android and iOS devices.
  • Developer Verification: APIs support developer verification during app installation, with ADB commands available to simulate outcomes.
  • SMS OTP Protection: The delivery of messages containing an SMS retriever hash is delayed for most apps by three hours to help prevent OTP hijacking.
  • Secure Lock Device: A new system-level security state locks the device immediately, requiring the primary PIN, pattern, or password to unlock and temporarily disabling biometric unlock.
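
To make the minor SDK bullet above concrete, a version check might look like the sketch below. SDK_INT_FULL and VERSION_CODES_FULL come from the release notes; the specific BAKLAVA_1 constant name for Android 16 QPR2 is an assumption to verify against the Build documentation.

import android.os.Build

// Gate a code path on a minor SDK release (Android 16 QPR2 in this sketch).
// Build.VERSION.SDK_INT_FULL and Build.VERSION_CODES_FULL are the new APIs;
// the exact BAKLAVA_1 constant name is an assumption to confirm in the docs.
fun supportsQpr2Apis(): Boolean =
    Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1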

To get started, you can get the Android 16 QPR2 release on your Pixel device, or use 64-bit system images with the Android Emulator in Android Studio. Using the latest Canary build of Android Studio Otter is recommended.

Android 16 QPR2 is Released

What’s new in the Jetpack Compose December ’25 release 🚀

The Jetpack Compose December ’25 release is now stable, including core Compose modules version 1.10 and Material 3 version 1.4. To use this release, update your Compose BOM version to 2025.12.00.
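
For reference, updating to this release with the Gradle Kotlin DSL would look roughly like the snippet below; the compose-bom coordinate is the standard one, and the individual artifacts are just examples.

// app/build.gradle.kts: pull in the Compose December '25 release via the BOM.
dependencies {
    implementation(platform("androidx.compose:compose-bom:2025.12.00"))
    // The BOM supplies the versions for the individual Compose artifacts.
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.material3:material3")
}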

Key updates include:

  • Performance Improvements: Scroll performance now matches Views, with pausable composition in lazy prefetch enabled by default to reduce jank. Further optimizations improve Modifier.onPlaced and Modifier.onVisibilityChanged performance.

New Features:

  1. The retain API helps persist non-serializable state across configuration changes, useful for objects like media players.
  2. Material 3 1.4 adds an experimental TextFieldState for TextField, new SecureTextField variants, autoSize support for Text, a HorizontalCenteredHeroCarousel variant, TimePicker input mode switching, and a vertical drag handle for adaptive panes.
  3. Animation features include dynamic shared elements, allowing you to control sharedElement() and sharedBounds() animation transitions via SharedContentConfig’s isEnabled property.
  4. Modifier.skipToLookaheadPosition() helps create “reveal” type shared element animations by preserving a composable’s final position.
  5. A new prepareTransitionWithInitialVelocity API supports passing initial gesture velocity to shared element transitions.
  6. An experimental veil option for EnterTransition and ExitTransition lets you specify a color to scrim content during animations.

Tools: Android Studio adds Transform UI for natural language design iteration, the ability to generate @Preview for composables, customized Material Symbols in the Vector Asset wizard, code generation from screenshots using Gemini (with remote MCP support), and UI quality issue fixes.

Upcoming Changes: Modifier.onFirstVisible will be deprecated in Compose 1.11 due to non-deterministic behavior; migrate to Modifier.onVisibilityChanged. Coroutine dispatch in tests will shift to StandardTestDispatcher by default in a future release to align with production behavior; you can opt in now using effectContext = StandardTestDispatcher() in createComposeRule, as sketched below.
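
If you want to opt in ahead of time, a test rule set up along these lines shows the idea; it assumes the effectContext parameter of createComposeRule accepts a StandardTestDispatcher, so double-check the release notes before relying on it.

import androidx.compose.ui.test.junit4.createComposeRule
import kotlinx.coroutines.test.StandardTestDispatcher
import org.junit.Rule

class MyScreenTest {
    // Opt in to StandardTestDispatcher-based coroutine dispatch in Compose tests,
    // matching the future default described above (parameter shape assumed).
    @get:Rule
    val composeRule = createComposeRule(effectContext = StandardTestDispatcher())
}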

What's new in the Jetpack Compose December '25 release

Jetpack Navigation 3 is stable 🚀

Jetpack Navigation 3 version 1.0 is now stable. This new navigation library is built to embrace Jetpack Compose state, offering full control over your back stack, helping you retain navigation state, and facilitating adaptive layouts. A cross-platform version is also available from JetBrains.

Developed to address the shift to reactive programming and declarative UI, Navigation 3 provides more flexibility and customizability than the original Jetpack Navigation (now Nav2) through smaller, decoupled APIs. For example, NavDisplay observes a list of keys backed by Compose state to update the UI. It also allows you to supply your own state as a single source of truth, customize screen animations, and create flexible layouts with the Scenes API.

If you are currently using Navigation Compose with Nav2, you can consider migrating to Nav3. A migration guide is available that outlines key steps, including adding Nav3 dependencies, updating routes to implement NavKey, creating navigation state classes, and replacing NavController with these classes. You also move destinations from NavHost’s NavGraph into an entryProvider and replace NavHost with NavDisplay. You can experiment with an AI agent, like Gemini in Android Studio’s Agent Mode, for this migration by providing the markdown version of the guide as context.
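
As a rough illustration of that shape, a minimal Nav3 setup might look like the sketch below; the package names and parameters follow the published samples, so treat the details as approximate and defer to the migration guide and recipes.

import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.navigation3.runtime.NavKey
import androidx.navigation3.runtime.entry
import androidx.navigation3.runtime.entryProvider
import androidx.navigation3.runtime.rememberNavBackStack
import androidx.navigation3.ui.NavDisplay
import kotlinx.serialization.Serializable

// Routes are plain serializable keys rather than destinations in a NavGraph.
@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun AppNavigation() {
    // The back stack is Compose state that you own, observe, and mutate directly.
    val backStack = rememberNavBackStack(Home)
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> {
                Button(onClick = { backStack.add(Detail(id = "42")) }) { Text("Open detail") }
            }
            entry<Detail> { key -> Text("Detail for item ${key.id}") }
        }
    )
}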

For common navigation scenarios, a recipes repository is available, covering topics such as multiple back stacks, modularization, dependency injection, passing arguments to ViewModels, and returning results from screens. Deep links and Koin integration recipes are currently in development, and a Compose Multiplatform version of the recipes is also available.

To get started, you can refer to the official documentation and the recipes. You can file any issues you encounter in the issue tracker. We have lots of video content that can help, including a Navigation 3 API overview, our 3 things to know about Jetpack Navigation 3, a recording of our Navigation 3 #AskAndroid session, as well as shorts on Navigation 3 basics, How to animate screen transitions, and Implementing deep links.

Jetpack Navigation 3 is stable

Fully Optimized: Wrapping up Performance Spotlight Week 🚀

Performance Spotlight Week concluded with several announcements aimed at optimizing Android app performance.

You can now utilize the R8 optimizer for faster, smaller, and more stable apps, with updated documentation available. For instance, enabling R8 full mode has resulted in 40% faster cold startup and 30% fewer ANR errors for some apps.

Profile Guided Optimizations, including Baseline Profiles and Startup Profiles, can enhance startup speed, scrolling, animation, and rendering performance. Jetpack Compose 1.10 also introduced performance improvements like pausable composition and a customizable cache window for handling complex list items.

To measure performance, a new Performance Leveling Guide outlines a five-step journey, starting with data from Android Vitals and progressing to advanced local tooling like Jetpack Macrobenchmark and the UiAutomator 2.4 API for accurate measurement and verification.

Debugging tools received upgrades, including Automatic Logcat Retrace in Android Studio Narwhal to de-obfuscate stack traces automatically. New guidance on Narrow Keep Rules helps fix runtime crashes, supported by a lint check in Android Studio Otter 3. Additionally, new documentation and the Background Task Inspector offer insights into debugging WorkManager tasks and background work.

Performance optimization is an ongoing process, and the App Performance Score framework can help you integrate continuous improvements into your product roadmap.

Fully Optimized: Wrapping up Performance Spotlight Week

You can learn more on video at App Performance Spotlight Week Overview, App Performance #AskAndroid, App performance improvements, and Boost Android app performance with the R8 optimizer.

Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture 📸

CameraX 1.5 introduces features for video recording and image capture, alongside core API enhancements.

For video, you can now capture slow-motion or high-frame-rate videos. The new Feature Group API enables combinations like 10-bit HDR and 60 FPS, supporting features such as HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR, with plans for 4K recording and ultra-wide zoom.

Concurrent Camera improvements allow binding Preview, ImageCapture, and VideoCapture concurrently, and applying CameraEffects in composition mode. Additionally, CameraX 1.5 includes dynamic audio muting during recording, improved insufficient storage error handling, and a low light boost for dark environments on supported devices.

For image capture, CameraX 1.5 adds support for capturing unprocessed, uncompressed DNG (RAW) files, either standalone or simultaneously with JPEG. You can also leverage Ultra HDR output when using Camera Extensions.

Core API changes include the new SessionConfig API, which centralizes camera setup, removes the need for manual unbind() calls when updating use cases or switching cameras, and provides deterministic frame rate control. The camera-compose library has reached stable version 1.5.1, addressing bugs and preview stretching. Other improvements include fine-grained control over torch strength (querying max strength and setting levels) and NV21 image format support in ImageAnalysis.

To access these features, update your dependencies to CameraX 1.5.1. You can join the CameraX developer discussion group or file bug reports for support.

Introducing CameraX 1.5: Powerful Video Recording and Pro-level Image Capture

Health Connect Jetpack v1.1.0 is now available! 🔗

The Health Connect Jetpack library has reached its 1.1.0 stable release, providing a foundation for health and fitness applications. This version incorporates features such as background reads for continuous data monitoring, historical data synchronization, and support for data types including Personal Health records, Exercise Routes, Training Plans, and Skin Temperature. The platform supports over 50 data types across various health and fitness categories.

Additionally, Health Connect is expanding its device type support, which will be available in version 1.2.0-alpha02. New supported device types include Consumer Medical Devices (e.g., Continuous Glucose Monitors, Blood Pressure Cuffs), Glasses (for smart glasses and head-mounted optical devices), Hearables (for earbuds, headphones, and hearing aids with sensing capabilities), and Fitness Machines (for stationary and outdoor equipment). This expansion aims to enhance data representation by specifying the source hardware.

You are encouraged to upgrade to the 1.1.0 library, review the official documentation and release notes for further details, and submit feedback or report issues via the public issue tracker.

Health Connect Jetpack v1.1.0 is now available!

ML Kit’s Prompt API: Unlock Custom On-Device Gemini Nano Experiences ✨

ML Kit has released the Alpha version of its GenAI Prompt API, enabling custom on-device Gemini Nano experiences. This API allows you to send natural language and multimodal requests to Gemini Nano, supporting use cases requiring more control and flexibility for generative models.

The Prompt API processes data locally, offering offline functionality and enhanced user privacy. Examples of its application include image understanding, intelligent document scanning, transforming data for UI, content prompting, content analysis, and information extraction.

Implementation involves a few lines of code, using Generation.getClient().generateContent() with optional parameters like temperature, topK, candidateCount, and maxOutputTokens. Detailed examples are available in the official documentation and a GitHub sample.
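
Based only on the call named above, a minimal usage sketch might look like the following; the import path, client configuration, and response type are assumptions, so treat this as pseudocode and check the official documentation and GitHub sample for the real shapes.

// Hypothetical import path; the Alpha artifact's real package may differ.
import com.google.mlkit.genai.prompt.Generation

suspend fun extractOrderDetails(orderText: String): String {
    // Generation.getClient() and generateContent() are named in the announcement;
    // treating generateContent() as a suspending call that returns text is an assumption.
    val client = Generation.getClient()
    val response = client.generateContent(
        "Extract the delivery address and recipient name from this order:\n$orderText"
    )
    return response.text.orEmpty()
}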

The API performs optimally on the Pixel 10 device series, which features Gemini Nano (nano-v3), built on the same architecture as Gemma 3n. Developers without a Pixel 10 can prototype features locally using Gemma 3n. Refer to the device support documentation for a comprehensive list of compatible devices.

ML Kit's Prompt API: Unlock Custom On-Device Gemini Nano Experiences

Kakao Mobility utilized Gemini Nano via ML Kit’s GenAI Prompt API for two main functions:

  • Parking Assistance: It uses multimodal capabilities to detect improperly parked bikes and scooters on yellow tactile paving, reducing server costs and enhancing user privacy compared to cloud-based image recognition.
  • Improved Address Entry: For parcel delivery, it streamlines entity extraction from natural language order requests, which eliminated error-prone manual address entry by drivers.

The implementation of Gemini Nano on-device led to:

  • Cost savings by shifting AI processing from the cloud.
  • Enhanced user privacy by keeping sensitive location data on the device.
  • Reduced order completion time for delivery orders by 24%.
  • Increased conversion rates for new users by 45% and existing users by 6%.
  • Over 200% increase in AI-powered orders during peak seasons.
  • Reduced developer effort and shortened development time.

You can use ML Kit’s GenAI Prompt API to integrate on-device AI capabilities like Gemini Nano into your applications.

Kakao Mobility uses Gemini Nano on-device to reduce costs and boost call conversion by 45%

Articles 📚

Explore AI on Android with Our Sample Catalog App 🤖

The Android team has launched a redesigned, open-source Android AI Sample Catalog app on GitHub to showcase various AI-enabled features using both on-device (ML Kit GenAI API with Gemini Nano) and cloud (Firebase AI Logic SDK) models. The catalog includes samples for tasks like image generation (Imagen), on-device text summarization, a chatbot for image editing (Gemini 3 Pro Image model), on-device image description, a voice-controlled to-do list, and on-device rewrite assistance. The app features a new Material 3 design and provides structured code for easy integration into your own projects.

Explore AI on Android with Our Sample Catalog App

Learn about our newest Jetpack Navigation library with the Nav3 Spotlight Week 🌟

We had a Nav3 Spotlight Week to help you learn and integrate the library into your app. Nav3 can assist in reducing technical debt, improving separation of concerns, accelerating feature development, and supporting new form factors.

The week featured daily content:

  • API Overview explores core APIs like NavDisplay, NavEntry, and entryProvider, including a coding walkthrough video.
  • Animations demonstrates how to set custom animations for screen transitions and override them for individual screens, with accompanying documentation and recipes.
  • Deep links covers creating deep links with various code recipes, offering a guide and both basic and advanced examples for parsing intents and synthetic back stacks. The Now in Android sample has also migrated to Nav3.
  • Modularization focuses on modularizing navigation code to avoid circular dependencies and using dependency injection and extension functions for feature modules.
  • Ask Me Anything was a live session where the community submitted questions using the #AskAndroid tag on BlueSky, LinkedIn, and X.

Learn about our newest Jetpack Navigation library with the Nav3 Spotlight Week

#WeArePlay: Solving the dinner dilemma — how DELISH KITCHEN empowers 13 million home cooks 🍲

#WeArePlay spotlights DELISH KITCHEN co-founder Chiharu and her app, which provides 55,000 video recipes to over 13 million Japanese users to solve the “dinner dilemma.” Google Play supports the app’s growth, offering distribution to Android users, developer tools, and feature campaigns. Future plans include an AI-powered cooking assistant, a new health management app, and supermarket partnerships.

#WeArePlay: Solving the dinner dilemma - how DELISH KITCHEN empowers 13 million home cooks

Leveling Guide for your Performance Journey 📈

The Android Developers Blog published a “Leveling Guide for your Performance Journey,” outlining five stages for optimizing app performance.

  1. Level 1: Play Console Field Monitoring
    Use Android Vitals within the Play Console to monitor automatically collected field data, including crash rate, ANR rate, and excessive battery usage.
  2. Level 2: App Performance Score Action Items
    Start with the Static Performance Score (configuration and tooling changes like R8 optimization, Baseline Profiles, and Startup Profiles) before moving to a dynamic assessment to validate improvements on a real device, measuring startup time and rendering performance.
  3. Level 3: Local Performance Test Frameworks
    Integrate automated testing with frameworks like Macrobenchmark (for startup time, dropped frames) and UiAutomator (for simulating user interactions).
  4. Level 4: Trace Analysis Tools
    Use deep analysis tools like Perfetto to capture and analyze the entire device state, including kernel scheduling and system services, to provide context for performance issues. You can record traces via developer options, Android Studio CPU Profiler, or the Perfetto UI, then load and analyze them to debug jank, slow startup, and excessive battery/CPU usage.
  5. Level 5: Custom Performance Tracking Framework
    For teams with dedicated resources, you can build a custom performance tracking framework using Android APIs like ApplicationStartInfo (API 35), ProfilingManager (API 35), and ApplicationExitInfo (API 30) to understand why your app process died (e.g., native crashes, ANRs, out-of-memory kills); a small ApplicationExitInfo sketch follows this list.
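
As a small example of the Level 5 APIs, the ApplicationExitInfo query (available since API 30) looks roughly like this:

import android.app.ActivityManager
import android.app.ApplicationExitInfo
import android.content.Context

// Ask the system why recent instances of this app's process died (API 30+).
fun logRecentExitReasons(context: Context) {
    val activityManager =
        context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val exits: List<ApplicationExitInfo> =
        activityManager.getHistoricalProcessExitReasons(context.packageName, /* pid = */ 0, /* maxNum = */ 5)
    for (exit in exits) {
        val reason = when (exit.reason) {
            ApplicationExitInfo.REASON_ANR -> "ANR"
            ApplicationExitInfo.REASON_CRASH_NATIVE -> "native crash"
            ApplicationExitInfo.REASON_LOW_MEMORY -> "low-memory kill"
            else -> "reason code ${exit.reason}"
        }
        println("Process exited at ${exit.timestamp}: $reason (${exit.description})")
    }
}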

Leveling Guide for your Performance Journey

Stronger threat detection, simpler integration: Protect your growth with the Play Integrity API 🔒

The Play Integrity API has received updates aimed at improving threat detection and simplifying integration. It verifies that user interactions originate from your unmodified app on a certified Android device installed via Google Play, resulting in an average of 80% lower unauthorized usage for apps utilizing its features.

The API provides various verdicts to detect specific threats, including checks for:

  • App Status: If the user installed or paid for the app via Google Play (accountDetails) and if the app binary is unmodified (appIntegrity).
  • Device Status: If the app runs on a genuine Play Protect certified Android device (deviceIntegrity), if the device has recent security updates (MEETS_STRONG_INTEGRITY), and if Google Play Protect is active and no risky apps are present (playProtectVerdict).
  • Security Risks: If risky apps are running that could capture the screen or control the device (appAccessRiskVerdict).

Improvements also focus on user recovery through new Play in-app remediation prompts.

Other integrity solutions include Google Play’s automatic protection (installer checks, advanced anti-tamper protection), Android platform key attestation (which Play Integrity API leverages, with direct implementers needing to prepare for root certificate rotation in February 2026), Firebase App Check, and reCAPTCHA Enterprise.

Stronger threat detection, simpler integration: Protect your growth with the Play Integrity API

How Uber is reducing manual logins by 4 million per year with the Restore Credentials API 📲

Uber has reduced manual logins by an estimated 4 million per year by integrating the Restore Credentials API into its rider app. This feature enables a seamless transition for users when they switch to a new device, eliminating the need for re-authentication.

A five-week A/B experiment confirmed the positive impact, demonstrating:

  • A 3.4% decrease in manual logins (SMS OTP, passwords, social login).
  • A 1.2% reduction in expenses related to SMS OTP logins.
  • A 0.575% increase in the rate of devices successfully reaching the app’s home screen.
  • A 0.614% rise in devices with completed trips.

Interested in implementing Restore Credentials? You can consult sample code, documentation, a codelab, and validate your integration using new features in Android Studio Otter.

How Uber is reducing manual logins by 4 million per year with the Restore Credentials API

Configure and troubleshoot R8 Keep Rules 🔒

R8 is the primary tool for shrinking and optimizing Android apps. Keep Rules are essential because R8 cannot predict dynamic code (like reflection), which could lead to unintended code removal.

Key takeaways for Keep Rules:

  • Location: Write rules in a proguard-rules.pro file, always using proguard-android-optimize.txt.
  • Best Practice: Write narrow, specific rules, and use annotations or common ancestors for scalability.
  • Avoid: Global options (like -dontoptimize) and overly broad rules, as they negate R8’s performance benefits.
  • Troubleshooting: Use -printconfiguration to see all merged rules and -whyareyoukeeping to understand why a class is being preserved.
  • Goal: Use modern libraries with code generation instead of reflection to reduce the need for Keep Rules entirely.

Configure and troubleshoot R8 Keep Rules

Gemini 3 is now available for AI assistance in Android Studio 🚀

The Gemini 3 Pro model is now available for AI assistance, providing new coding and agentic features in the latest version of Android Studio Otter.

Gemini 3 is now available for AI assistance in Android Studio

How Reddit used the R8 optimizer for high impact performance improvements 🚀

Reddit significantly improved its app’s performance by implementing the R8 optimizer in full mode, which took less than two weeks.

Key results from the implementation:

Real-World Metrics (Android Vitals/Crashlytics):

  • 40% faster cold startup time
  • 30% reduction in “Application Not Responding” (ANR) errors
  • 25% improvement in frame rendering
  • 14% decrease in app size

Controlled Testing (Macrobenchmark):

  • 55% faster app startup
  • 18% quicker time for users to begin browsing

You can enable R8 by setting minifyEnabled and shrinkResources to true in your release build type within app/build.gradle.kts. This process should be followed by holistic end-to-end testing, and you may need to define keep rules to prevent R8 from modifying essential parts of your code.
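
In the Gradle Kotlin DSL, that configuration looks roughly like the snippet below (note the is-prefixed property names; the proguardFiles line is the usual companion to minification rather than something the post spells out):

// app/build.gradle.kts: enable R8 code shrinking and resource shrinking for release builds.
android {
    buildTypes {
        release {
            isMinifyEnabled = true      // runs the R8 optimizer
            isShrinkResources = true    // strips unused resources
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"    // add narrow keep rules here if needed
            )
        }
    }
}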

#WeArePlay: Meet the game creators who entertain, inspire and spark imagination 🎮

The latest #WeArePlay stories highlight game creators who develop for Google Play. These stories feature developers who entertain players, inspire new ideas, and spark imagination through their creations.

You can learn about:

  • Ralf and Matt from Vector Unit, creators of Beach Buggy Racing. Their kart racing game has over 557 million downloads and has built an engaged community around its console-quality feel on mobile. They continue to update the game and are prototyping new projects.
  • Camilla from Clover-Fi Games, who developed Window Garden. This lofi idle game, which encourages players to care for digital plants and decorate spaces, has surpassed 1 million downloads and received a “Best of 2024” award from Google Play. Camilla aims to expand her studio and collaborate with other creatives.
  • Rodrigo from Kolb Apps, the founder behind Real Drum. This virtual drum set app offers a realistic experience, allowing users to play drums and cymbals. It has accumulated over 437 million downloads, making music accessible to many, and Rodrigo plans to release new apps for children.

#WeArePlay: Meet the game creators who entertain, inspire and spark imagination

#WeArePlay: Meet the people making apps & games to improve your health ❤️

This week’s #WeArePlay series highlights applications and games on Google Play that focus on health and wellness. You can learn about:

  • Alarmy (Delightroom, Seoul), an app for heavy sleepers that uses challenge-based alarms, including math problems and photo missions, and is expanding into sleep tracking and general wellness.
  • Betwixt (Mind Monsters Games, Cambridge, UK), an interactive adventure game designed to reduce anxiety by combining storytelling with evidence-based techniques.
  • MapMyFitness (MapMyFitness, Boulder, CO, U.S.), an app for runners and cyclists to map routes and track training, offering features like adaptive training plans, guided workouts, and live safety tracking.

#WeArePlay: Meet the people making apps & games to improve your health

Android developer verification: Early access starts now as we continue to build with your feedback 🛡️

Android developer verification has begun its early access phase. This initiative introduces verification requirements as an additional layer of security to protect Android users from scams and digital fraud, particularly with sideloaded apps. The system aims to deter malicious app distribution by linking apps to verified identities.

In response to community feedback, changes address specific developer needs:

  • You, as a student or hobbyist, will have a dedicated account type, enabling distribution to a limited number of devices without full verification.
  • For experienced users, a new advanced flow is being developed to permit the installation of unverified apps. This flow will include clear warnings about risks and is designed to resist coercion.

You can find a video walkthrough and detailed guides for the new Android Developer Console experience.

Android developer verification: Early access starts now as we continue to build with your feedback

Raising the bar on battery performance: excessive partial wake locks metric is now out of beta 🔋

The “excessive partial wake locks” metric has moved out of beta and is now generally available as a new core vitals metric in Android vitals. This metric, co-developed with Samsung, identifies user sessions where an app holds more than two cumulative hours of non-exempt wake locks within a 24-hour period.

If your app surpasses a bad behavior threshold — 5% of user sessions being excessive over 28 days — it may be excluded from prominent Google Play discovery surfaces and a warning may appear on its store listing, starting March 1, 2026.

Android vitals now features a wake lock names table to help you pinpoint excessive wake locks by name and duration, particularly those with P90 or P99 durations over 60 minutes. You are encouraged to review your app’s performance in Android vitals and consult the updated documentation for best practices.

Raising the bar on battery performance: excessive partial wake locks metric is now out of beta

redBus uses Gemini Flash via Firebase AI Logic to boost the length of customer reviews by 57% 🗣️✨

redBus utilized Gemini Flash via Firebase AI Logic to revamp its customer review system, resulting in a 57% increase in review length. The company’s previous text-based review process presented challenges such as language barriers and a lack of detailed feedback.

To address this, redBus implemented a voice-first approach, enabling users to submit reviews in their native language. Gemini Flash transcribes and translates speech, performs sentiment analysis, and generates star ratings, relevant tags, and summaries from these voice inputs. Firebase AI Logic facilitated the frontend team’s independent development and launch of this feature within 30 days, removing the need for complex backend implementation. The solution employs structured output to ensure well-formed JSON responses from the AI model. redBus plans to continue exploring on-device generative AI and will use Google AI Studio for prompt iteration.

redBus uses Gemini Flash via Firebase AI Logic to boost the length of customer reviews by 57%

New tools and programs to accelerate your success on Google Play 🚀

Google Play has released new tools and programs designed to streamline your development and accelerate your app’s growth. You can now validate deep links directly within Play Console using a built-in emulator. A new Gemini-powered localization service offers no-cost translations for app strings, automatically translating new app bundles into selected languages while allowing you to preview, edit, or disable them.

On the Statistics page, a new Gemini-powered feature generates automated chart summaries to help you understand data trends and provides access to reporting for screen reader users. The Play Console now includes a “Grow users” overview page, offering a tailored view to acquire new users and expand your reach. A new “You” tab on the Play Store is available for re-engagement; you can integrate with Engage SDK to help users resume content or get personalized recommendations. Game developers can use this tab to showcase in-game events, content updates, and offers, with promotional content, YouTube video listings, and Play Points coupons available.

For monetization, you can now configure one-time products with more flexibility, including limited-time rentals and pre-orders through an early access program, and manage your catalog more efficiently with a new taxonomy. A new Play Points page in Play Console provides reporting on the revenue, buyers, and acquisitions generated by both your developer-created and Google-funded Play Points promotions.

New tools and programs to accelerate your success on Google Play

How Calm Reimagined Mindfulness for Android XR 🌌

Calm has brought its mindfulness content to Android XR. Its engineering team developed functional XR orbiter menus in one day and a core XR experience within two weeks. This involved extending existing Android development, including leveraging Jetpack Compose and reusing codebase components such as backend and media playback.

The team utilized Android XR design guides and evolved features like the “Immersive Breathe Bubble” for 3D breathwork and “Immersive Scene Experiences” for ambient environments. The creative workflow involved concept art, 3D models with human-scale reference, and in-headset testing, with the Android XR emulator available as a testing option.

To build for XR, you can integrate Jetpack XR APIs into existing Android apps and reuse code to create prototypes quickly. Resources for building on the Android XR platform are available at developer.android.com/xr.

How Calm Reimagined Mindfulness for Android XR

Introducing Cahier: A new Android GitHub sample for large screen productivity and creativity ✍️

Android Developers has introduced Cahier, a new GitHub sample application designed to showcase best practices for building productivity and creativity apps optimized for large screens.

Cahier demonstrates how you can develop versatile note-taking applications that combine text, freeform drawings using the Ink API (now in beta), and image attachments. Key features include fluid content integration with drag and drop for importing and sharing, and note organization capabilities.

The sample utilizes an offline-first architecture with Room and supports multi-window and multi-instance capabilities, including desktop windowing. Its user interface adapts to various screen sizes and orientations, including phones, tablets, and foldable devices, by employing ListDetailPaneScaffold and NavigationSuiteScaffold from the material3-adaptive library.

Cahier also illustrates deep system integration, showing you how to enable your app to become the default note-taking app on Android 14 and higher by responding to Notes intents. Lenovo has enabled Notes Role support on its tablets running Android 15 and above, allowing note-taking apps to be set as default on these devices. The sample provides comprehensive input support, including stylus, keyboard shortcuts, and mouse/trackpad interactions.

Introducing Cahier: A new Android GitHub sample for large screen productivity and creativity

Material 3 Adaptive 1.2.0 is stable 📐

Material 3 Adaptive 1.2.0 is now stable, building on previous versions with expanded support for window size classes and new strategies for display pane placement.

The release introduces support for Large (L) and Extra-large (XL) breakpoints for width window size classes, enabled by setting supportLargeAndXLargeWidth = true in your currentWindowAdaptiveInfo() call.

New adaptive strategies, reflow and levitate, are available for ListDetailPaneScaffold and SupportingPaneScaffold. The reflow strategy rearranges panes based on window size or aspect ratio, moving a second pane to the side or underneath. The levitate strategy docks content and offers customization for draggability, resizability, and background scrim. Both strategies can be declared in the Navigator constructor using the adaptStrategies parameter.

Material 3 Adaptive 1.2.0 is stable

5 things you need to know about publishing and distributing your app for Android XR ⚙️

When publishing and distributing your app for Android XR, consider five key areas:

  1. Uphold quality with Android XR app quality guidelines. Ensure your app delivers a safe, comfortable, and performant user experience by following guidelines that cover camera movement, frame rates, visual elements (like strobing), performance metrics, and recommended minimum interactive target sizes for eye-tracking and hand-tracking inputs.
  2. Configure your app manifest correctly. In your AndroidManifest.xml, specify android.software.xr.api.spatial for apps using the Jetpack XR SDK or android.software.xr.api.openxr for apps using OpenXR or Unity. Set android:required="true" accordingly for dedicated XR tracks or "false" for mobile tracks. Also, set the android.window.PROPERTY_XR_ACTIVITY_START_MODE on your main activity to define the default user environment (Home Space, Full Space Managed, or Full Space Unmanaged). Check for optional hardware features dynamically at runtime using PackageManager.hasSystemFeature() instead of setting them as required in the manifest, to avoid limiting your audience (see the sketch after this list).
  3. Use Play Asset Delivery (PAD) to deliver large assets. For immersive apps with large assets, use PAD’s install-time, fast follow, or on-demand delivery modes. Android XR apps have an increased cumulative asset pack limit of 30 GB. Unity developers can integrate Unity Addressables with PAD.
  4. Showcase your app with spatial video previews. Provide a 180°, 360°, or stereoscopic video asset to offer an immersive 3D preview on the Play Store for users browsing on XR headsets.
  5. Choose your Google Play release track. You can publish to the mobile release track if you are adding spatial XR features to an existing mobile app and can bundle XR features into your existing Android App Bundle (AAB). Alternatively, you can publish to the dedicated Android XR release track for new XR apps or XR versions that are functionally distinct, which restricts visibility to Android XR devices supporting spatial or OpenXR features.
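
A runtime capability check of the kind item 2 recommends might look like this sketch; the spatial feature string comes from the post, while hardware-specific feature constants would need to be looked up in the Android XR documentation.

import android.content.Context

// Check an optional XR capability at runtime instead of requiring it in the manifest,
// so the Play listing stays visible to devices that lack the feature.
fun supportsSpatialXr(context: Context): Boolean =
    context.packageManager.hasSystemFeature("android.software.xr.api.spatial")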

5 things you need to know about publishing and distributing your app for Android XR

Bringing Androidify to XR with the Jetpack XR SDK 🥽

The Android Developers Blog details how the Androidify app was adapted for Extended Reality (XR) using the Jetpack XR SDK, coinciding with the launch of Samsung Galaxy XR powered by Android XR.

Originally designed with adaptive layouts for phones, foldables, and tablets, Androidify is compatible with Android XR without modifications. For a differentiated XR experience, developers created specific spatial layouts.

Key XR concepts include Home Space, which allows multitasking with multiple app windows in a virtual environment, and Full Space, where an app uses the full spatial features of Android XR. You are advised to support both modes.

Designing for XR involved organizing UI elements using containment, embracing spatial UI elements that adjust to the user, and adapting camera layouts for headsets. Design tips for spatial UI include allowing uncontained elements, removing background surfaces, motivating with motion, and choosing an anchor element for content.

For development, the Jetpack XR Compose dependency was added. You can transition to Full Space by checking for XR spatial features using LocalSpatialConfiguration.current.hasXrSpatialFeature and !LocalSpatialCapabilities.current.isSpatialUiEnabled. Spatial UI elements like SpatialPanel, SubspaceModifier, and Orbiter enable the creation of XR layouts with existing 2D content. SpatialPanels can incorporate ResizePolicy and MovePolicy for user interaction, and hierarchical relationships allow grouped movement.

To publish, include <uses-feature android:name="android.software.xr.api.spatial" android:required="false" /> in your AndroidManifest.xml to signify XR-differentiated features. The same app binary can be distributed to both mobile and XR users, with options to add XR-specific screenshots or spatial video assets for immersive previews on the Play Store.

Bringing Androidify to XR with the Jetpack XR SDK

Videos 📹

#WeArePlay: Miksapix Interactive — bringing ancient Sámi mythology and culture to gamers worldwide

Miksapix Interactive launched their game “Raanaa” on Google Play, leveraging Sámi mythology to preserve and share indigenous culture. This demonstrates a successful approach to niche content development and localization on the platform, with the game being translated into various Sámi languages.

Building intelligent Android apps with Gemini

Google is empowering you to build intelligent apps using Gemini AI, offering a comprehensive end-to-end AI stack. Key takeaways include:

  • Tools: AI Studio for prototyping, ML Kit GenAI APIs (Beta) for on-device inference (summarization, proofreading, image description, custom prompt API), and Firebase AI Logic SDK for cloud inference with production features (App Check, Remote Config, monitoring).
  • Models: On-device options like Gemini Nano and Gemma 3n; cloud models like Gemini Pro, Flash, and Flash-Lite. Specialized models include Nano Banana and Imagen for image generation.
  • New APIs: The Gemini Live API (Preview) enables real-time voice/video interactions and features “function calling” for Gemini to invoke custom Kotlin functions within apps.
  • Focus: You can choose between on-device (offline, private, no cost) and cloud (powerful, broad availability) AI approaches based on your app’s needs.

Building adaptive apps for Android

It’s time to build adaptive apps that optimally scale across diverse form factors (tablets, foldables, Chromebooks, etc.).

Key takeaways:

  • Incentives: Play Store will prioritize adaptive apps in search/features. By 2026, “quality badging” and form-factor-specific ratings will be introduced.
  • Platform Changes: Android 16 will remove orientation, resize, and aspect ratio constraints on large screens, aiming to make 75% of top apps automatically adaptive in landscape.
  • Tools & Resources: Leverage new/improved Android Studio tools, including dedicated layout libraries (e.g., SlidingPaneLayout, ActivityEmbedding), enhanced emulators, design guidelines, and Window Size Classes for streamlined layout adaptation.

More customization in Material 3: the path to expressive apps

Material 3 Expressive is now available, offering new capabilities to build more premium, engaging, and expressive UIs.

Key updates include:

  • Enhanced Components: Flexible app bars, buttons with shape-morphing motion, a new FAB Menu (“speed dial”), new loading and progress indicators, and revamped menu/list/slider components.
  • Adaptive UI: New Adaptive Navigation Bar and Rail seamlessly adapt to various window sizes and form factors, including foldables.
  • Style Enhancements: An expanded shape library (35 unique shapes), a physics-based motion system, richer dynamic colors, and emphasized typography with variable font support.

Crucially, these new features are available today for both Jetpack Compose and Android Views, ensuring bidirectional compatibility with existing Material 3 implementations. The update aims to improve clarity, usability, and user delight, as validated by extensive user research.

Building Androidify: an AI-powered Android experience

Androidify has been re-released as an open-source app, built with Jetpack Compose. It offers you a practical example of integrating Firebase AI Logic SDK, Gemini, and a fine-tuned Imagen model for AI features like image validation, captioning, and bot generation.

Key takeaways include:

  • Using ML Kit Subject Segmentation for features like sticker creation.
  • Implementing modern UI/UX with SharedTransitionLayout for smooth transitions.
  • Integrating predictive back support.
  • Building a fully adaptive UI for phones, tablets, foldables, and Chromebooks from a single codebase.

Building for TV and cars with Compose

Android apps using Jetpack Compose will benefit from significant performance improvements, including 21% faster Time to First Frame and a 76% reduction in jank.

New resources are available for TV and car app development:

  • TV: Leverage the dedicated Compose for TV library, with design guidance emphasizing clear focus indicators and a focus management codelab.
  • Cars: A new “Design for cars” guide differentiates Android Auto and Automotive OS, outlines driving restrictions, and defines app quality tiers (including making existing apps “Car ready” via Google Play opt-in).

Testing is also enhanced with new Android Automotive OS emulators, an early access program for Firebase Test Lab offering direct device access, and an AAOS image for the Pixel Tablet. Google champions adaptive app development with Compose for extensive code reuse across all Android form factors.

Google Play PolicyBytes — October 2025 policy updates

Google Play’s October 2025 policy updates bring several key changes for developers:

  • Age-Restricted Content: Apps facilitating dating, gambling, or real-money games must now use the “Restrict Minor Access” feature to block minors.
  • Personal Loans (India): Apps must be on the Indian government’s approved digital lending list.
  • Health & Medical Apps: EU medical device apps require regulatory info and will get a “Medical Device” label. Other health/medical apps must include a disclaimer stating they are not medical devices.
  • Subscriptions: Policy clarification emphasizes clear free trial cancellation and prominent display of total charges to avoid violations.
  • Appeals Process: A new 180-day appeal window is being introduced for account terminations.
  • Compliance Deadline: January 2026 for these and other related policy updates.

Google Play Console: Streamlining workflows, from testing to growth

The redesigned Google Play Console introduces key new features for Android developers:

Pre-launch Deep Link Testing: A new built-in emulator on the Deep links page allows developers to test deep links and visualize user experience before launch.

Enhanced Monitoring: The “Monitor and improve” section provides actionable recommendations to address issues like ANR rates and slow warm-start times.

Gemini AI Integration:

  1. Automatically summarizes app metric trends, highlighting performance changes.
  2. Offers high-quality, automated localization of app strings for global markets, improving upon traditional machine translation.

Android Developer Story: Pocket FM cuts 50% in development time with Gemini in Android Studio

Pocket FM significantly cut Android development time (50% for new features, 30% for existing) by integrating Gemini in Android Studio. For developers, this highlights Gemini’s practical utility in generating code (like impression tracking), resolving complex issues (e.g., Media3 errors), and streamlining SDK upgrades by identifying dependencies, enabling engineers to focus on more complex development.

AndroidX Releases 🚀

Here’s a summary of the AndroidX changes, many of which have been covered earlier in the post:

Compose UI & Foundation (1.11.0-alpha01)

New UI Modifiers:

  • Modifier.scrollIndicator: A new API to allow developers to add custom scroll indicators to scrollable containers, offering more control over the scroll UI.
  • Modifier.visible(): Introduced to skip drawing a Composable’s content without affecting the space it occupies in the layout. This is useful for conditional visibility when you want to maintain layout structure.

Important Deprecation:

  • Modifier.onFirstVisible() is now deprecated. Its behavior was often misleading (e.g., triggering on every scroll for lazy lists). Developers are advised to use Modifier.onVisibilityChanged() and manually track visibility state based on their specific use case.

Default Behavior Changes:

  • TextField DPAD navigation and semantic autofill are now enabled by default, removing previous configuration flags.

Advanced Layouts:

  • MeasuredSizeAwareModifierNode: A new, more specific interface for obtaining onRemeasured() callbacks, recommended for custom layout nodes needing only measurement-related events.

Navigation3 (1.1.0-alpha01)

  • Shared Element Transitions for Scenes: Navigation3 now supports treating “scenes” (likely Compose destinations/screens) as shared element objects. This enables smooth, coordinated transitions between composables as they change, by passing a SharedTransitionScope to NavDisplay or rememberSceneState.

DataStore (1.3.0-alpha01)

  • KMP Web Support: Introduces experimental Kotlin Multiplatform Web support for DataStore, leveraging the browser’s sessionStorage API for temporary data persistence within a single browser tab.

SwipeRefreshLayout (1.2.0)

  • Addresses issues with the refresh icon’s retraction and position reset, ensuring it behaves correctly after being shown and hidden.
  • Corrects requestDisallowInterceptTouchEvent(boolean) behavior, now honoring the request like other ViewGroups (though developers can opt out of this new behavior if necessary).

Window (1.6.0-alpha01)

  • Adaptive UI Helpers: Adds helper methods to construct WindowSizeClassSets in a grid format, simplifying the creation of responsive layouts for different screen sizes and folding states.

Other Noteworthy Releases

  • androidx.webgpu:webgpu:1.0.0-alpha01: Initial alpha release of a new library bringing WebGPU capabilities to Android applications. This is a developer preview aimed at specialized graphics use cases.
  • androidx.xr.glimmer:glimmer:1.0.0-alpha01: Initial alpha release of Jetpack Glimmer, a new design language and UI component library specifically for building Android XR (Extended Reality) experiences.
  • Compose Animation (1.11.0-alpha01): Includes a bug fix ensuring position is acquired for shared elements only when SharedTransitionLayout is attached.
  • Compose Runtime (1.11.0-alpha01): Minor API change with RetainedValuesStore.getExitedValueOrDefault renamed to consumeExitedValueOrDefault, and the experimental concurrent recomposition API has been removed.

Signing off

That’s it for now, with Android XR, the Android Studio Otter 2 Feature Drop with Gemini 3, the release of Android 16 QPR2, Compose updates including the stable release of Jetpack Navigation 3, highlights from Performance Spotlight Week, and much, much, much more.

See you all in the new year for more updates from the Android developer ecosystem!


Now In Android #123 was originally published in Android Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.


Modern Networking in iOS with URLSession and async/await – Part 2


In Part 1 we built a clean networking layer using Swift’s modern concurrency and URLSession. The example endpoints were public and didn’t require authentication. Real-world apps, however, usually require you to authenticate users and attach short-lived access tokens to every request. These tokens eventually expire, and we need a way to obtain a new access token without forcing the user to log in again. In this part we’ll walk through how to implement a secure and robust token handling mechanism on top of the networking client from Part 1.

Understanding access and refresh tokens

  • Access tokens grant access to protected APIs. They are typically short‑lived (minutes to hours) so that an attacker only has a limited window if the token is compromised.

  • Refresh tokens are credentials that allow the client to request a new access token. Because they can be used to mint new access tokens, they need to be protected as if they were user passwords.


Token model

import Foundation

/// A container for access/refresh tokens and their expiration date.
/// Conforms to Codable so it can be encoded to and decoded from JSON.
struct TokenBundle: Codable {
    let accessToken: String
    let refreshToken: String
    let expiresAt: Date

    /// Returns true if the access token is expired.
    var isExpired: Bool {
        return expiresAt <= Date()
    }
}

This struct mirrors the JSON payload returned by your authentication server (e.g. { "access_token":…, "refresh_token":…, "expires_at":… }). isExpired is a computed property that helps us decide when to refresh.
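Note that the example payload uses snake_case keys while the struct's properties are camelCase, so a plain JSONDecoder() won't match them up on its own. Here's a minimal sketch of one way to bridge that, assuming the server also sends expires_at as an ISO 8601 string; adjust the strategies to whatever your auth server actually returns:

/// Decodes the token payload. The snake_case and ISO 8601 assumptions below are
/// illustrative; match them to your real API.
func decodeTokenBundle(from data: Data) throws -> TokenBundle {
    let decoder = JSONDecoder()
    decoder.keyDecodingStrategy = .convertFromSnakeCase  // access_token -> accessToken
    decoder.dateDecodingStrategy = .iso8601              // "2026-01-01T00:00:00Z" -> Date
    return try decoder.decode(TokenBundle.self, from: data)
}

If you go this route, use the same configured decoder wherever TokenBundle is decoded (for example in the refresh call further below) instead of a bare JSONDecoder().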


Secure persistence - Keychain

Let's create a helper for storing and retrieving tokens:

import Foundation
import Security

/// A simple helper for storing and retrieving data from the Keychain.
/// This stores and retrieves a single `TokenBundle` under a fixed key.
/// You can generalize it if you need to store more items.
enum KeychainService {

    /// Errors thrown by keychain operations.
    enum KeychainError: Error {
        case unhandled(status: OSStatus)
    }

    /// Change this to your app’s bundle identifier to avoid key collisions.
    private static let service = "pro.mobile.dev.ModernNetworking"
    private static let account = "authTokens"

    /// Saves the given token bundle to the keychain, overwriting any existing value.
    static func save(_ tokens: TokenBundle) throws {
        let data = try JSONEncoder().encode(tokens)
        // Remove any existing entry.
        try? delete()
        // Add the new entry.
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecValueData as String: data
        ]
        let status = SecItemAdd(query as CFDictionary, nil)
        guard status == errSecSuccess else {
            throw KeychainError.unhandled(status: status)
        }
    }

    /// Loads the token bundle from the keychain, or returns nil if no entry exists.
    static func load() throws -> TokenBundle? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var item: AnyObject?
        let status = SecItemCopyMatching(query as CFDictionary, &item)
        if status == errSecItemNotFound {
            return nil
        }
        guard status == errSecSuccess, let data = item as? Data else {
            throw KeychainError.unhandled(status: status)
        }
        return try JSONDecoder().decode(TokenBundle.self, from: data)
    }

    /// Removes the token bundle from the keychain.
    static func delete() throws {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
        let status = SecItemDelete(query as CFDictionary)
        guard status == errSecSuccess || status == errSecItemNotFound else {
            throw KeychainError.unhandled(status: status)
        }
    }
}

This helper encodes a TokenBundle to Data, stores it under a single key in the Keychain, and decodes it back to a TokenBundle when needed. It also includes a delete() method to clear the stored tokens when the user logs out.
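Before wiring this into the networking layer, a quick round trip shows how the helper is used (the token values below are made up):

let tokens = TokenBundle(
    accessToken: "access-token-value",
    refreshToken: "refresh-token-value",
    expiresAt: Date().addingTimeInterval(3600) // pretend it expires in an hour
)

try KeychainService.save(tokens)           // persist (overwrites any existing entry)
let restored = try KeychainService.load()  // returns TokenBundle? (nil if nothing stored)
try KeychainService.delete()               // e.g. when the user logs out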


Concurrency‑safe token management

When multiple network requests need a valid token simultaneously, we must avoid kicking off more than one refresh at the same time.
A Swift actor serializes access to the token state, and a shared in-flight Task ensures at most one refresh is ever running.

import Foundation

/// Manages access and refresh tokens.
/// Uses an actor to serialize token access and refresh operations safely.
actor AuthManager {
    /// The currently running refresh task, if any.
    private var refreshTask: Task<TokenBundle, Error>?
    /// Cached token bundle loaded from the keychain.
    private var currentTokens: TokenBundle?

    init() {
        // Load any persisted tokens at initialization.
        currentTokens = try? KeychainService.load()
    }

    /// Returns a valid token bundle, refreshing if necessary.
    /// Throws if no tokens are available or if refresh fails.
    func validTokenBundle() async throws -> TokenBundle {
        // If a refresh is already in progress, await its result.
        if let task = refreshTask {
            return try await task.value
        }
        // No stored tokens means the user must log in.
        guard let tokens = currentTokens else {
            throw AuthError.noCredentials
        }
        // If not expired, return immediately.
        if !tokens.isExpired {
            return tokens
        }
        // Otherwise refresh.
        return try await refreshTokens()
    }

    /// Forces a refresh of the tokens regardless of expiration status.
    func refreshTokens() async throws -> TokenBundle {
        // If a refresh is already happening, await it.
        if let task = refreshTask {
            return try await task.value
        }
        // Ensure we have a refresh token.
        guard let tokens = currentTokens else {
            throw AuthError.noCredentials
        }
        // Create a new task to perform the refresh.
        let task = Task { () throws -> TokenBundle in
            defer { refreshTask = nil }
            // Build a request to your auth server’s token endpoint.
            // Replace api.example.com and path with your actual auth server and endpoint.
            var components = URLComponents()
            components.scheme = "https"
            components.host = "api.example.com" // change to your auth server
            components.path = "/oauth/token"
            var request = URLRequest(url: components.url!)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            let body: [String: String] = ["refresh_token": tokens.refreshToken]
            request.httpBody = try JSONEncoder().encode(body)
            // Perform the network call.
            let (data, response) = try await URLSession.shared.data(for: request)
            guard let httpResponse = response as? HTTPURLResponse,
                  (200..<300).contains(httpResponse.statusCode) else {
                throw AuthError.invalidCredentials
            }
            // Decode the new tokens.
            let newTokens = try JSONDecoder().decode(TokenBundle.self, from: data)
            // Persist and cache the tokens.
            try KeychainService.save(newTokens)
            currentTokens = newTokens
            return newTokens
        }
        // Store the in‑flight refresh task so concurrent callers reuse it.
        refreshTask = task
        return try await task.value
    }

    /// Clears stored tokens from memory and the keychain.
    func clearTokens() async throws {
        currentTokens = nil
        try KeychainService.delete()
    }
}

/// Errors thrown by `AuthManager`.
enum AuthError: Error {
    /// No tokens exist; the user must log in.
    case noCredentials
    /// Refresh failed or credentials are invalid.
    case invalidCredentials
}

This actor loads any persisted tokens from the keychain when it is initialized.
When a valid token is requested, it either returns the cached token (if it hasn’t expired) or refreshes it by calling the server’s refresh token endpoint.
The refresh is protected by a single Task: if multiple calls to validTokenBundle() happen concurrently, they all await the same refresh task.
If no tokens are stored or the refresh fails, AuthManager throws an AuthError we can react to, for example by logging the user out.
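To see that single-flight behaviour in action, here's a small sketch that makes several concurrent token requests; if the cached token has expired, every caller ends up awaiting the same refresh:

let authManager = AuthManager()

try await withThrowingTaskGroup(of: TokenBundle.self) { group in
    for _ in 0..<5 {
        group.addTask {
            // Each call either returns the cached tokens or awaits the single
            // in-flight refresh task inside the actor.
            try await authManager.validTokenBundle()
        }
    }
    for try await tokens in group {
        print("Access token valid until \(tokens.expiresAt)")
    }
}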


Adding auth token to endpoints that require it

We will update our Endpoint enum with a computed property indicating whether an endpoint requires authentication:

var requiresAuthentication: Bool {
    switch self {
    case .secureEndpoint: return true
    default: return false
    }
}

We will attach the token when needed inside the NetworkClient's send<T: APIRequest> function:

if request.endpoint.requiresAuthentication {
    let tokens = try await authManager.validTokenBundle()
    urlRequest.setValue("Bearer \(tokens.accessToken)", forHTTPHeaderField: "Authorization")
}

We will need an AuthManager object in our client:

final class NetworkClient {
    private let authManager: AuthManager

    init(authManager: AuthManager = AuthManager()) {
        self.authManager = authManager
    }

    func send<T: APIRequest>(_ request: T, allowRetry: Bool = true) async throws -> T.Response {
        ...
    }
}

The send function also takes an allowRetry parameter so we can limit the automatic retry to a single attempt:

 func send<T: APIRequest>(_ request: T, allowRetry: Bool = true) async throws -> T.Response

Lastly, let's check for authentication errors and react accordingly:

if httpResponse.statusCode == 401 {
    guard allowRetry else {
        throw NetworkError.unauthorized // A new error type 
    }

    do {
        _ = try await authManager.refreshTokens()
        return try await send(request, allowRetry: false)
    } catch {
        // refresh failed -> force re-auth path
        // optionally: try? await authManager.clearTokens()
        throw error
    }
}

We are done!

You can check out the full project on GitHub.


Android Weekly Issue #708

1 Share
Articles & Tutorials
Sponsored
Code 10x faster. Tell Firebender to create full screens, ship features, or fix bugs - and watch it do the work for you. It's been battle tested by the best android teams at companies like Tinder, Adobe, and Instacart.
Azizkhuja Khujaev shares practical Android 15 (API 35) migration lessons including behavior changes and edge case UI fixes across devices and themes that emerged when targeting SDK 35.
Santiago Mattiauda's series of articles on successfully adopting Kotlin Multiplatform.
Anmol Verma explains using config-driven Kotlin Multiplatform architecture to share ~70% of code while keeping native Jetpack Compose and SwiftUI rendering for a white-label app.
Efe Budak shows how to use ViewModel with Navigation 3 in Compose Multiplatform to share and manage UI state across destinations.
Todd Ginsberg explains why Java’s YYYY week-year pattern produces incorrectly formatted dates and how to avoid misusing it.
Wolfram Rittmeyer explains key Compose node types and how they form and support the internal UI tree in Jetpack Compose.
Sasha Denisov shows how FunctionGemma enables efficient on-device AI agents to convert language to function calls for mobile and web apps.
Place a sponsored post
We reach out to more than 80k Android developers around the world, every week, through our email newsletter and social media channels. Advertise your Android development related service or product!
Videos & Podcasts
Philipp Lackner shares thoughts about the current tough job market in the tech sector and gives you a clear recommendation on how to proceed with your career.
Learn how to create personalized Wear OS watch faces directly from your Android app.
Philipp Lackner goes into detail about how the Uber app and backend stream millions of live driver and rider locations while keeping the app running smoothly.
Specials
Carmen Alvarez uses 700 Android Weekly issues to show how interest in RxJava and Jetpack Compose evolved over 13 years in the Android ecosystem.
Read the whole story
alvinashcraft
8 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Hundreds of products now powered by Raspberry Pi

1 Share

There are now hundreds of products with Raspberry Pi, in one form or another, at their centre. This includes consumer kit that promises exciting new project features, HATs and accessories for both hobbyist and industrial users, and specialist hardware versions with a Compute Module at the heart of their DNA. The Powered by Raspberry Pi stamp of approval helps assure you that a product has been thoroughly tested and is guaranteed to work flawlessly using Raspberry Pi computers and microcontrollers.

The latest issue of Raspberry Pi Official Magazine featured half a dozen products from around the world that are helping improve things like driver and passenger safety, drone pilots’ chances of a successful landing, and marine pilots’ navigation accuracy. There are also some treats for fans of vintage computers and gaming, as well as AI photography, in the section below.

BlueSCSI

USA | bluescsi.com

Many of us love embracing older technology to enjoy games and programming experiences from a decade or three ago. Inevitably, the storage formats of the 1980s and 1990s have long been superseded, along with the drivers written to work with them. But that doesn’t mean you can’t run older programs, of course; emulators for popular home computers are incredibly popular. BlueSCSI offers a neat way to access games, applications, and files hidden away on otherwise-obsolete external drives so that you can enjoy them all over again. This modern, open source solution replaces your old SCSI drives — including CD-ROM and magneto-optical — with a simple and reliable SD card, offering a fantastic upgrade for your classic Mac, Amiga, Atari, and more! 

Candera CGI Studio Professional

Austria | cgistudio.at

Any full-size Raspberry Pi computer can be used to run Candera’s CGI Studio Professional HMI (human–machine interface). Its rapid design tools are custom-made for small-to-medium-sized businesses, and include an invaluable Scene Composer and pre-built players for Linux-based devices. Certified for Raspberry Pi, CGI Studio Pro offers Python scripting with data model access, making it ideal for designing user interfaces and customer menus for any number of applications. Version 3.15, launched in spring 2025, extends the IntuitiveHMI design suite with simplified workflows, improved graphics, and added AI options — including SoundHound voice recognition — making it ideal for designers creating interfaces across automotive, medical, and other industries. 

Clickdrive

Singapore | clickdrive.io

Clickdrive is a driving training system aimed at driving schools and public transport companies, who have found it invaluable in improving staff retention rates. A self-install kit with wired and wireless options, GPS, and an HD video camera, Clickdrive makes real-world training more intuitive by recording driving footage, integrating features such as bespoke instructor clips, GPS and motion sensors for location accuracy, object detection, and performance analysis. While driving games and simulators focus on overcoming obstacles and taking turns at high speed, Clickdrive records routes driven for self-improvement rather than fun, using customisable training programmes. The Singapore-based company has a roster of satisfied clients, including the city’s own SBS Transit authority and other public transport companies. The Clickdrive PRO system provides 360-degree video feedback alongside objective driving telemetry analysis, so drivers can receive individual post-drive reviews and tailored improvement advice.

Landmark Precision Landing System

USA | landmarklanding.com

Flying machines have long caught the imagination of amateur pilots, so when drones arrived on the scene, their success was little surprise. If you’re anything like us, though, the joy of seeing your craft aloft is tinged with anxiety about the seemingly inevitable sudden descent back and the potential curtailment of your new hobby. Landmark specialises in helping PX4 and ArduPilot drone and model aircraft pilots achieve precision landings time after time. (OK, the clue’s in the company name.) Promising centimetre-level landing accuracy, the system works in various lighting conditions, including direct sunlight and at night (with target illumination). The landing module attaches to your Raspberry Pi via a single cable, while a ground station such as Mission Planner or QGroundControl is used for all configuration.

Hat Labs HALPI2

Finland | hatlabs.fi

Raspberry Pi Compute Modules, with their industrial-grade specifications, are becoming an increasingly popular choice for marine applications. Finland’s Hat Labs is a long-established open source and open hardware marine specialist. As well as being a keen sailor, founder Matti is an IoT veteran with many years’ experience with CAN bus and NMEA 2000 products. The Helsinki-based firm’s HALPI2 is a marine plotting platform based around Compute Module 5 and an ITX motherboard in a custom-designed, pre-built, fully functional Raspberry Pi boat computer, protected within a waterproof and ruggedised case. HALPI2 plots and tracks routes and acts as a data acquisition and visualisation device, providing a large degree of boat automation and control.

EDATEC CM5 AI Camera Series

China | edatec.cn

EDATEC makes robust hardware based on open source principles, using powerful equipment such as Raspberry Pi. Emerging from the management team at industrial supplier Farnell in 2017, EDATEC was among the very first to recognise Raspberry Pi’s potential as a modular industrial platform — and one of the first to gain Powered by Raspberry Pi accreditation. The 12MP ED-AIC3100 uses Compute Module 5, with its 64-bit SoC platform, to power and control a quad-core AI camera with a 12mm autofocus liquid lens and a C-Mount lens. The 3100-series camera is protected by a bright blue IP65 shockproof metal case that can withstand temperature variations of 0–45°C, and has a mounting bracket to absorb vibrations. Running 64-bit Raspberry Pi OS, the AI camera weighs just 400g and can be triggered remotely or with a single button press, acquiring and processing images at 70 frames per second before efficiently making sense of their contents.

Apply to Powered by Raspberry Pi

Our Powered by Raspberry Pi logo shows customers that your product is powered by our high‑quality computers and microcontrollers. All products licensed under Powered by Raspberry Pi are eligible to appear in our online gallery.

Submit your product for Powered by Raspberry Pi status.

The post Hundreds of products now powered by Raspberry Pi appeared first on Raspberry Pi.


I Built a Game Engine from Scratch in C++ (Here's What I Learned)

1 Share


I crashed my GPU 47 times before I saw my first triangle on screen.

For 3 months, I built a game engine from scratch in C++ using DirectX 9 and Win32—no Unity, no Unreal, no middleware. Just me, the Windows API, and a lot of segmentation faults.

This is the story of how building a simple Breakout clone taught me more about game development, graphics programming, and software architecture than years of using Unity ever did.

Why Build an Engine?

For years, I built games in Unity. I'd drag and drop GameObjects, attach scripts, hit Play, and watch my game come to life. It was magical—until it wasn't.

Questions started nagging at me:

  • How does Unity actually render my sprites? What's happening between GameObject.transform.position = newPos and pixels on screen?
  • Why do people complain about Unreal's performance? If it's "optimized," why do developers still struggle?
  • Why was Kerbal Space Program's physics so buggy? It's Unity—doesn't Unity handle physics automatically?

I realized I was using powerful tools without understanding what they were doing under the hood. I was a chef using a microwave, not knowing how heat actually cooks food.

Then my university professor gave us an assignment: Build a low-level game engine in C++.

No Unity. No libraries. Just C++, DirectX 9, and the Win32 API.

This was my chance to peek behind the curtain.

What I Built: Breakout, But From Scratch

If you've never played Breakout: you control a paddle at the bottom of the screen, bouncing a ball to destroy bricks at the top. Simple concept, complex implementation.

My engine features:

  • Custom rendering pipeline using DirectX 9
  • Fixed-timestep game loop (60 FPS target)
  • AABB and swept collision detection (no tunneling!)
  • State management system (Menu → Level 1 → Level 2 → Level 3 → End Game)
  • Sound system integration
  • Sprite animation system
  • Physics simulation (velocity, acceleration, collision response)

The result: A fully playable Breakout clone running at 60 FPS with ~3,500 lines of C++ code.


Tech Stack:

  • Language: C++17
  • Graphics API: DirectX 9 (legacy, but perfect for learning fundamentals)
  • Windowing: Win32 API
  • Audio: Windows multimedia extensions
  • IDE: Visual Studio 2022

Architecture Overview: Separation of Concerns

One of my biggest lessons: good architecture makes or breaks your project.

I learned this the hard way (more on that in "Challenges" below), but here's the final structure I landed on:

Class Diagram

A simplified class diagram of the engine's core components.

Core Components

Game Class (The Orchestrator)

  • Owns all managers (Renderer, Input, Physics, Sound)
  • Manages game state transitions (Menu ↔ Level 1 ↔ Game Over)
  • Runs the main game loop

MyWindow (Platform Layer)

  • Wraps Win32 window creation and message processing
  • Handles OS-level events (close, minimize, resize)
  • Why separate? Platform code should be isolated—makes porting to Linux/Mac easier later

Renderer (Graphics Layer)

  • Initializes DirectX 9 device
  • Manages textures and sprites
  • Provides clean API: LoadTexture(), DrawSprite(), BeginFrame()
  • Key insight: The game logic never touches DirectX directly

InputManager (User Input)

  • Polls keyboard state using DirectInput
  • Abstracts raw input into game-meaningful queries: IsKeyDown(DIK_LEFT)
  • Why? Game code doesn't care about DirectInput—it just wants "left" or "right" (see the sketch below)
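As a flavour of how thin that wrapper is, here's a hedged sketch of the DirectInput polling behind IsKeyDown(); the member names are hypothetical, not from the original project:

cpp

#define DIRECTINPUT_VERSION 0x0800
#include <dinput.h>

class InputManager {
public:
    void Update() {
        // Copy the current keyboard state into a local buffer once per frame.
        HRESULT hr = m_keyboard->GetDeviceState(sizeof(m_keys), (LPVOID)m_keys);
        if (FAILED(hr)) {
            // Device lost (e.g. the window lost focus): try to reacquire it.
            m_keyboard->Acquire();
        }
    }

    bool IsKeyDown(int dikCode) const {
        // DirectInput sets the high bit of a key's byte while it is held down.
        return (m_keys[dikCode] & 0x80) != 0;
    }

private:
    IDirectInputDevice8* m_keyboard = nullptr; // created during initialization
    BYTE m_keys[256] = {};
};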

PhysicsManager (Collision & Movement)

  • AABB collision detection
  • Swept AABB for fast-moving objects (prevents tunneling)
  • Collision resolution with restitution
  • Lesson learned: Separate detection from resolution (I didn't know this at first!)

SoundManager (Audio)

  • Loads and plays sound effects
  • Handles background music with looping
  • Volume control

IGameState (State Pattern)

  • Interface for all game states: Menu, Level1, Level2, GameOver, YouWin
  • Each state implements: OnEnter(), Update(), Render(), OnExit()
  • This was my "aha!" moment—more on this below

The Game Loop

cpp

while (window.ProcessMessages()) {
    // 1. Calculate delta time (frame-independent movement)
    float dt = CalculateDeltaTime();

    // 2. Update input state
    inputManager.Update();

    // 3. Update current game state
    //    (Menu, Level, GameOver, etc.)
    gameState->Update(dt, inputManager, physicsManager, soundManager);

    // 4. Render everything
    renderer.BeginFrame();
    gameState->Render(renderer);
    renderer.EndFrame();
}
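The CalculateDeltaTime() helper isn't shown above; a common way to implement it on Windows is with the high-resolution performance counter. A minimal sketch, assuming <windows.h> is included:

cpp

float CalculateDeltaTime() {
    static LARGE_INTEGER frequency = {};
    static LARGE_INTEGER lastTime = {};

    if (frequency.QuadPart == 0) {
        QueryPerformanceFrequency(&frequency); // ticks per second
        QueryPerformanceCounter(&lastTime);
    }

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    float dt = static_cast<float>(now.QuadPart - lastTime.QuadPart) /
               static_cast<float>(frequency.QuadPart);
    lastTime = now;

    // Clamp huge spikes (e.g. after a breakpoint or a window drag) so physics stays stable.
    return dt > 0.1f ? 0.1f : dt;
}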

Why this structure?

  • Modularity: Each system has one job
  • Testability: Can test physics without rendering
  • Maintainability: Bug in rendering? Only look in Renderer class
  • Scalability: Adding a new game state? Just implement IGameState

The Rendering Pipeline: From Nothing to Pixels

DirectX 9 has a reputation: it's old (released 2002), verbose, and unforgiving. But that's precisely why it's perfect for learning—you have to understand every step.

Initialization: Setting Up DirectX 9

Getting a window to show anything requires five major steps:

1. Create the Direct3D9 Interface

cpp

IDirect3D9* m_direct3D9 = Direct3DCreate9(D3D_SDK_VERSION);
if (!m_direct3D9) {
    // Failed to create—probably missing DirectX runtime
    return false;
}

This creates the main Direct3D object. Think of it as "connecting to the graphics driver."

2. Query Display Capabilities

cpp

D3DDISPLAYMODE displayMode;
m_direct3D9->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &displayMode);

We need to know: What resolution? What color format? This tells us what the monitor supports.

3. Configure the Presentation Parameters

cpp

D3DPRESENT_PARAMETERS m_d3dPP = {};
m_d3dPP.Windowed = TRUE;                          // Windowed mode (not fullscreen)
m_d3dPP.BackBufferWidth = width;                   // 800 pixels
m_d3dPP.BackBufferHeight = height;                 // 600 pixels
m_d3dPP.BackBufferFormat = D3DFMT_UNKNOWN;        // Match desktop format
m_d3dPP.BackBufferCount = 1;                       // Double buffering
m_d3dPP.SwapEffect = D3DSWAPEFFECT_DISCARD;       // Throw away old frames
m_d3dPP.EnableAutoDepthStencil = TRUE;             // We need depth testing
m_d3dPP.AutoDepthStencilFormat = D3DFMT_D16;      // 16-bit depth buffer

This is where modern APIs (Vulkan, DX12) get even MORE complex. You're essentially telling the GPU: "Here's how I want my window's backbuffer configured."

4. Create the Device

cpp

HRESULT hr = m_direct3D9->CreateDevice(
    D3DADAPTER_DEFAULT,              // Use default GPU
    D3DDEVTYPE_HAL,                   // Hardware acceleration
    hWnd,                              // Window handle
    D3DCREATE_HARDWARE_VERTEXPROCESSING,  // Use GPU for vertex math
    &m_d3dPP,
    &m_d3dDevice
);

This is where I crashed 47 times. Wrong parameters? Crash. Unsupported format? Crash. Missing depth buffer? Crash.

Fallback strategy: If hardware vertex processing fails (older GPUs), fall back to software:

cpp

if (FAILED(hr)) {
    // Try again with CPU-based vertex processing
    hr = m_direct3D9->CreateDevice(..., D3DCREATE_SOFTWARE_VERTEXPROCESSING, ...);
}

5. Create the Sprite Renderer

cpp

ID3DXSprite* m_spriteBrush;
D3DXCreateSprite(m_d3dDevice, &m_spriteBrush);

DirectX 9's ID3DXSprite is a helper for 2D games. It batches sprite draws and handles transformations.

Rendering Each Frame

Once initialized, every frame follows this pattern:

cpp

void Renderer::BeginFrame() {
    // Clear the screen to black
    m_d3dDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 
                        D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

    m_d3dDevice->BeginScene();           // Start recording draw calls
    m_spriteBrush->Begin(D3DXSPRITE_ALPHABLEND);  // Enable alpha blending for sprites
}

void Renderer::DrawSprite(const SpriteInstance& sprite) {
    // Apply transformations (position, rotation, scale)
    D3DXMATRIX transform = CalculateTransform(sprite);
    m_spriteBrush->SetTransform(&transform);

    // Draw the texture
    m_spriteBrush->Draw(sprite.texture, &sourceRect, nullptr, nullptr, sprite.color);
}

void Renderer::EndFrame() {
    m_spriteBrush->End();          // Finish sprite batch
    m_d3dDevice->EndScene();        // Stop recording
    m_d3dDevice->Present(...);      // Flip backbuffer to screen (VSYNC happens here)
}

Key Concept: Double Buffering

We draw to a "backbuffer" (off-screen), then Present() swaps it with the screen's front buffer. This prevents tearing (seeing half-drawn frames).

Performance Note: Each DrawSprite() call is relatively expensive. In a real engine, you'd batch hundreds of sprites into fewer draw calls. For Breakout (~50 bricks max), it doesn't matter.

Challenges & Solutions: Where I Failed (And What I Learned)

Challenge 1: Architecture Disaster (Week 3)

The Problem:

I made the classic beginner mistake: I started coding without designing.

My first attempt looked like this:

cpp

class Game {
    Renderer renderer;
    InputManager input;

    // OH NO—game logic mixed into Game class!
    Paddle paddle;
    Ball ball;
    Brick bricks[50];

    void Update() {
        // Handle input
        if (input.IsKeyDown(LEFT)) paddle.x -= 5;

        // Update physics
        ball.x += ball.velocityX;

        // Check collisions
        for (auto& brick : bricks) {
            if (CollidesWith(ball, brick)) {
                brick.alive = false;
            }
        }

        // ...300 more lines of spaghetti code
    }
};

This worked fine—until I needed to add a menu screen.

Suddenly I realized: How do I switch between Menu and Level1?

My code had no concept of "states." Everything was hardcoded into one giant Update() function. Adding a menu meant:

  • Wrapping everything in if (currentState == PLAYING)
  • Duplicating input handling for menu vs. gameplay
  • Managing which objects exist when

It was a mess. I was 2 weeks in and facing a complete rewrite.

The Solution: State Pattern

I asked my lecturer (and ChatGPT) for advice. The answer: State Pattern.

cpp

// Interface that all game states implement
class IGameState {
public:
    virtual void OnEnter(GameServices& services) = 0;
    virtual void Update(float dt, ...) = 0;
    virtual void Render(Renderer& renderer) = 0;
    virtual void OnExit(GameServices& services) = 0;
};

Now each screen is its own class:

cpp

class MenuState : public IGameState { /* menu logic */ };
class Level1 : public IGameState { /* level 1 logic */ };
class GameOverState : public IGameState { /* game over logic */ };

The Game class just delegates to the current state:

cpp

class Game {
    std::unique_ptr<IGameState> currentState;

    void Update(float dt) {
        currentState->Update(dt, ...);  // Let the state handle it
    }

    void ChangeState(std::unique_ptr<IGameState> newState) {
        if (currentState) currentState->OnExit(...);
        currentState = std::move(newState);
        if (currentState) currentState->OnEnter(...);
    }
};
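Switching screens then becomes a one-liner from inside a state. A hypothetical snippet (signature simplified), assuming MenuState keeps references to the Game and InputManager it was given:

cpp

// Needs <memory> for std::make_unique.
void MenuState::Update(float dt) {
    if (m_input.IsKeyDown(DIK_RETURN)) {
        // OnExit() of the menu and OnEnter() of Level1 run inside ChangeState().
        m_game.ChangeState(std::make_unique<Level1>());
    }
}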

What I Learned:

  • Design before code (ESPECIALLY for 1,000+ line projects)
  • Separation of concerns makes code flexible
  • Refactoring hurts, but teaches more than getting it right the first time

The State Pattern is everywhere—React components, game engines, even operating systems use it. This lesson alone was worth the 3 months.

Challenge 2: The Ball Goes Through Bricks (Tunneling)

The Problem:

My first collision detection looked like this:

cpp

if (OverlapsAABB(ball, brick)) {
    brick.alive = false;
    ball.velocityY = -ball.velocityY;  // Bounce
}

This worked at 60 FPS... until the ball moved too fast.

At high speeds, the ball would tunnel—pass completely through a brick between frames:

Frame 1: Ball is here     →  [    ]
                                ↓
Frame 2: Ball is here         [    ]  ← Ball skipped the brick!

The ball moved 50 pixels, but the brick was only 32 pixels wide. By the next frame, the ball was already past the brick, so the overlap check returned false.

First Failed Solution: Smaller Time Steps

I tried updating physics 120 times per second instead of 60. This helped but didn't solve it—at very high velocities, tunneling still occurred.

The Real Solution: Swept AABB

I needed continuous collision detection—checking not just "are they overlapping now?" but "will they overlap at any point during this frame's movement?"

This is called swept AABB (or ray-swept box). Instead of checking the ball's current position, I treat the ball's movement as a ray:

cpp

bool SweepAABB(
    Vector3 ballPos, Vector2 ballSize,
    Vector3 displacement,  // Where the ball will move this frame
    Vector3 brickPos, Vector2 brickSize,
    float& timeOfImpact,   // When in [0,1] does collision happen?
    Vector3& hitNormal     // Which side did we hit?
) {
    // Calculate when the ball's edges cross the brick's edges
    float xEntryTime = ...; // Math for X-axis entry
    float yEntryTime = ...; // Math for Y-axis entry

    float overallEntry = max(xEntryTime, yEntryTime);

    if (overallEntry < 0 || overallEntry > 1) {
        return false;  // No collision this frame
    }

    timeOfImpact = overallEntry;
    return true;
}
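The elided per-axis math follows the standard swept-AABB recipe: for each axis, work out when the ball's leading edge would enter and leave the brick, then combine the axes. Here's a hedged sketch of the X-axis part (Y is symmetric), assuming positions are top-left corners and sizes are width/height, slotted into the body of SweepAABB() above:

cpp

// Distances from the ball's edges to the brick's edges, along X.
float xInvEntry, xInvExit;
if (displacement.x > 0.0f) {
    xInvEntry = brickPos.x - (ballPos.x + ballSize.x);
    xInvExit  = (brickPos.x + brickSize.x) - ballPos.x;
} else {
    xInvEntry = (brickPos.x + brickSize.x) - ballPos.x;
    xInvExit  = brickPos.x - (ballPos.x + ballSize.x);
}

// Convert distances into times, as a fraction of this frame's displacement.
float xEntryTime, xExitTime;
if (displacement.x == 0.0f) {
    // Not moving on this axis, so it never constrains the collision window.
    xEntryTime = -std::numeric_limits<float>::infinity(); // needs <limits>
    xExitTime  =  std::numeric_limits<float>::infinity();
} else {
    xEntryTime = xInvEntry / displacement.x;
    xExitTime  = xInvExit  / displacement.x;
}

// Repeat for Y, then:
//   overallEntry = max(xEntryTime, yEntryTime);
//   overallExit  = min(xExitTime,  yExitTime);
// A hit happens this frame only if overallEntry < overallExit and 0 <= overallEntry <= 1.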

Now my collision loop looks like:

cpp

Vector3 displacement = ball.velocity * dt;
float toi;
Vector3 normal;

if (SweepAABB(ball, displacement, brick, toi, normal)) {
    // Move ball to exactly the collision point
    ball.position += displacement * toi;

    // Bounce
    if (normal.x != 0) ball.velocity.x = -ball.velocity.x;
    if (normal.y != 0) ball.velocity.y = -ball.velocity.y;

    brick.alive = false;
}

Result: No more tunneling, even at 1000 pixels/second.

What I Learned:

  • Discrete collision detection (overlap checks) fails at high speeds
  • Continuous collision detection (swept/ray-based) is essential for fast-moving objects
  • This is why bullets in games use raycasts, not overlap checks

The math was painful (lots of min/max comparisons), but understanding this concept changed how I think about physics in games.

Challenge 3: Collision Detection ≠ Collision Resolution

The Confusion:

When I started, I thought "collision detection" and "collision resolution" were the same thing. They're not.

  • Detection = "Did these two objects hit?"
  • Resolution = "Okay, now what do we DO about it?"

My first attempt mixed them together:

cpp

if (OverlapsAABB(ball, paddle)) {
    ball.velocityY = -ball.velocityY;  // This is resolution!
}

This caused bugs:

  • Ball would "stick" to the paddle
  • Multiple collisions in one frame would cancel out
  • Overlapping objects would vibrate

The Fix: Separate Phases

cpp

// Phase 1: Detection (PhysicsManager)
bool hit = SweepAABB(ball, paddle, timeOfImpact, normal);

// Phase 2: Resolution (also PhysicsManager, but separate function)
if (hit) {
    // Move ball to contact point
    ball.position += displacement * timeOfImpact;

    // Apply restitution (bounciness)
    ball.velocity = Reflect(ball.velocity, normal) * restitution;
}
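Reflect() isn't shown in the article; it's the standard vector reflection v' = v - 2(v·n)n for a unit-length normal n. A minimal sketch, assuming Vector3 is a plain struct with x/y/z members:

cpp

Vector3 Reflect(const Vector3& v, const Vector3& n) {
    // Project v onto the normal, then subtract that component twice.
    float dot = v.x * n.x + v.y * n.y + v.z * n.z;
    return Vector3{
        v.x - 2.0f * dot * n.x,
        v.y - 2.0f * dot * n.y,
        v.z - 2.0f * dot * n.z
    };
}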

What I Learned:

  • Physics engines have detection and response as separate systems
  • This separation allows for complex scenarios (multiple overlaps, conveyor belts, one-way platforms)
  • YouTube tutorials often skip this distinction—they work for simple cases but fail for complex games

What I'd Do Differently

If I built this again from scratch:

  1. Design first, code second (spend 2-3 days on architecture diagrams)
  2. Use a modern graphics API (DirectX 11 or OpenGL 4.5) instead of DX9
    • DX9 is fine for learning, but dated (no compute shaders, limited pipeline control)
  3. Write unit tests for physics (I caught so many bugs by hand-testing that could've been automated)
  4. Implement an entity-component system (ECS) instead of inheritance-based game objects
  5. Add a debug overlay earlier (FPS counter, collision visualization saved me hours of debugging)
  6. Profile from day one (I didn't measure performance until week 8—wasted time optimizing the wrong things)

Conclusion & Next Steps

Building a game engine from scratch was hard—way harder than Unity tutorials made game development seem.

But that's the point.

What I Gained:

  • Deep understanding of rendering pipelines
  • Practical knowledge of physics simulation
  • Appreciation for what Unity/Unreal abstract away
  • Confidence to debug low-level issues
  • A portfolio piece that stands out

What's Next:

  • Port this engine to DirectX 11 (modern pipeline, compute shaders)
  • Build a voxel engine (Minecraft-style, using Three.js for web)
  • Experiment with Vulkan (the final boss of graphics APIs)

Links:

If you're thinking about building your own engine:

Do it. Not to replace Unity, but to understand Unity.

You'll struggle. You'll debug cryptic errors at 2 AM. You'll question why you didn't just use Godot.

But when you finally see that ball bounce for the first time—compiled from YOUR code, rendered by YOUR pipeline, colliding with YOUR physics—you'll understand why people say "reinventing the wheel is the best way to learn how wheels work."

Questions? Suggestions? Let me know in the comments! 👇
