I’m back from my vacation, which was exactly what I needed, and I now find myself writing the last issue of 2025! How did that happen? 😱
To mark the end of the year, Apple announced this year’s App Store Award winners! You can see the full list of awards and winners on the developer site, although I actually prefer their press release writeup as it includes screenshots of the apps. As always, there are some outstanding winners in the list. I especially liked the Be My Eyes app, which restored some of my faith in humanity. It’s such a simple idea, and it brings nothing but good into the world. ❤️ Congratulations to everyone who was nominated, and also to the winners! These awards, combined with the Apple Design Awards, have been inspiring people to make better apps for many years, and long may they both continue.
As always, it’s been a pleasure to have you all continue to read this newsletter for another year. This year has been a bit exhausting, but I’m tremendously excited about 2026, and after last week’s vacation I’m feeling refreshed and ready to go.
Happy holidays, and I’ll speak to you all in the new year! 🎊
RevenueCat Paywalls just added a steady stream of new features: more templates, deeper customization, better previews, and new promo tools. Check the Paywalls changelog and keep improving your subscription flows as new capabilities ship.
This is a tricky one, in my opinion. I agree that copycat apps in the App Store are a bad thing, and these new guidelines to strengthen protection against copycats are in principle a good thing, but the devil is always in the details. Could this affect innocent developers who are simply trying to compete, as well as “pure” copycats? The rules look good as written, but the review process is imperfect.
Adrian Ross, following up on his previous article about developing apps with Zed, this time covers how to keep Xcode running in the background and use a small AppleScript to get SwiftUI previews for whatever file you’re working on in Zed. 👍
LLM coding agents are pretty good at writing Swift and SwiftUI. However, because both technologies have changed significantly in the years they have been around, the agents will sometimes reach for outdated techniques. You can fix some of that with a set of ground rules, and Paul Hudson has put together a great collection to get you started. Oh, and even if you don’t use a coding agent, the list of tips in Paul’s AGENTS.md is good for humans, too! 🤖
Can you run Swift on a Raspberry Pi? Of course you can! I enjoyed this article from Jesse Zamora where he digs into the various Pi devices (Pies? 🥧) that can run it, and then goes through a step-by-step example of using the swift-sdk-generator to get hummingbird-examples running on either a 64-bit or 32-bit Pi. Follow along and have some Swifty fun with that Pi that is still sitting in its box on your shelf.
Tessera is a Swift package that turns a single generated tile composed of arbitrary SwiftUI views into an endlessly repeating, seamlessly wrapping pattern.
It only does that one job, but check out the README file for examples of how good it looks. It’s perfect for subtle backgrounds in an iOS app.
I liked this quick tip from Wesley de Groot on how adding alternate app names can make your app easier to launch and find in Spotlight. It’ll only take you three minutes to implement, which is a great value for your time! 👍
The Android Show | XR Edition introduced updates to the Android XR platform, focusing on new devices and developer tools. The platform is expanding to include lightweight AI and Display AI glasses from Samsung, Gentle Monster, and Warby Parker, integrating Gemini for features like live translation and visual search. Uber is exploring AI Glasses for contextual directions. Wired XR glasses, such as XREAL’s Project Aura, are scheduled for release next year.
Android XR SDK Developer Preview 3 offers increased stability for headset APIs and opens development for AI Glasses. This includes new libraries like Jetpack Compose Glimmer for transparent display UI and Jetpack Projected for extending your mobile apps to glasses. ARCore for Jetpack XR gains Geospatial capabilities, and new APIs enable detection of device field-of-view for adaptive UIs.
The platform, built on OpenXR, supports Unreal Engine development with a Google vendor plugin for hand tracking coming next year, and Godot Engine now includes Android XR support via its OpenXR vendor plugin v4.2.2 stable.
Android XR SDK Developer Preview 3 is now available, enabling you to build augmented experiences for AI Glasses in addition to immersive experiences for XR Headsets.
Jetpack Compose for XR introduces the UserSubspace component for content follow behavior, spatial animations, and support for specifying layout sizes as fractions of the user’s field of view.
Material Design for XR offers new components that adapt spatially, including dialogs, navigation bars that pop out into an Orbiter, as well as a SpaceToggleButton.
An XR Glasses emulator for devices like Project Aura from XREAL has been added to Android Studio.
The Android XR SDK for Unity expands tracking capabilities to include QR and ArUco codes, planar images, experimental body tracking, and scene meshing.
To begin building, update to Android Studio Canary (Otter 3, Canary 4 or later) and emulator version 36.4.3 Canary or later, then visit developer.android.com/xr for libraries and samples.
The Android Studio Otter 2 Feature Drop is now stable. This release introduces updates to Agent Mode, including the Android Knowledge Base for improved accuracy and the option to use the Gemini 3 model. You can now use Backup and Sync to maintain consistent IDE settings across your machines and opt in to receive communications from the Android Studio team. Additionally, this release incorporates stability and performance enhancements from the IntelliJ IDEA 2025.2 platform, such as Kotlin compiler and terminal improvements.
Android 16 QPR2 has been released, marking the first minor SDK version. This release aims to accelerate innovation by delivering new APIs and features outside of major yearly platform releases.
Key updates include:
Minor SDK Release: You can now check for new APIs using SDK_INT_FULL and VERSION_CODES_FULL in the Build class as of Android 16 (a short version-check sketch follows this list).
Expanded Dark Theme: This feature provides an option to invert apps that do not have a native dark theme, intended as an accessibility feature. You should declare isLightTheme="false" in your dark theme if your app does not inherit from standard DayNight themes to prevent unintended inversion.
Custom Icon Shapes & Auto-Theming: Users can select custom shapes for app icons, and the system can automatically generate themed icons if your app does not provide one.
Interactive Chooser Sessions: The sharing experience now supports real-time content updates within the Chooser, keeping the UI interactive.
Linux Development Environment with GUI Applications: You can now run Linux GUI applications directly within the terminal environment.
Generational Garbage Collection: The Android Runtime (ART) includes a Generational Concurrent Mark-Compact (CMC) Garbage Collector to reduce CPU usage and improve battery efficiency.
Widget Engagement Metrics: You can query user interaction events such as clicks, scrolls, and impressions for your widgets.
16KB Page Size Readiness: Debuggable apps not 16KB page-aligned will receive early warning dialogs.
IAMF and Audio Sharing: Software decoding support for Immersive Audio Model and Formats (IAMF) is added, and Personal Audio Sharing for Bluetooth LE Audio is integrated into the system Output Switcher.
Health Connect Updates: Health Connect automatically tracks steps using device sensors, and you can now track weight, set index, and Rate of Perceived Exertion (RPE) in exercise segments.
Smoother Migrations: A new Data Transfer API enables data migration between Android and iOS devices.
Developer Verification: APIs support developer verification during app installation, with ADB commands available to simulate outcomes.
SMS OTP Protection: The delivery of messages containing an SMS retriever hash is delayed for most apps by three hours to help prevent OTP hijacking.
Secure Lock Device: A new system-level security state locks the device immediately, requiring the primary PIN, pattern, or password to unlock and temporarily disabling biometric unlock.
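To gate a feature on the new minor release, a version check looks roughly like the sketch below. It uses the SDK_INT_FULL and VERSION_CODES_FULL members mentioned above; the specific constant name for the QPR2 release is an assumption (shown here as BAKLAVA_1), so verify it against the Build documentation.

import android.os.Build

// Hedged sketch: returns true when running on Android 16 QPR2 or later.
// BAKLAVA_1 is an assumed constant name for the first minor release.
fun isAndroid16Qpr2OrLater(): Boolean =
    Build.VERSION.SDK_INT_FULL >= Build.VERSION_CODES_FULL.BAKLAVA_1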
To get started, you can get the Android 16 QPR2 release on your Pixel device, or use 64-bit system images with the Android Emulator in Android Studio. Using the latest Canary build of Android Studio Otter is recommended.
The Jetpack Compose December ’25 release is now stable, including core Compose modules version 1.10 and Material 3 version 1.4. To use this release, update your Compose BOM version to 2025.12.00.
Key updates include:
Performance Improvements: Scroll performance now matches Views, with pausable composition in lazy prefetch enabled by default to reduce jank. Further optimizations improve Modifier.onPlaced and Modifier.onVisibilityChanged performance.
New Features:
The retain API helps persist non-serializable state across configuration changes, useful for objects like media players.
Animation features include dynamic shared elements, allowing you to control sharedElement() and sharedBounds() animation transitions via SharedContentConfig’s isEnabled property.
Upcoming Changes: Modifier.onFirstVisible will be deprecated in Compose 1.11 due to non-deterministic behavior; migrate to Modifier.onVisibilityChanged. Coroutine dispatch in tests will shift to StandardTestDispatcher by default in a future release to align with production behavior; you can opt in now using effectContext = StandardTestDispatcher() in createComposeRule.
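If you rely on Modifier.onFirstVisible today, a minimal migration sketch might look like the following. It assumes onVisibilityChanged delivers a Boolean visibility callback and tracks the seen-once state manually; check the exact parameter list against the Compose 1.10/1.11 API reference, and note that onImpression is just a placeholder.

@Composable
fun ImpressionTracker(onImpression: () -> Unit, content: @Composable () -> Unit) {
    // Remember whether this content has already been reported as visible once.
    var hasBeenSeen by remember { mutableStateOf(false) }
    Box(
        Modifier.onVisibilityChanged { visible -> // assumed (Boolean) -> Unit callback
            if (visible && !hasBeenSeen) {
                hasBeenSeen = true
                onImpression()
            }
        }
    ) {
        content()
    }
}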
Developed to address the shift to reactive programming and declarative UI, Navigation 3 provides more flexibility and customizability compared to the original Jetpack Navigation (now Nav2) through smaller, decoupled APIs. For example, NavDisplay observes a list of keys backed by Compose state to update the UI. It also allows you to supply your own state as a single source of truth, customize screen animations, and create flexible layouts with the Scenes API.
If you are currently using Navigation Compose with Nav2, you can consider migrating to Nav3. A migration guide is available that outlines key steps, including adding Nav3 dependencies, updating routes to implement NavKey, creating navigation state classes, and replacing NavController with these classes. You also move destinations from NavHost’s NavGraph into an entryProvider and replace NavHost with NavDisplay. You can experiment with an AI agent, like Gemini in Android Studio’s Agent Mode, for this migration by providing the markdown version of the guide as context.
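To give a feel for where such a migration ends up, here is a hedged Nav3 sketch using the APIs named above (NavKey, entryProvider, NavDisplay, plus rememberNavBackStack). The route types and screen composables are hypothetical, and exact function names should be confirmed against the Nav3 documentation.

@Serializable data object Home : NavKey
@Serializable data class Detail(val id: String) : NavKey

@Composable
fun AppNavigation() {
    // The back stack is plain Compose state and acts as the single source of truth.
    val backStack = rememberNavBackStack(Home)
    NavDisplay(
        backStack = backStack,
        entryProvider = entryProvider {
            entry<Home> { HomeScreen(onOpenDetail = { id -> backStack.add(Detail(id)) }) }
            entry<Detail> { key -> DetailScreen(id = key.id, onBack = { backStack.removeLastOrNull() }) }
        }
    )
}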
Profile Guided Optimizations, including Baseline Profiles and Startup Profiles, can enhance startup speed, scrolling, animation, and rendering performance. Jetpack Compose 1.10 also introduced performance improvements like pausable composition and a customizable cache window for handling complex list items.
To measure performance, a new Performance Leveling Guide outlines a five-step journey, starting with data from Android Vitals and progressing to advanced local tooling like Jetpack Macrobenchmark and the UiAutomator 2.4 API for accurate measurement and verification.
Debugging tools received upgrades, including Automatic Logcat Retrace in Android Studio Narwhal to de-obfuscate stack traces automatically. New guidance on Narrow Keep Rules helps fix runtime crashes, supported by a lint check in Android Studio Otter 3. Additionally, new documentation and the Background Task Inspector offer insights into debugging WorkManager tasks and background work.
Performance optimization is an ongoing process, and the App Performance Score framework can help you integrate continuous improvements into your product roadmap.
CameraX 1.5 introduces features for video recording and image capture, alongside core API enhancements.
For video, you can now capture slow-motion or high-frame-rate videos. The new Feature Group API enables combinations like 10-bit HDR and 60 FPS, supporting features such as HDR (HLG), 60 fps, Preview Stabilization, and Ultra HDR, with plans for 4K recording and ultra-wide zoom.
Concurrent Camera improvements allow binding Preview, ImageCapture, and VideoCapture concurrently, and applying CameraEffects in composition mode. Additionally, CameraX 1.5 includes dynamic audio muting during recording, improved insufficient storage error handling, and a low light boost for dark environments on supported devices.
For image capture, CameraX 1.5 adds support for capturing unprocessed, uncompressed DNG (RAW) files, either standalone or simultaneously with JPEG. You can also leverage Ultra HDR output when using Camera Extensions.
Core API changes include the new SessionConfig API, which centralizes camera setup, removes the need for manual unbind() calls when updating use cases or switching cameras, and provides deterministic frame rate control. The camera-compose library has reached stable version 1.5.1, addressing bugs and preview stretching. Other improvements include fine-grained control over torch strength (querying max strength and setting levels) and NV21 image format support in ImageAnalysis.
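As a rough, unverified sketch of the SessionConfig-based setup described above: the constructor and parameter names are assumptions, and cameraProvider/lifecycleOwner are assumed to come from your existing setup, so check the CameraX 1.5 release notes for the real API.

// Bundle use cases into one session configuration and bind it once; when the
// SessionConfig changes, CameraX reconfigures the session without a manual unbind().
val preview = Preview.Builder().build()
val imageCapture = ImageCapture.Builder().build()
val sessionConfig = SessionConfig(useCases = listOf(preview, imageCapture))
cameraProvider.bindToLifecycle(lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, sessionConfig)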
To access these features, update your dependencies to CameraX 1.5.1. You can join the CameraX developer discussion group or file bug reports for support.
The Health Connect Jetpack library has reached its 1.1.0 stable release, providing a foundation for health and fitness applications. This version incorporates features such as background reads for continuous data monitoring, historical data synchronization, and support for data types including Personal Health records, Exercise Routes, Training Plans, and Skin Temperature. The platform supports over 50 data types across various health and fitness categories.
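For context, reading one of these data types with the Jetpack client looks roughly like this (permission handling is omitted, and context, startTime, and endTime are placeholders):

// Read step counts for a time range and total them.
val client = HealthConnectClient.getOrCreate(context)
val response = client.readRecords(
    ReadRecordsRequest(
        recordType = StepsRecord::class,
        timeRangeFilter = TimeRangeFilter.between(startTime, endTime)
    )
)
val totalSteps = response.records.sumOf { it.count }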
Additionally, Health Connect is expanding its device type support, which will be available in version 1.2.0-alpha02. New supported device types include Consumer Medical Devices (e.g., Continuous Glucose Monitors, Blood Pressure Cuffs), Glasses (for smart glasses and head-mounted optical devices), Hearables (for earbuds, headphones, and hearing aids with sensing capabilities), and Fitness Machines (for stationary and outdoor equipment). This expansion aims to enhance data representation by specifying the source hardware.
You are encouraged to upgrade to the 1.1.0 library, review the official documentation and release notes for further details, and submit feedback or report issues via the public issue tracker.
ML Kit has released the Alpha version of its GenAI Prompt API, enabling custom on-device Gemini Nano experiences. This API allows you to send natural language and multimodal requests to Gemini Nano, supporting use cases requiring more control and flexibility for generative models.
The Prompt API processes data locally, offering offline functionality and enhanced user privacy. Examples of its application include image understanding, intelligent document scanning, transforming data for UI, content prompting, content analysis, and information extraction.
Implementation involves a few lines of code, using Generation.getClient().generateContent() with optional parameters like temperature, topK, candidateCount, and maxOutputTokens. Detailed examples are available in the official documentation and a GitHub sample.
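As a hedged illustration of that call: only Generation.getClient() and generateContent() are taken from the summary above, while the prompt string and the way options are supplied are assumptions, so rely on the official documentation and GitHub sample for the real request shape.

// Hypothetical usage of the GenAI Prompt API with Gemini Nano on-device.
val client = Generation.getClient()
val response = client.generateContent(
    "Extract the delivery address from this message: $userMessage"
    // Optional tuning such as temperature, topK, candidateCount and
    // maxOutputTokens is supplied via the request/options object in the docs.
)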
The API performs optimally on the Pixel 10 device series, which features Gemini Nano (nano-v3), built on the same architecture as Gemma 3n. Developers without a Pixel 10 can prototype features locally using Gemma 3n. Refer to the device support documentation for a comprehensive list of compatible devices.
Parking Assistance: It uses multimodal capabilities to detect improperly parked bikes and scooters on yellow tactile paving, reducing server costs and enhancing user privacy compared to cloud-based image recognition.
Improved Address Entry: For parcel delivery, it streamlines entity extraction from natural language order requests, eliminating error-prone manual address entry by drivers.
The implementation of Gemini Nano on-device led to:
Cost savings by shifting AI processing from the cloud.
Enhanced user privacy by keeping sensitive location data on the device.
Reduced order completion time for delivery orders by 24%.
Increased conversion rates for new users by 45% and existing users by 6%.
Over 200% increase in AI-powered orders during peak seasons.
Reduced developer effort and shortened development time.
You can use ML Kit’s GenAI Prompt API to integrate on-device AI capabilities like Gemini Nano into your applications.
The Android team has launched a redesigned, open-source Android AI Sample Catalog app on GitHub to showcase various AI-enabled features using both on-device (ML Kit GenAI API with Gemini Nano) and cloud (Firebase AI Logic SDK) models. The catalog includes samples for tasks like image generation (Imagen), on-device text summarization, a chatbot for image editing (Gemini 3 Pro Image model), on-device image description, a voice-controlled to-do list, and on-device rewrite assistance. The app features a new Material 3 design and provides structured code for easy integration into your own projects.
We had a Nav3 Spotlight Week to help you learn and integrate the library into your app. Nav3 can assist in reducing technical debt, improving separation of concerns, accelerating feature development, and supporting new form factors.
The week featured daily content:
API Overview explores core APIs like NavDisplay, NavEntry, and entryProvider, including a coding walkthrough video.
Animations demonstrates how to set custom animations for screen transitions and override them for individual screens, with accompanying documentation and recipes.
Deep links covers creating deep links with various code recipes, offering a guide and both basic and advanced examples for parsing intents and synthetic back stacks. The Now in Android sample has also migrated to Nav3.
Modularization focuses on modularizing navigation code to avoid circular dependencies and using dependency injection and extension functions for feature modules.
Ask Me Anything was a live session where the community submitted questions using the #AskAndroid tag on BlueSky, LinkedIn, and X.
#WeArePlay spotlights DELISH KITCHEN co-founder Chiharu and her app, which provides 55,000 video recipes to over 13 million Japanese users to solve the “dinner dilemma.” Google Play supports the app’s growth, offering distribution to Android users, developer tools, and feature campaigns. Future plans include an AI-powered cooking assistant, a new health management app, and supermarket partnerships.
Level 4: Trace Analysis Tools. Use deep analysis tools like Perfetto to capture and analyze the entire device state, including kernel scheduling and system services, to provide context for performance issues. You can record traces via developer options, Android Studio CPU Profiler, or the Perfetto UI, then load and analyze them to debug jank, slow startup, and excessive battery/CPU usage.
Level 5: Custom Performance Tracking Framework. For teams with dedicated resources, you can build a custom performance tracking framework using Android APIs like ApplicationStartInfo (API 35), ProfilingManager (API 35), and ApplicationExitInfo (API 30) to understand why your app process died (e.g., native crashes, ANRs, out-of-memory kills).
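As one concrete example from that list, querying recent process-death reasons with ApplicationExitInfo (available since API 30) looks roughly like this; the record* helpers are hypothetical placeholders for your own reporting.

val activityManager = context.getSystemService(ActivityManager::class.java)
// pid = 0 and maxNum = 10 return the most recent exits for this package.
val recentExits = activityManager.getHistoricalProcessExitReasons(context.packageName, 0, 10)
recentExits.forEach { info ->
    when (info.reason) {
        ApplicationExitInfo.REASON_ANR -> recordAnr(info)
        ApplicationExitInfo.REASON_LOW_MEMORY -> recordOomKill(info)
        ApplicationExitInfo.REASON_CRASH_NATIVE -> recordNativeCrash(info)
        else -> recordOtherExit(info)
    }
}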
The Play Integrity API provides various verdicts to detect specific threats, including checks for:
App Status: If the user installed or paid for the app via Google Play (accountDetails) and if the app binary is unmodified (appIntegrity).
Device Status: If the app runs on a genuine Play Protect certified Android device (deviceIntegrity), if the device has recent security updates (MEETS_STRONG_INTEGRITY), and if Google Play Protect is active and no risky apps are present (playProtectVerdict).
Security Risks: If risky apps are running that could capture the screen or control the device (appAccessRiskVerdict).
Improvements also focus on user recovery through new Play in-app remediation prompts.
Uber has reduced manual logins by an estimated 4 million per year by integrating the Restore Credentials API into its rider app. This feature enables a seamless transition for users when they switch to a new device, eliminating the need for re-authentication.
A five-week A/B experiment confirmed the positive impact, demonstrating:
A 3.4% decrease in manual logins (SMS OTP, passwords, social login).
A 1.2% reduction in expenses related to SMS OTP logins.
A 0.575% increase in the rate of devices successfully reaching the app’s home screen.
R8 is the primary tool for shrinking and optimizing Android apps. Keep Rules are essential because R8 cannot predict dynamic code (like reflection), which could lead to unintended code removal.
Key takeaways for Keep Rules:
Location: Write rules in a proguard-rules.pro file, always using proguard-android-optimize.txt.
Best Practice: Write narrow, specific rules, and use annotations or common ancestors for scalability.
Avoid: Global options (like -dontoptimize) and overly broad rules, as they negate R8’s performance benefits.
Troubleshooting: Use -printconfiguration to see all merged rules and -whyareyoukeeping to understand why a class is being preserved.
Goal: Use modern libraries with code generation instead of reflection to reduce the need for Keep Rules entirely.
Reddit significantly improved its app’s performance by implementing the R8 optimizer in full mode, which took less than two weeks.
Key results from the implementation:
Real-World Metrics (Android Vitals/Crashlytics):
40% faster cold startup time
30% reduction in “Application Not Responding” (ANR) errors
25% improvement in frame rendering
14% decrease in app size
Controlled Testing (Macrobenchmark):
55% faster app startup
18% quicker time for users to begin browsing
You can enable R8 by setting minifyEnabled and shrinkResources to true in your release build type within app/build.gradle.kts. This process should be followed by holistic end-to-end testing, and you may need to define keep rules to prevent R8 from modifying essential parts of your code.
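For reference, the release configuration described above looks like this in app/build.gradle.kts (standard Kotlin DSL property names):

android {
    buildTypes {
        release {
            isMinifyEnabled = true    // enable R8 code shrinking and optimization
            isShrinkResources = true  // strip unused resources
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"  // your narrow keep rules live here
            )
        }
    }
}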
The latest #WeArePlay stories highlight game creators who develop for Google Play. These stories feature developers who entertain players, inspire new ideas, and spark imagination through their creations.
You can learn about:
Ralf and Matt from Vector Unit, creators of Beach Buggy Racing. Their kart racing game has over 557 million downloads and is praised by its community for its console-quality feel on mobile. They continue to update the game and are prototyping new projects.
Camilla from Clover-Fi Games, who developed Window Garden. This lofi idle game, which encourages players to care for digital plants and decorate spaces, has surpassed 1 million downloads and received a “Best of 2024” award from Google Play. Camilla aims to expand her studio and collaborate with other creatives.
Rodrigo from Kolb Apps, the founder behind Real Drum. This virtual drum set app offers a realistic experience, allowing users to play drums and cymbals. It has accumulated over 437 million downloads, making music accessible to many, and Rodrigo plans to release new apps for children.
This week’s #WeArePlay series highlights applications and games on Google Play that focus on health and wellness. You can learn about:
Alarmy (Delightroom, Seoul), an app for heavy sleepers that uses challenge-based alarms, including math problems and photo missions, and is expanding into sleep tracking and general wellness.
Betwixt (Mind Monsters Games, Cambridge, UK), an interactive adventure game designed to reduce anxiety by combining storytelling with evidence-based techniques.
MapMyFitness (MapMyFitness, Boulder, CO, U.S.), an app for runners and cyclists to map routes and track training, offering features like adaptive training plans, guided workouts, and live safety tracking.
Android developer verification has begun its early access phase. This initiative introduces verification requirements as an additional layer of security to protect Android users from scams and digital fraud, particularly with sideloaded apps. The system aims to deter malicious app distribution by linking apps to verified identities.
In response to community feedback, changes address specific developer needs:
You, as a student or hobbyist, will have a dedicated account type, enabling distribution to a limited number of devices without full verification.
For experienced users, a new advanced flow is being developed to permit the installation of unverified apps. This flow will include clear warnings about risks and is designed to resist coercion.
You can find a video walkthrough and detailed guides for the new Android Developer Console experience.
The “excessive partial wake locks” metric has moved out of beta and is now generally available as a new core vitals metric in Android vitals. This metric, co-developed with Samsung, identifies user sessions where an app holds more than two cumulative hours of non-exempt wake locks within a 24-hour period.
If your app surpasses a bad behavior threshold — 5% of user sessions being excessive over 28 days — it may be excluded from prominent Google Play discovery surfaces and a warning may appear on its store listing, starting March 1, 2026.
Android vitals now features a wake lock names table to help you pinpoint excessive wake locks by name and duration, particularly those with P90 or P99 durations over 60 minutes. You are encouraged to review your app’s performance in Android vitals and consult the updated documentation for best practices.
redBus utilized Gemini Flash via Firebase AI Logic to revamp its customer review system, resulting in a 57% increase in review length. The company’s previous text-based review process presented challenges such as language barriers and a lack of detailed feedback.
To address this, redBus implemented a voice-first approach, enabling users to submit reviews in their native language. Gemini Flash transcribes and translates speech, performs sentiment analysis, and generates star ratings, relevant tags, and summaries from these voice inputs. Firebase AI Logic facilitated the frontend team’s independent development and launch of this feature within 30 days, removing the need for complex backend implementation. The solution employs structured output to ensure well-formed JSON responses from the AI model. redBus plans to continue exploring on-device generative AI and will use Google AI Studio for prompt iteration.
Google Play has released new tools and programs designed to streamline your development and accelerate your app’s growth. You can now validate deep links directly within Play Console using a built-in emulator. A new Gemini-powered localization service offers no-cost translations for app strings, automatically translating new app bundles into selected languages while allowing you to preview, edit, or disable them.
On the Statistics page, a new Gemini-powered feature generates automated chart summaries to help you understand data trends and provides access to reporting for screen reader users. The Play Console now includes a “Grow users” overview page, offering a tailored view to acquire new users and expand your reach. A new “You” tab on the Play Store is available for re-engagement; you can integrate with Engage SDK to help users resume content or get personalized recommendations. Game developers can use this tab to showcase in-game events, content updates, and offers, with promotional content, YouTube video listings, and Play Points coupons available.
For monetization, you can now configure one-time products with more flexibility, including limited-time rentals and pre-orders through an early access program, and manage your catalog more efficiently with a new taxonomy. A new Play Points page in Play Console provides reporting on the revenue, buyers, and acquisitions generated by both your developer-created and Google-funded Play Points promotions.
Calm has brought its mindfulness content to Android XR. Its engineering team developed functional XR orbiter menus in one day and a core XR experience within two weeks. This involved extending existing Android development, including leveraging Jetpack Compose and reusing codebase components such as backend and media playback.
The team utilized Android XR design guides and evolved features like the “Immersive Breathe Bubble” for 3D breathwork and “Immersive Scene Experiences” for ambient environments. The creative workflow involved concept art, 3D models with human-scale reference, and in-headset testing, with the Android XR emulator available as a testing option.
To build for XR, you can integrate Jetpack XR APIs into existing Android apps and reuse code to create prototypes quickly. Resources for building on the Android XR platform are available at developer.android.com/xr.
Android Developers has introduced Cahier, a new GitHub sample application designed to showcase best practices for building productivity and creativity apps optimized for large screens.
Cahier demonstrates how you can develop versatile note-taking applications that combine text, freeform drawings using the Ink API (now in beta), and image attachments. Key features include fluid content integration with drag and drop for importing and sharing, and note organization capabilities.
The sample utilizes an offline-first architecture with Room and supports multi-window and multi-instance capabilities, including desktop windowing. Its user interface adapts to various screen sizes and orientations, including phones, tablets, and foldable devices, by employing ListDetailPaneScaffold and NavigationSuiteScaffold from the material3-adaptive library.
Cahier also illustrates deep system integration, showing you how to enable your app to become the default note-taking app on Android 14 and higher by responding to Notes intents. Lenovo has enabled Notes Role support on its tablets running Android 15 and above, allowing note-taking apps to be set as default on these devices. The sample provides comprehensive input support, including stylus, keyboard shortcuts, and mouse/trackpad interactions.
Material 3 Adaptive 1.2.0 is now stable, building on previous versions with expanded support for window size classes and new strategies for display pane placement.
The release introduces support for Large (L) and Extra-large (XL) breakpoints for width window size classes, enabled by setting supportLargeAndXLargeWidth = true in your currentWindowAdaptiveInfo() call.
New adaptive strategies, reflow and levitate, are available for ListDetailPaneScaffold and SupportingPaneScaffold. The reflow strategy rearranges panes based on window size or aspect ratio, moving a second pane to the side or underneath. The levitate strategy docks content and offers customization for draggability, resizability, and background scrim. Both strategies can be declared in the Navigator constructor using the adaptStrategies parameter.
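A hedged sketch of wiring these options together, using only the parameter names mentioned above; the defaults object and the Reflow/Levitate constructors are assumptions to verify against the material3-adaptive 1.2.0 API reference.

// Opt in to the new Large / Extra-large width window size classes.
val adaptiveInfo = currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true)

// Ask the list-detail scaffold to levitate (or reflow) its extra pane.
val navigator = rememberListDetailPaneScaffoldNavigator<Nothing>(
    adaptStrategies = ListDetailPaneScaffoldDefaults.adaptStrategies(
        extraPaneAdaptStrategy = AdaptStrategy.Levitate() // assumed constructor
    )
)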
When publishing and distributing your app for Android XR, consider five key areas:
Uphold quality with Android XR app quality guidelines. Ensure your app delivers a safe, comfortable, and performant user experience by following guidelines that cover camera movement, frame rates, visual elements (like strobing), performance metrics, and recommended minimum interactive target sizes for eye-tracking and hand-tracking inputs.
Configure your app manifest correctly. In your AndroidManifest.xml, specify android.software.xr.api.spatial for apps using the Jetpack XR SDK or android.software.xr.api.openxr for apps using OpenXR or Unity. Set android:required="true" accordingly for dedicated XR tracks or false for mobile tracks. Also, set the android.window.PROPERTY_XR_ACTIVITY_START_MODE on your main activity to define the default user environment (Home Space, Full Space Managed, or Full Space Unmanaged). Check for optional hardware features dynamically at runtime using PackageManager.hasSystemFeature() instead of setting them as required in the manifest to avoid limiting your audience.
Use Play Asset Delivery (PAD) to deliver large assets. For immersive apps with large assets, use PAD’s install-time, fast follow, or on-demand delivery modes. Android XR apps have an increased cumulative asset pack limit of 30 GB. Unity developers can integrate Unity Addressables with PAD.
Showcase your app with spatial video previews. Provide a 180°, 360°, or stereoscopic video asset to offer an immersive 3D preview on the Play Store for users browsing on XR headsets.
Choose your Google Play release track. You can publish to the mobile release track if you are adding spatial XR features to an existing mobile app and can bundle XR features into your existing Android App Bundle (AAB). Alternatively, you can publish to the dedicated Android XR release track for new XR apps or XR versions that are functionally distinct, which restricts visibility to Android XR devices supporting spatial or OpenXR features.
The Android Developers Blog details how the Androidify app was adapted for Extended Reality (XR) using the Jetpack XR SDK, coinciding with the launch of Samsung Galaxy XR powered by Android XR.
Originally designed with adaptive layouts for phones, foldables, and tablets, Androidify is compatible with Android XR without modifications. For a differentiated XR experience, developers created specific spatial layouts.
Key XR concepts include Home Space, which allows multitasking with multiple app windows in a virtual environment, and Full Space, where an app uses the full spatial features of Android XR. You are advised to support both modes.
Designing for XR involved organizing UI elements using containment, embracing spatial UI elements that adjust to the user, and adapting camera layouts for headsets. Design tips for spatial UI include allowing uncontained elements, removing background surfaces, motivating with motion, and choosing an anchor element for content.
For development, the Jetpack XR Compose dependency was added. You can transition to Full Space by checking for XR spatial features using LocalSpatialConfiguration.current.hasXrSpatialFeature and !LocalSpatialCapabilities.current.isSpatialUiEnabled. Spatial UI elements like SpatialPanel, SubspaceModifier, and Orbiter enable the creation of XR layouts with existing 2D content. SpatialPanels can incorporate ResizePolicy and MovePolicy for user interaction, and hierarchical relationships allow grouped movement.
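To make that concrete, here is an illustrative snippet (not Androidify’s actual code) combining those APIs; the panel size, policies, and screen composables are placeholder assumptions, so treat it as a sketch of the pattern rather than a drop-in sample.

@Composable
fun ResultScreen() {
    if (LocalSpatialCapabilities.current.isSpatialUiEnabled) {
        Subspace {
            SpatialPanel(
                modifier = SubspaceModifier.width(1024.dp).height(640.dp),
                resizePolicy = ResizePolicy(), // let the user resize the panel
                movePolicy = MovePolicy()      // let the user reposition it
            ) {
                ResultContent() // reuse the existing 2D composable
            }
            Orbiter(position = OrbiterEdge.Bottom) {
                ResultToolbar() // floating controls anchored to the panel
            }
        }
    } else {
        ResultContent() // standard 2D layout on mobile or in Home Space
    }
}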
To publish, include <uses-feature android:name="android.software.xr.api.spatial" android:required="false" /> in your AndroidManifest.xml to signify XR-differentiated features. The same app binary can be distributed to both mobile and XR users, with options to add XR-specific screenshots or spatial video assets for immersive previews on the Play Store.
Miksapix Interactive launched their game “Raanaa” on Google Play, leveraging Sámi mythology to preserve and share indigenous culture. This demonstrates a successful approach to niche content development and localization on the platform, with the game being translated into various Sámi languages.
Google is empowering you to build intelligent apps using Gemini AI, offering a comprehensive end-to-end AI stack. Key takeaways include:
Tools: AI Studio for prototyping, ML Kit GenAI APIs (Beta) for on-device inference (summarization, proofreading, image description, custom prompt API), and Firebase AI Logic SDK for cloud inference with production features (App Check, Remote Config, monitoring).
Models: On-device options like Gemini Nano and Gemma 3n; cloud models like Gemini Pro, Flash, and Flash-Lite. Specialized models include Nano Banana and Imagen for image generation.
New APIs: The Gemini Live API (Preview) enables real-time voice/video interactions and features “function calling” for Gemini to invoke custom Kotlin functions within apps.
Focus: You can choose between on-device (offline, private, no cost) and cloud (powerful, broad availability) AI approaches based on your app’s needs.
It’s time to build adaptive apps that optimally scale across diverse form factors (tablets, foldables, Chromebooks, etc.).
Key takeaways:
Incentives: Play Store will prioritize adaptive apps in search/features. By 2026, “quality badging” and form-factor-specific ratings will be introduced.
Platform Changes: Android 16 (API level 36) will remove orientation, resize, and aspect ratio constraints on large screens, aiming to make 75% of top apps automatically adaptive in landscape.
Tools & Resources: Leverage new/improved Android Studio tools, including dedicated layout libraries (e.g., SlidingPaneLayout, ActivityEmbedding), enhanced emulators, design guidelines, and Window Size Classes for streamlined layout adaptation.
Material 3 Expressive is now available, offering new capabilities to build more premium, engaging, and expressive UIs.
Key updates include:
Enhanced Components: Flexible app bars, buttons with shape-morphing motion, a new FAB Menu (“speed dial”), new loading and progress indicators, and revamped menu/list/slider components.
Adaptive UI: New Adaptive Navigation Bar and Rail seamlessly adapt to various window sizes and form factors, including foldables.
Style Enhancements: An expanded shape library (35 unique shapes), a physics-based motion system, richer dynamic colors, and emphasized typography with variable font support.
Crucially, these new features are available today for both Jetpack Compose and Android Views, ensuring bidirectional compatibility with existing Material 3 implementations. The update aims to improve clarity, usability, and user delight, as validated by extensive user research.
Androidify has been re-released as an open-source app, built with Jetpack Compose. It offers you a practical example of integrating Firebase AI Logic SDK, Gemini, and a fine-tuned Imagen model for AI features like image validation, captioning, and bot generation.
Key takeaways include:
Using ML Kit Subject Segmentation for features like sticker creation.
Implementing modern UI/UX with SharedTransitionLayout for smooth transitions.
Integrating predictive back support.
Building a fully adaptive UI for phones, tablets, foldables, and Chromebooks from a single codebase.
Android apps using Jetpack Compose will benefit from significant performance improvements, including 21% faster Time to First Frame and a 76% reduction in jank.
New resources are available for TV and car app development:
TV: Leverage the dedicated Compose for TV library, with design guidance emphasizing clear focus indicators and a focus management codelab.
Cars: A new “Design for cars” guide differentiates Android Auto and Automotive OS, outlines driving restrictions, and defines app quality tiers (including making existing apps “Car ready” via a Google Play opt-in).
Testing is also enhanced with new Android Automotive OS emulators, an early access program for Firebase Test Lab offering direct device access, and an AAOS image for the Pixel Tablet. Google champions adaptive app development with Compose for extensive code reuse across all Android form factors.
Google Play’s October 2025 policy updates bring several key changes for developers:
Age-Restricted Content: Apps facilitating dating, gambling, or real-money games must now use the “Restrict Minor Access” feature to block minors.
Personal Loans (India): Apps must be on the Indian government’s approved digital lending list.
Health & Medical Apps: EU medical device apps require regulatory info and will get a “Medical Device” label. Other health/medical apps must include a disclaimer stating they are not medical devices.
Subscriptions: Policy clarification emphasizes clear free trial cancellation and prominent display of total charges to avoid violations.
Appeals Process: A new 180-day appeal window is being introduced for account terminations.
Compliance Deadline: January 2026 for these and other related policy updates.
The redesigned Google Play Console introduces key new features for Android developers:
Pre-launch Deep Link Testing: A new built-in emulator on the Deep links page allows developers to test deep links and visualize user experience before launch.
Enhanced Monitoring: The “Monitor and improve” section provides actionable recommendations to address issues like ANR rates and slow warm-start times.
Pocket FM significantly cut Android development time (50% for new features, 30% for existing) by integrating Gemini in Android Studio. For developers, this highlights Gemini’s practical utility in generating code (like impression tracking), resolving complex issues (e.g., Media3 errors), and streamlining SDK upgrades by identifying dependencies, enabling engineers to focus on more complex development.
Modifier.scrollIndicator: A new API to allow developers to add custom scroll indicators to scrollable containers, offering more control over the scroll UI.
Modifier.visible(): Introduced to skip drawing a Composable’s content without affecting the space it occupies in the layout. This is useful for conditional visibility when you want to maintain layout structure.
Important Deprecation:
Modifier.onFirstVisible() is now deprecated. Its behavior was often misleading (e.g., triggering on every scroll for lazy lists). Developers are advised to use Modifier.onVisibilityChanged() and manually track visibility state based on their specific use case.
Default Behavior Changes:
TextField DPAD navigation and semantic autofill are now enabled by default, removing previous configuration flags.
Advanced Layouts:
MeasuredSizeAwareModifierNode: A new, more specific interface for obtaining onRemeasured() callbacks, recommended for custom layout nodes needing only measurement-related events.
Shared Element Transitions for Scenes: Navigation3 now supports treating “scenes” (likely Compose destinations/screens) as shared element objects. This enables smooth, coordinated transitions between composables as they change, by passing a SharedTransitionScope to NavDisplay or rememberSceneState.
KMP Web Support: Introduces experimental Kotlin Multiplatform Web support for DataStore, leveraging the browser’s sessionStorage API for temporary data persistence within a single browser tab.
Addresses issues with the refresh icon’s retraction and position reset, ensuring it behaves correctly after being shown and hidden.
Corrects requestDisallowInterceptTouchEvent(boolean) behavior, now honoring the request like other ViewGroups (though developers can opt out of this new behavior if necessary).
Adaptive UI Helpers: Adds helper methods to construct WindowSizeClassSets in a grid format, simplifying the creation of responsive layouts for different screen sizes and folding states.
Other Noteworthy Releases
androidx.webgpu:webgpu:1.0.0-alpha01: Initial alpha release of a new library bringing WebGPU capabilities to Android applications. This is a developer preview aimed at specialized graphics use cases.
androidx.xr.glimmer:glimmer:1.0.0-alpha01: Initial alpha release of Jetpack Glimmer, a new design language and UI component library specifically for building Android XR (Extended Reality) experiences.
Compose Animation (1.11.0-alpha01): Includes a bug fix ensuring position is acquired for shared elements only when SharedTransitionLayout is attached.
Compose Runtime (1.11.0-alpha01): Minor API change with RetainedValuesStore.getExitedValueOrDefault renamed to consumeExitedValueOrDefault, and the experimental concurrent recomposition API has been removed.
See you all in the new year for more updates from the Android developer ecosystem!
Now In Android #123 was originally published in Android Developers on Medium.
In Part 1 we built a clean networking layer using Swift’s modern concurrency and URLSession. The example endpoints were public and didn’t require authentication. Real-world apps, however, usually require you to authenticate users and attach short-lived access tokens to every request. These tokens eventually expire, and we need a way to obtain a new access token without forcing the user to log in again. In this part we’ll walk through how to implement a secure and robust token handling mechanism on top of the networking client from Part 1.
Understanding access and refresh tokens
Access tokens grant access to protected APIs. They are typically short‑lived (minutes to hours) so that an attacker only has a limited window if the token is compromised.
Refresh tokens are credentials that allow the client to request a new access token. Because they can be used to mint new access tokens, they need to be protected as if they were user passwords.
Token model
import Foundation
/// A container for access/refresh tokens and their expiration date.
/// Conforms to Codable so it can be encoded to and decoded from JSON.
struct TokenBundle: Codable {
    let accessToken: String
    let refreshToken: String
    let expiresAt: Date

    /// Returns true if the access token is expired.
    var isExpired: Bool {
        return expiresAt <= Date()
    }
}
This struct mirrors the JSON payload returned by your authentication server (e.g. { "access_token":…, "refresh_token":…, "expires_at":… }). isExpired is a computed property that helps us decide when to refresh. If your server uses snake_case keys like these, remember to map them to the camelCase property names, for example with a JSONDecoder keyDecodingStrategy of .convertFromSnakeCase or explicit CodingKeys.
Secure persistence - Keychain
Let's create a helper for storing and retrieving tokens:
import Foundation
import Security
/// A simple helper for storing and retrieving data from the Keychain.
/// This stores and retrieves a single `TokenBundle` under a fixed key.
/// You can generalize it if you need to store more items.
enum KeychainService {
    /// Errors thrown by keychain operations.
    enum KeychainError: Error {
        case unhandled(status: OSStatus)
    }

    /// Change this to your app's bundle identifier to avoid key collisions.
    private static let service = "pro.mobile.dev.ModernNetworking"
    private static let account = "authTokens"

    /// Saves the given token bundle to the keychain, overwriting any existing value.
    static func save(_ tokens: TokenBundle) throws {
        let data = try JSONEncoder().encode(tokens)
        // Remove any existing entry.
        try? delete()
        // Add the new entry.
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecValueData as String: data
        ]
        let status = SecItemAdd(query as CFDictionary, nil)
        guard status == errSecSuccess else {
            throw KeychainError.unhandled(status: status)
        }
    }

    /// Loads the token bundle from the keychain, or returns nil if no entry exists.
    static func load() throws -> TokenBundle? {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne
        ]
        var item: AnyObject?
        let status = SecItemCopyMatching(query as CFDictionary, &item)
        if status == errSecItemNotFound {
            return nil
        }
        guard status == errSecSuccess, let data = item as? Data else {
            throw KeychainError.unhandled(status: status)
        }
        return try JSONDecoder().decode(TokenBundle.self, from: data)
    }

    /// Removes the token bundle from the keychain.
    static func delete() throws {
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account
        ]
        let status = SecItemDelete(query as CFDictionary)
        guard status == errSecSuccess || status == errSecItemNotFound else {
            throw KeychainError.unhandled(status: status)
        }
    }
}
This helper encodes a TokenBundle to Data, stores it under a single key in the Keychain, and decodes it back to a TokenBundle when needed. It also includes a delete() method to clear the stored tokens when the user logs out.
Concurrency‑safe token management
When multiple network requests need a valid token at the same time, we must avoid kicking off more than one refresh concurrently. Using a Swift actor serializes access to the token state, and sharing a single in-flight refresh task ensures only one refresh call happens at a time.
import Foundation
/// Manages access and refresh tokens.
/// Uses an actor to serialize token access and refresh operations safely.
actor AuthManager {
    /// The currently running refresh task, if any.
    private var refreshTask: Task<TokenBundle, Error>?

    /// Cached token bundle loaded from the keychain.
    private var currentTokens: TokenBundle?

    init() {
        // Load any persisted tokens at initialization.
        currentTokens = try? KeychainService.load()
    }

    /// Returns a valid token bundle, refreshing if necessary.
    /// Throws if no tokens are available or if refresh fails.
    func validTokenBundle() async throws -> TokenBundle {
        // If a refresh is already in progress, await its result.
        if let task = refreshTask {
            return try await task.value
        }
        // No stored tokens means the user must log in.
        guard let tokens = currentTokens else {
            throw AuthError.noCredentials
        }
        // If not expired, return immediately.
        if !tokens.isExpired {
            return tokens
        }
        // Otherwise refresh.
        return try await refreshTokens()
    }

    /// Forces a refresh of the tokens regardless of expiration status.
    func refreshTokens() async throws -> TokenBundle {
        // If a refresh is already happening, await it.
        if let task = refreshTask {
            return try await task.value
        }
        // Ensure we have a refresh token.
        guard let tokens = currentTokens else {
            throw AuthError.noCredentials
        }
        // Create a new task to perform the refresh.
        let task = Task { () throws -> TokenBundle in
            defer { refreshTask = nil }
            // Build a request to your auth server's token endpoint.
            // Replace api.example.com and path with your actual auth server and endpoint.
            var components = URLComponents()
            components.scheme = "https"
            components.host = "api.example.com" // change to your auth server
            components.path = "/oauth/token"
            var request = URLRequest(url: components.url!)
            request.httpMethod = "POST"
            request.setValue("application/json", forHTTPHeaderField: "Content-Type")
            let body: [String: String] = ["refresh_token": tokens.refreshToken]
            request.httpBody = try JSONEncoder().encode(body)
            // Perform the network call.
            let (data, response) = try await URLSession.shared.data(for: request)
            guard let httpResponse = response as? HTTPURLResponse,
                  (200..<300).contains(httpResponse.statusCode) else {
                throw AuthError.invalidCredentials
            }
            // Decode the new tokens.
            let newTokens = try JSONDecoder().decode(TokenBundle.self, from: data)
            // Persist and cache the tokens.
            try KeychainService.save(newTokens)
            currentTokens = newTokens
            return newTokens
        }
        // Store the in-flight refresh task so concurrent callers reuse it.
        refreshTask = task
        return try await task.value
    }

    /// Clears stored tokens from memory and the keychain.
    func clearTokens() async throws {
        currentTokens = nil
        try KeychainService.delete()
    }
}

/// Errors thrown by `AuthManager`.
enum AuthError: Error {
    /// No tokens exist; the user must log in.
    case noCredentials
    /// Refresh failed or credentials are invalid.
    case invalidCredentials
}
This actor loads any persisted tokens from the keychain at initialization. When a valid token is requested, it either returns the cached token (if it hasn’t expired) or refreshes it by calling the server’s refresh token endpoint. The refresh process is protected by a single Task: if multiple calls to validTokenBundle() happen concurrently, they will all await the same refresh task. If no tokens are stored or the refresh fails, AuthManager throws an AuthError we can react to, for example by logging the user out.
Adding auth token to endpoints that require it
We will update our Endpoint enum with a computed property indicating whether an endpoint requires authentication:
var requiresAuthentication: Bool {
    switch self {
    case .secureEndpoint: return true
    default: return false
    }
}
We will add the auth token when needed inside NetworkClient’s func send<T: APIRequest>:
if request.endpoint.requiresAuthentication {
    let tokens = try await authManager.validTokenBundle()
    urlRequest.setValue("Bearer \(tokens.accessToken)", forHTTPHeaderField: "Authorization")
}
Code 10x faster. Tell Firebender to create full screens, ship features, or fix bugs - and watch it do the work for you. It's been battle-tested by the best Android teams at companies like Tinder, Adobe, and Instacart.
Azizkhuja Khujaev shares practical Android 15 (API 35) migration lessons including behavior changes and edge case UI fixes across devices and themes that emerged when targeting SDK 35.
Anmol Verma explains using config-driven Kotlin Multiplatform architecture to share ~70% of code while keeping native Jetpack Compose and SwiftUI rendering for a white-label app.
We reach out to more than 80k Android developers around the world, every week, through our email newsletter and social media channels. Advertise your Android development related service or product!
Philipp Lackner shares thoughts about the current tough job market in the tech sector and gives you a clear recommendation on how to proceed with your career.
Philipp Lackner goes into detail about how the Uber app and backend really work to stream millions of live locations from drivers and riders - while making sure the app still runs fluently.
There are now hundreds of products with Raspberry Pi, in one form or another, at their centre. This includes consumer kit that promises exciting new project features, HATs and accessories for both hobbyist and industrial users, and specialist hardware versions with a Compute Module at the heart of their DNA. The Powered by Raspberry Pi stamp of approval helps assure you that a product has been thoroughly tested and is guaranteed to work flawlessly using Raspberry Pi computers and microcontrollers.
The latest issue of Raspberry Pi Official Magazine featured half a dozen products from around the world that are helping improve things like driver and passenger safety, drone pilots’ chances of a successful landing, and marine pilots’ navigation accuracy. There are also some treats for fans of vintage computers and gaming, as well as AI photography, in the section below.
Many of us love embracing older technology to enjoy games and programming experiences from a decade or three ago. Inevitably, the storage formats of the 1980s and 1990s have long been superseded, along with the drivers written to work with them. But that doesn’t mean you can’t run older programs, of course; emulators for popular home computers are incredibly popular. BlueSCSI offers a neat way to access games, applications, and files hidden away on otherwise-obsolete external drives so that you can enjoy them all over again. This modern, open source solution replaces your old SCSI drives — including CD-ROM and magneto-optical — with a simple and reliable SD card, offering a fantastic upgrade for your classic Mac, Amiga, Atari, and more!
Any full-size Raspberry Pi computer can be used to run Candera’s CGI Studio Professional HMI (human–machine interface). Its rapid design tools are custom-made for small-to-medium-sized businesses, and include an invaluable Scene Composer and pre-built players for Linux-based devices. Certified for Raspberry Pi, CGI Studio Pro offers Python scripting with data model access, making it ideal for designing user interfaces and customer menus for any number of applications. Version 3.15, launched in spring 2025, extends the IntuitiveHMI design suite with simplified workflows, improved graphics, and added AI options — including SoundHound voice recognition — making it ideal for designers creating interfaces across automotive, medical, and other industries.
Clickdrive is a driving training system aimed at driving schools and public transport companies, who have found it invaluable in improving staff retention rates. A self-install kit with wired and wireless options, GPS, and a HD video camera, Clickdrive makes real-world training more intuitive by recording driving footage, integrating features such as bespoke instructor clips, GPS and motion sensors for location accuracy, object detection, and performance analysis. While driving games and simulators focus on overcoming obstacles and taking turns at high speed, Clickdrive records routes driven for self-improvement rather than fun, using customisable training programmes. The Singapore-based company has a roster of satisfied clients, including the city’s own SBS Transit authority and other public transport companies. The Clickdrive PRO system provides 360-degree video feedback alongside objective driving telemetry analysis, so drivers can receive individual post-drive reviews and tailored improvement advice.
Flying machines have long caught the imagination of amateur pilots, so when drones arrived on the scene, their success was little surprise. If you’re anything like us, though, the joy of seeing your craft aloft is tinged with anxiety about the seemingly inevitable sudden descent back to earth and the potential curtailment of your new hobby. Landmark specialises in helping PX4 and ArduPilot drone and model aircraft pilots achieve precision landings time after time. (OK, the clue’s in the company name.) Promising centimetre-level landing accuracy, the system works in various lighting conditions, including direct sunlight and at night (with target illumination). The landing module attaches to your Raspberry Pi via a single cable, while a ground station such as Mission Planner or QGroundControl is used for all configuration.
Raspberry Pi Compute Modules, with their industrial-grade specifications, are becoming an increasingly popular choice for marine applications. Finland’s Hat Labs is a long-established open source and open hardware marine specialist. As well as being a keen sailor, founder Matti is an IoT veteran with many years’ experience with CAN bus and NMEA 2000 products. The Helsinki-based firm’s HALPI2 is a marine plotting platform based around Compute Module 5 and an ITX motherboard in a custom-designed, pre-built, fully functional Raspberry Pi boat computer, protected within a waterproof and ruggedised case. HALPI2 plots and tracks routes and acts as a data acquisition and visualisation device, providing a large degree of boat automation and control.
EDATEC makes robust hardware based on open source principles, using powerful equipment such as Raspberry Pi. Emerging from the management team at industrial supplier Farnell in 2017, EDATEC was among the very first to recognise Raspberry Pi’s potential as a modular industrial platform — and one of the first to gain Powered by Raspberry Pi accreditation. The 12MP ED-AIC3100 uses Compute Module 5, with its 64-bit SoC platform, to power and control a quad-core AI camera with a 12mm autofocus liquid lens and a C-Mount lens. The 3100-series camera is protected by a bright blue IP65 shockproof metal case rated for operation between 0°C and 45°C, and has a mounting bracket to absorb vibrations. Running 64-bit Raspberry Pi OS, the AI camera weighs just 400g and can be triggered remotely or with a single button press, acquiring and processing images at 70 frames per second before efficiently making sense of their contents.
Apply to Powered by Raspberry Pi
Our Powered by Raspberry Pi logo shows customers that your product is powered by our high‑quality computers and microcontrollers. All products licensed under Powered by Raspberry Pi are eligible to appear in our online gallery.
I Built a Game Engine from Scratch in C++ (Here's What I Learned)
I crashed my GPU 47 times before I saw my first triangle on screen.
For 3 months, I built a game engine from scratch in C++ using DirectX 9 and Win32—no Unity, no Unreal, no middleware. Just me, the Windows API, and a lot of segmentation faults.
This is the story of how building a simple Breakout clone taught me more about game development, graphics programming, and software architecture than years of using Unity ever did.
Why Build an Engine?
For years, I built games in Unity. I'd drag and drop GameObjects, attach scripts, hit Play, and watch my game come to life. It was magical—until it wasn't.
Questions started nagging at me:
How does Unity actually render my sprites? What's happening between GameObject.transform.position = newPos and pixels on screen?
Why do people complain about Unreal's performance? If it's "optimized," why do developers still struggle?
Why was Kerbal Space Program's physics so buggy? It's Unity—doesn't Unity handle physics automatically?
I realized I was using powerful tools without understanding what they were doing under the hood. I was a chef using a microwave, not knowing how heat actually cooks food.
Then my university professor gave us an assignment: Build a low-level game engine in C++.
No Unity. No libraries. Just C++, DirectX 9, and the Win32 API.
This was my chance to peek behind the curtain.
What I Built: Breakout, But From Scratch
If you've never played Breakout: you control a paddle at the bottom of the screen, bouncing a ball to destroy bricks at the top. Simple concept, complex implementation.
My engine features:
Custom rendering pipeline using DirectX 9
Fixed-timestep game loop (60 FPS target)
AABB and swept collision detection (no tunneling!)
State management system (Menu → Level 1 → Level 2 → Level 3 → End Game)
Key insight: The game logic never touches DirectX directly
InputManager (User Input)
Polls keyboard state using DirectInput
Abstracts raw input into game-meaningful queries: IsKeyDown(DIK_LEFT)
Why? Game code doesn't care about DirectInput; it just wants "left" or "right" (there's a minimal sketch of this abstraction after this overview)
PhysicsManager (Collision & Movement)
AABB collision detection
Swept AABB for fast-moving objects (prevents tunneling)
Collision resolution with restitution
Lesson learned: Separate detection from resolution (I didn't know this at first!)
SoundManager (Audio)
Loads and plays sound effects
Handles background music with looping
Volume control
IGameState (State Pattern)
Interface for all game states: Menu, Level1, Level2, GameOver, YouWin
Each state implements: OnEnter(), Update(), Render(), OnExit()
This was my "aha!" moment—more on this below
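The post doesn't show the InputManager internals, so here's a minimal sketch of the DirectInput polling pattern it describes. The class name and the IsKeyDown(DIK_LEFT) query come from the overview above; the member names and the DirectInput 8 setup are illustrative assumptions, not the author's code.

```cpp
// Illustrative sketch only -- the original InputManager code isn't shown in the post.
class InputManager {
    IDirectInputDevice8* m_keyboard = nullptr;  // Assumed: created elsewhere via DirectInput8Create + CreateDevice
    BYTE m_keys[256] = {};

public:
    void Update() {
        // Poll the raw keyboard state once per frame
        if (FAILED(m_keyboard->GetDeviceState(sizeof(m_keys), m_keys))) {
            m_keyboard->Acquire();  // Device lost (e.g. alt-tab): reacquire and try again next frame
        }
    }

    bool IsKeyDown(int dikCode) const {
        // High bit set means the key is currently pressed
        return (m_keys[dikCode] & 0x80) != 0;
    }
};

// Game code never touches DirectInput directly; it just asks:
//   if (inputManager.IsKeyDown(DIK_LEFT)) { /* move the paddle left */ }
```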
The Game Loop
```cpp
while (window.ProcessMessages()) {
    // 1. Calculate delta time (frame-independent movement)
    float dt = CalculateDeltaTime();

    // 2. Update input state
    inputManager.Update();

    // 3. Update current game state (Menu, Level, GameOver, etc.)
    gameState->Update(dt, inputManager, physicsManager, soundManager);

    // 4. Render everything
    renderer.BeginFrame();
    gameState->Render(renderer);
    renderer.EndFrame();
}
```
Why this structure?
Modularity: Each system has one job
Testability: Can test physics without rendering
Maintainability: Bug in rendering? Only look in Renderer class
Scalability: Adding a new game state? Just implement IGameState
The Rendering Pipeline: From Nothing to Pixels
DirectX 9 has a reputation: it's old (released 2002), verbose, and unforgiving. But that's precisely why it's perfect for learning—you have to understand every step.
Initialization: Setting Up DirectX 9
Getting a window to show anything requires five major steps:
1. Create the Direct3D9 Interface
```cpp
IDirect3D9* m_direct3D9 = Direct3DCreate9(D3D_SDK_VERSION);
if (!m_direct3D9) {
    // Failed to create -- probably missing DirectX runtime
    return false;
}
```
This creates the main Direct3D object. Think of it as "connecting to the graphics driver."
2. Query the Display Mode
We need to know: What resolution? What color format? This tells us what the monitor supports.
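The snippet for this step isn't preserved here; a minimal sketch of the usual call, assuming the same m_direct3D9 member as above, would be:

```cpp
// Sketch: query the desktop's current display mode so the backbuffer can match it.
D3DDISPLAYMODE displayMode = {};
if (FAILED(m_direct3D9->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &displayMode))) {
    return false;  // Can't talk to the adapter
}
// displayMode.Width, displayMode.Height, and displayMode.Format now describe the monitor.
```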
3. Configure the Presentation Parameters
```cpp
D3DPRESENT_PARAMETERS m_d3dPP = {};
m_d3dPP.Windowed = TRUE;                      // Windowed mode (not fullscreen)
m_d3dPP.BackBufferWidth = width;              // 800 pixels
m_d3dPP.BackBufferHeight = height;            // 600 pixels
m_d3dPP.BackBufferFormat = D3DFMT_UNKNOWN;    // Match desktop format
m_d3dPP.BackBufferCount = 1;                  // Double buffering
m_d3dPP.SwapEffect = D3DSWAPEFFECT_DISCARD;   // Throw away old frames
m_d3dPP.EnableAutoDepthStencil = TRUE;        // We need depth testing
m_d3dPP.AutoDepthStencilFormat = D3DFMT_D16;  // 16-bit depth buffer
```
This is where modern APIs (Vulkan, DX12) get even MORE complex. You're essentially telling the GPU: "Here's how I want my window's backbuffer configured."
4. Create the Device
```cpp
HRESULT hr = m_direct3D9->CreateDevice(
    D3DADAPTER_DEFAULT,                   // Use default GPU
    D3DDEVTYPE_HAL,                       // Hardware acceleration
    hWnd,                                 // Window handle
    D3DCREATE_HARDWARE_VERTEXPROCESSING,  // Use GPU for vertex math
    &m_d3dPP,
    &m_d3dDevice);
```
This is where I crashed 47 times. Wrong parameters? Crash. Unsupported format? Crash. Missing depth buffer? Crash.
Fallback strategy: If hardware vertex processing fails (older GPUs), fall back to software:
```cpp
if (FAILED(hr)) {
    // Try again with CPU-based vertex processing
    hr = m_direct3D9->CreateDevice(..., D3DCREATE_SOFTWARE_VERTEXPROCESSING, ...);
}
```
DirectX 9's ID3DXSprite is a helper for 2D games. It batches sprite draws and handles transformations.
Rendering Each Frame
Once initialized, every frame follows this pattern:
```cpp
void Renderer::BeginFrame() {
    // Clear the screen to black
    m_d3dDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                       D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
    m_d3dDevice->BeginScene();                    // Start recording draw calls
    m_spriteBrush->Begin(D3DXSPRITE_ALPHABLEND);  // Enable alpha blending for sprites
}

void Renderer::DrawSprite(const SpriteInstance& sprite) {
    // Apply transformations (position, rotation, scale)
    D3DXMATRIX transform = CalculateTransform(sprite);
    m_spriteBrush->SetTransform(&transform);

    // Draw the texture
    m_spriteBrush->Draw(sprite.texture, &sourceRect, nullptr, nullptr, sprite.color);
}

void Renderer::EndFrame() {
    m_spriteBrush->End();       // Finish sprite batch
    m_d3dDevice->EndScene();    // Stop recording
    m_d3dDevice->Present(...);  // Flip backbuffer to screen (VSYNC happens here)
}
```
Key Concept: Double Buffering
We draw to a "backbuffer" (off-screen), then Present() swaps it with the screen's front buffer. This prevents tearing (seeing half-drawn frames).
Performance Note: Each DrawSprite() call is relatively expensive. In a real engine, you'd batch hundreds of sprites into fewer draw calls. For Breakout (~50 bricks max), it doesn't matter.
Challenges & Solutions: Where I Failed (And What I Learned)
Challenge 1: Architecture Disaster (Week 3)
The Problem:
I made the classic beginner mistake: I started coding without designing.
My first attempt looked like this:
```cpp
class Game {
    Renderer renderer;
    InputManager input;

    // OH NO -- game logic mixed into the Game class!
    Paddle paddle;
    Ball ball;
    Brick bricks[50];

    void Update() {
        // Handle input
        if (input.IsKeyDown(LEFT)) paddle.x -= 5;

        // Update physics
        ball.x += ball.velocityX;

        // Check collisions
        for (auto& brick : bricks) {
            if (CollidesWith(ball, brick)) {
                brick.alive = false;
            }
        }

        // ...300 more lines of spaghetti code
    }
};
```
This worked fine—until I needed to add a menu screen.
Suddenly I realized: How do I switch between Menu and Level1?
My code had no concept of "states." Everything was hardcoded into one giant Update() function. Adding a menu meant:
Wrapping everything in if (currentState == PLAYING)
Duplicating input handling for menu vs. gameplay
Managing which objects exist when
It was a mess. I was 2 weeks in and facing a complete rewrite.
The Solution: State Pattern
I asked my lecturer (and ChatGPT) for advice. The answer: State Pattern.
```cpp
// Interface that all game states implement
class IGameState {
public:
    virtual void OnEnter(GameServices& services) = 0;
    virtual void Update(float dt, ...) = 0;
    virtual void Render(Renderer& renderer) = 0;
    virtual void OnExit(GameServices& services) = 0;
};
```
Now each screen is its own class:
```cpp
class MenuState     : public IGameState { /* menu logic */ };
class Level1        : public IGameState { /* level 1 logic */ };
class GameOverState : public IGameState { /* game over logic */ };
```
The Game class just delegates to the current state:
```cpp
class Game {
    std::unique_ptr<IGameState> currentState;

    void Update(float dt) {
        currentState->Update(dt, ...);  // Let the state handle it
    }

    void ChangeState(std::unique_ptr<IGameState> newState) {
        if (currentState) currentState->OnExit(...);
        currentState = std::move(newState);
        if (currentState) currentState->OnEnter(...);
    }
};
```
What I Learned:
Design before code (ESPECIALLY for 1,000+ line projects)
Separation of concerns makes code flexible
Refactoring hurts, but teaches more than getting it right the first time
The State Pattern is everywhere—React components, game engines, even operating systems use it. This lesson alone was worth the 3 months.
Challenge 2: The Ball Goes Through Bricks (Tunneling)
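The first version used a plain discrete overlap test. The original snippet isn't reproduced here, but the standard AABB check it describes looks roughly like this (the field names are assumptions):

```cpp
// Sketch of a discrete AABB overlap test: true if the two rectangles intersect right now.
bool Overlaps(const Ball& ball, const Brick& brick) {
    return ball.x < brick.x + brick.width  &&
           ball.x + ball.width  > brick.x  &&
           ball.y < brick.y + brick.height &&
           ball.y + ball.height > brick.y;
}
```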
This worked at 60 FPS... until the ball moved too fast.
At high speeds, the ball would tunnel—pass completely through a brick between frames:
Frame 1: Ball is here →        [brick]
Frame 2:               [brick]        → Ball is here (it skipped the brick entirely!)
The ball moved 50 pixels, but the brick was only 32 pixels wide. By the next frame, the ball was already past the brick, so the overlap check returned false.
First Failed Solution: Smaller Time Steps
I tried updating physics 120 times per second instead of 60. This helped but didn't solve it—at very high velocities, tunneling still occurred.
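For reference, sub-stepping just splits one frame's dt across several physics updates; a quick sketch (the helper name is hypothetical):

```cpp
// Hypothetical sub-stepping: run the physics update twice per frame with half the dt.
const int kSubSteps = 2;                // a 60 FPS frame becomes two 120 Hz physics steps
float subDt = dt / kSubSteps;
for (int i = 0; i < kSubSteps; ++i) {
    UpdatePhysicsAndCollisions(subDt);  // stands in for the real physics/collision update
}
// Smaller steps shrink the gap the ball can jump per step, but a fast enough ball still tunnels.
```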
The Real Solution: Swept AABB
I needed continuous collision detection—checking not just "are they overlapping now?" but "will they overlap at any point during this frame's movement?"
This is called swept AABB (or ray-swept box). Instead of checking the ball's current position, I treat the ball's movement as a ray:
```cpp
bool SweepAABB(Vector3 ballPos, Vector2 ballSize,
               Vector3 displacement,   // Where the ball will move this frame
               Vector3 brickPos, Vector2 brickSize,
               float& timeOfImpact,    // When in [0,1] does collision happen?
               Vector3& hitNormal)     // Which side did we hit?
{
    // Calculate when the ball's edges cross the brick's edges
    float xEntryTime = ...;  // Math for X-axis entry
    float yEntryTime = ...;  // Math for Y-axis entry

    float overallEntry = max(xEntryTime, yEntryTime);

    if (overallEntry < 0 || overallEntry > 1) {
        return false;  // No collision this frame
    }

    timeOfImpact = overallEntry;
    return true;
}
```
Now my collision loop looks like:
```cpp
Vector3 displacement = ball.velocity * dt;
float toi;
Vector3 normal;

if (SweepAABB(ball, displacement, brick, toi, normal)) {
    // Move the ball to exactly the collision point
    ball.position += displacement * toi;

    // Bounce
    if (normal.x != 0) ball.velocity.x = -ball.velocity.x;
    if (normal.y != 0) ball.velocity.y = -ball.velocity.y;

    brick.alive = false;
}
```
Result: No more tunneling, even at 1000 pixels/second.
What I Learned:
Discrete collision detection (overlap checks) fails at high speeds
Continuous collision detection (swept/ray-based) is essential for fast-moving objects
This is why bullets in games use raycasts, not overlap checks
The math was painful (lots of min/max comparisons), but understanding this concept changed how I think about physics in games.
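For the curious, the per-axis entry-time math elided in the SweepAABB sketch above is the standard swept-AABB calculation. This is the usual form (X axis shown; Y is identical) and may differ in detail from the original implementation:

```cpp
// Sketch of the standard swept-AABB entry/exit distances for the X axis.
float xInvEntry, xInvExit;
if (displacement.x > 0.0f) {
    xInvEntry = brickPos.x - (ballPos.x + ballSize.x);   // gap to the brick's near edge
    xInvExit  = (brickPos.x + brickSize.x) - ballPos.x;  // gap to the brick's far edge
} else {
    xInvEntry = (brickPos.x + brickSize.x) - ballPos.x;
    xInvExit  = brickPos.x - (ballPos.x + ballSize.x);
}

// Convert distances into times as a fraction of this frame's movement.
float xEntryTime, xExitTime;
if (displacement.x == 0.0f) {
    xEntryTime = -std::numeric_limits<float>::infinity();  // requires <limits>
    xExitTime  =  std::numeric_limits<float>::infinity();
} else {
    xEntryTime = xInvEntry / displacement.x;
    xExitTime  = xInvExit  / displacement.x;
}
// Repeat for Y, then take overallEntry = max(xEntryTime, yEntryTime), as in the function above.
```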
If you're thinking about building your own engine:
Do it. Not to replace Unity, but to understand Unity.
You'll struggle. You'll debug cryptic errors at 2 AM. You'll question why you didn't just use Godot.
But when you finally see that ball bounce for the first time—compiled from YOUR code, rendered by YOUR pipeline, colliding with YOUR physics—you'll understand why people say "reinventing the wheel is the best way to learn how wheels work."
Questions? Suggestions? Let me know in the comments! 👇