In 2025, the key investment areas for Azure CLI and Azure PowerShell are quality and security. We have also made significant efforts to improve the overall user experience. Meanwhile, AI remains a central theme.
At Microsoft Ignite 2025, we are pleased to announce several new features related to these priorities:
Security: MFA enforcement and claims-challenge handling
Azure CLI upgrade and Python 3.13 compatibility notes
New feature: What-If and Export Bicep parameters for Azure CLI and Azure PowerShell
Extending our coverage
We’ve rolled out significant updates across Azure CLI and Azure PowerShell to enhance functionality:
Modules now generally available: DeviceRegistry, DataMigration, FirmwareAnalysis, LoadTesting, StorageDiscovery, DataTransfer, ArizeAI, Fabric, StorageAction, Oracle
Azure CLI Upgrade and Python 3.13 Compatibility Notes
Azure CLI has been upgraded from version 2.76 to 2.77, primarily to address several security vulnerabilities (CVEs), including remote code execution risks and certificate validation flaws in underlying dependencies. The upgrade keeps the tool compliant with the latest security standards.
This upgrade moves the bundled Python from 3.12 to 3.13, which introduces a significant change: Python 3.13 enforces stricter SSL certificate verification rules, causing failures for users running behind proxies that intercept HTTPS traffic. The solution is to update your proxy certificate to comply with strict mode. For instance, Mitmproxy fixed this in version v10.1.2 (reference: https://github.com/Azure/azure-cli/issues/32083#issuecomment-3274196488).
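To check whether your Python build enforces the stricter rules, you can inspect the default SSL context. This is a small diagnostic sketch; Python 3.13 sets the VERIFY_X509_STRICT flag on contexts created by ssl.create_default_context():

```python
import ssl
import sys

# Python 3.13 turns on VERIFY_X509_STRICT for default SSL contexts,
# which rejects proxy-minted certificates that violate RFC 5280.
ctx = ssl.create_default_context()
strict = bool(ctx.verify_flags & ssl.VERIFY_X509_STRICT)
print(f"Python {sys.version_info.major}.{sys.version_info.minor}: "
      f"strict certificate verification is {'on' if strict else 'off'}")
```

If this reports strict verification is on and your traffic passes through an intercepting proxy, make sure the proxy issues RFC 5280-conformant certificates.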
Handling Claims Challenges for MFA in Azure CLI and Azure PowerShell
Claims challenges appear when ARM begins enforcing MFA requirements. If a user performs create, update, or delete operations without the necessary MFA claims, ARM rejects the request and returns a claims challenge, indicating that higher-level authentication is required before the API call can proceed. This mechanism is designed to ensure sensitive operations are performed only by users who have completed MFA.
The challenge arises because Azure CLI and Azure PowerShell can only acquire MFA claims during the login phase, and only if the user’s account is configured to require MFA. Changing this setting affects all services associated with the account, and many customers are reluctant to enable MFA at the account level. As a result, when a claims challenge occurs, Azure CLI and Azure PowerShell cannot automatically trigger MFA in the same way Azure Portal does.
Azure CLI example:
az login --tenant "aaaabbbb-0000-cccc-1111-dddd2222eeee" --scope "https://management.core.windows.net//.default" --claims-challenge "<claims-challenge-token>"
Updated ARM API version for cloud endpoint discovery in Azure CLI
With this update, Azure CLI uses the latest ARM API version (2022-09-01) for endpoint discovery during cloud registration and updates, replacing the older API versions previously used. This simplifies the configuration of custom Azure clouds, improves reliability when retrieving required endpoints, and keeps Azure CLI aligned with the latest Azure platform capabilities. As a result, users get more accurate endpoint discovery and better support for new Azure services and service endpoints as they become available.
Azure PowerShell - Add Pagination Support for 'Invoke-AzRestMethod' via '-Paginate' parameter
Invoke-AzRestMethod is a flexible fallback for calling Azure Management APIs, returning raw HTTP responses from underlying endpoints, but it currently lacks built-in pagination, forcing users to implement custom logic when working with large datasets. Since pagination was not part of the original design, changing the default behavior could break existing scripts that depend on the current response format and nextLink handling. To address this without disruption, we plan to introduce pagination as an optional opt-in feature, enabling users to retrieve complete datasets through server-driven pagination without writing custom code while preserving the current behavior by default for full backward compatibility.
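The server-driven pagination the planned -Paginate switch would automate can be sketched as follows. This is an illustrative mock in Python, not the PowerShell implementation: the page shape follows ARM's value/nextLink convention, and the fetch function and URLs are stand-ins for real HTTP calls to the management endpoint.

```python
# Mocked ARM-style pages: each response carries a "value" list and an
# optional "nextLink" pointing at the next page.
PAGES = {
    "/vms?page=1": {"value": [1, 2], "nextLink": "/vms?page=2"},
    "/vms?page=2": {"value": [3], "nextLink": None},
}

def fetch(url):
    """Stand-in for an HTTP GET against the management endpoint."""
    return PAGES[url]

def get_all(url):
    """Follow nextLink until exhausted, accumulating every item."""
    items = []
    while url:
        page = fetch(url)
        items.extend(page["value"])
        url = page.get("nextLink")
    return items

print(get_all("/vms?page=1"))  # → [1, 2, 3]
```

Opting in to -Paginate would run this loop for you on the server responses; without it, callers receive each raw page and handle nextLink themselves, preserving today's behavior.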
Introducing the What-If and Export Bicep Parameters in Azure CLI and Azure PowerShell
We’re introducing two new features in both Azure CLI and Azure PowerShell: the What-If and Export Bicep parameters. The What-If parameter gives you an intelligent preview of which resources will be created, updated, or deleted before a command runs, helping you catch issues early and avoid unexpected changes. The Export Bicep parameter generates the corresponding Bicep templates to streamline your infrastructure-as-code workflows. Both features leverage AI to assist with command interpretation and template generation. If you’d like to try these capabilities in Azure CLI and Azure PowerShell, you can sign up through our form.
Please stay tuned for more updates.
Breaking Changes
The latest breaking-change guidance documents can be found at the links below. Review the migration guide to ensure your environment is ready for the newest versions of Azure CLI and Azure PowerShell.
Thank you for using the Azure command-line tools. We look forward to continuing to improve your experience. We hope you enjoy Ignite and all the great work released this week. We'd love to hear your feedback, so feel free to reach out anytime.
Picture Carl, a Data Analyst, asked to prepare a comparison of mean petal length across Iris species. Instead of manually writing SQL and charting logic, he prompts an internal GPT-5 Agent: “Compute species-level means, then visualize petal length.” Behind the scenes, his Agentic workflow has two custom tools:
sql_exec_csv – accepts raw SQL and returns CSV results from an Iris dataset.
code_exec_javascript – executes raw JavaScript inside a hardened vm sandbox, parsing the CSV and rendering iris_plot.svg.
GPT-5 emits a SQL query (not wrapped in JSON), receives CSV output, generates JavaScript code that builds a chart, and finally summarizes the workflow, all in a handful of turns. This smooth, multi-tool flow is enabled by GPT-5’s FreeForm (custom) tool calling.
What Is FreeForm?
FreeForm (custom) tool calling allows GPT-5 to issue tool invocations whose payload is arbitrary unstructured raw text - SQL queries, Python scripts, JavaScript programs, Bash, config files - without being bound to JSON arguments conforming to a defined schema.
Traditional structured function calling solves reliable argument passing but introduces friction for code-heavy and DSL-heavy workflows: you have to wrap code inside JSON strings, escape characters, then unpack and execute. Freeform tool calling eliminates this round trip - register a tool with { "type": "custom" } and GPT-5 can emit the exact text payload the tool expects, without the JSON envelope.
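The difference is easy to see in a small local simulation. The sketch below makes no live API call; the tool names and shapes are illustrative, following the { "type": "custom" } registration convention, and it contrasts the JSON round trip with the raw payload:

```python
import json

# Hypothetical tool registrations contrasting the two styles.
structured_tool = {
    "type": "function",
    "name": "run_sql",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}
custom_tool = {
    "type": "custom",
    "name": "sql_exec_csv",
    "description": "Run raw SQL against the Iris dataset and return CSV.",
}

sql = "SELECT species, AVG(petal_length) FROM iris GROUP BY species"

# Structured call: the payload must round-trip through a JSON envelope.
structured_args = json.dumps({"query": sql})      # wrap + escape
recovered = json.loads(structured_args)["query"]  # unwrap

# Custom (freeform) call: the model's output *is* the payload.
freeform_payload = sql

assert recovered == freeform_payload  # same query, minus the envelope
```

The executor receives the identical string either way; freeform simply removes the wrap/escape/unwrap steps in between.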
Why does this matter?
No schema friction: The model speaks in the tool's native language allowing you to run what it produces, rather than parsing JSON to recover string values.
Improved intermediate reasoning: Freeform output lets GPT-5 interleave natural-language commentary, raw code, and tool calls in a single response.
Multi-step chaining: Each tool output re-enters the conversation as plain text, and GPT-5 can reason on it directly.
| Dimension | Structured Function Tools (JSON Schema) | FreeForm Custom Tools |
| --- | --- | --- |
| Payload shape | JSON envelope, e.g. { "name": "fn", "arguments": {...} } | Raw text (code/query/script) |
| Validation | Automatic via schema (types, required fields) | Semantic validation implemented in the executor |
| Parsing overhead | JSON parsing + argument mapping | Minimal (string pass-through) |
| Ease of tool evolution | Schema changes require code and prompt updates | Only tool/prompt descriptions need updating |
| Readability in logs (observability) | Nested JSON obscures code | Natural text |
| Chaining complexity | Each step returns JSON that must be parsed | Raw output feeds directly back for the model to reason over |
| Typical errors | Malformed JSON or missing required fields | Runtime execution errors |
| Best fit | Strict argument validation (schema enforces shape and type, e.g. deterministic APIs); downstream systems that auto-ingest structured data | Payloads that are primarily executable code, where JSON wrapping adds no validation value |
| Tool discoverability | Model matches tool name, description, and expected schema | Model matches tool name and description |
When Not to Use FreeForm
When strict validation is required (Ex. financial transaction parameters, coordinates, PII-handling flags).
For complex nested data (arrays of objects, deeply nested configs) where schema ensures shape.
For mass extraction tasks where consistent JSON accelerates downstream parsing.
Note: Depending on the scenario, you can implement a hybrid design: use structured tools for parameter selection and strict validation, then pass to custom (freeform) tools for code/query execution.
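A minimal sketch of that hybrid pattern, with hypothetical tool names and a mocked executor (no live API call):

```python
import json

def select_dataset(args_json):
    """Structured step: schema-validated, strictly checked parameters."""
    args = json.loads(args_json)
    assert args["dataset"] in {"iris"}  # strict validation on a known set
    return args["dataset"]

def sql_exec_csv(raw_sql):
    """Freeform step: receives the raw query text, no JSON envelope."""
    return f"-- executed against iris --\n{raw_sql}"

# Structured tool picks and validates the parameter...
dataset = select_dataset('{"dataset": "iris"}')
# ...then the freeform tool executes the raw payload built from it.
result = sql_exec_csv(f"SELECT COUNT(*) FROM {dataset}")
print(result.splitlines()[0])  # → -- executed against iris --
```

The structured step catches bad parameters early, while the freeform step keeps the executable payload free of JSON wrapping.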
Implementation breakdown - How Carl's Iris workflow is assembled
Pre-requisites
Azure AI Foundry project
GPT-5 model deployment - v1 API is required to access the latest model features
A link to the full sample code will be provided at the end of this blog, but here’s a minimal explanation of the core logic.
User prompt asking the model to produce SQL and JavaScript as code blocks:
Write SQL to compute mean of sepal_length, sepal_width, petal_length, petal_width grouped by species. Return a tidy CSV with species and the four means (rounded to 2 decimals).
Then write JavaScript to read that CSV string (provided as tool output), pretty-print a table, and produce a bar chart of mean petal_length by species.
Tool registry: Two custom tools are defined for the Responses API:
sql_exec_csv
Wraps an in-memory Iris dataset and returns CSV for SELECT queries, including a specialized path for group-by averages.
code_exec_javascript
Executes Node.js-compatible JavaScript inside a hardened vm context, capturing console output and rendering SVG charts via helper utilities.
Tools configuration of type custom - sql_exec_csv and code_exec_javascript
SQL Query generation and execution: First, the model generates the raw SQL query ...
GPT-5 generated raw SQL query following the prompt
... then it explicitly asks to call the sql_exec_csv tool, passing in the full SQL string as the payload.
In structured function calling, you’d expect to see JSON input like:
"arguments": "{\"query\":\"SELECT ...\"}"
but in freeform tool calling, our custom tool isn't restricted to a JSON wrapper. Instead, it executes the raw SQL and returns a tidy CSV with the mean values rounded to 2 decimals, which is then wrapped in a function_call_output item and inserted back into the conversation to feed into the context.
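A simplified version of that loop is sketched below. The executor is a stand-in for the real sql_exec_csv tool, and the item shapes (call_id, the custom tool call, the function_call_output wrapper) are illustrative of what the Responses API exchanges:

```python
def sql_exec_csv(raw_sql):
    """Mock executor: pretend to run the SQL and return a tidy CSV."""
    return ("species,petal_length\n"
            "setosa,1.46\nversicolor,4.26\nvirginica,5.55")

# Illustrative shape of the model's freeform tool call: the raw SQL
# string arrives directly as the payload, with no JSON arguments.
tool_call = {
    "type": "custom_tool_call",
    "call_id": "call_1",
    "name": "sql_exec_csv",
    "input": "SELECT species, ROUND(AVG(petal_length), 2) FROM iris GROUP BY species",
}

# Run the tool, then wrap the result so it re-enters the conversation.
csv_out = sql_exec_csv(tool_call["input"])
followup_item = {
    "type": "function_call_output",
    "call_id": tool_call["call_id"],
    "output": csv_out,
}
print(followup_item["output"].splitlines()[0])  # → species,petal_length
```

The followup_item is appended to the conversation input, so the next model turn can reason over the CSV as plain text.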
sql_exec_function executes the SQL from the model and returns CSV
JavaScript code execution: GPT-5 calls the code_exec_javascript tool to parse the CSV, pretty-print a table to the console, and create and save the chart visual. The model provides full executable code as the tool argument; with no schema telling it what fields to send, it simply writes the program.
The JS code is executed using the code_exec_javascript tool
Our output is a mix of the requested result and commentary from the model
Output including a console table, commentary from GPT-5, and the location of the generated chart file
GPT-5 FreeForm tool calling elevates agentic development with less schema scaffolding, more expressive multi-step execution, and deterministic execution of model-produced payloads. Combined with Azure AI Foundry’s enterprise-grade governance stack, developers can prototype analytical workflows (like Carl's Iris example) and quickly harden them for production.
Beyond a performance- and security-proven engine, Azure SQL solutions bring dozens of other benefits, such as auto-scaling, auto-patching and maintenance, automatic backups, and built-in high availability. Free Azure SQL offers fully managed resources with zero cost and zero commitment. There are two options designed to let you start for free, test, and grow at your pace:
Free Azure SQL Database offer – Ideal for new applications or occasional/light workloads with serverless auto-scale and minimal admin work.
Free Azure SQL Managed Instance offer – Perfect for moderate SQL Server workloads, with near 100% compatibility, cross-database queries, and SQL Agent jobs.
Users can choose one of these two offers, or try both, without any cost!
Path #1 – Start, experiment, and build new applications in the cloud
The Azure SQL Database free offer is best for building new applications, developing prototypes, or running light workloads in the cloud. It lets you create up to 10 Azure SQL databases for free, with no expiration. Each database can use 100,000 vCore-seconds of compute and 32 GB of storage per month at no charge – that’s roughly 28 hours of 1-vCore CPU time, refreshed every month, per database. The databases run in the General Purpose tier and are serverless, meaning they automatically scale compute based on load and can pause when idle to save resources.
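The “roughly 28 hours” figure follows directly from the monthly grant:

```python
# Monthly free compute grant per database, converted to 1-vCore hours.
free_vcore_seconds = 100_000
hours_at_one_vcore = free_vcore_seconds / 3600  # seconds per hour
print(round(hours_at_one_vcore, 1))  # → 27.8
```

At higher vCore counts the wall-clock time shrinks proportionally (e.g. about 14 hours at a steady 2 vCores), which is why the serverless auto-pause behavior matters for stretching the grant.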
How to get started with an Azure SQL Database for free
Sign in to Azure and open the Azure SQL Database create page. If you don’t have an Azure account, create one (the Azure Free Account gives you credits and other free services, but this SQL Database free offer works with any subscription). In the Azure Portal, navigate to Azure SQL and choose Create SQL Database – or simply go to https://aka.ms/azuresqlhub and click “Try Azure SQL Database for free”. This will open the SQL Database deployment blade.
Apply the free offer. At the top of the Create Azure SQL Database form, look for a banner that says “Want to try Azure SQL Database for free?” and click the Apply offer button. (If you used the Try for free link, the offer may be applied automatically.) When applied, you should see the price summary on the right update to Estimated Cost: $0.
Fill in database details. Choose your Azure Subscription and either create or select a Resource Group for this database. Give your database a name (e.g. myFreeDB). For Server, create a new logical server (this is an Azure SQL Server that will host the DB) – provide a unique server name, set admin login and password, and choose a region. Note: All free databases under one subscription must be in the same region (the first free DB’s region becomes fixed for up to 10 free DBs), so pick a region that makes sense for you (ideally where your app runs).
Leave the defaults for compute/storage. In Compute + storage, the form will default to a serverless General Purpose database with a certain compute size. You can always scale later, so it’s fine to start with the defaults.
Set “free limit reached” behavior. In the Basics tab, after applying the offer, you’ll see a setting for Behavior when free limit reached. Choose between:
Auto-pause the database until next month – if the database runs out of free CPU or storage in a month, it will pause (become inaccessible) until the free quota resets next month. This ensures you never get billed.
Continue using database for additional charges – the database will not pause if it exceeds free limits; it will continue running and any overage will be billed at standard rates. You still get the first 100k seconds and 32 GB free each month. (Once you choose “continue with charges,” you can’t revert to auto-pause for that database later in the portal).
If you’re just testing, auto-pause is safest; if you’re building something that needs to run continuously, you might opt to continue (just monitor usage).
Key benefits of the free Database offer
Serverless auto-scaling: The free DB runs in serverless mode, which can automatically pause when idle and automatically resume on activity. This maximizes your free compute: if your app is only active part of the day, the database uses zero vCore seconds while paused.
Monthly reset, no time limit: The free allowances (100k vCore-seconds, 32 GB) renew each month for each database. And unlike some trials, this offer does not expire after several months, you can use it for the lifetime of your subscription. This “free tier” is available to any Azure subscription (new or existing, Pay-Go, CSP, Enterprise, etc.)
Scales with you (optional pay-as-you-go beyond free): If a database hits the free limits in a given month, you have a choice: either let it auto-pause until the next month (so you never incur charges), or switch to Bill overage to keep it running and simply pay for the over-limit usage. Importantly, you don’t lose your data or need to migrate—the transition from free to paid is seamless. And when your database needs more scaling headroom, more storage, or additional performance, switching to other offers in SQL DB is easy, fast, and does not require any application changes. You’ll still enjoy monthly free credits applied to it, and if you stay within the free resource limits each month, it remains completely free.
Path #2 – Already running SQL Server on-premises or in VMs?
Every application needs a database – sometimes just one, sometimes many. Maybe yours began as a small internal tool. Maybe it quietly grew into something business critical. It works. Your application knows how to talk to SQL Server. Your jobs run on schedule. Your database carries years of history, and your backup chain could probably tell the whole story.
Then one day someone asks: “Can we try running this in the cloud?”
The first reaction is often hesitation:
Will everything still work?
Will it be compatible with our SQL Server 20XX version?
Will we need to rewrite part of the app?
And if you’re already running SQL Server (Express, Standard, or Enterprise) and your application depends on features that cloud databases don’t typically support such as:
Isolated environment with dedicated, separate compute
Cross-database queries
SQL Agent jobs
Linked servers
CLR assemblies
Transparent Data Encryption rules already in place
Azure SQL Managed Instance gives you the ability to choose the SQL Server engine (2022, 2025, or always-up-to-date) delivered as a fully managed service – giving you the power of SQL Server with the simplicity of the cloud.
How to get started with SQL MI for free
It’s as easy as it can get – all you need is a free Azure account and an Azure subscription. If you already have these, simply open https://aka.ms/create-free-sqlmi – this link leads to the Azure SQL Managed Instance create page with the free offer automatically applied.
After populating the mandatory (*) fields on the create page, you will get your free SQL managed instance with built-in availability and open networking defaults. The automatically generated free instance name is customizable, as are many other options on the create page. After filling in the mandatory fields on the Basics tab, click "Review + create" and finish creation.
After about 20 minutes, you will be able to find your deployed instance in the Azure portal by using the search bar or the ‘recent resources’ list on the home page, as in the image below.
After you’ve opened the free SQL MI resource, navigate to the Networking page in Security and copy the public endpoint.
With this endpoint you can now connect to your free SQL managed instance with the tool of your choice, e.g. SSMS 22 or VSCode with MSSQL extension – Voilà! It’s that easy to create a free SQL MI and connect to it.
From here, you can use the SSMS Restore wizard or the standard T-SQL RESTORE ... FROM URL command to restore a database from a backup file, after uploading it to Azure Storage. You can use a sample database or your company’s .bak files to test your real workload on Azure SQL Managed Instance.
Key benefits of free Managed Instance offer
You can test your real-life workload for 720 vCore-hours each month with a 4-vCore or 8-vCore free SQL managed instance that comes with 64 GB of data and backup storage and support for up to 500 databases.

| Benefit | Why it matters |
| --- | --- |
| 720 vCore-hours each month | Run your 4-vCore instance for 180 hours or an 8-vCore instance for 90 hours each month. |
| Included SQL license | Hands-on enterprise SQL Server – 2022, 2025, or the evergreen engine version – free of charge. |
| Near 100% compatibility with SQL Server | You don’t have to rewrite your app or re-architect your database model (most of the time). |
| Start/stop schedule by default | A regular workday (9–5) schedule is enabled by default to ensure efficient use (running a 4-vCore instance for up to 22 days per month). |
| Restore .bak files directly | Upload your backup to Azure Storage and restore it; .bak files from SQL Server 2008 R2 onward are supported. |
| SQL Agent included | Your jobs, schedules, and routines work the way they do today. |
| Up to 500 databases | The Next-Gen General Purpose tier supports up to 500 databases. |
| 64 GB of data and backup storage | Your database is automatically backed up with 7-day retention by default, and you can store up to 64 GB of data. |
| Automatic patching and HA | You keep the SQL Server experience you know, without maintaining the OS or VMs. |
Frequently Asked Questions (FAQs)
What’s the duration of the free offer?
The free Azure SQL Managed Instance offer is available for 12 months from the day of activation for that subscription.
The free Azure SQL Database offer is available for up to 10 databases per subscription, for the lifetime of the subscription.
What happens after the free period?
The free SQL managed instance will be stopped, and you’ll have 30 days to upgrade it to a paid instance; afterwards, it will be deleted.
A free SQL database will be auto-paused until the next month unless you explicitly set the “Continue using database for additional charges” option for the “free limit reached” behavior.
What are the prices for paid Azure SQL SKUs? Regular prices for Azure SQL services can vary depending on the region, compute model, SQL license and service tier. For more precise information visit:
Whether you’re building something new or bringing an existing SQL Server workload to the cloud, Azure SQL gives you a way to start free, safely, and on your own terms. No risk, no pressure – just the same familiar SQL experience, with less overhead and more room to grow.
Posted by Dom Elliott – Group Product Manager, Google Play and Eric Lynch - Senior Product Manager, Android Security
In the mobile ecosystem, abuse can threaten your revenue, growth, and user trust. To help developers thrive, Google Play offers a resilient threat detection service, Play Integrity API. Play Integrity API helps you verify that interactions and server requests are genuine—coming from your unmodified app on a certified Android device, installed by Google Play.
The impact is significant: apps using Play integrity features see 80% lower unauthorized usage on average compared to other apps. Today, leaders across diverse categories—including Uber, TikTok, Stripe, Kabam, Wooga, Radar.com, Zimperium, Paytm, and Remini—use it to help safeguard their businesses.
We’re continuing to improve the Play Integrity API, making it easier to integrate, more resilient against sophisticated attacks, and better at recovering users who don’t meet integrity standards or encounter errors with new Play in-app remediation prompts.
Detect threats to your business
The Play Integrity API offers verdicts designed to detect specific threats that impact your bottom line during critical interactions.
Unauthorized access: The accountDetails verdict helps you determine whether the user installed or paid for your app or game on Google Play.
Code tampering: The appIntegrity verdict helps you determine whether you're interacting with your unmodified binary that Google Play recognizes.
Risky devices and emulated environments: The deviceIntegrity verdict helps you determine whether your app is running on a genuine Play Protect certified Android device or a genuine instance of Google Play Games for PC.
Unpatched devices: For devices running Android 13 and higher, MEETS_STRONG_INTEGRITY response in the deviceIntegrity verdict helps you determine if a device has applied recent security updates. You can also opt in to deviceAttributes to include the attested Android SDK version in the response.
Risky access by other apps: The appAccessRiskVerdict helps you determine whether apps are running that could be used to capture the screen, display overlays, or control the device (for example, by misusing the accessibility permission). This verdict automatically excludes apps that serve genuine accessibility purposes.
Known malware: The playProtectVerdict helps you determine whether Google Play Protect is turned on and whether it has found risky or dangerous apps installed on the device.
Hyperactivity: The recentDeviceActivity level helps you determine whether a device has made an anomalously high volume of integrity token requests recently, which could indicate automated traffic and could be a sign of attack.
Repeat abuse and reused devices: deviceRecall (beta) helps you determine whether you're interacting with a device that you've previously flagged, even if your app was reinstalled or the device was reset. With device recall, you can customize the repeat actions you want to track.
The API can be used across Android form factors including phones, tablets, foldables, Android Auto, Android TV, Android XR, ChromeOS, Wear OS, and on Google Play Games for PC.
Make the most of Play Integrity API
Apps and games have found success with the Play Integrity API by following the security considerations and taking a phased approach to their anti-abuse strategy.
Step 1: Decide what you want to protect: Decide what actions and server requests in your apps and games are important to verify and protect. For example, you could perform integrity checks when a user is launching the app, signing in, joining a multiplayer game, generating AI content, or transferring money.
Step 2: Collect integrity verdict responses: Perform integrity checks at important moments to start collecting verdict data, without enforcement initially. That way you can analyze the responses for your install base and see how they correlate with your existing abuse signals and historical abuse data.
Step 3: Decide on your enforcement strategy: Decide on your enforcement strategy based on your analysis of the responses and what you are trying to protect. For example, you could challenge or block risky traffic at important moments to protect sensitive functionality. The API offers a range of responses so you can implement a tiered enforcement strategy based on the trust level you give to each combination of responses.
Step 4: Gradually roll out enforcement and support your users: Gradually roll out enforcement. Have a retry strategy for when verdicts have issues or are unavailable, and be prepared to support good users who run into problems. The new Play in-app remediation prompts, described below, make it easier than ever to get users with issues back to a good state.
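As an illustration of a tiered strategy, here is a hypothetical server-side sketch. The field names (deviceRecognitionVerdict, appRecognitionVerdict, appLicensingVerdict) follow the documented verdict format, but the tiers and the actions attached to them are examples, not recommendations:

```python
def trust_tier(verdict):
    """Map a decoded Play Integrity payload to an illustrative trust tier."""
    device = verdict.get("deviceIntegrity", {}).get("deviceRecognitionVerdict", [])
    app = verdict.get("appIntegrity", {}).get("appRecognitionVerdict", "")
    account = verdict.get("accountDetails", {}).get("appLicensingVerdict", "")

    if ("MEETS_STRONG_INTEGRITY" in device
            and app == "PLAY_RECOGNIZED" and account == "LICENSED"):
        return "full"        # e.g. allow sensitive actions like money transfer
    if "MEETS_DEVICE_INTEGRITY" in device and app == "PLAY_RECOGNIZED":
        return "standard"    # e.g. allow normal use
    return "restricted"      # e.g. challenge, rate-limit, or deny

payload = {
    "deviceIntegrity": {"deviceRecognitionVerdict":
                        ["MEETS_DEVICE_INTEGRITY", "MEETS_STRONG_INTEGRITY"]},
    "appIntegrity": {"appRecognitionVerdict": "PLAY_RECOGNIZED"},
    "accountDetails": {"appLicensingVerdict": "LICENSED"},
}
print(trust_tier(payload))  # → full
```

In practice the payload arrives as a signed token that your server decrypts and verifies before any logic like this runs, and the thresholds should come out of the verdict analysis you did in Step 2.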
NEW: Let Play recover users with issues automatically
Deciding how to respond to different integrity signals can be complex: you need to handle various integrity responses and API error codes (like network issues or outdated Play services). We’re simplifying this with new Play in-app remediation prompts. You can show a Google Play prompt to your users to automatically fix a wide range of issues directly within your app. This reduces integration complexity, ensures a consistent user interface, and helps get more users back to a good state.
GET_INTEGRITY automatically detects the issue (in this example, a network error) and resolves it.
You can trigger the GET_INTEGRITY dialog, available in Play Integrity API library version 1.5.0+, after a range of issues, to automatically guide the user through the necessary fixes including:
Unauthorized access: GET_INTEGRITY guides the user back to a Play licensed response in accountDetails.
Code tampering: GET_INTEGRITY guides the user back to a Play recognized response in appIntegrity.
Device integrity issues: GET_INTEGRITY guides the user on how to get back to the MEETS_DEVICE_INTEGRITY state in deviceIntegrity.
Remediable error codes: GET_INTEGRITY resolves remediable API errors, such as prompting the user to fix network connectivity or update Google Play Services.
We also offer specialized dialogs including GET_STRONG_INTEGRITY (which works like GET_INTEGRITY while also getting the user back to the MEETS_STRONG_INTEGRITY state with no known malware issues in the playProtectVerdict), GET_LICENSED (which gets the user back to a Play licensed and Play recognized state), and CLOSE_UNKNOWN_ACCESS_RISK and CLOSE_ALL_ACCESS_RISK (which prompt the user to close potentially risky apps).
Choose modern integrity solutions
In addition to Play Integrity API, Google offers several other features to consider as part of your overall anti-abuse strategy. Both Play Integrity API and Play’s automatic protection offer user experience and developer benefits for safeguarding app distribution. We encourage existing apps to migrate to these modern integrity solutions instead of using the legacy Play licensing library.
Automatic protection: Prevent unauthorized access with Google Play’s automatic protection and ensure users continue getting your official app updates. Turn it on and Google Play will automatically add an installer check to your app’s code, with no developer integration work required. If your protected app is redistributed or shared through another channel, then the user will be prompted to get your app from Google Play. Eligible Play developers also have access to Play’s advanced anti-tamper protection, which uses obfuscation and runtime checks to make it harder and costlier for attackers to modify and redistribute protected apps.
Android platform key attestation: Play Integrity API is the recommended way to benefit from hardware-backed Android platform key attestation. Play Integrity API takes care of the underlying implementation across the device ecosystem, Play automatically mitigates key-related issues and outages, and you can use the API to detect other threats. Developers who directly implement key attestation instead of relying on Play Integrity API should prepare for the upcoming Android Platform root certificate rotation in February 2026 to avoid disruption (developers using Play Integrity API do not need to take any action).
Firebase App Check: Developers using Firebase can use Firebase App Check to receive an app and device integrity verdict powered by Play Integrity API on certified Android devices, along with responses from other platform attestation providers. To detect all other threats and use other Play features, integrate Play Integrity API directly.
reCAPTCHA Enterprise: Enterprise customers looking for a complete fraud and bot management solution can purchase reCAPTCHA Enterprise for mobile. reCAPTCHA Enterprise uses some of Play Integrity API’s anti-abuse signals, and combines them with reCAPTCHA signals out of the box.
Safeguard your business today
With a strong foundation in hardware-backed security and new automated remediation dialogs simplifying integration, the Play Integrity API is an essential tool for protecting your growth.
Posted by Ben Weiss – Senior Developer Relations Engineer, Breana Tate – Developer Relations Engineer, and Jossi Wolf – Software Engineer on Compose

Compose yourselves and let us guide you through more background on performance.
Welcome to day 3 of Performance Spotlight Week. Today we're continuing to share details and guidance on important areas of app performance. We're covering Profile Guided Optimization, Jetpack Compose performance improvements, and considerations for working behind the scenes. Let's dive right in.
Profile Guided Optimization

Baseline Profiles and Startup Profiles are foundational to improving an Android app's startup and runtime performance. They are part of a group of performance optimizations called Profile Guided Optimization.
When an app is packaged, the d8 dexer takes classes and methods and populates your app's classes.dex files. When a user opens the app, these dex files are loaded one after the other until the app can start. By providing a Startup Profile, you let d8 know which classes and methods to pack in the first classes.dex files. This structure allows the app to load fewer files, which in turn improves startup speed.
Baseline Profiles effectively move the Just in Time (JIT) compilation steps away from user devices and onto developer machines. The generated Ahead Of Time (AOT) compiled code has proven to reduce startup time and rendering issues alike.
Trello and Baseline Profiles

We asked engineers on the Trello app how Baseline Profiles affected their app's performance. After applying Baseline Profiles to their main user journey, Trello saw a significant 25% reduction in app startup time.

Trello was able to improve their app's startup time by 25% by using Baseline Profiles.
Across Meta's apps, the teams have seen various critical metrics improve by up to 40% after applying Baseline Profiles.

Technical improvements like these help you improve user satisfaction and business success as well. Sharing results like these with your product owners, CTOs, and decision makers can also help you make the case for further investment in your app's performance.
Get started with Baseline Profiles

To generate either a Baseline or Startup Profile, you write a macrobenchmark test that exercises the app. During the test, profile data is collected, which will be used during app compilation. The tests are written using the new UiAutomator API, which we'll cover tomorrow.

Writing a benchmark like this is straightforward, and you can see the full sample on GitHub.
@Test
fun profileGenerator() {
    rule.collect(
        packageName = TARGET_PACKAGE,
        maxIterations = 15,
        stableIterations = 3,
        includeInStartupProfile = true
    ) {
        uiAutomator {
            startApp(TARGET_PACKAGE)
        }
    }
}
Considerations

Start by writing a macrobenchmark test that generates a Baseline Profile and a Startup Profile for the path most traveled by your users. This means the main entry point your users take into your app, which usually is after they have logged in.

Then continue to write more test cases to capture a more complete picture, but only for Baseline Profiles. You do not need to cover everything with a Baseline Profile. Stick to the most used paths and measure performance in the field. More on that in tomorrow's post.
Get started with Profile Guided Optimization

To learn how Baseline Profiles work under the hood, watch this video from the Android Developers Summit:

And check out the Android Build Time episode on Profile Guided Optimization for another in-depth look:
The UI framework for Android has seen the performance investment of the engineering team pay off. As of version 1.9 of Jetpack Compose, scroll jank has dropped to 0.2% during an internal long scrolling benchmark test.

These improvements were made possible by several features packed into the most recent releases.
Customizable cache window

By default, lazy layouts only compose one item ahead of time in the direction of scrolling, and after something scrolls off screen it is discarded. You can now customize the number of items to retain through a fraction of the viewport or a dp size. This helps your app perform more work upfront and, with pausable composition enabled, use the time available between frames more efficiently.
To start using customizable cache windows, instantiate a LazyLayoutCacheWindow and pass it to your lazy list or lazy grid. Measure your app's performance using different cache window sizes, for example 50% of the viewport. The optimal value will depend on your content's structure and item size.
val dpCacheWindow = LazyLayoutCacheWindow(ahead = 150.dp, behind = 100.dp)
val state = rememberLazyListState(cacheWindow = dpCacheWindow)
LazyColumn(state = state) {
    // column contents
}
Pausable composition

This feature allows compositions to be paused and their work split up over several frames. The APIs landed in 1.9, and it is now used by default in 1.10 in lazy layout prefetch. You should see the most benefit with complex items that have longer composition times.
More Compose performance optimizations

In versions 1.9 and 1.10 of Compose, the team also made several optimizations that are a bit less obvious.

Several APIs that use coroutines under the hood have been improved. For example, when using Draggable and Clickable, developers should see faster reaction times and improved allocation counts.

Optimizations in layout rectangle tracking have improved the performance of Modifiers like onVisibilityChanged() and onLayoutRectChanged(). This speeds up the layout phase, even when not explicitly using these APIs.

Another performance improvement is using cached values when observing positions via onPlaced().
Prefetch text in the background

Starting with version 1.9, Compose adds the ability to prefetch text on a background thread. This enables you to pre-warm caches for faster text layout and is relevant for app rendering performance. During layout, text has to be passed into the Android framework, where a word cache is populated. By default this runs on the UI thread. Offloading prefetching and populating the word cache onto a background thread can speed up layout, especially for longer texts. To prefetch on a background thread, you can pass a custom executor to any composable that uses BasicText under the hood by providing a LocalBackgroundTextMeasurementExecutor through a CompositionLocalProvider like so:

val backgroundTextMeasurementExecutor = Executors.newCachedThreadPool()

CompositionLocalProvider(LocalBackgroundTextMeasurementExecutor provides backgroundTextMeasurementExecutor) {
    BasicText("Some text that should be measured on a background thread!")
}
Depending on the text, this can provide a performance boost to your text rendering. To make sure that it improves your app's rendering performance, benchmark and compare the results.
Background work performance considerations

Background work is an essential part of many apps. You may be using libraries like WorkManager or JobScheduler to perform tasks like:

Periodically uploading analytical events
Syncing data between a backend service and a database
Processing media (e.g. resizing or compressing images)

A key challenge while executing these tasks is balancing performance and power efficiency. WorkManager allows you to achieve this balance. It's designed to be power-efficient and allows work to be deferred to an optimal execution window influenced by a number of factors, including constraints you specify or constraints imposed by the system.
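As a concrete illustration of deferring work under constraints, here is a minimal sketch using the standard WorkManager APIs; the UploadWorker class and its upload logic are hypothetical.

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.CoroutineWorker
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.WorkerParameters

// Hypothetical worker that uploads queued analytics events.
class UploadWorker(context: Context, params: WorkerParameters) :
    CoroutineWorker(context, params) {
    override suspend fun doWork(): Result {
        // ... perform the upload ...
        return Result.success()
    }
}

fun scheduleUpload(context: Context) {
    // Constraints let WorkManager defer the task to a power-friendly window.
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.UNMETERED) // e.g. wait for Wi-Fi
        .setRequiresCharging(true)
        .build()

    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setConstraints(constraints)
        .build()

    WorkManager.getInstance(context).enqueue(request)
}
```

WorkManager then runs the task only once both constraints are satisfied, batching it with other deferred work where possible.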
WorkManager is not a one-size-fits-all solution, though. Android also has a number of power-optimized APIs that are designed specifically with certain common Core User Journeys (CUJs) in mind. Reference the Background Work landing page for a list of just a few of these, including updating a widget and getting location in the background.
Local debugging tools for Background Work: common scenarios

To debug background work and understand why a task may have been delayed or failed, you need visibility into how the system has scheduled your tasks.

To help with this, WorkManager has several related tools to help you debug locally and optimize performance (some of these work for JobScheduler as well). Here are some common scenarios you might encounter when using WorkManager, and an explanation of tools you can use to debug them.
Debugging why scheduled work is not executing

Scheduled work being delayed or not executing at all can be due to a number of factors, including specified constraints not being met or constraints having been imposed by the system.
The first step in investigating why scheduled work is not running is to confirm the work was successfully scheduled. After confirming the scheduling status, determine whether there are any unmet constraints or preconditions preventing the work from executing.
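Scheduling status can also be checked programmatically. A minimal sketch, assuming the work was enqueued as unique work named "sync" (the name is a placeholder):

```kotlin
import androidx.work.WorkManager

// Sketch: query the state of previously enqueued unique work.
fun logSyncWorkState(workManager: WorkManager) {
    // getWorkInfosForUniqueWork returns a ListenableFuture; get() blocks,
    // so call this off the main thread.
    val infos = workManager.getWorkInfosForUniqueWork("sync").get()
    for (info in infos) {
        // ENQUEUED means the work was scheduled but is still waiting to run.
        println("${info.id}: ${info.state}")
    }
}
```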
There are several tools for debugging this scenario.
Background Task Inspector

The Background Task Inspector is a powerful tool integrated directly into Android Studio. It provides a visual representation of all WorkManager tasks and their associated states (Running, Enqueued, Failed, Succeeded).

To debug why scheduled work is not executing with the Background Task Inspector, consult the listed work status(es). An 'Enqueued' status indicates your work was scheduled but is still waiting to run.

Benefits: Aside from providing an easy way to view all tasks, this tool is especially useful if you have chained work. The Background Task Inspector offers a graph view that can visualize whether a previous task's failure may have impacted the execution of the following task.
Background Task Inspector list view

Background Task Inspector graph view
adb shell dumpsys jobscheduler

This command returns a list of all active JobScheduler jobs (which includes WorkManager Workers) along with specified constraints and system-imposed constraints. It also returns job history.

Use this if you want a different way to view your scheduled work and associated constraints. For WorkManager versions earlier than 2.10.0, adb shell dumpsys jobscheduler will return a list of Workers with this name:

Benefits: This command is useful for understanding whether there were any system-imposed constraints, which you cannot determine with the Background Task Inspector. For example, it will return your app's standby bucket, which can affect the window in which scheduled work completes.
Enable debug logging

You can enable custom logging to see verbose WorkManager logs, which will have WM- attached.

Benefits: This allows you to gain visibility into when work is scheduled, when constraints are fulfilled, and into lifecycle events, and you can consult these logs while developing your app.
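One way to enable verbose logging is through a custom WorkManager Configuration supplied by your Application class. A minimal sketch; the Application subclass name is a placeholder, and the exact Configuration.Provider member can differ across WorkManager versions, so check the version you ship:

```kotlin
import android.app.Application
import android.util.Log
import androidx.work.Configuration

// Placeholder Application class. Note that providing a custom Configuration
// requires disabling WorkManager's default initializer in the manifest,
// as described in the WorkManager documentation.
class MyApplication : Application(), Configuration.Provider {
    override val workManagerConfiguration: Configuration
        get() = Configuration.Builder()
            .setMinimumLoggingLevel(Log.DEBUG) // emits the verbose WM- logs
            .build()
}
```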
WorkInfo.StopReason

If you notice unpredictable performance with a specific worker, you can programmatically observe the reason your worker was stopped on the previous run attempt with WorkInfo.getStopReason.

It's a good practice to configure your app to observe WorkInfo using getWorkInfoByIdFlow to identify whether your work is being affected by background restrictions, constraints, frequent timeouts, or even stopped by the user.

Benefits: You can use WorkInfo.StopReason to collect field data about your workers' performance.
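Observing the stop reason might look like the following sketch, assuming WorkManager 2.9 or later, where WorkInfo exposes stopReason:

```kotlin
import androidx.work.WorkInfo
import androidx.work.WorkManager
import java.util.UUID

// Sketch: collect the WorkInfo flow for a worker and log why its last
// run attempt was stopped.
suspend fun observeStopReason(workManager: WorkManager, workId: UUID) {
    workManager.getWorkInfoByIdFlow(workId).collect { info ->
        val reason = info?.stopReason ?: return@collect
        if (reason != WorkInfo.STOP_REASON_NOT_STOPPED) {
            // Map these codes to field metrics to spot recurring
            // restrictions, timeouts, or user-initiated stops.
            println("Worker $workId stopped, reason code: $reason")
        }
    }
}
```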
Debugging WorkManager-attributed high wake lock duration flagged by Android vitals

Android vitals features an excessive partial wake locks metric, which highlights wake locks contributing to battery drain. You may be surprised to know that WorkManager acquires wake locks to execute tasks, and if the wake locks exceed the threshold set by Google Play, this can impact your app's visibility. How can you debug why there is so much wake lock duration attributed to your work? You can use the following tools.
Perfetto

Perfetto is a tool for analyzing system traces. When using it for debugging WorkManager specifically, you can view the "Device State" section to see when your work started, how long it ran, and how it contributes to power consumption.

Under the "Device State: Jobs" track, you can see any workers that have been executed and their associated wake locks.

Device State section in Perfetto, showing CleanupWorker and BlurWorker execution.
Resources

Consult the Debug WorkManager page for an overview of the available debugging methods for other scenarios you might encounter.

And to try some of these methods hands-on and learn more about debugging WorkManager, check out the Advanced WorkManager and Testing codelab.
Next steps

Today we moved beyond code shrinking and explored how the Android Runtime and Jetpack Compose actually render your app. Whether it's pre-compiling critical paths with Baseline Profiles or smoothing out scroll states with the new Compose 1.9 and 1.10 features, these tools focus on the feel of your app. And we dove deep into best practices for debugging background work.
Ask Android

On Friday we're hosting a live AMA on performance. Ask your questions now using #AskAndroid and get them answered by the experts.
The challenge

We challenged you on Monday to enable R8. Today, we are asking you to generate one Baseline Profile for your app.

With Android Studio Otter, the Baseline Profile Generator module wizard makes this easier than ever. Pick your most critical user journey, even if it's just your app startup and login, and generate a profile.

Once you have it, run a Macrobenchmark to compare CompilationMode.None vs. CompilationMode.Partial. Share your startup time improvements on social media using #optimizationEnabled.
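The comparison can be written as a pair of macrobenchmark tests. A sketch assuming the standard Macrobenchmark APIs, with the package name as a placeholder:

```kotlin
import androidx.benchmark.macro.CompilationMode
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val rule = MacrobenchmarkRule()

    // Fully interpreted baseline: no profile applied.
    @Test fun startupNone() = measureStartup(CompilationMode.None())

    // AOT-compiles the methods covered by the Baseline Profile.
    @Test fun startupPartial() = measureStartup(CompilationMode.Partial())

    private fun measureStartup(mode: CompilationMode) {
        rule.measureRepeated(
            packageName = "com.example.app", // placeholder
            metrics = listOf(StartupTimingMetric()),
            compilationMode = mode,
            startupMode = StartupMode.COLD,
            iterations = 10
        ) {
            pressHome()
            startActivityAndWait()
        }
    }
}
```

Comparing the timeToInitialDisplay results of the two tests shows the startup win your profile delivers.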
Tune in tomorrow

You have shrunk your app with R8 and optimized your runtime with Profile Guided Optimization. But how do you prove these wins to your stakeholders? And how do you catch regressions before they hit production?

Join us tomorrow for Day 4: The Performance Leveling Guide, where we will map out exactly how to measure your success, from field data in Play Vitals to deep local tracing with Perfetto.
We had a blast seeing everyone’s kooky creations at Open Sauce this summer, and one of the interesting people we met was Ted Tagami, who told us about a dare he couldn’t turn down over a decade ago…
“In 2013, a dear friend dared me to build an advertising network using satellites in space. Being a child of the 1960s, the idea that running a space programme was possible for me was something I could not pass by. I was not interested in the advertising.”
“That daring friend became my co-founder when we launched Magnitude.io, with zero science or engineering knowledge of how to do this. Fast-forward four years, and ExoLab-1 became our first mission to the International Space Station. With one lab running in microgravity 400km above the planet, we launched with a dozen Californian schools networked with Raspberry Pi–powered, ground-based labs.”
Turning classrooms into space-faring research labs
ExoLab is an educational programme that connects students around the world with real scientific research taking place aboard the International Space Station (ISS). Students in ordinary classrooms on Earth grow plants while an identical experiment unfolds simultaneously in microgravity.
Each participating school receives an ExoLab growth chamber that tracks temperature, humidity, light, and CO₂ levels while capturing timelapse images of plant development. Students plant seeds, collect data, and compare their findings with the parallel experiment happening in space — all in real time.
Over the course of a four-week mission, students join live broadcasts with classrooms all over the world. They hear directly from astronauts and NASA scientists, discuss everyone’s observations, and share their own discoveries.
So far, more than 24,000 students and 1,400 teachers across 15 countries have taken part in 12 missions. Keep your eyes peeled for the results of ExoLab-13: Mission MushVroom!
Team Magnitude.io
Not the only Raspberry Pis in space
We liked how much ExoLab reminded us of the Raspberry Pi Foundation’s groundbreaking Astro Pi programme, which sees students run their own code on the International Space Station. While ExoLab works with NASA, Astro Pi sees students collaborate with astronauts from the European Space Agency.
The first pair of Astro Pi computers went up for Tim Peake’s Principia mission
Last year’s challenge was bigger than ever, with 25,405 young people participating across 17,285 teams. They’re now analysing the data they’ve received from the experiments that ran on the ISS. It’s free to take part, so if you know of a young person (under 19 years of age) who would like to launch their code into space, they can choose their mission and get started within an hour!
This wall art was inspired by the James Webb telescope — and there’s a Raspberry Pi inside
If you’ve already done Astro Pi and would like to try a more challenging build, you could look into the ISS Mimic project, which sees student teams build a 1%-scale version of the International Space Station and code it so that it mimics the exact actions of the real thing up in orbit. (It’s very cool. We follow Team ISS Mimic around to events like Open Sauce — they also introduced us to the ExoLab folks.)
ISS Mimic doing its… mimicking
If we’ve piqued your interest, why not peruse the space archives on our website? There are more Raspberry Pis up there than you think!