Android 17 has officially reached platform stability today with Beta 3. That means that the API surface is locked; you can perform final compatibility testing and push your Android 17-targeted apps to the Play Store. In addition, Beta 3 brings a host of new capabilities to help you build better, more secure, and highly integrated applications.
If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now, both to prevent your downstream app and game developers from being blocked by compatibility issues and to let them target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17.
To test, install your production app, or a test app that uses your library or engine, from Google Play or another source onto a device or emulator running Android 17 Beta 3. Work through all of your app's flows and look for functional or UI issues, and review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and the overall user experience, and these changes can affect your apps. Here are some changes to focus on:
Android now allows you to tailor the visual presentation of the photo picker to better complement your app’s user interface. By leveraging the new PhotoPickerUiCustomizationParams API, you can modify the grid view aspect ratio from the standard 1:1 square to a 9:16 portrait display. This flexibility extends to both the ACTION_PICK_IMAGES intent and the embedded photo picker, enabling you to maintain a cohesive aesthetic when users interact with media.
This is all part of our effort to help make the privacy-preserving Android photo picker fit seamlessly with your app experience. Learn more about how you can embed the photo picker directly into your app for the most native experience.
// Request a 9:16 portrait grid instead of the default 1:1 square grid.
val params = PhotoPickerUiCustomizationParams.Builder()
    .setAspectRatio(PhotoPickerUiCustomizationParams.ASPECT_RATIO_PORTRAIT_9_16)
    .build()
val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
    putExtra(MediaStore.EXTRA_PICK_IMAGES_UI_CUSTOMIZATION_PARAMS, params)
}
startActivityForResult(intent, REQUEST_CODE)
Support for the RAW14 image format: Android 17 introduces support for the RAW14 image format — the de facto industry standard for high-end digital photography — via the new ImageFormat.RAW14 constant. RAW14 is a single-channel, 14-bit-per-pixel format that uses a densely packed layout in which every four consecutive pixels are packed into seven bytes.
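The packing layout is easy to sanity-check with arithmetic: four pixels at 14 bits each is 56 bits, which is exactly seven bytes. The helper below is a hypothetical sketch, not an Android API; it computes the minimum packed buffer size for a RAW14 frame, assuming no per-row padding:

```kotlin
// Hypothetical helper, not part of the Android SDK: minimum packed buffer
// size in bytes for a RAW14 frame (14 bits per pixel, so every 4 pixels
// occupy exactly 7 bytes), assuming no per-row padding.
fun raw14PackedSize(width: Int, height: Int): Long {
    val totalBits = width.toLong() * height * 14
    return (totalBits + 7) / 8 // round up to whole bytes
}

fun main() {
    println(raw14PackedSize(4, 1))       // 7: four pixels fit in seven bytes
    println(raw14PackedSize(4000, 3000)) // 21000000: a 12 MP frame
}
```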
Vendor-defined camera extensions: Android 17 adds vendor-defined extensions, enabling hardware partners to define and implement custom camera extension modes that give you access to the best and latest camera features, such as 'Super Resolution' or cutting-edge AI-driven enhancements. You can query for these modes using the isExtensionSupported(int) API.
Camera device type APIs: New Android 17 APIs allow you to query the underlying device type to identify if a camera is built-in hardware, an external USB webcam, or a virtual camera.
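Since the post doesn't name the new constants, here is a purely illustrative sketch of how such a device-type query might be consumed. The DEVICE_TYPE_* values below are made-up placeholders, not real Android 17 API names:

```kotlin
// Placeholder constants for illustration only; the actual Android 17
// constant names are not given in this post.
const val DEVICE_TYPE_BUILTIN = 0
const val DEVICE_TYPE_USB_WEBCAM = 1
const val DEVICE_TYPE_VIRTUAL = 2

// Map a queried camera device type to a human-readable description.
fun describeCameraType(type: Int): String = when (type) {
    DEVICE_TYPE_BUILTIN -> "built-in hardware camera"
    DEVICE_TYPE_USB_WEBCAM -> "external USB webcam"
    DEVICE_TYPE_VIRTUAL -> "virtual camera"
    else -> "unknown camera type"
}
```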
Android now includes a specific device category for Bluetooth Low Energy (BLE) Audio hearing aids. With the addition of the AudioDeviceInfo.TYPE_BLE_HEARING_AID constant, your app can now distinguish hearing aids from regular headsets.
val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
val isHearingAidConnected = devices.any { it.type == AudioDeviceInfo.TYPE_BLE_HEARING_AID }
Android 17 allows users to independently manage where specific system sounds are played. They can choose to route notifications, ringtones, and alarms to connected hearing aids or the device's built-in speaker.
Android 17 introduces a system-provided Extended HE-AAC software encoder. This encoder supports both low and high bitrates using unified speech and audio coding. You can access this encoder via the MediaCodec API using the name c2.android.xheaac.encoder or by querying for the audio/mp4a-latm MIME type.
val encoder = MediaCodec.createByCodecName("c2.android.xheaac.encoder")
// 48 kHz mono input; xHE-AAC's unified speech and audio coding stays
// efficient even at a low bitrate such as 24 kbps.
val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1)
format.setInteger(MediaFormat.KEY_BIT_RATE, 24000)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectXHE)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
Android 17 introduces a new variant of AlarmManager.setExactAndAllowWhileIdle that accepts an OnAlarmListener instead of a PendingIntent. This new callback-based mechanism is ideal for apps that currently rely on continuous wakelocks to perform periodic tasks, such as messaging apps maintaining socket connections.
val alarmManager = getSystemService(AlarmManager::class.java)
val listener = AlarmManager.OnAlarmListener {
    // Do work here; no PendingIntent or continuous wakelock needed
}
alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.ELAPSED_REALTIME_WAKEUP,
    SystemClock.elapsedRealtime() + 60000, // fire in one minute
    listener,
    null // optional Handler for the callback
)
Android is introducing a system-rendered location button that you will be able to embed directly into your app's layout using an Android Jetpack library. When a user taps this system button, your app is granted precise location access for the current session only. To implement this, you need to declare the USE_LOCATION_BUTTON permission.
Android 17 splits the existing "Show passwords" system setting into two distinct user preferences: one for touch-based input and another for physical (hardware) keyboard input. Characters entered via physical keyboards are now hidden immediately by default.
// 'event' is the KeyEvent being handled; check whether it came from a physical keyboard.
val isPhysical = event.source and InputDevice.SOURCE_KEYBOARD == InputDevice.SOURCE_KEYBOARD
val shouldShow = android.text.ShowSecretsSetting.shouldShowPassword(context, isPhysical)
To improve security against code injection attacks, Android now enforces that dynamically loaded native libraries must be read-only. If your app targets Android 17 or higher, all native files loaded using System.load() must be marked as read-only beforehand.
val libraryFile = File(context.filesDir, "my_native_lib.so")
// Mark the file as read-only before loading to comply with Android 17+ security requirements
libraryFile.setReadOnly()
System.load(libraryFile.absolutePath)
To prepare for future advancements in quantum computing, Android is introducing support for Post-Quantum Cryptography (PQC) through the new v3.2 APK Signature Scheme. This scheme utilizes a hybrid approach, combining a classical signature with an ML-DSA signature.
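The hybrid approach is worth spelling out: a hybrid signature only verifies if both component signatures verify, so an attacker would need to break both the classical scheme and ML-DSA. A conceptual sketch of that acceptance rule (this is not the real v3.2 verifier; the verifier functions are stand-ins):

```kotlin
// Conceptual sketch of hybrid signature acceptance, not the real v3.2
// APK Signature Scheme verifier: the APK is accepted only when BOTH the
// classical signature and the ML-DSA signature verify over the same content.
fun hybridAccept(
    apkContents: ByteArray,
    verifyClassical: (ByteArray) -> Boolean,
    verifyMlDsa: (ByteArray) -> Boolean
): Boolean = verifyClassical(apkContents) && verifyMlDsa(apkContents)

fun main() {
    val apk = byteArrayOf(1, 2, 3)
    // Both checks pass: the APK is accepted.
    println(hybridAccept(apk, { true }, { true }))  // true
    // If either scheme alone is broken or forged, the hybrid still rejects.
    println(hybridAccept(apk, { true }, { false })) // false
}
```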
This feature improves the visual consistency of app widgets shown on connected or external displays with different pixel densities by letting you specify widget dimensions in density-independent (DP) or scale-independent (SP) units.
val options = appWidgetManager.getAppWidgetOptions(appWidgetId)
val displayId = options.getInt(AppWidgetManager.OPTION_APPWIDGET_DISPLAY_ID)
val remoteViews = RemoteViews(context.packageName, R.layout.widget_layout)
remoteViews.setViewPadding(
    R.id.container,
    16f, 8f, 16f, 8f,
    TypedValue.COMPLEX_UNIT_DIP
)
Android now provides a user setting to hide app names (labels) on the home screen workspace. Ensure your app icon is distinct and recognizable.
Android 17 lets your app request that its task be placed on a pinned windowing layer in desktop mode. Unlike traditional Picture-in-Picture, these pinned windows remain interactive while staying always on top of other application windows.
val appTask: ActivityManager.AppTask =
    activity.getSystemService(ActivityManager::class.java).appTasks[0]
appTask.requestWindowingLayer(
    ActivityManager.AppTask.WINDOWING_LAYER_PINNED,
    context.mainExecutor,
    object : OutcomeReceiver<Int, Exception> {
        override fun onResult(result: Int) {
            if (result == ActivityManager.AppTask.WINDOWING_LAYER_REQUEST_GRANTED) {
                // Task successfully moved to the pinned layer
            }
        }

        override fun onError(error: Exception) {}
    }
)
By using the new ACTION_VPN_APP_EXCLUSION_SETTINGS Intent, your app can launch a system-managed Settings screen where users can select applications to bypass the VPN tunnel.
val intent = Intent(Settings.ACTION_VPN_APP_EXCLUSION_SETTINGS)
if (intent.resolveActivity(packageManager) != null) {
startActivity(intent)
}
This update brings extensive features and refinements from OpenJDK 21 and OpenJDK 25, including the latest Unicode support and enhanced SSL support for named groups in TLS.
You can enroll any supported Pixel device or use the 64-bit system images with the Android Emulator.
For complete information, visit the Android 17 developer site.
Learn all about what's new across Microsoft SQL at the SQLCon/FabCon conference in Atlanta in March 2026 from Microsoft SQL product leader Priya Sathy, with additional insights from Bob Ward and Anna Hoffman.
✅ Chapters:
0:00 Introduction
3:10 What's new in Microsoft SQL at SQL Con
8:28 What's new in AI - SQL Developers Certification
10:50 Database Hub in Fabric
✅ Resources:
FabCon and SQLCon 2026: Unifying databases and Fabric on a single data platform: https://azure.microsoft.com/en-us/blog/fabcon-and-sqlcon-2026-unifying-databases-and-fabric-on-a-single-data-platform/
📌 Let's connect:
Twitter - Anna Hoffman, https://twitter.com/AnalyticAnna
Twitter - AzureSQL, https://aka.ms/azuresqltw
🔴 Watch even more Data Exposed episodes: https://aka.ms/dataexposedyt
🔔 Subscribe to our channels for even more SQL tips:
Microsoft Azure SQL: https://aka.ms/msazuresqlyt
Microsoft SQL Server: https://aka.ms/mssqlserveryt
Microsoft Developer: https://aka.ms/microsoftdeveloperyt
#AzureSQL #SQL #LearnSQL
Welcome to episode 347 of The Cloud Pod, where the forecast is always cloudy! Justin, Jonathan, and Ryan are in the studio recording today, and thankfully, Jonathan hasn’t replaced us all with Skynet – yet. This week, we’re discussing how old our tools (and us) are (hint: it’s really old), whether or not the SaasApocalypse is upon us, and whether or not the business or AI is responsible for the latest round of layoffs.
00:54 Microsoft’s brief in Anthropic case shows new alliance and willingness to challenge Trump administration
01:37 Justin – “Oh, yeah, there’s a vested interest in the lawsuit which we did not mention last week, so I wanted to follow up on that, because that explains very clearly why Microsoft is throwing in with Anthropic on this.”
02:37 Atlassian to shed ten percent of staff, because of AI
03:18 Justin – “I’ve seen Rovo, which is Atlassian’s AI suite, and if that’s the best they can do… I have fears for the long-term health and viability of Jira in general. I’m kind of over the whole let’s blame AI for our bad business decisions. That’s going to get old real quick.”
06:18 Claude builds interactive visuals right in your conversation
07:27 Ryan – “Kind of excited when Claude decides that the monkey making the queries needs bigger pictures because the text isn’t working out, so it’s like, I get you, Claude. I see what you’re doing.”
07:38 Jonathan – “Anthropic’s Claude: Now with crayons.”
08:50 Introducing Genie Code
10:05 Ryan – “I don’t think it will kill Glue or any of the ETL things, but hopefully it will just do it for you, and then I don’t think I care anymore.”
11:19 1M context is now generally available for Opus 4.6 and Sonnet 4.6
19:46 Introducing GPT-5.4 mini and nano
21:00 Ryan – “I’m a fan of these little models for certain things; as part of that tuning, my agent definitions have gotten a lot more complex. A lot of times, I’m breaking out agent definitions so that I can specifically use one of the smaller models for certain types of tasks. Data extraction being a big one.”
22:53 Twenty years of Amazon S3 and building what’s next
24:08 Justin – “I am a big fan of the S3 vectors because we use it for Bolt.”
25:39 Introducing account regional namespaces for Amazon S3 general-purpose buckets
27:17 Jonathan – “What’s really annoying is your account number is part of the public S3 bucket name! I wish a security person had been in the room there.”
28:17 Amazon CloudWatch Application Signals adds new SLO capabilities
29:11 Jonathan – “So instead of fixing your product, you just use a tool that tells you that you should turn down your commitments to your customers. Ok…”
29:57 Amazon SimpleDB now supports exporting domain data to Amazon S3
30:53 Justin – “SimpleDB gets a new feature!”
32:19 Amazon CloudWatch introduces organization-wide EC2 detailed monitoring enablement
33:17 Ryan – “I mean, what’s wrong with the previous method of waiting until you had an outage, not having the data, and THEN turning it on for your project?”
33:47 Why context is the missing link in AI data security
35:16 Ryan – “I don’t really think that’s usually where the sensitive data is. It can be, in some workloads, but probably not the majority, so there’s so many false positives, so I really like the idea that they’re having context be a part of that decision.”
37:16 Welcoming Wiz to Google Cloud: Redefining security for the AI era
38:16 Justin – “Typically on these acquisitions, it takes about a year for Google to figure out how to package them properly, and most likely they’ll want a separate contract for it anyways because that’s how all the integration acquisitions they’ve done are.”
39:22 IAP integration with Cloud Run
39:57 Ryan – “This is a neat little feature. I don’t know how widely known it is, but it’s something that I’ve been using for a while.”
42:09 Multi-cluster GKE Inference Gateway helps scale AI workloads
43:06 Ryan – “Simplify. Sure…”
44:35 More transparency and control over Gemini API costs
45:13 Justin – “If you’ve ever tried to figure out who is using what models and what they’re doing with them and how much it costs, you know that this is all terrible – and this doesn’t actually improve it all that much.”
47:35 Generally Available: Azure SRE Agent with new capabilities
48:25 Jonathan – “All right, so they run the services, which are going to have problems. And now they want me to pay for another service so that I can use that tool to troubleshoot the problems with the other tools that I’m already paying for. OK…”
55:59 Many agents, one team: Scaling modernization on Azure
52:32 Ryan – “I keep waiting for someone to tout the success of how they did it, they’ve migrated all their terrible legacy code into this new thing, and it all works – but I haven’t seen it…”
53:28 Announcing Fireworks AI on Microsoft Foundry
54:30 Justin – “Sounds like it’s a cross-connect that they’ve done to Firework’s cloud basically, to provide this to you, so it’s sort of interesting.”
56:02 Announcing Copilot leadership update
57:23 Ryan – “Noticeably missing is Github’s Copilot…”
55:59 Washington state hotline callers hear AI voice with Spanish accent
And that is the week in the cloud! Visit our website, the home of the Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net or tweet at us with the hashtag #theCloudPod
“Our internal target is to 2X our impact with AI over one year. Unlike some more outlandish mandates, that one is both aspirational and achievable,” says Emily Nakashima, SVP of Engineering at Honeycomb.
In this episode of The Hangar DX podcast, Emily shares how Honeycomb approached AI adoption at scale and why they try not to focus on metrics that can be gamed but rely more on self-reporting by developers.
Emily also discusses:
- Why flattening org charts is a short-term optimization that will cost companies later
- How Honeycomb issued a company-wide 2X mandate and what actually happened when they did
- Why the "buffet phase" of AI tool adoption is over and what a structured rollout looks like
- Why self-reporting beats hard metrics when measuring AI's impact on your team
- Why observability is more critical than ever in a world of non-deterministic AI-generated code
- Why AI SRE tools demo well but often fall short, and what they need to actually work
About Emily Nakashima
Emily serves as SVP of Engineering at Honeycomb. A former manager and engineering leader at multiple developer tools companies, including Bugsnag and GitHub, Emily is passionate about building best-in-class, consumer-quality tools for engineers. She has a background in product engineering, performance optimization, client-side monitoring, and design.
About Hangar DX (https://dx.community/)
The Hangar is a community of senior DevOps and senior software engineers focused on developer experience. This is a space where vetted, experienced professionals can exchange ideas, share hard-earned wisdom, troubleshoot issues, and ultimately help each other in their projects and careers.
We invite developers who work in DX and platform teams at their respective companies or who are interested in developer productivity.
Thank you for your interest in the new Azure SDKs! We release new features, improvements, and bug fixes every month. Subscribe to our Azure SDK Blog RSS Feed to get notified when a new release is available.
You can find links to packages, code, and docs on our Azure SDK Releases page.
The Azure Identity library for .NET now supports specifying a certificate path in the form of cert:/StoreLocation/StoreName/Thumbprint when using ClientCertificateCredential. This feature allows you to reference a certificate directly from the platform certificate store, such as the Windows Certificate Store or the macOS KeyChain, instead of requiring a file on disk. For example, to load a certificate from the “My” store in the “CurrentUser” location, use the path cert:/CurrentUser/My/E661583E8FABEF4C0BEF694CBC41C28FB81CD870.
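To make the path format concrete, here is a small illustrative parser that splits a cert:/ path into its three components. It is written in Kotlin purely for illustration and is not part of any Azure SDK; the actual feature lives in the Azure Identity library for .NET:

```kotlin
// Illustrative only, not an Azure SDK API: decompose a
// cert:/StoreLocation/StoreName/Thumbprint path into its parts.
data class CertPath(
    val storeLocation: String,
    val storeName: String,
    val thumbprint: String
)

fun parseCertPath(path: String): CertPath {
    require(path.startsWith("cert:/")) { "Expected a cert:/ path" }
    val parts = path.removePrefix("cert:/").split("/")
    require(parts.size == 3) { "Expected cert:/StoreLocation/StoreName/Thumbprint" }
    return CertPath(parts[0], parts[1], parts[2])
}

fun main() {
    val parsed = parseCertPath("cert:/CurrentUser/My/E661583E8FABEF4C0BEF694CBC41C28FB81CD870")
    println(parsed.storeLocation) // CurrentUser
    println(parsed.storeName)     // My
}
```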
The Azure Cosmos DB client library for Rust adds several significant features and breaking changes in this release. New capabilities include basic multi-region writes support, transactional batch support for executing multiple operations atomically within the same partition key, and fault injection support for testing in disaster scenarios. The client construction API was redesigned with a new CosmosClientBuilder, and query methods now return a FeedItemIterator<T> implementing Stream<Item = Result<T>>. Note that wasm32-unknown-unknown support was removed across the Rust SDK.
Azure AI Content Understanding reaches general availability for .NET, JavaScript, and Python. This library provides a ContentUnderstandingClient for analyzing documents, audio, and video content, as well as creating, managing, and configuring analyzers. The .NET release includes strongly typed Value properties on ContentField subclasses, a ContentSource hierarchy for strongly typed parsing of grounding source strings, and ContentRange value type with static factory methods for specifying content ranges.
The post Azure SDK Release (March 2026) appeared first on Azure SDK Blog.