Boost app performance and battery life: New Android Vitals Metrics are here

Posted by Karan Jhavar - Product Manager, Android Frameworks, and Dan Brown - Product Manager, Google Play

Android has long championed performance, continuously evolving to deliver exceptional user experiences. Building upon years of refinement, we're now focusing on pinpointing resource-intensive use cases and developing platform-level solutions that benefit all users, across the vast Android ecosystem.

Since the launch of Android vitals in Play Console in 2017, Play has been investing in providing fleet-wide visibility into performance issues, making it easier to identify and fix problems as they occur. Today, Android and Google Play are taking a significant step forward in partnership with top OEMs, like Samsung, leveraging their real-world insights into excessive resource consumption. Our shared goal is to make Android development more streamlined and consistent by providing a standardized definition of what good and great looks like when it comes to technical quality.

"Samsung is excited to collaborate with Android and Google Play on these new performance metrics. By sharing our user experience insights, we aim to help developers build truly optimized apps that deliver exceptional performance and battery life across the ecosystem. We believe this collaboration will lead to a more consistent and positive experience for all Android users."
Samsung

We're embarking on a multi-year plan to empower you with the tools and data you need to understand, diagnose, and improve your app's resource consumption, resulting in happier and more engaged users, both for your app, and Android as a whole.

Today, we're launching the first of these new metrics in beta: excessive wake locks. This metric directly addresses one of the most significant frustrations for Android users – excessive battery drain. By optimizing your app's wake lock behavior, you can significantly enhance battery life and user satisfaction.

The Android vitals beta metric reports partial wake lock use as excessive when an app's partial wake locks, added together, are held for more than 3 hours within a 24-hour period. The current iteration of the metric counts this time only when a wake lock is held while the app is in the background without a foreground service.
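To stay well under that threshold, acquire partial wake locks with an explicit timeout and release them as soon as the work completes. Here's a minimal Kotlin sketch; the tag and the 10-minute timeout are illustrative choices, not values taken from the metric definition:

import android.content.Context
import android.os.PowerManager

fun doShortBackgroundWork(context: Context) {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val wakeLock = powerManager.newWakeLock(
        PowerManager.PARTIAL_WAKE_LOCK,
        "myapp:shortWork" // hypothetical tag
    )
    // Acquiring with a timeout means a missed release() can't hold the CPU
    // awake indefinitely and push cumulative use over the 3-hour threshold.
    wakeLock.acquire(10 * 60 * 1000L)
    try {
        // ... work that must survive the screen turning off ...
    } finally {
        if (wakeLock.isHeld) wakeLock.release()
    }
}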

These new metrics will provide comprehensive, fleet-wide visibility into performance and battery life, equipping developers with the data needed to diagnose and resolve performance bottlenecks. We have also revamped our wake lock documentation, which shares effective wake lock implementation strategies and best practices.

In addition, we are launching the excessive wake lock metric documentation to provide clear guidance on interpreting the metric. We highly encourage developers to check out this page and share feedback on the new metric along with their use cases. Your input is invaluable in refining the metric before its general availability. In this beta phase, we're actively seeking feedback on the metric definition and how it aligns with your app's use cases. Once we reach general availability, we will explore Play Store treatments to help users choose apps that meet their needs.

Later this year, we may introduce more Android vitals metrics highlighting other critical performance issues.

Thank you for your ongoing commitment to delivering delightful, fast, and high-performance experiences to users across the entire Android ecosystem.

Agentic programming makes me want to stop programming and also do it more


TLDR: Using agents for programming can be great, but I fear for the maintainability of the code that's produced. As I'm currently looking for work, this experiment let me see the value my experience can bring, while also making me fear a future of having to work with a lot of code that was quickly written but is mediocre (at best) and hard to maintain.


I'm currently looking for work. It's frustrating and demotivating. I haven't been in this position in a long time, and it's worse than I remember.

I find it useful to take a break and "relax." Sometimes, I like to do that by writing some code, often for a quick side project.

Yesterday was one such day.

I'd recently been using a library that broke something (deleted a public API) without warning. As part of discussing this issue with the maintainer, I said, "They should use automated tools to check for the removal of public methods." They responded that it was a good idea, but they didn't know where to begin with adding such a check.

Consider me nerd-sniped.

In theory, it's not a complicated task: simply analyze the diff of a PR and report accordingly.
However, I'd previously tried making my own GitHub Actions (to track changes in the number of tests in a PR) and was unsuccessful.
Then I remembered that "AI will solve all our problems."
I was also keen to see what agent-based coding of a new project was like. True "vibe coding": start with nothing and get the "AI" to do all the work. Can it really work? Even on "new" things with little existing code on the internet for the LLM to copy from?

So, to distract myself from an unsuccessful day's job hunting, could I try to kill multiple birds at once and gain more personal experience with "vibe coding"?

I thought I'd give it a try.


As agent support isn't yet in Visual Studio, I decided (had) to use VSCode. I've not really used VSCode for anything heavy on the C# front, and this is the first time I've built anything new with it. (Previously, if I was doing something with C# in VSCode, it was to debug MAUI apps on a Mac.)


So, I created a program that could run as a GitHub Action. I had these goals:

  • Be able to analyze diffs on a PR to detect added tests, removed tests, and deleted public methods.
  • Be written in C#, so I could understand what was happening and potentially be able to change/extend the code in the future.
  • See how good Copilot could be as an agent. (Tell it what I wanted and let it write the code.)
  • Gain practical experience with "vibe coding". Recently, I heard that being a good craftsperson is about knowing how to make the best use of the tools available, so I thought it would be good to gain more personal experience.


I eventually got enough of a solution working that I feel my itch has been suitably scratched. If you want to see the current state of the code, it's at https://github.com/mrlacey/MattsPullRequestHelper 

A good example of what it can do can be seen in this PR (created while debugging). 

Example comment added to the pull request, showing numbers of added and deleted tests and the names of deleted public methods

What follows are some miscellaneous notes and observations from the process.

  • VSCode was not as productive for me as VS. Some of this may be down to muscle memory and missing the things I'm used to. I'm not big on command lines, and would much rather work with a GUI than have to keep switching to the terminal. There were also editor features, extensions, and tool windows that I really missed; the biggest were probably the Test Explorer window and the live test runner, neither of which is in VSCode.
  • Copilot was too happy to give the equivalent of a vague solution to a request. Take, for example, the code it initially generated to get the names of the files that had been changed as part of the PR:

    // Placeholder: Replace with logic to fetch changed files from the PR diff

    How is this useful? I asked it to write the code to get the list of files, and it just produced a placeholder.
    This reminded me of a time when I was interviewing someone for a job. When asked how they would solve a technical problem, their answer was that they "would write an algorithm." When pushed for details they couldn't give any. Yes, Copilot reminds me of a less experienced developer who knows the right words but not how to do the work.
  • Knowing how to phrase things so that it produces what I want/need will take time. There were points where I thought it would be quicker to write something myself rather than try to work out how to phrase a request so I got what I wanted. But I pushed through and in most cases got there.
  • When I asked it to fix problems, it looked to address the symptoms rather than the cause. If I didn't know that it was doing something bad/foolish, who knows what I would have ended up with or if it would have ever produced a working solution. 
  • There were a couple of times it was unable to fix an error and eventually gave up. What would someone who didn't know the programming language do here?
  • It tried to do some things that are just fundamentally bad. A few I left it to see if it would fix them. It didn't. I should probably go back and address them if I want to maintain this code.
  • I did have to go to the docs to find solutions to some challenges. I reached a point where I thought it would never get there itself. Maybe this was because of dated training data. Maybe it just wasn't capable of doing some things. Maybe I just couldn't explain it in a way it could understand. Maybe it was just a result of the randomness in the responses. Perhaps it would have got there if I'd kept trying long enough.
  • It struggled with version numbers and compatibility issues. It initially created the two projects targeting different .NET versions. This is the kind of abstract, high-level knowledge that is missing, or just different from a "real" person.
  • Some of the justifications for the changes it made and recommendations it gave were just wrong or straight-up anti-patterns. I'm glad I knew this and could avoid some of the bad code it produced.
  • Working with regular expressions was a lot easier than "normal". Especially when I had appropriate tests in place to verify that the changes it made were correct/appropriate. 
  • Being able to say "update the code to make the tests all pass" and have it actually work was something I've wanted for years. (I tried with LLMs in the past but never got there.) To have it working was kinda magical.
  • The tests it produced were rubbish. They were less than worthless and more likely to cause problems in the future than help avoid them. I'm concerned for the people who see AI as a way to avoid having to write tests themselves. I see the opposite. Developers being able to write tests is going to become more important as they'll need to verify that the code does (and continues to do) what it's supposed to. Writing a load of test cases and then having the tool write the code, which I would then verify, felt very productive. I'll definitely do this more in the future.
  • The generated code was not as well structured or tested as I'd like, or as it would have been if I'd done it myself. Part of me thinks this could be a problem, but I'm also conscious that I might be in a similar position if another person had written the code. Apart from the testability issue, I want to say this isn't all that important. Having working and maintainable code is more important than it looking a certain way.
  • On several occasions, it rewrote files and removed changes I'd made to what it did previously. This included removing comments I added and re-adding incorrect and unneeded code that I had removed. I think it's going to be useful to make commits before asking it to change the code, so I can more easily track anything it might change that I miss.
  • Reviewing changes, including spotting small but important details amongst a large change, is going to be increasingly important. I will benefit from more tooling to do this. This doesn't just apply to formal code reviews.
  • It can easily introduce broad changes to the code base, and this makes differences hard to spot.
  • It would sometimes create duplicate code. Sometimes within methods. 
  • The most frustrating of the unrelated changes that it would make would be changing the accessibility of methods and classes. It would create them as private. I'd make them public so they were easier to test. It would then revert them to private and stop the test from compiling.
  • There were many times it broke stuff - if not for my tests, I wouldn't have known and could have easily wasted lots of time going round in circles making changes and introducing regressions while fixing other things.
  • It really made me wonder how some developers can be productive without a large suite of high-quality tests. They must find development slow and error-prone. (Oh yes, many do!)


In the end, I felt that I'd gotten about as far as I would have in the time even if I hadn't used Copilot. The big difference is that I was spending my time thinking about different things. There was a lot more time spent reading code written by Copilot and trying to understand it. This offset some of the time I would have spent looking at docs (it managed to do some things for me that I didn't have to look up).
Some of this time was also because I was doing something new for the first time. Most coding starts with an existing code base, so that might not be true in future. I certainly feel it could save me more time in future.


Takeaways

  • [Full] Visual Studio (rather than Code) is so much more productive for C# development.
  • Copilot (especially in agent mode) has the potential to be useful to any/every developer.
  • It really feels like working with a less experienced developer who doesn't always listen to (or ensure they fully understand) the requirements before producing a lot of code.
  • It reinforced my impression that carefully reviewing changes will become even more important in the future. Having additional tools to help with this (both AI-powered and not) will be helpful.
  • Having high-quality tests (not those written by Copilot) and good code coverage will become increasingly important.
  • I'm sure there is more that I can do/learn to get better at using and configuring the agent to do more of what I want the first time.



Now, there is another project I've been thinking of experimenting with that might be even more of a challenge for Copilot. I'm now more inclined to experiment with this soon...



From dashboards to deeper data: Improve app quality and performance with new Play Console insights

Posted by Dan Brown, Dina Gandal and Hadar Yanos – Product Managers, Google Play

At Google Play, we partner with developers like you to help your app or game business reach its full potential, providing powerful tools and insights every step of the way. In Google Play Console, you’ll find the features needed to test, publish, improve, and grow your apps — and today, we're excited to share several enhancements to give you even more actionable insights, starting with a redesigned app dashboard tailored to your key workflows, and new metrics designed to help you improve your app quality.

Focus on the metrics that matter with the redesigned app dashboard

The first thing you’ll notice is the redesigned app dashboard, which puts the most essential insights front and center. We know that when you visit Play Console, you usually have a goal in mind — whether that’s checking on your release status or tracking installs. That’s why you’ll now see your most important metrics grouped into four core developer objectives:

    • Test and release
    • Monitor and improve
    • Grow users, and
    • Monetize with Play

Each objective highlights the three metrics most important to that goal, giving you a quick grasp of how your app is doing at a glance, as well as how those metrics have changed over time. For example, you can now easily compare your latest production release against your app's overall performance, helping you to quickly identify any issues. In the screenshot below, the latest production release has a crash rate of 0.24%, a large improvement over the 28-day average crash rate shown under "Monitor and improve."

The redesigned app dashboard in Play Console helps you see your most important metrics at a glance.

At the top of the page, you’ll see the status of your latest release changes prominently displayed so you know when it’s been reviewed and approved. If you’re using managed publishing, you can also see when things are ready to publish. And based on your feedback, engagement and monetization metrics now show a comparison to your previous year’s data so you can make quick comparisons.

The new app dashboard also keeps you updated on the latest news from Play, including recent blog posts, new features relevant to your app, and even special invitations to early access programs.

In addition to what’s automatically displayed on the dashboard, we know many of you track other vital metrics for your role or business. That's why we've added the “Monitor KPI trends” section at the bottom of your app dashboard. Simply scroll down and personalize your view by selecting the trends you need to monitor. This customized experience allows each user in your developer account to focus on their most important insights.

Later this year, we’ll introduce new overview pages for each of the four core developer objectives. These pages will help you quickly understand your performance, showcase tools and features within each domain, and list recommended actions to optimize performance, engagement, and revenue across all your apps.

Get actionable notifications when and where you need them

If you spend a lot of time in Play Console, you may have already noticed the new notification center. Accessible from every page, the notification center helps you to stay up to date with your account and apps, and helps you to identify any issues that may need urgent attention.

To help you quickly understand and act on important information, we now group notifications about the same issue across multiple apps. Additionally, notifications that are no longer relevant will automatically expire, ensuring you only see what needs your attention. Plus, notifications will be displayed on the new app dashboard within the relevant objectives.

Improve app quality and performance with new Play Console metrics

One of Play’s top goals is to provide the insights you need to build high-quality apps that deliver exceptional user experiences. We’re continuing to expand these insights, helping you prevent issues like crashes or ANRs, optimize your app’s performance, and reduce resource consumption on users’ devices.

Users expect a polished experience across their devices, and we've learned from you that it can be difficult to make your app layouts work seamlessly across phones and large screens. To help with this, we've introduced pre-review checks for incorrect edge-to-edge rendering, while another new check helps you detect and prevent large screen layout issues caused by letterboxing and restricted layouts, along with resources on how to fix them.

We’re also making it easier to find and triage the most important quality issues in your app. The release dashboard in Play Console now displays prioritized quality issues from your latest release, alongside the existing dashboard features for monitoring post-launch, like crashes and ANRs. This addition provides a centralized view of user-impacting issues, along with clear instructions to help you resolve critical user issues and improve your users’ experiences.

The quality panel at the top of the release dashboard gives you a prioritized view of issues that affect users on your latest release and provides instructions on how to fix them.

A new "low memory kill" (LMK) metric is available in Android vitals and the Reporting API. Low memory issues cause your app to terminate without any logging, and can be notoriously difficult to detect. We are making these issues visible with device-specific insights into memory constraints to help you identify and fix these problems. This will improve app stability and user engagement, which is especially crucial for games where LMKs can disrupt real-time gameplay.

The low memory kill metric in Android vitals gives you device-specific insights into low memory terminations, helping you improve app stability and user engagement.
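While the new metric gives you fleet-wide visibility in Play Console, you can also inspect low-memory terminations on individual devices. Here's a minimal Kotlin sketch using the platform ApplicationExitInfo API (available since API level 30); this is separate from, and complementary to, the new vitals metric:

import android.app.ActivityManager
import android.app.ApplicationExitInfo
import android.content.Context

fun countRecentLowMemoryKills(context: Context): Int {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    // Passing pid = 0 and maxNum = 20 returns up to 20 recent exit records
    // for this package, including kills the app never got to log itself.
    return am.getHistoricalProcessExitReasons(context.packageName, 0, 20)
        .count { it.reason == ApplicationExitInfo.REASON_LOW_MEMORY }
}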

We're also collaborating closely with leading OEMs like Samsung, leveraging their real-world insights to define consistent benchmarks for optimal technical quality across Android devices. Excessive wake locks are a leading cause of battery drain, a top frustration for users. Today, we're launching the first of these new metrics in beta: excessive wake locks in Android vitals. Take a look at our wake lock documentation and provide feedback on the metric definition. Your input is essential as we refine this metric towards general availability, and will inform our strategy for making this information available to users on the Play Store so they can make informed decisions when choosing apps.

Together, these updates provide you with even more visibility into your app's performance and quality, enabling you to build more stable, efficient, and user-friendly apps across the Android ecosystem. We'll continue to add more metrics and insights over time. To stay informed about all the latest Play Console enhancements and easily find updates relevant to your workflow, explore our new What’s new in Play Console page, where you can filter features by the four developer objectives.

The Fourth Beta of Android 16

Posted by Matthew McCullough – VP of Product Management, Android Developer

Today we're bringing you Android 16 beta 4, the last scheduled update in our Android 16 beta program. Make sure your app or game is ready. It's also the last chance to give us feedback before Android 16 is released.

Android 16 Beta 4

This is our second platform stability release; the developer APIs and all app-facing behaviors are final. Apps targeting Android 16 can be made available in Google Play. Beta 4 includes our latest fixes and optimizations, giving you everything you need to complete your testing. Head over to our Android 16 summary page for a list of the features and behavior changes we've been covering in this series of blog posts, or read on for some of the top changes of which you should be aware.

Android 16 Release timeline showing Platform Stability milestone in April

Now available on more devices

The Android 16 Beta is now available on handset, tablet, and foldable form factors from partners including Honor, iQOO, Lenovo, OnePlus, OPPO, Realme, vivo, and Xiaomi. With more Android 16 partners and device types, many more users can run your app on the Android 16 Beta.

Android 16 Beta Release Partners: Google Pixel, iQOO, Lenovo, OnePlus, Sharp, Oppo, RealMe, vivo, Xiaomi, and Honor

Get your apps, libraries, tools, and game engines ready!

If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your developers know if updates to your SDK are needed to fully support Android 16.

To test, install your production app, or a test app that makes use of your library or engine, onto a device or emulator running Android 16 Beta 4, using Google Play or other means. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and the overall user experience, and these changes can affect your apps. Here are several changes to focus on that apply even if you aren't yet targeting Android 16:

Other changes that will be impactful once your app targets Android 16:

Get your app ready for the future:

    • Local network protection: Consider testing your app with the upcoming Local Network Protection feature. It will give users more control over which apps can access devices on their local network in a future Android major release.

Remember to thoroughly exercise libraries and SDKs that your app is using during your compatibility testing. You may need to update to current SDK versions or reach out to the developer for help if you encounter any issues.

Once you’ve published the Android 16-compatible version of your app, you can start the process to update your app's targetSdkVersion. Review the behavior changes that apply when your app targets Android 16 and use the compatibility framework to help quickly detect issues.
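As a sketch, the bump itself is a small change in your module's build.gradle.kts; the API level shown assumes Android 16 corresponds to API level 36, so check the final SDK release notes for the confirmed value:

android {
    compileSdk = 36  // assumption: Android 16 == API level 36
    defaultConfig {
        targetSdk = 36
    }
}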

Two Android API releases in 2025

This Beta is for the next major release of Android with a planned launch in Q2 of 2025 and we plan to have another release with new developer APIs in Q4. This Q2 major release will be the only release in 2025 that includes behavior changes that could affect apps. The Q4 minor release will pick up feature updates, optimizations, and bug fixes; like our non-SDK quarterly releases, it will not include any intentional app-breaking behavior changes.

Android 16 2025 SDK release timeline

We'll continue to have quarterly Android releases. The Q1 and Q3 updates provide incremental updates to ensure continuous quality. We’re putting additional energy into working with our device partners to bring the Q2 release to as many devices as possible.

There’s no change to the target API level requirements and the associated dates for apps in Google Play; our plans are for one annual requirement each year, tied to the major API level.

Get started with Android 16

You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently on Android 16 Beta 3 or are already in the Android Beta program, you will be offered an over-the-air update to Beta 4.

While the API and behaviors are final and we are very close to release, we'd still like you to report issues on the feedback page. The earlier we get your feedback, the better the chance we'll be able to address it in this or a future release.

For the best development experience with Android 16, we recommend that you use the latest Canary build of Android Studio Narwhal. Once you’re set up, here are some of the things you should do:

    • Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page.

We’ll update the beta system images and SDK regularly throughout the Android 16 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas.

For complete information on Android 16 please visit the Android 16 developer site.

What’s new in the Jetpack Compose April ’25 release

Posted by Jolanda Verhoef – Developer Relations Engineer

Today, as part of the Compose April ‘25 Bill of Materials, we’re releasing version 1.8 of Jetpack Compose, Android's modern, native UI toolkit, used by many developers. This release contains new features like autofill, various text improvements, visibility tracking, and new ways to animate a composable's size and location. It also stabilizes many experimental APIs and fixes a number of bugs.

To use today’s release, upgrade your Compose BOM version to 2025.04.01:

implementation(platform("androidx.compose:compose-bom:2025.04.01"))
Note: If you are not using the Bill of Materials, make sure to upgrade Compose Foundation and Compose UI at the same time. Otherwise, autofill will not work correctly.
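If you're not using the BOM, here's a minimal sketch of keeping the two modules in lockstep; the 1.8.0 version number is an assumption based on this being the 1.8 release, so check the release notes for the exact artifact versions:

implementation("androidx.compose.foundation:foundation:1.8.0")
implementation("androidx.compose.ui:ui:1.8.0")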

Autofill

Autofill is a service that simplifies data entry. It enables users to fill out forms, login screens, and checkout processes without manually typing in every detail. Now, you can integrate this functionality into your Compose applications.

Setting up Autofill in your Compose text fields is straightforward:

      1. Set the contentType Semantics: Use Modifier.semantics and set the appropriate contentType for your text fields. For example:

TextField(
  state = rememberTextFieldState(),
  modifier = Modifier.semantics {
    contentType = ContentType.Username 
  }
)

      2. Handle saving credentials (for new or updated information):

          a. Implicitly through navigation: If a user navigates away from the page, commit will be called automatically - no code needed!

          b. Explicitly through a button: To trigger saving credentials when the user submits a form (by tapping a button, for instance), retrieve the local AutofillManager and call commit().
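A minimal sketch of the explicit path, assuming a hypothetical sign-in form; LocalAutofillManager and commit() are the APIs named above:

val autofillManager = LocalAutofillManager.current

Button(onClick = {
    // Explicitly ask the autofill framework to save the entered credentials.
    autofillManager?.commit()
}) {
    Text("Sign in")
}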

For full details on how to implement autofill in your application, see the Autofill in Compose documentation.

Text

When placing text inside a container, you can now use the autoSize parameter in BasicText to let the text size automatically adapt to the container size:

Box {
    BasicText(
        text = "Hello World",
        maxLines = 1,
        autoSize = TextAutoSize.StepBased()
    )
}
moving image of Hello World text inside a container

You can customize sizing by setting a minimum and/or maximum font size and defining a step size. Compose Foundation 1.8 contains this new BasicText overload, with Material 1.4 to follow soon with an updated Text overload.
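A minimal sketch of a customized configuration; the parameter names (minFontSize, maxFontSize, stepGranularity) are assumptions based on the StepBased factory, so verify them against the API reference:

BasicText(
    text = "Hello World",
    maxLines = 1,
    autoSize = TextAutoSize.StepBased(
        minFontSize = 12.sp,   // never shrink below 12sp
        maxFontSize = 28.sp,   // never grow beyond 28sp
        stepGranularity = 2.sp // try sizes in 2sp increments
    )
)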

Furthermore, Compose 1.8 enhances text overflow handling with new TextOverflow.StartEllipsis or TextOverflow.MiddleEllipsis options, which allow you to display ellipses at the beginning or middle of a text line.

val text = "This is a long text that will overflow"
Column(Modifier.width(200.dp)) {
  Text(text, maxLines = 1, overflow = TextOverflow.Ellipsis)
  Text(text, maxLines = 1, overflow = TextOverflow.StartEllipsis)
  Text(text, maxLines = 1, overflow = TextOverflow.MiddleEllipsis)
}
text overflow handling displaying ellipses at the beginning and middle of a text line

And finally, we're expanding support for HTML formatting in AnnotatedString, with the addition of bulleted lists:

Text(
  AnnotatedString.fromHtml(
    """
    <h1>HTML content</h1>
    <ul>
      <li>Hello,</li>
      <li>World</li>
    </ul>
    """.trimIndent()
  )
)
a bulleted list of two items

Visibility tracking

Compose UI 1.8 introduces a new modifier: onLayoutRectChanged. This API solves many of the use cases that the existing onGloballyPositioned modifier does, but with much less overhead. The onLayoutRectChanged modifier can debounce and throttle the callback as the use case demands, which helps with performance when it’s added onto an item in a LazyColumn or LazyRow.

This new API unlocks features that depend on a composable's visibility on screen. Compose 1.9 will add higher-level abstractions to this low-level API to simplify common use cases.
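A minimal sketch of attaching it to list items; the throttle and debounce parameter names and their millisecond units are assumptions about the 1.8 signature, and the values are illustrative:

LazyColumn {
    items(100) { index ->
        Text(
            "Item $index",
            Modifier.onLayoutRectChanged(
                throttleMillis = 100,  // at most one callback per 100 ms
                debounceMillis = 100   // wait for layout to settle
            ) { bounds ->
                // React to the item's new on-screen bounds, e.g. to track
                // when it becomes visible.
                Log.d("Visibility", "item $index bounds: $bounds")
            }
        )
    }
}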

Animate composable bounds

Last year we introduced shared element transitions, which smoothly animate content in your apps. The 1.8 Animation module graduates LookaheadScope to stable, brings numerous performance and stability improvements, and adds a new modifier, animateBounds. When used inside a LookaheadScope, this modifier automatically animates its composable's size and position on screen when those change:

Box(
  Modifier
    .width(if(expanded) 180.dp else 110.dp)
    .offset(x = if (expanded) 0.dp else 100.dp)
    .animateBounds(lookaheadScope = this@LookaheadScope)
    .background(Color.LightGray, shape = RoundedCornerShape(12.dp))
    .height(50.dp)
) {
  Text("Layout Content", Modifier.align(Alignment.Center))
}
a moving image depicting animate composable bounds

Increased API stability

Jetpack Compose has utilized @Experimental annotations to mark APIs that are liable to change across releases, for features that require more than a library's alpha period to stabilize. We have heard your feedback that a number of features have been marked as experimental for some time with no changes, contributing to a sense of instability. We are actively looking at stabilizing existing experimental APIs—in the UI and Foundation modules, we have reduced the experimental APIs from 172 in the 1.7 release to 70 in the 1.8 release. We plan to continue this stabilization trend across modules in future releases.

Deprecation of contextual flow rows and columns

As part of the work to reduce experimental annotations, we identified APIs added in recent releases that are less than optimal solutions for their use cases. This has led to the decision to deprecate the experimental ContextualFlowRow and ContextualFlowColumn APIs, added in Foundation 1.7. If you need the deprecated functionality, our recommendation for now is to copy over the implementation and adapt it as needed, while we work on a plan for future components that can cover these functionalities better.

The related APIs FlowRow and FlowColumn are now stable; however, the new overflow parameter that was added in the last release is now deprecated.

Improvements and fixes for core features

In response to developer feedback, we have shipped some particularly in-demand features and bug fixes in our core libraries:

Get started!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker - they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better.

Happy composing!

Get ready for Google I/O: Program lineup revealed

Posted by the Google I/O team

The Google I/O agenda is live. We're excited to share Google’s biggest announcements across AI, Android, Web, and Cloud on May 20-21. Tune in to learn how we’re making development easier so you can build faster.

We'll kick things off with the Google Keynote at 10:00 AM PT on May 20th, followed by the Developer Keynote at 1:30 PM PT. This year, we're livestreaming two days of sessions directly from Mountain View, bringing more of the I/O experience to you, wherever you are.

Here’s a sneak peek of what we’ll cover:

    • AI advancements: Learn how Gemini models enable you to build new applications and unlock new levels of productivity. Explore the flexibility offered by options like our Gemma open models and on-device capabilities.
    • Build excellent apps, across devices with Android: Crafting exceptional app experiences across devices is now even easier with Android. Dive into sessions focused on building intelligent apps with Google AI and boosting your productivity, alongside creating adaptive user experiences and leveraging the power of Google Play.
    • Powerful web, made easier: Exciting new features continue to accelerate web development, helping you to build richer, more reliable web experiences. We’ll share the latest innovations in web UI, Baseline progress, new multimodal built-in AI APIs using Gemini Nano, and how AI in DevTools streamlines building innovative web experiences.

Plan your I/O

Join us online for livestreams May 20-21, followed by on-demand sessions and codelabs on May 22. Register today and explore the full program for sessions like these:

We're excited to share what's next and see what you build!
