Android Studio supports Gemma 4: our most capable local model for agentic coding

Posted by Matthew Warner, Google Product Manager


Every developer's AI workflow and needs are unique, and it's important to be able to choose how AI helps your development. In January, we introduced the ability to choose any local or remote AI model to power AI functionality in Android Studio, and today, we're announcing the availability of Gemma 4 for AI coding assistance in Android Studio. This new local model trained on Android development provides the best of both worlds: the privacy and cost-efficiency of on-device processing alongside state-of-the-art reasoning and tool-calling capabilities.

AI assistance, locally delivered

By running locally on your machine, Gemma 4 gives you AI code assistance that doesn't require an internet connection or an API key for its core operations. Key benefits include:

  • Privacy and security: Your code stays on your machine. Gemma 4 processes all Agent Mode requests locally, making it an ideal choice for developers working with data privacy requirements or in secure corporate environments.
  • Cost efficiency: Run complex agentic workflows without worrying about hitting quotas. Gemma 4 is optimized to run efficiently on modern development hardware, utilizing local GPU and RAM to provide snappy, responsive assistance.
  • Offline availability: Use the agent to write code even when you don’t have an internet connection.
  • State-of-the-art reasoning: Gemma 4 delivers best-in-class reasoning, capable of complex multi-step coding tasks in Agent Mode.

Powerful agentic coding

Gemma 4 was trained for Android development with agentic tool calling capabilities. When you select Gemma 4 as your local model, you can leverage Agent Mode for a variety of development use cases, such as:

  • Designing new features: Developers can ask the agent to build a new feature or an entire app with commands like “build a calculator app,” and the agent will not only generate the UI code but also follow Android best practices, such as writing in Kotlin and using Jetpack Compose.
  • Refactoring: You can give high-level commands such as "Extract all hardcoded strings and migrate them to strings.xml." The agent will scan your codebase, identify instances requiring changes, and apply the edits across multiple files simultaneously.
  • Bug fixing and build resolution: If a project fails to build or has persistent lint errors, you can prompt the agent to "Build my project and fix any errors." The agent will navigate to the offending code and iteratively apply fixes until the build is successful.



Recommended hardware requirements

The 26B MoE model is recommended for Android app developers whose machines meet its minimum hardware requirements. The total RAM figures below include both Android Studio and Gemma.

Model            Total RAM needed   Storage needed
Gemma E2B        8 GB               2 GB
Gemma E4B        12 GB              4 GB
Gemma 26B MoE    24 GB              17 GB
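To make the table concrete, here is an illustrative (unofficial) helper that picks the largest variant whose total RAM requirement fits a given machine; the numbers are taken directly from the table above:

```kotlin
// Illustrative sketch, not an official tool: choose a Gemma variant by RAM.
data class GemmaVariant(val name: String, val ramGb: Int, val storageGb: Int)

val variants = listOf(
    GemmaVariant("Gemma E2B", 8, 2),
    GemmaVariant("Gemma E4B", 12, 4),
    GemmaVariant("Gemma 26B MoE", 24, 17),
)

// Largest variant whose total RAM requirement fits, or null if none fits.
fun recommend(availableRamGb: Int): GemmaVariant? =
    variants.filter { it.ramGb <= availableRamGb }.maxByOrNull { it.ramGb }

println(recommend(16)?.name)  // a 16 GB machine fits E4B but not the 26B MoE
```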

Get started

To get started, ensure you have the latest version of Android Studio installed.
  1. Install an LLM provider, such as LM Studio or Ollama, on your local computer.
  2. In Settings > Tools > AI > Model Providers, add your LM Studio or Ollama instance.
  3. Download the Gemma 4 model from Ollama or LM Studio, referring to the hardware requirements above for model size selection.
  4. In Agent Mode, select Gemma 4 as your active model.

For a detailed walkthrough on configuration, check out the official documentation on how to use a local model.

We are excited to see how Gemma 4 enables more private, secure, and powerful development workflows. As always, your feedback is essential as we continue to refine the AI experience in Android Studio. If you find a bug, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, YouTube, or X. Happy coding!


Announcing Gemma 4 in the AICore Developer Preview

Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer



At Google, we’re committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we’re thrilled to announce the release of our latest state-of-the-art open model: Gemma 4.

These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you’ll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference.

You can get early access to this model today through the AICore Developer Preview.




















Select the Gemini Nano 4 Fast model in the Developer Preview UI to see its blazing fast inference speed in action before you write any code

Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio. To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes:

  • E4B: Designed for higher reasoning power and complex tasks.
  • E2B: Optimized for maximum speed (3x faster than the E4B model!) and lower latency.

The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with improved capabilities including:

  • Reasoning: Chain-of-thought prompts and conditional statements can now be expected to return higher quality results. For example: “Determine if the following comment for a discussion thread passes the community guidelines. The comment does not pass the community guidelines if it contains one or more of these reason_for_flag: profanity, derogatory language, hate speech. If the comment passes the community guidelines, return {true}. Otherwise, return {false, reason_for_flag}.”
  • Math: With better math skills, the model can now more accurately answer questions. For example: “If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10,000 over the course of a year?”
  • Time understanding: The model is now more capable when reasoning about time, making it more accurate for use cases that involve calendars, reminders, and alarms. For example: “The event is at 6PM on August 18th, and a reminder should be sent out 10 hours before the event. Return the time and date the reminder should be sent.”
  • Image understanding: Use cases that involve OCR (Optical Character Recognition) - such as chart understanding, visual data extraction, and handwriting recognition - will now return more accurate results.
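As a sanity check on the arithmetic the model is expected to perform in the math and time examples above, here is a short Kotlin sketch (the year is arbitrary, since the prompt doesn't specify one):

```kotlin
import java.time.LocalDateTime
import kotlin.math.abs

// Reminder time: the event time minus the lead time.
fun reminderTime(event: LocalDateTime, leadHours: Long): LocalDateTime =
    event.minusHours(leadHours)

// Savings contribution: goal split evenly across the year's paychecks.
fun perPaycheck(goal: Double, paychecksPerYear: Int): Double =
    goal / paychecksPerYear

val event = LocalDateTime.of(2026, 8, 18, 18, 0)  // 6 PM on August 18th
println(reminderTime(event, 10))   // 2026-08-18T08:00 — same day, 8 AM
println(perPaycheck(10_000.0, 26)) // ≈ 384.62 per paycheck
```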

Join the Developer Preview today to download these preview models and start building next-generation features right away.

Start building with Gemma 4

Start testing the model

You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we’ve made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We’ve introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing.

// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent("If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10k over the course of a year? Return only the amount.")
} else {
    // Handle the case where the preview model is not available
    // (e.g., log the status)
}
}

What to expect during the Developer Preview

The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps. 

We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4.

The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We’ll provide support for more devices in the future.

How to get started

Ready to see what Gemma 4 can do for your users?

  1. Opt-in: Sign up for the AICore Developer Preview.
  2. Download: Once opted in, you can trigger the download of the latest Gemma 4 models directly to your supported test device.
  3. Build: Update your ML Kit implementation to target the new models and start building in Android Studio.


Gemma 4: The new standard for local agentic intelligence on Android

Posted by Matthew McCullough, VP of Product Management Android Development



Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities.

Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars:

  • Local-first agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development computer.
  • On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware.

Coding with Gemma 4 in Android Studio

When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine.

Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively.

Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started.

Prototyping with Gemma 4 on-device

Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. This model is up to 4x faster than the previous version and uses up to 60% less battery.

To make it as easy as possible to preview and prototype with Gemma 4 E2B and E4B models directly on AICore-supported devices, we’re launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API.

Prepare your apps for the launch of Gemini Nano 4 on new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here.

Local agentic intelligence with Gemma 4

Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities across your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma’s open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost-effective manner. We're updating Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case.

We can’t wait to see what you build!


Increase Guidance and Control over Agent Mode with Android Studio Panda 3


Posted by Matt Dyor, Senior Product Manager



Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps.

Whether you're bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI Agent in Android Studio to learn your specific practices and giving you granular control over its permissions.

Lastly, in addition to agent skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars.

Here’s a deep dive into what’s new:

Agent skills

Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote 3rd party LLMs you integrate into the agent framework in Android Studio.

You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase. For example, you could create a custom “code review” skill tailored to your organization's coding standards, or a custom skill that gives the agent more information about using an in-house library.

Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet—ask your agent to help you build a new skill and it will guide you through the details!

Manually Trigger Agent Skill in Android Studio

Getting Started

To build a skill for your project, do the following:

  • Create a .skills directory inside your project's root folder.
  • Place a SKILL.md file inside this new directory.
  • Add a name and description to the file to define your custom workflow, and your skill is ready.
  • Optionally include scripts, assets, and references to provide even more guidance to your agent.
Agent skills in Android Studio
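Assuming the front-matter convention used by other agent-skill formats (the exact schema is in the official documentation), a minimal SKILL.md for the steps above might look like this sketch, with an illustrative name and rules:

```markdown
---
name: code-review
description: Review Kotlin changes against our team's coding standards.
---

# Code review skill

- Flag hardcoded user-facing strings; suggest moving them to strings.xml.
- Require KDoc on new public functions.
- Prefer Jetpack Compose for new UI code.
```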

Manage permissions for Agent Mode

You control your codebase, and you can now be more deliberate with which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you.

When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission. We know that 'approval fatigue' is a real risk in AI workflows—when a tool asks for permission too often, it’s easy to start clicking 'Allow' without fully reviewing the action. By offering granular 'Always Allow' rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off.

Agent Permissions

Agent permissions are intuitive to set up and use. For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off.

For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent.



Agent Shell Sandbox

Empty Car App Library App template

We’re making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully.

Now, you can accelerate your development with the new “Empty Car App Library App” template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road.

Getting Started

To use the new template:

  • Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project).
  • Search for or select the Empty Car App Library App template.
  • Name your app and click Finish to generate your driving-optimized app.



Empty Car App Library App template

Android Studio Panda releases 

Panda 3 builds off last month’s AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project Flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies.

Get started

Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today.

As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!


Essential Claude Code Skills for Mobile Developers


Claude Code skills are packaged folders of instructions, scripts and resources that teach the AI coding agent how to perform specialised tasks.

Each skill has a SKILL.md file containing YAML metadata and step-by-step instructions. Claude scans the names and descriptions of installed skills and loads the full content only when your request matches.

Because skills are part of an open standard adopted by multiple agent frameworks, a single skill can work across Claude Code, OpenAI Codex, Cursor and other tools.


Installing Claude Code skills

Skills can be installed globally or per project. Personal skills go into ~/.claude/skills/ and become available across all your projects, while project specific skills live in .claude/skills/ inside your repository so everyone who clones the repo can use them.

Most skills are plain Markdown folders: clone the repository and copy the skill directory into the appropriate folder. For example, the Swift Development skill’s README shows that you can install it by copying the Swift Development skill directory into ~/.claude/skills/ and then invoking it via /swift-development.
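In practice the "clone and copy" install is a couple of commands. This sketch creates a stand-in skill directory instead of actually cloning (normally you would `git clone` the skill repository first), so only the paths are meaningful:

```shell
# Stand-in for a cloned skill repo (normally: git clone <skill repo URL>)
mkdir -p swift-development
printf -- '---\nname: swift-development\ndescription: Swift helper skill\n---\n' \
    > swift-development/SKILL.md

# Personal install: available in every project
mkdir -p "$HOME/.claude/skills"
cp -r swift-development "$HOME/.claude/skills/"

# The skill can now be invoked, e.g. via /swift-development
ls "$HOME/.claude/skills/swift-development"
```

For a project-specific install, copy the same directory into `.claude/skills/` inside the repository instead.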


Recommended skills for mobile developers

Below is a curated list of Claude Code skills that are particularly valuable for mobile developers working with native iOS, Android or Kotlin Multiplatform (KMP). The list covers a range of design and development tasks. Each skill is linked to its repository or marketplace page along with a brief description.

Swift Development
https://github.com/hmohamed01/swift-development
This comprehensive skill turns Claude into a competent Swift engineer. It packages patterns for SwiftUI navigation, concurrency (async/await, actors) and testing frameworks, along with helper scripts for building, running tests, formatting code and managing simulators. Installation is simple: copy the Swift Development directory into your personal skills folder and invoke it with /swift-development.

iOS Mobile Design
https://mcpmarket.com/tools/skills/ios-mobile-design
For designers and developers building polished iOS apps, this skill provides expert guidance on Apple’s Human Interface Guidelines. It teaches Claude how to apply SwiftUI layout systems, dynamic type, SF Symbols and modern navigation patterns so interfaces feel native to iPhone, iPad and visionOS. Key features include accessible design with dynamic type, responsive layouts, Material You style theming and Dark Mode optimisation.

iOS App Builder
https://mcpmarket.com/tools/skills/ios-app-builder
This skill transforms Claude into a senior iOS developer who builds and ships native apps using a command line only workflow. It uses Xcode’s xcodebuild and simctl tools to scaffold projects, run tests and launch apps without opening Xcode. The skill emphasises a “prove, don’t promise” methodology: it verifies logic through tests and simulator screenshots, manages the full app life cycle from scaffolding to App Store readiness and includes features like performance profiling and memory leak detection.

SwiftUI agent skill
https://github.com/twostraws/swiftui-agent-skill
A SwiftUI skill created by Paul Hudson (Hacking with Swift). It helps identify and fix common mistakes AI agents make when writing SwiftUI, such as modern API usage, performance, accessibility, and more.

Android Development
https://github.com/dpconde/claude-android-skill
Designed for modern Android projects, this skill teaches Claude clean architecture with separate UI, domain and data layers. It covers Jetpack Compose patterns and best practices, multi module project structures, offline first architecture using Room and reactive streams, dependency injection with Hilt and comprehensive testing strategies. After cloning the repository into your skills folder, you can ask the agent to scaffold features, create Compose screens or set up repositories following Android best practices.

Android Kotlin Development
https://fastmcp.me/skills/details/241/android-kotlin-development
This skill focuses on building native Android apps with Kotlin. The description explains that it covers Model-View-ViewModel architecture with Jetpack, Compose for declarative user interfaces, Retrofit for API calls, Room for local storage and navigation patterns. It includes detailed instructions and code samples for models, API services and MVVM view models, making it a practical guide for Kotlin developers.

Mobile Android Design
https://fastmcp.me/skills/details/644/mobile-android-design
When you need to design Android interfaces that follow Google’s Material Design 3 guidelines, this skill is indispensable. It teaches Claude how to build adaptive layouts with Jetpack Compose, implement navigation patterns like bottom navigation and drawers, use dynamic colour and Material You theming, and create accessible UI for phones, tablets and foldables. It is especially helpful for modernising existing apps or ensuring new designs comply with Material Design principles.

Jetpack Compose Expert
https://github.com/aldefy/compose-skill
This skill provides Claude with real source code and extensive reference guides for Jetpack Compose. Without it, AI tools often misuse remember, generate inefficient recompositions or misorder modifiers; with the skill, Claude picks the right state primitive, applies stability annotations and orders modifiers correctly. The reference guides cover state management, view composition, performance, navigation, animation, lists, side effects, modifiers, theming, accessibility and deprecated patterns. Installation requires cloning the repository and copying the Jetpack Compose expert skill directory into your skills folder; Claude will automatically activate it when you mention Compose or related API names.
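To make the remember pitfall concrete, here is a non-runnable Compose sketch (imports and app setup omitted; Counter is a hypothetical composable) of the mistake the skill is designed to prevent:

```kotlin
@Composable
fun Counter() {
    // Wrong: a plain mutableStateOf is recreated on every recomposition,
    // so the count silently resets to 0.
    // val count = mutableStateOf(0)

    // Right: remember keeps the state object alive across recompositions.
    var count by remember { mutableStateOf(0) }

    Button(onClick = { count++ }) {
        Text("Clicked $count times")
    }
}
```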

Kotlin Multiplatform (KMP) Abstraction Guide
https://fastmcp.me/skills/details/2190/kotlin-multiplatform
KMP projects often involve tough decisions about what to share between platforms and what to keep platform specific. This skill guides those decisions by providing a platform abstraction decision tree. It explains when to use commonMain, jvmAndroid or expect/actual patterns, covers targets like Android, JVM/Desktop and iOS, and offers triggers for questions such as "should I share this ViewModel?". It helps developers place code correctly and prepare for future web or WebAssembly targets.
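The expect/actual pattern those decisions revolve around looks like this structural sketch (three separate source sets shown in one listing, so it is not compilable as-is):

```kotlin
// commonMain/Platform.kt — shared code declares what it needs
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}"

// androidMain/Platform.kt — Android supplies the implementation
actual fun platformName(): String = "Android"

// iosMain/Platform.kt — iOS supplies its own
actual fun platformName(): String = "iOS"
```

Shared code in commonMain can call platformName() without knowing which implementation it will get; each target supplies its own actual at build time.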


AI Hasn’t Made Developers Faster, It’s Made Their Review Queues Longer!


A developer uses Copilot to write 30 lines of code in 10 minutes, but then spends 45 minutes reviewing it – checking for bugs, edge cases, and code that doesn’t match team standards.

The time saved during writing gets completely eaten up during validation. And this is exactly what happens repeatedly across teams trying to adopt AI at scale.

At the Pragmatic Summit, Laura Tacho (CTO at DX) shared some interesting research on AI in coding:

Almost 93% of developers use AI assistants every month, and about 27% of production code now comes from AI. Yet, despite all this, overall productivity has barely budged – staying around a 10% boost since AI tools arrived.

AI adoption is everywhere…

The numbers are clear:

  • 92.6% of developers use AI coding assistants monthly
  • 75% use them weekly
  • 26.9% of production code contains AI-authored segments

84% of developers use AI tools, according to Stack Overflow’s 2025 survey. Adoption is now standard – the numbers are probably even bigger now.

…Yet work isn’t moving any quicker

The gap between adoption and productivity appears first as a trust problem.

46% of developers don’t fully trust the output, and that skepticism has a reason: reviewing AI-generated code frequently requires more effort than reviewing human-written code.

The DX AI Measurement Framework (published by vendor DX but structured as an industry standard) identifies this directly:

Code generated by AI may be less intuitive for human developers to understand, potentially creating bottlenecks when issues arise or modifications are needed.

This is why productivity hasn’t jumped. Developers might write code faster with AI, but they end up spending the same time checking, fixing, and making sense of what AI produces. In the end, the overall development cycle doesn’t get any shorter.

Sonar’s research confirms the pattern at scale: 42% of committed code now includes AI assistance, yet 96% of developers say they don’t fully trust AI-generated code. And this is exactly what we see: output is everywhere, but the confidence in it is not.

Why has productivity stalled?

That 10% productivity bump comes down to a workflow mismatch.

Teams started using AI to write code faster, but didn’t adjust how they review, test, or integrate it. In other words, writing got quicker, but everything that comes after stayed just as slow.

The DX research notes a broader, relevant point: most organizations see their biggest bottlenecks not in code generation, but “in the outer loop, or in human factors like collaboration, alignment, and the ability to do deep, focused work.”

AI addresses one specific problem, and that’s code-writing speed. But, as we can see, the overall development cycle has other constraints.

Teams that actually see productivity gains from AI usually do two things: they figure out exactly where AI adds value, and they tweak their workflows to make the most of it. Teams that just deploy AI without changing how they work? They get adoption, but no real boost in productivity.

The 10% productivity ceiling sticks because the time spent validating AI-written code cancels out the speed gains. Most teams focus on writing faster, but few have optimized for faster validation.

It’s an obvious obstacle, but maybe also an opportunity.

The post AI Hasn’t Made Developers Faster, It’s Made Their Review Queues Longer! appeared first on ShiftMag.
