
Accelerating your insights with faster, smarter monetization data and recommendations


Posted by Phalene Gowling, Product Manager, Google Play

To build a thriving business on Google Play, you need more than just data – you need a clear path to action. Today, we’re announcing a suite of upgrades to the Google Play Console and beyond, giving you greater visibility into your financial performance and specific, data-backed steps to improve it.

From new, actionable recommendations to more granular sales reporting, here’s how we’re helping you maximize your ROI.

New: Monetization insights and recommendations
Launch Status: Rolling out today

The Monetize with Play overview page is designed to be your ultimate command center. Today, we are upgrading it with a new dynamic insights section that gives you a clearer view of your revenue drivers.


This new insights carousel highlights the visible and invisible value Google Play delivers to your bottom line – including recovered revenue. You can now track these critical signals alongside your core performance metrics:

  • Optimize conversion: Track your new Cart Conversion Rate.
  • Reduce churn: Track cancelled subscriptions over time.
  • Optimize pricing: Monitor your Average Revenue Per Paying User (ARPPU); a worked example follows below.
  • Increase buyer reach: Analyze how much of your engaged audience converts to buyers.
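
As a quick worked example of the pricing metric (hypothetical numbers, not from this announcement): ARPPU is simply revenue divided by paying users, so $10,000 in monthly revenue from 500 paying users yields an ARPPU of $20.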

But we aren’t just showing you the data – we’re helping you act on it. Starting today, Play Console will surface customized, actionable recommendations. If there are relevant opportunities – for example, a high churn rate – we will suggest specific, high-impact steps to help you reach your next monetization goal. Recommendations include effort levels and estimated ROI (where available), helping you prioritize your roadmap based on actual business value. Learn more.



Granular visibility: Sales Channel reporting
Launch Status: Recently launched

We recently rolled out new Sales Channel data in your financial reporting. This allows you to attribute revenue to specific surfaces – including your app, the Play Store, and platforms like Google Play Games on PC.

For native-PC game developers and media & entertainment subscription businesses alike, this granularity allows you to calculate the precise ROI of your cross-platform investments and understand exactly which channels are driving your growth. Learn more.
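
To make that concrete with hypothetical numbers (not from this announcement): if Google Play Games on PC drives $5,000 in monthly revenue against $1,000 in cross-platform investment, the channel ROI is ($5,000 − $1,000) / $1,000 = 400%.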



Operational efficiency: The Orders API
Launch Status: Available now

The Orders API provides programmatic access to one-time and recurring order transaction details. If you haven't integrated it yet, this API allows you to ingest real-time data directly into your internal dashboards for faster reconciliation and improved customer support.
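
As a rough sketch of that kind of ingestion, the Kotlin snippet below fetches a single order from the Play Developer API. The v3 orders endpoint path and the pre-obtained OAuth access token are assumptions for illustration, not details from this announcement:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Hypothetical sketch: pull one order's transaction details so they can be
// pushed into an internal dashboard for reconciliation. The endpoint path
// and token handling are assumptions, not from the announcement.
fun fetchOrder(packageName: String, orderId: String, accessToken: String): String {
    val uri = URI.create(
        "https://androidpublisher.googleapis.com/androidpublisher/v3/" +
            "applications/$packageName/orders/$orderId"
    )
    val request = HttpRequest.newBuilder(uri)
        .header("Authorization", "Bearer $accessToken")
        .GET()
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    return response.body() // JSON with one-time or recurring order details
}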

Feedback so far has been overwhelmingly positive:
Level Infinite (Tencent) says the API “works so well that we want every app to use it.”

Continuous improvements towards objective-led reporting 

You’ve told us that the biggest challenge isn't just accessing data, but connecting the dots across different metrics to see the full picture. We’re enhancing our reporting to go beyond data dumps and provide straightforward, actionable insights that help you reach your business objectives faster.

Our goal is to create a more cohesive product experience centered around your objectives. By shifting from static reporting to dynamic, goal-oriented tools, we’re making it easier to track and optimize for revenue, conversion rates, and churn. These updates are just the beginning of a transformation designed to help you turn data into measurable growth.


The Embedded Photo Picker


Posted by Roxanna Aliabadi Walker, Product Manager and Yacine Rezgui, Developer Relations Engineer

The Embedded Photo Picker: A more seamless way to privately request photos and videos in your app


Get ready to enhance your app's user experience with an exciting new way to use the Android photo picker! The new embedded photo picker offers a seamless and privacy-focused way for users to select photos and videos, right within your app's interface. Now your app can get all the same benefits available with the photo picker, including access to cloud content, integrated directly into your app’s experience.

Why embedded?

We understand that many apps want to provide a highly integrated and seamless experience for users when selecting photos or videos. The embedded photo picker is designed to do just that, allowing users to quickly access their recent photos without ever leaving your app. They can also explore their full library in their preferred cloud media provider (e.g., Google Photos), including favorites, albums and search functionality. This eliminates the need for users to switch between apps or worry about whether the photo they want is stored locally or in the cloud.

Seamless integration, enhanced privacy

With the embedded photo picker, your app doesn't need access to the user's photos or videos until they actually select something. This means greater privacy for your users and a more streamlined experience. Plus, the embedded photo picker provides users with access to their entire cloud-based media library, whereas the standard photo permission is restricted to local files only.

The embedded photo picker in Google Messages

Google Messages showcases the power of the embedded photo picker. Here's how they've integrated it:
  • Intuitive placement: The photo picker sits right below the camera button, giving users a clear choice between capturing a new photo or selecting an existing one.
  • Dynamic preview: Immediately after a user taps a photo, they see a large preview, making it easy to confirm their selection. If they deselect the photo, the preview disappears, keeping the experience clean and uncluttered.
  • Expand for more content: The initial view is simplified, offering easy access to recent photos. However, users can easily expand the photo picker to browse and choose from all photos and videos in their library, including cloud content from Google Photos.
  • Respecting user choices: The embedded photo picker only grants access to the specific photos or videos the user selects, meaning the app can stop requesting the photo and video permissions altogether. This also saves Messages from needing to handle situations where users grant only limited access to photos and videos.


Implementation

Integrating the embedded photo picker is made easy with the Photo Picker Jetpack library.  

Jetpack Compose

First, include the Jetpack Photo Picker library as a dependency.

implementation("androidx.photopicker:photopicker-compose:1.0.0-alpha01")

The EmbeddedPhotoPicker composable function provides a mechanism to include the embedded photo picker UI directly within your Compose screen. This composable creates a SurfaceView which hosts the embedded photo picker UI. It manages the connection to the EmbeddedPhotoPicker service, handles user interactions, and communicates selected media URIs to the calling application. 

@Composable
fun EmbeddedPhotoPickerDemo() {
    // We keep track of the list of selected attachments
    var attachments by remember { mutableStateOf(emptyList<Uri>()) }

    val coroutineScope = rememberCoroutineScope()
    // We hide the bottom sheet by default but we show it when the user clicks on the button
    val scaffoldState = rememberBottomSheetScaffoldState(
        bottomSheetState = rememberStandardBottomSheetState(
            initialValue = SheetValue.Hidden,
            skipHiddenState = false
        )
    )

    // Customize the embedded photo picker
    val photoPickerInfo = EmbeddedPhotoPickerFeatureInfo
        .Builder()
        // Limit the selection to 5 items
        .setMaxSelectionLimit(5)
        // Use ordered selection (each selected item shows its index in the photo picker)
        .setOrderedSelection(true)
        // Set the accent color (red in this case, otherwise it follows the device's accent color)
        .setAccentColor(0xFF0000)
        .build()

    // The embedded photo picker state will be stored in this variable
    val photoPickerState = rememberEmbeddedPhotoPickerState(
        onSelectionComplete = {
            coroutineScope.launch {
                // Hide the bottom sheet once the user has clicked on the done button inside the picker
                scaffoldState.bottomSheetState.hide()
            }
        },
        onUriPermissionGranted = {
            // We update our list of attachments with the new Uris granted
            attachments += it
        },
        onUriPermissionRevoked = {
            // We update our list of attachments with the Uris revoked
            attachments -= it
        }
    )

    SideEffect {
        val isExpanded = scaffoldState.bottomSheetState.targetValue == SheetValue.Expanded

        // We show/hide the embedded photo picker to match the bottom sheet state
        photoPickerState.setCurrentExpanded(isExpanded)
    }

    BottomSheetScaffold(
        topBar = {
            TopAppBar(title = { Text("Embedded Photo Picker demo") })
        },
        scaffoldState = scaffoldState,
        sheetPeekHeight = if (scaffoldState.bottomSheetState.isVisible) 400.dp else 0.dp,
        sheetContent = {
            Column(Modifier.fillMaxWidth()) {
                // We render the embedded photo picker inside the bottom sheet
                EmbeddedPhotoPicker(
                    state = photoPickerState,
                    embeddedPhotoPickerFeatureInfo = photoPickerInfo
                )
            }
        }
    ) { innerPadding ->
        Column(Modifier.padding(innerPadding).fillMaxSize().padding(horizontal = 16.dp)) {
            Button(onClick = {
                coroutineScope.launch {
                    // We expand the bottom sheet, which will trigger the embedded picker to be shown
                    scaffoldState.bottomSheetState.partialExpand()
                }
            }) {
                Text("Open photo picker")
            }
            LazyVerticalGrid(columns = GridCells.Adaptive(minSize = 64.dp)) {
                // We render the image using the Coil library
                itemsIndexed(attachments) { index, uri ->
                    AsyncImage(
                        model = uri,
                        contentDescription = "Image ${index + 1}",
                        contentScale = ContentScale.Crop,
                        modifier = Modifier.clickable {
                            coroutineScope.launch {
                                // When the user clicks on the media from the app's UI, we deselect it
                                // from the embedded photo picker by calling the method deselectUri
                                photoPickerState.deselectUri(uri)
                            }
                        }
                    )
                }
            }
        }
    }
}

Views

First, include the Jetpack Photo Picker library as a dependency.

implementation("androidx.photopicker:photopicker:1.0.0-alpha01")

To add the embedded photo picker, you need to add an entry to your layout file. 

<view class="androidx.photopicker.EmbeddedPhotoPickerView"
    android:id="@+id/photopicker"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

And initialize it in your activity/fragment.

// We keep track of the list of selected attachments
private val _attachments = MutableStateFlow(emptyList<Uri>())
val attachments = _attachments.asStateFlow()

private lateinit var picker: EmbeddedPhotoPickerView
private var openSession: EmbeddedPhotoPickerSession? = null

val pickerListener = object : EmbeddedPhotoPickerStateChangeListener {
    override fun onSessionOpened(newSession: EmbeddedPhotoPickerSession) {
        openSession = newSession
    }

    override fun onSessionError(throwable: Throwable) {}

    override fun onUriPermissionGranted(uris: List<Uri>) {
        // We update our list of attachments with the newly granted Uris
        // (MutableStateFlow itself has no `+=` for its contents; update .value)
        _attachments.value += uris
    }

    override fun onUriPermissionRevoked(uris: List<Uri>) {
        // We update our list of attachments with the revoked Uris
        _attachments.value -= uris
    }

    override fun onSelectionComplete() {
        // Hide the embedded photo picker as the user is done with the photo/video selection
    }
}

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.main_view)
    
    // Add the embedded photo picker to a bottom sheet to allow dragging to display the full photo library

    picker = findViewById(R.id.photopicker)
    picker.addEmbeddedPhotoPickerStateChangeListener(pickerListener)
    picker.setEmbeddedPhotoPickerFeatureInfo(
        // Set a custom accent color
        EmbeddedPhotoPickerFeatureInfo.Builder().setAccentColor(0xFF0000).build()
    )
}

You can call different methods of EmbeddedPhotoPickerSession to interact with the embedded picker.

// Notify the embedded picker of a configuration change
openSession?.notifyConfigurationChanged(newConfig)

// Update the embedded picker to expand following a user interaction
openSession?.notifyPhotoPickerExpanded(/* expanded: */ true)

// Resize the embedded picker
openSession?.notifyResized(/* width: */ 512, /* height: */ 256)

// Show/hide the embedded picker (after a form has been submitted)
openSession?.notifyVisibilityChanged(/* visible: */ false)

// Remove unselected media from the embedded picker after they have been
// unselected from the host app's UI
openSession?.requestRevokeUriPermission(removedUris)

It's important to note that the embedded photo picker experience is available for users running Android 14 (API level 34) or higher with SDK Extensions 15+. Read more about photo picker device availability.
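
A minimal gate for this requirement might look like the following sketch. Checking the extension version against the Android 14 (UPSIDE_DOWN_CAKE) namespace is an assumption here; consult the device availability documentation for the authoritative check:

import android.os.Build
import android.os.ext.SdkExtensions

// Sketch: only offer the embedded picker on Android 14+ (API 34) with
// SDK Extensions 15+, per the availability note above.
fun isEmbeddedPhotoPickerLikelyAvailable(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE &&
        SdkExtensions.getExtensionVersion(Build.VERSION_CODES.UPSIDE_DOWN_CAKE) >= 15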

For enhanced user privacy and security, the system renders the embedded photo picker in a way that prevents any drawing or overlaying. This intentional design choice means that your UX should consider the photo picker's display area as a distinct and dedicated element, much like you would plan for an advertising banner.

If you have any feedback or suggestions, submit tickets to our
issue tracker.




How Automated Prompt Optimization Unlocks Quality Gains for ML Kit’s GenAI Prompt API


Posted by Chetan Tekur, PM at AI Innovation and Research, Chao Zhao, SWE at AI Innovation and Research, Paul Zhou, Prompt Quality Lead at GCP Cloud AI and Industry Solutions, and Caren Chang, Developer Relations Engineer at Android


To further help bring your ML Kit Prompt API use cases to production, we are excited to announce Automated Prompt Optimization (APO) targeting On-Device models on Vertex AI. Automated Prompt Optimization is a tool that helps you automatically find the optimal prompt for your use cases.

The era of On-Device AI is no longer a promise—it is a production reality. With the release of Gemini Nano v3, we are placing unprecedented language understanding and multimodal capabilities directly into the palms of users. Through the Gemini Nano family of models, we have wide coverage of supported devices across the Android Ecosystem. But for developers building the next generation of intelligent apps, access to a powerful model is only step one. The real challenge lies in customization: How do you tailor a foundation model to expert-level performance for your specific use case without breaking the constraints of mobile hardware?

In the server-side world, larger LLMs tend to be highly capable and require less domain adaptation. Even when adaptation is needed, more advanced options such as LoRA (Low-Rank Adaptation) fine-tuning can be feasible. However, the unique architecture of Android AICore prioritizes a shared, memory-efficient system model. This means that deploying custom LoRA adapters for every individual app comes with challenges on these shared system services.

But there is an alternate path that can be equally impactful. By leveraging Automated Prompt Optimization (APO) on Vertex AI, developers can achieve quality approaching fine-tuning, all while working seamlessly within the native Android execution environment. By focusing on superior system instruction, APO enables developers to tailor model behavior with greater robustness and scalability than traditional fine-tuning solutions.

Note: Gemini Nano V3 is a quality optimized version of the highly acclaimed Gemma 3N model. Any prompt optimizations that are made on the open source Gemma 3N model will apply to Gemini Nano V3 as well. On supported devices, ML Kit GenAI APIs leverage the nano-v3 model to maximize the quality for Android Developers.


Automated Prompt Optimization (APO)

APO treats the prompt not as static text, but as a programmable surface that can be optimized. It leverages server-side models (like Gemini Pro and Flash) to propose prompts, evaluate variations, and find the optimal one for your specific task. This process employs three technical mechanisms to maximize performance (a conceptual sketch follows the list):

  1. Automated Error Analysis: APO analyzes error patterns from training data to automatically identify specific weaknesses in the initial prompt.

  2. Semantic Instruction Distillation: It analyzes large sets of training examples to distill the "true intent" of a task, creating instructions that more accurately reflect the real data distribution.

  3. Parallel Candidate Testing: Instead of testing one idea at a time, APO generates and tests numerous prompt candidates in parallel to identify the global maximum for quality.
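
To make the loop concrete, here is a conceptual Kotlin sketch of parallel candidate testing. This is not the Vertex AI API: proposeCandidates and scoreOnTrainingSet are hypothetical stand-ins for the server-side proposal and evaluation steps described above.

import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

// Hypothetical stand-ins for the server-side steps (not a real API).
suspend fun proposeCandidates(prompt: String, data: List<Pair<String, String>>): List<String> = TODO()
suspend fun scoreOnTrainingSet(prompt: String, data: List<Pair<String, String>>): Double = TODO()

// Conceptual APO loop: propose many candidates from the current best prompt,
// score them against the training set in parallel, and keep the winner.
suspend fun optimizePrompt(
    initialPrompt: String,
    trainingSet: List<Pair<String, String>>, // (input, expected output)
    rounds: Int = 3,
): String = coroutineScope {
    var bestPrompt = initialPrompt
    var bestScore = scoreOnTrainingSet(bestPrompt, trainingSet)
    repeat(rounds) {
        val candidates = proposeCandidates(bestPrompt, trainingSet)
        // Parallel candidate testing: evaluate all candidates at once.
        val scored = candidates
            .map { candidate -> async { candidate to scoreOnTrainingSet(candidate, trainingSet) } }
            .awaitAll()
        val top = scored.maxByOrNull { it.second }
        if (top != null && top.second > bestScore) {
            bestPrompt = top.first
            bestScore = top.second
        }
    }
    bestPrompt
}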


Why APO Can Approach Fine-Tuning Quality

It is a common misconception that fine-tuning always yields better quality than prompting. For modern foundation models like Gemini Nano v3, prompt engineering can be impactful by itself:

  • Preserving general capabilities: Fine-tuning (PEFT/LoRA) forces a model's weights to over-index on a specific distribution of data. This often leads to "catastrophic forgetting," where the model gets better at your specific syntax but worse at general logic and safety. APO leaves the weights untouched, preserving the capabilities of the base model.

  • Instruction Following & Strategy Discovery: Gemini Nano v3 has been rigorously trained to follow complex system instructions. APO exploits this by finding the exact instruction structure that unlocks the model's latent capabilities, often discovering strategies that might be hard for human engineers to find. 

To validate this approach, we evaluated APO across diverse production workloads. Our validation has shown consistent 5-8% accuracy gains across various use cases. Across multiple deployed on-device features, APO provided significant quality lifts:



Use Case              | Task Type            | Task Description                                                  | Metric   | APO Improvement
Topic classification  | Text classification  | Classify a news article into topics such as finance, sports, etc | Accuracy | +5%
Intent classification | Text classification  | Classify a customer service query into intents                    | Accuracy | +8.0%
Webpage translation   | Text translation     | Translate a webpage from English to a local language              | BLEU     | +8.57%


Conclusion

The release of Automated Prompt Optimization (APO) marks a turning point for on-device generative AI. By bridging the gap between foundation models and expert-level performance, we are giving developers the tools to build more robust mobile applications. Whether you are just starting with Zero-Shot Optimization or scaling to production with Data-Driven refinement, the path to high-quality on-device intelligence is now clearer. Launch your on-device use cases to production today with ML Kit’s Prompt API and Vertex AI’s Automated Prompt Optimization. 



Trade-in mode on Android 16+


Supporting Longevity through Faster Diagnostics

Posted by Rachel S, Android Product Manager

Trade-in mode, a new feature on Android 16 and above, enables faster assessment of a factory-reset phone or tablet by bypassing the setup wizard.

Supporting device longevity

Android is committed to making devices last longer. With device longevity comes device circularity: phones and tablets traded in and resold. GSMA reported that secondhand phones have around 80-90% lower carbon emissions than new phones. The secondhand device market has grown substantially in both volume and value, a trend projected to continue.

Android 16 and above offers an easy way to access device information on any factory-reset phone or tablet through the new tradeinmode parameter, accessed via adb commands. This means you can view quality indicators of a phone or tablet while skipping each setup wizard step. Simply connect a phone or tablet with adb, and use tradeinmode commands to get information about the device.

Trade-in mode: What took minutes, now takes seconds

Faster trade-in processing – By bypassing the setup wizard, trade-in mode improves device trade-ins. The mode enables immediate access to the ‘health’ of a device, helping everyone along the secondhand value chain check the quality of devices that are wiped. We’ve already seen significant increases in the processing of secondhand Android devices!


Secure evaluation – To ensure the device information is only accessed in secure situations, the device must 1) be factory reset, 2) not have cellular service, 3) not have connectivity or a connected account, and 4) be running a non-debuggable build.

Get device health information with one command – You can view all the device information below with a single adb command from your workstation, adb shell tradeinmode getstatus, skipping the setup wizard:

  • Device information 

    • Device IMEI(s) 

    • Device serial number 

    • Device model, e.g., Pixel 9

    • Device brand, e.g., Google

    • Device manufacturer, e.g., Google

    • Device name, e.g., tokay

    • API level to ensure the correct OS version, e.g., launch_level: 34

  • Battery health 

    • Cycle count

    • Health

    • State, e.g., unknown, good, overheat, dead, over_voltage, unspecified_failure, cold, fair, not_available, inconsistent

    • Battery manufacturing date 

    • Date first used 

    • Serial number (to help provide indication of genuine parts, if OEM supported)

    • Part status, e.g., replaced, original, unsupported

  • Storage 

    • Useful lifetime remaining 

    • Total capacity 

  • Screen part status, e.g., replaced, original, unsupported

  • Foldables (number of times the device has been folded and total fold lifespan) 

  • Moisture intrusion 

  • UICC information, i.e., an indication of whether there is an eSIM or removable SIM, and the microchip ID for the SIM slot

  • Camera count and location, e.g., 3 cameras on front and 2 on back

  • Lock detection for select device locks

  • And the list keeps growing! Stay up to date here


Run your own tests – Trade-in mode enables you to run your own diagnostic commands or applications by entering the evaluation flow using tradeinmode evaluate. The device will automatically factory reset on reboot after evaluation mode to ensure nothing remains on the device. 


Ensure the device is running an approved build – Further, when connected to the internet, with a single command tradeinmode getstatus --challenge CHALLENGE you can test the device’s operating system (OS) authenticity to confirm the device is running a trusted build. If the build passes the test, you can be confident the diagnostics results are coming from a trusted OS.
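
Putting the documented commands together, a typical workstation session might look like this (CHALLENGE is a placeholder for your own challenge value):

adb shell tradeinmode getstatus                        # device health report
adb shell tradeinmode getstatus --challenge CHALLENGE  # verify the OS is a trusted build
adb shell tradeinmode evaluate                         # enter the evaluation flow to run your own tests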


There’s more – You can use commands to factory reset, power off, reboot, reboot directly into trade-in mode, check if trade-in mode is active, revert to the previous mode, and pause tests until system services are ready. 


Want to try it? Learn more about the developer steps and commands



Remote MCP Servers using .NET SDK – Integrating with custom data and APIs


A comprehensive example demonstrating how to build Model Context Protocol (MCP) servers using the .NET SDK, showcasing integration with custom data sources and APIs. This project implements a weather forecast service as an example of how to expose your own APIs through MCP tools.

Overview

This project demonstrates how to create remote MCP servers that can be consumed by AI applications (like Claude Desktop, VS Code Copilot, etc.) to extend their capabilities with custom tools and data sources. The example implementation includes a weather forecast service that showcases the integration pattern.

MCP servers really shine when they’re connected to existing APIs or services, allowing clients to query real, live data. There’s an expanding ecosystem of MCP servers that can already be used by clients, including tools we rely on daily like Git, GitHub, local filesystem, etc.

With that in mind, let’s enhance our MCP server by wiring it up to an API, accepting query parameters, and returning data-driven responses.

What is MCP?

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). It enables AI assistants to:

  • Access external data sources
  • Execute custom tools and functions
  • Integrate with your own APIs and services
  • Maintain context across multiple interactions

Features

  • HTTP Transport Support – Expose MCP servers via HTTP endpoints for remote access
  • Stdio Transport Support – Support for standard input/output communication
  • Custom Tool Integration – Easy integration of your own APIs and services
  • Health Monitoring – Built-in health check endpoints
  • Stateless Operation – Designed for scalable, stateless deployments
  • .NET 10.0 – Built on the latest version of .NET
  • Structured Logging – Comprehensive logging for debugging and monitoring

Architecture

The project follows a clean architecture pattern:

┌─────────────────┐
│   MCP Client    │ (Claude Desktop, VS Code, etc.)
└────────┬────────┘
         │ HTTP/SSE
         ▼
┌─────────────────┐
│  ASP.NET Core   │
│   Web Host      │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  MCP Protocol   │
│     Layer       │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│   MCP Tools     │ (Weather, Ping, etc.)
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Your APIs &    │
│  Data Sources   │
└─────────────────┘

Prerequisites

  • .NET 10.0 SDK (the project targets .NET 10.0)

Project Structure

remote-MCP-servers-using-dotnet-sdk-integrating-with-our-own-data-or-apis/
├── src/
│   └── McpServer/
│       ├── McpServer/
│       │   ├── Program.cs              # Application entry point
│       │   ├── appsettings.json        # Configuration settings
│       │   ├── McpServerTools.cs       # MCP tool implementations
│       │   └── Services/               # Business logic services
│       │       └── WeatherService.cs   # Weather API integration
│       └── McpServer.sln
├── docs/                               # Additional documentation
├── README.md                           # This file
└── LICENSE

Development

This section explains how to create custom MCP tools and integrate them with your own APIs and data sources.

Creating Custom MCP Tools

MCP tools are the bridge between AI clients and your backend services. Each tool represents a capability that AI assistants can invoke.

Tool Architecture

┌──────────────────┐
│   MCP Client     │ (Claude, VS Code, etc.)
└────────┬─────────┘
         │ "Get weather for Paris"
         ▼
┌──────────────────┐
│ McpServerTools   │ [McpServerTool] decorated methods
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ Business Service │ IWeatherForecastService
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│  External API    │ Weather API, Database, etc.
└──────────────────┘

Step 1: Define Your Tool Class

Create a class decorated with [McpServerToolType]:

[McpServerToolType]
public sealed class McpServerTools
{
    private readonly IWeatherForecastService _weatherForecastService;
    private readonly ILogger<McpServerTools> _logger;

    public McpServerTools(
        IWeatherForecastService weatherForecastService, 
        ILogger<McpServerTools> logger)
    {
        _weatherForecastService = weatherForecastService;
        _logger = logger;
    }
}

Key Points:

  • Use [McpServerToolType] to mark the class as containing MCP tools
  • Inject services via constructor (supports standard .NET DI)
  • Follow .NET dependency injection patterns

Step 2: Create Tool Methods

Each method decorated with [McpServerTool] becomes an invokable tool:

[McpServerTool]
[Description("Retrieves the current weather forecast for a specified city.")]
public async Task<WeatherForecast> GetWeather(
    [Description("The name of the city to get weather forecast for.")] 
    string city)
{
    _logger.LogInformation("GetWeather called with city: {City}", city);
    
    // Call your business service/API
    var forecasts = await _weatherForecastService.GetWeatherForecast(city);

    _logger.LogInformation("GetWeather returning forecast for city: {City}", city);
    return forecasts;
}

Key Attributes:

  • [McpServerTool] – Marks method as an MCP tool
  • [Description("...")] – Provides description for AI to understand tool purpose
  • Parameter descriptions help AI choose correct arguments

Best Practices:

  • Use clear, descriptive names (e.g., GetWeather, not GW)
  • Add detailed descriptions for both tool and parameters
  • Use strongly-typed return values
  • Include logging for diagnostics
  • Handle exceptions gracefully

Step 3: Implement Your Business Service

Create a service that encapsulates your API/data logic:

public interface IWeatherForecastService
{
    Task<WeatherForecast> GetWeatherForecast(string city);
}

public class WeatherForecastService : IWeatherForecastService
{
    private readonly HttpClient _httpClient;
    private readonly ILogger<WeatherForecastService> _logger;

    public WeatherForecastService(
        HttpClient httpClient, 
        ILogger<WeatherForecastService> logger)
    {
        _httpClient = httpClient;
        _logger = logger;
    }

    public async Task<WeatherForecast> GetWeatherForecast(string city)
    {
        try
        {
            // Call external weather API
            var response = await _httpClient.GetAsync(
                $"https://api.weather.com/v1/forecast?city={city}");
            
            response.EnsureSuccessStatusCode();
            
            var forecast = await response.Content
                .ReadFromJsonAsync<WeatherForecast>();
            
            return forecast ?? throw new InvalidOperationException(
                "Failed to deserialize weather data");
        }
        catch (HttpRequestException ex)
        {
            _logger.LogError(ex, "Failed to fetch weather for {City}", city);
            throw;
        }
    }
}

Step 4: Register Services

In Program.cs, register your services:

var builder = WebApplication.CreateBuilder(args);

// Register MCP server
builder.Services.AddMcpServer();

// Register your business services
builder.Services.AddHttpClient<IWeatherForecastService, WeatherForecastService>(
    client =>
    {
        client.BaseAddress = new Uri("https://api.weather.com");
        client.DefaultRequestHeaders.Add("User-Agent", "MCP-Weather-Server/1.0");
    });

// Add logging
builder.Services.AddLogging(logging =>
{
    logging.AddConsole();
    logging.AddDebug();
});

var app = builder.Build();

// Map MCP endpoint
app.MapMcp("/mcp");

app.Run();

Quick Start

1. Clone the Repository

git clone https://github.com/azurecorner/remote-MCP-servers-using-dotnet-sdk-integrating-with-our-own-data-or-apis.git
cd remote-MCP-servers-using-dotnet-sdk-integrating-with-our-own-data-or-apis

2. Restore Dependencies

cd src/McpServer/McpServer
dotnet restore

3. Run the Server

dotnet run

The server will start on http://localhost:8081 by default.

4. Verify Health

curl http://localhost:8081/api/healthz

Expected response:

StatusCode        : 200
StatusDescription : OK
Content           : Healthy
RawContent        : HTTP/1.1 200 OK
                    Transfer-Encoding: chunked
                    Content-Type: text/plain; charset=utf-8
                    Date: Sun, 25 Jan 2026 10:20:38 GMT
                    Server: Kestrel

Available MCP Tools

This MCP server exposes tools that can be discovered and invoked by MCP clients. To see all available tools, you can query the server using the tools/list JSON-RPC method.

Current Tools

  • get_weather – Retrieves weather information for a specified city

    • Parameters: city (string) – The name of the city
    • Returns: Weather forecast data including temperature, conditions, and humidity
  • ping – Simple echo tool for testing connectivity

    • Parameters: message (string) – Message to echo back
    • Returns: The same message with a timestamp

Discovering Tools

You can discover all available tools by sending a tools/list request to the MCP server:

Bash

curl -X POST http://localhost:8081/mcp \
     -H "Content-Type: application/json" \
     -H "Accept: application/json, text/event-stream" \
     -d '{
           "jsonrpc": "2.0",
           "id": 1,
           "method": "tools/list",
           "params": {}
         }'

PowerShell

# MCP endpoint
$mcpEndpoint = "http://localhost:8081/mcp"

# JSON-RPC request body
$body = @{
    jsonrpc = "2.0"
    id      = 1
    method  = "tools/list"
    params  = @{}
} | ConvertTo-Json -Depth 5

# HTTP headers
$headers = @{
    "Content-Type" = "application/json"
    "Accept"       = "application/json, text/event-stream"
}

# Send request
$response = Invoke-WebRequest `
    -Uri $mcpEndpoint `
    -Method Post `
    -Headers $headers `
    -Body $body `
    -UseBasicParsing

# Read response content
$content = $response.Content

Write-Host "Received Response:" -ForegroundColor Green
Write-Host $content -ForegroundColor White

Expected Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get weather information for a city",
        "inputSchema": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city name"
            }
          },
          "required": ["city"]
        }
      },
      {
        "name": "ping",
        "description": "Echo a message back",
        "inputSchema": {
          "type": "object",
          "properties": {
            "message": {
              "type": "string",
              "description": "Message to echo"
            }
          },
          "required": ["message"]
        }
      }
    ]
  }
}

Usage

Direct API Testing

You can test MCP tools directly by sending JSON-RPC requests to the server endpoint. This is useful for debugging, testing, and understanding how MCP clients interact with your server.

Test Weather Tool

The get_weather tool retrieves weather information for a specified city. Use the following scripts to invoke the tool:

Bash
# Test Weather

# Default parameters
MCP_ENDPOINT="${1:-http://localhost:8081/mcp}"
TOOL_NAME="${2:-get_weather}"
CITY="${3:-Paris}"

# JSON-RPC request body
read -r -d '' BODY <<EOF
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "$TOOL_NAME",
    "arguments": {
      "city": "$CITY"
    }
  }
}
EOF

# Call MCP server and output raw JSON
curl -s -X POST "$MCP_ENDPOINT" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json, text/event-stream" \
     -d "$BODY"

PowerShell
Param(
    [string]$mcpEndpoint = "http://localhost:8081/mcp",
    [string]$toolName = "get_weather",
    [hashtable]$toolParams = @{ city = "Paris" }
)
# Example usage:
#  dotnet run --project .\src\McpServer\McpServer\McpServer.csproj
# .\call-mcp-tool.ps1 -toolName "get_weather" -toolParams @{ city = "Paris" }
# .\call-mcp-tool.ps1 -toolName "ping" -mcpEndpoint http://localhost:8081/mcp  -toolParams @{ message = "hello" }



$body = @{
    jsonrpc = "2.0"
    id      = 2
    method  = "tools/call"
    params  = @{
        name = $toolName
        arguments = $toolParams
    }
} | ConvertTo-Json -Depth 5

$headers = @{
    "Content-Type" = "application/json"
    "Accept"       = "application/json, text/event-stream"
}

$response = Invoke-WebRequest `
    -Uri $mcpEndpoint `
    -Method Post `
    -Headers $headers `
    -Body $body `
    -UseBasicParsing

Write-Host "Success!" -ForegroundColor Green
Write-Host $response.Content -ForegroundColor White

Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT License – see the LICENSE file for details.



Author

Gora LEYE

https://logcorner.com/

For questions or support, please open an issue on GitHub.


Boost Your .NET Projects with Spargine: High-Performance ULIDs with the Ulid Struct

In Spargine 8, I introduced the UlidGenerator type to make working with ULIDs easier in .NET applications. For the .NET 10 release, I took this idea further — converting ULIDs into a first-class value type (struct) in the DotNetTips.Spargine.Core assembly, similar in spirit to the built-in Guid type. Why? Because modern distributed applications increasingly need … Continue reading Boost Your .NET Projects with Spargine: High-Performance ULIDs with the Ulid Struct


