Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Google Porting All Internal Workloads To Arm

Google is migrating all its internal workloads to run on both x86 and its custom Axion Arm chips, with major services like YouTube, Gmail, and BigQuery already running on both architectures. The Register reports: The search and ads giant documented its move in a preprint paper published last week, titled "Instruction Set Migration at Warehouse Scale," and in a Wednesday post that reveals YouTube, Gmail, and BigQuery already run on both x86 and its Axion Arm CPUs -- as do around 30,000 more applications. Both documents explain Google's migration process, which engineering fellow Parthasarathy Ranganathan and developer relations engineer Wolff Dobson said started with an assumption "that we would be spending time on architectural differences such as floating point drift, concurrency, intrinsics such as platform-specific operators, and performance." [...] The post and paper detail work on 30,000 applications, a collection of code sufficiently large that Google pressed its existing automation tools into service -- and then built a new AI tool called "CogniPort" to do things its other tools could not. [...] Google found the agent succeeded about 30 percent of the time under certain conditions, and did best on test fixes, platform-specific conditionals, and data representation fixes. That's not an enormous success rate, but Google has at least another 70,000 packages to port. The company's aim is to finish the job so its famed Borg cluster manager -- the basis of Kubernetes -- can allocate internal workloads in ways that efficiently utilize Arm servers. Doing so will likely save money, because Google claims its Axion-powered machines deliver up to 65 percent better price-performance than x86 instances, and can be 60 percent more energy-efficient. Those numbers, and the scale of Google's code migration project, suggest the web giant will need fewer x86 processors in years to come.

Read more of this story at Slashdot.


AI Assistants Misrepresent News Content 45% of the Time

An anonymous reader quotes a report from the BBC: New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants -- already a daily information gateway for millions of people -- routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context. Key findings:

- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems: missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst, with significant issues in 76% of responses (more than double the other assistants), largely due to its poor sourcing performance.
- A comparison between the BBC's results from earlier this year and this study shows some improvements, but still high levels of errors.

The team has released a News Integrity in AI Assistants Toolkit to help develop solutions to these problems and boost users' media literacy. They're also urging regulators to enforce laws on information integrity and continue independent monitoring of AI assistants.

Read more of this story at Slashdot.


OpenBSD 7.8 Released

OpenBSD 7.8 has been released, adding Raspberry Pi 5 support, enhanced AMD Secure Encrypted Virtualization (SEV-ES) capabilities, and expanded hardware compatibility, including new Qualcomm, Rockchip, and Apple ARM drivers. Phoronix reports: OpenBSD 7.8 brings multiple improvements around AMD Secure Encrypted Virtualization (AMD SEV) support, including support for the PSP ioctl for encrypting and measuring state for SEV-ES, a new VMD option to run guests in SEV-ES mode, and other SEV-ES enablement work as a precursor to SEV-SNP. AMD SEV-ES should be able to start confidential virtual machines (VMs) under the VMM/VMD hypervisor, as well as OpenBSD guests under KVM/QEMU. OpenBSD 7.8 also improves compatibility of the FUSE file-system support with the Linux implementation, and delivers suspend/hibernate improvements, SMP improvements, an update to the Linux 6.12.50 DRM graphics drivers, several new Rockchip drivers, Raspberry Pi RP1 drivers, H.264 video support for the uvideo driver, and many network driver improvements. The changelog and download page can be found via OpenBSD.org.

Read more of this story at Slashdot.


From karaoke terminals to AI résumés: The winners of GitHub’s For the Love of Code challenge


Every developer has that project they build just for the fun of it. You know how it goes: you start by asking “what if?” and then you have something weird and wonderful hours later. 

This summer, we decided to celebrate that spirit with For the Love of Code, our first-ever competition for projects built purely for fun. More than 300 developers answered the call. Some leaned on GitHub Copilot to refactor ideas, fix bugs, and spark inspiration. Some teamed up. Others flew solo, guided only by caffeine and curiosity.

Entries spanned everything from a Breakout game powered by your GitHub graph, to a laugh-track that plays on every commit, a Yelp-style code reviewer in VS Code ★★★★☆, a Copilot you can literally see on camera, and even a comic strip made from your release notes.

We invited participants to build anything that sparks joy across six whimsical categories:

  • 🔘 Buttons, beeps & blinkenlights: Hardware hacks, LEDs, sensors, and gadgets galore.
  • 🖥️ Terminal talent: Command-line creations and retro computing love letters.
  • 🌐 World wide wonders: Browser-based experiments, apps, and interactive art.
  • 🤖 Agents of change: AI, bots, and automation with heart.
  • 🕹️ Game on: Games big or small, serious or silly.
  • 🃏 Everything but the kitchen sink: The wildcard (if it doesn’t fit anywhere else, it fits here).

Meet the winners: Open source experiments, AI side projects, and more

Here are the top three entries from each category.

🔘 Buttons, beeps & blinkenlights 

Plane Tracker: DIY radar on your desk

A person holding an Adafruit TFT Gizmo display connected to a laptop. The screen shows a green radar interface with red blips representing nearby planes. In the background, Python code and a terminal window in VS Code display mock plane data being sent via Bluetooth.

Plane Tracker by @cpstroum is a DIY radar that uses an Adafruit Circuit Playground, Bluetooth, and the ADS-B Exchange API to fetch live flight data. It turns nearby planes into a real-time mini radar display.

Copilot cameo: GitHub Copilot helped @cpstroum with Git itself and with structuring the initial project for their first real push to GitHub. Thanks, Copilot! And welcome aboard, @cpstroum!

Cadrephoto: The easy e-ink photo frame

A wooden e-ink photo frame displays a grayscale version of “Girl with a Pearl Earring.” A smartphone next to it shows an email being sent with the same image, and a red arrow points from the phone to the frame, illustrating how the photo is updated remotely.

Cadrephoto by @ozh is a Raspberry Pi and e-ink photo frame that displays pictures emailed to it (no app, no setup, perfect for less tech-savvy people). It checks an inbox, downloads the latest photo, and updates the screen automatically.

Copilot cameo: GitHub Copilot helped @ozh with their first Python project. It worked smoothly inside JetBrains IDEs and made code completion feel almost like magic.

BuildIn: Traffic-light builds for your repository

A collage of four photos showing an Arduino breadboard project with multiple jumper wires and LEDs in different colors—blue, green, yellow, and red—lit up during various testing stages.

BuildIn by @SUNSET-Sejong-University and @lepetitprince99 is a real-life traffic light for your code that sits on your desk. Using an Arduino and the GitHub API, it lights up red, yellow, green, or blue to show your repository’s build status at a glance.

Copilot cameo: GitHub Copilot helped @SUNSET-Sejong-University debug and optimize their code. It saved time tracking down tricky hardware issues and made troubleshooting much easier.

🖥️ Terminal talent

RestoHack: A roguelike resurrected from 1984

A black terminal window displaying ASCII art of a tombstone reading “REST IN PEACE mjh 0 AU killed by a giant rat 2025,” from a retro text-based game.

RestoHack by @Critlist resurrects the 1984 roguelike game that inspired NetHack, rebuilt from the original source with modern tools and a preservationist’s touch. It compiles cleanly, runs faithfully, and proves that forty years later, permadeath still hits hard.

Jukebox CLI

A pixel art jukebox interface in a terminal-based music player. The screen shows colorful pixel graphics in the center, a playlist of songs on the right, and playback controls with progress and volume bars at the bottom.

Jukebox CLI by @FedeCarollo is a colorful, animated jukebox that runs right in your terminal. Built in Rust with Ratatui, it plays MP3s, shows floating musical notes, and color-codes each track in a scrollable playlist. You can play, pause, skip, and adjust the volume without ever leaving your command line.

Copilot cameo: GitHub Copilot helped @FedeCarollo explore unfamiliar Rust libraries and find their footing.

Tuneminal: Sing your commits from the command line

A terminal-based karaoke interface titled “Tuneminal.” The screen displays a song library with “IRIS – Kenshi Yonezu,” current score and accuracy, and placeholders for lyrics and an audio visualizer.

Tuneminal by @heza-ru turns your terminal into a full-blown karaoke stage with scrolling lyrics, live audio visualization, and scoring that rewards your inner rock star. It’s open source, cross-platform, and the perfect excuse to sing while that git clone takes a while.

🌐 World wide wonders

Netstalgia: Surf the ‘90s web on virtual dial-up

A retro 1990s-style web page called “Netstalgia.com” designed to look like a Windows 95 desktop. The page features colorful buttons, visitor counters, fake ads, and a pop-up asking users to star the GitHub repository for this nostalgic project.

Netstalgia by @heza-ru (again!) is a fully functional ‘90s web fever dream built with modern tech, but visually stuck on virtual dial-up. It’s got dancing babies, popup ads, a fake BBS, and more CRT glow than your old Gateway 2000 ever survived.

In true retro internet spirit, it even ships with a fake GitHub Star Ransomware—a tongue-in-cheek “virus” that demands you star the repo to “decrypt your files.” A clever (and harmless) new twist on the eternal quest for GitHub stars. ⭐💾

Bionic Reader: Speed read your screen

Bionic Reader by @Awesome-XV rewires how you read by bolding the first few letters of each word so your brain fills in the rest. It’s like giving your eyes a speed boost, without the caffeine jitters, so you can read faster than ever.

Copilot cameo: GitHub Copilot helped @Awesome-XV write project documentation and scaffold the initial codebase.
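As a rough illustration of the technique (a minimal sketch, not @Awesome-XV’s actual implementation), here is how the core idea might look in PowerShell, emitting Markdown bold markers:

# Bionic-reading sketch: bold roughly the first half of each word.
function ConvertTo-BionicMarkdown {
    param([string]$Text)
    $words = $Text -split ' ' | ForEach-Object {
        if ($_.Length -le 1) { $_ }
        else {
            $split = [int][math]::Ceiling($_.Length / 2)  # prefix length to bold
            "**$($_.Substring(0, $split))**" + $_.Substring($split)
        }
    }
    $words -join ' '
}

ConvertTo-BionicMarkdown "Bionic reading bolds word prefixes"
# => **Bio**nic **read**ing **bol**ds **wo**rd **pref**ixes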

The Git Roast Show: Roast your GitHub profile… lovingly

A stylized image featuring a cartoon GitHub Octocat character in a tuxedo and sunglasses holding a microphone. The text above reads “The GitRoast Show,” and a speech bubble says “we don’t fork around here.” The background has a swirling teal marble texture.

Git Roast Show by @rawrnuck and @Anmol0201 is a full-stack web app that humorously “roasts” your GitHub profile. Built with React, Vite, and Express, it fetches live GitHub data to generate personalized, sound-enhanced, and animated comedy roasts.

Copilot cameo: GitHub Copilot helped @rawrnuck understand algorithms and handle the repetitive parts of their project.

Nightlio: a mood tracker you actually own

Nightlio by @shirsakm is a privacy-first mood tracker and daily journal you can self-host in minutes. Log how you feel on a 5-point scale, add Markdown notes, tag entries like #Sleep or #Productivity, then explore calendars, streaks, and simple stats to spot patterns. It runs anywhere with Docker, stores data in a local SQLite file, and keeps things clean with JWT-protected APIs, a React/Vite front end, and optional Google OAuth. No ads. No subscriptions. Your server, your rules.

A dark-themed productivity app called Nightlio is shown in motion. The animation highlights mood tracking icons, personal goals such as “Read Before Bed” and “Morning Meditation,” and sections for adding goals, viewing history, and tracking progress through colorful animated bars.

Note: Because @heza-ru placed in two categories, we’ve added a fourth winner to this category.

Copilot cameo: GitHub Copilot helped @shirsakm with refactors, color palette updates, and codebase-wide changes that would have taken much longer by hand.

🤖 Agents of change

Neosgenesis: AI that thinks about thinking

Neosgenesis by @answeryt is a metacognitive AI framework that teaches machines to think about how they think. It runs a five-stage loop (think, verify, learn, optimize, decide) while juggling multiple LLMs, tools, and real-time feedback. A multi-armed bandit picks the best reasoning patterns, and when it stalls, an “aha” mode explores fresh paths.

MediVision Assistant: Accessible AI healthcare for all

MediVision Assistant by @omkardongre is an AI healthcare companion that helps elderly and disabled users manage their health through voice, image, and video. Users can scan medications, analyze skin conditions, log symptoms by voice, and chat with an AI doctor-like assistant.

Copilot cameo: GitHub Copilot helped @omkardongre generate React components, API templates, and AI integration code. It handled the boilerplate so they could focus on building features and improving the experience.

Quiviva: The résumé that talks back

A colorful web interface titled “An Interactive CV that Talks Back.” The animation shows a chatbot window on the right where users can type questions to Kasia’s AI-powered résumé. The left side explains the project as a playful mix of AI, design, and storytelling, with a list of example questions and a button to download the CV as a PDF.

Quiviva by @katawiecz is an interactive AI-powered CV that turns a job hunt into a chat adventure. Ask about skills or projects, or type “Gandalf” to unlock secret nerd mode. All this goes to show that even résumés can be fun.

🕹️ Game on

AI-Dventure: Infinite worlds, infinite choices

A screenshot of a text adventure game.

AI-Dventure by @FedeCarollo is an interactive text adventure built in Rust and powered by OpenAI’s models. Players explore dynamically generated worlds in fantasy, horror, sci-fi, or historical settings where every command shapes the story and no two runs are the same.

BeatBugging: Debug to the beat

A retro-style terminal interface titled “BEATBUGGING SYSTEM” shows a progress bar at 25%, simulating the initialization of a “musical debugging interface” with audio frequencies, memory readouts, and ASCII symbols displayed on a dark screen.

BeatBugging by @sandra-aliaga, @Joshep-c, @RyanValdivia, and @tniia turns debugging into a rhythm game that converts your system logs into musical beats. Built in Python, it lets you fix bugs to the rhythm on a 5-by-5 grid and makes debugging sound unexpectedly good.

Copilot cameo: GitHub Copilot helped the team figure out next steps when they got stuck, offering helpful hints that kept development moving.

MuMind: A multiplayer battle of wits and vibes

MuMind by @FontesHabana is a web-based multiplayer version of the party game Herd Mentality, where players try to match the majority’s answers to score points. Built with React, Tailwind CSS, and Framer Motion, it offers multilingual support, lively animations, and a smooth, responsive experience.

🃏 Everything but the kitchen sink

GitFrag: Defrag your contributions graph

@chornonoh-vova built GitFrag to reorganize your contributions graph using classic sorting algorithms (bubble, merge, quick, and counting sort). Each is visualized with smooth progress animations, GitHub login, and dark mode support. There’s also a wonderful writeup of how the developer approached it.

Copilot cameo: GitHub Copilot helped @chornonoh-vova structure their understanding of algorithms and add thoughtful details that made their visualization shine.

Code Sensei: Meditate your way through VS Code

Code Sensei by @redhatsam09 turns your VS Code sessions into a zen pixel adventure where your focus fuels the fun. Type to walk, pause to hop, but stay away too long and your sensei meets a dramatic, 8-bit demise.

Reviewer Karma: Good vibes for great reviews

A leaderboard titled “Scoring System” and “Current Rankings” shows how reviewers earn points for giving code reviews, using positive emojis, and writing constructive comments. The rankings table lists @alice in first place with 18 points, followed by @bob, @carol, @dave, and @eve.

Reviewer Karma by @master-wayne7 keeps your pull requests peaceful by rewarding reviewers for good vibes and great feedback. Every emoji, comment, and code critique earns points on a live leaderboard that turns pull request reviews into a friendly competition.

Copilot cameo: GitHub Copilot helped @master-wayne7 write efficient Go code for the GitHub API, structure logic for assigning karma points, and handle repetitive tasks like error checking and markdown generation. It kept the project flowing smoothly from start to finish.

These projects show what’s possible when we let our curiosity take center stage

Remember, these are hackathon projects. They might not be feature-complete, and there may be bugs, spaghetti code, and the occasional rogue program escaped from the Grid. But they are clear examples of what we can accomplish when we do something just for the love of it.

All of our category winners get 12 months of GitHub Copilot Pro+.

If For the Love of Code proved anything, it’s that creativity and code thrive best together—especially with Copilot lending a hand.

Shoutout to the makers

Congratulations to all of our winners: @Anmol0201, @answeryt, @Awesome-XV, @chornonoh-vova, @cpstroum, @Critlist, @FedeCarollo, @FontesHabana, @heza-ru, @joshep-c, @katawiecz, @lepetitprince99, @master-wayne7, @omkardongre, @RyanValdivia, @ozh, @rawrnuck, @redhatsam09, @sandra-aliaga, @shirsakm, @SUNSET-Sejong-University, @tniia.

Massive thank you to our judges, who included a mix of GitHub Stars, Campus Experts, and GitHub Developer Relations friends: @Ba4bes, @colbyfayock, @j0ashm, @JuanGdev, @howard-lio, @luckyjoseph, @metzinaround, @Taiwrash, and @xavidop.

And thank you Copilot for your assistance!

Now back to work everyone! Playtime is over.

💜 If you enjoyed For the Love of Code, stay tuned… Game Off 2025 begins this November!

The post From karaoke terminals to AI résumés: The winners of GitHub’s For the Love of Code challenge appeared first on The GitHub Blog.


Identify Device state in EntraID/Defender with PowerShell


Proposed solution:

One way to achieve this result is by integrating a few pieces of available technology. It sounds like a lot of moving parts, but all we need is:

  • App registration in EntraID
  • MS Graph API
  • Defender API
  • PowerShell scripting language
  • File encryption

App registration in EntraID:

This provides a connection point between PowerShell and the information you want to access. Since creating an app registration produces a client secret and an appId, we can leverage these two pieces of information in the PowerShell script.

For reference from our documentation: How to register an application in EntraID.

1.- Jump into your EntraID tenant --> manage --> App registrations and click on “New registration” (Figure 1)

Figure 1. EntraID App registration process

 

2.- This is a simple app registration process, nothing complicated; it is just to obtain the AppID and ClientSecret values we need for the PowerShell script. For the purpose of the test, I called the app “DefenderEntraQueryApp” and configured it with the following settings:

Authentication settings:

Fig 2. Authentication settings of app registration.

 

Certificates & Secrets:

Fig 3. Certificates and App client secret.

NOTE: Remember, when you create the app registration, the client secret (Value) is shown only at that time; after you move away from the app registration creation screen, the client secret will not be shown again. If you cannot grab the ClientSecret during registration of the app, you can click on the “New client secret” button from the view, create a new client secret, and delete the previous one.

API Permissions:

Fig 4. API permissions.

After these parameters are configured in EntraID, you need to grab the following values from EntraID and insert them in the PowerShell script:

$tenantId   = "MY-TENANT-ID"
$clientId   = "MY-CLIENT-ID"
$secretPath = "C:\certs\clientSecret.dat"
$deviceList = Get-Content "C:\temp\devices.txt"

NOTE: The $deviceList variable points to the text file where you will input the computer names you want to interrogate. Adjust the path and file name to your preference, but be sure to reflect that in the script logic.

Encrypting client secret EntraID Application:

Since the client secret generated when registering the application in EntraID is in plain text, we cannot allow it to be passed around in the script in plain text.

For this, we encrypt the client secret information into a .dat file using Windows DPAPI encryption and the script will pull it from a location on the user’s computer.

It is worth noting that the .dat file is bound to the user who created it, so if you try to copy this .dat file to another computer, the script will fail. Below is the one-time setup needed to create the encrypted .dat file the script will use.

To encrypt the client secret from your EntraID registered application, do the following:

The first line in the code sequence below loads the System.Security assembly, which gives PowerShell access to the DPAPI encryption types. Run each of these lines, one by one, in a PowerShell window with elevated privileges.

NOTE: Change the path in the last line of this piece of code to the path where you want the .dat file generated.

Add-Type -AssemblyName System.Security
$plainText = "your-client-secret"
$secureBytes = [System.Text.Encoding]::UTF8.GetBytes($plainText)
$encrypted = [System.Security.Cryptography.ProtectedData]::Protect($secureBytes, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)
[System.IO.File]::WriteAllBytes("C:\temp\clientSecret.dat", $encrypted)
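As a quick sanity check (my suggestion, not part of the original guide), you can verify the round trip by decrypting the file as the same user:

Add-Type -AssemblyName System.Security
# Decrypt the .dat file with DPAPI under the CurrentUser scope and print the secret.
$bytes = [System.IO.File]::ReadAllBytes("C:\temp\clientSecret.dat")
$plain = [System.Security.Cryptography.ProtectedData]::Unprotect($bytes, $null, [System.Security.Cryptography.DataProtectionScope]::CurrentUser)
[System.Text.Encoding]::UTF8.GetString($plain)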

Now that the app registration and client secret encryption are out of the way, you can populate the text file with the list of computers you want to check. For this example, I am assuming the path for the text file is c:\temp.

Fig 5. devices.txt file used as input to target multiple computers

 

Use a single entry per line, with no trailing spaces. As you can see from the device names in Fig 5, the script works for any supported OS as long as the machine is registered in EntraID.
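For illustration (these are hypothetical machine names standing in for the ones shown in Fig 5), devices.txt might look like this:

DESKTOP-WIN11-01
MACBOOK-PRO-JDOE
UBUNTU-LAB-03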

After the text file is saved, open a PowerShell window with elevated privileges and connect to your EntraID tenant for authentication purposes.

Use the Connect-AzureAD command to connect to EntraID and validate your credentials. Then switch to the path where the script is, if you are not already there, and execute the script in the PowerShell window:

Fig 6. Running the PS Script

 

Fig 7. Authenticating to EntraID:

 

Fig 8. Output of the script:

The script also exports the list to a .csv file; by default, the path for the .csv file is c:\temp\DeviceStatus.csv.
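If you want to work with the results afterwards (my own suggestion, not part of the original article), the exported CSV is easy to slice in PowerShell:

# Load the exported results and list only the devices that are disabled in EntraID.
Import-Csv "C:\temp\DeviceStatus.csv" |
    Where-Object { $_.Status -eq "Disabled" } |
    Select-Object DeviceName, Status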

High level execution of the script:

PowerShell scripting code:

Code disclaimer:

The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

NOTE: When using your script editing tool of choice, be aware of any additional spaces added as a result of the copy/paste operation into your editing tool.

# === CONFIGURATION ===
# This script loops through a list of devices to check if the device is enabled or disabled in EntraID
# It uses the MS Graph API and a simple app registration in EntraID with consent granted to access
# Defender via Defender API
# Steps to define the pre-requisites for the script to run will be provided in a separate doc guide
# Author: Edgar Parra - Microsoft v1.2

$tenantId   = "MY-TENANT-ID"
$clientId   = "MY-CLIENT-ID"
$secretPath = "C:\certs\clientSecret.dat"
$deviceList = Get-Content "C:\temp\devices.txt"

# === LOAD ENCRYPTED CLIENT SECRET ===
Add-Type -AssemblyName System.Security

if (-not (Test-Path $secretPath)) {
    Write-Host "Encrypted client secret file not found at $secretPath."
    return
}

try {
    $encryptedSecret = [System.IO.File]::ReadAllBytes($secretPath)
    $decryptedBytes  = [System.Security.Cryptography.ProtectedData]::Unprotect(
        $encryptedSecret,
        $null,
        [System.Security.Cryptography.DataProtectionScope]::CurrentUser
    )
    $clientSecret = [System.Text.Encoding]::UTF8.GetString($decryptedBytes)
} catch {
    Write-Host "Error decrypting client secret: $($_.Exception.Message)"
    return
}

# === AUTHENTICATION ===
$body = @{
    grant_type    = "client_credentials"
    scope         = "https://graph.microsoft.com/.default"
    client_id     = $clientId
    client_secret = $clientSecret
}

try {
    $tokenResponse = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body
    $accessToken = $tokenResponse.access_token
} catch {
    Write-Host "Error retrieving token: $($_.Exception.Message)"
    return
}

$headers = @{
    Authorization  = "Bearer $accessToken"
    "Content-Type" = "application/json"
    Accept         = "application/json"
}

# === LOOP THROUGH DEVICES ===
$results = @()
foreach ($deviceName in $deviceList) {
    $escapedDeviceName = $deviceName -replace "'", "''"
    $uri = "https://graph.microsoft.com/v1.0/devices?`$filter=displayName eq '$escapedDeviceName'"
    try {
        $response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
        if ($response.value.Count -eq 0) {
            Write-Host "Device '$deviceName' not found."
            $results += [PSCustomObject]@{ DeviceName = $deviceName; Status = "Not Found" }
        } else {
            $device = $response.value[0]
            $status = if ($device.accountEnabled) { "Enabled" } else { "Disabled" }
            Write-Host "$($device.displayName): $status"
            $results += [PSCustomObject]@{ DeviceName = $device.displayName; Status = $status }
        }
    } catch {
        $errorMessage = $_.Exception.Response.GetResponseStream() |
            ForEach-Object { New-Object System.IO.StreamReader($_) } |
            ForEach-Object { $_.ReadToEnd() }
        Write-Host "Error querying '$deviceName': $errorMessage"
        $results += [PSCustomObject]@{ DeviceName = $deviceName; Status = "Error" }
    }
}

# === EXPORT RESULTS TO CSV ===
$results | Export-Csv -Path "C:\temp\DeviceStatus.csv" -NoTypeInformation

Explore additional resources:

For further insights and guidance on data protection, encryption, and app registrations in EntraID, consider reviewing Microsoft's related documentation.


Deployment Guide-Copilot Studio agent with MCP Server exposed by API Management using OAuth 2.0


Introduction

In today’s enterprise landscape, enabling AI agents to interact with backend systems securely and at scale is critical. By exposing MCP servers through Azure API Management (APIM), organizations can provide controlled access to these services. When combined with OAuth 2.0 authorization code flow, this setup ensures robust, enterprise-grade security for AI agents built in Copilot Studio—empowering intelligent automation while maintaining strict access governance.

Disclaimer & Caveats

This article explores how to configure an MCP tool, exposed as an MCP server via APIM, for secure consumption by AI agents built in Copilot Studio. Leveraging the OAuth 2.0 Authorization Code Flow, this setup ensures enterprise-grade security by enabling delegated access without exposing user credentials.

With Azure API Management now supporting MCP server capabilities in public preview, developers can expose REST APIs as MCP tools using a standardized JSON-RPC interface. This allows AI agents to invoke backend services securely and at scale, without the need to rebuild existing APIs. Copilot Studio, whose MCP integration is also in preview, empowers organizations to orchestrate intelligent agents that interact with these tools in real time.

While this guide provides a foundational approach, every environment is unique. You can enhance security further by implementing app roles, conditional access policies, and extending your integration logic with custom Python code for advanced scenarios.

⚠️ Note: Both MCP server support in APIM and MCP tool integration in Copilot Studio are currently in public preview. As these platforms evolve rapidly, expect changes and improvements over time. Always refer to https://learn.microsoft.com/en-us/azure/api-management/export-rest-mcp-server for the latest updates.

This article is about consuming remote MCP servers. In Azure, managed identity can also be leveraged for APIM integration.

What is Authorization Code Flow?

The Authorization Code Flow is designed for applications that can securely store a client secret (like server-side apps). It allows the app to obtain an access token on behalf of the user without exposing their credentials. This flow uses an intermediate authorization code to exchange for tokens, adding an extra layer of security.

Steps in the Flow

  1. User Authentication
    The user is redirected to the Authorization Server (In this case: Azure AD) to log in and grant consent.
  2. Authorization Code Issued
    After successful login, the Authorization Server sends an authorization code to the app via the redirect URI.
  3. Token Exchange
    The app sends the authorization code (plus client credentials) to the Token Endpoint to get: Access Token (for API calls) and Refresh Token (to renew access without user interaction)
  4. API Access
    The app uses the Access Token to call protected resources.
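To make step 3 concrete, here is a minimal PowerShell sketch of the token exchange against the Microsoft identity platform v2.0 endpoint (the tenant, client, code, redirect URI, and scope values are placeholders):

# Exchange the authorization code (returned to the redirect URI) for tokens.
$tenantId = "YOUR-TENANT-ID"
$body = @{
    grant_type    = "authorization_code"
    client_id     = "YOUR-CLIENT-ID"
    client_secret = "YOUR-CLIENT-SECRET"
    code          = "AUTH-CODE-FROM-REDIRECT"
    redirect_uri  = "https://yourapp.example.com/callback"
    scope         = "api://YOUR-API-CLIENT-ID/.default"
}
$tokens = Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Body $body
$tokens.access_token    # use for API calls
$tokens.refresh_token   # present when the offline_access scope was granted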

The diagram below shows the Authorization Code Flow in detail.

Microsoft identity platform and OAuth 2.0 authorization code flow — Microsoft identity platform | Microsoft Learn

 

High Level Architecture

This architecture can also be implemented with only the APIM backend app registration. However, be careful to configure the redirect URIs appropriately.

Remote MCP Servers using APIM Architecture

APIM can expose remote MCP servers, enabling AI agents, such as those built in Copilot Studio, to securely access backend services using standardized JSON-RPC interfaces. This integration offers a robust, scalable, and secure way to connect AI tools with enterprise APIs.

Key Capabilities:

  • Secure Gateway: APIM acts as an intelligent gateway, handling OAuth 2.0 Authorization Code Flow, authentication, and request routing.
  • Monitoring & Observability: Integration with Azure Log Analytics and Application Insights enables deep visibility into API usage, performance, and errors.
  • Policy Enforcement: APIM’s policy engine allows for custom rules, including token validation, header manipulation, and response transformation.
  • Rate Limiting & Throttling: Built-in support for rate limits, quotas, and IP filtering helps protect backend services from abuse and ensures fair usage.
  • Managed Identity & Entra ID: Secure service-to-service communication is enabled via system-assigned and user-assigned managed identities, with Entra ID handling identity and access management.
  • Flexible Deployment: MCP servers can be hosted in Azure Functions, App Services, or Container Apps, and exposed via APIM with minimal changes to existing APIs.

To learn more, visit https://learn.microsoft.com/en-us/samples/azure-samples/remote-mcp-apim-functions-python/remote-mcp-apim-functions-python/

Develop MCP server in VS Code

This deployment guide provides sample MCP code written in python for ease of use. It is available on the following GitHub repo. However, you can also use your own MCP server.

Clone the following repository and open in VS Code.

git clone https://github.com/mafzal786/mcp-server.git

Run the following to execute it locally:

cd mcp-server
uv venv
uv sync
uv run mcpserver.py

Deploy MCP Server as Azure Container App

In this deployment guide, the MCP server is deployed as an Azure Container App. It can also be deployed as an Azure App Service.

Deploy the MCP server as an Azure Container App by running the following command. It can also be deployed in various other ways, such as from VS Code or a CI/CD pipeline; the Azure CLI is used here for simplicity.

az containerapp up \
  --resource-group <RESOURCE_GROUP_NAME> \
  --name streamable-mcp-server2 \
  --environment mcp \
  --location <REGION> \
  --source .

Configure Authentication for Azure Container App

1. Sign in to the Azure portal. Open the container app and click “Authentication” as shown below.

For more details, visit the following link: Enable authentication and authorization in Azure Container Apps with Microsoft Entra ID | Microsoft Learn

Click Add Identity Provider as shown.

2. Select Microsoft from the drop-down and leave everything as is, as shown below.

3. This will create a new app registration for the container app. After it is all set up, it will look like the screenshot below.

As soon as authentication is configured, the container app becomes inaccessible except through OAuth.

Note: If you already have an app registration configured for the Azure Container App, use it by selecting the "Pick an existing app registration in this directory" option.

Review App Registration of Container App — Backend

  • Visit App registrations and click streamable-mcp-server2, as in this case.
  • Click on the Authentication tab and verify the Redirect URIs. You should see a redirect URI for the container app, ending with /.auth/login/aad/callback, as shown in the green box in the screenshot below.

 

 

  • Now click on “Expose an API”. Confirm the Application ID URI is configured with a scope as shown below. Its format is api://<client id>.
  • The scope "user_impersonation" is created.

 

 

  • Verify the API permissions. Make sure you grant admin consent for your tenant as shown below. More scopes can be created depending on data access requirements.

 

Note: Make sure to "Grant admin consent" before proceeding to the next step.

Create App registration for representing APIM API

  1. Launch the Azure portal. Visit App registrations. Click New registration.
  2. Create a new app registration as shown below. For example, "apim-mcp-backend-api" in this case.
  3. Click "Expose an API", configure the Application ID URI, and add a scope, such as user_impersonation, as shown in the diagram below.
  4. Click "App roles" and create the following role as shown below. More roles can be created depending on requirements, on a case-by-case basis. Here, an app role is created to illustrate the concept and show how it will be used in APIM inbound policies in the coming sections.

Create App Registration for Client App — Copilot Studio

In these steps, we will configure the app registration for the client app; in this case, Copilot Studio acts as the client app. This is also shown in the “High Level Architecture” diagram earlier in this article.

  1. Launch the Azure portal. Visit App registrations. Click New registration.
  2. Create a new app registration. Leave the Redirect URL empty for now; we will configure it later, as it is provided by Copilot Studio when configuring the custom MCP connector.

3. Click on API permissions and click “Add a permission”. Click Microsoft Graph and then click “Delegated permissions”. Select email, openid, and profile as shown below.

 

 

4. Make sure to grant admin consent; it should look like the screenshot below.

 

5. Create a secret: click “Certificates & secrets” and create a new client secret by clicking “New client secret”. Store the value, as it will be masked after some time. If that happens, you can always delete it and re-create a new secret.

6. Capture the following, as you will need it when configuring the MCP tool in Copilot Studio.

  • Client ID from the Overview Tab of app registration.
  • Client secret from “Certificates & secrets” tab.

7. Configure API permissions for the APIM API, i.e. "apim-mcp-backend-api" in this case. Click the “API permissions” tab. Click “Add a permission”. Click on the “My APIs” tab as shown below and select "apim-mcp-backend-api".

Note: If you don't see the app registration in "My APIs", go to the app registration, click "Owners", and add your AD account as an owner.

 

8. Select "Delegated permissions". Then select the permission as shown below.

9. Select the Application permissions. Select the app roles created in the apim-mcp-backend-api registration, such as mcp.read in this case.

You MUST “Grant admin consent” as the final step. It is very important! I can’t emphasize this enough: without it, nothing will work!

10. The end result of this client app registration should look like the figure below.

Configure permissions for Container App registration 

  1. Launch the Azure portal. Visit App registrations.
  2. Select the app registration of the Azure container app, such as streamable-mcp-server2 in this case.
  3. Select API permissions.
  4. Add the following delegated and application permissions as shown in the diagram below.

 

Note: Don't forget to Grant admin consent.

Configure allowed token audience for Container App

The allowed token audience defines which audience values (the aud claim) in a token are considered valid for your app. When a client app requests an access token from Microsoft Entra ID (Azure AD), the token includes an aud claim that identifies the intended recipient. Your container app will only accept tokens where the aud claim matches one of the values in the Allowed Token Audiences list.

This is important because it ensures that only tokens issued for your API or app are accepted and prevents misuse of tokens intended for other resources. This adds an extra layer of security.

  1. In the Azure portal, visit the Azure Container App, i.e. streamable-mcp-server2.
  2. Click on "Authentication".
  3. Click "Edit" under the identity provider.
  4. Under "Allowed token audiences", add the application ID URI of "apim-mcp-backend-api", as this will be included as the audience in the access token.

 

Best Practices
  • Only include trusted client app IDs.
  • Avoid using overly broad values like “allow all” (not recommended).
  • Validate tokens using Microsoft libraries (MSAL) or built-in auth features.
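To see what your app will actually be comparing against, here is a small PowerShell sketch (my illustration, not from the original article) that decodes a JWT payload and prints its aud claim. Note this only decodes the token; it does not validate the signature:

# Decode the payload (second segment) of a JWT and inspect the aud claim.
function Get-JwtPayload {
    param([string]$Jwt)
    $payload = $Jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
    switch ($payload.Length % 4) {   # restore Base64 padding
        2 { $payload += '==' }
        3 { $payload += '=' }
    }
    [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
}

# $accessToken: a token you captured (e.g., from the APIM trace shown later).
$claims = Get-JwtPayload $accessToken
$claims.aud   # should match an entry in Allowed Token Audiences, e.g. api://<client id>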

Configure MCP server in API Management

Note: Provisioning an API Management resource is outside the scope of this document.

If you do not already have an API Management instance, follow this QuickStart: https://learn.microsoft.com/en-us/azure/api-management/get-started-create-service-instance

The following service tiers are available for the preview: classic Basic, Standard, and Premium, plus Basic v2, Standard v2, and Premium v2.

For the Classic Basic, Standard, or Premium tiers, you must join the AI Gateway Early Update group to enable MCP server features. Please allow up to 2 hours for the update to take effect.

Expose an existing MCP server

Follow these steps to expose an existing MCP server in API Management:

  1. In the Azure portal, navigate to your API Management instance.
  2. In the left-hand menu, under APIs, select MCP servers > + Create MCP server.
  3. Select Expose an existing MCP server.
  4. In Backend MCP server:
    1. Enter the existing MCP server base URL. Example: https://streamable-mcp-serverv2.kdhg489457dslkjgn.eastus2.azurecontainerapps.io/mcp for the Azure Container App hosting the MCP server.
    2. In Transport type, Streamable HTTP is selected by default.
  5. In New MCP server:
    1. Enter a Name for the MCP server in API Management.
    2. In Base path, enter a route prefix for tools. Example: mcptools
    3. Optionally, enter a Description for the MCP server.
  6. Select Create.

The diagram below shows the MCP servers configured in APIM for reference.

Configure policies for MCP server

Configure one or more API Management policies to help manage the MCP server. The policies are applied to all API operations exposed as tools in the MCP server and can be used to control access, authentication, and other aspects of the tools.

To configure policies for the MCP server:

  1. In the Azure portal, navigate to your API Management instance.
  2. In the left-hand menu, under APIs, select MCP Servers.
  3. Select an MCP server from the list.
  4. In the left menu, under MCP, select Policies.
  5. In the policy editor, add or edit the policies you want to apply to the MCP server's tools. The policies are defined in XML format. 

 

<!--
    - Policies are applied in the order they appear.
    - Position <base/> inside a section to inherit policies from the outer scope.
    - Comments within policies are not preserved.
-->
<!-- Add policies as children to the <inbound>, <outbound>, <backend>, and <on-error> elements -->
<policies>
    <!-- Throttle, authorize, validate, cache, or transform the requests -->
    <inbound>
        <base />
        <set-variable name="accessToken" value="@(context.Request.Headers.GetValueOrDefault("Authorization", "").Replace("Bearer ", ""))" />
        <!-- Log the captured access token to the trace logs -->
        <trace source="Access Token Debug" severity="information">
            <message>@("Access Token: " + (string)context.Variables["accessToken"])</message>
        </trace>
        <set-variable name="userId" value="@(context.Request.Headers.GetValueOrDefault("Authorization", "Bearer ").Split(' ')[1].AsJwt().Claims["oid"].FirstOrDefault())" />
        <set-variable name="userName" value="@(context.Request.Headers.GetValueOrDefault("Authorization", "Bearer ").Split(' ')[1].AsJwt().Claims["name"].FirstOrDefault())" />
        <trace source="User Name Debug" severity="information">
            <message>@("username: " + (string)context.Variables["userName"])</message>
        </trace>
        <set-variable name="scp" value="@(context.Request.Headers.GetValueOrDefault("Authorization", "Bearer ").Split(' ')[1].AsJwt().Claims["scp"].FirstOrDefault())" />
        <trace source="Scope Debug" severity="information">
            <message>@("scope: " + (string)context.Variables["scp"])</message>
        </trace>
        <set-variable name="roles" value="@(context.Request.Headers.GetValueOrDefault("Authorization", "Bearer ").Split(' ')[1].AsJwt().Claims["roles"].FirstOrDefault())" />
        <trace source="Role Debug" severity="information">
            <message>@("Roles: " + (string)context.Variables["roles"])</message>
        </trace>
        <!--
        <set-variable name="requestBody" value="@{ return context.Request.Body.As<string>(preserveContent: true); }" />
        <trace source="Request Body information" severity="information">
            <message>@("Request body: " + (string)context.Variables["requestBody"])</message>
        </trace>
        -->
        <validate-azure-ad-token tenant-id="{{tenant-id}}" header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
            <client-application-ids>
                <application-id>{{client-application-id}}</application-id>
            </client-application-ids>
            <audiences>
                <audience>{{audience}}</audience>
            </audiences>
            <required-claims>
                <claim name="roles" match="any">
                    <value>mcp.read</value>
                </claim>
            </required-claims>
        </validate-azure-ad-token>
    </inbound>
    <!-- Control if and how the requests are forwarded to services -->
    <backend>
        <base />
    </backend>
    <!-- Customize the responses -->
    <outbound>
        <base />
    </outbound>
    <!-- Handle exceptions and customize error responses -->
    <on-error>
        <base />
        <trace source="Role Debug" severity="error">
            <message>@("username: " + (string)context.Variables["userName"] + " has error in accessing the MCP server, could be auth or role related...")</message>
        </trace>
        <return-response>
            <set-status code="403" reason="Forbidden" />
            <set-body>{"error":"Missing required scope or role"}</set-body>
        </return-response>
    </on-error>
</policies>

 

Note: Update the above inbound policy with the tenant Id, client application id, and audience as per your environment. It is recommended to use APIM "Named values" instead of hard coding inside the policy. To learn more, visit Use named values in Azure API Management policies
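Once the policy is in place, a quick way to confirm it rejects anonymous calls (a sketch with a hypothetical gateway URL and base path) is:

# Without a bearer token, the inbound policy should return 401.
try {
    Invoke-RestMethod -Uri "https://your-apim.azure-api.net/mcptools" -Method Post
} catch {
    $_.Exception.Response.StatusCode.value__   # expect 401
}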

Configure Diagnostics for APIM

In this solution, APIM diagnostics are configured to forward log data to Log Analytics. Testing and validation will be carried out using insights from Log Analytics.

Note: Setting up diagnostics is outside the scope of this article. However, you can visit the following link for more information. https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-use-azure-monitor

 

 

The diagram below shows which logs are being sent to the Log Analytics workspace.

MCP Tool configuration in Copilot Studio

  1. Launch Copilot Studio at https://copilotstudio.microsoft.com/.
  2. Configuration of the environment and agent is beyond the scope of this article. It is assumed you already have an environment set up and an agent created. The following link explains how to create an agent in Copilot Studio: Quickstart: Create and deploy an agent — Microsoft Copilot Studio | Microsoft Learn
  3. Inside the agent configuration, click "Add tool".

 

4. Click on New tool.

 

5. Select Model Context Protocol.

6. Provide all relevant information for the MCP server. Make sure your server URL ends with your MCP base path; in this case, it is the APIM MCP server URL with the base path configured in APIM at the end. Provide a server name and server description.

Select the OAuth 2.0 radio button.

7. Provide the following in the OAuth 2.0 section.

This will provide you with a Redirect URL; you need to configure this redirect URL in the client app registration, copilot-studio-client in this case.

 

 

Configure Redirect URI in Client App Registration

Visit the client app registration, i.e. copilot-studio-client. Click the Authentication tab and provide the Web Redirect URIs as shown below.

 

Note: Redirect URIs MUST be configured in the app registration. Otherwise, authorization will not complete and sign-in will fail.

 

Configure redirect URI in APIM API app registration

Also configure the apim-mcp-backend-api app registration with the same redirect URI as shown below.

Modify MCP connector in PowerApps

Now visit https://make.powerapps.com and open the newly created connector as shown below.

Select the Security tab and set the Resource URL to the application ID URI of apim-mcp-backend-api, configured earlier in the app registration under "Expose an API". Add .default in the scope. Provide the secret of the client app registration; otherwise it will not let you update the connector. This is an extra security measure for updating the connector in Power Apps.

Click Update connector.

 

CORS Configuration

CORS configuration is a MUST, since our Azure Container App is a remote MCP server with a totally different domain (origin).

Power Apps and CORS for External Domains — Brief Overview

When embedding or integrating Power Apps with external web applications or APIs, Cross-Origin Resource Sharing (CORS) becomes a critical consideration. CORS is a browser security feature that restricts web pages from making requests to a different domain than the one that served the page, unless explicitly allowed.

Key Points:

  • Power Apps hosted on *.powerapps.com or within Microsoft 365 domains will block calls to external APIs unless those APIs include the proper CORS headers.
  • The external API must return:
    • Access-Control-Allow-Origin: https://apps.powerapps.com (or * for all origins, though not recommended for production)
    • Access-Control-Allow-Methods: GET, POST, OPTIONS (or as needed)
    • Access-Control-Allow-Headers: Content-Type, Authorization (and any custom headers)
  • If the API requires authentication (e.g., OAuth 2.0), ensure preflight OPTIONS requests are handled correctly.
  • For scenarios where you cannot modify the external API, consider using:
    • Power Automate flows as a proxy
    • Azure API Management or Azure Functions to inject CORS headers
  • Always validate security implications before enabling wide-open CORS.
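One way to check an endpoint's CORS behavior from the outside (a sketch; the URL is a placeholder for your container app) is to send a preflight-style OPTIONS request and inspect the headers that come back:

# Send a preflight-style OPTIONS request and print the CORS response headers.
$resp = Invoke-WebRequest -Uri "https://streamable-mcp-server2.example.azurecontainerapps.io/mcp" -Method Options -Headers @{
    "Origin"                         = "https://apps.powerapps.com"
    "Access-Control-Request-Method"  = "POST"
    "Access-Control-Request-Headers" = "Content-Type, Authorization"
}
$resp.Headers.GetEnumerator() | Where-Object { $_.Key -like "Access-Control-*" }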

If CORS is not set up, you will encounter the following error in Copilot Studio after pressing F12 (browser developer tools):

 

 

CORS policy — blocking the container app

Azure Container Apps provides a very efficient way of configuring CORS in the Azure portal.

  1. Launch the Azure portal. Visit the Azure container app, i.e. streamable-mcp-server2 in this case.
  2. Click on CORS under the Networking section.
  3. Configure the following in the Allowed Origins section as shown below. localhost is added to make it work from a local laptop, although it is not required for Copilot Studio.

 

 

4. Click on the “Allowed Methods” tab and provide the following.

 

 

5. Provide the wildcard “*” in the “Allowed Headers” tab. Although this is not recommended for a production system, it is done here for the sake of simplicity; configure specific headers for added security.

 

 

6. Click “Apply”. This will configure CORS for the remote application.

Test the MCP custom connector

We are in the final stages of configuring the connector. It is time to test whether everything is configured correctly and works.

  1. Launch https://make.powerapps.com and click on “Custom connectors”, select your configured connector, and click the “5. Test” tab as shown below. You will see Selected Connection as blank if you are running it for the first time. Click “+ New connection”.

2. The new connection will launch the authorization flow, and a browser dialog will pop up to request an authorization code.

3. Click “Create”.

4. Complete the login process. This will create a successful connection.

5. Click “Test operation”. If the response is 406, everything is configured correctly, as shown below.

Solution validation

Add user in Enterprise Application for App roles

Roles have been defined under the required claims in the APIM inbound policy and are also configured in the apim-mcp-backend-api app registration. As a result, any request from Copilot Studio will be denied if this role is not properly assigned. The role is included in the JWT access token, which we will validate in the following sections.

To assign role, perform the following steps.

  1. Visit the Azure portal. Visit Enterprise applications.
  2. Select the APIM backend app registration; in this case, for example, apim-mcp-backend-api.
  3. Click "Users and groups".
  4. Select "Add user/group".

 

5. Select the user or group who should have access to the role.

6. Click "Assign". It will look like the screenshot below.

 

Note: Role assignment for users or groups is an important step. If it is not configured, MCP server tests will fail in Copilot Studio.

Test MCP server in Copilot Studio

  1. Launch Copilot Studio, click on the agent you created in earlier steps, and click on the “Tools” tab. Select your MCP tool as shown in the following figure.

Make sure it is “Enabled”. If you have other tools attached to the same agent, disable them for now for testing.

Make sure you have the connection available that we created during the testing of the custom connector in the earlier step. You can also initiate a fresh connection by clicking on the drop-down under “Connection” as shown below.

Refreshing the tools will show all the tools available in this MCP server.

Provide a sample prompt such as “Give me the stock price of Tesla”. This will trigger the MCP server and call the respective method to fetch the stock price of Tesla.

Now try a weather-related question to see more.

The weather forecast tool in the MCP server is now invoked.

APIM Monitoring with Log Analytics

We previously configured APIM diagnostic settings to forward log data to Log Analytics. In this section, we’ll review that data, as the inbound policy in APIM sends valuable information to Log Analytics.

Run a Kusto query to retrieve data from the last 30 minutes. As shown, the logs capture the APIM API endpoint URL and the backend URL, which corresponds to the Azure Container App endpoint.
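The article does not include the query text, so here is a representative example (assuming the ApiManagementGatewayLogs table that APIM diagnostics populate), run from PowerShell with the Az.OperationalInsights module:

# Query APIM gateway logs from the last 30 minutes in the Log Analytics workspace.
$query = @"
ApiManagementGatewayLogs
| where TimeGenerated > ago(30m)
| project TimeGenerated, Url, BackendUrl, ResponseCode, TraceRecords
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<WORKSPACE-ID>" -Query $query |
    Select-Object -ExpandProperty Results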

Scrolling further, we find the TraceRecords section. This contains the information captured by APIM inbound policies and sent to Log Analytics. The figure below illustrates the TraceRecords data. In the inbound policy, we configured it to extract details from the access token—such as the token itself, username, scope, and roles—and forward them to Log Analytics.

Now let's capture the access token to the clipboard, launch https://jwt.io (a JSON Web Token (JWT) debugger), and paste the access token into the ENCODED VALUE box as shown below. Note the following information.

  • aud: This shows the application ID URI of apim-mcp-backend-api, which shows the access token was requested for that audience.
  • appid: This shows the client ID of the copilot-studio-client app registration.
  • You can also see roles and scope. These roles are specified in the APIM inbound policy.

 

Note: As you can see, roles are included in the access token; if the role is not assigned in the enterprise application for "apim-mcp-backend-api", all requests will be denied by the APIM inbound policy configured earlier.

 

Perform a test using another Azure AD account that does not have the app role assigned

Now, let's try the Copilot Studio agent by logging in with another account that is not assigned the "mcp.read" role.

Let's review the diagram below.

 

  • Logged in as demo and tried to access the MCP tool in the Copilot Studio agent.
  • The request failed with the error "Missing required scope or role". This error comes from the <on-error> section of the APIM policy configured earlier.

 

Let's review Log Analytics. As you can see, the request failed with a 403 error due to the inbound APIM policy, and there is no backend URL. The error is also reported under TraceRecords, as we configured in the APIM policy.

 

Now copy the access token from Log Analytics and paste it into jwt.io. Notice in the diagram below that there is no "roles" claim in the access token, resulting in access being denied by the APIM inbound policy before the request reaches the APIM backend, i.e. the Azure Container App.

Assign the app role to the demo account

Let's assign the "mcp.read" role to the demo account and test whether it can access the tool.

  1. Visit the Azure portal, launch Enterprise applications, and select "apim-mcp-backend-api" as in this example.
  2. Click "Users and groups".
  3. Click "+ Add user/group".
  4. Select demo.
  5. Click "Select".
  6. Click "Assign".

The end result will look as shown below.

Now, log in again as demo.

Make sure a new access token is generated. Access token refresh happens after one hour.

 

As you can see in the image below, this time the request is successful after assigning the "mcp.read" app role.

Now let's review the Log Analytics entries.

Let's review the access token in JWT.io. As you can see, roles are included in the access token.

 

Conclusion

Exposing the MCP server through Azure API Management (APIM) and integrating it with Copilot Studio agents provides a secure and scalable way to extend enterprise capabilities. By implementing OAuth 2.0, you ensure robust authentication and authorization, protecting sensitive data and maintaining compliance with industry standards.

Beyond security, APIM adds significant operational value. With APIM policies, you can monitor traffic, enforce rate limits, and apply fine-grained controls to manage access and performance effectively. This combination of security and governance empowers organizations to innovate confidently while maintaining control and visibility over API usage.

In today’s enterprise landscape, leveraging APIM with OAuth 2.0 for MCP integration is not just best practice—it’s a strategic move toward building resilient, secure, and well-governed solutions.
