Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Build a personal organization command center with GitHub Copilot CLI


What if you could remove the struggle of context switching across several apps, bringing them together into one place?

Meet Brittany Ellich, Staff Software Engineer, and the productivity tool she built to streamline her work. We sat down with Brittany to learn about this project: what she built, how she did it, and how AI supported the development process from ideation to implementation. Brittany created a visual home that fits how she learns and thinks, all inspired by the GitHub Copilot CLI.

Visual learner? Watch the video above!

Q & A

What is your role at GitHub?

I’m a staff software engineer on the billing team at GitHub. My day-to-day work mostly consists of working on metered billing, so things like keeping records of Actions minutes, storage amounts, and Copilot usage. I passionately dogfood everything that comes out of the Copilot org. I’m also an open source contributor to ATProto projects and built Open Social for applications built on the AT Protocol.

What did you build?

I built a personal organization command center to solve a simple problem: digital fragmentation. My goal was to take everything scattered across a dozen different apps and unify them into one calm, central space.

How long did v1 take to make?

I use a plan-then-implement workflow when building systems, leveraging AI for planning and Copilot for implementation. For v1, this approach let me move from idea to a working tool in a single day alongside my other regular work.

While planning, I have Copilot interview me with questions about how something should work until we have a plan that I think is adequate. That way, there’s less guesswork about what I want done and implementation goes more smoothly. Copilot will implement the work based on the plan that we put together.

What’s your favorite tool stack to build with?

I like working in agent mode in VS Code for synchronous development, typically with up to 2 non-competing agent workflows going at a time, and Copilot Cloud Agent for asynchronous development. I typically try to keep a few asynchronous tasks flowing with Copilot Cloud Agent, like bug fixes or tech debt changes that have been well-scoped, while I’m focusing on the work that needs more oversight in VS Code.

Follow-up loaded question: Do you care what tech stack your apps use now?

Not really. I’ve always wanted to build an Electron app, and this is technically my first one, but I can’t say I learned a ton about Electron during this process since it was almost completely built by Agent Mode. That said, I went in and simplified the repo significantly to make it publicly accessible, which required a lot more hands-on work (agents seem to like adding code but are much less enthusiastic about removing code). Even so, I felt pretty comfortable reading through the repo and making changes despite not having a ton of familiarity with Electron apps.

Check out the project repo >

What’s your one-line takeaway for other builders?

Go build something! Building solutions from scratch has never been easier, and it’s helpful for learning how to work with new AI tools.

How do you keep up with news and changes in the industry?

I stay on top of industry news through articles, podcasts, and social media. I read articles that are shared internally on GitHub’s Slack, and I read the GitHub blog. We have a ton of great engineers who are great at curating useful resources and sharing them with the team. There are a few podcasts that I like for keeping up with things, like How I AI and Last Week in AI. On social media, I’m active on Bluesky and have had a ton of great conversations with other engineers there.

Try Brittany’s approach

Brittany’s project is a good reminder that the most useful projects often start as small fixes for everyday problems.

While you can use your own stack for this, if you’d like to try something similar, here are the tools Brittany used:

  • Electron: Cross-platform desktop application framework
  • React: JavaScript UI library for components and state management
  • Vite: Build tool with hot module replacement
  • Tailwind: CSS utility framework
  • WorkIQ MCP: MCP server and CLI for accessing Microsoft 365 data

All of these are open source, and GitHub Copilot can help you get started with them quickly!

If you’d like her exact solution, you can clone Brittany’s repository to get up and running right away. You’ll need the following on your machine:

  • Node.js (v18 or higher)
  • GitHub Copilot CLI (for WorkIQ setup)
  • A Microsoft 365 account (for calendar sync)
  • An ElevenLabs account (for voice assistant setup)

There are more detailed instructions in her repository README file!

Get started with GitHub Copilot CLI >

The post Build a personal organization command center with GitHub Copilot CLI appeared first on The GitHub Blog.

Read the whole story
alvinashcraft
52 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The Gemini app is now on Mac

Google is bringing the Gemini app to macOS as a native desktop experience.

Gemma 4 on Azure Container Apps Serverless GPU


Every prompt you send to a hosted AI service leaves your tenant. Your code, your architecture decisions, your proprietary logic — all of it crosses a network boundary you don't control. For teams building in regulated industries or handling sensitive IP, that's not a philosophical concern. It's a compliance blocker.

What if you could spin up a fully private AI coding agent — running on your own GPU, in your own Azure subscription — with a single command?

That's exactly what this template does. One azd up, 15 minutes, and you have Google's Gemma 4 running on Azure Container Apps serverless GPU with an OpenAI-compatible API, protected by auth, and ready to power OpenCode as your terminal-based coding agent. No data leaves your environment. No third-party model provider sees your code. Full control.

Why Self-Hosted AI on ACA?

Azure Container Apps serverless GPU gives you on-demand GPU compute without managing VMs, Kubernetes clusters, or GPU drivers. You get a container, a GPU, and an HTTPS endpoint — Azure handles the rest.

Here's what makes this approach different from calling a hosted model API:

  • Complete data privacy — your code and prompts never leave your Azure subscription. No PII exposure, no data leakage, no third-party processing. For teams navigating HIPAA, SOC 2, or internal IP policies, this is the simplest path to compliant AI-assisted development.
  • Predictable costs — you pay for GPU compute time, not per-token. Run as many prompts as you want against your deployed model.
  • No rate limits — the GPU is yours. No throttling, no queue, no waiting for capacity.
  • Model flexibility — swap models in minutes. Start with the 4B parameter Gemma 4 for fast iteration, scale up to 26B for complex reasoning tasks.

This isn't a tradeoff between convenience and privacy. ACA serverless GPU makes self-hosted AI as easy to deploy as any SaaS endpoint — but the data stays yours.

What You're Building

What does the configuration look like to run Gemma 4 + Ollama securely on ACA serverless GPU?

The template deploys two containers into an Azure Container Apps environment:

  1. Ollama + Gemma 4 — running on a serverless GPU (NVIDIA T4 or A100), serving an OpenAI-compatible API
  2. Nginx auth proxy — a lightweight reverse proxy that adds basic authentication and exposes the endpoint over HTTPS

The Ollama container pulls the Gemma 4 model on first start, so there's nothing to pre-build or upload. The nginx proxy runs on the free Consumption profile — only the Ollama container needs GPU.

After deployment, you get a single HTTPS endpoint that works with curl, any OpenAI-compatible SDK, or OpenCode — a terminal-based AI coding agent that turns the whole thing into a private GitHub Copilot alternative.

Step 1: Deploy with azd up

You need the Azure CLI and Azure Developer CLI (azd) installed.

git clone https://github.com/simonjj/gemma4-on-aca.git
cd gemma4-on-aca
azd up

The setup walks you through three choices:

GPU selection — T4 (16 GB VRAM) for smaller models, or A100 (80 GB VRAM) for the full Gemma 4 lineup.

Model selection — depends on your GPU choice. The defaults are tuned for the best quality-to-speed ratio on each GPU tier.

Proxy password — protects your endpoint with basic auth.

Region availability: Serverless GPUs are available in a subset of regions, including australiaeast, brazilsouth, canadacentral, eastus, italynorth, swedencentral, uksouth, westus, and westus3. Pick one of these when prompted for location.

That's it. Provisioning takes about 10 minutes — mostly waiting for the ACA environment to create and the model to download.

The deployment output

Choose Your Model

Gemma 4 ships in four sizes. The right choice depends on your GPU and workload:

| Model | Params | Architecture | Context | Modalities | Disk Size |
| --- | --- | --- | --- | --- | --- |
| gemma4:e2b | ~2B | Dense | 128K | Text, Image, Audio | ~7 GB |
| gemma4:e4b | ~4B | Dense | 128K | Text, Image, Audio | ~10 GB |
| gemma4:26b | 26B | MoE (4B active) | 256K | Text, Image | ~18 GB |
| gemma4:31b | 31B | Dense | 256K | Text, Image | ~20 GB |

Real-World Performance on ACA

We benchmarked every model on both GPU tiers using Ollama v0.20 with Q4_K_M quantization and 32K context in Sweden Central:

| Model | GPU | Tokens/sec | TTFT | Notes |
| --- | --- | --- | --- | --- |
| gemma4:e2b | T4 | ~81 | ~15ms | Fastest on T4 |
| gemma4:e4b | T4 | ~51 | ~17ms | Default T4 choice — best quality/speed |
| gemma4:e2b | A100 | ~184 | ~9ms | Ultra-fast |
| gemma4:e4b | A100 | ~129 | ~12ms | Great for lighter workloads |
| gemma4:26b | A100 | ~113 | ~14ms | Default A100 choice — strong reasoning |
| gemma4:31b | A100 | ~40 | ~30ms | Highest quality, slower |

51 tokens/second on a T4 with the 4B model is fast enough for interactive coding assistance. The 26B model on A100 delivers 113 tokens/second with noticeably better reasoning — ideal for complex refactoring, architecture questions, and multi-file changes.

The 26B and 31B models require A100 — they don't fit in T4's 16 GB VRAM.
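The fit-and-speed tradeoff above can be sketched as a small helper. This is an illustrative snippet using the approximate numbers from the two tables, not part of the template: it filters variants by available VRAM (a rough cut based on quantized weight size) and estimates generation time from the benchmarked throughput.

```python
# Illustrative helper based on the tables above (numbers are approximate
# benchmarks, not guarantees; this code is not part of the template).
MODELS = {
    # name: (approx. Q4_K_M weight size in GB, tokens/sec on T4, tokens/sec on A100)
    "gemma4:e2b": (7, 81, 184),
    "gemma4:e4b": (10, 51, 129),
    "gemma4:26b": (18, None, 113),  # None: does not fit in T4's 16 GB VRAM
    "gemma4:31b": (20, None, 40),
}

def models_that_fit(vram_gb):
    """Variants whose quantized weights fit in the given VRAM (rough cut)."""
    return [name for name, (size, _, _) in MODELS.items() if size <= vram_gb]

def response_seconds(model, gpu, tokens=500):
    """Estimated generation time for `tokens` tokens, ignoring TTFT (~10-30 ms)."""
    _, t4_rate, a100_rate = MODELS[model]
    rate = t4_rate if gpu == "T4" else a100_rate
    if rate is None:
        raise ValueError(f"{model} does not run on {gpu}")
    return tokens / rate
```

For example, a 500-token answer from gemma4:e4b on a T4 takes roughly ten seconds, which is why it is the default at that tier.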

Step 2: Verify Your Endpoint

After azd up completes, the post-provision hook prints your endpoint URL. Test it:

curl -u admin:<YOUR_PASSWORD> \
  https://<YOUR_PROXY_ENDPOINT>/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma4:e4b",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

You should get a JSON response with Gemma 4's reply. The endpoint is fully OpenAI-compatible — it works with any tool or SDK that speaks the OpenAI API format.
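The same request can be made from Python with nothing but the standard library. This is a minimal sketch mirroring the curl example above; the endpoint and password are placeholders you substitute with your own values.

```python
# Minimal stdlib sketch of calling the endpoint from Python, mirroring the
# curl example above. <YOUR_PROXY_ENDPOINT> and <YOUR_PASSWORD> are
# placeholders -- swap in your deployment's values.
import base64
import json
import urllib.request

ENDPOINT = "https://<YOUR_PROXY_ENDPOINT>/v1/chat/completions"
# Same credentials curl passes with -u admin:<YOUR_PASSWORD>
CREDS = base64.b64encode(b"admin:<YOUR_PASSWORD>").decode()

payload = json.dumps({
    "model": "gemma4:e4b",
    "messages": [{"role": "user", "content": "Hello!"}],
}).encode()

req = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {CREDS}",
    },
)
# reply = json.load(urllib.request.urlopen(req))  # uncomment once deployed
```

Because the proxy speaks the OpenAI wire format, any OpenAI-compatible SDK configured with this base URL and Authorization header should work the same way.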

Step 3: Connect OpenCode

Here's where it gets powerful. OpenCode is a terminal-based AI coding agent — think GitHub Copilot, but running in your terminal and pointing at whatever model backend you choose.

The azd up post-provision hook automatically generates an opencode.json in your project directory with the correct endpoint and credentials. If you need to create it manually:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "gemma4-aca": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Gemma 4 on ACA",
      "options": {
        "baseURL": "https://<YOUR_PROXY_ENDPOINT>/v1",
        "headers": {
          "Authorization": "Basic <BASE64_OF_admin:YOUR_PASSWORD>"
        }
      },
      "models": {
        "gemma4:e4b": {
          "name": "Gemma 4 e4b (4B)"
        }
      }
    }
  }
}

Generate the Base64 value: echo -n "admin:YOUR_PASSWORD" | base64

Now run it:

opencode run -m "gemma4-aca/gemma4:e4b" "Write a binary search in Rust"

That command sends your prompt to Gemma 4 running on your ACA GPU, and streams the response back to your terminal. Every token is generated on your infrastructure. Nothing leaves your subscription.

For interactive sessions, launch the TUI:

opencode

Select your model with /models, pick Gemma 4, and start coding. OpenCode supports file editing, code generation, refactoring, and multi-turn conversations — all powered by your private Gemma 4 instance.

The Privacy Case

This matters most for teams that can't send code to external APIs:

  • HIPAA-regulated healthcare apps — patient data in code, schema definitions, and test fixtures stays in your Azure subscription
  • Financial services — proprietary trading algorithms and risk models never leave your network boundary
  • Defense and government — classified or CUI-adjacent codebases get AI assistance without external data processing agreements
  • Startups with sensitive IP — your secret sauce stays secret, even while you use AI to build faster

With ACA serverless GPU, you're not running a VM or managing a Kubernetes cluster to get this privacy. It's a managed container with a GPU attached. Azure handles the infrastructure, you own the data boundary.

Clean Up

When you're done:

azd down

This tears down all Azure resources. Since ACA serverless GPU bills only while your containers are running, you can also scale to zero replicas to pause costs without destroying the environment.

Get Started


What’s new in Microsoft Entra – March 2026


From January through March 2026, Microsoft Entra introduced key updates to help organizations strengthen identity security, simplify governance, and improve user experience. This Q1 roundup highlights the latest feature releases and important changes—organized by product—so you can quickly see what’s new, what’s changing, and what actions you may need to take.

Microsoft Entra ID

New releases

Change announcements

Security improvements

Conditional Access policies now apply to Windows Hello for Business and macOS Platform SSO registration

[Action may be required]

If your organization has Conditional Access (CA) policies scoped to Register security information, those policies will now apply when users set up Windows Hello for Business (WHfB) or register macOS Platform SSO credentials. Organizations without these policies aren't affected.

When this will happen

  • May 25, 2026: Gradual rollout begins.
  • June 3, 2026: Rollout complete for all tenants.

How this affects your organization

Users registering WHfB or macOS PSSO credentials will need to satisfy your register-security-info CA policies and may see a CA prompt during device setup. Important: WHfB uses the Device Registration Client, classified as "Other clients" in CA. If your policy blocks "Other clients," WHfB and PSSO provisioning will be blocked. Add a trusted location exclusion to avoid this.

Action recommended

  1. Go to Microsoft Entra admin center > Protection > Conditional Access and find policies targeting Register security information.
  2. Review Grant controls, especially policies blocking "Other clients."
  3. Test with report-only mode before enforcement reaches your tenant.
  4. Update helpdesk docs. Users may see a new prompt during device setup.

Learn more.

Jailbreak detection in Authenticator app

[Action may be required]

Starting February 2026, Microsoft Authenticator introduced jailbreak/root detection for Microsoft Entra credentials in the Android app. The rollout progresses from warning mode → blocking mode → wipe mode. Users must move to compliant devices to continue using Microsoft Entra accounts in Authenticator. Learn more.

Microsoft Entra Agent ID

Change announcements

Simplifying agent management with Agent 365 

[Action may be required]

We’re consolidating agent management experiences to make it easier to observe, govern, and secure all agents in your tenant. Agent 365 will be the single source of truth, offering a unified catalog, consistent visibility, and simplified management.

What’s changing

With this change:

  • Agent 365 becomes the unified registry and control plane for agents.  
  • Microsoft Entra continues to provide the identity foundation through Agent ID.  
  • The existing registry Graph API will be deprecated and replaced by a new API powered by Agent 365. Agents registered via the current API will need to be re-registered. You'll be notified soon about the deprecation date and the availability of the new registry Graph API.  
  • All agent access and governance capabilities remain fully available through Agent ID and Agent 365.  

Learn more.

Microsoft Entra ID Governance

New releases

Change announcements

Identity Modernization

Microsoft Entra Connect security update to block hard match for users with Microsoft Entra roles

[Action may be required]

What is hard matching in Microsoft Entra Connect Sync and Cloud Sync?

When Microsoft Entra Connect or Cloud Sync adds new objects from Active Directory, the Microsoft Entra ID service tries to match each incoming object with a Microsoft Entra object by looking up the incoming object’s sourceAnchor value against the OnPremisesImmutableId attribute of existing cloud-managed objects in Microsoft Entra ID. If there’s a match, Microsoft Entra Connect or Cloud Sync takes over the source of authority (SoA) for that object and updates it with the properties of the incoming Active Directory object, in what is known as a "hard match."

To strengthen the security posture of your Microsoft Entra ID environment, we are introducing a change that will restrict certain types of hard match operations by default.

What’s changing

Beginning June 1, 2026, Microsoft Entra ID will block any attempt by Microsoft Entra Connect Sync or Cloud Sync to hard-match a new user object from Active Directory to an existing cloud-managed Microsoft Entra ID user object that holds Microsoft Entra roles.

This means:

  • If a cloud managed user already has onPremisesImmutableId (sourceAnchor) set and is assigned a Microsoft Entra role, Microsoft Entra Connect Sync or Cloud Sync will no longer be able to take over the Source of Authority of that user by hard-matching with an incoming user object from Active Directory.
  • This safeguard prevents attackers from taking over privileged cloud managed users in Microsoft Entra by manipulating attributes of user objects in Active Directory.
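Conceptually, the hard-match flow and the new safeguard look like the sketch below. This is an illustrative Python sketch of the logic as described above, not Microsoft's implementation; the dictionary keys are stand-ins for the directory attributes.

```python
# Conceptual sketch of hard matching and the June 2026 safeguard
# (illustrative only -- not Microsoft's implementation).
def hard_match(incoming_source_anchor, cloud_users, block_privileged=True):
    """Match an incoming AD object's sourceAnchor against cloud users'
    onPremisesImmutableId. On a match, sync takes over the source of
    authority -- unless the user holds Entra roles and the safeguard is on."""
    for user in cloud_users:
        if user["onPremisesImmutableId"] == incoming_source_anchor:
            if block_privileged and user.get("entraRoles"):
                raise PermissionError(
                    "hard match blocked: target user holds Microsoft Entra roles"
                )
            user["sourceOfAuthority"] = "ActiveDirectory"  # sync takes over SoA
            return user
    return None  # no match; a new object would be provisioned instead
```

In this model, a privileged cloud user can no longer be taken over via a manipulated on-premises object, while unprivileged users continue to hard-match as before.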

What’s not changing

  • Hard match operations for cloud users without Microsoft Entra roles are not affected.
  • Soft match behavior isn't affected.
  • Ongoing sync from Active Directory to Entra ID for previously hard-matched objects will not be affected.

Customer action required

If you encounter a hard match error after June 1, 2026, see our documentation for mitigation steps.

Learn more.

Microsoft Entra External ID

New releases

Global Secure Access

New releases


-Shobhit Sahay


Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.


How a GitHub engineer built an AI Productivity hub with Copilot CLI

From: GitHub
Duration: 5:47
Views: 417

Meet Brittany Ellich, a Staff Software Engineer here at GitHub, and explore the custom productivity tool she built using the GitHub Copilot CLI. Because she prefers visual interfaces over the command line, she used GitHub Copilot to vibe code a personalized command center. Her app features an AI chat agent named Marvin, unified task lists, and calendar integrations. Watch to see how she built it and learn why creating your own AI tools is the best way to learn.

Her project is open source; check it out: https://github.com/features/copilot/cli?utm_source=social-youtube-build-a-game-cli-features-cta&utm_medium=social&utm_campaign=dev-pod-copilot-cli-2026

#GitHubCopilot #CopilotCLI #VibeCoding

Stay up-to-date on all things GitHub by subscribing and following us at:
YouTube: http://bit.ly/subgithub
Blog: https://github.blog
X: https://twitter.com/github
LinkedIn: https://linkedin.com/company/github
Instagram: https://www.instagram.com/github
TikTok: https://www.tiktok.com/@github
Facebook: https://www.facebook.com/GitHub/

About GitHub:
It’s where over 180 million developers create, share, and ship the best code possible. It’s a place for anyone, from anywhere, to build anything—it’s where the world builds software. https://github.com


The Product Architecture Behind Trusted AI Experiences

Learn why identity is the core of AI architecture and how to manage non-human identities to build secure, scalable, and trusted AI agents.
