Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Cordova Common 6.0.0 Released!


We are happy to announce that cordova-common@6.0.0 has been released. This is one of the libraries used behind the scenes by nearly all of the Cordova tooling, and it provides utilities for tasks such as config.xml parsing.

Release Highlights

The most notable changes in this major release are:

  • Removal of superspawn, which also dropped the npm dependencies q and cross-spawn.
  • Update to the <resource-file> directive to allow directories and their contents to be copied.
  • Update to the <config-file> and <edit-config> directives to support finding unprefixed Info.plist files.
  • Increase in the Node.js engine requirement to >=20.9.0.

Changes include:

Breaking Changes:

  • GH-239 feat!: change resource-file behavior to support directory contents
  • GH-236 feat!: remove superspawn.js
  • GH-231 feat!: remove superspawn & npm packages q & cross-spawn
  • GH-235 dep(npm)!: bump @cordova/eslint-config@6.0.0
  • Bumps node >=20.9.0
  • GH-228 chore!: bump node >=20.5.0 & upgrade npm packages

Features:

  • GH-212 feat: Support finding an unprefixed Info.plist file

Chores:

  • GH-240 chore: update package-lock.json
  • GH-232 chore(tests): Improve test coverage
  • GH-229 chore(coverage): Move from nyc to c8 for code coverage
  • GH-238 dep(npm): update w/ rebuilt package-lock.json
  • GH-237 refactor(bom): Bring strip-bom in as util

CI & Refactoring:

  • GH-234 ci: use macos-15
  • GH-233 ci(workflow): update release-audit & license config
  • GH-230 ci: add permissions block & commit hash pinning of third-party actions

Automating .NET User Secrets with PowerShell


For dotnet developers, Microsoft created a dev-time convenience for handling secret values: no shared infrastructure dependency, and no secrets stored in the repository. Add a helper script on top of it, and your dev shop will have a convenient way to get up and running.

User Secrets are stored in an unencrypted JSON file (secrets.json, in a folder named after the UserSecretsId). Depending on the platform, they are in either %APPDATA%\Microsoft\UserSecrets or ~/.microsoft/usersecrets.

Simple Variant

The simplest variant is demonstrated by this PowerShell script. Please note that in a real-world scenario you would parameterize the script to allow entry of the secrets; for simplicity we use random GUIDs here.

#Requires -Version 7.4

$ErrorActionPreference = "Stop"
$RepoRoot = Split-Path $PSScriptRoot -Parent

$Secret1 = [System.Guid]::NewGuid().ToString()
$Secret2 = [System.Guid]::NewGuid().ToString()

$APP1_FOLDER = Join-Path -Path $RepoRoot -ChildPath "src/App1"
Push-Location $APP1_FOLDER
Write-Host "Setting secrets for $APP1_FOLDER"
dotnet user-secrets clear
dotnet user-secrets set "App1:ConnectionStrings:Secret1" $Secret1
dotnet user-secrets set "App1:ConnectionStrings:Secret2" $Secret2
Pop-Location

This produces one of two possible outputs.

Failure output:

Could not find the global property 'UserSecretsId' in MSBuild project '/Secrets/src/App1/App1.csproj'. Ensure this property is set in the project or use the '--id' command line option.

Success output:

Setting secrets for /Users/maxhamulyak/Dev/BlogTopics/_posts/Secrets/src/App1
Successfully saved App1:ConnectionStrings:Secret1 to the secret store.
Successfully saved App1:ConnectionStrings:Secret2 to the secret store.

To be able to set secrets at the project level, the UserSecretsId property needs to be set, for example <UserSecretsId>[ANY-STRING-VALUE]</UserSecretsId>. Doing this project by project for a large solution can be a hassle, so I prefer creating a Directory.Build.targets file. We can then ensure each project has either an explicit or an implicit UserSecretsId.

<Project>
  <PropertyGroup>
      <UserSecretsId Condition="'$(UserSecretsId)' == ''">$(MSBuildProjectName)-dev-secrets</UserSecretsId>
  </PropertyGroup>
</Project>

Multiple secrets at once

The first version of the script works, but invoking the command line once per secret feels inefficient. Luckily, we can also bulk import secrets from a JSON document. The trick is to create the object in PowerShell, convert it to JSON, and pipe it into the dotnet user-secrets set command.

#Requires -Version 7.4

$ErrorActionPreference = "Stop"
$RepoRoot = Split-Path $PSScriptRoot -Parent

$Secret1 = [System.Guid]::NewGuid().ToString()
$Secret2 = [System.Guid]::NewGuid().ToString()

$APP1_FOLDER = Join-Path -Path $RepoRoot -ChildPath "src/App1"
Push-Location $APP1_FOLDER
Write-Host "Setting secrets for $APP1_FOLDER"
dotnet user-secrets clear
$App1Config = @{
    App1 = @{
        ConnectionStrings = @{
            Secret1 = $Secret1
            Secret2 = $Secret2
        }
    }
}
$App1Config | ConvertTo-Json -Depth 5 | dotnet user-secrets set
Pop-Location

Using the Same Secret Across Multiple Projects

The previous iteration was already an improvement over our first script, but for me it does not quite match the real world. For instance, in Azure I would create one Key Vault per resource group rather than multiple key vaults. Because of this, I picked up the habit of prefixing secrets with the name of the executable; for example, thus far in this post I have used the App1 prefix.

If we now set the MSBuild property <UserSecretsId>Project-5ea2d981-14f7-4487-93c0-d4b7e3dbebf1</UserSecretsId>, we can apply it to all projects at once.

#Requires -Version 7.4

$ErrorActionPreference = "Stop"

$Secret1 = [System.Guid]::NewGuid().ToString()
$Secret2 = [System.Guid]::NewGuid().ToString()


$App1Config = @{
    ConnectionStrings = @{
            Secret1 = $Secret1
            Secret2 = $Secret2
    }
}

$Config = @{
    App1 = $App1Config
}

$SecretId = "Project-5ea2d981-14f7-4487-93c0-d4b7e3dbebf1"
dotnet user-secrets clear --id $SecretId
$Config | ConvertTo-Json -Depth 10 | dotnet user-secrets set --id $SecretId

Closing thoughts

User Secrets are a nice addition to the tool belt. Remembering the correct commands for clearing and updating secrets is not something you should burden your team with, so wrapping them in a script for convenience is my recommended approach. Depending on your deployment model, I would go with either option 2 or option 3, keeping it as close to production as possible.

Remember: User Secrets are intended for local development only and should never be used to store production secrets.



Building an AI Home Security System Using .NET, Python, CLIP, Semantic Kernel, Telegram, and Raspberry Pi 4 – Part 5: Refinements and Recap


In part 5 of this mini-series, we bring together all main components to form the end-to-end solution for the AI home security system.

We also refine and refactor some of the original Python code.

To recap, this home security system uses motion detection, image capture, and AI-powered person recognition to distinguish between known persons (family, friends) and potential intruders.

It sends alerts when unknown persons are detected.

~

System Components

It’s worth recapping the hardware and software components that form the solution.

Hardware

The hardware consists of:

  • Raspberry Pi 4
  • Camera module
  • PIR motion sensor connected to GPIO
  • Bluetooth speaker for local alerts

 

An old laptop is also used to host a .NET API.

Software

Some of the files have been renamed for readability. The following software components are used in the solution:

  • audioplayer.py – audio alert functions
  • botmessage.py – Telegram messaging functions
  • CLIP_Server.py – Python server running the CLIP model to generate image embeddings
  • ImageTrainingAndClassification.py – Telegram bot for training person recognition
  • MotionDetectionCapture.py – Main motion detection and capture script
  • .NET API – Image recognition service with /train and /match endpoints
  • Telegram Bot – For receiving security alerts and training the system

 

Access to the MotionDetectionCapture.py and ImageTrainingAndClassification.py scripts and the CLIP server is controlled through a .NET API.

You can find more information about the above in the earlier blog posts in this series.

 

Spoiler alert – in part 6 of this series, we will introduce Semantic Kernel.

~

Core Logic Flow

Our refined process flow is detailed in the following sequence diagram.

Further information on each of the numbered sub-processes is also included:

1. Motion Detection & Capture

This sub-process is the starting point.  The PIR sensor detects changes in infrared radiation, and the camera activates when this happens.  A photo is taken and saved in real time.

The photo is timestamped and saved to the following location:

/home/admin/repos/HomeSecuritySystem/images/img_YYYYMMDD-HHMMSS.jpg
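
As an illustration only, the core of this step might look something like the sketch below. It assumes gpiozero for the PIR sensor and Picamera2 for the camera; the GPIO pin, image directory, and loop structure are placeholders rather than the exact code from MotionDetectionCapture.py.

# Sketch only: motion-triggered capture with a timestamped filename.
# Assumes a PIR sensor on GPIO 4 and the Picamera2 library; adjust to your wiring.
from datetime import datetime
from pathlib import Path

from gpiozero import MotionSensor
from picamera2 import Picamera2

IMAGE_DIR = Path("/home/admin/repos/HomeSecuritySystem/images")

pir = MotionSensor(4)
camera = Picamera2()
camera.start()

while True:
    pir.wait_for_motion()                             # block until the PIR fires
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")  # img_YYYYMMDD-HHMMSS.jpg
    image_path = IMAGE_DIR / f"img_{stamp}.jpg"
    camera.capture_file(str(image_path))              # save the photo in real time
    pir.wait_for_no_motion()                          # wait for the motion event to end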

2. AI Recognition Check

With the image captured, it can be sent to the .NET API /match endpoint.

The steps are as follows (a minimal sketch of this call follows the list):

  1. .NET API forwards image to CLIP_Server.py running on the Raspberry Pi.
  2. The CLIP Server generates image embedding/vector using CLIP model.
  3. .NET API compares new vector against stored trained embeddings.
  4. The .NET API returns 1 of 2 responses:
    • MATCH: [PersonName] (Similarity: X.XX)
    • NO MATCH FOUND
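
A minimal sketch of that client call is shown below. It assumes aiohttp (mentioned later in the technical notes); the endpoint URL and the form field name are illustrative, not necessarily what the real scripts use.

# Sketch only: post a captured image to the .NET API /match endpoint.
# The URL and the "image" field name are assumptions for illustration.
import asyncio

import aiohttp

MATCH_URL = "http://laptop.local:5000/api/image/match"

async def check_match(image_path: str) -> str:
    async with aiohttp.ClientSession() as session:
        form = aiohttp.FormData()
        with open(image_path, "rb") as f:
            form.add_field("image", f, filename="capture.jpg", content_type="image/jpeg")
            async with session.post(MATCH_URL, data=form) as response:
                response.raise_for_status()
                return await response.text()  # "MATCH: ..." or "NO MATCH FOUND"

# Example usage: result = asyncio.run(check_match("img_20250808-101500.jpg"))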

 

The system must send alerts if it does not recognise the person.  We have some separate logic for that too.

3. Alert Decision Logic

We don’t always want to be alerted for every image that is captured, so we must have logic to decide when to send an alert and when to ignore the image.

We build on the .NET API output from the previous step.

The logic is as follows (a combined sketch follows the two branches below):

 

3.1 MATCH FOUND (Known Person)

In this branch of the logic, a known person is detected and is therefore not a security concern.  The main steps are:

  1. Person is recognized from training data
  2. No alert sent (silent operation)

3.2 NO MATCH FOUND (Unknown Person)

In this branch of the logic, an unknown person was detected, which is a security risk.  Immediate notification is needed!

  • Person not recognized = potential intruder
  • Play audio alert locally
  • Send Telegram message with image and “Intruder alert!” caption
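
Pulled together, the decision branch might look roughly like the sketch below. check_match, play_alert_sound, and send_telegram_alert are placeholder names for the behaviour described above, not the exact functions from the real scripts.

# Sketch only: alert decision based on the /match response text.
# check_match, play_alert_sound and send_telegram_alert are illustrative placeholders.
async def handle_capture(image_path: str) -> None:
    result = await check_match(image_path)  # e.g. "MATCH: Dad (Similarity: 0.91)"

    if result.startswith("MATCH"):
        # Known person: silent operation, no alert is sent.
        return

    # Unknown person: treat as a potential intruder.
    play_alert_sound()                                                # local audio alert (audioplayer.py)
    await send_telegram_alert(image_path, caption="Intruder alert!")  # Telegram message (botmessage.py)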

 

That covers the core logic that forms the basis of the home security system.  Next, we can summarise how to train the system.

~

.NET API Integration

I’m not a Python developer, so I created a simple .NET API to act as a single entry point and common surface area for the various Python scripts that are used.  The following endpoints are available:

  • POST /api/image/train – Train new person with label
  • POST /api/image/match – Check if person matches trained data

 

.NET API Training Flow (/train)

This sequence diagram shows how the home security system handles the training of individuals to be recognised.

It outlines the flow of information between the various components, from collecting and sending images to processing and storing identity data:

 

The steps are as follows:

  1. Send /train [PersonName] command to Telegram bot
  2. Send photo(s) of the person to train
  3. .NET API receives image and forwards to CLIP_Server.py
  4. CLIP Server generates embedding vector for the person’s image
  5. .NET API stores the embedding with the person’s label
  6. Future detections of this person will be recognized by comparing vectors

.NET API Person Matching Flow (/match)

The following sequence diagram details the process for matching a newly captured image from the camera / PIR sensor against the system’s known training data / persons.

Image verification is performed in a series of steps using multiple components, including the Telegram bot, the Python scripts, the .NET API, and the CLIP model server.

 

The steps are as follows:

  1. Send /match command to Telegram bot
  2. Send a photo to test recognition
  3. Image is processed through CLIP Server to generate test embedding
  4. .NET API compares test embedding against all stored embeddings
  5. Bot responds with match result and similarity score

~

CLIP Server Integration

To generate embeddings, OpenAI’s CLIP (Contrastive Language–Image Pre-training) model is used.

CLIP is an AI model developed by OpenAI that understands both images and text, and it makes it easy to encode images into high-dimensional vectors (embeddings).

The .NET API communicates with a Python CLIP server (CLIP_Server.py) to generate image embeddings.

In this system, we use CLIP’s image encoding capabilities to generate numerical representations (embeddings) of people for recognition.

Learn more about CLIP and CLIP_Server.py in part 2 of this series, but in essence, here is how it works (a small similarity-scoring sketch follows the list):

  1. Input: Raw image file is sent.
  2. Processing: CLIP model converts image to 512-dimensional vector
  3. Output: Numerical embedding that uniquely represents the person’s face
  4. Comparison: Uses cosine similarity to compare embeddings (values from -1 to 1, where 1 = identical)
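
The comparison in steps 3 and 4 comes down to cosine similarity plus a threshold check. Below is a minimal sketch of the idea; the actual comparison lives in the .NET API, and the 0.8 threshold is only an example.

# Sketch only: cosine similarity and threshold-based matching over stored embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Returns a value between -1 and 1, where 1 means the vectors are identical.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(new_embedding: np.ndarray,
               stored: dict[str, np.ndarray],
               threshold: float = 0.8) -> str:
    best_label, best_score = None, -1.0
    for label, embedding in stored.items():
        score = cosine_similarity(new_embedding, embedding)
        if score > best_score:
            best_label, best_score = label, score
    if best_label is not None and best_score >= threshold:
        return f"MATCH: {best_label} (Similarity: {best_score:.2f})"
    return "NO MATCH FOUND"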

 

Sequence diagrams follow and provide low-level detail on how these steps are used during the training and matching flows.

CLIP Training Flow

In this flow, we are using CLIP via CLIP_Server.py to generate embeddings for a person we want to recognise (and subsequently not send security alerts for):

  1. .NET API receives image from Telegram bot
  2. .NET API calls CLIP_Server.py with image
  3. CLIP Server uses CLIP model to generate 512-dimensional embedding vector
  4. .NET API stores embedding with person label for future comparisons

CLIP Matching Flow

In this flow, we see how CLIP is used via CLIP_Server.py to generate embeddings for a photo newly taken by the camera / PIR sensor:

  1. .NET API receives image from motion detection system
  2. .NET API calls CLIP_Server.py to generate embedding for new image
  3. .NET API compares new embedding against all stored embeddings using cosine similarity
  4. Returns best match if similarity exceeds threshold, otherwise “NO MATCH FOUND”

~

What We Have Built

We’ve reached the end of this summary. At this point, we’ve created an end-to-end AI home security system.

Security benefits of this system include:

  • No false alarms from family members and regular visitors
  • Immediate alerts for genuine security concerns
  • Learning – gets smarter as you train more people
  • Visual confirmation – image sent with every alert
  • Automated operation – no manual intervention needed

 

No subscription is required, and the system can run entirely on local hardware and software.  The only external dependency is Telegram.

Setup Phase

Main steps involved to start the system are:

  1. Install and configure hardware (Raspberry Pi, camera, PIR sensor)
  2. Start CLIP_Server.py to enable AI image processing (Pi)
  3. Start .NET API service for handling recognition and training requests (Laptop)
  4. Train the system with photos of all authorized persons (via phone or Postman)
  5. Test the system with known and unknown faces
  6. Start motion detection script for continuous monitoring (Pi)

 

One thing to note is that the .NET API stores embeddings in memory; if you stop the API, you need to retrain the system with known persons.  Not ideal, but easily remedied.

Daily Operation

When running, the system will:

  1. Continuously monitor for motion
  2. Capture and analyse any detected movement
  3. Silently allow known persons to pass
  4. Immediately alert on unknown persons

 

Perfect.

~

Technical Notes

Finally, some technical notes related to low-level implementation details. Some of these tweaks helped improve the performance of the system.

  • Motion Detection Cooldown: a 3-second delay between captures avoids multiple alerts for the same event (see the sketch after this list)
  • Image Format: JPEG with timestamp naming convention
  • API Communication: Asynchronous HTTP requests using aiohttp
  • Vector Generation: CLIP model creates 512-dimensional embeddings for face recognition
  • Similarity Threshold: Configurable threshold for determining matches (typically 0.7-0.9)
  • Error Handling: API failures are logged, system continues monitoring
  • GPIO Cleanup: Proper cleanup on system shutdown
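
For instance, the capture cooldown can be as simple as remembering the time of the previous capture. The sketch below assumes the 3-second window from the first note; it is illustrative rather than the exact code.

# Sketch only: suppress captures for 3 seconds after the previous one.
import time

COOLDOWN_SECONDS = 3.0
_last_capture = 0.0

def should_capture() -> bool:
    # Return True only if enough time has passed since the last capture.
    global _last_capture
    now = time.monotonic()
    if now - _last_capture < COOLDOWN_SECONDS:
        return False  # still cooling down; ignore this motion event
    _last_capture = now
    return True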

 

You might add your own.

~

Summary

That’s a wrap.  In the final part of this series, we will introduce Semantic Kernel.  This will let us use natural language to reason over and interact with the AI home security system.  Stay tuned.

~


How to use GitHub Copilot to level up your code reviews and pull requests


Since I joined GitHub as a software engineer on the billing team almost three years ago, I’ve had a front row seat to the evolution of AI coding tools, including GitHub Copilot. What started out as code completions has evolved into so much more, including agentic workflows and refactoring suggestions. When I first started using Copilot, I was mainly using it in VS Code. As Copilot has grown and expanded, I’ve extended my use cases beyond my code editor and into all parts of my day-to-day work, including pull requests, code reviews, and more.

GitHub Copilot is now available in all parts of the software development life cycle and one place where it can be extremely useful is when you’re creating pull requests and doing code reviews. During my time at GitHub, I’ve discovered some practical ways Copilot can make a difference during the pull request and code review processes. Here are a few things I’ve started doing that have made my workflow smoother and more efficient.

Using Copilot suggestions when reviewing code

Often, when I’m reviewing a teammate’s pull request, I’ll have an idea for how their code could be improved, or I’ll want to confirm the code is following best practices. However, I don’t always have time to write the suggested refactored code myself. In these cases, I’ll usually click the Copilot icon next to the file I’m reviewing and ask Copilot to suggest a specific improvement for the lines of code I’m currently reviewing. Then I’ll add that suggestion to my review comment along with an explanation of why I think it would improve the code.

Recently while reviewing a teammate’s code, I noticed some repetitive code in a Ruby file. I clicked the Copilot icon next to the relevant code and prompted it with:

> “Can you refactor this Ruby on Rails code to reduce repetition?”

Copilot’s suggestion matched what I would have written and saved me several minutes, which allowed me to focus on higher-level feedback.

Since I frequently switch context and review code in different languages like Ruby, TypeScript and Go, sometimes I’ll need a reminder on best practices for each language. I recently used this prompt to help me review some Go code:

> “Is this code addition following Go best practices for variable assignment? If not, can you suggest improvements?”

Copilot quickly generated a suggestion that I reviewed and used as part of my code review comment. It saved me time and made my feedback more actionable.

Screenshot of a GitHub pull request for a Go file, suggesting a change to use the short variable declaration “result := add(1, 2)” instead of “var result int”. A Copilot panel on the right summarizes the changes and explains the reasoning behind the suggestion.

Formatting tables and data for pull requests

I like to add images, screen recordings, and occasionally spreadsheets to my pull request descriptions to add context to help reviewers understand my changes. Adding this type of information in Markdown can be tedious! In these cases, I ask Copilot to do the formatting for me. 

Recently, I was working on some improvements to help decrease the load time of a React page, and I wanted to show the improvements that my changes were making. I had been keeping a spreadsheet detailing the page load times before and after my changes, and I wanted to include it in my pull request description, so I gave Copilot this prompt:

Format this data into a GitHub flavored markdown table that I can paste into a GitHub pull request description: 	

Load Time Before (in seconds)	Load Time After Updates (in seconds)
	1.3	1.2
	1.2	1.1
	1.1	0.885
	1.3	1.3
	1.2	0.918
		
Average	1.22	1.0806

And Copilot gave this response: 

Here’s your data formatted as a GitHub-flavored markdown table, ready to paste into a pull request description:
Markdown
| Test Run | Load Time Before (seconds) | Load Time After Updates (seconds) |
|----------|---------------------------|-----------------------------------|
| 1        | 1.3                       | 1.2                               |
| 2        | 1.2                       | 1.1                               |
| 3        | 1.1                       | 0.885                             |
| 4        | 1.3                       | 1.3                               |
| 5        | 1.2                       | 0.918                             |
| **Average** | **1.22**                  | **1.0806**                        |

Let me know if you’d like any adjustments!

This simple trick has made it much easier for me to clearly share data in my pull request descriptions and comments.

Letting Copilot help write pull request summaries

I often use Copilot to help me write pull request summaries. We’ve all been there: you finally open your pull request after fixing a bug or introducing a new feature and the last hurdle is writing the pull request description. Copilot can help kickstart this process for you by using the Copilot icon in the pull request description editor bar. Even if I end up editing the text, having a starting point makes it less daunting.

Screenshot of a GitHub pull request page titled "Test PR #1" showing the conversation tab, with a Copilot menu offering options to generate a summary or an outline of the pull request changes.

Using Copilot for code reviews and understanding unfamiliar code

Copilot is very good at reviewing and explaining code, and two ways I leverage this in my day-to-day work are requesting initial code reviews from Copilot and asking questions about unfamiliar code.

Before I mark a pull request as ready for review, I’ll use Copilot to do a quick pass over my changes by requesting a code review from Copilot. It often catches things I might have missed or suggests a better way to write something. And don’t forget to add some notes in the custom instructions in your repository on what you want Copilot to focus on when reviewing pull requests.

If I’m reviewing someone else’s code and I don’t understand a change, I’ll ask Copilot to explain it. This helps me get context quickly, especially when I’m less familiar with that part of the codebase. This better understanding of the code allows me to provide more thoughtful and thorough code reviews for my teammates and ensures that I fully understand the potential impact of any pull request that I’m approving. 

Copilot’s impact on code reviews and pull requests 

While Copilot isn’t a replacement for thoughtful, engaged code reviews, it has become an indispensable tool in my daily workflow as a software engineer. From generating smart suggestions and code refactors, to quick Markdown formatting and drafting pull request summaries, Copilot helps streamline the work that surrounds writing code by making feedback more actionable and the code review process faster and more thorough. By integrating Copilot into every stage of the software development life cycle, I’m able to focus on higher-level engineering problems and collaboration. As Copilot continues to evolve, I’m excited to see how it will further transform not just my workflow, but the way developers everywhere build software together.

Get started with GitHub Copilot >

The post How to use GitHub Copilot to level up your code reviews and pull requests appeared first on The GitHub Blog.


Do not Interrupt Developers, Study Says


In a study called Breaking the Flow: A Study of Interruptions During Software Engineering Activities, researchers at Duke and Vanderbilt analyzed how interruptions influence three common engineering tasks: code writing, code comprehension, and code review.

Twenty participants performed these tasks while experiencing six types of interruptions, both in person and on screen.

The research showed that it takes 10-15 minutes for a developer to return to editing code after an interruption, and as much as 30-45 minutes to recover the full context they had before breaking focus. That disruption does not mean they’ve wasted only those 10 to 15 minutes; the cost is also in fragmented flow and decreased creativity. And the importance of the requester increases the impact of the interruption.

A study done by GitHub estimates that interruptions can erase up to 82% of productive work time when developers face frequent disruptions from meetings, messages, and quick questions.

Each interruption can cost a dev 30 minutes

Developing software demands a complex internal mental model that tracks system architecture, problem logic, edge cases, and more. Interruptions shatter that model, forcing a restart. Whether the distraction comes from Slack, a teammate, or even internal thoughts, context switching costs both time and mental energy.

Developers themselves recognize the impact: Reddit discussions often cite 15-30 minutes lost per interruption, especially on complex tasks, and the cumulative effect means whole afternoons can vanish in broken focus.

What this means for teams

Interruptions not only waste time, but they also reduce code quality and increase bugs. The Duke study showed higher error rates during fragmented workdays and noted that rushed re-entry into complex tasks often leads to sloppy code.

Even self-imposed context switches – voluntarily checking messages or shifting between tasks – can be as disruptive as external ones, according to studies of software developers’ work habits.

Meet less, code more

Engineering leaders who want to protect the flow of their developers limit the number of meetings. Research shows that teams with just one meeting per day maintain daily progress nearly 99% of the time, while adding a third meeting drops progress to 14%.

Asynchronous communication – when answers to pings and messages are not expected to happen instantly – also helps. By answering messages in batches, software engineers can block periods of time for deep focus. Two hours of uninterrupted work delivers a 20% increase in focus time in organizations that track these metrics.

Open-plan layouts, split calendars, uncoordinated tools, and a reactive meeting culture all chip away at developers’ mental bandwidth.

Want to solve the case? Do the research

Researchers also point out that interruptions are usually silent productivity killers that rarely, if ever, appear on developer productivity metrics dashboards.

They advise engineering organizations to use a combination of metrics and developer satisfaction surveys to understand the impact of interruptions. No fancy tools are needed; the goal is to get feedback from developers, establish a baseline, and work on improvements.

The post Do not Interrupt Developers, Study Says appeared first on ShiftMag.


Advanced Installer 22.9.1

Advanced Installer 22.9.1 was released on August 8th, 2025