Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How I Spent My Last 17 Months


I recently started a new role, and I wanted to talk about what I did in my last one—it wasn’t top secret or anything, but it was somewhat different from my normal operations. I’d like to thank everyone at Designmind for giving me this opportunity, as it was a very interesting role. I didn’t leave the job because I disliked the project; I mainly didn’t see my skill set as compatible with a firm like Cognizant, but I wish everyone there the best.

The Project

When I started at Designmind, we weren’t quite billing on the project yet—my role was nominally data architect, but that quickly evolved into being the everything architect. While I did some other client work while I was there, nearly all of my time went to supporting this project. The project itself was a new application development effort for a client who aimed to build a supply chain resiliency system. We had a small team: a project manager, a data engineer, and two developers. Sadly, early in the project, we had to remove the data engineer, as his skills didn’t really align with the project. That left all of the data tasks to me and my project manager, who was a massive help. We’ll talk more about data later.

The client specified a fairly specific technology stack—AWS, Python/Flask/SQLAlchemy, PostgreSQL, and Kubernetes—which I was happy to adopt. They also had some specific requests around DevOps and security where we differed and went in another direction, based on other business requirements the application had. While I had experience with all of these technologies, I also had to help our developers get up to speed. One of the developers had written a sample app, so at the beginning of the project I containerized all of that code and wrote a set of scripts to automatically deploy the app on Windows, macOS, and Linux, as our target would ultimately be Linux.

Getting Started with AWS

I’ve always worked with AWS, just not as much as I’ve worked with Azure. I’ve had the good fortune to work with both vendors and customers on various AWS projects, so I had a good feel for sizing and performance. The first thing I did was define the network configuration and build a VPN—I built everything on a private network from the beginning of the project. The basic architecture was front-end containers running React and Nginx, connecting to a couple of middle-tier containers running Python/Flask, with a PostgreSQL database behind them. Given the fairly narrow scope of this deployment, I used Elastic Kubernetes Service (EKS) and Amazon RDS for PostgreSQL. For local testing and development, I had the team run a local PostgreSQL instance and Docker with Kubernetes enabled, so we could install the same stack locally and use similar deployment scripts.

This kind of leads us into DevOps—which, I’m not sure why, fell into my lap. The client wanted us to use Jenkins; however, its ecosystem and community are declining, and our setup for it would have been really complicated. I have very good bash scripting skills, so GitHub Actions was a more natural fit for me. We faced a couple of challenges in building out our DevOps workflows. The first was that builds operated differently on AWS than they did locally—this is easily handled by making a call to 169.254.169.254, the link-local cloud metadata endpoint that works on all the major clouds and lets you identify where your code is running. This mattered because we were using IAM authentication and other conditional deployment steps based on build location. Not just in our build process but also in our Python middle tier, I implemented conditional logic to decide how to authenticate to the database.
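That endpoint check is easy to sketch. A minimal Python version (the timeout value and the auth-mode names are illustrative assumptions, not the project’s actual code):

```python
import urllib.request
import urllib.error

# Link-local metadata endpoint; only reachable from inside a cloud VM.
METADATA_ENDPOINT = "http://169.254.169.254/"

def running_in_cloud(timeout: float = 0.25) -> bool:
    """Probe the metadata endpoint; a fast timeout means we're on a laptop."""
    try:
        urllib.request.urlopen(METADATA_ENDPOINT, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The endpoint answered (even a 4xx means it exists), so we're in a cloud.
        return True
    except (urllib.error.URLError, OSError):
        # No route or timed out: not running on a cloud VM.
        return False

# Hypothetical use: pick IAM auth in the cloud, password auth locally.
auth_mode = "iam" if running_in_cloud() else "password"
```

The same probe works in a build script or in the middle tier, which is what makes it handy for conditional deployment steps.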

The Data

I’ve written here before about our data flow process and how we used some AI tooling to improve it. Our data flow and engineering process was really confusing to a lot of traditional data engineering pros I talked to about the project. The biggest issue was that we didn’t have regular flows of inbound data—we had two data sets from the federal government that were published nightly. Those were easy: I built an AWS Glue job that downloaded them, wrote them to S3, and did some degree of cleanup. That Glue job, when complete, triggered a data ingestion process. Glue has its limitations, but for this data it was fine.
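The cleanup step was mostly routine normalization. A minimal sketch of the kind of pass such a job might make over a CSV (Glue supports Python jobs; the columns and rules here are hypothetical):

```python
import csv
import io

def clean_rows(raw_csv: str):
    """Trim whitespace from headers and fields, map empty fields to None,
    and drop rows that are entirely blank."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    for row in reader:
        cleaned = {k.strip(): (v.strip() or None) for k, v in row.items() if k}
        if any(cleaned.values()):
            yield cleaned

sample = "name, city \nAcme, Scranton\n , \n"
print(list(clean_rows(sample)))  # → [{'name': 'Acme', 'city': 'Scranton'}]
```

In the real job the cleaned rows would land in S3 for the downstream ingestion process; the point is simply that the nightly feeds needed only light, mechanical scrubbing.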

The rest of our data sources were either whatever we could find or sporadic. From the earliest days of the project it was very obvious to me that we would have a data sourcing problem. While the federal government did supply some data, we were going to have to gather, scrape, or buy the rest. We ultimately bought data from a vendor, who was terrible (bad, inconsistent data). I asked the vendor at one point if they could provide a delta file (a file of just the changes), and they didn’t know what that was. If you know my email and are in the data market—email me and I’ll tell you who not to buy data from. That vendor data did provide a basis for our web scraping efforts, which was pretty cool. Most of our reference data—geography, congressional districts, etc.—was open source and downloaded into our environment. My project manager helped a lot here, identifying and vetting potential new data sources and getting them integrated into the app.
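For readers who haven’t met the term: a delta file is trivial to produce if records have stable keys. A minimal sketch of the idea (the record IDs and values are hypothetical):

```python
def delta(previous: dict, current: dict):
    """Diff two snapshots keyed by record ID into added/changed/removed sets."""
    added   = {k: v for k, v in current.items() if k not in previous}
    removed = {k: previous[k] for k in previous if k not in current}
    changed = {k: v for k, v in current.items()
               if k in previous and previous[k] != v}
    return added, changed, removed

prev = {"A1": "Acme", "B2": "Bolt Co"}
curr = {"A1": "Acme Inc", "C3": "Cargo LLC"}
print(delta(prev, curr))
# → ({'C3': 'Cargo LLC'}, {'A1': 'Acme Inc'}, {'B2': 'Bolt Co'})
```

Shipping only the delta instead of a full dump is what keeps downstream ingestion cheap—which is why a vendor not knowing the concept was such a red flag.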

What I Learned

At this point in my career, I’ve been functioning as an architect since about 2013. A lot of people try to define what an architect does, often in the context of what actual building architects do, or what some business book—written by someone who has never done the job or written a line of code—thinks an architect does. A good architect has to stay ahead of the project and understand the priorities of both the project and the client. The architect needs to be flexible and forward-thinking. I always make culinary comparisons, and the role is a lot like being an executive chef: it doesn’t matter if you create amazing recipes (or designs) if your line cooks (or developers) don’t have the skill set to execute them. You either have to increase the skills of those workers (best), hire new workers (hardest), or dial back the recipe/design to match the skill of your team. The architect also needs to think about the problems the team is going to have next—mostly technical, but also organizational or tooling problems. If you can get in front of those problems, you can help your developers do their jobs better.

It was a cool experience to be working on an app dev project. I was happy to get to push my skill set and help others grow their own. I would have loved to have completed the project, but unfortunately circumstances got in the way.

Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

How To Deal With A Writer’s Inner Critic


Every writer questions their ability to write. In this post, we give you three easy steps to help you deal with your writer’s inner critic.

Everybody has an inner critic. It’s that negative voice that tells you that you’re useless, that you don’t know what you’re doing, and that you’ll never write that book.

Some writers are able to still the voice, but others become paralysed by it.

In this post, I will give you three steps to help you deal with your writer’s inner critic.

How To Deal With A Writer’s Inner Critic

1. Make Friends With Your Inner Critic

Name your critic. Create them as you would a character. How old are they? Where do they come from? How do they dress? What do they look like? This turns them into a manageable entity and not a secret, nebulous presence.

Julia Cameron, author of The Artist’s Way, calls her inner critic Nigel. The poet Molly Spencer calls her inner critic Spiteful Gillian. Author Julia Crouch has two; their names are Nigel and June.

Acknowledge them. Introduce yourself to them and tell them that you will be meeting on a regular basis to discuss your writing progress.

2. Give Your Inner Critic A Backstory

What has happened in their life that they need to stop you from succeeding? Who has hurt them? What do they fear? What do they enjoy? Let them tell you their story and write it down as they talk.

Use one of these character questionnaires to interview them if you need to: 9 Useful Character Questionnaires For Writers

This is obviously your baggage coming out, but creating it as a writer will help you to put it into perspective.

3. Write A Letter To Your Inner Critic

Now that we know who we’re dealing with, we can take steps to get them under control.

In the Writers Write Course, we always ask delegates to write a letter or email or text dismissing the negativity of their inner critic. Tell them you would prefer their constructive criticism instead. Tell them how much you love your writing and that you will continue to do it.

It’s amazing how effective this has been over the years. It allows writers to confront their critic using the very tools the critic has been mocking: their written words.

Some Writers On Their Inner Critics

  1. The real difficulty is to overcome how you think about yourself. ~Maya Angelou
  2. Love your material. Nothing frightens the inner critic more than the writer who loves her work. The writer who is enamored of her material forgets all about censoring herself. She doesn’t stop to wonder if her book is any good, or who will publish it, or what people will think. She writes in a trance, losing track of time, hearing only her characters in her head. ~Allegra Goodman
  3. What you say to your critic is, ‘Ah, thank you for sharing.’ and you turn your critic from a voice of doom and gloom into a little cartoon character. And the cartoon character can be as negative as it wants and you can step past it. ~ Julia Cameron
  4. If you can tell stories, create characters, devise incidents, and have sincerity and passion, it doesn’t matter a damn how you write. ~W. Somerset Maugham
  5. It’s easier to complain about the outside critics, but the biggest critic in your life usually lives between your own two ears. ~James Clear
  6. Almost all good writing begins with terrible first efforts. You need to start somewhere. Start by getting something—anything—down on paper. What I’ve learned to do when I sit down to work on a shitty first draft is to quiet the voices in my head. ~Anne Lamott
  7. If you are not afraid of the voices inside you, you will not fear the critics outside you. ~Natalie Goldberg
  8. When someone else reads my books for the first time, it’s absolutely terrifying, every single time. Every time, you think: I’ve done something horribly wrong, or they’re going to see through me this time. ~Richard Osman

The Last Word

This is a simple way to deal with a writer’s inner critic. Don’t let them stop you writing. Why not try it and see if it works for you?

Additional reading: Impostor Syndrome – What It Is And How To Get Over It


by Amanda Patterson
© Amanda Patterson

If you liked this blogger’s writing, you may enjoy:

  1. What Is Exposition In A Story?
  2. The Death Trap – A Plot Device
  3. Why You Don’t Need To Put Everything In Your Book
  4. How To Write Hardboiled Fiction
  5. 12 Types Of Memoirs – Which One Is Yours?
  6. How Many Suspects Do You Need In A Crime Novel?
  7. Writing Is An Act Of Courage
  8. A Quick Start Guide To Writing An Inciting Incident
  9. A Quick Start Guide To Foreshadowing
  10. Writing Through The Pain – Tips For Memoirists

Top Tip: Find out more about our workbooks and online courses in our shop.

The post How To Deal With A Writer’s Inner Critic appeared first on Writers Write.


PowerShell, OpenSSH, and DSC team investments for 2026


Team investments for 2026

As is tradition, we are publishing our planned team investments for the year. This is based on our current understanding of customer and community needs, but is subject to change based on emerging priorities throughout the year.

Community thanks!

Before we dive into the planned investments, I want to take a moment to thank the community for their continued support and contributions to PowerShell, OpenSSH, DSC, and related tooling over the past year.

Security improvements

Security is a top priority and compliance requirements are constantly evolving. As security issues are discovered and reported, and compliance requirements evolve, we must prioritize this work over feature development. This often results in work that’s not directly visible to end users.

Bug fixes and community PRs

Feedback and contributions from the community are invaluable. We continue to prioritize fixing reported critical issues, as well as reviewing and merging community pull requests.

PowerShell 7.7

PSUserContentPath relocation

It’s been a long-standing issue that PowerShell stores user content such as modules, profiles, and help files in the user’s Documents folder. This has caused problems for users who sync their Documents folder with OneDrive or other cloud storage services, leading to performance degradation and other unexpected behaviors. We published a design proposal in our RFC repo last year and received lots of great feedback. This issue has been particularly challenging due to the breaking nature of the change. I believe we’ve closed on a design that balances the needs of most users while minimizing disruption, and we should have an experimental feature available in an early PowerShell 7.7 preview for users to test and provide feedback on.

Non-profile based module loading

Currently, PowerShell requires that you load modules in a profile script to immediately enable the features provided by those modules. Specific examples include tab-completers and Feedback Providers.

Application developers have expressed interest in being able to register these features without needing to update a profile script, which can be challenging in their installer. We have a design proposal in our RFC repo and would welcome any feedback from the community.

Delayed update notification

PowerShell has a feature that notifies users when a new version is available. However, consistent feedback from users is that the notification is not useful because it shows up immediately, while the actual update may not yet be available from the package manager they use (e.g., the Microsoft Store or a Linux package manager). The current plan is to delay the notification by a predetermined interval to allow time for the new version to propagate to the various package managers.
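The gating logic for such a delay is straightforward. A minimal sketch (shown in Python for brevity; the 14-day interval is a hypothetical placeholder—the team has not announced the actual value):

```python
from datetime import datetime, timedelta, timezone

# Assumed propagation window; the real interval is not yet decided.
PROPAGATION_DELAY = timedelta(days=14)

def should_notify(release_date, now=None):
    """Suppress the update notice until package managers have likely caught up."""
    now = now or datetime.now(timezone.utc)
    return now - release_date >= PROPAGATION_DELAY
```

A check like this runs when the shell starts: a release from yesterday stays quiet, while one older than the window triggers the banner.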

Bash-style aliases/macros

PowerShell aliases are a way to create short names for cmdlets or commands. However, advanced users often want more powerful aliasing capabilities similar to Bash shell’s aliases and macros. This includes features like parameter passing, command chaining, and conditional execution. We are exploring options to enhance PowerShell’s aliasing capabilities to better meet these needs.

MCP Server and tools

As AI adoption continues to grow, we are seeing increased interest in integrating AI with PowerShell; enabling AI-assisted scripting and automation is a key use case. To support this, we plan to develop a team-supported Model Context Protocol (MCP) server and associated tools for integrating AI models with PowerShell. Our initial focus will be on safety and security when using AI with PowerShell.

PSReadLine

Context aware Predictive IntelliSense

Predictive IntelliSense in PSReadLine has proven to be a productivity booster for many users. However, one limitation is that the predictions are not context-aware based on the current directory. For example, if a user is in a Git repository, they may want predictions that are relevant to Git commands and workflows. We are exploring ways to make predictions more context-aware.

Decouple reading keyboard input from terminal rendering

Currently, PSReadLine essentially has a loop that reads keyboard input and renders the terminal output. This design has worked well for traditional terminal environments, but limits new experiences we want to enable. This is a fundamental change that will take time, and the benefits won’t be immediately visible to end users. However, this change is necessary to enable future enhancements.

PowerShellGallery/PSResourceGet

Complete Microsoft Artifact Registry (MAR) migration

One of the big investments last year for PSResourceGet was to add support for Azure Container Registry (ACR). Despite its name, ACR is for more than just containers and can be used as a general purpose artifact repository. This year, we plan to complete the migration to support Microsoft Artifact Registry (MAR) as the default trusted repository for Microsoft published modules and scripts. This will provide a more reliable, scalable, and secure experience for users of PSResourceGet.

Concurrency and performance improvements

Users are often installing large modules (that have many dependencies as a family of modules) using PSResourceGet. Alternatively, many users are installing multiple modules at the same time (e.g. during initial setup). Currently, PSResourceGet processes these requests serially, which can lead to long wait times. We plan to enhance PSResourceGet to support concurrent downloads and installations, which should significantly improve performance in these scenarios.
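The gain from going parallel is easy to illustrate with a generic worker-pool sketch (Python here purely for illustration—this is not PSResourceGet’s implementation, and the module names are just examples):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def install(module: str) -> str:
    time.sleep(0.1)  # stand-in for one network download + install
    return f"{module} installed"

modules = ["Az.Accounts", "Az.Storage", "Az.Compute", "Az.Network"]

start = time.perf_counter()
# Serial processing would take ~0.4s here; a pool of 4 finishes in ~0.1s.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(install, modules))
elapsed = time.perf_counter() - start
print(results)
```

Downloads are I/O-bound, so a modest worker pool recovers most of the wait time—exactly the scenario of installing a module family with many dependencies.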

General PowerShellGallery improvements

We are investing in some fundamental improvements to improve reliability, scalability, and security of the PowerShell Gallery.

Windows OpenSSH

Entra ID authentication support

A common ask from customers and partners is support for Microsoft Entra ID authentication for SSH connections. We’re actively exploring options to enable this capability in our Windows OpenSSH fork.

Desired State Configuration v3 (DSC)

DSC v3.2 General Availability

We continue to make progress on DSC v3.2, with multiple previews already available. The current expectation is that a Release Candidate and General Availability release will ship in the first half of 2026.

Python Adapter

For Linux usage of DSC, we have been working on a Python adapter to make it easier to create DSC resources using Python. We expect to have previews available early this year and welcome community feedback.

DSC v3.3

We will continue to enhance DSC focusing on customer and partner asks immediately after the v3.2 General Availability release.

Conclusion

We have an exciting year planned with many investments across PowerShell, OpenSSH, DSC, and related tooling. We will continue to prioritize security, bug fixes, and community contributions throughout the year. We look forward to engaging with the community and hearing feedback on our planned investments.

The post PowerShell, OpenSSH, and DSC team investments for 2026 appeared first on PowerShell Team.


Microsoft Store CLI, .NET 11 Preview 1 and more! - Developer News 07/2026

From: Noraa on Tech
Duration: 3:02
Views: 3

Today we are gonna look at the new Microsoft Store CLI, the first preview of .NET 11 and more.

-----

Links

.NET
• .NET 11 Preview 1 is now available! - https://devblogs.microsoft.com/dotnet/dotnet-11-preview-1/?WT.mc_id=MVP_274787
TypeScript
• Announcing TypeScript 6.0 Beta - https://devblogs.microsoft.com/typescript/announcing-typescript-6-0-beta/?WT.mc_id=MVP_274787
Windows
• Enhanced developer tools on the Microsoft Store - https://blogs.windows.com/windowsdeveloper/2026/02/11/enhanced-developer-tools-on-the-microsoft-store/?WT.mc_id=MVP_274787
GitHub
• GitHub Agentic Workflows are now in technical preview - https://github.blog/changelog/2026-02-13-github-agentic-workflows-are-now-in-technical-preview/
• GPT-5.3-Codex is now generally available for GitHub Copilot - https://github.blog/changelog/2026-02-09-gpt-5-3-codex-is-now-generally-available-for-github-copilot/
• New repository settings for configuring pull request access - https://github.blog/changelog/2026-02-13-new-repository-settings-for-configuring-pull-request-access/

-----

🐦X: https://x.com/theredcuber
🐙Github: https://github.com/noraa-junker
📃My website: https://noraajunker.ch


Introducing Agent Plugins for AWS

1 Share

Deploying applications to AWS typically involves researching service options, estimating costs, and writing infrastructure-as-code tasks that can slow down development workflows. Agent plugins extend coding agents with specialized skills, enabling them to handle these AWS-specific tasks directly within your development environment.

Today, we’re announcing Agent Plugins for AWS (Agent Plugins), an open source repository of agent plugins that provide coding agents with the agent skills to architect, deploy, and operate on AWS.

Today’s launch includes an initial deploy-on-aws agent plugin, which lets developers enter “deploy to AWS” and have their coding agent generate AWS architecture recommendations, AWS service cost estimates, and AWS infrastructure-as-code to deploy the application to AWS. We will add additional agent skills and agent plugins in the coming weeks.

Agent plugins are currently supported in Claude Code and Cursor (announced February 17). In this post, we’ll show you how to get started with Agent Plugins for AWS, explore the deploy-on-aws plugin in detail, and demonstrate how it transforms the deployment experience from hours of configuration to a simple conversation.

Why agent plugins

AI coding agents are increasingly used in software development, helping developers write, review, and deploy code more efficiently. Agent skills and the broader agent plugin packaging model are emerging as best practices for steering coding agents toward reliable outcomes without bloating model context. Instead of repeatedly pasting long AWS guidance into prompts, developers can now encode that guidance as reusable, versioned capabilities that agents invoke when relevant. This improves determinism, reduces context overhead, and makes agent behavior easier to standardize across teams. Agent plugins act as containers that package different types of expertise artifacts together. A single agent plugin can include:

  • Agent skills – Structured workflows and best-practice playbooks that guide AI through complex tasks like deployment, code review, or architecture planning. Agent skills encode domain expertise as step-by-step processes.
  • MCP servers – Connections to external services, data sources, and APIs. MCP servers give your assistant access to live documentation, pricing data, and other resources at runtime. Learn more about AWS MCP servers.
  • Hooks – Automation and guardrails that run on developer actions. Hooks can validate changes, enforce standards, or trigger workflows automatically.
  • References – Documentation, configuration defaults, and knowledge that the agent skill can consult. References make agent skills smarter without bloating the prompt.

As new types of expertise artifacts emerge in this space, they can be packaged into agent plugins, making the evolution transparent to developers.

The deploy-on-aws plugin

The initial release includes the deploy-on-aws plugin, which gives coding agents the knowledge to deploy applications to AWS with architecture recommendations, cost estimates, and infrastructure-as-code generation.

The agent plugin provides AI coding agents with a structured workflow:

  1. Analyze – Scan your codebase for framework, database, and dependencies.
  2. Recommend – Select optimal AWS services with concise rationale.
  3. Estimate – Show projected monthly cost before committing.
  4. Generate – Write CDK or CloudFormation infrastructure code.
  5. Deploy – Execute your confirmation.

The initial plugin uses three MCP servers for AWS to provide comprehensive guidance:

  • AWS Knowledge – Documentation, architecture guidance, and best practices
  • AWS Pricing – Real-time service pricing for cost estimates
  • AWS IaC – Best practices for the AWS Cloud Development Kit (AWS CDK) and AWS CloudFormation

Agent plugins for AWS in action

Here’s how Agent plugins for AWS transform the software development experience:

A full-stack developer has built an Express.js REST API on their local machine. The application connects to a PostgreSQL database and serves a React frontend. The developer is ready to deploy to AWS but isn’t sure about the best architecture.

Using Cursor or Claude Code with the deploy-on-aws plugin installed, the developer enters:

Deploy this Express app to AWS

The plugin immediately springs into action:

Step 1: Analyze – The agent scans the codebase and identifies:

  • Express.js framework (Node.js 20.x)
  • PostgreSQL database dependency
  • Static React build in /public
  • Environment variables for database connection
  • Current traffic: ~1000 requests/day expected

Step 2: Recommend – Based on the analysis, the agent recommends:

Step 3: Estimate – The agent provides a cost estimate using real-time pricing data from the AWS Pricing MCP server, giving you visibility into projected monthly costs before you commit to any infrastructure.

Step 4: Generate – The developer reviews the estimate and confirms. The agent generates:

  • AWS CDK infrastructure code in TypeScript
  • Dockerfile for the Express app
  • Database migration scripts
  • Environment configuration
  • GitHub Actions workflow for CI/CD

Step 5: Deploy – The developer reviews the generated code, makes minor adjustments to the database schema, and confirms deployment. The agent:

  • Provisions all AWS resources via CDK
  • Builds and deploys the container to App Runner
  • Creates the Amazon RDS database and runs migrations
  • Uploads the React build to S3 and configures CloudFront
  • Stores credentials in Secrets Manager

Within minutes, the developer’s application is live at a custom App Runner URL, with the React frontend served globally via CloudFront. The agent provides:

  • Application URLs (backend and frontend)
  • Database connection details
  • CloudWatch dashboard links for monitoring
  • Cost tracking setup

What would have taken hours of reading documentation, comparing services, and writing infrastructure code took less than 10 minutes with the deploy-on-aws plugin. Developers can now focus on building features instead of wrestling with cloud deployment complexity.

Getting started with Agent Plugins for AWS

Prerequisites

To get started, you need:

Installation

Claude Code

Add the Agent Plugins for AWS marketplace to Claude Code:

/plugin marketplace add awslabs/agent-plugins

Install the deploy-on-aws plugin:

/plugin install deploy-on-aws@awslabs-agent-plugins

Cursor

Cursor announced support for agent plugins on February 17. You can install the deploy-on-aws plugin directly from the Cursor Marketplace, or manually in Cursor by:

  1. Open Cursor Settings.
  2. Navigate to Plugins and type aws in the search bar.
  3. Select the plugin you want to install, click Add to Cursor, then select the scope.
  4. The plugin should now appear under Plugins as installed.

Learn more in the Cursor Marketplace announcement.

Skill triggers

The deploy-on-aws plugin responds to natural language requests like:

  • “Deploy to AWS”
  • “Host on AWS”
  • “Run this on AWS”
  • “AWS architecture for this app”
  • “Estimate AWS cost”
  • “Generate infrastructure”

Best practices for plugin-assisted development

To maximize the benefits of plugin-assisted development while maintaining security and code quality, follow these essential guidelines:

  • Always review generated code before deployment (for example, against your constraints for security, cost, resilience)
  • Use plugins as accelerators, not replacements for developer judgment and expertise.
  • Keep plugins updated to benefit from the latest AWS best practices.
  • Follow the principle of least privilege when configuring AWS credentials.
  • Run security scanning tools on generated infrastructure code.

Conclusion

In this post, we showed how Agent Plugins for AWS extend coding agents with skills for deploying applications to AWS. Using the deploy-on-aws plugin, you can generate architecture recommendations, cost estimates, and infrastructure-as-code directly from your coding agent.

Beyond deployments, agent plugins can help with other AWS workflows; more agent plugins for AWS are launching soon. You can also use AWS MCP servers to give your coding agent access to specialized tools to build on AWS.

About the authors


Building a Real-Time Security Monitoring Dashboard with Telerik UI for WinForms


See how Telerik UI for WinForms components, like charts, maps, gauges and context menus, can easily compose a security monitoring dashboard.

Omega Security Dashboard

In this post, I will show you the app I built—a modern, responsive security surveillance dashboard—using Progress Telerik UI for WinForms, demonstrating that desktop applications can still have a contemporary appearance and functionality. The platform remains a robust choice for corporate desktop applications, especially real-time monitoring systems.

The app demonstrates the integration of 10+ Telerik components working together to create a cohesive, professional-grade security monitoring experience. The application features real-time threat visualization on an interactive RadMap with live GreyNoise* API integration, a custom Kanban board for incident management with drag-and-drop functionality, and intelligent AI-powered analysis of network devices using ChatGPT. The system monitors Bluetooth, USB and network devices simultaneously, triggering customizable sound alerts when specific MAC addresses are detected. All of this runs on .NET 10 with C# 14, wrapped in a striking FluentDark theme with neon accents that gives it a true cybersecurity operations center aesthetic.

Solution Explorer for Omega Security

Why WinForms for Monitoring Dashboards?

Before diving into the functionalities, it’s worth questioning: why choose WinForms in 2026? The answer lies in the unique characteristics of this platform:

  • Native performance: WinForms applications run directly on the operating system, without the overhead of a browser, resulting in faster screen updates and lower memory consumption.
  • Direct access to system resources: For a monitoring system that needs to check USB devices, analyze network traffic and monitor system resources, native Windows access is fundamental.
  • 24/7 reliability: Security monitoring systems need to run continuously. WinForms offers proven stability in critical environments where uptime is essential.
  • Independence from web infrastructure: WinForms does not require web servers, SSL certificates or complex network configurations. The application simply runs.

Overview of the Omega Surveillance Security System

This app is a base for a professional security dashboard. I’m using it on my own networks to understand what’s on them, and—this is awesome—I discovered that my partner’s iPhone was exposing her name on the network without her knowing it.

It can also be used to detect PCs and mobile devices connected to the network, as well as Bluetooth devices.

Here is what the monitoring app offers:

  1. Network device monitoring: Automatically detects devices connected via Bluetooth, LAN and USB.
  2. Security alerts: Triggers sound alarms (MP3, MID or WAV) when a specific MAC address enters the network.
  3. AI analysis: Integrates OpenAI ChatGPT for intelligent analysis of captured data.
  4. Threat intelligence: Displays the top 10 cities under attack using the GreyNoise* API (when configured).
  5. Data export: Saves information in multiple formats (TXT, HTML, PDF, CSV, Excel).
  6. Resource monitoring: CPU, RAM, network traffic and disk space in real time.
  7. Startup control: AutoStart ON/OFF for automatic execution with Windows.
  8. Security incidents dashboard: Lets you register cards for security activities and manage them through CRUD operations, with a status lane for each incident.
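As a rough idea of how the LAN part of item 1 can work, the sketch below ping-sweeps a /24 subnet and collects the addresses that reply. The `LanScanner` class and the subnet prefix are illustrative assumptions, not the app’s actual code, and the real system also covers Bluetooth and USB devices, which need platform-specific APIs.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

// Illustrative sketch only (not the app's implementation): ping-sweep a
// /24 subnet and return the hosts that answered.
public static class LanScanner
{
    // All candidate addresses for a /24 prefix such as "192.168.1."
    public static IEnumerable<string> BuildAddresses(string prefix) =>
        Enumerable.Range(1, 254).Select(host => prefix + host);

    public static async Task<List<string>> ScanAsync(string prefix)
    {
        var probes = BuildAddresses(prefix).Select(async address =>
        {
            using var ping = new Ping();
            var reply = await ping.SendPingAsync(address, 250); // 250 ms timeout
            return reply.Status == IPStatus.Success ? address : null;
        });
        var results = await Task.WhenAll(probes);
        return results.Where(ip => ip != null).Select(ip => ip!).ToList();
    }
}
```

A call like `await LanScanner.ScanAsync("192.168.1.")` returns the responding IPs; a production scanner would also read MAC addresses (for example from the ARP table) to feed the alert system.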

Telerik Components Used

The dashboard uses several components from the Telerik UI for WinForms framework, each chosen for its specific capabilities.

RadGridView Device Listing

The RadGridView is the heart of device monitoring. It displays network, Bluetooth and connected USB devices.

AI Analysis Feature: A distinctive functionality is the ability to analyze grid data using ChatGPT:

The AI analysis is available by right-clicking the grid and choosing the Analyze with AI option.

IP address – context menu with options to copy row, export, analyze with AI
Context Menu

A form gathers the information about the PC that will be sent for analysis:

AI-Powered Security Data Analysis form
AI Form

The AI analysis on my PC showed that it was at risk, with some services exposed on the network.
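The wiring behind that analysis can be sketched as a plain HTTPS call to OpenAI’s public chat completions endpoint. The `AiAnalyzer` class, the model name and the prompt below are my assumptions for illustration; the app’s actual request shape may differ.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

// Illustrative sketch (names and prompt are placeholders): send a device
// report to the OpenAI chat completions endpoint for analysis.
public static class AiAnalyzer
{
    public static string BuildRequestBody(string deviceReport) =>
        JsonSerializer.Serialize(new
        {
            model = "gpt-4o-mini", // placeholder model name
            messages = new object[]
            {
                new { role = "system", content = "You are a network security analyst." },
                new { role = "user", content = "Analyze these devices for risks:\n" + deviceReport }
            }
        });

    public static async Task<string> AnalyzeAsync(string apiKey, string deviceReport)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);
        var content = new StringContent(BuildRequestBody(deviceReport),
            Encoding.UTF8, "application/json");
        var response = await http.PostAsync(
            "https://api.openai.com/v1/chat/completions", content);
        return await response.Content.ReadAsStringAsync();
    }
}
```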

RadChartView Traffic Visualization

The RadChartView renders real-time line charts for network traffic and disk usage.

RadMap Threat Geolocation

The RadMap visually displays the geographic location of detected threats. When integrated with the GreyNoise API, it shows the 10 most attacked cities. I capped it at 10 for fun, but you can host the control inside a form and watch more cities if that makes sense for you.

RadRadialGauge Performance Indicators

The radial gauges provide an intuitive, at-a-glance visualization of system resource usage. In the source, the gauge variables are declared and initialized at startup.

Security Alert System

One of the most important functionalities is the alarm system when a specific MAC address is detected.

To create an alert, right-click in the MAC ADDRESS column.

Context Alert menu – with options to play alert or view all alerts
Context Alert Menu

Configure alert form
Configure Alert Form

The app plays the configured sound when that MAC address is detected on the network.
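The matching step can be sketched like this. The `MacAlert` class and its method names are mine, not the app’s; the caller is responsible for actually playing the MP3/MID/WAV file (for WAV on Windows, `System.Media.SoundPlayer` would do).

```csharp
using System.Collections.Generic;

// Illustrative sketch (class and method names are placeholders): MAC
// addresses are normalized so that "aa-bb-cc-dd-ee-ff" and
// "AA:BB:CC:DD:EE:FF" compare equal before the watch list is consulted.
public static class MacAlert
{
    public static string Normalize(string mac) =>
        mac.Replace("-", ":").ToUpperInvariant();

    // True when the detected MAC is on the watch list; the caller then
    // plays the configured alert sound.
    public static bool ShouldAlert(string detectedMac, IEnumerable<string> watchList)
    {
        var target = Normalize(detectedMac);
        foreach (var watched in watchList)
            if (Normalize(watched) == target) return true;
        return false;
    }
}
```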

Data Export

The system allows copying rows to the clipboard or exporting all grid data in multiple professional formats.

AutoStart and System Settings

The AutoStart feature allows the application to start automatically with Windows.
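One simple way to implement this, sketched below with only standard file I/O, is to drop a small launcher script into the per-user Startup folder. The app may instead use the registry Run key, so treat the class, file names and script body as my assumptions rather than the actual implementation.

```csharp
using System;
using System.IO;

// Illustrative sketch of AutoStart via the per-user Startup folder:
// a tiny .cmd launcher is written when enabled and deleted when disabled.
public static class AutoStart
{
    public static string LauncherPath(string appName) =>
        Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.Startup),
            appName + ".cmd");

    // Minimal batch script that starts the executable detached.
    public static string LauncherBody(string exePath) =>
        "@echo off\r\nstart \"\" \"" + exePath + "\"\r\n";

    public static void Enable(string appName, string exePath) =>
        File.WriteAllText(LauncherPath(appName), LauncherBody(exePath));

    public static void Disable(string appName)
    {
        var path = LauncherPath(appName);
        if (File.Exists(path)) File.Delete(path);
    }
}
```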

Splash Screen

The application shows a splash screen at startup and uses a C# Mutex to prevent multiple instances from running.
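The Mutex pattern can be sketched as follows (the mutex name and class are placeholders): the first process creates the named mutex and holds it for its lifetime; any later process sees `createdNew == false` and exits.

```csharp
using System.Threading;

// Illustrative sketch of single-instance protection with a named Mutex.
public static class SingleInstance
{
    public static bool TryAcquire(string name, out Mutex mutex)
    {
        mutex = new Mutex(initiallyOwned: true, name, out bool createdNew);
        return createdNew; // false => another instance already holds it
    }
}
```

In `Main`, the app would call `TryAcquire`, show the “already running” message and return when it fails, and keep the returned `Mutex` referenced until shutdown so it is not garbage-collected.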

Security dashboard splash screen
Splash Screen

When a second instance is started, the message below is shown:

Message preventing multiple instances of Omega Security: Another instance of the application is already running
Message Preventing Multiple Instances

Accelerated Development with AI

Vibe Coding: using AI assistants to accelerate initial development, followed by manual adjustments to refine the implementation.

The process was:

  1. Initial layout generation: I described the dashboard structure to the AI, and it generated the base code for the controls.
  2. Integration of Telerik UI components: I asked the AI to use Telerik components, though some had to be swapped in manually.
  3. Business logic: From prompts, it was possible to create the alert logic according to the MAC address.
  4. Visual refinement: Fine adjustments to the theme and colors were made iteratively.
  5. C# 14: I manually adopted some of the new features, such as null-conditional assignment, and there are others still to adjust.
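As a small illustration of the C# 14 feature mentioned in step 5 (the types and names here are mine, for illustration only), null-conditional assignment collapses the classic “check then assign” pattern into one statement. The active code below uses the classic form so it compiles on any C# version, with the C# 14 equivalent shown in a comment.

```csharp
// Hypothetical view-model type, for illustration only.
public class StatusBarVm { public string? Text { get; set; } }

public static class NullConditionalDemo
{
    public static void SetStatus(StatusBarVm? bar, string message)
    {
        // Classic form (works on any C# version):
        if (bar != null) bar.Text = message;

        // C# 14 null-conditional assignment collapses it to:
        //     bar?.Text = message;   // no-op when bar is null
    }
}
```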

This hybrid approach allowed me to significantly accelerate development without compromising code quality or customization.

Please note: This project was completed before the Telerik UI for WinForms AI Coding Assistant was available. This resource is now the preferred way to use AI to code with Telerik UI for WinForms as it is built with direct connection to the docs. (And if you need help, the Progress Telerik support team is second to none!)

Open Source and Demonstration

The complete source code for this project is available on GitHub (a Telerik license is required to use the WinForms components). The code lets you:

  • Study the implementation of Telerik components
  • Adapt for your specific needs
  • Contribute improvements

I also made an executable available for testing, allowing you to experience the system before diving into the code.

Conclusion

This project demonstrates that WinForms remains a viable and powerful platform for modern desktop applications, especially when combined with the robust component suite in Telerik UI for WinForms.

The main lessons learned:

  1. Performance matters: For real-time dashboards, the native execution of WinForms offers significant advantages over web solutions.
  2. Professional componentization: Telerik components eliminate the need to develop complex controls from scratch.
  3. Modern integrations: WinForms can easily integrate with modern APIs (OpenAI, GreyNoise) and remain relevant.
  4. User experience: With the right themes and good information architecture, WinForms can compete visually with any technology.

If you are maintaining legacy WinForms applications or considering this platform for new critical desktop applications, Telerik UI for WinForms provides the tools you need to create professional, modern experiences.

The Omega Surveillance system is proof that WinForms not only survives in 2026, but it also thrives when combined with the right tools.

Try Telerik UI for WinForms free for 30 days!

Try Now


*GreyNoise is a cybersecurity intelligence platform that helps organizations distinguish between benign internet background noise and genuine malicious activity. By operating one of the largest and most sophisticated global sensor networks, it collects and analyzes mass scanning and exploitation attempts across the internet in real time. This enables security teams to filter out low-priority alerts, focus on urgent threats and reduce mean time to respond (MTTR). Trusted by over 80,000 users, 400+ government agencies and 60% of the Fortune 1000, it provides definitive, verifiable data, including complete packet captures, and integrates seamlessly into existing security workflows to empower defenders with actionable insights.

References

GitHub: https://github.com/jssmotta/OmegaSecurityOpenSource

GreyNoise: https://www.greynoise.io/
