Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

The PowerShell Podcast From School IT Intern to Systems Architect with Chris Thomas


K-12 IT veteran Chris Thomas joins The PowerShell Podcast to share his 26-year journey in educational technology, from a high school IT internship to becoming an Endpoint Cloud Systems Architect supporting multiple school districts in Michigan. Chris discusses how PowerShell helped him automate identity management, investigate network incidents, and streamline large-scale IT operations across complex school environments.
The conversation also dives into mentorship, Don Jones’ influence through Be the Master, the value of community involvement, and the mental health challenges IT professionals face. Chris shares practical lessons on automation, presenting at conferences, overcoming imposter syndrome, and how putting yourself out there can open doors throughout your career.

Key Takeaways:
• PowerShell fundamentals unlock huge opportunities — learning commands like Get-Command, Get-Help, Get-Member, and Get-Module can help you explore and automate almost anything.
• Automation is essential in resource-constrained environments like K-12 IT where staff wear many hats and must support large systems with limited manpower.
• Community participation accelerates growth — presenting, attending conferences, and contributing scripts can build confidence, connections, and career momentum.

Guest Bio:
Chris Thomas is an Endpoint Cloud Systems Architect supporting multiple K-12 school districts in Michigan through a regional educational service agency. With more than two decades of experience in educational IT, Chris focuses on automation, endpoint management, and infrastructure architecture. He is an active contributor to the Michigan K-12 technology community, regularly presenting at conferences such as MAEDS and MMS/MOA, and sharing PowerShell scripts and tools through his GitHub projects.

Resource Links:
Chris Thomas GitHub – https://github.com/chrisATautomatemystuff

Connect with Andrew - https://andrewpla.tech/links

PowerShell App Deployment Toolkit – https://psappdeploytoolkit.com

Learn PowerShell in a Month of Lunches – https://www.manning.com/books/learn-powershell-in-a-month-of-lunches

PDQ Discord – https://discord.gg/PDQ

MAEDS Conference – https://maeds.org

MMS / MOA Conference – https://mmsmoa.com

The PowerShell Podcast on YouTube: https://youtu.be/k4n6FWzDPUk

 

Read the whole story
alvinashcraft
3 hours ago
reply
Pennsylvania, USA
Share this story
Delete

The Mythical Agent-Month


The following article originally appeared on Wes McKinney’s blog and is being republished here with the author’s permission.

Like a lot of people, I’ve found that AI is terrible for my sleep schedule. In the past I’d wake up briefly at 4:00 or 4:30 in the morning to have a sip of water or use the bathroom; now I have trouble going back to sleep. I could be doing things. Before I would get a solid 7–8 hours a night; now I’m lucky when I get 6. I’ve largely stopped fighting it: Now when I’m rolling around restlessly in bed at 5:07am with ideas to feed my AI coding agents, I just get up and start my day.

Among my inner circle of engineering and data science friends, there is a lot of discussion about how long our competitive edge as humans will last. Will having good ideas (and lots of them) still matter as the agents begin having better ideas themselves? The human-expert-in-the-loop feels essential now to get good results from the agents, but how long will that last until our wildest ideas can be turned into working, tasteful software while we sleep? Will it be a gentle obsolescence where we happily hand off the reins or something else?

For now, I feel needed. I don’t describe the way I work now as “vibe coding” as this sounds like a pejorative “prompt and chill” way of building AI slop software projects. I’ve been building tools like roborev to bring rigor and continuous supervision to my parallel agent sessions, and to heavily scrutinize the work that my agents are doing. With this radical new way of working it is hard not to be contemplative about the future of software engineering.

Probably the book I’ve referenced the most in my career is The Mythical Man-Month by Fred Brooks, whose now-famous Brooks’s law argues that “adding manpower to a late software project makes it later.” Lately I find myself asking whether the lessons from this book are applicable in this new era of agentic development. Will a talented developer orchestrating a swarm of AI agents be able to build complex software faster and better, and will the short-term productivity gains lead to long-term project success? Or will we run into the same bottlenecks—scope creep, architectural drift, and coordination overhead—that have plagued software teams for decades?

Revisiting The Mythical Man-Month (TMMM)

One of Brooks’s central arguments is that small teams of elite people outperform large teams of average ones, with one “chief surgeon” supported by specialists. This leads to a high degree of conceptual integrity about the system design, as if “one mind designed it, even if many people built it.”

Agentic engineering appears to amplify these problems, since the quality of the software being built is now only as good as the humans in the loop curating and refining specs, saying yes or no to features, and taming unnecessary code and architectural complexity. One of the metaphors in TMMM is the “tar pit”: “Everyone can see the beasts struggling in it, and it looks like any one of them could easily free itself, but the tar holds them all together.” Now, we have a new “agentic tar pit” where our parallel Claude Code sessions and git worktrees are engaged in combat with the code bloat and incidental complexity generated by their virtual colleagues. You can systematically refactor, but invariably an agentic codebase will end up larger and more overwrought than anything built by human hand. This is technical debt on an unprecedented scale, accrued at machine speed.

In TMMM, Brooks observed that a working program is maybe 1/9th the way to a programming product, one that has the necessary testing, documentation, and hardening against edge cases and is maintainable by someone other than its author. Agents are now making the “working program” (or “appears-to-work” program, more accurately) a great deal more accessible, though many newly minted AI vibe coders clearly underestimate the work involved with going from prototype to production.

These problems compound when you consider the closely related Conway’s law, which asserts that the architecture of a software system tends to mirror the communication structure of the organization that builds it. What does that look like when applied to a virtual “team” of agents with no persistent memory and no shared understanding of the system they are building?

Another “big idea” from TMMM that has stuck with people is the n(n-1)/2 coordination problem as teams scale. With agentic engineering, there are fewer humans involved, so the coordination problem doesn’t disappear but rather changes shape. Different agent sessions may produce contradictory plans that humans have to reconcile. I’ll leave this agent orchestration question for another post.
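Brooks’s formula is simple arithmetic: a team of n members has n(n-1)/2 pairwise communication channels. A quick shell sketch shows how fast that grows:

```shell
#!/bin/sh
# Pairwise communication channels in a team of n members: n(n-1)/2 (Brooks, TMMM).
for n in 2 5 10 50; do
  echo "$n members: $(( n * (n - 1) / 2 )) channels"
done
```

Ten members already means 45 channels; fifty means 1,225 — which is why Brooks favored small teams of elite people.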

No silver bullet

“There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.”
—“No Silver Bullet” (1986)

Brooks wrote a follow-up essay to TMMM to look at software design through the lens of essential complexity and accidental complexity. Essential complexity is fundamental to achieving your goal: If you made the system any simpler, it would fall short of its problem statement. Accidental complexity is everything else imposed by our tools and processes: programming languages, tools, and the layer of design and documentation to make the system understandable by engineers.

Coding agents are probably the most powerful tool ever created to tackle accidental complexity. To think: I basically do not write code anymore, and now write tons of code in a language (Go) I have never written by hand. There is a lot of discussion about whether IDEs are still going to be relevant in a year or two, when maybe all we need is a text editor to review diffs. The productivity gains are enormous, and I say this as someone burning north of 10 billion tokens a month across Claude, Codex, and Gemini.

But Brooks’s “No Silver Bullet” argument predicts exactly the problem I’m experiencing in my agentic engineering: The accidental complexity is no problem at all anymore, but what’s left is the essential complexity which was always the hard part. Agents can’t reliably tell the difference. LLMs are extraordinary pattern matchers trained on the entirety of humanity’s open source software, so while they are brilliant at dealing with accidental complexity (refactor this code, write these tests, clean up this mess), they struggle with the more subtle essential design problems, which often have no precedent to pattern match against. They also often tend to introduce unnecessary complexity, generating large amounts of defensive boilerplate that is rarely needed in real-world use.

Put another way, agents are so good at attacking accidental complexity that they generate new accidental complexity that can get in the way of the essential structure that you are trying to build. With a couple of my new projects, roborev and msgvault, I am already dealing with this problem as I begin to reach the 100 KLOC mark and watch the agents begin to chase their own tails and contextually choke on the bloated codebases they have generated. At some point beyond that (the next 100 KLOC, or 200 KLOC) things start to fall apart: Every new change has to hack through the code jungle created by prior agents. Call it a “brownfield barrier.” At Posit we have seen agents struggle much more in 1 million-plus-line codebases such as Positron, a VS Code fork. This seems to support Brooks’s complexity scaling argument.

I would hesitate to place a bet on whether the present is a ceiling or a plateau. The models are clearly getting better fast, and the problems I’m describing here may look charmingly quaint in two years. But Brooks’s essential/accidental distinction gives me some confidence that this isn’t just about the current limitations of the technology. Figuring out what to build was the hard part long before we had LLMs, and I don’t see how a flawless coding agent changes that.

Agentic scope creep

When generating code is free, knowing when to say “no” is your last defense.

With the cost of generating code now converging to zero, there is practically nothing stopping agents and their human taskmasters from pursuing all avenues that would have previously been cost or time prohibitive. The temptation to spend your day prompting “and now can you just…?” is overwhelming. But any new generated feature or subsystem, while cheap to create, is not costless to maintain, test, debug, and reason about in the future. What seems free now carries a future contextual burden for future agent sessions, and each new bell or whistle becomes a new vector of brittleness or bugs that can harm users.

From this perspective, building great software projects maybe never was about how fast you can type the code. We can “type” 10x, maybe 100x faster with agents than we could before. But we still have to make good design decisions, say no to most product ideas, maintain conceptual integrity, and know when something is “done.” Agents are accelerating the “easy part” while paradoxically making the “hard part” potentially even more difficult.

Agentic scope creep also seems to be actively destroying the open source software world. Now that the bar is lower than ever for contributors to jump in and offer help, projects are drowning in torrents of 3,000-line “helpful” PRs that add new features. As developers become increasingly hands-off and disengaged from the design and planning process, the agents’ runaway scope creep can get out of control quickly. When the person submitting a pull request didn’t write or fully read the code in it, there’s likely no one involved who’s truly accountable for the design decisions.

I have seen in my own work on roborev and msgvault that agents will propose overwrought solutions to problems when a simple solution would do just fine. It takes judgment to know when to intervene and how to keep the agent in check.

Design and taste as our last foothold

Brooks’s argument is that design talent and good taste are the most scarce resources, and now with agents doing all of the coding labor, I argue that these skills matter more now than ever. The bottleneck was never hands on keyboards. Now with the new “Mythical Agent-Month,” we can reasonably conclude that design, product scoping, and taste remain the practical constraints on delivering high-quality software. The developers who thrive in this new agentic era won’t be the ones who run the most parallel sessions or burn the most tokens. They’ll be the ones who are able to hold their projects’ conceptual models in their mind, who are shrewd about what to build and what to leave out, and exercise taste over the enormous volume of output.

The Mythical Man-Month was published in 1975, more than 50 years ago. In that time, a lot has happened: tremendous progress in hardware performance, programming languages, development environments, cloud computing, and now large language models. The tools have changed, but the constraints are still the same.

Maybe I’m trying to justify my own continued relevance, but the reality is more complex than that. Not all software is created equal: CRUD business productivity apps aren’t the same as databases and other critical systems software. I think the median software consulting shop is completely toast. But my thesis is more about development work in the 1% tail of the distribution: problems inaccessible to most engineers. This will continue to require expert humans in the loop, even if they aren’t doing much or any manual coding. As one recent adjacent example, my friend Alex Lupsasca at OpenAI and his world-class physicist collaborators were able to create a formulation of a hard physics problem and arrive at a solution with AI’s help. Without such experts in the loop, it’s much more dubious whether LLMs would be able to both pose the questions and come up with the solutions.

For now, I’ll probably still be getting out of bed at 5am to feed and tame my agents for the foreseeable future. The coding is easier now, and honestly more fun, and I can spend my time thinking about what to build rather than wrestling with the tools and systems around the engineering process.

Thanks to Martin Blais, Josh Bloom, Phillip Cloud, Jacques Nadeau, and Dan Shapiro for giving feedback on drafts of this post.




Agile planning gets a boost from new features in GitLab 18.10


GitLab's Agile planning experience is getting a significant upgrade. Starting in GitLab 18.10, the new work items list and saved views bring together two long-requested capabilities: one list that displays all work item types together, and saved views that let you store and return to customized list configurations.

These capabilities help save time and effort by:

  • Eliminating repetitive filter setup for common workflows
  • Ensuring consistency in how teams view and assess work
  • Facilitating standardized reporting and status checks

What are work items?

Today, epics and issues live on separate list pages, requiring users to navigate between them. The work items list combines epics, issues, and other work items into a single, unified list experience, eliminating the need to switch between separate pages for different work item types.

This is also the foundation for deeper planning capabilities coming in the future. Bringing all work item types into one place paves the way for hierarchy views (like a Table view) that will make it easier to visualize relationships and structure across epics, issues, and other items at a glance.

Beyond list and hierarchy views, we also plan to consolidate other common workflows, like Boards, into this unified experience. The result: all of your essential planning views in one place, shareable with your team through saved views, without needing to navigate across different parts of the product.

You may be wondering why we call these "work items" rather than issues. The short answer is that "issue" doesn't scale to where we're going. Soon, you'll be able to fully configure your work item types, including their names, to match your organization's planning hierarchy. Locking the experience to legacy naming would work against that flexibility. "Work items" is the foundation for a model you can make your own.

Work items list view

What led to the change to work items?

In 2024, we shared our vision for a new Agile planning experience in GitLab, powered by the work items framework. That post outlined the core problem: Epics and issues existed as separate experiences, creating friction for teams who expected consistent functionality across planning objects. The work items framework was our answer — a unified architecture designed to deliver consistency and unlock new capabilities across GitLab's planning tools. Work items list and saved views are a step in that journey.

What are saved views?

Saved views allow users to save and return to customized list configurations, including filters, sort order, and display options. The goal is to make routine checks more efficient and to support consistent, standardized ways of viewing work across a team.

Saved view

What's next

To understand why we are making the changes we are, it helps to picture where we're headed.

The goal isn't just a work items list; it's a planning experience that lets you move fluidly between different types of views (list, board, table, and more) while retaining your current filter scope.

Pair that with saved views, and you can create a dedicated view for each of your workflows: iteration planning, backlog refinement, portfolio-level planning with nested table views, and more.

Each view is ready to go, consistent in how it filters and displays work, and shareable with your team. This framework also sets the stage for more powerful capabilities down the road, including full swimlane support for any work item attribute in boards.

We know that changes to the tools you use every day can be disruptive. If you've built workflows around the existing epic and issue list pages, this will look and feel different. That's not something we take lightly.

This direction wasn't a decision we made quickly. It reflects years of feedback, a significant architectural investment in the work items framework, and a genuine belief that a unified experience will serve teams better in the long run. We expect the transition to take some adjustment, and we'll continue to iterate based on what we hear from you!

Share your feedback

We encourage you to try these new capabilities. Then, please reach out about your work items list and saved views experience in our feedback issue. Your comments will help us further improve these capabilities.


EP268 Weaponizing the Administrative Fabric: Cloud Identity and SaaS Compromise in M Trends 2026

1 Share

Guests:

 Topics:

  • Do we need to rethink "Mean Time to Respond" entirely, or are we just in deep trouble?
  • Why are threat groups collaborating so well, and are there actual lessons for defenders in their "business" model?
  • What is the scalable advice for teams worried about voice phishing and GenAI cloning?
  • What does "weaponizing the administrative fabric" actually mean in a world where identity is the perimeter?
  • Why is identity/SaaS compromise "news" in 2026 when cloud security folks have been shouting about it for years? What actually changed?
  • What's the latest in supply chain compromise, particularly regarding malicious open-source packages?
  • How do we defend against malware that is "lazy" enough to use the victim's own AI tools for reconnaissance?
  • What is the specific advice for Detection and Response (D&R) teams to handle "living off the land" (or "living off the cloud")?
  • How do you fix the situation when IT and Security departments genuinely hate each other?
  • Besides reading the report, what is the one book or piece of advice for a CISO to survive this year?

Resources:





Download audio: https://traffic.libsyn.com/secure/cloudsecuritypodcast/EP268_not267_CloudSecPodcast.mp3?dest-id=2641814

How can I make sure the anti-malware software doesn’t terminate my custom service?


A customer was developing a Windows service process, and it is important to them that the service keep running on their servers. They wanted to know if there was a way they could prevent users who connect to the server from terminating the service. In particular, they wanted to make sure that the user couldn’t use the anti-malware software to terminate their service, either by mistake or maliciously.

The fact that they made it to asking about anti-malware software tells me that they have already locked down the more obvious access points. For example, they’ve already set the appropriate permissions on their service so that only administrators can Stop the service.

But how do you protect your process from anti-malware software?

The answer, of course, is that you can’t.

Because if you could inoculate yourself against being terminated by anti-malware software, then malware would do it!

Anti-malware software runs with extremely high levels of access to the system. They have components that run in kernel mode, after all. Even if they can’t terminate your process, they can certainly make it so that your process can’t accomplish anything (say, by preventing its threads from being scheduled to execute). And if anti-malware software goes awry, the entire system can be rendered catastrophically broken.

The customer will have to work with the anti-malware software that runs on their server to see if there is a setting or other way to tell the anti-malware software never to terminate their critical service. (Of course, it means that genuine malware might masquerade as their critical service and elude detection. This is a risk assessment trade-off they will have to make.) And if their service runs on client-configured servers, where they don’t control what anti-malware software the client uses, then they’ll have to work with all of the anti-malware software (or at least all the major ones) and see if they can arrange something.¹

But Windows can’t help you. The anti-malware software is more powerful than you.

¹ For example, maybe they digitally sign their service process and give the public key to the anti-malware software, saying, “Please don’t terminate processes signed by this key.” Of course, the real question is whether the anti-malware vendors will accept that.

The post How can I make sure the anti-malware software doesn’t terminate my custom service? appeared first on The Old New Thing.


How to deploy Pi-Hole with Docker and stop ads on every device on your LAN


How do you block ads? Most people install various and sundry ad-blocking software on their computers or add browser extensions to handle the task. 

Either way you go, blocking ads can help prevent your web browser from loading ads that could consume too many system resources or even inject malicious code into your system. I’ve had instances where a single ad bogged down my CPU so much that the computer came to a screeching halt. The only solution was a hard reboot.

After that, I was on a quest to do whatever it took to avoid another such instance. At first, I thought about going the browser extension route, but I realized I’d have to install extensions on every browser I used on every desktop and laptop on my home network. That’s all fine and good if you only have a few machines connected to your LAN. But what if you have considerably more?

You might want to consider an app like Pi-Hole.

Pi-Hole is a popular open source project that provides a simple, easy-to-use solution for blocking advertisements and trackers on the internet. Instead of working on a computer-by-computer basis, Pi-Hole functions network-wide. The name nods to the Raspberry Pi, the inexpensive single-board computer it was originally designed to run on, combined with the idea of a DNS “black hole” that swallows requests for ad-serving domains.

How it works is simple: first, you deploy/install Pi-Hole, and then you configure each computer to use Pi-Hole as its DNS server. That’s it.
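Under the hood, Pi-Hole acts as a DNS sinkhole: its blocklists use hosts-file format, mapping ad and tracker domains to a null address so they never resolve. A toy sketch of the idea (the domains and addresses here are hypothetical, not a real Pi-Hole blocklist):

```shell
#!/bin/sh
# Toy DNS sinkhole: domains on the blocklist resolve to 0.0.0.0 ("blocked");
# everything else would be forwarded to an upstream resolver ("allowed").
blocklist="0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net"

lookup() {
  if printf '%s\n' "$blocklist" | grep -q " $1\$"; then
    echo "$1 -> 0.0.0.0 (blocked)"
  else
    echo "$1 -> forwarded upstream (allowed)"
  fi
}

lookup ads.example.com
lookup example.org
```

The real Pi-Hole does this for every device on your LAN, with millions of domains and a proper DNS server behind it.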

Pi-Hole offers network-wide protection, blocks in-app advertisements, improves network performance (because loading ads consumes bandwidth and slows down page loads), and provides a web-based interface for statistical monitoring. Pi-Hole also includes a built-in DHCP server for even more control over your network.

Now, I want to mention something before we continue. The best way to get Pi-Hole working on your network is to point your modem/router’s DNS settings to the Pi-Hole server. The reason I mention this is that if you use AT&T Fiber, you cannot change the DNS settings on the router. 

I’ve deployed and used Pi-Hole on several occasions, and every time AT&T Fiber is involved, things get tricky. However, if you’re using an ISP that allows you to change the DNS addresses on your modem/router, you should be good to go.

Let’s deploy Pi-Hole.

Installing Docker

Since we’re deploying Pi-Hole as a Docker container, you might need to install Docker first. If you’re using macOS or Windows, you can simply install Docker Desktop, which installs the necessary Docker tools along with it. If you’re using Linux, the process is a bit more complicated. Here are the steps on Ubuntu Server 24.04.

The first step is to add the required Docker GPG key with the following commands: 

  1. sudo apt-get update
  2. sudo apt-get install ca-certificates curl
  3. sudo install -m 0755 -d /etc/apt/keyrings
  4. sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
  5. sudo chmod a+r /etc/apt/keyrings/docker.asc

Next, add the official Docker repository with the following command:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update and install Docker using the commands:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin git -y

You then need to add your user to the Docker group (so you can manage your containers without using sudo, which can lead to security issues) with the following command:

sudo usermod -aG docker $USER

Log out and log back in so your changes take effect.

Verify that you can use Docker with:

docker ps -a

You should see an empty list without any errors presented.

Deploy Pi-Hole

With Docker installed, we can finally deploy Pi-Hole. A few things to note:

You’ll need to change any external ports (those on the left side of the : character) to ports that are available on your host. Keep in mind that standard DNS clients always query port 53, so for other devices on your LAN to use Pi-Hole directly, the DNS port ultimately needs to be exposed as 53 (on Ubuntu, systemd-resolved commonly occupies that port and must be freed first). You’ll also want to change the webserver_api_password (PASSWORD) to something strong and unique.

With that in mind, the docker run command for this is:

docker run --name pihole -p 54:53/tcp -p 54:53/udp -p 8081:80/tcp -p 443:443/tcp -e TZ=America/Kentucky/Louisville -e FTLCONF_webserver_api_password="PASSWORD" -e FTLCONF_dns_listeningMode=all -v ./etc-pihole:/etc/pihole -v ./etc-dnsmasq.d:/etc/dnsmasq.d --cap-add NET_ADMIN -d --restart unless-stopped pihole/pihole:latest
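If you prefer Docker Compose, the same deployment can be sketched as a docker-compose.yml with the same assumptions as above (adjust the external ports, timezone, and PASSWORD for your environment):

```yaml
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "54:53/tcp"
      - "54:53/udp"
      - "8081:80/tcp"
      - "443:443/tcp"
    environment:
      TZ: "America/Kentucky/Louisville"
      FTLCONF_webserver_api_password: "PASSWORD"
      FTLCONF_dns_listeningMode: "all"
    volumes:
      - "./etc-pihole:/etc/pihole"
      - "./etc-dnsmasq.d:/etc/dnsmasq.d"
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
```

Save the file and run docker compose up -d from the same directory to start the container.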

You’ll need to give the container a minute or two to deploy. Once the container is listed as “healthy” (using the command docker ps -a), you should be good to go.

Accessing Pi-Hole

You’ll want to be able to access your Pi-Hole dashboard, which can be done by pointing a browser that is connected to your LAN to http://SERVER:PORT/admin/ (where SERVER is the IP address of the hosting server and PORT is the external port you configured in the docker run command).

You’ll be greeted by a login page, where you’ll need to type the password you configured for webserver_api_password in the docker run command.

Once you’ve logged in, you’ll see the Pi-Hole dashboard (Figure 1).

Figure 1: A newly-deployed Pi-Hole instance is ready to be used.

Configuring your machines to use Pi-Hole

There are two ways you can configure the computers on your network to use Pi-Hole. The more involved method is to configure each machine to use the Pi-Hole server address as its DNS address.

The next method allows you to configure DNS once and be done with it. For this, you have to have access to your ISP’s modem/router and then configure the router’s DNS to use the Pi-Hole IP address. Once you’ve done that, you’ll need to disable DHCP on your ISP’s modem/router and enable it on the Pi-Hole server. 

To enable DHCP on your Pi-Hole server, go to Settings > DHCP (in the Pi-Hole web-based GUI), enable the DHCP server, configure the range of IP addresses to hand out, and then configure the router/gateway address that is associated with your modem/router (Figure 2).

Figure 2: Using Pi-Hole to serve up DHCP addresses is the more convenient route.

You can then either restart your machines or have them renew their DHCP leases, and those machines will begin using Pi-Hole for DNS, which means they’re protected from ads.

The post How to deploy Pi-Hole with Docker and stop ads on every device on your LAN appeared first on The New Stack.
