
Settle down, nerds. AI is a normal technology

Ryan welcomes Anil Dash, writer and former Stack Overflow board member, back to the show to discuss how AI is not a magical technology, but rather the normal next step in computing’s evolution. They explore the importance of democratizing access to technology, the unique challenges that LLMs’ non-determinism poses, and how developers can keep Stack Overflow’s ethos of community alive in a world of AI.

Text Grab v4.11 what’s new and what’s next?


The release train never ends. This latest version includes a few key features that lay the groundwork for the next bigger release. The three big features are:

  1. Calculation Pane in the Edit Text Window
  2. Regular Expression (RegEx) Manager
  3. Zooming on the Fullscreen Grab

Something I’ve wanted in Text Grab for a while now is the ability to do basic math operations. I loved this as a tool in OneNote and wanted to do even more with updating, aggregation, and variable names. Another major inspiration for this feature was the excellent Mac app, Soulver: https://soulver.app/

In addition to wanting to be able to perform calculations, I’ve wanted the ability to save and recall RegExes quickly and easily. Now, starting from the Find and Replace dialog, you can open the RegEx manager and select from the starting expressions, modify them, and make your own! These will be popping up in more places, so keep an eye out for future releases!
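
As an illustration of the kind of patterns you might save (these are hypothetical examples, not Text Grab's built-in starting expressions), here's a minimal Python sketch of named, reusable RegExes applied to grabbed text:

import re

# Hypothetical saved patterns, keyed by friendly names the way a
# RegEx manager might store them.
SAVED_PATTERNS = {
    'email': r'[\w.+-]+@[\w-]+\.[\w.-]+',
    'us-phone': r'\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}',
}

grabbed_text = 'Contact us at support@example.com or (555) 123-4567.'

for name, pattern in SAVED_PATTERNS.items():
    print(name, re.findall(pattern, grabbed_text))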

Finally, when selecting an area of the screen with the Fullscreen Grab, a more precise selection usually means less cleanup work. So now you can zoom in by scrolling the mouse wheel or by pinch-zooming while the Fullscreen Grab is active! This makes it easier to select exactly what you’re looking to grab!

Download from the Microsoft Store

The next major features coming to Text Grab will be centered on getting value out of text automatically, with fewer steps and more speed! If there is anything about Text Grab you’d like to change or improve, head over to GitHub and open an issue; I’m very active there! https://github.com/TheJoeFin/Text-Grab

Happy Text Grabbing!

Joe




Gain insights into your software supply chain using GitHub’s Dependency Graph


Recent software supply chain attacks prove once again that having insight into your own project's dependencies is crucial. This is where GitHub's dependency graph can help. It maps every direct and transitive dependency in your project, giving you the visibility you need to understand, secure, and manage your software supply chain.

What is the Dependency Graph?

The dependency graph is a summary of the manifest and lock files stored in a repository. It shows which packages depend on what, helping you identify risks, prioritize security fixes, and keep track of your project's true footprint.

For each repository, the dependency graph shows:

  • Dependencies: The ecosystems and packages your project relies on
  • Version information: What versions you're using
  • License details: The licensing terms of your dependencies
  • Vulnerability status: Whether any dependencies have known security issues
  • Transitive paths: For ecosystems that support it, you can see the entire chain that brought in each dependency

Why enable the dependency graph?

When vulnerabilities are discovered in open source packages, they ripple downstream through all projects that depend on them. Without visibility into your dependency tree, you can't take timely action to protect your project.

The dependency graph also unlocks other GitHub security features:

  • Dependabot alerts: Get notified when vulnerabilities are found in your dependencies
  • Dependency review: Understand the security impact of dependency changes in pull requests
  • SBOM export: Generate a Software Bill of Materials for compliance and auditing

How to enable it?

For public repositories, you don't need to do anything: the dependency graph is enabled by default.

If you have private repositories, a repository administrator can enable it manually:

Step 1: Navigate to Repository Settings

Go to your repository on GitHub and click on Settings (if you don't see the Settings tab, use the dropdown menu to access it).

Step 2: Access Security Settings

In the left sidebar, find the Security section and click on Code security and analysis (or Advanced Security depending on your view).

Step 3: Enable the Dependency Graph

Read the message about granting GitHub read-only access to repository data, then click Enable next to "Dependency Graph".

Step 4: Wait for Processing

When first enabled, any manifest and lock files for supported ecosystems are parsed immediately. The graph is usually populated within minutes, though this may take longer for repositories with many dependencies.

Viewing your dependency graph

After enabling, you can explore your dependencies by:

  • Going to your repository's Insights tab
  • Clicking Dependency graph in the left sidebar
  • Browsing the list of dependencies, which automatically sorts vulnerable packages to the top
  • Clicking on any dependency to see details like version, license, and vulnerabilities
  • Using "Show paths" for transitive dependencies to understand how they entered your project

Export your dependency graph

One of the key features of the dependency graph is the ability to export it as a Software Bill of Materials (SBOM) in the industry-standard SPDX format. SBOMs are increasingly required by regulators, government agencies, and enterprise customers for compliance and transparency purposes.

An SBOM provides a formal, machine-readable inventory of your project's dependencies and associated information such as versions, package identifiers, licenses, copyright information, and (for package ecosystems that support transitive dependency labeling) transitive paths.

The export captures the current state of your dependency graph, representing the head of your main branch at the time of export.

The simplest way to generate an SBOM is through GitHub's web interface:

  • Navigate to your repository on GitHub
  • Click on the Insights tab
  • Select Dependency graph from the left sidebar
  • On the top right side of the Dependencies tab, click Export SBOM
  • A JSON file will download automatically in your browser
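
If you need SBOMs in automation rather than through the browser, GitHub's REST API exposes the same export. Here's a minimal Python sketch, assuming the requests library is installed and you have a token with read access to the repository; the owner, repo, and token values are placeholders:

import requests

OWNER, REPO = 'your-org', 'your-repo'  # placeholders
TOKEN = 'your-token'  # a token with read access to the repository

resp = requests.get(
    f'https://api.github.com/repos/{OWNER}/{REPO}/dependency-graph/sbom',
    headers={
        'Accept': 'application/vnd.github+json',
        'Authorization': f'Bearer {TOKEN}',
    },
    timeout=30,
)
resp.raise_for_status()

sbom = resp.json()['sbom']  # the SPDX document itself
print(sbom['spdxVersion'], 'document with', len(sbom.get('packages', [])), 'packages')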

More information

About the dependency graph - GitHub Docs


Python Supply Chain Security Made Easy


Maybe you’ve heard that hackers have been trying to take advantage of open source software to inject code into your machine and, in the worst case, even into the machines of your libraries’ consumers or your applications’ users. In this quick post, I’ll show you how to integrate Python’s “official” package scanning tool directly into your continuous integration and your project’s unit tests. While pip-audit is maintained in part by Trail of Bits with support from Google, it’s part of the PyPA organization.

Why this matters

Here are five recent, high-danger PyPI supply chain attacks where “pip install” can turn into “pip install a backdoor.” Afterwards, we talk about how to scan for these and prevent them from making it to your users.

Compromised ultralytics releases delivered the XMRig coinminer

What happened: A malicious version (8.3.41) of the widely-used ultralytics package was published to PyPI, containing code that downloaded the XMRig coinminer. Follow-on versions also carried the malicious downloader, and the writeup attributes the initial compromise to a GitHub Actions script injection, plus later abuse consistent with a stolen PyPI API token. Source: ReversingLabs

Campaign of fake packages stealing cloud access tokens, 14,100+ downloads before removal

What happened: Researchers reported multiple bogus PyPI libraries (including “time-related utilities”) designed to exfiltrate cloud access tokens, with the campaign exceeding 14,100 downloads before takedown. If those tokens are real, this can turn into cloud account takeover. Source: The Hacker News

Typosquatting and name-confusion targeting colorama, with remote control and data theft payloads

What happened: A campaign uploaded lookalike package names to PyPI to catch developers intending to install colorama, with payloads described as enabling persistent remote access/remote control plus harvesting and exfiltration of sensitive data. High danger mainly because colorama is popular and typos happen. Source: Checkmarx

PyPI credential-phishing led to real account compromise and malicious releases of a legit project (num2words)

What happened: PyPI reported an email phishing campaign using a lookalike domain; 4 accounts were successfully phished, attacker-generated API tokens were revoked, and malicious releases of num2words were uploaded then removed. This is the “steal maintainer creds, ship malware via trusted package name” playbook. Source: Python Package Index Blog

SilentSync RAT delivered via malicious PyPI packages (sisaws, secmeasure)

What happened: Zscaler documented malicious packages (including typosquatting) that deliver a Python-based remote access trojan (RAT) with command execution, file exfiltration, screen capture, and browser data theft (credentials, cookies, etc.). Source: Zscaler

Integrating pip-audit

Those are definitely scary situations. I’m sure you’ve heard about typosquatting and how annoying that can be. Caution will save you there. Where caution will not save you is when a legitimate package has its supply chain taken over. Often this looks like: a package you use depends on another package whose maintainer was phished, and now everything that uses that library carries the vulnerability forward.

Enter pip-audit.

pip-audit is great because you can just run it on the command line. It checks against PyPA’s official vulnerability database and tells you if anything in your virtual environment or requirements files is known to be vulnerable or malicious.

You could even set up a GitHub Action to do so, and I wouldn’t discourage that at all. But it’s also valuable to make this check happen on developers’ machines. It’s a simple two-step process:

  1. Add pip-audit to your project’s development dependencies or install it globally with uv tool install pip-audit.
  2. Create a unit test that simply shells out to execute pip-audit and fails the test if an issue is found.

Part one’s easy. Part two takes a little bit more work. That’s okay, because I got it for you. Just download the file here and drop it in your pytest test directory:

test_pypi_security_audit.py

Here’s a small segment to give you a sense of what’s involved.

import json
import subprocess
import sys
from pathlib import Path

import pytest

# Setup (simplified): assumes this test file lives in a tests/ directory
# one level below the project root.
project_root = Path(__file__).resolve().parent.parent


def test_pip_audit_no_vulnerabilities():
    # Run pip-audit with JSON output for easier parsing
    try:
        result = subprocess.run(
            [
                sys.executable,
                '-m',
                'pip_audit',
                '--format=json',
                '--progress-spinner=off',
                '--ignore-vuln',
                'CVE-2025-53000',  # example of skipping an irrelevant CVE
                '--skip-editable',  # don't audit your own package in dev
            ],
            cwd=project_root,
            capture_output=True,
            text=True,
            timeout=120,  # 2 minute timeout
        )
    except subprocess.TimeoutExpired:
        pytest.fail('pip-audit command timed out after 120 seconds')
    except FileNotFoundError:
        pytest.fail('pip-audit not installed or not accessible')
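
    # A sketch of the rest of the check (the full downloadable file is more
    # thorough): parse pip-audit's JSON output and fail on any findings.
    # Assumes the --format=json shape {"dependencies": [{"name": ...,
    # "version": ..., "vulns": [...]}, ...]}.
    audit = json.loads(result.stdout or '{}')
    flagged = [dep for dep in audit.get('dependencies', []) if dep.get('vulns')]
    assert not flagged, f'Vulnerable packages found: {flagged}'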

That’s it! When anything runs your unit tests, whether that’s continuous integration, a git hook, or just a developer testing their code, you’ll also run a pip-audit check of your project.

Let others find out

Now, pip-audit tests whether a malicious package has been installed, in which case, for that poor developer or machine, it may already be too late. If it’s CI, who cares? But one other really nice feature you can combine with this is uv’s ability to put a delay on upgrading your dependencies.

Many developers, myself included, typically run some kind of command that pins your versions. Periodically we also run a command that looks for newer libraries and updates the pinned versions so we’re using the latest code. This way you upgrade in a stair-step manner at the time you intend to change versions.

This works great. However, what if the malicious version of a package is released five minutes before you run this command? You’re getting it installed. But pretty soon the community is going to find out that something is afoot, report it, and it will be yanked from PyPI. Here, bad timing got you hacked.

While it’s not a guaranteed solution, defense in depth would certainly suggest waiting a few days before installing a package. But you don’t want to review packages manually one by one, do you? For example, for Talk Python Training, we have over 200 packages for that website. It would be an immense hassle to verify the dates of each one and manually pick the versions.

No need! We can just add a simple delay to our uv command:

uv pip compile requirements.piptools --upgrade --output-file requirements.txt --exclude-newer "1 week"

In particular, notice --exclude-newer "1 week". The exact duration isn’t the important thing; it’s about building a small delay into your workflow so issues have time to be reported. You can read about the full feature here. This way, we only incorporate packages that have survived in public on PyPI for at least one week.

Hope this helps. Stay safe out there.


Kubernetes v1.35: Kubelet Configuration Drop-in Directory Graduates to GA


With the recent v1.35 release of Kubernetes, support for a kubelet configuration drop-in directory is generally available. The newly stable feature simplifies the management of kubelet configuration across large, heterogeneous clusters.

With v1.35, the kubelet command line argument --config-dir is production-ready and fully supported, allowing you to specify a directory containing kubelet configuration drop-in files. All files in that directory will be automatically merged with your main kubelet configuration. This allows cluster administrators to maintain a cohesive base configuration for kubelets while enabling targeted customizations for different node groups or use cases, and without complex tooling or manual configuration management.

The problem: managing kubelet configuration at scale

As Kubernetes clusters grow larger and more complex, they often include heterogeneous node pools with different hardware capabilities, workload requirements, and operational constraints. This diversity necessitates different kubelet configurations across node groups—yet managing these varied configurations at scale becomes increasingly challenging. Several pain points emerge:

  • Configuration drift: Different nodes may have slightly different configurations, leading to inconsistent behavior
  • Node group customization: GPU nodes, edge nodes, and standard compute nodes often require different kubelet settings
  • Operational overhead: Maintaining separate, complete configuration files for each node type is error-prone and difficult to audit
  • Change management: Rolling out configuration changes across heterogeneous node pools requires careful coordination

Before this support was added to Kubernetes, cluster administrators had to choose between using a single monolithic configuration file for all nodes, manually maintaining multiple complete configuration files, or relying on separate tooling. Each approach had its own drawbacks. This graduation to stable gives cluster administrators a fully supported fourth way to solve that challenge.

Example use cases

Managing heterogeneous node pools

Consider a cluster with multiple node types: standard compute nodes, high-capacity nodes (such as those with GPUs or large amounts of memory), and edge nodes with specialized requirements.

Base configuration

File: 00-base.conf

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
 - "10.96.0.10"
clusterDomain: cluster.local

High-capacity node override

File: 50-high-capacity-nodes.conf

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 50
systemReserved:
 memory: "4Gi"
 cpu: "1000m"

Edge node override

File: 50-edge-nodes.conf (edge compute typically has lower capacity)

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
 memory.available: "500Mi"
 nodefs.available: "5%"

With this structure, high-capacity nodes apply both the base configuration and the capacity-specific overrides, while edge nodes apply the base configuration with edge-specific settings.
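
To build intuition for the layering (a conceptual sketch only, not the kubelet's actual field-aware merge algorithm; the directory path and PyYAML dependency are assumptions), the behavior can be modeled as applying files in alphanumeric order, with later files overriding earlier ones:

from pathlib import Path

import yaml  # PyYAML, assumed installed for this illustration


def merge_dropins(config_dir: str) -> dict:
    # Apply drop-in files in sorted (alphanumeric) order; later files
    # override earlier ones. The real kubelet merges per field.
    merged: dict = {}
    for path in sorted(Path(config_dir).glob('*.conf')):
        overlay = yaml.safe_load(path.read_text()) or {}
        merged.update(overlay)
    return merged


print(merge_dropins('/etc/kubernetes/kubelet.conf.d'))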

Gradual configuration rollouts

When rolling out configuration changes, you can:

  1. Add a new drop-in file with a high numeric prefix (e.g., 99-new-feature.conf)
  2. Test the changes on a subset of nodes
  3. Gradually roll out to more nodes
  4. Once stable, merge changes into the base configuration

Viewing the merged configuration

Since configuration is now spread across multiple files, you can inspect the final merged configuration using the kubelet's /configz endpoint:

# Start kubectl proxy
kubectl proxy

# In another terminal, fetch the merged configuration
# Change the '<node-name>' placeholder before running the curl command
curl -X GET http://127.0.0.1:8001/api/v1/nodes/<node-name>/proxy/configz | jq .

This shows the actual configuration the kubelet is using after all merging has been applied. The merged configuration also includes any configuration settings that were specified via kubelet command-line arguments.

For detailed setup instructions, configuration examples, and merging behavior, see the official documentation.

Good practices

When using the kubelet configuration drop-in directory:

  1. Test configurations incrementally: Always test new drop-in configurations on a subset of nodes before rolling out cluster-wide to minimize risk

  2. Version control your drop-ins: Store your drop-in configuration files in version control (or the configuration source from which these are generated) alongside your infrastructure as code to track changes and enable easy rollbacks

  3. Use numeric prefixes for predictable ordering: Name files with numeric prefixes (e.g., 00-, 50-, 90-) to explicitly control merge order and make the configuration layering obvious to other administrators

  4. Be mindful of temporary files: Some text editors automatically create backup files (such as .bak, .swp, or files with ~ suffix) in the same directory when editing. Ensure these temporary or backup files are not left in the configuration directory, as they may be processed by the kubelet

Acknowledgments

This feature was developed through the collaborative efforts of SIG Node. Special thanks to all contributors who helped design, implement, test, and document this feature across its journey from alpha in v1.28, through beta in v1.30, to GA in v1.35.

To provide feedback on this feature, join the Kubernetes Node Special Interest Group, participate in discussions on the public Slack channel (#sig-node), or file an issue on GitHub.

Get involved

If you have feedback or questions about kubelet configuration management, or want to share your experience using this feature, join the discussion.

SIG Node would love to hear about your experiences using this feature in production!


Announcing Files v4.0.23

Announcing Files Preview v4.0.23 for users of the preview version.
