The release train never ends. This latest version has three key features, each laying the groundwork for the next, bigger release:
Something I’ve wanted in Text Grab for a while now is the ability to do basic math operations. I loved this as a tool in OneNote and wanted to do even more with updating, aggregation, and variable names. Another major inspiration for this feature was the excellent Mac app, Soulver: https://soulver.app/

In addition to wanting to be able to perform calculations, I’ve wanted the ability to save and recall RegExes quickly and easily. Now starting from the Find and Replace dialog, you can open the RegEx manager and select from the starting expressions, modify them, and make your own! These will be popping up in more places so keep an eye out for future releases!

Finally, when selecting an area on the screen from the Fullscreen Grab, a more precise selection usually means less cleanup work. So now you can zoom in by scrolling the mouse wheel or pinching and zooming while the Fullscreen Grab is active! This makes it easier to precisely select exactly what you’re looking to grab!
The next major features coming to Text Grab will be centered around getting the value out of text automatically, with fewer steps and more speed! If there is anything about Text Grab you’d like to change or improve, head over to GitHub and open an issue; I’m very active there! https://github.com/TheJoeFin/Text-Grab
Happy Text Grabbing!
Joe
The recent software supply chain attacks prove again that having insight into your own project's dependencies is crucial. This is where GitHub's dependency graph can help. It maps every direct and transitive dependency in your project, giving you the visibility you need to understand, secure, and manage your software supply chain.
The dependency graph is a summary of the manifest and lock files stored in a repository, showing which packages depend on what, helping you identify risks, prioritize security fixes, and keep track of your project's true footprint.
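As a small, purely illustrative example (package names and versions are mine, not from GitHub's docs): a Python manifest might declare only one package, while the lock file generated from it also pins everything that package pulls in. The dependency graph reads both kinds of files, which is how it can show transitive dependencies rather than just the ones you named.
File: requirements.in
requests==2.32.3
File: requirements.txt (generated by pip-compile; transitive pins included)
requests==2.32.3
urllib3==2.2.3
idna==3.10
certifi==2024.8.30
charset-normalizer==3.4.0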
For each repository, the dependency graph shows its direct and transitive dependencies, drawn from the repository's manifest and lock files, and, for public repositories, its dependents: the repositories and packages that rely on it.
When vulnerabilities are discovered in open source packages, they ripple downstream through all projects that depend on them. Without visibility into your dependency tree, you can't take timely action to protect your project.
The dependency graph also unlocks other GitHub security features: Dependabot alerts, which notify you when a dependency has a known vulnerability; Dependabot security updates, which open pull requests to upgrade affected packages; and dependency review, which flags risky dependency changes in pull requests.
For public repositories, you don't need to do anything: the dependency graph is enabled by default.
If you have private repositories, a repository administrator can enable it manually:
Step 1: Navigate to Repository Settings
Go to your repository on GitHub and click on Settings (if you don't see the Settings tab, use the dropdown menu to access it).
Step 2: Access Security Settings
In the left sidebar, find the Security section and click on Code security and analysis (or Advanced Security depending on your view).
Step 3: Enable the Dependency Graph
Read the message about granting GitHub read-only access to repository data, then click Enable next to "Dependency Graph".
Step 4: Wait for Processing
When first enabled, any manifest and lock files for supported ecosystems are parsed immediately. The graph is usually populated within minutes, though this may take longer for repositories with many dependencies.
After enabling, you can explore your dependencies by opening the repository's Insights tab and selecting Dependency graph from the sidebar.
One of the features of the dependency graph is the ability to export it as a Software Bill of Materials (SBOM) in the industry-standard SPDX format. SBOMs are increasingly required by regulators, government agencies, and enterprise customers for compliance and transparency purposes.
An SBOM provides a formal, machine-readable inventory of your project's dependencies and associated information such as versions, package identifiers, licenses, transitive paths for package ecosystems with support for transitive dependency labeling, and copyright information.
The export captures the current state of your dependency graph, representing the head of your main branch at the time of export.
The simplest way to generate an SBOM is through GitHub's web interface: open the repository's Insights tab, select Dependency graph, and click Export SBOM. The file downloads in SPDX JSON format.
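If you need this in automation, the same export is available through GitHub's REST API. A minimal sketch (OWNER, REPO, and the token environment variable are placeholders to substitute):
# Fetch the SPDX SBOM for a repository via the REST API
curl -H "Accept: application/vnd.github+json" \
     -H "Authorization: Bearer $GITHUB_TOKEN" \
     https://api.github.com/repos/OWNER/REPO/dependency-graph/sbom
The response body is the same SPDX document the web export produces, so you can archive it as part of a release pipeline.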
Maybe you’ve heard that hackers have been trying to take advantage of open source software to inject code into your machine and, in the worst case, even into the machines of your libraries’ or applications’ consumers. In this quick post, I’ll show you how to integrate Python’s “official” package scanning technology directly into your continuous integration and your project’s unit tests. While pip-audit is maintained in part by Trail of Bits with support from Google, it’s part of the PyPA organization.
Here are 5 recent, high-danger PyPI supply chain attacks where “pip install” can turn into “pip install a backdoor.” Afterwards, we’ll talk about how to scan for these and prevent them from making it to your users.
1. Compromised ultralytics releases shipping a coinminer
What happened: A malicious version (8.3.41) of the widely-used ultralytics package was published to PyPI, containing code that downloaded the XMRig coinminer. Follow-on versions also carried the malicious downloader, and the writeup attributes the initial compromise to a GitHub Actions script injection, plus later abuse consistent with a stolen PyPI API token. Source: ReversingLabs
2. Fake “time-related utility” packages exfiltrating cloud tokens
What happened: Researchers reported multiple bogus PyPI libraries (including “time-related utilities”) designed to exfiltrate cloud access tokens, with the campaign exceeding 14,100 downloads before takedown. If those tokens are real, this can turn into cloud account takeover. Source: The Hacker News
3. Typosquats of colorama, with remote control and data theft payloads
What happened: A campaign uploaded lookalike package names to PyPI to catch developers intending to install colorama, with payloads described as enabling persistent remote access/remote control plus harvesting and exfiltration of sensitive data. High danger mainly because colorama is popular and typos happen. Source: Checkmarx
4. Maintainer phishing at PyPI (malicious releases of num2words)
What happened: PyPI reported an email phishing campaign using a lookalike domain; 4 accounts were successfully phished, attacker-generated API tokens were revoked, and malicious releases of num2words were uploaded then removed. This is the “steal maintainer creds, ship malware via trusted package name” playbook. Source: Python Package Index Blog
5. Python RAT delivered via malicious packages (sisaws, secmeasure)
What happened: Zscaler documented malicious packages (including typosquatting) that deliver a Python-based remote access trojan (RAT) with command execution, file exfiltration, screen capture, and browser data theft (credentials, cookies, etc.). Source: Zscaler
Those are definitely scary situations. I’m sure you’ve heard about typosquatting and how annoying that can be. Caution will save you there. Where caution will not save you is when a legitimate package has its supply chain taken over. Often this looks like: a package you use depends on another package whose maintainer was phished, and now everything that uses that library carries the vulnerability forward.
Enter pip-audit.
pip-audit is great because you can just run it on the command line. It checks against PyPA’s official advisory database and tells you if anything in your virtual environment or requirements files has a known vulnerability or malware advisory.
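If you just want to kick the tires, both of these invocations work out of the box (the requirements filename is whatever yours happens to be):
# Audit the active virtual environment
pip-audit
# Audit a requirements file instead of the environment
pip-audit -r requirements.txt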
You could even set up a GitHub Action to do so, and I wouldn’t recommend against that at all. But it’s also valuable to make this check happen on developers’ machines. It’s a simple two-step process to do so:
Step one: install pip-audit with uv tool install pip-audit. Step two: add a pytest test that runs it. Part one’s easy. Part two takes a little bit more work. That’s okay, because I’ve got it for you. Just download the file here and drop it in your pytest test directory:
Here’s a small segment to give you a sense of what’s involved.
import json
import subprocess
import sys
from pathlib import Path

import pytest

# Assumes this test file lives one level below the project root
project_root = Path(__file__).resolve().parent.parent


def test_pip_audit_no_vulnerabilities():
    # Run pip-audit with JSON output for easier parsing
    try:
        result = subprocess.run(
            [
                sys.executable,
                '-m',
                'pip_audit',
                '--format=json',
                '--progress-spinner=off',
                '--ignore-vuln',
                'CVE-2025-53000',  # example of skipping an irrelevant cve
                '--skip-editable',  # don't test your own package in dev
            ],
            cwd=project_root,
            capture_output=True,
            text=True,
            timeout=120,  # 2 minute timeout
        )
    except subprocess.TimeoutExpired:
        pytest.fail('pip-audit command timed out after 120 seconds')
    except FileNotFoundError:
        pytest.fail('pip-audit not installed or not accessible')

    # pip-audit exits non-zero when it finds known vulnerabilities
    if result.returncode != 0:
        try:
            report = json.loads(result.stdout)
            bad = [d['name'] for d in report.get('dependencies', []) if d.get('vulns')]
        except ValueError:
            bad = result.stderr
        pytest.fail(f'pip-audit found vulnerable packages: {bad}')
That’s it! Whenever anything runs your unit tests, whether that’s continuous integration, a git hook, or just a developer testing their code, you’ll also run a pip-audit audit of your project.
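For the CI side, a minimal GitHub Actions sketch could look like this (the workflow name, Python version, and file layout are my placeholder assumptions, not from the downloadable file):
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install -r requirements.txt pip-audit pytest
      - run: pytest  # the pip-audit test runs along with everything else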

Now, pip-audit tests whether a malicious package has been installed, in which case, for that poor developer or machine, it may be too late. If it’s CI, who cares? But one other feature you can combine with this that is really nice is uv’s ability to put a delay on upgrading your dependencies.
Many developers, myself included, will typically run some kind of command that pins their versions. Periodically we also run a command that looks for newer libraries and updates the pinned versions so we’re using the latest code. This way you upgrade in a stair-step manner, at the time you intend to change versions.
This works great. However, what if the malicious version of a package is released five minutes before you run this command? You’ll install it. Pretty soon, the community will find out that something is afoot, report it, and it will be yanked from PyPI, but here bad timing got you hacked.
While it’s not a guaranteed solution, defense in depth suggests waiting a few days before installing a new release. But you don’t want to review packages manually one by one, do you? For example, for Talk Python Training, we have over 200 packages for that website. It would be an immense hassle to verify the dates of each one and manually pick the versions.
No need! We can just add a simple delay to our uv command:
uv pip compile requirements.piptools --upgrade --output-file requirements.txt --exclude-newer "1 week"
In particular, notice --exclude-newer "1 week". The exact duration isn’t the important thing. It’s about building a little delay into your workflow so issues have time to be reported. You can read about the full feature here. This way, we only incorporate packages that have survived in public on PyPI for at least one week.
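If you’d rather pin the cutoff exactly, --exclude-newer also accepts an absolute RFC 3339 timestamp, which makes the resolution reproducible; the date below is just an example:
uv pip compile requirements.piptools --upgrade --output-file requirements.txt --exclude-newer "2025-01-01T00:00:00Z"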
Hope this helps. Stay safe out there.
With the recent v1.35 release of Kubernetes, support for a kubelet configuration drop-in directory is generally available. The newly stable feature simplifies the management of kubelet configuration across large, heterogeneous clusters.
With v1.35, the kubelet command line argument --config-dir is production-ready and fully supported,
allowing you to specify a directory containing kubelet configuration drop-in files.
All files in that directory will be automatically merged with your main kubelet configuration.
This allows cluster administrators to maintain a cohesive base configuration for kubelets while enabling targeted customizations for different node groups or use cases, and without complex tooling or manual configuration management.
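Concretely, enabling it is just a flag on the kubelet command line. A sketch, with example paths rather than required ones:
kubelet --config=/etc/kubernetes/kubelet-config.yaml \
  --config-dir=/etc/kubernetes/kubelet.conf.d
Only files with a .conf suffix in that directory are processed, and they are merged over the base configuration in alphanumeric order.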
As Kubernetes clusters grow larger and more complex, they often include heterogeneous node pools with different hardware capabilities, workload requirements, and operational constraints. This diversity necessitates different kubelet configurations across node groups, yet managing these varied configurations at scale becomes increasingly challenging.
Before this support was added to Kubernetes, cluster administrators had to choose between using a single monolithic configuration file for all nodes, manually maintaining multiple complete configuration files, or relying on separate tooling. Each approach had its own drawbacks. This graduation to stable gives cluster administrators a fully supported fourth way to solve that challenge.
Consider a cluster with multiple node types: standard compute nodes, high-capacity nodes (such as those with GPUs or large amounts of memory), and edge nodes with specialized requirements.
File: 00-base.conf
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- "10.96.0.10"
clusterDomain: cluster.local
File: 50-high-capacity-nodes.conf
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 50
systemReserved:
memory: "4Gi"
cpu: "1000m"
File: 50-edge-nodes.conf (edge compute typically has lower capacity)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
memory.available: "500Mi"
nodefs.available: "5%"
With this structure, high-capacity nodes apply both the base configuration and the capacity-specific overrides, while edge nodes apply the base configuration with edge-specific settings.
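As a sketch of the merge, a high-capacity node combining 00-base.conf and 50-high-capacity-nodes.conf would end up with an effective configuration equivalent to:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- "10.96.0.10"
clusterDomain: cluster.local
maxPods: 50
systemReserved:
  memory: "4Gi"
  cpu: "1000m"
Files later in the sort order win when they set the same field; here the two files touch disjoint fields, so the result is a simple union.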
When rolling out configuration changes, you can stage them as a new drop-in file with a high numeric prefix (for example, 99-new-feature.conf) so it overrides earlier files, and roll the change back by simply deleting that file.
Since configuration is now spread across multiple files, you can inspect the final merged configuration using the kubelet's /configz endpoint:
# Start kubectl proxy
kubectl proxy
# In another terminal, fetch the merged configuration
# Change the '<node-name>' placeholder before running the curl command
curl -X GET http://127.0.0.1:8001/api/v1/nodes/<node-name>/proxy/configz | jq .
This shows the actual configuration the kubelet is using after all merging has been applied. The merged configuration also includes any configuration settings that were specified via kubelet command-line arguments.
For detailed setup instructions, configuration examples, and merging behavior, see the official Kubernetes documentation for the kubelet configuration directory.
When using the kubelet configuration drop-in directory:
Test configurations incrementally: Always test new drop-in configurations on a subset of nodes before rolling out cluster-wide to minimize risk
Version control your drop-ins: Store your drop-in configuration files in version control (or the configuration source from which these are generated) alongside your infrastructure as code to track changes and enable easy rollbacks
Use numeric prefixes for predictable ordering: Name files with numeric prefixes (e.g., 00-, 50-, 90-) to explicitly control merge order and make the configuration layering obvious to other administrators
Be mindful of temporary files: Some text editors automatically create backup files (such as .bak, .swp, or files with ~ suffix) in the same directory when editing. Ensure these temporary or backup files are not left in the configuration directory, as they may be processed by the kubelet
This feature was developed through the collaborative efforts of SIG Node. Special thanks to all contributors who helped design, implement, test, and document this feature across its journey from alpha in v1.28, through beta in v1.30, to GA in v1.35.
To provide feedback on this feature, join the Kubernetes Node Special Interest Group, participate in discussions on the public Slack channel (#sig-node), or file an issue on GitHub.
If you have feedback or questions about kubelet configuration management, or want to share your experience using this feature in production, SIG Node would love to hear from you.