BRUSSELS — First, the bad. I would argue that current open source practices and usage are not sustainable, or at the very least, there is a lot of room for improvement. In the current climate, there is a long litany of structural problems.
These include burnout: some of the most talented developers are working for free or with little — usually no — compensation, even though that compensation would be well warranted. Then, at a higher level, there is the problem of large tech companies making heavy use of open source while giving little, if anything, back to the community, essentially using free open source resources not to become rich, but to become even richer.
Then there are those I have come in contact with who have long been maintainers of projects and have moved on from the companies where they were paid to work on those projects. Out of love and intellectual curiosity for the work, they continue to maintain the project and keep a toe in it. Again, their time is limited, as they are likely working 60 hours a week at their day jobs and would like to have a life. In many cases, the open source project is fun to work on, but it is something else altogether to maintain it over the long term.
Then there is the diversity factor — the huge lack of diversity. Ethical reasons aside, and in my opinion the ethical case for diversity in open source development is a major issue and goal in itself, diversity also lends itself to significantly healthier open source projects. A case in point that I have lived through is childcare. The statistics show that women are inordinately tasked with childcare, although in my own case it was an issue as well. That does not leave much time to work on an open source project, regardless of how much you love it and enjoy contributing to it, when you have kids to take to doctor’s appointments, baseball games and school, along with everything else that goes with childcare.
What I really appreciated about the keynote “Free as in Burned Out: Who Really Pays for Open Source?” that Marga Manterola, an engineering manager at Igalia who has contributed to major open source projects including Flatcar Container Linux, Inspektor Gadget and Cilium over the past 25 years, gave last week at FOSDEM in Brussels is this: her talk was not just a list of what is wrong with open source — she offered concrete ideas for how it could be improved and fixed. She called it utopia. I would argue it is not utopia; it is this or nothing, because open source will otherwise wither — not necessarily die, but on its current trajectory it is simply not viable.
Manterola’s core argument focused on how the status quo excludes a vast demographic of potential contributors. She pointed out that “being able to do a second job for free during your nights and weekends is a privilege” that many lack. This is particularly true for women, who she noted are “disproportionately in charge of caretaking responsibilities,” effectively making open source work a “second shift” they cannot afford to take on. By only paying senior developers who are already established maintainers, the industry fails to create space for new talent or those without the luxury of free time, she said.
To address this, Manterola offered two concrete frameworks for corporate involvement:
The Open Source Pledge: She encouraged companies to donate $2,000 per developer per year to projects they depend on. While she acknowledged this amount might be high for some, she urged companies to start with whatever they could afford, emphasizing that “gaining steady income is more important, even if it’s less.”
The Open Source Employment Pledge: For companies unwilling to donate cash, she proposed a time-based commitment. Under this pledge, for every 20 developers a company employs, it would dedicate 50% of one person’s time to open source development. Critically, she specified this time must be “completely free of company influence,” allowing the developer to maintain the project however they see fit.
The “utopia” Manterola mentioned is one in which open source contributors are organized into professional teams and paid a “steady salary.” In this model, senior engineers would be supported by junior developers helping with “bug reports or documentation,” allowing for a natural progression where new maintainers can eventually take over or start their own projects. Manterola argued that since “97% of software depends on open source,” it is reasonable to expect that anyone wanting to work on it full-time should be fairly compensated rather than “begging for scraps.”
“I advocate for donating a steady amount every month, rather than big lumps of money to different projects, as gaining steady income is more important, even if it’s less,” Manterola said. “I’m proposing the open source employment pledge, which is, well, if you are not willing to donate money, maybe you are willing to donate time of your employees…Every 20 developers in your company, 50% of one person’s time goes to them developing open source and that 50% is like, completely free of company influence.”

This year, a new method for Domain Control Validation (DCV) called DNS-PERSIST-01 is arriving. It is quite a fundamental change from how we do DCV now, so let's take a look at the benefits and the drawbacks.
When you approach a Certificate Authority, like Let's Encrypt, to issue you a certificate, you need to complete DCV. If I go to Let's Encrypt and say "I own scotthelme.co.uk so please issue me a certificate for that domain", Let's Encrypt are required to say "prove that you own scotthelme.co.uk and we will". That is the very essence of DCV: the CA needs to Validate that I do Control the Domain in question. We're not going to delve into the details, but it will help to have a brief understanding of the existing DCV mechanisms so we can see their shortcomings and compare them to the potential benefits of the new mechanism.
In order to demonstrate that I do control the domain, Let's Encrypt will give me a specific path on my website where I must host a challenge response.
http://scotthelme.co.uk/.well-known/acme-challenge/3wQfZp0K4lVbqz6d1Jm2oA

At that location, I will place the response, which might look something like this:

3wQfZp0K4lVbqz6d1Jm2oA.P7m1k2Jf8h...b64urlThumbprint...

By challenging me to provide this specific response at this specific URL, I have demonstrated to Let's Encrypt that I have control over that web server, and they can now proceed and issue me a certificate.
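As a rough illustration of the verification step, here is a minimal sketch of the kind of check the CA performs on its side, assuming the Python requests library and using the example token and key authorisation above; it is not Let's Encrypt's actual validator.

```python
import requests

# Example values from above; in practice the CA generates the token and the
# client derives the key authorisation from its ACME account key thumbprint.
token = "3wQfZp0K4lVbqz6d1Jm2oA"
expected = "3wQfZp0K4lVbqz6d1Jm2oA.P7m1k2Jf8h..."  # token + "." + key thumbprint (truncated here)

url = f"http://scotthelme.co.uk/.well-known/acme-challenge/{token}"
resp = requests.get(url, timeout=10)

# The challenge only passes if the body matches the expected key authorisation exactly.
if resp.status_code == 200 and resp.text.strip() == expected:
    print("HTTP-01 challenge satisfied, issuance can proceed")
else:
    print("HTTP-01 challenge failed")
```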
The problem with this approach is that it requires the domain to be publicly resolvable, which it might not be, and the system requiring the certificate needs to be capable of hosting web content. Even I have a variety of internal systems that I use certificates on that are not publicly addressable in any way, so I use the next challenge method for them, but HTTP-01 is a great solution if it works for your requirements.
Using the DNS-01 method, Let's Encrypt still need to verify my control of the domain, but the process changes slightly. We're now going to use a DNS TXT record to demonstrate my control, and it will be set on a specific subdomain.
_acme-challenge.scotthelme.co.uk

The format of the challenge response token changes slightly, but the concept remains the same, and I will set a DNS record like so:
Name: _acme-challenge.scotthelme.co.uk
Type: TXT
Value: "X8d3p0ZJzKQH4cR1N2l6A0M9mJkYwqfZkU5c9bM2EJQ"Upon completing a DNS resolution and seeing that I have successfully set that record at their request, Let's Encrypt can now issue the certificate as I have demonstrated control over the DNS zone. This is far better for my internal environments, and is the method I use, as all they need to do is hit my DNS providers API to set the record and they can they pull the certificate locally, without having any exposure on the public Internet. The DNS-01 mechanism is also required if you want to issue wildcard certificates, which can't be obtained with HTTP-01.
The final mechanism, TLS-ALPN-01, is much less common and requires quite a dynamic effort from the host. The CA connects to the host on port 443 and advertises a special capability (the "acme-tls/1" ALPN protocol) in the TLS handshake. The host at scotthelme.co.uk:443 must be able to negotiate that capability, and then generate and present a certificate with the critically flagged acmeIdentifier extension containing the challenge response token, and the correct names in the SAN.
That's no small task, so I can see why this mechanism is much less common, but it does have different considerations than HTTP-01 or DNS-01, so if it works for you, it is available.
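For completeness, here is a minimal sketch of just the probe side of TLS-ALPN-01, using Python's standard ssl module: it checks whether a server will negotiate the acme-tls/1 protocol, and does not go on to parse the acmeIdentifier extension as a real validator would.

```python
import socket
import ssl

# Offer only the "acme-tls/1" ALPN protocol and see if the server negotiates it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # the validation certificate is self-signed
ctx.set_alpn_protocols(["acme-tls/1"])

with socket.create_connection(("scotthelme.co.uk", 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname="scotthelme.co.uk") as tls:
        print("Negotiated ALPN:", tls.selected_alpn_protocol())
        # A real validator would also check the presented certificate for the
        # critical acmeIdentifier extension and the expected name in the SAN.
```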
All 3 of those mechanisms are currently valid for DCV, and in essence they provide the following:
HTTP-01 → prove control of web content
DNS-01 → prove control of DNS zone
TLS-ALPN-01 → prove control of TLS endpoint
I think the considerations for each of those mechanisms are clear, with both HTTP-01 and DNS-01 being favoured and TLS-ALPN-01 trailing behind. Being able to serve web content on the public Internet, or having access to and control of a DNS zone, are both quite big requirements that need technical consideration. Don't get me wrong, DCV should not be 'easy', especially when you think about the risks involved if DCV is not done properly or is not effective, but I also understand the difficulty when neither of those mechanisms is quite right for a particular environment, and that they come with their own considerations, especially at large scale!
Another challenge to consider is the continued drive to reduce the lifetime of certificates. You can see my blog post on how all certificates will be reduced to a maximum of 47 days by 2029, and how Let's Encrypt are already offering 6-day certificates now, which is a great thing for security, but it does need considering. A CA can verify your control of a domain and remember that for a period of time, continuing to issue new certificates against that previous demonstration of DCV, but the period for which it can be re-used is also shrinking. Here's a side-by-side comparison of the maximum certificate lifetime and the DCV re-use periods.
| Year | Certificate Lifetime | DCV Re-use Window |
|---|---|---|
| Now | 398 days | 398 days |
| 2026 | 200 days | 200 days |
| 2027 | 100 days | 100 days |
| 2029 | 47 days | 10 days |
By 2029, DCV will be coming close to being a real-time endeavour. Now, as ACME requires automation, the shortening of the certificate lifetime or the DCV re-use window is not really a concern: you simply run your automated task more frequently. But the ever more widespread use of certificates does pose a challenge. As we use certificates in more and more places, the overheads of the DCV mechanisms become more problematic in different environments.
This new DCV mechanism is a fundamental change in the approach to how DCV takes place, and does offer some definite advantages, whilst also introducing some concerns that are worth thinking about.
The primary objective here is to set a single, static DNS record that will allow for continued issuance of new certificates on an ongoing basis for as long as it is present, hence the 'persist' in the name.
Name: _acme-persist.scotthelme.co.uk
Type: TXT
Value: "letsencrypt.org; accounturi=https://letsencrypt.org/acme/acct/123456; policy=wildcard"By setting this new DNS record, I would be allowing Let's Encrypt to issue new certificates using my ACME account specified in the above URL as account ID 123456. Let's Encrypt will still need to conduct DCV by checking this DNS record, but, any of my clients requesting a certificate will not have to answer any kind of dynamic challenge. There is no need to serve a HTTP response, no need to create a new DNS record, and no need to craft a special TLS handshake. The client can simply hit the Let's Encrypt API, use the correct ACME account, and have a new certificate issued. This does allow for a huge reduction in the complexity of having new certificates issued, and I can see many environments where this will be greatly welcomed, but we'll cover a few of my concerns a little later.
Looking at the DNS record itself, we have a couple of configuration options. The policy=wildcard directive allows the CA and ACME account in question to issue wildcard certificates; if the policy directive is missing, or set to anything other than wildcard, then wildcard certificates will not be allowed. The other configuration value, which I didn't show above, is the persistUntil value.
Name: _acme-persist.scotthelme.co.uk
Type: TXT
Value: "letsencrypt.org; accounturi=https://letsencrypt.org/acme/acct/123456; policy=wildcard; persistUntil=1767959300"This value indicates that this record is valid until Fri Jan 09 2026 11:48:20 GMT+0000, and should not be accepted as valid after that time. This does allow us to set a cap on how long this validation will be accepted for, and addresses one of my concerns. The specification states:
"Domain owners should set expiration dates for validation records that balance security and operational needs."
My personal approach would be something like an automated process that refreshes this record regularly, perhaps pushing the persistUntil value out by two weeks and running weekly. Something about just having a permanent, static record doesn't sit well with me. There are also concerns around securing the ACME account credentials, because anyone with access to those could then have certificates issued without any requirement to complete a 'live' form of DCV.
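As a sketch of that refresh job, assuming a hypothetical update_txt_record() helper wrapping whatever DNS provider API you use, a weekly task might look something like this.

```python
from datetime import datetime, timedelta, timezone

ACCOUNT_URI = "https://letsencrypt.org/acme/acct/123456"  # example account from above

def build_record(valid_for_days: int = 14) -> str:
    """Build the _acme-persist TXT value with persistUntil pushed out by two weeks."""
    expiry = datetime.now(timezone.utc) + timedelta(days=valid_for_days)
    return (
        f"letsencrypt.org; accounturi={ACCOUNT_URI}; "
        f"policy=wildcard; persistUntil={int(expiry.timestamp())}"
    )

# Run weekly (cron, scheduled task, etc.) so the record always has roughly one
# to two weeks of validity left, rather than being valid indefinitely.
new_value = build_record()
print(new_value)
# update_txt_record("_acme-persist.scotthelme.co.uk", new_value)  # hypothetical provider call
```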
In short, I can see the value that this mechanism will provide to those that need it, but I can also see it being used far more widely, purely as a convenience solution to what was a relatively simple process anyway.
Let's Encrypt have stated that they will have support for this in 2026, and I imagine it won't take too much longer for other CAs to start supporting this mechanism too. I'm hoping that GTS will also bring in support soon so we can have a pair of reliable CAs to lean on! For now though, just know that if the existing DCV mechanisms are problematic for you, there might be a solution just around the corner.
Skills are the newest hype commodity in the world of agentic AI. Skills are text files that optionally get stapled onto the context window by the agent. You can have skills like “frontend design” or “design tokens” and if the LLM “thinks” it needs more context about that topic, it can import the contents of those files into the context to help generate a response.
Generally speaking, skills do an okay job at providing on-demand context. Assuming the AI model is always 12-to-18 months behind in its training data, a skill could potentially backfill any recent framework updates. A skill could potentially undo some training data biases. A skill could potentially apply some of your sensibilities to the output. I’ve seen some impressive results with design guidance skills… but I’ve also seen tons of mediocre results from the same skills. That’s why I deliberately use the word “potentially”. When skills can be optionally included, it’s hard to understand the when and why behind how they get applied.
In that way, skills remind me a bit of magic numbers.
In programming “magic numbers” are a pattern you typically try to avoid. They’re a code smell that you haven’t actually solved the problem, but found a workaround that only works in a particular context. They’re a flashing light that you have brittle logic somewhere in your system. “We don’t know why, but setting the value to 42 appears to have fixed the issue” is a phrase that should send shivers down the spine.
And so now we have these “magic words” in our codebases. Spells, essentially. Spells that work sometimes. Spells that we cast with no practical way to measure their effectiveness. They are prayers as much as they are instructions.
Were we to sit next to each other and cast the same spell from the same book with the same wand, one of us could have a graceful floating feather and the other could have avada kedavra’d their guts out onto the floor. That unstable magic is by design. That element of randomness, on which the models depend, still gives me apprehension.
There’s an opaqueness to it all. I understand how listing skills in an AGENTS.md gives the agent context on where to find more context. But how do you know if those words are the right words? If I cut the number of words (read: “tokens”) in a skill in half, does it still work? If I double the number of words, does it work better? Those questions matter when too little context is not enough and too much context causes context rot. They also matter when you’re charged per token and more tokens means more time on the GPU. How do you determine the “Minimum Viable Context” needed to get quality out of the machines?
That sort of quality variance is uncomfortable for me from a tooling perspective. Tooling should be highly consistent and this has a “works on my machine” vibe to it. I suppose all my discomfort goes away if I quit caring about the outputs. If I embrace the cognitive dissonance and switch to a “ZOMG the future is amazeballs” hype mode, my job becomes a lot easier. But my brain has been unsuccessful in doing that thus far. I like magic and mystery, but hope- or luck-based development has its challenges for me.
Looking ahead, I expect these types of errant conjurations will come under more scrutiny when the free money subsidies run out and consumers inherit the full cost of the models’ mistakes. Supply chain constraints around memory and GPUs are already making compute a scarce resource, but our Gas Towns plunder onward. When the cost of wrong answers goes up and more and more people spend all their monthly credits on hallucinations, there will be a lot of dissatisfied users.
Anyways, all of this changes so much. Today it’s skills, before that MCP, before that PRDs, before that prompt engineering… what is it going to be next quarter? And aren’t those all flavors of the same managing-context puzzle? Churn, churn, churn, I suppose.
File under: non-determinism