Is Open Source in Trouble?


BRUSSELS — First, the bad. I would argue that current open source practices and usage are not sustainable, or at the very least, there is a lot of room for improvement. In the current climate, there is a long litany of structural problems.

These include burnout, which is becoming a real possibility as some of the most talented developers work for free or with little, usually no, compensation, even though that compensation would be well warranted. Then, at a high level, there is the problem of large tech companies making use of open source but giving little, if anything, back to the community, essentially using free open source resources not to become rich, but to become even richer.

Then there are those I have come in contact with who have long been maintainers of projects and have moved on from the companies where they were paid to work on those projects. Out of love and intellectual curiosity for the work, they continue to maintain and keep a toe in the project. Again, their time is limited, as they are likely working 60 hours a week at their day jobs and would like to have a life. In many cases, the open source project is fun to work on, but it is something else altogether to maintain it over the long term.

Then there is the diversity factor: the huge lack of diversity. Ethical reasons aside (and in my opinion the ethical reasons for advocating diversity in open source development are a major issue and goal), diversity also lends itself to significantly better health for open source projects. A case in point that I have lived through is childcare. The statistics show that women are inordinately tasked with childcare, although in my case childcare was also an issue previously. That does not leave much time to work on an open source project, regardless of how much you love it and enjoy contributing to it, when you have kids to take to doctor’s appointments, baseball games, and school, along with everything else that goes with childcare.

What I really appreciated about the keynote “Free as in Burned Out: Who Really Pays for Open Source?” that Marga Manterola, an engineering manager at Igalia who has contributed to several major open source projects, including Flatcar Container Linux, Inspektor Gadget and Cilium, over the past 25 years, gave last week at FOSDEM in Brussels is this: her talk was not just a list of what is wrong with open source. She gave real reasons for how it could be improved and how it could be fixed. She called it utopia. I would argue it is not utopia; it is this or nothing, because open source will otherwise wither. It will not necessarily die, but if it stays on its current trajectory, it is simply not viable, in my opinion.

Manterola’s core argument focused on how the status quo excludes a vast demographic of potential contributors. She pointed out that “being able to do a second job for free during your nights and weekends is a privilege” that many lack. This is particularly true for women, who she noted are “disproportionately in charge of caretaking responsibilities,” effectively making open source work a “second shift” they cannot afford to take on. By only paying senior developers who are already established maintainers, the industry fails to create space for new talent or those without the luxury of free time, she said.

Two frameworks

To reach this goal, Manterola offered two concrete frameworks for corporate involvement:

The Open Source Pledge: She encouraged companies to donate $2,000 per developer per year to projects they depend on. While she acknowledged this amount might be high for some, she urged companies to start with whatever they could afford, emphasizing that “gaining steady income is more important, even if it’s less”.

The Open Source Employment Pledge: For companies unwilling to donate cash, she proposed a time-based commitment. Under this pledge, for every 20 developers a company employs, they would dedicate 50% of one person’s time to open source development. Critically, she specified this time must be “completely free of company influence,” allowing the developer to maintain the project however they see fit.

The “utopia” Manterola mentioned is one in which open source contributors are organized into professional teams and paid a “steady salary”. In this model, senior engineers would be supported by junior developers helping with “bug reports or documentation,” allowing for a natural progression where new maintainers can eventually take over or start their own projects. Manterola argued that since “97% of software depends on open source,” it is reasonable to expect that anyone wanting to work on it full-time should be fairly compensated rather than “begging for scraps.”

“I advocate for donating a steady amount every month, rather than big lumps of money to different projects, as gaining steady income is more important, even if it’s less,” Manterola said. “I’m proposing the open source employment pledge, which is, well, if you are not willing to donate money, maybe you are willing to donate time of your employees…Every 20 developers in your company, 50% of one person’s time goes to them developing open source and that 50% is like, completely free of company influence.”

The post Is Open Source in Trouble? appeared first on The New Stack.


DNS-PERSIST-01; Handling Domain Control Validation in a short-lived certificate World


This year, we have a new method for Domain Control Validation arriving called DNS-PERSIST-01. It is quite a fundamental change from how we do DCV now, so let's take a look at the benefits and the drawbacks.

First, a quick recap

When you approach a Certificate Authority, like Let's Encrypt, to issue you a certificate, you need to complete DCV. If I go to Let's Encrypt and say "I own scotthelme.co.uk so please issue me a certificate for that domain", Let's Encrypt are required to say "prove that you own scotthelme.co.uk and we will". That is the very essence of DCV: the CA needs to Validate that I do Control the Domain in question. We're not going to delve into the details, but it will help to have a brief understanding of the existing DCV mechanisms so we can see their shortcomings, and compare those to the potential benefits of the new mechanism.

HTTP-01

In order to demonstrate that I do control the domain, Let's Encrypt will give me a specific path on my website at which I will host a challenge response.

http://scotthelme.co.uk/.well-known/acme-challenge/3wQfZp0K4lVbqz6d1Jm2oA

At that location, I will place the response which might look something like this.

3wQfZp0K4lVbqz6d1Jm2oA.P7m1k2Jf8h...b64urlThumbprint...

By challenging me to provide this specific response at this specific URL, I have demonstrated to Let's Encrypt that I have control over that web server, and they can now proceed and issue me a certificate.

The problem with this approach is that it requires the domain to be publicly resolvable, which it might not be, and the system requiring the certificate needs to be capable of hosting web content. Even I have a variety of internal systems that I use certificates on that are not publicly addressable in any way, so I use the next challenge method for them, but HTTP-01 is a great solution if it works for your requirements.
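
To make that a little more concrete, here is a minimal sketch in C# of the check a CA performs from its side: fetch the well-known URL and compare the body to the expected key authorization. The URL and token are the illustrative values from above, and the expected string uses a placeholder thumbprint rather than a real one.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Http01Check
{
    static async Task Main()
    {
        // Illustrative values only; a real CA derives these from the ACME order.
        var challengeUrl = "http://scotthelme.co.uk/.well-known/acme-challenge/3wQfZp0K4lVbqz6d1Jm2oA";
        var expected = "3wQfZp0K4lVbqz6d1Jm2oA.placeholder-account-key-thumbprint";

        using var http = new HttpClient();
        var body = (await http.GetStringAsync(challengeUrl)).Trim();

        // The CA only proceeds to issuance if the served content matches.
        Console.WriteLine(body == expected
            ? "Challenge response matches; DCV would pass."
            : "Challenge response does not match; DCV would fail.");
    }
}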

DNS-01

Using the DNS-01 method, Let's Encrypt still need to verify my control of the domain, but the process changes slightly. We're now going to use a DNS TXT record to demonstrate my control, and it will be set on a specific subdomain.

_acme-challenge.scotthelme.co.uk

The format of the challenge response token changes slightly, but the concept remains the same and I will set a DNS record like so:

Name:  _acme-challenge.scotthelme.co.uk
Type:  TXT
Value: "X8d3p0ZJzKQH4cR1N2l6A0M9mJkYwqfZkU5c9bM2EJQ"

Upon completing a DNS resolution and seeing that I have successfully set that record at their request, Let's Encrypt can now issue the certificate as I have demonstrated control over the DNS zone. This is far better for my internal environments, and is the method I use, as all they need to do is hit my DNS provider's API to set the record and they can then pull the certificate locally, without having any exposure on the public Internet. The DNS-01 mechanism is also required if you want to issue wildcard certificates, which can't be obtained with HTTP-01.
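
For a sense of where that TXT value comes from, here is a rough C# sketch of the RFC 8555 calculation an ACME client performs: SHA-256 the key authorization (the token joined to the account key thumbprint with a dot) and base64url-encode the digest. The token and thumbprint below are placeholders, not real values.

using System;
using System.Security.Cryptography;
using System.Text;

class Dns01TxtValue
{
    static void Main()
    {
        // Placeholder inputs; a real client receives the token from the CA and
        // computes the thumbprint from its own ACME account key.
        var token = "exampleToken";
        var accountKeyThumbprint = "exampleThumbprint";

        // Key authorization = token "." thumbprint, per RFC 8555.
        var keyAuthorization = $"{token}.{accountKeyThumbprint}";

        // TXT record value = base64url(SHA-256(key authorization)), unpadded.
        var digest = SHA256.HashData(Encoding.UTF8.GetBytes(keyAuthorization));
        var txtValue = Convert.ToBase64String(digest)
            .TrimEnd('=').Replace('+', '-').Replace('/', '_');

        Console.WriteLine($"_acme-challenge TXT value: {txtValue}");
    }
}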

TLS-ALPN-01

The final mechanism, which is much less common, requires quite a dynamic effort from the host. The CA can connect to the host on port 443, and advertise a special capability in the TLS handshake. The host at scotthelme.co.uk:443 must be able to negotiate that capability, and then generate and provide a certificate with the critically flagged acmeIdentifier extension containing the challenge response token, and the correct names in the SAN.

That's no small task, so I can see why this mechanism is much less common, but it does have different considerations than HTTP-01 or DNS-01 so if it works for you, it is available.

In summary

All 3 of those mechanisms are currently valid for DCV, and in essence they provide the following:

HTTP-01 → prove control of web content
DNS-01 → prove control of DNS zone
TLS-ALPN-01 → prove control of TLS endpoint

Looking to the future

I think the considerations for each of those mechanisms are clear, with both HTTP-01 and DNS-01 being favoured, and TLS-ALPN-01 trailing behind. Being able to serve web content on the public Internet, or having access to and control of a DNS zone, are both quite big requirements that demand technical consideration. Don't get me wrong, DCV should not be 'easy', especially when you think about the risks involved with DCV not being done properly or not being effective, but I also understand the difficulties where neither of those mechanisms is quite right for a particular environment, and that they come with their own considerations, especially at large scale!

Another challenge to consider is the continued drive to reduce the lifetime of certificates. You can see my blog post on how all certificates will be reduced to a maximum of 47 days by 2029, and how Let's Encrypt are already offering 6-day certificates now, which is a great thing for security, but it does need considering. A CA can verify your control of a domain and remember that for a period of time, continuing to issue new certificates against that previous demonstration of DCV, but the time periods that validation can be re-used for are also shrinking. Here's a side-by-side comparison of the maximum certificate lifetime and the DCV re-use periods.

Year   Certificate Lifetime   DCV Re-use Window
Now    398 days               398 days
2026   200 days               200 days
2027   100 days               100 days
2029   47 days                10 days

By 2029, DCV will be coming close to being a real-time endeavour. Now, as ACME requires automation, the shortening of the certificate lifetime or the DCV re-use window is not really a concern; you simply run your automated task more frequently. The more widespread use of certificates does pose a challenge, though. As we use certificates in more and more places, the overheads of the DCV mechanisms become more problematic in different environments.

DNS-PERSIST-01

This new DCV mechanism is a fundamental change in the approach to how DCV takes place, and does offer some definite advantages, whilst also introducing some concerns that are worth thinking about.

The primary objective here is to set a single, static, DNS record that will allow for continued issuance of new certificates on an ongoing basis for as long as it is present, hence the 'persist' in the name.

Name:  _acme-persist.scotthelme.co.uk
Type:  TXT
Value: "letsencrypt.org; accounturi=https://letsencrypt.org/acme/acct/123456; policy=wildcard"

By setting this new DNS record, I would be allowing Let's Encrypt to issue new certificates to my ACME account, identified in the record above by the account URI ending in 123456. Let's Encrypt will still need to conduct DCV by checking this DNS record, but any of my clients requesting a certificate will not have to answer any kind of dynamic challenge. There is no need to serve an HTTP response, no need to create a new DNS record, and no need to craft a special TLS handshake. The client can simply hit the Let's Encrypt API, use the correct ACME account, and have a new certificate issued. This does allow for a huge reduction in the complexity of having new certificates issued, and I can see many environments where this will be greatly welcomed, but we'll cover a few of my concerns a little later.
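
As a rough illustration of what a CA or a monitoring script would read out of that record, here is a small C# sketch that parses the TXT value shown above into its CA domain, accounturi and policy fields. The field names follow the draft as described in this post and could still change before the mechanism is finalised.

using System;
using System.Linq;

class AcmePersistRecord
{
    // Returns the value of "key=value" within the record, or null if absent.
    static string GetField(string[] parts, string key) =>
        parts.Where(p => p.StartsWith(key + "="))
             .Select(p => p.Substring(key.Length + 1))
             .FirstOrDefault();

    static void Main()
    {
        // The TXT value from the example record above.
        var txt = "letsencrypt.org; accounturi=https://letsencrypt.org/acme/acct/123456; policy=wildcard";

        var parts = txt.Split(';').Select(p => p.Trim()).ToArray();

        Console.WriteLine($"Authorised CA: {parts[0]}");
        Console.WriteLine($"ACME account:  {GetField(parts, "accounturi")}");
        Console.WriteLine($"Wildcards OK:  {GetField(parts, "policy") == "wildcard"}");
    }
}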

Looking at the DNS record itself, we have a couple of configuration options. The policy=wildcard value allows the CA and ACME account in question to issue wildcard certificates; if the policy directive is missing, or set to anything other than wildcard, then wildcard certificates will not be allowed. The other configuration value, which I didn't show above, is the persistUntil value.

Name:  _acme-persist.scotthelme.co.uk
Type:  TXT
Value: "letsencrypt.org; accounturi=https://letsencrypt.org/acme/acct/123456; policy=wildcard; persistUntil=1767959300"

This value indicates that this record is valid until Fri Jan 09 2026 11:48:20 GMT+0000, and should not be accepted as valid after that time. This does allow us to set a cap on how long this validation will be accepted for, and addresses one of my concerns. The specification states:

"Domain owners should set expiration dates for validation records that balance security and operational needs."

My personal approach would be to have an automated process refresh this record on a regular basis, perhaps pushing the persistUntil value out by two weeks and updating it weekly. Something about just having a permanent, static record doesn't sit well with me. There are also concerns around securing the ACME account credentials, because any access to those will then allow for issuance of certificates, without any requirement for the person who obtains them to do any 'live' form of DCV.
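
As a minimal sketch of that refresh job, assuming the record format above, the following C# recomputes persistUntil as a Unix timestamp two weeks out and rebuilds the TXT value. Actually writing the record to _acme-persist.scotthelme.co.uk is left out, since that step depends entirely on your DNS provider's API.

using System;

class PersistUntilRefresh
{
    static void Main()
    {
        // Run weekly (for example from a scheduled task); each run pushes the
        // expiry out by two weeks so the record never sits unrefreshed for long.
        var persistUntil = DateTimeOffset.UtcNow.AddDays(14).ToUnixTimeSeconds();

        // Account URI and policy are the illustrative values used earlier in the post.
        var txtValue = "letsencrypt.org; " +
                       "accounturi=https://letsencrypt.org/acme/acct/123456; " +
                       "policy=wildcard; " +
                       $"persistUntil={persistUntil}";

        // Setting this value on the _acme-persist TXT record is provider-specific.
        Console.WriteLine(txtValue);
    }
}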

In short, I can see the value that this mechanism will provide to those that need it, but I can also see it being used far more widely, purely as a convenience solution to what was a relatively simple process anyway.

Coming to a CA near you

Let's Encrypt have stated that they will have support for this in 2026, and I imagine it won't take too much longer for other CAs to start supporting this mechanism too. I'm hoping that GTS will also bring in support soon so we can have a pair of reliable CAs to lean on! For now though, just know that if the existing DCV mechanisms are problematic for you, there might be a solution just around the corner.


Access public data insights faster: Data Commons MCP is now hosted on Google Cloud

Data Commons has launched a free, hosted Model Context Protocol (MCP) service on Google Cloud Platform, eliminating the need for users to manage complex local server installations. This update simplifies connecting AI agents and the Gemini CLI to Data Commons, allowing Google to handle security, updates, and resource management while users query data natively.

Magic Words


Skills are the newest hype commodity in the world of agentic AI. Skills are text files that optionally get stapled onto the context window by the agent. You can have skills like “frontend design” or “design tokens” and if the LLM “thinks” it needs more context about that topic, it can import the contents of those files into the context to help generate a response.

Generally speaking, skills do an okay job at providing on-demand context. Assuming the AI model is always 12-to-18 months behind in its training data, a skill could potentially backfill any recent framework updates. A skill could potentially undo some training data biases. A skill could potentially apply some of your sensibilities to the output. I’ve seen some impressive results with design guidance skills… but I’ve also seen tons of mediocre results from the same skills. That’s why I deliberately use the word “potentially”. When skills can be optionally included, it’s hard to understand the when and why behind how they get applied.

In that way, skills remind me a bit of magic numbers.

In programming “magic numbers” are a pattern you typically try to avoid. They’re a code smell that you haven’t actually solved the problem, but found a workaround that only works in a particular context. They’re a flashing light that you have brittle logic somewhere in your system. “We don’t know why, but setting the value to 42 appears to have fixed the issue” is a phrase that should send shivers down the spine.

And so now we have these “magic words” in our codebases. Spells, essentially. Spells that work sometimes. Spells that we cast with no practical way to measure their effectiveness. They are prayers as much as they are instructions.

Were we to sit next to each other and cast the same spell from the same book with the same wand, one of us could have a graceful floating feather and the other could have avada kedavra’d their guts out onto the floor. That unstable magic is by design. That element of randomness, on which the models depend, still gives me apprehension.

There’s an opaqueness to it all. I understand how listing skills in an AGENTS.md gives the agent context on where to find more context. But how do you know if those words are the right words? If I cut the amount of words (read: “tokens”) in a skill in half, does it still work? If I double the amount of words, does it work better? Those questions matter when too little context is not enough context and too much context causes context rot. It also matters when you’re charged per-token and more tokens is more time on the GPU. How do you determine the “Minimum Viable Context” needed to get quality out of the machines?

That sort of quality variance is uncomfortable for me from a tooling perspective. Tooling should be highly consistent and this has a “works on my machine” vibe to it. I suppose all my discomfort goes away if I quit caring about the outputs. If I embrace the cognitive dissonance and switch to a “ZOMG the future is amazeballs” hype mode, my job becomes a lot easier. But my brain has been unsuccessful in doing that thus far. I like magic and mystery, but hope- or luck-based development has its challenges for me.

Looking ahead, I expect these types of errant conjurations will come under more scrutiny when the free money subsidies run out and consumers inherit the full cost of the models’ mistakes. Supply chain constraints around memory and GPUs are already making compute a scarce resource, but our Gas Towns plunder onward. When the cost of wrong answers goes up and more and more people spend all their monthly credits on hallucinations, that will be a lot of dissatisfied users.

Anyways, all this changes so much. Today it’s skills, before that MCP, before that PRDs, before that prompt engineering… what is it going to be next quarter? And aren’t those all flavors of the same managing-context puzzle? Churn, churn, churn, I suppose.

File under: non-determinism


Introducing Multipart Download Support for AWS SDK for .NET Transfer Manager


The new multipart download support in AWS SDK for .NET Transfer Manager improves the performance of downloading large objects from Amazon Simple Storage Service (Amazon S3). Customers are looking for better performance and parallelization of their downloads, especially when working with large files or datasets. The AWS SDK for .NET Transfer Manager (version 4 only) now delivers faster download speeds through automatic multipart coordination, eliminating the need for complex code to manage concurrent connections, handle retries, and coordinate multiple download streams.

In this post, we’ll show you how to configure and use these new multipart download capabilities, including downloading objects to files and streams, managing memory usage for large transfers, and migrating from existing download methods.

Parallel download using part numbers and byte-ranges

For download operations, the Transfer Manager now supports both part number and byte-range fetches. Part number fetches download the object in parts, using the part number assigned to each object part during upload. Byte-range fetches download the object with byte ranges and work on all objects, regardless of whether they were originally uploaded using multipart upload or not. The transfer manager splits your GetObject request into multiple smaller requests, each of which retrieves a specific portion of the object. The transfer manager executes your requests through concurrent connections to Amazon S3.

Choosing between part numbers and byte-range strategies

Choose between part number and byte-range downloads based on your object’s structure. Part number downloads (the default) work best for objects uploaded with standard multipart upload part sizes. If the object is a non-multipart object, choose byte-range downloads. Range downloads enable greater parallelization when objects have large parts (for example, splitting a 5GB part into multiple 50MB range requests for concurrent transfer) and work with any S3 object regardless of how it was uploaded.

Keep in mind that smaller range sizes result in more S3 requests. Each API call incurs a cost beyond the data transfer itself, so balance parallelism benefits against the number of requests for your use case.
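
To put rough numbers on that tradeoff, a 5GB object split into 50MB ranges needs around 100 GetObject requests, while 8MB ranges would push that to roughly 640 requests for the same object, so the extra parallelism comes at the cost of several times more API calls.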

Now that you understand the download strategies, let’s set up your development environment.

Getting started

To get started with multipart downloads in the AWS SDK for .NET Transfer Manager, follow these steps:

Add the dependency to your .NET project

Update your project to use the latest AWS SDK for .NET:

dotnet add package AWSSDK.S3 -v 4.0.17 

Or add the PackageReference to your .csproj file:

<PackageReference Include="AWSSDK.S3" Version="4.0.17" />

Initialize the Transfer Manager

You can initialize a Transfer Manager with default settings for typical use cases:

var s3Client = new AmazonS3Client(); 
var transferUtility = new TransferUtility(s3Client); 

You can customize the following options:

// Create custom Transfer Manager configuration 
var config = new TransferUtilityConfig 
{ 
    ConcurrentServiceRequests = 20,  // Maximum number of concurrent HTTP requests 
    BufferSize = 8192  // Buffer size in bytes for file I/O and HTTP responses 
}; 
 
// Create Transfer Manager with custom configuration 
var transferUtility = new TransferUtility(s3Client, config); 

Experiment with these values to find the optimal configuration for your use case. Factors like object size, available network bandwidth, and your application’s memory constraints will influence which settings work best. For more information about configuration options, please refer to the documentation on TransferUtilityConfig.

Download an object to file

To download an object from an Amazon S3 bucket to a local file, use the DownloadWithResponseAsync method. You must provide the source bucket, the S3 object key, and the destination file path.

// Download large file with multipart support (Part GET strategy) 
var downloadResponse = await transferUtility.DownloadWithResponseAsync( 
    new TransferUtilityDownloadRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        Key = "large-dataset.zip", 
        FilePath = @"C:\downloads\large-dataset.zip", 
        MultipartDownloadType = MultipartDownloadType.PART  // Default - uses S3 part numbers 
    }); 
// Download using Range GET strategy (works with any S3 object) 
var downloadResponse = await transferUtility.DownloadWithResponseAsync( 
    new TransferUtilityDownloadRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        Key = "any-object.dat", 
        FilePath = @"C:\downloads\any-object.dat", 
        MultipartDownloadType = MultipartDownloadType.RANGE,  // Uses HTTP byte ranges 
        PartSize = 16 * 1024 * 1024  // 16MB parts (default is 8MB) 
    }); 

Download an object to stream

To download an object from Amazon S3 directly to a stream, use the OpenStreamWithResponseAsync method. This is useful when you want to process data as it downloads without saving it to disk first. You must provide the source bucket and the S3 object key. The OpenStreamWithResponseAsync method performs parallel downloads by buffering parts in memory until they are read from the stream. See the configuration options below for how to control memory consumption during buffering.

// Stream large file with multipart coordination and memory control 
var streamResponse = await transferUtility.OpenStreamWithResponseAsync( 
    new TransferUtilityOpenStreamRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        Key = "large-video.mp4", 
        MaxInMemoryParts = 512,  // Maximum number of parts buffered in memory  (default is 1024)
                                  // Total memory = MaxInMemoryParts × PartSize 
        MultipartDownloadType = MultipartDownloadType.PART,  // Uses S3 part numbers 
        ChunkBufferSize = 64 * 1024  // Size of individual memory chunks (64KB) 
                                      // allocated from ArrayPool for buffering. (default is 64KB) 
    }); 
 
using var stream = streamResponse.ResponseStream; 
// Process stream data as it downloads concurrently 
var buffer = new byte[8192]; 
int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length); 

Memory management for streaming downloads: The MaxInMemoryParts parameter controls how many parts can be buffered simultaneously, and ChunkBufferSize determines the size of individual memory chunks allocated for buffering. You can experiment with different values for both parameters to find the optimal configuration for your specific use case.
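
As a rough worked example, with the default 8MB part size, MaxInMemoryParts = 512 allows up to about 4GB of buffered data (512 × 8MB), while dropping it to 64 would cap buffering at roughly 512MB.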

Download a directory

To download multiple objects from an S3 bucket prefix to a local directory, use the DownloadDirectoryWithResponseAsync method. This method automatically applies multipart download to each individual object in the directory.

// Download entire directory with multipart support for large files 
await transferUtility.DownloadDirectoryWithResponseAsync( 
    new TransferUtilityDownloadDirectoryRequest 
    { 
        BucketName = "amzn-s3-demo-bucket", 
        S3Directory = "datasets/", 
        LocalDirectory = @"C:\data\" 
    }); 

Migration path

The new WithResponse methods provide both multipart performance and access to S3 response metadata. Here’s how to migrate your existing code:

For file downloads: 

// Existing code (still works, but returns void) 
await transferUtility.DownloadAsync(downloadRequest); 
 
// Enhanced version (new capabilities + metadata access) 
var response = await transferUtility.DownloadWithResponseAsync(downloadRequest); 
Console.WriteLine($"Downloaded {response.ContentLength} bytes"); 
Console.WriteLine($"ETag: {response.ETag}"); 

For streaming downloads: 

// Before: direct Stream return 
using var stream = await transferUtility.OpenStreamAsync(streamRequest); 
 
// After: access ResponseStream from response object 
var response = await transferUtility.OpenStreamWithResponseAsync(streamRequest); 
using var stream = response.ResponseStream; 
Console.WriteLine($"Content-Type: {response.ContentType}"); 
Console.WriteLine($"Last Modified: {response.LastModified}"); 

Conclusion

The multipart download support in the AWS SDK for .NET Transfer Manager provides performance improvements for downloading large objects from Amazon S3. By using parallel byte-range or part-number fetches, you can reduce transfer times.

Key takeaways from this post:

  • Use DownloadWithResponseAsync and OpenStreamWithResponseAsync for downloads with automatic multipart coordination
  • Choose between PART and RANGE download strategies based on your object’s structure
  • Customize configuration settings based on your specific environment (memory, network bandwidth, etc.)

Next steps: Try implementing multipart downloads in your applications and measure the performance improvements for your specific use cases.

To learn more about the AWS SDK for .NET Transfer Manager, visit the AWS SDK for .NET documentation. For questions or feedback about this feature, visit the GitHub issues page.

 


Global Expertise: AI Blueprints, Resource API Fixes, and Angular v20


The Angular community is truly global! This week’s roundup features cutting-edge AI integrations and essential deep dives into the newest features of Angular v20, with contributions in multiple languages to help developers everywhere level up.

Check out these latest highlights:

GenAI Scaffold: A Blueprint for Modern Apps

Damian Sire (@damiansire) provides a comprehensive code sample and repository that serves as a blueprint for high-performance apps. It features a cutting-edge stack: Angular, Node.js, and Google Gemini API.

Explore the scaffold: https://github.com/damiansire/GenAI-Scaffold

Nova Reel: AI-Powered Movie & TV Recommendations

Wayne Gakuo (@wayne_gakuo) showcases how to combine Angular, Genkit, and Firebase to build a smart recommendation engine. A perfect example of Gemini’s multimodal power in action!

Check the code: https://github.com/waynegakuo/nova-reel

The Future of Angular and AI (Spanish)

Alejandro Cuba Ruiz (@zorphdark) sits down with Mark Techson to discuss the intersection of Angular and Artificial Intelligence. Essential viewing for our Spanish-speaking community members!

Watch the interview: https://youtu.be/GoPtZ9-RKCY

Angular’s Resource APIs: Let’s Fix Them!

Johannes Hoppe (@johanneshoppe) takes a critical look at the current state of Resource APIs, offering a thought-provoking perspective on what needs to change for better developer ergonomics.

Read the blog post: https://angular.schule/blog/2025-10-rx-resource-is-broken

Angular Signal Forms: Beginner’s Full Guide

Ready to build your first form with Signals? Fannis Prodromou (@prodromouf) walks you through a complete login form tutorial, perfect for those just getting started with this new API.

Watch the guide: https://www.youtube.com/watch?v=yeZoleR4v84

Angular v20 Deep Dive Series (French)

Modeste Assiongbon (@rblmdst) breaks down the latest updates in a three-part French language series, covering Asynchronous Redirects, Resource API breaking changes, and the New Style Guide.

Watch Part 1 (Style Guide): https://youtu.be/8yrEcnOHvlM
Watch Part 2 (Resources): https://youtu.be/_gfa4PUtiaI
Watch Part 3 (Routing): https://youtu.be/f_mppF1Dw_k

Spread the knowledge across the globe! Use #AngularSparkles to share your latest tutorials and repos! 👇


Global Expertise: AI Blueprints, Resource API Fixes, and Angular v20 🌍 was originally published in Angular Blog on Medium.
