5CA is a customer service support company that works with Discord. Recently, the chat platform said the vendor had been breached as part of a “security incident” where 70,000 government ID photos may have leaked. Now, 5CA says in a post on its website that it was “not hacked.”
According to Discord, “this incident impacted a limited number of users who had communicated with our Customer Support or Trust & Safety teams,” and “of the accounts impacted globally, we have identified approximately 70,000 users that may have had government-ID photos exposed, which our vendor used to review age-related appeals.” The company said that (emphasis Discord’s) “this was not a breach of Discord, but rather a breach of a third party service provider, 5CA, that we used to support our customer service efforts.”
However, on its website, 5CA shared its own statement, which I am including in full below (with emphasis 5CA’s):
We are aware of media reports naming 5CA as the cause of a data breach involving one of our clients. Contrary to these reports, we can confirm that none of 5CA’s systems were involved, and 5CA has not handled any government-issued IDs for this client. All our platforms and systems remain secure, and client data continues to be protected under strict data protection and security controls.
We are conducting an ongoing forensic investigation into the matter and collaborating closely with our client, as well as external advisors, including cybersecurity experts and ethical hackers. Based on interim findings, we can confirm that the incident occurred outside of our systems and that 5CA was not hacked. There is no evidence of any impact on other 5CA clients, systems, or data. Access controls, encryption, and monitoring systems are fully operational and, as a precautionary measure, are under heightened review.
Our preliminary information suggests the incident may have resulted from human error, the extent of which is still under investigation. We remain in close contact with all relevant parties and will share verified findings once confirmed.
We’ve asked 5CA to confirm if it handled government ID photos and if it could share more information about the “human error” that may have been involved. We’ve also asked Discord if it can confirm which company was in possession of the photos of government IDs that may have been accessed.
Windows 10 is so popular that Windows 11 only overtook it in usage a few months ago. That's why I'm surprised that Microsoft is still, kind of, going ahead with its end-of-support cutoff today.
At one point last year, I wasn't sure if Microsoft was actually going to end support for Windows 10 on time. The software giant randomly reopened Windows 10 beta testing to add new features and improvements to a 10-year-old operating system, giving millions of users hope that the company would change its mind or at least lower the system requirements for Windows 11. Neither of those things is happening, though.
Picture this: you’re a developer in 2025, and your company just told you they need to modernize a mainframe system that processes millions of ATM transactions daily. We’re talking about COBOL, a programming language that’s been around for 65 years. That’s older than the internet.
Now, your first instinct might be to laugh or maybe cry a little. But here’s the thing—COBOL isn’t going anywhere. In fact, it’s powering some of the largest and most critical systems on the planet right now.
The problem? Finding developers who understand COBOL is like finding unicorns. The original developers are retiring, and yet 200 billion lines of COBOL code are still running our banks, insurance companies, and government systems.
But here’s the plot twist: we now have the opportunity to support the unicorns. We have GitHub Copilot and autonomous AI agents.
Meet the developer who’s modernizing COBOL (without learning COBOL)
I recently spoke with Julia Kordick, a Microsoft Global Black Belt who’s modernizing COBOL systems using AI. What’s remarkable? She never learned COBOL.
Julia brought her AI expertise and worked directly with the people who had decades of domain knowledge. That partnership is the key insight here. She didn’t need to become a COBOL expert. Instead, she focused on what she does best: designing intelligent solutions. The COBOL experts provided the legacy system knowledge.
When this whole idea of Gen AI appeared, we were thinking about how we can actually use AI to solve this problem that has not been really solved yet.
Julia Kordick, Microsoft Global Black Belt
The three-step framework for AI-powered legacy modernization
Julia and her team at Microsoft have cracked the code (pun intended) with a systematic approach that works for any legacy modernization project, not just COBOL. Here’s their GitHub Copilot-powered, battle-tested framework.
Step 1: Code preparation (reverse engineering)
The biggest problem with legacy systems? Organizations have no idea what their code actually does anymore. They use it, they depend on it, but understanding it? That’s another story.
This is where GitHub Copilot becomes your archaeological tool. Instead of hiring consultants to spend months analyzing code, you can use AI to:
Extract business logic from legacy files.
Document everything in markdown for human review.
Automatically identify call chains and dependencies.
Clean up irrelevant comments and historical logs.
Add additional information as comments where needed.
💡Pro tip: Always have human experts review AI-generated analysis. AI is incredible at pattern recognition, but domain knowledge still matters for business context.
Here’s what GitHub Copilot generates for you:
# Business Logic Analysis Generated by GitHub Copilot
## File Inventory
- listings.cobol: List management functionality (~100 lines)
- mainframe-example.cobol: Full mainframe program (~10K lines, high complexity)
## Business Purpose
Customer account validation with balance checking
- Validates account numbers against master file
- Performs balance calculations with overdraft protection
- Generates transaction logs for audit compliance
## Dependencies Discovered
- DB2 database connections via SQLCA
- External validation service calls
- Legacy print queue system
Step 2: Enrichment (making code AI-digestible)
You usually need to add context to help AI understand your code better. Here’s what that looks like:
Translation: If your code has Danish, German, or other non-English comments, translate them. Models work better with English context.
Structural analysis: COBOL has deterministic patterns. Even if you’ve never written COBOL, you can leverage these patterns because they’re predictable. Here’s how:
COBOL programs always follow the same four-division structure:
IDENTIFICATION DIVISION (metadata about the program)
ENVIRONMENT DIVISION (file and system configurations)
DATA DIVISION (variable declarations and data structures)
PROCEDURE DIVISION (the actual business logic)
Ask GitHub Copilot to map these divisions for you. Use prompts like:
"Identify all the divisions in this COBOL file and summarize what each one does"
"List all data structures defined in the DATA DIVISION and their purpose"
"Extract the main business logic flow from the PROCEDURE DIVISION"
The AI can parse these structured sections and explain them in plain English. You don’t need to understand COBOL syntax. You just need to know that COBOL’s rigid structure makes it easier for AI to analyze than more flexible languages.
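To make that concrete, here’s a minimal Python sketch of the idea; it isn’t part of Julia’s framework, and the file name and regex are illustrative assumptions. It simply splits a program into its four divisions so each chunk can be analyzed or prompted on separately.
import re
from pathlib import Path

# Illustrative sketch: split a COBOL source file into its four divisions so
# each chunk can be handed to Copilot or another model on its own.
# "mainframe-example.cobol" is just an example file name.
DIVISION_PATTERN = re.compile(
    r"\b(IDENTIFICATION|ENVIRONMENT|DATA|PROCEDURE)\s+DIVISION\b",
    re.IGNORECASE,
)

def split_divisions(source: str) -> dict[str, str]:
    """Map each division name to the source text it covers."""
    matches = list(DIVISION_PATTERN.finditer(source))
    divisions = {}
    for i, match in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(source)
        divisions[match.group(1).upper()] = source[match.start():end]
    return divisions

if __name__ == "__main__":
    source = Path("mainframe-example.cobol").read_text()
    for name, text in split_divisions(source).items():
        print(f"{name} DIVISION: {len(text.splitlines())} lines")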
Documentation as source of truth: Save everything AI generates as markdown files that become the primary reference. Julia explained it this way: “Everything that you let Copilot generate as a preparation, write it down as a markdown file so that it can actually reference these markdown files as source of truth.”
💡Pro tip: COBOL’s verbosity is actually an advantage here. Statements like ADD TOTAL-SALES TO ANNUAL-REVENUE are almost self-documenting. Ask Copilot to extract these business rules into natural language descriptions.
Step 3: Automation aids (scaling the process)
Once you’ve analyzed and enriched individual files, you need to understand how they all fit together. This is where you move from using Copilot interactively to building automated workflows with AI agents.
Julia’s team built a framework using Microsoft Semantic Kernel, which orchestrates multiple specialized agents. Each agent has a specific job, and they work together to handle the complexity that would overwhelm a single AI call.
Here’s what this orchestration looks like in practice:
Call chain mapping: Generate Mermaid diagrams showing how files interact. One agent reads your COBOL files, another traces the CALL statements between programs, and a third generates a visual diagram. You end up with a map of your entire system without manually tracing dependencies.
Test-driven modernization: Extract business logic (agent 1), generate test cases that validate that logic (agent 2), then generate modern code that passes those tests (agent 3). The tests become your safety net during migration.
Dependency optimization: Identify utility classes and libraries that you can replace with modern equivalents. An agent analyzes what third-party COBOL libraries you’re using, checks if modern alternatives exist, and flags opportunities to simplify your migration.
Think of it like this: Copilot in your IDE is a conversation. This framework is a production line. Each agent does one thing well, and the orchestration layer manages the workflow between them.
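As a rough illustration of that production-line idea, here’s a plain-Python sketch of the pattern. This is not the Semantic Kernel framework itself; call_model, the prompts, and the function names are placeholders you would swap for your own agent backend.
# Rough sketch of the production-line pattern: three single-purpose "agents"
# chained by a thin orchestration layer. This is plain Python, not Semantic
# Kernel; call_model() is a stub standing in for a real chat-completion call.

def call_model(system_prompt: str, user_content: str) -> str:
    # Stubbed so the sketch runs without a model attached; replace with your
    # provider's SDK call.
    return f"<model output for: {system_prompt}>"

def extract_business_logic(cobol_source: str) -> str:
    # Agent 1: describe the legacy program's business logic in plain English.
    return call_model("Describe the business logic of this COBOL program.", cobol_source)

def generate_tests(business_logic: str) -> str:
    # Agent 2: turn that description into test cases that pin the behavior down.
    return call_model("Write unit tests that validate this business logic.", business_logic)

def generate_modern_code(business_logic: str, tests: str) -> str:
    # Agent 3: produce modern code that has to pass the tests from agent 2.
    return call_model(
        "Implement this business logic so the given tests pass.",
        f"{business_logic}\n\nTests:\n{tests}",
    )

def modernize(cobol_source: str) -> dict[str, str]:
    # Orchestration layer: each artifact feeds the next and is kept for human review.
    logic = extract_business_logic(cobol_source)
    tests = generate_tests(logic)
    code = generate_modern_code(logic, tests)
    return {"business_logic": logic, "tests": tests, "modern_code": code}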
💡Pro tip: Use Mermaid diagrams to visualize complex dependencies before making any changes. It helps you catch edge cases early. You can generate these diagrams by asking Copilot to trace all CALL statements in your codebase and output them in Mermaid syntax.
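For example, here’s a small Python sketch that scans COBOL sources for CALL statements and prints a Mermaid chart. The file layout, regex, and the program names in the sample output are illustrative assumptions, not output from Julia’s framework.
import re
from pathlib import Path

# Illustrative sketch: collect CALL statements from COBOL files in a folder
# and emit a Mermaid flowchart of the call chain.
CALL_PATTERN = re.compile(r"CALL\s+['\"]([A-Z0-9-]+)['\"]", re.IGNORECASE)

def mermaid_call_chain(directory: str) -> str:
    lines = ["graph TD"]
    for path in sorted(Path(directory).glob("*.cobol")):
        caller = path.stem.upper().replace("-", "_")
        for callee in CALL_PATTERN.findall(path.read_text()):
            lines.append(f"    {caller} --> {callee.replace('-', '_')}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Prints something like (program names here are made up):
    #   graph TD
    #       MAINFRAME_EXAMPLE --> ACCTVAL
    #       MAINFRAME_EXAMPLE --> PRINTQ
    print(mermaid_call_chain("."))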
The reality check: It’s not a silver bullet
Julia’s brutally honest about limitations:
Everyone who’s currently promising you, ‘hey, I can solve all your mainframe problems with just one click’ is lying to you.
The reality is:
Humans must stay in the loop for validation.
Each COBOL codebase is unique and complex.
We’re early in the agentic AI journey.
Full automation is probably at least five years away.
But that doesn’t mean we can’t make massive progress today.
See it in action: the Azure samples framework
Julia and her team have open-sourced their entire framework. It’s built with Microsoft Semantic Kernel for agentic orchestration, and getting started takes three steps:
Set up your environment: Configure Azure OpenAI endpoint (or use local models for sensitive data)
Run the doctor script: ./doctor.sh doctor validates your setup and dependencies
Start modernization: ./doctor.sh run begins the automated process
# Quick setup for the impatient developer
git clone https://github.com/Azure-Samples/Legacy-Modernization-Agents
cd Legacy-Modernization-Agents
./doctor.sh setup
./doctor.sh run
The business case that changes everything
This isn’t just about technical debt. It’s about business survival. Organizations are facing a critical shortage of COBOL expertise right when they need it most.
The traditional approach has been to hire expensive consultants, spend 5+ years on manual conversion, and end up with auto-generated code that’s unmaintainable. I’ve seen this play out at multiple organizations. The consultants come in, run automated conversion tools, hand over thousands of lines of generated code, and leave. Then the internal team is stuck maintaining code they don’t understand in a language they’re still learning.
The AI-powered approach changes this. You use AI to understand business logic, generate human-readable modern code, and maintain control of your intellectual property. Your team stays involved throughout the process. They learn the business logic as they go. The code that comes out the other end is something your developers can actually work with.
Julia explained the shift:
What a lot of customers do not want is to actually give all their intellectual property, like, a hundred percent to a partner anymore, right? They want to keep it in check.
Start here: your path to becoming the modernization hero
Whether you’re dealing with COBOL, ancient Java, or any legacy system, here’s how you can start today:
Start small
Identify one problematic legacy system (start with fewer than 5,000 lines)
Use GitHub Copilot to analyze a single file
Document what you discover in markdown
Share findings with your team
Build your AI toolkit
Experiment with the Azure Samples framework
Learn prompt engineering for code analysis (try: “Analyze this COBOL program and explain its business purpose in simple terms”)
Practice iterative modernization techniques
Think beyond code
Consider nonfunctional requirements for cloud-native design
Plan for distributed systems architecture
Remember: most COBOL programs are doing simple CRUD operations. They don’t need the complexity of a mainframe. They need the simplicity of modern architecture.
Here’s a challenge: Find a legacy system in your organization. Six-month-old code counts as legacy in our industry. Try using GitHub Copilot to:
Generate business logic documentation
Identify potential modernization opportunities
Create a migration plan with human validation checkpoints
Share your results on LinkedIn and tag me. I’d love to see what you discover.
The best time to start is now
The most powerful insight from my conversation with Julia is this: AI doesn’t replace developer expertise. It amplifies it.
COBOL experts bring irreplaceable domain knowledge. Modern developers bring fresh perspectives on architecture and best practices. AI brings pattern recognition and translation capabilities at scale.
When these three forces work together, legacy modernization transforms from an impossible challenge into an achievable project.
The best time to modernize legacy code was 10 years ago. The second-best time is now.
Special thanks to Julia Kordick, Microsoft Global Black Belt, who shared her insights and experiences that made this blog post possible.
Ready to dive deeper? Check out the full blog post about this project at aka.ms/cobol-blog and connect with Julia on LinkedIn for the latest updates.
The age of legacy code doesn’t have to be a barrier anymore. With the right AI tools and framework, even 65-year-old COBOL can become approachable, maintainable, and modern.
What legacy system will you modernize next? Start building with GitHub Copilot now >
Version 2.0 of the AWS Deploy Tool for .NET is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS.
The tool comes with new minimum runtime requirements. We have upgraded it to require .NET 8 because its predecessor, .NET 6, is now out of official support from Microsoft. The tool also requires Node.js 18.x or later, because that is the new minimum version supported by the AWS Cloud Development Kit (CDK), which the tool depends on.
Outside of these prerequisites, there are no other breaking changes to the tool’s commands or your existing deployment configurations. We expect a smooth upgrade for most users. Let’s get into the details.
Breaking Changes
This section details the mandatory changes required to use version 2.0.
.NET 8 Runtime Requirement
The AWS Deploy Tool for .NET is now built on .NET 8, replacing the previous .NET 6 runtime. As noted in the introduction, we made this change because .NET 6 is now out of official support from Microsoft.
To use this new version, you must have .NET 8 installed on your development machine. This mandatory upgrade ensures that the deploy tool itself remains on a secure, stable, and supported foundation for the future.
Node.js 18 Prerequisite
We also updated the minimum required Node.js version for the deploy tool to 18.x (from 14.x). This is necessary because Node.js 18 is the new minimum version for the CDK, which is one of the underlying dependencies for the deploy tool. Please ensure that you have Node.js 18 or later installed on your development machine.
New Features and Key Updates
Container engine flexibility with support for Podman
In addition to Docker, the deploy tool now includes support for Podman as a container engine. The deploy tool now automatically detects both Docker and Podman on your machine. To ensure a consistent experience for existing users, the tool defaults to Docker if it is running. If Docker is not running, the tool then checks for an available Podman installation and uses that as the container engine. This gives you more flexibility in your container workflow while maintaining predictable behavior.
.NET 10 deployment support
To ensure adoption of the latest .NET versions as they become available, this release adds support for deploying .NET 10 applications.
For deployment targets such as AWS Elastic Beanstalk that might not have a native .NET 10 managed runtime at the time of its release, the deploy tool automatically publishes your project as a self-contained deployment bundle. This bundle includes the .NET 10 runtime and all necessary dependencies alongside your application code. This approach allows your .NET 10 application to run on the target environment without requiring a pre-installed runtime, providing a smooth path forward as you upgrade your projects.
Other Notable Updates
This release also includes other important foundational and dependency updates:
Optimized Dockerfile Generation: When deploying to a container-based service such as Amazon Elastic Container Service (Amazon ECS), the deploy tool generates a Dockerfile if one doesn’t already exist. Previously, to run Single Page Applications (SPAs), the generated Dockerfile included steps to install Node.js in the container’s build stage. This is no longer the default behavior. By removing the Node.js installation from the build image, you will see improved container build times and a reduced number of dependencies to manage during the build process. If your application requires Node.js for its build (for example, an Angular or React frontend), you must now add the required installation steps to the generated Dockerfile.
Upgraded CLI Foundation: The command-line handling library has been switched to Spectre.CLI. This provides the foundation for future improvements like interactive guided deployments and enhanced output formatting.
AWS CDK: We’ve updated the AWS Cloud Development Kit (CDK) library to version 2.194.0 and the CDK CLI to 2.1013.0.
AWS SDK for .NET V4: The tool now leverages version 4 of the AWS SDK for .NET, bringing in the latest features in performance-optimized packages.
Microsoft Templating Engine: We also updated the engine that powers our project recipes from .NET 5 to .NET 8, improving the reliability of the templating experience.
How to Get the New Version
Ready to get started? The new version is available for both .NET CLI and Visual Studio.
For the .NET CLI:
To update to the latest version, simply run the following command:
dotnet tool update -g AWS.Deploy.Tools
If you’re a new user, use this command to install the tool:
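dotnet tool install -g AWS.Deploy.Tools
For Visual Studio: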
In the Updates tab on the left pane, find the AWS Toolkit for Visual Studio and choose Update.
You will need to close Visual Studio for the update to be installed.
If you don’t already have the AWS Toolkit installed, see the installation instructions.
What’s Next?
We will continue to expand the feature scope to make sure that deploying .NET applications to AWS is as easy as possible. Please install or upgrade to the latest version of this deployment tool (CLI or toolkit), try a few deployments, and let us know what you think by opening a GitHub issue.
To learn more, check out our Developer guide. The .NET CLI tooling is open source, and our GitHub repo is a great place to provide feedback. Bug reports and feature requests are welcome!
At Naftiko, we are reshaping integrations as business capabilities by aligning the technical details with business strategy using domain-driven design. Domain-driven design is a fundamental concept from the world of software development and is heavily used in shaping how we produce APIs, but when you mention it outside of an API producer context, shifting the focus to API consumers, people seem to get stuck and confused. We have had conversations with folks who possess a strong grasp of domain-driven design, one could say domain experts, but who fail to see how and why the methodology applies to the other side of the integration conversation: the consumers of 3rd-party APIs.
I wanted to better understand why folks find it difficult to think about domain-driven design when it comes to API consumption, and to explore the common definitions and practices in use, to see if there is something I am missing and what I can do to make our intent clearer. I recently read Learning Domain-Driven Design by Vlad Khononov from O’Reilly Media, and have been browsing a lot of the common material available, while also poking at Gemini, ChatGPT, and Claude for answers. I can’t find any reason why it wouldn’t make sense to apply domain-driven design to the realm of software integration and consumption, and I find it strange that people who produce certain types of infrastructure would gatekeep a concept that is focused on alignment and establishing a common language for the software we produce.
Personally, I find the concepts of models, ubiquitous language, bounded context, aggregates, events, repositories, and services in the context of 3rd-party Cloud, SaaS, and APIs very compelling. As an API producer you “potentially” have a lot more control over your domain(s), but as an API consumer, you do not have nearly as much control. You have to cede a significant amount of control over the “design” of the APIs you are using for any given domain, but you do have control over how you use many different services and their APIs in concert, which isn’t something every API service provider has thought critically about. Some have, but most have not. This is where the concepts of boundaries and a ubiquitous language really come into play, empowering you and giving you more control over the services you use and do not use, and over which individual API resources you apply to stitch together the variety of business capabilities you will need from your 3rd-party suppliers.
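To ground this, here is a minimal Python sketch of one way these ideas land in code on the consumer side: an anti-corruption layer that translates a hypothetical 3rd-party billing API into our own bounded context’s ubiquitous language. The vendor client, field names, and Invoice model are all invented for illustration, not any particular provider’s API.
from dataclasses import dataclass
from datetime import date

# Minimal sketch of an anti-corruption layer on the consumer side of an API.
# "Invoice" is OUR bounded context's model and language; RawVendorClient and
# its field names stand in for a hypothetical 3rd-party billing API.

@dataclass(frozen=True)
class Invoice:
    invoice_id: str
    customer_id: str
    amount_cents: int
    issued_on: date

class RawVendorClient:
    """Stand-in for a vendor SDK; returns the vendor's shape of the data."""
    def fetch_bill(self, bill_ref: str) -> dict:
        return {
            "ref": bill_ref,
            "acct": "cust-42",
            "total": "19.99",
            "created": "2025-01-15",
        }

class BillingGateway:
    """Translates the vendor's vocabulary into our domain's ubiquitous language."""
    def __init__(self, client: RawVendorClient) -> None:
        self._client = client

    def get_invoice(self, invoice_id: str) -> Invoice:
        raw = self._client.fetch_bill(invoice_id)
        return Invoice(
            invoice_id=raw["ref"],
            customer_id=raw["acct"],
            amount_cents=int(round(float(raw["total"]) * 100)),
            issued_on=date.fromisoformat(raw["created"]),
        )

if __name__ == "__main__":
    print(BillingGateway(RawVendorClient()).get_invoice("INV-1001"))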
Microsoft will no longer provide technical support, meaning that Exchange 2016 and Exchange 2019 users will no longer receive:
Bug fixes for issues that may impact the stability and usability of the server.
Security fixes for vulnerabilities that may make the server vulnerable to security breaches.
Time zone updates.
Customer installations of Exchange 2016 and Exchange 2019 will continue to run after October 14, 2025. However, continuing to use these offerings after the end-of-support date invites potential security risks, so we strongly recommend taking action now.
We strongly believe that customers get the best value and user experience by migrating fully to Exchange Online or Microsoft 365. Migrating to the cloud is the best and simplest option to help you retire your Exchange Server deployment. When you migrate to the Microsoft cloud, you make a straightforward hop away from an on-premises deployment and benefit from new features and technologies, including advanced generative AI technologies that are available in the cloud but not on-premises.
If you’re migrating to the cloud, you might be eligible to use our Microsoft FastTrack service. FastTrack shares best practices and provides tools and resources to make your migration as seamless as possible. Best of all, you’ll have a support engineer helping you, from planning and designing to migrating your last mailbox. For more information about FastTrack, see Microsoft FastTrack.
If you are running Exchange 2016, we recommend that you perform a legacy (a.k.a. side-by-side) upgrade to Exchange Server 2019 now, and then do an in-place upgrade to Exchange Server SE when it is available.
If you still have Exchange Server 2013 or earlier in your organization, you must first remove it before you can install Exchange Server 2019 CU15 or upgrade to Exchange Server SE.
Exchange Server Technology Adoption Program
If your organization intends to continue running Exchange Server and you want to test and evaluate pre-release builds of Exchange Server SE releases, you can apply to join the Exchange Server Technology Adoption Program (TAP).
Joining the Exchange Server TAP has several advantages, such as the ability to provide input and feedback on future updates, develop a close relationship with the Exchange Server engineering team, receive pre-release information about Exchange Server, and more. TAP members also get support from Microsoft at no additional charge for issues related to the TAP.
All nominations are reviewed and screened prior to acceptance. No customers are allowed access to any pre-release downloads or information until all legal paperwork is properly executed. Nomination does not mean acceptance, as not all nominees will be chosen for the TAP. If you are preliminarily accepted, we will contact you to get the required paperwork started.