
Did you know? C# allows digit separators (_) in numeric literals to make large numbers more readable, e.g., int num = 1_000_000; is valid and equals one million.

dotInsights | February 2026

Welcome to dotInsights by JetBrains! This newsletter is the home for recent .NET and software-development-related information.

🔗 Links

Here’s the latest from the developer community.

☕ Coffee Break

Take a break to catch some fun social posts.

đŸ—žïž JetBrains News

What’s going on at JetBrains? Check it out here:

✉ Comments? Questions? Send us an email

Subscribe to dotInsights


The Secret to Faster Migrations? AI, Pandas, and a Little Prompt Engineering


McDonald’s team built an AI-assisted workflow that turned a six-week migration analysis into a two-day process, enabling accurate, scalable system transitions.

by: Arth Shah, Software Engineer, Restaurant Technology

Quick Bytes:

  • AI‑generated pandas scripts now complete migration analysis in two days instead of six weeks
  • Automated decoding and validation reduce manual effort and improve data accuracy
  • This approach enabled the GLS → Restaurant Profile transition and created a scalable blueprint for future migrations

Why AI + pandas matter for enterprise data migration
Enterprise system migrations are rarely glamorous. They often require analyzing thousands of records across multiple data models, with manual extraction, field mapping, and validation stretching into months — time that modern businesses simply don’t have. And every extra week adds risk to data integrity.

By combining AI-generated code with pandas processing — an open-source Python library for fast, flexible data handling — and real-time documentation access, we can turn months of manual migration analysis into weeks of automated data validation.

Modernizing legacy systems at enterprise scale
Our team was tasked with analyzing data compatibility between the legacy Global Location Service (GLS) system and Restaurant Profile for the US market. We analyzed 178K+ facility instances across 22 facility types to support the transition, a process that traditionally required six weeks of manual work per model.

The aha moment: From manual mapping to AI-powered analysis
The breakthrough came when we realized that data migration analysis follows predictable patterns — ideal for AI code generation. Instead of manually writing extraction and validation scripts for each data model comparison, we shifted to a smarter approach:

  • AI-generated decoders, produced from data format descriptions, to handle Base64/gzip payloads (see the sketch after this list)
  • pandas-powered processing for efficient field-coverage analysis across 18,000+ records
  • Model Context Protocol (MCP) to pull validation requirements from live documentation systems
  • Human oversight to retain control over business logic decisions and data quality thresholds
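
A minimal sketch of such a decoder, assuming (per the description above) that payloads are gzip-compressed JSON wrapped in Base64; bad records are flagged rather than allowed to crash the run:

```python
import base64
import gzip
import json

def decode_payload(encoded: str) -> dict | None:
    """Decode a Base64-encoded, gzip-compressed JSON payload."""
    try:
        compressed = base64.b64decode(encoded, validate=True)
        return json.loads(gzip.decompress(compressed))
    except (ValueError, gzip.BadGzipFile, json.JSONDecodeError):
        # Surface bad records early for manual review instead of
        # failing partway through the analysis run.
        return None
```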

Cracking the code: Our prompt engineering breakthrough
Our key discovery was simple but powerful: structured prompts with context produce production-ready code. Instead of vague requests, we developed prompt templates that deliver consistent, accurate outputs.

PROMPT TEMPLATE:
“Generate a pandas function to analyze field coverage in {data_type} data
from {source_system} with {record_count} records. Requirements: {business_rules}.
Output: coverage percentages, missing data patterns, and validation summary.”

Example: “Analyze facility coverage patterns in compressed JSON data from GLS APIs with 18K+ records. Requirements: identify missing phone numbers, coordinates, time zone data. Output: coverage percentages and critical gaps.”

Result: Complete pandas analysis function in 30 seconds vs. 2+ hours of manual coding.
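
For illustration, here is a minimal sketch of the kind of function this template tends to produce; the column names are hypothetical stand-ins, not the actual GLS schema:

```python
import pandas as pd

# Hypothetical required fields, matching the example prompt above.
REQUIRED_FIELDS = ["phone_number", "latitude", "longitude", "time_zone"]

def analyze_field_coverage(df: pd.DataFrame) -> pd.DataFrame:
    """Per-field coverage percentages plus a critical-gap flag."""
    coverage = (df[REQUIRED_FIELDS].notna().mean() * 100).round(1)
    report = coverage.rename("coverage_pct").to_frame()
    report["critical_gap"] = report["coverage_pct"] < 100.0
    return report.sort_values("coverage_pct")

# Usage: records = list of decoded facility dicts from the source API
# print(analyze_field_coverage(pd.DataFrame(records)))
```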

But generating code was only part of the solution. To make this approach scalable and reliable, we needed more than one‑off AI‑generated scripts. We built a Python framework that could orchestrate the extraction, processing, and validation steps end‑to‑end, while still giving engineers full control over business rules and data‑quality requirements.

Why the Python framework mattered
The Python framework became the backbone of our solution, creating a repeatable and auditable process for large-scale migrations. By combining automation with human validation, we ensured accuracy while enabling a consistent workflow that can scale across markets.

Our three-layer architecture: The secret sauce behind speed
We built a Python framework combining AI code generation, pandas processing, and real-time documentation access:

AI-Assisted Data Analysis Workflow:
1. DATA EXTRACTION (AI + Python)

  • AI generates extraction scripts from requirements
  • Handles Base64 + Gzip decoding automatically
  • Fetches from multiple API endpoints

2. DATA PROCESSING (Pandas + AI)

  • pandas DataFrames for bulk processing
  • AI generates transformation logic
  • Statistical analysis and pattern recognition

3. BUSINESS VALIDATION (MCP + Human Oversight)

  • MCP pulls business rules from documentation
  • Human-defined validation criteria
  • AI generates validation code from business rules
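
To make the third layer concrete, here is a minimal sketch of rule-driven validation; the rules are hypothetical stand-ins for what MCP would pull from live documentation:

```python
import pandas as pd

# Hypothetical business rules keyed by field name.
BUSINESS_RULES = {
    "phone_number": lambda s: s.notna(),
    "latitude": lambda s: s.between(-90, 90),
    "longitude": lambda s: s.between(-180, 180),
}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Return one boolean pass/fail column per business rule."""
    return pd.DataFrame(
        {name: rule(df[name]) for name, rule in BUSINESS_RULES.items()}
    )
```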

How this architecture is implemented: It’s powered by an advanced AI code-generation model, integrated with MCP for real-time business rule access and an AI-enabled development environment. Together, these components automate pandas code generation, connect live documentation, and streamline the entire workflow from extraction to validation.

Real-world impact: How we scaled migration analysis
This approach significantly reduced the time required for migration analysis, accelerating system modernization.

So, what changed?

  • Scale: Automated processing of 18,000+ records and 178,000+ facility instances
  • Accuracy: AI-generated decoders eliminated Base64/gzip parsing errors
  • Coverage: pandas analysis identified gaps in field coverage (76.8% vs. 100%)
  • Reusability: The framework works across multiple system migrations

This speed and precision delivered measurable business impact. For the US market, the team rapidly completed the GLS → Restaurant Profile analysis, identified critical data gaps early, and helped establish Restaurant Profile as the primary data source.

And the best part? This approach is built for adaptability. All scripts are stored in source control and can be iteratively improved, making it easy to extend the framework to other markets. The prompt engineering patterns and analysis workflows are flexible enough to accommodate different data structures and regional requirements — creating a blueprint for global scalability.

What worked and what we learned
Looking back, these elements made the biggest impact:

  • AI for repetitive tasks: Automated extraction and validation code generation
  • Pandas for data processing: Efficiently handled large datasets without memory issues
  • MCP for current rules: Integrated documentation kept validation logic up to date
  • Human oversight: Maintained control over business decisions and code approval

The work also reinforced the importance of early error handling for compressed data and of maintaining human validation checkpoints throughout the process.

What sets this approach apart and why it matters for McDonald’s tech future
This work demonstrates how AI can act as a force multiplier for complex data analysis. By combining advanced AI code generation, MCP’s real-time documentation access, and AI-integrated development tools, we created a new paradigm for technical analysis work — one that accelerates processes without sacrificing accuracy or control.

This approach not only enabled the GLS (Global Location Service) → Restaurant Profile transition but also established reusable patterns for enterprise data analysis. By pairing AI assistance with human oversight, we built a scalable solution that integrates seamlessly into existing workflows.

Looking ahead, this model sets a blueprint for future migrations, proving how AI can accelerate modernization at a global scale while supporting McDonald’s vision for faster, smarter technology solutions.


The Secret to Faster Migrations? AI, Pandas, and a Little Prompt Engineering was originally published in McDonald’s Technical Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Implementing Zero Trust with Resource Isolation


There's a good chance your application consumes one or more APIs. For example, you may have a back-office application that works with a shipping API and an invoice API. Or perhaps you have a microservice architecture, and there are 50 different APIs involved.

In this landscape, one of the most persistent security anti-patterns we see is access tokens with too much access. An overprivileged token occurs when a client requests a wide array of scopes (for example, invoice.read and shipping.write) and receives a single access token that contains all the issued claims.

While asking for multiple scopes at once can be convenient, the issued token raises a concerning trust issue. If the shipping API is compromised and the token is leaked, an attacker can use it to access the invoice API. The attacker has a token that’s issued once, but usable against almost every service within a solution. We’ve sacrificed security for convenience, which can weaken our security posture.

This is where Resource Isolation comes in. Based on RFC 8707 (Resource Indicators for OAuth 2.0), this feature allows you to enforce strict trust boundaries between your APIs, ensuring that a token is only valid for the specific target it was intended for.
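
To make that concrete, here is a minimal client-side sketch; the authorization server, client credentials, and API URLs are all hypothetical. Each token is requested with an RFC 8707 resource parameter so it is valid for exactly one API:

```python
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical authorization server

def get_token(scope: str, resource: str) -> str:
    """Request a token bound to a single API via the RFC 8707 'resource' parameter."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "back-office-app",    # hypothetical client
        "client_secret": "<from-config>",  # injected from configuration in practice
        "scope": scope,
        "resource": resource,              # RFC 8707 resource indicator
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

# One token per API: a leaked shipping token is useless against the invoice API.
shipping_token = get_token("shipping.write", "https://api.example.com/shipping")
invoice_token = get_token("invoice.read", "https://api.example.com/invoice")
```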


Fortifying your stack with Cloudflare: A security playbook


Recently, Cloudflare outages have reignited an uncomfortable but necessary conversation among senior engineers: how much trust should we place in the edge?


These incidents were not catastrophic failures of security. Still, they were operationally disruptive enough to expose assumptions many teams make about availability, isolation, and responsibility when outsourcing critical controls to a global edge provider.

Trusting the edge after Cloudflare outages

This section is not about fault-finding; Cloudflare operates one of the most complex distributed networks in the world, and outages are an inevitable reality of systems at that scale. Instead, the goal is to reframe how senior engineers should think about Cloudflare’s role in their architecture, especially after seeing what happens when the edge is degraded.

Outages as a design signal, not a red flag

When an edge provider experiences downtime, the instinctive reaction is to question reliability. However, a more productive response is to treat the event as a design signal because outages force teams to confront questions that are often ignored during steady-state operation:

(i) What actually breaks when the edge is unavailable?
(ii) Which guarantees were assumed but never explicitly designed for?
(iii) Where does our responsibility end, and where does the provider’s begin?

For senior engineers, these questions are architectural, not emotional. They help clarify whether Cloudflare is being used appropriately as an edge control plane or as a single point of truth for security and availability.

Shared responsibility at the edge

Cloudflare’s model implicitly follows a shared-responsibility pattern, even if it is not always framed that way. The responsibilities break down as follows:

(i) You remain responsible for application-layer authorisation, data integrity, origin resilience, fallback behavior, and failure isolation.
(ii) Cloudflare is responsible for global routing, DDoS absorption, WAF rule execution, TLS termination at the edge, and the correctness of its control plane.

Most outages expose gaps where teams have unintentionally delegated too much responsibility upward. A common example is relying solely on Cloudflare Access or WAF rules as the only enforcement layer, without compensating controls at the application or origin level.
Senior engineers should treat the edge as an augmenting layer, not a replacement for core security invariants.

Failure domains vs. security guarantees

One of the most dangerous misconceptions is equating Cloudflare’s global footprint with immunity from failure. While Cloudflare significantly reduces the blast radius of many attacks, it also introduces a new, distinct failure domain: the edge itself.
Some of the key distinctions to internalise include the following:

(i) Failure domains describe where things can break, like edge routing, control plane propagation, and regional PoPs.
(ii) Security guarantees describe what must never be violated, like the authentication, authorisation, and data access rules.

A mature architecture ensures that security guarantees hold even when a failure domain is compromised. For example:

  • If Cloudflare Access becomes unavailable, does the application fail closed or fail open?
  • If WAF rules are temporarily bypassed, does the application still enforce first-class input validation and authorisation?

Outages are stress tests for security architecture. They expose whether controls are truly defence-in-depth or merely defence-by-delegation.
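
One way to make “fail closed” concrete at the application layer is a sketch like the following, assuming PyJWT and a hypothetical audience; any token that cannot be positively verified is denied:

```python
import jwt  # PyJWT

def is_authorized(token: str | None, signing_key: str) -> bool:
    """Fail closed: anything missing, expired, or unverifiable is denied."""
    if not token:
        return False
    try:
        jwt.decode(token, signing_key, algorithms=["RS256"],
                   audience="my-app")  # hypothetical audience
        return True
    except jwt.PyJWTError:
        # Edge degraded, key rotation missed, token tampered: deny all.
        return False
```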

Reframing Cloudflare: Control plane, not silver bullet

The most important mindset shift for senior engineers is this: Cloudflare is a powerful control plane at the edge, not a silver bullet for security or reliability. Used correctly, Cloudflare provides:

  • Fast, globally distributed policy enforcement
  • Strong mitigation against volumetric and protocol-level attacks
  • Centralised visibility and rapid response capabilities

But if used incorrectly, it becomes a fragile choke point whose failure cascades directly into application downtime or security exposure.

Trusting the edge shouldn’t mean blind reliance. It means consciously designing for partial failure, explicitly defining responsibilities, and ensuring that your core security guarantees remain enforceable even when the edge is degraded.

In the next sections, we will translate this framing into concrete playbooks: how to layer controls, design fallbacks, and use Cloudflare’s features in ways that strengthen your system rather than over-concentrate risk.

Cloudflare’s security model in practice

Where Cloudflare sits in the request path

Cloudflare operates as a reverse proxy in front of your infrastructure.

Request flow:
User → Cloudflare Edge → Origin (API, App, Load Balancer)

  • All inbound traffic terminates at Cloudflare first
  • Requests are inspected, filtered, and optionally blocked at the edge
  • Only traffic that satisfies the defined security and policy controls is forwarded to the origin

How Data Moves Through Cloudflare

This placement allows Cloudflare to enforce security controls before requests reach your servers, reducing the attack surface, absorbing malicious traffic early, and offloading enforcement from origin infrastructure.

Crucially, this model is most effective when Cloudflare’s controls augment, not replace, application-level security. Edge enforcement should be treated as the first line of defence, not the only one.


What Cloudflare can fully enforce vs. partially influence

Network-layer protections (L3/L4)

At the network layer, Cloudflare has direct control over traffic before it reaches your infrastructure with the following capabilities:

  • DDoS mitigation (volumetric floods, SYN floods, UDP floods)
  • IP reputation and threat-intel–based blocking
  • Rate limiting at the edge
  • Geo-blocking and ASN-based filtering

Why this works well:
Attacks are absorbed by Cloudflare’s global Anycast network, preventing malicious traffic from ever reaching your origin.

HTTP-layer enforcement (conditional enforcement)

At Layer 7, Cloudflare enforces controls based on request inspection, not on application semantics or intent. Its strengths here include the following:

  • Web Application Firewall (WAF) rules
  • Bot detection and mitigation
  • API abuse prevention (patterns, headers, paths, request rates)
  • Blocking common OWASP Top 10 attack signatures (SQLi, XSS, RCE)

What it cannot reliably stop:

  • Business-logic abuse (valid requests used maliciously)
  • Authorization and access-control flaws
  • Vulnerabilities in backend or service-to-service code
  • Insecure internal APIs exposed behind the edge

The key limitation: Cloudflare sees requests, not your application’s state, intent, or authorization model.

A common misconception

A misconception many senior engineers and architects share is: “If it’s behind Cloudflare, it’s secure.” The reality:
Cloudflare is a powerful perimeter and traffic-control layer, not a complete security solution.

While Cloudflare reduces the attack surface and blocks most automated, volumetric, and signature-based attacks, it does not replace secure coding, strong auth, or internal trust boundaries. Therefore, you must take extra care to ensure the following:

  1. Your origin IP is not exposed
  2. Your application does not blindly trust Cloudflare headers
  3. Your business logic or authorisation is not flawed

These measures help stop attackers even when their traffic passes through (or bypasses) Cloudflare.
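
As an illustration of points 1 and 2, here is a minimal origin-side guard, assuming a Flask application; it loads Cloudflare’s published IPv4 ranges (IPv6 omitted for brevity) and refuses any connection that did not arrive through the edge:

```python
import ipaddress
import requests
from flask import Flask, abort, request

app = Flask(__name__)

# Cloudflare publishes its edge ranges; load them once at startup.
CF_RANGES = [ipaddress.ip_network(line) for line in
             requests.get("https://www.cloudflare.com/ips-v4").text.split()]

@app.before_request
def require_cloudflare():
    # Only trust CF-Connecting-IP when the TCP peer really is Cloudflare;
    # direct-to-origin traffic is denied outright.
    peer = ipaddress.ip_address(request.remote_addr)
    if not any(peer in net for net in CF_RANGES):
        abort(403)
```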

Cloudflare use cases at a glance

The table below summarizes where Cloudflare is typically a strong fit, what it can reliably enforce at the edge, and what you still need to enforce at the origin and application layers:

| Use case | What Cloudflare does well | What you must still own | Common pitfall |
| --- | --- | --- | --- |
| DDoS protection (L3/L4) | Absorbs volumetric and protocol-level attacks at the edge using Anycast routing | Origin hardening, capacity planning for legitimate traffic | Assuming the origin does not need rate limits or firewall rules |
| Traffic filtering & rate limiting | Blocks abusive patterns early based on IP, ASN, geography, and request rates | Auth-aware throttling tied to user identity, session, or token | Treating rate limits as a replacement for application-layer abuse controls |
| Web Application Firewall (WAF) | Stops known, automated, and signature-based attacks at scale | Business-logic validation, authorization, and input handling | Confusing WAF coverage with application security guarantees |
| Bot management | Differentiates good vs. malicious bots using behavioral and reputation signals | Protecting sensitive workflows from authenticated or human-assisted abuse | Assuming bots are the only meaningful threat vector |
| Edge authentication (Access, JWT, headers) | Fast, centralized identity checks before traffic hits the origin | Enforcing auth and authorization inside the application | Blindly trusting edge headers without verification |
| API protection | Blocks basic abuse patterns and malformed requests | Schema validation, RBAC, and business invariants | Assuming managed rules fully understand custom APIs or GraphQL |
| Caching & performance | Reduces latency and origin load for static and cacheable content | Correct cache headers and explicit no-cache rules for sensitive data | Accidentally caching authenticated or user-specific responses |
| TLS termination & encryption | Manages certificates and enforces HTTPS globally | End-to-end encryption assumptions and origin trust boundaries | Assuming TLS termination equals full data security |
| Global policy enforcement | Centralized rules, fast propagation, and visibility | Change management, audits, and rollback strategies | Treating the control plane as infallible |

The Cloudflare WAF: What it’s excellent at

Cloudflare WAF is a Layer 7 control designed to detect and block malicious HTTP traffic before it reaches your origin. Its strength lies in stopping known, repeatable, and automated attack patterns at scale. Below are three core capabilities where it consistently performs well in real-world systems.

1. Managed rulesets

Cloudflare uses managed rulesets to match and inspect traffic. These are pre-built, continuously maintained rule collections designed to detect and block common attack classes such as SQL injection (SQLi), cross-site scripting (XSS), remote code execution (RCE), and known CVEs.

How matching works

Each incoming HTTP request is evaluated against signatures and heuristics across multiple components of the request, including:

  • URI paths
  • Headers and cookies
  • Query parameters
  • Request body payloads

Why is this effective?

  • Immediately blocks generic and automated attacks at the edge
  • Continuously updated rules provide coverage for newly disclosed vulnerabilities without manual intervention
  • Reduces operational overhead by eliminating the need to maintain custom signatures for common attack classes

Practical example
A login request containing a payload such as ' OR 1=1 -- within a request parameter is blocked at the edge, preventing the request from ever reaching the application or database.

2. Rate limiting as a security primitive

Rate limiting is not just a performance tool; it is a foundational security control that limits attack amplification.

How rate limiting works

  • Applied per endpoint, IP, token, or session
  • Thresholds trigger blocks, challenges, or temporary bans
  • Often combined with WAF or bot signals for precision

Why is this effective?

  • It disrupts brute-force authentication attempts
  • It reduces the effectiveness of credential stuffing and API abuse
  • It prevents low-cost attacks from scaling linearly

Practical example
Restricting login attempts to 5 requests per minute stops bots from testing thousands of credentials, protecting accounts even if passwords are weak.
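
For illustration, a minimal in-memory sketch of that rule as a compensating control at the origin; production systems would pair Cloudflare’s edge rate limiting with a shared store such as Redis:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS, LIMIT = 60.0, 5  # 5 attempts per minute, per client
_attempts: defaultdict[str, deque] = defaultdict(deque)

def allow_login(client_ip: str) -> bool:
    """Sliding-window check: allow at most LIMIT attempts per window."""
    now = time.monotonic()
    window = _attempts[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts outside the sliding window
    if len(window) >= LIMIT:
        return False      # throttled: challenge or reject the request
    window.append(now)
    return True
```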

3. Bot management: behavioural & reputation signals

Not all bots are malicious. Cloudflare differentiates good bots from bad bots using multiple signals rather than simple user-agent checks.

Signals used

  • Behavioural analysis (timing patterns, interaction entropy)
  • Browser and TLS fingerprinting
  • IP reputation and known bot databases

Why is this effective?

  • Blocks scraping, spam, and credential abuse at scale
  • Allows legitimate crawlers (e.g. Googlebot, Bingbot) through without friction
  • Minimises false positives for real users

Practical example
An attacker using headless browsers to scrape pricing or inventory data is automatically challenged or blocked, while normal users remain unaffected.

Where teams overestimate Cloudflare WAF coverage

Cloudflare WAF is a strong Layer 7 control, but its effectiveness is often overstated. Senior engineers get into trouble when WAF capabilities are confused with application-level guarantees. Understanding these limits is essential to avoiding false security assumptions.

Common false assumptions

WAF is not the same as business logic protection

The WAF evaluates request structure and patterns, not application intent. Abuse of valid workflows (for example, transferring more funds than allowed via a legitimate API endpoint) will pass through unchanged.

WAF is not the same as authorisation enforcement

The application or API itself must enforce access control. A WAF cannot replace role-based access control (RBAC), session validation, or entitlement checks.

WAF is not the same as comprehensive API security

Modern APIs, especially those using custom endpoints, GraphQL, or gRPC, often expose attack surfaces that managed rulesets do not fully understand or cover.

Real failure modes in practice

Authenticated abuse

Attackers using valid credentials can bypass WAF protections by exploiting business logic or authorised endpoints in unintended ways.

Overly permissive tuning

To reduce false positives, teams sometimes weaken or disable managed rules, re-exposing well-known attack vectors in the process.

API blind spots

GraphQL introspection, complex query payloads, and gRPC traffic frequently evade default signatures, requiring dedicated controls beyond standard WAF rules.

Security misconfigurations seen in real systems

Cloudflare significantly reduces risk, but it cannot compensate for architectural or configuration mistakes. The following examples show common failure modes, why Cloudflare didn’t save the system and what mature designs do differently.

1. Broken cache rules causing data exposure

What went wrong?

Sensitive API responses (user profiles, access tokens, or session data) were cached publicly due to misconfigured page rules or default caching behaviour.

Why Cloudflare didn’t save it

WAF and DDoS protections operate on request inspection. Caching logic determines what data is stored and served. Cloudflare correctly served cached content even when that content should never have been cached.

Correct architecture

  • Set Cache-Control: private or no-store headers on sensitive endpoints
  • Enforce path-based caching rules using Page Rules or Cloudflare Workers
  • Explicitly opt out of caching for authenticated or user-specific responses
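
A minimal sketch of the first bullet, assuming a Flask origin and a hypothetical endpoint:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/api/profile")  # hypothetical sensitive endpoint
def profile():
    resp = jsonify({"user": "example"})  # hypothetical payload
    # Explicitly opt out of caching so a misconfigured edge rule can
    # never serve this response from cache.
    resp.headers["Cache-Control"] = "private, no-store"
    return resp
```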

2. Origins still reachable directly

What went wrong?

The origin IP or load balancer was publicly reachable, allowing attackers to bypass Cloudflare entirely and interact directly with backend services.

Why Cloudflare didn’t save it

Cloudflare can only inspect traffic that passes through it. Direct-to-origin traffic is invisible to WAF, rate limiting and bot protections.

Correct architecture

  • Restrict origin access to Cloudflare IP ranges only
  • Use firewall rules, private networking, or VPNs for internal services
  • Regularly validate that no public paths bypass the edge

3. Conflicting Page Rules or Transform Rules weakening enforcement

What went wrong?

Multiple Page Rules, Transform Rules or Workers unintentionally overrode security headers or caching behaviour, reducing protections on critical endpoints.

Why Cloudflare didn’t save it

Rules are applied in a defined order. Misconfigured precedence can silently undo security controls, such as disabling HSTS, altering headers or bypassing WAF rules.

Correct architecture

  • Minimise and consolidate Page Rules and Transform Rules
  • Maintain a clear rule-precedence model
  • Continuously validate that security headers and WAF settings remain intact

4. Authentication or JWT validation occurring too late

What went wrong?

Requests were processed by Cloudflare Workers or backend logic before authentication checks, allowing unauthenticated access to sensitive operations.

Why Cloudflare didn’t save it

WAF and bot management evaluate request patterns, not authentication state. Authorisation must be explicitly enforced, not assumed.

Resilience and failure planning when using edge security providers

Edge security providers like Cloudflare dramatically reduce risk, but they are not infallible. Mature architectures plan explicitly for partial outages, misconfigurations, and control-plane failures, ensuring core security guarantees survive even when the edge does not.

1. Planning for partial edge outages

Scenario

Specific Cloudflare PoPs or services become unavailable, degrading or blocking traffic in certain regions.

Mitigation approach

  • Ensure the origin can safely handle direct traffic from trusted sources if edge routing degrades
  • Use health checks and failover routing (DNS-based or global load balancers)
  • Avoid architectural assumptions of 100% edge availability

2. Designing for misconfigurations

Scenario

Page rules, WAF policies, or caching settings unintentionally expose sensitive endpoints or introduce bypass paths.

Mitigation approach

  • Apply secure-by-default behaviour at the origin (deny by default)
  • Regularly audit Cloudflare configurations and rule precedence
  • Maintain redundant application-layer controls (authorisation, rate limiting and input validation)

3. Control-plane failures

Scenario

Cloudflare dashboards or APIs experience outages, delayed propagation, or inconsistent rule enforcement.

Mitigation approach

  • Treat edge security as supplementary, not the system of record
  • Enforce critical guarantees (auth, business invariants, logging) at the origin
  • Use feature flags or emergency overrides to roll back unsafe configurations quickly

4. Practical mitigations and safe defaults

Origin-level safeguards

  • Enforce authentication and authorisation within the application
  • Validate sensitive inputs server-side
  • Maintain deny-by-default firewall rules on origin infrastructure

Safe defaults when edge assumptions break

  • Block unknown or untrusted IPs by default
  • Rate-limit all unauthenticated requests
  • Log and retain suspicious activity for forensic analysis

5. Observability signals senior engineers should monitor

  • Traffic patterns: Detect volumetric attacks, abuse, or edge outages by monitoring sudden spikes, anomalies, or unexpected drops in requests.
  • Edge-to-origin ratios: Confirm that traffic is flowing through the edge as intended; deviations may indicate bypasses, routing issues, or edge failures.
  • WAF and Bot Management logs: Track blocked versus allowed traffic to validate rule effectiveness and identify false positives or missed attacks.
  • Error rates: Surface authentication failures, abuse attempts, backend instability, or misconfigured protections, especially on sensitive endpoints.
  • Configuration changes: Catch risky or unintended changes early by monitoring dashboard and API modifications to security, caching, or firewall rules.

Key takeaways for senior engineers

  1. Edge as the first line, not the only line: Cloudflare and WAF protections provide a strong perimeter, but real security is enforced in the application: authentication, authorisation, input validation, and business-logic controls must remain first-class citizens.
  2. WAF effectiveness is architectural, not numerical: Security does not scale with rule count; strong outcomes come from clean request flows, hardened origins, explicit trust boundaries, and a deep understanding of attack surfaces, not from piling on managed rules.
  3. Design for failure and safe degradation: Outages, misconfigurations, and control-plane failures are inevitable. Mature systems fail closed where it matters, preserve observability, and avoid blind trust in edge controls when assumptions break.

Conclusion

Treat Cloudflare as a strategic layer in a defence-in-depth model, not a substitute for disciplined engineering, secure design, and ownership at the application layer. Happy coding!

The post Fortifying your stack with Cloudflare: A security playbook appeared first on LogRocket Blog.


I Started Programming When I Was 7. I'm 50 Now, and the Thing I Loved Has Changed


I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine. I understood that machine completely. Every byte of RAM had a purpose I could trace. Every pixel on screen was there because I’d put it there. The path from intention to result was direct, visible, and mine.

Forty-two years later, I’m sitting in front of hardware that would have seemed like science fiction to that kid, and I’m trying to figure out what “building things” even means anymore.

This isn’t a rant about AI. It’s not a “back in my day” piece. It’s something I’ve been circling for months, and I think a lot of experienced developers are circling it too, even if they haven’t said it out loud yet.

The era that made me

My favourite period of computing runs from the 8-bits through to about the 486DX2-66. Every machine in that era had character. The Sinclair Spectrum with its attribute clash. The Commodore 64 with its SID chip doing things the designers never intended. The NES with its 8-sprite-per-scanline limit that made developers invent flickering tricks to cheat the hardware. And the PC — starting life as a boring beige box for spreadsheets, then evolving at breakneck pace through the 286, 386, and 486 until it became a gaming powerhouse that could run Doom. You could feel each generation leap. Upgrading your CPU wasn’t a spec sheet exercise — it was transformative.

These weren’t just products. They were engineering adventures with visible tradeoffs. You had to understand the machine to use it. IRQ conflicts, DMA channels, CONFIG.SYS and AUTOEXEC.BAT optimisation, memory managers — getting a game to run was the game. You weren’t just a user. You were a systems engineer by necessity.

And the software side matched. Small teams like id Software were going their own way, making bold technical decisions because nobody had written the rules yet. Carmack’s raycasting in Wolfenstein, the VGA Mode X tricks in Doom — these were people pushing against real constraints and producing something genuinely new. Creative constraints bred creativity.

Then it professionalised. Plug and Play arrived. Windows abstracted everything. The Wild West closed. Computers stopped being fascinating, cantankerous machines that demanded respect and understanding, and became appliances. The craft became invisible.

But it wasn’t just the craft that changed. The promise changed.

When I started, there was a genuine optimism about what computers could be. A kid with a Spectrum could teach themselves to build anything. The early web felt like the greatest levelling force in human history. Small teams made bold decisions because nobody had written the rules yet.

That hope gave way to something I find genuinely distasteful. The machines I fell in love with became instruments of surveillance and extraction. The platforms that promised to connect us were really built to monetise us. The tinkerer spirit didn’t die of natural causes — it was bought out and put to work optimising ad clicks.

The thing I loved changed, and then it was put to work doing things I’m not proud to be associated with. That’s a different kind of loss than just “the tools moved on.”

But I adapted. That’s what experienced developers, human beings, do.

The shifts I rode

Over four decades I’ve been through more technology transitions than I can count. New languages, new platforms, new paradigms. CLI to GUI. Desktop to web. Web to mobile. Monoliths to microservices. Tapes, floppy discs, hard drives, SSDs. JavaScript frameworks arriving and dying like mayflies.

Each wave required learning new things, but the core skill transferred. You learned the new platform, you applied your existing understanding of how systems work, and you kept building. The tool changed; the craft didn’t. You were still the person who understood why things broke, how systems composed, where today’s shortcut became next month’s mess.

I’ve written production code in more languages than some developers have heard of. I’ve shipped software on platforms that no longer exist. I’ve chased C-beams off the shoulder of Orion. And every time the industry lurched in a new direction, the experience compounded. You didn’t start over. You brought everything with you and applied it somewhere new.

That’s the deal experienced developers made with the industry: things change, but understanding endures.

This time is different

I say that knowing how often those words have been wrong throughout history. But hear me out.

Previous technology shifts were “learn the new thing, apply existing skills.” AI isn’t that. It’s not a new platform or a new language or a new paradigm. It’s a shift in what it means to be good at this.

I noticed it gradually. I’d be working on something — building a feature, designing an architecture — and I’d realise I was still doing the same thing I’d always done, just with the interesting bits hollowed out. The part where you figure out the elegant solution, where you wrestle with the constraints, where you feel the satisfaction of something clicking into place — that was increasingly being handled by a model that doesn’t care about elegance and has never felt satisfaction.

Cheaper. Faster. But hollowed out.

I’m not typing the code anymore. I’m reviewing it, directing it, correcting it. And I’m good at that — 42 years of accumulated judgment about what works and what doesn’t, what’s elegant versus what’s expedient, how systems compose and where they fracture. That’s valuable. I know it’s valuable. But it’s a different kind of work, and it doesn’t feel the same.

The feedback loop has changed. The intimacy has gone. The thing that kept me up at night for decades — the puzzle, the chase, the moment where you finally understand why something isn’t working — that’s been compressed into a prompt and a response. And I’m watching people with a fraction of my experience produce superficially similar output. The craft distinction is real, but it’s harder to see from the outside. Harder to value. Maybe harder to feel internally.

The abstraction tower

Here’s the part that makes me laugh, darkly.

I saw someone on LinkedIn recently — early twenties, a few years into their career — lamenting that with AI they “didn’t really know what was going on anymore.” And I thought: mate, you were already so far up the abstraction chain you didn’t even realise you were teetering on top of a wobbly Jenga tower.

They’re writing TypeScript that compiles to JavaScript that runs in a V8 engine written in C++ that’s making system calls to an OS kernel that’s scheduling threads across cores they’ve never thought about, hitting RAM through a memory controller with caching layers they couldn’t diagram, all while npm pulls in 400 packages they’ve never read a line of.

But sure. AI is the moment they lost track of what’s happening.

The abstraction ship sailed decades ago. We just didn’t notice because each layer arrived gradually enough that we could pretend we still understood the whole stack. AI is just the layer that made the pretence impossible to maintain.

The difference is: I remember what it felt like to understand the whole machine. I’ve had that experience. And losing it — even acknowledging that it was lost long before AI arrived — is a kind of grief that someone who never had it can’t fully feel.

What remains

I don’t want to be dishonest about this. There’s a version of this post where I tell you that experience is more valuable than ever, that systems thinking and architectural judgment are the things AI can’t replace, that the craft endures in a different form.

And that’s true. When I’m working on something complex — juggling system-level dependencies, holding a mental model across multiple interacting specifications, making the thousand small decisions that determine whether something feels coherent or just works — I can see how I still bring something AI doesn’t. The taste. The judgment. The pattern recognition from decades of seeing things go wrong.

AI tools actually make that kind of thinking more valuable, not less. When code generation is cheap, the bottleneck shifts to the person who knows what to ask for, can spot when the output is subtly wrong, and can hold the whole picture together. Typing was never the hard part.

But I’d be lying if I said it felt the same. It doesn’t. The wonder is harder to access. The sense of discovery, of figuring something out through sheer persistence and ingenuity — that’s been compressed. Not eliminated, but compressed. And something is lost in the compression, even if something is gained.

The fallow period

I turned 50 recently. Four decades of intensity, of crafting and finding satisfaction and identity in the building.

And now I’m in what I’ve started calling a fallow period. Not burnout exactly. More like the ground shifting under a building you thought that although ever changing also had a permanence, and trying to figure out where the new foundation is.

I don’t have a neat conclusion. I’m not going to tell you that experienced developers just need to “push themselves up the stack” or “embrace the tools” or “focus on what AI can’t do.” All of that is probably right, and none of it addresses the feeling.

The feeling is: I gave 42 years to this thing, and the thing changed into something I’m not sure I recognise anymore. Not worse, necessarily. Just different. And different in a way that challenges the identity I built around it and doesn’t satisfy in the way it did.

I suspect a lot of developers over 40 are feeling something similar and not saying it, because the industry worships youth and adaptability and saying “this doesn’t feel like it used to” sounds like you’re falling behind.

I’m not falling behind. I’m moving ahead, taking advantage of the new tools, building faster than ever, and using these tools to help others accelerate their own work. I’m creating products I could only have dreamt of a few years ago. But at the same time I’m looking at the landscape, trying to figure out what building means to me now. The world’s still figuring out its shape too. Maybe that’s okay.

Maybe the fallow period is the point. Not something to push through, but something to be in for a while.

I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that.


Photo by Javier Allegue Barros on Unsplash


Mastering the ASP.NET Core Request Pipeline: Middleware Patterns and Endpoint Filters for Real Apps


## 1 The Modern Request Pipeline: Architecture and .NET Evolution

The ASP.NET Core request pipeline is the framework's execution backbone. Every HTTP request flows through a well-defined sequence—middleware, routing, filters, model binding, and finally an endpoint that produces a response. Understanding this flow determines where you enforce security, how you apply cross-cutting concerns, and how predictable your system is under load.

This article builds a clear mental model of how the pipeline behaves at runtime, explains how middleware and endpoint filters work together, and shows how to use them deliberately to build production-grade APIs.

### 1.1 The "Russian Doll" Model: Understanding the Bidirectional Flow

A reliable way to think about the ASP.NET Core pipeline is as a nested stack, similar to a set of Russian dolls. Each middleware wraps the next one. Requests travel inward toward the endpoint; responses unwind outward through the same layers.

#### 1.1.1 Inbound and Outbound Execution

Every middleware follows a consistent execution shape: it can inspect or modify the request on the way in, invoke the next component in the chain, and then act on the response as the stack unwinds.
