Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Agentic AI meets integration: The next frontier

Artificial intelligence has already transformed the way organizations analyze data, predict outcomes, and generate content. The next leap forward is arriving fast. Enter agentic AI: Autonomous systems capable of reasoning, deciding, acting, and continuously improving with minimal human intervention.

Unlike predictive or generative AI, agentic AI systems don’t just support decisions; they execute them. They can start workflows, coordinate across systems, adapt to changing conditions, and optimize processes in real time. But as promising as this shift is, one foundational question often gets overlooked:

How do autonomous agents safely and reliably interact with the enterprise systems where business happens?

The answer is integration, and how organizations choose to approach it will shape the success or failure of agentic AI initiatives.

Why integration is the real enabler of agentic AI

Agentic AI cannot exist in isolation. An agent is only as capable as the data, systems, and processes it can access and orchestrate.

A helpful analogy is the difference between a compass and a GPS. A compass provides direction, much like traditional AI delivers insights. A GPS, however, combines maps, traffic data, and real-time signals to guide action. Agentic AI works the same way: Autonomy emerges only when intelligence is paired with deep, trusted connectivity across the enterprise.

To operate autonomously, AI agents must be able to:

  • Access high-quality, contextual data across systems.
  • Navigate fragmented, hybrid IT landscapes.
  • Trigger and coordinate actions across applications.
  • Operate within defined governance, security, and compliance boundaries.

These requirements elevate integration from a technical concern to a strategic business capability.

The integration spectrum: Different ways to feed data to agents

Organizations exploring agentic AI quickly discover that there is no single way to connect agents to enterprise systems. Instead, there is a spectrum of integration approaches, each with trade-offs.

At one end are direct API connections and straight-through integrations, where agents call services or databases directly. This approach can work well for narrow use cases or greenfield environments, but it often struggles with scalability, error handling, and governance as complexity grows.

Others turn to open source integration frameworks or event streaming platforms, which provide flexibility and strong developer control. These options can be powerful, especially for digitally native teams, but they typically require significant engineering effort to manage security, life cycle management, monitoring, and enterprise-grade operations.

Many organizations adopt integration platforms or integration platform as a service (iPaaS) solutions, which abstract connectivity, orchestration, and transformation logic into reusable services. These platforms are increasingly adding AI-assisted features — such as automated mapping, testing, and monitoring — to reduce manual effort.

Finally, large enterprises often look for deeply embedded integration platforms that are tightly aligned with their core business applications, data models, and process frameworks. These solutions emphasize governance, scalability, and business context, which are critical when agents may act autonomously.

Choosing the right approach depends on factors such as organizational maturity, regulatory requirements, landscape complexity, and the level of autonomy desired.

Why agentic AI cannot scale without an integration strategy

Most enterprises operate across a mix of cloud services, on-premises systems, partner networks, and industry-specific applications. Data is fragmented, processes span multiple systems, and change is constant.

Without a strong integration foundation, agentic AI initiatives face genuine risks:

  • Agents acting on incomplete or inconsistent data.
  • Brittle automations that fail at scale.
  • Limited visibility into decisions and actions.
  • Governance gaps that undermine trust and compliance.

Organizations that treat integration as a strategic capability, rather than a project-by-project necessity, are better positioned to scale agentic AI safely. They gain the ability to automate end-to-end processes, adapt quickly to change, and continuously optimize operations, turning autonomy into a competitive advantage rather than a liability.

The rise of agentic integration

As agentic AI grows, integration itself is becoming more autonomous.

Across the market, we’re seeing early examples of agentic integration patterns, where AI assists, or increasingly automates, parts of the integration lifecycle:

  • Discovering systems, APIs, and events.
  • Designing and mapping integrations based on intent.
  • Deploying and testing integration flows.
  • Monitoring, optimizing, and even “healing” failures.

In this model, integration experts shift from hands-on builders to strategic orchestrators, defining policies, outcomes, and guardrails while agents handle execution. This mirrors trends in other domains, from Infrastructure as Code to autonomous operations.

Positioning enterprise integration platforms in an agentic world

Enterprise-grade integration platforms are evolving to meet this shift by combining:

  • Support for multiple integration styles (API-led, event-driven, B2B, A2A).
  • AI-assisted design, mapping, and monitoring.
  • Built-in security, governance, and life cycle management.
  • Connectivity across systems.

Platforms such as SAP Integration Suite sit at the enterprise end of this spectrum, focusing on scalability, trust, and business context. By embedding AI capabilities directly into integration workflows and aligning closely with enterprise business processes, they aim to make agentic AI operationally viable at scale, not just technically possible.

For organizations already running complex, regulated, or mission-critical landscapes, this approach can reduce risk, speed time to value, and provide the governance needed when autonomous agents begin to act on behalf of the business.

Looking ahead: How autonomous should integration become?

Agentic AI represents a fundamental shift in the way businesses design and run their operations. But autonomy is not binary; it’s a continuum.

Just as autonomous driving systems range from driver assistance to full self-driving, integration platforms will develop along a spectrum of autonomy. The key question for enterprises is not if integration should become more autonomous, but how much autonomy they are ready to trust, and where.

Organizations that start building this foundation now — by modernizing integration, clarifying governance, and experimenting with agentic patterns — will be best positioned to shape the next era of autonomous business.

The future of agentic AI will be defined not only by smarter models, but by smarter connections.

The post Agentic AI meets integration: The next frontier appeared first on The New Stack.


Making the Most of Your Docker Hardened Images Enterprise Trial – Part 3


Customizing Docker Hardened Images

In Part 1 and Part 2, we established the baseline. You migrated a service to a Docker Hardened Image (DHI), witnessed the vulnerability count drop to zero, and verified the cryptographic signatures and SLSA provenance that make DHI a compliant foundation.

But no matter how secure a base image is, it is useless if you can’t run your application on it. This brings us to the most common question engineers ask during a DHI trial: what if I need a custom image?

Hardened images are minimal by design. They lack package managers (apt, apk, yum), utilities (wget, curl), and even shells like bash or sh. This is a security feature: if a bad actor breaks into your container, they find an empty toolbox.

However, developers often need these tools during setup. You might need to install a monitoring agent, a custom CA certificate, or a specific library.

In this final part of our series, we will cover the two strategies for customizing DHI: the Docker Hub UI (for platform teams creating “Golden Images”) and the multi-stage build pattern (for developers building applications).

Option 1: The Golden Image (Docker Hub UI)

If you are a Platform or DevOps Engineer, your goal is likely to provide a “blessed” base image for your internal teams. For example, you might want a standard Node.js image that always includes your corporate root CA certificate and your security logging agent. The Docker Hub UI is the preferred path for this. The strongest argument for using the Hub UI is maintenance automation.

The Killer Feature: Automatic Rebuilds

When you customize an image via the UI, Docker understands the relationship between your custom layers and the hardened base. If Docker releases a patch for the underlying DHI base image (e.g., a fix in glibc or openssl), Docker Hub automatically rebuilds your custom image.

You don’t need to trigger a CI pipeline. You don’t need to monitor CVE feeds. The platform handles the patching and rebuilding, ensuring your “Golden Image” is always compliant with the latest security standards.

How It Works

Since you have an Organization set up for this trial, you can explore this directly in Docker Hub. First, navigate to Repositories in your organization dashboard. Locate the image you want to customize (e.g., dhi-node), open the Customizations tab, and click the “Create customization” action. This initiates a customization workflow as follows:

Screenshot: the Create customization workflow in Docker Hub.

In the “Add packages” section, you can search for and select OS packages directly from the distribution’s repository. For example, here we are adding bash to the image for debugging purposes. You can also add “OCI Artifacts” to inject custom files like certificates or agents.

Screenshot: adding OS packages (such as bash) in the Add packages step.

Finally, configure the runtime settings (User, Environment Variables) and review your build. Docker Hub will verify the configuration and queue the build. Once complete, this image will be available in your organization’s private registry and will automatically rebuild whenever the base DHI image is updated.

Screenshot: runtime settings and build review.

This option is best suited for creating standardized “golden” base images that are used across the entire organization. The primary advantage is zero-maintenance security patching due to automatic rebuilds by Docker Hub. However, it is less flexible for rapid, application-specific iteration by individual development teams.
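Once the customization build completes, application teams consume the golden image just like any other base image. Here is a minimal sketch, assuming a hypothetical custom tag (such as 24-golden) published under your organization’s namespace; substitute whatever name your platform team actually publishes:

# Hypothetical golden image built by the platform team via the Docker Hub UI.
# The namespace and tag below are placeholders.
FROM <your-org-namespace>/dhi-node:24-golden

# Only application code is added here; the corporate CA certificate, logging
# agent, and any extra OS packages are already baked into the golden base.
COPY app/ /app/
WORKDIR /app

# DHI images ship without a shell, so use the exec form for CMD.
CMD ["node", "server.js"]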

Option 2: Multi-Stage Build

If you are a developer, you likely define your environment in a Dockerfile that lives alongside your code. You need flexibility, and you need it to work locally on your machine.

Since DHI images don’t have apt-get or curl, you cannot simply RUN apt-get install my-lib in your Dockerfile. It will fail.

Instead, we use the multi-stage build pattern. The concept is simple:

  1. Stage 1 (Builder): Use a standard “fat” image (like debian:bookworm-slim) to download, compile, and prepare your dependencies.
  2. Stage 2 (Runtime): Copy only the resulting artifacts into the pristine DHI base.

This keeps your final image minimal, non-root, and secure, while still allowing you to install whatever you need.

Hands-on Tutorial: Adding a Monitoring Agent

Let’s try this locally. We will simulate a common real-world scenario: adding the Datadog APM library (dd-trace) globally to a Node.js DHI image.

1. Setup

Create a new directory for this test and add a simple server.js file. This script attempts to load the dd-trace library to verify our installation.

app/server.js

// Simple Node.js script to demonstrate DHI customization
console.log('Node.js version:', process.version);
try {
  require('dd-trace');
  console.log('dd-trace module loaded successfully!');
} catch (e) {
  console.error('Failed to load dd-trace:', e.message);
  process.exit(1);
}
console.log('Running as UID:', process.getuid(), 'GID:', process.getgid());
console.log('DHI customization test successful!');

2. Hardened Dockerfile

Now, create the Dockerfile. We will use a standard Debian image to install the library, and then copy the resulting files into our DHI Node.js image.

# Stage 1: Builder - a standard Debian Slim image that has apt, curl, and full shell access.
FROM debian:bookworm-slim AS builder


# Install Node.js (matching our target version) and tools
RUN apt-get update && \
    apt-get install -y curl && \
    curl -fsSL https://deb.nodesource.com/setup_24.x | bash - && \
    apt-get install -y nodejs


# Install Datadog APM agent globally (we force the install prefix to /usr/local so we know exactly where files go)
RUN npm config set prefix /usr/local && \
    npm install -g dd-trace@5.0.0


# Stage 2: Runtime - we switch to the Docker Hardened Image.
FROM <your-org-namespace>/dhi-node:24.11-debian13-fips


# Copy only the required library from the builder stage
COPY --from=builder /usr/local/lib/node_modules/dd-trace /usr/local/lib/node_modules/dd-trace


# Environment Configuration
# DHI images are strict. We must explicitly tell Node where to find global modules.
ENV NODE_PATH=/usr/local/lib/node_modules


# Copy application code
COPY app/ /app/


WORKDIR /app


# DHI Best Practice: Use the exec form (["node", ...]) 
# because there is no shell to process strings.
CMD ["node", "server.js"]

3. Build and Run

Build the custom image:

docker build -t dhi-monitoring-test .

Now run it. If successful, the container should start, find the library, and exit cleanly.

docker run --rm dhi-monitoring-test

Output:

Node.js version: v24.11.0
dd-trace module loaded successfully!
Running as UID: 1000 GID: 1000
DHI customization test successful!

Success! We have a working application with a custom global library, running on a hardened, non-root base.

Security Check

We successfully customized the image. But did we compromise its security?

This is the most critical lesson of operationalizing DHI: hardened base images protect the OS, but they do not protect you from the code you add. Let’s verify our new image with Docker Scout.

docker scout cves dhi-monitoring-test --only-severity critical,high

Sample Output:

    ✗ Detected 1 vulnerable package with 1 vulnerability
...
   0C     1H     0M     0L  lodash.pick 4.4.0           
pkg:npm/lodash.pick@4.4.0                               
                                                        
    ✗ HIGH CVE-2020-8203 [Improperly Controlled Modification of Object Prototype Attributes]

This result is accurate and important. The base image (OS, OpenSSL, Node.js runtime) is still secure. However, the dd-trace library we just installed pulled in a dependency (lodash.pick) that contains a High severity vulnerability.

This proves that your verification pipeline works.

If we hadn’t scanned the custom image, we might have assumed we were safe because we used a “Hardened Image.” By using Docker Scout on the final artifact, we caught a supply chain vulnerability introduced by our customization.

Let’s check how much “bloat” we added compared to the clean base.

docker scout compare --to <your-org-namespace>/dhi-node:24.11-debian13-fips dhi-monitoring-test

You will see that the only added size corresponds to the dd-trace library (~5MB) and our application code. We didn’t accidentally inherit apt, curl, or the build caches from the builder stage. The attack surface remains minimized.
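For extra assurance that the hardened runtime characteristics survived the customization, two quick checks with standard Docker commands are enough. This is a small sketch using the image name from the tutorial; the second command is expected to fail precisely because the runtime image contains no shell:

# Confirm the final image still runs as the non-root user inherited from the DHI base.
docker inspect --format '{{.Config.User}}' dhi-monitoring-test

# Confirm there is still no shell in the runtime image.
# This should fail with an "executable file not found"-style error.
docker run --rm --entrypoint /bin/sh dhi-monitoring-test -c "id"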

A Note on Provenance: Who Signs What?

In Part 2, we verified the SLSA Provenance and cryptographic signatures of Docker Hardened Images. This is crucial for establishing a trusted supply chain. When you customize an image, the question of who “owns” the signature becomes important.

  1. Docker Hub UI Customization: When you customize an image through the Docker Hub UI, Docker itself acts as the builder for your custom image. This means the resulting customized image inherits signed provenance and attestations directly from Docker’s build infrastructure. If the base DHI receives a security patch, Docker automatically rebuilds and re-signs your custom image, ensuring continuous trust. This is a significant advantage for platform teams creating “golden images.”
  2. Local Dockerfile: When you build a custom image using a multi-stage Dockerfile locally (as we did in our tutorial), you are the builder. Your docker build command produces a new image with a new digest. Consequently, the original DHI signature from Docker does not apply to your final custom image (because the bits have changed and you are the new builder).
    However, the chain of trust is not entirely broken:
    • Base Layers: The underlying DHI layers within your custom image still retain their original Docker attestations.
    • Custom Layer: Your organization is now the “builder” of the new layers.

For production deployments using the multi-stage build, you should integrate Cosign or Docker Content Trust into your CI/CD pipeline to sign your custom images. This closes the loop, allowing you to enforce policies like: “Only run images built by MyOrg, which are based on verified DHI images and have our internal signature.”
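As a rough sketch of what that pipeline step could look like, here is a minimal key-based Cosign flow. The key file names and image reference are placeholders, and many teams prefer keyless (OIDC-based) signing instead:

# One time: generate a signing key pair and store the private key in your CI secret store.
cosign generate-key-pair

# Sign the custom image after the pipeline builds and pushes it.
cosign sign --key cosign.key <your-registry>/dhi-monitoring-test:1.0.0

# At deploy or admission time, verify the signature before the image is allowed to run.
cosign verify --key cosign.pub <your-registry>/dhi-monitoring-test:1.0.0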

Measuring Your ROI: Questions for Your Team

As you conclude your Docker Hardened Images trial, it’s critical to quantify the value for your organization. Reflect on the concrete results from your migration and customization efforts using these questions:

  • Vulnerability Reduction: How significantly did DHI impact your CVE counts? Compare the “before and after” vulnerability reports for your migrated services. What is the estimated security risk reduction?
  • Engineering Effort: What was the actual engineering effort required to migrate an image to DHI? Consider the time saved on patching, vulnerability triage, and security reviews compared to managing traditional base images.
  • Workflow: How well does DHI integrate into your team’s existing development and CI/CD workflows? Do developers find the customization patterns (Golden Image / Builder Pattern) practical and efficient? Is your team likely to adopt this long-term?

  • Compliance & Audit: Has DHI simplified your compliance reporting or audit processes due to its SLSA provenance and FIPS compliance? What is the impact on your regulatory burden?

Conclusion

Thanks for following through to the end! Over this 3-part blog series, you have moved from a simple trial to a fully operational workflow:

  1. Migration: You replaced a standard base image with DHI and saw immediate vulnerability reduction.
  2. Verification: You independently validated signatures, FIPS compliance, and SBOMs.
  3. Customization: You learned to extend DHI using the Hub UI (for auto-patching) or multi-stage builds, while checking for new vulnerabilities introduced by your own dependencies.

The lesson here is that the “Hardened” in Docker Hardened Images isn’t a magic shield but a clean foundation. By building on top of it, you ensure that your team spends time securing your application code, rather than fighting a never-ending battle against thousands of upstream vulnerabilities.


What’s the right Linux desktop UI for you?


If you’ve never used Linux before and are considering it now, there’s one thing you’ll inevitably run into, and that’s choice.

With Linux, you can choose your distribution, kernel, init system, file system type, boot loader, default apps, and your desktop environment.

Even before you dive down that rabbit hole, you’ll discover that there’s a difference between a desktop environment and a window manager.

For some, it can quickly get overwhelming.

That’s why I’m here to help you make sense of the desktop choices.

Are you ready for this?

The difference between a window manager and a desktop environment

The first question you might ask yourself is, “What’s the difference between a desktop environment and a window manager?”

A desktop environment is a suite of tools and applications that provide a graphical user interface and include such features as panels, menus, and file managers. A window manager, on the other hand, takes care of managing the appearance and behavior of application windows.

To make this even more confusing, every desktop has a window manager. For example, GNOME has Mutter and KDE Plasma has KWin.

To make this even more confusing, some window managers are designed to serve as your desktop UI without having to also install a desktop environment.

Ahhhhh! It’s just too much.

Nah. It’s much easier than you think.

Because you can use both a window manager and a desktop environment as your desktop UI, I’m going to address both. Before I continue, I’m not going to talk about every window manager and desktop environment out there, as there are a lot. I’m going to primarily talk about the desktop environments and window managers that I believe are great choices for both new and skilled Linux users.

Ready?

Let’s go.

GNOME

Let’s just start off with a bang, as GNOME is one of what I call the “Big Three” desktops for Linux. GNOME is a “minimalist’s dream come true.” The idea behind GNOME is to get out of your way so you can focus on whatever it is that you need to do.

That’s not to say that GNOME is bereft of features. This is a full-featured desktop environment and includes everything you need to be productive. The biggest difference is that, instead of finding a typical desktop menu from which to launch applications, you open the Application Overview. From within the Application Overview, you can manually locate the application you want to run, search for the application you want to run, or pin applications to the Dash.

The Dash is essentially your panel, only it’s tucked out of the way.

If you don’t like the idea of the favorites bar being tucked away, you can install GNOME extensions, such as Dash To Panel or Dash to Dock. There are tons of GNOME extensions from which to choose that will extend the feature set of your desktop.
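If you prefer the terminal, GNOME also ships a small gnome-extensions command for managing extensions once they are installed. A quick sketch; the Dash to Dock UUID shown here is the commonly published one, so treat it as an assumption and confirm it with the list command on your own system:

# List the extensions GNOME currently knows about.
gnome-extensions list

# Enable or disable an installed extension by its UUID (UUID assumed; verify with the list above).
gnome-extensions enable dash-to-dock@micxgx.gmail.com
gnome-extensions disable dash-to-dock@micxgx.gmail.com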

Who is GNOME for?

GNOME is a great desktop for minimalists who don’t care to have the usual desktop bits and pieces in the way. GNOME is essentially a blank canvas that allows you to do what you want without the usual distractions.

KDE Plasma

KDE Plasma is not only one of the most beautiful desktop environments on the market, but it’s also one of the most configurable. The out-of-the-box experience will look immediately familiar because it has all the usual trappings of a desktop: a panel, start menu, system tray, and clickable icons.

That default layout is very easy to use. A Windows user with zero Linux experience could log into a KDE Plasma desktop and immediately know how to use it.

Of course, the more you use KDE Plasma, the more you might want to customize it. You can do this manually, or you can download global themes. In the upcoming 6.6 release, you’ll be able to customize your desktop and then save your customizations as a global theme.

KDE Plasma is the desktop I usually recommend for those who are new to Linux. There are a few reasons for that: First, it’s easy to use. It’s also very fast and stable. KDE Plasma also happens to be one of those unique desktops that will grow with you as you learn more about Linux. At first, you’ll leave the configuration as the default. The more you learn, the more you’ll find yourself wanting to tweak it so it better suits your workflow. By the time you’re a few months or years into it, you might wind up with a KDE Plasma desktop that is completely unique to you.

KDE Shell.

Who is KDE Plasma for?

I typically say that KDE Plasma is for users of all types — especially those who place an emphasis on aesthetics. If you want the most usable and beautiful desktop available, KDE Plasma is the way to go.

Cinnamon

Cinnamon is the default desktop for Linux Mint, which happens to be one of the most popular Linux distributions available. And if Linux Mint is good enough for my esteemed colleague, Steven J. Vaughan-Nichols, it’s good enough for anyone. Trust me, he knows open source better than most.

Seriously, Linux Mint is the distribution I usually suggest for those who want to dip their toes into the Linux water. One very good reason for this is Cinnamon.

Linux Mint screenshot.

Cinnamon came about when GNOME 2 evolved into GNOME 3, which was a radical departure from what users were accustomed to. A good portion of the GNOME user base didn’t want those changes, so the Linux Mint team created Cinnamon, initially as a fork of the GNOME 3 shell, to preserve the traditional desktop layout.

Cinnamon is pretty much a universal desktop, meaning it has all the bits you’ve grown accustomed to. If you’re a Windows user, you’ll feel right at home on Cinnamon.

Who is Cinnamon for?

Anyone. Seriously. Anyone could make use of Cinnamon, regardless of whether you’ve never used Linux or you’ve used it for decades.

Xfce

Like Cinnamon, Xfce is immediately familiar. The default configuration is a panel, desktop menu, system tray, and clickable icons. And although Cinnamon is very customizable, there are few desktops on the market that can be bent and twisted in the ways that Xfce can.

On top of that, Xfce is blazingly fast. Like Cinnamon, Xfce is considered a lightweight desktop environment, but as far as speed is concerned, I’d have to give the win to Xfce. That’s one of the reasons why so many lightweight Linux distributions default to Xfce.

Xfce screenshot.

If you want to know what separates Cinnamon and Xfce, consider this: Xfce is one of the better desktops for older hardware. If you have an aging machine lying around, you should install a Linux distribution that defaults to Xfce (such as Xubuntu) and watch that computer run like it was brand new. Xfce does not offer 3D acceleration (while Cinnamon does), so you won’t find the same level of speed and smoothness for animations.
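For example, on an Ubuntu-based system you could add the Xfce-based Xubuntu session to an existing install with a couple of commands. This is just a sketch assuming the apt package manager; other distributions use their own package names and tools:

# Refresh package lists, then install the Xubuntu desktop session.
sudo apt update
sudo apt install xubuntu-desktop

# Log out and choose the Xubuntu/Xfce session from the login screen.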

Essentially, Xfce is a highly configurable desktop that doesn’t include all the bells and whistles of a modern UI.

Who is Xfce for?

Xfce is for those who value speed over looks and might have an older computer they want to revive. Xfce is also for those who like to tinker with their desktop layout, but don’t place much value on visual effects.

i3

OK, we’re going to veer away from the typical and venture into a different type of desktop: the tiling window manager. What is a tiling window manager? The easiest way to think of this is that a tiling window manager makes the choice of where an app window is placed for you.

Even better, a tiling window manager does a great job of making the most out of your desktop real estate.

The first app you open will take up the entire screen. The second app you open will automatically split the screen with the first app. The third app you open will split the right side of the screen with the second app you opened. Each app you open after that splits whichever window currently has focus.

It might seem a bit confusing at first, but the good news is that i3 is a tiling window manager that is suitable for those who’ve never used a tiling window manager.

One thing to keep in mind about tiling window managers is that they typically only use the keyboard. You open apps with the keyboard, change the focus of the app you want to use, move tiles around, etc. You could use a tiling window manager and never touch your mouse.

Because of this, tiling window managers are often considered highly efficient, especially for developers and other types who lean heavily into multitasking.

Who is i3 for?

I would say that you need a bit of Linux experience before you jump into the tiling window manager. But if you think you’re ready for it, i3 is the way to go.

No, I don’t believe i3 is a good introduction to Linux, unless you’re absolutely certain you want the most efficient means of working with your apps.

Yes, there are plenty of other DEs and WMs beyond what I’ve listed here. For example, there’s my favorite, COSMIC, which is relatively new. But if I had to recommend a UI to anyone who is either bored with what they have or wants to make the jump to Linux, you can bet one of the above would be my first recommendation.

i3 screenshot.

The post What’s the right Linux desktop UI for you? appeared first on The New Stack.


This is What They Have Always Wanted


I wanted to relax on the couch this morning, but the stories coming out of Minnesota leave me needing to write. I don’t have as much emotional skin in the game now that my Trump supporting mother has passed, but I can’t help but reflect on the stories I was told growing up, and the deep hypocrisy of this moment when it comes to the government being perceived as the enemy, and Americans possessing the right to own guns. I grew up in a perpetual belief that the government was coming for you, and that having a gun was the only thing that stood in their way—making what I am seeing unfold on the streets of Minneapolis very revealing about the fear we are steeped in growing up in this country.

My heart goes out to the families of Renee Good and Alex Pretti. My heart swells and overflows for you. I am thankful for the citizens of Minneapolis for putting their bodies on the line, and going out in the cold to stand up to this. I think Minneapolis people reflect the heart of America in this moment, and I feel a kindred spirit with Alex on where he stood on gun ownership and fascism. I won’t own a gun anymore after my best friend Derek blew his brains out with a gun I gave him, and I was pushed to give the kid back his guns, only never to see him again. I don’t need them. But the double standard being applied to people on the right with guns and people on the left with guns hits my American body right in its solar plexus.

In this moment I think about how much all the men my mother brought into my life wanted this. They knew the government would make a turn towards fascism. They knew that “they” would need their guns. They believed it so hard for so long that they were left with no choice but to make it happen. The men I grew up with are so scared of the world, they knew nothing else. They bought the end times rhetoric so thoroughly that they are determined to bring it to life. I don’t have anyone on the right that possesses a direct line to my heart anymore, so I find myself able to take a very level-headed approach to what I am seeing. I have done a serious amount of work to deprogram myself over the last decade—leaving me on much firmer ground.

It feels like we need this moment to expose the illness at the core of America. The illness that has been here since the beginning. Now feels like the time. Now is the time to confront the sexism and racism at the core of who we are. It is clear that the right wing of this nation wants confrontation. They are wanting cruelty. They want it to be videotaped. They are pleased that they get to live in the live action video game they have been playing over and over for years. I am very proud of my fellow Americans in the street. I am less proud of the politicians on the left in this country. I am very concerned with the strength of our institutions, and with how the technology sector that I work in is complicit in and willfully blind to what is unfolding. I think this is the time for those of us at the center and the left to have an honest discussion around what it means to be American in this century.

I am proud that I live in New York City right now. I am proud that I don’t feel the need to own a gun. I am proud that Los Angeles, CA, Portland, OR, and Minneapolis, MN were chosen as front lines for this administration’s assault. They are just the right cities for proper resistance. This is what the right in this country has always dreamed of and wanted. Let’s deny them. Let’s do it compassionately. Let’s come together across class and race to show them who America really is. I know we have it in us. They may have always wanted this, but we are much bigger. Even though this illness has been with us since the beginning, we have become much bigger than it. We are much more. We are diverse now, and they are scared of that diversity. Let’s take to the streets. Let’s capture images. Let’s tell stories. Let’s take care of each other. Let’s not be afraid. Let’s deny them this moment.




Claude Code in 2026: A Practical End-to-End Workflow Across the SDLC


## 1 The Agentic Shift: Software Engineering with Claude Code

Software engineering is evolving, but not in the way most trend pieces describe. Teams still write code, review pull requests, and run production services. What has changed is how much of the mechanical, repetitive work can be assisted. Claude Code is not a smarter autocomplete. It is a *repo-aware CLI assistant* that can reason over your codebase, plan multi-file changes, implement them, and help you verify results—all through natural language.

The practical challenge for senior developers and architects is no longer “should we use AI,” but “how do we integrate Claude Code safely across the SDLC without losing architectural control.” This section focuses on how Claude Code actually works in day-to-day development, how it fits into real workflows, and how humans stay in charge of design decisions.

Throughout this section, examples use *current, production-ready stacks (ASP.NET Core, Angular, SQL Server) and describe real workflows*, not speculative tooling.


Sometimes you just need extension methods to model your stuff


I recently came across a situation where I hit the limits of the "normal" approach to modeling my entity and had to resort to extension methods.
