Your Dependencies Don’t Care About Your FIPS Configuration


FIPS compliance is a great idea that makes the entire software supply chain safer. But teams adopting FIPS-enabled container images are running into strange errors that can be challenging to debug. What they are learning is that correctness at the base image layer does not guarantee compatibility across the ecosystem. Change is complicated, and changing complicated systems with intricate dependency webs often yields surprises. We are in the early adoption phase of FIPS, and that provides real opportunities to improve how things work. Teams that recognize this will rethink how they build for FIPS and get ahead of the game.

FIPS in practice

FIPS is a U.S. government standard for cryptography. In simple terms, if you say a system is “FIPS compliant,” that means the cryptographic operations like TLS, hashing, signatures, and random number generation are performed using a specific, validated crypto module in an approved mode. That sounds straightforward until you remember that modern software is built not as one compiled program, but as a web of dependencies that carry their own baggage and quirks.

The FIPS crypto error that caught us off guard

We got a ticket recently for a Rails application in a FIPS-enabled container image. On the surface, everything looked right. Ruby was built to use OpenSSL 3.x with the FIPS provider. The OpenSSL configuration was correct. FIPS mode was active.

However, the application started throwing cryptography errors from the pg Rubygem, the Postgres driver. Even more confusing, a minimal reproducer with a basic Ruby app and a stock Postgres did not hit the error; a connection was established successfully. The issue only manifested when using ActiveRecord.

The difference came down to code paths. A basic Ruby script using the pg gem directly exercises a simpler set of operations. ActiveRecord triggers additional functionality that exercises different parts of libpq. The non-FIPS crypto was there all along, but only certain operations exposed it.
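
To make the code-path difference concrete, here is a minimal sketch of the two paths, assuming a reachable Postgres and a placeholder DATABASE_URL; it is an illustration, not the exact reproducer from the ticket.

# Path 1: pg used directly. This is the simple path that connected
# and queried cleanly even in FIPS mode.
require "pg"
conn = PG.connect(ENV.fetch("DATABASE_URL")) # placeholder connection string
puts conn.exec("SELECT 1").getvalue(0, 0)

# Path 2: the same database through ActiveRecord. The additional libpq
# functionality it exercises is what surfaced the non-FIPS crypto error.
require "active_record"
ActiveRecord::Base.establish_connection(ENV.fetch("DATABASE_URL"))
ActiveRecord::Base.connection.execute("SELECT 1")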

Your container image can be carefully configured for FIPS, and your application can still end up using non-FIPS crypto because a dependency brought its own crypto along for the ride. In this case, the culprit was a precompiled native artifact in the database stack. When you install pg, Bundler may choose to download a prebuilt binary gem that bundles its own libpq.

Unfortunately, those prebuilt binaries are usually built with assumptions that cause problems. They may be linked against a different OpenSSL than the one in your image. They may contain statically embedded crypto code. They may load crypto at runtime in ways that are not obvious.

This is the core challenge with FIPS adoption. Your base image can do everything right, but prebuilt dependencies can silently bypass your carefully configured crypto boundary.

Why we cannot just fix it in the base image yet

The practical fix for the Ruby case is adding this to your Gemfile:

gem "pg", "~> 1.1", force_ruby_platform: true

You also need to install libpq-dev to allow compiling from source. This forces Bundler to build the gem from source on your system instead of using a prebuilt binary. When you compile from source inside your controlled build environment, the resulting native extension is linked against the OpenSSL that is actually in your FIPS image.
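
In a Debian- or Ubuntu-based build stage, that preparation looks roughly like this; the package names are the stock Debian ones and may differ on other distributions.

# Build stage only: C toolchain plus Postgres client headers
apt-get update && apt-get install -y build-essential libpq-dev
bundle install # pg now compiles against the libpq and OpenSSL actually in the image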

Bundler also supports an environment/config knob for the same idea called BUNDLE_FORCE_RUBY_PLATFORM. The exact mechanism matters less than the underlying strategy of avoiding prebuilt native artifacts when you are trying to enforce a crypto boundary.
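
For reference, the same behavior can be turned on without editing the Gemfile; both of the following are standard Bundler mechanisms:

bundle config set force_ruby_platform true # persisted Bundler configuration
BUNDLE_FORCE_RUBY_PLATFORM=true bundle install # one-off, via the environment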

You might reasonably ask why we do not just add BUNDLE_FORCE_RUBY_PLATFORM to the Ruby FIPS image by default. We discussed this internally, and the answer illustrates why FIPS complexity cascades.

Setting that flag globally is not enough on its own. You also need a C compiler and the relevant libraries and headers in the build stage. And not every gem needs this treatment. If you flip the switch globally, you end up compiling every native gem from source, which drags in additional headers and system libraries that you now need to provide. The “simple fix” creates a new dependency management problem.

Teams adopt FIPS images to satisfy compliance. Then they have to add back build complexity to make the crypto boundary real and verify that every dependency respects it. This is not a flaw in FIPS or in the tooling. It is an inherent consequence of retrofitting a strict cryptographic boundary onto an ecosystem built around convenience and precompiled artifacts.

The patterns we are documenting today will become the defaults tomorrow. The tooling will catch up. Prebuilt packages will get better. Build systems will learn to handle the edge cases. But right now, teams need to understand where the pitfalls are.

What to do if you are starting a FIPS journey

You do not need to become a crypto expert to avoid the obvious traps. You only need a checklist mindset. The teams working through these problems now are building real expertise that will be valuable as FIPS requirements expand across industries.

  • Treat prebuilt native dependencies as suspect. If a dependency includes compiled code, assume it might carry its own crypto linkage until you verify otherwise. You can use ldd on Linux to inspect dynamic linking and confirm that binaries link against your system OpenSSL rather than a bundled alternative (see the sketch after this list).
  • Use a multi-stage build and compile where it matters. Keep your runtime image slim, but allow a builder stage with the compiler and headers needed to compile the few native pieces that must align with your FIPS OpenSSL.
  • Test the real execution path, not just “it starts.” For Rails, that means running a query, not only booting the app or opening a connection. The failures we saw appeared when using the ORM, not on first connection.
  • Budget for supply-chain debugging. The hard part is not turning on FIPS mode. The hard part is making sure all the moving parts actually respect it. Expect to spend time tracing crypto usage through your dependency graph.
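
As an illustration of the first item, here is one way to inspect the pg gem's compiled extension (named pg_ext.so); the lookup via gem env gemdir is generic, and the grep just narrows the output to crypto-related libraries.

# Locate the compiled extension and list the SSL/crypto/libpq libraries it links against
ldd "$(find "$(gem env gemdir)" -name 'pg_ext.so' | head -1)" | grep -E -i 'ssl|crypto|pq'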

Why this matters beyond government contracts

FIPS compliance has traditionally been seen as a checkbox for federal sales. That is changing. As supply chain security becomes a board-level concern across industries, validated cryptography is moving from “nice to have” to “expected.” The skills teams build solving FIPS problems today translate directly to broader supply chain security challenges.

Think about what you learn when you debug a FIPS failure. You learn to trace crypto usage through your dependency graph, to question prebuilt artifacts, to verify that your security boundaries are actually enforced at runtime. Those skills matter whether you are chasing a FedRAMP certification or just trying to answer your CISO’s questions about software provenance.

The opportunity in the complexity

FIPS is not “just a switch” you flip in a base image. View FIPS instead as a new layer of complexity that you might have to debug across your dependency graph. That can sound like bad news, but switch the framing and it becomes an opportunity to get ahead of where the industry is going.

The ecosystem will adapt and the tooling will improve. The teams investing in understanding these problems now will be the ones who can move fastest when FIPS or something like it becomes table stakes.

If you are planning a FIPS rollout, start by controlling the prebuilt native artifacts that quietly bypass the crypto module you thought you were using. Recognize that every problem you solve is building institutional knowledge that compounds over time. This is not just compliance work. It is an investment in your team’s security engineering capability.


Drowning in AI slop, cURL ends bug bounties


Enough is enough. Daniel Stenberg, lead developer and founder of cURL, the popular open source file transfer tool, is closing down cURL’s bug bounty program at the end of January.

Why? Because cURL’s maintainers are being buried in AI slop. In an interview conducted over Mastodon, Stenberg told me, “It is our attempt to remove the incentives for submitting made-up lies. The submission quality has plummeted; not only are lots of the submissions plain slop, but the ones that aren’t obviously AI also seem to a high degree be worse (possibly because they, too, are AI but just hidden better). We need to do something to prevent us from drowning.”

The impact of AI slop on open source security

He’s not the only one who’s sick and tired of AI slop bug reports. Viktor Petersson, founder of sbomify and co-founder of Screenly, who was the first to spread the news of cURL’s change, wrote in a LinkedIn post, “We at Screenly are probably only seeing a fraction of the amount that curl gets, but the amount of AI slop that the bug bounty [attracts] is very taxing on human reviewers.” Amen.

Stenberg continued, “The plan is to close it down [at the] end of January, so there will be more messaging about it from the project probably next week. It also times nicely with my talk about open source security and AI on FOSDEM that weekend.”

This move comes as no surprise. Stenberg has been the most vocal opponent of the indiscriminate use of AI bug reports for some time now. In May 2025, he complained about a flood of “AI slop” bug reports from the bug bounty site HackerOne, saying on LinkedIn, “We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time. We still have not seen a single valid security report done with AI help.”

Distinguishing between AI slop and effective AI-assisted bug finding

That’s not to say, however, that Stenberg rejects using AI to find bugs. He doesn’t. In September 2025, for example, he praised Joshua Rogers on Mastodon for sending “us a *massive* list of potential issues in #curl that he found using his set of AI-assisted tools. Code analyzer style nits all over. Mostly smaller bugs, but still bugs, and there could be one or two actual security flaws in there. Actually, truly awesome findings.”

You see, Stenberg’s problem isn’t with AI per se; it’s with lazy people using AI thoughtlessly to chase a bounty check or a reputation as a security researcher.

Mind you, if you do find an honest-to-goodness bug, with or without AI help, the cURL maintainers still want to know about it. But, if you do use AI, you must follow cURL’s AI usage rules. That is not optional. If you don’t obey them, you won’t be contributing to cURL. Considering how buried the cURL maintainers are by AI slop, it’s not like you can blame them for taking such a strict stance. I would too in their shoes.

The post Drowning in AI slop, cURL ends bug bounties appeared first on The New Stack.


'Stealing Isn't Innovation': Hundreds of Creatives Warn Against an AI Slop Future

Around 800 artists, writers, actors, and musicians signed on to a new campaign against what they call "theft at a grand scale" by AI companies. From a report: The signatories of the campaign -- called "Stealing Isn't Innovation" -- include authors George Saunders and Jodi Picoult, actors Cate Blanchett and Scarlett Johansson, and musicians like the band R.E.M., Billy Corgan, and The Roots. "Driven by fierce competition for leadership in the new GenAI technology, profit-hungry technology companies, including those among the richest in the world as well as private equity-backed ventures, have copied a massive amount of creative content online without authorization or payment to those who created it," a press release reads. "This illegal intellectual property grab fosters an information ecosystem dominated by misinformation, deepfakes, and a vapid artificial avalanche of low-quality materials ['AI slop'], risking AI model collapse and directly threatening America's AI superiority and international competitiveness."

Read more of this story at Slashdot.


Under Armour Ransomware Attack Exposes 72M Email Addresses


Many records also contained additional personal information such as names, dates of birth, genders, geographic locations, and purchase information.

The post Under Armour Ransomware Attack Exposes 72M Email Addresses appeared first on TechRepublic.


Adobe is developing ‘IP-safe’ gen AI models for the entertainment industry


As Hollywood continues to embrace generative AI, Adobe is taking steps to make its Firefly suite of creative tools the go-to for studios' entertainment production needs.

Timed to this year's Sundance Film Festival, Adobe has announced that it is working with a number of studios, directors, and talent agencies to develop "private, IP-safe" Firefly Foundry gen AI "omni-models." According to the company, Firefly Foundry models are meant to "accelerate creativity without eroding ownership or creative intent" while generating different kinds of assets like audio-aware videos and 3D / vector graphics that can be seamlessly integrated into workflows …

Read the full story at The Verge.


Build an agent into any app with the GitHub Copilot SDK


Building agentic workflows from scratch is hard. 

You have to manage context across turns, orchestrate tools and commands, route between models, integrate MCP servers, and think through permissions, safety boundaries, and failure modes. Even before you reach your actual product logic, you’ve already built a small platform. 

GitHub Copilot SDK (now in technical preview) removes that burden. It allows you to take the same Copilot agentic core that powers GitHub Copilot CLI and embed it in any application.  

This gives you programmatic access to that production-tested execution loop. Instead of wiring your own planner, tool loop, and runtime, you can embed the agentic loop directly into your application and build on top of it for any use case.

You also get Copilot CLI’s support for multiple AI models, custom tool definitions, MCP server integration, GitHub authentication, and real-time streaming.

What’s new in GitHub Copilot CLI  

Copilot CLI lets you plan projects or features, modify files, run commands, use custom agents, delegate tasks to the cloud, and more, all without leaving your terminal. 

Since we first introduced it, we’ve been expanding Copilot’s agentic workflows so it: 

  • Works the way you do with persistent memory, infinite sessions, and intelligent compaction. 
  • Helps you think with explore, plan, and review workflows where you can choose which model you want at each step. 
  • Executes on your behalf with custom agents, agent skills, full MCP support, and async task delegation. 

How does the SDK build on top of Copilot CLI? 

The SDK takes the agentic power of Copilot CLI (the planning, tool use, and multi-turn execution loop) and makes it available in your favorite programming language. This makes it possible to integrate Copilot into any environment. You can build GUIs that use AI workflows, create personal tools that level up your productivity, or run custom internal agents in your enterprise workflows.  

Our teams have already used it to build things like: 

  • YouTube chapter generators 
  • Custom GUIs for their agents 
  • Speech-to-command workflows to run apps on their desktops 
  • Games where you can compete with AI 
  • Summarizing tools 
  • And more! 

Think of the Copilot SDK as an execution platform that lets you reuse the same agentic loop behind the Copilot CLI, while GitHub handles authentication, model management, MCP servers, custom agents, chat sessions, and streaming. That means you are in control of what gets built on top of those building blocks.

Start building today! Visit the SDK repository to get started.

The post Build an agent into any app with the GitHub Copilot SDK appeared first on The GitHub Blog.
