Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How Mozilla builds now

1 Share
Headshot of Peter Rojas, Senior Vice President of New Products at Mozilla, wearing a gray sweater and smiling against a white background.

Mozilla has always believed that technology should empower people.

That belief shaped the early web, when browsers were still new and the idea of an open internet felt fragile. Today, the technology is more powerful, more complex, and more opaque, but the responsibility is the same. The question isn’t whether technology can do more. It’s whether it helps people feel capable, informed, and in control.

As we build new products at Mozilla today, that question is where we start.

I joined Mozilla to lead New Products almost one year ago this week because this is one of the few places still willing to take that responsibility seriously. Not just in what we ship, but in how we decide what’s worth building in the first place — especially at a moment when AI, platforms, and business models are all shifting at once.

Our mission — and mine — is to find the next set of opportunities for Mozilla and help shape the internet that all of us want to see. 

Writing up to users

One of Mozilla’s longest-held principles is respect for the people who use our products. We assume users are thoughtful. We accept skepticism as a given (it forces product development rigor — more on that later). And we design accordingly.

That respect shows up not just in how we communicate, but in the kinds of systems we choose to build and the role we expect people to play in shaping them.

You can see this in the way we’re approaching New Products work across Mozilla today: Our current portfolio includes tools like Solo, which makes it easy for anyone to own their presence on the web; Tabstack, which helps developers enable agentic experiences; 0DIN, which pools the collective expertise of over 1400 researchers from around the globe to help identify and surface AI vulnerabilities; and an enterprise version of Firefox that treats the browser as critical infrastructure for modern work, not a data collection surface.

None of this is about making technology simpler than it is. It’s about making it legible. When people understand the systems they’re using, they can decide whether those systems are actually serving them.

Experimentation that respects people’s time

Mozilla experiments. A lot. But we try to do it without treating talent and attention as unlimited resources. Building products that users love isn’t easy and requires us to embrace the uncertainty and ambiguity that come with zero-to-one exploration.

Every experiment should answer a real question. It should be bounded. And it should be clear to the people interacting with it what’s being tested and why. That discipline matters, especially now. When everything can be prototyped quickly, restraint becomes part of the craft.

Fewer bets, made deliberately. A willingness to stop when something isn’t working. And an understanding that experimentation doesn’t have to feel chaotic to be effective.

Creating space for more kinds of builders

Mozilla has always believed that who builds is just as important as what gets built. But let’s be honest: The current tech landscape often excludes a lot of brilliant people, simply because the system is focused on only rewarding certain kinds of outcomes. 

We want to unlock those meaningful ideas by making experimentation more practical for people with real-world perspectives. We’re focused on lowering the barriers to building — because we believe that making tech more inclusive isn’t just a nice-to-have, it’s how you build better products.

A practical expression of this approach

One expression of this philosophy is a new initiative we’ll be sharing more about soon: Mozilla Pioneers.

Pioneers isn’t an accelerator, and it isn’t a traditional residency. It’s a structured, time-limited way for experienced builders to work with Mozilla on early ideas without requiring them to put the rest of their lives on hold.

The structure is intentional. Pioneers is paid. It’s flexible. It’s hands-on. And it’s bounded. Participants work closely with Mozilla engineers, designers, and product leaders to explore ideas that could become real Mozilla products — or could simply clarify what shouldn’t be built.

Some of that work will move forward. Some won’t. Both outcomes are valuable. Pioneers exists because we believe that good ideas don’t only come from founders or full-time employees, and that meaningful contribution deserves real support.

Applications open Jan. 26. For anyone interested (and I hope that’s a lot of you), please follow us, share and apply. In the meantime, know that what’s ahead is just one more example of how we’re trying to build with intention.

Looking ahead

Mozilla doesn’t pretend to have all the answers. But we’re clear about our commitments.

As we build new products, programs, and systems, we’re choosing clarity over speed, boundaries over ambiguity, and trust that compounds over time instead of short-term gains.

The future of the internet won’t be shaped only by what technology can do — but by what its builders choose to prioritize. Mozilla intends to keep choosing people.

The post How Mozilla builds now appeared first on The Mozilla Blog.

Read the whole story
alvinashcraft
14 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

Why enterprise AI breaks without metrics discipline


As AI gains popularity in the workplace across a growing range of use cases, it promises long-term benefits for organizations, including faster product iterations, operational efficiency, lower customer support costs, faster data research and a boost in overall employee productivity.

Technology companies are leading the pack in AI adoption, followed by banking, e-commerce, healthcare and insurance, to name a few. These companies are testing various proofs of concept (POCs) to understand how AI can support their business and productivity use cases.

While the testing of agentic AI systems is moving swiftly, many companies still struggle to see a consistent, trustworthy impact that justifies the investment. The problem isn’t primarily model performance, token budgets, or the infrastructure to scale the system. It is the more fundamental issue of how enterprise-level data definitions are set up to train these systems.

When data definitions are inconsistent across teams — for instance, teams in different geographies defining net revenue, active users, or performance marketing expense differently — AI systems inherit that ambiguity and become unreliable and ineffective. Until this foundational problem is addressed, AI systems won’t gain adoption, because users don’t trust them.
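To make the failure mode concrete, here is a minimal sketch, with hypothetical numbers and definitions, of two teams computing "net revenue" differently. An AI system trained on both sources inherits the contradiction:

```python
# Illustrative only: two regional teams both report "net revenue",
# but embed different business rules in their calculations.

def net_revenue_emea(gross: float, refunds: float, discounts: float) -> float:
    # Hypothetical: EMEA treats discounts as a marketing cost, not a deduction.
    return gross - refunds

def net_revenue_apac(gross: float, refunds: float, discounts: float) -> float:
    # Hypothetical: APAC deducts both refunds and discounts.
    return gross - refunds - discounts

gross, refunds, discounts = 1_000_000.0, 50_000.0, 30_000.0
print(net_revenue_emea(gross, refunds, discounts))  # 950000.0
print(net_revenue_apac(gross, refunds, discounts))  # 920000.0
```

Two numbers, one question. A model trained on both pipelines has no way to know which answer is "the" net revenue.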

What is an intelligent metrics layer?

At its core, an intelligent metrics layer is a foundational semantic system that standardizes the way metrics and their associated dimensions are defined, computed, aggregated, sliced, governed, and interpreted by humans and machines together. It’s a single source of truth that aligns leadership, analysts, business intelligence (BI) tools, and AI systems around consistent definitions and computation logic.

It differs fundamentally from traditional data models in that the business context, computation, ownership, data governance, and validation checks are embedded into the metric itself. That makes it much faster for AI systems to train on and interpret, and much simpler for analysts and BI systems to work from a single source of truth. This level of semantic clarity is what makes metrics durable and contributes to AI readiness.

Why AI adoption breaks without it

Garbage in, garbage out. AI systems love clean data abstraction. When metrics lack consistent definitions across the company, the model may return conflicting answers to the same question.

For teams like finance, which depend on highly accurate reporting data, accuracy and consistency are of prime importance. Financial reporting becomes brittle when metric definitions and underlying data change without traceability. Inconsistent reporting across regions or currencies can create regulatory risk, ending in hefty fines, not to mention a tarnished brand image and reduced shareholder confidence.

AI falls short when reasoning about a business that can’t clearly define its core data abstractions.

Intelligent metrics layer architecture

An intelligent metric layer typically consists of these interconnected core components:

  • Semantic definitions: Standardized business definitions independent of underlying data sources.
  • Computation and logic layer: Git-version-controlled logic for computing each metric.
  • Governance and ownership: Clearly defined team accountability for metrics and data refresh service-level agreements (SLAs), central policy defining data retention and deprecation.
  • Lineage and metadata: Visibility into upstream data sources and downstream metric usage in reporting.
  • AI enablement: Structured metadata that AI systems can train on and reference to output consistent answers.

Together, these interconnected components transform unreliable, ad hoc outputs into vetted, trustworthy metrics.
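As a rough illustration of what "embedded into the metric itself" can mean in practice, here is a minimal Python sketch. The field names are hypothetical; real semantic layers define their own schemas.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch of a metric record in an intelligent metrics layer.
# All names and fields are invented for this example.

@dataclass
class Metric:
    name: str
    definition: str                   # semantic definition in business language
    compute: Callable[[dict], float]  # version-controlled computation logic
    owner: str                        # team accountable for the metric
    refresh_sla_hours: int            # data refresh SLA
    upstream_sources: list = field(default_factory=list)  # lineage

net_revenue = Metric(
    name="net_revenue",
    definition="Gross revenue minus refunds and discounts, in USD.",
    compute=lambda row: row["gross"] - row["refunds"] - row["discounts"],
    owner="finance-data",
    refresh_sla_hours=24,
    upstream_sources=["warehouse.sales.orders"],
)

row = {"gross": 1_000_000.0, "refunds": 50_000.0, "discounts": 30_000.0}
print(net_revenue.compute(row))  # 920000.0
```

Because the definition, owner, SLA and lineage travel with the computation, any analyst, BI tool, or AI system querying this record gets one unambiguous answer.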

Impact on the enterprise

Organizations that invest in an intelligent metrics layer as a foundation for AI systems are bound to see tangible outcomes relatively quickly. These include faster turnaround on report generation, broader analytics adoption, faster A/B testing and product iterations, reliable and audit-ready regulatory reporting, fewer escalations to leadership over inconsistent numbers, and greater trust in AI-generated interpretations, which in turn drive deeper dives into the data.

With a robust semantic layer, metrics become durable assets as compared to fragile virtual datasets and queries embedded in dashboards. Agentic AI systems are contained within well-defined semantic boundaries, mitigating the risk of misinformation while bolstering organizational trust in the data.

Making metrics AI-ready

As analytics evolves beyond reactive dashboards, conversational AI agents depend on metrics that machines can interpret. That goes beyond formulas to include clear context, structured definitions and constraints, which together define the relationships between metrics and dimensions, their contextual meaning and guardrails around appropriate use.

When AI systems have clear guidelines for using metrics in context, they can be applied far more effectively. Intelligent metrics systems are therefore a prerequisite for agentic AI, conversational analytics and AI decision systems, and are vital for overall semantic alignment.

Implementation roadmap

Building an intelligent metric system doesn’t require re-engineering the data warehouse setup. Instead, start with a focused approach:

  • Build a data dictionary of all existing metrics and categorize them from most to least business-critical.
  • Standardize metric definitions, supported by corresponding upstream data models and define ownership for those.
  • Define vetted computational logic, version-controlled and tracked through CI/CD.
  • Add data governance, SLA data refresh and data quality checks.
  • Integrate metrics with BI and agentic AI tools incrementally.

The fundamental goal is consistent progress on the core foundation, not perfection at the first attempt.

Key takeaways

Enterprise AI adoption isn’t moving as rapidly as expected, not because companies lack access to the latest models, but because metric definitions are inconsistent. An intelligent metrics layer provides the semantic data foundation that AI systems need to deliver reliable, trustworthy information.

As organizations move from dashboards to conversational analytics and automated decision systems, intelligent metrics layers must serve as foundational infrastructure. That investment unlocks AI’s real value, not just through the market’s best-performing models but through a clear, consistent and shared understanding of the business’s key performance indicators.

The post Why enterprise AI breaks without metrics discipline appeared first on The New Stack.


Your Dependencies Don’t Care About Your FIPS Configuration


FIPS compliance is a great idea that makes the entire software supply chain safer. But teams adopting FIPS-enabled container images are running into strange errors that can be challenging to debug. What they are learning is that correctness at the base image layer does not guarantee compatibility across the ecosystem. Change is complicated, and changing complicated systems with intricate dependency webs often yields surprises. We are in the early adaptation phase of FIPS, and that actually provides interesting opportunities to optimize how things work. Teams that recognize this will rethink how they build FIPS and get ahead of the game.

FIPS in practice

FIPS is a U.S. government standard for cryptography. In simple terms, if you say a system is “FIPS compliant,” that means the cryptographic operations like TLS, hashing, signatures, and random number generation are performed using a specific, validated crypto module in an approved mode. That sounds straightforward until you remember that modern software is built not as one compiled program, but as a web of dependencies that carry their own baggage and quirks.

The FIPS crypto error that caught us off guard

We got a ticket recently for a Rails application in a FIPS-enabled container image. On the surface, everything looked right. Ruby was built to use OpenSSL 3.x with the FIPS provider. The OpenSSL configuration was correct. FIPS mode was active.

However, the application started throwing cryptography module errors from the Postgres Ruby gem (pg). Even more confusing, a minimal reproducer with a basic Ruby app and a stock Postgres did not reproduce the error; a connection was established successfully. The issue only manifested when using ActiveRecord.

The difference came down to code paths. A basic Ruby script using the pg gem directly exercises a simpler set of operations. ActiveRecord triggers additional functionality that exercises different parts of libpq. The non-FIPS crypto was there all along, but only certain operations exposed it.

Your container image can be carefully configured for FIPS, and your application can still end up using non-FIPS crypto because a dependency brought its own crypto along for the ride. In this case, the culprit was a precompiled native artifact associated with the database stack. When you install the pg gem, Bundler may choose to download a prebuilt binary dependency such as libpq.

Unfortunately those prebuilt binaries are usually built with assumptions that cause problems. They may be linked against a different OpenSSL than the one in your image. They may contain statically embedded crypto code. They may load crypto at runtime in a way that is not obvious.

This is the core challenge with FIPS adoption. Your base image can do everything right, but prebuilt dependencies can silently bypass your carefully configured crypto boundary.

Why we cannot just fix it in the base image yet

The practical fix for the Ruby case was adding this to your Gemfile:

gem "pg", "~> 1.1", force_ruby_platform: true

You also need to install libpq-dev to allow compiling from source. This forces Bundler to build the gem from source on your system instead of using a prebuilt binary. When you compile from source inside your controlled build environment, the resulting native extension is linked against the OpenSSL that is actually in your FIPS image.

Bundler also supports an environment/config knob for the same idea called BUNDLE_FORCE_RUBY_PLATFORM. The exact mechanism matters less than the underlying strategy of avoiding prebuilt native artifacts when you are trying to enforce a crypto boundary.

You might reasonably ask why we do not just add BUNDLE_FORCE_RUBY_PLATFORM to the Ruby FIPS image by default. We discussed this internally, and the answer illustrates why FIPS complexity cascades.

Setting that flag globally is not enough on its own. You also need a C compiler and the relevant libraries and headers in the build stage. And not every gem needs this treatment. If you flip the switch globally, you end up compiling every native gem from source, which drags in additional headers and system libraries that you now need to provide. The “simple fix” creates a new dependency management problem.

Teams adopt FIPS images to satisfy compliance. Then they have to add back build complexity to make the crypto boundary real and verify that every dependency respects it. This is not a flaw in FIPS or in the tooling. It is an inherent consequence of retrofitting a strict cryptographic boundary onto an ecosystem built around convenience and precompiled artifacts.

The patterns we are documenting today will become the defaults tomorrow. The tooling will catch up. Prebuilt packages will get better. Build systems will learn to handle the edge cases. But right now, teams need to understand where the pitfalls are.

What to do if you are starting a FIPS journey

You do not need to become a crypto expert to avoid the obvious traps. You only need a checklist mindset. The teams working through these problems now are building real expertise that will be valuable as FIPS requirements expand across industries.

  • Treat prebuilt native dependencies as suspect. If a dependency includes compiled code, assume it might carry its own crypto linkage until you verify otherwise. You can use ldd on Linux to inspect dynamic linking and confirm that binaries link against your system OpenSSL rather than a bundled alternative.
  • Use a multi-stage build and compile where it matters. Keep your runtime image slim, but allow a builder stage with the compiler and headers needed to compile the few native pieces that must align with your FIPS OpenSSL.
  • Test the real execution path, not just “it starts.” For Rails, that means running a query, not only booting the app or opening a connection. The failures we saw appeared when using the ORM, not on first connection.
  • Budget for supply-chain debugging. The hard part is not turning on FIPS mode. The hard part is making sure all the moving parts actually respect it. Expect to spend time tracing crypto usage through your dependency graph.
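The ldd check in the first bullet can be scripted. Below is a minimal Python sketch; the sample output, paths, and system library prefixes are invented for illustration and will vary by distro:

```python
import re

# Sketch: given `ldd` output for a native extension, flag any libssl/libcrypto
# line that does not resolve to a system library path. Adjust SYSTEM_LIB_DIRS
# for your distro; the sample below is fabricated for illustration.

SYSTEM_LIB_DIRS = ("/usr/lib", "/lib")

def foreign_crypto_libs(ldd_output: str) -> list[str]:
    suspects = []
    for line in ldd_output.splitlines():
        # Match lines like "libssl.so.3 => /path/to/libssl.so.3 (0x...)"
        m = re.search(r"(libssl|libcrypto)\S*\s*=>\s*(\S+)", line)
        if m and not m.group(2).startswith(SYSTEM_LIB_DIRS):
            suspects.append(line.strip())
    return suspects

sample = (
    "\tlibpq.so.5 => /app/vendor/libpq.so.5 (0x00007f00)\n"
    "\tlibssl.so.3 => /app/vendor/libssl.so.3 (0x00007f01)\n"
    "\tlibcrypto.so.3 => /usr/lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f02)\n"
)
print(foreign_crypto_libs(sample))  # flags only the bundled libssl line
```

In a real pipeline you would feed this the output of `ldd` run against each compiled gem extension, and fail the build when any suspect appears.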

Why this matters beyond government contracts

FIPS compliance has traditionally been seen as a checkbox for federal sales. That is changing. As supply chain security becomes a board-level concern across industries, validated cryptography is moving from “nice to have” to “expected.” The skills teams build solving FIPS problems today translate directly to broader supply chain security challenges.

Think about what you learn when you debug a FIPS failure. You learn to trace crypto usage through your dependency graph, to question prebuilt artifacts, to verify that your security boundaries are actually enforced at runtime. Those skills matter whether you are chasing a FedRAMP certification or just trying to answer your CISO’s questions about software provenance.

The opportunity in the complexity

FIPS is not “just a switch” you flip in a base image. View FIPS instead as a new layer of complexity that you might have to debug across your dependency graph. That can sound like bad news, but switch the framing and it becomes an opportunity to get ahead of where the industry is going.

The ecosystem will adapt and the tooling will improve. The teams investing in understanding these problems now will be the ones who can move fastest when FIPS or something like it becomes table stakes.

If you are planning a FIPS rollout, start by controlling the prebuilt native artifacts that quietly bypass the crypto module you thought you were using. Recognize that every problem you solve is building institutional knowledge that compounds over time. This is not just compliance work. It is an investment in your team’s security engineering capability.


Drowning in AI slop, cURL ends bug bounties


Enough is enough. Daniel Stenberg, lead developer and founder of cURL, the popular open source file transfer tool, is closing down cURL’s bug bounty program at the end of January.

Why? Because cURL’s maintainers are being buried in AI slop. In an interview conducted over Mastodon, Stenberg told me, “It is our attempt to remove the incentives for submitting made-up lies. The submission quality has plummeted; not only are lots of the submissions plain slop, but the ones that aren’t obviously AI also seem to a high degree be worse (possibly because they, too, are AI but just hidden better). We need to do something to prevent us from drowning.”

The impact of AI slop on open source security

He’s not the only one who’s sick and tired of AI slop bug reports. Viktor Petersson, founder of sbomify and co-founder of Screenly, who was the first to spread the news of cURL’s change, wrote in a LinkedIn post, “We at Screenly are probably only seeing a fraction of the amount that curl gets, but the amount of AI slop that the bug bounty [attracts] is very taxing on human reviewers.” Amen.

Stenberg continued, “The plan is to close it down [at the] end of January, so there will be more messaging about it from the project probably next week. It also times nicely with my talk about open source security and AI on FOSDEM that weekend.”

This move comes as no surprise. Stenberg has been the most vocal opponent of indiscriminate AI bug reports for some time now. In May 2025, he complained about a flood of “AI slop” bug reports from the bug bounty site HackerOne. He said, on LinkedIn, “We now ban every reporter INSTANTLY who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time. We still have not seen a single valid security report done with AI help.”

Distinguishing between AI slop and effective AI-assisted bug finding

That’s not to say, however, that Stenberg rejects using AI to find bugs. He doesn’t. In September 2025, for example, he praised Joshua Rogers on Mastodon for sending “us a *massive* list of potential issues in #curl that he found using his set of AI-assisted tools. Code analyzer style nits all over. Mostly smaller bugs, but still bugs, and there could be one or two actual security flaws in there. Actually, truly awesome findings.”

You see, Stenberg’s problem isn’t with AI per se; it’s with lazy people using AI thoughtlessly to chase a bounty check or a reputation as a security researcher.

Mind you, if you find an honest-to-goodness bug, with or without AI help, the cURL maintainers still want to know about it. But if you do use AI, you must follow cURL’s AI usage rules. That is not optional. If you don’t obey them, you won’t be contributing to cURL. Considering how buried the cURL maintainers are in AI slop, you can’t blame them for taking such a strict stance. I would too in their shoes.

The post Drowning in AI slop, cURL ends bug bounties appeared first on The New Stack.


'Stealing Isn't Innovation': Hundreds of Creatives Warn Against an AI Slop Future

Around 800 artists, writers, actors, and musicians signed on to a new campaign against what they call "theft at a grand scale" by AI companies. From a report: The signatories of the campaign -- called "Stealing Isn't Innovation" -- include authors George Saunders and Jodi Picoult, actors Cate Blanchett and Scarlett Johansson, and musicians like the band R.E.M., Billy Corgan, and The Roots. "Driven by fierce competition for leadership in the new GenAI technology, profit-hungry technology companies, including those among the richest in the world as well as private equity-backed ventures, have copied a massive amount of creative content online without authorization or payment to those who created it," a press release reads. "This illegal intellectual property grab fosters an information ecosystem dominated by misinformation, deepfakes, and a vapid artificial avalanche of low-quality materials ['AI slop'], risking AI model collapse and directly threatening America's AI superiority and international competitiveness."

Read more of this story at Slashdot.


Under Armour Ransomware Attack Exposes 72M Email Addresses


Many records also contained additional personal information such as names, dates of birth, genders, geographic locations, and purchase information.

The post Under Armour Ransomware Attack Exposes 72M Email Addresses appeared first on TechRepublic.
