
GitHub’s former CEO launches a developer platform for the age of agentic coding

The Entire logo.

When GitHub CEO Thomas Dohmke left the Microsoft-owned company in August of 2025, he said he did so to return to his startup roots. Now, after a few months of development, he is launching Entire, a new open-source developer platform that reimagines what collaboration between developers and agents could look like if built from scratch.

Entire is backed by a $60 million seed round, the largest in developer tools history, led by Felicis, with participation from Madrona, Basis Set, and M12, Microsoft’s venture arm.

That’s an outsized round by any standard, but Dohmke’s reputation surely helped: he led GitHub’s evolution from a code repository to an AI-centric platform built around Copilot. Also, given the rapid pace of software development, a significant investment is required to keep up with the market’s evolving needs.

In an interview with The New Stack, Dohmke explained his vision behind Entire. “We’re moving away from engineering as a craft, where you build code manually and in files and folders — which is effectively what a Git repository was for the last 15 years, right? If you look at any given repo, it’s literally a file browser, and you can click through the files in this search and all that,” Dohmke says. “We are moving from that to a much higher abstraction, which is specifications — reasoning, session logs, intent, outcomes. And we believe that requires a very different developer platform than what GitHub is today.”

Dohmke notes that GitHub was built for human-to-human interaction. The entire system, from issues to pull requests to deployment, was not designed for this new era of AI agents. Yet today’s most advanced developers often use a dozen agents in parallel, and even the way we define software projects is changing quickly. Building that on GitHub — and bringing its existing user base along — would have been a “very different endeavor,” Dohmke says.

Leaving GitHub to become a founder again

It’s worth noting that Dohmke stressed that his departure from GitHub was entirely amicable. He didn’t leave GitHub to build a competitor.

“After over ten years at Microsoft, seven of those at GitHub and four as CEO, I felt the itch to be a founder again and build something new,” he says. “And I had a very nice conversation with [Microsoft CEO] Satya [Nadella] in June, and I outlined to him where my head is at. And his response was, ‘Hey, you know, one, please keep pushing until your last day. And two, let’s see what we can do together in the Microsoft ecosystem.’”

Given that, it’s no surprise that Microsoft’s venture arm invested in Entire.

What is Entire building?

It’s also worth noting that while it is a platform play, Entire will not necessarily end up competing with GitHub. Dohmke says the idea is to build a layer higher in the stack where developers can manage agents’ reasoning processes and collaborate with them. Code repositories will remain central to that.

What Entire is building is a three-layer platform, with a new Git-compatible database built from scratch as its foundation, a semantic reasoning layer in the middle, and a user interface on top.

The team believes a new database layer is necessary because the information stored in these new repositories is different — specifically, the agents can emit a lot more context than humans do when using these tools. This new database will allow humans and agents to query not just the code but also the reasoning behind it.

Since agents will likely use this database and its API endpoints far more often than humans could ever use a Git repository, the team also needs to consider performance.

Dohmke also says that this new database, unlike traditional centralized Git repositories, can be a globally distributed network of nodes. For users who need to (or want to) ensure data sovereignty, that’s a major selling point.

Checkpoints

The first product the team is launching is part of the middle layer. Dubbed Checkpoints, this new open-source tool integrates with Claude Code and Google’s Gemini CLI (with Open Codex support coming soon) and automatically extracts and logs the agent’s reasoning, intent, and outcomes.

“The middle [layer] is all about providing both to humans and to agents all the information that led to the software product,” Dohmke explains. “And today, in the GitHub repository, that’s all the code and sometimes documentation and dependencies, but it’s essentially missing all the [information about] how you got there.”

Entire’s Checkpoint tool in the CLI (credit: Entire).

That’s because those systems were built for human developers, and while developers may write test cases and documentation after they finish writing the code, documenting their exact reasoning steps was never part of the process. As a result, in the traditional, non-agentic workflow, a lot of institutional knowledge is never written down.

“That’s the first step in our larger vision, which is providing the semantic reasoning layer over the life cycle of a software project, so you can trace — both as a human and as an agent — at any point in time, in the future, why decisions were made the way they were,” Dohmke explains.

With all this data saved, Checkpoints will allow developers to review how the agents produced the code.

While the user interface is still a work in progress, Entire has built parts of it that help visualize the checkpoints stored in Git. For now, though, the team is mostly focused on the command line experience.

The review bottleneck

One issue Dohmke stresses in the current developer/agent workflow is something we hear a lot about these days: the bottleneck for shipping code isn’t writing code; it’s reviewing the code written by the agents. That’s already leading to developer burnout.

The future, Dohmke argues, is more agents.

“If you keep that going through the software life cycle, the next thing you do after writing code is reviewing code — either your own code or your team member’s code in a pull request,” says Dohmke. “But a pull request has the same problem [when it comes to understanding the code]. It shows me changes to files that I never wrote in the first place. And the code review agents, like Copilot agent, give me feedback on their code, which is great when I still have some fundamental understanding, but becomes pointless or superfluous if I don’t actually understand what that code does.”

When there is more code and less context, the solution may be to use agents and deterministic tools to test the code and ensure it’s compliant and secure.

“It’s becoming more and more of a bottleneck, and so you have to remove that step out of the process,” he explains. “And that’s, I think, one of the biggest challenges in the industry, because at the same time we are struggling with more and more cyber attacks, many organizations have introduced zero trust as a process, which means nothing gets deployed without a human review. And so that’s, I think, where we believe, in our vision, a lot of innovation will happen, and we want to be part of that.”

Hiring humans and agents

With this funding round closed, Entire plans to double its headcount from its current fifteen employees to about thirty and build up its platform as fast as possible. But as Dohmke stresses, it’s not just human employees that matter anymore. Even in its press release, the Entire team notes that it plans to expand its team to “hundreds of agents.”

“I think in 2026, any leader needs to think about head count no longer just as salaries and benefits and travel and expenses, but tokens. And I’ve spoken to engineers, both on my own team, but also in the Bay Area here that are talking about thousands of dollars in tokens per month,” Dohmke says.

As for the business model, Dohmke tells us that the team plans to follow the well-established open source playbook of making most of the platform available under a permissive license and then offering a hosted service with additional features to monetize.

The post GitHub’s former CEO launches a developer platform for the age of agentic coding appeared first on The New Stack.


Hardened Images Are Free. Now What?


Docker Hardened Images are now free, covering Alpine, Debian, and over 1,000 images including databases, runtimes, and message buses. For security teams, this changes the economics of container vulnerability management.

DHI includes security fixes from Docker’s security team, which simplifies security response. Platform teams can pull the patched base image and redeploy quickly. But free hardened images raise a question: how should this change your security practice? Here’s how our thinking is evolving at Docker.

What Changes (and What Doesn’t)

DHI gives you a security “waterline.” Below the waterline, Docker owns vulnerability management. Above it, you do. When a scanner flags something in a DHI layer, it’s not actionable by your team. Everything above the DHI boundary remains yours.

The scope depends on which DHI images you use. A hardened Python image covers the OS and runtime, shrinking your surface to application code and direct dependencies. A hardened base image with your own runtime on top sets the boundary lower. The goal is to push your waterline as high as possible.

Vulnerabilities don’t disappear. Below the waterline, you need to pull patched DHI images promptly. Above it, you still own application code, dependencies, and anything you’ve layered on top.

Supply Chain Isolation

DHI provides supply chain isolation beyond CVE remediation.

Community images like python:3.11 carry implicit trust assumptions: no compromised maintainer credentials, no malicious layer injection via tag overwrite, no tampering since your last pull. The Shai Hulud campaign(s) demonstrated the consequences when attackers exploit stolen PATs and tag mutability to propagate through the ecosystem.

DHI images come from a controlled namespace where Docker rebuilds from source with review processes and cooldown periods. Supply chain attacks that burn through community images stop at the DHI boundary. You’re not immune to all supply chain risk, but you’ve eliminated exposure to attacks that exploit community image trust models.

This is a different value proposition than CVE reduction. It’s isolation from an entire class of increasingly sophisticated attacks.

The Container Image as the Unit of Assessment

Security scanning is fragmented. Dependency scanning, SAST, and SCA all run in different contexts, and none has full visibility into how everything fits together at deployment time.

The container image is where all of this converges. It’s the actual deployment artifact, which makes it the checkpoint where you can guarantee uniform enforcement from developer workstation to production. The same evaluation criteria you run locally after docker build can be identical to what runs in CI and what gates production deployments.

This doesn’t need to replace earlier pipeline scanning altogether. It means the image is where you enforce policy consistency and build a coherent audit trail that maps directly to what you’re deploying.
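
As a rough sketch of that idea, the same check can gate a local build and a CI job. The image name below is a placeholder, and Docker Scout is just one scanner option:

docker build -t myapp:dev .
docker scout cves myapp:dev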

Policy-Driven Automation

Every enterprise has a vulnerability management policy. The gap is usually between policy (PDFs and wikis) and practice (spreadsheets and Jira tickets).

DHI makes that gap easier to close by dramatically reducing the volume of findings that require policy decisions in the first place. When your scanner returns 50 CVEs instead of 500, even basic severity filtering becomes a workable triage system rather than an overwhelming backlog.

A simple, achievable policy might include the following:

  • High and critical severity vulnerabilities require remediation or documented exception
  • Medium and lower severity issues are accepted with periodic review
  • CISA KEV vulnerabilities are always in scope

Most scanning platforms support this level of filtering natively, including Grype, Trivy, Snyk, Wiz, Prisma Cloud, Aqua, and Docker Scout. You define your severity thresholds, apply them automatically, and surface only what requires human judgment.
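
As a minimal sketch of that filtering on the command line (the image name is a placeholder; the flags shown are the standard severity options in Trivy and Grype):

trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:1.2.3
grype registry.example.com/myapp:1.2.3 --fail-on high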

For teams wanting tighter integration with DHI coverage data, Docker Scout evaluates policies against DHI status directly. Third-party scanners can achieve similar results through pipeline scripting or by exporting DHI coverage information for comparison.

The goal isn’t perfect automation but rather reducing noise enough that your existing policy becomes enforceable without burning out your engineers.

VEX: What You Can Do Today

Docker Hardened Images ship with VEX attestations that suppress CVEs Docker has assessed as not exploitable in context. The natural extension is for your teams to add their own VEX statements for application-layer findings.

Here’s what your security team can do today:

Consume DHI VEX data. Grype (v0.65+), Trivy, Wiz, and Docker Scout all ingest DHI VEX attestations automatically or via flags. Scanners without VEX support can still use extracted attestations to inform manual triage.

Write your own VEX statements. OpenVEX provides the JSON format. Use vexctl to generate and sign statements.
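
For example, a minimal vexctl invocation might look like the following; the CVE, product identifier, and justification are placeholders you would replace with your own assessment:

vexctl create \
  --product="pkg:oci/myapp@sha256:abc123" \
  --vuln="CVE-2024-1234" \
  --status="not_affected" \
  --justification="vulnerable_code_not_in_execute_path"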

Attach VEX to images. Docker recommends docker scout attestation add for attaching VEX to images already in a registry:

docker scout attestation add \
  --file ./cve-2024-1234.vex.json \
  --predicate-type https://openvex.dev/ns/v0.2.0 \
  <image>

Alternatively, COPY VEX documents into the image filesystem at build time, though this prevents updates without rebuilding.

Configure scanner VEX ingestion. The workflow: scan, identify investigated findings, document as VEX, feed back into scanner config. Future scans automatically suppress assessed vulnerabilities.
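
As one sketch of that loop, Grype can consume an OpenVEX document directly (file and image names are placeholders):

grype registry.example.com/myapp:1.2.3 --vex ./cve-2024-1234.vex.json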

Compliance: What DHI Actually Provides

Compliance frameworks such as ISO 27001, SOC 2, and the EU Cyber Resilience Act require systematic, auditable vulnerability management. DHI addresses specific control requirements:

Vulnerability management documentation (ISO 27001 A.8.8, SOC 2 CC7.1). The waterline model provides a defensible answer to “how do you handle base image vulnerabilities?” Point to DHI, explain the attestation model, show policy for everything above the waterline.

Continuous monitoring evidence. DHI images rebuild and re-scan on a defined cadence. New digests mean current assessments. Combined with your scanner’s continuous monitoring, you demonstrate ongoing evaluation rather than point-in-time checks.

Remediation traceability. VEX attestations create machine-readable records of how each CVE was handled. When auditors ask about specific CVEs in specific deployments, you have answers tied to image digests and timestamps.

CRA alignment. The Cyber Resilience Act requires “due diligence” vulnerability handling and SBOMs. DHI images include SBOM attestations, and VEX aligns with CRA expectations for exploitability documentation.

This won’t satisfy every audit question, but it provides the foundation most organizations lack.

What to Do After You Read This Post

  1. Identify high-volume base images. Check Docker Hub’s Hardened Images catalog (My Hub → Hardened Images → Catalog) for coverage of your most-used images (Python, Node, Go, Alpine, Debian).
  2. Swap one image. Pick a non-critical service, change the FROM line to the DHI equivalent, rebuild, scan, and compare results (see the sketch after this list).
  3. Configure policy-based filtering. Set up your scanner to distinguish DHI-covered vulnerabilities from application-layer findings. Use Docker Scout or Wiz for native VEX integration, or configure Grype/Trivy ignore policies based on extracted VEX data.
  4. Document your waterline. Write down what DHI covers and what remains your responsibility. This becomes your policy reference and audit documentation.
  5. Start a VEX practice. Convert one informally-documented vulnerability assessment into a VEX statement and attach it to the relevant image.
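
A minimal sketch of step 2, assuming a Python service. The exact DHI repository path varies by organization, so treat the second FROM line as a placeholder and use the path shown in the catalog:

# Before: community base image
FROM python:3.11-slim

# After: the DHI equivalent (placeholder path)
FROM <your-namespace>/dhi-python:3.11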

DHI solves specific, expensive problems around base image vulnerabilities and supply chain trust. The opportunity is building a practice around it that scales.

The Bigger Picture

DHI coverage is expanding. Today it might cover your OS layer; tomorrow it extends through runtimes and into hardened libraries. Build your framework to be agnostic to where the boundary sits. The question is always the same: what has Docker attested to, and what remains yours to assess?

The methodology Docker uses for DHI (policy-driven assessment, VEX attestations, auditable decisions) extends into your application layer. We can’t own your custom code, but we can provide the framework for consistent practices above the waterline. Whether you use Scout, Wiz, Grype, Trivy, or another scanner, the pattern is the same. You can let DHI handle what it covers, automate policy for what remains, and document decisions in formats that travel with artifacts.

At Docker, we’re using DHI internally to build this vulnerability management model. The framework stays constant regardless of how much of our stack is hardened today versus a year from now. Only the boundary moves.

The hardened images are free. The VEX attestations are included. What’s left is integrating these pieces into a coherent security practice where the container is the unit of truth, policy drives automation, and every vulnerability decision is documented by default.

For organizations that require stronger guarantees, FIPS-enabled and STIG-ready images, and customizations, DHI Enterprise is tailor-made for those use cases. Get in touch with the Docker team if you would like a demo. If you’re still exploring, take a look at the catalog (no signup needed) or take DHI Enterprise for a spin with a free trial.


Node.js 25.6.1 (Current)


Node.js 24.13.1 (LTS)


How to Force Content Security Policy (CSP) Headers on WordPress


Content Security Policy (CSP) is a browser feature that blocks unsafe content. It tells the browser what resources it can load, which can prevent attacks like cross‑site scripting (XSS). It also helps stop mixed content issues, which occur when secure pages load insecure files.

While CSP sounds complicated, you don’t need advanced technical experience to use it — just a simple starting policy, a safe way to test, and a little fine-tuning. This guide explains how to plan a policy, test it safely, enforce it, and maintain it over time. It includes a beginner path using a plugin and a developer path using server or code-level headers.

Who this guide is for and how to use it

This guide supports two experience levels:

  • The beginner track: You have WordPress admin access but don’t want to edit server config. You’ll use a plugin to add CSP and follow a clear test loop.
  • The developer track: You can edit Apache, Nginx, CDN headers, or WordPress code. You want direct control, versioning, and support for nonces or route-based policies.

You can read the shared foundation first, then jump to your path.

CSP basics before you get started

To deploy a solid policy, you need to understand a few terms:

  • Directive: This is a rule for one resource type. Example: script-src controls JavaScript.
  • Source expression: This is a value inside a directive. Example: ‘self’ or https://cdn.example.com.
  • ‘self’: This allows resources from your own domain and subresources served from it.
  • Report-only mode: The browser records violations but still loads blocked resources.
  • Enforcement mode: The browser blocks violations instead of only logging them.
  • Nonce: A random token that allowlists one inline script or style for one request.
  • Hash: An SHA-based fingerprint that allowlists one exact inline script or style text.

A CSP header is a semicolon-separated list of directives. Browsers apply the rules to every page response that includes the header.

Before you set up CSP, make sure that:

  • You can clear all caches. This includes caching through plugins, your host, or a CDN.
  • You can test on staging or during a low traffic window. CSP can break your site layout or code if done incorrectly.
  • You know which third-party services your site uses. This includes things like analytics, fonts, ads, embeds, payment gateways, and chat widgets.

Decide on policy rules

When choosing which resources to allow, consider:

  • Scripts (script-src): From things like plugins, analytics, and inline code
  • Styles (style-src): Like theme CSS and external fonts
  • Images (img-src): From your Media Library and external image hosts
  • Connections (connect-src): From the REST API and external APIs
  • Other resources: Like fonts, frames, etc.

Start with a simple policy. For example:

default-src 'self';
script-src 'self' https://www.google-analytics.com;
style-src 'self' 'unsafe-inline';
img-src 'self' data:;

This allows files from your site, Google Analytics scripts, inline styles, and inline images. You can expand later.

At this point, it’s time to choose your track. If you’re a beginner, read on to the next line. If you’re more experienced, skip down to the developer track.

Beginner track: Add CSP with a plugin

Step 1: Install and configure a CSP plugin

We’ll use the Headers Security Advanced & HSTS WP plugin here, but any plugin that allows you to set custom response headers works.

  1. Install the plugin from the WordPress plugin directory.
  2. Go to Settings → Headers Security Advanced & HSTS WP.
  3. Find the CSP Header Contents section and paste your policy. Use the “starter policy template” section below if you need help writing a policy.
  4. Add a CSP Report URI if you’re in report-only mode (recommended). You can use:
    • A service such as Report URI or Sentry CSP reporting.
    • Your own endpoint if you already log security reports.
  5. Save your settings.
  6. Clear all caches.
CSP header content and report URI settings
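
Under the hood, a report destination is usually expressed as a report-uri directive inside the policy itself. A minimal report-only header with reporting might look like this (the endpoint URL is a placeholder):

Content-Security-Policy-Report-Only: default-src 'self'; report-uri https://example.com/csp-reports;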

Starter policy templates

Pick one that matches your setup, then refine it.

Standard template:

default-src 'self';
script-src 'self';
style-src 'self' 'unsafe-inline';
img-src 'self' data:;
font-src 'self';
connect-src 'self';

Typical WordPress setup with Google Fonts and Google Analytics:

default-src 'self';
script-src 'self' https://www.google-analytics.com https://www.googletagmanager.com;
style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
font-src 'self' https://fonts.gstatic.com data:;
img-src 'self' data:;
connect-src 'self';

WordPress with YouTube embeds:

default-src 'self';
script-src 'self';
style-src 'self' 'unsafe-inline';
img-src 'self' data: https://i.ytimg.com;
frame-src https://www.youtube.com https://www.youtube-nocookie.com;

Step 2: Check reports

After enabling report-only mode, browse your site and watch for reports. You can do this with tools like Chrome DevTools, keeping an eye out for CSP warnings.

A typical report contains keys such as:

  • blocked-uri: The resource the browser would block in enforcement mode.
  • violated-directive: The rule that would block it.
  • source-file: The file (usually a script) where the violation originated.
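
For illustration, a report delivered to a reporting endpoint is a small JSON object along these lines (all values here are made up):

{
  "csp-report": {
    "document-uri": "https://example.com/checkout/",
    "violated-directive": "script-src",
    "blocked-uri": "https://widgets.example-chat.com/loader.js",
    "source-file": "https://example.com/checkout/"
  }
}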

Your job is simple:

  1. Decide if the blocked resource is necessary and trusted.
  2. If it is, add its domain to the matching directive.
  3. Save, clear your cache, and reload.

Repeat until your critical pages generate no important violations.

Step 3: Move to enforcement mode

Once reports are clean, switch to enforcement mode.

  1. Replace the report-only header with an enforced CSP header.
  2. Paste the exact same policy.
  3. Save and clear caches.
  4. Reload key pages and confirm that no critical resources are blocked.

You’ll still see violations, but now the browser will block what breaks the policy.

Developer track method 1: Add CSP at the server level

Are you a developer who wants more control? There are two primary ways to add a CSP to WordPress.

Apache, using .htaccess 

In .htaccess add the following, customizing as needed:

<IfModule mod_headers.c>
  Header set Content-Security-Policy-Report-Only "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;"
</IfModule>

After testing, switch to enforcement:

<IfModule mod_headers.c>
  Header set Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;"
</IfModule>

Nginx

In your server block add the following, customized for your situation:

add_header Content-Security-Policy-Report-Only "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;" always;

Then enforce:

add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;" always;

The always keyword makes sure headers are sent even on error responses.

CDN or reverse proxy

If your CSP should apply across multiple origins or you want a single control point:

  • Add the CSP header in your CDN response header rules.
  • Keep the policy string in version control if your CDN supports config as code.
  • Avoid setting CSP both at origin and CDN unless you intend to replace the origin value.

Developer track method 2: Add CSP via WordPress code

You may want CSP inside WordPress when your setup requires dynamic behavior. For example:

  • You need per-page policies, such as a tighter rule on admin-facing pages.
  • You plan to add nonces for inline scripts you control.
  • You want CSP rollout tied to theme or plugin deployment.

Example using wp_headers:

add_filter('wp_headers', function($headers) {
  // Report-only while testing; switch the array key to 'Content-Security-Policy' to enforce.
  $headers['Content-Security-Policy-Report-Only'] = "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;";
  return $headers;
});

After testing, change the header key to Content-Security-Policy.

Advanced CSP in WordPress: Nonces and hashes

Inline scripts are common in WordPress, as page builders, themes, and plugins may inject them. There are three ways to handle them:

  • Allow inline scripts globally with ‘unsafe-inline’. This is the easiest method, but isn’t as strong as the others.
  • Add hashes for specific inline blocks. This is stable when the content doesn’t change often.
  • Add nonces per request. This method is strong and flexible, but you must inject the nonce into every inline block you want to allow.

Suppose your theme outputs this inline script:

<script>
  console.log("hi");
</script>

You could compute a SHA-256 hash of the script content and add it to script-src:

script-src 'self' 'sha256-BASE64HASHVALUE';
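
One way to compute that value, assuming the hash must cover the exact bytes between the opening and closing tags (including the surrounding whitespace), is with openssl:

printf '\n  console.log("hi");\n' | openssl dgst -sha256 -binary | openssl base64

The base64 output replaces BASE64HASHVALUE in the directive above.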

Here’s a nonces example with WordPress:

add_action('send_headers', function() {
  // Generate a fresh nonce for this request and include it in the CSP header.
  $nonce = base64_encode(random_bytes(16));
  $policy = "default-src 'self'; script-src 'self' 'nonce-$nonce'; style-src 'self' 'unsafe-inline';";
  header("Content-Security-Policy: $policy");

  // Add the same nonce to every enqueued script tag so the browser allows it.
  add_filter('script_loader_tag', function($tag, $handle) use ($nonce) {
    return str_replace('<script ', '<script nonce="' . esc_attr($nonce) . '" ', $tag);
  }, 10, 2);
});

This example adds the nonce to enqueued scripts. You still need to add the same nonce attribute to any inline scripts you output in templates.

Warning: Nonces in a mixed plugin environment take effort. If most of your inline scripts come from third-party plugins, hashes or carefully scoped ‘unsafe-inline’ may be the more practical choice.

Common WordPress CSP issues and fixes

Here are a few elements that may cause issues, and how to adjust for each one:

Page builders and themes with inline CSS

  • Problem: Layout looks unstyled.
  • Fix: Keep ‘unsafe-inline’ in style-src, or move to hashes if you control the inline blocks.

Third-party analytics and tag managers:

  • Problem: Console shows blocked scripts from Google or other tools.
  • Fix: Add their domains to script-src.

Payment and checkout scripts:

  • Problem: Checkout buttons fail or embedded payment frames don’t load.
  • Fix: Add payment domains to script-src, connect-src, and frame-src as needed.

Embedded media or social posts:

  • Problem: There are empty embed containers.
  • Fix: Add provider domains to frame-src or img-src.

A quick overview

Here’s a quick summary of the steps to put a CSP into place and maintain it over time.

  1. Write a small policy first.
    • Start with default-src ‘self’.
    • Add only the directives you know you need, usually script-src, style-src, img-src, font-src, and connect-src.
    • Keep the allowlist short. Every domain should have a purpose.
  2. Run in report-only mode.
    • Set Content-Security-Policy-Report-Only.
    • Browse multiple pages, not only the homepage. Include posts, archives, forms, and checkout pages.
    • Collect reports in DevTools or a reporting endpoint.
  3. Triage violations.
    • Read violated-directive first. That tells you what to fix.
    • Check blocked-uri. Decide if the resource is trusted and needed.
    • If yes, add the source to the right directive. If no, leave it blocked.
    • Change one thing at a time, then retest.
  4. Enforce once reports are clear.
    • Switch to Content-Security-Policy with the same policy.
    • Clear caches, reload key pages, and confirm the site works.
    • Watch the console for any last surprises.
  5. Tighten carefully.
    • If you control inline code, move from ‘unsafe-inline’ to hashes or nonces.
    • If third-party plugins inject inline scripts you cannot change, accept the targeted risk and document it.
  6. Maintain the policy.
    • After adding plugins or new features, adapt for new sources.
    • Review the allowlist every few months and remove dead entries.

This setup takes time up front. But once it’s running, you’ll block a lot of common attacks and reduce mixed‑content issues. It will give your WordPress site a stronger security baseline.

For even stronger protection, add Jetpack Security

Setting up a Content Security Policy is a smart move, adding a solid layer of defense to your site. But it shouldn’t be your only one. Attacks can still happen in other ways, like through outdated plugins, file changes, or comment spam.

That’s where Jetpack Security comes in. It gives your site extra tools that ultimately build a comprehensive security barrier, including:

  • Jetpack VaultPress Backup: Backs up your site automatically, in real time. If something breaks, you can restore it in minutes.
  • Jetpack Scan: Monitors your site for malware or file changes. If it finds something, you get an alert.
  • Akismet: Filters spam from comments and forms, so you don’t have to waste time with manual removal.

These tools run quietly in the background, and you don’t need to be a security expert to use them. They also come together in one bundle designed specifically for WordPress.

CSP protects what the browser is allowed to load, while Jetpack Security watches everything else — from server-level changes to spam and backups. Together, they give your site stronger, more complete protection.

Learn more about Jetpack Security






Introducing AI Performance in Bing Webmaster Tools Public Preview


We are happy to introduce AI Performance in Bing Webmaster Tools, a new set of insights that shows how publisher content appears across Microsoft Copilot, AI-generated summaries in Bing, and select partner integrations. For the first time, you can understand how often your content is cited in generative answers, with clear visibility into which URLs are referenced and how citation activity changes over time.

 

Extending Search Insights to AI Answers

Bing Webmaster Tools has long helped website owners understand indexing, crawl health, and search performance. AI Performance extends those insights to AI-generated answers by showing where and how content from your site is referenced as a source across AI experiences.

As AI becomes a more common way people discover information, visibility is not only about blue links. It is also about whether your content is cited and referenced when AI systems generate answers. This release is an early step toward Generative Engine Optimization (GEO) tooling in Bing Webmaster Tools, helping publishers understand how their content participates in AI-driven experiences.

The AI Performance dashboard in Bing Webmaster Tools.

AI Performance Dashboard: Visibility Across AI Experiences

The AI Performance dashboard provides a consolidated view of when your site is cited in AI answers.

What the dashboard measures

Total Citations

Shows the total number of citations that are displayed as sources in AI-generated answers during the selected time frame. This highlights how often your content is referenced by AI systems, without indicating placement or presentation within a specific answer.

Average Cited Pages

Shows the average number of unique pages from your site that are displayed as sources in AI-generated answers per day over the selected time range. Because the data is aggregated across supported AI surfaces, the average cited pages metric reflects overall citation patterns and does not indicate ranking, authority, or the role of any page within an individual answer.

Grounding queries

Shows the key phrases the AI used when retrieving content that was referenced in AI-generated answers. The data shown represents a sample of overall citation activity. We will continue to refine this metric as additional data is processed.

Page-level citation activity

Shows citation counts for specific URLs from your site, making it easy to see which individual pages are most often referenced across AI-generated answers during the selected date range. This reflects how often pages are cited, not page importance, ranking, or placement.

Visibility trends over time

The timeline shows how citation activity for your site changes over time across supported AI experiences, making it easier to spot trends at a glance.

Important Note: Bing respects all content owner preferences expressed through robots.txt and other supported control mechanisms.

Using AI Performance Insights in Bing Webmaster Tools

By reviewing cited pages and grounding query phrases, AI Performance insights help clarify your content visibility in AI-generated answers.

These insights can help you:

  • Validate which pages are already being used as references in AI answers.
  • Identify content that appears frequently across AI answers.
  • Spot opportunities to improve clarity, structure, or completeness on pages that are indexed but less frequently cited.

Using These Insights to Improve Content

Once you understand which pages and topics are being cited, you can use those signals to guide content improvements.

  • Strengthen depth and expertise: Pages cited for specific grounding query phrases often reflect clear subject focus and domain expertise. Deepening coverage in related areas can reinforce authority.
  • Improve structure and clarity: Clear headings, tables, and FAQ sections help surface key information and make content easier for AI systems to reference accurately.
  • Support claims with evidence: Examples, data, and cited sources help build trust when content is reused in AI-generated answers.
  • Keep content fresh and accurate: Regular updates help ensure AI systems reference the most current version of your content.
  • Reduce ambiguity across formats: Align text, images, and video so they consistently represent the same entities, products, or concepts.

For deeper guidance on structuring content to improve inclusion in AI-generated answers, see Optimizing Your Content for Inclusion in AI Search Answers.

Keeping Content Fresh with IndexNow

Accurate, up-to-date content is important for inclusion and citation in AI-generated answers. IndexNow helps keep information fresh across search and AI experiences by notifying participating search engines whenever content is added, updated, or removed.

By enabling faster discovery of content changes, IndexNow helps ensure that AI systems reference the most current version of a page when generating answers. If you’re not already using IndexNow, go to https://www.indexnow.org to get started.
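
As a rough example, a single changed URL can be submitted with a simple GET request; the page URL and key below are placeholders for your own values:

curl "https://www.bing.com/indexnow?url=https://example.com/updated-page/&key=YOUR_INDEXNOW_KEY"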

Local Business Information and AI Visibility

For local businesses, accurate business information is especially important when AI experiences surface answers to location-based queries.

In addition to using Bing Webmaster Tools, businesses can register with Bing Places for Business to help ensure that key details such as address, hours, and contact information remain current and eligible for inclusion in AI-generated responses.

Evolving AI Performance with the Webmaster Community

AI Performance in Bing Webmaster Tools marks an important step toward greater transparency between AI systems and the open web. As we expand these insights, we’ll continue working with publishers and the webmaster community to improve inclusion, attribution, and visibility across both search results and AI experiences.

We look forward to partnering with you as we evolve these capabilities and continue building tools that support discovery in the next generation of search and AI experiences.

Krishna Madhavan, Meenaz Merchant, Fabrice Canel, Saral Nigam
Product Managers, Microsoft AI
