Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

A second US Sphere could come to Maryland

1 Share
A rendering of the planned mini-Sphere potentially coming to National Harbor, Maryland.

Sphere Entertainment, the company behind the eye-catching interactive venue in Las Vegas, has announced its "intent to develop" another Sphere in Maryland that will be located 15 minutes south of Washington, DC. A timeline and exact location haven't been finalized, but the Maryland Sphere would be the company's second venue in the US, following plans to build a Sphere in Abu Dhabi announced in October 2024.

The second US Sphere would be built in an area known as National Harbor in Prince George's County, Maryland. Located along the Potomac River, National Harbor currently features a convention center, multiple hotels, restaurants, and shops …

Read the full story at The Verge.

Read the whole story
alvinashcraft
17 minutes ago
reply
Pennsylvania, USA
Share this story
Delete

The Smart Money Is on Bluesky


Since Bluesky launched, economists, investment strategists, and financial commentators have been building a finance community here, post by post. Now, it’s breaking out.

Other social media platforms bury the signal under clickbait, ads, and spam. We’re building Bluesky to surface what’s happening now, so critical information reaches you at key market moments.

When major news breaks—like the Department of Justice investigating Jerome Powell—Bluesky is where you can find coverage from top business media, alongside real-time reactions from financial commentators and industry experts.

BREAKING: Federal prosecutors have opened a criminal inquiry into Fed Chair Powell, per NYT

— Unusual Whales (@unusualwhales.bsky.social) January 11, 2026 at 4:49 PM

"We could score this one to the Fed for its standing up to political intimidation, but the linchpin of the defense — Jay Powell — is out as Fed Chair in a matter of months, and there’s no sign that the President will ease up on the Fed." stayathomemacro.substack.com/p/jay-powell...

[image or embed]

— Claudia Sahm (@claudia-sahm.bsky.social) January 15, 2026 at 4:24 PM

"The scoreboard... is Jay Powell and Fed independence: 1, Trump: 0."

Markets stayed calm because they believe the institution held—this time.

But what if the President take the lack of a market reaction as permission to keep going in his pointless crusade against the Fed?

[image or embed]

— Justin Wolfers (@justinwolfers.bsky.social) January 19, 2026 at 5:51 AM

Fit check for today.

[image or embed]

— George Pearkes (@peark.es) January 12, 2026 at 5:15 AM

If Jerome Powell has million fans, then I'm one of them. If Jerome Powell has one fan, then I'm THAT ONE. If Jerome Powell has no fans, that means I'm dead. If the world is against Jerome Powell I’m against the entire world

— Joey Politano🏳️‍🌈 (@josephpolitano.bsky.social) January 11, 2026 at 7:37 PM

So who exactly is Finance Bluesky?

It's television anchors like CNBC’s Carl Quintanilla… energy analysts like Rory Johnston… SEC sleuths like Michelle Leder… macroeconomists like Julia Coronado… strategists like Edward Harrison… wealth managers like Josh Brown… bond traders like Ed Bradford… VCs like Russ Wilcox… housing writers like Conor Sen… tech journalists like Tae Kim… and academic economists like Justin Wolfers.

It’s also government agencies like the US Treasury Department, US Bureau of Labor Statistics, and European Central Bank, and top business publications like the Wall Street Journal, Financial Times, and Barron's.

Put it all together, and you’ve got a place where finance happens in real time – from Wall Street professionals tracking sentiment, to individuals trying to understand what’s impacting their retirement portfolios.

At Bluesky, we’ve watched the momentum build, and we’re investing in it. We just shipped cashtags, which make it easy to tag and discover posts about specific stocks.

We're also building feeds to give Bluesky users instant access to market-moving information. Our feeds for January's US jobs and inflation reports delivered over 5 million posts to users’ devices, and our Nvidia earnings feed hit 6 million. When news breaks, Finance Bluesky is already pricing it in.

Our friends at Graze Social helped us build these feeds. Graze is one of a growing number of startups building on the same open protocol that powers Bluesky. That openness lets independent developers create tools that make the ecosystem more useful for everyone, including traders and investors.

Not on Bluesky yet? Click this link to join and instantly follow a curated list of top finance voices.


Codelab: Building an AI Agent With Couchbase AI Services & Agent Catalog


In this CodeLab, you will learn how to build a Hotel Search Agent using LangChain, Couchbase AI Services, and Agent Catalog. We will also incorporate Arize Phoenix for observability and evaluation to ensure our agent performs reliably.

This tutorial takes you from zero to a fully functional agent that can search for hotels, filter by amenities, and answer natural language queries using real-world data.

Note: You can find the full Google Colab notebook for this CodeLab here.

What Are Couchbase AI Services?

Building AI applications often involves juggling multiple services: a vector database for memory, an inference provider for LLMs (like OpenAI or Anthropic), and separate infrastructure for embedding models.

Couchbase AI Services streamlines this by providing a unified platform where your operational data, vector search, and AI models live together. It offers:

  • LLM inference and embeddings API: Access popular LLMs (like Llama 3) and embedding models directly within Couchbase Capella, with no external API keys, no extra infrastructure, and no data egress. Your application data stays inside Capella: queries, vectors, and model inference all happen where the data lives. This enables secure, low-latency AI experiences while meeting privacy and compliance requirements. The key value: data and AI together, without sending sensitive information outside your system.
  • Unified platform: Database + Vectorization + Search + Model
  • Integrated vector search: Perform semantic search directly on your JSON data with millisecond latency.

Why Is This Needed?

As we move from simple chatbots to agentic workflows, where AI models autonomously use tools, latency and setup complexity become bottlenecks. By co-locating your data and AI services, you reduce operational overhead and latency. Furthermore, tools like the Agent Catalog help manage hundreds of agent prompts and tools and provide built-in logging for your agents.

Prerequisites

Before we begin, ensure you have:

  • A Couchbase Capella account.
  • Python 3.10+ installed.
  • Basic familiarity with Python and Jupyter notebooks.

Create a Cluster in Couchbase Capella

  1. Log into Couchbase Capella.
  2. Create a new cluster or use an existing one. Note that the cluster needs to run the latest version of Couchbase Server 8.0 with the Data, Query, Index, and Eventing services.
  3. Create a bucket.
  4. Create a scope and collection for your data.

Step 1: Install Dependencies

We’ll start by installing the necessary packages. This includes the couchbase-infrastructure helper for setup, the agentc CLI for the catalog, and the LangChain integration packages.

%pip install -q \
    "pydantic>=2.0.0,<3.0.0" \
    "python-dotenv>=1.0.0,<2.0.0" \
    "pandas>=2.0.0,<3.0.0" \
    "nest-asyncio>=1.6.0,<2.0.0" \
    "langchain-couchbase>=0.2.4,<0.5.0" \
    "langchain-openai>=0.3.11,<0.4.0" \
    "arize-phoenix>=11.37.0,<12.0.0" \
    "openinference-instrumentation-langchain>=0.1.29,<0.2.0" \
    "couchbase-infrastructure"

# Install Agent Catalog 
%pip install agentc==1.0.0

Step 2: Infrastructure as Code

Instead of manually clicking through the UI, we use the couchbase-infrastructure package to programmatically provision our Capella environment. This ensures a reproducible setup.

We will:

  1. Create a Project and Cluster.
  2. Deploy an Embedding Model (nvidia/llama-3.2-nv-embedqa-1b-v2) and an LLM (meta/llama3-8b-instruct).
  3. Load the travel-sample dataset.

Couchbase AI Services provides OpenAI-compatible endpoints that are used by the agents.

import os
from getpass import getpass
from couchbase_infrastructure import CapellaConfig, CapellaClient
from couchbase_infrastructure.resources import (
    create_project,
    create_developer_pro_cluster,
    add_allowed_cidr,
    load_sample_data,
    create_database_user,
    deploy_ai_model,
    create_ai_api_key,
)

# 1. Collect Credentials
management_api_key = getpass("Enter your MANAGEMENT_API_KEY: ")
organization_id = input("Enter your ORGANIZATION_ID: ")

config = CapellaConfig(
    management_api_key=management_api_key,
    organization_id=organization_id,
    project_name="agent-app",
    cluster_name="agent-app-cluster",
    db_username="agent_app_user",
    sample_bucket="travel-sample",
    # Using Couchbase AI Services for models
    embedding_model_name="nvidia/llama-3.2-nv-embedqa-1b-v2",
    llm_model_name="meta/llama3-8b-instruct",
)

# 2. Provision Cluster
client = CapellaClient(config)
org_id = client.get_organization_id()
project_id = create_project(client, org_id, config.project_name)
cluster_id = create_developer_pro_cluster(client, org_id, project_id, config.cluster_name, config)

# 3. Network & Data Setup
add_allowed_cidr(client, org_id, project_id, cluster_id, "0.0.0.0/0") # Allow all IPs for tutorial
load_sample_data(client, org_id, project_id, cluster_id, config.sample_bucket)
db_password = create_database_user(client, org_id, project_id, cluster_id, config.db_username, config.sample_bucket)

# 4. Deploy AI Models
print("Deploying AI Models...")
deploy_ai_model(client, org_id, config.embedding_model_name, "agent-hub-embedding-model", "embedding", config)
deploy_ai_model(client, org_id, config.llm_model_name, "agent-hub-llm-model", "llm", config)

# 5. Generate API Keys
api_key = create_ai_api_key(client, org_id, config.ai_model_region)

Be sure to follow the steps to set up the security root certificate. Secure connections to Couchbase Capella require a root certificate for TLS verification. You can find this in the "Root Certificate Setup" section of the Google Colab notebook.

Step 3: Integrating Agent Catalog

The Agent Catalog is a powerful tool for managing the lifecycle of your agent’s capabilities. Instead of hardcoding prompts and tool definitions in your Python files, you manage them as versioned assets. You can centralize and reuse your tools across your development teams. You can also examine and monitor agent responses with the Agent Tracer.

Initialize and Download Assets

First, we initialize the catalog and download our pre-defined prompts and tools.

!git init
!agentc init

# Download example tools and prompts
!mkdir -p prompts tools
!wget -O prompts/hotel_search_assistant.yaml https://raw.githubusercontent.com/couchbase-examples/agent-catalog-quickstart/refs/heads/main/notebooks/hotel_search_agent_langchain/prompts/hotel_search_assistant.yaml
!wget -O tools/search_vector_database.py https://raw.githubusercontent.com/couchbase-examples/agent-catalog-quickstart/refs/heads/main/notebooks/hotel_search_agent_langchain/tools/search_vector_database.py
!wget -O agentcatalog_index.json https://raw.githubusercontent.com/couchbase-examples/agent-catalog-quickstart/refs/heads/main/notebooks/hotel_search_agent_langchain/agentcatalog_index.json

Index and Publish

We use agentc to index our local files and publish them to Couchbase. This stores the metadata in your database, making it searchable and discoverable by the agent at runtime.

# Create local index of tools and prompts
!agentc index .

# Upload to Couchbase
!agentc publish

Step 4: Preparing the Vector Store

To enable our agent to search for hotels semantically (e.g., “cozy place near the beach”), we need to generate vector embeddings for our hotel data.

We define a helper to format our hotel data into a rich text representation, prioritizing location and amenities.

from langchain_couchbase.vectorstores import CouchbaseVectorStore

def load_hotel_data_to_couchbase(cluster, bucket_name, scope_name, collection_name, embeddings, index_name):
    # Check if data exists
    # ... (omitted for brevity) ...

    # Generate rich text for each hotel
    # e.g., "Le Clos Fleuri in Giverny, France. Amenities: Free breakfast: Yes..."
    hotel_texts = get_hotel_texts() 
    
    # Initialize Vector Store connected to Capella
    vector_store = CouchbaseVectorStore(
        cluster=cluster,
        bucket_name=bucket_name,
        scope_name=scope_name,
        collection_name=collection_name,
        embedding=embeddings,
        index_name=index_name,
    )
    
    # Batch upload texts
    vector_store.add_texts(texts=hotel_texts)
    print(f"Successfully loaded {len(hotel_texts)} hotel embeddings")
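The helper above calls a get_hotel_texts function that is omitted for brevity. A minimal sketch of what it might look like, assuming hotel documents shaped like those in the travel-sample dataset (field names such as free_breakfast are assumptions here; adjust them to your schema):

```python
def format_hotel_text(hotel: dict) -> str:
    """Render one hotel document as rich text for embedding.

    Field names (name, city, country, free_breakfast, ...) are
    assumed from the travel-sample hotel schema.
    """
    amenities = ", ".join(
        f"{label}: {'Yes' if hotel.get(field) else 'No'}"
        for field, label in [
            ("free_breakfast", "Free breakfast"),
            ("free_internet", "Free internet"),
            ("free_parking", "Free parking"),
        ]
    )
    return (
        f"{hotel.get('name', 'Unknown')} in "
        f"{hotel.get('city', '?')}, {hotel.get('country', '?')}. "
        f"Amenities: {amenities}"
    )

def get_hotel_texts(hotels: list[dict]) -> list[str]:
    """Format a batch of hotel documents for vector_store.add_texts."""
    return [format_hotel_text(h) for h in hotels]
```

Spelling out amenities as explicit "Yes"/"No" phrases gives the embedding model concrete tokens to match against natural language queries like "hotels with free breakfast".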

Step 5: Building the LangChain Agent

We use the Agent Catalog to fetch our tool definitions and prompts dynamically. The code remains generic, while your capabilities (tools) and personality (prompts) are managed separately. We will also create our ReAct agent.

import agentc
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.prompts import PromptTemplate
from langchain_core.tools import Tool

def create_langchain_agent(self, catalog, span):
    # 1. Setup AI Services using Capella endpoints
    embeddings, llm = setup_ai_services(framework="langchain")
    
    # 2. Discover Tools from Catalog
    # The catalog.find() method searches your published catalog
    tool_search = catalog.find("tool", name="search_vector_database")
    
    tools = [
        Tool(
            name=tool_search.meta.name,
            description=tool_search.meta.description,
            func=tool_search.func, # The actual python function
        ),
    ]

    # 3. Discover Prompt from Catalog
    hotel_prompt = catalog.find("prompt", name="hotel_search_assistant")
    
    # 4. Construct the Prompt Template
    custom_prompt = PromptTemplate(
        template=hotel_prompt.content.strip(),
        input_variables=["input", "agent_scratchpad"],
        partial_variables={
            "tools": "\n".join([f"{tool.name}: {tool.description}" for tool in tools]),
            "tool_names": ", ".join([tool.name for tool in tools]),
        },
    )

    # 5. Create the ReAct Agent
    agent = create_react_agent(llm, tools, custom_prompt)
    
    agent_executor = AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=True,
        handle_parsing_errors=True, # Auto-correct formatting errors
        max_iterations=5,
        return_intermediate_steps=True,
    )
    
    return agent_executor

Step 6: Running the Agent

With the agent initialized, we can perform complex queries. The agent will:

  1. Receive the user input.
  2. Decide it needs to use the search_vector_database tool.
  3. Execute the search against Capella.
  4. Synthesize the results into a natural language response.

# Initialize Agent Catalog
catalog = agentc.catalog.Catalog()
span = catalog.Span(name="Hotel Support Agent", blacklist=set())

# Create the agent
agent_executor = couchbase_client.create_langchain_agent(catalog, span)

# Run a query
query = "Find hotels in Giverny with free breakfast"
response = agent_executor.invoke({"input": query})

print(f"User: {query}")
print(f"Agent: {response['output']}")

Example Output:

Agent: I found a hotel in Giverny that offers free breakfast called Le Clos Fleuri. It is located at 5 rue de la Dîme, 27620 Giverny. It offers free internet and parking as well.

Note: In Capella Model Services, model outputs can be cached (both semantic and standard caching). Caching improves efficiency and speed, particularly for repeated or similar queries: when a query is first processed, the LLM generates a response and stores it in Couchbase, and when similar queries come in later, the cached response is returned. The caching duration can be configured in Capella Model Services.

Adding Semantic Caching

Caching is particularly valuable in scenarios where users may submit similar queries multiple times or where certain pieces of information are frequently requested. By storing these in a cache, we can significantly reduce the time it takes to respond to these queries, improving the user experience.

Semantic caching stores the response to a query and reuses it for semantically similar future queries, significantly reducing latency and cost. To enable it with Capella Model Services, pass the "X-cb-cache": "semantic" header when creating the LLM client:

import os
import time

from langchain_openai import ChatOpenAI

# 1. Set up the LLM with semantic caching enabled
print("Setting up LLM with Semantic Caching enabled...")
llm_endpoint = os.environ["CAPELLA_API_LLM_ENDPOINT"]
if not llm_endpoint.endswith("/v1"):
    llm_endpoint += "/v1"

llm_with_cache = ChatOpenAI(
    model=os.environ["CAPELLA_API_LLM_MODEL"],
    base_url=llm_endpoint,
    api_key=os.environ["CAPELLA_API_LLM_KEY"],
    temperature=0,  # Deterministic output caches better
    default_headers={"X-cb-cache": "semantic"},
)

# 2. Define a query and a semantically similar variation
query_1 = "What are the best hotels in Paris with a view of the Eiffel Tower?"
query_2 = "Recommend some hotels in Paris where I can see the Eiffel Tower."

print(f"\n Query 1: {query_1}")
print(f" Query 2 (Semantically similar): {query_2}")

# 3. First execution (Cache Miss)
print("\n Executing Query 1 (First run - Cache MISS)...")
start_time = time.time()
response_1 = llm_with_cache.invoke(query_1)
end_time = time.time()
time_1 = end_time - start_time
print(f" Time taken: {time_1:.4f} seconds")
print(f" Response: {response_1.content[:100]}...")

# 4. Second execution (Cache Hit)
# The system should recognize query_2 is semantically similar to query_1 and return the cached response
print("\n Executing Query 2 (Semantically similar - Cache HIT)...")
start_time = time.time()
response_2 = llm_with_cache.invoke(query_2)
end_time = time.time()
time_2 = end_time - start_time
print(f" Time taken: {time_2:.4f} seconds")
print(f" Response: {response_2.content[:100]}...")
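To see what that header is doing conceptually: a semantic cache embeds each query and, on a new query, returns a stored response if the new embedding is close enough to a cached one. The following is a toy illustration of the idea only, not Capella's actual implementation; fake_embed is a deterministic stand-in for a real embedding model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def fake_embed(text):
    """Toy embedding: letter-frequency vector (stand-in for a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

class SemanticCache:
    """Reuse a stored response when a new query's embedding is
    close enough (>= threshold) to a previously cached query's."""

    def __init__(self, embed, threshold=0.95):
        self.embed = embed
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query):
        q = self.embed(query)
        for emb, response in self.entries:
            if cosine(q, emb) >= self.threshold:
                return response  # cache hit
        return None  # cache miss

    def put(self, query, response):
        self.entries.append((self.embed(query), response))
```

The threshold is the key tuning knob: set it too low and distinct questions get each other's answers; too high and rephrasings miss the cache.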

Step 7: Observability With Arize Phoenix

In production, you need to know why an agent gave a specific answer. We use Arize Phoenix to trace the agent’s “thought process” (the ReAct chain).

We can also run evaluations to check for hallucinations or relevance.

import phoenix as px
from phoenix.evals import llm_classify, LENIENT_QA_PROMPT_TEMPLATE

# 1. Start Phoenix Server
session = px.launch_app()

# 2. Instrument LangChain
from openinference.instrumentation.langchain import LangChainInstrumentor
LangChainInstrumentor().instrument()

# ... Run your agent queries ...

# 3. Evaluate Results
# We use an LLM-as-a-judge to grade our agent's responses
hotel_qa_results = llm_classify(
    data=hotel_eval_df[["input", "output", "reference"]],
    model=evaluator_llm,
    template=LENIENT_QA_PROMPT_TEMPLATE,
    rails=["correct", "incorrect"],
    provide_explanation=True,
)

By inspecting the Phoenix UI, you can visualize the exact sequence of tool calls and see the latency of each step in the chain.

Conclusion

We have successfully built a robust Hotel Search Agent. This architecture leverages:

  1. Couchbase AI Services: For a unified, low-latency data and AI layer.
  2. Agent Catalog: For organized, versioned management of agent tools and prompts. Agent Catalog also provides tracing: you can query traces with SQL++, leverage the performance of Couchbase, and inspect the details of prompts and tools in the same platform.
  3. LangChain: For flexible orchestration.
  4. Arize Phoenix: For observability.

This approach scales well for teams building complex, multi-agent systems where data management and tool discovery are critical challenges.

The post Codelab: Building an AI Agent With Couchbase AI Services & Agent Catalog appeared first on The Couchbase Blog.


Introducing the 'Couchbase Explorer' Visual Studio extension!


Introducing "Couchbase Explorer", an extension for Visual Studio 2022 (and 2026!) that brings Couchbase database browsing and management directly into your IDE. If you've ever found yourself constantly switching between Visual Studio and Couchbase's web console while developing, this extension is for you.

The Vision

Couchbase Explorer is designed to be your Couchbase companion inside Visual Studio. The goal is to let you browse your clusters, explore your data, and work with documents without breaking your development flow or switching context to another application.

Note that this extension is currently in BETA - it's functional, but there's still plenty of work to do. It's also worth mentioning that this is an independent, community-driven project and is not affiliated with, endorsed by, or sponsored by Couchbase, Inc.

Current Features

The initial release focuses on connection management and data browsing:

  • Multiple Connections - Save and manage connections to multiple Couchbase Server clusters
  • Secure Credential Storage - Passwords are stored securely using Windows Credential Manager
  • SSL/TLS Support - Connect securely to clusters with SSL encryption enabled
  • Hierarchical Tree View - Intuitive navigation: Connections → Buckets → Scopes → Collections → Documents
  • Document Viewer - Double-click any document to open it in a dedicated editor with formatted JSON and syntax highlighting
  • Lazy Loading - Efficient handling of large collections with batched document retrieval
  • Copy Functionality - Quickly copy document contents or document IDs to your clipboard
  • Refresh Support - Refresh at any level to see the latest data from your cluster
  • Theme Support - Adapts to Visual Studio's light and dark themes

Getting Started

Once installed, you can access the Couchbase Explorer from the View menu. Right-click in the explorer to add your first connection - you'll need your cluster's connection string, username, and password. The extension will securely store your credentials and connect to your cluster.

From there, you can expand the connection to see your buckets, then scopes, then collections. Expand a collection to browse its documents. Double-click any document to view its contents in a formatted JSON editor.

What's Coming

The roadmap is packed with planned features:

  • Couchbase Capella Support - Connect to Couchbase's cloud offering
  • N1QL Query Editor - Write and execute SQL++ queries directly in Visual Studio
  • Document Editing - Create, update, and delete documents
  • Index Management - View and manage your cluster indexes
  • Full-Text Search Integration - Work with FTS indexes
  • Bulk Import/Export - Move data in and out of your collections
  • Query Results Panel - View query results in a dedicated tool window
  • Output/Log Window - Track operations and debug connection issues

You can check out the full issue list on GitHub to see everything that's planned and track progress.

Get It Now

Feel free to check it out on the Visual Studio Marketplace, and let me know if you have any suggestions! It's open source on GitHub, so issues and PRs are happily accepted if you're into that sort of thing.




You Can Now Vibecode Mobile Apps

From: AIDailyBrief
Duration: 6:44
Views: 102

Replit unveils native mobile app publishing with one-click app-store submission to streamline building, payments, security, and publishing. Bloomberg reports a Replit funding round that could value the company near $9 billion, and HiggsField closes funding around a $1.3 billion valuation as generative video adoption accelerates. Co-founder departures at Thinking Machines Labs signal a talent exodus, and DeepMind leadership warns that Chinese AI models are rapidly closing the performance gap with Western counterparts.

Brought to you by:
KPMG – Go to ⁠www.kpmg.us/ai⁠ to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - ⁠⁠⁠⁠⁠⁠⁠https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


Introducing the Enhanced Code Block: Syntax Highlighting and More


Last year, WordPress.com introduced new code editors for the block editor and the Additional CSS input box in the WordPress admin. This was the first stage of a larger effort to make editing code a more enjoyable experience.

Today, I’m happy to announce the launch of the second stage of that effort: introducing the new and improved Code block.

This is not a new block. It’s an enhancement to the current Code block that you’ve likely already been using, which includes several improvements over the original:

  • Syntax highlighting: Supports color-based syntax highlighting for over 100 common languages.
  • Configuration: Decide to show the filename, language name, line numbers, and even include a copy button for visitors.
  • Drag-and-drop: Dragging a code file from your computer to the editor will automatically transform it to the Code block with the language set.
  • Transforms: Transform other code-supported blocks on WordPress.com, such as Syntax Highlighter, to the new Code block.
  • Styles: Customize syntax colors directly from the editor or via ‘theme.json’ if you’re a developer.

Using the enhanced Code block

You do not have to enable anything to begin using the new version of the Code block. It’s already available to use. Just insert the Code block anywhere in the block editor and add your code.

By default, when adding a Code block and inserting code, you will see your code in Plain Text:

Example of code displayed in a blog post using plain text formatting.

Of course, Plain Text doesn’t include any syntax highlighting since it’s not a language. To change this, choose a code language from the Settings → Language dropdown in the sidebar:

Example of the Code block displaying code using a language preset.

Syntax highlighting will then be applied based on the language that you selected, making the code much more readable for both you and your visitors!

Pro tip: If you type three backticks followed by the language name (e.g., ```php) and then hit Enter, the editor will automatically create a new Code block instance and auto-fill the Language setting.

If you want to kick your Code block’s features up a notch, you can also configure several other settings besides the language:

  • Filename: Add a custom filename to display in the top left of the code block (useful when walking readers through tutorials).
  • Show language name: Displays the language name in the top right corner of the block.
  • Show copy button: Inserts a Copy button in the top right of the block, allowing site visitors to copy the entirety of the code.
  • Show line numbers: Displays line numbers next to your code on the left.
  • Line numbers start at: Choose a starting line number.

This will make your site’s code examples much more reader friendly:

Example of the Code block displaying code using the additional language settings.

Customizing the Code block colors

There are multiple ways to customize the syntax highlighting and colors shown with the enhanced Code block. In this section, I’ll walk you through each from the quickest/simplest to the more advanced techniques.

Selecting a block style

The Code block ships with four block styles out of the box:

  • Default: Will use the default styles and colors from your theme.
  • No Highlight: Disables syntax highlighting.
  • Solarized Light: A light color scheme.
  • Solarized Dark: A dark color scheme.

Themes can also register additional styles. Selecting one of these styles is the quickest way to change how your Code block is displayed:

Customizing colors from the editor

You can also customize the colors directly from the editor via the Styles → Color panel in the block sidebar. The block has an extensive array of color options for customizing every aspect of the syntax highlighting:

Selecting custom colors for the Code Block syntax formatting.

You are not limited to only colors either. You can customize any of the other available styles, such as Typography, Border, and more. These options haven’t changed with the latest enhancement.

Customizing the Code block via theme.json

If you’re a developer or theme author, you’ll most likely want to define default syntax colors and other styles for the default output of the block. theme.json support is included with this batch of enhancements.

Here’s what my custom Code block styles look like after a few tweaks in theme.json:

Example of the Code Block using custom colors.

Because the WordPress software itself doesn’t support custom colors via theme.json, the developers at WordPress.com built in custom support for this feature.

You can customize any of the syntax colors via settings.custom.core/code in theme.json. This is an object where each key is the syntax color name and the value is the color itself.

Here’s an example snippet that you can use to customize your own colors:

{
  "$schema": "https://schemas.wp.org/trunk/theme.json",
  "version": 3,
  "settings": {
    "custom": {
      "core/code": {
        "comment": "#94a3b8",
        "keyword": "#8b5cf6",
        "boolean": "#f59e0b",
        "literal": "#10b981",
        "string": "#06b6d4",
        "specialString": "#ec4899",
        "macroName": "#8b5cf6",
        "variableDefinition": "#3b82f6",
        "typeName": "#14b8a6",
        "className": "#f97316",
        "invalid": "#ef4444"
      }
    }
  }
}

Any valid CSS color is supported, so you’re not limited to hex color codes. Use CSS custom properties, RGBA, and more.

If you want to borrow my full theme.json customizations, copy and paste the following code. It includes additional custom styles to make the Code block even nicer:

{
  "$schema": "https://schemas.wp.org/trunk/theme.json",
  "version": 3,
  "settings": {
    "custom": {
      "core/code": {
        "comment": "#94a3b8",
        "keyword": "#8b5cf6",
        "boolean": "#f59e0b",
        "literal": "#10b981",
        "string": "#06b6d4",
        "specialString": "#ec4899",
        "macroName": "#8b5cf6",
        "variableDefinition": "#3b82f6",
        "typeName": "#14b8a6",
        "className": "#f97316",
        "invalid": "#ef4444"
      }
    }
  },
  "styles": {
    "blocks": {
      "core/code": {
        "border": {
          "color": "#e2e8f0",
          "style": "solid",
          "width": "1px",
          "radius": "8px"
        },
        "color": {
          "background": "#f1f5f9",
          "text": "#1e293b"
        },
        "typography": {
          "fontSize": "15px"
        }
      }
    }
  }
}

Start sharing code now.

Whether you’re publishing snippets or full-blown tutorials, the enhanced Code block makes sharing and styling code in WordPress.com smoother and more customizable than ever before. 

Syntax highlighting, block styles, and custom color options put you in full control of how your code appears. 

With these improvements, you can focus less on formatting and more on writing great content that helps your readers learn and build.




