Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Daily Reading List – January 14, 2026 (#700)

1 Share

I talked too much today. Did a podcast episode with someone and was a guest at a fireside chat in our San Diego office. I try to listen more than I talk in 1:1s, so that balanced things out today a bit.

[blog] Common misunderstandings about large software companies. Sheesh, this feels spot on. Criticisms about larger companies are often missing perspective.

[blog] Tutorial: Getting Started with Google Antigravity Skills. Absolutely fantastic look at the new Skills capability in Google Antigravity. And importantly, how it fits with all the other ways to customize AI dev workflows.

[article] Shopify, Walmart Endorse Google’s New Open Commerce Protocol. This one might have legs. It’s not locked into one ecosystem and opens up a few interesting use cases.

[article] Five LLM Content Strategies Revealed from Top Dev Tool Companies. Does your company make it easy for LLMs to understand your product and how to use it? Adam has a great post about how companies are using llms.txt Markdown files to steer the LLM.

[blog] Your AI coding agents need a manager. You’ll see so much of this in 2026. We’re entering the phase of multiple agents working for you. Learn good communication skills, prioritization skills, and stay smart on the underlying tech.

[article] AI is rendering some IT skill sets obsolete. Some tech skills from 2010 are obsolete. Few things stay entirely static! But the pace may be accelerating for some skills that weren’t obviously open to replacement.

[blog] Introducing Community Benchmarks on Kaggle. Community-generated custom evals that provide insights into real model behavior? I like it.

[article] Hasta la vista! Microsoft finally ends extended updates for ancient Windows version. I know that someone reading this has Windows Server 2008 sitting on a server somewhere. You knew this day was coming.

[blog] Bring back opinionated architecture. Be informed, and then make some calls. Stop saying “it depends” in your architecture.

[blog] The Tool Bloat Epidemic. This post has a handful of solid suggestions for avoiding MCP tool bloat that eats your tokens and contributes to context rot.

[blog] This Week in Open Source for January 9, 2026. Good roundup of upcoming events and happenings worth paying attention to.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:



Read the whole story
alvinashcraft
18 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Day 15: Context, Tokens, and When to Compact


The conversation was going great. Forty messages in, Claude and I had built a complete authentication system. Database schema. API endpoints. Frontend components. Tests.

Then I asked Claude to add password reset.

Claude’s response referenced a “UserAuth” model we’d never created. It imported from a file path that didn’t exist. It used patterns completely different from everything we’d built.

The conversation had exceeded Claude’s effective context. Not the hard limit. The soft limit where quality degrades. Where AI can technically see all the messages but can’t actually hold them all in focus.

Understanding context and tokens isn’t academic. It’s the difference between productive sessions and frustrating ones.

What Are Tokens?

Tokens are how AI measures text. Not words. Not characters. Tokens.

A token is roughly 4 characters or about 3/4 of a word. The word “authentication” is 3-4 tokens. The word “the” is 1 token. Code tends to use more tokens per line than prose because of punctuation and special characters.
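
As a quick sketch of that rule of thumb (the 4-characters-per-token figure is an approximation; a real tokenizer gives exact counts):

```python
# Ballpark token estimator using the ~4 characters per token heuristic
# described above. Real tokenizers are more accurate; this is only a
# rough sketch for sizing conversations.

def estimate_tokens(text: str) -> int:
    """Estimate token count as roughly one token per 4 characters."""
    return max(1, (len(text) + 3) // 4)

print(estimate_tokens("authentication"))  # 14 chars -> ~4 tokens
print(estimate_tokens("the"))             # -> 1 token
```

Expect the real count to skew higher for code, since punctuation and identifiers tokenize less efficiently than prose.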

Every AI model has a context window measured in tokens. That window includes:

  • Your system prompt (CLAUDE.md, etc.)
  • The entire conversation history
  • AI’s current response as it’s being generated

When you hit the limit, the oldest messages start getting dropped. Before that hard limit, there’s a soft degradation where AI technically sees everything but can’t effectively process it all.

The Real Limits

Claude’s context window is large. 200K tokens for Claude 3. That sounds like a lot.

It is a lot for a single prompt. It’s not as much for a long conversation.

Consider a typical development session:

  • System prompt and configuration: 2,000 tokens
  • Each human message (code + explanation): 500-2,000 tokens
  • Each AI response: 1,000-5,000 tokens
  • Code files you paste in: 500-3,000 tokens each

A 40-message conversation with code can easily hit 80,000-100,000 tokens. You’re not at the hard limit, but you’re in degradation territory.
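
To see how those figures compound, here's a back-of-the-envelope calculation. The per-item numbers are assumptions: midpoints of the ranges listed above.

```python
# Rough context-budget math for the session sketched above.
# Per-item token costs are assumed midpoints of the quoted ranges.

SYSTEM_PROMPT = 2_000    # CLAUDE.md and configuration
HUMAN_MSG = 1_250        # midpoint of 500-2,000
AI_MSG = 3_000           # midpoint of 1,000-5,000
PASTED_FILE = 1_750      # midpoint of 500-3,000

def session_tokens(turns: int, pasted_files: int) -> int:
    # One turn = one human message plus one AI response.
    return SYSTEM_PROMPT + turns * (HUMAN_MSG + AI_MSG) + pasted_files * PASTED_FILE

# 20 turns (~40 messages) with a handful of pasted files:
print(session_tokens(turns=20, pasted_files=6))  # 97500
```

Well under a 200K hard limit, yet squarely in the degradation zone the next paragraph describes.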

The issue isn’t running out of space. It’s attention. AI models don’t give equal attention to all tokens. Recent messages get more focus. Earlier messages fade. By message 40, the decisions from message 5 are fuzzy at best.

Signs of Context Degradation

Watch for these indicators:

Contradicting earlier decisions. You agreed on PostgreSQL in message 10. In message 45, AI generates MySQL queries. It’s not being stubborn. It’s lost that context.

Forgetting file contents. You pasted a file in message 8. AI references it incorrectly in message 35. The details have faded.

Inventing details. AI confidently states things about your codebase that aren’t true. It’s filling gaps with plausible-sounding inventions.

Asking questions you answered. “What database are you using?” You told it. It forgot.

Pattern drift. Early code followed your patterns. Later code uses different patterns. AI lost track of the established conventions.

Quality decline. Responses that were sharp and specific become vague and generic.

When you see these signs, the conversation needs intervention.

The Compact Command

Claude Code has a /compact command that summarizes the conversation and frees up context space. Use it when:

  • The conversation is getting long (30+ messages)
  • You notice quality degradation
  • You’re about to start a new phase of work
  • AI seems to be forgetting earlier context

What compact does:

  1. Summarizes the key points from the conversation
  2. Preserves important decisions and context
  3. Removes the detailed back-and-forth
  4. Gives you a fresh context window with the essentials

After compacting, AI has room to breathe. New messages get full attention. The summary maintains continuity.

When to Compact

Compact proactively at natural breakpoints:

  • Finished Phase 1 of a feature? Compact before starting Phase 2.
  • Completed a debugging session? Compact before new work.
  • About to paste a large file? Compact first to make room.

Compact reactively when you see degradation:

  • AI contradicted something? Compact and clarify.
  • Quality dropped noticeably? Compact and refresh.
  • AI seems confused? Compact and reorient.

Don’t wait until it’s broken. Compact when things are still working. It’s preventive maintenance, not emergency repair.

Manual Compacting

If your AI tool doesn’t have automatic compacting, do it manually. Start a new conversation with a summary:

## Summary of Previous Session

We're building a card trading feature for collectyourcards.com.

### Completed
- Trade offer model (server/models/trade.ts)
- Create trade endpoint (POST /api/trades)
- Accept/decline endpoints
- Database migrations

### Key Decisions
- Trades are immutable once created (no editing)
- Both parties must have the cards they're offering
- Cards are locked during pending trade
- 48-hour expiration on pending trades

### Current Code
[paste current state of key files]

### Next Steps
- Add trade history endpoint
- Add trade notification emails
- Frontend for trade management

This is manual compacting. You’ve distilled the essential context. AI starts fresh with everything it needs.

What to Include in a Summary

Good summaries include:

What exists. Files created. Endpoints built. Tests written.

Key decisions and why. Not just “we use PostgreSQL” but “PostgreSQL for JSONB support on card metadata.”

Current state. What the code looks like now. Paste current versions of key files.

What’s next. Where you’re headed. Helps AI understand priorities.

Open questions. Anything unresolved that needs to carry forward.

Bad summaries are either too vague (“we built some stuff”) or too detailed (pasting the entire conversation). Hit the middle ground.

The 30-Message Checkpoint

I use 30 messages as a checkpoint. At message 30, I pause and assess:

  1. How’s the quality? Still sharp or getting fuzzy?
  2. Is AI remembering earlier decisions?
  3. Am I repeating myself?
  4. How much code has been pasted?

If any answers concern me, I compact. If everything’s fine, I continue with awareness that degradation is coming.

Some conversations stay coherent for 50+ messages. Some degrade at 20. The 30-message checkpoint catches most issues before they become problems.

Token-Efficient Practices

Reduce token usage to extend productive conversation length:

Don’t paste entire files when you need specific sections:

# Instead of pasting all 500 lines of userService.ts:

Here's the relevant function from userService.ts (lines 45-60):
[paste just that section]

Reference files instead of pasting when AI has access:

# Instead of pasting:
Read server/services/userService.ts and modify the validateUser function.

Summarize previous work instead of letting context accumulate:

# Instead of letting AI infer from 20 messages:
Summary: We've built CRUD for trades. The model is at server/models/trade.ts.

Remove resolved discussion:

After you’ve agreed on an approach, you don’t need the 10 messages of discussion. That’s what compacting removes.

Context Budgeting

Think of context as a budget. You have X tokens to spend. Choose wisely.

High-value token spending:

  • Current code state (AI needs to see what exists)
  • Key decisions (AI needs to respect these)
  • Specific task context (what you’re working on now)

Low-value token spending:

  • Resolved discussions (you agreed, move on)
  • Old code versions (current state matters more)
  • Tangential exploration (interesting but not relevant)

Every message costs tokens. Long AI responses cost tokens. Pasted files cost tokens.

When you’re deep in a conversation, ask: “Is this worth the context space?”

Handling Large Files

Large files are context killers. A 1000-line file can use 5000+ tokens. Options:

Paste only relevant sections:

Here's the authentication middleware (server/middleware/auth.ts lines 20-45):
[relevant section]

The rest of the file handles [brief summary].

Let AI read files directly:

In Claude Code and similar tools:

Read server/middleware/auth.ts and tell me what changes are needed for multi-tenant support.

AI reads the file on demand without clogging the conversation context.

Create focused reference files:

Instead of one massive service file, keep related concerns separate. Smaller files mean smaller context footprint.

The Fresh Start Alternative

Sometimes compacting isn’t enough. The conversation is too tangled. Better to start fresh.

Fresh start criteria:

  • Conversation has fundamental confusion (not just detail loss)
  • You’ve compacted and quality is still poor
  • You’re changing direction significantly
  • The codebase has changed substantially since conversation started

Fresh starts aren’t failure. They’re recognition that a clean slate serves better than patching a degraded conversation.

Multi-Session Projects

For projects spanning days or weeks:

End-of-session summary:

Before closing for the day, write a summary. What’s done. What’s decided. What’s next. Save it.

Start-of-session context:

Begin the next session by providing that summary. AI has full context immediately.

Maintain a running document:

Keep a PROGRESS.md that tracks:

  • Completed work
  • Key decisions
  • Current state
  • Next steps

Update it as you go. It becomes your compacting source material.

Real Example: The Trading Feature

Here’s how context management played out for me:

  • Messages 1-15: Designed trade data model. Debated approaches. Settled on design.
  • Messages 16-30: Implemented create trade endpoint with validation.
  • Message 31: /compact to summarize before moving on.
  • Messages 32-40: Implemented accept/decline with card transfer logic.
  • Messages 41-45: Quality started degrading. AI forgot validation decisions.
  • Message 46: Manual summary + fresh context injection.
  • Messages 47-55: Completed trade history and notifications.

Total: ~55 messages across two effective conversations (one compact, one fresh start).

Without context management, I’d have hit serious degradation around message 35 and spent the rest of the session fighting confusion.

The Attention Budget Mental Model

Think of AI’s attention as a budget that depletes through the conversation.

  • Early messages: 100% attention
  • Message 20: 80% attention on early content
  • Message 40: 50% attention on early content
  • Message 60: 20% attention on early content

These numbers are illustrative, not exact. But the pattern is real.

When you compact, you reset the budget. Critical information gets summarized into recent context. Attention concentrates on what matters.

Tomorrow

You understand context and tokens. You know when to compact. Tomorrow we’ll put AI to work in a specialized role: security auditor. Different prompts. Different focus. Using AI to find vulnerabilities you’d miss.


Try This Today

  1. Check your current AI conversation length
  2. If over 30 messages, try compacting or manual summary
  3. Notice the quality difference in responses after compacting
  4. Practice writing a compact summary: what’s done, what’s decided, what’s next

Build the habit of context awareness. Watch for degradation signs. Compact proactively.

Your conversations will stay sharper longer. And when they do degrade, you’ll know exactly what to do about it.


How Replit Secures AI-Generated Code [white paper]

AI-generated code is changing how software is built, but securing that code raises new challenges. This research explores whether AI-driven security scans are sufficient for vibe coding platforms, or whether they risk asking models to audit their own output.

Through controlled experiments on React applications with realistic vulnerability variants, we compare AI-only security scans with Replit’s hybrid approaches that combine deterministic static analysis and dependency scanning with LLM-based reasoning. Along the way, we examine how prompt sensitivity, nondeterminism, and ecosystem awareness affect real-world security outcomes.

We show that functionally equivalent code can receive different security assessments depending on syntactic form or prompt phrasing. Issues like hardcoded secrets may be detected in one representation and missed in another. More critically, dependency-level vulnerabilities and supply-chain risks remain largely invisible without traditional scanning infrastructure.

The takeaway is not that LLMs are ineffective, but that they are best used alongside deterministic tools. While LLMs can reason about business logic and intent-level issues, static analysis and dependency scanning are essential for establishing a reliable security baseline.

Key Findings:

  • AI-only security scans are nondeterministic: Identical vulnerabilities receive different classifications based on minor syntactic changes or variable naming


python-1.0.0b260114


[1.0.0b260114] - 2026-01-14

Added

  • agent-framework-azure-ai: Create/Get Agent API for Azure V2 (#3059) by @dmytrostruk
  • agent-framework-declarative: Add declarative workflow runtime (#2815) by @moonbox3
  • agent-framework-ag-ui: Add dependencies param to ag-ui FastAPI endpoint (#3191) by @moonbox3
  • agent-framework-ag-ui: Add Pydantic request model and OpenAPI tags support to AG-UI FastAPI endpoint (#2522) by @claude89757
  • agent-framework-core: Add tool call/result content types and update connectors and samples (#2971) by @moonbox3
  • agent-framework-core: Add more specific exceptions to Workflow (#3188) by @taochenms

Changed

  • agent-framework-core: [BREAKING] Refactor orchestrations (#3023) by @taochenms
  • agent-framework-core: [BREAKING] Introducing Options as TypedDict and Generic (#3140) by @eavanvalkenburg
  • agent-framework-core: [BREAKING] Removed display_name, renamed context_providers, middleware and AggregateContextProvider (#3139) by @eavanvalkenburg
  • agent-framework-core: MCP Improvements: improved connection loss behavior, pagination for loading and a param to control representation (#3154) by @eavanvalkenburg
  • agent-framework-azure-ai: Azure AI direct A2A endpoint support (#3127) by @moonbox3

Fixed

  • agent-framework-anthropic: Fix duplicate ToolCallStartEvent in streaming tool calls (#3051) by @moonbox3
  • agent-framework-anthropic: Fix Anthropic streaming response bugs (#3141) by @eavanvalkenburg
  • agent-framework-ag-ui: Execute tools with approval_mode, fix shared state, code cleanup (#3079) by @moonbox3
  • agent-framework-azure-ai: Fix AzureAIClient tool call bug for AG-UI use (#3148) by @moonbox3
  • agent-framework-core: Fix MCPStreamableHTTPTool to use new streamable_http_client API (#3088) by @Copilot
  • agent-framework-core: Multiple bug fixes (#3150) by @eavanvalkenburg

2025: The Year of the Return of the Ada Programming Language?


In 2025, the Ada programming language made what might be considered a comeback. (But don’t call it one! Yet.)

Last March, Ada broke into the TIOBE Index top 20 (reaching number 18), and by July, Ada broke the top 10 (reaching number 9 – its highest-ever position on TIOBE). It is now back to number 18.

Moreover, this month Ada also broke into the top 10 in the PopularitY of Programming Language Index (PYPL), landing at number 9.

While programming languages such as Python, C/C++ and Java continue to rank amongst the most popular languages, the resurgence of interest in Ada could partially be explained by the push to use more memory-safe languages.

Indeed, in a world increasingly focused on software security and safety, Ada stands out, said Tony Aiello, product manager at AdaCore, which provides software development tools for mission-critical systems.

“Ada is the language with the longest and strongest pedigree: Ada was designed from the ground up for secure, safety-critical systems,” Aiello said. “Ada offers strong static typing and runtime memory safety, two language features in growing demand.”

Moreover, thanks to SPARK, which brings deductive formal verification to Ada, developers can prove static memory safety and the absence of runtime errors, guaranteeing the elimination of significant classes of serious security vulnerabilities before the software is even executed, Aiello told The New Stack. SPARK is a programming language based on Ada and intended for developing high-integrity software.

These capabilities make Ada especially popular in the aerospace, defense and automotive industries, which have been receiving considerable attention recently, he said.

Meanwhile, “Ada proves that a language capable of delivering concurrency and safety can remain viable for nearly five decades,” said Brad Shimmin, an analyst at The Futurum Group. “Currently, features such as strong typing and robust memory management are driving interest in languages like Rust and Go across the board, as developers can build software with less concern for both stability and security. With Ada, it’s easy to see that those techniques and capabilities, like runtime and compile-time checks, make it a great choice for large-scale projects that need to deliver performance and stability.”

According to an AdaCore blog post, Ada is a powerful language for developing safe, reliable, high-performance software. Its combination of strong typing, memory safety, efficient code generation, and precise low-level control makes it an ideal choice for high-integrity systems.

Also, “as an imperative, procedural language, Ada feels familiar to developers with experience in C, C++, or Rust,” the post said.

Updating FAA Systems

The U.S. Federal Aviation Administration (FAA) has used Ada extensively in its air traffic control (ATC) systems. In 1989, IBM, under contract with the FAA, began working with the agency to deliver the Advanced Automation System (AAS) program, an ambitious effort to modernize the entire ATC system. The AAS was slated to consist of 2 million lines of Ada code. The FAA’s interest in Ada coincided with the Department of Defense (DoD) mandating Ada for its systems.

The DoD mandated Ada as its standard language in 1987, but this mandate became law via the 1991 Defense Appropriations Act, which took effect June 1, 1991, and required Ada for weapons systems and other mission-critical systems. That mandate ended in 1997.

However, Ada continues to be used in critical systems within the aviation industry, including commercial aircraft like the Boeing 777, and in various ATC systems globally.

As the Trump Administration has vowed to overhaul the ATC system, the question arises: what will the FAA do with all that Ada code? Is it looking at moving to Rust or another language?

“Moving from Ada to Rust sounds like a pretty heavy lift,” Shimmin said. “Do you think, based on some of the early work we’ve seen from IBM in refactoring COBOL to Java, that we’ll be seeing more and more of these major migration/modernization efforts?” he asked.

Shimmin added that his guess is that the rise of “generative and agentic AI actually makes it less likely that companies will feel they need to ‘modernize’ code that already works and works well. I’m thinking that’s down to the fact that the biggest issue with maintenance is domain expertise and institutional knowledge, two areas where AI simply shines.”

Ada and AI?

Speaking of AI, AdaCore’s Aiello said Ada and especially SPARK are the best-suited languages for AI-assisted development.

“Both Ada and SPARK embed self-checking capabilities in the language, allowing LLMs [large language models] to reason more precisely as they develop the code and provide the user with code that is not only free of a number of common programming mistakes, but also functionally correct,” he said.

Other Factors Driving Ada’s Popularity

In addition to Ada’s memory safety and other features, some observers see other reasons for its recent spurt in popularity.

“The recent growth in languages’ popularity indices can also be explained by the modernization of the Ada/SPARK ecosystem, for example with the Alire package manager or the VS Code plugin,” said Fabien Chouteau, community and advocacy manager at AdaCore. Alire is also known as the Ada Library Repository.

Meanwhile, “We are also witnessing a virtuous cycle of new Ada/SPARK developers joining the community, driving new initiatives for collaboration, and therefore bringing new people to learn and contribute,” Chouteau said.

In addition, Nvidia’s interest in Ada could also play a part in its recent popularity.

“It’s likely that Ada’s ranking in the TIOBE index is influenced by Nvidia’s experimentation with Ada and SPARK, and may be confounded because they have products by those names as well,” said Andrew Cornwall, an analyst at Forrester Research.

Indeed, Nvidia offers its DGX Spark product and Nvidia Ada Lovelace Architecture.

Ada’s History

Ada was named after Ada Lovelace, who is considered to be the world’s first computer programmer.

According to Wikipedia, “Ada is a structured, statically typed, imperative and object-oriented high-level programming language, inspired by Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects and nondeterminism. Ada improves code safety and maintainability by using the compiler to find errors in favor of runtime errors.”

The language was created by a team led by French computer scientist Jean Ichbiah of Honeywell under contract to the DoD from 1977 to 1983 to supersede over 450 programming languages then used by the DoD.

Moreover, Ada was originally designed for embedded and real-time systems. Then the Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial and object-oriented programming (OOP). Taft is currently a vice president and the director of language research at AdaCore.

A Language That Prevents Errors

Meanwhile, an article from Ada Germany last year by Chair Tobias Philipp, Deputy Chair Christina Unger, Dr. Hubert B. Keller and experts from Ada Deutschland e.V., said “programming languages like Ada can systematically prevent entire classes of errors through strong typing and extensive compile-time and runtime checks. In Ada, an array is not merely a pointer to the first element, but a semantic structure that inherently includes its index types and bounds. Read and write operations are verified by both the compiler and at runtime.”

In addition, the article argues that 12 of the Top 25 software weaknesses as defined by the Common Weakness Enumeration (CWE) “would be precluded by the use of Ada and its syntactic and semantic checking mechanisms at compile and runtime — prevented solely through the choice of programming language, regardless of coding mistakes.”

Other Ada Users

Displaying the varied uses of Ada today, AdaCore compiled a list of organizations that depend on the language every day.

These include the Victoria Line in London, which is touted as the world’s first fully automated underground railway. Its Automatic Train Operation (ATO) system uses Ada for the main control logic, while the emergency braking system was written in SPARK.

Also, BNP Paribas, one of the world’s leading banking and financial services institutions, uses Ada to enhance its ability to meet the demands for its risk calculation models. Ada empowers a robust and reliable risk calculation engine that can handle millions of daily requests with high accuracy, performance and reliability, AdaCore explained.

Deep Blue Capital uses Ada to implement trading algorithms that execute thousands of transactions per second. The company’s engineers chose Ada not just for its efficiency, but for its compile-time safety features, strong typing and memory safety guarantees, AdaCore reported.

Additionally, Stratégies Romans, a French software vendor, developed a comprehensive 3D computer-aided design (CAD) suite using Ada. Their software supports a range of design and modelling tasks and is used by engineers and designers in industrial settings. Ada’s use here demonstrates that Ada is equally suited to large-scale interactive systems, not just embedded controllers, AdaCore noted.

The post 2025: The Year of the Return of the Ada Programming Language? appeared first on The New Stack.


New Safari developer tools provide insight into CSS Grid Lanes


You might have heard recently that Safari Technology Preview 234 landed the final plan for supporting masonry-style layouts in CSS. It’s called Grid Lanes.

web browser showing a 6-column layout of photos of various aspect ratios, packed vertically
Try out all our demos of CSS Grid Lanes today in Safari Technology Preview.

CSS Grid Lanes adds a whole new capability to CSS Grid. It lets you line up content in either columns or rows — and not both.

This layout pattern allows content of various aspect ratios to pack together. No longer do you need to truncate content artificially to make it fit. Plus, the content that’s earlier in the HTML gets grouped together towards the start of the container. If new items get lazy loaded, they appear at the end without reshuffling what’s already on screen.

It can be tricky to understand the content flow pattern as you are learning Grid Lanes. The content is not flowing down the first column to the very bottom of the container, and then back up to the top of the second column. (If you want that pattern, use CSS Multicolumn or Flexbox.)

With Grid Lanes, the content flows perpendicular to the layout shape you created. When you define columns, the content flows back and forth across those columns, just like how it would if rows existed. If you define rows, the content will flow up and down through the rows — in the column direction, as if columns were there.

Diagram showing how, for a waterfall layout, there are columns while content flows side to side; for a brick layout, the content is laid out in rows while the order flows up and down.

Having a way to see the order of items can make it easier to understand this content flow. Introducing the CSS Grid Lanes Inspector in Safari. It’s just the regular Grid Inspector, now with more features.

Grid Lanes photo demo in Safari, with Web Inspector open to the Layout panel, and all the tools for the Grid Inspector turned on. Grid lines are marked with dotted lines. Columns are labeled with numbers and sizes. And each photo is marked with a label like Item 1 — which makes it clear the order of content in the layout.

Safari’s Grid Inspector already reveals the grid lines for Grid Lanes, and labels track sizes, line numbers, line names, and area names. Now it has a new feature — “Order Numbers”.

By turning on the order numbers in the example above, we can clearly see how Item 1, 2, 3, and 4 flow across the columns, as if there were a row. Then Item 5 is in the middle right, followed by Item 6 on the far right, and so on.

You might be tempted to believe the content order doesn’t matter. With pages like this photo gallery — most users will have no idea how the photos are ordered in the HTML. But for many users, the content order has a big impact on their experience. You should always consider what it’s like to tab through content — watching one item after another sequentially come into focus. Consider what it’s like to listen to the site through a screenreader while navigating by touch or keyboard. With Grid Lanes, you can adjust flow-tolerance to reduce the jumping around and put items where people expect.

To know which value for flow tolerance to choose, it really helps to quickly see the order of items. That makes it immediately clear how your CSS impacts the result.

Order Numbers in the Grid Inspector is an extension of a feature Safari’s Flexbox Inspector has had since Safari 16.0 — marking the order of Flex items. Seeing content order is also helpful when using the order property in Flexbox.

Web browser showing photo layout — this time a Flexbox layout. The Web Inspector is open to the Layout tab, and the Flexbox Inspector is enabled. The lines of the layout are marked with dotted lines... and each item is labeled with its order.

Order Numbers in Safari’s Grid Inspector works for CSS Grid and Subgrid, as well as Grid Lanes.

Try out Safari’s layout tooling

The Grid and Flexbox layout inspectors might seem similar across browsers, but the team behind Safari’s Web Inspector has taken the time to finely polish the details. In both the Grid and Flexbox Inspectors, you can simultaneously activate as many overlays as you want. No limits. And no janky scrolling due to performance struggles.

Safari’s Flexbox Inspector visually distinguishes between excess free space and Flex gaps, since knowing which is which can solve confusion. It shows the boundaries of items, revealing how they are distributed both on the main axis and the cross axis of Flexbox containers. And it lists all the Flexbox containers, making it easier to understand what’s happening overall.

Our Grid Inspector has a simple and clear interface, making it easy to understand the options. It also lists all Grid containers. And of course, you can change the default colors of the overlays, to best contrast with your site content.

And Safari’s Grid and Flexbox Inspectors are the only browser devtools that label content order. We hope seeing the order of content in Grid Lanes helps you understand it more thoroughly and enjoy using this powerful new layout mechanism.

Try out Order Numbers

Order Numbers in Safari’s Grid Inspector shipped today in Safari Technology Preview 235. Let us know what you think. There’s still time to polish the details to make the most helpful tool possible. You can ping Jen Simmons on Bluesky or Mastodon with links, comments and ideas.

For more

Note: Learn more about Web Inspector from the Web Inspector Reference documentation.