
Building Real .NET UI with Uno Studio AI – Final Week of the Challenge




WordPress 6.9: What’s New for Bloggers, Creators, and Site Owners


WordPress 6.9 is here, bringing a handful of upgrades that make life easier for bloggers, creators, and site owners.

This release speeds up everyday work, improves how teams collaborate, and adds new block options that give you more room to shape your site the way you want.

Here’s a look at the standout WordPress 6.9 features that have arrived since the last update in April 2025, and how they help you build more with WordPress.com.

Collaborate and stage content directly in your posts

Explore the latest Site Editor updates, which make it easier to do more directly inside WordPress without relying on extra tools or touching backend code.

Block-level Notes 

Block-level Notes make collaboration much easier by letting teams leave feedback directly on the block that needs attention.

You can add threaded, resolvable notes from the toolbar or sidebar, and authors automatically get email alerts when new comments come in.

This keeps all feedback — pre-launch edits, content fixes, design tweaks, and even post-publication updates like adding new links — in one place, without needing extra tools.

Hide and Show blocks

Hide and Show lets you switch blocks on and off without deleting them, making it easier to manage content you’ll need again.

Use the visibility toggle in a block’s toolbar to temporarily hide sections like seasonal promos or recurring announcements.

Hide and Show blocks in WordPress 6.9

This gives you a simple, built-in way to stage updates without juggling duplicate blocks or storing drafts elsewhere, and your reusable content stays exactly where you left it for when you’re ready to bring it back.

Visual drag and drop

You can now see exactly where a block will land as you drag it.

The live preview makes it much easier to move things around without guessing or fixing mistakes afterward.

Visual drag and drop in WordPress 6.9

It currently works with single blocks, although multi-block dragging is expected in WordPress 7.0.

Allowed blocks UI and other workflow tools

The allowed blocks UI, found under Advanced settings (with a keyboard shortcut to copy settings: Ctrl/Cmd + Alt/Option + V), lets you specify which block types are allowed within a given container. 

Allowed blocks UI and other workflow tools in WordPress 6.9

Previously, this was only editable through block markup in code view. 

By bringing these controls into the interface, WordPress now makes it easier to build more complex layouts and features without touching code.

Enrich your content with creative blocks for improved storytelling

Take advantage of new ways to display information visually within WordPress without installing additional plugins or using custom code. 

Accordion block

The Accordion block lets you add collapsible sections with headings and panels, creating an interactive reading experience without requiring code or extra plugins.

Accordion Block WordPress 6.9

It’s ideal for adding frequently asked questions (FAQs) or for expanding details and lists to add additional context within your content.

Term Query and companion blocks

The Term Query block simplifies building category and tag pages by offering a built-in way to display them, similar to the Query Loop block.

It supports sorting options (e.g., “order-by” sorting), design tools for styling, and a toggle to turn each item into a link. 

Term Query and companion blocks in WordPress 6.9

When combined with the Term Description block, it offers a powerful setup for directory and magazine sites that use structured filtering or subpage navigation.

Supporting (companion) blocks include:

  • Term Template block
  • Term Name block
  • Term Count block

Time‑to‑Read block

The Time-to-Read block sets expectations for readers by providing an estimated reading time (including a range) based on word count. 

Time‑to‑Read block in WordPress 6.9

Although incorporating this information doesn’t directly correlate to better SEO performance, it can have an impact on user engagement, which is tangentially related.

Math block

LaTeX is a markup language and high-quality typesetting system for technical and scientific documentation.

The new Math block implements LaTeX for better visualizing mathematical equations and notations, making it especially useful for technical and educational posts. 

Math block in WordPress 6.9
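For instance, a formula you might paste into the block as standard LaTeX (this particular equation is just an illustration, not taken from the WordPress release notes):

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}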

Comment Count and Comment Link blocks

By separating the comment count from the comment link, the Comment Count and Comment Link blocks let you place comment access wherever it makes the most sense in a post.

They also let you control which posts allow comments at all.

Comment Count and Comment Link blocks in WordPress 6.9

This functionality was once exclusive to the Site Editor, but it’s now available throughout the entire editing experience.

Create and manage reusable layouts with safe drafts and flexible templates

WordPress 6.9 introduces several exciting features that make life easier for anyone building across multiple sites — cutting down on repeat work and helping you move faster without recreating the same layouts from scratch.

Starter pattern modal everywhere

All post types containing patterns (previously just pages) now display the pop-up modal for using starter patterns. 

This makes it easier for creators to drop in structured layouts across different content types, especially when working with varied or more complex designs.

Fit Text (stretchy text)

The new Fit Text option in Heading and Paragraph blocks automatically adjusts text to fill its container.

Fit Text (stretchy text) in WordPress 6.9

This gives you precise typographic control without writing custom CSS, making it easier to create eye-catching headers and hero sections that look polished across all screen sizes.

Gallery block aspect ratios and Cover block posters

The Gallery block’s new aspect ratio setting lets you apply a consistent ratio to all images with a single click from the sidebar. 

No more manual edits or custom CSS are necessary to get a clean, unified layout. 

Gallery block aspect ratios and Cover block posters in WordPress 6.9

In addition, you can add poster images to Cover blocks with video backgrounds, giving visitors on slower connections a still image to view while the video loads.

Find anything instantly with the Dashboard-wide Command Palette

You can now use the Command Palette across the entire WP Admin dashboard (not just the Site Editor), making navigation commands universally accessible. 

With a single keyboard shortcut, power users and admins can bypass repetitive menu clicking and streamline their workflows.

Press Ctrl/Cmd + K on any admin screen (Posts, Pages, Media, Settings, the Site Editor, and more) to open the search/command bar and quickly run actions or jump to content. 

Dashboard-wide Command Palette in WordPress 6.9

Developers can also register custom commands through Extensible Commands, giving users even faster access to frequently used features.

Enjoy faster load times with no extra effort

WordPress is known for performance and is constantly raising the standard with new updates. 

The latest technical improvements in WordPress 6.9 work together to boost performance without any extra setup on your part. 

These include:

  • On‑demand block CSS: Loads styles only for the blocks actually used on a page, improving performance for classic themes that normally ship more CSS than needed.
  • Optimized cron execution: Improves Core Web Vitals tangentially through better Time to First Byte, by scheduling tasks to run after the page loads.
  • Template output buffer and hidden block styles: An updated system that template developers can use to optimize HTML outputs, which results in small improvements to page performance — loading block styles only when needed, moving them to the <head> section, and reducing CSS output. It’s enabled by default for classic WordPress themes and skips loading styles for hidden blocks.

Together, these changes help your pages load faster and feel smoother for visitors, all without any extra configuration.

Try WordPress 6.9’s new features today 

WordPress 6.9 is already live on WordPress.com, so you can try the new tools right away and see how they fit into your workflow.

These updates might improve your experience as a content creator, boost user engagement, and ultimately increase blog traffic.

Test out Notes, the new storytelling blocks, and the template updates to get a feel for what’s possible.

If you create something you’re proud of, share it and tag us — we’d love to see it.

Want a faster, more reliable setup for everything in 6.9? Get started with WordPress.com.






Scrollytelling on Steroids With Scroll-State Queries


Read you a story? What fun would that be? I’ve got a better idea: let’s tell a story together.

Photopia by Adam Cadre

Do you think of scrolling as a more modern way of reading than turning pages in a book? Nope: the concept originated in ancient Egypt and is older than what we now classify as books, going back to how our ancestors read ancient physical scrolls, the earliest form of editable text in the history of writing. I am Jewish, and my earliest non-digital scrolling experience was horizontally scrolling the Torah, which can be more immersive than traditionally scrolling a webpage. The physical actions we use to navigate texts have captured the imagination of many a storyteller, leading authors to gamify the act of turning pages and to weave the physical actions of opening a book and flipping through it into the narrative itself. However, innovative experiences using non-standard scrolling haven’t been explored as thoroughly.

Photo of an ancient scroll partially unrolled inside a glass case on a wooden desk.
Photo by Taylor Flowe on Unsplash

I can sympathize with those who dismiss scrollytelling as a gimmick: it can be an annoyance if it’s there just for the sake of cleverness, but my favorite examples from over the years tell stories we couldn’t tell otherwise. There’s something uniquely immersive about stories driven by a mechanic that has lived in our species’ collective muscle memory since ancient days.

Still unconvinced of the value of scrollytelling? Alright, hypothetical annoying skeptic, let’s first warm up with some common use cases for scroll-based styling.

It’s awesome that Chrome has solid support for native scroll-driven animations without requiring JavaScript, and we see that both Safari and Firefox are actively working on support for the new scroll-driven standards. These new features facilitate optimized, smooth scroll-driven animations. The support via pure CSS syntax makes scroll-driven animation a more approachable option for designers who may be more comfortable with CSS than with the equivalent JavaScript.

Indeed, even though I am a full-stack developer who is supposed to know everything, I find that having scroll-driven animation built into the browser and available with a few lines of CSS gets my creativity flowing. It inspires me to experiment more than if I had to jump through the hoops of a proprietary library and write JavaScript, which in the past might have meant messing with Intersection Observer and fiddly code.

If animation timelines weren’t enough, Chrome has now introduced support for CSS carousel, scroll-initial-target, and scroll-state queries—all of which provide opportunities to control scrolling behaviors in CSS and style all the things based on scrolling.

In my opinion, scroll-state is more of an evolutionary than revolutionary addition to the growing range of scroll-related CSS features. Animation timelines are so powerful that they can be hacked to achieve many of the same effects we can implement with scroll-state queries. Therefore, think of scroll-state as a highly convenient, simplified subset of what we can do in more verbose hacky ways with animation timelines and/or view timelines.

Some examples of effects scroll-state simplifies are:

  1. Before scroll-state queries existed, you could hack view progress timelines to create scroll-triggered animations, but we now have snapped scroll-state queries to achieve similar effects.
  2. Before snapped queries existed, Bramus demonstrated a hack to simulate a hypothetical :snapped selector using scroll-driven animations.
  3. Before scrollable queries existed, Bramus showed how we could do similar things using scroll-timeline.

Take a moment to appreciate that Bramus is from the future, and to reflect on how scroll-state can simplify common UI patterns, such as scroll shadows, which Chris Coyier said might be his “favorite CSS trick of all time.” This year, Kevin Hamer showed how scroll-timeline can achieve scroll shadows in CSS with fewer tricks. It’s excellent, but the only thing better than clever CSS tricks is that scroll shadows no longer require a trick at all. Hacking CSS is fun, but there is something to be said for that warm fuzzy feeling that CSS was made just for your use case. This demo from the Chrome blog shows how scroll shadows and other visual affordances are easy to implement with scroll-state.
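To give a feel for how that reads in practice, here is a rough sketch of a scroll-state-based top shadow. The class names and values are mine rather than the Chrome demo’s:

.scroller {
  overflow-y: auto;
  container-type: scroll-state;
  container-name: list;
}

.scroller .top-shadow {
  position: sticky;
  top: 0;
  height: 12px;
  background: linear-gradient(to bottom, rgb(0 0 0 / 0.25), transparent);
  opacity: 0;
  transition: opacity 0.2s;
}

/* Show the shadow only while there is content to scroll back up to */
@container list scroll-state(scrollable: top) {
  .top-shadow {
    opacity: 1;
  }
}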

But the popularity of Kevin’s article suggests that normal, sane people will gravitate to practical use cases for the new CSS scroll-based features. In fact, a normal and sane author might end the article here. Unfortunately, as I revealed in a previous article, I have been cursed by a spooky shopkeeper who sells CSS tricks at a haunted carnival, so I now roam the earth attempting the unthinkable with pure CSS.

Decision time

As you reach this paragraph in the article, you realize that when you scroll, it fast-forwards reality. Therefore, after we end the discussion of scroll shadows, the shadows swallow the world outside your window, except for two glowing words hovering near your house: CSS TRICKS. You wander out through your front door and meet a street vendor standing beneath the neon sign. The letters give her multiple shadows as if she has thrown them down like discarded masks, undecided about which shade of night to wear. On the table before her lies a weathered scroll. It unrolls on its own, whispering misremembered fragments from a forgotten CSS-Tricks article: “A scroll trigger is a point of no return, like a trap sprung once the hapless user scrolls past a certain point.”

The neon flickers like a glitch, revealing another of the shopkeeper’s faces: a fire demon doppelgänger of yourself who is the villain of the CodePen we’ll descend into if you scroll further.

“Will you continue?” the fire demon hisses. “Will you scroll deeper into the madness at the far edges of CSS?”

Non-linear scrollytelling

Evidently, you are game to play with fire, so check out the pure CSS experiment below, which demonstrates a technique I call “nonlinear scrollytelling,” in which the user controls the outcome of a visual story by deciding which direction to scroll next. It’s a scrolling Choose Your Own Adventure. But if your browser is less adventurous than you are, watch the screen recording instead. The experiment will only work on Chromium-based browsers for now, because it relies on scroll-state, animation-timeline, scroll-initial-target and CSS inline conditionals.

I haven’t seen this technique in the wild, so let me know in the comments if you have seen other examples of the idea. For now, I’ll claim credit for pioneering the mechanics — but I give credit to the talented Dead Revolver for creating the awesome, affordable pixel art bundle I used for most of the graphics. The animated lightsaber icon was ripped from this cool CodePen by Ujjawal Anand, and I used ChatGPT to draw the climbable building. To make the bad guy, I reused the same spritesheet from the player character, but I implemented the Mirror Match trope from Mortal Kombat, using color shifting to create a “new” character who I evilized by casting the following spell in CSS:

.evil-twin {
  transform: rotateY(180deg);
  filter: invert(24%) sepia(99%) saturate(5431%) hue-rotate(354deg) brightness(93%) contrast(122%);
  background-image: url(/* same spritesheet as the player character */);
}

It’s cool that CSS helps recycle existing assets for those like me who are drawing-challenged. I also wanted to make sure that well-supported CSS features like transform and filter didn’t feel left out of the fun in an experiment filled with newer, emergent CSS features.

But if you’ve come this far, you’re probably eager to understand the scroll-related CSS logic.

Our story begins in the middle of the end

You may have noticed that our experiment earns extra crazy points as soon as it loads, starting at the bottom of the page and halfway along its width. From there, the player can choose to scroll left to run away, or scroll right to walk unarmed towards the bad guy, if they want to compete with the madness level of the game’s creator.

This explainer for the emergent scroll-initial-target property shows that controlling scroll position on load was previously possible by hacking CSS animations and the scroll-snap-align property. However, similar to what we discussed above about the value proposition of scroll-state, a feature like scroll-initial-target is exciting because it simplifies something that previously required verbose, fragile hacks, which can now be replaced with more succinct and reliable CSS:

.spawn-point {
  position: absolute;
  left: 400vw;
  scroll-initial-target: nearest;
}

As cool as this is, we should only subvert expectations for how a webpage behaves if we have a sufficient reason. For instance, CSS like the above could have simplified my pure CSS swiper experiment, but Chrome only added scroll-initial-target in February 2025, the month after I wrote that article. Using scroll-initial-target would be justified in the swiper scenario, since the crux of that design was that the user started in the middle with the option to swipe left or right.

A similar dilemma is central to the opening of our scrollytelling narrative. The disorienting experience of finding ourselves in an unexpected scroll position with only the option to scroll horizontally heightens the drama, as the user has to adapt to an unusual way of interacting while the bad guy rapidly approaches. I’m feeling generous, so let’s give the user 20 seconds to figure it out, but you can experiment with different timeframes by editing the --chase-time custom property at the top of the source file.

We’re going to create a CSS implementation of the slasher movie trope in which a walking aggressor can’t be outrun. We do that by marking the bad guy as position: fixed, then adding an infinite walk-cycle animation and another animation that moves him relentlessly from right to left across the screen. Meanwhile, we give the player character a running animation and position him based on a horizontal animation timeline. He can run, but he can’t hide.

body {
  .idle {
    animation: idleAnim 1s steps(6) infinite;
  }

  /* --scroll-direction is populated using the clever property Bramus
     demonstrates here: https://www.bram.us/2023/10/23/css-scroll-detection */

  .sprite {
    transform: rotateY(calc(1deg * min(0, var(--scroll-direction) * 180)));
  }

  @container not style(--scroll-direction: 0) {
    .sprite {
      animation: runAnim 0.8s steps(8) infinite;
    }
  }

  .evil-twin-wrapper {
    position: fixed;
    bottom: 5px;
    z-index: 1000;
    margin-left: var(--enemy-x-offset);
    /* we'll explain later how we detect the way the game should end */
    --follow: if(style(--game-state: ending): paused; else: running); 
    animation: var(--chase-time) forwards linear evil-twin-chase var(--follow);
  }
}
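How the player is positioned isn’t shown above, but a rough reconstruction looks something like the following. To be clear, the values here are my own sketch rather than the demo’s exact code; the real CodePen keeps the offset in --player-x-offset so the endgame logic further down can compare it with the enemy’s position.

/* Register the offset so it animates smoothly and can be read elsewhere */
@property --player-x-offset {
  syntax: "<length-percentage>";
  inherits: true;
  initial-value: 0px;
}

/* Drive the player's horizontal offset from the page's horizontal
   scroll position via an anonymous scroll timeline */
@keyframes player-travel {
  from { --player-x-offset: 2vw; }
  to { --player-x-offset: 80vw; }
}

.player-wrapper {
  position: fixed;
  bottom: 5px;
  translate: var(--player-x-offset) 0;
  animation: player-travel linear both;
  animation-timeline: scroll(root x);
}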

He can’t hide, but we’ll next introduce a second scroll-based decision point using scroll-state to detect when our hero has been backed into a corner and see if we can help him.

How scroll-state could save your life

As our hero runs away to the left, the buildings and sky in the cityscape background show off a few layers of parallax scrolling by assigning each layer an anonymous animation timeline and an animation that moves each layer faster than the layer behind it.

.sky, .buildings-back, .buildings-mid, .sky-vertical, .buildings-back-vertical, .buildings-mid-vertical {
  position: fixed;
  top: 0;
  left: 0;
  width: 800%;
  height: max(100vh, 300px);
  background-size: auto max(100vh, 300px);
  background-repeat: repeat-x;
  animation-timing-function: linear;
  animation-timeline: scroll(x);
}

/*...repetitively assign the corresponding animations to each layer...*/

@keyframes move-sky {
  from {
    transform: translateX(0);
  }
  to {
    transform: translateX(-2.5%);
  }
}

@keyframes move-back {
  from {
    transform: translateX(0);
  }
  to {
    transform: translateX(-6.25%);
  }
}

@keyframes move-mid {
  from {
    transform: translateX(0);
  }
  to {
    transform: translateX(-12.5%);
  }
}

This usage of animation timelines is what they were designed for, which is why the code is straightforward. If we had to, we could push the boundaries and use the same technique to set a Houdini variable in an animation timeline to detect when the player reaches the left corner of the screen — but thanks to scroll-state queries, we have a cleaner option.

@container scroll-state((scrollable: left)) {
  body {
    overflow-y: hidden;
  }
}

@container scroll-state((scrollable: bottom)) {
  body {
    width: 0;
  }
}

That’s all we need to toggle vertical and horizontal scrolling based on position! This is the basis that allows the player to escape from being slashed by the bad guy. Now we can scroll up and down to climb the ladder only when the player reaches the left corner where the ladder is, and disallow horizontal scrolling while he is climbing.

I could have made the game detect reaching the left of the screen using animation timelines, but that would involve custom property toggles, which are more verbose and error-prone.

When the player climbs to the top of the ladder to collect the lightsaber, we do need one toggle property so the game will remember we have collected the weapon, but it’s simpler than if we had used animation timelines.

@keyframes collect-saber {
  from {
    --player-has-saber: false;
  }
  to {
    --player-has-saber: true;
  }
}

body {
  animation: .25s forwards var(--saber-collection-state, paused) collect-saber;
}

@container scroll-state(not (scrollable: top)) {
  body {
    --saber-collection-state: running;
  }
}


@container style(--player-has-saber: true) {
  .sprite {
    background-image: url(/*combat spritesheet*/);
  }

  .lightsaber {
    visibility: hidden;
  }
}

Contrariwise, the animation cycle while the sprite is climbing the ladder is a job for animation-timeline, which assigns an anonymous vertical scroll timeline to the player sprite. This is applied conditionally when our scroll-state query detects that the player is between the bottom and the top of the ladder. It’s a nice example of how animation timelines and scroll-state queries are good at different things, and work well together.

@container scroll-state((scrollable: top) and (scrollable: bottom)) {
  .player-wrapper {
    .sprite {
      animation: climbAnim 1s steps(8);
      animation-timeline: scroll(root y);
      animation-iteration-count: 10;
    }
  }
}

Finish him with fatal conditionality

We apply the techniques I discovered in my CSS collision detection article to detect when the two characters meet for their showdown. At that point, we want to disable scrolling entirely and display the appropriate non-interactive endgame cutscene depending on the choices our user made. Notice that if we detect the good guy won, he only strikes with the sword once, whereas the bad guy will continue to slash infinitely, even after the good guy is dead. What can I say — I was working on this CodePen around Halloween.

In the past, I wrote an article questioning the need for inline CSS conditionals — but now that they’ve landed in Chrome, I find them addictive, especially when creating a heavily conditional CSS experiment like nonlinear scrollytelling. I like to imagine that the new if() function stands for Interactive Fiction. Below is how I detect the endgame conditions and choose which animations to play in the final cutscene. I am not sure of the most readable way to space out if() code in CSS, so feel free to start holy wars on that topic in the comments.

body {
  --min-of-player-and-enemy-x: min(var(--player-x-offset), var(--enemy-x-offset) - 10px);
  --max-of-player-and-enemy-y: max(var(--player-y-offset), 5px);
  --game-state:
    if(
      style(--min-of-player-and-enemy-x: calc(var(--enemy-x-offset) - 10px)) and style(--max-of-player-and-enemy-y: 5px): 
        ending; 
      else: 
        playing
    );
  overflow:
    if(
      style(--game-state: ending): 
        hidden; 
      else: 
        scroll
    );
}

@container style(--player-has-saber: true) and style(--game-state: ending) {
  .player-wrapper {
    .sprite {
      animation: attack 0.7s steps(4) forwards;
    }

    .speech-bubble {
      animation: show-endgame-message 3s linear 1s forwards;

      &::before {
        content: 'Refresh the page to play again';
      }
    }
  }

  .evil-twin-wrapper {
    .evil-twin {
      animation: evil-twin-die 0.8s steps(4) .7s forwards;
    }
  }
}

@container style(--player-has-saber: false) and style(--game-state: ending) {
  .player-wrapper {
    .sprite {
      animation: player-die .8s steps(6) .7s forwards;
    }
  }

  .evil-twin-wrapper {
    .speech-bubble {
      animation: show-endgame-message 3s linear 1s forwards;
      display: block;

      &::before {
        content: 'Baha! Refresh the page to fight me again';
      }
    }

    .evil-twin {
      animation: attack 0.8s steps(4) infinite;
    }
  }
}

Should we non-linearly scrollytell all the things?

I am glad you asked, hypothetical troll who wrote that heading. Of course, even putting the technical challenges aside, you know that this won’t always be the right approach for a website. As Andy Clarke recently pointed out here on CSS-Tricks, design is storytelling. The needs of every story are different, but I found my little pixel art guy’s emotional story arc requires non-linear scrollytelling.

I think this particular example isn’t a gimmick and is a legitimate form of web design expression. The demo tells a simple story, but my wife pointed out that a personal situation I am dealing with has strong analogies to the pixel guy’s journey. He finds himself in a situation where the only sane option is to allow himself to be backed into a corner, but when all seems lost, he finds a way to rise above the adversity. Then he learns that the moral high ground is its own form of trap, so he must put his own spin on the wisdom of Sun Tzu that “to know your enemy, you must become your enemy.” He apparently lowers himself back to the aggressor’s level — but he only does what is necessary. The bittersweet moral is that survival sometimes requires taking a leaf out of the enemy’s book — but the user has been guiding the hero through this story, which helps the audience to understand that the good guy’s motivations are not comparable to those of his adversary. While testing the CodePen, I found the story moving and even suspenseful in an 8-bit nostalgia kind of way, even if some of that suspense was my uncertainty about whether I would get it working.

From a technical point of view, I think building a full-scale website based on this idea would require a mix of CSS and JavaScript, because storing state in CSS currently requires hacks (like this one, which is cool but also highly experimental). The paused animation approach to remember that the player collected the sword can glitch due to timer drift, so there is a small chance the dude will start the game with the lightsaber already in his hand! If you resize the window during the endgame, you can glitch the game, and then things get really weird. By contrast, something like the scroll snap events — already supported in Chrome — would allow us to store state and even play sounds using a script that fires based on scroll interactions.

It seems like we already have enough in CSS to build a site like this one, which uses horizontal multimedia scrollytelling to raise awareness that interpersonal violence exists on a continuum and tends to escalate if the target is unable to recognize the early warning signs. That’s a worthy topic I unfortunately have some experience with, and the usage of horizontal scrollytelling to address it demonstrates that a wide variety of stories can be told engagingly through scrollytelling.

I leave to the various futures (not to all) my garden of forking paths.

Jorge Luis Borges


Scrollytelling on Steroids With Scroll-State Queries originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


Masonry: Things You Won’t Need A Library For Anymore


About 15 years ago, I was working at a company where we built apps for travel agents, airport workers, and airline companies. We also built our own in-house framework for UI components and single-page app capabilities.

We had components for everything: fields, buttons, tabs, ranges, datatables, menus, datepickers, selects, and multiselects. We even had a div component. Our div component was great, by the way: it allowed us to do rounded corners in all browsers, which, believe it or not, wasn't an easy thing to do at the time.

Our work took place at a point in our history when JS, Ajax, and dynamic HTML were seen as a revolution that brought us into the future. Suddenly, we could update a page dynamically, get data from a server, and avoid having to navigate to other pages, which was seen as slow and flashed a big white rectangle on the screen between the two pages.

There was a phrase, made popular by Jeff Atwood (the co-founder of Stack Overflow), which read:

“Any application that can be written in JavaScript will eventually be written in JavaScript.”

Jeff Atwood

To us at the time, this felt like a dare to actually go and create those apps. It felt like a blanket approval to do everything with JS.

So we did everything with JS, and we didn’t really take the time to research other ways of doing things. We didn’t really feel the incentive to properly learn what HTML and CSS could do. We didn’t really perceive the web as an evolving app platform in its entirety. We mostly saw it as something we needed to work around, especially when it came to browser support. We could just throw more JS at it to get things done.

Would taking the time to learn more about how the web worked and what was available on the platform have helped me? Sure, I could probably have shaved off a bunch of code that wasn’t truly needed. But, at the time, maybe not that much.

You see, browser differences were pretty significant back then. This was a time when Internet Explorer was still the dominant browser, with Firefox a close second but starting to lose market share as Chrome rapidly gained popularity. Although Chrome and Firefox were quite good at agreeing on web standards, the environments in which our apps were running meant that we had to support IE6 for a long time. Even when we were allowed to support IE8, we still had to deal with a lot of differences between browsers. Not only that, but the web of the time just didn't have that many capabilities built right into the platform.

Fast forward to today. Things have changed tremendously. Not only do we have more of these capabilities than ever before, but the rate at which they become available has increased as well.

Let me ask the question again, then: Would taking the time to learn more about how the web works and what is available on the platform help you today? Absolutely yes. Learning to understand and use the web platform today puts you at a huge advantage over other developers.

Whether you work on performance, accessibility, responsiveness, all of them together, or just shipping UI features, if you want to do it as a responsible engineer, knowing the tools that are available to you helps you reach your goals faster and better.

Some Things You Might Not Need A Library For Anymore

Knowing what browsers support today, the question, then, is: What can we ditch? Do we need a div component to do rounded corners in 2025? Of course, we don’t. The border-radius property has been supported by all currently used browsers for more than 15 years at this point. And corner-shape is also coming soon, for even fancier corners.

Let’s take a look at relatively recent features that are now available in all major browsers, and which you can use to replace existing dependencies in your codebase.

The point isn’t to immediately ditch all your beloved libraries and rewrite your codebase. As with everything else, you’ll need to take browser support into account first and decide based on other factors specific to your project. The following features are implemented in the three main browser engines (Chromium, WebKit, and Gecko), but you might have different browser support requirements that prevent you from using them right away. Now is still a good time to learn about these features, though, and perhaps plan to use them at some point.

Popovers And Dialogs

The Popover API, the <dialog> HTML element, and the ::backdrop pseudo-element can help you get rid of dependencies on popup, tooltip, and dialog libraries, such as Floating UI, Tippy.js, Tether, or React Tooltip.

They handle accessibility and focus management for you, out of the box, are highly customizable by using CSS, and can easily be animated.
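As a small example of that customizability (the selectors are standard; the specific values are my own), fading a popover or dialog in and dimming the page behind it takes only a few declarations and no library code:

[popover],
dialog {
  opacity: 0;
  transition: opacity 0.2s, display 0.2s allow-discrete, overlay 0.2s allow-discrete;
}

[popover]:popover-open,
dialog[open] {
  opacity: 1;

  /* Start the entry transition from the hidden state */
  @starting-style {
    opacity: 0;
  }
}

/* Dim everything behind the top-layer element */
[popover]::backdrop,
dialog::backdrop {
  background: rgb(0 0 0 / 0.4);
}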

Accordions

The <details> element, its name attribute for mutually exclusive elements, and the ::details-content pseudo-element remove the need for accordion components like the Bootstrap Accordion or the React Accordion component.

Just using the platform here means it’s easier for folks who know HTML/CSS to understand your code without having to first learn to use a specific library. It also means you’re immune to breaking changes in the library or the discontinuation of that library. And, of course, it means less code to download and run. Mutually exclusive details elements don’t need JS to open, close, or animate.
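For instance, an exclusive FAQ needs nothing more than <details> elements sharing the same name attribute in the markup, and the open/close animation can be a few lines of CSS against the new pseudo-element. A sketch, with arbitrary durations:

:root {
  /* Lets block-size transition to and from the auto keyword */
  interpolate-size: allow-keywords;
}

details::details-content {
  block-size: 0;
  overflow: hidden;
  transition: block-size 0.3s, content-visibility 0.3s allow-discrete;
}

details[open]::details-content {
  block-size: auto;
}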

CSS Syntax

Cascade layers (for a more organized CSS codebase), CSS nesting (for more compact CSS), new color functions such as relative colors and color-mix(), and new math functions like abs(), sign(), pow(), and others help reduce dependencies on CSS pre-processors, utility libraries like Bootstrap and Tailwind, or even runtime CSS-in-JS libraries.

The game changer :has(), one of the most requested features for a long time, removes the need for more complicated JS-based solutions.
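A quick sketch of the kind of thing that used to need JavaScript (the class names are illustrative):

/* Highlight the label of any field whose input is invalid after interaction */
.field:has(input:user-invalid) > label {
  color: crimson;
}

/* Dim the submit button while the form still has invalid fields */
form:has(:user-invalid) button[type="submit"] {
  opacity: 0.5;
  pointer-events: none;
}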

JS Utilities

Modern Array methods like findLast() or at(), as well as Set methods like difference(), intersection(), union(), and others, can reduce dependencies on libraries like Lodash.

Container Queries

Container queries make UI components respond to things other than the viewport size, and therefore make them more reusable across different contexts.

No need to use a JS-heavy UI library for this anymore, and no need to use a polyfill either.
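A minimal sketch (the names are illustrative): a card lays itself out horizontally whenever its container, not the viewport, is wide enough.

.card-list {
  container-type: inline-size;
}

/* Applies based on the width of .card-list, wherever it happens to live */
@container (min-width: 40rem) {
  .card {
    display: flex;
    gap: 1rem;
  }
}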

Layout

Grid, subgrid, flexbox, and multi-column have been around for a long time now, but looking at the results of the State of CSS surveys, it’s clear that developers tend to be very cautious about adopting new things, and wait a long time before they do.

These features have been Baseline for a long time, and you could use them to get rid of dependencies on things like Bootstrap’s grid system, Foundation Framework’s flexbox utilities, Bulma fixed grid, Materialize grid, or Tailwind columns.

I’m not saying you should drop your framework. Your team adopted it for a reason, and removing it might be a big project. But looking at what the web platform can offer without a third-party wrapper on top comes with a lot of benefits.

Things You Might Not Need Anymore In The Near Future

Now, let’s take a quick look at some of the things you will not need a library for in the near future. That is to say, the things below are not quite ready for mass adoption, but being aware of them and planning for potential later use can be helpful.

Anchor Positioning

CSS anchor positioning handles the positioning of popovers and tooltips relative to other elements, and takes care of keeping them in view, even when moving, scrolling, or resizing the page.

This is a great complement to the Popover API mentioned before, which will make it even easier to migrate away from more performance-intensive JS solutions.
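In rough strokes (the names here are mine, and support is still settling), tethering a tooltip to its trigger looks like this:

.trigger {
  anchor-name: --tip;
}

.tooltip {
  position: fixed;
  position-anchor: --tip;
  position-area: block-end;           /* sit below the anchor by default */
  position-try-fallbacks: flip-block; /* flip above if there's no room */
}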

Navigation API

The Navigation API can be used to handle navigation in single-page apps and might be a great complement to, or even a replacement for, React Router, Next.js routing, or Angular routing.

View Transitions API

The View Transitions API can animate between the different states of a page. On a single-page application, this makes smooth transitions between states very easy, and can help you get rid of animation libraries such as Anime.js, GSAP, or Motion.dev.

Even better, the API can also be used with multiple-page applications.

Remember earlier, when I said that the reason we built single-page apps at the company where I worked 15 years ago was to avoid the white flash of page reloads when navigating? Had that API been available at the time, we would have been able to achieve beautiful page transition effects without a single-page framework and without a huge initial download of the entire app.
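For a multi-page site, opting in can be as small as this (the durations are arbitrary, and cross-document transition support is still rolling out):

/* Opt the site into cross-document view transitions */
@view-transition {
  navigation: auto;
}

/* Tune the default cross-fade between the old and new page */
::view-transition-old(root),
::view-transition-new(root) {
  animation-duration: 0.3s;
}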

Scroll-driven Animations

Scroll-driven animations run on the user’s scroll position, rather than over time, making them a great solution for storytelling and product tours.

Some people have gone a bit over the top with it, but when used well, this can be a very effective design tool, and can help get rid of libraries like ScrollReveal, GSAP Scroll, or WOW.js.
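The classic example is a reading progress bar, which needs no JavaScript at all (the values here are illustrative):

.progress {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 4px;
  background: rebeccapurple;
  transform-origin: left;
  /* Progress tracks how far down the document the user has scrolled */
  animation: grow-progress linear;
  animation-timeline: scroll(root block);
}

@keyframes grow-progress {
  from { transform: scaleX(0); }
  to { transform: scaleX(1); }
}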

Customizable Selects

A customizable select is a normal <select> element that lets you fully customize its appearance and content, while ensuring accessibility and performance benefits.

This has been a long time coming and a highly requested feature, and it’s amazing to see it arriving on the web platform soon. With a built-in customizable select, you can finally ditch all that hard-to-maintain JS code for your custom select components.
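The opt-in, as currently proposed (the exact syntax may still evolve), is a single appearance value, after which the drop-down picker can be styled like any other element:

/* Opt the select and its picker into the new, styleable rendering */
select,
::picker(select) {
  appearance: base-select;
}

/* Then style the drop-down picker like any other box */
::picker(select) {
  border-radius: 0.5rem;
  box-shadow: 0 4px 16px rgb(0 0 0 / 0.2);
}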

CSS Masonry

CSS Masonry is another upcoming web platform feature that I want to spend more time on.

With CSS Masonry, you can achieve layouts that are very hard, or even impossible, with flex, grid, or other built-in CSS layout primitives. Developers often resort to using third-party libraries to achieve Masonry layouts, such as the Masonry JS library.

But, more on that later. Let’s wrap this point up before moving on to Masonry.

Why You Should Care

The job market is full of web developers with experience in JavaScript and the latest frameworks of the day. So, really, what’s the point in learning to use the web platform primitives more, if you can do the same things with the libraries, utilities, and frameworks you already know today?

When an entire industry relies on these frameworks, and you can just pull in the right library, shouldn’t browser vendors just work with these libraries to make them load and run faster, rather than trying to convince developers to use the platform instead?

First of all, we do work with library authors, and we do make frameworks better by learning about what they use and improving those areas.

But secondly, “just using the platform” can bring pretty significant benefits.

Sending Less Code To Devices

The main benefit is that you end up sending far less code to your clients’ devices.

According to the 2024 Web Almanac, the average number of HTTP requests is around 70 per site, and the largest share of those comes from JavaScript: the median number of JS requests per page is 23, up 8% since 2022. In 2024, JS also overtook images as the dominant file type.

And page size continues to grow year over year. The median page weight is around 2MB now, which is 1.8MB more than it was 10 years ago.

Sure, your internet connection speed has probably increased, too, but that’s not the case for everyone. And not everyone has the same device capabilities either.

Pulling in third-party code for things you can do with the platform instead most probably means shipping more code, and therefore reaching fewer customers than you normally would. On the web, bad loading performance leads to large abandonment rates and hurts brand reputation.

Running Less Code On Devices

Furthermore, the code you do ship on your customers’ devices likely runs faster if it uses fewer JavaScript abstractions on top of the platform. It’s also probably more responsive and more accessible by default. All of this leads to more and happier customers.

Check my colleague Alex Russell’s yearly performance inequality gap blog, which shows that premium devices are largely absent from markets with billions of users due to wealth inequality. And this gap is only growing over time.

Built-in Masonry Layout

One web platform feature that’s coming soon and which I’m very excited about is CSS Masonry.

Let me start by explaining what Masonry is.

What Is Masonry

Masonry is a type of layout that was made popular by Pinterest years ago. It creates independent tracks of content within which items pack themselves up as close to the start of the track as they can.

Many people see Masonry as a great option for portfolios and photo galleries, which it certainly can do. But Masonry is more flexible than what you see on Pinterest, and it’s not limited to just waterfall-like layouts.

In a Masonry layout:

  • Tracks can be columns or rows.
  • Tracks of content don’t all have to be the same size.
  • Items can span multiple tracks.
  • Items can be placed on specific tracks; they don’t have to always follow the automatic placement algorithm.

Demos

Here are a few simple demos I made by using the upcoming implementation of CSS Masonry in Chromium.

A photo gallery demo, showing how items (the title in this case) can span multiple tracks.

Another photo gallery showing tracks of different sizes.

A news site layout with some tracks wider than others, and some items spanning the entire width of the layout.

A kanban board showing that items can be placed onto specific tracks.

Note: The previous demos were made with a version of Chromium that’s not yet available to most web users, because CSS Masonry is only just starting to be implemented in browsers.

However, web developers have been happily using libraries to create Masonry layouts for years already.

Sites Using Masonry Today

Indeed, Masonry is pretty common on the web today. Here are a few examples I found besides Pinterest:

And a few more, less obvious, examples:

So, how were these layouts created?

Workarounds

One trick I’ve seen is to use a Flexbox layout instead, changing its direction to column and setting it to wrap.

This way, you can place items of different heights in multiple, independent columns, giving the impression of a Masonry layout, roughly as sketched below.
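A sketch of that workaround (the values are illustrative), including the fixed container height that makes the wrapping happen:

.fake-masonry {
  display: flex;
  flex-direction: column;
  flex-wrap: wrap;
  gap: 1rem;
  /* The container needs a fixed height, or nothing wraps */
  height: 1200px;
}

.fake-masonry > * {
  /* Roughly four columns; items keep their natural heights */
  width: calc(25% - 1rem);
}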

There are, however, two limitations with this workaround:

  1. The order of items is different from what it would be with a real Masonry layout. With Flexbox, items fill the first column first and, when it’s full, then go to the next column. With Masonry, items would stack in whichever track (or column in this case) has the most space available.
  2. But also, and perhaps more importantly, this workaround requires that you set a fixed height on the Flexbox container; otherwise, no wrapping occurs.

Third-party Masonry Libraries

For more advanced cases, developers have been using libraries.

The most well-known and popular library for this is simply called Masonry, and it gets downloaded about 200,000 times per week according to NPM.

Squarespace also provides a layout component that renders a Masonry layout as a no-code alternative, and many sites use it.

Both of these options use JavaScript code to place items in the layout.

Built-in Masonry

I’m really excited that Masonry is now starting to appear in browsers as a built-in CSS feature. Over time, you will be able to use Masonry just like you do Grid or Flexbox, that is, without needing any workarounds or third-party code.

My team at Microsoft has been implementing built-in Masonry support in the Chromium open source project, which Edge, Chrome, and many other browsers are based on. Mozilla was actually the first browser vendor to propose an experimental implementation of Masonry back in 2020. And Apple has also been very interested in making this new web layout primitive happen.

The work to standardize the feature is also moving ahead, with agreement within the CSS Working Group about the general direction and even a new display type, display: grid-lanes.

If you want to learn more about Masonry and track progress, check out my CSS Masonry resources page.

In time, when Masonry becomes a Baseline feature, just like Grid or Flexbox, we’ll be able to simply use it and benefit from:

  • Better performance,
  • Better responsiveness,
  • Ease of use and simpler code.

Let’s take a closer look at these.

Better Performance

Making your own Masonry-like layout system, or using a third-party library instead, means you’ll have to run JavaScript code to place items on the screen. This also means that this code will be render blocking. Indeed, either nothing will appear, or things won’t be in the right places or of the right sizes, until that JavaScript code has run.

Masonry layout is often used for the main part of a web page, which means the code would be making your main content appear later than it otherwise could, degrading your LCP (Largest Contentful Paint) metric, which plays a big role in perceived performance and search engine optimization.

I tested the Masonry JS library with a simple layout and by simulating a slow 4G connection in DevTools. The library is not very big (24KB, 7.8KB gzipped), but it took 600ms to load under my test conditions.

Here is a performance recording showing that long 600ms load time for the Masonry library, and that no other rendering activity happened while that was happening:

In addition, after the initial load time, the downloaded script then needed to be parsed, compiled, and then run. All of which, as mentioned before, was blocking the rendering of the page.

With a built-in Masonry implementation in the browser, we won’t have a script to load and run. The browser engine will just do its thing during the initial page rendering step.

Better Responsiveness

Similar to when a page first loads, resizing the browser window leads to rendering the layout in that page again. At this point, though, if the page is using the Masonry JS library, there’s no need to load the script again, because it’s already here. However, the code that moves items in the right places needs to run.

Now this particular library seems to be pretty fast at doing this when the page loads. However, it animates the items when they need to move to a different place on window resize, and this makes a big difference.

Of course, users don’t spend time resizing their browser windows as much as we developers do. But this animated resizing experience can be pretty jarring and adds to the perceived time it takes for the page to adapt to its new size.

Ease Of Use And Simpler Code

How easy it is to use a web feature and how simple the code looks are important factors that can make a big difference for your team. They can’t ever be as important as the final user experience, of course, but developer experience impacts maintainability. Using a built-in web feature comes with important benefits on that front:

  • Developers who already know HTML, CSS, and JS will most likely be able to use that feature easily because it’s been designed to integrate well and be consistent with the rest of the web platform.
  • There’s no risk of breaking changes being introduced in how the feature is used.
  • There’s almost zero risk of that feature becoming deprecated or unmaintained.

In the case of built-in Masonry, because it’s a layout primitive, you use it from CSS, just like Grid or Flexbox, no JS involved. Also, other layout-related CSS properties, such as gap, work as you’d expect them to. There are no tricks or workarounds to know about, and the things you do learn are documented on MDN.

For the Masonry JS lib, initialization is a bit complex: it requires a data attribute with a specific syntax, along with hidden HTML elements to set the column and gap sizes.

Plus, if you want to span columns, you need to include the gap size yourself to avoid problems:

<script src="https://unpkg.com/masonry-layout@4.2.2/dist/masonry.pkgd.min.js"></script>
<style>
  .track-sizer,
  .item {
    width: 20%;
  }
  .gutter-sizer {
    width: 1rem;
  }
  .item {
    height: 100px;
    margin-block-end: 1rem;
  }
  .item:nth-child(odd) {
    height: 200px;
  }
  .item--width2 {
    width: calc(40% + 1rem);
  }
</style>

<div class="container"
  data-masonry='{ "itemSelector": ".item", "columnWidth": ".track-sizer", "percentPosition": true, "gutter": ".gutter-sizer" }'>
  <div class="track-sizer"></div>
  <div class="gutter-sizer"></div>
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>

Let’s compare this to what a built-in Masonry implementation would look like:

<style>
  .container {
    display: grid-lanes;
    grid-lanes: repeat(4, 20%);
    gap: 1rem;
  }
  .item {
    height: 100px;
  }
  .item:nth-child(odd) {
    height: 200px;
  }
  .item--width2 {
    grid-column: span 2;
  }
</style>

<div class="container">
  <div class="item"></div>
  <div class="item item--width2"></div>
  <div class="item"></div>
  ...
</div>

Simpler, more compact code that can just use things like gap, where spanning tracks is done with span 2 (just like in Grid), and that doesn’t require you to calculate a width that accounts for the gap size.

How To Know What’s Available And When It’s Available?

Overall, the question isn’t really if you should use built-in Masonry over a JS library, but rather when. The Masonry JS library is amazing and has been filling a gap in the web platform for many years, and for many happy developers and users. It has a few drawbacks if you compare it to a built-in Masonry implementation, of course, but those are not important if that implementation isn’t ready.

It’s easy for me to list these cool new web platform features because I work at a browser vendor, and I therefore tend to know what’s coming. But developers often share, survey after survey, that keeping track of new things is hard. Staying informed is difficult, and companies don’t always prioritize learning anyway.

To help with this, here are a few resources that provide updates in simple and compact ways so you can get the information you need quickly:

If you have a bit more time, you might also be interested in browser vendors’ release notes:

For even more resources, check out my Navigating the Web Platform Cheatsheet.

My Thing Is Still Not Implemented

That’s the other side of the problem. Even if you do find the time, energy, and ways to keep track, there’s still frustration with getting your voice heard and your favorite features implemented.

Maybe you’ve been waiting for years for a specific bug to be resolved, or a specific feature to ship in a browser where it’s still missing.

What I’ll say is browser vendors do listen. I’m part of several cross-organization teams where we discuss developer signals and feedback all the time. We look at many different sources of feedback, both internal at each browser vendor and external/public on forums, open source projects, blogs, and surveys. And, we’re always trying to create better ways for developers to share their specific needs and use cases.

So, if you can, please demand more from browser vendors and pressure us to implement the features you need. I get that it takes time, and can also be intimidating (not to mention a high barrier to entry), but it also works.

Here are a few ways you can get your (or your company’s) voice heard: Take the annual State of JS, State of CSS, and State of HTML surveys. They play a big role in how browser vendors prioritize their work.

If you need a specific standard-based API to be implemented consistently across browsers, consider submitting a proposal at the next Interop project iteration. It requires more time, but consider how Shopify and RUMvision shared their wish lists for Interop 2026. Detailed information like this can be very useful for browser vendors to prioritize.

For more useful links to influence browser vendors, check out my Navigating the Web Platform Cheatsheet.

Conclusion

To close, I hope this article has left you with a few things to think about:

  • Excitement for Masonry and other upcoming web features.
  • A few web features you might want to start using.
  • A few pieces of custom or third-party code you might be able to remove in favor of built-in features.
  • A few ways to keep track of what’s coming and influence browser vendors.

More importantly, I hope I’ve convinced you of the benefits of using the web platform to its full potential.




One Year of MCP: Looking Back, and Forward


In November 2024, Anthropic published a seemingly innocuous blog post entitled Introducing the Model Context Protocol. In it, they promised that Model Context Protocol (MCP) would provide “a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol.”

Tech companies say lots of lofty things, but after its first year, it’s hard to build a case that Anthropic overpromised. MCP still has room to improve, but the protocol filled a need and has found its audience fast. Let’s dig into why, and where it might be headed in 2026 and beyond.

A Little History

We leaned into building an MCP server at Sentry very early, and in the past 30 days alone it’s served over 278m requests across more than 5,000 organizations. Those numbers sound great — but getting here was… rough.

For anyone building in the earliest days of MCP, there were A LOT of bumps. Because of the scale our company operates at, we ran into a lot of them early.

Like most technologies, MCP saw a lot of shifts early on — across the protocol itself and the platforms that ended up supporting it. The first days of MCP were full of hacked-together setups: running servers locally with chained-together commands, or CLIs to help duct-tape remote support together. Outside of Anthropic’s products — which makes sense, given they built it — most LLM clients’ support for MCP was pretty sketchy.

A year later? It’s still not perfect, but it’s definitely on much more solid ground. It’s also clearly not going away any time soon.

Cloudflare jumped in headfirst here and became a “go-to” for platforms to build a full-featured MCP server on. They shipped framework helpers and OAuth support packages, and made Durable Objects more approachable, all through the Cloudflare Workers platform. Vercel followed suit, introducing mcp-handler as an easy way to create MCP servers as API routes in Next.js, and shipped their own authentication support and functionality along with it.

Over the past twelve months, MCP has grown from a curious, complex spec to a pretty much mandatory feature of any LLM client — whether it’s the terminal-heavy TUIs currently on the rise, or more fully featured editors like Cursor.

We hit our share of challenges and lessons learned building around MCP, and we’ve got a lot of hopes for where it’s going next.

Lessons Learned: Building MCP Servers at Scale

Do we know everything there is to know about building the perfect MCP? Nope! No one does. This stuff is new-new. But here’s some stuff we’ve picked up along the way:

You’ll Need To Figure out What’s Breaking, Where, and for Whom

Being so new, MCP broke… a lot, in the early days.

We needed better ways to understand exactly where that was happening in the code, which tool calls were flaking the most, which users things were breaking for, and, most importantly, which combinations of clients and protocols were running into these various issues.

Sentry has a lot of tooling that helps with similar use cases, but there were absolutely gaps, due to the combination of MCP being such a different approach to software and it running on top of Cloudflare Workers.

Realizing we probably weren’t the only ones who needed this, we folded the functionality we built for monitoring MCP into Sentry for everyone to use.

Looking back, more standardization at the spec level would've gone a long way toward making it easier to monitor how MCP servers interact with clients and what actions they take.

A lot of this observability falls back on the LLMs today, but we’re absolutely seeing a rise in people wanting a clearer understanding of what’s happening under the hood with their MCP servers. MCP feels like magic until it breaks and you’re digging for why.

Complexities are going to continue to pop up as different platforms approach the problem in slightly different ways. For our part, we’re staying as close to the application code as possible — and choosing to wrap Sentry around the MCP server layer itself.
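As a concrete illustration of that "wrap the MCP server layer" idea, here is a minimal sketch of reporting tool-call failures with the generic Sentry SDK. It uses only Sentry.init and Sentry.captureException from @sentry/node; it is not Sentry's built-in MCP instrumentation, and the withErrorReporting helper is a hypothetical name.

```ts
// A minimal sketch: wrap MCP tool handlers so failures are reported with
// enough context to answer "which tool, for which inputs, broke?".
// Uses only generic @sentry/node APIs; withErrorReporting is hypothetical.
import * as Sentry from "@sentry/node";

Sentry.init({ dsn: process.env.SENTRY_DSN });

function withErrorReporting<TArgs, TResult>(
  toolName: string,
  handler: (args: TArgs) => Promise<TResult>
): (args: TArgs) => Promise<TResult> {
  return async (args) => {
    try {
      return await handler(args);
    } catch (err) {
      // Tag the event with the tool name so breakage can be sliced per tool.
      Sentry.captureException(err, { tags: { mcp_tool: toolName }, extra: { args } });
      throw err;
    }
  };
}
```

You would then register withErrorReporting("search-issues", searchIssuesHandler) instead of the raw handler; client and protocol-version tags can be added the same way once you have them.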

Remote First… but Also… Local? 

Running the first generation of MCP servers largely involved cloning down a repo, dropping it into a specific directory, and then battling Node path issues until you got a green connection indicator. Layer on challenges like local dependencies, or creating and storing API tokens… It was a high-friction, error-prone experience, and day-2 updates were annoying to manage.

Remote servers sidestepped many of these challenges, and in the early days were supported largely by using mcp-remote to connect standard MCP servers as remote. There were a few other proxy solutions, but none of them really took off. This created a really spotty support ecosystem in common editors and tooling.

Reading the room on where all this was going, and on how we could keep friction low for users while letting them simply "consume" the service, we opted for remote hosting as the primary path for our MCP. Hosting remotely has a lot of benefits: you can continually add new functionality without users having to install a new package or clone anything down; you can centrally monitor the service and optimize around the standard user paths; and, if you do it right, you can simplify how you manage user access through things like OAuth.

Remote MCP servers are pretty easy to implement these days (especially using the Cloudflare and Vercel tooling I talked about earlier) — and you gain a lot from a maintenance standpoint by running them remotely (namely, not having to worry about whether users are updating their local code).

It's still a good idea to have local options, too. Keeping a local STDIO version handy gives you the ability to test locally, and also gives you some flexibility around the clients that don't support OAuth for remote MCP yet (there are still a few out there).
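Here is a minimal sketch of what keeping both options looks like with the MCP TypeScript SDK: one factory that registers your tools, served over STDIO locally and handed to your hosting framework's Streamable HTTP handler in production. The buildServer factory and the MCP_TRANSPORT variable are assumptions for illustration, not part of any particular product.

```ts
// A minimal sketch, assuming the MCP TypeScript SDK's package layout.
// buildServer() and MCP_TRANSPORT are illustrative names.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

export function buildServer(): McpServer {
  const server = new McpServer({ name: "example-server", version: "0.1.0" });
  // ...register tools, resources, and prompts here...
  return server;
}

// Local STDIO mode: useful for testing, and for clients without remote OAuth support.
if (process.env.MCP_TRANSPORT === "stdio") {
  await buildServer().connect(new StdioServerTransport());
}
// In remote mode, the same buildServer() gets wired into a Streamable HTTP
// handler by whatever hosts it (Cloudflare's tooling, mcp-handler, etc.).
```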

Over time, OpenAI embraced the spec, and started including it in their own tooling. We’ve watched the shift from STDIO servers that were cloned from GitHub and run locally, into remotely hosted MCP servers that bundle auth. Now we’re watching as the different client platforms race to support and leverage each revision.

MCP Goes Wider, and Then, Narrower  

With MCP servers it’s really easy to fall into the trap of building a tool for everything, and trying to just replicate what you’re doing in APIs inside MCP tool calls. But it turns out having dozens of bloated token-heavy MCP tool calls is a good way to blast through token limits fast.

The problem with this approach is that with MCP, the full tool list is sent with every prompt, so every call eats away at that precious context window. On top of the tool calls, any resources you create or prompts you attach are also sent.

Context windows are getting bigger, but the wasted space from context bloat adds up very quickly, and it gets worse the more complex your calls are and the more tool calls you chain together. We're coming into a time where MCP builders are pulling back on the tools they expose by default, either by scrapping unnecessary tool calls entirely, or by giving users options to reduce which tool calls are exposed to the client.

We opted for this route in our own MCP, mostly so we could still keep some of the useful functionality but also give our users a choice in how much they wanted to expose. We removed several tool calls that weren’t being used at all, reduced the available resources, and removed the additional prompts. We also added an enhancement at the OAuth consent screen that lets you configure which tool groups you want to expose in the MCP, allowing a finer level of control. Give developers the tools they need, but don’t eat their precious context window with what they don’t.
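Here is a minimal sketch of that kind of gating, assuming the MCP TypeScript SDK: a tool group only gets registered if the user opted into it, so unused tools never take up context. The group names, the search-issues tool, and the ENABLED_TOOL_GROUPS variable are all hypothetical.

```ts
// A minimal sketch of exposing only opted-in tool groups.
// Group names, the tool, and ENABLED_TOOL_GROUPS are hypothetical.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const enabledGroups = new Set(
  (process.env.ENABLED_TOOL_GROUPS ?? "issues").split(",")
);

export function registerTools(server: McpServer) {
  if (enabledGroups.has("issues")) {
    server.tool(
      "search-issues",
      "Search issues with a natural-language query",
      { query: z.string() },
      async ({ query }) => ({
        content: [{ type: "text", text: `Results for: ${query}` }],
      })
    );
  }
  // Other groups ("releases", "alerts", ...) are gated the same way, so tools
  // a user never opted into are never sent to the client at all.
}
```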

SSE Is Fading Out, HTTP Streamable Is In

In the early MCP ecosystem, Server-Sent Events (SSE) were the default way to stream data from a server to a client. SSE is simple and worked well enough early on, so it made sense to use it when we were first prototyping things. But it’s built on top of a long-lived HTTP connection that can be, well, finicky. Among the challenges: SSE’s long-lived connection required specific infrastructure decisions to support, and in the event of failure, there was no clean way to resume sessions.

Eventually, the MCP spec added HTTP Streamable, a more robust streaming transport designed specifically to fix the pain points people were starting to hit. Pretty quickly thereafter, we made the call to move away from SSE to HTTP Streamable.

SSE had a lot of limitations — and even though it took several months to get here, the major clients all support HTTP Streamable now. We’re finding connections to be much more stable overall after the move.

We're still in the transition period; many servers still use SSE as their default transport, but I anticipate this will shift quickly. HTTP Streamable ends up giving a far simpler implementation experience overall, and a better user experience for consuming it.
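On the client side, the transition usually looks like "try the new transport, fall back to the old one." Here is a minimal sketch using the MCP TypeScript SDK's client transports; the client name and URL handling are illustrative.

```ts
// A minimal sketch: prefer Streamable HTTP, fall back to SSE for servers
// that haven't migrated yet. The client name/version are illustrative.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

export async function connect(url: URL): Promise<Client> {
  try {
    const client = new Client({ name: "example-client", version: "0.1.0" });
    await client.connect(new StreamableHTTPClientTransport(url));
    return client;
  } catch {
    // Older servers may only speak SSE; retry with the legacy transport.
    const fallback = new Client({ name: "example-client", version: "0.1.0" });
    await fallback.connect(new SSEClientTransport(url));
    return fallback;
  }
}
```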

MCP as a Workflow

The most successful MCP servers I see at this point are the ones designed to fit right into users' existing workflows. As much as possible, these tools need to meet users where they are.

A great example of this is the recently released Chrome DevTools MCP server, which makes it easy for developers to have the model fire up a browser, see beyond the code, and get a view of how a running app actually looks and behaves. The Resend MCP server lets you take the context of things you are building and easily email it out to users alongside your generated templates. If you're an iOS developer, the Xcode MCP server makes it easy to give models more visibility into your Xcode environment.

We've even started working on a new MCP server designed to be used with Spotlight, to help with local debugging as opposed to Sentry's standard hosted model. This gives us a clear separation of functionality: the standard Sentry MCP covers the core of Sentry's platform functionality, while the new server enables some specific local debugging workflows and creates scenarios where the MCP servers can work together across tool calls.

What’s Next for MCP? 

So what lies ahead for MCP? As someone who's been building here for a while now, here's what I see coming:

  • Protocol is set — HTTP Streamable is so in. There's a bit of noise around adding deeper WebSocket support, but ultimately, HTTP Streamable is likely to be the primary path for a long while, with fallback to SSE as needed.
  • OAuth 2.1 will require some work, but building for it now will pay off in the long run — The move from OAuth 2.0 to 2.1 has some sharp edges, no doubt, but the ecosystem is quickly converging on 2.1 as the standard path forward. Put the work in now and it'll pay off later. For servers where the use case fits, no one wants to tell users to manually pull tokens for authentication.
  • MCP tool consolidation is very much the vibe — Developers will continue to look for ways to reduce total tool sprawl within their MCP environments as a mechanism to control token utilization. Expect more customization options around tool exposure, and ways to introduce more dynamic tool usage (like the natural-language search in the Sentry MCP).
  • Agent loops powered by MCP — With the major providers all adding MCP support directly into their agent SDKs, it's a pretty strong signal that MCP servers are first-class tooling for agents to consume. I expect MCP to become more and more of a first-class citizen in these workflows, as a way to extend the functionality of systems intelligently. We're seeing people expose more agentic flows with MCP, which is creating some new and interesting ways to leverage it, and a balancing act between conserving context and making these tools actually useful (a minimal sketch of the pattern follows this list).
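To ground that last point, here is a minimal sketch of one step of an agent loop built on the MCP TypeScript client SDK. The chooseToolCall callback is a hypothetical stand-in for whatever model or agent framework decides which tool to invoke; it is not a real API.

```ts
// A minimal sketch of one step of an MCP-powered agent loop.
// chooseToolCall() is a hypothetical stand-in for your model integration.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";

type ToolChoice = { name: string; arguments: Record<string, unknown> } | null;

export async function runOneStep(
  client: Client,
  chooseToolCall: (tools: unknown[]) => Promise<ToolChoice>
) {
  // The full tool list is what gets handed to the model (and what eats context).
  const { tools } = await client.listTools();

  const call = await chooseToolCall(tools);
  if (!call) return null; // The model decided no tool was needed.

  // Execute the chosen tool against the MCP server; the result goes back to the model.
  return client.callTool({ name: call.name, arguments: call.arguments });
}
```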

Over the past year we watched MCP shift around on unsteady ground — both as the protocol itself evolved, and as people figured out best practices. It's still the earliest of days — and MCP will certainly have plenty more evolution points over the next year — but many of these areas are starting to firm up now; if you've been waiting for things to be a bit more stable before diving in, now might be the time.

The post One Year of MCP: Looking Back, and Forward appeared first on The New Stack.


Announcing the Data Commons Gemini CLI extension

The new Data Commons extension for the Gemini CLI makes accessing public data easier. It allows users to ask complex, natural-language questions to query Data Commons' public datasets, grounding LLM responses in authoritative sources to reduce AI hallucinations. Data Commons is an organized library of public data from sources like the UN and World Bank. The extension enables instant data analysis, exploration, and integration with other data-related extensions.