
Focus rings with nested contrast-color()?


As I was playing around with contrast-color(), I got a wild idea: you could use contrast-color() to invert its return value by nesting it: contrast-color(contrast-color(var(--some-color))). When would this be useful? Uh… good question. I couldn’t come up with an example right away, but after a bit I found one sitting right under my nose…

[Image: four buttons in one line: a secondary button, an accented primary button with a focus state, a subtle button and a transparent button]

Our focus rings in Fluent use a 1px inset white highlight and a 2px offset black focus ring. It’s a smidge chonkier than the Chromium default. The reason we do this is to guarantee contrast against the focused element (in the above example, a blue button). Without the white stroke, the black outline wouldn’t “pop” with enough contrast against the blue background.

To make this work, we have a --focus-inner-ring token and a --focus-outer-ring token themed for both light and dark modes.

*:focus-visible {
  box-shadow: 0 0 1px 0 var(--focus-inner-ring);
  outline: 2px solid var(--focus-outer-ring);
  outline-offset: 1px;
}

How would this change with nested contrast-color()?

*:focus-visible {
  outline: 2px solid contrast-color(var(--page-bg));
  outline-offset: 1px;
  box-shadow: 0 0 1px 0 contrast-color(contrast-color(var(--page-bg)));
}

All you would need is one token you probably already have (--page-bg) vs two tokens. Neat.

One consideration… your focus-ring isn’t always on a --page-bg. Sometimes it shows up on a --card-bg and this little trick might fall apart. As always, your mileage may vary.

Aside: This got me thinking it’d be nice to have a currentBackgroundColor like we have currentColor in CSS today. I’m not sure how much career I’ve got left to wait for that, but who knows.

On second thought…

Mulling this over a bit more… you might be better off using color-scheme and light-dark() for this instead.

:root {
  color-scheme: light dark;
}

*:focus-visible {
  box-shadow: 0 0 1px 0 light-dark(white, black);
  outline: 2px solid light-dark(black, white);
  outline-offset: 1px;
}

Yeah… I’d probably do that. You dodge the spotty contrast-color() support and end up with a single function instead of a nested one. It keeps things simple and readable.

If you converted this to tokens, you’d probably need four to fill out the inner/outer and light/dark matrix. But I’d consider not using custom color tokens for focus rings at all and embracing the higher contrast… or limiting tokens to the outer ring.


Gentoo Linux Plans Migration from GitHub Over 'Attempts to Force Copilot Usage for Our Repositories'

Gentoo Linux posted its 2025 project retrospective this week. Some interesting details:

Mostly because of the continuous attempts to force Copilot usage for our repositories, Gentoo currently considers and plans the migration of our repository mirrors and pull request contributions to Codeberg. Codeberg is a site based on Forgejo, maintained by a non-profit organization, and located in Berlin, Germany. Gentoo continues to host its own primary git, bugs, etc. infrastructure and has no plans to change that...

We now publish weekly Gentoo images for Windows Subsystem for Linux (WSL), based on the amd64 stages; see our mirrors. While these images are not present in the Microsoft store yet, that's something we intend to fix soon... Given the unfortunate fracturing of the GnuPG / OpenPGP / LibrePGP ecosystem due to competing standards, we now provide an alternatives mechanism to choose the system gpg provider and ease compatibility testing...

We have added a bootstrap path for Rust from C++ using Mutabah's Rust compiler mrustc, which alleviates the need for pre-built binaries and makes it significantly easier to support more configurations. Similarly, Ada and D support in gcc now have clean bootstrap paths, which makes enabling these in the compiler as easy as switching the useflags on gcc and running emerge.

Other interesting statistics for the year:

  • Gentoo currently consists of 31,663 ebuilds for 19,174 different packages.
  • For amd64 (x86-64), there are 89 GBytes of binary packages available on the mirrors.
  • Gentoo each week builds 154 distinct installation stages for different processor architectures and system configurations, with an overwhelming part of these fully up to date.
  • The number of commits to the main ::gentoo repository has remained at an overall high level in 2025, with a slight decrease from 123,942 to 112,927.
  • The number of commits by external contributors was 9,396, now across 377 unique external authors.

Thanks to long-time Slashdot reader Heraklit for sharing the 2025 retrospective.

Read more of this story at Slashdot.


Hello, Anthropic

Kickstarting 2026 with a new role as a Member of Technical Staff at Anthropic, where I will be helping grow the MCP ecosystem.

Mastering React DevTools: A Comprehensive Guide to Efficient Debugging


In the dynamic ecosystem of modern web development, React remains a dominant force. However, as applications grow in complexity, managing component states, props, and performance can become a daunting task. Enter React DevTools — a browser extension that serves as a surgical instrument for frontend developers. This article explores the basics of React DevTools and articulates why it is a non-negotiable asset for professional development.

Introduction

Debugging is often the most time-consuming aspect of software engineering. While standard browser developer tools allow us to inspect the DOM, they fall short when dealing with React’s Virtual DOM. React DevTools bridges this gap, providing a window into the internal logic of your application without requiring you to sift through compiled code.

The Components Tab: Your Component Tree

[Image: a sample component tree displayed under the Components tab]
The core feature of the extension is the ‘Components’ tab. Unlike the flat structure of HTML elements seen in standard inspectors, this view preserves the hierarchy of your React components.

  1. Inspecting Props and State: By selecting a component in the tree, you can view its current props and state in the side pane. This eliminates the need for excessive console.log statements.
  2. Live Editing: The most powerful feature for UI testing is the ability to edit these values in real-time. You can toggle booleans, modify strings, or adjust numbers to instantly see how your UI responds to different data states.
  3. Source Traceability: The tool also allows you to jump directly to the source code of the component, streamlining the navigation of large codebases.

The Profiler: Optimizing Performance

[Image: using the Profiler on the author’s app ChwiiX (chwiix.vercel.app)]
Performance is a key metric for user retention. The ‘Profiler’ tab records performance information about your application. When you record a session, the Profiler gathers data on each render phase.

It generates a ‘Flame Graph’ — a visual representation of your component tree where the width of each bar represents the time taken to render. This allows developers to spot ‘expensive’ components that take too long to render. Furthermore, it helps identify unnecessary re-renders, where components update even when their data hasn’t changed, allowing for optimization via React.memo or useMemo.

Visualizing Updates

Another subtle but effective feature is the ability to ‘Highlight updates when components render.’ Found in the settings, this option draws a colored border around any component in the browser view the moment it re-renders. This visual feedback is invaluable for spotting cascading render issues that might otherwise go unnoticed until the application scales.

Conclusion

React DevTools is more than a convenience; it is a necessity for scalable development. By moving beyond basic debugging and utilizing the Profiler and component-inspection tools, developers can write cleaner, faster and more reliable code. If you haven’t yet made these tools a regular part of your workflow, now is the time to start.


When To Use GenAI: A Practical Decision Framework

[Image: tiny figures of businessmen on a notebook page, considering different paths]

As generative artificial intelligence (GenAI) capabilities evolve, software architects and developers face critical decisions about when to use GenAI-based solutions versus traditional programming approaches. A systematic, four-dimensional decision framework guides technology selection in application design.

While traditional programming offers faster implementation for straightforward tasks with full transparency, GenAI-based solutions demand significant computational resources and training time but enable sophisticated handling of complex, personalized interactions. A hybrid architectural strategy provides concrete criteria for technology selection that reconcile software engineering limitations and GenAI capabilities.

The 4-Dimensional Decision Framework

Before defaulting to or dismissing GenAI, architects can take a systematic approach, evaluating each feature against four practical dimensions to determine whether GenAI will add value or create unnecessary complexity and cost. A minimal scoring sketch follows the list below.

  1. Reasoning versus logic. Does the feature require adaptive interpretation of ambiguous inputs or intentions, or does it follow predictable rules? GenAI excels in tasks involving pattern matching across messy inputs where all possibilities can’t be enumerated upfront. Deterministic code is best suited to handle behaviors that can be expressed as explicit rules, such as routing requests based on user permissions and eligibility decisions that follow clear logic trees.
  2. Data type. What format are the primary inputs and outputs? GenAI is better suited to unstructured data that resists traditional parsing, while deterministic code excels at handling structured data.
  3. Scalability profile. How many times per second will this function run, and what is the cost tolerance per execution? GenAI is well-suited to moderate-volume interactions where adaptive decision-making justifies the higher cost per call. Traditional code is the better option for high-throughput operations that must handle tens of thousands of requests per second at fractions of a cent each.
  4. Task complexity. How many edge cases and conditional paths exist? GenAI is designed to handle workflows where the path forward depends on unpredictable or ambiguous factors, like customer intent. Using GenAI to manage simple linear tasks with clear success criteria that can be handled by programmable code is excessive and expensive.
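
To make the rubric concrete, here is a minimal Python sketch of how a team might encode the four dimensions as a first-pass screen. Everything in it (the Feature fields, the recommend() helper and the 50-calls-per-second threshold) is an illustrative assumption, not part of the article’s framework.

from dataclasses import dataclass

@dataclass
class Feature:
    """Hypothetical feature profile along the four decision dimensions."""
    needs_interpretation: bool  # 1. reasoning vs. explicit rules
    unstructured_io: bool       # 2. unstructured vs. structured data
    calls_per_second: float     # 3. scalability profile
    ambiguous_paths: bool       # 4. task complexity driven by intent

def recommend(feature: Feature, max_genai_cps: float = 50.0) -> str:
    """First-pass screen; the threshold and scoring are illustrative only."""
    if feature.calls_per_second > max_genai_cps:
        return "traditional"  # high throughput rules out per-call GenAI cost
    genai_signals = sum([feature.needs_interpretation,
                         feature.unstructured_io,
                         feature.ambiguous_paths])
    return "genai" if genai_signals >= 2 else "traditional"

# A support-triage feature: messy text, ambiguous intent, modest volume
print(recommend(Feature(True, True, 5.0, True)))  # -> genai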

Klarna’s GenAI customer service chatbot aligned with these criteria. The “buy now, pay later” payment company determined that customer messages, which contain unstructured text with ambiguous intent and emotional context that require interpretation across 35 languages, warranted the use of GenAI. Structured operations that required calculations, audit trails and regulatory compliance, such as authentication and payment processing, remained in traditional code.

Klarna later refined its approach, using GenAI to handle two-thirds of inquiries, while directing more complex cases that require judgment and empathy to humans. The result demonstrated an effective division of labor. GenAI interprets and routes requests, while deterministic systems handle execution and judgment.

3 Critical Trade-Offs in Practice

Once features are mapped against the four-dimensional framework, balancing three operational trade-offs further refines the decision of whether a GenAI approach justifies its costs.

The first trade-off is time-to-market. GenAI accelerates features centered on language interaction, summarization or question answering. Building a hypothetical “Ask our docs” feature requires less development time with GenAI than with traditional approaches.

Traditional programming wins on speed when building crisp, rule-based features like order status tracking. When business rules are clear, features can be implemented, tested and deployed in days without the GenAI overhead of prompt engineering, model selection or accuracy evaluation.

The second trade-off is transparency and explainability. Financial calculations, access control, compliance checks and safety-critical operations demand complete transparency. When auditors ask why a fee was charged or regulators question a claim denial, deterministic code provides traceable logic. GenAI models produce outputs through billions of learned parameters that cannot guarantee reproducible reasoning paths.

Consider a fee calculation service processing 1 million transactions monthly: deterministic code achieves essentially 100% accuracy for valid inputs. Recent studies of GPT model behavior found accuracy rates ranging from 30% to 90%, depending on the function. Applying those rates to 1 million monthly transactions would result in over 100,000 errors, which is unacceptable for financial or compliance-critical tasks.

The final trade-off is cost structure. Traditional applications typically run on central processing unit (CPU) infrastructure with per-request costs measured in fractions of a cent. GenAI systems introduce variable costs depending on the deployment model. With external API providers, cost scales with usage through per-token pricing: a feature averaging 1,000 tokens per call costs $20,000 monthly at $0.002 per 1,000 tokens for 10 million calls, or $100,000 monthly at higher pricing. Self-hosted models shift costs to GPU infrastructure, resulting in a higher upfront investment but a lower marginal cost per request.
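
The arithmetic behind those figures is simple enough to sanity-check; a quick sketch using the article’s example rates (the function name is mine):

def monthly_api_cost(calls: int, tokens_per_call: int, usd_per_1k_tokens: float) -> float:
    """Per-token pricing: total tokens, divided into thousands, times unit price."""
    return calls * tokens_per_call / 1_000 * usd_per_1k_tokens

# 10 million calls x 1,000 tokens at $0.002 per 1,000 tokens
print(monthly_api_cost(10_000_000, 1_000, 0.002))  # -> 20000.0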

Beyond compute, GenAI introduces additional governance costs. The economics shift when GenAI replaces substantial human labor. Bank of America’s Erica chatbot relies on deterministic natural language processing and GenAI to resolve 98% of support interactions autonomously, contributing to a 19% earnings uplift that far exceeds the GenAI infrastructure costs.

Implementing Hybrid Architecture

Successful production systems use one of three architectural templates to structure the relationship between GenAI and traditional code at the system level.

Template No. 1: GenAI Interprets, Code Executes.

This template is effective when natural user experience is critical, but transactional operations require precision. A customer types, “Can you refund my last order and ship the replacement to my office?” GenAI parses intent and extracts structured elements, such as a refund request, order identifier or delivery address. Traditional services then verify ownership, determine refund eligibility, calculate amounts, call payment and shipping APIs, and update databases.
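
A rough Python sketch of that split. The regex stand-in for the model call and every name here are hypothetical; the point is only that the GenAI layer produces structure while the deterministic layer owns verification and execution.

import re

def llm_extract(message: str) -> dict:
    """GenAI layer (stubbed with a regex here): parse intent into structure.
    A real system would call a model; this stand-in is purely illustrative."""
    order = re.search(r"order\s+(\w+)", message, re.IGNORECASE)
    return {"action": "refund_and_reship",
            "order_id": order.group(1) if order else None}

def execute(request: dict, owned_orders: set) -> str:
    """Deterministic layer: verify ownership and apply explicit, auditable rules."""
    if request["order_id"] not in owned_orders:
        return "rejected: order does not belong to this user"
    # ...here: eligibility check, refund calculation, payment/shipping APIs...
    return "refund issued for order " + request["order_id"]

print(execute(llm_extract("Please refund order A123 and reship it"), {"A123"}))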

Template No. 2: GenAI Generates, Code Validates.

When creative or summary output is needed within strict boundaries, this template is appropriate. For example, a support agent uses GenAI to draft customer responses quickly by reviewing ticket history and generating suggested text. Code-based validation ensures that no personally identifiable information (PII) is leaked, that refund amounts match actual records and that responses comply with policy requirements. GenAI provides speed and quality, while deterministic guardrails ensure compliance violations never reach customers.
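
A minimal sketch of what such guardrails can look like in Python; the PII patterns and tolerance below are illustrative assumptions, not a complete compliance check:

import re

PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-shaped strings
                re.compile(r"\b\d{16}\b")]             # card-number-shaped strings

def validate_draft(draft: str, actual_refund: float) -> list:
    """Deterministic checks over a GenAI-drafted reply before it reaches a customer."""
    problems = []
    if any(p.search(draft) for p in PII_PATTERNS):
        problems.append("possible PII in draft")
    for amount in re.findall(r"\$(\d+(?:\.\d{2})?)", draft):
        if abs(float(amount) - actual_refund) > 0.005:
            problems.append("refund amount does not match records")
    return problems

print(validate_draft("We have refunded $25.00 to your card.", 25.00))  # -> []

The value of the split is that a failed check is cheap: the draft goes back for regeneration or to a human, and nothing noncompliant ships.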

Template No. 3: GenAI Captures Knowledge, Code Enforces Facts.

Cleveland Clinic uses a GenAI scribe platform to document patient interactions. With patient consent, the system listens to patient calls and drafts clinical notes, which providers review before adding them to medical records. To date, the tool has documented more than 1 million patient interactions, saving providers an average of 14 minutes per day. The GenAI-generated notes are then used by revenue cycle systems to reduce billing and coding issues downstream. The GenAI applies contextual knowledge to generate accurate records, while the code uses factual elements to complete traditional functions.

Hybrid Architecture: It’s Not Either/Or, but Yes

Choosing between GenAI and traditional code doesn’t require complex analysis. These frameworks and templates provide a systematic approach to GenAI adoption, but it’s essential that they don’t become obstacles to making clear decisions. A practical checklist helps move from analysis to execution.

  • Start with the verb. If the feature helps, suggests or explains, consider GenAI. If it calculates, enforces or guarantees outcomes, deterministic code might be a better option.
  • Assess the feature’s acceptable error rate and latency. Can the system tolerate occasional inaccuracies? Does it require sub-50 millisecond response times? Identifying these operational constraints helps eliminate unsuitable approaches.
  • Determine which outcomes matter most. Common metrics include cost per successful task, time to resolution and user satisfaction, not GenAI model sophistication.

GenAI excels at interpreting ambiguous inputs and generating insights, while traditional code takes ownership of decisions and irreversible actions. This division of responsibility captures GenAI’s strengths without sacrificing the reliability that business operations demand.

The post When To Use GenAI: A Practical Decision Framework appeared first on The New Stack.


Python: What’s Coming in 2026


If 2025 was “the year of type checking and language server protocols” for Python, will 2026 be the year of the type server protocol? “Transformative” developments like free threading were said to be coming to Python in 2026, along with performance-improving “lazy” imports for Python modules. We’ll (hopefully) also be seeing improved Python agent frameworks.

But 2026 could even see a change in change itself — in the ways that Python changes are proposed.

Last month, there was an illuminating seven-person conversation — including three of Python’s core developers, two of whom are also members of Python’s Steering Council. As 2025 came to a close, these prominent guests came together for a special year-in-review episode of the “Talk Python to Me” podcast, discussing what trends they’d seen in 2025 — but also what they see coming in the year ahead for Python and its developer community.

Tools That Run Python

From the beginning, the podcast showed that Python is still a broad-based global community. From Vancouver came Brett Cannon, a Python core developer since 2003, and a Microsoft principal software engineer for over 10 years. And what Cannon saw in 2025 was advances in people running Python code using tools.

Where before you’d install the Python interpreter — and then also any needed dependencies — to then launch everything in a virtual environment, “Now we’ve got tools that will compress all that into a run command!” A growing set of tools like Hatch, PDM and uv “build on each other and … just slowly build up this repertoire of tool approaches.”

This caught the attention of Python’s core developers. “These tools treat Python as an implementation detail, almost,” Cannon said. The Python interpreter just fades into the background as “a thing that they pull in to make your code run.”

Also on the call was Barry Warsaw, who has been a Python core developer for more than 30 years. He told Cannon, “I think you’re really onto something.” Warsaw is also on the 2026 Python Steering Council (while currently working on Python at Nvidia), and sees this as an even larger trend — “a renewed focus on the user experience.”

Just installing the binary with the Python interpreter can be complicated for new users, but 2024 saw Python adding a format for metadata to embed in Python scripts to help IDEs, launchers and other external tools. So in the world of 2025, it’s now that much easier to write code that will be run by Python. “You can put uv in the shebang line of your script — and now you don’t have to think about anything. And Hatch can work the same way for developers.”
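
Concretely, that metadata format is PEP 723’s inline script block, and uv can sit in the shebang line. A minimal sketch (the dependency choice here is arbitrary):

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.12"
# dependencies = ["requests"]
# ///
import requests

# With the file marked executable, running ./fetch.py fetches an interpreter
# and the dependencies on the fly, then runs the script in an isolated environment.
print(requests.get("https://example.com").status_code)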

This drew some enthusiastic agreement from Associate CS Professor Gregory Kapfhammer. Dialing in from Pittsburgh (where he’s already using uv in his classes at Allegheny College), Kapfhammer said he’s amazed how much uv had simplified lessons for his students. “I don’t have to teach them about Docker containers, and I don’t have to tell them how to install Python with some package manager.”

And from Berlin came more agreement from Jodie Burchell, a 10-year data scientist (now a developer advocate at JetBrains, working on PyCharm). Burchell said they’re even discussing whether to use uv at the data science mentoring community Humble Data (where she’s one of the organizers). “It does abstract away all these details. The debate I have is, is it too magic?” And as a developer advocate at JetBrains, “It’s also a debate we have in PyCharm. How much do you magic away the fundamentals versus making people think a little bit?”

This led to a discussion about possible future developments in Python. Core developer Cannon said that for the troubleshooters — and even just for the curious — “I want the magic to decompose. You should be able to explain the ‘magic’ path via more decomposed steps using the tool all the way down to what the tools actually do behind the scenes.” And it’s not just a hypothetical for him. “I’ve been thinking about this a lot,” Cannon said, because “I’m thinking of trying to get the Python launcher to do a bit more.”

After all, uv is still made by a company (named Astral), and “There’s always the risk they might disappear.” And a lot of work has now been put in place to create standards for this kind of packaging, including that metadata addition.

Lazy Imports and Free-Threaded Python

2026 will also bring performance-improving “lazy imports,” which will speed up start times by deferring module imports until first use. “It’s been accepted, and it’s going to be awesome,” said core developer Thomas Wouters. Dialing in from Amsterdam, Wouters has also deployed Python internally at Google, where he worked for 17 years before moving to Meta. He has been a board member of the Python Software Foundation — even receiving its Distinguished Service Award in 2025 — and is a current member of 2026’s Python Steering Council.
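
The accepted proposal adds new syntax, but the standard library has long offered an opt-in version of the same idea. Here is a minimal sketch using importlib’s documented LazyLoader recipe (today’s mechanism, not the new feature):

import importlib.util
import sys

def lazy_import(name):
    """Return a module that is only actually loaded on first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # defers real execution until first use
    return module

lazy_json = lazy_import("json")      # cheap: nothing imported yet
print(lazy_json.dumps({"a": 1}))     # the real import happens here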

Wouters is even more excited about Python’s progress toward adding parallel processing capabilities. Thinking of how Python’s Global Interpreter Lock notoriously limited performance by allowing only one thread to execute Python code at a time, Wouters phrased this development indelicately as “the global interpreter lock is going away! I am stating it as a fact — it’s not actually a fact yet, but that’s because the Steering Council hasn’t realized the fact yet.”

Wouters said this because he was on the Steering Council that accepted an alternative — free-threading — as an experimental feature, and now for Python 3.14, “It is officially supported. The performance is great … It’s basically the same speed on MacOS … That’s a combination of the ARM hardware and clang specializing things … And then on recent GCCs on Linux, it’s like a couple percent slower.”

2026 will see a focus on community adoption, said Wouters, “getting third-party packages to update their extension modules for the new APIs” and “supporting free-threading in a good way.” But for Python code, “it turns out there’s very few changes that need to be made for things to work well under free-threading.”

And more to the point, “We’ve seen a lot of examples of really promising, very parallel problems that now speed up by 10x or more. And it’s going to be really exciting in the future.”
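
To see what “very parallel problems” means in code: plain CPU-bound threads, which the GIL serializes on standard builds but which a free-threaded build can run concurrently. A minimal sketch; the workload and thread count are arbitrary:

import sys
import time
from concurrent.futures import ThreadPoolExecutor

def busy(n):
    """Pure-Python CPU-bound work; a standard build runs these one at a time."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # sys._is_gil_enabled() exists on 3.13+; fall back to True elsewhere
    gil = getattr(sys, "_is_gil_enabled", lambda: True)()
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(busy, [5_000_000] * 4))
    print(f"GIL enabled: {gil}  elapsed: {time.perf_counter() - start:.2f}s")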

Enhancing the Enhancement Proposals?

The biggest change may have been suggested by Barry Warsaw. As the creator of Python Enhancement Proposals — the procedure for changing the language — Warsaw brings real credibility when he said, “We have to rethink how we evolve Python — and how we propose changes to Python, and how we discuss those changes in the community.”

The current process is over a quarter of a century old, and while the developer community is “somewhat larger,” Warsaw said there’s been a more exponential growth in “the number of people who are using Python and who have an interest in it.” But more to the point, “One of the things that I’ve heard over and over and over again is that authoring Python Enhancement Proposals is incredibly difficult, and emotionally draining. It’s a time sink, and leading those discussions on discuss.Python.org … can be toxic at times, and very difficult.”

The end result? “It has become really difficult to evolve the language and the standard library and the interpreter … We need to think about how we can make this easier for people and not lose the voice of the user.”

When it comes to the Python community, comments left at discuss.python.org are “the tip of the iceberg,” Warsaw said. “We’ve got millions and millions of users out there in the world who, for example, lazy imports will affect — free threading will affect. And yet they don’t even know that they have a voice.” Warsaw hopes to represent them “in a much more collaborative and positive way.”

So in 2026, Warsaw said, “I think this is something I’m going to spend some time on, trying to think about — you know, and talk to people about — ways we can make this easier for everyone.”

Warsaw shared an interesting observation on where we are now. “There have been changes that have been made to Python that really should have been a PEP. And they aren’t because … core developers don’t want to go through this gauntlet!

“But that’s also not good because then, you know, we don’t have the right level of consideration.”

When Types Meet Tools

Kapfhammer shared an important tip, pointing out that “If you can teach your AI agent to use the type checkers and use the LSPs, it will also generate better code for you.” It’s giving the large language model (LLM) one more useful piece of information and context — and the industry is starting to take notice. Kapfhammer said the team behind Meta’s type checker is working directly with Pydantic AI to create interoperability, “So that when you’re building an AI agent using Pydantic AI, you can also then have better guarantees when you’re using Pyrefly as your type checker.”

In fact, for Kapfhammer, 2025 was “the year of type checking and language server protocols.” He uses the static type checker Mypy and language servers such as Pyright and Pylance. But 2025 also brought Meta’s Pyrefly type checker/LSP, Astral’s ty and a new type checker/LSP called Zuban. He notes these 2025 tools were all implemented in Rust — and are “significantly faster,” which changes how he uses the tools, and how often. “It’s helped me to take things that might take tens of seconds or hundreds of seconds, and cut them down often to less than a second.”

Cannon noted that “It takes more work to write Rust code than it does to write Python code,” and applauded the tool makers for being willing to shoulder that extra effort to deliver “an overall win for the community.”

But Cannon also seemed to have some high hopes for what we’ll see in 2026. “Pylance is actually working with the Pyrefly team to define a type server protocol [TSP] so that a lot of these type servers can just kind of feed the type information to a higher-level LSP, and let that LSP handle the stuff like symbol renaming and all that stuff.”

The podcast was hosted by Michael Kennedy, a Portland-based Python enthusiast and educator, who gave the 84-minute conversation — and the year ahead — a perfect summation.

“I still think it’s an incredibly exciting time to be a developer or a data scientist. There’s so much opportunity out there … Every day is slightly more amazing than the previous day … I love it.”

The post Python: What’s Coming in 2026 appeared first on The New Stack.
