Are you trying to wire your React application to Honeycomb, but running into some challenges understanding how our instrumentation works with React?
In this article, I’ll lay out approaches for wiring Honeycomb to client-side-only React so you can ingest your telemetry into Honeycomb and take advantage of the Web Launchpad. This telemetry carries semantically named attributes and can be sent to any OTLP destination.
These examples use a React application created with Vite. The advice here applies to React apps that are not using server-side rendering. Watch this space for more information about using Next.js.
Wire up the HoneycombWebSDK before booting React

Since a React application usually boots from something like src/main.ts|.js, configure your OpenTelemetry browser instrumentation there before booting React. This has the benefit of running only once per browser session and being fully configured before any services start up.
Let’s initialize your React application:
main.ts|.js: Step 1 – wire up telemetry before starting React

import { createRoot } from 'react-dom/client'
import './index.css'
import App from './App'
import { StrictMode } from 'react';
import installOpenTelemetry from './otel-config';

// avoid the double-render problem by wiring up the
// Honeycomb OpenTelemetry Web SDK wrapper
// outside of a render process
installOpenTelemetry();

// now, boot React!
createRoot(document.getElementById('root')!).render(
  <StrictMode>
    <App />
  </StrictMode>
)
Now, we’ll create a file with a function that does the wiring (see our docs and additional samples for more details):
src/otel-config.ts|.js: Step 2 – create a set of defaults

import { HoneycombWebSDK } from '@honeycombio/opentelemetry-web';
import { getWebAutoInstrumentations } from '@opentelemetry/auto-instrumentations-web';

// some telemetry instrumentation requires default settings,
// so we create a set of sensible defaults.
const defaults = {
  // don't create spans for all of the network traffic, otherwise
  // we'll get 10x the spans we normally care about
  ignoreNetworkEvents: true,

  // Outgoing service calls matching these patterns will carry the
  // traceparent header, passing our trace information along so our
  // fetch requests become part of an end-to-end trace. Otherwise,
  // you'll get disconnected frontend and backend traces!
  // For this example, we allow zero or more characters in the
  // server name, so traces propagate on all outbound fetch calls.
  propagateTraceHeaderCorsUrls: [/.*/g],
};
Note: If you don’t set up your propagateTraceHeaderCorsUrls entries here and point them at your application backend endpoints, you won’t send the proper W3C traceparent header to any backend services you call from React via instrumented network calls from either the fetch or XMLHttpRequest APIs. This means you won’t see frontend-to-backend spans in your traces.
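For example, instead of the catch-all pattern above, you might point propagateTraceHeaderCorsUrls at your own backend. The hostname below is only a placeholder for whatever your API actually uses:

propagateTraceHeaderCorsUrls: [
  // only attach traceparent to calls to our own API (example hostname)
  /^https:\/\/api\.example\.com\//
]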
src/otel-config.ts|.js: Step 3 – initialize the HoneycombWebSDK

// StackContextManager ships with the OpenTelemetry web trace SDK
import { StackContextManager } from '@opentelemetry/sdk-trace-web';

// ... (imports and defaults from Step 2)

export default function installOpenTelemetry() {
  try {
    // this SDK installs OpenTelemetry-JS for a web browser, and
    // adds automatic instrumentation for Core Web Vitals and other
    // features.
    const sdk = new HoneycombWebSDK({
      contextManager: new StackContextManager(),
      serviceName: 'react-frontend',
      instrumentations: [
        getWebAutoInstrumentations({
          '@opentelemetry/instrumentation-xml-http-request': defaults,
          '@opentelemetry/instrumentation-fetch': defaults,
          '@opentelemetry/instrumentation-document-load': defaults,
          '@opentelemetry/instrumentation-user-interaction': defaults,
        }),
      ],
    });

    // start up the SDK, wiring up OpenTelemetry for JS
    sdk.start();
  } catch (e) {
    console.log(`An error occurred wiring up Honeycomb...`);
    console.error(e);
  }
}
If you have a standard Honeycomb account, you can use web telemetry along with your other telemetry data.
For this example, I’ll use the Query Builder against our dataset react-frontend (which we defined in our instrumentation as our serviceName), using:

- COUNT (how many in each time period)
- library.name = @opentelemetry/instrumentation-fetch (only shows fetch calls)
- http.status_text (to view good and bad calls)

This query looks like this:
If I scroll down a bit, I see options to view traces. Here they are:
I clicked on one of the trace IDs that had a root name of Submit, which was triggered by clicking a form submit button in a frontend form. Since I’ve configured Honeycomb to include user event tracing, a button click is traced by default.
This trace shows that we encountered a database error that we didn’t plan for, and that error failed our POST. Clicking on the span near the bottom shows the error message as one of the attributes (status.message on the pg.query: INSERT library span):
In an account configured with Honeycomb for Frontend Observability, the environment landing page becomes the Web Launchpad. Here, you can see helpful charts and statistics based on the telemetry emitted from the HoneycombWebSDK via opentelemetry-js:
Alternatively, you might approach instrumentation by mounting a component. Maybe you want to defer telemetry until a particular parent route opens up (do you?).
While instrumenting after the React application has begun is not necessarily a problem, it is less direct. However, samples exist that show this approach, so let’s review it.
In the component body, you could create an effect that wires up the telemetry when the component mounts. Simply call the installOpenTelemetry setup function in a useEffect hook as the component mounts. Note that you don’t provide any values in the hook’s dependency array, so the effect never re-runs.
ObservabilityConfigurer.ts: Call the instrumentation script in an initial loading effect

import { useEffect } from 'react';
import installOpenTelemetry from './otel-config';

export default function ObservabilityConfigurer() {
  useEffect(() => {
    installOpenTelemetry();
  }, []);

  // render nothing; this component exists only to wire up Honeycomb
  return null;
}
You can then mount the component within your top-level component. The useEffect hook above ensures that this only runs on the initial render of your component.
src/Application.tsx: Now, mount it in your top-level component

import ObservabilityConfigurer from './ObservabilityConfigurer';

export default function App() {
  return (
    <>
      <ObservabilityConfigurer />
      {/* Your top-level components here */}
    </>
  );
}
Go with the simplest approach that makes sense for you, and do it as early as you can to avoid missing any key telemetry. Executing the script before loading React is the easiest way to isolate it from the rest of your components.
The open source Honeycomb OpenTelemetry Web project provides the HoneycombWebSDK wrapper used in this blog post and sends telemetry compatible with the Web Launchpad.
Have questions about instrumenting React applications with OpenTelemetry or troubleshooting your configuration using Honeycomb? You can request office hours with me, check our detailed documentation, or join the Honeycomb Pollinators Slack. I’ll be happy to help you get going.
The post Configuring a React Application with Honeycomb For Frontend Observability appeared first on Honeycomb.
A team at AI dev platform Hugging Face has released what they’re claiming are the smallest AI models that can analyze images, short videos, and text. The models, SmolVLM-256M and SmolVLM-500M, are designed to work well on “constrained devices” like laptops with under around 1GB of RAM. The team says that they’re also ideal for […]
Node.js is still one of the most popular runtimes for JavaScript. In fact, it’s kind of a juggernaut: it has seemed unstoppable since its introduction in 2009. Node.js is the industry-standard runtime for JavaScript and is used by companies like Netflix, Uber, eBay, PayPal, LinkedIn, Trello, NASA, Walmart, Groupon and many more.
This open source, cross-platform runtime environment is an amazing tool for developing scalable network applications, and has become one of the most widely used web frameworks. One reason Node.js is so popular is that it can reduce loading time by as much as 60%. This is immensely important for applications at scale.
But what is there to be excited about in the latest release? Truth be told, you have to go back to version 23.0.0 to find a release that isn’t specifically listed as a security release. And since version 23.0.0 was released on Oct. 16, 2024, it might seem a bit long in the tooth (in tech years). Note that, as an odd-numbered release line, Node.js 23 is a Current release rather than an LTS one; long-term support goes to the even-numbered lines.
As far as what’s new in Node.js 23, let’s take a look.
There are four big highlights for this release:

- require(esm) is now enabled by default, so Node.js will no longer throw the ERR_REQUIRE_ESM error when require() is used to load an ES module. If, however, the ES module being loaded contains top-level await, it can still throw ERR_REQUIRE_ASYNC_MODULE (see the sketch after this list).
- The node --run command has been stabilized.
- The built-in test runner has received several enhancements (more on this below).
- Support for 32-bit Windows has been dropped: if you’re still using a 32-bit Windows operating system, Node.js 23.0.0 will no longer function.
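As a rough sketch of what require(esm) enables (the file names here are made up for illustration):

// math.mjs – an ES module with no top-level await
export function add(a, b) {
  return a + b;
}

// main.cjs – a CommonJS file; on Node.js 23 this require() of an
// ES module no longer throws ERR_REQUIRE_ESM
const { add } = require('./math.mjs');
console.log(add(2, 3)); // 5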
Node.js provides a built-in task runner that allows you to execute specific commands that are defined in a package.json file. This is done with the --run flag, and with version 23.0.0, the option has been improved and is now stable.
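For example, given a package.json script such as "lint": "eslint ." (the script name here is just an illustration), the built-in task runner executes it without going through npm:

node --run lint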
The Node.js test runner makes it possible to create JavaScript tests, and this release includes several enhancements to it (one of them concerns behavior when --test is not used).
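As a minimal sketch of the built-in test runner (the file name is hypothetical), a test file can be as small as this:

// example.test.mjs
import { test } from 'node:test';
import assert from 'node:assert/strict';

test('adds numbers', () => {
  assert.equal(2 + 3, 5);
});

Run it with node --test example.test.mjs.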
For the rest of the changes in v23, you can read the entire Node.js change log here.
Let’s first install Node.js 23 on an Ubuntu-based Linux distribution. To do that, follow these steps.
Install the necessary dependencies with the command:
sudo apt-get install ca-certificates curl gnupg -y
Import the necessary GPG key with the following:
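Assuming the standard NodeSource setup (their documented keyring path and key URL), a command along these lines does it:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg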
Add the Node.js repository with the following command:
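Again assuming NodeSource’s repository layout, with 23 as the major version:

echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_23.x nodistro main" | sudo tee /etc/apt/sources.list.d/nodesource.list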
Update apt with:
sudo apt-get update
Install Node.js with the command:
sudo apt-get install nodejs -y
Next, we’ll install Node.js 23 on macOS. To do this, we’ll use nvm as the installer.
Download and install nvm with the command:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
Download and install Node.js with:
nvm install 23
Finally, we’ll install Node.js 23 on Windows, using fnm.
Download and install fnm using winget with the following command:
winget install Schniz.fnm
Install Node.js 23 with the command:
fnm install 23
You can verify the installation with the node -v command.
You should see something like this in the output:
v23.6.1
If you find that Linux still reports version 20, you’ll need to remove Node.js (sudo apt-get remove nodejs -y) and then install it with the following steps.
Download and install nvm with the command:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
Close and reopen your terminal window. Once the terminal is open, install Node.js with:
nvm install 23
And that’s all there is to installing the latest version of Node.js. This powerhouse runtime will serve you well for years to come.
The post What’s in the New Node.js, and How Do You Install It? appeared first on The New Stack.
Vivaldi today announced a major update to its desktop web browser that adds new Dashboard customization features, a weather widget, and a lot more.
The post Vivaldi 7.1 Arrives on Desktop with Even More Personalization appeared first on Thurrott.com.
At JetBrains, we aim to enable and scale the next generation of technologies to make software development a more productive and enjoyable experience. To empower and support developers, we build a variety of products for professional development, including powerful AI tools and features that have already enhanced productivity and opened new horizons for creativity. But can we go beyond that and boost productivity even further – improve code quality, unlock future innovations, help execute complex tasks, and change the way you work with code?
Yes, we can!
With the launch of Junie, JetBrains AI coding agent, we are redefining how we code by leveraging its agentic power for co-creation right in your IDE. With Junie, you can fully delegate routine tasks to your very own personal coding agent or collaborate with it to execute more complex ones together. Thanks to the power of JetBrains IDEs, coupled with reliable LLMs, Junie already solves tasks that would otherwise require hours of work.
According to SWEBench Verified, a curated benchmark of 500 developer tasks, Junie can solve 53.6% of tasks on a single run. Such a success rate is promising – it shows the potential to adapt Junie to the reality of today’s software development, including large numbers of tasks of varying complexity. Junie will unlock the power of coding agents for millions of developers and companies around the world.
Delivering Junie into your familiar IDEs
Our goal is to ensure that partnering with Junie does not disrupt your coding experience, but empowers you to create and do more. Getting started with Junie is as simple as installing it into your IDE. You can then begin with delegating simple tasks as you get used to working with the coding agent, so you don’t need to make changes to your workflow.
Once you are comfortable working with Junie, you can have it handle more complex tasks, integrate it into your team workflow, and start redefining how tasks get done, boosting productivity, unleashing your ingenuity and creativity, and getting the most from a coding experience powered by agentic AI.
Stay in control of your code
Developers must be able to quickly review proposed changes, maintain project context, and guide critical decisions. With Junie, you stay in control even when delegating tasks: you can always review code changes and see how the agent executes commands.
Delivering improved code quality
AI-generated code can be just as flawed as developer-written code. Ultimately, Junie will not just speed up development – it is poised to raise the bar for code quality. By combining the power of JetBrains IDEs with LLMs, Junie can generate code, run inspections, write tests and verify they have passed.
Making Junie a trusted teammate
Junie is designed to understand the context of any given project, so it can adapt to your coding style. Junie can also follow specific coding guidelines, further enhancing its ability to align with the way you code. This results in better code quality and more control over how Junie performs tasks, making Junie a reliable, trusted collaborator on your team.
At JetBrains, we build products together with users, listening to the feedback to drive innovation in software development. This approach helps us create products empowering millions of developers to make anything happen – with code.
We’ve now opened the Early Access Program waitlist. We invite you to try Junie and share your thoughts, feedback, and ideas.
Junie is currently available in the following JetBrains IDEs: IntelliJ IDEA Ultimate, PyCharm Professional. WebStorm is coming next. OS X and Linux platforms are supported.