
Microsoft will halt new Office features for Windows 10 in 2026


Microsoft has quietly revealed that it will stop adding new features to Office apps (Microsoft 365) for Windows 10 users in August 2026. While security updates will remain in place for Office apps running on Windows 10 until October 2028, Microsoft will cut off new feature support gradually next year.

Microsoft 365 Personal and Family users will stop getting new features on Windows 10 in August 2026, alongside Current Channel users on the business side. Microsoft then plans to cut off new Office features for Monthly Enterprise Channel users on Windows 10 on October 13th, 2026, followed by the same for Semi-Annual Enterprise Channel users on January 12th, 2027.

Microsoft was forced to perform a U-turn on security updates for Office apps on Windows 10 earlier this year, but at the time it didn’t reveal that new features would be cut off starting in 2026. The change means you’ll have to upgrade to Windows 11 to get the latest Microsoft 365 features. Windows 10 goes end of life on October 14th, and Microsoft has committed to delivering security updates for Office apps on Windows 10 until October 10th, 2028.

Microsoft has been trying to convince Windows 10 users to upgrade to Windows 11 ahead of the end of support cutoff in October, but there are still millions of devices running the OS — despite Windows 11 finally overtaking Windows 10 as the most used desktop OS. Consumers can also extend security update support for another year, free of charge for those willing to enable Windows Backup.


COVID-19 Vaccine's mRNA Technology Adapted for First Antibiotic-Resistant Bacteria Vaccine

Researchers have created the world's first mRNA-based vaccine against a deadly, antibiotic-resistant bacterium — and they did it using the platform developed for COVID-19 vaccines. Medical Xpress publishes their announcement:

The vaccine developed by the team from the Institute for Biological Research and Tel Aviv University is an mRNA-based vaccine delivered via lipid nanoparticles, similar to the COVID-19 vaccine. However, mRNA vaccines are typically effective against viruses like COVID-19 — not against bacteria like the plague... In 2023, the researchers developed a unique method for producing the bacterial protein within a human cell in a way that prompts the immune system to recognize it as a genuine bacterial protein and thus learn to defend against it.

The researchers from Tel Aviv University and the Institute for Biological Research proved, for the first time, that it is possible to develop an effective mRNA vaccine against bacteria. They chose Yersinia pestis, the bacterium that causes bubonic plague — a disease responsible for deadly pandemics throughout human history. In animal models, the researchers demonstrated that it is possible to effectively vaccinate against the disease with a single dose. The team of researchers was led by Professor Dan Peer at Tel Aviv University, a global pioneer in mRNA drug development, who says the success of the current study now "paves the way for a whole world of mRNA-based vaccines against other deadly bacteria."

Read more of this story at Slashdot.


Where are the iPhone’s WebKit-less browsers?


It’s been 16 months since a DMA ruling allowed iOS developers like Google and Mozilla to use their own browser engines in the EU, so… where are they? According to the Open Web Advocacy (OWA) — a nonprofit group of software engineers that advocates for the open web — Apple continues to place technical and financial restrictions on WebKit-alternative iOS browser engines that effectively stifle competition.

OWA says these barriers include insufficient testing tools outside of the US, hostile legal terms, and forcing browser developers to create entirely new apps to ship their own engines, causing developers to lose their existing European user base. Instead of allowing Google, for example, to simply update its existing Chrome browser with a Blink engine, Apple’s rules require a brand new app for the EU audience, resetting the user count to zero. Developers would then have to maintain two separate browser implementations.

Mozilla told The Verge last year that it was disappointed by Apple’s restrictions, describing them as “a burden” on independent browser providers. “Apple’s proposals fail to give consumers viable choices by making it as painful as possible for others to provide competitive alternatives to Safari,” said Mozilla spokesperson Damiano DeMonte. “This is another example of Apple creating barriers to prevent true browser competition on iOS.”

Apple added support for non-WebKit browsers in iOS 17.4 to appease DMA rules that aim to prevent tech giants from disadvantaging third-party browser engines, but the OWA alleges that Apple’s restrictions mean it is “not in effective compliance with the DMA.”

“Ensuring other browsers are not able to compete fairly is critical to Apple’s best and easiest revenue stream,” the OWA says. The group notes that Safari brings in $20 billion per year in search engine revenue from Google, accounting for 14-16 percent of Apple’s annual operating profit, and that it’s set to lose $200 million per year for every 1 percent of browser market share that Safari loses.

Outside of the EU, Apple is also facing pressure from UK regulators to allow developers to use alternative browser engines in iOS, following an investigation that found both Apple and Google were “holding back” mobile browser innovation.


Setting Up OpenTelemetry on the Frontend Because I Hate Myself


Picture this: You’re having a lovely afternoon, you’ve cooked a delicious dinner, you took a walk around the neighborhood and you’re feeling delightful. Well, that just won’t do. Delightful? In this economy? I have the perfect solution! Setting up OpenTelemetry in your favorite ReactJS frontend side project. It’s the ideal fix for when you’re feeling cheerful and need to put a solid scowl back on your face come Monday morning.

With just 82 confusing instructions that might not work with your favorite toolchain, you, too, can get lost in a maze of subtle fiddly details. In fact, we’ll throw in a free performance penalty for your app’s asynchronous code!

But wait, there’s more! It’s not enough to install the dependencies; you’ve got to use them, too. Luckily, using OpenTelemetry to instrument the frontend is sure to bring out a frown. From facing context woes correlating server-side rendering with client hydration, to watching user sessions break on page navigation unless you carefully thread them through local storage, to dealing with telemetry dying when users close or switch the tab, you’re sure to find a way to experience exasperation.
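
To make the session complaint concrete, here is a minimal sketch of the local-storage threading alluded to above, assuming @opentelemetry/api is installed; the storage key, helper names and the session.id attribute are illustrative choices, not an official API:

```typescript
import type { Span } from '@opentelemetry/api';

const SESSION_KEY = 'app.session.id'; // illustrative storage key

// Reuse one session id across full page navigations by persisting it.
function getOrCreateSessionId(): string {
  let id = localStorage.getItem(SESSION_KEY);
  if (!id) {
    id = crypto.randomUUID();
    localStorage.setItem(SESSION_KEY, id);
  }
  return id;
}

// Tag a span with the session id so a backend can stitch the journey together.
function withSession(span: Span): Span {
  span.setAttribute('session.id', getOrCreateSessionId());
  return span;
}
```

Every span tagged this way can be correlated back into the same user journey, even after the page (and all in-memory state) has been thrown away.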

If that’s not enough, you can always trigger an existential crisis by asking yourself how to add an attribute to the “main” span. Is there an API method for that? Of course not. Luckily all the cool kids know to cite Jeremy’s blog post for a neat workaround (naturally, making it work on the browser is left as an exercise to the reader).
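
For the curious, one workaround pattern seen in the wild (a hedged sketch, not necessarily the approach from the referenced post) is a custom SpanProcessor that decorates every root span as it starts; the attribute name and value below are placeholders:

```typescript
import { Context, trace } from '@opentelemetry/api';
import { Span, SpanProcessor } from '@opentelemetry/sdk-trace-base';

// There is no "get the main span and decorate it" API, but a processor sees
// every span as it starts and can pick out the roots.
class RootSpanAttributeProcessor implements SpanProcessor {
  onStart(span: Span, parentContext: Context): void {
    // A span with no active parent span in its context is a root span.
    if (trace.getSpan(parentContext) === undefined) {
      span.setAttribute('app.build_id', 'example-build'); // placeholder value
    }
  }
  onEnd(): void {}
  forceFlush(): Promise<void> { return Promise.resolve(); }
  shutdown(): Promise<void> { return Promise.resolve(); }
}
```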

OK, there’s a bit of exaggeration going on and things are improving over time, but the friction is pervasive, especially when compared to Java or Go on the backend. That said, hmm… Wait a minute, aren’t frontend developers the majority of the industry? Haven’t they needed high cardinality observability for decades? How did we get here, anyway? Maybe we need to take a step back and ask if we’re approaching frontend observability the wrong way.

You’re So Context, Baby, and You Don’t Even Know It

Let’s take that step back for a minute and talk about how OpenTelemetry for the frontend got here in the first place. To start, the OpenTelemetry JavaScript implementation is doing the best it can. All things considered, the implementation (and its team of maintainers) is doing a great job! While it’s cathartic to moan about how annoying things are for the frontend, knowing the history of how we got here is crucial to identifying ways that we can realistically improve things.

Stepping back a bit, what we’re really seeing is a combination of the difficulties of implementing OpenTelemetry’s SDK and API specification in JavaScript, plus the quirks of how that implementation was designed.

To take a trip down memory lane, OpenTelemetry was originally designed to be used by backends that were stateless and serving multiple tenants with short-lived requests. Those requests would typically live less than 200 milliseconds or so and involve a manually instrumented call chain of child services that was very shallow in depth. Persistent concepts like user sessions were naturally carried in on every request, so handling long-lived state was never really an issue for the server to deal with.

Given that the backend was hosted in a data center, the network was fast and reliable, so minimizing payload for the telemetry itself was rarely a big concern. In addition, due to the service focus of the original implementations, visualizing and designing traces as callstacks made a lot of sense; this callstack design is deeply pervasive in the API, SDK and data specifications.

None of those choices were bad in context, and they still make sense for most backend systems today. Unfortunately, the frontend doesn’t play nicely with a lot of those assumptions, and it makes using OpenTelemetry particularly challenging at times.

An easy example of that is using it for asynchronous code that works in an event-loop architecture: something experienced in ReactJS apps. Another example is harder to see, but it turns out that handling event-driven architectures and long-running spans in OpenTelemetry requires similar approaches. This isn’t a coincidence. Both architectures have several similarities, with the salient one here being the need to represent incomplete information inside the telemetry pipeline (a concept analogous to a write-ahead log).
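
As a rough illustration of the event-loop friction, here is what manually threading context through async code looks like with the OpenTelemetry JavaScript API; the endpoint is made up, and propagation across await points still depends on a context manager (such as ZoneContextManager) being installed:

```typescript
import { context, trace } from '@opentelemetry/api';

const tracer = trace.getTracer('frontend-example');

// Without explicit context threading, the active span is easily lost across
// await points in the browser, orphaning child spans.
async function loadDashboard(): Promise<void> {
  const span = tracer.startSpan('load-dashboard');
  await context.with(trace.setSpan(context.active(), span), async () => {
    // Spans started here see 'load-dashboard' as their parent.
    const child = tracer.startSpan('fetch-data');
    await fetch('/api/data'); // illustrative endpoint
    child.end();
  });
  span.end();
}
```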

The Shape of Data Flow

When we build systems, we’re used to tightly correlated data flow and code structure. However, this isn’t the case with telemetry. With telemetry, developers need to consider how users engage with the system. In other words, we use OpenTelemetry to tell stories about system behavior so that we better understand it.

For the backend, often the users and the developers are the same people, so the narrative can closely match the code structure, especially if you practice a DevOps culture where teams operate the systems they build. For the frontend, in contrast, the split between the user and the developer widens because user interaction will have zero resemblance to code structure.

Which is fine. Well, unless your tooling assumes a strong correlation between code structure and user interaction. Whoops! This innocent assumption ends up making life quite troublesome in the frontend. Ideally, understanding the frontend involves looking at a temporal stream of events as users interact with the system. Every user interaction tells a story that developers need to be able to piece together to know what’s going on. However, this introduces friction because it runs counter to the way the tooling was originally built.

Imagine this tension as being the difference between giving people music albums to play versus letting them play songs in any order. With albums, you know the song order, so you can make the album very coherent. Then came music streaming: Now people pick the song order and blend between albums, which means there’s no way to reason about how your song appears in the context of their listening experience. Over time, that encourages changes in the way you write the music and publish it. Indeed, we’ve gone from designing albums as a coherent artifact to stating that songs need a hook every seven seconds.

Drawing a parallel back to tech: Users are unpredictable, they’ll use the same app from multiple tabs, and they might start on one device and continue on another. The real world is messy and never maps nicely to your code’s structure. Nothing makes that more obvious than when the instrumentation of your frontend and backend join together into a chaotic soup of contextual mud.

So, what does that mean for designing effective instrumentation that works for both? Well, it means you’re probably going to have a bad time. It also means you’ll likely reach for different data structures, code patterns and instrumentation techniques (which could be an entire series by itself). Except we don’t really have that option with OpenTelemetry. To understand why, we’re going to need to dig a bit further.

Correlate Now vs. Correlate Later

First, we need to talk about observability vendors. How they store and ingest data has very deep implications for how instrumentation libraries then go on to evolve. To put it briefly, the ideal case for an observability vendor is when the customer sends complete data, such that the vendor can append the data to its data store as-is. This is largely the case for OpenTelemetry today (using out-of-the-box capabilities), but taking this approach causes a lot of data duplication over the wire since the data is maximally denormalized. For example, OpenTelemetry’s semantic conventions recommend setting a service.address attribute, but it will be repeated on every span ever sent — despite it being a value that will never change.

If one imagines adding useful debugging data such as a build ID, service version, user ID or user agent information, this adds up extraordinarily quickly. That increase in bandwidth is even more painful in the browser since browser OpenTelemetry exporters don’t support compression yet.
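
To see how quickly it adds up, consider this hypothetical, trimmed-down view of two exported spans (illustrative only, not literal OTLP); every attribute repeated on both records is paid for again on each span, for the life of the session:

```typescript
// Hypothetical wire view: static context rides along on every single span.
const exportedSpans = [
  {
    name: 'load-dashboard',
    attributes: {
      'service.address': 'app.example.com',
      'app.build_id': '2024.07.1-abc123',
      'user_agent.original': 'Mozilla/5.0 (...)',
    },
  },
  {
    name: 'fetch-data',
    attributes: {
      'service.address': 'app.example.com',       // repeated
      'app.build_id': '2024.07.1-abc123',         // repeated
      'user_agent.original': 'Mozilla/5.0 (...)', // repeated
    },
  },
];
```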

In a world where networking is both fast and reliable, the trade-off makes perfect sense. Unsurprisingly, OpenTelemetry developed a bandwidth-heavy but compute-light specification. Pushing that complexity of data processing toward the customer makes even more sense given that many of the original vendor implementations were inspired by in-house tooling where, historically, the same company built the instrumentation, the libraries and the telemetry backend.

Once you go into the world of frontend services, all of those constraints start to change. The frontend has significantly restricted compute and network bandwidth, which is exacerbated when you consider that mobile browsing accounts for over half of all traffic on the web. As a consequence, offloading the complexity of data management to the observability vendor by deduplicating data and having the vendor correlate it would be ideal. For error detection and bandwidth reasons, sending incomplete traces as snapshots — rather than keeping the entire trace in memory — also makes sense. Naturally, this is the opposite of what most existing tooling, libraries and vendors support, although this story is slowly changing.

You can think of this tension as “correlate now” vs. “correlate later.” In a “correlate now” system, you need to send a complete span containing all the data you want to query in one go, with no later updates. “Correlate later” systems let you send data whenever you have it, but you have to do the work later to correlate it (via indices), and that can become prohibitively expensive. Sound familiar? It’s the old database debate that we’ve been having for decades. Indices or no indices? One table or many? Should we use schemas? Do we normalize the data?

Ultimately, it depends. The problem is that the frontend and backend are most likely going to end up with different ideals, and when the two meet, it gets messy.

What’s the Deal With JavaScript?

Phew! That was a lot, but it was useful background context. That said, we started talking about JavaScript, and it feels like we got sidetracked a bit… What does all of that talk about context and streaming and correlation have to do with JavaScript on the frontend web?

The answer starts by observing that JavaScript is in a unique position as the only programming language supported by web browsers that can directly interact with the browser’s document object model (DOM); it also happens to be immensely popular on the backend. Because of that, it encounters these issues in a particularly frustrating way: The initial OpenTelemetry instrumentation libraries in JavaScript were built for NodeJS and running in a backend environment, which means they introduce significant friction in the frontend. Egads!

Can we do anything about that? It’d be great if we could. Maybe we “simply” tweak the JavaScript libraries, make some browser-friendly versions and then bam, perfection? Perfect! Everyone’s happy and all is right with the world. I can already hear the unicorns frolicking in the meadows. Wait, what’s that? Unicorns don’t frolic? I’m going to pretend I didn’t hear that.

Coming back down to reality, if you’re wondering why browser-friendly OpenTelemetry libraries are insufficient, ask yourself the question, “How do we send data to an OpenTelemetry-compliant backend in a way that’s friendly for frontend users in real-world network conditions?”

The answer is going to be a bit horrifying: It turns out that you kind of can’t (at least, not out of the box with OpenTelemetry; Embrace, however, works around that issue). Are we stuck, then? Forever doomed to wander in a land of mediocre tooling and quirky libraries and frustrating language support because the data model that OpenTelemetry currently implies doesn’t match the data model that frontend applications use? If we’re aiming for perfection, one could argue that, but if we don’t let perfect be the enemy of good, we can solve several immediate challenges for the frontend now, even without changing how OpenTelemetry works.

A Pragmatic Path Forward

Frontend developers deserve something better for OpenTelemetry, especially since they stand to benefit so much from adopting high-cardinality and high-context instrumentation. Understanding the user experience, especially when interaction is so unconstrained, is a game changer.

I think we can get there. Here’s the pitch:

  1. Figure out what the frontend web needs most right now.
  2. Make that happen as soon as possible, incrementally, without breaking all the other JavaScript users.
  3. Use feedback from the community to improve OpenTelemetry itself.
  4. Collaboratively build a more powerful observability experience for everyone.

Are you sold? I know I am, and I’m not the only one. It just so happens that the OpenTelemetry community started a frontend browser Special Interest Group (SIG) dedicated to improving OpenTelemetry in the browser. Some of my favorite initiatives of the Frontend Browser SIG are:

  • Improving handling of loading and unloading in the browser (see the sketch after this list)
  • Better session support without breaking existing OpenTelemetry data models
  • Better logging models for client events, dependency sizes and tracking context across async boundaries
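
On the first of those points, here is a minimal sketch (assuming @opentelemetry/sdk-trace-web; exporter and span processor setup omitted) of the kind of lifecycle pattern developers reach for today, flushing pending telemetry when the tab is hidden:

```typescript
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';

// Sketch only: a real setup would also register a span processor and exporter.
const provider = new WebTracerProvider();
provider.register();

// Flush pending telemetry when the tab is hidden or about to unload, since
// exports scheduled for "later" may never get the chance to run.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    void provider.forceFlush(); // best-effort; the page may be torn down first
  }
});
```

Even this flush is best-effort, and that gap is exactly what the SIG’s loading/unloading work aims to close.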

Those are huge, and that’s just the start. What I love about this is that no matter how difficult improving OpenTelemetry for the frontend might seem, there’s an international community of people passionate about making things better one step at a time. Today’s development pains will become, slowly but surely, a thing of the distant past. It won’t happen overnight, but it will happen. We’ll be able to work together every step of the way to make understanding our users and improving their experiences happier and more productive for everyone involved.

If you’d like to learn more about what’s in store for OpenTelemetry support for the browser, check out this live panel discussion on July 31 at 10 a.m. PT, hosted by Embrace.

The post Setting Up OpenTelemetry on the Frontend Because I Hate Myself appeared first on The New Stack.


Building stronger engineering teams with aligned autonomy

Striking the balance between speed and strategy is a major challenge for business and tech leaders. That’s where aligned autonomy comes in.

Why Your Scrum Master Job Needs a Reset with Every Leadership Change | Joelle Tegwen



Read the full Show Notes and search through the world's largest audio library on Agile and Scrum directly on the Scrum Master Toolbox Podcast website: http://bit.ly/SMTP_ShowNotes.

Joelle shares her experience as a coach and Scrum Master at a small startup where multiple companies had merged over several years. When a new VP with a conservative approach replaced her original sponsor, who favored significant change, Joelle failed to adapt her tactics and align with the new leadership’s direction.

She emphasizes the critical importance of listening to feedback from leaders and avoiding the anti-pattern of only listening to peers and direct managers instead of higher-level leadership. Joelle explains that whenever you get a new leader, your job essentially starts over again, requiring you to discover their goals and style through interviews about their priorities. She stresses that change happens through people, not just actions, and that pushing too hard creates more resistance.

In this segment, we refer to the book The First 90 Days by Michael D. Watkins and the Deep Canvassing Technique.

Self-reflection Question: How do you currently assess and adapt to new leadership styles in your organization, and what steps could you take to better align your change management approach with leadership expectations?

[The Scrum Master Toolbox Podcast Recommends]

🔥In the ruthless world of fintech, success isn’t just about innovation—it’s about coaching!🔥

Angela thought she was just there to coach a team. But now, she’s caught in the middle of a corporate espionage drama that could make or break the future of digital banking. Can she help the team regain their mojo and outwit their rivals, or will the competition crush their ambitions? As alliances shift and the pressure builds, one thing becomes clear: this isn’t just about the product—it’s about the people.

🚨 Will Angela’s coaching be enough? Find out in Shift: From Product to People—the gripping story of high-stakes innovation and corporate intrigue.

Buy Now on Amazon

[The Scrum Master Toolbox Podcast Recommends]

About Joelle Tegwen 

Joelle helps teams build products that customers love, in a sustainable way and with high quality. She creates environments that foster high-performing teams by improving their interactions. Her background in science and passion for cognitive science complement her work. Joelle’s non-linear, 15-year career in software development has provided her with diverse perspectives.

You can link with Joelle Tegwen on LinkedIn.

Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20250714_Joelle_Tegwen_M.mp3?dest-id=246429