Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Left-leaning influencers embrace Bluesky without abandoning X, Pew says

It’s no surprise that many big, left-leaning social media accounts have recently joined Bluesky — but a new analysis from the Pew Research Center attempts to quantify that shift. This comes as an update to Pew’s news influencer report released in November 2024, which did not include Bluesky in its numbers. The report focused on […]

GeekWire Podcast: Microsoft, Remitly, and the new shape of work — plus, Amazon’s NYT AI deal

A playful nod to classic computing on Microsoft’s new campus: A vintage computer mouse emerges from a faux mouse hole in the wall — a bit of tech humor in the modern workspace. (GeekWire Photo / Kurt Schlosser)

This week on the GeekWire Podcast, we discuss Amazon’s new licensing agreement with The New York Times to train its AI platforms, a notable move in the evolving relationship between media and tech.

We also go behind the scenes at two very different office spaces that reflect changing approaches to the workplace: Microsoft’s sprawling and still-developing Redmond campus, and Remitly’s globally inspired new HQ in downtown Seattle.

We start the show on a lighter note, with a confession about computer mouse loyalty and a debate over whether a trackpad is good enough in a pinch.

Listen to the full episode below or wherever you get your podcasts.

With GeekWire co-founder Todd Bishop and reporter Kurt Schlosser.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.


Hugging Face Introduces Two Open-Source Robot Designs

An anonymous reader quotes a report from SiliconANGLE: Hugging Face has open-sourced the blueprints of two internally developed robots called HopeJR and Reachy Mini. The company debuted the machines on Thursday. Hugging Face is backed by more than $390 million in funding from Nvidia Corp., IBM Corp. and other investors. It operates a GitHub-like platform for sharing open-source artificial intelligence projects. It says its platform hosts more than 1 million AI models, hundreds of thousands of datasets and various other technical assets. The company started prioritizing robotics last year after launching LeRobot, a section of its platform dedicated to autonomous machines. The portal provides access to AI models for powering robots and datasets that can be used to train those models.

Hugging Face released its first hardware blueprint, a robotic arm design called the SO-100, late last year. The SO-100 was developed in partnership with a startup called The Robot Studio. Hugging Face also collaborated with the company on the HopeJR, the first new robot that debuted this week. According to TechCrunch, it's a humanoid robot that can perform 66 movements including walking. HopeJR is equipped with a pair of robotic arms that can be remotely controlled by a human using a pair of specialized, chip-equipped gloves. HopeJR's arms replicate the movements made by the wearer of the gloves. A demo video shared by Hugging Face showed that the robot can shake hands, point to a specific text snippet on a piece of paper and perform other tasks.

Hugging Face's other new robot, the Reachy Mini, likewise features an open-source design. It's based on technology that the company obtained through the acquisition of a venture-backed startup called Pollen Robotics earlier this year. Reachy Mini is a turtle-like robot that comes in a rectangular case. Its main mechanical feature is a retractable neck that allows it to follow the user with its head or withdraw into the case. This case, which is stationary, is compact and lightweight enough to be placed on a desk. Hugging Face will offer pre-assembled versions of its open-source Reachy Mini and HopeJR robots for $250 and $3,000, respectively, with the first units starting to ship by the end of the year.

Read more of this story at Slashdot.


Docker’s Best-Kept Secret: How Observability Saves Developers’ Sanity


As software systems become increasingly complex and distributed, developers and operations teams face a daunting challenge in understanding application behavior at scale. While technologies based on containers, microservices, and cloud native architectures make it easier to deliver software, they also increase the difficulty in debugging and monitoring processes. How do you effectively diagnose problems, monitor performance in real time, and ensure reliability between services?

The answer lies in end-to-end observability. Observability extends beyond traditional monitoring, offering in-depth insights into system behavior. With the adoption of effective tracing solutions, such as OpenTelemetry and Jaeger, in Docker containers, developers can now proactively detect performance issues, increase reliability, and significantly reduce downtime.

This guide provides an overview of observability principles, clarifies the role of distributed tracing, and discusses how Docker, OpenTelemetry, and Jaeger work together to enhance operational intelligence and streamline incident responses.

Why Observability Matters More Than Ever

Modern applications are increasingly built as distributed systems made up of many interdependent services and application programming interfaces (APIs). While Docker makes scaling and deploying microservices easier, the added complexity often leads to hard-to-diagnose performance problems and scaling roadblocks.

Key observability challenges include:

  • Distributed Systems Complexity: Pinpointing the cause of errors or bottlenecks across many interconnected microservices.
  • Latency and Performance Issues: Quickly detecting slow responses or resource contention.
  • Real-Time Insights: Gaining real-time visibility into system performance rather than relying on time-lagged logs or traditional monitoring tools.

Without full observability, troubleshooting is slow and laborious, and Mean Time To Resolution (MTTR) climbs dramatically.

In my own experience debugging container service performance for a major cloud-scale infrastructure provider, the absence of distributed tracing meant we relied almost exclusively on log correlation and alert-driven metrics, which succeeded 70% of the time. The rest was guesswork and long war room meetings. Once trace propagation across services entered the equation, MTTR plummeted, and debugging became more an exercise in navigating timelines than trawling through logs.

Why Docker-Based Environments Need Observability

I recall troubleshooting a flaky deployment in which a containerized frontend service kept crashing periodically. CPU and memory looked fine, logs were opaque, and autoscaling hid the symptoms. We didn’t know a downstream authentication service was timing out under highly concurrent traffic until we added trace context using OpenTelemetry and visualized the dependencies in Jaeger. That kind of insight wasn’t possible from metrics alone; OpenTelemetry and Jaeger were game changers here.

Docker, in general, has revolutionized the software deployment landscape by enabling portability, consistency, and simplicity of scaling. However, the transient nature of containers poses some challenges:

  • Containers can start and stop often, thus making monitoring more complex.
  • Containers share resources, potentially masking performance issues.
  • Microservices often communicate asynchronously, obscuring tracing and visibility.

Deploying an observability solution in Docker environments allows developers and operators to gain in-depth insights into applications running in containers.

Introducing OpenTelemetry and Jaeger

OpenTelemetry

OpenTelemetry is an open CNCF standard for instrumentation, tracing, and metrics collection in cloud native applications. It provides consistent telemetry data across your applications, making observability easier to implement and the resulting data simpler to analyze.

Jaeger

Jaeger is an open source distributed tracing system originally developed at Uber, and it is well suited to visualizing and analyzing trace data from OpenTelemetry. Its straightforward dashboards surface actionable insights, allowing developers to quickly identify performance bottlenecks and problems.

Alternative Solutions to Jaeger

While Jaeger is a powerful tool, some other trace tools might be considered depending on specific requirements:

  • Zipkin is an excellent alternative that shares similar features and is OpenTelemetry compliant.
  • Elastic APM is a full observability solution with native support for tracing, metrics, and logging.
  • Datadog and New Relic are proprietary platforms offering deep observability features.

However, Jaeger’s open source nature and seamless integration with Docker make it particularly well-suited for teams that need an affordable and flexible solution.

Setting Up OpenTelemetry and Jaeger in Docker

Step 1: Instrument Your Application

Consider a Node.js microservice as an example:

// server.js
const express = require('express');
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');

// Send finished spans to the Jaeger collector defined in docker-compose.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(
  new SimpleSpanProcessor(
    new JaegerExporter({ endpoint: 'http://jaeger:14268/api/traces' })
  )
);
provider.register();

// Express spans hang off the underlying HTTP spans, so register both.
registerInstrumentations({
  instrumentations: [new HttpInstrumentation(), new ExpressInstrumentation()],
});

const app = express();
app.get('/', (req, res) => res.send('Hello World'));
app.listen(3000);
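
The Dockerfile in the next step runs npm install, so the packages referenced above need to be declared in package.json first. A minimal sketch of installing them follows; version pinning is up to you, and axios and @opentelemetry/api are only needed for the later examples.

npm install express axios @opentelemetry/api \
  @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base \
  @opentelemetry/instrumentation @opentelemetry/instrumentation-http \
  @opentelemetry/instrumentation-express @opentelemetry/exporter-jaeger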


Step 2: Dockerize Your App

FROM node:18-alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]


Step 3: Deploying With Docker Compose

version: '3.8'
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - jaeger
  jaeger:
    image: jaegertracing/all-in-one:1.55
    ports:
      - "16686:16686"
      - "14268:14268"

Run your environment with:

docker compose up

Access the Jaeger UI at http://localhost:16686 to explore tracing data.
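
To have something to look at, generate a little traffic against the app first; a minimal check, assuming the port mappings above:

curl http://localhost:3000/

After a few requests, the instrumented service and its traces should appear in Jaeger’s search page.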

Real Experience Implementing This at Scale

Applying this configuration to many microservices in a high-traffic production system taught an important lesson: observability is not an afterthought but an integral part of infrastructure. With container orchestration providing scalability and traces providing deep visibility into the system, every team, from infrastructure to frontend, could rely on the same trace IDs when chasing edge cases, a capability our earlier, disconnected logging approaches never delivered.

Practical Use Cases and Industry Examples

Jaeger is heavily used by major technology firms, including Uber, Red Hat, and Shopify, to enable real-time observability. These organizations use distributed tracing for:

  • Quickly detecting performance degradation in microservices
  • Improving the end-user experience by proactively detecting latency problems
  • Ensuring high reliability through timely detection and resolution of incidents

Advanced Observability Techniques

Distributed Context Propagation

Leverage OpenTelemetry’s automatic HTTP header propagation to maintain trace context across services.

With the automatic HTTP instrumentation registered in Step 1, a downstream call like the one below carries the trace context without any extra code:

const axios = require('axios');
app.get('/fetch', async (req, res) => {
  const result = await axios.get('http://service-b/api');
  res.send(result.data);
});
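
Where an outbound client is not auto-instrumented, the same context can be injected by hand with the OpenTelemetry propagation API. This is a minimal sketch; the route name, headers object, and downstream URL are illustrative:

const { context, propagation } = require('@opentelemetry/api');

app.get('/fetch-manual', async (req, res) => {
  // Copy the active trace context (traceparent/tracestate) into plain headers.
  const headers = {};
  propagation.inject(context.active(), headers);

  // Pass those headers along so the downstream service can continue the same trace.
  const result = await axios.get('http://service-b/api', { headers });
  res.send(result.data);
});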


Custom Span Creation

Define spans manually to gain deeper insight into complex operations:

const { trace } = require('@opentelemetry/api');
app.get('/compute', (req, res) => {
  const span = trace.getTracer('compute-task').startSpan('heavy-computation');
  // Compute-intensive task
  span.end();
  res.send('Done');
});
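
A common refinement, sketched below rather than taken from the original, is to use startActiveSpan so the span becomes the active context, child spans nest under it automatically, and failures are recorded on the span. The route name is illustrative:

const { trace } = require('@opentelemetry/api');

app.get('/compute-active', (req, res) => {
  const tracer = trace.getTracer('compute-task');
  tracer.startActiveSpan('heavy-computation', (span) => {
    try {
      // Compute-intensive task runs here; spans created inside nest under this one.
      res.send('Done');
    } catch (err) {
      span.recordException(err); // attach the error to the trace
      res.status(500).send('Failed');
    } finally {
      span.end(); // always close the span
    }
  });
});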


Integrating Observability into CI/CD Pipelines

It is important to integrate observability checks into continuous integration and continuous deployment pipelines, such as GitHub Actions, to ensure that code changes meet visibility expectations.

name: CI Observability Check
on: [push]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Docker Compose
        run: docker compose up -d
      - name: Observability Verification
        run: curl --retry 5 --retry-delay 10 --retry-connrefused http://localhost:16686
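
The check above only confirms that the Jaeger UI responds. A slightly stronger variant, sketched here under the assumption that the Compose file from Step 3 is used, also exercises the app and queries Jaeger’s HTTP API for the list of reported services; these extra steps would be appended to the job above:

      - name: Generate traffic
        run: curl --retry 5 --retry-delay 10 --retry-connrefused http://localhost:3000/
      - name: Query Jaeger for reported services
        run: curl -sf http://localhost:16686/api/services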

The Future of Observability

Observability continues to evolve rapidly, especially with AI-driven analytics and predictive monitoring capabilities. Emerging trends include:

  • Automated anomaly detection
  • AI-assisted root cause analysis
  • Predictive alerting that enables early incident prevention

OpenTelemetry and Jaeger are leading-edge technologies that allow organizations to take advantage of improved observability in future deployments.

As more teams deploy AI/ML services, observability must continue to improve. I’ve seen firsthand, through my experience integrating LLM services into container pipelines, just how opaque model behavior is becoming. OpenTelemetry and similar technologies are now starting to fill that gap, and being able to see inference, latency, and system interaction all on one timeline will be crucial in the AI-native world.

Conclusion

Integrating OpenTelemetry and Jaeger significantly enhances observability in Docker environments, making it possible to monitor and govern distributed systems more effectively. Combined, these technologies yield real-time, actionable intelligence that speeds troubleshooting, boosts performance, and helps teams maintain high availability. As more organizations adopt containerization and microservices, understanding observability best practices has become a vital component of operational success.

The post Docker’s Best-Kept Secret: How Observability Saves Developers’ Sanity appeared first on The New Stack.


Microsoft Targets AI ‘Holy Grail’ With Windows ML 2.0


Windows old-timers may remember PC gaming in the ’80s and ’90s. Games wouldn’t load in MS-DOS without something as rudimentary as the right sound card.

It’s no different with AI on Windows PCs today. Models don’t load without the right software tools, drivers, neural networks or relevant PC hardware.

But Microsoft is on the brink of solving this AI problem, much like it solved a gaming problem to transform Windows 95 into a PC gaming powerhouse.

The DirectX technology introduced in Windows 95 was a breakthrough. Games just worked, regardless of the hardware. Developers were sold on Windows 95’s ease of use, and DirectX revolutionized game development. Now PC gaming is bigger than console gaming.

Similarly, Microsoft hopes it has a breakthrough on its hands to run AI on PCs with Windows ML 2.0, a core AI runtime that will make AI models just run, regardless of hardware. The technology was announced at Microsoft’s Build developer show last week.

Windows ML 2.0 — which is based on the ONNX runtime — is a wrapper that allows developers to bring their own AI models and create AI apps for PCs. The runtime can compile reasonably sized models in a matter of minutes.

Developers can create apps and not worry about what’s under the hood. It’s like a version of Microsoft’s plug-and-play technology, in which hardware — old or new — just works.

Previous Mistakes and Problems

Microsoft relied on DirectML — a descendant of DirectX mostly for GPU acceleration — to do the job, but it wasn’t fast enough.

The company discovered weaknesses in DirectML when developing a feature called “Click to Do,” which can identify text and images on screen and take action on them.

“We’ve come to the realization that we need something faster,” said Ryan Demopoulos, principal product manager at Microsoft, during a Build session.

Windows ML makes sure AI apps automatically run on CPUs and neural-processing units (NPUs) in addition to GPUs.

A Closer Look

Microsoft has clearly learned from past problems of offering multiple versions of Windows for x86 and Arm.

The AI chip ecosystem is even more diverse, with Windows 11 PCs supporting AI chips, CPUs and GPUs from Intel, AMD and NVIDIA.

That’s where Windows ML 2.0 steps in. The Windows ML runtime handles the heavy lifting of identifying the hardware and automating hardware support for AI models, while also extracting the maximum performance from AI chips.

Windows ML 2.0 also figures out dependencies, procurement and update management, and includes them in installers. That is typically a lot of manual labor in fragmented environments.

An experimental version of Windows ML 2.0 is now available for developers to try out.

“It’s not yet ready or meant for production apps. Please don’t use it in your production apps,” Demopoulos said.

The ‘Holy Grail’

Microsoft reached out to Reincubate, which develops the popular Camo webcam app, to take an early look at Windows ML.

Windows ML 2.0 meets Reincubate’s vision and desire for AI models to just work on silicon without the need to consistently quantize, tune, test and deploy for a bunch of frameworks, Aidan Fitzpatrick, CEO of Reincubate, told The New Stack.

“The holy grail is being able to take a single high-precision model and have it JIT — or ‘just work’ — seamlessly across Windows silicon with different drivers, different capabilities and different precision,” Fitzpatrick said.

Having Windows manage versioning and retrieval of frameworks and models dynamically makes sense, Fitzpatrick explained.

“It’ll make our lives easier. What’s wrong with smaller, faster downloads, installs and updates? Users appreciate that,” Fitzpatrick added.

An emerging Camo feature is a real-time, adjustable retoucher, so users can tweak appearances in meetings, streams and recordings. With Windows ML and models including feature and landmark detection, “we can make it work,” Fitzpatrick said.

“Windows ML has a lot of existing, robust components behind it such as ORT (ONNX runtime), and that’s made it a lot more straightforward than it otherwise might have been — in adopting it, we’ve not had to blow things up or start over,” Fitzpatrick said.

“Windows ML should be a powerful tool in … helping us to move at the speed of silicon innovation,” Fitzpatrick said.

How It Works

Developers can bring in their own models via Windows ML public APIs. The runtime identifies the hardware and manages and updates the dependencies.

AI models talk to silicon through an “execution provider,” which brings in the necessary hardware support. The layer identifies the hardware and scales AI performance accordingly.

Developers don’t have to create multiple copies of executables for different hardware configurations. Microsoft services updates to the runtime, so developers can focus on developing models.

“Once your app is installed and initializes Windows ML, then we will scan the current hardware and download any execution providers applicable for this device,” said Xiaoxi Han, senior software engineer at Microsoft.

Digging Deeper

Xiaoxi Han demonstrated how developers could get their AI models running on Windows 11 PCs. The demonstration used a VS Code extension toolkit to convert a preselected ResNet model, which was then used to evaluate the image of a puppy.

She initiated a new “conversion” tool to convert the preselected ResNet model to the open source ONNX format, optimize it, and quantize it. Models downloaded from Hugging Face could also be converted to ONNX format.

“If you have a PyTorch model, if you’ve trained one or if you’ve obtained one, you can convert it into ONNX format and run it with Windows ML to run your on-device workloads,” Demopoulos said.

The conversion feature optimizes AI models to run locally on NPUs from Intel, Qualcomm and AMD. Over time, Microsoft will get rid of this step as conversion will support all chips.

Clicking “Run” in the interface converted the Hugging Face model to ONNX format. A small ResNet model took about 30 seconds to convert.

Han next created the app in Visual Studio by starting a console project. The .NET version and the target OS of Windows version were set in project properties. Then a NuGet package called Microsoft.AI.Windows.MachineLearning was installed for the Windows ML runtime package, which also includes the ONNX runtime bits and ML Layer APIs for execution providers.

NuGet automatically sets up the dependencies between the app code and Windows ML runtime. Other NuGet packages may be required for large language models.

Han created an entry point for the console app by bringing up the namespace and creating a program class. She initialized an ONNX runtime environment, and then Windows ML.

Han also created an infrastructure object, which needs to stay alive for the lifetime of the app process. That object scans the hardware and downloads the relevant ‘execution provider’ packages. One execution provider example is QNN, which helps AI models take advantage of NPUs in laptops with Qualcomm’s Snapdragon chip.

Then came standard AI code writing, which includes setting up the file path to the model, the label file and the image file. An ONNX session for inferencing was set up and configured, which included loading the image and setting up policies based on type of chip or power consumption.

Running inference fed the processed image tensor into the ONNX session, which analyzed the image and returned raw prediction scores.

The output results were processed to convert them to probabilities, then translated to human-readable format, showing high confidence that the image was indeed that of a golden retriever.

Coders can specify how much performance to extract from the hardware. A “MAX_PERFORMANCE” setting requests top-line performance, while “PREFER_CPU”, “PREFER_NPU” or “PREFER_GPU” can suit AI models running consistently in the background. Another instruction can set up AI models to run at minimal speed to save battery life.

“In the not-too-distant future, we also want to add … ‘workload splitting.’ You can have a single AI workload that is split across multiple different types of processors to get even greater performance,” Demopoulos said.

The full codebase from the demonstration is available on GitHub.

What APIs?

The main Windows ML layer includes “initialization” APIs — Microsoft.Windows.AI.MachineLearning — which keep the runtime up to date and download the necessary elements for the model to talk to the hardware.

The main ML Layer includes generative AI APIs that are designed to help in generative AI loops for LLMs, including Microsoft.ML.OnnxRuntimeGenAI.WinML. A runtime API layer gives developers fine-grained control over execution of the AI model.

The layers are exposed in WinRT, but Microsoft is also providing flat C wrappers with managed projections as a convenience so developers don’t need to learn WinRT.

The post Microsoft Targets AI ‘Holy Grail’ With Windows ML 2.0 appeared first on The New Stack.


BONUS Martti Kuldma: How to Transform Century-Old Organizations Through Product-Driven Agile Transformation


BONUS: Martti Kuldma shares how to transform century-old organizations through product-driven agile transformation

In this BONUS episode, we explore the remarkable transformation journey at Omniva with CEO Martti Kuldma. From traditional postal services to innovative logistics solutions, we examine how a 100+ year old company embraced product thinking, DevOps practices, and agile transformation to become a competitive force in modern logistics.

Omniva's Digital Evolution—IT as a Revenue Center

"We innovated the parcel machine business for a few years, and software has been an area of investment for us - software as a separate vertical in our business."

Omniva represents a fascinating case study in organizational transformation. While many know it as Estonia's post office, the company has evolved into an international logistics powerhouse with significant revenue streams beyond traditional postal services. Under Martti's leadership, the organization has reimagined software not as a support function but as a core revenue driver, positioning itself for the dramatic shifts expected in logistics delivery over the next five years.

The Vision: Physical Mailing as the Next IP Network

"The Vision: physical mailing as the next IP network - this will give us a lot more freedom to adapt to changes in delivery demand."

Martti's strategic vision extends far beyond conventional logistics thinking. By conceptualizing physical delivery networks similar to internet protocols, Omniva is preparing for a future where logistics companies leverage their physical infrastructure advantages. This approach addresses the fundamental challenge of fluctuating demand in e-commerce and traditional logistics, creating opportunities for crowd delivery solutions and gig economy integration that capitalize on existing network effects.

Breaking Down Waterfall Barriers

"When I came we had waterfall processes - annual budgeting, procurement for software development. It took a couple of weeks to do the first rounds, and understand what could be improved."

The transformation from traditional procurement-based software development to agile product teams required dismantling entrenched processes. Martti discovered that the contractor model, while seemingly cost-effective, created expensive knowledge transfer cycles and left the organization vulnerable when external teams departed. His engineering background enabled him to recruit talent and build sustainable development capabilities that keep critical knowledge within the organization.

Creating Cross-Functional Product Teams

"We started to create cross-functional product area teams. We are not going to tell you what you need to build. You are accountable for the logistics efficiency."

The shift from eleven distinct roles in software development to autonomous product teams represents more than organizational restructuring. By empowering teams with accountability for business outcomes rather than just deliverables, Omniva transformed how work gets planned and executed. This approach eliminates traditional handoffs and role silos, creating teams that own both the problem and the solution.

The Product Manager Evolution

"For me, the PM is directly accountable for the business results. The final step of the transformation started when I took the CEO role."

Martti identifies a critical challenge in agile transformations: the misunderstanding of Product Manager responsibilities. Rather than falling into delivery or project management patterns, effective PMs at Omniva own business results directly. This shift required company-wide transformation because technical changes alone cannot sustain organizational evolution without corresponding changes in mindset and accountability structures.

Leadership Through Storytelling

"My main tool is just talking. All I do is story-telling internally and externally. I needed to become the best salesman in the company."

The transition from technical leadership to CEO revealed that transformation leadership requires different skills than technical management. Martti discovered that his primary value comes through narrative construction and communication rather than direct technical contribution. This realization highlights how senior leaders must evolve their impact methods as organizations scale and transform.

Real-Time Feedback Philosophy

"The feedback needs to be given immediately. ‘Last year, in May your performance was not the best’ - this is completely useless feedback."

Martti's rejection of annual reviews stems from practical experience with feedback effectiveness. Immediate, personal feedback creates learning opportunities and course corrections that annual cycles cannot provide. Anonymous 360 feedback systems often dilute accountability and actionability, whereas direct, timely conversations enable meaningful professional development and relationship building.

Essential Transformation Practices

"You need to tell the story - and convince people that this transformation is essential and needed. You need to trust and let them make their own decisions."

Drawing from experiences at both Pipedrive and Omniva, Martti identifies three critical elements for leading complex organizational change:

  • Compelling narrative: People need to understand why transformation is necessary and how it benefits both the organization and their individual growth

  • Distributed decision-making: Trust enables teams to solve problems creatively rather than waiting for hierarchical approval

  • Business accountability for engineers: When technical teams understand and own business outcomes, they innovate more effectively toward meaningful goals

The dynamic team formation model used at Pipedrive, where engineers and PMs pitched ideas and assembled mission-focused teams, demonstrates how organizational structure can enable rather than constrain innovation.

About Martti Kuldma

Martti Kuldma is CEO of Omniva, leading its transformation into a product-driven logistics company. A former engineering leader at Pipedrive and CTO at Omniva, he brings deep expertise in scaling teams, agile transformation, and digital innovation. Martti is also a startup founder and passionate advocate for high-impact product organizations.

You can link with Martti Kuldma on LinkedIn.





Download audio: https://traffic.libsyn.com/secure/scrummastertoolbox/20250531_Martti_Kuldma_BONUS.mp3?dest-id=246429