Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
150,156 stories · 33 followers

New Android features keep you focused on what matters most

See the emotion behind speech in captions, send new emoji combos, let friends know when your call is urgent and more.
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

7 ways we’re making Android more accessible

This December, Android is getting new accessibility updates, including an expanded dark theme option, emotion tags in Expressive Captions, easier Voice Access, and more.

Stay organized and express yourself with Android 16’s new updates

With today’s Android 16 release, your device is getting smarter, more personal and easier to manage.

Android 16’s latest update ushers in a more frequent release schedule


The latest update for Android 16 has arrived, marking the official end of Google’s annual release schedule. Compared to the major Android 16 launch in June, the new 16 QPR2 release is a minor update that expands features for notifications, icons, calling screens, and more, and it signals the start of Google’s long-awaited plan to release more frequent OS updates.

The accelerated release timeline addresses frustrations with Google’s previous yearly update schedule, which left even relatively new third-party Android phones waiting months or longer to get updates that were already available on the latest Pixel devices. Google announced the change in October 2024, saying that releasing more frequent platform updates “will help to drive faster innovation in apps and devices.”

These small biannual SDK releases will be the model going forward, alongside earlier major releases (Android 16, for example, launched in Q2 instead of Q3) and the usual quarterly Android feature updates. With earlier, more frequent developer previews and general release rollouts, third-party phone makers will have more time to prepare their latest devices to launch with the latest version of Android. Google’s Pixel lineup will still be first in line to receive updates, but other Android devices may not be far behind, increasing the number of devices that support new features and giving developers more reasons to use them in their apps.


Anker’s new desktop docking station has a removable USB-C hub you can take with you

You can remove the dock’s smaller hub using an eject button on top. | Image: Anker

Anker has released a new docking station that eliminates the problem of having to choose between portability and a plethora of ports. The 13-in-1 Anker Nano Docking Station includes various USB and video ports so you can quickly connect a laptop to multiple monitors, but its most useful feature is a small, portable USB-C hub that ejects from the front of the dock to expand your laptop’s connectivity while you’re away from your desk.

The Anker Nano Docking Station is available through the company’s website for $149.99 or at Amazon, where it’s currently discounted to $119.99 for Prime members. It’s a much cheaper alternative to the $399.99 14-in-1 Thunderbolt 5 dock Anker released earlier this year, but it does come with some performance limitations.

An illustration showing all of the ports on Anker’s Nano Docking Station and USB-C hub.

The new dock connects to your laptop over USB-C, but that port needs to support both DP Alt Mode and Power Delivery to access all of its functionality and use it as a power source. The dock supports a resolution of 4K / 60Hz when connecting a laptop to a single monitor, but the resolution drops to just 1920×1080 / 60Hz when connecting to three displays using the dock’s two HDMI ports and single DisplayPort connector. And while Windows users can display something different on all four screens (including the laptop’s), for Mac users all screens connected through the dock will display the same thing.

An Anker USB-C hub connected to a laptop.

The docking station also features a 10Gbps USB-C port, a 5Gbps USB-C and USB-A port, two slower 480Mbps USB-A ports, a 1Gbps ethernet port, a headphone jack, and SD and microSD memory card slots. The removable hub, which is a bit smaller than a credit card, can be ejected with a button press even when the dock is connected to a laptop. It takes the dock’s 5Gbps USB-C and USB-A ports with it, along with the memory card slots, and includes its own dedicated HDMI port, a USB-C port for connecting a power source, and a USB-C jack for attaching the hub directly to a laptop without a cable.

Correction, December 2nd: An earlier version of this article misstated that Mac users can only mirror what’s on their laptop’s screen to monitors connected through the dock. The monitors can display something different from what’s on the laptop’s screen, but multiple monitors will all display the same thing.


“The local-first rebellion”: How Home Assistant became the most important project in your house


Franck Nijhof—better known as Frenck—is one of those maintainers who ended up at the center of a massive open source project not because he chased the spotlight, but because he helped hold together one of the most active, culturally important, and technically demanding open source ecosystems on the planet. As a lead of Home Assistant and a GitHub Star, Frenck guides the project that didn’t just grow. It exploded.

This year’s Octoverse report confirms it: Home Assistant was one of the fastest-growing open source projects by contributors, ranking alongside AI infrastructure giants like vLLM, Ollama, and Transformers. It also appeared in the top projects attracting first-time contributors, sitting beside massive developer platforms such as VS Code. In a year dominated by AI tooling, agentic workflows, and typed language growth, Home Assistant stood out as something else entirely: an open source system for the physical world that grew at an AI-era pace.

The scale is wild. Home Assistant is now running in more than 2 million households, orchestrating everything from thermostats and door locks to motion sensors and lighting. All on users’ own hardware, not the cloud. The contributor base behind that growth is just as remarkable: 21,000 contributors in a single year, feeding into one of GitHub’s most lively ecosystems at a time when a new developer joins GitHub every second.

In our podcast interview, Frenck explains it almost casually.

Home Assistant is a free and open source home automation platform. It allows you to connect all your devices together, regardless of the brands they’re from… And it runs locally.

Franck Nijhof, lead of Home Assistant

He smiles when he describes just how accessible it is. “Flash Home Assistant to an SD card, put it in, and it will start scanning your home,” he says. 

This is the paradox that makes Home Assistant compelling to developers: it’s simple to use, but technically enormous. A local-first, globally maintained automation engine for the home. And Frenck is one of the people keeping it all running.

The architecture built to tame thousands of device ecosystems

At its core, Home Assistant’s problem is combinatorial explosion. The platform supports “hundreds, thousands of devices… over 3,000 brands,” as Frenck notes. Each one behaves differently, and the only way to normalize them is to build a general-purpose abstraction layer that can survive vendor churn, bad APIs, and inconsistent firmware.

Instead of treating devices as isolated objects behind cloud accounts, everything is represented locally as entities with states and events. A garage door is not just a vendor-specific API; it’s a structured device that exposes capabilities to the automation engine. A thermostat is not a cloud endpoint; it’s a sensor/actuator pair with metadata that can be reasoned about.
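A minimal Python sketch of that abstraction (Python being Home Assistant's implementation language) might look like the following. The class and field names here are invented for illustration; the real entity model is far richer:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: every device, regardless of vendor, is normalized
# into an entity with a state, a set of capabilities, and state-change
# listeners that the automation engine can subscribe to.
@dataclass
class Entity:
    entity_id: str                                  # e.g. "cover.garage_door"
    state: str                                      # e.g. "open" / "closed"
    capabilities: set[str] = field(default_factory=set)
    listeners: list[Callable[[str, str], None]] = field(default_factory=list)

    def set_state(self, new_state: str) -> None:
        old, self.state = self.state, new_state
        for listener in self.listeners:             # fan out the state-change event
            listener(old, new_state)

# A vendor-specific driver only has to translate its API into entity state;
# everything downstream sees the same uniform shape.
garage = Entity("cover.garage_door", "closed", {"open", "close"})
garage.listeners.append(lambda old, new: print(f"{old} -> {new}"))
garage.set_state("open")   # prints "closed -> open"
```

The point of the sketch is the indirection: automations react to entity state changes, never to vendor APIs directly, which is what lets the platform survive vendor churn.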

That consistency is why people can build wildly advanced automations.

Frenck describes one particularly inventive example: “Some people install weight sensors into their couches so they actually know if you’re sitting down or standing up again. You’re watching a movie, you stand up, and it will pause and then turn on the lights a bit brighter so you can actually see when you get your drink. You get back, sit down, the lights dim, and the movie continues.”

A system that can orchestrate these interactions is fundamentally a distributed event-driven runtime for physical spaces. Home Assistant may look like a dashboard, but under the hood it behaves more like a real-time OS for the home.
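The couch example above can be sketched as an event-driven rule. This is a toy Python illustration, not Home Assistant's actual automation API; the entity IDs and service names are invented:

```python
# Hypothetical rule: react to the couch's weight-sensor state transitions
# by pausing/resuming media and adjusting the lights.
def on_couch_state(old: str, new: str, call_service) -> None:
    if old == "occupied" and new == "empty":
        # Viewer stood up: pause the movie, brighten the lights.
        call_service("media_player.pause", "media_player.living_room_tv")
        call_service("light.turn_on", "light.living_room", brightness=200)
    elif old == "empty" and new == "occupied":
        # Viewer sat back down: dim the lights, resume playback.
        call_service("light.turn_on", "light.living_room", brightness=60)
        call_service("media_player.play", "media_player.living_room_tv")

# Record the service calls the rule would dispatch.
calls = []
on_couch_state("occupied", "empty",
               lambda svc, target, **kw: calls.append((svc, target, kw)))
print(calls[0][0])   # media_player.pause
```

Structurally it is just a callback on a state-change event, which is why the "real-time OS for the home" framing fits: the runtime's job is dispatching these events reliably.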

Running everything locally is not a feature. It’s a hard constraint. 

Almost every mainstream device manufacturer has pivoted to cloud-centric models. Frenck points out the absurdity:

It’s crazy that we need the internet nowadays to change your thermostat.

The local-first architecture means Home Assistant can run on hardware as small as a Raspberry Pi but must handle workloads that commercial systems offload to the cloud: device discovery, event dispatch, state persistence, automation scheduling, voice pipeline inference (if local), real-time sensor reading, integration updates, and security constraints.

This architecture forces optimizations few consumer systems attempt. If any of this were offloaded to a vendor cloud, the system would be easier to build. But Home Assistant’s philosophy reverses the paradigm: the home is the data center.

Everything from SSD wear leveling on the Pi to MQTT throughput to Zigbee network topologies becomes a software challenge. And because the system must keep working offline, there’s no fallback.

This is engineering with no safety net.

The Open Home Foundation: governance as a technical requirement

When you build a system that runs in millions of homes, the biggest long-term risk isn’t bugs. It’s ownership.

“It can never be bought, it can never be sold,” Frenck says of Home Assistant’s move to the Open Home Foundation. “We want to protect Home Assistant from the big guys in the end.”

This governance model isn’t philosophical; it is an architectural necessity. If Home Assistant ever became a commercial acquisition, cloud lock-in would follow. APIs would break. Integrations would be deprecated. Automations built over years would collapse.

A list of the fastest-growing open source projects by contributors. home-assistant/core is number 10.

The Foundation encodes three engineering constraints that ripple through every design decision:

  • Privacy: “Local control and privacy first.” All processing must occur on-device.
  • Choice: “You should be able to choose your own devices” and expect them to interoperate.
  • Sustainability: If a vendor kills its cloud service, the device must still work.

Frenck calls out Nest as an example: “If some manufacturer turns off the cloud service… that turns into e-waste.”

This is more than governance; it is technical infrastructure. It dictates API longevity, integration strategy, reverse engineering priorities, and local inference choices. It’s also a blueprint that forces the project to outlive any individual device manufacturer.

The community model that accidentally solved software quality

We don’t build Home Assistant, the community does.

“We cannot build hundreds, thousands of device integrations. I don’t have tens of thousands of devices in my home,” Frenck says.

This is where the project becomes truly unique.

Developers write integrations for devices they personally own. Reviewers test contributions against devices in their own homes. Break something, and you break your own house. Improve something, and you improve your daily life.

“That’s where the quality comes from,” Frenck says. “People run this in their own homes… and they take care that it needs to be good.”

This is the unheard-of secret behind Home Assistant’s engineering velocity. Every contributor has access to production hardware. Every reviewer has a high-stakes environment to protect. No staging environment could replicate millions of real homes, each with its own weird edge cases.

Assist: A local voice assistant built before the AI hype wave

Assist is Home Assistant’s built-in voice assistant, a modular system that lets you control your home using speech without sending audio or transcripts to any cloud provider. As Frenck puts it:

We were building a voice assistant before the AI hype… we want to build something privacy-aware and local.

Rather than copying commercial assistants like Alexa or Google Assistant, Assist takes a two-layer approach that prioritizes determinism, speed, and user choice.

Stage 1: Deterministic, no-AI commands

Assist began with a structured intent engine powered by hand-authored phrases contributed by the community. Commands like “Turn on the kitchen light” or “Turn off the living room fan” are matched directly to known actions without using machine learning at all. This makes them extremely fast, reliable, and fully local. No network calls. No cloud. No model hallucinations. Just direct mapping from phrase to automation.
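A toy Python version of such a matcher might look like this. The pattern and intent names below are invented; the real intent grammar is community-maintained and far larger:

```python
import re

# Hand-authored phrase templates mapped directly to intents: no ML,
# no network, just pattern matching. (Names are hypothetical.)
INTENT_PATTERNS = [
    (re.compile(r"turn (on|off) the (.+)", re.IGNORECASE), "HassTurnOnOff"),
]

def match_intent(utterance: str):
    """Return a structured intent for a known phrase, or None."""
    for pattern, intent in INTENT_PATTERNS:
        m = pattern.fullmatch(utterance.strip())
        if m:
            return {"intent": intent,
                    "action": m.group(1).lower(),
                    "target": m.group(2).lower()}
    return None  # unknown phrase: a candidate for the optional AI fallback

print(match_intent("Turn on the kitchen light"))
# {'intent': 'HassTurnOnOff', 'action': 'on', 'target': 'kitchen light'}
```

Because matching is a deterministic lookup, latency is bounded and the same phrase always produces the same action, which is exactly the property a cloud LLM cannot guarantee.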

Stage 2: Optional AI when you want natural language

One of the more unusual parts of Assist is that AI is never mandatory. Frenck emphasizes that developers and users get to choose their inference path: “You can even say you want to connect your own OpenAI account. Or your own Google Gemini account. Or get a Llama running locally in your own home.”

Assist evaluates each command and decides whether it needs AI. If a command is known, it bypasses the model entirely.

“Home Assistant would be like, well, I don’t have to ask AI,” Frenck says. “I know what this is. Let me turn off the lights.”

The system only uses AI when a command requires flexible interpretation, making AI a fallback instead of the foundation.
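That routing decision can be sketched in a few lines of Python. The command table is invented for illustration, and the LLM is treated as an opaque callable the user may or may not have configured:

```python
# Hypothetical known-command table; real Assist matches against a
# community-maintained intent grammar rather than exact strings.
KNOWN_COMMANDS = {
    "turn off the lights": "light.turn_off",
    "turn on the lights": "light.turn_on",
}

def handle_utterance(utterance: str, llm=None) -> str:
    action = KNOWN_COMMANDS.get(utterance.strip().lower())
    if action is not None:
        return f"executed {action}"      # fast, local, deterministic path
    if llm is not None:
        return llm(utterance)            # AI only as a fallback, if configured
    return "sorry, I didn't understand that"

print(handle_utterance("Turn off the lights"))   # executed light.turn_off
print(handle_utterance("make it cozy in here",
                       llm=lambda u: f"LLM interpreted: {u}"))
```

The ordering is the whole design: the deterministic path is tried first, so the model never sees a command the intent engine already understands.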

Open hardware to support the system

To bootstrap development and give contributors a reference device, the team built a fully open source smart speaker—the Voice Assistant Preview Edition.

“We created a small speaker with a microphone array,” Frenck says. “It’s fully open source. The hardware is open source; the software running on it is ESPHome.”

This gives developers a predictable hardware target for building and testing voice features, instead of guessing how different microphones, DSP pipelines, or wake word configurations behave across vendors.

Hardware as a software accelerator

Most open source projects avoid hardware. Home Assistant embraced it out of practical necessity.

“In order to get the software people building the software for hardware, you need to build hardware,” Frenck says.

Home Assistant Green, its prebuilt plug-and-play hub, exists because onboarding requires reliable hardware. The Voice Assistant Preview Edition exists because the voice pipeline needs a known microphone and speaker configuration.

This is a rare pattern: hardware serves as scaffolding for software evolution. It’s akin to building a compiler and then designing a reference CPU so contributors can optimize code paths predictably.

The result is a more stable, more testable, more developer-friendly software ecosystem.

A glimpse into the future: local agents and programmable homes

The trajectory is clear. With local AI models, deterministic automations, and a stateful view of the entire home, the next logical step is agentic behavior that runs entirely offline.

If a couch can trigger a movie automation, and a brewery can run a fermentation pipeline, the home itself becomes programmable. Every sensor is an input. Every device is an actuator. Every automation is a function. The entire house becomes a runtime.

And unlike cloud-bound competitors, Home Assistant’s runtime belongs to the homeowner, not the service provider.

Frenck sums up the ethos: “We give that control to our community.”

Looking to stay one step ahead? Read the latest Octoverse report and consider trying Copilot CLI.

The post “The local-first rebellion”: How Home Assistant became the most important project in your house appeared first on The GitHub Blog.
