Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Up Next for Arduino After Qualcomm Acquisition: High-Performance Computing

Even after its acquisition by Qualcomm, the EFF believes Arduino "isn't imposing any new bans on tinkering with or reverse engineering Arduino boards," according to Mitch Stoltz, EFF director for competition and IP litigation. While Adafruit's managing editor Phillip Torrone had claimed to his 36,000+ followers on LinkedIn that Arduino users were now "explicitly forbidden from reverse engineering," Arduino corrected him in a blog post, noting that the clause in its Terms & Conditions applied only to Arduino's Software-as-a-Service cloud applications. "Anything that was open, stays open."

And this week EE Times spoke to Guneet Bedi, SVP of Arduino, "who was unequivocal in saying that Arduino's governance structure had remained intact even after the acquisition."

"As a business unit within Qualcomm, Arduino continues to make independent decisions on its product portfolio, with no direction imposed on where it should or should not go," Bedi said. "Everything that Arduino builds will remain open and openly available to developers, with design engineers, students and makers continuing to be the primary focus.... Developers who had mastered basic embedded workflows were now asking how to run large language models at the edge and work with artificial intelligence for vision and voice, with an open source mindset," he said.

According to Bedi, this was where Qualcomm's technology became relevant. "Qualcomm's chipsets are high performance while also being very low power, which comes from their mobile and Android phone heritage. Despite being great technology, it is not easily accessible to design engineers because of cost and complexity. That made this a strong fit," he said.

The most visible outcome of the acquisition is the Uno Q, which Bedi described as comparable to a mid-tier Android phone in capability, starting at $44. For Arduino, this marks a shift beyond microcontrollers without abandoning them.
"At the end of the day, we have not gone away from our legacy," Bedi said. "You still have a real-time microcontroller, and you still write code the way Arduino developers are used to. What we added is compute, without forcing people to change how they work." Uno Q combines a Linux-based compute system with a real-time microcontroller from the STM32 family. "You do not need two different development environments or two different hardware platforms," Bedi added...

Rather than introducing a customized operating system, Arduino chose standard upstream Debian. "We are not locking developers into anything," Bedi said. "It is standard Debian, completely open...." Pre-built models covering tasks like object detection and voice recognition run locally on the board....

While the first reference design uses Qualcomm silicon, Bedi was careful to stress that this does not define the roadmap. "There is zero dependency on Qualcomm silicon," he said. "The architecture is portable. Tomorrow, we can run this on something else." That distinction matters, particularly for developers wary of vendor lock-in following the acquisition.

Uno Q does compete directly with platforms like Raspberry Pi and Nvidia Jetson, but Bedi framed the difference less in terms of raw performance and more in terms of flexibility. "When you build on those platforms, you are locked to the board," he said. "Here, you can build a prototype, and if you like it, you can also get access to the chip and design your own hardware." With built-in storage removing the need for external components, Uno Q positions itself less as a faster board and more as a way to simplify what had become an increasingly messy development stack...

Looking a year ahead, Bedi believes developers should experience continuity rather than disruption. The familiar Arduino approach to embedded and real-time systems remains unchanged, while extending naturally into more compute-intensive applications...
Taken together, Bedi's comments suggest that Arduino's post-acquisition direction is less about changing what Arduino is, and more about expanding what it can realistically be used for, without abandoning the simplicity that made it relevant in the first place. "We want to redefine prototyping in the age of physical artificial intelligence," Bedi said...

Read more of this story at Slashdot.

Read the whole story
alvinashcraft
1 hour ago
reply
Pennsylvania, USA
Share this story
Delete

My Favorite Podcasts of 2025 ⭐


2025 was a strange year for podcasts: I added four new shows but my overall listening time was down by about 50 percent year-over-year. I chalk that up to a big lifestyle change this past year in which we spent over half our time in Mexico City, where we walk everywhere every day. But we do that together, so I can’t listen to podcasts, audiobooks, or music at that time. And since we walk so much, I don’t see any point in walking solo just so I can catch up on content.

I’ll try to review the life balance thing in 2026. But the way I listen hasn’t changed: I still use and prefer Pocket Casts, and recommend it to everyone.
❤️ Favorite bingeable podcasts
Every once in a while, I discover a podcast that I enjoy so much I want to listen to every episode. But even those come and go. I was a big fan of How Did This Get Made for several years, for example, but stopped listening a few years back. And while The Rewatchables is still a favorite, I’ve listened to all the previous episodes I would like to listen to and now just cherry-pick from the new episodes as they appear, which explains why it’s moved into the next section.

But the following three endure, in alphabetical order.

If Books Could Kill

This might be my favorite podcast overall because the hosts are consistently smart and funny and land on the correct side of big issues. It skewers and debunks so-called “airport books,” those books, often of the self-help variety, that somehow sell in the millions despite being absolutely terrible and in many cases objectively wrong. Think Freakonomics, The 4-Hour Work Week, The Secret, and similar drivel. Classic.

Made by Google

I’ve been listening to this podcast since its inception and it added a video version in the most recent season. I like the host, I like hearing from those who work on Google’s products, and I make a point to listen to (or, now, watch) every episode. (The video version is not in Pocket Casts, but it’s available on YouTube.)

Scott & Mark Learn To

I could not have been happier last year when I saw that two of my favorite people at Microsoft started this podcast. It’s still a favorite, and I look forward to each episode.
👍 Favorite podcasts for cherry-picked episodes
I don’t have time to listen to every single episode of every podcast I subscribe to, but I have several that are favorites depending on the topic. This is pretty much why my Pocket Casts playlists view is set to “New Releases.” You never know when the next great show is going to pop up.

Here are my favorites, in alphabetical order.

.NET Rocks

Richard Campbell and Carl Franklin are friends and two of my favorite people, and they’re terrific together on this developer-focused podcast, which may now be the longest-running podcast in history.

Hanselminutes

Scott Hanselman (of Scott & Mark Learn To fame) is a tech-focused content maker whom I’ve known for decades and really enjoy. Thi...

The post My Favorite Podcasts of 2025 ⭐ appeared first on Thurrott.com.


Goodbye Plugins: MCP Is Becoming the Universal Interface for AI


Three months ago, I spent two weeks building a custom plugin to connect our AI assistant to our internal CRM system. Last week, I replaced it with a Model Context Protocol (MCP) server that took four hours to implement, and it works with every AI model in our stack.

This is not just a productivity win, but a glimpse into a fundamental shift happening across the AI ecosystem. It feels like the era of fragmented plugins is ending, and MCP is emerging as the universal interface that could standardize the way AI systems interact with tools, data and the real world.

The Plugin Problem We All Know Very Well

Anyone who has built AI integrations knows the pain. Each service demands its own plugin, each with its own authentication scheme, API format and maintenance overhead. I have seen teams spend 60% of their AI development time on integration plumbing instead of solving actual business problems.

Consider the integration complexity:

  • Different schemas for every platform
  • Plugins that work only with specific models
  • Context passed in isolated chunks with no unified meaning
  • Constant maintenance as APIs change

A typical enterprise AI deployment might require dozens of plugins, each a potential point of failure. The maintenance burden alone creates significant scaling challenges for AI deployments.

Enter MCP: A Different Approach

The Model Context Protocol takes a fundamentally different approach. Instead of building separate plugins for every integration, developers create MCP servers that expose system capabilities through a standardized protocol. Any MCP-compatible AI model can then discover and use those capabilities automatically.
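To make the discover-then-call pattern concrete, here is a minimal sketch in plain Python. This is not the real MCP SDK; the `ToolServer` class, the `lookup_customer` tool, and its schema are all hypothetical, shown only to illustrate how a server can expose capabilities with typed schemas that any client can enumerate before calling them.

```python
# Illustrative sketch of the discovery-then-call pattern that MCP standardizes.
# NOT the real MCP SDK: the registry, tool names and schemas are hypothetical.

from typing import Callable


class ToolServer:
    """Registers tools with typed schemas so a client can discover them."""

    def __init__(self) -> None:
        self._tools: dict[str, dict] = {}

    def tool(self, name: str, description: str, input_schema: dict):
        """Decorator that registers a function as a discoverable tool."""
        def register(fn: Callable) -> Callable:
            self._tools[name] = {
                "description": description,
                "input_schema": input_schema,
                "handler": fn,
            }
            return fn
        return register

    def list_tools(self) -> list[dict]:
        """What a client sees during discovery: names and schemas, no code."""
        return [
            {"name": n, "description": t["description"], "input_schema": t["input_schema"]}
            for n, t in self._tools.items()
        ]

    def call_tool(self, name: str, arguments: dict):
        """Dispatch a structured call to the named tool's handler."""
        return self._tools[name]["handler"](**arguments)


server = ToolServer()


@server.tool(
    name="lookup_customer",
    description="Fetch a CRM record by customer id.",
    input_schema={"type": "object", "properties": {"customer_id": {"type": "string"}}},
)
def lookup_customer(customer_id: str) -> dict:
    # Stand-in for a real CRM query.
    return {"customer_id": customer_id, "status": "active"}


# A model-side client first discovers the capabilities, then calls one:
tools = server.list_tools()
result = server.call_tool("lookup_customer", {"customer_id": "c-42"})
```

The key design point is that the client never hard-codes the integration: it reads the advertised schemas at runtime, which is what lets one server work with any compatible model.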

Why MCP Is Winning

Universal Compatibility

The most compelling advantage is cross-model compatibility. A single MCP server works with Claude, GPT, local models and enterprise AI orchestration platforms. Write once, run everywhere isn’t just a slogan anymore; it’s reality.

Rich Context, Not Just Endpoints

Where plugins expose API endpoints, MCP exposes context. Instead of blindly calling APIs, AI models can see which tools are available, how they work and what they can safely do. This means better decisions with less handholding from developers.

Built for Autonomy

Traditional plugins assumed human approval for every action. MCP was designed for agentic AI systems that need structured actions, typed inputs and outputs, safety boundaries, and auditability. It’s the natural foundation for autonomous AI workers.
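Those four properties can be sketched in a few lines. The guardrail below is a hypothetical illustration, not part of any MCP implementation: `ALLOWED_TOOLS`, `guarded_call`, and the tool names are invented to show how typed inputs, an allow-list safety boundary, and an audit trail might sit between an autonomous agent and real code.

```python
# Hypothetical sketch of guardrails a protocol layer can enforce before an
# autonomous agent's tool call reaches real code: typed input checking, an
# allow-list safety boundary, and an audit trail. All names are illustrative.

audit_log: list[dict] = []
ALLOWED_TOOLS = {"read_report"}  # the safety boundary: what the agent may do


def guarded_call(tool: str, arguments: dict, schema: dict) -> dict:
    """Validate a structured tool call, record it, then echo a result."""
    if tool not in ALLOWED_TOOLS:
        audit_log.append({"tool": tool, "allowed": False})
        raise PermissionError(f"tool {tool!r} is outside the safety boundary")
    for field, expected_type in schema.items():
        if not isinstance(arguments.get(field), expected_type):
            raise TypeError(f"argument {field!r} must be {expected_type.__name__}")
    audit_log.append({"tool": tool, "allowed": True, "arguments": arguments})
    return {"tool": tool, "ok": True}


# An in-bounds, well-typed call succeeds and is logged:
ok = guarded_call("read_report", {"report_id": "q3"}, {"report_id": str})

# An out-of-bounds call is refused and logged, never executed:
try:
    guarded_call("delete_database", {}, {})
    denied = False
except PermissionError:
    denied = True
```

Because every call, allowed or not, lands in the audit log, an operator can review exactly what an agent attempted, which is the auditability the paragraph above refers to.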

Lower Overhead

Instead of maintaining dozens of plugins, teams create a single MCP server, define capabilities and let the protocol handle discovery and negotiation. The development and maintenance burden drops significantly.

Technical Considerations for Implementing MCP

Implementing MCP isn’t without challenges. The protocol requires thoughtful design around:

  • Security boundaries and access controls
  • Error handling and recovery mechanisms
  • Performance optimization for high-throughput scenarios
  • Monitoring and observability for autonomous operations

However, these challenges exist with plugins too; MCP simply provides better tools to address them systematically.

What This Means for Development Teams

The shift to MCP represents more than a technical upgrade. It’s a fundamental change in the way we architect AI systems:

  • For platform teams: Focus shifts from maintaining integration adapters to building robust, well-designed MCP servers that expose organizational capabilities safely.
  • For AI engineers: Less time spent on plumbing, more time on intelligent behavior and user experience.
  • For enterprise architects: A path toward standardized AI integration patterns that reduce complexity and improve governance.

Adoption Is Accelerating

MCP is gaining traction fast. Major AI platforms are adding support, open source tools are multiplying, and enterprise teams are choosing MCP for new projects.

This isn’t hype; it’s solving real problems. MCP fixes the integration headaches every AI developer knows, and it’s open source with no vendor lock-in. That’s why it’s winning.

Looking Forward

A new standard is forming, and we’re seeing it happen live. Just as REST APIs replaced SOAP, and GraphQL provided a better query interface, MCP is positioning itself as the successor to fragmented plugin ecosystems.

The transition won’t happen overnight, but the direction is clear. Organizations building AI systems today should consider MCP not just as an alternative to plugins, but as the foundation for a scalable, maintainable AI integration architecture.

The age of plugins isn’t ending with a bang; it’s ending with the quiet adoption of something better. MCP is becoming the universal interface for AI, and smart development teams are getting ahead of the curve.

The post Goodbye Plugins: MCP Is Becoming the Universal Interface for AI appeared first on The New Stack.


The Most Important AI Lesson for Businesses From 2025

From: AIDailyBrief
Duration: 16:49
Views: 313

Deloitte's 17th Tech Trends report prioritizes operational redesign, agent-ready architectures, and robust orchestration for real AI value. Primary obstacles include legacy-system integration, poor data searchability and reusability, and governance unprepared for autonomous decisions. Solutions include specialized orchestrated agents, product-aligned technology squads, and strategic compute planning to manage rising inference costs.

Brought to you by:
KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown


Aspire – Beyond the Basics

From: VisualStudio
Duration: 1:01:50
Views: 88

In this deep-dive Live! 360 session, Loek Duys takes you beyond the basics of Aspire and into the internals that power local orchestration, service discovery, extensibility, and deployment workflows. Designed for developers who already know Aspire, this session peels back the curtain on how it really works—and how you can bend it to your needs.

You’ll explore the App Host, the Developer Control Plane (DCP), Aspire’s built-in reverse proxy, and its Kubernetes-compatible management API. Through live demos and real code, Loek shows how Aspire launches and monitors apps, containers, and sidecars; avoids port conflicts; wires up service discovery; and enables powerful custom integrations and publishers for Docker, Kubernetes, and Azure environments.

🔑 What You’ll Learn
• What Aspire’s App Host actually does (and what it doesn’t)
• How the Developer Control Plane (DCP) orchestrates processes, containers, and telemetry
• How Aspire avoids port conflicts using a built-in reverse proxy
• How service discovery works using standard ASP.NET Core features
• How to build custom Aspire integrations and host extensions
• How Aspire’s graph model enables publishing and deployment to Docker, Kubernetes, and Azure
• When (and when not) to use Aspire pipelines for deployment workflows

⏱️ Chapters
01:13 What Aspire is (and why it’s no longer “.NET Aspire”)
02:45 AskVantage Demo app architecture (OCR, LLMs, state storage)
04:23 How Aspire launches apps and dependencies
07:02 The App Host demystified
11:38 App Host vs Developer Control Plane (DCP)
12:48 DCP architecture: DCP, DCPCTRL, DCPPROC
15:59 How debugging works when launching from an IDE
17:03 Live demo: inspecting Aspire’s process tree
20:18 Containers, monitoring, and telemetry flow
22:15 Built-in reverse proxy & avoiding port conflicts
24:30 Demo: proxy behavior and multiple instances
26:42 DCP management API & Kubernetes-style config
30:17 Using kubectl-style tools with Aspire
32:04 Demo: launching executables and containers via DCP
35:08 Why DCP is not Kubernetes (and never for production)
36:03 Service discovery with ASP.NET Core
39:07 Demo: service discovery in the Aspire dashboard
42:41 Custom integrations & Aspire extensions
44:15 Building a custom LLM (Ollama) integration
50:34 Surfacing lifecycle events in the Aspire dashboard
52:48 Publishing & deployment concepts
54:11 Aspire publishers: Docker, Kubernetes, Azure
55:01 Aspire pipelines overview
57:00 Demo: generating Docker Compose output
1:00:12 Wrap-up & key takeaways

👤 Speaker: Loek Duys
Independent Cloud Architect, LoekD Consultancy

🔗 Links
• Download Visual Studio 2026: http://visualstudio.com/download
• Explore more Live! 360 sessions: https://aka.ms/L360Orlando25
• Join upcoming VS Live! events: https://aka.ms/VSLiveEvents

#aspire #dotnet


Why Blogging Still Matters in the Age of AI


Hi lovely readers,

If you’ve ever thought about starting a blog but stopped because “no one reads blogs anymore,” you’re not alone. With AI changing how people consume content, it’s easy to assume blogging has lost its value. I started blogging in 2021, and since then I’ve published 45 posts, with my most-read post reaching 40,000 views. That experience has shown me that blogging is still incredibly important and worth pursuing today. Here’s why.

For yourself

We can hold only a limited amount of new information in our heads each day. In my case, that limit is very noticeable. My brain often feels like a strainer. All my loved ones know that if they want me to remember something, they need to write it down for me. Telling me while I am doing something else guarantees I will forget it within minutes.

I sometimes get the opportunity to talk to some of the smartest people in the tech field. People who know far more about the subject than I ever will, and that is completely fine. What frustrates me is when someone explains something clearly, and five minutes later I realize I have already forgotten most of it.

That is where writing things down becomes important. Whether it is a reminder, a to-do list, or a personal knowledge base, writing forces you to slow down and think about what you just learned. Instead of just listening and moving on, you actively process the information.

Writing things down also helps with memory. When you put something into your own words, your brain has to revisit the concept instead of letting it fade away. Writing on paper is generally better for remembering the concept, but writing a blog post is a great alternative. You can even start on paper and later turn it into a digital blog post.

Over time, your blog becomes a manual for your future self. You will forget things, especially technical details. Having blog posts that you wrote yourself means you always have an explanation that already made sense to you once. Often these posts include examples, demos, or code snippets that you can reuse or quickly understand again.

Another benefit is language development, especially if English is not your first language. My native language is Dutch, and during university my teachers often pointed out how I would lose my train of thought halfway through a sentence. Writing technical blog posts in a non-native language takes time and effort, but that effort adds up. I now notice awkward sentences much faster and make far fewer grammar mistakes than I did in 2021. That improvement also carries over into my work life, where clear written communication through Slack, Teams, or email is essential in an international environment.

For others

If you learn something interesting or feel genuinely excited about a topic, sharing it can help others even while you are still working to understand or practice it yourself.

When I started blogging, I was still in university. By spending a lot of time on Twitter, I slowly realized that going to university is not something everyone has access to. In many countries, studying is extremely expensive. Others choose not to pursue it because of family responsibilities or personal circumstances. Meanwhile, I was 22, living in a country where university cost around 2500 euros a year, and I was learning a lot of genuinely interesting things.

That realization became a big motivator for me. Besides all the personal benefits blogging gave me, it felt good to share knowledge that I had easy access to. I would go to class, take notes on my laptop or on paper, put everything into Notion, and then turn those notes into a blog post or sometimes a short series of tweets if I felt it could be useful to others.

I still do this today. I learn from public speakers at conferences, from colleagues, and even from students I teach who are deeply interested in topics I know very little about. Someone mentions a concept or explains something new to me, and I write it down so I can explore it further later.

Some of those notes stay in Notion forever. Others turn into blog posts. Either way, the act of writing and sharing creates a way to pass knowledge forward, even while I am still learning myself.

For your career

Blogging also plays a role in your career, even if that is not the reason you started in the first place.

When you write blog posts, you are not just sharing knowledge. You are showing how you think. You show how you break down problems, how you explain concepts, and what you find important enough to spend time on. That is very different from listing skills on a CV or LinkedIn profile.

A blog gives context to your experience. It shows what you actually do with the tools and technologies you work with. Someone reading your post can see how deep you go, how you approach learning something new, and how you communicate technical ideas.

Another important part is visibility. Your work normally stays within your team or your company. A blog moves that work into the open. People outside your immediate circle can find it, learn from it, and form an opinion of you long before you ever speak to them.

This can lead to opportunities you did not actively chase. Conversations start more easily when someone already knows what you’re interested in and what you know. Sometimes people reach out because a post helped them. Sometimes it becomes a reference point during an interview or a collaboration. None of this is guaranteed, but it is something you would not have without blogging.

For me, blogging never felt like a career strategy. It felt like documenting what I was learning. Over time, it simply became part of how I work and how I grow. And that visibility is something that helps just as much as the knowledge itself.

Your keystrokes are precious

Once you start working in a team and someone asks you to explain something over text, you usually have two options. One is writing a long Teams or Slack message that only one or two people will ever read. The other is writing a blog post so that many people can benefit from the same amount of effort.

At that point, it is worth asking yourself whether those keystrokes are really worth your time and energy.

I first learned about this idea in 2024 when Scott Hanselman spoke at a meetup near me. He mentioned his website keysleft.com and his blog post “Do they deserve the gift of your keystrokes.” One quote from that post really stuck with me:

If you email someone one on one, you are reaching that one person.

If you blog about it, or update a wiki or something similar, your keystrokes travel farther and reach more people.

You only have so many hours in the day.

That idea is absolutely true. Why put the same amount of effort into a one-to-one message when you can help more people with that same effort? Writing a blog post forces you to think more clearly about the concept, allows you to reuse it later, and gives future colleagues or random people on the internet something useful to learn from.

I do this for students as well. In their first year, my students have to learn PyGame and Python Flask. Instead of explaining the same concepts from scratch every year, I wrote blog posts like How to Build Your First Python Game: A Step-by-Step Guide to Creating a Simple Shooter with PyGame and Build a To-Do List App Using Python Flask, Jinja2, and SQL. They also struggled with topics like SSH keys and Git Tags, so I wrote blog posts on those as well. All of these blog posts are public. I need to explain these concepts every year anyway, so it makes sense to put them online where they can help more than just one group of students.

The work stays the same. The reach does not.

That’s a wrap!

Blogging does not have to be about algorithms, monetization, or chasing numbers. At its core, it is about learning, documenting, and sharing in a way that works for you. It helps you remember what you learn, makes knowledge more accessible for others, and allows your effort to reach further than a single message or conversation ever could.

You do not need a big audience to start. You do not need a perfect writing style or a niche figured out from day one. What matters is that you write things down, in your own words, and put them somewhere you can come back to later. Everything else grows from there.

If you have been on the fence about starting a blog, I hope this gave you a different perspective. And if you already have one, maybe this is your sign to keep going.

If you have thoughts, questions, or experiences you would like to share, feel free to leave a comment under this post or reach out to me on my socials. I am always happy to talk about blogging, learning in public, and all things tech.
