Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

SE Radio 714: Costa Alexoglou on Remote Pair Programming

1 Share

Costa Alexoglou, co-founder of the open-source Hopp pair-programming application, talks with host Brijesh Ammanath about remote pair programming. They start with a quick introduction to pair programming and its importance to software development, then discuss the problems with the tools currently available and the challenges tool developers face in enabling remote pair programming. They consider the key features a good pair-programming tool needs, and Costa describes the journey of building Hopp and the challenges he faced along the way.

Download audio: https://traffic.libsyn.com/secure/seradio/714-costa-alexoglou-remote-pair-programming.mp3?dest-id=23379
Read the whole story
alvinashcraft
just a second ago
reply
Pennsylvania, USA
Share this story
Delete

WW 977: Moonshine University - The Push for Building 100% Native Windows Apps


Microsoft's AI ambitions overflowed into GitHub, sparking backlash when ads appeared in pull requests and raising new concerns about where your code is really going. GitHub is going to automatically use your data to train AI, so Paul explains how to opt out if you don't want that. Plus, there's a new Microsoft 365 alternative in town, and this one is from a little tech company you can trust.

Windows

  • Week D updates went live last Thursday - Smart App Control, many other minor changes
  • And here we go again: Microsoft issues emergency patch for March Week D optional update
  • Microsoft says it will replace web-based in-box apps and experiences with native apps ... somehow
  • Four builds across three channels - Canary with opt-in gets a huge Windows Console upgrade, Dev/Beta get Administrator Protection (again), more
  • AMD has a new flagship gaming processor

AI/Dev

  • Microsoft Research AI has a new Critique feature that uses ChatGPT and Claude together in an unholy Frankenstein's monster of orchestration
  • A week of Siri AI rumors/leaks - This is Apple's version of Microsoft trying to buy TikTok
  • Google makes it easier to switch to Gemini - This is like Mac vs. PC, but for AI
  • Mozilla's approach to AI in Firefox is both right and correct
  • The plan to save the open web from Big Tech
  • The future of Firefox includes a Smart Window mode that works like Private window but for AI
  • SwiftUI SDK for Android is now available

Xbox and gaming

  • New Xbox chief seeks to reset Xbox brand image - reminder, that's not the same as changing anything
  • Xbox announces 14 Day One Game Pass titles coming soon
  • Xbox Games Showcase 2026 and Gears of War E-Day Direct are coming in June
  • Sony to raise PS5 prices soon
  • Nintendo to raise prices for physical Switch 2 games soon

Tips & picks

  • Tip of the week: Opt out of AI training on GitHub
  • App pick of the week: Proton Workspace with Meet
  • RunAs Radio this week: My Home Lab
  • Brown liquor pick of the week: Jeptha Creed Six Year Old Wheated Bourbon

Hosts: Leo Laporte, Paul Thurrott, and Richard Campbell

Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly

Check out Paul's blog at thurrott.com

The Windows Weekly theme music is courtesy of Carl Franklin.

Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit

Download audio: https://pdst.fm/e/pscrb.fm/rss/p/mgln.ai/e/294/cdn.twit.tv/megaphone/ww_977/ARML2198816630.mp3

Get your Wear OS apps ready for the 64-bit requirement

Posted by Michael Stillwell, Developer Relations Engineer and Dimitris Kosmidis, Product Manager, Wear OS

64-bit architectures provide performance improvements and a foundation for future innovation, delivering faster and richer experiences for your users. Android has supported 64-bit CPUs since Android 5, and this change aligns Wear OS with recent updates for Google TV and other form factors, building on the 64-bit requirement first introduced for mobile in 2019.

Today, we are extending this 64-bit requirement to Wear OS. This blog provides guidance to help you prepare your apps to meet these new requirements.

The 64-bit requirement: timeline for Wear OS developers

Starting September 15, 2026:

  • All new apps and app updates that include native code will be required to provide 64-bit versions in addition to 32-bit versions when publishing to Google Play.
  • Google Play will start blocking the upload of non-compliant apps to the Play Console.

We are not making changes to our policy on 32-bit support, and Google Play will continue to deliver apps to existing 32-bit devices.

The vast majority of Wear OS developers have already made this shift, with 64-bit-compliant apps already available. For the remaining apps, we expect the effort required to be small.

Preparing for the 64-bit requirement

Many apps are written entirely in non-native code (e.g., Kotlin or Java) and do not need any code changes. However, even if you do not write native code yourself, a dependency or SDK could be introducing native code into your app, so you still need to check whether your app includes it.

Assess your app

  • Inspect your APK or app bundle for native code using the APK Analyzer in Android Studio.
  • Look for .so files within the lib folder. For ARM devices, 32-bit libraries are located in lib/armeabi-v7a, while the 64-bit equivalent is lib/arm64-v8a.
  • Ensure parity: The goal is to ensure that your app runs correctly in a 64-bit-only environment. While specific configurations may vary, for most apps this means that for each native 32-bit architecture you support, you should include the corresponding 64-bit architecture by providing the relevant .so files for both ABIs.
  • Upgrade SDKs: If you only have 32-bit versions of a third-party library or SDK, reach out to the provider for a 64-bit compliant version.
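As a rough sketch of the checks above, here is how one might inspect an APK (which is a zip archive) for native libraries and verify 32-bit/64-bit parity using Python's standard `zipfile` module. The function names and the parity logic are illustrative assumptions, not part of any official Android tooling:

```python
import zipfile
from collections import defaultdict

def native_abis(apk_path):
    """Map each ABI directory under lib/ to the set of .so files it contains."""
    abis = defaultdict(set)
    with zipfile.ZipFile(apk_path) as apk:
        for name in apk.namelist():
            # Native libraries live at lib/<abi>/<library>.so inside the APK.
            if name.startswith("lib/") and name.endswith(".so"):
                _, abi, *rest = name.split("/")
                abis[abi].add("/".join(rest))
    return dict(abis)

def missing_64bit(abis):
    """List 32-bit ARM libraries that have no 64-bit (arm64-v8a) counterpart."""
    have_64 = abis.get("arm64-v8a", set())
    return sorted(abis.get("armeabi-v7a", set()) - have_64)
```

If `missing_64bit` returns a non-empty list, the app falls short of the parity goal described above; the APK Analyzer in Android Studio surfaces the same information interactively.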

How to test 64-bit compatibility

The 64-bit version of your app should offer the same quality and feature set as the 32-bit version. The Wear OS Android Emulator can be used to verify that your app behaves and performs as expected in a 64-bit environment.

Note: Since Wear OS apps are required to target Wear OS 4 or higher to be submitted to Google Play, you are likely already testing on these newer, 64-bit only images.

When testing, pay attention to native code loaders such as SoLoader or older versions of OpenSSL, which may require updates to function correctly on 64-bit-only hardware.

Next steps

We are announcing this requirement now to give developers a six-month window to bring their apps into compliance before enforcement begins in September 2026. For more detailed guidance on the transition, please refer to our in-depth documentation on supporting 64-bit architectures.

This transition marks an exciting step for the future of Wear OS and the benefits that 64-bit compatibility will bring to the ecosystem.


PowerShell 7.6 release postmortem and investments


We recently released PowerShell 7.6, and we want to take a moment to share context on the delayed timing of this release, what we learned, and what we’re already changing as a result.

PowerShell releases typically align closely with the .NET release schedule. Our goal is to provide predictable and timely releases for our users. For 7.6, we planned to release earlier in the cycle, but ultimately shipped in March 2026.

What goes into a PowerShell release

Building and testing a PowerShell release is a complex process with many moving parts:

  • 3 to 4 release versions of PowerShell each month (e.g. 7.4.14, 7.5.5, 7.6.0)
  • 29 packages in 8 package formats
  • 4 architectures (x64, Arm64, x86, Arm32)
  • 8 operating systems (multiple versions each)
  • Published to 4 repositories (GitHub, PMC, winget, Microsoft Store) plus a PR to the .NET SDK image
  • 287,855 total tests run across all platforms and packages per release

What happened

The PowerShell 7.6 release was delayed beyond its original target and ultimately shipped in March 2026.

During the release cycle, we encountered a set of issues that affected packaging, validation, and release coordination. These issues emerged late in the cycle and reduced our ability to validate changes and maintain release cadence.

Combined with the standard December release pause, these factors extended the overall release timeline.

Timeline

  • October 2025 – Packaging-related changes were introduced as part of ongoing work for the 7.6 release.

    • Changes to the build created a bug in 7.6-preview.5 that caused the Alpine package to fail. The method used in the new build system to build the Microsoft.PowerShell.Native library wasn’t compatible with Alpine. This required additional changes for the Alpine build.
  • November 2025 – Additional compliance requirements were imposed requiring changes to packaging tooling for non-Windows platforms.

    • Because of the additional work created by these requirements, we weren’t able to ship the fixes made in October until December.
  • December 2025 – We shipped 7.6-preview.6, but due to the holidays there were complications caused by a change freeze and limited availability of key personnel.

    • We weren’t able to publish to PMC during our holiday freeze window.
    • We couldn’t publish NuGet packages because the current manual process limits who can perform the task.
  • January 2026 – Packaging changes required deeper rework than initially expected and validation issues began surfacing across platforms.

    • We also discovered a compatibility issue on RHEL 8. The libpsl-native library must be built against glibc 2.28, rather than the glibc 2.33 used by RHEL 9 and later.
  • February 2026 – Ongoing fixes, validation, and backporting of packaging changes across release branches continued.
  • March 2026 – Packaging changes stabilized, validation completed, and PowerShell 7.6 was released.
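To illustrate the RHEL 8 issue in the timeline above: the minimum glibc a native library requires is determined by the highest versioned GLIBC symbol it references, which `objdump -T` prints for each dynamic symbol. The sketch below, which parses that text output, is an assumption about how such a check could be automated, not the PowerShell team's actual tooling:

```python
import re

def max_glibc_requirement(objdump_output):
    """Return the highest GLIBC_x.y(.z) symbol version referenced in the
    dynamic symbol table, as a tuple of ints, or None if none are found."""
    versions = {
        tuple(int(part) for part in match.group(1).split("."))
        for match in re.finditer(r"GLIBC_(\d+(?:\.\d+)+)", objdump_output)
    }
    return max(versions, default=None)
```

A library whose result is at most (2, 28) will load against RHEL 8's glibc 2.28; a result of (2, 33) would pin it to RHEL 9 and later, which is the failure mode described above.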

What went wrong and why

Several factors contributed to the delay beyond the initial packaging change.

  • Late-cycle packaging system changes: A compliance requirement forced us to replace the tooling used to generate non-Windows packages (RPM, DEB, PKG). We evaluated whether this could be addressed with incremental changes, but determined that the existing tooling could not be adapted to meet the requirements, so a full replacement of the packaging workflow was needed. Because this change occurred late in the release cycle, we had limited time to validate the new system across all supported platforms and architectures.
  • Tight coupling to packaging dependencies: Our release pipeline relied on this tooling as a critical dependency. When it became unavailable, we did not have an alternate implementation ready, which forced us to build a replacement for a core part of the release pipeline from scratch, under time pressure, increasing both risk and complexity.
  • Reduced validation signal from previews: Our preview cadence slowed during this period, which reduced opportunities to validate changes incrementally. As a result, issues introduced by the packaging changes were discovered later in the cycle, when they were more expensive to correct.
  • Branching and backport complexity: Because of the new compliance requirements, changes needed to be backported and validated across multiple active branches, which increased coordination overhead and extended the time required to reach a stable state.
  • Release ownership and coordination gaps: Release ownership was not explicitly defined, particularly during maintainer handoffs. This made it difficult to track progress, assign responsibility for blockers, and make timely decisions during critical phases of the release.
  • Lack of early risk signals: We did not have clear signals indicating that the release timeline was at risk. Without structured tracking of release health and ownership, issues accumulated without triggering early escalation or communication.

How we responded

As the scope of the issue became clear, we shifted from attempting incremental fixes to stabilizing the packaging system as a prerequisite for release.

  • We evaluated patching the existing packaging workflow versus replacing it, and determined a full replacement was required to meet compliance requirements.
  • We rebuilt the packaging workflows for non-Windows platforms, including RPM, DEB, and PKG formats.
  • We validated the new packaging system across all supported architectures and operating systems to ensure correctness and consistency.
  • We backported the updated packaging logic across active release branches to maintain alignment between versions.
  • We coordinated across maintainers to prioritize stabilization work over continuing release progression with incomplete validation.

This shift ensured a stable and compliant release, but extended the overall timeline as we prioritized correctness and cross-platform consistency over release speed.

Detection gap

A key gap during this release cycle was the lack of early signals indicating that the packaging changes would significantly impact the release timeline.

Reduced preview cadence and late-cycle changes limited our ability to detect issues early. Additionally, the absence of clear release ownership and structured tracking made it more difficult to identify and communicate risk as it developed.

What we are doing to improve

This experience highlighted several areas where we can improve how we deliver releases. We’ve already begun implementing changes:

  • Clear release ownership: We have established explicit ownership for each release, with clear responsibility and transfer mechanisms between maintainers.
  • Improved release tracking: We are using internal tracking systems to make release status and blockers more visible across the team.
  • Consistent preview cadence: We are reinforcing a regular preview schedule to surface issues earlier in the cycle.
  • Reduced packaging complexity: We are working to simplify and consolidate packaging systems to make future updates more predictable.
  • Improved automation: We are exploring additional automation to reduce manual steps and improve reliability in the face of changing requirements.
  • Better communication signals: We are identifying clearer signals in the release process to notify the community earlier when timelines are at risk. Going forward, we will share updates through the PowerShell repository discussions.

Moving forward

We understand that many of you rely on PowerShell releases to align with your own planning and validation cycles. Improving release predictability and transparency is a priority for the team, and these changes are already in progress.

We appreciate the feedback and patience we received from the community as we worked through these changes, and we’re committed to continuing to improve how we deliver PowerShell.

— The PowerShell Team

The post PowerShell 7.6 release postmortem and investments appeared first on PowerShell Team.


AI Tools for Developers


Using AI tools is an important part of being a software developer.

We just posted a course on the freeCodeCamp.org YouTube channel that will teach you how to use AI tools to become more productive as a developer. I created this course!

In this course, you will master AI pair programming and agentic terminal workflows using top-tier tools like GitHub Copilot, Anthropic's Claude Code, and the Gemini CLI. The course also covers open-source automation with OpenClaw, teaching you how to set up a highly customizable, locally hosted AI assistant for your development environment. Finally, you will learn how to maintain high code quality and streamline your team's workflow by integrating CodeRabbit for automated, AI-driven pull request analysis.

Watch the full course on the freeCodeCamp.org YouTube channel (1.5 hour watch).

The Future of Tech Blogging in the Age of AI


I've been blogging on this site for almost 20 years now, and the majority of my posts are simple coding tutorials, where I share what I've learned as I explore various new technologies (my journey on this blog has taken me through Silverlight, WPF, IronPython, Mercurial, LINQ, F#, Azure, and much more).

My process has always been quite simple. First, I work through a technical challenge and eventually get something working. And then, I write some instructions for how to do it.

Benefits of tech blogging

There are many benefits to sharing your progress like this:

  1. The process of putting it into writing helps solidify what you learned
  2. Despite this I still often forget how I achieved something, so my blog functions as a journal I can refer back to later
  3. You're supporting the wider developer community by sharing proven ways to get something working
  4. Thanks to "Cunningham's Law" ("the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer."), your post may lead you to discover a better way to achieve the same goal, or a fatal flaw in your approach
  5. And gradually it builds your personal reputation and credibility, as eventually you'll build up visitors (although you may find that your most popular post of all time is on the one topic that you most certainly aren't an expert on!)

Are LLMs going to ruin it all?

But recently I've been wondering - are LLMs going to put an end to coding tutorial blogs like mine? Do they render it all pointless?

For starters, GitHub Copilot and Claude Code have already dramatically changed the way I go about exploring a new technique or technology. Instead of slogging through Bicep documentation, and endlessly debugging why my template didn't work, I now just ask the AI model to create one for me.

Refreshingly, I notice that it gets it wrong just as frequently as I do, but it doesn't get frustrated - it just keeps battling away until eventually it gets something working.

But now it feels like a hollow victory. Is there even any point writing a tutorial about it? If you can simply ask an agent to solve the problem, why would anyone need to read my tutorial? Are developers even going to bother visiting blogs like mine in the future?

And then there's the question of who writes the tutorial? Not only is the agent much quicker than me at solving the technical challenge, it's also significantly faster at writing the tutorial, and undeniably a better writer than me too. So maybe I should just let it write the article for me? But the internet is already full of AI-generated slop...

Should you let AI write your blog posts?

This is a deeply polarizing question. There are a number of possible options:

Level 1: Human only

You could insist on hand-writing everything yourself, with strictly no AI assistance. That's what you're reading right now (if you can't already tell from the decidedly mediocre writing style!)

This mirrors a big debate going on in the world of music production at the moment. If AI tools like Suno can generate, from a single prompt, an entire song that sounds far more polished than anything I've ever managed to produce, does that spell the end of real humans writing and recording songs? And should we fight against it or just embrace it as the future?

I think tech tutorials do fall into a different category to music though. If I want to learn how to achieve X with technology Y, I just want clear, concise and correct instructions - and I'm not overly bothered whether it came 100% from a human mind or not.

Having said that, we've already identified a key benefit of writing your own tutorials: it helps solidify what you've learned. Doing your own writing will also improve your own powers of communication. For those reasons alone I have no intention of delegating all my blog writing to LLMs.

Level 2: Human writes, AI refines

On the other hand, it seems churlish to refuse the benefits of LLMs for proofreading, fact checking, and stylistic improvements. When I recently posted about whether code quality still matters, this is exactly what I did: I wrote the post myself, then asked Claude Code to help me refine it by critiquing my thoughts and providing counter-arguments.

To be honest, I ignored most of the feedback, but undoubtedly it improved the final article. This is the approach I've been taking with my Pluralsight course scripts - I first write the whole thing myself, and then ask an LLM to take me to task and tell me all the things I got wrong. (Although they're still ridiculously sycophantic and tell me it's the greatest thing they've ever read on the topic of lazy loading!)

Level 3: AI writes, human refines

But of course, my time is at a premium. A blog tutorial often takes me well over two hours to write. That's a big time investment for something that will likely barely be read by anyone.

And if all I'm producing is a tutorial, perhaps it would be better for me to get the LLM to do the leg-work of creating the structure and initial draft, and then I can edit afterwards, adapting the language to sound a bit more in my voice, and deleting some of the most egregious AI-speak.

That's exactly what I tried with a recent post on private endpoints. Claude Code not only created the Bicep and test application, but once it was done I got it to write up the instructions and even create a GitHub repo of sample code. The end result was far more thorough than I would have managed myself, and although I read the whole thing carefully and edited it a bit, I have to admit that most of the time I couldn't think of better ways to phrase each sentence, so a lot of it ended up unchanged.

That left a bad taste in my mouth to be honest. If I do that too often will I lose credibility and scare away readers? And yet I do feel like it was a genuinely valuable article that shows how to solve a problem that I'd been wanting to blog about for a long time.

Level 4: AI only

Of course, there is a level further, and now we are getting to the dark side. Could I ask Claude or ChatGPT to write me a blog post and publish it without even reading it myself? I could instruct it to mimic my writing style, and it might even do a good enough job to go unnoticed. Maybe at some point in the future, Claude will dethrone my most popular article with one it wrote entirely itself.

To be honest, I have no interest in doing that at all - it undermines the purpose of this blog which is a way for me to share the things that I have learned. So I can assure you I have no intention of filling this site up with "slop" articles where the LLM has come up with the idea, written and tested the code, and published the article all without me having to be involved at all.

But interestingly, this approach might make sense for back-filling the documentation for my open-source project NAudio. Over the years I've written close to one hundred tutorials but there are still major gaps in the documentation.

I'm thinking of experimenting with asking Claude Code to write a short tutorial for every public class in the NAudio repo, and to then check its work by following the tutorial and making sure it really works.

I expect we're going to see an explosion of this approach too, and it could be a genuine positive for the open source community, where documentation is often lacking or outdated. If LLMs are to make a positive contribution to the world of coding tutorials, this is probably one of the best ways they can be used.

Why tech blogging still matters

If you're still with me at this point, well done - I know I've gone on too long. Even humans can be as long-winded as LLMs sometimes. But the process of writing down my thoughts on this issue has helped me gain some clarity, and made me realise that it doesn't necessarily matter whether I take an AI-free, AI-assisted, or even an AI-first approach to my posts.

The value of sharing these coding tutorials is that the problems I'm solving are real-world problems. They are tasks that I genuinely needed to accomplish, and they came with unique constraints and requirements specific to my circumstances. That gives them an authenticity that an AI can't fake: at best it can guess at what humans might want to achieve and create tutorials about that.

So when I'm reading your tech blog (which I hope you'll share a link to), I won't really care whether or not you used ChatGPT to create the sample code, or make you sound like a Pulitzer prize winner. I'll be interested because you're sharing your experience of how you solved a problem using the tools at your disposal.
