Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How I Use Audio Notes and the Microsoft Agent Framework to Save Hours Each Week


Keeping up with long-form content is one of the biggest time sinks for developers and knowledge workers.

Podcasts, conference talks, and YouTube tutorials are invaluable sources of information, but a 90-minute episode demands 90 minutes of your attention.

I built a YouTube and podcast summarisation tool to solve this problem for me.

Audio Notes takes any YouTube or podcast URL and produces an AI-powered summary complete with key takeaways, action items, and topic tags.

Instead of watching an hour-long video, you read a 3-minute summary and decide whether the full content is worth your time.

In this post I walk through how the tool works, the technology behind it, and how I combine it with my AI Researcher agent to stay across new developments without drowning in content.

~

The Problem

The pace of change in AI and software development is relentless.

New frameworks ship weekly.

Conference talks pile up.

Podcast backlogs grow faster than you can listen.

The traditional options are:

  • Watch everything and lose hours each day
  • Skim titles and miss important content
  • Rely on someone else’s summary and hope they captured what matters to you

 

None of these are great. I wanted a tool that could process the source material directly and surface the parts that matter.

~

How Audio Notes Works

 

The workflow has three steps.

1. Paste a URL

Drop in any YouTube video or podcast URL. The tool accepts public video links and extracts the audio for processing.

2. Transcribe and Summarise

The audio is sent to Azure Speech Services for batch transcription. Once the transcript is ready, OpenAI generates a structured summary using a tailored prompt. This produces:

  • A concise written summary of the content
  • Key takeaways pulled from the discussion
  • Action items if any are mentioned
  • Topic tags for quick categorisation
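As a sketch of what step 2 might look like internally, the structured summary can be requested with a tailored prompt and parsed into the four fields above. The actual Audio Notes prompt and response schema are not public, so the template and field names below are assumptions:

```python
import json

# Hypothetical prompt template -- the real Audio Notes prompt is not public.
SUMMARY_PROMPT = """You are a summarisation assistant.
Given the transcript below, respond with JSON containing:
  "summary" (a concise overview), "takeaways" (a list),
  "action_items" (a list), and "tags" (a list of short topic labels).

Transcript:
{transcript}
"""

def build_summary_prompt(transcript: str) -> str:
    """Fill the template with the transcript text before sending it to the model."""
    return SUMMARY_PROMPT.format(transcript=transcript)

def parse_summary_response(raw: str) -> dict:
    """Parse the model's JSON reply into the fields the summary page renders."""
    data = json.loads(raw)
    return {
        "summary": data.get("summary", ""),
        "takeaways": data.get("takeaways", []),
        "action_items": data.get("action_items", []),
        "tags": data.get("tags", []),
    }
```

Defaulting each field keeps the summary page rendering cleanly even when an episode mentions no action items.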

 

3. Review Insights

The summary page presents everything at a glance.

At the top, three stat cards show the original content duration, the estimated reading time for the summary, and the time saved.

For a 60-minute podcast, you typically get a summary that takes 3-4 minutes to read. That is a 90%+ time saving on every piece of content you process.
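The stat-card math is straightforward. Here is a minimal sketch, assuming an average reading speed of about 200 words per minute (my assumption, not a figure from the tool):

```python
WORDS_PER_MINUTE = 200  # assumed average reading speed

def time_saved_stats(content_minutes: float, summary_word_count: int) -> dict:
    """Compute the three stat-card values: duration, reading time, and % saved."""
    reading_minutes = summary_word_count / WORDS_PER_MINUTE
    saved = content_minutes - reading_minutes
    return {
        "content_minutes": content_minutes,
        "reading_minutes": round(reading_minutes, 1),
        "percent_saved": round(100 * saved / content_minutes),
    }
```

A 700-word summary of a 60-minute episode works out to about 3.5 minutes of reading and roughly a 94% time saving, in line with the numbers above.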

Below the stats, the full summary is rendered with markdown formatting, followed by the key takeaways, action items, and topic badges.

~

The Technology Stack

Audio Notes is built on .NET Core with the following services:

  • Azure Speech Services
  • OpenAI
  • Azure Blob Storage
  • SQL Server
  • Semantic Kernel

 

The batch transcription pipeline runs asynchronously. You submit a URL, a background process handles the download, transcription, and file storage, and the web app polls for completion.

Once the transcript is available, summary generation takes a few seconds.
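The submit-then-poll pattern can be sketched in a few lines. This is a simplified in-memory stand-in, not the actual implementation (which uses Azure batch transcription and a real job table); the job store and stage names here are hypothetical:

```python
import uuid
from enum import Enum

class JobStatus(Enum):
    PENDING = "pending"
    TRANSCRIBING = "transcribing"
    DONE = "done"

# In-memory stand-in for the database table the web app polls.
JOBS: dict[str, dict] = {}

def submit_url(url: str) -> str:
    """Web app: enqueue a job and return its id immediately."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"url": url, "status": JobStatus.PENDING, "transcript": None}
    return job_id

def worker_step(job_id: str) -> None:
    """Background worker: advance the job one stage per tick."""
    job = JOBS[job_id]
    if job["status"] is JobStatus.PENDING:
        job["status"] = JobStatus.TRANSCRIBING
    elif job["status"] is JobStatus.TRANSCRIBING:
        job["transcript"] = f"(transcript of {job['url']})"
        job["status"] = JobStatus.DONE

def poll(job_id: str) -> JobStatus:
    """Web app: check whether the transcript is ready yet."""
    return JOBS[job_id]["status"]
```

The key property is that `submit_url` returns immediately; the slow transcription work happens out of band, and the UI only ever asks "is it done yet?".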

~

Combining Audio Notes with the AI Researcher Agent

This is where things get interesting.

In a previous post I described how I built an AI Researcher and Newsletter Publisher using the Microsoft Agent Framework with background responses.

That agent searches for the latest developments across blogs, GitHub repositories, and news sources, then compiles a newsletter.

I use both tools together as part of my weekly learning workflow:

  1. The AI Researcher agent identifies what is new and noteworthy. It surfaces blog posts, release announcements, or conference talks I should pay attention to.
  2. When the researcher flags a long-form video or podcast, I feed the URL into Audio Notes to get the summary.
  3. I scan the key takeaways and decide whether the full content warrants a deeper look.

 

This combination means I can process a week’s worth of AI and development news in under an hour.

The researcher agent handles breadth, telling me what exists.

Audio Notes handles depth, telling me what each piece of content actually says.

Neither tool replaces the other. Together they cover the full pipeline from discovery to quick understanding.

~

A Practical Example

A recent workflow looked like this.

The AI Researcher agent flagged a six-hour Lex Fridman podcast discussion with David Heinemeier Hansson (DHH), the creator of Ruby on Rails.

Rather than blocking out time to listen, I pasted the URL into Audio Notes and within minutes I had:

  • A summary of the episode
  • Key and notable takeaways around hiring, software development, work-life balance and more
  • Action items to consider

 

The summary told me everything I needed. Total time spent: 2 minutes instead of 6 hours. In the end, I decided to save the full episode for another time. Maybe a long drive.

~

Time Saved at Scale

The time savings compound quickly. If you process 5 pieces of long-form content per week, each averaging 45 minutes, that is nearly 4 hours of listening.

With Audio Notes, the same content takes roughly 20 minutes to review as summaries.

Over a month that is close to 14 hours reclaimed. The summary page makes all this visible.
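The back-of-the-envelope arithmetic above checks out, assuming four weeks per month:

```python
def weekly_listening_minutes(items: int, avg_minutes: float) -> float:
    """Total minutes of long-form content processed per week."""
    return items * avg_minutes

listening = weekly_listening_minutes(5, 45)          # 225 minutes, i.e. 3.75 hours
review = 20                                          # minutes to read the summaries instead
monthly_saved_hours = 4 * (listening - review) / 60  # roughly 13.7 hours reclaimed
```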

Each summary shows the original duration alongside the estimated reading time and a percentage indicator of time saved.

It is a small detail, but seeing “XX% time saved” on every summary reinforces that the tool is doing its job.

~

What’s Next and Ideas

I am continuing to refine the summarisation prompts to improve the quality of key takeaways and action item extraction.

I am also exploring the possibility of batch-processing multiple URLs or entire playlists in a single operation, so the AI Researcher agent could automatically feed its discoveries into Audio Notes without manual intervention.

If you are interested in the background responses pattern that powers the AI Researcher agent, I covered the implementation in detail in the background responses post.

~

Summary

Audio Notes turns long-form video and podcast content into structured, scannable summaries.

Combined with the AI Researcher agent for content discovery, it forms a complete pipeline for staying current without the time commitment of consuming everything in full.

You can learn more about the AI Researcher in the new Microsoft Agent Framework course here.

~

Read the whole story
alvinashcraft
19 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Keeping The Streak: Just Under The Wire. Something, Something, Error Handling.

We spend a bit of time this week talking about error handling in PowerShell. All because I spent the day staring at roller coaster procedures and electronics instead of the animals at Seaworld.

InterlinedList Is Live: Lists, Posts, and Markdown Finally in One Place


It’s the 73rd Day of 2026! Per my previous post, I promised updates and thus, updates delivered!

There’s a problem I’ve run into repeatedly over the years. Actually, it’s more like a pattern of problems.

I’ve got:

  • notes scattered across markdown files
  • lists living in some task app
  • social media posts written in drafts somewhere else
  • and half-finished ideas bouncing between GitHub issues, notebooks, and random documents.

Individually, each of these tools is “fine”, but the fragmentation leaves ideas, messaging, and lists scattered and slowly lost to the void.

That’s exactly the mess that led me to build InterlinedList.

And now it’s live: 👉 https://interlinedlist.com

What InterlinedList Actually Is

At its core, InterlinedList is a platform that ties together three things that are usually awkwardly separated:

  1. Lists
  2. Social media posting (to your other accounts too, not just on InterlinedList)
  3. Markdown documents

Each of these solves a different part of the “organize your thinking and output” problem. But the real value shows up when they’re connected.

InterlinedList brings them together into a single system.

Not another note app.
Not another scheduling tool.
Not another task manager.

Instead, it’s a workflow** platform for ideas that turn into posts, lists, and documents.

Lists That Connect to What You Do

Everyone has lists. They might be all over the place. With InterlinedList you can create your own lists, with whatever schema of columns you want.

Ideas lists.
Research lists.
Feature lists.
Writing queues.
Project breakdowns.

The problem is most list tools treat lists like dead data. You write them down, check things off, and that’s about it. InterlinedList treats lists more like launch points. A list item can become:

  • a social media post
  • a markdown document
  • a reference entry
  • a trackable idea

Instead of bouncing between five tools, the list becomes the center of gravity, which is how most people actually work. Over time, my intent is to make these features even more seamlessly connected. Eventually, there will even be options to bring in the LLMs you prefer to extend the capabilities of each piece of your workflow.
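To make the “launch point” idea concrete, here is a minimal sketch. This is purely hypothetical, not InterlinedList’s actual data model: a list item that can spawn a social-post draft or a markdown document while keeping track of what it spawned:

```python
from dataclasses import dataclass, field

# Hypothetical model sketch -- not InterlinedList's actual schema --
# illustrating a list item acting as a "launch point".

@dataclass
class ListItem:
    title: str
    notes: str = ""
    links: list = field(default_factory=list)  # things spawned from this item

    def to_post(self) -> dict:
        """Turn the item into a social-post draft and remember the link."""
        post = {"kind": "post", "draft": f"{self.title}: {self.notes}"}
        self.links.append(post)
        return post

    def to_markdown(self) -> dict:
        """Turn the item into a markdown document stub and remember the link."""
        doc = {"kind": "doc", "body": f"# {self.title}\n\n{self.notes}"}
        self.links.append(doc)
        return doc
```

The point of the `links` list is the connective tissue: the item knows which posts and documents grew out of it, instead of that history living in five separate apps.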

Social Media Posting Without the Chaos

Posting to social platforms today usually looks like this:

  • Write something somewhere
  • Copy it into another platform
  • Schedule it somewhere else
  • Lose track of what you’ve already posted

InterlinedList brings posting directly into the workflow.

You can:

  • draft posts
  • schedule posts
  • organize posts into lists
  • connect posts to notes or markdown docs
  • refer to your cross-posted posts from InterlinedList (for example, see image!)
[Image: Screenshot of a social media post by Adron Hall discussing Ba Bar in University Village, featuring images of the restaurant and links to Mastodon and Bluesky.]

The goal is simple: make posting part of your idea workflow instead of a disconnected chore.

The first integrations include platforms like:

  • Mastodon
  • Bluesky

And the idea is to keep expanding that ecosystem. More to come and also open to ideas!

Markdown Documents That Fit the Workflow

If you’re like me, markdown is where the real thinking happens.

Articles. Notes. Research. Drafts. Documentation.

But markdown tools often exist in their own isolated worlds.

InterlinedList allows you to maintain markdown documents directly alongside your lists and posts, making it possible to move naturally between: writing, organizing, and publishing.

Why These Three Things Belong Together

This was the key realization. Lists, posts, and markdown aren’t separate activities. They’re three phases of the same process:

  1. Capture the idea → lists
  2. Develop the idea → markdown
  3. Share the idea → social posts

Most tools treat these as unrelated workflows. InterlinedList treats them as one continuous pipeline. Which means less context switching, less tool juggling, and far fewer lost ideas.

Early Access Offer

To kick things off with an early access offer, I’m doing something simple. If you’re interested in organizing ideas, posts, and documents in one place, now’s a great time to jump in.

The first 10 users who sign up will receive a full-featured subscription account for free.

No trial. No feature restrictions. Just the full platform.

Built Because I Wanted It

Like a lot of the things I’ve built, InterlinedList started as something I wanted for myself.

I needed a place where:

  • research lists
  • post drafts
  • markdown articles
  • and publishing

could actually live in the same ecosystem.

After building it and using it, the obvious next step was to open it up so others could use it too. Let me know what you think!

With that, stay tuned, the team has a lot more coming!

** I’d add that this is absolutely a work in progress; the team will be working to bring together more of the workflow concept and features to bridge this tooling even more seamlessly.


F# Weekly #11, 2026 – F# in .NET 11 Preview 2


Welcome to F# Weekly,

A roundup of F# content from this past week:

News

F# in .NET 11 Preview 2 – Release Notes #fsharp github.com/dotnet/core/…

Sergey Tihon 🦔🦀 (@sergeytihon.com) 2026-03-14T09:42:27.657Z

Microsoft News

Videos

Blogs

📖 After four months of work, the book “Safe Clean Architecture” is now complete 🎉 🔗 Check it out online: rdeneau.gitbook.io/safe-clean-a… #fsharp #free #e-book

(@romain-deneau.bsky.social) 2026-03-13T09:35:04.083Z

Highlighted projects

New Releases

That’s all for now. Have a great week.

If you want to help keep F# Weekly going, click here to jazz me with Coffee!

Buy Me A Coffee






Digg lays off staff and shuts down app as company retools

Digg laid off a significant number of staff and shut down its app, but says it's not giving up on the startup.

Room 3.0 - Modernizing the Room

Posted by Daniel Santiago Rivera, Software Engineer





The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS and JVM desktop support.

In this blog we outline the breaking changes, the reasoning behind Room 3.0, and the various things you can do to migrate from Room 2.0.

Breaking changes

Room 3.0 includes the following breaking API changes:

  • Dropping SupportSQLite APIs: Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible and removing Room’s dependency on Android's API simplifies the API surface for Android since it avoids having two possible backends.

  • No more Java code generation: Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm but also simplifies the codebase and development process, enabling faster iterations.

  • Focus on KSP: We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.

  • Coroutines first: Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.

A new package

To prevent compatibility issues with existing Room 2.x implementations and for libraries with transitive dependencies on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has a new Maven group and artifact IDs. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase will now be located at androidx.room3.RoomDatabase.

Kotlin and Coroutines First

With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase.

Room 3.0 also requires Coroutines and more specifically DAO functions have to be suspending unless they are returning a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation on getting started integrating Coroutines into your application.

Migration to SQLiteDriver APIs

With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including allowing the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+. We strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to SQLiteDriver APIs, then the transition to Room 3.0 is easier since the package change mostly involves updating symbol references (imports) and might require minimal changes to call-sites.

For a brief overview of the SQLiteDriver APIs, check out the SQLiteDriver APIs documentation.

For more details on how to migrate Room to use SQLiteDriver APIs, check out the official documentation to migrate from SupportSQLite.

Room SupportSQLite wrapper

We understand completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.0 series, introduced a new artifact called androidx.room:room-sqlite-wrapper. This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. This artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper to enable the migration to Room 3.0 while still supporting critical SupportSQLite usage.

For example, invocations of Database.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper() and a wrapper would be provided even if setDriver() is called on Room’s builder.

For more details check out the room-sqlite-wrapper documentation.

Room and SQLite Web Support

Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the Web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker based driver that enables persisting the database in the Origin private file system (OPFS).

For more details on how to set up Room for the Web check out the Room 3.0 release notes.

Custom DAO Return Types

Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration so that Room’s generated code becomes accessible at runtime. This enables @Dao functions to have custom return types without waiting for the Room team to add support. Existing integrations have been migrated to use this functionality, so those who rely on them will now need to add the converters to the @Database or @Dao definitions.

For example, the Paging converter is located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter. Meanwhile, for LiveData, the converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter.

For more details check out the DAO Return Type Converters section in the Room 3.0 release notes.

Maintenance mode of Room 2.x

Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.

Final thoughts

We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!