
Resurgence of a multi‑stage AiTM phishing and BEC campaign abusing SharePoint


Microsoft Defender researchers uncovered a multi‑stage adversary‑in‑the‑middle (AiTM) phishing and business email compromise (BEC) campaign targeting multiple organizations in the energy sector, resulting in the compromise of various user accounts. The campaign abused SharePoint file‑sharing services to deliver phishing payloads and relied on inbox rule creation to maintain persistence and evade user awareness. The attack then expanded into a series of AiTM attacks and follow-on BEC activity spanning multiple organizations.

Following the initial compromise, the attackers leveraged trusted internal identities from the target to conduct large‑scale intra‑organizational and external phishing, significantly expanding the scope of the campaign. Defender detections surfaced the activity to all affected organizations.

This attack demonstrates the operational complexity of AiTM campaigns and the need for remediation that goes beyond a standard identity-compromise response. Password resets alone are insufficient: impacted organizations must also revoke active session cookies and remove the attacker-created inbox rules used to evade detection.

Attack chain: AiTM phishing attack

Stage 1: Initial access via trusted vendor compromise

Analysis of the initial access vector indicates that the campaign leveraged a phishing email sent from an email address belonging to a trusted organization, likely compromised before the operation began. The lure employed a SharePoint URL requiring user authentication and used subject‑line mimicry consistent with legitimate SharePoint document‑sharing workflows to increase credibility.

Threat actors continue to leverage trusted cloud collaboration platforms, particularly Microsoft SharePoint and OneDrive, due to their ubiquity in enterprise environments. These services offer built‑in legitimacy, flexible file‑hosting capabilities, and authentication flows that adversaries can repurpose to obscure malicious intent. This widespread familiarity enables attackers to deliver phishing links and hosted payloads that frequently evade traditional email‑centric detection mechanisms.

Stage 2: Malicious URL clicks

Threat actors often abuse legitimate services and brands to avoid detection. In this scenario, we observed that the attacker leveraged the SharePoint service for the phishing campaign. While threat actors may attempt to abuse widely trusted platforms, Microsoft continuously invests in safeguards, detections, and abuse prevention to limit misuse of our services and to rapidly detect and disrupt malicious activity.

Stage 3: AiTM attack

Access to the URL redirected users to a credential prompt, but visibility into the attack flow did not extend beyond the landing page.

Stage 4: Inbox rule creation

The attacker later signed in from another IP address and created an inbox rule configured to delete all incoming emails in the user’s mailbox and mark them as read.

Stage 5: Phishing campaign

Following the inbox rule creation, the attacker initiated a large-scale phishing campaign involving more than 600 emails that contained another phishing URL. The emails were sent to the compromised user’s contacts, both within and outside of the organization, as well as to distribution lists. The recipients were identified based on recent email threads in the compromised user’s inbox.

Stage 6: BEC tactics

The attacker then monitored the victim user’s mailbox for undelivered and out-of-office emails and deleted them from the Archive folder. The attacker also read replies from recipients who questioned the authenticity of the phishing email and responded, likely to falsely confirm that the email was legitimate. These emails and responses were then deleted from the mailbox. Such techniques are common in BEC attacks and are intended to keep the victim unaware of the attacker’s operations, helping the attacker maintain persistence.

Stage 7: Accounts compromise

Recipients within the organization who clicked the malicious URL in the phishing emails were targeted by a follow-on AiTM attack. Microsoft Defender Experts identified all compromised users based on landing IP and sign-in IP patterns.

Mitigation and protection guidance

Microsoft Defender XDR detects suspicious activities related to AiTM phishing attacks and their follow-on activities, such as sign-in attempts on multiple accounts and creation of malicious rules on compromised accounts. To further protect themselves from similar attacks, organizations should also consider complementing MFA with conditional access policies, where sign-in requests are evaluated using additional identity-driven signals like user or group membership, IP location information, and device status, among others.

Defender Experts also initiated a rapid response with Microsoft Defender XDR to contain the attack, including:

  • Automatically disrupting the AiTM attack on behalf of the impacted users based on the signals observed in the campaign.
  • Initiating zero-hour auto purge (ZAP) in Microsoft Defender XDR to find and take automated actions on the emails that are a part of the phishing campaign.

Defender Experts further worked with customers to remediate compromised identities through the following recommendations:

  • Revoking session cookies in addition to resetting passwords.
  • Revoking the MFA setting changes made by the attacker on the compromised user’s accounts.
  • Deleting suspicious rules created on the compromised accounts.

Mitigating AiTM phishing attacks

The general remediation measure for any identity compromise is to reset the compromised user’s password. In AiTM attacks, however, the sign-in session itself is compromised, so a password reset alone is not an effective solution. Even if the compromised user’s password is reset and sessions are revoked, the attacker can establish persistence by tampering with MFA so they can continue to sign in at will, for instance by registering a new MFA method that sends a one-time password (OTP) to the attacker’s mobile number. With these persistence mechanisms in place, the attacker can retain control over the victim’s account despite conventional remediation measures.

While AiTM phishing attempts to circumvent MFA, MFA remains an essential pillar of identity security and is highly effective at stopping a wide variety of threats; it is the reason threat actors developed the AiTM session cookie theft technique in the first place. Organizations are advised to work with their identity provider to ensure security controls like MFA are in place. Microsoft customers can implement MFA through various methods, such as Microsoft Authenticator, FIDO2 security keys, and certificate-based authentication.

Defenders can also complement MFA with the following solutions and best practices to further protect their organizations from such attacks:

  • Use security defaults as a baseline set of policies to improve identity security posture. For more granular control, enable conditional access policies, especially risk-based access policies. Conditional access policies evaluate sign-in requests using additional identity-driven signals like user or group membership, IP location information, and device status, among others, and are enforced for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant devices, trusted IP address requirements, or risk-based policies with proper access control.
  • Implement continuous access evaluation.
  • Invest in advanced anti-phishing solutions that monitor and scan incoming emails and visited websites. For example, organizations can leverage web browsers that automatically identify and block malicious websites, including those used in this phishing campaign, and solutions that detect and block malicious emails, links, and files.
  • Continuously monitor suspicious or anomalous activities. Hunt for sign-in attempts with suspicious characteristics (for example, location, ISP, user agent, and use of anonymizer services).

Detections

Because AiTM phishing attacks are complex threats, they require solutions that leverage signals from multiple sources. Microsoft Defender XDR uses its cross-domain visibility to detect malicious activities related to AiTM, such as session cookie theft and attempts to use stolen cookies for signing in.

Using Microsoft Defender for Cloud Apps connectors, Microsoft Defender XDR raises AiTM-related alerts in multiple scenarios. For Microsoft Entra ID customers using Microsoft Edge, attempts by attackers to replay session cookies to access cloud applications are detected by Defender for Cloud Apps connectors for Microsoft 365 and Azure. In such scenarios, Microsoft Defender XDR raises the following alert:

  • Stolen session cookie was used

In addition, signals from these Defender for Cloud Apps connectors, combined with data from the Defender for Endpoint network protection capabilities, also trigger the following Microsoft Defender XDR alert in Microsoft Entra ID environments:

  • Possible AiTM phishing attempt

A specific Defender for Cloud Apps connector for Okta, together with Defender for Endpoint, also helps detect AiTM attacks on Okta accounts using the following alert:

  • Possible AiTM phishing attempt in Okta

Other detections that show potentially related activity are the following:

Microsoft Defender for Office 365

  • Email messages containing malicious file removed after delivery
  • Email messages from a campaign removed after delivery
  • A potentially malicious URL click was detected
  • A user clicked through to a potentially malicious URL
  • Suspicious email sending patterns detected

Microsoft Defender for Cloud Apps

  • Suspicious inbox manipulation rule
  • Impossible travel activity
  • Activity from infrequent country
  • Suspicious email deletion activity

Microsoft Entra ID Protection

  • Anomalous Token
  • Unfamiliar sign-in properties
  • Unfamiliar sign-in properties for session cookies

Microsoft Defender XDR

  • BEC-related credential harvesting attack
  • Suspicious phishing emails sent by BEC-related user

Indicators of Compromise

  • Network Indicators
    • 178.130.46.8 – Attacker infrastructure
    • 193.36.221.10 – Attacker infrastructure

Microsoft recommends the following mitigations to reduce the impact of this threat:

Hunting queries

Microsoft Defender XDR

AHQ#1 – Phishing campaign:

EmailEvents
| where Subject has "NEW PROPOSAL – NDA"

AHQ#2 – Sign-in activity from the suspicious IP addresses:

AADSignInEventsBeta
| where Timestamp >= ago(7d)
| where IPAddress startswith "178.130.46." or IPAddress startswith "193.36.221."
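
The campaign also relied on inbox rule creation (Stage 4). A companion query along the following lines can surface newly created inbox rules for review; this is a general sketch that assumes the standard CloudAppEvents advanced hunting schema rather than a query published with the original report, so tune the time window and filters to your environment.

CloudAppEvents
| where Timestamp >= ago(7d)
| where ActionType == "New-InboxRule"
| project Timestamp, AccountDisplayName, IPAddress, ActivityObjects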

Microsoft Sentinel

Microsoft Sentinel customers can use the following analytic templates to find BEC-related activities similar to those described in this post:

In addition to the analytic templates listed above, Microsoft Sentinel customers can use the following hunting content to hunt for BEC-related activities:


The post Resurgence of a multi‑stage AiTM phishing and BEC campaign abusing SharePoint  appeared first on Microsoft Security Blog.


MAUI in 2026 with Gerald Versluis

What's happening with MAUI today? Carl and Richard talk to Gerald Versluis about the latest version of MAUI - and what's coming next! Gerald talks about the release of .NET 10 and the new features that have come to MAUI, including improvements in quality, performance, and ease of use. The conversation also digs into adjacent technologies like Uno and Avalonia and how they are collaborating with the MAUI team to make development even easier!



Download audio: https://dts.podtrac.com/redirect.mp3/api.spreaker.com/download/episode/69540840/dotnetrocks_1986_maui_in_2026.mp3

How to Turn Your Favorite Tech Blogs into a Personal Podcast


These days it feels almost impossible to keep up with tech news. I step away for three days, and suddenly there is a new AI model, a new framework, and a new tool everyone says I must learn. Reading everything no longer scales, but I still want to stay informed.

So I decided to change the format instead of giving up. I took a few tech blogs I already enjoy reading, picked the best articles, converted them to audio using my own voice, and turned the result into a private podcast. Now I can stay up to date while walking, running, or driving.

In this tutorial, you’ll learn how to build a simplified version of that pipeline step by step.


What You Are Going to Build

You will build a Node.js script that does the following:

  • Fetches articles from RSS feeds.

  • Extracts clean, readable text from each article.

  • Filters out content you do not want to listen to.

  • Cleans the text so it sounds good when spoken.

  • Converts the text to natural-sounding audio using your own voice.

  • Uploads the audio to Cloudflare R2.

  • Generates a podcast RSS feed.

  • Runs automatically on a schedule.

At the end, you will have a real podcast feed you can subscribe to on your phone.

The generated podcast showing converted blog posts as episodes.

If you want to skip the tutorial and jump straight into using the finished tool, you can find the complete version and instructions on GitHub.

Prerequisites

To follow along, you need basic JavaScript knowledge.

You also need:

  • Node.js 22 or newer.

  • A place to store audio files (Cloudflare R2 in this tutorial).

  • A text-to-speech API (OrangeClone in this tutorial).

Project Overview

Before writing code, it helps to understand the idea clearly.

This project is a pipeline:

Fetch content -> Filter content -> Clean up content -> Convert to audio -> Repeat

Each step takes the output of the previous one. Keeping the flow linear makes the project easier to reason about, debug, and automate.

All code in this tutorial lives in a single file called index.js.

Getting Started

Create a new project folder and your main file.

mkdir podcast-pipeline
cd podcast-pipeline
touch index.js

Initialize the project and install dependencies.

npm init -y
npm install rss-parser @mozilla/readability jsdom node-fetch uuid xmlbuilder @aws-sdk/client-s3

Enable ESM so import syntax works in Node 22.

npm pkg set type=module

Here is what each dependency is used for:

  • rss-parser reads RSS feeds.

  • @mozilla/readability extracts readable article text.

  • jsdom provides a DOM for Readability.

  • node-fetch fetches remote content.

  • uuid generates unique filenames.

  • xmlbuilder creates the podcast RSS feed.

  • @aws-sdk/client-s3 uploads audio to Cloudflare R2.

How to Get the Content

The first decision is where your content comes from.

Avoid scraping websites directly. Scraped HTML is noisy and inconsistent. RSS feeds are structured and reliable. Most serious blogs provide one.

Open index.js and define your sources.

import Parser from "rss-parser";
import fetch from "node-fetch";
import { JSDOM } from "jsdom";
import { Readability } from "@mozilla/readability";

const parser = new Parser();

const NUMBER_OF_ARTICLES_TO_FETCH = 15;

const SOURCES = [
  "https://www.freecodecamp.org/news/rss/",
  "https://hnrss.org/frontpage",
];

Now fetch articles and extract readable content.

async function fetchArticles() {
  const articles = [];

  for (const source of SOURCES) {
    const feed = await parser.parseURL(source);

    for (const item of feed.items.slice(0, NUMBER_OF_ARTICLES_TO_FETCH)) {
      if (!item.link) continue;

      const response = await fetch(item.link);
      const html = await response.text();

      const dom = new JSDOM(html, { url: item.link });
      const reader = new Readability(dom.window.document);
      const content = reader.parse();

      if (!content) continue;

      articles.push({
        title: item.title,
        link: item.link,
        content: content.content,
        text: content.textContent,
      });
    }
  }

  return articles.slice(0, NUMBER_OF_ARTICLES_TO_FETCH);
}

This function:

  • Reads RSS feeds.

  • Downloads each article.

  • Extracts clean text using Readability.

  • Returns a list of articles ready for processing.

How to Filter the Content

Not every article deserves your attention. Start by filtering out topics you do not want to hear about.

const BLOCKED_KEYWORDS = ["crypto", "nft", "giveaway"];

function filterByKeywords(articles) {
  return articles.filter(
    (article) =>
      !BLOCKED_KEYWORDS.some((keyword) =>
        article.text.toLowerCase().includes(keyword)
      )
  );
}

Next, remove promotional content.

function removePromotionalContent(articles) {
  return articles.filter(
    (article) => !article.text.toLowerCase().includes("sponsored")
  );
}

Finally, remove articles that are too short.

function filterByWordCount(articles, minWords = 700) {
  return articles.filter(
    (article) => article.text.split(/\s+/).length >= minWords
  );
}

After these steps, you are left with articles you actually want to listen to.

How to Clean Up the Content

Raw article text still needs to be cleaned up to sound good when spoken. First, replace images with spoken placeholders.

function replaceImages(html) {
  return html.replace(/<img[^>]*alt="([^"]*)"[^>]*>/gi, (_, alt) => {
    return alt ? `[Image: ${alt}]` : `[Image omitted]`;
  });
}

Next, remove code blocks.

function replaceCodeBlocks(html) {
  return html.replace(
    /<pre><code>[\s\S]*?<\/code><\/pre>/gi,
    "[Code example omitted]"
  );
}

Strip URLs and replace them with spoken text.

function replaceUrls(text) {
  return text.replace(/https?:\/\/\S+/gi, "link removed");
}

Normalize common symbols.

function normalizeSymbols(text) {
  return text
    .replace(/&/g, "and")
    .replace(/%/g, "percent")
    .replace(/\$/g, "dollar");
}

Convert HTML to text so TTS does not read tags.

function stripHtml(html) {
  return html.replace(/<[^>]+>/g, " ");
}

Combine everything into one cleanup step.

function cleanArticle(article) {
  let cleaned = replaceImages(article.content);
  cleaned = replaceCodeBlocks(cleaned);
  cleaned = stripHtml(cleaned);
  cleaned = replaceUrls(cleaned);
  cleaned = normalizeSymbols(cleaned);

  return {
    ...article,
    cleanedText: cleaned,
  };
}

At this point, the text is ready for audio generation.

How to Convert Content to Audio

Browser speech APIs sound robotic. I wanted something that sounded human and familiar. After trying several tools, I settled on OrangeClone. It was the only option that actually sounded like me.

Create a free account and copy your API key from the dashboard.

OrangeClone dashboard with API key visible.

Record 10 to 15 seconds of clean audio and save it as SAMPLE_VOICE.wav in the project root. Then create a voice character (one-time setup).

import fs from "node:fs/promises";

const ORANGECLONE_API_KEY = process.env.ORANGECLONE_API_KEY;
const ORANGECLONE_BASE_URL =
  process.env.ORANGECLONE_BASE_URL || "https://orangeclone.com/api";

async function createVoiceCharacter({ name, avatarStyle, voiceSamplePath }) {
  const audioBuffer = await fs.readFile(voiceSamplePath);
  const audioBase64 = audioBuffer.toString("base64");

  const response = await fetch(
    `${ORANGECLONE_BASE_URL}/characters/create`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ORANGECLONE_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        name,
        avatarStyle,
        voiceSample: {
          format: "wav",
          data: audioBase64,
        },
      }),
    }
  );

  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(`Failed to create character: ${errorText}`);
  }

  const data = await response.json();

  return (
    data.data?.id ||
    data.data?.characterId ||
    data.id ||
    data.characterId
  );
}

Generate audio from text.

async function generateAudio(characterId, text) {
  const response = await fetch(`${ORANGECLONE_BASE_URL}/voices_clone`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ORANGECLONE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      characterId,
      text,
    }),
  });

  return response.json();
}

Wait for the job to complete.

async function waitForAudio(jobId) {
  while (true) {
    const response = await fetch(`${ORANGECLONE_BASE_URL}/voices/${jobId}`);
    const data = await response.json();

    if (data.status === "completed") {
      return data.audioUrl;
    }

    await new Promise((r) => setTimeout(r, 5000));
  }
}
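
As written, waitForAudio will poll forever if a job never reaches the completed state. Here is a slightly more defensive sketch that caps the number of polling attempts; it assumes the (hypothetical) API can also report a failed status, which is not shown in the original snippet.

async function waitForAudioSafely(jobId, maxAttempts = 60) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await fetch(`${ORANGECLONE_BASE_URL}/voices/${jobId}`);
    const data = await response.json();

    // "completed" comes from the original example; "failed" is an assumed status value.
    if (data.status === "completed") return data.audioUrl;
    if (data.status === "failed") throw new Error(`Audio job ${jobId} failed`);

    // Wait five seconds between polls, as in the original version.
    await new Promise((r) => setTimeout(r, 5000));
  }

  throw new Error(`Audio job ${jobId} did not finish after ${maxAttempts} attempts`);
}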

How to Upload the Audio to Cloudflare R2

OrangeClone returns an audio URL, but podcast apps need a stable, public file that will not expire.
That is where Cloudflare R2 comes in.

R2 is S3-compatible storage, which means we can upload files using the AWS SDK and serve them publicly for podcast apps.

How to Set Up Credentials

Create an R2 bucket in your Cloudflare dashboard and set the following environment variables:

  • R2_ACCOUNT_ID

  • R2_ACCESS_KEY_ID

  • R2_SECRET_ACCESS_KEY

  • R2_BUCKET_NAME

  • R2_PUBLIC_URL

These values allow the script to upload files and generate public URLs for them.
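
Optionally, a small guard near the top of index.js can fail fast when one of these variables is missing instead of surfacing later as a confusing upload error. This helper is not part of the original script; it only checks the names listed above.

// Names match the R2 environment variables described above.
const REQUIRED_ENV_VARS = [
  "R2_ACCOUNT_ID",
  "R2_ACCESS_KEY_ID",
  "R2_SECRET_ACCESS_KEY",
  "R2_BUCKET_NAME",
  "R2_PUBLIC_URL",
];

for (const name of REQUIRED_ENV_VARS) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}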

How to Initialize the R2 Client

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

This creates an S3-compatible client that connects directly to your Cloudflare R2 account instead of AWS.

How to Download the Audio

async function downloadAudio(audioUrl) {
  const response = await fetch(audioUrl);
  const buffer = await response.arrayBuffer();
  return Buffer.from(buffer);
}

OrangeClone gives us a URL, not a file.
This function downloads the audio and converts it into a Node.js buffer so it can be uploaded to R2.

How to Upload to R2

import { v4 as uuid } from "uuid";

async function uploadToR2(audioBuffer) {
  const fileName = `${uuid()}.mp3`;

  const command = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: fileName,
    Body: audioBuffer,
    ContentType: "audio/mpeg",
  });

  await r2.send(command);

  return `${process.env.R2_PUBLIC_URL}/${fileName}`;
}

This function uploads the audio buffer to R2 using a unique filename and returns a public URL that podcast apps can access.

Putting It Together

const audioUrl = await waitForAudio(jobId);
const audioBuffer = await downloadAudio(audioUrl);
const publicAudioUrl = await uploadToR2(audioBuffer);

At the end of this step, publicAudioUrl is the final audio file used in the podcast RSS feed.

How to Make the Podcast

With public audio URLs, you can now generate an RSS feed.

import xmlbuilder from "xmlbuilder";

function generatePodcastFeed(episodes) {
  const feed = xmlbuilder
    .create("rss", { version: "1.0" })
    .att("version", "2.0")
    .ele("channel");

  feed.ele("title", "My Tech Podcast");
  feed.ele("description", "Tech articles converted to audio");
  feed.ele("link", "https://your-site.com");

  episodes.forEach((ep) => {
    const item = feed.ele("item");
    item.ele("title", ep.title);
    item.ele("enclosure", {
      url: ep.audioUrl,
      type: "audio/mpeg",
    });
  });

  return feed.end({ pretty: true });
}
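
Many podcast apps are stricter than this minimal feed: the RSS 2.0 enclosure element expects a length attribute (the file size in bytes), and apps use guid and pubDate to track episodes. A possible extension of the item-building loop is sketched below; sizeInBytes and pubDate are hypothetical episode fields you would need to capture when uploading, not values produced by the pipeline above.

  episodes.forEach((ep) => {
    const item = feed.ele("item");
    item.ele("title", ep.title);
    // A stable guid lets podcast apps deduplicate episodes across feed refreshes.
    item.ele("guid", { isPermaLink: "false" }, ep.audioUrl);
    // pubDate is assumed to be recorded when the episode is generated.
    item.ele("pubDate", ep.pubDate || new Date().toUTCString());
    item.ele("enclosure", {
      url: ep.audioUrl,
      type: "audio/mpeg",
      // Byte size captured at upload time (assumed field); 0 if unknown.
      length: ep.sizeInBytes || 0,
    });
  });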

How to Automate the Pipeline

Automation in this project happens in two stages. First, the code itself must be able to process multiple articles in one run. Second, the script must run automatically on a schedule. We’ll start with the code-level automation.

Automating Inside the Code

Earlier, we fetched up to fifteen articles. Now we need to make sure every article that passes our filters goes through the full pipeline.

Add the following function near the bottom of index.js.

async function runPipeline() {
  const rawArticles = await fetchArticles();

  const filteredArticles = filterByWordCount(
    removePromotionalContent(filterByKeywords(rawArticles))
  );

  if (filteredArticles.length === 0) {
    console.log("No articles passed the filters");
    return [];
  }

  const characterId = await createVoiceCharacter({
    name: "My Voice",
    avatarStyle: "realistic",
    voiceSamplePath: "./SAMPLE_VOICE.wav",
  });

  const episodes = [];

  for (const article of filteredArticles) {
    console.log(`Processing: ${article.title}`);

    const cleaned = cleanArticle(article);

    const job = await generateAudio(characterId, cleaned.cleanedText);

    const audioUrl = await waitForAudio(job.id);
    const audioBuffer = await downloadAudio(audioUrl);
    const publicAudioUrl = await uploadToR2(audioBuffer);

    episodes.push({
      title: article.title,
      audioUrl: publicAudioUrl,
    });
  }

  return episodes;
}

This function does all the heavy lifting:

  • Fetches articles

  • Applies all filters

  • Creates the voice character once

  • Loops through every valid article

  • Converts each article into audio

  • Uploads the audio to Cloudflare R2

  • Collects podcast episode data

At this point, one script run can generate multiple podcast episodes.

Running the Pipeline and Generating the Feed

Now we need a single entry point that runs the pipeline and writes the podcast feed. Add this below the pipeline function (fs/promises is already imported from the voice-character step, so no new import is needed).

async function main() {
  const episodes = await runPipeline();

  if (episodes.length === 0) {
    console.log("No episodes generated");
    return;
  }

  const rss = generatePodcastFeed(episodes);

  await fs.mkdir("./public", { recursive: true });
  await fs.writeFile("./public/feed.xml", rss);

  console.log("Podcast feed generated at public/feed.xml");
}

main().catch(console.error);

When you run node index.js, this now:

  • Processes all selected articles

  • Creates multiple audio files

  • Generates a valid podcast RSS feed

This is the core automation.
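
One detail the simplified version leaves out is publishing the feed itself: public/feed.xml only exists on the machine (or CI runner) that ran the script, so a podcast app on your phone cannot reach it yet. A minimal option is to upload the feed next to the audio files, reusing the R2 client from earlier. The uploadFeedToR2 helper below is a hypothetical addition and assumes your bucket is publicly readable.

async function uploadFeedToR2(rssXml) {
  const command = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME,
    Key: "feed.xml",
    Body: rssXml,
    ContentType: "application/rss+xml",
  });

  await r2.send(command);

  // Subscribe to this URL in your podcast app.
  return `${process.env.R2_PUBLIC_URL}/feed.xml`;
}

You could call it from main() right after writing the local copy, for example: const feedUrl = await uploadFeedToR2(rss).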

Scheduling the Pipeline with GitHub Actions

The final step is to make this script run automatically. Create a GitHub Actions workflow file at .github/workflows/podcast.yml.

name: Podcast Pipeline

on:
  schedule:
    - cron: "0 6 * * *"

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm install
      - run: node index.js
        env:
          ORANGECLONE_API_KEY: ${{ secrets.ORANGECLONE_API_KEY }}
          R2_ACCOUNT_ID: ${{ secrets.R2_ACCOUNT_ID }}
          R2_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
          R2_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
          R2_BUCKET_NAME: ${{ secrets.R2_BUCKET_NAME }}
          R2_PUBLIC_URL: ${{ secrets.R2_PUBLIC_URL }}

This workflow runs the pipeline every morning at 6:00 AM UTC (GitHub Actions cron schedules are evaluated in UTC).

Each run:

  • Fetches new articles

  • Generates fresh audio

  • Updates the podcast feed

Once this is set up, your podcast updates itself without manual work.

Conclusion

This is a basic version of my full production pipeline, PostCast, but the core idea is the same.

You now know how to turn blogs into a personal podcast. Be mindful of copyright and only use content you are allowed to consume.

If you have questions, reach me on X at @sprucekhalifa. I write practical tech articles like this regularly.




Color‑Driven Code Navigation


Color has become one of the most important tools in my daily development workflow.

When I am moving across dozens of solutions, projects, and file types, a consistent color system gives me instant orientation. Visual Studio has made this easier than ever with a combination of tab coloring by project rules, file extension patterns, regular expressions, and both solution-level and user-level themes. By leaning into a couple of simple color palettes, I have built a visual map that helps me understand where I am in the codebase before I read a single file name or line of code.

Themes by Solution (and User)

When I am working with multiple solutions I use themes to associate colors with each solution. This gives me a top level color identity, a visual anchor that helps me recognize which codebase I am in before I read a single project name.

How do I do this? Visual Studio's Unified Settings feature allows me to save settings by user or by solution. Navigate to Tools > Options and select Environment > Visual Experience. To ensure this does not impact every instance, set Applies to Solution. This saves your preferences in the settings.VisualStudio.json file in the solution root directory.

Visual Studio "Visual Experience" settings showing a dropdown for color theme selection with options including Dark, Cool Breeze, Icy Mint, Moonlight Glow, and Blue Train, scoped to the DasBlog solution.

Once you get into the IDE you have several options for tab organization that are worth experimenting with for project, file extension, and regular expression.

Visual Studio dropdown menu for "Tab colorization method" with four options: None, Project, File Extension, and Regular Expression, used to customize tab appearance based on file context.

Tab Color – Project

Associating tabs with the project is especially helpful for solutions with more than half a dozen projects. The file name is not always enough context and hovering over a tab to figure out the root location is not always efficient. Color gives me a structural map, letting me see boundaries of responsibility within the solution at a glance.

Tab Color - File Extension

For single project solutions I typically group by file extension. Extension based colors create quick, low friction cues that tell me what kind of work I am about to do the moment I open a file.

Tab Color - Regular Expression

My favorite and most versatile option is the use of regular expressions. It lets me surface patterns in folders, names, extensions, or a combination of all three. One of my favorites is assigning a distinct tab color for controllers in ASP.NET projects.

Visual Studio settings panel for tab layout and color options, showing placement choices (Left, Top, Right), multi-row tab display toggles, and a highlighted option to configure regular expressions.

For example, the following patterns give controllers their own color while grouping the remaining files by extension:

Controller\.cs$
^.*\.cs$
^.*\.fs$
^.*\.vb$
^.*\.json$
^.*\.txt$

This creates a ColorByRegexConfig.txt file in the .vs folder. After defining this file, you can apply a specific color by right-clicking the tab and selecting Set Tab Color.

Visual Studio tab context menu for "BlogManager.cs" showing options like Save, Close, Move to New Window, Duplicate, Pin Tab, and Set Tab Color with color choices: None, Lavender, Gold, Cyan, Burgundy, and Green.

Color is not useful to everyone, but when it is available it can become a grounding part of your workflow. For me it is a small shift in how Visual Studio presents information, but it has a big impact on how quickly I can move through a codebase and find the work that matters most.

Visual Studio with overlapping windows showing a custom theme file in the background and a C# controller file in the foreground, with using directives for ASP.NET and DasBlog namespaces.

Day 22: Production Debugging: When It’s On Fire


Picture the worst debugging scenario. Something’s broken in production. You can’t add console.log and redeploy. You can’t attach a debugger. You can’t reproduce it locally because you don’t know what’s causing it.

All you have is what’s already there: logs, error messages, metrics if you were smart enough to add them.

This is where AI earns its keep. Not because it has magic access to your systems, but because it’s good at exactly the thing you need: analyzing information and finding patterns. You paste in logs, stack traces, error messages. AI spots the anomaly you’d miss after staring at the screen for an hour.

The Incident Response Prompt

When things are on fire, start here:

Production incident in progress. Help me triage.

Error message:
[paste the error]

What I'm seeing:
[describe symptoms - errors, timeouts, user reports]

What changed recently:
[recent deploys, config changes, traffic patterns]

System overview:
[relevant architecture - what talks to what]

Help me:
1. What's the most likely cause?
2. What should I check first?
3. What's the fastest way to confirm?
4. What's the quickest mitigation (even if not a fix)?

Keep it short. You’re in a hurry.

Stack Trace Analysis Under Pressure

When you have a stack trace:

Quick analysis of this production error:

[paste stack trace]

Tell me:
1. What's the actual error (one sentence)?
2. Is this our code or a library?
3. What file and line to look at first?
4. Most likely cause?

No elaborate explanations needed. You need direction, not education.

Log Pattern Recognition

When you have logs but can’t see the pattern:

Find the pattern in these production logs.

These are from the last hour. Something is wrong.

[paste logs]

What I'm looking for:
- When did the problem start?
- What's different before vs after?
- Any correlation with specific users, endpoints, or data?
- What's the error rate pattern?

AI can spot patterns in logs faster than scrolling through them manually.

The Correlation Prompt

When multiple things seem broken:

Multiple things are failing. Help me find the root cause.

Symptom 1: [describe]
Symptom 2: [describe]
Symptom 3: [describe]

Started at: [time]
Recent changes: [deploys, configs]

What single cause could explain all of these?
What should I check to confirm?

Often multiple symptoms have one root cause. AI helps find the connection.

Quick Mitigation Strategies

When you need to stop the bleeding:

Production is broken. I need mitigation options.

The problem: [describe]
The impact: [who's affected, how badly]
Rollback possible: [yes/no/partial]

Give me mitigation options ranked by:
1. Speed to implement
2. Risk of making things worse
3. Effectiveness

I need to stop the bleeding, then I can fix properly.

Sometimes the right answer is a workaround that buys you time.

Database Issue Diagnosis

Database problems need specific analysis:

Production database issue.

Symptoms:
[slow queries / connection errors / deadlocks / ???]

Current metrics:
- Connections: [number]
- Active queries: [number]
- Slow query log: [paste if relevant]

Recent database changes:
[migrations, new queries, data growth]

What's the likely cause?
What query would show me the problem?
What's the quick fix?

The Rollback Decision

When you’re not sure whether to rollback:

Deciding whether to rollback.

Current situation: [describe the problem]
Last deploy: [what changed, when]
Rollback would: [describe what gets reverted]
Rollback risks: [data migration issues, etc.]

Help me decide:
1. Is the problem likely caused by the deploy?
2. What would rollback fix?
3. What would rollback break?
4. Should I rollback or fix forward?

Post-Incident Analysis Prompt

After the fire is out:

We had an incident. Help me analyze it.

What happened:
[timeline of events]

Impact:
[duration, users affected, severity]

Root cause:
[what we found]

Help me create:
1. Timeline of events with gaps identified
2. What monitoring would have caught this sooner
3. What would have prevented this
4. Action items for follow-up

The “I Have No Idea” Prompt

Sometimes you’re truly stuck:

I'm stuck. Production is broken and I don't know why.

What I see: [symptoms]
What I've checked: [what you ruled out]
What I've tried: [attempted fixes]

I'm out of ideas. Help me:
1. What haven't I checked?
2. What assumptions might be wrong?
3. What's a completely different angle?

Admitting you’re stuck is the first step to getting unstuck.

Information Gathering Under Pressure

When you need to collect more data quickly:

I need more information to debug this. Generate the commands.

System: [Linux, AWS, Kubernetes, etc.]
Problem: [what's failing]

Generate commands to check:
1. System resources (CPU, memory, disk, network)
2. Process status
3. Recent logs
4. Database connections
5. Network connectivity
6. Service health

Just the commands, I'll run them.

The Customer Impact Prompt

When you need to communicate:

Help me write a status update for customers.

What happened: [technical description]
Current status: [ongoing / resolved / monitoring]
Impact: [what customers experienced]
ETA: [if known]

Write a customer-facing update that is:
- Honest without being alarming
- Non-technical but not condescending
- Includes what we're doing about it
- Includes when we'll update next

Communication matters during incidents. AI can help you write clearly when you’re stressed.

Building Your Incident Playbook

After enough incidents, you have patterns. Document them:

I want to create an incident response playbook.

Common incident types we see:
1. [type 1]
2. [type 2]
3. [type 3]

For each type, help me create:
- Symptoms to look for
- First things to check
- Common causes
- Mitigation options
- Resolution steps
- Verification steps

Then when an incident happens, you have a starting point.

What AI Can’t Do in Incidents

AI can’t:

  • Access your systems
  • See your actual metrics
  • Know your specific architecture
  • Make decisions for you
  • Tell you what changed recently
  • Know your organizational context

You still need to:

  • Gather the information
  • Run the commands
  • Make the judgment calls
  • Communicate with stakeholders
  • Execute the fixes

AI is your thinking partner, not your incident commander.

The Incident Checklist

Keep this handy:

□ Acknowledge the incident
□ Assess severity and impact
□ Notify stakeholders
□ Check recent changes
□ Gather logs and errors
□ Form hypothesis
□ Test hypothesis
□ Mitigate or fix
□ Verify resolution
□ Communicate resolution
□ Schedule post-mortem

Tomorrow

Incidents often come from edge cases you didn’t anticipate. Tomorrow I’ll show you how to use AI to find edge cases before they find you. Proactive problem-finding instead of reactive firefighting.


Try This Today

Think about your last production incident.

  1. What information did you have?
  2. What questions did you need answered?
  3. How would you have prompted AI?

Having prompts ready before incidents happen means you can move faster when they do.


Identify and Mitigate SQL Server High Memory Usage


Learn to tackle SQL Server high memory usage by identifying memory intensive queries and applying effective mitigation techniques.

The post Identify and Mitigate SQL Server High Memory Usage appeared first on MSSQLTips.com.
