Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Blazorise 1.8.10 - Table Scrolling & Globalization Improvements

Blazorise 1.8.10 delivers a fix for Table ScrollToRow visibility, adds support for Invariant Globalization, and resolves Signature Pad input issues on iOS Safari.

Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation


In this comprehensive guide, we'll explore how to integrate both Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints versus OpenAI's native API.

Table of Contents

  1. Introduction to Sora Models
  2. Azure AI Foundry vs. OpenAI API Structure
  3. API Integration: Request Body Parameters
  4. Video Generation Modes
  5. Cost Analysis per Generation
  6. Technical Limitations & Constraints
  7. Resolution & Duration Support
  8. Implementation Best Practices

Introduction to Sora Models

Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions:

  • Sora 1: The original model focused primarily on text-to-video generation with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds)
  • Sora 2: The enhanced version with native audio generation, multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview)

Azure AI Foundry vs. OpenAI API Structure

Key Architectural Differences

Sora 1 uses Azure's traditional deployment-based API structure:

  • Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/...
  • Parameters: Uses Azure-specific naming like n_seconds, n_variants, separate width/height fields
  • Job Management: Uses /jobs/{id} for status polling
  • Content Download: Uses /video/generations/{generation_id}/content/video

Sora 2 adapts OpenAI's v1 API format while still being hosted on Azure:

  • Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos
  • Parameters: Uses OpenAI-style naming like seconds (string), size (combined dimension string like "1280x720")
  • Job Management: Uses /videos/{video_id} for status polling
  • Content Download: Uses /videos/{video_id}/content

Why This Matters

This architectural difference requires conditional request formatting in your code:

const isSora2 = deployment.toLowerCase().includes('sora-2');

if (isSora2) {
  requestBody = {
    model: deployment,
    prompt,
    size: `${width}x${height}`,    // Combined format
    seconds: duration.toString(),  // String type
  };
} else {
  requestBody = {
    model: deployment,
    prompt,
    height,                         // Separate dimensions
    width,
    n_seconds: duration.toString(), // Azure naming
    n_variants: variants,
  };
}

 

API Integration: Request Body Parameters

Sora 1 API Parameters

Standard Text-to-Video Request:

{ "model": "sora-1", "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.", "height": "720", "width": "1280", "n_seconds": "12", "n_variants": "2" }

 

Parameter Details:

  • model (String, Required): Your Azure deployment name
  • prompt (String, Required): Natural language description of the video (max 32000 chars)
  • height (String, Required): Video height in pixels
  • width (String, Required): Video width in pixels
  • n_seconds (String, Required): Duration (1-20 seconds)
  • n_variants (String, Optional): Number of variations to generate (1-4, constrained by resolution)

Sora 2 API Parameters

Text-to-Video Request:

{ "model": "sora-2", "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot", "size": "1280x720", "seconds": "12" }

 

Image-to-Video Request (uses FormData):

const formData = new FormData();
formData.append('model', 'sora-2');
formData.append('prompt', 'Animate this image with gentle wind movement');
formData.append('size', '1280x720');
formData.append('seconds', '8');
formData.append('input_reference', imageFile); // JPEG/PNG/WebP

 

Video-to-Video Remix Request:

  • Endpoint: POST .../videos/{video_id}/remix
  • Body: Only { "prompt": "your new description" }
  • The original video's structure, motion, and framing are reused while applying the new prompt

Parameter Details:

  • model (String, Optional): Your deployment name
  • prompt (String, Required): Video description
  • size (String, Optional): Either "720x1280" or "1280x720" (defaults to "720x1280")
  • seconds (String, Optional): "4", "8", or "12" (defaults to "4")
  • input_reference (File, Optional): Reference image for image-to-video mode
  • remix_video_id (String, URL parameter): ID of video to remix

Video Generation Modes

1. Text-to-Video (Both Models)

The foundational mode where you provide a text prompt describing the desired video.

Implementation:

const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': apiKey,
  },
  body: JSON.stringify({
    model: deployment,
    prompt: "A train journey through mountains with dramatic lighting",
    size: "1280x720",
    seconds: "12",
  }),
});

Best Practices:

  • Include shot type (wide, close-up, aerial)
  • Describe subject, action, and environment
  • Specify lighting conditions (golden hour, dramatic, soft)
  • Add camera movement if desired (pans, tilts, tracking shots)
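
Putting that checklist together, here is a minimal, hypothetical prompt-builder sketch. The interface, field names, and helper are my own illustration, not part of the Sora API:

// Hypothetical helper that assembles a prompt from the elements listed above.
interface PromptParts {
  shotType: string;           // e.g. "Wide shot", "Close-up", "Aerial view"
  subjectAndAction: string;   // e.g. "a child flying a red kite"
  environment: string;        // e.g. "a grassy park"
  lighting?: string;          // e.g. "golden hour sunlight"
  cameraMovement?: string;    // e.g. "camera slowly pans upward"
}

function buildPrompt(parts: PromptParts): string {
  return [
    `${parts.shotType} of ${parts.subjectAndAction} in ${parts.environment}`,
    parts.lighting,
    parts.cameraMovement,
  ]
    .filter(Boolean)
    .join(', ');
}

// buildPrompt({ shotType: 'Wide shot', subjectAndAction: 'a child flying a red kite',
//   environment: 'a grassy park', lighting: 'golden hour sunlight',
//   cameraMovement: 'camera slowly pans upward' })
// → the kite example prompt used earlier in this post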

2. Image-to-Video (Sora 2 Only)

Generate a video anchored to or starting from a reference image.

Key Requirements:

  • Supported formats: JPEG, PNG, WebP
  • Image dimensions must exactly match the selected video resolution
  • Our implementation automatically resizes uploaded images to match

Implementation Detail:

// Resize image to match video dimensions
const targetWidth = parseInt(width);
const targetHeight = parseInt(height);
const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight);

// Send as multipart/form-data
formData.append('input_reference', resizedImage);

3. Video-to-Video Remix (Sora 2 Only)

Create variations of existing videos while preserving their structure and motion.

Use Cases:

  • Change weather conditions in the same scene
  • Modify time of day while keeping camera movement
  • Swap subjects while maintaining composition
  • Adjust artistic style or color grading

Endpoint Structure:

POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview

Implementation:

let requestEndpoint = endpoint;

if (isSora2 && remixVideoId) {
  const [baseUrl, queryParams] = endpoint.split('?');
  const root = baseUrl.replace(/\/videos$/, '');
  requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`;
}

 

Cost Analysis per Generation

Sora 1 Pricing Model

Base Rate: ~$0.05 per second per variant at 720p 

Resolution Scaling: Cost scales linearly with pixel count

Formula:

const basePrice = 0.05;
const basePixels = 1280 * 720; // Reference resolution
const currentPixels = width * height;
const resolutionMultiplier = currentPixels / basePixels;
const totalCost = basePrice * duration * variants * resolutionMultiplier;

Examples:

  • 720p (1280×720), 12 seconds, 1 variant: $0.60
  • 1080p (1920×1080), 12 seconds, 1 variant: $1.35
  • 720p, 12 seconds, 2 variants: $1.20

Sora 2 Pricing Model

Flat Rate: $0.10 per second per variant (no resolution scaling in public preview)

Formula:

const totalCost = 0.10 * duration * variants;

Examples:

  • 720p (1280×720), 4 seconds: $0.40
  • 720p (1280×720), 12 seconds: $1.20
  • 720p (720×1280), 8 seconds: $0.80

 

Note: Since Sora 2 currently only supports 720p in public preview, resolution doesn't affect cost; only duration matters.

Cost Comparison

Scenario       | Sora 1 (720p)  | Sora 2 (720p) | Winner
4s video       | $0.20          | $0.40         | Sora 1
12s video      | $0.60          | $1.20         | Sora 1
12s + audio    | N/A (no audio) | $1.20         | Sora 2 (unique)
Image-to-video | N/A            | $0.40-$1.20   | Sora 2 (unique)

 

Recommendation: Use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities.
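
To make that recommendation concrete, here is a small, hypothetical model-selection helper; the function and its inputs are my own sketch, not part of either API:

// Sketch: pick a Sora version based on the feature and cost trade-offs above.
interface GenerationNeeds {
  needsAudio: boolean;
  needsImageInput: boolean;  // image-to-video
  needsRemix: boolean;       // video-to-video remix
}

function chooseSoraModel(needs: GenerationNeeds): 'sora-1' | 'sora-2' {
  // Sora 2 is the only option for audio, image-to-video, or remix.
  if (needs.needsAudio || needs.needsImageInput || needs.needsRemix) {
    return 'sora-2';
  }
  // Otherwise Sora 1 is cheaper per second and offers more resolutions.
  return 'sora-1';
}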

Technical Limitations & Constraints

Sora 1 Limitations

Resolution Options:

  • 9 supported resolutions from 480×480 to 1920×1080
  • Includes square, portrait, and landscape formats
  • Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080

Duration:

  • Flexible: 1 to 20 seconds
  • Any integer value within range

Variants:

  • Depends on resolution:
    • 1080p: Variants disabled (n_variants must be 1)
    • 720p: Max 2 variants
    • Other resolutions: Max 4 variants
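
A small helper can enforce those limits before a Sora 1 job is submitted. This is my own sketch; the thresholds simply mirror the list above:

// Sketch: maximum n_variants allowed by Sora 1 for a given resolution.
function maxVariantsForSora1(width: number, height: number): number {
  if (Math.min(width, height) >= 1080) return 1; // 1080p: variants disabled
  if (Math.max(width, height) >= 1280) return 2; // 720p (1280x720 / 720x1280): max 2
  return 4;                                      // all other supported resolutions: max 4
}

// Example: maxVariantsForSora1(1280, 720) → 2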

Concurrent Jobs: Maximum 2 jobs running simultaneously

Job Expiration: Videos expire 24 hours after generation

Audio: No audio generation (silent videos only)

Sora 2 Limitations

Resolution Options (Public Preview):

  • Only 2 options: 720×1280 (portrait) or 1280×720 (landscape)
  • No square formats
  • No 1080p support in current preview

Duration:

  • Fixed options only: 4, 8, or 12 seconds
  • No custom durations
  • Defaults to 4 seconds if not specified

Variants:

  • Not prominently supported in current API documentation
  • Focus is on single high-quality generations with audio

Concurrent Jobs: Maximum 2 jobs (same as Sora 1)

Job Expiration: 24 hours (same as Sora 1)

Audio: Native audio generation included (dialogue, sound effects, ambience)
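
Before submitting a Sora 2 request, it can help to validate against these public-preview constraints. A minimal sketch (the helper name is mine; the allowed values come straight from the limits above):

// Sketch: validate size/seconds against Sora 2's public-preview constraints.
const SORA2_SIZES: readonly string[] = ['720x1280', '1280x720'];
const SORA2_DURATIONS: readonly string[] = ['4', '8', '12'];

function validateSora2Request(size: string, seconds: string): string[] {
  const problems: string[] = [];
  if (!SORA2_SIZES.includes(size)) {
    problems.push(`Unsupported size "${size}" (public preview allows ${SORA2_SIZES.join(' or ')}).`);
  }
  if (!SORA2_DURATIONS.includes(seconds)) {
    problems.push(`Unsupported duration "${seconds}" (allowed: ${SORA2_DURATIONS.join(', ')} seconds).`);
  }
  return problems; // an empty array means the request looks valid
}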

Shared Constraints

Concurrent Processing: Both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third.
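
One way to respect that limit on the client side is a tiny gate that never lets more than two jobs run at once. This is a sketch of my own, not an Azure-provided mechanism:

// Sketch: client-side gate so at most 2 video jobs run concurrently.
class JobGate {
  private active = 0;
  private waiting: Array<() => void> = [];

  constructor(private readonly maxConcurrent = 2) {}

  async run<T>(job: () => Promise<T>): Promise<T> {
    if (this.active >= this.maxConcurrent) {
      // Wait until a running job finishes and wakes us up.
      await new Promise<void>(resolve => this.waiting.push(resolve));
    }
    this.active++;
    try {
      return await job();
    } finally {
      this.active--;
      this.waiting.shift()?.(); // let the next queued job start
    }
  }
}

// const gate = new JobGate(2);
// await gate.run(() => generateVideo(requestBody));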

Job Lifecycle:

queued → preprocessing → processing/running → completed

Download Window: Videos are available for 24 hours after completion. After expiration, you must regenerate the video.
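
Because of that window, it is worth downloading and persisting the file as soon as the job completes. A hedged sketch; persistVideo is a placeholder for your own storage layer, not a real API:

// Sketch: grab the finished video right away so the 24-hour expiration doesn't bite.
async function downloadVideo(videoUrl: string, apiKey: string): Promise<Blob> {
  const response = await fetch(videoUrl, { headers: { 'api-key': apiKey } });
  if (!response.ok) {
    throw new Error(`Download failed: ${response.status} ${response.statusText}`);
  }
  const videoBlob = await response.blob();

  // await persistVideo(videoBlob); // e.g. upload to blob storage or save server-side

  return videoBlob;
}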

Generation Time:

  • Typical: 1-5 minutes depending on resolution, duration, and API load
  • Can occasionally take longer during high demand

Resolution & Duration Support Matrix

Sora 1 Support Matrix

Resolution | Aspect Ratio | Max Variants | Duration Range | Use Case
480×480    | Square       | 4            | 1-20s          | Social thumbnails
480×854    | Portrait     | 4            | 1-20s          | Mobile stories
854×480    | Landscape    | 4            | 1-20s          | Quick previews
720×720    | Square       | 4            | 1-20s          | Instagram posts
720×1280   | Portrait     | 2            | 1-20s          | TikTok/Reels
1280×720   | Landscape    | 2            | 1-20s          | YouTube shorts
1080×1080  | Square       | 1            | 1-20s          | Premium social
1080×1920  | Portrait     | 1            | 1-20s          | Premium vertical
1920×1080  | Landscape    | 1            | 1-20s          | Full HD content

Sora 2 Support Matrix

Resolution | Aspect Ratio | Duration Options | Audio  | Generation Modes
720×1280   | Portrait     | 4s, 8s, 12s      | ✅ Yes | Text, Image, Video Remix
1280×720   | Landscape    | 4s, 8s, 12s      | ✅ Yes | Text, Image, Video Remix

Note: Sora 2's limited resolution options in public preview are expected to expand in future releases.

Implementation Best Practices

1. Job Status Polling Strategy

Implement adaptive backoff to avoid overwhelming the API:

const maxAttempts = 180;  // 15 minutes max
let attempts = 0;
const baseDelayMs = 3000; // Start with 3 seconds

while (attempts < maxAttempts) {
  const response = await fetch(statusUrl, {
    headers: { 'api-key': apiKey },
  });

  if (response.status === 404) {
    // Job not ready yet, wait longer
    const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
    await new Promise(r => setTimeout(r, delayMs));
    attempts++;
    continue;
  }

  const job = await response.json();

  // Check completion (different status values for Sora 1 vs 2)
  const isCompleted = isSora2
    ? job.status === 'completed'
    : job.status === 'succeeded';

  if (isCompleted) break;

  // Adaptive backoff
  const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
  await new Promise(r => setTimeout(r, delayMs));
  attempts++;
}

2. Handling Different Response Structures

Sora 1 Video Download:

const generations = Array.isArray(job.generations) ? job.generations : [];
const genId = generations[0]?.id;
const videoUrl = `${root}/${genId}/content/video`;

Sora 2 Video Download:

const videoUrl = `${root}/videos/${jobId}/content`;

3. Error Handling

try {
  const response = await fetch(endpoint, fetchOptions);

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Video generation failed: ${error}`);
  }

  // ... handle successful response
} catch (error) {
  console.error('[VideoGen] Error:', error);
  // Implement retry logic or user notification
}

 

4. Image Preprocessing for Image-to-Video

Always resize images to match the target video resolution:

async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    img.onload = () => {
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);

      canvas.toBlob((blob) => {
        if (blob) {
          const resizedFile = new File([blob], file.name, { type: file.type });
          resolve(resizedFile);
        } else {
          reject(new Error('Failed to create resized image blob'));
        }
      }, file.type);
    };

    img.onerror = () => reject(new Error('Failed to load image'));
    img.src = URL.createObjectURL(file);
  });
}

5. Cost Tracking

Implement cost estimation before generation and tracking after:

// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });

6. Progressive User Feedback

Provide detailed status updates during the generation process:

const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};

onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);

Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and capabilities. Key takeaways:

  1. Choose the right model: Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
  2. Handle API differences: Implement conditional logic for parameter formatting and status polling based on model version
  3. Respect limitations: Plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
  4. Optimize costs: Calculate estimates upfront and track actual usage for better budget management
  5. Provide great UX: Implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and limitations are lifted (especially resolution options), we'll see even more creative applications emerge.


This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.

 


Azure CLI Windows MSI Upgrade Issue: Root Cause, Mitigation, and Performance Improvements


Azure CLI on Windows MSI Upgrade Issue

Summary

About six months ago, some Windows users experienced an Azure CLI crash after upgrading via the MSI installer from Azure CLI 2.76.0 (or earlier) to 2.77.0 (or later). The failure occurred immediately on startup with: “ImportError: DLL load failed while importing win32file: The specified module could not be found.” This post explains what happened, why upgrades were affected (while clean installs typically worked), and what you can do to recover.

Who is affected?

You are likely affected if:

  • You installed Azure CLI using the Windows MSI installer.
  • You upgraded from Azure CLI 2.76.0 (or earlier) to 2.77.0 (or later) without fully uninstalling first.
  • After the upgrade, any az command fails with the win32file ImportError on startup.

Symptoms

Typical error output (Azure CLI/Azure PowerShell):

ImportError: DLL load failed while importing win32file: The specified module could not be found.

Immediate recovery

  1. Upgrade to the latest version (2.83.0).
  2. If you want to install another version of Azure CLI, perform a clean reinstall: uninstall Microsoft Azure CLI from Windows Settings → Apps, delete any remaining install folder (such as the CLI2 directory), reinstall the desired Azure CLI using MSI or winget, and then verify the installation with az --version.

Root cause analysis

During an affected MSI upgrade, the Azure CLI installation directory ended up missing a set of native Python extension files (.pyd files) required by the Windows integration layer. MSI logging showed components being blocked with messages indicating MSI believed the existing (older) key file was “newer” than the incoming one.

The root cause was an interaction between Windows Installer file versioning rules and a third‑party dependency packaging change. Azure CLI 2.76.0/2.77.0 consumed pywin32 311, whose .pyd binaries were missing Windows version resource metadata. When upgrading from a previous Azure CLI build that contained version-stamped pywin32 binaries (e.g., pywin32 306), MSI could treat the older versioned files as higher priority than the incoming non-versioned files. As a result, MSI could remove the old files during upgrade but skip installing the new ones, leaving the install incomplete.

Version mapping observed

Azure CLI version | Python | pywin32 | pywin32 .pyd version resource
≤ 2.75.0          | 3.12   | 306     | Present (e.g., 3.12.306.0)
2.76.0            | 3.12   | 311     | Missing / empty
2.77.0+           | 3.13   | 311     | Missing / empty

If you need to collect MSI logs (for support)

Run the installer with verbose logging (example):

msiexec /i "azure-cli-2.77.0.msi" /l*vx "C:\temp\azure-cli-install.log"


Windows MSI Upgrade Performance Optimization

The MSI upgrade process for Azure CLI on Windows has been significantly improved.

Previously, Windows Installer performed per‑file version comparisons—particularly expensive for Python runtime files—which made upgrades slow and sometimes inconsistent.

The new logic skips that comparison and performs an overwrite installation: upgrades now use a streamlined clean-install process, resulting in faster and more reliable MSI upgrades.

Performance Improvements

Scenario      | Before                                       | After       | Improvement
Fresh Install | Baseline                                     | ~5% faster  | 5% faster
Upgrade       | Long due to file-by-file version comparison  | ~23% faster | 23% faster

This update makes upgrades noticeably faster and more reliable by removing old files first and skipping slow per‑file version checks.

For more details, please refer to: [Packaging] Optimize MSI upgrade performance by simplifying file replacement logic by wangzelin007 · Pull Request #32678 · Azure/azure-cli

 

We encourage you to upgrade to the latest version of the Azure CLI. This will not only resolve the issue but also improve installation performance. Here is our release note.
If you encounter any problems, please feel free to report them on Azure CLI GitHub.


Inside CodeMash 2026: A Geek Family Reunion (Powered by Microsoft MVPs)


Every January, developers from across the world head to CodeMash at the Kalahari Resort in Ohio, US - equal parts deep technical learning and community warmth. It’s the kind of event where you park your car, stay onsite for days, and the hallway conversations become the main track. For me, it feels like a geek family reunion: you don’t just meet people - you really get to know and connect with them.

That community is also why Microsoft MVPs travel so far to be here - from Washington and Arizona to Texas, Ontario (Canada), Michigan, Minnesota, Ohio, and beyond. In 2026, MVPs delivered 30 sessions and workshops, and many also volunteered behind the scenes - helping with speaker selection, conference operations, and efforts that keep CodeMash welcoming.

Here are a few moments and reflections that captured CodeMash this year - from snowy travel stories to sessions that sparked “I’m trying this on Monday” energy, to the offstage conversations that turn conference acquaintances into real friends.

Microsoft MVP meet-up at the CodeMash 2026 conference

MVP impact: on stage, behind the scenes, and in the hallways

CodeMash is packed with great talks, but what makes it stick is how the community shows up for each other. This year, Microsoft MVPs covered cloud, AI, security, data, and developer tools - and also gave back through volunteer leadership, from helping with speaker/session selection to supporting safety and inclusion. That mix of strong content plus shared ownership is what turns a conference into a community.

CodeMash also brings families along for the ride. KidzMash is a free family track where kids learn by building - plus hands-on time in the Maker Space (soldering, CAD/3D design, electronics labs, and more). It’s a reminder that the curiosity that fuels great engineering can start early.

It’s hard to explain CodeMash without talking about the “everything else.” MVP Sam Basu put it best: “Codemash has always been a special place to see old friends and make new ones.” And MVP Robert Fornal captured the returning-attendee feeling in one line: “The energy of the conference is amazing… there’s always something new to see and learn.”

MVP Sam Basu presenting "Design to Code for .NET Developers"

Voices from CodeMash: the stories behind the sessions

Ask anyone why they came to CodeMash, and you’ll hear the same themes - community, learning, and that unique “park once and stay awhile” rhythm. Of course, first you have to get there - which in January often means snow.
MVP Joseph Guadagno flew in from Arizona and shared, “It’s been a really long time since I’ve driven in the snow.” He still made the trip for the people—especially the Midwest community he doesn’t see as often: “The event is definitely worth it.”

MVP Mike Benkovich had an appropriately wintery travel tale: “Going thru Cleveland to get to Sandusky was an adventure… we got the full effects of drifting snow, but we made it.” His verdict after arriving: “Highly recommended!”

 

MVP James Petty presenting at his workshop, "Stop Clicking, Start Scripting: PowerShell for Real People"

MVP James Petty told me CodeMash has turned into a tradition: “Honestly? CodeMash has become our family vacation tradition.” And yes, they learned the hard way not to ignore the forecast: “We… tried to leave during Thursday’s snowstorm - not our smartest move.”

Why MVPs Come to CodeMash

CodeMash is built for builders - and the MVP sessions leaned hard into practical, hands-on takeaways - from AI and cloud to security, data, and developer productivity. The best talks weren’t just interesting; they were the kind that send you home with a plan (and a backlog) for next week.

 

MVP Jake Hildreth presenting, “PKI Unlocked: A No-Math Primer for Builders”

MVP Jake Hildreth shared he wasn’t sure his topic would fit a dev-heavy conference—until it did: “I always felt like my special interest… was a little too niche.” After a strong reaction to an earlier talk, he pitched it to CodeMash and got the response every speaker wants: “YES, PLEASE.”

After his workshop, James Petty knew it landed when attendees started planning: “They already had ‘to-do’ lists ready for when they got back to work - they weren't just learning, they were ready to apply it immediately.”

MVP Brian McKeiver had a simple “yep, this is why I came” moment: Thursday breakfast, looking out at a room full of people comparing notes, laughing, and reconnecting. “It made me smile,” he said.

MVP Ron Dagdag presenting, "AI in the Browser: Practical ML with JavaScript"

Joseph Guadagno noted, “I ran into a former attendee that let me know she is speaking at this CodeMash because I inspired her, based on my conversations with her at CodeMash and TechBash.”

MVP Jonathan “J” Tower summed up why he keeps showing up in person: “I value being with the community in person. With so much work now done remotely and so much content available online, being face to face matters more than ever. I always enjoy seeing audiences understanding something new in a session, but the moments that I go for are the hallway and meal conversations, where people talk openly about what they are building, learning, or struggling with. Those connections are hard to replicate anywhere else but in-person."

 

MVP Jonathan “J” Tower presenting, "Layers Are for Lasagna: Embracing Vertical Slice Architecture"

“I embraced the fun and chaos by dressing up…”

CodeMash is one of the rare conferences where serious technical learning and playful experimentation happily coexist - and KidzMash makes that impossible to miss. MVP Matt Eland described it perfectly: “You don’t do any rehearsals because kids are chaotic and wonderful.” His advice for presenters is basically to show up prepared… and then roll with it: “You just embrace the chaos.”

MVP Matt Eland & MVP Alumni Sarah “Sadukie” Dutkiewicz presenting at KidzMash, "Think like a Computer"

Where to start

For a lot of developers, the hardest part of community involvement isn’t motivation - it’s knowing where to start. I asked a few MVPs what they’d tell someone who wants to begin contributing, and their answers all point to the same idea: start small, start now, and don’t wait for permission.

Joseph Guadagno summed it up simply as, “Just volunteer, anything helps. Whether it's helping at the registration desk, telling people about the event, or speaking.”

MVP Chris Woodruff presenting his workshop, "Modern UX Without JavaScript Madness"

MVP Chris Woodruff said, “Don’t wait for someone to invite you into the developer community.” He recommends starting with meetups, conferences, and online forums - places where you can learn in public and meet people who share your curiosity.

MVP Hasan Savran mentioned, "Stop stressing and just give it a shot. You won't always be right, and that's okay. Speak up and share what you are thinking. It doesn't matter who you are; there will always be people who judge you and your knowledge. You are here to express yourself and tell a story from your viewpoint through your words or code. Learn to agree to disagree and move on."

Sam Basu shared, “It’s good to start small… everyone has to find a niche to be really good at something.”

Jake Hildreth added “Build in public!” He shared that once he pushed past the “is this good enough?” fear, publishing his work made it easier for others to help—and for him to see his own growth over time.

MVP Stephen Bilogan kept it practical: “There is no one framework to rule them all.” His nudge to attendees: try things, see what fits, and stay curious.

Brian McKeiver urged, “Don’t wait… just be fearless.” His take: share what you’re learning from real work - what went wrong and what went right - because your perspective helps more people than you think.

MVP Brian McKeiver presenting, "Tuning Azure App Services for Peak Performance"

The best moments happen offstage

The longer you attend, the more CodeMash becomes a timeline of friendships and memories. And if you ask people for their favorite moment, it’s rarely just one session - it’s the conversations between everything else.

Some of the best CodeMash memories don’t come from a slide deck - they come from the in-between moments: a conversation you didn’t expect, a table you sat at on a whim, a familiar face in a new hallway. So I asked a simple question - one that always reveals what people value most about this place: "What do you love most about CodeMash and its community?”

MVP Samuel Gomez said it was “a tie between meeting fellow MVPs at dinner and being able to complete the ‘quadfecta’ of CodeMash roles this year (attendee, speaker, organizer and volunteer).” He added, “It was amazing to experience what goes behind the scenes… and all the hard work and love that goes into making an event like this happen.”

MVP Deepthi Goguri presenting, “SQL Server Query Store Masterclass: From Troubleshooting to Optimization”

MVP Deepthi Goguri shared her favorite moment was meeting other MVPs: “These are the type of moments which makes me feel home.”

MVP Jim Bennett keeps it simple: “Dinner one evening, I just grabbed a seat at a table of folks I’d never met before and had a very enjoyable few hours chatting with them all. Having a welcoming community makes all the difference at an event, and it’s so nice to meet new people, hear their stories, and geek out on technical concepts.” For him, those casual table conversations often end up being the most valuable part of the conference.

MVP Jim Bennett presenting his workshop, "Do you want to build a copilot? Star Wars edition!"

Closing Thoughts

CodeMash 2026 was a good reminder that while the tech changes fast, the best parts of our industry don’t: curiosity, collaboration, and the joy of building things with (and for) other people. The MVPs brought strong technical content - but also the stories and generosity that make conferences feel human. And in a world where tech can sometimes feel overwhelming, those human moments are what keep us connected.

If you were at CodeMash, I hope you left with a few ideas you can use right away - and a few new friends. If you missed it, put it on your list for next year. Come for the sessions, stay for the conversations, and if you’re ready to go deeper: submit a talk, volunteer, or help someone else take their first step. Learn more (including the family-friendly KidzMash track) at codemash.org.




Getting Started using Purview with Erica Toelle


Ready to get started with Purview? Richard chats with Erica Toelle about the first steps you can take to harness the power of Purview in your organization. Erica explains that Purview is an umbrella product that covers several infosec technologies, including information rights management, data loss prevention, structured data governance, and more. When preparing for M365 Copilot, you want to start tagging sensitive information in your organization, and Purview can help by using LLMs to identify potentially sensitive content. You can also monitor how data is used, the types of prompts sent to M365 Copilot, and more. This can help you bootstrap M365 Copilot by using Purview to see which data Copilot uses, and then tune the access rules for that data. "Getting your data estate in order" is not a destination; it's a journey, and Purview can give you a map!


Recorded January 7, 2026





Download audio: https://cdn.simplecast.com/audio/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/dc69cbde-03b4-4f6a-b595-ffe1cb8fe16a/audio/64dc578f-ff76-48a0-89fd-9606e65fb604/default_tc.mp3?aid=rss_feed&feed=cRTTfxcT

Launching The Rural Guaranteed Minimum Income Initiative


It's been a year since I invited Americans to join us in a pledge to Share the American Dream:

1. Support organizations you feel are effectively helping those most in need across America right now.

2. Within the next five years, also contribute public dedications of time or funds towards longer term efforts to keep the American Dream fair and attainable for all our children.

Stay gold, America. 💛

Personally, I’ve become a big believer in one particular quote, especially considering the specific context in which it was delivered:

“From those to whom much is given, much is expected.” – Mary Gates

Those 10 words had a profound effect on the world. Indeed, we were given much, so we, as a family, will choose to give much. On a recent podcast, my partner Betsy said it better than I could have:

“Well, we have everything we need!” That’s how I’ve always phrased it to [our children]. That, I think, extends [to our philanthropy]. We have everything we need; how do we make sure everybody has what they need? Because that’s the basic thing — Do you have a comfortable place to live? Do you have enough to eat? Do you have healthcare? If you have the basics, you’re in a good place in life, and everybody should have that opportunity.

It’s a question I’ve asked myself a lot since 2021. When, exactly, is enough?


We do have everything we need. Why can’t everyone else have the basic things they need, too?

Beyond the $1M to eight nonprofit charities we listed in January 2025, we saw immediate needs becoming so urgent that we quickly added an additional $13M in donations within a few months, for a total of $21M.

But you can’t take a completely short term view and fight each individual fire reactively, as it comes. You'll never stop firefighting. We also have to do fire abatement and deal with the root causes, improving conditions in this country such that there aren’t so many fires. Thus, for the second and much longer term part, in addition to the $21M already donated, we pledged $50M, half of our remaining wealth, to address the underlying, systemic issues.

I proposed some speculative ideas in “Stay Gold,” and this one ended up being the closest:

We could found a new organization loosely based on the original RAND Corporation, but modernized like Lever for Change. We can empower the best and brightest to determine a realistic, achievable path toward preserving the American Dream for everyone, working within the current system or outside it.

By March 2025, we had consensus: The Road Not Taken is Guaranteed Minimum Income.

The Road Not Taken is Guaranteed Minimum Income
The dream is incomplete until we share it with our fellow Americans.

Guaranteed Minimum Income (GMI) is an improved version of the older concept of Universal Basic Income (UBI): rather than indiscriminately giving money to “everyone,” GMI directs the money towards those who most need it, particularly families experiencing generational poverty.

📢 Please note that after this post, Coding Horror will revert to normal nerdy blog posts, and all future GMI content will be at a dedicated site linked below.

Why did we decide on GMI?

  • Almost every existing UBI/GMI study result we could find indicates that cash generally works. For example, OpenResearch data showed the greatest increase in spending among study participants was in meeting basic needs, with the greatest percent increase in support to others (26%), along with huge decreases in reported alcohol use (20% less) and days using non-prescribed painkillers (53% fewer). Why wouldn’t we continue to build something that has generally been shown to work, study after study, time and time again?
  • This is survival money, cash for folks so they can put food on the table, get a roof over their heads, have a functioning vehicle to go to work, and decide how to meet their most basic, critical needs. It pains me to say this, but we live in a world where many people simply do not often experience open generosity, or regular income. When you show someone what it feels like to just not be hungry for a little while, their view of the world changes. They feel trusted. They see possibility.

RISE Recipient Stacy D. | WV

I moved here with my family. And I have no family up here other than who I brought with me. So, how most people can be like, “Hey, I’m having a hard time. Got $20 or a pack of diapers.” I have nobody up here to do that. So, if me and my husband don't figure it out, it don't get figured out.

So, I’ve got five kids that live with me... I was working full-time until I got pregnant. I prayed for this baby for 10 years. So, as soon as I got pregnant, I stopped working. I was high risk.

The day I got cleared to go back to work, my vehicle broke down. It was the only vehicle that we had that carried all the kids. So, I’ve been four months without my car. So this is also going to get my vehicle back on the road.

You don’t know how hard it is to ask people, hey, can I get a ride to the grocery store? Or, hey, my baby has two month shots. I had to borrow a vehicle. This is gonna... it’s going to do a lot!

  • Unlike many other social programs, GMI studies require initiative. These are opt-in studies: you have to sign up, demonstrate that you meet the income criteria and are a resident of the county, and, because spots are limited, be randomly selected from eligible applicants. We emphasize that this is not passive; it is active teamwork to improve the GMI program with your family, your community, and everyone else we can reach together over the next few decades.

Building On What Works

  • The massive OpenResearch UBI study, the largest and most detailed guaranteed income study ever conducted in the USA, was designed to be a template for future, more refined studies, and that’s exactly what we’re doing. We will also use what we learn in this group of three counties (as in software, the rule of three) to iterate, adapt, and improve our GMI study playbook with every new group of three counties, generating a playbook anyone can use.
  • We strive to do repeatable, replicable science in every study, and all our data will be open and freely shared with the world. We’re contributing to and partially funding a global, open data repository for basic income pilots all around the world, UBIdata. It’s the same reason we made Stack Overflow content part of the creative commons, and Discourse fully open source.
  • GMI is seed funding for families, investing in our fellow Americans, those who need it the most. A large body of research shows that dollars targeted to lower-income families are more likely to be spent quickly and reduce hardship, and can improve outcomes for children. “Trickle up” economics works, whereas "trickle down" tax cuts for the rich increase income inequality and provide no significant effect on growth or jobs.
  • This is the newer trust based model of philanthropy, much closer to venture capital funding. We primarily empower, fund, and build up existing organizations like GiveDirectly and OpenResearch, forming a collaborative team to leverage all their existing work and grow their organizations in whatever way they see fit, because they have the most experience in the GMI space.

The Rural Guaranteed Minimum Income Initiative

I like to go that way, really fast, so we are already well underway with the Rural Guaranteed Minimum Income Initiative.

We focus on rural counties, where dollars go a lot further, poverty is more prevalent, and populations are smaller for tighter studies. Rural counties are also greatly overlooked in this country, in my opinion, yet they have so much incredible untapped talent. I know because that’s exactly where my parents and I are from.


We’ve funded 3 county level programs (Mercer, WV; Beaufort, NC; Warren, MS) that are already underway, where we will help lift thousands of people out of poverty for a period of 16 months, while sharing data and results with the world. That’s a good start.


But I think we can do considerably more. With your help, we hope to reach all 50 states over time.

In “Stay Gold,” I noted that all of American history contains the path of love, and the path of hate. But the path of love is the only survivable path. It’s so much harder, and it’s going to be a lifetime of work. But what else could I possibly buy with our money that would be worth anything close to this, for all of us?

What You Can Do

Everyone is invited to help. Share results, learn the history of GMI (it’s actually fascinating, I swear), talk to your representatives and generally spread the word. A surprising number of people have never even heard the terms UBI or GMI, and sometimes have misconceptions about what they are and how they work.


If you, or someone you know, is “those to whom much is given,” and in a position to sponsor county-scale work, please join us in bringing a GMI study to a new rural county and reach all 50 states. Let’s continue to do science and help lift thousands of people out of poverty while generating open data for the world.


This is my third and final startup. Rather than an “Atwood Foundation,” all we want to do is advance the concept of direct cash transfer. Simply giving money to those most in need is perhaps the most radical act of love we can take on... and all the data I can find shows us that it works – helping people afford basic needs, keep stable housing, and handle unexpected expenses.

Dreams, like happiness, are only real when shared. So let’s do that together.

staygold.us 💛
