
Your Test Data Environment: Build vs Buy – a conversation we need to have


After three decades of working with databases, one thing I’ve seen over and over is this: we don’t treat our development and test environments with the same respect we do our production systems.

Not because people don’t care. Far from it. It’s usually because teams are under pressure, everyone’s juggling multiple priorities, and the quickest path forward often wins the day.

Those lower environments usually contain copies of real customer data, though, and that makes them real targets for malicious actors.

Developers need realistic data to build reliable software. Testers need it to validate accurate logic and acceptable performance. And the business needs to stay compliant and avoid risk. That combination creates tension, and most organizations resolve it with what feels like the most straightforward fix.

The typical workarounds are anonymizing production data or generating synthetic data for test and development environments. Many organizations jump straight to tasking someone on the team with building this process. It typically isn’t that person’s primary job, so a quick DIY script gets chosen as the quickest, easiest option, without considering the risk and governance issues it creates.

Why DIY scripts seem easy… until they aren’t

When someone gets asked to “sort out the masking,” they’re usually trying to make progress quickly so they can get back to their real job. That leads to decisions like:

  • Shuffling data around, masking parts of strings, setting all rows to the same value
  • Reusing old logic in their masking scripts
  • Copying and pasting scripts and forgetting to update all of them
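
For illustration, a quick-fix masking script often looks something like this. This is a hypothetical sketch of the pattern described above; the table, column names, and connection details are all invented:

// Hypothetical one-off masking script (illustrative names throughout).
import { Client } from 'pg';

async function maskCustomers(): Promise<void> {
  const client = new Client({ connectionString: process.env.DEV_DB_URL });
  await client.connect();
  try {
    // Set every row to the same value...
    await client.query(`UPDATE customers SET email = 'test@example.com'`);
    // ...mask part of a string...
    await client.query(`UPDATE customers SET phone = 'XXX-XXX-' || right(phone, 4)`);
    // ...and any columns added after this script was written stay unmasked, silently.
  } finally {
    await client.end();
  }
}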

The challenge is that databases keep evolving – new columns appear, new data types get introduced, the business adds new use cases – and unless masking scripts evolve at the same pace, they drift out of date quickly and quietly.

And then maybe the person who wrote them moves to another team. Or leaves the organization. Suddenly no one knows why something was masked a certain way or what the original intention was, and that’s when things get risky.

The outcome?

You end up with scripts that don’t account for new schema changes – or worse, scripts that break, never get prioritized for fixing, and quietly stop being run. At that point, you’re effectively restoring production data into development with no protection at all, which dramatically increases the risk of regulatory penalties or, worse, reputational damage from a data breach.

Synthetic data has its own limits

Some teams avoid masking altogether and swing to the other extreme: synthetic data.
There’s nothing wrong with synthetic data – when it’s done well. But most of the time, teams generate something small, simple, and not very realistic.

Developers then test with data that doesn’t reflect real-world patterns. Edge cases disappear. Performance bottlenecks stay hidden.

The problem isn’t the intent. It’s that generating high-quality synthetic data is a craft, and most DBAs and developers don’t have the time to master it on top of everything else they’re responsible for. Both DIY approaches solve the problem for today – but rarely for tomorrow.

What a good, sustainable approach to test data management (TDM) actually looks like

If we want to protect data and help teams move faster, the solution must be easy to use, flexible enough to handle a variety of situations, and simple enough for staff to understand and, more importantly, maintain across time and personnel changes.

The right approach to test data should be:

1. Simple to pick up and simple to maintain

Teams change and documentation gets lost, which is why this is critical: comments in code are not enough. You need documentation, or automated processes, that enable others to extend the solution and apply it to new situations.

2. Smart enough to find sensitive data for you

We store sensitive data in more places than ever. No one can manually keep track anymore. Automatic detection – with the flexibility to override – is essential.

3. Capable of producing realistic, familiar data

Developers do their best work with data that “feels” real. Random strings and nonsense values slow everyone down.

4. Able to subset production safely

You don’t always need the entire production database. Often, a well-formed slice is enough – and it’s much faster to work with.

5. Built for modern databases

A solution also needs to work at scale, handling the complexities of modern databases: varied data types, large volumes of data, referential integrity (declared or not), and datasets in a wide variety of languages.

These are the main considerations for any solution that protects test data.

The Build vs. Buy decision

I’ll be the first to admit: you can build your own masking solution. Plenty of teams do. But much like a home-grown monitoring system, it demands a substantial ongoing investment of time and labor. Building it is one thing; maintaining it is the real challenge.

Our databases are always changing, and we often add new database platforms. Ensuring a masking solution works well across time and the entire database estate takes a great deal of effort.

That’s why a lot of organizations are moving toward paid-for, vendor-supported solutions. Not because they couldn’t build their own, but because:

  • It reduces overhead
  • It preserves institutional knowledge
  • It scales with the environment
  • It gives new joiners a fighting chance
  • It ensures compliant test data and reduces risk

We often build software because we can’t buy something that works in the way our business works. However, as we’ve just walked through, for some tasks – like masking test data in a compliant way – a paid solution is better for several reasons, and Redgate has just the thing.

Where Redgate Test Data Manager fits in

Redgate Test Data Manager was built for exactly these challenges. It helps teams:

  • Discover sensitive data automatically
  • Mask it with realistic values
  • Retain referential integrity
  • Subset intelligently
  • Reduce PII exposure
  • Scale with changes in the database estate
  • Ensure compliance is built in, not a bottleneck

If you want to see how it fits into your workflow, download the free trial and see it in action. It’s quick to set up, easy to learn, and takes the burden of DIY off your team’s plate.



Blazorise 1.8.10 - Table Scrolling & Globalization Improvements

Blazorise 1.8.10 delivers a fix for Table ScrollToRow visibility, adds support for Invariant Globalization, and resolves Signature Pad input issues on iOS Safari.

Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation


In this comprehensive guide, we'll explore how to integrate both Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints versus OpenAI's native API.

Table of Contents

  1. Introduction to Sora Models
  2. Azure AI Foundry vs. OpenAI API Structure
  3. API Integration: Request Body Parameters
  4. Video Generation Modes
  5. Cost Analysis per Generation
  6. Technical Limitations & Constraints
  7. Resolution & Duration Support
  8. Implementation Best Practices

Introduction to Sora Models

Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions:

  • Sora 1: The original model focused primarily on text-to-video generation with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds)
  • Sora 2: The enhanced version with native audio generation, multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview)

Azure AI Foundry vs. OpenAI API Structure

Key Architectural Differences

Sora 1 uses Azure's traditional deployment-based API structure:

  • Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/...
  • Parameters: Uses Azure-specific naming like n_seconds, n_variants, separate width/height fields
  • Job Management: Uses /jobs/{id} for status polling
  • Content Download: Uses /video/generations/{generation_id}/content/video

Sora 2 adapts OpenAI's v1 API format while still being hosted on Azure:

  • Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos
  • Parameters: Uses OpenAI-style naming like seconds (string), size (combined dimension string like "1280x720")
  • Job Management: Uses /videos/{video_id} for status polling
  • Content Download: Uses /videos/{video_id}/content

Why This Matters

This architectural difference requires conditional request formatting in your code:

const isSora2 = deployment.toLowerCase().includes('sora-2');

if (isSora2) {
  requestBody = {
    model: deployment,
    prompt,
    size: `${width}x${height}`,       // Combined format
    seconds: duration.toString(),     // String type
  };
} else {
  requestBody = {
    model: deployment,
    prompt,
    height,                           // Separate dimensions
    width,
    n_seconds: duration.toString(),   // Azure naming
    n_variants: variants,
  };
}

 

API Integration: Request Body Parameters

Sora 1 API Parameters

Standard Text-to-Video Request:

{
  "model": "sora-1",
  "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.",
  "height": "720",
  "width": "1280",
  "n_seconds": "12",
  "n_variants": "2"
}

 

Parameter Details:

  • model (String, Required): Your Azure deployment name
  • prompt (String, Required): Natural language description of the video (max 32000 chars)
  • height (String, Required): Video height in pixels
  • width (String, Required): Video width in pixels
  • n_seconds (String, Required): Duration (1-20 seconds)
  • n_variants (String, Optional): Number of variations to generate (1-4, constrained by resolution)

Sora 2 API Parameters

Text-to-Video Request:

{
  "model": "sora-2",
  "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot",
  "size": "1280x720",
  "seconds": "12"
}

 

Image-to-Video Request (uses FormData):

const formData = new FormData();
formData.append('model', 'sora-2');
formData.append('prompt', 'Animate this image with gentle wind movement');
formData.append('size', '1280x720');
formData.append('seconds', '8');
formData.append('input_reference', imageFile); // JPEG/PNG/WebP

 

Video-to-Video Remix Request:

  • Endpoint: POST .../videos/{video_id}/remix
  • Body: Only { "prompt": "your new description" }
  • The original video's structure, motion, and framing are reused while applying the new prompt

Parameter Details:

  • model (String, Optional): Your deployment name
  • prompt (String, Required): Video description
  • size (String, Optional): Either "720x1280" or "1280x720" (defaults to "720x1280")
  • seconds (String, Optional): "4", "8", or "12" (defaults to "4")
  • input_reference (File, Optional): Reference image for image-to-video mode
  • remix_video_id (String, URL parameter): ID of video to remix

Video Generation Modes

1. Text-to-Video (Both Models)

The foundational mode where you provide a text prompt describing the desired video.

Implementation:

const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': apiKey,
  },
  body: JSON.stringify({
    model: deployment,
    prompt: "A train journey through mountains with dramatic lighting",
    size: "1280x720",
    seconds: "12",
  }),
});

Best Practices:

  • Include shot type (wide, close-up, aerial)
  • Describe subject, action, and environment
  • Specify lighting conditions (golden hour, dramatic, soft)
  • Add camera movement if desired (pans, tilts, tracking shots)
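
As a rough illustration of that structure, a prompt can be assembled from those elements. Everything in this sketch (the interface and helper names) is invented for the example:

// Illustrative prompt builder; not part of any API.
interface PromptParts {
  shot: string;        // e.g. "Wide shot"
  subject: string;     // e.g. "a child flying a red kite"
  environment: string; // e.g. "in a grassy park"
  lighting?: string;   // e.g. "golden hour sunlight"
  camera?: string;     // e.g. "camera slowly pans upward"
}

function buildPrompt(p: PromptParts): string {
  const parts = [
    `${p.shot} of ${p.subject} ${p.environment}`,
    p.lighting,
    p.camera,
  ].filter((s): s is string => Boolean(s));
  return parts.join(', ') + '.';
}

buildPrompt({
  shot: 'Wide shot',
  subject: 'a child flying a red kite',
  environment: 'in a grassy park',
  lighting: 'golden hour sunlight',
  camera: 'camera slowly pans upward',
});
// "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward."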

2. Image-to-Video (Sora 2 Only)

Generate a video anchored to or starting from a reference image.

Key Requirements:

  • Supported formats: JPEG, PNG, WebP
  • Image dimensions must exactly match the selected video resolution
  • Our implementation automatically resizes uploaded images to match

Implementation Detail:

// Resize image to match video dimensions
const targetWidth = parseInt(width);
const targetHeight = parseInt(height);
const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight);

// Send as multipart/form-data
formData.append('input_reference', resizedImage);

3. Video-to-Video Remix (Sora 2 Only)

Create variations of existing videos while preserving their structure and motion.

Use Cases:

  • Change weather conditions in the same scene
  • Modify time of day while keeping camera movement
  • Swap subjects while maintaining composition
  • Adjust artistic style or color grading

Endpoint Structure:

POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview

Implementation:

let requestEndpoint = endpoint;
if (isSora2 && remixVideoId) {
  const [baseUrl, queryParams] = endpoint.split('?');
  const root = baseUrl.replace(/\/videos$/, '');
  requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`;
}

 

Cost Analysis per Generation

Sora 1 Pricing Model

Base Rate: ~$0.05 per second per variant at 720p 

Resolution Scaling: Cost scales linearly with pixel count

Formula:

const basePrice = 0.05;
const basePixels = 1280 * 720; // Reference resolution
const currentPixels = width * height;
const resolutionMultiplier = currentPixels / basePixels;
const totalCost = basePrice * duration * variants * resolutionMultiplier;

Examples:

  • 720p (1280×720), 12 seconds, 1 variant: $0.60
  • 1080p (1920×1080), 12 seconds, 1 variant: $1.35
  • 720p, 12 seconds, 2 variants: $1.20

Sora 2 Pricing Model

Flat Rate: $0.10 per second per variant (no resolution scaling in public preview)

Formula:

const totalCost = 0.10 * duration * variants;

Examples:

  • 720p (1280×720), 4 seconds: $0.40
  • 720p (1280×720), 12 seconds: $1.20
  • 720p (720×1280), 8 seconds: $0.80

 

Note: Since Sora 2 currently only supports 720p in public preview, resolution doesn't affect cost; only duration matters.

Cost Comparison

Scenario        | Sora 1 (720p)  | Sora 2 (720p) | Winner
4s video        | $0.20          | $0.40         | Sora 1
12s video       | $0.60          | $1.20         | Sora 1
12s + audio     | N/A (no audio) | $1.20         | Sora 2 (unique)
Image-to-video  | N/A            | $0.40-$1.20   | Sora 2 (unique)

 

Recommendation: Use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities.
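
To make that comparison programmatically, the two pricing formulas above can be folded into one helper. This is a sketch based on the public-preview rates quoted in this post; actual billing may differ, and the function name is illustrative:

// Sketch combining both pricing formulas above (rates as quoted here).
function estimateCost(
  model: 'sora-1' | 'sora-2',
  width: number,
  height: number,
  seconds: number,
  variants = 1,
): number {
  if (model === 'sora-2') {
    // Flat rate; resolution is fixed at 720p in public preview
    return 0.10 * seconds * variants;
  }
  // Sora 1: ~$0.05/s per variant at 720p, scaling with pixel count
  const resolutionMultiplier = (width * height) / (1280 * 720);
  return 0.05 * seconds * variants * resolutionMultiplier;
}

estimateCost('sora-1', 1920, 1080, 12); // 1.35
estimateCost('sora-2', 1280, 720, 12);  // 1.20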

Technical Limitations & Constraints

Sora 1 Limitations

Resolution Options:

  • 9 supported resolutions from 480×480 to 1920×1080
  • Includes square, portrait, and landscape formats
  • Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080

Duration:

  • Flexible: 1 to 20 seconds
  • Any integer value within range

Variants:

  • Depends on resolution:
    • 1080p: Variants disabled (n_variants must be 1)
    • 720p: Max 2 variants
    • Other resolutions: Max 4 variants
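
Since these caps vary by resolution, a small guard can encode them before a request is submitted. This helper is illustrative (not part of the API) and simply mirrors the limits listed above:

// Maximum n_variants Sora 1 allows for a given resolution (illustrative).
function maxVariantsFor(width: number, height: number): number {
  const longSide = Math.max(width, height);
  const shortSide = Math.min(width, height);
  if (shortSide >= 1080) return 1; // 1080p-class: variants disabled
  if (longSide >= 1280) return 2;  // 720p (720×1280 / 1280×720): max 2
  return 4;                        // Remaining supported resolutions: max 4
}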

Concurrent Jobs: Maximum 2 jobs running simultaneously

Job Expiration: Videos expire 24 hours after generation

Audio: No audio generation (silent videos only)

Sora 2 Limitations

Resolution Options (Public Preview):

  • Only 2 options: 720×1280 (portrait) or 1280×720 (landscape)
  • No square formats
  • No 1080p support in current preview

Duration:

  • Fixed options only: 4, 8, or 12 seconds
  • No custom durations
  • Defaults to 4 seconds if not specified

Variants:

  • Not prominently supported in current API documentation
  • Focus is on single high-quality generations with audio

Concurrent Jobs: Maximum 2 jobs (same as Sora 1)

Job Expiration: 24 hours (same as Sora 1)

Audio: Native audio generation included (dialogue, sound effects, ambience)
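
Because these preview constraints are easy to trip over, it can help to validate requests client-side before submission. The following guard is an illustrative sketch that mirrors the limits above; the function name is invented:

// Client-side guard for Sora 2 public-preview constraints (illustrative).
const SORA2_SIZES = ['720x1280', '1280x720'];
const SORA2_SECONDS = ['4', '8', '12'];

function validateSora2Request(size: string, seconds: string): void {
  if (!SORA2_SIZES.includes(size)) {
    throw new Error(`Unsupported size "${size}"; public preview allows ${SORA2_SIZES.join(' or ')}`);
  }
  if (!SORA2_SECONDS.includes(seconds)) {
    throw new Error(`Unsupported duration "${seconds}"; public preview allows 4, 8, or 12 seconds`);
  }
}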

Shared Constraints

Concurrent Processing: Both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third.

Job Lifecycle:

queued → preprocessing → processing/running → completed

Download Window: Videos are available for 24 hours after completion. After expiration, you must regenerate the video.

Generation Time:

  • Typical: 1-5 minutes depending on resolution, duration, and API load
  • Can occasionally take longer during high demand

Resolution & Duration Support Matrix

Sora 1 Support Matrix

Resolution | Aspect Ratio | Max Variants | Duration Range | Use Case
480×480    | Square       | 4            | 1-20s          | Social thumbnails
480×854    | Portrait     | 4            | 1-20s          | Mobile stories
854×480    | Landscape    | 4            | 1-20s          | Quick previews
720×720    | Square       | 4            | 1-20s          | Instagram posts
720×1280   | Portrait     | 2            | 1-20s          | TikTok/Reels
1280×720   | Landscape    | 2            | 1-20s          | YouTube shorts
1080×1080  | Square       | 1            | 1-20s          | Premium social
1080×1920  | Portrait     | 1            | 1-20s          | Premium vertical
1920×1080  | Landscape    | 1            | 1-20s          | Full HD content

Sora 2 Support Matrix

Resolution | Aspect Ratio | Duration Options | Audio  | Generation Modes
720×1280   | Portrait     | 4s, 8s, 12s      | ✅ Yes | Text, Image, Video Remix
1280×720   | Landscape    | 4s, 8s, 12s      | ✅ Yes | Text, Image, Video Remix

Note: Sora 2's limited resolution options in public preview are expected to expand in future releases.

Implementation Best Practices

1. Job Status Polling Strategy

Implement adaptive backoff to avoid overwhelming the API:

const maxAttempts = 180; // 15 minutes max
let attempts = 0;
const baseDelayMs = 3000; // Start with 3 seconds

while (attempts < maxAttempts) {
  const response = await fetch(statusUrl, {
    headers: { 'api-key': apiKey },
  });

  if (response.status === 404) {
    // Job not ready yet, wait longer
    const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
    await new Promise(r => setTimeout(r, delayMs));
    attempts++;
    continue;
  }

  const job = await response.json();

  // Check completion (different status values for Sora 1 vs 2)
  const isCompleted = isSora2
    ? job.status === 'completed'
    : job.status === 'succeeded';
  if (isCompleted) break;

  // Adaptive backoff
  const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
  await new Promise(r => setTimeout(r, delayMs));
  attempts++;
}

2. Handling Different Response Structures

Sora 1 Video Download:

const generations = Array.isArray(job.generations) ? job.generations : [];
const genId = generations[0]?.id;
const videoUrl = `${root}/${genId}/content/video`;

Sora 2 Video Download:

const videoUrl = `${root}/videos/${jobId}/content`;

3. Error Handling

try {
  const response = await fetch(endpoint, fetchOptions);
  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Video generation failed: ${error}`);
  }
  // ... handle successful response
} catch (error) {
  console.error('[VideoGen] Error:', error);
  // Implement retry logic or user notification
}

 

4. Image Preprocessing for Image-to-Video

Always resize images to match the target video resolution:

async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    if (!ctx) {
      // Guard added: getContext can return null
      reject(new Error('Canvas 2D context unavailable'));
      return;
    }

    img.onload = () => {
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
      canvas.toBlob((blob) => {
        if (blob) {
          const resizedFile = new File([blob], file.name, { type: file.type });
          resolve(resizedFile);
        } else {
          reject(new Error('Failed to create resized image blob'));
        }
      }, file.type);
    };
    img.onerror = () => reject(new Error('Failed to load image'));
    img.src = URL.createObjectURL(file);
  });
}

5. Cost Tracking

Implement cost estimation before generation and tracking after:

// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });

6. Progressive User Feedback

Provide detailed status updates during the generation process:

const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};

onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);

Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and capabilities. Key takeaways:

  1. Choose the right model: Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
  2. Handle API differences: Implement conditional logic for parameter formatting and status polling based on model version
  3. Respect limitations: Plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
  4. Optimize costs: Calculate estimates upfront and track actual usage for better budget management
  5. Provide great UX: Implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and limitations are lifted (especially resolution options), we'll see even more creative applications emerge.


This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.

 


Azure CLI Windows MSI Upgrade Issue: Root Cause, Mitigation, and Performance Improvements

1 Share

Azure CLI on Windows MSI Upgrade Issue

Summary

About six months ago, some Windows users experienced an Azure CLI crash after upgrading via the MSI installer from Azure CLI 2.76.0 (or earlier) to 2.77.0 (or later). The failure occurred immediately on startup with: “ImportError: DLL load failed while importing win32file: The specified module could not be found.” This post explains what happened, why upgrades were affected (while clean installs typically worked), and what you can do to recover.

Who is affected?

You are likely affected if:

  • You installed Azure CLI using the Windows MSI installer.
  • You upgraded from Azure CLI 2.76.0 (or earlier) to 2.77.0 (or later) without fully uninstalling first.
  • After the upgrade, any az command fails with the win32file ImportError on startup.

Symptoms

Typical error output (Azure CLI/Azure PowerShell):

ImportError: DLL load failed while importing win32file: The specified module could not be found.

Immediate recovery

  1. Upgrade to the latest version (2.83.0).
  2. If you want to install other versions of Azure CLI, perform a clean reinstall by uninstalling Microsoft Azure CLI from Windows Settings → Apps, deleting any remaining install folder (such as the CLI2 directory), reinstalling the latest Azure CLI using MSI or winget, and then verifying the installation with az --version.

Root cause analysis

During an affected MSI upgrade, the Azure CLI installation directory ended up missing a set of native Python extension files (.pyd files) required by the Windows integration layer. MSI logging showed components being blocked with messages indicating MSI believed the existing (older) key file was “newer” than the incoming one.

The root cause was an interaction between Windows Installer file versioning rules and a third‑party dependency packaging change. Azure CLI 2.76.0/2.77.0 consumed pywin32 311, whose .pyd binaries were missing Windows version resource metadata. When upgrading from a previous Azure CLI build that contained version-stamped pywin32 binaries (e.g., pywin32 306), MSI could treat the older versioned files as higher priority than the incoming non-versioned files. As a result, MSI could remove the old files during upgrade but skip installing the new ones, leaving the install incomplete.

Version mapping observed

Azure CLI version | Python | pywin32 | pywin32 .pyd version resource
≤ 2.75.0          | 3.12   | 306     | Present (e.g., 3.12.306.0)
2.76.0            | 3.12   | 311     | Missing / empty
2.77.0+           | 3.13   | 311     | Missing / empty

If you need to collect MSI logs (for support)

Run the installer with verbose logging (example):

msiexec /i "azure-cli-2.77.0.msi" /l*vx "C:\temp\azure-cli-install.log"


 

Windows MSI Upgrade Performance Optimization

The MSI upgrade process for Azure CLI on Windows has been significantly improved.

Previously, Windows Installer performed per‑file version comparisons—particularly expensive for Python runtime files—which made upgrades slow and sometimes inconsistent.

The new logic skips that comparison and performs an overwrite installation: upgrades now use a streamlined clean-install process, resulting in faster and more reliable MSI upgrades.

Performance Improvements

Scenario      | Before                                      | After       | Improvement
Fresh Install | Baseline                                    | ~5% faster  | 5% faster
Upgrade       | Long due to file-by-file version comparison | ~23% faster | 23% faster

This update makes upgrades noticeably faster and more reliable by removing old files first and skipping slow per‑file version checks.

For more details, please refer to: [Packaging] Optimize MSI upgrade performance by simplifying file replacement logic by wangzelin007 · Pull Request #32678 · Azure/azure-cli

 

We encourage you to upgrade to the latest version of the Azure CLI. This will not only resolve the issue but also improve installation performance. Here is our release note.
If you encounter any problems, please feel free to report them on Azure CLI GitHub.


Inside CodeMash 2026: A Geek Family Reunion (Powered by Microsoft MVPs)

1 Share

Every January, developers from across the world head to CodeMash at the Kalahari Resort in Ohio, US - equal parts deep technical learning and community warmth. It’s the kind of event where you park your car, stay onsite for days, and the hallway conversations become the main track. For me, it feels like a geek family reunion: you don’t just meet people - you really get to know and connect with them.

That community is also why Microsoft MVPs travel so far to be here - from Washington and Arizona to Texas, Ontario (Canada), Michigan, Minnesota, Ohio, and beyond. In 2026, MVPs delivered 30 sessions and workshops, and many also volunteered behind the scenes - helping with speaker selection, conference operations, and efforts that keep CodeMash welcoming.

Here are a few moments and reflections that captured CodeMash this year - from snowy travel stories to sessions that sparked “I’m trying this on Monday” energy, to the offstage conversations that turn conference acquaintances into real friends.

Microsoft MVP meet-up at the CodeMash 2026 conference

MVP impact: on stage, behind the scenes, and in the hallways

CodeMash is packed with great talks, but what makes it stick is how the community shows up for each other. This year, Microsoft MVPs covered cloud, AI, security, data, and developer tools - and also gave back through volunteer leadership, from helping with speaker/session selection to supporting safety and inclusion. That mix of strong content plus shared ownership is what turns a conference into a community.

CodeMash also brings families along for the ride. KidzMash is a free family track where kids learn by building - plus hands-on time in the Maker Space (soldering, CAD/3D design, electronics labs, and more). It’s a reminder that the curiosity that fuels great engineering can start early.

It’s hard to explain CodeMash without talking about the “everything else.” MVP Sam Basu put it best: “Codemash has always been a special place to see old friends and make new ones.” And MVP Robert Fornal captured the returning-attendee feeling in one line: “The energy of the conference is amazing… there’s always something new to see and learn.”

MVP Sam Basu presenting "Design to Code for .NET Developers"

Voices from CodeMash: the stories behind the sessions

Ask anyone why they came to CodeMash, and you’ll hear the same themes - community, learning, and that unique “park once and stay awhile” rhythm. Of course, first you have to get there - which in January often means snow.
MVP Joseph Guadagno flew in from Arizona and shared, “It’s been a really long time since I’ve driven in the snow.” He still made the trip for the people—especially the Midwest community he doesn’t see as often: “The event is definitely worth it.”

MVP Mike Benkovich had an appropriately wintery travel tale: “Going thru Cleveland to get to Sandusky was an adventure… we got the full effects of drifting snow, but we made it.” His verdict after arriving: “Highly recommended!”

 

MVP James Petty presenting at his workshop, "Stop Clicking, Start Scripting: PowerShell for Real People"

MVP James Petty told me CodeMash has turned into a tradition: “Honestly? CodeMash has become our family vacation tradition.” And yes, they learned the hard way not to ignore the forecast: “We… tried to leave during Thursday’s snowstorm - not our smartest move.”

Why MVPs Come to CodeMash

CodeMash is built for builders - and the MVP sessions leaned hard into practical, hands-on takeaways - from AI and cloud to security, data, and developer productivity. The best talks weren’t just interesting; they were the kind that send you home with a plan (and a backlog) for next week.

 

MVP Jake Hildreth presenting, “PKI Unlocked: A No-Math Primer for Builders”

MVP Jake Hildreth shared he wasn’t sure his topic would fit a dev-heavy conference—until it did: “I always felt like my special interest… was a little too niche.” After a strong reaction to an earlier talk, he pitched it to CodeMash and got the response every speaker wants: “YES, PLEASE.”

After his workshop, James Petty knew it landed when attendees started planning: “They already had ‘to-do’ lists ready for when they got back to work - they weren't just learning, they were ready to apply it immediately.”

MVP Brian McKeiver had a simple “yep, this is why I came” moment: Thursday breakfast, looking out at a room full of people comparing notes, laughing, and reconnecting. “It made me smile,” he said.

MVP Ron Dagdag presenting, "AI in the Browser: Practical ML with JavaScript"

Joseph Guadagno noted, “I ran into a former attendee that let me know she is speaking at this CodeMash because I inspired her based on my conversations with her at CodeMash and TechBash.”

MVP Jonathan “J” Tower summed up why he keeps showing up in person: “I value being with the community in person. With so much work now done remotely and so much content available online, being face to face matters more than ever. I always enjoy seeing audiences understanding something new in a session, but the moments that I go for are the hallway and meal conversations, where people talk openly about what they are building, learning, or struggling with. Those connections are hard to replicate anywhere else but in-person."

 

MVP Jonathan “J” Tower presenting, "Layers Are for Lasagna: Embracing Vertical Slice Architecture"

“I embraced the fun and chaos by dressing up…”

CodeMash is one of the rare conferences where serious technical learning and playful experimentation happily coexist - and KidzMash makes that impossible to miss. MVP Matt Eland described it perfectly: “You don’t do any rehearsals because kids are chaotic and wonderful.” His advice for presenters is basically to show up prepared… and then roll with it: “You just embrace the chaos.”

MVP Matt Eland & MVP Alumni Sarah “Sadukie” Dutkiewicz presenting at KidzMash, "Think like a Computer"

Where to start

For a lot of developers, the hardest part of community involvement isn’t motivation - it’s knowing where to start. I asked a few MVPs what they’d tell someone who wants to begin contributing, and their answers all point to the same idea: start small, start now, and don’t wait for permission.

Joseph Guadagno summed it up simply as, “Just volunteer, anything helps. Whether it's helping at the registration desk, telling people about the event, or speaking.”

MVP Chris Woodruff presenting his workshop, "Modern UX Without JavaScript Madness"

MVP Chris Woodruff said, “Don’t wait for someone to invite you into the developer community.” He recommends starting with meetups, conferences, and online forums - places where you can learn in public and meet people who share your curiosity.

MVP Hasan Savran mentioned, "Stop stressing and just give it a shot. You won't always be right, and that's okay. Speak up and share what you are thinking. It doesn't matter who you are; there will always be people who judge you and your knowledge. You are here to express yourself and tell a story from your viewpoint through your words or code. Learn to agree to disagree and move on."

Sam Basu shared, “It’s good to start small… everyone has to find a niche to be really good at something.”

Jake Hildreth added “Build in public!” He shared that once he pushed past the “is this good enough?” fear, publishing his work made it easier for others to help—and for him to see his own growth over time.

MVP Stephen Bilogan kept it practical: “There is no one framework to rule them all.” His nudge to attendees: try things, see what fits, and stay curious.

Brian McKeiver urged, “Don’t wait… just be fearless.” His take: share what you’re learning from real work - what went wrong and what went right - because your perspective helps more people than you think.

MVP Brian McKeiver presenting, "Tuning Azure App Services for Peak Performance"

The best moments happen offstage

The longer you attend, the more CodeMash becomes a timeline of friendships and memories. And if you ask people for their favorite moment, it’s rarely just one session - it’s the conversations between everything else.

Some of the best CodeMash memories don’t come from a slide deck - they come from the in-between moments: a conversation you didn’t expect, a table you sat at on a whim, a familiar face in a new hallway. So I asked a simple question - one that always reveals what people value most about this place: "What do you love most about CodeMash and its community?”

MVP Samuel Gomez said it was “a tie between meeting fellow MVPs at dinner and being able to complete the ‘quadfecta’ of CodeMash roles this year (attendee, speaker, organizer and volunteer).” He added, “It was amazing to experience what goes behind the scenes… and all the hard work and love that goes into making an event like this happen.”

MVP Deepthi Goguri presenting, “SQL Server Query Store Masterclass: From Troubleshooting to Optimization”

MVP Deepthi Goguri shared her favorite moment was meeting other MVPs: “These are the type of moments which makes me feel home.”

MVP Jim Bennett keeps it simple: “Dinner one evening, I just grabbed a seat at a table of folks I’d never met before and had a very enjoyable few hours chatting with them all. Having a welcoming community makes all the difference at an event, and it’s so nice to meet new people, hear their stories, and geek out on technical concepts.” For him, those casual table conversations often end up being the most valuable part of the conference.

MVP Jim Bennett presenting his workshop, "Do you want to build a copilot? Star Wars edition!"

Closing Thoughts

CodeMash 2026 was a good reminder that while the tech changes fast, the best parts of our industry don’t: curiosity, collaboration, and the joy of building things with (and for) other people. The MVPs brought strong technical content - but also the stories and generosity that make conferences feel human. And in a world where tech can sometimes feel overwhelming, those human moments are what keep us connected.

If you were at CodeMash, I hope you left with a few ideas you can use right away - and a few new friends. If you missed it, put it on your list for next year. Come for the sessions, stay for the conversations, and if you’re ready to go deeper: submit a talk, volunteer, or help someone else take their first step. Learn more (including the family-friendly KidzMash track) at codemash.org.




Getting Started using Purview with Erica Toelle

1 Share

Ready to get started with Purview? Richard chats with Erica Toelle about the first steps you can take to harness the power of Purview in your organization. Erica explains that Purview is an umbrella product that covers several infosec technologies, including information rights management, data loss prevention, structured data governance, and more. When preparing for M365 Copilot, you want to start tagging sensitive information in your organization, and Purview can help by using LLMs to identify potentially sensitive content. You can also monitor how data is used, the types of prompts sent to M365 Copilot, and more. This can help you bootstrap M365 Copilot by using Purview to see which data Copilot uses, and then tune the access rules for that data. "Getting your data estate in order" is not a destination; it's a journey, and Purview can give you a map!


Recorded January 7, 2026





Download audio: https://cdn.simplecast.com/audio/c2165e35-09c6-4ae8-b29e-2d26dad5aece/episodes/dc69cbde-03b4-4f6a-b595-ffe1cb8fe16a/audio/64dc578f-ff76-48a0-89fd-9606e65fb604/default_tc.mp3?aid=rss_feed&feed=cRTTfxcT