Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
149943 stories
·
33 followers

Defender for AI services: Threat Protection and AI red team workshop

1 Share

Generative AI is reshaping how enterprises operate, introducing new efficiencies—and new risks. Imagine launching a helpful chatbot, only to learn a cleverly crafted prompt can bypass safety controls and exfiltrate sensitive data. This is today’s reality: every system prompt, plugin/tool, dataset, fine‑tune, or orchestration step can change the attack surface. This is due to the non deterministic nature in how LLM models craft responses in output. The slightest change in the input of a prompt in verbiage or tone can change the outcome of a output in subtle but non predictable ways especially when your data is involved.

This post shows how to operationalize AI red teaming with Microsoft Defender for AI services so security teams gain evidence‑backed visibility into adversarial behavior and turn that visibility into daily defense. By aligning with Microsoft’s Responsible AI principles of transparency, accountability, and continuous improvement, we demonstrate a pragmatic, repeatable loop that makes AI safer week after week. Crucially, security needs to have a seat at the table across the AI app lifecycle from model selection and pilot to production and ongoing updates.

Who Should Read This (and What You’ll See)

  • SOC analysts & incident responders - See how AI signals materialize as high‑fidelity alerts (prompt evidence, URL intel, identity context) in Defender for Cloud and Defender XDR for fast triage and correlation.
  • AI/ML engineers - Validate model safety with controlled simulations (PyRIT‑informed strategies) and understand which filters/guardrails move the needle.
  • Security architects - Integrate Microsoft Defender for AI services into your cloud security program; codify improvements as policy, IaC, and identity hygiene.
  • Red teamers/researchers - Run structured, repeatable adversarial tests that produce measurable outcomes the org can act on.

Why now? Data leakage, prompt injection, jailbreaks, and endpoint abuse are among the fastest‑growing threats to AI systems. With AI red teaming and Microsoft Defender for AI services, you catch intent before impact and translate insight into durable controls.

What’s Different About the AI Attack Surface

New risks sit alongside the traditional ones:

  • Prompts & responses — Susceptible to prompt injection and jailbreak attempts (rule change, role‑play, encoding/obfuscation).
  • User & application context — Missing context slows investigations and blurs accountability.
  • Model endpoints & identities — Static keys and weak identity practices increase credential theft and scripted probing risk.
  • Attached data (RAG/fine‑tuning) — Indirect prompt injection via documents or data sources.
  • Orchestration layers/agents — Tool invocation abuse, unintended actions, or “over‑permissive” chains.
  • Content & safety filters — Configuration drift or silent loosening erodes protection.

A key theme across these risks is context propagation. The way user identity, application parameters, and environmental signals travel with each prompt and response. When context is preserved and surfaced in security alerts, SOC teams can quickly correlate incidents, trace attack paths, and remediate threats with precision. Effective context propagation transforms raw signals into actionable intelligence, making investigations faster and more accurate.

Microsoft Defender for AI services adds a real‑time protection layer across this surface by combining Prompt Shields, activity monitoring, and Microsoft threat intelligence to produce high‑fidelity alerts you can operationalize.

The Improvement Loop (Responsible AI in Practice)

Responsible AI comes to life when teams Observe → Correlate → Remediate → Retest → Codify:

  1. Observe controlled jailbreak/phishing/automation patterns and collect prompt evidence.
  2. Correlate with identity, network, and prior incidents in Defender XDR.
  3. Remediate with the smallest effective control (filters, identities, rate limits, data scoping).
  4. Retest the same scenario to verify risk reduction.
  5. Codify as baseline (policy, IaC template, guardrail profile, rotation notes).

Repeat this rhythm on a schedule and you’ll build durable posture faster than a one‑time “big‑bang” control set.

Prerequisites:

To take advantage of this workshop you’ll need:

  1. Sandbox subscription (ideally inside a Sandbox Mgmt Group with lighter policies), you may also leverage a Free trial of Azure Subscription as well.
  2. Microsoft Defender for AI services plan enabled (see Participant Guide )
  3. Contributor access (you can deploy + view alerts)
  4. Region capacity confirmed (Azure AI Foundry in East US 2)

Workshop flow and testing:

Prep: Enable the Microsoft Defender for AI services plan with prompt evidence, deploy the Azure Template (one hub + single endpoint), and open the AIRT-Eval.ipynb notebook; you now have a controlled space to generate signals(see Participant Guide).

Controlled Signals: Trigger against a jailbreak attempt, a phishing URL simulation, and a suspicious user agent simulation to produce three distinct alert types.

Triage & Correlate: For each alert, review anatomy (evidence, severity, IDs) and capture prompt/URL evidence.

Harden & Retest: Apply improvements or security controls, then validate fixes.

After you harden controls and retest, the next step is validating that your defenses trigger the right alerts on demand. There is a list of Microsoft Defender for AI services alerts here. To evaluate alerts, open DfAI‑Eval.ipynb - a streamlined notebook that safely simulates adversarial activity (current alerts: jailbreak, phishing URL, suspicious user agent) to exercise Microsoft Defender for AI services detections. Think of it as the EICAR test for AI workloads: consistent, repeatable, and safe. 

Next, we will review and break down each of the alerts you’ll generate in the workshop and how to read them effectively.

Anatomy of Jailbreak from AI Red team Agent:

A jailbreak is a user prompt designed to sidestep system or safety instructions—rule‑change (“ignore previous rules”), fake embedded conversation, role‑play as an unrestricted persona, or encoding tricks. Microsoft Defender for AI services (via Prompt Shields + threat intelligence) flags it before unsafe output (“left‑of‑boom”) and publishes a correlated high‑fidelity alert into Defender XDR for cross‑signal investigation.

Anatomy of a Phishing involved in an attack:

Phishing prompt URL alerts fire when a prompt or draft response embeds domains linked to impersonation, homoglyph tricks, newly registered infrastructure, encoded redirects, or reputation‑flagged hosting. Microsoft Defender for AI services enriches the URL (normalization, age, reputation, brand similarity) and—if prompt evidence is enabled—includes the exact snippet, then streams the alert into Defender XDR where end‑user/application context fields (e.g. `EndUserId`, `SourceIP`) let analysts correlate repeated lure attempts and pivot to related credential or jailbreak activity.

Anatomy of a User Agent involved in an attack:

Suspicious user agent alerts highlight enumeration or automation patterns (generic library signatures, headless runners, scanner strings, cadence anomalies) tied to AI endpoint usage and identity context. Microsoft Defender for AI services scores the anomaly and forwards it to Defender XDR enriched with optional `UserSecurityContext` (IP, user ID, application name) so analysts can correlate rapid probing with concurrent jailbreak or phishing alerts and enforce mitigations like managed identity, rate limits, or user agent filtering.

Conclusion

The goal of this Red teaming and AI Threat workshop amongst the different attendees is to catch intent before impact, prompt manipulation before unsafe output, phishing infrastructure before credential loss, and scripted probing before exfiltration. Microsoft Defender for AI services feeding Defender XDR enables a compact improvement loop that converts red team findings into operational guardrails.

Within weeks, this cadence transforms AI from experimental liability into a governed, monitored asset aligned with your cloud security program. Incrementally closing gaps within context propagation, identity hygiene, Prompt Shields & filter tuning—builds durable posture. Small, focused cycles win ship one improvement, measure its impact, promote to baseline, and repeat.

Read the whole story
alvinashcraft
19 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Common Misconceptions When Running Locally vs. Deploying to Azure Linux-based Web Apps

1 Share

TOC

  1. Introduction
  2. Environment Variable
  3. Build Time
    1. Compatible
    2. Memory
  4. Conclusion

1. Introduction

One of the most common issues during project development is the scenario where “the application runs perfectly in the local environment but fails after being deployed to Azure.”


In most cases, deployment logs will clearly reveal the problem and allow you to fix it quickly.
However, there are also more complicated situations where "due to the nature of the error itself" relevant logs may be difficult to locate.

This article introduces several common categories of such problems and explains how to troubleshoot them.
We will demonstrate them using Python and popular AI-related packages, as these tend to exhibit compatibility-related behavior.

Before you begin, it is recommended that you read Deployment and Build from Azure Linux based Web App | Microsoft Community Hub on how Azure Linux-based Web Apps perform deployments so you have a basic understanding of the build process.

 

2. Environment Variable

Simulating a Local Flask + sklearn Project

First, let’s simulate a minimal Flask + sklearn project in any local environment (VS Code in this example).

For simplicity, the sample code does not actually use any sklearn functions; it only displays plain text.

 

app.py

from flask import Flask app = Flask(__name__) @app.route("/") def index(): return "hello deploy environment variable" if __name__ == "__main__": app.run(host="0.0.0.0", port=8000)

 

We also preset the environment variables required during Azure deployment, although these will not be used when running locally.

 

.deployment

[config] SCM_DO_BUILD_DURING_DEPLOYMENT=false

 

As you may know, the old package name sklearn has long been deprecated in favor of scikit-learn.
However, for the purpose of simulating a compatibility error, we will intentionally specify the outdated package name.

 

requirements.txt

Flask==3.1.0 gunicorn==23.0.0 sklearn

 

After running the project locally, you can open a browser and navigate to the target URL to verify the result.

python3 -m venv .venv source .venv/bin/activate pip install -r requirements.txt python app.py

 

Of course, you may encounter the same compatibility issue even in your local environment.
Simply running the following command resolves it:

export SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True

 

We will revisit this error and its solution shortly.
For now, create a Linux Web App running Python 3.12 and configure the following environment variables.
We will define Oryx Build as the deployment method.

SCM_DO_BUILD_DURING_DEPLOYMENT=false WEBSITE_RUN_FROM_PACKAGE=false ENABLE_ORYX_BUILD=true

 

After deploying the code and checking the Deployment Center, you should see an error similar to the following.

 

From the detailed error message, the cause is clear:
sklearn is deprecated and replaced by scikit-learn, so additional compatibility handling is now required by the Python runtime.

The error message suggests the following solutions:

  1. Install the newer scikit-learn package directly.

  2. If your project is deeply coupled to the old sklearn package and cannot be refactored yet, enable compatibility by setting an environment variable to allow installation of the deprecated package.

Typically, this type of “works locally but fails on Azure” behavior happens because the deprecated package was installed in the local environment a long time ago at the start of the project, and everything has been running smoothly since.
Package compatibility issues like this are very common across various languages on Linux.

 

When a project becomes tightly coupled to an outdated package, you may not be able to upgrade it immediately.
In these cases, compatibility workarounds are often the only practical short-term solution.
In our example, we will add the environment variable:

SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True

 

However, here comes the real problem:
This variable is needed during the build phase, but the environment variables set in Azure Portal’s Application Settings only take effect at runtime. So what should we do?

 

The answer is simple, shift the Oryx Build process from build-time to runtime.

 

First, open Azure Portal → Configuration and disable Oryx Build.

ENABLE_ORYX_BUILD=false

 

Next, modify the project by adding a startup script.

run.sh

#!/bin/bash export SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True python -m venv .venv source .venv/bin/activate pip install -r requirements.txt python app.py


The startup script works just like the commands you run locally before executing the application.
The difference is that you can inject the necessary compatibility environment variables before running pip install or starting the app.

 

After that, return to Azure Portal and add the following Startup Command under Stack Settings.
This ensures that your compatibility environment variables and build steps run before the runtime starts.

bash run.sh

 

Your overall project structure will now look like this.
Once redeployed, everything should work correctly.

 

3. Build Time

Build-Time Errors Caused by AI-Related Packages

Many build-time failures are caused by AI-related packages, whose installation processes can be extremely time-consuming.
You can investigate these issues by reviewing the deployment logs at the following maintenance URL:

https://<YOUR_APP_NAME>.scm.azurewebsites.net/newui

 

Compatible

Let’s simulate a Flask + numpy project.

The code is shown below.

app.py

from flask import Flask app = Flask(__name__) @app.route("/") def index(): return "hello deploy compatible" if __name__ == "__main__": app.run(host="0.0.0.0", port=8000)

 

We reuse the same environment variables from the sklearn example.

.deployment

[config] SCM_DO_BUILD_DURING_DEPLOYMENT=false

 

This time, we simulate the incompatibility between numpy==1.21.0 and Python 3.10.

requirements.txt

Flask==3.1.0 gunicorn==23.0.0 numpy==1.21.0

 

We will skip the local execution part and move directly to creating a Linux Web App running Python 3.10.
Configure the same environment variables as before, and define the deployment method as runtime build.

SCM_DO_BUILD_DURING_DEPLOYMENT=false WEBSITE_RUN_FROM_PACKAGE=false ENABLE_ORYX_BUILD=false

 

After deployment, Deployment Center shows a successful publish.

 

However, the actual website displays an error.

 

At this point, you must check the deployment log files mentioned earlier.
You will find two key logs:

1. docker.log

  • Displays real-time logs of the platform creating and starting the container.

  • In this case, you will see that the health probe exceeded the default 230-second startup window, causing container startup failure.

  • This tells us the root cause is container startup timeout.

    To determine why it timed out, we must inspect the second file.

2. default_docker.log

  • Contains the internal execution logs of the container.

  • Not generated in real time, usually delayed around 15 minutes.

  • Therefore, if docker.log shows a timeout error, wait at least 15 minutes to allow the logs to be written here.

 

In this example, the internal log shows that numpy was being compiled during pip install, and the compilation step took too long.
We now have a concrete diagnosis: numpy 1.21.0 is not compatible with Python 3.10, which forces pip to compile from source. 

 

The compilation exceeds the platform’s startup time limit (230 seconds) and causes the container to fail.

We can verify this by checking numpy’s official site:

numpy · PyPI


numpy 1.21.0 only provides wheels for cp37, cp38, cp39 but not cp310 (which is python 3.10).
Thus, compilation becomes unavoidable.

 

Possible Solutions

  1. Set the environment variable

    WEBSITES_CONTAINER_START_TIME_LIMIT

    to increase the allowed container startup time.

  2. Downgrade Python to 3.9 or earlier.

  3. Upgrade numpy to 1.21.0+, where suitable wheels for Python 3.10 are available.

    In this example, we choose this option.

 

After upgrading numpy to version 1.25.0 (which supports Python 3.10) from specifying in requirements.txt and redeploying, the issue is resolved.

numpy · PyPI

 

requirements.txt

Flask==3.1.0 gunicorn==23.0.0 numpy==1.25.0

 

 

Memory

The final example concerns the App Service SKU.
AI packages such as Streamlit, PyTorch, and others require significant memory.
Any one of these packages may cause the build process to fail due to insufficient memory.
The error messages vary widely each time.

If you repeatedly encounter unexplained build failures, check Deployment Center or default_docker.log for Exit Code 137, which indicates that the system ran out of memory during the build.

The only solution in such cases is to scale up.

 

4. Conclusion

This article introduced several common troubleshooting techniques for resolving Linux Web App issues caused during the build stage.
Most of these problems relate to package compatibility, although the symptoms may vary greatly.
By understanding the debugging process demonstrated in these examples, you will be better prepared to diagnose and resolve similar issues in future deployments.

 

 

Read the whole story
alvinashcraft
29 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

How Big a Deal is the USA's AI Genesis Mission?

1 Share
From: AIDailyBrief
Duration: 8:38
Views: 822

The Genesis Mission establishes a Manhattan Project–scale national AI science program to centralize federal datasets, train scientific foundation models, and build a closed-loop AI experimentation platform using DOE supercomputers. Amazon pledged up to $50 billion to expand AWS supercomputing capacity for classified and unclassified government AI work, and Meta is exploring installing Google's TPUs as chip competition with NVIDIA intensifies. OpenAI CEO Sam Altman hinted at a consumer AI device built around total contextual awareness with a potential two-year timeline while keeping features deliberately vague.

Brought to you by:
KPMG – Go to ⁠www.kpmg.us/ai⁠ to learn more about how KPMG can help you drive value with our AI solutions.
Vanta - Simplify compliance - ⁠⁠⁠⁠⁠⁠⁠https://vanta.com/nlw

The AI Daily Brief helps you understand the most important news and discussions in AI.
Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Get it ad free at
Join our Discord: https://bit.ly/aibreakdown

Read the whole story
alvinashcraft
53 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

The future of AI-powered sales with Vercel COO, Jeanne DeWitt

1 Share

Jeanne DeWitt Grosser built world-class GTM teams at Stripe, Google, and, most recently, Vercel, where she serves as COO and oversees marketing, sales, customer success, revenue operations, and field engineering. She transformed Stripe’s early sales organization from the ground up and advises founders on GTM strategy.

We discuss:

1. Why GTM is becoming more strategically important in the AI era

2. The rise of the GTM engineer

3. A primer on segmentation

4. How to build a sales org that engineers and product teams respect

5. The changing calculus of build vs. buy for go-to-market tools in the AI era

6. Why most customers buy to avoid pain rather than to gain upside

Brought to you by:

Datadog—Now home to Eppo, the leading experimentation and feature flagging platform: https://www.datadoghq.com/lenny

Lovable—Build apps by simply chatting with AI: https://lovable.dev/

Stripe—Helping companies of all sizes grow revenue: https://stripe.com/

Transcript: https://www.lennysnewsletter.com/p/what-the-best-gtm-teams-do-differently

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/179503137/my-biggest-takeaways-from-this-conversation

Where to find Jeanne DeWitt Grosser:

• X: https://x.com/jdewitt29

• LinkedIn: https://www.linkedin.com/in/jeannedewitt

Where to find Lenny:

• Newsletter: https://www.lennysnewsletter.com

• X: https://twitter.com/lennysan

• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:

(00:00) Introduction to Jeanne DeWitt Grosser

(05:26) Defining go-to-market

(08:43) The evolution of go-to-market roles

(11:23) The rise of the go-to-market engineer

(14:21) Implementing AI in sales processes

(15:28) Optimizing sales with AI agents

(23:47) Defining sales roles: SDRs and AEs

(26:04) When to hire a GTM engineer

(29:04) Hiring and scaling sales teams

(30:50) The ideal go-to-market engineer

(34:24) The go-to-market tool stack

(40:39) Advice on building a great sales bot

(44:34) Vercel’s unfair advantage

(46:37) Go-to-market as a product

(47:04) Innovative sales tactics at Stripe

(52:38) Effective go-to-market tactics

(01:00:37) Segmentation strategies

(01:09:31) Building a sales org that engineers love

(01:14:00) Thoughts on PLG and pricing

(01:16:44) Sales compensation and hiring

(01:19:24) Lightning round and final thoughts

Referenced:

• Vercel: https://vercel.com

• Stripe: https://stripe.com

• Rosalind Franklin: https://en.wikipedia.org/wiki/Rosalind_Franklin

• Ben Salzman on LinkedIn: https://www.linkedin.com/in/bensalzman

• SDK: https://ai-sdk.dev/docs/introduction

• Gong: https://www.gong.io

• Lyft: https://www.lyft.com

• Instacart: https://www.instacart.com

• DoorDash: https://www.instacart.com

• “Sell the alpha, not the feature”: The enterprise sales playbook for $1M to $10M ARR | Jen Abel: https://www.lennysnewsletter.com/p/the-enterprise-sales-playbook-1m-to-10m-arr

• A step-by-step guide to crafting a sales pitch that wins | April Dunford (author of Obviously Awesome and Sales Pitch): https://www.lennysnewsletter.com/p/a-step-by-step-guide-to-crafting

• Kate Jensen on LinkedIn: https://www.linkedin.com/in/kateearle

• Lessons from scaling Stripe | Claire Hughes Johnson (former COO of Stripe): https://www.lennysnewsletter.com/p/lessons-from-scaling-stripe-tactics

• Atlassian: atlassian.com

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed.



To hear more, visit www.lennysnewsletter.com



Download audio: https://api.substack.com/feed/podcast/179503137/963b0154dbed0781d447b17aa2e803ef.mp3
Read the whole story
alvinashcraft
1 minute ago
reply
Pennsylvania, USA
Share this story
Delete

Gaming on Linux? Bazzite Is a Great Place To Start

1 Share

Gaming on Linux got a huge boost recently, thanks to the pending release of the upcoming Steam Machine, a compact gaming machine powered by Arch Linux.

That doesn’t mean, however, you’ll have to wait for that new device to be released. Why? There are several Linux distributions that have been created with gaming in mind. One such distribution is called Bazzite.

According to the official PR material, “Bazzite makes gaming and everyday use smoother and simpler across desktop PCs, handhelds, tablets, and home theater PCs.”

Your first question might be, “What does Bazzite do differently?” The list might be short in size, but it’s huge in results. Bazzite includes the following additions to the standard Linux distribution:

  • Steam, Proton, Proton+, Lutris, and Protontricks are pre-installed.
  • HDR & VRR support
  • Improved CPU schedulers
  • Several community-developed tools and customizations to streamlining the gaming experience
  • Support for several gaming controllers (Xbox, Wii, Switch, PS3/4/5, and others).
  • Latest NVIDIA and Mesa drivers for AMD and Intel.
  • Support for additional Wi-Fi and display hardware
  • Waydroid for Android app support
  • Homebrew included

You can check the entire gaming hardware compatibility listing here.

Bazzite not only works on desktops and laptops, but also on handhelds and tablets. And because you’ll be using Steam, you’ll have access to your entire Steam library.

Bazzite is also an image-based OS, which means if an update were to cause problems, you can easily roll back to a previous working image. And because Bazzite is an immutable distribution, it’s also highly secure. The entire core system is mounted in read-only mode, so those files cannot be altered.

Bazzite was built from Fedora Kinoite and uses either the KDE Plasma or GNOME desktop.

All of this comes together to create a Linux distribution that can run just about anything.

I installed the GNOME version of Bazzite to see what was what.

Gaming on Linux

Since Bazzite is promoted as a gaming distribution, I thought the first thing I should do is see how well it performs with Steam Games. I opened the app (there’s no installation of anything needed), logged into my account, and fired up a game in my library.

I’ve gone through these motions before and have found mixed results with some Linux distributions. I wasn’t surprised, however, at how seamlessly Steam worked with Bazzite. In minutes, I had Albion Online up and running. It’s not the most popular game on the planet, but it’s one I tend to use to test Steam on Linux.

As usual, it did take some time to download the game and start playing. All-in-all, it was about five minutes before I was testing the game (Figure 1).

Games screenshot.

Figure 1: Playing games on Bazzite is vastly simplified with Steam.

One thing that I’ve always understood is that using Steam on a PC isn’t quite as easy as using a dedicated console. Games have to be downloaded, space has to be reserved, etc. But once you start playing on Bazzite, it runs as well as it would on a console. In fact, I don’t think I’ve ever experienced such seamless gaming on Linux. It runs so well.

Graphics are outstanding, sound is great, and play is smooth. Of course, how well a game runs will depend on the game and the hardware. Sure, load times might be slow, but playing is solid.

You can also use Lutris as an even easier path to gaming with Bazzite. The one caveat to Lutris is that you have to download GOG files for installation of those games, and many GOG files have an associated price. But, hey, pay to play is the name of the game in this world.

Beyond Gaming

Yes, Bazzite might be geared towards gaming, but that doesn’t mean it can’t be used for other purposes. Although the distribution doesn’t ship with much in the way of productivity, it does use Flatpak and the Bazaar app store, which means you can install tons of applications. With a couple of clicks, you can install anything you need (such as office suites, IDEs, browsers, and all points in between – Figure 2).

Screenshot

Figure 2: The Bazaar app store is very easy to use.

I found Bazzite to be a rock-solid distribution for both productivity and creativity.

And then there’s the Btrfs Assistant (Figure 3), where you can manage Btrfs snapshots, which are used for rolling back, should a problem occur.

Screenshot

Figure 3: This tool should be considered a must-use.

The Btrfs Assistant does have a slight learning curve, but once you get up to speed, you’ll be zipping your way through creating and managing snapshots like a pro.

Distroshelf for Containerized Distributions

Another outstanding application included with Bazzite is Distroshelf, which allows you to quickly spin up containerized versions of Linux distributions in the same way you would with GNOME Boxes. All you have to do is download a base image and allow Distroshelf to install it (Figure 4). It does take a bit of time to get a VM up and running, but only because of the large download sizes of the required files. Other than that, it’s as easy to use as it gets.

Screenshot

Figure 4: Distroshelf is a great way to run virtual machines in Bazzite.

All in all, I found Bazzite to be a remarkably solid and fun distribution to use. I would suggest you download an ISO and spin it up as either a virtual machine or on a spare system. I have the utmost confidence that you’ll enjoy the experiences as much as I did.

The post Gaming on Linux? Bazzite Is a Great Place To Start appeared first on The New Stack.

Read the whole story
alvinashcraft
1 minute ago
reply
Pennsylvania, USA
Share this story
Delete

2025’s ‘Advent of Code’ Event Chooses Tradition Over AI

1 Share

Are you ready to code? Today, programmers around the world start counting the hours until Dec. 1, and 2025’s annual “Advent of Code” event.

And this year there’s something new — it’s the first year with some very big changes. After 10 years of tantalizing coders with 25 two-part puzzles each year, event creator Eric Wastl explained what’s changing — and why — in a short, succinct announcement in October.

But what’s equally significant is how the community reacted.

AI may be changing the world of programming, but some human holiday traditions continue. And as hundreds of thousands of coders barrel toward another year of holiday-themed puzzles — like reindeer flying through a North Pole blizzard — it’s nice to see that their shared communal excitement will be seeing yet another year.

Advent of Code 2025: Fewer Puzzles for More Accessibility

There’s no mistaking the widespread fondness for the site. According to its statistics page, 284,977 people solved last year’s first puzzle. (Completing a puzzle awards a “star,” and Wastl has announced that 23,170,305 stars have been awarded since the site’s launch in 2015.) In fact, 779 users have solved every puzzle, every year — earning all 500 stars.

And over the years, more than a million people have collected at least one star.

But speaking last year at the C++ conference CppNorth, Wastl admitted it’s hard to solve all 25 puzzles in a single year. “I try to make the beginning ones easy, and I try to make the later ones hard.”

This means that while 263,746 people solved both parts of last year’s first puzzle, only 17,088 conquered its last puzzle.

Screenshot from Advent of Code site - 2024 puzzle completion statistics

So for 2025, Wastl is reducing the number of puzzles from 25 to 12. “It takes a ton of my free time every year to run Advent of Code,” Wastl wrote in the site’s FAQ, “and building the puzzles accounts for the majority of that time.

“After keeping a consistent schedule for ten years(!), I needed a change.”

Responses on social media have been generally supportive…

Screenshot of BlueSky response to Eric Wastl announcing fewer Advent of Code puzzles in 2025
“Honestly, this makes it easier to participate,” posted one backend services developer. “I was never able to keep doing it every day as the holidays got closer, too much else to do!”

And a Los Angeles-based engineering manager added that the event “is such a gift to all of us, whatever you give us is a blessing! I’m glad you’re able to find a balance that works for you.”

On Reddit, Wastl confirmed that he’s still planning to have two parts for each puzzle (joking that “I reserve the right to some day have a 37-part puzzle!”).

But will this affect the difficulty of the puzzles?

“I’m still calibrating that,” Wastl posted on Reddit. “My hope right now is to have a more condensed version of the 25-day complexity curve, maybe skewed a little to the simpler direction in the middle of the curve? I’d still like something there for everyone, without outpacing beginners too quickly, if I can manage it.”

Global Leaderboards Removed and AI Use Policy

And the event is also officially discontinuing its global leaderboard showing the fastest finishing times, which Wastl writes was “one of the largest sources of stress for me, for the infrastructure, and for many users. … What started as a fun feature in 2015 became an ever-growing problem.”

One issue was “People took things too seriously, going way outside the spirit of the contest; some people even resorted to things like DDoS attacks.” But he seems especially bothered by the way the fastest times from around the world seemed to discourage others about their own programming skills.

One Hachyderm user welcomed the change, saying the leaderboards had brought “a lot of dopamine for a very small amount of people (mayyyybe the top100?) and a lot of dread for everyone else.”

And another user said they welcomed the changes, since they’d found the event “always took too much time towards the end.” (And “I thought the global leaderboard should’ve gone a few years ago anyways, when LLMs started being a thing.”)

In fact, the site’s FAQ list now also specifically tells users that they shouldn’t use AI when solving puzzles. “If you send a friend to the gym on your behalf, would you expect to get stronger…?

“If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.”

The FAQ even addresses users who want to use AI when competing with others on a private leaderboard, urging them to first ask the people running the board about their policies and expectations.

Community Reactions to Advent of Code Changes and AI

Advent of Code merchandise helps support the site’s operating costs (available at the site’s Shop link).

So how did the community react?

As the contest drew nigh, one fan launched a Reddit discussion just to emphasize Wastl’s no-AI admonition — and found others who agreed. “Using AI to do Advent of Code this year is like setting up a checkers board on your own, taking away all the pieces of one color, and declaring you won,” joked one Reddit commenter. “Yeah, nobody’s stopping you from doing it … but what do you get out of it?”

Not everyone agreed. “I am unashamedly going to use AI this year,” one commenter responded, “because I want to learn Golang. I’m planning to get interactive training specifically geared to each puzzle.” And another wrote, “My plan is to use AI to code the input parsing. Always hated that part. After that, I plan to turn it off.”

And ironically, OpenAI bought an ad in the official Advent of Code subreddit. (“Codex gets you up to speed fast with straightforward summaries so you can keep moving. All powered by ChatGPT…”)

But programmer Jeroen Heijmans publishes an unofficial survey of thousands of participants each year, and since 2023 has been asking an additional question. What do you think of AI/LLMs [large language models]?

“I was unprepared for the volume and general need for moderation of these answers,” Heijmans said when announcing 2024’s survey results on Reddit. More than 62% of respondents said they used “zero” AI — roughly the same percentage as in 2023 (when Heijmans first began asking the question).

Multiple answers were allowed, with 31.8% selecting “AI is bad for Advent of Code” (up from the 27.0% in 2023). Additionally, 21.8% selected an even more emphatic option — “AI is horrible for Advent of Code” — a big jump from the 15.4% who selected that answer at the end of 2023.

And 39.2% selected “Not again with AI” — a slight drop from the 40.7% who’d selected that answer in 2023, while 0.6% chose “Don’t know what AI/LLM means” (down from 1.0% in 2023).

Not everyone is avoiding AI: 15.7% of 2024’s survey respondents said they’d used “some” AI when solving the puzzles (with an additional 0.5% saying they’d used “lots” of AI). But this figure was down slightly from 2023’s 14.1% who’d reported using “some” AI and the 0.7% who’d used “lots.”

But in 2024, just 7.6% chose “AI is good for Advent of Code,” with 2.4% choosing “AI is great for Advent of Code” — a slight drop from the people who’d chosen those answers in 2023 (10.8% and 3.6%, respectively).

Programming Trends and the Enduring AoC Tradition

But mostly, the community just seems grateful that their December tradition continues.

This year, one Reddit user even proposed a new challenge — trying to solve all the puzzles without if-then statements or other “flow control” keywords like while loops.

So, what will this year’s event look like? Through the years, the most popular programming language for solving the puzzles — by far — seems to be Python, according to Heijmans’s unofficial survey, with nearly 40% of its respondents saying they used Python in 2024. (And 40% have said the same thing every year since 2018, when his survey began.) Rust has been a consistent second choice, used by more than 16% of participants in each of the last three years.

For the last seven years, more than 30% of solvers use a Linux OS, according to the survey results, while the share of Windows users dropped from 40.5% in 2022 to 35.86% in 2023, and then to 33.239% in 2024. Heijmans notes that the Windows Subsystem for Linux got another 7.2% of responses in 2024, meaning that Linux and WSL together combined for more users than Windows only.

And for the last four years, more than 40% report they used VS Code as their code editor.

Screenshot of Eric Wastl celebrating daylight saving time on Hachyderm

Screenshot of Eric Wastl celebrating daylight saving time on Hachyderm.

And as they count down those final hours until midnight (EST, or 9 p.m. PST), some eager coders may even find themselves reading Eric’s secret greeting in the source code of the contest’s home page.

“A lot of effort went into building this thing,” he tells visitors to his site, adding, “I hope you’re enjoying playing it as much as I enjoyed making it for you!”

The post 2025’s ‘Advent of Code’ Event Chooses Tradition Over AI appeared first on The New Stack.

Read the whole story
alvinashcraft
2 minutes ago
reply
Pennsylvania, USA
Share this story
Delete
Next Page of Stories