Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Python In The Age Of AI


Steam Next Fest games using AI? And Blender 5 is here!


Hello and Welcome, I’m your Code Monkey!

December starts tomorrow! Are you ready for it? This year has flown right by! I've simultaneously done so much and yet there was so much more left that I wanted to do. But let's not get ahead of ourselves, there's still a full month left in the year, so use that time to build something awesome!

I'm currently in the process of doing something I've wanted to do for ages: remastering all my tutorials for Unity 6! It's turning out to be quite a lot more work than I thought, but I'm really looking forward to upgrading the hundreds of tutorials I've made over the years and having them available for you to easily download and learn from!

Oh and if you need anything to make your projects a reality then there are still some Black Friday deals going on, the Asset Store has added some new deals, and I'm running my own Black Friday sale for just a few more days.

  • Game Dev: Next Fest AI Usage; s&box Open Source

  • Tech: Blender 5.0

  • Fun: Guess AI Art


Game Dev

Data about AI usage in Next Fest

AI is everywhere, and people are finding more and more ways to use it, sometimes good and sometimes bad. A while ago Steam introduced an AI disclosure that states on the game page whether the game uses AI and, if so, how it uses it.

Steam Next Fest has just wrapped up, and someone made an extremely detailed report on the games in the fest: how many use AI, and how exactly they use it.

There were about 3000 games included in the October Next Fest, and 507 (17%) of those used AI in some way.

This graph showcases how the AI was used, with the biggest use case being In-Game Art (56%), followed by Marketing (26%), Voice Acting (12%), and Music (11%). However, most of the games that used AI art also made sure to note in the disclosure how the art was curated and refined by humans.

Interestingly, 53% of developers used AI in just one category, while 47% used it in multiple ways, meaning developers are finding AI usable across many different use cases, not just one single thing.

As for code, this is one area that is likely widely underreported since it's not really visible in the final product; according to this study, only 8% say they used AI for coding.

But of course what really matters is the final result. With AI being such a polarizing topic, many people (myself included) have wondered what the result of this disclosure is: do players look at it and actively avoid games with AI disclosures? Or do they not care about it at all? I tried doing some research myself a while ago, and the answer seems to be that players don't care. Just like with assets, all they want are fun games to play.

And a great recent example is ARC Raiders. This is the hot new game that people love; it is out on Steam with the AI disclosure "During the development process, we may use procedural- and AI-based tools to assist with content creation. In all such cases, the final product reflects the creativity and expression of our own development team." and it has Very Positive reviews.

Also on this topic there is a recent hot take by Tim Sweeney (Epic CEO) stating that the AI disclosure on Steam doesn't make sense since in the future AI will be involved in every single game production.

I am always interested in seeing actual data to make better decisions so I really enjoyed this study with a ton of detail. I'd be curious to see a similar study in the February Next Fest to see if the trend is going up or down.


Affiliate

Black Friday CONTINUES! FREE Environment

Unity BLACK FRIDAY Sale is still ongoing! You can see everything on sale right here!

Top assets 50% OFF, and Flash Deals changing every day up to 95% OFF!

I made a video talking about my best recommendations from the sale. Lots of awesome stuff that will help you a lot!

The Publisher of the Week this time is Black Horizon Studios, a publisher of environments and tools.

Get the FREE Ultimate Nature Pack which is a gorgeous environment. It features lots of trees and grass, as well as a snow scene and nice oasis.

Get it HERE and use coupon BLACKHORIZON2025 at checkout to get it for FREE!

There’s a MASSIVE HumbleBundle with thousands of Realistic and Stylized environments at 99% OFF!

Contains both Unity and Unreal assets.

Get it HERE!


Game Dev

s&box Game Engine goes open source!

Have you played Garry's Mod? What about Rust? Those are excellent games that have been massively successful. The company behind them, Facepunch, has been using all that money to build an engine called s&box (read "sandbox") which they have just open sourced!

It really seems we are in a golden age of game engines. There are so many great engines, all of them free and some of them open source. More competition is always great for game devs.

However, the open-source part is the s&box code itself; the engine is actually built on top of Source 2, which is Valve's proprietary engine and is not open source. They describe it as a long-term project meant to take all the best things from Source 1, Unity, and Unreal and put them into one game engine.

You can get s&box from their website, which then lets you download it from Steam; from there it basically works like a platform where you can play games that were built using it.

Looking at the list of games that have already been created, it seems clear that the engine is very capable; there's a lot of variety in game types.

The engine runs on C# 14 with .NET 10, sharing your games is effortless within the engine, multiplayer is baked in, and it includes hot reload, cloud assets, shader graph, visual scripting, and tons more that you'd expect from any modern engine.

There is also a Play Fund where you can earn money when people play your games; they have already paid out $250,000 to developers, so this could be a good reason to explore the engine and try to make some money while the platform is small.

I loved playing Garry's Mod as a kid. I haven't played it since, but looking at videos for this engine really brought back memories. I remember making all kinds of vehicles by just adding rockets to a cart and watching it all fly away. Fun times!



Tech

Blender 5.0 is out!

Blender is one of the heavyweights of the games industry, at least for indie developers. It's impressive that completely free, open-source software is this good.

The massive new 5.0 version has just been fully released! It includes lots of new modifiers for placing objects and modifying meshes, greatly improved UV selection, various sculpting improvements, animation and rigging updates, and changes to geometry nodes, rendering, and more.

It is a massive new version that brings lots of improvements, again whilst keeping the software free and open source. If you use it a lot consider sending a donation their way.

I have tried using Blender myself; I went through a 10-hour course and managed to actually learn the basics, which I was pretty happy with! It is definitely an extremely powerful tool, but one that requires quite a bit of training, just like any other skill such as programming.



Fun

Can you guess AI art?

AI art is everywhere nowadays and it is constantly improving. The days of guessing AI art by looking at fingers are long gone, now it can accurately draw almost anything.

Some people hate AI art and have a tendency to accuse people of using AI to draw something, although in many cases that assumption is incorrect and leads to false accusations that can ruin someone's reputation.

Do you think you can identify which art is AI or not? Here is a website to do just that.

It shows you 50 works of art, and it's up to you to tag each one as AI or not. Go ahead, give it a try.

I got 56% correct, so basically a coin flip. It really is nearly impossible to tell with confidence nowadays, the tools are just too good.




Get Rewards by Sending the Game Dev Report to a friend!

(please don’t try to cheat the system with temp emails, it won’t work, just makes it annoying for me to validate)

Thanks for reading!

Code Monkey


Go from prompt to production using a set of AI tools, or just one (Google Antigravity)


We’ve passed the first phase of AI dev tooling. When I first saw AI-assisted code completion and generation, I was wowed. Still am. Amazing stuff. Then agentic coding environments went a step further. We could generate entire apps with products like Replit or v0! Following that, we all got new types of agentic IDEs, CLIs, background coding agents, and more. With all these options, there isn’t just one way to work with AI in software engineering.

I’m noticing that I’m using AI tools to command (perform actions on my environment or codebase), to code (write or review code), and to conduct (coordinate agents who work on my behalf). Whether these are done via separate tools or the same one, this seems like a paradigm that will persist for a while.

Let’s see this in action. I’ll first do this with a set of popular tools—Google AI Studio, Gemini CLI, Gemini Code Assist, and Jules—and then do the same exercise with the new Google Antigravity agent-first development platform.

Architecture diagram generated with nano-banana

I’ve accepted that I’ll never be a professional baseball player. It’s just not in the cards. But can I use AI to help me pretend that I played? Let’s build an application that uses AI to take an uploaded picture and generate images of that person in various real-life baseball situations.

Build with a set of AI tools

Gemini 3 Pro is excellent at frontend code and Google AI Studio is a fantastic way to get started building my app. I went to the “Build” section where I could provide a natural language prompt to start vibe-coding my baseball app. Here’s an example of “commanding” with AI tools.

Google AI Studio

After a few seconds of thinking, I saw a stash of files created for my application. Then a preview popped up that I could actually interact with.

Vibe coded app in Google AI Studio

Jeez, only one prompt and I have an awesome AI app. How cool is that? The Nano Banana model is just remarkable.

Now I wanted to do more with this app and bring it into my IDE to make some updates before deploying it. In the top right of the screen, there’s a GitHub icon. After I clicked that, I was asked to authenticate with my GitHub account. Next, I had to provide details about which repo to create for this new codebase.

Create GitHub repo from Google AI Studio

Then Google AI Studio showed me all the changes it made in the local repo. I got one last chance to review things before staging and committing the changes.

Push changes to GitHub

A moment later, I had a fully populated GitHub repo. This gave me the intermediate storage I needed to pick up and continue with my IDE and agentic CLI.

Vibe coded app code in my GitHub repo

I jumped into Visual Studio Code with the installed Gemini Code Assist plugin. I’ve also got the Gemini CLI integration set up, so everything is all in one place.

Visual Studio Code with Gemini Code Assist and the Gemini CLI

Here, I can command and code my way to a finished app. I could ask (command) for a summary of the application and how it’s put together. But even more useful, I issued a command asking how this app was authenticating with the Gemini API.

Gemini Code Assist helping me understand the codebase

Very helpful! Notice that it found a config file that shows a mapping from GEMINI_API_KEY (which is the environment variable I need to set) to the API_KEY referred to in code. Good to know.

Here’s where I could continue to code my way through the app with AI assistance if there were specific changes I felt like making ahead of deploying it. I wrote a mix of code (and used the Gemini CLI) to add a Node server to serve this static content and access the environment variable from the runtime.

Let’s do some conducting. I didn’t feel like writing up a whole README and wanted some help from AI. Here’s where Jules comes in, and its extension for the Gemini CLI. Notice that I have Gemini CLI extensions for Jules and Cloud Run already installed.

Two MCP servers added to the Gemini CLI

I can go ahead and ask Jules to create a better README, and then continue on my work. Agents working on my behalf!

Using the Gemini CLI to trigger a background task in Jules

After doing some other work, I came back and checked the status of the Jules job (/jules status) and saw that the task was done. The Jules extension asked me if I wanted a new branch, or to apply the changes locally. I chose the former option and reviewed the PR before merging.

Reviewing a branch with a README updated by Jules

Finally, I was ready to deploy this to Google Cloud Run. Here, I also used a command approach and instructed the Gemini CLI to deploy this app with the help of the extension for Cloud Run.

Using a natural language request from me, the Gemini CLI crafted the correct gcloud CLI command to deploy my app.
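For reference, a source-based deployment command like the one the CLI produced might look roughly like this (the service name, region, project, and environment variable below are placeholders; the post doesn't show the exact flags Gemini chose):

```
gcloud run deploy baseball-app \
  --source . \
  --region us-central1 \
  --project my-project-id \
  --allow-unauthenticated \
  --set-env-vars GEMINI_API_KEY=<your-key>
```

With `--source .`, Cloud Run builds the container from the local directory, and `--set-env-vars` wires the API key into the runtime, which matches how the app reads its key.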

Doing a deployment to Cloud Run from the Gemini CLI

That finished in a few seconds, and I had my vibe-coded app, with some additional changes, deployed and running in Google Cloud.

App running on Google Cloud

So we commanded Google AI Studio to build the fundamentals of the app, used Gemini Code Assist and the Gemini CLI to code and command towards deployment, and Jules to conduct background agents on our behalf. Not particularly difficult, and the handoffs via a Git repo worked well.

This process works great if you have distinct roles with handoffs (designer -> developer -> deployment team) or want to use distinct products at each stage.

Build with Google Antigravity

Google Antigravity isn’t a code editor. It’s not an IDE. It’s something more. Yes, you can edit code and do classic IDE things. What’s different is that it’s agent-first, and supports a rich set of surfaces in a single experience. I can kick off a series of agents to do work, trigger Computer Use in a dedicated browser, and extend behavior through MCP servers. Basically, I can do everything I did above, but within a single experience.

Starting point with Google Antigravity

I fed it the same prompt I gave to Google AI Studio. Immediately, Google Antigravity got to work building an implementation plan.

Giving a prompt to Antigravity to build out an application

I love that I can review this implementation plan and add comments to sections I want to update. This feels like a very natural way to iterate on the specification. Right away, I asked for a Node server to host this app, so it’s built that way from the start.

Implementation Plan, with comments

The AI agent recognizes my comments and refreshes its plans.

Antigravity using the Implementation Plan to begin its work

At this point, the agent is rolling. It built out the entire project structure, created all the code files, and plowed through its task list. Yes, it creates and maintains a task list so we can track what’s going on.

Task List maintained by Antigravity

The “Agent Manager” interface is wild. From here I can see my inbox of agent tasks, and monitor what my agents are currently doing. This one is running shell commands.

Agent Manager view for triggering and managing agent work

The little “drawer” at the bottom of the main chat window also keeps tabs on what’s going on across all the various agents. Here I could see what docs need my attention, which processes are running (e.g. web servers), and which artifacts are part of the current conversation.

View of processes, documents, and conversation artifacts

The whole app-building process finished in just a few minutes. It looked good! And because Google Antigravity has built-in support for Computer Use with a Chrome browser, it launched a browser instance and showed me how the app worked. I can also prompt Computer Use interactions any time via chat.

Computer Use driving the finished application

Antigravity saved the steps it followed into an artifact called Walkthrough. Including a screenshot!

Generated walkthrough including screenshots

How about fixing the README? In the previous example, I threw that to a background task in Jules. I could still do that here, but Antigravity is also adept at doing asynchronous work. I went into the Agent Manager and asked for a clean README with screenshots and diagrams. Then I closed Agent Manager and did some other things. Never breaking flow!

Triggering a background agent to update the README

Later, I noticed that the work was completed. The Agent Manager showed me what it did, and gave me a preview of the finished README. Nice job.

Finished README with diagrams and screenshots

I wanted to see the whole process through, so how about using Google Antigravity to deploy this final app to Google Cloud Run?

This product also supports extension via MCP. During this product preview, it comes with a couple dozen MCP servers in the “MCP Store.” These include ones for Google products, Figma, GitHub, Stripe, Notion, Supabase, and more.

MCP servers available out of the box

We don’t yet include one for Cloud Run, but I can add it myself. The “manage MCP servers” screen is empty to start, but it shows you the format you need to add to the configuration file. I added the configuration for the local Cloud Run MCP server.
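For illustration, a local MCP server entry in that configuration file generally looks something like this (the server name, command, and package below are assumptions based on common MCP setups, not the exact Cloud Run entry):

```json
{
  "mcpServers": {
    "cloud-run": {
      "command": "npx",
      "args": ["-y", "cloud-run-mcp"],
      "env": {
        "GOOGLE_CLOUD_PROJECT": "my-project-id"
      }
    }
  }
}
```

The key point is that each entry tells the client how to launch the server process locally; once saved, that server's tools show up alongside the built-in ones.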

Configuration for the Cloud Run MCP server

After saving that configuration, I refreshed the “manage MCP servers” screen and saw all the tools at my disposal.

Tools available from the Cloud Run MCP server

Sweet! I went back to the chat window and asked Google Antigravity to deploy this app to Cloud Run.

Antigravity deploying the app to Google Cloud Run

The first time, the deployment failed, but Google Antigravity picked up the error and updated the app to start on the proper port and to tweak how it handled wildcard paths. It then redeployed, and the app worked.

Chat transcript of attempt to deploy to Google Cloud Run

Fantastic. Sure enough, browsing the URL showed my app running and working flawlessly. Without a doubt, this would have been hours or days of work for me. Especially on the frontend stuff since I’m terrible at it. Instead, the whole process took less than an hour.

Finished application running in Google Cloud Run

I’m very impressed! For at least the next few years, software engineering will likely include a mix of commands, coding, and conducting. As I showed you here, you can do that with distinct tools that enable distinct stages and offer one or more of those paradigms. Products like Google Antigravity offer a fresh perspective, and make it possible to design, build, optimize, and deploy all from one product. And I can now seamlessly issue commands, write code, and conduct agents without ever breaking flow. Pretty awesome.




Defender for AI services: Threat Protection and AI red team workshop


Generative AI is reshaping how enterprises operate, introducing new efficiencies and new risks. Imagine launching a helpful chatbot, only to learn a cleverly crafted prompt can bypass safety controls and exfiltrate sensitive data. This is today’s reality: every system prompt, plugin/tool, dataset, fine‑tune, or orchestration step can change the attack surface. This stems from the non-deterministic way LLMs craft responses: the slightest change in a prompt’s verbiage or tone can alter the output in subtle, unpredictable ways, especially when your data is involved.

This post shows how to operationalize AI red teaming with Microsoft Defender for AI services so security teams gain evidence‑backed visibility into adversarial behavior and turn that visibility into daily defense. By aligning with Microsoft’s Responsible AI principles of transparency, accountability, and continuous improvement, we demonstrate a pragmatic, repeatable loop that makes AI safer week after week. Crucially, security needs to have a seat at the table across the AI app lifecycle from model selection and pilot to production and ongoing updates.

Who Should Read This (and What You’ll See)

  • SOC analysts & incident responders - See how AI signals materialize as high‑fidelity alerts (prompt evidence, URL intel, identity context) in Defender for Cloud and Defender XDR for fast triage and correlation.
  • AI/ML engineers - Validate model safety with controlled simulations (PyRIT‑informed strategies) and understand which filters/guardrails move the needle.
  • Security architects - Integrate Microsoft Defender for AI services into your cloud security program; codify improvements as policy, IaC, and identity hygiene.
  • Red teamers/researchers - Run structured, repeatable adversarial tests that produce measurable outcomes the org can act on.

Why now? Data leakage, prompt injection, jailbreaks, and endpoint abuse are among the fastest‑growing threats to AI systems. With AI red teaming and Microsoft Defender for AI services, you catch intent before impact and translate insight into durable controls.

What’s Different About the AI Attack Surface

New risks sit alongside the traditional ones:

  • Prompts & responses — Susceptible to prompt injection and jailbreak attempts (rule change, role‑play, encoding/obfuscation).
  • User & application context — Missing context slows investigations and blurs accountability.
  • Model endpoints & identities — Static keys and weak identity practices increase credential theft and scripted probing risk.
  • Attached data (RAG/fine‑tuning) — Indirect prompt injection via documents or data sources.
  • Orchestration layers/agents — Tool invocation abuse, unintended actions, or “over‑permissive” chains.
  • Content & safety filters — Configuration drift or silent loosening erodes protection.

A key theme across these risks is context propagation: the way user identity, application parameters, and environmental signals travel with each prompt and response. When context is preserved and surfaced in security alerts, SOC teams can quickly correlate incidents, trace attack paths, and remediate threats with precision. Effective context propagation transforms raw signals into actionable intelligence, making investigations faster and more accurate.
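As a minimal sketch of what context propagation can look like in practice, the helper below attaches identity fields to a request body before it goes to a model endpoint. The field names mirror the terminology used in this post (end-user ID, source IP, application name), but the exact wire format here is an assumption, not the documented API:

```python
# Hedged sketch: attach user/application context to a model request so
# downstream alerts can surface it. Field names follow this post's
# terminology; the exact schema is an assumption.

def with_security_context(request_body: dict, end_user_id: str,
                          source_ip: str, app_name: str) -> dict:
    """Return a copy of the request body enriched with identity context."""
    enriched = dict(request_body)
    enriched["user_security_context"] = {
        "end_user_id": end_user_id,
        "source_ip": source_ip,
        "application_name": app_name,
    }
    return enriched

body = with_security_context(
    {"messages": [{"role": "user", "content": "hello"}]},
    end_user_id="user-123", source_ip="203.0.113.7", app_name="support-bot",
)
```

When these fields ride along with every call, an alert on a malicious prompt can immediately answer who sent it, from where, and through which application.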

Microsoft Defender for AI services adds a real‑time protection layer across this surface by combining Prompt Shields, activity monitoring, and Microsoft threat intelligence to produce high‑fidelity alerts you can operationalize.

The Improvement Loop (Responsible AI in Practice)

Responsible AI comes to life when teams Observe → Correlate → Remediate → Retest → Codify:

  1. Observe controlled jailbreak/phishing/automation patterns and collect prompt evidence.
  2. Correlate with identity, network, and prior incidents in Defender XDR.
  3. Remediate with the smallest effective control (filters, identities, rate limits, data scoping).
  4. Retest the same scenario to verify risk reduction.
  5. Codify as baseline (policy, IaC template, guardrail profile, rotation notes).

Repeat this rhythm on a schedule and you’ll build durable posture faster than a one‑time “big‑bang” control set.

Prerequisites:

To take advantage of this workshop you’ll need:

  1. Sandbox subscription (ideally inside a Sandbox Management Group with lighter policies); you can also use a free-trial Azure subscription.
  2. Microsoft Defender for AI services plan enabled (see the Participant Guide)
  3. Contributor access (you can deploy + view alerts)
  4. Region capacity confirmed (Azure AI Foundry in East US 2)

Workshop flow and testing:

Prep: Enable the Microsoft Defender for AI services plan with prompt evidence, deploy the Azure Template (one hub + single endpoint), and open the AIRT-Eval.ipynb notebook; you now have a controlled space to generate signals (see the Participant Guide).

Controlled Signals: Trigger a jailbreak attempt, a phishing URL simulation, and a suspicious user agent simulation to produce three distinct alert types.

Triage & Correlate: For each alert, review anatomy (evidence, severity, IDs) and capture prompt/URL evidence.

Harden & Retest: Apply improvements or security controls, then validate fixes.

After you harden controls and retest, the next step is validating that your defenses trigger the right alerts on demand. There is a list of Microsoft Defender for AI services alerts here. To evaluate alerts, open DfAI‑Eval.ipynb - a streamlined notebook that safely simulates adversarial activity (current alerts: jailbreak, phishing URL, suspicious user agent) to exercise Microsoft Defender for AI services detections. Think of it as the EICAR test for AI workloads: consistent, repeatable, and safe. 
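To make the simulation idea concrete, here is a minimal sketch of how a suspicious-user-agent probe could be assembled; the endpoint, key, and user-agent string are illustrative placeholders, not the notebook's actual code:

```python
# Hedged sketch: assemble (without sending) a request that mimics
# scripted probing of an AI endpoint. All values are placeholders.

SCANNER_UA = "python-urllib/0.0 headless-scanner"  # generic automation signature

def build_probe(endpoint: str, api_key: str, prompt: str) -> dict:
    """Return the pieces of a probe request carrying a suspicious user agent."""
    return {
        "url": endpoint,
        "headers": {
            "api-key": api_key,
            "User-Agent": SCANNER_UA,
            "Content-Type": "application/json",
        },
        "json": {"messages": [{"role": "user", "content": prompt}]},
    }

# Sending this repeatedly at a fixed cadence is the kind of pattern
# (library signature + automation rhythm) the alert is designed to catch.
probe = build_probe("https://<your-endpoint>/chat/completions", "<key>", "test")
```

The value of a scripted simulation like this is repeatability: the same probe, run after each hardening step, tells you whether the detection still fires.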

Next, we will review and break down each of the alerts you’ll generate in the workshop and how to read them effectively.

Anatomy of a Jailbreak from the AI Red Team Agent:

A jailbreak is a user prompt designed to sidestep system or safety instructions—rule‑change (“ignore previous rules”), fake embedded conversation, role‑play as an unrestricted persona, or encoding tricks. Microsoft Defender for AI services (via Prompt Shields + threat intelligence) flags it before unsafe output (“left‑of‑boom”) and publishes a correlated high‑fidelity alert into Defender XDR for cross‑signal investigation.

Anatomy of a Phishing URL involved in an attack:

Phishing prompt URL alerts fire when a prompt or draft response embeds domains linked to impersonation, homoglyph tricks, newly registered infrastructure, encoded redirects, or reputation‑flagged hosting. Microsoft Defender for AI services enriches the URL (normalization, age, reputation, brand similarity) and—if prompt evidence is enabled—includes the exact snippet, then streams the alert into Defender XDR where end‑user/application context fields (e.g. `EndUserId`, `SourceIP`) let analysts correlate repeated lure attempts and pivot to related credential or jailbreak activity.

Anatomy of a Suspicious User Agent involved in an attack:

Suspicious user agent alerts highlight enumeration or automation patterns (generic library signatures, headless runners, scanner strings, cadence anomalies) tied to AI endpoint usage and identity context. Microsoft Defender for AI services scores the anomaly and forwards it to Defender XDR enriched with optional `UserSecurityContext` (IP, user ID, application name) so analysts can correlate rapid probing with concurrent jailbreak or phishing alerts and enforce mitigations like managed identity, rate limits, or user agent filtering.

Conclusion

The goal of this red teaming and AI threat workshop, for all of its different attendees, is to catch intent before impact: prompt manipulation before unsafe output, phishing infrastructure before credential loss, and scripted probing before exfiltration. Microsoft Defender for AI services feeding Defender XDR enables a compact improvement loop that converts red team findings into operational guardrails.

Within weeks, this cadence transforms AI from an experimental liability into a governed, monitored asset aligned with your cloud security program. Incrementally closing gaps in context propagation, identity hygiene, and Prompt Shields and filter tuning builds durable posture. Small, focused cycles win: ship one improvement, measure its impact, promote it to baseline, and repeat.


Common Misconceptions When Running Locally vs. Deploying to Azure Linux-based Web Apps


TOC

  1. Introduction
  2. Environment Variable
  3. Build Time
    1. Compatible
    2. Memory
  4. Conclusion

1. Introduction

One of the most common issues during project development is the scenario where “the application runs perfectly in the local environment but fails after being deployed to Azure.”


In most cases, deployment logs will clearly reveal the problem and allow you to fix it quickly.
However, there are also more complicated situations where, due to the nature of the error itself, the relevant logs may be difficult to locate.

This article introduces several common categories of such problems and explains how to troubleshoot them.
We will demonstrate them using Python and popular AI-related packages, as these tend to exhibit compatibility-related behavior.

Before you begin, it is recommended that you read Deployment and Build from Azure Linux based Web App | Microsoft Community Hub on how Azure Linux-based Web Apps perform deployments so you have a basic understanding of the build process.

 

2. Environment Variable

Simulating a Local Flask + sklearn Project

First, let’s simulate a minimal Flask + sklearn project in any local environment (VS Code in this example).

For simplicity, the sample code does not actually use any sklearn functions; it only displays plain text.

 

app.py

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello deploy environment variable"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

 

We also preset the environment variables required during Azure deployment, although these will not be used when running locally.

 

.deployment

[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false

 

As you may know, the old package name sklearn has long been deprecated in favor of scikit-learn.
However, for the purpose of simulating a compatibility error, we will intentionally specify the outdated package name.

 

requirements.txt

Flask==3.1.0
gunicorn==23.0.0
sklearn

 

After running the project locally, you can open a browser and navigate to the target URL to verify the result.

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python app.py

 

Of course, you may encounter the same compatibility issue even in your local environment.
Simply running the following command resolves it:

export SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True

 

We will revisit this error and its solution shortly.
For now, create a Linux Web App running Python 3.12 and configure the following environment variables.
We will define Oryx Build as the deployment method.

SCM_DO_BUILD_DURING_DEPLOYMENT=false
WEBSITE_RUN_FROM_PACKAGE=false
ENABLE_ORYX_BUILD=true

 

After deploying the code and checking the Deployment Center, you should see an error similar to the following.

 

From the detailed error message, the cause is clear:
sklearn is deprecated and replaced by scikit-learn, so additional compatibility handling is now required by the Python runtime.

The error message suggests the following solutions:

  1. Install the newer scikit-learn package directly.

  2. If your project is deeply coupled to the old sklearn package and cannot be refactored yet, enable compatibility by setting an environment variable to allow installation of the deprecated package.

Typically, this type of “works locally but fails on Azure” behavior happens because the deprecated package was installed in the local environment a long time ago at the start of the project, and everything has been running smoothly since.
Package compatibility issues like this are very common across various languages on Linux.

 

When a project becomes tightly coupled to an outdated package, you may not be able to upgrade it immediately.
In these cases, compatibility workarounds are often the only practical short-term solution.
In our example, we will add the environment variable:

SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True

 

However, here is the real problem:
This variable is needed during the build phase, but environment variables set in the Azure Portal’s Application Settings only take effect at runtime. So what should we do?

 

The answer is simple: shift the Oryx build process from build time to runtime.

 

First, open Azure Portal → Configuration and disable Oryx Build.

ENABLE_ORYX_BUILD=false

 

Next, modify the project by adding a startup script.

run.sh

#!/bin/bash
export SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python app.py


The startup script works just like the commands you run locally before executing the application.
The difference is that you can inject the necessary compatibility environment variables before running pip install or starting the app.
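The reason this works is ordinary process inheritance: a variable exported in the shell before `pip install` is inherited by every child process the script launches, including pip itself. A minimal Python sketch of that inheritance, reusing the variable name from the sklearn example above:

```python
import os
import subprocess
import sys

# Export the compatibility flag in the parent process, just as run.sh does.
os.environ["SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL"] = "True"

# Any child process (pip, or here a child Python) sees the inherited variable.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # → True
```

This is why the export line must come before `pip install` in run.sh: an export added afterward would be too late for the install step.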

 

After that, return to Azure Portal and add the following Startup Command under Stack Settings.
This ensures that your compatibility environment variables and build steps run before the runtime starts.

bash run.sh

 

Your overall project structure will now look like this.
Once redeployed, everything should work correctly.

 

3. Build Time

Build-Time Errors Caused by AI-Related Packages

Many build-time failures are caused by AI-related packages, whose installation processes can be extremely time-consuming.
You can investigate these issues by reviewing the deployment logs at the following maintenance URL:

https://<YOUR_APP_NAME>.scm.azurewebsites.net/newui

 

Compatibility

Let’s simulate a Flask + numpy project.

The code is shown below.

app.py

from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello deploy compatible"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

 

We reuse the same environment variables from the sklearn example.

.deployment

[config]
SCM_DO_BUILD_DURING_DEPLOYMENT=false

 

This time, we simulate the incompatibility between numpy==1.21.0 and Python 3.10.

requirements.txt

Flask==3.1.0
gunicorn==23.0.0
numpy==1.21.0

 

We will skip the local execution part and move directly to creating a Linux Web App running Python 3.10.
Configure the same environment variables as before, and define the deployment method as runtime build.

SCM_DO_BUILD_DURING_DEPLOYMENT=false
WEBSITE_RUN_FROM_PACKAGE=false
ENABLE_ORYX_BUILD=false

 

After deployment, Deployment Center shows a successful publish.

 

However, the actual website displays an error.

 

At this point, you must check the deployment log files mentioned earlier.
You will find two key logs:

1. docker.log

  • Displays real-time logs of the platform creating and starting the container.

  • In this case, you will see that the health probe exceeded the default 230-second startup window, causing container startup failure.

  • This tells us the root cause is container startup timeout.

    To determine why it timed out, we must inspect the second file.

2. default_docker.log

  • Contains the internal execution logs of the container.

  • Not generated in real time, usually delayed around 15 minutes.

  • Therefore, if docker.log shows a timeout error, wait at least 15 minutes to allow the logs to be written here.

 

In this example, the internal log shows that numpy was being compiled during pip install, and the compilation step took too long.
We now have a concrete diagnosis: numpy 1.21.0 is not compatible with Python 3.10, which forces pip to compile from source. 

 

The compilation exceeds the platform’s startup time limit (230 seconds) and causes the container to fail.

We can verify this by checking numpy’s official site:

numpy · PyPI


numpy 1.21.0 only provides wheels for cp37, cp38, and cp39, but not cp310 (the tag for Python 3.10).
Thus, compilation becomes unavoidable.
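You can check this without visiting PyPI: a wheel's filename encodes the interpreter tags it supports, in the form `{name}-{version}-{python tag}-{abi tag}-{platform}.whl`. A small illustrative helper (not part of pip; the filenames below are modeled on the numpy 1.21.0 release, not copied from it):

```python
def wheel_supports(wheel_filename: str, cp_tag: str) -> bool:
    """Return True if the wheel's Python-tag field includes cp_tag.

    Wheel filenames follow: {name}-{version}-{python tag}-{abi tag}-{platform}.whl
    The Python-tag field may contain several tags joined with dots.
    """
    python_tag = wheel_filename[:-len(".whl")].split("-")[-3]
    return cp_tag in python_tag.split(".")

# Illustrative filenames modeled on the numpy 1.21.0 wheels:
print(wheel_supports("numpy-1.21.0-cp39-cp39-manylinux2014_x86_64.whl", "cp39"))   # → True
print(wheel_supports("numpy-1.21.0-cp39-cp39-manylinux2014_x86_64.whl", "cp310"))  # → False
```

When no wheel matches the running interpreter, pip falls back to the sdist and compiles from source, which is exactly the slow path that hits the startup timeout here.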

 

Possible Solutions

  1. Set the environment variable

    WEBSITES_CONTAINER_START_TIME_LIMIT

    to increase the allowed container startup time.

  2. Downgrade Python to 3.9 or earlier.

  3. Upgrade numpy to a newer release that ships prebuilt wheels for Python 3.10.

    In this example, we choose this option.

 

After upgrading numpy to version 1.25.0 (which provides Python 3.10 wheels) in requirements.txt and redeploying, the issue is resolved.

numpy · PyPI

 

requirements.txt

Flask==3.1.0
gunicorn==23.0.0
numpy==1.25.0

 

 

Memory

The final example concerns the App Service SKU.
AI packages such as Streamlit, PyTorch, and others require significant memory.
Any one of these packages may cause the build process to fail due to insufficient memory.
The error messages vary widely each time.

If you repeatedly encounter unexplained build failures, check Deployment Center or default_docker.log for Exit Code 137, which indicates that the system ran out of memory during the build.

The only solution in such cases is to scale up.
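A quick way to confirm this diagnosis is to search the log for the exit code. A minimal sketch, where the log path and contents are fabricated for illustration (on App Service the real file is default_docker.log, retrieved via the maintenance URL shown earlier):

```shell
# Simulate a log line, then grep for the out-of-memory signal.
printf 'Container exited with Exit Code 137\n' > /tmp/default_docker.log

if grep -q 'Exit Code 137' /tmp/default_docker.log; then
  echo 'Build was killed for lack of memory: scale up the App Service plan'
fi
```

Scaling up itself is done from the plan's "Scale up (App Service plan)" blade in the portal, or with the Azure CLI via `az appservice plan update --sku`.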

 

4. Conclusion

This article introduced several common troubleshooting techniques for resolving Linux Web App issues caused during the build stage.
Most of these problems relate to package compatibility, although the symptoms may vary greatly.
By understanding the debugging process demonstrated in these examples, you will be better prepared to diagnose and resolve similar issues in future deployments.

 

 

How Big a Deal is the USA's AI Genesis Mission?

From: AIDailyBrief
Duration: 8:38
Views: 822

The Genesis Mission establishes a Manhattan Project–scale national AI science program to centralize federal datasets, train scientific foundation models, and build a closed-loop AI experimentation platform using DOE supercomputers. Amazon pledged up to $50 billion to expand AWS supercomputing capacity for classified and unclassified government AI work, and Meta is exploring installing Google's TPUs as chip competition with NVIDIA intensifies. OpenAI CEO Sam Altman hinted at a consumer AI device built around total contextual awareness with a potential two-year timeline while keeping features deliberately vague.

