Azure AI Foundry offers a unified platform that streamlines enterprise AI operations, model development, and application deployment. It brings together robust infrastructure and intuitive tools, empowering organizations to confidently build and manage AI-driven solutions. Azure AI Foundry is built around two components: the hub and the project. The hub serves as the central development environment: with hub access, users can configure infrastructure, create additional hubs, and launch new projects. Each project resides within a hub and can have its own set of permissions and allocated resources.
The hub includes an Azure AI Services resource, which provides access to Microsoft-maintained base models. A storage account holds uploaded user data, stored credentials, and generated artifacts such as logs. For the credential store there are two options. The first is Azure Key Vault, used to store secrets and other sensitive information needed by the AI hub; users may create a new Azure Key Vault resource or select an existing one in their subscription. The second option, currently in preview, is the Microsoft-managed credential store. The lifecycle of secret data follows that of the hub, its projects, connections, and compute.
This blog focuses on deploying the above-mentioned core components—hub, project, AI Services, Key Vault, and Storage—using Infrastructure-as-Code with Bicep, automated through a GitHub workflow.
Repository Structure and Setup
To maintain modularity and clarity, the project is organized as follows:
- The main.bicep file orchestrates the deployment.
- The *.main.bicepparam files define environment-specific values.
To enable consistent and environment-specific deployments, separate .bicepparam files were created for development, staging, and production environments. These files reference the main Bicep template and supply values such as resource name prefixes, suffixes, tags, user identities, and model deployment configurations. This structure promotes reusability and clarity, allowing each environment to be provisioned with tailored configurations—such as different AI models or capacity settings—without modifying the core template.
using './main.bicep'

param prefix = 'foundry'
param suffix = 'dev01'
param userObjectId = ''
param keyVaultEnablePurgeProtection = false
param openAiDeployments = [
  {
    model: {
      name: 'gpt-4o'
      version: '2024-05-13'
    }
    sku: {
      name: 'GlobalStandard'
      capacity: 10
    }
  }
]
param tags = {
  environment: 'development'
  iac: 'bicep'
}
- The modules/ folder holds logically separated Bicep modules for key Azure AI Foundry components.
Modular Bicep Architecture
Each module (such as keyVault.bicep and aiServices.bicep) accepts clearly defined parameters and follows best practices such as:
- Conditional naming using prefix and suffix
- Role-based access assignment using userObjectId
- Secure and policy-compliant resource configuration (e.g., Key Vault purge protection)
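As an illustration, the conditional naming pattern can be sketched as follows; the parameter names mirror the conventions above, but the exact names and defaults in the repository's modules may differ:

```bicep
// Hypothetical sketch of the conditional naming convention; the actual
// modules may use different parameter names and defaults.
param prefix string = 'foundry'
param suffix string = 'dev01'
param name string = ''

// Use the explicit name when provided, otherwise derive one from prefix/suffix
var keyVaultName = !empty(name) ? name : toLower('${prefix}-kv-${suffix}')
```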
The main.bicep file automates the deployment of an Azure AI Foundry environment using modular Bicep templates. It provisions:
- An AI Foundry Hub
- An Azure AI Services resource along with model deployments
- A Project within that Hub
- A Key Vault
- A Storage Account
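A simplified sketch of how main.bicep might wire these modules together is shown below; the module file names, parameters, and output names are illustrative and may not match the repository exactly:

```bicep
// Illustrative wiring only; the actual modules accept richer parameter sets.
module keyVault 'modules/keyVault.bicep' = {
  name: 'keyVault'
  params: {
    name: toLower('${prefix}-kv-${suffix}')
    location: location
    tags: tags
  }
}

module storageAccount 'modules/storageAccount.bicep' = {
  name: 'storageAccount'
  params: {
    name: toLower('${prefix}st${suffix}')
    location: location
    tags: tags
  }
}

module aiServices 'modules/aiServices.bicep' = {
  name: 'aiServices'
  params: {
    name: toLower('${prefix}-ais-${suffix}')
    location: location
    tags: tags
    deployments: openAiDeployments
  }
}

// The hub consumes the outputs of the dependent resources
module hub 'modules/hub.bicep' = {
  name: 'hub'
  params: {
    name: toLower('${prefix}-hub-${suffix}')
    location: location
    tags: tags
    keyVaultId: keyVault.outputs.id
    storageAccountId: storageAccount.outputs.id
    aiServicesName: aiServices.outputs.name
  }
}

// The project is linked to its parent hub
module project 'modules/project.bicep' = {
  name: 'project'
  params: {
    name: toLower('${prefix}-prj-${suffix}')
    location: location
    tags: tags
    hubId: hub.outputs.id
  }
}
```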
The hub Bicep module primarily defines the Azure AI Foundry hub resource and its configuration, along with a connection block that securely links Azure AI Services using the specified authentication method. The resource blocks are shown in the code snippet below.
// Resources
resource aiServices 'Microsoft.CognitiveServices/accounts@2024-04-01-preview' existing = {
  name: aiServicesName
}

resource hub 'Microsoft.MachineLearningServices/workspaces@2024-04-01-preview' = {
  name: name
  location: location
  tags: tags
  sku: {
    name: skuName
    tier: skuTier
  }
  kind: 'Hub'
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    // organization
    friendlyName: friendlyName
    description: description_
    managedNetwork: {
      isolationMode: isolationMode
    }
    publicNetworkAccess: publicNetworkAccess
    // dependent resources
    keyVault: keyVaultId
    storageAccount: storageAccountId
    systemDatastoresAuthMode: systemDatastoresAuthMode
  }

  resource aiServicesConnection 'connections@2024-01-01-preview' = {
    name: !empty(aiServicesConnectionName) ? aiServicesConnectionName : toLower('${aiServices.name}-connection')
    properties: {
      category: 'AIServices'
      target: aiServices.properties.endpoint
      #disable-next-line BCP225
      authType: connectionAuthType
      isSharedToAll: true
      metadata: {
        ApiType: 'Azure'
        ResourceId: aiServices.id
      }
      credentials: connectionAuthType == 'ApiKey'
        ? {
            key: aiServices.listKeys().key1
          }
        : null
    }
  }
}
The AI Services Bicep module deploys an Azure AI Services (OpenAI) account with a specified SKU, identity, access settings, and an optional custom subdomain name to manage secure access and configuration. It also iterates over the provided deployment list to create OpenAI model deployments (such as GPT or embedding models) under the parent aiServices resource. The resource blocks are shown in the code snippet below.
// Resources
resource aiServices 'Microsoft.CognitiveServices/accounts@2024-04-01-preview' = {
  name: name
  location: location
  sku: sku
  kind: 'AIServices'
  identity: identity
  tags: tags
  properties: {
    customSubDomainName: customSubDomainName
    disableLocalAuth: disableLocalAuth
    publicNetworkAccess: publicNetworkAccess
  }
}

@batchSize(1)
resource model 'Microsoft.CognitiveServices/accounts/deployments@2023-05-01' = [
  for deployment in deployments: {
    name: deployment.model.name
    parent: aiServices
    sku: {
      capacity: deployment.sku.capacity ?? 100
      name: empty(deployment.sku.name) ? 'Standard' : deployment.sku.name
    }
    properties: {
      model: {
        format: 'OpenAI'
        name: deployment.model.name
        version: deployment.model.version
      }
    }
  }
]
The project Bicep module creates an Azure Machine Learning project workspace within a specified AI hub. It sets properties such as public network access and a system-assigned identity, and links the project to its parent hub via hubResourceId. The resource blocks are shown in the code snippet below.
// Resources
resource project 'Microsoft.MachineLearningServices/workspaces@2024-04-01-preview' = {
  name: name
  location: location
  tags: tags
  kind: 'Project'
  sku: {
    name: 'Basic'
    tier: 'Basic'
  }
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    friendlyName: friendlyName
    hbiWorkspace: false
    v1LegacyMode: false
    publicNetworkAccess: publicNetworkAccess
    hubResourceId: hubId
    systemDatastoresAuthMode: 'identity'
  }
}
The storage Bicep module provisions a secure Azure Storage Account with configurable containers and access settings, enabling storage of AI-related data and logs while enforcing encryption and network policies.
// Resources
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' = {
  name: name
  location: location
  tags: tags
  sku: {
    name: skuName
  }
  kind: 'StorageV2'

  // Containers live inside of a blob service
  resource blobService 'blobServices' = {
    name: 'default'

    // Creating containers with provided names if condition is true
    resource containers 'containers' = [
      for containerName in containerNames: if (createContainers) {
        name: containerName
        properties: {
          publicAccess: 'None'
        }
      }
    ]
  }

  properties: {
    accessTier: accessTier
    allowBlobPublicAccess: allowBlobPublicAccess
    allowCrossTenantReplication: allowCrossTenantReplication
    allowSharedKeyAccess: allowSharedKeyAccess
    encryption: {
      keySource: 'Microsoft.Storage'
      requireInfrastructureEncryption: false
      services: {
        blob: {
          enabled: true
          keyType: 'Account'
        }
        file: {
          enabled: true
          keyType: 'Account'
        }
        queue: {
          enabled: true
          keyType: 'Service'
        }
        table: {
          enabled: true
          keyType: 'Service'
        }
      }
    }
    isHnsEnabled: false
    isNfsV3Enabled: false
    keyPolicy: {
      keyExpirationPeriodInDays: 7
    }
    largeFileSharesState: 'Disabled'
    minimumTlsVersion: minimumTlsVersion
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: networkAclsDefaultAction
    }
    publicNetworkAccess: allowStorageAccountPublicAccess
    supportsHttpsTrafficOnly: supportsHttpsTrafficOnly
  }
}
The Key Vault Bicep module deploys an Azure Key Vault configured for RBAC and soft delete, allowing secure storage and lifecycle management of secrets and credentials used across the AI Foundry deployment.
// Resources
resource keyVault 'Microsoft.KeyVault/vaults@2023-07-01' = {
  name: name
  location: location
  tags: tags
  properties: {
    createMode: 'default'
    sku: {
      family: 'A'
      name: skuName
    }
    tenantId: tenantId
    networkAcls: {
      bypass: 'AzureServices'
      defaultAction: networkAclsDefaultAction
    }
    enabledForDeployment: enabledForDeployment
    enabledForDiskEncryption: enabledForDiskEncryption
    enabledForTemplateDeployment: enabledForTemplateDeployment
    enablePurgeProtection: enablePurgeProtection ? enablePurgeProtection : null
    enableRbacAuthorization: enableRbacAuthorization
    enableSoftDelete: enableSoftDelete
    softDeleteRetentionInDays: softDeleteRetentionInDays
  }
}
To ensure secure access and fine-grained control over the deployed resources, each Bicep module includes role definitions and corresponding role assignments. These are scoped appropriately to grant required permissions either to users or managed identities, such as enabling access to AI services, storage accounts, key vaults, and project workspaces. This setup aligns with the principle of least privilege and supports operational scenarios like data access, deployment actions, or inference execution within Azure AI Foundry.
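For example, a role assignment granting the hub's system-assigned identity data access to the storage account could look like the following sketch. The chosen role, parameter names, and scoping are illustrative; the repository's modules define their own set of assignments.

```bicep
// Illustrative role assignment; the actual modules define their own roles and scopes.
param storageAccountName string
param hubPrincipalId string // object ID of the hub's system-assigned identity

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: storageAccountName
}

// Built-in "Storage Blob Data Contributor" role definition
var storageBlobDataContributorRoleId = subscriptionResourceId(
  'Microsoft.Authorization/roleDefinitions',
  'ba92f5b4-2d11-453d-a403-e96b0029c9fe'
)

// Deterministic name via guid() keeps the assignment idempotent across deployments
resource blobDataContributor 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storageAccount.id, hubPrincipalId, storageBlobDataContributorRoleId)
  scope: storageAccount
  properties: {
    roleDefinitionId: storageBlobDataContributorRoleId
    principalId: hubPrincipalId
    principalType: 'ServicePrincipal'
  }
}
```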
This structure of the bicep code promotes clean separation of concerns, enabling easy reuse and scalable growth as your AI infrastructure expands.
Workflow Breakdown: Automating Deployment with GitHub Actions
The GitHub Actions workflow is designed to be environment-agnostic, triggered manually via
workflow_dispatch with selectable environments (development, staging, or production).
# Manual trigger with environment selection
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Select the environment to deploy"
        required: true
        default: development
        type: choice
        options:
          - development
          - staging
          - production
Manual Trigger with Environment Selection
GitHub Actions allows for flexible workflow execution using manual triggers via the workflow_dispatch event. This feature empowers users to choose the target deployment environment—such as development, staging, or production—directly from the GitHub Actions UI at runtime. By providing a dropdown list of predefined options, teams can enforce environment-specific configurations and prevent accidental deployments to sensitive environments. This method ensures safer, more controlled release processes, especially in CI/CD pipelines with multiple stages.
Secure Azure Authentication
The workflow uses OIDC (OpenID Connect) via azure/login@v1 to securely authenticate to Azure using environment-specific secrets (AZURE_CLIENT_ID, etc.).
The GitHub environment (development in this case) is configured with three encrypted secrets as shown below:
- AZURE_CLIENT_ID: Client ID of the workload identity app.
- AZURE_TENANT_ID: Your Azure AD tenant ID.
- AZURE_SUBSCRIPTION_ID: The Azure subscription used for the deployment.
These secrets are used at runtime to authenticate to Azure in a secure, environment-scoped context, enabling the workflow to perform operations like deploying Bicep templates.
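The login step that consumes these secrets (it also appears in the complete workflow listing later in this post) is:

```yaml
# Login to Azure using federated identity (OIDC)
- name: Azure Login
  uses: azure/login@v1
  with:
    client-id: ${{ secrets.AZURE_CLIENT_ID }}
    tenant-id: ${{ secrets.AZURE_TENANT_ID }}
    subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
```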
# Global environment variables
env:
  CODE_PATH: 'aiInfrastructure/' # Path to your Bicep files
  DESCRIPTION: 'ai-foundry-infra' # Used for naming deployments
  AZURE_LOCATION: ${{ vars.AZURE_LOCATION }} # Location pulled from repository variable
To enable secure, passwordless authentication from GitHub Actions to Azure, the workflow leverages OpenID Connect (OIDC) through the azure/login@v1 action. This method avoids storing static credentials by federating GitHub with Microsoft Entra ID.
GitHub OIDC Credential Setup Summary
- Federated credential scenario: Enables GitHub Actions to securely access Azure resources using OIDC (OpenID Connect).
- Issuer: Pre-defined as https://token.actions.githubusercontent.com (GitHub's identity provider).
- Organization & Repository: Ties the credential to your GitHub repo (reponame/Automating-AI-Foundry-Deployment).
- Entity type: Set as Environment to scope access only to a specific GitHub environment (e.g., development).
- Subject identifier: Auto-generated to match the exact GitHub environment, ensuring scoped token issuance.
- Name: Name of the federated credential (Limit of 120 characters)
- Audience: Set to api://AzureADTokenExchange, which is required for Azure token exchange.
- Purpose: Allows GitHub workflows to authenticate into Azure without storing client secrets, improving security and automation compliance.
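The same federated credential can also be created with the Azure CLI instead of the portal; the sketch below uses placeholder values for the application object ID and credential name, and the subject matches the development environment of the repository mentioned above.

```shell
# Sketch: create the federated credential via the Azure CLI.
# <application-object-id> and the credential name are placeholders.
az ad app federated-credential create \
  --id <application-object-id> \
  --parameters '{
    "name": "github-development",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": "repo:reponame/Automating-AI-Foundry-Deployment:environment:development",
    "audiences": ["api://AzureADTokenExchange"]
  }'
```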
Bicep CLI Setup
To support .bicepparam files and ensure the latest Bicep features are used, the workflow installs the Bicep CLI and sets up a symlink for Azure CLI compatibility.
# Step 3: Install latest Bicep CLI manually (for .bicepparam support)
- name: Install the latest Bicep CLI
  shell: bash
  run: |
    curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-linux-x64
    chmod +x ./bicep
    sudo mv ./bicep /usr/local/bin/bicep
    bicep --version

# Step 4: Create symlink so Azure CLI can find the Bicep binary
- name: Create symlink where Azure CLI expects Bicep
  run: |
    mkdir -p ~/.azure/bin
    ln -sf /usr/local/bin/bicep ~/.azure/bin/bicep
Environment-Based Resource Group Creation
Instead of hardcoding locations, the workflow retrieves the deployment region dynamically via a repository variable (AZURE_LOCATION), keeping the pipeline flexible and clean.
# Step 5: Create the resource group if it doesn't exist
- name: Create resource group if it doesn't exist
  run: |
    az group create \
      --name AI-${{ github.event.inputs.environment }} \
      --location ${{ env.AZURE_LOCATION }}
Parameterized Deployment
Each environment has its own .bicepparam file, which is passed directly to az deployment group create. This enables clean separation of development, staging, and production configurations.
# Step 6: Deploy Bicep template using the environment-specific .bicepparam
- name: Deploy Bicep Template
  run: |
    az deployment group create \
      --resource-group AI-${{ github.event.inputs.environment }} \
      --template-file "${{ env.CODE_PATH }}main.bicep" \
      --parameters "${{ env.CODE_PATH }}${{ github.event.inputs.environment }}.main.bicepparam" \
      --name "${{ env.DESCRIPTION }}-${{ github.event.inputs.environment }}"
Deployment Summary Output
Post-deployment, the workflow extracts and logs the outputs from the Bicep modules (such as the Key Vault name and AI Services endpoint) using:
# Step 7: Show deployment outputs for verification/logging
- name: Show deployment outputs
  run: |
    echo "Deployment outputs:"
    az deployment group show \
      --resource-group AI-${{ github.event.inputs.environment }} \
      --name "${{ env.DESCRIPTION }}-${{ github.event.inputs.environment }}" \
      --query "properties.outputs"
This gives you immediate visibility into what was provisioned, which is ideal for chaining deployments or debugging.
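For chaining, a single output can be extracted into a shell variable with a JMESPath query. The sketch below assumes a hypothetical keyVaultName output defined in main.bicep and the development environment's resource group and deployment names:

```shell
# Sketch: pull one output value for use in a later step or deployment.
# "keyVaultName" is a hypothetical output; substitute one your template defines.
KEY_VAULT_NAME=$(az deployment group show \
  --resource-group AI-development \
  --name ai-foundry-infra-development \
  --query "properties.outputs.keyVaultName.value" \
  --output tsv)
echo "Provisioned Key Vault: $KEY_VAULT_NAME"
```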
Complete workflow code:
name: Deploy AI Foundry Infrastructure

# Manual trigger with environment selection
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Select the environment to deploy"
        required: true
        default: development
        type: choice
        options:
          - development
          - staging
          - production

# Permissions required for OIDC-based Azure login
permissions:
  id-token: write
  contents: read

# Global environment variables
env:
  CODE_PATH: 'aiInfrastructure/' # Path to your Bicep files
  DESCRIPTION: 'ai-foundry-infra' # Used for naming deployments
  AZURE_LOCATION: ${{ vars.AZURE_LOCATION }} # Location pulled from repository variable

jobs:
  deploy:
    name: Deploy Bicep Template to Azure
    runs-on: ubuntu-latest
    environment: ${{ github.event.inputs.environment }}
    steps:
      # Step 1: Checkout code from the repository
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: Login to Azure using federated identity (OIDC)
      - name: Azure Login
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      # Step 3: Install latest Bicep CLI manually (for .bicepparam support)
      - name: Install the latest Bicep CLI
        shell: bash
        run: |
          curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-linux-x64
          chmod +x ./bicep
          sudo mv ./bicep /usr/local/bin/bicep
          bicep --version

      # Step 4: Create symlink so Azure CLI can find the Bicep binary
      - name: Create symlink where Azure CLI expects Bicep
        run: |
          mkdir -p ~/.azure/bin
          ln -sf /usr/local/bin/bicep ~/.azure/bin/bicep

      # Step 5: Create the resource group if it doesn't exist
      - name: Create resource group if it doesn't exist
        run: |
          az group create \
            --name AI-${{ github.event.inputs.environment }} \
            --location ${{ env.AZURE_LOCATION }}

      # Step 6: Deploy Bicep template using the environment-specific .bicepparam
      - name: Deploy Bicep Template
        run: |
          az deployment group create \
            --resource-group AI-${{ github.event.inputs.environment }} \
            --template-file "${{ env.CODE_PATH }}main.bicep" \
            --parameters "${{ env.CODE_PATH }}${{ github.event.inputs.environment }}.main.bicepparam" \
            --name "${{ env.DESCRIPTION }}-${{ github.event.inputs.environment }}"

      # Step 7: Show deployment outputs for verification/logging
      - name: Show deployment outputs
        run: |
          echo "Deployment outputs:"
          az deployment group show \
            --resource-group AI-${{ github.event.inputs.environment }} \
            --name "${{ env.DESCRIPTION }}-${{ github.event.inputs.environment }}" \
            --query "properties.outputs"
Conclusion
Deploying Azure AI Foundry components using Infrastructure-as-Code brings consistency, repeatability, and security to AI environments. By modularizing Bicep templates and integrating them with GitHub Actions, this approach enables automated, environment-specific deployments that adhere to best practices in access control and configuration. From provisioning hubs and projects to setting up AI services, key vaults, and storage accounts, the entire stack can be deployed in a streamlined and scalable manner. This framework not only simplifies infrastructure management but also accelerates innovation in enterprise AI workloads. As your use of AI Foundry grows, these patterns can be extended to support multi-region rollouts, advanced networking, and governance at scale.