Content Developer II at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

Build your next website in Swift


Swift's result builders are a powerful language feature that lets us create domain-specific languages right inside our Swift code. With a little thinking, this means we can actually create whole websites in Swift, with our code automatically being converted to valid, accessible HTML, and we can even sprinkle in a little SwiftUI magic to complete the effect.

Let's get to it…

Starting from scratch

For over 30 years, HTML has been a great language for describing the structure of web pages.

For example, we can write HTML like this:

<h1>Wise words</h1>
<p>"If you don't take risks, you can't create a future" - <em>Monkey D. Luffy</em> in <a href="https://en-wp.org/wiki/One_Piece">One Piece</a></p>

That has a heading, a paragraph of text with some emphasis, and a link to another page. But, what happens if we forget the closing </em> tag? Without it, web browsers will assume everything that follows should also be emphasized.

That's not what I intended, but it's easy to do because HTML is just a bunch of text.

But even if you write perfect HTML, there are other, bigger problems:

  1. How can you make sure your pages look the same on all browsers?
  2. How can you make your page adapt to different screen sizes, such as iPhone and iPad?
  3. How can you use more advanced UI elements such as dropdown menus, carousels, and accordions?
  4. Most importantly, how can you be sure your site is accessible to everyone?

Ultimately, all these boil down to one huge problem: most people don't have enough time to become experts in Swift and also experts in HTML.

And so I want to suggest that the answer is to not use HTML, or at least not directly. Instead, I would like to propose that we use Swift to build our websites.

Introducing result builders

Back in 2019, when Apple announced SwiftUI, there was the usual What's New in Swift presentation. During that talk they showed the following HTML:


<html>
    <head>
        <title>Jess...



How to integrate continuous integration and deployment with WordPress on App Service


Why use Continuous integration and deployment with WordPress?

 

CI/CD is a system that automates steps in software delivery. For WordPress developers, it means less manual work: once you push updates, the system automatically tests and deploys them. It's like having an assistant that not only speeds up your work but also checks for errors with each update, ensuring your WordPress site runs smoothly. This constant testing and feedback loop means you can fix bugs quickly and improve your site continuously without disrupting the live version. In short, CI/CD makes your development process faster, more efficient, and less error-prone.

 

How to get started with Continuous integration and deployment in WordPress on App Service?

 

Note: This code integration feature is currently enabled only for WordPress on App Service images with tags PHP 8.2 or greater. https://github.com/Azure/wordpress-linux-appservice/tree/main?tab=readme-ov-file#image-details

 

Part 1: Before integrating code with Git, it is important to decide which files to include. It is recommended that you track a smaller number of files. For example, tracking files in wp-content/uploads is inefficient because it might contain large static files; files like these should be stored in blob storage instead. Another example is the wp-config.php file, since it contains separate settings for development and production environments.

You should also choose to ignore WordPress core files if you are not making any changes to them.

 

This is a simple .gitignore file to get you started: (ref: https://github.com/github/gitignore/blob/main/WordPress.gitignore)

 

    *~
    *.log
    .htaccess
    wp-content/uploads/
    wp-content/upgrade/
    wp-content/cache/
    wp-content/backups/
    wp-content/backup-db/
    wp-content/blogs.dir/
    wp-content/advanced-cache.php
    wp-content/object-cache.php
    wp-content/wp-cache-config.php
    sitemap.xml
    sitemap.xml.gz

    # WP Core
    /wp-admin/
    /wp-content/index.php
    /wp-content/languages
    /wp-content/plugins/index.php
    /wp-content/themes/index.php
    /wp-includes/
    /index.php
    /license.txt
    /readme.html
    /wp-*.php
    /xmlrpc.php

    # Configuration
    wp-config.php

 

 

Part 2: Downloading WordPress code from App Service

 

Step 1: Go to Kudu dashboard https://<sitename>.scm.azurewebsites.net/newui (Ref: Kudu Dashboard explained)

Step 2: Go to file manager and download the /site/wwwroot folder as a zip file

abhishekreddy_0-1716031377784.png

 

Step 3: Extract the zip file, add the .gitignore file from Part 1, and push the code to your remote repository.

 

Part 3: Enabling CI/CD with GitHub

 

Step 1: Go to Deployment Center on your app service dashboard.

Step 2: From the Source dropdown, select ‘GitHub’

Step 3: Select the repository and branch that you pushed to in Part 2.

abhishekreddy_1-1716031377791.png

 

Step 4: Select ‘Add a workflow’ from the workflow options.

Step 5: Under ‘Authentication settings’, it is recommended that you select User-assigned authentication type.

Step 6: Click on Save

abhishekreddy_2-1716031377798.png

 

 

Support and Feedback 

In case you need any support, you can open a support request at New support request - Microsoft Azure.

For more details about the offering, please visit Announcing the General Availability of WordPress on Azure App Service - Microsoft Tech Community.

If you have any ideas about how we can make WordPress on Azure App Service better, please post your ideas at Post idea · Community (azure.com), or email us at wordpressonazure@microsoft.com to start a conversation.

 


Build a chatbot service to ensure safe conversations: Using Azure OpenAI & Azure Content Safety





Why should we care about the safety of our chat service?

 

When you deploy a chat service on a website, users may enter inappropriate or harmful messages that can lead to unwanted responses from the chatbot. This can pose significant risks, including the potential for revealing sensitive company information or allowing other users' messages to influence the chatbot's responses. That's why it's critical to implement a content filtering mechanism that can inspect messages from users. Azure Content Safety provides a robust solution for inspecting and filtering inappropriate content, ensuring safe and secure interactions.

This tutorial is ideal for anyone who wants to build a chat service with strong content moderation capabilities. In this tutorial, you will learn how to build a chatbot service that interacts with users using Azure Cosmos DB, Azure Content Safety, and Azure OpenAI. This service provides the following features:

  1. Analyze user messages for safety: Analyze messages entered by users using Azure Content Safety to evaluate them for hate, self-harm, sexual content, and violence.
  2. Conversations with chatbot: Conduct conversations about safe messages using Azure OpenAI.
  3. Manage conversation history: Store a user's conversation history in Azure Cosmos DB and load or clear the history as needed.
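
Before wiring up any Azure resources, the three features above can be sketched with plain Python stubs. This is only an illustration of how the pieces fit together; the function names and canned replies here are made up, and the real Azure-backed implementations come later in the tutorial.

```python
# Hypothetical stand-ins for the three features; no Azure calls are made.

def analyze_message(text: str) -> bool:
    """Stand-in for Azure Content Safety: flag a toy 'unsafe' keyword."""
    return "violence" not in text.lower()

def chat(message: str, history: list) -> str:
    """Stand-in for Azure OpenAI: return a canned reply."""
    return f"(reply to: {message})"

history = []  # Stand-in for conversation history stored in Azure Cosmos DB

def handle(message: str) -> str:
    """Gate the message on safety, reply, then store the interaction."""
    if not analyze_message(message):
        return "Your message is not safe to process."
    reply = chat(message, history)
    history.append((message, reply))
    return reply

print(handle("hello"))  # → (reply to: hello)
```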

Here is an overview of this tutorial.

 

architecture2.png

 

This tutorial is related to the following topics:

  • AI Engineer
  • Developer
  • Azure CosmosDB
  • Azure Content Safety
  • Azure OpenAI
  • Semantic Kernel

Prerequisites

Microsoft Cloud Technologies used in this Tutorial

  • Azure CosmosDB
  • Azure Content Safety
  • Azure OpenAI Service

Table of Contents

  1. Create an Azure CosmosDB
  2. Create an Azure Content Safety
  3. Create an Azure OpenAI
  4. Set up the project and install the libraries
  5. Set up the project in Visual Studio Code
  6. Set up and initialize the ChatbotService class with Azure Services
  7. Implement core functions in ChatbotService
  8. Run the main function and test the ChatbotService
  9. Congratulations!
  10. Full code for tutorial

 

Create an Azure CosmosDB

 

To store and load your chat history, you need to create an Azure Cosmos DB resource.

In this exercise, you will:

  • Create an Azure Cosmos DB account to store your chat history.
  • Create a container within Azure Cosmos DB to store and manage chat messages.

 

Create an Azure CosmosDB account

 

  1. Type cosmos db in the search bar at the top of the portal page and select Azure Cosmos DB from the options that appear.

    01-1-type-cosmos-db.png

  2. Select + Create from the navigation menu.

    01-2-select-create.png

  3. Select Azure Cosmos DB for NoSQL from the navigation menu.

    01-3-select-cosmosdb-nosql.png

  4. Perform the following tasks:

    • Select your Azure Subscription.
    • Select the Resource group to use (create a new one if needed).
    • Enter Account Name. It must be a unique value.
    • Set Availability Zones to Disable.
    • Select the Location you'd like to use.
    • Set the Capacity mode to Provisioned throughput.
    • Set the Apply Free Tier Discount option to Apply.
    • Select Limit total account throughput to prevent unexpected charges.

     

     

    01-4-create-cosmosdb.png

  5. Select Review + Create.

  6. Select Create.

Create an Azure CosmosDB database and container

 

  1. Navigate to the Azure CosmosDB resource that you created.

  2. Select Data Explorer from the left side tab.

  3. Select New Container from the navigation menu.

    01-5-select-new-container.png

  4. Perform the following tasks:

    • Set the Database id option to Create new.
    • Enter a database id. It must be a unique value.
    • Set the Database throughput (autoscale) option to Autoscale.
    • Enter the Database Max RU/s as 1000.
    • Enter a Container id. It must be a unique value.
    • Set the Indexing option to Automatic.
    • Enter the Partition key as /userId.

  5. Select OK.

 

Create an Azure Content Safety

 

To create a service to detect inappropriate content, you need to create an Azure Content Safety resource.

In this exercise, you will:

  • Create an Azure Content Safety to detect inappropriate content.

Create an Azure Content Safety resource

 

  1. Type content safety in the search bar at the top of the portal page and select Content safety from the options that appear.

     
    02-1-type-content-safety.png

  2. Select + Create from the navigation menu.

     

    02-2-select-create.png

  3. Perform the following tasks:

    • Select your Azure Subscription.
    • Select the Resource group to use (create a new one if needed).
    • Select the Region you'd like to use.
    • Enter Content safety name. It must be a unique value.
    • Select the Free F0 pricing tier.

     

     
    02-3-create-content-safety.png

     

  4. Select Review + Create.

  5. Select Create.

 

Create an Azure OpenAI

 

To enable your chat service to provide answers based on chat history stored in Azure Cosmos DB, you need to create and deploy an Azure OpenAI resource.

In this exercise, you will:

  • Create an Azure OpenAI resource.
  • Deploy Azure OpenAI models.

 

 Note


Access to the Azure OpenAI service is currently available by request. To request access, please fill out the form on the Azure OpenAI request page.

 

Create an Azure OpenAI resource

 

  1. Type azure openai in the search bar at the top of the portal page and select Azure OpenAI from the options that appear.

     

     

     
    03-1-type-azure-openai.png

  2. Select + Create from the navigation menu.

     

    03-2-select-create.png

     

  3. Perform the following tasks:

    • Select your Azure Subscription.
    • Select the Resource group to use (create a new one if needed).
    • Select the Region you'd like to use.
    • Enter Azure OpenAI Name. It must be a unique value.
    • Select the Standard S0 pricing tier.

     

     
    03-3-fill-basics.png

     

     Note

    To minimize costs, try to create all the resources in the same region.
  4. Select Next to move to the Network page.

  5. Select a network security Type.

    03-4-select-security-type.png

  6. Select Next to move to the Tags page.

  7. Select Next to move to the Review + submit page.

  8. Select Create.

     

     

    03-5-select-create.png

Deploy Azure OpenAI models

 

  1. Navigate to the Azure OpenAI resource that you created.

  2. Select Go to Azure OpenAI Studio from the navigation menu.

     

     
    03-6-go-to-studio.png

  3. Inside Azure OpenAI Studio, select Deployments from the left side tab.

     

     
    03-7-select-deployments.png

  4. Select + Create new deployment from the navigation menu to create a new gpt-35-turbo deployment.

     

     
    03-8-create-model.png

  5. Perform the following tasks:

    • For the model, select gpt-35-turbo.
    • For the Model version, select Default.
    • For the Deployment name, add a name that's unique to this cloud instance. For example, gpt-35-turbo.

  6. Select Create.

  7. Now you've learned how to set up Azure resources to implement features that allow the Azure Content Safety resource to analyze conversations and Azure OpenAI to generate responses. In the next exercise, you will develop a Python program that interacts with users to ensure safe conversations.

 

Set up the project and install the libraries

Now, you will create a folder to work in and set up a virtual environment to develop a program.

In this exercise, you will:

  • Create a folder to work in.
  • Create a virtual environment.
  • Install the required packages.

Create a folder to work in

 

  1. Open a terminal window and type the following command to create a folder named safety-chatbot in the default path.

    mkdir safety-chatbot
    
  2. Type the following command inside your terminal to navigate to the safety-chatbot folder you created.

    cd safety-chatbot
    

Create a virtual environment

 

  1. Type the following command inside your terminal to create a virtual environment named .venv.


    python -m venv .venv
    
  2. Type the following command inside your terminal to activate the virtual environment.


    .venv\Scripts\activate.bat

 

 Note


If it worked, you should see (.venv) before the command prompt.

 

Install the required packages

  1. Type the following commands inside your terminal to install the required packages.


    pip install azure-cosmos==4.6.0
    pip install azure-ai-contentsafety==1.0.0
    pip install semantic-kernel==0.9.7b1
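
If you want to confirm the three packages actually landed in the virtual environment, a generic check is to query the installed metadata. This snippet is not part of the tutorial's code, and the helper name installed_versions is made up here.

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Return a mapping of package name to installed version (or None if missing)."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found

# Prints the installed versions, or None for anything not yet installed.
print(installed_versions(["azure-cosmos", "azure-ai-contentsafety", "semantic-kernel"]))
```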

 

Set up the project in Visual Studio Code

 

To develop a program that uses the Azure resources that you created, you need a config.py file to hold your Azure information.

 

In this exercise, you will:

  • Create an example.py file.
  • Import the required packages.
  • Create a config.py file to enter Azure information.

Set up example.py file

 

  1. Open Visual Studio Code.

  2. Select File from the menu bar.

  3. Select Open Folder.

  4. Select the safety-chatbot folder that you created, which is located at C:\Users\yourUserName\safety-chatbot.

     

     
    05-1-open-project-folder.jpg

  5. In the left pane of Visual Studio Code, right-click and select New File to create a new file named example.py.

     

     
    05-2-create-new-file.jpg

  6. Add the following code to the example.py file to import the required libraries.

    # Library imports
    import asyncio
    from datetime import datetime, timezone
    from concurrent.futures import ThreadPoolExecutor

    # Azure imports
    from azure.ai.contentsafety.aio import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
    from azure.core.credentials import AzureKeyCredential
    from azure.core.exceptions import HttpResponseError
    from azure.cosmos import CosmosClient

    # Semantic Kernel imports
    import semantic_kernel as sk
    import semantic_kernel.connectors.ai.open_ai as sk_oai
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
    from semantic_kernel.prompt_template import PromptTemplateConfig
    from semantic_kernel.prompt_template.input_variable import InputVariable
    from semantic_kernel.functions.kernel_arguments import KernelArguments

 

Set up config.py file

  1. In the left pane of Visual Studio Code, right-click and select New File to create a new file named config.py.

     Note


    Complete folder structure:
    
    
    └── YourUserName
         └── safety-chatbot
             ├── example.py
             └── config.py
    
    
  2. Add the following code to the config.py file to include your Azure information.

    # Azure Cosmos DB settings
    AZURE_COSMOSDB_ACCOUNT_NAME = 'your-cosmosdb-account-name'  # 'safetychatbot-storage'
    AZURE_COSMOSDB_DATABASE_NAME = 'your-cosmosdb-database-name'  # 'safetychatbot-database'
    AZURE_COSMOSDB_CONTAINER_NAME = 'your-cosmosdb-container-name'  # 'safetychatbot-container'
    AZURE_COSMOSDB_ENDPOINT = f'https://{AZURE_COSMOSDB_ACCOUNT_NAME.lower()}.documents.azure.com:443/'
    AZURE_COSMOSDB_KEY = 'your-cosmosdb-key'

    # Azure Content Safety settings
    AZURE_CONTENT_SAFETY_NAME = 'your-content-safety-name'  # 'safetychatbot-contentsafety'
    AZURE_CONTENT_SAFETY_ENDPOINT = f'https://{AZURE_CONTENT_SAFETY_NAME.lower()}.cognitiveservices.azure.com/'
    AZURE_CONTENT_SAFETY_KEY = 'your-content-safety-key'

    # Azure OpenAI settings
    AZURE_OPENAI_NAME = 'safetychatbot-openai'
    AZURE_OPENAI_ENDPOINT = f'https://{AZURE_OPENAI_NAME.lower()}.openai.azure.com/'
    AZURE_OPENAI_KEY = 'your-openai-key'
    AZURE_OPENAI_API_VERSION = 'your-API-version'  # '2023-08-01-preview'
    AZURE_OPENAI_CHAT_DEPLOYMENT_NAME = 'your_chat_deployment_name'  # 'gpt-35-turbo'
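
Note that the endpoint values in config.py are derived from the resource names with f-strings, so you only need to fill in the names and keys. A quick illustration, using a hypothetical account name:

```python
# Hypothetical account name; use your own Azure Cosmos DB account name.
account_name = "SafetyChatbot-Storage"

# Endpoints embed the lowercased resource name, as in config.py.
endpoint = f"https://{account_name.lower()}.documents.azure.com:443/"
print(endpoint)  # → https://safetychatbot-storage.documents.azure.com:443/
```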


Add Azure Environment Variables 

  1. Perform the following tasks to add the Azure Cosmos DB account name:

    • Navigate to the Azure CosmosDB resource that you created.
    • Copy and paste your account name into the config.py file.

    05-1-cosmosdb-account.png

  2. Perform the following tasks to add the Azure Cosmos DB database name:

    • Navigate to the Azure Cosmos DB resource that you created.
    • Select Data Explorer from the left side tab.
    • Copy and paste your database name into the config.py file.

     

     
    05-2-cosmosdb-database.png

     

  3. Perform the following tasks to add the Azure Cosmos DB container name:

    • Navigate to the Azure Cosmos DB resource that you created.
    • Select Data Explorer from the left side tab.
    • Copy and paste your container name into the config.py file.

     

     
    05-3-cosmosdb-container.png

  4. Perform the following tasks to add the Azure Cosmos DB key:

    • Navigate to the Azure CosmosDB resource that you created.
    • Select Keys from the left side tab.
    • Copy and paste your key into the config.py file.

     

     
    05-4-cosmosdb-key.png

  5. Perform the following tasks to add the Azure Content Safety name:

    • Navigate to the Azure Content Safety resource that you created.
    • Copy and paste your resource name into the config.py file.

     

     
    05-5-safety-name.png

  6. Perform the following tasks to add the Azure Content Safety key:

    • Navigate to the Azure Content Safety resource that you created.
    • Select Keys and Endpoint from the left side tab.
    • Copy and paste your key into the config.py file.

    05-6-safety-key.png

  7. Perform the following tasks to add the Azure OpenAI name:

    • Navigate to the Azure OpenAI resource that you created.
    • Copy and paste your resource name into the config.py file.

     

     
    05-7-openai-name.png

  8. Perform the following tasks to add the Azure OpenAI key:

    • Navigate to the Azure OpenAI resource that you created.
    • Select Keys and Endpoint from the left side tab.
    • Copy and paste your key into the config.py file.

     

     
    05-8-openai-key.png

  9. Perform the following task to add the Azure OpenAI API versions:

    • Select the appropriate Azure OpenAI API version. You can refer to the Azure OpenAI Service REST API reference documents.

       Note



      In this tutorial, you use the 2024-02-15-preview version of the Azure OpenAI API.
    • Copy and paste your Azure OpenAI API versions into the config.py file.

 

Set up and initialize the ChatbotService class with Azure Services

 

To implement the ChatbotService for handling chat interactions and Azure services, you need to import the Azure information from the config.py file into the example.py file and implement a class that efficiently orchestrates these functionalities.

In this exercise, you will:

  • Import Azure information from the config.py file into the example.py file.
  • Create the ChatbotService class to efficiently manage chat interactions and integrate Azure service capabilities.

 Note


The complete code for this tutorial is provided at the end to make it easy to connect the pieces and understand the overall implementation.

 


Import Azure information from the config.py file

  1. Add the following code to the example.py file to import the values from config.py file.

    # Configuration imports
    from config import (
        AZURE_COSMOSDB_DATABASE_NAME,
        AZURE_COSMOSDB_CONTAINER_NAME,
        AZURE_COSMOSDB_ENDPOINT,
        AZURE_COSMOSDB_KEY,
        AZURE_CONTENT_SAFETY_ENDPOINT,
        AZURE_CONTENT_SAFETY_KEY,
        AZURE_OPENAI_ENDPOINT,
        AZURE_OPENAI_KEY,
        AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
    )

Create the ChatbotService Class

  1. Add the following code to the example.py file to set up the ChatbotService class. This class initializes the Azure Cosmos DB for NoSQL clients; their database operations run synchronously.

    class ChatbotService:
        def __init__(self, loop, user_id):
            """
            Initialize the ChatbotService with event loop, user ID, and necessary Azure CosmosDB clients.
            """
            # Set up the event loop and thread pool executor for managing asynchronous tasks,
            # allowing Azure Cosmos DB operations to be handled in an asynchronous environment
            self.loop = loop
            self.executor = ThreadPoolExecutor()

            # Store the user ID, which is used as the partition key (/userId) in Azure CosmosDB
            self.user_id = user_id

            # Initialize the Azure Cosmos DB client
            self.cosmos_client = CosmosClient(AZURE_COSMOSDB_ENDPOINT, credential=AZURE_COSMOSDB_KEY)

            # Initialize the Azure Cosmos DB database client
            self.database = self.cosmos_client.get_database_client(AZURE_COSMOSDB_DATABASE_NAME)

            # Initialize the Azure Cosmos DB container client
            self.container = self.database.get_container_client(AZURE_COSMOSDB_CONTAINER_NAME)

     

  2. Add the following code to the example.py file to initialize the Azure Content Safety client and the Semantic Kernel within the ChatbotService class. This step sets up the asynchronous operations required for Content Safety analysis and chatbot interactions.

    async def init(self):
        """
        Initialize the Content Safety client and Semantic Kernel.
        """
        # Initialize the Azure Content Safety client
        self.content_safety_client = ContentSafetyClient(AZURE_CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(AZURE_CONTENT_SAFETY_KEY))

        # Initialize the Semantic Kernel
        self.kernel = sk.Kernel()

        # Initialize the chat service for Azure OpenAI
        self.chat_service = AzureChatCompletion(
            service_id='chat_service',
            deployment_name=AZURE_OPENAI_CHAT_DEPLOYMENT_NAME,
            endpoint=AZURE_OPENAI_ENDPOINT,
            api_key=AZURE_OPENAI_KEY
        )

        # Add the chat service to the Semantic Kernel
        self.kernel.add_service(self.chat_service)

        # Define the prompt template configuration for the chatbot
        self.prompt_template_config = PromptTemplateConfig(
            template="""ChatBot can have a conversation with you about any topic.
    It can give explicit instructions or say 'I don't know' if it does not have an answer.

    {{$history}}
    User: {{$user_message}}
    ChatBot: """,
            name='chat_prompt_template',
            template_format='semantic-kernel',
            input_variables=[
                InputVariable(name='user_message', description='The user message.', is_required=True),
                InputVariable(name='history', description='The conversation history', is_required=True),
            ],
            execution_settings=sk_oai.OpenAIChatPromptExecutionSettings(
                service_id='chat_service',
                ai_model_id='gpt-3.5-turbo',
                max_tokens=500,
                temperature=0.7
            )
        )

        # Add the chat function to the Semantic Kernel
        self.chat_function = self.kernel.add_function(
            function_name="chat_function",
            plugin_name="chat_plugin",
            prompt_template_config=self.prompt_template_config,
        )

        return self

     

 Note


Why is the Constructor Split?

When setting up the ChatbotService, the constructor is split into two parts: __init__ and async init. This separation is critical for effectively managing both synchronous and asynchronous operations required by different Azure services. Azure Cosmos DB for NoSQL operates in a synchronous environment, while Azure Content Safety and the Semantic Kernel benefit from asynchronous operations. This requires a clear distinction between synchronous and asynchronous initialization.
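
Stripped of the Azure specifics, the split-constructor pattern looks like this. The ServiceBase class below is a hypothetical illustration, not part of the tutorial's code:

```python
import asyncio

class ServiceBase:
    """Hypothetical class showing the split __init__ / async init() pattern."""

    def __init__(self):
        # Synchronous setup, like the Cosmos DB clients in ChatbotService
        self.sync_ready = True

    async def init(self):
        # Asynchronous setup, like the Content Safety client and Semantic Kernel
        await asyncio.sleep(0)  # placeholder for real awaitable setup work
        self.async_ready = True
        return self  # returning self enables: await ServiceBase().init()

async def demo():
    service = await ServiceBase().init()
    return service.sync_ready and service.async_ready

print(asyncio.run(demo()))  # → True
```

Returning `self` from `init` is what lets the tutorial write `await ChatbotService(loop, user_id).init()` as a single expression.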

Implement core functions in ChatbotService

 

To implement the core functions of the ChatbotService for handling chat interactions and integrating with Azure services, you need to add several methods to the class. These methods enable text analysis, chatbot interactions, and conversation history management.

In this exercise, you will:

  • Implement a method to analyze text for safety using Azure Content Safety.
  • Implement a method to interact with the chatbot using the Semantic Kernel.
  • Implement methods to store, load, and clear the conversation history in Azure Cosmos DB.

  1. Add the following code to the ChatbotService class to create the analyze_text function, which analyzes the input text for safety using Azure Content Safety.

    async def analyze_text(self, text):
        """
        Analyze the input text for safety using Azure Content Safety.
        """
        # Create a request with the input text to be analyzed
        request = AnalyzeTextOptions(text=text)

        try:
            # Send the request to the Azure Content Safety client and await the response
            response = await self.content_safety_client.analyze_text(request)

            # Get the analysis results for different categories
            results = {
                'hate': next((item for item in response.categories_analysis if item.category == TextCategory.HATE), None),
                'self_harm': next((item for item in response.categories_analysis if item.category == TextCategory.SELF_HARM), None),
                'sexual': next((item for item in response.categories_analysis if item.category == TextCategory.SEXUAL), None),
                'violence': next((item for item in response.categories_analysis if item.category == TextCategory.VIOLENCE), None),
            }

            # Print content safety analysis results
            print("\n<-- Content Safety Analysis Results -->")
            for category, result in results.items():
                if result:
                    print(f"{category.capitalize()} severity: {result.severity}")
            print("<-- End of Content Safety Analysis Results -->\n")

            # Define a threshold for the text to be considered unsafe
            threshold = 2

            # Based on the threshold, determine if the text is safe
            is_safe = not any(result and result.severity >= threshold for result in results.values())

            return is_safe, results
        except HttpResponseError as e:
            # Handle any HTTP response errors that occur during the request
            print(f"Failed to analyze text. Error: {e}")

     

  2. Add the following code to the ChatbotService class to create the chat_with_kernel function, which interacts with the chatbot using the Semantic Kernel and the provided history.

    async def chat_with_kernel(self, user_message, history):
        """
        Interact with the chatbot using the Semantic Kernel and provided history.
        """
        # Create arguments for the chat function using the user message and conversation history stored in Azure Cosmos DB
        arguments = KernelArguments(user_message=user_message, history=history)

        # Invoke the chat function in the Semantic Kernel with the provided arguments and await the response
        response = await self.kernel.invoke(self.chat_function, arguments)

        # Return the chatbot's response
        return response

     

  3. Add the following code to the ChatbotService class to create the store_interaction and _store_interaction_sync functions that store user interactions with the chatbot in Azure Cosmos DB.

    async def store_interaction(self, user_message, chat_response):
        """
        Store the user interaction with the chatbot in Azure Cosmos DB.
        """
        # Run the _store_interaction_sync method in an asynchronous execution environment
        await self.loop.run_in_executor(self.executor, self._store_interaction_sync, user_message, chat_response)

    def _store_interaction_sync(self, user_message, chat_response):
        """
        Synchronously store the interaction in Azure Cosmos DB.
        """
        # Get the current time in UTC
        current_time = datetime.now(timezone.utc)

        # Upsert (insert or update) the interaction data into the Cosmos DB container
        self.container.upsert_item({
            'id': str(current_time.timestamp()),    # Use the current timestamp as a unique ID
            'user_message': user_message,           # Store the user message
            'bot_response': chat_response,          # Store the chatbot response
            'timestamp': current_time.isoformat(),  # Store the timestamp in ISO format
            'userId': self.user_id                  # Store the user ID for the partition key
        })

     

  4. Add the following code to the ChatbotService class to create the load_historical_context and _load_historical_context_sync functions that load the user's chat history from Azure Cosmos DB.

    async def load_historical_context(self):
        """
        Load the user's chat history from Azure Cosmos DB.
        """
        # Run the _load_historical_context_sync method in an asynchronous execution environment
        return await self.loop.run_in_executor(self.executor, self._load_historical_context_sync)

    def _load_historical_context_sync(self):
        """
        Synchronously load the user's chat history from Azure Cosmos DB.
        """
        # Define the query to select items for the current user, ordered by timestamp
        query = "SELECT * FROM c WHERE c.userId = @userId ORDER BY c.timestamp DESC"
        parameters = [{"name": "@userId", "value": self.user_id}]

        # Execute the query and retrieve the items
        items = list(self.container.query_items(query=query, parameters=parameters, enable_cross_partition_query=True))

        # Include only the last 5 conversations
        history_items = items[:5]

        # Format the conversation history
        return "\n".join([f"User: {item['user_message']}\nChatBot: {item['bot_response']}" for item in history_items])

     

  5. Add the following code to the ChatbotService class to create the clear_historical_context and _clear_historical_context_sync functions that clear the user's chat history from Azure Cosmos DB.

    async def clear_historical_context(self):
        """
        Clear the user's chat history from Azure Cosmos DB.
        """
        # Run the _clear_historical_context_sync method in an asynchronous execution environment
        await self.loop.run_in_executor(self.executor, self._clear_historical_context_sync)

    def _clear_historical_context_sync(self):
        """
        Synchronously clear the user's chat history from Azure Cosmos DB.
        """
        # Define the query to select items for the current user
        query = "SELECT * FROM c WHERE c.userId = @userId"
        parameters = [{"name": "@userId", "value": self.user_id}]

        # Execute the query and retrieve the items
        items = list(self.container.query_items(query=query, parameters=parameters, enable_cross_partition_query=True))

        # Clear the chat history by deleting all items for the current user
        for item in items:
            self.container.delete_item(item, partition_key=self.user_id)
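
The safety decision in analyze_text boils down to a threshold comparison over the per-category results. Isolated here with hypothetical integer severities standing in for the Content Safety result objects (a simplification for illustration):

```python
def is_safe(severities, threshold=2):
    """A message is unsafe if any category severity meets or exceeds the threshold.

    `severities` maps category names to an integer severity, or None when the
    category was not returned (mirroring the `results` dict in analyze_text).
    """
    return not any(s is not None and s >= threshold
                   for s in severities.values())

print(is_safe({'hate': 0, 'self_harm': None, 'sexual': 0, 'violence': 1}))  # → True
print(is_safe({'hate': 0, 'self_harm': None, 'sexual': 0, 'violence': 4}))  # → False
```

Raising or lowering `threshold` trades off strictness against false positives; the tutorial uses 2.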



Run the main function and test the ChatbotService

 

In this exercise, you will implement the main function that manages user interactions, analyzes and stores conversation history using Azure Cosmos DB, Azure Content Safety, Azure OpenAI, and the Semantic Kernel.

 

In this exercise, you will:

  • Create the main function to manage user interactions and perform safety checks.
  • Run the program to see if it works well.

  1. Add the following code to create the main function in the example.py file that handles user inputs, manages chat history, and interacts with the chatbot service.

    async def main():
        """ Main function to run the chatbot service. """
        # Enter the User ID to join the chat.
        # A conversation history is stored based on the user ID.
        user_id = input("User ID: ")
        # Get the event loop to allow synchronous operations in Azure Cosmos DB for NoSQL to run asynchronously.
        loop = asyncio.get_running_loop()
        # Initialize the ChatbotService with the event loop and user ID
        chatbot_service = await ChatbotService(loop, user_id).init()
        # Main loop for the interaction with the user
        while True:
            user_message = input("You: ")
            # Exit the chat loop if the user types 'exit'
            if user_message.lower() == 'exit':
                break
            elif user_message.lower() == 'history':
                # Load and print the chat history if the user types 'history'
                history = await chatbot_service.load_historical_context()
                print(f"\n<-- Chat history of user ID: {user_id} -->")
                print(history)
                print(f"<-- End of chat history of user ID: {user_id} -->\n")
            elif user_message.lower() == 'clear':
                # Clear the chat history if the user types 'clear'
                await chatbot_service.clear_historical_context()
                print("Chat history cleared.")
            else:
                # Analyze the text for safety
                is_safe, _ = await chatbot_service.analyze_text(user_message)
                if is_safe:
                    # Load chat history and interact with the chatbot if the message is safe
                    history = await chatbot_service.load_historical_context()
                    chat_response = await chatbot_service.chat_with_kernel(user_message, history)
                    print("Chatbot:", chat_response)
                    # Store the interaction in Cosmos DB
                    await chatbot_service.store_interaction(user_message, str(chat_response))
                else:
                    # Inform the user if their message is not safe
                    print("Chatbot: Your message is not safe to process. Please rephrase and try again.")

    if __name__ == "__main__":
        asyncio.run(main())

     

  2. Type the following command inside your terminal to run the program and see if it can answer questions.

    python example.py
    
  3. This will start the chatbot service, prompting you to enter a User ID and interact with the chatbot. You can enter exit to end the session, history to view past interactions, and clear to clear the current user's chat history. Here's an example of the results.

     

     

    [Image: 08-1-example-result.png]
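The exit/history/clear handling in the loop above can be reduced to a small pure function for testing. This is a simplified sketch with a hypothetical helper name; the tutorial inlines this logic directly in the while loop:

```python
# Dependency-free sketch of the command routing used in main().
# route_command is a hypothetical helper introduced only for illustration.

def route_command(user_message):
    """Map raw user input to one of the chatbot loop's actions."""
    command = user_message.lower()
    if command == "exit":
        return "exit"      # leave the chat loop
    if command == "history":
        return "history"   # print the stored conversation history
    if command == "clear":
        return "clear"     # delete the stored history
    return "chat"          # anything else goes through the safety check and chat

print(route_command("History"))
```

Because the comparison lowercases the input first, commands work regardless of how the user capitalizes them.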

     

Congratulations!

 

You've completed this tutorial

 

Congratulations! You've successfully learned how to integrate Azure Content Safety with Azure OpenAI. In this tutorial, you combined Azure Cosmos DB, Azure Content Safety, and Azure OpenAI to build a robust chatbot service. By leveraging these Azure services, you now have a chatbot that can safely and effectively interact with users, analyze and store conversations, and ensure content safety.

 

Clean Up Azure Resources

 

Clean up your Azure resources to avoid additional charges to your account. Go to the Azure portal and delete the following resources:

  • The Azure Cosmos DB resource
  • The Azure Content Safety resource
  • The Azure OpenAI resource

Next Steps

Documentation

Training Content

 

Full code for tutorial

 

 Note


This exercise is designed to provide the complete code used in the tutorial. It is a separate exercise from the rest of the tutorial.

  1. config.py

    # Azure Cosmos DB settings
    AZURE_COSMOSDB_ACCOUNT_NAME = 'your-cosmosdb-account-name'  # 'safetychatbot-storage'
    AZURE_COSMOSDB_DATABASE_NAME = 'your-cosmosdb-database-name'  # 'safetychatbot-database'
    AZURE_COSMOSDB_CONTAINER_NAME = 'your-cosmosdb-container-name'  # 'safetychatbot-container'
    AZURE_COSMOSDB_ENDPOINT = f'https://{AZURE_COSMOSDB_ACCOUNT_NAME.lower()}.documents.azure.com:443/'
    AZURE_COSMOSDB_KEY = 'your-cosmosdb-key'

    # Azure Content Safety settings
    AZURE_CONTENT_SAFETY_NAME = 'your-content-safety-name'  # 'safetychatbot-contentsafety'
    AZURE_CONTENT_SAFETY_ENDPOINT = f'https://{AZURE_CONTENT_SAFETY_NAME.lower()}.cognitiveservices.azure.com/'
    AZURE_CONTENT_SAFETY_KEY = 'your-content-safety-key'

    # Azure OpenAI settings
    AZURE_OPENAI_NAME = 'safetychatbot-openai'
    AZURE_OPENAI_ENDPOINT = f'https://{AZURE_OPENAI_NAME.lower()}.openai.azure.com/'
    AZURE_OPENAI_KEY = 'your-openai-key'
    AZURE_OPENAI_API_VERSION = 'your-API-version'  # '2024-02-15-preview'
    AZURE_OPENAI_CHAT_DEPLOYMENT_NAME = 'your_chat_deployment_name'  # 'gpt-35-turbo'

     

  2. example.py

    # Library imports
    import asyncio
    from datetime import datetime, timezone
    from concurrent.futures import ThreadPoolExecutor

    # Azure imports
    from azure.ai.contentsafety.aio import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
    from azure.core.credentials import AzureKeyCredential
    from azure.core.exceptions import HttpResponseError
    from azure.cosmos import CosmosClient

    # Semantic Kernel imports
    import semantic_kernel as sk
    import semantic_kernel.connectors.ai.open_ai as sk_oai
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
    from semantic_kernel.prompt_template import PromptTemplateConfig
    from semantic_kernel.prompt_template.input_variable import InputVariable
    from semantic_kernel.functions.kernel_arguments import KernelArguments

    # Configuration imports
    from config import (
        AZURE_COSMOSDB_DATABASE_NAME,
        AZURE_COSMOSDB_CONTAINER_NAME,
        AZURE_COSMOSDB_ENDPOINT,
        AZURE_COSMOSDB_KEY,
        AZURE_CONTENT_SAFETY_ENDPOINT,
        AZURE_CONTENT_SAFETY_KEY,
        AZURE_OPENAI_ENDPOINT,
        AZURE_OPENAI_KEY,
        AZURE_OPENAI_CHAT_DEPLOYMENT_NAME
    )

    class ChatbotService:
        def __init__(self, loop, user_id):
            """ Initialize the ChatbotService with event loop, user ID, and necessary Azure CosmosDB clients. """
            # Set up the event loop and thread pool executor for managing asynchronous tasks,
            # allowing Azure Cosmos DB operations to be handled in an asynchronous environment
            self.loop = loop
            self.executor = ThreadPoolExecutor()
            # Store the user ID, which is used as the partition key (/userId) in Azure CosmosDB
            self.user_id = user_id
            # Initialize the Azure Cosmos DB client
            self.cosmos_client = CosmosClient(AZURE_COSMOSDB_ENDPOINT, credential=AZURE_COSMOSDB_KEY)
            # Initialize the Azure Cosmos DB database client
            self.database = self.cosmos_client.get_database_client(AZURE_COSMOSDB_DATABASE_NAME)
            # Initialize the Azure Cosmos DB container client
            self.container = self.database.get_container_client(AZURE_COSMOSDB_CONTAINER_NAME)

        async def init(self):
            """ Initialize the Content Safety client and Semantic Kernel. """
            # Initialize the Azure Content Safety client
            self.content_safety_client = ContentSafetyClient(AZURE_CONTENT_SAFETY_ENDPOINT, AzureKeyCredential(AZURE_CONTENT_SAFETY_KEY))
            # Initialize the Semantic Kernel
            self.kernel = sk.Kernel()
            # Initialize the chat service for Azure OpenAI
            self.chat_service = AzureChatCompletion(
                service_id='chat_service',
                deployment_name=AZURE_OPENAI_CHAT_DEPLOYMENT_NAME,
                endpoint=AZURE_OPENAI_ENDPOINT,
                api_key=AZURE_OPENAI_KEY
            )
            # Add the chat service to the Semantic Kernel
            self.kernel.add_service(self.chat_service)
            # Define the prompt template configuration for the chatbot
            self.prompt_template_config = PromptTemplateConfig(
                template="""ChatBot can have a conversation with you about any topic.
    It can give explicit instructions or say 'I don't know' if it does not have an answer.

    {{$history}}
    User: {{$user_message}}
    ChatBot: """,
                name='chat_prompt_template',
                template_format='semantic-kernel',
                input_variables=[
                    InputVariable(name='user_message', description='The user message.', is_required=True),
                    InputVariable(name='history', description='The conversation history', is_required=True),
                ],
                execution_settings=sk_oai.OpenAIChatPromptExecutionSettings(
                    service_id='chat_service',
                    ai_model_id='gpt-3.5-turbo',
                    max_tokens=500,
                    temperature=0.7
                )
            )
            # Add the chat function to the Semantic Kernel
            self.chat_function = self.kernel.add_function(
                function_name="chat_function",
                plugin_name="chat_plugin",
                prompt_template_config=self.prompt_template_config,
            )
            return self

        async def analyze_text(self, text):
            """ Analyze the input text for safety using Azure Content Safety. """
            # Create a request with the input text to be analyzed
            request = AnalyzeTextOptions(text=text)
            try:
                # Send the request to the Azure Content Safety client and await the response
                response = await self.content_safety_client.analyze_text(request)
                # Get the analysis results for different categories
                results = {
                    'hate': next((item for item in response.categories_analysis if item.category == TextCategory.HATE), None),
                    'self_harm': next((item for item in response.categories_analysis if item.category == TextCategory.SELF_HARM), None),
                    'sexual': next((item for item in response.categories_analysis if item.category == TextCategory.SEXUAL), None),
                    'violence': next((item for item in response.categories_analysis if item.category == TextCategory.VIOLENCE), None),
                }
                # Print content safety analysis results
                print("\n<-- Content Safety Analysis Results -->")
                for category, result in results.items():
                    if result:
                        print(f"{category.capitalize()} severity: {result.severity}")
                print("<-- End of Content Safety Analysis Results -->\n")
                # Define a threshold for the text to be considered unsafe
                threshold = 2
                # Based on the threshold, determine if the text is safe
                is_safe = not any(result and result.severity >= threshold for result in results.values())
                return is_safe, results
            except HttpResponseError as e:
                # Handle any HTTP response errors that occur during the request
                print(f"Failed to analyze text. Error: {e}")

        async def chat_with_kernel(self, user_message, history):
            """ Interact with the chatbot using the Semantic Kernel and provided history. """
            # Create arguments for the chat function using the user message and conversation history stored in Azure Cosmos DB
            arguments = KernelArguments(user_message=user_message, history=history)
            # Invoke the chat function in the Semantic Kernel with the provided arguments and await the response
            response = await self.kernel.invoke(self.chat_function, arguments)
            # Return the chatbot's response
            return response

        async def store_interaction(self, user_message, chat_response):
            """ Store the user interaction with the chatbot in Azure Cosmos DB. """
            # Run the _store_interaction_sync method in an asynchronous execution environment
            await self.loop.run_in_executor(self.executor, self._store_interaction_sync, user_message, chat_response)

        def _store_interaction_sync(self, user_message, chat_response):
            """ Synchronously store the interaction in Azure Cosmos DB. """
            # Get the current time in UTC
            current_time = datetime.now(timezone.utc)
            # Upsert (insert or update) the interaction data into the Cosmos DB container
            self.container.upsert_item({
                'id': str(current_time.timestamp()),  # Use the current timestamp as a unique ID
                'user_message': user_message,  # Store the user message
                'bot_response': chat_response,  # Store the chatbot response
                'timestamp': current_time.isoformat(),  # Store the timestamp in ISO format
                'userId': self.user_id  # Store the user ID for partition key
            })

        async def load_historical_context(self):
            """ Load the user's chat history from Azure Cosmos DB. """
            # Run the _load_historical_context_sync method in an asynchronous execution environment
            return await self.loop.run_in_executor(self.executor, self._load_historical_context_sync)

        def _load_historical_context_sync(self):
            """ Synchronously load the user's chat history from Azure Cosmos DB. """
            # Define the query to select items for the current user, ordered by timestamp
            query = "SELECT * FROM c WHERE c.userId = @userId ORDER BY c.timestamp DESC"
            parameters = [{"name": "@userId", "value": self.user_id}]
            # Execute the query and retrieve the items
            items = list(self.container.query_items(query=query, parameters=parameters, enable_cross_partition_query=True))
            # Include only the last 5 conversations
            history_items = items[:5]
            # Format the conversation history
            return "\n".join([f"User: {item['user_message']}\nChatBot: {item['bot_response']}" for item in history_items])

        async def clear_historical_context(self):
            """ Clear the user's chat history from Azure Cosmos DB. """
            # Run the _clear_historical_context_sync method in an asynchronous execution environment
            await self.loop.run_in_executor(self.executor, self._clear_historical_context_sync)

        def _clear_historical_context_sync(self):
            """ Synchronously clear the user's chat history from Azure Cosmos DB. """
            # Define the query to select items for the current user
            query = "SELECT * FROM c WHERE c.userId = @userId"
            parameters = [{"name": "@userId", "value": self.user_id}]
            # Execute the query and retrieve the items
            items = list(self.container.query_items(query=query, parameters=parameters, enable_cross_partition_query=True))
            # Clear the chat history by deleting all items for the current user
            for item in items:
                self.container.delete_item(item, partition_key=self.user_id)

    async def main():
        """ Main function to run the chatbot service. """
        # Enter the User ID to join the chat.
        # A conversation history is stored based on the user ID.
        user_id = input("User ID: ")
        # Get the event loop to allow synchronous operations in Azure Cosmos DB for NoSQL to run asynchronously.
        loop = asyncio.get_running_loop()
        # Initialize the ChatbotService with the event loop and user ID
        chatbot_service = await ChatbotService(loop, user_id).init()
        # Main loop for the interaction with the user
        while True:
            user_message = input("You: ")
            # Exit the chat loop if the user types 'exit'
            if user_message.lower() == 'exit':
                break
            elif user_message.lower() == 'history':
                # Load and print the chat history if the user types 'history'
                history = await chatbot_service.load_historical_context()
                print(f"\n<-- Chat history of user ID: {user_id} -->")
                print(history)
                print(f"<-- End of chat history of user ID: {user_id} -->\n")
            elif user_message.lower() == 'clear':
                # Clear the chat history if the user types 'clear'
                await chatbot_service.clear_historical_context()
                print("Chat history cleared.")
            else:
                # Analyze the text for safety
                is_safe, _ = await chatbot_service.analyze_text(user_message)
                if is_safe:
                    # Load chat history and interact with the chatbot if the message is safe
                    history = await chatbot_service.load_historical_context()
                    chat_response = await chatbot_service.chat_with_kernel(user_message, history)
                    print("Chatbot:", chat_response)
                    # Store the interaction in Cosmos DB
                    await chatbot_service.store_interaction(user_message, str(chat_response))
                else:
                    # Inform the user if their message is not safe
                    print("Chatbot: Your message is not safe to process. Please rephrase and try again.")

    if __name__ == "__main__":
        asyncio.run(main())

 

Read the whole story
alvinashcraft
4 hours ago
reply
West Grove, PA
Share this story
Delete

Add a Cookie to an HttpClient Request/Response in ASP.NET Core


What are cookies? Websites send small pieces of text, known as internet cookies, to our browsers whenever we visit them. They help us have a personalized experience on that particular website and remember it for the next time we visit it. Our goal today is how to make that happen in ASP.NET Core. In this […]

The post Add a Cookie to an HttpClient Request/Response in ASP.NET Core appeared first on Code Maze.


AI is Overrated – Why ThePrimeagen Ripped Out GitHub Copilot From His Code Editor [Podcast #124]

On this week's episode of the freeCodeCamp podcast, I interview ThePrimeagen. He's a software engineer who streams himself programming. He recently left his job at Netflix to stream full-time. We talk about: Prime's journey from his teacher tellin...


Docker Compose Profiles, one of the most useful and underrated features



Erik Shafer asked me on the Emmett Discord if I could provide a sample of how to run the WebApi application using Emmett. Of course, I said: sure will! I already had a WebApi sample in the repository, and I had already explained How to build and push Docker image with GitHub actions?. Easy peasy, then, right?

Indeed, it wasn’t that hard. Of course, I had to fight with ESModules quirks, but I also expanded the scope a bit, as I also decided to use one of the most underrated Docker Compose features: Profiles.

What are Docker Compose Profiles?

They’re a way to logically group services inside your Docker Compose definition, allowing you to run a subset of services. My original Docker Compose definition contained the EventStoreDB startup, which I use in my Emmett samples as the real event store example.

version: '3.5'

services:
  eventstoredb:
    image: eventstore/eventstore:23.10.0-bookworm-slim
    container_name: eventstoredb
    environment:
      - EVENTSTORE_CLUSTER_SIZE=1
      - EVENTSTORE_RUN_PROJECTIONS=All
      - EVENTSTORE_START_STANDARD_PROJECTIONS=true
      - EVENTSTORE_EXT_TCP_PORT=1113
      - EVENTSTORE_HTTP_PORT=2113
      - EVENTSTORE_INSECURE=true
      - EVENTSTORE_ENABLE_EXTERNAL_TCP=true
      - EVENTSTORE_ENABLE_ATOM_PUB_OVER_HTTP=true
    ports:
      - '1113:1113'
      - '2113:2113'
    volumes:
      - type: volume
        source: eventstore-volume-data
        target: /var/lib/eventstore
      - type: volume
        source: eventstore-volume-logs
        target: /var/log/eventstore
    networks:
      - esdb_network

networks:
  esdb_network:
    driver: bridge

volumes:
  eventstore-volume-data:
  eventstore-volume-logs:

Nothing fancy here so far. You can just run it with:

docker compose up

It will start the database; then, you can run a sample application with

npm run start

And play with Emmett.

I wanted to keep the sample experience straightforward and use local development/debugging as the default. Docker image build and run would be optional (we could call it “Erik mode”!).

Now, profiles come in handy here, as they enable exactly that. I just had to add:

version: '3.5'

services:
  app:
    build:
      dockerfile: Dockerfile
      context: .
    container_name: emmett_api
    profiles: [app]
    environment:
      - ESDB_CONNECTION_STRING=esdb://eventstoredb:2113?tls=false
    networks:
      - esdb_network
    ports:
      - '3000:3000'

  # (...) EventStoreDB Definition

The setup is pretty straightforward.

We’re stating which Docker file to use and where it is located (. means that it is in the same folder as the Docker Compose file definition):

    build:
      dockerfile: Dockerfile
      context: .

We ensure that we have a connection to EventStoreDB by placing it in the same network and passing the connection string as an environment variable.

    environment:
      - ESDB_CONNECTION_STRING=esdb://eventstoredb:2113?tls=false
    networks:
      - esdb_network

The new thing is the profile definition:

    profiles: [app]

Thanks to that, we're saying that this service will only be used if we explicitly specify it on the command line. We can, for instance, build the image by running

docker compose --profile app build

Or run both EventStoreDB and Emmett WebApi by calling:

docker compose --profile app up

And let's stop here for a moment! Why both, if I specified the app profile? In this case, Docker Compose will run the specified profile AND all services that don't have any profile specified. That's quite neat, as we can define a set of default services (e.g. databases, messaging systems, etc.) and add others as optional. Ah, and you can specify multiple profiles, e.g.:

docker compose --profile backend --profile frontend up
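The default-plus-profiles behavior described above boils down to whether a service has a profiles key at all. A minimal fragment (with hypothetical service and image names) to illustrate:

```yaml
services:
  db:                       # no profiles key - starts with every `docker compose up`
    image: postgres:15.1-alpine
  app:                      # starts only when requested via `--profile app`
    image: example/app:latest
    profiles: [app]
```

So plain `docker compose up` starts only db, while `docker compose --profile app up` starts both.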

You can also group multiple services into a single profile. Why would you do it? Let’s go to…

Docker Profiles advanced scenario

In my Event Sourcing .NET samples repository, I’m trying to cover multiple aspects, tools, ways to build Event Sourcing, CQRS and Event-Driven systems. I’m using:

  • Marten (so Postgres) and EventStoreDB as example event stores,
  • Postgres and Elasticsearch as read model stores,
  • Kafka as a messaging system to show the integration between services,
  • UI tools like PgAdmin and Kafka UI to easier investigate sample data.

Multiple samples are using those services in various configurations.

Initially, I kept multiple Docker Compose files for:

  • default configuration with all services,
  • continuous integration pipeline configuration without UI components, as they're not needed for tests. They'd just eat resources and make pipeline runs longer. It also doesn't include Kafka, as I'm just testing the inner modules' functionality,
  • sample web API Docker Image build (similar to the one explained above),
  • only containing Postgres-related configurations,
  • accordingly, only with EventStoreDB,
  • etc.

I'm sure you can relate that to your projects. Now, how can Docker Compose Profiles help us with that? They can help us merge multiple configurations into one and make it easier to manage version updates, etc.

Let’s see the config I ended up with and then explain the reasoning. I’ll trim the detailed service configuration; you can check the whole file here.

version: "3"
services:
    #######################################################
    #  Postgres
    #######################################################
    postgres:
        profiles: [ postgres, postgres-all, all, all-no-ui, ci ]
        image: postgres:15.1-alpine
	# (...) rest of the config

    pgadmin:
        profiles: [ postgres-ui, postgres-all, all ]
        image: dpage/pgadmin4
	# (...) rest of the config

    jaeger:
        image: jaegertracing/all-in-one:latest
        profiles: [ otel, otel-all, all ]
	# (...) rest of the config

    #######################################################
    #  EventStoreDB
    #######################################################
    eventstore.db:
        image: eventstore/eventstore:23.10.0-bookworm-slim
        profiles: [ eventstoredb, eventstoredb-all, all, all-no-ui, ci ]
	# (...) rest of the config

    #######################################################
    #  Elastic Search
    #######################################################
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:8.13.2
        profiles: [ elastic, elastic-all, all, all-no-ui, ci ]
	# (...) rest of the config

    kibana:
        image: docker.elastic.co/kibana/kibana:8.13.2
        profiles: [ elastic-ui, elastic-all, all ]
	# (...) rest of the config

    #######################################################
    #  Kafka
    #######################################################
    kafka:
        image: confluentinc/confluent-local:7.6.1
        profiles: [kafka, kafka-all, all, all-no-ui]
	# (...) rest of the config

    init-kafka:
        image: confluentinc/confluent-local:7.6.1
        profiles: [ kafka, kafka-all, all, all-no-ui ]
        command: "#shell script to setup Kafka topics"
	# (...) rest of the config
    #######################################################
    #  Schema Registry
    #######################################################
    schema_registry:
        image: confluentinc/cp-schema-registry:7.6.1
        profiles: [ kafka-ui, kafka-all, all ]        
	# (...) rest of the config

    kafka_topics_ui:
        image: provectuslabs/kafka-ui:latest
        profiles: [ kafka-ui, kafka-all, all ]
        depends_on:
            - kafka
	# (...) rest of the config

    #######################################################
    #  Test Backend Service
    #######################################################
    backend:
        build:
            dockerfile: Dockerfile
            context: .
        profiles: [build]
	# (...) rest of the config

## (...) Network and Volumes config

As you see, we have a few general profiles:

  • postgres
  • elastic
  • kafka
  • eventstoredb
  • otel
  • build

They group the needed tooling containers.

Each of them has additional profiles with suffixes:

  • {profile}-all (e.g. postgres-all) - will start all the needed tooling containers plus supportive ones like the UI,
  • {profile}-ui (e.g. postgres-ui) - will start only the supportive UI components, while the plain {profile} starts just the needed tooling without UI. There's no {profile}-all-ui, as starting the UI without the actual components doesn't make sense.

I also defined additional profiles:

  • all - that’ll run all components,
  • ci - only the components needed for the CI pipeline (so no UI tools and no Kafka).

So by default, if I don’t mind my RAM being eaten by all containers, I’d run:

docker compose --profile all up

If I’d like to run the Marten sample with Elasticsearch read models, I could just run:

docker compose --profile postgres --profile elastic up

In the CI, I can run:

docker compose --profile ci up

It's important to find a balance and conventions for profile names. If you have too many of them, it'll be challenging for people to memorise them all. That's why grouping them and adding standard conventions can be helpful. We should always consider the intended usage and make the setup accessible. I could potentially provide profiles for dedicated samples.

Read more in the official Docker Compose Profiles guide.

See also the Pull Requests where I introduced the changes explained above:

If you get to this place, then you may also like my other articles around Docker and Continuous Integration:

Cheers!

Oskar

p.s. Ukraine is still under brutal Russian invasion. A lot of Ukrainian people are hurt, without shelter and need help. You can help in various ways, for instance, directly helping refugees, spreading awareness, putting pressure on your local government or companies. You can also support Ukraine by donating e.g. to Red Cross, Ukraine humanitarian organisation or donate Ambulances for Ukraine.
