Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.

How to use GitHub Copilot to level up your code reviews and pull requests


Since I joined GitHub as a software engineer on the billing team almost three years ago, I’ve had a front row seat to the evolution of AI coding tools, including GitHub Copilot. What started out as code completions has evolved into so much more, including agentic workflows and refactoring suggestions. When I first started using Copilot, I was mainly using it in VS Code. As Copilot has grown and expanded, I’ve extended my use cases beyond my code editor and into all parts of my day-to-day work, including pull requests, code reviews, and more.

GitHub Copilot is now available in all parts of the software development life cycle and one place where it can be extremely useful is when you’re creating pull requests and doing code reviews. During my time at GitHub, I’ve discovered some practical ways Copilot can make a difference during the pull request and code review processes. Here are a few things I’ve started doing that have made my workflow smoother and more efficient.

Using Copilot suggestions when reviewing code

Often, when I’m reviewing a teammate’s pull request, I’ll have an idea for how their code could be improved, or I’ll want to confirm the code is following best practices. However, I don’t always have time to write the suggested refactored code myself. In these cases, I’ll usually click the Copilot icon next to the file I’m reviewing and ask Copilot to suggest a specific improvement to the lines of code I’m currently reviewing. Then I’ll add that suggestion to my review comment along with some explanation of why I think it would improve the code.

Recently while reviewing a teammate’s code, I noticed some repetitive code in a Ruby file. I clicked the Copilot icon next to the relevant code and prompted it with:

> “Can you refactor this Ruby on Rails code to reduce repetition?”

Copilot’s suggestion matched what I would have written and saved me several minutes, which allowed me to focus on higher-level feedback.

Since I frequently switch context and review code in different languages like Ruby, TypeScript and Go, sometimes I’ll need a reminder on best practices for each language. I recently used this prompt to help me review some Go code:

> “Is this code addition following Go best practices for variable assignment? If not, can you suggest improvements?”

Copilot quickly generated a suggestion that I reviewed and used as part of my code review comment. It saved me time and made my feedback more actionable.

Screenshot of a GitHub pull request for a Go file, suggesting a change to use the short variable declaration “result := add(1, 2)” instead of “var result int”. A Copilot panel on the right summarizes the changes and explains the reasoning behind the suggestion.
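As a side note, a suggestion like this can be wrapped in GitHub’s suggested-changes syntax so the author can apply it with a single click. For the change in the screenshot above, the review comment body would look something like this:

```suggestion
result := add(1, 2)
```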

Formatting tables and data for pull requests

I like to add images, screen recordings, and occasionally spreadsheets to my pull request descriptions to add context to help reviewers understand my changes. Adding this type of information in Markdown can be tedious! In these cases, I ask Copilot to do the formatting for me. 

Recently, I was working on some improvements to help decrease the load time of a React page, and I wanted to show the improvements that my changes were making. I had been keeping a spreadsheet detailing the page load times before and after my changes, and I wanted to include it in my pull request description, so I gave Copilot this prompt:

Format this data into a GitHub-flavored Markdown table that I can paste into a GitHub pull request description:

Load Time Before (in seconds)    Load Time After Updates (in seconds)
1.3        1.2
1.2        1.1
1.1        0.885
1.3        1.3
1.2        0.918
Average    1.22    1.0806

And Copilot gave this response: 

Here’s your data formatted as a GitHub-flavored markdown table, ready to paste into a pull request description:
Markdown
| Test Run | Load Time Before (seconds) | Load Time After Updates (seconds) |
|----------|---------------------------|-----------------------------------|
| 1        | 1.3                       | 1.2                               |
| 2        | 1.2                       | 1.1                               |
| 3        | 1.1                       | 0.885                             |
| 4        | 1.3                       | 1.3                               |
| 5        | 1.2                       | 0.918                             |
| **Average** | **1.22**                  | **1.0806**                        |

Let me know if you’d like any adjustments!

This simple trick has made it much easier for me to clearly share data in my pull request descriptions and comments.

Letting Copilot help write pull request summaries

I often use Copilot to help me write pull request summaries. We’ve all been there: you finally open your pull request after fixing a bug or introducing a new feature, and the last hurdle is writing the pull request description. Copilot can kickstart this process for you via the Copilot icon in the pull request description editor bar. Even if I end up editing the text, having a starting point makes it less daunting.

Screenshot of a GitHub pull request page titled "Test PR #1" showing the conversation tab, with a Copilot menu offering options to generate a summary or an outline of the pull request changes.

Using Copilot for code reviews and understanding unfamiliar code

Copilot is very good at reviewing and explaining code, and two ways I leverage this in my day-to-day work are requesting initial code reviews from Copilot and asking questions about unfamiliar code.

Before I mark a pull request as ready for review, I’ll use Copilot to do a quick pass over my changes by requesting a code review from Copilot. It often catches things I might have missed or suggests a better way to write something. And don’t forget to add notes to your repository’s custom instructions about what you want Copilot to focus on when reviewing pull requests.
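For example, repository-level custom instructions live in a .github/copilot-instructions.md file. A sketch of review-focused notes (the contents here are illustrative, not from my actual repositories) might look like:

# Code review focus
When reviewing pull requests:
- Flag repetitive code that could be extracted into shared helpers.
- Check that changes follow our Ruby, TypeScript, and Go style conventions.
- Call out changed code paths that lack test coverage.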

If I’m reviewing someone else’s code and I don’t understand a change, I’ll ask Copilot to explain it. This helps me get context quickly, especially when I’m less familiar with that part of the codebase. This better understanding of the code allows me to provide more thoughtful and thorough code reviews for my teammates and ensures that I fully understand the potential impact of any pull request that I’m approving. 

Copilot’s impact on code reviews and pull requests 

While Copilot isn’t a replacement for thoughtful, engaged code reviews, it has become an indispensable tool in my daily workflow as a software engineer. From generating smart suggestions and code refactors, to quick Markdown formatting and drafting pull request summaries, Copilot helps streamline the work that surrounds writing code by making feedback more actionable and the code review process faster and more thorough. By integrating Copilot into every stage of the software development life cycle, I’m able to focus on higher-level engineering problems and collaboration. As Copilot continues to evolve, I’m excited to see how it will further transform not just my workflow, but the way developers everywhere build software together.

Get started with GitHub Copilot >

The post How to use GitHub Copilot to level up your code reviews and pull requests appeared first on The GitHub Blog.


Do not Interrupt Developers, Study Says


In a study called Breaking the Flow: A Study of Interruptions During Software Engineering Activities, researchers at Duke and Vanderbilt analyzed how interruptions influence three common engineering tasks: code writing, code comprehension, and code review.

Twenty participants performed these tasks while experiencing six types of interruptions, both in person and on screen.

The research showed that it takes 10-15 minutes for a developer to return to editing code after an interruption, and as much as 30-45 minutes to recover the full context they had before breaking focus. The cost is not just those 10 to 15 minutes; it also shows up as fragmented flow and decreased creativity. The study also found that the more important the requester, the greater the impact of the interruption.

A study done by GitHub estimates that interruptions can erase up to 82% of productive work time when developers face frequent disruptions from meetings, messages, and quick questions.

Each interruption can cost a dev 30 minutes

Developing software demands a complex internal mental model that tracks system architecture, problem logic, edge cases, and more. Interruptions shatter that model, forcing a restart. Whether the distraction comes from Slack, a teammate, or even internal thoughts, context switching costs both time and mental energy.

Developers themselves recognize the impact: Reddit discussions often cite 15-30 minutes lost per interruption, especially on complex tasks, and the cumulative effect means whole afternoons can vanish in broken focus.

What this means for teams

Interruptions not only waste time, but they also reduce code quality and increase bugs. The Duke study showed higher error rates during fragmented workdays and noted that rushed re-entry into complex tasks often leads to sloppy code.

Even self-imposed context switches, such as voluntarily checking messages or shifting between tasks, can be as disruptive as external ones, according to studies of software developers’ work habits.

Meet less, code more

Engineering leaders who want to protect the flow of their developers limit the number of meetings. Research shows that teams with just one meeting per day maintain daily progress nearly 99% of the time, while adding a third meeting drops progress to 14%.

Asynchronous communication – when answers to pings and messages are not expected to happen instantly – also helps. By answering messages in batches, software engineers can block periods of time for deep focus. Two hours of uninterrupted work delivers a 20% increase in focus time in organizations that track these metrics.

Open-plan layouts, split calendars, uncoordinated tools, and reactive meeting culture all chip away at developer mental bandwidth.

Want to solve the case? Do the research

Researchers also point out that interruptions are usually silent productivity killers that rarely, if ever, appear on developer productivity metrics dashboards.

They advise engineering organizations to use a combination of metrics and developer satisfaction surveys to understand the impact of interruptions. No fancy tools are needed; the goal is to get feedback from developers, establish a baseline, and work on improvements.

The post Do not Interrupt Developers, Study Says appeared first on ShiftMag.


Advanced Installer 22.9.1

Advanced Installer 22.9.1 was released on August 8th, 2025

What's the number one piece of advice you can give developers about security?

From: Microsoft Developer
Duration: 0:43
Views: 330

What’s the #1 piece of advice Michael Howard gives developers about security?
In this #OneDevQuestion, he explains why every input should be questioned.

Learn more: https://msft.it/6058sOulu

#AppSec #SecureCoding


Is No-Code Really Dead?

Is there still a place for no/low-code tools like those in Microsoft's Power Platform with the rise of generative AI and GitHub Copilot? The New Stack's news editor Darryl Taft joins Directions on Microsoft's Rob Sanfilippo and Mary Jo Foley to debate the topic.



Download audio: https://www.directionsonmicrosoft.com/wp-content/uploads/2025/08/season4ep13taftsanfilippo.mp3

Build a Recipe AI Agent with Koog and Docker

1 Share

Hi, I’m Philippe Charriere, a Principal Solutions Architect at Docker. I like to test new tools and see how they fit into real-world workflows. Recently, I set out to see if JetBrains’ Koog framework could run with Docker Model Runner, and what started as a quick test turned into something a lot more interesting than I expected. In this new blog post, we’ll explore how to create a small Koog agent specializing in ratatouille recipes using popular Docker AI tools (disclaimer: I’m French). We’ll be using:

  • Docker Model Runner to run an AI model locally
  • Docker Compose and its new “models” (agentic) feature
  • the Docker MCP Gateway to reach MCP servers
  • the Koog framework to write the agent in Kotlin

Prerequisites: Kotlin project initialization

Step 1: Gradle Configuration

Here’s my build configuration: build.gradle.kts

plugins {
    kotlin("jvm") version "2.1.21"
    application
}

group = "kitchen.ratatouille"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {
    testImplementation(kotlin("test"))
    implementation("ai.koog:koog-agents:0.3.0")
    implementation("org.slf4j:slf4j-simple:2.0.9")

}

application {
    mainClass.set("kitchen.ratatouille.MainKt")
}

tasks.test {
    useJUnitPlatform()
}

tasks.jar {
    duplicatesStrategy = DuplicatesStrategy.EXCLUDE
    manifest {
        attributes("Main-Class" to "kitchen.ratatouille.MainKt")
    }
    from(configurations.runtimeClasspath.get().map { if (it.isDirectory) it else zipTree(it) })
}

kotlin {
    jvmToolchain(23)
}
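
One detail worth noting: the Dockerfile in Step 3 below also copies a settings.gradle.kts file that isn’t shown here. A minimal version, assuming the project is named to match the ratatouille-1.0-SNAPSHOT.jar artifact the build produces, would be:

// settings.gradle.kts
// The root project name determines the jar base name (ratatouille-1.0-SNAPSHOT.jar).
rootProject.name = "ratatouille"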

Step 2: Docker Compose Project Configuration

The new “agentic” feature of Docker Compose allows defining the models to be used by Docker Compose services.

With the content below, I define that I will use the hf.co/menlo/lucy-128k-gguf:q4_k_m model from Hugging Face for my “Koog agent”.

models:
  app_model:
    model: hf.co/menlo/lucy-128k-gguf:q4_k_m

And I make the “link” between the koog-app service and the app_model model at the service level, as follows:

    models:
      app_model:
        endpoint_var: MODEL_RUNNER_BASE_URL
        model_var: MODEL_RUNNER_CHAT_MODEL

Docker Compose will automatically inject the MODEL_RUNNER_BASE_URL and MODEL_RUNNER_CHAT_MODEL environment variables into the koog-app service, which allows the Koog agent to connect to the model.

If you entered interactive mode in the koog-app container, you could verify that the environment variables are properly defined with the command:

env | grep '^MODEL_RUNNER'

And you would get something like:

MODEL_RUNNER_BASE_URL=http://model-runner.docker.internal/engines/v1/
MODEL_RUNNER_CHAT_MODEL=hf.co/menlo/lucy-128k-gguf:q4_k_m

It’s entirely possible to define multiple models.
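
For example (a hypothetical sketch; only app_model exists in this project), a second model could be declared and wired to its own pair of environment variables:

models:
  app_model:
    model: hf.co/menlo/lucy-128k-gguf:q4_k_m
  # hypothetical second model, for illustration only
  small_model:
    model: hf.co/some-org/some-model-gguf:q4_k_m

services:
  koog-app:
    models:
      app_model:
        endpoint_var: MODEL_RUNNER_BASE_URL
        model_var: MODEL_RUNNER_CHAT_MODEL
      small_model:
        endpoint_var: SMALL_MODEL_BASE_URL
        model_var: SMALL_MODEL_CHAT_MODEL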

The complete compose.yaml file looks like this:

services:

  koog-app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      SYSTEM_PROMPT: You are a helpful cooking assistant.
      AGENT_INPUT: How to cook a ratatouille?
    models:
      app_model:
        endpoint_var: MODEL_RUNNER_BASE_URL
        model_var: MODEL_RUNNER_CHAT_MODEL

models:
  app_model:
    model: hf.co/menlo/lucy-128k-gguf:q4_k_m

Step 3: Dockerfile

Next, we’ll need a Dockerfile to build the Docker image of our Koog application. The Dockerfile uses a multi-stage build to optimize the final image size, so it’s divided into two stages: one for building the application (build) and one for execution (runtime). Here’s the content of the Dockerfile:

# Stage 1: Build
FROM eclipse-temurin:23-jdk-noble AS build

WORKDIR /app

COPY gradlew .
COPY gradle/ gradle/
COPY build.gradle.kts .
COPY settings.gradle.kts .

RUN chmod +x ./gradlew

COPY src/ src/

# Build
RUN ./gradlew clean build

# Stage 2: Runtime
FROM eclipse-temurin:23-jre-noble AS runtime

WORKDIR /app

COPY --from=build /app/build/libs/ratatouille-1.0-SNAPSHOT.jar app.jar
CMD ["java", "-jar", "app.jar"]
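
If you want to sanity-check the image build on its own before wiring up Compose (optional; the koog-app tag below is arbitrary):

docker build -t koog-app .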

Step 4: Kotlin Side

Connecting to Docker Model Runner

Now, here’s the source code of our application in the src/main/kotlin/Main.kt file, which connects to Docker Model Runner. The API exposed by Docker Model Runner is compatible with the OpenAI API, so we’ll use Koog’s OpenAI client to interact with our model:

package kitchen.ratatouille

import ai.koog.prompt.executor.clients.openai.OpenAIClientSettings
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient

suspend fun main() {

    // Docker Model Runner doesn't require authentication, so any placeholder value works
    val apiKey = "nothing"
    // Both variables are injected by Docker Compose (see the models section of compose.yaml);
    // trim the trailing slash from the injected base URL before passing it to the client
    val customEndpoint = System.getenv("MODEL_RUNNER_BASE_URL").removeSuffix("/")
    val model = System.getenv("MODEL_RUNNER_CHAT_MODEL") // used in the next step to declare the LLModel

    val client = OpenAILLMClient(
        apiKey = apiKey,
        settings = OpenAIClientSettings(customEndpoint)
    )
}

First Koog Agent

Creating an agent with Koog is relatively simple, as you can see in the code below. We’ll need:

  • a SingleLLMPromptExecutor that will use the OpenAI client we created previously to execute requests to the model.
  • an LLModel that will define the model we’re going to use.
  • an AIAgent that will encapsulate the model and the prompt executor to execute requests.

Regarding the prompt, I use the SYSTEM_PROMPT environment variable to define the agent’s system prompt, and AGENT_INPUT to define the agent’s input (the “user message”). These variables were defined in the compose.yaml file previously:

    environment:
      SYSTEM_PROMPT: You are a helpful cooking assistant.
      AGENT_INPUT: How to cook a ratatouille?

And here’s the complete code of the Koog agent in the src/main/kotlin/Main.kt file:

package kitchen.ratatouille

import ai.koog.agents.core.agent.AIAgent
import ai.koog.prompt.executor.clients.openai.OpenAIClientSettings
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.llms.SingleLLMPromptExecutor
import ai.koog.prompt.llm.LLMCapability
import ai.koog.prompt.llm.LLMProvider
import ai.koog.prompt.llm.LLModel

suspend fun main() {

    val apiKey = "nothing"
    val customEndpoint = System.getenv("MODEL_RUNNER_BASE_URL").removeSuffix("/")
    val model = System.getenv("MODEL_RUNNER_CHAT_MODEL")

    val client = OpenAILLMClient(
        apiKey=apiKey,
        settings = OpenAIClientSettings(customEndpoint)
    )

    val promptExecutor = SingleLLMPromptExecutor(client)

    val llmModel = LLModel(
        provider = LLMProvider.OpenAI,
        id = model,
        capabilities = listOf(LLMCapability.Completion)
    )

    val agent = AIAgent(
        executor = promptExecutor,
        systemPrompt = System.getenv("SYSTEM_PROMPT"),
        llmModel = llmModel,
        temperature = 0.0
    )

    val recipe = agent.run(System.getenv("AGENT_INPUT"))

    println("Recipe:\n $recipe")

}

Running the project

All that’s left is to launch the project with the following command:

docker compose up --build --no-log-prefix

Then wait a moment; depending on your machine, the build and completion may take more or less time. I nevertheless chose Lucy 128k because it can run on small configurations, even without a GPU. This model also has the advantage of being quite good at “function calling” detection despite its small size (however, it doesn’t support parallel tool calls). You should finally get something like this in the console:

Recipe:
 Sure! Here's a step-by-step guide to cooking a classic ratatouille:

---

### **Ingredients**  
- 2 boneless chicken thighs or 1-2 lbs rabbit (chicken is common, but rabbit is traditional)  
- 1 small onion (diced)  
- 2 garlic cloves (minced)  
- 1 cup tomatoes (diced)  
- 1 zucchini (sliced)  
- 1 yellow squash or eggplant (sliced)  
- 1 bell pepper (sliced)  
- 2 medium potatoes (chopped)  
- 1 red onion (minced)  
- 2 tbsp olive oil  
- 1 tbsp thyme (or rosemary)  
- Salt and pepper (to taste)  
- Optional: 1/4 cup wine (white or red) to deglaze the pan  

---

### **Steps**  
1. **Prep the Ingredients**  
   - Dice the onion, garlic, tomatoes, zucchini, squash, bell pepper, potatoes.  
   - Sauté the chicken in olive oil until browned (about 10–15 minutes).  
   - Add the onion and garlic, sauté for 2–3 minutes.  

2. **Add Vegetables & Flavor**  
   - Pour in the tomatoes, zucchini, squash, bell pepper, red onion, and potatoes.  
   - Add thyme, salt, pepper, and wine (if using). Stir to combine.  
   - Add about 1 cup water or stock to fill the pot, if needed.  

3. **Slow Cook**  
   - Place the pot in a large pot of simmering water (or use a Dutch oven) and cook on low heat (around 200°F/90°C) for about 30–40 minutes, or until the chicken is tender.  
   - Alternatively, use a stovetop pot with a lid to cook the meat and vegetables together, simmering until the meat is cooked through.  

4. **Finish & Serve**  
   - Remove the pot from heat and let it rest for 10–15 minutes to allow flavors to meld.  
   - Stir in fresh herbs (like rosemary or parsley) if desired.  
   - Serve warm with crusty bread or on the plate as is.  

---

### **Tips**  
- **Meat Variations**: Use duck or other meats if you don't have chicken.  
- **Vegetables**: Feel free to swap out any vegetables (e.g., mushrooms, leeks).  
- **Liquid**: If the mixture is too dry, add a splash of water or stock.  
- **Serving**: Ratatouille is often eaten with bread, so enjoy it with a side of crusty bread or a simple salad.  

Enjoy your meal! 

As you can see, it’s quite simple to create an agent with Koog and Docker Model Runner! 

But we have a problem: I told you I was French, and the ratatouille recipe proposed by Lucy 128k doesn’t really suit me. There’s no rabbit, chicken, or duck in a ratatouille! Let’s see how to fix that.

Let’s add superpowers to our Koog agent with the Docker MCP Gateway

What I’d like to do now is have my application first search for information about ratatouille ingredients, and then have the Koog agent use this information to improve the recipe. For this, I’d like to use the DuckDuckGo MCP server that’s available on the Docker MCP Hub. And to make my life easier, I’m going to use the Docker MCP Gateway to access this MCP server.

Configuring the Docker MCP Gateway in Docker Compose

To use the Docker MCP Gateway, I’ll first modify the compose.yaml file to add the gateway configuration.

Configuring the gateway in the compose.yaml file

Here’s the configuration I added for the gateway in the compose.yaml file:

  mcp-gateway:
    image: docker/mcp-gateway:latest
    command:
      - --port=8811
      - --transport=sse
      - --servers=duckduckgo
      - --verbose
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

This configuration will create an mcp-gateway service that will listen on port 8811 and use the sse (Server-Sent Events) transport to communicate with MCP servers.

Important:

  • with --servers=duckduckgo I can filter the available MCP servers to only use the DuckDuckGo server.
  • the MCP Gateway will automatically pull the available MCP servers from the Docker MCP Hub.

The MCP Gateway is an open-source project that you can find here.

Next, I’ll modify the koog-app service so it can communicate with the gateway by adding the MCP_HOST environment variable that will point to the gateway URL, as well as the dependency on the mcp-gateway service:

    environment:
      MCP_HOST: http://mcp-gateway:8811/sse
    depends_on:
      - mcp-gateway

I’ll also modify the system prompt and user message:

    environment:
      SYSTEM_PROMPT: |
        You are a helpful cooking assistant.
        Your job is to understand the user prompt and decide if you need to use tools to run external commands.
      AGENT_INPUT: |
        Search for the ingredients to cook a ratatouille, max result 1
        Then, from these found ingredients, generate a yummy ratatouille recipe
        Do it only once

So here’s the complete compose.yaml file with the MCP Gateway configuration and the modifications made to the koog-app service:

services:

  koog-app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      SYSTEM_PROMPT: |
        You are a helpful cooking assistant.
        Your job is to understand the user prompt and decide if you need to use tools to run external commands.
      AGENT_INPUT: |
        Search for the ingredients to cook a ratatouille, max result 1
        Then, from these found ingredients, generate a yummy ratatouille recipe
        Do it only once
      MCP_HOST: http://mcp-gateway:8811/sse
    depends_on:
      - mcp-gateway
    models:
      app_model:
        # NOTE: populate the environment variables with the model runner endpoint and model name
        endpoint_var: MODEL_RUNNER_BASE_URL
        model_var: MODEL_RUNNER_CHAT_MODEL

  mcp-gateway:
    image: docker/mcp-gateway:latest
    command:
      - --port=8811
      - --transport=sse
      - --servers=duckduckgo
      - --verbose
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

models:
  app_model:
    model: hf.co/menlo/lucy-128k-gguf:q4_k_m 

Now, let’s modify the Kotlin code to use the MCP Gateway and search for ratatouille ingredients.

Modifying the Kotlin code to use the MCP Gateway

The modification is extremely simple; you just need to:

  • define the MCP transport (SseClientTransport) with the gateway URL: val transport = McpToolRegistryProvider.defaultSseTransport(System.getenv("MCP_HOST"))
  • create the MCP tools registry with the gateway: val toolRegistry = McpToolRegistryProvider.fromTransport(transport = transport, name = "sse-client", version = "1.0.0")
  • and finally, add the tools registry to the Koog agent constructor: toolRegistry = toolRegistry

Extremely important: I added capabilities = listOf(LLMCapability.Completion, LLMCapability.Tools) for the LLM model, because we’re going to use its “function calling” capabilities (the tools are defined and provided by the MCP server).

Here’s the complete code of the Koog agent modified to use the MCP Gateway in the src/main/kotlin/Main.kt file:

package kitchen.ratatouille

import ai.koog.agents.core.agent.AIAgent
import ai.koog.agents.mcp.McpToolRegistryProvider
import ai.koog.prompt.executor.clients.openai.OpenAIClientSettings
import ai.koog.prompt.executor.clients.openai.OpenAILLMClient
import ai.koog.prompt.executor.llms.SingleLLMPromptExecutor
import ai.koog.prompt.llm.LLMCapability
import ai.koog.prompt.llm.LLMProvider
import ai.koog.prompt.llm.LLModel

suspend fun main() {

    val transport = McpToolRegistryProvider.defaultSseTransport(System.getenv("MCP_HOST"))
    // Create a tool registry with tools from the MCP server
    val toolRegistry = McpToolRegistryProvider.fromTransport(
        transport = transport,
        name = "sse-client",
        version = "1.0.0"
    )
    println(toolRegistry.tools)

    val apiKey = "nothing"
    val customEndpoint = System.getenv("MODEL_RUNNER_BASE_URL").removeSuffix("/")
    val model = System.getenv("MODEL_RUNNER_CHAT_MODEL")

    val client = OpenAILLMClient(
        apiKey=apiKey,
        settings = OpenAIClientSettings(customEndpoint)
    )

    val promptExecutor = SingleLLMPromptExecutor(client)

    val llmModel = LLModel(
        provider = LLMProvider.OpenAI,
        id = model,
        capabilities = listOf(LLMCapability.Completion, LLMCapability.Tools)
    )

    val agent = AIAgent(
        executor = promptExecutor,
        systemPrompt = System.getenv("SYSTEM_PROMPT"),
        llmModel = llmModel,
        temperature = 0.0,
        toolRegistry = toolRegistry
    )

    val recipe = agent.run(System.getenv("AGENT_INPUT"))

    println("Recipe:\n $recipe")

}

Launching the project with the MCP Gateway

Let’s launch the project again with the command:

docker compose up --build --no-log-prefix

And after a while, you should get a new ratatouille recipe, this time based on the search results returned by the DuckDuckGo MCP server (via the MCP Gateway). The response time will be a bit longer because the LLM first queries the MCP server to get the ratatouille ingredients, then generates the recipe. The DuckDuckGo MCP server will search for links and then retrieve the content of those links (indeed, it exposes two tools: search and fetch_content).

Here’s an example of what you might get with an improved and more “authentic” ratatouille recipe:

Recipe:
 Here's a **complete and easy-to-follow version** of **Ratatouille**, based on the recipe you provided, with tips and variations to suit your preferences:

---

###  **What Is Ratatouille?**  
A classic French vegetable stew, traditionally made with eggplant, tomatoes, zucchini, bell peppers, onions, and mushrooms. It's often seasoned with herbs like parsley, thyme, or basil and paired with crusty bread or a light sauce.

---

###  **Ingredients** (for 4 servings):  
- **1/2 cup olive oil** (divided)  
- **2 tbsp olive oil** (for the skillet)  
- **3 cloves garlic**, minced  
- **1 eggplant**, cubed  
- **2 zucchinis**, sliced  
- **2 large tomatoes**, chopped  
- **2 cups fresh mushrooms**, sliced  
- **1 large onion**, sliced  
- **1 green or red bell pepper**, sliced  
- **1/2 tsp dried parsley**  
- **Salt to taste**  
- **1/2 cup grated Parmesan cheese** (or pecorino, as you mentioned)  

---

###  **How to Make Ratatouille**  
**Preheat oven** to 350°F (175°C).  

1. **Prepare the dish**: Coat a 1½-quart casserole dish with 1 tbsp olive oil.  
2. **Cook the base**: In a skillet, sauté garlic until fragrant (about 1–2 minutes). Add eggplant, parsley, and salt; cook for 10 minutes until tender.  
3. **Layer the vegetables**: Spread the eggplant mixture in the dish, then add zucchini, tomatoes, mushrooms, onion, and bell pepper. Top with Parmesan.  
4. **Bake**: Cover and bake for 45 minutes. Check for tenderness; adjust time if needed.  

**Cook's Note**:  
- Add mushrooms (optional) or omit for a traditional flavor.  
- Use fresh herbs like thyme or basil if preferred.  
- Substitute zucchini with yellow squash or yellow bell pepper for color.  

---

###  **How to Serve**  
- **Main dish**: Serve with crusty French bread or rice.  
- **Side**: Pair with grilled chicken or fish.  
- **Guilt-free twist**: Add black olives or a sprinkle of basil/others for a lighter version.  

---

Conclusion

This blog post perfectly illustrates the modern containerized AI ecosystem that Docker is building. By combining Docker Model Runner, Agentic Compose, Docker MCP Gateway, and the Koog framework (but we could of course use other frameworks), we were able to create an “intelligent” agent quite simply.

  • Docker Model Runner allowed us to use an AI model locally.
  • Agentic Compose simplified the integration of the model into our application by automatically injecting the necessary environment variables.
  • The Docker MCP Gateway transformed our little agent into a system capable of interacting with the outside world.
  • The Koog framework allowed us to orchestrate these components in Kotlin.

Soon, I’ll go deeper into the MCP Gateway and how to use it with your own MCP servers, and not just with Koog. I’m also continuing my explorations with Koog and Docker Model Runner. The entire source code of this project is available here.

Learn more
