
How to Build an AI Social Media Post Scheduler Using Gemini and Late API in Next.js


Social media has become a vital tool for people and businesses to share ideas, promote products, and connect with their target audience. But creating posts regularly and managing schedules across multiple platforms can be time-consuming and repetitive.

In this tutorial, you’ll learn how to build an AI-powered social media post scheduler using Gemini, Late API, and Next.js.

We’ll use the Gemini API to generate engaging social media content from user prompts, Next.js to handle both the frontend and backend of the application, and Late API to publish and schedule posts across multiple social media platforms from a single place.

Social media platforms


Prerequisites

To fully understand this tutorial, you need to have a basic understanding of React or Next.js.

We will use the following tools:

  • Late API: A social media API that lets you create and schedule posts across 13 social media platforms from a single dashboard.

  • Next.js: A React framework for building fast, scalable web applications, handling both the frontend and backend.

  • Google Gemini API: Provides access to Google’s AI models for generating text and other content based on user prompts.

Setup and Installation

Create a new Next.js project using the following code snippet:

npx create-next-app post-scheduler

Install the project dependencies: the Google Gen AI SDK (@google/genai) for Gemini, and Day.js to work with JavaScript dates, making it easier to schedule and publish social media posts at the correct time.

npm install @google/genai dayjs

Next, add a .env.local file containing your Gemini API key at the root of your Next.js project:

GEMINI_API_KEY=<your_API_key>

Once everything is set up, your Next.js project is ready. Now, let's start building! 🚀

Late API and the available social media platforms

How to Schedule Social Media Posts with Late

Late is an all-in-one social media scheduling platform that allows you to connect your social media accounts and publish posts across multiple platforms. In this section, you’ll learn how to create and schedule social media posts using the Late dashboard.

To get started, create a Late account and sign in.

Sign in and get Late API key

Create an API key and add it to the .env.local file within your Next.js project.

LATE_API_KEY=<your_API_key>

Copy Late API key

Connect your social media accounts to Late so you can manage and publish posts across all platforms.

Social media platforms

After connecting your social media accounts via OAuth, you can start writing, posting, and scheduling content directly to your social media platforms.

Twitter (X) account connected

Late lets you write your post content and attach media files directly from the dashboard.

Create Social media contents from your dashboard

You can choose when your content should be published: post immediately, schedule for later, add it to a job queue, or save it as a draft.

Publish your post

Once a post is published, you can view its status and preview it directly in the dashboard using the post link.

Social media post created with Late

🎉 Congratulations! You’ve successfully created your first post using the Late dashboard. In the next sections, you’ll learn how to use the Late API to create and schedule posts directly from your applications.

How to Build the Next.js App Interface

In this section, you’ll build the user interface for the application. The app uses a single-page route with conditional rendering to display recent posts, an AI prompt input field, and a form that allows users to create or schedule posts.

App Overview

Before we proceed, create a types.d.ts file within your Next.js project and copy the following code snippet into the file:

interface Post {
    _id: string;
    content: string;
    scheduledFor: string;
    status: string;
}

interface AIFormProps {
    handleGeneratePost: (e: React.FormEvent<HTMLFormElement>) => void;
    useAI: boolean;
    setUseAI: React.Dispatch<React.SetStateAction<boolean>>;
    prompt: string;
    setPrompt: React.Dispatch<React.SetStateAction<string>>;
    disableBtn: boolean;
}

interface FormProps {
    handlePostSubmit: (e: React.FormEvent<HTMLFormElement>) => void;
    content: string;
    setContent: React.Dispatch<React.SetStateAction<string>>;
    date: string;
    setDate: React.Dispatch<React.SetStateAction<string>>;
    disableBtn: boolean;
    setUseAI: React.Dispatch<React.SetStateAction<boolean>>;
    useAI: boolean;
}

The types.d.ts file defines all the data structures and type declarations used throughout the application.

Copy the following code snippet into the app/page.tsx file:

"use client";
import Nav from "./components/Nav";
import { useState } from "react";
import NewPost from "./components/NewPost";
import PostsQueue from "./components/PostsQueue";

export default function Page() {
    const [showPostQueue, setShowPostQueue] = useState<boolean>(false);
    return (
        <div className='w-full h-screen'>
            <Nav showPostQueue={showPostQueue} setShowPostQueue={setShowPostQueue} />
            {showPostQueue ? <PostsQueue /> : <NewPost />}
        </div>
    );
}

The Page component renders the Nav component and uses conditional rendering to display either the PostsQueue or NewPost component based on the value of the showPostQueue state.

Create a components folder to store the page components used in the application.

cd app
mkdir components && cd components
touch Nav.tsx NewPost.tsx PostElement.tsx PostsQueue.tsx

Add the code snippet below to the Nav.tsx file:

export default function Nav({
    showPostQueue,
    setShowPostQueue,
}: {
    showPostQueue: boolean;
    setShowPostQueue: React.Dispatch<React.SetStateAction<boolean>>;
}) {
    return (
        <nav>
            <h2>Post Scheduler</h2>

            <button onClick={() => setShowPostQueue(!showPostQueue)}>
                {showPostQueue ? "New Post" : "Schedule Queue"}
            </button>
        </nav>
    );
}

Copy the following code snippet into the PostsQueue.tsx file:

"use client";
import { useEffect, useState, useCallback } from "react";
import PostElement from "./PostElement";

export default function PostsQueue() {
    const [posts, setPosts] = useState<Post[]>([]);
    const [loading, setLoading] = useState<boolean>(true);

    return (
        <div className='p-4'>
            <h2 className='text-xl font-bold'>Scheduled Posts</h2>

            {loading ? (
                <p className='text-sm'>Loading scheduled posts...</p>
            ) : (
                <div className='mt-4'>
                    {posts.length > 0 ? (
                        posts.map((post) => <PostElement key={post._id} post={post} />)
                    ) : (
                        <p>No scheduled posts available.</p>
                    )}
                </div>
            )}
        </div>
    );
}

The PostsQueue.tsx component displays a list of previously created posts along with their current status, showing whether each post has been published or scheduled for a later time. While the data is being loaded, it shows a loading message, and once loaded, it renders each post using the PostElement component.

Add the following to the PostElement.tsx component:

export default function PostElement({ post }: { post: Post }) {
    const formatReadableTime = (isoString: string) => {
        const date = new Date(isoString); // parses UTC automatically
        return date.toLocaleString(undefined, {
            year: "numeric",
            month: "short",
            day: "numeric",
            hour: "2-digit",
            minute: "2-digit",
            second: "2-digit",
            hour12: true, // set to false for 24h format
        });
    };

    return (
        <div className='p-4 border flex items-center justify-between  space-x-4 rounded mb-2 hover:bg-gray-100 cursor-pointer'>
            <div>
                <p className='font-semibold text-sm'>{post.content.slice(0, 100)}</p>
                <p className='text-blue-400 text-xs'>
                    Scheduled for: {formatReadableTime(post.scheduledFor)}
                </p>
            </div>

            <p className='text-sm text-red-500'>{post.status}</p>
        </div>
    );
}

Finally, copy the following code snippet into the NewPost.tsx file:

"use client";
import { useState } from "react";

export default function NewPost() {
 const [disableBtn, setDisableBtn] = useState<boolean>(false);
 const [useAI, setUseAI] = useState<boolean>(false);
 const [content, setContent] = useState<string>("");
 const [prompt, setPrompt] = useState<string>("");
 const [date, setDate] = useState<string>("");

 //👇🏻 generates post content
 const handleGeneratePost = async (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault();
  setDisableBtn(true);
 };

 //👇🏻 create/schedule post
 const handlePostSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault();
 };

 return (
  <div className='w-full p-4  h-[90vh] flex flex-col items-center justify-center border-t'>
   <h3 className='text-xl font-bold'>New Post</h3>

   {useAI ? (
    <AIPromptForm
     handleGeneratePost={handleGeneratePost}
     useAI={useAI}
     setUseAI={setUseAI}
     prompt={prompt}
     setPrompt={setPrompt}
     disableBtn={disableBtn}
    />
   ) : (
    <PostForm
     handlePostSubmit={handlePostSubmit}
     content={content}
     setContent={setContent}
     date={date}
     setDate={setDate}
     disableBtn={disableBtn}
     setUseAI={setUseAI}
     useAI={useAI}
    />
   )}
  </div>
 );
}

The NewPost component conditionally renders the AIPromptForm and the PostForm. When a user chooses to generate content using AI, the AIPromptForm component is displayed to collect the prompt. Once the content is generated, the PostForm component is shown, allowing the user to edit, create, or schedule the post.

Add the components below inside the NewPost.tsx file:

export const AIPromptForm = ({
    handleGeneratePost,
    useAI,
    setUseAI,
    prompt,
    setPrompt,
    disableBtn,
}: AIFormProps) => {
    return (
        <form onSubmit={handleGeneratePost}>
            <p onClick={() => setUseAI(!useAI)}>Exit AI </p>
            <textarea
                rows={3}
                required
                value={prompt}
                onChange={(e) => setPrompt(e.target.value)}
                placeholder='Enter prompt...'
            />
            <button type='submit' disabled={disableBtn}>
                {disableBtn ? "Generating..." : "Generate Post with AI"}
            </button>
        </form>
    );
};

// 👇🏻 Post Form component
export const PostForm = ({
    handlePostSubmit,
    content,
    setContent,
    date,
    setDate,
    disableBtn,
    setUseAI,
    useAI,
}: FormProps) => {
    const getNowForDatetimeLocal = () => {
        const now = new Date();
        return new Date(now.getTime() - now.getTimezoneOffset() * 60000)
            .toISOString()
            .slice(0, 16);
    };

    return (
        <form onSubmit={handlePostSubmit}>
            <p onClick={() => setUseAI(!useAI)}>Generate posts with AI </p>
            <textarea
                value={content}
                onChange={(e) => setContent(e.target.value)}
                rows={4}
                placeholder="What's happening?"
                required
                maxLength={280}
            />
            <input
                type='datetime-local'
                min={getNowForDatetimeLocal()}
                value={date}
                onChange={(e) => setDate(e.target.value)}
            />
            <button disabled={disableBtn} type='submit'>
                {disableBtn ? "Posting..." : "Create post"}
            </button>
        </form>
    );
};

Congratulations! You've completed the application interface.

How to integrate Gemini API for Post Generation

Here, you will learn how to generate post content from the user's prompt using the Gemini API.

Before we proceed, make sure you have copied your API key from the Google AI Studio.

Create Gemini API key

Create an api folder inside the Next.js app directory. This folder will contain the API routes used to generate AI content and create or schedule posts using the Late API.

cd app && mkdir api

Next, create a generate folder inside the api directory and add a route.ts file. Copy the following code into the file:

// 👇🏻 In api/generate/route.ts file
import { NextRequest, NextResponse } from "next/server";
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY! });

export async function POST(req: NextRequest) {
    const { prompt } = await req.json();

    try {
        const response = await ai.models.generateContent({
            model: "gemini-3-flash-preview",
            contents: `
    You are a social media post generator, highly effective at writing engaging posts for Twitter (X). Given a topic, generate a creative and engaging post that captures attention and encourages interaction. The post must always stay within X's (Twitter's) 280-character limit, which includes any hashtags, mentions, spaces, punctuation, and emojis.

    The user will provide a topic or theme, and you will generate a post based on that input.
    Here is the instruction from the user:
    "${prompt}"`,
        });
        if (!response.text) {
            return NextResponse.json(
                {
                    message: "Encountered an error generating the post.",
                    success: false,
                },
                { status: 400 },
            );
        }

        return NextResponse.json(
            { message: response.text, success: true },
            { status: 200 },
        );
    } catch (error) {
        return NextResponse.json(
            { message: "Error generating post.", success: false },
            { status: 500 },
        );
    }
}

The api/generate endpoint accepts the user's prompt and generates post content using the Gemini API.

Now you can send a request to the newly created /api/generate endpoint from the NewPost component. Update the handleGeneratePost function as shown below:

const handleGeneratePost = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    setDisableBtn(true);
    const result = await fetch("/api/generate", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify({ prompt }),
    });

    const data = await result.json();
    if (data.success) {
        setUseAI(false);
        setContent(data.message);
        setPrompt("");
    }
    setDisableBtn(false);
};

The handleGeneratePost function accepts the user's prompt and returns the AI-generated content.

How to Use Late API in Next.js

Late provides API endpoints that let you create, schedule, and manage posts programmatically. This allows you to integrate social media posting directly into your applications or automation workflows.

To get started, copy your Late API key and the account ID of your social media platforms into the .env.local file:

LATE_API_KEY=<Late_API_key>
ACCOUNT_ID=<social_media_acct_id>

# Gemini API key
GEMINI_API_KEY=<gemini_API_key>

Connect Twitter (X) account and copy account ID

Note: In this tutorial, we will be using Twitter (X) as the social media platform for scheduling posts. You can adapt the same workflow to other platforms supported by Late API by updating the platform and accountId values in your API requests.

Create an api/post endpoint to accept post content and schedule or publish posts using the Late API.

cd api
mkdir post && cd post
touch route.ts

Then, add the following POST method to post/route.ts:

import { NextRequest, NextResponse } from "next/server";
import utc from "dayjs/plugin/utc";
import dayjs from "dayjs";

dayjs.extend(utc);

export async function POST(req: NextRequest) {
    const { content, publishAt } = await req.json();

    // Determine if the post should be scheduled or published immediately
    const nowUTC = publishAt ? dayjs(publishAt).utc() : null;
    const publishAtUTC = nowUTC ? nowUTC.format("YYYY-MM-DDTHH:mm") : null;

    try {
        const response = await fetch("https://getlate.dev/api/v1/posts", {
            method: "POST",
            headers: {
                Authorization: `Bearer ${process.env.LATE_API_KEY}`,
                "Content-Type": "application/json",
            },
            body: JSON.stringify({
                content,
                platforms: [
                    {
                        platform: "twitter",
                        accountId: process.env.ACCOUNT_ID!,
                    },
                ],
                publishNow: !publishAt,
                scheduledFor: publishAtUTC,
            }),
        });

        const { post, message } = await response.json();

        if (post?._id) {
            return NextResponse.json({ message, success: true }, { status: 201 });
        }

        return NextResponse.json({ message: "Error occurred", success: false }, { status: 500 });
    } catch (error) {
        return NextResponse.json({ message: "Error scheduling post.", success: false }, { status: 500 });
    }
}

From the code snippet above:

  • The api/post endpoint accepts the post’s content and an optional publishAt time.

  • If publishAt is null, the post is published immediately. Otherwise, the time is converted to UTC for scheduling.

  • It then sends a request to the Late API using your API key and the account ID to create or schedule the post on the selected social media platform.

You can also add a GET method to the /api/post endpoint to retrieve posts that have already been created or scheduled:

export async function GET() {
    try {
        const response = await fetch(
            "https://getlate.dev/api/v1/posts?platform=twitter",
            {
                method: "GET",
                headers: {
                    Authorization: `Bearer ${process.env.LATE_API_KEY}`,
                    "Content-Type": "application/json",
                },
            },
        );

        const { posts } = await response.json();

        return NextResponse.json({ posts }, { status: 200 });
    } catch (error) {
        return NextResponse.json(
            { message: "Error fetching posts.", success: false },
            { status: 500 },
        );
    }
}

Next, update the handlePostSubmit function in NewPost.tsx to send a POST request to /api/post. This will create or schedule the post and notify the user of the result:

const handlePostSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    setDisableBtn(true);

    const now = new Date();
    const selected = date ? new Date(date) : null;
    const publishAt = !selected || selected <= now ? null : date;

    const result = await fetch("/api/post", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ content, publishAt }),
    });

    const { message, success } = await result.json();

    if (success) {
        setContent("");
        setDate("");
        alert("Success: " + message);
    } else {
        alert("Error: " + message);
    }

    setDisableBtn(false);
};

Finally, fetch all scheduled or published posts and render them in the PostsQueue component:

const fetchScheduledPosts = useCallback(async () => {
    try {
        const response = await fetch("/api/post", {
            method: "GET",
            headers: { "Content-Type": "application/json" },
        });
        const data = await response.json();
        setPosts(data.posts);
        setLoading(false);
    } catch (error) {
        console.error("Error fetching scheduled posts:", error);
        setLoading(false);
    }
}, []);

useEffect(() => {
    fetchScheduledPosts();
}, [fetchScheduledPosts]);

🎉 Congratulations! You’ve successfully built an AI-powered social media post scheduler using Next.js, Gemini API, and Late API.

The source code for this tutorial is available on GitHub.

Conclusion

In this tutorial, you’ve learnt how to create and schedule social media posts across multiple platforms using a single scheduling platform, Late, and how to generate AI content using the Gemini API.

The Late API is a powerful tool for automating social media tasks, posting at specific intervals, managing multiple accounts, and tracking analytics – all from one platform. By combining it with generative AI models like Gemini and automation tools like n8n or Zapier, you can build automated workflows that keep your audience engaged with minimal effort.

The Gemini API also makes it easy to integrate AI-powered text, images, or code generation directly into your applications, opening up a wide range of creative possibilities.

Thank you for reading! 🎉




Containerize Your .NET Applications Without a Dockerfile


Containers have become the standard for deploying modern applications. But if you've ever written a Dockerfile, you know it can be tedious. You need to understand multi-stage builds, pick the right base images, configure the right ports, and remember to copy files in the correct order.

What if I told you that you don't need a Dockerfile at all?

Since .NET 7, the SDK has built-in support for publishing your application directly to a container image. You can do this with a single dotnet publish command.

In this week's newsletter, we'll explore:

  • Why Dockerfile-less publishing matters
  • How to enable container publishing in your project
  • Customizing the container image
  • Publishing to container registries
  • How I'm using this to deploy to a VPS

The Traditional Approach: Writing a Dockerfile

Before we look at the SDK approach, let's see what we're replacing.

A typical multi-stage Dockerfile for a .NET application looks like this:

FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS base
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["src/MyApi/MyApi.csproj", "src/MyApi/"]
RUN dotnet restore "src/MyApi/MyApi.csproj"

COPY . .
WORKDIR "/src/src/MyApi"
RUN dotnet build "MyApi.csproj" -c $BUILD_CONFIGURATION  -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "MyApi.csproj" -c $BUILD_CONFIGURATION -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyApi.dll"]

This works, but there's a learning curve and maintenance overhead:

  • Maintenance burden: You need to update base image tags manually
  • Layer caching: Getting the COPY order wrong kills your build cache
  • Duplication: Every project needs a similar Dockerfile
  • Context switching: You're writing Docker DSL, not .NET code

The .NET SDK approach eliminates all of this.

Enabling Container Publishing

If you're running on .NET 10, you don't need to do anything special to enable container publishing. This will work for ASP.NET Core apps, worker services, and console apps.

You can publish directly to a container image:

dotnet publish --os linux --arch x64 /t:PublishContainer

That's it. The .NET SDK will:

  1. Build your application
  2. Select the appropriate base image
  3. Create a container image with your published output
  4. Load it into your local OCI-compliant daemon

The most popular option is Docker, but it also works with Podman.
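You can then run the published image like any other local image. As an example, assuming a project named MyApi and the default image name and latest tag (both assumptions here, since the defaults come from your project name):

docker run --rm -p 8080:8080 myapi:latest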

An image showing the output of the dotnet publish command creating a container image.

Customizing the Container Image

The SDK provides sensible defaults, but you'll often want to customize the image. For a more comprehensive list of options, see the official docs.

I'll cover the most common customizations here.

Setting the Image Name and Tag

The ContainerRepository property sets the image name (repository). The ContainerImageTags property sets one or more tags (separated by semicolons). If you want a single tag, you can use ContainerImageTag instead.

<PropertyGroup>
  <ContainerRepository>ghcr.io/USERNAME/REPOSITORY</ContainerRepository>
  <ContainerImageTags>1.0.0;latest</ContainerImageTags>
</PropertyGroup>

From .NET 8 onwards, when a tag isn't provided, the default is latest.
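For a single tag, the equivalent with ContainerImageTag looks like this (the version number is just an example):

<PropertyGroup>
  <ContainerImageTag>1.0.0</ContainerImageTag>
</PropertyGroup>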

Choosing a Different Base Image

By default, the SDK uses the following base images:

  • mcr.microsoft.com/dotnet/runtime-deps for self-contained apps
  • mcr.microsoft.com/dotnet/aspnet for ASP.NET Core apps
  • mcr.microsoft.com/dotnet/runtime for other cases

You can switch to a smaller or different image:

<PropertyGroup>
  <!-- Use the Alpine-based image for smaller size -->
  <ContainerBaseImage>mcr.microsoft.com/dotnet/aspnet:10.0-alpine</ContainerBaseImage>
</PropertyGroup>

You could also do this by setting ContainerFamily to alpine, and letting the rest be inferred.
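That alternative looks like the snippet below; the SDK then infers the matching Alpine variant of the default base image for you:

<PropertyGroup>
  <ContainerFamily>alpine</ContainerFamily>
</PropertyGroup>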

Here's the size difference between the default and Alpine images for an ASP.NET Core app:

An image showing the size difference between the default and Alpine base images for ASP.NET Core applications.
| Base Image                                  | Size (MB) |
| ------------------------------------------- | --------- |
| mcr.microsoft.com/dotnet/aspnet:10.0        | 231.73    |
| mcr.microsoft.com/dotnet/aspnet:10.0-alpine | 122.65    |

You can see a significant size reduction by switching to alpine.

Configuring Ports

For web applications, the default exposed ports are 8080 and 8081 for HTTP and HTTPS. These are inferred from ASP.NET Core environment variables (ASPNETCORE_URLS, ASPNETCORE_HTTP_PORT, ASPNETCORE_HTTPS_PORT). The Type attribute can be tcp or udp.

<ItemGroup>
  <ContainerPort Include="8080" Type="tcp" />
  <ContainerPort Include="8081" Type="tcp" />
</ItemGroup>

Publishing to a Container Registry

Publishing locally is useful for development, but you'll want to push to a registry for deployment. You can specify the target registry during publishing.

Here's an example publishing to GitHub Container Registry:

dotnet publish --os linux --arch x64  /t:PublishContainer /p:ContainerRegistry=ghcr.io

Authentication: The SDK uses your local Docker credentials. Make sure you've logged in with docker login before publishing to a remote registry.

However, I don't use the above approach. I prefer using docker CLI for the publishing step, as it gives me more control over authentication and tagging.

CI/CD Integration

Here's what I'm doing in my GitHub Actions workflow to build and push my .NET app container. I left out the boring bits of setting up the .NET environment and checking out code.

This will build the container image, tag it, and push it to GitHub Container Registry:

- name: Publish
  run: dotnet publish "${{ env.WORKING_DIRECTORY }}" --configuration ${{ env.CONFIGURATION }} --os linux -t:PublishContainer
# Tag the build for later steps
- name: Log in to ghcr.io
  run: echo "${{ env.DOCKER_PASSWORD }}" | docker login ghcr.io -u "${{ env.DOCKER_USERNAME }}" --password-stdin
- name: Tag Docker image
  run: |
    docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    docker tag ${{ env.IMAGE_NAME }}:latest ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:latest
- name: Push Docker image
  run: |
    docker push ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    docker push ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:latest

Once my images are up in the registry, I can deploy them to my VPS.

I'm using Dokploy (a simple but powerful deployment tool for Docker apps) to pull the latest image and restart my service.

deploy:
  runs-on: ubuntu-latest
  needs: build-and-publish
  steps:
    - name: Trigger deployment
      run: |
        curl -X POST ${{ env.DEPLOYMENT_TRIGGER_URL }} \
          -H 'accept: application/json' \
          -H 'Content-Type: application/json' \
          -H 'x-api-key: ${{ env.DEPLOYMENT_TRIGGER_API_KEY }}' \
          -d '{
            "applicationId": "${{ env.DEPLOYMENT_TRIGGER_APP_ID }}"
          }'

This kicks off a deployment on my VPS, pulling the latest image and restarting the container.

An image showing the output of the dokploy deployment command restarting the container.

By the way, I'm running my VPS on Hetzner Cloud - highly recommended if you're looking for affordable and reliable VPS hosting.

When You Still Need a Dockerfile

The SDK container support is powerful, but it doesn't cover every scenario.

You'll still need a Dockerfile when:

  • Installing system dependencies: If your app needs native libraries (like libgdiplus for image processing)
  • Complex multi-stage builds: When you need to run custom build steps
  • Non-.NET components: If your container needs additional services or tools

For most web APIs and background services, the SDK approach is sufficient.

Summary

The .NET SDK's built-in container support removes the friction of containerization.

You get:

  • No Dockerfile to maintain - one less file to worry about
  • Automatic base image selection - always uses the right image for your framework version
  • MSBuild integration - configure everything in your .csproj
  • CI/CD friendly - works anywhere dotnet runs

The days of copy-pasting Dockerfiles between projects are over.

Just enable the feature, customize what you need, and publish.

Thanks for reading.

And stay awesome!





SQL Server Pagination with COUNT(*) OVER() Window Function

A simple SQL Server trick that eliminates the need for separate count queries when building paginated APIs.
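The trick is that COUNT(*) OVER() attaches the total row count to every row of the current page, so one round trip returns both the page and the total. Here's a minimal sketch of what that can look like from a Node.js API, assuming the mssql package and an illustrative dbo.Products table (the table and column names are placeholders):

import sql from "mssql";

// One round trip returns the page rows plus the total row count via COUNT(*) OVER().
async function getProductsPage(pool: sql.ConnectionPool, page: number, pageSize: number) {
  const result = await pool
    .request()
    .input("offset", sql.Int, (page - 1) * pageSize)
    .input("pageSize", sql.Int, pageSize)
    .query(`
      SELECT Id, Name, Price, COUNT(*) OVER() AS TotalCount
      FROM dbo.Products
      ORDER BY Id
      OFFSET @offset ROWS FETCH NEXT @pageSize ROWS ONLY;
    `);

  const rows = result.recordset;
  const totalCount = rows.length > 0 ? rows[0].TotalCount : 0;
  return { rows, totalCount };
}

Because every row carries the same TotalCount, you can read it from any row of the page instead of issuing a second SELECT COUNT(*) query.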

Why not store the SAFEARRAY reference count as a hidden allocation next to the SAFEARRAY?


When I described how Safe­Array­Add­Ref keeps its reference count in a side table, commenter Koro Unhallowed wondered why we couldn’t store the reference count either before or after the formal SAFEARRAY structure. Commenter Peter Cooper Jr. suspected that there might be cases where applications assumed how much memory a SAFEARRAY occupied.

And indeed that is the case.

Not all SAFEARRAYs are created by the Safe­Array­Create function. I’ve seen code that declared and filled out their own SAFEARRAY structure. In those cases, the code allocates exactly sizeof(SAFEARRAY) bytes and doesn’t allocate any bonus data for the reference count.
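For illustration, here's a rough sketch (not from the article; the function name is made up) of a caller building a one-dimensional SAFEARRAY entirely on the stack:

#include <windows.h>
#include <oleauto.h>

void CallWithStackSafeArray()
{
    LONG values[3] = { 10, 20, 30 };

    // Hand-rolled SAFEARRAY: exactly sizeof(SAFEARRAY) bytes on the stack,
    // with no hidden space next to it for a reference count.
    SAFEARRAY sa = {};
    sa.cDims = 1;
    sa.fFeatures = FADF_AUTO | FADF_FIXEDSIZE; // lifetime tied to this stack frame
    sa.cbElements = sizeof(LONG);
    sa.pvData = values;
    sa.rgsabound[0].lLbound = 0;
    sa.rgsabound[0].cElements = ARRAYSIZE(values);

    // ... pass &sa to code that consumes a SAFEARRAY* ...
}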

Indeed, there are three flags in the fFeatures member for these “bring your own SAFEARRAY” structures.

  • FADF_AUTO: An array that is allocated on the stack.
  • FADF_STATIC: An array that is statically allocated.
  • FADF_EMBEDDED: An array that is embedded in a structure.

These flags indicate that the array was not created by SafeArrayCreate but rather was constructed manually by the caller in various ways.¹

Note that if you pass a SAFEARRAY with one of these flags to SafeArrayAddRef, it will still increment the reference count, but you don’t get a data pointer back because the caller does not control the lifetime of the SAFEARRAY. The lifetime of the SAFEARRAY is controlled by the lifetime of the SAFEARRAY variable on the stack (FADF_AUTO), in the DLL’s global data segment (FADF_STATIC), or in the enclosing object (FADF_EMBEDDED).

This means that our earlier suggestion to wrap the SAFEARRAY inside an in/out VARIANT runs into trouble if the SAFEARRAY is one of these types of arrays with externally-controlled lifetime. For those, you have no choice but to copy the data.

¹ The documentation is, however, ambiguous about what “the array” refers to. Is it referring to the SAFEARRAY structure itself? Or is it referring to the data pointed to by the pvData member?

The post Why not store the SAFEARRAY reference count as a hidden allocation next to the SAFEARRAY? appeared first on The Old New Thing.


5 Tips for Building MCP Apps That Work


Level Up Your MCP Apps - goose and MCP Jam

MCP Apps allow you to render interactive UI directly inside any agent supporting the Model Context Protocol. Instead of a wall of text, your agent can now provide a functional chart, a checkout form, or a video player. This bridges the gap in agentic workflows: clicking a button is often clearer than describing the action you hope an agent executes.

MCP Apps originated as MCP-UI, an experimental project. After adoption by early clients like goose, the MCP maintainers incorporated it as an official extension. Today, it's supported by clients like goose, MCPJam, Claude, ChatGPT, and Postman.

Even though MCP Apps use web technologies, building one isn't the same as building a traditional web app. Your UI runs inside an agent you don't control, communicates with a model that can't see user interactions, and needs to feel native across multiple hosts.

After implementing MCP App support in our own hosts and building several individual apps to run on them, here are the practical lessons we've picked up along the way.

Overview of how UI renders with MCP Apps

At a high level, clients that support MCP Apps load your UI via iFrames. Your MCP App exposes an MCP server with tools and resources. When the client wants to load your app's UI, it calls the associated MCP tool, loads the resource containing the HTML, then loads your HTML into an iFrame to display in the chat interface.

Here's an example flow of what happens when goose renders a cocktail recipe UI:

  1. You ask the LLM "Show me a margarita recipe".
  2. The LLM calls the get-cocktail tool with the right parameters. This tool has a UI resource link in _meta.ui.resourceUri pointing to the resource containing the HTML.
  3. The client then uses the Uri to fetch the MCP resource. This resource contains the HTML content of the view.
  4. The HTML is then loaded into the iFrame directly in the chat interface, rendering the cocktail recipe.

MCP Apps flow diagram showing how UI renders

There's a lot that also goes on behind the scenes, such as widget hydration, capability negotiation, and CSPs, but this is how it works at a high level. If you're interested in the full implementation of MCP Apps, we highly recommend giving the spec a read.
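On the server side, this flow boils down to two registrations: a ui:// resource that serves the widget HTML, and a tool whose _meta.ui.resourceUri points at it. The tool half appears in the examples later in this post; here's a rough sketch of the resource half, assuming the standard MCP TypeScript SDK's registerResource (the file path and variable names are illustrative):

import { readFileSync } from "node:fs";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "cocktail-app", version: "1.0.0" });

// Hypothetical widget bundle produced by your frontend build.
const widgetHtml = readFileSync("dist/cocktail-recipe-widget.html", "utf-8");

// Register the ui:// resource that hosts fetch when a tool references it
// via _meta.ui.resourceUri.
server.registerResource(
  "cocktail-recipe-widget",
  "ui://cocktail/cocktail-recipe-widget.html",
  { mimeType: "text/html" },
  async (uri) => ({
    contents: [{ uri: uri.href, mimeType: "text/html", text: widgetHtml }],
  }),
);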

Tip 1: Adapt to the Host Environment

When building an MCP App, you want it to feel like a natural part of the agent experience rather than something bolted on. Visual mismatches are one of the fastest ways to break that illusion.

Imagine a user starting an MCP App interaction inside a dark-mode agent, but the app renders in light mode and creates a harsh visual contrast. Even if the app works correctly, the experience immediately feels off.

By default, your MCP App has no awareness of the surrounding agent environment because it runs inside a sandboxed iframe. It cannot tell whether the agent is in light or dark mode, how large the viewport is, or which locale the user prefers.

The agent, referred to as the Host, solves this by sharing its environment details with your MCP App, known as the Guest UI. When the Guest UI connects, it sends a ui/initialize request. The Host responds with a hostContext object describing the current environment. When something changes, such as theme, viewport, or locale, the Host sends a ui/notifications/host-context-changed notification containing only the updated fields.

Imagine this dialogue between the Guest UI and Host:

Guest UI: "I'm initializing. What does your environment look like?"
Host: "We're in dark mode, viewport is 400×300, locale is en-US, and we're on desktop."
User switches to light theme
Host: "Update: we're now in light mode."

It is your job as the developer to ensure your MCP App makes use of the hostContext so it can adapt to the environment.

How to use hostContext in your MCP App

import { useState } from "react";
import { useApp } from "@modelcontextprotocol/ext-apps/react";
import type { McpUiHostContext } from "@modelcontextprotocol/ext-apps";

function MyApp() {
  const [hostContext, setHostContext] = useState<McpUiHostContext | undefined>(undefined);

  const { app, isConnected, error } = useApp({
    appInfo: { name: "MyApp", version: "1.0.0" },
    capabilities: {},
    onAppCreated: (app) => {
      app.onhostcontextchanged = (ctx) => {
        setHostContext((prev) => ({ ...prev, ...ctx }));
      };
    },
  });

  if (error) return <div>Error: {error.message}</div>;
  if (!isConnected) return <div>Connecting...</div>;

  return (
    <div>
      <p>Theme: {hostContext?.theme}</p>
      <p>Locale: {hostContext?.locale}</p>
      <p>Viewport: {hostContext?.containerDimensions?.width} x {hostContext?.containerDimensions?.height}</p>
      <p>Platform: {hostContext?.platform}</p>
    </div>
  );
}
Tip: If you're using the useApp hook in your MCP App, the hook provides an onhostcontextchanged listener, and you can store the incoming context in React state. The host supplies its context; it's up to you as the app developer to decide what to do with it. For example, you can use theme to switch between light and dark mode, locale to show a different language, or containerDimensions to adjust the app's sizing.

Tip 2: Control What the Model Sees and What the View Sees

There are cases where you may want granular control over what data the LLM has access to and what data the view can show. The MCP Apps spec defines three tool return fields that let you control this data flow; each is handled differently by the app host.

  • content: The info you want to expose to the model; it provides model context.
  • structuredContent: This data is hidden from the model context. It is used to send data to the View for hydration.
  • _meta: This data is hidden from the model context. It is used to provide additional info such as timestamps or version numbers.

Let's look at a practical example of how we can use these three tool return types effectively:

server.registerTool(
  "view-cocktail",
  {
    title: "Get Cocktail",
    description: "Fetch a cocktail by id with ingredients and images...",
    inputSchema: z.object({ id: z.string().describe("The id of the cocktail to fetch.") }),
    _meta: {
      ui: { resourceUri: "ui://cocktail/cocktail-recipe-widget.html" },
    },
  },
  async ({ id }: { id: string }): Promise<CallToolResult> => {
    const cocktail = await convexClient.query(api.cocktails.getCocktailById, {
      id,
    });

    return {
      content: [
        { type: "text", text: `Loaded cocktail "${cocktail.name}".` },
        { type: "text", text: `Cocktail ingredients: ${cocktail.ingredients}.` },
        { type: "text", text: `Cocktail instructions: ${cocktail.instructions}.` },
      ],
      structuredContent: { cocktail },
      _meta: { timestamp: new Date().toString() },
    };
  },
);

This tool renders a view showing a cocktail recipe. The cocktail data is fetched from the backend database (Convex). The View needs the full cocktail object, so we pass it via structuredContent. The model doesn't need every detail, such as the image URL, so we extract the information it should know about the cocktail, like the name, ingredients, and instructions, and pass that to the model via content.

It's important to note that the ChatGPT Apps SDK currently handles this differently: structuredContent is exposed to both the model and the View. Their model is the following:

  • content: Content is the info that you want to expose to the model. Gives model context.
  • structuredContent: This data is exposed to the model and the View.
  • _meta: This data is hidden from the model context.

If you're building an app that supports both MCP Apps and the ChatGPT Apps SDK, this is an important distinction. You may want to conditionally return values, or conditionally register tools, based on whether the client supports MCP Apps or the ChatGPT Apps SDK.

Tip 3: Properly Handle Loading States and Error States

It's pretty typical for the iFrame to render first before the tool finishes executing and the widget gets hydrated. You're going to want to let your user know that the app is loading by presenting a beautiful loading state.

Loading state example showing skeleton UI

To implement this, let's take a look at the same cocktail recipes app. The MCP tool fetches the cocktail data and passes it to the widget via structuredContent. We don't know how long fetching that cocktail data will take; it could be anywhere from a few milliseconds to a few seconds on a bad day.

server.registerTool(
  "view-cocktail",
  {
    title: "Get Cocktail",
    description: "Fetch a cocktail by id with ingredients and images...",
    inputSchema: z.object({ id: z.string().describe("The id of the cocktail to fetch.") }),
    _meta: {
      ui: {
        resourceUri: "ui://cocktail/cocktail-recipe-widget.html",
        visibility: ["model", "app"],
      },
    },
  },
  async ({ id }: { id: string }): Promise<CallToolResult> => {
    const cocktail = await convexClient.query(api.cocktails.getCocktailById, {
      id,
    });

    return {
      content: [
        { type: "text", text: `Loaded cocktail "${cocktail.name}".` },
      ],
      structuredContent: { cocktail },
    };
  },
);

On the view side (React), the useApp AppBridge hook exposes an app.ontoolresult listener that receives the tool result and hydrates your widget. While the tool result hasn't arrived and the data is still empty, we can render a clean loading state.

import { useState } from "react";
import { useApp } from "@modelcontextprotocol/ext-apps/react";

function CocktailApp() {
  const [cocktail, setCocktail] = useState<CocktailData | null>(null);

  useApp({
    appInfo: IMPLEMENTATION,
    capabilities: {},
    onAppCreated: (app) => {
      app.ontoolresult = async (result) => {
        const data = extractCocktail(result);
        setCocktail(data);
      };
    },
  });

  return cocktail ? <CocktailView cocktail={cocktail} /> : <CocktailViewLoading />;
}

Handling errors

We also want to handle errors gracefully. In the case where there's an error in your tool, such as the cocktail data failing to load, both the LLM and the view should be notified of the error.

In your MCP tool, you should return an error in the tool result. This is exposed to the model and also passed to the view.

server.registerTool(
  "view-cocktail",
  {
    title: "Get Cocktail",
    description: "Fetch a cocktail by id with ingredients and images...",
    inputSchema: z.object({ id: z.string().describe("The id of the cocktail to fetch.") }),
    _meta: {
      ui: { resourceUri: "ui://cocktail/cocktail-recipe-widget.html" },
      visibility: ["model", "app"],
    },
  },
  async ({ id }: { id: string }): Promise<CallToolResult> => {
    try {
      const cocktail = await convexClient.query(api.cocktails.getCocktailById, {
        id,
      });

      return {
        content: [
          { type: "text", text: `Loaded cocktail "${cocktail.name}".` },
        ],
        structuredContent: { cocktail },
      };
    } catch (error) {
      return {
        content: [
          { type: "text", text: `Could not load cocktail` },
        ],
        error,
      };
    }
  },
);

Then, in useApp on the React client side, you can detect whether there was an error by checking for an error field on the tool result.
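For example, the ontoolresult handler from earlier can branch on that field before hydrating the widget. This is a sketch that assumes the error field returned above is forwarded on the result, and setLoadError is a hypothetical error-state setter:

onAppCreated: (app) => {
  app.ontoolresult = async (result) => {
    // Assumes the host forwards the tool's error field; setLoadError is hypothetical.
    if (result.error) {
      setLoadError(true);
      return;
    }
    setCocktail(extractCocktail(result));
  };
},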

Tip 4: Keep the Model in the Loop

Because your MCP App operates in a sandboxed iframe, the model powering your agent can't see what happens inside the app by default. It won't know if a user fills out a form, clicks a button, or completes a purchase.

Without a feedback loop, the model loses context. If a user buys a pair of shoes and then asks, "When will they arrive?", the model won't even realize a transaction occurred.

To solve this, the SDK provides two methods to keep the model synchronized with the user's journey: sendMessage and updateModelContext.

sendMessage()

Use this for active triggers. It sends a message to the model as if the user typed it, prompting an immediate response. This is ideal for confirming a "Buy" click or suggesting related items right after an action.

// User clicks "Buy" - the model responds immediately
await app.sendMessage({
  role: "user",
  content: [{ type: "text", text: "I just purchased Nike Air Max for $129" }],
});
// Result: Model responds: "Great choice! Want me to track your order?"

updateModelContext()

Use this for background awareness. It quietly saves information for the model to use later without interrupting the flow. This is perfect for tracking browsing history or cart updates without triggering a chat response every time.

// User is browsing - no immediate response needed
await app.updateModelContext({
  content: [{ type: "text", text: "User is viewing: Nike Air Max, Size 10, $129" }],
});
// Result: No response. But if the user later asks, "What was I looking at?", the model knows.

Tip 5: Control Who Can Trigger Tools

With a standard MCP server, the model sees your tools, interprets the user's prompt, and calls the right tool. If a user says "delete that email," the model decides what that means and invokes the delete tool.

However, with an MCP App, tools can be triggered in two ways: the model interpreting the user's prompt, or the user interacting directly with the UI.

By default, both can call any tool. For example, say you build an MCP App that visually surfaces an email inbox and lets users interact with emails. Now there are two potential triggers for your tools: the model acting on a prompt to delete an email, and the user clicking a delete button directly in the App's interface.

The model works by interpreting intent. If a user says "delete my old emails," the model has to decide what "old" means and which emails qualify. For some actions like deleting emails, that ambiguity can be risky.

When a user clicks a "Delete" button next to a specific message in your MCP App, there is no ambiguity. They have made an explicit choice.

To prevent the model from accidentally performing high-stakes actions based on a misunderstanding, you can use tool visibility to restrict certain tools to the MCP App's UI only. This allows the model to display the interface while requiring a human click to finalize the action.

You can define visibility using these three configurations:

  • ["model", "app"] (default) — Both the model and the UI can call it
  • ["model"] — Only the model can call it; the UI cannot
  • ["app"] — Only the UI can call it; hidden from the model

Here's how you might implement this:

// Model calls this to display the inbox
registerAppTool(server, "show-inbox", {
  description: "Display the user's inbox",
  _meta: {
    ui: {
      resourceUri: "ui://email/inbox.html",
      visibility: ["model"],
    },
  },
}, async () => {
  const emails = await getEmails();
  return { content: [{ type: "text", text: JSON.stringify(emails) }] };
});

// User clicks delete button in the UI
registerAppTool(server, "delete-email", {
  description: "Delete an email",
  inputSchema: { emailId: z.string() },
  _meta: {
    ui: {
      resourceUri: "ui://email/inbox.html",
      visibility: ["app"],
    },
  },
}, async ({ emailId }) => {
  await deleteEmail(emailId);
  return { content: [{ type: "text", text: "Email deleted" }] };
});

Start Building with goose and MCPJam

MCP Apps open up a new dimension for agent interactions. Now it's time to build your own.

  • Test with MCPJam — the open source local inspector for MCP Apps, ChatGPT apps SDK, and MCP servers. Perfect for debugging and iterating on your app before shipping.
  • Run in goose — an open source AI agent that renders MCP Apps directly in the chat interface. See your app come to life in a real agent environment.

Ready to dive deeper? Check out the MCP Apps tutorial or build your first MCP App with MCPJam.


Build AI Agents with Claude Agent SDK and Microsoft Agent Framework


Microsoft Agent Framework now integrates with the Claude Agent SDK, enabling you to build AI agents powered by Claude’s full agentic capabilities. This integration brings together the Agent Framework’s consistent agent abstraction with Claude’s powerful features, including file editing, code execution, function calling, streaming responses, multi-turn conversations, and Model Context Protocol (MCP) server integration — available in Python.

Why Use Agent Framework with Claude Agent SDK?

You can use the Claude Agent SDK on its own to build agents. So why use it through Agent Framework? Here are the key reasons:

  • Consistent agent abstraction — Claude agents implement the same BaseAgent interface as every other agent type in the framework. You can swap providers or combine them without restructuring your code.
  • Multi-agent workflows — Compose Claude agents with other agents (Azure OpenAI, OpenAI, GitHub Copilot, and more) in sequential, concurrent, handoff, and group chat workflows using built-in orchestrators.
  • Ecosystem integration — Access the full Agent Framework ecosystem: declarative agent definitions, A2A protocol support, and consistent patterns for function tools, sessions, and streaming across all providers.

In short, Agent Framework lets you treat Claude as one building block in a larger agentic system rather than a standalone tool.

Install the Claude Agent SDK Integration

Python

pip install agent-framework-claude --pre

Create a Claude Agent

Getting started is straightforward. Create a ClaudeAgent and start interacting with it using the async context manager pattern.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant.",
    ) as agent:
        response = await agent.run("What is Microsoft Agent Framework?")
        print(response.text)

Use Built-in Tools

Claude Agent SDK provides access to powerful built-in tools for file operations, shell commands, and more. Simply pass tool names as strings to enable them.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful coding assistant.",
        tools=["Read", "Write", "Bash", "Glob"],
    ) as agent:
        response = await agent.run("List all Python files in the current directory")
        print(response.text)

Add Function Tools

Extend your agent with custom function tools to give it domain-specific capabilities.

Python

from typing import Annotated
from pydantic import Field
from agent_framework_claude import ClaudeAgent

def get_weather(
    location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny with a high of 25C."

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful weather agent.",
        tools=[get_weather],
    ) as agent:
        response = await agent.run("What's the weather like in Seattle?")
        print(response.text)

Stream Responses

For a better user experience, you can stream responses as they are generated instead of waiting for the complete result.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant.",
    ) as agent:
        print("Agent: ", end="", flush=True)
        async for chunk in agent.run_stream("Tell me a short story."):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()

Multi-Turn Conversations

Maintain conversation context across multiple interactions using threads. The Claude Agent SDK automatically manages session resumption to preserve context.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant. Keep your answers short.",
    ) as agent:
        thread = agent.get_new_thread()

        # First turn
        await agent.run("My name is Alice.", thread=thread)

        # Second turn - agent remembers the context
        response = await agent.run("What is my name?", thread=thread)
        print(response.text)  # Should mention "Alice"

Configure Permission Modes

Control how the agent handles permission requests for file operations and command execution using permission modes.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a coding assistant that can edit files.",
        tools=["Read", "Write", "Bash"],
        default_options={
            "permission_mode": "acceptEdits",  # Auto-accept file edits
        },
    ) as agent:
        response = await agent.run("Create a hello.py file that prints 'Hello, World!'")
        print(response.text)

Connect MCP Servers

Claude agents support connecting to external MCP servers, giving the agent access to additional tools and data sources.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant with access to the filesystem.",
        default_options={
            "mcp_servers": {
                "filesystem": {
                    "command": "npx",
                    "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
                },
            },
        },
    ) as agent:
        response = await agent.run("List all files in the current directory using MCP")
        print(response.text)

Use Claude in a Multi-Agent Workflow

One of the key benefits of using Agent Framework is the ability to combine Claude with other agents in a multi-agent workflow. In this example, an Azure OpenAI agent drafts a marketing tagline and a Claude agent reviews it — all orchestrated as a sequential pipeline.

Python

import asyncio
from typing import cast

from agent_framework import ChatMessage, Role, SequentialBuilder, WorkflowOutputEvent
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework_claude import ClaudeAgent
from azure.identity import AzureCliCredential

async def main():
    # Create an Azure OpenAI agent as a copywriter
    chat_client = AzureOpenAIChatClient(credential=AzureCliCredential())

    writer = chat_client.as_agent(
        instructions="You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
        name="writer",
    )

    # Create a Claude agent as a reviewer
    reviewer = ClaudeAgent(
        instructions="You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
        name="reviewer",
    )

    # Build a sequential workflow: writer -> reviewer
    workflow = SequentialBuilder().participants([writer, reviewer]).build()

    # Run the workflow
    async for event in workflow.run_stream("Write a tagline for a budget-friendly electric bike."):
        if isinstance(event, WorkflowOutputEvent):
            messages = cast(list[ChatMessage], event.data)
            for msg in messages:
                name = msg.author_name or ("assistant" if msg.role == Role.ASSISTANT else "user")
                print(f"[{name}]: {msg.text}\n")

asyncio.run(main())

This example shows how a single workflow can combine agents from different providers. You can extend this pattern to concurrent, handoff, and group chat workflows as well.


Summary

The Claude Agent SDK integration for Microsoft Agent Framework makes it easy to build AI agents that leverage Claude’s full agentic capabilities. With support for built-in tools, function tools, streaming, multi-turn conversations, permission modes, and MCP servers in Python, you can build powerful agentic applications that interact with code, files, shell commands, and external services.

We’re always interested in hearing from you. If you have feedback, questions or want to discuss further, feel free to reach out to us and the community on the discussion boards on GitHub! We would also love your support, if you’ve enjoyed using Agent Framework, give us a star on GitHub.

The post Build AI Agents with Claude Agent SDK and Microsoft Agent Framework appeared first on Semantic Kernel.
