
RNR 352 - Expo Launch with Cedric van Putten


Mazen and Robin welcome back Cedric van Putten to discuss Expo Launch, a new tool that automates deploying React Native apps to the App Store. Learn how Expo is streamlining certificates, screenshots, and submission workflows.


Show Notes


Connect With Us!


This episode is brought to you by Infinite Red!

Infinite Red is an expert React Native consultancy located in the USA. With over a decade of React Native experience and deep roots in the React Native community (hosts of Chain React and the React Native Newsletter, core React Native contributors, creators of Ignite and Reactotron, and much, much more), Infinite Red is the best choice for helping you build and deploy your next React Native app.





Download audio: https://cdn.simplecast.com/audio/2de31959-5831-476e-8c89-02a2a32885ef/episodes/893211e3-93d7-46c0-bf2b-5e1551043e02/audio/398da2b5-d273-41d9-9e6d-c3d1f7120556/default_tc.mp3?aid=rss_feed&feed=hEI_f9Dx

Azure Developer CLI (azd) – January 2026: Configuration & Performance


Welcome to the roundup of the azd releases in January 2026! This blog post covers release versions 1.23.0, 1.23.1, and 1.23.2, and it's packed with updates: improved configuration management, enhanced authentication, and performance improvements.

Whether you’re managing multiple environments, working in multitenant scenarios, or just looking for faster workflows, this release has something for you. As always, we’d love to hear your thoughts—join us in the January release discussion on GitHub.

Highlights:

  • New configuration management commands for better control over azd settings
  • Enhanced environment-specific configuration capabilities
  • Cross-tenant authentication support for remote state
  • Automatic notifications for available extension updates
  • Podman support as fallback container runtime
  • Performance improvements with file-based caching
  • Breaking changes: removed deprecated commands and Azure Spring Apps support

📣 We want to hear from you!

Help us shape the future of azd by sharing your experience. We’re conducting user research to better understand how you’re using the Azure Developer CLI. Sign up to participate and make your voice heard!

New features

⚙ Configuration management

This release enhances configuration management capabilities, making it easier to discover and manage azd settings at both global and environment-specific levels.

  • Configuration options command: New azd config options command lists all available configuration settings with descriptions, making it easier to discover and understand available options. [#6390]
  • Environment-specific configuration: New azd env config commands enable environment-specific configuration management, allowing you to set different configurations for different environments (dev, staging, production, etc.). [#6348]
  • Enhanced environment management: New azd env remove command allows deleting local environment configuration files cleanly. [#6511]

🔐 Authentication and security

  • Cross-tenant authentication: Added support for cross-tenant authentication when using remote environment state in Azure Blob Storage, enabling more flexible multitenant scenarios. [#6441]
  • Authentication status: New azd auth status command displays current authentication status, making it easier to verify credentials and troubleshoot authentication issues. [#6377]

🚀 Performance and infrastructure

  • File-based caching: Added file-based caching to azd show for approximately 60x performance improvement, dramatically speeding up resource inspection operations. [#6418]
  • Podman support: Added Podman support as fallback container runtime when Docker is unavailable, providing more flexibility in containerized environments. [#6436]
  • Infrastructure provider auto-detection: Automatically detects infrastructure provider (Bicep/Terraform) from infra directory files when not explicitly specified, streamlining project setup. [#6461]

✨ Developer experience

  • Extension update notifications: When customers run extension commands, azd checks if a newer version is available and displays an update warning with upgrade instructions. The check runs in parallel without adding latency, using registry caching with 4-hour TTL (time to live) and a 24-hour warning cooldown per extension. [#6512]
  • Property-level change details: Added property-level change details in azd provision --preview output for Bicep deployments, providing more granular visibility into infrastructure changes. [#6262]
  • Non-Aspire projects support: Added support for non-Aspire projects in Visual Studio connected services, expanding IDE integration capabilities. [#5536]
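The TTL-plus-cooldown pattern described in the extension update notifications item can be sketched generically. This is an illustrative Python sketch of the idea, not azd's actual implementation; the constant and function names are ours:

```python
import time

# Assumed constants mirroring the behavior described above:
# a 4-hour registry cache TTL and a 24-hour per-extension warning cooldown.
CACHE_TTL = 4 * 3600
WARN_COOLDOWN = 24 * 3600

def should_check_registry(last_check: float, now: float) -> bool:
    """Hit the registry only when the cached result has expired."""
    return now - last_check >= CACHE_TTL

def should_warn(last_warned: float, now: float) -> bool:
    """Show the update warning at most once per cooldown window."""
    return now - last_warned >= WARN_COOLDOWN

now = time.time()
print(should_check_registry(0, now))      # True: nothing cached yet
print(should_check_registry(now, now))    # False: cache is still fresh
print(should_warn(now - 25 * 3600, now))  # True: cooldown has elapsed
```

Separating "when do we re-check" from "when do we nag the user" is what lets the check run often without adding noise.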

🎁 Other enhancements

  • GitHub CLI update: Updated GitHub CLI tool version to 2.86.0. [#6579]

Breaking changes

Deprecated commands removed

As announced in previous releases, the following deprecated commands are removed:

  • Login commands: Removed deprecated azd login and azd logout commands in favor of azd auth login and azd auth logout. Users must update scripts and workflows to use the azd auth subcommands. [#6395]

Azure Spring Apps

  • Support discontinued: Azure Spring Apps support is removed from azd. Projects using Azure Spring Apps need to migrate to alternative hosting solutions like Azure Container Apps or Azure App Service. [#6369]

🪲 Bugs fixed

Extension and configuration fixes

  • Extension configuration properties: Fixed extension configuration properties support by adding AdditionalProperties fields to project and service configurations. [#6481]
  • Extension command failures and update notifications: Fixed extension commands failing after update notification is displayed, and fixed update notification cooldown being recorded even when warning isn’t shown. [#6604]
  • Extension error messages: Improved extension error messages by including error suggestion text. [#6588]
  • Extension installation: Fixed azd ext install --force to properly reinstall extensions when version matches. [#6435]

Deployment and infrastructure

  • Bicep CLI path: Fixed Bicep CLI uninitialized path causing container app deployments to fail. [#6610]
  • Resource display names: Fixed azd down to dynamically resolve resource display names. [#6452]
  • Deployment state handling: Fixed azd down to handle deployment state correctly when resources are manually deleted. [#6267]
  • Azure Kubernetes Service (AKS) deployment schema: Fixed AKS deployment schema to allow Helm deployments without project field. [#6444]

Authentication and URL handling

  • GitHub URL parsing: Fixed GitHub URL parsing to check authentication before branch resolution. [#6478]
  • Authentication handling: Improved authentication handling when not using built-in auth methods. [#5954]

Error handling

  • Configuration loading: Fixed panic on middleware construction failure when loading invalid configuration files. [#6517]
  • Context cancellation: Fixed context cancellation issue causing subsequent operations to fail after command steps complete. [#6536]
  • Workflow step errors: Fixed context canceled errors in workflow steps. [#6446]

Resource management

  • DocumentDB resources: Fixed Azure DocumentDB (mongoClusters) resources not being displayed in provisioning output. [#6527]

Other changes

  • Error telemetry: Improved error telemetry with specific error type classification. [#6389]

New docs

The Azure Developer CLI documentation continues to expand with updates and improvements:

  • Proxy configuration guide (January 15): New comprehensive guide for configuring azd to work with HTTP/HTTPS proxies, including corporate proxy scenarios and troubleshooting tips. Learn more
  • Device code authentication FAQ (January 26): Added guidance on device code authentication flow for scenarios where browser-based authentication isn’t available. Learn more
  • azd publish command (January 23): New documentation for the azd publish command, which separates publishing to container registries from deployment. Learn more

New templates

January brings an incredible surge of 23 new templates across the Awesome azd and AI App Template galleries! This month’s contributions showcase the growing adoption of Model Context Protocol (MCP) servers, multi-agent AI systems, and secure-by-default Azure integrations. A huge thank you to our amazing community contributors for sharing these production-ready solutions!

  • Azure MCP Server – ACA with Copilot Studio agent (Chunan Ye): Self-host Azure MCP Server on Container Apps with managed identity for Copilot Studio integration.

  • Agents and MCP Orchestration with Langchain.js, LlamaIndex.TS, and Microsoft Agent Framework (Microsoft DevRel): Multi-agent travel orchestration using LangChain.js, LlamaIndex.TS, and Microsoft Agent Framework with MCP servers.

  • MCP Container TS – Model Context Protocol in TypeScript (Microsoft DevRel): TypeScript MCP server reference with authentication, SQLite state, and OpenTelemetry tracing.

  • Azure MCP Server – ACA with Managed Identity (Anu Thomas): Self-host Azure MCP Server on Container Apps with managed identity for Microsoft Foundry agents.

  • Protect API Management with OAuth (Ronald Bosma): Secure API Management APIs with OAuth and Entra ID app registrations.

  • Call OAuth-Protected APIs on API Management with Managed Identity (Ronald Bosma): Call OAuth-protected APIs using managed identities from Functions, Logic Apps, and CI/CD pipelines.

  • Image search with Azure AI Search (Azure Content Team): Full-stack image search using Azure AI Vision embeddings and Azure AI Search vectorization.

  • Call API Management backend with OAuth (Ronald Bosma): Call OAuth-protected backend APIs through API Management using three authentication patterns.

  • Azure Functions C# Timer Trigger using Azure Developer CLI (Azure Functions Team): C# timer trigger quickstart with managed identity and virtual network on Flex Consumption.

  • FastAPI PostgreSQL on Azure Container Apps (Mark Anthony Estopace): FastAPI membership API on Container Apps with Azure Database for PostgreSQL.

  • Getting Started with Remote MCP Servers using Azure Functions (Java) (Azure Samples): Build and deploy secure Java MCP server on Azure Functions with OAuth and virtual network isolation.

  • Azure Storage Account with Blobs and File Share (Peter De Tender): Storage account with Blob, File Share, and sample Seattle scenery images.

  • Azure Monitor custom logs, and external telemetry (Karel De Winter): Log Analytics workspace with custom table for logs and external telemetry.

  • Building a Multi-Agent Support Triage System with AZD and Azure AI Foundry (Dave Rendon): Production-ready multi-agent support triage system using Azure AI Foundry Agent Service.

  • Azure Firewall Syslog Emulator for Microsoft Sentinel Training (Koenraad Haedens): Emulated firewall sending syslog messages to VM for Microsoft Sentinel training.

  • Product Catalog MCP Server (Peter De Tender): App Service MCP server with product catalog for Foundry or Copilot Studio.

  • BetterNotes – AI Document Analysis – Demo Scenario (Peter De Tender): Extract and transform text from PDFs, Word docs, and images using Document Intelligence.

  • AZD-based Azure VM Backup template (Peter De Tender): Windows VM backup demo with Recovery Services Vault.

  • Azure Routing Demo (Joerg Menne): Hub & Spoke VNet topology with Windows VMs for demonstrating peerings and route tables.

  • Getting Started with NLWeb Foundry (Heng-yi Liu): Natural language interface for websites deployed as Foundry agent using NLWeb.

  • Customer Chatbot Solution Accelerator (Travis Hilbert, Solomon Pickett, and Brittnee Keller): E-commerce chatbot with specialized agents for product recommendations and policy questions.

  • Real-Time Intelligence for Operations Solution Accelerator (Alvaro Guadamillas Herranz, Gaiye Zhou, and Seth Steenken): Manufacturing asset monitoring with real-time anomaly detection and notifications.

  • Multi Agent Banking Assistant (Davide Antelmo): Banking assistant for checking balances, transactions, and payments using Microsoft Agent Framework.

🙋‍♀️ New to azd?

If you’re new to the Azure Developer CLI, azd is an open-source command-line tool that accelerates the time it takes to get your application from local development environment to Azure. azd provides best practice, developer-friendly commands that map to key stages in your workflow, whether you’re working in the terminal, your editor or CI/CD.

The post Azure Developer CLI (azd) – January 2026: Configuration & Performance appeared first on Azure SDK Blog.


How to Go From Hello World to Building Real World Applications


Many developers start learning programming by building simple projects like todo apps, calculators, and basic CRUD applications. These projects are useful at the beginning, as they help you understand how a programming language works and give you the confidence to start building things. But for many developers, progress stops there.

Real world applications aren’t just about showing data on a screen. They solve real problems, work with real users, and handle situations that don’t always go as planned. This is where many developers struggle.

When I review junior developers’ résumés, I notice a common pattern: the projects section is often filled with beginner apps that look very similar. In today’s job market, this is usually not enough. Employers want to see that you can build something useful, something people would actually use.

The goal of this article is to help you move past simple Hello World projects like todo apps, calculators, and basic CRUD applications. By the end, you’ll understand how to approach building real applications that solve real problems and feel closer to what is built in the real world.

You might be wondering why you should listen to me.

Over the last 10 years, I’ve spent a lot of time building, breaking, and rebuilding software. I have experimented, failed many times, and eventually built applications that thousands of people use every day. A few weeks ago, I launched one of my own SaaS products and watched real users use it at scale, with over a thousand active users during peak hours, without the system crashing.

I’m sharing this because I’ve been where you are now.

If you continue reading, you will:

  • Learn from real experience building applications used by real users

  • Learn what not to do, based on years of mistakes and lessons

  • Learn what actually works and what helps you stand out as a junior developer

  • Clear up common misconceptions that slow people down

This article is not about theory. It’s about building real things.

To drive this point home, we’ll build a real world application that solves a real problem and is used by real people. Along the way, you’ll learn how to come up with real application ideas, how to build them with simple tools, and how to serve them to many users.

If this sounds like something you’re up for, then let’s get started.

Table of Contents

  1. You Probably Think You Don’t Know Enough

  2. How to Find Real World Application Ideas

  3. What Are We Going to Build

  4. Prerequisites

  5. Step 1: Setting Up the Backend

  6. Step 2: Adding Background Removal to the Backend

  7. Step 3: Building the Frontend

  8. Step 4: Putting the Backend on the Internet

  10. Step 5: Making the Backend and Frontend Work Together (CORS)

  10. Step 6: Putting the Frontend on the Internet

  11. Final Thoughts: What You Just Built Matters

You Probably Think You Don't Know Enough

Before we begin, I want to clear up the most common misconception beginners have.

Many developers believe they don’t know enough to start building real world applications. They think they need to learn more JavaScript, more Python, or another framework before they are ready. Some even think learning React is the final step that will suddenly make everything click.

This belief is very common, but it’s not true.

If you know how to write HTML, CSS, and a simple loop or function in any programming language, you already have most of what it takes to build a real world application. This tutorial is proof of that.

That’s why I am writing this article. Not to tell you to learn more first, but to show you how far you can go with what you already know.

How to Find Real World Application Ideas

One question beginners ask a lot is, “What kind of apps should I build?”

This is rarely talked about, but it matters more than most people think.

A simple rule is this: build things that already exist.

Look at the apps you use every day, especially the simple ones. Tools that do one thing well. If you find yourself using an app often, that app is solving a real problem.

For example, people use background removers to clean up images. They use URL shorteners to share links. They use notes apps to save quick thoughts. None of these ideas are new, but they are real.

You don’t need to invent something original. You need to understand a problem and build a working solution for it.

When you replicate real tools, you naturally learn how real applications are structured. You also end up with projects that make sense on a résumé, because they solve problems people recognize.

That’s exactly what we are going to do in this tutorial.

What Are We Going to Build?

I’ll tell you now, it is not going to be another Hello World application. We’re going to be building a background remover web application.

This is the kind of tool people actually use. Designers use it for images, content creators use it for thumbnails, and developers build similar features into real products. It works with real files, real data, and real results.

If you want to see the final result before we start building, you can try the working app here: https://iamspruce.github.io/background-remover/

The background remover application we are about to build

We’ll build this app using simple tools on purpose. Plain HTML, CSS, and JavaScript for the frontend, and Python for the backend. No heavy frameworks and no complicated setup.

Before we continue, let me clear up another common misconception.

Many developers believe that for an application to be taken seriously, it must be built with complex frameworks. In my experience building applications for clients around the world over the last 10 years, not a single client has ever asked me what framework I used.

They only cared about one thing: Did it work, and did it solve their problem?

Users won’t care how your app is built. They care that it works. If HTML, CSS, and JavaScript can solve the problem, there’s no reason to wait months just to learn a new framework.

Now that we understand why we’re building this app and what problem it solves, it’s time to start writing code.

Prerequisites

Before we start writing code, let’s quickly talk about what you need.

This tutorial is not for absolute beginners, but it’s also not super advanced. If you’ve built small things before and you want to build something that actually feels real, you’re in the right place.

What You Should Already Know

You should be comfortable with:

  • Basic HTML: You know what inputs, buttons, images, and divs do.

  • Basic CSS: You can style a page and make it look presentable.

  • Basic JavaScript: You know how to listen for a button click and send a request using fetch.

That’s enough to follow along.

You don’t need React or any other frontend framework.

Backend Knowledge

For the backend, you don’t need to be a Python expert.

You just need to understand that:

  • Python can run a server

  • A server can receive requests

  • A server can send back responses

Everything else will be explained as we go.
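Those three ideas fit in a few lines of standard-library Python. This is just a sketch of request-in, response-out (no FastAPI yet); here the script plays both server and client so you can see the round trip:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """A server receives a request and sends back a response."""
    def do_GET(self):
        self.send_response(200)                        # status line
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from the server")     # response body

    def log_message(self, *args):                      # keep the demo quiet
        pass

# Start the server on a free port in a background thread...
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# ...then act as the client: send a request, read the response.
url = f"http://127.0.0.1:{server.server_port}/"
reply = urllib.request.urlopen(url).read()
print(reply)  # b'hello from the server'
server.shutdown()
```

That's the whole mental model: FastAPI just gives us a much nicer way to write the handler part.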

Tools You Need

Make sure you have these installed:

  • Python 3.9 or newer

  • Git

  • A code editor like VS Code

  • A browser

No Docker knowledge or cloud experience required.

GitHub Account (Important)

We’ll deploy the backend directly from GitHub, so you’ll need a GitHub account.

If you’ve never pushed a project to GitHub or hosted one before, I’ve already written a beginner-friendly guide you can follow first.

Read that article, then come back here.

A Quick Mindset Check

This is not a copy-paste tutorial. You will see real errors. Things may break. That’s normal. That’s how real applications are built.

Now that we’re clear on what you need, let’s start writing code.

We’ll begin with the backend. This is an important decision, so let’s explain it properly.

Step 1: Setting Up the Backend

What Is a Backend and Why Do We Need One?

A backend is a program that runs on a server and does the heavy work for an application.

In our case, the heavy work is image processing. Removing a background from an image requires libraries that can’t run inside the browser. Browsers are designed for safety and user interaction, not for this kind of processing.

That’s why we need a backend.

The backend will:

  • Receive an image from the user

  • Remove the background

  • Send the processed image back

The frontend will simply talk to this backend later.

Keeping Things Simple on Purpose

Because this might be your first real project, we’re going to keep the backend as simple as possible.

  • One programming language (Python)

  • One backend file

  • No complex folder structure

  • No advanced concepts

This is intentional. Real world applications don’t have to start out complex. They typically start small and grow.

Project Structure

Create a new folder called:

background-remover

Inside it, create a folder for the backend:

background-remover/
  backend/

Move into the backend folder:

cd background-remover/backend

Now create a virtual environment:

python -m venv env

A virtual environment keeps this project’s dependencies separate from everything else on your computer. This is standard practice and something you’ll see in real projects.
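If you're ever unsure whether you're inside the virtual environment, Python itself can tell you. A tiny check (assumes Python 3.3+, where sys.base_prefix exists):

```python
import sys

def in_virtualenv() -> bool:
    """True when this interpreter is running inside a virtual environment."""
    # Inside a venv, sys.prefix points at the env directory while
    # sys.base_prefix still points at the original Python installation.
    return sys.prefix != sys.base_prefix

print(in_virtualenv())
```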

Now activate it:

macOS or Linux:

source env/bin/activate

Windows:

env\Scripts\activate

Now create a folder for your application code:

mkdir api

Your structure should now look like this:

background-remover/
  backend/
    env/
    api/

At this stage, nothing looks impressive yet. That’s normal.

Installing FastAPI

We’ll use FastAPI to build the backend API.

Install it together with Uvicorn, which is the server that runs our app:

pip install fastapi uvicorn

FastAPI allows us to define endpoints clearly and with very little code, which is perfect for us.

Creating the First Backend File

Inside the api folder, create a file called main.py.

Add the following code:

from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

Let’s pause and understand what this does.

  • We created a FastAPI application

  • We added a /health endpoint

  • This endpoint simply returns a message

Before building real features, developers always confirm that their server actually runs. That is exactly what this endpoint is for.

Running the Server

From inside the backend folder, start the server:

uvicorn api.main:app --reload

Here, api.main:app tells Uvicorn to load the app object from api/main.py, and --reload restarts the server automatically whenever you change the code.

Now open a new terminal and run:

curl http://localhost:8000/health

You should see:

{"status":"ok"}

This is an important moment.

You now have:

  • A running backend server

  • A real HTTP endpoint

  • A response coming from your own code

This is how real backend services start.

Why We Did This First

At this point, you might wonder why we didn’t jump straight into background removal.

The reason is simple: if the server doesn’t run, nothing else matters.

By starting with a health endpoint, we removed uncertainty. We know the server works. Everything we add next is built on top of something we already know is working.

Now that the foundation is in place, we can move on to the real feature.

Step 2: Adding Background Removal to the Backend

Before we write any code here, we need to clear up an important misconception.

A Common Misconception About Machine Learning

When people hear “background removal,” they often think machine learning is too advanced for them.

In real world development, this is almost never how it works.

You aren’t expected to build machine learning models yourself. You use libraries created by others and focus on integrating them correctly.

That is exactly what we’re doing here.

Installing the Background Removal Library

Install the required packages:

pip install rembg pillow onnxruntime

  • rembg handles the background removal

  • pillow helps us work with images

  • onnxruntime runs the machine learning model that rembg uses under the hood

For our purposes here, you don’t need to understand how these libraries work internally. You only need to know how to use them.

Adding the Background Removal Endpoint

Now update main.py so it looks like this:

from fastapi import FastAPI, UploadFile, File
from fastapi.responses import Response
from rembg import remove
from PIL import Image
import io

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/remove-bg")
async def remove_bg(file: UploadFile = File(...)):
    image_bytes = await file.read()
    image = Image.open(io.BytesIO(image_bytes))

    output = remove(image)

    buffer = io.BytesIO()
    output.save(buffer, format="PNG")
    buffer.seek(0)

    # Return the raw PNG bytes with the correct content type so the
    # client receives an image, not a JSON-encoded response.
    return Response(content=buffer.getvalue(), media_type="image/png")

Let’s explain this carefully.

  • The endpoint accepts an uploaded image

  • The image is read into memory

  • The background is removed

  • The result is saved as a PNG with transparency

  • The image is returned to the client

This endpoint is the core of our application.

Testing the Endpoint With curl

Before building the frontend, we’ll test the backend directly.

Run this command:

curl -X POST \
  -F "file=@person.jpg" \
  http://localhost:8000/remove-bg \
  --output result.png

Open result.png. If you see the background removed, then the backend is complete.

At this point, you have built a backend that:

  • Accepts real user input

  • Processes real data

  • Returns a meaningful result

This is a real backend.

Where We Are Now

Let’s pause and summarize:

  • We set up a backend server

  • We confirmed that it runs

  • We added a real feature

  • We tested it without a frontend

This is exactly how real developers work.

In the next section, we’ll build a simple frontend that talks to this backend and turns it into something users can interact with.

Step 3: Building the Frontend (What the User Actually Sees)

At this point, our backend is working.

It can receive an image, remove the background, and send the result back. But right now, only developers can use it, because it requires terminal commands.

To make this useful to actual (non-technical) people, we need a frontend.

The frontend is simply the part of the application users see and interact with in their browser.

Clearing a Common Misconception About Frontends

Many beginners think building a frontend means learning a framework first.

This is not true.

Frameworks help later, but they aren’t required to build real applications. Under the hood, every frontend still comes down to HTML, CSS, and JavaScript.

That’s why we are using plain HTML, CSS, and JavaScript here. No React, no build tools, no setup. Just the basics.

What Our Frontend Will Do

Our frontend has one job. It will:

  • Let the user select an image

  • Send that image to the backend

  • Receive the processed image

  • Show it on the screen

  • Allow the user to download it

That’s all.

If it does these things correctly, it’s a real frontend.

Creating the Frontend

Go back to the root of your project and create three files:

index.html
styles.css
app.js

This simple setup is very common. Each file has a clear responsibility, which makes the code easier to understand.

Writing the HTML Page

Open index.html and add the following:

<!DOCTYPE html>
<html>
  <head>
    <title>Background Remover</title>
    <link rel="stylesheet" href="styles.css" />
  </head>
  <body>
    <h1>Background Remover</h1>

    <input type="file" id="imageInput" />
    <button id="removeBtn">Remove Background</button>

    <div class="result">
      <img id="resultImage" />
    </div>

    <a id="downloadLink" download>Download Image</a>

    <script src="app.js"></script>
  </body>
</html>

Right now, this page won’t do anything interesting. That’s expected.

HTML only describes what should be on the page. The behavior comes from JavaScript, which we’ll add after we add some styling.

Adding Some Basic Styling

Open styles.css and add this:

body {
  font-family: sans-serif;
  max-width: 600px;
  margin: 40px auto;
}

button {
  margin-top: 10px;
}

.result {
  margin-top: 20px;
}

img {
  max-width: 100%;
}

#downloadLink {
  display: none;
  margin-top: 10px;
}

This is not about making things look fancy.

The goal here is simply to make the page readable and pleasant to use. Many real internal tools look no better than this. (And you can always improve the styling later if you want.)

Connecting the Frontend to the Backend

Now we’ll write the JavaScript that makes everything work.

Open app.js and add the following code:

const imageInput = document.getElementById("imageInput");
const removeBtn = document.getElementById("removeBtn");
const resultImage = document.getElementById("resultImage");
const downloadLink = document.getElementById("downloadLink");

removeBtn.addEventListener("click", async () => {
  const file = imageInput.files[0];

  if (!file) {
    return;
  }

  const formData = new FormData();
  formData.append("file", file);

  const response = await fetch("http://localhost:8000/remove-bg", {
    method: "POST",
    body: formData,
  });

  const blob = await response.blob();
  const imageUrl = URL.createObjectURL(blob);

  resultImage.src = imageUrl;
  downloadLink.href = imageUrl;
  downloadLink.style.display = "inline";
});

Let’s slow down and explain what’s happening here:

  • We read the image selected by the user

  • We wrap it in FormData so it can be sent to the backend

  • We send it to our /remove-bg endpoint

  • We receive the processed image back

  • We display it and prepare it for download

This is the moment where the frontend and backend finally talk to each other.
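FormData hides one detail worth knowing: the browser serializes your file into a multipart/form-data request body. Here's a rough Python sketch of what actually travels over the wire (the build_multipart helper is ours, purely illustrative; real browsers and HTTP libraries do this for you):

```python
import uuid

def build_multipart(field_name: str, filename: str, payload: bytes,
                    content_type: str = "image/png"):
    """Build a minimal multipart/form-data body, roughly what FormData sends."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n"
        f"\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return headers, body

headers, body = build_multipart("file", "person.jpg", b"<image bytes>")
print(headers["Content-Type"].split(";")[0])  # multipart/form-data
```

The field name "file" in the sketch matters: it has to match the parameter name our FastAPI endpoint expects, which is why the JavaScript calls formData.append("file", file).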

Running the Frontend With Live Server

Before testing, make sure your backend is still running.

Now, instead of opening index.html directly, use the Live Server extension in VS Code.

If you don’t have it installed, just open VS Code extensions, search for “Live Server”, and then install it. Then right click on index.html and click “Open with Live Server”.

This starts a small local server for the frontend.

Why does this matter?

Many browser features work better when files are served through a server instead of opened directly from the file system. Using Live Server also matches how real frontends are served.

Testing the Full Application Locally

Now choose an image and click the button.

If everything is working:

  • The image is sent to the backend

  • The background is removed

  • The result appears on the page

  • The download link shows up

Take a moment here. You have just built:

  • A backend that processes real data

  • A frontend that talks to it

  • A complete application running locally

This is already more than a “Hello World” project.

The application running locally after connecting the frontend and backend and removing background from an image

Clearing One More Misconception

Some beginners look at this and think: “This feels too simple to be a real app.”

This is another misconception. Real applications aren’t defined by complexity. They’re defined by usefulness. If your app solves a real problem and people can use it, it’s a real application.

In the next section, we’ll take this exact app and put it on the internet so anyone can use it.

Step 4: Putting the Backend on the Internet

Right now, your backend is running on your computer.

That means:

  • It works only for you

  • If you close your laptop, it stops

  • If someone opens your frontend, it cannot reach your backend

To fix this, we need to run the backend on a computer that is always online.

This process is called deployment.

A Simple Way to Think About Deployment

Deployment does not mean writing new code.

It simply means this: instead of running your backend on your laptop, you run it on another computer that never sleeps.

Everything we do next exists only to make that happen.

Why We Aren’t Using Netlify or Vercel

You might be wondering why we aren’t using Netlify or Vercel. Those platforms are great, but they’re mainly for frontends.

They work best when your app is:

  • Static HTML, CSS, and JavaScript

  • A frontend framework like React or Vue

  • Small serverless functions

Our backend is different. It’s a Python server that:

  • Stays running

  • Accepts image uploads

  • Processes images

  • Uses heavy native libraries for background removal

This kind of backend needs a real server, not a lightweight serverless function.

That’s why Netlify and Vercel aren’t a good fit here.

Why We’re Using Cloud Run

Instead, we’re using Cloud Run.

Cloud Run lets us run real backend servers on Google’s infrastructure without managing servers ourselves.

We’re using it because:

  • It supports full Python backends

  • It deploys directly from GitHub

  • It handles scaling and servers for us

  • It works well with heavy workloads

  • It’s beginner-friendly

Most importantly, it lets you deploy a real backend without learning cloud commands or CI/CD pipelines.

Preparing the Backend for Cloud Run

Before deploying, we need to make sure our backend is ready.

Creating requirements.txt

Inside the backend folder, create a file called requirements.txt.

Add this:

fastapi
uvicorn
rembg
pillow
onnxruntime

This file tells Cloud Run which Python libraries to install.

Creating a GitHub Repository

Cloud Run deploys directly from GitHub, so our code must live there.

From the project root, run:

git init
git add .
git commit -m "Initial background remover project"
git branch -M main
git remote add origin YOUR_REPO_URL
git push -u origin main

This single repository will be used for both backend and frontend.

Deploying the Backend on Cloud Run

Now open your browser and go to Google Cloud Console.

1. Create a New Project

Open Google Cloud Console and click New Project. Give it a name (for example: background-remover) and then click Create.

Creating a new Google Cloud project

2. Open Cloud Run

Use the search bar at the top and search for Cloud Run. Then open it.

Cloud Run overview page

Cloud Run will automatically enable the required APIs.

You will be asked to set up billing. Google gives you $300 free credit, which is more than enough for this tutorial.

3. Start Creating the Service

Click Create Service and choose Deploy continuously from a repository.

Creating a Cloud Run service from a repository

4. Connect Your GitHub Account

Select GitHub as the repository provider and authenticate your GitHub account. Then install Google Cloud Build on your GitHub account. Choose only the repository you want to deploy.

Installing Google Cloud Build on GitHub

This allows Google Cloud to build and deploy your code automatically.

5. Select the Repository

Then choose the repository you just installed Cloud Build on and select the main branch.

Selecting the GitHub repository

6. Configure the Build

Now comes the important part: setting the build context directory.

Set this to:

backend

This tells Cloud Run:

“My backend code lives inside the backend folder.”

For the entry command, enter:

uvicorn api.main:app --host 0.0.0.0 --port 8080

This is how Cloud Run starts your FastAPI server.

Build configuration for the backend

7. Configure Container Resources

Our backend runs a background-removal model, which is heavy.

So we must increase resources.

  • Change memory from 512 MB → 2 GB

  • Set CPU to 4

Increasing memory and CPU

This ensures the model can load and run properly.

8. Deploy

Now click Create.

Cloud Run will:

  • Build your app

  • Install dependencies

  • Create a container

  • Deploy it to the internet

You’ll see logs showing the build and deployment process.

Cloud Build running

This can take a few minutes. That’s normal.

Checking That the Backend Is Live

Once deployment finishes, Cloud Run will show you a public URL.

Test it:

curl https://YOUR_CLOUD_RUN_URL/health

If you see:

{"status":"ok"}

Your backend is officially live on the internet.
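You can run the same check from Python instead of curl. This sketch separates the response parsing from the network call so each piece is easy to test; the URL you pass in is your own Cloud Run URL:

```python
# Programmatic version of the curl health check above
import json
import urllib.request

def parse_health(payload: bytes) -> bool:
    """Return True when the /health response body is {"status": "ok"}."""
    return json.loads(payload).get("status") == "ok"

def check_health(base_url: str) -> bool:
    """GET {base_url}/health and report whether the backend is up."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=10) as resp:
        return parse_health(resp.read())
```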

Pause here for a second.

You just deployed a real backend. Congratulations.

Updating the Frontend to Use the Live Backend

Open app.js.

Replace:

http://localhost:8000/remove-bg

With:

https://YOUR_CLOUD_RUN_URL/remove-bg

Save the file and reload the frontend.

Step 5: Making the Backend and Frontend Work Together

At this point, we have two things:

  • A backend running on the internet

  • A frontend running in the browser

Now we want them to talk to each other.

Open your frontend, select an image, and click the button. You’ll notice that it still doesn’t work. This is expected.

What Is Happening Here?

Your frontend is running on one address, while your backend is running on another address.

Browsers are very strict about this. By default, a browser will block requests from one website to another unless the backend explicitly allows it. This is a security feature.

This rule is called CORS.

Clearing a Common Misconception About CORS

When beginners see a CORS error, they often think something is broken.

Nothing is broken. CORS is simply the browser saying: “I need the backend to confirm that this frontend is allowed to talk to it.”

So all we need to do is tell the backend: “It’s okay for requests to come from my frontend.”

Allowing Only Our Frontend (Not Everyone)

Instead of allowing requests from everywhere, we’ll allow requests only from our frontend. This is a good habit to learn early.

Open backend/api/main.py.

Add this import at the top:

from fastapi.middleware.cors import CORSMiddleware

Then, after creating the FastAPI app, add this:

app.add_middleware(
    CORSMiddleware,
    allow_origins=[
        "http://127.0.0.1:5500",
        "http://localhost:5500"
    ],
    allow_methods=["POST"],
    allow_headers=["*"],
)

Why these URLs?

If you’re using the Live Server extension, your frontend is usually served on port 5500. These are the addresses your browser is using locally.

We’re telling the backend: “Only accept requests from this frontend.”

That is exactly what we want.

Redeploying the Backend

Any time you change backend code, you need to redeploy it.

Since our backend is deployed on Cloud Run with continuous deployment set up, redeploying is simple: all you need to do is push your changes.

From the project root, run:

git add .
git commit -m "Add CORS configuration"
git push

Cloud Run will automatically detect the change and redeploy your backend.

Wait for the deployment to finish. Once it’s done, your backend will now allow requests from your local frontend.

Testing Again

Reload your frontend in the browser.

Select an image and click the button. This time, it should work.

You just handled a real browser security rule that every production app runs into. That alone is a huge learning step.

Step 6: Putting the Frontend on the Internet (GitHub Pages)

Right now, your frontend works only on your computer.

Just like the backend earlier, this means no one else can use it.

Let’s fix that.

Why GitHub Pages?

Our frontend is:

  • Just HTML, CSS, and JavaScript

  • No backend code

  • No build step

This makes it perfect for GitHub Pages. GitHub Pages can host static sites for free, and it’s very beginner friendly.

Preparing the Frontend for Deployment

Make sure all your frontend files are inside the frontend folder:

frontend/
  index.html
  styles.css
  app.js

Open app.js and make sure the backend URL is the Cloud Run URL, not localhost.

fetch("https://YOUR_CLOUD_RUN_URL/remove-bg", {
  method: "POST",
  body: formData,
});

Save the file.

Pushing the Frontend to GitHub

We already created a GitHub repository earlier, so we’ll reuse it.

From the project root, run:

git add .
git commit -m "Add frontend and prepare for GitHub Pages"
git push

Enabling GitHub Pages

  1. Go to your repository on GitHub.

  2. Open Settings.

  3. Click Pages.

Under Source, select:

  1. Branch: main

  2. Folder: / (root)

Save everything. After a few seconds, GitHub will give you a URL. This URL is now your frontend on the internet.

Updating CORS for the Live Frontend

Now that the frontend is live, go back to backend/api/main.py.

Replace the local origins with your GitHub Pages URL:

allow_origins=[
    "https://YOUR_GITHUB_USERNAME.github.io"
]

Commit and push the change:

git add .
git commit -m "Update CORS for GitHub Pages"
git push

Cloud Run will redeploy the backend automatically.

Final Test

Open your GitHub Pages URL.

Upload an image, remove the background, and download the result.

Everything is now live.

Where You Are Now

Let’s be very clear about what you just did.

You:

  • Built a real backend

  • Deployed it to the internet

  • Built a frontend

  • Deployed it to the internet

  • Fixed real production issues

  • Connected everything properly

This is not a demo project. This is a real application.

In the final section, we’ll wrap things up, talk about what you learned, and where you can go next.

Final Thoughts: What You Just Built Matters

At this point, it is worth stopping and looking back at what you have actually done.

You didn’t just follow steps. You didn’t just copy code. You built a real application.

Let’s Be Clear About What You Accomplished

You started with nothing more than basic tools and ideas.

By the end of this tutorial, you:

  • Built a backend that processes real data

  • Used a machine learning tool without fear or overthinking

  • Exposed a backend to the internet

  • Built a frontend with plain HTML, CSS, and JavaScript

  • Connected the frontend and backend properly

  • Deployed both parts so real users can access them

This is exactly how real applications are built, just at a smaller and more manageable scale.

Why This Is No Longer a “Beginner Project”

Many projects are called beginner projects, but they stop at showing things on a screen.

This one does not.

Your app:

  • Accepts real input

  • Performs real work

  • Runs on real servers

  • Handles real browser rules

  • Can be shared with anyone

That is the difference between learning syntax and building software.

The Most Important Lesson in This Tutorial

The most important thing you should take away from this is not the project you built.

It is this: You didn’t need to “know more” before you started.

You learned by building. You figured things out as they appeared. You fixed problems when they showed up.

That is how experience is gained. No one has experience before building real applications. No one lacks experience after building many of them.

What You Can Do Next

This project isn’t the end. It’s a starting point.

Here are a few ideas you can explore next, using what you already know:

  • Improve the user interface

  • Add loading states and better feedback

  • Restrict image size or file types

  • Add simple rate limiting

  • Build another small tool that solves a real problem

You don’t need to jump to frameworks yet.

If you can build a few more projects like this, frameworks will make a lot more sense when you meet them.

One Last Thought

If this was your first real project, you should be proud of yourself.

You moved past tutorials, built something useful, and put it on the internet. That’s the line many developers never cross.

Now that you have crossed it, the next one will be easier.

So keep building!

Source Code and Live Demo

If you want to explore the full project or build on top of it, you can find everything here.

If you have questions, reach me on X at @sprucekhalifa. I write practical tech articles like this regularly.




How to Build an AI Social Media Post Scheduler Using Gemini and Late API in Next.js


Social media has become a vital tool for people and businesses to share ideas, promote products, and connect with their target audience. But creating posts regularly and managing schedules across multiple platforms can be time-consuming and repetitive.

In this tutorial, you’ll learn how to build an AI-powered social media post scheduler using Gemini, Late API, and Next.js.

We’ll use the Gemini API to generate engaging social media content from user prompts, Next.js to handle both the frontend and backend of the application, and Late API to publish and schedule posts across multiple social media platforms from a single platform.

Social media platforms

Table of Contents

Prerequisites

To fully understand this tutorial, you need to have a basic understanding of React or Next.js.

We will use the following tools:

  • Late API: A social media API that lets you create and schedule posts across 13 social media platforms from a single dashboard.

  • Next.js: A React framework for building fast, scalable web applications, handling both the frontend and backend.

  • Google Gemini API: Provides access to Google’s AI models for generating text and other content based on user prompts.

Setup and Installation

Create a new Next.js project using the following code snippet:

npx create-next-app post-scheduler

Install the project dependencies. We’ll use Day.js to work with JavaScript dates, making it easier to schedule and publish social media posts at the correct time. (Its UTC plugin ships inside the dayjs package, so no separate install is needed.)

npm install @google/genai dayjs

Next, add a .env.local file containing your Gemini API key at the root of your Next.js project:

GEMINI_API_KEY=<your_API_key>

Once everything is set up, your Next.js project is ready. Now, let's start building! 🚀

Late API and the available social media platforms

How to Schedule Social Media Posts with Late

Late is an all-in-one social media scheduling platform that allows you to connect your social media accounts and publish posts across multiple platforms. In this section, you’ll learn how to create and schedule social media posts using the Late dashboard.

To get started, create a Late account and sign in.

Sign in and get Late API key

Create an API key and add it to the .env.local file within your Next.js project.

LATE_API_KEY=<your_API_key>

Copy Late API key

Connect your social media accounts to Late so you can manage and publish posts across all platforms.

Social media platforms

After connecting your social media accounts via OAuth, you can start writing, posting, and scheduling content directly to your social media platforms.

Twitter (X) account connected

Late lets you write your post content and attach media files directly from the dashboard.

Create Social media contents from your dashboard

You can choose when your content should be published: post immediately, schedule for later, add it to a job queue, or save it as a draft.

Publish your post

Once a post is published, you can view its status and preview it directly in the dashboard using the post link.

Social media post created with Late

🎉 Congratulations! You’ve successfully created your first post using the Late dashboard. In the next sections, you’ll learn how to use the Late API to create and schedule posts directly from your applications.

How to Build the Next.js App Interface

In this section, you’ll build the user interface for the application. The app uses a single-page route with conditional rendering to display recent posts, an AI prompt input field, and a form that allows users to create or schedule posts.

App Overview

Before we proceed, create a types.d.ts file within your Next.js project and copy the following code snippet into the file:

interface Post {
    _id: string;
    content: string;
    scheduledFor: string;
    status: string;
}

interface AIFormProps {
    handleGeneratePost: (e: React.FormEvent<HTMLFormElement>) => void;
    useAI: boolean;
    setUseAI: React.Dispatch<React.SetStateAction<boolean>>;
    prompt: string;
    setPrompt: React.Dispatch<React.SetStateAction<string>>;
    disableBtn: boolean;
}

interface FormProps {
    handlePostSubmit: (e: React.FormEvent<HTMLFormElement>) => void;
    content: string;
    setContent: React.Dispatch<React.SetStateAction<string>>;
    date: string;
    setDate: React.Dispatch<React.SetStateAction<string>>;
    disableBtn: boolean;
    setUseAI: React.Dispatch<React.SetStateAction<boolean>>;
    useAI: boolean;
}

The types.d.ts file defines all the data structures and type declarations used throughout the application.

Copy the following code snippet into the app/page.tsx file:

"use client";
import Nav from "./components/Nav";
import { useState } from "react";
import NewPost from "./components/NewPost";
import PostsQueue from "./components/PostsQueue";

export default function Page() {
    const [showPostQueue, setShowPostQueue] = useState<boolean>(false);
    return (
        <div className='w-full h-screen'>
            <Nav showPostQueue={showPostQueue} setShowPostQueue={setShowPostQueue} />
            {showPostQueue ? <PostsQueue /> : <NewPost />}
        </div>
    );
}

The Page component renders the Nav component and uses conditional rendering to display either the PostsQueue or NewPost component based on the value of the showPostQueue state.

Create a components folder to store the page components used in the application.

cd app
mkdir components && cd components
touch Nav.tsx NewPost.tsx PostElement.tsx PostsQueue.tsx

Add the code snippet below to the Nav.tsx file:

export default function Nav({
    showPostQueue,
    setShowPostQueue,
}: {
    showPostQueue: boolean;
    setShowPostQueue: React.Dispatch<React.SetStateAction<boolean>>;
}) {
    return (
        <nav>
            <h2>Post Scheduler</h2>

            <button onClick={() => setShowPostQueue(!showPostQueue)}>
                {showPostQueue ? "New Post" : "Schedule Queue"}
            </button>
        </nav>
    );
}

Copy the following code snippet into the PostsQueue.tsx file:

"use client";
import { useEffect, useState, useCallback } from "react";
import PostElement from "./PostElement";

export default function PostsQueue() {
    const [posts, setPosts] = useState<Post[]>([]);
    const [loading, setLoading] = useState<boolean>(true);

    return (
        <div className='p-4'>
            <h2 className='text-xl font-bold'>Scheduled Posts</h2>

            {loading ? (
                <p className='text-sm'>Loading scheduled posts...</p>
            ) : (
                <div className='mt-4'>
                    {posts.length > 0 ? (
                        posts.map((post) => <PostElement key={post._id} post={post} />)
                    ) : (
                        <p>No scheduled posts available.</p>
                    )}
                </div>
            )}
        </div>
    );
}

The PostsQueue.tsx component displays a list of previously created posts along with their current status, showing whether each post has been published or scheduled for a later time. While the data is being loaded, it shows a loading message, and once loaded, it renders each post using the PostElement component.

Add the following to the PostElement.tsx component:

export default function PostElement({ post }: { post: Post }) {
    const formatReadableTime = (isoString: string) => {
        const date = new Date(isoString); // parses UTC automatically
        return date.toLocaleString(undefined, {
            year: "numeric",
            month: "short",
            day: "numeric",
            hour: "2-digit",
            minute: "2-digit",
            second: "2-digit",
            hour12: true, // set to false for 24h format
        });
    };

    return (
        <div className='p-4 border flex items-center justify-between  space-x-4 rounded mb-2 hover:bg-gray-100 cursor-pointer'>
            <div>
                <p className='font-semibold text-sm'>{post.content.slice(0, 100)}</p>
                <p className='text-blue-400 text-xs'>
                    Scheduled for: {formatReadableTime(post.scheduledFor)}
                </p>
            </div>

            <p className='text-sm text-red-500'>{post.status}</p>
        </div>
    );
}

Finally, copy the following code snippet into the NewPost.tsx file:

"use client";
import { useState } from "react";

export default function NewPost() {
 const [disableBtn, setDisableBtn] = useState<boolean>(false);
 const [useAI, setUseAI] = useState<boolean>(false);
 const [content, setContent] = useState<string>("");
 const [prompt, setPrompt] = useState<string>("");
 const [date, setDate] = useState<string>("");

 //👇🏻 generates post content
 const handleGeneratePost = async (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault();
  setDisableBtn(true);
 };

 //👇🏻 create/schedule post
 const handlePostSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
  e.preventDefault();
 };

 return (
  <div className='w-full p-4  h-[90vh] flex flex-col items-center justify-center border-t'>
   <h3 className='text-xl font-bold'>New Post</h3>

   {useAI ? (
    <AIPromptForm
     handleGeneratePost={handleGeneratePost}
     useAI={useAI}
     setUseAI={setUseAI}
     prompt={prompt}
     setPrompt={setPrompt}
     disableBtn={disableBtn}
    />
   ) : (
    <PostForm
     handlePostSubmit={handlePostSubmit}
     content={content}
     setContent={setContent}
     date={date}
     setDate={setDate}
     disableBtn={disableBtn}
     setUseAI={setUseAI}
     useAI={useAI}
    />
   )}
  </div>
 );
}

The NewPost component conditionally renders the AIPromptForm and the PostForm. When a user chooses to generate content using AI, the AIPromptForm component is displayed to collect the prompt. Once the content is generated, the PostForm component is shown, allowing the user to edit, create, or schedule the post.

Add the components below inside the NewPost.tsx file:

export const AIPromptForm = ({
    handleGeneratePost,
    useAI,
    setUseAI,
    prompt,
    setPrompt,
    disableBtn,
}: AIFormProps) => {
    return (
        <form onSubmit={handleGeneratePost}>
            <p onClick={() => setUseAI(!useAI)}>Exit AI </p>
            <textarea
                rows={3}
                required
                value={prompt}
                onChange={(e) => setPrompt(e.target.value)}
                placeholder='Enter prompt...'
            />
            <button type='submit' disabled={disableBtn}>
                {disableBtn ? "Generating..." : "Generate Post with AI"}
            </button>
        </form>
    );
};

// 👇🏻 Post Form component
export const PostForm = ({
    handlePostSubmit,
    content,
    setContent,
    date,
    setDate,
    disableBtn,
    setUseAI,
    useAI,
}: FormProps) => {
    const getNowForDatetimeLocal = () => {
        const now = new Date();
        return new Date(now.getTime() - now.getTimezoneOffset() * 60000)
            .toISOString()
            .slice(0, 16);
    };

    return (
        <form onSubmit={handlePostSubmit}>
            <p onClick={() => setUseAI(!useAI)}>Generate posts with AI </p>
            <textarea
                value={content}
                onChange={(e) => setContent(e.target.value)}
                rows={4}
                placeholder="What's happening?"
                required
                maxLength={280}
            />
            <input
                type='datetime-local'
                min={getNowForDatetimeLocal()}
                value={date}
                onChange={(e) => setDate(e.target.value)}
            />
            <button disabled={disableBtn} type='submit'>
                {disableBtn ? "Posting..." : "Create post"}
            </button>
        </form>
    );
};

Congratulations! You've completed the application interface.

How to integrate Gemini API for Post Generation

Here, you will learn how to generate post content from the user's prompt using the Gemini API.

Before we proceed, make sure you have copied your API key from the Google AI Studio.

Create Gemini API key

Create an api folder inside the Next.js app directory. This folder will contain the API routes used to generate AI content and create or schedule posts using the Late API.

cd app && mkdir api

Next, create a generate folder inside the api directory and add a route.ts file. Copy the following code into the file:

// 👇🏻 In api/generate/route.ts file
import { NextRequest, NextResponse } from "next/server";
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY! });

export async function POST(req: NextRequest) {
    const { prompt } = await req.json();

    try {
        const response = await ai.models.generateContent({
            model: "gemini-3-flash-preview",
            contents: `
    You are a social media post generator, very efficient at writing engaging posts for Twitter (X). Given a topic, generate a creative and engaging post that captures attention and encourages interaction. The post must always stay within X's (Twitter's) 280-character limit, which includes hashtags, mentions, spaces, punctuation, and emojis.

    The user will provide a topic or theme, and you will generate a post based on that input.
    Here is the instruction from the user:
    "${prompt}"`,
        });
        if (!response.text) {
            return NextResponse.json(
                {
                    message: "Encountered an error generating the post.",
                    success: false,
                },
                { status: 400 },
            );
        }

        return NextResponse.json(
            { message: response.text, success: true },
            { status: 200 },
        );
    } catch (error) {
        return NextResponse.json(
            { message: "Error generating post.", success: false },
            { status: 500 },
        );
    }
}

The api/generate endpoint accepts the user's prompt and generates post content using the Gemini API.

Now you can send a request to the newly created /api/generate endpoint from the NewPost component. Update the handleGeneratePost function as shown below:

const handleGeneratePost = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    setDisableBtn(true);
    const result = await fetch("/api/generate", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify({ prompt }),
    });

    const data = await result.json();
    if (data.success) {
        setUseAI(false);
        setContent(data.message);
        setPrompt("");
    }
    setDisableBtn(false);
};

The handleGeneratePost function accepts the user's prompt and returns the AI-generated content.

How to Use Late API in Next.js

Late provides API endpoints that let you create, schedule, and manage posts programmatically. This allows you to integrate social media posting directly into your applications or automation workflows.

To get started, copy your Late API key and the account ID of your social media platforms into the .env.local file:

LATE_API_KEY=<Late_API_key>
ACCOUNT_ID=<social_media_acct_id>

# Gemini API key
GEMINI_API_KEY=<gemini_API_key>

Connect Twitter (X) account and copy account ID

Note: In this tutorial, we will be using Twitter (X) as the social media platform for scheduling posts. You can adapt the same workflow to other platforms supported by Late API by updating the platform and accountId values in your API requests.

Create an api/post endpoint to accept post content and schedule or publish posts using the Late API.

cd api
mkdir post && cd post
touch route.ts

Then, add the following POST method to post/route.ts:

import { NextRequest, NextResponse } from "next/server";
import utc from "dayjs/plugin/utc";
import dayjs from "dayjs";

dayjs.extend(utc);

export async function POST(req: NextRequest) {
    const { content, publishAt } = await req.json();

    // Convert the optional publish time to UTC; null means publish immediately
    const publishAtUTC = publishAt
        ? dayjs(publishAt).utc().format("YYYY-MM-DDTHH:mm")
        : null;

    try {
        const response = await fetch("https://getlate.dev/api/v1/posts", {
            method: "POST",
            headers: {
                Authorization: `Bearer ${process.env.LATE_API_KEY}`,
                "Content-Type": "application/json",
            },
            body: JSON.stringify({
                content,
                platforms: [
                    {
                        platform: "twitter",
                        accountId: process.env.ACCOUNT_ID!,
                    },
                ],
                publishNow: !publishAt,
                scheduledFor: publishAtUTC,
            }),
        });

        const { post, message } = await response.json();

        if (post?._id) {
            return NextResponse.json({ message, success: true }, { status: 201 });
        }

        return NextResponse.json({ message: "Error occurred", success: false }, { status: 500 });
    } catch (error) {
        return NextResponse.json({ message: "Error scheduling post.", success: false }, { status: 500 });
    }
}

From the code snippet above:

  • The api/post endpoint accepts the post’s content and an optional publishAt time.

  • If publishAt is null, the post is published immediately. Otherwise, the time is converted to UTC for scheduling.

  • It then sends a request to the Late API using your API key and the account ID to create or schedule the post on the selected social media platform.
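If the UTC step feels abstract, the same conversion can be expressed in plain Python (illustrative only; the endpoint itself does this with dayjs):

```python
# Convert a timestamp with a local offset into a UTC "YYYY-MM-DDTHH:mm" string,
# mirroring what dayjs(publishAt).utc().format(...) does in the endpoint
from datetime import datetime, timezone

def to_utc_string(local_iso: str) -> str:
    dt = datetime.fromisoformat(local_iso)  # e.g. "2026-01-15T09:30:00+01:00"
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M")
```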

You can also add a GET method to the /api/post endpoint to retrieve posts that have already been created or scheduled:

export async function GET() {
    try {
        const response = await fetch(
            "https://getlate.dev/api/v1/posts?platform=twitter",
            {
                method: "GET",
                headers: {
                    Authorization: `Bearer ${process.env.LATE_API_KEY}`,
                    "Content-Type": "application/json",
                },
            },
        );

        const { posts } = await response.json();

        return NextResponse.json({ posts }, { status: 200 });
    } catch (error) {
        return NextResponse.json(
            { message: "Error fetching posts.", success: false },
            { status: 500 },
        );
    }
}

Next, update the handlePostSubmit function in NewPost.tsx to send a POST request to /api/post. This will create or schedule the post and notify the user of the result:

const handlePostSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    setDisableBtn(true);

    const now = new Date();
    const selected = date ? new Date(date) : null;
    const publishAt = !selected || selected <= now ? null : date;

    const result = await fetch("/api/post", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ content, publishAt }),
    });

    const { message, success } = await result.json();

    if (success) {
        setContent("");
        setDate("");
        alert("Success: " + message);
    } else {
        alert("Error: " + message);
    }

    setDisableBtn(false);
};

Finally, fetch all scheduled or published posts and render them in the PostsQueue component:

const fetchScheduledPosts = useCallback(async () => {
    try {
        const response = await fetch("/api/post", {
            method: "GET",
            headers: { "Content-Type": "application/json" },
        });
        const data = await response.json();
        setPosts(data.posts);
        setLoading(false);
    } catch (error) {
        console.error("Error fetching scheduled posts:", error);
        setLoading(false);
    }
}, []);

useEffect(() => {
    fetchScheduledPosts();
}, [fetchScheduledPosts]);

🎉 Congratulations! You’ve successfully built an AI-powered social media post scheduler using Next.js, Gemini API, and Late API.

The source code for this tutorial is available on GitHub.

Conclusion

In this tutorial, you’ve learnt how to create and schedule social media posts across multiple platforms using a single scheduling platform, Late, and how to generate AI content using the Gemini API.

The Late API is a powerful tool for automating social media tasks, posting at specific intervals, managing multiple accounts, and tracking analytics – all from one platform. By combining it with generative AI models like Gemini and automation tools like n8n or Zapier, you can build automated workflows that keep your audience engaged with minimal effort.

The Gemini API also makes it easy to integrate AI-powered text, images, or code generation directly into your applications, opening up a wide range of creative possibilities.

Thank you for reading! 🎉



Read the whole story
alvinashcraft
53 seconds ago
reply
Pennsylvania, USA
Share this story
Delete

Containerize Your .NET Applications Without a Dockerfile


Developers report that over 40% of committed code is now AI-assisted, yet they still spend about 25% of their week on toil. Sonar's new developer survey shows AI didn't eliminate work; it shifted it into reviewing, fixing, and verifying code that looks correct but isn't. Download the report to understand why your workload feels the same, just harder to reason about. (No form fill required.)

Webinar: How to Build Faster with AI Agents Learn how full-stack developers boost productivity by 50% with AI agents that automate layout, styling, and component generation through RAG and LLM pipelines. See how orchestration and spec-driven workflows keep you in control of quality and consistency. Check it out

Containers have become the standard for deploying modern applications. But if you've ever written a Dockerfile, you know it can be tedious. You need to understand multi-stage builds, pick the right base images, configure the right ports, and remember to copy files in the correct order.

What if I told you that you don't need a Dockerfile at all?

Since .NET 7, the SDK has built-in support for publishing your application directly to a container image. You can do this with a single dotnet publish command.

In this week's newsletter, we'll explore:

  • Why Dockerfile-less publishing matters
  • How to enable container publishing in your project
  • Customizing the container image
  • Publishing to container registries
  • How I'm using this to deploy to a VPS

The Traditional Approach: Writing a Dockerfile

Before we look at the SDK approach, let's see what we're replacing.

A typical multi-stage Dockerfile for a .NET application looks like this:

FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS base
WORKDIR /app
EXPOSE 8080
EXPOSE 8081

FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["src/MyApi/MyApi.csproj", "src/MyApi/"]
RUN dotnet restore "src/MyApi/MyApi.csproj"

COPY . .
WORKDIR "/src/src/MyApi"
RUN dotnet build "MyApi.csproj" -c $BUILD_CONFIGURATION -o /app/build

FROM build AS publish
ARG BUILD_CONFIGURATION=Release
RUN dotnet publish "MyApi.csproj" -c $BUILD_CONFIGURATION -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyApi.dll"]

This works, but there's a learning curve and maintenance overhead:

  • Maintenance burden: You need to update base image tags manually
  • Layer caching: Getting the COPY order wrong kills your build cache
  • Duplication: Every project needs a similar Dockerfile
  • Context switching: You're writing Docker DSL, not .NET code

The .NET SDK approach eliminates all of this.

Enabling Container Publishing

If you're running on .NET 10, you don't need to do anything special to enable container publishing. This will work for ASP.NET Core apps, worker services, and console apps.

You can publish directly to a container image:

dotnet publish --os linux --arch x64 /t:PublishContainer

That's it. The .NET SDK will:

  1. Build your application
  2. Select the appropriate base image
  3. Create a container image with your published output
  4. Load it into your local OCI-compliant daemon

The most popular option is Docker, but it also works with Podman.

An image showing the output of the dotnet publish command creating a container image.

Customizing the Container Image

The SDK provides sensible defaults, but you'll often want to customize the image. For a more comprehensive list of options, see the official docs.

I'll cover the most common customizations here.

Setting the Image Name and Tag

The ContainerRepository property sets the image name (repository). The ContainerImageTags property sets one or more tags (separated by semicolons). If you want a single tag, you can use ContainerImageTag instead.

<PropertyGroup>
  <ContainerRepository>ghcr.io/USERNAME/REPOSITORY</ContainerRepository>
  <ContainerImageTags>1.0.0;latest</ContainerImageTags>
</PropertyGroup>

From .NET 8 onwards, when a tag isn't provided, the default is latest.

Choosing a Different Base Image

By default, the SDK uses the following base images:

  • mcr.microsoft.com/dotnet/runtime-deps for self-contained apps
  • mcr.microsoft.com/dotnet/aspnet image for ASP.NET Core apps
  • mcr.microsoft.com/dotnet/runtime for other cases

You can switch to a smaller or different image:

<PropertyGroup>
  <!-- Use the Alpine-based image for smaller size -->
  <ContainerBaseImage>mcr.microsoft.com/dotnet/aspnet:10.0-alpine</ContainerBaseImage>
</PropertyGroup>

You could also do this by setting ContainerFamily to alpine, and letting the rest be inferred.
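
As a sketch, that alternative looks like this (the version suffix is inferred from your TargetFramework):

```xml
<PropertyGroup>
  <!-- Appends -alpine to the inferred base image tag, e.g. aspnet:10.0-alpine -->
  <ContainerFamily>alpine</ContainerFamily>
</PropertyGroup>
```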

Here's the size difference between the default and Alpine images for an ASP.NET Core app:

An image showing the size difference between the default and Alpine base images for ASP.NET Core applications.
| Base Image                                  | Size (MB) |
| ------------------------------------------- | --------- |
| mcr.microsoft.com/dotnet/aspnet:10.0        | 231.73    |
| mcr.microsoft.com/dotnet/aspnet:10.0-alpine | 122.65    |

You can see a significant size reduction by switching to alpine.

Configuring Ports

For web applications, the default exposed ports are 8080 and 8081 for HTTP and HTTPS. These are inferred from ASP.NET Core environment variables (ASPNETCORE_URLS, ASPNETCORE_HTTP_PORT, ASPNETCORE_HTTPS_PORT). The Type attribute can be tcp or udp.

<ItemGroup>
  <ContainerPort Include="8080" Type="tcp" />
  <ContainerPort Include="8081" Type="tcp" />
</ItemGroup>
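
Environment variables can be baked into the image in a similar way via the ContainerEnvironmentVariable item (a sketch; the variable and value here are placeholders for your own settings):

```xml
<ItemGroup>
  <!-- Sets a default that can still be overridden with `docker run -e` -->
  <ContainerEnvironmentVariable Include="ASPNETCORE_ENVIRONMENT" Value="Production" />
</ItemGroup>
```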

Publishing to a Container Registry

Publishing locally is useful for development, but you'll want to push to a registry for deployment. You can specify the target registry during publishing.

Here's an example publishing to GitHub Container Registry:

dotnet publish --os linux --arch x64 /t:PublishContainer /p:ContainerRegistry=ghcr.io

Authentication: The SDK uses your local Docker credentials. Make sure you've logged in with docker login before publishing to a remote registry.

However, I don't use the above approach. I prefer using docker CLI for the publishing step, as it gives me more control over authentication and tagging.

CI/CD Integration

Here's what I'm doing in my GitHub Actions workflow to build and push my .NET app container. I left out the boring bits of setting up the .NET environment and checking out code.

This will build the container image, tag it, and push it to GitHub Container Registry:

- name: Publish
  run: dotnet publish "${{ env.WORKING_DIRECTORY }}" --configuration ${{ env.CONFIGURATION }} --os linux -t:PublishContainer
# Tag the build for later steps
- name: Log in to ghcr.io
  run: echo "${{ env.DOCKER_PASSWORD }}" | docker login ghcr.io -u "${{ env.DOCKER_USERNAME }}" --password-stdin
- name: Tag Docker image
  run: |
    docker tag ${{ env.IMAGE_NAME }}:${{ github.sha }} ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    docker tag ${{ env.IMAGE_NAME }}:latest ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:latest
- name: Push Docker image
  run: |
    docker push ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:${{ github.sha }}
    docker push ghcr.io/${{ env.DOCKER_USERNAME }}/${{ env.IMAGE_NAME }}:latest

Once my images are up in the registry, I can deploy them to my VPS.

I'm using Dokploy (a simple but powerful deployment tool for Docker apps) to pull the latest image and restart my service.

deploy:
  runs-on: ubuntu-latest
  needs: build-and-publish
  steps:
    - name: Trigger deployment
      run: |
        curl -X POST ${{ env.DEPLOYMENT_TRIGGER_URL }} \
          -H 'accept: application/json' \
          -H 'Content-Type: application/json' \
          -H 'x-api-key: ${{ env.DEPLOYMENT_TRIGGER_API_KEY }}' \
          -d '{
            "applicationId": "${{ env.DEPLOYMENT_TRIGGER_APP_ID }}"
          }'

This kicks off a deployment on my VPS, pulling the latest image and restarting the container.

An image showing the output of the dokploy deployment command restarting the container.

By the way, I'm running my VPS on Hetzner Cloud - highly recommended if you're looking for affordable and reliable VPS hosting.

When You Still Need a Dockerfile

The SDK container support is powerful, but it doesn't cover every scenario.

You'll still need a Dockerfile when:

  • Installing system dependencies: If your app needs native libraries (like libgdiplus for image processing)
  • Complex multi-stage builds: When you need to run custom build steps
  • Non-.NET components: If your container needs additional services or tools

For most web APIs and background services, the SDK approach is sufficient.

Summary

The .NET SDK's built-in container support removes the friction of containerization.

You get:

  • No Dockerfile to maintain - one less file to worry about
  • Automatic base image selection - always uses the right image for your framework version
  • MSBuild integration - configure everything in your .csproj
  • CI/CD friendly - works anywhere dotnet runs

The days of copy-pasting Dockerfiles between projects are over.

Just enable the feature, customize what you need, and publish.

Thanks for reading.

And stay awesome!





SQL Server Pagination with COUNT(*) OVER() Window Function

1 Share
A simple SQL Server trick that eliminates the need for separate count queries when building paginated APIs.
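The trick is the COUNT(*) OVER() window aggregate, which attaches the total match count to every row of the page, since window functions are evaluated before OFFSET/FETCH trims the result. A minimal sketch (table and column names are hypothetical):

```sql
-- One round trip: each row carries the total count of matching rows,
-- so no separate SELECT COUNT(*) query is needed.
SELECT Id, Name, COUNT(*) OVER () AS TotalCount
FROM dbo.Products
ORDER BY Id
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;
```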