The Rise of the Cognitive Architect

At KubeCon North America 2025, GitLab’s Emilio Salvador outlined how developers are shifting from individual coders to leaders of hybrid human–AI teams

The role of the developer is changing fast. At KubeCon North America 2025 in Atlanta, we sat down with Emilio Salvador, GitLab’s VP of Strategy and Developer Relations, to talk about his vision of how software developers will evolve from individual coders into managers of hybrid teams.

“We believe that there’s going to be more developers, but what they do is going to be more than just coding. The developer is becoming a manager of a team that will be formed by both humans and agents,” Salvador told The New Stack.

In this episode of The New Stack Makers, we discussed how AI is reshaping developer roles, why GitLab is building an agent orchestration platform, and what the “meta agent” of the future might look like.

Cognitive Architects and AI Guardians

In an article Salvador published on The New Stack recently, he coined two terms for where developers are heading. The first, the “cognitive architect,” is about how development is becoming less about writing individual functions and more about decomposing large problems into pieces that can be assigned to AI agents or human team members.

“In the past, developers were tasked with a single thing that they needed to do — read this JSON file or parse this,” Salvador explained. “Now, the scale of the problem is much bigger, and you have to start thinking more like an architect who is going to take a much bigger problem, break it down into smaller pieces, and then assign those pieces to different agents or other team members.”

The second role, the “AI guardian,” is a bit of a reality check: developers are losing confidence in the quality of AI-generated code. Salvador pointed to large companies claiming that 80% of their code is now written by AI, yet every line of code those agents write is still reviewed by a human.

“There will be people who will oversee compliance, security, code quality — those will always be humans […],” he said. “I don’t see any European bank deploying an application that has not been signed by a person saying that the application is right.”

The AI Paradox: Faster Coding, Same Bottlenecks

But even as developers write more and more code with the help of AI — or at least think they do — most companies haven’t seen major productivity gains from their software teams.

GitLab calls it the “AI paradox.” Even though developers can now write code faster, those gains disappear because the rest of the software development lifecycle hasn’t caught up. Testing, security reviews and compliance are becoming the new bottlenecks.

“You can only be as fast as your slowest wheel,” Salvador said. “So when you think about implementing AI across your software development life cycle, you need to take a more strategic approach. You have to look at your entire software development life cycle and apply AI at every single stage, because otherwise you’re going to end up with those bottlenecks even if you do everything right.”

GitLab’s own Duo Agent Platform is meant to extend AI capabilities beyond coding into planning, security, compliance, and deployment. The platform is also designed for extensibility, Salvador noted, because no single vendor can keep pace with the ever-changing landscape of new AI tools and models.

“There’s a new tool, there’s a new model, there’s a new agent every week,” Salvador said. “The tool that is amazing today won’t be in the market six months from now.”

The Meta Agent

Maybe Salvador’s most interesting prediction is the rise of what he calls the “meta agent,” a role-based AI agent that is essentially a full member of a development team, complete with an email address, phone number, and Slack handle.

“You will be able to communicate with that meta agent like you communicate with any other one of your team members,” he said. “You will be able to assign tasks and then they will act as an agent of agents.”

The idea is that this meta agent won’t just wait for you to call; it will also be proactive, monitoring applications 24/7. Ideally, it would detect problems before they happen and automatically generate fixes.

Agent Hype vs. Reality

Despite his overall vision for the field, Salvador is also aware of the slow pace of enterprise adoption. He recalled a recent visit to a large European bank still running mainframes and COBOL.

“I think there’s a ton of hype,” he said. “For enterprises, the speed of adoption is significantly slower than what we believe.”

He pointed to this year’s DORA report from Google, which found that a year ago, productivity gains from AI were “almost insignificant.” Now the gains are there, but it took time both for the technology to improve and for developers to learn how to use it effectively.

“AI is an amazing tool, but like any other tool, you need to know how to use it,” Salvador said. “When we ignore the human factor and change management, things don’t happen overnight.”

His advice for teams getting started: think small. Pick a single application, work with a small team, and guide them through the entire lifecycle from coding to deployment.

“If you can help developers accelerate that learning process, then they will be able to spread that knowledge across your entire org,” he said.

The post The Rise of the Cognitive Architect appeared first on The New Stack.


A Complete Native PostgreSQL Setup on macOS: Homebrew and Launchd


Advantages

Native PostgreSQL installation gives you the best performance and the closest thing to a production Linux environment on macOS. You get full control over configuration, service management, and when PostgreSQL starts. Homebrew makes installation clean, and launchd handles service management reliably. You can run multiple PostgreSQL versions simultaneously on different ports, and you have direct access to all PostgreSQL files and configuration. This is the most flexible option for serious local development.

Disadvantages

Native installation requires more setup and management than Postgres.app or Docker. You’re responsible for starting and stopping the service, configuring auto-start, and managing system-level settings. Updates require manual intervention, and you need to understand launchd and Homebrew service management. It’s easier to accidentally break things or create conflicts with other PostgreSQL installations. For developers who want simplicity, Postgres.app is easier. For those who want isolation and reproducibility, Docker might be better.

Sometimes you want PostgreSQL installed directly on your Mac. No containers, no apps. Just PostgreSQL running as a system service. This gives you the most control, the best performance, and the closest thing to a production Linux environment you’ll get on macOS.

The trade-off is you’re managing a system service. But if you know what you’re doing, it’s straightforward. Homebrew makes the installation painless, and macOS’s launchd handles the service management. You get full control over when it starts, how it’s configured, and what versions you’re running.

This is the complete guide to installing PostgreSQL natively on macOS, configuring it properly, managing the service, and setting up an initial development database named interlinedlist.

Installation via Homebrew

Homebrew is the cleanest way to install PostgreSQL on macOS. If you don’t have Homebrew:

https://brew.sh

Install it, then install PostgreSQL:

brew install postgresql@16

This installs PostgreSQL 16. If you need a different version, swap postgresql@16 for postgresql@15, postgresql@14, or whatever matches your production environment.

Homebrew installs PostgreSQL but doesn’t start it automatically. That’s intentional—you control when it runs.

Start the Service

PostgreSQL on macOS runs as a launchd service. Start it:

brew services start postgresql@16

This starts PostgreSQL and configures it to start automatically when you log in. If you don’t want it registered for automatic start, use run instead, which launches the service only for the current session:

brew services run postgresql@16

Check if it’s running:

brew services list | grep postgresql

You should see postgresql@16 listed as started.

Verify It’s Running

Connect to PostgreSQL to verify everything works:

psql postgres

If you see a PostgreSQL prompt, you’re good. Type \q to exit.

Create Your Initial Development Database

Create the database for your project:

createdb interlinedlist

Confirm it exists:

psql -l | grep interlinedlist

Connect to it:

psql interlinedlist

You’re in.

Create a Local Superuser

By default, PostgreSQL creates a superuser matching your macOS username. If that doesn’t exist or you want to set it up explicitly:

createuser -s $USER

This creates a superuser with your macOS username. For local development, this keeps things simple. ORMs, migration tools, and scripts expect you to have full database privileges.

If you want a password for your user:

psql postgres

Then:

ALTER USER adron WITH PASSWORD 'your-password';
\q

Connection String

For most tooling and frameworks, the connection string will look like:

postgres://adron@localhost:5432/interlinedlist

Or if you set a password:

postgres://adron:your-password@localhost:5432/interlinedlist

Drop that into your .env file or wherever your project expects it.
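
If you want to sanity-check a connection string before dropping it into a project, psql accepts the same URL directly (this example assumes the adron user from above):

psql "postgres://adron@localhost:5432/interlinedlist" -c "SELECT version();"

If that prints a PostgreSQL version, the URL is good.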

Managing the Service

Stop PostgreSQL:

brew services stop postgresql@16

Start PostgreSQL:

brew services start postgresql@16

Restart PostgreSQL:

brew services restart postgresql@16

Check service status:

brew services list

This shows all Homebrew-managed services and their status.

Configure Auto-Start on Boot

By default, brew services start configures PostgreSQL to start automatically when you log in. If you want to disable this:

brew services stop postgresql@16
launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.postgresql@16.plist

To re-enable auto-start:

brew services start postgresql@16

Or manually:

launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql@16.plist

Finding PostgreSQL Files

Homebrew installs PostgreSQL in a versioned directory:

Binaries:

/opt/homebrew/opt/postgresql@16/bin/

Or on Intel Macs:

/usr/local/opt/postgresql@16/bin/

Data directory:

/opt/homebrew/var/postgresql@16/

Or on Intel Macs:

/usr/local/var/postgresql@16/

Configuration file:

/opt/homebrew/var/postgresql@16/postgresql.conf

Logs:

/opt/homebrew/var/log/postgresql@16.log

Updating PostgreSQL

When a new version is available:

brew upgrade postgresql@16

This upgrades PostgreSQL in place. Your data stays intact. After upgrading, restart the service:

brew services restart postgresql@16

Multiple PostgreSQL Versions

You can run multiple PostgreSQL versions simultaneously on different ports. Install another version:

brew install postgresql@15

Configure it to run on a different port. Edit the configuration:

brew services stop postgresql@15

Edit /opt/homebrew/var/postgresql@15/postgresql.conf and change:

port = 5433

Start it:

brew services start postgresql@15

Now you have PostgreSQL 16 on port 5432 and PostgreSQL 15 on port 5433.
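
A quick way to confirm each instance is listening where you expect, assuming both services are running:

psql -p 5432 -c "SHOW server_version;" postgres
psql -p 5433 -c "SHOW server_version;" postgres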

Configuration Best Practices

For local development, you’ll want to adjust some PostgreSQL settings. Edit the config file:

nano /opt/homebrew/var/postgresql@16/postgresql.conf

Or use your preferred editor. Key settings for local dev:

# Increase shared buffers for better performance
shared_buffers = 256MB

# Increase work memory for complex queries
work_mem = 16MB

# Enable logging for debugging
logging_collector = on
log_directory = 'log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_statement = 'all'  # Log all queries (useful for dev)

# Increase max connections if needed
max_connections = 100

After changing configuration, restart PostgreSQL:

brew services restart postgresql@16
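
To confirm a setting actually took effect after the restart, ask PostgreSQL directly:

psql postgres -c "SHOW shared_buffers;"
psql postgres -c "SHOW work_mem;"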

Backup and Restore

Create a backup:

pg_dump interlinedlist > interlinedlist_backup.sql

Restore from backup:

psql interlinedlist < interlinedlist_backup.sql

Backup all databases:

pg_dumpall > all_databases_backup.sql
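
Restoring a pg_dumpall backup also goes through psql, connected to an existing database (typically postgres):

psql -f all_databases_backup.sql postgres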

Common Issues and Solutions

Port already in use:

If port 5432 is already taken (maybe by Postgres.app or another PostgreSQL instance):

lsof -i :5432

Kill the process or change PostgreSQL’s port in the config file.
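
For example, if lsof shows another Homebrew-managed instance holding the port, the fix looks roughly like this (the PID is whatever lsof reported):

# stop a conflicting Homebrew-managed instance
brew services stop postgresql@15

# or, as a last resort, stop the process directly (replace 12345 with the PID from lsof)
kill 12345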

Can’t connect:

Check if PostgreSQL is running:

brew services list | grep postgresql

Check logs:

tail -f /opt/homebrew/var/log/postgresql@16.log

Permission errors:

Make sure your user owns the data directory:

sudo chown -R $(whoami) /opt/homebrew/var/postgresql@16

Uninstalling PostgreSQL

If you need to remove PostgreSQL completely:

brew services stop postgresql@16
brew uninstall postgresql@16

This removes PostgreSQL but leaves your data directory intact. To remove data as well:

rm -rf /opt/homebrew/var/postgresql@16

Summary of Native PostgreSQL Setup

Native PostgreSQL on macOS gives you full control. You manage the service, configure it exactly how you want, and run multiple versions if needed. It’s the closest thing to a production Linux environment you’ll get on a Mac.

Homebrew handles the installation cleanly, launchd manages the service reliably, and you get all the PostgreSQL tools you need without any abstraction layers. For serious local development that needs to match production, this is the way to go.

The service starts when you boot (or doesn’t, if you configure it that way), runs in the background, and stays out of your way until you need it. Clean, predictable, and exactly what you’d expect from a native system service.


A Clean, Production-Parity Way to Run PostgreSQL on macOS: Docker


Advantages

Docker gives you production parity and complete isolation. You can match exact PostgreSQL versions, run multiple instances simultaneously on different ports, and tear everything down with a single command. Docker Compose makes it easy to version control your database configuration and share it with your team. If you’re already using Docker for other services, PostgreSQL fits right into your existing workflow. Containers are predictable, reproducible, and never pollute your system with leftover installs.

Disadvantages

Docker adds overhead and complexity compared to Postgres.app or native installation. You need Docker Desktop running, which consumes system resources. Containers can be slower than native installations, and you’ll need to manage volumes for data persistence. If you’re not already using Docker, this adds another tool to your stack. For simple single-database development, Postgres.app might be faster and easier.

Sometimes you need more than Postgres.app. Maybe you’re matching production versions exactly, running multiple PostgreSQL instances, or you want complete isolation between projects. Docker gives you that control without polluting your system with multiple PostgreSQL installs.

Docker-based PostgreSQL setups are predictable, reproducible, and easy to tear down. You get production parity without the headaches of managing multiple versions on your Mac. If you’re already using Docker for other services, this fits right into your workflow.

This is the short guide to getting PostgreSQL running in Docker, configured, and ready for an initial development database named interlinedlist.

Prerequisites

You’ll need Docker Desktop installed. If you don’t have it:

https://docker.com

Download, install, and make sure Docker is running. You should see the Docker icon in your menu bar.

Run PostgreSQL in Docker

The simplest way to get PostgreSQL running:

docker run --name postgres-interlinedlist \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_DB=interlinedlist \
  -p 5432:5432 \
  -d postgres:16

This does a few things:

  • Creates a container named postgres-interlinedlist
  • Sets up a database called interlinedlist automatically
  • Exposes PostgreSQL on port 5432 (standard)
  • Runs PostgreSQL 16 in detached mode (-d)
  • Uses the default postgres user with password postgres

If you need a different version, swap postgres:16 for postgres:15, postgres:14, or whatever matches your production environment.

Verify It’s Running

Check that your container is up:

docker ps | grep postgres-interlinedlist

You should see the container running. If you don’t, check logs:

docker logs postgres-interlinedlist

Install the Command Line Tools

You’ll want psql available locally to connect to your Dockerized PostgreSQL. The easiest way is via Homebrew:

brew install postgresql@16

Or if you already have Postgres.app installed, you can use those tools. They’ll connect to Docker just fine.

Verify:

psql --version

Connect to Your Database

Now connect to the database running in Docker:

psql -h localhost -U postgres -d interlinedlist

When prompted, enter the password: postgres

You’re in. Same PostgreSQL, just running in a container instead of directly on your Mac.
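
If you’d rather not install any client tools on the host, the official postgres image ships with psql, so you can run it inside the container instead:

docker exec -it postgres-interlinedlist psql -U postgres -d interlinedlist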

Create a Local Superuser (Optional but Recommended)

If you want to use your macOS username instead of postgres:

psql -h localhost -U postgres -d interlinedlist

Then inside psql:

CREATE USER adron WITH SUPERUSER PASSWORD 'your-password';
GRANT ALL PRIVILEGES ON DATABASE interlinedlist TO adron;
\q

Now you can connect as yourself:

psql -h localhost -U adron -d interlinedlist

Connection String

For most tooling and frameworks, the connection string will look like:

postgres://postgres:postgres@localhost:5432/interlinedlist

Or if you created your own user:

postgres://adron:your-password@localhost:5432/interlinedlist

Drop that into your .env file or wherever your project expects it.

Managing Your Container

Stop the container:

docker stop postgres-interlinedlist

Start it again:

docker start postgres-interlinedlist

Remove the container (this deletes your data):

docker stop postgres-interlinedlist
docker rm postgres-interlinedlist

View logs:

docker logs postgres-interlinedlist

Persisting Data

By default, Docker containers are ephemeral. If you remove the container, your data goes with it. To persist data between container restarts, use a volume:

docker run --name postgres-interlinedlist \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_DB=interlinedlist \
  -p 5432:5432 \
  -v postgres-interlinedlist-data:/var/lib/postgresql/data \
  -d postgres:16

The -v postgres-interlinedlist-data:/var/lib/postgresql/data flag creates a Docker volume that persists even if you remove the container. Your data stays safe.
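
You can confirm the volume exists, and see where Docker keeps it, with the volume subcommands:

docker volume ls | grep postgres-interlinedlist-data
docker volume inspect postgres-interlinedlist-data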

Using Docker Compose (Recommended)

For anything more than a quick test, use Docker Compose. Create a docker-compose.yml:

version: '3.8'

services:
  postgres:
    image: postgres:16
    container_name: postgres-interlinedlist
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: interlinedlist
    ports:
      - "5432:5432"
    volumes:
      - postgres-interlinedlist-data:/var/lib/postgresql/data

volumes:
  postgres-interlinedlist-data:

Then:

docker-compose up -d

To stop:

docker-compose down

To stop and remove volumes (deletes data):

docker-compose down -v

Docker Compose keeps everything in one file, version-controlled, and easy to share with your team.
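
Day to day, the Compose equivalents of the earlier docker commands keep the same workflow (assuming the docker-compose.yml above):

docker-compose ps
docker-compose logs -f postgres
docker-compose restart postgres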

Summary of Production-Parity with Docker

Docker gives you production parity, version control, and complete isolation. You can run multiple PostgreSQL versions simultaneously on different ports, match your production environment exactly, and tear everything down with a single command.

If you need more, such as multiple databases, extensions, seed scripts, or integration with other Docker services, Docker Compose makes it all straightforward. But for a single PostgreSQL instance that matches production, this gets you there fast.

The container runs when you need it, stops when you don’t, and never touches your system PostgreSQL installs. Clean, predictable, and exactly what you need for local development.


A Clean, No-Nonsense Way to Install PostgreSQL on macOS: Postgres.app


Advantages

Postgres.app is the simplest PostgreSQL setup for macOS. It’s self-contained, requires no system configuration, and runs as a regular application. You get a working PostgreSQL server in minutes without touching the command line or managing system services. Everything stays in one place, making it easy to find your databases and configuration. It’s perfect for developers who want PostgreSQL available without the overhead of Docker or native system service management.

Disadvantages

Postgres.app doesn’t give you production parity or version control like Docker does. You can’t easily run multiple PostgreSQL versions simultaneously, and it’s harder to match exact production environments. It also requires manual management of when the server runs, and you’ll need to remember to start it if you don’t configure auto-start. For teams that need reproducible environments or multiple PostgreSQL versions, Docker or native installation might be better choices.

Local PostgreSQL setups get messy fast if you let them. Multiple versions, stray installers, and leftover data directories can leave you wondering which psql binary you’re even using. On macOS, the cleanest and least painful path is Postgres.app. It’s self-contained, predictable, and never leaves you guessing where your database went.

I’ve cycled through Homebrew installs, Docker setups, and enough legacy PostgreSQL installers to know when something is worth keeping around. Postgres.app is that option.

This is the short guide to getting it installed, configured, and ready for an initial development database named interlinedlist.

Download

Grab the latest release here:

https://postgresapp.com

Pick the version that matches whatever you’re running in production, or the closest to it. If you’re not sure, go with the latest stable major release (right now that’s PostgreSQL 16 or 17, depending on your needs).

Once downloaded:

  1. Drag Postgres.app into /Applications
  2. Launch it

You now have a local PostgreSQL server running with zero system pollution.

Install the Command Line Tools

You’ll want psql, createdb, createuser, and the rest available in your shell.

Inside Postgres.app:

Postgres.app → Menu → Install Command Line Tools

This drops the binaries into /usr/local/bin or /opt/homebrew/bin on Apple Silicon, depending on your setup.

Verify the install:

psql --version

If you see a version and not an error, you’re good.
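
If you get “command not found” instead, the tools aren’t on your PATH yet. Postgres.app keeps its binaries inside the app bundle, so one option is adding that directory to your PATH manually (this assumes the standard install location and the latest version symlink):

export PATH="/Applications/Postgres.app/Contents/Versions/latest/bin:$PATH"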

Verify the Server Is Running

Postgres.app launches its own server instance automatically. You should see something like:

Server Running - PostgreSQL 16

If it’s not running, click Start.

Nothing exotic here. This is what keeps Postgres.app clean. No hidden launch daemons, no orphaned services.

Create Your Initial Development Database

Let’s create a database for your project: interlinedlist.

From your terminal:

createdb interlinedlist

And confirm it exists:

psql -l | grep interlinedlist

Connect to it:

psql interlinedlist

You’re in.

Create a Local Superuser (Optional but Recommended)

If your macOS username doesn’t already map to a PostgreSQL superuser, fix that now:

createuser -s $USER

This keeps local dev friction to a minimum. ORMs, migration tools, and scripts expect you to have the ability to create and drop databases during development.

Connection String

For most tooling and frameworks, the connection string will look like:

postgres://<your-macos-username>@localhost:5432/interlinedlist

Example:

postgres://adron@localhost:5432/interlinedlist

Drop that into your .env file or wherever your project expects it.

And That’s It for the Simple Way

No services to debug. No random installers to clean up. No guessing which version is running. Postgres.app is the most stable, least irritating way to keep PostgreSQL available on macOS without turning your workstation into a zoo of leftover data directories.

If you need more, such as Docker-based production parity, extensions, seed scripts, or per-project isolation, I’ve got posts coming for those too. But for everyday development, Postgres.app gets you up and running in minutes and stays out of your way.


Simple Network Communications in Docker Without Compose


I often run services like databases or messaging systems in Docker and connect to them from my host computer, where I have a .NET application using them.

But sometimes I want to run the application in a container too. In this case, I have to connect the two containers over a Docker network.

In this post, I’ll show how to do it without Docker Compose, and in a follow-up post, I’ll use Docker Compose.

Set up the network

First, I create a Docker network that both containers will use to communicate.

docker network create app_to_app_network

That’s all there is to it.
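
You can confirm the network exists, and later see which containers are attached to it, with:

docker network ls | grep app_to_app_network
docker network inspect app_to_app_network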

Run the two containers

I’m using Alpine for this simple example because it’s small and starts quickly.

Connecting the containers to the network is part of the command to run the containers.

Open a terminal and run this -

docker run -it --rm --name alpineone --network app_to_app_network alpine

Open another terminal and run this -

docker run -it --rm --name alpinetwo --network app_to_app_network alpine

This runs two containers named alpineone and alpinetwo, connects them to the app_to_app_network network, and starts the shell inside the containers.

Test the connection

From the first container -

ping alpinetwo

You should see output like this -

PING alpinetwo (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.111 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.119 ms
64 bytes from 172.18.0.3: seq=2 ttl=64 time=0.176 ms

From the second container -

ping alpineone

You should see output like this -

PING alpineone (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.111 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.119 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.176 ms

There you go! Two containers communicating over a Docker network.

For fun, you could also use nc to send messages between the two containers.

In the first container, run this to listen on port 8888 -

nc -l -p 8888

In the other container, run this to send a message -

nc alpineone 8888
# type your message and hit enter

You should see the message appear in the first container.
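
When you’re done, exit both shells. The --rm flag removes the containers automatically, so the only cleanup left is the network:

docker network rm app_to_app_network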


Building dbRosetta Part 5: We Need an API

1 Share

Because I don’t want to have to fight with our support team (they’re awesome, but busy), I decided that, initially, I’m going to host dbRosetta at ScaryDBA.com. I have full control of the website, and I won’t be breaking Redgate Software entirely if I accidentally do something silly. Before starting the process of developing our next prompt, or set of prompts, I discussed the project with CoPilot. We agreed to break the next part into two pieces. For this one, we’ll build an API interface. Why an API? Well, I’ll let CoPilot explain it:

An API provides a secure, consistent way to expose dbRosetta’s database so applications and websites can interact with it without direct access to the underlying system. By adding this layer, we separate the database from the front‑end, making it easier to build interactive features while keeping the data protected and portable.

And there you have it. If it’s not clear, this is Part 5 of a multi-part series wherein I’m building a database and application using LLMs. I’m writing everything myself, except the parts, like above, where I call out the AI.

Let’s get going.

Generating the API Prompt

As I’ve done throughout this series, I’m writing prompts to the CoPilot LLM that are then used to generate a prompt for GitHub Copilot. Here’s how I defined my own prompt for this step and the output I received:

Let’s get started on the API for dbRosetta. I’d like you to generate a prompt for GitHub Copilot within VS Code using the agreed-upon structure. This prompt will define the steps necessary to create an API within Azure that accesses our PostgreSQL database for dbRosetta. It will be a secure API to protect access to the database. The API will provide mechanisms for querying the information stored in the PostgreSQL database. The API will be designed to support a web page running within WordPress, which we’ll build later. We’ll need to incorporate the API and its code into our existing GitHub repository. Deployment of the API and any updates will be done using our GitHub Actions pipeline already created. We’ll also need a full test suite to validate that the API is working appropriately.

Project: dbRosetta Secure API — Implementation Plan and Tasks

 

Context

You are an expert backend engineer and DevOps automation assistant. Implement a secure, production-ready REST API for dbRosetta that runs in Azure, accesses our PostgreSQL database, and is consumable by a WordPress front end. All code must be TypeScript, follow best practices, include a full test suite, and integrate into the existing GitHub repository and GitHub Actions pipeline.

 

High-level Goals

1. Create a secure, well-documented REST API that exposes read and query endpoints for dbRosetta PostgreSQL data.

2. Use Azure-managed services and security best practices (Managed Identity, Key Vault, private endpoints, firewall rules).

3. Integrate code into the existing GitHub repo with clear branch/PR workflow and CI/CD via existing GitHub Actions.

4. Provide unit, integration, and end-to-end tests plus a Postman collection and contract tests.

5. Produce OpenAPI (Swagger) documentation and example client usage for WordPress (AJAX/REST calls).

 

Constraints and Nonfunctional Requirements

– Language: **TypeScript** (Node.js 18+).

– Framework: **Fastify** (or Express if preferred) with dependency injection and typed request/response models.

– ORM/DB Layer: **TypeORM** or **Prisma** (prefer Prisma for type-safety and migrations).

– Secrets: **Azure Key Vault** for DB connection strings and JWT signing keys.

– Authentication: **OAuth2 / OpenID Connect** with Azure AD or JWT bearer tokens issued by a trusted identity provider; support role-based access control (RBAC).

– Network: Use **Azure Private Endpoint** for PostgreSQL and restrict API access via Azure App Service VNet integration or Azure API Management.

– CI/CD: Use existing GitHub Actions pipeline; add jobs for build, lint, test, migration, and deploy.

– Observability: Structured logging (Winston/Pino), distributed tracing (OpenTelemetry), and Azure Monitor integration.

– Security: Parameterized queries, input validation, rate limiting, CORS restricted to WordPress domain(s), and automated dependency scanning.

 

Deliverables

– New API service in services/dbrosetta-api/ with:

src/ TypeScript source

tests/ unit and integration tests

prisma/ or migrations/ DB schema and migration scripts

openapi.yaml generated from code

Dockerfile and Azure deployment manifest (ARM/Bicep or Terraform snippets)

README.md with setup, local dev, and deployment instructions

– GitHub Actions workflow updates or new workflow file that plugs into existing pipeline steps

– Postman collection and example WordPress client snippet

– Test coverage report and contract tests

– Security checklist and runbook for rotating secrets and emergency rollback

 

API Surface and Example Endpoints

Design endpoints to be RESTful, paginated, and filterable. Example endpoints:

GET /v1/records — list records with pagination, sorting, and filters (query params).

GET /v1/records/{id} — fetch single record by id.

POST /v1/query — parameterized, read-only query endpoint that accepts a safe query DSL (no raw SQL).

GET /v1/schema — returns a sanitized schema summary for the WordPress UI.

GET /v1/health — readiness and liveness checks.

GET /v1/metrics — Prometheus-compatible metrics (secured).

 

Security and Access Control

– **Authentication**: Validate JWTs issued by Azure AD or a configured OIDC provider. Reject unsigned or expired tokens.

– **Authorization**: Implement RBAC with scopes/roles (e.g., reader, admin). Enforce at route level.

– **Secrets**: Do not store secrets in repo or environment variables in plaintext. Use Azure Key Vault and assign the API a Managed Identity to fetch secrets at runtime.

– **Network Security**: Use private endpoints for PostgreSQL; restrict API inbound traffic via Azure API Management or App Service access restrictions; enable TLS everywhere.

– **SQL Safety**: Use Prisma/TypeORM parameterized queries; never interpolate user input into raw SQL. Validate and sanitize all inputs.

– **Rate Limiting and Throttling**: Implement per-IP and per-client rate limits; return standard 429 responses.

– **CORS**: Restrict to the WordPress domain(s) and admin origins only.

 

Database Access and Migrations

– Use Prisma (recommended) with a schema that maps to existing PostgreSQL tables; include migration scripts.

– Provide a safe read-only query DSL for complex queries that maps to parameterized Prisma queries.

– Add a migration job in CI that runs prisma migrate deploy against a staging DB and requires manual approval for production.

 

Local Development and Secrets

– Provide .env.example with placeholders.

– Local dev should support a Docker Compose setup with a local Postgres instance and a local Key Vault emulator or secrets file (for dev only).

– Document how to authenticate locally with Azure AD (dev flow) or use a test JWT.

 

Testing Strategy

– **Unit tests**: Jest with ts-jest for services, controllers, and utilities.

– **Integration tests**: Spin up a test Postgres (Docker) and run tests against it; use test fixtures and transactions to isolate tests.

– **Contract tests**: Use Pact or similar to validate API contract with the WordPress client expectations.

– **E2E tests**: Use Supertest or Playwright for API flows.

– **CI**: Run lint, typecheck, unit tests, integration tests (with a test DB container), and generate coverage. Fail pipeline on coverage drop below threshold (e.g., 80%).

– Provide a Postman collection and automated Newman run in CI for smoke tests post-deploy.

 

CI/CD and GitHub Actions Integration

– Add or update workflow steps:

build — install, lint, typecheck, build.

test — run unit and integration tests; start test DB container as needed.

migrate — run migrations against staging (manual approval for prod).

containerize — build and push Docker image to Azure Container Registry.

deploy — deploy to Azure App Service or Azure Container Apps using existing pipeline secrets and service principal.

post-deploy — run smoke tests and health checks.

– Use environment-specific secrets stored in GitHub Secrets and Azure Key Vault.

– Ensure pipeline uses least privilege service principal and supports rollback.

 

Repository Integration and Developer Workflow

– Create a feature branch feature/api/dbrosetta and open a PR with the implementation.

– Include a PR template that requires: architecture diagram, security checklist, migration plan, and test results.

– Add CODEOWNERS for the API folder.

– Add pre-commit hooks (husky) for linting and formatting.

 

Observability and Monitoring

– Structured JSON logs with request IDs.

– Expose Prometheus metrics and integrate with Azure Monitor.

– Add alerts for error rate, latency, and failed deployments.

 

Acceptance Criteria

– API endpoints implemented and documented in OpenAPI.

– All tests pass in CI and coverage meets threshold.

– Secrets are stored in Key Vault and accessed via Managed Identity.

– Deployment via GitHub Actions to staging succeeds and smoke tests pass.

– WordPress-compatible example client snippet provided.

– Security checklist completed and reviewed.

 

Developer Tasks and Step-by-Step Implementation Plan

1. Initialize service folder services/dbrosetta-api with TypeScript, Fastify, Prisma, ESLint, Prettier, and Jest.

2. Add Prisma schema and generate client; create initial migration from existing DB schema.

3. Implement DB layer with typed models and safe query helpers.

4. Implement authentication middleware for JWT/OIDC and RBAC enforcement.

5. Implement endpoints and input validation (zod or class-validator).

6. Add OpenAPI generation and serve Swagger UI on a secured route.

7. Add logging, metrics, and health endpoints.

8. Write unit tests for each module and integration tests for endpoints using a test DB container.

9. Add Dockerfile and Azure deployment manifests.

10. Update GitHub Actions workflows to include build/test/migrate/deploy steps and Newman/Postman smoke tests.

11. Document local dev, secrets, and deployment steps in README.

12. Create Postman collection and example WordPress AJAX client snippet.

13. Run security review and fix any issues; finalize PR.

 

PR Checklist

– Code compiles and lints clean.

– Tests pass and coverage threshold met.

– OpenAPI spec included and validated.

– Migration scripts included and tested.

– Secrets not in repo; Key Vault integration documented.

– CI pipeline updated and tested on staging.

– README and runbook updated.

 

Notes for Copilot

– Generate code in small, reviewable commits with clear messages.

– Prefer explicit typing and small, testable functions.

– When scaffolding, include TODOs for environment-specific values and manual approval gates for production migration/deploy steps.

– Provide example curl and WordPress fetch snippets for each endpoint.

 

End of prompt.

I’m not going to try to hide anything from you: I’m now outside my comfort zone. I’ve no idea if the approach we’re following is correct or not, so I’m going to be pretty dependent on the two AIs to get this stuff right. I’m thinking through my own prompts and trying to be more thorough and more prescriptive. I read through the output, and it sure seems right to me. Over to VS Code.

Building the API

I’m not entirely sure what I expected, but after I created the markdown file for the prompt (you can read through all the code, including the prompts I’m using, here on GitHub), I pasted it into the chat window in VS Code. GitHub Copilot didn’t go into a discussion. It started generating files, folders, and code. Lots of code. And it asked me a question:

What would you like to do next?

  1. Test what we have so far?
  2. Continue with implementing the API routes?
  3. Focus on a specific component (tests, CI/CD, documentation)?
  4. Something else?

It’s claiming to be 70% complete, after about two minutes. Honestly, I’d like to immediately finish up, just to see if it really is done that quickly, but I’m going to test what we have so I can understand where we are. Nice thing is, it told me what to do for that as well:

cd services/dbrosetta-api
npm install
npm run docker:up
npm run prisma:generate
npm run dev
# Visit http://localhost:3000/docs
Which is awesome!

Until I hit my first error: npm wouldn’t run. Node.js wasn’t installed locally, and the AI never asked. No biggie. I told the AI, and it walked me through getting the install done. Then I ran the first npm command, and this output absolutely put me on pause:

npm warn deprecated inflight@1.0.6: This module is not supported, and leaks memory. Do not use it. Check out lru-cache if you want a good and tested way to coalesce async requests by a key value, which is much more comprehensive and powerful.
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported
npm warn deprecated glob@7.2.3: Glob versions prior to v9 are no longer supported

> dbrosetta-api@1.0.0 prepare
> husky install

husky - install command is DEPRECATED

added 928 packages, and audited 929 packages in 50s

169 packages are looking for funding
run npm fund for details

2 moderate severity vulnerabilities

To address all issues (including breaking changes), run:
npm audit fix --force

Run npm audit for details.

Wow! I mean, wow! It’s using all sorts of deprecated versions and functions. This is one of the kinds of problems I honestly expected. I took this output to CoPilot (I’m also going to have GitHub Copilot fix it, but I wanted to see what the AI thought). It thinks I’m dangerous:

VS Code didn’t intentionally pick deprecated code — the warnings come from transitive dependencies in the toolchain (packages you installed depend on older libraries like glob@7.x which in turn pull inflight@1.0.6). Those upstream packages haven’t been updated to newer versions yet, so npm prints deprecation warnings; you can either upgrade/override the offending dependency, wait for upstream fixes, or suppress/mitigate the risk with fixes and CI checks

I’m not going to argue with it when the install I did was exactly what VS Code told me to do. However, this is all interesting stuff. This is why, straight up, I don’t think we have to worry about our jobs. We’re still very much necessary to this process. It’s just that the process is WAY faster than it would otherwise be.
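
For what it’s worth, npm can show exactly which packages drag in the deprecated modules, which makes it easier to decide whether to override them or wait for upstream fixes. A quick check from the API folder looks like this:

cd services/dbrosetta-api
npm ls glob
npm ls inflight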

Lots more back and forth between me and the agent. As we ran the tests, we discovered coding errors, configuration problems and other stuff. Most of which was generated by the LLM, but also fixed by the LLM.

As a side note, for the first four parts of this series, I merged the branches into main and deleted them. Now I’m realizing it would be helpful for you, if you’re interested, to be able to see each of the steps taken within VS Code. So, from here on, you’ll see the branches and each of the commits I made on them, as a way to see how interactive this process is (as well as how problematic code is introduced and then eliminated through the use of the agent). I’ll be merging the code into main as before, but I won’t delete the previous branches going forward, so you can see the sausage-making process in its entirety.

In the middle of ALL this, I had to switch Azure Subscriptions (don’t ask).

However, with the testing scaffolding in place, the migration from one Subscription to another was shockingly simple, if a bit long-winded.

Conclusion

I keep getting reminded, over and over, that the AI is a particularly brilliant, but extremely junior, developer on my team. They generate code like lightning. Most of the time, it’s fully functional. However, sometimes silly mistakes are made. A constant, steady guide must be in place to arrive at a good destination. Although, to be fair, at one point I forgot to update the IP address in the firewall in Azure PostgreSQL. When the tests failed, the AI was hellbent on doing massive troubleshooting to find out why. I had to force it to forget what it saw after I got the security situation worked out again. Otherwise, it was a real pleasure to work with. I mean, it did keep trying to bypass Flyway and use some internal tool instead, but I got my way in the end.

Here’s CoPilot’s assessment:

By completing this stage, we’ve established a secure and reliable bridge between dbRosetta’s PostgreSQL foundation and the broader ecosystem it will support. The API not only enforces best practices around authentication, input validation, and deployment hygiene, but also sets the stage for reproducible integrations with other platforms. With this layer in place, contributors and downstream applications can query the database confidently, knowing that access is protected and performance is monitored. This milestone transforms dbRosetta from a static repository of translations into a living service, ready to be consumed by the WordPress interface we’ll build next.

OK. A bit grandiose, but otherwise, yeah. Pretty much.

If you’re just landing here for the first time, the code and the prompts I used to generate it, are all here on GitHub.

The rest of the series is here:

Part 1: Introducing the Concept of dbRosetta
Part 2: Defining the Project & Prompt Templates
Part 3: Creating a Database
Part 4: Automating a CI/CD Database Build

The post Building dbRosetta Part 5: We Need an API appeared first on Redgate.
