Sr. Content Developer at Microsoft, working remotely in PA, TechBash conference organizer, former Microsoft MVP, Husband, Dad and Geek.
150,407 stories · 33 followers

Announcing Amazon Aurora PostgreSQL serverless database creation in seconds


At re:Invent 2025, Colin Lazier, vice president of databases at AWS, emphasized the importance of building at the speed of an idea, enabling rapid progress from concept to running application. Customers can already create production-ready Amazon DynamoDB tables and Amazon Aurora DSQL databases in seconds. He previewed creating an Amazon Aurora serverless database with the same speed, and customers have since asked for quick access to this capability.

Today, we’re announcing the general availability of a new express configuration for Amazon Aurora PostgreSQL, a streamlined database creation experience with preconfigured defaults designed to help you get started in seconds.

With only two clicks, you can have an Aurora PostgreSQL serverless database ready to use in seconds. You have the flexibility to modify certain settings during and after database creation. For example, you can change the capacity range for the serverless instance at creation time, or add read replicas and modify parameter groups after the database is created. Aurora clusters with express configuration are created without an Amazon Virtual Private Cloud (Amazon VPC) network and include an internet access gateway for secure connections from your favorite development tools, with no VPN or AWS Direct Connect required. Express configuration also sets up AWS Identity and Access Management (IAM) authentication for your administrator user by default, enabling passwordless database authentication from the beginning without additional configuration.

After it’s created, you have access to features available for Aurora PostgreSQL serverless, such as deploying additional read replicas for high availability and automated failover capabilities. This launch also introduces a new internet access gateway routing layer for Aurora. Your new serverless instance comes enabled by default with this feature, which allows your applications to connect securely from anywhere in the world through the internet using the PostgreSQL wire protocol from a wide range of developer tools. This gateway is distributed across multiple Availability Zones, offering the same level of high availability as your Aurora cluster.

Creating and connecting to Aurora in seconds means fundamentally rethinking how you get started. We launched multiple capabilities that work together to help you onboard and run your application with Aurora. Aurora is now available on the AWS Free Tier, so you can gain hands-on experience with Aurora at no upfront cost. After your database is created, you can query it directly in AWS CloudShell, or from programming languages and developer tools through a new internet-accessible routing component for Aurora. With integrations such as v0 by Vercel, you can use natural language to start building your application with the features and benefits of Aurora.

Create an Aurora PostgreSQL serverless database in seconds
To get started, go to the Aurora and RDS console and in the navigation pane, choose Dashboard. Then, choose the Create button with the rocket icon.

Review pre-configured settings in the Create with express configuration dialog box. You can modify the DB cluster identifier or the capacity range as needed. Choose Create database.

You can also use the AWS Command Line Interface (AWS CLI) or AWS SDKs with the --with-express-configuration parameter to create both a cluster and an instance within the cluster in a single API call, making it ready for running queries in seconds. To learn more, visit Creating an Aurora PostgreSQL DB cluster with express configuration.

Here is a CLI command to create the cluster:

$ aws rds create-db-cluster --db-cluster-identifier channy-express-db \
    --engine aurora-postgresql \
    --with-express-configuration

Your Aurora PostgreSQL serverless database should be ready in seconds. A success banner confirms the creation, and the database status changes to Available.

After your database is ready, go to the Connectivity & security tab to access three connection options. When connecting through SDKs, APIs, or third-party tools including agents, choose Code snippets. You can choose various programming languages such as .NET, Golang, JDBC, Node.js, PHP, PSQL, Python, and TypeScript. You can paste the code from each step into your tool and run the commands.

For example, the following Python code is dynamically generated to reflect the authentication configuration:

import psycopg2
import boto3

# Generate a short-lived IAM authentication token to use in place of a password.
rds = boto3.client('rds', region_name='ap-south-1')
auth_token = rds.generate_db_auth_token(
    DBHostname='channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
    Port=5432,
    DBUsername='postgres',
    Region='ap-south-1'
)

conn = None
try:
    # Connect over TLS; the IAM token is passed as the password.
    conn = psycopg2.connect(
        host='channy-express-db-instance-1.abcdef.ap-south-1.rds.amazonaws.com',
        port=5432,
        database='postgres',
        user='postgres',
        password=auth_token,
        sslmode='require'
    )
    cur = conn.cursor()
    cur.execute('SELECT version();')
    print(cur.fetchone()[0])
    cur.close()
except Exception as e:
    print(f"Database error: {e}")
    raise
finally:
    if conn:
        conn.close()

Choose CloudShell for quick access to the AWS CLI, launched directly from the console. When you choose Launch CloudShell, the command is pre-populated with the relevant information to connect to your specific cluster. After connecting to the shell, you should see the psql login and the postgres=> prompt where you can run SQL commands.

You can also choose Endpoints to use tools that only support username and password credentials, such as pgAdmin. When you choose Get token, you use the generated IAM authentication token in the password field. The token is generated for the master username that you set up when creating the database and is valid for 15 minutes at a time. If the tool you’re using terminates the connection, you’ll need to generate a new token.

Building your application faster with Aurora databases
At re:Invent 2025, we announced enhancements to the AWS Free Tier program, offering up to $200 in AWS credits that can be used across AWS services. You’ll receive $100 in AWS credits upon sign-up and can earn an additional $100 in credits by using services such as Amazon Relational Database Service (Amazon RDS), AWS Lambda, and Amazon Bedrock. In addition, Amazon Aurora is now available across a broad set of eligible Free Tier database services.

Developers are embracing platforms such as Vercel, where natural language is all it takes to build production-ready applications. We announced integrations with Vercel Marketplace, which lets you create and connect to an AWS database directly from Vercel in seconds, and with v0 by Vercel, an AI-powered tool that transforms your ideas into production-ready, full-stack web applications in minutes. It includes Aurora PostgreSQL, Aurora DSQL, and DynamoDB databases. You can also connect your existing databases created through express configuration with Vercel. To learn more, visit AWS for Vercel.

As with Vercel, we’re bringing our databases seamlessly into the experiences developers already use, integrating directly with widely adopted frameworks, AI coding assistants, environments, and developer tools, all to unlock your ability to build at the speed of an idea.

We introduced an Aurora PostgreSQL integration with Kiro powers, which developers can use to build Aurora PostgreSQL-backed applications faster with AI agent-assisted development through Kiro. You can use the Kiro power for Aurora PostgreSQL within the Kiro IDE or install it with one click from the Kiro powers webpage. To learn more about this Kiro power, read Introducing Amazon Aurora powers for Kiro and Amazon Aurora Postgres MCP Server.

Now available
You can create an Aurora PostgreSQL serverless database in seconds today in all AWS commercial Regions. For Regional availability and the future roadmap, visit AWS Capabilities by Region.

You pay only for the capacity you consume, measured in Aurora Capacity Units (ACUs) and billed per second, starting from zero capacity. The database automatically starts up, shuts down, and scales capacity up or down based on your application’s needs. To learn more, visit the Amazon Aurora Pricing page.

Give it a try in the Aurora and RDS console and send feedback to AWS re:Post for Aurora PostgreSQL or through your usual AWS Support contacts.

Channy


Resolve Merge Conflicts the Easy Way


Git is great at merging until it isn’t. Most of the time, when I rebase my feature branch against the main branch, it all goes to plan. Nothing to do for me. But when it doesn’t go to plan, it can be a big mess. Git dumps a wall of conflict markers on you. You resolve those, continue the rebase, and the next commit has conflicts too. Depending on the scope of changes, resolving merge conflicts can be a very tedious chore. The temptation to git rebase --abort and pretend this never happened is overwhelming.

It turns out, we have some great tools now for dealing with tedious chores. In particular, I’ve set up two tools that turned merge conflicts from a dreaded chore into a minor speed bump. Most of the time, they resolve themselves before I even see them. For the ones that don’t, automation handles the tedious parts so I only deal with the genuinely ambiguous cases.

A friendly robot referee untangling two git branches

The Problem with Textual Merging

Git’s built-in merge is purely textual. It compares lines of text and looks for overlapping changes. It doesn’t understand your code. It doesn’t know what a function is, or an import statement, or a class definition. It just sees lines.

This means Git reports conflicts that aren’t actually conflicts. Two developers add different imports to the same file, near the same spot. Git sees overlapping line changes and panics:

<<<<<<< HEAD
import { useState } from 'react';
import { useQuery } from '@tanstack/react-query';
=======
import { useState } from 'react';
import { useEffect } from 'react';
>>>>>>> feature/dashboard

A human can see instantly that both changes are independent additions. One added useQuery, the other added useEffect. The correct resolution is to keep all three imports. But Git can’t see that because Git doesn’t understand syntax. It only sees text.

These false conflicts add up. On a large rebase, they can turn a five-minute task into a thirty-minute slog.

Layer 1: Mergiraf

Mergiraf is a structural merge driver for Git. Instead of comparing lines of text, it parses your files using language grammars and merges at the syntax tree level. If two changes touch different parts of the syntax tree, it merges them cleanly. If they genuinely conflict at the structural level, it falls back to standard conflict markers.

That import example above? Mergiraf resolves it automatically. It understands that those are independent additions to an import list and combines them.

Mergiraf supports over 25 languages: Java, Rust, Go, Python, JavaScript, TypeScript, C, C++, C#, Ruby, Elixir, PHP, Dart, Scala, Haskell, OCaml, Lua, Nix, YAML, TOML, HTML, XML, and more. For file types it doesn’t support, it returns a non-zero exit code and Git falls back to its default textual merge. No harm done.

Setup

Three steps.

1. Install mergiraf:

brew install mergiraf

2. Register the merge driver in your ~/.gitconfig:

[merge "mergiraf"]
    name = mergiraf
    driver = mergiraf merge --git %O %A %B -s %S -x %X -y %Y -p %P -l %L

3. Apply it globally in ~/.config/git/attributes:

* merge=mergiraf

The wildcard (*) tells Git to run every file through mergiraf. This might sound aggressive, but it’s fine. If mergiraf doesn’t recognize the file type, it steps aside and Git handles it normally.

Note

If you use my dotfiles, the git/install.sh script creates the attributes file for you. Run it once and you’re done.

Companion Settings

Two additional Git settings complement mergiraf nicely.

diff3 conflict style: By default, Git’s conflict markers only show your version and their version. With diff3, you also see the common ancestor (the “base”). This gives both mergiraf and humans more context to resolve conflicts correctly.

[merge]
    conflictStyle = diff3

Here’s the difference. Standard conflict markers:

<<<<<<< HEAD
const timeout = 5000;
=======
const timeout = 10000;
>>>>>>> feature

With diff3:

<<<<<<< HEAD
const timeout = 5000;
||||||| base
const timeout = 3000;
=======
const timeout = 10000;
>>>>>>> feature

The base section tells you the original value was 3000. Now you can see that HEAD changed it to 5000 and the feature branch changed it to 10000. Without the base, you’re guessing.
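To make the three sections concrete, here is a small sketch, in plain stdlib Python rather than any particular tool's implementation, that splits a diff3-style hunk into its ours/base/theirs parts:

```python
def split_diff3_hunk(hunk: str):
    """Split one diff3-style conflict hunk into (ours, base, theirs) line lists.

    Simplification: assumes marker-like lines never appear inside the content.
    """
    ours, base, theirs = [], [], []
    target = None
    for line in hunk.splitlines():
        if line.startswith('<<<<<<<'):
            target = ours          # HEAD section starts
        elif line.startswith('|||||||'):
            target = base          # common-ancestor section starts
        elif line.startswith('======='):
            target = theirs        # incoming section starts
        elif line.startswith('>>>>>>>'):
            target = None          # hunk ends
        elif target is not None:
            target.append(line)
    return ours, base, theirs

hunk = """<<<<<<< HEAD
const timeout = 5000;
||||||| base
const timeout = 3000;
=======
const timeout = 10000;
>>>>>>> feature"""

ours, base, theirs = split_diff3_hunk(hunk)
print(ours, base, theirs)
```

With diff3 markers, all three values are mechanically recoverable; with the standard two-way markers, the base list would simply stay empty and you lose that context.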

rerere (reuse recorded resolution): Rerere records how you resolve conflicts and automatically replays those resolutions if the same conflict comes up again. This is useful during rebases where you might encounter the same conflict multiple times.

[rerere]
    enabled = true

Layer 2: Automating the Rest

Mergiraf handles the structural conflicts, but some conflicts are genuinely ambiguous. And some aren’t ambiguous at all, they’re just tedious. Lock files, database migrations, stacked PR duplicates. Each of these has a clear resolution strategy, but you still have to do the work manually.

This is drudgery. Drudgery that follows clear rules. Perfect for automation.

I built a Claude Code skill called /resolve-conflicts that handles the entire conflict resolution workflow. Type /resolve-conflicts and it takes over.

How It Works

The skill follows a three-step loop:

  1. Detect context. It reads Git’s internal state files to determine whether you’re in a rebase, merge, cherry-pick, or revert, and how far along you are (e.g., step 3 of 12 in a rebase).

  2. Categorize and resolve. It runs a categorization script on every conflicted file, sorting them into buckets: lock file, migration, mergiraf-supported, or other. Then it resolves each bucket with the appropriate strategy.

  3. Continue. It regenerates any lock files, runs the appropriate continue command (git rebase --continue, git commit --no-edit, etc.), and loops back to step 1 if more conflicts appear.
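The categorization in step 2 can be pictured as a simple bucketing function. This is an illustrative sketch only; the file name patterns here are assumptions, not the skill's actual rules:

```python
# Hypothetical bucket definitions; the real skill's rules may differ.
LOCKFILES = {'package-lock.json', 'yarn.lock', 'Cargo.lock', 'poetry.lock'}
MERGIRAF_EXTS = {'.py', '.rs', '.go', '.ts', '.js', '.java', '.yaml', '.toml'}

def categorize(path: str) -> str:
    """Sort one conflicted path into a resolution bucket."""
    name = path.rsplit('/', 1)[-1]
    if name in LOCKFILES:
        return 'lockfile'       # regenerate from the manifest
    if '/migrations/' in path or name.startswith('migrate_'):
        return 'migration'      # order-sensitive: ask the human
    if any(name.endswith(ext) for ext in MERGIRAF_EXTS):
        return 'mergiraf'       # structural second pass
    return 'other'              # fall through to AI analysis

conflicted = ['package-lock.json', 'db/migrations/0042_add_index.sql',
              'src/app.ts', 'README.md']
print({p: categorize(p) for p in conflicted})
```

Each bucket then maps to one of the resolution strategies described next.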

Resolution Strategies

Each category gets its own treatment:

Lock files: Accept theirs to clear the conflict markers, then regenerate. The content of a lock file is derived from the dependency manifest, so there’s no point resolving individual lines. The skill runs the appropriate package manager (npm install, cargo generate-lockfile, poetry lock --no-update, etc.) to produce a correct lock file from the resolved manifest.

Migrations: Ask the human. Migration files represent sequential schema changes where order matters. Getting this wrong can break your database. The skill flags these and asks you how to proceed.

Mergiraf files: Run mergiraf as a second pass. Even though mergiraf already ran as a merge driver during the git operation, sometimes conflicts remain (partial resolutions, complex restructuring). The skill runs mergiraf solve on the file. If conflict markers remain after that, it falls through to AI analysis.

Everything else: Read the conflict, analyze both sides, and resolve it. This is where the skill earns its keep on the genuinely tricky ones.

Stacked PR Duplicate Detection

If you work with stacked PRs, you’ve probably hit this one. You have a feature branch with sub-PRs stacked on top of each other. You merge a sub-PR into main. Now when you rebase the parent branch, Git produces conflicts where both sides have nearly identical code and the base section is empty.

Here’s what that looks like with diff3:

<<<<<<< HEAD
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
||||||| base
=======
function calculateTotal(items) {
  return items.reduce((sum, item) => sum + item.price, 0);
}
>>>>>>> feature/add-checkout

Empty base. Both sides identical. This isn’t a real conflict. It’s just Git seeing that code appeared on both sides independently (once from the merged sub-PR, once from the feature branch that originally authored it).

The skill detects this pattern. When the base is empty but HEAD and the incoming side are more than 95% similar (after normalizing whitespace), it auto-resolves by keeping the HEAD version and tells you what it did. For 70-95% similarity, it shows both versions and asks you to confirm. Below 70%, it’s a genuine divergence and presents both options for you to decide.
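The thresholds map naturally onto a sequence-similarity ratio. Here's a sketch using stdlib difflib; the whitespace normalization is an assumption about how the comparison might be done, not the skill's exact algorithm:

```python
import difflib

def classify_duplicate(head: str, incoming: str, base: str) -> str:
    """Classify a conflict hunk by HEAD/incoming similarity when base is empty."""
    if base.strip():
        return 'real-conflict'            # non-empty base: not the stacked-PR case
    norm = lambda s: ' '.join(s.split())  # normalize whitespace before comparing
    ratio = difflib.SequenceMatcher(None, norm(head), norm(incoming)).ratio()
    if ratio > 0.95:
        return 'auto-keep-head'           # near-identical duplicate: resolve silently
    if ratio >= 0.70:
        return 'confirm'                  # probably a duplicate: ask the user
    return 'present-both'                 # genuine divergence: human decides

head = ("function calculateTotal(items) {\n"
        "  return items.reduce((sum, item) => sum + item.price, 0);\n"
        "}")
incoming = head  # identical copy landed via the merged sub-PR
print(classify_duplicate(head, incoming, ''))  # auto-keep-head
```

Identical sides yield a ratio of 1.0, so the checkout example above lands in the auto-resolve bucket.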

A Realistic Session

Here’s what it looks like in practice. You’re rebasing and hit conflicts:

> /resolve-conflicts

Conflicts (rebase step 3/12):
- 1 lockfile: package-lock.json
- 2 mergiraf: src/app.ts, src/utils.ts
- 1 other: README.md

Resolving lockfile: package-lock.json... accepted theirs (will regenerate)
Resolving mergiraf: src/app.ts... resolved structurally
Resolving mergiraf: src/utils.ts... 1 conflict remains, analyzing...
  Auto-resolved src/utils.ts hunk at line 42: stacked PR duplicate
  (HEAD and incoming 98% similar with empty base). Kept HEAD version.
Resolving other: README.md... [presents conflict for user review]

Regenerating package-lock.json... done
Continuing rebase (step 4/12)...

If step 4 has conflicts, it resolves those too. All the way through step 12. You just watch it work and only chime in when it needs a human decision.

Setup

The skill lives in my dotfiles repo. If you use my dotfiles, it’s already available via symlink. Otherwise, grab the files and drop them into ~/.claude/skills/resolve-conflicts/.

Note

The skill works best with mergiraf installed for the structural merging step. Without it, those files fall through to AI analysis, which still works but is less precise for structural changes.

Putting It Together

The two layers complement each other:

  1. During any git merge, rebase, or cherry-pick, mergiraf runs automatically as the merge driver. It silently resolves structural conflicts before you ever see them. You don’t have to do anything.
  2. For the conflicts that remain, /resolve-conflicts categorizes them, applies the right strategy for each type, and continues the operation.

The result: most rebases that used to require manual intervention now complete with zero or minimal human input. The conflicts that do need your attention are the genuinely ambiguous ones that deserve it.

Try It

Both tools are open source and available in my dotfiles repo. Mergiraf is available at mergiraf.org and installs in minutes. The resolve-conflicts skill requires Claude Code.

Merge conflicts are an inevitable part of collaborative development. The suffering is optional.


What Is Body Language? Cheat Sheets For Writing Body Language


What is body language and how do you use it when you write? Use these cheat sheets to help you with your body language descriptions.

What Is Body Language?

People react to situations with micro-expressions, hand gestures, and posture. Most of us are not even aware of them. However, what we do with our body language has a huge impact on other people and how they interpret and perceive us.

‘Even when they don’t express their thoughts verbally, most people constantly throw off clues to what they’re thinking and feeling. Non-verbal messages communicated through the sender’s body movements, facial expressions, vocal tone and volume, and other clues are collectively known as body language.’ (Psychology Today)

Body language happens when we are doing something. We could be sitting, standing, or walking. We could be talking or thinking. Body language is often an involuntary reaction to something perceived by one of the five senses.

How To Use It In Writing

Using body language is one of the best ways to show and not tell when we write.

This is why we are always told to use body language in our writing. Sometimes, it’s easier said than written. So, I created these cheat sheets to help you show a character’s state of mind through their body language.

When you are completing your character biographies, be sure to include how your main characters move and talk. This is especially important for your protagonist, antagonist, confidant, and love interest. They are the characters that hold the story together and they should be as well-rounded and believable as possible.

The Top Five Tips For Using Body Language

  1. Use body language to add depth to dialogue.
  2. Use it because more than 50% of human communication is non-verbal.
  3. Use it to show how your character’s emotions (like anger, empathy, fear, happiness, love) affect their actions.
  4. Use it to help you show rather than tell your reader everything.
  5. Use it in moderation. If overused, it can slow your story down.

Cheat Sheets For Writing Body Language

Use this list to help you with your body language descriptions. It will help you to translate emotions and thoughts into written body language.

Obviously, a character may exhibit a number of these behaviours. For example, they may be shocked and angry, or shocked and happy. Use these combinations as needed.

Cheat Sheets For Body Language

Use our Character Creation Kit to create great characters for your stories.


by Amanda Patterson
© Amanda Patterson

If you enjoyed this, read:

  1. The 17 Most Popular Genres In Fiction – And Why They Matter
  2. How To Write A One-Page Synopsis
  3. 123 Ideas For Character Flaws – A Writer’s Resource
  4. The 7 Critical Elements Of A Great Book
  5. All About Parts Of Speech
  6. Punctuation For Beginners
  7. 5 Incredibly Simple Ways to Help Writers Show and Not Tell
  8. 5 Instances When You Need To Tell (And Not Show)
  9. The 4 Main Characters As Literary Devices
  10. 106 Ways To Describe Sounds

Source for skeleton image

Top Tip: Find out more about our workbooks and online courses in our shop.

The post What Is Body Language? Cheat Sheets For Writing Body Language appeared first on Writers Write.


Thoughts on slowing the fuck down



Mario Zechner created the Pi agent framework used by OpenClaw, giving considerable credibility to his opinions on current trends in agentic engineering. He's not impressed:

We have basically given up all discipline and agency for a sort of addiction, where your highest goal is to produce the largest amount of code in the shortest amount of time. Consequences be damned.

Agents and humans both make mistakes, but agent mistakes accumulate much faster:

A human is a bottleneck. A human cannot shit out 20,000 lines of code in a few hours. Even if the human creates such booboos at high frequency, there's only so many booboos the human can introduce in a codebase per day. [...]

With an orchestrated army of agents, there is no bottleneck, no human pain. These tiny little harmless booboos suddenly compound at a rate that's unsustainable. You have removed yourself from the loop, so you don't even know that all the innocent booboos have formed a monster of a codebase. You only feel the pain when it's too late. [...]

You have zero fucking idea what's going on because you delegated all your agency to your agents. You let them run free, and they are merchants of complexity.

I think Mario is exactly right about this. Agents let us move so much faster, but this speed also means that changes which we would normally have considered over the course of weeks are landing in a matter of hours.

It's so easy to let the codebase evolve outside of our abilities to reason clearly about it. Cognitive debt is real.

Mario recommends slowing down:

Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.

Anything that defines the gestalt of your system, that is architecture, API, and so on, write it by hand. [...]

I'm not convinced writing by hand is the best way to address this, but it's absolutely the case that we need the discipline to find a new balance of speed v.s. mental thoroughness now that typing out the code is no longer anywhere close to being the bottleneck on writing software.

Tags: ai, generative-ai, llms, coding-agents, cognitive-debt, agentic-engineering


Claude Code auto mode: a safer way to skip permissions

Claude Code users approve 93% of permission prompts. We built classifiers to automate some decisions, increasing safety while reducing approval fatigue. Here's what it catches, and what it misses.

From Lights-On to Lights-Out: Rethinking Software Delivery in the Age of AI

I mentioned “Dark delivery” to a colleague last week and he laughed and said, “this is not Star Wars.” I think he was right, “Lights-Out” delivery sounds a lot less threatening. Not sure either is a settled term, but I will stick with “Lights-Out” for now. You might wonder what I mean by it? I […]




