
Beyond Web Apps: Designing a Database with Google Antigravity


We’re only getting started with what you can build with agentic tools. Sure, vibe coding platforms like Lovable make it super simple to develop full-featured web apps. But developers are also building all sorts of software with AI products like Claude Code and Google Antigravity.

Antigravity doesn’t just plan wide-ranging work; it does it too!

Tweet from the Antigravity account showing a non-coding use case

Reading that tweet gave me an idea. Could I build out a complex database solution? Not an “app”, but the schema for a multi-tenant SaaS billing system? One that takes advantage of Antigravity’s browser use, builder tools, and CLI support?

Yes, yes I can. I used a single prompt to flex some of the best parts of this product and to generate an outcome in minutes that would have taken me hours or days to get right.

I started by opening an empty folder in Antigravity.

An empty Google Antigravity session

Here’s my prompt that took advantage of Antigravity’s unique surfaces:

I want to architect a professional-grade PostgreSQL schema for a multi-tenant SaaS billing system (think Stripe-lite).

Phase 1: Research & Best Practices
Use the Antigravity Browser to research modern best practices for SaaS subscription modeling, focusing specifically on 'point-in-time' billing, handling plan upgrades/downgrades, and PostgreSQL indexing strategies for multi-tenant performance. Summarize your findings in a Research Artifact.

Phase 2: Schema Design
Based on the research, generate a multi-file SQL project in the /schema directory. Include DDL for tables, constraints, and optimized indexes. Ensure you account for data isolation between tenants.

Phase 3: Verification & Load Testing
Once the scripts are ready, use the Terminal to spin up a local PostgreSQL database. Apply the scripts and then write a Python script to generate 100 rows of synthetic billing data to verify the indexing strategy.

Requirements:
Start by providing a high-level Implementation Plan and Task List.
Wait for my approval before moving between phases.

Note that I'm using Antigravity's "planning" mode (versus the Fast, action-oriented mode) and Gemini 3 Flash.

A few seconds after feeding that prompt into Antigravity, I got two artifacts to review. The first was a high-level task list.

Google Antigravity creating a task list for our database project

I also got an implementation plan. This listed objectives and steps for each phase of work. It also called out a verification approach. As you can see in the screenshot, I can comment on any step and refine the tasks or overall plan at any time.

An AI-generated implementation plan for the database project

I chose to proceed and let the agent get to work on phase 1. This was awesome to watch. Antigravity spun up a Chrome browser and began to quickly run Google searches and “read” the results.

A view of Antigravity’s browser use where it searched for web pages and browsed relevant sites

Once it decided which links it wanted to follow, Antigravity asked me for permission to navigate to specific web pages that provided more information on SaaS billing schemas.

Google Antigravity asking permission before browsing a web site

When the research phase finished, I had a research summary covering the architecture, patterns, and details that shaped our solution. It also embedded a video overview of the agent's search process. I never had this kind of paper trail when building software manually!

Research summary including a video capture of Antigravity’s browser search process

Note that Antigravity also kept my task list up to date. The first phase was all checked off.

Maintained task list

Because I was doing this all in one session, I added a note to the chat indicating I was ready to proceed. If I had walked away and forgotten where I was, I could always go into the Antigravity Agent Manager and see my open tasks in the Inbox.

Antigravity Agent Manager inbox where we can see actions needing our attention

It took less than 25 seconds for the next phase to complete. When it was over, I had a handful of SQL script files in the project folder.

Generated scripts for our database project
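
I won't reproduce the generated scripts in full here, but a minimal sketch of the flavor of DDL involved might look like the following. The table and column names are my own illustration, not Antigravity's actual output, and I'm assuming psycopg 3 for the apply step:

```python
# A minimal, hypothetical sketch of multi-tenant billing DDL, applied via
# psycopg 3. Table and column names are my own illustration, not the
# actual generated scripts.
import psycopg

DDL_STATEMENTS = [
    """
    CREATE TABLE IF NOT EXISTS tenants (
        tenant_id  UUID PRIMARY KEY,
        name       TEXT NOT NULL
    )
    """,
    """
    CREATE TABLE IF NOT EXISTS subscriptions (
        tenant_id   UUID NOT NULL REFERENCES tenants (tenant_id),
        sub_id      UUID NOT NULL,
        plan        TEXT NOT NULL,
        -- point-in-time billing: a plan change closes one row, opens another
        valid_from  TIMESTAMPTZ NOT NULL,
        valid_to    TIMESTAMPTZ,  -- NULL means currently active
        PRIMARY KEY (tenant_id, sub_id, valid_from)
    )
    """,
    # An index leading with tenant_id keeps tenant-scoped queries on that
    # tenant's slice of the index, a common multi-tenant strategy.
    """
    CREATE INDEX IF NOT EXISTS idx_subs_tenant_active
        ON subscriptions (tenant_id)
        WHERE valid_to IS NULL
    """,
]

with psycopg.connect("dbname=billing") as conn:
    for statement in DDL_STATEMENTS:
        conn.execute(statement)
```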

At this point, I could ask Google Antigravity to do another evaluation for completeness, or ask for detailed explanations of its decisions. I’m in control, and can intervene at any point to redirect the work or make sure I understand what’s happened so far.

But I was ready to keep going to phase 3, where we'd test this schema with actual data. I gave the "ok" to proceed.

This was fun too! I relocated the agent terminal to my local terminal window so that I could see all the action happening. Notice here that Antigravity created seed data, a data generation script, and then started up my local PostgreSQL instance. It loaded the data in, and ran a handful of tests. All I did was watch!

Google Antigravity using terminal commands to test our database solution
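
The agent's actual data generation script isn't fully visible in the screenshots, but a hedged reconstruction of the idea, assuming the hypothetical tables sketched above and psycopg 3, could look like this:

```python
# Hypothetical reconstruction of the synthetic-data step, not the agent's
# actual script. Assumes the tenants/subscriptions tables sketched earlier.
import random
import uuid
from datetime import datetime, timedelta, timezone

import psycopg

PLANS = ["free", "starter", "pro"]

with psycopg.connect("dbname=billing") as conn:
    tenant_ids = [uuid.uuid4() for _ in range(10)]
    for tid in tenant_ids:
        conn.execute(
            "INSERT INTO tenants (tenant_id, name) VALUES (%s, %s)",
            (tid, f"tenant-{tid.hex[:8]}"),
        )

    # 100 rows of synthetic billing data, as the prompt requested
    for _ in range(100):
        started = datetime.now(timezone.utc) - timedelta(days=random.randint(1, 365))
        conn.execute(
            """INSERT INTO subscriptions
               (tenant_id, sub_id, plan, valid_from, valid_to)
               VALUES (%s, %s, %s, %s, NULL)""",
            (random.choice(tenant_ids), uuid.uuid4(), random.choice(PLANS), started),
        )

    # EXPLAIN a tenant-scoped lookup to sanity-check the indexing strategy
    plan = conn.execute(
        "EXPLAIN SELECT * FROM subscriptions "
        "WHERE tenant_id = %s AND valid_to IS NULL",
        (tenant_ids[0],),
    ).fetchall()
    print("\n".join(row[0] for row in plan))
```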

That was it. When the process wrapped up, Antigravity generated a final Walkthrough artifact that explained what it did, and even offered a couple of possible next steps for my data architecture.

Complete walkthrough of how Google Antigravity built this solution

Is your mind swirling with use cases right now? Mine still is. Maybe infrastructure-as-code artifact generation based on analyzing your deployed architecture? Maybe creating data pipelines or Kubernetes YAML? Use Google Antigravity to build apps, but don't discount how powerful it is for any software solution.




Considering Spec Driven Development


 People are making a big deal about the new way of agentic working: Spec-Driven!!!

But, wait...

Big Design Up Front (BDUF) is something we tried for many years, in many companies, many times. 

It was proven to be a losing proposition and a bad idea. We did it in the 80s, and by '96 or so we had alternatives.

If the idea of spec-driven is to fully detail a system up-front, and then use agents to implement it, then I think we're about to repeat a historic mistake.

 But, wait...

BDD and TDD are also specification-driven, just in very tight loops with constant verification. The test acts as a specification, and we implement code to make all the tests pass. We do this incrementally and iteratively, adding tests as we go. 
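
As a concrete, if hypothetical, illustration of a test acting as the specification (the billing rule below is my own invented example, in pytest style), the test states the expected behavior, and we write just enough code to make it pass:

```python
# A test-as-specification sketch (hypothetical example, pytest style).
# The test below *is* the spec: prorated upgrades charge only for the
# remaining days of the billing period.
def prorated_charge(old_price: float, new_price: float, days_left: int,
                    days_in_period: int = 30) -> float:
    """Charge the price difference, scaled to the remaining days."""
    return round((new_price - old_price) * days_left / days_in_period, 2)


def test_upgrade_midway_through_period_charges_half_the_difference():
    # Spec: upgrading from $10 to $30 with 15 of 30 days left costs $10.
    assert prorated_charge(10.0, 30.0, days_left=15) == 10.0
```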

If the idea of spec-driven is to iteratively and incrementally build software covered by the safety mechanism of tests, and the feedback loop is tight, then maybe we're about to repeat one of the best ideas of the 20th and 21st centuries.

But, wait...

If the specification is not a deterministic test, then it may end up causing us to generate little more than technical debt and future headaches.

But, wait...

If the purpose of the specification is to generate tests, then it may be a much faster way to initiate BDD.

But, wait...

How do we know the generated tests are good? What if it generates a small raft of pointless tests and we can't use them?

But, wait...

What if we examine the tests, and are able to quickly read them because they are business tests written from the specs, using something like Playwright or Gherkin? Then we have a human in the loop, and if we don't like them we can re-generate them.

After all, it only took minutes the first time, so redoing it seems perfectly reasonable. Then make sure the tests are sound. Then use the tests to make sure the code is written correctly. Then use mutation testing to make sure the tests are valid.
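
For instance, a generated business test in that spirit, using Playwright's Python API, might read like the sketch below. The URL, button names, and flow are purely illustrative assumptions, but the business intent is readable at a glance:

```python
# Hypothetical sketch of a generated business-level test using Playwright's
# Python API. The URL, selectors, and flow are illustrative assumptions --
# the point is that a human can read the business intent at a glance and
# accept, reject, or regenerate the test.
from playwright.sync_api import sync_playwright


def test_customer_can_upgrade_to_pro_plan():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://billing.example.test/plans")  # hypothetical app
        page.get_by_role("button", name="Upgrade to Pro").click()
        page.get_by_role("button", name="Confirm").click()
        # The spec in plain terms: after confirming, the current plan is Pro.
        assert page.get_by_text("Current plan: Pro").is_visible()
        browser.close()
```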

But, wait...

What about the visual elements? I guess they can be generated from wireframes and validated in Storybook, and then we have humans in the loop validating the output. Then, of course, the code has to be able to use the visual components, and the tests will check the code... maybe this isn't too bad.

And, of course...

We have a lot of tools like SonarQube and Veracode and ruff and such, looking for problems and failings in the code, which we may be able to mitigate with entries in AGENT.md files.


Have we reached a point where we have the languages, tools, frameworks, and LLMs such that we can work reliably, with humans in the loop, generating and reviewing everything up front, and maybe it will turn out okay?

It's worth trying, whether one feels it's promising or poppycock. The whole ecosystem makes this something a bit different from the BDUF/Waterfall we did in the 80s and 90s.

I don't plan on abandoning the disciplines we've learned, but maybe there are new ways to support them.

I'm no AI fanboy, but neither am I a stubborn detractor: we try to find ways to make the best use of all the tech available to us, don't we? Why would I oppose those experiments?

There is a lot to learn from trying, I think.



Daily Reading List – January 20, 2026 (#703)


Happy pretend Monday. Since yesterday was a US holiday, I'll be thrown off all week. But today was maybe my favorite reading list of the year so far. Some really fun items.

[blog] How Our Engineering Team Uses AI. Here's how a startup engineering team uses AI for understanding codebases, exploring ideas, writing scripts, and outsourcing toil. They also call out where AI isn't making a big difference.

[blog] How we built an AI-first culture at Ably. You might have to mandate it to force the habit change, but AI adoption often becomes organic once people see where the value is. This post offers good pillars for successful AI adoption.

[blog] Everything Becomes an Agent. Will every AI project, given enough time, converge on becoming an agent? Allen thinks so.

[report] State of MCP. I don’t think I’ve seen this much data about MCP usage. Check it out for early signals on patterns, pain points, and value.

[blog] The Power of Constraints. Constraints are freeing. Some of the best people use their present limitations to do amazing things within those (often temporary) boundaries.

[blog] The Flexibility Fallacy: How We Confused Developer Choice with Developer Productivity. Completely related to the previous post. The best teams don’t have the most choices. They have the right constraints in place.

[blog] How Google Antigravity is changing spec-driven development. There’s a lot still happening in this space. Far from mature. But track the progress!

[article] Demystifying evals for AI agents. Anthropic put out some terrific content here that will put you in better shape when designing and running evaluations of your agents.

[blog] The Question Your Observability Vendor Won’t Answer. How much of your data is waste? Up to 40%. You’re paying way too much right now.

[article] The Agentic AI Handbook: Production-Ready Patterns. Dig through 113 patterns to see if any can help you out.

Want to get this update sent to you every day? Subscribe to my RSS feed or subscribe via email below:




Replit Goes "Pro" with More Discounts & More Collaborators

Summary: We're introducing a new Pro plan and bringing collaboration to Core so you can build more, collaborate more, and better control how you use credits. These changes will take effect on Feb 20, 2026.

- New Pro plan starts at $100/month for teams and includes tiered credits with discounts, priority support, and up to 15 builders.
- Core plan now includes real collaboration for up to 5 people (previously exclusive to Teams users).
- The Teams plan will be sunset, and all Teams users will be automatically upgraded to Pro at no additional cost for the remainder of their term, with access to enhanced features.
- Simplified collaboration experience with clearer personal vs. team workspaces, easier team collaboration, and better app organization and control.


Electricity use of AI coding agents



Previous work estimating the energy and water cost of LLMs has generally focused on the cost per prompt using a consumer-level system such as ChatGPT.

Simon P. Couch notes that coding agents such as Claude Code use way more tokens in response to tasks, often burning through many thousands of tokens across many tool calls.

As a heavy Claude Code user, Simon estimates his own usage at the equivalent of 4,400 "typical queries" to an LLM, or around $15-$20 in daily API token spend. He figures the energy to be about the same as running a dishwasher once, or the daily use of a domestic refrigerator.
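
The arithmetic roughly checks out. Here's a back-of-the-envelope sketch; the per-query energy figure is a commonly cited ballpark I'm assuming, not a number from Simon's post:

```python
# Back-of-the-envelope check of the comparison (my own rough numbers).
WH_PER_TYPICAL_QUERY = 0.3   # assumption: commonly cited ballpark per query
queries_per_day = 4_400      # Simon's estimated equivalent daily usage

kwh_per_day = queries_per_day * WH_PER_TYPICAL_QUERY / 1_000
print(f"{kwh_per_day:.2f} kWh/day")  # ~1.3 kWh: roughly one dishwasher
                                     # cycle, or a fridge's daily draw
```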

Via Hacker News

Tags: ai, generative-ai, llms, ai-ethics, ai-energy-usage, coding-agents, claude-code


Introducing Waypoint-1: Real-time interactive video diffusion from Overworld
