Stop Building Supabase Projects from the Dashboard
Most developers start with Supabase by signing into the Dashboard, opening the Table Editor, and clicking their way through schema creation. It works, until you need to share that schema with a teammate, deploy to a staging environment, or roll back a breaking change. Then you realize your entire database structure lives in the cloud with no version control and no reproducible setup.
I've onboarded developers across dozens of startups, some using Next.js, others FastAPI or Flutter. The stack doesn't matter. The first thing I always set up is the same: a local-first Supabase workflow using the CLI. Schema as code. Migrations in Git. One command to deploy.
This post walks through the exact workflow I recorded in my YouTube video building Trelly, a task management app with Next.js and Supabase.
Why Local-First Matters
The Supabase Dashboard is great for exploration. But when you use it as your primary development tool, you're building on sand:
No version control. Your schema exists in Supabase's cloud. There's no file in your repo that describes your database. If you delete a column by accident, there's no Git history to recover from.
No reproducibility. A new teammate joins the project. How do they get the same database? Screenshots? A Notion doc? With a local-first workflow, they clone the repo, run two commands, and they have the exact same database.
No safe deployment path. Moving schema changes from development to production means manually recreating them in the cloud Dashboard. Miss one constraint, one RLS policy, one index, and you've got a production bug.
The CLI workflow fixes all three. Your schema is SQL files in your repo. Migrations are tracked in Git. Deployment is one command.
Prerequisites
Two things installed:
Supabase CLI, install via Homebrew, npm, or Scoop depending on your OS. The official docs cover all methods.
Docker Desktop, Supabase runs the full stack locally in Docker containers. Make sure it's running before you start.
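For reference, these are the install commands I'd reach for, per the official docs at the time of writing (check the docs for your platform, since these can change):

```sh
# macOS / Linux (Homebrew)
brew install supabase/tap/supabase

# Any OS with Node.js (as a dev dependency, run via npx)
npm install supabase --save-dev

# Windows (Scoop)
scoop bucket add supabase https://github.com/supabase/scoop-bucket.git
scoop install supabase
```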
Initialize the Project
mkdir trelly && cd trelly
supabase init
This creates a supabase/ directory with:
config.toml, configures your entire local environment: auth providers, email settings, ports, the Studio UI. Most tutorials never mention this file, but it's where you configure OAuth providers locally and customize your setup.
migrations/, where your schema SQL files live.
seed.sql, test data that runs after every migration reset.
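As an example of what lives in config.toml, enabling a GitHub OAuth provider locally is a few lines (a sketch; the environment variable names are placeholders you'd set in your shell):

```toml
# supabase/config.toml (fragment, illustrative)
[auth.external.github]
enabled = true
# env(...) substitutes values from your environment; names are placeholders
client_id = "env(GITHUB_CLIENT_ID)"
secret = "env(GITHUB_SECRET)"
```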
Start the Local Environment
supabase start
This spins up the entire Supabase stack in Docker: Postgres, GoTrue (auth), PostgREST (auto-generated REST API), Studio (the Dashboard UI), Realtime, and Storage. Everything runs on your machine.
First run takes one to two minutes to pull images. After that, about ten seconds.
Run supabase status to see your local credentials, API URL, Studio URL, anon key, service role key, the MCP endpoint, and the database connection string. Open localhost:54323 and you'll see Studio, the same UI as the cloud Dashboard, running locally.
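For illustration, the output of supabase status looks roughly like this (abridged, with placeholder keys; ports come from config.toml defaults):

```text
         API URL: http://127.0.0.1:54321
          DB URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres
      Studio URL: http://127.0.0.1:54323
        anon key: eyJ... (placeholder)
service_role key: eyJ... (placeholder)
```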
Create the Tasks Table and Generate a Migration
Here's where this workflow differs from every Dashboard tutorial. I created the tasks table using the local Studio UI, but instead of leaving it there, I captured it as a migration file:
supabase db diff -f tasks --local
This compares the current state of your local database against your migration files and generates the difference as a new migration. The result is a timestamped SQL file in supabase/migrations/:
create table "public"."tasks" (
"id" uuid not null default gen_random_uuid(),
"title" text not null,
"created_at" timestamp with time zone not null default now()
);
alter table "public"."tasks" enable row level security;
CREATE UNIQUE INDEX tasks_pkey ON public.tasks USING btree (id);
alter table "public"."tasks" add constraint "tasks_pkey" PRIMARY KEY using index "tasks_pkey";
The file also includes all the grant statements for anon, authenticated, and service_role roles, the full permission setup that Supabase generates.
The important line: alter table "public"."tasks" enable row level security. RLS is on from the start.
The RLS Trap You Need to Know
When you create a table in the Supabase Dashboard (Table Editor), RLS is auto-enabled. Silently. But when you create a table via raw SQL without going through the Table Editor, RLS is off by default. Your table is completely open. Anyone with the anon key can read and write everything.
I've seen this in production across multiple startups. Real user data exposed because the developer wrote SQL without enabling RLS, because they learned from a Dashboard tutorial where it happened automatically.
The fix: always include enable row level security in the same migration file where you create the table. Same file. Same commit.
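As a sketch of that pattern, here is what a single migration for a hypothetical user-owned table could look like; the notes table, its columns, and the policy name are made up for illustration:

```sql
-- Hypothetical example: table, RLS, and policy in one migration file
create table public.notes (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users (id),
  body text not null,
  created_at timestamptz not null default now()
);

-- Same file, same commit: never ship the table without this line
alter table public.notes enable row level security;

-- With RLS on and no policies, the anon key can't read or write anything,
-- so each access path needs an explicit policy
create policy "Users can read their own notes"
  on public.notes for select
  using (auth.uid() = user_id);
```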
Reset and Verify
supabase db reset
This drops everything and re-applies all migrations from scratch. Check Studio: your tasks table is there with RLS enabled. The schema is now a SQL file in your repo. It's in Git. Anyone who clones this repo runs supabase start, then supabase db reset, and gets the exact same database.
Push to Production
supabase login
supabase link
supabase db push
supabase login authenticates with your Supabase account. supabase link connects your local project to a cloud project; you create the cloud project once in the Dashboard, and that's the only time you need to touch it. supabase db push applies all your local migration files to production.
Same schema, same RLS, same constraints. Guaranteed, because it's the same SQL files.
Push vs Pull vs Diff
Three commands, three directions:
db push, sends your local migrations to the cloud. Your normal workflow. Use this 90% of the time.
db pull, generates a migration file from what already exists in the cloud. Use this once if you inherited a project built in the Dashboard, then switch to push forever.
db diff, compares your local state and generates the difference as a migration. This is what we used to capture the tasks table.
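For an inherited Dashboard-built project, that one-time pull flow looks roughly like this (the project ref is a placeholder you'd copy from your project's settings):

```sh
# One-time setup for a project that was built in the Dashboard
supabase link --project-ref <your-project-ref>   # placeholder ref
supabase db pull     # writes a migration capturing the remote schema
supabase db reset    # replay it locally to verify it applies cleanly
```

From there, every subsequent change goes through local migrations and db push.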
AI-Assisted Migrations with Claude and MCP
Once the project is running locally, I connected Claude Code to the local Supabase database using the MCP endpoint from supabase status.
You can install the MCP server by running:
claude mcp add local-supabase -s project -- npx -y @modelcontextprotocol/server-postgres "postgresql://postgres:postgres@127.0.0.1:54322/postgres"
This registers the MCP server at the project scope and points it at the local Supabase database. Claude can then read the actual schema (tables, columns, constraints, relationships) rather than guess from documentation.
I also imported the Supabase Postgres Best Practices skill, which gives Claude context on Supabase conventions so it generates SQL that follows the platform's patterns.
With this setup, I asked Claude to add a description column to the tasks table. It generated a new migration using supabase migration new:
ALTER TABLE tasks ADD COLUMN description text;
Clean, correct, in the right directory. Applied it with db reset and the column was there.
The combination of MCP (so Claude reads your schema) and the Supabase skill (so it follows best practices) is a serious productivity boost. But I review every migration it writes.
The Seed File Trick
Fill the seed.sql file in your supabase/ folder (supabase init already stubbed it out) with test data, sample tasks with different statuses. Every time you run db reset, Supabase applies all migrations and then runs the seed file.
Your entire team starts from the same test data. I've worked with teams where developers spent thirty minutes a day recreating test data after a reset. The seed file takes ten minutes to write once.
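A minimal seed.sql for the tasks table from this post might look like this; the titles are made up, and you'd extend the insert as your schema grows columns:

```sql
-- supabase/seed.sql (illustrative)
-- Runs automatically after every `supabase db reset`
insert into public.tasks (title) values
  ('Set up CI pipeline'),
  ('Write onboarding docs'),
  ('Review RLS policies');
```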
The Full Workflow in 30 Seconds
supabase init → supabase start → build your schema in local Studio → supabase db diff -f <name> --local to capture it as a migration → connect Claude via MCP for additional migrations → supabase db push to deploy. Schema is in Git. Teammates clone and run. Production matches local. Done.
The full video walkthrough is on YouTube and the Trelly repo is on GitHub.