Supabase is a managed Postgres platform with a generous free tier and a frontend-friendly JavaScript client. Many developers discover it as a quick backend-as-a-service and don't look much deeper. That's a mistake — the features that make Supabase genuinely powerful for production systems are Row-Level Security, Realtime subscriptions, and Edge Functions. Here's what I've learned deploying it across several commercial projects.
Row-Level Security Is Your Access Control Layer
Row-Level Security (RLS) is a native Postgres feature. When enabled on a table, every query — including those from the Supabase JavaScript client — is automatically filtered through policies that run in the database engine. The key insight: RLS moves access control out of your application code and into the database, where it can't be bypassed by a bug in your API layer.
```sql
-- Enable RLS on the table
ALTER TABLE orders ENABLE ROW LEVEL SECURITY;

-- Users can only see their own orders
CREATE POLICY "users_see_own_orders"
ON orders FOR SELECT
USING (auth.uid() = user_id);

-- Users can only insert orders for themselves
CREATE POLICY "users_insert_own_orders"
ON orders FOR INSERT
WITH CHECK (auth.uid() = user_id);
```

With these policies in place, a client calling `supabase.from('orders').select('*')` only ever receives rows where `user_id` matches the user ID in the authenticated JWT. No application-layer filter is required, and no developer can accidentally forget to add one.
Warning
In browser code, always call `supabase.from('table').select()` from the authenticated client (initialized with the anon key and the user's JWT), never from a client holding the service-role key. The service-role key bypasses all RLS policies — it should only be used in trusted server-side contexts like Edge Functions.
Multi-Tenant RLS with Organisations
For SaaS products with team workspaces, you need a multi-tenant model: users belong to organisations, and permissions depend on both the user's identity and their role within the organisation. The pattern I use:
```sql
-- Helper: returns the org IDs the current user belongs to
CREATE OR REPLACE FUNCTION auth.user_org_ids()
RETURNS uuid[] AS $$
  SELECT array_agg(organisation_id)
  FROM organisation_members
  WHERE user_id = auth.uid()
$$ LANGUAGE sql SECURITY DEFINER STABLE;

-- Org-scoped read policy
CREATE POLICY "org_members_read_projects"
ON projects FOR SELECT
USING (organisation_id = ANY(auth.user_org_ids()));
```

Marking the helper function `SECURITY DEFINER STABLE` is important: `SECURITY DEFINER` runs it with the definer's privileges, so it can read `organisation_members` without tripping that table's own RLS, and `STABLE` tells the planner the result will not change within a single statement, so it can be evaluated once instead of redundantly for each row.
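To make the policy's semantics concrete, here is the per-row check it performs, mirrored as plain TypeScript. This is purely illustrative (the names are mine, and in production the check runs inside Postgres, never in client code):

```typescript
// Illustrative only: the same membership test the RLS policy evaluates
// per row, i.e. organisation_id = ANY(auth.user_org_ids()).
interface Project {
  id: string;
  organisation_id: string;
}

function canReadProject(userOrgIds: string[], project: Project): boolean {
  return userOrgIds.includes(project.organisation_id);
}

// canReadProject(["org-a", "org-b"], { id: "p1", organisation_id: "org-a" }) → true
// canReadProject(["org-a", "org-b"], { id: "p2", organisation_id: "org-z" }) → false
```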
Realtime: Broadcast vs Postgres Changes
Supabase Realtime has two distinct mechanisms that are often confused:
- **Postgres Changes**: subscribes to a PostgreSQL logical replication stream. Every INSERT, UPDATE, or DELETE on a table emits an event to subscribed clients. Good for syncing data state — but the payload only includes changed rows, not the full result of a view or join.
- **Broadcast**: a low-latency pub/sub channel where clients can publish arbitrary JSON payloads to a named channel. No database persistence. Good for ephemeral collaboration events — cursor positions, typing indicators, presence.
```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(url, anonKey);

// Postgres Changes — receive DB mutations in real time
const channel = supabase
  .channel("inventory-changes")
  .on(
    "postgres_changes",
    { event: "*", schema: "public", table: "inventory" },
    (payload) => console.log("Row changed:", payload),
  )
  .subscribe();

// Broadcast — ephemeral pub/sub (e.g. live cursor positions)
const presenceChannel = supabase
  .channel("room:project-123")
  .on("broadcast", { event: "cursor" }, ({ payload }) => {
    updateCursor(payload.userId, payload.x, payload.y);
  })
  .subscribe();

// Publish a cursor event
await presenceChannel.send({
  type: "broadcast",
  event: "cursor",
  payload: { userId: currentUserId, x: 340, y: 120 },
});
```

Tip
Postgres Changes respects RLS — a subscribed client only receives events for rows they're allowed to see. Broadcast is unfiltered within a channel; you control access at the channel subscription level.
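One practical wrinkle with Broadcast: cursor positions change on every mousemove, which can flood the channel. A minimal throttle sketch helps (the clock is injectable purely so the behaviour is easy to test; the wrapped send call is an assumption matching the example above):

```typescript
// Throttle: invoke fn at most once per intervalMs, dropping extra calls.
function throttle<T extends unknown[]>(
  fn: (...args: T) => void,
  intervalMs: number,
  now: () => number = Date.now,
): (...args: T) => void {
  let last = -Infinity;
  return (...args: T) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
    }
  };
}

// Usage sketch: broadcast at most ~20 cursor events per second.
// const sendCursor = throttle(
//   (x: number, y: number) =>
//     presenceChannel.send({ type: "broadcast", event: "cursor", payload: { x, y } }),
//   50,
// );
```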
Edge Functions for Server-Side Logic
Supabase Edge Functions are Deno-based serverless functions deployed globally. When a function executes in your Supabase project's region, database round-trips are fast, which makes them well suited to database-heavy server logic. The cases where I most often reach for them:
- Webhooks that must verify a signature and then mutate the database with the service-role key (e.g. Stripe webhooks updating subscription status).
- Scheduled jobs via Supabase Cron — run a function on a schedule to send digest emails, expire stale records, or sync external APIs.
- Any operation that requires the service-role key or a secret, but should not run in browser client code.
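Structurally, an Edge Function is just a web-standard fetch handler: it receives a `Request` and returns a `Response`. A minimal sketch, using only the standard `Request`/`Response` types (the function name and payload shape here are made up):

```typescript
// A handler with the same shape an Edge Function serves.
// Assumed payload: { name: string }
async function handler(req: Request): Promise<Response> {
  if (req.method !== "POST") {
    return new Response("Method not allowed", { status: 405 });
  }
  const { name } = await req.json();
  return new Response(JSON.stringify({ message: `Hello ${name}` }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
```

Once deployed, the browser client would call it with `supabase.functions.invoke("hello", { body: { name: "Ada" } })`, which attaches the user's JWT automatically.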
```typescript
// supabase/functions/stripe-webhook/index.ts
import { serve } from "https://deno.land/std@0.168.0/http/server.ts";
import Stripe from "https://esm.sh/stripe@14";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

const stripe = new Stripe(Deno.env.get("STRIPE_SECRET_KEY")!);
// The synchronous constructEvent relies on Node's crypto, which Deno
// lacks — use the async variant with the SubtleCrypto provider instead.
const cryptoProvider = Stripe.createSubtleCryptoProvider();

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!, // bypasses RLS — safe inside Edge Function
);

serve(async (req) => {
  const sig = req.headers.get("stripe-signature")!;
  const body = await req.text();

  let event: Stripe.Event;
  try {
    event = await stripe.webhooks.constructEventAsync(
      body,
      sig,
      Deno.env.get("STRIPE_WEBHOOK_SECRET")!,
      undefined,
      cryptoProvider,
    );
  } catch {
    return new Response("Invalid signature", { status: 400 });
  }

  if (event.type === "customer.subscription.updated") {
    const sub = event.data.object as Stripe.Subscription;
    await supabase
      .from("subscriptions")
      .upsert({ stripe_subscription_id: sub.id, status: sub.status });
  }
  return new Response(JSON.stringify({ received: true }), { status: 200 });
});
```

Storage + RLS for Private Files
Supabase Storage buckets support RLS policies too. Mark a bucket as private and define policies to control who can upload or download objects. The storage path convention I use for user-scoped files: `{user_id}/{filename}` — then a single policy covers all user files without per-file ACLs.
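A tiny helper keeps the convention consistent on the client side; the first path segment is what the policy compares against `auth.uid()`. The helper names are mine, not part of supabase-js:

```typescript
// Build the storage object path so the user's id is the first folder
// segment — the same segment the RLS policy reads via storage.foldername.
function userScopedPath(userId: string, filename: string): string {
  return `${userId}/${filename}`;
}

// First folder segment, mirroring (storage.foldername(name))[1] in SQL.
function firstFolder(path: string): string {
  return path.split("/")[0];
}

// Usage sketch with supabase-js (not run here):
// await supabase.storage
//   .from("documents")
//   .upload(userScopedPath(user.id, "report.pdf"), file);
```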
```sql
-- Allow users to read only their own files in the "documents" bucket
CREATE POLICY "users_read_own_files"
ON storage.objects FOR SELECT
USING (
  bucket_id = 'documents'
  AND (storage.foldername(name))[1] = auth.uid()::text
);
```

Performance: Generated Columns and Indexes
Since Supabase is just Postgres, all the standard performance tools apply. Two that pay off quickly in product work:
- **Generated columns**: store a computed value (e.g. `full_name = first_name || ' ' || last_name`) as a real column. Queries can filter and sort on it without re-computing per row, and you can index it.
- **Partial indexes**: an index with a WHERE clause. If you frequently query `orders WHERE status = 'pending'`, a partial index on that subset is far smaller and faster than a full-table index on `status`.
```sql
-- Generated column for fast full-name search
ALTER TABLE profiles
ADD COLUMN full_name text GENERATED ALWAYS AS
  (first_name || ' ' || last_name) STORED;

CREATE INDEX profiles_full_name_idx
ON profiles USING gin(to_tsvector('english', full_name));

-- Partial index for pending orders only
CREATE INDEX orders_pending_idx ON orders (created_at DESC)
WHERE status = 'pending';
```

What Supabase Is Not
Supabase is not a magic scalability layer. The free tier pauses projects after one week of inactivity — unacceptable for production. The Pro plan ($25/month) keeps projects active and raises connection limits. For high-traffic applications you'll eventually need to right-size the compute add-on and tune Supavisor's pool size. It's still Postgres under the hood: slow queries need indexes, schema design decisions are permanent in ways that schema-less databases forgive, and migrations require care.
Note
Supabase's value proposition is that it gives you a production-grade Postgres backend — with auth, storage, realtime, and edge functions — without operating infrastructure. For most product teams and consulting projects, that trade-off is excellent.