SQLite at the edge. Real SQL, real migrations, real constraints — including the honest ones. Single-primary, eventually consistent reads, and the question of whether SQLite is enough for your workload.
D1 is Cloudflare's serverless SQL database. Under the hood it's SQLite — the same engine that runs in every phone and browser — hosted and replicated by Cloudflare. You write standard SQL, run migrations via Wrangler, and query it from Workers via a binding.
The architecture is single-primary with read replicas. Writes go to a single location (currently US or EU); reads can be served from replicas closer to the user. This means writes have higher latency for users far from the primary, and read-after-write consistency requires reading from the primary.
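When you do need read-after-write consistency with replication enabled, D1's Sessions API (in beta at the time of writing) can pin a sequence of queries to a sequentially consistent view. A minimal sketch, assuming the Sessions API's `withSession`/`getBookmark` methods and an illustrative `users` table:

```js
// Sketch, assuming D1's Sessions API. "first-primary" routes the session's
// first query to the primary, so the session observes the write it just made.
async function createAndReadBack(env, email) {
  const session = env.DB.withSession("first-primary");

  await session
    .prepare("INSERT INTO users (email, name) VALUES (?, ?)")
    .bind(email, "New User")
    .run();

  // Guaranteed to see the insert above, even if served by a replica.
  const user = await session
    .prepare("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .first();

  // The bookmark can be round-tripped (e.g. in a cookie or header) so a
  // later request's session starts at least this far along.
  return { user, bookmark: session.getBookmark() };
}
```

Without a session, a read issued right after a write may hit a replica that hasn't caught up yet.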
D1 is genuinely good for what it is: a zero-ops relational database for applications that fit inside SQLite's model. It's not Postgres. If you need stored procedures, advanced JSON operators, or multi-TB datasets, consider connecting to a real Postgres instance via Hyperdrive instead.
Declare a D1 database in wrangler.toml. The .prepare() API uses parameterised queries to prevent SQL injection. Results come back as plain objects.
```toml
# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "prod"
database_id = "abc-123..."
```

```js
// Worker code
export default {
  async fetch(req, env) {
    // Select
    const { results } = await env.DB
      .prepare("SELECT * FROM users WHERE email = ?")
      .bind("user@example.com")
      .all();

    // Insert
    await env.DB
      .prepare("INSERT INTO users (email, name) VALUES (?, ?)")
      .bind("new@example.com", "New User")
      .run();

    return Response.json(results);
  }
};
```
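Every D1 result also carries a `meta` object with query timing and row counts, which is handy for spotting slow or scan-heavy queries. A small sketch, assuming the `DB` binding above (the 100ms threshold is illustrative):

```js
// Sketch: inspect D1 query metadata. meta.duration is in milliseconds;
// rows_read reflects how many rows SQLite scanned to answer the query.
async function timedQuery(env, email) {
  const result = await env.DB
    .prepare("SELECT * FROM users WHERE email = ?")
    .bind(email)
    .all();

  const { duration, rows_read } = result.meta;
  if (duration > 100) {
    console.warn(`slow query: ${duration}ms, ${rows_read} rows read`);
  }
  return result.results;
}
```

A high `rows_read` relative to rows returned usually means a missing index.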
D1 supports batching multiple statements into a single round trip. The statements execute sequentially as a transaction: if one fails, the whole batch rolls back. This is critical for performance when you need to run several related queries.
```js
const results = await env.DB.batch([
  env.DB.prepare("INSERT INTO orders (user_id, total) VALUES (?, ?)")
    .bind(1, 99.50),
  env.DB.prepare("UPDATE users SET order_count = order_count + 1 WHERE id = ?")
    .bind(1),
  env.DB.prepare("SELECT * FROM orders WHERE user_id = ?")
    .bind(1)
]);
```
Wrangler manages migrations as numbered SQL files. Apply them locally for dev, or against the remote database for production.
```sh
# Create a migration
npx wrangler d1 migrations create prod create-users
```

```sql
-- migrations/0001_create-users.sql
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  name TEXT NOT NULL,
  created_at TEXT DEFAULT (datetime('now'))
);
```

```sh
# Apply to remote
npx wrangler d1 migrations apply prod --remote
```
If your primary is in the US and your user is in Johannesburg, every write is a 200ms+ round trip. Read replicas help reads but don't fix write latency.
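One mitigation: when a write doesn't need to complete before the user gets a response, defer it with `ctx.waitUntil` so the cross-region round trip happens after the response is sent. A sketch with an illustrative `page_views` table:

```js
// Sketch: respond immediately; the far-away write finishes afterwards.
// `page_views` is an illustrative table; env.DB is the D1 binding.
async function handleRequest(req, env, ctx) {
  const path = new URL(req.url).pathname;

  // Not awaited: the Worker stays alive until this promise settles,
  // but the user is not blocked on the write round trip.
  ctx.waitUntil(
    env.DB
      .prepare("INSERT INTO page_views (path, ts) VALUES (?, ?)")
      .bind(path, Date.now())
      .run()
  );

  return new Response("ok");
}
```

This only fits fire-and-forget writes such as analytics or logging; anything the user must observe in the response still pays the full write latency.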
D1 is not for large datasets. If you're approaching the per-database size cap (10 GB at the time of writing), either archive old data to R2 or use Hyperdrive to a proper Postgres.
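A sketch of the archive path, assuming an R2 binding named `ARCHIVE` alongside the D1 binding, and an illustrative `orders` table keyed by an ISO-8601 `created_at` column:

```js
// Sketch: move rows older than a cutoff out of D1 into R2, then delete them.
// `ARCHIVE` (R2) and the `orders` schema are illustrative assumptions.
async function archiveOldOrders(env, cutoffIso) {
  const { results } = await env.DB
    .prepare("SELECT * FROM orders WHERE created_at < ?")
    .bind(cutoffIso)
    .all();
  if (results.length === 0) return 0;

  // One JSON object per archive run, keyed by the cutoff date.
  await env.ARCHIVE.put(`orders/${cutoffIso}.json`, JSON.stringify(results));

  await env.DB
    .prepare("DELETE FROM orders WHERE created_at < ?")
    .bind(cutoffIso)
    .run();
  return results.length;
}
```

Run it from a cron trigger so the archive happens off the request path; for very large tables, page through the rows in chunks rather than selecting them all at once.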
No stored procedures or PL/pgSQL, no custom functions, and only SQLite's basic triggers. Complex business logic stays in your Worker code.
Only one write can execute at a time on the primary. High-write-throughput workloads will queue. This is a SQLite limitation, not a Cloudflare one.
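One mitigation is to coalesce pending rows into a single multi-row INSERT, so the serialized writer commits once instead of once per row. A sketch; the `events` table and its columns are illustrative, and the values stay as bound parameters (only trusted, hard-coded table/column names are interpolated):

```js
// Sketch: build one multi-row INSERT from N pending rows.
// Table and column names must be trusted constants, never user input;
// the row values remain parameterised.
function buildBulkInsert(table, columns, rows) {
  const placeholders = rows
    .map(() => `(${columns.map(() => "?").join(", ")})`)
    .join(", ");
  const sql = `INSERT INTO ${table} (${columns.join(", ")}) VALUES ${placeholders}`;
  return { sql, params: rows.flat() };
}

// Usage against the D1 binding:
// const { sql, params } = buildBulkInsert("events", ["user_id", "type"],
//   [[1, "click"], [2, "view"]]);
// await env.DB.prepare(sql).bind(...params).run();
```

Mind SQLite's bound-parameter limit for very large batches; splitting into chunks of a few hundred rows keeps each statement well within it.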