Technology · Cloudflare · Skill Node

Cloudflare D1.

SQLite at the edge. Real SQL, real migrations, real constraints — including the honest ones. Single-primary, eventually consistent reads, and the question of whether SQLite is enough for your workload.

Technology · Edge Database · Open Tier · Last updated · Apr 2026

A managed SQLite database you query from Workers.

D1 is Cloudflare's serverless SQL database. Under the hood it's SQLite — the same engine that runs in every phone and browser — hosted and replicated by Cloudflare. You write standard SQL, run migrations via Wrangler, and query it from Workers via a binding.

The architecture is single-primary with read replicas. Writes go to a single location (currently US or EU); reads can be served from replicas closer to the user. This means writes have higher latency for users far from the primary, and read-after-write consistency requires reading from the primary.
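D1's Sessions API addresses the read-after-write problem: a session hands back a bookmark after each query, and passing that bookmark into the next session guarantees reads at least as fresh as the bookmarked write. The sketch below threads the bookmark through a request header; `withSession` and `getBookmark` are the documented Sessions API, while the header name, the handler, and the injected `db` parameter are illustrative assumptions (the binding is injected so the sketch can be exercised with a stub outside a Worker).

```javascript
// Sketch: threading a D1 session bookmark through requests so a client's
// reads see its own writes. The client echoes back the bookmark from its
// previous response; "first-primary" forces the first query to the
// primary when no bookmark exists yet.
async function handleRead(db, request) {
  const bookmark = request.headers.get("x-d1-bookmark") ?? "first-primary";
  const session = db.withSession(bookmark);
  const { results } = await session
    .prepare("SELECT * FROM users")
    .all();
  // Return the new bookmark so the client's next request is served by a
  // replica at least as up-to-date as this query.
  return new Response(JSON.stringify(results), {
    headers: { "x-d1-bookmark": session.getBookmark() ?? "" },
  });
}
```

In a Worker you would call this as `handleRead(env.DB, req)`; without a bookmark to honour, replicas may serve reads that lag the primary.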

D1 is genuinely good for what it is: a zero-ops relational database for applications that fit inside SQLite's model. It's not Postgres. If you need stored procedures, advanced JSON operators, or multi-TB datasets, consider Hyperdrive in front of a real Postgres instead.

Binding, querying, and migrations.

01

Binding and queries

Declare a D1 database in wrangler.toml. The .prepare() API uses parameterised queries to prevent SQL injection. Results come back as plain objects.

# wrangler.toml
[[d1_databases]]
binding = "DB"
database_name = "prod"
database_id = "abc-123..."

// Worker code
export default {
  async fetch(req, env) {
    // Select
    const { results } = await env.DB
      .prepare("SELECT * FROM users WHERE email = ?")
      .bind("user@example.com")
      .all();

    // Insert
    await env.DB
      .prepare("INSERT INTO users (email, name) VALUES (?, ?)")
      .bind("new@example.com", "New User")
      .run();

    return Response.json(results);
  }
};
02

Batch operations

D1 supports batching multiple statements into a single round trip. The statements execute sequentially as an implicit transaction: if one fails, the whole batch rolls back. Batching is critical for performance when you need to run several related queries.

const results = await env.DB.batch([
  env.DB.prepare("INSERT INTO orders (user_id, total) VALUES (?, ?)")
    .bind(1, 99.50),
  env.DB.prepare("UPDATE users SET order_count = order_count + 1 WHERE id = ?")
    .bind(1),
  env.DB.prepare("SELECT * FROM orders WHERE user_id = ?")
    .bind(1)
]);
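`batch()` resolves to an array with one result object per statement, in order, each carrying `success`, `results`, and `meta`. A small helper like the one below can pull the rows out of the final statement of a batch (here a hypothetical convenience for the pattern above, not part of the D1 API):

```javascript
// Sketch: extract the rows from the last statement of a D1 batch result.
// Each entry in the array returned by batch() has the shape
// { success, results, meta }, mirroring a single .run()/.all() result.
function lastBatchRows(batchResults) {
  const last = batchResults[batchResults.length - 1];
  if (!last || !last.success) {
    throw new Error("batch did not complete successfully");
  }
  return last.results ?? [];
}
```

With the batch above, `lastBatchRows(results)` would yield the rows from the trailing SELECT.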
03

Migrations via Wrangler

Wrangler manages migrations as numbered SQL files. Apply them locally for dev, or against the remote database for production.

# Create a migration
npx wrangler d1 migrations create prod create-users

# migrations/0001_create-users.sql
CREATE TABLE users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  name TEXT NOT NULL,
  created_at TEXT DEFAULT (datetime('now'))
);

# Apply locally for dev
npx wrangler d1 migrations apply prod --local

# Apply to remote
npx wrangler d1 migrations apply prod --remote

Where it bites you.

Write latency

Single-primary means writes cross the ocean

If your primary is in the US and your user is in Johannesburg, every write is a 200ms+ round trip. Read replicas help reads but don't fix write latency.

Size limits

10 GB per database

D1 is not for large datasets. If you're approaching the limit, either archive old data to R2 or put Hyperdrive in front of a proper Postgres.

No stored procedures

It's SQLite, not Postgres

No PL/pgSQL, no custom functions, and only SQLite's basic triggers. Complex business logic stays in your Worker code.

Concurrent writes

Single-writer model

Only one write can execute at a time on the primary. High-write-throughput workloads will queue. This is a SQLite limitation, not a Cloudflare one.
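Under write bursts, contention on the single writer surfaces as transient errors rather than silent queuing forever. A small retry-with-backoff wrapper is a common mitigation; this is a generic sketch, and the idea of retrying blindly on any error is an assumption you should narrow to the specific errors D1 actually returns in your workload.

```javascript
// Sketch: retry a write with exponential backoff. writeFn is any async
// function, e.g. () => env.DB.prepare("INSERT ...").bind(...).run().
async function withRetries(writeFn, { attempts = 3, baseMs = 50 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await writeFn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: baseMs, 2*baseMs, 4*baseMs, ...
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** i));
    }
  }
  throw lastError;
}
```

Usage in a Worker would look like `await withRetries(() => env.DB.prepare("INSERT INTO orders (user_id, total) VALUES (?, ?)").bind(1, 99.50).run())`.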

When it fits. When it doesn't.

✓ Use it when
  • You need relational data with zero ops. No connection strings, no patching, no scaling knobs. It just works.
  • Your dataset fits in SQLite's envelope. Under 10 GB, moderate write volume, standard SQL queries.
  • You want the full Cloudflare binding model. D1 pairs naturally with Workers, KV, R2, and Queues in the same wrangler.toml.
