Global, eventually consistent key-value storage. Reads are fast everywhere; writes propagate in seconds. The right shape for config, sessions, and feature flags — the wrong shape for anything that needs strong consistency.
KV is Cloudflare's simplest state primitive. You write a key-value pair, and within roughly 60 seconds it's readable from every edge location on the network. Reads are fast — typically under 10ms — because the data is cached locally at the PoP serving the request.
The trade-off is eventual consistency. If you write a key in Johannesburg and read it from London a second later, you might get the old value. For session tokens, feature flags, and cached API responses, this is fine. For a shopping cart or account balance, it isn't.
Values can be up to 25 MB, keys up to 512 bytes. You get metadata on each key (up to 1 KB of JSON) and TTL-based expiration. The core API is four methods: get, put, delete, and list.
Declare a KV namespace in wrangler.toml, then use env.CACHE in your Worker. The API is intentionally minimal.
```toml
# wrangler.toml
[[kv_namespaces]]
binding = "CACHE"
id = "a1b2c3d4..."
```
Reads return null when the key doesn't exist or has expired. Writes accept an optional expirationTtl in seconds. You can store strings, JSON, ArrayBuffers, or ReadableStreams.
```js
export default {
  async fetch(req, env) {
    // Read (null if the key doesn't exist)
    const cached = await env.CACHE.get("user:123", "json");

    // Write with TTL (seconds) and per-key metadata
    const data = { name: "Ada" };
    await env.CACHE.put("user:123", JSON.stringify(data), {
      expirationTtl: 3600,
      metadata: { role: "admin" },
    });

    // Delete
    await env.CACHE.delete("user:123");

    // List keys with a prefix
    const { keys } = await env.CACHE.list({ prefix: "user:" });

    return Response.json(cached);
  },
};
```
For seeding data or migrations, Wrangler supports bulk put from a JSON file. Useful for populating feature flags or config on deploy.
```sh
# Bulk write from a JSON file
npx wrangler kv:bulk put --namespace-id a1b2c3d4 ./seed.json
```

```json
[
  { "key": "feature:dark-mode", "value": "true" },
  { "key": "feature:beta-nav", "value": "false" }
]
```
The consistency caveat bears repeating: a write in Cape Town can take up to 60 seconds to become visible in Frankfurt. If you need read-after-write consistency, use D1 or Durable Objects instead.
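For contrast, here is a minimal D1 sketch of read-after-write: a query issued right after an insert sees the new row, with no propagation delay. The `sessions` table and the helper name are assumptions for illustration; `db` stands in for a D1 binding such as `env.DB`.

```javascript
// Sketch: D1 gives read-after-write consistency within a request.
// Table name and binding are illustrative, not from the KV docs.
async function createSession(db, id, userId) {
  await db
    .prepare("INSERT INTO sessions (id, user_id) VALUES (?, ?)")
    .bind(id, userId)
    .run();

  // This read observes the row just inserted, unlike a KV read
  // racing a KV write.
  return db
    .prepare("SELECT user_id FROM sessions WHERE id = ?")
    .bind(id)
    .first("user_id");
}
```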
KV is optimised for read-heavy workloads. Writes to a single key are limited to roughly one per second, so rapidly updated keys (counters, rate limiters) will hit the wall. Use Analytics Engine or Durable Objects for high-write patterns.
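As a sketch of the high-write alternative: a Durable Object processes requests to a given object one at a time, so a read-modify-write counter doesn't lose updates. The class and storage key below are illustrative; in a real Worker you would export the class and bind it in wrangler.toml.

```javascript
// Illustrative Durable Object counter (export and bind it in a
// real Worker; class and key names here are assumptions).
class Counter {
  constructor(state) {
    this.state = state;
  }

  // Requests to one object instance are serialized, so this
  // read-increment-write sequence is safe under concurrency.
  async fetch(request) {
    let n = (await this.state.storage.get("count")) ?? 0;
    n += 1;
    await this.state.storage.put("count", n);
    return new Response(String(n));
  }
}
```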
Listing keys is paginated at 1,000 keys per call, so enumerating a large namespace means many sequential list calls, even with a prefix filter. Design your key structure so you rarely need to list.
For large objects, use R2 and store the R2 key in KV. KV is a cache, not a file store.
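A sketch of that pointer pattern, assuming an R2 binding named BUCKET alongside the CACHE namespace (both binding names and the key scheme are illustrative):

```javascript
// Store the bytes in R2; store only the R2 key (a pointer) in KV.
async function putLarge(env, key, body) {
  const r2Key = `blob/${key}`;
  await env.BUCKET.put(r2Key, body); // payload lives in R2
  await env.CACHE.put(key, r2Key);   // KV holds the small pointer
  return r2Key;
}

// Follow the pointer: KV lookup first, then fetch the object from R2.
async function getLarge(env, key) {
  const r2Key = await env.CACHE.get(key);
  if (r2Key === null) return null;
  return env.BUCKET.get(r2Key);
}
```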