S3-compatible object storage with zero egress fees. The service that turns the cloud pricing model on its head — you pay for storage and operations, never for bandwidth out.
R2 is Cloudflare's S3-compatible object storage. It speaks the S3 API, so existing tools (AWS CLI, rclone, any S3 SDK) work out of the box. The differentiator is pricing: $0/GB for egress. On AWS S3, egress costs $0.09/GB — for asset-heavy applications, that's often the largest line item on the bill.
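To make that concrete, here's the egress line item for a hypothetical 10 TB/month of outbound traffic at the rates quoted above (the 10 TB figure is an arbitrary illustrative volume, not from any real bill):

```javascript
// Egress cost for 10 TB/month: S3's $0.09/GB vs R2's $0/GB.
// The 10 TB volume is an assumption for illustration.
const egressGB = 10_000;
const s3Egress = egressGB * 0.09; // $900/month on S3
const r2Egress = egressGB * 0.0;  // $0/month on R2
console.log(s3Egress, r2Egress); // 900 0
```

At that volume the egress alone dwarfs typical storage costs, which is the "largest line item" effect described above.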
From a Worker, you access R2 via a binding. From outside Cloudflare, you use the S3 API with R2 credentials. You can also enable public access via a custom domain, turning R2 into a CDN-backed asset store with no origin to manage.
R2 supports lifecycle rules (auto-delete after N days), multipart uploads for large files, and conditional operations with ETags. It's not a filesystem — it's a flat key-value store where keys look like paths and values can be up to 5 TB.
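The conditional-operation idea can be sketched with a toy in-memory store. This is not the real R2 API (the Worker API takes an `onlyIf` option on reads and writes); the point is the semantics: a write only lands if the caller holds the current ETag, so concurrent writers can't silently clobber each other.

```javascript
// Toy in-memory illustration of ETag-conditional writes (not the real R2 API).
// putIfMatch only overwrites when the caller's ETag matches the stored one.
const store = new Map(); // key -> { etag, value }

function putIfMatch(key, value, expectedEtag) {
  const current = store.get(key);
  if (current && current.etag !== expectedEtag) {
    return { ok: false, etag: current.etag }; // precondition failed
  }
  const etag = Math.random().toString(16).slice(2); // stand-in for a content hash
  store.set(key, { etag, value });
  return { ok: true, etag };
}

const first = putIfMatch("config.json", "v1", undefined);    // fresh key: succeeds
const stale = putIfMatch("config.json", "v2", "wrong-etag"); // stale ETag: rejected
console.log(first.ok, stale.ok); // true false
```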
Declare an R2 bucket in wrangler.toml and access it via env.BUCKET. The API supports put, get, delete, list, and head.
```toml
# wrangler.toml
[[r2_buckets]]
binding = "BUCKET"
bucket_name = "uploads"
```

```js
// Worker code
export default {
  async fetch(req, env) {
    // Upload: stream the request body straight into the bucket
    await env.BUCKET.put("images/photo.jpg", req.body, {
      httpMetadata: { contentType: "image/jpeg" },
    });

    // Download: get() returns null when the key doesn't exist
    const obj = await env.BUCKET.get("images/photo.jpg");
    if (obj === null) {
      return new Response("Not found", { status: 404 });
    }
    return new Response(obj.body, {
      headers: { "Content-Type": obj.httpMetadata.contentType },
    });
  },
};
```
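Listings are paginated, so fetching every key under a prefix means following cursors. A minimal sketch of that loop, where `fakeList` is an in-memory stand-in for `env.BUCKET.list()` but returns the same `{ objects, truncated, cursor }` shape:

```javascript
// Walk every page of a bucket listing by following cursors.
// fakeList is an in-memory stand-in for env.BUCKET.list().
const allKeys = ["images/a.jpg", "images/b.jpg", "images/c.jpg"];

async function fakeList({ cursor = 0, limit = 2 } = {}) {
  const objects = allKeys.slice(cursor, cursor + limit).map((key) => ({ key }));
  const next = cursor + limit;
  return { objects, truncated: next < allKeys.length, cursor: next };
}

async function listAll(list) {
  const keys = [];
  let cursor;
  let truncated = true;
  while (truncated) {
    const page = await list({ cursor, limit: 2 });
    keys.push(...page.objects.map((o) => o.key));
    ({ truncated, cursor } = page);
  }
  return keys;
}

listAll(fakeList).then((keys) => console.log(keys.length)); // 3
```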
Use any S3 client with R2's endpoint. Useful for migrations from AWS, or for tools that don't know about Cloudflare.
```sh
# AWS CLI with R2 endpoint
aws s3 cp ./backup.tar.gz \
  s3://my-bucket/backups/backup.tar.gz \
  --endpoint-url https://<account-id>.r2.cloudflarestorage.com

# rclone sync
rclone sync ./dist r2:my-bucket/site/ --progress
```
Generate a presigned URL from your Worker, hand it to the client, and the client uploads directly to R2 without routing through your Worker. Saves compute and keeps large files off your Worker's memory.
```js
import { AwsClient } from "aws4fetch";

const r2 = new AwsClient({
  accessKeyId: env.R2_ACCESS_KEY,
  secretAccessKey: env.R2_SECRET_KEY,
});

const signed = await r2.sign(
  new Request(`https://${ACCOUNT}.r2.cloudflarestorage.com/bucket/key`, {
    method: "PUT",
  }),
  { aws: { signQuery: true } }
);
// signed.url now carries the auth in its query string; hand it to the
// client, which PUTs the file directly to R2.
```
A GET immediately after a PUT returns the new object. But LIST operations may take a moment to reflect new keys. Don't use list as a source of truth right after writes.
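If code genuinely must wait for a listing to catch up (say, a post-upload sanity check in a batch job), a bounded poll is safer than assuming the first list result is fresh. A generic sketch, where the attempt count and delay are arbitrary choices:

```javascript
// Poll a predicate until it holds or attempts run out. Useful for the
// rare case where code must wait for a listing to reflect a recent write.
async function waitUntil(check, attempts = 5, delayMs = 100) {
  for (let i = 0; i < attempts; i++) {
    if (await check()) return true;
    await new Promise((r) => setTimeout(r, delayMs));
  }
  return false;
}

// Example: a check that succeeds on the third poll.
let polls = 0;
waitUntil(() => ++polls >= 3, 5, 10).then((ok) => console.log(ok, polls)); // true 3
```

For a single known key, skip the poll entirely and confirm the write with a direct `get` or `head`, which reflects it immediately.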
Writes (PUT, POST, LIST) are Class A at $4.50/million. Reads (GET, HEAD) are Class B at $0.36/million. A write-heavy workload can still run up a bill even with free egress.
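A quick way to sanity-check a workload against those rates (the rates are copied from above; the helper name is mine):

```javascript
// Rough monthly operations cost at the R2 rates quoted above.
const CLASS_A_PER_MILLION = 4.5;  // PUT, POST, LIST
const CLASS_B_PER_MILLION = 0.36; // GET, HEAD

function monthlyOpsCost(classAOps, classBOps) {
  return (
    (classAOps / 1e6) * CLASS_A_PER_MILLION +
    (classBOps / 1e6) * CLASS_B_PER_MILLION
  );
}

// 10M writes + 100M reads: 10 × $4.50 + 100 × $0.36 = $81
console.log(monthlyOpsCost(10e6, 100e6)); // 81
```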
R2 buckets aren't publicly accessible out of the box. You need to attach a custom domain (which routes through Cloudflare's CDN) or serve objects via a Worker.
R2 automatically places a bucket close to where it's created. For low latency from af-south, create the bucket from a nearby PoP, or pass a location hint at bucket creation if one covers the region.