Technology · Cloudflare · Skill Node

Cloudflare Workers & Pages.

An edge runtime built on V8 isolates, paired with a git-driven static host. Together they replaced a surprising amount of the origin-server stack — but only for workloads that fit inside their constraints. Here's what the pieces actually are, and when to pick something else.

Technology · Edge Compute · Open Tier · Last updated Apr 2026

Compute and static assets, sitting in Cloudflare's network instead of yours.

Workers is a JavaScript and WebAssembly runtime that executes on Cloudflare's network — the same network that already answers DNS, terminates TLS, and caches assets for roughly a fifth of the web. Each request lands in whichever of Cloudflare's edge locations is closest to the user, and your code runs there instead of in a region on the other side of the ocean.

Pages is the static-site sibling: connect a Git repo, Cloudflare builds it, and the output sits on that same network. When you need a dynamic endpoint, a file under functions/ compiles into a Worker route on the same deploy. Pages + Functions is, in practice, how most teams start — the "serverless" bit arrives as a side effect of deploying a site.

The problem this solves isn't exotic. It's the boring one: every request to a single-region origin is a round trip, every container cold start is a visible delay, and every byte leaving your cloud has a price on it. Workers moves the code to where the user already is; Pages moves the assets there too.

01 Client (browser / app) → 02 Nearest PoP (330+ cities) → 03 Worker (V8 isolate) → 04 Bindings (KV · D1 · R2 · DO) → 05 Origin, optional fallback

The numbers that make the model interesting.

None of these are marketing adjectives. They're the reasons a particular class of workload — latency-sensitive, globally distributed, low-state — moved off traditional clouds and onto this one.

330+ · Edge cities · ~95% of users within 50ms
~0ms · Cold start · V8 isolate, not a container
100k · Free requests / day · Per account, unmetered bandwidth
$0 · Egress on R2 · vs per-GB on S3 and most peers

Three ideas. The rest is detail.

Most of Workers' behaviour — good and bad — falls out of three design decisions: how it runs code, how it serves static assets alongside it, and how it gives code access to state.

01

V8 Isolates, not containers

Each Worker runs in a V8 isolate — the same primitive a Chrome tab uses. An isolate spins up in roughly a millisecond, shares a single V8 process with thousands of peers, and has no filesystem, no Node.js, no native modules. That's how you get sub-millisecond cold starts and per-request pricing. It's also why you can't `require('sharp')` or shell out to ImageMagick.

// Runs in a V8 isolate: no container, no VM, near-zero cold start.
// Isolates are reused across requests, so module-level state can persist.
export default {
  async fetch(req, env, ctx) {
    const { colo } = req.cf;
    return new Response(`hello from ${colo}`);
  }
};
02

Pages + Functions: static first, dynamic where needed

Pages compiles your site from a Git repo and serves the output from every edge location. Drop a file under functions/ and that route becomes a Worker on the same deploy, with automatic preview URLs per pull request. For the common case — static site with a handful of dynamic endpoints — you skip most of the serverless config ceremony entirely.

// functions/api/hello.ts — becomes /api/hello
export const onRequest: PagesFunction = async (ctx) => {
  const body = { ok: true, city: ctx.request.cf.city };
  return Response.json(body);
};
03

Bindings: state is declared, not imported

Workers doesn't give your code an SDK for KV or D1 — it gives it a binding. You declare the resources you need in wrangler.toml and they arrive as properties on the env argument. No credentials to rotate, no connection strings, no client libraries to keep in sync. The primitives you get are KV (eventual key-value), D1 (SQLite), R2 (S3-compatible object storage with zero egress), Durable Objects (stateful, single-writer), and Queues.

# wrangler.toml
name = "edge-api"
main = "src/index.ts"

[[kv_namespaces]]
binding = "CACHE"
id = "a1b2..."

[[d1_databases]]
binding = "DB"
database_name = "prod"
database_id = "c3d4..."

[[r2_buckets]]
binding = "ASSETS"
bucket_name = "uploads"
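As a concrete sketch of how declared resources surface at runtime: the handler below reads through KV to D1 and writes back on a miss. The binding names match the wrangler.toml above, and KV's `get`/`put` and D1's `prepare().bind().first()` are the real Workers API shapes, but the `/user` route and the `users` table are invented for illustration.

```javascript
// Sketch of a Worker consuming the bindings declared above.
// CACHE, DB and ASSETS arrive as properties on `env`: no clients,
// no credentials, no connection strings.
const worker = {
  // In a deployed Worker this object would be the module's `export default`.
  async fetch(req, env) {
    const id = new URL(req.url).searchParams.get("id") ?? "anon";

    // 1. KV first: eventually consistent, but fast at every edge location.
    const cached = await env.CACHE.get(`user:${id}`);
    if (cached) return Response.json(JSON.parse(cached));

    // 2. Miss: fall through to D1 (SQLite) via a prepared statement.
    const row = await env.DB
      .prepare("SELECT name FROM users WHERE id = ?")
      .bind(id)
      .first();
    if (!row) return new Response("not found", { status: 404 });

    // 3. Write back to KV with a TTL so later requests skip D1 entirely.
    await env.CACHE.put(`user:${id}`, JSON.stringify(row), { expirationTtl: 60 });
    return Response.json(row);
  },
};
```

The pattern is worth noting: because state is injected rather than imported, the same handler runs unchanged in tests with a stubbed `env`.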

Workers vs the other edge runtimes.

Edge compute isn't a category of one. Here's how the main options compare on the dimensions that usually decide the choice.

| Platform | Runtime | Cold start | Edge state | Best for |
|---|---|---|---|---|
| Cloudflare Workers | V8 isolate | ~0ms | KV · D1 · R2 · DO · Queues | Global APIs, static + dynamic sites, auth middleware |
| Vercel Edge Functions | V8 isolate (on Cloudflare) | ~0ms | Vercel KV, Postgres (regional) | Next.js-first teams already on Vercel |
| Deno Deploy | Deno (V8 + Rust) | ~0ms | Deno KV | TypeScript-first, standards-aligned APIs |
| AWS Lambda@Edge | Node on CloudFront | 100–300ms | None at edge | Teams deep in AWS willing to pay the latency cost |
| Fastly Compute | WebAssembly (Wasmtime) | ~1ms | KV Store | Language flexibility, polyglot shops |
| Netlify Edge Functions | Deno (on Deno Deploy) | ~0ms | Blob Store | Netlify-hosted sites needing light edge logic |

What people actually build with it.

Skipping the theoretical. These are the workloads where Workers or Pages is usually the first answer rather than a compromise.

API gateway

Global BFF in front of regional services

Auth check, rate limit, cache read, request coalescing — all at the edge — before a single packet hits your origin. The classic Worker shape.
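A minimal sketch of that gateway shape. The `ORIGIN` binding name is invented (it stands in for a service binding or a plain origin fetch), and the in-memory rate limiter is per-isolate only; a production limiter would live in KV or a Durable Object so the count survives across isolates and locations.

```javascript
// Gateway sketch: authenticate, rate-limit, then forward.
const WINDOW_MS = 60_000;
const LIMIT = 100;
const hits = new Map(); // ip -> request timestamps inside the window

const gateway = {
  async fetch(req, env) {
    // 1. Auth at the door: reject before any packet reaches the origin.
    if (!req.headers.get("authorization")?.startsWith("Bearer ")) {
      return new Response("unauthorized", { status: 401 });
    }

    // 2. Naive sliding-window rate limit, keyed by client IP.
    //    Per-isolate only; see the note above for the durable version.
    const ip = req.headers.get("cf-connecting-ip") ?? "unknown";
    const now = Date.now();
    const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
    recent.push(now);
    hits.set(ip, recent);
    if (recent.length > LIMIT) {
      return new Response("too many requests", { status: 429 });
    }

    // 3. Forward to the regional service behind the gateway.
    return env.ORIGIN.fetch(req);
  },
};
```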

Auth middleware

OIDC / JWT at the door

Validate tokens, enrich with claims, reject unauthenticated traffic without letting it near your expensive app servers. Pairs well with Cloudflare Access.
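A sketch of the middleware shape only: decode the JWT payload, check expiry, enrich the forwarded request with a claim. Signature verification is deliberately elided here; a real middleware must verify the signature (e.g. with WebCrypto or a JWT library) before trusting any claim. The `APP` binding and `x-user` header are invented for illustration.

```javascript
// Decode a JWT payload (base64url -> JSON). Structural check only.
function decodeJwtPayload(token) {
  const parts = token.split(".");
  if (parts.length !== 3) return null;
  try {
    const b64 = parts[1].replace(/-/g, "+").replace(/_/g, "/");
    return JSON.parse(atob(b64));
  } catch {
    return null;
  }
}

const authMiddleware = {
  async fetch(req, env) {
    const token = req.headers.get("authorization")?.replace(/^Bearer /, "");
    const claims = token ? decodeJwtPayload(token) : null;

    // Reject missing, malformed, or expired tokens at the edge.
    // NOTE: a real middleware verifies the signature before this point.
    if (!claims || (claims.exp && claims.exp * 1000 < Date.now())) {
      return new Response("unauthorized", { status: 401 });
    }

    // Enrich the request with claims before it reaches the app servers.
    const fwd = new Request(req);
    fwd.headers.set("x-user", claims.sub ?? "");
    return env.APP.fetch(fwd);
  },
};
```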

A/B testing

Feature flags and experiment routing

Bucket users, set cookies, rewrite paths. Because the decision happens at the edge, each variant stays independently cacheable and the experiment adds no origin round trip.
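The bucketing can be sketched in a few lines. FNV-1a stands in for whatever stable hash you prefer; the `ab` cookie name, the `/b` path prefix, and the fallback to the client IP are all invented for illustration.

```javascript
// FNV-1a: cheap, deterministic, uniform enough for 50/50 bucketing.
function fnv1a(str) {
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// Same user + same experiment always lands in the same bucket.
function bucket(userId, experiment = "exp-homepage") {
  return fnv1a(`${experiment}:${userId}`) % 2 === 0 ? "control" : "variant";
}

const ab = {
  async fetch(req) {
    // Reuse an existing assignment so users never flip between variants.
    const cookie = req.headers.get("cookie") ?? "";
    const existing = cookie.match(/(?:^|;\s*)ab=(control|variant)/)?.[1];
    const id = req.headers.get("cf-connecting-ip") ?? Math.random().toString(36).slice(2);
    const variant = existing ?? bucket(id);

    // Rewrite the path so each variant is a distinct, cacheable URL.
    const url = new URL(req.url);
    if (variant === "variant") url.pathname = `/b${url.pathname}`;

    // A real Worker would fetch(url) here; this echoes the routed path.
    const res = new Response(`serving ${url.pathname}`);
    if (!existing) res.headers.set("set-cookie", `ab=${variant}; Path=/; Max-Age=2592000`);
    return res;
  },
};
```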

Static SaaS

Docs, marketing, and app shells

Pages handles the site, Functions handles the contact form and the webhook, and a single git push deploys globally in under a minute. No Kubernetes required.

Image transforms

On-the-fly resize and reformat

Store originals in R2, transform at the edge via the Images binding, serve WebP/AVIF with no origin round trip. Zero egress cost on R2 is the kicker.

Edge SSR

Server-rendered pages without the region tax

Render React, Svelte, or Astro pages at the edge, hydrating with data from KV or D1. Faster TTFB than regional SSR for a globally distributed audience.

From a JS runtime to a small cloud.

Workers launched as "run a bit of JS at the edge". What's interesting is how quickly that grew into a set of primitives broad enough to host full applications.

2017
Workers launch
JavaScript at the edge via V8 isolates; GA followed in early 2018. Mostly used as a smarter CDN rule.
2018
KV
Global, eventually consistent key-value store. The first real state primitive.
2020
Durable Objects · Workers Unbound
Single-writer stateful objects with strong consistency. Longer-running CPU budgets for heavier workloads.
2021
Pages · Workers for Platforms
Git-driven static hosting with built-in Functions. Multi-tenant runtime for SaaS providers.
2022
R2 · D1 · Queues
S3-compatible storage with zero egress. SQLite at the edge. Managed message queues. The "small cloud" is visibly assembling.
2023
Workers AI · Vectorize · Hyperdrive
GPU inference at the edge, vector search, and pooled connections to regional Postgres. The AI-shaped primitives land.
2024–25
Workflows · Containers (beta)
Durable workflow orchestration, and — notably — support for running actual containers alongside Workers for the heavy bits that isolates can't handle.

When it fits. When it doesn't.

Workers is excellent for what it's excellent at, and miserable for what it isn't. The fastest way to waste a week is to force the wrong shape of workload into an isolate.

✓ Use it when
  • Latency matters and users are global. The PoP network is the whole value proposition.
  • Workloads are short, stateless, or use the provided state primitives. Auth, routing, caching, templating, thin CRUD over KV/D1/R2.
  • You're building a static site with a handful of dynamic endpoints. Pages + Functions is dramatically less ceremony than a container platform.
  • Egress costs are eating your budget. R2 is $0/GB egress. For asset-heavy apps this alone pays the bill.
  • You want one deploy primitive from edge to origin. Wrangler + Pages gives a single surface for CI, previews, and rollbacks.

✗ Skip it when
  • Your users are all in one region. A single well-placed origin gets the same latency with none of the runtime constraints.
  • The code needs Node natives, a filesystem, or a shell. No `require('sharp')`, no ImageMagick; that work belongs in a container.
  • The workload is long-running or memory-heavy. Isolates have tight CPU and memory budgets; reach for a container or a regional service instead.

How this node connects in the tree.

A skill node is only as valuable as the other nodes it plugs into. Here's where Workers & Pages touches the rest of the tree — and how an agent loading this context should think about those edges.

tech/iac
Terraform
Declare Workers, Pages projects, DNS, and Access rules alongside the rest of your infra.
tech/architecture
Traefik
The self-hosted routing layer for workloads Workers can't host — typically paired, not replaced.
tech/architecture
Docker & containers
The escape hatch for anything that needs Node natives, heavy memory, or long-running processes.
biz
Business domain hub
ERP integrations (Sage X3, Shopify, WooCommerce) are the most common 'what runs on Workers' destination in the tree today.
tech/cloudflare/queues
Queues & messaging
In-Cloudflare fan-out with reliable delivery — the lightweight sibling of Workers for event-driven flows.
tech/cloudflare/ai
Workers AI & Vectorize
Workers AI + Vectorize is a fully edge-hosted RAG stack when the corpus is small enough to fit the model.
For agents loading this context

What this node gives you

A working mental model of the Workers + Pages primitives, their performance envelope, and the places they start to hurt. Use it to answer "should this workload live at the edge?" questions without guessing, and to scaffold a Pages project with the right bindings before touching a real wrangler.toml.

When the agent-context API ships, this node will also expose deploy sequences, common wrangler errors, and the known-good shapes for KV / D1 / R2 bindings.

Go deeper.

Official docs first, because Cloudflare's are unusually good. A few background reads for anyone who wants to understand the runtime choices rather than just use them.

Agent context

Load this node into your agent

Structured context bundle with deploy patterns, bindings, and wrangler.toml scaffolds. Shipping with the know.2nth.ai Worker API.