Technology · Cloudflare · Skill Node

Cloudflare Queues.

Reliable messaging with batching, retries, and dead-letter queues. The async primitive for decoupling request handling from background work — entirely within Cloudflare's network.

Technology · Messaging · Open Tier · Last updated Apr 2026

A message queue that lives where your Workers live.

Queues is Cloudflare's managed message queue. A producer Worker sends a message; a consumer Worker processes it later. Messages are batched for efficiency, retried on failure, and sent to a dead-letter queue if they fail repeatedly.

The key advantage over SQS or RabbitMQ: Queues runs on the same network as everything else. No VPC peering, no cross-cloud latency, no credentials to manage. The producer binds to a queue in wrangler.toml, sends messages with env.QUEUE.send(), and the consumer handles batches via a queue() handler.

Common patterns: offload LLM calls, process webhooks asynchronously, fan out notifications, batch-write to D1 or R2 without blocking the request path.

Producer, consumer, dead-letter.

01

Producer: sending messages

Bind a queue as a producer in wrangler.toml. Send individual messages or batches. Messages are JSON-serialisable objects.

# wrangler.toml — producer Worker
[[queues.producers]]
binding = "JOBS"
queue = "processing-jobs"

// Send a message
export default {
  async fetch(req, env) {
    const payload = await req.json();

    // Single message
    await env.JOBS.send(payload);

    // Batch of messages
    await env.JOBS.sendBatch([
      { body: { type: "email", to: "user@example.com" } },
      { body: { type: "webhook", url: "https://..." } },
    ]);

    return Response.json({ queued: true });
  }
};

02

Consumer: processing batches

The consumer Worker receives messages in batches. Acknowledge individually or let the batch succeed/fail as a unit. Failed messages are retried automatically.

# wrangler.toml — consumer Worker
[[queues.consumers]]
queue = "processing-jobs"
max_batch_size = 10
max_batch_timeout = 30

// Consumer handler — receives up to max_batch_size messages at once
export default {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      try {
        await processJob(msg.body, env);
        msg.ack();   // done — this message won't be redelivered
      } catch (err) {
        msg.retry(); // redeliver later; counts toward max_retries
      }
    }
  }
};

03

Dead-letter queue

After a configurable number of retries, failed messages land in a dead-letter queue. You can inspect them, replay them, or route them to an alerting system.

# wrangler.toml — dead-letter config
[[queues.consumers]]
queue = "processing-jobs"
max_retries = 3
dead_letter_queue = "processing-jobs-dlq"
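
Because the dead-letter queue is itself an ordinary queue, inspection and replay are just another consumer binding — a sketch, reusing the queue names above:

```toml
# wrangler.toml — attach a consumer to the DLQ itself
[[queues.consumers]]
queue = "processing-jobs-dlq"
max_batch_size = 10
```

The DLQ consumer's queue() handler can log message bodies, forward them to an alerting system, or re-send them to the original queue through a producer binding.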

Where it bites you.

At-least-once

Messages may be delivered more than once

Queues guarantees at-least-once delivery, not exactly-once. Your consumer must be idempotent. Use a deduplication key if you need to prevent duplicate processing.
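
A minimal sketch of that dedupe pattern, assuming each message body carries a unique `id` and a KV-style store holds the seen-set. The store here is a stub with `get`/`put`; in a real Worker it would be a KV binding (a name like `env.SEEN` is an assumption, not part of the Queues API):

```javascript
// Idempotent processing sketch. `store` stands in for a KV namespace;
// anything with async get/put works for demonstration.
async function processOnce(store, msg, handler) {
  const key = `seen:${msg.id}`;
  if (await store.get(key)) {
    return "skipped"; // duplicate delivery — already handled
  }
  await handler(msg);
  // Record the id only after successful processing. In Workers KV,
  // an expirationTtl keeps the seen-set from growing forever.
  await store.put(key, "1");
  return "processed";
}
```

There is still a small window between the handler succeeding and the key being written, so a crash at exactly that point yields one duplicate — that residual risk is inherent to at-least-once delivery.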

Batch timeout

Batches wait up to max_batch_timeout

If messages arrive slowly, the consumer waits up to the timeout before processing. For latency-sensitive work, reduce the timeout but accept smaller batches.
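
One way that trade-off shows up in config — a sketch with illustrative values:

```toml
# wrangler.toml — low-latency consumer: small batches, short wait
[[queues.consumers]]
queue = "processing-jobs"
max_batch_size = 1      # hand over each message as soon as it arrives
max_batch_timeout = 0   # don't wait to fill a batch
```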

Message size

128 KB per message

Large payloads don't fit. Store the data in R2 or KV and put the key in the message instead.

When it fits. When it doesn't.

✓ Use it when
  • You need to decouple request handling from background work. Accept the request, enqueue the job, return immediately.
  • Fan-out patterns. One event triggers multiple downstream processors — email, webhook, analytics, audit log.
  • You want retries and dead-letter without building them. The retry and DLQ mechanics are built in.
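
The fan-out bullet above can be sketched as a dispatch table keyed on a `type` field in the message body (the handler set is illustrative):

```javascript
// Fan-out dispatch sketch: route each message to a handler by type.
const handlers = {
  email:   async (body) => { /* send via your email provider */ },
  webhook: async (body) => fetch(body.url, { method: "POST" }),
  audit:   async (body) => { /* append to an audit log */ },
};

async function dispatch(body) {
  const handler = handlers[body.type];
  if (!handler) {
    // An unhandled throw means the message retries and, eventually,
    // lands in the DLQ — usually what you want for unknown types.
    throw new Error(`unknown message type: ${body.type}`);
  }
  return handler(body);
}
```

In the consumer's queue() handler, `await dispatch(msg.body)` before `msg.ack()`.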
