Technology · Cloudflare · Skill Node

Analytics
Engine.

Time-series metrics you write from any Worker and query via SQL. Append-only, cheap at high volume, designed for usage metering, token tracking, and custom telemetry that doesn't belong in your application database.

Technology · Time-series Metrics · Open Tier · Last updated · Apr 2026

A write-optimised metrics store for edge telemetry.

Analytics Engine is Cloudflare's time-series data store. You write data points from Workers using a binding, and query them later via a SQL API. Each data point has up to 20 string "blobs", up to 20 numeric "doubles", and a single string "index" used for sampling and grouping — think of it as a structured event log.

It's designed for high-write, aggregate-read workloads: API request counts, token usage per customer, latency percentiles, error rates. Not for individual event lookups (use D1 or a log aggregator for that).

Writes are fire-and-forget from the Worker's perspective — no await, no acknowledgement. This keeps the hot path fast. Queries use a SQL dialect via the Cloudflare API, with automatic time-bucketing and aggregation functions.

Write from Workers, query via SQL.

01

Writing data points

Bind an Analytics Engine dataset in wrangler.toml. Write data points with an index (one string, used for sampling and grouping), blobs (strings for dimensions), and doubles (numbers for aggregation).

# wrangler.toml
[[analytics_engine_datasets]]
binding = "METRICS"
dataset = "api_usage"

// Write a data point (fire-and-forget)
env.METRICS.writeDataPoint({
  indexes: ["customer-123"],   // at most one index per data point
  blobs: ["POST /api/chat"],   // strings: dimensions for grouping
  doubles: [1, 342, 1580],     // numbers: requests, tokens, latency_ms
});
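The write side can be sketched in TypeScript. The binding name METRICS and the positional layout (index1 = customer, blob1 = route, double1..3 = requests/tokens/latency) are this example's conventions, not a fixed schema:

```typescript
// Shape of the binding injected by the Workers runtime.
interface AnalyticsEngineDataset {
  writeDataPoint(point: {
    indexes?: string[]; // at most one index per data point
    blobs?: string[];   // strings: dimensions for grouping
    doubles?: number[]; // numbers: values for SUM/AVG aggregation
  }): void;
}

// Keep the positional layout in one place so every write agrees on it.
function toDataPoint(customer: string, route: string, tokens: number, latencyMs: number) {
  return {
    indexes: [customer],
    blobs: [route],
    doubles: [1, tokens, latencyMs],
  };
}

// Inside a Worker handler (fire-and-forget, no await):
//   env.METRICS.writeDataPoint(toDataPoint("customer-123", "POST /api/chat", 342, 1580));
```

Centralising the layout in a helper matters because the fields are positional: every call site must agree on what double2 means.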
02

Querying via SQL API

Query your dataset using SQL with time-bucketing. Results come back as JSON via the Cloudflare API.

# Query: tokens used per customer, last 24h
curl "https://api.cloudflare.com/client/v4/accounts/{id}/analytics_engine/sql" \
  -H "Authorization: Bearer $CF_TOKEN" \
  -d "SELECT
    index1 AS customer,
    SUM(double2) AS total_tokens,
    COUNT() AS requests
  FROM api_usage
  WHERE timestamp > NOW() - INTERVAL '24' HOUR
  GROUP BY customer
  ORDER BY total_tokens DESC
  LIMIT 20"
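The same call can be made from code. A minimal sketch, assuming an API token with analytics read permission; the JSON response shape ({ meta, data, rows }) follows the ClickHouse-style default, so treat it as an assumption to verify against the response you actually get:

```typescript
// Build the SQL API endpoint for an account.
function sqlEndpoint(accountId: string): string {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}/analytics_engine/sql`;
}

// POST a SQL query and return the parsed JSON result.
async function queryUsage(accountId: string, token: string, sql: string) {
  const res = await fetch(sqlEndpoint(accountId), {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: sql,
  });
  if (!res.ok) throw new Error(`SQL API ${res.status}: ${await res.text()}`);
  return res.json() as Promise<{ meta: unknown[]; data: Record<string, unknown>[]; rows: number }>;
}
```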

Where it bites you.

Sampling

Queries sample at high volumes

Analytics Engine samples data adaptively at high write volumes. Aggregations remain accurate within sampling bounds, but individual data points may not all appear. Each row exposes a _sample_interval column; weight your aggregates by it (for example, SUM(_sample_interval) instead of COUNT()) to estimate true totals.
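A sampling-aware aggregation looks like this — each row is weighted by _sample_interval so counts and sums estimate the true totals rather than the sampled ones (column positions here follow this page's example layout):

```sql
SELECT
  index1 AS customer,
  SUM(double2 * _sample_interval) AS total_tokens,
  SUM(_sample_interval) AS requests
FROM api_usage
WHERE timestamp > NOW() - INTERVAL '24' HOUR
GROUP BY customer
```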

Retention

Data retained for 90 days (free) or longer on paid

If you need long-term storage, export regularly to R2 or an external warehouse.

Schema

No schema enforcement on write

Indexes, blobs, and doubles are positional. If you change the meaning of blob1 or double2 without versioning, your queries break silently.
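One way to guard against silent positional drift is to pin a schema version tag in blob1 and filter on it in queries. An illustration only — the tag and layout are this example's convention, not an Analytics Engine feature:

```typescript
// blob1 always carries the layout version; bump it when positions change meaning.
const SCHEMA_VERSION = "v2";

function versionedDataPoint(customer: string, route: string, tokens: number) {
  return {
    indexes: [customer],
    blobs: [SCHEMA_VERSION, route], // blob1 = version, blob2 = route
    doubles: [1, tokens],           // double1 = requests, double2 = tokens
  };
}

// Queries then pin the layout explicitly:
//   SELECT SUM(double2) FROM api_usage WHERE blob1 = 'v2' ...
```

Old rows keep their old tag, so a layout change never silently mixes incompatible positions in one aggregate.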

When it fits. When it doesn't.

✓ Use it when
  • High-write telemetry from Workers. API usage, token counts, latency tracking — fire-and-forget writes that don't slow the hot path.
  • Usage-based billing data. Aggregate tokens, requests, or bandwidth per customer per period.
  • Custom metrics without running Prometheus. No infrastructure to manage, just a binding and SQL queries.

Where this node connects.

Go deeper.