CLI to warm and inspect Cloudflare edge cache status across all your URLs — no external dependencies, pure Node.js.
Cloudflare has no API to list what's currently cached. The only way to know is to ask. This tool does that at scale: fetches every URL you care about, records CF-Cache-Status (plus CF-Ray, Age, Cache-Control, latency), outputs structured JSONL logs, and prints a stats summary.
Useful for:
- Warming cache after a deploy
- Auditing which pages are still MISS/EXPIRED
- CI gates (fail if miss rate exceeds a threshold)
- Node.js 18+
- `tsx` (already a devDependency)
```sh
git clone https://github.com/yuis-ice/cf-cache-utils
cd cf-cache-utils
npm install
```

```sh
# Check URLs from a file
npm run cli -- --urls-file urls.txt --verbose

# Crawl a host (follows <a href> links, stays on same origin)
npm run cli -- --host https://example.com --crawl-depth 2

# Warm with GET instead of HEAD
npm run cli -- --urls-file urls.txt --method GET

# Dry-run: list discovered URLs without requesting
npm run cli -- --host https://example.com --dry-run

# Write JSONL log + stats to files
npm run cli -- --urls-file urls.txt --log-file run.jsonl --stats-file stats.json

# CI gate: exit 1 if miss rate > 20%
npm run cli -- --urls-file urls.txt --fail-on-miss-rate 0.2
```

Each checked URL produces one JSONL line on stdout (or `--log-file`):
```json
{
  "url": "https://example.com/page",
  "timestamp": "2026-04-13T01:24:24.730Z",
  "method": "HEAD",
  "statusCode": 200,
  "cfCacheStatus": "HIT",
  "cfRay": "9eb6abc6683bd4e3-NRT",
  "age": 131991,
  "cacheControl": "public, max-age=31536000, immutable",
  "vary": null,
  "contentType": "text/html; charset=utf-8",
  "durationMs": 23.7,
  "byteLength": null,
  "error": null,
  "attempt": 0
}
```

Stats are printed to stderr (or `--stats-file`) on completion:
```json
{
  "totalUrls": 21,
  "completed": 21,
  "errors": 0,
  "byStatus": { "HIT": 18, "MISS": 3 },
  "byHttpStatus": { "200": 21 },
  "latency": { "min": 23, "max": 100, "avg": 59, "p50": 53, "p95": 97 },
  "hitRate": 0.857,
  "missRate": 0.143,
  "startedAt": "...",
  "finishedAt": "...",
  "durationMs": 284
}
```

| Flag | Default | Description |
|---|---|---|
| `--urls-file <path>` | — | Newline-delimited URL list (required if no `--host`) |
| `--host <url>` | — | Base URL; crawl HTML links recursively |
| `--crawl-depth <n>` | 3 | Max recursion depth for `--host` |
| `--method HEAD\|GET` | HEAD | HTTP method (GET also warms and records byte length) |
| `--concurrency <n>` | 10 | Parallel requests |
| `--timeout-ms <n>` | 10000 | Per-request timeout (ms) |
| `--delay-ms <n>` | 0 | Fixed delay between requests (ms) |
| `--retries <n>` | 3 | Max retries per URL (exponential backoff + jitter) |
| `--retry-base-ms <n>` | 1000 | Backoff base (ms) |
| `--user-agent <string>` | cf-cache-checker/1.0 | User-Agent header |
| `--include <substr>` | — | Only process URLs containing substring (repeatable) |
| `--exclude <substr>` | — | Skip URLs containing substring (repeatable) |
| `--log-file <path>` | stdout | JSONL output file |
| `--stats-file <path>` | stderr | JSON stats file |
| `--quiet` | false | Suppress progress output |
| `--verbose` | false | Print each result line to stderr |
| `--fail-on-miss-rate <0-1>` | — | Exit 1 if missRate exceeds threshold |
| `--dry-run` | false | List URLs without making requests |
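Because the log is plain JSONL, it is easy to post-process. As a rough sketch (not part of the tool itself), a few lines of Node.js can pull out every URL that was not served from cache; the field names `url` and `cfCacheStatus` follow the record format shown above, and the inline sample stands in for a real `--log-file`:

```typescript
// Sketch: filter a cf-cache-utils JSONL log for URLs that were not cache HITs.
function urlsNotHit(jsonl: string): string[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line) as { url: string; cfCacheStatus: string | null })
    .filter((record) => record.cfCacheStatus !== "HIT")
    .map((record) => record.url);
}

// Demo on two inline records (in practice, read the --log-file output instead).
const sample =
  '{"url":"https://example.com/a","cfCacheStatus":"HIT"}\n' +
  '{"url":"https://example.com/b","cfCacheStatus":"MISS"}\n';
console.log(urlsNotHit(sample)); // [ 'https://example.com/b' ]
```

The same one-record-per-line shape also works well with stream processing, so large runs never need to be loaded into memory at once.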
The CLI uses only Node.js built-ins and the standard fetch API. tsx is the only dev tool needed to run TypeScript directly.
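To illustrate what that means in practice, the core of a single check can be sketched with nothing but built-ins: send a `HEAD` request via `fetch` and read Cloudflare's response headers. This is a simplified sketch, not the tool's actual implementation; the header-reading helper is split out so the demo can run without a network call:

```typescript
interface CacheCheck {
  cfCacheStatus: string | null;
  cfRay: string | null;
  age: number | null;
  cacheControl: string | null;
}

// Pull the Cloudflare cache fields out of a standard Headers object.
// Headers.get() is case-insensitive and returns null for absent headers.
function readCacheHeaders(headers: Headers): CacheCheck {
  const age = headers.get("age");
  return {
    cfCacheStatus: headers.get("cf-cache-status"),
    cfRay: headers.get("cf-ray"),
    age: age === null ? null : Number(age),
    cacheControl: headers.get("cache-control"),
  };
}

// Live usage (requires network):
//   const res = await fetch("https://example.com/", { method: "HEAD" });
//   console.log(res.status, readCacheHeaders(res.headers));

// Offline demo with a synthetic Headers object:
const demo = new Headers({ "cf-cache-status": "HIT", "age": "131991" });
console.log(readCacheHeaders(demo));
```

Note that `cf-cache-status` only appears on responses that actually pass through Cloudflare, so a `null` value here usually means the URL is not behind the proxy at all.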
MIT