[flags-core] perf improvements #382

Draft

dferber90 wants to merge 2 commits into main from evaluate-perf-improvements

Conversation

dferber90 (Collaborator) commented May 14, 2026

Summary

Three independent micro-optimizations on the flag evaluation hot path, identified from a CPU profile of client.evaluate() under load. No behavior change, no public API change. All 426 existing tests pass.

  1. Memoize `scaledWeights` for split outcomes. `handleOutcome` was recomputing `sum(outcome.weights)` plus `outcome.weights.map(w => w/sum * UINT32_MAX)` on every evaluation. The result is now cached on first call under a Symbol-keyed property on the outcome object.
  2. Memoize the compiled RegExp for REGEX / NOT_REGEX conditions. `matchConditions` was calling `new RegExp(rhs.pattern, rhs.flags)` on every evaluation. The compiled RegExp is now cached under a Symbol on the rhs object.
  3. Memoize the data spread in `Controller.read()` / `getDatafile()`. Both methods used to destructure `_origin` and spread the entire datafile on every call. The destructure result is now cached keyed on the `this.data` reference; the cache rebuilds once when stream/poll replaces the underlying data.
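Optimizations 1 and 2 follow the same shape. A minimal sketch of the Symbol-keyed memoization pattern (the identifiers here — `SCALED`, `COMPILED`, and the helper names — are illustrative, not the actual source names):

```javascript
// Caches live under Symbols so they never collide with real datafile keys
// and never show up in enumeration or serialization.
const SCALED = Symbol('scaledWeights');
const COMPILED = Symbol('compiledRegExp');
const UINT32_MAX = 0xffffffff;

// Opt 1: scale split weights into the uint32 hash space once per outcome.
function getScaledWeights(outcome) {
  let scaled = outcome[SCALED];
  if (scaled === undefined) {
    const sum = outcome.weights.reduce((a, b) => a + b, 0);
    scaled = outcome.weights.map((w) => (w / sum) * UINT32_MAX);
    outcome[SCALED] = scaled; // later evaluations reuse this array
  }
  return scaled;
}

// Opt 2: compile a REGEX condition's rhs once, reuse the RegExp afterwards.
function getCompiledRegExp(rhs) {
  let re = rhs[COMPILED];
  if (re === undefined) {
    re = new RegExp(rhs.pattern, rhs.flags);
    rhs[COMPILED] = re;
  }
  return re;
}
```

The first call per outcome / per condition pays the old cost; every subsequent evaluation is a property read.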

Bench results

Median over 3 runs of 2,000,000 iterations on Node 24 / darwin-arm64. Benchmark and CPU-profile summarizer are not committed in this PR; ping me if you want to land them.

Pure evaluate() — wins where expected, no regression elsewhere

| Scenario | Before | After | Δ |
| --- | --- | --- | --- |
| split | 224.6 ns/op | 174.3 ns/op | −22% |
| regex | 129.1 ns/op | 88.0 ns/op | −32% |
| rule-eq | 63.6 ns/op | 62.3 ns/op | flat ✓ |
| fallthrough | 24.2 ns/op | 23.3 ns/op | flat ✓ |
| paused | 21.6 ns/op | 20.5 ns/op | flat ✓ |
| rule-multi | 124.0 ns/op | 123.4 ns/op | flat ✓ |
| target-match | 119.2 ns/op | 60.5 ns/op | (run variance — codepath unchanged) |

Full client.evaluate() path (offline, datafile provided) — every scenario faster thanks to the read() cache

| Scenario | Before | After | Δ |
| --- | --- | --- | --- |
| paused | 450.2 ns/op | 371.4 ns/op | −18% |
| fallthrough | 450.1 ns/op | 366.6 ns/op | −19% |
| rule-eq | 482.5 ns/op | 415.4 ns/op | −14% |
| split | 652.8 ns/op | 530.5 ns/op | −19% |
| regex | 568.4 ns/op | 441.9 ns/op | −22% |
| rollout | 485.4 ns/op | 420.6 ns/op | −13% |
| contains-all-of | 586.6 ns/op | 506.3 ns/op | −14% |

CPU profile delta

Captured with `node --cpu-prof --cpu-prof-interval=100`, ~10s of sampled CPU. Self-% is share of total elapsed time spent inside that frame (excluding descendants).

| Frame | Before self% | After self% |
| --- | --- | --- |
| `Controller.read()` | 8.1% | 3.6% |
| `handleOutcome` | 7.3% | 5.1% |
| `new RegExp` (compile) | in hot path | gone |
| `matchConditions` + every-callback | 16.9% | 17.0% |
| `xxHash32` + `TextEncoder.encode` | ~7% | ~7% |

`Controller.read()` self-time more than halved. `handleOutcome` shed the per-call `sum` + `weights.map` allocations. `new RegExp` no longer appears in the top frames at all. The remaining big costs (`matchConditions` dispatch, xxHash32 + UTF-8 encoding inside it) are intrinsic to the work and out of scope for this PR.

Why this is safe

  • Symbol-keyed caches don't leak: symbols are not enumerated by `JSON.stringify`, `for..in`, `Object.keys`, or structured cloning. Existing tests that compare result shape don't see them.
  • Datafiles are not shared between clients and are never mutated after install (each `tagData()` call mutates a freshly-created object once, in-place). So attaching internal caches to outcome / rhs objects can't surprise other consumers.
  • `Controller.read()` cache holds only the immutable destructure result, not anything mutable; per-call work still allocates fresh outer + `metrics` objects, so the contract that callers receive a usable object they can hold isn't broken. Cache invalidates implicitly because `resolveData()` returns a different `TaggedData` reference whenever stream/poll replaces `this.data`.
  • No public API change: `Datafile` shape, `FlagsClient` methods, types — all unchanged.
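The first bullet is easy to verify directly. A small demo (`CACHE` and the attached value are hypothetical names, not the actual source):

```javascript
// Symbol-keyed properties are invisible to the enumeration paths listed
// above, so an attached cache never leaks into serialized results.
const CACHE = Symbol('cache');
const outcome = { weights: [50, 50] };
outcome[CACHE] = [2147483647.5, 4294967295];

console.log(JSON.stringify(outcome)); // {"weights":[50,50]}
console.log(Object.keys(outcome)); // [ 'weights' ]
console.log(CACHE in outcome); // true, but only reachable via the symbol itself
```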

Test plan

  • `pnpm test` — all 426 tests pass unchanged (no test updates needed; existing assertions exercise the cached paths too)
  • `pnpm type-check`
  • `pnpm build`
  • Bench + CPU profile re-run after the change to confirm wins (numbers above)
bench/bench.mjs
// Microbenchmark for @vercel/flags-core
//
// Runs each scenario through the pure `evaluate()` function, then through the
// full `createClient(...).evaluate()` path (offline, with a provided datafile).
//
// Usage:
//   node bench/bench.mjs                            # full bench
//   node bench/bench.mjs --case=split --iters=2e6  # single case, more iters
//   node --cpu-prof --cpu-prof-dir=./bench/profiles bench/bench.mjs --case=all --iters=2e6

import { createClient, evaluate } from '../dist/index.default.js';

// Comparator enum values are not exported at runtime; mirror the string
// values from src/types.ts so the bench is self-contained.
const Comparator = {
  EQ: 'eq',
  NOT_EQ: '!eq',
  ONE_OF: 'oneOf',
  NOT_ONE_OF: '!oneOf',
  CONTAINS_ALL_OF: 'containsAllOf',
  CONTAINS_ANY_OF: 'containsAnyOf',
  CONTAINS_NONE_OF: 'containsNoneOf',
  STARTS_WITH: 'startsWith',
  NOT_STARTS_WITH: '!startsWith',
  ENDS_WITH: 'endsWith',
  NOT_ENDS_WITH: '!endsWith',
  CONTAINS: 'contains',
  NOT_CONTAINS: '!contains',
  EXISTS: 'ex',
  NOT_EXISTS: '!ex',
  GT: 'gt',
  GTE: 'gte',
  LT: 'lt',
  LTE: 'lte',
  REGEX: 'regex',
  NOT_REGEX: '!regex',
  BEFORE: 'before',
  AFTER: 'after',
};

// ---------------------------------------------------------------------------
// CLI args
// ---------------------------------------------------------------------------

const args = Object.fromEntries(
  process.argv.slice(2).map((s) => {
    const [k, v = 'true'] = s.replace(/^--/, '').split('=');
    return [k, v];
  }),
);
const ONLY = args.case && args.case !== 'all' ? String(args.case) : null;
const ITERS = Number(args.iters ?? 1_000_000);
const WARMUP = Number(args.warmup ?? 50_000);
const RUNS = Number(args.runs ?? 5);
const SKIP_CLIENT = args['skip-client'] === 'true';

// ---------------------------------------------------------------------------
// Scenarios — each builds a Packed.FlagDefinition + segments + entities.
// All variants are [false, true] unless noted otherwise.
// ---------------------------------------------------------------------------

const SEED = 0xdeadbeef;

/** @type {Record<string, { params: any, segments?: any, expected?: any }>} */
const scenarios = {
  // 1. Paused: environments.production = 1 (variant index). Hits the number
  //    short-circuit in evaluate().
  paused: {
    params: {
      definition: {
        variants: [false, true],
        environments: { production: 1 },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-1' } },
    },
  },

  // 2. Fallthrough: no rules, no targets — returns fallthrough variant.
  fallthrough: {
    params: {
      definition: {
        variants: [false, true],
        environments: { production: { fallthrough: 1 } },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-1' } },
    },
  },

  // 3. Target match: user.id is in targets[1].
  'target-match': {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            targets: [
              {}, // variant 0 — no targets
              { user: { id: ['user-1', 'user-2', 'user-3'] } }, // variant 1
            ],
            fallthrough: 0,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-2' } },
    },
  },

  // 4. Target miss → falls through. Same shape as target-match, no matching id.
  'target-miss': {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            targets: [{}, { user: { id: ['x1', 'x2', 'x3'] } }],
            fallthrough: 1,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-2' } },
    },
  },

  // 5. Rule match: single EQ condition matches.
  'rule-eq': {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            rules: [
              {
                conditions: [[['user', 'country'], Comparator.EQ, 'US']],
                outcome: 1,
              },
            ],
            fallthrough: 0,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { country: 'US', id: 'user-1' } },
    },
  },

  // 6. Rule match: multi-condition rule, all matching.
  'rule-multi': {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            rules: [
              {
                conditions: [
                  [['user', 'country'], Comparator.EQ, 'US'],
                  [['user', 'plan'], Comparator.ONE_OF, ['pro', 'enterprise']],
                  [['user', 'age'], Comparator.GTE, 18],
                ],
                outcome: 1,
              },
            ],
            fallthrough: 0,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: {
        user: { country: 'US', plan: 'pro', age: 32, id: 'user-1' },
      },
    },
  },

  // 7. Segment match: rule references a segment that splits at 50%.
  'rule-segment': {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            rules: [
              {
                conditions: [['segment', Comparator.EQ, 'seg-1']],
                outcome: 1,
              },
            ],
            fallthrough: 0,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-2' } },
      segments: {
        'seg-1': {
          rules: [
            {
              conditions: [[['user', 'country'], Comparator.EQ, 'US']],
              outcome: {
                type: 'split',
                base: ['user', 'id'],
                passPromille: 50_000,
              },
            },
          ],
        },
      },
    },
  },

  // 8. Split outcome on fallthrough — xxHash32 per evaluation.
  split: {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            fallthrough: {
              type: 'split',
              base: ['user', 'id'],
              weights: [50, 50],
              defaultVariant: 0,
            },
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-1' } },
    },
  },

  // 9. Rollout: slot walk + hashing.
  rollout: {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            fallthrough: {
              type: 'rollout',
              base: ['user', 'id'],
              startTimestamp: Date.now() - 60_000,
              rollFromVariant: 0,
              rollToVariant: 1,
              defaultVariant: 0,
              slots: [
                [10_000, 30_000],
                [50_000, 30_000],
                [100_000, 30_000],
              ],
            },
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { id: 'user-1' } },
    },
  },

  // 10. Regex condition — `new RegExp(...)` per call.
  regex: {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            rules: [
              {
                conditions: [
                  [
                    ['user', 'email'],
                    Comparator.REGEX,
                    { type: 'regex', pattern: '^.+@example\\.com$', flags: '' },
                  ],
                ],
                outcome: 1,
              },
            ],
            fallthrough: 0,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: { user: { email: 'alice@example.com', id: 'user-1' } },
    },
  },

  // 11. containsAllOf: array membership check with set-fast-path.
  'contains-all-of': {
    params: {
      definition: {
        variants: [false, true],
        environments: {
          production: {
            rules: [
              {
                conditions: [
                  [
                    ['user', 'roles'],
                    Comparator.CONTAINS_ALL_OF,
                    ['admin', 'billing'],
                  ],
                ],
                outcome: 1,
              },
            ],
            fallthrough: 0,
          },
        },
        seed: SEED,
      },
      environment: 'production',
      entities: {
        user: { roles: ['admin', 'billing', 'support'], id: 'user-1' },
      },
    },
  },
};

// ---------------------------------------------------------------------------
// Bench runner — process.hrtime.bigint() for ns precision, median over RUNS
// ---------------------------------------------------------------------------

function formatNs(ns) {
  if (ns < 1_000) return `${ns.toFixed(2)} ns`;
  if (ns < 1_000_000) return `${(ns / 1_000).toFixed(2)} µs`;
  return `${(ns / 1_000_000).toFixed(2)} ms`;
}

function formatOps(opsPerSec) {
  if (opsPerSec >= 1_000_000)
    return `${(opsPerSec / 1_000_000).toFixed(2)}M ops/s`;
  if (opsPerSec >= 1_000) return `${(opsPerSec / 1_000).toFixed(2)}k ops/s`;
  return `${opsPerSec.toFixed(0)} ops/s`;
}

function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const m = Math.floor(s.length / 2);
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

function p(label, ns) {
  const opsPerSec = 1_000_000_000 / ns;
  console.log(
    `  ${label.padEnd(38)} ${formatNs(ns).padStart(10)}/op   ${formatOps(opsPerSec).padStart(14)}`,
  );
}

function benchSync(label, iters, runFn) {
  // Warmup
  for (let i = 0; i < WARMUP; i++) runFn(i);

  const runs = [];
  for (let r = 0; r < RUNS; r++) {
    const start = process.hrtime.bigint();
    for (let i = 0; i < iters; i++) runFn(i);
    const elapsed = Number(process.hrtime.bigint() - start);
    runs.push(elapsed / iters);
  }
  const med = median(runs);
  p(label, med);
  return med;
}

async function benchAsync(label, iters, runFn) {
  for (let i = 0; i < WARMUP; i++) await runFn(i);

  const runs = [];
  for (let r = 0; r < RUNS; r++) {
    const start = process.hrtime.bigint();
    for (let i = 0; i < iters; i++) await runFn(i);
    const elapsed = Number(process.hrtime.bigint() - start);
    runs.push(elapsed / iters);
  }
  const med = median(runs);
  p(label, med);
  return med;
}

// ---------------------------------------------------------------------------
// Pure evaluator bench
// ---------------------------------------------------------------------------

async function benchPureEvaluator() {
  console.log(
    `\n== pure evaluate() — ${ITERS.toLocaleString()} iters/run × ${RUNS} runs (warmup ${WARMUP.toLocaleString()}) ==`,
  );

  const entries = Object.entries(scenarios).filter(
    ([name]) => !ONLY || name === ONLY,
  );
  for (const [name, { params }] of entries) {
    benchSync(name, ITERS, () => evaluate(params));
  }
}

// ---------------------------------------------------------------------------
// client.evaluate() bench — exercises controller-fns + report-value + tracker
// ---------------------------------------------------------------------------

async function benchClientEvaluate() {
  console.log(
    `\n== client.evaluate() (offline, datafile provided) — ${Math.min(ITERS, 200_000).toLocaleString()} iters/run × ${RUNS} runs ==`,
  );

  // One client per scenario keeps the controller cache hot for that flag key.
  const sdkKey = 'vf_server_bench_only';
  const entries = Object.entries(scenarios).filter(
    ([name]) => !ONLY || name === ONLY,
  );

  // Build one datafile that contains all bench flags.
  const definitions = {};
  let segments = {};
  for (const [name, { params }] of entries) {
    definitions[name] = params.definition;
    if (params.segments) segments = { ...segments, ...params.segments };
  }
  const datafile = {
    definitions,
    segments,
    environment: 'production',
    projectId: 'prj_bench',
    revision: 1,
    configUpdatedAt: Date.now(),
  };

  const client = createClient(sdkKey, {
    datafile,
    stream: false,
    polling: false,
  });
  await client.initialize();

  // Cap client iters lower — each call goes through promise resolution +
  // report-value lookup, so they are inherently much slower than evaluate().
  const clientIters = Math.min(ITERS, 200_000);

  for (const [name, { params }] of entries) {
    await benchAsync(name, clientIters, () =>
      client.evaluate(name, undefined, params.entities),
    );
  }

  await client.shutdown();
}

// ---------------------------------------------------------------------------
// Sanity check — print first-result per scenario so we can see they evaluate
// as expected before the timing numbers.
// ---------------------------------------------------------------------------

function sanity() {
  console.log('\n== sanity check (first evaluation per scenario) ==');
  for (const [name, { params }] of Object.entries(scenarios)) {
    const r = evaluate(params);
    console.log(
      `  ${name.padEnd(22)} → value=${JSON.stringify(r.value)}  reason=${r.reason}${r.outcomeType ? `  outcomeType=${r.outcomeType}` : ''}`,
    );
  }
}

// ---------------------------------------------------------------------------
// Main
// ---------------------------------------------------------------------------

(async () => {
  console.log(`node ${process.version} · ${process.platform}/${process.arch}`);
  sanity();
  await benchPureEvaluator();
  if (!SKIP_CLIENT) await benchClientEvaluate();
})().catch((err) => {
  console.error(err);
  process.exit(1);
});
bench/summarize-profile.mjs
// Summarizes a V8 .cpuprofile produced by `node --cpu-prof`.
//
// Each sample in the profile is one stack-snapshot taken roughly every
// `cpu-prof-interval` microseconds. `samples[i]` is the node id at the top
// of the stack at sample i, and `timeDeltas[i]` is the time (µs) between
// sample i-1 and i. So a node's self-time = sum of timeDeltas for samples
// pointing at it.
//
// We also compute total-time (self + descendants) by walking the call tree.
//
// Usage: node bench/summarize-profile.mjs <path>

import fs from 'node:fs';
import path from 'node:path';

const inputPath = process.argv[2];
if (!inputPath) {
  console.error('Usage: node summarize-profile.mjs <path-to-.cpuprofile>');
  process.exit(1);
}

const profile = JSON.parse(fs.readFileSync(inputPath, 'utf8'));
const { nodes, samples, timeDeltas } = profile;

const byId = new Map(nodes.map((n) => [n.id, n]));


// Self-time per node (in µs).
const selfUs = new Map();
for (let i = 0; i < samples.length; i++) {
  const id = samples[i];
  const dt = timeDeltas[i] ?? 0;
  selfUs.set(id, (selfUs.get(id) ?? 0) + dt);
}

// Total-time per node = self + sum of children's totals, computed by a
// memoized recursive walk down the call tree.
const totalUs = new Map();
function totalFor(id) {
  if (totalUs.has(id)) return totalUs.get(id);
  let t = selfUs.get(id) ?? 0;
  for (const childId of byId.get(id)?.children ?? []) t += totalFor(childId);
  totalUs.set(id, t);
  return t;
}
for (const n of nodes) totalFor(n.id);

const totalSamples = samples.length;
const totalElapsedUs = timeDeltas.reduce((a, b) => a + b, 0);

function fmtUs(us) {
  if (us < 1_000) return `${us.toFixed(0)} µs`;
  if (us < 1_000_000) return `${(us / 1_000).toFixed(2)} ms`;
  return `${(us / 1_000_000).toFixed(2)} s`;
}

function pct(x) {
  return `${((x / totalElapsedUs) * 100).toFixed(1)}%`;
}

function label(node) {
  const cf = node.callFrame;
  const name = cf.functionName || '(anonymous)';
  const url = cf.url || '';
  // Trim absolute paths to package-relative
  const short = url
    .replace(/^file:\/\//, '')
    .replace(/^.*\/vercel-flags-core\//, '')
    .replace(/^.*\/node_modules\//, 'node_modules/')
    .replace(/^node:/, 'node:');
  const loc = short
    ? `${short}:${cf.lineNumber + 1}:${cf.columnNumber + 1}`
    : '(builtin)';
  return { name, loc };
}

// Filter out (root)/idle/program/gc pseudo-frames for the "user code" view.
const PSEUDO = new Set([
  '(root)',
  '(idle)',
  '(program)',
  '(garbage collector)',
]);

console.log(`profile:        ${path.basename(inputPath)}`);
console.log(`samples:        ${totalSamples.toLocaleString()}`);
console.log(`elapsed:        ${fmtUs(totalElapsedUs)}`);
console.log();

// ---------- Top self-time frames (excluding pseudo nodes) ----------
console.log('== TOP 25 by SELF time ==');
console.log(
  `${'self'.padStart(11)}  ${'%'.padStart(6)}  ${'total'.padStart(11)}  function · location`,
);
const sortedSelf = [...nodes]
  .filter((n) => !PSEUDO.has(n.callFrame.functionName))
  .filter((n) => (selfUs.get(n.id) ?? 0) > 0)
  .sort((a, b) => (selfUs.get(b.id) ?? 0) - (selfUs.get(a.id) ?? 0))
  .slice(0, 25);
for (const n of sortedSelf) {
  const s = selfUs.get(n.id) ?? 0;
  const t = totalUs.get(n.id) ?? 0;
  const { name, loc } = label(n);
  console.log(
    `${fmtUs(s).padStart(11)}  ${pct(s).padStart(6)}  ${fmtUs(t).padStart(11)}  ${name} · ${loc}`,
  );
}

// ---------- Top total-time frames (rolled up) ----------
console.log('\n== TOP 25 by TOTAL time (self + descendants) ==');
console.log(
  `${'total'.padStart(11)}  ${'%'.padStart(6)}  ${'self'.padStart(11)}  function · location`,
);
const sortedTotal = [...nodes]
  .filter((n) => !PSEUDO.has(n.callFrame.functionName))
  .filter((n) => (totalUs.get(n.id) ?? 0) > 0)
  .sort((a, b) => (totalUs.get(b.id) ?? 0) - (totalUs.get(a.id) ?? 0))
  .slice(0, 25);
for (const n of sortedTotal) {
  const s = selfUs.get(n.id) ?? 0;
  const t = totalUs.get(n.id) ?? 0;
  const { name, loc } = label(n);
  console.log(
    `${fmtUs(t).padStart(11)}  ${pct(t).padStart(6)}  ${fmtUs(s).padStart(11)}  ${name} · ${loc}`,
  );
}

// ---------- Bucket by callFrame "package" (rough source-file grouping) ----------
console.log('\n== SELF time bucketed by source file ==');
const byFile = new Map();
for (const n of nodes) {
  if (PSEUDO.has(n.callFrame.functionName)) continue;
  const s = selfUs.get(n.id) ?? 0;
  if (s === 0) continue;
  const url = n.callFrame.url || '(builtin)';
  const key = url
    .replace(/^file:\/\//, '')
    .replace(/^.*\/vercel-flags-core\//, '')
    .replace(/^.*\/node_modules\//, 'node_modules/')
    .replace(/^node:/, 'node:');
  byFile.set(key, (byFile.get(key) ?? 0) + s);
}
const sortedFiles = [...byFile.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 20);
for (const [file, us] of sortedFiles) {
  console.log(`${fmtUs(us).padStart(11)}  ${pct(us).padStart(6)}  ${file}`);
}

// ---------- Just the evaluator hot path ----------
console.log('\n== SELF time within evaluate.js (dist) ==');
const evalNodes = nodes
  .filter((n) => /evaluate\.|chunk-/.test(n.callFrame.url || ''))
  .filter((n) => (selfUs.get(n.id) ?? 0) > 0)
  .sort((a, b) => (selfUs.get(b.id) ?? 0) - (selfUs.get(a.id) ?? 0))
  .slice(0, 15);
for (const n of evalNodes) {
  const s = selfUs.get(n.id) ?? 0;
  const { name, loc } = label(n);
  console.log(
    `${fmtUs(s).padStart(11)}  ${pct(s).padStart(6)}  ${name} · ${loc}`,
  );
}

vercel bot (Contributor) commented May 14, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Updated (UTC) |
| --- | --- | --- |
| flags-playground | Ready | May 14, 2026 3:27pm |
| flags-sdk-dev | Ready | May 14, 2026 3:27pm |
| flags-sdk-next-15 | Ready | May 14, 2026 3:27pm |
| flags-sdk-next-16 | Ready | May 14, 2026 3:27pm |
| flags-sdk-snippets | Ready | May 14, 2026 3:27pm |
| flags-sdk-sveltekit-snippets | Ready | May 14, 2026 3:27pm |
| shirt-shop | Ready | May 14, 2026 3:27pm |
| shirt-shop-api | Ready | May 14, 2026 3:27pm |
