# YaiTimeout Example

YaiTimeout is the first canonical example target for `prr`.

It is a small vanilla JavaScript utility that replaces per-task `setTimeout` loops with a batched scheduling approach intended to avoid event-queue saturation during large bursts of timed work.
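To make the batching idea concrete, here is a minimal sketch of that approach. The names (`bucketByDueTime`, `scheduleBatched`, the quantum size) are illustrative assumptions, not the actual YaiTimeout API: tasks are bucketed by quantized due time so a burst shares a handful of timers instead of queuing one `setTimeout` per task.

```javascript
// Sketch only: bucket tasks whose due times fall in the same quantum,
// so each bucket needs just one timer. Not the real yai-timeout.js code.
function bucketByDueTime(tasks, now, quantumMs) {
  const buckets = new Map();
  for (const { callback, delayMs } of tasks) {
    // Round the due time down to a quantum boundary; tasks that fire
    // "close enough" together land in the same bucket.
    const due = now + delayMs;
    const slot = Math.floor(due / quantumMs) * quantumMs;
    if (!buckets.has(slot)) buckets.set(slot, []);
    buckets.get(slot).push(callback);
  }
  return buckets;
}

function scheduleBatched(tasks, quantumMs = 10) {
  const now = Date.now();
  for (const [slot, callbacks] of bucketByDueTime(tasks, now, quantumMs)) {
    // One timer drains the whole bucket, trading up to quantumMs of
    // precision for far fewer entries in the timer queue.
    setTimeout(() => {
      for (const cb of callbacks) cb();
    }, Math.max(0, slot - now));
  }
}
```

The trade-off is visible in the quantum: a larger quantum means fewer timers but more drift per task, which is exactly the precision-for-throughput trade noted under "Known limits" below.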

## Why It Is In This Repo

This example is useful for evaluating `prr` because it is:

- small enough for fast experiments
- stateful enough to trigger real review questions
- performance-sensitive enough to expose weak or noisy review behavior
- representative of the kind of focused utility this project should review well with free/default models

## Files

- `examples/yai-timeout/yai-timeout.js`: the implementation under review

## Suggested Review Flow

Inspect the planned review first:

```shell
prr review examples/yai-timeout/yai-timeout.js --dry-run
```

Run the actual review:

```shell
prr review examples/yai-timeout/yai-timeout.js
```

If you have extra context that is not obvious from the code, attach a short markdown note:

```shell
prr review examples/yai-timeout/yai-timeout.js \
  --context-file ./review-context.md
```

Refresh that note whenever the code or review goal changes materially.
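For reference, such a note can be just a few lines. This shape is purely illustrative — the content and layout are assumptions, not a format `prr` requires:

```markdown
# Review context: yai-timeout

- Goal: pre-merge defect review; prioritize cancellation, cleanup, and memory growth
- Out of scope: style preferences and broad architecture advice
- Known constraint: batching intentionally trades some timer precision for throughput
```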

You can also compare reviewer strategies:

```shell
prr experiment run examples/yai-timeout/yai-timeout.js --preset smoke
```

## What To Pay Attention To

Good reviews for this example should focus on:

- cancellation and cleanup behavior
- timer accuracy and drift assumptions
- memory growth and retained references
- callback error handling
- edge cases around empty input, repeated execution, and abort timing
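The cancellation and cleanup point deserves a concrete shape. The sketch below is a hypothetical example of the defect class a good review should surface here — not the actual yai-timeout.js code: a cancel path that clears the timer but, if the marked `reject` line were missing, would leave the returned promise pending forever.

```javascript
// Hypothetical example of the "cancellation leaves a promise unsettled"
// defect class. All names here are illustrative.
function runOnce(fn, delayMs) {
  let cancel;
  const done = new Promise((resolve, reject) => {
    const timerId = setTimeout(() => resolve(fn()), delayMs);
    cancel = () => {
      clearTimeout(timerId);
      // Without this reject, callers awaiting `done` would hang forever
      // after cancellation -- the timer is gone and nothing settles it.
      reject(new Error("cancelled"));
    };
  });
  return { done, cancel };
}
```

A review that flags the missing-settlement variant of this pattern, rather than commenting on naming or layout, is the kind of behavior this example is meant to reward.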

Weak reviews usually drift into:

- generic style preferences
- broad architecture advice without concrete defects
- speculative claims about performance without evidence from the code

## First Benchmark Notes

The first successful OpenAI-backed benchmark on this file validated the workflow:

- `--context-file` produced a better review than the plain run
- the briefing caught an O(n^2) metrics update path that the plain run missed
- the briefing also reduced wasted exploration enough to use fewer tokens overall
- later rounds worked best when the context note was refreshed after each material code change

The highest-confidence issues from that run were:

- cancellation that can leave `execute()` unsettled
- missing abort/cancel guards inside element callbacks
- unbounded timing log retention
- redundant or misleading timing metrics
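For the unbounded-retention finding, the usual remedy is a fixed-capacity log that evicts the oldest sample instead of growing forever. The sketch below is illustrative (the class and field names are assumptions, not the actual yai-timeout.js internals):

```javascript
// Sketch of a bounded timing log: capacity is fixed, the oldest sample
// is evicted on overflow. Names are illustrative, not from yai-timeout.js.
class TimingLog {
  constructor(capacity = 256) {
    this.capacity = capacity;
    this.samples = [];
  }
  record(sample) {
    this.samples.push(sample);
    // Evict the oldest entry once capacity is exceeded, so long-running
    // sessions cannot grow memory without bound.
    if (this.samples.length > this.capacity) this.samples.shift();
  }
}
```

A ring buffer with a write index would avoid the O(n) `shift()` per eviction; for a small capacity like this, the simpler array form is usually fine.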

## Current Status

This example is considered done for the current `prr` proof-of-concept cycle.

Closure criteria met:

- no unresolved high-severity findings from recent rounds
- repeated review rounds converged on duplicate or low-impact suggestions
- behavior is stable enough as a practical `setTimeout` replacement for focused utility workloads

Known limits remain intentional:

- this is not a hard real-time scheduler
- very large bursts can still trade precision for throughput
- callback-heavy usage still depends on callback cost and browser/runtime constraints