
[WIP] wallclock-aware context curriculum #203

Draft

LexHarie wants to merge 1 commit into openai:main from LexHarie:staging/wallclock-aware-context-curriculum


Conversation


@LexHarie commented Mar 20, 2026

Summary

  • test a wallclock-aware context-length curriculum: train at 1024 early, then switch to 2048 later, while keeping total batch tokens fixed
  • keep the rest of the stack relatively stable: mixed int5 MLP / int6 attention export, zstd-22, Muon, SWA, SmearGate, BigramHash, OrthoInit, grad clip, sliding-window eval
  • remove speculative extras from the prior failed branch so the main bet is easy to isolate and ablate

Hypothesis

The previous negative result looked like early wallclock underfitting, not an export bottleneck. This trainer is meant to test whether cheaper early sequence lengths buy more useful optimizer steps under the same 600s cap, while still recovering some long-context training later in the run.

The idea is intentionally clean to iterate on:

  • SEQ_WARMUP_ENABLED=0 gives the no-curriculum baseline
  • SHORT_SEQ_LEN, FINAL_SEQ_LEN, and SEQ_WARMUP_FRAC control the curriculum directly
  • optional EARLY_ABORT_ENABLED=1 adds cost guardrails for off-pace runs
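A minimal sketch of how these knobs could drive the schedule. The function names, the 600s budget argument, and the 524288 per-step token budget are illustrative assumptions, not code from the PR:

```python
import os

# Knobs from the PR description; the defaults here are illustrative.
SEQ_WARMUP_ENABLED = int(os.environ.get("SEQ_WARMUP_ENABLED", "1"))
SHORT_SEQ_LEN = int(os.environ.get("SHORT_SEQ_LEN", "1024"))
FINAL_SEQ_LEN = int(os.environ.get("FINAL_SEQ_LEN", "2048"))
SEQ_WARMUP_FRAC = float(os.environ.get("SEQ_WARMUP_FRAC", "0.30"))

TOTAL_BATCH_TOKENS = 524288  # hypothetical fixed token budget per step


def curriculum_seq_len(elapsed_s: float, budget_s: float = 600.0) -> int:
    """Pick the sequence length for the current step from wallclock progress."""
    if not SEQ_WARMUP_ENABLED:
        return FINAL_SEQ_LEN
    # Train short while inside the warmup fraction of the wallclock budget,
    # then switch to the final sequence length for the rest of the run.
    if elapsed_s < SEQ_WARMUP_FRAC * budget_s:
        return SHORT_SEQ_LEN
    return FINAL_SEQ_LEN


def batch_size_for(seq_len: int) -> int:
    """Keep total batch tokens fixed: shorter sequences mean more rows per batch."""
    return TOTAL_BATCH_TOKENS // seq_len
```

With the defaults above, a run would train at 1024 for the first 180s and at 2048 afterwards, with the per-step token count unchanged across the switch.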

Current status

This is a staged WIP PR.

  • syntax verified with python3 -m py_compile
  • no official 8xH100 result is claimed yet
  • waiting on compute credits before running the main experiment and ablations

Ablation plan

| # | Experiment | Config |
|---|---|---|
| 1 | No-curriculum baseline | `SEQ_WARMUP_ENABLED=0` |
| 2 | Main bet | `SEQ_WARMUP_ENABLED=1 SHORT_SEQ_LEN=1024 FINAL_SEQ_LEN=2048 SEQ_WARMUP_FRAC=0.30` |
| 3 | Curriculum duration sweep | `SEQ_WARMUP_FRAC` in {0.20, 0.30, 0.40} |
| 4 | Early-seq sweep | `SHORT_SEQ_LEN` in {768, 1024, 1536} |
| 5 | Additional seeds, if the early curve is better | `SEED` in {1337, 42, 7} |
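The unconditional part of the grid above is small enough to enumerate programmatically. A hypothetical helper (the name and structure are my own, not from the PR) that yields one env-override dict per run:

```python
def ablation_grid():
    """Enumerate the planned ablations as environment-variable overrides.

    The seed-replication runs are excluded because they are conditional
    on the early curve looking better than baseline.
    """
    runs = [
        # 1: no-curriculum baseline
        {"SEQ_WARMUP_ENABLED": "0"},
        # 2: main bet
        {"SEQ_WARMUP_ENABLED": "1", "SHORT_SEQ_LEN": "1024",
         "FINAL_SEQ_LEN": "2048", "SEQ_WARMUP_FRAC": "0.30"},
    ]
    # 3: curriculum duration sweep
    for frac in ("0.20", "0.30", "0.40"):
        runs.append({"SEQ_WARMUP_ENABLED": "1", "SEQ_WARMUP_FRAC": frac})
    # 4: early-seq sweep
    for short in ("768", "1024", "1536"):
        runs.append({"SEQ_WARMUP_ENABLED": "1", "SHORT_SEQ_LEN": short})
    return runs
```

Each dict would be merged into the launch environment of one training run, so the whole plan stays a diff-free sweep over the same script.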

@LexHarie changed the title from Staging: wallclock-aware context curriculum to [WIP] wallclock-aware context curriculum on Mar 20, 2026
@MatoTeziTanka

Community Review — [WIP] wallclock-aware context curriculum

BPB: (not parsed — see PR title) | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA 408b63aea803, file records/track_10min_16mb/2026-03-20_wallclock_aware_context_curriculum/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.
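For readers unfamiliar with the pattern: "sliding-window stride-64" typically means each eval window scores only its trailing 64 tokens, so every token is scored exactly once with near-full left context. A rough index-only sketch (my own illustration of the pattern; the actual train_gpt.py eval loop may differ):

```python
def sliding_windows(n_tokens: int, window: int = 2048, stride: int = 64):
    """Yield (start, end, score_from) triples for sliding-window eval.

    The model sees tokens [start, end) but only the tokens in
    [score_from, end) contribute to the loss. The first window scores
    everything it covers; later windows score only their last `stride`
    tokens, so each token has up to `window - stride` tokens of context.
    """
    assert n_tokens > 0
    first_end = min(window, n_tokens)
    yield (0, first_end, 0)
    pos = first_end
    while pos < n_tokens:
        end = min(pos + stride, n_tokens)
        start = max(0, end - window)
        yield (start, end, pos)
        pos = end
```

Summing `end - score_from` over all windows recovers exactly `n_tokens`, which is the property the compliance check relies on: no validation token is scored twice or skipped.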

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.14s, dim=512, layers=10, vocab=1024, code=59286 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via the deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate on the classifier.

