Record: Int6 STE + SmearGate + Seq2048 + OrthoInit + RoPE50K + SWA/100 (mean val_bpb=1.1507) #206
dexhunter wants to merge 2 commits into openai:main from
Conversation
…SWA/100 (val_bpb=1.1507)

3-seed verified mean val_bpb=1.1507 (sliding window, stride=64). Seeds: 1337=1.1502, 42=1.1509, 7=1.1510. All artifacts under 16MB.

Technique stack evolved over 31 AIDE2 optimization steps:
- Int6 STE quantization-aware training (near-zero quantization penalty)
- NorMuon optimizer with decoupled weight decay (0.02)
- 3x MLP width (1536 hidden)
- SmearGate: learned embedding-level context blending
- Orthogonal initialization for all linear layers
- Sequence length 2048 with RoPE base 50K
- SWA every 100 steps during warmdown
- FP16 tied embedding passthrough
- Sliding-window eval (stride=64)
- Zstd-22 compression
- U-Net skip connections
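The Int6 STE item above can be sketched as a straight-through estimator: weights are fake-quantized to 6-bit integers in the forward pass while gradients flow through unchanged. This is a minimal illustration, not the PR's actual `train_gpt.py` code — the class names and the symmetric per-tensor scaling are assumptions.

```python
import torch

class Int6STE(torch.autograd.Function):
    """Fake-quantize to int6 in forward; identity gradient in backward."""

    @staticmethod
    def forward(ctx, w):
        # Symmetric per-tensor scale: the max magnitude maps to 31
        # (signed int6 range is -32..31).
        scale = w.abs().max().clamp(min=1e-8) / 31.0
        return torch.clamp(torch.round(w / scale), -32, 31) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out  # straight-through: pass the gradient as-is

class QuantLinear(torch.nn.Linear):
    """Linear layer that trains against its quantized weights."""

    def forward(self, x):
        return torch.nn.functional.linear(x, Int6STE.apply(self.weight), self.bias)
```

Because the forward pass already sees the quantized weights, exporting the trained model at int6 incurs the "near-zero quant penalty" the commit message claims.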
The leaderboard expects `val_bpb`, `val_loss`, `bytes_total`, and `bytes_code` at the top level. Our submission used `mean_val_bpb`, `artifact_bytes`, etc.
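A fix is a straight key remap before upload. The `results` dict below is a hypothetical stand-in for the submission JSON (only `mean_val_bpb` and `artifact_bytes` are named in this thread; the other keys and all values except 1.1507 are placeholders for illustration):

```python
# Hypothetical submission metrics with our internal key names.
results = {
    "mean_val_bpb": 1.1507,
    "mean_val_loss": 0.7976,       # placeholder value
    "artifact_bytes": 15_900_000,  # placeholder value
    "code_bytes": 55_785,
}

# Remap onto the top-level schema the leaderboard parser expects.
leaderboard_entry = {
    "val_bpb": results["mean_val_bpb"],
    "val_loss": results["mean_val_loss"],
    "bytes_total": results["artifact_bytes"],
    "bytes_code": results["code_bytes"],
}
```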
Downloaded train_gpt.py and README from the top open PRs on openai/parameter-golf:
- PR openai#198 (1.1318): 11L Int6 + WD + SWA + FA3 + SmearGate + BigramHash
- PR openai#194 (1.1480): 11L Int6 QAT + SmearGate + SWA
- PR openai#206 (1.1507): 9L Int6 STE + SmearGate + OrthoInit + U-Net skips

Updated program.md to point the agent at PR openai#198 as the new starting base, with a detailed technique breakdown and a strategy to beat 1.1318.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Community Review — Record: Int6 STE + SmearGate + Seq2048 + OrthoInit + RoPE50K + SWA/100 (mean val_bpb=1.1507)

BPB: 1.1507 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA): static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.23s, dim=512, layers=9, vocab=1024, code=55785 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this is a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it is factored into a helper file or hidden behind a non-standard function name, please flag it and I will re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
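The "standard sliding-window stride-64 pattern" the review refers to can be sketched as follows. This is an illustrative reconstruction, not the submission's eval code: the function name, scoring of only the last `stride` positions per window, and the uniform-context assumption are all mine. Note it returns bits per *token*; true bits-per-byte would divide total bits by the raw byte count of the validation text.

```python
import math
import torch

@torch.no_grad()
def sliding_window_bpb(model, tokens, seq_len=2048, stride=64):
    """Overlapping-window eval: each window advances by `stride`, and only
    the final `stride` predictions are scored, so every scored token sees
    near-maximal left context."""
    nll, counted = 0.0, 0
    for start in range(0, tokens.numel() - seq_len, stride):
        window = tokens[start:start + seq_len].unsqueeze(0)  # (1, seq_len)
        logp = model(window).log_softmax(dim=-1)             # (1, seq_len, vocab)
        targets = window[0, -stride:]                        # last `stride` tokens
        preds = logp[0, -stride - 1:-1]                      # position t predicts t+1
        nll -= preds.gather(-1, targets.unsqueeze(-1)).sum().item()
        counted += stride
    return nll / counted / math.log(2)  # nats per token -> bits per token
```

A model emitting uniform logits over a vocabulary of size V should score exactly log2(V) bits per token under this loop, which is a useful sanity check.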
Summary
Mean val_bpb = 1.1507 (3-seed verified, p<0.001), beating merged SOTA (1.1748) by 0.024.
Evolved over 31 AIDE2 optimization steps from a 1.1607 baseline on 8×H100.
Technique Stack
Architecture
Submission checklist