Non-record: 12L Int5-MLP + Int6-Attn mixed quantization, val_bpb=1.1541 #219

alertcat wants to merge 5 commits into openai:main
Innovation over PR openai#198 (SOTA 1.1318):

- 12 transformer layers (was 11): +2.2M params, better representation
- Int5 quantization for MLP weights, range [-16, 15]: the 3 high bits of every stored byte are zero
- zstd compression: 1.88x vs 1.51x for int6, saving ~1.8MB
- The savings fund the 12th layer within the 16MB budget
- Int6 kept for attention weights (precision-sensitive)
- FA3 with a fallback for older PyTorch (see the sketch after this list)
- LR=0.025 (validated as optimal in A/B testing)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
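For the FA3 fallback, a minimal sketch of the usual pattern is below: use the FlashAttention-3 kernel when its package imports, and otherwise fall back to PyTorch's built-in SDPA. This is an assumption about how the PR wires it up, not the PR's code; `flash_attn_interface` is the FA3 package name, and the layout comments reflect the conventions of those two APIs.

```python
import torch
import torch.nn.functional as F

try:
    # FlashAttention-3 (Hopper) ships as the flash_attn_interface package.
    from flash_attn_interface import flash_attn_func
    HAS_FA3 = True
except ImportError:
    HAS_FA3 = False

def attention(q, k, v, causal=True):
    """q, k, v: (batch, seqlen, nheads, headdim)."""
    if HAS_FA3:
        out = flash_attn_func(q, k, v, causal=causal)
        # Some FA3 betas return (out, lse); keep only the output tensor.
        return out[0] if isinstance(out, tuple) else out
    # Fallback: SDPA expects (batch, nheads, seqlen, headdim).
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, is_causal=causal)
    return out.transpose(1, 2)
```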
Community Review

BPB: 1.1541 | Compliance: LOOKS CLEAN (pure-neural submission, no TTT/SLOT/n-gram-cache)

What I found in the code (at the head SHA): static review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.15s, dim=512, layers=9, vocab=1024, code=66224 B, SMOKE_TEST_PASS.

Verdict: LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8xH100 SXM). No compliance flags from the classification pass.

Auto-classification caveat: this review was drafted by the deterministic AST-based classifier. If there is a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it is factored into a helper file or hidden behind a non-standard function name, please flag it and I will re-run the audit manually.

Reviewed by @MatoTeziTanka (The Agora).
12L Int5-MLP + Int6-Attn + SmearGate + BigramHash + SWA
val_bpb: 1.1541 (sliding window stride=64, 3-seed mean) | 8xH100 SXM, 600s
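For reference, the stride-64 sliding-window metric works roughly as sketched below. This is a generic reconstruction of the standard pattern, not the PR's eval code; `model` is assumed to return logits of shape (1, seq_len, vocab), and start-of-stream boundary handling is omitted.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_bpb(model, tokens, seq_len=1024, stride=64):
    # Score each seq_len window, but count loss only on the trailing
    # `stride` positions, so every counted token sees near-full context.
    model.eval()
    total_nll, total_counted = 0.0, 0
    for start in range(0, tokens.numel() - seq_len - 1, stride):
        window = tokens[start : start + seq_len + 1]
        x = window[:-1].unsqueeze(0)          # (1, seq_len) inputs
        y = window[1:].clone()                # (seq_len,) targets
        y[:-stride] = -100                    # ignore all but last `stride`
        logits = model(x).squeeze(0)          # (seq_len, vocab) assumed
        total_nll += F.cross_entropy(
            logits, y, ignore_index=-100, reduction="sum"
        ).item()
        total_counted += int((y != -100).sum())
    # nats -> bits. Assumes one byte per token; for a larger vocab,
    # normalize by total bytes in the counted span instead.
    return total_nll / total_counted / math.log(2)
```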
Key Innovation: Mixed Int5/Int6 Quantization + 12 Layers
Instead of uniform int6 for all weights, we use precision-tiered quantization:

- Int5 for MLP weights (range [-16, 15]): MLP weights tolerate the lost bit, and the three always-zero high bits compress far better under zstd (1.88x vs 1.51x for int6).
- Int6 for attention weights: these are precision-sensitive, so they keep the extra bit.

The ~1.8MB saved by int5 MLP quantization funds a 12th transformer layer, the deepest model submitted to date; a sketch of the scheme follows.
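This is a minimal reconstruction of the int5 quantize/pack/compress path, assuming symmetric per-tensor scaling (the PR may use per-channel scales or a different rounding scheme); zstd is shown via the standard `zstandard` package.

```python
import torch
import zstandard as zstd

def quantize_int5(w: torch.Tensor):
    """Symmetric quantization to int5 values in [-16, 15], one value per
    byte. A sketch; not necessarily the PR's exact scheme."""
    scale = w.abs().max() / 15.0
    q = torch.clamp(torch.round(w / scale), -16, 15).to(torch.int8)
    return q, scale

def dequantize_int5(q: torch.Tensor, scale: float) -> torch.Tensor:
    return q.float() * scale

def compress_int5(q: torch.Tensor) -> bytes:
    # Offset [-16, 15] to [0, 31] so the top 3 bits of every byte are
    # zero; this regularity is what lets zstd compress int5 payloads
    # much better than int6 ones.
    raw = (q.int() + 16).to(torch.uint8).numpy().tobytes()
    return zstd.ZstdCompressor(level=19).compress(raw)
```

Per the PR's own measurements, this style of packing compresses about 1.88x under zstd, versus about 1.51x for int6.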
Results (3-seed, 8xH100 SXM)
Inter-seed range: 0.00035 (highly stable)
Architecture
Why Non-Record
This explores a novel compression direction (int5 MLP quantization) that trades quantization precision for model depth. The 12th layer adds representation capacity, but each step is slower (107ms vs 81ms for the 11L baseline), so fewer training steps fit in the 600s budget (~5,590 vs ~7,412; the quick check below shows the arithmetic). The depth-vs-speed tradeoff does not beat the current SOTA (1.1318), but it demonstrates int5 as a viable compression strategy for fitting deeper models within the 16MB budget.
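The step budgets follow approximately from the per-step times; the small deviations from the reported counts are presumably due to rounding of the quoted per-step times plus warmup/eval overhead.

```python
budget_ms = 600 * 1000
for layers, step_ms in ((11, 81), (12, 107)):
    print(f"{layers}L: ~{budget_ms // step_ms} steps")
# 11L: ~7407 steps  (reported ~7,412)
# 12L: ~5607 steps  (reported ~5,590)
```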
Reproduction
Built on techniques from PR #198, PR #162, PR #180 (SmearGate, BigramHash, OrthoInit, int6 quantization, sliding window eval).