
Non-record: 12L Int5-MLP + Int6-Attn mixed quantization, val_bpb=1.1541 #219

Open
alertcat wants to merge 5 commits into openai:main from alertcat:submission-12L-int5

Conversation

@alertcat

12L Int5-MLP + Int6-Attn + SmearGate + BigramHash + SWA

val_bpb: 1.1541 (sliding window stride=64, 3-seed mean) | 8xH100 SXM, 600s

Key Innovation: Mixed Int5/Int6 Quantization + 12 Layers

Instead of uniform int6 for all weights, we use precision-tiered quantization:

  • Int5 [-16, 15] for MLP weights: only 5 of each int8 container's 8 bits carry information, so zstd compresses them at ~1.88x (vs ~1.51x for int6)
  • Int6 [-32,31] for attention weights (precision-sensitive)
  • FP16 for tied embeddings

The ~1.8MB saved by int5 MLP funds a 12th transformer layer -- the deepest model submitted to date.
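The precision tiers above amount to symmetric per-tensor quantization into an int8 container. A minimal sketch (not the submission's actual code; function names and shapes are illustrative):

```python
import numpy as np

def quantize(w, bits):
    # Symmetric per-tensor quantization into an int8 container:
    # bits=5 maps to [-16, 15], bits=6 maps to [-32, 31].
    qmax = (1 << (bits - 1)) - 1
    scale = float(np.abs(w).max()) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Reconstruction error is at most scale / 2 per element.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((512, 1536)).astype(np.float32)  # an MLP weight
q5, s5 = quantize(w, 5)  # MLP tier:       values in [-16, 15]
q6, s6 = quantize(w, 6)  # attention tier: values in [-32, 31]
```

Because the int5 values only span 32 of the container's 256 possible byte values, a general-purpose compressor like zstd gets a better ratio on them than on int6 data.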

Results (3-seed, 8xH100 SXM)

| Seed | Steps | ms/step | Post-Q BPB | Sliding BPB (s64) |
|------|-------|---------|------------|-------------------|
| 1337 | 5,590 | 107.34  | 1.17668    | 1.15402           |
| 42   | 5,588 | 107.37  | 1.17647    | 1.15390           |
| 2024 | 5,589 | 107.35  | 1.17679    | 1.15425           |
| Mean |       |         | 1.17665    | 1.15406           |

Inter-seed range: 0.00035 (highly stable)
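The stride-64 sliding-window numbers above come from scoring overlapping windows and counting only each window's last `stride` tokens, so every scored token sees long left context. A minimal sketch of that scheme (assuming per-token NLL in nats and, for the bits-per-byte conversion, one token per byte; the real eval lives in train_gpt.py):

```python
import numpy as np

def sliding_window_bpb(nll_fn, tokens, window=1024, stride=64):
    # nll_fn(chunk) -> per-token negative log-likelihood (nats), shape (window,).
    # Only the final `stride` tokens of each window are scored, so each
    # counted token is conditioned on up to window - stride prior tokens.
    # (Simplification: the first window - stride tokens are never scored.)
    total_nll, count = 0.0, 0
    for start in range(0, len(tokens) - window + 1, stride):
        nll = nll_fn(tokens[start:start + window])
        total_nll += float(np.sum(nll[-stride:]))
        count += stride
    return total_nll / count / np.log(2)  # nats/token -> bits/token

dummy_nll = lambda chunk: np.ones(len(chunk))      # pretend model: 1 nat/token
bpb = sliding_window_bpb(dummy_nll, list(range(1216)))
```

With the 1-nat-per-token dummy model, the result is 1/ln 2 ≈ 1.4427 bits per token, independent of the stride.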

Architecture

  • 12 layers (deepest submission), 512 dim, 8 heads, 4 KV heads (GQA)
  • MLP 3x expansion (hidden=1536), relu-squared activation
  • 29.2M parameters, ~5,590 steps in 600s (107ms/step)
  • SmearGate (learned token blending) + BigramHash (2048 buckets, dim=128)
  • U-Net skip connections, orthogonal + muP-scaled init
  • Muon optimizer (momentum 0.99, WD=0.04) + AdamW (WD=0.04)
  • SWA: 7 checkpoint average during warmdown
  • Sliding window eval: stride=64
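The SWA step averages the last 7 checkpoints saved during warmdown. A minimal sketch of the uniform parameter average, with numpy arrays standing in for model tensors (function name is mine):

```python
import numpy as np

def swa_average(checkpoints):
    # Uniform average of parameter dicts: one entry per parameter name,
    # averaged elementwise across the saved checkpoints.
    n = len(checkpoints)
    return {name: sum(ckpt[name] for ckpt in checkpoints) / n
            for name in checkpoints[0]}

# e.g. 7 checkpoints captured during the warmdown phase
ckpts = [{"w": np.full(3, float(i))} for i in range(7)]
avg = swa_average(ckpts)
```

Averaging the endpoints of the warmdown trajectory typically lands in a flatter region of the loss surface than any single checkpoint, which is why it is applied before quantization.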

Why Non-Record

This explores a novel compression direction (int5 MLP quantization) that trades quantization precision for model depth. The 12th layer adds representation capacity but each step is slower (107ms vs 81ms for 11L), so fewer training steps fit in 600s (~5,590 vs ~7,412). The depth-vs-speed tradeoff does not beat current SOTA (1.1318) but demonstrates int5 as a viable compression strategy for fitting deeper models within the 16MB budget.
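As a rough sanity check on the compression claim, narrower quantization ranges do compress better under a general-purpose compressor. A stand-in experiment (stdlib zlib instead of zstd, and uniform random values instead of real weight distributions, so the absolute ratios will not match the ~1.88x/~1.51x quoted above):

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 20  # 1 MiB of weights, each stored in an int8 container

int5 = rng.integers(-16, 16, n, dtype=np.int8).tobytes()  # 32 symbols
int6 = rng.integers(-32, 32, n, dtype=np.int8).tobytes()  # 64 symbols

ratio5 = n / len(zlib.compress(int5, 9))
ratio6 = n / len(zlib.compress(int6, 9))
print(f"int5 ratio {ratio5:.2f}x, int6 ratio {ratio6:.2f}x")
```

Entropy coding needs about 5 bits per int5 symbol versus 6 bits per int6 symbol, so the int5 stream always compresses to a smaller artifact for the same parameter count.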

Reproduction

Built on techniques from PR #198, PR #162, PR #180 (SmearGate, BigramHash, OrthoInit, int6 quantization, sliding window eval).

alertcat and others added 5 commits March 20, 2026 21:22
Innovation over PR openai#198 (SOTA 1.1318):
- 12 transformer layers (was 11): +2.2M params, better representation
- Int5 quantization for MLP weights [-16,15]: 3 zero high bits
  - zstd compression 1.88x vs int6 1.51x, saves ~1.8MB
  - Funds the 12th layer within 16MB budget
- Int6 kept for attention weights (precision-sensitive)
- FA3 fallback for older PyTorch
- LR=0.025 (validated as optimal in A/B testing)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@MatoTeziTanka

Community Review — Non-record: 12L Int5-MLP + Int6-Attn mixed quantization, val_bpb=1.1541

BPB: 1.1541 | Compliance: LOOKS CLEAN — pure-neural submission, no TTT/SLOT/n-gram-cache

What I found in the code (head SHA d21e7fb3389f, file records/track_10min_16mb/2026-03-20_12L_Int5MLP_SmearGate_BigramHash/train_gpt.py):

Static code review found no TTT adaptation function, no SLOT optimization loop, no n-gram-cache class, and no pre-quant val-token fine-tune. The eval path uses the standard sliding-window stride-64 pattern. The submission is a pure-neural architecture iteration on the standard SP1024/SP4096/SP8192 baseline.

CPU smoke test (CT2038 proteus-engine, 2026-04-11): import OK in 0.15s, dim=512, layers=9, vocab=1024, code=66224 B, SMOKE_TEST_PASS

Verdict: LOOKS CLEAN.

Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending the usual record-track checks (3-seed validation, under-16MB artifact cap, ≤600s train + ≤600s eval on 8×H100 SXM). No compliance flags from the classification pass — this looks like a clean pure-neural iteration on the standard baseline.

Auto-classification caveat: this review was drafted by the AST-based classifier. If there's a non-standard eval mechanism (logit postprocessing, hedge mixing, etc.) that I missed because it's factored into a helper file or a non-standard function name, please flag it and I'll re-run the audit manually.


Reviewed by @MatoTeziTanka (The Agora). Classification via deterministic AST-based classify_prs.py (pattern bank derived from ~65 manually-reviewed PRs earlier in the 2026-04-11 sweep). This review was auto-drafted from a template and spot-checked before posting; if the template misread your code, please call it out so I can iterate the classifier.
