
Record: PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE + PR #1767 TTT — val_bpb 1.06335#1787

Merged
cocohearts merged 3 commits into openai:main from nprime06:submission/polar-sparse-minlr-fusedce on Apr 29, 2026

Conversation

@nprime06
Contributor

@nprime06 nprime06 commented Apr 23, 2026

Summary

Results (8×H100 80GB SXM, phased TTT, 10-min train / 10-min eval)

| Seed | Steps | Post-EMA (pre-quant) | Quantized | Post-TTT | Artifact (bytes) |
|------|-------|----------------------|-----------|----------|------------------|
| 42   | 4961  | 1.06764 | 1.07699 | 1.06400 | 15,940,380 |
| 0    | 4957  | 1.06667 | 1.07603 | 1.06308 | 15,939,508 |
| 1234 | 4964  | 1.06665 | 1.07595 | 1.06297 | 15,939,918 |
| Mean | 4961  | 1.06699 | 1.07632 | 1.06335 | 15,939,935 |
| Std  |       | 0.00057 | 0.00058 | 0.00054 | 436 |

Head-to-head vs PR #1736 (matched seeds)

| Seed | This PR | PR #1736 | Δ (mBPB) |
|------|---------|----------|----------|
| 42   | 1.06400 | 1.06610  | −2.10 |
| 0    | 1.06308 | 1.06473  | −1.65 |
| 1234 | 1.06297 | 1.06563  | −2.66 |
| Mean | 1.06335 | 1.06549  | −2.14 |

What this adds over PR #1736

Training-time wins (all ablation-validated on seed 0 before stacking):

  • Polar Express Newton-Schulz coefficients (ported from PR #1344, "Record: SP4096 + Polar Express + MuonEq-R + Depth Recurrence — 1.0923 BPB (3-seed)"): replaces Muon's fixed (3.4445, -4.775, 2.0315) × 5 with 5 per-iteration minimax tuples inside zeropower_via_newtonschulz5 at unchanged MUON_BACKEND_STEPS=5.
  • MIN_LR=0.10 warmdown floor: floors LR at 10% of max instead of 0, so the final ~25% of training delivers useful gradient updates instead of frozen no-ops.
  • Sparse attention head-output gate (modded-nanogpt pattern): replaces dense GatedAttn (8, 512) = 4096 params/layer with narrow-input (8, gate_window=12) = 96 params/layer, preserving the attn_gate_w name so the int8-per-row quant path still routes it (size-range check widened to 32..8192). Saves ~44 KB.
  • Fused softcapped CE (Triton, training-only): single streaming kernel computes (softcap·tanh, LSE, per-row loss) in one pass on the training forward. Eval path (forward_logits) keeps eager softcap·tanh + F.cross_entropy numerics unchanged.
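A minimal NumPy sketch of the per-iteration coefficient idea (the tuples below are the stock Muon constants used as illustrative placeholders, not the shipped Polar Express values; the real code lives in zeropower_via_newtonschulz5 and operates on GPU tensors):

```python
import numpy as np

# Muon's stock quintic Newton-Schulz tuple, applied 5x; Polar Express
# instead supplies a distinct minimax-tuned (a, b, c) per iteration.
FIXED_COEFFS = [(3.4445, -4.7750, 2.0315)] * 5

def newton_schulz_orthogonalize(G, coeffs):
    """Approximate the orthogonal (polar) factor of G.

    Each iteration applies X <- a*X + (b*A + c*A@A) @ X with A = X X^T,
    driving the singular values of X toward 1."""
    X = G / (np.linalg.norm(G) + 1e-7)  # normalize so the iteration converges
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    for a, b, c in coeffs:
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X
```

Swapping `coeffs` for five distinct tuples changes only the constants, not the matmul count, which is why the change is free at MUON_BACKEND_STEPS=5.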

TTT improvements (from PR #1767, eval-only — zero training/artifact impact):

  • TTT_LORA_ALPHA=144: rank-scaled LoRA output (was implicitly α = rank = 96); decouples update magnitude from the chosen rank.
  • TTT_WARM_START_A=1: keep LoRA A warm across per-doc resets, only zero B.
  • TTT_WEIGHT_DECAY=1.0: up from 0.5.
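A hedged sketch of what the first two knobs do (class and method names here are hypothetical stand-ins, not the PR's actual BatchedLinearLoRA API):

```python
import numpy as np

class LoRALinear:
    """Illustrative rank-scaled LoRA adapter on a frozen base weight W.

    With alpha == rank the scale is 1.0 (the old implicit behavior);
    TTT_LORA_ALPHA=144 at rank 96 gives scale 1.5, decoupling the
    magnitude of the LoRA update from the chosen rank."""

    def __init__(self, W, rank=96, alpha=144, rng=None):
        rng = rng or np.random.default_rng(0)
        d_out, d_in = W.shape
        self.W = W
        self.A = rng.standard_normal((rank, d_in)) * 0.01  # kept warm across docs
        self.B = np.zeros((d_out, rank))                   # zeroed on each reset
        self.scale = alpha / rank

    def reset_per_doc(self, warm_start_A=True):
        # TTT_WARM_START_A=1: only B is re-zeroed between documents,
        # so the adapter output restarts at zero while A keeps its state.
        self.B[:] = 0.0
        if not warm_start_A:
            self.A[:] = np.random.default_rng(0).standard_normal(self.A.shape) * 0.01

    def __call__(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```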

Polish: GPTQ_RESERVE_SECONDS=0.5 (was 4) and VAL_LOSS_EVERY=0 (was 4000) together reclaim ~15s of training budget → ~20 more depth-3 steps.
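The MIN_LR=0.10 floor from the training-time list can be sketched as a warmdown schedule clamped at 10% of peak (the linear shape and warmdown fraction here are illustrative assumptions, not the shipped schedule):

```python
def warmdown_lr(step, total_steps, max_lr, min_lr_frac=0.10, warmdown_frac=0.25):
    """Linear warmdown over the final warmdown_frac of training, floored
    at min_lr_frac * max_lr instead of decaying to zero, so late steps
    still take non-trivial gradient updates."""
    warmdown_start = int(total_steps * (1 - warmdown_frac))
    if step < warmdown_start:
        return max_lr
    frac = (total_steps - step) / (total_steps - warmdown_start)
    return max_lr * max(min_lr_frac, frac)
```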

Methodology — training + TTT validated independently

Training-time and TTT improvements are orthogonal:

  1. Training (Polar Express NS, MIN_LR, sparse gate, fused CE): trained 3 seeds to completion, producing quantized artifacts. Full training logs in train_seed{42,0,1234}.log.
  2. TTT (PR #1767, "Record: Alpha=144 LoRA + Warm-start A + WD 1.0 — val_bpb 1.07209 (3-seed mean)"): applied to the same quantized artifacts via TTT_EVAL_ONLY mode — no retraining, no re-quantization. TTT-only logs in ttt_pr1767/.

The shipped train_gpt.py defaults to PR #1767 TTT parameters, so a full end-to-end torchrun produces the reported 1.06335 mean directly.

Rule compliance

Test plan

See records/.../README.md for full write-up including the BOS-fix patch note, lineage, and credits.

🤖 Generated with Claude Code

… Attn Gate + Fused CE — val_bpb 1.06378

3-seed mean val_bpb = 1.06378 (std 0.00058), val_loss = 2.32794 nats/token.
-0.00171 BPB vs PR openai#1736 (1.06549), -0.00043 vs PR openai#1779 (1.06421).

Stacks 4 orthogonal wins on top of PR openai#1736, all ablation-validated on
seed 0 against stock openai#1736 before stacking:
- Polar Express per-iteration minimax Newton-Schulz coefficients (from
  PR openai#1344), replacing the fixed (3.44, -4.78, 2.03) tuple applied 5x
  with 5 distinct tuples baked into zeropower_via_newtonschulz5
- MIN_LR=0.10 warmdown floor (was 0)
- Sparse attention head-output gate (modded-nanogpt pattern, 96
  params/layer vs dense GatedAttn 4096), preserving the attn_gate_w
  name so the int8-per-row quant path still routes it (size-range
  check widened to 32..8192)
- Triton fused softcapped cross-entropy kernel on the training
  forward; eval path keeps eager numerics unchanged
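A NumPy sketch of the narrow-input gate shape described above (head count, d_model, and gate_window values are taken from the PR text; the function name is a hypothetical stand-in):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate_head_outputs(x, head_out, attn_gate_w, gate_window=12):
    """Scale each attention head's output by a sigmoid gate computed from
    only the first gate_window channels of the layer input.

    x:           (d_model,)  e.g. 512-dim layer input
    head_out:    (n_heads, head_dim)  per-head attention outputs
    attn_gate_w: (n_heads, gate_window)  96 params at (8, 12), vs the
                 dense (8, 512) = 4096-param GatedAttn it replaces."""
    g = sigmoid(attn_gate_w @ x[:gate_window])  # (n_heads,) gates in (0, 1)
    return head_out * g[:, None]
```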

Polish: GPTQ_RESERVE_SECONDS=0.5 (was 4) and VAL_LOSS_EVERY=0
(was 4000) together reclaim ~15s of training budget for additional
depth-3 steps.
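For reference, a single-row NumPy sketch of the eager numerics the fused Triton kernel must reproduce, softcap·tanh followed by cross-entropy (the softcap value is an assumed example, not the shipped constant):

```python
import numpy as np

def softcap_ce(logits, target, softcap=15.0):
    """Eager reference for softcapped cross-entropy on one token.

    Logits are squashed into (-softcap, softcap) via softcap*tanh(z/softcap),
    then standard CE is applied. The fused kernel computes tanh, the
    log-sum-exp, and the per-row loss in one streaming pass instead."""
    z = softcap * np.tanh(logits / softcap)
    m = z.max()
    lse = np.log(np.sum(np.exp(z - m))) + m  # numerically stable log-sum-exp
    return lse - z[target]
```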

All 3 seeds (42, 0, 1234) clear the 16M decimal cap (max
15,940,380 B, ~60 KB headroom), the 600s train budget (599.46-
599.57s), and the 600s TTT-eval budget (412.8-511.3s). Every
individual seed beats its PR openai#1736 counterpart (deltas -1.20 to
-2.27 mBPB).

Changes are fully orthogonal to PR openai#1779's frozen recurrent α/β and
PR openai#1767's LoRA-TTT tweaks — stackable.

Also ships the BOS-fix patch for prepare_caseops_data.py (matches
PR openai#1736 d7263a3 and PR openai#1769 fe7c309): sp.encode can't emit
BOS_ID=1 since IDs 0-7 are reserved, and phased TTT's
_loss_bpb_from_sums divides by zero on BOS-less shards.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@nprime06 nprime06 changed the title feat(submission): PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE — val_bpb 1.06378 Record: PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE — val_bpb 1.06378 Apr 23, 2026
@msisovic
Contributor

Hey, interesting submission. I also found improvements from raising the LR floor and from modded-nanogpt's softcap+CE kernel.

You didn't include a command to reproduce your runs though; that would be great for the rest of the folks. Also, I notice you say you reduced GPTQ_RESERVE_SECONDS to 0.5, even though the logs show GPTQ took 3+ seconds:

GPTQ:collected 67 Hessians in 3.5s

Reviewer requested the reproduction script be included.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@nprime06
Contributor Author

nprime06 commented Apr 23, 2026

You didn't include a command to reproduce your runs though, that would be great for the rest of the folks.

Will edit the PR, thanks!

Also, I notice you say you reduced GPTQ_RESERVE_SECONDS to 0.5, even though the logs show that GPTQ took 3+ secs:

GPTQ:collected 67 Hessians in 3.5s

The GPTQ step doesn't involve any gradient updates or optimizer steps on the model; it just gathers Hessians for compression. I argue it's legal to place these steps outside the 600s training budget.

In practice this knob just stops training slightly early. I set it to 0.5s to keep a buffer so training doesn't exceed 600s (for instance, the PR #1736 README reports training took 596.14s, slightly over the 596s that a 4s reserve out of 600s would allow).

Thanks for the review!!

Ports PR openai#1767's TTT-only improvements on top of our training-time wins:
- TTT_LORA_ALPHA=144 (rank-scaled LoRA output, was implicit=96)
- TTT_WARM_START_A=1 (keep A warm across doc resets, was re-init)
- TTT_WEIGHT_DECAY=1.0 (was 0.5)

These are eval-time-only changes: zero training or artifact impact.
Validated via TTT_EVAL_ONLY mode on the same 3 quantized artifacts
from the original training runs (no retraining, no re-quantization).

3-seed post-TTT results (PR openai#1767 TTT on draft-7 artifacts):
  seed 42:   1.06400 (was 1.06444, -0.44 mBPP)
  seed 0:    1.06308 (was 1.06353, -0.45 mBPP)
  seed 1234: 1.06297 (was 1.06336, -0.39 mBPP)
  mean:      1.06335 (was 1.06378, -0.43 mBPP)

train_gpt.py defaults updated to PR openai#1767 values so a fresh
end-to-end torchrun produces the reported 1.06335 directly.
TTT-only logs included in ttt_pr1767/ subdirectory.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@nprime06 nprime06 changed the title Record: PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE — val_bpb 1.06378 feat(submission): PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE + PR #1767 TTT — val_bpb 1.06335 Apr 23, 2026
@nprime06
Contributor Author

nprime06 commented Apr 23, 2026

Update: added PR #1767 TTT improvements → 3-seed mean 1.06335 (was 1.06378)

We've updated this submission with three eval-time-only TTT changes from PR #1767:

| Parameter | Before | After |
|-----------|--------|-------|
| TTT_LORA_ALPHA | 96 (implicit = rank) | 144 (scale = 1.5×) |
| TTT_WARM_START_A | 0 (re-init A every doc) | 1 (keep A warm, zero only B) |
| TTT_WEIGHT_DECAY | 0.5 | 1.0 |

These are eval-time-only changes — zero impact on training or artifact size. We validated them by running TTT-only eval (TTT_EVAL_ONLY=1 mode) on the exact same 3 quantized artifacts from our original training runs, without retraining or re-quantizing.

Results (same artifacts, improved TTT):

| Seed | Before (stock TTT) | After (PR #1767 TTT) | Δ (mBPB) |
|------|--------------------|----------------------|----------|
| 42   | 1.06444 | 1.06400 | −0.44 |
| 0    | 1.06353 | 1.06308 | −0.45 |
| 1234 | 1.06336 | 1.06297 | −0.39 |
| Mean | 1.06378 | 1.06335 | −0.43 |

What changed in this commit (b667ea2):

  • train_gpt.py: updated BatchedLinearLoRA defaults to PR #1767 values (alpha=144, warm_start_A=1) and the ttt_weight_decay default to 1.0. Added a TTT_EVAL_ONLY env gate for running TTT eval on saved artifacts without retraining. A fresh torchrun with the shipped code now produces 1.06335 directly.
  • submission.json: updated headline numbers to 1.06335.
  • README.md: updated tables, added methodology note explaining independent validation.
  • ttt_pr1767/: added TTT-only eval logs (3 seeds) for reproducibility.

Credit to @renqianluo (PR #1767) for the TTT improvements.

@nprime06 nprime06 changed the title feat(submission): PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE + PR #1767 TTT — val_bpb 1.06335 Record: PR #1736 + Polar Express NS + MIN_LR + Sparse Attn Gate + Fused CE + PR #1767 TTT — val_bpb 1.06335 Apr 23, 2026
@leon2k2k2k

This is super nice work, and the credit should go to @renqianluo, not me.

leon2k2k2k added a commit to leon2k2k2k/parameter-golf that referenced this pull request Apr 24, 2026
sunnypatneedi pushed a commit to sunnypatneedi/parameter-golf that referenced this pull request Apr 24, 2026
…ai#1787 Polar Express NS new base; PR openai#1795 PPM 1.01252; Issue openai#1604 deadline passed; Session 20

- Merged SOTA 1.0810 confirmed Day 15 (README not updated despite Scylla record commit)
- Scylla 0.9485 committed to track_10min_16mb/ on Apr 23 (PR openai#1184) but byte accounting
  disputed by PR openai#1271 (corrected ~1.1289 bpb); treat merged SOTA as 1.0810
- PR openai#771 CLOSED/REJECTED confirmed; PR openai#727 CLOSED (illegal); PR openai#758 open but dead;
  PR openai#731 still awaiting seeds 1337+2024
- Issue openai#1604 (CaseOps ruling): NO @valerio-oai response in 11 days; self-deadline Apr 24
  passed; proceed with clean legal stack immediately
- NEW: PR openai#1787 (nprime06, 1.06335) — new community-consensus clean base with Polar Express
  Newton-Schulz (arXiv:2505.16932, ICLR 2026) + MIN_LR=0.10 warmdown floor
- NEW: PR openai#1795 (OE-GOD, 1.01252) — byte-level PPM order-4 adaptive mixture; gate legality
  concern fixed; await organizer ruling before implementing
- NEW: PR openai#1797 (dexhunter, 1.06157) — PR openai#1787 + SmearGate + LQER Asym; new dexhunter best
- NEW: PR openai#1802 (aamodbhatt, 1.0771) — Polar Express NS + Multi-Phase Global TTT
- TECHNIQUE: Polar Express NS (arXiv:2505.16932) and Gram NS (Dao-AILab) added to table
- TECHNIQUE: MIN_LR=0.10 warmdown floor added to best-stack approach
- Updated competition strategy: stop waiting for CaseOps, implement clean stack with
  Polar Express NS + MIN_LR immediately (6 days to deadline)

https://claude.ai/code/session_01JZ3FiS937NwLHt3Fv9WHPD
@msisovic
Contributor

@nprime06 On the GPTQ reservation question: it was previously discussed in the challenge that training data may only be accessed during the training phase (the README says "you aren't allowed to access any training data during evaluation", even though you could argue this technically isn't eval yet), and this led to approaches like AR self-generated data for GPTQ, whose point is to sidestep the tradeoff where we eat into our train time to collect the Hessians.

Not trying to bash your submission; in fact I plan to rebase my current approach on top of it. Just something I thought would be good for you to know, since I had the context from previous discussions (I can link the organizers' comment when I find time to search through the old PRs) and since it shouldn't affect your score much.

aquariouseworkman added a commit to aquariouseworkman/parameter-golf that referenced this pull request Apr 27, 2026
…symmetric + Phased TTT

val_bpb = 1.06128 | ~15.95 MB | 8xH100 SXM

Key Change: SmearGate BOS Document Boundary Fix
Builds on PR openai#1797 stack (PR openai#1787 base + SmearGate + LQER Asymmetric) but fixes the SmearGate cross-document leakage bug identified by @cocohearts in PR openai#1797 audit.

The bug: SmearGate 1-token causal lookback does not mask BOS positions, so the final token of document N smears into BOS of document N+1.

Credits
@nprime06 -- PR openai#1787 base stack
@romeerp -- CaseOps transform (PR openai#1729)
@dexhunter -- SmearGate + LQER (PR openai#1797)
@cocohearts -- Identifying SmearGate BOS bug
@abaybektursun -- Score-first TTT (PR openai#549)
@clarkkev -- GPTQ SDClip + SP8192 (PR openai#1394)
cocohearts added a commit that referenced this pull request Apr 30, 2026
…Base + Smear Gate + LQER Asymmetric + Phased TTT (indirect 3 seed mean)

Merge accepted Parameter Golf record/support submission #1851 after requested format cleanup.
leon2k2k2k added a commit to leon2k2k2k/parameter-golf that referenced this pull request May 1, 2026
Audits every CaseOps-lineage record-track PR (merged + unmerged) since
2026-04-18 for whether val docs are also in the training set.

Working set: 34 PRs (31 from chronological seed list + 3 discovered ancestors:
openai#1908, openai#1923, openai#2007). Boundary nodes openai#1493 / openai#1626 (pre-CaseOps).

Verdicts:
  - CLEAN (8): openai#1729, openai#1851, openai#1868, openai#1908, openai#2019, openai#2027, openai#2031, openai#2068
  - LEAK (25): openai#1736 (our research baseline) → openai#1769 → openai#1787 → openai#1797 → openai#1855 → V21 family (openai#1945, openai#1923, openai#1953, openai#1967) → openai#2018 → openai#2118
    (current claimed frontier 1.04350), plus siblings.
  - INHERIT (1): openai#2050 (eval-only on frozen openai#1915)

Code-level evidence (not README claims):
  - Every shipped prepare_caseops_data.py is byte-identical:
    SHARD_TOKENS=10_000_000, default=10_000 for --val-docs
  - NO PR overrides --val-docs (searched all .sh files in all 34 PRs)
  - cached_challenge_fineweb.py downloads from romeerp/parameter-golf-caseops-v1
    HF dataset whose manifest pins docs_val=50000, docs_train=8181945,
    sums match → CLEAN by construction
  - PR openai#2018's DATASET_AUDIT.md is gold-standard explicit leak description
  - PR openai#2118's submission.json admits "--val-docs=10000 train shards + 50k val eval"

Three signposts:
  - Leak introduced: PR openai#1736 by @dexhunter (Apr 19) — first prepare_caseops_data.py
    default invocation
  - Leak fixed: PR openai#1851 by @aquariouseworkman (Apr 27) — switched to HF dataset
  - Leak re-introduced: PR openai#1855 by @codemath3000 (same day) — rebuilt locally

The merged-leaderboard SOTA (openai#1851/openai#1868 at 1.06128/1.06141) is CLEAN.
The unmerged frontier (openai#2118 at 1.04350) is LEAK. The 0.018 bpb gap is
inflated by val memorization; spec 301 was designed to measure how much
remains under clean data.

Files:
  caseops-memory-leakage/README.md       — overview, methodology, takeaways
  caseops-memory-leakage/verdicts.md     — 34-row master table with evidence
  caseops-memory-leakage/family-tree.md  — ASCII trees with [C]/[L] annotations
jzmyres pushed a commit to jzmyres/parameter-golf that referenced this pull request May 3, 2026
…ns through 2026-04-27

Adds the 10 most recent leaderboard records from origin/main, including:
- 2026-04-27_SP8192_LQER_SparseGate_BOSSmearFix_9HpStack_1.0611 (CURRENT TOP, val_bpb 1.06108)
- 2026-04-23_SP8192_CaseOps_SparseGate_QuantGate_Loop45_PhasedTTT_PolarNS_MinLR_FusedCE
- 2026-04-22_SP8192_CaseOps_GatedAttn_QuantGate_Loop45_PhasedTTT_MLPClip12
- 2026-04-19_SP8192_CaseOps_GatedAttn_QuantGate_Loop45_PhasedTTT
- 2026-04-18_PR1626_CaseOps_Taper
- 2026-04-14_MultiPhaseGlobalSGD_PhasedTTT
- 2026-04-13_VarLenAttn_PhasingTTT2000
- 2026-04-10_VarLenAttn
- 2026-04-09_A2_Muon097_3Seed
- 2026-03-29_Loader_FullGPTQ_XSA11_BigramHash2816

These records use the SP8192 tokenizer + 8x H100 + TTT + advanced quantization
on a non-DEQ baseline (standard transformer with U-Net skips, Loop4-5 depth
recurrence, XSA, Polar-Express NS, etc). Top record (1.0611) lineage:
PR openai#1797 (SmearGate + LQER) -> PR openai#1787 (Polar NS + MIN_LR + SparseAttnGate +
Fused CE) -> PR openai#1736 (CaseOps + GatedAttn + QuantGate + Loop4-5 + PhasedTTT).

SparseAttnGate (PR openai#1787) reviewed for incorporation into our model arch:
analysis follows in next commit / iter 120 design spec.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
jzmyres pushed a commit to jzmyres/parameter-golf that referenced this pull request May 3, 2026
…_blend_lr

Iter 117 v1 NaN'd at step 60 (recon 1.2e-4 → 6.8e-2, attn_cv 0.88 → NaN). Root
cause: 0.7% entmax-1.5 contribution at init combined with no entropy cushion
let CV concentration cascade once routing started concentrating. Three
principled fixes in this commit:

1. Annealed blend (H87 v2): blend = (1-anneal) + sigmoid(blend_logit)*anneal,
   with anneal: 0 → 1 over warmup_delay_frac=0.3. At init (anneal=0), blend
   forced to 1.0 = pure softmax = iter 100b exact (strict-gen verified
   numerically: max abs diff 0.00 at anneal=0). Avoids cold-start instability.
   New buffer `_entmax_blend_anneal` + setter, written by training-loop
   annealer alongside variance/entropy schedules.

2. Polar-Express Newton-Schulz Muon coefficients (PR openai#1344 → openai#1787 in records).
   Per-iter (a, b, c) tuples replace stock fixed (3.4445, -4.7750, 2.0315).
   Iter 1: aggressive (8.16, -22.5, 15.9); iter 5: gentle (2.35, -1.71, 0.42).
   At 10 iters: 0.05 mean rel err vs stock 0.20 — 4× better orthogonalization
   quality at the same matmul count.
   IMPORTANT — DEQ STABILITY CONSTRAINT: at backend_steps=5 (records' default)
   the aggressive iter-1 coefficient overshoots and breaks DEQ reverse
   reconstruction (smoke recon 1.2e-4 → 6.8e-2). Records are non-DEQ so they
   tolerate this. We MUST run at backend_steps=10 — confirmed smoke PASSES.
   Default lifted 5 → 10. Net throughput cost ~5-10% step_avg; net quality
   gain 4× orthogonalization. Likely net-positive.

3. Slow LR for blend_logit (entmax_blend_lr=0.002, 10× smaller than
   scalar_lr=0.02). Mirrors parcae_lr precedent — params controlling
   sensitive system dynamics get a slow LR to bound drift rate. Once anneal
   ramps after warmup_delay, gradient flows but blend_logit drifts 10×
   slower → routing has time to adapt to gradually-introduced sparsity.
   New optimizer group carved from scalar_params via "_entmax_blend_logit"
   name filter; coverage assertion updated.

Smoke test PASSED: loss 7.00 → 4.55 at 300 steps with all three fixes
active (entmax off by default — needs --use-entmax-routing=1 to enable).

CLAUDE.md §5 muon_backend_steps row added documenting the 5 → 10 lift and
the DEQ-specific constraint vs records' default.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
leon2k2k2k added a commit to leon2k2k2k/parameter-golf that referenced this pull request May 5, 2026
- Parallel residuals: credit PR openai#1204 (first) and PR openai#1529 (layer 8)
- Depth recurrence: clarify 2-pass origin (openai#1344), 3-pass is final config
- Loop curriculum frac=0.35: correct PR to openai#1420 (not openai#1344)
- SmearGate: note original PR openai#162, SP8192 reintroduction in openai#1667
- Attention output gate: PR openai#1667 first, PR openai#1787 narrowed it
- LeakyReLU²: replaced relu², not GELU
- EMA decay: remove specific wrong value, defer to PR openai#287
- GPTQ first: correct to PR openai#535 (not openai#1019)
- Global SGD phase: 2000 docs (openai#1610) refined to 2500 (openai#1626)
- Training closing line: "halfway point" → "35% mark"

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>