nohup: ignoring input
Loading model...
[Auto-detect] Qwen3-Omni MoE thinker (30.5B total, ~3.3B active)
[FireEcho] Loading /run/media/echo/Echo/ECHO/training/Prototype Fireecho/model/Qwen3-Omni-30B-A3B-Instruct...
[FireEcho] AutoConfig failed ('Qwen3OmniMoeTalkerCodePredictorConfig' object has no attribute 'use_sliding_window'), loading config.json directly
Qwen3-Omni: will stream-load from 15 shards
[Qwen3 Streaming] Loaded shard index: 28010 keys across 15 shards
[Qwen3 Streaming] Building engine skeleton...
[Qwen3 Streaming] Global params on GPU: 1.2 GB
  Layer  4/48: 393 weights, VRAM 2.8 GB, CPU 1.4 GB
  Layer  8/48: 393 weights, VRAM 4.3 GB, CPU 1.6 GB
  Layer 12/48: 393 weights, VRAM 5.8 GB, CPU 1.7 GB
  Layer 16/48: 393 weights, VRAM 7.4 GB, CPU 1.9 GB
  Layer 20/48: 393 weights, VRAM 8.9 GB, CPU 2.0 GB
  Layer 24/48: 393 weights, VRAM 10.4 GB, CPU 2.2 GB
  Layer 28/48: 393 weights, VRAM 11.9 GB, CPU 2.3 GB
  Layer 32/48: 393 weights, VRAM 13.5 GB, CPU 2.5 GB
  Layer 36/48: 393 weights, VRAM 15.0 GB, CPU 2.6 GB
  Layer 40/48: 393 weights, VRAM 16.5 GB, CPU 2.8 GB
  Layer 44/48: 393 weights, VRAM 18.0 GB, CPU 2.9 GB
  Layer 48/48: 393 weights, VRAM 19.6 GB, CPU 3.1 GB
[Qwen3 Streaming] Final VRAM: 19.6 GB (FP4 quantized)
[Qwen3 Streaming] Done: 1571.8M params, 18867 weights loaded
Total params: 1.57B
Frozen params: 1.54B (base model, FP4)
Trainable params: 30.2M (Hebbian only)
[Packed MoE] 48 layers packed (6144 experts → contiguous)
[Flat KV] Enabled: 4096 tokens, 403 MB
Warmup...
============================================================
Testing D=2 (D=2 baseline)
============================================================
[EAGLE] Loaded legacy D=2 checkpoint. 0 new layer params initialized randomly.
[EAGLE-3] Draft head: D=2, 104.9M params, 210 MB, capture layers [8, 24, 47] + Hebbian memory
Target prefill logits: has_nan=True, min=nan, max=nan
First decoded token: 0 = '!'
Target predicts next: 0 = '!'
Feature layer 8: has_nan=True, min=nan, max=nan
Feature layer 24: has_nan=True, min=nan, max=nan
Feature layer 47: has_nan=True, min=nan, max=nan
Draft tokens:
  [0] 0 = '!'
  [1] 0 = '!'
  [2] 0 = '!'
  [3] 0 = '!'
  [4] 0 = '!'
Draft logits[0]: has_nan=True, min=nan, max=nan
Target verify predictions:
  [1] target=0 ('!'), draft=0 ('!') → MATCH
  [2] target=0 ('!'), draft=0 ('!') → MATCH
  [3] target=0 ('!'), draft=0 ('!') → MATCH
  [4] target=0 ('!'), draft=0 ('!') → MATCH
Accepted: 5/5
--- Full speculative_generate (max_new=30) ---
[EAGLE-3] 5 rounds, 21 drafted, 21 accepted (100%), avg 4.2/round
Output: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================
Testing D=8 (D=8 with random layers 2-7)
============================================================
[EAGLE] Loaded legacy D=2 checkpoint. 54 new layer params initialized randomly.
[FE-XT] Draft head: D=8, 356.5M params, 713 MB, capture layers [8, 24, 47] + Hebbian memory
Target prefill logits: has_nan=True, min=nan, max=nan
First decoded token: 0 = '!'
Target predicts next: 0 = '!'
Feature layer 8: has_nan=True, min=nan, max=nan
Feature layer 24: has_nan=True, min=nan, max=nan
Feature layer 47: has_nan=True, min=nan, max=nan
Draft tokens:
  [0] 0 = '!'
  [1] 0 = '!'
  [2] 0 = '!'
  [3] 0 = '!'
  [4] 0 = '!'
Draft logits[0]: has_nan=True, min=nan, max=nan
Target verify predictions:
  [1] target=0 ('!'), draft=0 ('!') → MATCH
  [2] target=0 ('!'), draft=0 ('!') → MATCH
  [3] target=0 ('!'), draft=0 ('!') → MATCH
  [4] target=0 ('!'), draft=0 ('!') → MATCH
Accepted: 5/5
--- Full speculative_generate (max_new=30) ---
[EAGLE-3] 5 rounds, 21 drafted, 21 accepted (100%), avg 4.2/round
Output: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================
D=2 accepted: 5/5
D=8 accepted: 5/5
============================================================