richardbaihe committed · verified
Commit 8bd929c · 1 Parent(s): d4ed680

add results table

Files changed (1): README.md +9 -0
README.md CHANGED
@@ -22,6 +22,15 @@ This model was produced using **Simple Self-Distillation (SSD)**, a method that
 
 SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
 
+## Results
+
+LiveCodeBench (%)
+
+| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
+|---|---|---|---|---|
+| Qwen3-30B-A3B-Instruct-2507 (base) | 42.4 | 53.5 | 45.8 | 58.7 |
+| **+ SSD (this model)** | **55.3** (+12.9) | **71.6** (+18.1) | **54.3** (+8.5) | **70.7** (+12.0) |
+
 ## Paper
 
 **Embarrassingly Simple Self-Distillation Improves Code Generation**
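
The truncated sampling that SSD uses when drawing solutions from the base model can be sketched in plain Python. This is an illustrative sketch only: the function name and the specific temperature/top-k/top-p values are assumptions for the example, not settings taken from the paper.

```python
import math

def truncate_distribution(logits, temperature=0.8, top_k=20, top_p=0.95):
    """Temperature scaling followed by top-k and top-p (nucleus) truncation.

    Illustrative sketch of non-unit-temperature, truncated sampling;
    hyperparameter values here are placeholders, not the paper's settings.
    """
    # Temperature-scaled softmax (numerically stabilized by the max trick).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]

    # Top-k: keep only the k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    keep = set(order[:top_k])

    # Top-p: within those, keep the smallest prefix whose cumulative
    # probability mass reaches top_p.
    cum, nucleus = 0.0, set()
    for i in order:
        if i not in keep:
            break
        nucleus.add(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize the surviving tokens; everything else gets zero mass.
    total = sum(probs[i] for i in nucleus)
    return [probs[i] / total if i in nucleus else 0.0 for i in range(len(probs))]
```

Sampling from the renormalized distribution (rather than greedy decoding) is what injects the exploration SSD later distills back into the model via standard supervised fine-tuning on the sampled solutions.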