yizheapple committed
Commit 00db28a · verified · 1 Parent(s): 417e22b

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -10,7 +10,7 @@ library_name: transformers

  # SimpleSD-30B-instruct

- This model was produced using **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.
+ This model was produced using **Simple Self-Distillation (SimpleSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.

  - **Self-distillation sampling:** temperature=1.6, top_p=0.8, top_k=20
  - **Evaluation sampling:** temperature=0.9, top_p=0.8, top_k=20
@@ -31,7 +31,7 @@ tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-30B-instruct")

  ## Method

- SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
+ SimpleSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SimpleSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SimpleSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.

  ## Results

@@ -40,7 +40,7 @@ LiveCodeBench (%)
  | Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
  |---|---|---|---|---|
  | Qwen3-30B-A3B-Instruct-2507 (base) | 42.4 | 53.5 | 45.8 | 58.7 |
- | **+ SSD (this model)** | **55.3** (+12.9) | **71.6** (+18.1) | **54.3** (+8.5) | **70.7** (+12.0) |
+ | **+ SimpleSD (this model)** | **55.3** (+12.9) | **71.6** (+18.1) | **54.3** (+8.5) | **70.7** (+12.0) |

  ## Paper
  [**Embarrassingly Simple Self-Distillation Improves Code Generation**](https://arxiv.org/abs/2604.01193)
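
The README hunks above pin the evaluation decoding settings (temperature=0.9, top_p=0.8, top_k=20) and load the tokenizer from `apple/SimpleSD-30B-instruct` with `transformers`. As a rough usage sketch only, assuming the checkpoint loads as a standard causal LM and using an invented prompt and generation length, wiring those settings into `generate` might look like this:

```python
# Hedged usage sketch: the repo id and sampling values come from the README shown
# in the diff; the prompt, max_new_tokens, and dtype/device handling are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/SimpleSD-30B-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.9,  # evaluation sampling config from the README
    top_p=0.8,
    top_k=20,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```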
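
The Method paragraph describes SimpleSD as sampling solutions from the base model with non-unit temperature and top-k/top-p truncation (the README lists temperature=1.6, top_p=0.8, top_k=20 for this step), then fine-tuning on those samples with standard supervised learning. The sketch below covers only the sampling half, under explicit assumptions: the base model's hub id, the prompt list, the number of samples per problem, and the record format are placeholders, and the paper may filter or post-process samples in ways not shown here.

```python
# Hedged sketch of the self-distillation sampling step; everything marked
# "assumed" is a placeholder, not taken from the README or paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-30B-A3B-Instruct-2507"  # hub id assumed from the base model in the Results table
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

problems = ["Write a function that ..."]  # assumed: competitive-programming prompts
samples_per_problem = 4                   # assumed: not specified in the README

distillation_data = []
for problem in problems:
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": problem}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(
        inputs,
        max_new_tokens=1024,
        do_sample=True,
        temperature=1.6,  # self-distillation sampling config from the README
        top_p=0.8,
        top_k=20,
        num_return_sequences=samples_per_problem,
    )
    for seq in outputs:
        completion = tokenizer.decode(seq[inputs.shape[-1]:], skip_special_tokens=True)
        # (prompt, completion) pairs form the SFT dataset; the model is then
        # fine-tuned on them with an ordinary cross-entropy objective.
        distillation_data.append({"prompt": problem, "completion": completion})
```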