---
license: apple-amlr
base_model:
- Qwen/Qwen3-4B-Instruct-2507
tags:
- self-distillation
- code-generation
- ssd
library_name: transformers
---

# SimpleSD-4B-instruct

This model was produced using **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.

- **Self-distillation sampling:** temperature=1.6, top_p=0.8, top_k=20
- **Evaluation sampling:** temperature=1.1, top_p=0.8, top_k=20
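
For reference, the two configurations can be written as `transformers` `GenerationConfig` objects. A minimal sketch (variable names are illustrative, not from the paper's code):

```python
from transformers import GenerationConfig

# Configuration used when sampling self-distillation training data.
distill_config = GenerationConfig(do_sample=True, temperature=1.6, top_p=0.8, top_k=20)

# Configuration used when sampling at evaluation time.
eval_config = GenerationConfig(do_sample=True, temperature=1.1, top_p=0.8, top_k=20)
```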

## Notes
- These are research checkpoints for reproducibility.
- They are not optimized Qwen releases.
- They don't represent a broader open-source model strategy.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/SimpleSD-4B-instruct")
tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-4B-instruct")
```
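
A generation call with the evaluation-time sampling settings listed above might look like this (the prompt is illustrative):

```python
# Illustrative: generate a completion with the evaluation-time sampling settings.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.1,  # evaluation sampling config from this card
    top_p=0.8,
    top_k=20,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```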

## Method

SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
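
A schematic of that loop, assuming a prompt set `problems` and a plain supervised objective. This is an illustrative sketch inferred from the description above, not the paper's released code; `model` and `tokenizer` are as loaded in the Usage section:

```python
def ssd_collect(model, tokenizer, problems, num_samples=4):
    """Sample self-distillation targets from the model itself (illustrative sketch)."""
    pairs = []
    for prompt in problems:
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        outputs = model.generate(
            **inputs,
            max_new_tokens=1024,
            num_return_sequences=num_samples,
            do_sample=True,
            temperature=1.6,  # self-distillation sampling config
            top_p=0.8,
            top_k=20,
        )
        prompt_len = inputs["input_ids"].shape[-1]
        for seq in outputs:
            completion = tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
            pairs.append((prompt, completion))
    return pairs

def sft_loss(model, tokenizer, prompt, completion):
    """One supervised fine-tuning step: cross-entropy on the sampled completion only."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[-1]
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids.to(model.device)
    labels = full_ids.clone()
    labels[:, :prompt_len] = -100  # ignore prompt tokens in the loss
    return model(input_ids=full_ids, labels=labels).loss
```

No rewards, verifiers, or teacher models appear anywhere in this loop: the training signal is simply the model's own high-temperature samples.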

## Results

LiveCodeBench pass rates (%); LCBv5 and LCBv6 denote LiveCodeBench versions 5 and 6.

| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-4B-Instruct-2507 (base) | 34.0 | 41.0 | 34.3 | 45.4 |
| **+ SSD (this model)** | **41.5** (+7.5) | **56.8** (+15.8) | **45.7** (+11.4) | **61.9** (+16.5) |

## Paper

[**Embarrassingly Simple Self-Distillation Improves Code Generation**](https://arxiv.org/abs/2604.01193)

```bibtex
@misc{zhang2026embarrassinglysimpleselfdistillationimproves,
      title={Embarrassingly Simple Self-Distillation Improves Code Generation},
      author={Ruixiang Zhang and Richard He Bai and Huangjie Zheng and Navdeep Jaitly and Ronan Collobert and Yizhe Zhang},
      year={2026},
      eprint={2604.01193},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.01193},
}
```

## License

This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SimpleSD-4B-instruct/blob/main/LICENSE).