---
license: apple-amlr
base_model:
- Qwen/Qwen3-4B-Thinking-2507
tags:
- self-distillation
- code-generation
library_name: transformers
---

# SimpleSD-4B-thinking

This model is an example of the **Simple Self-Distillation (SimpleSD)** method, which improves code generation by fine-tuning a language model on its own sampled outputs, with no rewards, verifiers, teacher models, or reinforcement learning. It is initialized from Qwen/Qwen3-4B-Thinking-2507; see the paper below for details.

- **Self-distillation sampling:** temperature=1.1, top_p=0.95, top_k=20
- **Evaluation sampling:** temperature=0.7, top_p=0.95, top_k=20
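For concreteness, here is a minimal sketch of these two presets as `transformers` `GenerationConfig` objects; the variable names are illustrative, not part of this release:

```python
from transformers import GenerationConfig

# Sampling presets from this card; variable names are illustrative.
self_distill_config = GenerationConfig(
    do_sample=True, temperature=1.1, top_p=0.95, top_k=20
)
eval_config = GenerationConfig(
    do_sample=True, temperature=0.7, top_p=0.95, top_k=20
)
```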

- **Paper:** https://arxiv.org/abs/2604.01193
- **Code:** https://github.com/apple/ml-ssd


## Notes
- These are research checkpoints for reproducibility.
- They are not optimized Qwen releases.
- They don't represent a broader open-source model strategy.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SimpleSD checkpoint and its tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("apple/SimpleSD-4B-thinking")
tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-4B-thinking")
```
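
Continuing from the snippet above, a hedged sketch of sampling one completion with the evaluation settings; the prompt and `max_new_tokens` are illustrative choices, not values from this card:

```python
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=1024,  # illustrative budget
    do_sample=True,
    temperature=0.7,      # evaluation sampling preset from above
    top_p=0.95,
    top_k=20,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```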

## Method

SimpleSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes the same model on those samples with standard supervised learning. Despite its simplicity, SimpleSD yields large gains on competitive-programming benchmarks, with improvements concentrated on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SimpleSD reshapes token distributions in a context-dependent way, so that a single global decoding configuration becomes far more effective at evaluation time.
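
A hedged sketch of the loop this describes; the function signature and `sft_trainer` interface below are illustrative placeholders, not the API of the repository linked above:

```python
def simple_self_distillation(model, tokenizer, prompts, sft_trainer, max_new_tokens=2048):
    """Illustrative sketch: sample from the model itself, then fine-tune on the samples."""
    samples = []
    for prompt in prompts:
        # 1. Sample a solution from the base model with exploratory decoding.
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        out = model.generate(
            **inputs,
            do_sample=True,
            temperature=1.1,  # self-distillation sampling preset from above
            top_p=0.95,
            top_k=20,
            max_new_tokens=max_new_tokens,
        )
        completion = tokenizer.decode(
            out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
        )
        samples.append({"prompt": prompt, "completion": completion})

    # 2. Standard supervised fine-tuning on the model's own samples:
    #    no rewards, verifiers, teacher models, or RL.
    sft_trainer(model, samples)
    return model
```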

## Results

All numbers are LiveCodeBench pass rates (%); gains over the base model are shown in parentheses.

| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-4B-Thinking-2507 (base) | 54.5 | 67.5 | 59.6 | 70.3 |
| **+ SimpleSD (this model)** | **57.8** (+3.3) | **71.4** (+3.9) | **63.1** (+3.5) | **74.7** (+4.4) |
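
For reference, pass@k is typically computed with the unbiased estimator of Chen et al. (2021): draw n samples per problem, count the c correct ones, and estimate the probability that a random subset of k samples contains at least one correct solution. A minimal sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```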

## Paper

[**Embarrassingly Simple Self-Distillation Improves Code Generation**](https://arxiv.org/abs/2604.01193)

```bibtex
@misc{zhang2026embarrassinglysimpleselfdistillationimproves,
      title={Embarrassingly Simple Self-Distillation Improves Code Generation},
      author={Ruixiang Zhang and Richard He Bai and Huangjie Zheng and Navdeep Jaitly and Ronan Collobert and Yizhe Zhang},
      year={2026},
      eprint={2604.01193},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.01193},
}
```


## License

This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SimpleSD-4B-thinking/blob/main/LICENSE).