---
license: apple-amlr
base_model:
- Qwen/Qwen3-30B-A3B-Instruct-2507
tags:
- self-distillation
- code-generation
- ssd
library_name: transformers
---
# SimpleSD-30B-instruct
This model was produced using **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.
- **Self-distillation sampling:** temperature=1.6, top_p=0.8, top_k=20
- **Evaluation sampling:** temperature=0.9, top_p=0.8, top_k=20
## Notes
- These are research checkpoints for reproducibility.
- They are not optimized Qwen releases.
- They don't represent a broader open-source model strategy.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the SSD-fine-tuned checkpoint and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(
    "apple/SimpleSD-30B-instruct", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("apple/SimpleSD-30B-instruct")
```
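For generation, here is a minimal sketch (continuing from the snippet above, not an official example) that applies the evaluation sampling settings listed earlier; the prompt and `max_new_tokens` are illustrative:

```python
messages = [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,  # evaluation sampling configuration from above
    top_p=0.8,
    top_k=20,
    max_new_tokens=512,
)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0, input_ids.shape[-1]:], skip_special_tokens=True))
```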
## Method
SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
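As a concrete illustration (a sketch of the idea, not the authors' released pipeline), the two stages might look like the following; the prompt set, sample count, and generation length are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stage 1 of SSD: sample solutions from the *base* model using the
# self-distillation decoding configuration (temperature=1.6, top_p=0.8, top_k=20).
base_id = "Qwen/Qwen3-30B-A3B-Instruct-2507"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

prompts = ["Given an array of integers, return the length of the longest increasing subsequence."]
sft_data = []
for prompt in prompts:
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    for _ in range(4):  # several samples per problem; the exact count is a placeholder
        out = model.generate(
            input_ids,
            do_sample=True,
            temperature=1.6,  # non-unit temperature is the key ingredient
            top_p=0.8,
            top_k=20,
            max_new_tokens=1024,
        )
        completion = tokenizer.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=True)
        sft_data.append({"prompt": prompt, "completion": completion})

# Stage 2: fine-tune the same model on these (prompt, completion) pairs with
# ordinary supervised cross-entropy; no rewards, verifiers, or teacher models
# are involved at any point.
```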
## Results
All numbers are pass rates (%) on LiveCodeBench (LCB) v5 and v6.
| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-30B-A3B-Instruct-2507 (base) | 42.4 | 53.5 | 45.8 | 58.7 |
| **+ SSD (this model)** | **55.3** (+12.9) | **71.6** (+18.1) | **54.3** (+8.5) | **70.7** (+12.0) |
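For reference, pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021): with $n$ samples drawn per problem, of which $c$ pass all tests,

```latex
\text{pass@}k \;=\; \mathbb{E}_{\text{problems}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
```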
## Paper
[**Embarrassingly Simple Self-Distillation Improves Code Generation**](https://arxiv.org/abs/2604.01193)
```bibtex
@misc{zhang2026embarrassinglysimpleselfdistillationimproves,
      title={Embarrassingly Simple Self-Distillation Improves Code Generation},
      author={Ruixiang Zhang and Richard He Bai and Huangjie Zheng and Navdeep Jaitly and Ronan Collobert and Yizhe Zhang},
      year={2026},
      eprint={2604.01193},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2604.01193},
}
```
## License
This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SimpleSD-30B-instruct/blob/main/LICENSE).