---
license: apple-amlr
base_model:
- Qwen/Qwen3-4B-Instruct-2507
tags:
- self-distillation
- code-generation
- ssd
library_name: transformers
---

# SSD-Qwen3-4B-Instruct

This model was produced using **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs, without rewards, verifiers, teacher models, or reinforcement learning.

- **Base model:** [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507)
- **Variant:** instruct
- **Self-distillation sampling:** temperature=1.6, top_p=0.8, top_k=20
- **Evaluation sampling:** temperature=1.1, top_p=0.8, top_k=20

## Method

SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrated on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
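The decoding configuration above (non-unit temperature combined with top-k and top-p truncation) can be sketched in isolation. This is a minimal illustrative implementation of the truncated sampling distribution, not the actual code used to produce this model; the toy logits are invented for demonstration:

```python
import math

def truncated_distribution(logits, temperature=1.6, top_k=20, top_p=0.8):
    """Apply temperature scaling, then top-k and top-p (nucleus) truncation,
    returning a renormalized probability distribution over token ids."""
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]

    # Keep only the top_k most probable tokens.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k]

    # Within those, keep the smallest prefix whose cumulative mass reaches top_p.
    cum, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Renormalize over the surviving tokens.
    mass = sum(probs[i] for i in nucleus)
    return {i: probs[i] / mass for i in nucleus}

# Toy 5-token vocabulary: raising the temperature flattens the distribution,
# while top-p truncation still removes the low-probability tail.
dist = truncated_distribution([4.0, 3.0, 2.0, 1.0, 0.0])
```

With these settings, sampling stays exploratory (flatter distribution) while truncation keeps it away from very unlikely tokens, which is the trade-off the precision–exploration discussion refers to.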
## Results

LiveCodeBench pass rates (%):

| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-4B-Instruct-2507 (base) | 34.0 | 41.0 | 34.3 | 45.4 |
| **+ SSD (this model)** | **41.5** (+7.5) | **56.8** (+15.8) | **45.7** (+11.4) | **61.9** (+16.5) |

## Paper

**Embarrassingly Simple Self-Distillation Improves Code Generation**  
Ruixiang Zhang, Richard He Bai, Huangjie Zheng, Navdeep Jaitly, Ronan Collobert, Yizhe Zhang

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/SSD-Qwen3-4B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("apple/SSD-Qwen3-4B-Instruct")
```

## License

This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SSD-Qwen3-4B-Instruct/blob/main/LICENSE).