---
license: apple-amlr
base_model:
- Qwen/Qwen3-4B-Thinking-2507
tags:
- self-distillation
- code-generation
- ssd
library_name: transformers
---

# SSD-Qwen3-4B-Thinking

This model was produced with **Simple Self-Distillation (SSD)**, a method that improves code generation by fine-tuning a language model on its own sampled outputs, with no rewards, verifiers, teacher models, or reinforcement learning.

- **Base model:** [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507)
- **Variant:** thinking
- **Self-distillation sampling:** temperature=1.1, top_p=0.95, top_k=20
- **Evaluation sampling:** temperature=0.7, top_p=0.95, top_k=20

## Method

SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes the model on those samples with standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with the improvements concentrated on harder problems. The mechanism traces to resolving a *precision–exploration conflict*: SSD reshapes token distributions in a context-dependent way, so that a single global decoding configuration becomes far more effective at evaluation time. A minimal sketch of the recipe appears at the end of this card.

## Results

LiveCodeBench pass rates (%):

| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-4B-Thinking-2507 (base) | 54.5 | 67.5 | 59.6 | 70.3 |
| **+ SSD (this model)** | **57.8** (+3.3) | **71.4** (+3.9) | **63.1** (+3.5) | **74.7** (+4.4) |

## Paper

**Embarrassingly Simple Self-Distillation Improves Code Generation**

Ruixiang Zhang, Richard He Bai, Huangjie Zheng, Navdeep Jaitly, Ronan Collobert, Yizhe Zhang

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/SSD-Qwen3-4B-Thinking")
tokenizer = AutoTokenizer.from_pretrained("apple/SSD-Qwen3-4B-Thinking")
```

A generation example with the evaluation sampling settings is shown below, after the license note.

## License

This model is released under the [Apple Machine Learning Research Model License](https://huggingface.co/apple/SSD-Qwen3-4B-Thinking/blob/main/LICENSE).
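## Generation example

A minimal generation sketch using the evaluation sampling settings listed above (temperature=0.7, top_p=0.95, top_k=20). The prompt and the `max_new_tokens` value are illustrative, not from the paper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/SSD-Qwen3-4B-Thinking", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("apple/SSD-Qwen3-4B-Thinking")

# Illustrative coding prompt (not from the LiveCodeBench evaluation set).
messages = [{"role": "user", "content": "Write a Python function that returns the longest strictly increasing subsequence of a list of integers."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Evaluation sampling settings from this card.
outputs = model.generate(
    inputs,
    max_new_tokens=4096,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=20,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```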
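## SSD recipe sketch

To make the Method section concrete, here is a minimal, hypothetical sketch of the SSD loop. It is not the authors' released implementation: the prompt set, the number of samples, and the fine-tuning wiring are assumptions; only the sampling hyperparameters (temperature=1.1, top_p=0.95, top_k=20) come from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stage 1: sample solutions from the base model with the self-distillation settings.
base_id = "Qwen/Qwen3-4B-Thinking-2507"
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

def sample_solution(prompt: str) -> str:
    """Draw one solution with the self-distillation sampling settings above."""
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(
        inputs, max_new_tokens=4096, do_sample=True,
        temperature=1.1, top_p=0.95, top_k=20,
    )
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

# Hypothetical prompt set of competitive programming problems.
prompts = ["..."]
corpus = [{"prompt": p, "completion": sample_solution(p)} for p in prompts]

# Stage 2: fine-tune the same model on its own samples with standard supervised
# learning (no rewards, verifiers, or teacher models), e.g. via an off-the-shelf
# SFT trainer; the trainer choice here is an assumption, not part of the paper.
```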