---
license: apple-amlr
base_model:
  - Qwen/Qwen3-30B-A3B-Instruct-2507
tags:
  - self-distillation
  - code-generation
  - ssd
library_name: transformers
---

# SSD-Qwen3-30B-A3B-Instruct

This model was produced using Simple Self-Distillation (SSD), a method that improves code generation by fine-tuning a language model on its own sampled outputs—without rewards, verifiers, teacher models, or reinforcement learning.

- **Base model:** Qwen/Qwen3-30B-A3B-Instruct-2507
- **Variant:** instruct
- **Self-distillation sampling:** `temperature=1.6`, `top_p=0.8`, `top_k=20`
- **Evaluation sampling:** `temperature=0.9`, `top_p=0.8`, `top_k=20`

## Method

SSD samples solutions from the base model using non-unit temperature and top-k/top-p truncation, then fine-tunes on those samples via standard supervised learning. Despite its simplicity, SSD yields large gains on competitive programming benchmarks, with improvements concentrating on harder problems. The mechanism traces to resolving a precision–exploration conflict: SSD reshapes token distributions in a context-dependent way so that a single global decoding configuration becomes far more effective at evaluation time.
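The two stages can be summarized in a short sketch. The following is a minimal, illustrative pipeline and not the released training code: helper names such as `sample_solutions` and the `prompts` list are hypothetical, and only the decoding parameters are taken from the configuration above.

```python
# Minimal SSD sketch (hypothetical helper names; not the released training code).
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-30B-A3B-Instruct-2507"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

def sample_solutions(prompts):
    """Stage 1: sample solutions from the base model itself, using the
    non-unit-temperature, top-k/top-p truncated decoding config above."""
    records = []
    for prompt in prompts:
        input_ids = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            add_generation_prompt=True,
            return_tensors="pt",
        ).to(model.device)
        output_ids = model.generate(
            input_ids,
            do_sample=True,
            temperature=1.6,  # self-distillation sampling config
            top_p=0.8,
            top_k=20,
            max_new_tokens=2048,
        )
        completion = tokenizer.decode(
            output_ids[0, input_ids.shape[1]:], skip_special_tokens=True
        )
        records.append({"prompt": prompt, "completion": completion})
    return records

# Stage 2: ordinary supervised fine-tuning on the self-sampled pairs
# (cross-entropy on the completion tokens); no rewards, verifiers,
# teacher models, or reinforcement learning are involved.
```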

## Results

**LiveCodeBench (%)**

| Model | LCBv6 pass@1 | LCBv6 pass@5 | LCBv5 pass@1 | LCBv5 pass@5 |
|---|---|---|---|---|
| Qwen3-30B-A3B-Instruct-2507 (base) | 42.4 | 53.5 | 45.8 | 58.7 |
| + SSD (this model) | 55.3 (+12.9) | 71.6 (+18.1) | 54.3 (+8.5) | 70.7 (+12.0) |

## Paper

**Embarrassingly Simple Self-Distillation Improves Code Generation**

Ruixiang Zhang, Richard He Bai, Huangjie Zheng, Navdeep Jaitly, Ronan Collobert, Yizhe Zhang

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/SSD-Qwen3-30B-A3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("apple/SSD-Qwen3-30B-A3B-Instruct")
```
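A minimal generation example using the evaluation sampling configuration listed above; the prompt is illustrative and `max_new_tokens` is an arbitrary choice:

```python
# Illustrative generation with the evaluation-time decoding config.
messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,  # evaluation sampling config
    top_p=0.8,
    top_k=20,
    max_new_tokens=1024,
)
print(tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True))
```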

## License

This model is released under the Apple Machine Learning Research Model License.