# DASD-1.7B-stage2

## Overview
DASD-1.7B-stage2 is a small-scale long-reasoning instruction model trained with the DASD (Distribution-Aligned Sequence Distillation) methodology. It employs a stage-wise (Stage 1 → Stage 2) distillation fine-tuning strategy designed to enhance the model's stability, generalization, and reasoning diversity on complex tasks.
This model uses Jackrong/DASD-1.7B-stage1 as its initial weights. Building upon the stable reasoning patterns established in Stage 1, it undergoes Stage 2 high-temperature distribution alignment training to further expand its reasoning coverage, allowing the small model to better approximate the reasoning distribution of larger teacher models.
## Training Methodology

*Figure 1: (a) The stage-wise DASD pipeline. (b) Comparison of reasoning trajectories between stages. (c) Theoretical visualization of distribution alignment.*
### Base Model

- Base model: `Jackrong/DASD-1.7B-stage1`
- Architecture: decoder-only Transformer
- Context length: up to 16,384 tokens
- Quantization: 4-bit
- Chat template: Qwen3 Instruct format
### Training Configuration

- Training type: Supervised Fine-Tuning (SFT)
- Frameworks:
  - 🤗 Transformers
  - TRL
  - Unsloth (4-bit efficient training)
- Optimization (see the configuration sketch after this list):
  - Optimizer: `adamw_8bit`
  - Learning rate: `2e-5`
  - Scheduler: linear
  - Weight decay: `0.001`
  - Warmup ratio: `0.05`
- Epochs: 1
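As a rough illustration, the hyperparameters above map onto a TRL `SFTConfig` as follows. This is a minimal sketch, not the authors' training script: dataset and model wiring are omitted, `output_dir` is a placeholder, and the sequence-length keyword name varies across TRL versions.

```python
# Minimal sketch of the SFT configuration described above, using TRL.
# Dataset/model wiring is omitted; output_dir is a placeholder.
from trl import SFTConfig

config = SFTConfig(
    output_dir="dasd-1.7b-stage2",  # placeholder path
    num_train_epochs=1,
    learning_rate=2e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    weight_decay=0.001,
    optim="adamw_8bit",             # 8-bit AdamW (bitsandbytes)
    max_seq_length=16384,           # renamed to max_length in newer TRL releases
)
```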
## Training Method: DASD Stage-wise Distillation
This model follows a two-stage Distribution-Aligned Sequence Distillation (DASD) training recipe, designed specifically to enable small dense language models to acquire strong long chain-of-thought (Long-CoT) reasoning behavior from large teacher models.
Instead of performing a single-pass SFT on mixed-temperature data, DASD decomposes training into two complementary stages with different optimization roles.
### Stage 1: Low-Temperature Alignment (Stability)
In Stage 1, the model is trained on low-temperature teacher outputs, which emphasize:
- Canonical and structured reasoning paths.
- Reduced randomness and lower variance.
- High signal-to-noise supervision.
This stage serves as a distributional anchor, enabling the model to:
- Learn how to reason correctly before learning how reasoning can vary.
- Establish stable internal representations for long chain-of-thought.
- Avoid early exposure bias and optimization instability.
For small models, this stage is especially critical to prevent brittle reasoning and mode collapse during early fine-tuning.
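A minimal sketch of what low-temperature teacher sampling looks like in practice follows. The teacher repo id, prompt, and temperature value are assumptions, since the model card does not publish the exact generation settings.

```python
# Illustrative low-temperature sampling from a teacher model (Stage 1 style).
# The teacher repo id and temperature are assumptions, not the authors' settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_id = "openai/gpt-oss-120b"  # placeholder teacher
tokenizer = AutoTokenizer.from_pretrained(teacher_id)
teacher = AutoModelForCausalLM.from_pretrained(teacher_id, device_map="auto")

prompt_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Prove that the sum of two even integers is even."}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(teacher.device)

# Low temperature keeps sampling close to the mode: canonical, low-variance traces.
trace = teacher.generate(prompt_ids, do_sample=True, temperature=0.3, max_new_tokens=2048)
print(tokenizer.decode(trace[0], skip_special_tokens=True))
```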
### Stage 2: High-Temperature Expansion (Diversity)
Stage 2 continues training from the Stage 1 checkpoint using higher-temperature DASD samples (sketched after the lists below), which introduce:
- Multiple valid reasoning trajectories.
- Broader coverage of the teacher’s output distribution.
- Increased linguistic and structural diversity.
By aligning the student model to a wider but still distribution-aware target, this stage allows the model to:
- Generalize beyond a single reasoning style.
- Become more robust to paraphrasing and alternative solution paths.
- Better approximate the full reasoning support of large teacher models.
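The stage handoff can be sketched as follows. Only the relevant pieces are shown, and the Stage 2 temperature here is an assumption; the card does not state the value used.

```python
# Sketch of the Stage 1 -> Stage 2 handoff: the student resumes from the
# Stage 1 checkpoint and is fine-tuned on higher-temperature teacher traces.
from transformers import AutoModelForCausalLM, AutoTokenizer

student = AutoModelForCausalLM.from_pretrained("Jackrong/DASD-1.7B-stage1")
tokenizer = AutoTokenizer.from_pretrained("Jackrong/DASD-1.7B-stage1")

# Higher temperature widens coverage of the teacher distribution; the exact
# value (1.0) is an assumption.
stage2_sampling = dict(do_sample=True, temperature=1.0, top_p=0.95, max_new_tokens=4096)
```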
### Why Stage-wise DASD Works for Small Models
Compared to conventional single-stage SFT, the Stage 1 → Stage 2 DASD pipeline provides:
- Improved optimization stability through curriculum-style training.
- Higher reasoning coverage without sacrificing correctness.
- Reduced exposure bias, especially for long-context reasoning.
- Better performance–data efficiency tradeoff, which is critical at the 1–2B parameter scale.
This strategy enables small dense models to inherit strong long-CoT reasoning behavior while remaining compact and efficient.
## Training Data
The model is trained on multi-source, high-quality reasoning instruction data, primarily distilled from large teacher models using the DASD methodology.
Data sources include:
- Alibaba Apsara Superior-Reasoning-SFT (gpt-oss-120b) datasets (Stage 1 & Stage 2).
- Reasoning-focused ShareGPT-style instruction data.
- Natural and multi-step reasoning instruction corpora.
- Chinese and English bilingual long-CoT distillation datasets.
All data are unified into a conversational format using the Qwen3 Instruct chat template, with loss applied only to assistant responses.
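A minimal sketch of that unification step, assuming a ShareGPT-style record layout; the raw field names (`conversations`, `from`, `value`) are assumptions about the source data:

```python
# Hypothetical conversion of a ShareGPT-style record into the Qwen3 Instruct
# chat format via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Jackrong/DASD-1.7B-stage1")

record = {
    "conversations": [
        {"from": "human", "value": "Why is the sky blue?"},
        {"from": "gpt", "value": "Because of Rayleigh scattering: ..."},
    ]
}
role_map = {"human": "user", "gpt": "assistant", "system": "system"}
messages = [{"role": role_map[t["from"]], "content": t["value"]}
            for t in record["conversations"]]

# tokenize=False returns the formatted training string.
text = tokenizer.apply_chat_template(messages, tokenize=False)
```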
## Training Details

- Max sequence length: 16,384
- Loss masking: assistant-only loss (`train_on_responses_only`), as sketched below
- Checkpointing: last checkpoint retained
- Logging: experiment tracking enabled during training
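A sketch of the assistant-only masking with Unsloth's `train_on_responses_only` helper. The Qwen3 delimiter strings are assumptions based on its ChatML-style template and should be checked against the actual tokenizer; `trainer` stands for an `SFTTrainer` built as in the configuration sketch above.

```python
# Assistant-only loss masking via Unsloth; delimiters assume Qwen3's
# ChatML-style template and should be verified against the tokenizer.
from unsloth.chat_templates import train_on_responses_only

trainer = train_on_responses_only(
    trainer,  # an existing SFTTrainer instance
    instruction_part="<|im_start|>user\n",
    response_part="<|im_start|>assistant\n",
)
```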
## Intended Use
This model is intended for:
- Long chain-of-thought reasoning tasks.
- Math, logic, code, and scientific QA.
- Bilingual (Chinese / English) instruction following.
- Research and experimentation on small-model reasoning distillation (see the inference sketch below).
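For quick experimentation, a minimal inference sketch, assuming the repository loads through the standard Transformers API; the generation settings are illustrative defaults:

```python
# Minimal inference sketch; generation settings are illustrative defaults.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jackrong/DASD-1.7B-stage2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```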
## Limitations
- The model has not undergone RLHF or safety-specific alignment.
- Outputs may contain incorrect or unverified reasoning steps (hallucinations).
- Not suitable for high-risk domains such as legal or medical advice.
## License
This model follows the licenses of its base model and training datasets. Please refer to the original dataset repositories for detailed license information.
## Acknowledgements
- Alibaba Apsara DASD / gpt-oss-120b project.
- Unsloth efficient fine-tuning framework.
- Hugging Face Transformers & TRL teams.