Add comprehensive model card for TokenSwift-QwQ-32B
#1 opened by nielsr (HF Staff)

README.md CHANGED

@@ -1,199 +1,211 @@
-### Model Description
-
-<!-- Provide a longer summary of what this model is. -->
-
-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources [optional]
-
-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
-
-## Uses
-
-### Direct Use
-
-<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
-[More Information Needed]
-
-<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
-[More Information Needed]
-
-#### Metrics
-
-<!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
-[More Information Needed]
-
-### Results
-
-[More Information Needed]
-
-#### Summary
-
-## Model Examination [optional]
-
-<!-- Relevant interpretability work for the model goes here -->
-
-[More Information Needed]
-
-## Environmental Impact
-
-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
-
-## Technical Specifications [optional]
-
-### Model Architecture and Objective
-
-[More Information Needed]
-
-### Compute Infrastructure
-
-[More Information Needed]
-
-#### Hardware
-
-[More Information Needed]
-
-#### Software
-
-[More Information Needed]
-
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
-[More Information Needed]
-
-## More Information [optional]

---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
tags:
- llm
- speculative-decoding
- long-context
- acceleration
- qwen2
---

<div align="center" id="title"> <img src="https://github.com/bigai-nlco/TokenSwift/raw/main/image/TokenSwiftLogo.png" width=400px /> </div>

<h3 align="center">TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation</h3>

<div align="center">
<a href="https://huggingface.co/papers/2502.18890"><img src="https://img.shields.io/badge/Paper-2502.18890-b31b1b.svg?logo=arXiv" alt="arXiv"></a>
<a href="https://bigai-nlco.github.io/TokenSwift/"><img src="https://img.shields.io/badge/Website-TokenSwift-brightgreen.svg" alt="Website"></a>
<a href="https://github.com/bigai-nlco/TokenSwift"><img src="https://img.shields.io/badge/GitHub-Code-181717.svg?logo=github" alt="GitHub"></a>
</div>

This repository contains a checkpoint of the **TokenSwift** framework, specifically `TokenSwift-QwQ-32B`, for accelerating Large Language Model (LLM) generation.

TokenSwift is a novel framework that substantially accelerates the generation of ultra-long sequences, up to 100K tokens, while maintaining the target model's inherent quality.

---

## Model Details

### Model Description
Generating ultra-long sequences with large language models (LLMs) has become increasingly crucial but remains a highly time-intensive task, particularly for sequences up to 100K tokens. While traditional speculative decoding methods exist, simply extending their generation limits fails to accelerate the process and can be detrimental. Through an in-depth analysis, we identify three major challenges hindering efficient generation: frequent model reloading, dynamic key-value (KV) management, and repetitive generation.

To address these issues, we introduce **TokenSwift**, a novel framework designed to substantially accelerate the generation of ultra-long sequences while maintaining the target model's inherent quality. Experimental results demonstrate that TokenSwift achieves a more than 3× speedup across models of varying scales (1.5B, 7B, 8B, 14B) and architectures (MHA, GQA). This acceleration translates into hours of time savings for ultra-long sequence generation, establishing TokenSwift as a scalable and effective solution at unprecedented lengths.

#### ✨ Key Highlights

| Highlights | Description | Emoji |
|------------------|----------------------------------------------|-------|
| ⚡ **Speed** | 3× faster than vanilla Transformers | ⏩ |
| 🎯 **Lossless** | Matches the original model's output quality | ✅ |
| 📈 **Scalability** | Linear time complexity for 100K+ sequences | 📏 |
| 🛠️ **Plug & Play** | Works with most HuggingFace models | 🤗 |

### Framework Illustration
<img src='https://github.com/bigai-nlco/TokenSwift/raw/main/image/framework.png' alt='TokenSwift Framework' style='width: 100%;'>

*Illustration of the TokenSwift framework. First, the target model (LLM), equipped with a partial KV cache and three additional linear layers, outputs 4 logits in a single forward pass. Tree-based attention is then applied to construct candidate tokens, and the top-k candidate 4-grams are retrieved accordingly. These candidates form the draft tokens, which are fed into the LLM with the full KV cache to generate target tokens. Verification checks whether the draft tokens exactly match the target tokens. Finally, one of the longest valid draft sequences is selected at random, and the n-gram table and KV cache are updated accordingly.*
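
The acceptance test described above amounts to a longest-prefix match between each draft continuation and the tokens the target model itself produces. Below is a minimal sketch of that check; the helper name and tensor shapes are illustrative assumptions for this card, not the repository's actual API.

```python
import torch

def accepted_prefix_length(draft_tokens: torch.Tensor, target_tokens: torch.Tensor) -> int:
    """Return how many leading draft tokens exactly match the target tokens
    (hypothetical helper, not part of the TokenSwift codebase)."""
    matches = draft_tokens == target_tokens               # element-wise comparison
    mismatch_positions = (~matches).nonzero(as_tuple=True)[0]
    # Accept everything before the first mismatch; accept all tokens if there is none.
    return int(mismatch_positions[0]) if mismatch_positions.numel() > 0 else draft_tokens.numel()

# Example: the first three draft tokens agree with the target, the fourth does not.
draft = torch.tensor([11, 42, 7, 99])
target = torch.tensor([11, 42, 7, 13])
print(accepted_prefix_length(draft, target))  # -> 3
```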

This repository contains:
- ✅ **100% reproducibility** for all experiments
- 📊 Benchmark scripts for sequence lengths of 20K/40K/60K/80K/100K
- 🤖 Pre-trained model adapters for any supported architecture

<img src='https://github.com/bigai-nlco/TokenSwift/raw/main/image/res1.png' width='48%'> <img src='https://github.com/bigai-nlco/TokenSwift/raw/main/image/res2.png' width='48%'>

*Visualization of our acceleration performance vs. baseline methods*

---

## ✨ News

- **2025.5.2**: 🔥🔥 Our paper is accepted by ICML 2025!
- **2025.3.19**: 🔥🔥 Released the finetuned [QwQ-32B](https://huggingface.co/TokenSwift/TokenSwift-QwQ-32B) model with 3× acceleration. Check out the [inference guide](#inference) for deployment.
- **2025.2.28**: 🔥🔥 Released the finetuned [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/TokenSwift/TokenSwift-DeepSeek-R1-Distill-Qwen-32B) model with 3× acceleration. Check out the [inference guide](#inference) for deployment.
- **2025.2.27**: Paper released on arXiv.

---
## How to Get Started with the Model

### Installation

You can install the `tokenswift` library via pip or from source.

#### Method 1: With pip
```bash
pip install tokenswift
```

#### Method 2: From source (recommended)
```bash
git clone https://github.com/bigai-nlco/TokenSwift.git
cd TokenSwift
conda create -n tokenswift python=3.11
conda activate tokenswift
conda install nvidia::cuda-nvcc
pip install -r requirements.txt
pip install https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.4.post1/flash_attn-2.7.4.post1+cu12torch2.4cxx11abiFALSE-cp311-cp311-linux_x86_64.whl
```
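
After installation, a quick sanity check can confirm the environment. This assumes the package exposes an importable `tokenswift` module and that `flash_attn` was installed from the wheel above; adjust the import names if your setup differs.

```python
# Illustrative environment check; the module names are assumptions, not official documentation.
import torch
import flash_attn
import tokenswift

print(torch.__version__, flash_attn.__version__)
```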

### Models Download

| Model Name | Download Link |
|------------|---------------|
| TokenSwift-Yarn-Llama-2-7b-128k | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-Yarn-Llama-2-7b-128k) |
| TokenSwift-Llama-3.1-8B | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-Llama-3.1-8B) |
| TokenSwift-Qwen2.5-1.5B | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-Qwen2.5-1.5B) |
| TokenSwift-Qwen2.5-7B | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-Qwen2.5-7B) |
| TokenSwift-Qwen2.5-14B | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-Qwen2.5-14B) |
| TokenSwift-DeepSeek-R1-Distill-Qwen-32B | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-DeepSeek-R1-Distill-Qwen-32B) |
| TokenSwift-QwQ-32B | [HuggingFace](https://huggingface.co/TokenSwift/TokenSwift-QwQ-32B) |
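
If you prefer to fetch a checkpoint programmatically rather than through the links above, the standard `huggingface_hub` client works; the snippet below is a generic example, not a TokenSwift-specific requirement.

```python
from huggingface_hub import snapshot_download

# Download the TokenSwift-QwQ-32B checkpoint into the local Hugging Face cache
local_dir = snapshot_download(repo_id="TokenSwift/TokenSwift-QwQ-32B")
print(local_dir)
```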

### Inference / Sample Usage

#### Python via `transformers`
You can easily use this model with the Hugging Face `transformers` library. Ensure you have `transformers` and `torch` installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TokenSwift/TokenSwift-QwQ-32B"  # This model
# The base model may vary depending on the specific TokenSwift checkpoint.
# For TokenSwift-QwQ-32B, the base model is Qwen/QwQ-32B.
# You might need to adjust the base model name for other TokenSwift variants.

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Example prompt for long sequence generation
prompt = "In a realm far away, where magic intertwined with technology, a young apprentice named Elara discovered an ancient artifact. It pulsed with an ethereal glow, hinting at powers beyond her wildest dreams. As she touched its surface, a surge of energy coursed through her, revealing forgotten prophecies and a destiny she never imagined. The artifact hummed, a low vibration echoing through the silent chamber, drawing her deeper into its mysteries. Outside, the world continued oblivious, but within the confines of that room, Elara's life was irrevocably changed. She knew then that her journey had just begun, a path fraught with peril and untold wonders. This ancient power, once dormant, was now awakening within her, demanding to be understood and mastered. The weight of this new knowledge settled upon her shoulders, a burden and a blessing. She was no longer just an apprentice; she was a vessel for something ancient and powerful, a key to unlocking secrets long buried. The air crackled with anticipation, reflecting the storm brewing within her soul. Her fingers traced the intricate carvings on the artifact, each line telling a story of a forgotten era. She closed her eyes, trying to absorb every detail, every whisper of its past. The chamber itself seemed to breathe with her, a silent witness to her transformation. It was a moment of profound realization, a turning point that would shape the very fabric of her existence. The hum intensified, a symphony of awakened power. She felt the pull of a greater purpose, a call to adventure that she could not ignore. The journey ahead would be long and arduous, but she was ready. She would embrace her destiny, whatever it held. The last rays of sunlight pierced through a narrow slit in the ceiling, illuminating the dust motes dancing in the air, oblivious to the momentous change that had just occurred. The dust danced, each speck a tiny universe, unburdened by destiny or ancient prophecies. Elara, however, was keenly aware of the weight of her newfound path. She looked at the artifact, then back at the chamber, a new resolve hardening her gaze. The ancient magic flowed through her veins, a thrilling, terrifying current. She took a deep breath, and began to walk towards the exit, her steps firm and purposeful. Her journey was indeed just beginning, and the world was about to feel the ripple effect of her awakening. Her mind raced with possibilities and dangers, each step a testament to her courage. The artifact, now a part of her, guided her silently. She emerged from the chamber, transformed, into a world that would soon reckon with her power. The wind whispered secrets through the trees outside, and a single leaf detached itself, spiraling down to the forest floor, foretelling a storm."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generate text
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
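
The command-line interface below exposes sampling knobs such as `--min_p`, `--temperature`, and `--penalty`. If you only need plain `transformers` generation with roughly comparable settings, they map onto standard `generate` arguments as sketched here; this mapping is an assumption made for illustration and does not reproduce TokenSwift's accelerated decoding.

```python
# Hypothetical mapping of the CLI sampling flags onto plain `generate` kwargs
# (min_p sampling requires a recent transformers release).
outputs = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    min_p=0.1,               # analogous to --min_p 0.1
    temperature=1.0,         # analogous to --temperature 1.0
    repetition_penalty=1.2,  # loosely analogous to --penalty 1.2
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```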

#### Command Line Interface
Take LLaMA3.1-8B as an example:
```bash
torchrun --master-port 1111 --nproc_per_node=1 main.py \
    --model_type llama3_1 \
    --ckpt_path your_checkpoint_path \
    --prefill_len 4096 \
    --retrival_max_budget 4096 \
    --gen_len 102400 \
    --gamma 4 \
    --min_p 0.1 \
    --temperature 1.0 \
    --tree_decoding \
    --ngram_topk 20 \
    --penalty 1.2 \
    --penalty_length 1024 \
    --prompt_id 0

# NOTE: Modify the data and model paths above.
```
For other models, you can run the scripts in the `infer_scripts/` folder. For example:
```bash
bash infer_scripts/r1_qwen_32b.sh
```

---

## Training Guide (Optional)

### Datasets Download
From the [PG-19](https://huggingface.co/datasets/deepmind/pg19) training set, data longer than 8K tokens are filtered out according to each model's tokenizer.

Or download the processed training datasets directly: [llama2-pg19](https://huggingface.co/datasets/TokenSwift/llama2_pg19_train_data), [llama3.1-pg19](https://huggingface.co/datasets/TokenSwift/llama3.1_pg19_train_data), [qwen2.5-pg19](https://huggingface.co/datasets/TokenSwift/qwen2.5_pg19_train_data).
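
A minimal sketch of this length-based filtering with the 🤗 `datasets` library is shown below; the tokenizer name is an illustrative assumption, and the predicate simply follows the description above, so adjust both to match the preprocessing you actually need.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Example tokenizer; use the tokenizer of the model you plan to train.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
pg19 = load_dataset("deepmind/pg19", split="train")

def keep_example(example):
    # Drop books longer than 8K tokens, per the description above (adjust if your pipeline differs).
    return len(tokenizer(example["text"]).input_ids) <= 8192

filtered = pg19.filter(keep_example)
print(len(filtered))
```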

### How to Train
Take LLaMA3.1-8B as an example:
```bash
torchrun --master-port 1111 --nproc_per_node=4 train/train_legacy.py \
    --model_name_or_path /your_model_path/Meta-Llama-3.1-8B \
    --llama_type llama3_1 \
    --data_path /your_data_path/llama3_1_pg19_8k_data \
    --output_dir /your_checkpoint_path/adapter_ckpts_llama3_1 \
    --max_steps 200 \
    --per_device_train_batch_size 3 \
    --gradient_accumulation_steps 10 \
    --save_steps 200 \
    --learning_rate 5e-3 \
    --weight_decay 0.1 \
    --warmup_steps 50 \
    --lr_scheduler_type cosine \
    --logging_steps 5 \
    --report_to tensorboard \
    --bf16 True \
    --medusa_heads 3 \
    --remove-unused-columns false

# NOTE: Modify the data and model paths above.
```
For other models, you can run the scripts in the `train/scripts/` folder. For example:
```bash
cd train
bash scripts/train_R1_qwen2_5_32b.sh
```

---

## Citation
If you are interested in our work or use our library, please cite:
```bibtex
@misc{tokenswift,
      title={From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation up to 100K Tokens},
      author={Tong Wu and Junzhe Shen and Zixia Jia and Yuxuan Wang and Zilong Zheng},
      year={2025},
      eprint={2502.18890},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.18890},
}
```

---

## Acknowledgment
This codebase is influenced by remarkable projects from the LLM community, including [Medusa](https://github.com/FasterDecoding/Medusa/tree/main) and [TriForce](https://github.com/Infini-AI-Lab/TriForce).
|