---
license: mit
pipeline_tag: text-generation
library_name: transformers
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/66a056d0229269a861ac1245/UmJOD5HnhCfvy3nAXgxgE.png" alt="PARD" width="100" align="left">
<div align="center">
<h1>PARD: Accelerating LLM Inference with Low-Cost PARallel Draft Model Adaptation</h1>
</div>

<p align="center"> |
<a href="https://arxiv.org/abs/2504.18583"><b>Paper</b></a> |
<a href="https://github.com/AMD-AIG-AIMA/PARD"><b>Github</b></a> |
<a href="https://www.amd.com/en/developer/resources/technical-articles/accelerating-generative-llms-interface-with-parallel-draft-model-pard.html"><b>Blog</b></a> |
</p>
## Introduction

PARD is a high-performance speculative decoding method that also enables low-cost adaptation of autoregressive draft models into parallel draft models. It offers the following advantages:
- **Low-Cost Training**: PARD adapts AR (autoregressive) draft models into parallel draft models with minimal overhead. Compared to pure AR draft models, PARD achieves an average inference speedup of 1.78×. By introducing a conditional drop-token strategy, PARD improves training efficiency by up to 3× while maintaining the same level of accuracy. (The decoding loop this accelerates is sketched after this list.)

- **Generalizability**: Thanks to its target-independent design, a single PARD draft model can accelerate an entire family of target models. This contrasts with target-dependent approaches such as Medusa and EAGLE, which require retraining or tuning for each new target. As a result, PARD significantly reduces both deployment complexity and adaptation cost.

- **High Performance**: When integrated into an optimized inference framework called Transformers+, PARD delivers up to a 4.08× speedup, with LLaMA3.1 8B reaching a state-of-the-art 311.5 tokens per second. When integrated into vLLM, PARD delivers up to a 3.06× speedup, outperforming other speculative decoding methods in vLLM by 1.51×.
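
To make the mechanism concrete, below is a minimal, self-contained sketch of the greedy speculative decoding loop that parallel-draft methods such as PARD build on. The stand-in model functions and the draft length `K` are illustrative assumptions, not PARD's actual implementation; see the paper and repository for the real method.

```python
# Toy simulation of the speculative decoding loop used by parallel-draft
# methods. The "models" here are stand-in functions over a tiny vocabulary;
# in practice both would be transformer forward passes.

K = 4  # number of tokens proposed per drafting step (assumed value)

def draft_propose(prefix):
    # A parallel draft model emits all K candidate tokens in ONE forward
    # pass (PARD's adaptation); an AR draft would need K sequential passes.
    return [(prefix[-1] + i + 1) % 10 for i in range(K)]

def target_next(prefix):
    # Stand-in for the target model's greedy next-token prediction.
    return (sum(prefix) * 7 + 3) % 10

def speculative_step(prefix):
    candidates = draft_propose(prefix)
    accepted = []
    # The target verifies all K candidates in a single batched forward
    # pass; here we emulate that with K cheap calls.
    for tok in candidates:
        if target_next(prefix + accepted) == tok:
            accepted.append(tok)  # draft and target agree: keep the token
        else:
            break
    # Append the target's own prediction at the first disagreement, so
    # every step yields at least one guaranteed-correct token.
    accepted.append(target_next(prefix + accepted))
    return accepted

sequence = [1]
for _ in range(5):
    sequence += speculative_step(sequence)
print(sequence)  # tokens accepted per step range from 1 to K + 1
```
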
<p align="center">
<figure style="display: inline-block; text-align: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630cb01cc169245d78fe76b6/Dh-7wE-l0YAfU9lXWssKf.png" width="100%">
<figcaption style="font-style: italic; margin-top: 2px;">
AR and AR+ represent baseline autoregressive generation using Transformers and Transformers+, respectively. VSD denotes vanilla speculative decoding. PARD refers to the proposed method in this work.
</figcaption>
</figure>
</p>
## Model Weights

| Model Series | Model Name | Download |
|--------------------------|------------------------------------|---------------|
| Llama 3 | PARD-Llama-3.2-1B | [🤗 HuggingFace](https://huggingface.co/amd/PARD-Llama-3.2-1B) |
| DeepSeek-R1 Distill Qwen | PARD-DeepSeek-R1-Distill-Qwen-1.5B | [🤗 HuggingFace](https://huggingface.co/amd/PARD-DeepSeek-R1-Distill-Qwen-1.5B) |
| Qwen 2.5 | PARD-Qwen2.5-0.5B | [🤗 HuggingFace](https://huggingface.co/amd/PARD-Qwen2.5-0.5B) |
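
To pull a checkpoint programmatically rather than through the links above, a small sketch using `huggingface_hub` follows; the chosen repo id is just one row of the table, picked as an example.

```python
# Minimal sketch: download one of the PARD draft checkpoints listed above.
from huggingface_hub import snapshot_download

# Any repo id from the table works; PARD-Qwen2.5-0.5B is used as an example.
local_dir = snapshot_download(repo_id="amd/PARD-Qwen2.5-0.5B")
print(local_dir)  # local path where the weights were cached
```
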
## How To Use

Please visit the [PARD](https://github.com/AMD-AIG-AIMA/PARD) repository for more information. A rough sketch of one possible way to try this draft model is shown below.
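
As an illustration only: the snippet below loads the draft checkpoint as an `assistant_model` for Hugging Face transformers' built-in assisted generation. The target model id is an assumption, and PARD's reported speedups come from its own Transformers+/vLLM integration rather than this vanilla path.

```python
# Hedged sketch: vanilla assisted generation with transformers.
# Assumptions: the target model id below, and that this PARD checkpoint
# can serve as a plain assistant model. PARD's full parallel-draft
# speedup requires the integration from the GitHub repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed target model
draft_id = "amd/PARD-Llama-3.2-1B"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, device_map="auto")

inputs = tokenizer("Speculative decoding works by", return_tensors="pt").to(target.device)
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
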
## Citation
```
@article{an2025pard,
  title={PARD: Accelerating LLM Inference with Low-Cost PARallel Draft Model Adaptation},
  author={An, Zihao and Bai, Huajun and Liu, Ziqiong and Li, Dong and Barsoum, Emad},
  journal={arXiv preprint arXiv:2504.18583},
  year={2025}
}
```