Overview

The Qwen2.5‑Coder‑1.5B model is a code‑specialized large language model designed to deliver strong performance in code generation, code reasoning, and code fixing. Built as part of the Qwen2.5‑Coder series, which spans 0.5B to 32B parameters, the 1.5B variant inherits the improvements of the broader family, including training on 5.5 trillion tokens of diverse data such as source code, text‑code grounding data, and high‑quality synthetic examples. Trained as a causal language model, Qwen2.5‑Coder‑1.5B provides a compact yet capable foundation for code understanding, generation, and integration within coding‑assistant or code‑agent workflows. This repository hosts the quantized and compiled model for the Ara240 DNPU.

Model Description

This is a quantized and compiled version of Qwen/Qwen2.5-Coder-1.5B optimized for the Ara240 DNPU.

  • Base Model: Qwen/Qwen2.5-Coder-1.5B
  • Original Model Authors: Qwen Team, Alibaba Cloud
  • Original License: Apache-2.0
  • Modified by: NXP

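For reference, the unmodified base checkpoint can be exercised with the Hugging Face transformers library as sketched below. This snippet targets the original Qwen/Qwen2.5-Coder-1.5B weights only; the quantized, compiled artifact in this repository must instead be deployed through NXP's Ara240 DNPU runtime, which is not shown here.

```python
# Sketch: run the *original* base model with transformers.
# The Ara240-compiled model in this repository is not loadable this way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-Coder-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Base (non-instruct) model, so use a plain completion-style prompt.
prompt = "def is_prime(n: int) -> bool:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```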
Performance

  • SpecD - Speculative decoding: a small draft model generates candidate tokens, which the main model then verifies (see the sketch after the table below).
  • Unpack - A smaller model variant with 4-bit layers for prompt processing; these layers are unpacked to 8-bit precision at runtime (a pack/unpack sketch is given under Modifications).
  • TTFT - Time to first token, reported as a range: the lower bound corresponds to prompts of up to 128 tokens, and the upper bound to prompts at the maximum context length.
  • Avg. Token Rate - Average token generation rate measured over the context length.
| Model | Runtime | Context Length | SpecD | Unpack | Params (billion) | Time To First Token (s) | Avg. Token Rate (tokens/second) | DDR Memory (GB) |
|-------|---------|----------------|-------|--------|------------------|-------------------------|---------------------------------|-----------------|
| Qwen2.5-Coder-1.5B | r2.0.4 | 4096 | false | false | 1.54 | 0.45 - 15.85 | 25.52 | 2.121 |
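For intuition about the SpecD option above, here is a minimal sketch of greedy speculative decoding. The names `draft_next` and `target_next` are illustrative stand-ins for the draft and main models; this is not the Ara240 runtime's actual implementation, which performs the verification step as a batched pass on the accelerator.

```python
# Minimal sketch of greedy speculative decoding (illustrative only).
# `draft_next` and `target_next` stand in for the draft and main models;
# both map a token sequence to the most likely next token id.
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],
    target_next: Callable[[List[int]], int],
    max_new_tokens: int = 32,
    draft_len: int = 4,
) -> List[int]:
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. The cheap draft model proposes `draft_len` tokens.
        draft = []
        for _ in range(draft_len):
            draft.append(draft_next(tokens + draft))
        # 2. The main model verifies the proposals. Shown here position by
        #    position; a real runtime checks all positions in one batched
        #    forward pass, which is where the speedup comes from.
        for tok in draft:
            expected = target_next(tokens)
            if tok != expected:
                tokens.append(expected)  # replace first mismatch with target's token
                break
            tokens.append(tok)
        else:
            # All draft tokens accepted; the target yields one bonus token.
            tokens.append(target_next(tokens))
    return tokens[: len(prompt) + max_new_tokens]  # trim any overshoot
```

When the draft model's guesses are accepted, the main model effectively validates several tokens per forward pass instead of producing one token at a time, which is why SpecD can raise the token rate without changing the main model's greedy output.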

Modifications

This model is a derivative work with the following changes from the original:

  • Weights quantized for efficient on-device inference
  • Model compiled for execution on the Ara240 DNPU

Original model available at: Qwen/Qwen2.5-Coder-1.5B.
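Since this card does not document NXP's quantization and compilation toolchain, the sketch below is purely illustrative: it shows symmetric per-tensor int8 quantization with NumPy, plus the 4-bit pack/unpack step that the Unpack column in the Performance section refers to. All function names are hypothetical.

```python
# Illustrative quantization round-trip (not NXP's actual toolchain).
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack pairs of 4-bit values (range -8..7) into single bytes."""
    nib = (q.astype(np.int8) & 0x0F).astype(np.uint8).reshape(-1, 2)
    return (nib[:, 0] | (nib[:, 1] << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Unpack bytes back to 8-bit values, sign-extending each nibble."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = ((packed >> 4) & 0x0F).astype(np.int8)
    lo = np.where(lo > 7, lo - 16, lo)  # sign-extend low nibble
    hi = np.where(hi > 7, hi - 16, hi)  # sign-extend high nibble
    return np.stack([lo, hi], axis=1).reshape(-1)

w = np.random.randn(8).astype(np.float32)
q8, s8 = quantize_int8(w)
print("int8 max abs error:", np.abs(w - s8 * q8.astype(np.float32)).max())

s4 = np.abs(w).max() / 7.0
q4 = np.clip(np.round(w / s4), -8, 7).astype(np.int8)
assert np.array_equal(unpack_int4(pack_int4(q4)), q4)  # pack/unpack is lossless
```

Packing two 4-bit values per byte halves weight storage; unpacking them to 8 bits at runtime trades a little compute for that memory saving, which is the trade-off the Unpack option exposes.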

Limitations and Biases

This model inherits all limitations from the original Qwen/Qwen2.5-Coder-1.5B model. Additional limitations:

  • Hardware-specific: runs only on the Ara240 DNPU
  • Quantization effects: outputs may differ in accuracy from the full-precision original due to reduced numerical precision

License

This model is released under the Apache License 2.0, the same license as the original Qwen/Qwen2.5-Coder-1.5B model.

Citation

  • B. Hui et al., “Qwen2.5-Coder Technical Report,” arXiv preprint arXiv:2409.12186, 2024.
  • A. Yang et al., “Qwen2 Technical Report,” arXiv preprint arXiv:2407.10671, 2024.

If you use this model, please cite both this work and the original model:

@article{hui2024qwen2,
      title={Qwen2.5-Coder Technical Report},
      author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
      journal={arXiv preprint arXiv:2409.12186},
      year={2024}
}
@article{qwen2,
      title={Qwen2 Technical Report}, 
      author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
      journal={arXiv preprint arXiv:2407.10671},
      year={2024}
}