---
license: mit
language:
- zh
- en
base_model:
- Qwen/Qwen2.5-7B-Instruct-GPTQ-INT8
- Qwen/Qwen2.5-7B-Instruct-GPTQ-INT4
pipeline_tag: text-generation
library_name: transformers
tags:
- Context
- Qwen2.5-7B-Instruct-GPTQ-INT8
- Qwen2.5-7B-Instruct-GPTQ-INT4
---

# Qwen2.5-7B-Instruct

This version of Qwen2.5-7B-Instruct has been converted to run on the Axera NPU using **w8a16** and **w4a16** quantization.

Compatible with Pulsar2 version: 4.1

## Features

- Supports longer context; in this sample it is 2k (2047 tokens)
- Supports multi-turn (context) dialogue
- Supports caching the system prompt as a KV cache

## Convert tools links:

For those who are interested in model conversion, you can try to export the axmodel through:

[Pulsar2 Link, How to Convert LLM from Huggingface to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/appendix/build_llm.html)

[AXera NPU AXEngine LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/ax-context)

[AXera NPU AXCL LLM Runtime](https://github.com/AXERA-TECH/ax-llm/tree/axcl-context)

### Convert script

The following shows how to convert Qwen2.5-7B-Instruct-GPTQ-Int4. The multiple `--last_kv_cache_len` values correspond to the prefill groups reported by the runtime at init time (see the log below):

```
pulsar2 llm_build --input_path Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4 \
--output_path Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4-ctx-ax650 \
--hidden_state_type bf16 --kv_cache_len 2047 --prefill_len 128 \
--last_kv_cache_len 128 \
--last_kv_cache_len 256 \
--last_kv_cache_len 384 \
--last_kv_cache_len 512 \
--last_kv_cache_len 640 \
--last_kv_cache_len 768 \
--last_kv_cache_len 896 \
--last_kv_cache_len 1024 \
--chip AX650 -c 1 --parallel 8
```

## Support Platform

- AX650
  - AX650N DEMO Board
  - [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
  - [M4N-HAT](https://wiki.sipeed.com/hardware/zh/maixIV/m4nhat/intro.html)
  - [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)

|Chip|w8a16 speed|w4a16 speed|DDR (w8)|Flash (w8)|DDR (w4)|Flash (w4)|
|--|--|--|--|--|--|--|
|AX650|2.8 tokens/sec|5.0 tokens/sec| | |5.2 GB|5.7 GB|

## How to use

Download all files from this repository to the device.
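For example, the files can be fetched with `huggingface-cli` (a minimal sketch; the repository id `AXERA-TECH/Qwen2.5-7B-Instruct` is inferred from the paths used in the demo output below, and a `git clone` with git-lfs works just as well):

```
# Sketch: download this repository onto the device
# (repo id assumed from the demo paths; adjust if your repo id differs)
pip install -U "huggingface_hub[cli]"
huggingface-cli download AXERA-TECH/Qwen2.5-7B-Instruct --local-dir ./Qwen2.5-7B-Instruct
cd Qwen2.5-7B-Instruct
```

After downloading, the layout on the device should look like this: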
```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ tree -L 1
.
├── config.json
├── main_api
├── main_api_ax650
├── main_api_axcl_aarch64
├── main_api_axcl_x86
├── main_ax650
├── main_axcl_aarch64
├── main_axcl_x86
├── post_config.json
├── qwen2.5-7b-ctx-int4-ax650
├── qwen2.5_tokenizer
├── qwen2.5_tokenizer_uid.py
├── README.md
├── run_qwen2.5_7b_ctx_ax650.sh
├── run_qwen2.5_7b_ctx_int4_ax650.sh
├── run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
└── run_qwen2.5_7b_ctx_int4_axcl_x86.sh

3 directories, 15 files
```

#### Start the Tokenizer service

```
(axcl) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ python qwen2.5_tokenizer_uid.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Server running at http://0.0.0.0:12345
```

#### System prompt cache

- The system prompt can be preset through the `--system_prompt` option of the run script
- The system prompt can be cached as a KV cache in the folder given by `--kvcache_path`, so it can be loaded quickly on the next run
- This folder needs to be created manually before running, for example `mkdir kvcache` (see the sketch after the script below)

```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ cat run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
./main_axcl_aarch64 \
--template_filename_axmodel "qwen2.5-7b-ctx-int4-ax650/qwen2_p128_l%d_together.axmodel" \
--axmodel_num 28 \
--url_tokenizer_model "http://0.0.0.0:12345" \
--filename_post_axmodel "qwen2.5-7b-ctx-int4-ax650/qwen2_post.axmodel" \
--filename_tokens_embed "qwen2.5-7b-ctx-int4-ax650/model.embed_tokens.weight.bfloat16.bin" \
--tokens_embed_num 152064 \
--tokens_embed_size 3584 \
--use_mmap_load_embed 1 \
--live_print 1 \
--devices 0
#--system_prompt "你的名字叫小智(allen),你是一个人畜无害的AI助手。深圳市今天(4月1日)阴天,愚人节,气温在14°C至19°C之间,微风。" \
#--kvcache_path "./kvcache" \
```
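For example, to preset a system prompt and cache its KV state for later runs, create the cache folder and activate the two commented-out options at the end of the script (a sketch; the prompt text is only an illustration, and a line continuation is needed after `--devices 0`):

```
# Sketch: enable the system prompt KV cache (prompt text is illustrative)
mkdir -p kvcache
# tail of the ./main_axcl_aarch64 command after editing:
#   --live_print 1 \
#   --devices 0 \
#   --system_prompt "You are Qwen, created by Alibaba Cloud. You are a helpful assistant." \
#   --kvcache_path "./kvcache"
```

On the first run the system prompt is prefilled and its KV cache is written to `./kvcache`; subsequent runs load it from there instead of recomputing it.
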
#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro) or AX650N DEMO Board

TODO

#### Inference with M.2 Accelerator card

[What is an M.2 Accelerator card?](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html) This demo runs on a Raspberry Pi 5.

```
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $ ./run_qwen2.5_7b_ctx_int4_axcl_aarch64.sh
[I][ Init][ 130]: LLM init start
[I][ Init][ 34]: connect http://0.0.0.0:12345 ok
[I][ Init][ 57]: uid: ae9adea5-c64e-47df-92ca-29cbcc5a865f
bos_id: -1, eos_id: 151645
  3% | ██                                | 1 / 31 [0.49s<15.16s, 2.04 count/s] tokenizer init ok
[I][ Init][ 45]: LLaMaEmbedSelector use mmap
  6% | ███                               | 2 / 31 [0.49s<7.59s, 4.08 count/s] embed_selector init ok
[I][ run][ 30]: AXCLWorker start with devid 0
 54% | ██████████████████                | 17 / 31 [39.92s<77.35s, 0.40 count/s] init 24 axmodel ok,devid(0) remain_cmm(-1 MB)
100% | ████████████████████████████████ | 31 / 31 [80.60s<83.29s, 0.37 count/s] init post axmodel ok,remain_cmm(1324 MB)
[I][ Init][ 221]: max_token_len : 2047
[I][ Init][ 224]: kv_cache_size : 512, kv_cache_num: 2047
[I][ Init][ 232]: prefill_token_num : 128
[I][ Init][ 236]: grp: 1, prefill_max_token_num : 1
[I][ Init][ 236]: grp: 2, prefill_max_token_num : 128
[I][ Init][ 236]: grp: 3, prefill_max_token_num : 256
[I][ Init][ 236]: grp: 4, prefill_max_token_num : 384
[I][ Init][ 236]: grp: 5, prefill_max_token_num : 512
[I][ Init][ 236]: grp: 6, prefill_max_token_num : 640
[I][ Init][ 236]: grp: 7, prefill_max_token_num : 768
[I][ Init][ 236]: grp: 8, prefill_max_token_num : 896
[I][ Init][ 236]: grp: 9, prefill_max_token_num : 1024
[I][ Init][ 240]: prefill_max_token_num : 1024
________________________
|    ID| remain cmm(MB)|
========================
|     0|           1324|
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
[I][ load_config][ 282]: load config:
{
    "enable_repetition_penalty": false,
    "enable_temperature": true,
    "enable_top_k_sampling": true,
    "enable_top_p_sampling": false,
    "penalty_window": 20,
    "repetition_penalty": 1.2,
    "temperature": 0.9,
    "top_k": 10,
    "top_p": 0.8
}

[I][ Init][ 263]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
[I][ GenerateKVCachePrefill][ 324]: input token num : 21, prefill_split_num : 1 prefill_grpid : 2
[I][ GenerateKVCachePrefill][ 367]: input_num_token:21
[I][ main][ 234]: precompute_len: 21
[I][ main][ 235]: system_prompt: You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
prompt >> nice
[I][ SetKVCache][ 614]: prefill_grpid:2 kv_cache_num:128 precompute_len:21 input_num_token:9
[I][ SetKVCache][ 617]: current prefill_max_token_num:896
[I][ Run][ 855]: input token num : 9, prefill_split_num : 1
[I][ Run][ 887]: input_num_token:9
[I][ Run][1016]: ttft: 928.08 ms
Nice to meet you! If you have any questions or need some help, feel free to ask.

[N][ Run][1168]: hit eos,avg 4.36 token/s

[I][ GetKVCache][ 583]: precompute_len:50, remaining:974
prompt >> q
[I][ run][ 80]: AXCLWorker exit with devid 0
(base) axera@raspberrypi:~/samples/AXERA-TECH/Qwen2.5-7B-Instruct $
```
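The other run scripts in this repository follow the same pattern. For example, on an x86 host with the M.2 accelerator card the flow would look like this (a sketch; it assumes the tokenizer service is started first, exactly as shown above):

```
# Sketch: same flow on an x86 host with the AXCL M.2 card
python qwen2.5_tokenizer_uid.py &        # start the tokenizer service first
./run_qwen2.5_7b_ctx_int4_axcl_x86.sh    # then launch the x86 AXCL runtime
```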