Instructions to use meituan/DeepSeek-R1-Block-INT8 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use meituan/DeepSeek-R1-Block-INT8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="meituan/DeepSeek-R1-Block-INT8", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meituan/DeepSeek-R1-Block-INT8", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("meituan/DeepSeek-R1-Block-INT8", trust_remote_code=True)

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use meituan/DeepSeek-R1-Block-INT8 with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "meituan/DeepSeek-R1-Block-INT8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "meituan/DeepSeek-R1-Block-INT8",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```bash
docker model run hf.co/meituan/DeepSeek-R1-Block-INT8
```
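The same endpoint can also be called from Python instead of curl, since the vLLM server speaks the OpenAI API; a minimal sketch, assuming the pip-installed server above is listening on localhost:8000 (for the SGLang server below, swap the port to 30000):

```python
# pip install openai
from openai import OpenAI

# vLLM ignores the API key by default, but the client requires some value
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meituan/DeepSeek-R1-Block-INT8",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```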
- SGLang
How to use meituan/DeepSeek-R1-Block-INT8 with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "meituan/DeepSeek-R1-Block-INT8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "meituan/DeepSeek-R1-Block-INT8",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```bash
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "meituan/DeepSeek-R1-Block-INT8" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "meituan/DeepSeek-R1-Block-INT8",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use meituan/DeepSeek-R1-Block-INT8 with Docker Model Runner:
```bash
docker model run hf.co/meituan/DeepSeek-R1-Block-INT8
```
Optimal `weight_block_size` for Intel AMX `amx_int8` `amx_tile`?
I have access to a dual-socket Intel(R) Xeon(R) 6980P and, following advice from this post on Dual Intel Xeon NUMA Nodes, I'm trying this int8 quant to take advantage of the amx_tile and amx_int8 CPU flags.
However, I see the default `weight_block_size` is set to 128x128 in the configuration. Is this value appropriate for both GPU and CPU inference?
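For reference, the shipped value can be checked without downloading the weight shards; a minimal sketch, assuming the repo's config.json carries a `quantization_config` entry that includes `weight_block_size` (as the block-quantized DeepSeek-R1 releases do):

```python
from transformers import AutoConfig

# Fetch only config.json, not the weights
config = AutoConfig.from_pretrained("meituan/DeepSeek-R1-Block-INT8", trust_remote_code=True)

# quantization_config may come back as a dict or a config object depending on the transformers version
qcfg = getattr(config, "quantization_config", None)
if qcfg is not None and not isinstance(qcfg, dict):
    qcfg = qcfg.to_dict()
print(qcfg)
if qcfg:
    print("weight_block_size:", qcfg.get("weight_block_size"))  # expected [128, 128]
```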
Page 40 of the Intel® Architecture Instruction Set Extensions Programming Reference suggests the tile max rows is 16 and the tile max column width is 64 bytes:
Bits 07-00: tmul_maxk (rows or columns). Value = 16.
Bits 23-08: tmul_maxn (column bytes). Value = 64.
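Rough arithmetic from those limits, just to size things up (this says nothing about how any particular kernel actually schedules the tiles):

```python
# How many maximum-size AMX tiles would cover one 128x128 int8 weight block,
# given at most 16 rows per tile and 64 bytes per tile row (int8 = 1 byte/element)
import math

block_rows, block_cols = 128, 128
tile_rows, tile_cols = 16, 64 // 1  # 64 bytes per row / 1 byte per int8 element

tiles = math.ceil(block_rows / tile_rows) * math.ceil(block_cols / tile_cols)
print(tiles)  # 8 * 2 = 16 tiles per quantization block
```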
It may depend on whether the inference engine (e.g. ktransformers, llama.cpp, etc.) splits the model weight blocks into smaller per-thread tiles.
I may just have to try it and see, but if anyone knows whether Intel AMX CPU inference requires a smaller `weight_block_size` (e.g. 16x16), please advise.
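In the meantime, here is a quick sanity check that the flags are actually exposed on the host; a minimal sketch, assuming a Linux box where /proc/cpuinfo lists the CPU flags:

```python
# Report whether the AMX feature flags relevant to int8 inference are advertised by the CPU
def amx_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {name: name in flags for name in ("amx_tile", "amx_int8", "amx_bf16")}
    return {}

print(amx_flags())  # e.g. {'amx_tile': True, 'amx_int8': True, 'amx_bf16': True}
```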
Cheers and thanks!