Instructions to use Kwaipilot/KAT-Dev with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Kwaipilot/KAT-Dev with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Kwaipilot/KAT-Dev")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Kwaipilot/KAT-Dev")
model = AutoModelForCausalLM.from_pretrained("Kwaipilot/KAT-Dev")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Kwaipilot/KAT-Dev with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Kwaipilot/KAT-Dev"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kwaipilot/KAT-Dev",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker
docker model run hf.co/Kwaipilot/KAT-Dev
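However you start the server, the endpoint is OpenAI-compatible, so you can also call it from Python instead of curl. A minimal sketch, assuming the vLLM server above is listening on localhost:8000 and that the `openai` package is installed (`pip install openai`):

```python
# Minimal sketch: calling the vLLM OpenAI-compatible server from Python.
# Assumes the server started above is listening on http://localhost:8000
# and that the openai client is installed (pip install openai).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not require a real key unless --api-key is set
)

response = client.chat.completions.create(
    model="Kwaipilot/KAT-Dev",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
    ],
)
print(response.choices[0].message.content)
```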
- SGLang
How to use Kwaipilot/KAT-Dev with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Kwaipilot/KAT-Dev" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kwaipilot/KAT-Dev",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Kwaipilot/KAT-Dev" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kwaipilot/KAT-Dev",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use Kwaipilot/KAT-Dev with Docker Model Runner:
docker model run hf.co/Kwaipilot/KAT-Dev
Is KAT-Dev supposed to be that much slower than QWEN?
Hello, I tried both KAT-Dev (33B) and QWEN3-Coder-30B-A3B to compare speed. QWEN is significantly faster. I can't judge the quality difference yet, but from what I've heard from others I assume KAT-Dev is better.
My stats:
- KAT-Dev GGUF Q5_K_M: 16 tk/s
- KAT-Dev GGUF Q4_K_M: 27 tk/s
- QWEN3-Coder GGUF Q5_K_XL: 201 tk/s
Load settings:
- Both KAT-Dev GGUFs: Just V Cache Quantization Type set to Q8_0 (with Flash Attention on), rest default
- QWEN3-Coder GGUF: K- and V Cache Quantization Type set to Q8_0 (with Flash Attention on), rest default
Inference settings (these favor KAT-Dev):
- All 3 runs used the same inference settings:
Temp: 0.6,
Top K: 20,
Repeat Penalty: 1.05,
Min P Sampling: 0,
Top P Sampling: 0.95

1 x RTX 5090 (32 GB total VRAM)
2 x 32 GB RAM (64 GB total RAM) @ 5600 MHz
1 x Ryzen 9 7950x
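For anyone reproducing these sampling settings against a local OpenAI-compatible server from Python, a rough sketch; the base URL/port are assumptions, and non-standard parameters like top_k, min_p, and repeat_penalty only take effect if the serving backend supports them:

```python
# Rough sketch only: sending the same sampling settings to a local
# OpenAI-compatible server (base_url/port are assumptions).
# top_k / min_p / repeat_penalty are not part of the standard OpenAI schema;
# whether they are honored depends on the serving backend.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="KAT-Dev",  # whatever identifier your local server exposes
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    temperature=0.6,
    top_p=0.95,
    extra_body={"top_k": 20, "min_p": 0.0, "repeat_penalty": 1.05},
)
print(response.choices[0].message.content)
```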
Thanks for the benchmark! The speed difference is expected: KAT-Dev is a dense 32B model that processes all parameters for every token, while QWEN3-Coder is MoE with only ~3B activated parameters.
Your results (7.4x faster for QWEN) match the theory pretty well, since 32B/3B ≈ 10x. Dense models like KAT-Dev are more thorough but slower, while MoE gets you speed through selective activation.
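A quick back-of-envelope check of that ratio (ignoring quantization, cache settings, and memory bandwidth, which all matter in practice):

```python
# Back-of-envelope only: decode speed roughly scales with active parameters,
# ignoring quantization, memory bandwidth, and cache-quantization differences.
dense_active_params = 32e9   # KAT-Dev: dense, all ~32B params used per token
moe_active_params = 3e9      # Qwen3-Coder-30B-A3B: ~3B activated per token

theoretical_ratio = dense_active_params / moe_active_params
measured_ratio = 201 / 27    # 201 tk/s (QWEN Q5_K_XL) vs 27 tk/s (KAT-Dev Q4_K_M)

print(f"theoretical ~{theoretical_ratio:.1f}x, measured ~{measured_ratio:.1f}x")
# -> theoretical ~10.7x, measured ~7.4x
```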
Appreciate the detailed comparison!
@sk0d
If Kwaipilot releases a 0.6B or 14B model, you could use so-called "draft model" acceleration (speculative decoding), like this:
https://developer.nvidia.com/blog/boost-llama-3-3-70b-inference-throughput-3x-with-nvidia-tensorrt-llm-speculative-decoding/
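For reference, a hypothetical sketch of what that would look like with Transformers' assisted generation; the draft checkpoint name below does not exist and just stands in for a future small KAT model that shares KAT-Dev's tokenizer:

```python
# Hypothetical sketch: speculative decoding ("assisted generation") in Transformers.
# "Kwaipilot/KAT-Dev-0.6B" does NOT exist yet -- it stands in for a future
# small draft model sharing the same tokenizer as KAT-Dev.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kwaipilot/KAT-Dev")
model = AutoModelForCausalLM.from_pretrained("Kwaipilot/KAT-Dev", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained("Kwaipilot/KAT-Dev-0.6B", device_map="auto")  # hypothetical

inputs = tokenizer("Write a quicksort in Python.", return_tensors="pt").to(model.device)
# The large model only verifies the draft's proposed tokens, which can speed up
# decoding when the draft's acceptance rate is high.
outputs = model.generate(**inputs, assistant_model=draft, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```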
Do you use LM Studio? Does it work with Roo Code/Cline for you?