Instructions for using Kwaipilot/KAT-Dev-72B-Exp with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Kwaipilot/KAT-Dev-72B-Exp with Transformers:
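A note on hardware before running the snippets below: this is a 72B-parameter model, so plain `from_pretrained` on a single consumer GPU will not fit. As a rough sizing check (assuming a dense 72B-parameter model; the bytes-per-parameter figure depends on the dtype or quantization you load with):

```python
# Back-of-envelope weight memory for a dense 72B-parameter model.
# This counts weights only; KV cache and activations need extra headroom.
PARAMS = 72e9

def weight_gib(bytes_per_param: float) -> float:
    """Approximate weight footprint in GiB for a given dtype width."""
    return PARAMS * bytes_per_param / 1024**3

print(f"bf16/fp16 (2 bytes/param): ~{weight_gib(2):.0f} GiB")
print(f"int8      (1 byte/param):  ~{weight_gib(1):.0f} GiB")
print(f"4-bit     (0.5 byte/param):~{weight_gib(0.5):.0f} GiB")
```

Loading with `device_map="auto"` (to shard across available GPUs) and an explicit `torch_dtype` is the usual way to make the snippets below fit on multi-GPU machines.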
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Kwaipilot/KAT-Dev-72B-Exp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Kwaipilot/KAT-Dev-72B-Exp")
model = AutoModelForCausalLM.from_pretrained("Kwaipilot/KAT-Dev-72B-Exp")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Kwaipilot/KAT-Dev-72B-Exp with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Kwaipilot/KAT-Dev-72B-Exp"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kwaipilot/KAT-Dev-72B-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Kwaipilot/KAT-Dev-72B-Exp
```
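The curl call above can equally be issued from Python using only the standard library. A minimal sketch, assuming the vLLM server from the previous step is listening on localhost:8000 (the network call is wrapped in try/except so the request payload can still be inspected when no server is running):

```python
import json
import urllib.request

# Same JSON body as the curl example above.
payload = {
    "model": "Kwaipilot/KAT-Dev-72B-Exp",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
        # OpenAI-compatible schema: the reply text lives here.
        print(body["choices"][0]["message"]["content"])
except OSError as exc:  # e.g. the server has not been started yet
    print(f"request failed: {exc}")
```

Any OpenAI-compatible client library can be pointed at the same endpoint by overriding its base URL.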
- SGLang
How to use Kwaipilot/KAT-Dev-72B-Exp with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Kwaipilot/KAT-Dev-72B-Exp" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kwaipilot/KAT-Dev-72B-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Kwaipilot/KAT-Dev-72B-Exp" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kwaipilot/KAT-Dev-72B-Exp",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use Kwaipilot/KAT-Dev-72B-Exp with Docker Model Runner:
```shell
docker model run hf.co/Kwaipilot/KAT-Dev-72B-Exp
```
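The vLLM and SGLang servers above also support streaming on the same chat-completions endpoint: add `"stream": true` to the request body and the server returns Server-Sent Events, one `data:` line per chunk, terminated by `data: [DONE]`. A sketch of parsing such a stream, shown here against a hard-coded sample (the sample text is illustrative, not real model output) so it runs without a server:

```python
import json

# Shape of an OpenAI-compatible SSE chat-completion stream (sample data).
sample_stream = [
    'data: {"choices": [{"delta": {"content": "Paris"}}]}',
    'data: {"choices": [{"delta": {"content": " is the capital."}}]}',
    "data: [DONE]",
]

def collect_stream(lines):
    """Concatenate the delta fragments of an SSE chat-completion stream."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives / blank separators
        data = line[len("data: "):]
        if data == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

print(collect_stream(sample_stream))  # -> Paris is the capital.
```

Against a live server, the same parsing loop is applied to the response body line by line instead of a hard-coded list.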
Context Length Exceeded Issue - Difference between Official SWE-agent and Mini SWE-agent
Hi team,
I’m trying to reproduce the evaluation results of your model, but I’m running into issues with context-length management.
My Setup:
- Using the mini SWE-agent framework (only the bash tool available)
- Context length set to 128k
- Still frequently hitting “exceed context limit” errors
Official Setup (from your repo):
- Using the full SWE-agent (bash, str_replace_editor, and submit tools)
- Only 85k context length used
- Achieved good results without context issues
Questions:
1. Why does the official setup with an 85k context work better than my 128k setup?
2. Is the difference mainly due to the additional tools (str_replace_editor, submit)?
3. Do these tools help reduce context consumption?
4. Are there any specific context-management strategies used in the official evaluation?
5. Could you share more details about the context-window management in your evaluation setup?
Any guidance would be greatly appreciated!
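For reference while waiting on the maintainers: one common agent-side mitigation (not necessarily what the official harness does) is to drop the oldest tool observations once a running token estimate exceeds a budget, while pinning the system prompt. A minimal sketch, with a naive characters-per-token heuristic standing in for a real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Very rough stand-in for a real tokenizer (~4 characters per token)."""
    return max(1, len(text) // 4)

def trim_history(messages, budget_tokens, keep_first=1):
    """Drop the oldest non-pinned turns until the estimate fits the budget.

    `messages` is an OpenAI-style list of {"role", "content"} dicts;
    `keep_first` pins the leading message(s), e.g. the system prompt.
    """
    head, tail = messages[:keep_first], list(messages[keep_first:])

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while tail and total(head + tail) > budget_tokens:
        tail.pop(0)  # discard the oldest observation first
    return head + tail

history = [
    {"role": "system", "content": "You are a software engineering agent."},
    {"role": "user", "content": "x" * 4000},   # old, large tool observation
    {"role": "user", "content": "recent question"},
]
trimmed = trim_history(history, budget_tokens=200)
print([m["content"][:20] for m in trimmed])
```

A real harness would use the model's own tokenizer for counts and might summarize dropped observations rather than discarding them outright.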