Instructions to use moonshotai/Kimi-K2-Instruct with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use moonshotai/Kimi-K2-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="moonshotai/Kimi-K2-Instruct", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("moonshotai/Kimi-K2-Instruct", trust_remote_code=True, dtype="auto")
```
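To generate with the directly loaded model, here is a minimal sketch; the `device_map="auto"` placement and the generation settings are assumptions for illustration, not part of the official snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moonshotai/Kimi-K2-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    dtype="auto",
    device_map="auto",  # assumption: shard across available GPUs
)

# Build the prompt with the model's chat template, then generate.
messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```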
- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use moonshotai/Kimi-K2-Instruct with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "moonshotai/Kimi-K2-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "moonshotai/Kimi-K2-Instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
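Since the endpoint is OpenAI-compatible, the same request can also be made from Python with the `openai` client package (an extra dependency, `pip install openai`); a minimal sketch, assuming the local server above is running on port 8000 and requires no real API key:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```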
Use Docker

```shell
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model moonshotai/Kimi-K2-Instruct
```
- SGLang
How to use moonshotai/Kimi-K2-Instruct with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "moonshotai/Kimi-K2-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "moonshotai/Kimi-K2-Instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
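The OpenAI-compatible API also supports streaming; a minimal sketch with the `openai` package against the server above (port 30000), printing tokens as they arrive:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,  # receive incremental deltas instead of one final message
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```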
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "moonshotai/Kimi-K2-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "moonshotai/Kimi-K2-Instruct",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

- Docker Model Runner
How to use moonshotai/Kimi-K2-Instruct with Docker Model Runner:
```shell
docker model run hf.co/moonshotai/Kimi-K2-Instruct
```
Is Kimi K2 trained with FP8?
Thanks for sharing the model with the world. It's wonderful, and it has few enough active parameters to run quickly on a CPU.
Are you releasing training details or a whitepaper?
Really nice work and design choices, and it's easily accessible to a regular person!
We will discuss these details in our technical report.
I admire Kimi-K2 and am now exploring its API.
I am confused.
Section 2.4.3 of the tech report says "we do not apply FP8 in computation".
But from the size of the checkpoint, it seems the weights are stored in FP8.
Yes, the weights are stored in FP8. The report mainly describes training. For inference, we use blockwise FP8, the same as DeepSeek's FP8. We have tested the model on all benchmarks, and its performance is the same as BF16 inference.
So can we say the model was trained in BF16, and after training the weights were converted to blockwise FP8 for release?
Yes.
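For anyone wondering what "blockwise FP8" means in practice, here is a minimal PyTorch sketch of the idea: one float32 scale per 128x128 weight tile, mirroring the DeepSeek-style scheme mentioned above. The helper names are hypothetical, and the dimensions are assumed to divide evenly by the block size:

```python
import torch

def quantize_blockwise_fp8(w: torch.Tensor, block: int = 128):
    # One scale per (block x block) tile; each tile is divided by its
    # scale and cast to float8_e4m3fn. Assumes dims divide by `block`.
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3
    rows, cols = w.shape
    q = torch.empty(rows, cols, dtype=torch.float8_e4m3fn)
    scales = torch.empty(rows // block, cols // block, dtype=torch.float32)
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            tile = w[i:i + block, j:j + block].float()
            s = tile.abs().amax().clamp(min=1e-12) / fp8_max
            q[i:i + block, j:j + block] = (tile / s).to(torch.float8_e4m3fn)
            scales[i // block, j // block] = s
    return q, scales

def dequantize_blockwise_fp8(q: torch.Tensor, scales: torch.Tensor, block: int = 128) -> torch.Tensor:
    # Broadcast each tile's scale back over its block and rescale.
    s = scales.repeat_interleave(block, dim=0).repeat_interleave(block, dim=1)
    return q.float() * s

# Round-trip check on a random BF16 weight: the error should be small.
w = torch.randn(256, 256, dtype=torch.bfloat16)
q, scales = quantize_blockwise_fp8(w)
w_hat = dequantize_blockwise_fp8(q, scales)
print((w.float() - w_hat).abs().max())
```

At inference time an FP8 kernel would consume `q` and `scales` directly; the dequantized round trip here is only to illustrate the layout.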