Model Overview
Description:
The NVIDIA Kimi-K2.6-NVFP4 model is the quantized version of Moonshot AI's Kimi-K2.6 model, an auto-regressive language model that uses an optimized transformer architecture. For more information, please check here. The NVIDIA Kimi-K2.6-NVFP4 model is quantized with Model Optimizer.
This model is ready for commercial/non-commercial use.
Third-Party Community Consideration
This model is not owned or developed by NVIDIA. This model has been developed and built to a third-party’s requirements for this application and use case; see link to Non-NVIDIA (Kimi-K2.6) Model Card.
License/Terms of Use:
Governing Terms: Use of this model is governed by the NVIDIA Open Model License.
Additional Information: Modified MIT License (Kimi-K2.6).
Deployment Geography:
Global
Use Case:
Developers and inference providers who need ready-to-deploy, pre-quantized versions of popular generative models for NVIDIA GPU inference.
Release Date:
Hugging Face 05/13/2026 via https://huggingface.co/nvidia/Kimi-K2.6-NVFP4
Model Architecture:
Architecture Type: Transformers
Network Architecture: DeepSeek V3
Number of Model Parameters: 1T in total and 32B activated
Input:
Input Type(s): Text, Image, Video
Input Format(s): Text: String; Image: Binary (Base64-encoded); Video: Binary (Base64-encoded)
Input Parameters: Text: One-Dimensional (1D); Image: Two-Dimensional (2D); Video: Three-Dimensional (3D)
Other Properties Related to Input: Context length: 256k
Output:
Output Type(s): Text
Output Format: String
Output Parameters: 1D (One Dimensional): Sequences
Other Properties Related to Output: Outputs may include natural-language responses, code, structured JSON, tool-call requests, agent coordination instructions, and generated artifacts depending on serving configuration and application-level tooling.
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
Software Integration:
Supported Runtime Engine(s):
- vLLM
Supported Hardware Microarchitecture Compatibility:
- NVIDIA Blackwell
Preferred Operating System(s):
- Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Model Version(s):
The model version is Kimi-K2.6 NVFP4 version 1.0 and is quantized with nvidia-modelopt v0.44.0
Training and Evaluation Datasets:
We calibrated the model using the dataset noted below, and performed evaluation using the benchmarks noted under Evaluation Datasets.
We did not perform training or testing for this Model Optimizer release. The methods noted under Training and Testing Datasets below represent the data collection and labeling methods used by the third-party to train and test the underlying model.
Calibration Dataset:
** Link: cnn_dailymail, Nemotron-Post-Training-Dataset-v2
** Data Collection Method by dataset: Automated.
** Labeling Method by dataset: Automated.
** Properties: The cnn_dailymail dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The Nemotron-Post-Training-Dataset-v2 is a post-training dataset curated by NVIDIA containing multi-turn conversations across diverse topics.
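As a hedged illustration of how calibration text might be assembled (the exact pipeline used for this release is not documented here; the `load_dataset` call, sample count, and truncation length below are assumptions):

```python
# Hypothetical sketch: assemble bounded-length text samples for PTQ calibration.
# Loading the real datasets would use the Hugging Face `datasets` library, e.g.:
#   from datasets import load_dataset
#   articles = load_dataset("cnn_dailymail", "3.0.0", split="train[:512]")["article"]

def make_calib_samples(texts, n_samples=512, max_chars=4096):
    """Keep the first n_samples texts, truncated so each calibration
    forward pass stays within a bounded sequence length."""
    return [t[:max_chars] for t in texts[:n_samples]]

samples = make_calib_samples(["a" * 10000, "short article"], n_samples=2)
```

Calibration only runs forward passes to collect activation statistics, so a few hundred representative samples are typically sufficient.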
Training Dataset:
** Data Collection Method by dataset: Hybrid: Human, Automated
** Labeling Method by dataset: Hybrid: Human, Automated
** Data Modality: Text, Image, Video
** Training Data Size: Undisclosed
** Properties: Undisclosed
Testing Dataset:
** Data Collection Method by dataset: Hybrid: Human, Automated
** Labeling Method by dataset: Hybrid: Human, Automated
** Properties: Undisclosed
Evaluation Dataset:
- Datasets: GPQA Diamond, SciCode, τ²-Bench Telecom, MMMU Pro, AA-LCR, IFBench
** Data Collection Method by dataset: Hybrid: Automated, Human
** Labeling Method by dataset: Hybrid: Human, Automated
** Properties: We evaluated the model on text-based reasoning, coding, agentic tool-use, and multimodal benchmarks: GPQA Diamond contains 448 graduate-level multiple-choice questions written by domain experts in biology, physics, and chemistry; SciCode evaluates scientific coding capabilities; τ²-Bench Telecom evaluates agentic tool-use and policy-adherence capabilities in dual-control telecom customer-service scenarios where the model interacts with a simulated user and external tools to resolve account issues; MMMU Pro is the more challenging version of the Massive Multi-discipline Multimodal Understanding benchmark, measuring college-level multimodal reasoning across diverse disciplines with expanded answer choices and a vision-only input setting; AA-LCR (Artificial Analysis Long Context Recall) evaluates a model's ability to accurately retrieve and recall information from long input contexts; IFBench is a benchmark for evaluating instruction-following capabilities across diverse and structured task constraints.
Inference:
Acceleration Engine: vLLM
Test Hardware: NVIDIA B200
Post Training Quantization
This model was obtained by converting the weights of Kimi-K2.6 from its native INT4 format to BF16 and then quantizing the weights and activations to the NVFP4 data type, ready for inference with vLLM. Only the weights and activations of the linear operators within the MoE transformer blocks are quantized.
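NVFP4 stores values as 4-bit floating point (E2M1, eight representable magnitudes from 0 to 6) with a shared scale per small block of elements. The following is a simplified, hedged sketch of blockwise FP4 fake quantization; the real format also uses FP8 (E4M3) block scales and hardware details not modeled here:

```python
# Simplified sketch of blockwise FP4 (E2M1) fake quantization, pure Python.
# The eight non-negative magnitudes representable in E2M1:
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def fake_quant_fp4_block(block):
    """Quantize one block of floats: pick a shared scale that maps the
    largest magnitude onto 6.0, then round each value to the nearest
    representable E2M1 magnitude (sign handled separately)."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0.0] * len(block)
    scale = amax / 6.0
    out = []
    for x in block:
        mag = min(E2M1_VALUES, key=lambda v: abs(abs(x) / scale - v))
        out.append(mag * scale if x >= 0 else -mag * scale)
    return out
```

Because the scale is chosen per block (16 elements in the real format), outliers in one block do not degrade the resolution of the rest of the tensor.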
Usage
To serve this checkpoint with vLLM, you can start the Docker container vllm/vllm-openai:latest and run the sample command below:
python3 -m vllm.entrypoints.openai.api_server --model nvidia/Kimi-K2.6-NVFP4 --tensor-parallel-size 4 --tool-call-parser kimi_k2 --reasoning-parser kimi_k2 --trust-remote-code
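Once the server is up, it exposes an OpenAI-compatible API. A minimal client sketch, assuming vLLM's default port 8000 (the endpoint path is standard for vLLM's OpenAI-compatible server; the sampling parameters are illustrative):

```python
import json
import urllib.request

# Assumes the vLLM server started above is reachable on the default port.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="nvidia/Kimi-K2.6-NVFP4"):
    """Build the JSON body for an OpenAI-style chat completion request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 1.0,
        "top_p": 0.95,
    }
    return json.dumps(body).encode()

def chat(prompt):
    """POST the request and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Tool calls and reasoning traces are parsed server-side by the `kimi_k2` parsers passed on the command line, so they arrive as structured fields in the response rather than raw text.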
Evaluation
The accuracy benchmark results are presented in the table below:
| Precision | GPQA Diamond | SciCode | τ²-Bench Telecom | MMMU Pro | AA-LCR | IFBench |
|---|---|---|---|---|---|---|
| Baseline (INT4) | 90.9 | 52.6 | 98.2 | 75.6 | 71.0 | 73.9 |
| NVFP4 | 90.4 | 54.4 | 98.0 | 76.5 | 71.8 | 73.9 |
Baseline: Kimi-K2.6 in its native INT4 format. Benchmarked with temperature=1.0, top_p=0.95, and a maximum of 128,000 output tokens.
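Reading the table as printed, the NVFP4 checkpoint tracks the INT4 baseline closely; a quick check of the per-benchmark deltas:

```python
# Scores copied from the accuracy table above (Baseline INT4 vs. NVFP4).
baseline = {"GPQA Diamond": 90.9, "SciCode": 52.6, "tau2-Bench Telecom": 98.2,
            "MMMU Pro": 75.6, "AA-LCR": 71.0, "IFBench": 73.9}
nvfp4 = {"GPQA Diamond": 90.4, "SciCode": 54.4, "tau2-Bench Telecom": 98.0,
         "MMMU Pro": 76.5, "AA-LCR": 71.8, "IFBench": 73.9}

# Per-benchmark change after quantization, and the mean change.
deltas = {k: round(nvfp4[k] - baseline[k], 1) for k in baseline}
avg_delta = round(sum(deltas.values()) / len(deltas), 2)
```

On these numbers no benchmark moves by more than about two points in either direction, and the mean change is under half a point.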
Model Limitations:
The base model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even if the prompt itself does not include anything explicitly offensive.
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
Model tree for nvidia/Kimi-K2.6-NVFP4
Base model: moonshotai/Kimi-K2.6