Commit 58fa797: Duplicate from vincentzed-hf/Kimi-K2.5-MXFP8

This view is limited to 50 files because it contains too many changes.
- .gitattributes +36 -0
- README.md +167 -0
- chat_template.jinja +108 -0
- config.json +449 -0
- configuration_deepseek.py +214 -0
- configuration_kimi_k25.py +123 -0
- generation_config.json +4 -0
- hf_quant_config.json +260 -0
- kimi_k25_processor.py +165 -0
- kimi_k25_vision_processing.py +251 -0
- media_utils.py +368 -0
- model-00001-of-00214.safetensors +3 -0
- model-00002-of-00214.safetensors +3 -0
- model-00003-of-00214.safetensors +3 -0
- model-00004-of-00214.safetensors +3 -0
- model-00005-of-00214.safetensors +3 -0
- model-00006-of-00214.safetensors +3 -0
- model-00007-of-00214.safetensors +3 -0
- model-00008-of-00214.safetensors +3 -0
- model-00009-of-00214.safetensors +3 -0
- model-00010-of-00214.safetensors +3 -0
- model-00011-of-00214.safetensors +3 -0
- model-00012-of-00214.safetensors +3 -0
- model-00013-of-00214.safetensors +3 -0
- model-00014-of-00214.safetensors +3 -0
- model-00015-of-00214.safetensors +3 -0
- model-00016-of-00214.safetensors +3 -0
- model-00017-of-00214.safetensors +3 -0
- model-00018-of-00214.safetensors +3 -0
- model-00019-of-00214.safetensors +3 -0
- model-00020-of-00214.safetensors +3 -0
- model-00021-of-00214.safetensors +3 -0
- model-00022-of-00214.safetensors +3 -0
- model-00023-of-00214.safetensors +3 -0
- model-00024-of-00214.safetensors +3 -0
- model-00025-of-00214.safetensors +3 -0
- model-00026-of-00214.safetensors +3 -0
- model-00027-of-00214.safetensors +3 -0
- model-00028-of-00214.safetensors +3 -0
- model-00029-of-00214.safetensors +3 -0
- model-00030-of-00214.safetensors +3 -0
- model-00031-of-00214.safetensors +3 -0
- model-00032-of-00214.safetensors +3 -0
- model-00033-of-00214.safetensors +3 -0
- model-00034-of-00214.safetensors +3 -0
- model-00035-of-00214.safetensors +3 -0
- model-00036-of-00214.safetensors +3 -0
- model-00037-of-00214.safetensors +3 -0
- model-00038-of-00214.safetensors +3 -0
- model-00039-of-00214.safetensors +3 -0
.gitattributes
ADDED

*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
README.md
ADDED

---
pipeline_tag: image-text-to-text
base_model:
- moonshotai/Kimi-K2.5
license: mit
library_name: Model Optimizer
tags:
- nvidia
- ModelOpt
- KimiK25
- quantized
- MXFP8
- mxfp8
---

# Model Overview

## Description:
The NVIDIA Kimi-K2.5-MXFP8 model is a quantized version of Moonshot AI's Kimi-K2.5 model, a native multimodal agentic model with a Mixture of Experts (MoE) architecture. Kimi-K2.5 has 1T total parameters with 32B activated parameters, 384 routed experts (8 selected per token), and 61 transformer layers. For more information, refer to the [Kimi-K2.5 model card](https://huggingface.co/moonshotai/Kimi-K2.5). The NVIDIA Kimi-K2.5-MXFP8 model was quantized using the [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer).

This model is ready for commercial/non-commercial use. <br>

## Third-Party Community Consideration
This model is not owned or developed by NVIDIA. It has been developed and built to a third party's requirements for this application and use case; see the non-NVIDIA [Kimi-K2.5 model card](https://huggingface.co/moonshotai/Kimi-K2.5).

### License/Terms of Use:
[Modified MIT](https://huggingface.co/moonshotai/Kimi-K2.5/blob/main/LICENSE)

### Deployment Geography:
Global <br>

### Use Case:
Developers looking to take off-the-shelf, pre-quantized models for deployment in AI agent systems, chatbots, RAG systems, multimodal applications, and other AI-powered applications. <br>

### Release Date:
Hugging Face via https://huggingface.co/vincentzed-hf/Kimi-K2.5-MXFP8 <br>

## Model Architecture:
**Architecture Type:** Transformer (Mixture of Experts) <br>
**Network Architecture:** KimiK25ForConditionalGeneration (DeepseekV3-based) <br>
**This model was developed based on:** [Kimi-K2.5](https://huggingface.co/moonshotai/Kimi-K2.5) <br>
**Total Parameters:** 1T <br>
**Activated Parameters:** 32B <br>
**Number of Layers:** 61 (including 1 dense layer) <br>
**Number of Experts:** 384 routed, 1 shared, 8 selected per token <br>
**Vision Encoder:** MoonViT (400M parameters) <br>
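The expert configuration above (384 routed experts, 8 selected per token) can be sketched as a top-k softmax router. This is an illustrative sketch only, not Kimi-K2.5's actual gating code; the real DeepseekV3-style router adds expert grouping and bias terms not shown here.

```python
import math

def route_tokens(router_logits, k=8, n_experts=384):
    # Illustrative top-k gating: pick the k highest-scoring experts for a
    # token and renormalize their softmax weights over the selected set.
    assert len(router_logits) == n_experts
    m = max(router_logits)
    topk = sorted(range(n_experts), key=lambda i: router_logits[i], reverse=True)[:k]
    exps = {i: math.exp(router_logits[i] - m) for i in topk}
    z = sum(exps.values())
    return [(i, exps[i] / z) for i in topk]

# Toy input: monotonically increasing logits, so the 8 highest-indexed
# experts are selected.
weights = route_tokens([0.01 * i for i in range(384)])
```

The shared expert is applied to every token unconditionally, so only the routed experts go through this selection step.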
## Input:
**Input Type(s):** Text, Image, Video <br>
**Input Format(s):** String, Image tensors <br>
**Input Parameters:** Multi-modal <br>

## Output:
**Output Type(s):** Text <br>
**Output Format:** String <br>
**Output Parameters:** 1D (One-Dimensional): Sequences <br>

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>

## Software Integration:
**Runtime Engine(s):** <br>
* SGLang <br>

**Supported Hardware Microarchitecture Compatibility:** <br>
* NVIDIA Blackwell <br>

**Preferred Operating System(s):** <br>
* Linux <br>

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

## Model Version(s):
The model is quantized with nvidia-modelopt **0.41.0rc2.dev72+g886781332**. <br>

## Training, Testing, and Evaluation Datasets:

## Calibration Dataset:
* Link: [Nemotron-Post-Training-Dataset-v2](https://huggingface.co/datasets/nvidia/Nemotron-Post-Training-Dataset-v2) <br>
* Data collection method: Automated. <br>
* Labeling method: Automated. <br>

## Training Datasets:
* Data Collection Method by Dataset: Undisclosed <br>
* Labeling Method by Dataset: Undisclosed <br>
* Properties: Undisclosed

## Testing Dataset:
* Data Collection Method by Dataset: Undisclosed <br>
* Labeling Method by Dataset: Undisclosed <br>
* Properties: Undisclosed <br>

## Evaluation Dataset:
* Data collection method: Hybrid: Automated, Human <br>
* Labeling method: Hybrid: Human, Automated <br>

## Inference:
**Acceleration Engine:** SGLang <br>
**Test Hardware:** B300 <br>

## Post Training Quantization
This model was obtained by quantizing the weights of Kimi-K2.5 to the MXFP8 data type, ready for inference with SGLang. Only the weights of the linear operators within transformer blocks are quantized (excluding attention projections, the vision tower, and the mm_projector). This optimization reduces the number of bits per parameter from 16 to 8, reducing disk size and GPU memory requirements by approximately 2x.
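As a rough sanity check on the ~2x figure, assuming essentially all 1T parameters live in the quantized linear layers and that MXFP8 stores one shared 8-bit scale per 32-element block:

```python
# Back-of-envelope size estimate (illustrative; ignores the unquantized
# attention projections, vision tower, and mm_projector noted above).
params = 1.0e12
bf16_bytes = params * 2                  # 16 bits per weight
mxfp8_bytes = params * 1 + params / 32   # 8-bit elements + one 8-bit scale per 32-element block
ratio = bf16_bytes / mxfp8_bytes
print(round(ratio, 2))
```

The per-block scale overhead is why the saving is slightly under a clean 2x.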
## Usage

### Deploy with SGLang

To serve the quantized MXFP8 checkpoint with [SGLang](https://github.com/sgl-project/sglang):

```bash
python3 -m sglang.launch_server --model-path vincentzed-hf/Kimi-K2.5-MXFP8 --quantization modelopt
```

Please install SGLang from source: `git clone git@github.com:sgl-project/sglang.git`

Once the repo is cloned, run `uv pip install -e "python[all]"` and then the serve command above.
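Once the server is up, SGLang exposes an OpenAI-compatible HTTP API. Below is a sketch of a multimodal chat request payload; the port (30000, SGLang's default) and the example image URL are assumptions, so adjust them for your deployment.

```python
import json

payload = {
    "model": "vincentzed-hf/Kimi-K2.5-MXFP8",
    "messages": [
        {
            "role": "user",
            "content": [
                # Hypothetical example image URL.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    "max_tokens": 256,
}
body = json.dumps(payload)
# POST this body to http://localhost:30000/v1/chat/completions
# with header Content-Type: application/json.
```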
### Reproduce with ModelOpt

To reproduce the MXFP8 quantized checkpoint yourself using the [TensorRT Model Optimizer](https://github.com/NVIDIA/TensorRT-Model-Optimizer):

```bash
python3 examples/llm_ptq/hf_ptq.py \
    --pyt_ckpt_path /root/.cache/huggingface/hub/models--moonshotai--Kimi-K2.5/snapshots/c0d6821ed3d48201b834278fb99d8f2d37732a52 \
    --qformat mxfp8 \
    --kv_cache_qformat none \
    --export_path ./kimi-k2.5-mxfp8 \
    --trust_remote_code
```

### Evaluation
Accuracy benchmark results will be added here:
<table>
  <tr>
    <td><strong>Precision</strong></td>
    <td><strong>Benchmark 1</strong></td>
    <td><strong>Benchmark 2</strong></td>
  </tr>
  <tr>
    <td>BF16</td>
    <td><!-- TODO --></td>
    <td><!-- TODO --></td>
  </tr>
  <tr>
    <td>MXFP8</td>
    <td><!-- TODO --></td>
    <td><!-- TODO --></td>
  </tr>
</table>

> Baseline: [Kimi-K2.5](https://huggingface.co/moonshotai/Kimi-K2.5).

## Model Limitations:
The base model was trained on data that contains toxic language and societal biases originally crawled from the internet. The model may therefore amplify those biases and return toxic responses, especially when prompted with toxic prompts. It may also generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable output even when the prompt itself contains nothing explicitly offensive.

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using this model in accordance with our terms of service, developers should work with their internal model team to ensure it meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report model quality, risk, security vulnerabilities or NVIDIA AI concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
chat_template.jinja
ADDED

{%- macro render_content(msg) -%}
{%- set c = msg.get('content') -%}
{%- if c is string -%}
{{ c }}
{%- elif c is not none -%}
{% for content in c -%}
{% if content['type'] == 'image' or content['type'] == 'image_url' -%}
<|media_begin|>image<|media_content|><|media_pad|><|media_end|>
{% elif content['type'] == 'video' or content['type'] == 'video_url' -%}
<|kimi_k25_video_placeholder|>
{% else -%}
{{ content['text'] }}
{%- endif -%}
{%- endfor -%}
{%- endif -%}
{%- endmacro -%}

{% macro set_roles(message) -%}
{%- set role_name = message.get('name') or message['role'] -%}
{%- if message['role'] == 'user' -%}
<|im_user|>{{role_name}}<|im_middle|>
{%- elif message['role'] == 'assistant' -%}
<|im_assistant|>{{role_name}}<|im_middle|>
{%- else -%}
<|im_system|>{{role_name}}<|im_middle|>
{%- endif -%}
{%- endmacro -%}


{%- macro render_toolcalls(message) -%}
<|tool_calls_section_begin|>
{%- for tool_call in message['tool_calls'] -%}
{%- set formatted_id = tool_call['id'] -%}
<|tool_call_begin|>{{ formatted_id }}<|tool_call_argument_begin|>{% if tool_call['function']['arguments'] is string %}{{ tool_call['function']['arguments'] }}{% else %}{{ tool_call['function']['arguments'] | tojson }}{% endif %}<|tool_call_end|>
{%- endfor -%}
<|tool_calls_section_end|>
{%- endmacro -%}


{# Find the last non-tool-call assistant message #}
{%- set ns = namespace(last_non_tool_call_assistant_msg=-1) -%}
{%- for idx in range(messages|length-1, -1, -1) -%}
{%- if messages[idx]['role'] == 'assistant' and not messages[idx].get('tool_calls') -%}
{%- set ns.last_non_tool_call_assistant_msg = idx -%}
{%- break -%}
{%- endif -%}
{%- endfor -%}

{# Split all messages into history & suffix; reasoning_content in the suffix should be preserved. #}
{%- set hist_msgs = messages[:ns.last_non_tool_call_assistant_msg+1] -%}
{%- set suffix_msgs = messages[ns.last_non_tool_call_assistant_msg+1:] -%}

{%- if tools -%}
{%- if tools_ts_str -%}
<|im_system|>tool_declare<|im_middle|>{{ tools_ts_str }}<|im_end|>
{%- else -%}
<|im_system|>tool_declare<|im_middle|>{{ tools | tojson(separators=(',', ':')) }}<|im_end|>
{%- endif -%}
{%- endif -%}

{%- for message in hist_msgs -%}
{{set_roles(message)}}
{%- if message['role'] == 'assistant' -%}
<think></think>{{render_content(message)}}
{%- if message.get('tool_calls') -%}
{{render_toolcalls(message)}}
{%- endif -%}
{%- elif message['role'] == 'tool' -%}
{%- set tool_call_id = message.tool_call_id -%}
## Return of {{ tool_call_id }}
{{render_content(message)}}
{%- elif message['content'] is not none -%}
{{render_content(message)}}
{%- endif -%}
<|im_end|>
{%- endfor -%}

{%- for message in suffix_msgs -%}
{{set_roles(message)}}
{%- if message['role'] == 'assistant' -%}
{%- if thinking is defined and thinking is false -%}
<think></think>{{render_content(message)}}
{%- else -%}
{%- set rc = message.get('reasoning_content', '') -%}
<think>{{rc}}</think>{{render_content(message)}}
{%- endif -%}
{%- if message.get('tool_calls') -%}
{{render_toolcalls(message)}}
{%- endif -%}
{%- elif message['role'] == 'tool' -%}
{%- set tool_call_id = message.tool_call_id -%}
## Return of {{ tool_call_id }}
{{render_content(message)}}
{%- elif message['content'] is not none -%}
{{render_content(message)}}
{%- endif -%}
<|im_end|>
{%- endfor -%}


{%- if add_generation_prompt -%}
<|im_assistant|>assistant<|im_middle|>
{%- if thinking is defined and thinking is false -%}
<think></think>
{%- else -%}
<think>
{%- endif -%}
{%- endif -%}
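The `render_content` macro above maps multimodal content parts onto Kimi's special media tokens. A plain-Python mirror of that logic, for illustration only (this helper is not part of the repository):

```python
IMAGE_SPAN = "<|media_begin|>image<|media_content|><|media_pad|><|media_end|>"
VIDEO_SPAN = "<|kimi_k25_video_placeholder|>"

def render_content(content):
    # Mirrors the Jinja macro: strings pass through unchanged; image parts
    # become the media span, video parts become the video placeholder, and
    # any other part contributes its 'text' field.
    if isinstance(content, str):
        return content
    if content is None:
        return ""
    out = []
    for part in content:
        t = part["type"]
        if t in ("image", "image_url"):
            out.append(IMAGE_SPAN)
        elif t in ("video", "video_url"):
            out.append(VIDEO_SPAN)
        else:
            out.append(part["text"])
    return "".join(out)
```

Note that the actual pad token count for an image is decided later by the processor, not by the template, which only emits a single `<|media_pad|>` placeholder.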
config.json
ADDED

{
  "architectures": [
    "KimiK25ForConditionalGeneration"
  ],
  "auto_map": {
    "AutoConfig": "configuration_kimi_k25.KimiK25Config",
    "AutoModel": "modeling_kimi_k25.KimiK25ForConditionalGeneration",
    "AutoModelForCausalLM": "modeling_kimi_k25.KimiK25ForConditionalGeneration"
  },
  "bos_token_id": 163584,
  "dtype": "bfloat16",
  "eos_token_id": 163585,
  "ignore_index": -100,
  "media_placeholder_token_id": 163605,
  "model_type": "kimi_k25",
  "pad_token_id": 163839,
  "quantization_config": {
    "ignore": [
      "language_model.lm_head",
      "language_model.model.layers.0.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.0.self_attn.kv_b_proj",
      "language_model.model.layers.0.self_attn.q_a_proj",
      "language_model.model.layers.0.self_attn.q_b_proj",
      "language_model.model.layers.1.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.1.self_attn.kv_b_proj",
      "language_model.model.layers.1.self_attn.q_a_proj",
      "language_model.model.layers.1.self_attn.q_b_proj",
      "language_model.model.layers.10.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.10.self_attn.kv_b_proj",
      "language_model.model.layers.10.self_attn.q_a_proj",
      "language_model.model.layers.10.self_attn.q_b_proj",
      "language_model.model.layers.11.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.11.self_attn.kv_b_proj",
      "language_model.model.layers.11.self_attn.q_a_proj",
      "language_model.model.layers.11.self_attn.q_b_proj",
      "language_model.model.layers.12.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.12.self_attn.kv_b_proj",
      "language_model.model.layers.12.self_attn.q_a_proj",
      "language_model.model.layers.12.self_attn.q_b_proj",
      "language_model.model.layers.13.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.13.self_attn.kv_b_proj",
      "language_model.model.layers.13.self_attn.q_a_proj",
      "language_model.model.layers.13.self_attn.q_b_proj",
      "language_model.model.layers.14.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.14.self_attn.kv_b_proj",
      "language_model.model.layers.14.self_attn.q_a_proj",
      "language_model.model.layers.14.self_attn.q_b_proj",
      "language_model.model.layers.15.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.15.self_attn.kv_b_proj",
      "language_model.model.layers.15.self_attn.q_a_proj",
      "language_model.model.layers.15.self_attn.q_b_proj",
      "language_model.model.layers.16.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.16.self_attn.kv_b_proj",
      "language_model.model.layers.16.self_attn.q_a_proj",
      "language_model.model.layers.16.self_attn.q_b_proj",
      "language_model.model.layers.17.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.17.self_attn.kv_b_proj",
      "language_model.model.layers.17.self_attn.q_a_proj",
      "language_model.model.layers.17.self_attn.q_b_proj",
      "language_model.model.layers.18.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.18.self_attn.kv_b_proj",
      "language_model.model.layers.18.self_attn.q_a_proj",
      "language_model.model.layers.18.self_attn.q_b_proj",
      "language_model.model.layers.19.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.19.self_attn.kv_b_proj",
      "language_model.model.layers.19.self_attn.q_a_proj",
      "language_model.model.layers.19.self_attn.q_b_proj",
      "language_model.model.layers.2.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.2.self_attn.kv_b_proj",
      "language_model.model.layers.2.self_attn.q_a_proj",
      "language_model.model.layers.2.self_attn.q_b_proj",
      "language_model.model.layers.20.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.20.self_attn.kv_b_proj",
      "language_model.model.layers.20.self_attn.q_a_proj",
      "language_model.model.layers.20.self_attn.q_b_proj",
      "language_model.model.layers.21.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.21.self_attn.kv_b_proj",
      "language_model.model.layers.21.self_attn.q_a_proj",
      "language_model.model.layers.21.self_attn.q_b_proj",
      "language_model.model.layers.22.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.22.self_attn.kv_b_proj",
      "language_model.model.layers.22.self_attn.q_a_proj",
      "language_model.model.layers.22.self_attn.q_b_proj",
      "language_model.model.layers.23.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.23.self_attn.kv_b_proj",
      "language_model.model.layers.23.self_attn.q_a_proj",
      "language_model.model.layers.23.self_attn.q_b_proj",
      "language_model.model.layers.24.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.24.self_attn.kv_b_proj",
      "language_model.model.layers.24.self_attn.q_a_proj",
      "language_model.model.layers.24.self_attn.q_b_proj",
      "language_model.model.layers.25.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.25.self_attn.kv_b_proj",
      "language_model.model.layers.25.self_attn.q_a_proj",
      "language_model.model.layers.25.self_attn.q_b_proj",
      "language_model.model.layers.26.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.26.self_attn.kv_b_proj",
      "language_model.model.layers.26.self_attn.q_a_proj",
      "language_model.model.layers.26.self_attn.q_b_proj",
      "language_model.model.layers.27.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.27.self_attn.kv_b_proj",
      "language_model.model.layers.27.self_attn.q_a_proj",
      "language_model.model.layers.27.self_attn.q_b_proj",
      "language_model.model.layers.28.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.28.self_attn.kv_b_proj",
      "language_model.model.layers.28.self_attn.q_a_proj",
      "language_model.model.layers.28.self_attn.q_b_proj",
      "language_model.model.layers.29.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.29.self_attn.kv_b_proj",
      "language_model.model.layers.29.self_attn.q_a_proj",
      "language_model.model.layers.29.self_attn.q_b_proj",
      "language_model.model.layers.3.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.3.self_attn.kv_b_proj",
      "language_model.model.layers.3.self_attn.q_a_proj",
      "language_model.model.layers.3.self_attn.q_b_proj",
      "language_model.model.layers.30.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.30.self_attn.kv_b_proj",
      "language_model.model.layers.30.self_attn.q_a_proj",
      "language_model.model.layers.30.self_attn.q_b_proj",
      "language_model.model.layers.31.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.31.self_attn.kv_b_proj",
      "language_model.model.layers.31.self_attn.q_a_proj",
      "language_model.model.layers.31.self_attn.q_b_proj",
      "language_model.model.layers.32.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.32.self_attn.kv_b_proj",
      "language_model.model.layers.32.self_attn.q_a_proj",
      "language_model.model.layers.32.self_attn.q_b_proj",
      "language_model.model.layers.33.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.33.self_attn.kv_b_proj",
      "language_model.model.layers.33.self_attn.q_a_proj",
      "language_model.model.layers.33.self_attn.q_b_proj",
      "language_model.model.layers.34.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.34.self_attn.kv_b_proj",
      "language_model.model.layers.34.self_attn.q_a_proj",
      "language_model.model.layers.34.self_attn.q_b_proj",
      "language_model.model.layers.35.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.35.self_attn.kv_b_proj",
      "language_model.model.layers.35.self_attn.q_a_proj",
      "language_model.model.layers.35.self_attn.q_b_proj",
      "language_model.model.layers.36.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.36.self_attn.kv_b_proj",
      "language_model.model.layers.36.self_attn.q_a_proj",
      "language_model.model.layers.36.self_attn.q_b_proj",
      "language_model.model.layers.37.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.37.self_attn.kv_b_proj",
      "language_model.model.layers.37.self_attn.q_a_proj",
      "language_model.model.layers.37.self_attn.q_b_proj",
      "language_model.model.layers.38.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.38.self_attn.kv_b_proj",
      "language_model.model.layers.38.self_attn.q_a_proj",
      "language_model.model.layers.38.self_attn.q_b_proj",
      "language_model.model.layers.39.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.39.self_attn.kv_b_proj",
      "language_model.model.layers.39.self_attn.q_a_proj",
      "language_model.model.layers.39.self_attn.q_b_proj",
      "language_model.model.layers.4.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.4.self_attn.kv_b_proj",
      "language_model.model.layers.4.self_attn.q_a_proj",
      "language_model.model.layers.4.self_attn.q_b_proj",
      "language_model.model.layers.40.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.40.self_attn.kv_b_proj",
      "language_model.model.layers.40.self_attn.q_a_proj",
      "language_model.model.layers.40.self_attn.q_b_proj",
      "language_model.model.layers.41.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.41.self_attn.kv_b_proj",
      "language_model.model.layers.41.self_attn.q_a_proj",
      "language_model.model.layers.41.self_attn.q_b_proj",
      "language_model.model.layers.42.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.42.self_attn.kv_b_proj",
      "language_model.model.layers.42.self_attn.q_a_proj",
      "language_model.model.layers.42.self_attn.q_b_proj",
      "language_model.model.layers.43.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.43.self_attn.kv_b_proj",
      "language_model.model.layers.43.self_attn.q_a_proj",
      "language_model.model.layers.43.self_attn.q_b_proj",
      "language_model.model.layers.44.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.44.self_attn.kv_b_proj",
      "language_model.model.layers.44.self_attn.q_a_proj",
      "language_model.model.layers.44.self_attn.q_b_proj",
      "language_model.model.layers.45.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.45.self_attn.kv_b_proj",
      "language_model.model.layers.45.self_attn.q_a_proj",
      "language_model.model.layers.45.self_attn.q_b_proj",
|
| 184 |
+
"language_model.model.layers.46.self_attn.kv_a_proj_with_mqa",
|
| 185 |
+
"language_model.model.layers.46.self_attn.kv_b_proj",
|
| 186 |
+
"language_model.model.layers.46.self_attn.q_a_proj",
|
| 187 |
+
"language_model.model.layers.46.self_attn.q_b_proj",
|
| 188 |
+
"language_model.model.layers.47.self_attn.kv_a_proj_with_mqa",
|
| 189 |
+
"language_model.model.layers.47.self_attn.kv_b_proj",
|
| 190 |
+
"language_model.model.layers.47.self_attn.q_a_proj",
|
| 191 |
+
"language_model.model.layers.47.self_attn.q_b_proj",
|
| 192 |
+
"language_model.model.layers.48.self_attn.kv_a_proj_with_mqa",
|
| 193 |
+
"language_model.model.layers.48.self_attn.kv_b_proj",
|
| 194 |
+
"language_model.model.layers.48.self_attn.q_a_proj",
|
| 195 |
+
"language_model.model.layers.48.self_attn.q_b_proj",
|
| 196 |
+
"language_model.model.layers.49.self_attn.kv_a_proj_with_mqa",
|
| 197 |
+
"language_model.model.layers.49.self_attn.kv_b_proj",
|
| 198 |
+
"language_model.model.layers.49.self_attn.q_a_proj",
|
| 199 |
+
"language_model.model.layers.49.self_attn.q_b_proj",
|
| 200 |
+
"language_model.model.layers.5.self_attn.kv_a_proj_with_mqa",
|
| 201 |
+
"language_model.model.layers.5.self_attn.kv_b_proj",
|
| 202 |
+
"language_model.model.layers.5.self_attn.q_a_proj",
|
| 203 |
+
"language_model.model.layers.5.self_attn.q_b_proj",
|
| 204 |
+
"language_model.model.layers.50.self_attn.kv_a_proj_with_mqa",
|
| 205 |
+
"language_model.model.layers.50.self_attn.kv_b_proj",
|
| 206 |
+
"language_model.model.layers.50.self_attn.q_a_proj",
|
| 207 |
+
"language_model.model.layers.50.self_attn.q_b_proj",
|
| 208 |
+
"language_model.model.layers.51.self_attn.kv_a_proj_with_mqa",
|
| 209 |
+
"language_model.model.layers.51.self_attn.kv_b_proj",
|
| 210 |
+
"language_model.model.layers.51.self_attn.q_a_proj",
|
| 211 |
+
"language_model.model.layers.51.self_attn.q_b_proj",
|
| 212 |
+
"language_model.model.layers.52.self_attn.kv_a_proj_with_mqa",
|
| 213 |
+
"language_model.model.layers.52.self_attn.kv_b_proj",
|
| 214 |
+
"language_model.model.layers.52.self_attn.q_a_proj",
|
| 215 |
+
"language_model.model.layers.52.self_attn.q_b_proj",
|
| 216 |
+
"language_model.model.layers.53.self_attn.kv_a_proj_with_mqa",
|
| 217 |
+
"language_model.model.layers.53.self_attn.kv_b_proj",
|
| 218 |
+
"language_model.model.layers.53.self_attn.q_a_proj",
|
| 219 |
+
"language_model.model.layers.53.self_attn.q_b_proj",
|
| 220 |
+
"language_model.model.layers.54.self_attn.kv_a_proj_with_mqa",
|
| 221 |
+
"language_model.model.layers.54.self_attn.kv_b_proj",
|
| 222 |
+
"language_model.model.layers.54.self_attn.q_a_proj",
|
| 223 |
+
"language_model.model.layers.54.self_attn.q_b_proj",
|
| 224 |
+
"language_model.model.layers.55.self_attn.kv_a_proj_with_mqa",
|
| 225 |
+
"language_model.model.layers.55.self_attn.kv_b_proj",
|
| 226 |
+
"language_model.model.layers.55.self_attn.q_a_proj",
|
| 227 |
+
"language_model.model.layers.55.self_attn.q_b_proj",
|
| 228 |
+
"language_model.model.layers.56.self_attn.kv_a_proj_with_mqa",
|
| 229 |
+
"language_model.model.layers.56.self_attn.kv_b_proj",
|
| 230 |
+
"language_model.model.layers.56.self_attn.q_a_proj",
|
| 231 |
+
"language_model.model.layers.56.self_attn.q_b_proj",
|
| 232 |
+
"language_model.model.layers.57.self_attn.kv_a_proj_with_mqa",
|
| 233 |
+
"language_model.model.layers.57.self_attn.kv_b_proj",
|
| 234 |
+
"language_model.model.layers.57.self_attn.q_a_proj",
|
| 235 |
+
"language_model.model.layers.57.self_attn.q_b_proj",
|
| 236 |
+
"language_model.model.layers.58.self_attn.kv_a_proj_with_mqa",
|
| 237 |
+
"language_model.model.layers.58.self_attn.kv_b_proj",
|
| 238 |
+
"language_model.model.layers.58.self_attn.q_a_proj",
|
| 239 |
+
"language_model.model.layers.58.self_attn.q_b_proj",
|
| 240 |
+
"language_model.model.layers.59.self_attn.kv_a_proj_with_mqa",
|
| 241 |
+
"language_model.model.layers.59.self_attn.kv_b_proj",
|
| 242 |
+
"language_model.model.layers.59.self_attn.q_a_proj",
|
| 243 |
+
"language_model.model.layers.59.self_attn.q_b_proj",
|
| 244 |
+
"language_model.model.layers.6.self_attn.kv_a_proj_with_mqa",
|
| 245 |
+
"language_model.model.layers.6.self_attn.kv_b_proj",
|
| 246 |
+
"language_model.model.layers.6.self_attn.q_a_proj",
|
| 247 |
+
"language_model.model.layers.6.self_attn.q_b_proj",
|
| 248 |
+
"language_model.model.layers.60.self_attn.kv_a_proj_with_mqa",
|
| 249 |
+
"language_model.model.layers.60.self_attn.kv_b_proj",
|
| 250 |
+
"language_model.model.layers.60.self_attn.q_a_proj",
|
| 251 |
+
"language_model.model.layers.60.self_attn.q_b_proj",
|
| 252 |
+
"language_model.model.layers.7.self_attn.kv_a_proj_with_mqa",
|
| 253 |
+
"language_model.model.layers.7.self_attn.kv_b_proj",
|
| 254 |
+
"language_model.model.layers.7.self_attn.q_a_proj",
|
| 255 |
+
"language_model.model.layers.7.self_attn.q_b_proj",
|
| 256 |
+
"language_model.model.layers.8.self_attn.kv_a_proj_with_mqa",
|
| 257 |
+
"language_model.model.layers.8.self_attn.kv_b_proj",
|
| 258 |
+
"language_model.model.layers.8.self_attn.q_a_proj",
|
| 259 |
+
"language_model.model.layers.8.self_attn.q_b_proj",
|
| 260 |
+
"language_model.model.layers.9.self_attn.kv_a_proj_with_mqa",
|
| 261 |
+
"language_model.model.layers.9.self_attn.kv_b_proj",
|
| 262 |
+
"language_model.model.layers.9.self_attn.q_a_proj",
|
| 263 |
+
"language_model.model.layers.9.self_attn.q_b_proj",
|
| 264 |
+
"mm_projector*",
|
| 265 |
+
"vision_tower*"
|
| 266 |
+
],
|
| 267 |
+
"quant_algo": "MXFP8",
|
| 268 |
+
"producer": {
|
| 269 |
+
"name": "modelopt",
|
| 270 |
+
"version": "0.41.0rc2.dev72+g886781332"
|
| 271 |
+
},
|
| 272 |
+
"quant_method": "modelopt"
|
| 273 |
+
},
|
| 274 |
+
"text_config": {
|
| 275 |
+
"_name_or_path": "",
|
| 276 |
+
"add_cross_attention": false,
|
| 277 |
+
"architectures": [
|
| 278 |
+
"DeepseekV3ForCausalLM"
|
| 279 |
+
],
|
| 280 |
+
"attention_bias": false,
|
| 281 |
+
"attention_dropout": 0.0,
|
| 282 |
+
"auto_map": {
|
| 283 |
+
"AutoConfig": "configuration_deepseek.DeepseekV3Config",
|
| 284 |
+
"AutoModel": "modeling_deepseek.DeepseekV3Model",
|
| 285 |
+
"AutoModelForCausalLM": "modeling_deepseek.DeepseekV3ForCausalLM"
|
| 286 |
+
},
|
| 287 |
+
"aux_loss_alpha": 0.001,
|
| 288 |
+
"bad_words_ids": null,
|
| 289 |
+
"begin_suppress_tokens": null,
|
| 290 |
+
"bos_token_id": 163584,
|
| 291 |
+
"chunk_size_feed_forward": 0,
|
| 292 |
+
"cross_attention_hidden_size": null,
|
| 293 |
+
"decoder_start_token_id": null,
|
| 294 |
+
"diversity_penalty": 0.0,
|
| 295 |
+
"do_sample": false,
|
| 296 |
+
"dtype": "bfloat16",
|
| 297 |
+
"early_stopping": false,
|
| 298 |
+
"encoder_no_repeat_ngram_size": 0,
|
| 299 |
+
"eos_token_id": 163585,
|
| 300 |
+
"ep_size": 1,
|
| 301 |
+
"exponential_decay_length_penalty": null,
|
| 302 |
+
"finetuning_task": null,
|
| 303 |
+
"first_k_dense_replace": 1,
|
| 304 |
+
"forced_bos_token_id": null,
|
| 305 |
+
"forced_eos_token_id": null,
|
| 306 |
+
"hidden_act": "silu",
|
| 307 |
+
"hidden_size": 7168,
|
| 308 |
+
"id2label": {
|
| 309 |
+
"0": "LABEL_0",
|
| 310 |
+
"1": "LABEL_1"
|
| 311 |
+
},
|
| 312 |
+
"initializer_range": 0.02,
|
| 313 |
+
"intermediate_size": 18432,
|
| 314 |
+
"is_decoder": false,
|
| 315 |
+
"is_encoder_decoder": false,
|
| 316 |
+
"kv_lora_rank": 512,
|
| 317 |
+
"label2id": {
|
| 318 |
+
"LABEL_0": 0,
|
| 319 |
+
"LABEL_1": 1
|
| 320 |
+
},
|
| 321 |
+
"length_penalty": 1.0,
|
| 322 |
+
"max_length": 20,
|
| 323 |
+
"max_position_embeddings": 262144,
|
| 324 |
+
"min_length": 0,
|
| 325 |
+
"model_type": "deepseek_v3",
|
| 326 |
+
"moe_intermediate_size": 2048,
|
| 327 |
+
"moe_layer_freq": 1,
|
| 328 |
+
"n_group": 1,
|
| 329 |
+
"n_routed_experts": 384,
|
| 330 |
+
"n_shared_experts": 1,
|
| 331 |
+
"no_repeat_ngram_size": 0,
|
| 332 |
+
"norm_topk_prob": true,
|
| 333 |
+
"num_attention_heads": 64,
|
| 334 |
+
"num_beam_groups": 1,
|
| 335 |
+
"num_beams": 1,
|
| 336 |
+
"num_experts_per_tok": 8,
|
| 337 |
+
"num_hidden_layers": 61,
|
| 338 |
+
"num_key_value_heads": 64,
|
| 339 |
+
"num_nextn_predict_layers": 0,
|
| 340 |
+
"num_return_sequences": 1,
|
| 341 |
+
"output_attentions": false,
|
| 342 |
+
"output_hidden_states": false,
|
| 343 |
+
"output_scores": false,
|
| 344 |
+
"pad_token_id": 163839,
|
| 345 |
+
"prefix": null,
|
| 346 |
+
"pretraining_tp": 1,
|
| 347 |
+
"problem_type": null,
|
| 348 |
+
"pruned_heads": {},
|
| 349 |
+
"q_lora_rank": 1536,
|
| 350 |
+
"qk_nope_head_dim": 128,
|
| 351 |
+
"qk_rope_head_dim": 64,
|
| 352 |
+
"quantization_config": {
|
| 353 |
+
"config_groups": {
|
| 354 |
+
"group_0": {
|
| 355 |
+
"input_activations": null,
|
| 356 |
+
"output_activations": null,
|
| 357 |
+
"targets": [
|
| 358 |
+
"Linear"
|
| 359 |
+
],
|
| 360 |
+
"weights": {
|
| 361 |
+
"actorder": null,
|
| 362 |
+
"block_structure": null,
|
| 363 |
+
"dynamic": false,
|
| 364 |
+
"group_size": 32,
|
| 365 |
+
"num_bits": 4,
|
| 366 |
+
"observer": "minmax",
|
| 367 |
+
"observer_kwargs": {},
|
| 368 |
+
"strategy": "group",
|
| 369 |
+
"symmetric": true,
|
| 370 |
+
"type": "int"
|
| 371 |
+
}
|
| 372 |
+
}
|
| 373 |
+
},
|
| 374 |
+
"format": "pack-quantized",
|
| 375 |
+
"ignore": [
|
| 376 |
+
"lm_head",
|
| 377 |
+
"re:.*self_attn.*",
|
| 378 |
+
"re:.*shared_experts.*",
|
| 379 |
+
"re:.*mlp\\.(gate|up|gate_up|down)_proj.*"
|
| 380 |
+
],
|
| 381 |
+
"kv_cache_scheme": null,
|
| 382 |
+
"quant_method": "compressed-tensors",
|
| 383 |
+
"quantization_status": "compressed"
|
| 384 |
+
},
|
| 385 |
+
"remove_invalid_values": false,
|
| 386 |
+
"repetition_penalty": 1.0,
|
| 387 |
+
"return_dict": true,
|
| 388 |
+
"return_dict_in_generate": false,
|
| 389 |
+
"rms_norm_eps": 1e-05,
|
| 390 |
+
"rope_scaling": {
|
| 391 |
+
"beta_fast": 32.0,
|
| 392 |
+
"beta_slow": 1.0,
|
| 393 |
+
"factor": 64.0,
|
| 394 |
+
"mscale": 1.0,
|
| 395 |
+
"mscale_all_dim": 1.0,
|
| 396 |
+
"original_max_position_embeddings": 4096,
|
| 397 |
+
"type": "yarn"
|
| 398 |
+
},
|
| 399 |
+
"rope_theta": 50000.0,
|
| 400 |
+
"routed_scaling_factor": 2.827,
|
| 401 |
+
"scoring_func": "sigmoid",
|
| 402 |
+
"sep_token_id": null,
|
| 403 |
+
"seq_aux": true,
|
| 404 |
+
"suppress_tokens": null,
|
| 405 |
+
"task_specific_params": null,
|
| 406 |
+
"temperature": 1.0,
|
| 407 |
+
"tf_legacy_loss": false,
|
| 408 |
+
"tie_encoder_decoder": false,
|
| 409 |
+
"tie_word_embeddings": false,
|
| 410 |
+
"tokenizer_class": null,
|
| 411 |
+
"top_k": 50,
|
| 412 |
+
"top_p": 1.0,
|
| 413 |
+
"topk_group": 1,
|
| 414 |
+
"topk_method": "noaux_tc",
|
| 415 |
+
"torchscript": false,
|
| 416 |
+
"typical_p": 1.0,
|
| 417 |
+
"use_bfloat16": false,
|
| 418 |
+
"use_cache": true,
|
| 419 |
+
"v_head_dim": 128,
|
| 420 |
+
"vocab_size": 163840
|
| 421 |
+
},
|
| 422 |
+
"tie_word_embeddings": false,
|
| 423 |
+
"transformers_version": "4.57.6",
|
| 424 |
+
"use_unified_vision_chunk": true,
|
| 425 |
+
"video_placeholder": "<|kimi_k25_video_placeholder|>",
|
| 426 |
+
"vision_config": {
|
| 427 |
+
"init_pos_emb_height": 64,
|
| 428 |
+
"init_pos_emb_time": 4,
|
| 429 |
+
"init_pos_emb_width": 64,
|
| 430 |
+
"merge_kernel_size": [
|
| 431 |
+
2,
|
| 432 |
+
2
|
| 433 |
+
],
|
| 434 |
+
"merge_type": "sd2_tpool",
|
| 435 |
+
"mm_hidden_size": 1152,
|
| 436 |
+
"mm_projector_type": "patchmerger",
|
| 437 |
+
"model_type": "",
|
| 438 |
+
"patch_size": 14,
|
| 439 |
+
"pos_emb_type": "divided_fixed",
|
| 440 |
+
"projector_hidden_act": "gelu",
|
| 441 |
+
"projector_ln_eps": 1e-05,
|
| 442 |
+
"text_hidden_size": 7168,
|
| 443 |
+
"video_attn_type": "spatial_temporal",
|
| 444 |
+
"vt_hidden_size": 1152,
|
| 445 |
+
"vt_intermediate_size": 4304,
|
| 446 |
+
"vt_num_attention_heads": 16,
|
| 447 |
+
"vt_num_hidden_layers": 27
|
| 448 |
+
}
|
| 449 |
+
}
|
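The compressed-tensors `ignore` list in the config above mixes literal module names with `re:`-prefixed regular-expression patterns. A minimal sketch of how such a list might be matched against module names (illustrative semantics only; the authoritative matching logic lives in the compressed-tensors library):

```python
import re

# Entries from the "ignore" list above; "re:"-prefixed entries are regexes,
# plain entries are literal module names.
IGNORE = [
    "lm_head",
    "re:.*self_attn.*",
    "re:.*shared_experts.*",
    "re:.*mlp\\.(gate|up|gate_up|down)_proj.*",
]

def is_ignored(module_name: str) -> bool:
    """Return True if a module is excluded from quantization (sketch)."""
    for entry in IGNORE:
        if entry.startswith("re:"):
            if re.fullmatch(entry[3:], module_name):
                return True
        elif entry == module_name:
            return True
    return False
```

Under this reading, attention projections and the dense MLP's gate/up/down projections stay unquantized, while routed-expert weights (e.g. `...mlp.experts.0.down_proj`, whose `mlp.` is not directly followed by `gate_proj`/`up_proj`/`down_proj`) fall through and are quantized.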
configuration_deepseek.py
ADDED
@@ -0,0 +1,214 @@
# Copied from https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/configuration_deepseek.py

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging

logger = logging.get_logger(__name__)

DEEPSEEK_PRETRAINED_CONFIG_ARCHIVE_MAP = {}


class DeepseekV3Config(PretrainedConfig):
    r"""
    This is the configuration class to store the configuration of a [`DeepseekV3Model`]. It is used to instantiate a DeepSeek
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a configuration similar to that of DeepSeek-V3.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 129280):
            Vocabulary size of the DeepSeek model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`DeepseekV3Model`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimension of the hidden representations.
        intermediate_size (`int`, *optional*, defaults to 11008):
            Dimension of the MLP representations.
        moe_intermediate_size (`int`, *optional*, defaults to 1407):
            Dimension of the MoE representations.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the Transformer decoder.
        num_nextn_predict_layers (`int`, *optional*, defaults to 1):
            Number of next-n predict layers in the DeepSeekV3 model.
        num_attention_heads (`int`, *optional*, defaults to 32):
            Number of attention heads for each attention layer in the Transformer decoder.
        n_shared_experts (`int`, *optional*, defaults to `None`):
            Number of shared experts; `None` means a dense model.
        n_routed_experts (`int`, *optional*, defaults to `None`):
            Number of routed experts; `None` means a dense model.
        routed_scaling_factor (`float`, *optional*, defaults to 1.0):
            Scaling factor for routed experts.
        topk_method (`str`, *optional*, defaults to `greedy`):
            Top-k method used in the routed gate.
        n_group (`int`, *optional*, defaults to `None`):
            Number of groups for routed experts.
        topk_group (`int`, *optional*, defaults to `None`):
            Number of selected groups for each token (ensuring that the selected experts are only within `topk_group` groups).
        num_experts_per_tok (`int`, *optional*, defaults to `None`):
            Number of selected experts; `None` means a dense model.
        moe_layer_freq (`int`, *optional*, defaults to 1):
            The frequency of the MoE layer: one expert layer for every `moe_layer_freq - 1` dense layers.
        first_k_dense_replace (`int`, *optional*, defaults to 0):
            Number of dense layers in the shallow layers (embed -> dense -> dense -> ... -> dense -> moe -> moe ... -> lm_head),
            i.e. the first k layers are dense.
        norm_topk_prob (`bool`, *optional*, defaults to `False`):
            Whether to normalize the weights of the routed experts.
        scoring_func (`str`, *optional*, defaults to `'softmax'`):
            Method of computing expert weights.
        aux_loss_alpha (`float`, *optional*, defaults to 0.001):
            Auxiliary loss weight coefficient.
        seq_aux (`bool`, *optional*, defaults to `True`):
            Whether to compute the auxiliary loss for each individual sample.
        num_key_value_heads (`int`, *optional*):
            This is the number of key_value heads that should be used to implement Grouped Query Attention. If
            `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
            `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
            converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
            by mean-pooling all the original heads within that group. For more details, check out [this
            paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
            `num_attention_heads`.
        hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        max_position_embeddings (`int`, *optional*, defaults to 2048):
            The maximum sequence length that this model might ever be used with.
        initializer_range (`float`, *optional*, defaults to 0.02):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        rms_norm_eps (`float`, *optional*, defaults to 1e-06):
            The epsilon used by the rms normalization layers.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the model should return the last key/values attentions (not used by all models). Only
            relevant if `config.is_decoder=True`.
        pad_token_id (`int`, *optional*):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 1):
            Beginning of stream token id.
        eos_token_id (`int`, *optional*, defaults to 2):
            End of stream token id.
        pretraining_tp (`int`, *optional*, defaults to 1):
            Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
            document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
            necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
            issue](https://github.com/pytorch/pytorch/issues/76232).
        tie_word_embeddings (`bool`, *optional*, defaults to `False`):
            Whether to tie weight embeddings.
        rope_theta (`float`, *optional*, defaults to 10000.0):
            The base period of the RoPE embeddings.
        rope_scaling (`Dict`, *optional*):
            Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
            strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
            `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
            `max_position_embeddings` to the expected new maximum.
        attention_bias (`bool`, *optional*, defaults to `False`):
            Whether to use a bias in the query, key, value and output projection layers during self-attention.
        attention_dropout (`float`, *optional*, defaults to 0.0):
            The dropout ratio for the attention probabilities.

    ```python
    >>> from transformers import DeepseekV3Model, DeepseekV3Config

    >>> # Initializing a DeepSeek-V3 style configuration
    >>> configuration = DeepseekV3Config()

    >>> # Initializing a model from the configuration
    >>> model = DeepseekV3Model(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "deepseek_v3"
    keys_to_ignore_at_inference = ["past_key_values"]

    def __init__(
        self,
        vocab_size=129280,
        hidden_size=7168,
        intermediate_size=18432,
        moe_intermediate_size=2048,
        num_hidden_layers=61,
        num_nextn_predict_layers=1,
        num_attention_heads=128,
        num_key_value_heads=128,
        n_shared_experts=1,
        n_routed_experts=256,
        ep_size=1,
        routed_scaling_factor=2.5,
        kv_lora_rank=512,
        q_lora_rank=1536,
        qk_rope_head_dim=64,
        v_head_dim=128,
        qk_nope_head_dim=128,
        topk_method='noaux_tc',
        n_group=8,
        topk_group=4,
        num_experts_per_tok=8,
        moe_layer_freq=1,
        first_k_dense_replace=3,
        norm_topk_prob=True,
        scoring_func='sigmoid',
        aux_loss_alpha=0.001,
        seq_aux=True,
        hidden_act="silu",
        max_position_embeddings=4096,
        initializer_range=0.02,
        rms_norm_eps=1e-6,
        use_cache=True,
        pad_token_id=None,
        bos_token_id=0,
        eos_token_id=1,
        pretraining_tp=1,
        tie_word_embeddings=False,
        rope_theta=10000.0,
        rope_scaling=None,
        attention_bias=False,
        attention_dropout=0.0,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.max_position_embeddings = max_position_embeddings
        self.hidden_size = hidden_size
        self.intermediate_size = intermediate_size
        self.moe_intermediate_size = moe_intermediate_size
        self.num_hidden_layers = num_hidden_layers
        self.num_nextn_predict_layers = num_nextn_predict_layers
        self.num_attention_heads = num_attention_heads
        self.n_shared_experts = n_shared_experts
        self.n_routed_experts = n_routed_experts
        self.ep_size = ep_size
        self.routed_scaling_factor = routed_scaling_factor
        self.kv_lora_rank = kv_lora_rank
        self.q_lora_rank = q_lora_rank
        self.qk_rope_head_dim = qk_rope_head_dim
        self.v_head_dim = v_head_dim
        self.qk_nope_head_dim = qk_nope_head_dim
        self.topk_method = topk_method
        self.n_group = n_group
        self.topk_group = topk_group
        self.num_experts_per_tok = num_experts_per_tok
        self.moe_layer_freq = moe_layer_freq
        self.first_k_dense_replace = first_k_dense_replace
        self.norm_topk_prob = norm_topk_prob
        self.scoring_func = scoring_func
        self.aux_loss_alpha = aux_loss_alpha
        self.seq_aux = seq_aux
        # for backward compatibility
        if num_key_value_heads is None:
            num_key_value_heads = num_attention_heads

        self.num_key_value_heads = num_key_value_heads
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.rms_norm_eps = rms_norm_eps
        self.pretraining_tp = pretraining_tp
        self.use_cache = use_cache
        self.rope_theta = rope_theta
        self.rope_scaling = rope_scaling
        self.attention_bias = attention_bias
        self.attention_dropout = attention_dropout

        super().__init__(
            pad_token_id=pad_token_id,
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
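The MLA fields above (`kv_lora_rank`, `qk_rope_head_dim`, head counts and head dims) imply a far smaller KV cache than naive multi-head attention. A back-of-the-envelope sketch with the values from this repository's `text_config`, assuming the DeepSeek-V2/V3 MLA caching scheme (only the compressed latent and the shared decoupled RoPE key are cached per token); element counts only, not an engine-accurate figure:

```python
# Values from text_config in config.json above.
kv_lora_rank = 512       # dimension of the compressed KV latent
qk_rope_head_dim = 64    # decoupled RoPE key, shared across heads
num_heads = 64
qk_nope_head_dim = 128
v_head_dim = 128

# MLA: cache latent + RoPE key per token per layer.
mla_per_token = kv_lora_rank + qk_rope_head_dim            # 576 elements

# Naive MHA: cache full K (nope + rope dims) and V for every head.
mha_per_token = num_heads * (qk_nope_head_dim + qk_rope_head_dim + v_head_dim)  # 20480 elements

print(mla_per_token, mha_per_token, round(mha_per_token / mla_per_token, 1))
```

On these numbers the latent cache is roughly 35x smaller per token per layer, which is what makes the 262144-token `max_position_embeddings` tractable.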
configuration_kimi_k25.py
ADDED
@@ -0,0 +1,123 @@
| 1 |
+
from transformers.configuration_utils import PretrainedConfig
|
| 2 |
+
|
| 3 |
+
try:
|
| 4 |
+
from configuration_deepseek import DeepseekV3Config
|
| 5 |
+
except ImportError:
|
| 6 |
+
from .configuration_deepseek import DeepseekV3Config
|
| 7 |
+
|
| 8 |
+
|
| 9 |
+
class KimiK25VisionConfig(PretrainedConfig):
|
| 10 |
+
|
| 11 |
+
def __init__(
|
| 12 |
+
self,
|
| 13 |
+
patch_size: int = 14,
|
| 14 |
+
init_pos_emb_height: int = 64,
|
| 15 |
+
init_pos_emb_width: int = 64,
|
| 16 |
+
init_pos_emb_time: int = 4,
|
| 17 |
+
pos_emb_type: str = 'divided_fixed',
|
| 18 |
+
vt_num_attention_heads: int = 16,
|
| 19 |
+
vt_num_hidden_layers: int = 27,
|
| 20 |
+
vt_hidden_size: int = 1152,
|
| 21 |
+
vt_intermediate_size: int = 4304,
|
| 22 |
+
merge_kernel_size: tuple = (2, 2),
|
| 23 |
+
video_attn_type: str = 'spatial_temporal',
|
| 24 |
+
merge_type: str = 'sd2_tpool',
|
| 25 |
+
_attn_implementation: str = 'flash_attention_2',
|
| 26 |
+
# MM Projector parameters
|
| 27 |
+
mm_projector_type: str = 'patchmerger',
|
| 28 |
+
mm_hidden_size: int | None = None,
|
| 29 |
+
projector_hidden_act: str = "gelu",
|
| 30 |
+
projector_ln_eps: float = 1e-5,
|
| 31 |
+
# Other parameters
|
| 32 |
+
ignore_index: int = -100,
|
| 33 |
+
media_placeholder_token_id: int = 163605,
|
| 34 |
+
pad_token_id: int = 0,
|
| 35 |
+
use_unified_vision_chunk: bool = True,
|
| 36 |
+
video_placeholder="<|kimi_k25_video_placeholder|>",
|
| 37 |
+
text_hidden_size=7168,
|
| 38 |
+
**vision_config_kwargs):
|
| 39 |
+
|
| 40 |
+
self.patch_size = patch_size
|
| 41 |
+
self.init_pos_emb_height = init_pos_emb_height
|
| 42 |
+
self.init_pos_emb_width = init_pos_emb_width
|
| 43 |
+
self.init_pos_emb_time = init_pos_emb_time
|
| 44 |
+
self.pos_emb_type = pos_emb_type
|
| 45 |
+
self.vt_num_attention_heads = vt_num_attention_heads
|
| 46 |
+
self.vt_num_hidden_layers = vt_num_hidden_layers
|
| 47 |
+
self.vt_hidden_size = vt_hidden_size
|
| 48 |
+
self.vt_intermediate_size = vt_intermediate_size
|
| 49 |
+
self.merge_kernel_size = merge_kernel_size
|
| 50 |
+
self.video_attn_type = video_attn_type
|
| 51 |
+
self.merge_type = merge_type
|
| 52 |
+
self._attn_implementation = _attn_implementation
|
| 53 |
+
|
| 54 |
+
# MM Projector config
|
| 55 |
+
self.mm_projector_type = mm_projector_type
|
| 56 |
+
self.mm_hidden_size = mm_hidden_size if mm_hidden_size is not None else vt_hidden_size
|
| 57 |
+
self.projector_hidden_act = projector_hidden_act
|
| 58 |
+
self.projector_ln_eps = projector_ln_eps
|
| 59 |
+
self.text_hidden_size = text_hidden_size
|
| 60 |
+
|
| 61 |
+
|
| 62 |
+
class KimiK25Config(PretrainedConfig):
|
| 63 |
+
"""Kimi-K2.5 model configuration.
|
| 64 |
+
|
| 65 |
+
Args:
|
| 66 |
+
        text_config (dict | DeepseekV3Config): Configuration for the text model.

        Vision Tower Parameters (from MoonViT3dConfig):
            patch_size (int): Patch size for vision tower.
            init_pos_emb_height (int): Initial position embedding height.
            init_pos_emb_width (int): Initial position embedding width.
            init_pos_emb_time (int): Initial position embedding time dimension.
            pos_emb_type (str): Type of position embedding.
            vt_num_attention_heads (int): Number of attention heads in vision tower.
            vt_num_hidden_layers (int): Number of hidden layers in vision tower.
            vt_hidden_size (int): Hidden size of vision tower.
            vt_intermediate_size (int): Intermediate size in vision tower FFN.
            merge_kernel_size (tuple): Kernel size for patch merging.
            video_attn_type (str): Type of video attention.
            merge_type (str): Type of merge operation.
            _attn_implementation (str): Attention implementation type.

        MM Projector Parameters (from MultiModalProjectorConfig):
            mm_projector_type (str): Type of multimodal projector.
            mm_hidden_size (int): Hidden size from vision tower (should match vt_hidden_size).
            projector_hidden_act (str): Activation function for projector.
            projector_ln_eps (float): Layer norm epsilon for projector.

        Other Parameters:
            ignore_index (int): The ignore index for the loss function.
            media_placeholder_token_id (int): The token ID to use for media placeholders.
            pad_token_id (int): The token ID to use for padding.
    """

    model_type = "kimi_k25"

    def __init__(
        self,
        text_config: dict | DeepseekV3Config = None,
        vision_config: dict | KimiK25VisionConfig = None,
        # Other parameters
        ignore_index: int = -100,
        media_placeholder_token_id: int = 163605,
        pad_token_id: int = 0,
        use_unified_vision_chunk: bool = True,
        video_placeholder="<|kimi_k25_video_placeholder|>",
        **kwargs,
    ):
        if isinstance(text_config, dict):
            text_config = DeepseekV3Config(**text_config)
        if isinstance(vision_config, dict):
            vision_config = KimiK25VisionConfig(**vision_config)
        self.text_config = text_config
        self.vision_config = vision_config
        # Other config
        self.ignore_index = ignore_index
        self.media_placeholder_token_id = media_placeholder_token_id
        self.use_unified_vision_chunk = use_unified_vision_chunk
        self.video_placeholder = video_placeholder
        if getattr(self.text_config, "quantization_config", None) is not None:
            self.quantization_config = self.text_config.quantization_config

        super().__init__(pad_token_id=pad_token_id, **kwargs)
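The `__init__` above accepts either a plain dict or an already-built config object for each sub-config, and hoists a nested `quantization_config` to the top level. A minimal standalone sketch of that coercion pattern, using a hypothetical `TextConfig` stand-in rather than the real `DeepseekV3Config`:

```python
# Sketch of the dict-or-config coercion used in KimiK25Config.__init__.
# TextConfig is a hypothetical stand-in, not a transformers class.

class TextConfig:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)


class CompositeConfig:
    def __init__(self, text_config=None, ignore_index=-100):
        # Accept either a plain dict or an already-built config object.
        if isinstance(text_config, dict):
            text_config = TextConfig(**text_config)
        self.text_config = text_config
        self.ignore_index = ignore_index
        # Hoist a nested quantization_config to the top level if present,
        # mirroring what KimiK25Config does for the text config.
        if getattr(self.text_config, "quantization_config", None) is not None:
            self.quantization_config = self.text_config.quantization_config


cfg = CompositeConfig(
    text_config={"hidden_size": 7168,
                 "quantization_config": {"quant_algo": "MXFP8"}})
print(cfg.text_config.hidden_size, cfg.quantization_config["quant_algo"])
```

This keeps serialized checkpoints (where sub-configs are dicts in `config.json`) and programmatic construction (where they are config objects) on the same code path.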
generation_config.json
ADDED
@@ -0,0 +1,4 @@
{
  "max_length": 262144,
  "eos_token_id": 163586
}
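For reference, these generation defaults can be read back with the standard `json` module (the literal below mirrors the file contents shown above):

```python
import json

# Mirrors generation_config.json from this commit.
raw = '{"max_length": 262144, "eos_token_id": 163586}'
gen_cfg = json.loads(raw)

# 262144 = 256 * 1024, i.e. a 256K-token maximum sequence length.
print(gen_cfg["max_length"], gen_cfg["eos_token_id"])
```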
hf_quant_config.json
ADDED
@@ -0,0 +1,260 @@
{
  "producer": {
    "name": "modelopt",
    "version": "0.41.0rc2.dev72+g886781332"
  },
  "quantization": {
    "quant_algo": "MXFP8",
    "kv_cache_quant_algo": null,
    "group_size": 32,
    "exclude_modules": [
      "language_model.lm_head",
      "language_model.model.layers.0.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.0.self_attn.kv_b_proj",
      "language_model.model.layers.0.self_attn.q_a_proj",
      "language_model.model.layers.0.self_attn.q_b_proj",
      "language_model.model.layers.1.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.1.self_attn.kv_b_proj",
      "language_model.model.layers.1.self_attn.q_a_proj",
      "language_model.model.layers.1.self_attn.q_b_proj",
      "language_model.model.layers.10.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.10.self_attn.kv_b_proj",
      "language_model.model.layers.10.self_attn.q_a_proj",
      "language_model.model.layers.10.self_attn.q_b_proj",
      "language_model.model.layers.11.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.11.self_attn.kv_b_proj",
      "language_model.model.layers.11.self_attn.q_a_proj",
      "language_model.model.layers.11.self_attn.q_b_proj",
      "language_model.model.layers.12.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.12.self_attn.kv_b_proj",
      "language_model.model.layers.12.self_attn.q_a_proj",
      "language_model.model.layers.12.self_attn.q_b_proj",
      "language_model.model.layers.13.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.13.self_attn.kv_b_proj",
      "language_model.model.layers.13.self_attn.q_a_proj",
      "language_model.model.layers.13.self_attn.q_b_proj",
      "language_model.model.layers.14.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.14.self_attn.kv_b_proj",
      "language_model.model.layers.14.self_attn.q_a_proj",
      "language_model.model.layers.14.self_attn.q_b_proj",
      "language_model.model.layers.15.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.15.self_attn.kv_b_proj",
      "language_model.model.layers.15.self_attn.q_a_proj",
      "language_model.model.layers.15.self_attn.q_b_proj",
      "language_model.model.layers.16.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.16.self_attn.kv_b_proj",
      "language_model.model.layers.16.self_attn.q_a_proj",
      "language_model.model.layers.16.self_attn.q_b_proj",
      "language_model.model.layers.17.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.17.self_attn.kv_b_proj",
      "language_model.model.layers.17.self_attn.q_a_proj",
      "language_model.model.layers.17.self_attn.q_b_proj",
      "language_model.model.layers.18.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.18.self_attn.kv_b_proj",
      "language_model.model.layers.18.self_attn.q_a_proj",
      "language_model.model.layers.18.self_attn.q_b_proj",
      "language_model.model.layers.19.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.19.self_attn.kv_b_proj",
      "language_model.model.layers.19.self_attn.q_a_proj",
      "language_model.model.layers.19.self_attn.q_b_proj",
      "language_model.model.layers.2.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.2.self_attn.kv_b_proj",
      "language_model.model.layers.2.self_attn.q_a_proj",
      "language_model.model.layers.2.self_attn.q_b_proj",
      "language_model.model.layers.20.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.20.self_attn.kv_b_proj",
      "language_model.model.layers.20.self_attn.q_a_proj",
      "language_model.model.layers.20.self_attn.q_b_proj",
      "language_model.model.layers.21.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.21.self_attn.kv_b_proj",
      "language_model.model.layers.21.self_attn.q_a_proj",
      "language_model.model.layers.21.self_attn.q_b_proj",
      "language_model.model.layers.22.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.22.self_attn.kv_b_proj",
      "language_model.model.layers.22.self_attn.q_a_proj",
      "language_model.model.layers.22.self_attn.q_b_proj",
      "language_model.model.layers.23.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.23.self_attn.kv_b_proj",
      "language_model.model.layers.23.self_attn.q_a_proj",
      "language_model.model.layers.23.self_attn.q_b_proj",
      "language_model.model.layers.24.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.24.self_attn.kv_b_proj",
      "language_model.model.layers.24.self_attn.q_a_proj",
      "language_model.model.layers.24.self_attn.q_b_proj",
      "language_model.model.layers.25.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.25.self_attn.kv_b_proj",
      "language_model.model.layers.25.self_attn.q_a_proj",
      "language_model.model.layers.25.self_attn.q_b_proj",
      "language_model.model.layers.26.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.26.self_attn.kv_b_proj",
      "language_model.model.layers.26.self_attn.q_a_proj",
      "language_model.model.layers.26.self_attn.q_b_proj",
      "language_model.model.layers.27.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.27.self_attn.kv_b_proj",
      "language_model.model.layers.27.self_attn.q_a_proj",
      "language_model.model.layers.27.self_attn.q_b_proj",
      "language_model.model.layers.28.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.28.self_attn.kv_b_proj",
      "language_model.model.layers.28.self_attn.q_a_proj",
      "language_model.model.layers.28.self_attn.q_b_proj",
      "language_model.model.layers.29.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.29.self_attn.kv_b_proj",
      "language_model.model.layers.29.self_attn.q_a_proj",
      "language_model.model.layers.29.self_attn.q_b_proj",
      "language_model.model.layers.3.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.3.self_attn.kv_b_proj",
      "language_model.model.layers.3.self_attn.q_a_proj",
      "language_model.model.layers.3.self_attn.q_b_proj",
      "language_model.model.layers.30.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.30.self_attn.kv_b_proj",
      "language_model.model.layers.30.self_attn.q_a_proj",
      "language_model.model.layers.30.self_attn.q_b_proj",
      "language_model.model.layers.31.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.31.self_attn.kv_b_proj",
      "language_model.model.layers.31.self_attn.q_a_proj",
      "language_model.model.layers.31.self_attn.q_b_proj",
      "language_model.model.layers.32.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.32.self_attn.kv_b_proj",
      "language_model.model.layers.32.self_attn.q_a_proj",
      "language_model.model.layers.32.self_attn.q_b_proj",
      "language_model.model.layers.33.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.33.self_attn.kv_b_proj",
      "language_model.model.layers.33.self_attn.q_a_proj",
      "language_model.model.layers.33.self_attn.q_b_proj",
      "language_model.model.layers.34.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.34.self_attn.kv_b_proj",
      "language_model.model.layers.34.self_attn.q_a_proj",
      "language_model.model.layers.34.self_attn.q_b_proj",
      "language_model.model.layers.35.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.35.self_attn.kv_b_proj",
      "language_model.model.layers.35.self_attn.q_a_proj",
      "language_model.model.layers.35.self_attn.q_b_proj",
      "language_model.model.layers.36.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.36.self_attn.kv_b_proj",
      "language_model.model.layers.36.self_attn.q_a_proj",
      "language_model.model.layers.36.self_attn.q_b_proj",
      "language_model.model.layers.37.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.37.self_attn.kv_b_proj",
      "language_model.model.layers.37.self_attn.q_a_proj",
      "language_model.model.layers.37.self_attn.q_b_proj",
      "language_model.model.layers.38.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.38.self_attn.kv_b_proj",
      "language_model.model.layers.38.self_attn.q_a_proj",
      "language_model.model.layers.38.self_attn.q_b_proj",
      "language_model.model.layers.39.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.39.self_attn.kv_b_proj",
      "language_model.model.layers.39.self_attn.q_a_proj",
      "language_model.model.layers.39.self_attn.q_b_proj",
      "language_model.model.layers.4.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.4.self_attn.kv_b_proj",
      "language_model.model.layers.4.self_attn.q_a_proj",
      "language_model.model.layers.4.self_attn.q_b_proj",
      "language_model.model.layers.40.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.40.self_attn.kv_b_proj",
      "language_model.model.layers.40.self_attn.q_a_proj",
      "language_model.model.layers.40.self_attn.q_b_proj",
      "language_model.model.layers.41.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.41.self_attn.kv_b_proj",
      "language_model.model.layers.41.self_attn.q_a_proj",
      "language_model.model.layers.41.self_attn.q_b_proj",
      "language_model.model.layers.42.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.42.self_attn.kv_b_proj",
      "language_model.model.layers.42.self_attn.q_a_proj",
      "language_model.model.layers.42.self_attn.q_b_proj",
      "language_model.model.layers.43.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.43.self_attn.kv_b_proj",
      "language_model.model.layers.43.self_attn.q_a_proj",
      "language_model.model.layers.43.self_attn.q_b_proj",
      "language_model.model.layers.44.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.44.self_attn.kv_b_proj",
      "language_model.model.layers.44.self_attn.q_a_proj",
      "language_model.model.layers.44.self_attn.q_b_proj",
      "language_model.model.layers.45.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.45.self_attn.kv_b_proj",
      "language_model.model.layers.45.self_attn.q_a_proj",
      "language_model.model.layers.45.self_attn.q_b_proj",
      "language_model.model.layers.46.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.46.self_attn.kv_b_proj",
      "language_model.model.layers.46.self_attn.q_a_proj",
      "language_model.model.layers.46.self_attn.q_b_proj",
      "language_model.model.layers.47.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.47.self_attn.kv_b_proj",
      "language_model.model.layers.47.self_attn.q_a_proj",
      "language_model.model.layers.47.self_attn.q_b_proj",
      "language_model.model.layers.48.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.48.self_attn.kv_b_proj",
      "language_model.model.layers.48.self_attn.q_a_proj",
      "language_model.model.layers.48.self_attn.q_b_proj",
      "language_model.model.layers.49.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.49.self_attn.kv_b_proj",
      "language_model.model.layers.49.self_attn.q_a_proj",
      "language_model.model.layers.49.self_attn.q_b_proj",
      "language_model.model.layers.5.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.5.self_attn.kv_b_proj",
      "language_model.model.layers.5.self_attn.q_a_proj",
      "language_model.model.layers.5.self_attn.q_b_proj",
      "language_model.model.layers.50.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.50.self_attn.kv_b_proj",
      "language_model.model.layers.50.self_attn.q_a_proj",
      "language_model.model.layers.50.self_attn.q_b_proj",
      "language_model.model.layers.51.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.51.self_attn.kv_b_proj",
      "language_model.model.layers.51.self_attn.q_a_proj",
      "language_model.model.layers.51.self_attn.q_b_proj",
      "language_model.model.layers.52.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.52.self_attn.kv_b_proj",
      "language_model.model.layers.52.self_attn.q_a_proj",
      "language_model.model.layers.52.self_attn.q_b_proj",
      "language_model.model.layers.53.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.53.self_attn.kv_b_proj",
      "language_model.model.layers.53.self_attn.q_a_proj",
      "language_model.model.layers.53.self_attn.q_b_proj",
      "language_model.model.layers.54.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.54.self_attn.kv_b_proj",
      "language_model.model.layers.54.self_attn.q_a_proj",
      "language_model.model.layers.54.self_attn.q_b_proj",
      "language_model.model.layers.55.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.55.self_attn.kv_b_proj",
      "language_model.model.layers.55.self_attn.q_a_proj",
      "language_model.model.layers.55.self_attn.q_b_proj",
      "language_model.model.layers.56.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.56.self_attn.kv_b_proj",
      "language_model.model.layers.56.self_attn.q_a_proj",
      "language_model.model.layers.56.self_attn.q_b_proj",
      "language_model.model.layers.57.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.57.self_attn.kv_b_proj",
      "language_model.model.layers.57.self_attn.q_a_proj",
      "language_model.model.layers.57.self_attn.q_b_proj",
      "language_model.model.layers.58.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.58.self_attn.kv_b_proj",
      "language_model.model.layers.58.self_attn.q_a_proj",
      "language_model.model.layers.58.self_attn.q_b_proj",
      "language_model.model.layers.59.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.59.self_attn.kv_b_proj",
      "language_model.model.layers.59.self_attn.q_a_proj",
      "language_model.model.layers.59.self_attn.q_b_proj",
      "language_model.model.layers.6.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.6.self_attn.kv_b_proj",
      "language_model.model.layers.6.self_attn.q_a_proj",
      "language_model.model.layers.6.self_attn.q_b_proj",
      "language_model.model.layers.60.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.60.self_attn.kv_b_proj",
      "language_model.model.layers.60.self_attn.q_a_proj",
      "language_model.model.layers.60.self_attn.q_b_proj",
      "language_model.model.layers.7.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.7.self_attn.kv_b_proj",
      "language_model.model.layers.7.self_attn.q_a_proj",
      "language_model.model.layers.7.self_attn.q_b_proj",
      "language_model.model.layers.8.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.8.self_attn.kv_b_proj",
      "language_model.model.layers.8.self_attn.q_a_proj",
      "language_model.model.layers.8.self_attn.q_b_proj",
      "language_model.model.layers.9.self_attn.kv_a_proj_with_mqa",
      "language_model.model.layers.9.self_attn.kv_b_proj",
      "language_model.model.layers.9.self_attn.q_a_proj",
      "language_model.model.layers.9.self_attn.q_b_proj",
      "mm_projector*",
      "vision_tower*"
    ]
  }
}
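Entries in `exclude_modules` are either exact module paths or glob patterns (`mm_projector*`, `vision_tower*`). A sketch of how a loader might test a module name against such a list with `fnmatch` (the `is_excluded` helper is an assumption for illustration, not modelopt's actual API):

```python
from fnmatch import fnmatch

# A representative subset of the exclude_modules list above.
exclude_modules = [
    "language_model.lm_head",
    "language_model.model.layers.0.self_attn.q_a_proj",
    "mm_projector*",
    "vision_tower*",
]

def is_excluded(module_name: str) -> bool:
    # fnmatch treats names without wildcards as exact matches
    # and honors the trailing '*' globs.
    return any(fnmatch(module_name, pat) for pat in exclude_modules)

print(is_excluded("vision_tower.encoder.blocks.0.attn"))          # glob match
print(is_excluded("language_model.model.layers.0.mlp.gate_proj"))  # quantized
```

The pattern here is that MLA attention projections, the LM head, and the entire vision stack stay in higher precision, while the remaining (mostly MoE expert) linears carry the MXFP8 weights.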
kimi_k25_processor.py
ADDED
@@ -0,0 +1,165 @@
from transformers.feature_extraction_utils import BatchFeature
from transformers.processing_utils import ProcessorMixin
from transformers.utils import logging

logger = logging.get_logger(__name__)


class KimiK25Processor(ProcessorMixin):
    r"""
    Constructs a KimiK25 processor which wraps a KimiK25 image processor and a tokenizer into a single processor.

    [`KimiK25Processor`] offers all the functionalities of [`KimiK25ImageProcessor`] and [`TikTokenTokenizer`]. See the
    [`~KimiK25Processor.__call__`] and [`~KimiK25Processor.decode`] for more information.

    Args:
        image_processor ([`KimiK25ImageProcessor`], *optional*):
            The image processor is a required input.
        tokenizer ([`TikTokenTokenizer`], *optional*):
            The tokenizer is a required input.
        chat_template (`str`, *optional*): A Jinja template which will be used to convert lists of messages
            in a chat into a tokenizable string.
    """

    attributes = ["image_processor", "tokenizer"]
    valid_kwargs = ["chat_template"]
    image_processor_class = "AutoImageProcessor"
    tokenizer_class = "AutoTokenizer"

    def __init__(
        self,
        image_processor=None,
        tokenizer=None,
        chat_template=None,
        **kwargs,
    ):
        super().__init__(image_processor,
                         tokenizer,
                         chat_template=chat_template)
        self.media_processor = image_processor
        # A special temporal placeholder to be replaced by actual video placeholders
        self.video_placeholder = "<|kimi_k25_video_placeholder|>"

    def update_raw_text(self, text: str, video_prompts: list[str]) -> str:
        # Replace each video placeholder in the text with its chunk prompts.
        video_count = text.count(self.video_placeholder)
        if video_count == 0:
            return text
        assert video_count == len(video_prompts)
        text_parts = text.split(self.video_placeholder)
        assert len(text_parts) == len(video_prompts) + 1
        text = "".join([
            text_parts[i] + video_prompts[i] for i in range(len(video_prompts))
        ])
        text += text_parts[-1]
        return text

    def preprocess_medias(
            self, medias: list[dict]) -> tuple[list[dict], list[str]]:
        updated_medias = []
        video_prompts = []
        for media in medias:
            if media['type'] == 'image':
                updated_medias.append(media)
            elif media['type'] == 'video':
                video_chunks = self.media_processor.split_video_chunks(
                    media['video'])
                updated_medias.extend(video_chunks)
                video_prompts.append("".join(
                    [vc['prompt'] for vc in video_chunks]))
            else:
                raise ValueError(f"unsupported media type: {media['type']}")
        return updated_medias, video_prompts

    def __call__(self,
                 messages: list[dict] = None,
                 medias: list[dict] = None,
                 text: str = None,
                 return_tensors: str = "pt",
                 **kwargs) -> BatchFeature:
        """
        Process multimodal inputs for the Kimi-K2.5 model.

        This processor accepts ordered messages and extracts both media and text in a single pass.
        The text is updated automatically if video input is detected in the messages.

        Args:
            messages: List of message dicts with 'role' and 'content' fields.
                If provided, medias and text will be extracted automatically.
            medias: Pre-extracted list of media dicts. If None, extracted from messages.
            text: Pre-formatted text string. If None, generated via apply_chat_template.
            return_tensors: Format of returned tensors ('pt', 'np', 'tf'). Default: 'pt'.
            **kwargs: Additional arguments passed to tokenizer.apply_chat_template.

        Returns:
            BatchFeature with fields: input_ids, attention_mask, pixel_values, grid_thws.
        """
        if messages is None and (medias is None or text is None):
            raise ValueError(
                "Provide either 'messages' or both 'medias' and 'text'")

        if medias is not None and text is not None:
            updated_medias, video_prompts = self.preprocess_medias(medias)
            preprocessed = self.media_processor.preprocess(
                updated_medias, return_tensors=return_tensors)
            text = self.update_raw_text(text, video_prompts)
            text_inputs = self.tokenizer(text, return_tensors=return_tensors)
            return BatchFeature(data={**text_inputs, **preprocessed.data})

        if medias is None:
            medias = self._extract_medias_from_messages(messages)
        updated_medias, video_prompts = self.preprocess_medias(medias)
        preprocessed = self.media_processor.preprocess(
            updated_medias, return_tensors=return_tensors)

        # Generate text if not provided
        if text is None:
            text = self.tokenizer.apply_chat_template(messages, **kwargs)

        text = self.update_raw_text(text, video_prompts)

        text_inputs = self.tokenizer(text, return_tensors=return_tensors)
        return BatchFeature(data={**text_inputs, **preprocessed.data})

    @staticmethod
    def _extract_medias_from_messages(messages: list[dict]) -> list[dict]:
        """
        Extract media items from messages in a single pass.

        This is an optimized version that processes messages only once.
        Kept as an internal method since external callers should use __call__.
        """
        medias = []
        for msg in messages:
            if msg['role'] != 'user' or not msg.get('content'):
                continue

            for content_part in msg['content']:
                if not isinstance(content_part, dict):
                    continue

                content_type = content_part.get('type')
                if content_type in ['video_url', 'video']:
                    medias.append({
                        'type': 'video',
                        'video': content_part['video_url']['url'],
                        'first_frame_timestamp': 0.0
                    })
                elif content_type in ['image_url', 'image']:
                    medias.append({
                        'type': 'image',
                        'image': content_part['image_url'],
                    })
        return medias

    def apply_chat_template(self, messages, **kwargs):
        return self.tokenizer.apply_chat_template(messages, **kwargs)

    def batch_decode(self, *args, **kwargs):
        return self.tokenizer.batch_decode(*args, **kwargs)

    def decode(self, *args, **kwargs):
        return self.tokenizer.decode(*args, **kwargs)

    @property
    def model_input_names(self):
        return ['input_ids', 'attention_mask', 'pixel_values', 'grid_thws']
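The placeholder splicing in `update_raw_text` can be exercised standalone. A minimal sketch of the same split-and-join logic, outside the processor class:

```python
PLACEHOLDER = "<|kimi_k25_video_placeholder|>"

def splice_video_prompts(text: str, video_prompts: list[str]) -> str:
    # Same logic as KimiK25Processor.update_raw_text: the i-th placeholder
    # occurrence is replaced, in order, by the prompt for the i-th video.
    count = text.count(PLACEHOLDER)
    if count == 0:
        return text
    assert count == len(video_prompts)
    parts = text.split(PLACEHOLDER)
    return "".join(p + v for p, v in zip(parts, video_prompts)) + parts[-1]

text = f"Describe {PLACEHOLDER} briefly."
print(splice_video_prompts(text, ["<chunk prompts>"]))
```

Splitting on the placeholder rather than calling `str.replace` in a loop keeps each prompt bound to its own occurrence even when a prompt itself happens to contain special tokens.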
kimi_k25_vision_processing.py
ADDED
@@ -0,0 +1,251 @@
"""Image processor class for Kimi-K2.5."""

import json
from typing import Any, Dict, Optional, Union

import numpy as np
import torch
from PIL import Image
from transformers.image_processing_utils import (BaseImageProcessor,
                                                 BatchFeature)
from transformers.utils import TensorType

from .media_utils import (MediaInput, VideoChunkInput, _to_tensor,
                          ensure_media_type, get_video_meta, image_to_np,
                          navit_patchify, navit_resize_image,
                          navit_resize_video, normalize,
                          real_sample_fps_and_max_num_frames, timestamp_as_str)

try:
    from mecord import VideoReader
except ImportError:
    VideoReader = None


def resampling(video_bytes: bytes,
               sample_indices: list[int],
               key_indices=None,
               frame_time_info=None,
               num_threads=4) -> list[Image.Image]:
    video = VideoReader(video_bytes,
                        num_threads=num_threads,
                        frame_time_info=frame_time_info,
                        key_indices=key_indices)
    # Extract the target frames.
    frames = video[sample_indices]
    frames = [Image.fromarray(frame) for frame in frames]
    return frames


class KimiK25VisionProcessor(BaseImageProcessor):
    model_type = "kimi_k25"

    def __init__(
        self,
        media_proc_cfg: dict,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.media_proc_cfg = media_proc_cfg
        self.num_frames_per_chunk = media_proc_cfg[
            'temporal_merge_kernel_size']

    def media_tokens_calculator(self, media: MediaInput):
        media = ensure_media_type(media)
        ret = self.get_resize_config(media)
        return ret['num_tokens']

    @classmethod
    def make_chunk_prompt(cls, timestamp_text: str) -> str:
        return f"{timestamp_text}<|media_begin|>video<|media_content|><|media_pad|><|media_end|>"

    def split_video_chunks(self,
                           video_url: str | bytes) -> list[VideoChunkInput]:
        # video_url should be a base64 str or raw bytes
        video_spec = get_video_meta(video_url)
        sample_fps = min(self.media_proc_cfg['sample_fps'], video_spec.fps)
        sampled_nframes = max(
            round(video_spec.num_frames * sample_fps / video_spec.fps), 1)
        frame_inds = np.linspace(0, video_spec.num_frames - 1,
                                 sampled_nframes).round().astype(int)
        frame_inds = frame_inds.tolist()
        sampled_frame_ids = []
        temporal_merge_kernel_size = self.media_proc_cfg[
            "temporal_merge_kernel_size"]
        num_chunks = 0
        chunk_timestamp = []
        for i in range(0, len(frame_inds), temporal_merge_kernel_size):
            sampled_frame_ids.extend(frame_inds[i:i +
                                                temporal_merge_kernel_size])
            start_time = frame_inds[i] / float(video_spec.fps)
            timestamp_text = timestamp_as_str(
                start_time, self.media_proc_cfg["timestamp_mode"])
            chunk_timestamp.append(timestamp_text)
            num_chunks += 1

        sampled_frames = resampling(video_url, sampled_frame_ids)
        chunks = []
        for chunk_id in range(num_chunks):
            chunk = sampled_frames[chunk_id *
                                   temporal_merge_kernel_size:(chunk_id + 1) *
                                   temporal_merge_kernel_size]
            chunks.append(
                VideoChunkInput(type="video_chunk",
                                video_chunk=chunk,
                                prompt=self.make_chunk_prompt(
                                    chunk_timestamp[chunk_id])))
        return chunks

    def get_resize_config(self, media_input: MediaInput) -> dict:
        if media_input['type'] == 'image':
            w, h = media_input['image'].size
            ret = navit_resize_image(
                w, h, self.media_proc_cfg['patch_size'],
                self.media_proc_cfg['merge_kernel_size'],
                self.media_proc_cfg['in_patch_limit'],
                self.media_proc_cfg['patch_limit_on_one_side'],
                self.media_proc_cfg['fixed_output_tokens'])
            return ret
        elif media_input['type'] == 'video_chunk':
            frame = media_input['video_chunk'][0]
| 112 |
+
width, height = frame.size
|
| 113 |
+
num_frames = len(media_input["video_chunk"])
|
| 114 |
+
fps = 1.0
|
| 115 |
+
|
| 116 |
+
sample_fps, max_num_frames_each_video = real_sample_fps_and_max_num_frames(
|
| 117 |
+
media_input["type"],
|
| 118 |
+
self.media_proc_cfg['sample_fps'],
|
| 119 |
+
self.media_proc_cfg['max_num_frames_each_video'],
|
| 120 |
+
)
|
| 121 |
+
|
| 122 |
+
in_patch_limit_each_frame = self.media_proc_cfg[
|
| 123 |
+
'in_patch_limit_each_frame']
|
| 124 |
+
if in_patch_limit_each_frame is None:
|
| 125 |
+
in_patch_limit_each_frame = self.media_proc_cfg[
|
| 126 |
+
'in_patch_limit']
|
| 127 |
+
|
| 128 |
+
ret = navit_resize_video(
|
| 129 |
+
width,
|
| 130 |
+
height,
|
| 131 |
+
num_frames,
|
| 132 |
+
fps,
|
| 133 |
+
sample_fps,
|
| 134 |
+
self.media_proc_cfg['patch_size'],
|
| 135 |
+
self.media_proc_cfg['merge_kernel_size'],
|
| 136 |
+
in_patch_limit_each_frame,
|
| 137 |
+
self.media_proc_cfg['patch_limit_on_one_side'],
|
| 138 |
+
self.media_proc_cfg['in_patch_limit_video'],
|
| 139 |
+
max_num_frames_each_video,
|
| 140 |
+
self.media_proc_cfg['fixed_output_tokens'],
|
| 141 |
+
)
|
| 142 |
+
return ret
|
| 143 |
+
else:
|
| 144 |
+
raise ValueError("Unsupported type: {}".format(
|
| 145 |
+
media_input['type']))
|
| 146 |
+
|
| 147 |
+
def resize_image(self, image: Image.Image, new_width: int, new_height: int,
|
| 148 |
+
pad_width: int, pad_height: int) -> np.ndarray:
|
| 149 |
+
image_np = image_to_np(image, (new_width, new_height), "resize")
|
| 150 |
+
image_np = np.pad(
|
| 151 |
+
image_np,
|
| 152 |
+
((0, pad_height), (0, pad_width), (0, 0)),
|
| 153 |
+
mode="constant",
|
| 154 |
+
constant_values=0,
|
| 155 |
+
)
|
| 156 |
+
return image_np
|
| 157 |
+
|
| 158 |
+
def preprocess(
|
| 159 |
+
self,
|
| 160 |
+
medias: list[MediaInput],
|
| 161 |
+
return_tensors: Optional[Union[str, TensorType]] = None,
|
| 162 |
+
) -> BatchFeature:
|
| 163 |
+
"""
|
| 164 |
+
Preprocess a atom vision input (images/video_chunk) into model-ready tensors.
|
| 165 |
+
|
| 166 |
+
Args:
|
| 167 |
+
medias: List of MediaInput.
|
| 168 |
+
return_tensors: Desired output format ('pt', 'np', 'tf', or None).
|
| 169 |
+
|
| 170 |
+
Returns:
|
| 171 |
+
BatchFeature containing 'pixel_values' and 'grid_thws' tensors.
|
| 172 |
+
"""
|
| 173 |
+
if not isinstance(medias, list):
|
| 174 |
+
medias = [medias]
|
| 175 |
+
if medias:
|
| 176 |
+
pixel_values = []
|
| 177 |
+
for item in medias:
|
| 178 |
+
item = ensure_media_type(item)
|
| 179 |
+
resize_config = self.get_resize_config(item)
|
| 180 |
+
new_width, new_height, pad_width, pad_height = resize_config[
|
| 181 |
+
'new_width'], resize_config['new_height'], resize_config[
|
| 182 |
+
'pad_width'], resize_config['pad_height']
|
| 183 |
+
if item['type'] == 'image':
|
| 184 |
+
image = item['image']
|
| 185 |
+
image_np = self.resize_image(image, new_width, new_height,
|
| 186 |
+
pad_width, pad_height)
|
| 187 |
+
pixel_values.append(np.expand_dims(image_np, axis=0))
|
| 188 |
+
elif item['type'] == 'video_chunk':
|
| 189 |
+
pixels = []
|
| 190 |
+
for frame in item['video_chunk']:
|
| 191 |
+
frame_np = self.resize_image(frame, new_width,
|
| 192 |
+
new_height, pad_width,
|
| 193 |
+
pad_height)
|
| 194 |
+
pixels.append(frame_np)
|
| 195 |
+
pixel_values.append(np.stack(pixels, axis=0))
|
| 196 |
+
else:
|
| 197 |
+
raise ValueError("Unsupported type: {}".format(
|
| 198 |
+
item['type']))
|
| 199 |
+
normalized_pixel_values = []
|
| 200 |
+
image_std_inv = 1.0 / np.array(self.media_proc_cfg['image_std'])
|
| 201 |
+
image_mean = np.array(self.media_proc_cfg['image_mean'])
|
| 202 |
+
for pixels in pixel_values:
|
| 203 |
+
pixels = normalize(pixels, image_mean, image_std_inv)
|
| 204 |
+
pixels_and_thw = navit_patchify(
|
| 205 |
+
pixels,
|
| 206 |
+
self.media_proc_cfg['patch_size'],
|
| 207 |
+
)
|
| 208 |
+
normalized_pixel_values.append(pixels_and_thw)
|
| 209 |
+
|
| 210 |
+
pixel_values = torch.cat([
|
| 211 |
+
_to_tensor(pixel_value['pixel_values'])
|
| 212 |
+
for pixel_value in normalized_pixel_values
|
| 213 |
+
])
|
| 214 |
+
grid_thws = torch.cat([
|
| 215 |
+
_to_tensor(pixel_value['grid_thw'],
|
| 216 |
+
dtype=torch.int64).unsqueeze(0)
|
| 217 |
+
for pixel_value in normalized_pixel_values
|
| 218 |
+
])
|
| 219 |
+
|
| 220 |
+
data = {
|
| 221 |
+
'pixel_values': pixel_values,
|
| 222 |
+
'grid_thws': grid_thws,
|
| 223 |
+
}
|
| 224 |
+
|
| 225 |
+
else:
|
| 226 |
+
data = {}
|
| 227 |
+
|
| 228 |
+
return BatchFeature(data=data, tensor_type=return_tensors)
|
| 229 |
+
|
| 230 |
+
def __repr__(self):
|
| 231 |
+
return f"KimiK25VisionProcessor(media_proc_cfg={self.media_proc_cfg})"
|
| 232 |
+
|
| 233 |
+
def to_dict(self) -> Dict[str, Any]:
|
| 234 |
+
output = super().to_dict()
|
| 235 |
+
output["media_proc_cfg"] = self.media_proc_cfg
|
| 236 |
+
if "media_processor" in output:
|
| 237 |
+
del output["media_processor"]
|
| 238 |
+
return output
|
| 239 |
+
|
| 240 |
+
@classmethod
|
| 241 |
+
def from_dict(cls, config_dict: Dict[str, Any], **kwargs):
|
| 242 |
+
config = config_dict.copy()
|
| 243 |
+
media_proc_cfg = config.pop("media_proc_cfg", {})
|
| 244 |
+
return cls(media_proc_cfg=media_proc_cfg, **config, **kwargs)
|
| 245 |
+
|
| 246 |
+
def to_json_string(self):
|
| 247 |
+
dictionary = self.to_dict()
|
| 248 |
+
for key, value in dictionary.items():
|
| 249 |
+
if hasattr(value, 'tolist'):
|
| 250 |
+
dictionary[key] = value.tolist()
|
| 251 |
+
return json.dumps(dictionary, indent=2, sort_keys=True) + "\n"
|
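The frame-sampling and chunking arithmetic in `split_video_chunks` can be sketched standalone. The clip length, fps, sampling rate, and chunk size below are illustrative values, not this repository's actual configuration.

```python
import numpy as np

num_frames, fps = 120, 30.0           # hypothetical 4-second clip
sample_fps = 2.0                      # hypothetical target sampling rate
temporal_merge_kernel_size = 4        # frames merged into one video chunk

# Evenly sample frame indices at the target rate.
sampled_nframes = max(round(num_frames * sample_fps / fps), 1)
frame_inds = np.linspace(0, num_frames - 1,
                         sampled_nframes).round().astype(int).tolist()

# Group the sampled indices into fixed-size chunks; each chunk is later
# prefixed with the timestamp of its first frame.
chunks = [
    frame_inds[i:i + temporal_merge_kernel_size]
    for i in range(0, len(frame_inds), temporal_merge_kernel_size)
]
start_times = [chunk[0] / fps for chunk in chunks]
# chunks -> [[0, 17, 34, 51], [68, 85, 102, 119]]
```

Each inner list corresponds to one `VideoChunkInput`; `make_chunk_prompt` turns the chunk's start time into the `<|media_begin|>...<|media_end|>` prompt.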
media_utils.py
ADDED
|
@@ -0,0 +1,368 @@
import base64
import io
import math
import os
from datetime import datetime, timezone
from typing import List, Literal, Optional, TypedDict

import numpy as np
from PIL import Image
from pydantic import BaseModel, Field

try:
    from mecord import VideoReader
except ImportError:
    VideoReader = None


class VideoSpec(BaseModel):
    media_type: Literal['video'] = 'video'
    height: int = Field(..., gt=0, description="video frame height")
    width: int = Field(..., gt=0, description="video frame width")
    num_frames: int = Field(..., gt=0, description="num frames")
    fps: float = Field(..., gt=0, description="average fps")

    # Optional; helps to accelerate video reading.
    key_indices: Optional[list[int]] = Field(None, description="key indices")
    frame_time_info: Optional[dict] = Field(None,
                                            description="frame time info")


class ImageInput(TypedDict):
    type: Literal['image']
    image: Image.Image


class VideoChunkInput(TypedDict):
    type: Literal['video_chunk']
    video_chunk: List[Image.Image]
    prompt: Optional[str]


MediaInput = ImageInput | VideoChunkInput


def get_video_meta(video_src: bytes | str | os.PathLike,
                   accurate: bool = True) -> VideoSpec:
    """Read the basic metadata (dimensions, frame count, fps) of a video."""
    if isinstance(video_src, os.PathLike):
        video_src = str(video_src)
    # If given a base64 data URI, decode it to raw bytes.
    if isinstance(video_src,
                  str) and video_src.startswith('data:video/mp4;base64,'):
        video_src = base64.b64decode(video_src.split(',')[1])
    video = VideoReader(video_src, auto_init=accurate, num_threads=1)
    assert video.num_frames > 0, "Invalid video format."
    assert video.original_width > 0 and video.original_height > 0, (
        "Invalid video format.")
    assert video.avg_fps > 0, "Invalid video format."
    return VideoSpec(media_type='video',
                     height=video.original_height,
                     width=video.original_width,
                     num_frames=video.num_frames,
                     fps=video.avg_fps,
                     key_indices=video.key_indices,
                     frame_time_info=video.frame_time_info)


def timestamp_as_str(timestamp: float,
                     timestamp_mode: str = "hh:mm:ss.fff") -> str:
    """Convert a timestamp in seconds to a string such as HH:MM:SS.mmm."""
    if timestamp_mode == "hh:mm:ss.fff":
        return (datetime.fromtimestamp(timestamp,
                                       tz=timezone.utc).strftime("%H:%M:%S") +
                f".{int((timestamp % 1) * 1000):03d}")
    elif timestamp_mode == "mm:ss.fff":
        return (datetime.fromtimestamp(timestamp,
                                       tz=timezone.utc).strftime("%M:%S") +
                f".{int((timestamp % 1) * 1000):03d}")
    elif timestamp_mode == "mm:ss":
        return datetime.fromtimestamp(timestamp,
                                      tz=timezone.utc).strftime("%M:%S")
    else:
        raise ValueError(f"Invalid timestamp mode: {timestamp_mode}")

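The millisecond handling in `timestamp_as_str` is easy to get wrong, so a minimal standalone copy of the `"hh:mm:ss.fff"` branch may help (the helper name `hhmmss_fff` is ours, for illustration only):

```python
from datetime import datetime, timezone

def hhmmss_fff(timestamp: float) -> str:
    # Whole seconds come from strftime; the fractional part is appended
    # manually as zero-padded milliseconds.
    base = datetime.fromtimestamp(timestamp,
                                  tz=timezone.utc).strftime("%H:%M:%S")
    return base + f".{int((timestamp % 1) * 1000):03d}"

hhmmss_fff(3723.5)  # -> "01:02:03.500"
```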
def navit_resize_image(
    width: int,
    height: int,
    patch_size: int,
    merge_kernel_size: int,
    in_patch_limit: int,
    patch_limit_on_one_side: int,
    fixed_output_tokens: int | None,
):
    # Apply the patch limits.
    s1 = math.sqrt(
        in_patch_limit /
        (max(1.0, width // patch_size) * max(1.0, height // patch_size)))
    s2 = patch_limit_on_one_side * patch_size / width
    s3 = patch_limit_on_one_side * patch_size / height
    scale = min(1.0, s1, s2, s3)
    new_w, new_h = max(1, int(width * scale)), max(1, int(height * scale))
    new_w = min(new_w, patch_limit_on_one_side * patch_size)
    new_h = min(new_h, patch_limit_on_one_side * patch_size)

    # Calculate the padding that makes the height and width divisible by the
    # merge kernel size times the patch size.
    factor = merge_kernel_size * patch_size

    pad_height = (factor - new_h % factor) % factor
    pad_width = (factor - new_w % factor) % factor

    if fixed_output_tokens is not None:
        num_tokens = fixed_output_tokens
    else:
        # Calculate the token grid after padding and patching.
        token_height = (new_h + pad_height) // factor
        token_width = (new_w + pad_width) // factor

        assert token_height * merge_kernel_size <= patch_limit_on_one_side, (
            f"token_height {token_height} * merge_kernel_size {merge_kernel_size} > patch_limit_on_one_side {patch_limit_on_one_side}"
        )
        assert token_width * merge_kernel_size <= patch_limit_on_one_side, (
            f"token_width {token_width} * merge_kernel_size {merge_kernel_size} > patch_limit_on_one_side {patch_limit_on_one_side}"
        )

        num_tokens = token_height * token_width
    return {
        "num_tokens": num_tokens,
        "new_width": new_w,
        "new_height": new_h,
        "pad_width": pad_width,
        "pad_height": pad_height,
        "sampled_nframes": 1,
    }

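The scale/pad/token arithmetic above can be traced with concrete numbers. The resolution and limits below are illustrative, not the model's shipped configuration.

```python
import math

width, height = 1920, 1080
patch_size, merge_kernel_size = 14, 2
in_patch_limit, patch_limit_on_one_side = 4096, 512

# Downscale so the patch count fits the budget and neither side exceeds
# the per-side patch limit.
s1 = math.sqrt(in_patch_limit /
               (max(1.0, width // patch_size) * max(1.0, height // patch_size)))
s2 = patch_limit_on_one_side * patch_size / width
s3 = patch_limit_on_one_side * patch_size / height
scale = min(1.0, s1, s2, s3)
new_w = min(max(1, int(width * scale)), patch_limit_on_one_side * patch_size)
new_h = min(max(1, int(height * scale)), patch_limit_on_one_side * patch_size)

# Pad so both sides divide evenly into merged patches.
factor = merge_kernel_size * patch_size
pad_w = (factor - new_w % factor) % factor
pad_h = (factor - new_h % factor) % factor
num_tokens = ((new_h + pad_h) // factor) * ((new_w + pad_w) // factor)
```

For a 1920x1080 input under these limits the image lands at 1196x672, padded to 1204x672, which yields a 24x43 grid of merged patches.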
def navit_resize_video(
    width: int,
    height: int,
    nframes: int,
    avg_fps: float,
    sample_fps: float,
    patch_size: int,
    merge_kernel_size: int,
    in_patch_limit_each_frame: int,
    patch_limit_on_one_side: int,
    in_patch_limit_total: int | None,
    max_num_frames_each_video: int | None,
    fixed_output_tokens_each_frame: int | None,
):
    sample_fps = min(sample_fps, avg_fps)
    # Calculate the number of frames to sample based on the target FPS.
    sampled_nframes = max(round(nframes * sample_fps / avg_fps), 1)
    if max_num_frames_each_video is not None:
        sampled_nframes = min(sampled_nframes, max_num_frames_each_video)

    # Split the total patch budget evenly across the sampled frames.
    if in_patch_limit_total is not None:
        in_patch_limit_each_frame = min(
            round(in_patch_limit_total / sampled_nframes),
            in_patch_limit_each_frame)

    ret = navit_resize_image(
        width,
        height,
        patch_size,
        merge_kernel_size,
        in_patch_limit_each_frame,
        patch_limit_on_one_side,
        fixed_output_tokens_each_frame,
    )
    ret["sampled_nframes"] = sampled_nframes
    return ret


def real_sample_fps_and_max_num_frames(
    type_name: Literal["video", "video_chunk"],
    sample_fps: float,
    max_num_frames_each_video: int | None,
) -> tuple[float, int | None]:
    if type_name == "video":
        return sample_fps, max_num_frames_each_video
    elif type_name == "video_chunk":
        # Frames in a chunk are already sampled, so take all of them.
        max_num_frames_each_video = None
        sample_fps = math.inf
        return sample_fps, max_num_frames_each_video
    else:
        return math.inf, None

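A quick trace of how `navit_resize_video` caps the frame count and splits a total patch budget per frame, with made-up numbers:

```python
# Illustrative config, not the repository's real values.
nframes, avg_fps = 300, 30.0
sample_fps = min(2.0, avg_fps)
max_num_frames_each_video = 16
in_patch_limit_total = 16384
in_patch_limit_each_frame = 2048

# Frame count at the target rate, then capped.
sampled_nframes = max(round(nframes * sample_fps / avg_fps), 1)   # 20
sampled_nframes = min(sampled_nframes, max_num_frames_each_video)  # 16

# The per-frame budget shrinks when the total budget is the binding limit.
in_patch_limit_each_frame = min(round(in_patch_limit_total / sampled_nframes),
                                in_patch_limit_each_frame)
```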
def _to_pil(data: str | bytes | Image.Image) -> Image.Image:
    if isinstance(data, Image.Image):
        return data.convert("RGB")
    elif isinstance(data, str):
        if data.startswith("data:"):
            raw_base64 = data.split(",")[1]
            return Image.open(io.BytesIO(
                base64.b64decode(raw_base64))).convert("RGB")
        else:
            return Image.open(data).convert("RGB")
    elif isinstance(data, bytes):
        return Image.open(io.BytesIO(data)).convert("RGB")
    else:
        raise ValueError(f"Unsupported data type: {type(data)}")


def ensure_media_type(media: MediaInput) -> MediaInput:
    if media['type'] == 'image':
        media['image'] = _to_pil(media['image'])
        return media
    elif media['type'] == 'video_chunk':
        media['video_chunk'] = [
            _to_pil(frame) for frame in media['video_chunk']
        ]
        return media
    else:
        raise ValueError(f"Unsupported media type: {media['type']}")

def image_to_np(
    image: Image.Image,
    resize_to: tuple[int, int] | None = None,
    mode: str = "resize",
    raise_error_for_ill_resize: bool = True,
) -> np.ndarray:
    """Convert an image to a numpy array.

    Args:
        image: The image to convert.
        resize_to: The (width, height) to resize the image to.
        mode: The resize mode: "resize", "rescale_and_pad_to_center",
            or "rescale_and_pad_to_rightbottom".
        raise_error_for_ill_resize: Whether to raise an error when the
            rescale would collapse the image to zero size.

    Returns:
        A numpy array of shape (height, width, 3).
    """
    assert isinstance(image, Image.Image), "image must be a PIL Image"
    if resize_to is not None:
        if mode == "resize":
            image = image.resize(resize_to, resample=Image.Resampling.BICUBIC)

        elif mode == "rescale_and_pad_to_center":
            scale = min(resize_to[0] / image.width,
                        resize_to[1] / image.height, 1.0)
            new_width = round(image.width * scale)
            new_height = round(image.height * scale)
            if new_width == 0 or new_height == 0:
                if raise_error_for_ill_resize:
                    raise ValueError(
                        f"Invalid resize to: {resize_to}, from image size: {image.size}"
                    )
                else:
                    return np.zeros((resize_to[1], resize_to[0], 3),
                                    dtype=np.uint8)

            image = image.resize((new_width, new_height),
                                 resample=Image.Resampling.BICUBIC)
            padding_left = (resize_to[0] - new_width) // 2
            padding_right = resize_to[0] - new_width - padding_left
            padding_top = (resize_to[1] - new_height) // 2
            padding_bottom = resize_to[1] - new_height - padding_top
            image = np.asarray(image)
            image = np.pad(
                image,
                ((padding_top, padding_bottom), (padding_left, padding_right),
                 (0, 0)),
                mode="constant",
                constant_values=0,
            )
            assert image.shape == (resize_to[1], resize_to[0], 3)

        elif mode == "rescale_and_pad_to_rightbottom":
            scale = min(resize_to[0] / image.width,
                        resize_to[1] / image.height, 1.0)
            new_width = round(image.width * scale)
            new_height = round(image.height * scale)
            if new_width == 0 or new_height == 0:
                if raise_error_for_ill_resize:
                    raise ValueError(
                        f"Invalid resize to: {resize_to}, from image size: {image.size}"
                    )
                else:
                    return np.zeros((resize_to[1], resize_to[0], 3),
                                    dtype=np.uint8)

            image = image.resize((new_width, new_height),
                                 resample=Image.Resampling.BICUBIC)
            padding_right = resize_to[0] - new_width
            padding_bottom = resize_to[1] - new_height
            image = np.asarray(image)
            image = np.pad(
                image,
                ((0, padding_bottom), (0, padding_right), (0, 0)),
                mode="constant",
                constant_values=0,
            )
            assert image.shape == (resize_to[1], resize_to[0], 3)

        else:
            raise ValueError(f"Invalid mode: {mode}")

    if isinstance(image, Image.Image):
        return np.asarray(image)
    else:
        return image

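The `rescale_and_pad_to_center` branch splits the padding asymmetrically (the extra pixel of an odd remainder goes to the right/bottom). A standalone numpy sketch with made-up target and image sizes:

```python
import numpy as np

target_w, target_h = 10, 8
new_w, new_h = 6, 4  # image size after the aspect-preserving rescale

# Center the image: floor-half goes to the left/top, the rest to the
# right/bottom, so the two sides always sum to the required padding.
pad_left = (target_w - new_w) // 2
pad_right = target_w - new_w - pad_left
pad_top = (target_h - new_h) // 2
pad_bottom = target_h - new_h - pad_top

img = np.ones((new_h, new_w, 3), dtype=np.uint8)
padded = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right), (0, 0)),
                mode="constant", constant_values=0)
```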
def navit_patchify(pixel_values: np.ndarray,
                   patch_size: int) -> dict[str, np.ndarray]:
    """Reshape the pixel values into NaViT-style patches.

    Args:
        pixel_values: np.ndarray, shape (t, h, w, c)
        patch_size: int

    Returns:
        dict[str, np.ndarray]
        - pixel_values: np.ndarray, shape (t * h//patch_size * w//patch_size, c, patch_size, patch_size)
        - grid_thw: np.ndarray, (t, h//patch_size, w//patch_size)
    """
    T, H, W, C = pixel_values.shape
    assert C == 3, "pixel_values must have 3 channels"

    patches = pixel_values.reshape(T, H // patch_size, patch_size,
                                   W // patch_size, patch_size, C)
    # (T, H//patch_size, W//patch_size, C, patch_size, patch_size)
    patches = patches.transpose(0, 1, 3, 5, 2, 4)
    patches = patches.reshape(-1, C, patch_size, patch_size)
    grid_thw = np.array([T, H // patch_size, W // patch_size])
    return {"pixel_values": patches, "grid_thw": grid_thw}

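The reshape/transpose in `navit_patchify` can be verified on a tiny tensor, checking both the output shape and that the first patch really is the top-left 2x2 block:

```python
import numpy as np

patch_size = 2
T, H, W, C = 1, 4, 6, 3
x = np.arange(T * H * W * C).reshape(T, H, W, C)

# Same reshape -> transpose -> reshape as navit_patchify above.
patches = x.reshape(T, H // patch_size, patch_size, W // patch_size,
                    patch_size, C)
patches = patches.transpose(0, 1, 3, 5, 2, 4)  # (T, h, w, C, p, p)
patches = patches.reshape(-1, C, patch_size, patch_size)
grid_thw = (T, H // patch_size, W // patch_size)
```

`patches[k]` walks the grid row-major within each frame, so `patches[0]` holds the top-left patch of the first frame.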
def normalize(x: np.ndarray,
              mean,
              std_inv,
              pixels_dtype: np.dtype = np.float32) -> np.ndarray:
    """Normalize the image.

    Args:
        x: The image to normalize, shape (..., 3), dtype uint8, range [0, 255].
        mean: The per-channel mean of the image.
        std_inv: The inverse of the per-channel std of the image.
        pixels_dtype: The dtype of the returned array.
    Returns:
        The normalized image, shape (..., 3), dtype `pixels_dtype`.
    """
    x = (x / 255.0).astype(pixels_dtype)
    x -= mean
    x *= std_inv
    return x


def _to_tensor(data, **kwargs):
    import torch

    if isinstance(data, np.ndarray):
        return torch.from_numpy(data).to(**kwargs)
    elif isinstance(data, torch.Tensor):
        return data.to(**kwargs)
    elif isinstance(data, list):
        return [_to_tensor(item, **kwargs) for item in data]
    elif isinstance(data, tuple):
        return tuple(_to_tensor(item, **kwargs) for item in data)
    elif isinstance(data, dict):
        return {k: _to_tensor(v, **kwargs) for k, v in data.items()}
    elif data is None:
        return None
    else:
        raise ValueError(f"Unsupported data type: {type(data)}")

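The `normalize` step traced on a single pixel (the mean/std values here are illustrative, not the checkpoint's actual `image_mean`/`image_std`):

```python
import numpy as np

mean = np.array([0.5, 0.5, 0.5])
std_inv = 1.0 / np.array([0.5, 0.5, 0.5])

# One RGB pixel: fully saturated red channel, empty green, mid-gray blue.
x = np.array([[[255, 0, 127]]], dtype=np.uint8)

# Scale uint8 to [0, 1], subtract the mean, multiply by the inverse std.
y = (x / 255.0).astype(np.float32)
y -= mean
y *= std_inv
# Channels 0 and 1 land exactly on +1.0 and -1.0.
```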
model-00001-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:198956be53f7468823d2e789daa445a78e8a6dfd77ef99fab3391d1717a898d2
|
| 3 |
+
size 4989823800
|
model-00002-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:05956705bdaab932a41ffac0546a4122a4e82d72c12e01477d49e7562437a2ac
|
| 3 |
+
size 4995903192
|
model-00003-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f9abb20f577b1cdf214ecd3e8729d3b48fc67889711ffc3048588054924a9757
|
| 3 |
+
size 4995903656
|
model-00004-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0056ed3536a45b6b4f3d4c4ae5b697e315a87996cb4eb05e03011e390d673952
|
| 3 |
+
size 4995903656
|
model-00005-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:822741fd8f15f58cd3f74e9bbf83d41629ae9df35d59a738794085d4180b4294
|
| 3 |
+
size 4995411128
|
model-00006-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:1d8f8aeb07ba5e82f4dedf92921801efe273a8394ed9af7c09b8c5ee94132b1f
|
| 3 |
+
size 4995903496
|
model-00007-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:90ae184d62f3e775f7937f9e7688c224c19c1562488804e6b5bd7f874a2f6255
|
| 3 |
+
size 4995903656
|
model-00008-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2c919d6c6e768403f4673b3fe60073adbe4537188e5e6b5913db97f8aff0fffb
|
| 3 |
+
size 4995411480
|
model-00009-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7578de9d665f5b23ae47053921234a52f46188a7371c665af15219cb2102e4a9
|
| 3 |
+
size 4995903152
|
model-00010-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9961e676f0cff04872ccce764e82c45917117ee875ed2b7a5a1859480f9b68cc
|
| 3 |
+
size 4995903656
|
model-00011-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:86997530dccb7db8540b17280373397dbcbab005cfd5d88b13a935c07906a1c5
|
| 3 |
+
size 4995903656
|
model-00012-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f7f1e0eb108b4d036fbb9cf399f063371302c222708649c0baa9914e37632e73
|
| 3 |
+
size 4995411168
|
model-00013-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8b91a51b56669ef998b93ecfeee26a321915babf8cb31e686a0bbb2da5763bf5
|
| 3 |
+
size 4995903456
|
model-00014-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:456f8c066ed456cd7d8130c965829b3de1e894540d768bacc24209e65fb422a9
|
| 3 |
+
size 4995903656
|
model-00015-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a2fe7d9b74eccf9c4e08cb50adc6dc86d38da6d9a03ede8ce5bd537d9aee8476
|
| 3 |
+
size 4995411528
|
model-00016-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:16048e780ed115e85d61317c11862f19190b1f8e96253407a203c69bf104e102
|
| 3 |
+
size 4995903104
|
model-00017-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4085a09f53da3b92c2cc68012b402ac08b087f034df30f7be70d4fc345f3706e
|
| 3 |
+
size 4995903656
|
model-00018-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:663fef64245d6b010c037adbe562d98cb9c5c1850a82eb8bc854f2441d9a4d49
|
| 3 |
+
size 4995903656
|
model-00019-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:656709286f727790e624a32ac226392bd1553d8393da1c33ffd3e845c3f10bd2
|
| 3 |
+
size 4995411208
|
model-00020-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ed517bd9e95b054dfbb212d9719693ce45b5c9b17827d4c31bcb3c0d11c3ed82
|
| 3 |
+
size 4995903416
|
model-00021-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:78e16d261cfcc5e3b1f6182840297ec53b3a72ee60aaeff8247bfad70bc51c4b
|
| 3 |
+
size 4995903656
|
model-00022-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:4824c2ff298c8aa0253cd97d1255f85d76e39d401fdd7aec0393b13d9eaf7e7e
|
| 3 |
+
size 4995411608
|
model-00023-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c65e44477ef5d0d47935e4ef38dda820713cb4d4eb414090726f070fdd0bfdec
|
| 3 |
+
size 4995903024
|
model-00024-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:179a2becbd88a81987a7d5a17dd5691460e95ad84ec5abe55f9a18f8fa88daff
|
| 3 |
+
size 4995903656
|
model-00025-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ff51369231e664dcb3e9157b189d95c1cd5d4b17bdd838a0dac2d3e393f5292e
|
| 3 |
+
size 4995903656
|
model-00026-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3bacf76fbcaefbed6d2b4d1b44d7101466264ceb2109106007b6cb841412ee89
|
| 3 |
+
size 4995411248
|
model-00027-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:6f84904610e7537854bc72ae3c3d2f9b16060655d310db8e731573414f777284
|
| 3 |
+
size 4995903376
|
model-00028-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:14aeaf2778bd9430a5e17ed52185a609e0dd9da5ede7cd09d6200e2e360234a7
|
| 3 |
+
size 4995903656
|
model-00029-of-00214.safetensors
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e440901f5b2fa36acfa8dc73b7df789ee0372c755b42d1faaa55e654fb5ae10f
|
| 3 |
+
size 4995903656
|
model-00030-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:034a669bc8e23554b860e041654232932b6ef7bccd4162f24425a0691581945d
+size 4995410976
model-00031-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5a085e62bc5e5c42ae68a0a158d7eb12e578fd08b3a00c0305d22c7f4d73274
+size 4995903656
model-00032-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a6ceaf762c646112a11a64570734fb2a3ce4a852b9d525e6e21b13c165f65bb
+size 4995903656
model-00033-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c0432d14e4c706f3525a5b0039d796ebc390e0b6ddc766b812ea51ba46bce5e
+size 4995411584
model-00034-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc1d8b6d80e2e034a24c8c4ce5956c9d04dcd604cf7ee6a5470f642cb3164d97
+size 4995904000
model-00035-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e43739f223a970dcb257646b7f956c8cf848e7d0d4637ca1faaa5162fe2b9ea4
+size 4995904320
model-00036-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b7a20a5e8630c6d93f6f185774c27de85c2ab946bbe728f07ee9fd081036941
+size 4995904320
model-00037-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7100a4de949bfc52ee2882d4a36e8a932b6624b901f6430eb467025d8c1c6a55
+size 4995411632
model-00038-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:698a8dc08809edda2c8b98022028a9a5a4ee2bb27d2d7d94848e34f97ed4978e
+size 4995904312
model-00039-of-00214.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dcdc2727229ee0d37eccbc8d2a147ea3c0bd75d3f3d0ce13f7d53e3c5016aed6
+size 4995904320