SiyouLi committed on
Commit 20e4fd0 · verified · 1 Parent(s): b298779

Add files using upload-large-folder tool

README.md CHANGED
@@ -1,3 +1,214 @@
1
- ---
2
- license: mit
3
- ---
1
+ ---
2
+ license: mit
3
+ library_name: transformers
4
+ pipeline_tag: image-text-to-text
5
+ language:
6
+ - en
7
+ tags:
8
+ - multimodal
9
+ - vision
10
+ - video
11
+ - long-video
12
+ - token-selection
13
+ - compression
14
+ - internvl2.5
15
+ - qtsplus
16
+ ---
17
+
18
+ [![arXiv](https://img.shields.io/badge/arXiv-2511.11910-grey?labelColor=B31B1B&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2511.11910)
19
+ [![Website](https://img.shields.io/badge/Website-QTSplus-grey?labelColor=3776AB&logo=GoogleChrome&logoColor=white)](https://qtsplus.github.io/)
20
+ [![Github](https://img.shields.io/badge/Github-QTSplus-grey?labelColor=000&logo=github)](https://github.com/Siyou-Li/QTSplus)
21
+
22
+ ## Model Description
23
+ ![QTSplus architecture overview](./assets/qtsplus.svg)
24
+
25
+ QTSplus-InternVL2.5-8B is an InternVL2.5-8B-based multimodal LLM fine-tuned with the Query‑Aware Token Selector (QTSplus), a lightweight visual token selection module that acts as an information gate between the vision encoder and the LLM.
26
+
27
+ - Query‑aware selection: scores vision tokens via cross‑attention against the input text query (see the sketch after this list).
28
+ - Adaptive retention: predicts an instance‑specific budget and keeps only the most relevant tokens.
29
+ - Temporal reasoning: a small re‑encoder preserves temporal order with absolute time cues.
30
+ - Efficient long‑video understanding: up to 89% vision token compression and 28% end‑to‑end latency reduction on long videos (see paper for details).
31
+
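+ To make the selection mechanism concrete, here is a minimal, self-contained sketch of query-aware scoring with an adaptive retention budget. It only illustrates the idea: the pooled-query scoring, the entropy-based budget heuristic, and the tensor shapes are assumptions, not the module shipped in this repository.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+
+ def query_aware_select(vision_tokens, query_tokens, rho_min=0.05, rho_max=0.5, tau_s=0.1, n_max=2560):
+     """Illustrative sketch: score vision tokens against the text query and keep an adaptive subset."""
+     # vision_tokens: (N, d), query_tokens: (M, d), both already projected to a shared dimension.
+     query = query_tokens.mean(dim=0, keepdim=True)             # (1, d) pooled text query
+     scores = (vision_tokens @ query.t()).squeeze(-1) / tau_s   # (N,) relevance scores
+     probs = F.softmax(scores, dim=0)
+
+     # Adaptive budget: map how diffuse the relevance distribution is to a ratio in [rho_min, rho_max].
+     # (The released model predicts the budget with a small learned head; this heuristic is an assumption.)
+     entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
+     max_entropy = torch.log(torch.tensor(float(vision_tokens.shape[0])))
+     rho = rho_min + (rho_max - rho_min) * (entropy / max_entropy).item()
+
+     keep = min(int(rho * vision_tokens.shape[0]) + 1, n_max)
+     top = torch.topk(scores, k=keep).indices.sort().values     # re-sort indices to preserve temporal order
+     return vision_tokens[top], top
+
+
+ # Toy usage: 1,000 vision tokens and 16 query tokens with 512-dim features.
+ selected, kept_idx = query_aware_select(torch.randn(1000, 512), torch.randn(16, 512))
+ print(selected.shape, kept_idx[:5])
+ ```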
32
+ ## Intended Uses & Limitations
33
+
34
+ Intended uses
35
+ - Long‑video question answering and captioning
36
+ - Multi‑image reasoning and story understanding
37
+ - Efficient multimodal chat with reduced latency on long inputs
38
+
39
+ Limitations
40
+ - May miss fine details if the predicted retention budget is too small.
41
+ - Inherits biases and failure modes from the base InternVL2.5 model and training data.
42
+ - Not a safety‑aligned system; outputs may be inaccurate or unsafe without human oversight.
43
+
44
+ ## Quick Start
45
+
46
+ The repository assumes a conda‑based Python 3.11 environment on a machine with a CUDA‑enabled GPU.
47
+
48
+ 1. **Create and activate the conda environment**
49
+
50
+ ```bash
51
+ conda create -n qtsplus python=3.11 -y
52
+ conda activate qtsplus
53
+ ```
54
+
55
+ 2. **Install toolchain and CUDA toolkit**
56
+
57
+ ```bash
58
+ conda install conda-forge::gcc=11 conda-forge::gxx=11 -y
59
+ conda install nvidia/label/cuda-12.8.1::cuda-toolkit -y
60
+ conda install av -c conda-forge -y
61
+ ```
62
+
63
+ 3. **Install PyTorch with CUDA 12.8 support**
64
+
65
+ ```bash
66
+ pip3 install torch==2.9.0 torchvision --index-url https://download.pytorch.org/whl/cu128
67
+ ```
68
+
69
+ 4. **Install core Python libraries**
70
+
71
+ ```bash
72
+ pip install transformers==4.57.1
73
+ DS_BUILD_CUTLASS_OPS=0 DS_BUILD_RAGGED_DEVICE_OPS=0 DS_BUILD_EVOFORMER_ATTN=0 pip install deepspeed
74
+ pip install accelerate pandas wandb matplotlib scikit-learn datasets evaluate ftfy sentencepiece bitsandbytes
75
+ ```
76
+
77
+ 5. **Install FlashAttention (prebuilt wheel)**
78
+
79
+ ```bash
80
+ pip install https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.4.22/flash_attn-2.8.1+cu128torch2.9-cp311-cp311-linux_x86_64.whl
81
+ ```
82
+
83
+ This wheel is specific to Linux x86_64, CUDA 12.8, PyTorch 2.9.0 and Python 3.11; if you deviate from this configuration, you will need to install a compatible FlashAttention build instead.
84
+
85
+ 6. **Verify installation**
86
+
87
+ After installation, you should be able to run:
88
+
89
+ ```bash
90
+ python -c "import torch, transformers, deepspeed, accelerate; print(torch.cuda.is_available())"
91
+ ```
92
+
93
+ which should print `True` on a correctly configured GPU machine.
94
+
95
+ Video example
96
+ ```python
97
+ from __future__ import annotations
98
+
99
+ import argparse
100
+
101
+ import torch
102
+ from transformers import AutoModelForCausalLM, AutoProcessor
103
+
104
+
105
+ def main() -> int:
106
+ parser = argparse.ArgumentParser(description="QTSplus-InternVL2.5-8B video QA demo")
107
+ parser.add_argument(
108
+ "--model",
109
+ type=str,
110
+ default="AlpachinoNLP/QTSplus-InternVL2.5-8B",
111
+ help="Model ID or path",
112
+ )
113
+ parser.add_argument(
114
+ "--video",
115
+ type=str,
116
+ default="your/path/to/video.mp4",
117
+ help="Path to a video file",
118
+ )
119
+ parser.add_argument(
120
+ "--question",
121
+ type=str,
122
+ default="Describe what is happening in the video.",
123
+ help="Question about the video (pass an empty string to be prompted interactively)",
124
+ )
125
+ parser.add_argument("--num_frames", type=int, default=20, help="Number of frames uniformly sampled from the video")
126
+ parser.add_argument("--max_new_tokens", type=int, default=512, help="Max new tokens to generate")
127
+ args = parser.parse_args()
128
+
129
+ question = args.question
130
+ if not question:
131
+ question = input("Question: ").strip()
132
+ if not question:
133
+ raise SystemExit("Empty question.")
134
+
135
+ processor = AutoProcessor.from_pretrained(args.model, trust_remote_code=True)
136
+ dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
137
+ try:
138
+ model = AutoModelForCausalLM.from_pretrained(
139
+ args.model,
140
+ trust_remote_code=True,
141
+ dtype=dtype,
142
+ device_map="auto" if torch.cuda.is_available() else None,
143
+ low_cpu_mem_usage=True,
144
+ ).eval()
145
+ except TypeError:
146
+ model = AutoModelForCausalLM.from_pretrained(
147
+ args.model,
148
+ trust_remote_code=True,
149
+ torch_dtype=dtype,
150
+ device_map="auto" if torch.cuda.is_available() else None,
151
+ low_cpu_mem_usage=True,
152
+ ).eval()
153
+
154
+ inputs = processor(text=question, videos=args.video, num_frames=args.num_frames, return_tensors="pt")
155
+ for k, v in list(inputs.items()):
156
+ if isinstance(v, torch.Tensor):
157
+ inputs[k] = v.to(model.device)
158
+
159
+ with torch.inference_mode():
160
+ output_ids = model.generate(**inputs, max_new_tokens=args.max_new_tokens, do_sample=False)
161
+
162
+ input_len = int(inputs["input_ids"].shape[1])
163
+ gen_ids = output_ids[0, input_len:]
164
+ answer = processor.tokenizer.decode(gen_ids, skip_special_tokens=True).strip()
165
+
166
+ print("\nAnswer:\n" + answer)
167
+ return 0
168
+
169
+
170
+ if __name__ == "__main__":
171
+ raise SystemExit(main())
172
+
173
+ ```
174
+
175
+ Multiple images (treated as a video sequence)
176
+
+ A minimal sketch follows; it reuses the video pipeline above by passing the images as one ordered list of frames. Whether this processor accepts a list of PIL frames via the `videos` argument is an assumption, so adjust to the processor's actual signature if needed.
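+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoModelForCausalLM, AutoProcessor
+
+ model_id = "AlpachinoNLP/QTSplus-InternVL2.5-8B"
+ processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     trust_remote_code=True,
+     torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
+     device_map="auto" if torch.cuda.is_available() else None,
+ ).eval()
+
+ # Treat the ordered images as frames of a short "video" (file names are placeholders).
+ frames = [Image.open(p).convert("RGB") for p in ["img_0.jpg", "img_1.jpg", "img_2.jpg"]]
+ question = "What story do these images tell when viewed in order?"
+
+ # `videos=[frames]` mirrors the video example above; PIL-frame support is an assumption.
+ inputs = processor(text=question, videos=[frames], return_tensors="pt")
+ inputs = {k: (v.to(model.device) if isinstance(v, torch.Tensor) else v) for k, v in inputs.items()}
+
+ with torch.inference_mode():
+     output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
+
+ answer = processor.tokenizer.decode(
+     output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
+ )
+ print(answer)
+ ```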
179
+
180
+ Notes
181
+ - The chat template is applied via `processor.apply_chat_template` and expects the standard chat `messages` schema (see `chat_template.jinja`); a sketch follows after this list.
182
+ - QTSplus expects the vision payload under the `vision_input` keyword argument during generation.
183
+ - For fully offline use, pass `local_files_only=True` to `from_pretrained` calls once the files are cached locally.
184
+
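+ A hedged sketch of the chat-template pathway mentioned in the first note. The message format matches `chat_template.jinja`, but whether this processor's `apply_chat_template` also packs the video frames is an assumption, so the frames are handed to the processor separately here.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoProcessor
+
+ model_id = "AlpachinoNLP/QTSplus-InternVL2.5-8B"
+ processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto").eval()
+
+ # Build the prompt from a plain chat `messages` list, then pass text + video to the processor.
+ messages = [{"role": "user", "content": "Describe the key events in this video."}]
+ prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+
+ inputs = processor(text=prompt, videos="your/path/to/video.mp4", num_frames=20, return_tensors="pt")
+ inputs = {k: (v.to(model.device) if isinstance(v, torch.Tensor) else v) for k, v in inputs.items()}
+
+ with torch.inference_mode():
+     output_ids = model.generate(**inputs, max_new_tokens=256)
+ print(processor.tokenizer.decode(output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```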
185
+ ## Efficiency & Controls
186
+
187
+ The following QTSplus hyperparameters in `config.json` control compression and selection behavior:
188
+ - `qts_plus_rho_min` / `qts_plus_rho_max`: min/max retention ratio bounds (defaults in this checkpoint's `config.json`: 0.05 / 0.5).
189
+ - `qts_plus_tau_s`: scoring temperature for cross‑attention (default: 0.1).
190
+ - `qts_plus_nmax`: hard cap on selected tokens per sample (default: 2560).
191
+ These settings trade off detail against speed and memory; see the paper for guidance, ablations, and latency/throughput measurements, and the sketch below for how to override them.
192
+
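+ Because these knobs live in `config.json`, the standard transformers pattern of loading the config, adjusting attributes, and passing it back to `from_pretrained` should apply. A hedged sketch (attribute names come from the list above; the specific values are illustrative only):
+
+ ```python
+ from transformers import AutoConfig, AutoModelForCausalLM
+
+ model_id = "AlpachinoNLP/QTSplus-InternVL2.5-8B"
+
+ # Load the shipped config and tighten the token budget for cheaper inference.
+ config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
+ config.qts_plus_rho_max = 0.3  # keep at most 30% of vision tokens
+ config.qts_plus_nmax = 1024    # hard cap on selected tokens per sample
+ config.qts_plus_tau_s = 0.1    # lower temperature -> sharper, more selective scoring
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     config=config,
+     trust_remote_code=True,
+ ).eval()
+ ```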
193
+
194
+ ## Safety, Bias, and Limitations
195
+
196
+ - Outputs may be factually incorrect, biased, or unsafe. Do not use without human oversight.
197
+ - QTSplus compresses the vision stream; extremely small budgets may drop rare but important details.
198
+ - Inherits safety/bias characteristics from the underlying InternVL2.5 model and training data.
199
+
200
+ ## Citation
201
+
202
+ If you find this work helpful, please cite:
203
+
204
+ ```bibtex
205
+ @misc{li2025seeingforesttreesqueryaware,
206
+ title = {Seeing the Forest and the Trees: Query-Aware Tokenizer for Long-Video Multimodal Language Models},
207
+ author = {Siyou Li and Huanan Wu and Juexi Shao and Yinghao Ma and Yujian Gan and Yihao Luo and Yuwei Wang and Dong Nie and Lu Wang and Wengqing Wu and Le Zhang and Massimo Poesio and Juntao Yu},
208
+ year = {2025},
209
+ eprint = {2511.11910},
210
+ archivePrefix= {arXiv},
211
+ primaryClass = {cs.CV},
212
+ url = {https://arxiv.org/abs/2511.11910}
213
+ }
214
+ ```
added_tokens.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "</box>": 92552,
3
+ "</img>": 92545,
4
+ "</quad>": 92548,
5
+ "</ref>": 92550,
6
+ "<IMG_CONTEXT>": 92546,
7
+ "<box>": 92551,
8
+ "<img>": 92544,
9
+ "<quad>": 92547,
10
+ "<ref>": 92549
11
+ }
assets/dataset.svg ADDED
assets/logo_with_glasses.svg ADDED
assets/qtsplus.svg ADDED
assets/system_load.svg ADDED
assets/training_process.svg ADDED
chat_template.jinja ADDED
@@ -0,0 +1,4 @@
1
+ {{ bos_token }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
2
+ ' + message['content'] + '<|im_end|>' + '
3
+ '}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
4
+ ' }}{% endif %}
config.json ADDED
@@ -0,0 +1,86 @@
1
+ {
2
+ "architectures": [
3
+ "QTSplusInternLM2_ForCausalLM"
4
+ ],
5
+ "attn_implementation": "flash_attention_2",
6
+ "auto_map": {
7
+ "AutoConfig": "qts_plus_internlm2_lm.QTSplusInternLM2_CausalLM_Config",
8
+ "AutoModelForCausalLM": "qts_plus_internlm2_lm.QTSplusInternLM2_ForCausalLM",
9
+ "AutoProcessor": "processing_qts_plus_internvl2_5.QTSplusInternVL2_5_Processor"
10
+ },
11
+ "bias": false,
12
+ "bos_token_id": 1,
13
+ "downsample_ratio": 0.5,
14
+ "dtype": "bfloat16",
15
+ "enable_qts_plus": true,
16
+ "eos_token_id": 2,
17
+ "force_image_size": 448,
18
+ "hidden_act": "silu",
19
+ "hidden_size": 4096,
20
+ "image_token_id": 92546,
21
+ "initializer_range": 0.02,
22
+ "intermediate_size": 14336,
23
+ "lambda_m": 0,
24
+ "lambda_s": 0,
25
+ "lambda_t": 0,
26
+ "llm_hidden_size": 4096,
27
+ "max_position_embeddings": 32768,
28
+ "model_type": "qts_plus_internlm2_causal_lm",
29
+ "num_attention_heads": 32,
30
+ "num_hidden_layers": 32,
31
+ "num_key_value_heads": 8,
32
+ "out_hidden_size": 4096,
33
+ "pad_token_id": 2,
34
+ "pretraining_tp": 1,
35
+ "project_text_if_needed": false,
36
+ "ps_version": "v2",
37
+ "qts_plus_block_dropout": 0.0,
38
+ "qts_plus_nmax": 2560,
39
+ "qts_plus_reencode": false,
40
+ "qts_plus_reencode_layers": 0,
41
+ "qts_plus_rho_max": 0.5,
42
+ "qts_plus_rho_min": 0.05,
43
+ "qts_plus_scoring_layers": 4,
44
+ "qts_plus_tau_s": 0.1,
45
+ "rms_norm_eps": 1e-05,
46
+ "rope_scaling": {
47
+ "factor": 2.0,
48
+ "type": "dynamic"
49
+ },
50
+ "rope_theta": 1000000,
51
+ "select_layer": -1,
52
+ "tie_word_embeddings": false,
53
+ "torch_dtype": "bfloat16",
54
+ "transformers_version": "4.57.6",
55
+ "use_bfloat16": true,
56
+ "use_cache": true,
57
+ "vision_config": {
58
+ "architectures": [
59
+ "InternVisionModel"
60
+ ],
61
+ "attention_dropout": 0.0,
62
+ "drop_path_rate": 0.0,
63
+ "dropout": 0.0,
64
+ "dtype": "bfloat16",
65
+ "hidden_act": "gelu",
66
+ "hidden_size": 1024,
67
+ "image_size": 448,
68
+ "initializer_factor": 1.0,
69
+ "initializer_range": 0.02,
70
+ "intermediate_size": 4096,
71
+ "layer_norm_eps": 1e-06,
72
+ "model_type": "intern_vit_6b",
73
+ "norm_type": "layer_norm",
74
+ "num_attention_heads": 16,
75
+ "num_channels": 3,
76
+ "num_hidden_layers": 24,
77
+ "patch_size": 14,
78
+ "qk_normalization": false,
79
+ "qkv_bias": true,
80
+ "use_bfloat16": true,
81
+ "use_flash_attn": true
82
+ },
83
+ "vision_embed_size": 4096,
84
+ "vision_tower": "internvl2_5_vision",
85
+ "vocab_size": 92553
86
+ }
configuration_intern_vit.py ADDED
@@ -0,0 +1,85 @@
1
+ # --------------------------------------------------------
2
+ # InternVL
3
+ # Copyright (c) 2024 OpenGVLab
4
+ # Licensed under The MIT License [see LICENSE for details]
5
+ # --------------------------------------------------------
6
+
7
+ import os
8
+ from typing import Union
9
+
10
+ from transformers.configuration_utils import PretrainedConfig
11
+ from transformers.utils import logging
12
+
13
+ logger = logging.get_logger(__name__)
14
+
15
+
16
+ class InternVisionConfig(PretrainedConfig):
17
+ r"""
18
+ Configuration class for InternVL's vision encoder (`InternVisionModel`).
19
+ """
20
+
21
+ model_type = "intern_vit_6b"
22
+
23
+ def __init__(
24
+ self,
25
+ num_channels=3,
26
+ patch_size=14,
27
+ image_size=224,
28
+ qkv_bias=False,
29
+ hidden_size=3200,
30
+ num_attention_heads=25,
31
+ intermediate_size=12800,
32
+ qk_normalization=True,
33
+ num_hidden_layers=48,
34
+ use_flash_attn=True,
35
+ hidden_act="gelu",
36
+ norm_type="rms_norm",
37
+ layer_norm_eps=1e-6,
38
+ dropout=0.0,
39
+ drop_path_rate=0.0,
40
+ attention_dropout=0.0,
41
+ initializer_range=0.02,
42
+ initializer_factor=0.1,
43
+ **kwargs,
44
+ ):
45
+ super().__init__(**kwargs)
46
+
47
+ self.hidden_size = hidden_size
48
+ self.intermediate_size = intermediate_size
49
+ self.dropout = dropout
50
+ self.drop_path_rate = drop_path_rate
51
+ self.num_hidden_layers = num_hidden_layers
52
+ self.num_attention_heads = num_attention_heads
53
+ self.num_channels = num_channels
54
+ self.patch_size = patch_size
55
+ self.image_size = image_size
56
+ self.initializer_range = initializer_range
57
+ self.initializer_factor = initializer_factor
58
+ self.attention_dropout = attention_dropout
59
+ self.layer_norm_eps = layer_norm_eps
60
+ self.hidden_act = hidden_act
61
+ self.norm_type = norm_type
62
+ self.qkv_bias = qkv_bias
63
+ self.qk_normalization = qk_normalization
64
+ self.use_flash_attn = use_flash_attn
65
+
66
+ @classmethod
67
+ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) -> "PretrainedConfig":
68
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
69
+
70
+ # InternVL chat checkpoints store this config under `vision_config`.
71
+ if "vision_config" in config_dict:
72
+ config_dict = config_dict["vision_config"]
73
+
74
+ if (
75
+ "model_type" in config_dict
76
+ and hasattr(cls, "model_type")
77
+ and config_dict["model_type"] != cls.model_type
78
+ ):
79
+ logger.warning(
80
+ f"You are using a model of type {config_dict['model_type']} to instantiate a model of type "
81
+ f"{cls.model_type}. This is not supported for all configurations of models and can yield errors."
82
+ )
83
+
84
+ return cls.from_dict(config_dict, **kwargs)
85
+
configuration_internlm2.py ADDED
@@ -0,0 +1,150 @@
1
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # This code is based on transformers/src/transformers/models/llama/configuration_llama.py
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """ InternLM2 model configuration"""
17
+
18
+ from transformers.configuration_utils import PretrainedConfig
19
+ from transformers.utils import logging
20
+
21
+ logger = logging.get_logger(__name__)
22
+
23
+ INTERNLM2_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
24
+
25
+
26
+ # Modified from transformers.model.llama.configuration_llama.LlamaConfig
27
+ class InternLM2Config(PretrainedConfig):
28
+ r"""
29
+ This is the configuration class to store the configuration of a [`InternLM2Model`]. It is used to instantiate
30
+ an InternLM2 model according to the specified arguments, defining the model architecture. Instantiating a
31
+ configuration with the defaults will yield a similar configuration to that of the InternLM2-7B.
32
+
33
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
34
+ documentation from [`PretrainedConfig`] for more information.
35
+
36
+
37
+ Args:
38
+ vocab_size (`int`, *optional*, defaults to 32000):
39
+ Vocabulary size of the InternLM2 model. Defines the number of different tokens that can be represented by the
40
+ `inputs_ids` passed when calling [`InternLM2Model`]
41
+ hidden_size (`int`, *optional*, defaults to 4096):
42
+ Dimension of the hidden representations.
43
+ intermediate_size (`int`, *optional*, defaults to 11008):
44
+ Dimension of the MLP representations.
45
+ num_hidden_layers (`int`, *optional*, defaults to 32):
46
+ Number of hidden layers in the Transformer encoder.
47
+ num_attention_heads (`int`, *optional*, defaults to 32):
48
+ Number of attention heads for each attention layer in the Transformer encoder.
49
+ num_key_value_heads (`int`, *optional*):
50
+ This is the number of key_value heads that should be used to implement Grouped Query Attention. If
51
+ `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
52
+ `num_key_value_heads=1` the model will use Multi Query Attention (MQA), otherwise GQA is used. When
53
+ converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
54
+ by meanpooling all the original heads within that group. For more details, check out [this
55
+ paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
56
+ `num_attention_heads`.
57
+ hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
58
+ The non-linear activation function (function or string) in the decoder.
59
+ max_position_embeddings (`int`, *optional*, defaults to 2048):
60
+ The maximum sequence length that this model might ever be used with. Typically set this to something large
61
+ just in case (e.g., 512 or 1024 or 2048).
62
+ initializer_range (`float`, *optional*, defaults to 0.02):
63
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
64
+ rms_norm_eps (`float`, *optional*, defaults to 1e-06):
65
+ The epsilon used by the rms normalization layers.
66
+ use_cache (`bool`, *optional*, defaults to `True`):
67
+ Whether or not the model should return the last key/values attentions (not used by all models). Only
68
+ relevant if `config.is_decoder=True`.
69
+ tie_word_embeddings(`bool`, *optional*, defaults to `False`):
70
+ Whether to tie weight embeddings
71
+ Example:
72
+
73
+ """
74
+ model_type = 'internlm2'
75
+ _auto_class = 'AutoConfig'
76
+
77
+ def __init__( # pylint: disable=W0102
78
+ self,
79
+ vocab_size=103168,
80
+ hidden_size=4096,
81
+ intermediate_size=11008,
82
+ num_hidden_layers=32,
83
+ num_attention_heads=32,
84
+ num_key_value_heads=None,
85
+ hidden_act='silu',
86
+ max_position_embeddings=2048,
87
+ initializer_range=0.02,
88
+ rms_norm_eps=1e-6,
89
+ use_cache=True,
90
+ pad_token_id=0,
91
+ bos_token_id=1,
92
+ eos_token_id=2,
93
+ tie_word_embeddings=False,
94
+ bias=True,
95
+ rope_theta=10000,
96
+ rope_scaling=None,
97
+ attn_implementation='eager',
98
+ **kwargs,
99
+ ):
100
+ self.vocab_size = vocab_size
101
+ self.max_position_embeddings = max_position_embeddings
102
+ self.hidden_size = hidden_size
103
+ self.intermediate_size = intermediate_size
104
+ self.num_hidden_layers = num_hidden_layers
105
+ self.num_attention_heads = num_attention_heads
106
+ self.bias = bias
107
+
108
+ if num_key_value_heads is None:
109
+ num_key_value_heads = num_attention_heads
110
+ self.num_key_value_heads = num_key_value_heads
111
+
112
+ self.hidden_act = hidden_act
113
+ self.initializer_range = initializer_range
114
+ self.rms_norm_eps = rms_norm_eps
115
+ self.use_cache = use_cache
116
+ self.rope_theta = rope_theta
117
+ self.rope_scaling = rope_scaling
118
+ self._rope_scaling_validation()
119
+
120
+ self.attn_implementation = attn_implementation
121
+ if self.attn_implementation is None:
122
+ self.attn_implementation = 'eager'
123
+ super().__init__(
124
+ pad_token_id=pad_token_id,
125
+ bos_token_id=bos_token_id,
126
+ eos_token_id=eos_token_id,
127
+ tie_word_embeddings=tie_word_embeddings,
128
+ **kwargs,
129
+ )
130
+
131
+ def _rope_scaling_validation(self):
132
+ """
133
+ Validate the `rope_scaling` configuration.
134
+ """
135
+ if self.rope_scaling is None:
136
+ return
137
+
138
+ if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
139
+ raise ValueError(
140
+ '`rope_scaling` must be a dictionary with two fields, `type` and `factor`, '
141
+ f'got {self.rope_scaling}'
142
+ )
143
+ rope_scaling_type = self.rope_scaling.get('type', None)
144
+ rope_scaling_factor = self.rope_scaling.get('factor', None)
145
+ if rope_scaling_type is None or rope_scaling_type not in ['linear', 'dynamic']:
146
+ raise ValueError(
147
+ f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
148
+ )
149
+ if rope_scaling_factor is None or not isinstance(rope_scaling_factor, float) or rope_scaling_factor < 1.0:
150
+ raise ValueError(f"`rope_scaling`'s factor field must be a float >= 1, got {rope_scaling_factor}")
generation_config.json ADDED
@@ -0,0 +1,9 @@
1
+ {
2
+ "_from_model_config": true,
3
+ "bos_token_id": 1,
4
+ "eos_token_id": [
5
+ 2
6
+ ],
7
+ "pad_token_id": 2,
8
+ "transformers_version": "4.57.6"
9
+ }
latest ADDED
@@ -0,0 +1 @@
1
+ global_step2000
model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b7110f58be7dc116b07aaf7eca3ecc1d15db3162e14375ce785f71aa046e2bec
3
+ size 17564508786
modeling_intern_vit.py ADDED
@@ -0,0 +1,391 @@
1
+ # --------------------------------------------------------
2
+ # InternVL
3
+ # Copyright (c) 2024 OpenGVLab
4
+ # Licensed under The MIT License [see LICENSE for details]
5
+ # --------------------------------------------------------
6
+
7
+ from typing import Optional, Tuple, Union
8
+
9
+ import torch
10
+ import torch.nn.functional as F
11
+ import torch.utils.checkpoint
12
+ from einops import rearrange
13
+ from torch import nn
14
+ from transformers.activations import ACT2FN
15
+ from transformers.modeling_outputs import BaseModelOutput, BaseModelOutputWithPooling
16
+ from transformers.modeling_utils import PreTrainedModel
17
+ from transformers.utils import logging
18
+
19
+ from .configuration_intern_vit import InternVisionConfig
20
+
21
+ logger = logging.get_logger(__name__)
22
+
23
+
24
+ try: # Optional dependency
25
+ from timm.models.layers import DropPath as _DropPath # type: ignore
26
+
27
+ DropPath = _DropPath
28
+ except Exception: # pragma: no cover
29
+ class DropPath(nn.Module):
30
+ """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks)."""
31
+
32
+ def __init__(self, drop_prob: float = 0.0) -> None:
33
+ super().__init__()
34
+ self.drop_prob = float(drop_prob)
35
+
36
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
37
+ if self.drop_prob == 0.0 or not self.training:
38
+ return x
39
+ keep_prob = 1.0 - self.drop_prob
40
+ shape = (x.shape[0],) + (1,) * (x.ndim - 1)
41
+ random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
42
+ random_tensor = random_tensor.floor()
43
+ return x.div(keep_prob) * random_tensor
44
+
45
+
46
+ try:
47
+ from flash_attn.bert_padding import pad_input, unpad_input # type: ignore
48
+ from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func # type: ignore
49
+
50
+ has_flash_attn = True
51
+ except Exception: # pragma: no cover
52
+ pad_input, unpad_input, flash_attn_varlen_qkvpacked_func = None, None, None
53
+ has_flash_attn = False
54
+
55
+
56
+ class FlashAttention(nn.Module):
57
+ """Scaled dot-product attention implemented with FlashAttention2."""
58
+
59
+ def __init__(self, softmax_scale=None, attention_dropout=0.0, device=None, dtype=None):
60
+ super().__init__()
61
+ self.softmax_scale = softmax_scale
62
+ self.dropout_p = attention_dropout
63
+
64
+ def forward(
65
+ self,
66
+ qkv,
67
+ key_padding_mask=None,
68
+ causal=False,
69
+ cu_seqlens=None,
70
+ max_s=None,
71
+ need_weights=False,
72
+ ):
73
+ assert not need_weights
74
+ assert qkv.dtype in [torch.float16, torch.bfloat16]
75
+ assert qkv.is_cuda
76
+
77
+ if cu_seqlens is None:
78
+ batch_size = qkv.shape[0]
79
+ seqlen = qkv.shape[1]
80
+ if key_padding_mask is None:
81
+ qkv = rearrange(qkv, "b s ... -> (b s) ...")
82
+ max_s = seqlen
83
+ cu_seqlens = torch.arange(
84
+ 0,
85
+ (batch_size + 1) * seqlen,
86
+ step=seqlen,
87
+ dtype=torch.int32,
88
+ device=qkv.device,
89
+ )
90
+ output = flash_attn_varlen_qkvpacked_func(
91
+ qkv,
92
+ cu_seqlens,
93
+ max_s,
94
+ self.dropout_p if self.training else 0.0,
95
+ softmax_scale=self.softmax_scale,
96
+ causal=causal,
97
+ )
98
+ output = rearrange(output, "(b s) ... -> b s ...", b=batch_size)
99
+ else:
100
+ nheads = qkv.shape[-2]
101
+ x = rearrange(qkv, "b s three h d -> b s (three h d)")
102
+ x_unpad, indices, cu_seqlens, max_s = unpad_input(x, key_padding_mask)
103
+ x_unpad = rearrange(x_unpad, "nnz (three h d) -> nnz three h d", three=3, h=nheads)
104
+ output_unpad = flash_attn_varlen_qkvpacked_func(
105
+ x_unpad,
106
+ cu_seqlens,
107
+ max_s,
108
+ self.dropout_p if self.training else 0.0,
109
+ softmax_scale=self.softmax_scale,
110
+ causal=causal,
111
+ )
112
+ output = rearrange(
113
+ pad_input(rearrange(output_unpad, "nnz h d -> nnz (h d)"), indices, batch_size, seqlen),
114
+ "b s (h d) -> b s h d",
115
+ h=nheads,
116
+ )
117
+ else:
118
+ assert max_s is not None
119
+ output = flash_attn_varlen_qkvpacked_func(
120
+ qkv,
121
+ cu_seqlens,
122
+ max_s,
123
+ self.dropout_p if self.training else 0.0,
124
+ softmax_scale=self.softmax_scale,
125
+ causal=causal,
126
+ )
127
+
128
+ return output, None
129
+
130
+
131
+ class InternRMSNorm(nn.Module):
132
+ def __init__(self, hidden_size, eps=1e-6):
133
+ super().__init__()
134
+ self.weight = nn.Parameter(torch.ones(hidden_size))
135
+ self.variance_epsilon = eps
136
+
137
+ def forward(self, hidden_states):
138
+ input_dtype = hidden_states.dtype
139
+ hidden_states = hidden_states.to(torch.float32)
140
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
141
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
142
+ return self.weight * hidden_states.to(input_dtype)
143
+
144
+
145
+ try:
146
+ from apex.normalization import FusedRMSNorm # type: ignore
147
+
148
+ InternRMSNorm = FusedRMSNorm # noqa
149
+ logger.info("Discovered apex.normalization.FusedRMSNorm - using it instead of InternRMSNorm")
150
+ except Exception: # pragma: no cover
151
+ pass
152
+
153
+
154
+ NORM2FN = {
155
+ "rms_norm": InternRMSNorm,
156
+ "layer_norm": nn.LayerNorm,
157
+ }
158
+
159
+
160
+ class InternVisionEmbeddings(nn.Module):
161
+ def __init__(self, config: InternVisionConfig):
162
+ super().__init__()
163
+ self.config = config
164
+ self.embed_dim = config.hidden_size
165
+ self.image_size = config.image_size
166
+ self.patch_size = config.patch_size
167
+
168
+ self.class_embedding = nn.Parameter(torch.randn(1, 1, self.embed_dim))
169
+
170
+ self.patch_embedding = nn.Conv2d(
171
+ in_channels=3,
172
+ out_channels=self.embed_dim,
173
+ kernel_size=self.patch_size,
174
+ stride=self.patch_size,
175
+ )
176
+
177
+ self.num_patches = (self.image_size // self.patch_size) ** 2
178
+ self.num_positions = self.num_patches + 1
179
+
180
+ self.position_embedding = nn.Parameter(torch.randn(1, self.num_positions, self.embed_dim))
181
+
182
+ def _get_pos_embed(self, pos_embed, H, W):
183
+ target_dtype = pos_embed.dtype
184
+ pos_embed = (
185
+ pos_embed.float()
186
+ .reshape(1, self.image_size // self.patch_size, self.image_size // self.patch_size, -1)
187
+ .permute(0, 3, 1, 2)
188
+ )
189
+ pos_embed = (
190
+ F.interpolate(pos_embed, size=(H, W), mode="bicubic", align_corners=False)
191
+ .reshape(1, -1, H * W)
192
+ .permute(0, 2, 1)
193
+ .to(target_dtype)
194
+ )
195
+ return pos_embed
196
+
197
+ def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
198
+ batch_size = pixel_values.shape[0]
199
+ patch_embeds = self.patch_embedding(pixel_values) # shape = [*, width, grid, grid]
200
+ H = patch_embeds.shape[-2]
201
+ W = patch_embeds.shape[-1]
202
+ patch_embeds = patch_embeds.flatten(2).transpose(1, 2) # shape = [*, grid ** 2, width]
203
+
204
+ class_embeds = self.class_embedding.expand(batch_size, -1, -1)
205
+ embeddings = torch.cat([class_embeds, patch_embeds], dim=1)
206
+
207
+ pos_embeds = self.position_embedding
208
+ if H != self.image_size // self.patch_size or W != self.image_size // self.patch_size:
209
+ pos_embeds = torch.cat(
210
+ [pos_embeds[:, :1, :], self._get_pos_embed(pos_embeds[:, 1:, :], H, W)],
211
+ dim=1,
212
+ )
213
+
214
+ embeddings = embeddings + pos_embeds
215
+ return embeddings
216
+
217
+
218
+ class InternSelfAttention(nn.Module):
219
+ def __init__(self, config: InternVisionConfig):
220
+ super().__init__()
221
+ self.config = config
222
+ self.embed_dim = config.hidden_size
223
+ self.num_heads = config.num_attention_heads
224
+ self.head_dim = self.embed_dim // self.num_heads
225
+ self.scale = self.head_dim**-0.5
226
+ self.qkv_bias = config.qkv_bias
227
+
228
+ self.qkv = nn.Linear(self.embed_dim, self.embed_dim * 3, bias=self.qkv_bias)
229
+ self.proj = nn.Linear(self.embed_dim, self.embed_dim)
230
+
231
+ self.attn_drop = nn.Dropout(config.attention_dropout)
232
+ self.proj_drop = nn.Dropout(config.dropout)
233
+
234
+ self.qk_normalization = config.qk_normalization
235
+ if self.qk_normalization:
236
+ self.q_norm = InternRMSNorm(self.head_dim)
237
+ self.k_norm = InternRMSNorm(self.head_dim)
238
+
239
+ if config.use_flash_attn and has_flash_attn:
240
+ self.inner_attn = FlashAttention(softmax_scale=None, attention_dropout=config.attention_dropout)
241
+ else:
242
+ self.inner_attn = None
243
+
244
+ def forward(self, x: torch.Tensor, attn_mask: Optional[torch.Tensor] = None):
245
+ B, N, C = x.shape
246
+ qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
247
+
248
+ if self.qk_normalization:
249
+ q, k, v = qkv.unbind(dim=2)
250
+ q = self.q_norm(q)
251
+ k = self.k_norm(k)
252
+ qkv = torch.stack([q, k, v], dim=2)
253
+
254
+ if self.inner_attn is not None and x.is_cuda:
255
+ attn_output, _ = self.inner_attn(qkv=qkv, key_padding_mask=attn_mask, need_weights=False)
256
+ attn_output = rearrange(attn_output, "b s h d -> b s (h d)")
257
+ else:
258
+ qkv = qkv.permute(2, 0, 3, 1, 4)
259
+ q, k, v = qkv[0], qkv[1], qkv[2]
260
+ attn = (q @ k.transpose(-2, -1)) * self.scale
261
+ if attn_mask is not None:
262
+ attn = attn.masked_fill(attn_mask.unsqueeze(1).unsqueeze(2).to(dtype=torch.bool), float("-inf"))
263
+ attn = attn.softmax(dim=-1)
264
+ attn = self.attn_drop(attn)
265
+ attn_output = (attn @ v).transpose(1, 2).reshape(B, N, C)
266
+
267
+ x = self.proj(attn_output)
268
+ x = self.proj_drop(x)
269
+ return x
270
+
271
+
272
+ class InternMLP(nn.Module):
273
+ def __init__(self, config: InternVisionConfig):
274
+ super().__init__()
275
+ self.fc1 = nn.Linear(config.hidden_size, config.intermediate_size)
276
+ self.act = ACT2FN[config.hidden_act]
277
+ self.fc2 = nn.Linear(config.intermediate_size, config.hidden_size)
278
+ self.drop = nn.Dropout(config.dropout)
279
+
280
+ def forward(self, x):
281
+ x = self.fc1(x)
282
+ x = self.act(x)
283
+ x = self.drop(x)
284
+ x = self.fc2(x)
285
+ x = self.drop(x)
286
+ return x
287
+
288
+
289
+ class InternVisionEncoderLayer(nn.Module):
290
+ def __init__(self, config: InternVisionConfig, drop_path_rate: float):
291
+ super().__init__()
292
+ self.norm1 = NORM2FN[config.norm_type](config.hidden_size, eps=config.layer_norm_eps)
293
+ self.attn = InternSelfAttention(config)
294
+ self.drop_path = DropPath(drop_path_rate) if drop_path_rate > 0.0 else nn.Identity()
295
+ self.norm2 = NORM2FN[config.norm_type](config.hidden_size, eps=config.layer_norm_eps)
296
+ self.mlp = InternMLP(config)
297
+
298
+ def forward(self, hidden_states: torch.Tensor, attn_mask: Optional[torch.Tensor] = None):
299
+ hidden_states = hidden_states + self.drop_path(self.attn(self.norm1(hidden_states), attn_mask=attn_mask))
300
+ hidden_states = hidden_states + self.drop_path(self.mlp(self.norm2(hidden_states)))
301
+ return hidden_states
302
+
303
+
304
+ class InternVisionEncoder(nn.Module):
305
+ def __init__(self, config: InternVisionConfig):
306
+ super().__init__()
307
+ dpr = [x.item() for x in torch.linspace(0, config.drop_path_rate, config.num_hidden_layers)]
308
+ self.layers = nn.ModuleList(
309
+ [InternVisionEncoderLayer(config, drop_path_rate=dpr[i]) for i in range(config.num_hidden_layers)]
310
+ )
311
+
312
+ def forward(
313
+ self,
314
+ inputs_embeds: torch.Tensor,
315
+ attn_mask: Optional[torch.Tensor] = None,
316
+ output_hidden_states: bool = False,
317
+ return_dict: bool = True,
318
+ ) -> Union[Tuple, BaseModelOutput]:
319
+ hidden_states = inputs_embeds
320
+ all_hidden_states = () if output_hidden_states else None
321
+ for layer in self.layers:
322
+ if output_hidden_states:
323
+ all_hidden_states = all_hidden_states + (hidden_states,)
324
+ if self.training:
325
+ hidden_states = torch.utils.checkpoint.checkpoint(layer, hidden_states, attn_mask)
326
+ else:
327
+ hidden_states = layer(hidden_states, attn_mask=attn_mask)
328
+ if output_hidden_states:
329
+ all_hidden_states = all_hidden_states + (hidden_states,)
330
+
331
+ if not return_dict:
332
+ return tuple(v for v in [hidden_states, all_hidden_states] if v is not None)
333
+
334
+ return BaseModelOutput(last_hidden_state=hidden_states, hidden_states=all_hidden_states)
335
+
336
+
337
+ class InternVisionModel(PreTrainedModel):
338
+ config_class = InternVisionConfig
339
+ main_input_name = "pixel_values"
340
+ _no_split_modules = ["InternVisionEncoderLayer"]
341
+
342
+ def __init__(self, config: InternVisionConfig):
343
+ super().__init__(config)
344
+ self.embeddings = InternVisionEmbeddings(config)
345
+ self.encoder = InternVisionEncoder(config)
346
+ self.post_layernorm = NORM2FN[config.norm_type](config.hidden_size, eps=config.layer_norm_eps)
347
+ self.pooler = nn.Linear(config.hidden_size, config.hidden_size)
348
+
349
+ self.post_init()
350
+
351
+ def forward(
352
+ self,
353
+ pixel_values: Optional[torch.FloatTensor] = None,
354
+ output_hidden_states: Optional[bool] = None,
355
+ return_dict: Optional[bool] = None,
356
+ ) -> Union[Tuple, BaseModelOutputWithPooling]:
357
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
358
+ output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
359
+
360
+ if pixel_values is None:
361
+ raise ValueError("You have to specify pixel_values")
362
+
363
+ embeddings = self.embeddings(pixel_values)
364
+ encoder_outputs = self.encoder(
365
+ inputs_embeds=embeddings,
366
+ output_hidden_states=output_hidden_states,
367
+ return_dict=return_dict,
368
+ )
369
+ last_hidden_state = encoder_outputs[0]
370
+ last_hidden_state = self.post_layernorm(last_hidden_state)
371
+
372
+ pooled_output = last_hidden_state[:, 0, :]
373
+ pooled_output = self.pooler(pooled_output)
374
+
375
+ if not return_dict:
376
+ return (last_hidden_state, pooled_output) + encoder_outputs[1:]
377
+
378
+ return BaseModelOutputWithPooling(
379
+ last_hidden_state=last_hidden_state,
380
+ pooler_output=pooled_output,
381
+ hidden_states=encoder_outputs.hidden_states,
382
+ attentions=None,
383
+ )
384
+
385
+
386
+ __all__ = [
387
+ "InternVisionConfig",
388
+ "InternVisionModel",
389
+ "has_flash_attn",
390
+ ]
391
+
modeling_internlm2.py ADDED
@@ -0,0 +1,1453 @@
1
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # This code is based on transformers/src/transformers/models/llama/modeling_llama.py
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """ PyTorch InternLM2 model."""
17
+ import math
18
+ import queue
19
+ import threading
20
+ import warnings
21
+ from typing import List, Optional, Tuple, Union
22
+
23
+ import torch
24
+ import torch.nn.functional as F
25
+ import torch.utils.checkpoint
26
+ from einops import rearrange
27
+ from torch import nn
28
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
29
+ from transformers.activations import ACT2FN
30
+ from transformers.modeling_outputs import (BaseModelOutputWithPast,
31
+ CausalLMOutputWithPast,
32
+ SequenceClassifierOutputWithPast)
33
+ from transformers.modeling_utils import PreTrainedModel
34
+ # `generate()` lives on `GenerationMixin` in recent Transformers. Some versions
35
+ # are deprecating `PreTrainedModel` inheriting from it, so include it explicitly
36
+ # to keep `generate()` available without changing the installed transformers.
37
+ try: # transformers>=4.27
38
+ from transformers.generation.utils import GenerationMixin # type: ignore
39
+ except Exception: # pragma: no cover
40
+ try: # transformers<4.27
41
+ from transformers.generation_utils import GenerationMixin # type: ignore
42
+ except Exception: # pragma: no cover
43
+ GenerationMixin = object # type: ignore[misc,assignment]
44
+ from transformers.utils import (add_start_docstrings,
45
+ add_start_docstrings_to_model_forward, logging,
46
+ replace_return_docstrings)
47
+
48
+ try:
49
+ from transformers.generation.streamers import BaseStreamer
50
+ except: # noqa # pylint: disable=bare-except
51
+ BaseStreamer = None
52
+
53
+ from .configuration_internlm2 import InternLM2Config
54
+
55
+ logger = logging.get_logger(__name__)
56
+
57
+ _CONFIG_FOR_DOC = 'InternLM2Config'
58
+
59
+ flash_attn_func, flash_attn_varlen_func = None, None
60
+ pad_input, index_first_axis, unpad_input = None, None, None
61
+ try:
62
+ from flash_attn import flash_attn_func as _flash_attn_func
63
+ from flash_attn import flash_attn_varlen_func as _flash_attn_varlen_func
64
+ from flash_attn.bert_padding import index_first_axis as _index_first_axis
65
+ from flash_attn.bert_padding import pad_input as _pad_input
66
+ from flash_attn.bert_padding import unpad_input as _unpad_input
67
+
68
+ flash_attn_func, flash_attn_varlen_func = _flash_attn_func, _flash_attn_varlen_func
69
+ pad_input, index_first_axis, unpad_input = _pad_input, _index_first_axis, _unpad_input
70
+ has_flash_attn = True
71
+ except:
72
+ has_flash_attn = False
73
+
74
+
75
+ def _import_flash_attn():
76
+ global flash_attn_func, flash_attn_varlen_func
77
+ global pad_input, index_first_axis, unpad_input
78
+ try:
79
+ from flash_attn import flash_attn_func as _flash_attn_func
80
+ from flash_attn import \
81
+ flash_attn_varlen_func as _flash_attn_varlen_func
82
+ from flash_attn.bert_padding import \
83
+ index_first_axis as _index_first_axis
84
+ from flash_attn.bert_padding import pad_input as _pad_input
85
+ from flash_attn.bert_padding import unpad_input as _unpad_input
86
+ flash_attn_func, flash_attn_varlen_func = _flash_attn_func, _flash_attn_varlen_func
87
+ pad_input, index_first_axis, unpad_input = _pad_input, _index_first_axis, _unpad_input
88
+ except ImportError:
89
+ raise ImportError('flash_attn is not installed.')
90
+
91
+
92
+ # Copied from transformers.models.llama.modeling_llama._get_unpad_data
93
+ def _get_unpad_data(attention_mask):
94
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
95
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
96
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
97
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
98
+ return (
99
+ indices,
100
+ cu_seqlens,
101
+ max_seqlen_in_batch,
102
+ )
103
+
104
+
105
+ # Copied from transformers.models.bart.modeling_bart._make_causal_mask
106
+ def _make_causal_mask(
107
+ input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0
108
+ ):
109
+ """
110
+ Make causal mask used for bi-directional self-attention.
111
+ """
112
+ bsz, tgt_len = input_ids_shape
113
+ mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device)
114
+ mask_cond = torch.arange(mask.size(-1), device=device)
115
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
116
+ mask = mask.to(dtype)
117
+
118
+ if past_key_values_length > 0:
119
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)
120
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
121
+
122
+
123
+ # Copied from transformers.models.bart.modeling_bart._expand_mask
124
+ def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
125
+ """
126
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
127
+ """
128
+ bsz, src_len = mask.size()
129
+ tgt_len = tgt_len if tgt_len is not None else src_len
130
+
131
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
132
+
133
+ inverted_mask = 1.0 - expanded_mask
134
+
135
+ return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)
136
+
137
+
138
+ # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->InternLM2
139
+ class InternLM2RMSNorm(nn.Module):
140
+ def __init__(self, hidden_size, eps=1e-6):
141
+ """
142
+ InternLM2RMSNorm is equivalent to T5LayerNorm
143
+ """
144
+ super().__init__()
145
+ self.weight = nn.Parameter(torch.ones(hidden_size))
146
+ self.variance_epsilon = eps
147
+
148
+ def forward(self, hidden_states):
149
+ input_dtype = hidden_states.dtype
150
+ hidden_states = hidden_states.to(torch.float32)
151
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
152
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
153
+ return self.weight * hidden_states.to(input_dtype)
154
+
155
+
156
+ try:
157
+ from functools import partial
158
+
159
+ from apex.normalization import FusedRMSNorm
160
+ InternLM2RMSNorm = partial(FusedRMSNorm, eps=1e-6) # noqa
161
+ print('Discovered apex.normalization.FusedRMSNorm - will use it instead of InternLM2RMSNorm')
162
+ except ImportError:
163
+ # using the normal LlamaRMSNorm
164
+ pass
165
+ except Exception:
166
+ print('discovered apex but it failed to load, falling back to InternLM2RMSNorm')
167
+ pass
168
+
169
+
170
+ # Copied from transformers.model.llama.modeling_llama.LlamaRotaryEmbedding with Llama->InternLM2
171
+ class InternLM2RotaryEmbedding(nn.Module):
172
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
173
+ super().__init__()
174
+
175
+ self.dim = dim
176
+ self.max_position_embeddings = max_position_embeddings
177
+ self.base = base
178
+ inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
179
+ self.register_buffer('inv_freq', inv_freq, persistent=False)
180
+
181
+ # Build here to make `torch.jit.trace` work.
182
+ self._set_cos_sin_cache(
183
+ seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype()
184
+ )
185
+
186
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
187
+ self.max_seq_len_cached = seq_len
188
+ t = torch.arange(self.max_seq_len_cached, device=device).to(dtype=self.inv_freq.dtype)
189
+
190
+ freqs = torch.einsum('i,j->ij', t, self.inv_freq)
191
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
192
+ emb = torch.cat((freqs, freqs), dim=-1)
193
+ self.register_buffer('cos_cached', emb.cos().to(dtype), persistent=False)
194
+ self.register_buffer('sin_cached', emb.sin().to(dtype), persistent=False)
195
+
196
+ def forward(self, x, seq_len=None):
197
+ # x: [bs, num_attention_heads, seq_len, head_size]
198
+ if seq_len > self.max_seq_len_cached:
199
+ self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=torch.float32)
200
+
201
+ return (
202
+ self.cos_cached[:seq_len].to(dtype=x.dtype),
203
+ self.sin_cached[:seq_len].to(dtype=x.dtype),
204
+ )
205
+
206
+
207
+ # Copied from transformers.model.llama.modeling_llama.LlamaLinearScalingRotaryEmbedding with Llama->InternLM2
208
+ class InternLM2LinearScalingRotaryEmbedding(InternLM2RotaryEmbedding):
209
+ """InternLM2RotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev"""
210
+
211
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
212
+ self.scaling_factor = scaling_factor
213
+ super().__init__(dim, max_position_embeddings, base, device)
214
+
215
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
216
+ self.max_seq_len_cached = seq_len
217
+ t = torch.arange(self.max_seq_len_cached, device=device).to(dtype=self.inv_freq.dtype)
218
+ t = t / self.scaling_factor
219
+
220
+ freqs = torch.einsum('i,j->ij', t, self.inv_freq)
221
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
222
+ emb = torch.cat((freqs, freqs), dim=-1)
223
+ self.register_buffer('cos_cached', emb.cos().to(dtype), persistent=False)
224
+ self.register_buffer('sin_cached', emb.sin().to(dtype), persistent=False)
225
+
226
+
227
+ # Copied from transformers.model.llama.modeling_llama.LlamaDynamicNTKScalingRotaryEmbedding with Llama->InternLM2
228
+ class InternLM2DynamicNTKScalingRotaryEmbedding(InternLM2RotaryEmbedding):
229
+ """InternLM2RotaryEmbedding extended with Dynamic NTK scaling.
230
+ Credits to the Reddit users /u/bloc97 and /u/emozilla.
231
+ """
232
+
233
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
234
+ self.scaling_factor = scaling_factor
235
+ super().__init__(dim, max_position_embeddings, base, device)
236
+
237
+ def _set_cos_sin_cache(self, seq_len, device, dtype):
238
+ self.max_seq_len_cached = seq_len
239
+
240
+ if seq_len > self.max_position_embeddings:
241
+ base = self.base * (
242
+ (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
243
+ ) ** (self.dim / (self.dim - 2))
244
+ inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2).float().to(device) / self.dim))
245
+ self.register_buffer('inv_freq', inv_freq, persistent=False)
246
+
247
+ t = torch.arange(self.max_seq_len_cached, device=device).to(dtype=self.inv_freq.dtype)
248
+
249
+ freqs = torch.einsum('i,j->ij', t, self.inv_freq)
250
+ # Different from paper, but it uses a different permutation in order to obtain the same calculation
251
+ emb = torch.cat((freqs, freqs), dim=-1)
252
+ self.register_buffer('cos_cached', emb.cos().to(dtype), persistent=False)
253
+ self.register_buffer('sin_cached', emb.sin().to(dtype), persistent=False)
254
+
255
+
256
+ # Copied from transformers.model.llama.modeling_llama.rotate_half
257
+ def rotate_half(x):
258
+ """Rotates half the hidden dims of the input."""
259
+ x1 = x[..., : x.shape[-1] // 2]
260
+ x2 = x[..., x.shape[-1] // 2:]
261
+ return torch.cat((-x2, x1), dim=-1)
262
+
263
+
264
+ # Copied from transformers.model.llama.modeling_llama.apply_rotary_pos_emb
265
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim=1):
266
+ """Applies Rotary Position Embedding to the query and key tensors."""
267
+ cos = cos[position_ids].unsqueeze(unsqueeze_dim)
268
+ sin = sin[position_ids].unsqueeze(unsqueeze_dim)
269
+ q_embed = (q * cos) + (rotate_half(q) * sin)
270
+ k_embed = (k * cos) + (rotate_half(k) * sin)
271
+ return q_embed, k_embed
272
+
273
+
274
+ class InternLM2MLP(nn.Module):
275
+ def __init__(self, config):
276
+ super().__init__()
277
+ self.config = config
278
+ self.hidden_size = config.hidden_size
279
+ self.intermediate_size = config.intermediate_size
280
+ self.w1 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
281
+ self.w3 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
282
+ self.w2 = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
283
+ self.act_fn = ACT2FN[config.hidden_act]
284
+
285
+ def forward(self, x):
286
+ down_proj = self.w2(self.act_fn(self.w1(x)) * self.w3(x))
287
+
288
+ return down_proj
289
+
290
+
291
+ # Copied from transformers.model.llama.modeling_llama.repeat_kv
292
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
293
+ """
294
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
295
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
296
+ """
297
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
298
+ if n_rep == 1:
299
+ return hidden_states
300
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
301
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
302
+
303
+
304
+ # Modified from transformers.model.llama.modeling_llama.LlamaAttention
305
+ class InternLM2Attention(nn.Module):
306
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
307
+
308
+ def __init__(self, config: InternLM2Config):
309
+ super().__init__()
310
+ self.config = config
311
+ self.hidden_size = config.hidden_size
312
+ self.num_heads = config.num_attention_heads
313
+ self.head_dim = self.hidden_size // self.num_heads
314
+ self.num_key_value_heads = config.num_key_value_heads
315
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
316
+ self.max_position_embeddings = config.max_position_embeddings
317
+ self.is_causal = True
318
+
319
+ if (self.head_dim * self.num_heads) != self.hidden_size:
320
+ raise ValueError(
321
+ f'hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}'
322
+ f' and `num_heads`: {self.num_heads}).'
323
+ )
324
+
325
+ self.wqkv = nn.Linear(
326
+ self.hidden_size,
327
+ (self.num_heads + 2 * self.num_key_value_heads) * self.head_dim,
328
+ bias=config.bias,
329
+ )
330
+
331
+ self.wo = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
332
+ self._init_rope()
333
+
334
+ def _init_rope(self):
335
+ if self.config.rope_scaling is None:
336
+ self.rotary_emb = InternLM2RotaryEmbedding(
337
+ self.head_dim,
338
+ max_position_embeddings=self.max_position_embeddings,
339
+ base=self.config.rope_theta,
340
+ )
341
+ else:
342
+ scaling_type = self.config.rope_scaling['type']
343
+ scaling_factor = self.config.rope_scaling['factor']
344
+ if scaling_type == 'dynamic':
345
+ self.rotary_emb = InternLM2DynamicNTKScalingRotaryEmbedding(
346
+ self.head_dim,
347
+ max_position_embeddings=self.max_position_embeddings,
348
+ base=self.config.rope_theta,
349
+ scaling_factor=scaling_factor,
350
+ )
351
+ elif scaling_type == 'linear':
352
+ self.rotary_emb = InternLM2LinearScalingRotaryEmbedding(
353
+ self.head_dim,
354
+ max_position_embeddings=self.max_position_embeddings,
355
+ base=self.config.rope_theta,
356
+ scaling_factor=scaling_factor,
357
+ )
358
+ else:
359
+ raise ValueError("Currently we only support rotary embedding's type being 'dynamic' or 'linear'.")
360
+ return self.rotary_emb
361
+
362
+ def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):
363
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
364
+
365
+ def forward(
366
+ self,
367
+ hidden_states: torch.Tensor,
368
+ attention_mask: Optional[torch.Tensor] = None,
369
+ position_ids: Optional[torch.LongTensor] = None,
370
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
371
+ output_attentions: bool = False,
372
+ use_cache: bool = False,
373
+ **kwargs,
374
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
375
+ if 'padding_mask' in kwargs:
376
+ warnings.warn(
377
+ 'Passing `padding_mask` is deprecated and will be removed in v4.37. '
378
+ 'Please make sure use `attention_mask` instead.`'
379
+ )
380
+
381
+ bsz, q_len, _ = hidden_states.size()
382
+
383
+ qkv_states = self.wqkv(hidden_states)
384
+
385
+ qkv_states = rearrange(
386
+ qkv_states,
387
+ 'b q (h gs d) -> b q h gs d',
388
+ gs=2 + self.num_key_value_groups,
389
+ d=self.head_dim,
390
+ )
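+ # The fused `wqkv` projection packs each key/value head together with its group of query heads:
+ # every group holds `num_key_value_groups` query heads followed by one key head and one value head,
+ # which is why the slices below take `[..., :num_key_value_groups, :]`, `[..., -2, :]` and `[..., -1, :]`.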
391
+
392
+ query_states = qkv_states[..., : self.num_key_value_groups, :]
393
+ query_states = rearrange(query_states, 'b q h gs d -> b q (h gs) d')
394
+ key_states = qkv_states[..., -2, :]
395
+ value_states = qkv_states[..., -1, :]
396
+
397
+ query_states = query_states.transpose(1, 2)
398
+ key_states = key_states.transpose(1, 2)
399
+ value_states = value_states.transpose(1, 2)
400
+
401
+ kv_seq_len = key_states.shape[-2]
402
+ if past_key_value is not None:
403
+ kv_seq_len += past_key_value[0].shape[-2]
404
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
405
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
406
+
407
+ if past_key_value is not None:
408
+ # reuse k, v, self_attention
409
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
410
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
411
+
412
+ past_key_value = (key_states, value_states) if use_cache else None
413
+
414
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
415
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
416
+
417
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
418
+
419
+ if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
420
+ raise ValueError(
421
+ f'Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is'
422
+ f' {attn_weights.size()}'
423
+ )
424
+
425
+ if attention_mask is not None:
426
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
427
+ raise ValueError(
428
+ f'Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}'
429
+ )
430
+ attn_weights = attn_weights + attention_mask
431
+
432
+ # upcast attention to fp32
433
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
434
+ attn_output = torch.matmul(attn_weights, value_states)
435
+
436
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
437
+ raise ValueError(
438
+ f'`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is'
439
+ f' {attn_output.size()}'
440
+ )
441
+
442
+ attn_output = attn_output.transpose(1, 2).contiguous()
443
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
444
+
445
+ attn_output = self.wo(attn_output)
446
+
447
+ if not output_attentions:
448
+ attn_weights = None
449
+
450
+ return attn_output, attn_weights, past_key_value
451
+
452
+
453
+ # Modified from transformers.model.llama.modeling_llama.LlamaFlashAttention2
454
+ class InternLM2FlashAttention2(InternLM2Attention):
455
+ """
456
+ InternLM2 flash attention module. This module inherits from `InternLM2Attention` as the weights of the module stay
457
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
458
+ flash attention and deal with padding tokens in case the input contains any of them.
459
+ """
460
+
461
+ def forward(
462
+ self,
463
+ hidden_states: torch.Tensor,
464
+ attention_mask: Optional[torch.LongTensor] = None,
465
+ position_ids: Optional[torch.LongTensor] = None,
466
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
467
+ output_attentions: bool = False,
468
+ use_cache: bool = False,
469
+ **kwargs,
470
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
471
+ # InternLM2FlashAttention2 attention does not support output_attentions
472
+ if 'padding_mask' in kwargs:
473
+ warnings.warn(
474
+ 'Passing `padding_mask` is deprecated and will be removed in v4.37. '
475
+ 'Please make sure use `attention_mask` instead.`'
476
+ )
477
+
478
+ # overwrite attention_mask with padding_mask
479
+ attention_mask = kwargs.pop('padding_mask')
480
+
481
+ output_attentions = False
482
+
483
+ bsz, q_len, _ = hidden_states.size()
484
+
485
+ qkv_states = self.wqkv(hidden_states)
486
+
487
+ qkv_states = rearrange(
488
+ qkv_states,
489
+ 'b q (h gs d) -> b q h gs d',
490
+ gs=2 + self.num_key_value_groups,
491
+ d=self.head_dim,
492
+ )
493
+
494
+ query_states = qkv_states[..., : self.num_key_value_groups, :]
495
+ query_states = rearrange(query_states, 'b q h gs d -> b q (h gs) d')
496
+ key_states = qkv_states[..., -2, :]
497
+ value_states = qkv_states[..., -1, :]
498
+
499
+ query_states = query_states.transpose(1, 2)
500
+ key_states = key_states.transpose(1, 2)
501
+ value_states = value_states.transpose(1, 2)
502
+
503
+ kv_seq_len = key_states.shape[-2]
504
+ if past_key_value is not None:
505
+ kv_seq_len += past_key_value[0].shape[-2]
506
+
507
+ cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len)
508
+
509
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
510
+
511
+ if past_key_value is not None:
512
+ # reuse k, v, self_attention
513
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
514
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
515
+
516
+ past_key_value = (key_states, value_states) if use_cache else None
517
+
518
+ query_states = query_states.transpose(1, 2)
519
+ key_states = key_states.transpose(1, 2)
520
+ value_states = value_states.transpose(1, 2)
521
+ attn_output = self._flash_attention_forward(
522
+ query_states, key_states, value_states, attention_mask, q_len
523
+ )
524
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
525
+ attn_output = self.wo(attn_output)
526
+
527
+ if not output_attentions:
528
+ attn_weights = None
529
+
530
+ return attn_output, attn_weights, past_key_value
531
+
532
+ def _flash_attention_forward(
533
+ self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
534
+ ):
535
+ """
536
+ Calls the forward method of Flash Attention. If the input hidden states contain at least one padding token,
537
+ the input is first unpadded, attention is computed on the unpadded sequences, and the output is padded back.
538
+
539
+ Args:
540
+ query_states (`torch.Tensor`):
541
+ Input query states to be passed to Flash Attention API
542
+ key_states (`torch.Tensor`):
543
+ Input key states to be passed to Flash Attention API
544
+ value_states (`torch.Tensor`):
545
+ Input value states to be passed to Flash Attention API
546
+ attention_mask (`torch.Tensor`):
547
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
548
+ position of padding tokens and 1 for the position of non-padding tokens.
549
+ dropout (`float`, *optional*):
550
+ Attention dropout
551
+ softmax_scale (`float`, *optional*):
552
+ The scaling of QK^T before applying softmax. Defaults to 1 / sqrt(head_dim).
553
+ """
554
+ # Contains at least one padding token in the sequence
555
+ causal = self.is_causal and query_length != 1
556
+ if attention_mask is not None:
557
+ batch_size = query_states.shape[0]
558
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._unpad_input(
559
+ query_states, key_states, value_states, attention_mask, query_length
560
+ )
561
+
562
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
563
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
564
+
565
+ attn_output_unpad = flash_attn_varlen_func(
566
+ query_states,
567
+ key_states,
568
+ value_states,
569
+ cu_seqlens_q=cu_seqlens_q,
570
+ cu_seqlens_k=cu_seqlens_k,
571
+ max_seqlen_q=max_seqlen_in_batch_q,
572
+ max_seqlen_k=max_seqlen_in_batch_k,
573
+ dropout_p=dropout,
574
+ softmax_scale=softmax_scale,
575
+ causal=causal,
576
+ )
577
+
578
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
579
+ else:
580
+ attn_output = flash_attn_func(
581
+ query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
582
+ )
583
+
584
+ return attn_output
585
+
586
+ def _unpad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
587
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
588
+ batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
589
+
590
+ key_layer = index_first_axis(
591
+ key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
592
+ )
593
+ value_layer = index_first_axis(
594
+ value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
595
+ )
596
+
597
+ if query_length == kv_seq_len:
598
+ query_layer = index_first_axis(
599
+ query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
600
+ )
601
+ cu_seqlens_q = cu_seqlens_k
602
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
603
+ indices_q = indices_k
604
+ elif query_length == 1:
605
+ max_seqlen_in_batch_q = 1
606
+ cu_seqlens_q = torch.arange(
607
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
608
+ ) # There is a memcpy here, that is very bad.
609
+ indices_q = cu_seqlens_q[:-1]
610
+ query_layer = query_layer.squeeze(1)
611
+ else:
612
+ # The -q_len: slice assumes left padding.
613
+ attention_mask = attention_mask[:, -query_length:]
614
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
615
+
616
+ return (
617
+ query_layer,
618
+ key_layer,
619
+ value_layer,
620
+ indices_q.to(torch.int64),
621
+ (cu_seqlens_q, cu_seqlens_k),
622
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
623
+ )
624
+
625
+
626
+ INTERNLM2_ATTENTION_CLASSES = {
627
+ 'eager': InternLM2Attention,
628
+ 'flash_attention_2': InternLM2FlashAttention2,
629
+ }
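+ # `config.attn_implementation` selects between these classes; `InternLM2Model.__init__` below
+ # falls back to 'eager' when flash-attn cannot be imported.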
630
+
631
+
632
+ # Modified from transformers.model.llama.modeling_llama.LlamaDecoderLayer
633
+ class InternLM2DecoderLayer(nn.Module):
634
+ def __init__(self, config: InternLM2Config):
635
+ super().__init__()
636
+ self.hidden_size = config.hidden_size
637
+
638
+ self.attention = INTERNLM2_ATTENTION_CLASSES[config.attn_implementation](config=config)
639
+
640
+ self.feed_forward = InternLM2MLP(config)
641
+ self.attention_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
642
+ self.ffn_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
643
+
644
+ def forward(
645
+ self,
646
+ hidden_states: torch.Tensor,
647
+ attention_mask: Optional[torch.Tensor] = None,
648
+ position_ids: Optional[torch.LongTensor] = None,
649
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
650
+ output_attentions: Optional[bool] = False,
651
+ use_cache: Optional[bool] = False,
652
+ **kwargs,
653
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
654
+ """
655
+ Args:
656
+ hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
657
+ attention_mask (`torch.FloatTensor`, *optional*):
658
+ attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
659
+ query_sequence_length, key_sequence_length)` if default attention is used.
660
+ output_attentions (`bool`, *optional*):
661
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
662
+ returned tensors for more detail.
663
+ use_cache (`bool`, *optional*):
664
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
665
+ (see `past_key_values`).
666
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
667
+ """
668
+ if 'padding_mask' in kwargs:
669
+ warnings.warn(
670
+ 'Passing `padding_mask` is deprecated and will be removed in v4.37. '
671
+ 'Please make sure use `attention_mask` instead.`'
672
+ )
673
+
674
+ residual = hidden_states
675
+
676
+ hidden_states = self.attention_norm(hidden_states)
677
+
678
+ # Self Attention
679
+ hidden_states, self_attn_weights, present_key_value = self.attention(
680
+ hidden_states=hidden_states,
681
+ attention_mask=attention_mask,
682
+ position_ids=position_ids,
683
+ past_key_value=past_key_value,
684
+ output_attentions=output_attentions,
685
+ use_cache=use_cache,
686
+ **kwargs,
687
+ )
688
+ hidden_states = residual + hidden_states
689
+
690
+ # Fully Connected
691
+ residual = hidden_states
692
+ hidden_states = self.ffn_norm(hidden_states)
693
+ hidden_states = self.feed_forward(hidden_states)
694
+ hidden_states = residual + hidden_states
695
+
696
+ outputs = (hidden_states,)
697
+
698
+ if output_attentions:
699
+ outputs += (self_attn_weights,)
700
+
701
+ if use_cache:
702
+ outputs += (present_key_value,)
703
+
704
+ return outputs
705
+
706
+
707
+ InternLM2_START_DOCSTRING = r"""
708
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
709
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
710
+ etc.)
711
+
712
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
713
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
714
+ and behavior.
715
+
716
+ Parameters:
717
+ config ([`InternLM2Config`]):
718
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
719
+ load the weights associated with the model, only the configuration. Check out the
720
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
721
+ """
722
+
723
+
724
+ # Copied from transformers.models.llama.modeling_llama.LlamaPreTrainedModel with Llama->InternLM2
725
+ @add_start_docstrings(
726
+ 'The bare InternLM2 Model outputting raw hidden-states without any specific head on top.',
727
+ InternLM2_START_DOCSTRING,
728
+ )
729
+ class InternLM2PreTrainedModel(PreTrainedModel):
730
+ config_class = InternLM2Config
731
+ base_model_prefix = 'model'
732
+ supports_gradient_checkpointing = True
733
+ _no_split_modules = ['InternLM2DecoderLayer']
734
+ _skip_keys_device_placement = 'past_key_values'
735
+ _supports_flash_attn_2 = True
736
+
737
+ def _init_weights(self, module):
738
+ std = self.config.initializer_range
739
+ if isinstance(module, nn.Linear):
740
+ module.weight.data.normal_(mean=0.0, std=std)
741
+ if module.bias is not None:
742
+ module.bias.data.zero_()
743
+ elif isinstance(module, nn.Embedding):
744
+ module.weight.data.normal_(mean=0.0, std=std)
745
+ if module.padding_idx is not None:
746
+ module.weight.data[module.padding_idx].zero_()
747
+
748
+
749
+ InternLM2_INPUTS_DOCSTRING = r"""
750
+ Args:
751
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
752
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
753
+ it.
754
+
755
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
756
+ [`PreTrainedTokenizer.__call__`] for details.
757
+
758
+ [What are input IDs?](../glossary#input-ids)
759
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
760
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
761
+
762
+ - 1 for tokens that are **not masked**,
763
+ - 0 for tokens that are **masked**.
764
+
765
+ [What are attention masks?](../glossary#attention-mask)
766
+
767
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
768
+ [`PreTrainedTokenizer.__call__`] for details.
769
+
770
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
771
+ `past_key_values`).
772
+
773
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
774
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
775
+ information on the default strategy.
776
+
777
+ - 1 indicates the head is **not masked**,
778
+ - 0 indicates the head is **masked**.
779
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
780
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
781
+ config.n_positions - 1]`.
782
+
783
+ [What are position IDs?](../glossary#position-ids)
784
+ past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or
785
+ when `config.use_cache=True`):
786
+ Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape
787
+ `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape
788
+ `(batch_size, num_heads, decoder_sequence_length, embed_size_per_head)`.
789
+
790
+ Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
791
+ blocks) that can be used (see `past_key_values` input) to speed up sequential decoding.
792
+
793
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
794
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
795
+ of shape `(batch_size, sequence_length)`.
796
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
797
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
798
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
799
+ model's internal embedding lookup matrix.
800
+ use_cache (`bool`, *optional*):
801
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
802
+ `past_key_values`).
803
+ output_attentions (`bool`, *optional*):
804
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
805
+ tensors for more detail.
806
+ output_hidden_states (`bool`, *optional*):
807
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
808
+ more detail.
809
+ return_dict (`bool`, *optional*):
810
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
811
+ """
812
+
813
+
814
+ # Modified from transformers.model.llama.modeling_llama.LlamaModel
815
+ @add_start_docstrings(
816
+ 'The bare InternLM2 Model outputting raw hidden-states without any specific head on top.',
817
+ InternLM2_START_DOCSTRING,
818
+ )
819
+ class InternLM2Model(InternLM2PreTrainedModel):
820
+ """
821
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLM2DecoderLayer`]
822
+
823
+ Args:
824
+ config: InternLM2Config
825
+ """
826
+
827
+ _auto_class = 'AutoModel'
828
+
829
+ def __init__(self, config: InternLM2Config):
830
+ super().__init__(config)
831
+ self.padding_idx = config.pad_token_id
832
+ self.vocab_size = config.vocab_size
833
+ self.config = config
834
+ # import pdb; pdb.set_trace()
835
+ if not has_flash_attn:
836
+ self.config.attn_implementation = 'eager'
837
+ print('Warning: Flash attention is not available, using eager attention instead.')
838
+
839
+ self.tok_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
840
+
841
+ self.layers = nn.ModuleList([InternLM2DecoderLayer(config) for _ in range(config.num_hidden_layers)])
842
+ self.norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
843
+
844
+ self.gradient_checkpointing = False
845
+ # Initialize weights and apply final processing
846
+ self.post_init()
847
+
848
+ def get_input_embeddings(self):
849
+ return self.tok_embeddings
850
+
851
+ def set_input_embeddings(self, value):
852
+ self.tok_embeddings = value
853
+
854
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
855
+ # create causal mask
856
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
857
+ combined_attention_mask = None
858
+ if input_shape[-1] > 1:
859
+ combined_attention_mask = _make_causal_mask(
860
+ input_shape,
861
+ inputs_embeds.dtype,
862
+ device=inputs_embeds.device,
863
+ past_key_values_length=past_key_values_length,
864
+ )
865
+
866
+ if attention_mask is not None:
867
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
868
+ expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to(
869
+ inputs_embeds.device
870
+ )
871
+ combined_attention_mask = (
872
+ expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
873
+ )
874
+
875
+ return combined_attention_mask
876
+
877
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
878
+ def forward(
879
+ self,
880
+ input_ids: torch.LongTensor = None,
881
+ attention_mask: Optional[torch.Tensor] = None,
882
+ position_ids: Optional[torch.LongTensor] = None,
883
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
884
+ inputs_embeds: Optional[torch.FloatTensor] = None,
885
+ use_cache: Optional[bool] = None,
886
+ output_attentions: Optional[bool] = None,
887
+ output_hidden_states: Optional[bool] = None,
888
+ return_dict: Optional[bool] = None,
889
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
890
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
891
+ output_hidden_states = (
892
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
893
+ )
894
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
895
+
896
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
897
+
898
+ if self.config.attn_implementation == 'flash_attention_2':
899
+ _import_flash_attn()
900
+
901
+ # retrieve input_ids and inputs_embeds
902
+ if input_ids is not None and inputs_embeds is not None:
903
+ raise ValueError('You cannot specify both input_ids and inputs_embeds at the same time')
904
+ elif input_ids is not None:
905
+ batch_size, seq_length = input_ids.shape[:2]
906
+ elif inputs_embeds is not None:
907
+ batch_size, seq_length = inputs_embeds.shape[:2]
908
+ else:
909
+ raise ValueError('You have to specify either input_ids or inputs_embeds')
910
+
911
+ seq_length_with_past = seq_length
912
+ past_key_values_length = 0
913
+ if past_key_values is not None:
914
+ past_key_values_length = past_key_values[0][0].shape[2]
915
+ seq_length_with_past = seq_length_with_past + past_key_values_length
916
+
917
+ if position_ids is None:
918
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
919
+ position_ids = torch.arange(
920
+ past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
921
+ )
922
+ position_ids = position_ids.unsqueeze(0)
923
+
924
+ if inputs_embeds is None:
925
+ inputs_embeds = self.tok_embeddings(input_ids)
926
+
927
+ if self.config.attn_implementation == 'flash_attention_2':
928
+ # 2d mask is passed through the layers
929
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
930
+ else:
931
+ if attention_mask is None:
932
+ attention_mask = torch.ones(
933
+ (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device
934
+ )
935
+ attention_mask = self._prepare_decoder_attention_mask(
936
+ attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length
937
+ )
938
+
939
+ # embed positions
940
+ hidden_states = inputs_embeds
941
+
942
+ if self.gradient_checkpointing and self.training:
943
+ if use_cache:
944
+ logger.warning_once(
945
+ '`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...'
946
+ )
947
+ use_cache = False
948
+
949
+ # decoder layers
950
+ all_hidden_states = () if output_hidden_states else None
951
+ all_self_attns = () if output_attentions else None
952
+ next_decoder_cache = () if use_cache else None
953
+
954
+ for idx, decoder_layer in enumerate(self.layers):
955
+ if output_hidden_states:
956
+ all_hidden_states += (hidden_states,)
957
+
958
+ past_key_value = past_key_values[idx] if past_key_values is not None else None
959
+
960
+ if self.gradient_checkpointing and self.training:
961
+
962
+ def create_custom_forward(module):
963
+ def custom_forward(*inputs):
964
+ # None for past_key_value
965
+ return module(*inputs, output_attentions, None)
966
+
967
+ return custom_forward
968
+
969
+ layer_outputs = torch.utils.checkpoint.checkpoint(
970
+ create_custom_forward(decoder_layer),
971
+ hidden_states,
972
+ attention_mask,
973
+ position_ids,
974
+ None,
975
+ )
976
+ else:
977
+ layer_outputs = decoder_layer(
978
+ hidden_states,
979
+ attention_mask=attention_mask,
980
+ position_ids=position_ids,
981
+ past_key_value=past_key_value,
982
+ output_attentions=output_attentions,
983
+ use_cache=use_cache,
984
+ )
985
+
986
+ hidden_states = layer_outputs[0]
987
+
988
+ if use_cache:
989
+ next_decoder_cache += (layer_outputs[2 if output_attentions else 1],)
990
+
991
+ if output_attentions:
992
+ all_self_attns += (layer_outputs[1],)
993
+
994
+ hidden_states = self.norm(hidden_states)
995
+
996
+ # add hidden states from the last decoder layer
997
+ if output_hidden_states:
998
+ all_hidden_states += (hidden_states,)
999
+
1000
+ next_cache = next_decoder_cache if use_cache else None
1001
+ if not return_dict:
1002
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
1003
+ return BaseModelOutputWithPast(
1004
+ last_hidden_state=hidden_states,
1005
+ past_key_values=next_cache,
1006
+ hidden_states=all_hidden_states,
1007
+ attentions=all_self_attns,
1008
+ )
1009
+
1010
+
1011
+ # Modified from transformers.model.llama.modeling_llama.LlamaForCausalLM
1012
+ class InternLM2ForCausalLM(InternLM2PreTrainedModel, GenerationMixin):
1013
+ _auto_class = 'AutoModelForCausalLM'
1014
+
1015
+ _tied_weights_keys = ['output.weight']
1016
+
1017
+ def __init__(self, config):
1018
+ super().__init__(config)
1019
+ self.model = InternLM2Model(config)
1020
+ self.vocab_size = config.vocab_size
1021
+ self.output = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1022
+
1023
+ # Initialize weights and apply final processing
1024
+ self.post_init()
1025
+
1026
+ def get_input_embeddings(self):
1027
+ return self.model.tok_embeddings
1028
+
1029
+ def set_input_embeddings(self, value):
1030
+ self.model.tok_embeddings = value
1031
+
1032
+ def get_output_embeddings(self):
1033
+ return self.output
1034
+
1035
+ def set_output_embeddings(self, new_embeddings):
1036
+ self.output = new_embeddings
1037
+
1038
+ def set_decoder(self, decoder):
1039
+ self.model = decoder
1040
+
1041
+ def get_decoder(self):
1042
+ return self.model
1043
+
1044
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
1045
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
1046
+ def forward(
1047
+ self,
1048
+ input_ids: torch.LongTensor = None,
1049
+ attention_mask: Optional[torch.Tensor] = None,
1050
+ position_ids: Optional[torch.LongTensor] = None,
1051
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1052
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1053
+ labels: Optional[torch.LongTensor] = None,
1054
+ use_cache: Optional[bool] = None,
1055
+ output_attentions: Optional[bool] = None,
1056
+ output_hidden_states: Optional[bool] = None,
1057
+ return_dict: Optional[bool] = None,
1058
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
1059
+ r"""
1060
+ Args:
1061
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1062
+ Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
1063
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
1064
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1065
+
1066
+ Returns:
1067
+
1068
+ Example:
1069
+
1070
+ ```python
1071
+ >>> from transformers import AutoTokenizer, InternLM2ForCausalLM
1072
+
1073
+ >>> model = InternLM2ForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
1074
+ >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)
1075
+
1076
+ >>> prompt = "Hey, are you conscious? Can you talk to me?"
1077
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
1078
+
1079
+ >>> # Generate
1080
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1081
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1082
+ "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
1083
+ ```"""
1084
+
1085
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1086
+ output_hidden_states = (
1087
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1088
+ )
1089
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1090
+
1091
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
1092
+ outputs = self.model(
1093
+ input_ids=input_ids,
1094
+ attention_mask=attention_mask,
1095
+ position_ids=position_ids,
1096
+ past_key_values=past_key_values,
1097
+ inputs_embeds=inputs_embeds,
1098
+ use_cache=use_cache,
1099
+ output_attentions=output_attentions,
1100
+ output_hidden_states=output_hidden_states,
1101
+ return_dict=return_dict,
1102
+ )
1103
+
1104
+ hidden_states = outputs[0]
1105
+ logits = self.output(hidden_states)
1106
+ logits = logits.float()
1107
+
1108
+ loss = None
1109
+ if labels is not None:
1110
+ # Shift so that tokens < n predict n
1111
+ shift_logits = logits[..., :-1, :].contiguous()
1112
+ shift_labels = labels[..., 1:].contiguous()
1113
+ # Flatten the tokens
1114
+ loss_fct = CrossEntropyLoss()
1115
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
1116
+ shift_labels = shift_labels.view(-1)
1117
+ # Enable model parallelism
1118
+ shift_labels = shift_labels.to(shift_logits.device)
1119
+ loss = loss_fct(shift_logits, shift_labels)
1120
+
1121
+ if not return_dict:
1122
+ output = (logits,) + outputs[1:]
1123
+ return (loss,) + output if loss is not None else output
1124
+
1125
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
1126
+ output = CausalLMOutputWithPast(
1127
+ loss=loss,
1128
+ logits=logits,
1129
+ past_key_values=outputs.past_key_values,
1130
+ hidden_states=outputs.hidden_states,
1131
+ attentions=outputs.attentions,
1132
+ )
1133
+ output['logits'] = output['logits'].to(device)
1134
+ return output
1135
+
1136
+ def prepare_inputs_for_generation(
1137
+ self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
1138
+ ):
1139
+ # HF>=4.57 may initialize `past_key_values` as a Cache object (e.g. `DynamicCache`).
1140
+ # InternLM2 currently implements the legacy tuple-of-tuples cache format, so we convert
1141
+ # non-empty Cache instances back to the legacy format and treat empty caches as None.
1142
+ if past_key_values is not None and hasattr(past_key_values, "to_legacy_cache"):
1143
+ try:
1144
+ cache_len = past_key_values.get_seq_length() if hasattr(past_key_values, "get_seq_length") else 0
1145
+ except Exception:
1146
+ cache_len = 0
1147
+ if cache_len == 0:
1148
+ past_key_values = None
1149
+ else:
1150
+ past_key_values = past_key_values.to_legacy_cache()
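+ # The legacy format is a tuple with one (key, value) pair per layer, each of shape
+ # (batch, num_key_value_heads, past_seq_len, head_dim), matching what the attention
+ # modules above concatenate along dim=2.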
1151
+
1152
+ if past_key_values is not None:
1153
+ first_key = past_key_values[0][0] if past_key_values[0] is not None else None
1154
+ past_length = first_key.shape[2] if first_key is not None else 0
1155
+
1156
+ # Some generation methods already pass only the last input ID
1157
+ if input_ids.shape[1] > past_length:
1158
+ remove_prefix_length = past_length
1159
+ else:
1160
+ # Default to old behavior: keep only final ID
1161
+ remove_prefix_length = input_ids.shape[1] - 1
1162
+
1163
+ input_ids = input_ids[:, remove_prefix_length:]
1164
+
1165
+ position_ids = kwargs.get('position_ids', None)
1166
+ if attention_mask is not None and position_ids is None:
1167
+ # create position_ids on the fly for batch generation
1168
+ position_ids = attention_mask.long().cumsum(-1) - 1
1169
+ position_ids.masked_fill_(attention_mask == 0, 1)
1170
+ if past_key_values:
1171
+ position_ids = position_ids[:, -input_ids.shape[1]:]
1172
+
1173
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1174
+ if inputs_embeds is not None and past_key_values is None:
1175
+ model_inputs = {'inputs_embeds': inputs_embeds}
1176
+ else:
1177
+ model_inputs = {'input_ids': input_ids}
1178
+
1179
+ model_inputs.update(
1180
+ {
1181
+ 'position_ids': position_ids,
1182
+ 'past_key_values': past_key_values,
1183
+ 'use_cache': kwargs.get('use_cache'),
1184
+ 'attention_mask': attention_mask,
1185
+ }
1186
+ )
1187
+ return model_inputs
1188
+
1189
+ @staticmethod
1190
+ def _reorder_cache(past_key_values, beam_idx):
1191
+ reordered_past = ()
1192
+ for layer_past in past_key_values:
1193
+ reordered_past += (
1194
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
1195
+ )
1196
+ return reordered_past
1197
+
1198
+ def build_inputs(self, tokenizer, query: str, history: List[Tuple[str, str]] = [], meta_instruction=''):
1199
+ if tokenizer.add_bos_token:
1200
+ prompt = ''
1201
+ else:
1202
+ prompt = tokenizer.bos_token
1203
+ if meta_instruction:
1204
+ prompt += f"""<|im_start|>system\n{meta_instruction}<|im_end|>\n"""
1205
+ for record in history:
1206
+ prompt += f"""<|im_start|>user\n{record[0]}<|im_end|>\n<|im_start|>assistant\n{record[1]}<|im_end|>\n"""
1207
+ prompt += f"""<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n"""
1208
+ return tokenizer([prompt], return_tensors='pt')
1209
+
1210
+ @torch.no_grad()
1211
+ def chat(
1212
+ self,
1213
+ tokenizer,
1214
+ query: str,
1215
+ history: List[Tuple[str, str]] = [],
1216
+ streamer: Optional[BaseStreamer] = None,
1217
+ max_new_tokens: int = 1024,
1218
+ do_sample: bool = True,
1219
+ temperature: float = 0.8,
1220
+ top_p: float = 0.8,
1221
+ meta_instruction: str = 'You are an AI assistant whose name is InternLM (书生·浦语).\n'
1222
+ '- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory (上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n'
1223
+ '- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such as English and 中文.',
1224
+ **kwargs,
1225
+ ):
1226
+ inputs = self.build_inputs(tokenizer, query, history, meta_instruction)
1227
+ inputs = {k: v.to(self.device) for k, v in inputs.items() if torch.is_tensor(v)}
1228
+ # also add end-of-assistant token in eos token id to avoid unnecessary generation
1229
+ eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(['<|im_end|>'])[0]]
1230
+ outputs = self.generate(
1231
+ **inputs,
1232
+ streamer=streamer,
1233
+ max_new_tokens=max_new_tokens,
1234
+ do_sample=do_sample,
1235
+ temperature=temperature,
1236
+ top_p=top_p,
1237
+ eos_token_id=eos_token_id,
1238
+ **kwargs,
1239
+ )
1240
+ outputs = outputs[0].cpu().tolist()[len(inputs['input_ids'][0]):]
1241
+ response = tokenizer.decode(outputs, skip_special_tokens=True)
1242
+ response = response.split('<|im_end|>')[0]
1243
+ history = history + [(query, response)]
1244
+ return response, history
1245
+
1246
+ @torch.no_grad()
1247
+ def stream_chat(
1248
+ self,
1249
+ tokenizer,
1250
+ query: str,
1251
+ history: List[Tuple[str, str]] = [],
1252
+ max_new_tokens: int = 1024,
1253
+ do_sample: bool = True,
1254
+ temperature: float = 0.8,
1255
+ top_p: float = 0.8,
1256
+ **kwargs,
1257
+ ):
1258
+ """
1259
+ Return a generator in format: (response, history)
1260
+ Eg.
1261
+ ('你好,有什么可以帮助您的吗', [('你好', '你好,有什么可以帮助您的吗')])
1262
+ ('你好,有什么可以帮助您的吗?', [('你好', '你好,有什么可以帮助您的吗?')])
1263
+ """
1264
+ if BaseStreamer is None:
1265
+ raise ModuleNotFoundError(
1266
+ 'The version of `transformers` is too low. Please make sure '
1267
+ 'that you have installed `transformers>=4.28.0`.'
1268
+ )
1269
+
1270
+ response_queue = queue.Queue(maxsize=20)
1271
+
1272
+ class ChatStreamer(BaseStreamer):
1273
+ def __init__(self, tokenizer) -> None:
1274
+ super().__init__()
1275
+ self.tokenizer = tokenizer
1276
+ self.queue = response_queue
1277
+ self.query = query
1278
+ self.history = history
1279
+ self.response = ''
1280
+ self.cache = []
1281
+ self.received_inputs = False
1282
+ self.queue.put((self.response, history + [(self.query, self.response)]))
1283
+
1284
+ def put(self, value):
1285
+ if len(value.shape) > 1 and value.shape[0] > 1:
1286
+ raise ValueError('ChatStreamer only supports batch size 1')
1287
+ elif len(value.shape) > 1:
1288
+ value = value[0]
1289
+
1290
+ if not self.received_inputs:
1291
+ # The first received value is input_ids, ignore here
1292
+ self.received_inputs = True
1293
+ return
1294
+
1295
+ self.cache.extend(value.tolist())
1296
+ token = self.tokenizer.decode(self.cache, skip_special_tokens=True)
1297
+ if token.strip() != '<|im_end|>':
1298
+ self.response = self.response + token
1299
+ history = self.history + [(self.query, self.response)]
1300
+ self.queue.put((self.response, history))
1301
+ self.cache = []
1302
+ else:
1303
+ self.end()
1304
+
1305
+ def end(self):
1306
+ self.queue.put(None)
1307
+
1308
+ def stream_producer():
1309
+ return self.chat(
1310
+ tokenizer=tokenizer,
1311
+ query=query,
1312
+ streamer=ChatStreamer(tokenizer=tokenizer),
1313
+ history=history,
1314
+ max_new_tokens=max_new_tokens,
1315
+ do_sample=do_sample,
1316
+ temperature=temperature,
1317
+ top_p=top_p,
1318
+ **kwargs,
1319
+ )
1320
+
1321
+ def consumer():
1322
+ producer = threading.Thread(target=stream_producer)
1323
+ producer.start()
1324
+ while True:
1325
+ res = response_queue.get()
1326
+ if res is None:
1327
+ return
1328
+ yield res
1329
+
1330
+ return consumer()
1331
+
1332
+
1333
+ # Copied from transformers.model.llama.modeling_llama.LlamaForSequenceClassification with Llama->InternLM2
1334
+ @add_start_docstrings(
1335
+ """
1336
+ The InternLM2 Model transformer with a sequence classification head on top (linear layer).
1337
+
1338
+ [`InternLM2ForSequenceClassification`] uses the last token in order to do the classification,
1339
+ as other causal models (e.g. GPT-2) do.
1340
+
1341
+ Since it does classification on the last token, it requires to know the position of the last token. If a
1342
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
1343
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
1344
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
1345
+ each row of the batch).
1346
+ """,
1347
+ InternLM2_START_DOCSTRING,
1348
+ )
1349
+ class InternLM2ForSequenceClassification(InternLM2PreTrainedModel):
1350
+ def __init__(self, config):
1351
+ super().__init__(config)
1352
+ self.num_labels = config.num_labels
1353
+ self.model = InternLM2Model(config)
1354
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
1355
+
1356
+ # Initialize weights and apply final processing
1357
+ self.post_init()
1358
+
1359
+ def get_input_embeddings(self):
1360
+ return self.model.tok_embeddings
1361
+
1362
+ def set_input_embeddings(self, value):
1363
+ self.model.tok_embeddings = value
1364
+
1365
+ @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
1366
+ def forward(
1367
+ self,
1368
+ input_ids: torch.LongTensor = None,
1369
+ attention_mask: Optional[torch.Tensor] = None,
1370
+ position_ids: Optional[torch.LongTensor] = None,
1371
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1372
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1373
+ labels: Optional[torch.LongTensor] = None,
1374
+ use_cache: Optional[bool] = None,
1375
+ output_attentions: Optional[bool] = None,
1376
+ output_hidden_states: Optional[bool] = None,
1377
+ return_dict: Optional[bool] = None,
1378
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
1379
+ r"""
1380
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1381
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1382
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1383
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1384
+ """
1385
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1386
+
1387
+ transformer_outputs = self.model(
1388
+ input_ids,
1389
+ attention_mask=attention_mask,
1390
+ position_ids=position_ids,
1391
+ past_key_values=past_key_values,
1392
+ inputs_embeds=inputs_embeds,
1393
+ use_cache=use_cache,
1394
+ output_attentions=output_attentions,
1395
+ output_hidden_states=output_hidden_states,
1396
+ return_dict=return_dict,
1397
+ )
1398
+ hidden_states = transformer_outputs[0]
1399
+ logits = self.score(hidden_states)
1400
+
1401
+ if input_ids is not None:
1402
+ batch_size = input_ids.shape[0]
1403
+ else:
1404
+ batch_size = inputs_embeds.shape[0]
1405
+
1406
+ if self.config.pad_token_id is None and batch_size != 1:
1407
+ raise ValueError('Cannot handle batch sizes > 1 if no padding token is defined.')
1408
+ if self.config.pad_token_id is None:
1409
+ sequence_lengths = -1
1410
+ else:
1411
+ if input_ids is not None:
1412
+ sequence_lengths = (torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1).to(
1413
+ logits.device
1414
+ )
1415
+ else:
1416
+ sequence_lengths = -1
1417
+
1418
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
1419
+
1420
+ loss = None
1421
+ if labels is not None:
1422
+ labels = labels.to(logits.device)
1423
+ if self.config.problem_type is None:
1424
+ if self.num_labels == 1:
1425
+ self.config.problem_type = 'regression'
1426
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1427
+ self.config.problem_type = 'single_label_classification'
1428
+ else:
1429
+ self.config.problem_type = 'multi_label_classification'
1430
+
1431
+ if self.config.problem_type == 'regression':
1432
+ loss_fct = MSELoss()
1433
+ if self.num_labels == 1:
1434
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
1435
+ else:
1436
+ loss = loss_fct(pooled_logits, labels)
1437
+ elif self.config.problem_type == 'single_label_classification':
1438
+ loss_fct = CrossEntropyLoss()
1439
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
1440
+ elif self.config.problem_type == 'multi_label_classification':
1441
+ loss_fct = BCEWithLogitsLoss()
1442
+ loss = loss_fct(pooled_logits, labels)
1443
+ if not return_dict:
1444
+ output = (pooled_logits,) + transformer_outputs[1:]
1445
+ return ((loss,) + output) if loss is not None else output
1446
+
1447
+ return SequenceClassifierOutputWithPast(
1448
+ loss=loss,
1449
+ logits=pooled_logits,
1450
+ past_key_values=transformer_outputs.past_key_values,
1451
+ hidden_states=transformer_outputs.hidden_states,
1452
+ attentions=transformer_outputs.attentions,
1453
+ )
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "crop_size": {
3
+ "height": 448,
4
+ "width": 448
5
+ },
6
+ "do_center_crop": true,
7
+ "do_convert_rgb": true,
8
+ "do_normalize": true,
9
+ "do_rescale": true,
10
+ "do_resize": true,
11
+ "image_mean": [
12
+ 0.485,
13
+ 0.456,
14
+ 0.406
15
+ ],
16
+ "image_processor_type": "CLIPImageProcessor",
17
+ "image_std": [
18
+ 0.229,
19
+ 0.224,
20
+ 0.225
21
+ ],
22
+ "resample": 3,
23
+ "rescale_factor": 0.00392156862745098,
24
+ "size": {
25
+ "shortest_edge": 448
26
+ }
27
+ }
processing_qts_plus_internvl2_5.py ADDED
@@ -0,0 +1,164 @@
1
+ """
2
+ Self-contained processor shim for trust_remote_code.
3
+
4
+ This processor:
5
+ - loads frames from a video path (OpenCV)
6
+ - applies InternVL2.5 `<img>...<IMG_CONTEXT>*N...</img>` token template
7
+ - returns `vision_input`, `input_ids`, `attention_mask`, and `question_input_ids`
8
+ compatible with `QTSplusInternLM2_ForCausalLM.generate`.
9
+ """
10
+
11
+ from __future__ import annotations
12
+
13
+ from dataclasses import dataclass
14
+ from typing import List, Optional, Union
15
+
16
+ import numpy as np
17
+ from PIL import Image
18
+
19
+ from transformers.feature_extraction_utils import BatchFeature
20
+ from transformers.processing_utils import ProcessorMixin
21
+ from transformers.tokenization_utils_base import PreTokenizedInput, TextInput
22
+
23
+
24
+ def _uniform_indices(num_frames: int, vlen: int) -> List[int]:
25
+ num_frames = max(int(num_frames), 1)
26
+ if vlen <= 0:
27
+ return []
28
+ if num_frames == 1:
29
+ return [max(0, (vlen - 1) // 2)]
30
+ last = vlen - 1
31
+ return [int(round(i * last / (num_frames - 1))) for i in range(num_frames)]
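+ # e.g. (illustrative) _uniform_indices(4, 100) -> [0, 33, 66, 99]: frames are sampled
+ # uniformly across the clip, always including the first and last decodable frame.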
32
+
33
+
34
+ def _load_video_frames_cv2(path: str, num_frames: int = 8) -> List[Image.Image]:
35
+ import cv2
36
+
37
+ cap = cv2.VideoCapture(path)
38
+ if not cap.isOpened():
39
+ raise FileNotFoundError(f"Failed to open video: {path}")
40
+
41
+ vlen = int(cap.get(cv2.CAP_PROP_FRAME_COUNT) or 0)
42
+ if vlen <= 0:
43
+ # Fallback: decode sequentially and take the first `num_frames`.
44
+ frames: List[Image.Image] = []
45
+ while len(frames) < num_frames:
46
+ ok, frame = cap.read()
47
+ if not ok:
48
+ break
49
+ frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
50
+ frames.append(Image.fromarray(frame))
51
+ cap.release()
52
+ return frames
53
+
54
+ indices = _uniform_indices(num_frames, vlen)
55
+ frames = []
56
+ for idx in indices:
57
+ cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
58
+ ok, frame = cap.read()
59
+ if not ok:
60
+ continue
61
+ frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
62
+ frames.append(Image.fromarray(frame))
63
+ cap.release()
64
+ return frames
65
+
66
+
67
+ @dataclass
68
+ class QTSplusInternVL2_5_ProcessorKwargs:
69
+ num_frames: int = 8
70
+ system_prompt: str = "You are a helpful assistant."
71
+
72
+
73
+ class QTSplusInternVL2_5_Processor(ProcessorMixin):
74
+ attributes = ["image_processor", "tokenizer"]
75
+ image_processor_class = "AutoImageProcessor"
76
+ tokenizer_class = "AutoTokenizer"
77
+
78
+ def __init__(self, image_processor=None, tokenizer=None, **kwargs):
79
+ super().__init__(image_processor=image_processor, tokenizer=tokenizer)
80
+ self.img_start_token = "<img>"
81
+ self.img_end_token = "</img>"
82
+ self.img_context_token = "<IMG_CONTEXT>"
83
+
84
+ # InternVL2.5 default: (448/14)^2 * (0.5^2) = 256 tokens per image.
85
+ self.num_image_token = int(kwargs.pop("num_image_token", 256))
86
+ self.system_prompt = str(kwargs.pop("system_prompt", "You are a helpful assistant."))
87
+
88
+ def _build_image_tokens(self, num_images: int) -> str:
89
+ num_images = max(int(num_images), 1)
90
+ one = self.img_start_token + (self.img_context_token * int(self.num_image_token)) + self.img_end_token
91
+ return one * num_images
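+ # e.g. with the default num_image_token=256 and 8 frames this yields 8 copies of
+ # "<img>" + 256 x "<IMG_CONTEXT>" + "</img>" concatenated in frame order.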
92
+
93
+ def __call__(
94
+ self,
95
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None,
96
+ images=None,
97
+ videos: Optional[Union[str, List[Image.Image]]] = None,
98
+ return_tensors: Optional[str] = "pt",
99
+ num_frames: int = 8,
100
+ system_prompt: Optional[str] = None,
101
+ **kwargs,
102
+ ) -> BatchFeature:
103
+ if text is None:
104
+ raise ValueError("`text` is required")
105
+ if isinstance(text, list):
106
+ if len(text) != 1:
107
+ raise ValueError("Only single-example processing is supported for now")
108
+ text = text[0]
109
+
110
+ if videos is not None and images is not None:
111
+ raise ValueError("Pass only one of `videos` or `images`")
112
+
113
+ if videos is not None:
114
+ if isinstance(videos, str):
115
+ frames = _load_video_frames_cv2(videos, num_frames=num_frames)
116
+ elif isinstance(videos, list):
117
+ frames = videos
118
+ else:
119
+ raise ValueError(f"Unsupported `videos` type: {type(videos)}")
120
+ images = frames
121
+
122
+ if images is None:
123
+ raise ValueError("Either `videos` or `images` must be provided")
124
+ if isinstance(images, Image.Image):
125
+ images = [images]
126
+
127
+ if not isinstance(images, list) or not images:
128
+ raise ValueError("No frames/images loaded")
129
+
130
+ img_tokens = self._build_image_tokens(num_images=len(images))
131
+ user_content = f"{img_tokens}\n{text}"
132
+ messages = [
133
+ {"role": "system", "content": system_prompt or self.system_prompt},
134
+ {"role": "user", "content": user_content},
135
+ ]
136
+
137
+ if not hasattr(self.tokenizer, "apply_chat_template"):
138
+ raise ValueError("Tokenizer does not support apply_chat_template; missing chat_template.jinja?")
139
+
140
+ prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
141
+
142
+ text_inputs = self.tokenizer(
143
+ prompt,
144
+ add_special_tokens=False,
145
+ return_tensors=return_tensors,
146
+ )
147
+ question_inputs = self.tokenizer(
148
+ str(text),
149
+ add_special_tokens=False,
150
+ return_tensors=return_tensors,
151
+ )
152
+
153
+ vision_inputs = self.image_processor(images=images, return_tensors=return_tensors)
154
+ pixel_values = vision_inputs["pixel_values"]
155
+
156
+ return BatchFeature(
157
+ data={
158
+ "input_ids": text_inputs["input_ids"],
159
+ "attention_mask": text_inputs.get("attention_mask"),
160
+ "question_input_ids": question_inputs["input_ids"],
161
+ "vision_input": pixel_values,
162
+ },
163
+ tensor_type=return_tensors,
164
+ )
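+ 
+ 
+ # Minimal usage sketch (hedged): assumes this repo is loaded with `trust_remote_code=True`
+ # and that the accompanying model's `generate` accepts the keys returned above
+ # (`vision_input`, `input_ids`, `attention_mask`, `question_input_ids`).
+ #
+ #   from transformers import AutoProcessor
+ #   processor = AutoProcessor.from_pretrained("<path_or_repo_id>", trust_remote_code=True)
+ #   batch = processor(text="What happens in this video?", videos="clip.mp4", num_frames=8)
+ #   # `batch` is a BatchFeature holding the tensors listed above.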
processor_config.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_qts_plus_internvl2_5.QTSplusInternVL2_5_Processor"
4
+ },
5
+ "image_processor_type": "CLIPImageProcessor",
6
+ "processor_class": "QTSplusInternVL2_5_Processor",
7
+ "tokenizer_class": "AutoTokenizer"
8
+ }
qts_plus.py ADDED
@@ -0,0 +1,791 @@
1
+ # -*- encoding: utf-8 -*-
2
+ # @File : qts_plus.py
3
+ # @Time : 2025/08/27 03:12:40
4
+ # @Author : Siyou
5
+ # @Description :
6
+
7
+ from __future__ import annotations
8
+ import math
9
+ from dataclasses import dataclass
10
+ from typing import Optional, Tuple, Dict, Any
11
+
12
+ import torch
13
+ import torch.nn as nn
14
+ import torch.nn.functional as F
15
+
16
+
17
+
18
+ # Small utilities
19
+ class RMSNorm(nn.Module):
20
+ def __init__(self, d: int, eps: float = 1e-6):
21
+ super().__init__()
22
+ self.weight = nn.Parameter(torch.ones(d))
23
+ self.eps = eps
24
+
25
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
26
+ # x: [B, T, D]
27
+ norm = x.pow(2).mean(dim=-1, keepdim=True)
28
+ x = x * torch.rsqrt(norm + self.eps)
29
+ return self.weight * x
30
+
31
+
32
+ class RMSNormFp32(nn.Module):
33
+ """RMSNorm that computes statistics in fp32 for stability (common in modern LMs)."""
34
+
35
+ def __init__(self, d: int, eps: float = 1e-6):
36
+ super().__init__()
37
+ self.weight = nn.Parameter(torch.ones(d))
38
+ self.eps = eps
39
+
40
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
41
+ dtype = x.dtype
42
+ x_fp32 = x.to(torch.float32)
43
+ var = x_fp32.pow(2).mean(dim=-1, keepdim=True)
44
+ x_norm = x_fp32 * torch.rsqrt(var + self.eps)
45
+ return (self.weight * x_norm).to(dtype)
46
+
47
+
48
+ class FeedForward(nn.Module):
49
+ def __init__(self, d_model: int, d_ff: int, dropout: float = 0.0):
50
+ super().__init__()
51
+ self.net = nn.Sequential(
52
+ nn.Linear(d_model, d_ff),
53
+ nn.GELU(),
54
+ nn.Linear(d_ff, d_model),
55
+ nn.Dropout(dropout),
56
+ )
57
+
58
+ def forward(self, x): # [B, T, D]
59
+ return self.net(x)
60
+
61
+
62
+ class TinyTransformerBlock(nn.Module):
63
+ """
64
+ Lightweight re-encoding block used after pruning.
65
+ Single block with RMSNorms, MHA, FFN.
66
+ """
67
+ def __init__(self, d_model: int, n_heads: int = 8, d_ff: Optional[int] = None, dropout: float = 0.0):
68
+ super().__init__()
69
+ self.norm1 = RMSNorm(d_model)
70
+ self.mha = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
71
+ self.norm2 = RMSNorm(d_model)
72
+ self.ffn = FeedForward(d_model, d_ff or (4 * d_model), dropout=dropout)
73
+
74
+ def forward(self, x: torch.Tensor, key_padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
75
+ # x: [B, T, D]
76
+ h = self.norm1(x)
77
+ # self-attention on pruned tokens; support key_padding_mask for padded positions
78
+ attn_out, _ = self.mha(h, h, h, key_padding_mask=key_padding_mask, need_weights=False)
79
+ x = x + attn_out
80
+ h = self.norm2(x)
81
+ x = x + self.ffn(h)
82
+ return x
83
+
84
+ class ScoringCrossAttentionLayer(nn.Module):
85
+ """
86
+ Cross-attention block: pre-norm Q and KV, MHA(Q, K=V), then FFN on Q path.
87
+ Returns updated Q and optional attention weights.
88
+ """
89
+ def __init__(self, d_model: int, n_heads: int, dropout: float = 0.0, d_ff: Optional[int] = None):
90
+ super().__init__()
91
+ self.q_norm = RMSNorm(d_model)
92
+ self.kv_norm = RMSNorm(d_model)
93
+ self.mha = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
94
+ self.ffn_norm = RMSNorm(d_model)
95
+ self.ffn = FeedForward(d_model, d_ff or (4 * d_model), dropout=dropout)
96
+
97
+ def forward(
98
+ self,
99
+ q: torch.Tensor, # [B, L, D]
100
+ kv: torch.Tensor, # [B, M, D]
101
+ kv_key_padding_mask: Optional[torch.Tensor] = None, # [B, M]
102
+ need_weights: bool = False,
103
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
104
+ hq = self.q_norm(q)
105
+ hkv = self.kv_norm(kv)
106
+ out, attn = self.mha(
107
+ hq, hkv, hkv,
108
+ key_padding_mask=kv_key_padding_mask,
109
+ need_weights=need_weights,
110
+ average_attn_weights=False
111
+ )
112
+ q = q + out
113
+ h = self.ffn_norm(q)
114
+ q = q + self.ffn(h)
115
+ return q, attn
116
+
117
+ class LMScoringCrossAttentionLayer(nn.Module):
118
+ """
119
+ Cross-attention block that can be initialized from a downstream decoder LM layer.
120
+ - Separate q/k/v projections with optional GQA/MQA (num_key_value_heads).
121
+ - Rotary embeddings are intentionally NOT applied (kv come from vision features).
122
+ - Pre/post RMSNorm + simple FFN on the query path.
123
+
124
+ Notes:
125
+ - If num_key_value_heads < num_heads, k/v heads are repeated to match num_heads.
126
+ """
127
+ def __init__(
128
+ self,
129
+ d_model: int,
130
+ num_heads: int,
131
+ num_key_value_heads: Optional[int] = None,
132
+ dropout: float = 0.0,
133
+ d_ff: Optional[int] = None,
134
+ rms_norm_eps: float = 1e-6,
135
+ ):
136
+ super().__init__()
137
+ assert d_model % num_heads == 0, "hidden size must be divisible by num_heads"
138
+ self.hidden_size = d_model
139
+ self.num_heads = num_heads
140
+ self.head_dim = d_model // num_heads
141
+ self.num_key_value_heads = num_key_value_heads or num_heads
142
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
143
+ self.attention_dropout = dropout
144
+
145
+ # Norms (fp32 stats for stability; common across modern decoder LMs)
146
+ self.q_norm = RMSNormFp32(d_model, eps=rms_norm_eps)
147
+ self.kv_norm = RMSNormFp32(d_model, eps=rms_norm_eps)
148
+ self.ffn_norm = RMSNormFp32(d_model, eps=rms_norm_eps)
149
+
150
+ # LM-style projections (q/k/v may be GQA/MQA with fewer kv heads)
151
+ self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=True)
152
+ self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
153
+ self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=True)
154
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
155
+
156
+ # FFN on query path
157
+ self.ffn = FeedForward(d_model, d_ff or (4 * d_model), dropout=dropout)
158
+
159
+ @staticmethod
160
+ def _repeat_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor:
161
+ # x: [B, H_kv, T, Dh] -> [B, H, T, Dh]
162
+ b, h_kv, t, dh = x.shape
163
+ if n_rep == 1:
164
+ return x
165
+ x = x[:, :, None, :, :].expand(b, h_kv, n_rep, t, dh)
166
+ return x.reshape(b, h_kv * n_rep, t, dh)
167
+
168
+ def forward(
169
+ self,
170
+ q: torch.Tensor, # [B, L, D]
171
+ kv: torch.Tensor, # [B, M, D]
172
+ kv_key_padding_mask: Optional[torch.Tensor] = None, # [B, M]
173
+ need_weights: bool = False,
174
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
175
+ B, L, _ = q.shape
176
+ _, M, _ = kv.shape
177
+
178
+ # Pre norms
179
+ qn = self.q_norm(q)
180
+ kvn = self.kv_norm(kv)
181
+
182
+ # Projections
183
+ q_states = self.q_proj(qn) # [B, L, H*Dh]
184
+ k_states = self.k_proj(kvn) # [B, M, Hkv*Dh]
185
+ v_states = self.v_proj(kvn) # [B, M, Hkv*Dh]
186
+
187
+ # Reshape to heads
188
+ q_states = q_states.view(B, L, self.num_heads, self.head_dim).transpose(1, 2) # [B, H, L, Dh]
189
+ k_states = k_states.view(B, M, self.num_key_value_heads, self.head_dim).transpose(1, 2) # [B, Hkv, M, Dh]
190
+ v_states = v_states.view(B, M, self.num_key_value_heads, self.head_dim).transpose(1, 2) # [B, Hkv, M, Dh]
191
+
192
+ # Repeat kv if necessary
193
+ if self.num_key_value_groups > 1:
194
+ k_states = self._repeat_kv(k_states, self.num_key_value_groups)
195
+ v_states = self._repeat_kv(v_states, self.num_key_value_groups)
196
+
197
+ # Attention weights [B, H, L, M]
198
+ attn_weights = torch.matmul(q_states, k_states.transpose(2, 3)) / math.sqrt(self.head_dim)
199
+
200
+ if kv_key_padding_mask is not None:
201
+ # Convert mask [B, M] -> broadcast [B, 1, 1, M]
202
+ mask = kv_key_padding_mask[:, None, None, :].to(dtype=attn_weights.dtype)
203
+ attn_weights = attn_weights.masked_fill(mask > 0.5, float('-inf'))
204
+
205
+ # Softmax in float32 for stability
206
+ attn_dtype = attn_weights.dtype
207
+ attn_weights = F.softmax(attn_weights, dim=-1, dtype=torch.float32).to(attn_dtype)
208
+ attn_weights = F.dropout(attn_weights, p=self.attention_dropout, training=self.training)
209
+
210
+ # Aggregate values -> [B, H, L, Dh]
211
+ attn_output = torch.matmul(attn_weights, v_states)
212
+ attn_output = attn_output.transpose(1, 2).contiguous().view(B, L, self.num_heads * self.head_dim)
213
+
214
+ # Final projection and residual + FFN
215
+ out = self.o_proj(attn_output)
216
+ q = q + out
217
+ q = q + self.ffn(self.ffn_norm(q))
218
+
219
+ return q, (attn_weights if need_weights else None)
220
+
221
+ def init_from_lm_attn(
222
+ self,
223
+ lm_attn: nn.Module,
224
+ lm_input_norm: Optional[nn.Module] = None,
225
+ lm_post_attn_norm: Optional[nn.Module] = None,
226
+ ) -> None:
227
+ """Best-effort init from a downstream LM attention module + norms.
228
+
229
+ Supported projection patterns:
230
+ - Separate projections: `q_proj`, `k_proj`, `v_proj`, and (`o_proj` or `out_proj`).
231
+ - Fused QKV: (`query_key_value` or `c_attn`) where weights/biases are split into q/k/v.
232
+ """
233
+
234
+ def _copy_param(dst: torch.nn.Parameter, src: torch.Tensor, name: str) -> bool:
235
+ if dst.shape != src.shape:
236
+ print(f"[QTS+] Skip init for {name}: shape mismatch {tuple(src.shape)} -> {tuple(dst.shape)}")
237
+ return False
238
+ dst.copy_(src)
239
+ return True
240
+
241
+ def _maybe_copy_linear(dst: nn.Linear, src_w: torch.Tensor, src_b: Optional[torch.Tensor], prefix: str) -> None:
242
+ _copy_param(dst.weight, src_w, f"{prefix}.weight")
243
+ if dst.bias is None:
244
+ return
245
+ if src_b is None:
246
+ dst.bias.zero_()
247
+ return
248
+ _copy_param(dst.bias, src_b, f"{prefix}.bias")
249
+
250
+ with torch.no_grad():
251
+ # 1) Projections
252
+ if all(hasattr(lm_attn, n) for n in ("q_proj", "k_proj", "v_proj")):
253
+ q_src = lm_attn.q_proj
254
+ k_src = lm_attn.k_proj
255
+ v_src = lm_attn.v_proj
256
+ o_src = getattr(lm_attn, "o_proj", None) or getattr(lm_attn, "out_proj", None)
257
+ if o_src is None:
258
+ o_src = getattr(lm_attn, "c_proj", None)
259
+ if q_src is not None:
260
+ _maybe_copy_linear(self.q_proj, q_src.weight, getattr(q_src, "bias", None), "q_proj")
261
+ if k_src is not None:
262
+ _maybe_copy_linear(self.k_proj, k_src.weight, getattr(k_src, "bias", None), "k_proj")
263
+ if v_src is not None:
264
+ _maybe_copy_linear(self.v_proj, v_src.weight, getattr(v_src, "bias", None), "v_proj")
265
+ if o_src is not None and hasattr(o_src, "weight"):
266
+ _copy_param(self.o_proj.weight, o_src.weight, "o_proj.weight")
267
+ else:
268
+ # Fused QKV weights common in some HF models (e.g., GPT-NeoX, GPT-2 style)
269
+ fused = getattr(lm_attn, "query_key_value", None) or getattr(lm_attn, "c_attn", None)
270
+ out = getattr(lm_attn, "o_proj", None) or getattr(lm_attn, "out_proj", None) or getattr(lm_attn, "c_proj", None)
271
+ if fused is not None and hasattr(fused, "weight"):
272
+ w = fused.weight
273
+ b = getattr(fused, "bias", None)
274
+ # Handle both (3D, D) and (D, 3D) conventions
275
+ if w.shape[0] == 3 * self.hidden_size and w.shape[1] == self.hidden_size:
276
+ qw, kw, vw = w.split(self.hidden_size, dim=0)
277
+ qb, kb, vb = (b.split(self.hidden_size, dim=0) if b is not None and b.numel() == 3 * self.hidden_size else (None, None, None))
278
+ elif w.shape[0] == self.hidden_size and w.shape[1] == 3 * self.hidden_size:
279
+ qw, kw, vw = w.split(self.hidden_size, dim=1)
280
+ qw, kw, vw = qw.t(), kw.t(), vw.t()
281
+ qb, kb, vb = (b.split(self.hidden_size, dim=0) if b is not None and b.numel() == 3 * self.hidden_size else (None, None, None))
282
+ else:
283
+ qw = kw = vw = qb = kb = vb = None
284
+
285
+ if qw is not None:
286
+ _maybe_copy_linear(self.q_proj, qw, qb, "q_proj")
287
+ _maybe_copy_linear(self.k_proj, kw, kb, "k_proj")
288
+ _maybe_copy_linear(self.v_proj, vw, vb, "v_proj")
289
+ if out is not None and hasattr(out, "weight"):
290
+ _copy_param(self.o_proj.weight, out.weight, "o_proj.weight")
291
+
292
+ # 2) Norms
293
+ if lm_input_norm is not None and hasattr(lm_input_norm, "weight"):
294
+ _copy_param(self.q_norm.weight, lm_input_norm.weight, "q_norm.weight")
295
+ _copy_param(self.kv_norm.weight, lm_input_norm.weight, "kv_norm.weight")
296
+ if lm_post_attn_norm is not None and hasattr(lm_post_attn_norm, "weight"):
297
+ _copy_param(self.ffn_norm.weight, lm_post_attn_norm.weight, "ffn_norm.weight")
298
+ elif lm_input_norm is not None and hasattr(lm_input_norm, "weight"):
299
+ _copy_param(self.ffn_norm.weight, lm_input_norm.weight, "ffn_norm.weight")
300
+
301
+ print("[QTS+] Scoring layer initialized from downstream LM weights (best-effort).")
302
+
303
+ class LMSelfReencodeLayer(nn.Module):
304
+ """
305
+ Thin wrapper that reuses LMScoringCrossAttentionLayer as a self-attention
306
+ re-encoding block (q == kv).
307
+ """
308
+ def __init__(
309
+ self,
310
+ d_model: int,
311
+ num_heads: int,
312
+ num_key_value_heads: Optional[int] = None,
313
+ dropout: float = 0.0,
314
+ d_ff: Optional[int] = None,
315
+ rms_norm_eps: float = 1e-6,
316
+ ):
317
+ super().__init__()
318
+ self.core = LMScoringCrossAttentionLayer(
319
+ d_model=d_model,
320
+ num_heads=num_heads,
321
+ num_key_value_heads=num_key_value_heads or num_heads,
322
+ dropout=dropout,
323
+ d_ff=d_ff,
324
+ rms_norm_eps=rms_norm_eps,
325
+ )
326
+
327
+ def forward(self, x: torch.Tensor, key_padding_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
328
+ y, _ = self.core(x, x, kv_key_padding_mask=key_padding_mask, need_weights=False)
329
+ return y
330
+
331
+ def init_from_lm_attn(
332
+ self,
333
+ lm_attn: nn.Module,
334
+ lm_input_norm: Optional[nn.Module] = None,
335
+ lm_post_attn_norm: Optional[nn.Module] = None,
336
+ ) -> None:
337
+ self.core.init_from_lm_attn(lm_attn, lm_input_norm=lm_input_norm, lm_post_attn_norm=lm_post_attn_norm)
338
+
339
+ # QTS+
340
+ class BudgetHead(nn.Module):
341
+ """
342
+ ρ = ρ_min + (ρ_max - ρ_min) * σ( MLP([sq, log M, max r, H(p)]) )
343
+ where sq is the mean query embedding.
344
+ """
345
+ def __init__(self, d_model: int, hidden: int = 256, rho_min: float = 0.05, rho_max: float = 0.5):
346
+ super().__init__()
347
+ self.rho_min = rho_min
348
+ self.rho_max = rho_max
349
+ self.mlp = nn.Sequential(
350
+ nn.Linear(d_model + 3, hidden),
351
+ nn.GELU(),
352
+ nn.Linear(hidden, 1)
353
+ )
354
+
355
+ def forward(self, sq: torch.Tensor, logM: torch.Tensor, r_max: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
356
+ """
357
+ sq: [B, D] (mean of query embeddings)
358
+ logM, r_max, H: [B]
359
+ returns ρ in [rho_min, rho_max], shape [B]
360
+ """
361
+ B, D = sq.shape
362
+ x = torch.cat([sq, logM.view(B, 1), r_max.view(B, 1), H.view(B, 1)], dim=1)
363
+ logits = self.mlp(x).squeeze(1)
364
+ rho = self.rho_min + (self.rho_max - self.rho_min) * torch.sigmoid(logits)
365
+ return rho
366
+
367
+ class QTSplus(nn.Module):
368
+ """
369
+ Query-Aware Token Selector with Adaptive Budget.
370
+ - Cross-attention scoring: r in [0,1]^M via max over text & heads.
371
+ - Predict ρ from query & video stats.
372
+ - Train mode: differentiable threshold gate with bisection.
373
+ - Infer mode: hard Top-n selection.
374
+ - Then one tiny re-encoding transformer block.
375
+ """
376
+ def __init__(
377
+ self,
378
+ d_model: int,
379
+ n_heads: int = 8,
380
+ n_kv_heads: Optional[int] = None,
381
+ tau_s: float = 0.1,
382
+ nmax: int = 2560,
383
+ rho_min: float = 0.05,
384
+ rho_max: float = 0.5,
385
+ block_dropout: float = 0.0,
386
+ use_reencode: bool = True,
387
+ n_scoring_layers: int = 1,
388
+ n_reencode_layers: int = 1,
389
+ ):
390
+ super().__init__()
391
+ assert d_model % n_heads == 0
392
+ self.d_model = d_model
393
+ self.n_heads = n_heads
394
+ self.d_head = d_model // n_heads
395
+ self.tau_s = tau_s
396
+ self.nmax = nmax
397
+ self.use_reencode = use_reencode
398
+ self.n_scoring_layers = max(int(n_scoring_layers), 1)
399
+ self.n_reencode_layers = max(int(n_reencode_layers), 1)
400
+
401
+ # linear projections for cross-attn scoring
402
+ # self.Wk = nn.Linear(d_model, d_model, bias=False)
403
+ # self.Wq = nn.Linear(d_model, d_model, bias=False)
404
+
405
+ # scoring layers: initialized from downstream LM when available
406
+ n_heads_eff = self.n_heads
407
+ n_kv_heads_eff = int(n_kv_heads) if (n_kv_heads is not None and int(n_kv_heads) > 0) else self.n_heads
408
+ self.scoring_layers = nn.ModuleList([
409
+ LMScoringCrossAttentionLayer(
410
+ d_model,
411
+ num_heads=n_heads_eff,
412
+ num_key_value_heads=n_kv_heads_eff,
413
+ dropout=0.0,
414
+ rms_norm_eps=1e-6,
415
+ ) for _ in range(self.n_scoring_layers)
416
+ ])
417
+
418
+ self.budget = BudgetHead(d_model, rho_min=rho_min, rho_max=rho_max)
419
+
420
+ # re-encode layers: self-attention blocks that can be initialized from downstream LM
421
+ if use_reencode:
422
+ self.reencode_layers = nn.ModuleList([
423
+ LMSelfReencodeLayer(
424
+ d_model,
425
+ num_heads=self.n_heads,
426
+ num_key_value_heads=n_kv_heads_eff,
427
+ dropout=block_dropout,
428
+ rms_norm_eps=1e-6,
429
+ ) for _ in range(self.n_reencode_layers)
430
+ ])
431
+ else:
432
+ self.reencode_layers = None
433
+
434
+ @staticmethod
435
+ def _entropy_from_r(r: torch.Tensor, eps: float = 1e-8) -> Tuple[torch.Tensor, torch.Tensor]:
436
+ # r: [B, M] relevance in [0,1]; form normalized p then H(p)
437
+ p = r / (r.sum(dim=1, keepdim=True) + eps) # [B, M]
438
+ H = -(p * (p + eps).log()).sum(dim=1) # [B]
439
+ return H.clamp_min(0.0), p
440
+
441
+ def _find_threshold(self, r: torch.Tensor, rho: torch.Tensor, tau_s: float, iters: int = 10) -> torch.Tensor:
442
+ """
443
+ Bisection per-batch-element for t s.t. sum σ((r - t)/τ) = ρ M
444
+ r: [B, M], rho: [B]
445
+ returns t: [B]
446
+ """
447
+ B, M = r.shape
448
+ t_low = r.min(dim=1).values - 6.0 * tau_s
449
+ t_high = r.max(dim=1).values + 6.0 * tau_s
450
+ for _ in range(iters):
451
+ t = 0.5 * (t_low + t_high)
452
+ s = torch.sigmoid((r - t.unsqueeze(1)) / tau_s).sum(dim=1) - (rho * M)
453
+ go_low = s > 0 # if too many kept, increase threshold lower bound
454
+ t_low = torch.where(go_low, t, t_low)
455
+ t_high = torch.where(go_low, t_high, t)
456
+ return 0.5 * (t_low + t_high)
457
+
458
+ def _find_threshold_differentiable(self, r, rho, tau_s, iters=6, eps=1e-6):
459
+ # r: [B, M], rho: [B]
460
+ t = r.median(dim=1, keepdim=True).values # good starting point
461
+ M = r.size(1)
462
+ for _ in range(iters):
463
+ s = torch.sigmoid((r - t) / tau_s) # [B, M]
464
+ g = s.sum(dim=1, keepdim=True) - (rho*M).view(-1,1)
465
+ gp = -(s * (1 - s) / tau_s).sum(dim=1, keepdim=True) # d/dt
466
+ t = t - g / (gp + eps)
467
+ return t.squeeze(1)
468
+
469
+ def _cross_attention_scores(self, Xv: torch.Tensor, Qt: torch.Tensor) -> torch.Tensor:
470
+ """
471
+ Xv: [B, M, D] visual tokens (after codebook, with abs pos encoding kept upstream)
472
+ Qt: [B, L, D] text tokens
473
+ returns r: [B, M] in [0,1]
474
+ """
475
+ q = Qt
476
+ attn_weights: Optional[torch.Tensor] = None
477
+ for i, layer in enumerate(self.scoring_layers):
478
+ need_w = (i == len(self.scoring_layers) - 1)
479
+ q, w = layer(q, Xv, kv_key_padding_mask=None, need_weights=need_w)
480
+ if need_w:
481
+ attn_weights = w # [B, h, L, M]
482
+ # r via max over text (L) and heads (h)
483
+ assert attn_weights is not None
484
+ r = attn_weights.amax(dim=2).amax(dim=1) # [B, M]
485
+ return r
486
+
487
+ # B, M, D = Xv.shape
488
+ # _, L, _ = Qt.shape
489
+ # # Fallback: manual scaled dot-product using explicit Wq/Wk
490
+ # K = self.Wk(Xv) # [B, M, D]
491
+ # U = self.Wq(Qt) # [B, L, D]
492
+
493
+ # # reshape to heads
494
+ # K = K.view(B, M, self.n_heads, self.d_head).transpose(1, 2) # [B, h, M, dh]
495
+ # U = U.view(B, L, self.n_heads, self.d_head).transpose(1, 2) # [B, h, L, dh]
496
+
497
+ # # attention: softmax over visual positions (M)
498
+ # # scores: [B, h, L, M]
499
+ # scores = torch.matmul(U, K.transpose(-2, -1)) / math.sqrt(self.d_head)
500
+ # A = F.softmax(scores, dim=-1)
501
+
502
+ # # max-pool over text (L) and heads (h): r in [0,1]^M
503
+ # r = A.amax(dim=2).amax(dim=1) # [B, M]
504
+ # return r
505
+
506
+ def forward(
507
+ self,
508
+ Xv: torch.Tensor, # [B, M, D]
509
+ Qt: torch.Tensor, # [B, L, D]
510
+ mode: str = "train",
511
+ ) -> Dict[str, Any]:
512
+ B, M, D = Xv.shape
513
+ assert D == self.d_model
514
+
515
+ # 1) Cross-attention scoring
516
+ r = self._cross_attention_scores(Xv, Qt) # [B, M] in [0,1]
517
+
518
+ # 2) Adaptive budget prediction
519
+ H, p = self._entropy_from_r(r)
520
+ sq = Qt.mean(dim=1) # [B, D]
521
+ logM = torch.full((B,), float(math.log(max(M, 1))), device=Xv.device, dtype=Xv.dtype)
522
+ r_max = r.max(dim=1).values
523
+ rho = self.budget(sq, logM, r_max, H) # [B], clamp in head
524
+
525
+ # fixed rho for debugging
526
+ # rho = torch.full_like(rho, 0.5)
527
+
528
+ n_pred = torch.clamp((rho * M).ceil().long(), min=1) # at least 1
529
+ n = torch.minimum(n_pred, torch.full_like(n_pred, self.nmax))
530
+
531
+ # 3) Train-time differentiable gate / Inference hard Top-n
532
+ if mode == "train":
533
+ # Differentiable threshold with Newton-style refinement (keeps budget expectation)
534
+ t = self._find_threshold_differentiable(r, rho, self.tau_s, iters=10) # [B]
535
+
536
+ # Replace TopK + manual straight-through with Gumbel-Softmax (binary keep/drop)
537
+ # logits_keep ~ (r - t); logits_drop ~ 0; temperature = tau_s
538
+ logits = torch.stack([r - t.unsqueeze(1), torch.zeros_like(r)], dim=-1) # [B, M, 2]
539
+ y = F.gumbel_softmax(logits, tau=self.tau_s, hard=True, dim=-1) # one-hot along 2
540
+ s_keep = y[..., 0] # [B, M] in {0,1}, grad via GS
541
+
542
+ # Ensure at least one token per sample (rare edge if GS picks all drop)
543
+ with torch.no_grad():
544
+ none_kept = (s_keep.sum(dim=1) < 0.5)
545
+ if none_kept.any():
546
+ for b in torch.nonzero(none_kept, as_tuple=False).view(-1):
547
+ j = torch.argmax(r[b])
548
+ s_keep[b].zero_()
549
+ s_keep[b, j] = 1.0
550
+
551
+ Z = s_keep.unsqueeze(-1) * Xv # [B, M, D]
552
+
553
+ # Gather kept tokens per sample in original order
554
+ kept_list = []
555
+ kept_idx_list = []
556
+ for b in range(B):
557
+ kb = (s_keep[b] > 0.5).nonzero(as_tuple=False).squeeze(1)
558
+ kb, _ = torch.sort(kb)
559
+ kept_list.append(Z[b, kb])
560
+ kept_idx_list.append(kb)
561
+
562
+ if self.use_reencode:
563
+ # Pad/tile to max kept for batched re-encoding
564
+ max_keep = int(max([len(x) for x in kept_list]))
565
+ Zb = []
566
+ for b in range(B):
567
+ x = kept_list[b]
568
+ if x.size(0) < max_keep:
569
+ # Repeat last kept token to pad; guaranteed at least one by fallback above
570
+ pad = x[-1:].repeat(max_keep - x.size(0), 1)
571
+ x = torch.cat([x, pad], dim=0)
572
+ Zb.append(x.unsqueeze(0))
573
+ Zb = torch.cat(Zb, dim=0) # [B, max_keep, D]
574
+
575
+ # Debug: skip previous step and re-encode all visual tokens
576
+ # Zb = Xv
577
+
578
+ # Apply each re-encode block sequentially
579
+ for layer in self.reencode_layers:
580
+ Zb = layer(Zb)
581
+ # Slice back to each sample's true kept count
582
+ Z_out = []
583
+ for b in range(B):
584
+ Z_out.append(Zb[b, : kept_idx_list[b].numel()])
585
+ else:
586
+ # Skip re-encoding; directly return kept features
587
+ Z_out = kept_list
588
+ # ragged output, collate as list for flexibility
589
+ return {
590
+ "Z": Z_out, # list of [n[b], D]
591
+ "indices": kept_idx_list, # list of [n[b]]
592
+ "rho": rho, # [B]
593
+ "r": r, # [B, M]
594
+ "p": p, # [B, M]
595
+ "n": n, # [B]
596
+ }
597
+ else:
598
+ # inference: hard Top-n, but preserve original temporal order
599
+ kept_idx_list = []
600
+ Z_out = []
601
+ for b in range(B):
602
+ kb = torch.topk(r[b], k=int(n[b].item()), dim=0).indices
603
+ kb, _ = torch.sort(kb) # keep ascending to preserve original positions
604
+ kept_idx_list.append(kb)
605
+ Z_out.append(Xv[b, kb])
606
+ if self.use_reencode:
607
+ # optional single re-encoding applied per batch via padding
608
+ max_keep = int(max([z.size(0) for z in Z_out]))
609
+ Zb = []
610
+ for z in Z_out:
611
+ if z.size(0) < max_keep:
612
+ pad = z[-1:].repeat(max_keep - z.size(0), 1)
613
+ z = torch.cat([z, pad], dim=0)
614
+ Zb.append(z.unsqueeze(0))
615
+ Zb = torch.cat(Zb, dim=0) # [B, max_keep, D]
616
+ # apply each re-encode block sequentially
617
+ for layer in self.reencode_layers:
618
+ Zb = layer(Zb)
619
+ Z_final = []
620
+ for b in range(B):
621
+ Z_final.append(Zb[b, : kept_idx_list[b].numel()])
622
+ else:
623
+ # Skip re-encoding; return selected tokens directly
624
+ Z_final = Z_out
625
+ return {
626
+ "Z": Z_final,
627
+ "indices": kept_idx_list,
628
+ "rho": rho,
629
+ "r": r,
630
+ "n": n,
631
+ }
632
+
633
+ # --- Utilities to initialize scoring/re-encode layers from a downstream LM ---
634
+ @staticmethod
635
+ def _collect_lm_decoder_layers(lm_model: nn.Module) -> list[nn.Module]:
636
+ """Best-effort extraction of decoder layers across common HF model layouts."""
637
+ candidates = [
638
+ ("layers",),
639
+ ("model", "layers"),
640
+ ("model", "model", "layers"),
641
+ ("transformer", "h"),
642
+ ("gpt_neox", "layers"),
643
+ ("decoder", "layers"),
644
+ ("model", "decoder", "layers"),
645
+ ]
646
+ for chain in candidates:
647
+ cur = lm_model
648
+ ok = True
649
+ for attr in chain:
650
+ if not hasattr(cur, attr):
651
+ ok = False
652
+ break
653
+ cur = getattr(cur, attr)
654
+ if ok and isinstance(cur, (nn.ModuleList, list, tuple)):
655
+ return list(cur)
656
+
657
+ # Fallback: scan for the first ModuleList whose *final* component is `layers` or `h`.
658
+ for name, mod in lm_model.named_modules():
659
+ if not isinstance(mod, nn.ModuleList):
660
+ continue
661
+ last = name.split(".")[-1]
662
+ if last in {"layers", "h"}:
663
+ return list(mod)
664
+ return []
665
+
666
+ @staticmethod
667
+ def _extract_lm_layer_components(
668
+ lm_layer: nn.Module,
669
+ ) -> tuple[Optional[nn.Module], Optional[nn.Module], Optional[nn.Module]]:
670
+ """Return (attn, input_norm, post_attn_norm) for a decoder layer, best-effort."""
671
+ attn = None
672
+ for n in ("self_attn", "attn", "attention"):
673
+ if hasattr(lm_layer, n):
674
+ attn = getattr(lm_layer, n)
675
+ break
676
+
677
+ input_norm = None
678
+ for n in ("input_layernorm", "ln_1", "layernorm1", "norm1", "pre_attention_layernorm"):
679
+ if hasattr(lm_layer, n):
680
+ input_norm = getattr(lm_layer, n)
681
+ break
682
+
683
+ post_attn_norm = None
684
+ for n in ("post_attention_layernorm", "ln_2", "layernorm2", "norm2", "post_attention_norm"):
685
+ if hasattr(lm_layer, n):
686
+ post_attn_norm = getattr(lm_layer, n)
687
+ break
688
+
689
+ return attn, input_norm, post_attn_norm
690
+
691
+ def init_scoring_from_lm_model(self, lm_model: nn.Module, layer_indices: list, rms_norm_eps: Optional[float] = None):
692
+ """Initialize scoring layers from the provided downstream LM layers (best-effort)."""
693
+ text_cfg = getattr(lm_model, 'config', None)
694
+ hidden_size = getattr(text_cfg, 'hidden_size', self.d_model)
695
+ num_heads = getattr(text_cfg, 'num_attention_heads', self.n_heads)
696
+ num_kv_heads = getattr(text_cfg, 'num_key_value_heads', num_heads)
697
+ if rms_norm_eps is None:
698
+ rms_norm_eps = getattr(text_cfg, 'rms_norm_eps', 1e-6)
699
+
700
+ # Rebuild if d_model differs or head counts differ and are compatible
701
+ want_heads = int(num_heads)
702
+ can_use_lm_heads = (self.d_model % want_heads) == 0
703
+ cur_kv_heads = None
704
+ if hasattr(self, 'scoring_layers') and len(self.scoring_layers) > 0:
705
+ cur_kv_heads = getattr(self.scoring_layers[0], 'num_key_value_heads', None)
706
+ rebuild = (
707
+ (hidden_size != self.d_model)
708
+ or ((want_heads != self.n_heads) and can_use_lm_heads)
709
+ or (cur_kv_heads is None or int(cur_kv_heads) != int(num_kv_heads))
710
+ )
711
+ if rebuild:
712
+ # Only adopt LM head count if compatible with our d_model; else keep current heads
713
+ self.n_heads = want_heads if can_use_lm_heads else self.n_heads
714
+ self.d_head = self.d_model // self.n_heads
715
+ self.scoring_layers = nn.ModuleList([
716
+ LMScoringCrossAttentionLayer(
717
+ self.d_model,
718
+ num_heads=self.n_heads,
719
+ num_key_value_heads=int(num_kv_heads),
720
+ dropout=0.0,
721
+ rms_norm_eps=rms_norm_eps,
722
+ ) for _ in range(self.n_scoring_layers)
723
+ ])
724
+
725
+ # Collect LM layers and copy
726
+ lm_layers = self._collect_lm_decoder_layers(lm_model)
727
+ if not lm_layers:
728
+ return # can't proceed
729
+
730
+ for i, layer in enumerate(self.scoring_layers):
731
+ idx = int(layer_indices[i]) if i < len(layer_indices) else int(layer_indices[-1])
732
+ idx = max(0, min(idx, len(lm_layers) - 1))
733
+ q_layer = lm_layers[idx]
734
+ attn, in_norm, post_norm = self._extract_lm_layer_components(q_layer)
735
+ if attn is None:
736
+ continue
737
+ layer.init_from_lm_attn(attn, lm_input_norm=in_norm, lm_post_attn_norm=post_norm)
738
+ print("[QTS+] Scoring layers initialized from downstream LM model (where shapes matched).")
739
+
740
+ def init_reencode_from_lm_model(self, lm_model: nn.Module, layer_indices: list, rms_norm_eps: Optional[float] = None):
741
+ """Initialize re-encoding self-attention layers from downstream LM layers (best-effort)."""
742
+ if not self.use_reencode:
743
+ return
744
+
745
+ text_cfg = getattr(lm_model, 'config', None)
746
+ hidden_size = getattr(text_cfg, 'hidden_size', self.d_model)
747
+ num_heads = getattr(text_cfg, 'num_attention_heads', self.n_heads)
748
+ num_kv_heads = getattr(text_cfg, 'num_key_value_heads', num_heads)
749
+ if rms_norm_eps is None:
750
+ rms_norm_eps = getattr(text_cfg, 'rms_norm_eps', 1e-6)
751
+
752
+ # Rebuild re-encode layers if head/kv-head config differs
753
+ can_use_lm_heads = (self.d_model % int(num_heads)) == 0
754
+ # Detect current kv heads from the first reencode layer (wrapper -> core)
755
+ cur_kv_heads = None
756
+ if hasattr(self, 'reencode_layers') and self.reencode_layers is not None and len(self.reencode_layers) > 0:
757
+ core0 = getattr(self.reencode_layers[0], 'core', self.reencode_layers[0])
758
+ cur_kv_heads = getattr(core0, 'num_key_value_heads', None)
759
+ rebuild = (
760
+ (hidden_size != self.d_model)
761
+ or ((int(num_heads) != self.n_heads) and can_use_lm_heads)
762
+ or (cur_kv_heads is None or int(cur_kv_heads) != int(num_kv_heads))
763
+ )
764
+ if rebuild:
765
+ if can_use_lm_heads:
766
+ self.n_heads = int(num_heads)
767
+ self.d_head = self.d_model // self.n_heads
768
+ self.reencode_layers = nn.ModuleList([
769
+ LMSelfReencodeLayer(
770
+ self.d_model,
771
+ num_heads=self.n_heads,
772
+ num_key_value_heads=int(num_kv_heads),
773
+ dropout=0.0,
774
+ rms_norm_eps=rms_norm_eps,
775
+ ) for _ in range(self.n_reencode_layers)
776
+ ])
777
+
778
+ # Collect LM layers and copy
779
+ lm_layers = self._collect_lm_decoder_layers(lm_model)
780
+ if not lm_layers:
781
+ return
782
+
783
+ for i, layer in enumerate(self.reencode_layers):
784
+ idx = int(layer_indices[i]) if i < len(layer_indices) else int(layer_indices[-1])
785
+ idx = max(0, min(idx, len(lm_layers) - 1))
786
+ q_layer = lm_layers[idx]
787
+ attn, in_norm, post_norm = self._extract_lm_layer_components(q_layer)
788
+ if attn is None:
789
+ continue
790
+ layer.init_from_lm_attn(attn, lm_input_norm=in_norm, lm_post_attn_norm=post_norm)
791
+ print("[QTS+] Re-encode layers initialized from downstream LM model (where shapes matched).")
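For reference, below is a minimal sketch of how the `QTSplus` selector defined above can be exercised standalone on random tensors; the import path, tensor shapes, and hyper-parameters are illustrative assumptions rather than values shipped with the checkpoint.

```python
# Minimal sketch: exercising the QTSplus selector on random tensors (illustrative only).
import torch

from qts_plus import QTSplus  # assumes the file above is importable as `qts_plus`

selector = QTSplus(d_model=1024, n_heads=8, tau_s=0.1, nmax=512,
                   rho_min=0.05, rho_max=0.5)
selector.eval()

Xv = torch.randn(1, 4096, 1024)  # [B, M, D] vision tokens
Qt = torch.randn(1, 77, 1024)    # [B, L, D] text/query embeddings

with torch.no_grad():
    out = selector(Xv, Qt, mode="infer")  # hard Top-n selection at inference

Z = out["Z"][0]  # [n, D] kept tokens, original temporal order preserved
print(out["rho"].item(), out["n"].item(), tuple(Z.shape))
```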
qts_plus_arch.py ADDED
@@ -0,0 +1,449 @@
1
+ from __future__ import annotations
2
+
3
+ from abc import ABC, abstractmethod
4
+ from typing import Any, Optional, Tuple
5
+
6
+ import torch
7
+ import torch.nn as nn
8
+ from transformers.configuration_utils import PretrainedConfig
9
+ from transformers.modeling_utils import PreTrainedModel
10
+
11
+ from .configuration_intern_vit import InternVisionConfig
12
+ from .modeling_intern_vit import InternVisionModel
13
+ from .qts_plus_tokenizer import QTSplusTokenizer, QTSplusTokenizerConfig
14
+
15
+
16
+ def qts_integrate_embeddings(
17
+ vision_features: torch.Tensor,
18
+ input_ids: torch.Tensor,
19
+ attention_mask: torch.Tensor,
20
+ labels: Optional[torch.Tensor] = None,
21
+ image_token_id: Optional[int] = None,
22
+ video_token_id: Optional[int] = None,
23
+ text_model_embed_layer: Optional[nn.Embedding] = None,
24
+ kept_indices: Optional[torch.Tensor] = None,
25
+ ) -> Tuple[torch.Tensor, torch.Tensor, Optional[torch.Tensor]]:
26
+ """Replace multimodal placeholder token embeddings with vision features.
27
+
28
+ Supports two prompt formats:
29
+ - multiple placeholders (e.g. InternVL `<IMG_CONTEXT>` repeated per vision token)
30
+ - a single placeholder token (expanded into N vision tokens)
31
+ """
32
+
33
+ if text_model_embed_layer is None:
34
+ raise ValueError("text_model_embed_layer is required")
35
+ if input_ids.dtype is not torch.long:
36
+ input_ids = input_ids.long()
37
+
38
+ placeholder_token_id = video_token_id if video_token_id is not None else image_token_id
39
+ if placeholder_token_id is None:
40
+ raise ValueError("Either `image_token_id` or `video_token_id` must be provided")
41
+
42
+ inputs_embeds = text_model_embed_layer(input_ids)
43
+ if vision_features.ndim != 2:
44
+ raise ValueError(f"vision_features must be [N, D], got {tuple(vision_features.shape)}")
45
+
46
+ if input_ids.ndim != 2 or input_ids.shape[0] != 1:
47
+ raise ValueError("Only batch_size==1 is currently supported")
48
+
49
+ pos = (input_ids[0] == int(placeholder_token_id)).nonzero(as_tuple=False).flatten()
50
+ if pos.numel() == 0:
51
+ raise ValueError("No multimodal placeholder tokens found in input_ids")
52
+
53
+ n_feats = int(vision_features.shape[0])
54
+ if n_feats <= 0:
55
+ raise ValueError("vision_features must contain at least one vector")
56
+
57
+ # Single placeholder: expand into N tokens.
58
+ if pos.numel() == 1 and n_feats >= 1:
59
+ insert_at = int(pos.item())
60
+ vision_features = vision_features.to(inputs_embeds.device, inputs_embeds.dtype)
61
+
62
+ pre = inputs_embeds[:, :insert_at, :]
63
+ post = inputs_embeds[:, insert_at + 1 :, :]
64
+ inputs_embeds = torch.cat([pre, vision_features.unsqueeze(0), post], dim=1)
65
+
66
+ pre_mask = attention_mask[:, :insert_at]
67
+ post_mask = attention_mask[:, insert_at + 1 :]
68
+ feats_mask = torch.ones((1, n_feats), device=attention_mask.device, dtype=attention_mask.dtype)
69
+ attention_mask = torch.cat([pre_mask, feats_mask, post_mask], dim=1)
70
+
71
+ if labels is not None:
72
+ pre_lab = labels[:, :insert_at]
73
+ post_lab = labels[:, insert_at + 1 :]
74
+ feats_lab = torch.full((1, n_feats), -100, device=labels.device, dtype=labels.dtype)
75
+ labels = torch.cat([pre_lab, feats_lab, post_lab], dim=1)
76
+
77
+ return inputs_embeds, attention_mask.to(inputs_embeds.device), labels
78
+
79
+ # Multi-placeholder: drop unselected placeholders, then replace remaining.
80
+ vision_features = vision_features.to(inputs_embeds.device, inputs_embeds.dtype)
81
+ m_placeholders = int(pos.numel())
82
+ if n_feats > m_placeholders:
83
+ raise ValueError(
84
+ f"Number of vision features ({n_feats}) exceeds placeholder tokens ({m_placeholders}). "
85
+ "Ensure the prompt inserts enough <IMG_CONTEXT> tokens."
86
+ )
87
+
88
+ if n_feats < m_placeholders:
89
+ if kept_indices is not None:
90
+ keep_idx = kept_indices.flatten().to(device=pos.device, dtype=torch.long)
91
+ keep_idx = keep_idx[(keep_idx >= 0) & (keep_idx < m_placeholders)]
92
+ if keep_idx.numel() != n_feats:
93
+ keep_idx = torch.arange(n_feats, device=pos.device, dtype=torch.long)
94
+ order = torch.argsort(keep_idx)
95
+ keep_idx = keep_idx[order]
96
+ vision_features = vision_features[order.to(device=vision_features.device)]
97
+
98
+ keep_mask = torch.zeros((m_placeholders,), device=pos.device, dtype=torch.bool)
99
+ keep_mask[keep_idx] = True
100
+ drop_pos = pos[~keep_mask]
101
+ else:
102
+ drop_pos = pos[n_feats:]
103
+
104
+ if drop_pos.numel() > 0:
105
+ keep_seq = torch.ones((input_ids.shape[1],), device=input_ids.device, dtype=torch.bool)
106
+ keep_seq[drop_pos] = False
107
+ input_ids = input_ids[:, keep_seq]
108
+ attention_mask = attention_mask[:, keep_seq]
109
+ inputs_embeds = inputs_embeds[:, keep_seq, :]
110
+ if labels is not None:
111
+ labels = labels[:, keep_seq]
112
+
113
+ pos = (input_ids[0] == int(placeholder_token_id)).nonzero(as_tuple=False).flatten()
114
+
115
+ # Replace placeholder embeddings.
116
+ if int(pos.numel()) != n_feats:
117
+ raise ValueError(f"Placeholder tokens ({int(pos.numel())}) != vision features ({n_feats}) after trimming")
118
+
119
+ for i in range(n_feats):
120
+ inputs_embeds[0, int(pos[i].item()), :] = vision_features[i, :]
121
+
122
+ if labels is not None and n_feats > 0:
123
+ labels = labels.clone()
124
+ labels[0, pos[:n_feats]] = -100
125
+
126
+ return inputs_embeds, attention_mask.to(inputs_embeds.device), labels
127
+
128
+
129
+ class InternVL2_5VisionConfig(PretrainedConfig):
130
+ model_type = "internvl2_5_vision"
131
+ is_composition = True
132
+
133
+ def __init__(
134
+ self,
135
+ vision_config: Optional[dict[str, Any]] = None,
136
+ llm_hidden_size: Optional[int] = None,
137
+ select_layer: int = -1,
138
+ force_image_size: Optional[int] = None,
139
+ downsample_ratio: float = 0.5,
140
+ ps_version: str = "v2",
141
+ **kwargs: Any,
142
+ ) -> None:
143
+ super().__init__(**kwargs)
144
+
145
+ if vision_config is None:
146
+ vision_config = {"architectures": ["InternVisionModel"]}
147
+ self.vision_config = InternVisionConfig(**vision_config)
148
+
149
+ self.select_layer = int(select_layer)
150
+ self.force_image_size = int(force_image_size) if force_image_size is not None else None
151
+ self.downsample_ratio = float(downsample_ratio)
152
+ self.ps_version = str(ps_version)
153
+
154
+ self.hidden_size = int(self.vision_config.hidden_size)
155
+ self.out_hidden_size = int(llm_hidden_size) if llm_hidden_size is not None else int(self.hidden_size)
156
+ self.llm_hidden_size = int(self.out_hidden_size)
157
+
158
+ self.architectures = ["InternVL2_5VisionTower"]
159
+
160
+ def to_dict(self) -> dict[str, Any]:
161
+ out = dict(self.__dict__)
162
+ out["vision_config"] = self.vision_config.to_dict()
163
+ out["model_type"] = self.__class__.model_type
164
+ return out
165
+
166
+
167
+ class InternVL2_5VisionTower(PreTrainedModel):
168
+ config_class = InternVL2_5VisionConfig
169
+ main_input_name = "pixel_values"
170
+
171
+ def __init__(self, config: InternVL2_5VisionConfig):
172
+ super().__init__(config)
173
+
174
+ vision_cfg = config.vision_config
175
+ if config.force_image_size is not None:
176
+ vision_cfg = InternVisionConfig(**vision_cfg.to_dict())
177
+ vision_cfg.image_size = int(config.force_image_size)
178
+
179
+ self.vision_model = InternVisionModel(vision_cfg)
180
+ self.select_layer = int(config.select_layer)
181
+ self.downsample_ratio = float(config.downsample_ratio)
182
+ self.ps_version = str(config.ps_version)
183
+
184
+ vit_hidden_size = int(vision_cfg.hidden_size)
185
+ llm_hidden_size = int(config.out_hidden_size)
186
+ mlp_in = vit_hidden_size * int(1 / self.downsample_ratio) ** 2
187
+ self.mlp1 = nn.Sequential(
188
+ nn.LayerNorm(mlp_in),
189
+ nn.Linear(mlp_in, llm_hidden_size),
190
+ nn.GELU(),
191
+ nn.Linear(llm_hidden_size, llm_hidden_size),
192
+ )
193
+
194
+ self.post_init()
195
+
196
+ def pixel_shuffle(self, x: torch.Tensor, scale_factor: float = 0.5) -> torch.Tensor:
197
+ n, w, h, c = x.size()
198
+ x = x.view(n, w, int(h * scale_factor), int(c / scale_factor))
199
+ x = x.permute(0, 2, 1, 3).contiguous()
200
+ x = x.view(
201
+ n,
202
+ int(h * scale_factor),
203
+ int(w * scale_factor),
204
+ int(c / (scale_factor * scale_factor)),
205
+ )
206
+ if self.ps_version != "v1":
207
+ x = x.permute(0, 2, 1, 3).contiguous()
208
+ return x
209
+
210
+ def extract_feature(self, pixel_values: torch.Tensor) -> torch.Tensor:
211
+ if self.select_layer == -1:
212
+ vit_out = self.vision_model(
213
+ pixel_values=pixel_values,
214
+ output_hidden_states=False,
215
+ return_dict=True,
216
+ ).last_hidden_state
217
+ else:
218
+ vit_out = self.vision_model(
219
+ pixel_values=pixel_values,
220
+ output_hidden_states=True,
221
+ return_dict=True,
222
+ ).hidden_states[self.select_layer]
223
+
224
+ vit_out = vit_out[:, 1:, :] # drop CLS
225
+ h = w = int(vit_out.shape[1] ** 0.5)
226
+ vit_out = vit_out.reshape(vit_out.shape[0], h, w, -1)
227
+ vit_out = self.pixel_shuffle(vit_out, scale_factor=self.downsample_ratio)
228
+ vit_out = vit_out.reshape(vit_out.shape[0], -1, vit_out.shape[-1])
229
+ vit_out = self.mlp1(vit_out)
230
+ return vit_out
231
+
232
+ def get_image_features(self, pixel_values: torch.Tensor) -> torch.Tensor:
233
+ return self.extract_feature(pixel_values)
234
+
235
+ def forward(self, pixel_values: torch.Tensor, **_: Any) -> torch.Tensor:
236
+ return self.get_image_features(pixel_values)
237
+
238
+
239
+ def build_vision_tower(config: PretrainedConfig) -> InternVL2_5VisionTower:
240
+ vision_cfg = getattr(config, "vision_config", None)
241
+ if not isinstance(vision_cfg, dict):
242
+ raise ValueError("Missing `vision_config` in model config for InternVL2.5 vision tower")
243
+
244
+ llm_hidden = getattr(config, "hidden_size", None)
245
+ if not isinstance(llm_hidden, int) or llm_hidden <= 0:
246
+ llm_hidden = getattr(config, "llm_hidden_size", None)
247
+ if not isinstance(llm_hidden, int) or llm_hidden <= 0:
248
+ raise ValueError("Missing `hidden_size` / `llm_hidden_size` in config")
249
+
250
+ vt_cfg = InternVL2_5VisionConfig(
251
+ vision_config=vision_cfg,
252
+ llm_hidden_size=int(llm_hidden),
253
+ select_layer=int(getattr(config, "select_layer", -1)),
254
+ force_image_size=getattr(config, "force_image_size", None),
255
+ downsample_ratio=float(getattr(config, "downsample_ratio", 0.5)),
256
+ ps_version=str(getattr(config, "ps_version", "v2")),
257
+ )
258
+ return InternVL2_5VisionTower(vt_cfg)
259
+
260
+
261
+ def build_qts_plus_tower(config: PretrainedConfig) -> QTSplusTokenizer:
262
+ vision_dim = getattr(config, "vision_embed_size", None)
263
+ if not isinstance(vision_dim, int) or vision_dim <= 0:
264
+ vision_dim = getattr(config, "hidden_size", None)
265
+ if not isinstance(vision_dim, int) or vision_dim <= 0:
266
+ raise ValueError("Missing `vision_embed_size` / `hidden_size` in config")
267
+
268
+ lm_heads = getattr(config, "num_attention_heads", None)
269
+ if not isinstance(lm_heads, int) or lm_heads <= 0:
270
+ raise ValueError("Missing `num_attention_heads` in config")
271
+ if vision_dim % lm_heads != 0:
272
+ raise ValueError(f"vision_embed_size ({vision_dim}) must be divisible by num_attention_heads ({lm_heads})")
273
+
274
+ kv_heads = getattr(config, "num_key_value_heads", None)
275
+
276
+ cfg = QTSplusTokenizerConfig(
277
+ embedding_dim=int(vision_dim),
278
+ n_heads=int(lm_heads),
279
+ num_kv_heads=int(kv_heads) if isinstance(kv_heads, int) and kv_heads > 0 else None,
280
+ tau_s=float(getattr(config, "qts_plus_tau_s", 0.1)),
281
+ nmax=int(getattr(config, "qts_plus_nmax", 2560)),
282
+ rho_min=float(getattr(config, "qts_plus_rho_min", 0.05)),
283
+ rho_max=float(getattr(config, "qts_plus_rho_max", 0.5)),
284
+ block_dropout=float(getattr(config, "qts_plus_block_dropout", 0.0)),
285
+ reencode=bool(getattr(config, "qts_plus_reencode", False)),
286
+ scoring_layers=int(getattr(config, "qts_plus_scoring_layers", 1)),
287
+ reencode_layers=int(getattr(config, "qts_plus_reencode_layers", 0)),
288
+ lambda_t=float(getattr(config, "lambda_t", 1.0)),
289
+ lambda_m=float(getattr(config, "lambda_m", 1.7)),
290
+ lambda_s=float(getattr(config, "lambda_s", 0.05)),
291
+ project_text_if_needed=bool(getattr(config, "project_text_if_needed", False)),
292
+ )
293
+ return QTSplusTokenizer(cfg)
294
+
295
+
296
+ class QTSplusMetaModel:
297
+ def __init__(self, config: PretrainedConfig):
298
+ super().__init__(config)
299
+ self.config = config
300
+
301
+ self.vision_tower = None
302
+ if getattr(config, "vision_tower", None) in {"internvl2_5_vision", "internvl_vision"}:
303
+ self.vision_tower = build_vision_tower(config)
304
+
305
+ self.qts_plus = None
306
+ if getattr(config, "enable_qts_plus", False):
307
+ self.qts_plus = build_qts_plus_tower(config)
308
+
309
+ def get_qts_plus_tower(self):
310
+ return getattr(self, "qts_plus", None)
311
+
312
+ def get_vision_tower(self):
313
+ return getattr(self, "vision_tower", None)
314
+
315
+
316
+ class QTSplusMetaForCausalLM(ABC):
317
+ @abstractmethod
318
+ def get_model(self): # pragma: no cover
319
+ raise NotImplementedError
320
+
321
+ def get_qts_plus_tower(self):
322
+ return self.get_model().get_qts_plus_tower()
323
+
324
+ def get_vision_tower(self):
325
+ return self.get_model().get_vision_tower()
326
+
327
+ def prepare_inputs_for_multimodal(
328
+ self,
329
+ vision_input: Optional[torch.FloatTensor] = None,
330
+ input_ids: Optional[torch.LongTensor] = None,
331
+ position_ids: Optional[torch.LongTensor] = None,
332
+ attention_mask: Optional[torch.Tensor] = None,
333
+ past_key_values: Optional[list[torch.FloatTensor]] = None,
334
+ labels: Optional[torch.LongTensor] = None,
335
+ question_input_ids: Optional[torch.LongTensor] = None,
336
+ image_token_id: Optional[int] = None,
337
+ video_token_id: Optional[int] = None,
338
+ mode: str = "train",
339
+ ):
340
+ if attention_mask is None and input_ids is not None:
341
+ attention_mask = torch.ones_like(input_ids, dtype=torch.long, device=input_ids.device)
342
+
343
+ # Default: no multimodal inputs -> no-op.
344
+ if vision_input is None:
345
+ z = torch.tensor(0.0, device=input_ids.device if input_ids is not None else None)
346
+ return vision_input, position_ids, attention_mask, past_key_values, None, labels, z, z, z
347
+
348
+ if question_input_ids is None:
349
+ raise ValueError("`question_input_ids` is required for QTSplus InternVL2.5 inference/training.")
350
+ if question_input_ids.dtype is not torch.long:
351
+ question_input_ids = question_input_ids.long()
352
+ if question_input_ids.ndim == 1:
353
+ question_input_ids = question_input_ids.unsqueeze(0)
354
+
355
+ vision_tower = self.get_vision_tower()
356
+ qts_plus_tower = self.get_qts_plus_tower()
357
+ text_embed_layer = self.get_model().get_input_embeddings()
358
+
359
+ if vision_tower is None or qts_plus_tower is None:
360
+ raise ValueError("Both `vision_tower` and `qts_plus` must be initialized for multimodal inference.")
361
+
362
+ # Normalize `vision_input` into a pixel_values tensor.
363
+ if isinstance(vision_input, list):
364
+ if len(vision_input) == 0:
365
+ z = torch.tensor(0.0, device=input_ids.device)
366
+ return None, position_ids, attention_mask, past_key_values, None, labels, z, z, z
367
+ vision_input = vision_input[0]
368
+
369
+ pixel_values = vision_input.get("pixel_values") if isinstance(vision_input, dict) else vision_input
370
+ if not isinstance(pixel_values, torch.Tensor):
371
+ raise ValueError(f"vision_input must be a torch.Tensor or dict with pixel_values, got {type(vision_input)}")
372
+
373
+ if pixel_values.ndim == 3: # [3, H, W]
374
+ pixel_values = pixel_values.unsqueeze(0).unsqueeze(0) # [1, 1, 3, H, W]
375
+ elif pixel_values.ndim == 4: # [B, 3, H, W] or [T, 3, H, W]
376
+ b_txt = int(question_input_ids.shape[0])
377
+ if pixel_values.shape[0] == b_txt:
378
+ pixel_values = pixel_values.unsqueeze(1) # [B, 1, 3, H, W]
379
+ else:
380
+ pixel_values = pixel_values.unsqueeze(0) # [1, T, 3, H, W]
381
+ elif pixel_values.ndim != 5:
382
+ raise ValueError(f"Unsupported InternVL pixel_values shape: {tuple(pixel_values.shape)}")
383
+
384
+ b, t, c, h, w = pixel_values.shape
385
+ pixel_values_flat = pixel_values.reshape(b * t, c, h, w)
386
+
387
+ try:
388
+ vt_param = next(vision_tower.parameters())
389
+ vt_device = vt_param.device
390
+ vt_dtype = vt_param.dtype
391
+ except StopIteration:
392
+ vt_device = pixel_values_flat.device
393
+ vt_dtype = pixel_values_flat.dtype
394
+
395
+ vision_features = vision_tower.get_image_features(pixel_values_flat.to(device=vt_device, dtype=vt_dtype))
396
+ if not (isinstance(vision_features, torch.Tensor) and vision_features.ndim == 3):
397
+ raise ValueError(f"vision_tower must return [B, N, D], got {type(vision_features)} {vision_features.shape}")
398
+ vision_features = vision_features.reshape(b, t * vision_features.shape[1], vision_features.shape[2])
399
+
400
+ text_embeddings = text_embed_layer(question_input_ids.to(text_embed_layer.weight.device))
401
+ vision_features = vision_features.to(device=text_embeddings.device, dtype=text_embeddings.dtype)
402
+ try:
403
+ qts_plus_tower.to(device=text_embeddings.device, dtype=text_embeddings.dtype)
404
+ except Exception:
405
+ qts_plus_tower.to(device=text_embeddings.device)
406
+
407
+ qts_plus_out = qts_plus_tower(vision_features, text_embeddings, mode=mode)
408
+ z_list = qts_plus_out["Z"]
409
+ if not (isinstance(z_list, list) and len(z_list) == 1 and isinstance(z_list[0], torch.Tensor)):
410
+ raise ValueError("Expected QTSplusTokenizer to return a list of 1 tensor for batch_size==1")
411
+
412
+ kept = None
413
+ try:
414
+ kept_list = qts_plus_out.get("indices")
415
+ kept = kept_list[0] if isinstance(kept_list, list) and len(kept_list) == 1 else None
416
+ except Exception:
417
+ kept = None
418
+
419
+ if image_token_id is None:
420
+ image_token_id = getattr(self.config, "image_token_id", 92546)
421
+
422
+ inputs_embeds, attention_mask, labels = qts_integrate_embeddings(
423
+ vision_features=z_list[0],
424
+ input_ids=input_ids,
425
+ attention_mask=attention_mask,
426
+ labels=labels,
427
+ image_token_id=image_token_id,
428
+ video_token_id=video_token_id,
429
+ text_model_embed_layer=text_embed_layer,
430
+ kept_indices=kept,
431
+ )
432
+
433
+ add_loss = qts_plus_out.get("add_loss") or {}
434
+ flops_loss = add_loss.get("flops", 0.0)
435
+ kv_loss = add_loss.get("kv", 0.0)
436
+ smooth_loss = add_loss.get("smooth", 0.0)
437
+
438
+ # Return `inputs_embeds` so the LM consumes the integrated embeddings.
439
+ return (
440
+ vision_input,
441
+ position_ids,
442
+ attention_mask,
443
+ past_key_values,
444
+ inputs_embeds,
445
+ labels,
446
+ flops_loss,
447
+ kv_loss,
448
+ smooth_loss,
449
+ )
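As a quick illustration of the single-placeholder path in `qts_integrate_embeddings`, the toy example below expands one placeholder token into the selected vision features; the embedding size, vocabulary, and placeholder id are arbitrary assumptions for the sketch.

```python
# Toy check of the single-placeholder expansion path (illustrative values only).
import torch
import torch.nn as nn

from qts_plus_arch import qts_integrate_embeddings  # assumes this file is importable

embed = nn.Embedding(100, 16)              # tiny stand-in for the LM embedding table
input_ids = torch.tensor([[1, 2, 42, 3]])  # token id 42 plays the placeholder role here
attention_mask = torch.ones_like(input_ids)
vision_features = torch.randn(5, 16)       # [N, D] tokens kept by QTS+

inputs_embeds, attention_mask, labels = qts_integrate_embeddings(
    vision_features=vision_features,
    input_ids=input_ids,
    attention_mask=attention_mask,
    labels=None,
    image_token_id=42,
    text_model_embed_layer=embed,
)
print(inputs_embeds.shape)   # [1, 8, 16]: the placeholder expanded into 5 vision tokens
print(attention_mask.shape)  # [1, 8]
```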
qts_plus_internlm2_lm.py ADDED
@@ -0,0 +1,226 @@
1
+ # -*- encoding: utf-8 -*-
2
+ # QTSplus wrapper for InternLM2 Causal LM (used by InternVL2.5-8B)
3
+
4
+ from __future__ import annotations
5
+
6
+ import os
7
+ from typing import Any, List, Optional, Union
8
+
9
+ import torch
10
+ import torch.nn as nn
11
+
12
+ from transformers import AutoConfig, AutoModelForCausalLM
13
+ from transformers.modeling_flash_attention_utils import is_flash_attn_available
14
+
15
+ from .configuration_internlm2 import InternLM2Config
16
+ from .modeling_internlm2 import InternLM2ForCausalLM, InternLM2Model
17
+ from .qts_plus_arch import QTSplusMetaForCausalLM, QTSplusMetaModel
18
+
19
+ # Ensure nested trust_remote_code dependencies are captured by Transformers'
20
+ # dynamic module snapshotting (it only bundles files imported from this module).
21
+ from .configuration_intern_vit import InternVisionConfig as _InternVisionConfig # noqa: F401
22
+ from .modeling_intern_vit import InternVisionModel as _InternVisionModel # noqa: F401
23
+ from .qts_plus import QTSplus as _QTSplus # noqa: F401
24
+ from .qts_plus_tokenizer import QTSplusTokenizer as _QTSplusTokenizer # noqa: F401
25
+
26
+ def _hf_generate_fallback(model: "QTSplusInternLM2_ForCausalLM", **kwargs):
27
+ """Call into HF generation even when `super().generate` is unavailable."""
28
+ try:
29
+ return super(QTSplusInternLM2_ForCausalLM, model).generate(**kwargs)
30
+ except AttributeError as e:
31
+ msg = str(e)
32
+ if "generate" not in msg:
33
+ raise
34
+ try:
35
+ from transformers.generation.utils import GenerationMixin # type: ignore
36
+ except Exception: # pragma: no cover
37
+ from transformers.generation_utils import GenerationMixin # type: ignore
38
+ return GenerationMixin.generate(model, **kwargs)
39
+
40
+
41
+ class QTSplusInternLM2_CausalLM_Config(InternLM2Config):
42
+ model_type = "qts_plus_internlm2_causal_lm"
43
+
44
+ @classmethod
45
+ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs):
46
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
47
+ if str(config_dict.get("model_type") or "").lower() == "internlm2":
48
+ config_dict["model_type"] = cls.model_type
49
+ return cls.from_dict(config_dict, **kwargs)
50
+
51
+
52
+ class QTSplusInternLM2_Model(QTSplusMetaModel, InternLM2Model):
53
+ config_class = QTSplusInternLM2_CausalLM_Config
54
+
55
+ def __init__(self, config: InternLM2Config):
56
+ super().__init__(config)
57
+
58
+
59
+ class QTSplusInternLM2_ForCausalLM(QTSplusMetaForCausalLM, InternLM2ForCausalLM):
60
+ config_class = QTSplusInternLM2_CausalLM_Config
61
+ _tied_weights_keys = ["output.weight"]
62
+
63
+ def __init__(self, config: InternLM2Config):
64
+ # Configure attention backend before modules are built.
65
+ try:
66
+ cfg_attn = getattr(config, "attn_implementation", None)
67
+ if (cfg_attn is None or str(cfg_attn) == "auto") and is_flash_attn_available():
68
+ setattr(config, "attn_implementation", "flash_attention_2")
69
+ except Exception:
70
+ pass
71
+
72
+ # Skip InternLM2ForCausalLM.__init__ to avoid constructing a second backbone.
73
+ super(InternLM2ForCausalLM, self).__init__(config)
74
+ self.model = QTSplusInternLM2_Model(config)
75
+ self.vocab_size = config.vocab_size
76
+ self.output = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
77
+ self.post_init()
78
+
79
+ def get_model(self):
80
+ return self.model
81
+
82
+ def forward(
83
+ self,
84
+ vision_input: Optional[torch.FloatTensor] = None,
85
+ input_ids: Optional[torch.LongTensor] = None,
86
+ labels: Optional[torch.LongTensor] = None,
87
+ attention_mask: Optional[torch.Tensor] = None,
88
+ position_ids: Optional[torch.LongTensor] = None,
89
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
90
+ inputs_embeds: Optional[torch.FloatTensor] = None,
91
+ use_cache: Optional[bool] = None,
92
+ output_attentions: Optional[bool] = None,
93
+ output_hidden_states: Optional[bool] = None,
94
+ return_dict: Optional[bool] = None,
95
+ cache_position: Optional[torch.LongTensor] = None,
96
+ question_input_ids: Optional[torch.LongTensor] = None,
97
+ image_token_id: Optional[int] = None,
98
+ video_token_id: Optional[int] = None,
99
+ **kwargs: Any,
100
+ ):
101
+ # InternLM2ForCausalLM doesn't accept cache_position; ignore for compatibility with newer HF.
102
+ _ = cache_position
103
+ # HF Trainer (>=4.56) may pass this for loss normalization; InternLM2 forward doesn't accept it.
104
+ kwargs.pop("num_items_in_batch", None)
105
+
106
+ if inputs_embeds is not None:
107
+ input_ids = None
+ # Default the auxiliary losses so they remain defined when `inputs_embeds`
+ # is passed in directly and the multimodal preparation below is skipped.
+ flops_loss = kv_loss = smooth_loss = 0.0
108
+
109
+ if inputs_embeds is None:
110
+ (
111
+ vision_input,
112
+ position_ids,
113
+ attention_mask,
114
+ past_key_values,
115
+ inputs_embeds,
116
+ labels,
117
+ flops_loss,
118
+ kv_loss,
119
+ smooth_loss,
120
+ ) = self.prepare_inputs_for_multimodal(
121
+ vision_input,
122
+ input_ids,
123
+ position_ids,
124
+ attention_mask,
125
+ past_key_values,
126
+ labels,
127
+ question_input_ids,
128
+ video_token_id=video_token_id,
129
+ image_token_id=image_token_id,
130
+ mode="train" if self.training else "infer",
131
+ )
132
+ if inputs_embeds is None and input_ids is not None:
133
+ inputs_embeds = self.get_model().get_input_embeddings()(input_ids)
134
+
135
+ input_ids = None
136
+
137
+ outputs = super().forward(
138
+ attention_mask=attention_mask,
139
+ position_ids=position_ids,
140
+ past_key_values=past_key_values,
141
+ inputs_embeds=inputs_embeds,
142
+ labels=labels,
143
+ use_cache=use_cache,
144
+ output_attentions=output_attentions,
145
+ output_hidden_states=output_hidden_states,
146
+ return_dict=return_dict,
147
+ **kwargs,
148
+ )
149
+
150
+ add_loss = {
151
+ "flops_loss": flops_loss if vision_input is not None else 0.0,
152
+ "kv_loss": kv_loss if vision_input is not None else 0.0,
153
+ "smooth_loss": smooth_loss if vision_input is not None else 0.0,
154
+ }
155
+
156
+ if labels is None and not self.training:
157
+ return outputs
158
+
159
+ return (outputs, add_loss)
160
+
161
+ @torch.no_grad()
162
+ def generate(
163
+ self,
164
+ vision_input: Optional[torch.Tensor] = None,
165
+ input_ids: Optional[torch.Tensor] = None,
166
+ question_input_ids: Optional[torch.Tensor] = None,
167
+ image_token_id: Optional[int] = None,
168
+ video_token_id: Optional[int] = None,
169
+ **kwargs,
170
+ ):
171
+ # `generate()` should run in eval mode to avoid returning the training-only
172
+ # tuple `(outputs, add_loss)` from `forward()` when `self.training == True`.
173
+ was_training = self.training
174
+ if was_training:
175
+ self.eval()
176
+ position_ids = kwargs.pop("position_ids", None)
177
+ attention_mask = kwargs.pop("attention_mask", None)
178
+ if attention_mask is None and input_ids is not None:
179
+ attention_mask = torch.ones_like(input_ids, dtype=torch.long, device=input_ids.device)
180
+ if "inputs_embeds" in kwargs:
181
+ raise NotImplementedError("`inputs_embeds` is not supported")
182
+
183
+ if vision_input is not None:
184
+ (
185
+ vision_input,
186
+ position_ids,
187
+ attention_mask,
188
+ _,
189
+ inputs_embeds,
190
+ _,
191
+ *_unused_losses,
192
+ ) = self.prepare_inputs_for_multimodal(
193
+ vision_input,
194
+ input_ids,
195
+ position_ids,
196
+ attention_mask,
197
+ None,
198
+ None,
199
+ question_input_ids,
200
+ video_token_id=video_token_id,
201
+ image_token_id=image_token_id,
202
+ mode="infer",
203
+ )
204
+ else:
205
+ inputs_embeds = self.get_model().get_input_embeddings()(input_ids)
206
+
207
+ kwargs["attention_mask"] = attention_mask
208
+ if position_ids is not None:
209
+ kwargs["position_ids"] = position_ids
210
+ kwargs.pop("input_ids", None)
211
+
212
+ if "use_cache" not in kwargs:
213
+ kwargs["use_cache"] = True
214
+ try:
215
+ output_ids = _hf_generate_fallback(self, inputs_embeds=inputs_embeds, **kwargs)
216
+ finally:
217
+ if was_training:
218
+ self.train()
219
+ if input_ids is not None:
220
+ input_ids = input_ids.to(output_ids.device)
221
+ output_ids = torch.cat([input_ids, output_ids], dim=1)
222
+ return output_ids
223
+
224
+
225
+ AutoConfig.register("qts_plus_internlm2_causal_lm", QTSplusInternLM2_CausalLM_Config)
226
+ AutoModelForCausalLM.register(QTSplusInternLM2_CausalLM_Config, QTSplusInternLM2_ForCausalLM)
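With the config and model classes registered above, the checkpoint can be loaded through the standard Auto classes. The sketch below shows one plausible call pattern; the repository id, placeholder token, dummy pixel values, and prompt handling are assumptions for illustration, not a verified end-to-end recipe.

```python
# Hedged sketch: loading the QTSplus InternLM2 wrapper via the Auto classes (ids are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "SiyouLi/QTSplus-InternVL2.5-8B"  # placeholder repository id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()

prompt_ids = tokenizer("Describe the video.<IMG_CONTEXT>", return_tensors="pt").input_ids.cuda()
question_ids = tokenizer("Describe the video.", return_tensors="pt").input_ids.cuda()
pixel_values = torch.randn(1, 8, 3, 448, 448, dtype=torch.bfloat16).cuda()  # [B, T, 3, H, W] dummy frames

output_ids = model.generate(
    vision_input=pixel_values,
    input_ids=prompt_ids,
    question_input_ids=question_ids,
    image_token_id=tokenizer.convert_tokens_to_ids("<IMG_CONTEXT>"),
    max_new_tokens=64,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```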
qts_plus_tokenizer.py ADDED
@@ -0,0 +1,143 @@
1
+ # -*- encoding: utf-8 -*-
2
+ # @File : tokenizer.py
3
+ # @Time : 2025/03/16 20:45:07
4
+ # @Author : Siyou
5
+ # @Description :
6
+ from __future__ import annotations
7
+
8
+ from dataclasses import dataclass
9
+ from typing import Optional, Dict, Any, List, Tuple
10
+ import torch
11
+ import torch.nn as nn
12
+ import torch.nn.functional as F
13
+
14
+ from .qts_plus import QTSplus
15
+
16
+
17
+ @dataclass
18
+ class QTSplusTokenizerConfig:
19
+
20
+ embedding_dim: int
21
+
22
+ # QTS+
23
+ n_heads: int = 8
24
+ num_kv_heads: Optional[int] = None
25
+ tau_s: float = 0.1
26
+ nmax: int = 2560
27
+ rho_min: float = 0.05
28
+ rho_max: float = 0.5
29
+ block_dropout: float = 0.0
30
+ reencode: bool = True
31
+ scoring_layers: int = 1
32
+ reencode_layers: int = 1
33
+
34
+ lambda_t: float = 1.0
35
+ lambda_m: float = 1.7
36
+ lambda_s: float = 0.05
37
+
38
+ # Misc
39
+ project_text_if_needed: bool = False
40
+
41
+ # Scoring/re-encode layers can be initialized from downstream LM weights.
42
+
43
+
44
+ class QTSplusTokenizer(nn.Module):
45
+ """
46
+ End-to-end *QTSplusTok* tokenizer.
47
+
48
+ Pipeline:
49
+ X_v --(QTS+)--> X′
50
+ """
51
+ def __init__(self, cfg: QTSplusTokenizerConfig):
52
+ super().__init__()
53
+ self.cfg = cfg
54
+
55
+ self.selector = QTSplus(
56
+ d_model=cfg.embedding_dim,
57
+ n_heads=cfg.n_heads,
58
+ n_kv_heads=cfg.num_kv_heads or cfg.n_heads,
59
+ tau_s=cfg.tau_s,
60
+ nmax=cfg.nmax,
61
+ rho_min=cfg.rho_min,
62
+ rho_max=cfg.rho_max,
63
+ block_dropout=cfg.block_dropout,
64
+ use_reencode=cfg.reencode,
65
+ n_scoring_layers=cfg.scoring_layers,
66
+ n_reencode_layers=cfg.reencode_layers,
67
+ )
68
+
69
+ # If text embeddings come in a different dimensionality, learn a light projection.
70
+ self.text_proj: Optional[nn.Linear] = None
71
+
72
+ self.rho_sum = 0
73
+ self.rho_count = 0
74
+
75
+ def forward(
76
+ self,
77
+ X_v: torch.Tensor, # [B, M, D]
78
+ Q_t: torch.Tensor, # [B, L, D_txt]
79
+ mode: str = "train", # 'train' | 'infer'
80
+ ) -> Dict[str, Any]:
81
+ assert mode in ("train", "infer")
82
+ B, M, D = X_v.shape
83
+ D_txt = Q_t.shape[-1]
84
+
85
+ # --- Project text if needed ---
86
+ if D_txt != D:
87
+ if self.cfg.project_text_if_needed:
88
+ if self.text_proj is None:
89
+ self.text_proj = nn.Linear(D_txt, D, bias=False)
90
+ # Ensure the projection layer uses the same dtype as input
91
+ self.text_proj = self.text_proj.to(device=Q_t.device, dtype=Q_t.dtype)
92
+ Q_proj = self.text_proj(Q_t)
93
+ else:
94
+ raise ValueError(f"QTS+ expects text dim {D}, got {D_txt}. Set project_text_if_needed=True.")
95
+ else:
96
+ Q_proj = Q_t
97
+
98
+ sel = self.selector(X_v, Q_proj, mode=mode) # returns dict per qts_plus.py
99
+ Z_list: List[torch.Tensor] = sel["Z"] # list of [T_b, D] tensors per sample
100
+ n_vec: torch.Tensor = sel["n"] # [B]
101
+ rho: torch.Tensor = sel["rho"] # [B]
102
+ r: torch.Tensor = sel["r"] # [B, M]
103
+
104
+ # Compute the Eq. (1) cost proxies (per-batch averages for convenience)
105
+ # flops ~ (ρM)^2 / n_max^2 ; kv ~ (ρM) / n_max
106
+ M_tensor = torch.tensor(float(M), device=X_v.device)
107
+ flops_proxy = ((rho * M_tensor) ** 2) / float(self.cfg.nmax ** 2)
108
+ kv_proxy = (rho * M_tensor) / float(self.cfg.nmax)
109
+ self.rho_sum += rho.sum().item()
110
+ self.rho_count += B
111
+ rho_loss = (rho - self.rho_sum / self.rho_count) ** 2
112
+
113
+ return {
114
+ "indices": sel["indices"], # kept indices per sample (list[LongTensor])
115
+ "Z": Z_list,
116
+ "rho": rho,
117
+ "r": r,
118
+ "n": n_vec,
119
+ "add_loss": {
120
+ "flops": flops_proxy.mean() * self.cfg.lambda_t,
121
+ "kv": kv_proxy.mean() * self.cfg.lambda_m,
122
+ "smooth": rho_loss.mean() * self.cfg.lambda_s,
123
+ },
124
+ }
125
+
126
+ if __name__ == "__main__":
127
+ cfg = QTSplusTokenizerConfig(
128
+ embedding_dim=1024, n_heads=8, tau_s=0.1, nmax=512, rho_min=0.05, rho_max=0.5
129
+ )
130
+
131
+ qts = QTSplusTokenizer(cfg)
132
+
133
+ # X_v: [B, M, D] vision latents (abs. pos kept upstream)
134
+ X_v = torch.randn(1, 4096, 1024)
135
+ # Q_t: [B, L, D] text/query embeddings (will be projected if D differs)
136
+ Q_t = torch.randn(1, 77, 1024)
137
+ out = qts(X_v, Q_t, mode='train')
138
+
139
+ for k, v in out.items():
140
+ if k != "indices":
141
+ print(f"{k}: {v}")
142
+ else:
143
+ print(f"indices: {[x.shape for x in v]}")
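
The `add_loss` dictionary returned by `forward` carries the three auxiliary terms (FLOPs proxy, KV-cache proxy, retention smoothness), each already scaled by its lambda. Below is a minimal sketch of how a training loop might fold them into the total objective; `lm_loss` is an assumed stand-in for the next-token cross-entropy computed elsewhere.

```python
import torch

def total_training_loss(lm_loss: torch.Tensor, add_loss: dict) -> torch.Tensor:
    # The lambda weights are applied inside QTSplusTokenizer.forward,
    # so the auxiliary terms are simply summed onto the LM loss here.
    return lm_loss + add_loss["flops"] + add_loss["kv"] + add_loss["smooth"]

# Usage sketch with the example tensors from the __main__ block above:
# out = qts(X_v, Q_t, mode="train")
# loss = total_training_loss(lm_loss, out["add_loss"])
```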
special_tokens_map.json ADDED
@@ -0,0 +1,47 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|action_start|>",
6
+ "<|action_end|>",
7
+ "<|interpreter|>",
8
+ "<|plugin|>",
9
+ "<img>",
10
+ "</img>",
11
+ "<IMG_CONTEXT>",
12
+ "<quad>",
13
+ "</quad>",
14
+ "<ref>",
15
+ "</ref>",
16
+ "<box>",
17
+ "</box>"
18
+ ],
19
+ "bos_token": {
20
+ "content": "<s>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false
25
+ },
26
+ "eos_token": {
27
+ "content": "</s>",
28
+ "lstrip": false,
29
+ "normalized": false,
30
+ "rstrip": false,
31
+ "single_word": false
32
+ },
33
+ "pad_token": {
34
+ "content": "</s>",
35
+ "lstrip": false,
36
+ "normalized": false,
37
+ "rstrip": false,
38
+ "single_word": false
39
+ },
40
+ "unk_token": {
41
+ "content": "<unk>",
42
+ "lstrip": false,
43
+ "normalized": false,
44
+ "rstrip": false,
45
+ "single_word": false
46
+ }
47
+ }
tokenization_internlm2.py ADDED
@@ -0,0 +1,235 @@
1
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # This code is based on transformers/src/transformers/models/llama/tokenization_llama.py
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+
17
+ """Tokenization classes for InternLM."""
18
+ import os
19
+ from shutil import copyfile
20
+ from typing import Any, Dict, List, Optional, Tuple
21
+
22
+ import sentencepiece as spm
23
+ from transformers.tokenization_utils import PreTrainedTokenizer
24
+ from transformers.utils import logging
25
+
26
+ logger = logging.get_logger(__name__)
27
+
28
+ VOCAB_FILES_NAMES = {'vocab_file': './tokenizer.model'}
29
+
30
+ PRETRAINED_VOCAB_FILES_MAP = {}
31
+
32
+
33
+ # Modified from transformers.model.llama.tokenization_llama.LlamaTokenizer
34
+ class InternLM2Tokenizer(PreTrainedTokenizer):
35
+ """
36
+ Construct an InternLM2 tokenizer. Based on byte-level Byte-Pair-Encoding.
37
+
38
+ Args:
39
+ vocab_file (`str`):
40
+ Path to the vocabulary file.
41
+ """
42
+
43
+ vocab_files_names = VOCAB_FILES_NAMES
44
+ pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
45
+ model_input_names = ['input_ids', 'attention_mask']
46
+ _auto_class = 'AutoTokenizer'
47
+
48
+ def __init__(
49
+ self,
50
+ vocab_file,
51
+ unk_token='<unk>',
52
+ bos_token='<s>',
53
+ eos_token='</s>',
54
+ pad_token='</s>',
55
+ sp_model_kwargs: Optional[Dict[str, Any]] = None,
56
+ add_bos_token=True,
57
+ add_eos_token=False,
58
+ decode_with_prefix_space=False,
59
+ clean_up_tokenization_spaces=False,
60
+ **kwargs,
61
+ ):
62
+ self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
63
+ self.vocab_file = vocab_file
64
+ self.add_bos_token = add_bos_token
65
+ self.add_eos_token = add_eos_token
66
+ self.decode_with_prefix_space = decode_with_prefix_space
67
+ self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
68
+ self.sp_model.Load(vocab_file)
69
+ self._no_prefix_space_tokens = None
70
+ super().__init__(
71
+ bos_token=bos_token,
72
+ eos_token=eos_token,
73
+ unk_token=unk_token,
74
+ pad_token=pad_token,
75
+ clean_up_tokenization_spaces=clean_up_tokenization_spaces,
76
+ **kwargs,
77
+ )
78
+
79
+ @property
80
+ def no_prefix_space_tokens(self):
81
+ if self._no_prefix_space_tokens is None:
82
+ vocab = self.convert_ids_to_tokens(list(range(self.vocab_size)))
83
+ self._no_prefix_space_tokens = {i for i, tok in enumerate(vocab) if not tok.startswith('▁')}
84
+ return self._no_prefix_space_tokens
85
+
86
+ @property
87
+ def vocab_size(self):
88
+ """Returns vocab size"""
89
+ return self.sp_model.get_piece_size()
90
+
91
+ @property
92
+ def bos_token_id(self) -> Optional[int]:
93
+ return self.sp_model.bos_id()
94
+
95
+ @property
96
+ def eos_token_id(self) -> Optional[int]:
97
+ return self.sp_model.eos_id()
98
+
99
+ def get_vocab(self):
100
+ """Returns vocab as a dict"""
101
+ vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
102
+ vocab.update(self.added_tokens_encoder)
103
+ return vocab
104
+
105
+ def _tokenize(self, text):
106
+ """Returns a tokenized string."""
107
+ return self.sp_model.encode(text, out_type=str)
108
+
109
+ def _convert_token_to_id(self, token):
110
+ """Converts a token (str) in an id using the vocab."""
111
+ return self.sp_model.piece_to_id(token)
112
+
113
+ def _convert_id_to_token(self, index):
114
+ """Converts an index (integer) in a token (str) using the vocab."""
115
+ token = self.sp_model.IdToPiece(index)
116
+ return token
117
+
118
+ def _maybe_add_prefix_space(self, tokens, decoded):
119
+ if tokens and tokens[0] not in self.no_prefix_space_tokens:
120
+ return ' ' + decoded
121
+ else:
122
+ return decoded
123
+
124
+ def convert_tokens_to_string(self, tokens):
125
+ """Converts a sequence of tokens (string) in a single string."""
126
+ current_sub_tokens = []
127
+ out_string = ''
128
+ prev_is_special = False
129
+ for token in tokens:
130
+ # make sure that special tokens are not decoded using sentencepiece model
131
+ if token in self.all_special_tokens:
132
+ if not prev_is_special:
133
+ out_string += ' '
134
+ out_string += self.sp_model.decode(current_sub_tokens) + token
135
+ prev_is_special = True
136
+ current_sub_tokens = []
137
+ else:
138
+ current_sub_tokens.append(token)
139
+ prev_is_special = False
140
+ out_string += self.sp_model.decode(current_sub_tokens)
141
+ out_string = self.clean_up_tokenization(out_string)
142
+ out_string = self._maybe_add_prefix_space(tokens=tokens, decoded=out_string)
143
+ return out_string[1:]
144
+
145
+ def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]:
146
+ """
147
+ Save the vocabulary and special tokens file to a directory.
148
+
149
+ Args:
150
+ save_directory (`str`):
151
+ The directory in which to save the vocabulary.
152
+
153
+ Returns:
154
+ `Tuple(str)`: Paths to the files saved.
155
+ """
156
+ if not os.path.isdir(save_directory):
157
+ logger.error(f'Vocabulary path ({save_directory}) should be a directory')
158
+ return
159
+ out_vocab_file = os.path.join(
160
+ save_directory, (filename_prefix + '-' if filename_prefix else '') + VOCAB_FILES_NAMES['vocab_file']
161
+ )
162
+
163
+ if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file):
164
+ copyfile(self.vocab_file, out_vocab_file)
165
+ elif not os.path.isfile(self.vocab_file):
166
+ with open(out_vocab_file, 'wb') as fi:
167
+ content_spiece_model = self.sp_model.serialized_model_proto()
168
+ fi.write(content_spiece_model)
169
+
170
+ return (out_vocab_file,)
171
+
172
+ def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
173
+ if self.add_bos_token:
174
+ bos_token_ids = [self.bos_token_id]
175
+ else:
176
+ bos_token_ids = []
177
+
178
+ output = bos_token_ids + token_ids_0
179
+
180
+ if token_ids_1 is not None:
181
+ output = output + token_ids_1
182
+
183
+ if self.add_eos_token:
184
+ output = output + [self.eos_token_id]
185
+
186
+ return output
187
+
188
+ def get_special_tokens_mask(
189
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
190
+ ) -> List[int]:
191
+ """
192
+ Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
193
+ special tokens using the tokenizer `prepare_for_model` method.
194
+
195
+ Args:
196
+ token_ids_0 (`List[int]`):
197
+ List of IDs.
198
+ token_ids_1 (`List[int]`, *optional*):
199
+ Optional second list of IDs for sequence pairs.
200
+ already_has_special_tokens (`bool`, *optional*, defaults to `False`):
201
+ Whether or not the token list is already formatted with special tokens for the model.
202
+
203
+ Returns:
204
+ `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
205
+ """
206
+ if already_has_special_tokens:
207
+ return super().get_special_tokens_mask(
208
+ token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
209
+ )
210
+
211
+ if token_ids_1 is None:
212
+ return [1] + ([0] * len(token_ids_0)) + [1]
213
+ return [1] + ([0] * len(token_ids_0)) + [1, 1] + ([0] * len(token_ids_1)) + [1]
214
+
215
+ def create_token_type_ids_from_sequences(
216
+ self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
217
+ ) -> List[int]:
218
+ """
219
+ Create a mask from the two sequences passed to be used in a sequence-pair classification task. InternLM2 does not make
220
+ use of token type ids, therefore a list of zeros is returned.
221
+
222
+ Args:
223
+ token_ids_0 (`List[int]`):
224
+ List of IDs.
225
+ token_ids_1 (`List[int]`, *optional*):
226
+ Optional second list of IDs for sequence pairs.
227
+
228
+ Returns:
229
+ `List[int]`: List of zeros.
230
+ """
231
+ eos = [self.eos_token_id]
232
+
233
+ if token_ids_1 is None:
234
+ return len(token_ids_0 + eos) * [0]
235
+ return len(token_ids_0 + eos + token_ids_1 + eos) * [0]
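
Because `tokenizer_config.json` (added later in this commit) maps `AutoTokenizer` to `tokenization_internlm2.InternLM2Tokenizer` via `auto_map`, the tokenizer can be instantiated without importing this module directly. A small sketch, assuming the repo files are loaded locally with `trust_remote_code=True`; the path is a placeholder.

```python
from transformers import AutoTokenizer

# Placeholder path: any directory containing tokenizer.model,
# tokenizer_config.json and tokenization_internlm2.py should work.
tok = AutoTokenizer.from_pretrained("path/to/this/repo", trust_remote_code=True)

ids = tok("Describe the video.", return_tensors="pt").input_ids
print(ids.shape)           # [1, seq_len]; a leading <s> is added since add_bos_token=True
print(tok.decode(ids[0]))
```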
tokenizer.model ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f868398fc4e05ee1e8aeba95ddf18ddcc45b8bce55d5093bead5bbf80429b48b
3
+ size 1477754
tokenizer_config.json ADDED
@@ -0,0 +1,179 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "0": {
4
+ "content": "<unk>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "1": {
12
+ "content": "<s>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "2": {
20
+ "content": "</s>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "92538": {
28
+ "content": "<|plugin|>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "92539": {
36
+ "content": "<|interpreter|>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "92540": {
44
+ "content": "<|action_end|>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "92541": {
52
+ "content": "<|action_start|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "92542": {
60
+ "content": "<|im_end|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "92543": {
68
+ "content": "<|im_start|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "92544": {
76
+ "content": "<img>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "92545": {
84
+ "content": "</img>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "92546": {
92
+ "content": "<IMG_CONTEXT>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "92547": {
100
+ "content": "<quad>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "92548": {
108
+ "content": "</quad>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "92549": {
116
+ "content": "<ref>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "92550": {
124
+ "content": "</ref>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "92551": {
132
+ "content": "<box>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "92552": {
140
+ "content": "</box>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ }
147
+ },
148
+ "additional_special_tokens": [
149
+ "<|im_start|>",
150
+ "<|im_end|>",
151
+ "<|action_start|>",
152
+ "<|action_end|>",
153
+ "<|interpreter|>",
154
+ "<|plugin|>",
155
+ "<img>",
156
+ "</img>",
157
+ "<IMG_CONTEXT>",
158
+ "<quad>",
159
+ "</quad>",
160
+ "<ref>",
161
+ "</ref>",
162
+ "<box>",
163
+ "</box>"
164
+ ],
165
+ "auto_map": {
166
+ "AutoTokenizer": [
167
+ "tokenization_internlm2.InternLM2Tokenizer",
168
+ null
169
+ ]
170
+ },
171
+ "bos_token": "<s>",
172
+ "clean_up_tokenization_spaces": false,
173
+ "eos_token": "</s>",
174
+ "extra_special_tokens": {},
175
+ "model_max_length": 16384,
176
+ "pad_token": "</s>",
177
+ "tokenizer_class": "InternLM2Tokenizer",
178
+ "unk_token": "<unk>"
179
+ }
zero_to_fp32.py ADDED
@@ -0,0 +1,760 @@
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright (c) Microsoft Corporation.
4
+ # SPDX-License-Identifier: Apache-2.0
5
+
6
+ # DeepSpeed Team
7
+
8
+ # This script extracts fp32 consolidated weights from ZeRO 1, 2 and 3 DeepSpeed checkpoints. It gets
9
+ # copied into the top level checkpoint dir, so the user can easily do the conversion at any point in
10
+ # the future. Once extracted, the weights don't require DeepSpeed and can be used in any
11
+ # application.
12
+ #
13
+ # example:
14
+ # python zero_to_fp32.py . output_dir/
15
+ # or
16
+ # python zero_to_fp32.py . output_dir/ --safe_serialization
17
+
18
+ import argparse
19
+ import torch
20
+ import glob
21
+ import math
22
+ import os
23
+ import re
24
+ import gc
25
+ import json
26
+ import numpy as np
27
+ from tqdm import tqdm
28
+ from collections import OrderedDict
29
+ from dataclasses import dataclass
30
+
31
+ # while this script doesn't use deepspeed to recover data, since the checkpoints are pickled with
32
+ # DeepSpeed data structures it has to be available in the current python environment.
33
+ from deepspeed.utils import logger
34
+ from deepspeed.checkpoint.constants import (DS_VERSION, OPTIMIZER_STATE_DICT, SINGLE_PARTITION_OF_FP32_GROUPS,
35
+ FP32_FLAT_GROUPS, ZERO_STAGE, PARTITION_COUNT, PARAM_SHAPES, BUFFER_NAMES,
36
+ FROZEN_PARAM_SHAPES, FROZEN_PARAM_FRAGMENTS)
37
+
38
+
39
+ @dataclass
40
+ class zero_model_state:
41
+ buffers: dict()
42
+ param_shapes: dict()
43
+ shared_params: list
44
+ ds_version: int
45
+ frozen_param_shapes: dict()
46
+ frozen_param_fragments: dict()
47
+
48
+
49
+ debug = 0
50
+
51
+ # load to cpu
52
+ device = torch.device('cpu')
53
+
54
+
55
+ def atoi(text):
56
+ return int(text) if text.isdigit() else text
57
+
58
+
59
+ def natural_keys(text):
60
+ '''
61
+ alist.sort(key=natural_keys) sorts in human order
62
+ http://nedbatchelder.com/blog/200712/human_sorting.html
63
+ (See Toothy's implementation in the comments)
64
+ '''
65
+ return [atoi(c) for c in re.split(r'(\d+)', text)]
66
+
67
+
68
+ def get_model_state_file(checkpoint_dir, zero_stage):
69
+ if not os.path.isdir(checkpoint_dir):
70
+ raise FileNotFoundError(f"Directory '{checkpoint_dir}' doesn't exist")
71
+
72
+ # there should be only one file
73
+ if zero_stage <= 2:
74
+ file = os.path.join(checkpoint_dir, "mp_rank_00_model_states.pt")
75
+ elif zero_stage == 3:
76
+ file = os.path.join(checkpoint_dir, "zero_pp_rank_0_mp_rank_00_model_states.pt")
77
+
78
+ if not os.path.exists(file):
79
+ raise FileNotFoundError(f"can't find model states file at '{file}'")
80
+
81
+ return file
82
+
83
+
84
+ def get_checkpoint_files(checkpoint_dir, glob_pattern):
85
+ # XXX: need to test that this simple glob rule works for multi-node setup too
86
+ ckpt_files = sorted(glob.glob(os.path.join(checkpoint_dir, glob_pattern)), key=natural_keys)
87
+
88
+ if len(ckpt_files) == 0:
89
+ raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
90
+
91
+ return ckpt_files
92
+
93
+
94
+ def get_optim_files(checkpoint_dir):
95
+ return get_checkpoint_files(checkpoint_dir, "*_optim_states.pt")
96
+
97
+
98
+ def get_model_state_files(checkpoint_dir):
99
+ return get_checkpoint_files(checkpoint_dir, "*_model_states.pt")
100
+
101
+
102
+ def parse_model_states(files):
103
+ zero_model_states = []
104
+ for file in files:
105
+ state_dict = torch.load(file, map_location=device, weights_only=False)
106
+
107
+ if BUFFER_NAMES not in state_dict:
108
+ raise ValueError(f"{file} is not a model state checkpoint")
109
+ buffer_names = state_dict[BUFFER_NAMES]
110
+ if debug:
111
+ print("Found buffers:", buffer_names)
112
+
113
+ # recover just the buffers while restoring them to fp32 if they were saved in fp16
114
+ buffers = {k: v.float() for k, v in state_dict["module"].items() if k in buffer_names}
115
+ param_shapes = state_dict[PARAM_SHAPES]
116
+
117
+ # collect parameters that are included in param_shapes
118
+ param_names = []
119
+ for s in param_shapes:
120
+ for name in s.keys():
121
+ param_names.append(name)
122
+
123
+ # update with frozen parameters
124
+ frozen_param_shapes = state_dict.get(FROZEN_PARAM_SHAPES, None)
125
+ if frozen_param_shapes is not None:
126
+ if debug:
127
+ print(f"Found frozen_param_shapes: {frozen_param_shapes}")
128
+ param_names += list(frozen_param_shapes.keys())
129
+
130
+ # handle shared params
131
+ shared_params = [[k, v] for k, v in state_dict["shared_params"].items()]
132
+
133
+ ds_version = state_dict.get(DS_VERSION, None)
134
+
135
+ frozen_param_fragments = state_dict.get(FROZEN_PARAM_FRAGMENTS, None)
136
+
137
+ z_model_state = zero_model_state(buffers=buffers,
138
+ param_shapes=param_shapes,
139
+ shared_params=shared_params,
140
+ ds_version=ds_version,
141
+ frozen_param_shapes=frozen_param_shapes,
142
+ frozen_param_fragments=frozen_param_fragments)
143
+ zero_model_states.append(z_model_state)
144
+
145
+ return zero_model_states
146
+
147
+
148
+ def parse_optim_states(files, ds_checkpoint_dir):
149
+ total_files = len(files)
150
+ state_dicts = []
151
+ for f in tqdm(files, desc='Loading checkpoint shards'):
152
+ state_dict = torch.load(f, map_location=device, mmap=True, weights_only=False)
153
+ # immediately discard the potentially huge 2 optimizer states as we only care for fp32 master weights
154
+ # and also handle the case where it was already removed by another helper script
155
+ state_dict["optimizer_state_dict"].pop("optimizer_state_dict", None)
156
+ state_dicts.append(state_dict)
157
+
158
+ if ZERO_STAGE not in state_dicts[0][OPTIMIZER_STATE_DICT]:
159
+ raise ValueError(f"{files[0]} is not a zero checkpoint")
160
+ zero_stage = state_dicts[0][OPTIMIZER_STATE_DICT][ZERO_STAGE]
161
+ world_size = state_dicts[0][OPTIMIZER_STATE_DICT][PARTITION_COUNT]
162
+
163
+ # For ZeRO-2 each param group can have different partition_count as data parallelism for expert
164
+ # parameters can be different from data parallelism for non-expert parameters. So we can just
165
+ # use the max of the partition_count to get the dp world_size.
166
+
167
+ if type(world_size) is list:
168
+ world_size = max(world_size)
169
+
170
+ if world_size != total_files:
171
+ raise ValueError(
172
+ f"Expected {world_size} of '*_optim_states.pt' under '{ds_checkpoint_dir}' but found {total_files} files. "
173
+ "Possibly due to an overwrite of an old checkpoint, or a checkpoint didn't get saved by one or more processes."
174
+ )
175
+
176
+ # the groups are named differently in each stage
177
+ if zero_stage <= 2:
178
+ fp32_groups_key = SINGLE_PARTITION_OF_FP32_GROUPS
179
+ elif zero_stage == 3:
180
+ fp32_groups_key = FP32_FLAT_GROUPS
181
+ else:
182
+ raise ValueError(f"unknown zero stage {zero_stage}")
183
+
184
+ fp32_flat_groups = [state_dicts[i][OPTIMIZER_STATE_DICT][fp32_groups_key] for i in range(len(state_dicts))]
185
+ return zero_stage, world_size, fp32_flat_groups
186
+
187
+
188
+ def _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters):
189
+ """
190
+ Returns fp32 state_dict reconstructed from ds checkpoint
191
+
192
+ Args:
193
+ - ``ds_checkpoint_dir``: path to the deepspeed checkpoint folder (where the optimizer files are)
194
+
195
+ """
196
+ print(f"Processing zero checkpoint '{ds_checkpoint_dir}'")
197
+
198
+ optim_files = get_optim_files(ds_checkpoint_dir)
199
+ zero_stage, world_size, fp32_flat_groups = parse_optim_states(optim_files, ds_checkpoint_dir)
200
+ print(f"Detected checkpoint of type zero stage {zero_stage}, world_size: {world_size}")
201
+
202
+ model_files = get_model_state_files(ds_checkpoint_dir)
203
+
204
+ zero_model_states = parse_model_states(model_files)
205
+ print(f'Parsing checkpoint created by deepspeed=={zero_model_states[0].ds_version}')
206
+
207
+ if zero_stage <= 2:
208
+ return _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
209
+ exclude_frozen_parameters)
210
+ elif zero_stage == 3:
211
+ return _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
212
+ exclude_frozen_parameters)
213
+
214
+
215
+ def _zero2_merge_frozen_params(state_dict, zero_model_states):
216
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
217
+ return
218
+
219
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
220
+ frozen_param_fragments = zero_model_states[0].frozen_param_fragments
221
+
222
+ if debug:
223
+ num_elem = sum(s.numel() for s in frozen_param_shapes.values())
224
+ print(f'rank 0: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
225
+
226
+ wanted_params = len(frozen_param_shapes)
227
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
228
+ avail_numel = sum([p.numel() for p in frozen_param_fragments.values()])
229
+ print(f'Frozen params: Have {avail_numel} numels to process.')
230
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
231
+
232
+ total_params = 0
233
+ total_numel = 0
234
+ for name, shape in frozen_param_shapes.items():
235
+ total_params += 1
236
+ unpartitioned_numel = shape.numel()
237
+ total_numel += unpartitioned_numel
238
+
239
+ state_dict[name] = frozen_param_fragments[name]
240
+
241
+ if debug:
242
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
243
+
244
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
245
+
246
+
247
+ def _has_callable(obj, fn):
248
+ attr = getattr(obj, fn, None)
249
+ return callable(attr)
250
+
251
+
252
+ def _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
253
+ param_shapes = zero_model_states[0].param_shapes
254
+
255
+ # Reconstruction protocol:
256
+ #
257
+ # XXX: document this
258
+
259
+ if debug:
260
+ for i in range(world_size):
261
+ for j in range(len(fp32_flat_groups[0])):
262
+ print(f"{FP32_FLAT_GROUPS}[{i}][{j}].shape={fp32_flat_groups[i][j].shape}")
263
+
264
+ # XXX: memory usage doubles here (zero2)
265
+ num_param_groups = len(fp32_flat_groups[0])
266
+ merged_single_partition_of_fp32_groups = []
267
+ for i in range(num_param_groups):
268
+ merged_partitions = [sd[i] for sd in fp32_flat_groups]
269
+ full_single_fp32_vector = torch.cat(merged_partitions, 0)
270
+ merged_single_partition_of_fp32_groups.append(full_single_fp32_vector)
271
+ avail_numel = sum(
272
+ [full_single_fp32_vector.numel() for full_single_fp32_vector in merged_single_partition_of_fp32_groups])
273
+
274
+ if debug:
275
+ wanted_params = sum([len(shapes) for shapes in param_shapes])
276
+ wanted_numel = sum([sum(shape.numel() for shape in shapes.values()) for shapes in param_shapes])
277
+ # not asserting if there is a mismatch due to possible padding
278
+ print(f"Have {avail_numel} numels to process.")
279
+ print(f"Need {wanted_numel} numels in {wanted_params} params.")
280
+
281
+ # params
282
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
283
+ # out-of-core computing solution
284
+ total_numel = 0
285
+ total_params = 0
286
+ for shapes, full_single_fp32_vector in zip(param_shapes, merged_single_partition_of_fp32_groups):
287
+ offset = 0
288
+ avail_numel = full_single_fp32_vector.numel()
289
+ for name, shape in shapes.items():
290
+
291
+ unpartitioned_numel = shape.numel() if _has_callable(shape, 'numel') else math.prod(shape)
292
+ total_numel += unpartitioned_numel
293
+ total_params += 1
294
+
295
+ if debug:
296
+ print(f"{name} full shape: {shape} unpartitioned numel {unpartitioned_numel} ")
297
+ state_dict[name] = full_single_fp32_vector.narrow(0, offset, unpartitioned_numel).view(shape)
298
+ offset += unpartitioned_numel
299
+
300
+ # Z2 started to align to 2*world_size to improve nccl performance. Therefore both offset and
301
+ # avail_numel can differ by anywhere between 0..2*world_size. Due to two unrelated complex
302
+ # paddings performed in the code it's almost impossible to predict the exact numbers w/o the
303
+ # live optimizer object, so we are checking that the numbers are within the right range
304
+ align_to = 2 * world_size
305
+
306
+ def zero2_align(x):
307
+ return align_to * math.ceil(x / align_to)
308
+
309
+ if debug:
310
+ print(f"original offset={offset}, avail_numel={avail_numel}")
311
+
312
+ offset = zero2_align(offset)
313
+ avail_numel = zero2_align(avail_numel)
314
+
315
+ if debug:
316
+ print(f"aligned offset={offset}, avail_numel={avail_numel}")
317
+
318
+ # Sanity check
319
+ if offset != avail_numel:
320
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
321
+
322
+ print(f"Reconstructed fp32 state dict with {total_params} params {total_numel} elements")
323
+
324
+
325
+ def _get_fp32_state_dict_from_zero2_checkpoint(world_size, fp32_flat_groups, zero_model_states,
326
+ exclude_frozen_parameters):
327
+ state_dict = OrderedDict()
328
+
329
+ # buffers
330
+ buffers = zero_model_states[0].buffers
331
+ state_dict.update(buffers)
332
+ if debug:
333
+ print(f"added {len(buffers)} buffers")
334
+
335
+ if not exclude_frozen_parameters:
336
+ _zero2_merge_frozen_params(state_dict, zero_model_states)
337
+
338
+ _zero2_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
339
+
340
+ # recover shared parameters
341
+ for pair in zero_model_states[0].shared_params:
342
+ if pair[1] in state_dict:
343
+ state_dict[pair[0]] = state_dict[pair[1]]
344
+
345
+ return state_dict
346
+
347
+
348
+ def zero3_partitioned_param_info(unpartitioned_numel, world_size):
349
+ remainder = unpartitioned_numel % world_size
350
+ padding_numel = (world_size - remainder) if remainder else 0
351
+ partitioned_numel = math.ceil(unpartitioned_numel / world_size)
352
+ return partitioned_numel, padding_numel
353
+
354
+
355
+ def _zero3_merge_frozen_params(state_dict, world_size, zero_model_states):
356
+ if zero_model_states[0].frozen_param_shapes is None or len(zero_model_states[0].frozen_param_shapes) == 0:
357
+ return
358
+
359
+ if debug:
360
+ for i in range(world_size):
361
+ num_elem = sum(s.numel() for s in zero_model_states[i].frozen_param_fragments.values())
362
+ print(f'rank {i}: {FROZEN_PARAM_SHAPES}.numel = {num_elem}')
363
+
364
+ frozen_param_shapes = zero_model_states[0].frozen_param_shapes
365
+ wanted_params = len(frozen_param_shapes)
366
+ wanted_numel = sum(s.numel() for s in frozen_param_shapes.values())
367
+ avail_numel = sum([p.numel() for p in zero_model_states[0].frozen_param_fragments.values()]) * world_size
368
+ print(f'Frozen params: Have {avail_numel} numels to process.')
369
+ print(f'Frozen params: Need {wanted_numel} numels in {wanted_params} params')
370
+
371
+ total_params = 0
372
+ total_numel = 0
373
+ for name, shape in zero_model_states[0].frozen_param_shapes.items():
374
+ total_params += 1
375
+ unpartitioned_numel = shape.numel()
376
+ total_numel += unpartitioned_numel
377
+
378
+ param_frags = tuple(model_state.frozen_param_fragments[name] for model_state in zero_model_states)
379
+ state_dict[name] = torch.cat(param_frags, 0).narrow(0, 0, unpartitioned_numel).view(shape)
380
+
381
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
382
+
383
+ if debug:
384
+ print(
385
+ f"Frozen params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
386
+ )
387
+
388
+ print(f"Reconstructed Frozen fp32 state dict with {total_params} params {total_numel} elements")
389
+
390
+
391
+ class GatheredTensor:
392
+ """
393
+ A pseudo tensor that collects partitioned weights.
394
+ It is more memory efficient when there are multiple groups.
395
+ """
396
+
397
+ def __init__(self, flat_groups, flat_groups_offset, offset, partitioned_numel, shape):
398
+ self.flat_groups = flat_groups
399
+ self.flat_groups_offset = flat_groups_offset
400
+ self.offset = offset
401
+ self.partitioned_numel = partitioned_numel
402
+ self.shape = shape
403
+ self.dtype = self.flat_groups[0][0].dtype
404
+
405
+ def contiguous(self):
406
+ """
407
+ Merge partitioned weights from flat_groups into a single tensor.
408
+ """
409
+ end_idx = self.offset + self.partitioned_numel
410
+ world_size = len(self.flat_groups)
411
+ pad_flat_param_chunks = []
412
+
413
+ for rank_i in range(world_size):
414
+ # for each rank, we need to collect weights from related group/groups
415
+ flat_groups_at_rank_i = self.flat_groups[rank_i]
416
+ start_group_id = None
417
+ end_group_id = None
418
+ for group_id in range(len(self.flat_groups_offset)):
419
+ if self.flat_groups_offset[group_id] <= self.offset < self.flat_groups_offset[group_id + 1]:
420
+ start_group_id = group_id
421
+ if self.flat_groups_offset[group_id] < end_idx <= self.flat_groups_offset[group_id + 1]:
422
+ end_group_id = group_id
423
+ break
424
+ # collect weights from related group/groups
425
+ for group_id in range(start_group_id, end_group_id + 1):
426
+ flat_tensor = flat_groups_at_rank_i[group_id]
427
+ start_offset = self.offset - self.flat_groups_offset[group_id]
428
+ end_offset = min(end_idx, self.flat_groups_offset[group_id + 1]) - self.flat_groups_offset[group_id]
429
+ pad_flat_param_chunks.append(flat_tensor[start_offset:end_offset])
430
+
431
+ # collect weights from all ranks
432
+ pad_flat_param = torch.cat(pad_flat_param_chunks, dim=0)
433
+ param = pad_flat_param[:self.shape.numel()].view(self.shape).contiguous()
434
+ return param
435
+
436
+
437
+ def _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states):
438
+ param_shapes = zero_model_states[0].param_shapes
439
+ avail_numel = sum([flat_group.numel() for flat_group in fp32_flat_groups[0]]) * world_size
440
+
441
+ # Reconstruction protocol: For zero3 we need to zip the partitions together at boundary of each
442
+ # param, re-consolidating each param, while dealing with padding if any
443
+
444
+ # merge list of dicts, preserving order
445
+ param_shapes = {k: v for d in param_shapes for k, v in d.items()}
446
+
447
+ if debug:
448
+ for i in range(world_size):
449
+ print(f"{FP32_FLAT_GROUPS}[{i}].shape={fp32_flat_groups[i].shape}")
450
+
451
+ wanted_params = len(param_shapes)
452
+ wanted_numel = sum(shape.numel() for shape in param_shapes.values())
453
+ # not asserting if there is a mismatch due to possible padding
454
+ avail_numel = fp32_flat_groups[0].numel() * world_size
455
+ print(f"Trainable params: Have {avail_numel} numels to process.")
456
+ print(f"Trainable params: Need {wanted_numel} numels in {wanted_params} params.")
457
+
458
+ # params
459
+ # XXX: for huge models that can't fit into the host's RAM we will have to recode this to support
460
+ # out-of-core computing solution
461
+ offset = 0
462
+ total_numel = 0
463
+ total_params = 0
464
+ flat_groups_offset = [0] + list(np.cumsum([flat_tensor.numel() for flat_tensor in fp32_flat_groups[0]]))
465
+ for name, shape in tqdm(param_shapes.items(), desc='Gathering sharded weights'):
466
+ unpartitioned_numel = shape.numel()
467
+ total_numel += unpartitioned_numel
468
+ total_params += 1
469
+ partitioned_numel, partitioned_padding_numel = zero3_partitioned_param_info(unpartitioned_numel, world_size)
470
+
471
+ if debug:
472
+ print(
473
+ f"Trainable params: {total_params} {name} full shape: {shape} partition0 numel={partitioned_numel} partitioned_padding_numel={partitioned_padding_numel}"
474
+ )
475
+
476
+ # memory efficient tensor
477
+ tensor = GatheredTensor(fp32_flat_groups, flat_groups_offset, offset, partitioned_numel, shape)
478
+ state_dict[name] = tensor
479
+ offset += partitioned_numel
480
+
481
+ offset *= world_size
482
+
483
+ # Sanity check
484
+ if offset != avail_numel:
485
+ raise ValueError(f"consumed {offset} numels out of {avail_numel} - something is wrong")
486
+
487
+ print(f"Reconstructed Trainable fp32 state dict with {total_params} params {total_numel} elements")
488
+
489
+
490
+ def _get_fp32_state_dict_from_zero3_checkpoint(world_size, fp32_flat_groups, zero_model_states,
491
+ exclude_frozen_parameters):
492
+ state_dict = OrderedDict()
493
+
494
+ # buffers
495
+ buffers = zero_model_states[0].buffers
496
+ state_dict.update(buffers)
497
+ if debug:
498
+ print(f"added {len(buffers)} buffers")
499
+
500
+ if not exclude_frozen_parameters:
501
+ _zero3_merge_frozen_params(state_dict, world_size, zero_model_states)
502
+
503
+ _zero3_merge_trainable_params(state_dict, world_size, fp32_flat_groups, zero_model_states)
504
+
505
+ # recover shared parameters
506
+ for pair in zero_model_states[0].shared_params:
507
+ if pair[1] in state_dict:
508
+ state_dict[pair[0]] = state_dict[pair[1]]
509
+
510
+ return state_dict
511
+
512
+
513
+ def to_torch_tensor(state_dict, return_empty_tensor=False):
514
+ """
515
+ Convert state_dict of GatheredTensor to torch tensor
516
+ """
517
+ torch_state_dict = {}
518
+ converted_tensors = {}
519
+ for name, tensor in state_dict.items():
520
+ tensor_id = id(tensor)
521
+ if tensor_id in converted_tensors: # shared tensors
522
+ shared_tensor = torch_state_dict[converted_tensors[tensor_id]]
523
+ torch_state_dict[name] = shared_tensor
524
+ else:
525
+ converted_tensors[tensor_id] = name
526
+ if return_empty_tensor:
527
+ torch_state_dict[name] = torch.empty(tensor.shape, dtype=tensor.dtype)
528
+ else:
529
+ torch_state_dict[name] = tensor.contiguous()
530
+ return torch_state_dict
531
+
532
+
533
+ def get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
534
+ tag=None,
535
+ exclude_frozen_parameters=False,
536
+ lazy_mode=False):
537
+ """
538
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated state_dict that can be loaded with
539
+ ``load_state_dict()`` and used for training without DeepSpeed or shared with others, for example
540
+ via a model hub.
541
+
542
+ Args:
543
+ - ``checkpoint_dir``: path to the desired checkpoint folder
544
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in 'latest' file. e.g., ``global_step14``
545
+ - ``exclude_frozen_parameters``: exclude frozen parameters
546
+ - ``lazy_mode``: get state_dict in lazy mode. It returns a dict of pseudo tensors instead of torch tensors, which is more memory efficient.
547
+ Convert a pseudo tensor to a torch tensor with ``.contiguous()``
548
+
549
+ Returns:
550
+ - pytorch ``state_dict``
551
+
552
+ A typical usage might be ::
553
+
554
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
555
+ # do the training and checkpoint saving
556
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir) # already on cpu
557
+ model = model.cpu() # move to cpu
558
+ model.load_state_dict(state_dict)
559
+ # submit to model hub or save the model to share with others
560
+
561
+ In this example the ``model`` will no longer be usable in the deepspeed context of the same
562
+ application. i.e. you will need to re-initialize the deepspeed engine, since
563
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
564
+
565
+ If you want it all done for you, use ``load_state_dict_from_zero_checkpoint`` instead.
566
+
567
+ Note: the above usage may not work if your application doesn't have sufficient free CPU memory.
568
+ You may need to use the offline approach using the ``zero_to_fp32.py`` script that is saved with
569
+ the checkpoint. Or you can load state_dict in lazy mode ::
570
+
571
+ from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint
572
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, lazy_mode=True) # not on cpu
573
+ for name, lazy_tensor in state_dict.items():
574
+ tensor = lazy_tensor.contiguous() # to cpu
575
+ print(name, tensor)
576
+ # del tensor to release memory if it is no longer in use
577
+ """
578
+ if tag is None:
579
+ latest_path = os.path.join(checkpoint_dir, 'latest')
580
+ if os.path.isfile(latest_path):
581
+ with open(latest_path, 'r') as fd:
582
+ tag = fd.read().strip()
583
+ else:
584
+ raise ValueError(f"Unable to find 'latest' file at {latest_path}")
585
+
586
+ ds_checkpoint_dir = os.path.join(checkpoint_dir, tag)
587
+
588
+ if not os.path.isdir(ds_checkpoint_dir):
589
+ raise FileNotFoundError(f"Directory '{ds_checkpoint_dir}' doesn't exist")
590
+
591
+ state_dict = _get_fp32_state_dict_from_zero_checkpoint(ds_checkpoint_dir, exclude_frozen_parameters)
592
+ if lazy_mode:
593
+ return state_dict
594
+ else:
595
+ return to_torch_tensor(state_dict)
596
+
597
+
598
+ def convert_zero_checkpoint_to_fp32_state_dict(checkpoint_dir,
599
+ output_dir,
600
+ max_shard_size="5GB",
601
+ safe_serialization=False,
602
+ tag=None,
603
+ exclude_frozen_parameters=False):
604
+ """
605
+ Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict`` file that can be
606
+ loaded with ``torch.load(file)`` + ``load_state_dict()`` and used for training without DeepSpeed.
607
+
608
+ Args:
609
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
610
+ - ``output_dir``: directory to the pytorch fp32 state_dict output files
611
+ - ``max_shard_size``: the maximum size for a checkpoint before being sharded, default value is 5GB
612
+ - ``safe_serialization``: whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).
613
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
614
+ - ``exclude_frozen_parameters``: exclude frozen parameters
615
+ """
616
+
617
+ # Dependency pre-check
618
+ if safe_serialization:
619
+ try:
620
+ from safetensors.torch import save_file
621
+ except ImportError:
622
+ print('If you want to use `safe_serialization`, please `pip install safetensors`')
623
+ raise
624
+ if max_shard_size is not None:
625
+ try:
626
+ from huggingface_hub import split_torch_state_dict_into_shards
627
+ except ImportError:
628
+ print('If you want to use `max_shard_size`, please `pip install huggingface_hub`')
629
+ raise
630
+
631
+ # Convert zero checkpoint to state_dict
632
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir,
633
+ tag,
634
+ exclude_frozen_parameters,
635
+ lazy_mode=True)
636
+
637
+ # Shard the model if it is too big.
638
+ weights_name = "model.safetensors" if safe_serialization else "pytorch_model.bin"
639
+ if max_shard_size is not None:
640
+ filename_pattern = weights_name.replace(".bin", "{suffix}.bin").replace(".safetensors", "{suffix}.safetensors")
641
+ # a memory-efficient approach to sharding
642
+ empty_state_dict = to_torch_tensor(state_dict, return_empty_tensor=True)
643
+ state_dict_split = split_torch_state_dict_into_shards(empty_state_dict,
644
+ filename_pattern=filename_pattern,
645
+ max_shard_size=max_shard_size)
646
+ else:
647
+ from collections import namedtuple
648
+ StateDictSplit = namedtuple("StateDictSplit", ["is_sharded", "filename_to_tensors"])
649
+ state_dict_split = StateDictSplit(is_sharded=False,
650
+ filename_to_tensors={weights_name: list(state_dict.keys())})
651
+
652
+ # Save the model by shard
653
+ os.makedirs(output_dir, exist_ok=True)
654
+ filename_to_tensors = state_dict_split.filename_to_tensors.items()
655
+ for shard_file, tensors in tqdm(filename_to_tensors, desc="Saving checkpoint shards"):
656
+ shard_state_dict = {tensor_name: state_dict[tensor_name] for tensor_name in tensors}
657
+ shard_state_dict = to_torch_tensor(shard_state_dict)
658
+ output_path = os.path.join(output_dir, shard_file)
659
+ if safe_serialization:
660
+ save_file(shard_state_dict, output_path, metadata={"format": "pt"})
661
+ else:
662
+ torch.save(shard_state_dict, output_path)
663
+ # release the memory of current shard
664
+ for tensor_name in list(shard_state_dict.keys()):
665
+ del state_dict[tensor_name]
666
+ del shard_state_dict[tensor_name]
667
+ del shard_state_dict
668
+ gc.collect()
669
+
670
+ # Save index if sharded
671
+ if state_dict_split.is_sharded:
672
+ index = {
673
+ "metadata": state_dict_split.metadata,
674
+ "weight_map": state_dict_split.tensor_to_filename,
675
+ }
676
+ save_index_file = "model.safetensors.index.json" if safe_serialization else "pytorch_model.bin.index.json"
677
+ save_index_file = os.path.join(output_dir, save_index_file)
678
+ with open(save_index_file, "w", encoding="utf-8") as f:
679
+ content = json.dumps(index, indent=2, sort_keys=True) + "\n"
680
+ f.write(content)
681
+
682
+
683
+ def load_state_dict_from_zero_checkpoint(model, checkpoint_dir, tag=None):
684
+ """
685
+ 1. Put the provided model to cpu
686
+ 2. Convert ZeRO 2 or 3 checkpoint into a single fp32 consolidated ``state_dict``
687
+ 3. Load it into the provided model
688
+
689
+ Args:
690
+ - ``model``: the model object to update
691
+ - ``checkpoint_dir``: path to the desired checkpoint folder. (one that contains the tag-folder, like ``global_step14``)
692
+ - ``tag``: checkpoint tag used as a unique identifier for checkpoint. If not provided will attempt to load tag in the file named ``latest`` in the checkpoint folder, e.g., ``global_step14``
693
+
694
+ Returns:
695
+ - ``model``: the modified model
696
+
697
+ Make sure you have plenty of CPU memory available before you call this function. If you don't
698
+ have enough use the ``zero_to_fp32.py`` utility to do the conversion. You will find it
699
+ conveniently placed for you in the checkpoint folder.
700
+
701
+ A typical usage might be ::
702
+
703
+ from deepspeed.utils.zero_to_fp32 import load_state_dict_from_zero_checkpoint
704
+ model = load_state_dict_from_zero_checkpoint(trainer.model, checkpoint_dir)
705
+ # submit to model hub or save the model to share with others
706
+
707
+ Note, that once this was run, the ``model`` will no longer be usable in the deepspeed context
708
+ of the same application. i.e. you will need to re-initialize the deepspeed engine, since
709
+ ``model.load_state_dict(state_dict)`` will remove all the deepspeed magic from it.
710
+
711
+ """
712
+ logger.info("Extracting fp32 weights")
713
+ state_dict = get_fp32_state_dict_from_zero_checkpoint(checkpoint_dir, tag)
714
+
715
+ logger.info("Overwriting model with fp32 weights")
716
+ model = model.cpu()
717
+ model.load_state_dict(state_dict, strict=False)
718
+
719
+ return model
720
+
721
+
722
+ if __name__ == "__main__":
723
+ parser = argparse.ArgumentParser()
724
+ parser.add_argument("checkpoint_dir",
725
+ type=str,
726
+ help="path to the desired checkpoint folder, e.g., path/checkpoint-12")
727
+ parser.add_argument("output_dir",
728
+ type=str,
729
+ help="directory to the pytorch fp32 state_dict output files"
730
+ "(e.g. path/checkpoint-12-output/)")
731
+ parser.add_argument(
732
+ "--max_shard_size",
733
+ type=str,
734
+ default="5GB",
735
+ help="The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size"
736
+ "lower than this size. If expressed as a string, needs to be digits followed by a unit (like `5MB`"
737
+ "We default it to 5GB in order for models to be able to run easily on free-tier google colab instances"
738
+ "without CPU OOM issues.")
739
+ parser.add_argument(
740
+ "--safe_serialization",
741
+ default=False,
742
+ action='store_true',
743
+ help="Whether to save the model using `safetensors` or the traditional PyTorch way (that uses `pickle`).")
744
+ parser.add_argument("-t",
745
+ "--tag",
746
+ type=str,
747
+ default=None,
748
+ help="checkpoint tag used as a unique identifier for checkpoint. e.g., global_step1")
749
+ parser.add_argument("--exclude_frozen_parameters", action='store_true', help="exclude frozen parameters")
750
+ parser.add_argument("-d", "--debug", action='store_true', help="enable debug")
751
+ args = parser.parse_args()
752
+
753
+ debug = args.debug
754
+
755
+ convert_zero_checkpoint_to_fp32_state_dict(args.checkpoint_dir,
756
+ args.output_dir,
757
+ max_shard_size=args.max_shard_size,
758
+ safe_serialization=args.safe_serialization,
759
+ tag=args.tag,
760
+ exclude_frozen_parameters=args.exclude_frozen_parameters)
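
Besides the CLI entry point above, the same conversion can be driven programmatically with the helpers defined in this script. A brief sketch follows; the checkpoint and output paths are placeholders.

```python
# Sketch: programmatic use of zero_to_fp32.py (paths are placeholders).
from zero_to_fp32 import (
    convert_zero_checkpoint_to_fp32_state_dict,
    get_fp32_state_dict_from_zero_checkpoint,
)

# Write consolidated fp32 shards in safetensors format.
convert_zero_checkpoint_to_fp32_state_dict(
    "path/checkpoint-12",
    "path/checkpoint-12-fp32",
    max_shard_size="5GB",
    safe_serialization=True,
)

# Or build the consolidated state_dict in memory and load it into a model.
state_dict = get_fp32_state_dict_from_zero_checkpoint("path/checkpoint-12")
# model.load_state_dict(state_dict)  # `model` is assumed to exist in your code
```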