foundation-models committed on
Commit 6135042 · verified · 1 Parent(s): 3e78d71

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,207 @@
1
+ ---
2
+ base_model: openbmb/MiniCPM-V-4_5
3
+ library_name: peft
4
+ pipeline_tag: text-generation
5
+ tags:
6
+ - base_model:adapter:openbmb/MiniCPM-V-4_5
7
+ - lora
8
+ - transformers
9
+ ---
10
+
11
+ # Model Card for Model ID
12
+
13
+ <!-- Provide a quick summary of what the model is/does. -->
14
+
15
+
16
+
17
+ ## Model Details
18
+
19
+ ### Model Description
20
+
21
+ <!-- Provide a longer summary of what this model is. -->
22
+
23
+
24
+
25
+ - **Developed by:** [More Information Needed]
26
+ - **Funded by [optional]:** [More Information Needed]
27
+ - **Shared by [optional]:** [More Information Needed]
28
+ - **Model type:** [More Information Needed]
29
+ - **Language(s) (NLP):** [More Information Needed]
30
+ - **License:** [More Information Needed]
31
+ - **Finetuned from model [optional]:** [More Information Needed]
32
+
33
+ ### Model Sources [optional]
34
+
35
+ <!-- Provide the basic links for the model. -->
36
+
37
+ - **Repository:** [More Information Needed]
38
+ - **Paper [optional]:** [More Information Needed]
39
+ - **Demo [optional]:** [More Information Needed]
40
+
41
+ ## Uses
42
+
43
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
44
+
45
+ ### Direct Use
46
+
47
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
48
+
49
+ [More Information Needed]
50
+
51
+ ### Downstream Use [optional]
52
+
53
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
54
+
55
+ [More Information Needed]
56
+
57
+ ### Out-of-Scope Use
58
+
59
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
60
+
61
+ [More Information Needed]
62
+
63
+ ## Bias, Risks, and Limitations
64
+
65
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
66
+
67
+ [More Information Needed]
68
+
69
+ ### Recommendations
70
+
71
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
72
+
73
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
74
+
75
+ ## How to Get Started with the Model
76
+
77
+ Use the code below to get started with the model.
78
+
79
+ [More Information Needed]
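A minimal sketch, not an official recipe: it assumes the adapter uploaded in this commit is applied on top of `openbmb/MiniCPM-V-4_5` with `peft` (as indicated by `adapter_config.json`), and `your-username/this-adapter-repo` is a placeholder for this repository's actual id.

```python
# Hedged sketch: load the base model, then attach the LoRA adapter uploaded in this commit.
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_id = "openbmb/MiniCPM-V-4_5"               # base_model_name_or_path from adapter_config.json
adapter_id = "your-username/this-adapter-repo"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModel.from_pretrained(base_id, trust_remote_code=True)

# Apply the LoRA weights (r=16, lora_alpha=32, dropout=0.05 on the q/k/v/o projections).
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
```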
80
+
81
+ ## Training Details
82
+
83
+ ### Training Data
84
+
85
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
86
+
87
+ [More Information Needed]
88
+
89
+ ### Training Procedure
90
+
91
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
92
+
93
+ #### Preprocessing [optional]
94
+
95
+ [More Information Needed]
96
+
97
+
98
+ #### Training Hyperparameters
99
+
100
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
101
+
102
+ #### Speeds, Sizes, Times [optional]
103
+
104
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
105
+
106
+ [More Information Needed]
107
+
108
+ ## Evaluation
109
+
110
+ <!-- This section describes the evaluation protocols and provides the results. -->
111
+
112
+ ### Testing Data, Factors & Metrics
113
+
114
+ #### Testing Data
115
+
116
+ <!-- This should link to a Dataset Card if possible. -->
117
+
118
+ [More Information Needed]
119
+
120
+ #### Factors
121
+
122
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
123
+
124
+ [More Information Needed]
125
+
126
+ #### Metrics
127
+
128
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
129
+
130
+ [More Information Needed]
131
+
132
+ ### Results
133
+
134
+ [More Information Needed]
135
+
136
+ #### Summary
137
+
138
+
139
+
140
+ ## Model Examination [optional]
141
+
142
+ <!-- Relevant interpretability work for the model goes here -->
143
+
144
+ [More Information Needed]
145
+
146
+ ## Environmental Impact
147
+
148
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
149
+
150
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
151
+
152
+ - **Hardware Type:** [More Information Needed]
153
+ - **Hours used:** [More Information Needed]
154
+ - **Cloud Provider:** [More Information Needed]
155
+ - **Compute Region:** [More Information Needed]
156
+ - **Carbon Emitted:** [More Information Needed]
157
+
158
+ ## Technical Specifications [optional]
159
+
160
+ ### Model Architecture and Objective
161
+
162
+ [More Information Needed]
163
+
164
+ ### Compute Infrastructure
165
+
166
+ [More Information Needed]
167
+
168
+ #### Hardware
169
+
170
+ [More Information Needed]
171
+
172
+ #### Software
173
+
174
+ [More Information Needed]
175
+
176
+ ## Citation [optional]
177
+
178
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
179
+
180
+ **BibTeX:**
181
+
182
+ [More Information Needed]
183
+
184
+ **APA:**
185
+
186
+ [More Information Needed]
187
+
188
+ ## Glossary [optional]
189
+
190
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
191
+
192
+ [More Information Needed]
193
+
194
+ ## More Information [optional]
195
+
196
+ [More Information Needed]
197
+
198
+ ## Model Card Authors [optional]
199
+
200
+ [More Information Needed]
201
+
202
+ ## Model Card Contact
203
+
204
+ [More Information Needed]
205
+ ### Framework versions
206
+
207
+ - PEFT 0.18.1
adapter_config.json ADDED
@@ -0,0 +1,43 @@
1
+ {
2
+ "alora_invocation_tokens": null,
3
+ "alpha_pattern": {},
4
+ "arrow_config": null,
5
+ "auto_mapping": null,
6
+ "base_model_name_or_path": "openbmb/MiniCPM-V-4_5",
7
+ "bias": "none",
8
+ "corda_config": null,
9
+ "ensure_weight_tying": false,
10
+ "eva_config": null,
11
+ "exclude_modules": null,
12
+ "fan_in_fan_out": false,
13
+ "inference_mode": true,
14
+ "init_lora_weights": true,
15
+ "layer_replication": null,
16
+ "layers_pattern": null,
17
+ "layers_to_transform": null,
18
+ "loftq_config": {},
19
+ "lora_alpha": 32,
20
+ "lora_bias": false,
21
+ "lora_dropout": 0.05,
22
+ "megatron_config": null,
23
+ "megatron_core": "megatron.core",
24
+ "modules_to_save": null,
25
+ "peft_type": "LORA",
26
+ "peft_version": "0.18.1",
27
+ "qalora_group_size": 16,
28
+ "r": 16,
29
+ "rank_pattern": {},
30
+ "revision": null,
31
+ "target_modules": [
32
+ "k_proj",
33
+ "q_proj",
34
+ "v_proj",
35
+ "o_proj"
36
+ ],
37
+ "target_parameters": null,
38
+ "task_type": "CAUSAL_LM",
39
+ "trainable_token_indices": null,
40
+ "use_dora": false,
41
+ "use_qalora": false,
42
+ "use_rslora": false
43
+ }
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:917ac078ddc018ebf86529b5d70003987a26c9d820893c5b9d68bc61f3eed2b9
3
+ size 73348304
added_tokens.json ADDED
@@ -0,0 +1,107 @@
1
+ {
2
+ "</box>": 151674,
3
+ "</image>": 151670,
4
+ "</image_id>": 151682,
5
+ "</point>": 151678,
6
+ "</quad>": 151676,
7
+ "</ref>": 151672,
8
+ "</slice>": 151680,
9
+ "</think>": 151668,
10
+ "</tool_call>": 151658,
11
+ "</tool_response>": 151666,
12
+ "</unit>": 151684,
13
+ "<box>": 151673,
14
+ "<image>": 151669,
15
+ "<image_id>": 151681,
16
+ "<point>": 151677,
17
+ "<quad>": 151675,
18
+ "<ref>": 151671,
19
+ "<slice>": 151679,
20
+ "<think>": 151667,
21
+ "<tool_call>": 151657,
22
+ "<tool_response>": 151665,
23
+ "<unit>": 151683,
24
+ "<|box_end|>": 151649,
25
+ "<|box_start|>": 151648,
26
+ "<|endoftext|>": 151643,
27
+ "<|file_sep|>": 151664,
28
+ "<|fim_middle|>": 151660,
29
+ "<|fim_pad|>": 151662,
30
+ "<|fim_prefix|>": 151659,
31
+ "<|fim_suffix|>": 151661,
32
+ "<|im_end|>": 151645,
33
+ "<|im_start|>": 151644,
34
+ "<|image_pad|>": 151655,
35
+ "<|object_ref_end|>": 151647,
36
+ "<|object_ref_start|>": 151646,
37
+ "<|quad_end|>": 151651,
38
+ "<|quad_start|>": 151650,
39
+ "<|repo_name|>": 151663,
40
+ "<|reserved_0|>": 151685,
41
+ "<|reserved_10|>": 151695,
42
+ "<|reserved_11|>": 151696,
43
+ "<|reserved_12|>": 151697,
44
+ "<|reserved_13|>": 151698,
45
+ "<|reserved_14|>": 151699,
46
+ "<|reserved_15|>": 151700,
47
+ "<|reserved_16|>": 151701,
48
+ "<|reserved_17|>": 151702,
49
+ "<|reserved_18|>": 151703,
50
+ "<|reserved_19|>": 151704,
51
+ "<|reserved_1|>": 151686,
52
+ "<|reserved_20|>": 151705,
53
+ "<|reserved_21|>": 151706,
54
+ "<|reserved_22|>": 151707,
55
+ "<|reserved_23|>": 151708,
56
+ "<|reserved_24|>": 151709,
57
+ "<|reserved_25|>": 151710,
58
+ "<|reserved_26|>": 151711,
59
+ "<|reserved_27|>": 151712,
60
+ "<|reserved_28|>": 151713,
61
+ "<|reserved_29|>": 151714,
62
+ "<|reserved_2|>": 151687,
63
+ "<|reserved_30|>": 151715,
64
+ "<|reserved_31|>": 151716,
65
+ "<|reserved_32|>": 151717,
66
+ "<|reserved_33|>": 151718,
67
+ "<|reserved_34|>": 151719,
68
+ "<|reserved_35|>": 151720,
69
+ "<|reserved_36|>": 151721,
70
+ "<|reserved_37|>": 151722,
71
+ "<|reserved_38|>": 151723,
72
+ "<|reserved_39|>": 151724,
73
+ "<|reserved_3|>": 151688,
74
+ "<|reserved_40|>": 151725,
75
+ "<|reserved_41|>": 151726,
76
+ "<|reserved_42|>": 151727,
77
+ "<|reserved_43|>": 151728,
78
+ "<|reserved_44|>": 151729,
79
+ "<|reserved_45|>": 151730,
80
+ "<|reserved_46|>": 151731,
81
+ "<|reserved_47|>": 151732,
82
+ "<|reserved_48|>": 151733,
83
+ "<|reserved_49|>": 151734,
84
+ "<|reserved_4|>": 151689,
85
+ "<|reserved_50|>": 151735,
86
+ "<|reserved_51|>": 151736,
87
+ "<|reserved_52|>": 151737,
88
+ "<|reserved_53|>": 151738,
89
+ "<|reserved_54|>": 151739,
90
+ "<|reserved_55|>": 151740,
91
+ "<|reserved_56|>": 151741,
92
+ "<|reserved_57|>": 151742,
93
+ "<|reserved_58|>": 151743,
94
+ "<|reserved_59|>": 151744,
95
+ "<|reserved_5|>": 151690,
96
+ "<|reserved_60|>": 151745,
97
+ "<|reserved_61|>": 151746,
98
+ "<|reserved_62|>": 151747,
99
+ "<|reserved_6|>": 151691,
100
+ "<|reserved_7|>": 151692,
101
+ "<|reserved_8|>": 151693,
102
+ "<|reserved_9|>": 151694,
103
+ "<|video_pad|>": 151656,
104
+ "<|vision_end|>": 151653,
105
+ "<|vision_pad|>": 151654,
106
+ "<|vision_start|>": 151652
107
+ }
chat_template.jinja ADDED
@@ -0,0 +1,88 @@
1
+ {%- if tools %}
2
+ {{- '<|im_start|>system\n' }}
3
+ {%- if messages[0].role == 'system' %}
4
+ {{- messages[0].content + '\n\n' }}
5
+ {%- endif %}
6
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
7
+ {%- for tool in tools %}
8
+ {{- "\n" }}
9
+ {{- tool | tojson }}
10
+ {%- endfor %}
11
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
12
+ {%- else %}
13
+ {%- if messages[0].role == 'system' %}
14
+ {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
15
+ {%- endif %}
16
+ {%- endif %}
17
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
18
+ {%- for message in messages[::-1] %}
19
+ {%- set index = (messages|length - 1) - loop.index0 %}
20
+ {%- if ns.multi_step_tool and message.role == "user" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
21
+ {%- set ns.multi_step_tool = false %}
22
+ {%- set ns.last_query_index = index %}
23
+ {%- endif %}
24
+ {%- endfor %}
25
+ {%- for message in messages %}
26
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
27
+ {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
28
+ {%- elif message.role == "assistant" %}
29
+ {%- set content = message.content %}
30
+ {%- set reasoning_content = '' %}
31
+ {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
32
+ {%- set reasoning_content = message.reasoning_content %}
33
+ {%- else %}
34
+ {%- if '</think>' in message.content %}
35
+ {%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
36
+ {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
37
+ {%- endif %}
38
+ {%- endif %}
39
+ {%- if loop.index0 > ns.last_query_index %}
40
+ {%- if loop.last or (not loop.last and reasoning_content) %}
41
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
42
+ {%- else %}
43
+ {{- '<|im_start|>' + message.role + '\n' + content }}
44
+ {%- endif %}
45
+ {%- else %}
46
+ {{- '<|im_start|>' + message.role + '\n' + content }}
47
+ {%- endif %}
48
+ {%- if message.tool_calls %}
49
+ {%- for tool_call in message.tool_calls %}
50
+ {%- if (loop.first and content) or (not loop.first) %}
51
+ {{- '\n' }}
52
+ {%- endif %}
53
+ {%- if tool_call.function %}
54
+ {%- set tool_call = tool_call.function %}
55
+ {%- endif %}
56
+ {{- '<tool_call>\n{"name": "' }}
57
+ {{- tool_call.name }}
58
+ {{- '", "arguments": ' }}
59
+ {%- if tool_call.arguments is string %}
60
+ {{- tool_call.arguments }}
61
+ {%- else %}
62
+ {{- tool_call.arguments | tojson }}
63
+ {%- endif %}
64
+ {{- '}\n</tool_call>' }}
65
+ {%- endfor %}
66
+ {%- endif %}
67
+ {{- '<|im_end|>\n' }}
68
+ {%- elif message.role == "tool" %}
69
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
70
+ {{- '<|im_start|>user' }}
71
+ {%- endif %}
72
+ {{- '\n<tool_response>\n' }}
73
+ {{- message.content }}
74
+ {{- '\n</tool_response>' }}
75
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
76
+ {{- '<|im_end|>\n' }}
77
+ {%- endif %}
78
+ {%- endif %}
79
+ {%- endfor %}
80
+ {%- if add_generation_prompt %}
81
+ {{- '<|im_start|>assistant\n' }}
82
+ {%- if enable_thinking is defined and enable_thinking is false %}
83
+ {{- '<think>\n\n</think>\n\n' }}
84
+ {%- endif %}
85
+ {%- if enable_thinking is defined and enable_thinking is true %}
86
+ {{- '<think>\n' }}
87
+ {%- endif %}
88
+ {%- endif %}
image_processing_minicpmv.py ADDED
@@ -0,0 +1,501 @@
1
+ from typing import Optional, Union, Dict, Any, List
2
+ from itertools import chain
3
+
4
+ import torch
5
+ import math
6
+ import PIL.Image
7
+ import PIL.ImageSequence
8
+ import numpy as np
9
+ import PIL
10
+ from PIL import Image
11
+
12
+ from transformers.utils import TensorType, requires_backends, is_torch_dtype, is_torch_device
13
+ from transformers.image_processing_utils import BaseImageProcessor, BatchFeature
14
+ from transformers import AutoImageProcessor
15
+ from transformers.image_transforms import to_channel_dimension_format
16
+ from transformers.image_utils import (
17
+ ImageInput,
18
+ make_list_of_images,
19
+ valid_images,
20
+ is_torch_tensor,
21
+ is_batched,
22
+ to_numpy_array,
23
+ infer_channel_dimension_format,
24
+ ChannelDimension
25
+ )
26
+
27
+
28
+ def recursive_converter(converter, value):
29
+ if isinstance(value, list):
30
+ new_value = []
31
+ for v in value:
32
+ new_value += [recursive_converter(converter, v)]
33
+ return new_value
34
+ else:
35
+ return converter(value)
36
+
37
+ def list_depth(lst):
38
+ if not isinstance(lst, list) and not isinstance(lst, np.ndarray):
39
+ return 0
40
+ # if not lst: # empty list
41
+ # return 1
42
+ return 1 + max(list_depth(item) for item in lst)
43
+
44
+ class MiniCPMVBatchFeature(BatchFeature):
45
+ r"""
46
+ Extend from BatchFeature to support various image sizes
47
+ """
48
+ def __init__(self, data: Optional[Dict[str, Any]] = None, tensor_type: Union[None, str, TensorType] = None):
49
+ super().__init__(data)
50
+ self.convert_to_tensors(tensor_type=tensor_type)
51
+
52
+ def convert_to_tensors(self, tensor_type: Optional[Union[str, TensorType]] = None):
53
+ if tensor_type is None:
54
+ return self
55
+
56
+ is_tensor, as_tensor = self._get_is_as_tensor_fns(tensor_type)
57
+
58
+ def converter(value):
59
+ try:
60
+ if not is_tensor(value):
61
+ tensor = as_tensor(value)
62
+ return tensor
63
+ except: # noqa E722
64
+ if key == "overflowing_values":
65
+ raise ValueError("Unable to create tensor returning overflowing values of different lengths. ")
66
+ raise ValueError(
67
+ "Unable to create tensor, you should probably activate padding "
68
+ "with 'padding=True' to have batched tensors with the same length."
69
+ )
70
+
71
+
72
+ for key, value in self.items():
73
+ self[key] = recursive_converter(converter, value)
74
+ return self
75
+
76
+ def to(self, *args, **kwargs) -> "MiniCPMVBatchFeature":
77
+ requires_backends(self, ["torch"])
78
+ import torch
79
+
80
+ def cast_tensor(v):
81
+ # check if v is a floating point
82
+ if torch.is_floating_point(v):
83
+ # cast and send to device
84
+ return v.to(*args, **kwargs)
85
+ elif device is not None:
86
+ return v.to(device=device)
87
+ else:
88
+ return v
89
+
90
+ new_data = {}
91
+ device = kwargs.get("device")
92
+ # Check if the args are a device or a dtype
93
+ if device is None and len(args) > 0:
94
+ # device should be always the first argument
95
+ arg = args[0]
96
+ if is_torch_dtype(arg):
97
+ # The first argument is a dtype
98
+ pass
99
+ elif isinstance(arg, str) or is_torch_device(arg) or isinstance(arg, int):
100
+ device = arg
101
+ else:
102
+ # it's something else
103
+ raise ValueError(f"Attempting to cast a BatchFeature to type {str(arg)}. This is not supported.")
104
+ # We cast only floating point tensors to avoid issues with tokenizers casting `LongTensor` to `FloatTensor`
105
+ for k, v in self.items():
106
+ new_data[k] = recursive_converter(cast_tensor, v)
107
+ self.data = new_data
108
+ return self
109
+
110
+
111
+ class MiniCPMVImageProcessor(BaseImageProcessor):
112
+ model_input_names = ["pixel_values"]
113
+
114
+ def __init__(
115
+ self,
116
+ max_slice_nums=9,
117
+ scale_resolution=448,
118
+ patch_size=14,
119
+ **kwargs):
120
+ super().__init__(**kwargs)
121
+ self.max_slice_nums = max_slice_nums
122
+ self.scale_resolution = scale_resolution
123
+ self.patch_size = patch_size
124
+ self.use_image_id = kwargs.pop("use_image_id", False)
125
+ self.image_feature_size = kwargs.pop("image_feature_size", 64)
126
+ self.im_start_token = kwargs.pop("im_start", "<image>")
127
+ self.im_end_token = kwargs.pop("im_end", "</image>")
128
+ self.slice_start_token = kwargs.pop("slice_start", "<slice>")
129
+ self.slice_end_token = kwargs.pop("slice_end", "</slice>")
130
+ self.unk_token = kwargs.pop("unk", "<unk>")
131
+ self.im_id_start = kwargs.pop("im_id_start", "<image_id>")
132
+ self.im_id_end = kwargs.pop("im_id_end", "</image_id>")
133
+ self.slice_mode = kwargs.pop("slice_mode", True)
134
+ self.mean = np.array(kwargs.pop("norm_mean", [0.5, 0.5, 0.5]))
135
+ self.std = np.array(kwargs.pop("norm_std", [0.5, 0.5, 0.5]))
136
+ self.version = kwargs.pop("version", 2.0)
137
+
138
+ def ensure_divide(self, length, patch_size):
139
+ return max(round(length / patch_size) * patch_size, patch_size)
140
+
141
+ def find_best_resize(self,
142
+ original_size,
143
+ scale_resolution,
144
+ patch_size,
145
+ allow_upscale=False):
146
+ width, height = original_size
147
+ if (width * height >
148
+ scale_resolution * scale_resolution) or allow_upscale:
149
+ r = width / height
150
+ height = int(scale_resolution / math.sqrt(r))
151
+ width = int(height * r)
152
+ best_width = self.ensure_divide(width, patch_size)
153
+ best_height = self.ensure_divide(height, patch_size)
154
+ return (best_width, best_height)
155
+
156
+ def get_refine_size(self,
157
+ original_size,
158
+ grid,
159
+ scale_resolution,
160
+ patch_size,
161
+ allow_upscale=False):
162
+ width, height = original_size
163
+ grid_x, grid_y = grid
164
+
165
+ refine_width = self.ensure_divide(width, grid_x)
166
+ refine_height = self.ensure_divide(height, grid_y)
167
+
168
+ grid_width = refine_width / grid_x
169
+ grid_height = refine_height / grid_y
170
+
171
+ best_grid_size = self.find_best_resize((grid_width, grid_height),
172
+ scale_resolution,
173
+ patch_size,
174
+ allow_upscale=allow_upscale)
175
+ refine_size = (best_grid_size[0] * grid_x, best_grid_size[1] * grid_y)
176
+ return refine_size
177
+
178
+ def split_to_patches(self, image, grid):
179
+ patches = []
180
+ width, height = image.size
181
+ grid_x = int(width / grid[0])
182
+ grid_y = int(height / grid[1])
183
+ for i in range(0, height, grid_y):
184
+ images = []
185
+ for j in range(0, width, grid_x):
186
+ box = (j, i, j + grid_x, i + grid_y)
187
+ patch = image.crop(box)
188
+ images.append(patch)
189
+ patches.append(images)
190
+ return patches
191
+
192
+ def slice_image(
193
+ self, image, max_slice_nums=9, scale_resolution=448, patch_size=14, never_split=False
194
+ ):
195
+ original_size = image.size
196
+ source_image = None
197
+ best_grid = self.get_sliced_grid(original_size, max_slice_nums, never_split)
198
+ patches = []
199
+
200
+ if best_grid is None:
201
+ # don't need to slice, upsample
202
+ best_size = self.find_best_resize(
203
+ original_size, scale_resolution, patch_size, allow_upscale=True
204
+ )
205
+ source_image = image.resize(best_size, resample=Image.Resampling.BICUBIC)
206
+ else:
207
+ # source image, down-sampling and ensure divided by patch_size
208
+ best_resize = self.find_best_resize(original_size, scale_resolution, patch_size)
209
+ source_image = image.copy().resize(best_resize, resample=Image.Resampling.BICUBIC)
210
+ refine_size = self.get_refine_size(
211
+ original_size, best_grid, scale_resolution, patch_size, allow_upscale=True
212
+ )
213
+ refine_image = image.resize(refine_size, resample=Image.Resampling.BICUBIC)
214
+ patches = self.split_to_patches(refine_image, best_grid)
215
+
216
+ return source_image, patches, best_grid
217
+
218
+ def get_grid_placeholder(self, grid):
219
+ if grid is None:
220
+ return ""
221
+ slice_image_placeholder = (
222
+ self.slice_start_token
223
+ + self.unk_token * self.image_feature_size
224
+ + self.slice_end_token
225
+ )
226
+
227
+ cols = grid[0]
228
+ rows = grid[1]
229
+ slices = []
230
+ for i in range(rows):
231
+ lines = []
232
+ for j in range(cols):
233
+ lines.append(slice_image_placeholder)
234
+ slices.append("".join(lines))
235
+
236
+ slice_placeholder = "\n".join(slices)
237
+ return slice_placeholder
238
+
239
+ def get_image_id_placeholder(self, idx=0):
240
+ return f"{self.im_id_start}{idx}{self.im_id_end}"
241
+
242
+ def get_sliced_images(self, image, max_slice_nums=None):
243
+ slice_images = []
244
+
245
+ if not self.slice_mode:
246
+ return [image]
247
+
248
+ max_slice_nums = self.max_slice_nums if max_slice_nums is None else int(max_slice_nums)
249
+ assert max_slice_nums > 0
250
+ source_image, patches, sliced_grid = self.slice_image(
251
+ image,
252
+ max_slice_nums, # default: 9
253
+ self.scale_resolution, # default: 448
254
+ self.patch_size # default: 14
255
+ )
256
+
257
+ slice_images.append(source_image)
258
+ if len(patches) > 0:
259
+ for i in range(len(patches)):
260
+ for j in range(len(patches[0])):
261
+ slice_images.append(patches[i][j])
262
+ return slice_images
263
+
264
+ def get_sliced_grid(self, image_size, max_slice_nums, never_split=False):
265
+ original_width, original_height = image_size
266
+ log_ratio = math.log(original_width / original_height)
267
+ ratio = original_width * original_height / (self.scale_resolution * self.scale_resolution)
268
+ multiple = min(math.ceil(ratio), max_slice_nums)
269
+ if multiple <= 1 or never_split:
270
+ return None
271
+ candidate_split_grids_nums = []
272
+ for i in [multiple - 1, multiple, multiple + 1]:
273
+ if i == 1 or i > max_slice_nums:
274
+ continue
275
+ candidate_split_grids_nums.append(i)
276
+
277
+ candidate_grids = []
278
+ for split_grids_nums in candidate_split_grids_nums:
279
+ m = 1
280
+ while m <= split_grids_nums:
281
+ if split_grids_nums % m == 0:
282
+ candidate_grids.append([m, split_grids_nums // m])
283
+ m += 1
284
+
285
+ best_grid = [1, 1]
286
+ min_error = float("inf")
287
+ for grid in candidate_grids:
288
+ error = abs(log_ratio - math.log(grid[0] / grid[1]))
289
+ if error < min_error:
290
+ best_grid = grid
291
+ min_error = error
292
+
293
+ return best_grid
294
+
295
+ def get_slice_image_placeholder(self, image_size, image_idx=0, max_slice_nums=None, use_image_id=None):
296
+ max_slice_nums = self.max_slice_nums if max_slice_nums is None else int(max_slice_nums)
297
+ assert max_slice_nums > 0
298
+ grid = self.get_sliced_grid(image_size=image_size, max_slice_nums=max_slice_nums)
299
+
300
+ image_placeholder = (
301
+ self.im_start_token
302
+ + self.unk_token * self.image_feature_size
303
+ + self.im_end_token
304
+ )
305
+ use_image_id = self.use_image_id if use_image_id is None else bool(use_image_id)
306
+ if use_image_id:
307
+ final_placeholder = self.get_image_id_placeholder(image_idx) + image_placeholder
308
+ else:
309
+ final_placeholder = image_placeholder
310
+
311
+ if self.slice_mode:
312
+ final_placeholder = final_placeholder + self.get_grid_placeholder(grid=grid)
313
+ return final_placeholder
314
+
315
+ def to_pil_image(self, image, rescale=None) -> PIL.Image.Image:
316
+ """
317
+ Converts `image` to a PIL Image. Optionally rescales it and puts the channel dimension back as the last axis if
318
+ needed.
319
+
320
+ Args:
321
+ image (`PIL.Image.Image` or `numpy.ndarray` or `torch.Tensor`):
322
+ The image to convert to the PIL Image format.
323
+ rescale (`bool`, *optional*):
324
+ Whether or not to apply the scaling factor (to make pixel values integers between 0 and 255). Will
325
+ default to `True` if the image type is a floating type, `False` otherwise.
326
+ """
327
+ if isinstance(image, PIL.Image.Image):
328
+ return image
329
+ if is_torch_tensor(image):
330
+ image = image.numpy()
331
+
332
+ if isinstance(image, np.ndarray):
333
+ if rescale is None:
334
+ # rescale default to the array being of floating type.
335
+ rescale = isinstance(image.flat[0], np.floating)
336
+ # If the channel as been moved to first dim, we put it back at the end.
337
+ if image.ndim == 3 and image.shape[0] in [1, 3]:
338
+ image = image.transpose(1, 2, 0)
339
+ if rescale:
340
+ image = image * 255
341
+ image = image.astype(np.uint8)
342
+ return PIL.Image.fromarray(image)
343
+ return image
344
+
345
+ def reshape_by_patch(self, image):
346
+ """
347
+ :param image: shape [3, H, W]
348
+ :param patch_size:
349
+ :return: [3, patch_size, HW/patch_size]
350
+ """
351
+ image = torch.from_numpy(image)
352
+ patch_size = self.patch_size
353
+ patches = torch.nn.functional.unfold(
354
+ image,
355
+ (patch_size, patch_size),
356
+ stride=(patch_size, patch_size)
357
+ )
358
+
359
+ patches = patches.reshape(image.size(0), patch_size, patch_size, -1)
360
+ patches = patches.permute(0, 1, 3, 2).reshape(image.size(0), patch_size, -1)
361
+ return patches.numpy()
362
+
363
+ def preprocess(
364
+ self,
365
+ images: Union[Image.Image, List[Image.Image], List[List[Image.Image]]],
366
+ do_pad: Optional[bool] = True, # TODO: add pad for MiniCPM-Llama3-V-2_5
367
+ max_slice_nums: int = None,
368
+ temporal_ids: Optional[Union[List[List[int]], List[List[List[int]]]]] = None,
369
+ return_tensors: Optional[Union[str, TensorType]] = None,
370
+ **kwargs
371
+ ) -> MiniCPMVBatchFeature:
372
+ if isinstance(images, Image.Image):
373
+ images_list = [[images]]
374
+ elif isinstance(images[0], Image.Image):
375
+ images_list = [images]
376
+ else:
377
+ images_list = images
378
+
379
+ if temporal_ids is not None:
380
+ if list_depth(temporal_ids) == 2:
381
+ temporal_ids = [temporal_ids]
382
+
383
+ new_images_list = []
384
+ image_sizes_list = []
385
+ tgt_sizes_list = []
386
+ temporal_ids_list = []
387
+ skip_image_idx_list = []
388
+
389
+ for batch_idx, _images in enumerate(images_list):
390
+ if _images is None or len(_images) == 0:
391
+ new_images_list.append([])
392
+ image_sizes_list.append([])
393
+ tgt_sizes_list.append([])
394
+ temporal_ids_list.append([])
395
+ skip_image_idx_list.append([])
396
+ continue
397
+ if not valid_images(_images):
398
+ raise ValueError(
399
+ "Invalid image type. Must be of type PIL.Image.Image, numpy.ndarray, "
400
+ "torch.Tensor, tf.Tensor or jax.ndarray."
401
+ )
402
+
403
+ _images = [self.to_pil_image(image).convert("RGB") for image in _images]
404
+ input_data_format = infer_channel_dimension_format(np.array(_images[0]))
405
+
406
+ new_images = []
407
+ image_sizes = [image.size for image in _images]
408
+ tgt_sizes = []
409
+ tp_ids = []
410
+ skip_image_idx = []
411
+
412
+ # for image in _images:
413
+ # image_patches = self.get_sliced_images(image, max_slice_nums)
414
+ # image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
415
+ # image_patches = [
416
+ # self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
417
+ # for image in image_patches
418
+ # ]
419
+ # image_patches = [
420
+ # to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
421
+ # for image in image_patches
422
+ # ]
423
+ # for slice_image in image_patches:
424
+ # new_images.append(self.reshape_by_patch(slice_image))
425
+ # tgt_sizes.append(np.array((slice_image.shape[1] // self.patch_size, slice_image.shape[2] // self.patch_size)))
426
+
427
+ if temporal_ids is None:
428
+ # no temporal ids
429
+ for image in _images:
430
+ image_patches = self.get_sliced_images(image, max_slice_nums)
431
+ image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
432
+ image_patches = [
433
+ self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
434
+ for image in image_patches
435
+ ]
436
+ image_patches = [
437
+ to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
438
+ for image in image_patches
439
+ ]
440
+ for slice_image in image_patches:
441
+ new_images.append(self.reshape_by_patch(slice_image))
442
+ tgt_sizes.append(np.array((slice_image.shape[1] // self.patch_size, slice_image.shape[2] // self.patch_size)))
443
+
444
+ tp_ids.extend([[-1]] * len(image_patches))
445
+ else:
446
+ temporal_ids_flatten = list(chain.from_iterable(temporal_ids[batch_idx]))
447
+ assert len(temporal_ids_flatten) == len(_images)
448
+ frame_groups = []
449
+ s = 0
450
+ for group in temporal_ids[batch_idx]:
451
+ frame_groups.append(_images[s:s+len(group)])
452
+ s += len(group)
453
+
454
+ skip_start = 0
455
+ for frame_group, tp_id in zip(frame_groups, temporal_ids[batch_idx]):
456
+ image_patches_group = []
457
+ for frame in frame_group:
458
+ image_patches = self.get_sliced_images(frame, max_slice_nums)
459
+ image_patches = [to_numpy_array(image).astype(np.float32) / 255 for image in image_patches]
460
+ image_patches = [
461
+ self.normalize(image=image, mean=self.mean, std=self.std, input_data_format=input_data_format)
462
+ for image in image_patches
463
+ ]
464
+ image_patches = [
465
+ to_channel_dimension_format(image, ChannelDimension.FIRST, input_channel_dim=input_data_format)
466
+ for image in image_patches
467
+ ]
468
+ image_patches_group.append(image_patches)
469
+
470
+ group_cnt = len(image_patches_group[0])
471
+ for gidx in range(group_cnt):
472
+ group_images = [s[gidx] for s in image_patches_group]
473
+ tgt_sizes.extend([np.array((i.shape[1] // self.patch_size, i.shape[2] // self.patch_size)) for i in group_images])
474
+
475
+ group_images = [self.reshape_by_patch(i) for i in group_images]
476
+ new_images.extend(group_images)
477
+ tp_ids.append(tp_id)
478
+ skip_image_idx.extend(list(range(skip_start + 1, skip_start + len(frame_group))))
479
+ skip_start += len(frame_group)
480
+
481
+ if tgt_sizes:
482
+ tgt_sizes = np.vstack(tgt_sizes)
483
+
484
+ new_images_list.append(new_images)
485
+ image_sizes_list.append(image_sizes)
486
+ tgt_sizes_list.append(tgt_sizes)
487
+ temporal_ids_list.append(tp_ids)
488
+ skip_image_idx_list.append(skip_image_idx)
489
+
490
+ data = {
491
+ "pixel_values": new_images_list,
492
+ "image_sizes": image_sizes_list,
493
+ "tgt_sizes": tgt_sizes_list,
494
+ "temporal_ids": temporal_ids_list,
495
+ "skip_image_idx": skip_image_idx_list
496
+ }
497
+
498
+
499
+ return MiniCPMVBatchFeature(data=data, tensor_type=return_tensors)
500
+
501
+ AutoImageProcessor.register("MiniCPMVImageProcessor", MiniCPMVImageProcessor)
merges.txt ADDED
The diff for this file is too large to render.
 
preprocessor_config.json ADDED
@@ -0,0 +1,47 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoImageProcessor": "image_processing_minicpmv.MiniCPMVImageProcessor",
4
+ "AutoProcessor": "processing_minicpmv.MiniCPMVProcessor"
5
+ },
6
+ "im_end": "</image>",
7
+ "im_end_token": "</image>",
8
+ "im_id_end": "</image_id>",
9
+ "im_id_start": "<image_id>",
10
+ "im_start": "<image>",
11
+ "im_start_token": "<image>",
12
+ "image_feature_size": 64,
13
+ "image_processor_type": "MiniCPMVImageProcessor",
14
+ "max_slice_nums": 9,
15
+ "mean": [
16
+ 0.5,
17
+ 0.5,
18
+ 0.5
19
+ ],
20
+ "norm_mean": [
21
+ 0.5,
22
+ 0.5,
23
+ 0.5
24
+ ],
25
+ "norm_std": [
26
+ 0.5,
27
+ 0.5,
28
+ 0.5
29
+ ],
30
+ "patch_size": 14,
31
+ "processor_class": "MiniCPMVProcessor",
32
+ "scale_resolution": 448,
33
+ "slice_end": "</slice>",
34
+ "slice_end_token": "</slice>",
35
+ "slice_mode": true,
36
+ "slice_start": "<slice>",
37
+ "slice_start_token": "<slice>",
38
+ "std": [
39
+ 0.5,
40
+ 0.5,
41
+ 0.5
42
+ ],
43
+ "unk": "<unk>",
44
+ "unk_token": "<unk>",
45
+ "use_image_id": true,
46
+ "version": 2.6
47
+ }
processing_minicpmv.py ADDED
@@ -0,0 +1,255 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """
16
+ Processor class for MiniCPMV.
17
+ """
18
+
19
+ from typing import List, Optional, Union, Dict, Any
20
+ import torch
21
+ import re
22
+
23
+ from transformers.image_processing_utils import BatchFeature
24
+ from transformers.image_utils import ImageInput
25
+ from transformers.processing_utils import ProcessorMixin
26
+ from transformers.tokenization_utils_base import PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy
27
+ from transformers.utils import TensorType, requires_backends, is_torch_dtype, is_torch_device
28
+
29
+ from .image_processing_minicpmv import MiniCPMVBatchFeature
30
+
31
+
32
+ class MiniCPMVProcessor(ProcessorMixin):
33
+ r"""
34
+ Constructs a MiniCPMV processor which wraps a MiniCPMV image processor and a MiniCPMV tokenizer into a single processor.
35
+
36
+ [`MiniCPMVProcessor`] offers all the functionalities of [`MiniCPMVImageProcessor`] and [`LlamaTokenizerWrapper`]. See the
37
+ [`~MiniCPMVProcessor.__call__`] and [`~MiniCPMVProcessor.decode`] for more information.
38
+
39
+ Args:
40
+ image_processor ([`MiniCPMVImageProcessor`], *optional*):
41
+ The image processor is a required input.
42
+ tokenizer ([`LlamaTokenizerWrapper`], *optional*):
43
+ The tokenizer is a required input.
44
+ """
45
+ attributes = ["image_processor", "tokenizer"]
46
+ image_processor_class = "AutoImageProcessor"
47
+ tokenizer_class = "AutoTokenizer"
48
+
49
+ def __init__(self, image_processor=None, tokenizer=None):
50
+ super().__init__(image_processor, tokenizer)
51
+ self.version = image_processor.version
52
+
53
+ def __call__(
54
+ self,
55
+ text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]],
56
+ images: ImageInput = None,
57
+ max_length: Optional[int] = None,
58
+ do_pad: Optional[bool] = True,
59
+ max_slice_nums: int = None,
60
+ use_image_id: bool = None,
61
+ temporal_ids: Optional[Union[List[List[int]], List[List[List[int]]]]] = None,
62
+ return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH,
63
+ **kwargs
64
+ ) -> MiniCPMVBatchFeature:
65
+
66
+ if images is not None:
67
+ # image_inputs = self.image_processor(images, do_pad=do_pad, max_slice_nums=max_slice_nums, return_tensors=return_tensors)
68
+ image_inputs = self.image_processor(images, do_pad=do_pad, max_slice_nums=max_slice_nums, temporal_ids=temporal_ids, return_tensors=return_tensors)
69
+ # return self._convert_images_texts_to_inputs(image_inputs, text, max_slice_nums=max_slice_nums, use_image_id=use_image_id, max_length=max_length, **kwargs)
70
+ return self._convert_images_texts_to_inputs(image_inputs, text, max_slice_nums=max_slice_nums, use_image_id=use_image_id, max_length=max_length, temporal_ids=temporal_ids, **kwargs)
71
+
72
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.batch_decode with CLIP->Llama
73
+ def batch_decode(self, *args, **kwargs):
74
+ """
75
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please
76
+ refer to the docstring of this method for more information.
77
+ """
78
+ output_ids = args[0]
79
+ result_text = []
80
+ for result in output_ids:
81
+ result = result[result != 0]
82
+ if result[0] == self.tokenizer.bos_id:
83
+ result = result[1:]
84
+ if result[-1] == self.tokenizer.eos_id:
85
+ result = result[:-1]
86
+ result_text.append(self.tokenizer.decode(result, *args[1:], **kwargs).strip())
87
+ return result_text
88
+ # return self.tokenizer.batch_decode(*args, **kwargs)
89
+
90
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.decode with CLIP->Llama
91
+ def decode(self, *args, **kwargs):
92
+ """
93
+ This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to
94
+ the docstring of this method for more information.
95
+ """
96
+ result = args[0]
97
+ result = result[result != 0]
98
+ if result[0] == self.tokenizer.bos_id:
99
+ result = result[1:]
100
+ if result[-1] == self.tokenizer.eos_id or (hasattr(self.tokenizer, "eot_id") and result[-1] == self.tokenizer.eot_id):
101
+ result = result[:-1]
102
+ return self.tokenizer.decode(result, *args[1:], **kwargs).strip()
103
+
104
+ def _convert(
105
+ self, input_str, max_inp_length: Optional[int] = None
106
+ ):
107
+ if self.version > 2.5 or not getattr(self.tokenizer, "add_bos_token", False):
108
+ input_ids = self.tokenizer.encode(input_str)
109
+ else:
110
+ input_ids = [self.tokenizer.bos_id] + self.tokenizer.encode(input_str)
111
+ if max_inp_length is not None:
112
+ input_ids = input_ids[:max_inp_length]
113
+ input_ids = torch.tensor(input_ids, dtype=torch.int32)
114
+
115
+ start_cond = (input_ids == self.tokenizer.im_start_id) | (input_ids == self.tokenizer.slice_start_id)
116
+ end_cond = (input_ids == self.tokenizer.im_end_id) | (input_ids == self.tokenizer.slice_end_id)
117
+
118
+ image_start_tokens = torch.where(start_cond)[0]
119
+ image_start_tokens += 1
120
+ image_end_tokens = torch.where(end_cond)[0]
121
+
122
+ valid_image_nums = max(len(image_start_tokens), len(image_end_tokens))
123
+
124
+ image_bounds = torch.hstack(
125
+ [
126
+ image_start_tokens[:valid_image_nums].unsqueeze(-1),
127
+ image_end_tokens[:valid_image_nums].unsqueeze(-1),
128
+ ]
129
+ )
130
+ return input_ids, image_bounds
131
+
132
+ def _convert_images_texts_to_inputs(
133
+ self,
134
+ images,
135
+ texts: Union[str, List[str]],
136
+ truncation=None,
137
+ max_length=None,
138
+ max_slice_nums=None,
139
+ use_image_id=None,
140
+ return_tensors=None,
141
+ **kwargs
142
+ ):
143
+ if images is None or not len(images):
144
+ model_inputs = self.tokenizer(texts, return_tensors=return_tensors, truncation=truncation, max_length=max_length, **kwargs)
145
+ return MiniCPMVBatchFeature(data={**model_inputs})
146
+
147
+ pattern = "(<image>./</image>)"
148
+ # images, image_sizes, tgt_sizes = images["pixel_values"], images["image_sizes"], images["tgt_sizes"]
149
+ images, image_sizes, tgt_sizes, temporal_ids, skip_image_idx = images["pixel_values"], images["image_sizes"], images["tgt_sizes"], images["temporal_ids"], images["skip_image_idx"]
150
+
151
+ if isinstance(texts, str):
152
+ texts = [texts]
153
+ input_ids_list = []
154
+ image_bounds_list = []
155
+ for index, (text, skip_idx) in enumerate(zip(texts, skip_image_idx)):
156
+ image_tags = re.findall(pattern, text)
157
+ assert len(image_tags) == len(image_sizes[index])
158
+ text_chunks = text.split(pattern)
159
+ final_text = ""
160
+
161
+ for i in range(len(image_tags)):
162
+ if i in skip_idx:
163
+ image_placeholder = ''
164
+ text_chunk = text_chunks[i].strip()
165
+
166
+ else:
167
+ image_placeholder = self.image_processor.get_slice_image_placeholder(
168
+ image_sizes[index][i],
169
+ i,
170
+ max_slice_nums,
171
+ use_image_id
172
+ )
173
+ text_chunk = text_chunks[i]
174
+
175
+ final_text = final_text + text_chunk + image_placeholder
176
+
177
+ final_text += text_chunks[-1]
178
+
179
+ input_ids, image_bounds = self._convert(final_text, max_length)
180
+ input_ids_list.append(input_ids)
181
+ image_bounds_list.append(image_bounds)
182
+ padded_input_ids, padding_lengths = self.pad(
183
+ input_ids_list,
184
+ padding_side="left"
185
+ )
186
+ for i, length in enumerate(padding_lengths):
187
+ image_bounds_list[i] = image_bounds_list[i] + length
188
+ attention_mask = padded_input_ids.ne(0)
189
+
190
+ return MiniCPMVBatchFeature(data={
191
+ "input_ids": padded_input_ids,
192
+ "attention_mask": attention_mask,
193
+ "pixel_values": images,
194
+ "image_sizes": image_sizes,
195
+ "image_bound": image_bounds_list,
196
+ "tgt_sizes": tgt_sizes,
197
+ "temporal_ids": temporal_ids
198
+ })
199
+
200
+ @property
201
+ # Copied from transformers.models.clip.processing_clip.CLIPProcessor.model_input_names
202
+ def model_input_names(self):
203
+ tokenizer_input_names = self.tokenizer.model_input_names
204
+ image_processor_input_names = self.image_processor.model_input_names
205
+ return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
206
+
207
+
208
+ def pad(self, inputs, max_length=None, padding_value=0, padding_side="left"):
209
+ items = []
210
+ if isinstance(inputs[0], list):
211
+ assert isinstance(inputs[0][0], torch.Tensor)
212
+ for it in inputs:
213
+ for tr in it:
214
+ items.append(tr)
215
+ else:
216
+ assert isinstance(inputs[0], torch.Tensor)
217
+ items = inputs
218
+
219
+ batch_size = len(items)
220
+ shape = items[0].shape
221
+ dim = len(shape)
222
+ assert dim <= 2
223
+ if max_length is None:
224
+ max_length = 0
225
+ max_length = max(max_length, max(item.shape[-1] for item in items))
226
+ min_length = min(item.shape[-1] for item in items)
227
+ dtype = items[0].dtype
228
+
229
+ if dim == 0:
230
+ return torch.stack([item for item in items], dim=0), [0]
231
+ elif dim == 1:
232
+ if max_length == min_length:
233
+ return torch.stack([item for item in items], dim=0), [0] * batch_size
234
+ tensor = torch.zeros((batch_size, max_length), dtype=dtype) + padding_value
235
+ else:
236
+ tensor = (
237
+ torch.zeros((batch_size, max_length, shape[-1]), dtype=dtype)
238
+ + padding_value
239
+ )
240
+
241
+ padding_length = []
242
+ for i, item in enumerate(items):
243
+ if dim == 1:
244
+ if padding_side == "left":
245
+ tensor[i, -len(item) :] = item.clone()
246
+ else:
247
+ tensor[i, : len(item)] = item.clone()
248
+ elif dim == 2:
249
+ if padding_side == "left":
250
+ tensor[i, -len(item) :, :] = item.clone()
251
+ else:
252
+ tensor[i, : len(item), :] = item.clone()
253
+ padding_length.append(tensor.shape[-1] - len(item))
254
+
255
+ return tensor, padding_length
processor_config.json ADDED
@@ -0,0 +1,6 @@
1
+ {
2
+ "auto_map": {
3
+ "AutoProcessor": "processing_minicpmv.MiniCPMVProcessor"
4
+ },
5
+ "processor_class": "MiniCPMVProcessor"
6
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,112 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<unk>",
4
+ "<image>",
5
+ "</image>",
6
+ "<ref>",
7
+ "</ref>",
8
+ "<box>",
9
+ "</box>",
10
+ "<quad>",
11
+ "</quad>",
12
+ "<point>",
13
+ "</point>",
14
+ "<slice>",
15
+ "</slice>",
16
+ "<image_id>",
17
+ "</image_id>",
18
+ "<unit>",
19
+ "</unit>",
20
+ "<|reserved_0|>",
21
+ "<|reserved_1|>",
22
+ "<|reserved_2|>",
23
+ "<|reserved_3|>",
24
+ "<|reserved_4|>",
25
+ "<|reserved_5|>",
26
+ "<|reserved_6|>",
27
+ "<|reserved_7|>",
28
+ "<|reserved_8|>",
29
+ "<|reserved_9|>",
30
+ "<|reserved_10|>",
31
+ "<|reserved_11|>",
32
+ "<|reserved_12|>",
33
+ "<|reserved_13|>",
34
+ "<|reserved_14|>",
35
+ "<|reserved_15|>",
36
+ "<|reserved_16|>",
37
+ "<|reserved_17|>",
38
+ "<|reserved_18|>",
39
+ "<|reserved_19|>",
40
+ "<|reserved_20|>",
41
+ "<|reserved_21|>",
42
+ "<|reserved_22|>",
43
+ "<|reserved_23|>",
44
+ "<|reserved_24|>",
45
+ "<|reserved_25|>",
46
+ "<|reserved_26|>",
47
+ "<|reserved_27|>",
48
+ "<|reserved_28|>",
49
+ "<|reserved_29|>",
50
+ "<|reserved_30|>",
51
+ "<|reserved_31|>",
52
+ "<|reserved_32|>",
53
+ "<|reserved_33|>",
54
+ "<|reserved_34|>",
55
+ "<|reserved_35|>",
56
+ "<|reserved_36|>",
57
+ "<|reserved_37|>",
58
+ "<|reserved_38|>",
59
+ "<|reserved_39|>",
60
+ "<|reserved_40|>",
61
+ "<|reserved_41|>",
62
+ "<|reserved_42|>",
63
+ "<|reserved_43|>",
64
+ "<|reserved_44|>",
65
+ "<|reserved_45|>",
66
+ "<|reserved_46|>",
67
+ "<|reserved_47|>",
68
+ "<|reserved_48|>",
69
+ "<|reserved_49|>",
70
+ "<|reserved_50|>",
71
+ "<|reserved_51|>",
72
+ "<|reserved_52|>",
73
+ "<|reserved_53|>",
74
+ "<|reserved_54|>",
75
+ "<|reserved_55|>",
76
+ "<|reserved_56|>",
77
+ "<|reserved_57|>",
78
+ "<|reserved_58|>",
79
+ "<|reserved_59|>",
80
+ "<|reserved_60|>",
81
+ "<|reserved_61|>",
82
+ "<|reserved_62|>"
83
+ ],
84
+ "bos_token": {
85
+ "content": "<|im_start|>",
86
+ "lstrip": false,
87
+ "normalized": false,
88
+ "rstrip": false,
89
+ "single_word": false
90
+ },
91
+ "eos_token": {
92
+ "content": "<|im_end|>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false
97
+ },
98
+ "pad_token": {
99
+ "content": "<|endoftext|>",
100
+ "lstrip": false,
101
+ "normalized": false,
102
+ "rstrip": false,
103
+ "single_word": false
104
+ },
105
+ "unk_token": {
106
+ "content": "<unk>",
107
+ "lstrip": false,
108
+ "normalized": false,
109
+ "rstrip": false,
110
+ "single_word": false
111
+ }
112
+ }
tokenization_minicpmv_fast.py ADDED
@@ -0,0 +1,66 @@
1
+ from transformers import Qwen2TokenizerFast
2
+
3
+
4
+ class MiniCPMVTokenizerFast(Qwen2TokenizerFast):
5
+ def __init__(self, **kwargs):
6
+ super().__init__(**kwargs)
7
+ self.im_start = "<image>"
8
+ self.im_end = "</image>"
9
+ self.ref_start = "<ref>"
10
+ self.ref_end = "</ref>"
11
+ self.box_start = "<box>"
12
+ self.box_end = "</box>"
13
+ self.quad_start = "<quad>"
14
+ self.quad_end = "</quad>"
15
+ self.slice_start = "<slice>"
16
+ self.slice_end = "</slice>"
17
+ self.im_id_start = "<image_id>"
18
+ self.im_id_end = "</image_id>"
19
+
20
+ @property
21
+ def eos_id(self):
22
+ return self.eos_token_id
23
+
24
+ @property
25
+ def bos_id(self):
26
+ return self.bos_token_id
27
+
28
+ @property
29
+ def unk_id(self):
30
+ return self.unk_token_id
31
+
32
+ @property
33
+ def im_start_id(self):
34
+ return self.convert_tokens_to_ids(self.im_start)
35
+
36
+ @property
37
+ def im_end_id(self):
38
+ return self.convert_tokens_to_ids(self.im_end)
39
+
40
+ @property
41
+ def slice_start_id(self):
42
+ return self.convert_tokens_to_ids(self.slice_start)
43
+
44
+ @property
45
+ def slice_end_id(self):
46
+ return self.convert_tokens_to_ids(self.slice_end)
47
+
48
+ @property
49
+ def im_id_start_id(self):
50
+ return self.convert_tokens_to_ids(self.im_id_start)
51
+
52
+ @property
53
+ def im_id_end_id(self):
54
+ return self.convert_tokens_to_ids(self.im_id_end)
55
+
56
+ @property
57
+ def newline_id(self):
58
+ return self.convert_tokens_to_ids('\n')
59
+
60
+ @staticmethod
61
+ def escape(text: str) -> str:
62
+ return text
63
+
64
+ @staticmethod
65
+ def unescape(text: str) -> str:
66
+ return text
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c5a94a2c3913b8aa2175fffb5fd6cf4301958f323d06475bfd91037c13bdd74b
3
+ size 11437868
tokenizer_config.json ADDED
@@ -0,0 +1,954 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "128244": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151665": {
+ "content": "<tool_response>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151666": {
+ "content": "</tool_response>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151667": {
+ "content": "<think>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151668": {
+ "content": "</think>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151669": {
+ "content": "<image>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151670": {
+ "content": "</image>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151671": {
+ "content": "<ref>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151672": {
+ "content": "</ref>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151673": {
+ "content": "<box>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151674": {
+ "content": "</box>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151675": {
+ "content": "<quad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151676": {
+ "content": "</quad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151677": {
+ "content": "<point>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151678": {
+ "content": "</point>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151679": {
+ "content": "<slice>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151680": {
+ "content": "</slice>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151681": {
+ "content": "<image_id>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151682": {
+ "content": "</image_id>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151683": {
+ "content": "<unit>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151684": {
+ "content": "</unit>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151685": {
+ "content": "<|reserved_0|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151686": {
+ "content": "<|reserved_1|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151687": {
+ "content": "<|reserved_2|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151688": {
+ "content": "<|reserved_3|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151689": {
+ "content": "<|reserved_4|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151690": {
+ "content": "<|reserved_5|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151691": {
+ "content": "<|reserved_6|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151692": {
+ "content": "<|reserved_7|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151693": {
+ "content": "<|reserved_8|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151694": {
+ "content": "<|reserved_9|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151695": {
+ "content": "<|reserved_10|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151696": {
+ "content": "<|reserved_11|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151697": {
+ "content": "<|reserved_12|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151698": {
+ "content": "<|reserved_13|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151699": {
+ "content": "<|reserved_14|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151700": {
+ "content": "<|reserved_15|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151701": {
+ "content": "<|reserved_16|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151702": {
+ "content": "<|reserved_17|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151703": {
+ "content": "<|reserved_18|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151704": {
+ "content": "<|reserved_19|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151705": {
+ "content": "<|reserved_20|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151706": {
+ "content": "<|reserved_21|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151707": {
+ "content": "<|reserved_22|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151708": {
+ "content": "<|reserved_23|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151709": {
+ "content": "<|reserved_24|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151710": {
+ "content": "<|reserved_25|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151711": {
+ "content": "<|reserved_26|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151712": {
+ "content": "<|reserved_27|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151713": {
+ "content": "<|reserved_28|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151714": {
+ "content": "<|reserved_29|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151715": {
+ "content": "<|reserved_30|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151716": {
+ "content": "<|reserved_31|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151717": {
+ "content": "<|reserved_32|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151718": {
+ "content": "<|reserved_33|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151719": {
+ "content": "<|reserved_34|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151720": {
+ "content": "<|reserved_35|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151721": {
+ "content": "<|reserved_36|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151722": {
+ "content": "<|reserved_37|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151723": {
+ "content": "<|reserved_38|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151724": {
+ "content": "<|reserved_39|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151725": {
+ "content": "<|reserved_40|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151726": {
+ "content": "<|reserved_41|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151727": {
+ "content": "<|reserved_42|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151728": {
+ "content": "<|reserved_43|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151729": {
+ "content": "<|reserved_44|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151730": {
+ "content": "<|reserved_45|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151731": {
+ "content": "<|reserved_46|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151732": {
+ "content": "<|reserved_47|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151733": {
+ "content": "<|reserved_48|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151734": {
+ "content": "<|reserved_49|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151735": {
+ "content": "<|reserved_50|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151736": {
+ "content": "<|reserved_51|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151737": {
+ "content": "<|reserved_52|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151738": {
+ "content": "<|reserved_53|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151739": {
+ "content": "<|reserved_54|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151740": {
+ "content": "<|reserved_55|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151741": {
+ "content": "<|reserved_56|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151742": {
+ "content": "<|reserved_57|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151743": {
+ "content": "<|reserved_58|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151744": {
+ "content": "<|reserved_59|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151745": {
+ "content": "<|reserved_60|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151746": {
+ "content": "<|reserved_61|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151747": {
+ "content": "<|reserved_62|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [
+ "<unk>",
+ "<image>",
+ "</image>",
+ "<ref>",
+ "</ref>",
+ "<box>",
+ "</box>",
+ "<quad>",
+ "</quad>",
+ "<point>",
+ "</point>",
+ "<slice>",
+ "</slice>",
+ "<image_id>",
+ "</image_id>",
+ "<unit>",
+ "</unit>",
+ "<|reserved_0|>",
+ "<|reserved_1|>",
+ "<|reserved_2|>",
+ "<|reserved_3|>",
+ "<|reserved_4|>",
+ "<|reserved_5|>",
+ "<|reserved_6|>",
+ "<|reserved_7|>",
+ "<|reserved_8|>",
+ "<|reserved_9|>",
+ "<|reserved_10|>",
+ "<|reserved_11|>",
+ "<|reserved_12|>",
+ "<|reserved_13|>",
+ "<|reserved_14|>",
+ "<|reserved_15|>",
+ "<|reserved_16|>",
+ "<|reserved_17|>",
+ "<|reserved_18|>",
+ "<|reserved_19|>",
+ "<|reserved_20|>",
+ "<|reserved_21|>",
+ "<|reserved_22|>",
+ "<|reserved_23|>",
+ "<|reserved_24|>",
+ "<|reserved_25|>",
+ "<|reserved_26|>",
+ "<|reserved_27|>",
+ "<|reserved_28|>",
+ "<|reserved_29|>",
+ "<|reserved_30|>",
+ "<|reserved_31|>",
+ "<|reserved_32|>",
+ "<|reserved_33|>",
+ "<|reserved_34|>",
+ "<|reserved_35|>",
+ "<|reserved_36|>",
+ "<|reserved_37|>",
+ "<|reserved_38|>",
+ "<|reserved_39|>",
+ "<|reserved_40|>",
+ "<|reserved_41|>",
+ "<|reserved_42|>",
+ "<|reserved_43|>",
+ "<|reserved_44|>",
+ "<|reserved_45|>",
+ "<|reserved_46|>",
+ "<|reserved_47|>",
+ "<|reserved_48|>",
+ "<|reserved_49|>",
+ "<|reserved_50|>",
+ "<|reserved_51|>",
+ "<|reserved_52|>",
+ "<|reserved_53|>",
+ "<|reserved_54|>",
+ "<|reserved_55|>",
+ "<|reserved_56|>",
+ "<|reserved_57|>",
+ "<|reserved_58|>",
+ "<|reserved_59|>",
+ "<|reserved_60|>",
+ "<|reserved_61|>",
+ "<|reserved_62|>"
+ ],
+ "auto_map": {
+ "AutoProcessor": "processing_minicpmv.MiniCPMVProcessor",
+ "AutoTokenizer": [
+ "tokenization_qwen2.Qwen2Tokenizer",
+ "tokenization_minicpmv_fast.MiniCPMVTokenizerFast"
+ ]
+ },
+ "bos_token": "<|im_start|>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "processor_class": "MiniCPMVProcessor",
+ "split_special_tokens": false,
+ "tokenizer_class": "MiniCPMVTokenizer",
+ "unk_token": "<unk>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff
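
As a hedged illustration of how the tokenizer files in this commit fit together, the sketch below loads the tokenizer with `transformers`. The repo path is a placeholder for this repository's id or a local clone, and `trust_remote_code=True` is assumed because `auto_map` above points at the custom MiniCPMV tokenizer classes shipped alongside the config; the printed values simply reflect the special tokens declared in the config.

```python
# Minimal sketch, not part of the commit: load the uploaded tokenizer and
# inspect the special tokens declared in tokenizer_config.json.
from transformers import AutoTokenizer

# "path/to/this/repo" is a placeholder for the actual repo id or local folder.
# trust_remote_code=True lets AutoTokenizer resolve the custom classes listed
# under "auto_map" (Qwen2Tokenizer / MiniCPMVTokenizerFast).
tokenizer = AutoTokenizer.from_pretrained(
    "path/to/this/repo",
    trust_remote_code=True,
)

print(tokenizer.bos_token)  # <|im_start|>
print(tokenizer.eos_token)  # <|im_end|>
print(tokenizer.pad_token)  # <|endoftext|>

# Vision-related markers are registered as added special tokens,
# e.g. <image> maps to id 151669 per added_tokens_decoder.
print(tokenizer.convert_tokens_to_ids("<image>"))
```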